WorldWideScience

Sample records for automated defect classification

  1. One step automated unpatterned wafer defect detection and classification

    International Nuclear Information System (INIS)

    Dou Lie; Kesler, Daniel; Bruno, William; Monjak, Charles; Hunt, Jim

    1998-01-01

    Automated detection and classification of crystalline defects on micro-grade silicon wafers is extremely important for integrated circuit (IC) device yield. High training costs, the limited defect-classification capability of human operators, the increased possibility of contamination, and unavoidable human error all argue for replacing human visual inspection with automated defect inspection. Laser Scanning Surface Inspection Systems (SSIS) equipped with the Reconvergent Specular Detection (RSD) apparatus are widely used for final wafer inspection. RSD, more commonly known as light channel (LC) detection, is capable of detecting and classifying material defects by analyzing information from two independent phenomena: light scattering and light reflection. This paper presents a new technique, including a new type of light channel detector, to detect and classify wafer surface defects such as slip-line dislocations, epi spikes, pits, and dimples. The optical system used to study this technique consists of a particle scanner, which detects and quantifies light-scattering events from contaminants on the wafer surface, and an RSD apparatus (a silicon photodetector). Compared with the light channel detectors presently used in wafer fabs, this new light channel technique provides higher sensitivity for small-defect detection and more defect scattering signatures for defect classification. Epi protrusions (mounds and spikes), slip dislocations, voids, dimples, and other common defect features and contamination on silicon wafers are studied using this equipment. The results are compared quantitatively with those of human visual inspection and confirmed by microscopy or AFM. This new light channel technology could provide a practical solution for the wafer manufacturing industry for fully automated wafer inspection and defect characterization.

  2. Automated Heuristic Defect Classification (AHDC) for haze-induced defect growth management and mask requalification

    Science.gov (United States)

    Munir, Saghir; Qidwai, Gul

    2012-03-01

    This article presents results from a heuristic automated defect classification algorithm for reticle inspection that mimics the classification rules applied by human operators. AHDC does not require CAD data, so it can be rapidly deployed in a high-volume production environment without extensive design-data management. To ensure classification consistency, a software framework tracks every defect across repeated inspections. Through various image-based metrics, it is shown that such a system manages and tracks repeated defects in applications such as haze-induced defect growth.
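
    A minimal sketch of what such heuristic classification rules might look like in code; the metric names and thresholds below are hypothetical illustrations, not taken from the paper.

    ```python
    # Toy heuristic classifier in the spirit of AHDC (all rules/thresholds hypothetical).
    def classify(defect):
        """defect: dict of image-derived metrics for one reticle detection."""
        if defect["transmission_delta"] > 0.10 and defect["on_pattern"]:
            return "hard defect"
        if defect["size_nm"] < 50 and not defect["on_pattern"]:
            return "nuisance"
        if defect["growth_rate"] > 0:   # same site larger than in the prior inspection
            return "growing haze defect"
        return "review manually"

    print(classify({"transmission_delta": 0.02, "on_pattern": False,
                    "size_nm": 30, "growth_rate": 0}))   # -> "nuisance"
    ```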

  3. The use of eDR-71xx for DSA defect review and automated classification

    Science.gov (United States)

    Pathangi, Hari; Van Den Heuvel, Dieter; Bayana, Hareen; Bouckou, Loemba; Brown, Jim; Parisi, Paolo; Gosain, Rohan

    2015-03-01

    The Liu-Nealey (LiNe) chemo-epitaxy directed self-assembly (DSA) flow has been screened thoroughly over the past years in terms of defectivity. Various types of DSA-specific defects have been identified, and best-known methods have been developed to obtain sufficient signal-to-noise ratios in defect inspection, to help understand the root causes of the various defect types, and to reduce defect levels in preparation for high-volume manufacturing. Within this process development, SEM review and defect classification play a key role. This paper provides an overview of the challenges that DSA brings to this metrology aspect as well, and presents successful solutions for automating defect review. In addition, a new Real Time Automated Defect Classification (RT-ADC) capability is introduced that can save up to 90% of the time required for manual defect classification. This enables much larger sampling for defect review, resulting in a better understanding of the signatures and behaviors of various DSA-specific defect types, such as dislocations, 1-period bridges, and line wiggling.

  4. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    Science.gov (United States)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error, the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools, since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm-node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we study and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process, specifically on defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and actual wafer prints.

  5. Automated Diagnosis and Classification of Steam Generator Tube Defects

    International Nuclear Information System (INIS)

    Garcia, Gabe V.

    2004-01-01

    A major cause of failure in nuclear steam generators is tube degradation. Tube defects are divided into seven categories, one of which is intergranular attack/stress corrosion cracking (IGA/SCC). Defects of this type usually begin on the outer surface of the tubes and propagate both inward and laterally. In many cases these defects occur at or near the tube support plates. Several different methods exist for the nondestructive evaluation of nuclear steam generator tubes for defect characterization.

  6. Automated Diagnosis and Classification of Steam Generator Tube Defects

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Gabe V. Garcia

    2004-10-01

    A major cause of failure in nuclear steam generators is tube degradation. Tube defects are divided into seven categories, one of which is intergranular attack/stress corrosion cracking (IGA/SCC). Defects of this type usually begin on the outer surface of the tubes and propagate both inward and laterally. In many cases these defects occur at or near the tube support plates. Several different methods exist for the nondestructive evaluation of nuclear steam generator tubes for defect characterization.

  7. Integrating image processing and classification technology into automated polarizing film defect inspection

    Science.gov (United States)

    Kuo, Chung-Feng Jeffrey; Lai, Chun-Yu; Kao, Chih-Hsiang; Chiu, Chin-Hsun

    2018-05-01

    In order to improve the current manual inspection and classification process for polarizing film on production lines, this study proposes a high-precision automated inspection and classification system for polarizing film, used for the recognition and classification of four common defects: dent, foreign material, bright spot, and scratch. First, a median filter is used to remove impulse noise from the defect image of the polarizing film. Random noise in the background is smoothed by improved anisotropic diffusion, while the edge detail of the defect region is sharpened. Next, the defect image is transformed by Fourier transform to the frequency domain, combined with a Butterworth high-pass filter to sharpen the edge detail of the defect region, and brought back by inverse Fourier transform to the spatial domain to complete the image enhancement process. For image segmentation, the edge of the defect region is found by a Canny edge detector, and the complete defect region is then obtained by two-stage morphological processing. For defect classification, feature values including the maximum gray level, eccentricity, and the contrast and homogeneity of the gray-level co-occurrence matrix (GLCM) extracted from the images are used as inputs to radial basis function neural network (RBFNN) and back-propagation neural network (BPNN) classifiers. Ninety-six defect images are used as training samples, and 84 defect images are used as testing samples to validate the classification performance. The results show that the classification accuracy using the RBFNN is 98.9%. The proposed system can therefore be used by manufacturing companies for a higher yield rate and lower cost. The processing time for a single image is 2.57 seconds, meeting the practical requirements of an industrial production line.
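
    As an illustration of the enhancement and feature-extraction steps described above, here is a hedged Python sketch using OpenCV and scikit-image. The Butterworth cutoff and order, kernel sizes, and GLCM settings are assumed values; the eccentricity feature and the RBFNN/BPNN classifiers are omitted for brevity (scikit-learn's MLPClassifier could stand in for the latter).

    ```python
    import cv2
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def enhance(img):
        """Median-denoise, then sharpen defect edges with a Butterworth high-pass."""
        img = cv2.medianBlur(img, 3)                        # remove impulse noise
        f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
        rows, cols = img.shape
        u, v = np.meshgrid(np.arange(cols) - cols // 2, np.arange(rows) - rows // 2)
        d = np.sqrt(u ** 2 + v ** 2)
        d0, order = 30.0, 2                                 # assumed cutoff and order
        hp = 1.0 / (1.0 + (d0 / (d + 1e-6)) ** (2 * order))  # Butterworth high-pass
        sharp = np.real(np.fft.ifft2(np.fft.ifftshift(f * hp)))
        return cv2.normalize(sharp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    def segment(img):
        """Canny edges, then morphological closing to recover the full defect region."""
        edges = cv2.Canny(img, 50, 150)
        return cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    def features(img, mask):
        """Maximum gray level plus GLCM contrast/homogeneity of the defect image."""
        glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        return [int(img[mask > 0].max()),
                float(graycoprops(glcm, "contrast")[0, 0]),
                float(graycoprops(glcm, "homogeneity")[0, 0])]
    ```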

  8. Automated Defect Classification (ADC) and Progression Monitoring (DPM) in wafer fab reticle requalification

    Science.gov (United States)

    Yen, T. H.; Lai, Rick; Tuo, Laurent C.; Tolani, Vikram; Chen, Dongxue; Hu, Peter; Yu, Jiao; Hwa, George; Zheng, Yan; Lakkapragada, Suresh; Wang, Kechang; Peng, Danping; Wang, Bill; Chiang, Kaiming

    2013-09-01

    As optical lithography continues to extend into the low-k1 regime, the resolution of mask patterns continues to diminish, and mask defect requirements tighten accordingly due to increasing MEEF. Post-inspection, mask defects have traditionally been classified manually by operators based on visual review. This approach may have worked down to 65/55nm-node layers. However, starting at the 45nm node and below, visually reviewing 50 to sometimes hundreds of defects on masks with complex model-based OPC, SRAF, and ILT geometries is error-prone and takes up valuable inspection tool capacity. Both of these shortcomings of manual defect review are overcome by the adoption of a computational solution called Automated Defect Classification (ADC), wherein mask defects are accurately classified within seconds, consistent with the guidelines used by production technicians and engineers.

  9. Increasing reticle inspection efficiency and reducing wafer print-checks using automated defect classification and simulation

    Science.gov (United States)

    Ryu, Sung Jae; Lim, Sung Taek; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan

    2013-09-01

    IC fabs inspect critical masks on a regular basis to ensure high wafer yields. These requalification inspections are costly for many reasons, including capital equipment, system maintenance, and labor. In addition, masks typically remain in the "requal" phase for extended, non-productive periods of time. The overall "requal" cycle time in which reticles remain non-productive is challenging to control. Shipping schedules can slip when wafer lots are put on hold until the master critical-layer reticle is returned to production. Unfortunately, substituting backup critical-layer reticles can significantly reduce an otherwise tightly controlled process window, adversely affecting wafer yields. One major requal cycle time component is the disposition process for mask inspections containing hundreds of defects. Not only is precious non-productive time extended by reviewing hundreds of potentially yield-limiting detections; each additional classification also increases the risk that manual review accidentally passes a real yield-limiting defect. Even assuming all defects of interest are flagged by operators, how confident can any person's judgment be regarding the lithographic impact of such defects? The time reticles spend away from scanners, combined with potential yield loss due to lithographic uncertainty, represents significant cycle time loss and increased production cost. Fortunately, a software program has been developed which automates defect classification with simulated printability measurement, greatly reducing requal cycle time and improving overall disposition accuracy. This product, called ADAS (Auto Defect Analysis System), has been tested in both engineering and high-volume production environments with very successful results. In this paper, data are presented supporting a significant reduction in costly wafer print checks, improved inspection-area productivity, and minimized risk of misclassifying yield-limiting defects.

  10. Increasing reticle inspection efficiency and reducing wafer printchecks at 14nm using automated defect classification and simulation

    Science.gov (United States)

    Paracha, Shazad; Goodman, Eliot; Eynon, Benjamin G.; Noyes, Ben F.; Ha, Steven; Kim, Jong-Min; Lee, Dong-Seok; Lee, Dong-Heok; Cho, Sang-Soo; Ham, Young M.; Vacca, Anthony D.; Fiekowsky, Peter J.; Fiekowsky, Daniel I.

    2014-10-01

    IC fabs inspect critical masks on a regular basis to ensure high wafer yields. These requalification inspections are costly for many reasons, including capital equipment, system maintenance, and labor. In addition, masks typically remain in the "requal" phase for extended, non-productive periods of time. The overall "requal" cycle time in which reticles remain non-productive is challenging to control. Shipping schedules can slip when wafer lots are put on hold until the master critical-layer reticle is returned to production. Unfortunately, substituting backup critical-layer reticles can significantly reduce an otherwise tightly controlled process window, adversely affecting wafer yields. One major requal cycle time component is the disposition process for mask inspections containing hundreds of defects. Not only is precious non-productive time extended by reviewing hundreds of potentially yield-limiting detections; each additional classification also increases the risk that manual review accidentally passes a real yield-limiting defect. Even assuming all defects of interest are flagged by operators, how confident can any person's judgment be regarding the lithographic impact of such defects? The time reticles spend away from scanners, combined with potential yield loss due to lithographic uncertainty, represents significant cycle time loss and increased production cost. An automatic defect analysis system (ADAS), which has been in fab production for numerous years, has been improved to handle the new challenges of the 14nm node, automating reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this study, we created programmed defects on a production 14nm-node critical-layer reticle. These defects were analyzed with lithographic simulation software and compared to the results of both AIMS optical simulation and actual wafer prints.

  11. Automated stent defect detection and classification with a high numerical aperture optical system

    Science.gov (United States)

    Bermudez, Carlos; Laguarta, Ferran; Cadevall, Cristina; Matilla, Aitor; Ibañez, Sergi; Artigas, Roger

    2017-06-01

    Stent quality control is a highly critical process. Cardiovascular stents have to be inspected 100% so that no defective stent is implanted in a human body. However, this visual control is currently performed manually, and each stent can take tens of minutes to inspect. In this paper, a novel optical inspection system is presented. By combining a high numerical aperture (NA) optical system, a rotational stage, and a line-scan camera, unrolled sections of the outer and inner surfaces of the stent are obtained and image-processed at high speed. Defects appearing on those surfaces, and also on the edges, are strongly contrasted due to the shadowing effect of the high-NA illumination and acquisition approach. Defects are therefore detected by means of morphological operations and a sensitivity parameter. Based on a trained defect library, a binary classifier sorts each kind of defect through a set of scoring vectors, providing the quality operator with all the information required to take a final decision. We expect this new approach to make defect detection completely objective and to dramatically reduce the time and cost of the stent quality control stage.
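
    A toy sketch of how a single sensitivity parameter could drive morphology-based defect detection on an unrolled stent-surface image; the residue-plus-threshold scheme and all parameter values are assumptions for illustration, not the authors' implementation.

    ```python
    import cv2
    import numpy as np

    def detect_defects(unrolled, sensitivity=3.0):
        """Return a binary defect mask from an unrolled stent-surface image.

        Bright/dark deviations from the locally smooth surface are isolated with
        top-hat/black-hat residues; 'sensitivity' scales a noise-based threshold.
        """
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        bright = cv2.morphologyEx(unrolled, cv2.MORPH_TOPHAT, kernel)
        dark = cv2.morphologyEx(unrolled, cv2.MORPH_BLACKHAT, kernel)
        residue = cv2.max(bright, dark)
        thresh = residue.mean() + sensitivity * residue.std()
        return (residue > thresh).astype(np.uint8)
    ```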

  12. An automated cirrus classification

    Science.gov (United States)

    Gryspeerdt, Edward; Quaas, Johannes; Sourdeval, Odran; Goren, Tom

    2017-04-01

    Cirrus clouds play an important role in determining the radiation budget of the Earth, but our understanding of the lifecycle of, and controls on, cirrus clouds remains incomplete. Cirrus clouds can have very different properties and development depending on their environment, particularly during their formation. However, the relevant factors often cannot be distinguished using commonly retrieved satellite data products (such as cloud optical depth). In particular, the initial cloud phase has been identified as an important factor in cloud development, but although back-trajectory based methods can provide information on the initial cloud phase, they are computationally expensive and depend on the cloud parametrisations used in re-analysis products. In this work, a classification system (Identification and Classification of Cirrus, IC-CIR) is introduced. Using re-analysis and satellite data, cirrus clouds are separated into four main types: frontal, convective, orographic and in-situ. The properties of these classes show that this classification is able to provide useful information on the properties and initial phase of cirrus clouds, information that could not be provided by instantaneous satellite-retrieved cloud properties alone. This classification is designed to be easily implemented in global climate models, helping to improve future comparisons between observations and models and reducing the uncertainty in cirrus cloud properties, leading to improved cloud parametrisations.

  13. Automatic classification of defects in weld pipe

    International Nuclear Information System (INIS)

    Anuar Mikdad Muad; Mohd Ashhar Hj Khalid; Abdul Aziz Mohamad; Abu Bakar Mhd Ghazali; Abdul Razak Hamzah

    2000-01-01

    With the advancement of computer imaging technology, the image on hard radiographic film can be digitized and stored in a computer, and the manual process of defect recognition and classification may be replaced by the computer. In this paper a computerized method for automatic detection and classification of common defects in film radiography of weld pipe is described. The detection and classification processes consist of automatic selection of the area of interest in the image, followed by classification of common defects using image processing and special algorithms. Analysis of the attributes of each defect, such as area, size, shape and orientation, is carried out by the feature analysis process. These attributes reveal the type of each defect. This method of defect classification results in a high success rate. Our experience showed that sharp film images produced better results.
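
    The attribute-based step (area, size, shape, orientation) might look like the following sketch built on scikit-image region properties; the rules and thresholds are purely illustrative stand-ins for the paper's "special algorithms".

    ```python
    from skimage.measure import label, regionprops

    def classify_defect(mask):
        """Derive area/shape/orientation attributes and map them to a defect type."""
        regions = regionprops(label(mask))
        if not regions:
            return "no defect"
        r = max(regions, key=lambda reg: reg.area)
        elongation = r.major_axis_length / max(r.minor_axis_length, 1e-6)
        # Illustrative rules only -- real thresholds come from reference radiographs.
        if elongation > 4:
            return "crack or lack of fusion"   # long, thin indication
        if r.eccentricity < 0.6:
            return "porosity"                  # compact, round indication
        return "slag inclusion"
    ```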

  14. Classification and Methods of Shrinkage Defect Control

    Directory of Open Access Journals (Sweden)

    N. S. Larichev

    2016-01-01

    Full Text Available The objective is to propose dividing internal shrinkage defects into dimensional levels according to defect size and shape. The paper presents the terminology used to describe internal shrinkage defects in castings and shows its flaws, including the lack of well-defined threshold values for defect size and shape. It is shown that, in describing defects, their sizes and shapes are defined qualitatively rather than quantitatively, and it is noted that the division of defects into pores and shells is based on morphological characters. The paper notes that a distinct differentiation between defects is necessary because different methods are used to eliminate them from the casting body. An overview is given of control methods for determining shrinkage defects in castings, covering destructive and non-destructive testing such as X-rays, tomography, and metallography, together with the advantages and disadvantages of each method. Based on the capabilities of these control methods, it is proposed to divide shrinkage defects into three dimensional levels. To estimate the shape of defects, the paper suggests a new parameter, a shape criterion, and defines its threshold values using the typical defects of each dimensional level. The paper discusses the basic techniques for estimating porosity and offers a relationship between defects of different dimensional levels and porosity score and percentage. It shows that the transition from one dimensional level to another corresponds not only to increasing pore size, but also to a significant deterioration of the mechanical properties of castings. The main conclusions are as follows: 1. At present, there is no single unambiguous classification of casting shrinkage defects in the technical literature. 2. As follows from the analysis of the classifications of shrinkage defects, their
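
    As an illustration only: one common way to quantify defect shape, in the spirit of the shape criterion mentioned above, is a circularity-style ratio over a segmented defect region; the paper's actual criterion may be defined differently.

    ```python
    # Hypothetical shape criterion: circularity 4*pi*A/P^2 is ~1 for round pores
    # and much smaller for branched shrinkage cavities (definition assumed).
    import numpy as np
    from skimage.measure import label, regionprops

    def shape_criterion(mask):
        r = max(regionprops(label(mask)), key=lambda reg: reg.area)
        return 4.0 * np.pi * r.area / max(r.perimeter, 1e-6) ** 2
    ```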

  15. Automated spectral classification and the GAIA project

    Science.gov (United States)

    Lasala, Jerry; Kurtz, Michael J.

    1995-01-01

    Two-dimensional spectral types for each of the stars observed in the Global Astrometric Interferometer for Astrophysics (GAIA) mission would provide additional information for galactic structure and stellar evolution studies, as well as helping in the identification of unusual objects and populations. Classification of the large quantity of spectra generated requires automated techniques. Approaches to automatic classification are reviewed, and a metric-distance method is discussed. In tests, the metric-distance method produced spectral types with mean errors comparable to those of human classifiers working at similar resolution. Data and equipment requirements for an automated classification survey are discussed. A program of auxiliary observations is proposed to yield spectral types and radial velocities for the GAIA-observed stars.

  16. Classification and printability of EUV mask defects from SEM images

    Science.gov (United States)

    Cho, Wonil; Price, Daniel; Morgan, Paul A.; Rost, Daniel; Satake, Masaki; Tolani, Vikram L.

    2017-10-01

    EUV lithography is starting to show more promise for patterning some critical layers at the 5nm technology node and beyond. However, there are still many key technical obstacles to overcome before bringing EUV lithography into high volume manufacturing (HVM). One of the greatest obstacles is manufacturing defect-free masks. For pattern defect inspection in the mask shop, cutting-edge 193nm optical inspection tools have been used so far, for lack of e-beam mask inspection (EBMI) or EUV actinic pattern inspection (API) tools. The main issue with current 193nm inspection tools is their limited resolution at the mask dimensions targeted for EUV patterning. The theoretical resolution limit for 193nm mask inspection tools is about 60nm half-pitch (HP) on masks, which means that main feature sizes on EUV masks will be well beyond the practical resolution of 193nm inspection tools. Nevertheless, 193nm inspection tools with various illumination conditions that maximize defect sensitivity and/or main-pattern modulation are being explored for initial EUV defect detection. Due to the generally low signal-to-noise ratio of 193nm inspection imaging at EUV patterning dimensions, these inspections often result in hundreds or thousands of detections, which then need to be accurately reviewed and dispositioned. Manually reviewing each defect is difficult due to poor resolution, and the lack of a reliable aerial dispositioning system makes dispositioning for printability very challenging. In this paper, we present the use of SEM images of EUV masks for higher-resolution review and disposition of defects. In this approach, most of the defects detected by the 193nm inspection tools are first imaged on a mask SEM tool. These images, together with the corresponding post-OPC design clips, are provided to KLA-Tencor's Reticle Decision Center (RDC) platform, which provides ADC (Automated Defect Classification) and S2A (SEM

  17. Automated source classification of new transient sources

    Science.gov (United States)

    Oertel, M.; Kreikenbohm, A.; Wilms, J.; DeLuca, A.

    2017-10-01

    The EXTraS project harvests the hitherto unexplored temporal domain information buried in the serendipitous data collected by the European Photon Imaging Camera (EPIC) onboard the ESA XMM-Newton mission since its launch. This includes a search for fast transients, missed by standard image analysis, and a search and characterization of variability in hundreds of thousands of sources. We present an automated classification scheme for new transient sources in the EXTraS project. The method is as follows: source classification features of a training sample are used to train machine learning algorithms (performed in R; randomForest (Breiman, 2001) in supervised mode) which are then tested on a sample of known source classes and used for classification.
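
    The classification step uses R's randomForest in supervised mode; a minimal Python analogue with scikit-learn, on stand-in features, could look like the following sketch.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 4))   # stand-in for source classification features
    y_train = rng.integers(0, 3, 200)     # stand-in for known source classes

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    print(cross_val_score(clf, X_train, y_train, cv=5).mean())  # test on known classes
    clf.fit(X_train, y_train)             # then classify new transient sources
    ```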

  18. Automated compound classification using a chemical ontology

    Directory of Open Access Journals (Sweden)

    Bobach Claudia

    2012-12-01

    Full Text Available Background: Classification of chemical compounds into compound classes by using structure-derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and, more recently, ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever-increasing possibilities to extract new compounds from text documents using name-to-structure tools, and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. Results: In the present work we implement principles and methods to construct a chemical ontology of classes that shall support automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. Conclusions: A proposal for a rule-based definition of chemical classes has been made that makes it possible to define chemical compound classes more precisely than before. The proposed structure-based reasoning

  19. Automated compound classification using a chemical ontology.

    Science.gov (United States)

    Bobach, Claudia; Böhme, Timo; Laube, Ulf; Püschel, Anett; Weber, Lutz

    2012-12-29

    Classification of chemical compounds into compound classes by using structure-derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and, more recently, ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever-increasing possibilities to extract new compounds from text documents using name-to-structure tools, and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. In the present work we implement principles and methods to construct a chemical ontology of classes that shall support automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. A proposal for a rule-based definition of chemical classes has been made that makes it possible to define chemical compound classes more precisely than before. The proposed structure-based reasoning logic makes it possible to translate chemistry expert knowledge into a
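
    A hedged sketch of the OR/NOT logic over SMARTS class definitions using RDKit; the class definition shown (aliphatic amines excluding amides) is an invented example for illustration, not one of the ontology's actual classes.

    ```python
    from rdkit import Chem

    class_def = {  # hypothetical class: primary/secondary aliphatic amine, not amide
        "any_of": ["[NX3;H2][CX4]", "[NX3;H1]([CX4])[CX4]"],
        "none_of": ["[NX3][CX3]=[OX1]"],
    }

    def in_class(smiles, definition):
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return False
        match = lambda s: mol.HasSubstructMatch(Chem.MolFromSmarts(s))
        return (any(match(s) for s in definition["any_of"])           # OR logic
                and not any(match(s) for s in definition["none_of"]))  # NOT logic

    print(in_class("CCN", class_def))        # True: primary aliphatic amine
    print(in_class("CC(=O)NC", class_def))   # False: amide nitrogen excluded
    ```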

  20. Automated Sunspot Detection and Classification Using SOHO/MDI Imagery

    Science.gov (United States)

    2015-03-01

    AUTOMATED SUNSPOT DETECTION AND CLASSIFICATION USING SOHO/MDI IMAGERY. Thesis by Samantha R. Howard, 1st Lieutenant, USAF (report AFIT-ENP-MS-15-M-078). The work is not subject to copyright protection in the United States.

  1. Defect sizing using automated ultrasonic inspection techniques at RNL

    International Nuclear Information System (INIS)

    Rogerson, A.; Highmore, P.J.; Poulter, L.N.J.

    1983-10-01

    RNL has developed and applied automated wide-beam pulse-echo and time-of-flight techniques with synthetic aperture processing for sizing defects in clad thick-section weldments and nozzle corner regions. These techniques were amongst those used in the four test-plate inspections making up the UKAEA Defect Detection Trials. In this report a critical appraisal is given of the sizing procedures adopted by RNL in these inspections. Several factors influencing sizing accuracy are discussed, and results from particular defects are highlighted. The time-of-flight technique with colour-graphics data display is shown to be highly effective in imaging near-vertical buried defects and underclad defects of height greater than 5 mm. Early characterisation of any identified defect from its ultrasonic response under pulse-echo inspection is seen as a desirable aid to the selection of an appropriate advanced sizing technique for buried defects. (author)

  2. Deep sub-wavelength metrology for advanced defect classification

    NARCIS (Netherlands)

    van der Walle, P; Van Der Donck, J. C.J.; Mulckhuyse, W; Nijsten, L.; Bernal Arango, F.A.; De Jong, A.; Van Zeijl, E.; Spruit, H. E.T.; van den Berg, J.H.; Nanda, G.; van Langen-Suurling, A.K.; Alkemade, P.F.A.; Pereira, S.F.; Maas, D.J.; Lehmann, Peter; Osten, Wolfgang; Albertazzi Gonçalves, Armando

    2017-01-01

    Particle defects are important contributors to yield loss in semiconductor manufacturing. Particles need to be detected and characterized in order to determine and eliminate their root cause. We have conceived a process flow for advanced defect classification (ADC) that distinguishes three consecutive steps: detection, review and classification.

  3. Deep sub-wavelength metrology for advanced defect classification

    NARCIS (Netherlands)

    Walle, P. van der; Kramer, E.; Donck, J.C.J. van der; Mulckhuyse, W.F.W.; Nijsten, L.; Bernal Arango, F.A.; Jong, A. de; Zeijl, E. van; Spruit, H.E.T.; Berg, J.H. van den; Nanda, G.; Langen-Suurling, A.K. van; Alkemade, P.F.A.; Pereira, S.F.; Maas, D.J.

    2017-01-01

    Particle defects are important contributors to yield loss in semiconductor manufacturing. Particles need to be detected and characterized in order to determine and eliminate their root cause. We have conceived a process flow for advanced defect classification (ADC) that distinguishes three consecutive steps: detection, review and classification.

  4. Automated feature extraction and classification from image sources

    Science.gov (United States)

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land-use/land-cover feature classification requirements, and helped develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  5. Automated structural classification of lipids by machine learning.

    Science.gov (United States)

    Taylor, Ryan; Miller, Ryan H; Miller, Ryan D; Porter, Michael; Dalgleish, James; Prince, John T

    2015-03-01

    Modern lipidomics is largely dependent upon structural ontologies because of the great diversity exhibited in the lipidome, but no automated lipid classification exists to facilitate this partitioning. The size of the putative lipidome far exceeds the number currently classified, despite a decade of work. Automated classification would benefit ongoing classification efforts by decreasing the time needed and increasing the accuracy of classification while providing classifications for mass spectral identification algorithms. We introduce a tool that automates classification into the LIPID MAPS ontology of known lipids with >95% accuracy and novel lipids with 63% accuracy. The classification is based upon simple chemical characteristics and modern machine learning algorithms. The decision trees produced are intelligible and can be used to clarify implicit assumptions about the current LIPID MAPS classification scheme. These characteristics and decision trees are made available to facilitate alternative implementations. We also discovered many hundreds of lipids that are currently misclassified in the LIPID MAPS database, strongly underscoring the need for automated classification. Source code and chemical characteristic lists as SMARTS search strings are available under an open-source license at https://www.github.com/princelab/lipid_classifier.

  6. Partial Auricular Defects; Classification & Reconstruction Guideline ...

    African Journals Online (AJOL)

    Background: The protruding position of the auricle makes it susceptible to trauma and results in a wide variety of auricular defects. Therefore, many techniques have been developed for reconstruction, and selecting a proper technique may represent a challenge to the occasional otoplastic surgeon. This article proposes a ...

  7. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from the need for a non-destructive testing method capable of detecting and locating defects and microstructural variations within armour ceramic components before they are issued to the soldiers who rely on them for their survival. The development of an automated ultrasonic-inspection-based classification system would make it possible to check each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features, or dimensionality reduction, is significant and simultaneously very difficult, as substantial computational effort is required to evaluate candidate feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based feature extraction is implemented on the region of interest. An artificial neural network classifier is employed to evaluate the performance of these features. Genetic-algorithm-based feature selection is then performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic-algorithm-based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to improved performance, with a classification accuracy of 96% versus 94% for the genetic algorithm.
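
    To make the PCA branch of the comparison concrete, here is a small scikit-learn sketch on synthetic stand-in data; the genetic-algorithm branch (searching binary feature masks with classification accuracy as fitness) is omitted, and the network size and component count are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 32))               # stand-in wavelet features per A-scan
    y = (X[:, :4].sum(axis=1) > 0).astype(int)   # stand-in defect/no-defect labels

    pipe = make_pipeline(StandardScaler(), PCA(n_components=8),
                         MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                       random_state=0))
    print("PCA+ANN accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
    ```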

  8. Woven fabric defects detection based on texture classification algorithm

    International Nuclear Information System (INIS)

    Ben Salem, Y.; Nasri, S.

    2011-01-01

    In this paper we compare two well-known texture classification methods, local binary patterns (LBP) and the co-occurrence matrix, to solve the problem of recognition and classification of defects occurring in textile manufacturing. The classifier used is the support vector machine (SVM). The system has been tested using the TILDA database. The results obtained are interesting and show that LBP is a good method for defect recognition and classification problems, giving good running times, especially for real-time applications.
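
    A compact sketch of the two feature extractors being compared, using scikit-image, feeding an SVM from scikit-learn; the parameter choices (P, R, GLCM distances and angles) are assumptions.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
    from sklearn.svm import SVC

    def lbp_features(img, P=8, R=1):
        """Normalised histogram of uniform LBP codes."""
        lbp = local_binary_pattern(img, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    def glcm_features(img):
        """Contrast/homogeneity/energy/correlation from two co-occurrence directions."""
        g = graycomatrix(img, [1], [0, np.pi / 2], levels=256,
                         symmetric=True, normed=True)
        return np.hstack([graycoprops(g, p).ravel()
                          for p in ("contrast", "homogeneity", "energy", "correlation")])

    # X = np.array([lbp_features(im) for im in fabric_patches]); clf = SVC().fit(X, y)
    ```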

  9. Automated defect location and sizing by advanced ultrasonic techniques

    International Nuclear Information System (INIS)

    Murgatroyd, R.A.

    1983-01-01

    From this assessment of advanced automated defect location and sizing techniques it is concluded that: 1. Pulse-echo techniques, when used at high sensitivity, are capable of detecting all known defects in the test weldments inspected; 2. Search sensitivity has a marked influence on defect detection at both 1 and 2 MHz, and it is considered that 20% DAC is the highest amplitude threshold level which could be prudently adopted at the search stage; 3. The important through-thickness dimension of deeply buried defects in the height range 5 to 50mm can be sized to an estimated accuracy of ±2mm using the Silk technique, and applying a SAFT-type algorithm to the data gives good lateral positioning of defects; 4. The 70° longitudinal wave twin-crystal technique has proved to be a highly effective method of detecting underclad cracks. A 70° shear wave pulse-echo technique and a 0° longitudinal wave twin-crystal method also give good detection results in the near-surface region; 5. The Silk technique has been effective in sizing defects in the height range 5 to 35mm in the near-surface region.
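
    For context, the Silk (time-of-flight diffraction) technique sizes a defect tip from the transit time of the tip-diffracted wave. The sketch below uses the standard textbook depth relation, not a formula taken from this report, with assumed probe separation and sound speed.

    ```python
    # Textbook TOFD relation: for probe half-separation s and sound speed c,
    # a diffracted-wave transit time t maps to tip depth d = sqrt((c*t/2)^2 - s^2).
    import math

    def tofd_depth(t_us, c_mm_per_us=5.9, s_mm=30.0):
        half_path = c_mm_per_us * t_us / 2.0
        return math.sqrt(max(half_path ** 2 - s_mm ** 2, 0.0))

    print(round(tofd_depth(12.0), 1))  # e.g. ~18.8 mm tip depth
    ```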

  10. Deep sub-wavelength metrology for advanced defect classification

    Science.gov (United States)

    van der Walle, P.; Kramer, E.; van der Donck, J. C. J.; Mulckhuyse, W.; Nijsten, L.; Bernal Arango, F. A.; de Jong, A.; van Zeijl, E.; Spruit, H. E. T.; van den Berg, J. H.; Nanda, G.; van Langen-Suurling, A. K.; Alkemade, P. F. A.; Pereira, S. F.; Maas, D. J.

    2017-06-01

    Particle defects are important contributors to yield loss in semiconductor manufacturing. Particles need to be detected and characterized in order to determine and eliminate their root cause. We have conceived a process flow for advanced defect classification (ADC) that distinguishes three consecutive steps: detection, review and classification. For defect detection, TNO has developed the Rapid Nano (RN3) particle scanner, which illuminates the sample from nine azimuth angles. The RN3 is capable of detecting 42 nm Latex Sphere Equivalent (LSE) particles on XXX-flat Silicon wafers. For each sample, the lower detection limit (LDL) can be verified by an analysis of the speckle signal, which originates from the surface roughness of the substrate. In detection mode (RN3.1), the signals from all illumination angles are added. In review mode (RN3.9), the signals from all nine arms are recorded individually and analyzed in order to retrieve additional information on the shape and size of deep sub-wavelength defects. This paper presents experimental and modelling results on extracting shape information, such as the aspect ratio, skewness, and orientation of test defects, from the RN3.9 multi-azimuth signal. Both modelling and experimental work confirm that the RN3.9 signal contains detailed defect shape information. After review by RN3.9, defects are coarsely classified, yielding a purified Defect-of-Interest (DoI) list for further analysis on slower metrology tools, such as SEM, AFM or HIM, that provide more detailed review data and further classification. Purifying the DoI list via optical metrology with RN3.9 will make inspection time on slower review tools more efficient.
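
    A toy numerical illustration of the two modes: detection mode adds the nine azimuth channels into one scalar, while review mode keeps them separate so the per-arm pattern can carry shape information. The "anisotropy" metric is an invented example, not the authors' analysis.

    ```python
    import numpy as np

    signal = np.random.default_rng(2).gamma(2.0, 1.0, size=9)  # one value per azimuth arm

    detect = signal.sum()                       # RN3.1 detection mode: add all nine arms
    review = signal / signal.sum()              # RN3.9 review mode: per-arm pattern
    anisotropy = review.max() / review.min()    # crude asymmetry metric (hypothetical)
    print(detect, anisotropy)
    ```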

  11. Automated Classification of Seedlings Using Computer Vision

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Christiansen, Peter

    The objective of this project is to investigate the possibilities of recognizing plant species at multiple growth stages based on RGB images. Plants and leaves are initially segmented from a database through a partly automated procedure providing samples of 2438 plants and 4767 leaves distributed...

  12. Automated classification of computer network attacks

    CSIR Research Space (South Africa)

    Van Heerden, R

    2013-11-01

    Full Text Available In this paper we demonstrate how an automated reasoner, HermiT, is used to classify instances of computer-network-based attacks in conjunction with a network attack ontology. The ontology describes different types of network attacks through classes...

  13. Improved Automated Classification of Alcoholics and Non-alcoholics

    OpenAIRE

    Ramaswamy Palaniappan

    2008-01-01

    In this paper, several improvements are proposed to previous work on the automated classification of alcoholics and non-alcoholics. In the previous paper, a multilayer perceptron neural network classifying the energy of gamma-band Visual Evoked Potential (VEP) signals gave the best classification performance, using 800 VEP signals from 10 alcoholics and 10 non-alcoholics. Here, the dataset is extended to include 3560 VEP signals from 102 subjects: 62 alcoholics and 40 non-alcoholics...

  14. Improving reticle defect disposition via fully automated lithography simulation

    Science.gov (United States)

    Mann, Raunak; Goodman, Eliot; Lao, Keith; Ha, Steven; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan

    2016-03-01

    Most advanced wafer fabs have embraced complex pattern decoration, which creates numerous challenges during in-fab reticle qualification. These optical proximity correction (OPC) techniques create assist features that tend to be very close in size and shape to the main patterns, as seen in Figure 1. A small defect on an assist feature will most likely have little or no impact on the fidelity of the wafer image, whereas the same defect on a main feature could significantly decrease device functionality. In order to properly disposition these defects, reticle inspection technicians need an efficient method that automatically separates main from assist features and predicts the resulting defect impact on the wafer image. This is the role of the Auto Defect Analysis System (ADAS) defect simulation system [1]. Up until now, using ADAS simulation was limited to engineers, due to the complexity of the settings that need to be manually entered to create an accurate result. A single error in entering one of these values can cause erroneous results, therefore full automation is necessary. In this study, we propose a new method where all needed simulation parameters are automatically loaded into ADAS. This is accomplished in two parts. First, we have created a scanner parameter database that is automatically identified from mask product and level names. Second, we automatically determine the appropriate simulation printability threshold by using a new reference image (provided by the inspection tool) that contains a known measured value of the reticle critical dimension (CD). This new method automatically loads the correct scanner conditions, sets the appropriate simulation threshold, and automatically measures the percentage of CD change caused by the defect. This streamlines qualification and reduces the number of reticles being put on hold waiting for engineer review. We also present data showing the consistency and reliability of the new method, along with the impact on the efficiency of in-fab reticle qualification.

  15. Intelligent Computer Vision System for Automated Classification

    International Nuclear Information System (INIS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-01-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.

  16. Automated lung nodule classification following automated nodule detection on CT: A serial approach

    International Nuclear Information System (INIS)

    Armato, Samuel G. III; Altman, Michael B.; Wilkie, Joel; Sone, Shusuke; Li, Feng; Doi, Kunio; Roy, Arunabha S.

    2003-01-01

    We have evaluated the performance of an automated classifier applied to the task of differentiating malignant and benign lung nodules in low-dose helical computed tomography (CT) scans acquired as part of a lung cancer screening program. The nodules classified in this manner were initially identified by our automated lung nodule detection method, so that the output of automated lung nodule detection was used as input to automated lung nodule classification. This study begins to narrow the distinction between the 'detection task' and the 'classification task'. Automated lung nodule detection is based on two- and three-dimensional analyses of the CT image data. Gray-level-thresholding techniques are used to identify initial lung nodule candidates, for which morphological and gray-level features are computed. A rule-based approach is applied to reduce the number of nodule candidates that correspond to non-nodules, and the features of the remaining candidates are merged through linear discriminant analysis to obtain final detection results. Automated lung nodule classification merges the features of the lung nodule candidates identified by the detection algorithm that correspond to actual nodules through another linear discriminant classifier to distinguish between malignant and benign nodules. The automated classification method was applied to the computerized detection results obtained from a database of 393 low-dose thoracic CT scans containing 470 confirmed lung nodules (69 malignant and 401 benign nodules). Receiver operating characteristic (ROC) analysis was used to evaluate the ability of the classifier to differentiate between nodule candidates that correspond to malignant nodules and nodule candidates that correspond to benign lesions. The area under the ROC curve for this classification task attained a value of 0.79 during a leave-one-out evaluation.
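
    The merge-and-evaluate stage (a linear discriminant classifier scored by leave-one-out ROC analysis) can be sketched with scikit-learn as follows, on synthetic stand-in features.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 6))     # stand-in morphological/gray-level features
    y = rng.integers(0, 2, 120)       # 1 = malignant, 0 = benign (synthetic labels)

    lda = LinearDiscriminantAnalysis()
    scores = cross_val_predict(lda, X, y, cv=LeaveOneOut(),
                               method="decision_function")
    print("leave-one-out ROC AUC:", roc_auc_score(y, scores))
    ```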

  17. Developmental defects in zebrafish for classification of EGF pathway inhibitors

    International Nuclear Information System (INIS)

    Pruvot, Benoist; Curé, Yoann; Djiotsa, Joachim; Voncken, Audrey; Muller, Marc

    2014-01-01

    One of the major challenges when testing drug candidates targeted at a specific pathway in whole animals is the discrimination between specific effects and unwanted, off-target effects. Here we used the zebrafish to define several developmental defects caused by impairment of Egf signaling, a major pathway of interest in tumor biology. We inactivated Egf signaling by genetically blocking Egf expression or using specific inhibitors of the Egf receptor function. We show that the combined occurrence of defects in cartilage formation, disturbance of blood flow in the trunk and a decrease of myelin basic protein expression represent good indicators for impairment of Egf signaling. Finally, we present a classification of known tyrosine kinase inhibitors according to their specificity for the Egf pathway. In conclusion, we show that developmental indicators can help to discriminate between specific effects on the target pathway from off-target effects in molecularly targeted drug screening experiments in whole animal systems. - Highlights: • We analyze the functions of Egf signaling on zebrafish development. • Genetic blocking of Egf expression causes cartilage, myelin and circulatory defects. • Chemical inhibition of Egf receptor function causes similar defects. • Developmental defects can reveal the specificity of Egf pathway inhibitors

  18. Sternoclavicular Joint Infection: Classification of Resection Defects and Reconstructive Algorithm

    Directory of Open Access Journals (Sweden)

    Janna Joethy

    2012-11-01

    Full Text Available Background: Aggressive treatment of sternoclavicular joint (SCJ) infection involves systemic antibiotics, surgical drainage and resection if indicated. The purpose of this paper is to describe a classification of post-resectional SCJ defects and highlight our reconstructive algorithm. Defects were classified into type A, where closure was possible, often with the aid of topical negative pressure dressing; type B, where parts of the manubrium, clavicular head, and first rib were excised; and type C, where both clavicular heads, first ribs and most of the manubrium were resected. Methods: Twelve patients (age range, 42 to 72 years) underwent reconstruction after SCJ infection over the last 8 years. There was 1 case of a type A defect, 10 type B defects, and 1 type C defect. Reconstruction was performed using the pectoralis major flap in 6 cases (50%), the latissimus dorsi flap in 4 cases (33%), secondary closure in 1 case, and the latissimus and rectus flaps in 1 case. Results: All wounds healed uneventfully with no flap failure. Nine patients had good shoulder motion. Three patients with extensive clavicular resection had restricted shoulder abduction and were unable to abduct their arm past 90°. Internal and external rotation were not affected. Conclusions: We highlight our reconstructive algorithm, which is summarised as follows: for an isolated type B SCJ defect we recommend the ipsilateral pectoralis major muscle for closure. For a type C bilateral defect, we suggest the latissimus dorsi flap. In cases of extensive infection where the thoracoacromial and internal mammary vessels are thrombosed, the pectoralis major and rectus abdominis cannot be used, and the latissimus dorsi flap is chosen.

  19. Sternoclavicular Joint Infection: Classification of Resection Defects and Reconstructive Algorithm

    Directory of Open Access Journals (Sweden)

    Janna Joethy

    2012-11-01

    Full Text Available Background: Aggressive treatment of sternoclavicular joint (SCJ) infection involves systemic antibiotics, surgical drainage and resection if indicated. The purpose of this paper is to describe a classification of post-resectional SCJ defects and highlight our reconstructive algorithm. Defects were classified into type A, where closure was possible, often with the aid of topical negative pressure dressing; type B, where parts of the manubrium, clavicular head, and first rib were excised; and type C, where both clavicular heads, first ribs and most of the manubrium were resected. Methods: Twelve patients (age range, 42 to 72 years) underwent reconstruction after SCJ infection over the last 8 years. There was 1 case of a type A defect, 10 type B defects, and 1 type C defect. Reconstruction was performed using the pectoralis major flap in 6 cases (50%), the latissimus dorsi flap in 4 cases (33%), secondary closure in 1 case, and the latissimus and rectus flaps in 1 case. Results: All wounds healed uneventfully with no flap failure. Nine patients had good shoulder motion. Three patients with extensive clavicular resection had restricted shoulder abduction and were unable to abduct their arm past 90°. Internal and external rotation were not affected. Conclusions: We highlight our reconstructive algorithm, which is summarised as follows: for an isolated type B SCJ defect we recommend the ipsilateral pectoralis major muscle for closure. For a type C bilateral defect, we suggest the latissimus dorsi flap. In cases of extensive infection where the thoracoacromial and internal mammary vessels are thrombosed, the pectoralis major and rectus abdominis cannot be used, and the latissimus dorsi flap is chosen.

  20. Evolutionary fuzzy ARTMAP neural networks for classification of semiconductor defects.

    Science.gov (United States)

    Tan, Shing Chiang; Watada, Junzo; Ibrahim, Zuwairie; Khalid, Marzuki

    2015-05-01

    Wafer defect detection using an intelligent system is an approach to quality improvement in semiconductor manufacturing that aims to enhance process stability, increase production capacity, and improve yields. Occasionally, only a few records indicating defective units are available, and they form a minority group in a large database. Such a situation leads to an imbalanced data set problem, which poses a great challenge for machine-learning techniques to obtain an effective solution. In addition, the database may comprise overlapping samples of different classes. This paper introduces two models of evolutionary fuzzy ARTMAP (FAM) neural networks to deal with imbalanced data set problems in semiconductor manufacturing operations. In particular, FAM models and hybrid genetic algorithms are integrated in the proposed evolutionary artificial neural networks (EANNs) to classify an imbalanced data set. In addition, one of the proposed EANNs incorporates a facility to learn overlapping samples of different classes from the imbalanced data environment. The classification results of the proposed evolutionary FAM neural networks are presented, compared, and analyzed using several classification metrics. The outcomes positively indicate the effectiveness of the proposed networks in handling classification problems with imbalanced data sets.

  1. “The Naming of Cats”: Automated Genre Classification

    Directory of Open Access Journals (Sweden)

    Yunhyong Kim

    2007-07-01

    Full Text Available This paper builds on the work in automated genre classification presented at ECDL 2006, as a step toward automating metadata extraction from digital documents for ingest into digital repositories such as those run by archives, libraries and eprint services (Kim & Ross, 2006b). We have previously proposed dividing the features of a document into five types (features for visual layout, language model features, stylometric features, features for semantic structure, and contextual features of the document as an object linked to previously classified objects and other external sources) and have examined visual and language model features. The current paper compares results from testing classifiers based on image and stylometric features in a binary classification, to show that certain genres have strong image features which enable effective separation of documents belonging to the genre from a large pool of other documents.

  2. Automated cell type discovery and classification through knowledge transfer

    Science.gov (United States)

    Lee, Hao-Chih; Kosoy, Roman; Becker, Christine E.

    2017-01-01

    Abstract Motivation: Recent advances in mass cytometry allow simultaneous measurements of up to 50 markers at single-cell resolution. However, the high dimensionality of mass cytometry data introduces computational challenges for automated data analysis and hinders translation of new biological understanding into clinical applications. Previous studies have applied machine learning to facilitate processing of mass cytometry data. However, manual inspection is still inevitable and becoming the barrier to reliable large-scale analysis. Results: We present a new algorithm called Automated Cell-type Discovery and Classification (ACDC) that fully automates the classification of canonical cell populations and highlights novel cell types in mass cytometry data. Evaluations on real-world data show ACDC provides accurate and reliable estimations compared to manual gating results. Additionally, ACDC automatically classifies previously ambiguous cell types to facilitate discovery. Our findings suggest that ACDC substantially improves both reliability and interpretability of results obtained from high-dimensional mass cytometry profiling data. Availability and Implementation: A Python package (Python 3) and analysis scripts for reproducing the results are available at https://bitbucket.org/dudleylab/acdc. Contact: brian.kidd@mssm.edu or joel.dudley@mssm.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28158442

  3. A support vector machine approach for classification of welding defects from ultrasonic signals

    Science.gov (United States)

    Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming

    2014-07-01

    Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energies of the wavelet coefficients in different frequency channels are used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
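    The feature-extraction step described above is straightforward to prototype. The following is a minimal sketch, not the authors' implementation: wavelet-packet energies computed with PyWavelets feed a single RBF SVM from scikit-learn, whereas the paper uses a layered multi-class architecture with bees-algorithm optimisation; the wavelet (db4), decomposition level, and toy data are illustrative assumptions.

```python
# Minimal sketch of the feature pipeline described above (not the authors' code):
# wavelet-packet energies of an ultrasonic echo as the feature vector for an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wp_energy_features(signal, wavelet="db4", level=3):
    """Normalised energy of each frequency-ordered terminal wavelet-packet node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    leaves = wp.get_level(level, order="freq")          # 2**level sub-bands
    energy = np.array([np.sum(node.data ** 2) for node in leaves])
    return energy / energy.sum()                        # unit total energy

# Hypothetical data for illustration: 40 one-dimensional echo signals.
rng = np.random.default_rng(0)
X_raw = [rng.standard_normal(1024) for _ in range(40)]
y = rng.integers(0, 2, size=40)                         # e.g. crack vs. porosity

X = np.array([wp_energy_features(s) for s in X_raw])
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
print(clf.score(X, y))
```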

  4. Automated Classification of Asteroids into Families at Work

    Science.gov (United States)

    Knežević, Zoran; Milani, Andrea; Cellino, Alberto; Novaković, Bojan; Spoto, Federica; Paolicchi, Paolo

    2014-07-01

    We have recently proposed a new approach to asteroid family classification by combining the classical HCM method with an automated procedure to add newly discovered members to existing families. This approach is specifically intended to cope with ever increasing asteroid data sets, and consists of several steps to segment the problem and handle the very large amount of data in an efficient and accurate manner. We briefly present all these steps and show the results from three subsequent updates making use of only the automated step of attributing the newly numbered asteroids to the known families. We describe the changes in the membership of individual families, as well as the evolution of the classification due to the newly added intersections between families, resolved candidate family mergers, and the emergence of new merger candidates. We thus demonstrate how, with the new approach, the asteroid family classification becomes stable in general terms (converging towards a permanent list of confirmed families) while at the same time evolving in detail (to account for the newly discovered asteroids) at each update.

  5. Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM)

    Science.gov (United States)

    Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan

    2017-10-01

    This paper investigates the application of a novel classification method, the Feature Weighted Self Organizing Map (FWSOM), which analyses the topology information of a converged standard Self Organizing Map (SOM) to automatically guide the selection of important inputs during training, for improved classification of data with redundant inputs. It is examined against two traditional approaches, namely neural networks and Support Vector Machines (SVM), for the classification of EEG data, as presented in previous work. In particular, the novel method identifies the features that are important for classification automatically, and in this way the important features can be used to improve the diagnostic ability of any of the above methods. The paper presents the results and shows how the automated identification of the important features in the dataset was successful, and how this improves the classification results for all methods apart from linear discriminatory methods, which cannot separate the underlying nonlinear relationship in the data. In addition to achieving higher classification accuracy, the FWSOM has given insights into which features are important in the classification of each class (left and right-hand movements), and these are corroborated by already published work in this area.

  6. Literature classification for semi-automated updating of biological knowledgebases

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Kudahl, Ulrich Johan; Winther, Ole

    2013-01-01

    Background: While many types of biological data, such as sequence data, are extensively stored in biological databases, functional annotations, such as immunological epitopes, are found primarily in semi-structured formats or free text embedded in primary scientific literature. Results: We defined and applied a machine learning approach for literature classification to support updating of TANTIGEN, a knowledgebase of tumor T-cell antigens. Abstracts from PubMed were downloaded and classified as either "relevant" or "irrelevant" for database update. Training and five-fold cross-validation of a k-NN classifier on 310 abstracts yielded classification accuracy of 0.95, thus showing significant value in support of data extraction from the literature. Conclusion: We here propose a conceptual framework for semi-automated extraction of epitope data embedded in scientific literature using principles from text mining.
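    The classifier itself is standard tooling. A minimal sketch with scikit-learn follows; the TF-IDF representation and k = 5 are assumptions, since the abstract does not specify the feature encoding.

```python
# Sketch of the relevance classifier described above: TF-IDF features over
# PubMed abstracts and five-fold cross-validation of a k-NN model.
# Feature encoding and k are assumptions; the paper used ~310 labelled abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def evaluate(abstracts, labels):
    """Five-fold CV accuracy of a TF-IDF + k-NN relevance classifier."""
    clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                        KNeighborsClassifier(n_neighbors=5))
    return cross_val_score(clf, abstracts, labels, cv=5, scoring="accuracy")

# evaluate(pubmed_abstracts, relevant_flags)  # hypothetical labelled corpus
```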

  7. Automated tissue classification framework for reproducible chronic wound assessment.

    Science.gov (United States)

    Mukherjee, Rashmi; Manohar, Dhiraj Dhane; Das, Dev Kumar; Achar, Arun; Mitra, Analava; Chakraborty, Chandan

    2014-01-01

    The aim of this paper was to develop a computer assisted tissue classification (granulation, necrotic, and slough) scheme for chronic wound (CW) evaluation using medical image processing and statistical machine learning techniques. The red-green-blue (RGB) wound images grabbed by a normal digital camera were first transformed into HSI (hue, saturation, and intensity) color space, and subsequently the "S" component of the HSI color channels was selected as it provided higher contrast. Wound areas from 6 different types of CW were segmented from whole images using fuzzy divergence based thresholding by minimizing edge ambiguity. A set of color and textural features describing granulation, necrotic, and slough tissues in the segmented wound area were extracted using various mathematical techniques. Finally, statistical learning algorithms, namely, Bayesian classification and support vector machine (SVM), were trained and tested for wound tissue classification in different CW images. The performance of the wound area segmentation protocol was further validated by ground truth images labeled by clinical experts. It was observed that the SVM with a 3rd order polynomial kernel provided the highest accuracies, that is, 86.94%, 90.47%, and 75.53%, for classifying granulation, slough, and necrotic tissues, respectively. The proposed automated tissue classification technique achieved the highest overall accuracy, that is, 87.61%, with the highest kappa statistic value (0.793).

  8. Automated Tissue Classification Framework for Reproducible Chronic Wound Assessment

    Directory of Open Access Journals (Sweden)

    Rashmi Mukherjee

    2014-01-01

    Full Text Available The aim of this paper was to develop a computer assisted tissue classification (granulation, necrotic, and slough) scheme for chronic wound (CW) evaluation using medical image processing and statistical machine learning techniques. The red-green-blue (RGB) wound images grabbed by a normal digital camera were first transformed into HSI (hue, saturation, and intensity) color space, and subsequently the "S" component of the HSI color channels was selected as it provided higher contrast. Wound areas from 6 different types of CW were segmented from whole images using fuzzy divergence based thresholding by minimizing edge ambiguity. A set of color and textural features describing granulation, necrotic, and slough tissues in the segmented wound area were extracted using various mathematical techniques. Finally, statistical learning algorithms, namely, Bayesian classification and support vector machine (SVM), were trained and tested for wound tissue classification in different CW images. The performance of the wound area segmentation protocol was further validated by ground truth images labeled by clinical experts. It was observed that the SVM with a 3rd order polynomial kernel provided the highest accuracies, that is, 86.94%, 90.47%, and 75.53%, for classifying granulation, slough, and necrotic tissues, respectively. The proposed automated tissue classification technique achieved the highest overall accuracy, that is, 87.61%, with the highest kappa statistic value (0.793).
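    The colour-space conversion and final classifier in the pipeline above are easy to reproduce. A minimal sketch, assuming scikit-image and scikit-learn, with HSV's saturation channel standing in for the HSI "S" component and the colour/texture features reduced to simple per-pixel statistics for illustration:

```python
# Sketch: saturation-channel features and a 3rd-order polynomial SVM, loosely
# following the pipeline above (features simplified; data is a stand-in).
import numpy as np
from skimage.color import rgb2hsv
from sklearn.svm import SVC

def pixel_features(rgb_image):
    """Per-pixel saturation plus deviation from mean saturation (crude texture)."""
    s = rgb2hsv(rgb_image)[..., 1]                 # saturation channel
    deviation = np.abs(s - s.mean())               # stand-in texture measure
    return np.stack([s.ravel(), deviation.ravel()], axis=1)

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))                      # hypothetical wound image
X = pixel_features(img)
y = rng.integers(0, 3, size=X.shape[0])            # granulation/slough/necrotic

clf = SVC(kernel="poly", degree=3).fit(X, y)       # 3rd-order polynomial kernel
```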

  9. Empirical Analysis and Automated Classification of Security Bug Reports

    Science.gov (United States)

    Tyo, Jacob P.

    2016-01-01

    With the ever expanding amount of sensitive data being placed into computer systems, the need for effective cybersecurity is of utmost importance. However, there is a shortage of detailed empirical studies of security vulnerabilities from which cybersecurity metrics and best practices could be determined. This thesis has two main research goals: (1) to explore the distribution and characteristics of security vulnerabilities based on the information provided in bug tracking systems and (2) to develop data analytics approaches for automatic classification of bug reports as security or non-security related. This work is based on using three NASA datasets as case studies. The empirical analysis showed that the majority of software vulnerabilities belong to only a small number of types. Addressing these types of vulnerabilities will consequently lead to cost efficient improvement of software security. Since this analysis requires labeling of each bug report in the bug tracking system, we explored using machine learning to automate the classification of each bug report as security or non-security related (two-class classification), as well as each security related bug report as a specific security type (multiclass classification). In addition to using supervised machine learning algorithms, a novel unsupervised machine learning approach is proposed. An accuracy of 92%, recall of 96%, precision of 92%, probability of false alarm of 4%, F-Score of 81% and G-Score of 90% were the best results achieved during two-class classification. Furthermore, an accuracy of 80%, recall of 80%, precision of 94%, and F-score of 85% were the best results achieved during multiclass classification.
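    The less common of the metrics quoted above, probability of false alarm (PF) and G-Score, are simple functions of the confusion matrix. A short sketch of how they are typically computed in the defect-prediction literature; the definitions PF = FP/(FP+TN) and G-Score as the harmonic mean of recall and 1 − PF are assumptions, as the thesis's exact formulas are not reproduced here.

```python
# Sketch of the two-class metrics quoted above, from a confusion matrix.
# PF and G-Score use definitions common in defect-prediction work (assumed).
def two_class_metrics(tp, fp, tn, fn):
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    pf = fp / (fp + tn)                          # probability of false alarm
    f_score = 2 * precision * recall / (precision + recall)
    g_score = 2 * recall * (1 - pf) / (recall + (1 - pf))
    return dict(accuracy=accuracy, recall=recall, precision=precision,
                pf=pf, f_score=f_score, g_score=g_score)

# Hypothetical counts, for illustration only.
print(two_class_metrics(tp=96, fp=8, tn=92, fn=4))
```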

  10. Automated segmentation of atherosclerotic histology based on pattern classification

    Directory of Open Access Journals (Sweden)

    Arna van Engelen

    2013-01-01

    Full Text Available Background: Histology sections provide accurate information on atherosclerotic plaque composition, and are used in various applications. To our knowledge, no automated systems for plaque component segmentation in histology sections currently exist. Materials and Methods: We perform pixel-wise classification of fibrous, lipid, and necrotic tissue in Elastica Von Gieson-stained histology sections, using features based on color channel intensity and local image texture and structure. We compare an approach where we train on independent data to an approach where we train on one or two sections per specimen in order to segment the remaining sections. We evaluate the results on segmentation accuracy in histology, and we use the obtained histology segmentations to train plaque component classification methods in ex vivo magnetic resonance imaging (MRI) and in vivo MRI and computed tomography (CT). Results: In leave-one-specimen-out experiments on 176 histology slices of 13 plaques, a pixel-wise accuracy of 75.7 ± 6.8% was obtained. This increased to 77.6 ± 6.5% when two manually annotated slices of the specimen to be segmented were used for training. Rank correlations of relative component volumes with manually annotated volumes were high in this situation (P = 0.82-0.98). Using the obtained histology segmentations to train plaque component classification methods in ex vivo MRI and in vivo MRI and CT resulted in similar image segmentations for training on the automated histology segmentations as for training on a fully manual ground truth. The size of the lipid-rich necrotic core was significantly smaller when training on fully automated histology segmentations than when manually annotated histology sections were used. This difference was reduced and not statistically significant when one or two slices per section were manually annotated for histology segmentation. Conclusions: Good histology segmentations can be obtained by automated segmentation

  11. Classification of weld defect based on information fusion technology for radiographic testing system

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin; Dang, Changying [State Key Laboratory for Manufacturing System Engineering, Department of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China)

    2016-03-15

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing radiographic testing systems. This paper proposes a novel weld defect classification method based on information fusion technology, namely Dempster–Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined on the weld defect feature information and a quartile-method-based calculation of the standard weld defect class, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
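    The fusion step rests on Dempster's rule of combination. A minimal generic sketch follows; the mass values and the two-class frame are illustrative, not the paper's feature-derived mass functions.

```python
# Sketch of Dempster's rule of combination for two mass functions over a frame
# of defect classes (generic DS fusion, not the paper's exact mass functions).
from itertools import product

def combine(m1, m2):
    """Dempster's rule: fuse two mass functions keyed by frozensets."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                       # mass lost to conflict
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

CRACK, PORE = frozenset({"crack"}), frozenset({"porosity"})
EITHER = CRACK | PORE                               # ignorance: either class
m_feature1 = {CRACK: 0.6, EITHER: 0.4}              # evidence from one feature
m_feature2 = {CRACK: 0.5, PORE: 0.3, EITHER: 0.2}   # evidence from another
print(combine(m_feature1, m_feature2))
```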

  12. Automated noninvasive classification of renal cancer on multiphase CT

    Energy Technology Data Exchange (ETDEWEB)

    Linguraru, Marius George; Wang, Shijun; Shah, Furhawn; Gautam, Rabindra; Peterson, James; Linehan, W. Marston; Summers, Ronald M. [Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 10 Center Drive, Bethesda, Maryland 20892 (United States); Urologic Oncology Branch, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, Maryland 20892 (United States); Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 10 Center Drive, Bethesda, Maryland 20892 (United States)

    2011-10-15

    Purpose: To explore the added value of the shape of renal lesions for classifying renal neoplasms. To investigate the potential of computer-aided analysis of contrast-enhanced computed-tomography (CT) to quantify and classify renal lesions. Methods: A computer-aided clinical tool based on adaptive level sets was employed to analyze 125 renal lesions from contrast-enhanced abdominal CT studies of 43 patients. There were 47 cysts and 78 neoplasms: 22 Von Hippel-Lindau (VHL), 16 Birt-Hogg-Dube (BHD), 19 hereditary papillary renal carcinomas (HPRC), and 21 hereditary leiomyomatosis and renal cell cancers (HLRCC). The technique quantified the three-dimensional size and enhancement of lesions. Intrapatient and interphase registration facilitated the study of lesion serial enhancement. The histograms of curvature-related features were used to classify the lesion types. The areas under the curve (AUC) were calculated for receiver operating characteristic curves. Results: Tumors were robustly segmented with 0.80 overlap (0.98 correlation) between manual and semi-automated quantifications. The method further identified morphological discrepancies between the types of lesions. The classification based on lesion appearance, enhancement and morphology between cysts and cancers showed AUC = 0.98; for BHD + VHL (solid cancers) vs. HPRC + HLRCC AUC = 0.99; for VHL vs. BHD AUC = 0.82; and for HPRC vs. HLRCC AUC = 0.84. All semi-automated classifications were statistically significant (p < 0.05) and superior to the analyses based solely on serial enhancement. Conclusions: The computer-aided clinical tool allowed the accurate quantification of cystic, solid, and mixed renal tumors. Cancer types were classified into four categories using their shape and enhancement. Comprehensive imaging biomarkers of renal neoplasms on abdominal CT may facilitate their noninvasive classification, guide clinical management, and monitor responses to drugs or interventions.

  13. Applying machine learning classification techniques to automate sky object cataloguing

    Science.gov (United States)

    Fayyad, Usama M.; Doyle, Richard J.; Weir, W. Nick; Djorgovski, Stanislav

    1993-08-01

    We describe the application of Artificial Intelligence machine learning techniques to the development of an automated tool for the reduction of a large scientific data set. The 2nd Mt. Palomar Northern Sky Survey is nearly completed. This survey provides comprehensive coverage of the northern celestial hemisphere in the form of photographic plates. The plates are being transformed into digitized images whose quality will probably not be surpassed in the next ten to twenty years. The images are expected to contain on the order of 10^7 galaxies and 10^8 stars. Astronomers wish to determine which of these sky objects belong to various classes of galaxies and stars. Unfortunately, the size of this data set precludes analysis in an exclusively manual fashion. Our approach is to develop a software system which integrates the functions of independently developed techniques for image processing and data classification. Digitized sky images are passed through image processing routines to identify sky objects and to extract a set of features for each object. These routines are used to help select a useful set of attributes for classifying sky objects. Then GID3 (Generalized ID3) and O-B Tree, two inductive learning techniques, learn classification decision trees from examples. These classifiers are then applied to new data. The development process is highly interactive, with astronomer input playing a vital role. Astronomers refine the feature set used to construct sky object descriptions, and evaluate the performance of the automated classification technique on new data. This paper gives an overview of the machine learning techniques with an emphasis on their general applicability, describes the details of our specific application, and reports the initial encouraging results. The results indicate that our machine learning approach is well-suited to the problem. The primary benefit of the approach is increased data reduction throughput. Another benefit is

  14. Automated recognition system for ELM classification in JET

    International Nuclear Information System (INIS)

    Duro, N.; Dormido, R.; Vega, J.; Dormido-Canto, S.; Farias, G.; Sanchez, J.; Vargas, H.; Murari, A.

    2009-01-01

    Edge localized modes (ELMs) are instabilities occurring in the edge of H-mode plasmas. Considerable efforts are being devoted to understanding the physics behind this non-linear phenomenon. A first characterization of ELMs is usually their identification as type I or type III. An automated pattern recognition system has been developed in JET for off-line ELM recognition and classification. The empirical method presented in this paper analyzes each individual ELM instead of starting from a temporal segment containing many ELM bursts. The ELM recognition and isolation is carried out using three signals: Dα, line integrated electron density and stored diamagnetic energy. A reduced set of characteristics (such as diamagnetic energy drop, ELM period or Dα shape) has been extracted to build supervised and unsupervised learning systems for classification purposes. The former are based on support vector machines (SVM). The latter have been developed with hierarchical and K-means clustering methods. The success rate of the classification systems is about 98% for a database of almost 300 ELMs.

  15. Automated recognition system for ELM classification in JET

    Energy Technology Data Exchange (ETDEWEB)

    Duro, N. [Dpto. de Informatica y Automatica - UNED, C/ Juan del Rosal 16, 28040 Madrid (Spain)], E-mail: nduro@dia.uned.es; Dormido, R. [Dpto. de Informatica y Automatica - UNED, C/ Juan del Rosal 16, 28040 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, Avd. Complutense 22, 28040 Madrid (Spain); Dormido-Canto, S.; Farias, G.; Sanchez, J.; Vargas, H. [Dpto. de Informatica y Automatica - UNED, C/ Juan del Rosal 16, 28040 Madrid (Spain); Murari, A. [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, I-35127 Padua (Italy)

    2009-06-15

    Edge localized modes (ELMs) are instabilities occurring in the edge of H-mode plasmas. Considerable efforts are being devoted to understanding the physics behind this non-linear phenomenon. A first characterization of ELMs is usually their identification as type I or type III. An automated pattern recognition system has been developed in JET for off-line ELM recognition and classification. The empirical method presented in this paper analyzes each individual ELM instead of starting from a temporal segment containing many ELM bursts. The ELM recognition and isolation is carried out using three signals: Dα, line integrated electron density and stored diamagnetic energy. A reduced set of characteristics (such as diamagnetic energy drop, ELM period or Dα shape) has been extracted to build supervised and unsupervised learning systems for classification purposes. The former are based on support vector machines (SVM). The latter have been developed with hierarchical and K-means clustering methods. The success rate of the classification systems is about 98% for a database of almost 300 ELMs.

  16. Towards automated classification of intensive care nursing narratives.

    Science.gov (United States)

    Hiissa, Marketta; Pahikkala, Tapio; Suominen, Hanna; Lehtikunnas, Tuija; Back, Barbro; Karsten, Helena; Salanterä, Sanna; Salakoski, Tapio

    2006-01-01

    Nursing narratives are an important part of patient documentation, but the possibilities to utilize them in the direct care process are limited due to the lack of proper tools. One solution to facilitate the utilization of narrative data could be to classify them according to their content. In this paper, we addressed two issues related to designing an automated classifier: domain experts' agreement on the content of the classes into which the data are to be classified, and the ability of the machine-learning algorithm to perform the classification on an acceptable level. The data we used were a set of Finnish intensive care nursing narratives. By using Cohen's kappa, we assessed the agreement of three nurses on the content of the classes Breathing, Blood Circulation and Pain, and by using the area under ROC curve (AUC), we measured the ability of the Least Squares Support Vector Machine (LS-SVM) algorithm to learn the classification patterns of the nurses. On average, the values of kappa were around 0.8. The agreement was highest in the class Blood Circulation, and lowest in the class Breathing. The LS-SVM algorithm was able to learn the classification patterns of the three nurses on an acceptable level; the values of AUC were generally around 0.85. Our results indicate that one way to develop electronic patient records could be tools that handle the free text in nursing documentation.

  17. Towards automated classification of intensive care nursing narratives.

    Science.gov (United States)

    Hiissa, Marketta; Pahikkala, Tapio; Suominen, Hanna; Lehtikunnas, Tuija; Back, Barbro; Karsten, Helena; Salanterä, Sanna; Salakoski, Tapio

    2007-12-01

    Nursing narratives are an important part of patient documentation, but the possibilities to utilize them in the direct care process are limited due to the lack of proper tools. One solution to facilitate the utilization of narrative data could be to classify them according to their content. Our objective is to address two issues related to designing an automated classifier: domain experts' agreement on the content of classes Breathing, Blood Circulation and Pain, as well as the ability of a machine-learning-based classifier to learn the classification patterns of the nurses. The data we used were a set of Finnish intensive care nursing narratives, and we used the regularized least-squares (RLS) algorithm for the automatic classification. The agreement of the nurses was assessed by using Cohen's kappa, and the performance of the algorithm was measured using area under ROC curve (AUC). On average, the values of kappa were around 0.8. The agreement was highest in the class Blood Circulation, and lowest in the class Breathing. The RLS algorithm was able to learn the classification patterns of the three nurses on an acceptable level; the values of AUC were generally around 0.85. Our results indicate that the free text in nursing documentation can be automatically classified and this can offer a way to develop electronic patient records.
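    Both evaluation steps map onto standard tooling. A minimal sketch, with scikit-learn's RidgeClassifier as a regularized least-squares stand-in for the RLS algorithm and hypothetical labels and features:

```python
# Sketch: inter-rater agreement (Cohen's kappa) and AUC of a regularized
# least-squares classifier (RidgeClassifier as an RLS stand-in); data is toy.
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.linear_model import RidgeClassifier

nurse_a = [1, 1, 0, 1, 0, 0, 1, 1]         # hypothetical per-narrative labels
nurse_b = [1, 0, 0, 1, 0, 1, 1, 1]
print("kappa:", cohen_kappa_score(nurse_a, nurse_b))

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 5))             # stand-in narrative features
clf = RidgeClassifier(alpha=1.0).fit(X, nurse_a)
print("AUC:", roc_auc_score(nurse_a, clf.decision_function(X)))
```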

  18. Automated Classification of ROSAT Sources Using Heterogeneous Multiwavelength Source Catalogs

    Science.gov (United States)

    McGlynn, Thomas; Suchkov, A. A.; Winter, E. L.; Hanisch, R. J.; White, R. L.; Ochsenbein, F.; Derriere, S.; Voges, W.; Corcoran, M. F.

    2004-01-01

    We describe an on-line system for automated classification of X-ray sources, ClassX, and present preliminary results of classification of the three major catalogs of ROSAT sources, RASS BSC, RASS FSC, and WGACAT, into six class categories: stars, white dwarfs, X-ray binaries, galaxies, AGNs, and clusters of galaxies. ClassX is based on a machine learning technology. It represents a system of classifiers, each classifier consisting of a considerable number of oblique decision trees. These trees are built as the classifier is 'trained' to recognize various classes of objects using a training sample of sources of known object types. Each source is characterized by a preselected set of parameters, or attributes; the same set is then used as the classifier conducts classification of sources of unknown identity. The ClassX pipeline features an automatic search for X-ray source counterparts among heterogeneous data sets in on-line data archives using Virtual Observatory protocols; it retrieves from those archives all the attributes required by the selected classifier and inputs them to the classifier. The user input to ClassX is typically a file with target coordinates, optionally complemented with target IDs. The output contains the class name, attributes, and class probabilities for all classified targets. We discuss ways to characterize and assess the classifier quality and performance and present the respective validation procedures. Based on both internal and external validation, we conclude that the ClassX classifiers yield reasonable and reliable classifications for ROSAT sources and have the potential to broaden class representation significantly for rare object types.

  19. ROLE OF DATA MINING CLASSIFICATION TECHNIQUE IN SOFTWARE DEFECT PREDICTION

    OpenAIRE

    Dr.A.R.Pon Periyasamy; Mrs A.Misbahulhuda

    2017-01-01

    Software defect prediction is the process of locating defective modules in software. Software quality is a field of study and practice that describes the desirable attributes of software products: performance should be excellent, without any defects. Software quality metrics are a subset of software metrics that focus on the quality aspects of the product, process, and project. A software defect prediction model helps in early detection of defects and contributes to t...

  20. Automated vegetation classification using Thematic Mapper Simulation data

    Science.gov (United States)

    Nedelman, K. S.; Cate, R. B.; Bizzell, R. M.

    1983-01-01

    The present investigation is concerned with the results of a study of Thematic Mapper Simulation (TMS) data. One objective of the study was to evaluate the usefulness of the Thematic Mapper's (TM) improved spatial resolution and spectral coverage. The study was undertaken as part of the preparation for the efficient incorporation of Landsat 4 data into ongoing technology development in remote sensing, and included an application of automated Landsat vegetation classification technology to TMS data. Comparisons of TMS data to Multispectral Scanner (MSS) data indicate that field definition, crop type discrimination, and subsequent proportion estimation may all be greatly improved with the availability of TM data.

  1. Automated detection and classification for craters based on geometric matching

    Science.gov (United States)

    Chen, Jian-qing; Cui, Ping-yuan; Cui, Hui-tao

    2011-08-01

    Crater detection and classification are critical elements for planetary mission preparations and landing site selection. This paper presents a methodology for the automated detection and matching of craters in images of planetary surfaces such as the Moon, Mars, and asteroids. Because craters are usually bowl-shaped depressions, they can be modelled as circles or circular arcs during the landing phase. Based on the hypothesis that detected crater edges are related to craters in a template by translation, rotation, and scaling, the proposed matching method uses circles to fit crater edges, and aligns circular-arc edges from the image of the target body with circular features contained in a model. The approach includes edge detection, edge grouping, reference point detection, and geometric circle model matching. Finally, we simulate a planetary surface to test the reasonableness and effectiveness of the proposed method.
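    The circle-fitting step maps naturally onto a Hough-style detector. A minimal sketch with OpenCV, as a generic stand-in for the edge-grouping and geometric-matching stages described above; the file name and parameter values are illustrative:

```python
# Sketch: detecting circular crater rims with a Hough transform (OpenCV), a
# generic stand-in for the edge-grouping/matching pipeline described above.
import cv2
import numpy as np

img = cv2.imread("surface.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image path
assert img is not None, "replace surface.png with a real image"
img = cv2.medianBlur(img, 5)                           # suppress speckle noise

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=60)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"crater candidate at ({x}, {y}), radius {r} px")
```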

  2. Classification of white maize defects with multispectral imaging.

    Science.gov (United States)

    Sendin, Kate; Manley, Marena; Williams, Paul J

    2018-03-15

    Multispectral imaging with object-wise multivariate image analysis was evaluated for its potential to grade whole white maize kernels. The types of defective materials regarded in grading legislation were divided into 13 classes, and were imaged with a multispectral imaging instrument spanning the UV, visible and NIR regions (19 wavelengths ranging from 375 to 970 nm). Object-wise partial least squares discriminant analysis (PLS-DA) models were developed and validated with an independent data set. Results demonstrated good performance in distinguishing between sound maize and undesirable materials, with cross-validated coefficients of determination (Q²) and classification accuracies ranging from 0.35 to 0.99 and 83 to 100%, respectively. Wavelengths related to absorbance of green, yellow and orange colour indicated the presence of lycopene and anthocyanin (505, 525, 570 and 590 nm). NIR wavelengths 890, 940 nm (associated with fat) and 970 nm (associated with water) were generally identified as important features throughout the study. Copyright © 2017 Elsevier Ltd. All rights reserved.
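    scikit-learn has no native PLS-DA, but the usual construction, PLS regression onto one-hot class indicators followed by an argmax, is a few lines. A sketch under assumed data shapes (19-band spectra, 13 classes):

```python
# Sketch of object-wise PLS-DA: PLS regression onto one-hot class labels,
# prediction by argmax (common PLS-DA construction; shapes are illustrative).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.random((60, 19))                   # 19-band mean spectra per kernel
y = rng.integers(0, 13, size=60)           # 13 defect classes
Y = np.eye(13)[y]                          # one-hot indicator matrix

pls = PLSRegression(n_components=10).fit(X, Y)
pred = pls.predict(X).argmax(axis=1)       # class with largest predicted score
print("training accuracy:", (pred == y).mean())
```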

  3. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    Science.gov (United States)

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453

  4. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    Science.gov (United States)

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of the supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
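    At its core, the GMM variant of the pipeline is unsupervised clustering of per-voxel features. A minimal sketch with scikit-learn, using random stand-in data for the anatomical MR channels and omitting the tissue-probability-map postprocessing:

```python
# Sketch: unsupervised voxel clustering with a Gaussian Mixture Model, the
# core of the GMM variant above (postprocessing steps omitted; data is toy).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
voxels = rng.standard_normal((5000, 4))     # stand-in: T1, T1c, T2, FLAIR values

gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
labels = gmm.fit_predict(voxels)            # candidate tissue/tumour classes
print(np.bincount(labels))                  # voxels per class
```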

  5. Fully Automated Sunspot Detection and Classification Using SDO HMI Imagery in MATLAB

    Science.gov (United States)

    2014-03-27

    Thesis, Air Force Institute of Technology: Gordon M. Spahr, Second Lieutenant, USAF. Report AFIT-ENP-14-M-34. Not subject to copyright protection in the United States; approved for public release, distribution unlimited.

  6. Automated cell type discovery and classification through knowledge transfer.

    Science.gov (United States)

    Lee, Hao-Chih; Kosoy, Roman; Becker, Christine E; Dudley, Joel T; Kidd, Brian A

    2017-06-01

    Recent advances in mass cytometry allow simultaneous measurements of up to 50 markers at single-cell resolution. However, the high dimensionality of mass cytometry data introduces computational challenges for automated data analysis and hinders translation of new biological understanding into clinical applications. Previous studies have applied machine learning to facilitate processing of mass cytometry data. However, manual inspection is still inevitable and becoming the barrier to reliable large-scale analysis. We present a new algorithm called Automated Cell-type Discovery and Classification (ACDC) that fully automates the classification of canonical cell populations and highlights novel cell types in mass cytometry data. Evaluations on real-world data show ACDC provides accurate and reliable estimations compared to manual gating results. Additionally, ACDC automatically classifies previously ambiguous cell types to facilitate discovery. Our findings suggest that ACDC substantially improves both reliability and interpretability of results obtained from high-dimensional mass cytometry profiling data. A Python package (Python 3) and analysis scripts for reproducing the results are available at https://bitbucket.org/dudleylab/acdc. brian.kidd@mssm.edu or joel.dudley@mssm.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  7. Automated Terrestrial EMI Emitter Detection, Classification, and Localization

    Science.gov (United States)

    Stottler, R.; Ong, J.; Gioia, C.; Bowman, C.; Bhopale, A.

    Clear operating spectrum at ground station antenna locations is critically important for communicating with, commanding, controlling, and maintaining the health of satellites. Electro Magnetic Interference (EMI) can interfere with these communications, so it is extremely important to track down and eliminate sources of EMI. The Terrestrial RFI-locating Automation with CasE based Reasoning (TRACER) system is being implemented to automate terrestrial EMI emitter localization and identification to improve space situational awareness, reduce manpower requirements, dramatically shorten EMI response time, enable the system to evolve without programmer involvement, and support adversarial scenarios such as jamming. The operational version of TRACER is being implemented and applied with real data (power versus frequency over time) for both satellite communication antennas and sweeping Direction Finding (DF) antennas located near them. This paper presents the design and initial implementation of TRACER’s investigation data management, automation, and data visualization capabilities. TRACER monitors DF antenna signals and detects and classifies EMI using neural network technology, trained on past cases of both normal communications and EMI events. When EMI events are detected, an Investigation Object is created automatically. The user interface facilitates the management of multiple investigations simultaneously. Using a variant of the Friis transmission equation, emissions data is used to estimate and plot the emitter’s locations over time for comparison with current flights. The data is also displayed on a set of five linked graphs to aid in the perception of patterns spanning power, time, frequency, and bearing. Based on details of the signal (its classification, direction, and strength, etc.), TRACER retrieves one or more cases of EMI investigation methodologies which are represented as graphical behavior transition networks (BTNs). These BTNs can be edited easily
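    The localization step rests on the Friis transmission equation, P_r = P_t G_t G_r (λ/4πd)², inverted for distance. A sketch of that inversion in its idealized free-space form; the paper states only that a variant of the equation is used, and the numbers below are hypothetical:

```python
# Sketch: inverting the free-space Friis equation for emitter range.
# Idealized form only; TRACER is described as using a variant of this equation.
import math

def friis_range(pt_w, gt, gr, pr_w, freq_hz):
    """Distance d from transmit/receive powers (W) and antenna gains (linear)."""
    lam = 3e8 / freq_hz                       # wavelength in metres
    return (lam / (4 * math.pi)) * math.sqrt(pt_w * gt * gr / pr_w)

# Hypothetical numbers: 1 W emitter, unity gains, -90 dBm received at 2.2 GHz.
pr_w = 10 ** ((-90 - 30) / 10)                # dBm -> W
print(f"estimated range: {friis_range(1.0, 1.0, 1.0, pr_w, 2.2e9):.0f} m")
```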

  8. Automated authorship attribution using advanced signal classification techniques.

    Directory of Open Access Journals (Sweden)

    Maryam Ebrahimpour

    Full Text Available In this paper, we develop two automated authorship attribution schemes, one based on Multiple Discriminant Analysis (MDA and the other based on a Support Vector Machine (SVM. The classification features we exploit are based on word frequencies in the text. We adopt an approach of preprocessing each text by stripping it of all characters except a-z and space. This is in order to increase the portability of the software to different types of texts. We test the methodology on a corpus of undisputed English texts, and use leave-one-out cross validation to demonstrate classification accuracies in excess of 90%. We further test our methods on the Federalist Papers, which have a partly disputed authorship and a fair degree of scholarly consensus. And finally, we apply our methodology to the question of the authorship of the Letter to the Hebrews by comparing it against a number of original Greek texts of known authorship. These tests identify where some of the limitations lie, motivating a number of open questions for future work. An open source implementation of our methodology is freely available for use at https://github.com/matthewberryman/author-detection.

  9. Cognitive high speed defect detection and classification in MWIR images of laser welding

    Science.gov (United States)

    Lapido, Yago L.; Rodriguez-Araújo, Jorge; García-Díaz, Antón; Castro, Gemma; Vidal, Félix; Romero, Pablo; Vergara, Germán.

    2015-07-01

    We present a novel approach for real-time defect detection and classification in laser welding processes based on the use of uncooled PbSe image sensors working in the MWIR range. The spatial evolution of the melt pool was recorded and analyzed during several welding procedures. A machine learning approach was developed to classify welding defects. Principal components analysis (PCA) is used for dimensionality reduction of the melt pool data. This enhances classification results and enables on-line classification rates close to 1 kHz with non-optimized code prototyped in Python. These results point to the feasibility of real-time defect detection.
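    The dimensionality-reduction step is standard PCA over flattened melt-pool frames. A minimal sketch; frame size and component count are illustrative assumptions:

```python
# Sketch: PCA dimensionality reduction of melt-pool frames before
# classification (frame size and component count are illustrative).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
frames = rng.random((500, 32, 32))          # stand-in MWIR melt-pool frames
X = frames.reshape(len(frames), -1)         # flatten each frame to a vector

pca = PCA(n_components=10).fit(X)
Z = pca.transform(X)                        # low-dimensional features
print("explained variance:", pca.explained_variance_ratio_.sum())
```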

  10. The method of diagnosis and classification of the gingival line defects of the teeth hard tissues

    Directory of Open Access Journals (Sweden)

    Olena Bulbuk

    2017-06-01

    Full Text Available In solving the problem of diagnosing and treating hard tissue defects, the choice of tactics for dental treatment of defects located at the gingival line of a tooth plays a significant role. This work aims to study the problems of diagnosis and classification of gingival line defects of the teeth's hard tissues, which will contribute to objective, differentiated diagnostic and therapeutic approaches to the dental treatment of the various clinical variants of these defects' localization. The objective of the study is to develop an anatomical-functional classification for the differentiated assessment of hard tissue defects in the gingival part, as the basis for applying differentiated diagnostic-therapeutic approaches to the dental treatment of hard tissue defects located in the gingival part of any tooth. Materials and methods: An examination of 48 patients with hard tissue defects located in the gingival part of a tooth was conducted. A periodontal probe and X-ray examination were used to assess the magnitude of gingival line destruction. Results: As a result of the performed research, a classification of gingival line defects of the hard tissues was proposed, using an exponent whose value is equal to an integer number expressing, in millimeters, the distance from the epithelial attachment to the bottom of the defect cavity. Conclusions: The proposed classification fills an obvious gap in academic understanding of hard tissue defects located in the gingival part of a tooth, and offers the prospect of consensus on differentiated diagnostic-therapeutic approaches for the different clinical variants of localization. This classification builds a methodological "bridge of continuity" between therapeutic and prosthetic dentistry in the treatment of gingival line defects of dental hard tissues.

  11. Automated classification of Acid Rock Drainage potential from Corescan drill core imagery

    Science.gov (United States)

    Cracknell, M. J.; Jackson, L.; Parbhakar-Fox, A.; Savinova, K.

    2017-12-01

    Classification of the acid forming potential of waste rock is important for managing environmental hazards associated with mining operations. Current methods for the classification of acid rock drainage (ARD) potential usually involve labour intensive and subjective assessment of drill core and/or hand specimens. Manual methods are subject to operator bias and human error, and the amount of material that can be assessed within a given time frame is limited. The automated classification of ARD potential documented here is based on the ARD Index developed by Parbhakar-Fox et al. (2011). This ARD Index involves the combination of five indicators: A - sulphide content; B - sulphide alteration; C - sulphide morphology; D - primary neutraliser content; and E - sulphide mineral association. Several components of the ARD Index require accurate identification of sulphide minerals. This is achieved by classifying Corescan Red-Green-Blue true colour images into the presence or absence of sulphide minerals using supervised classification. Subsequently, sulphide classification images are processed and combined with Corescan SWIR-based mineral classifications to obtain information on sulphide content, indices representing sulphide textures (disseminated versus massive and degree of veining), and spatially associated minerals. This information is combined to calculate ARD Index indicator values that feed into the classification of ARD potential. Automated ARD potential classifications of drill core samples associated with a porphyry Cu-Au deposit are compared to manually derived classifications and those obtained by standard static geochemical testing and X-ray diffractometry analyses. Results indicate a high degree of similarity between automated and manual ARD potential classifications. Major differences between approaches are observed in sulphide and neutraliser mineral percentages, likely due to the subjective nature of manual estimates of mineral content. The automated approach

  12. Comparison of an automated classification system with an empirical classification of circulation patterns over the Pannonian basin, Central Europe

    Science.gov (United States)

    Maheras, Panagiotis; Tolika, Konstantia; Tegoulias, Ioannis; Anagnostopoulou, Christina; Szpirosz, Klicász; Károssy, Csaba; Makra, László

    2018-04-01

    The aim of the study is to compare the performance of two classification methods based on the atmospheric circulation types over the Pannonian basin in Central Europe. Relationships including seasonal occurrences and correlation coefficients, as well as comparative diagrams of the seasonal occurrences of the circulation types of the two classification systems, are presented. When comparing the automated (objective) and empirical (subjective) classification methods, it was found that the frequency of the empirical anticyclonic (cyclonic) types is much higher (lower) than that of the automated anticyclonic (cyclonic) types, both on an annual and seasonal basis. The highest and statistically significant correlations between the circulation types of the two classification systems, as well as those between the cumulated seasonal anticyclonic and cyclonic types, occur in winter for both classifications, since the weather-influencing effect of the atmospheric circulation is most prevalent in this season. Precipitation amounts in Budapest display a decreasing trend, in accordance with the decrease in the occurrence of the automated cyclonic types; in contrast, the occurrence of the empirical cyclonic types displays an increasing trend. Some types in a given classification are usually accompanied by high ratios of certain types in the other classification.

  13. Yarn-dyed fabric defect classification based on convolutional neural network

    Science.gov (United States)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of the yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method by using a convolutional neural network (CNN) based on a modified AlexNet. CNN shows powerful ability in performing feature extraction and fusion by simulating the learning mechanism of human brain. The local response normalization layers in AlexNet are replaced by the batch normalization layers, which can enhance both the computational efficiency and classification accuracy. In the training process of the network, the characteristics of the defect are extracted step by step and the essential features of the image can be obtained from the fusion of the edge details with several convolution operations. Then the max-pooling layers, the dropout layers, and the fully connected layers are employed in the classification model to reduce the computation cost and extract more precise features of the defective fabric. Finally, the results of the defect classification are predicted by the softmax function. The experimental results show promising performance with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.
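    The key architectural change described, batch normalization in place of AlexNet's local response normalization, is easy to illustrate. A shallow PyTorch sketch follows, not the paper's full modified AlexNet; layer sizes and class count are assumptions:

```python
# Sketch: conv blocks with batch normalization replacing AlexNet's LRN, plus
# dropout and a softmax head (shallow illustrative network, not the paper's
# full modified AlexNet).
import torch
import torch.nn as nn

class FabricDefectNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.BatchNorm2d(32),
            nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.BatchNorm2d(64),
            nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.LazyLinear(n_classes),          # logits; softmax at inference
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = FabricDefectNet()(torch.randn(4, 3, 128, 128))
probs = torch.softmax(logits, dim=1)           # predicted class probabilities
print(probs.shape)
```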

  14. Development of an intelligent ultrasonic welding defect classification software

    International Nuclear Information System (INIS)

    Song, Sung Jin; Kim, Hak Joon; Jeong, Hee Don

    1997-01-01

    Ultrasonic pattern recognition is the most effective approach to the problem of discriminating types of flaws in weldments based on ultrasonic flaw signals. In spite of significant progress in research on this methodology, it has not been widely used in many practical ultrasonic inspections of weldments in industry. Hence, for the convenient application of this approach in many practical situations, we developed an intelligent ultrasonic signature classification software package which can discriminate types of flaws in weldments based on their ultrasonic signals, using various tools in artificial intelligence such as neural networks. This software shows excellent performance in an experimental problem where flaws in weldments are classified into the two categories of cracks and non-cracks. This performance demonstrates the software's strong potential as a practical tool for ultrasonic flaw classification in weldments.

  15. Semi-Automated Classification of Seafloor Data Collected on the Delmarva Inner Shelf

    Science.gov (United States)

    Sweeney, E. M.; Pendleton, E. A.; Brothers, L. L.; Mahmud, A.; Thieler, E. R.

    2017-12-01

    We tested automated classification methods on acoustic bathymetry and backscatter data collected by the U.S. Geological Survey (USGS) and National Oceanic and Atmospheric Administration (NOAA) on the Delmarva inner continental shelf to efficiently and objectively identify sediment texture and geomorphology. Automated classification techniques are generally less subjective and take significantly less time than manual classification methods. We used a semi-automated process combining unsupervised and supervised classification techniques to characterize the seafloor based on bathymetric slope and relative backscatter intensity. Statistical comparison of our automated classification results with those of a manual classification conducted on a subset of the acoustic imagery indicates that our automated method was highly accurate (95% total accuracy and 93% Kappa). Our methods resolve sediment ridges, zones of flat seafloor, and areas of high and low backscatter. We compared our classification scheme with mean grain size statistics of samples collected in the study area and found strong correlations between backscatter intensity and sediment texture. High backscatter zones are associated with the presence of gravel and shells mixed with sand, and low backscatter areas are primarily clean sand or sand mixed with mud. Slope classes further elucidate textural and geomorphologic differences in the seafloor, such that steep slopes (>0.35°) with high backscatter are most often associated with the updrift side of sand ridges and bedforms, whereas low slopes with high backscatter correspond to coarse lag or shell deposits. Low backscatter and high slopes are most often found on the downdrift side of ridges and bedforms, and low backscatter and low slopes identify swale areas and sand sheets. We found that poor acoustic data quality was the most significant cause of inaccurate classification results, which required additional user input to mitigate. Our method worked well

  16. Experimental Study of the Effect of Internal Defects on Stress Waves during Automated Fiber Placement

    Directory of Open Access Journals (Sweden)

    Zhenyu Han

    2018-04-01

    Full Text Available Current defect detection techniques for automated fiber placement (AFP) are limited to offline inspection and the online detection of surface defects. The characteristics of stress waves can be effectively applied to identify and detect internal defects in a material structure. However, the correlation mechanism between stress waves and internal defects during the AFP process remains unclear. This paper proposes a novel experimental method to measure stress waves, in which the continuous loading induced by the process itself is used as the excitation source, without any external excitation. Twenty-seven groups of thermosetting prepreg laminates were manufactured under different processing parameters to obtain different void contents. In order to quantitatively estimate the void content in the prepreg structure, the relation between void content and the ultrasonic attenuation coefficient was modelled using an A-scan ultrasonic flaw detector and photographic methods with an optical microscope. Furthermore, the high-frequency noise of the stress waves was removed using the Haar wavelet transform. The peaks, the Manhattan distance, and the mean stress during the laying process were analyzed and evaluated. The conclusions of this paper could provide theoretical support for online real-time detection of internal defects based on stress wave characteristics.
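    The denoising step, removing high-frequency noise with the Haar wavelet transform, can be sketched with PyWavelets: decompose, threshold the detail coefficients, reconstruct. The soft-threshold rule and decomposition level below are assumptions, not the paper's exact settings:

```python
# Sketch: Haar-wavelet denoising of a stress-wave signal (decompose,
# soft-threshold detail coefficients, reconstruct); parameters illustrative.
import numpy as np
import pywt

def haar_denoise(signal, level=4):
    coeffs = pywt.wavedec(signal, "haar", level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, "haar")

t = np.linspace(0, 1, 1024)                             # toy damped burst
noisy = (np.sin(40 * t) * np.exp(-3 * t)
         + 0.2 * np.random.default_rng(6).standard_normal(1024))
clean = haar_denoise(noisy)
```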

  17. Reconstruction of road defects and road roughness classification using vehicle responses with artificial neural networks simulation

    CSIR Research Space (South Africa)

    Ngwangwa, HM

    2010-04-01

    Journal of Terramechanics, Volume 47, Issue 2, April 2010, Pages 97-111, by H.M. Ngwangwa, P.S. Heyns, et al.

  18. Qualitative properties of roasting defect beans and development of its classification methods by hyperspectral imaging technology.

    Science.gov (United States)

    Cho, Jeong-Seok; Bae, Hyung-Jin; Cho, Byoung-Kwan; Moon, Kwang-Deog

    2017-04-01

    Qualitative properties of roasting defect coffee beans and their classification methods were studied using hyperspectral imaging (HSI). The roasting defect beans were divided into 5 groups: medium roasting (Cont), under-developed (RD-1), over-roasting (RD-2), interior under-developed (RD-3), and interior scorching (RD-4). The following qualitative properties were assayed: browning index (BI), moisture content (MC), and chlorogenic acid (CA), trigonelline (TG), and caffeine (CF) content. Their HSI spectra (1000-1700 nm) were also analysed to develop classification methods for roasting defect beans. RD-2 showed the highest BI and the lowest MC, CA, and TG content. The accuracy of the partial least-squares discriminant classification model was 86.2%. The most powerful wavelength for classifying the defective beans was approximately 1420 nm (related to the O-H bond). The HSI reflectance values at 1420 nm showed a similar tendency to MC, enabling the use of this technology to classify roasting defect beans. Copyright © 2016. Published by Elsevier Ltd.
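
    A PLS discriminant analysis of the kind reported above can be sketched as follows, assuming scikit-learn and synthetic stand-in spectra. PLS-DA is commonly implemented as PLS regression on one-hot class labels; this is a generic illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import label_binarize

# Synthetic stand-ins for 1000-1700 nm spectra of 5 roast groups.
rng = np.random.default_rng(0)
n_per_class, n_bands, classes = 40, 256, np.arange(5)
X = np.vstack([rng.normal(c * 0.1, 1.0, (n_per_class, n_bands)) for c in classes])
y = np.repeat(classes, n_per_class)

Y = label_binarize(y, classes=classes)          # one-hot targets for PLS-DA
pls = PLSRegression(n_components=10).fit(X, Y)  # latent-variable regression
y_pred = pls.predict(X).argmax(axis=1)          # class = largest response
print("training accuracy:", (y_pred == y).mean())
```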

  19. Simple Fully Automated Group Classification on Brain fMRI

    Energy Technology Data Exchange (ETDEWEB)

    Honorio, J.; Goldstein, R.; Honorio, J.; Samaras, D.; Tomasi, D.; Goldstein, R.Z.

    2010-04-14

    We propose a simple, well-grounded classification technique suited for group classification on brain fMRI data sets that have high dimensionality, small numbers of subjects, high noise levels, high subject variability and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results on two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.

  20. Simple Fully Automated Group Classification on Brain fMRI

    International Nuclear Information System (INIS)

    Honorio, J.; Goldstein, R.; Samaras, D.; Tomasi, D.; Goldstein, R.Z.

    2010-01-01

    We propose a simple, well-grounded classification technique suited for group classification on brain fMRI data sets that have high dimensionality, small numbers of subjects, high noise levels, high subject variability and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results on two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
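
    The majority-vote step described in the abstract can be illustrated generically: one very simple classifier per feature, with predictions combined by vote. This sketch uses depth-1 decision trees on synthetic data as stand-ins and does not reproduce the threshold-split region feature selection itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# One depth-1 "stump" per feature, echoing the one-feature-per-condition,
# simple-classifier philosophy of the abstract.
stumps = [DecisionTreeClassifier(max_depth=1).fit(Xtr[:, [j]], ytr)
          for j in range(Xtr.shape[1])]
votes = np.stack([s.predict(Xte[:, [j]]) for j, s in enumerate(stumps)])
majority = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote
print("accuracy:", (majority == yte).mean())
```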

  1. Automated Diatom Classification (Part B): A Deep Learning Approach

    Directory of Open Access Journals (Sweden)

    Anibal Pedraza

    2017-05-01

    Diatoms, algal microorganisms comprising many species, are quite useful for water quality determination, one of the hottest topics in applied biology nowadays. At the same time, deep learning and convolutional neural networks (CNNs) are becoming an extensively used technique for image classification in a variety of problems. This paper approaches diatom classification with this technique in order to determine whether it is suitable for solving the classification problem. An extensive dataset was specifically collected (80 types, 100 samples/type) for this study. The dataset covers different illumination conditions and was computationally augmented to more than 160,000 samples. After that, CNNs were applied to datasets pre-processed with different image processing techniques. An overall accuracy of 99% is obtained for the 80-class problem and different kinds of images (brightfield, normalized). Results were compared to previously presented classification techniques with different numbers of samples. As far as the authors know, this is the first time that CNNs have been applied to diatom classification.
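
    A minimal CNN of the kind applied above can be sketched in PyTorch. The layer sizes, input resolution and channel counts are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal CNN for an 80-class image problem (128x128 grayscale)."""
    def __init__(self, n_classes=80):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 1, 128, 128))  # batch of 4 dummy images
print(logits.shape)                          # torch.Size([4, 80])
```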

  2. Classification of Structure Defects of Metal Matrix Castings with Saturated Reinforcement

    Directory of Open Access Journals (Sweden)

    Gawdzińska K.

    2012-09-01

    The definition of a composite [1] describes an ideal composite material with a perfect structure. In real composite materials, the structure is usually imperfect - composites contain various types of defects [2, 3-5], especially where cast composites are concerned. The reason for this is the specific structure of castings, related to the course of the manufacturing process. In the case of metal matrix composite castings, especially those manufactured by saturation, no classification of these defects exists [2, 4]. The classification of defects in castings of classic materials (cast iron, cast steel, non-ferrous alloys) is insufficient and requires completion with defects specific to these materials. This problem (noted during the manufacturing of metal matrix composite castings with saturated reinforcement at the Institute of Basic Technical Sciences of the Maritime University of Szczecin) became the reason for starting work aimed at creating such a classification. As a result, this paper was prepared. It can contribute to improving the quality of the studied materials and, as a consequence, the level of environmental protection.

  3. Classification of Structure Defects of Metal Matrix Castings with Saturated Reinforcement

    Directory of Open Access Journals (Sweden)

    K. Gawdzińska

    2012-09-01

    The definition of a composite [1] describes an ideal composite material with a perfect structure. In real composite materials, the structure is usually imperfect - composites contain various types of defects [2, 3-5], especially where cast composites are concerned. The reason for this is the specific structure of castings, related to the course of the manufacturing process. In the case of metal matrix composite castings, especially those manufactured by saturation, no classification of these defects exists [2, 4]. The classification of defects in castings of classic materials (cast iron, cast steel, non-ferrous alloys) is insufficient and requires completion with defects specific to these materials. This problem (noted during the manufacturing of metal matrix composite castings with saturated reinforcement at the Institute of Basic Technical Sciences of the Maritime University of Szczecin) became the reason for starting work aimed at creating such a classification. As a result, this paper was prepared. It can contribute to improving the quality of the studied materials and, as a consequence, the level of environmental protection.

  4. Generating Clustered Journal Maps: An Automated System for Hierarchical Classification

    NARCIS (Netherlands)

    Leydesdorff, L.; Bornmann, L.; Wagner, C.S.

    2017-01-01

    Journal maps and classifications for 11,359 journals listed in the combined Journal Citation Reports 2015 of the Science and Social Sciences Citation Indexes are provided at https://leydesdorff.github.io/journals/ and http://www.leydesdorff.net/jcr15. A routine using VOSviewer for integrating the

  5. How automated image analysis techniques help scientists in species identification and classification?

    Science.gov (United States)

    Yousef Kalafi, Elham; Town, Christopher; Kaur Dhillon, Sarinder

    2017-09-04

    Identification of taxonomy at a specific level is time consuming and reliant upon expert ecologists. Hence, the demand for automated species identification has increased over the last two decades. Automation of data classification is primarily focussed on images; incorporating and analysing image data has recently become easier due to developments in computational technology. Research efforts in species identification include processing specimen images, extracting identifying features, and classifying specimens into the correct categories. In this paper, we discuss recent automated species identification systems, categorizing and evaluating their methods. We reviewed and compared different methods in a step-by-step scheme of automated identification and classification systems for species images. The selection of methods is influenced by many variables, such as the level of classification, the amount of training data and the complexity of the images. The aim of this paper is to provide researchers and scientists with an extensive background study on work related to automated species identification, focusing on pattern recognition techniques for building such systems for biodiversity studies.

  6. Automated otolith image classification with multiple views: an evaluation on Sciaenidae.

    Science.gov (United States)

    Wong, J Y; Chu, C; Chong, V C; Dhillon, S K; Loh, K H

    2016-08-01

    Combined multiple 2D views (proximal, anterior and ventral aspects) of the sagittal otolith are proposed here as a method to capture shape information for fish classification. Classification performance of a single view compared with combined 2D views shows improved classification accuracy for the latter, for nine species of Sciaenidae. The effects of shape description methods (shape indices, Procrustes analysis and elliptical Fourier analysis) on classification performance were evaluated. Procrustes analysis and elliptical Fourier analysis perform better than shape indices when a single view is considered, but all perform equally well with combined views. A generic content-based image retrieval (CBIR) system that ranks dissimilarity (Procrustes distance) of otolith images was built to search query images without the need for detailed information on the side (left or right), aspect (proximal or distal) and direction (positive or negative) of the otolith. Methods for the development of this automated classification system are discussed. © 2016 The Fisheries Society of the British Isles.
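
    The Procrustes-distance ranking at the core of the CBIR system can be sketched with SciPy. The landmark outlines here are random stand-ins for otolith contours; only the ranking mechanism is illustrated.

```python
import numpy as np
from scipy.spatial import procrustes

def procrustes_distance(a, b):
    """Disparity after optimal translation, scaling and rotation."""
    _, _, disparity = procrustes(a, b)
    return disparity

# Toy (n_landmarks, 2) outlines standing in for otolith contours.
rng = np.random.default_rng(1)
query = rng.normal(size=(50, 2))
gallery = [rng.normal(size=(50, 2)) for _ in range(10)]

# CBIR-style ranking: sort the gallery by dissimilarity to the query.
ranking = sorted(range(len(gallery)),
                 key=lambda i: procrustes_distance(query, gallery[i]))
print("most similar items first:", ranking[:3])
```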

  7. Automated Tissue Classification Framework for Reproducible Chronic Wound Assessment

    OpenAIRE

    Mukherjee, Rashmi; Manohar, Dhiraj Dhane; Das, Dev Kumar; Achar, Arun; Mitra, Analava; Chakraborty, Chandan

    2014-01-01

    The aim of this paper was to develop a computer-assisted tissue classification (granulation, necrotic, and slough) scheme for chronic wound (CW) evaluation using medical image processing and statistical machine learning techniques. The red-green-blue (RGB) wound images captured by an ordinary digital camera were first transformed into the HSI (hue, saturation, and intensity) color space, and the “S” component of the HSI color channels was subsequently selected as it provided higher contrast. Wound areas fro...
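
    The saturation-channel extraction step can be approximated as follows, using scikit-image's HSV conversion as a stand-in for the HSI transform described in the abstract (the two color spaces define saturation slightly differently), with an Otsu threshold as a crude segmentation example.

```python
import numpy as np
from skimage import color, filters

def saturation_channel(rgb_image):
    """Saturation component, here via HSV as a stand-in for HSI."""
    return color.rgb2hsv(rgb_image)[..., 1]

rgb = np.random.rand(64, 64, 3)        # dummy wound photograph
s = saturation_channel(rgb)
mask = s > filters.threshold_otsu(s)   # crude high-contrast segmentation
print(mask.mean())
```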

  8. Automated morphological classification of galaxies based on projection gradient nonnegative matrix factorization algorithm

    Science.gov (United States)

    Selim, I. M.; Abd El Aziz, Mohamed

    2017-04-01

    The development of automated morphological classification schemes can successfully distinguish between morphological types of galaxies and can be used for studies of the formation and subsequent evolution of galaxies in our universe. In this paper, we present a new automated, supervised machine learning astronomical classification scheme based on the Nonnegative Matrix Factorization algorithm. The scheme distinguishes between types roughly corresponding to Hubble types, such as elliptical, lenticular, spiral, and irregular galaxies. The proposed algorithm was evaluated on two datasets of different sizes (a small dataset containing 110 images and a large dataset containing 700 images). The experimental results show that galaxy images from the EFIGI catalog can be classified automatically with an accuracy of ~93% for the small dataset and ~92% for the large one. These results are in good agreement with visual classifications.
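
    The idea of classifying on nonnegative matrix factorization coefficients can be sketched with scikit-learn. Digit images stand in for galaxy cutouts, and logistic regression stands in for the paper's supervised learner; both are assumptions for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Small nonnegative images (digits) stand in for galaxy cutouts.
X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# NMF compresses each image into nonnegative basis coefficients;
# a supervised classifier is then trained on those coefficients.
clf = make_pipeline(NMF(n_components=30, init="nndsvda", max_iter=500),
                    LogisticRegression(max_iter=1000))
clf.fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```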

  9. An Automated Defect Prediction Framework using Genetic Algorithms: A Validation of Empirical Studies

    Directory of Open Access Journals (Sweden)

    Juan Murillo-Morera

    2016-05-01

    Today, it is common for software projects to collect measurement data through development processes. With these data, defect prediction software can estimate the defect proneness of a software module, with the objective of assisting and guiding software practitioners. With timely and accurate defect predictions, practitioners can focus their limited testing resources on higher risk areas. This paper reports the results of three empirical studies that use an automated genetic defect prediction framework. This framework generates and compares different learning schemes (preprocessing + attribute selection + learning algorithm) and selects the best one using a genetic algorithm, with the objective of estimating the defect proneness of a software module. The first empirical study is a performance comparison of our framework with the most important framework in the literature. The second empirical study is a performance and runtime comparison between our framework and an exhaustive framework. The third empirical study, a sensitivity analysis, is our main contribution in this paper. Performance of the software defect prediction models (measured using AUC, the Area Under the Curve) was validated using the NASA-MDP and PROMISE data sets. Seventeen data sets from NASA-MDP (13) and PROMISE (4) projects were analyzed using N×M-fold cross-validation. A genetic algorithm was used to select the components of the learning schemes automatically, and to assess and report the results. Our results show similar performance between the frameworks, and our framework achieved better runtime than the exhaustive framework. Finally, we report the best configuration according to the sensitivity analysis.
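
    A toy version of a genetic search over learning schemes (preprocessing + attribute selection + learning algorithm) might look like the following. It uses elitism and mutation only (no crossover) over a tiny search space of scikit-learn components; the fitness is cross-validated AUC, as in the abstract, but nothing else here reflects the authors' framework.

```python
import random
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

PREPROC = [StandardScaler(), MinMaxScaler()]
K_BEST = [5, 10, 20]
LEARNERS = [GaussianNB(), DecisionTreeClassifier(random_state=0)]
SIZES = [len(PREPROC), len(K_BEST), len(LEARNERS)]

def fitness(genes):
    """Cross-validated AUC of the learning scheme encoded by `genes`."""
    p, k, l = genes
    scheme = make_pipeline(PREPROC[p], SelectKBest(k=K_BEST[k]), LEARNERS[l])
    return cross_val_score(scheme, X, y, cv=3, scoring="roc_auc").mean()

def mutate(genes):
    """Randomly reassign one of the three scheme components."""
    g = list(genes)
    i = random.randrange(3)
    g[i] = random.randrange(SIZES[i])
    return tuple(g)

random.seed(0)
pop = [tuple(random.randrange(s) for s in SIZES) for _ in range(6)]
for _ in range(5):                                # a few generations
    pop.sort(key=fitness, reverse=True)
    pop = pop[:3] + [mutate(g) for g in pop[:3]]  # elitism + mutation
best = max(pop, key=fitness)
print("best scheme:", best, "AUC:", round(fitness(best), 3))
```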

  10. Magnetic Resonance Imaging Score and Classification System (AMADEUS) for Assessment of Preoperative Cartilage Defect Severity.

    Science.gov (United States)

    Jungmann, Pia M; Welsch, Götz H; Brittberg, Mats; Trattnig, Siegfried; Braun, Sepp; Imhoff, Andreas B; Salzmann, Gian M

    2017-07-01

    Objective: To design a simple magnetic resonance (MR)-based assessment system for quantification of osteochondral defect severity prior to cartilage repair surgery at the knee. Design: The new scoring tool was designed to include 3 different parameters: (1) cartilage defect size, (2) depth/morphology of the cartilage defect, and (3) subchondral bone quality, resulting in a specific 3-digit code. A clearly defined numeric score was developed, resulting in a final score of 0 to 100, with defect severity grades I through IV. For intra- and interobserver agreement, defects were assessed by 2 independent readers on preoperative knee MR images of n = 44 subjects who subsequently received cartilage repair surgery. For statistical analyses, mean values ± standard deviation (SD), intraclass correlation coefficients (ICC), and linear weighted kappa values were calculated. Results: The mean total Area Measurement And DEpth & Underlying Structures (AMADEUS) score was 48 ± 24 (range, 0-85). The mean defect size was 2.8 ± 2.6 cm². Thirty-six of 44 defects were full-thickness defects. The subchondral bone showed defects in 21 of 44 cases. Kappa values for intraobserver reliability ranged between 0.82 and 0.94, and for interobserver reliability between 0.38 and 0.85. Kappa values for the AMADEUS grade were 0.75 and 0.67 for intra- and interobserver agreement, respectively. ICC scores for the AMADEUS total score were 0.97 and 0.96 for intra- and interobserver agreement, respectively. Conclusions: The AMADEUS score and classification system allows reliable severity encoding, scoring and grading of osteochondral defects on knee MR images, and is easily applicable in daily clinical practice.
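
    The linear weighted kappa statistic used for reader agreement can be computed with scikit-learn; the two grade sequences below are hypothetical, invented for the example.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical AMADEUS grades (I-IV encoded as 1-4) from two readers.
reader1 = [1, 2, 2, 3, 4, 3, 2, 1, 4, 3]
reader2 = [1, 2, 3, 3, 4, 2, 2, 1, 4, 4]

# Linear weighting penalizes a I-vs-IV disagreement more than I-vs-II.
kappa = cohen_kappa_score(reader1, reader2, weights="linear")
print(f"linear weighted kappa: {kappa:.2f}")
```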

  11. Intelligence Package Development for UT Signal Pattern Recognition and Application to Classification of Defects in Austenitic Stainless Steel Weld

    International Nuclear Information System (INIS)

    Lee, Kang Yong; Kim, Joon Seob

    1996-01-01

    Research on the classification of artificial defects in welded parts was performed using ultrasonic signal pattern recognition technology. A signal pattern recognition package including user-defined functions was developed to perform digital signal processing, feature extraction, feature selection and classifier selection. The neural network classifier and statistical classifiers, such as the linear discriminant function classifier and the empirical Bayesian classifier, are compared and discussed. The pattern recognition technique was applied to the classification of artificial defects such as notches and a hole. If appropriately trained, the neural network classifier is concluded to be better than the statistical classifiers in the classification of artificial defects.

  12. Automated Classification of Phonological Errors in Aphasic Language

    Science.gov (United States)

    Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.

    1984-01-01

    Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represents a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification; it provides a prototype simulation tool for neurolinguistic research; and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.

  13. Automated functional classification of experimental and predicted protein structures

    Directory of Open Access Journals (Sweden)

    Samudrala Ram

    2006-06-01

    Background: Proteins that are similar in sequence or structure may perform different functions in nature. In such cases, function cannot be inferred from sequence or structural similarity. Results: We analyzed experimental structures belonging to the Structural Classification of Proteins (SCOP) database and showed that about half of them belong to multi-functional fold families for which protein similarity alone is not adequate to assign function. We also analyzed predicted structures from the LiveBench and the PDB-CAFASP experiments and showed that accurate homology-based functional assignments cannot be achieved approximately one third of the time, when the protein is a member of a multi-functional fold family. We then conducted extended performance evaluation and comparisons on both experimental and predicted structures using our Functional Signatures from Structural Alignments (FSSA) algorithm that we previously developed to handle the problem of classifying proteins belonging to multi-functional fold families. Conclusion: The results indicate that the FSSA algorithm has better accuracy when compared to homology-based approaches for functional classification of both experimental and predicted protein structures, in part due to its use of local, as opposed to global, information for classifying function. The FSSA algorithm has also been implemented as a webserver and is available at http://protinfo.compbio.washington.edu/fssa.

  14. Comparison of ultrasonic image features with echodynamic curves for defect classification and characterization

    Science.gov (United States)

    Zhang, Jie; Wedge, Sam; Rogerson, Allan; Drinkwater, Bruce

    2015-03-01

    Ultrasonic array imaging and multi-probe pulse-echo inspection are two common ultrasonic techniques used for defect detection, classification and characterization in non-destructive evaluation. Compared to multi-probe pulse-echo inspection, ultrasonic array imaging offers advantages such as higher resolution images and the need for fewer measurements. However, it is also limited by a lack of industry-approved inspection procedures and standards. In this paper, several artificial planar and volumetric weld defects of different orientations and locations, embedded in 60 mm thick welded ferritic test specimens, were measured using both ultrasonic arrays and multiple single-crystal probes. The resultant total focusing method (TFM) images and echodynamic curves for each defect were compared, and the results demonstrate the correlations between TFM image features and echodynamic curve characteristics. Combining the analysis of multi-probe pulse-echo inspection data and ultrasonic array images offers better classification and characterization of defects. These findings benefit the further development of industrial ultrasonic array inspection procedures and encourage the uptake of TFM technology within industry.

  15. Automated classification of tailed bacteriophages according to their neck organization.

    Science.gov (United States)

    Lopes, Anne; Tavares, Paulo; Petit, Marie-Agnès; Guérois, Raphaël; Zinn-Justin, Sophie

    2014-11-27

    The genetic diversity observed among bacteriophages remains a major obstacle for the identification of homologs and the comparison of their functional modules. In the structural module, although several classes of homologous proteins contributing to the head and tail structure can be detected, proteins of the head-to-tail connection (or neck) are generally more divergent. Yet, molecular analyses of a few tailed phages belonging to different morphological classes suggested that only a limited number of structural solutions are used in order to produce a functional virion. To challenge this hypothesis and analyze protein diversity at the virion neck, we developed a specific computational strategy to cope with sequence divergence in phage proteins. We searched for homologs of a set of proteins encoded in the structural module using a phage learning database. We show that, using a combination of iterative profile-profile comparison and gene context analyses, we can identify a set of head, neck and tail proteins in most tailed bacteriophages of our database. Classification of phages based on neck protein sequences delineates 4 Types corresponding to known morphological subfamilies. Further analysis of the most abundant Type 1 yields 10 Clusters characterized by consistent sets of head, neck and tail proteins. We developed Virfam, a webserver that automatically identifies proteins of the phage head-neck-tail module and assigns phages to the most closely related cluster of phages. This server was tested against 624 new phages from the NCBI database. 93% of the tailed and unclassified phages could be assigned to our head-neck-tail based categories, thus highlighting the large representativeness of the identified virion architectures. Types and Clusters delineate consistent subgroups of Caudovirales, which correlate with several virion properties. Our method and webserver have the capacity to automatically classify most tailed phages, detect their structural module, assign a

  16. A package for the automated classification of periodic variable stars

    Science.gov (United States)

    Kim, Dae-Won; Bailer-Jones, Coryn A. L.

    2016-03-01

    We present a machine learning package for the classification of periodic variable stars. Our package is intended to be general: it can classify any single-band optical light curve comprising at least a few tens of observations covering durations from weeks to years, with arbitrary time sampling. We use light curves of periodic variable stars taken from OGLE and EROS-2 to train the model. To make our classifier relatively survey-independent, it is trained on 16 features extracted from the light curves (e.g., period, skewness, Fourier amplitude ratio). The model classifies light curves into one of seven superclasses - δ Scuti, RR Lyrae, Cepheid, Type II Cepheid, eclipsing binary, long-period variable, non-variable - as well as subclasses of these, such as ab, c, d, and e types for RR Lyraes. When trained to give only superclasses, our model achieves 0.98 for both recall and precision as measured on an independent validation dataset (on a scale of 0 to 1). When trained to give subclasses, it achieves 0.81 for both recall and precision. The majority of misclassifications of the subclass model are caused by confusion within a superclass rather than between superclasses. To assess the classification performance of the subclass model, we applied it to the MACHO, LINEAR, and ASAS periodic variables, which gave recall/precision of 0.92/0.98, 0.89/0.96, and 0.84/0.88, respectively. We also applied the subclass model to Hipparcos periodic variable stars of many other variability types that do not exist in our training set, in order to examine how much those types degrade the classification performance of our target classes. In addition, we investigated how the performance varies with the number of data points and the duration of observations. We find that recall and precision do not vary significantly if there are more than 80 data points and the duration is more than a few weeks. The classifier software of the subclass model is available (in Python) from the GitHub repository (http
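
    Extracting survey-independent features such as period, skewness and an amplitude ratio can be sketched with SciPy. The Lomb-Scargle periodogram handles arbitrary time sampling; the ratio line is a crude stand-in for the paper's Fourier amplitude ratio, and the light curve is synthetic.

```python
import numpy as np
from scipy.signal import lombscargle
from scipy.stats import skew

def light_curve_features(t, mag):
    """Period (via Lomb-Scargle), skewness, and a peak-power ratio."""
    freqs = np.linspace(0.01, 10.0, 5000)        # angular frequencies
    power = lombscargle(t, mag - mag.mean(), freqs)
    period = 2 * np.pi / freqs[power.argmax()]
    top = np.sort(power)[::-1]
    ratio = top[1] / top[0]                      # crude amplitude-ratio stand-in
    return np.array([period, skew(mag), ratio])

# Toy irregularly sampled sinusoidal light curve, period 3.7 days.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, 300))
mag = 12 + 0.3 * np.sin(2 * np.pi * t / 3.7) + 0.02 * rng.normal(size=t.size)
print(light_curve_features(t, mag))
```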

  17. A graphical automated detection system to locate hardwood log surface defects using high-resolution three-dimensional laser scan data

    Science.gov (United States)

    Liya Thomas; R. Edward Thomas

    2011-01-01

    We have developed an automated defect detection system and a state-of-the-art Graphical User Interface (GUI) for hardwood logs. The algorithm identifies defects at least 0.5 inch high and at least 3 inches in diameter on barked hardwood log and stem surfaces. To summarize defect features and to build a knowledge base, hundreds of defects were measured, photographed, and...

  18. Automated validation of patient safety clinical incident classification: macro analysis.

    Science.gov (United States)

    Gupta, Jaiprakash; Patrick, Jon

    2013-01-01

    Patient safety is the buzzword in healthcare. An Incident Information Management System (IIMS) is electronic software that stores clinical mishap narratives from the places where patients are treated. It is estimated that in one state alone over one million electronic text documents are available in IIMS. In this paper we investigate the data density available in the fields entered to notify an incident and the validity of the built-in classification used by clinicians to categorize the incidents. The Waikato Environment for Knowledge Analysis (WEKA) software was used to test the classes. Four statistical classifiers based on the J48, Naïve Bayes (NB), Naïve Bayes Multinomial (NBM) and Support Vector Machine with radial basis function (SVM_RBF) algorithms were used to validate the classes. The data pool was 10,000 clinical incidents drawn from 7 hospitals in one state in Australia. In the first part of the study, 1,000 clinical incidents were selected to determine the type and number of fields worth investigating, and in the second part another 5,448 clinical incidents were randomly selected to validate 13 clinical incident types. Results show that 74.6% of the cells were empty and only 23 fields had content over 70% of the time. The percentage of correctly classified instances across the four algorithms ranged from 42% to 49% using the categorical dataset, from 65% to 77% using the free-text dataset, and from 72% to 79% using both datasets. The kappa statistic ranged from 0.36 to 0.40 for categorical data, from 0.61 to 0.74 for free text, and from 0.67 to 0.77 for both datasets. Similar increases in performance across the 3 experiments were noted for true positive rate, precision, F-measure and area under the receiver operating characteristic (ROC) curve (AUC). The study demonstrates that only 14 of the 73 fields in IIMS have data that is usable for machine learning experiments. Irrespective of the type of algorithm used, performance was better when all datasets were used. The NBM classifier showed the best performance. We think the

  19. Automated classification of cell morphology by coherence-controlled holographic microscopy.

    Science.gov (United States)

    Strbkova, Lenka; Zicha, Daniel; Vesely, Pavel; Chmelik, Radim

    2017-08-01

    In the last few years, classification of cells by machine learning has become frequently used in biology. However, most of the approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass. This may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy, enabling quantitative phase imaging, to the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, while employing several supervised machine learning algorithms. Most of the classifiers provided higher performance when quantitative phase features were employed. Based on the results, it can be concluded that the quantitative phase features played an important role in improving the performance of the classification. The methodology could be a valuable help in refining the monitoring of live cells in an automated fashion. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all preconditions for the accurate automated analysis of live cell behavior while enabling noninvasive label-free imaging with sufficient contrast and high spatiotemporal phase sensitivity. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  20. Automated database-guided expert-supervised orientation for immunophenotypic diagnosis and classification of acute leukemia.

    Science.gov (United States)

    Lhermitte, L; Mejstrikova, E; van der Sluijs-Gelling, A J; Grigore, G E; Sedek, L; Bras, A E; Gaipa, G; Sobral da Costa, E; Novakova, M; Sonneveld, E; Buracchi, C; de Sá Bacelar, T; Te Marvelde, J G; Trinquand, A; Asnafi, V; Szczepanski, T; Matarraz, S; Lopez, A; Vidriales, B; Bulsa, J; Hrusak, O; Kalina, T; Lecrevisse, Q; Martin Ayuso, M; Brüggemann, M; Verde, J; Fernandez, P; Burgos, L; Paiva, B; Pedreira, C E; van Dongen, J J M; Orfao, A; van der Velden, V H J

    2017-11-01

    Precise classification of acute leukemia (AL) is crucial for adequate treatment. EuroFlow has previously designed an AL orientation tube (ALOT) to guide towards the relevant classification panel (T-cell acute lymphoblastic leukemia (T-ALL), B-cell precursor (BCP)-ALL and/or acute myeloid leukemia (AML)) and final diagnosis. We have now built a reference database with 656 typical AL samples (145 T-ALL, 377 BCP-ALL, 134 AML), processed and analyzed via standardized protocols. Using principal component analysis (PCA)-based plots and automated classification algorithms for direct comparison of single cells from individual patients against the database, another 783 cases were subsequently evaluated. Depending on the database-guided results, patients were categorized as: (i) typical T, B or Myeloid without, or (ii) with, a transitional component to another lineage; (iii) atypical; or (iv) mixed-lineage. Using this automated algorithm, the right panel was selected in 781/783 cases (99.7%), and data comparable to the final WHO diagnosis were already provided in >93% of cases (85% T-ALL, 97% BCP-ALL, 95% AML and 87% mixed-phenotype AL patients), even without data from the full-characterization panels. Our results show that database-guided analysis facilitates standardized interpretation of ALOT results and allows accurate selection of the relevant classification panels, hence providing a solid basis for designing future WHO AL classifications. Leukemia advance online publication, 1 December 2017; doi:10.1038/leu.2017.313.

  1. Automated classification of cell morphology by coherence-controlled holographic microscopy

    Science.gov (United States)

    Strbkova, Lenka; Zicha, Daniel; Vesely, Pavel; Chmelik, Radim

    2017-08-01

    In the last few years, classification of cells by machine learning has become frequently used in biology. However, most of the approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass. This may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy, enabling quantitative phase imaging, to the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, while employing several supervised machine learning algorithms. Most of the classifiers provided higher performance when quantitative phase features were employed. Based on the results, it can be concluded that the quantitative phase features played an important role in improving the performance of the classification. The methodology could be a valuable help in refining the monitoring of live cells in an automated fashion. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all preconditions for the accurate automated analysis of live cell behavior while enabling noninvasive label-free imaging with sufficient contrast and high spatiotemporal phase sensitivity.

  2. Automated color classification of urine dipstick image in urine examination

    Science.gov (United States)

    Rahmat, R. F.; Royananda; Muchtar, M. A.; Taqiuddin, R.; Adnan, S.; Anugrahwaty, R.; Budiarto, R.

    2018-03-01

    Urine examination using a urine dipstick has long been used to determine a person's health status. The economy and convenience of the urine dipstick are among the reasons it is still used to check health status. In practice, urine dipsticks are generally read manually by comparing them visually with a reference color chart, which leads to perceptual differences in reading the examination results. In this research, the authors used a scanner to obtain the urine dipstick color image. A scanner is one possible solution for reading urine dipstick results because the light it produces is consistent. A method is required to overcome the problems of matching urine dipstick colors against the test reference colors, which has so far been done manually. The authors propose Euclidean distance and Otsu thresholding, along with RGB color feature extraction, to match the colors on the urine dipstick with the standard reference colors of the urine examination. The results show that the proposed approach was able to classify the colors on a urine dipstick with an accuracy of 95.45%. The accuracy of color classification on the urine dipstick against the standard reference colors is influenced by the scanner resolution: the higher the resolution, the higher the accuracy.
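
    The Euclidean-distance color matching described above reduces to a nearest-reference search in RGB space. The reference colors and labels below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical reference chart entries (label -> mean RGB on the chart).
REFERENCE = {
    "negative": (95, 168, 160),
    "trace":    (110, 160, 120),
    "positive": (120, 140, 60),
}

def classify_pad(rgb):
    """Assign a pad to the reference color with the smallest
    Euclidean distance in RGB space."""
    rgb = np.asarray(rgb, dtype=float)
    dists = {label: np.linalg.norm(rgb - np.asarray(ref, dtype=float))
             for label, ref in REFERENCE.items()}
    return min(dists, key=dists.get)

print(classify_pad((112, 158, 118)))  # -> "trace"
```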

  3. Improving automated case finding for ectopic pregnancy using a classification algorithm

    Science.gov (United States)

    Scholes, D.; Yu, O.; Raebel, M.A.; Trabert, B.; Holt, V.L.

    2011-01-01

    BACKGROUND: Research and surveillance work addressing ectopic pregnancy often rely on diagnosis and procedure codes available from automated data sources. However, the use of these codes may result in misclassification of cases. Our aims were to evaluate the accuracy of standard ectopic pregnancy codes and, through the use of additional automated data, to develop and validate a classification algorithm that could potentially improve the accuracy of ectopic pregnancy case identification. METHODS: Using automated databases from two US managed-care plans, Group Health Cooperative (GH) and Kaiser Permanente Colorado (KPCO), we sampled women aged 15-44 with an ectopic pregnancy diagnosis or procedure code from 2001 to 2007 and verified their true case status through medical record review. We calculated positive predictive values (PPV) for code-selected cases compared with true cases at both sites. Using additional variables from the automated databases and classification and regression tree (CART) analysis, we developed a case-finding algorithm at GH (n = 280), which was validated at KPCO (n = 500). RESULTS: Compared with true cases, the PPV of code-selected cases was 68 and 81% at GH and KPCO, respectively. The case-finding algorithm identified three predictors: ≥2 visits with an ectopic pregnancy code within 180 days; International Classification of Diseases, 9th Revision, Clinical Modification codes for tubal pregnancy; and methotrexate treatment. Relative to true cases, performance measures for the development and validation sets, respectively, were: 93 and 95% sensitivity; 81 and 81% specificity; 91 and 96% PPV; 84 and 79% negative predictive value. Misclassification proportions were 32% in the development set and 19% in the validation set when using standard codes; they were 11 and 8%, respectively, when using the algorithm. CONCLUSIONS: The ectopic pregnancy algorithm improved case-finding accuracy over use of standard codes alone and generalized well to a
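
    A CART-style tree over the three identified predictors can be sketched with scikit-learn; the binary flags and outcome below are simulated, so the resulting tree is illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Simulated binary flags mirroring the abstract's three predictors:
# [>=2 coded visits within 180 days, tubal-pregnancy code, methotrexate].
rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(400, 3))
y = ((X.sum(axis=1) + rng.normal(0, 0.5, 400)) >= 2).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[
    "visits_ge2_180d", "tubal_code", "methotrexate"]))
```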

  4. A Fully Automated Classification for Mapping the Annual Cropland Extent

    Science.gov (United States)

    Waldner, F.; Defourny, P.

    2015-12-01

    Mapping the global cropland extent is of paramount importance for food security. Indeed, accurate and reliable information on cropland and the location of major crop types is required to make future policy, investment, and logistical decisions, as well as for production monitoring. Timely cropland information directly feeds early warning systems such as GIEWS and FEWS NET. In Africa, and particularly in the arid and semi-arid region, food security is at the center of debate (at least 10% of the population remains undernourished) and accurate cropland estimation is a challenge. Space-borne Earth observation provides opportunities for global cropland monitoring in a spatially explicit, economic, efficient, and objective fashion. In both agricultural monitoring and climate modelling, cropland maps serve as masks to isolate agricultural land for (i) time-series analysis for crop condition monitoring and (ii) investigating how cropland responds to climatic evolution. A large diversity of mapping strategies, ranging from the local to the global scale and associated with various degrees of accuracy, can be found in the literature. At the global scale, despite efforts, cropland is generally one of the classes with the poorest accuracy, which limits its use for agricultural applications. This research aims at improving cropland delineation from the local scale to the regional and global scales, as well as allowing near-real-time updates. To that aim, five temporal features were designed to target the key characteristics of crop spectral-temporal behavior. To ensure a high degree of automation, training data are extracted from available baseline land cover maps. The method delivers cropland maps with high accuracy over contrasted agro-systems in Ukraine, Argentina, China and Belgium. The accuracies reached are comparable to those obtained with classifiers trained on in-situ data. Besides, it was found that the cropland class is associated with a low uncertainty. The temporal features

  5. Automated processing of webcam images for phenological classification.

    Directory of Open Access Journals (Sweden)

    Ludwig Bothmann

    Along with global climate change, there is increasing interest in its effect on phenological patterns such as the start and end of the growing season. Scientific digital webcams are used for this purpose, taking one or more images per day of the same natural scene, showing for example trees or grassland sites. To derive phenological patterns from the webcam images, regions of interest are manually defined on these images by an expert, and subsequently a time series of percentage greenness is derived and analyzed with respect to structural changes. While this standard approach leads to satisfying results and allows dates of phenological change points to be determined, it is associated with a considerable amount of manual work and is therefore constrained to a limited number of webcams only. In particular, this prevents applying the phenological analysis to a large network of publicly accessible webcams in order to capture spatial phenological variation. In order to be able to scale up the analysis to several hundreds or thousands of webcams, we propose and evaluate two automated alternatives for the definition of regions of interest, allowing for efficient analyses of webcam images. A semi-supervised approach selects pixels based on the correlation of the pixels' time series of percentage greenness with a few prototype pixels. An unsupervised approach clusters pixels based on scores of a singular value decomposition. We show for a scientific webcam that the resulting regions of interest are at least as informative as those chosen by an expert, with the advantage that no manual action is required. Additionally, we show that the methods can even be applied to publicly available webcams accessed via the internet, yielding interesting partitions of the analyzed images. Finally, we show that the methods are suitable for the intended big data applications by analyzing 13,988 webcams from the AMOS database. All developed methods are implemented in the
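
    The percentage-greenness time series and the semi-supervised, correlation-based pixel selection can be sketched in a few lines of NumPy; the correlation threshold and prototype pixel are illustrative assumptions.

```python
import numpy as np

def percentage_greenness(images):
    """Per-pixel G / (R + G + B) for a (T, H, W, 3) image stack."""
    images = images.astype(float)
    return images[..., 1] / images.sum(axis=-1).clip(min=1e-9)

def select_roi(green, prototype_yx, r_min=0.9):
    """Keep pixels whose greenness time series correlates strongly
    with the series of a single prototype pixel."""
    T, H, W = green.shape
    proto = green[:, prototype_yx[0], prototype_yx[1]]
    flat = green.reshape(T, -1)
    pc, fc = proto - proto.mean(), flat - flat.mean(axis=0)
    corr = (pc @ fc) / (np.linalg.norm(pc) * np.linalg.norm(fc, axis=0) + 1e-9)
    return (corr >= r_min).reshape(H, W)

# Dummy stack: 50 daily webcam frames of a 20x20 RGB scene.
stack = np.random.rand(50, 20, 20, 3)
roi = select_roi(percentage_greenness(stack), prototype_yx=(5, 5))
print(roi.sum(), "pixels selected")
```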

  6. Automated classification of inflammation in colon histological sections based on digital microscopy and advanced image analysis.

    Science.gov (United States)

    Ficsor, Levente; Varga, Viktor Sebestyén; Tagscherer, Attila; Tulassay, Zsolt; Molnar, Bela

    2008-03-01

    Automated and quantitative histological analysis can improve diagnostic efficacy in colon sections. Our objective was to develop a parameter set for the automated classification of aspecific colitis, ulcerative colitis, and Crohn's disease using digital slides, tissue cytometric parameters, and virtual microscopy. Routinely processed hematoxylin-and-eosin-stained histological sections from specimens that showed normal mucosa (24 cases), aspecific colitis (11 cases), ulcerative colitis (25 cases), and Crohn's disease (9 cases) diagnosed by conventional optical microscopy were scanned and digitized at high resolution (0.24 μm/pixel). Thirty-eight cytometric parameters based on morphometry were determined on cells, glands, and superficial epithelium. Fourteen tissue cytometric parameters based on ratios of tissue compartments were counted as well. Leave-one-out discriminant analysis was used for classification of the sample groups. Cellular morphometric features showed no significant differences in these benign colon alterations. However, gland-related morphological differences (Gland Shape) among normal mucosa, ulcerative colitis, and aspecific colitis were found, and several tissue cytometric parameters showed significant differences; the most discriminative parameters were the ratio of cell number in glands to that in the whole slide and the biopsy/gland surface ratio. These differences resulted in 88% overall accuracy in the classification. Crohn's disease could be discriminated in only 56%. Automated virtual microscopy can be used to classify colon mucosa as normal, ulcerative colitis, and aspecific colitis with reasonable accuracy. Further development of dedicated parameters is necessary to identify Crohn's disease on digital slides. Copyright 2008 International Society for Analytical Cytology.

  7. Methods for pattern selection, class-specific feature selection and classification for automated learning.

    Science.gov (United States)

    Roy, Asim; Mackin, Patrick D; Mukhopadhyay, Somnath

    2013-05-01

    This paper presents methods for training pattern (prototype) selection, class-specific feature selection and classification for automated learning. For training pattern selection, we propose a method of sampling that extracts a small number of representative training patterns (prototypes) from the dataset. The idea is to extract a set of prototype training patterns that represents each class region in a classification problem. In class-specific feature selection, we try to find a separate feature set for each class such that it is the best one to separate that class from the other classes. We then build a separate classifier for that class based on its own feature set. The paper also presents a new hypersphere classification algorithm. Hypersphere nets are similar to radial basis function (RBF) nets and belong to the group of kernel function nets. Polynomial time complexity of the methods is proven. Polynomial time complexity of learning algorithms is important to the field of neural networks. Computational results are provided for a number of well-known datasets. None of the parameters of the algorithm were fine-tuned for any of the problems solved, which supports the idea of automating learning methods. Automation of learning is crucial to wider deployment of learning technologies. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Prototype semantic infrastructure for automated small molecule classification and annotation in lipidomics.

    Science.gov (United States)

    Chepelev, Leonid L; Riazanov, Alexandre; Kouznetsov, Alexandre; Low, Hong Sang; Dumontier, Michel; Baker, Christopher J O

    2011-07-26

    The development of high-throughput experimentation has led to astronomical growth in biologically relevant lipids and lipid derivatives identified, screened, and deposited in numerous online databases. Unfortunately, efforts to annotate, classify, and analyze these chemical entities have largely remained in the hands of human curators using manual or semi-automated protocols, leaving many novel entities unclassified. Since chemical function is often closely linked to structure, accurate structure-based classification and annotation of chemical entities is imperative to understanding their functionality. As part of an exploratory study, we have investigated the utility of semantic web technologies in automated chemical classification and annotation of lipids. Our prototype framework consists of two components: an ontology and a set of federated web services that operate upon it. The formal lipid ontology we use here extends a part of the LiPrO ontology and draws on the lipid hierarchy in the LIPID MAPS database, as well as literature-derived knowledge. The federated semantic web services that operate upon this ontology are deployed within the Semantic Annotation, Discovery, and Integration (SADI) framework. Structure-based lipid classification is enacted by two core services. Firstly, a structural annotation service detects and enumerates relevant functional groups for a specified chemical structure. A second service reasons over lipid ontology class descriptions using the attributes obtained from the annotation service and identifies the appropriate lipid classification. We extend the utility of these core services by combining them with additional SADI services that retrieve associations between lipids and proteins and identify publications related to specified lipid types. We analyze the performance of SADI-enabled eicosanoid classification relative to the LIPID MAPS classification and reflect on the contribution of our integrative methodology in the context of

  9. Prototype semantic infrastructure for automated small molecule classification and annotation in lipidomics

    Directory of Open Access Journals (Sweden)

    Dumontier Michel

    2011-07-01

    Background: The development of high-throughput experimentation has led to astronomical growth in biologically relevant lipids and lipid derivatives identified, screened, and deposited in numerous online databases. Unfortunately, efforts to annotate, classify, and analyze these chemical entities have largely remained in the hands of human curators using manual or semi-automated protocols, leaving many novel entities unclassified. Since chemical function is often closely linked to structure, accurate structure-based classification and annotation of chemical entities is imperative to understanding their functionality. Results: As part of an exploratory study, we have investigated the utility of semantic web technologies in automated chemical classification and annotation of lipids. Our prototype framework consists of two components: an ontology and a set of federated web services that operate upon it. The formal lipid ontology we use here extends a part of the LiPrO ontology and draws on the lipid hierarchy in the LIPID MAPS database, as well as literature-derived knowledge. The federated semantic web services that operate upon this ontology are deployed within the Semantic Annotation, Discovery, and Integration (SADI) framework. Structure-based lipid classification is enacted by two core services. Firstly, a structural annotation service detects and enumerates relevant functional groups for a specified chemical structure. A second service reasons over lipid ontology class descriptions using the attributes obtained from the annotation service and identifies the appropriate lipid classification. We extend the utility of these core services by combining them with additional SADI services that retrieve associations between lipids and proteins and identify publications related to specified lipid types. We analyze the performance of SADI-enabled eicosanoid classification relative to the LIPID MAPS classification and reflect on the contribution of

  10. Automated classification of dolphin echolocation click types from the Gulf of Mexico.

    Science.gov (United States)

    Frasier, Kaitlin E; Roch, Marie A; Soldevilla, Melissa S; Wiggins, Sean M; Garrison, Lance P; Hildebrand, John A

    2017-12-01

    Delphinids produce large numbers of short duration, broadband echolocation clicks which may be useful for species classification in passive acoustic monitoring efforts. A challenge in echolocation click classification is to overcome the many sources of variability to recognize underlying patterns across many detections. An automated unsupervised network-based classification method was developed to simulate the approach a human analyst uses when categorizing click types: Clusters of similar clicks were identified by incorporating multiple click characteristics (spectral shape and inter-click interval distributions) to distinguish within-type from between-type variation, and identify distinct, persistent click types. Once click types were established, an algorithm for classifying novel detections using existing clusters was tested. The automated classification method was applied to a dataset of 52 million clicks detected across five monitoring sites over two years in the Gulf of Mexico (GOM). Seven distinct click types were identified, one of which is known to be associated with an acoustically identifiable delphinid (Risso's dolphin) and six of which are not yet identified. All types occurred at multiple monitoring locations, but the relative occurrence of types varied, particularly between continental shelf and slope locations. Automatically-identified click types from autonomous seafloor recorders without verifiable species identification were compared with clicks detected on sea-surface towed hydrophone arrays in the presence of visually identified delphinid species. These comparisons suggest potential species identities for the animals producing some echolocation click types. The network-based classification method presented here is effective for rapid, unsupervised delphinid click classification across large datasets in which the click types may not be known a priori.

  11. Towards more reliable automated multi-dose dispensing: retrospective follow-up study on medication dose errors and product defects.

    Science.gov (United States)

    Palttala, Iida; Heinämäki, Jyrki; Honkanen, Outi; Suominen, Risto; Antikainen, Osmo; Hirvonen, Jouni; Yliruusi, Jouko

    2013-03-01

    To date, little is known about the applicability of different types of pharmaceutical dosage forms in an automated high-speed multi-dose dispensing process. The purpose of the present study was to identify and further investigate various process-induced and/or product-related limitations associated with the multi-dose dispensing process. The rates of product defects and dose dispensing errors in automated multi-dose dispensing were retrospectively investigated during a 6-month follow-up period. The study was based on the analysis of process data from a total of nine automated high-speed multi-dose dispensing systems. Special attention was paid to the dependence of multi-dose dispensing errors/product defects on pharmaceutical tablet properties (such as shape, dimensions, weight, scored lines, coatings, etc.) in order to profile the forms of tablets most suitable for automated dose dispensing systems. The relationship between the risk of errors in dose dispensing and tablet characteristics was visualized by creating a principal component analysis (PCA) model for the outcome of dispensed tablets. The two most common process-induced failures identified in multi-dose dispensing are a predisposition to tablet defects and unexpected product transitions in the medication cassette (dose dispensing errors). The tablet defects are product-dependent failures, while the tablet transitions depend on the automated multi-dose dispensing system used. The occurrence of tablet defects is approximately twice as common as tablet transitions. The optimal tablet for high-speed multi-dose dispensing would be a round, relatively small to middle-sized, film-coated tablet without a scored line. Commercial tablet products can be profiled and classified based on their suitability for a high-speed multi-dose dispensing process.

  12. A Neural-Network-Based Semi-Automated Geospatial Classification Tool

    Science.gov (United States)

    Hale, R. G.; Herzfeld, U. C.

    2014-12-01

    North America's largest glacier system, the Bering Bagley Glacier System (BBGS) in Alaska, surged in 2011-2013, as shown by rapid mass transfer, elevation change, and heavy crevassing. Little is known about the physics controlling surge glaciers' semi-cyclic patterns; therefore, it is crucial to collect and analyze as much data as possible so that predictive models can be made. In addition, physical signs frozen in ice in the form of crevasses may help serve as a warning for future surges. The BBGS surge provided an opportunity to develop an automated classification tool for crevasse classification based on imagery collected from small aircraft. The classification allows one to link image classification to geophysical processes associated with ice deformation. The tool uses an approach that employs geostatistical functions and a feed-forward perceptron with error back-propagation. The connectionist-geostatistical approach uses directional experimental (discrete) variograms to parameterize images into a form that the Neural Network (NN) can recognize. In an application to perform analysis on airborne videographic data from the surge of the BBGS, an NN was able to distinguish 18 different crevasse classes with 95 percent or higher accuracy, for over 3,000 images. Recognizing that each surge wave results in different crevasse types and that environmental conditions affect their appearance in imagery, we designed the tool's semi-automated pre-training algorithm to be adaptable. The tool can be optimized to the specific settings and variables of the image analysis: airborne and satellite imagery, different camera types, observation altitude, number and types of classes, and resolution. The generalization of the classification tool brings three important advantages: (1) multiple types of problems in geophysics can be studied, (2) the training process is sufficiently formalized to allow non-experts in neural nets to perform the training process, and (3) the time required to
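
    A directional experimental variogram of the kind used to parameterize the images can be computed as follows; the lattice-direction convention and the synthetic image patch are assumptions for the example.

```python
import numpy as np

def directional_variogram(image, lags, direction=(0, 1)):
    """Experimental variogram along a lattice direction:
    gamma(h) = 0.5 * mean((z(s) - z(s + h*d))**2)."""
    dy, dx = direction                  # unit offsets; (0, 1) = along rows
    H, W = image.shape
    gamma = []
    for h in lags:
        oy, ox = h * dy, h * dx
        a = image[: H - oy, : W - ox]   # origin block
        b = image[oy:, ox:]             # block shifted by h in direction d
        gamma.append(0.5 * np.mean((a - b) ** 2))
    return np.array(gamma)

# Dummy patch standing in for an aerial crevasse image, correlated along rows.
rng = np.random.default_rng(4)
patch = np.cumsum(rng.normal(size=(64, 64)), axis=1)
print(directional_variogram(patch, lags=range(1, 11)))
```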

  13. Automated Classification of Lung Cancer Types from Cytological Images Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Teramoto, Atsushi; Tsukamoto, Tetsuya; Kiriyama, Yuka; Fujita, Hiroshi

    2017-01-01

    Lung cancer is a leading cause of death worldwide. Currently, in differential diagnosis of lung cancer, accurate classification of cancer types (adenocarcinoma, squamous cell carcinoma, and small cell carcinoma) is required. However, improving the accuracy and stability of diagnosis is challenging. In this study, we developed an automated classification scheme for lung cancers presented in microscopic images using a deep convolutional neural network (DCNN), a major deep learning technique. The DCNN used for classification consists of three convolutional layers, three pooling layers, and two fully connected layers. In evaluation experiments, the DCNN was trained using our original database with a graphics processing unit. Microscopic images were first cropped and resampled to obtain images with a resolution of 256 × 256 pixels; to prevent overfitting, the collected images were augmented via rotation, flipping, and filtering. The probabilities of the three types of cancers were estimated using the developed scheme and its classification accuracy was evaluated using threefold cross validation. Approximately 71% of the images were classified correctly, which is on par with the accuracy of cytotechnologists and pathologists. Thus, the developed scheme is useful for classification of lung cancers from microscopic images.
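
    The layer layout described above can be sketched as follows in Keras; the filter counts, kernel sizes and optimizer are assumptions, since the abstract only fixes the number of convolutional, pooling and fully connected layers.

```python
# Sketch of a DCNN matching the described layout (three convolutional,
# three pooling, two fully connected layers); hyperparameters are assumed.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input((256, 256, 3)),
    layers.Conv2D(32, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 5, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(3, activation="softmax"),  # adeno / squamous / small cell
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```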

  14. Leveraging Long-term Seismic Catalogs for Automated Real-time Event Classification

    Science.gov (United States)

    Linville, L.; Draelos, T.; Pankow, K. L.; Young, C. J.; Alvarez, S.

    2017-12-01

    We investigate the use of labeled event types available through reviewed seismic catalogs to produce automated event labels on new incoming data from the crustal region spanned by the cataloged events. Using events cataloged by the University of Utah Seismograph Stations between October 2012 and June 2017, we calculate the spectrogram for a time window that spans the duration of each event as seen on individual stations, resulting in 110k event spectrograms (50% local earthquake examples, 50% quarry blast examples). Using 80% of the randomized example events (~90k), a classifier is trained to distinguish between local earthquakes and quarry blasts. We explore variations of deep learning classifiers, incorporating elements of convolutional and recurrent neural networks. Using a single-layer Long Short-Term Memory recurrent neural network, we achieve 92% accuracy on the classification task on the remaining ~20k test examples. Leveraging the decisions from a group of stations that detected the same event, by taking the median of all classifications in the group, increases the model accuracy to 96%. Additional, identically processed data from 500 more recently cataloged events (July 2017) achieve the same accuracy as our test data on both single-station examples and multi-station medians, suggesting that the model can maintain accurate and stable classification rates on real-time automated events local to the University of Utah Seismograph Stations, with potentially minimal re-training through time.
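
    A minimal sketch of the described setup, a single-layer LSTM over event spectrograms plus a median vote across stations, is shown below; the spectrogram dimensions and unit count are assumptions.

```python
# Sketch of the single-layer LSTM classifier and the multi-station median
# vote; input shape and layer width are assumed, not the authors' values.
import numpy as np
from tensorflow.keras import layers, models

n_time, n_freq = 128, 64        # spectrogram frames x frequency bins (assumed)
model = models.Sequential([
    layers.Input((n_time, n_freq)),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # quarry blast vs. local earthquake
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Median of per-station probabilities for one event:
station_probs = np.array([0.91, 0.88, 0.45, 0.95])
event_label = int(np.median(station_probs) > 0.5)
print(event_label)
```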

  15. A Novel Method for the Separation of Overlapping Pollen Species for Automated Detection and Classification.

    Science.gov (United States)

    Tello-Mijares, Santiago; Flores, Francisco

    2016-01-01

    The identification of pollen in an automated way will accelerate different tasks and applications of palynology, aiding in, among others, climate change studies, medical allergy calendars, and forensic science. The aim of this paper is to develop a system that automatically captures a hundred microscopic images of pollen and classifies them into the 12 different species from the Lagunera Region, Mexico. Pollen grains often overlap in the microscopic images, which increases the difficulty of automated identification and classification. This paper focuses on a method to segment such overlapping pollen. The proposed method first segments the pollen and then separates overlapping grains using a mean-shift process (100% segmentation) and erosion by H-minima based on the Fibonacci series. The pollen is then characterized by its shape, color, and texture for training and evaluating the performance of three classification techniques: random forest, multilayer perceptron, and Bayes net. Using the newly developed system, we obtained segmentation results of 100% and classification rates above 96.2% recall and 96.1% precision using the multilayer perceptron under twofold cross-validation.
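
    The separation step can be illustrated with an H-suppression watershed, sketched below with scikit-image; trying h values from the Fibonacci series follows the paper's idea, but the surrounding pipeline details are assumptions.

```python
# Sketch of separating touching objects with an H-suppressed watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_maxima
from skimage.measure import label
from skimage.segmentation import watershed

def separate(binary_mask, h):
    # Watershed on the negated distance transform; seeds are the distance
    # maxima that survive H-suppression (H-minima of the negated map).
    distance = ndi.distance_transform_edt(binary_mask)
    seeds = label(h_maxima(distance, h))
    return watershed(-distance, seeds, mask=binary_mask)

yy, xx = np.mgrid[:40, :60]
mask = ((yy - 20) ** 2 + (xx - 20) ** 2 < 144) | \
       ((yy - 20) ** 2 + (xx - 35) ** 2 < 144)   # two overlapping "grains"
for h in (1, 2, 3, 5, 8):                        # Fibonacci-style h values
    print(f"h={h}: {separate(mask, h).max()} objects")
```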

  16. Automated image classification applied to reconstituted human corneal epithelium for the early detection of toxic damage

    Science.gov (United States)

    Crosta, Giovanni Franco; Urani, Chiara; De Servi, Barbara; Meloni, Marisa

    2010-02-01

    For a long time, acute eye irritation has been assessed by means of the Draize rabbit test, the limitations of which are well known. Alternative tests based on in vitro models have been proposed. This work focuses on the "reconstituted human corneal epithelium" (R-HCE), which resembles the corneal epithelium of the human eye in thickness, morphology and marker expression. Testing a substance on R-HCE involves a variety of methods. Here, quantitative morphological analysis is applied to optical microscope images of R-HCE cross sections resulting from exposure to benzalkonium chloride (BAK). The short-term objectives and first results are the analysis and classification of these images. Automated analysis relies on feature extraction by the spectrum-enhancement algorithm, which is made sensitive to anisotropic morphology, and on classification based on principal components analysis. The winning strategy has been the separate analysis of the apical and basal layers, which carry morphological information of different types. R-HCE specimens have been ranked by gross damage. The onset of early damage has been detected, and an R-HCE specimen exposed to a low BAK dose has been singled out from the negative and positive controls. These results provide a proof of principle for the automated classification of the specimens of interest on a purely morphological basis by means of the spectrum-enhancement algorithm.

  17. Classification of Atrial Septal Defect and Ventricular Septal Defect with Documented Hemodynamic Parameters via Cardiac Catheterization by Genetic Algorithms and Multi-Layered Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Mustafa Yıldız

    2012-08-01

    Full Text Available Introduction: We aimed to develop a classification method to discriminate between ventricular septal defect and atrial septal defect using several hemodynamic parameters. Patients and Methods: Forty-three patients (30 atrial septal defect, 13 ventricular septal defect; 26 female, 17 male) with hemodynamic parameters documented via cardiac catheterization were included in the study. Parameters such as blood pressure values in different areas, gender, age and Qp/Qs ratios were used for classification. The parameters used in classification were determined by the divergence analysis method: (i) pulmonary artery diastolic pressure, (ii) Qp/Qs ratio, (iii) right atrium pressure, (iv) age, (v) pulmonary artery systolic pressure, (vi) left ventricular systolic pressure, (vii) aorta mean pressure, (viii) left ventricular diastolic pressure, (ix) aorta diastolic pressure, (x) aorta systolic pressure. These parameters, measured in our study population, were fed into a multi-layered artificial neural network, and the network was trained by a genetic algorithm. Results: The training cluster consists of 14 cases (7 atrial septal defect and 7 ventricular septal defect). The overall success ratio is 79.2%, and with proper training of the artificial neural network this ratio increases up to 89%. Conclusion: Parameters that, in classical methods, need to be determined by the investigator can easily be detected with the help of genetic algorithms. When an artificial neural network is trained by genetic algorithms, both the topology of the network and the network's factors can be determined. During the test stage, elements not included in the training cluster are assigned to the test cluster; as a result of this study, we observed that a multi-layered artificial neural network can be trained properly and that a neural network is a successful method for the intended classification.
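
    A minimal sketch of the core idea, evolving the weights of a small multi-layer network with a genetic algorithm instead of backpropagation, is given below; the network size, GA settings and data are all illustrative assumptions.

```python
# Minimal sketch of training a small MLP's weights with a genetic algorithm;
# 10 input features mirror the 10 selected hemodynamic parameters.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(43, 10))             # 10 hemodynamic features per patient
y = (rng.random(43) > 0.3).astype(float)  # 1 = ASD, 0 = VSD (illustrative)

def predict(w, X):
    # Unpack a flat genome into a 10-5-1 network: W1, b1, W2, b2 (61 genes).
    W1, b1, W2, b2 = w[:50].reshape(10, 5), w[50:55], w[55:60], w[60]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)      # negative MSE

pop = rng.normal(size=(40, 61))                    # initial population
for _ in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]        # selection
    children = parents[rng.integers(0, 20, 20)].copy()
    children += rng.normal(scale=0.1, size=children.shape)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("training accuracy:", np.mean((predict(best, X) > 0.5) == y))
```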

  18. Automated Outcome Classification of Computed Tomography Imaging Reports for Pediatric Traumatic Brain Injury.

    Science.gov (United States)

    Yadav, Kabir; Sarioglu, Efsun; Choi, Hyeong Ah; Cartwright, Walter B; Hinds, Pamela S; Chamberlain, James M

    2016-02-01

    The authors have previously demonstrated highly reliable automated classification of free-text computed tomography (CT) imaging reports using a hybrid system that pairs linguistic (natural language processing) and statistical (machine learning) techniques. Although previously applied to identifying the outcome of orbital fracture in unprocessed radiology reports from a clinical data repository, this performance had not been replicated for more complex outcomes. The objective was to validate the automated outcome classification performance of a hybrid natural language processing (NLP) and machine learning system on brain CT imaging reports. The hypothesis was that the system would show similar performance characteristics for identifying pediatric traumatic brain injury (TBI). This was a secondary analysis of a subset of 2,121 CT reports from the Pediatric Emergency Care Applied Research Network (PECARN) TBI study. For that project, radiologists dictated CT reports as free text, which were then deidentified and scanned as PDF documents. Trained data abstractors manually coded each report for TBI outcome. Text was extracted from the PDF files using optical character recognition. The data set was randomly split evenly into training and testing sets. Training patient reports were used as input to the Medical Language Extraction and Encoding (MedLEE) NLP tool to create structured output containing standardized medical terms and modifiers for negation, certainty, and temporal status. A random subset stratified by site was analyzed using descriptive quantitative content analysis to confirm identification of TBI findings based on the National Institute of Neurological Disorders and Stroke (NINDS) Common Data Elements project. Findings were coded for presence or absence, weighted by frequency of mentions, and past/future/indication modifiers were filtered. After combining with the manual reference standard, a decision tree classifier was created using the data mining tools WEKA 3.7.5 and Salford Predictive Miner 7

  19. VizieR Online Data Catalog: SDSS automated morphology classification (Huertas-Company+, 2011)

    Science.gov (United States)

    Huertas-Company, M.; Aguerri, J. A. L.; Bernardi, M.; Mei, S.; Sanchez Almeida, J.

    2010-11-01

    We used the full SDSS DR7 spectroscopic sample as the starting base. The selection of objects was then based on Sanchez Almeida et al. (2010ApJ...714..487A), who performed an unsupervised automated classification of all the SDSS spectra. Basically, we chose galaxies with redshift below 0.25 and with good photometric data and clean spectra, meaning objects not too close to the edges, not saturated, and properly deblended. The final catalog contains 698420 objects for which we estimate the morphology (also available at http://gepicom04.obspm.fr/sdssmorphology/Morphology2010.html ). (3 data files).

  20. Automated identification of dementia using medical imaging: a survey from a pattern classification perspective.

    Science.gov (United States)

    Zheng, Chuanchuan; Xia, Yong; Pan, Yongsheng; Chen, Jinhu

    2016-03-01

    In this review paper, we summarize the automated dementia identification algorithms in the literature from a pattern classification perspective. Since most of these algorithms consist of both feature extraction and classification, we provide a survey of three categories of feature extraction methods, namely the voxel-, vertex- and ROI-based ones, and four categories of classifiers, namely linear discriminant analysis, Bayes classifiers, support vector machines, and artificial neural networks. We also compare the reported performance of many recently published dementia identification algorithms. Our comparison shows that many algorithms can differentiate Alzheimer's disease (AD) from normal elderly controls with largely satisfactory accuracy, whereas distinguishing mild cognitive impairment from AD or from normal elderly controls still remains a major challenge.

  1. An automated approach to classification of duplex assay for digital droplet PCR.

    Science.gov (United States)

    Liu, Cong; Zhou, Wuping; Zhang, Tao; Jiang, Keming; Li, Haiwen; Dong, Wenfei

    2018-01-25

    In digital polymerase chain reaction (dPCR) detection, discriminating positive droplets from negative ones directly affects the final concentration estimate and is one of the most important factors affecting accuracy. Current automated classification methods usually address single-channel detection, whereas duplex detection experiments are discussed less often. In this paper, we designed a classification method that estimates the upper limit of the negative droplets: the right tail of the negative-droplet distribution is approximated using a generalized Pareto distribution. Furthermore, our method takes fluorescence compensation in duplex assays into account. We demonstrate the method on Bio-Rad's mutant detection dataset. Experimental results show that the method provides similar or better accuracy than other reported algorithms over a wider dynamic range.
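
    The tail-estimation idea can be sketched as follows with SciPy: fit a generalized Pareto distribution to the exceedances of the negative droplets over a high threshold and derive an upper cutoff; the threshold and quantile level are assumptions, not the paper's exact settings.

```python
# Sketch of setting the positive/negative cutoff by fitting a generalized
# Pareto distribution (GPD) to the right tail of negative-droplet amplitudes.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
negatives = rng.normal(1000, 50, 20000)          # stand-in negative droplets

u = np.quantile(negatives, 0.95)                 # tail threshold (assumed)
excess = negatives[negatives > u] - u
c, loc, scale = genpareto.fit(excess, floc=0)    # fit GPD to exceedances

# Amplitude below which ~99.999% of negatives are expected to fall:
# P(exceed u) = 0.05, so the conditional quantile is 1 - 1e-5 / 0.05.
upper_limit = u + genpareto.ppf(1 - 1e-5 / 0.05, c, loc=0, scale=scale)
print(f"cutoff = {upper_limit:.1f}; droplets above it are called positive")
```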

  2. ClassyFire: automated chemical classification with a comprehensive, computable taxonomy.

    Science.gov (United States)

    Djoumbou Feunang, Yannick; Eisner, Roman; Knox, Craig; Chepelev, Leonid; Hastings, Janna; Owen, Gareth; Fahy, Eoin; Steinbeck, Christoph; Subramanian, Shankar; Bolton, Evan; Greiner, Russell; Wishart, David S

    2016-01-01

    Scientists have long been driven by the desire to describe, organize, classify, and compare objects using taxonomies and/or ontologies. In contrast to biology, geology, and many other scientific disciplines, the world of chemistry still lacks a standardized chemical ontology or taxonomy. Several attempts at chemical classification have been made, but they have mostly been limited to either manual or semi-automated proof-of-principle applications. This is regrettable, as comprehensive chemical classification and description tools could not only improve our understanding of chemistry but also improve the linkage between chemistry and many other fields. For instance, the chemical classification of a compound could help predict its metabolic fate in humans, its druggability, or the potential hazards associated with it, among others. However, the sheer number (tens of millions of compounds) and complexity of chemical structures are such that any manual classification effort would prove to be near impossible. We have developed a comprehensive, flexible, and computable, purely structure-based chemical taxonomy (ChemOnt), along with a computer program (ClassyFire) that uses only chemical structures and structural features to automatically assign all known chemical compounds to a taxonomy consisting of >4800 different categories. This new chemical taxonomy consists of up to 11 different levels (Kingdom, SuperClass, Class, SubClass, etc.), with each of the categories defined by unambiguous, computable structural rules. Furthermore, each category is named using a consensus-based nomenclature and described (in English) based on the characteristic common structural properties of the compounds it contains. The ClassyFire webserver is freely accessible at http://classyfire.wishartlab.com/. Moreover, a Ruby API version is available at https://bitbucket.org/wishartlab/classyfire_api, which provides programmatic access to the ClassyFire server and database. ClassyFire has been used to

  3. Automated Classification of Severity in Cardiac Dyssynchrony Merging Clinical Data and Mechanical Descriptors

    Directory of Open Access Journals (Sweden)

    Alejandro Santos-Díaz

    2017-01-01

    Full Text Available Cardiac resynchronization therapy (CRT) improves functional classification among patients with left ventricle malfunction and ventricular electric conduction disorders. However, a high percentage of subjects under CRT (20%–30%) do not show any improvement. Nonetheless, the presence of mechanical contraction dyssynchrony in the ventricles has been proposed as an indicator of CRT response. This work proposes an automated classification model of severity in ventricular contraction dyssynchrony. The model includes clinical data such as left ventricular ejection fraction (LVEF), QRS and P-R intervals, and the 3 most significant factors extracted from the factor analysis of dynamic structures applied to a set of equilibrium radionuclide angiography images representing the mechanical behavior of cardiac contraction. A control group of 33 normal volunteers (28±5 years, LVEF of 59.7%±5.8%) and a HF group of 42 subjects (53.12±15.05 years, LVEF < 35%) were studied. The proposed classifiers had hit rates of 90%, 50%, and 80% in distinguishing between absent, mild, and moderate-severe interventricular dyssynchrony, respectively. For intraventricular dyssynchrony, hit rates of 100%, 50%, and 90% were observed distinguishing between absent, mild, and moderate-severe, respectively. These results seem promising for using this automated method in the clinical follow-up of patients undergoing CRT.

  4. A Framework to Support Automated Classification and Labeling of Brain Electromagnetic Patterns

    Directory of Open Access Journals (Sweden)

    Gwen A. Frishkoff

    2007-01-01

    Full Text Available This paper describes a framework for automated classification and labeling of patterns in electroencephalographic (EEG) and magnetoencephalographic (MEG) data. We describe recent progress on four goals: (1) specification of rules and concepts that capture expert knowledge of event-related potential (ERP) patterns in visual word recognition; (2) implementation of rules in an automated data processing and labeling stream; (3) data mining techniques that lead to refinement of rules; and (4) iterative steps towards system evaluation and optimization. This process combines top-down, or knowledge-driven, methods with bottom-up, or data-driven, methods. As illustrated here, these methods are complementary and can lead to the development of tools for pattern classification and labeling that are robust and conceptually transparent to researchers. The present application focuses on patterns in averaged EEG (ERP) data. We also describe efforts to extend our methods to represent patterns in MEG data, as well as EM patterns in source (anatomical) space. The broader aim of this work is to design an ontology-based system to support cross-laboratory, cross-paradigm, and cross-modal integration of brain functional data. Tools developed for this project are implemented in MATLAB and are freely available on request.

  5. Robust automated classification of first-motion polarities for focal mechanism determination with machine learning

    Science.gov (United States)

    Ross, Z. E.; Meier, M. A.; Hauksson, E.

    2017-12-01

    Accurate first-motion polarities are essential for determining earthquake focal mechanisms, but they are difficult to measure automatically because of picking errors and signal-to-noise issues. Here we develop an algorithm for reliable automated classification of first-motion polarities using machine learning. A classifier is designed to identify whether the first-motion polarity is up, down, or undefined by examining the waveform data directly. We first improve the accuracy of automatic P-wave onset picks by maximizing a weighted signal-to-noise ratio over a suite of candidate picks around the automatic pick. We then use the waveform amplitudes before and after the optimized pick as features for the classification. We demonstrate the method's potential by training and testing the classifier on tens of thousands of manual first-motion picks by the Southern California Seismic Network. The classifier assigned the same polarity as chosen by an analyst in more than 94% of the records. We show that the method is generalizable to a variety of learning algorithms, including neural networks and random forest classifiers. The method is suitable for automated processing of large seismic waveform datasets, and can potentially be used in real-time applications, e.g. for improving the source characterizations of earthquake early warning algorithms.
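
    The two steps described, pick refinement by a weighted signal-to-noise ratio followed by amplitude-based features, can be sketched as below; the window lengths and the distance weighting are assumptions, not the authors' exact choices.

```python
# Sketch: refine a P onset by maximizing a weighted SNR over candidate picks,
# then take amplitudes around the refined pick as classifier features.
import numpy as np

def refine_pick(trace, pick, search=10, win=20):
    best, best_snr = pick, -np.inf
    for cand in range(pick - search, pick + search + 1):
        noise = trace[cand - win:cand]
        signal = trace[cand:cand + win]
        snr = np.std(signal) / (np.std(noise) + 1e-12)
        snr *= np.exp(-abs(cand - pick) / search)  # weight near original pick
        if snr > best_snr:
            best, best_snr = cand, snr
    return best

rng = np.random.default_rng(2)
tr = np.concatenate([rng.normal(0, 0.1, 100),     # pre-event noise
                     rng.normal(0, 1.0, 100)])    # P arrival at sample 100
p = refine_pick(tr, pick=98)
features = tr[p - 5:p + 5]   # input to the up/down/undefined classifier
print(p, features.round(2))
```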

  6. Detection of delamination defects in plate type fuel elements applying an automated C-Scan ultrasonic system

    International Nuclear Information System (INIS)

    Katchadjian, P.; Desimone, C.; Ziobrowski, C.; Garcia, A.

    2002-01-01

    An immersion pulse-echo ultrasonic technique was applied for the inspection of plate-type fuel elements to be used in nuclear research reactors. An automated motion system along the X, Y and Z axes was implemented to automate the test and present the results in C-scan format, facilitating the immediate identification of possible defects and making the inspection repeatable. Problems found during the laboratory tests and factors that hinder the inspection are discussed. The results of C-scans over UMo fuel elements with pattern defects are also shown. Finally, the main characteristics of the transducer that gave the best results are detailed. (author)

  7. Automated classification of tropical shrub species: a hybrid of leaf shape and machine learning approach.

    Science.gov (United States)

    Murat, Miraemiliana; Chang, Siow-Wee; Abu, Arpah; Yap, Hwa Jen; Yong, Kien-Thai

    2017-01-01

    Plants play a crucial role in foodstuff, medicine, industry, and environmental protection. The skill of recognising plants is very important in some applications, including conservation of endangered species and rehabilitation of lands after mining activities. However, it is a difficult task to identify plant species because it requires specialized knowledge. Developing an automated classification system for plant species is necessary and valuable since it can help specialists as well as the public in identifying plant species easily. Shape descriptors were applied to the myDAUN dataset, which contains 45 tropical shrub species collected from the University of Malaya (UM), Malaysia. Based on a literature review, this is the first study in the development of a tropical shrub species image dataset and classification using a hybrid of leaf shape and machine learning approaches. Four types of shape descriptors were used in this study, namely morphological shape descriptors (MSD), Histogram of Oriented Gradients (HOG), Hu invariant moments (Hu) and Zernike moments (ZM). Single descriptors, as well as combinations of hybrid descriptors, were tested and compared. The tropical shrub species are classified using six different classifiers: artificial neural network (ANN), random forest (RF), support vector machine (SVM), k-nearest neighbour (k-NN), linear discriminant analysis (LDA) and directed acyclic graph multiclass least squares twin support vector machine (DAG MLSTSVM). In addition, three types of feature selection methods were tested on the myDAUN dataset: Relief, Correlation-based feature selection (CFS) and Pearson's coefficient correlation (PCC). The well-known Flavia and Swedish Leaf datasets were used as validation datasets for the proposed methods. The results showed that the hybrid of all descriptors with ANN outperformed the other classifiers with an average classification accuracy of 98.23% for the myDAUN dataset, 95.25% for the Flavia dataset and 99

  8. A simple and robust method for automated photometric classification of supernovae using neural networks

    Science.gov (United States)

    Karpenka, N. V.; Feroz, F.; Hobson, M. P.

    2013-02-01

    A method is presented for automated photometric classification of supernovae (SNe) as Type Ia or non-Ia. A two-step approach is adopted in which (i) the SN light curve flux measurements in each observing filter are fitted separately to an analytical parametrized function that is sufficiently flexible to accommodate virtually all types of SNe and (ii) the fitted function parameters and their associated uncertainties, along with the number of flux measurements, the maximum-likelihood value of the fit and Bayesian evidence for the model, are used as the input feature vector to a classification neural network that outputs the probability that the SN under consideration is of Type Ia. The method is trained and tested using data released following the Supernova Photometric Classification Challenge (SNPCC), consisting of light curves for 20 895 SNe in total. We consider several random divisions of the data into training and testing sets: for instance, for our sample D_1 (D_4), a total of 10 (40) per cent of the data are involved in training the algorithm and the remainder used for blind testing of the resulting classifier; we make no selection cuts. Assigning a canonical threshold probability of pth = 0.5 on the network output to class an SN as Type Ia, for the sample D_1 (D_4) we obtain a completeness of 0.78 (0.82), purity of 0.77 (0.82) and SNPCC figure of merit of 0.41 (0.50). Including the SN host-galaxy redshift and its uncertainty as additional inputs to the classification network results in a modest 5-10 per cent increase in these values. We find that the quality of the classification does not vary significantly with SN redshift. Moreover, our probabilistic classification method allows one to calculate the expected completeness, purity and figure of merit (or other measures of classification quality) as a function of the threshold probability pth, without knowing the true classes of the SNe in the testing sample, as is the case in the classification of real SNe
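
    The last point can be illustrated with a short sketch: from the network outputs alone one can estimate completeness and purity as a function of p_th by treating each output as the probability that the SN is a Type Ia; the estimator below is one natural reading, not necessarily the authors' exact formula.

```python
# Sketch of expected completeness/purity versus threshold p_th, computed
# from the classifier's output probabilities alone (no true labels needed).
import numpy as np

rng = np.random.default_rng(3)
p = rng.beta(0.5, 0.5, 5000)            # stand-in network outputs P(Type Ia)

for p_th in (0.3, 0.5, 0.7):
    sel = p > p_th
    exp_ia_selected = p[sel].sum()      # expected number of true Ia selected
    completeness = exp_ia_selected / p.sum()
    purity = exp_ia_selected / max(sel.sum(), 1)
    print(f"p_th={p_th}: completeness~{completeness:.2f} purity~{purity:.2f}")
```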

  9. Genetic defects in downregulation of IgE production and a new genetic classification of atopy

    Directory of Open Access Journals (Sweden)

    Naomi Kondo

    2004-01-01

    Full Text Available Atopic disorders, such as asthma, eczema and rhinitis, develop due to interactions between genetic and environmental factors. Atopy is characterized by enhanced IgE responses to environmental antigens. The production of IgE is upregulated by Th2 cytokines, in particular interleukin (IL)-4, and downregulated by Th1 cytokines, in particular interferon (IFN)-γ. In the present review, we present the genetic factors responsible for IgE production and the genetic defects in the downregulation (brake) of IgE production, especially in terms of IL-12 and IL-18 signaling, mutations of the IL-12 receptor β2 chain gene and mutations of the IL-18 receptor α chain gene in atopy. Moreover, we present a new genetic classification of atopy. There are four categories of genes that control the expression of allergic disorders: (i) antigen recognition; (ii) IgE production (downregulation = brake, and upregulation); (iii) the production and release of mediators; and (iv) events on target organs. In the near future, this genetic classification will facilitate the development of tailor-made treatment.

  10. Automated arteriole and venule classification using deep learning for retinal images from the UK Biobank cohort.

    Science.gov (United States)

    Welikala, R A; Foster, P J; Whincup, P H; Rudnicka, A R; Owen, C G; Strachan, D P; Barman, S A

    2017-11-01

    The morphometric characteristics of the retinal vasculature are associated with future risk of many systemic and vascular diseases. However, analysis of data from large population based studies is needed to help resolve uncertainties in some of these associations. This requires automated systems that extract quantitative measures of vessel morphology from large numbers of retinal images. Associations between retinal vessel morphology and disease precursors/outcomes may be similar or opposing for arterioles and venules. Therefore, the accurate detection of the vessel type is an important element in such automated systems. This paper presents a deep learning approach for the automatic classification of arterioles and venules across the entire retinal image, including vessels located at the optic disc. It comprises a convolutional neural network whose architecture contains six learned layers: three convolutional and three fully-connected. Complex patterns are automatically learnt from the data, which avoids the use of hand crafted features. The method is developed and evaluated using 835,914 centreline pixels derived from 100 retinal images selected from the 135,867 retinal images obtained at the UK Biobank (large population-based cohort study of middle aged and older adults) baseline examination. This is a challenging dataset with respect to image quality, and hence the arteriole/venule classification is required to be highly robust. The method achieves a significant increase in accuracy of 8.1% when compared to the baseline method, resulting in an arteriole/venule classification accuracy of 86.97% (per pixel basis) over the entire retinal image. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Automation of Nuclear Fuel Pellet Quality Control

    International Nuclear Information System (INIS)

    Keyvan, Shahla; Song, Xiaolong

    2001-01-01

    It would be highly desirable to automate the pellet inspection process, which is currently performed by human inspectors using the naked eye. A prototype of an automated inspection system was developed. The system examines photographic images of pellets using various artificial intelligence techniques for image analysis and defect classification. The steps in the process are described

  12. Comparing automated classification and digitization approaches to detect change in eelgrass bed extent during restoration of a large river delta

    Science.gov (United States)

    Davenport, Anna Elizabeth; Davis, Jerry D.; Woo, Isa; Grossman, Eric; Barham, Jesse B.; Ellings, Christopher S.; Takekawa, John Y.

    2017-01-01

    Native eelgrass (Zostera marina) is an important contributor to ecosystem services: it supplies cover for juvenile fish, supports a variety of invertebrate prey resources for fish and waterbirds, provides substrate for herring roe consumed by numerous fish and birds, helps stabilize sediment, and sequesters organic carbon. Seagrasses are in decline globally, and monitoring changes in their growth and extent is increasingly valuable for determining the impacts of large-scale estuarine restoration and informing blue carbon mapping initiatives. Thus, we examined the efficacy of two remote sensing mapping methods applied to high-resolution (0.5 m pixel size) color near-infrared imagery, with ground validation, to assess change following major tidal marsh restoration. Automated classification of false color aerial imagery and digitized polygons both documented a slight decline in eelgrass area directly after restoration followed by an increase two years later. Classification of sparse and low to medium density eelgrass was confounded in areas with algal cover; however, large dense patches of eelgrass were well delineated. Automated classification of the aerial imagery with unsupervised and supervised methods provided reasonable accuracies of 73%, and hand-digitizing polygons from the same imagery yielded similar results. Visual clues for hand digitizing from the high-resolution imagery provided as reliable a map of dense eelgrass extent as automated image classification. We found that automated classification had no advantages over manual digitization, particularly because of the limitations of detecting eelgrass with only three bands of imagery and near infrared.

  13. Effective automated feature construction and selection for classification of biological sequences.

    Directory of Open Access Journals (Sweden)

    Uday Kamath

    Full Text Available Many open problems in bioinformatics involve elucidating underlying functional signals in biological sequences. DNA sequences, in particular, are characterized by rich architectures in which functional signals are increasingly found to combine local and distal interactions at the nucleotide level. Problems of interest include detection of regulatory regions, splice sites, exons, hypersensitive sites, and more. These problems naturally lend themselves to formulation as classification problems in machine learning. When classification is based on features extracted from the sequences under investigation, success is critically dependent on the chosen set of features. We present an algorithmic framework (EFFECT) for automated detection of functional signals in biological sequences. We focus here on classification problems involving DNA sequences which state-of-the-art work in machine learning shows to be challenging and involve complex combinations of local and distal features. EFFECT uses a two-stage process to first construct a set of candidate sequence-based features and then select a most effective subset for the classification task at hand. Both stages make heavy use of evolutionary algorithms to efficiently guide the search towards informative features capable of discriminating between sequences that contain a particular functional signal and those that do not. To demonstrate its generality, EFFECT is applied to three separate problems of importance in DNA research: the recognition of hypersensitive sites, splice sites, and ALU sites. Comparisons with state-of-the-art algorithms show that the framework is both general and powerful. In addition, a detailed analysis of the constructed features shows that they contain valuable biological information about DNA architecture, allowing biologists and other researchers to directly inspect the features and potentially use the insights obtained to assist wet-laboratory studies on retainment or modification

  14. Automated classification of bone marrow cells in microscopic images for diagnosis of leukemia: a comparison of two classification schemes with respect to the segmentation quality

    Science.gov (United States)

    Krappe, Sebastian; Benz, Michaela; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian

    2015-03-01

    The morphological analysis of bone marrow smears is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually using a bright-field microscope. This is a time-consuming, tedious and partly subjective process, and repeated examinations of a slide yield intra- and inter-observer variances. For this reason, automation of morphological bone marrow analysis is being pursued. This analysis comprises several steps: image acquisition and smear detection, cell localization and segmentation, feature extraction and cell classification. The automated classification of bone marrow cells depends on the automated cell segmentation and the choice of adequate features extracted from different parts of the cell. In this work we focus on the evaluation of support vector machines (SVMs) and random forests (RFs) for the differentiation of bone marrow cells into 16 different classes, including immature and abnormal cell classes. Data sets of different segmentation quality are used to test the two approaches. Automated solutions for the morphological analysis of bone marrow smears could use such a classifier to pre-classify bone marrow cells, thereby shortening the examination duration.

  15. Developing and Integrating Advanced Movement Features Improves Automated Classification of Ciliate Species.

    Science.gov (United States)

    Soleymani, Ali; Pennekamp, Frank; Petchey, Owen L; Weibel, Robert

    2015-01-01

    Recent advances in tracking technologies such as GPS or video tracking systems describe the movement paths of individuals in unprecedented detail and are increasingly used in different fields, including ecology. However, extracting information from raw movement data requires advanced analysis techniques, for instance to infer behaviors expressed during a certain period of the recorded trajectory, or gender or species identity when data are obtained from remote tracking. In this paper, we address how different movement features affect the ability to automatically classify species identity, using a dataset of unicellular microbes (i.e., ciliates). Previously, morphological attributes and simple movement metrics, such as speed, were used for classifying ciliate species. Here, we demonstrate that adding advanced movement features, in particular those based on the discrete wavelet transform, to morphological features can improve classification. These results may have practical applications in the automated monitoring of wastewater facilities as well as environmental monitoring of aquatic systems.
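
    A minimal sketch of wavelet-based movement features, here band energies of a speed time series computed with PyWavelets, is shown below; the wavelet choice, decomposition level and energy summary are assumptions.

```python
# Sketch of discrete-wavelet-transform movement features from a speed series.
import numpy as np
import pywt

rng = np.random.default_rng(4)
speed = np.abs(rng.normal(1.0, 0.3, 256))        # stand-in trajectory speeds

coeffs = pywt.wavedec(speed, "db4", level=4)     # multi-level DWT
features = [np.sum(c ** 2) for c in coeffs]      # energy per frequency band
print(np.round(features, 2))                     # appended to morphological features
```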

  16. Developing and Integrating Advanced Movement Features Improves Automated Classification of Ciliate Species.

    Directory of Open Access Journals (Sweden)

    Ali Soleymani

    Full Text Available Recent advances in tracking technologies such as GPS or video tracking systems describe the movement paths of individuals in unprecedented detail and are increasingly used in different fields, including ecology. However, extracting information from raw movement data requires advanced analysis techniques, for instance to infer behaviors expressed during a certain period of the recorded trajectory, or gender or species identity when data are obtained from remote tracking. In this paper, we address how different movement features affect the ability to automatically classify species identity, using a dataset of unicellular microbes (i.e., ciliates). Previously, morphological attributes and simple movement metrics, such as speed, were used for classifying ciliate species. Here, we demonstrate that adding advanced movement features, in particular those based on the discrete wavelet transform, to morphological features can improve classification. These results may have practical applications in the automated monitoring of wastewater facilities as well as environmental monitoring of aquatic systems.

  17. a Fully Automated Pipeline for Classification Tasks with AN Application to Remote Sensing

    Science.gov (United States)

    Suzuki, K.; Claesen, M.; Takeda, H.; De Moor, B.

    2016-06-01

    Deep learning has recently been in the spotlight owing to its victories in major competitions, which has undeservedly pushed 'shallow' machine learning methods, the relatively simple and handy algorithms commonly used by industrial engineers, into the background despite advantages such as the small amount of time and data they require for training. From a practical point of view, we used shallow learning algorithms to construct a learning pipeline that lets operators apply machine learning without special knowledge, an expensive computing environment, or a large amount of labelled data. The proposed pipeline automates the whole classification process, namely feature selection, feature weighting and the selection of the most suitable classifier with optimized hyperparameters. The configuration uses particle swarm optimization, a well-known metaheuristic algorithm that is generally fast and precise, which enables us not only to optimize (hyper)parameters but also to determine the features and classifier appropriate to the problem; these choices have conventionally been made a priori from domain knowledge, left untouched, or handled with naive algorithms such as grid search. In experiments with the MNIST and CIFAR-10 datasets, common computer vision benchmarks for character recognition and object recognition respectively, our automated learning approach provides high performance considering its simple, dataset-independent setting, small amount of training data, and practical learning time. Moreover, the performance stays robust, almost without modification, even on a remote sensing object recognition problem, indicating that our approach is likely to contribute to general classification problems.
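
    A minimal sketch of the optimization idea, a particle swarm search over an RBF SVM's hyperparameters scored by cross-validation, is given below; the swarm settings, search ranges and the digits dataset are illustrative assumptions.

```python
# Sketch: particle swarm optimization of SVM hyperparameters (log10 space),
# scored by 3-fold cross-validated accuracy on a small benchmark dataset.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
LOW, HIGH = np.array([-2.0, -6.0]), np.array([3.0, -1.0])  # [log C, log gamma]

def score(pos):
    return cross_val_score(SVC(C=10 ** pos[0], gamma=10 ** pos[1]),
                           X, y, cv=3).mean()

rng = np.random.default_rng(5)
pos = rng.uniform(LOW, HIGH, (6, 2))             # 6 particles
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([score(p) for p in pos])
for _ in range(6):                               # PSO iterations
    gbest = pbest[pbest_val.argmax()]
    vel = (0.7 * vel
           + 1.5 * rng.random(pos.shape) * (pbest - pos)
           + 1.5 * rng.random(pos.shape) * (gbest - pos))
    pos = np.clip(pos + vel, LOW, HIGH)          # keep particles in bounds
    vals = np.array([score(p) for p in pos])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]

print("best CV accuracy:", pbest_val.max().round(3),
      "at [log10 C, log10 gamma] =", pbest[pbest_val.argmax()].round(2))
```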

  18. Automated Analysis and Classification of Histological Tissue Features by Multi-Dimensional Microscopic Molecular Profiling.

    Directory of Open Access Journals (Sweden)

    Daniel P Riordan

    Full Text Available Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed a MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody and 5 histochemical stains plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy (with a median per-pixel rate of 77% per feature from 15 annotated samples) was obtained for de novo feature prediction. These results suggest that high-dimensional profiling may advance the

  19. Automated annotation and classification of BI-RADS assessment from radiology reports.

    Science.gov (United States)

    Castro, Sergio M; Tseytlin, Eugene; Medvedeva, Olga; Mitchell, Kevin; Visweswaran, Shyam; Bekhuis, Tanja; Jacobson, Rebecca S

    2017-05-01

    The Breast Imaging Reporting and Data System (BI-RADS) was developed to reduce variation in the descriptions of findings. Manual analysis of breast radiology report data is challenging but is necessary for clinical and healthcare quality assurance activities. The objective of this study is to develop a natural language processing (NLP) system for automated extraction of BI-RADS categories from breast radiology reports. We evaluated an existing rule-based NLP algorithm, and then we developed and evaluated our own method using a supervised machine learning approach. We divided the BI-RADS category extraction task into two specific tasks: (1) annotation of all BI-RADS category values within a report, and (2) classification of the laterality of each BI-RADS category value. We used one algorithm for task 1 and evaluated three algorithms for task 2. Across all evaluations and model training, we used a total of 2159 radiology reports from 18 hospitals, from 2003 to 2015. Performance with the existing rule-based algorithm was not satisfactory. Conditional random fields showed high performance for task 1, with an F-1 measure of 0.95. Rules from the partial decision trees (PART) algorithm showed the best performance across classes for task 2, with a weighted F-1 measure of 0.91 for BI-RADS 0-6 and 0.93 for BI-RADS 3-5. Classification performance by class showed that performance improved for all classes from Naïve Bayes to Support Vector Machine (SVM), and also from SVM to PART. Our system is able to annotate and classify all BI-RADS mentions present in a single radiology report and can serve as the foundation for future studies that will leverage automated BI-RADS annotation to provide feedback to radiologists as part of a learning health system loop. Copyright © 2017. Published by Elsevier Inc.

  20. Automated classification of mammographic microcalcifications by using artificial neural networks and ACR BI-RADS criteria

    Science.gov (United States)

    Hara, Takeshi; Yamada, Akitsugu; Fujita, Hiroshi; Iwase, Takuji; Endo, Tokiko

    2001-07-01

    We have been developing an automated detection scheme for mammographic microcalcifications as part of a computer-assisted diagnosis (CAD) system. The purpose of this study is to develop an automated classification technique for the detected microcalcifications. The type of distribution of calcifications is known to be significantly relevant to the probability of malignancy and is described in ACR BI-RADS (Breast Imaging Reporting and Data System), in which five typical types are illustrated: diffuse/scattered, regional, segmental, linear and clustered. Microcalcifications detected by our CAD system are classified automatically into one of these five types based on the shape of the grouped microcalcifications and the number of microcalcifications within the grouped area. The type of distribution and other general image feature values are analyzed by artificial neural networks (ANNs), and the probability of malignancy is indicated. Eighty mammograms with biopsy-proven microcalcifications were employed and digitized with a laser scanner at a pixel size of 0.1 mm and 12-bit density depth. The sensitivity and specificity were both 93%. The performance was significantly improved compared with the case in which the five BI-RADS criteria were not employed.

  1. Automated classification of self-grooming in mice using open-source software.

    Science.gov (United States)

    van den Boom, Bastijn J G; Pavlidi, Pavlina; Wolf, Casper J H; Mooij, Adriana H; Willuhn, Ingo

    2017-09-01

    Manual analysis of behavior is labor intensive and subject to inter-rater variability. Although considerable progress in automation of analysis has been made, complex behavior such as grooming still lacks satisfactory automated quantification. We trained a freely available, automated classifier, Janelia Automatic Animal Behavior Annotator (JAABA), to quantify self-grooming duration and number of bouts based on video recordings of SAPAP3 knockout mice (a mouse line that self-grooms excessively) and wild-type animals. We compared the JAABA classifier with human expert observers to test its ability to measure self-grooming in three scenarios: mice in an open field, mice on an elevated plus-maze, and tethered mice in an open field. In each scenario, the classifier identified both grooming and non-grooming with great accuracy and correlated highly with results obtained by human observers. Consistently, the JAABA classifier confirmed previous reports of excessive grooming in SAPAP3 knockout mice. Thus far, manual analysis was regarded as the only valid quantification method for self-grooming. We demonstrate that the JAABA classifier is a valid and reliable scoring tool, more cost-efficient than manual scoring, easy to use, requires minimal effort, provides high throughput, and prevents inter-rater variability. We introduce the JAABA classifier as an efficient analysis tool for the assessment of rodent self-grooming with expert quality. In our "how-to" instructions, we provide all information necessary to implement behavioral classification with JAABA. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Automated Classification of Heritage Buildings for As-Built Bim Using Machine Learning Techniques

    Science.gov (United States)

    Bassier, M.; Vergauwen, M.; Van Genechten, B.

    2017-08-01

    Semantically rich three-dimensional models such as Building Information Models (BIMs) are increasingly used in digital heritage. They provide the information required by varying stakeholders during the different stages of the historic building's life cycle, which is crucial in the conservation process. The creation of as-built BIM models is based on point cloud data. However, manually interpreting this data is labour intensive and often leads to misinterpretations. By automatically classifying the point cloud, the information can be processed more efficiently. A key aspect in this automated scan-to-BIM process is the classification of building objects. In this research we look to automatically recognise elements in existing buildings to create compact semantic information models. Our algorithm efficiently extracts the main structural components such as floors, ceilings, roofs, walls and beams despite the presence of significant clutter and occlusions. More specifically, Support Vector Machines (SVM) are proposed for the classification. The algorithm is evaluated using real data from a variety of existing buildings. The results prove that the used classifier recognizes the objects with both high precision and recall. As a result, entire data sets are reliably labelled at once. The approach enables experts to better document and process heritage assets.

  3. Automated Tongue Feature Extraction for ZHENG Classification in Traditional Chinese Medicine

    Directory of Open Access Journals (Sweden)

    Ratchadaporn Kanawong

    2012-01-01

    Full Text Available ZHENG, the Traditional Chinese Medicine syndrome, is an integral and essential part of Traditional Chinese Medicine theory. It defines the theoretical abstraction of the symptom profiles of individual patients and is thus used as a guideline in disease classification in Chinese medicine. For example, patients suffering from gastritis may be classified as Cold or Hot ZHENG, whereas patients with different diseases may be classified under the same ZHENG. Tongue appearance is a valuable diagnostic tool for determining ZHENG in patients. In this paper, we explore new modalities for the clinical characterization of ZHENG using various supervised machine learning algorithms. We propose a novel color-space-based feature set, which can be extracted from tongue images of clinical patients to build an automated ZHENG classification system. Given that Chinese medical practitioners usually observe the tongue color and coating to determine a ZHENG type and to diagnose different stomach disorders including gastritis, we propose using machine-learning techniques to establish the relationship between the tongue image features and ZHENG by learning through examples. The experimental results obtained over a set of 263 gastritis patients, most of them suffering from Cold or Hot ZHENG, and a control group of 48 healthy volunteers demonstrate the excellent performance of our proposed system.

  4. An Automated Artificial Neural Network System for Land Use/Land Cover Classification from Landsat TM Imagery

    Directory of Open Access Journals (Sweden)

    Siamak Khorram

    2009-07-01

    Full Text Available This paper focuses on an automated ANN classification system consisting of two modules: an unsupervised Kohonen's Self-Organizing Mapping (SOM) neural network module, and a supervised Multilayer Perceptron (MLP) neural network module using the Backpropagation (BP) training algorithm. Two training algorithms were provided for the SOM network module: the standard SOM, and a refined SOM learning algorithm which incorporates Simulated Annealing (SA). The ability of our automated ANN system to perform Land-Use/Land-Cover (LU/LC) classifications of a Landsat Thematic Mapper (TM) image was tested using a supervised MLP network, an unsupervised SOM network, and a combined SOM-SA network. Our case study demonstrated that the ANN classification system fulfilled the tasks of network training pattern creation, network training, and network generalization. The results from the three networks were assessed via comparison with reference data derived from the high spatial resolution Digital Colour Infrared (CIR) Digital Orthophoto Quarter Quad (DOQQ) data. The supervised MLP network obtained the most accurate classification as compared to the two unsupervised SOM networks. Additionally, the classification performance of the refined SOM network was found to be significantly better than that of the standard SOM network, essentially due to the incorporation of SA and its scheduled cooling scheme. It is concluded that our automated ANN classification system can be utilized for LU/LC applications and will be particularly useful when traditional statistical classification methods are not suitable due to a statistically abnormal distribution of the input data.

  5. Automated Segmentation and Classification of Coral using Fluid Lensing from Unmanned Airborne Platforms

    Science.gov (United States)

    Instrella, R.; Chirayath, V.

    2015-12-01

    In recent years, there has been a growing interest among biologists in monitoring the short and long term health of the world's coral reefs. The environmental impact of climate change poses a growing threat to these biologically diverse and fragile ecosystems, prompting scientists to use remote sensing platforms and computer vision algorithms to analyze shallow marine systems. In this study, we present a novel method for performing coral segmentation and classification from aerial data collected from small unmanned aerial vehicles (sUAV). Our method uses Fluid Lensing algorithms to remove and exploit strong optical distortions created along the air-fluid boundary to produce cm-scale resolution imagery of the ocean floor at depths up to 5 meters. A 3D model of the reef is reconstructed using structure from motion (SFM) algorithms, and the associated depth information is combined with multidimensional maximum a posteriori (MAP) estimation to separate organic from inorganic material and classify coral morphologies in the Fluid-Lensed transects. In this study, MAP estimation is performed using a set of manually classified 100 x 100 pixel training images to determine the most probable coral classification within an interrogated region of interest. Aerial footage of a coral reef was captured off the coast of American Samoa and used to test our proposed method. 90 x 20 meter transects of the Samoan coastline undergo automated classification and are manually segmented by a marine biologist for comparison, leading to success rates as high as 85%. This method has broad applications for coastal remote sensing, and will provide marine biologists access to large swaths of high resolution, segmented coral imagery.

  6. Automated Segmentation and Classification of Coral using Fluid Lensing from Unmanned Airborne Platforms

    Science.gov (United States)

    Instrella, Ron; Chirayath, Ved

    2016-01-01

    In recent years, there has been a growing interest among biologists in monitoring the short and long term health of the world's coral reefs. The environmental impact of climate change poses a growing threat to these biologically diverse and fragile ecosystems, prompting scientists to use remote sensing platforms and computer vision algorithms to analyze shallow marine systems. In this study, we present a novel method for performing coral segmentation and classification from aerial data collected from small unmanned aerial vehicles (sUAV). Our method uses Fluid Lensing algorithms to remove and exploit strong optical distortions created along the air-fluid boundary to produce cm-scale resolution imagery of the ocean floor at depths up to 5 meters. A 3D model of the reef is reconstructed using structure from motion (SFM) algorithms, and the associated depth information is combined with multidimensional maximum a posteriori (MAP) estimation to separate organic from inorganic material and classify coral morphologies in the Fluid-Lensed transects. In this study, MAP estimation is performed using a set of manually classified 100 x 100 pixel training images to determine the most probable coral classification within an interrogated region of interest. Aerial footage of a coral reef was captured off the coast of American Samoa and used to test our proposed method. 90 x 20 meter transects of the Samoan coastline undergo automated classification and are manually segmented by a marine biologist for comparison, leading to success rates as high as 85%. This method has broad applications for coastal remote sensing, and will provide marine biologists access to large swaths of high resolution, segmented coral imagery.

  7. Automated classification of tropical shrub species: a hybrid of leaf shape and machine learning approach

    Directory of Open Access Journals (Sweden)

    Miraemiliana Murat

    2017-09-01

    Full Text Available Plants play a crucial role in foodstuff, medicine, industry, and environmental protection. The skill of recognising plants is very important in some applications, including conservation of endangered species and rehabilitation of lands after mining activities. However, it is a difficult task to identify plant species because it requires specialized knowledge. Developing an automated classification system for plant species is necessary and valuable since it can help specialists as well as the public in identifying plant species easily. Shape descriptors were applied on the myDAUN dataset, which contains 45 tropical shrub species collected from the University of Malaya (UM), Malaysia. Based on the literature review, this is the first study to develop a tropical shrub species image dataset and classification using a hybrid of leaf shape and machine learning approach. Four types of shape descriptors were used in this study, namely morphological shape descriptors (MSD), Histogram of Oriented Gradients (HOG), Hu invariant moments (Hu) and Zernike moments (ZM). Single descriptors as well as combinations of hybrid descriptors were tested and compared. The tropical shrub species were classified using six different classifiers: artificial neural network (ANN), random forest (RF), support vector machine (SVM), k-nearest neighbour (k-NN), linear discriminant analysis (LDA) and directed acyclic graph multiclass least squares twin support vector machine (DAG MLSTSVM). In addition, three feature selection methods were tested on the myDAUN dataset: Relief, Correlation-based feature selection (CFS) and Pearson's coefficient correlation (PCC). The well-known Flavia dataset and Swedish Leaf dataset were used as validation datasets for the proposed methods. The results showed that the hybrid of all descriptors with ANN outperformed the other classifiers, with an average classification accuracy of 98.23% for the myDAUN dataset, 95.25% for the Flavia
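
    The Hu invariant moments named in this record can be computed with scikit-image; the sketch below pairs them with a random forest on synthetic ellipse and rectangle silhouettes. The shapes, dataset size and classifier settings are illustrative assumptions, not the myDAUN configuration.

```python
import numpy as np
from skimage.measure import moments_central, moments_hu, moments_normalized
from sklearn.ensemble import RandomForestClassifier

def hu_features(binary_leaf):
    """Seven Hu invariant moments of a binary silhouette, log-scaled."""
    mu = moments_central(binary_leaf.astype(float))
    hu = moments_hu(moments_normalized(mu))
    # Log scale because Hu moment magnitudes span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

rng = np.random.default_rng(1)
X, y = [], []
for _ in range(40):
    r = int(rng.integers(10, 25))
    yy, xx = np.ogrid[:64, :64]
    ellipse = ((yy - 32) ** 2 / r ** 2 + (xx - 32) ** 2 / (r / 2) ** 2) <= 1
    X.append(hu_features(ellipse.astype(float))); y.append("ellipse")
    rect = np.zeros((64, 64)); rect[32 - r // 2:32 + r // 2, 20:44] = 1
    X.append(hu_features(rect)); y.append("rectangle")

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:60], y[:60])
print(clf.score(X[60:], y[60:]))  # expected close to 1.0 on this toy data
```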

  8. An Automated Algorithm to Screen Massive Training Samples for a Global Impervious Surface Classification

    Science.gov (United States)

    Tan, Bin; Brown de Colstoun, Eric; Wolfe, Robert E.; Tilton, James C.; Huang, Chengquan; Smith, Sarah E.

    2012-01-01

    An algorithm was developed to automatically screen outliers from the massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP aims to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant to urbanization studies but also needed for global carbon, hydrology, and energy balance research. A supervised classification method, the regression tree, is applied in this project, and a set of accurate training samples is the key to any supervised classification. We developed the global scale training samples from fine resolution (about 1 m) satellite data (Quickbird and Worldview2) and then aggregated the fine resolution impervious cover maps to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally; in Europe alone there are 174 training sites, ranging in size from 4.5 km by 4.5 km to 8.1 km by 3.6 km, amounting to over six million training samples. Therefore, we developed this automated statistics-based algorithm to screen the training samples at two levels: the site level and the scene level. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel; the samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. The screening process then escalates to the scene level, where a similar process with a looser threshold is applied to account for the possible variance due to site differences. We do not perform the screening across scenes because the scenes might vary due to
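
    A rough sketch of the two-level screen described above, assuming a z-score univariate filter followed by a Mahalanobis-distance multivariate filter with a chi-square cutoff; the thresholds and the synthetic six-band feature data are illustrative, not the GLS-IMP values.

```python
import numpy as np
from scipy.stats import chi2

def screen_group(samples, z_thresh=3.0, alpha=0.001):
    """Screen one 10%-imperviousness group of training samples.

    samples: (n, p) array of per-pixel spectral features.
    Returns a boolean mask marking the samples that are kept.
    """
    # Univariate screen: drop samples with any extreme per-band z-score.
    z = np.abs((samples - samples.mean(axis=0)) / samples.std(axis=0))
    keep = (z < z_thresh).all(axis=1)
    kept = samples[keep]
    # Multivariate screen: Mahalanobis distance vs. a chi-square cutoff.
    diff = kept - kept.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(kept, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    keep2 = d2 < chi2.ppf(1 - alpha, df=samples.shape[1])
    mask = keep.copy()
    mask[np.where(keep)[0][~keep2]] = False
    return mask

rng = np.random.default_rng(2)
group = rng.normal(0, 1, size=(1000, 6))
group[:5] += 10  # inject five gross outliers
print(screen_group(group).sum())  # roughly 990-995 samples kept
```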

  9. Automated, high accuracy classification of Parkinsonian disorders: a pattern recognition approach.

    Directory of Open Access Journals (Sweden)

    Andre F Marquand

    Full Text Available Progressive supranuclear palsy (PSP), multiple system atrophy (MSA) and idiopathic Parkinson's disease (IPD) can be clinically indistinguishable, especially in the early stages, despite distinct patterns of molecular pathology. Structural neuroimaging holds promise for providing objective biomarkers for discriminating these diseases at the single subject level but all studies to date have reported incomplete separation of disease groups. In this study, we employed multi-class pattern recognition to assess the value of anatomical patterns derived from a widely available structural neuroimaging sequence for automated classification of these disorders. To achieve this, 17 patients with PSP, 14 with IPD and 19 with MSA were scanned using structural MRI along with 19 healthy controls (HCs). An advanced probabilistic pattern recognition approach was employed to evaluate the diagnostic value of several pre-defined anatomical patterns for discriminating the disorders, including: (i) a subcortical motor network; (ii) each of its component regions and (iii) the whole brain. All disease groups could be discriminated simultaneously with high accuracy using the subcortical motor network. The region providing the most accurate predictions overall was the midbrain/brainstem, which discriminated all disease groups from one another and from HCs. The subcortical network also produced more accurate predictions than the whole brain and all of its constituent regions. PSP was accurately predicted from the midbrain/brainstem, cerebellum and all basal ganglia compartments; MSA from the midbrain/brainstem and cerebellum and IPD from the midbrain/brainstem only. This study demonstrates that automated analysis of structural MRI can accurately predict diagnosis in individual patients with Parkinsonian disorders, and identifies distinct patterns of regional atrophy particularly useful for this process.

  10. Interpreting complex data by methods of recognition and classification in an automated system of aerogeophysical material processing

    Energy Technology Data Exchange (ETDEWEB)

    Koval', L.A.; Dolgov, S.V.; Liokumovich, G.B.; Ovcharenko, A.V.; Priyezzhev, I.I.

    1984-01-01

    The ASOM-AGS/YeS system for automated processing of aerogeophysical data is equipped for complex interpretation of multichannel measurements. Algorithms of factor analysis and automatic classification are used, together with an apparatus of a priori specified (selected) decision rules. The areas of effect of these procedures can be initially limited to the specified geological information. The possibilities of the method are demonstrated by the results of automated processing of airborne gamma-spectrometric measurements in the region of a known porphyry copper occurrence in Kazakhstan. After processing by the principal component method, this ore deposit was clearly marked by a composite halo of independent factors: U (sharp increase), Th (noticeable increase) and K (decrease).

  11. Olive oil sensory defects classification with data fusion of instrumental techniques and multivariate analysis (PLS-DA).

    Science.gov (United States)

    Borràs, Eva; Ferré, Joan; Boqué, Ricard; Mestres, Montserrat; Aceña, Laura; Calvo, Angels; Busto, Olga

    2016-07-15

    Three instrumental techniques, headspace-mass spectrometry (HS-MS), mid-infrared spectroscopy (MIR) and UV-visible spectrophotometry (UV-vis), have been combined to classify virgin olive oil samples based on the presence or absence of sensory defects. The reference sensory values were provided by an official taste panel. Different data fusion strategies were studied to improve the discrimination capability compared to using each instrumental technique individually. A general model was applied to discriminate high-quality non-defective olive oils (extra-virgin) and the lowest-quality olive oils considered non-edible (lampante). A specific identification of key off-flavours, such as musty, winey, fusty and rancid, was also studied. The data fusion of the three techniques improved the classification results in most of the cases. Low-level data fusion was the best strategy to discriminate musty, winey and fusty defects, using HS-MS, MIR and UV-vis, and the rancid defect using only HS-MS and MIR. The mid-level data fusion approach using partial least squares-discriminant analysis (PLS-DA) scores was found to be the best strategy for defective vs non-defective and edible vs non-edible oil discrimination. However, the data fusion did not sufficiently improve the results obtained by a single technique (HS-MS) to classify non-defective classes. These results indicate that instrumental data fusion can be useful for the identification of sensory defects in virgin olive oils. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Automated classification of radiology reports to facilitate retrospective study in radiology.

    Science.gov (United States)

    Zhou, Yihua; Amundson, Per K; Yu, Fang; Kessler, Marcus M; Benzinger, Tammie L S; Wippold, Franz J

    2014-12-01

    Retrospective research is an important tool in radiology. Identifying imaging examinations appropriate for a given research question from unstructured radiology reports is extremely useful, but labor-intensive. Using the machine learning text-mining methods implemented in LingPipe [1], we evaluated the performance of the dynamic language model (DLM) and the Naïve Bayesian (NB) classifiers in classifying radiology reports to facilitate identification of radiological examinations for research projects. The training dataset consisted of 14,325 sentences from 11,432 radiology reports randomly selected from a database of 5,104,594 reports in all disciplines of radiology. The training sentences were categorized manually into six categories (Positive, Differential, Post Treatment, Negative, Normal, and History). A 10-fold cross-validation [2] was used to evaluate the performance of the models, which were tested in classification of radiology reports for cases of sellar or suprasellar masses and colloid cysts. The average accuracies for the DLM and NB classifiers were 88.5% with a 95% confidence interval (CI) of 1.9% and 85.9% with a 95% CI of 2.0%, respectively. The DLM performed slightly better and was used to classify 1,397 radiology reports containing the keywords "sellar or suprasellar mass" or "colloid cyst". The DLM model produced an accuracy of 88.2% with a 95% CI of 2.1% for the 959 reports that contain "sellar or suprasellar mass" and an accuracy of 86.3% with a 95% CI of 2.5% for the 437 reports of "colloid cyst". We conclude that automated classification of radiology reports using machine learning techniques can effectively facilitate the identification of cases suitable for retrospective research.
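
    LingPipe is a Java library, so as a loose Python analogue of the record's Naïve Bayes arm, the sketch below uses scikit-learn with invented example sentences for the six categories; the data, features and scores are placeholders, not the study's.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training sentences, one per category used in the study.
sentences = [
    "There is a sellar mass with suprasellar extension.",          # Positive
    "Findings may represent colloid cyst versus arachnoid cyst.",  # Differential
    "Postsurgical changes of the sellar region.",                  # Post Treatment
    "No evidence of intracranial mass.",                           # Negative
    "The pituitary gland is normal in size.",                      # Normal
    "History of resected pituitary adenoma.",                      # History
] * 10  # repeated so that 10-fold cross-validation has examples per fold
labels = ["Positive", "Differential", "Post Treatment",
          "Negative", "Normal", "History"] * 10

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
scores = cross_val_score(clf, sentences, labels, cv=10)
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```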

  13. A multiresolution approach to automated classification of protein subcellular location images

    Directory of Open Access Journals (Sweden)

    Srinivasa Gowri

    2007-06-01

    Full Text Available Abstract Background: Fluorescence microscopy is widely used to determine the subcellular location of proteins. Efforts to determine location on a proteome-wide basis create a need for automated methods to analyze the resulting images. Over the past ten years, the feasibility of using machine learning methods to recognize all major subcellular location patterns has been convincingly demonstrated, using diverse feature sets and classifiers. On a well-studied data set of 2D HeLa single-cell images, the best performance to date, 91.5%, was obtained by including a set of multiresolution features. This demonstrates the value of multiresolution approaches to this important problem. Results: We report here a novel approach for the classification of subcellular location patterns by classifying in multiresolution subspaces. Our system is able to work with any feature set and any classifier. It consists of multiresolution (MR) decomposition, followed by feature computation and classification in each MR subspace, yielding local decisions that are then combined into a global decision. With 26 texture features alone and a neural network classifier, we obtained an increase in accuracy on the 2D HeLa data set to 95.3%. Conclusion: We demonstrate that the space-frequency localized information in the multiresolution subspaces adds significantly to the discriminative power of the system. Moreover, we show that a vastly reduced set of features is sufficient, consisting of our novel modified Haralick texture features. Our proposed system is general, allowing for any combinations of sets of features and any combination of classifiers.

  14. Automating the Identification of Patient Safety Incident Reports Using Multi-Label Classification.

    Science.gov (United States)

    Wang, Ying; Coiera, Enrico; Runciman, William; Magrabi, Farah

    2017-01-01

    Automated identification provides an efficient way to categorize patient safety incidents. Previous studies have focused on identifying single incident types relating to a specific patient safety problem, e.g., clinical handover. In reality, there are multiple types of incidents reflecting the breadth of patient safety problems, and a single report may describe multiple problems, i.e., it can be assigned multiple type labels. This study evaluated the ability of multi-label classification methods to identify multiple incident types in single reports. Three multi-label methods were evaluated: binary relevance, classifier chains and ensembles of classifier chains. We found that an ensemble of classifier chains was the most effective method, using binary Support Vector Machines with a radial basis function kernel and bag-of-words feature extraction, performing equally well on balanced and stratified datasets (F-score: 73.7% vs. 74.7%). Classifiers were able to identify six common incident types: falls, medications, pressure injury, aggression, documentation problems and others.
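
    An ensemble of classifier chains of the kind this record found most effective can be assembled from scikit-learn parts, as sketched below; the incident reports, label matrix and 0.5 decision threshold are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.multioutput import ClassifierChain
from sklearn.svm import SVC

reports = [
    "patient fell out of bed during the night",
    "wrong dose of insulin administered",
    "patient fell and sustained pressure injury on admission",
    "aggressive behaviour towards nursing staff",
    "medication chart not documented after fall",
] * 8
types = ["falls", "medications", "pressure injury", "aggression", "documentation"]
Y = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [1, 0, 0, 0, 1]] * 8)

X = CountVectorizer().fit_transform(reports)  # bag-of-words features
# Each chain uses a random label order; per-label probabilities are
# averaged across the ensemble before thresholding.
chains = [ClassifierChain(SVC(kernel="rbf", probability=True),
                          order="random", random_state=i).fit(X, Y)
          for i in range(10)]
proba = np.mean([chain.predict_proba(X) for chain in chains], axis=0)
pred = (proba >= 0.5).astype(int)
print([t for t, flag in zip(types, pred[2]) if flag])
# expected on this toy data: ['falls', 'pressure injury']
```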

  15. Automated segmentation of geographic atrophy in fundus autofluorescence images using supervised pixel classification.

    Science.gov (United States)

    Hu, Zhihong; Medioni, Gerard G; Hernandez, Matthias; Sadda, Srinivas R

    2015-01-01

    Geographic atrophy (GA) is a manifestation of the advanced or late stage of age-related macular degeneration (AMD). AMD is the leading cause of blindness in people over the age of 65 in the western world. The purpose of this study is to develop a fully automated supervised pixel classification approach for segmenting GA, including uni- and multifocal patches, in fundus autofluorescence (FAF) images. The image features include region-wise intensity measures, gray-level co-occurrence matrix measures, and Gaussian filter banks. A k-nearest-neighbor pixel classifier is applied to obtain a GA probability map, representing the likelihood that the image pixel belongs to GA. Sixteen randomly chosen FAF images were obtained from 16 subjects with GA. The algorithm-defined GA regions are compared with manual delineation performed by a certified image reading center grader. Eight-fold cross-validation is applied to evaluate the algorithm performance. The mean overlap ratio (OR), area correlation (Pearson's r), accuracy (ACC), true positive rate (TPR), specificity (SPC), positive predictive value (PPV), and false discovery rate (FDR) between the algorithm- and manually defined GA regions are [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], respectively.
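
    A minimal sketch of per-pixel k-NN classification into a GA probability map, assuming simple multi-scale intensity features in place of the paper's richer intensity/GLCM/filter-bank set; the synthetic image and labels are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.neighbors import KNeighborsClassifier

def ga_probability_map(image, labels, k=15):
    """Per-pixel GA probability from a k-nearest-neighbor classifier.

    image:  2-D FAF intensity array.
    labels: 2-D array, 1 where a grader marked GA, 0 elsewhere.
    """
    # Toy per-pixel features: raw intensity plus two smoothed scales.
    feats = np.stack([image,
                      gaussian_filter(image, sigma=2),
                      gaussian_filter(image, sigma=8)], axis=-1)
    X = feats.reshape(-1, feats.shape[-1])
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, labels.ravel())
    # Probability that each pixel belongs to GA (class 1).
    return knn.predict_proba(X)[:, 1].reshape(image.shape)

# Demo on a synthetic image with one square lesion (training and scoring on
# the same pixels here purely to keep the example short).
rng = np.random.default_rng(3)
img = rng.normal(0.5, 0.05, (64, 64))
img[20:40, 20:40] += 0.3
lab = np.zeros((64, 64), dtype=int)
lab[20:40, 20:40] = 1
prob = ga_probability_map(img, lab)
print(prob[30, 30] > 0.5, prob[5, 5] < 0.5)  # expected: True True
```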

  16. Seasonal cultivated and fallow cropland mapping using MODIS-based automated cropland classification algorithm

    Science.gov (United States)

    Wu, Zhuoting; Thenkabail, Prasad S.; Mueller, Rick; Zakzeski, Audra; Melton, Forrest; Johnson, Lee; Rosevelt, Carolyn; Dwyer, John; Jones, Jeanine; Verdin, James P.

    2014-01-01

    Increasing drought occurrences and growing populations demand accurate, routine, and consistent cultivated and fallow cropland products to enable water and food security analysis. The overarching goal of this research was to develop and test an automated cropland classification algorithm (ACCA) that provides accurate, consistent, and repeatable information on seasonal cultivated as well as seasonal fallow cropland extents and areas based on Moderate Resolution Imaging Spectroradiometer remote sensing data. The seasonal ACCA development process involves writing a series of iterative decision tree codes to separate cultivated and fallow croplands from noncroplands, aiming to accurately mirror reliable reference data sources. A pixel-by-pixel accuracy assessment against the U.S. Department of Agriculture (USDA) cropland data showed, on average, a producer's accuracy of 93% and a user's accuracy of 85% across all months. Further, ACCA-derived cropland maps agreed well with the USDA Farm Service Agency crop acreage-reported data for both cultivated and fallow croplands, with R-square values over 0.7, and with field surveys with an accuracy of ≥95% for cultivated croplands and ≥76% for fallow croplands. Our results demonstrated the ability of ACCA to generate cropland products, such as cultivated and fallow cropland extents and areas, accurately, automatically, and repeatedly throughout the growing season.
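
    ACCA itself is a hand-written cascade of decision rules, so as a sketch the snippet below trains a shallow decision tree on invented monthly NDVI profiles and prints its rules; the printed tree can then be inspected and hand-edited, which loosely mirrors how such rule sets are iterated against reference data. The class profiles and thresholds are assumptions, not the published algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented Apr-Sep NDVI profiles per pixel for three target classes.
rng = np.random.default_rng(4)
cultivated = rng.normal([0.3, 0.5, 0.7, 0.8, 0.6, 0.4], 0.05, (200, 6))
fallow = rng.normal([0.2, 0.2, 0.25, 0.25, 0.2, 0.2], 0.05, (200, 6))
noncrop = rng.normal([0.5, 0.55, 0.55, 0.6, 0.55, 0.5], 0.05, (200, 6))
X = np.vstack([cultivated, fallow, noncrop])
y = ["cultivated"] * 200 + ["fallow"] * 200 + ["noncropland"] * 200

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
names = [f"ndvi_{m}" for m in "apr may jun jul aug sep".split()]
print(export_text(tree, feature_names=names))  # human-readable rule cascade
```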

  17. Deep SOMs for automated feature extraction and classification from big data streaming

    Science.gov (United States)

    Sakkari, Mohamed; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    In this paper, we propose a deep self-organizing map model (Deep-SOMs) for automated feature extraction and learning from streaming big data, benefiting from the Spark framework for real-time stream handling and highly parallel data processing. The deep SOMs architecture is based on the notion of abstraction: patterns are automatically extracted from the raw data, from the less to the more abstract. The proposed model consists of three hidden self-organizing layers, an input layer and an output layer. Each layer is made up of a multitude of SOMs, with each map focusing only on a local sub-region of the input image. Each layer then trains on the local information to generate more global information in the higher layer. The proposed Deep-SOMs model is unique in terms of its layer architecture and its SOM sampling and learning methods. During the learning stage we use a set of unsupervised SOMs for feature extraction. We validate the effectiveness of our approach on large data sets such as the Leukemia and SRBCT datasets. Comparative results show that the Deep-SOMs model performs better than many existing algorithms for image classification.

  18. Classification of Acute Decompensated Heart Failure (ADHF): An Automated Algorithm Compared to a Physician Reviewer Panel: The ARIC Study

    Science.gov (United States)

    Loehr, Laura R.; Agarwal, Sunil K.; Baggett, Chris; Wruck, Lisa M.; Chang, Patricia P.; Solomon, Scott D.; Shahar, Eyal; Ni, Hanyu; Rosamond, Wayne D.; Heiss, Gerardo

    2013-01-01

    Background An algorithm to classify heart failure (HF) endpoints inclusive of contemporary measures of biomarkers and echocardiography was recently proposed by an international expert panel. Our objective was to assess agreement of HF classification by this contemporaneous algorithm with that by a standardized physician reviewer panel, when applied to data abstracted from community-based hospital records. Methods and Results During 2005-2007, all hospitalizations were identified from four U.S. communities under surveillance as part of the Atherosclerosis Risk in Communities (ARIC) study. Potential HF hospitalizations were sampled by ICD discharge codes and demographics from men and women aged 55 years and older. The HF classification algorithm was automated and applied to 2,729 (N=13,854 weighted hospitalizations) hospitalizations in which either BNP measures or ejection fraction were documented (mean age 75 years). There were 1,403 (54%, N=7,534 weighted) events classified as acute, decompensated HF (ADHF) by the automated algorithm, and 1,748 (68%, N=9,276 weighted) such events by the ARIC reviewer panel. The chance-corrected agreement between ADHF by physician reviewer panel and the automated algorithm was moderate (Kappa=0.39). Sensitivity and specificity of the automated algorithm with ARIC reviewer panel as the referent standard was 0.68 (95% CI, 0.67 - 0.69), and 0.75 (95% CI, 0.74 - 0.76), respectively. Conclusions Although the automated classification improved efficiency and decreased costs, its accuracy in classifying HF hospitalizations was modest compared to a standardized physician reviewer panel. PMID:23650310
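
    To give the flavor of such an automated classifier, here is a toy rule-based sketch; the field names and thresholds are illustrative placeholders only and do not reproduce the published ARIC criteria, which abstract many more variables.

```python
def classify_hf_event(bnp_pg_ml=None, ef_percent=None,
                      hf_signs=False, acute_presentation=False):
    """Toy heart-failure event classifier (illustrative thresholds only)."""
    if bnp_pg_ml is None and ef_percent is None:
        return "unclassifiable"  # the algorithm requires BNP or EF documented
    biomarker_support = bnp_pg_ml is not None and bnp_pg_ml >= 500
    imaging_support = ef_percent is not None and ef_percent < 40
    if hf_signs and acute_presentation and (biomarker_support or imaging_support):
        return "acute decompensated HF"
    if hf_signs and (biomarker_support or imaging_support):
        return "chronic stable HF"
    return "not HF"

print(classify_hf_event(bnp_pg_ml=850, hf_signs=True, acute_presentation=True))
# -> acute decompensated HF
```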

  19. Eddy Current Signature Classification of Steam Generator Tube Defects Using A Learning Vector Quantization Neural Network

    International Nuclear Information System (INIS)

    Garcia, Gabe V.

    2005-01-01

    A major cause of failure in nuclear steam generators is degradation of their tubes. Although seven primary defect categories exist, one of the principal causes of tube failure is intergranular attack/stress corrosion cracking (IGA/SCC). This type of defect usually begins on the secondary side surface of the tubes and propagates both inwards and laterally. In many cases this defect is found at or near the tube support plates.

  20. Eddy Current Signature Classification of Steam Generator Tube Defects Using A Learning Vector Quantization Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Gabe V. Garcia

    2005-01-03

    A major cause of failure in nuclear steam generators is degradation of their tubes. Although seven primary defect categories exist, one of the principal causes of tube failure is intergranular attack/stress corrosion cracking (IGA/SCC). This type of defect usually begins on the secondary side surface of the tubes and propagates both inwards and laterally. In many cases this defect is found at or near the tube support plates.

  1. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy

    OpenAIRE

    Welikala, R; Fraz, M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-01-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels from retinal images is presented. This method is based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from eac...

  2. Automated classification of Permanent Scatterers time-series based on statistical characterization tests

    Science.gov (United States)

    Berti, Matteo; Corsini, Alessandro; Franceschini, Silvia; Iannacone, Jean Pascal

    2013-04-01

    time series are typically affected by a significant noise-to-signal ratio. The results of the analysis show that even with such a rough-quality dataset, our automated classification procedure can greatly improve radar interpretation of mass movements. In general, uncorrelated PS (type 0) are concentrated in flat areas such as fluvial terraces and valley bottoms, and along stable watershed divides; linear PS (type 1) are mainly located on slopes (both inside and outside mapped landslides) or near the edge of scarps or steep slopes; non-linear PS (types 2 to 5) typically fall inside landslide deposits or in the surrounding areas. The spatial distribution of classified PS allows the detection of deformation phenomena that are not visible by considering the average velocity alone, and provides important information on the temporal evolution of the phenomena, such as acceleration, deceleration, seasonal fluctuations, and abrupt or continuous changes of the displacement rate. Based on these encouraging results, we integrated all the classification algorithms into a Graphical User Interface (called PSTime), which is freely available as a standalone application.

  3. CONSTRUCTION OF A CALIBRATED PROBABILISTIC CLASSIFICATION CATALOG: APPLICATION TO 50k VARIABLE SOURCES IN THE ALL-SKY AUTOMATED SURVEY

    International Nuclear Information System (INIS)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Brink, Henrik; Crellin-Quick, Arien; Butler, Nathaniel R.

    2012-01-01

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  4. CONSTRUCTION OF A CALIBRATED PROBABILISTIC CLASSIFICATION CATALOG: APPLICATION TO 50k VARIABLE SOURCES IN THE ALL-SKY AUTOMATED SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Brink, Henrik; Crellin-Quick, Arien [Astronomy Department, University of California, Berkeley, CA 94720-3411 (United States); Butler, Nathaniel R., E-mail: jwrichar@stat.berkeley.edu [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States)

    2012-12-15

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
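
    The probability-calibration step these records emphasize can be sketched with scikit-learn; the random-forest base learner, isotonic method and synthetic features below are assumptions for illustration, not the MACC pipeline.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Stand-in for variable-star features (periods, amplitudes, colors, ...).
X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

raw = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Isotonic calibration of the forest's scores, so that downstream consumers
# can take the class probabilities at face value.
cal = CalibratedClassifierCV(RandomForestClassifier(n_estimators=200,
                                                    random_state=0),
                             method="isotonic", cv=5).fit(X_tr, y_tr)

for name, model in [("raw", raw), ("calibrated", cal)]:
    p = model.predict_proba(X_te)[:, 1]
    print(name, "Brier score:", round(brier_score_loss(y_te, p), 4))
```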

  5. Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images.

    Science.gov (United States)

    Vermeer, K A; van der Schoot, J; Lemij, H G; de Boer, J F

    2011-06-01

    Current OCT devices provide three-dimensional (3D) in-vivo images of the human retina. The resulting very large data sets are difficult to manually assess. Automated segmentation is required to automatically process the data and produce images that are clinically useful and easy to interpret. In this paper, we present a method to segment the retinal layers in these images. Instead of using complex heuristics to define each layer, simple features are defined and machine learning classifiers are trained based on manually labeled examples. When applied to new data, these classifiers produce labels for every pixel. After regularization of the 3D labeled volume to produce a surface, this results in consistent, three-dimensionally segmented layers that match known retinal morphology. Six labels were defined, corresponding to the following layers: Vitreous, retinal nerve fiber layer (RNFL), ganglion cell layer & inner plexiform layer, inner nuclear layer & outer plexiform layer, photoreceptors & retinal pigment epithelium and choroid. For both normal and glaucomatous eyes that were imaged with a Spectralis (Heidelberg Engineering) OCT system, the five resulting interfaces were compared between automatic and manual segmentation. RMS errors for the top and bottom of the retina were between 4 and 6 μm, while the errors for intra-retinal interfaces were between 6 and 15 μm. The resulting total retinal thickness maps corresponded with known retinal morphology. RNFL thickness maps were compared to GDx (Carl Zeiss Meditec) thickness maps. Both maps were mostly consistent but local defects were better visualized in OCT-derived thickness maps.

  6. Studying the potential impact of automated document classification on scheduling a systematic review update

    Science.gov (United States)

    2012-01-01

    Background Systematic Reviews (SRs) are an essential part of evidence-based medicine, providing support for clinical practice and policy on a wide range of medical topics. However, producing SRs is resource-intensive, and progress in the research they review leads to SRs becoming outdated, requiring updates. Although the question of how and when to update SRs has been studied, the best method for determining when to update is still unclear, necessitating further research. Methods In this work we study the potential impact of a machine learning-based automated system for providing alerts when new publications become available within an SR topic. Some of these new publications are especially important, as they report findings that are more likely to initiate a review update. To this end, we have designed a classification algorithm to identify articles that are likely to be included in an SR update, along with an annotation scheme designed to identify the most important publications in a topic area. Using an SR database containing over 70,000 articles, we annotated articles from 9 topics that had received an update during the study period. The algorithm was then evaluated in terms of the overall correct and incorrect alert rate for publications meeting the topic inclusion criteria, as well as in terms of its ability to identify important, update-motivating publications in a topic area. Results Our initial approach, based on our previous work in topic-specific SR publication classification, identifies over 70% of the most important new publications, while maintaining a low overall alert rate. Conclusions We performed an initial analysis of the opportunities and challenges in aiding the SR update planning process with an informatics-based machine learning approach. Alerts could be a useful tool in the planning, scheduling, and allocation of resources for SR updates, providing an improvement in timeliness and coverage for the large number of medical topics needing SRs.

  7. Automated classification of immunostaining patterns in breast tissue from the human protein Atlas

    Directory of Open Access Journals (Sweden)

    Issac Niwas Swamidoss

    2013-01-01

    Full Text Available Background: The Human Protein Atlas (HPA) is an effort to map the location of all human proteins (http://www.proteinatlas.org/). It contains a large number of histological images of sections from human tissue. Tissue micro arrays (TMA) are imaged by a slide scanning microscope, and each image represents a thin slice of a tissue core with a dark brown antibody-specific stain and a blue counter stain. When generating antibodies for protein profiling of the human proteome, an important step in the quality control is to compare staining patterns of different antibodies directed towards the same protein. This comparison is an ultimate control that the antibody recognizes the right protein. In this paper, we propose and evaluate different approaches for classifying sub-cellular antibody staining patterns in breast tissue samples. Materials and Methods: The proposed methods include the computation of various features, including gray level co-occurrence matrix (GLCM) features, complex wavelet co-occurrence matrix (CWCM) features, and weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM)-inspired features. The extracted features are used in two different multivariate classifiers (a support vector machine (SVM) and a linear discriminant analysis (LDA) classifier). Before extracting features, we use color deconvolution to separate different tissue components, such as the brown-stained positive regions and the blue cellular regions, in the immuno-stained TMA images of breast tissue. Results: We present classification results based on combinations of feature measurements. The proposed complex wavelet features and the WND-CHARM features have accuracy similar to that of a human expert. Conclusions: Both human experts and the proposed automated methods have difficulties discriminating between nuclear and cytoplasmic staining patterns. This is to a large extent due to mixed staining of nucleus and cytoplasm. Methods for

  8. Automated classification of free-text pathology reports for registration of incident cases of cancer.

    Science.gov (United States)

    Jouhet, V; Defossez, G; Burgun, A; le Beux, P; Levillain, P; Ingrand, P; Claveau, V

    2012-01-01

    Our study aimed to construct and evaluate functions called "classifiers", produced by supervised machine learning techniques, in order to automatically categorize pathology reports using solely their content. Patients from the Poitou-Charentes Cancer Registry having at least one pathology report and a single non-metastatic invasive neoplasm were included. A descriptor weighting function accounting for the distribution of terms among the targeted classes was developed and compared to classic methods based on inverse document frequencies. The classification was performed with support vector machine (SVM) and Naive Bayes classifiers. Two levels of granularity were tested for both the topographical and the morphological axes of the ICD-O3 code. The ability to correctly attribute a precise ICD-O3 code and the ability to attribute the broad category defined by the International Agency for Research on Cancer (IARC) for the multiple primary cancer registration rules were evaluated using F1-measures. 5121 pathology reports produced by 35 pathologists were selected. The best performance was achieved by our class-weighted descriptor, associated with an SVM classifier. Using this method, the pathology reports were properly classified in the IARC categories with F1-measures of 0.967 for both topography and morphology. The ICD-O3 code attribution had lower performance, with a 0.715 F1-measure for topography and 0.854 for morphology. These results suggest that free-text pathology reports could be useful as a data source for automated systems in order to identify and notify new cases of cancer. Future work is needed to evaluate the improvement in performance obtained from the use of natural language processing, including the case of multiple tumor description and possible incorporation of other medical documents such as surgical reports.
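
    A rough sketch of a class-aware term weighting fed to an SVM, in the spirit of (but not reproducing) the class-weighted descriptor described above; the entropy-based weight, example reports and labels are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def class_weights(X_counts, y):
    """Weight each term by how unevenly it is spread across target classes."""
    classes = sorted(set(y))
    totals = np.vstack([
        np.asarray(X_counts[[i for i, c in enumerate(y) if c == k], :]
                   .sum(axis=0)).ravel()
        for k in classes]) + 1.0  # +1 smoothing
    p = totals / totals.sum(axis=0)          # term distribution over classes
    entropy = -(p * np.log(p)).sum(axis=0)   # low entropy = class-specific term
    return np.log(len(classes)) - entropy + 1e-3

reports = ["infiltrating ductal carcinoma of the breast",
           "adenocarcinoma of the colon infiltrating the submucosa",
           "breast tissue with no malignancy",
           "colonic mucosa within normal limits"] * 5
topo = ["breast", "colon", "breast", "colon"] * 5

vec = CountVectorizer()
X = vec.fit_transform(reports)
w = class_weights(X, topo)
clf = LinearSVC().fit(X.multiply(w).tocsr(), topo)
test = vec.transform(["ductal carcinoma, breast"]).multiply(w).tocsr()
print(clf.predict(test))  # expected: ['breast']
```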

  9. Characterization of glycosylphosphatidylinositol biosynthesis defects by clinical features, flow cytometry, and automated image analysis

    DEFF Research Database (Denmark)

    Knaus, Alexej; Pantel, Jean Tori; Pendziwiat, Manuela

    2018-01-01

    , the increasing number of individuals with a GPIBD shows that hyperphosphatasia is a variable feature that is not ideal for a clinical classification. METHODS: We studied the discriminatory power of multiple GPI-linked substrates that were assessed by flow cytometry in blood cells and fibroblasts of 39 and 14...

  10. Rapid Classification of Landsat TM Imagery for Phase 1 Stratification Using the Automated NDVI Threshold Supervised Classification (ANTSC) Methodology

    Science.gov (United States)

    William H. Cooke; Dennis M. Jacobs

    2005-01-01

    FIA annual inventories require rapid updating of pixel-based Phase 1 estimates. Scientists at the Southern Research Station are developing an automated methodology that uses a Normalized Difference Vegetation Index (NDVI) for identifying and eliminating problem FIA plots from the analysis. Problem plots are those that have questionable land use/land cover information....

  11. Automated classification of mouse pup isolation syllables: from cluster analysis to an Excel based ‘mouse pup syllable classification calculator’

    Directory of Open Access Journals (Sweden)

    Jasmine Grimsley

    2013-01-01

    Full Text Available Mouse pups vocalize at high rates when they are cold or isolated from the nest. The proportions of each syllable type produced carry information about disease state and are being used as behavioral markers for the internal state of animals. Manual classifications of these vocalizations identified ten syllable types based on their spectro-temporal features. However, manual classification of mouse syllables is time consuming and vulnerable to experimenter bias. This study uses an automated cluster analysis to identify acoustically distinct syllable types produced by CBA/CaJ mouse pups, and then compares the results to prior manual classification methods. The cluster analysis identified two syllable types, based on their frequency bands, that have continuous frequency-time structure, and two syllable types featuring abrupt frequency transitions. Although cluster analysis computed fewer syllable types than manual classification, the clusters represented well the probability distributions of the acoustic features within syllables. These probability distributions indicate that some of the manually classified syllable types are not statistically distinct. The characteristics of the four classified clusters were used to generate a Microsoft Excel-based mouse syllable classifier that rapidly categorizes syllables, with over a 90% match, into the syllable types determined by cluster analysis.
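
    A minimal sketch of the data-driven clustering idea, assuming four invented acoustic features per syllable and using the silhouette score to choose the number of acoustically distinct types; none of the feature values or counts come from the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Invented features per syllable: start freq (kHz), end freq (kHz),
# duration (ms), frequency jump size (kHz).
rng = np.random.default_rng(5)
flat_low = rng.normal([60, 61, 30, 1], [3, 3, 5, 0.5], (100, 4))
flat_high = rng.normal([80, 81, 25, 1], [3, 3, 5, 0.5], (100, 4))
jump_up = rng.normal([60, 80, 20, 20], [3, 3, 5, 2], (100, 4))
jump_down = rng.normal([80, 60, 20, 20], [3, 3, 5, 2], (100, 4))
X = StandardScaler().fit_transform(
    np.vstack([flat_low, flat_high, jump_up, jump_down]))

# Pick k by silhouette, mirroring a data-driven alternative to manual types.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```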

  12. Automated classification of self-grooming in mice using open-source software

    NARCIS (Netherlands)

    Van den Boom, B.; Pavlidi, Pavlina; Wolf, Casper M H; Mooij, Hanne A H; Willuhn, Ingo

    BACKGROUND: Manual analysis of behavior is labor intensive and subject to inter-rater variability. Although considerable progress in automation of analysis has been made, complex behavior such as grooming still lacks satisfactory automated quantification. NEW METHOD: We trained a freely available,

  13. Automated classification of self-grooming in mice using open-source software

    NARCIS (Netherlands)

    van den Boom, Bastijn J. G.; Pavlidi, Pavlina; Wolf, Casper M. H.; Mooij, Hanne A. H.; Willuhn, Ingo

    2017-01-01

    Background: Manual analysis of behavior is labor intensive and subject to inter-rater variability. Although considerable progress in automation of analysis has been made, complex behavior such as grooming still lacks satisfactory automated quantification. New method: We trained a freely available,

  14. Vertebral Body Compression Fractures and Bone Density: Automated Detection and Classification on CT Images.

    Science.gov (United States)

    Burns, Joseph E; Yao, Jianhua; Summers, Ronald M

    2017-09-01

    Purpose To create and validate a computer system with which to detect, localize, and classify compression fractures and measure bone density of thoracic and lumbar vertebral bodies on computed tomographic (CT) images. Materials and Methods Institutional review board approval was obtained, and informed consent was waived in this HIPAA-compliant retrospective study. A CT study set of 150 patients (mean age, 73 years; age range, 55-96 years; 92 women, 58 men) with (n = 75) and without (n = 75) compression fractures was assembled. All case patients were age and sex matched with control subjects. A total of 210 thoracic and lumbar vertebrae showed compression fractures and were electronically marked and classified by a radiologist. Prototype fully automated spinal segmentation and fracture detection software was then used to analyze the study set. System performance was evaluated with free-response receiver operating characteristic analysis. Results Sensitivity for detection or localization of compression fractures was 95.7% (201 of 210; 95% confidence interval [CI]: 87.0%, 98.9%), with a false-positive rate of 0.29 per patient. Additionally, sensitivity was 98.7% and specificity was 77.3% at case-based receiver operating characteristic curve analysis. Accuracy for classification by Genant type (anterior, middle, or posterior height loss) was 0.95 (107 of 113; 95% CI: 0.89, 0.98), with a weighted κ of 0.90 (95% CI: 0.81, 0.99). Accuracy for categorization by Genant height loss grade was 0.68 (77 of 113; 95% CI: 0.59, 0.76), with a weighted κ of 0.59 (95% CI: 0.47, 0.71). The average bone attenuation for T12-L4 vertebrae was 146 HU ± 29 (standard deviation) in case patients and 173 HU ± 42 in control patients, a statistically significant difference. Conclusion The fully automated system detected, localized, and classified vertebral compression fractures with high sensitivity and a low false-positive rate, and calculated vertebral bone density, on CT images. © RSNA, 2017 Online supplemental material is available for this article.
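
    The Genant scheme used for the record's type and grade labels can be written as a small function over measured vertebral heights. The grade cutoffs below follow the standard semiquantitative scheme (mild 20-25%, moderate 25-40%, severe >40% height loss); treating a single reference height as given is a simplifying assumption.

```python
def genant_classification(h_anterior, h_middle, h_posterior, h_reference):
    """Return (fracture type, Genant grade, height loss fraction).

    Heights share one unit; h_reference approximates the expected height,
    e.g. taken from adjacent normal vertebrae.
    """
    losses = {
        "anterior (wedge)": 1 - h_anterior / h_reference,
        "middle (biconcave)": 1 - h_middle / h_reference,
        "posterior (crush)": 1 - h_posterior / h_reference,
    }
    ftype, loss = max(losses.items(), key=lambda kv: kv[1])
    if loss < 0.20:
        grade = 0  # normal / no fracture
    elif loss < 0.25:
        grade = 1  # mild
    elif loss < 0.40:
        grade = 2  # moderate
    else:
        grade = 3  # severe
    return ftype, grade, round(loss, 2)

print(genant_classification(18.0, 24.0, 25.0, 26.0))
# -> ('anterior (wedge)', 2, 0.31)
```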

  15. Detection and classification of latent defects and diseases on raw French fries with multispectral imaging

    NARCIS (Netherlands)

    Noordam, J.C.; Broek, van den W.H.A.M.; Buydens, L.M.C.

    2005-01-01

    This paper describes an application of both multispectral imaging and red/green/blue (RGB) colour imaging for the discrimination between different defects and diseases on raw French fries. Four different potato cultivars generally used for French fries production are selected, from which fries are

  16. Classification of defects in honeycomb composite structure of helicopter rotor blades

    International Nuclear Information System (INIS)

    Balasko, M.; Svab, E.; Molnar, Gy.; Veres, I.

    2005-01-01

    The use of non-destructive testing methods to qualify the state of rotor blades with respect to their expected flight hours, with the aim of extending their lifetime without any risk of breakdown, is an important financial demand. In order to detect possible defects in the composite structure of the Mi-8 and Mi-24 type helicopter rotor blades used by the Hungarian Army, we performed combined neutron and X-ray radiography measurements at the Budapest Research Reactor. Several types of defects were detected, analysed and typified. Among the most frequent and important defects observed were cavities, holes and/or cracks in the sealing elements at the interface of the honeycomb structure and the section borders. Inhomogeneities of the resin materials (resin-rich or resin-starved areas) at the core-honeycomb surfaces proved to be another important point. Defects were detected in the adhesive filling, and water percolation was visualized at the sealing interfaces of the honeycomb sections. Corrosion effects and metal inclusions have also been detected.

  17. Using support vector machines with tract-based spatial statistics for automated classification of Tourette syndrome children

    Science.gov (United States)

    Wen, Hongwei; Liu, Yue; Wang, Jieqiong; Zhang, Jishui; Peng, Yun; He, Huiguang

    2016-03-01

    Tourette syndrome (TS) is a developmental neuropsychiatric disorder with the cardinal symptoms of motor and vocal tics which emerges in early childhood and fluctuates in severity in later years. To date, the neural basis of TS is not fully understood and TS has a long-term prognosis that is difficult to estimate accurately. Few studies have looked at the potential of using diffusion tensor imaging (DTI) in conjunction with machine learning algorithms to automate the classification of healthy children and TS children. Here we apply the Tract-Based Spatial Statistics (TBSS) method to 44 TS children and 48 age- and gender-matched healthy children in order to extract the diffusion values from each voxel in the white matter (WM) skeleton, and a feature selection algorithm (ReliefF) was used to select the most salient voxels for subsequent classification with a support vector machine (SVM). We use nested cross-validation to yield an unbiased assessment of the classification method and prevent overestimation. Our method achieved an accuracy of 88.04%, a sensitivity of 88.64% and a specificity of 87.50%; the peak performance of the SVM classifier was achieved using the axial diffusivity (AD) metric, demonstrating the potential of a joint TBSS and SVM pipeline for fast, objective classification of healthy and TS children. These results support that our methods may be useful for the early identification of subjects with TS, and hold promise for predicting prognosis and treatment outcome for individuals with TS.
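
    A sketch of nested cross-validation with in-fold feature selection, the pattern this record uses to avoid optimistic bias; SelectKBest stands in for ReliefF (which scikit-learn does not ship), and the synthetic "voxel" features are placeholders for the TBSS skeleton values.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Stand-in for per-voxel diffusion values on the white-matter skeleton.
X, y = make_classification(n_samples=92, n_features=2000, n_informative=30,
                           random_state=0)

# Feature selection sits inside the pipeline so each training fold selects
# voxels independently and nothing leaks into the held-out test fold.
pipe = Pipeline([("select", SelectKBest(f_classif)),
                 ("svm", SVC(kernel="linear"))])
grid = {"select__k": [100, 500, 1000], "svm__C": [0.1, 1, 10]}
inner = GridSearchCV(pipe, grid, cv=5)       # inner loop: tuning
scores = cross_val_score(inner, X, y, cv=5)  # outer loop: unbiased estimate
print(f"nested-CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```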

  18. An on-line automated sleep-wake classification system for laboratory animals

    NARCIS (Netherlands)

    Witting, W; van der Werf, D; Mirmiran, M

    A computerized sleep-wake classification program is presented that is capable of classifying sleep-wake states on-line in four animals simultaneously. Every 10 s the classification algorithm assigns sleep-wake states on the basis of the power spectrum of an EEG signal and the standard deviation of

  19. Automated retinal nerve fiber layer defect detection using fundus imaging in glaucoma.

    Science.gov (United States)

    Panda, Rashmi; Puhan, N B; Rao, Aparna; Padhy, Debananda; Panda, Ganapati

    2018-06-01

    Retinal nerve fiber layer defect (RNFLD) provides early objective evidence of structural changes in glaucoma. RNFLD detection is currently carried out using imaging modalities like OCT and GDx, which are expensive for routine practice. In this regard, we propose a novel automatic method for RNFLD detection and angular width quantification using cost-effective red-free fundus images, to be practically useful for computer-assisted glaucoma risk assessment. After blood vessel inpainting and CLAHE-based contrast enhancement, the initial boundary pixels are identified by local minima analysis of the 1-D intensity profiles on concentric circles. The true boundary pixels are classified using a random forest trained on the newly proposed cumulative zero count local binary pattern (CZC-LBP) and directional differential energy (DDE) features, along with Shannon entropy, Tsallis entropy and intensity features. Finally, the RNFLD angular width is obtained by random sample consensus (RANSAC) line fitting on the detected set of boundary pixels. The proposed method is found to achieve high RNFLD detection performance on a newly created dataset, with a sensitivity (SN) of 0.7821 at 0.2727 false positives per image (FPI) and an area under curve (AUC) value of 0.8733. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Automated classification of limb fractures from free-text radiology reports using a clinician-informed gazetteer methodology

    Directory of Open Access Journals (Sweden)

    Amol Wagholikar

    2013-05-01

    Full Text Available Background: Timely diagnosis and reporting of patient symptoms in hospital emergency departments (ED) is a critical component of health services delivery. However, due to dispersed information resources and a vast amount of manual processing of unstructured information, accurate point-of-care diagnosis is often difficult. Aims: The aim of this research is to report an initial experimental evaluation of a clinician-informed automated method for the issue of initial misdiagnoses associated with delayed receipt of unstructured radiology reports. Method: A method was developed that resembles clinical reasoning for identifying limb abnormalities. The method consists of a gazetteer of keywords related to radiological findings; the method classifies an X-ray report as abnormal if it contains evidence contained in the gazetteer. A set of 99 narrative reports of radiological findings was sourced from a tertiary hospital. Reports were manually assessed by two clinicians and discrepancies were validated by a third expert ED clinician; the final manual classification generated by the expert ED clinician was used as ground truth to empirically evaluate the approach. Results: The automated method that attempts to individuate limb abnormalities by searching for keywords expressed by clinicians achieved an F-measure of 0.80 and an accuracy of 0.80. Conclusion: While the automated clinician-driven method achieved promising performance, a number of avenues for improvement were identified using advanced natural language processing (NLP) and machine learning techniques.
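
    A minimal sketch of a gazetteer classifier of this kind; the keyword list and negation pattern below are invented examples, not the study's clinician-elicited terms.

```python
import re

GAZETTEER = {  # hypothetical finding keywords for limb X-ray reports
    "fracture", "fractured", "dislocation", "subluxation",
    "avulsion", "displaced", "comminuted", "buckle", "greenstick",
}
NEGATION = re.compile(r"\bno (evidence of |acute )?(fracture|dislocation)")

def classify_report(text):
    """Label an X-ray report 'abnormal' if it contains gazetteer evidence."""
    lowered = text.lower()
    if NEGATION.search(lowered):
        return "normal"
    tokens = set(re.findall(r"[a-z]+", lowered))
    return "abnormal" if tokens & GAZETTEER else "normal"

print(classify_report("Transverse fracture of the distal radius."))  # abnormal
print(classify_report("No evidence of fracture or dislocation."))    # normal
```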

  1. Precision automation of cell type classification and sub-cellular fluorescence quantification from laser scanning confocal images

    Directory of Open Access Journals (Sweden)

    Hardy Craig Hall

    2016-02-01

    Full Text Available While novel whole-plant phenotyping technologies have been successfully implemented into functional genomics and breeding programs, the potential of automated phenotyping with cellular resolution is largely unexploited. Laser scanning confocal microscopy has the potential to close this gap by providing spatially highly resolved images containing anatomic as well as chemical information on a subcellular basis. However, in the absence of automated methods, the assessment of the spatial patterns and abundance of fluorescent markers with subcellular resolution is still largely qualitative and time-consuming. Recent advances in image acquisition and analysis, coupled with improvements in microprocessor performance, have brought such automated methods within reach, so that information from thousands of cells per image for hundreds of images may be derived in an experimentally convenient time-frame. Here, we present a MATLAB-based analytical pipeline to (1) segment radial plant organs into individual cells, (2) classify cells into cell type categories based upon random forest classification, (3) divide each cell into sub-regions, and (4) quantify fluorescence intensity to a subcellular degree of precision for a separate fluorescence channel. In this research advance, we demonstrate the precision of this analytical process for the relatively complex tissues of Arabidopsis hypocotyls at various stages of development. High speed and robustness make our approach suitable for phenotyping of large collections of stem-like material and other tissue types.

  2. Exploratory analysis of methods for automated classification of laboratory test orders into syndromic groups in veterinary medicine.

    Science.gov (United States)

    Dórea, Fernanda C; Muckle, C Anne; Kelton, David; McClure, J T; McEwen, Beverly J; McNab, W Bruce; Sanchez, Javier; Revie, Crawford W

    2013-01-01

    Recent focus on earlier detection of pathogen introduction in human and animal populations has led to the development of surveillance systems based on automated monitoring of health data. Real- or near real-time monitoring of pre-diagnostic data requires automated classification of records into syndromes--syndromic surveillance--using algorithms that incorporate medical knowledge in a reliable and efficient way, while remaining comprehensible to end users. This paper describes the application of two machine learning methods (Naïve Bayes and Decision Trees) and rule-based methods to extract syndromic information from laboratory test requests submitted to a veterinary diagnostic laboratory. High performance (F1-macro = 0.9995) was achieved through the use of a rule-based syndrome classifier, based on rule induction followed by manual modification during the construction phase, which also resulted in clear interpretability of the resulting classification process. An unmodified rule induction algorithm achieved an F1-micro score of 0.979, though this fell to 0.677 when performance for individual classes was averaged in an unweighted manner (F1-macro), due to the fact that the algorithm failed to learn 3 of the 16 classes from the training set. Decision Trees showed equal interpretability to the rule-based approaches, but achieved an F1-micro score of 0.923 (falling to 0.311 when classes are given equal weight). A Naïve Bayes classifier learned all classes and achieved high performance (F1-micro = 0.994 and F1-macro = 0.955); however, the classification process is not transparent to the domain experts. The use of a manually customised rule set allowed for the development of a system for classification of laboratory tests into syndromic groups with very high performance, and high interpretability by the domain experts. Further research is required to develop internal validation rules in order to establish automated methods to update model rules without user

  3. Exploratory analysis of methods for automated classification of laboratory test orders into syndromic groups in veterinary medicine.

    Directory of Open Access Journals (Sweden)

    Fernanda C Dórea

    Full Text Available BACKGROUND: Recent focus on earlier detection of pathogen introduction in human and animal populations has led to the development of surveillance systems based on automated monitoring of health data. Real- or near real-time monitoring of pre-diagnostic data requires automated classification of records into syndromes--syndromic surveillance--using algorithms that incorporate medical knowledge in a reliable and efficient way, while remaining comprehensible to end users. METHODS: This paper describes the application of two machine learning methods (Naïve Bayes and Decision Trees) and rule-based methods to extract syndromic information from laboratory test requests submitted to a veterinary diagnostic laboratory. RESULTS: High performance (F1-macro = 0.9995) was achieved through the use of a rule-based syndrome classifier, based on rule induction followed by manual modification during the construction phase, which also resulted in clear interpretability of the resulting classification process. An unmodified rule induction algorithm achieved an F1-micro score of 0.979, though this fell to 0.677 when performance for individual classes was averaged in an unweighted manner (F1-macro), due to the fact that the algorithm failed to learn 3 of the 16 classes from the training set. Decision Trees showed equal interpretability to the rule-based approaches, but achieved an F1-micro score of 0.923 (falling to 0.311 when classes are given equal weight). A Naïve Bayes classifier learned all classes and achieved high performance (F1-micro = 0.994 and F1-macro = 0.955); however, the classification process is not transparent to the domain experts. CONCLUSION: The use of a manually customised rule set allowed for the development of a system for classification of laboratory tests into syndromic groups with very high performance, and high interpretability by the domain experts. Further research is required to develop internal validation rules in order to establish

  4. Automated Surface Classification of SRF Cavities for the Investigation of the Influence of Surface Properties onto the Operational Performance

    International Nuclear Information System (INIS)

    Wenskat, Marc

    2015-07-01

    Superconducting niobium radio-frequency cavities are fundamental for the European XFEL and the International Linear Collider. To exploit the operational advantages of superconducting cavities, the inner surface has to fulfill quite demanding requirements. The surface roughness and cleanliness have improved over the last decades and, with them, the maximal accelerating fields achieved. Still, limitations of the maximal achieved accelerating field are observed which are not explained by localized geometrical defects or impurities. The scope of this thesis is a better understanding of these limitations in defect-free cavities based on global, rather than local, surface properties. For this goal, more than 30 cavities underwent subsequent surface treatments, cold RF tests and optical inspections within the ILC-HiGrade research program and the XFEL cavity production. An algorithm was developed which allows an automated surface characterization based on an optical inspection robot. This algorithm delivers a set of optical surface properties which describe the inner cavity surface. These optical surface properties provide a framework for quality assurance of the fabrication procedures. Furthermore, they show promising results for a better understanding of the observed limitations in defect-free cavities.

  5. Automated Processing of Imaging Data through Multi-tiered Classification of Biological Structures Illustrated Using Caenorhabditis elegans.

    Directory of Open Access Journals (Sweden)

    Mei Zhan

    2015-04-01

    Quantitative imaging has become a vital technique in biological discovery and clinical diagnostics; a plethora of tools have recently been developed to enable new and accelerated forms of biological investigation. Increasingly, the capacity for high-throughput experimentation provided by new imaging modalities, contrast techniques, microscopy tools, microfluidics and computer controlled systems shifts the experimental bottleneck from the level of physical manipulation and raw data collection to automated recognition and data processing. Yet, despite their broad importance, image analysis solutions to address these needs have been narrowly tailored. Here, we present a generalizable formulation for autonomous identification of specific biological structures that is applicable to many problems. The process flow architecture we present here utilizes standard image processing techniques and the multi-tiered application of classification models such as support vector machines (SVM). These low-level functions are readily available in a large array of image processing software packages and programming languages. Our framework is thus easy to implement at the modular level and provides a specific high-level architecture to guide the solution of more complicated image-processing problems. We demonstrate the utility of the classification routine by developing two specific classifiers as a toolset for automation and cell identification in the model organism Caenorhabditis elegans. To serve a common need for automated high-resolution imaging and behavior applications in the C. elegans research community, we contribute a ready-to-use classifier for the identification of the head of the animal under bright field imaging. Furthermore, we extend our framework to address the pervasive problem of cell-specific identification under fluorescent imaging, which is critical for biological investigation in multicellular organisms or tissues. Using these examples as a

  6. Classification of acute decompensated heart failure: an automated algorithm compared with a physician reviewer panel: the Atherosclerosis Risk in Communities study.

    Science.gov (United States)

    Loehr, Laura R; Agarwal, Sunil K; Baggett, Chris; Wruck, Lisa M; Chang, Patricia P; Solomon, Scott D; Shahar, Eyal; Ni, Hanyu; Rosamond, Wayne D; Heiss, Gerardo

    2013-07-01

    An algorithm to classify heart failure (HF) end points inclusive of contemporary measures of biomarkers and echocardiography was recently proposed by an international expert panel. Our objective was to assess agreement of HF classification by this contemporaneous algorithm with that by a standardized physician reviewer panel, when applied to data abstracted from community-based hospital records. During 2005-2007, all hospitalizations were identified from 4 US communities under surveillance as part of the Atherosclerosis Risk in Communities (ARIC) study. Potential HF hospitalizations were sampled by International Classification of Diseases discharge codes and demographics from men and women aged ≥ 55 years. The HF classification algorithm was automated and applied to 2729 hospitalizations (n=13 854 weighted) in which either brain natriuretic peptide measures or ejection fraction were documented (mean age, 75 years). There were 1403 (54%; n=7534 weighted) events classified as acute decompensated HF by the automated algorithm, and 1748 (68%; n=9276 weighted) such events by the ARIC reviewer panel. The chance-corrected agreement between acute decompensated HF by physician reviewer panel and the automated algorithm was moderate (κ=0.39). Sensitivity and specificity of the automated algorithm with the ARIC reviewer panel as the referent standard were 0.68 (95% confidence interval, 0.67-0.69) and 0.75 (95% confidence interval, 0.74-0.76), respectively. Although the automated classification improved efficiency and decreased costs, its accuracy in classifying HF hospitalizations was modest compared with a standardized physician reviewer panel.

  7. Systems Operation Studies for Automated Guideway Transit Systems - Classification and Definition of AGT Systems

    Science.gov (United States)

    1980-02-01

    The report describes the development of an AGT classification structure. Five classes are defined based on three system characteristics: service type, minimum travelling unit capacity, and maximum operating velocity. The five classes defined are: Per...

  8. Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks.

    Science.gov (United States)

    Peker, Musa; Şen, Baha; Gürüler, Hüseyin

    2015-02-01

    The effect of anesthesia on the patient is referred to as the depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, the Nvidia CUDA parallel programming and computing platform, which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect the anesthetic depth level on a related electroencephalogram (EEG) data set, which is rather complex and large. Moreover, achieving more anesthetic levels with a rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in a shorter time.
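
    The study implements its acceleration directly on the Nvidia CUDA platform; as a rough sketch of the same idea in a higher-level framework, the PyTorch fragment below moves a small feed-forward classifier and its training loop onto the GPU. The feature dimension, class count and network shape are assumptions, not the paper's architecture.

```python
# Minimal GPU-accelerated classifier training sketch (assumed shapes).
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(          # small feed-forward classifier
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 4),          # e.g. 4 depth-of-anesthesia levels (assumed)
).to(device)                    # parameters live on the GPU when available

x = torch.randn(256, 64, device=device)    # a batch of EEG feature vectors
y = torch.randint(0, 4, (256,), device=device)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):            # every forward/backward pass runs on the GPU
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```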

  9. Accuracy of automated classification of major depressive disorder as a function of symptom severity

    Directory of Open Access Journals (Sweden)

    Rajamannar Ramasubbu, MD, FRCPC, MSc

    2016-01-01

    Conclusions: Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but the contribution of brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.

  10. Support-vector-machine tree-based domain knowledge learning toward automated sports video classification

    Science.gov (United States)

    Xiao, Guoqiang; Jiang, Yang; Song, Gang; Jiang, Jianmin

    2010-12-01

    We propose a support-vector-machine (SVM) tree to hierarchically learn from domain knowledge represented by low-level features toward automatic classification of sports videos. The proposed SVM tree adopts a binary tree structure to exploit the nature of the SVM's binary classification, where each internal node is a single SVM learning unit and each external node represents a classified output type. Such an SVM tree presents a number of advantages, including: (1) low computing cost; (2) integrated learning and classification while preserving each individual SVM's learning strength; and (3) flexibility in both structure and learning modules, where different numbers of nodes and features can be added to address specific learning requirements, and various learning models, such as neural networks, AdaBoost, hidden Markov models and dynamic Bayesian networks, can be added as individual nodes. Experiments show that the proposed SVM tree achieves good performance in sports video classification.
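
    A minimal sketch of the binary SVM-tree idea follows: each internal node is one binary SVM that routes a sample left or right until a leaf class is reached. The class names, features and two-node layout are illustrative only.

```python
# Toy binary SVM tree: internal nodes are SVMs, leaves are class labels.
from sklearn.svm import SVC
import numpy as np

class SVMNode:
    def __init__(self, left, right):
        self.svm = SVC(kernel="rbf")
        self.left, self.right = left, right   # subtree or leaf label (str)

    def fit(self, X, y, split):
        # `split` maps each original label to 0 (go left) or 1 (go right)
        self.svm.fit(X, np.array([split[label] for label in y]))

    def predict_one(self, x):
        branch = self.left if self.svm.predict(x[None])[0] == 0 else self.right
        return branch if isinstance(branch, str) else branch.predict_one(x)

# toy data: three "sports" classes as clusters in a 2-D feature space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, (30, 2)) for m in (0, 3, 6)])
y = ["soccer"] * 30 + ["tennis"] * 30 + ["swimming"] * 30

leaf = SVMNode("tennis", "swimming")           # second-level node
root = SVMNode("soccer", leaf)                 # root separates soccer vs rest
root.fit(X, y, {"soccer": 0, "tennis": 1, "swimming": 1})
leaf.fit(X[30:], y[30:], {"tennis": 0, "swimming": 1})
print(root.predict_one(np.array([6.1, 5.9])))  # -> "swimming"
```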

  11. Genetic algorithm based feature selection combined with dual classification for the automated detection of proliferative diabetic retinopathy.

    Science.gov (United States)

    Welikala, R A; Fraz, M M; Dehmeshki, J; Hoppe, A; Tah, V; Mann, S; Williamson, T H; Barman, S A

    2015-07-01

    Proliferative diabetic retinopathy (PDR) is a condition that carries a high risk of severe visual impairment. The hallmark of PDR is the growth of abnormal new vessels. In this paper, an automated method for the detection of new vessels in retinal images is presented, based on a dual classification approach. Two vessel segmentation approaches are applied to create two separate binary vessel maps, each of which holds vital information. Local morphology features are measured from each binary vessel map to produce two separate 4-D feature vectors. Independent classification is performed for each feature vector using a support vector machine (SVM) classifier, and the system then combines these individual outcomes to produce a final decision. This is followed by the creation of additional features to generate 21-D feature vectors, which feed into a genetic algorithm based feature selection approach with the objective of finding feature subsets that improve the performance of the classification. Sensitivity and specificity results using a dataset of 60 images are 0.9138 and 0.9600, respectively, on a per-patch basis, and 1.000 and 0.975, respectively, on a per-image basis.

  12. Rule-driven defect detection in CT images of hardwood logs

    Science.gov (United States)

    Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt

    2000-01-01

    This paper deals with automated detection and identification of internal defects in hardwood logs using computed tomography (CT) images. We have developed a system that employs artificial neural networks to perform tentative classification of logs on a pixel-by-pixel basis. This approach achieves a high level of classification accuracy for several hardwood species (...

  13. An Automated Method for Semantic Classification of Regions in Coastal Images

    NARCIS (Netherlands)

    Hoonhout, B.M.; Radermacher, M.; Baart, F.; Van der Maaten, L.J.P.

    2015-01-01

    Large, long-term coastal imagery datasets are nowadays a low-cost source of information for various coastal research disciplines. However, the applicability of many existing algorithms for coastal image analysis is limited for these large datasets due to a lack of automation and robustness.

  14. Automated and unbiased image analyses as tools in phenotypic classification of small-spored Alternaria species

    DEFF Research Database (Denmark)

    Andersen, Birgitte; Hansen, Michael Edberg; Smedsgaard, Jørn

    2005-01-01

    often has been broadly applied to various morphologically and chemically distinct groups of isolates from different hosts. The purpose of this study was to develop and evaluate automated and unbiased image analysis systems that will analyze different phenotypic characters and facilitate testing...

  15. An Automated Cropland Classification Algorithm (ACCA) for Tajikistan by combining Landsat, MODIS, and secondary data

    Science.gov (United States)

    Thenkabail, Prasad S.; Wu, Zhuoting

    2012-01-01

    The overarching goal of this research was to develop and demonstrate an automated Cropland Classification Algorithm (ACCA) that will rapidly, routinely, and accurately classify agricultural cropland extent, areas, and characteristics (e.g., irrigated vs. rainfed) over large areas such as a country or a region through combination of multi-sensor remote sensing and secondary data. In this research, a rule-based ACCA was conceptualized, developed, and demonstrated for the country of Tajikistan using mega file data cubes (MFDCs) involving data from Landsat Global Land Survey (GLS), Landsat Enhanced Thematic Mapper Plus (ETM+) 30 m, Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m time-series, a suite of secondary data (e.g., elevation, slope, precipitation, temperature), and in situ data. First, the process involved producing an accurate reference (or truth) cropland layer (TCL), consisting of cropland extent, areas, and irrigated vs. rainfed cropland areas, for the entire country of Tajikistan based on MFDC of year 2005 (MFDC2005). The methods involved in producing TCL included using ISOCLASS clustering, Tasseled Cap bi-spectral plots, spectro-temporal characteristics from MODIS 250 m monthly normalized difference vegetation index (NDVI) maximum value composites (MVC) time-series, and textural characteristics of higher resolution imagery. The TCL statistics accurately matched with the national statistics of Tajikistan for irrigated and rainfed croplands, where about 70% of croplands were irrigated and the rest rainfed. Second, a rule-based ACCA was developed to replicate the TCL accurately (~80% producer’s and user’s accuracies or within 20% quantity disagreement involving about 10 million Landsat 30 m sized cropland pixels of Tajikistan). Development of ACCA was an iterative process involving a series of rules that are coded, refined, tweaked, and re-coded until ACCA derived croplands (ACLs) match accurately with TCLs. Third, the ACCA derived cropland
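
    To make the flavour of such cascaded decision rules concrete, here is a minimal sketch in the same spirit, assuming per-pixel NDVI, slope and precipitation layers; every threshold below is invented for illustration and none of them are ACCA's actual rules for Tajikistan.

```python
# Toy rule-based cropland classifier over per-pixel raster layers.
import numpy as np

def classify_cropland(ndvi_peak, ndvi_amplitude, slope_deg, precip_mm):
    """Return 0 = non-cropland, 1 = irrigated cropland, 2 = rainfed cropland."""
    out = np.zeros(ndvi_peak.shape, dtype=np.uint8)
    vegetated = (ndvi_peak > 0.5) & (ndvi_amplitude > 0.2)  # crop-like cycle
    arable = slope_deg < 8                                  # terrain rule
    crop = vegetated & arable
    # crops sustaining high NDVI despite low rainfall are flagged irrigated
    out[crop & (precip_mm < 300)] = 1
    out[crop & (precip_mm >= 300)] = 2
    return out

# one 2x2 tile of per-pixel layers (stand-ins for Landsat/MODIS/secondary data)
ndvi_peak = np.array([[0.8, 0.3], [0.7, 0.6]])
ndvi_amp  = np.array([[0.4, 0.1], [0.3, 0.3]])
slope     = np.array([[2.0, 1.0], [15.0, 3.0]])
precip    = np.array([[150., 400.], [250., 500.]])
print(classify_cropland(ndvi_peak, ndvi_amp, slope, precip))
# [[1 0]
#  [0 2]]
```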

  16. An Automated Cropland Classification Algorithm (ACCA for Tajikistan by Combining Landsat, MODIS, and Secondary Data

    Directory of Open Access Journals (Sweden)

    Prasad S. Thenkabail

    2012-09-01

    The overarching goal of this research was to develop and demonstrate an automated Cropland Classification Algorithm (ACCA) that will rapidly, routinely, and accurately classify agricultural cropland extent, areas, and characteristics (e.g., irrigated vs. rainfed) over large areas such as a country or a region through combination of multi-sensor remote sensing and secondary data. In this research, a rule-based ACCA was conceptualized, developed, and demonstrated for the country of Tajikistan using mega file data cubes (MFDCs) involving data from Landsat Global Land Survey (GLS), Landsat Enhanced Thematic Mapper Plus (ETM+) 30 m, Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m time-series, a suite of secondary data (e.g., elevation, slope, precipitation, temperature), and in situ data. First, the process involved producing an accurate reference (or truth) cropland layer (TCL), consisting of cropland extent, areas, and irrigated vs. rainfed cropland areas, for the entire country of Tajikistan based on MFDC of year 2005 (MFDC2005). The methods involved in producing TCL included using ISOCLASS clustering, Tasseled Cap bi-spectral plots, spectro-temporal characteristics from MODIS 250 m monthly normalized difference vegetation index (NDVI) maximum value composites (MVC) time-series, and textural characteristics of higher resolution imagery. The TCL statistics accurately matched with the national statistics of Tajikistan for irrigated and rainfed croplands, where about 70% of croplands were irrigated and the rest rainfed. Second, a rule-based ACCA was developed to replicate the TCL accurately (~80% producer’s and user’s accuracies or within 20% quantity disagreement involving about 10 million Landsat 30 m sized cropland pixels of Tajikistan). Development of ACCA was an iterative process involving a series of rules that are coded, refined, tweaked, and re-coded until ACCA derived croplands (ACLs) match accurately with TCLs. Third, the ACCA derived

  17. Automated classification of histopathology images of prostate cancer using a Bag-of-Words approach

    Science.gov (United States)

    Sanghavi, Foram M.; Agaian, Sos S.

    2016-05-01

    The goals of this paper are (1) to test computer-aided classification of prostate cancer histopathology images based on the Bag-of-Words (BoW) approach, (2) to evaluate the performance of the proposed method in classifying grades 3 and 4 against the results of the approach proposed by Khurd et al. in [9], and (3) to classify the different grades of cancer, namely grades 0, 3, 4, and 5, using the proposed approach. System performance is assessed using 132 prostate cancer histopathology images of different grades. The performance of SURF features is also analyzed by comparing the results with SIFT features using different cluster sizes. The results show 90.15% accuracy in the detection of prostate cancer images using SURF features with 75 clusters for k-means clustering. The results also showed higher sensitivity for SURF-based BoW classification compared to SIFT-based BoW.
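
    A compact sketch of the Bag-of-Words pipeline evaluated here, under the assumption of an OpenCV/scikit-learn environment: local descriptors feed a k-means visual vocabulary, each image becomes a word histogram, and an SVM classifies the histograms. SIFT stands in for SURF (which ships only in OpenCV's non-free contrib build), and k = 75 echoes the best run reported above.

```python
# Bag-of-Words image classification sketch: descriptors -> vocabulary ->
# per-image histogram -> SVM. Inputs are grayscale uint8 arrays + labels.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def descriptors(img_gray):
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(img_gray, None)
    return desc if desc is not None else np.empty((0, 128), np.float32)

def bow_histogram(desc, vocab):
    words = vocab.predict(desc.astype(np.float32))
    hist, _ = np.histogram(words, bins=vocab.n_clusters,
                           range=(0, vocab.n_clusters))
    return hist / max(hist.sum(), 1)      # L1-normalised word counts

def train_bow(train_imgs, labels, k=75):  # k=75 as in the reported best run
    all_desc = np.vstack([descriptors(im) for im in train_imgs])
    vocab = KMeans(n_clusters=k, n_init=10).fit(all_desc)
    X = np.array([bow_histogram(descriptors(im), vocab) for im in train_imgs])
    clf = SVC(kernel="rbf").fit(X, labels)
    return vocab, clf
```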

  18. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  19. Use of self-organizing maps for classification of defects in the tubes from the steam generator of nuclear power plants

    International Nuclear Information System (INIS)

    Mesquita, Roberto Navarro de

    2002-01-01

    This thesis develops a new method for classifying different steam generator tube defects in nuclear power plants using eddy current test signals. The method uses self-organizing maps to compare how efficiently different signal characteristics identify and classify these defects. A multiple-inference system is proposed that combines the classifications of maps trained on the different extracted characteristics to infer the final defect type. The feature extraction methods used are the wavelet zero-crossings representation, linear predictive coding (LPC), and other basic time-domain signal representations such as magnitude and phase. Many feature vectors are obtained from combinations of these extracted characteristics. These vectors are tested for their ability to classify the defects, and the best ones are applied to the multiple-inference system. A systematic study of pre-processing, calibration and analysis methods for steam generator tube defect signals in nuclear power plants is presented. The efficiency of the method is demonstrated, and characteristic maps with the main prototypes are obtained for each steam generator tube defect type. (author)
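
    A sketch of the self-organizing-map step using the minisom package (an assumed stand-in, not the thesis's implementation): feature vectors extracted from eddy current signals are mapped onto a grid, map units are labelled by majority vote of the training signals they attract, and new signals inherit the label of their winning unit. The features, defect names and map size here are synthetic.

```python
# SOM-based defect classification sketch with synthetic signal features.
import numpy as np
from minisom import MiniSom
from collections import Counter, defaultdict

rng = np.random.default_rng(1)
# stand-ins for extracted signal features (e.g. LPC or wavelet coefficients)
X = np.vstack([rng.normal(m, 0.3, (40, 8)) for m in (0.0, 1.0, 2.0)])
labels = ["pit"] * 40 + ["crack"] * 40 + ["wear"] * 40

som = MiniSom(6, 6, input_len=8, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(X, 2000)

# label each map unit by majority vote of the training samples it wins
votes = defaultdict(Counter)
for x, lab in zip(X, labels):
    votes[som.winner(x)][lab] += 1
unit_label = {u: c.most_common(1)[0][0] for u, c in votes.items()}

new_signal = rng.normal(2.0, 0.3, 8)
print(unit_label.get(som.winner(new_signal), "unknown"))  # likely "wear"
```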

  20. Technical Note: Semi-automated classification of time-lapse RGB imagery for a remote Greenlandic river

    Science.gov (United States)

    Gleason, C. J.; Smith, L. C.; Finnegan, D. C.; LeWinter, A. L.; Pitcher, L. H.; Chu, V. W.

    2015-01-01

    River systems in remote environments are often challenging to monitor and understand where traditional gauging apparatus is difficult to install or where safety concerns prohibit field measurements. In such cases, remote sensing, especially terrestrial time-lapse imaging platforms, offers a means to better understand these fluvial systems. One such environment is found at the proglacial Isortoq River in southwest Greenland, a river whose constantly shifting floodplain and remote Arctic location make gauging and in situ measurements all but impossible. In order to derive relevant hydraulic parameters for this river, two RGB cameras were installed in July 2011, and by September 2012 they had collected over 10 000 half-hourly time-lapse images of the river. Existing approaches for extracting hydraulic parameters from RGB imagery require manual or supervised classification of images into water and non-water areas, a task that was impractical for the volume of data in this study. As such, automated image filters were developed that removed images with environmental obstacles (e.g. shadows, sun glint, snow) from the processing stream. Further image filtering was accomplished via a novel automated histogram-similarity filtering process. This similarity filtering allowed successful (mean accuracy 79.6%) supervised classification of filtered images from training data collected from just 10% of those images. Effective width, a hydraulic parameter highly correlated with discharge in braided rivers, was extracted from these classified images, producing a hydrograph proxy for the Isortoq River between 2011 and 2012. This hydrograph proxy shows agreement with historic flooding observed in other parts of Greenland in July 2012 and offers promise that the imaging platform and processing methodology presented here will be useful for future monitoring studies of remote rivers.
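
    A minimal sketch of the histogram-similarity filtering idea, assuming RGB frames as numpy arrays: images whose colour histograms correlate poorly with those of known-good reference frames are dropped before classification. The 0.9 threshold is an invented value, not the study's.

```python
# Histogram-similarity image filter: drop frames unlike known-good ones.
import numpy as np

def rgb_histogram(img, bins=32):
    # concatenate per-channel histograms into one normalised vector
    h = np.concatenate([np.histogram(img[..., c], bins=bins,
                                     range=(0, 255))[0] for c in range(3)])
    return h / h.sum()

def keep_image(img, reference_hists, threshold=0.9):
    h = rgb_histogram(img)
    # Pearson correlation against each reference histogram
    sims = [np.corrcoef(h, ref)[0, 1] for ref in reference_hists]
    return max(sims) >= threshold   # similar enough to a known-good image

good = np.full((100, 100, 3), 120, np.uint8)      # stand-in reference frame
glinty = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
refs = [rgb_histogram(good)]
print(keep_image(good, refs), keep_image(glinty, refs))  # True False
```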

  1. Automated Classification of Variable Stars in the Asteroseismology Program of the Kepler Space Mission

    DEFF Research Database (Denmark)

    Blomme, J.; Debosscher, J.; De Ridder, J.

    2010-01-01

    We present the first results of the application of supervised classification methods to the Kepler Q1 long-cadence light curves of a subsample of 2288 stars measured in the asteroseismology program of the mission. The methods, originally developed in the framework of the CoRoT and Gaia space...

  2. Enhancing social tagging with automated keywords from the Dewey Decimal Classification

    DEFF Research Database (Denmark)

    Golub, Koraljka; Lykke, Marianne; Tudhope, Duglas

    2014-01-01

    Purpose – The purpose of this paper is to explore the potential of applying the Dewey Decimal Classification (DDC) as an established knowledge organization system (KOS) for enhancing social tagging, with the ultimate purpose of improving subject indexing and information retrieval. Design...

  3. Exploring repetitive DNA landscapes using REPCLASS, a tool that automates the classification of transposable elements in eukaryotic genomes.

    Science.gov (United States)

    Feschotte, Cédric; Keswani, Umeshkumar; Ranganathan, Nirmal; Guibotsy, Marcel L; Levine, David

    2009-07-23

    Eukaryotic genomes contain large amounts of repetitive DNA, most of which is derived from transposable elements (TEs). Progress has been made in developing computational tools for ab initio identification of repeat families, but there is an urgent need for tools to automate the annotation of TEs in genome sequences. Here we introduce REPCLASS, a tool that automates the classification of TE sequences. Using control repeat libraries, we show that the program can accurately classify virtually any known TE type. Combining REPCLASS with ab initio repeat finding in the genomes of Caenorhabditis elegans and Drosophila melanogaster allowed us to recover the contrasting TE landscapes characteristic of these species. Unexpectedly, REPCLASS also uncovered several novel TE families in both genomes, augmenting the TE repertoire of these model species. When applied to the genomes of distant Caenorhabditis and Drosophila species, the approach revealed a remarkable conservation of TE composition profiles within each genus, despite substantial interspecific covariations in genome size and in the number of TEs and TE families. Lastly, we applied REPCLASS to analyze 10 fungal genomes from a wide taxonomic range, most of which have not been analyzed for TE content previously. The results showed that TE diversity varies widely across the fungi "kingdom" and appears to correlate positively with genome size, in particular for DNA transposons. Together, these data validate REPCLASS as a powerful tool for exploring the repetitive DNA landscapes of eukaryotes and for shedding light onto the evolutionary forces shaping TE diversity and genome architecture.

  4. Automated segmentation of thyroid gland on CT images with multi-atlas label fusion and random classification forest

    Science.gov (United States)

    Liu, Jiamin; Chang, Kevin; Kim, Lauren; Turkbey, Evrim; Lu, Le; Yao, Jianhua; Summers, Ronald

    2015-03-01

    The thyroid gland plays an important role in clinical practice, especially for radiation therapy treatment planning. For patients with head and neck cancer, radiation therapy requires a precise delineation of the thyroid gland to be spared on the pre-treatment planning CT images to avoid thyroid dysfunction. In the current clinical workflow, the thyroid gland is normally delineated manually by radiologists or radiation oncologists, which is time consuming and error prone. Therefore, a system for automated segmentation of the thyroid is desirable. However, automated segmentation of the thyroid is challenging because the thyroid is inhomogeneous and surrounded by structures that have similar intensities. In this work, the thyroid gland segmentation is initially estimated by a multi-atlas label fusion (MALF) algorithm, and the segmentation is then refined by supervised statistical learning based voxel labeling with a random forest (RF) algorithm. MALF transfers expert-labeled thyroids from atlases to a target image using deformable registration, and errors produced by label transfer are reduced by label fusion, which combines the results produced by all atlases into a consensus solution. The RF then employs an ensemble of decision trees that are trained on labeled thyroids to recognize features. The trained forest classifier is applied to the thyroid estimated by the MALF via voxel scanning to assign the class-conditional probability. Voxels from the expert-labeled thyroids in CT volumes are treated as positive classes, and background non-thyroid voxels as negatives. We applied this automated thyroid segmentation system to CT scans of 20 patients. The results showed that the MALF achieved an overall Dice Similarity Coefficient (DSC) of 0.75 and that the RF classification further improved the DSC to 0.81.

  5. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer.

    Directory of Open Access Journals (Sweden)

    Bart Rogiers

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type (SBT) classification of CPT data, however, have severe limitations, often restricting their application to a local scale. For the parameterization of regional groundwater flow or geotechnical models, and the delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency compared to more time-consuming manual approaches, and yields at least equally accurate results.
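
    A minimal sketch of model-based clustering for SBT classification, assuming scikit-learn: Gaussian mixtures of increasing order are fitted to CPT-derived features, and the Bayesian information criterion selects the model order. The two synthetic features merely stand in for real CPT parameters.

```python
# Model-based clustering of CPT features via Gaussian mixtures + BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic stand-ins for e.g. log cone resistance and log friction ratio
X = np.vstack([rng.normal([0, 0], 0.3, (200, 2)),    # e.g. clayey unit
               rng.normal([2, 1], 0.3, (200, 2)),    # e.g. sandy unit
               rng.normal([4, 0], 0.3, (200, 2))])   # e.g. gravelly unit

models = [GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(X) for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(X))    # BIC picks the model order
print(best.n_components)                      # -> 3 soil classes recovered
soil_class = best.predict(X)                  # per-measurement SBT labels
```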

  6. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term “classification” and the related concepts “concept/conceptualization,” “categorization,” “ordering,” “taxonomy” and “typology.” It further presents and discusses theories of classification including the influences of Aristotle … and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly…

  7. Intelligent Classification of Heartbeats for Automated Real-Time ECG Monitoring

    Science.gov (United States)

    Park, Juyoung

    2014-01-01

    Background: The automatic interpretation of electrocardiography (ECG) data can provide continuous analysis of heart activity, allowing the effective use of wireless devices such as the Holter monitor. Materials and Methods: We propose an intelligent heartbeat monitoring system to detect the possibility of arrhythmia in real time. We detected heartbeats and extracted features such as the QRS complex and P wave from ECG signals using the Pan–Tompkins algorithm, and the heartbeats were then classified into 16 types using a decision tree. Results: We tested the sensitivity, specificity, and accuracy of our system against data from the MIT-BIH Arrhythmia Database. Our system achieved an average accuracy of 97% in heartbeat detection and an average heartbeat classification accuracy of above 96%, which is comparable with the best competing schemes. Conclusions: This work provides a guide to the systematic design of an intelligent classification system for decision support in Holter ECG monitoring. PMID:25010717

  8. Automated and unbiased classification of chemical profiles from fungi using high performance liquid chromatography

    DEFF Research Database (Denmark)

    Hansen, Michael Edberg; Andersen, Birgitte; Smedsgaard, Jørn

    2005-01-01

    In this paper we present a method for unbiased/unsupervised classification and identification of closely related fungi, using chemical analysis of secondary metabolite profiles created by HPLC with UV diode array detection. For two chromatographic data matrices a vector of locally aligned full … Penicillium species. Then the algorithm was validated on fungal isolates belonging to the genus Alternaria. The results showed that the species may be segregated into taxa in full accordance with published taxonomy.

  9. Automated Classification of Selected Data Elements from Free-text Diagnostic Reports for Clinical Research.

    Science.gov (United States)

    Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra

    2016-08-05

    In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether the manual documentation process can be made more efficient by using methods of natural language processing for multiclass classification of free-text diagnostic reports, in order to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, with manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnosis paragraph was extracted from the clinical reports of one third of the patients, randomly selected from the multiple myeloma research database of Heidelberg University Hospital (737 patients in total). An EDC system was set up, and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist, and an annotated text corpus was created. A framework was constructed consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data, and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total, 15 different pipelines were examined and assessed by ten-fold cross-validation, repeated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing, the approximate randomization test was used. The created annotated corpus consists of 737 different diagnosis paragraphs with a

  10. Automated Region of Interest Retrieval of Metallographic Images for Quality Classification in Industry

    Directory of Open Access Journals (Sweden)

    Petr Kotas

    2012-01-01

    The aim of this research is the development and testing of new methods to classify the quality of metallographic samples of steels with high added value (for example, grade X70 according to API). In this paper, we address the development of methods to classify the quality of slab sample images, with the main emphasis on the quality of the image center, known as the segregation area. To this end, we introduce an alternative method for automated retrieval of regions of interest. In the first step, the metallographic image is segmented using both a spectral method and thresholding. Then, the extracted macrostructure of the metallographic image is automatically analyzed by statistical methods. Finally, the automatically extracted regions of interest are compared with the results of human experts. Practical experience with the retrieval of non-homogeneous, noisy digital images in an industrial environment is discussed as well.

  11. Automated fault extraction and classification using 3-D seismic data for the Ekofisk field development

    Energy Technology Data Exchange (ETDEWEB)

    Signer, C.; Nickel, M.; Randen, T.; Saeter, T.; Soenneland, H.H.

    1998-12-31

    Mapping of fractures is important for the prediction of fluid flow in many reservoir types. The fluid flow depends mainly on the efficiency of the reservoir seals. Improved spatial mapping of the open and closed fracture systems will allow a better prediction of the fluid flow pattern. The primary objective of this paper is to present fracture characterization at the reservoir scale combined with seismic facies mapping. The complexity of the giant Ekofisk field on the Norwegian continental shelf provides an ideal framework for testing the validity and applicability of an automated seismic fault and fracture detection and mapping tool. The mapping of the faults can be based on seismic attribute grids, which means that attribute responses related to faults are extracted along key horizons interpreted in the reservoir interval. 3 refs., 3 figs.

  12. A controlled trial of automated classification of negation from clinical notes

    Directory of Open Access Journals (Sweden)

    Carruth William

    2005-05-01

    Background: Identification of negation in electronic health records is essential if we are to understand the computable meaning of the records. Our objective is to compare the accuracy of an automated mechanism for assignment of negation to clinical concepts within a compositional expression with human-assigned negation, and to perform a failure analysis to identify the causes of poorly identified negation (i.e. missed conceptual representation, inaccurate conceptual representation, missed negation, inaccurate identification of negation). Methods: 41 clinical documents (medical evaluations; sometimes outside of Mayo these are referred to as history and physical examinations) were parsed using the Mayo Vocabulary Server Parsing Engine. SNOMED-CT™ was used to provide concept coverage for the clinical concepts in the record. These records resulted in identification of concepts and textual clues to negation. The records were reviewed by an independent medical terminologist, and the results were tallied in a spreadsheet. Where questions on the review arose, Internal Medicine faculty were employed to make a final determination. Results: SNOMED-CT was used to provide concept coverage of the 14,792 concepts in 41 health records from Johns Hopkins University. Of these, 1,823 concepts were identified as negative by human review. The sensitivity (recall) of the assignment of negation was 97.2% (p …). Conclusion: Automated assignment of negation to concepts identified in health records based on review of the text is feasible and practical. Lexical assignment of negation is a good test of true negativity, as judged by the high sensitivity, specificity and positive likelihood ratio of the test. SNOMED-CT had overall coverage of 88.7% of the concepts being negated.
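
    A toy NegEx-style sketch of the general task, not the Mayo parsing engine itself: concepts falling within a short window after a negation cue, inside the same sentence, are marked negated. The cue list and window size are illustrative.

```python
# Window-based negation assignment sketch (NegEx-like, heavily simplified).
import re

NEG_CUES = ["no", "denies", "without", "negative for"]
WINDOW = 5  # tokens after a cue considered within negation scope (assumed)

def negated_concepts(text, concepts):
    in_scope = set()
    # scope never crosses a sentence boundary
    for sentence in re.split(r"[.;]", text.lower()):
        tokens = re.findall(r"[a-z']+", sentence)
        for i in range(len(tokens)):
            if tokens[i] in NEG_CUES or " ".join(tokens[i:i + 2]) in NEG_CUES:
                in_scope.update(tokens[i + 1:i + 1 + WINDOW])
    return {c: (c.lower() in in_scope) for c in concepts}

note = "Patient denies chest pain. Cough present, no fever or chills."
print(negated_concepts(note, ["pain", "cough", "fever", "chills"]))
# {'pain': True, 'cough': False, 'fever': True, 'chills': True}
```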

  13. Automated tissue classification of intracardiac optical coherence tomography images (Conference Presentation)

    Science.gov (United States)

    Gan, Yu; Tsay, David; Amir, Syed B.; Marboe, Charles C.; Hendon, Christine P.

    2016-03-01

    Remodeling of the myocardium is associated with increased risk of arrhythmia and heart failure. Our objective is to automatically identify regions of fibrotic myocardium, dense collagen, and adipose tissue, which can serve as a way to guide radiofrequency ablation therapy or endomyocardial biopsies. Using computer vision and machine learning, we present an automated algorithm to classify tissue compositions from cardiac optical coherence tomography (OCT) images. Three-dimensional OCT volumes were obtained from 15 human hearts ex vivo within 48 hours of donor death (source: NDRI). We first segmented B-scans using a graph-searching method, estimating the boundary of each region by minimizing a cost function that consisted of intensity, gradient, and contour smoothness terms. Then, features including texture measures, optical properties, and statistics of higher moments were extracted. We trained a statistical model, a relevance vector machine, with the abovementioned features to classify tissue compositions. To validate our method, we applied our algorithm to 77 volumes. The datasets for validation were manually segmented and classified by two investigators who were blind to our algorithm's results and identified the tissues based on trichrome histology and pathology. The difference between automated and manual segmentation was 51.78 ± 50.96 μm. Experiments showed that the attenuation coefficients of dense collagen were significantly different from those of other tissue types (P < 0.05, ANOVA). Importantly, myocardial fibrosis tissues differed from normal myocardium in entropy and kurtosis. The tissue types were classified with an accuracy of 84%. The results show good agreement with histology.

  14. A detailed comparison of analysis processes for MCC-IMS data in disease classification-Automated methods can replace manual peak annotations.

    Directory of Open Access Journals (Sweden)

    Salome Horsch

    Disease classification from molecular measurements typically requires an analysis pipeline from raw noisy measurements to final classification results. Multi-capillary column-ion mobility spectrometry (MCC-IMS) is a promising technology for the detection of volatile organic compounds in the air of exhaled breath. From raw measurements, the peak regions representing the compounds have to be identified, quantified, and clustered across different experiments. Currently, several steps of this analysis process require manual intervention of human experts. Our goal is to identify a fully automatic pipeline that yields competitive disease classification results compared to an established but subjective and tedious semi-manual process. We combine a large number of modern methods for peak detection, peak clustering, and multivariate classification into analysis pipelines for raw MCC-IMS data. We evaluate all combinations on three different real datasets in an unbiased cross-validation setting. We determine which specific algorithmic combinations lead to high AUC values in disease classifications across the different medical application scenarios. The best fully automated analysis process achieves even better classification results than the established manual process. The best algorithms for the three analysis steps are (i) SGLTR (Savitzky-Golay Laplace-operator filter thresholding regions) and LM (Local Maxima) for automated peak identification, (ii) EM clustering (Expectation Maximization) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) for the clustering step, and (iii) RF (Random Forest) for multivariate classification. Thus, automated methods can replace the manual steps in the analysis process to enable an unbiased high-throughput use of the technology.
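
    A compact sketch of the automated tail of the selected pipeline, assuming scikit-learn: peak positions detected across measurements are clustered with DBSCAN into consensus peaks, each measurement becomes a vector of peak intensities, and a random forest performs the classification. Peak detection itself (SGLTR/LM) is stubbed out with synthetic peak lists, and the labels are placeholders.

```python
# Peak clustering + random-forest classification sketch for MCC-IMS-style data.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_meas = 40
# peaks: (measurement_id, retention_time, drift_time, intensity), synthetic
centers = np.array([[1.0, 0.50], [2.0, 0.60], [3.0, 0.55]])
rows = [(m, *c + rng.normal(0, 0.01, 2), rng.uniform(0.5, 1.0))
        for m in range(n_meas) for c in centers]
peaks = np.array(rows)

# cluster peak coordinates across all measurements into consensus peaks
cluster_id = DBSCAN(eps=0.05, min_samples=5).fit_predict(peaks[:, 1:3])
n_clusters = cluster_id.max() + 1

# build the measurement x consensus-peak intensity matrix
X = np.zeros((n_meas, n_clusters))
for (m, _, _, inten), c in zip(peaks, cluster_id):
    if c >= 0:                        # ignore DBSCAN noise points (label -1)
        X[int(m), c] = max(X[int(m), c], inten)

y = rng.integers(0, 2, n_meas)        # placeholder case/control labels
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```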

  15. Towards automated human gait disease classification using phase space representation of intrinsic mode functions

    Science.gov (United States)

    Pratiher, Sawon; Patra, Sayantani; Pratiher, Souvik

    2017-06-01

    A novel analytical methodology for segregating healthy and neurological-disorder gait patterns is proposed, employing a set of oscillating components called intrinsic mode functions (IMFs). These IMFs are generated by empirical mode decomposition of the gait time series, and the Hilbert-transformed analytic signal representation forms the complex-plane trace of the elliptically shaped analytic IMFs. The area and the relative change in centroid position of the polygon formed by the convex hull of these analytic IMFs are taken as the discriminative features. A classification accuracy of 79.31% with an ensemble-learning-based AdaBoost classifier validates the adequacy of the proposed methodology for a computer-aided diagnostic (CAD) system for gait pattern identification. The efficacy of several potential biomarkers, such as the bandwidths of the amplitude-modulation and frequency-modulation IMFs and their mean frequencies from the Fourier-Bessel expansion of each analytic IMF, is also discussed with respect to their potency for gait pattern identification and classification.
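
    The feature construction described above lends itself to a short sketch: the analytic signal of each IMF traces a curve in the complex plane, and the area and centroid of that trace's convex hull become features. Real EMD is replaced here by fixed synthetic components for brevity.

```python
# Convex-hull features of analytic IMF traces (EMD stubbed with synthetics).
import numpy as np
from scipy.signal import hilbert
from scipy.spatial import ConvexHull

t = np.linspace(0, 2, 1000)
# stand-ins for IMFs of a gait time series (real EMD would produce these)
imfs = [np.sin(2 * np.pi * 8 * t) * (1 + 0.3 * np.sin(2 * np.pi * t)),
        0.5 * np.sin(2 * np.pi * 2 * t)]

features = []
for imf in imfs:
    analytic = hilbert(imf)                       # complex-valued trace
    pts = np.column_stack([analytic.real, analytic.imag])
    hull = ConvexHull(pts)
    area = hull.volume                            # 2-D hull "volume" is area
    centroid = pts[hull.vertices].mean(axis=0)    # centroid of hull vertices
    features.extend([area, *centroid])
print(features)  # feeds a classifier such as AdaBoost downstream
```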

  16. Topological fingerprints for intermetallic compounds for the automated classification of atomistic simulation data

    International Nuclear Information System (INIS)

    Schablitzki, T; Rogal, J; Drautz, R

    2013-01-01

    We introduce a method to determine intermetallic crystal phases by creating topological fingerprints using coordination polyhedra. Many intermetallic crystal phases have complex structures that cannot be determined from the information of their nearest neighbour environment alone, but need information from a further reaching local environment. We obtain the coordination polyhedra of each atom in the structure and use this information in a topological fingerprint to determine the crystal phases in the structure as locally as possible. This allows us to analyse complex crystal phases like the topologically close-packed phases and multi-phase structures. With the information extracted from the coordination polyhedra and topological fingerprint, it is also possible to find and identify point and extended defects. Therefore, our method is able to track interface regions in multi-phase structures, and follow structural changes during phase transformations. (paper)

  17. Automated classification of maxillofacial cysts in cone beam CT images using contourlet transformation and Spherical Harmonics.

    Science.gov (United States)

    Abdolali, Fatemeh; Zoroofi, Reza Aghaeizadeh; Otake, Yoshito; Sato, Yoshinobu

    2017-02-01

    Accurate detection of maxillofacial cysts is an essential step for diagnosis, monitoring and planning of therapeutic intervention. Cysts can be of various sizes and shapes, and existing detection methods lead to poor results. Customizing automatic detection systems to gain sufficient accuracy in clinical practice is highly challenging. For this purpose, integrating engineering knowledge into efficient feature extraction is essential. This paper presents a novel framework for maxillofacial cyst detection. A hybrid methodology based on surface and texture information is introduced. The proposed approach consists of three main steps: first, each cystic lesion is segmented with high accuracy; then, in the second and third steps, feature extraction and classification are performed. Contourlet and SPHARM coefficients are utilized as texture and shape features, which are fed into the classifier. Two different classifiers are used in this study, i.e. support vector machine and sparse discriminant analysis. Generally, SPHARM coefficients are estimated by the iterative residual fitting (IRF) algorithm, which is based on a stepwise regression method. In order to improve the accuracy of the IRF estimation, a method based on extra orthogonalization is employed to reduce linear dependency. We utilized a ground-truth dataset consisting of cone beam CT images of 96 patients, belonging to three maxillofacial cyst categories: radicular cyst, dentigerous cyst and keratocystic odontogenic tumor. Using orthogonalized SPHARM, the residual sum of squares is decreased, which leads to a more accurate estimation. An analysis of the results based on statistical measures such as specificity, sensitivity, positive predictive value and negative predictive value is reported. A classification rate of 96.48% is achieved using sparse discriminant analysis and orthogonalized SPHARM features, an improvement of at least 8.94% over conventional features. This study

  18. Challenges in the automated classification of variable stars in large databases

    Directory of Open Access Journals (Sweden)

    Graham Matthew

    2017-01-01

    With ever-increasing numbers of astrophysical transient surveys, new facilities and archives of astronomical time series, time domain astronomy is emerging as a mainstream discipline. However, the sheer volume of data alone – hundreds of observations for hundreds of millions of sources – necessitates advanced statistical and machine learning methodologies for scientific discovery: characterization, categorization, and classification. Whilst these techniques are slowly entering the astronomer’s toolkit, their application to astronomical problems is not without its issues. In this paper, we will review some of the challenges posed by trying to identify variable stars in large data collections, including appropriate feature representations, dealing with uncertainties, establishing ground truths, and simple discrete classes.

  19. Automated fault detection and classification of etch systems using modular neural networks

    Science.gov (United States)

    Hong, Sang J.; May, Gary S.; Yamartino, John; Skumanich, Andrew

    2004-04-01

    Modular neural networks (MNNs) are investigated as a tool for modeling process behavior and for fault detection and classification (FDC) using tool data in plasma etching. Principal component analysis (PCA) is initially employed to reduce the dimensionality of the voluminous multivariate tool data and to establish relationships between the acquired data and the process state. MNNs are subsequently used to identify anomalous process behavior. A gradient-based fuzzy C-means clustering algorithm is implemented to enhance MNN performance. MNNs for eleven individual steps of etch runs are trained with data acquired from baseline, control (acceptable), and perturbed (unacceptable) runs, and then tested with data not used for training. In the fault identification phase, a 0% false alarm rate is achieved for the control runs.

  20. An automated satellite cloud classification scheme using self-organizing maps: Alternative ISCCP weather states

    Science.gov (United States)

    McDonald, Adrian J.; Cassano, John J.; Jolly, Ben; Parsons, Simon; Schuddeboom, Alex

    2016-11-01

    This study explores the application of the self-organizing map (SOM) methodology to cloud classification. In particular, the SOM is applied to the joint frequency distribution of the cloud top pressure and optical depth from the International Satellite Cloud Climatology Project (ISCCP) D1 data set. We demonstrate that this scheme produces clusters which have geographical and seasonal patterns similar to those produced in previous studies using the k-means clustering technique but potentially provides complementary information. For example, this study identifies a wider range of clusters representative of low cloud cover states with distinct geographic patterns. We also demonstrate that two rather similar clusters, which might be considered the same cloud regime in other classifications, are distinct based on the seasonal variation of their geographic distributions and their cloud radiative effect in the shortwave. Examination of the transitions between regimes at particular geographic positions between one day and the next also shows that the SOM produces an objective organization of the various cloud regimes that can aid in their interpretation. This is also supported by examination of the SOM's Sammon map and correlations between neighboring nodes' geographic distributions. Ancillary ERA-Interim reanalysis output also allows us to demonstrate that the clusters, identified based on the joint histograms, are related to an ordered continuum of vertical velocity profiles and two-dimensional vertical velocity versus lower tropospheric stability histograms which have a clear structure within the SOM. The different nodes can also be separated by their longwave and shortwave cloud radiative effect at the top of the atmosphere.

  1. Assessing Rotation-Invariant Feature Classification for Automated Wildebeest Population Counts.

    Directory of Open Access Journals (Sweden)

    Colin J Torney

    Accurate and on-demand animal population counts are the holy grail for wildlife conservation organizations throughout the world because they enable fast and responsive adaptive management policies. While the collection of image data from camera traps, satellites, and manned or unmanned aircraft has advanced significantly, the detection and identification of animals within images remains a major bottleneck since counting is primarily conducted by dedicated enumerators or citizen scientists. Recent developments in the field of computer vision suggest a potential resolution to this issue through the use of rotation-invariant object descriptors combined with machine learning algorithms. Here we implement an algorithm to detect and count wildebeest from aerial images collected in the Serengeti National Park in 2009 as part of the biennial wildebeest count. We find that the per-image error rates are greater than, but comparable to, two separate human counts. For the total count, the algorithm is more accurate than both manual counts, suggesting that human counters have a tendency to systematically over- or under-count images. While the accuracy of the algorithm is not yet at an acceptable level for fully automatic counts, our results show this method is a promising avenue for further research, and we highlight specific areas where future research should focus in order to develop fast and accurate enumeration of aerial count data. If combined with a bespoke image collection protocol, this approach may yield a fully automated wildebeest count in the near future.

  2. Automated Arabidopsis plant root cell segmentation based on SVM classification and region merging.

    Science.gov (United States)

    Marcuzzo, Monica; Quelhas, Pedro; Campilho, Ana; Mendonça, Ana Maria; Campilho, Aurélio

    2009-09-01

    To obtain developmental information on individual plant cells, it is necessary to perform in vivo imaging of the specimen under study through time-lapse confocal microscopy. Automation of the cell detection/marking process is important to provide research tools that ease the search for special events, such as cell division. In this paper we discuss an automatic, segmentation-based cell detection approach for Arabidopsis thaliana, which selects the best cell candidates from an initial watershed-based image segmentation and improves the result by merging adjacent regions. The selection of individual cells is obtained using a support vector machine (SVM) classifier, based on a cell descriptor constructed from the shape and edge strength of the cells' contours. In addition, we propose a novel cell-merging criterion based on edge strength along the line that connects adjacent cells' centroids, which is a valuable tool in the reduction of cell over-segmentation. The result is largely pruned of badly segmented and over-segmented cells, thus facilitating the study of cells. When comparing the results after merging with the basic watershed segmentation, we obtain 1.5% better coverage (increase in F-measure) and up to 27% better precision in correct cell segmentation.
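
    A condensed sketch of the detection stage under stated assumptions (scikit-image, SciPy): watershed over-segments the image, and adjacent regions separated by weak edges become merge candidates. The seeding strategy and merge threshold are invented for illustration; the SVM cell filter is assumed to be trained elsewhere on shape/edge-strength descriptors.

```python
# Watershed over-segmentation plus a weak-boundary merge test (sketch).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_cells(img):
    edges = sobel(img)                        # edge-strength map
    # seed the watershed from dark basins of the smoothed image (assumed rule)
    smooth = ndi.gaussian_filter(img, 2)
    seeds, _ = ndi.label(smooth < np.percentile(smooth, 20))
    labels = watershed(edges, markers=seeds)
    return labels, edges

def weak_boundary(labels, edges, a, b, thresh=0.05):
    # mean edge strength on pixels of region b that touch region a
    border = ndi.binary_dilation(labels == a) & (labels == b)
    return border.any() and edges[border].mean() < thresh  # merge candidate
```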

  3. An Automated Strategy for Unbiased Morphometric Analyses and Classifications of Growth Cones In Vitro.

    Directory of Open Access Journals (Sweden)

    Daryan Chitsaz

    During neural circuit development, attractive or repulsive guidance cue molecules direct growth cones (GCs) to their targets by eliciting cytoskeletal remodeling, which is reflected in their morphology. The experimental power of in vitro neuronal cultures to assay this process and its molecular mechanisms is well established; however, a method to rapidly find and quantify multiple morphological aspects of GCs is lacking. To this end, we have developed a free, easy-to-use, and fully automated Fiji macro, Conographer, which accurately identifies and measures many morphological parameters of GCs in 2D explant culture images. These measurements are then subjected to principal component analysis and k-means clustering to mathematically classify the GCs as "collapsed" or "extended". The morphological parameters measured for each GC are found to be significantly different between collapsed and extended GCs, and are sufficient to classify GCs as such with the same level of accuracy as human observers. Application of a known collapse-inducing ligand results in significant changes in all parameters and an increase in "collapsed" GCs as determined by k-means clustering, as expected. Our strategy provides a powerful tool for exploring the relationship between GC morphology and guidance cue signaling, which in particular will greatly facilitate high-throughput studies of the effects of drugs, gene silencing or overexpression, or any other experimental manipulation in the context of an in vitro axon guidance assay.
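
    A minimal sketch of the unsupervised labelling step, assuming scikit-learn: measured morphology parameters are standardized, reduced with PCA, split into two groups by k-means, and the groups are read as "collapsed" versus "extended". The morphological features and their values are synthetic.

```python
# PCA + k-means morphology classification sketch with synthetic features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# rows: growth cones; columns: e.g. area, perimeter, filopodia count, circularity
extended = rng.normal([120, 80, 12, 0.6], [15, 10, 3, 0.1], (50, 4))
collapsed = rng.normal([40, 30, 2, 0.9], [8, 6, 1, 0.05], (50, 4))
X = np.vstack([extended, collapsed])

scores = PCA(n_components=2).fit_transform((X - X.mean(0)) / X.std(0))
groups = KMeans(n_clusters=2, n_init=10).fit_predict(scores)
# the cluster with the larger mean area is read as "extended"
extended_id = int(X[groups == 1, 0].mean() > X[groups == 0, 0].mean())
labels = np.where(groups == extended_id, "extended", "collapsed")
```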

  4. Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles.

    Science.gov (United States)

    Barker, Jocelyn; Hoogi, Assaf; Depeursinge, Adrien; Rubin, Daniel L

    2016-05-01

    Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1% (p < 0.001). We also evaluated our method in the dataset provided for the 2014 MICCAI Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p < 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. Our approach may be useful to automatically
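
    A schematic sketch of the coarse-to-fine strategy, with feature extraction reduced to a stub: tiles are clustered into representative groups, one representative per group is scored with an elastic-net-regularised linear model, and a group-size-weighted vote yields the slide-level call. All names and parameters are illustrative, not the paper's implementation.

```python
# Coarse-to-fine tile clustering with weighted-vote slide classification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import SGDClassifier

def tile_features(tile):
    # stub: real features would cover shape, colour and texture statistics
    return np.array([tile.mean(), tile.std()])

def classify_slide(tiles, clf, n_groups=8):
    F = np.array([tile_features(t) for t in tiles])
    km = KMeans(n_clusters=n_groups, n_init=10).fit(F)
    votes = 0.0
    for g in range(n_groups):
        members = np.where(km.labels_ == g)[0]
        # representative = tile closest to the group centroid
        rep = members[np.argmin(
            np.linalg.norm(F[members] - km.cluster_centers_[g], axis=1))]
        decision = clf.decision_function(F[rep][None])[0]
        votes += decision * len(members)      # weight by group size
    return "GBM" if votes > 0 else "lower-grade glioma"

# clf would be trained beforehand on representative-tile features, e.g.:
# clf = SGDClassifier(loss="log_loss", penalty="elasticnet", l1_ratio=0.5)
```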

  5. SpineNet: Automated classification and evidence visualization in spinal MRIs.

    Science.gov (United States)

    Jamaludin, Amir; Kadir, Timor; Zisserman, Andrew

    2017-10-01

    The objective of this work is to automatically produce radiological gradings of spinal lumbar MRIs and also localize the predicted pathologies. We show that this can be achieved via a Convolutional Neural Network (CNN) framework that takes intervertebral disc volumes as inputs and is trained only on disc-specific class labels. Our contributions are: (i) a CNN architecture that predicts multiple gradings at once, and we propose variants of the architecture including using 3D convolutions; (ii) showing that this architecture can be trained using a multi-task loss function without requiring segmentation level annotation; and (iii) a localization method that clearly shows pathological regions in the disc volumes. We compare three visualization methods for the localization. The network is applied to a large corpus of T2-weighted sagittal spinal MRIs (using a standard clinical scan protocol) acquired from multiple machines, and is used to automatically compute disc and vertebra gradings for each MRI. These are: Pfirrmann grading, disc narrowing, upper/lower endplate defects, upper/lower marrow changes, spondylolisthesis, and central canal stenosis. We report near human performances across the eight gradings, and also visualize the evidence for these gradings localized on the original scans.
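    To illustrate the multi-task training idea in contribution (ii), here is a minimal PyTorch sketch of one shared trunk feeding several grading heads whose losses are summed; the layer sizes, input shapes, and number of grades per task are invented and much smaller than SpineNet's:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared trunk over (fake) disc volumes, plus one linear head per grading.
trunk = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 64), nn.ReLU())
heads = nn.ModuleDict({
    "pfirrmann": nn.Linear(64, 5),   # 5 Pfirrmann grades
    "narrowing": nn.Linear(64, 4),   # 4 disc-narrowing grades
})

x = torch.randn(8, 1, 16, 16)                       # batch of 8 fake inputs
targets = {"pfirrmann": torch.randint(0, 5, (8,)),
           "narrowing": torch.randint(0, 4, (8,))}

shared = trunk(x)
# Multi-task loss: sum of per-grading cross-entropies over shared features.
loss = sum(F.cross_entropy(head(shared), targets[name])
           for name, head in heads.items())
print(float(loss))
```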

  6. Automated Thermal Image Processing for Detection and Classification of Birds and Bats - FY2012 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Duberstein, Corey A.; Matzner, Shari; Cullinan, Valerie I.; Virden, Daniel J.; Myers, Joshua R.; Maxwell, Adam R.

    2012-09-01

    Surveying wildlife at risk from offshore wind energy development is difficult and expensive. Infrared video can be used to record birds and bats that pass through the camera view, but it is also time consuming and expensive to review video and determine what was recorded. We proposed to develop algorithms and software to identify and differentiate thermally detected targets of interest, allowing automated processing of thermal image data to enumerate birds, bats, and insects. During FY2012 we developed computer code within MATLAB to identify objects recorded in video and extract attribute information that describes the objects recorded. We tested the efficiency of track identification using observer-based counts of tracks within segments of sample video. We examined object attributes, modeled the effects of random variability on attributes, and produced data smoothing techniques to limit random variation within attribute data. We also began drafting and testing methodology to identify objects recorded on video. In addition, we recorded approximately 10 hours of infrared video of various marine birds, passerine birds, and bats near the Pacific Northwest National Laboratory (PNNL) Marine Sciences Laboratory (MSL) at Sequim, Washington. A total of 6 hours of bird video was captured overlooking Sequim Bay over a series of weeks. An additional 2 hours of video of birds was captured during two weeks overlooking Dungeness Bay within the Strait of Juan de Fuca. Bats and passerine birds (swallows) were also recorded at dusk on the MSL campus during nine evenings. An observer noted the identity of objects viewed through the camera concurrently with recording. These video files will provide the information necessary to produce and test software developed during FY2013. The annotation will also form the basis for creation of a method to reliably identify recorded objects.

  7. Automated Image Sampling and Classification Can Be Used to Explore Perceived Naturalness of Urban Spaces.

    Directory of Open Access Journals (Sweden)

    Roger Hyam

    Full Text Available The psychological restorative effects of exposure to nature are well established and extend to the mere viewing of images of nature. A previous study has shown that the Perceived Naturalness (PN) of images correlates with their restorative value. This study tests whether it is possible to detect the degree of PN of images using an image classifier. It takes images that have been scored by humans for PN (including a subset that have been assessed for restorative value) and passes them through the Google Vision API image classification service. The resulting labels are assigned to broad semantic classes to create a Calculated Semantic Naturalness (CSN) metric for each image. It was found that CSN correlates with PN. CSN was then calculated for a geospatial sampling of Google Street View images across the city of Edinburgh. CSN was found to correlate with PN in this sample also, indicating the technique may be useful in large-scale studies. Because CSN correlates with PN, which correlates with restorativeness, it is suggested that CSN or a similar measure may be useful in automatically detecting restorative images and locations. In an exploratory aside, CSN was not found to correlate with an indicator of socioeconomic deprivation.
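    A toy version of the CSN computation and its correlation with PN; the label-to-class mapping, the exact aggregation formula, and all scores below are invented for illustration:

```python
from collections import Counter
from scipy.stats import spearmanr

# Hypothetical mapping from classifier labels to broad semantic classes.
SEMANTIC = {"tree": "natural", "grass": "natural", "sky": "natural",
            "car": "manmade", "building": "manmade", "road": "manmade"}

def csn(labels):
    """One plausible CSN formulation: fraction of mapped labels that
    fall into the 'natural' class."""
    counts = Counter(SEMANTIC[l] for l in labels if l in SEMANTIC)
    total = sum(counts.values())
    return counts["natural"] / total if total else 0.0

images = [["tree", "grass", "car"], ["building", "road"], ["sky", "tree"]]
csn_scores = [csn(lbls) for lbls in images]
pn_scores = [0.8, 0.1, 0.9]  # fake human Perceived Naturalness ratings
print(spearmanr(csn_scores, pn_scores))
```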

  8. ViCTree: An automated framework for taxonomic classification from protein sequences.

    Science.gov (United States)

    Modha, Sejal; Thanki, Anil; Cotmore, Susan F; Davison, Andrew J; Hughes, Joseph

    2018-02-20

    The increasing rate of submission of genetic sequences into public databases is providing a growing resource for classifying the organisms that these sequences represent. To aid viral classification, we have developed ViCTree, which automatically integrates the relevant sets of sequences in NCBI GenBank and transforms them into an interactive maximum likelihood phylogenetic tree that can be updated automatically. ViCTree incorporates ViCTreeView, which is a JavaScript-based visualisation tool that enables the tree to be explored interactively in the context of pairwise distance data. To demonstrate utility, ViCTree was applied to subfamily Densovirinae of family Parvoviridae. This led to the identification of six new species of insect virus. ViCTree is open-source and can be run on any Linux- or Unix-based computer or cluster. A tutorial, the documentation and the source code are available under a GPL3 license, and can be accessed at http://bioinformatics.cvr.ac.uk/victree_web/. sejal.modha@glasgow.ac.uk.

  9. VizieR Online Data Catalog: GALAH semi-automated classification scheme (Traven+, 2017)

    Science.gov (United States)

    Traven, G.; Matijevic, G.; Zwitter, T.; Zerjal, M.; Kos, J.; Asplund, M.; Bland-Hawthorn, J.; Casey, A. R.; de Silva, G.; Freeman, K.; Lin, J.; Martell, S. L.; Schlesinger, K. J.; Sharma, S.; Simpson, J. D.; Zucker, D. B.; Anguiano, B.; da Costa, G.; Duong, L.; Horner, J.; Hyde, E. A.; Kafle, P. R.; Munari, U.; Nataf, D.; Navin, C. A.; Reid, W.; Ting, Y.-S.

    2017-04-01

    The GALactic Archaeology with HERMES (GALAH) survey was the main driver for the construction of Hermes (High Efficiency and Resolution Multi-Element Spectrograph), a fiber-fed multi-object spectrograph on the 3.9m Anglo-Australian Telescope. Its spectral resolving power (R) is about 28000, and there is also an R=45000 mode using a slit mask. Hermes has four simultaneous non-contiguous spectral arms centered at 4800, 5761, 6610, and 7740Å, covering about 1000Å in total, including Hα and Hβ lines. About 300000 spectra have been taken to date, including various calibration exposures. However, we concentrate on ~210000 spectra recorded before 2016 January 30. We devise a custom classification procedure which is based on two independently developed methods, the novel dimensionality reduction technique t-SNE (t-distributed stochastic neighbor embedding; van der Maaten & Hinton 2008, Journal of Machine Learning Research 9, 2579) and the renowned clustering algorithm DBSCAN (Ester+ 1996, Proc. 2nd Int. Conf. on KDD, 226 ed. E. Simoudis, J. Han, and U. Fayyad). (4 data files).
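    The two-step procedure (dimensionality reduction with t-SNE, then density-based clustering with DBSCAN) can be sketched with scikit-learn; the random vectors below stand in for the GALAH spectra and every parameter value is illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
spectra = rng.normal(size=(300, 50))  # stand-in for normalized spectra

# Project the high-dimensional spectra to 2D, then let DBSCAN find dense
# clusters; points labeled -1 are outliers flagged for manual inspection.
embedding = TSNE(n_components=2, perplexity=30, random_state=2).fit_transform(spectra)
clusters = DBSCAN(eps=2.0, min_samples=10).fit_predict(embedding)
print("clusters found:", sorted(set(clusters)))
```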

  10. Automated web usage data mining and recommendation system using K-Nearest Neighbor (KNN classification method

    Directory of Open Access Journals (Sweden)

    D.A. Adeniyi

    2016-01-01

    Full Text Available The major problem of many on-line web sites is the presentation of many choices to the client at a time; this usually results in a strenuous and time-consuming task of finding the right product or information on the site. In this work, we present a study of automatic web usage data mining and a recommendation system based on the current user's behavior through his/her click-stream data on a newly developed Really Simple Syndication (RSS) reader website, in order to provide relevant information to the individual without explicitly asking for it. The K-Nearest-Neighbor (KNN) classification method has been trained for on-line, real-time use to identify clients/visitors from click-stream data, matching them to a particular user group and recommending a tailored browsing option that meets the needs of the specific user at a particular time. To achieve this, web users' RSS address files were extracted, cleansed, formatted and grouped into meaningful sessions, and a data mart was developed. Our results show that the K-Nearest-Neighbor classifier is transparent, consistent, straightforward, simple to understand, and easier to implement than most other machine learning techniques, specifically when there is little or no prior knowledge about the data distribution.
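    A minimal sketch of the KNN matching step; the session features, group names, and neighbour count are invented:

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy session features (e.g. click counts per feed category) with the
# user group each historical session belonged to.
sessions = [[5, 0, 1], [4, 1, 0], [0, 6, 2], [1, 5, 3], [0, 1, 7]]
groups = ["news", "news", "sports", "sports", "tech"]

knn = KNeighborsClassifier(n_neighbors=3).fit(sessions, groups)

# An incoming click stream is matched to its nearest user group in real
# time, and browsing options tailored to that group can then be served.
print(knn.predict([[4, 0, 2]]))  # -> ['news']
```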

  11. Automated age-related macular degeneration classification in OCT using unsupervised feature learning

    Science.gov (United States)

    Venhuizen, Freerk G.; van Ginneken, Bram; Bloemen, Bart; van Grinsven, Mark J. J. P.; Philipsen, Rick; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I.

    2015-03-01

    Age-related Macular Degeneration (AMD) is a common eye disorder with high prevalence in elderly people. The disease mainly affects the central part of the retina, and could ultimately lead to permanent vision loss. Optical Coherence Tomography (OCT) is becoming the standard imaging modality in diagnosis of AMD and the assessment of its progression. However, the evaluation of the obtained volumetric scan is time consuming, expensive and the signs of early AMD are easy to miss. In this paper we propose a classification method to automatically distinguish AMD patients from healthy subjects with high accuracy. The method is based on an unsupervised feature learning approach, and processes the complete image without the need for an accurate pre-segmentation of the retina. The method can be divided in two steps: an unsupervised clustering stage that extracts a set of small descriptive image patches from the training data, and a supervised training stage that uses these patches to create a patch occurrence histogram for every image on which a random forest classifier is trained. Experiments using 384 volume scans show that the proposed method is capable of identifying AMD patients with high accuracy, obtaining an area under the receiver operating characteristic curve of 0.984. Our method allows for a quick and reliable assessment of the presence of AMD pathology in OCT volume scans without the need for accurate layer segmentation algorithms.
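    The two stages can be sketched compactly: an unsupervised patch dictionary, then a random forest over per-scan patch-occurrence histograms. Random arrays stand in for OCT patches, and all sizes and labels are invented:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Stage 1 (unsupervised): cluster small patches sampled from the training
# scans into a dictionary of descriptive patch types.
train_patches = rng.normal(size=(5000, 49))  # flattened 7x7 patches
dictionary = KMeans(n_clusters=20, n_init=5, random_state=3).fit(train_patches)

def occurrence_histogram(patches):
    """Normalized histogram of dictionary-word occurrences for one scan."""
    words = dictionary.predict(patches)
    return np.bincount(words, minlength=20) / len(words)

# Stage 2 (supervised): a random forest trained on per-scan histograms.
X = np.array([occurrence_histogram(rng.normal(size=(300, 49)))
              for _ in range(40)])
y = rng.integers(0, 2, 40)  # fake labels: 0 = healthy, 1 = AMD
clf = RandomForestClassifier(random_state=3).fit(X, y)
print(clf.predict(X[:3]))
```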

  12. Automated time activity classification based on global positioning system (GPS) tracking data.

    Science.gov (United States)

    Wu, Jun; Jiang, Chengsheng; Houston, Douglas; Baker, Dean; Delfino, Ralph

    2011-11-14

    Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. The random forest model was fast and easy to execute, but was likely less robust
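    A toy version of the rule-based model; the inputs and thresholds below are invented stand-ins for the study's time/speed/location rules:

```python
def classify_fix(speed_kmh, on_road, satellite_snr):
    """Classify one GPS fix into a time-activity category using simple,
    hypothetical rules (weak signal -> indoors, fast on-road -> vehicle)."""
    if satellite_snr < 25:
        return "indoor"
    if speed_kmh > 10 and on_road:
        return "in-vehicle travel"
    if speed_kmh > 2:
        return "outdoor walking"
    return "outdoor static"

print(classify_fix(speed_kmh=45, on_road=True, satellite_snr=40))    # in-vehicle travel
print(classify_fix(speed_kmh=0.5, on_road=False, satellite_snr=35))  # outdoor static
```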

  13. Automated Multiclass Classification of Spontaneous EEG Activity in Alzheimer’s Disease and Mild Cognitive Impairment

    Directory of Open Access Journals (Sweden)

    Saúl J. Ruiz-Gómez

    2018-01-01

    Full Text Available The discrimination of early Alzheimer's disease (AD) and its prodromal form (i.e., mild cognitive impairment, MCI) from cognitively healthy control (HC) subjects is crucial, since treatment is more effective in the first stages of the dementia. The aim of our study is to evaluate the usefulness of a methodology based on electroencephalography (EEG) to detect AD and MCI. EEG rhythms were recorded from 37 AD patients, 37 MCI subjects and 37 HC subjects. Artifact-free trials were analyzed by means of several spectral and nonlinear features: relative power in the conventional frequency bands, median frequency, individual alpha frequency, spectral entropy, Lempel–Ziv complexity, central tendency measure, sample entropy, fuzzy entropy, and auto-mutual information. Relevance and redundancy analyses were also conducted through the fast correlation-based filter (FCBF) to derive an optimal set of them. The selected features were used to train three different models aimed at classifying the trials: linear discriminant analysis (LDA), quadratic discriminant analysis (QDA) and multi-layer perceptron artificial neural network (MLP). Afterwards, each subject was automatically allocated in a particular group by applying a trial-based majority vote procedure. After feature extraction, the FCBF method selected the optimal set of features: individual alpha frequency, relative power at the delta frequency band, and sample entropy. Using the aforementioned set of features, MLP showed the highest diagnostic performance in determining whether a subject is not healthy (sensitivity of 82.35% and positive predictive value of 84.85% for the HC vs. all classification task) and whether a subject does not suffer from AD (specificity of 79.41% and negative predictive value of 84.38% for the AD vs. all comparison). Our findings suggest that our methodology can help physicians to discriminate AD, MCI and HC.
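    The final trial-based majority vote is simple to express; the per-trial labels below are invented:

```python
from collections import Counter

def subject_label(trial_predictions):
    """Assign a subject the class predicted for most of his/her trials."""
    return Counter(trial_predictions).most_common(1)[0][0]

# e.g. per-trial classifier outputs for one subject
print(subject_label(["AD", "MCI", "AD", "AD", "HC"]))  # -> 'AD'
```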

  14. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    Science.gov (United States)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
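    A weighted KNN over texture features can be sketched as follows; scikit-learn's distance weighting is one common scheme and may differ from the authors' exact weighting, and the data are random stand-ins for the 45 textural features:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
texture_features = rng.normal(size=(120, 45))  # 45 features per sky image
cloud_type = rng.choice(["CB", "TCU", "other"], size=120)  # fake ground truth

# 'distance' weighting makes nearer neighbours count more in the vote.
knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
knn.fit(texture_features, cloud_type)
print(knn.predict(texture_features[:2]))
```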

  15. Visual detection of defects in solder joints

    Science.gov (United States)

    Blaignan, V. B.; Bourbakis, Nikolaos G.; Moghaddamzadeh, Ali; Yfantis, Evangelos A.

    1995-03-01

    The automatic, real-time visual acquisition and inspection of VLSI boards requires the use of machine vision and artificial intelligence methodologies in a new 'frame' to achieve better results regarding efficiency, product quality and automated service. In this paper, the visual detection and classification of different types of defects on solder joints in PC boards is presented, combining several image processing methods such as smoothing, segmentation, edge detection, contour extraction and shape analysis. The results of this paper are based on simulated solder defects and a real one.

  16. Wavelet based automated postural event detection and activity classification with single IMU - Biomed 2013.

    Science.gov (United States)

    Lockhart, Thurmon E; Soangra, Rahul; Zhang, Jian; Wu, Xuefan

    2013-01-01

    and classification algorithm using denoised signals from a single wireless IMU placed at the sternum. The algorithm was further validated and verified with a motion capture system in a laboratory environment. Wavelet denoising highlighted postural events and transition durations that further provided clinical information on postural control and motor coordination. The presented method can be applied in real-life ambulatory monitoring approaches for assessing the condition of the elderly.
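    A generic wavelet-shrinkage step of the kind the abstract describes, sketched with the PyWavelets package; the wavelet, decomposition level, and universal threshold are common textbook defaults rather than the authors' documented choices:

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
signal = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.4 * rng.normal(size=512)

# Decompose, soft-threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec(signal, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate
thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")
print(denoised.shape)
```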

  17. Classification

    Science.gov (United States)

    Oza, Nikunj C.

    2011-01-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. In supervised learning, a set of training examples---examples with known output values---is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. This chapter discusses methods to perform machine learning, with examples involving astronomy.
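    The sunspot example maps directly onto the standard fit/predict pattern of supervised learning; all measurements and labels below are invented:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy candidate sunspots: each row is a set of measurements, each output
# a sunspot type label.
X = [[1.2, 30], [0.8, 25], [3.5, 70], [3.9, 65], [1.0, 28], [4.1, 72]]
y = ["simple", "simple", "complex", "complex", "simple", "complex"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=2, random_state=0)
model = DecisionTreeClassifier().fit(X_tr, y_tr)  # learn the input->class map
print(model.predict(X_te))                        # classify unseen candidates
```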

  18. Automatically high accurate and efficient photomask defects management solution for advanced lithography manufacture

    Science.gov (United States)

    Zhu, Jun; Chen, Lijun; Ma, Lantao; Li, Dejian; Jiang, Wei; Pan, Lihong; Shen, Huiting; Jia, Hongmin; Hsiang, Chingyun; Cheng, Guojie; Ling, Li; Chen, Shijie; Wang, Jun; Liao, Wenkui; Zhang, Gary

    2014-04-01

    Defect review is a time-consuming job, and human error makes results inconsistent. Defects located in don't-care areas, such as dark areas, do not hurt yield and need not be reviewed; however, defects in critical areas, such as clear areas, can impact yield dramatically and need closer attention during review. As integrated circuit dimensions decrease, thousands of mask defects, or even more, are detected during each inspection. Traditional manual or simple classification approaches are unable to meet efficiency and accuracy requirements. This paper focuses on an automated defect management and classification solution that uses the image output of Lasertec inspection equipment and anchor-pattern-centric image processing technology. The system can handle large numbers of defects with quick and accurate classification results. Our experiments include Die-to-Die and Single-Die modes, in which the classification accuracy reaches 87.4% and 93.3%, respectively. No critical or printable defects were missed in our test cases; the rates of missed classifications were 0.25% in Die-to-Die mode and 0.24% in Single-Die mode, which is encouraging and acceptable for application on a production line. The results can be exported and reloaded into the inspection machine for further review. This step helps users validate uncertain defects with clear, magnified images when the captured images do not provide enough information for a judgment. The system effectively reduces expensive inline defect review time. As a fully automated inline defect management solution, it is compatible with current inspection approaches and can be integrated with optical simulation, and even scoring functions, to guide wafer-level defect inspection.

  19. Population-based evaluation of a suggested anatomic and clinical classification of congenital heart defects based on the International Paediatric and Congenital Cardiac Code

    Directory of Open Access Journals (Sweden)

    Goffinet François

    2011-10-01

    Full Text Available Abstract Background Classification of the overall spectrum of congenital heart defects (CHD) has always been challenging, in part because of the diversity of the cardiac phenotypes, but also because of the oft-complex associations. The purpose of our study was to establish a comprehensive and easy-to-use classification of CHD for clinical and epidemiological studies based on the long list of the International Paediatric and Congenital Cardiac Code (IPCCC). Methods We coded each individual malformation using six-digit codes from the long list of IPCCC. We then regrouped all lesions into 10 categories and 23 subcategories according to a multi-dimensional approach encompassing anatomic, diagnostic and therapeutic criteria. This anatomic and clinical classification of congenital heart disease (ACC-CHD) was then applied to data acquired from a population-based cohort of patients with CHD in France, made up of 2867 cases (82% live births, 1.8% stillbirths and 16.2% pregnancy terminations). Results The majority of cases (79.5%) could be identified with a single IPCCC code. The category "Heterotaxy, including isomerism and mirror-imagery" was the only one that typically required more than one code for identification of cases. The two largest categories were "ventricular septal defects" (52%) and "anomalies of the outflow tracts and arterial valves" (20% of cases). Conclusion Our proposed classification is not new, but rather a regrouping of the known spectrum of CHD into a manageable number of categories based on anatomic and clinical criteria. The classification is designed to use the code numbers of the long list of IPCCC but can accommodate ICD-10 codes. Its exhaustiveness, simplicity, and anatomic basis make it useful for clinical and epidemiologic studies, including those aimed at assessment of risk factors and outcomes.

  20. A methodology for the automated creation of fuzzy expert systems for ischaemic and arrhythmic beat classification based on a set of rules obtained by a decision tree.

    Science.gov (United States)

    Exarchos, Themis P; Tsipouras, Markos G; Exarchos, Costas P; Papaloukas, Costas; Fotiadis, Dimitrios I; Michalis, Lampros K

    2007-07-01

    In the current work we propose a methodology for the automated creation of fuzzy expert systems, applied in ischaemic and arrhythmic beat classification. The proposed methodology automatically creates a fuzzy expert system from an initial training dataset. The approach consists of three stages: (a) extraction of a crisp set of rules from a decision tree induced from the training dataset, (b) transformation of the crisp set of rules into a fuzzy model and (c) optimization of the fuzzy model's parameters using global optimization. The above methodology is employed in order to create fuzzy expert systems for ischaemic and arrhythmic beat classification in ECG recordings. The fuzzy expert system for ischaemic beat detection is evaluated in a cardiac beat dataset that was constructed using recordings from the European Society of Cardiology ST-T database. The arrhythmic beat classification fuzzy expert system is evaluated using the MIT-BIH arrhythmia database. The fuzzy expert system for ischaemic beat classification reported 91% sensitivity and 92% specificity. The arrhythmic beat classification fuzzy expert system reported 96% average sensitivity and 99% average specificity for all categories. The proposed methodology provides high accuracy and the ability to interpret the decisions made. The fuzzy expert systems for ischaemic and arrhythmic beat classification compare well with previously reported results, indicating that they could be part of an overall clinical system for ECG analysis and diagnosis.
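    Stage (a), inducing a decision tree and reading off a crisp rule set, can be sketched with scikit-learn; the Iris data stand in for the ECG beat features, and the fuzzification and global-optimization stages (b) and (c) are omitted:

```python
from sklearn.datasets import load_iris  # stand-in for a cardiac beat dataset
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Each root-to-leaf path in the printout is one crisp if-then rule that
# would subsequently be fuzzified and optimized.
print(export_text(tree))
```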

  1. Web-Enabled Distributed Health-Care Framework for Automated Malaria Parasite Classification: an E-Health Approach.

    Science.gov (United States)

    Maity, Maitreya; Dhane, Dhiraj; Mungle, Tushar; Maiti, A K; Chakraborty, Chandan

    2017-10-26

    Web-enabled e-healthcare systems, or computer-assisted disease diagnosis, have the potential to improve the quality and service of the conventional healthcare delivery approach. This article describes the design and development of a web-based distributed healthcare management system for medical information and the quantitative evaluation of microscopic images using a machine learning approach for malaria. In the proposed study, all the health-care centres are connected in a distributed computer network. Each peripheral centre manages its own health-care service independently and communicates with the central server for remote assistance. The proposed methodology for automated evaluation of parasites includes pre-processing of blood smear microscopic images followed by erythrocyte segmentation. To differentiate between different parasites, a total of 138 quantitative features characterising colour, morphology, and texture are extracted from segmented erythrocytes. An integrated pattern classification framework is designed in which four feature selection methods, viz. Correlation-based Feature Selection (CFS), Chi-square, Information Gain, and RELIEF, are employed individually with three different classifiers, i.e. Naive Bayes, C4.5, and Instance-Based Learning (IB1). The optimal feature subset with the best classifier is selected to achieve maximum diagnostic precision. The proposed method achieved 99.2% sensitivity and 99.6% specificity by combining CFS and C4.5, in comparison with other methods. Moreover, the web-based tool is entirely designed using open standards such as Java for the web application, ImageJ for image processing, and WEKA for data mining, considering its feasibility in rural places with minimal health-care facilities.

  2. Automated morphological analysis of bone marrow cells in microscopic images for diagnosis of leukemia: nucleus-plasma separation and cell classification using a hierarchical tree model of hematopoiesis

    Science.gov (United States)

    Krappe, Sebastian; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian

    2016-03-01

    The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually under the use of bright field microscopy. This is a time-consuming, subjective, tedious and error-prone process. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason a computer assisted diagnosis system for bone marrow differentiation is pursued. In this work we focus (a) on a new method for the separation of nucleus and plasma parts and (b) on a knowledge-based hierarchical tree classifier for the differentiation of bone marrow cells in 16 different classes. Classification trees are easily interpretable and understandable and provide a classification together with an explanation. Using classification trees, expert knowledge (i.e. knowledge about similar classes and cell lines in the tree model of hematopoiesis) is integrated in the structure of the tree. The proposed segmentation method is evaluated with more than 10,000 manually segmented cells. For the evaluation of the proposed hierarchical classifier more than 140,000 automatically segmented bone marrow cells are used. Future automated solutions for the morphological analysis of bone marrow smears could potentially apply such an approach for the pre-classification of bone marrow cells and thereby shortening the examination time.

  3. Toward automated classification of acetabular shape in ultrasound for diagnosis of DDH: Contour alpha angle and the rounding index.

    Science.gov (United States)

    Hareendranathan, Abhilash Rakkunedeth; Mabee, Myles; Punithakumar, Kumaradevan; Noga, Michelle; Jaremko, Jacob L

    2016-06-01

    The diagnosis of Developmental Dysplasia of the Hip (DDH) in infants is currently made primarily by ultrasound. However, two-dimensional ultrasound (2DUS) images capture only an incomplete portion of the acetabular shape, and the alpha and beta angles measured on 2DUS for the Graf classification technique show high inter-scan and inter-observer variability. This variability relates partly to the manual determination of the apex point separating the acetabular roof from the ilium during index measurement. This study proposes a new 2DUS image processing technique for semi-automated tracing of the bony surface followed by automatic calculation of two indices: a contour-based alpha angle (αA), and a new modality-independent quantitative rounding index (M). The new index M is independent of the apex point, and can be directly extended to 3D surface models. We tested the proposed indices on a dataset of 114 2DUS scans of infant hips aged between 4 and 183 days scanned using a 12 MHz linear transducer. We calculated the manual alpha angle (αM), coverage, contour-based alpha angle and rounding index for each of the recordings and statistically evaluated these indices based on regression analysis, area under the receiver operating characteristic curve (AUC) and analysis of variance (ANOVA). Processing time for calculating αA and M was similar to manual alpha angle measurement, ∼30 s per image. Reliability of the new indices was high, with inter-observer intraclass correlation coefficients (ICC) of 0.90 for αA and 0.89 for M. For a diagnostic test classifying hips as normal or dysplastic, the AUC was 93.0% for αA vs. 92.7% for αM, 91.6% for M alone, and up to 95.7% for a combination of M with αM, αA or coverage. The rounding index provides complementary information to conventional indices such as the alpha angle and coverage. Calculation of the contour-based alpha angle and rounding index is rapid, and shows potential to improve the reliability and accuracy of DDH diagnosis from 2DUS

  4. Classification of the ground states and topological defects in a rotating two-component Bose-Einstein condensate

    Energy Technology Data Exchange (ETDEWEB)

    Mason, Peter [Laboratoire de Physique Statistique, École Normale Supérieure, UPMC Paris 06, Université Paris Diderot, CNRS, 24 rue Lhomond, F-75005 Paris (France); Institut Jean Le Rond d'Alembert, UMR 7190 CNRS-UPMC, 4 place Jussieu, F-75005 Paris (France); Aftalion, Amandine [CNRS and Université Versailles-Saint-Quentin-en-Yvelines, Laboratoire de Mathématiques de Versailles, CNRS UMR 8100, 45 avenue des États-Unis, F-78035 Versailles Cedex (France)

    2011-09-15

    We classify the ground states and topological defects of a rotating two-component condensate when varying several parameters: the intracomponent coupling strengths, the intercomponent coupling strength, and the particle numbers. No restriction is placed on the masses or trapping frequencies of the individual components. We present numerical phase diagrams which show the boundaries between the regions of coexistence, spatial separation, and symmetry breaking. Defects such as triangular coreless vortex lattices, square coreless vortex lattices, and giant skyrmions are classified. Various aspects of the phase diagrams are analytically justified thanks to a nonlinear σ model that describes the condensate in terms of the total density and a pseudo-spin representation.

  5. AUTOMATED CLASSIFICATION OF LAND COVER USING LANDSAT 8 OLI SURFACE REFLECTANCE PRODUCT AND SPECTRAL PATTERN ANALYSIS CONCEPT - CASE STUDY IN HANOI, VIETNAM

    Directory of Open Access Journals (Sweden)

    D. Nguyen Dinh

    2016-06-01

    Full Text Available Recently, the USGS released the provisional Landsat 8 Surface Reflectance product, which allows land cover mapping over large areas composed of a number of image scenes without the necessity of atmospheric correction. In this study, the authors present a new concept for automated classification of land cover. This concept is based on spectral pattern analysis of the reflected bands and can be automated using a predefined classification rule set constituted of the spectral pattern shape, the total reflected radiance index (TRRI) and ratios of spectral bands. Consider a pixel vector B6 = {b1, b2, b3, b4, b5, b6}, where b1, b2, ..., b6 denote bands 2, 3, ..., 7 of the OLI sensor, respectively. Using the pixel vector B6 we can construct a spectral reflectance curve. Each spectral curve is characterized by a shape, which can be described in simplified form as an analogue pattern consisting of 15 digits of 0, 1 and 2 showing the mutual relative position of the spectral vertices. The value of the comparison between bands i and j is 2 if bj > bi, 1 if bj = bi, and 0 if bj < bi. The simplified spectral pattern (SSP) is defined by the 15 digits m1,2 m1,3 m1,4 m1,5 m1,6 m2,3 m2,4 m2,5 m2,6 m3,4 m3,5 m3,6 m4,5 m4,6 m5,6, where mi,j is the result of the comparison of reflectance between bi and bj and takes the values 0, 1 and 2. After construction of the SSP for each pixel in the input image, the original image is decomposed into component images, which contain pixels with the same SSP. The decomposition can be written analytically as A = Σ_{k=1}^{n} Ck, where A stands for the original image with 6 spectral bands, n is the number of component images decomposed from A, and Ck is a component image. For this study, we use the Landsat 8 OLI reflectance images LC81270452013352LGN00 and LC81270452015182LGN00. For the decomposition, we use only the six reflective bands. Each land cover class is defined by an SSP code and threshold values for TRRI and band ratios. Automated classification of land cover was realized with 8 classes: forest, shrub, grass, water, wetland
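    The 15-digit SSP construction translates directly into code. A sketch (the reflectance values are invented):

```python
from itertools import combinations

def ssp(b6):
    """15-digit simplified spectral pattern for one pixel vector of the
    six reflective OLI bands: digit m(i,j) is 2 if bj > bi, 1 if bj == bi,
    and 0 if bj < bi, taken over all band pairs i < j."""
    return "".join(
        "2" if b6[j] > b6[i] else "1" if b6[j] == b6[i] else "0"
        for i, j in combinations(range(6), 2)  # (1,2), (1,3), ..., (5,6)
    )

print(ssp([0.05, 0.08, 0.07, 0.30, 0.25, 0.15]))
```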

  6. MSCT follow-up in malignant lymphoma: comparison of manual linear measurements with semi-automated lymph node analysis for therapy response classification.

    Science.gov (United States)

    Weßling, J; Puesken, M; Koch, R; Kohlhase, N; Persigehl, T; Mesters, R; Heindel, W; Buerke, B

    2012-09-01

    To assess the value of semi-automated lymph node analysis compared to manual measurements for therapy response classification of malignant lymphoma in MSCT. MSCT scans of 63 malignant lymphoma patients before and after 2 cycles of chemotherapy (307 target lymph nodes) were evaluated. The long axis diameter (LAD), short axis diameter (SAD) and bi-dimensional WHO were determined manually and semi-automatically. The time for manual and semi-automatic segmentation was evaluated. The reference standard response was defined as the mean relative change across all manual and semi-automatic measurements (mean manual/semi-automatic LAD, SAD, semi-automatic volume). Statistical analysis encompassed the t-test and McNemar's test for clustered data. Response classification per lymph node revealed semi-automated volumetry and bi-dimensional WHO to be significantly more accurate than manual linear metric measurements. Response classification per patient based on RECIST revealed more patients to be correctly classified by semi-automatic measurements, e.g. 96.0%/92.9% (WHO bi-dimensional/volume) compared to 85.7%/84.1% for manual LAD and SAD, respectively (mean reduction in misclassified patients of 9.95%). Considering the use of correction tools, the time expenditure for lymph node segmentation (29.7 ± 17.4 sec) was the same as with the manual approach (29.1 ± 14.5 sec). Semi-automatically derived "lymph node volume" and "bi-dimensional WHO" significantly reduce the number of misclassified patients in the CT follow-up of malignant lymphoma by at least 10%. However, lymph node volumetry does not outperform bi-dimensional WHO. © Georg Thieme Verlag KG Stuttgart · New York.

  7. An image analysis pipeline for automated classification of imaging light conditions and for quantification of wheat canopy cover time series in field phenotyping.

    Science.gov (United States)

    Yu, Kang; Kirchgessner, Norbert; Grieder, Christoph; Walter, Achim; Hund, Andreas

    2017-01-01

    Robust segmentation of canopy cover (CC) from large amounts of images taken under different illumination/light conditions in the field is essential for high throughput field phenotyping (HTFP). We attempted to address this challenge by evaluating different vegetation indices and segmentation methods for analyzing images taken at varying illuminations throughout the early growth phase of wheat in the field. 40,000 images taken on 350 wheat genotypes in two consecutive years were assessed for this purpose. We proposed an image analysis pipeline that allowed for image segmentation using automated thresholding and machine learning based classification methods and for global quality control of the resulting CC time series. This pipeline enabled accurate classification of imaging light conditions into two illumination scenarios, i.e. high light-contrast (HLC) and low light-contrast (LLC), in a series of continuously collected images by employing a support vector machine (SVM) model. Accordingly, the scenario-specific pixel-based classification models employing decision tree and SVM algorithms were able to outperform the automated thresholding methods, as well as improved the segmentation accuracy compared to general models that did not discriminate illumination differences. The three-band vegetation difference index (NDI3) was enhanced for segmentation by incorporating the HSV-V and the CIE Lab-a color components, i.e. the product images NDI3*V and NDI3*a. Field illumination scenarios can be successfully identified by the proposed image analysis pipeline, and the illumination-specific image segmentation can improve the quantification of CC development. The integrated image analysis pipeline proposed in this study provides great potential for automatically delivering robust data in HTFP.

  8. MSCT follow-up in malignant lymphoma. Comparison of manual linear measurements with semi-automated lymph node analysis for therapy response classification

    International Nuclear Information System (INIS)

    Wessling, J.; Puesken, M.; Kohlhase, N.; Persigehl, T.; Mesters, R.; Heindel, W.; Buerke, B.; Koch, R.

    2012-01-01

    Purpose: To assess the value of semi-automated lymph node analysis compared to manual measurements for therapy response classification of malignant lymphoma in MSCT. Materials and Methods: MSCT scans of 63 malignant lymphoma patients before and after 2 cycles of chemotherapy (307 target lymph nodes) were evaluated. The long axis diameter (LAD), short axis diameter (SAD) and bi-dimensional WHO were determined manually and semi-automatically. The time for manual and semi-automatic segmentation was evaluated. The reference standard response was defined as the mean relative change across all manual and semi-automatic measurements (mean manual/semi-automatic LAD, SAD, semi-automatic volume). Statistical analysis encompassed the t-test and McNemar's test for clustered data. Results: Response classification per lymph node revealed semi-automated volumetry and bi-dimensional WHO to be significantly more accurate than manual linear metric measurements. Response classification per patient based on RECIST revealed more patients to be correctly classified by semi-automatic measurements, e.g. 96.0%/92.9% (WHO bi-dimensional/volume) compared to 85.7%/84.1% for manual LAD and SAD, respectively (mean reduction in misclassified patients of 9.95%). Considering the use of correction tools, the time expenditure for lymph node segmentation (29.7 ± 17.4 sec) was the same as with the manual approach (29.1 ± 14.5 sec). Conclusion: Semi-automatically derived 'lymph node volume' and 'bi-dimensional WHO' significantly reduce the number of misclassified patients in the CT follow-up of malignant lymphoma by at least 10%. However, lymph node volumetry does not outperform bi-dimensional WHO. (orig.)

  9. Detection, identification and classification of defects using ANN and a 2-degree-of-freedom robotic manipulator (Kohonen and MLP algorithms)

    International Nuclear Information System (INIS)

    Barrera, G.; Fabian, M. A.; Ugalde, C. A.

    2002-01-01

    The ultrasonic inspection technique has seen sustained growth since the 1980s. It has several advantages compared with the contact technique. A flexible and low-cost solution based on virtual instrumentation is presented for the servomechanism (manipulator) control of the ultrasound inspection transducer in the immersion technique. The developed system uses a personal computer (PC), a Windows operating system, virtual instrumentation software, DAQ cards and a GPIB card. As a solution to the detection, classification and evaluation of defects, an Artificial Neural Network technique is proposed. It consists of the characterization and interpretation of acoustic signals (echoes) acquired by the immersion ultrasonic inspection technique. Two neural networks are proposed: Kohonen and Multilayer Perceptron (MLP). With these techniques, complex non-linear processes can be modeled with great precision. The 2-degree-of-freedom manipulator control, the data acquisition and the network training have been carried out in a virtual instrument environment using LabVIEW and Data Engine. (Author) 14 refs

  10. Sensitivity and specificity of manual and automated measurements of reticulocyte parameters for classification of anemia in dogs: 174 cases (1993-2013).

    Science.gov (United States)

    Paltrinieri, Saverio; Rossi, Gabriele; Manca, Michela; Scarpa, Paola; Vitiello, Tiziana; Giordano, Alessia

    2016-10-01

    OBJECTIVE To assess sensitivity and specificity of manual and automated measurements of reticulocyte percentage, number, and production index for classification of anemia in dogs. DESIGN Retrospective case series. SAMPLE 174 blood smears from client-owned dogs with anemia collected between 1993 and 2013 for which reticulocyte parameters were determined manually (nonregenerative anemia, 22; preregenerative anemia, 23; regenerative anemia, 28) or with an automated laser-based counter (nonregenerative anemia, 66; preregenerative anemia, 17; regenerative anemia, 18). PROCEDURES Diagnostic performance was evaluated with receiver operating characteristic (ROC) curves by considering preregenerative anemia as nonregenerative or regenerative. Sensitivity, specificity, and positive likelihood ratio were calculated by use of cutoffs determined from ROC curves or published reference limits. RESULTS Considering preregenerative anemia as nonregenerative, areas under the curve (AUCs) for reticulocyte percentage, number, and production index were 97%, 93%, and 91% for manual counting and 93%, 90%, and 93% for automated counting. Sensitivity, specificity, and positive likelihood ratio were 82% to 86%, 82% to 87%, and 4.6 to 6.4, respectively. Considering preregenerative anemia as regenerative, AUCs were 77%, 82%, and 80% for manual counting and 81%, 82%, and 92% for automated counting. Sensitivity, specificity, and positive likelihood ratio were 72% to 74%, 76% to 87%, and 2.7 to 6.2, respectively. CONCLUSIONS AND CLINICAL RELEVANCE Whereas all reticulocyte parameters identified regeneration in anemic dogs, the performance of specific parameters was dependent on the method used. Findings suggested that lower cutoffs than published reference limits are preferred for reticulocyte number and production index and higher cutoffs are preferred for reticulocyte percentage. Reticulocyte production index may be useful when the pretest probability of regeneration is moderate.

  11. Late gadolinium uptake demonstrated with magnetic resonance in patients where automated PERFIT analysis of myocardial SPECT suggests irreversible perfusion defect

    International Nuclear Information System (INIS)

    Rosendahl, Lene; Blomstrand, Peter; Ohlsson, Jan L; Björklund, Per-Gunnar; Ahlander, Britt-Marie; Starck, Sven-Åke; Engvall, Jan E

    2008-01-01

    Myocardial perfusion single photon emission computed tomography (MPS) is frequently used as the reference method for the determination of myocardial infarct size. PERFIT® is a software package utilizing a three-dimensional, gender-specific, averaged heart model for the automatic evaluation of myocardial perfusion. The purpose of this study was to compare the perfusion defect size on MPS, assessed with PERFIT, with the hyperenhanced volume assessed by late gadolinium enhancement magnetic resonance imaging (LGE) and to relate their effect on the wall motion score index (WMSI) assessed with cine magnetic resonance imaging (cine-MRI) and echocardiography (echo). LGE was performed in 40 patients where clinical MPS showed an irreversible uptake reduction suggesting a myocardial scar. Infarct volume, extent and major coronary supply were compared between MPS and LGE, as well as the relationship between infarct size from both methods and WMSI. MPS showed a slightly larger infarct volume than LGE (MPS 29.6 ± 23.2 ml, LGE 22.1 ± 16.9 ml, p = 0.01), while no significant difference was found in infarct extent (MPS 11.7 ± 9.4%, LGE 13.0 ± 9.6%). The correlation coefficients between methods with respect to infarct size and infarct extent were 0.71 and 0.63, respectively. WMSI determined with cine-MRI correlated moderately with infarct volume and infarct extent (cine-MRI vs MPS: volume r = 0.71, extent r = 0.71; cine-MRI vs LGE: volume r = 0.62, extent r = 0.60). Similar results were achieved when wall motion was determined with echo. Both MPS and LGE showed the same major coronary supply to the infarct area in a majority of patients (Kappa = 0.84). MPS and LGE agree moderately in the determination of infarct size in both absolute and relative terms, although infarct volume is slightly larger with MPS. The correlation between WMSI and infarct size is moderate.

  12. Computational mask defect review for contamination and haze inspections

    Science.gov (United States)

    Morgan, Paul; Rost, Daniel; Price, Daniel; Corcoran, Noel; Satake, Masaki; Hu, Peter; Peng, Danping; Yonenaga, Dean; Tolani, Vikram; Wolf, Yulian; Shah, Pinkesh

    2013-09-01

    the mask manufacturing process. The latter characterization qualifies real defect signatures, such as pin-dots or pin-holes, extrusions or intrusions, assist-feature or dummy-fill defects, write errors or un-repairable defects, chrome-on-shifter or missing-chrome-from-shifter defects, particles, etc., and also false defect signatures, such as those due to inspection tool registration or image alignment, interlace artifacts, CCD camera artifacts, optical shimmer, focus errors, etc. Such qualitative characterization of defects has enabled better inspection tool SPC and process defect control in the mask shop. In this paper, the same computational approach to defect review has been extended to contamination-style defect inspections, including Die-to-Die reflected and non-Die-to-Die or single-die inspections. In addition to the computational methods used for transmitted aerial images, defects detected in die-to-die reflected-light mode are analyzed based on special defect and background coloring in reflected light, and other characteristics, to determine the exact type and severity. For those detected in the non-Die-to-Die mode, only defect images are available from the inspection tool. Without a reference, i.e., defect-free, image, it is often difficult to determine the true nature or impact of the defect in question. Using a combination of inspection-tool modeling and image inversion techniques, Luminescent's LAIPH™ system generates an accurate reference image, and then proceeds with automated defect characterization as if the images were simply from a die-to-die inspection. The disposition of contamination-style defects this way filters out >90% of false and nuisance defects that otherwise would have been manually reviewed or measured on AIMS™. Such computational defect review, unifying defect disposition across all available inspection modes, has been imperative to ensuring no yield losses due to errors in operator defect classification on one hand, and on the other

  13. Spider Neurotoxins, Short Linear Cationic Peptides and Venom Protein Classification Improved by an Automated Competition between Exhaustive Profile HMM Classifiers.

    Science.gov (United States)

    Koua, Dominique; Kuhn-Nentwig, Lucia

    2017-08-08

    Spider venoms are rich cocktails of bioactive peptides, proteins, and enzymes that have been intensively investigated over the years. In order to provide a better comprehension of that richness, we propose a three-level family classification system for spider venom components. This classification is supported by an exhaustive set of 219 new profile hidden Markov models (HMMs) able to attribute a given peptide to its precise peptide type, family, and group. The proposed classification has the advantages of being totally independent from variable spider taxonomic names and can easily evolve. In addition to the new classifiers, we introduce and demonstrate the efficiency of hmmcompete, a new standalone tool that monitors HMM-based family classification and, after post-processing the result, reports the best classifier when multiple models produce significant scores towards given peptide queries. The combined use of hmmcompete and the new spider venom component-specific classifiers demonstrated 96% sensitivity to properly classify all known spider toxins from the UniProtKB database. These tools are timely regarding the important classification needs caused by the increasing number of peptides and proteins generated by transcriptomic projects.
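    The selection logic that hmmcompete automates, keeping only significant per-model scores for a query and reporting the winner, can be mimicked in a few lines; the function, threshold, and scores here are invented and do not reflect hmmcompete's actual interface:

```python
def best_classifier(scores, threshold=0.0):
    """Report the top-scoring family model among those with significant
    scores for one peptide query; return None if nothing is significant."""
    significant = {model: s for model, s in scores.items() if s > threshold}
    return max(significant, key=significant.get) if significant else None

# Fake per-model bit scores for one query sequence.
query_scores = {"toxin_family_A": 180.2, "toxin_family_B": 95.7,
                "linear_cationic": 12.3}
print(best_classifier(query_scores))  # -> 'toxin_family_A'
```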

  14. Phenotype classification of zebrafish embryos by supervised learning.

    Directory of Open Access Journals (Sweden)

    Nathalie Jeanray

    Full Text Available Zebrafish is increasingly used to assess biological properties of chemical substances and thus is becoming a specific tool for toxicological and pharmacological studies. The effects of chemical substances on embryo survival and development are generally evaluated manually through microscopic observation by an expert and documented by several typical photographs. Here, we present a methodology to automatically classify brightfield images of wildtype zebrafish embryos according to their defects by using an image analysis approach based on supervised machine learning. We show that, compared to manual classification, automatic classification results in 90 to 100% agreement with consensus voting of biological experts in nine out of eleven considered defects in 3 days old zebrafish larvae. Automation of the analysis and classification of zebrafish embryo pictures reduces the workload and time required for the biological expert and increases the reproducibility and objectivity of this classification.

  15. Phenotype classification of zebrafish embryos by supervised learning.

    Science.gov (United States)

    Jeanray, Nathalie; Marée, Raphaël; Pruvot, Benoist; Stern, Olivier; Geurts, Pierre; Wehenkel, Louis; Muller, Marc

    2015-01-01

    Zebrafish is increasingly used to assess biological properties of chemical substances and thus is becoming a specific tool for toxicological and pharmacological studies. The effects of chemical substances on embryo survival and development are generally evaluated manually through microscopic observation by an expert and documented by several typical photographs. Here, we present a methodology to automatically classify brightfield images of wildtype zebrafish embryos according to their defects by using an image analysis approach based on supervised machine learning. We show that, compared to manual classification, automatic classification results in 90 to 100% agreement with consensus voting of biological experts in nine out of eleven considered defects in 3 days old zebrafish larvae. Automation of the analysis and classification of zebrafish embryo pictures reduces the workload and time required for the biological expert and increases the reproducibility and objectivity of this classification.

  16. Using Global Positioning Systems (GPS) and temperature data to generate time-activity classifications for estimating personal exposure in air monitoring studies: an automated method.

    Science.gov (United States)

    Nethery, Elizabeth; Mallach, Gary; Rainham, Daniel; Goldberg, Mark S; Wheeler, Amanda J

    2014-05-08

    Personal exposure studies of air pollution generally use self-reported diaries to capture individuals' time-activity data. Enhancements in the accuracy, size, memory and battery life of personal Global Positioning System (GPS) units have allowed for higher resolution tracking of study participants' locations. Improved time-activity classifications combined with personal continuous air pollution sampling can improve assessments of location-related air pollution exposures for health studies. GPS and personal temperature data were collected from 54 children with asthma living in Montreal, Canada, who participated in a 10-day personal air pollution exposure study. A method was developed that incorporated personal temperature data and then matched a participant's position against available spatial data (i.e., road networks) to generate time-activity categories. The diary-based and GPS-generated time-activity categories were compared and combined with continuous personal PM2.5 data to assess the impact of exposure misclassification when using diary-based methods. There was good agreement between the automated method and the diary method; however, the automated method (means: outdoors = 5.1%, indoors other = 9.8%) estimated less time spent in some locations compared to the diary method (outdoors = 6.7%, indoors other = 14.4%). Agreement statistics (AC1 = 0.778) suggest 'good' agreement between methods over all location categories. However, location categories (Outdoors and Transit) where less time is spent show greater disagreement: e.g., mean time "Indoors Other" using the time-activity diary was 14.4% compared to 9.8% using the automated method. While mean daily time "In Transit" was relatively consistent between the methods, the mean daily exposure to PM2.5 while "In Transit" was 15.9 μg/m3 using the automated method compared to 6.8 μg/m3 using the daily diary. Mean times spent in different locations as categorized by a GPS-based method were

  17. Radiological assessment of breast density by visual classification (BI-RADS) compared to automated volumetric digital software (Quantra): implications for clinical practice.

    Science.gov (United States)

    Regini, Elisa; Mariscotti, Giovanna; Durando, Manuela; Ghione, Gianluca; Luparia, Andrea; Campanino, Pier Paolo; Bianchi, Caterina Chiara; Bergamasco, Laura; Fonio, Paolo; Gandini, Giovanni

    2014-10-01

    This study was done to assess breast density on digital mammography and digital breast tomosynthesis according to the visual Breast Imaging Reporting and Data System (BI-RADS) classification, to compare visual assessment with the Quantra software for automated density measurement, and to establish the role of the software in clinical practice. We analysed 200 digital mammograms performed in 2D and 3D modality, 100 of which were positive for breast cancer and 100 negative. Radiological density was assessed with the BI-RADS classification; a Quantra density cut-off value was sought on the 2D images only to discriminate between BI-RADS categories 1-2 and BI-RADS 3-4. Breast density was correlated with age, use of hormone therapy, and increased risk of disease. The agreement between the 2D and 3D assessments of BI-RADS density was high (K = 0.96). A cut-off value of 21% best discriminated between BI-RADS categories 1-2 and 3-4. Breast density was negatively correlated with age (r = -0.44) and positively with use of hormone therapy (p = 0.0004). Quantra density was higher in breasts with cancer than in healthy breasts. There is no clear difference between the visual assessments of density on 2D and 3D images. Use of the automated system requires the adoption of a cut-off value (set at 21%) to effectively discriminate BI-RADS 1-2 and 3-4, and could be useful in clinical practice.
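
    A sketch of how such a density cut-off can be chosen: the standard Youden-index criterion on an ROC curve, here with synthetic densities and scikit-learn assumed. The study's own derivation of the 21% threshold may differ.

    ```python
    # Choosing a volumetric-density cut-off that separates BI-RADS 1-2 from
    # 3-4 by maximizing Youden's J on an ROC curve (illustrative data).
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(1)
    # Quantra-style % density: lower for BI-RADS 1-2 (y=0), higher for 3-4 (y=1).
    density = np.concatenate([rng.normal(15, 4, 100), rng.normal(28, 6, 100)])
    birads_34 = np.concatenate([np.zeros(100), np.ones(100)])

    fpr, tpr, thresholds = roc_curve(birads_34, density)
    best = np.argmax(tpr - fpr)   # Youden's J = sensitivity + specificity - 1
    print(f"optimal cut-off ~ {thresholds[best]:.1f}%")  # near 21% in the study
    ```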

  18. Use of self-organizing maps for classification of defects in the tubes from the steam generator of nuclear power plants; Classificacao de defeitos em tubos de gerador de vapor de plantas nucleares utilizando mapas auto-organizaveis

    Energy Technology Data Exchange (ETDEWEB)

    Mesquita, Roberto Navarro de

    2002-07-01

    This thesis develops a new method for classifying different steam generator tube defects in nuclear power plants using eddy current test signals. The method uses self-organizing maps to compare how efficiently different signal features identify and classify these defects. A multiple inference system is proposed that combines the classifications from maps trained on the different extracted features to infer the final defect type. The feature extraction methods used are the wavelet zero-crossing representation, linear predictive coding (LPC), and other basic time-domain signal representations such as magnitude and phase. Many feature vectors are obtained from combinations of these extracted features. These vectors are tested for defect classification, and the best ones are applied to the multiple inference system. A systematic study of pre-processing, calibration and analysis methods for steam generator tube defect signals in nuclear power plants is carried out. The efficiency of the method is demonstrated, and feature maps with the main prototypes are obtained for each type of steam generator tube defect. (author)
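
    A minimal sketch of SOM-based defect classification, assuming the third-party MiniSom package and synthetic feature vectors in place of the thesis' eddy current features; labeling map cells by majority vote is one common way to turn a SOM into a classifier, not necessarily the thesis' exact scheme.

    ```python
    # SOM classification sketch: train a map, then label each cell by the
    # majority class of the training vectors it wins.
    import numpy as np
    from minisom import MiniSom   # third-party package, assumed installed

    rng = np.random.default_rng(2)
    # Stand-in eddy-current feature vectors (e.g. LPC or wavelet zero-crossing
    # coefficients), with 3 invented defect classes.
    X = rng.random((300, 16))
    y = rng.integers(0, 3, 300)

    som = MiniSom(8, 8, 16, sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(X, 5000)

    votes = {}
    for xi, yi in zip(X, y):
        votes.setdefault(som.winner(xi), []).append(yi)
    labels = {cell: max(set(v), key=v.count) for cell, v in votes.items()}

    def classify(x):
        return labels.get(som.winner(x), -1)  # -1: cell never won in training

    print(classify(X[0]), y[0])
    ```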

  19. AUTOMATION RESEARCHES IN FOREST PRODUCTS INDUSTRY

    Directory of Open Access Journals (Sweden)

    İsmail Aydın

    2004-04-01

    Full Text Available Wood is a natural polymeric material with a heterogeneous nature. The natural growth process and environmental influences can lead to features in wood that are undesirable for certain applications and are known as defects. Defects in wood affect the visual appearance and structural properties of wood. The type of defect is based on whether growth, environmental conditions, handling or processing causes it. The definition and acceptability of defect types can vary between industries. Wood materials such as logs, lumber and parquet are usually subject to a classification before selling, and these materials are sold based on their quality grades. The ability to detect internal defects both in the log and in the lumber can save mills time and processing costs. In this study, information on the automation research conducted to detect defects in wood materials is given. As a result, it is indicated that there are numerous scanning methods able to detect wood features, but no one method is adequate for all defect types.

  20. Application of a new genetic classification and semi-automated geomorphic mapping approach in the Perth submarine canyon, Australia

    Science.gov (United States)

    Picard, K.; Nanson, R.; Huang, Z.; Nichol, S.; McCulloch, M.

    2017-12-01

    The acquisition of high resolution marine geophysical data has intensified in recent years (e.g. multibeam echo-sounding, sub-bottom profiling). This progress provides the opportunity to classify and map the seafloor in greater detail, using new methods that preserve the links between processes and morphology. Geoscience Australia has developed a new genetic classification approach, nested within the Harris et al. (2014) global seafloor mapping framework. The approach divides parent units into sub-features based on established classification schemes and feature descriptors defined by Bradwell et al. (2016: http://nora.nerc.ac.uk/), the International Hydrographic Organization (https://www.iho.int) and the Coastal and Marine Ecological Classification Standard (https://www.cmecscatalog.org). Owing to the ecological significance of submarine canyon systems in particular, much recent attention has focused on defining their variation in form and process, whereby they can be classified using a range of topographic metrics, fluvial dis/connection and shelf-incising status. The Perth Canyon is incised into the continental slope and shelf of southwest Australia, covering an area of >1500 km2 and extending from 4700 m water depth to the shelf break at 170 m. The canyon sits within a Marine Protected Area, incorporating a Marine National Park and Habitat Protection Zone in recognition of its benthic and pelagic biodiversity values. However, detailed information on the spatial patterns of the seabed habitats that influence this biodiversity is lacking. Here we use 20 m resolution bathymetry and acoustic backscatter data acquired in 2015 by the Schmidt Ocean Institute, plus sub-bottom datasets and sediment samples collected by Geoscience Australia in 2005, to apply the new geomorphic classification system to the Perth Canyon. This presentation will show the results of the geomorphic feature mapping of the canyon and its application to better defining potential benthic habitats.

  1. Food intake monitoring: an acoustical approach to automated food intake activity detection and classification of consumed food

    International Nuclear Information System (INIS)

    Päßler, Sebastian; Fischer, Wolf-Joachim; Wolff, Matthias

    2012-01-01

    Obesity and nutrition-related diseases are currently growing challenges for medicine. A precise and timesaving method for food intake monitoring is needed. For this purpose, an approach based on the classification of sounds produced during food intake is presented. Sounds are recorded non-invasively by miniature microphones in the outer ear canal. A database of 51 participants eating seven types of food and consuming one drink has been developed for algorithm development and model training. The database is labeled manually using a protocol with instructions for annotation. The annotation procedure is evaluated using Cohen's kappa coefficient. Food intake activity is detected by comparing the signal energy of in-ear sounds to environmental sounds recorded by a reference microphone. Hidden Markov models are used for the recognition of single chew or swallowing events. Intake cycles are modeled as event sequences in finite-state grammars. Classification of consumed food is realized by a finite-state grammar decoder based on the Viterbi algorithm. We achieved a detection accuracy of 83% and a food classification accuracy of 79% on a test set of 10% of all records. Our approach addresses the need to monitor the timing and occurrence of eating. With differentiation of consumed food, a first step toward the goal of meal weight estimation is taken. (paper)
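
    A compact sketch of the detection step described above: comparing short-time energy between the in-ear and reference channels. The signal, threshold and frame sizes are invented, and the HMM/Viterbi recognition stages are omitted.

    ```python
    # Flag food-intake activity where short-time energy in the in-ear channel
    # exceeds that of the environmental reference channel.
    import numpy as np

    def short_time_energy(x, frame=1024, hop=512):
        return np.array([np.sum(x[i:i + frame] ** 2)
                         for i in range(0, len(x) - frame, hop)])

    fs = 16000
    t = np.arange(fs * 2) / fs
    reference = 0.01 * np.random.default_rng(3).standard_normal(fs * 2)
    in_ear = reference.copy()
    in_ear[fs//2:fs] += 0.2 * np.sin(2 * np.pi * 900 * t[fs//2:fs])  # a "chew"

    e_in, e_ref = short_time_energy(in_ear), short_time_energy(reference)
    intake = e_in > 4 * e_ref        # illustrative threshold, not the paper's
    print(np.flatnonzero(intake))    # frames flagged as intake activity
    ```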

  2. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning.

    Science.gov (United States)

    Wang, Xinggang; Yang, Wei; Weinreb, Jeffrey; Han, Juan; Li, Qiubai; Kong, Xiangchuang; Yan, Yongluan; Ke, Zan; Luo, Bo; Liu, Tao; Wang, Liang

    2017-11-13

    Prostate cancer (PCa) is a major cause of death and has been documented since ancient times, including in imaging of an Egyptian Ptolemaic mummy. PCa detection is critical to personalized medicine and varies considerably under an MRI scan. 172 patients with 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were obtained. A deep learning approach with a deep convolutional neural network (DCNN) and a non-deep learning approach with SIFT image features and bag-of-words (BoW), a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with prostate benign conditions (BCs) such as prostatitis or benign prostatic hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007); the AUC for the non-deep learning method was 0.70 (95% CI 0.63-0.77). Our results suggest that deep learning with a DCNN is superior to non-deep learning with SIFT image features and a BoW model for fully automated differentiation of PCa patients from prostate BC patients. Our deep learning method is extensible to image modalities such as MR imaging, CT and PET of other organs.

  3. Semi-Automated Classification of Gray Scale Aerial Photographs using Geographic Object Based Image Analysis (GEOBIA) Technique

    Science.gov (United States)

    Harb Rabia, Ahmed; Terribile, Fabio

    2013-04-01

    Aerial photography is an important source of high resolution remotely sensed data. Before 1970, aerial photographs were the only remote sensing data source for land use and land cover classification. Using these old aerial photographs improves the final output of land use and land cover change detection. However, classic techniques of aerial photograph classification, like manual interpretation or on-screen digitization, require great experience, long processing times and vast effort. A new technique needs to be developed in order to reduce processing time and effort and to give better results. Geographic object based image analysis (GEOBIA) is a newly developed area of Geographic Information Science and remote sensing in which automatic segmentation of images into objects of similar spectral, temporal and spatial characteristics is undertaken. Unlike pixel-based techniques, GEOBIA deals with object properties such as texture, square fit, roundness and many other properties that can improve classification results. The GEOBIA technique can be divided into two main steps: segmentation and classification. The segmentation step groups adjacent pixels into objects of similar spectral and spatial characteristics; the classification step assigns classes to the generated objects based on the characteristics of the individual objects. This study aimed to use the GEOBIA technique to develop a novel approach for land use and land cover classification of aerial photographs that saves time and effort and gives improved results. Aerial photographs from 1954 of Valle Telesina in Italy were used in this study. Images were rectified and georeferenced in ArcMap using topographic maps. Images were then processed in eCognition software to generate the land use and land cover map of 1954. A decision tree rule set was developed in eCognition to classify the images, and finally nine classes of general land use and land cover in the study area were recognized (forest, tree stripes, agricultural...

  4. Online Surface Defect Identification of Cold Rolled Strips Based on Local Binary Pattern and Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2018-03-01

    Full Text Available In the production of cold-rolled strip, the strip surface may suffer from various defects which need to be detected and identified using an online inspection system. The system is equipped with high-speed and high-resolution cameras to acquire images from the moving strip surface. Features are then extracted from the images and used as inputs to a pre-trained classifier to identify the type of defect. New types of defect often appear in production. At this point the pre-trained classifier needs to be quickly retrained and redeployed within seconds to meet the requirement of online identification of all defects in the environment of a continuous production line. Therefore, the methods for extracting image features and training the classification model should be automated and fast, normally completing within seconds. This paper presents our findings in investigating the computational and classification performance of various feature extraction methods and classification models for strip surface defect identification. The methods include Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF) and Local Binary Patterns (LBP). The classifiers we have assessed include Back Propagation (BP) neural networks, Support Vector Machines (SVM) and Extreme Learning Machines (ELM). By comparing various combinations of feature extraction and classification methods, our experiments show that the hybrid method of LBP for feature extraction and ELM for defect classification results in less training and identification time with higher classification accuracy, which satisfies the requirements of online real-time identification.
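
    A hedged sketch of the winning LBP + ELM combination: uniform-LBP histograms (scikit-image) feed a small extreme learning machine implemented directly, since scikit-learn ships no ELM. Data, class count and layer sizes are synthetic stand-ins.

    ```python
    # LBP features + extreme learning machine trained by pseudo-inverse.
    import numpy as np
    from skimage.feature import local_binary_pattern

    rng = np.random.default_rng(4)

    def lbp_histogram(img, P=8, R=1):
        """Uniform-LBP code histogram of an 8-bit grayscale image."""
        img8 = (img * 255).astype(np.uint8)
        codes = local_binary_pattern(img8, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    class ELM:
        """Single-hidden-layer ELM: random hidden weights, closed-form output."""
        def __init__(self, n_hidden=80, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)
        def fit(self, X, Y):
            self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
            H = np.tanh(X @ self.W + self.b)    # random, untrained hidden layer
            self.beta = np.linalg.pinv(H) @ Y   # closed-form output weights
            return self
        def predict(self, X):
            return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

    imgs = rng.random((150, 64, 64))            # stand-in strip images
    y = rng.integers(0, 3, 150)                 # 3 invented defect classes
    X = np.stack([lbp_histogram(im) for im in imgs])
    elm = ELM().fit(X, np.eye(3)[y])
    print("training accuracy:", (elm.predict(X) == y).mean())
    ```

    Training the output layer by pseudo-inverse rather than back-propagation is what makes retraining take seconds, which is the property the abstract emphasizes for production use.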

  5. Automated Detection, Localization, and Classification of Traumatic Vertebral Body Fractures in the Thoracic and Lumbar Spine at CT.

    Science.gov (United States)

    Burns, Joseph E; Yao, Jianhua; Muñoz, Hector; Summers, Ronald M

    2016-01-01

    To design and validate a fully automated computer system for the detection and anatomic localization of traumatic thoracic and lumbar vertebral body fractures at computed tomography (CT). This retrospective study was HIPAA compliant. Institutional review board approval was obtained, and informed consent was waived. CT examinations in 104 patients (mean age, 34.4 years; range, 14-88 years; 32 women, 72 men), consisting of 94 examinations with positive findings for fractures (59 with vertebral body fractures) and 10 control examinations (without vertebral fractures), were performed. There were 141 thoracic and lumbar vertebral body fractures in the case set. The locations of fractures were marked and classified by a radiologist according to Denis column involvement. The CT data set was divided into training and testing subsets (37 and 67 subsets, respectively) for analysis by means of prototype software for fully automated spinal segmentation and fracture detection. Free-response receiver operating characteristic analysis was performed. Training set sensitivity for detection and localization of fractures within each vertebra was 0.82 (28 of 34 findings; 95% confidence interval [CI]: 0.68, 0.90), with a false-positive rate of 2.5 findings per patient. The sensitivity for fracture localization to the correct vertebra was 0.88 (23 of 26 findings; 95% CI: 0.72, 0.96), with a false-positive rate of 1.3. Testing set sensitivity for the detection and localization of fractures within each vertebra was 0.81 (87 of 107 findings; 95% CI: 0.75, 0.87), with a false-positive rate of 2.7. The sensitivity for fracture localization to the correct vertebra was 0.92 (55 of 60 findings; 95% CI: 0.79, 0.94), with a false-positive rate of 1.6. The most common cause of false-positive findings was nutrient foramina (106 of 272 findings [39%]). The fully automated computer system detects and anatomically localizes vertebral body fractures in the thoracic and lumbar spine on CT images with a

  6. Automated classification and visualization of healthy and pathological dental tissues based on near-infrared hyper-spectral imaging

    Science.gov (United States)

    Usenik, Peter; Bürmen, Miran; Vrtovec, Tomaž; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2011-03-01

    Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel crystals, commonly known as white spots which are difficult to diagnose. If detected early enough, such demineralization can be arrested and reversed by non-surgical means through well established dental treatments (fluoride therapy, anti-bacterial therapy, low intensity laser irradiation). Near-infrared (NIR) hyper-spectral imaging is a new promising technique for early detection of demineralization based on distinct spectral features of healthy and pathological dental tissues. In this study, we apply NIR hyper-spectral imaging to classify and visualize healthy and pathological dental tissues including enamel, dentin, calculus, dentin caries, enamel caries and demineralized areas. For this purpose, a standardized teeth database was constructed consisting of 12 extracted human teeth with different degrees of natural dental lesions imaged by NIR hyper-spectral system, X-ray and digital color camera. The color and X-ray images of teeth were presented to a clinical expert for localization and classification of the dental tissues, thereby obtaining the gold standard. Principal component analysis was used for multivariate local modeling of healthy and pathological dental tissues. Finally, the dental tissues were classified by employing multiple discriminant analysis. High agreement was observed between the resulting classification and the gold standard with the classification sensitivity and specificity exceeding 85 % and 97 %, respectively. This study demonstrates that NIR hyper-spectral imaging has considerable diagnostic potential for imaging hard dental tissues.

  7. Present perspectives on the automated classification of the G-protein coupled receptors (GPCRs) at the protein sequence level

    DEFF Research Database (Denmark)

    Davies, Matthew N; Gloriam, David E; Secker, Andrew

    2011-01-01

    The G-protein coupled receptors--or GPCRs--comprise simultaneously one of the largest and one of the most multi-functional protein families known to modern-day molecular bioscience. From a drug discovery and pharmaceutical industry perspective, the GPCRs constitute one of the most commercially and economically important groups of proteins known. Many different methodologies have been developed to efficiently and accurately classify the GPCRs; these range from motif-based techniques to machine learning as well as a variety of alignment-free techniques based on the physiochemical properties of sequences. We review here the available methodologies for the classification of GPCRs. Part of this work focuses on how we have tried to build the intrinsically hierarchical nature of sequence...

  8. Security Classification Using Automated Learning (SCALE): Optimizing Statistical Natural Language Processing Techniques to Assign Security Labels to Unstructured Text

    Science.gov (United States)

    2010-12-01

    2010 © Her Majesty the Queen (in Right of Canada), as represented by the Minister of National Defence, 2010. Abstract: Automating the... according to one's experience and the security policies. To label efficiently all the data available in the networks of the... although the automatic categorization of data by subject has been studied in depth, little research has focused on the evaluation...

  9. Hybrid digital signal processing and neural networks for automated diagnostics using NDE methods

    International Nuclear Information System (INIS)

    Upadhyaya, B.R.; Yan, W.

    1993-11-01

    The primary purpose of the current research was to develop an integrated approach by combining information compression methods and artificial neural networks for the monitoring of plant components using nondestructive examination data. Specifically, data from eddy current inspection of heat exchanger tubing were utilized to evaluate this technology. The focus of the research was to develop and test various data compression methods (for eddy current data) and the performance of different neural network paradigms for defect classification and defect parameter estimation. Feedforward, fully-connected neural networks, that use the back-propagation algorithm for network training, were implemented for defect classification and defect parameter estimation using a modular network architecture. A large eddy current tube inspection database was acquired from the Metals and Ceramics Division of ORNL. These data were used to study the performance of artificial neural networks for defect type classification and for estimating defect parameters. A PC-based data preprocessing and display program was also developed as part of an expert system for data management and decision making. The results of the analysis showed that for effective (low-error) defect classification and estimation of parameters, it is necessary to identify proper feature vectors using different data representation methods. The integration of data compression and artificial neural networks for information processing was established as an effective technique for automation of diagnostics using nondestructive examination methods
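
    A sketch of the classification stage described in the record, with PCA standing in for the report's data compression methods and scikit-learn's MLP standing in for its custom feedforward back-propagation networks; the data and class names are synthetic.

    ```python
    # Compressed eddy-current features feeding a feedforward back-propagation
    # network (scikit-learn stand-in for the report's implementation).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(5)
    signals = rng.random((400, 256))        # stand-in eddy-current traces
    defect_type = rng.integers(0, 4, 400)   # e.g. pit, crack, dent, wear

    model = make_pipeline(
        PCA(n_components=20),               # information compression step
        MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0),
    )
    model.fit(signals, defect_type)
    print(model.score(signals, defect_type))
    ```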

  10. A machine vision system for automated non-invasive assessment of cell viability via dark field microscopy, wavelet feature selection and classification.

    Science.gov (United States)

    Wei, Ning; Flaschel, Erwin; Friehs, Karl; Nattkemper, Tim Wilhelm

    2008-10-21

    Cell viability is one of the basic properties indicating the physiological state of the cell, and thus it has long been one of the major considerations in biotechnological applications. Conventional methods for extracting information about cell viability usually need reagents to be applied on the targeted cells. These reagent-based techniques are reliable and versatile; however, some of them might be invasive and even toxic to the target cells. In support of automated noninvasive assessment of cell viability, a machine vision system has been developed. This system is based on a supervised learning technique. It learns from images of certain kinds of cell populations and trains classifiers. These trained classifiers are then employed to evaluate the images of given cell populations obtained via dark field microscopy. Wavelet decomposition is performed on the cell images. Energy and entropy are computed for each wavelet subimage as features. A feature selection algorithm is implemented to achieve better performance. Correlation between the results from the machine vision system and commonly accepted gold standards becomes stronger if wavelet features are utilized. The best performance is achieved with a selected subset of wavelet features. The machine vision system based on dark field microscopy in conjunction with supervised machine learning and wavelet feature selection automates the cell viability assessment and yields results comparable to commonly accepted methods. Wavelet features are found to be suitable to describe the discriminative properties of live and dead cells in viability classification. According to the analysis, live cells exhibit morphologically more details and are intracellularly more organized than dead ones, which display more homogeneous and diffuse gray values throughout the cells. Feature selection increases the system's performance. The reason lies in the fact that feature selection plays a role of excluding redundant or misleading...
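
    The energy-and-entropy-per-subimage feature described above is easy to sketch with PyWavelets; the wavelet family, decomposition level and normalization below are illustrative choices, not necessarily the paper's.

    ```python
    # Energy and entropy of each subimage of a 2-level 2-D wavelet decomposition.
    import numpy as np
    import pywt

    def wavelet_features(img, wavelet="db2", level=2):
        feats = []
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Approximation plus the three detail subimages per level.
        subimages = [coeffs[0]] + [c for detail in coeffs[1:] for c in detail]
        for s in subimages:
            p = np.abs(s).ravel()
            p = p / (p.sum() + 1e-12)                     # normalize for entropy
            feats.append(np.sum(s.astype(float) ** 2))    # energy
            feats.append(-np.sum(p * np.log2(p + 1e-12))) # entropy
        return np.array(feats)

    cell_image = np.random.default_rng(6).random((128, 128))
    print(wavelet_features(cell_image).shape)   # 7 subimages x 2 features
    ```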

  11. A machine vision system for automated non-invasive assessment of cell viability via dark field microscopy, wavelet feature selection and classification

    Directory of Open Access Journals (Sweden)

    Friehs Karl

    2008-10-01

    Full Text Available Abstract Background Cell viability is one of the basic properties indicating the physiological state of the cell, and thus it has long been one of the major considerations in biotechnological applications. Conventional methods for extracting information about cell viability usually need reagents to be applied on the targeted cells. These reagent-based techniques are reliable and versatile; however, some of them might be invasive and even toxic to the target cells. In support of automated noninvasive assessment of cell viability, a machine vision system has been developed. Results This system is based on a supervised learning technique. It learns from images of certain kinds of cell populations and trains classifiers. These trained classifiers are then employed to evaluate the images of given cell populations obtained via dark field microscopy. Wavelet decomposition is performed on the cell images. Energy and entropy are computed for each wavelet subimage as features. A feature selection algorithm is implemented to achieve better performance. Correlation between the results from the machine vision system and commonly accepted gold standards becomes stronger if wavelet features are utilized. The best performance is achieved with a selected subset of wavelet features. Conclusion The machine vision system based on dark field microscopy in conjunction with supervised machine learning and wavelet feature selection automates the cell viability assessment and yields results comparable to commonly accepted methods. Wavelet features are found to be suitable to describe the discriminative properties of live and dead cells in viability classification. According to the analysis, live cells exhibit morphologically more details and are intracellularly more organized than dead ones, which display more homogeneous and diffuse gray values throughout the cells. Feature selection increases the system's performance. The reason lies in the fact that feature...

  12. Present perspectives on the automated classification of the G-protein coupled receptors (GPCRs) at the protein sequence level

    DEFF Research Database (Denmark)

    Davies, Matthew N; Gloriam, David E; Secker, Andrew

    2011-01-01

    The G-protein coupled receptors--or GPCRs--comprise simultaneously one of the largest and one of the most multi-functional protein families known to modern-day molecular bioscience. From a drug discovery and pharmaceutical industry perspective, the GPCRs constitute one of the most commercially and economically important groups of proteins known. The GPCRs undertake numerous vital metabolic functions and interact with a hugely diverse range of small and large ligands. Many different methodologies have been developed to efficiently and accurately classify the GPCRs. These range from motif-based techniques to machine learning as well as a variety of alignment-free techniques based on the physiochemical properties of sequences. We review here the available methodologies for the classification of GPCRs. Part of this work focuses on how we have tried to build the intrinsically hierarchical nature of sequence...

  13. A cellular neural network based method for classification of magnetic resonance images: towards an automated detection of hippocampal sclerosis.

    Science.gov (United States)

    Döhler, Florian; Mormann, Florian; Weber, Bernd; Elger, Christian E; Lehnertz, Klaus

    2008-05-30

    We present a cellular neural network (CNN) based approach to classify magnetic resonance images with and without hippocampal or Ammon's horn sclerosis (AHS) in the medial temporal lobe. A CNN combines the architecture of cellular automata and artificial neural networks and is an array of locally coupled nonlinear electrical circuits or cells, which is capable of processing a large amount of information in parallel and in real time. Using an exemplary database consisting of a large number of volumes of interest extracted from T1-weighted magnetic resonance images of 144 subjects, we demonstrate that the network allows brain tissue to be classified with respect to the presence or absence of mesial temporal sclerosis. The results indicate the general feasibility of CNN-based computer-aided systems for diagnosis and classification of images generated by medical imaging systems.

  14. Automated classification of seismic sources in a large database: a comparison of Random Forests and Deep Neural Networks.

    Science.gov (United States)

    Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe

    2017-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely led to a fast densification of local, regional and global seismic networks for near real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and because hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for the processing of continuous seismic data appears to be a necessity. The classification algorithm should satisfy the need for a method that is robust, precise and versatile enough to be deployed to monitor seismicity in very different contexts. In this study, we evaluate the ability of machine learning algorithms, namely Random Forest and Deep Neural Network classifiers, to analyze the seismic sources at the Piton de la Fournaise volcano. We gather a catalog of more than 20,000 events belonging to 8 classes of seismic sources. We define 60 attributes, based on the waveform, the frequency content and the polarization of the seismic waves, to parameterize the recorded seismic signals. We show that both algorithms provide similar positive classification rates, with values exceeding 90% of the events. When trained with a sufficient number of events, the rate of positive identification can reach 99%. These very high rates of positive identification open the perspective of an operational implementation of these algorithms for near-real time monitoring of...
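
    A sketch of the described setup: 60 attributes per event, 8 source classes, a Random Forest classifier, using scikit-learn and random stand-in data in place of the Piton de la Fournaise catalog.

    ```python
    # Random Forest over per-event attribute vectors (waveform, spectral and
    # polarization descriptors in the study; random numbers here).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    X = rng.random((2000, 60))           # stand-in attribute catalog
    y = rng.integers(0, 8, 2000)         # 8 seismic source classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    rf.fit(X_tr, y_tr)
    print(accuracy_score(y_te, rf.predict(X_te)))

    # Attribute importances show which descriptors drive the decisions.
    print(np.argsort(rf.feature_importances_)[-5:])
    ```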

  15. Evaluation of a rule-based method for epidemiological document classification towards the automation of systematic reviews.

    Science.gov (United States)

    Karystianis, George; Thayer, Kristina; Wolfe, Mary; Tsafnat, Guy

    2017-06-01

    Most data extraction efforts in epidemiology are focused on obtaining targeted information from clinical trials. In contrast, limited research has been conducted on the identification of information from observational studies, a major source of human evidence in many fields, including environmental health. The recognition of key epidemiological information (e.g., exposures) through text mining techniques can assist in the automation of systematic reviews and other evidence summaries. We designed and applied a knowledge-driven, rule-based approach to identify targeted information (study design, participant population, exposure, outcome, confounding factors, and the country where the study was conducted) from abstracts of epidemiological studies included in several systematic reviews of environmental health exposures. The rules were based on common syntactical patterns observed in text and are thus not specific to any systematic review. To validate the general applicability of our approach, we compared the data extracted using our approach versus hand curation for 35 epidemiological study abstracts manually selected for inclusion in two systematic reviews. The returned F-score, precision, and recall ranged from 70% to 98%, 81% to 100%, and 54% to 97%, respectively. The highest precision was observed for exposure, outcome and population (100%) while recall was best for exposure and study design with 97% and 89%, respectively. The lowest recall was observed for the population (54%), which also had the lowest F-score (70%). The performance of our text-mining approach demonstrated encouraging results for the identification of targeted information from observational epidemiological study abstracts related to environmental exposures. We have demonstrated that rules based on generic syntactic patterns in one corpus can be applied to other observational study designs by simply interchanging the dictionaries that aim to identify certain characteristics (i.e., outcomes...
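
    A toy illustration of syntactic-pattern rules plus the precision/recall/F-score evaluation the record reports; the patterns, dictionary and counts below are invented for the example, not the authors' rules.

    ```python
    # Rule-based extraction from an epidemiological abstract via regex patterns,
    # with a swappable exposure dictionary as the abstract describes.
    import re

    EXPOSURE_TERMS = r"(?:arsenic|PM2\.5|bisphenol A|lead)"  # invented dictionary
    RULES = {
        "study_design": re.compile(r"\b(cohort|case-control|cross-sectional)\b",
                                   re.I),
        "exposure": re.compile(rf"\bexposure to {EXPOSURE_TERMS}\b", re.I),
        "country": re.compile(r"\bin (Canada|China|the United States)\b"),
    }

    abstract = ("We conducted a cohort study in Canada examining exposure to "
                "arsenic and bladder-cancer incidence.")
    print({k: rx.findall(abstract) for k, rx in RULES.items()})

    # Evaluation against hand curation, as in the paper's validation step:
    def f_score(tp, fp, fn):
        p = tp / (tp + fp); r = tp / (tp + fn)
        return 2 * p * r / (p + r)
    print(f_score(tp=33, fp=2, fn=4))   # illustrative counts only
    ```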

  16. Automated detection and classification of major retinal vessels for determination of diameter ratio of arteries and veins

    Science.gov (United States)

    Muramatsu, Chisako; Hatanaka, Yuji; Iwase, Tatsuhiko; Hara, Takeshi; Fujita, Hiroshi

    2010-03-01

    Abnormalities of the retinal vasculature can indicate health conditions in the body, such as high blood pressure and diabetes. Providing an automatically determined width ratio of arteries and veins (A/V ratio) on retinal fundus images may help physicians in the diagnosis of hypertensive retinopathy, which may cause blindness. The purpose of this study was to detect major retinal vessels and classify them into arteries and veins for the determination of the A/V ratio. Images used in this study were obtained from the DRIVE database, which consists of 20 cases each for training and testing vessel detection algorithms. Starting with the reference standard of vasculature segmentation provided in the database, major arteries and veins, each in the upper and lower temporal regions, were manually selected for establishing the gold standard. We applied the black top-hat transformation and a double-ring filter to detect retinal blood vessels. From the extracted vessels, large vessels extending from the optic disc to the temporal regions were selected as target vessels for calculation of the A/V ratio. Image features were extracted from the vessel segments located between a quarter and one disc diameter from the edge of the optic disc. The target segments in the training cases were classified into arteries and veins by using linear discriminant analysis, and the selected parameters were applied to those in the test cases. Out of 40 pairs, 30 pairs (75%) of arteries and veins in the 20 test cases were correctly classified. The result can be used for the automated calculation of the A/V ratio.
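
    A sketch of the artery/vein classification step using scikit-learn's LinearDiscriminantAnalysis; the three features and their distributions are invented stand-ins for the paper's segment features.

    ```python
    # LDA on per-segment features; arteries tend to be brighter than veins in
    # fundus images, which the simulated feature means loosely reflect.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(8)
    # Invented features: mean red, mean green, central-reflex contrast.
    arteries = rng.normal([0.70, 0.35, 0.30], 0.05, (40, 3))
    veins = rng.normal([0.55, 0.25, 0.10], 0.05, (40, 3))
    X = np.vstack([arteries, veins])
    y = np.array([0] * 40 + [1] * 40)     # 0 = artery, 1 = vein

    lda = LinearDiscriminantAnalysis().fit(X, y)
    print(lda.score(X, y))

    # With segments labeled, the A/V ratio is artery width over vein width:
    widths = {"artery": 82.0, "vein": 110.0}   # pixels, illustrative
    print(widths["artery"] / widths["vein"])
    ```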

  17. Automated correlation and classification of secondary ion mass spectrometry images using a k-means cluster method.

    Science.gov (United States)

    Konicek, Andrew R; Lefman, Jonathan; Szakal, Christopher

    2012-08-07

    We present a novel method for correlating and classifying ion-specific time-of-flight secondary ion mass spectrometry (ToF-SIMS) images within a multispectral dataset by grouping images with similar pixel intensity distributions. Binary centroid images are created by employing a k-means-based custom algorithm. Centroid images are compared to grayscale SIMS images using a newly developed correlation method that assigns the SIMS images to classes that have similar spatial (rather than spectral) patterns. Image features of both large and small spatial extent are identified without the need for image pre-processing, such as normalization or fixed-range mass-binning. A subsequent classification step tracks the class assignment of SIMS images over multiple iterations of increasing n classes per iteration, providing information about groups of images that have similar chemistry. Details are discussed while presenting data acquired with ToF-SIMS on a model sample of laser-printed inks. This approach can lead to the identification of distinct ion-specific chemistries for mass spectral imaging by ToF-SIMS, as well as matrix-assisted laser desorption ionization (MALDI), and desorption electrospray ionization (DESI).

  18. Automated and simultaneous fovea center localization and macula segmentation using the new dynamic identification and classification of edges model

    Science.gov (United States)

    Onal, Sinan; Chen, Xin; Satamraju, Veeresh; Balasooriya, Maduka; Dabil-Karacal, Humeyra

    2016-01-01

    Detecting the position of retinal structures, including the fovea center and macula, in retinal images plays a key role in diagnosing eye diseases such as optic nerve hypoplasia, amblyopia, diabetic retinopathy, and macular edema. However, current detection methods are unreliable for infants or certain ethnic populations. Thus, a methodology is proposed here that may be useful for infants and across ethnicities that automatically localizes the fovea center and segments the macula on digital fundus images. First, dark structures and bright artifacts are removed from the input image using preprocessing operations, and the resulting image is transformed to polar space. Second, the fovea center is identified, and the macula region is segmented using the proposed dynamic identification and classification of edges (DICE) model. The performance of the method was evaluated using 1200 fundus images obtained from the relatively large, diverse, and publicly available Messidor database. In 96.1% of these 1200 cases, the distance between the fovea center identified manually by ophthalmologists and automatically using the proposed method remained within 0 to 8 pixels. The Dice similarity index comparing the manually obtained results with those of the model for macula segmentation was 96.12% for these 1200 cases. Thus, the proposed method displayed a high degree of accuracy. The methodology using the DICE model is unique and advantageous over previously reported methods because it simultaneously determines the fovea center and segments the macula region without using any structural information, such as optic disc or blood vessel location, and it may prove useful for all populations, including infants. PMID:27660803

  19. Spectral matching techniques (SMTs) and automated cropland classification algorithms (ACCAs) for mapping croplands of Australia using MODIS 250-m time-series (2000–2015) data

    Science.gov (United States)

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Congalton, Russell G.; Oliphant, Adam; Poehnelt, Justin; Yadav, Kamini; Rao, Mahesh N.; Massey, Richard

    2017-01-01

    Mapping croplands, including fallow areas, is an important measure to determine the quantity of food that is produced, where it is produced, and when it is produced (e.g. seasonality). Furthermore, croplands are known as water guzzlers, consuming anywhere between 70% and 90% of all human water use globally. Given these facts and the increase in global population to nearly 10 billion by the year 2050, the need for routine, rapid, and automated cropland mapping year-after-year and/or season-after-season is of great importance. The overarching goal of this study was to generate standard and routine cropland products, year-after-year, over very large areas through the use of two novel methods: (a) quantitative spectral matching techniques (QSMTs) applied at continental level and (b) a rule-based Automated Cropland Classification Algorithm (ACCA) with the ability to hind-cast, now-cast, and future-cast. Australia was chosen for the study given its extensive croplands, rich history of agriculture, and yet nonexistent routine yearly generated cropland products using multi-temporal remote sensing. This research produced three distinct cropland products using Moderate Resolution Imaging Spectroradiometer (MODIS) 250-m normalized difference vegetation index 16-day composite time-series data for 16 years: 2000 through 2015. The products consisted of: (1) cropland extent/areas versus cropland fallow areas, (2) irrigated versus rainfed croplands, and (3) cropping intensities: single, double, and continuous cropping. An accurate reference cropland product (RCP) for the year 2014 (RCP2014) produced using QSMT was used as a knowledge base to train and develop the ACCA algorithm that was then applied to the MODIS time-series data for the years 2000–2015. A comparison between the ACCA-derived cropland products (ACPs) for the year 2014 (ACP2014) versus RCP2014 provided an overall agreement of 89.4% (kappa = 0.814) with six classes: (a) producer's accuracies varying...

  20. Automated Water Extraction Index

    DEFF Research Database (Denmark)

    Feyisa, Gudina Legese; Meilby, Henrik; Fensholt, Rasmus

    2014-01-01

    ...of various sorts of environmental noise and at the same time offers a stable threshold value. Thus we introduced a new Automated Water Extraction Index (AWEI) improving classification accuracy in areas that include shadow and dark surfaces that other classification methods often fail to classify correctly...
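
    The truncated record refers to the Automated Water Extraction Index; the two published variants (Feyisa et al., 2014) can be written down directly. The coefficients below are reproduced from memory and are worth checking against the paper; band names follow Landsat 5 TM conventions.

    ```python
    # AWEI variants applied to surface-reflectance bands.
    def awei_nsh(green, swir1, nir, swir2):
        """AWEI for scenes without significant shadow."""
        return 4 * (green - swir1) - (0.25 * nir + 2.75 * swir2)

    def awei_sh(blue, green, nir, swir1, swir2):
        """Shadow-robust AWEI variant."""
        return blue + 2.5 * green - 1.5 * (nir + swir1) - 0.25 * swir2

    # Stand-in reflectances for one pixel of open water:
    blue, green, nir, swir1, swir2 = 0.06, 0.08, 0.03, 0.02, 0.01
    print(awei_nsh(green, swir1, nir, swir2) > 0)   # positive values map to water
    print(awei_sh(blue, green, nir, swir1, swir2) > 0)
    ```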

  1. Particle Swarm Optimization approach to defect detection in armour ceramics.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic-sensor-based inspection system that enables defect classification of each ceramic component prior to despatch to the field. Classification is an important task, and the large number of irrelevant and redundant features commonly introduced to a dataset reduces the classifier's performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of the classification system. In the context of a multi-criteria optimization problem (i.e., minimizing the classification error rate while reducing the number of features) such as the one discussed in this research, the literature suggests that evolutionary algorithms offer good results. Moreover, Particle Swarm Optimization (PSO) has not been widely explored in the classification of high-frequency ultrasonic signals. Hence, a binary-coded Particle Swarm Optimization (BPSO) technique is investigated for feature subset selection and for optimizing the classification error rate. In the proposed method, the population data is used as input to an Artificial Neural Network (ANN) based classification system to obtain the error rate, as the ANN serves as the evaluator of the PSO fitness function. Copyright © 2016. Published by Elsevier B.V.
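
    A compact binary-PSO feature selection sketch matching the record's multi-criteria fitness (error rate plus feature count). A k-NN classifier stands in for the paper's ANN evaluator to keep the example fast; all constants and weights are illustrative.

    ```python
    # Binary PSO with a sigmoid transfer function for feature subset selection.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(9)
    X = rng.random((200, 30)); y = rng.integers(0, 2, 200)   # stand-in data

    def fitness(mask):
        if mask.sum() == 0:
            return 1.0
        err = 1 - cross_val_score(KNeighborsClassifier(),
                                  X[:, mask.astype(bool)], y, cv=3).mean()
        return 0.9 * err + 0.1 * mask.mean()   # weight error vs. subset size

    n_particles, n_feats, iters = 10, X.shape[1], 20
    pos = rng.integers(0, 2, (n_particles, n_feats)).astype(float)
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sigmoid of velocity gives the probability that a bit is set.
        pos = (rng.random(pos.shape) < 1 / (1 + np.exp(-vel))).astype(float)
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print("selected features:", np.flatnonzero(gbest))
    ```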

  2. Pattern recognition of concrete surface cracks and defects using integrated image processing algorithms

    Science.gov (United States)

    Balbin, Jessie R.; Hortinela, Carlos C.; Garcia, Ramon G.; Baylon, Sunnycille; Ignacio, Alexander Joshua; Rivera, Marco Antonio; Sebastian, Jaimie

    2017-06-01

    Pattern recognition of concrete surface crack defects is very important in determining the stability of structures like buildings, roads or bridges. Surface cracking is one of the subjects of inspection, diagnosis, and maintenance, as well as life prediction for the safety of structures. Traditionally, determining defects and cracks on concrete surfaces is done manually by inspection. Moreover, any internal defect in the concrete would require destructive testing for detection. The researchers created an automated surface crack detection system for concrete using image processing techniques including the Hough transform, weighted LoG filtering, dilation, grayscale conversion, Canny edge detection and the Haar wavelet transform. An automatic surface crack detection robot is designed to capture the concrete surface by a sectoring method. Surface crack classification was done with the use of a Haar-trained cascade object detector that uses both positive and negative samples, which proved that it is possible to effectively identify surface crack defects.

  3. Automated Classification of Power Signals

    Science.gov (United States)

    2008-06-01

    ...high purity product stream (i.e. purified water) and the concentrated reject stream (i.e. brine). The brine is piped directly overboard while the...

  4. The Effect of Automation on Job Duties, Classifications, Staffing Patterns, and Labor Costs in the UBC Library's Cataloguing Divisions: A Comparison of 1973 and 1986.

    Science.gov (United States)

    de Bruijn, Erik

    This report discusses an ex post facto study that was done to examine the effect that the implementation of automated systems has had on libraries and support staff, labor costs, and productivity in the cataloging divisions of the library of the University of British Columbia. A comparison was made between two years: 1973, a pre-automated period…

  5. Automated grading of wood-slabs. The development of a prototype system

    DEFF Research Database (Denmark)

    Ersbøll, Bjarne Kjær; Conradsen, Knut

    1992-01-01

    This paper proposes a method for automatically grading small beechwood slabs. The method involves two classification steps: the first step detects defects based on local visual texture; the second step utilizes the relative distribution of defects to perform a final grading assessment. At a major Danish plant for the manufacture of parquet boards, the quality grading (visual quality) has always been done manually. As it is expected to be both expensive and difficult to recruit sufficient numbers of personnel to do this type of job in the future, it is of great interest to automate the function...

  6. Quality Control in Automated Manufacturing Processes – Combined Features for Image Processing

    Directory of Open Access Journals (Sweden)

    B. Kuhlenkötter

    2006-01-01

    Full Text Available In production processes the use of image processing systems is widespread. Hardware solutions and cameras are available for nearly every application. One important challenge for image processing systems is the development and selection of appropriate algorithms and software solutions in order to realise ambitious quality control for production processes. This article describes the development of innovative software combining features for automatic defect classification on product surfaces. The artificial intelligence method Support Vector Machine (SVM) is used to execute the classification task according to the combined features. This software is one crucial element for the automation of a manually operated production process.

  7. Tissue Classification

    DEFF Research Database (Denmark)

    Van Leemput, Koen; Puonti, Oula

    2015-01-01

    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now well established. In their simplest form, these methods classify voxels independently based on their intensity alone, although much more sophisticated models are typically used in practice. This article aims to give an overview of often-used computational techniques for brain tissue classification. Although other methods exist, we concentrate on Bayesian modeling approaches, in which generative image models are constructed and subsequently 'inverted' to obtain automated segmentations. This general framework encompasses a large number of segmentation methods, including those implemented in widely used...
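
    The simplest intensity-only classifier the article mentions amounts to a per-voxel mixture model; here is a sketch with scikit-learn's GaussianMixture on synthetic T1-like intensities. The means, spreads and voxel counts are invented.

    ```python
    # Three-component Gaussian mixture over voxel intensities, one component
    # per tissue class (CSF dark, gray matter mid, white matter bright).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(10)
    voxels = np.concatenate([rng.normal(30, 5, 3000),    # CSF
                             rng.normal(70, 6, 5000),    # gray matter
                             rng.normal(110, 5, 4000)])  # white matter

    gmm = GaussianMixture(n_components=3, random_state=0)
    gmm.fit(voxels.reshape(-1, 1))
    order = np.argsort(gmm.means_.ravel())       # map components to tissues
    names = dict(zip(order, ["CSF", "GM", "WM"]))
    labels = gmm.predict(np.array([[25.0], [72.0], [115.0]]))
    print([names[l] for l in labels])
    ```

    Generative models like this are then 'inverted' via Bayes' rule: the posterior probability of each tissue given an intensity is what the segmentation reports, which is the framework the article reviews.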

  8. Birth Defects

    Science.gov (United States)

    A birth defect is a problem that happens while a baby is developing in the mother's body. Most birth defects happen during the first 3 months of ... in the United States is born with a birth defect. A birth defect may affect how the ...

  9. The classification of motor neuron defects in the zebrafish embryo toxicity test (ZFET) as an animal alternative approach to assess developmental neurotoxicity.

    Science.gov (United States)

    Muth-Köhne, Elke; Wichmann, Arne; Delov, Vera; Fenske, Martina

    2012-07-01

    Rodents are widely used to test the developmental neurotoxicity potential of chemical substances. The regulatory test procedures are elaborate and the requirement of numerous animals is ethically disputable. Therefore, non-animal alternatives are highly desirable, but appropriate test systems that meet regulatory demands are not yet available. Hence, we have developed a new developmental neurotoxicity assay based on specific whole-mount immunostainings of primary and secondary motor neurons (using the monoclonal antibodies znp1 and zn8) in zebrafish embryos. By classifying the motor neuron defects, we evaluated the severity of the neurotoxic damage to individual primary and secondary motor neurons caused by chemical exposure and determined the corresponding effect concentration values (EC₅₀). In a proof-of-principle study, we investigated the effects of three model compounds thiocyclam, cartap and disulfiram, which show some neurotoxicity-indicating effects in vertebrates, and the positive controls ethanol and nicotine and the negative controls 3,4-dichloroaniline (3,4-DCA) and triclosan. As a quantitative measure of the neurotoxic potential of the test compounds, we calculated the ratios of the EC₅₀ values for motor neuron defects and the cumulative malformations, as determined in a zebrafish embryo toxicity test (zFET). Based on this index, disulfiram was classified as the most potent and thiocyclam as the least potent developmental neurotoxin. The index also confirmed the control compounds as positive and negative neurotoxicants. Our findings demonstrate that this index can be used to reliably distinguish between neurotoxic and non-neurotoxic chemicals and provide a sound estimate for the neurodevelopmental hazard potential of a chemical. The demonstrated method can be a feasible approach to reduce the number of animals used in developmental neurotoxicity evaluation procedures. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Defining defect specifications to optimize photomask production and requalification

    Science.gov (United States)

    Fiekowsky, Peter

    2006-10-01

    Reducing defect repairs and accelerating defect analysis is becoming more important as the total cost of defect repairs on advanced masks increases. Photomask defect specs based on printability, as measured on AIMS microscopes, have been used for years, but the fundamental defect spec is still the defect size, as measured on the photomask, requiring the repair of many unprintable defects. ADAS, the Automated Defect Analysis System from AVI, is now available in most advanced mask shops. It makes the use of pure printability specs, or "Optimal Defect Specs", practical. This software uses advanced algorithms to eliminate false defects caused by approximations in the inspection algorithm, classify each defect, simulate each defect and disposition each defect based on its printability and location. This paper defines "optimal defect specs", explains why they are now practical and economic, gives a method of determining them and provides accuracy data.

  11. Anatomical Classifications of the Coronary Arteries in Complete Transposition of the Great Arteries and Double Outlet Right Ventricle with Subpulmonary Ventricular Septal Defect.

    Science.gov (United States)

    Wang, Cuijin; Chen, Shubao; Zhang, Haibo; Liu, Jinfen; Xu, Zhiwei; Zheng, Jinhao; Yan, Qin; Huang, Huimin; Huang, Meirong

    2017-01-01

    Objective  To discuss the anatomical morphologies of the coronary arteries and the frequencies of unusual coronary arteries in complete transposition of the great arteries and double outlet right ventricle (DORV) associated with a subpulmonary ventricular septal defect (VSD). Methods  Between March 1999 and August 2012, 1,078 patients with complete transposition of the great arteries or DORV with subpulmonary VSD underwent arterial switch operations (ASOs) and were visually evaluated to classify their coronary artery morphology during open heart surgery. Results  The coronary arteries could be classified into five patterns with several subtypes. Unusual coronary arteries were observed in 248 of the 1,078 cases, a frequency of 23.01%. The frequencies in patients with transposition of the great arteries with intact ventricular septum (TGA/IVS), TGA/VSD, and DORV with subpulmonary VSD were 17.65, 23.28, and 31.84%, respectively. The most common morphologies were the right coronary artery (RCA) originating from sinus 1 and the circumflex (CX) originating from sinus 2 (1R, AD; 2CX; 26.50%); the CX originating from sinus 2 (1AD; 2R, CX; 21.36%); and the RCA, left anterior descending artery, and CX originating from a single sinus 2 (2R, AD, CX; 13.24%). The in-hospital mortalities of the patients with and without unusual coronary arteries after ASO were 14.1 and 6.02%, respectively. Conclusion  Patients with complete transposition of the great arteries or DORV with subpulmonary VSD have a high frequency of unusual coronary arteries, which may have a great impact on mortality after ASO. Improving the preoperative diagnostic criteria for coronary artery morphology may significantly increase the success rate of ASOs. Georg Thieme Verlag KG Stuttgart · New York.

  12. Toward Intelligent Software Defect Detection

    Science.gov (United States)

    Benson, Markland J.

    2011-01-01

    Source code level software defect detection has gone from state of the art to a software engineering best practice. Automated code analysis tools streamline many of the aspects of formal code inspections but have the drawback of being difficult to construct and either prone to false positives or severely limited in the set of defects that can be detected. Machine learning technology provides the promise of learning software defects by example, easing construction of detectors and broadening the range of defects that can be found. Pinpointing software defects with the same level of granularity as prominent source code analysis tools distinguishes this research from past efforts, which focused on analyzing software engineering metrics data with granularity limited to that of a particular function rather than a line of code.

  13. Integrating dimension reduction and out-of-sample extension in automated classification of ex vivo human patellar cartilage on phase contrast X-ray computed tomography.

    Directory of Open Access Journals (Sweden)

    Mahesh B Nagarajan

    Full Text Available Phase contrast X-ray computed tomography (PCI-CT) has been demonstrated as a novel imaging technique that can visualize human cartilage with high spatial resolution and soft tissue contrast. Different textural approaches have been previously investigated for characterizing chondrocyte organization on PCI-CT to enable classification of healthy and osteoarthritic cartilage. However, the large size of feature sets extracted in such studies motivates an investigation into algorithmic feature reduction for computing efficient feature representations without compromising their discriminatory power. For this purpose, geometrical feature sets derived from the scaling index method (SIM) were extracted from 1392 volumes of interest (VOIs) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. The extracted feature sets were subject to linear and non-linear dimension reduction techniques as well as feature selection based on evaluation of mutual information criteria. The reduced feature set was subsequently used in a machine learning task with support vector regression to classify VOIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver-operating characteristic (ROC) curve (AUC). Our results show that the classification performance achieved by 9-D SIM-derived geometric feature sets (AUC: 0.96 ± 0.02) can be maintained with 2-D representations computed from both dimension reduction and feature selection (AUC values as high as 0.97 ± 0.02). Thus, such feature reduction techniques can offer a high degree of compaction to large feature sets extracted from PCI-CT images while maintaining their ability to characterize the underlying chondrocyte patterns.

  14. Automated classification of seismic sources in large database using random forest algorithm: First results at Piton de la Fournaise volcano (La Réunion).

    Science.gov (United States)

    Hibert, Clément; Provost, Floriane; Malet, Jean-Philippe; Stumpf, André; Maggi, Alessia; Ferrazzini, Valérie

    2016-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional and global seismic networks for near real-time monitoring. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice-calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators. To overcome this issue, the development of automatic methods for processing continuous seismic data appears to be a necessity. The classification algorithm should satisfy the need for a method that is robust, precise and versatile enough to be deployed to monitor seismicity in very different contexts. We propose a multi-class detection method based on the random forests algorithm to automatically classify the sources of seismic signals. Random forests is a supervised machine learning technique based on the computation of a large number of decision trees, constructed from training sets that include each of the target classes, with each signal described by a set of attributes. In the case of seismic signals, these attributes may encompass spectral features but also waveform characteristics, multi-station observations and other relevant information. The random forests classifier is used because it provides state-of-the-art performance when compared with other machine learning techniques (e.g. SVM, neural networks) and requires no fine tuning. Furthermore it is relatively fast, robust, easy to parallelize, and inherently suitable for multi-class problems. In this work, we present the first results of the classification method applied
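
    A minimal sketch of such a multi-class random-forest classifier, assuming scikit-learn and synthetic stand-ins for the signal attributes; the real feature extraction from continuous seismograms and the Piton de la Fournaise catalog are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
classes = ["earthquake", "rockfall", "anthropogenic"]

# One row per event; columns stand in for attributes such as dominant
# frequency, duration, kurtosis, inter-station coherence, ...
X = rng.normal(size=(3000, 12)) + 0.5 * np.repeat(np.arange(3), 1000)[:, None]
y = np.repeat(classes, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```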

  15. Automated Instrumentation System Verification.

    Science.gov (United States)

    1983-04-01

    … automatic measurement should arise. II. TRADITIONAL PROCEDURES: The necessity to measure data … measurement (Ref. 8). Finally, when the necessity for automation was recognized and funds were provided, the effort described in this report was started.

  16. Fully automated classification of bone marrow infiltration in low-dose CT of patients with multiple myeloma based on probabilistic density model and supervised learning.

    Science.gov (United States)

    Martínez-Martínez, Francisco; Kybic, Jan; Lambert, Lukáš; Mecková, Zuzana

    2016-04-01

    This paper presents a fully automated method for the identification of bone marrow infiltration in femurs in low-dose CT of patients with multiple myeloma. We automatically find the femurs and the bone marrow within them. In the next step, we create a probabilistic, spatially dependent density model of normal tissue. At test time, we detect unexpectedly high density voxels which may be related to bone marrow infiltration, as outliers to this model. Based on a set of global, aggregated features representing all detections from one femur, we classify the subjects as being either healthy or not. This method was validated on a dataset of 127 subjects with ground truth created from a consensus of two expert radiologists, obtaining an AUC of 0.996 for the task of distinguishing healthy controls and patients with bone marrow infiltration. To the best of our knowledge, no other automatic image-based method for this task has been published before. Copyright © 2016 Elsevier Ltd. All rights reserved.
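
    The two-stage logic described above (flag unexpectedly dense voxels as outliers of a normal-tissue density model, then classify each subject from features aggregated over its detections) can be sketched as follows; the density model, the aggregated features, and the data are toy stand-ins, not the paper's actual method details:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def aggregate_features(densities, mu, sigma):
    """Summarize one subject's voxel densities against the normal-tissue model."""
    z = (densities - mu) / sigma
    outliers = z[z > 3.0]                     # unexpectedly high-density voxels
    return [outliers.size / z.size,           # fraction of flagged voxels
            float(outliers.sum()) if outliers.size else 0.0,
            float(z.max())]                   # most extreme voxel

mu, sigma = 100.0, 15.0                       # toy normal-tissue density model
X, y = [], []
for healthy in [True] * 60 + [False] * 60:
    d = rng.normal(mu, sigma, size=5000)      # stand-in femur voxel densities
    if not healthy:                           # inject infiltration-like voxels
        d[:300] += rng.uniform(40, 80, size=300)
    X.append(aggregate_features(d, mu, sigma))
    y.append(int(not healthy))

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```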

  17. Detection of high-silica lava flows and lava morphology at the Alarcon Rise, Gulf of California, Mexico using automated classification of the morphological-compositional relationship in AUV multibeam bathymetry and sonar backscatter

    Science.gov (United States)

    Maschmeyer, C.; White, S. M.; Dreyer, B. M.; Clague, D. A.

    2015-12-01

    An automated compositional classification by adaptive neuro-fuzzy inference system (ANFIS) was developed to study the volcanic processes that create high-silica lava at oceanic ridges. The objective of this research is to determine whether a relationship exists between lava morphology and composition. Researchers from the Monterey Bay Aquarium Research Institute (MBARI) recorded morphologic observations and collected samples for geochemical analysis during ROV dives at the Alarcon Rise in 2012 and 2015. The Alarcon Rise is a unique spreading-ridge environment where composition ranges from basaltic to rhyolitic, making it an ideal location to examine the compositional-morphologic relationship of lava flows. Preliminary interpretation of field data indicates that high-silica lavas are typically associated with 3-5 m, blocky pillows at the heavily faulted north end of the Alarcon. Visual analysis of multibeam bathymetry and side-scan sonar backscatter from the MBARI AUV D. Allen B., gridded at 1 m, suggests that lava flow morphology (pillow, lobate, sheet) can be distinguished by seafloor roughness. The bathymetric products used by ANFIS to quantify the morphologic-compositional relationship were slope, aspect, and bathymetric position index (BPI, a measure of local height relative to the adjacent terrain). Sonar backscatter intensity is influenced by surface roughness and has previously been used to distinguish lava morphology. Gray-level co-occurrence matrices (GLCM) were applied to the backscatter to create edge-detection filters that recognized faults and fissures. Input data are slope, aspect, bathymetric value, BPI at 100 m scale, BPI at 500 m scale, backscatter intensity, and the first principal component of the backscatter GLCM. After lava morphology was classified on the Alarcon Rise map, another classification was completed to detect locations of high-silica lava. Application of an expert classifier like ANFIS to distinguish lava composition may become an important tool in oceanic
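
    One concrete ingredient mentioned above is GLCM texture analysis of the backscatter. A minimal sketch with scikit-image (version 0.19 or later, where the functions are spelled graycomatrix/graycoprops) on a random stand-in patch; the ANFIS classifier itself is not reproduced:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
patch = (rng.random((64, 64)) * 255).astype(np.uint8)  # stand-in backscatter

# Co-occurrence statistics over two offsets and two directions.
glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```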

  18. Defects and defect processes in nonmetallic solids

    CERN Document Server

    Hayes, W

    2004-01-01

    This extensive survey covers defects in nonmetals, emphasizing point defects and point-defect processes. It encompasses electronic, vibrational, and optical properties of defective solids, plus dislocations and grain boundaries. 1985 edition.

  19. Automated visual inspection of textile

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1997-01-01

    A method for automated inspection of two types of textile is presented. The goal of the inspection is to determine defects in the textile. A prototype is constructed for simulating the textile production line. At the prototype the images of the textile are acquired by a high speed line scan camera...

  20. Source finding, parametrization, and classification for the extragalactic Effelsberg-Bonn H i Survey

    Science.gov (United States)

    Flöer, L.; Winkel, B.; Kerp, J.

    2014-09-01

    Context. Source extraction for large-scale H i surveys currently involves large amounts of manual labor. For data volumes expected from future H i surveys with upcoming facilities, this approach is not feasible any longer. Aims: We describe the implementation of a fully automated source finding, parametrization, and classification pipeline for the Effelsberg-Bonn H i Survey (EBHIS). With future radio astronomical facilities in mind, we want to explore the feasibility of a completely automated approach to source extraction for large-scale H i surveys. Methods: Source finding is implemented using wavelet denoising methods, which previous studies show to be a powerful tool, especially in the presence of data defects. For parametrization, we automate baseline fitting, mask optimization, and other tasks based on well-established algorithms, currently used interactively. For the classification of candidates, we implement an artificial neural network, which is trained on a candidate set comprised of false positives from real data and simulated sources. Using simulated data, we perform a thorough analysis of the algorithms implemented. Results: We compare the results from our simulations to the parametrization accuracy of the H i Parkes All-Sky Survey (HIPASS) survey. Even though HIPASS is more sensitive than EBHIS in its current state, the parametrization accuracy and classification reliability match or surpass the manual approach used for HIPASS data.
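
    Wavelet denoising of the kind used for source finding can be sketched with PyWavelets on a toy 1-D spectrum; the threshold rule shown (universal threshold with a median-based noise estimate) is a common textbook choice, not necessarily the EBHIS implementation:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 1024)
signal = np.exp(-x**2)                        # toy H i line profile
noisy = signal + rng.normal(0, 0.2, x.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # noise level estimate
thresh = sigma * np.sqrt(2 * np.log(noisy.size))       # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, "soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

print("rms error, noisy:   ", np.sqrt(np.mean((noisy - signal) ** 2)))
print("rms error, denoised:", np.sqrt(np.mean((denoised - signal) ** 2)))
```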

  1. AUTOMATED PEDAGOGICAL DIAGNOSTICS IN MODERN UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Oleksandr H. Kolhatin

    2010-08-01

    Full Text Available The realisation of pedagogical assessment functions when automated pedagogical diagnostics systems are used in the university instruction process is considered. Software requirements for pedagogical diagnostics are determined on the basis of an analysis of classifications of automated pedagogical diagnostics systems according to the didactic aim of the diagnostics.

  2. Impact of Office Automation: An Empirical Assessment

    Science.gov (United States)

    1988-12-01

    NAVAL POSTGRADUATE SCHOOL, Monterey, California. Thesis: Impact of Office Automation: An Empirical Assessment. … Subject terms: Productivity Assessment; SACONS; Office Automation.

  3. Advanced defect classification by optical metrology

    NARCIS (Netherlands)

    Maas, D.J.

    2017-01-01

    The goal of the workshop is to provide a high-level, invited-only, international community that accelerates interactions between the main target groups: universities, institutes, entrepreneurs, intrapreneurs and investors, in order to facilitate customer development, application discovery or funding

  4. Classifying Classifications

    DEFF Research Database (Denmark)

    Debus, Michael S.

    2017-01-01

    This paper critically analyzes seventeen game classifications. The classifications were chosen on the basis of diversity, ranging from pre-digital classifications (e.g. Murray 1952), over game studies classifications (e.g. Elverdam & Aarseth 2007), to classifications of drinking games (e.g. LaBrie et al. 2013). The analysis aims at three goals: the classifications' internal consistency, the abstraction of classification criteria, and the identification of differences in classification across fields and/or time. Especially the abstraction of classification criteria can be used in future endeavors into the topic of game classifications.

  5. 21 CFR 864.5700 - Automated platelet aggregation system.

    Science.gov (United States)

    2010-04-01

    21 Food and Drugs 8 2010-04-01 § 864.5700 Automated platelet aggregation system. (a) Identification. An automated platelet aggregation system is a device used to determine changes in platelet shape and platelet aggregation following the addition of an aggregating reagent to a platelet-rich plasma. (b) Classification. Class II (performance standards).

  6. 21 CFR 864.5620 - Automated hemoglobin system.

    Science.gov (United States)

    2010-04-01

    21 Food and Drugs 8 2010-04-01 § 864.5620 Automated hemoglobin system. (a) Identification. An automated hemoglobin system is a fully automated or semi-automated device which may or may not be part of a larger system, used to measure the hemoglobin content of human blood. (b) Classification. Class II (performance standards). [45 FR 60601, Sept. 12, 1980]

  7. Quantitative Estimation for the Effectiveness of Automation

    International Nuclear Information System (INIS)

    Lee, Seung Min; Seong, Poong Hyun

    2012-01-01

    In advanced main control rooms (MCRs), various automation systems are applied to enhance human performance and reduce human errors in industrial fields. Automation is expected to provide greater efficiency, lower workload, and fewer human errors. However, these promises are not always fulfilled. As new types of events related to the application of imperfect and complex automation have occurred, it is necessary to analyze the effects of automation systems on the performance of human operators. Therefore, we suggest a quantitative estimation method to analyze the effectiveness of automation systems according to the Level of Automation (LOA) classification, which has been developed over 30 years. The estimation of the effectiveness of automation is achieved by calculating the failure probability of human performance related to the cognitive activities.
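
    As a toy illustration of the kind of calculation implied above, one can combine per-cognitive-activity human-error probabilities into an overall failure probability and compare two levels of automation; the probabilities and the independence assumption below are invented for the example:

```python
def failure_probability(error_probs):
    """P(at least one cognitive activity fails), assuming independence."""
    p_all_ok = 1.0
    for p in error_probs:
        p_all_ok *= (1.0 - p)
    return 1.0 - p_all_ok

# Invented per-activity human-error probabilities, with and without automation.
manual    = {"monitor": 0.02, "diagnose": 0.05, "plan": 0.03, "act": 0.01}
automated = {"monitor": 0.005, "diagnose": 0.05, "plan": 0.03, "act": 0.002}

print("manual:   ", round(failure_probability(manual.values()), 4))
print("automated:", round(failure_probability(automated.values()), 4))
```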

  8. Library Automation

    OpenAIRE

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.

    2010-01-01

    New technologies library provides several new materials, media and mode of storing and communicating the information. Library Automation reduces the drudgery of repeated manual efforts in library routine. By use of library automation collection, Storage, Administration, Processing, Preservation and communication etc.

  9. Yield impacting systematic defects search and management

    Science.gov (United States)

    Zhang, Jing; Xu, Qingxiu; Zhang, Xin; Zhao, Xing; Ning, Jay; Cheng, Guojie; Chen, Shijie; Zhang, Gary; Vikram, Abhishek; Su, Bo

    2012-03-01

    Despite great effort before design tapeout, some pattern-related systematic defects still show up in production and impact product yield. Various check points in the production life cycle attempt to detect these defective patterns. It is seen that, apart from the known defective patterns, slight variations of polygon sizes and shapes in the known defective patterns also cause yield loss. This complexity is further compounded when interactions among multiple process layers cause the defect. Normally, exact pattern matching techniques cannot detect these variations of the defective patterns. With the tools currently existing in the fab, it is a challenge to define the 'sensitive patterns', which are arbitrary variations of the known 'defective patterns'. A design-based approach has been successfully experimented with on product wafers to detect yield-impacting defects; it greatly reduces the TAT for hotspot analysis and also provides optimized care-area definition to enable high-sensitivity wafer inspection. A novel rule-based pattern search technique developed by Anchor Semiconductor has been used to find sensitive patterns in the full-chip design. This technique allows GUI-based pattern search rule generation, such as edge move or edge-to-edge distance range, so that any variation of a particular sensitive pattern can be captured and flagged. In particular, pattern rules involving multiple process layers, like M1-V1-M2, can be defined easily using this technique. Apart from using this novel pattern search technique, design signatures are also extracted around the defect locations on the wafer and used in defect classification. This enhanced defect classification greatly helps in determining the most critical defects among the total defect population. The effectiveness of this technique has been established through design-to-defect correlation and SEM verification. In this paper we will report details of the design based experiments that

  10. Embedded defects

    International Nuclear Information System (INIS)

    Barriola, M.; Vachaspati, T.; Bucher, M.

    1994-01-01

    We give a prescription for embedding classical solutions and, in particular, topological defects in field theories which are invariant under symmetry groups that are not necessarily simple. After providing examples of embedded defects in field theories based on simple groups, we consider the electroweak model and show that it contains the Z string and a one-parameter family of strings called the W(α) string. It is argued that although the members of this family are gauge equivalent when considered in isolation, each member becomes physically distinct when multistring configurations are considered. We then turn to the issue of stability of embedded defects and demonstrate the instability of a large class of such solutions in the absence of bound states or condensates. The Z string is shown to be unstable for all values of the Higgs boson mass when θ_W = π/4. W strings are also shown to be unstable for a large range of parameters. Embedded monopoles suffer from the Brandt-Neri-Coleman instability. Finally, we connect the electroweak string solutions to the sphaleron

  11. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  12. A Generic Deep-Learning-Based Approach for Automated Surface Inspection.

    Science.gov (United States)

    Ren, Ruoxu; Hung, Terence; Tan, Kay Chen

    2018-03-01

    Automated surface inspection (ASI) is a challenging task in industry, as collecting training datasets is usually costly and related methods are highly dataset-dependent. In this paper, a generic approach that requires only a small training dataset for ASI is proposed. First, the approach builds a classifier on the features of image patches, where the features are transferred from a pretrained deep learning network. Next, pixel-wise prediction is obtained by convolving the trained classifier over the input image. Experiments on three public data sets and one industrial data set are carried out, involving two tasks: 1) image classification and 2) defect segmentation. The results of the proposed algorithm are compared against several of the best benchmarks in the literature. In the classification tasks, the proposed method improves accuracy by 0.66%-25.50%. In the segmentation tasks, the proposed method reduces error escape rates by 6.00%-19.00% in three defect types and improves accuracies by 2.29%-9.86% in all seven defect types. In addition, the proposed method achieves a 0.0% error escape rate in the segmentation task on the industrial data.
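
    A minimal sketch of the patch-based transfer-learning idea, assuming PyTorch/torchvision (which downloads pretrained ResNet-18 weights on first run) and a linear SVM on the transferred features; the patch size, backbone, and data are illustrative choices, not the paper's:

```python
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

# Pretrained backbone as a fixed feature extractor (downloads weights once).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()            # expose the 512-D features
backbone.eval()

def patch_features(patches):                 # patches: (N, 3, 224, 224)
    with torch.no_grad():
        return backbone(patches).numpy()

# Toy stand-ins for labeled defect-free / defective surface patches.
patches = torch.rand(32, 3, 224, 224)
labels = [0] * 16 + [1] * 16

clf = LinearSVC().fit(patch_features(patches), labels)
print(clf.predict(patch_features(torch.rand(2, 3, 224, 224))))
```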

  13. Automated spectroscopic tissue classification in colorectal surgery

    NARCIS (Netherlands)

    Schols, R.M.; Alic, L.; Beets, G.L.; Breukink, S.O.; Wieringa, F.P.; Stassen, L.P.S.

    2015-01-01

    In colorectal surgery, detecting ureters and mesenteric arteries is of utmost importance to prevent iatrogenic injury and to facilitate intraoperative decision making. A tool enabling ureter- and artery-specific image enhancement within (and possibly through) surrounding adipose tissue would

  14. Empirical evaluation of three machine learning methods for automatic classification of neoplastic diagnoses

    Directory of Open Access Journals (Sweden)

    José Luis Jara

    2011-12-01

    Full Text Available Diagnoses are a valuable source of information for evaluating a health system. However, they are not used extensively by information systems because diagnoses are normally written in natural language. This work empirically evaluates three machine learning methods for automatically assigning codes from the International Classification of Diseases (10th Revision) to 3,335 distinct diagnoses of neoplasms obtained from UMLS®. The evaluation is conducted with three different types of preprocessing. The results are encouraging: a well-known rule induction method and maximum entropy models achieve about 90% accuracy in a balanced cross-validation experiment.
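
    Maximum entropy modeling corresponds to multinomial logistic regression, so the approach can be sketched with scikit-learn; the diagnosis strings and ICD-10 codes below are a tiny illustrative subset, not the UMLS-derived data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: diagnosis text -> ICD-10 code.
diagnoses = ["malignant neoplasm of stomach", "benign neoplasm of stomach",
             "malignant neoplasm of colon", "benign neoplasm of colon"]
codes = ["C16", "D13.1", "C18", "D12.6"]

# Logistic regression over word n-grams is the classic maxent text classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(diagnoses, codes)
print(model.predict(["benign neoplasm of the stomach"]))
```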

  15. Precise design-based defect characterization and root cause analysis

    Science.gov (United States)

    Xie, Qian; Venkatachalam, Panneerselvam; Lee, Julie; Chen, Zhijin; Zafar, Khurram

    2017-03-01

    that human operators will typically miss), to obtain the exact defect location on design, to compare all defective patterns thus detected against a library of known patterns, and to classify all defective patterns as either new or known. By applying the computer to these tasks, we automate the entire process from defective pattern identification to pattern classification with high precision, and we perform this operation en masse during R & D, ramp, and volume production. By adopting this methodology, whenever a specific weak pattern is identified, we are able to run a series of characterization operations to ultimately arrive at the root cause. These characterization operations can include (a) searching all pre-existing review SEM images for the presence of the specific weak pattern to determine whether there is any spatial (within die or within wafer) or temporal (within any particular date range, before or after a mask revision, etc.) correlation, (b) understanding the failure rate of the specific weak pattern to prioritize the urgency of the problem, and (c) comparing the weak pattern against an OPC (Optical Proximity Correction) verification report or a PWQ (Process Window Qualification)/FEM (Focus Exposure Matrix) result to assess the likelihood of it being a litho-sensitive pattern. After resolving the specific weak pattern, we categorize it as a known pattern, and the engineer moves forward with discovering new weak patterns.

  16. Coagulation defects.

    Science.gov (United States)

    Soliman, Doreen E; Broadman, Lynn M

    2006-09-01

    The present understanding of the coagulation process emphasizes the final common pathway and the proteolytic systems that result in the degradation of formed clots and the prevention of unwanted clot formations, as well as a variety of defense systems that include tissue repair, autoimmune processes, arteriosclerosis, tumor growth, the spread of metastases, and defense systems against micro-organisms. This article discusses diagnosis and management of some of the most common bleeding disorders. The goals are to provide a simple guide on how best to manage patients afflicted with congenital or acquired clotting abnormalities during the perioperative period, present a brief overview of the methods of testing and monitoring the coagulation defects, and discuss the appropriate pharmacologic or blood component therapies for each disease.

  17. Text classification

    OpenAIRE

    Deveikis, Karolis

    2016-01-01

    This paper investigates the problem of text classification. The task of text classification is to assign a piece of text to one of several categories based on its content. Text classification is one of the tasks of natural language processing. Like the others, it is often solved using machine learning algorithms. There are many algorithms suitable for text classification. As a result, a problem of choice arises. In an effort to solve this problem, this paper analyzes various feature extractio...

  18. Automated External Defibrillator

    Science.gov (United States)


  19. Library Automation.

    Science.gov (United States)

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  20. Comparison of Threshold Saccadic Vector Optokinetic Perimetry (SVOP) and Standard Automated Perimetry (SAP) in Glaucoma. Part II: Patterns of Visual Field Loss and Acceptability.

    Science.gov (United States)

    McTrusty, Alice D; Cameron, Lorraine A; Perperidis, Antonios; Brash, Harry M; Tatham, Andrew J; Agarwal, Pankaj K; Murray, Ian C; Fleck, Brian W; Minns, Robert A

    2017-09-01

    We compared patterns of visual field loss detected by standard automated perimetry (SAP) to saccadic vector optokinetic perimetry (SVOP) and examined patient perceptions of each test. A cross-sectional study was done of 58 healthy subjects and 103 with glaucoma who were tested using SAP and two versions of SVOP (v1 and v2). Visual fields from both devices were categorized by masked graders as: 0, normal; 1, paracentral defect; 2, nasal step; 3, arcuate defect; 4, altitudinal; 5, biarcuate; and 6, end-stage field loss. SVOP and SAP classifications were cross-tabulated. Subjects completed a questionnaire on their opinions of each test. We analyzed 142 (v1) and 111 (v2) SVOP and SAP test pairs. SVOP v2 had a sensitivity of 97.7% and specificity of 77.9% for identifying normal versus abnormal visual fields. SAP and SVOP v2 classifications showed complete agreement in 54% of glaucoma patients, with a further 23% disagreeing by one category. On repeat testing, 86% of SVOP v2 classifications agreed with the previous test, compared to 91% of SAP classifications; 71% of subjects preferred SVOP compared to 20% who preferred SAP. Eye-tracking perimetry can be used to obtain threshold visual field sensitivity values in patients with glaucoma and produce maps of visual field defects, with patterns exhibiting close agreement to SAP. Patients preferred eye-tracking perimetry compared to SAP. This first report of threshold eye tracking perimetry shows good agreement with conventional automated perimetry and provides a benchmark for future iterations.

  1. Neural Tube Defects

    Science.gov (United States)

    Neural tube defects are birth defects of the brain, spine, or spinal cord. They happen in the ... that she is pregnant. The two most common neural tube defects are spina bifida and anencephaly. In ...

  2. Principles and methods for automated palynology.

    Science.gov (United States)

    Holt, K A; Bennett, K D

    2014-08-01

    Pollen grains are microscopic so their identification and quantification has, for decades, depended upon human observers using light microscopes: a labour-intensive approach. Modern improvements in computing and imaging hardware and software now bring automation of pollen analyses within reach. In this paper, we provide the first review in over 15 yr of progress towards automation of the part of palynology concerned with counting and classifying pollen, bringing together literature published from a wide spectrum of sources. We consider which attempts offer the most potential for an automated palynology system for universal application across all fields of research concerned with pollen classification and counting. We discuss what is required to make the datasets of these automated systems as acceptable as those produced by human palynologists, and present suggestions for how automation will generate novel approaches to counting and classifying pollen that have hitherto been unthinkable.

  3. Automated Inspection Algorithm for Thick Plate Using Dual Light Switching Lighting Method

    OpenAIRE

    Yong-Ju Jeon; Doo-chul Choi; Jong Pil Yun; Changhyun Park; Homoon Bae; Sang Woo Kim

    2012-01-01

    This paper presents an automated inspection algorithm for a thick plate. Thick plates typically have various types of surface defects, such as scabs, scratches, and roller marks. These defects have individual characteristics including brightness and shape. Therefore, it is not simple to detect all the defects. In order to solve these problems and to detect defects more effectively, we propose a dual light switching lighting method and a defect detection algorithm based on ...

  4. Automated image analysis of the pathological lung in CT

    NARCIS (Netherlands)

    Sluimer, Ingrid Christine

    2005-01-01

    The general objective of the thesis is automation of the analysis of the pathological lung from CT images. Specifically, we aim for automated detection and classification of abnormalities in the lung parenchyma. We first provide a review of computer analysis techniques applied to CT of the

  5. Influence of automated cataloguing system on manual cataloguing ...

    African Journals Online (AJOL)

    This study examined the automation of cataloguing and classification practices in academic libraries in South-West Nigeria and the effect the automated cataloguing system has on manual cataloguing in these libraries. The study population comprised 110 library professional and paraprofessional personnel working in ...

  6. ILT based defect simulation of inspection images accurately predicts mask defect printability on wafer

    Science.gov (United States)

    Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2016-05-01

    At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT) and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiating critical from non-critical defects is more challenging, complex and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important, and also challenging, to predict the criticality of defects for printability on wafer. This is one of the significant barriers to the adoption of EUVL for semiconductor manufacturing. Techniques to determine the criticality of defects from images captured using non-actinic inspection are desired until actinic inspection becomes available. High-resolution inspection of photomask images detects many defects which are used for process and mask qualification. Repairing all defects is not practical and probably not required; however, it is imperative to know which defects are severe enough to impact the wafer before repair. Additionally, a wafer printability check is always desired after repairing a defect. AIMSTM review is the industry standard for this; however, doing AIMSTM review for all defects is expensive and very time consuming. A fast, accurate and economical mechanism is desired which can predict defect printability on wafer accurately and quickly from images captured using a high-resolution inspection machine. Predicting defect printability from such images is challenging because the high-resolution images do not correlate with actual mask contours. The challenge is compounded by the use of optical conditions during inspection that differ from the actual scanner conditions, so defects found in such images do not correlate directly with their actual impact on wafer. Our automated defect simulation tool predicts

  7. Automated Change Detection for Synthetic Aperture Sonar

    Science.gov (United States)

    2014-01-01

    Automated Change Detection for Synthetic Aperture Sonar. … R. Azimi-Sadjadi and S. Srinivasan, "Coherent Change Detection and Classification in Synthetic Aperture Radar Imagery Using Canonical Correlation

  8. Feature analysis and classification of manufacturing signatures based on semiconductor wafermaps

    Energy Technology Data Exchange (ETDEWEB)

    Tobin, K.W.; Gleason, S.S.; Karnowski, T.P. [Oak Ridge National Lab., TN (United States); Cohen, S.L. [SEMATECH, Austin, TX (United States)

    1997-02-01

    Automated tools for semiconductor wafer defect analysis are becoming more necessary as device densities and wafer sizes continue to increase. Trends toward larger wafer formats and smaller critical dimensions have caused an exponential increase in the volume of defect data which must be analyzed and stored. To accommodate these changing factors, automatic analysis tools are required that can efficiently and robustly process the increasing amounts of data, and thus quickly characterize manufacturing processes and accelerate yield learning. During the first year of this cooperative research project between SEMATECH and the Oak Ridge National Laboratory, a robust methodology for segmenting signature events prior to feature analysis and classification was developed. Based on the results of this segmentation procedure, a feature measurement strategy was designed based on interviews with process engineers coupled with the analysis of approximately 1500 electronic wafermap files. In this paper, the authors present an automated procedure to rank and select relevant features for use with a fuzzy pair-wise classifier and give examples of the efficacy of the approach taken. Results of the feature selection process are given for two uniquely different types of class data to demonstrate a general improvement in classifier performance.
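
    A sketch of automated feature ranking and selection in this spirit, using mutual information as the relevance criterion; a k-nearest-neighbor classifier stands in for the paper's fuzzy pair-wise classifier, and the data are synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for wafermap signature features across several defect classes.
X, y = make_classification(n_samples=600, n_features=30, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Rank features by mutual information with the class label, keep the top 8.
model = make_pipeline(SelectKBest(mutual_info_classif, k=8),
                      KNeighborsClassifier())
print("cv accuracy:", cross_val_score(model, X, y, cv=5).mean())
```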

  9. Pattern recognition and classification an introduction

    CERN Document Server

    Dougherty, Geoff

    2012-01-01

    The use of pattern recognition and classification is fundamental to many of the automated electronic systems in use today. However, despite the existence of a number of notable books in the field, the subject remains very challenging, especially for the beginner. Pattern Recognition and Classification presents a comprehensive introduction to the core concepts involved in automated pattern recognition. It is designed to be accessible to newcomers from varied backgrounds, but it will also be useful to researchers and professionals in image and signal processing and analysis, and in computer visi

  10. Software defect, feature and requirements management system

    OpenAIRE

    Indriūnas, Paulius

    2006-01-01

    Software development is an iterative process based on teamwork and information exchange. In order to keep this process running, proper information flow control techniques have to be applied in a software development company. As the number of employees grows, manual control of this process becomes ineffective and automated solutions take over this task. The most common informational units in the software development process are defects, new features and requirements. This paper address...

  11. Autonomous Systems: Habitat Automation

    Data.gov (United States)

    National Aeronautics and Space Administration — The Habitat Automation Project Element within the Autonomous Systems Project is developing software to automate the operation of habitats and other spacecraft. This...

  12. An Automation Planning Primer.

    Science.gov (United States)

    Paynter, Marion

    1988-01-01

    This brief planning guide for library automation incorporates needs assessment and evaluation of options to meet those needs. A bibliography of materials on automation planning and software reviews, library software directories, and library automation journals is included. (CLB)

  13. Future Control and Automation : Proceedings of the 2nd International Conference on Future Control and Automation

    CERN Document Server

    2012-01-01

    This volume, Future Control and Automation - Volume 1, includes the best papers selected from the 2012 2nd International Conference on Future Control and Automation (ICFCA 2012), held on July 1-2, 2012, in Changsha, China. Future control and automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. This volume can be divided into five sessions on the basis of the classification of the manuscripts considered, listed as follows: Identification and Control, Navigation, Guidance and Sensor, Simulation Technology, Future Telecommunications and Control

  14. Research of the application of the new communication technologies for distribution automation

    Science.gov (United States)

    Zhong, Guoxin; Wang, Hao

    2018-03-01

    Communication networks are a key factor in distribution automation. In recent years, new communication technologies for distribution automation have developed rapidly in China. This paper introduces the traditional communication technologies used for distribution automation and analyses their defects. It then gives a detailed analysis of some new communication technologies for distribution automation, including wired and wireless communication, and offers suggestions for applying these new technologies.

  15. Eliminating Vertical Stripe Defects on Silicon Steel Surface by L1/2 Regularization

    OpenAIRE

    Jing, Wenfeng; Meng, Deyu; Qiao, Chen; Peng, Zhiming

    2011-01-01

    The vertical stripe defects on silicon steel surfaces seriously affect the appearance and electromagnetic properties of silicon steel products. Eliminating such defects is a difficult and urgent technical problem. This paper investigates the relationship between the defects and their influencing factors by classification methods. However, when common classification methods are used on this problem, we cannot obtain a classifier with high accuracy. By analysis of the data set, we find that it is...

  16. Automated Budget System -

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  17. Learning features for tissue classification with the classification restricted Boltzmann machine

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2014-01-01

    Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue classification. We introduce the convolutional classification RBM, a combination of the existing convolutional RBM and classification RBM, and use it for discriminative feature learning. We evaluate the classification accuracy of convolutional and non-convolutional classification RBMs on two lung CT problems. We find that RBM-learned features outperform conventional RBM-based feature learning, which is unsupervised and uses only a generative learning objective, as well as often-used filter banks. We show that a mixture of generative and discriminative learning can produce filters that give a higher classification accuracy.
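
    scikit-learn ships only the unsupervised Bernoulli RBM, i.e. the purely generative baseline this paper improves on; a sketch of RBM feature learning followed by a discriminative read-out (the paper's combined classification RBM is not available off the shelf):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import minmax_scale

X, y = load_digits(return_X_y=True)
X = minmax_scale(X)                        # RBM expects inputs in [0, 1]

model = make_pipeline(
    BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20,
                 random_state=0),          # generative feature learning
    LogisticRegression(max_iter=1000))     # discriminative read-out
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```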

  18. Automation 2017

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2017-01-01

    This book consists of papers presented at Automation 2017, an international conference held in Warsaw from March 15 to 17, 2017. It discusses research findings associated with the concepts behind INDUSTRY 4.0, with a focus on offering a better understanding of and promoting participation in the Fourth Industrial Revolution. Each chapter presents a detailed analysis of a specific technical problem, in most cases followed by a numerical analysis, simulation and description of the results of implementing the solution in a real-world context. The theoretical results, practical solutions and guidelines presented are valuable for both researchers working in the area of engineering sciences and practitioners looking for solutions to industrial problems. .

  19. Marketing automation

    Directory of Open Access Journals (Sweden)

    TODOR Raluca Dania

    2017-01-01

    Full Text Available The automation of the marketing process seems to be, nowadays, the only way to face the major changes brought by the fast evolution of technology and the continuous increase in supply and demand. In order to achieve the desired marketing results, businesses have to employ digital marketing and communication services. These services are efficient and measurable thanks to the marketing technology used to track, score and implement each campaign. Due to technical progress, marketing fragmentation, and the demand for customized products and services on one side, and the need to achieve constructive dialogue with customers, immediate and flexible response, and the necessity to measure investments and results on the other side, the classical marketing approach has changed and continues to improve substantially.

  20. Comparison of Size Modulation Standard Automated Perimetry and Conventional Standard Automated Perimetry with a 10-2 Test Program in Glaucoma Patients.

    Science.gov (United States)

    Hirasawa, Kazunori; Takahashi, Natsumi; Satou, Tsukasa; Kasahara, Masayuki; Matsumura, Kazuhiro; Shoji, Nobuyuki

    2017-08-01

    This prospective observational study compared the performance of size modulation standard automated perimetry with the Octopus 600 10-2 test program, with stimulus size modulation during testing based on stimulus intensity, and conventional standard automated perimetry with the Humphrey 10-2 test program in glaucoma patients. Eighty-seven eyes of 87 glaucoma patients underwent size modulation standard automated perimetry with the Dynamic strategy and conventional standard automated perimetry using the SITA standard strategy. The main outcome measures were global indices, point-wise threshold, visual defect size and depth, reliability indices, and test duration; these were compared between size modulation standard automated perimetry and conventional standard automated perimetry. Global indices and point-wise threshold values between size modulation standard automated perimetry and conventional standard automated perimetry were moderately to strongly correlated (p < 0.05). At test points with decreased sensitivity (< 33.40 dB), the measured threshold was higher with size modulation standard automated perimetry than with conventional standard automated perimetry, but the visual-field defect size was smaller and the defect depth deeper on size modulation standard automated perimetry than on conventional standard automated perimetry. The reliability indices, particularly the false-negative response, of size modulation standard automated perimetry were worse than those of conventional standard automated perimetry, and the test duration was longer with size modulation standard automated perimetry than with conventional standard automated perimetry (p = 0.02). Global indices and the point-wise threshold value of the two testing modalities correlated well. However, the potential of a large stimulus presented at an area with decreased sensitivity with size modulation standard automated perimetry could underestimate the actual threshold in the 10-2 test protocol, as compared with conventional standard automated perimetry.

  1. Defect of the Eyelids.

    Science.gov (United States)

    Lu, Guanning Nina; Pelton, Ron W; Humphrey, Clinton D; Kriet, John David

    2017-08-01

    Eyelid defects disrupt the complex natural form and function of the eyelids and present a surgical challenge. Detailed knowledge of eyelid anatomy is essential in evaluating a defect and composing a reconstructive plan. Numerous reconstructive techniques have been described, including primary closure, grafting, and a variety of local flaps. This article describes an updated reconstructive ladder for eyelid defects that can be used in various permutations to solve most eyelid defects. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. AUTOMATING THE DATA SECURITY PROCESS

    Directory of Open Access Journals (Sweden)

    Florin Ogigau-Neamtiu

    2017-11-01

    Full Text Available Contemporary organizations face big data security challenges in the cyber environment due to modern threats and an actual business working model that relies heavily on collaboration, data sharing, tool integration, increased mobility, etc. Nowadays, the data classification and data obfuscation selection processes (encryption, masking or tokenization) suffer because of the human involvement in the process. Organizations need to shrink the data security domain by classifying information based on its importance, conducting risk assessment plans and using the most cost-effective data obfuscation technique. The paper proposes a new model for data protection that uses automated machine decision-making procedures to classify data and to select the appropriate data obfuscation technique. The proposed system uses natural language processing capabilities to analyze input data and to select the best course of action. The system has capabilities to learn from previous experiences, thus improving itself and reducing the risk of wrong data classification.
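
    The proposed flow (classify a document's sensitivity from its text, then map the class to an obfuscation technique) might look as follows; every label, training string, and class-to-technique rule here is invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: document text -> sensitivity class.
train_docs = ["quarterly revenue summary", "employee salary and bank details",
              "public press release draft", "customer credit card numbers"]
labels = ["internal", "confidential", "public", "confidential"]

# Invented class-to-technique policy.
technique = {"public": "none", "internal": "masking",
             "confidential": "encryption"}

clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(train_docs, labels)
level = clf.predict(["bank account details for payroll"])[0]
print(level, "->", technique[level])
```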

  3. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  4. Using Machine Learning for Land Suitability Classification ...

    African Journals Online (AJOL)

    Artificial intelligence and machine learning methods can be used to automate the land suitability classification. Multiple Classifier System (MCS) or ensemble methods are rapidly growing and receiving a lot of attention and proved to be more accurate and robust than an excellent single classifier in many fields. In this study ...

  5. Improving settlement type classification of aerial images

    CSIR Research Space (South Africa)

    Mdakane, L

    2014-10-01

    Full Text Available , an automated method can be used to help identify human settlements in a fixed, repeatable and timely manner. The main contribution of this work is to improve generalisation on settlement type classification of aerial imagery. Images acquired at different dates...

  6. Using Machine Learning for Land Suitability Classification

    African Journals Online (AJOL)

    User

    Abstract. Artificial intelligence and machine learning methods can be used to automate the land suitability classification. Multiple Classifier System (MCS) or ensemble methods are rapidly growing and receiving a lot of attention and proved to be more accurate and robust than an excellent single classifier in many fields.

  7. Evaluation of Advanced Signal Processing Techniques to Improve Detection and Identification of Embedded Defects

    Energy Technology Data Exchange (ETDEWEB)

    Clayton, Dwight A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Santos-Villalobos, Hector J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baba, Justin S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-01

    , or an improvement in contrast over conventional SAFT reconstructed images. This report documents our efforts on four fronts: 1) a comparative study between traditional SAFT and FBD SAFT for concrete specimens with and without Alkali-Silica Reaction (ASR) damage, 2) improvement of our Model-Based Iterative Reconstruction (MBIR) for thick reinforced concrete [5], 3) development of a universal framework for sharing, reconstruction, and visualization of ultrasound NDE datasets, and 4) application of machine learning techniques for automated detection of ASR inside concrete. Our comparative study between FBD and traditional SAFT reconstruction images shows a clear difference between images of ASR and non-ASR specimens. In particular, the left first harmonic shows increased contrast and sensitivity to ASR damage. For MBIR, we show the superiority of model-based techniques over delay-and-sum techniques such as SAFT. Improvements include the elimination of artifacts caused by direct-arrival signals, and increased contrast and signal-to-noise ratio. For the universal framework, we document a data storage format based on the HDF5 file format, and also propose a modular Graphical User Interface (GUI) for easy customization of data conversion, reconstruction, and visualization routines. Finally, two techniques for automated ASR detection are presented. The first technique is based on an analysis of the frequency content using a Hilbert Transform Indicator (HTI), and the second technique employs Artificial Neural Network (ANN) techniques for training and classification of ultrasound data into ASR-damaged and non-ASR-damaged classes. The ANN technique shows great potential, with classification accuracy above 95%. These approaches are extensible to the detection of additional defects and damage in thick, reinforced concrete.
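
    The HDF5-based storage idea can be sketched with h5py; the group layout, dataset name, and attributes below are illustrative assumptions, not the framework's actual schema:

```python
import h5py
import numpy as np

rng = np.random.default_rng(0)

with h5py.File("ultrasound_nde.h5", "w") as f:
    scan = f.create_group("specimen_01/scan_001")   # one group per scan
    scan.create_dataset("ascans", data=rng.normal(size=(16, 16, 1024)),
                        compression="gzip")         # waveform array
    scan.attrs["center_frequency_hz"] = 50e3        # acquisition metadata
    scan.attrs["sampling_rate_hz"] = 1e6
    scan.attrs["label"] = "ASR-damaged"

with h5py.File("ultrasound_nde.h5", "r") as f:
    scan = f["specimen_01/scan_001"]
    print(scan["ascans"].shape, dict(scan.attrs))
```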

  8. Establishment and application of medication error classification standards in nursing care based on the International Classification of Patient Safety

    Directory of Open Access Journals (Sweden)

    Xiao-Ping Zhu

    2014-09-01

    Conclusion: Application of this classification system will help nursing administrators to accurately detect system- and process-related defects leading to medication errors, and enable the factors to be targeted to improve the level of patient safety management.

  9. On holographic defect entropy

    Energy Technology Data Exchange (ETDEWEB)

    Estes, John [Blackett Laboratory, Imperial College,London SW7 2AZ (United Kingdom); Jensen, Kristan [Department of Physics and Astronomy, University of Victoria,Victoria, BC V8W 3P6 (Canada); C.N. Yang Institute for Theoretical Physics, SUNY Stony Brook,Stony Brook, NY 11794-3840 (United States); O’Bannon, Andy [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Tsatis, Efstratios [8 Kotylaiou Street, Athens 11364 (Greece); Wrase, Timm [Stanford Institute for Theoretical Physics, Stanford University,Stanford, CA 94305 (United States)

    2014-05-19

    We study a number of (3+1)- and (2+1)-dimensional defect and boundary conformal field theories holographically dual to supergravity theories. In all cases the defects or boundaries are planar, and the defects are codimension-one. Using holography, we compute the entanglement entropy of a (hemi-)spherical region centered on the defect (boundary). We define defect and boundary entropies from the entanglement entropy by an appropriate background subtraction. For some (3+1)-dimensional theories we find evidence that the defect/boundary entropy changes monotonically under certain renormalization group flows triggered by operators localized at the defect or boundary. This provides evidence that the g-theorem of (1+1)-dimensional field theories generalizes to higher dimensions.

  10. On holographic defect entropy

    Science.gov (United States)

    Estes, John; Jensen, Kristan; O'Bannon, Andy; Tsatis, Efstratios; Wrase, Timm

    2014-05-01

    We study a number of (3 + 1)- and (2 + 1)-dimensional defect and boundary conformal field theories holographically dual to supergravity theories. In all cases the defects or boundaries are planar, and the defects are codimension-one. Using holography, we compute the entanglement entropy of a (hemi-)spherical region centered on the defect (boundary). We define defect and boundary entropies from the entanglement entropy by an appropriate background subtraction. For some (3 + 1)-dimensional theories we find evidence that the defect/boundary entropy changes monotonically under certain renormalization group flows triggered by operators localized at the defect or boundary. This provides evidence that the g-theorem of (1 + 1)-dimensional field theories generalizes to higher dimensions.

  11. Congenital defects of atlantal arch. A report of eight cases

    International Nuclear Information System (INIS)

    Tajima, Yosuke; Saeki, Naokatsu; Sugiyama, Ken; Masuda, Kosuke; Ishige, Satoshi; Yamauchi, Toshihiro; Miyata, Akihiro; Nakamura, Hiroshi; Kobayashi, Shigeki

    2010-01-01

    Atlantal arch defects are rare. The purpose of this paper is to investigate their incidence and clinical implications using cervical CT in trauma patients. A retrospective review of 1,534 cervical spine computed tomography (CT) scans was performed to identify patients with atlantal arch defects. Posterior arch defects of the atlas were grouped in accordance with the classification of Currarino et al. Posterior arch defects were found in 7 (7/1,534, 0.44%) and anterior arch defects in 2 (2/1,534, 0.13%) of the 1,534 patients. The type A posterior arch defect was found in 5 patients and the type B posterior arch defect in 2 patients. No type C, D, or E defects were observed. One patient with a type B posterior arch defect had an anterior atlantal-arch midline cleft. No associated cervical spine anomaly was observed in our cases. None of the reviewed patients had neurological deficits attributable to atlantal arch defects. Most congenital anomalies of the atlantal arch are found incidentally during investigation of neck mass, neck pain, or radiculopathy, and after trauma. Most cases of atlantal arch defects do not require surgery, but it is important to note that some cases require surgical treatment. (author)

  12. Defect detection in textured materials using optimized filters.

    Science.gov (United States)

    Kumar, A; Pang, G H

    2002-01-01

    The problem of automated defect detection in textured materials is investigated. A new approach for defect detection using linear FIR filters with optimized energy separation is proposed. The performance of different feature separation criteria with reference to fabric defects has been evaluated. The issues relating to the design of optimal filters for supervised and unsupervised web inspection are addressed. A general web inspection system based on the optimal filters is proposed. The experiments on this new approach have yielded excellent results. The low computational requirement confirms the usefulness of the approach for industrial inspection.
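
    A sketch of the filter-bank principle: convolve the texture with a 2-D FIR (Gabor-like) kernel and threshold the local filter energy, which drops where the regular texture breaks. The kernel here is fixed by hand, whereas the paper optimizes it for energy separation; the image, threshold, and kernel parameters are invented:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Synthetic woven texture with a small defect patch where the weave breaks.
img = np.sin(np.arange(128) / 2.0)[None, :] * np.ones((128, 1))
img[60:68, 60:68] = 0.0
img += rng.normal(0, 0.05, img.shape)

# Hand-picked Gabor-like 2-D FIR kernel tuned to the texture frequency.
x = np.arange(-7, 8)
envelope = np.exp(-x**2 / 18.0)
kernel2d = np.outer(envelope, envelope * np.cos(x / 2.0))

# Local filter energy is high on intact texture, low where it breaks.
response = convolve2d(img, kernel2d, mode="same")
energy = convolve2d(response**2, np.ones((9, 9)) / 81.0, mode="same")
defect_mask = energy < 0.5 * np.median(energy)   # illustrative threshold
print("flagged pixels:", int(defect_mask.sum()))
```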

  13. Xenolog classification.

    Science.gov (United States)

    Darby, Charlotte A; Stolzer, Maureen; Ropp, Patrick J; Barker, Daniel; Durand, Dannie

    2017-03-01

    Orthology analysis is a fundamental tool in comparative genomics. Sophisticated methods have been developed to distinguish between orthologs and paralogs and to classify paralogs into subtypes depending on the duplication mechanism and timing, relative to speciation. However, no comparable framework exists for xenologs: gene pairs whose history, since their divergence, includes a horizontal transfer. Further, the diversity of gene pairs that meet this broad definition calls for classification of xenologs with similar properties into subtypes. We present a xenolog classification that uses phylogenetic reconciliation to assign each pair of genes to a class based on the event responsible for their divergence and the historical association between genes and species. Our classes distinguish between genes related through transfer alone and genes related through duplication and transfer. Further, they separate closely-related genes in distantly-related species from distantly-related genes in closely-related species. We present formal rules that assign gene pairs to specific xenolog classes, given a reconciled gene tree with an arbitrary number of duplications and transfers. These xenology classification rules have been implemented in software and tested on a collection of ∼13 000 prokaryotic gene families. In addition, we present a case study demonstrating the connection between xenolog classification and gene function prediction. The xenolog classification rules have been implemented in Notung 2.9, a freely available phylogenetic reconciliation software package. http://www.cs.cmu.edu/~durand/Notung . Gene trees are available at http://dx.doi.org/10.7488/ds/1503 . durand@cmu.edu. Supplementary data are available at Bioinformatics online.

  14. Automated vehicle for railway track fault detection

    Science.gov (United States)

    Bhushan, M.; Sujay, S.; Tushar, B.; Chitra, P.

    2017-11-01

    For safety reasons, railroad tracks need to be inspected on a regular basis to detect physical defects or design non-compliances. Such track defects and non-compliances, if not detected within a certain interval of time, may eventually lead to severe consequences such as train derailments. Because there are hundreds of thousands of miles of railroad track, maintaining safety standards requires a human inspector to examine the track twice weekly. Such manual inspection has many drawbacks that may result in poor inspection of the track, which in turn may cause future accidents. To avoid such errors and severe accidents, this automated system is designed. Such a concept would introduce automation into the track inspection process and help avoid mishaps and severe accidents due to faults in the track.

  15. Automated reticle inspection data analysis for wafer fabs

    Science.gov (United States)

    Summers, Derek; Chen, Gong; Reese, Bryan; Hutchinson, Trent; Liesching, Marcus; Ying, Hai; Dover, Russell

    2009-04-01

    To minimize potential wafer yield loss due to mask defects, most wafer fabs implement some form of reticle inspection system to monitor photomask quality in high-volume wafer manufacturing environments. Traditionally, experienced operators review reticle defects found by an inspection tool and then manually classify each defect as 'pass, warn, or fail' based on its size and location. However, in the event reticle defects are suspected of causing repeating wafer defects on a completed wafer, potential defects on all associated reticles must be manually searched on a layer-by-layer basis in an effort to identify the reticle responsible for the wafer yield loss. This 'problem reticle' search process is a very tedious and time-consuming task and may cause extended manufacturing line-down situations. Oftentimes, process engineers and other team members need to manually investigate several reticle inspection reports to determine if yield loss can be tied to a specific layer. Because of the very nature of this detailed work, calculation errors may occur, resulting in an incorrect root-cause analysis effort. These delays waste valuable resources that could be spent working on other more productive activities. This paper examines an automated software solution for converting KLA-Tencor reticle inspection defect maps into a format compatible with KLA-Tencor's Klarity Defect(R) data analysis database. The objective is to use the graphical charting capabilities of Klarity Defect to reveal a clearer understanding of defect trends for individual reticle layers or entire mask sets. Automated analysis features include reticle defect count trend analysis and potentially stacking reticle defect maps for signature analysis against wafer inspection defect data. Other possible benefits include optimizing reticle inspection sample plans in an effort to support "lean manufacturing" initiatives for wafer fabs.
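
    A hedged Python sketch of the kind of per-layer defect-count trending described above; the column names ('reticle', 'layer', 'inspection_date', 'defect_count') are a hypothetical stand-in for converted inspection reports, not the actual Klarity Defect schema.

```python
# Weekly defect-count trend per reticle layer (sketch, hypothetical schema).
import pandas as pd

reports = pd.read_csv("reticle_inspections.csv",
                      parse_dates=["inspection_date"])

# A sustained rise on one layer points at the "problem reticle" without
# a manual layer-by-layer search of inspection reports.
trend = (reports
         .set_index("inspection_date")
         .groupby("layer")["defect_count"]
         .resample("W").sum()
         .unstack(level=0))
print(trend.tail())
```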

  16. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  17. Deep convolutional neural networks for detection of rail surface defects

    NARCIS (Netherlands)

    Faghih Roohi, S.; Hajizadeh, S.; Nunez Vicencio, Alfredo; Babuska, R.; De Schutter, B.H.K.; Estevez, Pablo A.; Angelov, Plamen P.; Del Moral Hernandez, Emilio

    2016-01-01

    In this paper, we propose a deep convolutional neural network solution to the analysis of image data for the detection of rail surface defects. The images are obtained from many hours of automated video recordings. This huge amount of data makes it impossible to manually inspect the images and
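
    A minimal sketch of such a patch classifier, assuming PyTorch, 64x64 grayscale patches, and a two-class output; the architecture in the paper may differ.

```python
# Small CNN for rail-surface patch classification (illustrative sketch).
import torch
import torch.nn as nn

class RailDefectNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)  # 64x64 input

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One training step on a dummy batch of 64x64 grayscale patches.
model = RailDefectNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,))
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```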

  18. Methods to prevent turbogenerators design elements defects

    Directory of Open Access Journals (Sweden)

    Валентина Володимирівна Шевченко

    2016-11-01

    Full Text Available The paper shows that the determination of a failure probability due to the design, technological and operational drawbacks, as well as due to the turbogenerators working time exceeding from statistics data is inaccurate. Machine park of turbogenerators being rather limited in number, the classification and the distribution of generators into groups is random. It can not be used in practice to identify the pre-emergency state of turbogenerators and their timely stop. Analysis and classification of most frequent defects of turbogenerators has been performed. Methods for assessing such defects and reduction of their development have been offered. The article notes that expenses should be taken into account when setting up a monitoring system to assess the state and to identify defects. Reduction of expenditures on both operating and new turbogenerators must be justified. Rapid return of investments must be ensured. The list of additional tests has been proposed: measurement of infrared radiation outside the body of the turbogenerator for the estimation of the thermal field distribution and the defects of gas coolers identification; vibroacoustic inspection of the stator core and casing to find out the defects in the suspension of the core in the stator casing; analysis of the impurities in the cooling gas and in the dry remains of the drainage products to detect the products of the steel core and the winding insulation wear; value measurement and establishment of the partial discharges formation position; research of vibrations to reveal the cracks in the shaft, circuiting in the rotor windings and defects in the bearings. The paper notes that at upgrading as power grows overall and mounting dimensions must be preserved so that the existing foundation could be used as well as the existing security systems. Therefore, when designing or upgrading turbogenerators with an increase in power it is necessary to introduce new design decisions

  19. Automated Induction Thermography of Generator Components

    Science.gov (United States)

    Goldammer, M.; Mooshofer, H.; Rothenfusser, M.; Bass, J.; Vrana, J.

    2010-02-01

    Using active thermography, defects such as cracks can be detected quickly and reliably. Choosing from a wide range of excitation techniques, the method can be adapted to a number of tasks in non-destructive evaluation. Induction thermography is ideally suited for testing metallic components for cracks at or close to the surface. In power generation, a number of components are subjected to high loads and stresses; therefore defect detection is crucial for safe operation of the engines. Apart from combustion turbines this also applies to generators: at regular inspection intervals even small cracks have to be detected to avoid crack growth and consequent failure of the component. As an imaging technique, thermography allows fast 100% testing of the complete surface of all relevant parts. An automated setup increases the cost effectiveness of induction thermography significantly: the time needed to test a single part is reduced, the number of parts tested per shift is increased, and the cost of testing drops. In addition, automation guarantees a reliable testing procedure which detects all critical defects. We present how non-destructive testing can be automated, using as an example an industrial application at the Siemens Energy sector and a new induction thermography setup for generator components.

  20. Classification of composite damage from FBG load monitoring signals

    NARCIS (Netherlands)

    Rajabzadehdizaji, Aydin; Hendriks, R.C.; Heusdens, R.; Groves, R.M.

    2017-01-01

    This paper describes a new method for the classification and identification of two major types of defects in composites, namely delamination and matrix cracks, by classification of the spectral features of fibre Bragg grating (FBG) signals. In aeronautical applications of composites, after a

  1. BENCHMARKING MACHINE LEARNING TECHNIQUES FOR SOFTWARE DEFECT DETECTION

    OpenAIRE

    Saiqa Aleem; Luiz Fernando Capretz; Faheem Ahmed

    2015-01-01

    Machine learning approaches are good at solving problems for which little information is available. In most cases, software domain problems can be characterized as a learning process that depends on various circumstances and changes accordingly. A predictive model is constructed using machine learning approaches to classify software modules into defective and non-defective ones. Machine learning techniques help developers retrieve useful information after classification and enable them to analyse data...

  2. Defects in semiconductors

    CERN Document Server

    Romano, Lucia; Jagadish, Chennupati

    2015-01-01

    This volume, number 91 in the Semiconductor and Semimetals series, focuses on defects in semiconductors. Defects in semiconductors help to explain several phenomena, from diffusion to getter, and to draw theories on materials' behavior in response to electrical or mechanical fields. The volume includes chapters focusing specifically on electron and proton irradiation of silicon, point defects in zinc oxide and gallium nitride, ion implantation defects and shallow junctions in silicon and germanium, and much more. It will help support students and scientists in their experimental and theoret

  3. Automated result analysis in radiographic testing of NPPs' welded joints

    International Nuclear Information System (INIS)

    Skomorokhov, A.O.; Nakhabov, A.V.; Belousov, P.A.

    2009-01-01

    The article presents the development results of algorithms for automated image interpretation in radiographic inspection of NPP welded joints. The developed algorithms are based on state-of-the-art pattern recognition methods. The paper covers automatic radiographic image segmentation, defect detection and defect parameter evaluation. Testing results of the developed algorithms on actual radiographic images of welded joints with significant variation in defect parameters are given.

  4. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  5. An automated swimming respirometer

    DEFF Research Database (Denmark)

    STEFFENSEN, JF; JOHANSEN, K; BUSHNELL, PG

    1984-01-01

    An automated respirometer is described that can be used for computerized respirometry of trout and sharks.

  6. Configuration Management Automation (CMA) -

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  7. The Business Case for Automated Software Engineering

    Science.gov (United States)

    Menzies, Tim; Elrawas, Oussama; Hihn, Jairus M.; Feather, Martin S.; Madachy, Ray; Boehm, Barry

    2007-01-01

    Adoption of advanced automated SE (ASE) tools would be more favored if a business case could be made that these tools are more valuable than alternate methods. In theory, software prediction models can be used to make that case. In practice, this is complicated by the 'local tuning' problem: normally, predictors for software effort, defects, and threats use local data to tune their predictions, and such local tuning data is often unavailable. This paper shows that assessing the relative merits of different SE methods need not require precise local tunings. STAR1 is a simulated annealer plus a Bayesian post-processor that explores the space of possible local tunings within software prediction models. STAR1 ranks project decisions by their effects on effort, defects, and threats. In experiments with NASA systems, STAR1 found one project where ASE tools were essential for minimizing effort/defects/threats, and another project where ASE tools were merely optional.

  8. Study on on-machine defects measuring system on high power laser optical elements

    Science.gov (United States)

    Luo, Chi; Shi, Feng; Lin, Zhifan; Zhang, Tong; Wang, Guilin

    2017-10-01

    Surface defects on high-power laser optical elements harm the performance of the imaging system, including its energy consumption, and can damage the film layer. To improve the detection of surface defects on high-power laser optical elements, an on-machine defect measuring system was investigated. Firstly, the selection and design were completed through a working-condition analysis of the on-machine defect detection system. Processing algorithms were designed to realize the classification, recognition and evaluation of surface defects. A calibration experiment for scratches was performed using a self-made standard alignment plate. Finally, the detection and evaluation of surface defects of a large-diameter semi-cylindrical silicon mirror were realized. The calibration results show that the size deviation is less than 4%, which meets the precision requirement for defect detection. Through image detection, the on-machine defect detection system can realize accurate identification of surface defects.

  9. Automation in College Libraries.

    Science.gov (United States)

    Werking, Richard Hume

    1991-01-01

    Reports the results of a survey of the "Bowdoin List" group of liberal arts colleges. The survey obtained information about (1) automation modules in place and when they had been installed; (2) financing of automation and its impacts on the library budgets; and (3) library director's views on library automation and the nature of the…

  10. Defects in hardwood timber

    Science.gov (United States)

    Roswell D. Carpenter; David L. Sonderman; Everette D. Rast; Martin J. Jones

    1989-01-01

    Includes detailed information on all common defects that may affect hardwood trees and logs. Relationships between manufactured products and those forms of round material to be processed from the tree for conversion into marketable products are discussed. This handbook supersedes Agriculture Handbook No. 244, Grade defects in hardwood timber and logs, by C.R. Lockard, J...

  11. Craniotomy Frontal Bone Defect

    African Journals Online (AJOL)

    2018-03-01

    Mar 1, 2018 ... with cosmetic deformity of forehead (Figure 1), and he claimed that he could not get a job because of ... Figure 1: Pre-operative frontal view of patient. Figure 2: Intra-operative photograph of defect (A), reconstructed defect (B) ... with a cosmetic deformity of forehead on left side. He was a candidate for.

  12. Defects at oxide surfaces

    CERN Document Server

    Thornton, Geoff

    2015-01-01

    This book presents the basics and characterization of defects at oxide surfaces. It provides a state-of-the-art review of the field, containing information to the various types of surface defects, describes analytical methods to study defects, their chemical activity and the catalytic reactivity of oxides. Numerical simulations of defective structures complete the picture developed. Defects on planar surfaces form the focus of much of the book, although the investigation of powder samples also form an important part. The experimental study of planar surfaces opens the possibility of applying the large armoury of techniques that have been developed over the last half-century to study surfaces in ultra-high vacuum. This enables the acquisition of atomic level data under well-controlled conditions, providing a stringent test of theoretical methods. The latter can then be more reliably applied to systems such as nanoparticles for which accurate methods of characterization of structure and electronic properties ha...

  13. CLASSIFICATION OF LEARNING MANAGEMENT SYSTEMS

    Directory of Open Access Journals (Sweden)

    Yu. B. Popova

    2016-01-01

    Full Text Available The use of information technologies and, in particular, learning management systems increases the opportunities of teachers and students to reach their goals in education. Such systems provide learning content, help organize and monitor training, collect progress statistics and take into account the individual characteristics of each user. Currently, there is a huge inventory of both paid and free systems, physically located both on college servers and in the cloud, offering different feature sets, licensing schemes and costs. This creates the problem of choosing the best system, a problem partly due to the lack of a comprehensive classification of such systems. Analysis of more than 30 of the most common automated learning management systems has shown that a classification of such systems should be carried out according to certain criteria under which systems of the same type can be considered. The classification features offered by the author are: cost, functionality, modularity, meeting the customer's requirements, integration of content, physical location of a system, and adaptability of training. Considering learning management systems within these classifications and taking into account current trends in their development, it is possible to identify the main requirements for them: functionality, reliability, ease of use, low cost, support for the SCORM standard or Tin Can API, modularity and adaptability. According to these requirements, the development, use and continuous improvement of an in-house learning management system has been under way at the Software Department of FITR BNTU under the guidance of the author since 2009.

  14. Automation in Clinical Microbiology

    Science.gov (United States)

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  15. Automation of industrial bioprocesses.

    Science.gov (United States)

    Beyeler, W; DaPra, E; Schneider, K

    2000-01-01

    The dramatic development of new electronic devices within the last 25 years has had a substantial influence on the control and automation of industrial bioprocesses. Within this short period of time the method of controlling industrial bioprocesses has changed completely. In this paper, the authors will use a practical approach focusing on the industrial applications of automation systems. From the early attempts to use computers for the automation of biotechnological processes up to the modern process automation systems some milestones are highlighted. Special attention is given to the influence of Standards and Guidelines on the development of automation systems.

  16. Oil defect detection of electrowetting display

    Science.gov (United States)

    Chiang, Hou-Chi; Tsai, Yu-Hsiang; Yan, Yung-Jhe; Huang, Ting-Wei; Mang, Ou-Yang

    2015-08-01

    In recent years, transparent display has been an emerging topic in display technologies, with applications in many fields such as mobile devices and shopping or advertising windows. The electrowetting display (EWD) is one kind of potential transparent display technology, with the advantages of high transmittance, fast response time, high contrast and rich color based on a pigment-based oil system. In the mass production of electrowetting displays, oil defects should be found by an Automated Optical Inspection (AOI) detection system, which is useful for determining panel defects for quality control. Based on the research of our group, we propose a mechanism for an AOI detection system that detects the different kinds of oil defects caused by oil overflow or deteriorated material after oil coating or driving. We tested our mechanism on a 6-inch electrowetting display panel from ITRI, using an Epson V750 scanner with 1200 dpi resolution. Two AOI algorithms were developed: a high-speed method and a high-precision method. The high-precision method successfully detects oil jumping and non-recovered oil. This AOI detection mechanism can be used to evaluate oil uniformity in the EWD panel process. In the future, our AOI detection system can be used in the quality control of panel manufacturing for mass production.

  17. Serum PTH reference values established by an automated third-generation assay in vitamin D-replete subjects with normal renal function: consequences of diagnosing primary hyperparathyroidism and the classification of dialysis patients.

    Science.gov (United States)

    Souberbielle, Jean-Claude; Massart, Catherine; Brailly-Tabard, Sylvie; Cormier, Catherine; Cavalier, Etienne; Delanaye, Pierre; Chanson, Philippe

    2016-03-01

    To determine parathyroid hormone (PTH) reference values in French healthy adults, taking into account serum 25-hydroxyvitamin D (25OHD), renal function, age, gender, and BMI. We studied 898 healthy subjects (432 women) aged 18-89 years with a normal BMI and estimated glomerular filtration rate (eGFR), 81 patients with surgically proven primary hyperparathyroidism (PHPT), and 264 dialysis patients. 25OHD and third-generation PTH assays were implemented on the LIAISON XL platform. Median PTH and 25OHD values in the 898 healthy subjects were 18.8  ng/l and 23.6  ng/ml respectively. PTH was lower in subjects with 25OHD ≥30  ng/ml than in those with lower values. Among the 183 subjects with 25OHD ≥30  ng/ml, those aged ≥60 years (n=31) had higher PTH values than younger subjects, independent of 25OHD, BMI, and eGFR. We retained the central range of PTH values for the entire group of 183 vitamin D-replete subjects (9.4-28.9  ng/l) as our reference values. With 28.9  ng/l as the upper limit of normal (ULN) rather than the manufacturer's ULN of 38.4  ng/l, the percentage of PHPT patients with 'high' PTH values rose to 90.1% from 66.6%. The PTH ULN fell by 22.4%, diagnostic sensitivity for PHPT improved, and the classification of dialysis patients was modified. © 2016 European Society of Endocrinology.
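
    For context, a central-95% reference interval like the 9.4-28.9 ng/l range above is commonly derived with the nonparametric percentile method; a minimal sketch, assuming a plain text file of PTH values from the vitamin D-replete reference cohort:

```python
# Nonparametric central-95% reference interval (sketch; file is hypothetical).
import numpy as np

pth = np.loadtxt("pth_vitamin_d_replete.txt")   # PTH values in ng/l
lower, upper = np.percentile(pth, [2.5, 97.5])
print(f"reference interval: {lower:.1f}-{upper:.1f} ng/l")
```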

  18. Extreme learning machine-based classification of ADHD using brain structural MRI data.

    Directory of Open Access Journals (Sweden)

    Xiaolong Peng

    Full Text Available BACKGROUND: Effective and accurate diagnosis of attention-deficit/hyperactivity disorder (ADHD) is currently of significant interest. ADHD has been associated with multiple cortical features from structural MRI data. However, most existing learning algorithms for ADHD identification contain obvious defects, such as time-consuming training, parameter selection, etc. The aims of this study were as follows: (1) Propose an ADHD classification model using the extreme learning machine (ELM) algorithm for automatic, efficient and objective clinical ADHD diagnosis. (2) Assess the computational efficiency and the effect of sample size on both ELM and support vector machine (SVM) methods and analyze which brain segments are involved in ADHD. METHODS: High-resolution three-dimensional MR images were acquired from 55 ADHD subjects and 55 healthy controls. Multiple brain measures (cortical thickness, etc.) were calculated using a fully automated procedure in the FreeSurfer software package. In total, 340 cortical features were automatically extracted from 68 brain segments with 5 basic cortical features. F-score and SFS methods were adopted to select the optimal features for ADHD classification. Both ELM and SVM were evaluated for classification accuracy using leave-one-out cross-validation. RESULTS: We achieved ADHD prediction accuracies of 90.18% for ELM using eleven combined features, 84.73% for SVM-Linear and 86.55% for SVM-RBF. Our results show that ELM has better computational efficiency and is more robust as sample size changes than is SVM for ADHD classification. The most pronounced differences between ADHD and healthy subjects were observed in the frontal lobe, temporal lobe, occipital lobe and insula. CONCLUSION: Our ELM-based algorithm for ADHD diagnosis performs considerably better than the traditional SVM algorithm. This result suggests that ELM may be used for the clinical diagnosis of ADHD and the investigation of different brain diseases.
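
    A minimal numpy sketch of the ELM idea referenced above: a random, untrained hidden layer followed by a single least-squares solve for the output weights, which is why training is fast and needs little parameter tuning. This illustrates the general algorithm, not the authors' exact model.

```python
# Extreme learning machine: random hidden layer + linear output (sketch).
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(int(y.max()) + 1)[y]        # one-hot class targets
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # one solve
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```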

  19. Application of elastic net and infrared spectroscopy in the discrimination between defective and non-defective roasted coffees.

    Science.gov (United States)

    Craig, Ana Paula; Franca, Adriana S; Oliveira, Leandro S; Irudayaraj, Joseph; Ileleji, Klein

    2014-10-01

    The quality of the coffee beverage is negatively affected by the presence of defective coffee beans, and its evaluation still relies on highly subjective sensory panels. To tackle the problem of subjectivity, sophisticated analytical techniques have been developed and have been shown capable of discriminating defective from non-defective coffees after roasting. However, these techniques are not adequate for routine analysis, for they are laborious (sample preparation) and time-consuming; reliable, simpler and faster techniques need to be developed for this purpose. Thus, it was the aim of this study to evaluate the performance of infrared spectroscopic methods, namely FTIR and NIR, for the discrimination of roasted defective and non-defective coffees, employing a novel statistical approach. The classification models based on Elastic Net exhibited a high percentage of correct classification, and the discriminant infrared spectral variables extracted provided a good interpretation of the models. The discrimination of defective and non-defective beans was associated with main chemical descriptors of coffee, such as carbohydrates, proteins/amino acids, lipids, caffeine and chlorogenic acids. Copyright © 2014 Elsevier B.V. All rights reserved.
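
    A hedged sketch of elastic-net discrimination of spectra, using scikit-learn's elastic-net-penalized logistic regression on synthetic data in place of real FTIR/NIR measurements; the hyperparameters are illustrative.

```python
# Elastic-net classification of (synthetic) spectra; nonzero coefficients
# point at the discriminant spectral variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 300))    # 60 spectra x 300 wavenumbers (synthetic)
y = rng.integers(0, 2, 60)        # 0 = non-defective, 1 = defective

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
clf.fit(X, y)
print("selected variables:", int((clf[-1].coef_ != 0).sum()))
```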

  20. Screening Tests for Birth Defects

    Science.gov (United States)

    ACOG patient FAQ (FAQ165, April 2014), PDF format: Screening Tests for Birth Defects. What is a birth defect? ...

  1. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary classification research focuses on contextual information as the guide for the design and construction of classification schemes.

  2. 21 CFR 864.9175 - Automated blood grouping and antibody test system.

    Science.gov (United States)

    2010-04-01

    (a) Identification. An automated blood grouping and antibody test system is a device used to group erythrocytes (red blood cells) and to detect antibodies to blood group antigens. (b) Classification. Class II (performance standards).

  3. Research on Scientific Data Sharing and Distribution Policy in Advanced Manufacturing and Automation Fields

    Directory of Open Access Journals (Sweden)

    Liya Li

    2007-12-01

    Full Text Available Scientific data sharing is a long-term and complicated task, and the related data sharing and distribution policies are prime concerns. Drawing on both domestic and international experience in scientific data sharing, the sources, distribution, and classification of scientific data in advanced manufacturing and automation are discussed. A primary data sharing and distribution policy in advanced manufacturing and automation is introduced.

  4. Design of Gear Defect Detection System Based on Machine Vision

    Science.gov (United States)

    Wang, Yu; Wu, Zhiheng; Duan, Xianyun; Tong, Jigang; Li, Ping; Chen, min; Lin, Qinglin

    2018-01-01

    In order to solve such problems as low efficiency, low quality and instability in gear surface defect detection, we designed a detection system based on machine vision and sensor coupling. Through multi-sensor coupling, images of gear products are collected by a CCD camera and then analyzed and processed using VS2010 together with the Halcon library. Finally, the results are fed back to the control end, and rejected parts are removed to the collecting box. The system successfully identified defective gears. The test results show that this system can identify and eliminate defective gears quickly and efficiently. It meets the requirements for automating a gear product defect detection line and has a certain application value.
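
    A rough stand-in for such an inspection step, using OpenCV in place of the Halcon library named above; the tooth-counting heuristic and the expected tooth count are illustrative assumptions.

```python
# Silhouette-based gear check (sketch): count gaps between teeth via
# convexity defects and compare against the specification.
import cv2
import numpy as np

EXPECTED_TEETH = 24   # illustrative specification

def is_defective(gray: np.ndarray) -> bool:
    _, bw = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    gear = max(contours, key=cv2.contourArea)      # largest blob = gear
    hull = cv2.convexHull(gear, returnPoints=False)
    defects = cv2.convexityDefects(gear, hull)     # gaps between teeth
    teeth = 0 if defects is None else len(defects)
    return teeth != EXPECTED_TEETH                 # flag for rejection
```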

  5. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    Science.gov (United States)

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defect inspection method and make robust, automated delineation of image regions of interest (ROI) a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system is mainly tied to surface quality inspection for strip, billet, slab, etc. In this work we take into account the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, by which the boundary region can be delineated through the RFC region competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.

  6. A study on limb reduction defects in six European regions

    NARCIS (Netherlands)

    Stoll, C; Calzolari, E; Cornel, M; GarciaMinaur, S; Garne, E; Nevin, N

    1996-01-01

    Limb reduction defects (LRD) gained special attention after the thalidomide tragedy in 1962. LRD are common congenital malformations which present as obvious congenital anomalies recognized at birth; therefore it might be assumed that they are well documented. However, classification of LRDs is

  7. A Local Area Network to Facilitate Office Automation in the Administrative Sciences Department.

    Science.gov (United States)

    1986-03-27

    A Local Area Network to Facilitate Office Automation in the Administrative Sciences Department. Approved for public release; distribution is unlimited.

  8. Single Ventricle Defects

    Science.gov (United States)

    ... heart defects along with pulmonary atresia. (Children with tetralogy of Fallot who also have pulmonary atresia may have treatment similar to others with tetralogy of Fallot.) How does it affect the heart? An opening ...

  9. Repairing Nanoparticle Surface Defects

    NARCIS (Netherlands)

    Marino, Emanuele; Kodger, Thomas E.; Crisp, R.W.; Timmerman, Dolf; MacArthur, Katherine E.; Heggen, Marc; Schall, Peter

    2017-01-01

    Solar devices based on semiconductor nanoparticles require the use of conductive ligands; however, replacing the native, insulating ligands with conductive metal chalcogenide complexes introduces structural defects within the crystalline nanostructure that act as traps for charge carriers. We

  10. Neural tube defects

    Directory of Open Access Journals (Sweden)

    M.E. Marshall

    1981-09-01

    Full Text Available Neural tube defects refer to any defect in the morphogenesis of the neural tube, the most common types being spina bifida and anencephaly. Spina bifida has been recognised in skeletons found in north-eastern Morocco and estimated to have an age of almost 12 000 years. It was also known to the ancient Greek and Arabian physicians, who thought that the bony defect was due to a tumour. The term spina bifida was first used by Professor Nicolai Tulp of Amsterdam in 1652. Many other terms have been used to describe this defect, but spina bifida remains the most useful general term, as it describes the separation of the vertebral elements in the midline.

  11. Defect reduction progress in step and flash imprint lithography

    Science.gov (United States)

    Selenidis, K.; Maltabes, J.; McMackin, I.; Perez, J.; Martin, W.; Resnick, D. J.; Sreenivasan, S. V.

    2007-10-01

    Imprint lithography has been shown to be an effective method for the replication of nanometer-scale structures from a template mold. Step and Flash Imprint Lithography (S-FIL ®) is unique in its ability to address both resolution and alignment. Recently, overlay of less than 20 nm (3σ) across a 200 mm wafer has been demonstrated. Current S-FIL resolution and alignment performance motivates the consideration of nano-imprint lithography as a Next Generation Lithography (NGL) solution for IC production. During the S-FIL process, a transferable image, an imprint, is produced by mechanically molding a liquid UV-curable resist on a wafer. The novelty of this process immediately raises questions about the overall defectivity level of S-FIL. Acceptance of imprint lithography for CMOS manufacturing will require demonstration that it can attain defect levels commensurate with the requirements of cost-effective device production. This report focuses specifically on this challenge, presents the current status of defect reduction in S-FIL technology, and summarizes the results of defect inspections of wafers patterned using S-FIL. Wafer inspections were performed with a KLA-Tencor 2132 (KT-2132) automated patterned-wafer inspection tool. Recent results show wafer defectivity to be less than 5 cm-2. Mask fabrication and inspection techniques used to obtain low-defect templates are described. The templates used to imprint wafers for this study were designed specifically to facilitate automated defect inspection and were made by employing CMOS-industry-standard materials and exposure tools. A KT-576 tool was used for template defect inspection.

  12. 7 CFR 51.713 - Classification of defects.

    Science.gov (United States)

    2010-01-01

    ... Aggregating more than 25 percent of the surface. Split, rough or protruding navels Split is unhealed; navel... splits, or navel protrudes beyond the general contour, and opening is so wide, folded or ridged that it... of all splits exceed 1 inch, or navel protrudes beyond general contour, and opening is so wide...

  13. 7 CFR 51.1175 - Classification of defects.

    Science.gov (United States)

    2010-01-01

    ... Aggregating more than 25 percent of the surface. Split, rough, protruding navels Split is unhealed, or more than 1/8 inch (3.2 mm) in length, or navel protrudes beyond the general contour, and opening is so wide...) in length, or more than three well healed splits, or navel protrudes beyond the general contour, and...

  14. 7 CFR 51.1877 - Classification of defects.

    Science.gov (United States)

    2010-01-01

    .... Table II References to Area, Aggregate Area, Length or Aggregate Length are Based on a Tomato Having a Diameter of 21/2 Inches (64 mm) 1 [See footnote at end of Table II] Factor Damage Serious damage Very..., aggregate length of all radial cracks more than 1 inch (25 mm) measured from edge of stem scar. Any lot of...

  15. 7 CFR 51.3416 - Classification of defects.

    Science.gov (United States)

    2010-01-01

    ... MARKETING ACT OF 1946 FRESH FRUITS, VEGETABLES AND OTHER PRODUCTS 1,2 (INSPECTION, CERTIFICATION, AND... sunken 5% waste 10% waste. Flea Beetle 5% waste 10% waste Folded end 5% waste 10% waste. Fusarium tuber... ring Internal Black Spot, Internal Discoloration, Vascular Browning, Fusarium Wilt, Net Necrosis, Other...

  16. Hazard classification methodology

    International Nuclear Information System (INIS)

    Brereton, S.J.

    1996-01-01

    This document outlines the hazard classification methodology used to determine the hazard classification of the NIF LTAB, OAB, and the support facilities on the basis of radionuclides and chemicals. The hazard classification determines the safety analysis requirements for a facility

  17. Support Vector Machine and Parametric Wavelet-Based Texture Classification of Stem Cell Images

    National Research Council Canada - National Science Library

    Jeffreys, Christopher

    2004-01-01

    .... Since colony texture is a major discriminating feature in determining quality, we introduce a non-invasive, semi-automated texture-based stem cell colony classification methodology to aid researchers...

  18. New York State Thruway Authority automatic vehicle classification (AVC) : research report.

    Science.gov (United States)

    2008-03-31

    In December 2007, the N.Y.S. Thruway Authority (Thruway) concluded a Federally funded research effort to study technology and develop a design for retrofitting devices required in implementing a fully automated vehicle classification system i...

  19. Automation systems for radioimmunoassay

    International Nuclear Information System (INIS)

    Yamasaki, Paul

    1974-01-01

    The application of automation systems for radioimmunoassay (RIA) was discussed. Automated systems could be useful in the second step of the four basic processes in the course of RIA, i.e., preparation of the sample for reaction. There were two types of instrumentation: a semi-automatic pipette, and a fully automated pipetting station, both providing fast and accurate dispensing of the reagent or diluting of the sample with reagent. Illustrations of the instruments were shown. (Mukohata, S.)

  20. Automated stopcock actuator

    OpenAIRE

    Vandehey, N. T.; O\\'Neil, J. P.

    2015-01-01

    Introduction We have developed a low-cost stopcock valve actuator for radiochemistry automation built using a stepper motor and an Arduino, an open-source single-board microcontroller. The controller hardware can be programmed to run by serial communication or via two 5–24 V digital lines for simple integration into any automation control system. This valve actuator allows for automated use of a single, disposable stopcock, providing a number of advantages over stopcock manifold systems ...
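
    A hypothetical host-side sketch for driving such an actuator from Python over its serial interface; the port name and the single-letter command protocol are invented for illustration, since the abstract only states that the controller accepts serial commands.

```python
# Send a position command to the Arduino-based valve actuator (sketch).
import serial  # pyserial

with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as port:
    port.write(b"A")            # e.g. rotate stopcock to position A
    print(port.readline())      # controller acknowledgement, if any
```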

  1. Automated Analysis of Accountability

    DEFF Research Database (Denmark)

    Bruni, Alessandro; Giustolisi, Rosario; Schürmann, Carsten

    2017-01-01

    that are amenable to automated verification. Our definitions are general enough to be applied to different classes of protocols and different automated security verification tools. Furthermore, we point out formally the relation between verifiability and accountability. We validate our definitions with the automatic verification of three protocols: a secure exam protocol, Google's Certificate Transparency, and an improved version of Bingo Voting. We find through automated verification that all three protocols satisfy verifiability while only the first two protocols meet accountability.

  2. Management Planning for Workplace Automation.

    Science.gov (United States)

    McDole, Thomas L.

    Several factors must be considered when implementing office automation. Included among these are whether or not to automate at all, the effects of automation on employees, requirements imposed by automation on the physical environment, effects of automation on the total organization, and effects on clientele. The reasons behind the success or…

  3. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Automated cloning methods.

    International Nuclear Information System (INIS)

    Collart, F.

    2001-01-01

    Argonne has developed a series of automated protocols to generate bacterial expression clones by using a robotic system designed to be used in procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports sources from the active station on the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding region (PCR)

  5. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  6. Automated detection and categorization of genital injuries using digital colposcopy

    DEFF Research Database (Denmark)

    Fernandes, Kelwin; Cardoso, Jaime S.; Astrup, Birgitte Schmidt

    2017-01-01

    handcrafted features and deep learning techniques in the automated processing of colposcopic images for genital injury detection. Positive results were achieved by both paradigms in segmentation and classification subtasks, with traditional and deep models being the best strategy for each subtask type...

  7. Automated mapping of building facades by machine learning

    DEFF Research Database (Denmark)

    Höhle, Joachim

    2014-01-01

    Facades of buildings contain various types of objects which have to be recorded for information systems. The article describes a solution for this task focussing on automated classification by means of machine learning techniques. Stereo pairs of oblique images are used to derive 3D point clouds...

  8. Automated Discovery of Speech Act Categories in Educational Games

    Science.gov (United States)

    Rus, Vasile; Moldovan, Cristian; Niraula, Nobal; Graesser, Arthur C.

    2012-01-01

    In this paper we address the important task of automated discovery of speech act categories in dialogue-based, multi-party educational games. Speech acts are important in dialogue-based educational systems because they help infer the student speaker's intentions (the task of speech act classification) which in turn is crucial to providing adequate…

  9. A development of the method of the control signal formation for the hot plate mill automation systems to improve the flatness of the finish plate

    Directory of Open Access Journals (Sweden)

    Voronin Stanislav S.

    2016-01-01

    Full Text Available This article describes how to control the hot plate mill automation system to improve the quality metrics of the final strip. Based on data from modern hot rolling mills, a classification of the stand equipment was designed; depending on the degree of influence on the magnitude of reduction, the equipment was divided into categories. The functioning of every system, including the main and the vertical stands, was described, and the conditions of the electrical and hydraulic mechanisms were noted. The developed algorithm allows defects to be corrected based on a finite number of thickness measurements given by special non-contact sensors. An example of calculating the regulators' signals was shown, and the result of the algorithm's operation was illustrated.

  10. Defect detection on videos using neural network

    Directory of Open Access Journals (Sweden)

    Sizyakin Roman

    2017-01-01

    Full Text Available In this paper, we consider a method for defect detection in a video sequence which consists of three main steps: frame compensation, preprocessing by a detector based on the ranking of pixel values, and the classification of all pixels having anomalous values using convolutional neural networks. The effectiveness of the proposed method is shown in comparison with known techniques on several frames of a video sequence damaged under natural conditions. The analysis of the obtained results indicates the high efficiency of the proposed method. The additional use of machine learning as postprocessing significantly reduces the likelihood of false alarms.
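
    A simplified numpy sketch of a rank-based pre-detector in the spirit of the method above: pixels whose values are extreme within their temporal neighbourhood across motion-compensated frames become defect candidates for the CNN stage. The stack depth and rank margin are illustrative assumptions.

```python
# Rank-based defect-candidate detector over a motion-compensated stack.
import numpy as np

def defect_candidates(frames: np.ndarray, margin: int = 1) -> np.ndarray:
    """frames: (T, H, W) grayscale stack; returns a boolean candidate
    mask for the middle frame."""
    t = frames.shape[0] // 2
    # Rank of each middle-frame pixel among its T temporal samples.
    ranks = np.sum(frames[t] > frames, axis=0)
    top = frames.shape[0] - 1 - margin
    return (ranks <= margin) | (ranks >= top)      # extreme-ranked pixels
```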

  11. Lifecycle, Iteration, and Process Automation with SMS Gateway

    Directory of Open Access Journals (Sweden)

    Fenny Fenny

    2015-12-01

    Full Text Available Producing a better quality software system requires an understanding of software quality indicators, through defect detection and automated testing. This paper aims to improve the design and automated testing process for an engine water pump in a drinking water plant. It proposes how software developers can improve the maintainability and reliability of an automated testing system and report abnormal states when an error occurs on the machine. The method in this paper uses the literature to explain best practices, together with a case study of a drinking water plant. Furthermore, this paper is expected to provide insights into efforts to better handle errors and to perform automated testing and monitoring on a machine.

  12. Automated tone grading of granite

    International Nuclear Information System (INIS)

    Catalina Hernández, J.C.; Fernández Ramón, G.

    2017-01-01

    The production of a natural stone processing plant is subject to the intrinsic variability of the stone blocks that constitute its raw material, which may cause problems of lack of uniformity in the visual appearance of the produced material that often triggers complaints from customers. The best way to tackle this problem is to classify the product according to its visual features, which is traditionally done by hand: an operator observes each and every piece that comes out of the production line and assigns it to the closest match among a number of predefined classes, taking into account visual features of the material such as colour, texture, grain, veins, etc. However, this manual procedure presents significant consistency problems, due to the inherent subjectivity of the classification performed by each operator, and the errors caused by their progressive fatigue. Attempts to employ automated sorting systems like the ones used in the ceramic tile industry have not been successful, as natural stone presents much higher variability than ceramic tiles. Therefore, it has been necessary to develop classification systems specifically designed for the treatment of the visual parameters that distinguish the different types of natural stone. This paper describes the details of a computer vision system developed by AITEMIN for the automatic classification of granite pieces according to their tone, which provides an integral solution to tone grading problems in the granite processing and marketing industry. The system has been designed to be easily trained by the end user, through the learning of the samples established as tone patterns by the user.

  13. DEFECTS SIMULATION OF ROLLING STRIP

    OpenAIRE

    Rudolf Mišičko; Tibor Kvačkaj; Martin Vlado; Lucia Gulová; Miloslav Lupták; Jana Bidulská

    2009-01-01

    Defects in continuous casting slabs can be developed or suppressed in principle by the rolling technology, depending especially on the sort, size and distribution of the primary defects, as well as on the rolling parameters used. The scope of the article is the observation of the behavior of artificial surface and subsurface defects (scores): unfilled defects (surface defects) and defects filled with oxides and casting powder (subsurface defects). The first phase of the hot rolling process was simulated with the DEFORM 3D software...

  14. Automated System Marketplace 1994.

    Science.gov (United States)

    Griffiths, Jose-Marie; Kertis, Kimberly

    1994-01-01

    Reports results of the 1994 Automated System Marketplace survey based on responses from 60 vendors. Highlights include changes in the library automation marketplace; estimated library systems revenues; minicomputer and microcomputer-based systems; marketplace trends; global markets and mergers; research needs; new purchase processes; and profiles…

  15. Automation benefits BWR customers

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    A description is given of the increasing use of automation at General Electric's Wilmington fuel fabrication plant. Computerised systems and automated equipment perform a large number of inspections, inventory and process operations, and new advanced systems are being continuously introduced to reduce operator errors and expand product reliability margins. (U.K.)

  16. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Full Text Available Currently, software engineers are increasingly turning to the option of automating functional tests, but they are not always successful in this endeavor. The reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  17. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and

  18. Identity Management Processes Automation

    Directory of Open Access Journals (Sweden)

    A. Y. Lavrukhin

    2010-03-01

    Full Text Available Implementation of identity management systems consists of two main parts, consulting and automation. The consulting part includes development of a role model and identity management processes description. The automation part is based on the results of consulting part. This article describes the most important aspects of IdM implementation.

  19. Work and Programmable Automation.

    Science.gov (United States)

    DeVore, Paul W.

    A new industrial era based on electronics and the microprocessor has arrived, an era that is being called intelligent automation. Intelligent automation, in the form of robots, replaces workers, and the new products, using microelectronic devices, require significantly less labor to produce than the goods they replace. The microprocessor thus…

  20. Library Automation in Pakistan.

    Science.gov (United States)

    Haider, Syed Jalaluddin

    1998-01-01

    Examines the state of library automation in Pakistan. Discusses early developments; financial support by the Netherlands Library Development Project (Pakistan); lack of automated systems in college/university and public libraries; usage by specialist libraries; efforts by private-sector libraries and the National Library in Pakistan; commonly used…

  1. Library Automation Style Guide.

    Science.gov (United States)

    Gaylord Bros., Liverpool, NY.

    This library automation style guide lists specific terms and names often used in the library automation industry. The terms and/or acronyms are listed alphabetically and each is followed by a brief definition. The guide refers to the "Chicago Manual of Style" for general rules, and a notes section is included for the convenience of individual…

  2. Planning for Office Automation.

    Science.gov (United States)

    Sherron, Gene T.

    1982-01-01

    The steps taken toward office automation by the University of Maryland are described. Office automation is defined and some types of word processing systems are described. Policies developed in the writing of a campus plan are listed, followed by a section on procedures adopted to implement the plan. (Author/MLW)

  3. The Automated Office.

    Science.gov (United States)

    Naclerio, Nick

    1979-01-01

    Clerical personnel may be able to climb career ladders as a result of office automation and expanded job opportunities in the word processing area. Suggests opportunities in an automated office system and lists books and periodicals on word processing for counselors and teachers. (MF)

  4. Automating the Small Library.

    Science.gov (United States)

    Skapura, Robert

    1987-01-01

    Discusses the use of microcomputers for automating school libraries, both for entire systems and for specific library tasks. Highlights include available library management software, newsletters that evaluate software, constructing an evaluation matrix, steps to consider in library automation, and a brief discussion of computerized card catalogs.…

  5. Quantum computing with defects

    Science.gov (United States)

    Varley, Joel

    2011-03-01

    The development of a quantum computer is contingent upon the identification and design of systems for use as qubits, the basic units of quantum information. One of the most promising candidates consists of a defect in diamond known as the nitrogen-vacancy (NV-1) center, since it is an individually-addressable quantum system that can be initialized, manipulated, and measured with high fidelity at room temperature. While the success of the NV-1 stems from its nature as a localized "deep-center" point defect, no systematic effort has been made to identify other defects that might behave in a similar way. We provide guidelines for identifying other defect centers with similar properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate systems. To elucidate these points, we compare electronic structure calculations of the NV-1 center in diamond with those of several deep centers in 4H silicon carbide (SiC). Using hybrid functionals, we report formation energies, configuration-coordinate diagrams, and defect-level diagrams to compare and contrast the properties of these defects. We find that the NCVSi-1 center in SiC, a structural analog of the NV-1 center in diamond, may be a suitable center with very different optical transition energies. We also discuss how the proposed criteria can be translated into guidelines to discover NV analogs in other tetrahedrally coordinated materials. This work was performed in collaboration with J. R. Weber, W. F. Koehl, B. B. Buckley, A. Janotti, C. G. Van de Walle, and D. D. Awschalom. This work was supported by ARO, AFOSR, and NSF.

  6. Advances in inspection automation

    Science.gov (United States)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano

    2013-01-01

    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone or impossible for humans to perform can now be automated using a form of drag-and-drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibrations and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  7. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  8. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  9. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and the archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service aiming to provide quality patient care with shorter turnaround times for an ever-increasing workload. This article discusses the various issues involved in the process.

  10. Structure defects in cementite

    International Nuclear Information System (INIS)

    Schmitt, Bernard

    1971-01-01

    After a presentation of experimental techniques (elaboration principles, elaboration techniques, and investigation techniques for cementite thin layers and iron-carbon massive alloys), the author of this research thesis reports studies of the cementite structure (interatomic distances, description and representation), of iron-carbon thin layers (structure, influence of silicon, defects), and of perfect and imperfect dislocations and plane defects in cementite. The author also reports hardness measurements and discusses the relationships between cementite and other iron carbides.

  11. Eisenmenger ventricular septal defect in a Humboldt penguin (Spheniscus humboldti).

    Science.gov (United States)

    Laughlin, D S; Ialeggio, D M; Trupkiewicz, J G; Sleeper, M M

    2016-09-01

    The Eisenmenger ventricular septal defect is an uncommon type of ventricular septal defect characterised in humans by a traditionally perimembranous ventricular septal defect, anterior deviation (cranioventral deviation in small animal patients) of the muscular outlet septum causing malalignment relative to the remainder of the muscular septum, and overriding of the aortic valve. This anomaly is reported infrequently in human patients and was identified in a 45-day-old Humboldt Penguin, Spheniscus humboldti, with signs of poor growth and a cardiac murmur. This case report describes the findings in this penguin and summarises the anatomy and classification of this cardiac anomaly. To the authors' knowledge this is the first report of an Eisenmenger ventricular septal defect in a veterinary patient. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...

  13. Using Genetic Algorithms for Texts Classification Problems

    Directory of Open Access Journals (Sweden)

    A. A. Shumeyko

    2009-01-01

    Full Text Available The avalanche of information produced by mankind has led to the concept of automated knowledge extraction, or Data Mining [1]. This direction encompasses a wide spectrum of problems, from fuzzy-set recognition to the creation of search engines. An important component of Data Mining is the processing of text information. Such problems rest on the concepts of classification and clustering [2]. Classification consists in determining the membership of some element (a text) in one of several previously created classes. Clustering means splitting a set of elements (texts) into clusters, whose number is defined by the localization of the elements of the given set in the vicinities of the natural centers of these clusters. The realization of a classification task should initially rest on given postulates, the most basic being the a priori information on the primary set of texts and a measure of affinity between elements and classes.

  14. Towards Automatic Classification of Wikipedia Content

    Science.gov (United States)

    Szymański, Julian

    Wikipedia - the Free Encyclopedia encounters the problem of proper classification of new articles every day. The process of assigning articles to categories is performed manually and is a time-consuming task. It requires knowledge about the Wikipedia structure that is beyond typical editor competence, which leads to human mistakes such as omitted or wrong assignments of articles to categories. The article presents the application of an SVM classifier for automatic classification of documents from The Free Encyclopedia. The classifier has been tested using two text representations: inter-document connections (hyperlinks) and word content. The results of the performed experiments, evaluated on hand-crafted data, show that the Wikipedia classification process can be partially automated. The proposed approach can be used for building a decision support system which suggests to editors the best categories for new content entered into Wikipedia.
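
    As a rough illustration of the word-content variant described above, the sketch below trains a linear SVM on TF-IDF vectors with scikit-learn. The sample articles, labels and category names are invented placeholders, not Wikipedia data, and the paper's actual feature pipeline may differ.

    ```python
    # Hedged sketch: word-content SVM document classification (toy data).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    articles = [
        "The electron is a subatomic particle with negative charge.",
        "A sonnet is a poetic form of fourteen lines.",
        "Quarks combine to form hadrons such as protons.",
        "Haiku is a short form of Japanese poetry.",
    ]
    categories = ["physics", "literature", "physics", "literature"]

    # TF-IDF turns each article into a sparse word-content vector;
    # the linear SVM learns a separating hyperplane per category.
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(articles, categories)
    print(clf.predict(["Neutrinos rarely interact with matter."]))
    ```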

  15. An ordinal classification approach for CTG categorization.

    Science.gov (United States)

    Georgoulas, George; Karvelis, Petros; Gavrilis, Dimitris; Stylios, Chrysostomos D; Nikolakopoulos, George

    2017-07-01

    Evaluation of the cardiotocogram (CTG) is a standard approach employed during pregnancy and delivery. However, its interpretation requires high-level expertise to decide whether the recording is Normal, Suspicious or Pathological. Therefore, a number of attempts have been made over the past three decades to develop sophisticated automated systems. These systems are usually (multiclass) classification systems that assign a category to the respective CTG. However, most of these systems do not take into consideration the natural ordering of the categories associated with CTG recordings. In this work, an algorithm that explicitly takes into consideration the ordering of CTG categories, based on a binary decomposition method, is investigated. The achieved results, using the C4.5 decision tree as the base classifier, prove that the ordinal classification approach is marginally better than the traditional multiclass classification approach, which utilizes the standard C4.5 algorithm, for several performance criteria.
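
    One common binary decomposition for ordered classes is the Frank-and-Hall scheme; the sketch below applies it with scikit-learn's CART trees standing in for C4.5 (which scikit-learn does not implement). All data are random placeholders, so this illustrates only the mechanics, not the paper's exact algorithm.

    ```python
    # Hedged sketch: ordinal classification via binary decomposition.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    classes = ["Normal", "Suspicious", "Pathological"]  # natural ordering
    rng = np.random.default_rng(0)
    X = rng.random((200, 8))                 # placeholder CTG features
    y = rng.integers(0, 3, size=200)         # placeholder ordinal labels

    # One binary tree per threshold, each estimating P(y > k).
    models = [DecisionTreeClassifier(max_depth=4).fit(X, (y > k).astype(int))
              for k in range(len(classes) - 1)]

    def predict_ordinal(X_new):
        p_gt0 = models[0].predict_proba(X_new)[:, 1]   # P(y > Normal)
        p_gt1 = models[1].predict_proba(X_new)[:, 1]   # P(y > Suspicious)
        # Recombine threshold probabilities into per-class scores.
        probs = np.column_stack([1 - p_gt0, p_gt0 - p_gt1, p_gt1])
        return probs.argmax(axis=1)

    print([classes[i] for i in predict_ordinal(X[:5])])
    ```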

  16. A neural network for noise correlation classification

    Science.gov (United States)

    Paitz, Patrick; Gokhberg, Alexey; Fichtner, Andreas

    2018-02-01

    We present an artificial neural network (ANN) for the classification of ambient seismic noise correlations into two categories, suitable and unsuitable for noise tomography. By using only a small manually classified data subset for network training, the ANN allows us to classify large data volumes with low human effort and to encode the valuable subjective experience of data analysts that cannot be captured by a deterministic algorithm. Based on a new feature extraction procedure that exploits the wavelet-like nature of seismic time-series, we efficiently reduce the dimensionality of noise correlation data, still keeping relevant features needed for automated classification. Using global- and regional-scale data sets, we show that classification errors of 20 per cent or less can be achieved when the network training is performed with as little as 3.5 per cent and 16 per cent of the data sets, respectively. Furthermore, the ANN trained on the regional data can be applied to the global data, and vice versa, without a significant increase of the classification error. An experiment in which four students manually classified the data revealed that the classification error they would assign to each other is substantially larger than the classification error of the ANN (>35 per cent). This indicates that reproducibility would be hampered more by human subjectivity than by imperfections of the ANN.
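
    A minimal sketch of the overall idea, with assumptions throughout: coarse coefficients of a discrete wavelet decomposition stand in for the paper's wavelet-based feature extraction, synthetic traces stand in for noise correlations, and a small scikit-learn MLP stands in for the authors' ANN.

    ```python
    # Hedged sketch: wavelet dimensionality reduction + ANN classifier.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def wavelet_features(trace, wavelet="db4", level=5):
        # Keep only the coarse approximation coefficients: a compact,
        # wavelet-domain summary of the correlation waveform.
        return pywt.wavedec(trace, wavelet, level=level)[0]

    rng = np.random.default_rng(0)
    traces = rng.standard_normal((200, 1024))   # placeholder correlations
    labels = rng.integers(0, 2, size=200)       # 1 = suitable for tomography

    X = np.array([wavelet_features(t) for t in traces])
    ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(X, labels)
    print(ann.score(X, labels))                 # resubstitution accuracy
    ```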

  17. Satellite spot defect reduction on 193-nm contact hole lithography using photo cell monitor methodology

    Science.gov (United States)

    Boulenger, Caroline; Caze, Jean-Luc; Mihet, Mihaela

    2006-03-01

    The goal of overall process and yield improvement requires a litho defect management and reduction strategy, which includes several layers of tactical methods. Defects may be identified through a number of schemes, including After-Develop Inspection (ADI), which was the primary tool in this study in our 0.13 μm fab. Defects on 193nm contact hole lithography were identified using a KLA-Tencor 2351 High Resolution Imaging Patterned Wafer Inspection System, coupled with in-line Automatic Defect Classification (iADC). The optimized inspection was used at the core of the Photo Cell Monitor (PCM) to isolate critical defect types. PCM uses the fab's standard production resist coat, exposure, develop, and rinse process, with the focus and exposure optimized for resist on silicon test wafers. Through Pareto analysis of 193nm defects, one defect type, called satellite spot, was targeted for immediate improvement and monitoring. This paper describes the work done in improving the litho defectivity. The work includes optimization of inspection and classification parameters and the Design of Experiments (DOE) to identify the source (including the interaction between the resist and developer) and contributing factors. Several process modifications were identified which resulted in lowered defectivity up to complete suppression of satellite spot defects, although at higher process complexity and cost. This work was also done in conjunction with resist suppliers, which used the same inspection to confirm the problem at their facilities. The work with the suppliers continues with the goal of identifying a less expensive permanent solution.

  18. Quantum computing with defects.

    Science.gov (United States)

    Weber, J R; Koehl, W F; Varley, J B; Janotti, A; Buckley, B B; Van de Walle, C G; Awschalom, D D

    2010-05-11

    Identifying and designing physical systems for use as qubits, the basic units of quantum information, are critical steps in the development of a quantum computer. Among the possibilities in the solid state, a defect in diamond known as the nitrogen-vacancy (NV(-1)) center stands out for its robustness: its quantum state can be initialized, manipulated, and measured with high fidelity at room temperature. Here we describe how to systematically identify other deep center defects with similar quantum-mechanical properties. We present a list of physical criteria that these centers and their hosts should meet and explain how these requirements can be used in conjunction with electronic structure theory to intelligently sort through candidate defect systems. To illustrate these points in detail, we compare electronic structure calculations of the NV(-1) center in diamond with those of several deep centers in 4H silicon carbide (SiC). We then discuss the proposed criteria for similar defects in other tetrahedrally coordinated semiconductors.

  19. Defects in semiconductor nanostructures

    Indian Academy of Sciences (India)

    sizes were less than 100 Si atoms due to computational limitations. An interesting parallel is that current first principles calculations alluded to in §5 are size hampered for similar reasons. These 'defect molecule' calculations were probably the first studies in SN. We believe that a perusal of this 'ancient' scientific literature.

  20. Production of point defects

    International Nuclear Information System (INIS)

    Zuppiroli, L.

    1975-01-01

    Vacancies at thermodynamic equilibrium and the annealing of these defects are studied first, after which electron irradiations are dealt with. The displacement threshold energy concept is introduced. Part three concerns heavy ion and neutron irradiations. Displacement cascades and the thermal spike concept are discussed. [fr]

  1. Fetal abdominal wall defects.

    Science.gov (United States)

    Prefumo, Federico; Izzi, Claudia

    2014-04-01

    The most common fetal abdominal wall defects are gastroschisis and omphalocele, both with a prevalence of about three in 10,000 births. Prenatal ultrasound has a high sensitivity for these abnormalities already at the time of the first-trimester nuchal scan. Major unrelated defects are associated with gastroschisis in about 10% of cases, whereas omphalocele is associated with chromosomal or genetic abnormalities in a much higher proportion of cases. Challenges in the management of gastroschisis are related to the prevention of late intrauterine death, and the prediction and treatment of complex forms. With omphalocele, the main difficulty is the exclusion of associated conditions, not all diagnosed prenatally. An outline of the postnatal treatment of abdominal wall defects is given. Other rarer forms of abdominal wall defects are pentalogy of Cantrell, the omphalocele-bladder exstrophy-imperforate anus-spina bifida (OEIS) complex, prune-belly syndrome, body stalk anomaly, and bladder and cloacal exstrophy; they deserve multidisciplinary counselling and management. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Defects in flexoelectric solids

    Science.gov (United States)

    Mao, Sheng; Purohit, Prashant K.

    2015-11-01

    A solid is said to be flexoelectric when it polarizes in proportion to strain gradients. Since strain gradients are large near defects, we expect the flexoelectric effect to be prominent there and decay away at distances much larger than a flexoelectric length scale. Here, we quantify this expectation by computing displacement, stress and polarization fields near defects in flexoelectric solids. For point defects we recover some well known results from strain gradient elasticity and non-local piezoelectric theories, but with different length scales in the final expressions. For edge dislocations we show that the electric potential is a maximum in the vicinity of the dislocation core. We also estimate the polarized line charge density of an edge dislocation in an isotropic flexoelectric solid which is in agreement with some measurements in ice. We perform an asymptotic analysis of the crack tip fields in flexoelectric solids and show that our results share some features from solutions in strain gradient elasticity and piezoelectricity. We also compute the energy release rate for cracks using simple crack face boundary conditions and use them in classical criteria for crack growth to make predictions. Our analysis can serve as a starting point for more sophisticated analytic and computational treatments of defects in flexoelectric solids which are gaining increasing prominence in the field of nanoscience and nanotechnology.

  3. Semiconductor Nanowires: Defects Update

    Science.gov (United States)

    Kavanagh, Karen L.

    2008-05-01

    Structural defects commonly observed in semiconducting nanowires by electron microscopy will be reviewed and their origins discussed. Their effects on electrical and optical properties will be illustrated with examples from GaSb, InAs, and ZnSe nanowires grown by MOCVD and MBE.

  4. The defect sorting in zircaloy fuel cladding tubes using eddy current signals

    International Nuclear Information System (INIS)

    Sekine, Kazuyoshi; Nitta, Kazuhiko; Tsukui, Kazushige.

    1988-01-01

    The Fourier descriptors method has been used in eddy current signal processing to sort defects in the wall of zircaloy cladding. The Fourier descriptor coefficients of the eddy current Lissajous pattern from defects contain information which describes the shape and character of the signals, and therefore their algebraic properties can be used for classifying defect signals. This paper describes a simple procedure for defect characterization using some complex sorting parameters derived from the Fourier coefficients of the Lissajous pattern. The signal classification algorithm developed is based on the geometrical representation of the complex sorting parameters in the two-dimensional complex plane. The proposed procedure has been applied to defect sorting of zircaloy cladding tubes having several kinds of artificial defects, and the experimental results were successful except for very small defects. (author)
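
    To make the idea concrete, the toy example below treats a Lissajous pattern as a complex trace z(t) = x(t) + i y(t) and takes its Fourier coefficients as shape descriptors; the ratio of the -1 and +1 harmonics is one possible rotation- and scale-invariant sorting parameter. The synthetic ellipse and the specific parameter choice are assumptions, not the authors' exact scheme.

    ```python
    # Hedged sketch: Fourier descriptors of a Lissajous pattern.
    import numpy as np

    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    # Synthetic elliptical Lissajous trace z(t) = x(t) + i*y(t).
    z = 1.0 * np.exp(1j * t) + 0.3 * np.exp(-1j * t)

    coeffs = np.fft.fft(z) / len(z)          # complex Fourier descriptors
    c_plus, c_minus = coeffs[1], coeffs[-1]  # +1 and -1 harmonics

    # Normalizing by the dominant harmonic removes rotation and scale,
    # leaving an algebraic parameter that characterizes the trace shape.
    sorting_parameter = c_minus / c_plus
    print(abs(sorting_parameter), np.angle(sorting_parameter))
    ```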

  5. Safety assessment for In-service Pressure Bending Pipe Containing Incomplete Penetration Defects

    Science.gov (United States)

    Wang, M.; Tang, P.; Xia, J. F.; Ling, Z. W.; Cai, G. Y.

    2017-12-01

    Incomplete penetration is a common defect in the welded joints of pressure pipes, yet the safety classification of pressure pipes containing incomplete penetration defects according to current periodic inspection regulations is rather conservative. To reduce unnecessary repair of incomplete penetration defects, a scientific and applicable safety assessment method for pressure pipes is needed. In this paper, a stress analysis model of the pipe system was established for an in-service pressure bending pipe containing incomplete penetration defects. A local finite element model was set up to analyze the stress distribution at the defect location and to perform stress linearization. The applicability of two assessment methods, the simplified assessment and the U-factor assessment method, to incomplete penetration defects located in pressure bending pipes was then analyzed. The results can provide technical support for the safety assessment of complex pipelines in the future.

  6. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors for the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  7. On-Site School Library Automation: Automation Anywhere with Laptops.

    Science.gov (United States)

    Gunn, Holly; Oxner, June

    2000-01-01

    Four years after the Halifax Regional School Board was formed through amalgamation, over 75% of its school libraries were automated. On-site automation with laptops was a quicker, more efficient way of automating than sending a shelf list to the Technical Services Department. The Eastern Shore School Library Automation Project was a successful…

  8. [Biological downsizing : Acetabular defect reconstruction in revision total hip arthroplasty].

    Science.gov (United States)

    Koob, S; Scheidt, S; Randau, T M; Gathen, M; Wimmer, M D; Wirtz, D C; Gravius, S

    2017-02-01

    Periacetabular bony defects remain a great challenge in revision total hip arthroplasty. After assessment and classification of the defect and selection of a suitable implant, primary stable fixation and sufficient biological reconstitution of a sustainable bone stock are essential for long-term success in acetabular revision surgery. Biological defect reconstruction aims at down-sizing periacetabular defects for later revision surgeries. In the field of biological augmentation several methods are currently available. Autologous transplants feature a profound osseointegrative capacity; however, limitations such as volume restrictions and secondary complications at the donor site have to be considered. Structural allografts show little weight-bearing potential in the long term and high failure rates. In clinical practice, the use of spongious chips implanted via the impaction bone grafting technique in combination with antiprotrusio cages for the management of contained defects has shown promising long-term results. Nevertheless, when dealing with craniolateral acetabular and dorsal column defects, the additional implantation of macroporous metal implants or augments should be considered, since biological augmentation has shown little clinical success in these particular cases. This article provides an overview of the currently available biological augmentation methods for periacetabular defects. Due to the limitations of autologous and allogeneic bone transplants in terms of size and availability, the emerging field of innovative implantable tissue engineering constructs gains interest and will also be discussed in this article.

  9. SAW Classification Algorithm for Chinese Text Classification

    OpenAIRE

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

    Considering the explosive growth of data, the increasing amount of text data places ever higher requirements on the performance of text categorization, which existing classification methods cannot satisfy. Based on a study of existing text classification technology and semantics, this paper puts forward a Chinese-text-oriented SAW (Structural Auxiliary Word) classification algorithm. The algorithm uses the special space effect of Chinese text where words...

  10. Defect Detection of Velvet Bathrobe Fabrics and Grading with Demerit Point Systems

    Directory of Open Access Journals (Sweden)

    Deniz Mutlu Ala

    2015-12-01

    Full Text Available Fabric defects that may occur at different stages of woven terry fabric production require quality control and classification of fabrics as first or second grade before shipment to the customer. In this study, before the shipping of two different terry fabric orders, defects were detected by inspection of fabric rolls on a lighted control board by experienced experts. The number and dimensions of the defects seen during the inspection were noted on quality control charts. Detected defects were defined and scored according to different demerit point systems. In this way, the fabric rolls were classified according to the demerit point systems before being shipped to garment enterprises. Disputes can be avoided with a classification made by a demerit point system on which the manufacturer and the customer have agreed.
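
    As a concrete illustration, the sketch below tallies penalty points in the spirit of the widely used four-point system; the length thresholds follow that system's common convention, while the roll area normalization and the acceptance limit are illustrative assumptions rather than the systems compared in the study.

    ```python
    # Hedged sketch: demerit-point tally for fabric grading (toy data).
    def demerit_points(defect_length_cm):
        # Four-point convention: longer defects earn more penalty points,
        # capped at 4 (holes would score the maximum outright).
        if defect_length_cm <= 7.5:
            return 1
        if defect_length_cm <= 15.0:
            return 2
        if defect_length_cm <= 23.0:
            return 3
        return 4

    defects_cm = [3.0, 12.0, 30.0, 8.0]   # lengths noted during inspection
    total = sum(demerit_points(d) for d in defects_cm)

    roll_area_m2 = 120.0                   # assumed roll size
    points_per_100m2 = 100 * total / roll_area_m2
    grade = "first" if points_per_100m2 <= 20 else "second"  # assumed limit
    print(total, points_per_100m2, grade)
    ```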

  11. Defect Detection and Segmentation Framework for Remote Field Eddy Current Sensor Data

    Directory of Open Access Journals (Sweden)

    Raphael Falque

    2017-10-01

    Full Text Available Remote-Field Eddy-Current (RFEC technology is often used as a Non-Destructive Evaluation (NDE method to prevent water pipe failures. By analyzing the RFEC data, it is possible to quantify the corrosion present in pipes. Quantifying the corrosion involves detecting defects and extracting their depth and shape. For large sections of pipelines, this can be extremely time-consuming if performed manually. Automated approaches are therefore well motivated. In this article, we propose an automated framework to locate and segment defects in individual pipe segments, starting from raw RFEC measurements taken over large pipelines. The framework relies on a novel feature to robustly detect these defects and a segmentation algorithm applied to the deconvolved RFEC signal. The framework is evaluated using both simulated and real datasets, demonstrating its ability to efficiently segment the shape of corrosion defects.
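
    A much-simplified sketch of the two stages described above, with assumptions throughout: a synthetic 1-D RFEC trace is deconvolved with a Wiener filter using an assumed sensor response, and defect regions are segmented by a simple threshold in place of the paper's learned feature and segmentation algorithm.

    ```python
    # Hedged sketch: deconvolution + thresholding on a synthetic trace.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    truth = np.zeros(n)
    truth[120:130] = 1.0                     # two simulated wall-loss defects
    truth[300:320] = 0.6

    k = np.exp(-np.linspace(-3, 3, 31) ** 2) # assumed sensor response
    kernel = np.zeros(n)
    kernel[:31] = k
    kernel = np.roll(kernel, -15)            # center the kernel at index 0

    # Forward model: circular convolution plus measurement noise.
    H = np.fft.fft(kernel)
    signal = np.real(np.fft.ifft(np.fft.fft(truth) * H))
    signal += 0.05 * rng.standard_normal(n)

    # Wiener deconvolution: a stabilized inverse filter.
    G = np.conj(H) / (np.abs(H) ** 2 + 1e-2)
    restored = np.real(np.fft.ifft(np.fft.fft(signal) * G))

    defect_mask = restored > 0.5 * restored.max()  # crude segmentation
    print(np.flatnonzero(defect_mask))
    ```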

  12. Automated Single Cell Data Decontamination Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Tennessen, Kristin [Lawrence Berkeley National Lab. (LBNL), Walnut Creek, CA (United States). Dept. of Energy Joint Genome Inst.; Pati, Amrita [Lawrence Berkeley National Lab. (LBNL), Walnut Creek, CA (United States). Dept. of Energy Joint Genome Inst.

    2014-03-21

    Recent technological advancements in single-cell genomics have encouraged the classification and functional assessment of microorganisms from a wide span of the biosphere's phylogeny [1,2]. Environmental processes of interest to the DOE, such as bioremediation and carbon cycling, can be elucidated through the genomic lens of these unculturable microbes. However, contamination can occur at various stages of the single-cell sequencing process. Contaminated data can lead to wasted time and effort on meaningless analyses, inaccurate or erroneous conclusions, and pollution of public databases. A fully automated decontamination tool is necessary to prevent these instances and increase the throughput of the single-cell sequencing process.

  13. Automated reliability assessment for spectroscopic redshift measurements

    Science.gov (United States)

    Jamal, S.; Le Brun, V.; Le Fèvre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.

    2018-03-01

    Context. Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥10^6) that will require fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. A fundamental element in these pipelines is to associate to each galaxy redshift measurement a quality, or reliability, estimate. Aim. In this work, we introduce a new approach to automate the spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function. Methods: We propose to rephrase the spectroscopic redshift estimation into a Bayesian framework, in order to incorporate all sources of information and uncertainties related to the redshift estimation process and produce a redshift posterior probability density function (PDF). To automate the assessment of a reliability flag, we exploit key features in the redshift posterior PDF and machine learning algorithms. Results: As a working example, public data from the VIMOS VLT Deep Survey is exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe different types of redshift PDFs, but due to the subjective definition of these flags (classification accuracy 58%), we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy 98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels. Conclusions: Through the development of a methodology in which a system can build its own experience to assess the quality of a parameter, we are able to set a preliminary basis of an automated reliability assessment for
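
    The unsupervised partitioning step can be pictured with a toy example: summarize each posterior PDF by a few descriptors (here peak height and dispersion) and cluster them. The synthetic Gaussian PDFs, the two features and the cluster count are all assumptions made for illustration.

    ```python
    # Hedged sketch: clustering redshift-PDF descriptors (synthetic data).
    import numpy as np
    from sklearn.cluster import KMeans

    z = np.linspace(0.0, 2.0, 400)
    dz = z[1] - z[0]
    rng = np.random.default_rng(1)

    def pdf_features(pdf):
        mean = np.sum(z * pdf) * dz
        spread = np.sqrt(np.sum((z - mean) ** 2 * pdf) * dz)
        return [pdf.max(), spread]          # sharpness and dispersion

    pdfs = []
    for _ in range(300):                    # synthetic posterior PDFs
        mu, sigma = rng.uniform(0.2, 1.8), rng.uniform(0.005, 0.3)
        p = np.exp(-0.5 * ((z - mu) / sigma) ** 2)
        pdfs.append(p / (np.sum(p) * dz))   # normalize to unit area

    X = np.array([pdf_features(p) for p in pdfs])
    partition = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print(partition.labels_[:10])           # data-driven reliability classes
    ```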

  14. Ontology Building Using Classification Rules and Discovered Concepts

    Directory of Open Access Journals (Sweden)

    Gorskis Henrihs

    2015-12-01

    Full Text Available Building an ontology is a difficult and time-consuming task. In order to make this task easier and faster, some automatic methods can be employed. This paper examines the feasibility of using the rules and concepts discovered during the classification tree building process of the C4.5 algorithm, in a completely automated way, for the purpose of building an ontology from data. By building the ontology directly from continuous data, concepts and relations can be discovered without specific knowledge about the domain. This paper also examines how this method reproduces the classification capabilities of the classification tree within an ontology using concepts and class expression axioms.
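
    The intermediate artifact such a method builds on, the root-to-leaf rules of a classification tree, can be inspected directly. A hedged sketch using scikit-learn's CART trees in place of C4.5 and the Iris data as a stand-in domain:

    ```python
    # Hedged sketch: extracting human-readable rules from a decision tree.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(iris.data, iris.target)

    # Each root-to-leaf path printed below is a candidate concept
    # definition, e.g. "petal width (cm) <= 0.80 -> class 0 (setosa)",
    # ready to be turned into an ontology class expression.
    print(export_text(tree, feature_names=list(iris.feature_names)))
    ```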

  15. Automated electron microprobe

    International Nuclear Information System (INIS)

    Thompson, K.A.; Walker, L.R.

    1986-01-01

    The Plant Laboratory at the Oak Ridge Y-12 Plant has recently obtained a Cameca MBX electron microprobe with a Tracor Northern TN5500 automation system. This allows full stage and spectrometer automation and digital beam control. The capabilities of the system include qualitative and quantitative elemental microanalysis for all elements above and including boron in atomic number, high- and low-magnification imaging and processing, elemental mapping and enhancement, and particle size, shape, and composition analyses. Very low magnification, quantitative elemental mapping using stage control (which is of particular interest) has been accomplished along with automated size, shape, and composition analysis over a large relative area

  16. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may imply quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibility of errors. The synthesis and organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only sufficiently and correctly be turned to profit, however, if the application of these equipment techniques is further improved and if its volume is tallied with a definite etc. (orig.) [de]

  17. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure. The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge on how to go about your configuration management.

  18. Classification of ASASSN-18dl as a type Ia supernova

    Science.gov (United States)

    Pessi, P.; Quirola, J.; Navarro, G.; Dennefeld, M.; Ferrero, L.; Sani, E.; Schmidtobreick, L.

    2018-02-01

    We report the classification of the supernova candidate ASASSN-18dl which was discovered as a V = 17.6 mag transient by the All Sky Automated Survey for SuperNovae (ASAS-SN, Shappee et al. 2014) on 2018-02-21.25 UT. The discovery is reported in ATel #11343 (Stone et al. 2018).

  19. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David

    2013-12-01

    Nanoimprint Lithography (NIL) is a promising technology for low-cost and large-scale nanostructure fabrication. This technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns, and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. The results are independent of the device which captures the image (optical, confocal or electron microscope). The use of numerical images makes it possible to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.
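
    A bare-bones numerical sketch of such defect detection, under simplifying assumptions: a synthetic grating image is compared against its ideal reference, and connected regions of large residual are counted as defects. Real microscope images would require registration and normalization first.

    ```python
    # Hedged sketch: image-difference defect detection on a toy grating.
    import numpy as np
    from scipy import ndimage

    x = np.arange(512)
    reference = np.tile((np.sin(2 * np.pi * x / 16) > 0).astype(float),
                        (512, 1))               # ideal 1-D grating image

    inspected = reference.copy()
    inspected[200:210, 100:120] = 1.0           # simulated incomplete filling

    # Residual map: anything deviating strongly from the ideal grating
    # is a defect candidate.
    mask = np.abs(inspected - reference) > 0.5
    labels, n_defects = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n_defects + 1))
    print(n_defects, sizes)                     # defect count and areas
    ```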

  20. A Framework for Automated Marmoset Vocalization Detection And Classification

    Science.gov (United States)

    2016-09-08


  1. Automated Classification of Martian Morphology Using a Terrain Fingerprinting Method

    NARCIS (Netherlands)

    Koenders, R.; Lindenbergh, R.C.; Zegers, T.E.

    2009-01-01

    The planet Mars has a relatively short human exploration history, while the size of the scientific community studying Mars is also smaller than its Earth equivalent. On the other hand the interest in Mars is large, basically because it is the planet in the solar system most similar to Earth. Several

  2. Clever Toolbox - the Art of Automated Genre Classification

    DEFF Research Database (Denmark)

    2005-01-01

    Auto-regressive coefficients (ARs) are extracted along with the mean and gain to get a single (30-dimensional) feature vector on the time-scale of 1 second. These features have been used because they have performed well in a previous study (Meng, Ahrendt, Larsen (2005)). Linear regression (or single-layer linear NN...

  3. Advanced Automated Detection Analysis and Classification of Cracks in Pavement

    OpenAIRE

    Scott, Dennis

    2014-01-01

    Functional Session 5: Pavement Management, moderated by Akyiaa Hosten. This presentation was held at the Pavement Evaluation 2014 Conference, which took place from September 15-18, 2014 in Blacksburg, Virginia. Presentation only.

  4. Automated real-time classification of functional states: the significance of individual tuning stage

    Directory of Open Access Journals (Sweden)

    Galatenko, Vladimir V.

    2013-09-01

    Full Text Available Automated classification of a human functional state is an important problem, with applications including stress resistance evaluation, supervision over operators of critical infrastructure, teaching and phobia therapy. Such classification is particularly efficient in systems for teaching and phobia therapy that include a virtual reality module and provide the capability for dynamic adjustment of task complexity. In this paper, a method for automated real-time binary classification of human functional states (calm wakefulness vs. stress) based on the discrete wavelet transform of EEG data is considered. It is shown that an individual tuning stage of the classification algorithm (a stage that allows the classification to draw on certain individual peculiarities, using very short individual learning samples) significantly increases classification reliability. The experimental study that proved this assertion was based on a specialized scenario in which individuals solved the task of detecting objects with given properties in a dynamic set of flying objects.
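
    A schematic version of this pipeline, with all data synthetic: DWT band energies as EEG features, a generic binary classifier, and an individual tuning step that refits on a very short labelled sample from the target person. Simple sample pooling stands in here for whatever tuning scheme the authors actually use.

    ```python
    # Hedged sketch: DWT features + binary classifier + individual tuning.
    import numpy as np
    import pywt
    from sklearn.linear_model import LogisticRegression

    def dwt_features(window, wavelet="db4", level=4):
        # Energy in each wavelet band as a compact EEG descriptor.
        coeffs = pywt.wavedec(window, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs])

    rng = np.random.default_rng(0)
    X_group = np.array([dwt_features(w)
                        for w in rng.standard_normal((300, 512))])
    y_group = rng.integers(0, 2, 300)   # 0 = calm wakefulness, 1 = stress

    clf = LogisticRegression(max_iter=1000).fit(X_group, y_group)

    # Individual tuning: pool a very short labelled sample from the new
    # user with the group data and refit (a simple stand-in scheme).
    X_ind = np.array([dwt_features(w)
                      for w in rng.standard_normal((20, 512))])
    y_ind = rng.integers(0, 2, 20)
    clf_tuned = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_group, X_ind]), np.concatenate([y_group, y_ind]))
    print(clf_tuned.predict(X_ind[:5]))
    ```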

  5. Classification of multiple sclerosis lesions using adaptive dictionary learning.

    Science.gov (United States)

    Deshpande, Hrishikesh; Maurel, Pierre; Barillot, Christian

    2015-12-01

    This paper presents a sparse representation and an adaptive dictionary learning based method for automated classification of multiple sclerosis (MS) lesions in magnetic resonance (MR) images. Manual delineation of MS lesions is a time-consuming task, requiring neuroradiology experts to analyze huge volumes of MR data. This, in addition to the high intra- and inter-observer variability, necessitates automated MS lesion classification methods. Among the many image representation models and classification methods that can be used for this purpose, we investigate the use of sparse modeling. In recent years, sparse representation has evolved as a tool for modeling data using a few basis elements of an over-complete dictionary and has found applications in many image processing tasks including classification. We propose a supervised classification approach by learning dictionaries specific to the lesions and individual healthy brain tissues, which include white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). The size of the dictionaries learned for each class plays a major role in data representation but it is an even more crucial element in the case of competitive classification. Our approach adapts the size of the dictionary for each class, depending on the complexity of the underlying data. The algorithm is validated using 52 multi-sequence MR images acquired from 13 MS patients. The results demonstrate the effectiveness of our approach in MS lesion classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
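
    In condensed form, per-class dictionary classification can look like the sketch below: one dictionary is learned per tissue class, and a patch is assigned to the class whose dictionary reconstructs it with the smallest sparse-coding error. The patch data, dictionary sizes and sparsity level are placeholders, and the paper's adaptive size selection is not reproduced.

    ```python
    # Hedged sketch: per-class dictionaries + reconstruction-error rule.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    rng = np.random.default_rng(0)
    classes = ["WM", "GM", "CSF", "lesion"]
    patches = {c: rng.standard_normal((300, 64))   # stand-in 8x8 patches
               for c in classes}

    dicts = {
        c: MiniBatchDictionaryLearning(
            n_components=32, transform_algorithm="omp",
            transform_n_nonzero_coefs=5, random_state=0).fit(P)
        for c, P in patches.items()
    }

    def classify(patch):
        # Assign the class whose dictionary gives the smallest sparse
        # reconstruction error for this patch.
        errors = {}
        for c, d in dicts.items():
            code = d.transform(patch.reshape(1, -1))
            errors[c] = np.linalg.norm(patch - code @ d.components_)
        return min(errors, key=errors.get)

    print(classify(rng.standard_normal(64)))
    ```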

  6. Automation Interface Design Development

    Data.gov (United States)

    National Aeronautics and Space Administration — Our research makes its contributions at two levels. At one level, we addressed the problems of interaction between humans and computers/automation in a particular...

  7. Automated Vehicles Symposium 2014

    CERN Document Server

    Beiker, Sven; Road Vehicle Automation 2

    2015-01-01

    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicle Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  8. Fixed automated spray technology.

    Science.gov (United States)

    2011-04-19

    This research project evaluated the construction and performance of Boschung's Fixed Automated Spray Technology (FAST) system. The FAST system automatically sprays de-icing material on the bridge when icing conditions are about to occur. The FA...

  9. Automated Vehicles Symposium 2015

    CERN Document Server

    Beiker, Sven

    2016-01-01

    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book comprises contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015 which was jointly organized by the Association of Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  10. Automation synthesis modules review

    International Nuclear Information System (INIS)

    Boschi, S.; Lodi, F.; Malizia, C.; Cicoria, G.; Marengo, M.

    2013-01-01

    The introduction of 68Ga-labelled tracers has changed the diagnostic approach to neuroendocrine tumours, and the availability of a reliable, long-lived 68Ge/68Ga generator has been at the basis of the development of 68Ga radiopharmacy. The huge increase in clinical demand, the impact of regulatory issues and the careful radioprotection of operators have pushed for extensive automation of the production process. The development of automated systems for 68Ga radiochemistry, different engineering and software strategies and the post-processing of the eluate are discussed, along with the impact of automation on regulatory compliance. - Highlights: ► Generator availability and robust chemistry have driven the wide diffusion of 68Ga radiopharmaceuticals. ► Different technological approaches for 68Ga radiopharmaceuticals are discussed. ► Post-processing of the generator eluate and the evolution to cassette-based systems were the major issues in automation. ► The impact of regulations on technological development is also considered.

  11. Ventricular Septal Defect (For Teens)

    Science.gov (United States)

    ... have a heart defect should avoid getting body piercings. Piercing increases the possibility that bacteria can get into ... damage heart valves. If you're considering a piercing and you have a heart defect, talk to ...

  12. Congenital Heart Defects (For Parents)

    Science.gov (United States)

    ... diagnosed until the teen years — or even adulthood. Newborn Screening Newborns in the U.S. are screened at ...

  13. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  14. Automated Lattice Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  15. Automated ISMS control auditability

    OpenAIRE

    Suomu, Mikko

    2015-01-01

    This thesis focuses on researching a possible reference model for automated ISMS’s (Information Security Management System) technical control auditability. The main objective was to develop a generic framework for automated compliance status monitoring of the ISO27001:2013 standard which could be re‐used in any ISMS system. The framework was tested with Proof of Concept (PoC) empirical research in a test infrastructure which simulates the framework target deployment environment. To fulfi...

  16. Marketing automation supporting sales

    OpenAIRE

    Sandell, Niko

    2016-01-01

    The past couple of decades has been a time of major changes in marketing. Digitalization has become a permanent part of marketing and has at the same time enabled efficient collection of data. Personalization and customization of content are playing a crucial role in marketing when new customers are acquired. This has also created a need for automation to facilitate the distribution of targeted content. As a result of successful marketing automation more information about the customers is gathered ...

  17. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  18. Automated lattice data generation

    Directory of Open Access Journals (Sweden)

    Ayyar Venkitesh

    2018-01-01

    Full Text Available The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done “by hand”. In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.

  19. Defects in Quantum Computers.

    Science.gov (United States)

    Gardas, Bartłomiej; Dziarmaga, Jacek; Zurek, Wojciech H; Zwolak, Michael

    2018-03-14

    The shift of interest from general purpose quantum computers to adiabatic quantum computing or quantum annealing calls for a broadly applicable and easy to implement test to assess how quantum or adiabatic is a specific hardware. Here we propose such a test based on an exactly solvable many body system-the quantum Ising chain in transverse field-and implement it on the D-Wave machine. An ideal adiabatic quench of the quantum Ising chain should lead to an ordered broken symmetry ground state with all spins aligned in the same direction. An actual quench can be imperfect due to decoherence, noise, flaws in the implemented Hamiltonian, or simply too fast to be adiabatic. Imperfections result in topological defects: Spins change orientation, kinks punctuating ordered sections of the chain. The number of such defects quantifies the extent by which the quantum computer misses the ground state, and is, therefore, imperfect.
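
    Counting the defects from a readout is straightforward; the toy snippet below counts kinks (neighbouring anti-aligned spins) in a final spin configuration, here a random stand-in for actual annealer output.

    ```python
    # Hedged sketch: kink counting in an Ising-chain readout (toy data).
    import numpy as np

    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=1000)   # placeholder annealer readout

    # A kink is a domain wall: two neighbouring spins pointing opposite
    # ways. Zero kinks would mean a perfect adiabatic quench.
    kinks = int(np.sum(spins[:-1] != spins[1:]))
    print(kinks)
    ```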

  20. Reconstructions of eyelid defects

    Directory of Open Access Journals (Sweden)

    Nirmala Subramanian

    2011-01-01

    Full Text Available Eyelids are the protective mechanism of the eyes. The upper and lower eyelids have been formed by Nature for their specific functions. Eyelid defects are encountered in congenital anomalies, trauma, and after excision of neoplasms. Reconstruction should address both functional and cosmetic aspects, and knowledge of the basic anatomy of the lids is a must. There are different techniques for reconstructing the upper eyelid, lower eyelid, and medial and lateral canthal areas; many times, the defects involve more than one area. A reconstructed lid needs a lining similar to the conjunctiva, a cover of skin, and a middle layer to give firmness and support. It is important to understand the availability of various tissues for reconstruction. One layer should have the vascularity to support the other layer, which can be a graft. A proper plan and its execution are very important.

  1. Automated carotid artery intima layer regional segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Meiburger, Kristen M; Molinari, Filippo [Biolab, Department of Electronics, Politecnico di Torino, Torino (Italy); Acharya, U Rajendra [Department of ECE, Ngee Ann Polytechnic (Singapore); Saba, Luca [Department of Radiology, A.O.U. di Cagliari, Cagliari (Italy); Rodrigues, Paulo [Department of Computer Science, Centro Universitario da FEI, Sao Paulo (Brazil); Liboni, William [Neurology Division, Gradenigo Hospital, Torino (Italy); Nicolaides, Andrew [Vascular Screening and Diagnostic Centre, London (United Kingdom); Suri, Jasjit S, E-mail: filippo.molinari@polito.it [Fellow AIMBE, CTO, Global Biomedical Technologies Inc., CA (United States)

    2011-07-07

    Evaluation of the carotid artery wall is essential for the assessment of a patient's cardiovascular risk or for the diagnosis of cardiovascular pathologies. This paper presents a new, completely user-independent algorithm called carotid artery intima layer regional segmentation (CAILRS, a class of AtheroEdge™ systems), which automatically segments the intima layer of the far wall of the carotid ultrasound artery based on mean shift classification applied to the far wall. Further, the system extracts the lumen-intima and media-adventitia borders in the far wall of the carotid artery. Our new system is characterized and validated by comparing CAILRS borders with the manual tracings carried out by experts. The new technique is also benchmarked with a semi-automatic technique based on a first-order absolute moment edge operator (FOAM) and compared to our previous edge-based automated methods such as CALEX (Molinari et al 2010 J. Ultrasound Med. 29 399-418, 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CULEX (Delsanto et al 2007 IEEE Trans. Instrum. Meas. 56 1265-74, Molinari et al 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CALSFOAM (Molinari et al Int. Angiol. (at press)), and CAUDLES-EF (Molinari et al J. Digit. Imaging (at press)). Our multi-institutional database consisted of 300 longitudinal B-mode carotid images. In comparison to semi-automated FOAM, CAILRS showed the IMT bias of -0.035 ± 0.186 mm while FOAM showed -0.016 ± 0.258 mm. Our IMT was slightly underestimated with respect to the ground truth IMT, but showed uniform behavior over the entire database. CAILRS outperformed all the four previous automated methods. The system's figure of merit was 95.6%, which was lower than that of the semi-automated method (98%), but higher than that of the other automated techniques.

  2. Automated carotid artery intima layer regional segmentation

    Science.gov (United States)

    Meiburger, Kristen M.; Molinari, Filippo; Rajendra Acharya, U.; Saba, Luca; Rodrigues, Paulo; Liboni, William; Nicolaides, Andrew; Suri, Jasjit S.

    2011-07-01

    Evaluation of the carotid artery wall is essential for the assessment of a patient's cardiovascular risk or for the diagnosis of cardiovascular pathologies. This paper presents a new, completely user-independent algorithm called carotid artery intima layer regional segmentation (CAILRS, a class of AtheroEdge™ systems), which automatically segments the intima layer of the far wall of the carotid ultrasound artery based on mean shift classification applied to the far wall. Further, the system extracts the lumen-intima and media-adventitia borders in the far wall of the carotid artery. Our new system is characterized and validated by comparing CAILRS borders with the manual tracings carried out by experts. The new technique is also benchmarked with a semi-automatic technique based on a first-order absolute moment edge operator (FOAM) and compared to our previous edge-based automated methods such as CALEX (Molinari et al 2010 J. Ultrasound Med. 29 399-418, 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CULEX (Delsanto et al 2007 IEEE Trans. Instrum. Meas. 56 1265-74, Molinari et al 2010 IEEE Trans. Ultrason. Ferroelectr. Freq. Control 57 1112-24), CALSFOAM (Molinari et al Int. Angiol. (at press)), and CAUDLES-EF (Molinari et al J. Digit. Imaging (at press)). Our multi-institutional database consisted of 300 longitudinal B-mode carotid images. In comparison to semi-automated FOAM, CAILRS showed an IMT bias of -0.035 ± 0.186 mm while FOAM showed -0.016 ± 0.258 mm. Our IMT was slightly underestimated with respect to the ground truth IMT, but showed uniform behavior over the entire database. CAILRS outperformed all four previous automated methods. The system's figure of merit was 95.6%, which was lower than that of the semi-automated method (98%), but higher than that of the other automated techniques.
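
    As an illustration only (not the authors' AtheroEdge implementation), the mean shift step described above can be sketched in Python with scikit-learn; the ROI array, the (row, intensity) feature choice, and the bandwidth are all assumptions:

    ```python
    # Minimal sketch: mean-shift clustering of a far-wall region of interest.
    # Hypothetical features and bandwidth; a real system tunes these carefully.
    import numpy as np
    from sklearn.cluster import MeanShift

    def cluster_far_wall(roi: np.ndarray) -> np.ndarray:
        """Label ROI pixels by mean shift over (row, intensity) features."""
        rows, _ = np.indices(roi.shape)
        feats = np.column_stack([rows.ravel(), roi.ravel()])
        labels = MeanShift(bandwidth=10.0).fit_predict(feats)
        return labels.reshape(roi.shape)

    # Toy usage: a synthetic two-layer "wall" separates into two clusters.
    roi = np.vstack([np.full((5, 8), 50.0), np.full((5, 8), 200.0)])
    print(np.unique(cluster_far_wall(roi)))
    ```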

  3. Exploring Deep Learning and Transfer Learning for Colonic Polyp Classification

    Directory of Open Access Journals (Sweden)

    Eduardo Ribeiro

    2016-01-01

    Full Text Available Recently, Deep Learning, especially through Convolutional Neural Networks (CNNs), has been widely used to enable the extraction of highly representative features. This is done among the network layers by filtering and selecting the features, which are then used in the last fully connected layers for pattern classification. However, CNN training for automated endoscopic image classification still poses a challenge due to the lack of large, publicly available annotated databases. In this work we explore Deep Learning for the automated classification of colonic polyps, using different configurations for training CNNs from scratch (or full training) and distinct architectures of pretrained CNNs, tested on 8 HD-endoscopic image databases acquired using different modalities. We compare our results with some commonly used features for colonic polyp classification, and the good results suggest that features learned by CNNs trained from scratch and the “off-the-shelf” CNN features can be highly relevant for automated classification of colonic polyps. Moreover, we also show that the combination of classical features and “off-the-shelf” CNN features can be a good approach to further improve the results.
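
    A minimal sketch of the “off-the-shelf” CNN-features idea, assuming a recent torchvision and scikit-learn; the backbone choice, preprocessing constants, and the image/label placeholders are assumptions, not the authors' exact setup:

    ```python
    # Pretrained CNN as a fixed feature extractor feeding a classical classifier.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from sklearn.svm import SVC

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
    backbone.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_features(pil_images):
        """Map a list of PIL endoscopic images to 512-d CNN feature vectors."""
        batch = torch.stack([preprocess(img) for img in pil_images])
        return backbone(batch).numpy()

    # Placeholder training data; an annotated polyp database would go here.
    # clf = SVC(kernel="rbf").fit(extract_features(train_images), train_labels)
    ```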

  4. Benign gastric filling defect

    Energy Technology Data Exchange (ETDEWEB)

    Oh, K. K.; Lee, Y. H.; Cho, O. K.; Park, C. Y. [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    1979-06-15

    Gastric lesions are a common source of complaints among Orientals; however, evaluation of gastric symptoms and laboratory examination offer little specific aid in the diagnosis of gastric diseases. Thus roentgenography of the gastrointestinal tract is one of the most reliable methods for detailed diagnosis. On double contrast study of the stomach, a gastric filling defect is mostly caused by malignant gastric cancer; however, other benign lesions can cause similar pictures, and these can be successfully treated by surgery. 66 cases of benign causes of gastric filling defect, verified pathologically by endoscopy or surgery during the last 7 years at Yonsei University College of Medicine, Severance Hospital, were analyzed from this point of view. The characteristic radiological picture of each disease is discussed with a view to precise radiologic diagnosis. 1. Of the 66 cases, there were 52 cases of benign gastric tumor, 10 cases of gastric varices, 5 cases of gastric bezoar, 5 cases of corrosive gastritis, 3 cases of granulomatous disease and one case of gastric hematoma. 2. The most frequent benign tumor was adenomatous polyp (35/42), followed by leiomyoma (4/42); the others were one case each of carcinoid, neurofibroma and cyst. 3. Benign adenomatous polyps were characteristically relatively small with a smooth surface, and large benign polyps were frequently type IV lesions with a stalk. 4. Submucosal tumors such as leiomyoma needed differential diagnosis from polypoid malignant cancer; the characteristic point of differentiation was a well-circumscribed, smooth-margined filling defect without definite mucosal destruction on the surface. 5. Gastric varices showed multiple lobulated filling defects, especially in the gastric fundus, that changed size and shape with respiration and patient posture. Similar varicose lesions in the esophagus and a history of liver disease were helpful for easier diagnosis. 6. Gastric bezoar showed a well defined movable mass

  5. Benign gastric filling defect

    International Nuclear Information System (INIS)

    Oh, K. K.; Lee, Y. H.; Cho, O. K.; Park, C. Y.

    1979-01-01

    Gastric lesions are a common source of complaints among Orientals; however, evaluation of gastric symptoms and laboratory examination offer little specific aid in the diagnosis of gastric diseases. Thus roentgenography of the gastrointestinal tract is one of the most reliable methods for detailed diagnosis. On double contrast study of the stomach, a gastric filling defect is mostly caused by malignant gastric cancer; however, other benign lesions can cause similar pictures, and these can be successfully treated by surgery. 66 cases of benign causes of gastric filling defect, verified pathologically by endoscopy or surgery during the last 7 years at Yonsei University College of Medicine, Severance Hospital, were analyzed from this point of view. The characteristic radiological picture of each disease is discussed with a view to precise radiologic diagnosis. 1. Of the 66 cases, there were 52 cases of benign gastric tumor, 10 cases of gastric varices, 5 cases of gastric bezoar, 5 cases of corrosive gastritis, 3 cases of granulomatous disease and one case of gastric hematoma. 2. The most frequent benign tumor was adenomatous polyp (35/42), followed by leiomyoma (4/42); the others were one case each of carcinoid, neurofibroma and cyst. 3. Benign adenomatous polyps were characteristically relatively small with a smooth surface, and large benign polyps were frequently type IV lesions with a stalk. 4. Submucosal tumors such as leiomyoma needed differential diagnosis from polypoid malignant cancer; the characteristic point of differentiation was a well-circumscribed, smooth-margined filling defect without definite mucosal destruction on the surface. 5. Gastric varices showed multiple lobulated filling defects, especially in the gastric fundus, that changed size and shape with respiration and patient posture. Similar varicose lesions in the esophagus and a history of liver disease were helpful for easier diagnosis. 6. Gastric bezoar showed a well defined movable mass

  6. Enhanced defect of interest [DOI] monitoring by utilizing sensitive inspection and ADRTrue SEM review

    Science.gov (United States)

    Kirsch, Remo; Zeiske, Ulrich; Shabtay, Saar; Beyer, Mirko; Yerushalmi, Liran; Goshen, Oren

    2011-03-01

    As semiconductor process design rules continue to shrink, it becomes more and more difficult for optical inspection tools to separate true defects from nuisance. Monitoring defects of interest (DOI) therefore becomes a real challenge (Figure 1). This phenomenon occurs because the signal received from real defects gets lower while noise levels remain almost the same, resulting in a high inspection nuisance rate, which jeopardizes the ability to provide a meaningful, true-defect Pareto. A non-representative defect Pareto is a real obstacle to reliable process monitoring (Figure 4). Traditionally, inspection tool recipes were optimized to keep the data load at a manageable level and provide defect maps with a ~10% nuisance rate, but as defects of interest get smaller with design-rule shrinkage, this requirement results in a painful compromise in detection sensitivity. Inspection is usually followed by defect review and classification using a scanning electron microscope (SEM); classification is done manually and is performed on a small sample of the inspection defect map, usually 50~60 randomly selected locations, owing to time and manual-resource limitations. In the approach described in this paper, the inspection tool recipe is optimized for sensitivity rather than a low nuisance rate (i.e., detect all DOI while accepting a higher nuisance rate). Inspection results with a high nuisance rate introduce new challenges for SEM review methodology and tools. This paper describes a new approach that enhances process monitoring quality, the result of collaborative work between the Process Diagnostic & Control Business Unit of Applied Materials® and GLOBALFOUNDRIES®, utilizing Applied Materials ADRTrue™ and SEMVision™ capabilities. The study shows that the new approach reveals new defect types in the Pareto, and improves the ability to
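
    For illustration, the legacy sampling scheme criticized above (reviewing only ~50-60 randomly chosen locations out of a much larger defect map) amounts to nothing more than the following; the identifiers, counts, and seed are placeholders:

    ```python
    # Toy sketch of traditional low-sampling SEM review site selection.
    import random

    def sample_for_review(defect_ids, k=55, seed=1):
        """Pick k random defect sites for manual SEM review (legacy scheme)."""
        rng = random.Random(seed)
        return rng.sample(defect_ids, min(k, len(defect_ids)))

    # e.g. an inspection map with 1200 detected events, many of them nuisance:
    print(sample_for_review(list(range(1200)))[:10])
    ```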

  7. PASTEC: an automatic transposable element classification tool.

    Science.gov (United States)

    Hoede, Claire; Arnoux, Sandie; Moisset, Mark; Chaumier, Timothée; Inizan, Olivier; Jamilloux, Véronique; Quesneville, Hadi

    2014-01-01

    The classification of transposable elements (TEs) is a key step towards deciphering their potential impact on the genome. However, this process is often based on manual sequence inspection by TE experts. With the wealth of genomic sequences now available, this task requires automation, making it accessible to most scientists. We propose a new tool, PASTEC, which classifies TEs by searching for structural features and similarities. This tool outperforms currently available software for TE classification. The main innovation of PASTEC is the search for HMM profiles, which is useful for inferring the classification of unknown TEs on the basis of conserved functional domains of the proteins. In addition, PASTEC is the only tool providing an exhaustive spectrum of possible classifications to the order level of the Wicker hierarchical TE classification system. It can also automatically classify other repeated elements, such as SSRs (Simple Sequence Repeats), rDNA or potential repeated host genes. Finally, the output of this new tool is designed to facilitate manual curation by providing biologists with all the evidence accumulated for each TE consensus. PASTEC is available as a REPET module or standalone software (http://urgi.versailles.inra.fr/download/repet/REPET_linux-x64-2.2.tar.gz). It requires a Unix-like system. There are two standalone versions: one of which is parallelized (requiring Sun Grid Engine or Torque), and the other of which is not.

  8. PASTEC: an automatic transposable element classification tool.

    Directory of Open Access Journals (Sweden)

    Claire Hoede

    Full Text Available SUMMARY: The classification of transposable elements (TEs) is a key step towards deciphering their potential impact on the genome. However, this process is often based on manual sequence inspection by TE experts. With the wealth of genomic sequences now available, this task requires automation, making it accessible to most scientists. We propose a new tool, PASTEC, which classifies TEs by searching for structural features and similarities. This tool outperforms currently available software for TE classification. The main innovation of PASTEC is the search for HMM profiles, which is useful for inferring the classification of unknown TEs on the basis of conserved functional domains of the proteins. In addition, PASTEC is the only tool providing an exhaustive spectrum of possible classifications to the order level of the Wicker hierarchical TE classification system. It can also automatically classify other repeated elements, such as SSRs (Simple Sequence Repeats), rDNA or potential repeated host genes. Finally, the output of this new tool is designed to facilitate manual curation by providing biologists with all the evidence accumulated for each TE consensus. AVAILABILITY: PASTEC is available as a REPET module or standalone software (http://urgi.versailles.inra.fr/download/repet/REPET_linux-x64-2.2.tar.gz). It requires a Unix-like system. There are two standalone versions: one of which is parallelized (requiring Sun Grid Engine or Torque), and the other of which is not.

  9. Demonstration of automated robotic workcell for hazardous waste characterization

    International Nuclear Information System (INIS)

    Holliday, M.; Dougan, A.; Gavel, D.; Gustaveson, D.; Johnson, R.; Kettering, B.; Wilhelmsen, K.

    1993-02-01

    An automated robotic workcell to classify hazardous waste stream items with previously unknown characteristics has been designed, tested and demonstrated. The object attributes being quantified are radiation signature, metal content, and object orientation and volume. The multi-sensor information is used to make segregation decisions and to perform automatic grasping of objects. The workcell control program uses an off-line programming system by Cimetrix Inc. as a server for both simulation control and actual hardware control of the workcell. This paper discusses the overall workcell layout, sensor specifications, workcell supervisory control, 2D vision-based automated grasp planning, and object classification algorithms
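
    A toy sketch of the multi-sensor segregation decision described here; the attribute names, units, and thresholds are invented for illustration and would come from sensor calibration in a real workcell:

    ```python
    # Rule-based waste-stream decision fusing two of the quantified attributes.
    from dataclasses import dataclass

    @dataclass
    class ObjectReading:
        gamma_cps: float       # radiation signature, counts per second (assumed)
        metal_fraction: float  # metal content estimate in [0, 1] (assumed)

    def segregate(reading: ObjectReading) -> str:
        """Classify an object for segregation from fused sensor attributes."""
        if reading.gamma_cps > 100.0:
            return "radioactive"
        if reading.metal_fraction > 0.5:
            return "metal"
        return "non-metal"

    print(segregate(ObjectReading(gamma_cps=150.0, metal_fraction=0.2)))
    ```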

  10. Classification of cultivated plants.

    NARCIS (Netherlands)

    Brandenburg, W.A.

    1986-01-01

    Agricultural practice demands principles for classification, starting from the basal entity in cultivated plants: the cultivar. In establishing biosystematic relationships between wild, weedy and cultivated plants, the species concept needs re-examination. Combining of botanic classification, based

  11. Classification of Pemphigus

    Directory of Open Access Journals (Sweden)

    Ayşe Akman

    2008-08-01

    Full Text Available The clinical classification of pemphigus is not yet complete. The classic classification is based on clinical and histologic features. Owing to progress in understanding the pathogenesis of pemphigus, current classifications are based on accumulating analyses of antigen molecules and immunoglobulin subclasses and on etiologic aspects of pemphigus, as well as on the clinical and histologic features. The aim of this paper is to review the classification of pemphigus.

  12. Augmenting SCA project management and automation framework

    Science.gov (United States)

    Iyapparaja, M.; Sharma, Bhanupriya

    2017-11-01

    In daily life we need to keep records of things in order to manage them more efficiently. Our company manufactures semiconductor chips and sells them to buyers. Sometimes it manufactures the entire product, sometimes only part of it, and sometimes it sells an intermediary product obtained during manufacturing; for better management of the entire process, track records must be kept for all the entities involved. Materials and Methods: To address this problem, a framework was developed for project maintenance and for automation testing. The project management framework provides an architecture which supports managing the project by maintaining records of all requirements, the test cases created for testing each unit of the software, and defects raised over past years; through this, the quality of the project can be maintained. Results: The automation framework provides an architecture which supports the development and implementation of automation test scripts for the software testing process. Conclusion: To implement the project management framework, HP's Application Lifecycle Management product is used, which provides a central repository for maintaining the project.

  13. Thermal properties of defective fullerene

    Science.gov (United States)

    Li, Jian; Zheng, Dong-Qin; Zhong, Wei-Rong

    2016-09-01

    We have investigated the thermal conductivity of defective fullerene (C60) by using the nonequilibrium molecular dynamics (MD) method. It is found that the thermal conductivity of C60 with one defect is lower than the thermal conductivity of perfect C60. However, double defects in C60 have either positive or negative influence on the thermal conductivity, which depends on the positions of the defects. The phonon spectra of perfect and defective C60 are also provided to give corresponding supports. Our results can be extended to long C60 chains, which is helpful for the thermal management of C60.
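
    In a nonequilibrium MD study of this kind, the conductivity is typically extracted from Fourier's law, kappa = J / (A * dT/dx). A toy post-processing sketch with placeholder numbers (not values from the paper):

    ```python
    # Thermal conductivity from Fourier's law; all inputs are assumed values
    # that would normally come from the NEMD heat flux and temperature profile.
    heat_flux = 5.0e-9        # J/s carried across the sample (assumed)
    area = 1.0e-18            # cross-sectional area in m^2 (assumed, ~1 nm^2)
    temp_gradient = 1.0e9     # K/m from the fitted temperature profile (assumed)

    kappa = heat_flux / (area * temp_gradient)
    print(f"thermal conductivity ~ {kappa:.3f} W/(m K)")
    ```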

  14. Dynamic time warping and sparse representation classification for birdsong phrase classification using limited training data.

    Science.gov (United States)

    Tan, Lee N; Alwan, Abeer; Kossan, George; Cody, Martin L; Taylor, Charles E

    2015-03-01

    Annotation of phrases in birdsongs can be helpful to behavioral and population studies. To reduce the need for manual annotation, an automated birdsong phrase classification algorithm for limited data is developed. Limited data occur because of limited recordings or the existence of rare phrases. In this paper, classification of up to 81 phrase classes of Cassin's Vireo is performed using one to five training samples per class. The algorithm involves dynamic time warping (DTW) and two passes of sparse representation (SR) classification. DTW improves the similarity between training and test phrases from the same class in the presence of individual bird differences and phrase segmentation inconsistencies. The SR classifier works by finding a sparse linear combination of training feature vectors from all classes that best approximates the test feature vector. When the class decisions from DTW and the first pass SR classification are different, SR classification is repeated using training samples from these two conflicting classes. Compared to DTW, support vector machines, and an SR classifier without DTW, the proposed classifier achieves the highest classification accuracies of 94% and 89% on manually segmented and automatically segmented phrases, respectively, from unseen Cassin's Vireo individuals, using five training samples per class.
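
    A compact sketch of the two building blocks, assuming NumPy; feature extraction and phrase segmentation are taken as given, and the sparse code is approximated with a ridge-regularized solve rather than a true l1 solver to keep the example short:

    ```python
    # DTW alignment cost between feature sequences, plus a class-wise
    # reconstruction-residual decision in the spirit of SR classification.
    import numpy as np

    def dtw_cost(a, b):
        """Dynamic time warping cost between sequences a (m,d) and b (n,d)."""
        m, n = len(a), len(b)
        D = np.full((m + 1, n + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[m, n]

    def sr_classify(x, train_vecs, train_labels, lam=0.01):
        """Pick the class whose training vectors best reconstruct x under a
        regularized (here ridge, for brevity; SR proper uses l1) code."""
        A = np.stack(train_vecs, axis=1)  # columns = training feature vectors
        w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ x)
        residuals = {}
        for c in set(train_labels):
            mask = np.array([l == c for l in train_labels], dtype=float)
            residuals[c] = np.linalg.norm(x - A @ (w * mask))
        return min(residuals, key=residuals.get)

    # Usage idea: warp each test phrase toward its nearest training phrase
    # with dtw_cost, then decide with sr_classify on the warped features.
    ```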

  15. PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING

    International Nuclear Information System (INIS)

    Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.; McEwen, Jason D.

    2016-01-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
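
    A minimal sketch of the second stage (features in, BDT out, AUC as the metric), using scikit-learn's gradient boosting as a stand-in BDT; the feature matrix and labels below are random placeholders, not DES light curves:

    ```python
    # Two-stage pipeline, stage 2: classify extracted light-curve features.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))    # placeholder SALT2/wavelet features
    y = rng.integers(0, 2, size=1000)  # placeholder Ia / non-Ia labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    bdt = GradientBoostingClassifier(n_estimators=200).fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]))
    ```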

  16. Simulation based optimization on automated fibre placement process

    Science.gov (United States)

    Lei, Shi

    2018-02-01

    In this paper, a software-simulation-based method (Autodesk TruPlan & TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared with respect to geometrically different parts. Major manufacturing data have been taken into consideration prior to tool path generation to achieve a high success rate of manufacturing.

  17. Point defects in nickel

    International Nuclear Information System (INIS)

    Peretto, P.

    1969-01-01

    The defects in electron-irradiated nickel (20 deg. K) or neutron-irradiated nickel (28 deg. K) are studied by simultaneous analysis using the magnetic after-effect, electron microscopy and electrical resistivity recovery. We use zone-refined nickel (99.999 per cent) which, for some experiments, is alloyed with a small amount of iron (for example 0.1 per cent Fe). The temperature-dependent electrical recovery may be divided into four stages. The sub-stages I B (31 deg. K), I C (42 deg. K), I D (up to 57 deg. K) and I E (62 deg. K) of stage I are due to the disappearance of single interstitials into vacancies. The interstitial defect has a split configuration with a migration energy of about 0.15 eV. In the close pair which disappears in stage I B the interstitial is found to be in a third-neighbour position, whilst in stage I D it is near the direction from the vacancy. In stage I E there is no longer any interaction between the interstitial and the vacancy. Stage II is due to more complicated interstitial defects: di-interstitials for stage II B (84 deg. K) and larger and larger interstitial loops for the following sub-stages. The loops may be seen by electron microscopy. Impurities can play the role of nucleation centers for the loops. Stages III A (370 deg. K) and III B (376 deg. K) are due to two types of di-vacancies. During stage IV (410 deg. K) the single vacancies migrate. Vacancy-type loops and interstitial-type loops grow concurrently and disappear at about 800 deg. K, as observed by electron microscopy. (author) [fr

  18. Single ventricle cardiac defect

    International Nuclear Information System (INIS)

    Eren, B.; Turkmen, N.; Fedakar, R.; Cetin, V.

    2010-01-01

    A single ventricle heart is a rare cardiac abnormality with a single ventricular chamber, involving diverse functional and physiological defects. Our case is a ten-month-old baby boy who died shortly after admission to the hospital with vomiting and diarrhoea. Autopsy findings revealed cyanosis of the fingernails and ears. Internal examination revealed a large heart, weighing 60 grams, with a single ventricle lacking a septum and its upper membranous part. Single ventricle is a rare pathology; hence, this paper aims to discuss the case from a medico-legal point of view. (author)

  19. Automation model of sewerage rehabilitation planning.

    Science.gov (United States)

    Yang, M D; Su, T C

    2006-01-01

    The major steps of sewerage rehabilitation include inspection of the sewerage, assessment of structural conditions, computation of structural condition grades, and determination of rehabilitation methods and materials. Conventionally, sewerage rehabilitation planning relies on experts with professional backgrounds, and is tedious and time-consuming. This paper proposes an automation model that plans optimal sewerage rehabilitation strategies for a sewer system by integrating image processing, clustering technology, optimization, and visualization display. Firstly, image processing techniques, such as wavelet transformation and co-occurrence feature extraction, were employed to extract various characteristics of structural failures from CCTV inspection images. Secondly, a classification neural network was established to automatically interpret the structural conditions by comparing the extracted features with the typical failures in a databank. Then, to achieve optimal rehabilitation efficiency, a genetic algorithm was used to determine appropriate rehabilitation methods and substitution materials for the pipe sections at risk of malfunction or even collapse. Finally, the result from the automation model can be visualized in a geographic information system in which essential information on the sewer system and sewerage rehabilitation plans is graphically displayed. For demonstration, the automation model of optimal sewerage rehabilitation planning was applied to a sewer system in east Taichung, Chinese Taiwan.
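
    The co-occurrence step can be sketched with scikit-image (0.19 or later for the graycomatrix spelling); the offsets, angles, and property list are illustrative choices, not the paper's exact configuration:

    ```python
    # Grey-level co-occurrence features from a CCTV frame, as classifier inputs.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def cooccurrence_features(gray_img: np.ndarray) -> np.ndarray:
        """Contrast/homogeneity/energy/correlation at a few offsets and angles."""
        img8 = (255 * gray_img / max(gray_img.max(), 1e-9)).astype(np.uint8)
        glcm = graycomatrix(img8, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

    # Usage: feats = cooccurrence_features(frame), then feed feats to the
    # classification network described above.
    ```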

  20. Butter and butter oil classification by PTR-MS

    NARCIS (Netherlands)

    Ruth, van S.M.; Koot, A.H.; Akkermans, W.; Araghipour, N.; Rozijn, M.; Baltussen, M.A.H.; Wisthaler, A.; Mark, T.D.; Frankhuizen, R.

    2008-01-01

    The potential of proton transfer reaction mass spectrometry (PTR-MS) as a tool for classification of milk fats was evaluated in relation to quality and authentication issues. Butters and butter oils were subjected to heat and off-flavouring treatments in order to create sensorially defective