WorldWideScience

Sample records for automatic term identification

  1. Automatic term identification for bibliometric mapping

    NARCIS (Netherlands)

    N.J.P. van Eck (Nees Jan); L. Waltman (Ludo); E.C.M. Noyons (Ed); R.K. Buter (Reindert)

    2010-01-01

    A term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subject…

  2. Automatic modal identification of cable-supported bridges instrumented with a long-term monitoring system

    Science.gov (United States)

    Ni, Y. Q.; Fan, K. Q.; Zheng, G.; Chan, T. H. T.; Ko, J. M.

    2003-08-01

    An automatic modal identification program is developed for continuous extraction of modal parameters of three cable-supported bridges in Hong Kong which are instrumented with a long-term monitoring system. The program employs the Complex Modal Indication Function (CMIF) algorithm to identify modal properties from continuous ambient vibration measurements in an on-line manner. By using the LabVIEW graphical programming language, the software realizes the algorithm in Virtual Instrument (VI) style. The applicability and implementation issues of the developed software are demonstrated by using one-year measurement data acquired from 67 channels of accelerometers deployed on the cable-stayed Ting Kau Bridge. With the continuously identified results, normal variability of modal vectors caused by varying environmental and operational conditions is observed. Such observation is very helpful for selection of appropriate measured modal vectors for structural health monitoring applications.
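
    A minimal sketch (not the bridge-monitoring software itself) of the Complex Mode Indication Function idea referred to above: at each frequency line the cross-spectral matrix of the measurement channels is decomposed by SVD, and peaks in the first singular value indicate candidate modes. Channel count, sampling rate and the synthetic data below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import csd

fs = 50.0                       # sampling rate [Hz], assumed
n_ch, n_samp = 4, 2 ** 14
rng = np.random.default_rng(0)

# Stand-in for ambient vibration records: noise plus one synthetic "mode" at 2.5 Hz.
mode = np.sin(2 * np.pi * 2.5 * np.arange(n_samp) / fs)
acc = 0.5 * rng.standard_normal((n_ch, n_samp)) + np.outer([1.0, 0.8, -0.5, 0.3], mode)

# Cross-spectral density matrix G[f, i, j] between every pair of channels.
nper = 1024
freqs, _ = csd(acc[0], acc[0], fs=fs, nperseg=nper)
G = np.zeros((len(freqs), n_ch, n_ch), dtype=complex)
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=nper)

# CMIF: singular values of G at each frequency line; peaks of the first
# singular value curve indicate candidate natural frequencies.
cmif = np.linalg.svd(G, compute_uv=False)          # shape (n_freq, n_ch)
peak_idx = np.argmax(cmif[:, 0])
print(f"dominant spectral peak near {freqs[peak_idx]:.2f} Hz")
```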

  3. Automatic Kurdish Dialects Identification

    Directory of Open Access Journals (Sweden)

    Hossein Hassani

    2016-02-01

    Full Text Available Automatic dialect identification is a necessary language technology for processing multi-dialect languages in which the dialects are linguistically far from each other. In particular, this becomes crucial where the dialects are mutually unintelligible. Therefore, to perform computational activities on these languages, the system needs to identify the dialect that is the subject of the process. The Kurdish language encompasses various dialects. It is written using several different scripts, and it lacks a standard orthography. This situation makes Kurdish dialect identification more interesting and more necessary, both from the research and from the application perspectives. In this research, we have applied a classification method, based on supervised machine learning, to identify the dialects of Kurdish texts. The research has focused on two widely spoken and most dominant Kurdish dialects, namely Kurmanji and Sorani. The approach could be applied to the other Kurdish dialects as well. The method is also applicable to languages which are similar to Kurdish in their dialectal diversity and differences.

  4. Automatic Identification of Metaphoric Utterances

    Science.gov (United States)

    Dunn, Jonathan Edwin

    2013-01-01

    This dissertation analyzes the problem of metaphor identification in linguistic and computational semantics, considering both manual and automatic approaches. It describes a manual approach to metaphor identification, the Metaphoricity Measurement Procedure (MMP), and compares this approach with other manual approaches. The dissertation then…

  5. Automatic sign language identification

    OpenAIRE

    Gebre, B.G.; Wittenburg, P.; Heskes, T.

    2013-01-01

    We propose a Random-Forest based sign language identification system. The system uses low-level visual features and is based on the hypothesis that sign languages have varying distributions of phonemes (hand-shapes, locations and movements). We evaluated the system on two sign languages -- British SL and Greek SL, both taken from a publicly available corpus, called Dicta Sign Corpus. Achieved average F1 scores are about 95% - indicating that sign languages can be identified with high accuracy...

  6. 2010 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2010 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  7. 2012 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2012 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  8. 2014 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2014 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  9. 2009 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2009 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  10. Automatic identification of mass spectra

    International Nuclear Information System (INIS)

    Several approaches to preprocessing and comparison of low resolution mass spectra have been evaluated by various test methods related to library search. It is shown that there is a clear correlation between the nature of any contamination of a spectrum, the basic principle of the transformation or distance measure, and the performance of the identification system. The identification of functionality from low resolution spectra has also been evaluated using several classification methods. It is shown that there is an upper limit to the success of this approach, but also that this can be improved significantly by using a very limited amount of additional information. 10 refs

  11. An efficient automatic firearm identification system

    Science.gov (United States)

    Chuan, Zun Liang; Liong, Choong-Yeun; Jemain, Abdul Aziz; Ghani, Nor Azura Md.

    2014-06-01

    An automatic firearm identification system (AFIS) is in high demand in forensic ballistics to replace the traditional approach, which uses a comparison microscope and is relatively complex and time consuming. Thus, several AFIS have been developed for commercial and testing purposes. However, those AFIS are still unable to overcome some of the drawbacks of the traditional firearm identification approach. The goal of this study is to introduce another efficient and effective AFIS. A total of 747 firing pin impression images captured from five different pistols of the same make and model are used to evaluate the proposed AFIS. It was demonstrated that the proposed AFIS is capable of producing a firearm identification accuracy rate of over 95.0% with an execution time of less than 0.35 seconds per image.

  12. Abbreviation definition identification based on automatic precision estimates

    OpenAIRE

    Kim Won; Comeau Donald C; Sohn Sunghwan; Wilbur W John

    2008-01-01

    Abstract Background The rapid growth of biomedical literature presents challenges for automatic text processing, and one of the challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. ...

  13. Automatic language identification using deep neural networks

    OpenAIRE

    López-Moreno, Ignacio; González-Domínguez, Javier; Oldrich, Plchot; David R Martínez; González-Rodríguez, Joaquín

    2014-01-01

    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. I. López-Moreno, J. González-Domínguez, P. Oldrich, D. R. Martínez, J. González-Rodríguez, "Automatic language identification ...

  14. On the advances of automatic modal identification for SHM

    Directory of Open Access Journals (Sweden)

    Cardoso Rharã

    2015-01-01

    Full Text Available Structural health monitoring of civil infrastructures has great practical importance for engineers, owners and stakeholders. Numerous research studies have been carried out using long-term monitoring, for instance the Rio-Niterói Bridge in Brazil, the former Z24 Bridge in Switzerland, the Millau Bridge in France, among others. In fact, some structures are monitored 24/7 in order to supply dynamic measurements that can be used for the identification of structural problems such as the presence of cracks, excessive vibration, damage or even to perform a quite extensive structural evaluation concerning reliability and life cycle. The outputs of such an analysis, commonly termed modal identification, are the so-called modal parameters, i.e. natural frequencies, damping ratios and mode shapes. Therefore, the development and validation of tools for the automatic identification of modal parameters based on the structural responses during normal operation is fundamental, as the success of subsequent damage detection algorithms depends on the accuracy of the modal parameter estimates. The proposed methodology uses the data-driven stochastic subspace identification method (SSI-DATA), which is then complemented by a novel procedure developed for the automatic analysis of the stabilization diagrams provided by the SSI-DATA method. The efficiency of the proposed approach is attested via experimental investigations on a simply supported beam tested in the laboratory and on a motorway bridge.
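
    A hedged sketch of the kind of stabilization check that automatic analysis of SSI stabilization diagrams relies on: a pole at model order n is accepted as "stable" when a pole at order n-1 matches it within tolerances on frequency, damping and mode shape (MAC). The tolerances and toy values below are illustrative assumptions, not the thresholds used in the record above.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

def is_stable(f1, d1, phi1, f0, d0, phi0,
              tol_f=0.01, tol_d=0.05, tol_mac=0.98):
    """Compare a pole (f1, d1, phi1) with the closest pole of the previous model order."""
    return (abs(f1 - f0) / f0 < tol_f
            and abs(d1 - d0) / d0 < tol_d
            and mac(phi1, phi0) > tol_mac)

# Toy example: two nearly identical poles from consecutive model orders.
phi = np.array([1.0, 0.8, -0.3])
print(is_stable(2.003, 0.0104, phi, 2.000, 0.010, phi * 1.01))   # True
```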

  15. Statistical pattern recognition for automatic writer identification and verification

    NARCIS (Netherlands)

    Bulacu, Marius Lucian

    2007-01-01

    The thesis addresses the problem of automatic person identification using scanned images of handwriting. Identifying the author of a handwritten sample using automatic image-based methods is an interesting pattern recognition problem with direct applicability in forensic and historic document analysis…

  16. Automatic handwriting identification on medieval documents

    NARCIS (Netherlands)

    Bulacu, M.L.; Schomaker, L.R.B.

    2007-01-01

    In this paper, we evaluate the performance of text-independent writer identification methods on a handwriting dataset containing medieval English documents. Applicable identification rates are achieved by combining textural features (joint directional probability distributions) with allographic features…

  17. The problem of automatic identification of concepts

    International Nuclear Information System (INIS)

    This paper deals with the problem of the automatic recognition of concepts and describes an important language tool, the "linguistic filter", which facilitates the construction of statistical algorithms. Certain special filters, of prepositions, conjunctions, negatives, logical implication, compound words, are presented. This is followed by a detailed description of a statistical algorithm allowing recognition of pronoun referents, and finally the problem of the automatic treatment of negatives in French is discussed.

  18. All-optical automatic pollen identification: Towards an operational system

    Science.gov (United States)

    Crouzy, Benoît; Stella, Michelle; Konzelmann, Thomas; Calpini, Bertrand; Clot, Bernard

    2016-09-01

    We present results from the development and validation campaign of an optical pollen monitoring method based on time-resolved scattering and fluorescence. Focus is first set on supervised learning algorithms for pollen-taxa identification and on the determination of aerosol properties (particle size and shape). The identification capability provides a basis for a pre-operational automatic pollen season monitoring performed in parallel to manual reference measurements (Hirst-type volumetric samplers). Airborne concentrations obtained from the automatic system are compatible with those from the manual method regarding total pollen, and the automatic device provides real-time data reliably (one week of interruption over five months). In addition, although the calibration dataset still needs to be completed, we are able to follow the grass pollen season. The high sampling rate of the automatic device makes it possible to go beyond the commonly presented daily values, and we obtain statistically significant hourly concentrations. Finally, we discuss remaining challenges for obtaining an operational automatic monitoring system and how the generic validation environment developed for the present campaign could be used for further tests of automatic pollen monitoring devices.

  19. Automatic defect identification on PWR nuclear power station fuel pellets

    International Nuclear Information System (INIS)

    This article presents a new automatic technique for identifying structural failures in green nuclear fuel pellets. The technique was developed to identify failures that occur during the fabrication process. It is based on a smart image analysis technique for automatic identification of failures on uranium oxide pellets used as fuel in PWR nuclear power stations. In order to achieve this goal, an artificial neural network (ANN) has been trained and validated on image histograms of pellets containing examples not only of normal (flawless) pellets, but of defective pellets as well (with the main flaws normally found during the manufacturing process). Based on this technique, a new automatic identification system for flaws on nuclear fuel pellets, combining image pre-processing and intelligent classification, will be developed and implemented in the Brazilian nuclear fuel production industry. Based on the theoretical performance of the technology proposed and presented in this article, it is believed that this new system, NuFAS (Nuclear Fuel Pellets Failures Automatic Identification Neural System), will be able to identify structural failures in nuclear fuel pellets with virtually zero error margins. Once implemented, NuFAS will add value to the quality control process of the national nuclear fuel production.

  20. Automatic seagrass pattern identification on sonar images

    Science.gov (United States)

    Rahnemoonfar, Maryam; Rahman, Abdullah

    2016-05-01

    Natural and human-induced disturbances are resulting in degradation and loss of seagrass. Freshwater flooding, severe meteorological events and invasive species are among the major natural disturbances. Human-induced disturbances are mainly due to boat propeller scars in the shallow seagrass meadows and anchor scars in the deeper areas. Therefore, there is a vital need to map seagrass ecosystems in order to determine worldwide abundance and distribution. Currently there is no established method for mapping potholes or scars in seagrass. One of the most precise sensors for mapping seagrass disturbance is side scan sonar. Here we propose an automatic method which detects seagrass potholes in sonar images. Side scan sonar images are notorious for having speckle noise and uneven illumination across the image. Moreover, disturbance presents complex patterns where most segmentation techniques will fail. In this paper, by applying mathematical morphology techniques and calculating the local standard deviation of the image, the images were enhanced and the pothole patterns were identified. The proposed method was applied on sonar images taken from Laguna Madre in Texas. Experimental results show the effectiveness of the proposed method.
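
    A rough sketch of the enhancement step named above: estimate the local standard deviation of a sonar image and apply grayscale morphology so that low-texture pothole or scar regions stand out from the seagrass background. Window sizes, the threshold and the synthetic image are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(1)
sonar = rng.normal(0.5, 0.15, size=(256, 256))                  # stand-in sonar tile
sonar[100:140, 80:160] = rng.normal(0.5, 0.02, size=(40, 80))   # smooth "pothole"

def local_std(img, size=9):
    """Local standard deviation via the identity var = E[x^2] - E[x]^2."""
    mean = ndi.uniform_filter(img, size)
    mean_sq = ndi.uniform_filter(img * img, size)
    return np.sqrt(np.clip(mean_sq - mean * mean, 0, None))

texture = local_std(sonar, size=9)
# Grayscale opening removes small bright specks (speckle) before thresholding
# the low-texture candidate regions.
smoothed = ndi.grey_opening(texture, size=(5, 5))
pothole_mask = smoothed < 0.5 * smoothed.mean()
print("candidate pothole pixels:", int(pothole_mask.sum()))
```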

  1. MAC, A System for Automatically IPR Identification, Collection and Distribution

    Science.gov (United States)

    Serrão, Carlos

    Controlling Intellectual Property Rights (IPR) in the digital world is a very hard challenge. The ease of creating multiple bit-by-bit identical copies of original IPR works creates opportunities for digital piracy. One of the industries most affected by this is the music industry, which has suffered huge losses over the last few years as a result. Moreover, this also affects the way that music rights collecting and distributing societies operate to assure correct music IPR identification, collection and distribution. In this article a system for automating this IPR identification, collection and distribution is presented and described. The system makes use of an advanced automatic audio identification system based on audio fingerprinting technology. This paper presents the details of the system and a use-case scenario where it is being used.

  2. Term identification in the biomedical literature.

    Science.gov (United States)

    Krauthammer, Michael; Nenadic, Goran

    2004-12-01

    Sophisticated information technologies are needed for effective data acquisition and integration from a growing body of the biomedical literature. Successful term identification is key to getting access to the stored literature information, as it is the terms (and their relationships) that convey knowledge across scientific articles. Due to the complexities of a dynamically changing biomedical terminology, term identification has been recognized as the current bottleneck in text mining, and--as a consequence--has become an important research topic both in natural language processing and biomedical communities. This article overviews state-of-the-art approaches in term identification. The process of identifying terms is analysed through three steps: term recognition, term classification, and term mapping. For each step, main approaches and general trends, along with the major problems, are discussed. By assessing previous work in context of the overall term identification process, the review also tries to delineate needs for future work in the field. PMID:15542023

  3. Automatic Identification And Data Collection Via Barcode Laser Scanning.

    Science.gov (United States)

    Jacobeus, Michel

    1986-07-01

    How do you earn over 100 million a year by investing 40 million? No, this is not the latest Wall Street "tip" but the cost savings obtained by the U.S. Department of Defense. Supermarkets claim 2% savings on annual turnover! Automotive companies report millions of dollars saved! These are not daydreams, but tangible results measured by users after implementing Automatic Identification and Data Collection systems based on bar codes. To paraphrase the famous sentence "I think, thus I am", with AI/ADC systems "You know, thus you are". Indeed, in today's world, immediate, accurate and precise information is a vital management need for companies' growth and survival. AI/ADC techniques fulfill these objectives by supplying the right information automatically and without any delay or alteration.

  4. Automatic identification of algal community from microscopic images.

    Science.gov (United States)

    Santhi, Natchimuthu; Pradeepa, Chinnaraj; Subashini, Parthasarathy; Kalaiselvi, Senthil

    2013-01-01

    A good understanding of the population dynamics of algal communities is crucial in several ecological and pollution studies of freshwater and oceanic systems. This paper reviews the automatic identification of algal communities from microscope images using image processing techniques. The diverse techniques of image preprocessing, segmentation, feature extraction and recognition are considered one by one and their parameters are summarized. Automatic identification and classification of algal communities are very difficult due to various factors such as changes in size and shape with climatic changes, various growth periods, and the presence of other microbes. Therefore, the significance, uniqueness, and various approaches are discussed and the analyses in image processing methods are evaluated. Algal identification and associated problems in water organisms have been projected as challenges in image processing applications. Various image processing approaches based on textures, shapes, and object boundaries, as well as segmentation methods such as edge detection and color segmentation, are highlighted. Finally, artificial neural networks and some machine learning algorithms were used to classify and identify the algae. Further, some of the benefits and drawbacks of these schemes are examined. PMID:24151424

  5. Feature dependence in the automatic identification of musical woodwind instruments.

    Science.gov (United States)

    Brown, J C; Houix, O; McAdams, S

    2001-03-01

    The automatic identification of musical instruments is a relatively unexplored and potentially very important field for its promise to free humans from time-consuming searches on the Internet and indexing of audio material. Speaker identification techniques have been used in this paper to determine the properties (features) which are most effective in identifying a statistically significant number of sounds representing four classes of musical instruments (oboe, sax, clarinet, flute) excerpted from actual performances. Features examined include cepstral coefficients, constant-Q coefficients, spectral centroid, autocorrelation coefficients, and moments of the time wave. The number of these coefficients was varied, and in the case of cepstral coefficients, ten coefficients were sufficient for identification. Correct identifications of 79%-84% were obtained with cepstral coefficients, bin-to-bin differences of the constant-Q coefficients, and autocorrelation coefficients; the latter have not been used previously in either speaker or instrument identification work. These results depended on the training sounds chosen and the number of clusters used in the calculation. Comparison to a human perception experiment with sounds produced by the same instruments indicates that, under these conditions, computers do as well as humans in identifying woodwind instruments. PMID:11303920

  6. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

    Full Text Available Person identification plays an important role in semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured from a fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing information extracted from camera video with information from motion sensor platforms, such as smart phones, carried on human bodies. More specifically, a sequence of motion features extracted from the camera video is compared with each of those collected from the accelerometers of smart phones. When strong correlation is detected, the identity information transmitted from the corresponding smart phone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments are conducted which achieved impressive performance.

  7. Development of an automatic identification algorithm for antibiogram analysis.

    Science.gov (United States)

    Costa, Luan F R; da Silva, Eduardo S; Noronha, Victor T; Vaz-Moreira, Ivone; Nunes, Olga C; Andrade, Marcelino M de

    2015-12-01

    Routinely, diagnostic and microbiology laboratories perform antibiogram analysis, which can present some difficulties leading to misreadings and intra- and inter-reader deviations. An Automatic Identification Algorithm (AIA) has been proposed as a solution to overcome some issues associated with the disc diffusion method, which is the main goal of this work. AIA allows automatic scanning of inhibition zones obtained by antibiograms. More than 60 environmental isolates were tested using susceptibility tests which were performed for 12 different antibiotics for a total of 756 readings. Plate images were acquired and classified as standard or oddity. The inhibition zones were measured using the AIA and results were compared with the reference method (human reading), using the weighted kappa index and statistical analysis to evaluate, respectively, inter-reader agreement and correlation between AIA-based and human-based reading. Agreement was observed in 88% of cases, and 89% of the tests showed no difference or only a minor difference between readings. The AIA also handled reading problems such as overlapping inhibition zones, imperfect microorganism seeding, non-homogeneity of the circumference, partial action of the antimicrobial, and formation of a second halo of inhibition. Furthermore, AIA proved to overcome some of the limitations observed in other automatic methods. Therefore, AIA may be a practical tool for automated reading of antibiograms in diagnostic and microbiology laboratories. PMID:26513468
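
    A small sketch of the inter-reader agreement measure mentioned above: a weighted kappa between automatic (AIA) and human readings of inhibition zone diameters, here binned into susceptibility categories. The diameters and category cut-offs below are made up for illustration only.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical zone diameters in mm for 10 plates, read by a human and by the algorithm.
human = np.array([22, 15, 30, 8, 18, 25, 12, 28, 20, 16])
aia   = np.array([21, 16, 30, 9, 17, 25, 13, 27, 20, 15])

# Bin into resistant / intermediate / susceptible categories (cut-offs assumed).
bins = [0, 14, 19, 100]
human_cat = np.digitize(human, bins)
aia_cat = np.digitize(aia, bins)

kappa = cohen_kappa_score(human_cat, aia_cat, weights="linear")
print(f"linearly weighted kappa: {kappa:.2f}")
```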

  8. Semi-automatic long-term acoustic surveying

    DEFF Research Database (Denmark)

    Andreassen, Tórur; Surlykke, Annemarie; Hallam, John

    2014-01-01

    Increasing concern about decline in biodiversity has created a demand for population surveys. Acoustic monitoring is an efficient non-invasive method, which may be deployed for surveys of animals as diverse as insects, birds, and bats. Long-term unmanned automatic monitoring may provide unique...... data sampling rates (500 kHz). Using a sound energy threshold criterion for triggering recording, we collected 236 GB (Gi = 1024^3) of data at full bandwidth. We implemented a simple automatic method using a Support Vector Machine (SVM) classifier based on a combination of temporal and spectral analyses...... to classify events into bat calls and non-bat events. After experimentation we selected duration, energy, bandwidth, and entropy as classification features to identify short high energy structured sounds in the right frequency range. The spectral entropy makes use of the orderly arrangement of frequencies...
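
    A hedged sketch of the classification idea summarised above: compute simple temporal and spectral features (duration, energy, bandwidth, spectral entropy) for detected sound events and separate bat calls from other events with an SVM. The feature definitions, synthetic events and labels below are assumptions, not the record's actual pipeline.

```python
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs = 500_000  # 500 kHz sampling, as in the recordings described above

def event_features(x):
    freqs, psd = welch(x, fs=fs, nperseg=256)
    p = psd / psd.sum()
    spectral_entropy = -np.sum(p * np.log2(p + 1e-12))
    centroid = np.sum(freqs * p)
    bandwidth = np.sqrt(np.sum(p * (freqs - centroid) ** 2))
    return [len(x) / fs, np.sum(x ** 2), bandwidth, spectral_entropy]

rng = np.random.default_rng(2)
events, labels = [], []
for _ in range(40):
    t = np.arange(int(0.003 * fs)) / fs
    call = np.sin(2 * np.pi * (60_000 - 5e6 * t) * t)   # structured synthetic "call"
    noise = rng.standard_normal(t.size)                 # broadband "non-bat" event
    events += [call + 0.1 * rng.standard_normal(t.size), noise]
    labels += [1, 0]

X = np.array([event_features(e) for e in events])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```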

  9. Automatic extraction of candidate nomenclature terms using the doublet method

    Directory of Open Access Journals (Sweden)

    Berman Jules J

    2005-10-01

    nomenclature. Results A 31+ Megabyte corpus of pathology journal abstracts was parsed using the doublet extraction method. This corpus consisted of 4,289 records, each containing an abstract title. The total number of words included in the abstract titles was 50,547. New candidate terms for the nomenclature were automatically extracted from the titles of abstracts in the corpus. Total execution time on a desktop computer with CPU speed of 2.79 GHz was 2 seconds. The resulting output consisted of 313 new candidate terms, each consisting of concatenated doublets found in the reference nomenclature. Human review of the 313 candidate terms yielded a list of 285 terms approved by a curator. A final automatic extraction of duplicate terms yielded a final list of 222 new terms (71% of the original 313 extracted candidate terms) that could be added to the reference nomenclature. Conclusion The doublet method for automatically extracting candidate nomenclature terms can be used to quickly find new terms from vast amounts of text. The method can be immediately adapted for virtually any text and any nomenclature. An implementation of the algorithm, in the Perl programming language, is provided with this article.
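
    A minimal Python sketch of the doublet idea described above (the record's own implementation is in Perl): a word sequence from an abstract title is kept as a candidate term when every consecutive word pair (doublet) it contains already occurs in the reference nomenclature. The tiny nomenclature and title below are illustrative.

```python
def doublets(phrase):
    words = phrase.lower().split()
    return set(zip(words, words[1:]))

reference_nomenclature = [
    "squamous cell carcinoma",
    "basal cell carcinoma of skin",
]
known_doublets = set()
for term in reference_nomenclature:
    known_doublets |= doublets(term)

def candidate_terms(title, max_len=5):
    """Return word sequences whose doublets all occur in the nomenclature."""
    words = title.lower().split()
    found = set()
    for i in range(len(words)):
        for j in range(i + 2, min(i + max_len, len(words)) + 1):
            phrase = " ".join(words[i:j])
            if doublets(phrase) <= known_doublets:
                found.add(phrase)
    return found

print(candidate_terms("recurrent squamous cell carcinoma of skin"))
# e.g. {'squamous cell', 'cell carcinoma', ..., 'squamous cell carcinoma of skin'}
```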

  10. 33 CFR 164.43 - Automatic Identification System Shipborne Equipment-Prince William Sound.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Automatic Identification System Shipborne Equipment-Prince William Sound. 164.43 Section 164.43 Navigation and Navigable Waters COAST GUARD... Automatic Identification System Shipborne Equipment—Prince William Sound. (a) Until December 31, 2004,...

  11. AUTOMATIC LICENSE PLATE LOCALISATION AND IDENTIFICATION VIA SIGNATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Lorita Angeline

    2014-02-01

    Full Text Available A new algorithm for license plate localisation and identification is proposed on the basis of signature analysis. Signature analysis has been used to locate license plate candidates, and its properties can be further utilised in supporting and affirming the license plate character recognition. This paper presents Signature Analysis and an improved conventional Connected Component Analysis (CCA) to design an automatic license plate localisation and identification system. A procedure called the Euclidean Distance Transform is added to the conventional CCA in order to tackle the multiple bounding boxes that occurred. The developed algorithm, SAICCA, achieved a 92% success rate, with an 8% localisation failure rate due to restrictions such as insufficient light level, clarity and license plate perceptual information. The processing time for license plate localisation and recognition is a crucial criterion that needs to be considered. Therefore, this paper has utilised several approaches to decrease the processing time to an optimal value. The results obtained show that the proposed system can be implemented in both ideal and non-ideal environments.
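
    A small sketch of the "signature" idea in the abstract above, under assumptions: the row-wise projection of vertical edge strength peaks over the license plate band, giving a candidate region that connected-component analysis can then refine. The synthetic image and threshold are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(10)
img = rng.random((240, 320)) * 0.1                 # mostly smooth scene
img[150:180, 100:220] = rng.random((30, 120))      # high-contrast "plate" characters

# Vertical-edge energy per row (the row signature).
vertical_edges = np.abs(np.diff(img, axis=1))
row_signature = vertical_edges.sum(axis=1)

# Rows whose signature exceeds a multiple of the median are plate-band candidates.
candidate_rows = np.where(row_signature > 3 * np.median(row_signature))[0]
print("candidate plate band rows:",
      candidate_rows.min(), "to", candidate_rows.max())
```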

  12. Automatic Identification of Interictal Epileptiform Discharges in Secondary Generalized Epilepsy

    Directory of Open Access Journals (Sweden)

    Won-Du Chang

    2016-01-01

    Full Text Available Ictal epileptiform discharges (EDs) are characteristic signal patterns of scalp electroencephalogram (EEG) or intracranial EEG (iEEG) recorded from patients with epilepsy, which assist with the diagnosis and characterization of various types of epilepsy. The EEG signal, however, is often recorded from patients with epilepsy for a long period of time, and thus detection and identification of EDs have been a burden on medical doctors. This paper proposes a new method for automatic identification of two types of EDs, repeated sharp-waves (sharps) and runs of sharp-and-slow-waves (SSWs), which helps to pinpoint epileptogenic foci in secondary generalized epilepsy such as Lennox-Gastaut syndrome (LGS). In the experiments with iEEG data acquired from a patient with LGS, our proposed method detected EDs with an accuracy of 93.76% and classified three different signal patterns with a mean classification accuracy of 87.69%, which was significantly higher than that of a conventional wavelet-based method. Our study shows that it is possible to successfully detect and discriminate sharps and SSWs from background EEG activity using our proposed method.

  13. Automatic annotation of protein motif function with Gene Ontology terms

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Vanathi

    2004-09-01

    Full Text Available Abstract Background Conserved protein sequence motifs are short stretches of amino acid sequence patterns that potentially encode the function of proteins. Several sequence pattern searching algorithms and programs exist for identifying candidate protein motifs at the whole genome level. However, a much needed and important task is to determine the functions of the newly identified protein motifs. The Gene Ontology (GO) project is an endeavor to annotate the function of genes or protein sequences with terms from a dynamic, controlled vocabulary, and these annotations serve well as a knowledge base. Results This paper presents methods to mine the GO knowledge base and use the association between the GO terms assigned to a sequence and the motifs matched by the same sequence as evidence for predicting the functions of novel protein motifs automatically. The task of assigning GO terms to protein motifs is viewed as both a binary classification and an information retrieval problem, where PROSITE motifs are used as samples for model training and functional prediction. The mutual information of a motif and GO term association is found to be a very useful feature. We take advantage of the known motifs to train a logistic regression classifier, which allows us to combine mutual information with other frequency-based features and obtain a probability of correct association. The trained logistic regression model has intuitively meaningful and logically plausible parameter values, and performs very well empirically according to our evaluation criteria. Conclusions In this research, different methods for automatic annotation of protein motifs have been investigated. Empirical results demonstrated that the methods have great potential for detecting and augmenting information about the functions of newly discovered candidate protein motifs.
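
    A hedged sketch of the modelling idea in the abstract above: use the mutual information between a motif and a GO term, estimated from co-occurrence counts over sequences, as one feature of a logistic regression that predicts whether the association is correct. The counts, second feature and labels below are toy assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mutual_information(n_both, n_motif, n_term, n_total):
    """MI of the binary variables 'sequence has motif' / 'sequence has GO term'."""
    mi = 0.0
    for m in (0, 1):
        for t in (0, 1):
            n_mt = {(1, 1): n_both,
                    (1, 0): n_motif - n_both,
                    (0, 1): n_term - n_both,
                    (0, 0): n_total - n_motif - n_term + n_both}[(m, t)]
            if n_mt == 0:
                continue
            p_mt = n_mt / n_total
            p_m = (n_motif if m else n_total - n_motif) / n_total
            p_t = (n_term if t else n_total - n_term) / n_total
            mi += p_mt * np.log2(p_mt / (p_m * p_t))
    return mi

# Toy training set: (MI, relative co-occurrence frequency) -> association correct or not.
X = np.array([[mutual_information(40, 50, 60, 1000), 40 / 50],
              [mutual_information(5, 300, 400, 1000), 5 / 300],
              [mutual_information(30, 35, 40, 1000), 30 / 35],
              [mutual_information(2, 500, 600, 1000), 2 / 500]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)
print("P(correct association):", model.predict_proba(X)[:, 1].round(2))
```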

  14. Search and decoy: the automatic identification of mass spectra.

    Science.gov (United States)

    Eisenacher, Martin; Kohl, Michael; Turewicz, Michael; Koch, Markus-Hermann; Uszkoreit, Julian; Stephan, Christian

    2012-01-01

    In recent years, the generation and interpretation of MS/MS spectra for the identification of peptides and proteins has matured to a frequently used automatic workflow in Proteomics. Several software solutions for the automated analysis of MS/MS spectra allow for high-throughput/high-performance analyses of complex samples. Related to MS/MS searches, target-decoy approaches have gained more and more popularity: in a "decoy" part of the search database nonexistent sequences mimic real sequences (the "target" sequences). With their help, the number of falsely identified peptides/proteins can be estimated after a search and the resulting protein list can be cut at a specified false discovery rate (FDR). This is an essential prerequisite for all quantitative approaches, as they rely on correct identifications. Especially the label-free approach "spectral counting"-gaining more and more popularity due to low costs and simplicity-depends directly on the correctness of peptide-spectrum matches (PSMs). This work's aim is to describe five popular search engines-especially their general properties regarding protein identification, but also their quantification abilities, if those go beyond spectral counting. By doing so, Proteomics researchers are enabled to compare their features and to choose an appropriate solution for their specific question. Furthermore, the search engines are applied to a spectrum data set generated from a complex sample with a Thermo LTQ Velos OrbiTrap (Thermo Fisher Scientific, Waltham, MA, USA). The results of the search engines are compared, e.g., regarding time requirements, peptides and proteins found, and the search engines' behavior using the decoy approach. PMID:22665317

  15. Automatic failure identification of the nuclear power plant pellet fuel

    International Nuclear Information System (INIS)

    This paper proposes the development of an automatic defect evaluation technique to support the fabrication stage of fuel elements. An intelligent image analysis was developed for automatic recognition of defects in uranium pellets. To that end, an Artificial Neural Network (ANN) was trained using segments of pellet histograms, containing examples of both normal (defect-free) pellets and defective pellets (with the major defects normally found). The images of the pellets were segmented into 11 shares; histograms were made of these segments and used to train the ANN. Besides automating the process, the system obtained a classification accuracy of 98.33%. Although this percentage already represents a significant advance in the quality control process, the use of more advanced photography and lighting techniques should reduce the error to insignificant levels at low cost. Technologically, the method developed, should it be implemented, will add substantial value in terms of process quality control and production outages in relation to domestic manufacturing of nuclear fuel. (author)
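
    A small sketch of the approach described above, under assumptions: each pellet image histogram is split into 11 segments, each segment is summed into one input value, and a small neural network classifies the pellet as normal or defective. Synthetic histograms stand in for real pellet images; the network size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def histogram_segments(gray_values, n_segments=11):
    hist, _ = np.histogram(gray_values, bins=256, range=(0, 256))
    hist = hist / hist.sum()
    return [seg.sum() for seg in np.array_split(hist, n_segments)]

# Synthetic "normal" pellets: narrow gray-level distribution;
# "defective" pellets: extra dark pixels from cracks or chips.
X, y = [], []
for _ in range(60):
    normal = rng.normal(150, 10, size=5000)
    defect = np.concatenate([rng.normal(150, 10, size=4500),
                             rng.normal(40, 15, size=500)])
    X += [histogram_segments(normal), histogram_segments(defect)]
    y += [0, 1]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```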

  16. Automatic Personal Identification Using Feature Similarity Index Matching

    Directory of Open Access Journals (Sweden)

    R. Gayathri

    2012-01-01

    Full Text Available Problem statement: Biometrics-based personal identification is an effective method for automatically recognizing a person's identity with high confidence. The palmprint is an essential biometric feature for use in access control and forensic applications. In this study, we present multi-feature extraction based on an edge detection scheme, applying a Log Gabor filter to enhance image structures and suppress noise. Approach: A novel Feature-Similarity Indexing (FSIM) of image algorithm is used to generate the matching score between the original image in the database and the input test image. The Feature Similarity (FSIM) index for full-reference image quality assessment (IQA) is proposed based on the fact that the Human Visual System (HVS) understands an image mainly according to its low-level features. Results and Conclusion: The experimental results achieve recognition accuracies using Canny- and Prewitt-based FSIM of 97.3227% and 94.718%, respectively, on the publicly available database of Hong Kong Polytechnic University. In total, 500 images of 100 individuals, 4 samples for each palm, were randomly selected for training in this research. Then, for each person, one palm image is taken as a template (100 in total). Experimental evaluation using palmprint image databases clearly demonstrates the efficient recognition performance of the proposed algorithm compared with conventional palmprint recognition algorithms.

  17. Principal Component Analysis and Automatic Relevance Determination in Damage Identification

    CERN Document Server

    Mdlazi, L; Stander, C J; Scheffer, C; Heyns, P S

    2007-01-01

    This paper compares two neural network input selection schemes, Principal Component Analysis (PCA) and Automatic Relevance Determination (ARD) based on MacKay's evidence framework. The PCA takes all the input data and projects it onto a lower-dimensional space, thereby reducing the dimension of the input space. This input reduction method often results in parameters that have significant influence on the dynamics of the data being diluted by those that do not. The ARD selects the most relevant input parameters and discards those that do not contribute significantly to the dynamics of the data being modelled. The ARD sometimes results in important input parameters being discarded, thereby compromising the dynamics of the data. The PCA and ARD methods are implemented together with a Multi-Layer Perceptron (MLP) network for fault identification in structures and the performance of the two methods is assessed. It is observed that ARD and PCA give similar accuracy le...

  18. Wavelet Packet Based Features for Automatic Script Identification

    Directory of Open Access Journals (Sweden)

    M.C. Padma & P. A. Vijaya

    2010-08-01

    Full Text Available In a multi-script environment, an archive of documents printed in different scripts is in practice. For automatic processing of such documents through Optical Character Recognition (OCR), it is necessary to identify the script type of the document. In this paper, a novel texture-based approach is presented to identify the script type of a collection of documents printed in ten Indian scripts - Bangla, Devanagari, Roman (English), Gujarati, Malayalam, Oriya, Tamil, Telugu, Kannada and Urdu. The document images are decomposed through the Wavelet Packet Decomposition using the Haar basis function up to level two. A gray level co-occurrence matrix is constructed for the coefficient sub-bands of the wavelet transform. The Haralick texture features are extracted from the co-occurrence matrix and then used in the identification of the script of a machine-printed document. Experimentation conducted involved 3000 text images for learning and 2500 text images for testing. Script classification performance is analyzed using the K-nearest neighbor classifier. The average success rate is found to be 98.24%.
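
    A hedged sketch of the texture pipeline summarised above: a two-level Haar wavelet packet decomposition of a document image, grey-level co-occurrence matrices (GLCM) of the coefficient sub-bands, Haralick-style statistics as features, and a K-nearest-neighbour classifier. The quantisation, GLCM parameters, synthetic "documents" and labels are assumptions, not the paper's exact setup.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def texture_features(img):
    wp = pywt.WaveletPacket2D(img, wavelet="haar", maxlevel=2)
    feats = []
    for node in wp.get_level(2):
        band = node.data
        # Quantise each sub-band to 8 grey levels for the co-occurrence matrix.
        q = np.digitize(band, np.linspace(band.min(), band.max() + 1e-9, 9)) - 1
        glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                            angles=[0], levels=8, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats.append(graycoprops(glcm, prop)[0, 0])
    return feats

rng = np.random.default_rng(4)
X, y = [], []
for _ in range(20):
    script_a = rng.integers(0, 2, size=(64, 64)).astype(float)                  # fine texture
    script_b = np.kron(rng.integers(0, 2, size=(16, 16)), np.ones((4, 4)))      # coarse texture
    X += [texture_features(script_a), texture_features(script_b)]
    y += [0, 1]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("training accuracy:", knn.score(X, y))
```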

  19. Rewriting and suppressing UMLS terms for improved biomedical term identification

    Directory of Open Access Journals (Sweden)

    Hettne Kristina M

    2010-03-01

    Full Text Available Abstract Background Identification of terms is essential for biomedical text mining. We concentrate here on the use of vocabularies for term identification, specifically the Unified Medical Language System (UMLS). To make the UMLS more suitable for biomedical text mining we implemented and evaluated nine term rewrite and eight term suppression rules. The rules rely on UMLS properties that have been identified in previous work by others, together with an additional set of new properties discovered by our group during our work with the UMLS. Our work complements the earlier work in that we measure the impact of the different rules on the number of terms identified in a MEDLINE corpus. The number of uniquely identified terms and their frequency in MEDLINE were computed before and after applying the rules. The 50 most frequently found terms together with a sample of 100 randomly selected terms were evaluated for every rule. Results Five of the nine rewrite rules were found to generate additional synonyms and spelling variants that correctly corresponded to the meaning of the original terms, and seven out of the eight suppression rules were found to suppress only undesired terms. Using the five rewrite rules that passed our evaluation, we were able to identify 1,117,772 new occurrences of 14,784 rewritten terms in MEDLINE. Without the rewriting, we recognized 651,268 terms belonging to 397,414 concepts; with rewriting, we recognized 666,053 terms belonging to 410,823 concepts, which is an increase of 2.8% in the number of terms and an increase of 3.4% in the number of concepts recognized. Using the seven suppression rules, a total of 257,118 undesired terms were suppressed in the UMLS, notably decreasing its size. 7,397 terms were suppressed in the corpus. Conclusions We recommend applying the five rewrite rules and seven suppression rules that passed our evaluation when the UMLS is to be used for biomedical term identification in MEDLINE. A software

  20. An automatic identification and monitoring system for coral reef fish

    Science.gov (United States)

    Wilder, Joseph; Tonde, Chetan; Sundar, Ganesh; Huang, Ning; Barinov, Lev; Baxi, Jigesh; Bibby, James; Rapport, Andrew; Pavoni, Edward; Tsang, Serena; Garcia, Eri; Mateo, Felix; Lubansky, Tanya M.; Russell, Gareth J.

    2012-10-01

    To help gauge the health of coral reef ecosystems, we developed a prototype of an underwater camera module to automatically census reef fish populations. Recognition challenges include pose and lighting variations, complicated backgrounds, within-species color variations and within-family similarities among species. An open frame holds two cameras, LED lights, and two 'background' panels in an L-shaped configuration. High-resolution cameras send sequences of 300 synchronized image pairs at 10 fps to an on-shore PC. Approximately 200 sequences containing fish were recorded at the New York Aquarium's Glover's Reef exhibit. These contained eight 'common' species with 85-672 images, and eight 'rare' species with 5-27 images that were grouped into an 'unknown/rare' category for classification. Image pre-processing included background modeling and subtraction, and tracking of fish across frames for depth estimation, pose correction, scaling, and disambiguation of overlapping fish. Shape features were obtained from PCA analysis of perimeter points, color features from opponent color histograms, and 'banding' features from DCT of vertical projections. Images were classified to species using feedforward neural networks arranged in a three-level hierarchy in which errors remaining after each level are targeted by networks in the level below. Networks were trained and tested on independent image sets. Overall accuracy of species-specific identifications typically exceeded 96% across multiple training runs. A seaworthy version of our system will allow for population censuses with high temporal resolution, and therefore improved statistical power to detect trends. A network of such devices could provide an 'early warning system' for coral ecosystem collapse.
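
    A sketch of one of the colour features mentioned above: an opponent-colour histogram of a fish image region. The particular opponent transform (intensity, red-green, blue-yellow) and the bin count used here are assumptions for illustration, not the record's exact definition.

```python
import numpy as np

def opponent_color_histogram(rgb, bins=8):
    """rgb: float array of shape (H, W, 3) with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r + g + b) / 3.0             # intensity
    o2 = (r - g) / 2.0 + 0.5           # red-green, shifted into [0, 1]
    o3 = (r + g - 2 * b) / 4.0 + 0.5   # blue-yellow, shifted into [0, 1]
    hist = []
    for channel in (o1, o2, o3):
        h, _ = np.histogram(channel, bins=bins, range=(0.0, 1.0), density=True)
        hist.extend(h / bins)          # normalise so each channel sums to 1
    return np.array(hist)

rng = np.random.default_rng(5)
patch = rng.random((32, 32, 3))        # stand-in for a segmented fish region
features = opponent_color_histogram(patch)
print(features.shape, round(features.sum(), 2))   # (24,) 3.0
```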

  1. Gust Front Statistical Characteristics and Automatic Identification Algorithm for CINRAD

    Institute of Scientific and Technical Information of China (English)

    郑佳锋; 张杰; 朱克云; 刘黎平; 刘艳霞

    2014-01-01

    Gust front is a kind of meso- and micro-scale weather phenomenon that often causes serious ground wind and wind shear. This paper presents an automatic gust front identification algorithm. A total of 879 radar volume-scan samples selected from 21 gust front weather processes that occurred in China between 2009 and 2012 are examined and analyzed. Gust front echo statistical features in the reflectivity, velocity, and spectrum width fields are obtained. Based on these features, an algorithm is designed to recognize gust fronts and generate output products and quantitative indices. Then, 315 samples are used to verify the algorithm and 3 typical cases are analyzed. Major conclusions include: 1) for narrow-band echoes, intensity is between 5 and 30 dBZ, widths are between 2 and 10 km, maximum heights are less than 4 km (89.33% are lower than 3 km), and lengths are between 50 and 200 km. The narrow-band echo is higher than its surrounding echo. 2) Gust fronts present a convergence line or a wind shear in the velocity field; the frontal wind speed gradually decreases as the distance increases radially outward. Spectral widths of gust fronts are large, with 87.09% exceeding 4 m s-1. 3) Using 315 gust front volume-scan samples to test the algorithm reveals that the algorithm is highly stable and has successfully recognized 277 of them. The algorithm also works for small-scale or weak gust fronts. 4) Radar data quality has a certain impact on the algorithm.

  2. Identification of Car Passengers with RFID for Automatic Crash Notification

    OpenAIRE

    Ouyang, Dongfang

    2009-01-01

    Automatic Crash Notification is a system designed to be used in a crash situation. When a crash occurs, the intelligent system is activated and automatically sends select crash details to the appropriate Emergency Medical Service Center. These details can be the position of the vehicle and the likely severity of the damage. Using this information, the medical treatment resources required for the accident are assessed at the Emergency Center. Accordingly, first-aid facilities are promptly and proper...

  3. Automatic identification technology tracking weapons and ammunition for the Norwegian Armed Forces

    OpenAIRE

    Lien, Tord Hjalmar.

    2011-01-01

    Approved for public release; distribution is unlimited. The purpose of this study is to recommend technology and solutions that improve the accountability and accuracy of small arms and ammunition inventories in the Norwegian Armed Forces (NAF). Radio Frequency Identification (RFID) and Item Unique Identification (IUID) are described, and challenges and benefits of these two major automatic identification technologies are discussed. A case study for the NAF is conducted where processes a...

  4. Automatic resource identification for FPGA-based reconfigurable measurement and control systems with mezzanines in FMC standard

    Science.gov (United States)

    Wojenski, Andrzej; Kasprowicz, Grzegorz; Pozniak, Krzysztof T.; Romaniuk, Ryszard

    2013-10-01

    The paper describes a concept of an automatic resource identification algorithm used in reconfigurable measurement systems. The paper also presents a concept of an algorithm for automatic generation of HDL code (firmware) and management of reconfigurable measurement and control systems. The following sections are described in detail: definition of the measurement system, FMC board installation, automatic FPGA startup configuration, automatic FMC detection and automatic card identification. Reconfigurable measurement and control systems use FPGA devices and mezzanines in the FMC standard. This work is a part of a wider project for automatic firmware generation and management of reconfigurable systems.

  5. A Wireless Framework for Lecturers' Attendance System with Automatic Vehicle Identification (AVI Technology

    Directory of Open Access Journals (Sweden)

    Emammer Khamis Shafter

    2015-10-01

    Full Text Available Automatic Vehicle Identification (AVI) technology is one type of Radio Frequency Identification (RFID) method which can be used to significantly improve the efficiency of lecturers' attendance systems. It provides the capability of automatic data capture for attendance records using a mobile device installed in the user's vehicle. The intent of this article is to propose a framework for an automatic lecturers' attendance system using AVI technology. The first objective of this work involves gathering requirements for the Automatic Lecturers' Attendance System and representing them using UML diagrams. The second objective is to put forward a framework that will provide guidelines for developing the system. A prototype has also been created as a pilot project.

  6. Automatic Knowledge Extraction and Knowledge Structuring for a National Term Bank

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2011-01-01

    This paper gives an introduction to the plans and ongoing work in a project, the aim of which is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data from...... various existing sources, as well as methods for target group oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank....

  7. Automatic Classification of the Vestibulo-Ocular Reflex Nystagmus: Integration of Data Clustering and System Identification.

    Science.gov (United States)

    Ranjbaran, Mina; Smith, Heather L H; Galiana, Henrietta L

    2016-04-01

    The vestibulo-ocular reflex (VOR) plays an important role in our daily activities by enabling us to fixate on objects during head movements. Modeling and identification of the VOR improves our insight into the system behavior and improves diagnosis of various disorders. However, the switching nature of eye movements (nystagmus), including the VOR, makes dynamic analysis challenging. The first step in such analysis is to segment data into its subsystem responses (here slow and fast segment intervals). Misclassification of segments results in biased analysis of the system of interest. Here, we develop a novel three-step algorithm to classify the VOR data into slow and fast intervals automatically. The proposed algorithm is initialized using a K-means clustering method. The initial classification is then refined using system identification approaches and prediction error statistics. The performance of the algorithm is evaluated on simulated and experimental data. It is shown that the new algorithm performance is much improved over the previous methods, in terms of higher specificity. PMID:26357393
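
    A minimal sketch of the first step described above: cluster eye-velocity samples into slow-phase and fast-phase (saccadic) intervals with K-means before any model-based refinement. The synthetic nystagmus trace and the feature choice (absolute eye velocity) are illustrative assumptions, not the record's exact initialization.

```python
import numpy as np
from sklearn.cluster import KMeans

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
eye_pos = np.zeros_like(t)

# Synthetic nystagmus: slow drift interrupted by quick resetting jumps.
drift, pos = -10.0, 0.0
for i in range(1, len(t)):
    pos += drift / fs
    if pos < -2.0:              # fast phase: rapid reset
        pos += 4.0
    eye_pos[i] = pos

eye_vel = np.gradient(eye_pos, 1 / fs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.abs(eye_vel).reshape(-1, 1))

# Relabel so that the cluster with the larger mean speed is the fast phase.
fast_cluster = np.argmax([np.abs(eye_vel)[labels == k].mean() for k in (0, 1)])
fast_mask = labels == fast_cluster
print(f"fast-phase samples: {fast_mask.sum()} of {len(t)}")
```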

  8. Automatic Identification of Human Erythrocytes in Microscopic Fecal Specimens.

    Science.gov (United States)

    Liu, Lin; Lei, Haoting; Zhang, Jing; Yuan, Yang; Zhang, Zhenglong; Liu, Juanxiu; Xie, Yu; Ni, Guangming; Liu, Yong

    2015-11-01

    Traditional fecal erythrocyte detection is performed via a manual operation that is unsuitable because it depends significantly on the expertise of individual inspectors. To recognize human erythrocytes automatically and precisely, automatic segmentation is very important for extraction of characteristics. In addition, multiple recognition algorithms are also essential. This paper proposes an algorithm based on morphological segmentation and a fuzzy neural network. The morphological segmentation process comprises three operational steps: top-hat transformation, Otsu's method, and image binarization. Following initial screening by area and circularity, fuzzy c-means clustering and the neural network algorithms are used for secondary screening. Subsequently, the erythrocytes are screened by combining the results of five images obtained at different focal lengths. Experimental results show that even when the illumination, noise pollution, and position of the erythrocytes are different, they are all segmented and labeled accurately by the proposed method. Thus, the proposed method is robust even in images with significant amounts of noise. PMID:26349804
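
    A hedged sketch of the segmentation steps named above (top-hat transformation, Otsu thresholding, binarization, then screening by area and circularity), using scikit-image on a synthetic microscope-like image. All sizes and thresholds are illustrative assumptions.

```python
import numpy as np
from skimage import draw, filters, measure, morphology

# Synthetic field: dark background with a few bright round "cells".
img = np.full((200, 200), 0.1)
for cy, cx, r in [(50, 60, 12), (120, 140, 10), (160, 50, 11)]:
    rr, cc = draw.disk((cy, cx), r)
    img[rr, cc] = 0.8
img += np.random.default_rng(6).normal(0, 0.02, img.shape)

# White top-hat removes slowly varying background; Otsu gives a global threshold.
tophat = morphology.white_tophat(img, morphology.disk(15))
binary = tophat > filters.threshold_otsu(tophat)

# Screen candidate regions by area and circularity (4*pi*area / perimeter^2).
label_img = measure.label(binary)
cells = []
for region in measure.regionprops(label_img):
    circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
    if region.area > 100 and circularity > 0.7:
        cells.append(region.centroid)
print("candidate erythrocytes found:", len(cells))
```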

  9. Automatic Part Primitive Feature Identification Based on Faceted Models

    Directory of Open Access Journals (Sweden)

    Muizuddin Azka

    2012-09-01

    Full Text Available Feature recognition technology has been developed along with the process of integrating CAD/CAPP/CAM. Automatic feature detection applications based on faceted models expected to speed up the manufacturing process design activities such as setting tool to be used or required machining process in a variety of different features. This research focuses on detection of primitive features available in a part. This is done by applying part slicing and grouping adjacent facets. Type of feature is identified by simply evaluating normal vector direction of all features group. In order to identify features on various planes of a part, planes, one at a time, are rotated to be parallel with the reference plane. The results showed that this method can identify the primitive features automatically accurately in all planes of tested part, this covered : pocket, cylindrical and profile feature.

  10. Automatic script identification from images using cluster-based templates

    Energy Technology Data Exchange (ETDEWEB)

    Hochberg, J.; Kerns, L.; Kelly, P.; Thomas, T.

    1995-02-01

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
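
    A rough sketch of the cluster-based template idea described above: scale extracted symbols to a fixed size, cluster the symbols of each script, keep cluster centroids as templates, and assign a new document to the script whose templates match its symbols best. Random arrays stand in for real connected-component symbol images, and the cluster count is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

SYMBOL_SIZE = 16 * 16
rng = np.random.default_rng(7)

def make_symbols(offset, n=200):
    """Stand-in for scaled, binarised symbol images of one script."""
    return (rng.random((n, SYMBOL_SIZE)) + offset > 0.6).astype(float)

training = {"cyrillic": make_symbols(0.0), "roman": make_symbols(0.25)}

# Templates = centroids of symbol clusters for each script.
templates = {script: KMeans(n_clusters=10, n_init=10, random_state=0)
                        .fit(symbols).cluster_centers_
             for script, symbols in training.items()}

def identify_script(document_symbols):
    scores = {}
    for script, centers in templates.items():
        # Distance from each symbol to its best-matching template, averaged.
        d = np.linalg.norm(document_symbols[:, None, :] - centers[None], axis=2)
        scores[script] = d.min(axis=1).mean()
    return min(scores, key=scores.get)

print(identify_script(make_symbols(0.25, n=50)))   # expected: roman
```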

  11. Automatic Identification of Personal Life Events in Twitter

    OpenAIRE

    Dickinson, Thomas; Fernández, Miriam; Thomas, Lisa A.; Mulholland, Paul; Briggs, Pam; Alani, Harith

    2015-01-01

    New social media has led to an explosion in personal digital data that encompasses both those expressions of self chosen by the individual as well as reflections of self provided by other, third parties. The resulting Digital Personhood (DP) data is complex and for many users it is too easy to become lost in the mire of digital data. This paper studies the automatic detection of personal life events in Twitter. Six relevant life events are considered from psychological research including: beg...

  12. Wavelet Packet Based Features for Automatic Script Identification

    OpenAIRE

    M.C. Padma & P. A. Vijaya

    2010-01-01

    In a multi-script environment, an archive of documents printed in different scripts is in practice. For automatic processing of such documents through Optical Character Recognition (OCR), it is necessary to identify the script type of the document. In this paper, a novel texture-based approach is presented to identify the script type of a collection of documents printed in ten Indian scripts - Bangla, Devanagari, Roman (English), Gujarati, Malayalam, Oriya, Tamil, Telugu, Kannada and Urdu. The do...

  13. Automatic Boat Identification System for VIIRS Low Light Imaging Data

    Directory of Open Access Journals (Sweden)

    Christopher D. Elvidge

    2015-03-01

    Full Text Available The ability for satellite sensors to detect lit fishing boats has been known since the 1970s. However, the use of the observations has been limited by the lack of an automatic algorithm for reporting the location and brightness of offshore lighting features arising from boats. An examination of lit fishing boat features in Visible Infrared Imaging Radiometer Suite (VIIRS) day/night band (DNB) data indicates that the features are essentially spikes. We have developed a set of algorithms for automatic detection of spikes and characterization of the sharpness of spike features. A spike detection algorithm generates a list of candidate boat detections. A second algorithm measures the height of the spikes for the discard of ionospheric energetic particle detections and to rate boat detections as either strong or weak. A sharpness index is used to label boat detections that appear blurry due to the scattering of light by clouds. The candidate spikes are then filtered to remove features on land and gas flares. A validation study conducted using analyst selected boat detections found the automatic algorithm detected 99.3% of the reference pixel set. VIIRS boat detection data can provide fishery agencies with up-to-date information on fishing boat activity and changes in this activity in response to new regulations and enforcement regimes. The data can provide indications of illegal fishing activity in restricted areas and incursions across Exclusive Economic Zone (EEZ) boundaries. VIIRS boat detections occur widely offshore from East and Southeast Asia, South America and several other regions.
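
    The published algorithm itself is not reproduced here, but the spike-detection step can be pictured with a small sketch (window size and threshold are illustrative assumptions): a DNB pixel becomes a candidate boat detection when it stands well above its local background.

    import numpy as np
    from scipy.ndimage import median_filter

    # Flag pixels exceeding the local median background by k local median absolute deviations.
    def detect_spikes(radiance, window=5, k=6.0):
        background = median_filter(radiance, size=window)
        residual = radiance - background
        mad = median_filter(np.abs(residual), size=window) + 1e-12
        return residual > k * mad  # boolean mask of candidate boat pixels

    Subsequent steps in the paper (spike height, sharpness, and land/gas-flare filtering) then decide which candidates are kept.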

  14. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, uniform corrosion is characterized by texture attributes extracted from the co-occurrence matrix and clustered with the Self-Organizing Map (SOM) algorithm. We present a technique for the automatic inspection of oil and gas storage tanks and pipelines in petrochemical industries without disturbing their properties and performance. Experimental results are promising and encourage the possibility of using this methodology to design trustworthy and robust early failure detection systems. (author)
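
    As a rough illustration of the texture-attribute step (the specific attributes, window sizes and SOM configuration used by the authors are not given here), gray-level co-occurrence matrix (GLCM) statistics can be computed per image patch before clustering, assuming a recent scikit-image:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # Contrast, homogeneity, energy and correlation of an 8-bit grayscale patch.
    def glcm_features(patch):
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return np.array([graycoprops(glcm, p).mean()
                         for p in ("contrast", "homogeneity", "energy", "correlation")])

    The resulting feature vectors would then be fed to a Self-Organizing Map to separate corroded from intact surface regions.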

  15. An Evaluation of Cellular Neural Networks for the Automatic Identification of Cephalometric Landmarks on Digital Images

    Directory of Open Access Journals (Sweden)

    Rosalia Leonardi

    2009-01-01

    Full Text Available Several efforts have been made to completely automate cephalometric analysis by automatic landmark search. However, the accuracy obtained was worse than manual identification in every study. The analogue-to-digital conversion of X-rays has been claimed to be the main problem. Therefore the aim of this investigation was to evaluate the accuracy of the Cellular Neural Networks approach for automatic location of cephalometric landmarks on softcopies of direct digital cephalometric X-rays. Forty-one direct-digital lateral cephalometric radiographs were obtained with a Siemens Orthophos DS Ceph and used in this study, and 10 landmarks (N, A Point, Ba, Po, Pt, B Point, Pg, PM, UIE, LIE) were the object of automatic landmark identification. The mean errors and standard deviations from the best estimate of cephalometric points were calculated for each landmark. Differences in the mean errors of automatic and manual landmarking were compared with a one-way analysis of variance. The analyses indicated that the differences were very small, and they were found at most within 0.59 mm. Furthermore, only a few of these differences were statistically significant, and the differences were so small as to be in most instances clinically meaningless. Therefore the use of direct digital X-ray files, as opposed to scanned X-rays, improved the accuracy of automatic landmark detection. Investigations on softcopies of digital cephalometric X-rays, to search for more landmarks in order to enable a complete automatic cephalometric analysis, are strongly encouraged.

  16. Automatic Priming Effects for New Associations in Lexical Decision and Perceptual Identification

    NARCIS (Netherlands)

    D. Pecher (Diane); J.G.W. Raaijmakers (Jeroen)

    1999-01-01

    textabstractInformation storage in semantic memory was investigated by looking at automatic priming effects for new associations in two experiments. In the study phase word pairs were presented in a paired-associate learning task. Lexical decision and perceptual identification were used to examine p

  17. Performance Modelling of Automatic Identification System with Extended Field of View

    DEFF Research Database (Denmark)

    Lauersen, Troels; Mortensen, Hans Peter; Pedersen, Nikolaj Bisgaard;

    2010-01-01

    This paper deals with AIS (Automatic Identification System) behavior, to investigate the severity of packet collisions in an extended field of view (FOV). This is an important issue for satellite-based AIS, and the main goal is a feasibility study to find out to what extent an increased FOV is...

  18. Exploring features for automatic identification of news queries through query logs

    Institute of Scientific and Technical Information of China (English)

    Xiaojuan ZHANG; Jian LI

    2014-01-01

    Purpose: Existing research on predicting queries with news intent has tried to extract classification features from external knowledge bases; this paper presents how to apply features extracted from query logs for automatic identification of news queries without using any external resources. Design/methodology/approach: First, we manually labeled 1,220 news queries from Sogou.com. Based on the analysis of these queries, we then identified three features of news queries in terms of query content, time of query occurrence and user click behavior. Afterwards, we used 12 effective features proposed in the literature as a baseline and conducted experiments based on the support vector machine (SVM) classifier. Finally, we compared the impacts of the features used in this paper on the identification of news queries. Findings: Compared with the baseline features, the F-score improved from 0.6414 to 0.8368 after the use of the three newly identified features, among which the burst point (bst) was the most effective in predicting news queries. In addition, query expression (qes) was more useful than query terms, and among the click behavior-based features, the news URL was the most effective one. Research limitations: Analyses based on features extracted from query logs might produce limited results. The segmentation tool used in this study has been more widely applied to long texts rather than short queries. Practical implications: The research will be helpful for general-purpose search engines to address search intents for news events. Originality/value: Our approach provides a new and different perspective in recognizing queries with news intent without large news corpora such as blogs or Twitter.
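
    A minimal sketch of the classification setup (the feature values below are hypothetical stand-ins for the burst, query-expression and news-URL click features; the actual query-log feature extraction is not reproduced):

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Toy training data: [burst score, query-expression flag, news-URL click flag]
    X_train = [[0.8, 1.0, 1.0],
               [0.1, 0.0, 0.0],
               [0.7, 1.0, 0.0],
               [0.2, 0.0, 1.0]]
    y_train = [1, 0, 1, 0]  # 1 = news query, 0 = non-news query

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    print(clf.predict([[0.9, 1.0, 1.0]]))  # a query with strong news-like features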

  19. Automatic Validation of Phosphopeptide Identifications from Tandem Mass Spectra

    OpenAIRE

    Lu, Bingwen; Ruse, Cristian; Xu, Tao; Park, Sung Kyu; Yates, John

    2007-01-01

    We developed and compared two approaches for automated validation of phosphopeptide tandem mass spectra identified using database searching algorithms. Phosphopeptide identifications were obtained through SEQUEST searches of a protein database appended with its decoy (reversed sequences). Statistical evaluation and iterative searches were employed to create a high quality dataset of phosphopeptides. Automation of post-search validation was approached by two different strategies. By using sta...

  20. Defect Automatic Identification of Eddy Current Pulsed Thermography

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2014-01-01

    Full Text Available Eddy current pulsed thermography (ECPT) is an effective nondestructive testing and evaluation (NDT&E) technique and has been applied to a wide range of conductive materials. Manually selected frames have been used for defect detection and quantification, with defects indicated by high/low temperature regions in the frames. However, variation in surface emissivity sometimes introduces illusory temperature inhomogeneity and results in false alarms. To improve the probability of detection, this paper proposes a method based on two heat-balance states which can restrain the influence of emissivity. In addition, independent component analysis (ICA) is applied to automatically identify defect patterns and quantify the defects. An experiment was carried out to validate the proposed methods.

  1. Perspective of the applications of automatic identification technologies in the Serbian Army

    Directory of Open Access Journals (Sweden)

    Velibor V. Jovanović

    2012-07-01

    Full Text Available Without modern information systems, supply-chain management is almost impossible. Automatic identification technologies provide automated data processing, which contributes to improving conditions and supports decision making. Automatic identification technology media, notably BARCODE and RFID technology, are used as carriers of labels with high quality data and an adequate description of material means, to provide crucial visibility of inventory levels through the supply chain. With these media and the use of an adequate information system, the Ministry of Defense of the Republic of Serbia will be able to establish a system of codification and, in accordance with the NATO codification system, to successfully implement a unique codification, classification and determination of storage numbers for all tools, components and spare parts for their unequivocal identification. In the long term, this will help end users perform everyday tasks without compromising the material integrity of security data. It will also help command structures to have reliable information for decision making to ensure optimal management. Products and services that pass the codification procedure will have the opportunity to be offered in the largest market of armament and military equipment. This paper gives a comparative analysis of two automatic identification technologies - BARCODE, the most common one, and RFID, the most advanced one - with an emphasis on the advantages and disadvantages of their use in tracking inventory through the supply chain. Their possible application in the Serbian Army is discussed in general.

  2. Managing Returnable Containers Logistics - A Case Study Part II - Improving Visibility through Using Automatic Identification Technologies

    Directory of Open Access Journals (Sweden)

    Gretchen Meiser

    2011-05-01

    Full Text Available This case study is the result of a project conducted on behalf of a company that uses its own returnable containers to transport purchased parts from suppliers. The objective of this project was to develop a proposal to enable the company to more effectively track and manage its returnable containers. The research activities in support of this project included (1) the analysis and documentation of the physical flow and the information flow associated with the containers and (2) the investigation of new technologies to improve the automatic identification and tracking of containers. This paper explains the automatic identification technologies and important criteria for selection. A companion paper details the flow of information and containers within the logistics chain, and it identifies areas for improving the management of the containers.

  3. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    Science.gov (United States)

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as the elderly, the bedridden and diabetics. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from the pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the stress experienced by each limb over time. The experimental results indicate high performance and more than 94% average accuracy of the proposed approach. PMID:27268736

  4. Interchangeable Data Protocol for VMeS (Vessel Messaging System) and AIS (Automatic Identification System)

    OpenAIRE

    Farid Andhika; Trika Pitana; Achmad Affandi

    2012-01-01

    VMeS (Vessel Messaging System) is a radio-based communication system for sending messages between a VMeS terminal on a ship at sea and a VMeS gateway on shore. Ship monitoring systems at sea generally use AIS (Automatic Identification System), which is already used in all ports to monitor ship conditions and prevent collisions between ships. In this research, a data format suitable for VMeS will be designed so that it can be made interchangeable with AIS so th...

  5. Rapid Identification of Volatile Compounds in Aromatic Plants by Automatic Thermal Desorption - GC-MS

    OpenAIRE

    Esteban, J. L.; Martínez-Castro, I.; Morales Valverde, Ramón; Fabrellas, B.; Sanz, J.

    1996-01-01

    Thermal desorption is a valuable method for the fractionation of plant volatile components, which can be carried out on-line with GC analysis. The use of coupled GC-MS affords additional qualitative information, of special interest for plant species whose composition has not been previously studied. Some examples of the application of automatic thermal desorption coupled to GC-MS to the identification and characterization of volatile components of plants of different families are given.

  6. Automatic Identification of Tomato Maturation Using Multilayer Feed Forward Neural Network with Genetic Algorithms (GA)

    Institute of Scientific and Technical Information of China (English)

    FANG Jun-long; ZHANG Chang-li; WANG Shu-wen

    2004-01-01

    We set up a computer vision system for tomato images. Using this system, the RGB values of tomato images were converted into HSI values, whose H component was used to acquire the color character of the tomato surface. A multilayer feed-forward neural network with GA was then used to perform automatic identification of tomato maturation. The results of the experiment showed that the accuracy was up to 94%.

  7. Automatic Palmprint Identification based on High Order Zernike Moment

    Directory of Open Access Journals (Sweden)

    R. Gayathri

    2012-01-01

    Full Text Available Problem statement: Hand geometry contains relatively invariant features of an individual. Palmprint recognition is an efficient biometric solution for authentication systems. The existence of several hand-based commercial authentication systems indicates the effectiveness of this type of biometric. Approach: We proposed a palmprint verification system using high-order Zernike moments that is robust to rotation, translation and occlusion. The Zernike moment is an efficient algorithm for representing the shape features of an image. The design consists of feature extraction and matching of images using high-order Zernike moments. Zernike moments at high orders were calculated from the image, and the image was classified using K-Nearest Neighborhood (KNN). The reason for using Zernike moments is their orthogonality and rotation invariance. Results and Conclusion: Computational cost can be reduced by detecting the common terms of the Zernike moments. Experiments and classifications were performed using the Hong Kong PolyU palmprint database with left-hand palm images of 125 individuals; every person has 5 samples, totaling 625 images. We then take one palm image from each person as a template (totaling 125); the remaining 500 are used as the training samples. The proposed palmprint authentication system achieves a recognition accuracy of 98% and an interesting working point with a False Acceptance Rate (FAR) of 1.062% and a False Rejection Rate (FRR) of 0%. Experimental evaluation demonstrates the efficient recognition performance of the proposed algorithm compared with conventional palmprint recognition algorithms.
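
    A hedged sketch of the descriptor-plus-classifier pipeline (radius, moment order, image preprocessing and the nearest-neighbour matching are assumptions, not the paper's values), using the mahotas implementation of Zernike moments:

    import mahotas
    import numpy as np

    # Rotation-invariant Zernike moment magnitudes of a centred palmprint mask (assumed radius/order).
    def zernike_descriptor(image, radius=64, degree=12):
        return mahotas.features.zernike_moments(image, radius, degree=degree)

    rng = np.random.default_rng(0)
    demo_image = (rng.random((128, 128)) > 0.5).astype(float)  # stand-in for a palmprint mask
    print(zernike_descriptor(demo_image)[:5])

    # With real templates, matching would be a 1-nearest-neighbour search over stored descriptors,
    # e.g. sklearn.neighbors.KNeighborsClassifier(n_neighbors=1).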

  8. A pattern recognition approach based on DTW for automatic transient identification in nuclear power plants

    International Nuclear Information System (INIS)

    Highlights: • Novel transient identification method for NPPs. • Low-complexity. • Low training data requirements. • High accuracy. • Fully reproducible protocol carried out on a real benchmark. - Abstract: Automatic identification of transients in nuclear power plants (NPPs) allows monitoring the fatigue damage accumulated by critical components during plant operation, and is therefore of great importance for ensuring that usage factors remain within the original design bases postulated by the plant designer. Although several schemes to address this important issue have been explored in the literature, there is still no definitive solution available. In the present work, a new method for automatic transient identification is proposed, based on the Dynamic Time Warping (DTW) algorithm, largely used in other related areas such as signature or speech recognition. The novel transient identification system is evaluated on real operational data following a rigorous pattern recognition protocol. Results show the high accuracy of the proposed approach, which is combined with other interesting features such as its low complexity and its very limited requirements of training data
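
    The core of the approach, a Dynamic Time Warping distance between an observed transient and a stored template, can be sketched in its textbook form (the plant signals, templates and decision rule of the paper are not reproduced):

    import numpy as np

    # Classic O(n*m) dynamic time warping with absolute-difference local cost.
    def dtw_distance(x, y):
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # A new transient would be assigned the label of the template with the smallest DTW distance.
    print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 1, 1, 2, 3, 2, 1]))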

  9. An Automatic Identification Procedure to Promote the use of FES-Cycling Training for Hemiparetic Patients

    Directory of Open Access Journals (Sweden)

    Emilia Ambrosini

    2014-01-01

    Full Text Available Cycling induced by Functional Electrical Stimulation (FES) training currently requires a manual setting of different parameters, which is a time-consuming and scarcely repeatable procedure. We proposed an automatic procedure for setting session-specific parameters optimized for hemiparetic patients. This procedure consisted of the identification of the stimulation strategy as the angular ranges during which FES drove the motion, the comparison between the identified strategy and the physiological muscular activation strategy, and the setting of the pulse amplitude and duration of each stimulated muscle. Preliminary trials on 10 healthy volunteers helped define the procedure. Feasibility tests on 8 hemiparetic patients (5 stroke, 3 traumatic brain injury) were performed. The procedure maximized the motor output within the tolerance constraint, identified a biomimetic strategy in 6 patients, and always lasted less than 5 minutes. Its reasonable duration and automatic nature make the procedure usable at the beginning of every training session, potentially enhancing the performance of FES-cycling training.

  10. A new approach to the automatic identification of organism evolution using neural networks.

    Science.gov (United States)

    Kasperski, Andrzej; Kasperska, Renata

    2016-01-01

    Automatic identification of organism evolution still remains a challenging task, which is especially exciting when the evolution of humans is considered. The main aim of this work is to present a new idea to allow organism evolution analysis using neural networks. Here we show that it is possible to identify the evolution of any organism in a fully automatic way using the designed EvolutionXXI program, which contains an implemented neural network. The neural network has been taught using cytochrome b sequences of selected organisms. Then, analyses have been carried out for various exemplary organisms in order to demonstrate the capabilities of the EvolutionXXI program. It is shown that the presented idea allows supporting existing hypotheses concerning evolutionary relationships between selected organisms, among others, Sirenia and elephants, hippopotami and whales, scorpions and spiders, dolphins and whales. Moreover, primate (including human), tree shrew and yeast evolution has been reconstructed. PMID:26975238

  11. Automatic Active-Region Identification and Azimuth Disambiguation of the SOLIS/VSM Full-Disk Vector Magnetograms

    CERN Document Server

    Georgoulis, M K; Henney, C J

    2007-01-01

    The Vector Spectromagnetograph (VSM) of the NSO's Synoptic Optical Long-Term Investigations of the Sun (SOLIS) facility is now operational and obtains the first-ever vector magnetic field measurements of the entire visible solar hemisphere. To fully exploit the unprecedented SOLIS/VSM data, however, one must first address two critical problems. First, the study of solar active regions requires an automatic, physically intuitive, technique for active-region identification in the solar disk. Second, use of active-region vector magnetograms requires removal of the azimuthal $180^\circ$ ambiguity in the orientation of the transverse magnetic field component. Here we report on an effort to address both problems simultaneously and efficiently. To identify solar active regions we apply an algorithm designed to locate complex, flux-balanced, magnetic structures with a dominant E-W orientation on the disk. Each of the disk portions corresponding to active regions is thereafter extracted and subjected to the Nonpotential M...

  12. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Yoshihisa Sakurai

    2016-04-01

    Full Text Available This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier’s subtechniques in course conditions.

  13. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors.

    Science.gov (United States)

    Sakurai, Yoshihisa; Fujita, Zenya; Ishige, Yusuke

    2016-01-01

    This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier's subtechniques in course conditions. PMID:27049388

  14. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors

    Science.gov (United States)

    Sakurai, Yoshihisa; Fujita, Zenya; Ishige, Yusuke

    2016-01-01

    This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier’s subtechniques in course conditions. PMID:27049388

  15. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks

    OpenAIRE

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; T. Toledano, Doroteo; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recogn...
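
    A hedged sketch of an end-to-end LSTM language-identification model (the acoustic features, layer sizes and number of target languages below are assumptions, not the published configuration): sequences of per-frame feature vectors go in, language posteriors come out.

    import tensorflow as tf

    NUM_FEATURES = 20    # per-frame acoustic features (assumed)
    NUM_LANGUAGES = 8    # number of target languages (assumed)

    # Variable-length input sequences; the last LSTM state feeds a softmax over languages.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(128, input_shape=(None, NUM_FEATURES)),
        tf.keras.layers.Dense(NUM_LANGUAGES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()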

  16. Semi-Automatic Translation of Medical Terms from English to Swedish : SNOMED CT in Translation

    OpenAIRE

    Lindgren, Anna

    2011-01-01

    The Swedish National Board of Health and Welfare has been overseeing translations of the international clinical terminology SNOMED CT from English to Swedish. This study was performed to find whether semi-automatic methods of translation could produce a satisfactory translation while requiring fewer resources than manual translation. Using the medical English-Swedish dictionary TermColl translations of select subsets of SNOMED CT were produced by ways of translation memory and statistical tra...

  17. Automatic identification of shallow landslides based on Worldview2 remote sensing images

    Science.gov (United States)

    Ma, Hai-Rong; Cheng, Xinwen; Chen, Lianjun; Zhang, Haitao; Xiong, Hongwei

    2016-01-01

    Automatic identification of landslides based on remote sensing images is important for investigating disasters and producing hazard maps. We propose a method to detect shallow landslides automatically using Worldview2 images. Features such as high soil brightness and low vegetation coverage can help identify shallow landslides on remote sensing images. Therefore, soil brightness and vegetation index were chosen as indexes for landslide remote sensing. The back scarp of a landslide can form dark shadow areas on the landslide mass, affecting the accuracy of landslide extraction. To eliminate this effect, the shadow index was chosen as an index. The first principal component (PC1) contained >90% of the image information; therefore, this was also selected as an index. The four selected indexes were used to synthesize a new image wherein information on shallow landslides was enhanced, while other background information was suppressed. Then, PC1 was extracted from the new synthetic image, and an automatic threshold segmentation algorithm was used for segmenting the image to obtain similar landslide areas. Based on landslide features such as slope, shape, and area, nonlandslide areas were eliminated. Finally, four experimental sites were used to verify the feasibility of the developed method.

  18. Automatic derivation of domain terms and concept location based on the analysis of the identifiers

    CERN Document Server

    Vaclavik, Peter; Mezei, Marek

    2010-01-01

    Developers express the meaning of the domain ideas in specifically selected identifiers and comments that form the target implemented code. Software maintenance requires knowledge and understanding of the encoded ideas. This paper presents a way to automatically create a domain vocabulary. Knowledge of the domain vocabulary supports the comprehension of a specific domain for later code maintenance or evolution. We present experiments conducted in two selected domains: application servers and web frameworks. Knowledge of domain terms enables easy localization of chunks of code that belong to a certain term. We consider these chunks of code as "concepts" and their placement in the code as "concept location". Application developers may also benefit from the obtained domain terms. These terms are parts of speech that characterize a certain concept. Concepts are encoded in "classes" (OO paradigm) and the obtained vocabulary of terms supports the selection and the comprehension of the class' appropriate identifiers. ...

  19. Evaluating current automatic de-identification methods with Veteran’s health administration clinical documents

    Directory of Open Access Journals (Sweden)

    Ferrández Oscar

    2012-07-01

    Full Text Available Abstract Background The increased use and adoption of Electronic Health Records (EHR) causes a tremendous growth in digital information useful for clinicians, researchers and many other operational purposes. However, this information is rich in Protected Health Information (PHI), which severely restricts its access and possible uses. A number of investigators have developed methods for automatically de-identifying EHR documents by removing PHI, as specified in the Health Insurance Portability and Accountability Act “Safe Harbor” method. This study focuses on the evaluation of existing automated text de-identification methods and tools, as applied to Veterans Health Administration (VHA) clinical documents, to assess which methods perform better with each category of PHI found in our clinical notes, and when new methods are needed to improve performance. Methods We installed and evaluated five text de-identification systems “out-of-the-box” using a corpus of VHA clinical documents. The systems based on machine learning methods were trained with the 2006 i2b2 de-identification corpora and evaluated with our VHA corpus, and also evaluated with a ten-fold cross-validation experiment using our VHA corpus. We counted exact, partial, and fully contained matches with reference annotations, considering each PHI type separately, or only one unique ‘PHI’ category. Performance of the systems was assessed using recall (equivalent to sensitivity) and precision (equivalent to positive predictive value) metrics, as well as the F2-measure. Results Overall, systems based on rules and pattern matching achieved better recall, and precision was always better with systems based on machine learning approaches. The highest “out-of-the-box” F2-measure was 67% for partial matches; the best precision and recall were 95% and 78%, respectively. Finally, the ten-fold cross validation experiment allowed for an increase of the F2-measure to 79% with partial matches
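
    For reference, the F2-measure quoted above weights recall more heavily than precision; a small helper makes the definition explicit (the printed values are purely illustrative, not taken from the study):

    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta = 2 favours recall over precision.
    def f_beta(precision, recall, beta=2.0):
        if precision == 0.0 and recall == 0.0:
            return 0.0
        b2 = beta * beta
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    print(round(f_beta(0.80, 0.70), 3))  # illustrative precision/recall pair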

  20. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Amato, G. [ISTI-CNR, Area della Ricerca, Via Moruzzi 1, 56124, Pisa (Italy); Cristoforetti, G.; Legnaioli, S.; Lorenzetti, G.; Palleschi, V. [IPCF-CNR, Area della Ricerca, Via Moruzzi 1, 56124, Pisa (Italy); Sorrentino, F., E-mail: sorrentino@fi.infn.i [Dipartimento di Fisica e astronomia, Universita di Firenze, Polo Scientifico, via Sansone 1, 50019 Sesto Fiorentino (Italy); Istituto di Cibernetica CNR, via Campi Flegrei 34, 80078 Pozzuoli (Italy); Marwan Technology, c/o Dipartimento di Fisica 'E. Fermi', Largo Pontecorvo 3, 56127 Pisa (Italy); Tognoni, E. [INO-CNR, Area della Ricerca, Via Moruzzi 1, 56124 Pisa (Italy)

    2010-08-15

    In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks, in the database, in its wavelength neighborhood. We suppose we have a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.
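
    A hedged sketch of the vector-space scoring described above (the wavelength binning, weighting constants and peak database are illustrative assumptions): elements and the sample are mapped to peak-weight vectors on a common wavelength grid and compared by cosine similarity.

    import numpy as np

    # Build a weighted-peak vector: weight ~ intensity / number of database peaks in the same region.
    def peak_vector(peaks, grid, neighbour_counts):
        v = np.zeros(len(grid))
        for wavelength, intensity in peaks:
            i = int(np.argmin(np.abs(grid - wavelength)))    # nearest wavelength bin
            v[i] += intensity / max(neighbour_counts[i], 1)  # penalise crowded spectral regions
        return v

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    # Elements would be ranked by cosine(sample_vector, element_vector), highest first.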

  1. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    Science.gov (United States)

    Amato, G.; Cristoforetti, G.; Legnaioli, S.; Lorenzetti, G.; Palleschi, V.; Sorrentino, F.; Tognoni, E.

    2010-08-01

    In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks, in the database, in its wavelength neighborhood. We suppose we have a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.

  2. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    International Nuclear Information System (INIS)

    In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks, in the database, in its wavelength neighborhood. We suppose we have a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.

  3. Semi-automatic charge and mass identification in two-dimensional matrices

    CERN Document Server

    Gruyer, Diego; Chbihi, A; Frankland, J D; Barlini, S; Borderie, B; Bougault, R; Duenas, J A; Neindre, N Le; Lopez, O; Pastore, G; Piantelli, S; Valdre, S; Verde, G; Vient, E

    2016-01-01

    This article presents a new semi-automatic method for charge and mass identification in two-dimensional matrices. The proposed algorithm is based on the matrix's properties and uses as little information as possible on the global form of the identification lines, making it applicable to a large variety of matrices, including various $\Delta$E-E correlations, or those coming from Pulse Shape Analysis of the charge signal in silicon detectors. Particular attention has been paid to the implementation in a suitable graphical environment, so that only two mouse-clicks are required from the user to calculate all initialization parameters. Example applications to recent data from both INDRA and FAZIA telescopes are presented.

  4. Benefit Analyses of Technologies for Automatic Identification to Be Implemented in the Healthcare Sector

    Science.gov (United States)

    Krey, Mike; Schlatter, Ueli

    The tasks and objectives of automatic identification (Auto-ID) are to provide information on goods and products. It has already been established for years in the areas of logistics and trading and can no longer be ignored by the German healthcare sector. Some German hospitals have already discovered the capabilities of Auto-ID. Improvements in quality and safety, and reductions in risk, cost and time, are areas where gains are achievable. Privacy protection, legal restraints, and the personal rights of patients and staff members are just a few aspects which make the healthcare sector a sensitive field for the implementation of Auto-ID. Auto-ID in this context comprises the different technologies, methods and products for the registration, provision and storage of relevant data. With the help of a quantifiable and science-based evaluation, an answer is sought as to which Auto-ID has the highest capability to be implemented in the healthcare business.

  5. Automatic Identification of Artifact-Related Independent Components for Artifact Removal in EEG Recordings.

    Science.gov (United States)

    Zou, Yuan; Nathan, Viswam; Jafari, Roozbeh

    2016-01-01

    Electroencephalography (EEG) is the recording of electrical activity produced by the firing of neurons within the brain. These activities can be decoded by signal processing techniques. However, EEG recordings are always contaminated with artifacts which hinder the decoding process. Therefore, identifying and removing artifacts is an important step. Researchers often clean EEG recordings with assistance from independent component analysis (ICA), since it can decompose EEG recordings into a number of artifact-related and event-related potential (ERP)-related independent components. However, existing ICA-based artifact identification strategies mostly restrict themselves to a subset of artifacts, e.g., identifying eye movement artifacts only, and have not been shown to reliably identify artifacts caused by nonbiological origins like high-impedance electrodes. In this paper, we propose an automatic algorithm for the identification of general artifacts. The proposed algorithm consists of two parts: 1) an event-related feature-based clustering algorithm used to identify artifacts which have physiological origins; and 2) the electrode-scalp impedance information employed for identifying nonbiological artifacts. The results on EEG data collected from ten subjects show that our algorithm can effectively detect, separate, and remove both physiological and nonbiological artifacts. Qualitative evaluation of the reconstructed EEG signals demonstrates that our proposed method can effectively enhance the signal quality, especially the quality of ERPs, even for those that barely display ERPs in the raw EEG. The performance results also show that our proposed method can effectively identify artifacts and subsequently enhance the classification accuracies compared to four commonly used automatic artifact removal methods. PMID:25415992
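
    The component-removal step (though not the event-related clustering or the impedance checks described above) can be sketched with an off-the-shelf ICA; which components count as artifacts is assumed to be known already:

    from sklearn.decomposition import FastICA

    # eeg: array of shape (n_samples, n_channels); artifact_idx: indices of components to drop.
    def remove_components(eeg, artifact_idx, n_components=None):
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(eeg)       # (n_samples, n_components)
        sources[:, artifact_idx] = 0.0         # zero out artifact-related components
        return ica.inverse_transform(sources)  # cleaned EEG back in channel space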

  6. Terminology of the public relations field: corpus — automatic term recognition — terminology database

    Directory of Open Access Journals (Sweden)

    Nataša Logar Berginc

    2013-12-01

    Full Text Available The article describes an analysis of automatic term recognition results performed for single- and multi-word terms with the LUIZ term extraction system. The target application of the results is a terminology database of Public Relations, and the main resource is the KoRP Public Relations Corpus. Our analysis is focused on two segments: (a) single-word noun term candidates, which we compare with the frequency list of nouns from KoRP and whose termhood we evaluate on the basis of the judgements of two domain experts, and (b) multi-word term candidates with a verb or a noun as headword. In order to better assess the performance of the system and the soundness of our approach we also performed an analysis of recall. Our results show that the terminological relevance of extracted nouns is indeed higher than that of merely frequent nouns, and that verbal phrases only rarely count as proper terms. The most productive patterns of multi-word terms with a noun as headword have the following structure: [adjective + noun], [adjective + and + adjective + noun] and [adjective + adjective + noun]. The analysis of recall shows low inter-annotator agreement, but nevertheless very satisfactory recall levels.
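
    A toy sketch of the multi-word pattern matching over an already POS-tagged token list (the LUIZ extraction system and the Slovene tagging pipeline behind KoRP are not reproduced; tags A/N/C stand for adjective/noun/conjunction):

    PATTERNS = [
        ("A", "N"),            # adjective + noun
        ("A", "A", "N"),       # adjective + adjective + noun
        ("A", "C", "A", "N"),  # adjective + and + adjective + noun
    ]

    # Yield every token span whose tag sequence matches one of the patterns.
    def extract_candidates(tagged_tokens):
        tags = [tag for _, tag in tagged_tokens]
        for pattern in PATTERNS:
            for i in range(len(tags) - len(pattern) + 1):
                if tuple(tags[i:i + len(pattern)]) == pattern:
                    yield " ".join(word for word, _ in tagged_tokens[i:i + len(pattern)])

    print(list(extract_candidates([("strategic", "A"), ("plan", "N")])))  # ['strategic plan']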

  7. Automatic identification of bird targets with radar via patterns produced by wing flapping.

    Science.gov (United States)

    Zaugg, Serge; Saporta, Gilbert; van Loon, Emiel; Schmaljohann, Heiko; Liechti, Felix

    2008-09-01

    Bird identification with radar is important for bird migration research, environmental impact assessments (e.g. wind farms), aircraft security and radar meteorology. In a study on bird migration, radar signals from birds, insects and ground clutter were recorded. Signals from birds show a typical pattern due to wing flapping. The data were labelled by experts into the four classes BIRD, INSECT, CLUTTER and UFO (unidentifiable signals). We present a classification algorithm aimed at automatic recognition of bird targets. Variables related to signal intensity and wing flapping pattern were extracted (via continuous wavelet transform). We used support vector classifiers to build predictive models. We estimated classification performance via cross validation on four datasets. When data from the same dataset were used for training and testing the classifier, the classification performance was extremely to moderately high. When data from one dataset were used for training and the three remaining datasets were used as test sets, the performance was lower but still extremely to moderately high. This shows that the method generalizes well across different locations or times. Our method provides a substantial gain of time when birds must be identified in large collections of radar signals and it represents the first substantial step in developing a real time bird identification radar system. We provide some guidelines and ideas for future research. PMID:18331979

  8. Identification of Units and Other Terms in Czech Medical Records

    Czech Academy of Sciences Publication Activity Database

    Zvára Jr., Karel; Kašpar, Václav

    2010-01-01

    Roč. 6, č. 1 (2010), s. 78-82. ISSN 1801-5603 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords: natural language processing * healthcare documentation * medical reports * EHR * finite-state machine * regular expression Subject RIV: IN - Informatics, Computer Science http://www.ejbi.org/en/ejbi/article/61-en-identification-of-units-and-other-terms-in-czech-medical-records.html

  9. Identification and Estimation of Gaussian Affine Term Structure Models

    OpenAIRE

    Hamilton, James D.; Jing Cynthia Wu

    2012-01-01

    This paper develops new results for identification and estimation of Gaussian affine term structure models. We establish that three popular canonical representations are unidentified, and demonstrate how unidentified regions can complicate numerical optimization. A separate contribution of the paper is the proposal of minimum-chi-square estimation as an alternative to MLE. We show that, although it is asymptotically equivalent to MLE, it can be much easier to compute. In some cases, MCSE allo...

  10. Long-term sensitivity with an automatic TL personnel dosimetry system

    International Nuclear Information System (INIS)

    Since the response of thermoluminescent dosimeters is highly affected by system-dependent parameters, any variation in the reader conditions, in the thermal treatment or in the readout parameters may cause significant changes of the relative sensitivity of the system. For this reason the essential parameters determining satisfactory operation of the dosimetry system must be regularly monitored. For most commercially available systems a certain number of controls, like the control of the preheating and heating temperatures or the control of the power supply voltage of the photomultiplier, are carried out automatically. In the daily routine work of a personnel monitoring laboratory these controls have to be complemented by specific measurements which check the general behaviour of the measurement circuit and which form part of the dose evaluation procedure. The analysis of these measurements over a long period permits an estimation of the practical long-term stability of the sensitivity of an automatic radiothermoluminescent dosimetry system. The authors report on the experience gained in the field with the use of a standard Harshaw system of a Swiss personnel monitoring laboratory. 1 ref., 2 figs. (Author)

  11. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    Science.gov (United States)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capture, as implemented on, for example, airborne or mobile laser scanning systems, can efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~ 10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user, consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered in the tree class, and are next separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.

  12. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    Science.gov (United States)

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

    In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, namely the processed data capped at the speed limit and the unprocessed data retaining the original speed, were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model, and the models with random parameters achieved the best model fit. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other significant geometric factors included auxiliary lanes and horizontal curvature. PMID:26722989

  13. Interchangeable Data Protocol for VMeS (Vessel Messaging System) and AIS (Automatic Identification System)

    Directory of Open Access Journals (Sweden)

    Farid Andhika

    2012-09-01

    Full Text Available VMeS (Vessel Messaging System) is a radio-based communication system for sending messages between a VMeS terminal on a ship at sea and a VMeS gateway on shore. Ship monitoring systems at sea generally use AIS (Automatic Identification System), which is already used in all ports to monitor ship conditions and prevent collisions between ships. In this research, a data format suitable for VMeS is designed so that it can be made interchangeable with AIS and thus be read by AIS receivers; it is intended for ships of less than 30 GT (Gross Tonnage). The VMeS data format is designed in three types, namely position data, ship information data and short message data, which are made interchangeable with AIS message types 1, 4 and 8. Performance testing of the interchangeable system shows that as the message transmission period increases, the total delay increases but the packet loss decreases. When sending messages every 5 seconds at speeds of 0-40 km/h, 96.67% of the data was received correctly. Data experience packet loss if the received power level is below -112 dBm. The longest range the modem could reach while moving was at the ITS Informatics building, at a distance of 530 meters from Laboratory B406, with a received power level of -110 dBm.

  14. AROMA-AIRWICK: a CHLOE/CDC-3600 system for the automatic identification of spark images and their association into tracks

    Energy Technology Data Exchange (ETDEWEB)

    Clark, R K

    1980-06-26

    The AROMA-AIRWICK System, running on CHLOE, an automatic film-scanning machine built at Argonne by Donald Hodges, and on the CDC-3600 computer, is a system for the automatic identification of spark images and their association into tracks. AROMA-AIRWICK has been an outgrowth of the generally recognized need for the automatic processing of high energy physics data and the fact that the Argonne National Laboratory has been a center of serious spark chamber development in recent years.

  15. AROMA-AIRWICK: a CHLOE/CDC-3600 system for the automatic identification of spark images and their association into tracks

    International Nuclear Information System (INIS)

    The AROMA-AIRWICK System, running on CHLOE, an automatic film-scanning machine built at Argonne by Donald Hodges, and on the CDC-3600 computer, is a system for the automatic identification of spark images and their association into tracks. AROMA-AIRWICK has been an outgrowth of the generally recognized need for the automatic processing of high energy physics data and the fact that the Argonne National Laboratory has been a center of serious spark chamber development in recent years.

  16. Evaluation of the algorithm for automatic identification of the common carotid artery in ARTSENS

    International Nuclear Information System (INIS)

    Arterial compliance (AC) is an indicator of the risk of cardiovascular diseases (CVDs) and it is generally estimated by B-mode ultrasound investigation. The number of sonologists in low- and middle-income countries is very disproportionate to the extent of CVD. To bridge this gap we are developing an image-free CVD risk screening tool, arterial stiffness evaluation for non-invasive screening (ARTSENS™), which can be operated with minimal training. ARTSENS uses a single-element ultrasound transducer to investigate the wall dynamics of the common carotid artery (CCA) and subsequently measure the AC. Identification of the proximal and distal walls of the CCA in the ultrasound frames is an important step in the measurement of AC. The image-free nature of ARTSENS creates some unique issues which necessitate the development of a new algorithm that can automatically identify the CCA from a sequence of A-mode radio-frequency (RF) frames. We have earlier presented the concept and preliminary results for an algorithm that employed clues from the relative positions and temporal motion of CCA walls for identifying the CCA and finding the approximate wall positions. In this paper, we present the detailed algorithm and its extensive evaluation based on simulation and clinical studies. The algorithm identified the wall position correctly in more than 90% of all simulated datasets where the signal-to-noise ratio was greater than 3 dB. The algorithm was then tested extensively on RF data obtained from the CCA of 30 human volunteers, where it successfully located the arterial walls in more than 70% of all measurements. The algorithm could successfully reject frames where the CCA was not present, thus assisting the operator to place the probe correctly in the image-free system, ARTSENS. It was demonstrated that the algorithm can be used in real time with few trade-offs which do not affect the accuracy of CCA identification. A new method for depth range selection

  17. GPU-accelerated automatic identification of robust beam setups for proton and carbon-ion radiotherapy

    International Nuclear Information System (INIS)

    We demonstrate acceleration on graphics processing units (GPU) of automatic identification of robust particle therapy beam setups, minimizing negative dosimetric effects of Bragg peak displacement caused by treatment-time patient positioning errors. Our particle therapy research toolkit, RobuR, was extended with OpenCL support and used to implement calculation on GPU of the Port Homogeneity Index, a metric scoring irradiation port robustness through analysis of tissue density patterns prior to dose optimization and computation. Results were benchmarked against an independent native CPU implementation. Numerical results were in agreement between the GPU implementation and native CPU implementation. For 10 skull base cases, the GPU-accelerated implementation was employed to select beam setups for proton and carbon ion treatment plans, which proved to be dosimetrically robust when recomputed in the presence of various simulated positioning errors. From the point of view of performance, average running time on the GPU decreased by at least one order of magnitude compared to the CPU, rendering the GPU-accelerated analysis a feasible step in a clinical treatment planning interactive session. In conclusion, selection of robust particle therapy beam setups can be effectively accelerated on a GPU and become an unintrusive part of the particle therapy treatment planning workflow. Additionally, the speed gain opens new usage scenarios, like interactive analysis manipulation (e.g. constraining of some setup) and re-execution. Finally, through OpenCL portable parallelism, the new implementation is also suitable for CPU-only use, taking advantage of multiple cores, and can potentially exploit types of accelerators other than GPUs.

  18. MetaboHunter: an automatic approach for identification of metabolites from 1H-NMR spectra of complex mixtures

    Directory of Open Access Journals (Sweden)

    Culf Adrian

    2011-10-01

    Full Text Available Abstract Background One-dimensional 1H-NMR spectroscopy is widely used for high-throughput characterization of metabolites in complex biological mixtures. However, the accurate identification of individual compounds is still a challenging task, particularly in spectral regions with higher peak densities. The need for automatic tools to facilitate and further improve the accuracy of such tasks, while using increasingly larger reference spectral libraries, becomes a priority of current metabolomics research. Results We introduce a web server application, called MetaboHunter, which can be used for automatic assignment of 1H-NMR spectra of metabolites. MetaboHunter provides methods for automatic metabolite identification based on spectra or peak lists with three different search methods and with the possibility of peak drift in a user-defined spectral range. The assignment is performed using as reference libraries manually curated data from two major publicly available databases of NMR metabolite standard measurements (HMDB and MMCD). Tests using a variety of synthetic and experimental spectra of single and multi metabolite mixtures show that MetaboHunter is able to identify, on average, more than 80% of detectable metabolites from spectra of synthetic mixtures and more than 50% from spectra corresponding to experimental mixtures. This work also suggests that better scoring functions improve by more than 30% the performance of MetaboHunter's metabolite identification methods. Conclusions MetaboHunter is a freely accessible, easy to use and user friendly 1H-NMR-based web server application that provides efficient data input and pre-processing, flexible parameter settings, fast and automatic metabolite fingerprinting and results visualization via intuitive plotting and compound peak hit maps. Compared to other published and freely accessible metabolomics tools, MetaboHunter implements three efficient methods to search for metabolites in manually curated
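
    As a rough illustration of the kind of peak-list search described above, the sketch below scores reference compounds by how many of their peaks fall within a user-defined drift window of the sample peaks. The library contents, drift value and scoring are simplified, hypothetical placeholders rather than MetaboHunter's actual search methods.

    ```python
    def match_metabolites(sample_peaks, reference_library, drift=0.02):
        """Score each reference compound by the fraction of its peaks found
        in the sample within +/- drift ppm."""
        scores = {}
        for name, ref_peaks in reference_library.items():
            hits = sum(any(abs(p - q) <= drift for q in sample_peaks)
                       for p in ref_peaks)
            scores[name] = hits / len(ref_peaks)
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical reference chemical shifts (ppm) for two compounds.
    library = {"lactate": [1.32, 4.10], "alanine": [1.47, 3.77]}
    print(match_metabolites([1.33, 3.78, 4.11], library))
    ```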

  19. RFID: A Revolution in Automatic Data Recognition

    Science.gov (United States)

    Deal, Walter F., III

    2004-01-01

    Radio frequency identification, or RFID, is a generic term for technologies that use radio waves to automatically identify people or objects. There are several methods of identification, but the most common is to store a serial number that identifies a person or object, and perhaps other information, on a microchip that is attached to an antenna…

  20. Discovery of mass spectral characteristics and automatic identification of wax esters from gas chromatography mass spectrometry data.

    Science.gov (United States)

    Zhang, Liang-xiao; Yun, Yi-feng; Liang, Yi-zeng; Cao, Dong-sheng

    2010-06-01

    The mass spectral characteristics of wax esters were systematically summarized and interpreted through data mining of their standard mass spectra taken from the NIST standard mass spectral library. Combined with the rules of retention indices described in a previous study, an automatic system was subsequently developed to identify structural information for wax esters from GC/MS data. After being tested and illustrated with both simulated and real GC/MS data, the results indicate that this system can identify wax esters, except the polyunsaturated ones, and that the mass spectral characteristics are useful and effective information for the identification of wax esters. PMID:20417935

  1. Introducing a semi-automatic method to simulate large numbers of forensic fingermarks for research on fingerprint identification.

    Science.gov (United States)

    Rodriguez, Crystal M; de Jongh, Arent; Meuwly, Didier

    2012-03-01

    Statistical research on fingerprint identification and the testing of automated fingerprint identification system (AFIS) performances require large numbers of forensic fingermarks. These fingermarks are rarely available. This study presents a semi-automatic method to create simulated fingermarks in large quantities that model minutiae features or images of forensic fingermarks. This method takes into account several aspects contributing to the variability of forensic fingermarks such as the number of minutiae, the finger region, and the elastic deformation of the skin. To investigate the applicability of the simulated fingermarks, fingermarks have been simulated with 5-12 minutiae originating from different finger regions for six fingers. An AFIS matching algorithm was used to obtain similarity scores for comparisons between the minutiae configurations of fingerprints and the minutiae configurations of simulated and forensic fingermarks. The results showed similar scores for both types of fingermarks suggesting that the simulated fingermarks are good substitutes for forensic fingermarks. PMID:22103733

  2. Automatic classification of long-term ambulatory ECG records according to type of ischemic heart disease

    Directory of Open Access Journals (Sweden)

    Smrdel Aleš

    2011-12-01

    Full Text Available Abstract Background Elevated transient ischemic ST segment episodes in ambulatory electrocardiographic (AECG) records appear generally in patients with transmural ischemia (e.g. Prinzmetal's angina), while depressed ischemic episodes appear in patients with subendocardial ischemia (e.g. unstable or stable angina). The huge amount of AECG data necessitates automatic methods for analysis. We present an algorithm which determines the type of transient ischemic episodes in the leads of records (elevations/depressions) and classifies AECG records according to type of ischemic heart disease (Prinzmetal's angina; coronary artery diseases excluding patients with Prinzmetal's angina; other heart diseases). Methods The algorithm was developed using 24-hour AECG records of the Long Term ST Database (LTST DB). The algorithm robustly generates the ST segment level function in each AECG lead of the records, and tracks time-varying non-ischemic ST segment changes such as slow drifts and axis shifts to construct the ST segment reference function. The ST segment reference function is then subtracted from the ST segment level function to obtain the ST segment deviation function. Using the third statistical moment of the histogram of the ST segment deviation function, the algorithm determines deflections of leads according to the type of ischemic episodes present (elevations, depressions), and then classifies records according to type of ischemic heart disease. Results Using 74 records of the LTST DB (containing elevated or depressed ischemic episodes, mixed ischemic episodes, or no episodes), the algorithm correctly determined deflections of the majority of the leads of the records and correctly classified the majority of the records with Prinzmetal's angina into the Prinzmetal's angina category (7 out of 8); the majority of the records with other coronary artery diseases into the coronary artery diseases excluding patients with Prinzmetal's angina category (47 out of 55); and correctly
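
    The deflection decision described above hinges on the third statistical moment of the ST deviation function. A minimal sketch of that step is shown below, assuming per-beat ST level and reference series are already available for one lead; the function and variable names are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import skew

    def lead_deflection(st_level, st_reference):
        """Classify a lead as showing elevations or depressions from the skewness
        (third standardized moment) of its ST deviation function."""
        deviation = np.asarray(st_level) - np.asarray(st_reference)
        return "elevation" if skew(deviation) > 0 else "depression"
    ```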

  3. Contribution to automatic speech recognition. Analysis of the direct acoustical signal. Recognition of isolated words and phoneme identification

    International Nuclear Information System (INIS)

    This report deals with the acoustical-phonetic step of the automatic recognition of speech. The parameters used are the extrema of the acoustical signal (coded in amplitude and duration). This coding method, the properties of which are described, is simple and well adapted to digital processing. The quality and the intelligibility of the coded signal after reconstruction are particularly satisfactory. An experiment on the automatic recognition of isolated words has been carried out using this coding system. We have designed a filtering algorithm operating on the parameters of the coding. Thus the characteristics of the formants can be derived under certain conditions, which are discussed. Using these characteristics, the identification of a large part of the phonemes for a given speaker was achieved. Continuing these studies required the development of a particular methodology of real-time processing which allowed immediate evaluation of the improvement of the programs. Such processing on temporal coding of the acoustical signal is extremely powerful and could represent, used in connection with other methods, an efficient tool for the automatic processing of speech. (author)

  4. Automatic Identification of the Repolarization Endpoint by Computing the Dominant T-wave on a Reduced Number of Leads.

    Science.gov (United States)

    Giuliani, C; Agostinelli, A; Di Nardo, F; Fioretti, S; Burattini, L

    2016-01-01

    Electrocardiographic (ECG) T-wave endpoint (Tend) identification suffers from a lack of reliability due to the presence of noise and variability among leads. Tend identification can be improved by using global repolarization waveforms obtained by combining several leads. The dominant T-wave (DTW) is a global repolarization waveform that proved to improve Tend identification when computed using the 15 (I to III, aVr, aVl, aVf, V1 to V6, X, Y, Z) leads usually available in clinics, of which only 8 (I, II, V1 to V6) are independent. The aim of the present study was to evaluate if the 8 independent leads are sufficient to obtain a DTW which allows a reliable Tend identification. To this aim, Tend measures automatically identified from 15-dependent-lead DTWs of 46 control healthy subjects (CHS) and 103 acute myocardial infarction patients (AMIP) were compared with those obtained from 8-independent-lead DTWs. Results indicate that the Tend distributions do not have statistically different median values (CHS: 340 ms vs. 340 ms, respectively; AMIP: 325 ms vs. 320 ms, respectively), besides being strongly correlated (CHS: ρ=0.97, AMIP: ρ=0.88). For Tend identification from the DTW, the 8 independent leads can be used without a statistically significant loss of accuracy but with a significant decrement of computational effort. The lead dependence of 7 out of 15 leads does not introduce a significant bias in the Tend determination from 15-dependent-lead DTWs. PMID:27347218

  5. A semi-automatic model for sinkhole identification in a karst area of Zhijin County, China

    Science.gov (United States)

    Chen, Hao; Oguchi, Takashi; Wu, Pan

    2015-12-01

    The objective of this study is to investigate the use of DEMs derived from ASTER and SRTM remote sensing images and topographic maps to detect and quantify natural sinkholes in a karst area in Zhijin county, southwest China. Two methodologies were implemented. The first is a semi-automatic approach which identifies depressions stepwise using DEMs: 1) DEM acquisition; 2) sink fill; 3) sink depth calculation using the difference between the original and sink-free DEMs; and 4) elimination of spurious sinkholes by threshold values of morphometric parameters including TPI (topographic position index), geology, and land use. The second is the traditional visual interpretation of depressions based on the integrated analysis of high-resolution aerial photographs and topographic maps. The threshold values of depression area, shape, depth and TPI appropriate for distinguishing true depressions were obtained from the maximum overall accuracy generated by the comparison between the depression maps produced by the semi-automatic model and by visual interpretation. The result shows that the best performance of the semi-automatic model for meso-scale karst depression delineation was obtained using the DEM from the topographic maps with thresholds of area ≥ ~60 m², ellipticity ≥ ~0.2 and TPI ≤ 0. With these realistic thresholds, the accuracy of the semi-automatic model ranges from 0.78 to 0.95 for DEM resolutions from 3 to 75 m.
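
    A minimal sketch of the depression-screening steps 3 and 4 is given below, assuming a depression-filled DEM is already available (for example from a standard sink-fill routine). The ellipticity test is omitted, and the area and TPI thresholds are taken from the abstract only as illustrative values.

    ```python
    import numpy as np
    from scipy import ndimage

    def screen_depressions(dem, filled_dem, cell_size=3.0,
                           min_area=60.0, tpi_window=15):
        depth = filled_dem - dem                      # sink depth per cell
        labels, n = ndimage.label(depth > 0)          # candidate depressions
        # Topographic position index: elevation minus neighbourhood mean.
        tpi = dem - ndimage.uniform_filter(dem, size=tpi_window)
        keep = np.zeros(dem.shape, dtype=bool)
        for i in range(1, n + 1):
            region = labels == i
            area = region.sum() * cell_size ** 2
            if area >= min_area and tpi[region].mean() <= 0:
                keep |= region
        return keep
    ```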

  6. Automatic derivation of domain terms and concept location based on the analysis of the identifiers

    OpenAIRE

    Vaclavik, Peter; Poruban, Jaroslav; Mezei, Marek

    2010-01-01

    Developers express the meaning of domain ideas in specifically selected identifiers and comments that form the implemented code. Software maintenance requires knowledge and understanding of the encoded ideas. This paper presents a way to automatically create a domain vocabulary. Knowledge of the domain vocabulary supports the comprehension of a specific domain for later code maintenance or evolution. We present experiments conducted in two selected domains: application servers and we...

  7. The Effects of Degraded Vision and Automatic Combat Identification Reliability on Infantry Friendly Fire Engagements

    OpenAIRE

    Kogler, Timothy Michael

    2003-01-01

    Fratricide is one of the most devastating consequences of any military conflict. Target identification failures have been identified as the last link in a chain of mistakes that can lead to fratricide. Other links include weapon and equipment malfunctions, command, control, and communication failures, navigation failures, fire discipline failures, and situation awareness failures. This research examined the effects of degraded vision and combat identification reliability on the time-stress...

  8. Automatic Screening of Missing Objects and Identification with Group Coding of RF Tags

    OpenAIRE

    G. Vijayaraju

    2013-01-01

    Here, a shipping container holds a collection of objects that are shipped together as a group, and an efficient strategy is needed to keep track of the objects it physically contains. By enabling radio frequency identification (RFID), the objects in the system can be identified efficiently, followed by a container-oriented strategy in ...

  9. Call recognition and individual identification of fish vocalizations based on automatic speech recognition: An example with the Lusitanian toadfish.

    Science.gov (United States)

    Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C

    2015-12-01

    The study of acoustic communication in animals often requires not only the recognition of species specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools to extract the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented inspired by successful results obtained in the most widely known and complex acoustical communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover this method also proved to be a powerful tool to assess signal durations in large data sets. However, the system failed in recognizing other sound types. PMID:26723348

  10. Algorithms for the automatic identification of MARFEs and UFOs in JET database of visible camera videos

    International Nuclear Information System (INIS)

    MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potential harmful consequences of these events, particularly as triggers of disruptions, it would be important to have the means of detecting them automatically. In this paper, the results of various algorithms to identify automatically the MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos, which have captured these events, exploring the whole JET database of images, as a preliminary step to the development of real-time identifiers in the future. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm manages to identify the videos with MARFEs with a success rate exceeding 80%. Due to the lack of a complete statistics of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)
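
    A minimal sketch of frame screening with morphological cleaning and Hu moments is shown below; it assumes grayscale frames as NumPy arrays and a pre-trained scikit-learn style classifier, and is not the JET identifier itself (the threshold and kernel size are illustrative).

    ```python
    import cv2
    import numpy as np

    def frame_features(frame, thresh=128):
        """Binarise a frame, clean it with a morphological opening and return
        the seven Hu moment invariants, log-scaled, as a feature vector."""
        _, binary = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
        cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        hu = cv2.HuMoments(cv2.moments(cleaned)).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    def video_has_marfe(frames, classifier):
        return any(classifier.predict([frame_features(f)])[0] == 1 for f in frames)
    ```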

  11. A Method for Automatic Identification of Reliable Heart Rates Calculated from ECG and PPG Waveforms

    OpenAIRE

    Yu, Chenggang; Liu, Zhenqiu; McKenna, Thomas; Reisner, Andrew T.; Reifman, Jaques

    2006-01-01

    Objective: The development and application of data-driven decision-support systems for medical triage, diagnostics, and prognostics pose special requirements on physiologic data. In particular, that data are reliable in order to produce meaningful results. The authors describe a method that automatically estimates the reliability of reference heart rates (HRr) derived from electrocardiogram (ECG) waveforms and photoplethysmogram (PPG) waveforms recorded by vital-signs monitors. The reliabilit...

  12. Semi-automatic identification of counterfeit offers in online shopping platforms

    OpenAIRE

    Wartner, Christian; Arnold, Patrick; Rahm, Erhard

    2015-01-01

    Product counterfeiting is a serious problem causing the industry estimated losses of billions of dollars every year. With the increasing spread of e-commerce, the number of counterfeit products sold online increased substantially. We propose the adoption of a semi-automatic workflow to identify likely counterfeit offers in online platforms and to present these offers to a domain expert for manual verification. The workflow includes steps to generate search queries for relevant product offers,...

  13. On the automatic detection of otolith features for fish species identification and their age estimation

    OpenAIRE

    Sória Pérez, José A. (José Antonio)

    2013-01-01

    This thesis deals with the automatic detection of features in signals, either extracted from photographs or captured by means of electronic sensors, and its possible application in the detection of morphological structures in fish otoliths so as to identify species and estimate their age at death. From a more biological perspective, otoliths, which are calcified structures located in the auditory system of all teleostean fish, constitute one of the main elements employed in the study and mana...

  14. Map++: A Crowd-sensing System for Automatic Map Semantics Identification

    OpenAIRE

    Aly, Heba; Basalamah, Anas; Youssef, Moustafa

    2015-01-01

    Digital maps have become a part of our daily life with a number of commercial and free map services. These services have still a huge potential for enhancement with rich semantic information to support a large class of mapping applications. In this paper, we present Map++, a system that leverages standard cell-phone sensors in a crowdsensing approach to automatically enrich digital maps with different road semantics like tunnels, bumps, bridges, footbridges, crosswalks, road capacity, among o...

  15. Automatic recognition of type III solar radio bursts. Automated radio burst identification system method and first observations

    International Nuclear Information System (INIS)

    Complete text of publication follows. Because of the rapidly increasing role of technology, including complicated electronic systems, spacecraft, etc., modern society has become more vulnerable to a set of extraterrestrial influences (space weather) and requires continuous observation and forecasts of space weather. The major space weather events like solar flares and coronal mass ejections are usually accompanied by solar radio bursts, which can be used for a real-time space weather forecast. Coronal type III radio bursts are produced near the local electron plasma frequency and near its harmonic by fast electrons ejected from the solar active regions and moving through the corona and solar wind. These bursts have dynamic spectra with frequency rapidly falling with time, the typical duration of the coronal burst being about 1-3 s. This paper presents a new method developed to detect coronal type III bursts automatically and its implementation in a new Automated Radio Burst Identification System (ARBIS), which is working in real-time. The central idea of the implementation is to use the Radon transform for more objective detection of the bursts as approximately straight lines in dynamic spectra. Preliminary tests of the method with the use of the spectra obtained during 13 days show that the performance of the current implementation is quite high, ∼84%, while no false positives are observed and 23 events not listed previously are found. The first automatically detected coronal type III radio bursts are presented.
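
    The central idea, detecting bursts as approximately straight lines in the dynamic spectrum via the Radon transform, can be sketched as below; the background removal and threshold are illustrative assumptions, not the ARBIS settings.

    ```python
    import numpy as np
    from skimage.transform import radon

    def looks_like_type_iii(dynamic_spectrum, snr_threshold=5.0):
        """dynamic_spectrum: 2-D array (frequency x time) of intensities."""
        img = dynamic_spectrum - np.median(dynamic_spectrum)   # crude background removal
        sinogram = radon(img, theta=np.linspace(0.0, 180.0, 180), circle=False)
        peak = sinogram.max()                                   # strongest line integral
        noise = np.median(np.abs(sinogram)) + 1e-12
        return peak / noise > snr_threshold
    ```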

  16. Need of a consistent and convenient nucleus identification in ENDF files for the automatic construction of the depletion chains

    Science.gov (United States)

    Mosca, Pietro; Mounier, Claude

    2016-03-01

    The automatic construction of evolution chains recently implemented in the GALILEE system is based on the analysis of several ENDF files: the multigroup production cross sections present in the GENDF files processed by NJOY from the ENDF evaluation, the decay file and the fission product yields (FPY) file. In this context, this paper highlights the importance of nucleus identification for properly interconnecting the data mentioned above. The first part of the paper describes the present status of nucleus identification among the several ENDF files, focusing in particular on the use of the excited state number and of the isomeric state number. The second part reviews the problems encountered during the automatic construction of the depletion chains using recent ENDF data. The processing of JEFF-3.1.1, ENDF/B-VII.0 (decay and FPY) and JEFF-3.2 (production cross section) reveals problems concerning whether the nucleus identifiers comply with the ENDF-6 format, and sometimes inconsistencies among the various ENDF files. In addition, the analysis of EAF-2003 and EAF-2010 shows some inconsistencies between the ZA product identifier and the reaction identifier MT for the reactions (n, pα) and (n, 2np). As a main result of this work, our suggestion is to change the ENDF format to systematically use the isomeric state number to identify the nuclei. This proposal is already consistent with a large amount of ENDF data that does not follow the present ENDF format. This choice is the most convenient because, ultimately, it allows one to give human-readable names to the nuclei of the depletion chains.

  17. Automatic identification of bird targets with radar via patterns produced by wing flapping

    NARCIS (Netherlands)

    S. Zaugg; G. Saporta; E. van Loon; H. Schmaljohann; F. Liechti

    2008-01-01

    Bird identification with radar is important for bird migration research, environmental impact assessments (e.g. wind farms), aircraft security and radar meteorology. In a study on bird migration, radar signals from birds, insects and ground clutter were recorded. Signals from birds show a typical pa

  18. Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents

    CERN Document Server

    Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L

    2008-01-01

    Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...

  19. Analysis and Development of FACE Automatic Apparatus for Rapid Identification of Transuranium Isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Sebesta, E.H.

    1978-09-01

    A description of and operating manual for the FACE Automatic Apparatus has been written, along with documentation of the FACE machine operating program, to provide a user manual for the FACE Automatic Apparatus. In addition, FACE machine performance was investigated to improve transuranium throughput. Analysis of the causes of transuranium isotope loss, both chemical and radioactive, was undertaken. To lower radioactive loss, the dynamics of the most time-consuming step of the FACE machine, the chromatographic column output droplet drying and flaming in preparation of the sample for alpha spectroscopy and counting, was investigated. A series of droplets were dried in an experimental apparatus, demonstrating that droplets could be dried significantly faster through more intensive heating, enabling the FACE machine cycle to be shortened by 30-60 seconds. Proposals incorporating these ideas were provided for FACE machine development. The 66% chemical loss of product was analyzed and changes were proposed to reduce the radioisotope product loss. An analysis of the chromatographic column was also provided. All operating steps in the FACE machine are described and analyzed to provide a complete guide, along with the proposals for machine improvement.

  20. Price strategy and pricing strategy: terms and content identification

    OpenAIRE

    Panasenko Tetyana

    2015-01-01

    The article is devoted to the terminology and content identification of seemingly identical concepts "price strategy" and "pricing strategy". The article contains evidence that the price strategy determines the direction, principles and procedure of implementing the company price policy and pricing strategy creates a set of rules and practical methods of price formation in accordance with the pricing strategy of the company.

  1. Automatic ECG wave extraction in long-term recordings using Gaussian mesa function models and nonlinear probability estimators.

    Science.gov (United States)

    Dubois, Rémi; Maison-Blanche, Pierre; Quenet, Brigitte; Dreyfus, Gérard

    2007-12-01

    This paper describes the automatic extraction of the P, Q, R, S and T waves of electrocardiographic recordings (ECGs), through the combined use of a new machine-learning algorithm termed generalized orthogonal forward regression (GOFR) and of a specific parameterized function termed Gaussian mesa function (GMF). GOFR breaks up the heartbeat signal into Gaussian mesa functions, in such a way that each wave is modeled by a single GMF; the model thus generated is easily interpretable by the physician. GOFR is an essential ingredient in a global procedure that locates the R wave after some simple pre-processing, extracts the characteristic shape of each heart beat, assigns P, Q, R, S and T labels through automatic classification, discriminates normal beats (NB) from abnormal beats (AB), and extracts features for diagnosis. The efficiency of the detection of the QRS complex, and of the discrimination of NB from AB, is assessed on the MIT and AHA databases; the labeling of the P and T wave is validated on the QTDB database. PMID:17997186

  2. Semi-automatic construction of the Chinese-English MeSH using Web-based term translation method.

    Science.gov (United States)

    Lu, Wen-Hsiang; Lin, Shih-Jui; Chan, Yi-Che; Chen, Kuan-Hsi

    2005-01-01

    Due to the language barrier, non-English users are unable to retrieve the most updated medical information from the U.S. authoritative medical websites, such as PubMed and MedlinePlus. A few cross-language medical information retrieval (CLMIR) systems have been utilizing MeSH (Medical Subject Headings) with a multilingual thesaurus to bridge the gap. Unfortunately, MeSH has not yet been translated into traditional Chinese. We proposed a semi-automatic approach to constructing a Chinese-English MeSH based on Web-based term translation. The system provides knowledge engineers with candidate terms mined from anchor texts and search-result pages. The result is encouraging. Currently, more than 19,000 Chinese-English MeSH entries have been compiled. This thesaurus will be used in Chinese-English CLMIR in the future. PMID:16779085

  3. A hybrid model for automatic identification of risk factors for heart disease.

    Science.gov (United States)

    Yang, Hui; Garibaldi, Jonathan M

    2015-12-01

    Coronary artery disease (CAD) is the leading cause of death both in the UK and worldwide. The detection of related risk factors and tracking their progress over time is of great importance for early prevention and treatment of CAD. This paper describes an information extraction system that was developed to automatically identify risk factors for heart disease in medical records while the authors participated in the 2014 i2b2/UTHealth NLP Challenge. Our approaches rely on several natural language processing (NLP) techniques such as machine learning, rule-based methods, and dictionary-based keyword spotting to cope with the complicated clinical contexts inherent in a wide variety of risk factors. Our system achieved encouraging performance on the challenge test data with an overall micro-averaged F-measure of 0.915, which was competitive with the best system (F-measure of 0.927) of this challenge task. PMID:26375492

  4. Price strategy and pricing strategy: terms and content identification

    Directory of Open Access Journals (Sweden)

    Panasenko Tetyana

    2015-11-01

    Full Text Available The article is devoted to the terminology and content identification of seemingly identical concepts "price strategy" and "pricing strategy". The article contains evidence that the price strategy determines the direction, principles and procedure of implementing the company price policy and pricing strategy creates a set of rules and practical methods of price formation in accordance with the pricing strategy of the company.

  5. REMI and ROUSE: Quantitative Models for Long-Term and Short-Term Priming in Perceptual Identification

    NARCIS (Netherlands)

    E.J. Wagenmakers (Eric-Jan); R. Zeelenberg (René); D.E. Huber (David); J.G.W. Raaijmakers (Jeroen)

    2003-01-01

    textabstractThe REM model originally developed for recognition memory (Shiffrin & Steyvers, 1997) has recently been extended to implicit memory phenomena observed during threshold identification of words. We discuss two REM models based on Bayesian principles: a model for long-term priming (REMI; Sc

  6. Automatic Assignment of Non-Leaf MeSH Terms to Biomedical Articles.

    Science.gov (United States)

    Kavuluru, Ramakanth; Rios, Anthony

    2015-01-01

    Assigning labels from a hierarchical vocabulary is a well known special case of multi-label classification, often modeled to maximize micro F1-score. However, building accurate binary classifiers for poorly performing labels in the hierarchy can improve both micro and macro F1-scores. In this paper, we propose and evaluate classification strategies involving descendant node instances to build better binary classifiers for non-leaf labels with the use-case of assigning Medical Subject Headings (MeSH) to biomedical articles. Librarians at the National Library of Medicine tag each biomedical article to be indexed by their PubMed information system with terms from the MeSH terminology, a biomedical conceptual hierarchy with over 27,000 terms. Human indexers look at each article's full text to assign a set of most suitable MeSH terms for indexing it. Several recent automated attempts focused on using the article title and abstract text to identify MeSH terms for the corresponding article. Despite these attempts, it is observed that assigning MeSH terms corresponding to certain non-leaf nodes of the MeSH hierarchy is particularly challenging. Non-leaf nodes are very important as they constitute one third of the total number of MeSH terms. Here, we demonstrate the effectiveness of exploiting training examples of descendant terms of non-leaf nodes in improving the performance of conventional classifiers for the corresponding non-leaf MeSH terms. Specifically, we focus on reducing the false positives (FPs) caused due to descendant instances in traditional classifiers. Our methods are able to achieve a relative improvement of 7.5% in macro-F1 score while also increasing the micro-F1 score by 1.6% for a set of 500 non-leaf terms in the MeSH hierarchy. These results strongly indicate the critical role of incorporating hierarchical information in MeSH term prediction. To our knowledge, our effort is the first to demonstrate the role of hierarchical information in improving
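
    One straightforward way to exploit descendant examples when training a binary classifier for a non-leaf term is sketched below; the data structures and the TF-IDF/logistic-regression pipeline are assumptions for illustration and do not reproduce the paper's false-positive reduction strategies.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_nonleaf_classifier(term, articles, mesh_labels, descendants):
        """articles: list of title+abstract strings; mesh_labels: list of sets of
        MeSH terms per article; descendants: dict mapping a term to its descendants."""
        positive_terms = {term} | descendants.get(term, set())
        y = [int(bool(positive_terms & labels)) for labels in mesh_labels]
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
        model.fit(articles, y)
        return model
    ```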

  7. Automatic Resolution of Ambiguous Terms Based on Machine Learning and Conceptual Relations in the UMLS

    OpenAIRE

    Liu, Hongfang; Johnson, Stephen B.; Friedman, Carol

    2002-01-01

    Motivation. The UMLS has been used in natural language processing applications such as information retrieval and information extraction systems. The mapping of free-text to UMLS concepts is important for these applications. To improve the mapping, we need a method to disambiguate terms that possess multiple UMLS concepts. In the general English domain, machine-learning techniques have been applied to sense-tagged corpora, in which senses (or concepts) of ambiguous terms have been annotated (m...

  8. A smart pattern recognition system for the automatic identification of aerospace acoustic sources

    Science.gov (United States)

    Cabell, R. H.; Fuller, C. R.

    1989-01-01

    An intelligent air-noise recognition system is described that uses pattern recognition techniques to distinguish noise signatures of five different types of acoustic sources, including jet planes, propeller planes, a helicopter, train, and wind turbine. Information for classification is calculated using the power spectral density and autocorrelation taken from the output of a single microphone. Using this system, as many as 90 percent of test recordings were correctly identified, indicating that the linear discriminant functions developed can be used for aerospace source identification.

  9. Towards the automatic identification of cloudiness condition by means of solar global irradiance measurements

    Science.gov (United States)

    Sanchez, G.; Serrano, A.; Cancillo, M. L.

    2010-09-01

    This study focuses on the design of an automatic algorithm for classification of the cloudiness condition based only on global irradiance measurements. Clouds are a major modulating factor for the Earth radiation budget. They attenuate the solar radiation and control the terrestrial radiation participating in the energy balance. Generally, cloudiness is a limiting factor for the solar radiation reaching the ground, highly contributing to the Earth albedo. Additionally it is the main responsible for the high variability shown by the downward irradiance measured at ground level. Being a major source for the attenuation and high-frequency variability of the solar radiation available for energy purposes in solar power plants, the characterization of the cloudiness condition is of great interest. This importance is even higher in Southern Europe, where very high irradiation values are reached during long periods within the year. Thus, several indexes have been proposed in the literature for the characterization of the cloudiness condition of the sky. Among these indexes, those exclusively involving global irradiance are of special interest since this variable is the most widely available measurement in most radiometric stations. Taking this into account, this study proposes an automatic algorithm for classifying the cloudiness condition of the sky into three categories: cloud-free, partially cloudy and overcast. For that aim, solar global irradiance was measured by Kipp&Zonen CMP11 pyranometer installed on the terrace of the Physics building in the Campus of Badajoz (Spain) of the University of Extremadura. Measurements were recorded at one-minute basis for a period of study extending from 23 November 2009 to 31 March 2010. The algorithm is based on the clearness index kt, which is calculated as the ratio between the solar global downward irradiance measured at ground and the solar downward irradiance at the top of the atmosphere. Since partially cloudy conditions
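
    A minimal sketch of a clearness-index classification into the three sky conditions named above is given below; the kt thresholds are illustrative assumptions, not the values derived in the study.

    ```python
    import numpy as np

    def classify_sky(ghi, toa_irradiance, clear_kt=0.65, overcast_kt=0.30):
        """ghi and toa_irradiance: one-minute global irradiance series (W/m2)."""
        kt = np.asarray(ghi, dtype=float) / np.asarray(toa_irradiance, dtype=float)
        labels = np.where(kt >= clear_kt, "cloud-free",
                          np.where(kt <= overcast_kt, "overcast", "partially cloudy"))
        return kt, labels
    ```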

  10. Automatic identification of mobile and rigid substructures in molecular dynamics simulations and fractional structural fluctuation analysis.

    Directory of Open Access Journals (Sweden)

    Leandro Martínez

    Full Text Available The analysis of structural mobility in molecular dynamics plays a key role in data interpretation, particularly in the simulation of biomolecules. The most common mobility measures computed from simulations are the Root Mean Square Deviation (RMSD) and Root Mean Square Fluctuations (RMSF) of the structures. These are computed after the alignment of atomic coordinates in each trajectory step to a reference structure. This rigid-body alignment is not robust, in the sense that if a small portion of the structure is highly mobile, the RMSD and RMSF increase for all atoms, possibly resulting in poor quantification of the structural fluctuations and, often, in overlooking important fluctuations associated with biological function. The motivation of this work is to provide a robust measure of structural mobility that is practical and easy to interpret. We propose a Low-Order-Value-Optimization (LOVO) strategy for the robust alignment of the least mobile substructures in a simulation. These substructures are automatically identified by the method. The algorithm consists of the iterative superposition of the fraction of the structure displaying the smallest displacements. Therefore, the least mobile substructures are identified, providing a clearer picture of the overall structural fluctuations. Examples are given to illustrate the interpretative advantages of this strategy. The software for performing the alignments was named MDLovoFit and it is available as free software at: http://leandro.iqm.unicamp.br/mdlovofit.
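
    The iterative "align to the least mobile fraction" idea can be sketched as below with a standard Kabsch superposition; this is an illustrative re-implementation under simplifying assumptions, not the MDLovoFit code.

    ```python
    import numpy as np

    def kabsch(P, Q):
        """Optimal rotation superposing centered coordinates P onto Q (both Nx3)."""
        U, _, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    def lovo_align(mobile, reference, fraction=0.7, iterations=10):
        """Align 'mobile' (Nx3) to 'reference' using only the fraction of atoms
        with the smallest displacements, re-selected at every iteration."""
        subset = np.arange(len(mobile))
        aligned = mobile.copy()
        for _ in range(iterations):
            cm, cr = aligned[subset].mean(axis=0), reference[subset].mean(axis=0)
            R = kabsch(aligned[subset] - cm, reference[subset] - cr)
            aligned = (aligned - cm) @ R.T + cr
            dist = np.linalg.norm(aligned - reference, axis=1)
            subset = np.argsort(dist)[: int(fraction * len(mobile))]
        return aligned, subset
    ```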

  11. Automatic Identification of Critical Follow-Up Recommendation Sentences in Radiology Reports

    Science.gov (United States)

    Yetisgen-Yildiz, Meliha; Gunn, Martin L.; Xia, Fei; Payne, Thomas H.

    2011-01-01

    Communication of follow-up recommendations when abnormalities are identified on imaging studies is prone to error. When recommendations are not systematically identified and promptly communicated to referrers, poor patient outcomes can result. Using information technology can improve communication and improve patient safety. In this paper, we describe a text processing approach that uses natural language processing (NLP) and supervised text classification methods to automatically identify critical recommendation sentences in radiology reports. To increase the classification performance we enhanced the simple unigram token representation approach with lexical, semantic, knowledge-base, and structural features. We tested different combinations of those features with the Maximum Entropy (MaxEnt) classification algorithm. Classifiers were trained and tested with a gold standard corpus annotated by a domain expert. We applied 5-fold cross validation and our best performing classifier achieved 95.60% precision, 79.82% recall, 87.0% F-score, and 99.59% classification accuracy in identifying the critical recommendation sentences in radiology reports. PMID:22195225
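
    As an illustration of the classification step, the sketch below trains a maximum-entropy (multinomial logistic regression) sentence classifier over simple unigram counts; the example sentences are hypothetical, and the richer lexical, semantic, knowledge-base and structural features used in the paper are omitted.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled sentences: 1 = critical recommendation, 0 = other.
    sentences = ["Recommend follow-up CT in 3 months to assess the nodule.",
                 "The heart size is normal."]
    labels = [1, 0]

    maxent = make_pipeline(CountVectorizer(lowercase=True),
                           LogisticRegression(max_iter=1000))
    maxent.fit(sentences, labels)
    print(maxent.predict(["Follow-up ultrasound is recommended."]))
    ```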

  12. Automatic Screening of Missing Objects and Identification with Group Coding of RF Tags

    Directory of Open Access Journals (Sweden)

    G. Vijayaraju

    2013-11-01

    Full Text Available Here, a shipping container holds a collection of objects that are shipped together as a group, and an efficient strategy is needed to keep track of the objects it physically contains. By enabling radio frequency identification (RFID), the objects in the system can be identified efficiently, followed by a container-oriented strategy. A problem with the present approach is that its design does not allow a proper analysis for the accurate identification of missing objects, which plays a major role in such systems. A new technique is proposed to overcome this problem: the present design determines the object IDs with a group coding of the RF tags, so that missing objects can be identified from the analysis of the tags that are still readable, without the help of the entire database. The main idea of the method is to effectively divide the data related to a group and encode it so that coordination is maintained among the tags of the group. Simulations have been conducted on the present method, with extensive analysis on a large number of data sets under different environmental conditions, where there is an accurate analysis with respect to

  13. Comparison between three implementations of automatic identification algorithms for the quantification and characterization of mesoscale eddies in the South Atlantic Ocean

    Directory of Open Access Journals (Sweden)

    J. M. A. C. Souza

    2011-03-01

    Full Text Available Three methods for automatic detection of mesoscale coherent structures are applied to Sea Level Anomaly (SLA) fields in the South Atlantic. The first method is based on the wavelet packet decomposition of the SLA data, the second on the estimation of the Okubo-Weiss parameter and the third on a geometric criterion using the winding-angle approach. The results provide a comprehensive picture of the mesoscale eddies over the South Atlantic Ocean, emphasizing their main characteristics: amplitude, diameter, duration and propagation velocity. Five areas of particular eddy dynamics were selected: the Brazil Current, the Agulhas eddies propagation corridor, the Agulhas Current retroflexion, the Brazil-Malvinas confluence zone and the northern branch of the Antarctic Circumpolar Current (ACC). For these areas, mean propagation velocities and amplitudes were calculated. Two regions with long duration eddies were observed, corresponding to the propagation of Agulhas and ACC eddies. Through the comparison between the identification methods, their main advantages and shortcomings were detailed. The geometric criterion presents a better performance, mainly in terms of number of detections, duration of the eddies and propagation velocities. The results are particularly good for the Agulhas Rings, that presented the longest lifetimes of all South Atlantic eddies.
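
    The Okubo-Weiss criterion mentioned above compares strain against vorticity of the geostrophic flow derived from the SLA field; a minimal sketch is shown below, where the grid spacing, the Coriolis parameter handling and the common -0.2·sigma_W core threshold are simplifying assumptions.

    ```python
    import numpy as np

    def okubo_weiss(sla, dx, dy, f=-1e-4, g=9.81):
        """sla: 2-D sea level anomaly (m) on a regular grid; dx, dy: spacing (m);
        f: Coriolis parameter (negative in the Southern Hemisphere)."""
        detady, detadx = np.gradient(sla, dy, dx)
        u = -(g / f) * detady                 # geostrophic velocity components
        v = (g / f) * detadx
        dudy, dudx = np.gradient(u, dy, dx)
        dvdy, dvdx = np.gradient(v, dy, dx)
        s_n = dudx - dvdy                     # normal strain
        s_s = dvdx + dudy                     # shear strain
        omega = dvdx - dudy                   # relative vorticity
        return s_n ** 2 + s_s ** 2 - omega ** 2

    # Grid points where W < -0.2 * W.std() are commonly flagged as eddy cores.
    ```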

  14. Automatic identification of resting state networks: an extended version of multiple template-matching

    Science.gov (United States)

    Guaje, Javier; Molina, Juan; Rudas, Jorge; Demertzi, Athena; Heine, Lizette; Tshibanda, Luaba; Soddu, Andrea; Laureys, Steven; Gómez, Francisco

    2015-12-01

    Functional magnetic resonance imaging in resting state (fMRI-RS) constitutes an informative protocol to investigate several pathological and pharmacological conditions. A common approach to study this data source is through the analysis of changes in the so called resting state networks (RSNs). These networks correspond to well-defined functional entities that have been associated with different low and high order brain functions. RSNs may be characterized by using Independent Component Analysis (ICA). ICA provides a decomposition of the fMRI-RS signal into sources of brain activity, but it lacks information about the nature of the signal, i.e., whether the source is artifactual or not. Recently, a multiple template-matching (MTM) approach was proposed to automatically recognize RSNs in a set of Independent Components (ICs). This method provides valuable information to assess subjects at the individual level. Nevertheless, it lacks a mechanism to quantify how much certainty there is about the existence/absence of each network. This information may be important for the assessment of patients with severely damaged brains, in which RSNs may be greatly affected as a result of the pathological condition. In this work we propose a set of changes to the original MTM that improve the RSN recognition task and also extend the functionality of the method. The key points of this improvement are a standardization strategy and a modification of the method's constraints that adds flexibility to the approach. Additionally, we also introduce an analysis of the trustworthiness measurement of each RSN obtained by using the template-matching approach. This analysis consists of a thresholding strategy applied over the computed Goodness-of-Fit (GOF) between the set of templates and the ICs. The proposed method was validated on two independent studies (Baltimore, 23 healthy subjects; Liege, 27 healthy subjects) with different configurations of MTM. Results suggest that the method will provide

  15. Automatic Spatially-Adaptive Balancing of Energy Terms for Image Segmentation

    CERN Document Server

    Rao, Josna; Abugharbieh, Rafeef

    2009-01-01

    Image segmentation techniques are predominately based on parameter-laden optimization. The objective function typically involves weights for balancing competing image fidelity and segmentation regularization cost terms. Setting these weights suitably has been a painstaking, empirical process. Even if such ideal weights are found for a novel image, most current approaches fix the weight across the whole image domain, ignoring the spatially-varying properties of object shape and image appearance. We propose a novel technique that autonomously balances these terms in a spatially-adaptive manner through the incorporation of image reliability in a graph-based segmentation framework. We validate on synthetic data achieving a reduction in mean error of 47% (p-value << 0.05) when compared to the best fixed parameter segmentation. We also present results on medical images (including segmentations of the corpus callosum and brain tissue in MRI data) and on natural images.

  16. Automatic Whole-Spectrum Matching Techniques for Identification of Pure and Mixed Minerals using Raman Spectroscopy

    Science.gov (United States)

    Dyar, M. D.; Carey, C. J.; Breitenfeld, L.; Tague, T.; Wang, P.

    2015-12-01

    In situ use of Raman spectroscopy on Mars is planned for three different instruments in the next decade. Although implementations differ, they share the potential to identify surface minerals and organics and inform Martian geology and geochemistry. Their success depends on the availability of appropriate databases and software for phase identification. For this project, we have consolidated all known publicly-accessible Raman data on minerals for which independent confirmation of phase identity is available, and added hundreds of additional spectra acquired using varying instruments and laser energies. Using these data, we have developed software tools to improve mineral identification accuracy. For pure minerals, whole-spectrum matching algorithms far outperform existing tools based on diagnostic peaks in individual phases. Optimal matching accuracy does depend on subjective end-user choices for data processing (such as baseline removal, intensity normalization, and intensity squashing), as well as specific dataset characteristics. So, to make this tuning process amenable to automated optimization methods, we developed a machine learning-based generalization of these choices within a preprocessing and matching framework. Our novel method dramatically reduces the burden on the user and results in improved matching accuracy. Moving beyond identifying pure phases into quantification of relative abundances is a complex problem because relationships between peak intensity and mineral abundance are obscured by complicating factors: exciting laser frequency, the Raman cross section of the mineral, crystal orientation, and long-range chemical and structural ordering in the crystal lattices. Solving this un-mixing problem requires adaptation of our whole-spectrum algorithms and a large number of test spectra of minerals in known volume proportions, which we are creating for this project. Key to this effort is acquisition of spectra from mixtures of pure minerals paired
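
    A minimal sketch of whole-spectrum matching by cosine similarity after crude baseline removal and intensity normalisation is shown below; the preprocessing choices echo the ones named above but are illustrative stand-ins, not the tuned pipeline developed for the project.

    ```python
    import numpy as np

    def preprocess(intensities, baseline_window=101):
        y = np.asarray(intensities, dtype=float)
        pad = baseline_window // 2
        padded = np.pad(y, pad, mode="edge")
        # Rolling-minimum baseline estimate, subtracted before normalisation.
        baseline = np.array([padded[i:i + baseline_window].min() for i in range(len(y))])
        y = y - baseline
        norm = np.linalg.norm(y)
        return y / norm if norm > 0 else y

    def best_match(query, library):
        """library: dict mapping mineral name -> intensity array on the same grid."""
        q = preprocess(query)
        scores = {name: float(np.dot(q, preprocess(ref))) for name, ref in library.items()}
        return max(scores, key=scores.get), scores
    ```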

  17. Automatic Identification of Messages Related to Adverse Drug Reactions from Online User Reviews using Feature-based Classification.

    Directory of Open Access Journals (Sweden)

    Jingfang Liu

    2014-11-01

    Full Text Available User-generated medical messages on the Internet contain extensive information related to adverse drug reactions (ADRs) and are known as valuable resources for post-marketing drug surveillance. The aim of this study was to find an effective method to identify messages related to ADRs automatically from online user reviews. We conducted experiments on online user reviews using different feature sets and different classification techniques. Firstly, messages from three communities (an allergy community, a schizophrenia community and a pain management community) were collected, and 3000 messages were annotated. Secondly, an N-gram-based feature set and a medical domain-specific feature set were generated. Thirdly, three classification techniques, SVM, C4.5 and Naïve Bayes, were used to perform the classification tasks separately. Finally, we evaluated the performance of each method, using different feature sets and different classification techniques, by comparing metrics including accuracy and F-measure. In terms of accuracy, the SVM classifier exceeded 0.8 while the C4.5 and Naïve Bayes classifiers stayed below 0.8; meanwhile, the combined feature set, including the n-gram-based and domain-specific features, consistently outperformed either single feature set. In terms of F-measure, the highest value of 0.895 was achieved using the combined feature set and an SVM classifier. Overall, the best classification performance was obtained by using the combined feature set and the SVM classifier. Using the combined feature set and an SVM classifier therefore provides an effective method to identify messages related to ADRs automatically from online user reviews.

  18. Large data analysis: automatic visual personal identification in a demography of 1.2 billion persons

    Science.gov (United States)

    Daugman, John

    2014-05-01

    The largest biometric deployment in history is now underway in India, where the Government is enrolling the iris patterns (among other data) of all 1.2 billion citizens. The purpose of the Unique Identification Authority of India (UIDAI) is to ensure fair access to welfare benefits and entitlements, to reduce fraud, and enhance social inclusion. Only a minority of Indian citizens have bank accounts; only 4 percent possess passports; and less than half of all aid money reaches its intended recipients. A person who lacks any means of establishing their identity is excluded from entitlements and does not officially exist; thus the slogan of UIDAI is: "To give the poor an identity." This ambitious program enrolls a million people every day, across 36,000 stations run by 83 agencies, with a 3-year completion target for the entire national population. The halfway point was recently passed with more than 600 million persons now enrolled. In order to detect and prevent duplicate identities, every iris pattern that is enrolled is first compared against all others enrolled so far; thus the daily workflow now requires 600 trillion (or 600 million-million) iris cross-comparisons. Avoiding identity collisions (False Matches) requires high biometric entropy, and achieving the tremendous match speed requires phase bit coding. Both of these requirements are being delivered operationally by wavelet methods developed by the author for encoding and comparing iris patterns, which will be the focus of this "Large Data Award" presentation.
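
    The phase-bit comparison at the heart of such cross-matching reduces to a masked Hamming distance; a minimal sketch is given below, where the array shapes and the decision threshold are illustrative assumptions rather than the deployed system's parameters.

    ```python
    import numpy as np

    def iris_hamming_distance(code_a, code_b, mask_a, mask_b):
        """codes and masks: boolean arrays; masks flag usable (unoccluded) bits."""
        usable = mask_a & mask_b
        disagreements = (code_a ^ code_b) & usable
        return disagreements.sum() / max(int(usable.sum()), 1)

    # Two codes are typically declared a match when the distance falls below a
    # threshold chosen so that false matches are astronomically unlikely.
    ```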

  19. AN AUTOMATIC LEAF RECOGNITION SYSTEM FOR PLANT IDENTIFICATION USING MACHINE VISION TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    VIJAY SATTI

    2013-04-01

    Full Text Available Plants are the backbone of all life on Earth and an essential resource for human well-being. Plant recognition is very important in agriculture for the management of plant species, whereas botanists can use this application for medicinal purposes. Leaves of different plants have different characteristics which can be used to classify them. This paper presents a simple and computationally efficient method for plant identification using digital image processing and machine vision technology. The proposed approach consists of three phases: pre-processing, feature extraction and classification. Pre-processing is the technique of enhancing data images prior to computational processing. The feature extraction phase derives features based on the color and shape of the leaf image. These features are used as inputs to the classifier for efficient classification, and the results were tested and compared using an Artificial Neural Network (ANN) and a Euclidean (KNN) classifier. The network was trained with 1907 sample leaves of 33 different plant species taken from the Flavia dataset. The proposed approach is 93.3 percent accurate using the ANN classifier, and the comparison of classifiers shows that the ANN takes less average execution time than the Euclidean distance method.

  20. Hybrid EEG—Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal

    Directory of Open Access Journals (Sweden)

    Malik M. Naeem Mannan

    2016-02-01

    Full Text Available Contamination by eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and can lead to misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy for brain-computer interface (BCI) development. In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. A comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information between the corrected EEG and artifact-free EEG data.
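
    The ICA step of such a framework can be sketched on simulated data: decompose the multichannel signal, flag the component most correlated with an eye-movement reference (standing in here for the eye-tracker trace), zero it and reconstruct. Channel counts, the synthetic blink model and the correlation criterion are assumptions, not the authors' exact procedure.

```python
# ICA-based removal of a blink-like component from simulated multichannel "EEG".
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_samples = 2000
t = np.linspace(0, 10, n_samples)
neural = np.sin(2 * np.pi * 10 * t)                          # 10 Hz "neural" rhythm
blink = (rng.random(n_samples) < 0.002).astype(float)
blink = np.convolve(blink, np.hanning(50), mode="same") * 50  # blink-like bursts

mixing = rng.normal(size=(4, 2))                              # 4 channels, 2 sources
eeg = np.column_stack([neural, blink]) @ mixing.T + 0.05 * rng.normal(size=(n_samples, 4))

ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(eeg)                              # (n_samples, n_components)

eye_reference = blink                                          # stand-in for the eye-tracker trace
corr = [abs(np.corrcoef(sources[:, k], eye_reference)[0, 1]) for k in range(sources.shape[1])]
artifact = int(np.argmax(corr))

sources[:, artifact] = 0.0                                     # remove the ocular component
eeg_clean = ica.inverse_transform(sources)
print("removed component", artifact, "corr =", round(max(corr), 2))
```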

  1. Hybrid EEG--Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal.

    Science.gov (United States)

    Mannan, Malik M Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M Ahmad

    2016-01-01

    Contamination by eye movement and blink artifacts in Electroencephalogram (EEG) recording makes the analysis of EEG data more difficult and can lead to misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy to develop the brain-computer interface (BCI). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data by using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. A comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information between corrected EEG and artifact-free EEG data. PMID:26907276

  2. Hybrid EEG—Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal

    Science.gov (United States)

    Mannan, Malik M. Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M. Ahmad

    2016-01-01

    Contamination by eye movement and blink artifacts in Electroencephalogram (EEG) recording makes the analysis of EEG data more difficult and can lead to misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy to develop the brain-computer interface (BCI). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data by using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. A comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information between corrected EEG and artifact-free EEG data. PMID:26907276

  3. Automatic estimation of aquifer parameters using long-term water supply pumping and injection records

    Science.gov (United States)

    Luo, Ning; Illman, Walter A.

    2016-04-01

    Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
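
    Fingerprinting water levels to rate changes with the Theis solution amounts to fitting T and S so that s(t) = Q/(4*pi*T)*W(u), with u = r^2*S/(4*T*t), matches the observed drawdown. A hedged sketch with an invented pumping rate, radius and observations follows; the actual WELLS/PEST workflow in the record handles superposed, variable-rate events.

```python
# Least-squares fit of Theis parameters to synthetic drawdown data.
import numpy as np
from scipy.special import exp1            # Theis well function W(u) = E1(u)
from scipy.optimize import curve_fit

Q = 0.05        # pumping rate, m^3/s (assumed)
r = 250.0       # distance from production to monitoring borehole, m (assumed)

def theis_drawdown(t, T, S):
    """Drawdown s(t) for transmissivity T and storativity S."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Invented "observed" drawdowns: Theis response plus 2% noise
t_obs = np.array([600.0, 1800.0, 3600.0, 7200.0, 14400.0, 28800.0])   # seconds
rng = np.random.default_rng(2)
s_obs = theis_drawdown(t_obs, 5e-3, 2e-4) * (1 + 0.02 * rng.normal(size=t_obs.size))

(T_fit, S_fit), _ = curve_fit(theis_drawdown, t_obs, s_obs,
                              p0=(1e-3, 1e-4),
                              bounds=([1e-6, 1e-8], [1.0, 1e-1]))
print(f"T = {T_fit:.2e} m^2/s, S = {S_fit:.2e}")
```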

  4. Short-term price overreaction: Identification, testing, exploitation

    OpenAIRE

    Caporale, Guglielmo Maria; Gil-Alana, Luis; Plastun, Alex

    2014-01-01

    This paper examines short-term price reactions after one-day abnormal price changes and whether they create exploitable profit opportunities in various financial markets. A t-test confirms the presence of overreactions and also suggests that there is an “inertia anomaly”, i.e. after an overreaction day prices tend to move in the same direction for some time. A trading robot approach is then used to test two trading strategies aimed at exploiting the detected anomalies to make abnormal profits...

  5. Short-Term Price Overreactions: Identification, Testing, Exploitation

    OpenAIRE

    Caporale, Guglielmo Maria; Luis A. Gil-Alana; Plastun, Alex

    2014-01-01

    This paper examines short-term price reactions after one-day abnormal price changes and whether they create exploitable profit opportunities in various financial markets. A t-test confirms the presence of overreactions and also suggests that there is an “inertia anomaly”, i.e. after an overreaction day prices tend to move in the same direction for some time. A trading robot approach is then used to test two trading strategies aimed at exploiting the detected anomalies to make abnormal profits...

  6. Automatic identification and placement of measurement stations for hydrological discharge simulations at basin scale

    Science.gov (United States)

    Grassi, P. R.; Ceppi, A.; Cancarè, F.; Ravazzani, G.; Mancini, M.; Sciuto, D.

    2012-04-01

    corresponding data is used, and false that it is not used. Using this definition of the solution space, it is possible to apply various optimization algorithms such as genetic algorithms and simulated annealing. Iterating over a large set of possible configurations, these algorithms provide the set of Pareto-optimal solutions, i.e. the number of measuring points is minimized while the forecasting accuracy is maximized. The identified Pareto curve is approximate, since identification of the complete Pareto curve is practically impossible due to the large number of possible configurations. From the experimental results, as expected, we notice that a certain set of weather data is essential for hydrological simulations while the rest is negligible. By combining the outcomes of the different optimization algorithms, it is possible to extract a reliable set of rules for placing measurement stations for forecast monitoring.

  7. Hybrid ICA – regression: automatic identification and removal of ocular artifacts from electroencephalographic signals

    Directory of Open Access Journals (Sweden)

    Malik Muhammad Naeem Mannan

    2016-05-01

    Full Text Available Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record the electrical activity of the brain. However, it is difficult to analyze EEG signals due to contamination by ocular artifacts, which potentially results in misleading conclusions. It has also been shown that contamination by ocular artifacts reduces the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove or reduce these artifacts before analyzing EEG signals for applications like BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression and higher-order statistics is proposed to identify and eliminate artifactual activities from EEG data. We used simulated, experimental and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in the EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA) and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removal of ocular activities from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between reconstructed and original EEG.

  8. Hybrid ICA-Regression: Automatic Identification and Removal of Ocular Artifacts from Electroencephalographic Signals.

    Science.gov (United States)

    Mannan, Malik M Naeem; Jeong, Myung Y; Kamran, Muhammad A

    2016-01-01

    Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record the electrical activity of the brain. However, it is difficult to analyze EEG signals due to contamination by ocular artifacts, which potentially results in misleading conclusions. It has also been shown that contamination by ocular artifacts reduces the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove or reduce these artifacts before analyzing EEG signals for applications like BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression and higher-order statistics is proposed to identify and eliminate artifactual activities from EEG data. We used simulated, experimental and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in the EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA), and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removal of ocular activities from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between reconstructed and original EEG. PMID:27199714

  9. Hybrid ICA—Regression: Automatic Identification and Removal of Ocular Artifacts from Electroencephalographic Signals

    Science.gov (United States)

    Mannan, Malik M. Naeem; Jeong, Myung Y.; Kamran, Muhammad A.

    2016-01-01

    Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record the electrical activity of the brain. However, it is difficult to analyze EEG signals due to contamination by ocular artifacts, which potentially results in misleading conclusions. It has also been shown that contamination by ocular artifacts reduces the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove or reduce these artifacts before analyzing EEG signals for applications like BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression and higher-order statistics is proposed to identify and eliminate artifactual activities from EEG data. We used simulated, experimental and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in the EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA), and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removal of ocular activities from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between reconstructed and original EEG. PMID:27199714

  10. Identification of terms to define unconstrained air transportation demands

    Science.gov (United States)

    Jacobson, I. D.; Kuhilhau, A. R.

    1982-01-01

    The factors involved in the evaluation of unconstrained air transportation systems were carefully analyzed. By definition an unconstrained system is taken to be one in which the design can employ innovative and advanced concepts no longer limited by present environmental, social, political or regulatory settings. Four principal evaluation criteria are involved: (1) service utilization, based on the operating performance characteristics as viewed by potential patrons; (2) community impacts, reflecting decisions based on the perceived impacts of the system; (3) technological feasibility, estimating what is required to reduce the system to practice; and (4) financial feasibility, predicting the ability of the concepts to attract financial support. For each of these criteria, a set of terms or descriptors was identified, which should be used in the evaluation to render it complete. It is also demonstrated that these descriptors have the following properties: (a) their interpretation may be made by different groups of evaluators; (b) their interpretations and the way they are used may depend on the stage of development of the system in which they are used; (c) in formulating the problem, all descriptors should be addressed independent of the evaluation technique selected.

  11. Distributed and Overlapping Neural Substrates for Object Individuation and Identification in Visual Short-Term Memory.

    Science.gov (United States)

    Naughtin, Claire K; Mattingley, Jason B; Dux, Paul E

    2016-02-01

    Object individuation and identification are 2 key processes involved in representing visual information in short-term memory (VSTM). Individuation involves the use of spatial and temporal cues to register an object as a distinct perceptual event relative to other stimuli, whereas object identification involves extraction of featural and related conceptual properties of a stimulus. Together, individuation and identification provide the "what," "where," and "when" of visual perception. In the current study, we asked whether individuation and identification processes are underpinned by distinct neural substrates, and to what extent brain regions that reflect these 2 operations are consistent across encoding, maintenance, and retrieval stages of VSTM. We used functional magnetic resonance imaging to identify brain regions that represent the number of objects (individuation) and/or object features (identification) in an array. Using univariate and multivariate analyses, we found substantial overlap between these 2 operations in the brain. Moreover, we show that regions supporting individuation and identification vary across distinct stages of information processing. Our findings challenge influential models of multiple-object encoding in VSTM, which argue that individuation and identification are underpinned by a limited set of nonoverlapping brain regions. PMID:25217471

  12. Resolving Quasi-Synonym Relationships in Automatic Thesaurus Construction Using Fuzzy Rough Sets and an Inverse Term Frequency Similarity Function

    Science.gov (United States)

    Davault, Julius M., III.

    2009-01-01

    One of the problems associated with automatic thesaurus construction is with determining the semantic relationship between word pairs. Quasi-synonyms provide a type of equivalence relationship: words are similar only for purposes of information retrieval. Determining such relationships in a thesaurus is hard to achieve automatically. The term…

  13. Automatic Peak Identification in Scanning Electron Microscopy/Energy Dispersive X-ray (SEM/EDS) Microanalysis: Can You Always Trust the Results?

    Science.gov (United States)

    Newbury, D.

    2006-05-01

    The degree of sophistication of computer-aided scanning electron microscopy/energy dispersive x-ray spectrometry (SEM/EDS) microanalysis has advanced to the point where it is possible with a single command to automatically perform sequential qualitative analysis (peak identification) and quantitative analysis and then create a report of analysis with full statistical support. Often the actual algorithms employed in commercial software for each stage of the analysis are not provided or tested in sufficient detail nor are any inherent limitations in applying such "black box" software described to enable the analyst to estimate the performance. The identification of the elements responsible for the characteristic peaks in the EDS spectrum is obviously the first critical step in performing a robust analysis. Can the software be trusted to always deliver the correct elemental identification for the easiest possible case: peaks with high intensity and high peak-to-background that arise from major constituents (i.e., concentration, C above 0.1 mass fraction = 10 weight percent) and which do not suffer peak interference from another constituent? Unfortunately, testing of automatic peak identification procedures in a series of commercial systems has revealed that serious mistakes occur approximately 3 to 5 percent of the time for this easiest case [1]. Moreover, these mistakes are not random but occur systematically for certain elements. The situation is even worse when minor (C from 0.01 to 0.1) and trace (C less than 0.01) constituents are of interest or when analysis is performed under "low voltage" conditions (beam energy 5 keV or less). The prudent analyst will always use manual peak identification procedures to provide confirmation of automatic peak identification results before proceeding to quantitative analysis [2]. [1] Newbury, D., Microscopy and Microanalysis, 11 (2005) 545. [2] Goldstein, J., Newbury, D., Joy, D., Lyman, C., Echlin, P., Lifshin, E., Sawyer, L

  14. Identification time constants of the synchronous machine in high reliability power supply systems in Kozloduy NPP for mathematical modeling of automatical control system

    International Nuclear Information System (INIS)

    This article presents the results of identifying the subjects, as a step toward creating base models of the automatic control system, synchronous generator and motor in Simulink (included in Matlab 5.3). A time-series analysis method is used; the third series obtained contains the machine's time constants, such as the d-axis transient short-circuit time constant Td', together with the mechanical parameters, initial conditions and saturation parameters. The results of the research allow 'machine-regulator' type models to be created for analysis in Simulink/Matlab that correspond to the specified objects. (authors)

  15. Automatic Assessment of Global Craniofacial Differences between Crouzon mice and Wild-type mice in terms of the Cephalic Index

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Oubel, Estanislao; Frangi, Alejandro F.; Darvann, Tron Andre; Hermann, Nuno V.; Kreiborg, Sven; Larsen, Per; Ersbøll, Bjarne Kjær; Perlyn, Chad A.; Morriss-Kay, Gillian

    registering each mouse to the atlas using affine transformations. The skull length and width are then measured on the atlas and propagated to all subjects to obtain automatic measurements of the cephalic index. The registration accuracy was estimated by RMS landmark errors. Even though the accuracy of...... landmark matching is limited using only affine transformations, the errors were considered acceptable. The automatic estimation of the cephalic index was in full agreement with the gold standard measurements. Discriminant analysis of the three scaling parameters resulted in a good classification of the...

  16. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    Science.gov (United States)

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated to complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery. PMID:24473345

  17. An image analysis and classification system for automatic weed species identification in different crops for precision weed management

    OpenAIRE

    Weis, Martin

    2010-01-01

    A system for the automatic weed detection in arable fields was developed in this thesis. With the resulting maps, weeds in fields can be controlled on a sub-field level, according to their abundance. The system contributes to the emerging field of Precision Farming technologies. Precision Farming technologies have been developed during the last two decades to refine the agricultural management practise. The goal of Precision Farming is to vary treatments within fields, according to the local ...

  18. Planning for Site Transition to Long-Term Stewardship: Identification of Requirements and Issues

    Energy Technology Data Exchange (ETDEWEB)

    Banaee, Jila

    2002-08-01

    A systematic methodology is presented and applied for the identification of requirements and issues pertaining to the planning for, and transition to, long term stewardship (LTS). The method has been applied to three of the twelve identified LTS functions. The results of the application of the methodology to contaminated and uncontaminated federal real property in those three functions are presented. The issues that could be seen as impediments to the implementation of LTS are also identified for the three areas under consideration. The identified requirements are significant and in some cases complex to implement. It is clear that early and careful planning is required in all circumstances.

  19. Planning for Site Transition to Long-Term Stewardship: Identification of Requirements and Issues

    International Nuclear Information System (INIS)

    A systematic methodology is presented and applied for the identification of requirements and issues pertaining to the planning for, and transition to, long term stewardship (LTS). The method has been applied to three of the twelve identified LTS functions. The results of the application of the methodology to contaminated and uncontaminated federal real property in those three functions are presented. The issues that could be seen as impediments to the implementation of LTS are also identified for the three areas under consideration. The identified requirements are significant and in some cases complex to implement. It is clear that early and careful planning is required in all circumstances

  20. Automatic Keywords Extraction for Punjabi Language

    Directory of Open Access Journals (Sweden)

    Vishal Gupta

    2011-09-01

    Full Text Available Automatic keyword extraction is the task of identifying a small set of words, key phrases, keywords, or key segments from a document that can describe its meaning. Keywords are useful tools as they give the shortest summary of the document. This paper concentrates on automatic keyword extraction for Punjabi-language text. It includes several phases: removing stop words, identification of Punjabi nouns and noun stemming, calculation of Term Frequency and Inverse Sentence Frequency (TF-ISF), selection of Punjabi keywords as nouns with a high TF-ISF score, and a title/headline feature for Punjabi text. The extracted keywords are very helpful in automatic indexing, text summarization, information retrieval, classification, clustering, topic detection and tracking, web search, etc.
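
    A language-agnostic sketch of the TF-ISF scoring stage (term frequency times inverse sentence frequency); stop-word removal is shown crudely, and the Punjabi-specific noun identification and stemming phases are omitted. The sentences and stop-word list are illustrative only.

```python
# TF-ISF keyword scoring on a toy document.
import math
from collections import Counter

sentences = [
    "farmers monitor crop pests with image based traps",
    "automatic pest identification helps farmers act early",
    "early identification of pests reduces crop losses",
]
stop_words = {"with", "of", "the", "helps", "act"}

tokenized = [[w for w in s.split() if w not in stop_words] for s in sentences]
tf = Counter(w for sent in tokenized for w in sent)        # term frequency
n_sentences = len(tokenized)

def isf(word):
    # inverse sentence frequency: rarer across sentences -> higher weight
    containing = sum(1 for sent in tokenized if word in sent)
    return math.log(n_sentences / containing)

scores = {w: tf[w] * isf(w) for w in tf}
keywords = sorted(scores, key=scores.get, reverse=True)[:5]
print(keywords)
```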

  1. Automatic identification approach for high-performance liquid chromatography-multiple reaction monitoring fatty acid global profiling.

    Science.gov (United States)

    Tie, Cai; Hu, Ting; Jia, Zhi-Xin; Zhang, Jin-Lan

    2015-08-18

    Fatty acids (FAs) are a group of lipid molecules that are essential to organisms. As potential biomarkers for different diseases, FAs have attracted increasing attention from both biological researchers and the pharmaceutical industry. A sensitive and accurate method for globally profiling and identifying FAs is required for biomarker discovery. The high selectivity and sensitivity of high-performance liquid chromatography-multiple reaction monitoring (HPLC-MRM) give it great potential to fulfill the need to identify FAs from complicated matrices. This paper develops a new approach to global FA profiling and identification through HPLC-MRM FA data mining. Mathematical models for identifying FAs were simulated using the isotope-induced retention time (RT) shift (IRS) and peak area ratios between parallel isotope peaks for a series of FA standards. The FA structures were predicted using another model based on the RT and molecular weight. Fully automated FA identification software was coded using the Qt platform based on these mathematical models. Different samples were used to verify the software. A high identification efficiency (greater than 75%) was observed when 96 FA species were identified in plasma. This FA identification strategy promises to accelerate FA research and applications. PMID:26189701

  2. Proliferating cell nuclear antigen (PCNA) allows the automatic identification of follicles in microscopic images of human ovarian tissue

    CERN Document Server

    Kelsey, Thomas W; Castillo, Luis; Wallace, W Hamish B; Gonzálvez, Francisco Cóppola; 10.2147/PLMI.S11116

    2010-01-01

    Human ovarian reserve is defined by the population of nongrowing follicles (NGFs) in the ovary. Direct estimation of ovarian reserve involves the identification of NGFs in prepared ovarian tissue. Previous studies involving human tissue have used hematoxylin and eosin (HE) stain, with NGF populations estimated by human examination either of tissue under a microscope, or of images taken of this tissue. In this study we replaced HE with proliferating cell nuclear antigen (PCNA), and automated the identification and enumeration of NGFs that appear in the resulting microscopic images. We compared the automated estimates to those obtained by human experts, with the "gold standard" taken to be the average of the conservative and liberal estimates by three human experts. The automated estimates were within 10% of the "gold standard", for images at both 100x and 200x magnifications. Automated analysis took longer than human analysis for several hundred images, not allowing for breaks from analysis needed by humans. O...

  3. Automatic de-identification of electronic medical records using token-level and character-level conditional random fields.

    Science.gov (United States)

    Liu, Zengjian; Chen, Yangxin; Tang, Buzhou; Wang, Xiaolong; Chen, Qingcai; Li, Haodi; Wang, Jingfeng; Deng, Qiwen; Zhu, Suisong

    2015-12-01

    De-identification, identifying and removing all protected health information (PHI) present in clinical data including electronic medical records (EMRs), is a critical step in making clinical data publicly available. The 2014 i2b2 (Center of Informatics for Integrating Biology and Bedside) clinical natural language processing (NLP) challenge sets up a track for de-identification (track 1). In this study, we propose a hybrid system based on both machine learning and rule approaches for the de-identification track. In our system, PHI instances are first identified by two (token-level and character-level) conditional random fields (CRFs) and a rule-based classifier, and then are merged by some rules. Experiments conducted on the i2b2 corpus show that our system submitted for the challenge achieves the highest micro F-scores of 94.64%, 91.24% and 91.63% under the "token", "strict" and "relaxed" criteria respectively, which is among top-ranked systems of the 2014 i2b2 challenge. After integrating some refined localization dictionaries, our system is further improved with F-scores of 94.83%, 91.57% and 91.95% under the "token", "strict" and "relaxed" criteria respectively. PMID:26122526
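
    A toy token-level CRF in the spirit of the described system can be set up as below (the real system also uses a character-level CRF, a rule-based classifier and a merging step). This assumes the sklearn-crfsuite package; the two training sentences, tag set and feature template are invented for illustration.

```python
# Token-level CRF for PHI-style tagging (toy data).
import sklearn_crfsuite

def token_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "is_digit": w.isdigit(),
        "prev": sent[i - 1].lower() if i > 0 else "<s>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "</s>",
    }

sents = [["Patient", "John", "Smith", "seen", "on", "12", "March"],
         ["Dr", "Lee", "reviewed", "the", "labs", "today"]]
tags  = [["O", "B-NAME", "I-NAME", "O", "O", "B-DATE", "I-DATE"],
         ["O", "B-NAME", "O", "O", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, tags)
print(crf.predict([[token_features(["Ms", "Garcia", "called"], i) for i in range(3)]]))
```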

  4. A New Color Facial Identification Feature Extraction Method and Automatic Identification

    Institute of Scientific and Technical Information of China (English)

    高燕; 明曙军; 刘永俊

    2011-01-01

    Face recognition has achieved some success, and its algorithms are constantly being improved. Noting that traditional linear analysis methods all rely on an average sample, this paper proposes face recognition based on intermediate samples. This method effectively removes the influence of interfering samples on the average sample. Combined with color face recognition, the paper proposes color facial identification feature extraction and automatic identification based on intermediate samples. Finally, extensive experiments on the internationally used AR standard color face database verify the effectiveness of the proposed method.

  5. Automatic segmentation of the hippocampus for preterm neonates from early-in-life to term-equivalent age

    Directory of Open Access Journals (Sweden)

    Ting Guo

    2015-01-01

    Conclusions: MAGeT-Brain is capable of segmenting hippocampi accurately in preterm neonates, even at early-in-life. Hippocampal asymmetry with a larger right side is demonstrated on early-in-life images, suggesting that this phenomenon has its onset in the 3rd trimester of gestation. Hippocampal volume assessed at the time of early-in-life and term-equivalent age is linearly associated with GA at birth, whereby smaller volumes are associated with earlier birth.

  6. Totomatix: a novel automatic set-up to control diurnal, diel and long-term plant nitrate nutrition

    OpenAIRE

    Adamowicz, Stephane; Le Bot, Jacques; Huanosto, Ruth; Fabre, Marie Joseph

    2011-01-01

    Background Stand-alone nutritional set-ups are useful tools to grow plants at defined nutrient availabilities and to measure nutrient uptake rates continuously, in particular that for nitrate. Their use is essential when the measurements are meant to cover long time periods. These complex systems have, however, important drawbacks, including poor long-term reliability and low precision at high nitrate concentration. This explains why the information dealing with diel dynamics of nitrate uptak...

  7. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences produced by a finite automaton. Although they are not random, they may appear random. They are complicated, in the sense of not being ultimately periodic; they may also look complicated, in the sense that it may not be easy to name the rule by which the sequence is generated, yet there exists a rule which generates the sequence. The concept of automatic sequences has applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.

  8. Automatic identification of organ/tissue regions in CT image data for the implementation of patient specific phantoms for treatment planning in cancer therapy

    Science.gov (United States)

    Sparks, Richard Blaine

    In vivo targeted radiotherapy has the potential to be an effective treatment for many types of cancer. Agents which show preferred uptake by cancerous tissue are labeled with radio-nuclides and administered to the patient. The preferred uptake by the cancerous tissue allows for the delivery of therapeutically effective radiation absorbed doses to tumors, while sparing normal tissue. Accurate absorbed dose estimation for targeted radiotherapy would be of great clinical value in a patient's treatment planning. One of the problems with calculating absorbed dose involves the use of geometric mathematical models of the human body for the simulation of the radiation transport. Since many patients differ markedly from these models, errors in the absorbed dose estimation procedure result from using these models. Patient specific models developed using individual patient's anatomical structure would greatly enhance the accuracy of dosimetry calculations. Patient specific anatomy data is available from CT or MRI images, but the very time consuming process of manual organ and tissue identification limits its practicality for routine clinical use. This study uses a statistical classifier to automatically identify organs and tissues from CT image data. In this study, image "slices" from thirty-five different subjects at approximately the same anatomical position are used to "train" the statistical classifier. Multi-dimensional probability distributions of image characteristics, such as location and intensity, are generated from the training images. Statistical classification rules are then used to identify organs and tissues in five previously unseen images. A variety of pre-processing and post-processing techniques are then employed to enhance the classification procedure. This study demonstrated the promise of statistical classifiers for solving segmentation problems involving human anatomy where there is an underlying pattern of structure. Despite the poor quality of
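
    The statistical-classifier idea (class-conditional distributions over simple voxel features such as position and intensity) can be sketched with a Gaussian naive Bayes rule; the voxel values and tissue labels below are invented, and the original work's actual features and classification rules may differ.

```python
# Gaussian naive Bayes over (position, intensity) voxel features (toy data).
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Training voxels: [x, y, intensity]; labels: 0 = lung, 1 = liver, 2 = bone
X_train = np.array([
    [0.30, 0.60, -700], [0.35, 0.65, -650],
    [0.60, 0.40,   60], [0.62, 0.42,   55],
    [0.50, 0.10,  900], [0.48, 0.12,  950],
], dtype=float)
y_train = np.array([0, 0, 1, 1, 2, 2])

clf = GaussianNB().fit(X_train, y_train)
print(clf.predict(np.array([[0.61, 0.41, 58.0]])))   # labels this voxel as liver (1)
```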

  9. Identification of Biocontrol Bacteria against Soybean Root Rot with Biolog Automatic Microbiology Analysis System

    Institute of Scientific and Technical Information of China (English)

    许艳丽; 刘海龙; 李春杰; 潘凤娟; 李淑娴; 刘新晶

    2012-01-01

    In order to identify the taxonomic position of two biocontrol bacteria against soybean root rot, traditional morphological identification and the BIOLOG automatic microbiology analysis system were used to identify strains B021a and B04b. The results showed that the similarity value of strain B021a with Vibrio tubiashii was 0.634, with a probability of 86% and a genetic distance of 4.00, and the similarity value of strain B04b with Pasteurella trehalosi was 0.610, with a probability of 75% and a genetic distance of 2.77. Based on colony morphological properties and the BIOLOG analysis system, strain B021a was identified as Vibrio tubiashii and strain B04b as Pasteurella trehalosi.

  10. Automatic Term-Level Abstraction

    OpenAIRE

    Brady, Bryan

    2011-01-01

    Recent advances in decision procedures for Boolean satisfiability (SAT) and Satisfiability Modulo Theories (SMT) have increased the performance and capacity of formal verification techniques. Even with these advances, formal methods often do not scale to industrial-size designs, due to the gap between the level of abstraction at which designs are described and the level at which SMT solvers can be applied. In order to fully exploit the power of state-of-the-art SMT solvers, abstraction ...

  11. Mining Twitter as a First Step toward Assessing the Adequacy of Gender Identification Terms on Intake Forms.

    Science.gov (United States)

    Hicks, Amanda; Hogan, William R; Rutherford, Michael; Malin, Bradley; Xie, Mengjun; Fellbaum, Christiane; Yin, Zhijun; Fabbri, Daniel; Hanna, Josh; Bian, Jiang

    2015-01-01

    The Institute of Medicine (IOM) recommends that health care providers collect data on gender identity. If these data are to be useful, they should utilize terms that characterize gender identity in a manner that is 1) sensitive to transgender and gender non-binary individuals (trans* people) and 2) semantically structured to render associated data meaningful to the health care professionals. We developed a set of tools and approaches for analyzing Twitter data as a basis for generating hypotheses on language used to identify gender and discuss gender-related issues across regions and population groups. We offer sample hypotheses regarding regional variations in the usage of certain terms such as 'genderqueer', 'genderfluid', and 'neutrois' and their usefulness as terms on intake forms. While these hypotheses cannot be directly validated with Twitter data alone, our data and tools help to formulate testable hypotheses and design future studies regarding the adequacy of gender identification terms on intake forms. PMID:26958196

  12. Automatic Number Plate Recognition System

    OpenAIRE

    Rajshree Dhruw; Dharmendra Roy

    2014-01-01

    Automatic Number Plate Recognition (ANPR) is a mass surveillance system that captures images of vehicles and recognizes their license numbers. The objective is to design an efficient automatic authorized vehicle identification system using the Indian vehicle number plate. In this paper we discuss different methodologies for number plate localization, character segmentation and recognition of the number plate. The system is mainly applicable for non-standard Indian number plates by recognizing...

  13. Operator overloading as an enabling technology for automatic differentiation

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [Marquette Univ., Milwaukee, WI (United States); Argonne National Lab., IL (United States)]; Griewank, A. [Argonne National Lab., IL (United States)]

    1993-05-01

    We present an example of the science that is enabled by object-oriented programming techniques. Scientific computation often needs derivatives for solving nonlinear systems such as those arising in many PDE algorithms, optimization, parameter identification, stiff ordinary differential equations, or sensitivity analysis. Automatic differentiation computes derivatives accurately and efficiently by applying the chain rule to each arithmetic operation or elementary function. Operator overloading enables the techniques of either the forward or the reverse mode of automatic differentiation to be applied to real-world scientific problems. We illustrate automatic differentiation with an example drawn from a model of unsaturated flow in a porous medium. The problem arises from planning for the long-term storage of radioactive waste.
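
    The operator-overloading idea itself is easy to illustrate: a dual number carries a value and a derivative, and each overloaded operation applies the chain rule. The sketch below is a generic forward-mode toy, not the ADIFOR/ADOL-C-style tools the record has in mind.

```python
# Minimal forward-mode automatic differentiation via operator overloading.
import math

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __radd__, __rmul__ = __add__, __mul__

def exp(x):
    # elementary function with its chain-rule derivative
    return Dual(math.exp(x.value), math.exp(x.value) * x.deriv)

x = Dual(2.0, 1.0)            # seed derivative dx/dx = 1
y = x * exp(x) + 3 * x        # ordinary-looking expression
print(y.value, y.deriv)        # derivative equals exp(2)*(1 + 2) + 3
```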

  14. Operator overloading as an enabling technology for automatic differentiation

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (Marquette Univ., Milwaukee, WI (United States) Argonne National Lab., IL (United States)); Griewank, A. (Argonne National Lab., IL (United States))

    1993-01-01

    We present an example of the science that is enabled by object-oriented programming techniques. Scientific computation often needs derivatives for solving nonlinear systems such as those arising in many PDE algorithms, optimization, parameter identification, stiff ordinary differential equations, or sensitivity analysis. Automatic differentiation computes derivatives accurately and efficiently by applying the chain rule to each arithmetic operation or elementary function. Operator overloading enables the techniques of either the forward or the reverse mode of automatic differentiation to be applied to real-world scientific problems. We illustrate automatic differentiation with an example drawn from a model of unsaturated flow in a porous medium. The problem arises from planning for the long-term storage of radioactive waste.

  15. Operator overloading as an enabling technology for automatic differentiation

    International Nuclear Information System (INIS)

    We present an example of the science that is enabled by object-oriented programming techniques. Scientific computation often needs derivatives for solving nonlinear systems such as those arising in many PDE algorithms, optimization, parameter identification, stiff ordinary differential equations, or sensitivity analysis. Automatic differentiation computes derivatives accurately and efficiently by applying the chain rule to each arithmetic operation or elementary function. Operator overloading enables the techniques of either the forward or the reverse mode of automatic differentiation to be applied to real-world scientific problems. We illustrate automatic differentiation with an example drawn from a model of unsaturated flow in a porous medium. The problem arises from planning for the long-term storage of radioactive waste

  16. Numerical method of identification of an unknown source term in a heat equation

    Directory of Open Access Journals (Sweden)

    Fatullayev Afet Golayoğlu

    2002-01-01

    Full Text Available A numerical procedure for an inverse problem of identifying an unknown source in a heat equation is presented. The approach of the proposed method is to approximate the unknown function by piecewise linear segments, which are determined consecutively from the solution of a minimization problem based on the overspecified data. Numerical examples are presented.
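
    One representative formulation of this class of problems, stated here as an illustrative assumption rather than the paper's exact setup, is to recover a time-dependent source amplitude from an additional (overspecified) interior measurement:

```latex
% Illustrative formulation only; the record's exact problem may differ.
% Recover F(t) in
%   u_t = u_{xx} + F(t)\,\varphi(x),   0 < x < 1,  0 < t \le T,
% with data u(x,0) = u_0(x), u(0,t) = \mu_0(t), u(1,t) = \mu_1(t),
% and the overspecified interior measurement u(x^\ast, t) = E(t), by minimizing
\[
  J(F_h) \;=\; \int_0^T \bigl( u(x^\ast, t; F_h) - E(t) \bigr)^2 \,\mathrm{d}t
  \quad \longrightarrow \; \min_{F_h},
\]
% where F_h ranges over piecewise-linear (polygonal) approximations of F,
% determined segment by segment as the record describes.
```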

  17. Progress in Research on Digital Image Processing Technology for Automatic Insect Identification and Counting

    Institute of Scientific and Technical Information of China (English)

    姚青; 吕军; 杨保军; 薛杰; 郑宏海; 唐健

    2011-01-01

    With the rapid development of information technology, digitization, precision and intelligence have become important characteristics of modern agriculture, and the automatic identification and counting of agricultural insects has become a hot research topic. The main methods and applications of automatic insect identification and counting by image processing technology are reviewed, the advantages and disadvantages of these methods are compared, and the relevant problems and prospects are also discussed.

  18. Paraphrase Identification using Semantic Heuristic Features

    Directory of Open Access Journals (Sweden)

    Zia Ul-Qayyum

    2012-11-01

    Full Text Available Paraphrase Identification (PI) is the problem of classifying whether or not two sentences are close enough in meaning to be termed paraphrases. PI is an important research dimension with practical applications in Information Extraction (IE), Machine Translation, Information Retrieval, Automatic Identification of Copyright Infringement, Question Answering Systems and Intelligent Tutoring Systems, to name a few. This study presents a novel approach to paraphrase identification using semantic heuristic features, aiming to improve accuracy compared with state-of-the-art PI systems. Finally, a comprehensive critical analysis of misclassifications is carried out to provide insightful evidence about the proposed approach and the corpora used in the experiments.

  19. Identification for the Ability of Steel Pipe Weld Automatic Ultrasonic Testing System

    Institute of Scientific and Technical Information of China (English)

    甘正红; 方晓东; 余洋; 苏继权

    2013-01-01

    This article introduces the main items to be inspected by a multichannel steel pipe weld automatic ultrasonic testing system, the method for calibrating the testing system (equipment), and the service conditions of the testing system. In combination with the requirements for automatic ultrasonic testing of steel pipe welds specified in the API SPEC 5L/ISO 3183 standard, the main performance indicators of a multichannel steel pipe weld automatic ultrasonic testing system and the methods for their qualification are discussed, and specific requirements are given for linearity, horizontal linearity, dynamic range, comprehensive performance and other indicators. Practical application demonstrates the feasibility of the qualification capability.

  20. The effect of generation on long-term repetition priming in auditory and visual perceptual identification.

    Science.gov (United States)

    Mulligan, Neil W

    2011-05-01

    Perceptual implicit memory is typically most robust when the perceptual processing at encoding matches the perceptual processing required during retrieval. A consistent exception is the robust priming that semantic generation produces on the perceptual identification test (Masson & MacLeod, 2002), a finding which has been attributed to either (1) conceptual influences in this nominally perceptual task, or (2) covert orthographic processing during generative encoding. The present experiments assess these possibilities using both auditory and visual perceptual identification, tests in which participants identify auditory words in noise or rapidly-presented visual words. During the encoding phase of the experiments, participants generated some words and perceived others in an intermixed study list. The perceptual control condition was visual (reading) or auditory (hearing), and varied across participants. The reading and hearing conditions exhibited the expected modality-specificity, producing robust intra-modal priming and non-significant cross-modal priming. Priming in the generate condition depended on the perceptual control condition. With a read control condition, semantic generation produced robust visual priming but no auditory priming. With a hear control condition, the results were reversed: semantic generation produced robust auditory priming but not visual priming. This set of results is not consistent with a straightforward application of either the conceptual-influence or covert-orthography account, and implies that the nature of encoding in the generate condition is influenced by the broader list context. PMID:21388613

  1. Identification of a functional connectome for long-term fear memory in mice.

    Directory of Open Access Journals (Sweden)

    Anne L Wheeler

    Full Text Available Long-term memories are thought to depend upon the coordinated activation of a broad network of cortical and subcortical brain regions. However, the distributed nature of this representation has made it challenging to define the neural elements of the memory trace, and lesion and electrophysiological approaches provide only a narrow window into what is appreciated to be a much more global network. Here we used a global mapping approach to identify networks of brain regions activated following recall of long-term fear memories in mice. Analysis of Fos expression across 84 brain regions allowed us to identify regions that were co-active following memory recall. These analyses revealed that the functional organization of long-term fear memories depends on memory age and is altered in mutant mice that exhibit premature forgetting. Most importantly, these analyses indicate that long-term memory recall engages a network that has a distinct thalamic-hippocampal-cortical signature. This network is concurrently integrated and segregated and therefore has small-world properties, and contains hub-like regions in the prefrontal cortex and thalamus that may play privileged roles in memory expression.

  2. Automatic Speaker Recognition System

    Directory of Open Access Journals (Sweden)

    Parul, R. B. Dubey

    2012-12-01

    Full Text Available Spoken language is used by humans to convey many types of information. Primarily, speech conveys messages via words. Owing to advanced speech technologies, people's interactions with remote machines, such as phone banking, internet browsing, and secured information retrieval by voice, are becoming popular today. Speaker verification and speaker identification are important for authentication and verification for security purposes. Speaker identification methods can be divided into text-independent and text-dependent. Speaker recognition is the process of automatically recognizing a speaker's voice on the basis of individual information included in the input speech waves. It consists of comparing a speech signal from an unknown speaker to a set of stored data of known speakers. This process recognizes who has spoken by matching the input signal with pre-stored samples. The work is focused on improving the performance of speaker verification under noisy conditions.
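
    A bare-bones sketch of text-independent speaker identification in the spirit described above: per-speaker MFCC features modelled by Gaussian mixtures, with an unknown utterance scored against every model. It assumes the librosa package, uses synthetic noise in place of real recordings, and omits enrolment details such as voice activity detection.

```python
# Toy MFCC + GMM speaker identification on synthetic "audio".
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

sr = 16000
rng = np.random.default_rng(3)
enrolled = {
    "speaker_a": rng.normal(size=sr * 2).astype(np.float32),
    "speaker_b": (0.5 * rng.normal(size=sr * 2)).astype(np.float32),
}

def mfcc_frames(signal):
    # (n_frames, 13) MFCC feature matrix
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T

models = {name: GaussianMixture(n_components=4, random_state=0).fit(mfcc_frames(sig))
          for name, sig in enrolled.items()}

test = enrolled["speaker_a"] + 0.05 * rng.normal(size=sr * 2).astype(np.float32)
scores = {name: gmm.score(mfcc_frames(test)) for name, gmm in models.items()}
print("identified as:", max(scores, key=scores.get))
```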

  3. Liabilities identification and long-term management - Review of French situation

    International Nuclear Information System (INIS)

    In France, long term liabilities due to nuclear activities concern four main operators: Electricite de France (EDF), AREVA (an industrial group created on September 3, 2001 and covering the entire fuel cycle from ore extraction and transformation to the recycling of spent fuel), the Atomic Energy Commission (CEA, the French public research organism in the nuclear sector) and the French Agency for radioactive waste management (ANDRA, in charge of the long term operation of radioactive waste installations). Long term liabilities arise from the financing of both the decommissioning of nuclear installations and the long term management of radioactive waste. In the current French organisational scheme, the different operators must take responsibility for these long term liabilities. The setting of national policies and the establishment of the legislation are carried out at a national level by the French state. These include the supervision of the three operators through different Ministries and the regulatory control of safety through the Nuclear Safety Authority (ASN). EDF, AREVA, CEA and ANDRA are responsible for all aspects of the decommissioning (from a technical and financial point of view). Within a safety regulatory frame, they have their own initiative concerning future expenses, based on estimated costs and the expected operational lifetime of the installations. They are responsible for the definition and implementation of the technical options. Through its supervision activities, the French State regularly requires updated studies of these estimated costs, which are conducted by the operators. A general review of the management of these long-term liabilities is also carried out on a four-year basis by the French Court of Accounts. Operators are required to constitute provisions during the life cycle of their installations. Provisions are calculated for each installation on the basis of the decommissioning expenses and of the reasonably estimated lifetime. They are re

  4. Records, record linkage, and the identification of long term environmental hazards

    Energy Technology Data Exchange (ETDEWEB)

    Acheson, E.D.

    1978-11-15

    Long-term effects of toxic substances in man which have been recognized so far have been noticed because they have involved gross relative risks, or bizarre effects, or have been stumbled upon by chance or because of special circumstances. These facts and some recent epidemiological evidence together suggest that a systematic approach with more precise methods and data would almost certainly reveal the effects of many more toxic substances, particularly in workers exposed in manufacturing industry. Additional ways are suggested in which record linkage techniques might be used to identify substances with long-term toxic effects. Obstacles to further progress in the field of monitoring for long-term hazards in man are: lack of a public policy dealing with confidentiality and informed consent in the use of identifiable personal records, which balances the needs of bona fide research workers with proper safeguards for the privacy of the individual, and lack of resources to improve the quality, accessibility and organization of the appropriate data. (PCS)

  5. Screening local Lactobacilli from Iran in terms of production of lactic acid and identification of superior strains

    Directory of Open Access Journals (Sweden)

    Fatemeh Soleimanifard

    2015-12-01

    Full Text Available Introduction: Lactobacilli are a group of lactic acid bacteria whose final fermentation product is lactic acid. The objective of this research was the selection of local Lactobacilli producing L(+) lactic acid. Materials and methods: The local strains were screened based on their ability to produce lactic acid. The screening was performed in two stages: the first stage used a titration method and the second an enzymatic method. The superior strains obtained from the titration method were selected for the enzymatic test. Finally, the strains found superior in the second (enzymatic) stage, which were able to produce L(+) lactic acid, were identified by biochemical tests, and molecular identification was then performed using 16S rRNA sequencing. Results: The lactic acid production of 79 local Lactobacillus strains was studied. The highest and lowest rates of lactic acid production were 34.8 and 12.4 mg/g. The superior Lactobacilli produced the L(+) optical isomer, with the highest level of L(+) lactic acid equal to 3.99 and the lowest equal to 1.03 mg/g. Biochemical and molecular identification showed that the superior strains are Lactobacillus paracasei. The 16S rRNA sequences of the superior strains were deposited in NCBI with accession numbers KF735654, KF735655, KJ508201 and KJ508202. Discussion and conclusion: The amounts of lactic acid produced by the local Lactobacilli varied widely, and some strains produced more than previously reported. The results of this research suggest using the superior strains of Lactobacilli for the production of pure L(+) lactic acid.

  6. Automatic Tools for Diagnosis Support of Total Hip Replacement Follow-up

    Directory of Open Access Journals (Sweden)

    SULTANA, A.

    2011-11-01

    Full Text Available Total hip replacement is a common procedure in today's orthopedics, with a high rate of long-term success. Failure prevention is based on regular follow-up aimed at checking the fit and state of the prosthesis by means of visual inspection of radiographic images. Our purpose is to provide automatic means for aiding medical personnel in this task. We have therefore constructed tools for automatic identification of the component parts of the radiograph, followed by analysis of the interactions between the bone and the prosthesis. The results form a set of parameters with obvious interest in medical diagnosis.

  7. Identification of long-term containment/stabilization technology performance issues

    International Nuclear Information System (INIS)

    The U.S. Department of Energy (DOE) faces a rather unique challenge when addressing in situ remedial alternatives that leave long-lived radionuclides and hazardous contaminants onsite. These contaminants will remain a potential hazard for thousands of years. However, the risks, costs, and uncertainties associated with removal and offsite disposal are leading many sites to select in situ disposal alternatives. Improvements in containment, stabilization, and monitoring technologies will enhance the viability of such alternatives for implementation. DOE's Office of Science and Technology sponsored a two-day workshop designed to investigate issues associated with the long-term in situ stabilization and containment of buried, long-lived hazardous and radioactive contaminants. The workshop facilitated communication among end users representing most sites within the DOE, regulators, and technologists to define long-term performance issues for in situ stabilization and containment alternatives. Participants were divided into groups to identify issues and a strategy to address priority issues. This paper presents the results of the working groups and summarizes the conclusions. A common issue identified by the work groups is communication. Effective communication between technologists, risk assessors, end users, regulators, and other stakeholders would contribute greatly to the resolution of both technical and programmatic issues.

  8. Modeling of Automatic Generation Control for Power System Transient, Medium-Term and Long-Term Stability Simulations

    Institute of Scientific and Technical Information of China (English)

    宋新立; 王成山; 仲悟之; 汤涌; 卓峻峰; 旸吴国; 苏志达

    2013-01-01

    In order to dynamically simulate secondary frequency control in large power systems, a new automatic generation control (AGC) model, applicable to power system electro-mechanical transient, medium-term and long-term dynamics simulation, is proposed based on a hybrid-system modeling approach. It consists of three main modules: calculation of the area control error (ACE), which belongs to the continuous dynamic system; the control strategy; and calculation of the unit regulation commands, the latter two belonging to the discrete event dynamic system. By interfacing with the existing models in the power system unified dynamic simulation program, the model can simulate control strategies based on the A and CPS control performance standards for large power grids, as well as the main control modes such as flat frequency control (FFC), constant net interchange control (CIC), and tie-line bias frequency control (TBC). Two simulation cases related to China's UHVAC tie-lines show that the model provides an effective simulation tool for practical grid problems such as limiting tie-line power fluctuations in large power grids, coordinating multi-area AGC control strategies, and optimizing secondary frequency control.

  9. Eating as an Automatic Behavior

    OpenAIRE

    Deborah A. Cohen, MD, MPH; Thomas A. Farley, MD, MPH

    2007-01-01

    The continued growth of the obesity epidemic at a time when obesity is highly stigmatizing should make us question the assumption that, given the right information and motivation, people can successfully reduce their food intake over the long term. An alternative view is that eating is an automatic behavior over which the environment has more control than do individuals. Automatic behaviors are those that occur without awareness, are initiated without intention, tend to continue without contr...

  10. Identification of long-term trends in vegetation dynamics in the Guinea savannah region of Nigeria

    Science.gov (United States)

    Osunmadewa, Babatunde A.; Wessollek, Christine; Karrasch, Pierre

    2014-10-01

    The availability of newly generated data from the Advanced Very High Resolution Radiometer (AVHRR) covering the last three decades has broadened our understanding of vegetation dynamics (greening) from the global to the regional scale, through quantitative analysis of seasonal trends in vegetation time series and climatic variability, especially in the Guinea savannah region of Nigeria where the greening trend is inconsistent. Because of the impact of global climate change and its consequences for the sustainability of human livelihoods, interest in vegetation productivity has grown. The aim of this study is to examine the association between NDVI and rainfall using remotely sensed data, since vegetation dynamics (greening) has a high degree of association with weather parameters. The study therefore analyses trends in regional vegetation dynamics in Kogi state, Nigeria, using bi-monthly AVHRR GIMMS 3g (Global Inventory Modelling and Mapping Studies) data and monthly TAMSAT (Tropical Applications of Meteorology using Satellite data) rainfall data, both from 1983 to 2011, to identify changes in vegetation greenness over time. Changes in the seasonal variation of vegetation greenness and climatic drivers were analysed for selected locations to further understand the causes of the observed interannual changes in vegetation dynamics. The Mann-Kendall (MK) monotonic test was used to analyse long-term inter-annual trends of NDVI and the climatic variable. The Theil-Sen median slope was used to estimate the rate of change as the median of the slopes between all pairwise combinations of observations over time. Trends were also analysed using a linear model after seasonality had been removed from the original NDVI and rainfall data. The results of the linear model are statistically significant (p < 0.01) in all the study locations, which can be interpreted as an increase in the vegetation trend over time (greening). The result of the NDVI trend analysis using the Mann-Kendall test also shows an increasing
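
    The deseasonalised trend workflow described in this record (Mann-Kendall test plus Theil-Sen median slope on an NDVI series) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: the function name, the bi-monthly period of 24 observations per year and the input array are assumptions.

      import numpy as np
      from scipy import stats

      def ndvi_trend(ndvi, period=24):
          """Remove the mean seasonal cycle, then test for a monotonic trend.

          ndvi   : 1-D array of bi-monthly NDVI values (assumed 24 values per year)
          period : number of observations per year
          """
          ndvi = np.asarray(ndvi, dtype=float)
          # Subtract the long-term mean of each position in the seasonal cycle
          seasonal = np.array([ndvi[i::period].mean() for i in range(period)])
          deseason = ndvi - np.tile(seasonal, len(ndvi) // period + 1)[:len(ndvi)]
          t = np.arange(len(deseason))
          tau, p_value = stats.kendalltau(t, deseason)                # Mann-Kendall-style monotonic test
          slope, intercept, lo, hi = stats.theilslopes(deseason, t)   # Theil-Sen median slope
          return {"tau": tau, "p": p_value, "slope": slope, "ci": (lo, hi)}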

  11. Design of an Automatic Identification Algorithm for Pedestrian Clustering in Channels

    Institute of Scientific and Technical Information of China (English)

    李鑫; 陈艳艳; 陈宁; 刘小明; 冯国臣

    2016-01-01

    In order to provide reasonable guidance and passenger flow organization in the transfer channels of urban rail transit hubs and to ensure their safe and efficient operation, this paper puts forward an algorithm that automatically recognizes abnormal crowd-gathering events in a channel. First, the stability and mutability of the basic pedestrian volume data in the channel are analysed, and a new data type combining both characteristics is created. The key parameter of the automatic identification algorithm, the offset space difference, is then designed on the basis of double-section pedestrian volume data. Finally, by analysing the variation characteristics of this key parameter, the automatic identification algorithm for pedestrian clustering events in the channel is established. Simulation results show that the detection accuracy of the algorithm is 100% and the mean reaction time is 65 s, indicating a strong automatic detection capability and a short reaction time for pedestrian clustering events.

  12. Liabilities identification and long-term management at national level (Spain)

    International Nuclear Information System (INIS)

    economic uncertainties in high-level waste disposal systems is a constant line of work, and in this respect ENRESA attempts to incorporate the most adequate techniques for cost analysis in a probabilistic framework. Even though the economic calculations are revised every year, tempering forecasting inaccuracies, in the longer term it is felt that problems might arise if there were a particularly significant time difference between the dates of plant decommissioning and the initiation of repository construction work. Under these conditions, any delay in constructing the definitive disposal facility might lead to not having sufficient financial resources available for its construction, operation or dismantling. The Spanish legislation includes no indications in this respect. Conceptually, various treatment hypotheses could be envisaged, such as legally increasing the period of fee collection, the creation of an extra fee during the last few years of collection, the obligation for the waste producers to contract additional guarantees in order to address uncovered risks, or acceptance by the State of responsibilities in relation to this issue. Obviously, the case of a surplus of money after the completion of waste disposal is also to be taken into account. In relation to this hypothesis, criteria and procedures for liquidation or distribution would have to be set out. It is considered that, at present, it is too soon to approach such a question

  13. Automatic personnel contamination monitor

    International Nuclear Information System (INIS)

    United Nuclear Industries, Inc. (UNI) has developed an automatic personnel contamination monitor (APCM), which uniquely combines the design features of both portal and hand-and-shoe monitors. In addition, this prototype system has a number of new features, including: microcomputer control and readout, nineteen large-area gas flow detectors, real-time background compensation, self-checking for system failures, and card reader identification and control. UNI's experience in operating the Hanford N Reactor, located in Richland, Washington, has shown the necessity of automatically monitoring plant personnel for contamination after they have passed through the procedurally controlled radiation zones. This final check ensures that each radiation zone worker has been properly checked before leaving company-controlled boundaries. Investigation of the commercially available portal and hand-and-shoe monitors indicated that they did not have the sensitivity or sophistication required for UNI's application; therefore, a development program was initiated, resulting in the subject monitor. Field testing shows good sensitivity to personnel contamination, with the majority of alarms showing contaminants on clothing, face and head areas. In general, the APCM has sensitivity comparable to portal survey instrumentation. The inherent stand-in, walk-on feature of the APCM not only makes it easy to use, but makes it difficult to bypass. (author)

  14. A 100-m Fabry–Pérot Cavity with Automatic Alignment Controls for Long-Term Observations of Earth’s Strain

    Directory of Open Access Journals (Sweden)

    Akiteru Takamori

    2014-08-01

    Full Text Available We have developed and built a highly accurate laser strainmeter for geophysical observations. It features the precise length measurement of a 100-m optical cavity with reference to a stable quantum standard. Unlike conventional laser strainmeters based on simple Michelson interferometers, which require uninterrupted fringe counting to track the evolution of ground deformations, this instrument is able to determine the absolute length of the cavity at any given time. The instrument offers an advantage in covering a variety of geophysical events, ranging from instantaneous earthquakes to crustal deformations associated with tectonic strain changes that persist over time. An automatic alignment control and an autonomous relocking system have been developed to realize stable performance and maximize observation times. It was installed at a deep underground site in the Kamioka mine in Japan, and an effective resolution of 2 × (10⁻⁸–10⁻⁷) m was achieved. The regular tidal deformations and co-seismic strain changes were in good agreement with those from a theoretical model and a co-located conventional laser strainmeter. Only the new instrument was able to record large strain steps caused by a nearby large earthquake, because of its capability of absolute length determination.

  15. SU-E-J-182: A Feasibility Study Evaluating Automatic Identification of Gross Tumor Volume for Breast Cancer Radiotherapy Using Dynamic Contrast-Enhanced MR Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C; Horton, J; Yin, F; Blitzblau, R; Palta, M; Chang, Z [Duke University Medical Center, Durham, NC (United States)

    2014-06-01

    Purpose: To develop a computerized, pharmacokinetic model-free Gross Tumor Volume (GTV) segmentation method based on dynamic contrast-enhanced MRI (DCE-MRI) data that can improve physician GTV contouring efficiency. Methods: 12 patients with biopsy-proven early stage breast cancer with post-contrast enhanced DCE-MRI images were analyzed in this study. A fuzzy c-means (FCM) clustering-based method was applied to segment the 3D GTV from pre-operative DCE-MRI data. A region of interest (ROI) is selected by a clinician/physicist, and the normalized signal evolution curves were calculated by dividing the signal intensity enhancement value at each voxel by the pre-contrast signal intensity value at the corresponding voxel. Three semi-quantitative metrics were analyzed based on the normalized signal evolution curves: initial Area Under the signal evolution Curve (iAUC), Immediate Enhancement Ratio (IER), and Variance of Enhancement Slope (VES). The FCM algorithm was applied to partition ROI voxels into GTV voxels and non-GTV voxels using the three analyzed metrics. The partition map for the smaller cluster is then generated and binarized with an automatically calculated threshold. To reduce spurious structures resulting from background, a labeling operation was performed to keep the largest three-dimensional connected component as the identified target. Basic morphological operations, including hole-filling and spur removal, were utilized to improve the target smoothness. Each segmented GTV was compared to that drawn by experienced radiation oncologists. An agreement index was proposed to quantify the overlap between the GTVs identified using the two approaches, and a threshold value of 0.4 is regarded as acceptable. Results: The GTVs identified by the proposed method overlapped with the ones drawn by radiation oncologists in all cases, and in 10 out of 12 cases the agreement indices were above the threshold of 0.4. Conclusion: The proposed automatic segmentation method was shown to
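
    The fuzzy c-means partition step at the core of the method above can be illustrated with a short NumPy sketch, assuming each ROI voxel is represented by the three semi-quantitative metrics (iAUC, IER, VES); the function, its parameters and the clean-up comment are illustrative, not the study's implementation.

      import numpy as np

      def fuzzy_cmeans(features, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
          """features: (n_voxels, n_metrics) array; returns (membership matrix U, cluster centers)."""
          rng = np.random.default_rng(seed)
          n = features.shape[0]
          u = rng.random((n, n_clusters))
          u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1 per voxel
          for _ in range(n_iter):
              um = u ** m
              centers = um.T @ features / um.sum(axis=0)[:, None]
              dist = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              u_new = 1.0 / (dist ** (2 / (m - 1)))  # standard FCM membership update
              u_new /= u_new.sum(axis=1, keepdims=True)
              if np.abs(u_new - u).max() < tol:
                  u = u_new
                  break
              u = u_new
          return u, centers

      # GTV voxels could then be taken as the smaller cluster's high-membership voxels,
      # followed by connected-component labeling and morphological clean-up.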

  16. Method of Detection by Wavelet Entropy and Automatic Identification for Submerged Seismic Signals

    Institute of Scientific and Technical Information of China (English)

    杨建平; 帅晓勇; 陶黄林

    2015-01-01

    In order to detect micro-seismic events before large earthquakes and to protect important facilities such as large coal mines, oil fields and mines, seismic data processing techniques for real-time processing, automatic recognition and extraction of submerged seismic onset points are urgently needed. A multi-resolution complexity parameter, the wavelet entropy, was derived by combining the wavelet transform with information entropy theory; this parameter clearly reveals the changes brought to the exploration data by the arrival of seismic waves, even when the signal is submerged in the ambient background. A simulation with measured exploration data was carried out, and the monitoring performance was compared with that of a plain wavelet transform and of a digital band-pass filter. The results show that the wavelet entropy parameter is better at automatically identifying the onset points of micro-seismic events.
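
    The wavelet entropy parameter described above can be computed with PyWavelets; the sketch below is an illustration under stated assumptions (window length, wavelet choice, decomposition level and threshold are placeholder values, and the direction of the entropy change at onset should be checked against the data).

      import numpy as np
      import pywt

      def wavelet_entropy(window, wavelet="db4", level=4):
          """Shannon entropy of the relative wavelet energies of one signal window."""
          coeffs = pywt.wavedec(window, wavelet, level=level)
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          p = energies / energies.sum()
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def detect_onset(signal, fs, win_sec=1.0, threshold=0.5):
          """Index of the first window whose entropy drops below a threshold
          (an ordered, wave-dominated window concentrates energy in fewer bands than noise)."""
          win = int(win_sec * fs)
          ent = np.array([wavelet_entropy(signal[i:i + win])
                          for i in range(0, len(signal) - win, win)])
          below = np.where(ent < threshold)[0]
          return below[0] * win if below.size else None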

  17. 16S rRNA Gene Sequence-Based Identification of Bacteria in Automatically Incubated Blood Culture Materials from Tropical Sub-Saharan Africa.

    Directory of Open Access Journals (Sweden)

    Hagen Frickmann

    Full Text Available The quality of microbiological diagnostic procedures depends on pre-analytic conditions. We compared the results of 16S rRNA gene PCR and sequencing from automatically incubated blood culture materials from tropical Ghana with the results of cultural growth after automated incubation. Real-time 16S rRNA gene PCR and subsequent sequencing were applied to 1500 retained blood culture samples of Ghanaian patients admitted to a hospital with an unknown febrile illness, after enrichment by automated culture. Out of all 1500 samples, 191 were culture-positive and 98 isolates were considered etiologically relevant. Out of the 191 culture-positive samples, 16S rRNA gene PCR and sequencing led to concordant results in 65 cases at species level and an additional 62 cases at genus level. PCR was positive in a further 360 out of 1309 culture-negative samples, the sequencing results of which suggested etiologically relevant pathogen detections in 62 instances, detections of uncertain relevance in 50 instances, and DNA contamination due to sample preparation in 248 instances. In two instances, PCR failed to detect contaminants from the skin flora that were culturally detectable. Pre-analytical errors caused many Enterobacteriaceae to be missed by culture. Potentially correctable pre-analytical conditions, and not the fastidious nature of the bacteria, caused most of the discrepancies. Although 16S rRNA gene PCR and sequencing in addition to culture led to an increase in detections of presumably etiologically relevant blood culture pathogens, the application of this procedure to samples from the tropics was hampered by a high contamination rate. Careful interpretation of diagnostic results is required.

  18. Linking the Annual Variation of Snow Radar-derived Accumulation in West Antarctica to Long-term Automatic Weather Station Measurements

    Science.gov (United States)

    Feng, B.; Braaten, D. A.; Gogineni, P.; Paden, J. D.; Leuschen, C.; Purdon, K.

    2013-12-01

    Understanding the snow accumulation rate on polar ice sheets is important in assessing mass balance and the ice sheet contribution to sea level rise. Measuring annual accumulation on a regional scale, extending back in time several decades, has been accomplished using the Center for Remote Sensing of Ice Sheets (CReSIS) Snow Radar on the NASA DC-8 that is part of NASA's IceBridge project. The Snow Radar detects and maps near-surface internal layers in polar firn, operating from 2–6 GHz and providing a depth resolution of ~4 cm. During November 2011, Snow Radar data were obtained for large areas of West Antarctica, including a flight segment that passed within ~70 km of Byrd Station (80°S, 119°W). Byrd Station has a very long automatic weather station (AWS) record, extending from 1980 to the present with three relatively brief gaps. The AWS data for Byrd Station were obtained from the Antarctic Meteorological Research Center (AMRC) at the University of Wisconsin. The L1B Snow Radar data products, available from the National Snow and Ice Data Center (NSIDC), were analyzed using layer-picking software to obtain the depth of reflectors in the firn that are detected by the radar. These reflectors correspond to annual markers in the firn and allow annual accumulation to be determined. Using the distance between the reflectors and available density profiles from ice cores, the water equivalent accumulation for each annual layer back to 1980 is obtained. We are analyzing spatial variations of accumulation along flight lines, as well as variations in the time series of annual accumulation. We are also analyzing links between annual accumulation and surface weather observations from the Byrd Station AWS. Our analyses of surface weather observations have focused on annual temperature, atmospheric pressure and wind extremes (e.g. 5th and 95th percentiles) and links to annual snow accumulation. We are also examining satellite-derived sea ice extent records for the
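
    The conversion from picked annual reflector depths to water-equivalent accumulation described above amounts to a product of layer thickness and firn density; the sketch below is illustrative only (the example depths and the linear density model are hypothetical, not CReSIS values).

      import numpy as np

      def annual_accumulation_we(layer_depths_m, density_profile):
          """layer_depths_m : depths of successive annual reflectors, shallowest first (m)
          density_profile   : callable returning firn density (kg/m^3) at a given depth (m)
          Returns annual accumulation in metres of water equivalent, most recent year first."""
          rho_water = 1000.0
          acc = []
          for top, bottom in zip(layer_depths_m[:-1], layer_depths_m[1:]):
              mid = 0.5 * (top + bottom)
              thickness = bottom - top
              acc.append(thickness * density_profile(mid) / rho_water)
          return np.array(acc)

      # Example with hypothetical values: reflectors every ~0.5 m and a crude linear density model
      depths = np.array([0.0, 0.55, 1.08, 1.60])
      density = lambda z: 350.0 + 30.0 * z
      print(annual_accumulation_we(depths, density))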

  19. Comparison of Multi-shot Models for Short-term Re-identification of People using RGB-D Sensors

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Bahnsen, Chris; Moeslund, Thomas B.

    This work explores different types of multi-shot descriptors for re-identification in an on-the-fly enrolled environment using RGB-D sensors. We present a full re-identification pipeline complete with detection, segmentation, feature extraction, and re-identification, which expands on previous work by using multi-shot descriptors that model people over a full camera pass instead of single frames with no temporal linking. We compare two different multi-shot models, mean histogram and histogram series, and test each of them in 3 different color spaces. Both histogram descriptors are assisted by a

  20. Study on Automatic English Synonym Term Discovery from the Web and the System Implementation

    Institute of Scientific and Technical Information of China (English)

    刘伟; 黄小江; 万小军; 王星

    2012-01-01

    There are extremely abundant synonym term resources on the Web. Three effective approaches for automatically acquiring such resources are proposed in this paper: syntactic pattern learning, online synonym dictionary extraction, and static synonym category crawling. On this basis, a prototype system, Web Synonym Term Searcher, has been implemented. The experimental results show that automatically obtaining synonym terms from the Web is a promising approach.

  1. Decree no. 96-1108 of December 17, 1996 giving permission to the Office for the protection against ionizing radiations to use the French national identification index of natural persons for the automatic processing of registered personal information relative to the surveillance of some persons exposed to ionizing radiations

    International Nuclear Information System (INIS)

    This decree from the French ministry of labour and social affairs gives permission to the OPRI (Office for the Protection against Ionizing Radiations) to use the personal registration number of the national identification index of natural persons who are or were professionally exposed to ionizing radiations. This number is only used to identify these people in order to automatically process the information relative to the surveillance of their exposure to ionizing radiations. (J.S.)

  2. Automatic determination of total alkalinity based on image identification technology

    Institute of Scientific and Technical Information of China (English)

    秦玉华; 王东兵; 张海燕; 欧佳; 徐志明

    2011-01-01

    A new automatic measurement method and device for the total alkalinity of water based on image identification technology are proposed. Based on the principle of acid-base titration, hydrochloric acid is used as the titrant and bromocresol green as the indicator; the equivalence point of the titration is identified from the abrupt jump of the solution's R, G and B values, and the total alkalinity of the water is thus measured. Experimental results show that the linear range of alkalinity detection is 0.2-40 mmol/L, the relative standard deviation is 0.43%, and the spiked recovery rate is 96.4%-102.6%. Applied to the determination of total alkalinity in industrial circulating cooling water, the method is simple to operate and accurate, and it enables automatic measurement of alkalinity.
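
    The R, G, B jump detection used to locate the equivalence point can be sketched as follows; the frame source, the region of interest and the assumption that the largest colour jump marks the endpoint are illustrative, not the published device's algorithm.

      import numpy as np

      # Frames are assumed to be a list of H x W x 3 uint8 arrays, e.g. grabbed from a camera.
      def mean_rgb(frame, roi):
          """Average colour of a rectangular region of interest (x, y, w, h)."""
          x, y, w, h = roi
          patch = frame[y:y + h, x:x + w]
          return patch.reshape(-1, 3).mean(axis=0)

      def endpoint_index(frames, roi):
          """Index of the frame with the largest jump in mean colour, taken as the titration endpoint."""
          rgb = np.array([mean_rgb(f, roi) for f in frames])
          jumps = np.linalg.norm(np.diff(rgb, axis=0), axis=1)
          return int(np.argmax(jumps)) + 1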

  3. 21 CFR 870.5925 - Automatic rotating tourniquet.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automatic rotating tourniquet. 870.5925 Section 870.5925 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... rotating tourniquet. (a) Identification. An automatic rotating tourniquet is a device that prevents...

  4. 21 CFR 892.1900 - Automatic radiographic film processor.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automatic radiographic film processor. 892.1900 Section 892.1900 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES... processor. (a) Identification. An automatic radiographic film processor is a device intended to be used...

  5. Automatic Performance Debugging of SPMD Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Jianfeng; Tu, Bibo; Meng, Dan

    2010-01-01

    Automatic performance debugging of parallel applications usually involves two steps: automatic detection of performance bottlenecks and uncovering their root causes for performance optimization. Previous work fails to resolve this challenging issue in several ways: first, several previous efforts automate the analysis process but present the results in a confined way that only identifies performance problems with a priori knowledge; second, several tools take exploratory or confirmatory data analysis to automatically discover relevant performance data relationships. However, these efforts do not focus on locating performance bottlenecks or uncovering their root causes. In this paper, we design and implement an innovative system, AutoAnalyzer, to automatically debug the performance problems of single program multi-data (SPMD) parallel programs. Our system is unique in terms of two dimensions: first, without any a priori knowledge, we automatically locate bottlenecks and uncover their root causes for performance o...

  6. Automatic Tag Identification in Web Service Descriptions

    OpenAIRE

    Falleri, Jean-Rémy; Azmeh, Zeina; Huchard, Marianne; Tibermacine, Chouki

    2010-01-01

    With the increasing interest toward service-oriented architectures, the number of existing Web services is dramatically growing. Therefore, finding a particular service among this huge number of services is becoming a time-consuming task. User tags or keywords have proven to be a useful technique to smooth browsing experience in large document collections. Some service search engines, like Seekda, already propose this kind of facility. Service tagging, which is a fairly tedious and error pron...

  7. Eating as an Automatic Behavior

    Directory of Open Access Journals (Sweden)

    Deborah A. Cohen, MD, MPH

    2008-01-01

    Full Text Available The continued growth of the obesity epidemic at a time when obesity is highly stigmatizing should make us question the assumption that, given the right information and motivation, people can successfully reduce their food intake over the long term. An alternative view is that eating is an automatic behavior over which the environment has more control than do individuals. Automatic behaviors are those that occur without awareness, are initiated without intention, tend to continue without control, and operate efficiently or with little effort. The concept that eating is an automatic behavior is supported by studies that demonstrate the impact of the environmental context and food presentation on eating. The amount of food eaten is strongly influenced by factors such as portion size, food visibility and salience, and the ease of obtaining food. Moreover, people are often unaware of the amount of food they have eaten or of the environmental influences on their eating. A revised view of eating as an automatic behavior, as opposed to one that humans can self-regulate, has profound implications for our response to the obesity epidemic, suggesting that the focus should be less on nutrition education and more on shaping the food environment.

  8. Consideration of the Change of Material Emission Signatures Due to Long-Term Emissions for Enhancing VOC Source Identification

    DEFF Research Database (Denmark)

    Han, K. H.; Zhang, J. S.; Knudsen, H. N.; Wargocki, Pawel; Guo, B.

    2011-01-01

    The objectives of this study were to characterize the changes of VOC material emission profiles over time and to develop a method to account for such changes in order to enhance a source identification technique that is based on measurements of mixed air samples and the emission signatures of individual building materials determined by PTR-MS. Source models, including a power-law model, a double-exponential decay model and a mechanistic diffusion model, were employed to track the change of individual material emission signatures by PTR-MS over a nine-month period. Samples of nine typical building materials

  9. Study on Ground Automatic Identification Technology for Intelligent Vehicles Based on a Vision Sensor

    Institute of Scientific and Technical Information of China (English)

    崔根群; 余建明; 赵娴; 赵丛琳

    2011-01-01

    The ground automatic identification technology for intelligent vehicles takes the Leobot-Edu autonomous vehicle as a test platform and uses a DH-HV2003UC-T vision sensor to collect image information for five common road surfaces (cobbled road, concrete road, dirt road, grass and tiled road). The MATLAB image processing modules are then used to perform compression coding, restoration and reconstruction, smoothing, sharpening, enhancement, feature extraction and other related processing, after which the MATLAB BP neural network module is used for pattern recognition. Analysis of the pattern recognition results shows that the network training target error is 20% and that the road surface recognition rate reaches the intended requirement, so the system can be applied widely to intelligent vehicles, mobile robots and related fields.
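
    A rough Python analogue of the MATLAB workflow in this record (simple image features followed by a small back-propagation network) is sketched below; the feature set, network size and data variables are assumptions, not the authors' pipeline.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      def simple_features(image):
          """image: H x W x 3 uint8 array. Mean and std per channel plus gradient energy."""
          img = image.astype(float)
          gy, gx = np.gradient(img.mean(axis=2))
          return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1)),
                                 [np.mean(gx ** 2 + gy ** 2)]])

      # X: feature matrix built with simple_features over labelled images of the five
      # surfaces (cobbled, concrete, dirt, grass, tile); y: integer class labels.
      def train_classifier(X, y):
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
          clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
          clf.fit(X_tr, y_tr)
          return clf, clf.score(X_te, y_te)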

  10. Experimental research on the nozzle device of a mixed waste plastic automatic identification separator

    Institute of Scientific and Technical Information of China (English)

    胡彪; 王树桐; 李健毅; 于立云; 汤桂兰; 张毅民

    2013-01-01

    In order to determine the best nozzle shape for a mixed waste plastic automatic identification separator, we first show that the output pressure is the key to sorting the plastic and calculate the minimum required output pressure. The parameters that influence the output pressure are then discussed: the degree of attenuation between the input and the output pressure is obtained by calculation and simulation, and the input pressure is estimated preliminarily. Experiments then provide the output pressure as a function of nozzle diameter and tube length under the same input pressure, and the correlation coefficient method is used to derive the relationship between diameter, tube length and output pressure. On this basis, the curves are fitted to the experimental data to select the best nozzle parameters. Finally, the jet range of the nozzle is measured and a specific nozzle distribution scheme is given.

  11. Automatic input rectification

    OpenAIRE

    Long, Fan; Ganesh, Vijay; Carbin, Michael James; Sidiroglou, Stelios; Rinard, Martin

    2012-01-01

    We present a novel technique, automatic input rectification, and a prototype implementation, SOAP. SOAP learns a set of constraints characterizing typical inputs that an application is highly likely to process correctly. When given an atypical input that does not satisfy these constraints, SOAP automatically rectifies the input (i.e., changes the input so that it satisfies the learned constraints). The goal is to automatically convert potentially dangerous inputs into typical inputs that the ...

  12. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, the payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  13. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (comp.)

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.

  14. MassToMI - a Mathematica package for an automatic Mass Insertion expansion

    CERN Document Server

    Rosiek, Janusz

    2015-01-01

    We present a Mathematica package designed to automatize the expansion of QFT transition amplitudes calculated in the mass eigenstates basis (i.e. expressed in terms of physical masses and mixing matrices) into series of "mass insertions", defined as off-diagonal entries of mass matrices in the Lagrangian before diagonalization and identification of the physical states. The algorithm implemented in this package is based on the general "Flavor Expansion Theorem" proven in Ref. [FET]. The supplied routines are able to automatically analyze the structure of the amplitude, identify the parts which could be expanded and expand them to any required order. They are capable of dealing with amplitudes depending on both scalar or vector (Hermitian) and Dirac or Majorana fermion (complex) mass matrices. The package can be downloaded from the address www.fuw.edu.pl/masstomi.

  15. Second-Language Learners' Identification of Target-Language Phonemes: A Short-Term Phonetic Training Study

    Science.gov (United States)

    Cebrian, Juli; Carlet, Angelica

    2014-01-01

    This study examined the effect of short-term high-variability phonetic training on the perception of English /b/, /v/, /d/, /ð/, /ae/, /? /, /i/, and /i/ by Catalan/Spanish bilinguals learning English as a foreign language. Sixteen English-major undergraduates were tested before and after undergoing a four-session perceptual training program…

  16. Automatic exploitation system for photographic dosemeters

    International Nuclear Information System (INIS)

    The Laboratory of Dosimetry Exploitation (LED) has developed equipment that allows photographic film dosemeters to be processed automatically. This system identifies the films by bar code and provides the dose measurement with a completely automatic reader. The principle consists in placing the emulsions to be processed on a ribbon and developing them in a circulation machine. The measurement of the film blackening is carried out on a reading plate with fourteen reading points, over which the emulsions circulate in ribbon form. The processing is done with the usual dose calculation method, using dedicated computer codes. A comparison on 2000 dosemeters has shown that the results are the same for the manual and the automatic methods. This system has been in operation at the LED since July 1995. (N.C.)

  17. Fast automatic analysis of antenatal dexamethasone on micro-seizure activity in the EEG

    International Nuclear Information System (INIS)

    Full text: In this work we develop an automatic scheme for studying the effect of antenatal Dexamethasone on EEG activity. To do so, an FFT (Fast Fourier Transform) based detector was designed and applied to EEG recordings obtained from two groups of fetal sheep. Both groups received two injections with a time delay of 24 h between them; however, the applied medicine was different for each group (Dex and saline). The detector was used to automatically identify and classify micro-seizures that occurred in the frequency bands corresponding to the EEG transients known as slow waves (2.5-14 Hz). For each second of the data recordings the spectrum was computed, and a rise of the energy in each predefined frequency band was counted when the energy level exceeded a predefined corresponding threshold level (where the threshold level was obtained from the long-term average of the spectral points in each band). Our results demonstrate that it was possible to automatically count the micro-seizures for the three different bands in a time-effective manner. It was found that the number of transients did not strongly depend on the nature of the injected medicine, which was consistent with the results manually obtained by an EEG expert. In conclusion, the automatic detection scheme presented here would allow rapid micro-seizure event identification in hours of highly sampled EEG data, thus providing a valuable time-saving device.
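
    The FFT-based band-energy counter described in this record can be sketched as follows; the band edges, the one-second window and the way the threshold is derived from the long-term average are illustrative assumptions rather than the authors' exact settings.

      import numpy as np

      BANDS = {"slow": (0.5, 2.5), "intermediate": (2.5, 14.0), "fast": (14.0, 30.0)}  # Hz, hypothetical

      def band_energies(window, fs):
          spec = np.abs(np.fft.rfft(window)) ** 2
          freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
          return {name: spec[(freqs >= lo) & (freqs < hi)].sum()
                  for name, (lo, hi) in BANDS.items()}

      def count_microseizures(eeg, fs, k=3.0):
          """Count 1-s epochs whose band energy exceeds k times the long-term mean energy."""
          win = int(fs)
          energies = [band_energies(eeg[i:i + win], fs) for i in range(0, len(eeg) - win, win)]
          counts = {}
          for name in BANDS:
              series = np.array([e[name] for e in energies])
              threshold = k * series.mean()           # threshold from the long-term average
              counts[name] = int(np.sum(series > threshold))
          return counts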

  18. Mining Twitter as a First Step toward Assessing the Adequacy of Gender Identification Terms on Intake Forms

    OpenAIRE

    Hicks, Amanda; Hogan, William R; Rutherford, Michael; Malin, Bradley; Xie, Mengjun; Fellbaum, Christiane; Yin, Zhijun; Fabbri, Daniel; Hanna, Josh; Bian, Jiang

    2015-01-01

    The Institute of Medicine (IOM) recommends that health care providers collect data on gender identity. If these data are to be useful, they should utilize terms that characterize gender identity in a manner that is 1) sensitive to transgender and gender non-binary individuals (trans* people) and 2) semantically structured to render associated data meaningful to the health care professionals. We developed a set of tools and approaches for analyzing Twitter data as a basis for generating hypoth...

  19. Automatic polar ice thickness estimation from SAR imagery

    Science.gov (United States)

    Rahnemoonfar, Maryam; Yari, Masoud; Fox, Geoffrey C.

    2016-05-01

    Global warming has caused serious damage to our environment in recent years. Accelerated loss of ice from Greenland and Antarctica has been observed in recent decades. The melting of polar ice sheets and mountain glaciers has a considerable influence on sea level rise and on altering ocean currents, potentially leading to the flooding of coastal regions and putting millions of people around the world at risk. Synthetic aperture radar (SAR) systems are able to provide relevant information about the subsurface structure of polar ice sheets. Manual layer identification is prohibitively tedious and expensive and is not practical for regular, long-term ice-sheet monitoring. Automatic layer finding in noisy radar images is quite challenging due to the huge amount of noise, limited resolution and variations in ice layers and bedrock. Here we propose an approach which automatically detects ice surface and bedrock boundaries using distance-regularized level set evolution. In this approach the complex topology of ice and bedrock boundary layers can be detected simultaneously by evolving an initial curve in the radar imagery. Using a distance-regularized term, the regularity of the level set function is intrinsically maintained, which solves the re-initialization issues arising from conventional level set approaches. The results are evaluated on a large dataset of airborne radar imagery collected during the IceBridge mission over Antarctica and Greenland and show promising results with respect to hand-labeled ground truth.

  20. Research on automatic Chinese-English term extraction based on the order and position features of words

    Institute of Scientific and Technical Information of China (English)

    张莉; 刘昱显

    2015-01-01

    With the explosion of information in today's society, knowledge spreads across many subject areas and many languages, which creates considerable obstacles to understanding, retrieving and exchanging ideas. Bilingual terminology is an important language resource for natural language processing tasks such as machine translation, data mining and bilingual information retrieval. Collecting bilingual terminology is often challenging and time-consuming, because the texts to be aligned are usually in very different languages, such as Chinese and English. Bilingual terminology extraction and alignment has therefore attracted growing attention in information processing, and it plays an important role in cross-language retrieval, the building of bilingual dictionaries and machine translation research. Progress in bilingual terminology extraction and alignment will benefit the building of translation memories for machine-assisted translation, and adding bilingual terminology information can improve the quality of machine translations. We propose an automatic Chinese-English terminology alignment algorithm based on the order and position features of words. The algorithm improves the two-step bilingual term extraction and alignment strategy by integrating the word order and position features used in phrase-based machine translation. The experimental corpus consists of CSSCI journals from 1998 to 2012, mainly Chinese and English titles and abstracts; 37,206 complete English titles and abstracts were used, containing a total of about 1.63 million Chinese words and 1.91 million English words. The algorithm improves the accuracy of term alignment, especially in the case of lower-probability term translations, while

  1. Automatic Identification of Digital Labels in Assembly Drawings of Mechanical Parts Based on Computer Vision Technology

    Institute of Scientific and Technical Information of China (English)

    江能兴

    2011-01-01

    In order to identify the numeric characters in assembly drawings of mechanical parts precisely and rapidly, a template matching method based on the open-source computer vision library OpenCV is proposed. This paper introduces the basic framework of OpenCV and its typical application areas, and presents a comparative analysis of the automatic identification of numeric characters in assembly drawings of mechanical parts using the OpenCV development library. This work is of great significance for improving on the current practice of manually reading digital labels from mechanical drawings.
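
    Digit identification by OpenCV template matching, as outlined above, can be sketched in a few lines; the template file names, matching threshold and drawing path are placeholder assumptions.

      import cv2
      import numpy as np

      def load_templates(paths):
          """paths: dict mapping digit string -> image file of that digit."""
          return {d: cv2.imread(p, cv2.IMREAD_GRAYSCALE) for d, p in paths.items()}

      def find_digits(drawing_path, templates, threshold=0.8):
          """Return (digit, x, y, score) for every template match above the threshold."""
          img = cv2.imread(drawing_path, cv2.IMREAD_GRAYSCALE)
          hits = []
          for digit, tmpl in templates.items():
              res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)
              ys, xs = np.where(res >= threshold)
              hits.extend((digit, int(x), int(y), float(res[y, x])) for y, x in zip(ys, xs))
          return hits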

  2. Automatic query formulations in information retrieval.

    Science.gov (United States)

    Salton, G; Buckley, C; Fox, E A

    1983-07-01

    Modern information retrieval systems are designed to supply relevant information in response to requests received from the user population. In most retrieval environments the search requests consist of keywords, or index terms, interrelated by appropriate Boolean operators. Since it is difficult for untrained users to generate effective Boolean search requests, trained search intermediaries are normally used to translate original statements of user need into useful Boolean search formulations. Methods are introduced in this study which reduce the role of the search intermediaries by making it possible to generate Boolean search formulations completely automatically from natural language statements provided by the system patrons. Frequency considerations are used automatically to generate appropriate term combinations as well as Boolean connectives relating the terms. Methods are covered to produce automatic query formulations both in a standard Boolean logic system, as well as in an extended Boolean system in which the strict interpretation of the connectives is relaxed. Experimental results are supplied to evaluate the effectiveness of the automatic query formulation process, and methods are described for applying the automatic query formulation process in practice. PMID:10299297
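
    A toy sketch in the spirit of the frequency-based query formulation described above is given below: broad, frequent terms are OR-combined within a clause and rarer, more discriminating terms contribute separate AND clauses. The tokenizer, cutoff and data structures are illustrative assumptions, not the procedure of the paper.

      import re

      def build_boolean_query(request, doc_freq, n_docs, rare_cutoff=0.05):
          """request   : natural-language statement of the information need
          doc_freq     : dict mapping term -> number of collection documents containing it
          n_docs       : collection size
          Returns a Boolean query string with AND between clauses and OR within a clause."""
          terms = [t for t in re.findall(r"[a-z]+", request.lower()) if len(t) > 2]
          rare, common = [], []
          for t in set(terms):
              df = doc_freq.get(t, 0) / max(n_docs, 1)
              (rare if df < rare_cutoff else common).append(t)
          # Discriminating terms each form their own AND clause; broad terms are ORed together
          clauses = [[t] for t in sorted(rare)]
          if common:
              clauses.append(sorted(common))
          return " AND ".join("(" + " OR ".join(group) + ")" for group in clauses)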

  3. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming, Volume 2 is a collection of papers that discusses the controversy about the suitability of COBOL as a common business oriented language, and the development of different common languages for scientific computation. A couple of papers describe the use of the Genie system in numerical calculation and analyze Mercury Autocode in terms of a phrase structure language, such as in the source language, target language, the order structure of ATLAS, and the meta-syntactical language of the assembly program. Other papers explain interference or an "intermediate

  4. Automatic quantitative analysis of morphology of apoptotic HL-60 cells

    OpenAIRE

    Liu, Yahui; Lin, Wang; Yang, Xu; Liang, Weizi; Zhang, Jun; Meng, Maobin; Rice, John R.; Sa, Yu; Feng, Yuanming

    2014-01-01

    Morphological identification is a widespread procedure to assess the presence of apoptosis by visual inspection of the morphological characteristics or the fluorescence images. The procedure is lengthy and results are observer dependent. A quantitative automatic analysis is objective and would greatly help the routine work. We developed an image processing and segmentation method which combined the Otsu thresholding and morphological operators for apoptosis study. An automatic determina...

  5. Automatic Implantable Cardiac Defibrillator

    Medline Plus

    Full Text Available Automatic Implantable Cardiac Defibrillator February 19, 2009 Halifax Health Medical Center, Daytona Beach, FL Welcome to Halifax Health Daytona Beach, Florida. Over the next hour you' ...

  6. Automatic Payroll Deposit System.

    Science.gov (United States)

    Davidson, D. B.

    1979-01-01

    The Automatic Payroll Deposit System in Yakima, Washington's Public School District No. 7, directly transmits each employee's salary amount for each pay period to a bank or other financial institution. (Author/MLF)

  7. Short-term ECG recording for the identification of cardiac autonomic neuropathy in people with diabetes mellitus

    Science.gov (United States)

    Jelinek, Herbert F.; Pham, Phuong; Struzik, Zbigniew R.; Spence, Ian

    2007-07-01

    Diabetes mellitus (DM) is a serious and increasing health problem worldwide. Compared to non-diabetics, patients experience an increased risk of all cardiovascular diseases, including dysfunctional neural control of the heart. Poor diagnosis of cardiac autonomic neuropathy (CAN) may result in an increased incidence of silent myocardial infarction and ischaemia, which can lead to sudden death. Traditionally, the Ewing battery of tests is used to identify CAN. The purpose of this study is to examine the usefulness of heart rate variability (HRV) analysis of short-term ECG recordings as a method for detecting CAN. HRV may be able to identify asymptomatic individuals, which the Ewing battery is not able to do. Several HRV parameters are assessed, including time and frequency domain as well as nonlinear parameters. Eighteen out of thirty-eight individuals with diabetes were positive for two or more of the Ewing battery of tests, indicating CAN. Approximate Entropy (ApEn), log normalized total power (LnTP) and log normalized high frequency (LnHF) power demonstrate a significant difference for short-term ECG recordings. Our study paves the way to assess the utility of nonlinear parameters in identifying asymptomatic CAN.
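
    Approximate Entropy, one of the nonlinear HRV parameters mentioned above, has a compact definition that can be implemented directly; the parameter choices below (m = 2, tolerance r = 0.2 times the standard deviation) are common defaults and are assumptions rather than the settings of this study.

      import numpy as np

      def approximate_entropy(rr, m=2, r_factor=0.2):
          """rr: 1-D array of RR intervals (s). Returns ApEn(m, r) with r = r_factor * std."""
          rr = np.asarray(rr, dtype=float)
          n = len(rr)
          r = r_factor * rr.std()

          def phi(m):
              # Embed the series into overlapping template vectors of length m
              emb = np.array([rr[i:i + m] for i in range(n - m + 1)])
              # Chebyshev distance between every pair of template vectors
              dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
              c = (dist <= r).mean(axis=1)            # fraction of vectors within tolerance r
              return np.mean(np.log(c))

          return phi(m) - phi(m + 1)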

  8. Automatic Arabic Text Classification

    OpenAIRE

    Al-harbi, S; Almuhareb, A.; Al-Thubaity , A; Khorsheed, M. S.; Al-Rajeh, A.

    2008-01-01

    Automated document classification is an important text mining task especially with the rapid growth of the number of online documents present in Arabic language. Text classification aims to automatically assign the text to a predefined category based on linguistic features. Such a process has different useful applications including, but not restricted to, e-mail spam detection, web page content filtering, and automatic message routing. This paper presents the results of experiments on documen...

  9. Metaphor identification in large texts corpora.

    Directory of Open Access Journals (Sweden)

    Yair Neuman

    Full Text Available Identifying metaphorical language use (e.g., "sweet child") is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of the scope of metaphorical phrases and annotated corpora size. The algorithms' performance in identifying linguistic phrases as metaphorical or literal has been compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and 27% averaged improvement in prediction over the base rate of metaphors in the corpus.

  10. VEHICLE IDENTIFICATION TASK SOLUTION BY WINDSCREEN MARKING WITH A BARCODE

    Directory of Open Access Journals (Sweden)

    A. Levterov

    2012-01-01

    Full Text Available Existing means of vehicle identification are reviewed and present-day traffic requirements are set out. An automatic vehicle identification method based on marking the windscreen with a barcode is proposed and described.

  11. Automatically predicting mood from expressed emotions

    NARCIS (Netherlands)

    Katsimerou, C.

    2016-01-01

    Affect-adaptive systems have the potential to assist users that experience systematically negative moods. This thesis aims at building a platform for predicting automatically a person’s mood from his/her visual expressions. The key word is mood, namely a relatively long-term, stable and diffused aff

  12. Automatic Syntactic Analysis of Free Text.

    Science.gov (United States)

    Schwarz, Christoph

    1990-01-01

    Discusses problems encountered with the syntactic analysis of free text documents in indexing. Postcoordination and precoordination of terms is discussed, an automatic indexing system called COPSY (context operator syntax) that uses natural language processing techniques is described, and future developments are explained. (60 references) (LRW)

  13. Genotypic Identification

    Science.gov (United States)

    In comparison with traditional, phenotype-based procedures for detection and identification of foodborne pathogen Listeria monocytogenes, molecular techniques are superior in terms of sensitivity, specificity and speed. This chapter provides a comprehensive review on the use of molecular methods for...

  14. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of the automatic program derivation. The book also includes some papers by members of the IFIP Working Group 2.1 of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers a

  15. Automatic utilities auditing

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Colin Boughton [Energy Metering Technology (United Kingdom)

    2000-08-01

    At present, energy audits represent only snapshot situations of the flow of energy. The normal pattern of energy audits as seen through the eyes of an experienced energy auditor is described. A brief history of energy auditing is given. It is claimed that the future of energy auditing lies in automatic meter reading with expert data analysis providing continuous automatic auditing thereby reducing the skill element. Ultimately, it will be feasible to carry out auditing at intervals of say 30 minutes rather than five years.

  16. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot

  17. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS). We survey the recent state of the art. The book shows the main problems of ADS, the difficulties involved and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of Automatic Document Summarization are not recent. Powerful algorithms have been develop

  18. Identification of nitrate long term trends in Loire-Brittany river district (France) in connection with hydrogeological contexts, agricultural practices and water table level variations

    Science.gov (United States)

    Lopez, B.; Baran, N.; Bourgine, B.; Ratheau, D.

    2009-04-01

    The European Union (EU) has adopted directives requiring that Member States take measures to reach a "good" chemical status of water resources by the year 2015 (Water Framework Directive: WFD). Alongside, the Nitrates Directive (91/676/EEC) aims at controlling nitrogen pollution and requires Member States to identify groundwaters that contain more than 50 mg NO3 L-1 or could exceed this limit if preventive measures are not taken. In order to achieve these environmental objectives in the Loire-Brittany river basin, or to justify the non-achievement of these objectives, a large dataset of nitrate concentrations (117,056 raw data points distributed over 7,341 time series) and water table level time series (1,371,655 data points distributed over 511 piezometers) is analysed from 1945 to 2007. The 156,700 sq km Loire-Brittany river basin shows various hydrogeological contexts, ranging from sedimentary aquifers to basement ones, with a few volcanic-rock aquifers. Knowledge of the evolution of agricultural practices is important in such a study and, even if this information is not locally available, agricultural practices have globally changed since the 1991 Nitrates Directive. The detailed dataset available for the Loire-Brittany basin aquifers is used to evaluate tools and to propose efficient methodologies for identifying and quantifying past and current trends in nitrate concentrations. The challenge of this study is therefore to propose a global and integrated approach that allows nitrate trend identification for the whole Loire-Brittany river basin. The temporal piezometric behaviour of each aquifer is defined using a geostatistical analysis of the water table level time series. This method requires the calculation of an experimental temporal variogram that can be fitted with a theoretical model valid for a large time range. Identification of contrasted behaviours (short-term, annual or pluriannual water table fluctuations) allows a systematic classification of the Loire
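
    The experimental temporal variogram used above to characterise piezometric behaviour is simply half the mean squared difference of water-table levels binned by time lag; a minimal sketch, with an illustrative lag binning, follows.

      import numpy as np

      def experimental_variogram(times, values, lag_width_days=30.0, max_lag_days=3650.0):
          """times: observation dates in days; values: water-table levels.
          Returns (lag centres, semivariance) estimated as 0.5 * mean squared difference."""
          t = np.asarray(times, dtype=float)
          z = np.asarray(values, dtype=float)
          i, j = np.triu_indices(len(t), k=1)
          lags = np.abs(t[i] - t[j])
          sq = 0.5 * (z[i] - z[j]) ** 2
          edges = np.arange(0.0, max_lag_days + lag_width_days, lag_width_days)
          which = np.digitize(lags, edges)
          centres, gamma = [], []
          for b in range(1, len(edges)):
              mask = which == b
              if mask.any():
                  centres.append(edges[b - 1] + lag_width_days / 2)
                  gamma.append(sq[mask].mean())
          return np.array(centres), np.array(gamma)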

  19. An Automat for the Semantic Processing of Structured Information

    OpenAIRE

    Leiva-Mederos, Amed; Senso, José A.; Domínguez-Velasco, Sandor; Hípola, Pedro

    2012-01-01

    Using the database of the PuertoTerm project, an indexing system based on the cognitive model of Brigitte Enders was built. By analyzing the cognitive strategies of three abstractors, we built an automat that serves to simulate human indexing processes. The automat allows the texts integrated in the system to be assessed, evaluated and grouped by means of the Bipartite Spectral Graph Partitioning algorithm, which also permits visualization of the terms and the documents. The system features a...

  20. State-dependent doubly weighted stochastic simulation algorithm for automatic characterization of stochastic biochemical rare events

    Science.gov (United States)

    Roh, Min K.; Daigle, Bernie J.; Gillespie, Dan T.; Petzold, Linda R.

    2011-12-01

    In recent years there has been substantial growth in the development of algorithms for characterizing rare events in stochastic biochemical systems. Two such algorithms, the state-dependent weighted stochastic simulation algorithm (swSSA) and the doubly weighted SSA (dwSSA), are extensions of the weighted SSA (wSSA) by H. Kuwahara and I. Mura [J. Chem. Phys. 129, 165101 (2008)], 10.1063/1.2987701. The swSSA substantially reduces estimator variance by implementing system state-dependent importance sampling (IS) parameters, but lacks an automatic parameter identification strategy. In contrast, the dwSSA provides for the automatic determination of state-independent IS parameters, and is thus inefficient for systems whose states vary widely in time. We present a novel modification of the dwSSA—the state-dependent doubly weighted SSA (sdwSSA)—that combines the strengths of the swSSA and the dwSSA without inheriting their weaknesses. The sdwSSA automatically computes state-dependent IS parameters via the multilevel cross-entropy method. We apply the method to three examples: a reversible isomerization process, a yeast polarization model, and a lac operon model. Our results demonstrate that the sdwSSA offers substantial improvements over previous methods in terms of both accuracy and efficiency.
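
    To make the importance-sampling idea behind these weighted SSA variants concrete, the sketch below implements a minimal plain wSSA loop in Python: biased propensities steer trajectories toward the rare event, while a likelihood-ratio weight keeps the probability estimate unbiased. The toy reaction network, the constant bias factors gamma (which the sdwSSA would instead make state dependent and learn via cross-entropy), and the rare-event predicate are illustrative assumptions, not details from the paper.

```python
import numpy as np

def wssa_trajectory(x0, stoich, propensities, gamma, t_end, is_rare_event, rng):
    """One weighted-SSA trajectory (minimal sketch of the wSSA idea).

    x0            -- initial state vector
    stoich        -- (n_reactions, n_species) state-change matrix
    propensities  -- function state -> array of reaction propensities a_j(x)
    gamma         -- importance-sampling bias factors, one per reaction (constant here)
    is_rare_event -- predicate on the state
    Returns the trajectory weight if the rare event was reached, else 0.0."""
    x, t, w = x0.copy(), 0.0, 1.0
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break
        b = gamma * a                      # biased propensities
        b0 = b.sum()
        t += rng.exponential(1.0 / a0)     # firing time uses the unbiased total propensity
        j = rng.choice(len(a), p=b / b0)   # reaction chosen from the biased distribution
        w *= (a[j] / a0) / (b[j] / b0)     # likelihood-ratio correction
        x = x + stoich[j]
        if is_rare_event(x):
            return w
    return 0.0

# Toy reversible isomerization A <-> B, asking how likely B reaches 20 by t = 10.
rng = np.random.default_rng(0)
stoich = np.array([[-1, 1], [1, -1]])
prop = lambda x: np.array([0.05 * x[0], 1.0 * x[1]])
gamma = np.array([1.6, 0.6])               # push the system toward producing B
weights = [wssa_trajectory(np.array([100, 0]), stoich, prop, gamma, 10.0,
                           lambda x: x[1] >= 20, rng) for _ in range(2000)]
print("rare-event probability estimate:", np.mean(weights))
```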

  1. Automatic Dance Lesson Generation

    Science.gov (United States)

    Yang, Yang; Leung, H.; Yue, Lihua; Deng, LiQun

    2012-01-01

    In this paper, an automatic lesson generation system is presented which is suitable in a learning-by-mimicking scenario where the learning objects can be represented as multiattribute time series data. The dance is used as an example in this paper to illustrate the idea. Given a dance motion sequence as the input, the proposed lesson generation…

  2. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of the input. We describe a system to derive such time bounds automatically using abstract...

  3. Framework for automatic information extraction from research papers on nanocrystal devices

    Directory of Open Access Journals (Sweden)

    Thaer M. Dieb

    2015-09-01

    Full Text Available To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called “NaDev” (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called “NaDevEx” (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and a list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as correct identification, i.e., loose agreement (in many cases, appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with the results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39–73%); however, precision is better (75–97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for...
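
    The "loose agreement" scoring described above, where an extracted term counts as correct if it overlaps a gold-standard term of the same information category, can be made concrete with a small evaluation function. The annotation format and the example spans below are assumptions for illustration and are not taken from the NaDevEx code.

```python
def overlaps(a, b):
    """Character-offset overlap between two (start, end) spans."""
    return a[0] < b[1] and b[0] < a[1]

def precision_recall(predicted, gold, loose=False):
    """predicted/gold: lists of (start, end, category) term annotations.

    Strict mode requires identical spans and categories; loose mode accepts any
    span overlap within the same information category (the 'loose agreement'
    used when a head noun such as 'temperature' matches part of a longer term)."""
    def match(p, g):
        if p[2] != g[2]:
            return False
        return overlaps(p[:2], g[:2]) if loose else p[:2] == g[:2]

    tp = sum(any(match(p, g) for g in gold) for p in predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = (sum(any(match(p, g) for p in predicted) for g in gold) / len(gold)
              if gold else 0.0)
    return precision, recall

# Example: gold term spanning "growth temperature of 450 C", prediction covering "temperature".
gold = [(10, 37, "physical_quantity")]
pred = [(17, 28, "physical_quantity")]
print(precision_recall(pred, gold, loose=False))  # (0.0, 0.0)
print(precision_recall(pred, gold, loose=True))   # (1.0, 1.0)
```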

  4. Profiling School Shooters: Automatic Text-Based Analysis

    Directory of Open Access Journals (Sweden)

    Yair eNeuman

    2015-06-01

    Full Text Available School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their varied characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by six school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters' texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology.

  5. Uranium casting furnace automatic temperature control development

    International Nuclear Information System (INIS)

    Development of an automatic molten uranium temperature control system for use on batch-type induction casting furnaces is described. Implementation of a two-color optical pyrometer, development of an optical scanner for the pyrometer, determination of furnace thermal dynamics, and design of control systems are addressed. The optical scanning system is shown to greatly improve pyrometer measurement repeatability, particularly where heavy floating slag accumulations cause surface temperature gradients. Thermal dynamics of the furnaces were determined by applying least-squares system identification techniques to actual production data. A unity feedback control system utilizing a proportional-integral-derivative compensator is designed by using frequency-domain techniques. 14 refs
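
    A rough sketch of the least-squares system-identification step mentioned above is given below: a first-order discrete-time model is fitted to logged power-input and temperature data and converted to a gain and time constant that a PID design could then use. The sampling period, model order, and synthetic data are assumptions for illustration, not values from the report.

```python
import numpy as np

def fit_first_order_arx(u, y, dt):
    """Fit y[k+1] = a*y[k] + b*u[k] by ordinary least squares and convert to the
    continuous-time gain K and time constant tau of K / (tau*s + 1)."""
    phi = np.column_stack([y[:-1], u[:-1]])      # regressor matrix
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    a, b = theta
    tau = -dt / np.log(a)                        # a = exp(-dt/tau)
    K = b / (1.0 - a)                            # steady-state gain
    return K, tau

# Synthetic example: induction-power steps (kW) vs. melt temperature rise (deg C).
dt = 5.0                                         # assumed sample period, seconds
t = np.arange(0, 600, dt)
u = np.where(t < 300, 40.0, 55.0)                # power step at t = 300 s
true_K, true_tau = 12.0, 90.0
y = np.zeros_like(t)
for k in range(len(t) - 1):                      # simulate a "true" first-order furnace response
    y[k + 1] = y[k] + dt / true_tau * (true_K * u[k] - y[k])
noisy_y = y + np.random.default_rng(1).normal(0, 0.5, len(y))
K, tau = fit_first_order_arx(u, noisy_y, dt)
print(f"estimated gain {K:.1f} degC/kW, time constant {tau:.0f} s")
```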

  6. Automatic cytometric device using multiple wavelength excitations

    Science.gov (United States)

    Rongeat, Nelly; Ledroit, Sylvain; Chauvet, Laurence; Cremien, Didier; Urankar, Alexandra; Couderc, Vincent; Nérin, Philippe

    2011-05-01

    Precise identification of eosinophils, basophils, and specific subpopulations of blood cells (B lymphocytes) in an unconventional automatic hematology analyzer is demonstrated. Our apparatus mixes two excitation radiations by means of an acousto-optic tunable filter to properly control the fluorescence emission of phycoerythrin cyanin 5 (PC5) conjugated to antibodies (anti-CD20 or anti-CRTH2) and Thiazole Orange. In this way our analyzer, which combines techniques of hematology analysis and flow cytometry based on multiple fluorescence detection, drastically improves the signal-to-noise ratio and decreases the impact of spectral overlaps coming from multiple fluorescence emissions.

  7. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is typically identified by a minimum value. The biggest challenge in automatic fault extraction is noise, including noise in the seismic data itself. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model test results show that this method is feasible and effective in automatic fault extraction and noise suppression. The application to field data further illustrates its validity and superiority. (paper)

  8. Automatic indexing, compiling and classification

    International Nuclear Information System (INIS)

    A review of the principles of automatic indexing is followed by a comparison and summing-up of work by the authors and by a Soviet team from the Moscow INFORM-ELECTRO Institute. The mathematical and linguistic problems of the automatic building of thesauri and automatic classification are examined.

  9. The automatic NMR gaussmeter

    International Nuclear Information System (INIS)

    The paper describes an automatic gaussmeter operating according to the principle of nuclear magnetic resonance. The operating principle, the block diagram and the operating parameters of the meter are discussed. It can be applied to measurements of induction in electromagnets of wide-line EPR and NMR radio-spectrometers and in calibration stands for magnetic induction values. The frequency range of the autodyne oscillator, from 0.6 up to 86 MHz for protons, corresponds to a field range from 0.016 up to 2 T. Application of other nuclei, such as 7Li and 2D, is also foreseen. The induction measurement is carried out automatically, and the NMR signal and the value of the measured induction are displayed on a monitor screen. (author)

  10. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
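
    The Monte Carlo evaluation idea described above, generating artificial series with a known trend and measuring an estimator's accuracy on them, can be sketched in a few lines of Python. The trend shape, noise model, and the simple moving-average estimator below are illustrative choices, not the book's algorithms.

```python
import numpy as np

def moving_average_trend(x, window):
    """Simple centered moving-average trend estimator (one of many possible choices)."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def monte_carlo_rmse(n_series=500, n=1000, window=101, seed=0):
    """Average RMSE of the estimator over artificial series with a known trend."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 1, n)
    true_trend = 2.0 * t + 0.5 * np.sin(2 * np.pi * t)   # known, smooth trend
    errors = []
    for _ in range(n_series):
        noise = rng.normal(0.0, 0.5, n)                  # i.i.d. noise; an AR(1) model
        series = true_trend + noise                      # could be substituted here
        estimate = moving_average_trend(series, window)
        core = slice(window, n - window)                 # ignore edge effects
        errors.append(np.sqrt(np.mean((estimate[core] - true_trend[core]) ** 2)))
    return np.mean(errors)

print("mean RMSE of the trend estimate:", monte_carlo_rmse())
```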

  11. Automatic Wall Painting Robot

    OpenAIRE

    P.KEERTHANAA, K.JEEVITHA, V.NAVINA, G.INDIRA, S.JAYAMANI

    2013-01-01

    The primary aim of the project is to design, develop and implement an automatic wall painting robot which helps to achieve low-cost painting equipment. Despite the advances in robotics and its wide-spreading applications, interior wall painting has shared little in research activities. The painting chemicals can cause hazards to the human painters such as eye and respiratory system problems. Also, the nature of the painting procedure that requires repeated work and hand raising makes it boring, time a...

  12. Automatic Program Reports

    OpenAIRE

    Lígia Maria da Silva Ribeiro; Gabriel de Sousa Torcato David

    2007-01-01

    To profit from the data collected by the SIGARRA academic IS, a systematic set of graphs and statistics has been added to it and are available on-line. This analytic information can be automatically included in a flexible yearly report for each program as well as in a synthesis report for the whole school. Some difficulties in the interpretation of some graphs led to the definition of new key indicators and the development of a data warehouse across the university where effective data consolidation...

  13. Automatic Inductive Programming Tutorial

    OpenAIRE

    Aler, Ricardo

    2006-01-01

    Computers that can program themselves are an old dream of Artificial Intelligence, but only nowadays is there some noteworthy progress. In relation to Machine Learning, a computer program is the most powerful structure that can be learned, pushing the final goal well beyond neural networks or decision trees. There are currently many separate areas, working independently, related to automatic programming, both deductive and inductive. The first goal of this tutorial is to give the attendants ...

  14. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices.

  15. Automatic Differentiation Variational Inference

    OpenAIRE

    Kucukelbir, Alp; Tran, Dustin; Ranganath, Rajesh; Gelman, Andrew; Blei, David M.

    2016-01-01

    Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally challenging, which makes it difficult to efficiently cycle through the steps. To this end, we develop automatic differentiation variational inference (ADVI). Using our method, the scientist on...

  16. Automaticity or active control

    DEFF Research Database (Denmark)

    Tudoran, Ana Alina; Olsen, Svein Ottar

    This study addresses the quasi-moderating role of habit strength in explaining action loyalty. A model of loyalty behaviour is proposed that extends the traditional satisfaction–intention–action loyalty network. Habit strength is conceptualised as a cognitive construct to refer to the psychologic......, respectively, between intended loyalty and action loyalty. At high levels of habit strength, consumers are more likely to free up cognitive resources and incline the balance from controlled to routine and automatic-like responses....

  17. Automatic digital image registration

    Science.gov (United States)

    Goshtasby, A.; Jain, A. K.; Enslin, W. R.

    1982-01-01

    This paper introduces a general procedure for automatic registration of two images which may have translational, rotational, and scaling differences. This procedure involves (1) segmentation of the images, (2) isolation of dominant objects from the images, (3) determination of corresponding objects in the two images, and (4) estimation of transformation parameters using the centers of gravity of objects as control points. An example is given which uses this technique to register two images which have translational, rotational, and scaling differences.
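
    Step (4), estimating the transformation from corresponding object centroids, amounts to a least-squares similarity-transform fit. The sketch below uses a standard Umeyama-style closed form on centroid pairs; the point coordinates are made up for illustration and the procedure is only a plausible reading of the paper's final step.

```python
import numpy as np

def fit_similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t with dst ~ s * R @ src + t.

    src, dst -- (N, 2) arrays of corresponding control points (object centroids)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)       # cross-covariance of centered points
    d = np.sign(np.linalg.det(U @ Vt))              # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Centroids of three matched objects (made-up coordinates).
src = np.array([[10.0, 20.0], [40.0, 25.0], [30.0, 60.0]])
angle, scale, shift = np.deg2rad(12.0), 1.3, np.array([5.0, -8.0])
Rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
dst = scale * src @ Rot.T + shift                   # simulated registered positions
s, R, t = fit_similarity_transform(src, dst)
print("recovered scale:", round(s, 3), "translation:", np.round(t, 2))
```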

  18. System Identification

    NARCIS (Netherlands)

    Keesman, K.J.

    2011-01-01

    Summary System Identification Introduction.- Part I: Data-based Identification.- System Response Methods.- Frequency Response Methods.- Correlation Methods.- Part II: Time-invariant Systems Identification.- Static Systems Identification.- Dynamic Systems Identification.- Part III: Time-varying Systems...

  19. The ALDB box: automatic testing of cognitive performance in groups of aviary-housed pigeons.

    Science.gov (United States)

    Huber, Ludwig; Heise, Nils; Zeman, Christopher; Palmers, Christian

    2015-03-01

    The combination of highly controlled experimental testing and the voluntary participation of unrestrained animals has many advantages over traditional, laboratory-based learning environments in terms of animal welfare, learning speed, and resource economy. Such automatic learning environments have recently been developed for primates (Fagot & Bonté, 2010; Fagot & Paleressompoulle, 2009) but, so far, have not been achieved with highly mobile creatures such as birds. Here, we present a novel testing environment for pigeons. Living together in small groups in outside aviaries, they can freely choose to participate in learning experiments by entering and leaving the automatic learning box at any time. At the single-access entry, they are individualized using radio frequency identification technology and then trained or tested in a stress-free and self-terminating manner. The voluntary nature of their participation according to their individual biorhythm guarantees high motivation levels and good learning and test performance. Around-the-clock access allows for massed-trials training, which in baboons has been proven to have facilitative effects on discrimination learning. The performance of 2 pigeons confirmed the advantages of the automatic-learning-device-for-birds (ALDB) box. The latter is the result of a development process of several years that required us to deal with and overcome a number of technical challenges: (1) mechanically controlled access to the box, (2) identification of the birds, (3) the release of a bird and, at the same time, prevention of others from entering the box, and (4) reliable functioning of the device despite long operation times and exposure to high dust loads and low temperatures. PMID:24737096

  20. Automatic Caption Generation for Electronics Textbooks

    Directory of Open Access Journals (Sweden)

    Veena Thakur

    2014-12-01

    Full Text Available Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify the parts of the textbooks which may be helpful for the students. The caption model describes the entities, attributes, roles and their relationships, plus the constraints that govern the problem domain. The caption model is created in order to represent the vocabulary and key concepts of the problem domain. It also identifies the relationships among all the entities within the scope of the problem domain, and commonly identifies their attributes. It defines a vocabulary and is helpful as a communication tool. DOM-Sortze is a framework that enables the semi-automatic generation of the Caption Module for technology supported learning systems (TSLS) from electronic textbooks. The semi-automatic generation of the Caption Module entails the identification and elicitation of knowledge from the documents, to which end Natural Language Processing (NLP) techniques are combined with ontologies and heuristic reasoning.

  1. AUTOMATIC CAPTION GENERATION FOR ELECTRONICS TEXTBOOKS

    Directory of Open Access Journals (Sweden)

    Veena Thakur

    2015-10-01

    Full Text Available Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify the parts of the textbooks which may be helpful for the students. The caption model describes the entities, attributes, roles and their relationships, plus the constraints that govern the problem domain. The caption model is created in order to represent the vocabulary and key concepts of the problem domain. It also identifies the relationships among all the entities within the scope of the problem domain, and commonly identifies their attributes. It defines a vocabulary and is helpful as a communication tool. DOM-Sortze is a framework that enables the semi-automatic generation of the Caption Module for technology supported learning systems (TSLS) from electronic textbooks. The semi-automatic generation of the Caption Module entails the identification and elicitation of knowledge from the documents, to which end Natural Language Processing (NLP) techniques are combined with ontologies and heuristic reasoning.

  2. Ballistics Image Processing and Analysis for Firearm Identification

    OpenAIRE

    Li, Dongguang

    2009-01-01

    Firearm identification is an intensive and time-consuming process that requires physical interpretation of forensic ballistics evidence. Especially as the level of violent crime involving firearms escalates, the number of firearms to be identified accumulates dramatically. The demand for an automatic firearm identification system arises. This chapter proposes a new, analytic system for automatic firearm identification based on the cartridge and projectile specimens. Not only do we present an ...

  3. Automatic radioactive waste recycling

    International Nuclear Information System (INIS)

    The production of a plutonium ingot by calcium reduction process at CEA/Valduc generates a residue called 'slag'. This article introduces the recycling unit which is dedicated to the treatment of slags. The aim is to separate and to recycle the plutonium trapped in this bulk on the one hand, and to generate a disposable waste from the slag on the other hand. After a general introduction of the facilities, some elements will be enlightened, particularly the dissolution step, the filtration and the drying equipment. Reflections upon technological constraints will be proposed, and the benefits of a fully automatic recycling unit of nuclear waste will also be stressed. (authors)

  4. Automatic Configuration in NTP

    Institute of Scientific and Technical Information of China (English)

    Jiang Zongli(蒋宗礼); Xu Binbin

    2003-01-01

    NTP is nowadays the most widely used distributed network time protocol, which aims at synchronizing the clocks of computers in a network and maintaining the accuracy and validity of the time information transmitted in the network. Without an automatic configuration mechanism, the stability and flexibility of a synchronization network built upon the NTP protocol are not satisfying. P2P's resource discovery mechanism is used to look for time sources in a synchronization network, and according to the network environment and node quality, the synchronization network is constructed dynamically.

  5. Automatically predicting mood from expressed emotions

    OpenAIRE

    Katsimerou, C.

    2016-01-01

    Affect-adaptive systems have the potential to assist users that experience systematically negative moods. This thesis aims at building a platform for predicting automatically a person’s mood from his/her visual expressions. The key word is mood, namely a relatively long-term, stable and diffused affective state, as opposed to the short-term, volatile and intense emotion. This is emphasized, because mood and emotion often tend to be used as synonyms. However, since their differences are well e...

  6. Description of automatic tool change systems in machining centers

    OpenAIRE

    Jirásek, Lukáš

    2008-01-01

    Automation of tool changing is one of the key issues in increasing the universality, flexibility and overall level of automation of machining production machines. In mechanical machining we usually do not make do with a single active tool, but derive benefits from many different tools over the course of an operation cycle. Therefore, the first step in raising the productivity of machine tools was the need for automatic tool changing. The main contribution of automatic tools cha...

  7. Indexing of Arabic documents automatically based on lexical analysis

    OpenAIRE

    Molijy, Abdulrahman Al; Hmeidi, Ismail; Alsmadi, Izzat

    2012-01-01

    The continuous information explosion through the Internet and all information sources makes it necessary to perform all information processing activities automatically in quick and reliable manners. In this paper, we proposed and implemented a method to automatically create an index for books written in the Arabic language. The process depends largely on text summarization and abstraction processes to collect main topics and statements in the book. The process is developed in terms of accuracy a...

  8. Automatic readout micrometer

    International Nuclear Information System (INIS)

    A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range

  9. Photo-identification methods reveal seasonal and long-term site-fidelity of Risso’s dolphins (Grampus griseus) in shallow waters (Cardigan Bay, Wales)

    NARCIS (Netherlands)

    Boer, de M.N.; Leopold, M.F.; Simmonds, M.P.; Reijnders, P.J.H.

    2013-01-01

    A photo-identification study on Risso’s dolphins was carried out off Bardsey Island in Wales (July to September, 1997-2007). Their local abundance was estimated using two different analytical techniques: 1) mark-recapture of well-marked dolphins using a “closed-population” model; and 2) a census technique...

  10. Automatic Wall Painting Robot

    Directory of Open Access Journals (Sweden)

    P.KEERTHANAA, K.JEEVITHA, V.NAVINA, G.INDIRA, S.JAYAMANI

    2013-07-01

    Full Text Available The primary aim of the project is to design, develop and implement an automatic wall painting robot which helps to achieve low-cost painting equipment. Despite the advances in robotics and its wide-spreading applications, interior wall painting has shared little in research activities. The painting chemicals can cause hazards to the human painters such as eye and respiratory system problems. Also, the nature of the painting procedure that requires repeated work and hand raising makes it boring, time and effort consuming. When construction workers and robots are properly integrated in building tasks, the whole construction process can be better managed and savings in human labour and timing are obtained as a consequence. In addition, it would offer the opportunity to reduce or eliminate human exposure to difficult and hazardous environments, which would solve most of the problems connected with safety when many activities occur at the same time. These factors motivate the development of an automated robotic painting system.

  11. Automatic alkaloid removal system.

    Science.gov (United States)

    Yahaya, Muhammad Rizuwan; Hj Razali, Mohd Hudzari; Abu Bakar, Che Abdullah; Ismail, Wan Ishak Wan; Muda, Wan Musa Wan; Mat, Nashriyah; Zakaria, Abd

    2014-01-01

    This automated alkaloid removal machine was developed at the Instrumentation Laboratory, Universiti Sultan Zainal Abidin, Malaysia, purposely for removing the alkaloid toxicity from Dioscorea hispida (DH) tuber. DH is a poisonous plant; scientific studies have shown that its tubers contain a toxic alkaloid constituent, dioscorine. The tubers can only be consumed after the poison is removed. In this experiment, the tubers need to be blended into powder form before being inserted into the machine basket. The user pushes the START button on the machine controller to switch the water pump ON, creating a turbulent wave of water in the machine tank. The water flow is stopped automatically by triggering the outlet solenoid valve. The tuber powder is washed for 10 minutes while 1 liter of contaminated water containing the toxin mixture flows out. The controller then automatically triggers the inlet solenoid valve, and new water flows into the machine tank until it reaches the desired level, which is determined by an ultrasonic sensor. This process is repeated for 7 h, and a positive, significant result is obtained according to several biological parameters: pH, temperature, dissolved oxygen, turbidity, conductivity, and fish survival rate or time. These parameters are near or the same as those of the control water, and it is assumed that the toxin is fully removed when the pH of the DH powder wash water is near that of the control water. For the control water, the pH is about 5.3, while water from this experimental process is about 6.0; before running the machine, the pH of the contaminated water is about 3.8, which is too acidic. This automated machine can save time in removing toxicity from DH compared with the traditional method, while requiring less observation by the user. PMID:24783795

  12. Effects of moderate maternal energy restriction on the offspring metabolic health, in terms of obesity and related diseases, and identification of determinant factors and early biomarkers

    OpenAIRE

    Torrens García, Juana María

    2015-01-01

    Introduction: A growing body of evidence, from epidemiological studies in humans and animal models, indicates that maternal health and nutritional status during gestation and lactation can program the propensity to develop obesity in their offspring. Huge efforts are now being directed toward understanding the molecular mechanisms underlying this developmental programming. Identification of these mechanisms could give some clues about potential strategies to prevent or revert programmed prop...

  13. Automatic Modulation Recognition by Support Vector Machines Using Wavelet Kernel

    International Nuclear Information System (INIS)

    Automatic modulation identification plays a significant role in electronic warfare, electronic surveillance systems and electronic countermeasures. The task of modulation recognition of communication signals is to determine the modulation type and signal parameters. In fact, automatic modulation identification can be regarded as an application of pattern recognition in the communications field. The support vector machine (SVM) is a universal learning machine which is widely used in the fields of pattern recognition, regression estimation and probability density estimation. In this paper, a new method using a wavelet kernel function is proposed, which maps the input vector xi into a high-dimensional feature space F. In this feature space F, we can construct the optimal hyperplane that realizes the maximal margin. That is to say, we can use the SVM to classify communication signals into two groups, namely analogue modulated signals and digitally modulated signals. In addition, computer simulation results are given at the end, which show the good performance of the method
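
    A minimal sketch of the idea, plugging a wavelet-style kernel into an SVM that separates analogue from digitally modulated signals, is shown below using scikit-learn's support for callable kernels. The Morlet-like kernel form, the two toy features, and the synthetic data are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVC

def wavelet_kernel(A, B, a=1.0):
    """Wavelet kernel K(x, y) = prod_i h((x_i - y_i) / a) with a Morlet-style mother
    wavelet h(u) = cos(1.75 u) * exp(-u**2 / 2).  A, B are (n, d) feature matrices."""
    diff = (A[:, None, :] - B[None, :, :]) / a
    return np.prod(np.cos(1.75 * diff) * np.exp(-0.5 * diff ** 2), axis=2)

# Toy features per signal segment, e.g. amplitude variance and phase-derivative variance
# (illustrative stand-ins for real modulation-recognition features).
rng = np.random.default_rng(0)
analogue = rng.normal([1.0, 0.3], 0.15, size=(100, 2))    # class 0
digital = rng.normal([0.4, 1.1], 0.15, size=(100, 2))     # class 1
X = np.vstack([analogue, digital])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel=lambda A, B: wavelet_kernel(A, B, a=0.8), C=10.0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```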

  14. Optimal Coordination of Automatic Line Switches for Distribution Systems

    OpenAIRE

    Jyh-Cherng Gu; Ming-Ta Yang

    2012-01-01

    For the Taiwan Power Company (Taipower), the margins of coordination times between the lateral circuit breakers (LCB) of underground 4-way automatic line switches and the protection equipment of high voltage customers are often too small. This could lead to sympathy tripping by the feeder circuit breaker (FCB) of the distribution feeder and create difficulties in protection coordination between upstream and downstream protection equipment, identification of faults, and restoration operations....

  15. Automatic target validation based on neuroscientific literature mining for tractography

    OpenAIRE

    Xavier Vasques; Renaud Richardet; Etienne Pralong; LAURA CIF

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so t...

  16. Experiments in Image Segmentation for Automatic US License Plate Recognition

    OpenAIRE

    Diaz Acosta, Beatriz

    2004-01-01

    License plate recognition/identification (LPR/I) applies image processing and character recognition technology to identify vehicles by automatically reading their license plates. In the United States, however, each state has its own standard-issue plates, plus several optional styles, which are referred to as special license plates or varieties. There is a clear absence of standardization and multi-colored, complex backgrounds are becoming more frequent in license plates. Commercially availab...

  17. Requirements for Automatic Performance Analysis - APART Technical Report

    OpenAIRE

    Riley, Graham D.; Gurd, John R.

    1999-01-01

    This report discusses the requirements for automatic performance analysis tools. The discussion proceeds by first examining the nature and purpose of performance analysis. This results in an identification of the sources of performance data available to the analysis process and some properties of the process itself. Consideration is then given to the automation of the process. Many environmental factors affecting the performance analysis process are identified leading to the definition of a s...

  18. Making automatic differentiation truly automatic : coupling PETSc with ADIC

    International Nuclear Information System (INIS)

    Despite its name, automatic differentiation (AD) is often far from an automatic process. Often one must specify independent and dependent variables, indicate the derivative quantities to be computed, and perhaps even provide information about the structure of the Jacobians or Hessians being computed. However, when AD is used in conjunction with a toolkit with well-defined interfaces, many of these issues do not arise. The authors describe recent research into coupling the ADIC automatic differentiation tool with PETSc, a toolkit for the parallel numerical solution of PDEs. This research leverages the interfaces and objects of PETSc to make the AD process very nearly transparent

  19. Hydra: Automatic algorithm exploration from linear algebra equations

    OpenAIRE

    Duchâteau, Alexandre; Padua, David; Barthou, Denis

    2013-01-01

    Hydra accepts an equation written in terms of operations on matrices and automatically produces highly efficient code to solve these equations. Processing of the equation starts by tiling the matrices. This transforms the equation into either a single new equation containing terms involving tiles or into multiple equations, some of which can be solved in parallel with each other. Hydra continues transforming the equations using tiling and seeking terms that Hydra know...

  20. Identification and quantification of phytochelatins in roots of rice to long-term exposure: evidence of individual role on arsenic accumulation and translocation

    OpenAIRE

    Lemos Batista, Bruno; Nigar, Meher; Mestrot, Adrien; Alves Rocha, Bruno; Barbosa Júnior, Fernando; Price, Adam H.; Raab, Andrea; Feldmann, Jörg

    2014-01-01

    Rice has the predilection to take up arsenic in the form of methylated arsenic (o-As) and inorganic arsenic species (i-As). Plants defend themselves using i-As efflux systems and the production of phytochelatins (PCs) to complex i-As. Our study focused on the identification and quantification of phytochelatins by HPLC-ICP-MS/ESI-MS, relating them to the several variables linked to As exposure. GSH, 11 PCs, and As–PC complexes from the roots of six rice cultivars (Italica Carolina, Dom Sofid, ...

  1. Photo-identification methods reveal seasonal and long-term site-fidelity of Risso’s dolphins (Grampus griseus) in shallow waters (Cardigan Bay, Wales)

    OpenAIRE

    Boer; Leopold, M.F.; Simmonds, M.P.; Reijnders, P.J.H.

    2013-01-01

    A photo-identification study on Risso’s dolphins was carried out off Bardsey Island in Wales (July to September, 1997-2007). Their local abundance was estimated using two different analytical techniques: 1) mark-recapture of well-marked dolphins using a “closed-population” model; and 2) a census technique based on the total number of identified individual dolphins sighted over the study period. The mark-recapture estimates of 121 (left sides; 64 - 178, 95% CI; CV 0.24) and 145 dolphins (righ...

  2. Automatic Detection of Dominance and Expected Interest

    Directory of Open Access Journals (Sweden)

    M. Teresa Anguera

    2010-01-01

    Full Text Available Social Signal Processing is an emergent area of research that focuses on the analysis of social constructs. Dominance and interest are two of these social constructs. Dominance refers to the level of influence a person has in a conversation. Interest, when referred to in terms of group interactions, can be defined as the degree of engagement that the members of a group collectively display during their interaction. In this paper, we argue that, using only behavioral motion information, we are able to predict the interest of observers when looking at face-to-face interactions as well as the dominant people. First, we propose a simple set of movement-based features from body, face, and mouth activity in order to define a higher-level set of interaction indicators. The considered indicators are manually annotated by observers. Based on the opinions obtained, we define an automatic binary dominance detection problem and a multiclass interest quantification problem. The Error-Correcting Output Codes framework is used to learn to rank the perceived observers' interest in face-to-face interactions, while AdaBoost is used to solve the dominance detection problem. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers in both the dominance and interest detection problems.
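
    A compact sketch of how the two learning problems could be wired up with off-the-shelf components follows: AdaBoost for the binary dominance problem and an error-correcting output-code wrapper around binary SVMs for the multiclass interest levels. The per-participant motion features and labels are random placeholders, not the authors' descriptors.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder features per meeting participant: body, face and mouth activity statistics.
X = rng.normal(size=(300, 6))
dominant = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 300) > 0).astype(int)   # binary label
interest = np.clip((X[:, 1] + X[:, 4]).round().astype(int) + 2, 0, 4)            # 5 levels

# Binary dominance detection with AdaBoost.
dominance_clf = AdaBoostClassifier(n_estimators=100).fit(X, dominant)

# Multiclass interest quantification with Error-Correcting Output Codes over binary SVMs.
interest_clf = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"),
                                    code_size=2.0, random_state=0).fit(X, interest)

print("dominance training accuracy:", dominance_clf.score(X, dominant))
print("interest training accuracy:", interest_clf.score(X, interest))
```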

  3. Semi-automatic analysis of fire debris

    Science.gov (United States)

    Touron; Malaquin; Gardebas; Nicolai

    2000-05-01

    Automated analysis of fire residues involves a strategy which deals with the wide variety of criminalistic samples received. Because of the unknown concentration of accelerant in a sample and the wide range of flammable products, full attention from the analyst is required. Primary detection with a photoionisation detector resolves the first problem by determining the right method to use: either the less responsive classical head-space determination, or absorption on an active charcoal tube, a method better suited to low concentrations. The latter method is suitable for automatic thermal desorption (ATD400), to avoid any risk of cross-contamination. A PONA column (50 m x 0.2 mm i.d.) allows the separation of volatile hydrocarbons from C(1) to C(15) and the updating of a database. A specific second column is used for heavy hydrocarbons. Heavy products (C(13) to C(40)) were extracted from residues using a very small amount of pentane, concentrated to 1 ml at 50 degrees C and then placed on an automatic carousel. Comparison of flammables with reference chromatograms provided the expected identification, possibly using mass spectrometry. This analytical strategy belongs to the IRCGN quality program, resulting in the analysis of 1500 samples per year by two technicians. PMID:10802196

  4. Electronic amplifiers for automatic compensators

    CERN Document Server

    Polonnikov, D Ye

    1965-01-01

    Electronic Amplifiers for Automatic Compensators presents the design and operation of electronic amplifiers for use in automatic control and measuring systems. This book is composed of eight chapters that consider the problems of constructing input and output circuits of amplifiers, suppression of interference and ensuring high sensitivity. This work begins with a survey of the operating principles of electronic amplifiers in automatic compensator systems. The succeeding chapters deal with circuit selection and the calculation and determination of the principal characteristics of amplifiers, as

  5. Automatic control algorithm effects on energy production

    Science.gov (United States)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
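
    A toy version of this kind of simulation, stepping through a wind-speed series, applying start/stop thresholds, and accumulating energy from a power curve, is sketched below. The power-curve numbers, thresholds, and hysteresis rule are illustrative assumptions rather than the Sandia model, but they show how the choice of starting algorithm changes annual energy.

```python
import numpy as np

def simulate_energy(wind, dt_hours, cut_in, cut_out, rated_speed, rated_kw):
    """Accumulate energy (kWh) over a wind time series with simple start/stop hysteresis."""
    running = False
    energy = 0.0
    for v in wind:
        if not running and v >= cut_in:
            running = True                      # start threshold reached
        elif running and (v < 0.8 * cut_in or v > cut_out):
            running = False                     # stop on low wind (with margin) or high wind
        if running:
            # crude power curve: cubic up to rated speed, flat afterwards
            p = rated_kw * min((v / rated_speed) ** 3, 1.0)
            energy += p * dt_hours
    return energy

rng = np.random.default_rng(0)
wind = np.clip(rng.normal(7.0, 3.0, 8760), 0, None)        # hourly wind speeds for a year (m/s)
for cut_in in (3.0, 4.0, 5.0):
    e = simulate_energy(wind, 1.0, cut_in, cut_out=25.0, rated_speed=12.0, rated_kw=60.0)
    print(f"cut-in {cut_in} m/s -> annual energy {e / 1000:.1f} MWh")
```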

  6. Clothes Dryer Automatic Termination Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  7. Prospects for de-automatization.

    Science.gov (United States)

    Kihlstrom, John F

    2011-06-01

    Research by Raz and his associates has repeatedly found that suggestions for hypnotic agnosia, administered to highly hypnotizable subjects, reduce or even eliminate Stroop interference. The present paper sought unsuccessfully to extend these findings to negative priming in the Stroop task. Nevertheless, the reduction of Stroop interference has broad theoretical implications, both for our understanding of automaticity and for the prospect of de-automatizing cognition in meditation and other altered states of consciousness. PMID:20356765

  8. Process automatization in system administration

    OpenAIRE

    Petauer, Janja

    2013-01-01

    The aim of the thesis is to present the automatization of user management in the company Studio Moderna. The company has grown exponentially in recent years, which is why we needed to find a faster, easier and cheaper way of managing user accounts. We automatized the processes of creating, changing and removing user accounts within Active Directory. We prepared a user interface inside an existing application, used JavaScript for drop-down menus, wrote a script in a scripting programming langu...

  9. Exploring Behavioral Markers of Long-Term Physical Activity Maintenance: A Case Study of System Identification Modeling within a Behavioral Intervention

    Science.gov (United States)

    Hekler, Eric B.; Buman, Matthew P.; Poothakandiyil, Nikhil; Rivera, Daniel E.; Dzierzewski, Joseph M.; Aiken Morgan, Adrienne; McCrae, Christina S.; Roberts, Beverly L.; Marsiske, Michael; Giacobbi, Peter R., Jr.

    2013-01-01

    Efficacious interventions to promote long-term maintenance of physical activity are not well understood. Engineers have developed methods to create dynamical system models for modeling idiographic (i.e., within-person) relationships within systems. In behavioral research, dynamical systems modeling may assist in decomposing intervention effects…

  10. The Masked Semantic Priming Effect Is Task Dependent: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2015-01-01

    Semantic priming effects are popularly explained in terms of an automatic spreading activation process, according to which the activation of a node in a semantic network spreads automatically to interconnected nodes, preactivating a semantically related word. It is expected from this account that semantic priming effects should be routinely…

  11. Topical Session on Liabilities identification and long-term management at national level - Topical Session held during the 36. Meeting of the RWMC

    International Nuclear Information System (INIS)

    These proceedings cover a topical session that was held at the March 2003 meeting of the Radioactive Waste Management Committee. The topical session focused on liability assessment and management for decommissioning of all types of nuclear installations, including decontamination of historic sites and waste management, as applicable. The presentations covered the current, national situations. The first oral presentation, from Switzerland, set the scene by providing a broad coverage of the relevant issues. The subsequent presentations - five from Member countries and one from the EC - described additional national positions and the evolving EC proposed directives. Each oral presentation was followed by a brief period of Q and As for clarification only. A plenary discussion took place on the ensemble of presentations and a Rapporteur provided a report on points made and lessons learnt. Additionally, written contributions were provided by RWMC delegates from several other countries. These are included in the proceedings as are the papers from the oral sessions, and the Rapporteur's report. These papers are not intended to be exhaustive, but to give an informed glimpse of NEA countries' approaches to liability identification and management in the context of nuclear facilities decommissioning and dismantling

  12. The DanTermBank Project

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Pram Nielsen, Louise;

    This paper gives an introduction to the plans and ongoing work in a project, the aim of which is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data from various existing sources, as well as methods for target group oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank.

  13. Exploring Behavioral Markers of Long-term Physical Activity Maintenance: A Case Study of System Identification Modeling within a Behavioral Intervention

    OpenAIRE

    Hekler, Eric B; Buman, Matthew P.; Poothakandiyil, Nikhil; Rivera, Daniel E.; Dzierzewski, Joseph M.; Morgan, Adrienne Aiken; McCrae, Christina S.; Roberts, Beverly L; Marsiske, Michael; Giacobbi, Peter R

    2013-01-01

    Efficacious interventions to promote long-term maintenance of physical activity are not well understood. Engineers have developed methods to create dynamical system models for modeling idiographic (i.e., within-person) relationships within systems. In behavioral research, dynamical systems modeling may assist in decomposing intervention effects and identifying key behavioral patterns that may foster behavioral maintenance. The Active Adult Mentoring Program (AAMP) was a 16-week randomized con...

  14. THEORETICAL CONSIDERATIONS REGARDING THE AUTOMATIC FISCAL STABILIZERS OPERATING MECHANISM

    Directory of Open Access Journals (Sweden)

    Gondor Mihaela

    2012-07-01

    Full Text Available This paper examines the role of Automatic Fiscal Stabilizers (AFS) for stabilizing the cyclical fluctuations of macroeconomic output as an alternative to discretionary fiscal policy, admitting their huge potential as an anti-crisis solution. The objectives of the study are the identification of the general features of the concept of automatic fiscal stabilizers and their logical assessment from an economic perspective. Based on the literature in the field, this paper points out the disadvantages of discretionary fiscal policy and argues for the need to use Automatic Fiscal Stabilizers in order to provide a faster decision-making process, shielded from political interference, and reduced uncertainty for households and the business environment. The paper concludes on the need to use fiscal policy for smoothing the economic cycle, but in a way which includes among its features transparency, responsibility and clear operating mechanisms. Based on the research results, the present paper assumes that pro-cyclicality reduces the effectiveness of the Automatic Fiscal Stabilizers and as a result concludes that it is very important to avoid pro-cyclicality in fiscal rule design. Moreover, by committing in advance to specific fiscal policy actions contingent on economic developments, uncertainty about the fiscal policy framework during a recession should be reduced. Being based on logical analysis rather than on empirical, contextualized analysis, the paper presents some features of the AFS operating mechanism and also identifies and systematizes the factors which determine its importance and national individuality. Reaching a common understanding of the Automatic Fiscal Stabilizer concept as an institutional device for smoothing the gaps of the economic cycles across different countries, particularly for the European Union Member States, will facilitate efforts to coordinate fiscal policy responses during a crisis, especially in the context of the fiscal

  15. Identification and quantification of phytochelatins in roots of rice to long-term exposure: evidence of individual role on arsenic accumulation and translocation.

    Science.gov (United States)

    Batista, Bruno Lemos; Nigar, Meher; Mestrot, Adrien; Rocha, Bruno Alves; Barbosa Júnior, Fernando; Price, Adam H; Raab, Andrea; Feldmann, Jörg

    2014-04-01

    Rice has the predilection to take up arsenic in the form of methylated arsenic (o-As) and inorganic arsenic species (i-As). Plants defend themselves using i-As efflux systems and the production of phytochelatins (PCs) to complex i-As. Our study focused on the identification and quantification of phytochelatins by HPLC-ICP-MS/ESI-MS, relating them to several variables linked to As exposure. GSH, 11 PCs, and As-PC complexes from the roots of six rice cultivars (Italica Carolina, Dom Sofid, 9524, Kitrana 508, YRL-1, and Lemont) exposed to low and high levels of i-As were compared with total As, i-As, and o-As in roots, shoots, and grains. Only Dom Sofid, Kitrana 508, and 9524 were found to produce higher levels of PCs even when exposed to low levels of As. PCs were only correlated to i-As in the roots (r=0.884, P <0.001). However, significant negative correlations to As transfer factors (TF) roots-grains (r= -0.739, P <0.05) and shoots-grains (r= -0.541, P <0.05) suggested that these peptides help in trapping i-As but not o-As in the roots, reducing the grains' i-As. Italica Carolina reduced i-As in grains after high exposure, where some specific PCs had a special role in this reduction. In Lemont, exposure to elevated levels of i-As did not result in higher i-As levels in the grains and there were no significant increases in PCs or thiols. Finally, the high production of PCs in Kitrana 508 and Dom Sofid in response to high As treatment did not relate to a reduction of i-As in grains, suggesting that other mechanisms such as As-PC release and transport seem to be important in determining grain As in these cultivars. PMID:24600019

  16. Automatic measurement system for long term LED parameters

    Science.gov (United States)

    Budzyński, Łukasz; Zajkowski, Maciej

    2015-09-01

    During the past years, the number of LED models available on the market has increased significantly. However, not all of them have parameters which allow for use in professional lighting systems. The article discusses the international standards which should be met by modern LEDs. Among them, one of the most important parameters is the decline in luminous flux during the operation of the LEDs. Its value is influenced by many factors, among others the junction temperature of the diode and the average and maximum values of the supply current. Other parameters important for lighting purposes are the stability of the correlated color temperature and the stability of the chromaticity coordinates of the emitted light. The paper presents a system to measure the luminous flux and colorimetric parameters of LEDs. The measurement system also allows for measuring changes in these parameters during long-term operation of the LEDs.

  17. Indexing of Arabic documents automatically based on lexical analysis

    CERN Document Server

    Molijy, Abdulrahman Al; Alsmadi, Izzat

    2012-01-01

    The continuous information explosion through the Internet and all information sources makes it necessary to perform all information processing activities automatically in quick and reliable manners. In this paper, we proposed and implemented a method to automatically create an index for books written in the Arabic language. The process depends largely on text summarization and abstraction processes to collect main topics and statements in the book. The process was evaluated in terms of accuracy and performance, and results showed that it can effectively replace the effort of manually indexing books and documents, a process that can be very useful in all information processing and retrieval applications.
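
    A heavily simplified stand-in for this kind of index construction is sketched below: tokenize each chapter, drop stopwords, and promote the most frequent terms to index entries pointing back to their chapters. The tiny stopword list, the frequency heuristic, and the sample text are assumptions; the paper's lexical analysis and summarization steps are more elaborate.

```python
import re
from collections import Counter

# A tiny illustrative stopword sample; a real system would use a much larger list.
ARABIC_STOPWORDS = {"في", "من", "على", "إلى", "عن", "أن", "هذا", "التي", "الذي"}

def tokenize(text):
    """Very rough Arabic tokenizer: keep runs of Arabic letters."""
    return re.findall(r"[\u0621-\u064A]+", text)

def build_index(chapters, terms_per_chapter=10):
    """chapters: {chapter_title: text}.  Returns {term: [chapter titles]} by picking the
    most frequent non-stopword tokens of each chapter as its key topics."""
    index = {}
    for title, text in chapters.items():
        counts = Counter(t for t in tokenize(text)
                         if t not in ARABIC_STOPWORDS and len(t) > 2)
        for term, _ in counts.most_common(terms_per_chapter):
            index.setdefault(term, []).append(title)
    return index

chapters = {"الفصل الأول": "النص العربي للفصل الأول ...",
            "الفصل الثاني": "نص الفصل الثاني ..."}
for term, locations in sorted(build_index(chapters).items()):
    print(term, "->", locations)
```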

  18. Automatic control rod programming for boiling water reactors

    International Nuclear Information System (INIS)

    The objective of long-term control rod programming is to develop a sequence of exposure-dependent control rod patterns that assure the safe and efficient depletion of the nuclear fuel for the duration of the cycle. A two-step method was implemented in the code OCTOPUS to perform this task automatically for the Pennsylvania Power and Light Co.'s BWRs. Although the execution of OCTOPUS provides good or satisfactory results, its input and execution mode have been improved by making it more user-friendly and automatic. (authors)

  19. Automatic recognizing of vocal fold disorders from glottis images.

    Science.gov (United States)

    Huang, Chang-Chiun; Leu, Yi-Shing; Kuo, Chung-Feng Jeffrey; Chu, Wen-Lin; Chu, Yueng-Hsiang; Wu, Han-Cheng

    2014-09-01

    The laryngeal video stroboscope is an important instrument for examining glottal diseases and assessing vocal fold images and voice quality in clinical diagnosis. This study aimed to develop a medical system with the functionality of automatic intelligent recognition of dynamic images. Static images of the glottis at its widest opening and narrowest closure were screened automatically using color space transformation and image preprocessing, and the glottal area was quantized. Because tongue-base movements affect the position of the laryngoscope and saliva can result in unclear images, this study used an adaptive gray-scale entropy value to set a threshold and establish an elimination system. The proposed system improves the quality of automatically captured glottis images and achieves an accuracy rate of 96%. In addition, the glottal area and the area segmentation threshold were calculated effectively, the glottal area segmentation was corrected, and the glottal area waveform was drawn automatically to assist in vocal fold diagnosis. When developing the intelligent recognition system for vocal fold disorders, this study analyzed the characteristic values of four vocal fold patterns, namely normal vocal fold, vocal fold paralysis, vocal fold polyp, and vocal fold cyst. It also used a support vector machine classifier to identify vocal fold disorders and achieved an identification accuracy rate of 98.75%. The results can serve as a very valuable reference for diagnosis. PMID:25313026
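
    The frame-elimination step mentioned above can be illustrated with a minimal sketch: score each frame by a gray-scale entropy value and discard frames falling below a threshold. The snippet below is only an assumed reading of that step using numpy; the threshold value and the assumption that unclear frames have lower histogram entropy are illustrative, not taken from the paper.

      import numpy as np

      def grayscale_entropy(frame, bins=256):
          """Shannon entropy of a frame's gray-level histogram (8-bit frames assumed)."""
          hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
          p = hist[hist > 0].astype(float)
          p /= p.sum()
          return float(-(p * np.log2(p)).sum())

      def filter_unclear_frames(frames, threshold=4.5):
          """Keep only frames whose gray-level entropy exceeds the threshold; low-entropy
          frames (e.g. blurred by saliva or obscured by the tongue base) are discarded
          before glottal area measurement. The threshold is an illustrative value."""
          return [f for f in frames if grayscale_entropy(f) >= threshold]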

  20. Face Prediction Model for an Automatic Age-invariant Face Recognition System

    OpenAIRE

    Yadav, Poonam

    2015-01-01

    Automated face recognition and identification software is becoming part of our daily life; it finds its place not only in Facebook's auto photo tagging, Apple's iPhoto, Google's Picasa and Microsoft's Kinect, but also in the Homeland Security Department's dedicated biometric face detection systems. Most of these automatic face identification systems fail where the effects of aging come into the picture. Little work exists in the literature on the subject of face prediction that accounts for agin...

  1. Towards automatic identification of mismatched image pairs through loop constraints

    Science.gov (United States)

    Elibol, Armagan; Kim, Jinwhan; Gracias, Nuno; Garcia, Rafael

    2013-12-01

    Obtaining image sequences has become easier and easier thanks to the rapid progress of optical sensors and robotic platforms. Processing of image sequences (e.g., mapping, 3D reconstruction, Simultaneous Localisation and Mapping (SLAM)) usually requires 2D image registration. Recently, image registration has typically been accomplished by detecting salient points in two images and then matching their descriptors. To eliminate outliers and to compute a planar transformation (homography) between the coordinate frames of the images, robust methods (such as Random Sample Consensus (RANSAC) and Least Median of Squares (LMedS)) are employed. However, the image registration pipeline can sometimes provide a sufficient number of inliers within the error bounds even when the images do not overlap. Such mismatches occur especially when the scene has repetitive texture and shows structural similarity. In this study, we present a method to identify these mismatches using closed-loop (cycle) constraints. The method exploits the fact that the images forming a cycle should yield the identity mapping when all the homographies between images in the cycle are multiplied together. Cycles appear when the camera revisits an area that was imaged before, which is common practice especially for mapping purposes. Our proposal extracts several cycles to obtain error statistics for each matched image pair. It then searches for image pairs whose error histograms are extreme compared to those of the other pairs. We present experimental results with artificially added mismatched image pairs on real underwater image sequences.
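
    The loop constraint described above can be sketched compactly: multiply the homographies around a closed cycle of image pairs and measure how far the product is from the identity. The snippet below is a minimal illustration of that check, assuming the pairwise homographies are already estimated as 3x3 numpy arrays; the Frobenius-norm error is an illustrative choice and not necessarily the statistic used by the authors.

      import numpy as np

      def cycle_error(homographies):
          """Compose the homographies of a closed loop of image pairs and measure how far
          the product deviates from the identity mapping. `homographies` is an ordered
          list of 3x3 arrays H_{i,i+1} taken around the cycle."""
          H = np.eye(3)
          for Hi in homographies:
              H = H @ Hi
          H = H / H[2, 2]                      # normalise the projective scale
          return np.linalg.norm(H - np.eye(3), ord='fro')

      # Image pairs whose cycles consistently produce large errors are candidate mismatches.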

  2. 33 CFR 401.20 - Automatic Identification System.

    Science.gov (United States)

    2010-07-01

    ... close to the primary conning position in the navigation bridge and a standard 120 Volt, AC, 3-prong power receptacle accessible for the pilot's laptop computer; and (5) The Minimum Keyboard Display (MKD) shall be located as close as possible to the primary conning position and be visible; (6) Computation...

  3. Automatic Identification used in Audio-Visual indexing and Analysis

    Directory of Open Access Journals (Sweden)

    A. Satish Chowdary

    2011-09-01

    Full Text Available Locating a video clip in large collections is very important for retrieval applications, especially for digital rights management. We attempt to provide a comprehensive and high-level review of audiovisual features that can be extracted from the standard compressed domains, such as MPEG-1 and MPEG-2. This paper presents a graph transformation and matching approach to identify occurrences of a query clip that may differ in ordering or length due to content editing. With a novel batch query algorithm to retrieve similar frames, the mapping relationship between the query and database video is first represented by a bipartite graph. The densely matched parts along the long sequence are then extracted, followed by a filter-and-refine search strategy to prune irrelevant subsequences. During the filtering stage, Maximum Size Matching is deployed for each subgraph constructed from the query and a candidate subsequence to obtain a smaller set of candidates. During the refinement stage, Sub-Maximum Similarity Matching is devised to identify the subsequence with the highest aggregate score from all candidates, according to a robust video similarity model that incorporates visual content, temporal order, and frame alignment information. This algorithm is based on dynamic programming that fully uses the temporal dimension to measure the similarity between two video sequences. A normalized chromaticity histogram is used as a feature, which is illumination invariant. Dynamic programming is applied at the shot level to find the optimal nonlinear mapping between video sequences. Two new normalized distance measures are presented for video sequence matching. One measure is based on the normalization of the optimal path found by dynamic programming. The other measure combines both the visual features and the temporal information. The proposed distance measures are suitable for variable-length comparisons.
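
    As a rough illustration of the dynamic-programming matching described above, the sketch below aligns two shot sequences represented by normalized chromaticity histograms. It is a simplified, assumed reading of the idea: histogram intersection is used as the frame similarity, and a local (Smith-Waterman style) alignment with a gap penalty lets a short query match a subsequence of a longer target; neither choice is claimed to be the authors' exact formulation.

      import numpy as np

      def hist_similarity(h1, h2):
          """Histogram intersection between two normalised chromaticity histograms."""
          return float(np.minimum(h1, h2).sum())

      def sequence_similarity(query, target, gap_penalty=0.1):
          """Dynamic-programming alignment score between two shot sequences, each given
          as a list of normalised chromaticity histograms (numpy arrays). The local
          formulation allows the query to match any subsequence of the target."""
          n, m = len(query), len(target)
          dp = np.zeros((n + 1, m + 1))
          best = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  match = dp[i - 1, j - 1] + hist_similarity(query[i - 1], target[j - 1])
                  dp[i, j] = max(0.0, match,
                                 dp[i - 1, j] - gap_penalty,
                                 dp[i, j - 1] - gap_penalty)
                  best = max(best, dp[i, j])
          return best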

  4. Strengthen the Supervision over Pharmaceuticals via Modern Automatic Identification

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Fake pharmaceuticals inflict severe harm on people's health through their circulation in markets. To strengthen the supervision of the pharmaceutical market, China is improving and perfecting its national coding system in the field of pharmaceuticals. Bar-code tags and IC tags are available to the coding system. This paper summarizes the significance of the IC tag for the supervision of pharmaceuticals and gives a general strategic outlook on pharmaceutical supervision.

  5. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming focuses on the techniques of automatic programming used with digital computers. Topics covered range from the design of machine-independent programming languages to the use of recursive procedures in ALGOL 60. A multi-pass translation scheme for ALGOL 60 is described, along with some commercial source languages. The structure and use of the syntax-directed compiler is also considered.Comprised of 12 chapters, this volume begins with a discussion on the basic ideas involved in the description of a computing process as a program for a computer, expressed in

  6. AUTOMATIC AND SEMI-AUTOMATIC PROCESSES OF WORDSMITH 3.0 AS A TEXTBOOK EVALUATION INSTRUMENT: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Jayakaran Mukundan

    2006-01-01

    Full Text Available As checklists developed for textbook evaluation are questionable in terms of reliability and validity, other ways are being sought to bring about more systematic, efficient and objective evaluation instruments, which can provide greater insight into the strengths and weaknesses of textbooks. With this in mind, the researchers explored the abilities of WordSmith 3.0, a concordance software package, in providing some insights into the structure of textbooks. This study provides findings on the data WordSmith 3.0 generates automatically and semi-automatically, and on how this information could be used in the evaluation of textbooks.

  7. Identification of the growth hormone-releasing hormone analogue [Pro1, Val14]-hGHRH with an incomplete C-term amidation in a confiscated product.

    Science.gov (United States)

    Esposito, Simone; Deventer, Koen; Van Eenoo, Peter

    2014-01-01

    In this work, a modified version of the 44-amino-acid human growth hormone-releasing hormone (hGHRH(1-44)) containing an N-terminal proline extension, a valine residue in position 14, and a C-terminal amidation (sequence: PYADAIFTNSYRKVVLGQLSARKLLQDIMSRQQGESNQERGARARL-NH2) has been identified in a confiscated product by liquid chromatography-high resolution mass spectrometry (LC-HRMS). Investigation of the product also suggests an incomplete C-terminal amidation. Similarly to other hGHRH analogues available on the black market, this peptide can potentially be used as a performance-enhancing drug due to its growth hormone releasing activity, and it should therefore be considered a prohibited substance in sport. Additionally, the presence of the partially amidated molecule reveals the poor pharmaceutical quality of the preparation, an aspect which represents a major concern for public health as well. PMID:25283153

  8. Automatic analysis of microscopic images of red blood cell aggregates

    Science.gov (United States)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Frequently, image processing and analysis for the characterization of RBC aggregation have been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be adapted as a routine in hemorheological and clinical biochemistry laboratories because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis (ensuring repeatability).

  9. Bilirubin nomograms for identification of neonatal hyperbilirubinemia in healthy term and late-preterm infants: a systematic review and meta-analysis

    Institute of Scientific and Technical Information of China (English)

    Zhang-Bin Yu; Shu-Ping Han; Chao Chen

    2014-01-01

    Background: Hyperbilirubinemia occurs in most healthy term and late-preterm infants, who must be monitored to identify those who might develop severe hyperbilirubinemia. Total serum bilirubin (TSB) and transcutaneous bilirubin (TcB) nomograms have been developed and validated to identify neonatal hyperbilirubinemia. This study aimed to review previously published studies, compare TcB nomograms with the TSB nomogram, and determine whether the former have the same predictive value for significant hyperbilirubinemia as the TSB nomogram does. Methods: A predefined search strategy and inclusion criteria were set up. We selected studies assessing the predictive ability of TSB/TcB nomograms to identify significant hyperbilirubinemia in healthy term and late-preterm infants. Two independent reviewers assessed the quality of, and extracted the data from, the included studies. Meta-DiSc 1.4 analysis software was used to calculate the pooled sensitivity, specificity, and positive likelihood ratio of the TcB/TSB nomograms. A pooled summary receiver operating characteristic curve of the TcB/TSB nomograms was created. Results: After screening 187 publications from electronic database searches and the reference lists of eligible articles, we included 14 studies in the systematic review and meta-analysis. Eleven studies were of medium methodological quality; the remaining three were of low methodological quality. Seven studies evaluated TcB nomograms and seven assessed TSB nomograms. There were no differences between the predictive abilities of the TSB and TcB nomograms (the pooled area under the curve was 0.819 vs. 0.817). Conclusions: This study showed that TcB nomograms had the same predictive value as TSB nomograms, both of which could be used to identify subsequent significant hyperbilirubinemia. However, this result should be interpreted cautiously because some methodological limitations of the included studies were identified in this review.

  10. Long-term high frequency measurements of ethane, benzene and methyl chloride at Ragged Point, Barbados: Identification of long-range transport events

    Directory of Open Access Journals (Sweden)

    A.T. Archibald

    2015-09-01

    Full Text Available Here we present high frequency long-term observations of ethane, benzene and methyl chloride from the AGAGE Ragged Point, Barbados, monitoring station, made using a custom-built GC-MS system. Our analysis focuses on the first three years of data (2005–2007) and on the interpretation of periodic episodes of high concentrations of these compounds. We focus specifically on an exemplar episode during September 2007 to assess whether these measurements were impacted by long-range transport of biomass burning and biogenic emissions. We use the Lagrangian particle dispersion model NAME, run forwards and backwards in time, to identify transport of air masses from the north-east of Brazil during these events. To assess whether biomass burning was the cause, we used hot spots detected with the MODIS instrument as point sources for simulating the release of biomass burning plumes. Excellent agreement between the arrival time of the simulated biomass burning plumes and the observed enhancements in the trace gases indicates that biomass burning strongly influenced these measurements. The modelling data were then used to determine the emissions required to match the observations, which were compared with bottom-up estimates based on burnt area and literature emission factors. Good agreement was found between the two techniques, highlighting the important role of biomass burning. The modelling constrained by in situ observations suggests that the emission factors were near their known upper limits, with the in situ data suggesting slightly greater emissions of ethane than the literature emission factors account for. Further analysis concluded that biogenic emissions of methyl chloride from South America play only a small role in the measurements at Ragged Point. These results highlight the importance of long-term high frequency measurements of NMHC and ODS and show how these data can be used to determine sources of emissions.

  11. Alcohol-related Cues Promote Automatic Racial Bias.

    Science.gov (United States)

    Stepanova, Elena V; Bartholow, Bruce D; Saults, J Scott; Friedman, Ronald S

    2012-07-01

    Previous research has shown that alcohol consumption can increase the expression of race bias by impairing control-related processes. The current study tested whether simple exposure to alcohol-related images can also increase bias, but via a different mechanism. Participants viewed magazine ads for either alcoholic or nonalcoholic beverages prior to completing Payne's (2001) Weapons Identification Task (WIT). As predicted, participants primed with alcohol ads exhibited greater race bias in the WIT than participants primed with neutral beverages. Process dissociation analyses indicated that these effects were due to automatic (relative to controlled) processes having a larger influence on behavior among alcohol-primed relative to neutral-primed participants. Structural equation modeling further showed that the alcohol-priming effect was mediated by increases in the influence of automatic associations on behavior. These data suggest an additional pathway by which alcohol can potentially harm inter-racial interactions, even when no beverage is consumed. PMID:22798699

  12. Automatic grading of carbon blacks from transmission electron microscopy

    Science.gov (United States)

    Luengo, L.; Treuillet, S.; Gomez, E.

    2015-04-01

    Carbon blacks are widely used as fillers in industrial products to modify their mechanical, electrical and optical properties. For rubber products, they are the subject of a standard classification system relative to their surface area, particle size and structure. The electron microscope remains the most accurate means of measuring these characteristics, on the condition that the boundaries of aggregates and particles are correctly detected. In this paper, we propose an image processing chain allowing subsequent characterization for automatic grading of the carbon black aggregates. Based on a literature review, 31 features are extracted from TEM images to obtain reliable information on the particle size, shape and microstructure of the carbon black aggregates. They are then used to train several classifiers, whose results for automatic grading are compared. To obtain better results, we suggest using cluster identification of aggregates in place of the individual characterization of aggregates.

  13. Early Automatic Detection of Parkinson's Disease Based on Sleep Recordings

    DEFF Research Database (Denmark)

    Kempfner, Jacob; Sorensen, Helge B D; Nikolic, Miki;

    2014-01-01

    SUMMARY: Idiopathic rapid-eye-movement (REM) sleep behavior disorder (iRBD) is most likely the earliest sign of Parkinson's Disease (PD) and is characterized by REM sleep without atonia (RSWA) and consequently increased muscle activity. However, some muscle twitching in normal subjects occurs during REM sleep. PURPOSE: There are no generally accepted methods for evaluation of this activity and a normal range has not been established. Consequently, there is a need for objective criteria. METHOD: In this study we propose a fully automatic method for detection of RSWA. REM sleep identification ... in which the number of outliers during REM sleep was used as a quantitative measure of muscle activity. RESULTS: The proposed method was able to automatically separate all iRBD test subjects from healthy elderly controls and subjects with periodic limb movement disorder. CONCLUSION: The proposed work is ...

  14. Automatic classification of blank substrate defects

    Science.gov (United States)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since the mask will later act as a template for a considerable number of dies on the wafer. Defects on the initial blank substrate, and on the subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and of the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and distinguishing defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment compared to the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask

  15. OPTICAL correlation identification technology applied in underwater laser imaging target identification

    Science.gov (United States)

    Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long

    2012-01-01

    Underwater laser imaging is an effective method of detecting short-distance targets underwater and an important complement to sonar detection. With the development of underwater laser imaging technology and underwater vehicle technology, underwater automatic target identification has received more and more attention and remains a difficult research problem in the area of underwater optical imaging information processing. Today, underwater automatic target identification based on optical imaging is usually realized with digital circuits and software programming. The algorithm realization and control of this method are very flexible. However, optical imaging information consists of 2D or even 3D images and the amount of information to process is large, so purely digital electronic hardware needs a long identification time and can hardly meet the demands of real-time identification. If parallel computer processing is adopted, the identification speed can be improved, but complexity, size and power consumption increase. This paper attempts to apply optical correlation identification technology to realize underwater automatic target identification. Optical correlation identification utilizes the Fourier-transform property of a Fourier lens, which can accomplish the Fourier transform of image information at the nanosecond level; optical free-space interconnection computation is parallel, high-speed, high-capacity and high-resolution, and it can be combined with the flexibility of calculation and control offered by digital circuits to realize a hybrid optoelectronic identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and write a MATLAB simulation program. We adopt single-frame images obtained by underwater range-gated laser imaging for identification, and through identifying and locating the target at different positions, we can improve
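
    In software terms, the optical correlator described above behaves like a matched filter: the Fourier lens performs optically what a 2D FFT does numerically. The sketch below is a minimal numerical analogue (the abstract mentions a MATLAB simulation; numpy is used here purely for illustration), locating a template in a scene by frequency-domain correlation. It is not the authors' code.

      import numpy as np

      def correlation_peak(scene, template):
          """Matched-filter style correlation of a template with a scene via FFTs,
          mimicking in software what a Fourier lens does optically. Returns the peak
          (circular) correlation value and its (row, col) location."""
          # Zero-mean the inputs so the peak reflects pattern similarity, not brightness.
          scene = scene - scene.mean()
          template = template - template.mean()
          # Correlation in the frequency domain: F(scene) * conj(F(template)).
          S = np.fft.fft2(scene)
          T = np.fft.fft2(template, s=scene.shape)   # zero-pad template to scene size
          corr = np.fft.ifft2(S * np.conj(T)).real
          idx = np.unravel_index(np.argmax(corr), corr.shape)
          return corr[idx], idx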

  16. Automatic Radiation Monitoring in Slovenia

    International Nuclear Information System (INIS)

    Full text: The automatic radiation monitoring system in Slovenia started in the early nineties and now comprises measurements of: 1. External gamma radiation: For the time being there are forty-three probes with GM tubes integrated into a common automatic network operated at the SNSA. The probes measure the dose rate in 30-minute intervals. 2. Aerosol radioactivity: Three automatic aerosol stations measure the concentration of artificial alpha and beta activity in the air, gamma-emitting radionuclides, radioactive iodine-131 in the air (in all chemical forms), and natural radon and thoron progeny. 3. Radon progeny concentration: Radon progeny concentration is measured hourly and the results are displayed as equilibrium equivalent concentrations (EEC). 4. Radioactive deposition measurements: As a support to the gamma dose rate measurements, the SNSA developed and installed an automatic measuring station for surface contamination equipped with a gamma spectrometry system (with a 3x3" NaI(Tl) detector). All data are transferred through different communication pathways to the SNSA. They are collected in 30-minute intervals; within these intervals the central computer analyses and processes the collected data and creates different reports. Every month a QA/QC analysis of the data is performed, showing statistics on acquisition errors and the availability of measuring results. All results are promptly available on our web pages. The data are checked and sent daily to the EURDEP system at Ispra (Italy) and also to the Austrian, Croatian and Hungarian authorities. (author)

  17. Automatic Association of News Items.

    Science.gov (United States)

    Carrick, Christina; Watters, Carolyn

    1997-01-01

    Discussion of electronic news delivery systems and the automatic generation of electronic editions focuses on the association of related items of different media type, specifically photos and stories. The goal is to be able to determine to what degree any two news items refer to the same news event. (Author/LRW)

  18. Automatic quantification of iris color

    DEFF Research Database (Denmark)

    Christoffersen, S.; Harder, Stine; Andersen, J. D.;

    2012-01-01

    An automatic algorithm to quantify the eye colour and structural information from standard hi-resolution photos of the human iris has been developed. Initially, the major structures in the eye region are identified including the pupil, iris, sclera, and eyelashes. Based on this segmentation, the ...

  19. 05501 Summary -- Automatic Performance Analysis

    OpenAIRE

    Gerndt, Hans Michael; Malony, Allen; Miller, Barton P.; Nagel, Wolfgang

    2006-01-01

    The Workshop on Automatic Performance Analysis (WAPA 2005, Dagstuhl Seminar 05501), held December 13-16, 2005, brought together performance researchers, developers, and practitioners with the goal of better understanding the methods, techniques, and tools that are needed for the automation of performance analysis for high performance computing.

  20. Automatic Control of Configuration of Web Anonymization

    Directory of Open Access Journals (Sweden)

    Tomas Sochor

    2013-01-01

    Full Text Available Anonymization of Internet traffic usually hides details about the request originator from the target server. Such a disguise might be required in some situations, especially in the case of web browsing. Although web traffic anonymization is not part of the HTTP specification, it can be achieved using additional tools. Significant deceleration of anonymized traffic compared to normal traffic is inevitable, but it can be controlled in some cases, as this article suggests. The results presented here focus on measuring the parameters of this deceleration in terms of response time, transmission speed and latency, and on proposing a way to control it. This study focuses primarily on TOR because recent studies have concluded that other tools (like I2P and JAP) provide worse service. Sets of 14 file locations and 30 web pages were formed, and the latency, response time and transmission speed during page or file download were measured repeatedly, both with TOR active in various configurations and without TOR. The main result presented here comprises several ways to improve TOR anonymization efficiency and a proposal for its automatic control. In spite of the fact that the efficiency still remains too low for ordinary use compared to normal web traffic, its automatic control could make TOR a useful tool in special cases.
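
    A minimal sketch of the kind of timing measurement the study describes is shown below. It assumes the Python requests library (installed with its optional SOCKS support) and a local TOR client exposing a SOCKS proxy on 127.0.0.1:9050; the URL, proxy address and timeout are placeholders, and this is not the authors' measurement setup.

      import time
      import requests   # needs: pip install "requests[socks]" for SOCKS proxy support

      TOR_PROXIES = {"http": "socks5h://127.0.0.1:9050",
                     "https": "socks5h://127.0.0.1:9050"}

      def measure_download(url, proxies=None):
          """Return (response_time_s, transfer_speed_kB_per_s) for one download,
          optionally routed through a SOCKS proxy such as a local TOR client."""
          start = time.perf_counter()
          response = requests.get(url, proxies=proxies, timeout=120)
          elapsed = time.perf_counter() - start
          speed = len(response.content) / 1024.0 / elapsed
          return elapsed, speed

      # direct  = measure_download("https://example.org/")
      # via_tor = measure_download("https://example.org/", proxies=TOR_PROXIES)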

  1. Automatic Induction of Rule Based Text Categorization

    Directory of Open Access Journals (Sweden)

    D.Maghesh Kumar

    2010-12-01

    Full Text Available The automated categorization of texts into predefined categories has witnessed booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. This paper describes a novel method for the automatic induction of rule-based text classifiers. The method supports a hypothesis language of the form "if T1, …, or Tn occurs in document d, and none of Tn+1, …, Tn+m occurs in d, then classify d under category c," where each Ti is a conjunction of terms. The survey also discusses the main approaches to text categorization that fall within the machine learning paradigm. Issues pertaining to three different problems, namely document representation, classifier construction, and classifier evaluation, are discussed in detail.
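
    To make the hypothesis language concrete, the sketch below evaluates rules of exactly that form against a document: a rule fires when at least one positive conjunction of terms occurs in the document and none of the negative conjunctions does. This only illustrates rule evaluation, not the induction algorithm that learns the rules; the example rule and the naive tokenisation are invented for illustration.

      from dataclasses import dataclass

      @dataclass
      class Rule:
          """If any positive conjunction matches d and no negative conjunction matches d,
          classify d under `category`. Each conjunction is a set of terms that must all
          occur in the document."""
          positive: list   # list of term sets (T1 ... Tn)
          negative: list   # list of term sets (Tn+1 ... Tn+m)
          category: str

      def matches(conjunction, doc_terms):
          return conjunction <= doc_terms

      def classify(doc_text, rules):
          doc_terms = set(doc_text.lower().split())
          labels = []
          for rule in rules:
              if any(matches(c, doc_terms) for c in rule.positive) and \
                 not any(matches(c, doc_terms) for c in rule.negative):
                  labels.append(rule.category)
          return labels

      # Example: classify under "sports" if both "goal" and "match" occur, unless "election" does.
      rules = [Rule(positive=[{"goal", "match"}], negative=[{"election"}], category="sports")]
      print(classify("a late goal decided the match", rules))   # ['sports']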

  2. Identification of long-term carbon sequestration in soils with historical inputs of biochar using novel stable isotope and spectroscopic techniques

    Science.gov (United States)

    Hernandez-Soriano, Maria C.; Kerré, Bart; Hardy, Brieuc; Dufey, Joseph; Smolders, Erik

    2013-04-01

    Biochar is the collective term for organic matter (OM) that has been produced by pyrolysis of biomass, e.g. during production of charcoal or during natural processes such as bush fires. Biochar production and application is now suggested as one of the economically feasible options for global C-sequestration strategies. The C-sequestration in soil through application of biochar is not only related to its persistence (its estimated lifetime in soil exceeds 1000 years), but also to indirect effects such as its potential to adsorb OM and increase OM stability in soil. Historical charcoal production sites that were in use more than 200 years ago in beech/oak forests have been localized in the south of Belgium. Aerial photography identified black spots in arable land on former forest sites. Soil sampling was conducted in an arable field used for maize production near Mettet (Belgium), where charcoal production was intensive until the late 18th century. Soils were sampled along a horizontal gradient across the 'black soils', which extend over a few decametres, collecting soil from the spots (biochar amended, BA) as well as from the non-biochar-amended soil (NBA). Stable C isotope composition was used to estimate the long-term C-sequestration derived from crops in these soils, where maize had been produced for about 15 years. Because the C in the biochar originates from forest wood (C3 plants), its isotopic signature (δ13C) differs from that of maize (a C4 plant). The C and N content and the δ13C were determined for bulk soil samples and for microaggregate size fractions separated by wet sieving. Fourier Transform Infrared Spectroscopy (FTIR) coupled to optical microscopy was used to obtain fingerprints of biochar and OM composition for soil microaggregates. The total C content in the BA soil (5.5%) and the C/N ratio (16.9) were higher than for NBA (C content 2.7%; C/N ratio 12.6), which confirms the persistence of OM in the BA. The average isotopic signature of bulk soil from BA (-26.08) was slightly
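
    The δ13C-based estimate of crop-derived carbon mentioned above is commonly obtained with a two-end-member mixing model: the fraction of maize-derived (C4) carbon follows from where the bulk sample signature falls between the C3 and C4 end members. The sketch below shows that calculation; the end-member values are illustrative defaults and are not taken from the study.

      def maize_derived_fraction(delta_sample, delta_c3=-27.0, delta_c4=-12.5):
          """Two-end-member mixing model: fraction of soil carbon derived from the C4 crop
          (maize), given the bulk d13C of the sample and assumed end-member signatures for
          C3-derived (forest wood/biochar) and C4-derived (maize) carbon. The end-member
          defaults are illustrative, not values from the study."""
          return (delta_sample - delta_c3) / (delta_c4 - delta_c3)

      # Using the bulk BA signature quoted in the abstract (-26.08):
      print(round(maize_derived_fraction(-26.08), 3))   # ~0.063, i.e. roughly 6% maize-derived C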

  3. CRISPR Recognition Tool (CRT): a tool for automatic detection ofclustered regularly interspaced palindromic repeats

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Charles; Ramsey, Teresa L.; Sabree, Fareedah; Lowe,Micheal; Brown, Kyndall; Kyrpides, Nikos C.; Hugenholtz, Philip

    2007-05-01

    Clustered Regularly Interspaced Palindromic Repeats (CRISPRs) are a novel type of direct repeat found in a wide range of bacteria and archaea. CRISPRs are beginning to attract attention because of their proposed mechanism; that is, defending their hosts against invading extrachromosomal elements such as viruses. Existing repeat detection tools do a poor job of identifying CRISPRs due to the presence of unique spacer sequences separating the repeats. In this study, a new tool, CRT, is introduced that rapidly and accurately identifies CRISPRs in large DNA strings, such as genomes and metagenomes. CRT was compared to the CRISPR detection tools Patscan and Pilercr. In terms of correctness, CRT was shown to be very reliable, demonstrating significant improvements over Patscan for the measures precision, recall and quality. When compared to Pilercr, CRT showed improved performance for recall and quality. In terms of speed, CRT also demonstrated superior performance, especially for genomes containing large numbers of repeats. In this paper a new tool was introduced for the automatic detection of CRISPR elements. This tool, CRT, was shown to be a significant improvement over current techniques for CRISPR identification. CRT's approach to detecting repetitive sequences is straightforward. It uses a simple sequential scan of a DNA sequence and detects repeats directly without any major conversion or preprocessing of the input. This leads to a program that is easy to describe and understand; yet it is very accurate, fast and memory efficient, being O(n) in space and O(nm/l) in time.
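
    The "simple sequential scan" idea can be illustrated with a toy example: walk along the sequence, and whenever a short k-mer recurs at a spacer-like distance, chain the occurrences into a candidate repeat array. The sketch below is only an assumed simplification for illustration; CRT's actual algorithm tolerates mismatches, refines repeat boundaries, and uses its own default parameters.

      def find_direct_repeats(dna, k=23, min_spacer=20, max_spacer=50, min_copies=3):
          """Toy CRISPR-like scan: from each position, look for the next exact occurrence
          of the leading k-mer within the allowed spacer window and chain such occurrences
          into candidate repeat arrays. Not CRT's real algorithm."""
          arrays = []
          i, n = 0, len(dna)
          while i + k <= n:
              repeat = dna[i:i + k]
              positions = [i]
              j = i
              while True:
                  lo = j + k + min_spacer                 # earliest allowed next start
                  hi = min(n - k, j + k + max_spacer)     # latest allowed next start
                  nxt = dna.find(repeat, lo, hi + k)
                  if nxt == -1:
                      break
                  positions.append(nxt)
                  j = nxt
              if len(positions) >= min_copies:
                  arrays.append((repeat, positions))
                  i = positions[-1] + k                   # skip past this array
              else:
                  i += 1
          return arrays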

  4. Metabolite Profiling of Diverse Rice Germplasm and Identification of Conserved Metabolic Markers of Rice Roots in Response to Long-Term Mild Salinity Stress

    Directory of Open Access Journals (Sweden)

    Myung Hee Nam

    2015-09-01

    Full Text Available The sensitivity of rice to salt stress depends greatly on growth stage, organ type and cultivar. In particular, the roots of young rice seedlings are highly salt-sensitive organs that limit plant growth, even under mild soil salinity conditions. In an attempt to identify metabolic markers of rice roots responding to salt stress, metabolite profiling was performed by 1H-NMR spectroscopy in 38 rice genotypes that varied in biomass accumulation under long-term mild salinity conditions. Multivariate statistical analysis showed separation of the control and salt-treated rice roots and of rice genotypes with differential growth potential. By quantitative analyses of the 1H-NMR data, five conserved salt-responsive metabolic markers of rice roots were identified. Sucrose, allantoin and glutamate accumulated under salt stress, whereas the levels of glutamine and alanine decreased. A positive correlation of metabolite changes with the growth potential and salt tolerance of rice genotypes was observed for allantoin and glutamine. Adjustment of nitrogen metabolism in rice roots is likely to be closely related to maintaining growth potential and increasing the stress tolerance of rice.

  5. From motion to faces: 3D-assisted automatic analysis of people

    OpenAIRE

    Iacopo Masi

    2014-01-01

    From motion to faces: 3D-assisted automatic analysis of people. This work proposes new computer vision algorithms for recognizing people by exploiting the face and the imaged appearance of the body. Several computer vision problems are covered: tracking, face recognition and person re-identification.

  6. Automatic sensor placement

    Science.gov (United States)

    Abidi, Besma R.

    1995-10-01

    Active sensing is the process of exploring the environment using multiple views of a scene captured by sensors from different points in space under different sensor settings. Applications of active sensing are numerous and can be found in the medical field (limb reconstruction), in archeology (bone mapping), in the movie and advertisement industry (computer simulation and graphics), in manufacturing (quality control), as well as in the environmental industry (mapping of nuclear dump sites). In this work, the focus is on the use of a single vision sensor (camera) to perform the volumetric modeling of an unknown object in an entirely autonomous fashion. The camera moves to acquire the necessary information in two ways: (a) viewing closely each local feature of interest using 2D data; and (b) acquiring global information about the environment via 3D sensor locations and orientations. A single object is presented to the camera and an initial arbitrary image is acquired. A 2D optimization process is developed. It brings the object in the field of view of the camera, normalizes it by centering the data in the image plane, aligns the principal axis with one of the camera's axes (arbitrarily chosen), and finally maximizes its resolution for better feature extraction. The enhanced image at each step is projected along the corresponding viewing direction. The new projection is intersected with previously obtained projections for volume reconstruction. During the global exploration of the scene, the current image as well as previous images are used to maximize the information in terms of shape irregularity as well as contrast variations. The scene on the borders of occlusion (contours) is modeled by an entropy-based objective functional. This functional is optimized to determine the best next view, which is recovered by computing the pose of the camera. A criterion based on the minimization of the difference between consecutive volume updates is set for termination of the

  7. Automatically identifying scatter in fluorescence data using robust techniques

    DEFF Research Database (Denmark)

    Engelen, S.; Frosch, Stina; Hubert, M.

    2007-01-01

    complicates the analysis instead and contributes to model inadequacy. As such, scatter can be considered as an example of element-wise outliers. However, no straightforward method for identifying the scatter region can be found in the literature. In this paper an automatic scatter identification method is ... input data for three different PARAFAC methods. Firstly, inserting missing values in the scatter regions is tested; secondly, an interpolation of the scatter regions is performed; and finally, the scatter regions are down-weighted. These results show that the PARAFAC method to choose after scatter...

  8. Automatic radar target recognition of objects falling on railway tracks

    International Nuclear Information System (INIS)

    This paper presents an automatic radar target recognition procedure based on complex resonances, using the signals provided by ultra-wideband radar. The procedure is dedicated to the detection and identification of objects lying on railway tracks. For efficient complex resonance extraction, a comparison between several pole extraction methods is illustrated. In addition, preprocessing methods are presented that aim to remove most of the erroneous poles interfering with the discrimination scheme. Once the physical poles are determined, a specific discrimination technique based on Euclidean distances is introduced. Both simulation and experimental results are presented, showing efficient discrimination of different targets, including guided transport passengers.

  9. Feature extraction and classification in automatic weld seam radioscopy

    International Nuclear Information System (INIS)

    The investigations conducted have shown that automatic feature extraction and classification procedures permit the identification of weld seam flaws. Within this context the favored learning fuzzy classifier represents a very good alternative to conventional classifiers. The results have also made clear that improvements, mainly in the field of image registration, are still possible by increasing the resolution of the radioscopy system: an almost error-free classification is conceivable only if the flaw is segmented correctly, i.e. in its full size, which requires improved detail recognizability and a sufficient contrast difference. (orig./MM)

  10. Technologies on the Horizon for Product Identification

    Science.gov (United States)

    Schramm, Harry Fred, Jr.

    2005-01-01

    Contents include the following: Configuration management. Component/system to report. Unique item identifier (UID). Aftermarket undocumented configuration change. Revolutionary new brake system. Traceability of critical parts. Automatic identification used at many levels. Product ID problems that inhibited traceability. Direct part marking enables life cycle tracking.

  11. Automatic Queuing Model for Banking Applications

    Directory of Open Access Journals (Sweden)

    Dr. Ahmed S. A. AL-Jumaily

    2011-08-01

    Full Text Available Queuing is the process of moving customers in a specific sequence to a specific service according to the customer's need. The term scheduling stands for the process of computing a schedule; this may be done by a queue-based scheduler. This paper focuses on bank queuing systems, the different queuing algorithms that are used in banks to serve customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing a bank's queues that can analyse the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the test results and the average waiting time. The main innovation of this work is that the average waiting time is modeled and taken into account in processing, together with the process of switching to the scheduling algorithm that gives the best average waiting time.
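
    The average waiting time that drives the switching decision can be computed from a simple single-server queue model. The sketch below is a minimal illustration for a FIFO (first-come, first-served) discipline with one teller; the arrival and service times in the example are invented, and a multi-teller or priority variant would be needed to compare the scheduling algorithms the paper mentions.

      def average_waiting_time(arrivals, service_times):
          """Average waiting time for a single-teller FIFO queue. `arrivals` are customer
          arrival times and `service_times` the matching service durations; both lists are
          assumed sorted by arrival time."""
          teller_free_at = 0.0
          total_wait = 0.0
          for arrival, service in zip(arrivals, service_times):
              start = max(arrival, teller_free_at)   # wait until the teller is free
              total_wait += start - arrival
              teller_free_at = start + service
          return total_wait / len(arrivals)

      # Toy example: three customers arriving at t = 0, 1, 2 min, each needing 4 min of service.
      print(average_waiting_time([0, 1, 2], [4, 4, 4]))   # 3.0 minutes on average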

  12. Human-competitive automatic topic indexing

    CERN Document Server

    Medelyan, Olena

    2009-01-01

    Topic indexing is the task of identifying the main topics covered by a document. These are useful for many purposes: as subject headings in libraries, as keywords in academic publications and as tags on the web. Knowing a document’s topics helps people judge its relevance quickly. However, assigning topics manually is labor intensive. This thesis shows how to generate them automatically in a way that competes with human performance. Three kinds of indexing are investigated: term assignment, a task commonly performed by librarians, who select topics from a controlled vocabulary; tagging, a popular activity of web users, who choose topics freely; and a new method of keyphrase extraction, where topics are equated to Wikipedia article names. A general two-stage algorithm is introduced that first selects candidate topics and then ranks them by significance based on their properties. These properties draw on statistical, semantic, domain-specific and encyclopedic knowledge. They are combined using a machine learn...
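
    The two-stage scheme summarised above (candidate selection followed by ranking) can be caricatured in a few lines: generate candidate phrases, then score them by a significance measure. The sketch below uses a plain TF-IDF style score purely for illustration; the thesis combines statistical, semantic, domain-specific and encyclopedic properties with a machine-learned ranker, which this toy does not attempt to reproduce.

      from collections import Counter
      import math

      def keyphrase_candidates(text, max_len=3):
          """Candidate topics: all word n-grams up to `max_len` containing no stop word."""
          stop = {"the", "a", "an", "of", "and", "to", "in", "for", "on", "is"}
          words = [w.strip('.,;:()').lower() for w in text.split()]
          cands = Counter()
          for n in range(1, max_len + 1):
              for i in range(len(words) - n + 1):
                  gram = words[i:i + n]
                  if all(w and w not in stop for w in gram):
                      cands[" ".join(gram)] += 1
          return cands

      def rank_topics(doc_text, doc_freq, n_docs, top_k=5):
          """Rank candidates by a TF-IDF style significance score. `doc_freq` maps a
          phrase to the number of corpus documents containing it."""
          cands = keyphrase_candidates(doc_text)
          scored = {c: tf * math.log(n_docs / (1 + doc_freq.get(c, 0)))
                    for c, tf in cands.items()}
          return sorted(scored, key=scored.get, reverse=True)[:top_k]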

  13. Semi-automatic removal of foreground stars from images of galaxies

    CERN Document Server

    Frei, Z

    1996-01-01

    A new procedure, designed to remove foreground stars from galaxy profiles, is presented. Although several programs exist for stellar and faint-object photometry, none of them treat star removal from the images very carefully. I present my attempt to develop such a system, and briefly compare the performance of my software to one of the well-known stellar photometry packages, DAOPhot. The major steps in my procedure are: (1) automatic construction of an empirical 2D point spread function from well separated stars that are situated off the galaxy; (2) automatic identification of the peaks that are likely to be foreground stars, scaling the PSF and removing these stars, and patching residuals (in the automatically determined smallest possible area where the residuals are truly significant); and (3) cosmetic fixing of remaining degradations in the image. The algorithm and software presented here are significantly better for automatic removal of foreground stars from images of galaxies than DAOPhot or similar packages, since...
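
    Step (2), scaling the empirical PSF to a detected peak and subtracting it, can be sketched as below. This is a bare-bones illustration assuming the image and PSF are float numpy arrays with the PSF centred in its own array; edge handling, sub-pixel centring and residual patching (which the procedure above does carefully) are deliberately omitted.

      import numpy as np

      def subtract_star(image, psf, x, y):
          """Scale an empirical PSF to the star detected at integer pixel position (y, x)
          and subtract it from the image (modified in place and returned)."""
          ph, pw = psf.shape
          y0, x0 = y - ph // 2, x - pw // 2
          cut = image[y0:y0 + ph, x0:x0 + pw]
          # Least-squares amplitude of the PSF that best matches the stellar cutout.
          amp = np.sum(cut * psf) / np.sum(psf * psf)
          image[y0:y0 + ph, x0:x0 + pw] -= amp * psf
          return image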

  14. Automatically Determining Scale Within Unstructured Point Clouds

    Science.gov (United States)

    Kadamen, Jayren; Sithole, George

    2016-06-01

    Three-dimensional models obtained from imagery have an arbitrary scale and therefore have to be scaled. Automatically scaling these models requires the detection of objects in the models, which can be computationally intensive. Real-time object detection may pose problems for applications such as indoor navigation. This investigation poses the idea that relational cues, specifically height ratios, within indoor environments may offer an easier means to obtain scales for models created using imagery. The investigation aimed to show two things: (a) that the size of objects, especially their height off the ground, is consistent within an environment, and (b) that based on this consistency, objects can be identified and their general size used to scale a model. To test the idea, a hypothesis is first tested on a terrestrial lidar scan of an indoor environment. Later, as a proof of concept, the same test is applied to a model created using imagery. The most notable finding was that objects can be detected more readily by studying the ratios between the dimensions of objects whose dimensions are defined by human physiology. For example, the dimensions of desks and chairs are related to the height of an average person. In the test, the difference between the generalised and actual dimensions of objects was assessed. A maximum difference of 3.96% (2.93 cm) was observed from automated scaling. By analysing the ratio between the heights (distance from the floor) of the tops of objects in a room, identification was also achieved.
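
    Once an object has been identified from such height ratios, applying the scale is a one-line calculation: divide the object's expected real-world height by its height measured in arbitrary model units. The sketch below shows this; the desk-top height used is an invented illustrative value, not one from the paper.

      def scale_factor(model_height, expected_height):
          """Scale factor mapping arbitrary model units to metres, given the height of a
          recognised object in the unscaled model and its expected real-world height
          (e.g. a desk top assumed to sit about 0.72 m above the floor)."""
          return expected_height / model_height

      # If a detected desk top sits 1.85 model units above the detected floor plane:
      s = scale_factor(1.85, 0.72)
      print(round(s, 4))          # multiply all model coordinates by this factor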

  15. Image simulation for automatic license plate recognition

    Science.gov (United States)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.

  16. Orbital welding automatic pressure test by ODA automatic machines is 35 years old

    International Nuclear Information System (INIS)

    A review of the development of the technology and equipment for automatic orbital welding with automatic pressure testing of nuclear power station pipelines and other objects is presented. Welding variants with automatic pressure testing and different automatic welding machines are described. The priority of national developments is underlined.

  17. Automatic mapping of monitoring data

    DEFF Research Database (Denmark)

    Lophaven, Søren; Nielsen, Hans Bruun; Søndergaard, Jacob

    2005-01-01

    This paper presents an approach, based on universal kriging, for automatic mapping of monitoring data. The performance of the mapping approach is tested on two data sets containing daily mean gamma dose rates in Germany reported by means of the national automatic monitoring network (IMIS). In the second data set an accidental release of radioactivity into the environment was simulated in the south-western corner of the monitored area. The approach has a tendency to smooth the actual data values and therefore underestimates extreme values, as seen in the second data set. However, it is capable of identifying a release of radioactivity provided that the number of sampling locations is sufficiently high. Consequently, we believe that a combination of the presented mapping approach and physical knowledge of the transport processes of radioactivity should be used to predict the extreme values...

  18. Automatic Schema Evolution in Root

    Institute of Scientific and Technical Information of China (English)

    Rene Brun; Fons Rademakers

    2001-01-01

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and objects inspected, also when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case when multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file.

  19. Physics of Automatic Target Recognition

    CERN Document Server

    Sadjadi, Firooz

    2007-01-01

    Physics of Automatic Target Recognition addresses the fundamental physical bases of sensing, and information extraction in the state-of-the art automatic target recognition field. It explores both passive and active multispectral sensing, polarimetric diversity, complex signature exploitation, sensor and processing adaptation, transformation of electromagnetic and acoustic waves in their interactions with targets, background clutter, transmission media, and sensing elements. The general inverse scattering, and advanced signal processing techniques and scientific evaluation methodologies being used in this multi disciplinary field will be part of this exposition. The issues of modeling of target signatures in various spectral modalities, LADAR, IR, SAR, high resolution radar, acoustic, seismic, visible, hyperspectral, in diverse geometric aspects will be addressed. The methods for signal processing and classification will cover concepts such as sensor adaptive and artificial neural networks, time reversal filt...

  20. Automatic schema evolution in Root

    International Nuclear Information System (INIS)

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and objects inspected, also when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case when multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file

  1. Automatic spikes detection in seismogram

    Institute of Scientific and Technical Information of China (English)

    王海军; 靳平; 刘贵忠

    2003-01-01

    Data processing for a seismic network is complex and tedious, because a large amount of data is recorded in the network every day, which makes it impossible to process these data entirely by manual work. Therefore, seismic data should be processed automatically to produce initial results on event detection and location; afterwards, these results are reviewed and modified by an analyst. In automatic processing, data quality checking is important. There are three main types of problem data that exist in real seismic records: spikes, repeated data and dropouts. A spike is defined as an isolated large-amplitude point; the other two problem types share the feature that the amplitudes of the sample points are uniform over an interval. In data quality checking, the first step is to detect and count the problem data in a data segment; if the percentage of problem data exceeds a threshold, the whole data segment is masked and excluded from later processing.
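
    A minimal sketch of how isolated large-amplitude points might be flagged is shown below: each sample is compared against a robust estimate of the local level (median and median absolute deviation). The window length and the deviation factor are illustrative assumptions, not values from the paper.

      import numpy as np

      def detect_spikes(x, window=25, k=8.0):
          """Flag isolated large-amplitude points in a seismic trace. A sample is marked
          as a spike when it deviates from the local median by more than k times the
          local median absolute deviation (MAD)."""
          x = np.asarray(x, dtype=float)
          spikes = np.zeros(len(x), dtype=bool)
          half = window // 2
          for i in range(len(x)):
              seg = x[max(0, i - half): i + half + 1]
              med = np.median(seg)
              mad = np.median(np.abs(seg - med)) + 1e-12   # avoid division-like degeneracy
              spikes[i] = abs(x[i] - med) > k * mad
          return spikes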

  2. Automatic registration of satellite imagery

    Science.gov (United States)

    Fonseca, Leila M. G.; Costa, Max H. M.; Manjunath, B. S.; Kenney, C.

    1997-01-01

    Image registration is one of the basic image processing operations in remote sensing. With the increase in the number of images collected every day from different sensors, automated registration of multi-sensor/multi-spectral images has become an important issue. A wide range of registration techniques has been developed for many different types of applications and data. The objective of this paper is to present an automatic registration algorithm which uses a multiresolution analysis procedure based upon the wavelet transform. The procedure is completely automatic and relies on the grey level information content of the images and their local wavelet transform modulus maxima. The registration algorithm is very simple and easy to apply because it basically needs only one parameter. We have obtained very encouraging results on test data sets from TM and SPOT sensor images of forest, urban and agricultural areas.

  3. The Automatic Galaxy Collision Software

    CERN Document Server

    Smith, Beverly J; Pfeiffer, Phillip; Perkins, Sam; Barkanic, Jason; Fritts, Steve; Southerland, Derek; Manchikalapudi, Dinikar; Baker, Matt; Luckey, John; Franklin, Coral; Moffett, Amanda; Struck, Curtis

    2009-01-01

    The key to understanding the physical processes that occur during galaxy interactions is dynamical modeling, and especially the detailed matching of numerical models to specific systems. To make modeling interacting galaxies more efficient, we have constructed the `Automatic Galaxy Collision' (AGC) code, which requires less human intervention in finding good matches to data. We present some preliminary results from this code for the well-studied system Arp 284 (NGC 7714/5), and address questions of uniqueness of solutions.

  4. Automatic Generation of Technical Documentation

    OpenAIRE

    Reiter, Ehud; Mellish, Chris; Levine, John

    1994-01-01

    Natural-language generation (NLG) techniques can be used to automatically produce technical documentation from a domain knowledge base and linguistic and contextual models. We discuss this application of NLG technology from both a technical and a usefulness (costs and benefits) perspective. This discussion is based largely on our experiences with the IDAS documentation-generation project, and the reactions various interested people from industry have had to IDAS. We hope that this summary of ...

  5. Annual review in automatic programming

    CERN Document Server

    Halpern, Mark I; Bolliet, Louis

    2014-01-01

    Computer Science and Technology and their Application is an eight-chapter book that first presents a tutorial on database organization. Subsequent chapters describe the general concepts of Simula 67 programming language; incremental compilation and conversational interpretation; dynamic syntax; the ALGOL 68. Other chapters discuss the general purpose conversational system for graphical programming and automatic theorem proving based on resolution. A survey of extensible programming language is also shown.

  6. Automatically constructing the semantic web

    OpenAIRE

    Becerra, Victor Manuel; Brown, Matthew; Nasuto, Slawomir

    2008-01-01

    The storage and processing capacity realised by computing has led to an explosion of data retention. We have now reached the point of information overload and must begin to use computers to process more complex information. In particular, the proposition of the Semantic Web has given structure to this problem, but it has yet to be realised practically. The largest of its problems is that of ontology construction; without a suitable automatic method most ontologies will have to be encoded by hand. In this paper we disc...

  7. Approaches to Automatic Text Structuring

    OpenAIRE

    Erbs, Nicolai

    2015-01-01

    Structured text helps readers to better understand the content of documents. In classic newspaper texts or books, some structure already exists. In the Web 2.0, the amount of textual data, especially user-generated data, has increased dramatically. As a result, there exists a large amount of textual data which lacks structure, thus making it more difficult to understand. In this thesis, we will explore techniques for automatic text structuring to help readers to fulfill their information need...

  8. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim

    1997-01-01

    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The strategy used is a combination of two methods: the maximum correlation coefficient and correlation in the subpixel range. F...... interactive software is also part of a computer-assisted learning program on digital photogrammetry....

  9. Automatically-Programed Machine Tools

    Science.gov (United States)

    Purves, L.; Clerman, N.

    1985-01-01

    Software produces cutter location files for numerically-controlled machine tools. APT, acronym for Automatically Programed Tools, is among most widely used software systems for computerized machine tools. APT developed for explicit purpose of providing effective software system for programing NC machine tools. APT system includes specification of APT programing language and language processor, which executes APT statements and generates NC machine-tool motions specified by APT statements.

  10. Automatic translation among spoken languages

    Science.gov (United States)

    Walter, Sharon M.; Costigan, Kelly

    1994-02-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  11. Social influence effects on automatic racial prejudice.

    Science.gov (United States)

    Lowery, B S; Hardin, C D; Sinclair, S

    2001-11-01

    Although most research on the control of automatic prejudice has focused on the efficacy of deliberate attempts to suppress or correct for stereotyping, the reported experiments tested the hypothesis that automatic racial prejudice is subject to common social influence. In experiments involving actual interethnic contact, both tacit and expressed social influence reduced the expression of automatic prejudice, as assessed by two different measures of automatic attitudes. Moreover, the automatic social tuning effect depended on participant ethnicity. European Americans (but not Asian Americans) exhibited less automatic prejudice in the presence of a Black experimenter than a White experimenter (Experiments 2 and 4), although both groups exhibited reduced automatic prejudice when instructed to avoid prejudice (Experiment 3). Results are consistent with shared reality theory, which postulates that social regulation is central to social cognition. PMID:11708561

  12. Automatic hypermnesia and impaired recollection in schizophrenia.

    Science.gov (United States)

    Linscott, R J; Knight, R G

    2001-10-01

    Evidence from studies of nonmnemonic automatic cognitive processes provides reason to expect that schizophrenia is associated with exaggerated automatic memory (implicit memory), or automatic hypermnesia. Participants with schizophrenia (n = 22) and control participants (n = 26) were compared on word stem completion (WSC) and list discrimination (LD) tasks administered using the process dissociation procedure. Unadjusted, extended measurement model and dual-process signal-detection methods were used to estimate recollection and automatic memory indices. Schizophrenia was associated with automatic hypermnesia on the WSC task and impaired recollection on both tasks. Thought disorder was associated with even greater automatic hypermnesia. The absence of automatic hypermnesia on the LD task was interpreted with reference to the neuropsychological bases of context and content memory. PMID:11761047

  13. Text-based LSTM networks for Automatic Music Composition

    OpenAIRE

    Choi, Keunwoo; Fazekas, George; Sandler, Mark

    2016-01-01

    In this paper, we introduce new methods and discuss results of text-based LSTM (Long Short-Term Memory) networks for automatic music composition. The proposed network is designed to learn relationships within text documents that represent chord progressions and drum tracks in two case studies. In the experiments, word-RNNs (Recurrent Neural Networks) show good results for both cases, while character-based RNNs (char-RNNs) only succeed in learning chord progressions. The proposed system can be us...

  14. FOLKSONOMIES VERSUS AUTOMATIC KEYWORD EXTRACTION: AN EMPIRICAL STUDY

    OpenAIRE

    Al-Khalifa, Hend S.; Davis, Hugh C.

    2006-01-01

    This paper reports on an evaluation of the keywords produced by the Yahoo API context-based term extractor compared to a folksonomy set for the same website. The evaluation is made in two ways: automatically, by measuring the percentage of overlap between the folksonomy set and the Yahoo keyword set; and subjectively, by asking a human indexer to rate the quality of the generated keywords from both systems. The result of the experiment will be considered as evidence for the rich semantics...

  15. Multilabel Learning for Automatic Web Services Tagging

    Directory of Open Access Journals (Sweden)

    Mustapha AZNAG

    2014-08-01

    Full Text Available Recently, some web services portals and search engines, such as Biocatalogue and Seekda!, have allowed users to manually annotate Web services using tags. User tags provide meaningful descriptions of services and allow users to index and organize their contents. Tagging is widely used to annotate objects in Web 2.0 applications. In this paper we propose a novel probabilistic topic model (which extends the CorrLDA model - Correspondence Latent Dirichlet Allocation) to automatically tag web services according to existing manual tags. Our probabilistic topic model is a latent variable model that exploits local label correlations. Indeed, exploiting label correlations is a challenging and crucial problem, especially in the multi-label learning context. Moreover, several existing systems can recommend tags for web services based on existing manual tags; in most cases, the manual tags have better quality. We also develop three strategies to automatically recommend the best tags for web services. We also propose, in this paper, WS-Portal, an enriched Web services search engine which contains 7063 providers, 115 sub-classes of category and 22236 web services crawled from the Internet. In WS-Portal, several technologies are employed to improve the effectiveness of web service discovery (i.e. web services clustering, tags recommendation, services rating and monitoring). Our experiments are carried out on real-world web services. The comparisons of Precision@n and Normalised Discounted Cumulative Gain (NDCGn) values for our approach indicate that the method presented in this paper outperforms the method based on the CorrLDA in terms of ranking and quality of generated tags.

  16. Digital movie-based on automatic titrations.

    Science.gov (United States)

    Lima, Ricardo Alexandre C; Almeida, Luciano F; Lyra, Wellington S; Siqueira, Lucas A; Gaião, Edvaldo N; Paiva Junior, Sérgio S L; Lima, Rafaela L F C

    2016-01-15

    This study proposes the use of digital movies (DMs) in a flow-batch analyzer (FBA) to perform automatic, fast and accurate titrations. The term used for this process is "digital movie-based automatic titrations" (DMB-AT). A webcam records the DM during the addition of the titrant to the mixing chamber (MC). While the DM is recorded, it is decompiled into frames ordered sequentially at a constant rate of 26 frames per second (FPS). The first frame is used as a reference to define the region of interest (ROI) of 28×13 pixels and the R, G and B values, which are used to calculate the hue (H) values for each frame. The Pearson correlation coefficient (r) is calculated between the H values of the initial frame and each subsequent frame. The titration curves are plotted in real time using the r values and the opening time of the titrant valve. The end point is estimated by the second derivative method. Software written in C manages all analytical steps and data treatment in real time. The feasibility of the method was demonstrated by application to acid/base test samples and edible oils. Results were compared with classical titration and did not present statistically significant differences when the paired t-test was applied at the 95% confidence level. The proposed method is able to process about 117-128 samples per hour for the test and edible oil samples, respectively, and its precision was confirmed by overall relative standard deviation (RSD) values, always less than 1.0%. PMID:26592600
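
    A minimal Python sketch of the image-analysis chain described above (hue values over the ROI, Pearson r against the first frame, end point from the second derivative). The ROI extraction, frame rate and valve-time axis are assumed inputs here, not taken from the cited work.

    import numpy as np

    def roi_hue(frame_rgb):
        # Hue channel (0..1) for an ROI given as an (h, w, 3) RGB float array scaled to [0, 1].
        r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
        mx, mn = frame_rgb.max(axis=-1), frame_rgb.min(axis=-1)
        delta = mx - mn + 1e-12
        h = np.where(mx == r, ((g - b) / delta) % 6,
            np.where(mx == g, (b - r) / delta + 2, (r - g) / delta + 4))
        return h / 6.0

    def titration_curve(frames, valve_times):
        # Pearson r between the hue map of the first frame and every frame, versus valve opening time.
        h0 = roi_hue(frames[0]).ravel()
        r = [np.corrcoef(h0, roi_hue(f).ravel())[0, 1] for f in frames]
        return np.asarray(valve_times, dtype=float), np.asarray(r)

    def end_point(valve_times, r):
        # End point taken where the second derivative of the titration curve is largest in magnitude.
        d2 = np.gradient(np.gradient(r, valve_times), valve_times)
        return valve_times[np.argmax(np.abs(d2))]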

  17. The RNA world, automatic sequences and oncogenetics

    International Nuclear Information System (INIS)

    We construct a model of the RNA world in terms of naturally evolving nucleotide sequences, assuming only Crick-Watson base pairing and self-cleaving/splicing capability. These sequences have the following properties. 1) They are recognizable by an automaton (or automata). That is, for each k-sequence there exists a k-automaton which accepts, recognizes or generates the k-sequence. These are known as automatic sequences. Fibonacci and Morse-Thue sequences are the most natural outcome of pre-biotic chemical conditions. 2) Infinite (resp. large) sequences are self-similar (resp. nearly self-similar) under certain rewrite rules and consequently give rise to fractal (resp. fractal-like) structures. Computationally, such sequences can also be generated by their corresponding deterministic parallel rewrite system, known as a D0L system. The self-similar sequences are fixed points of their respective rewrite rules. Some of these automatic sequences have the capability to read or 'accept' other sequences, while others can detect errors and trigger error-correcting mechanisms. They can be enlarged and have block and/or palindrome structure. Linear recurring sequences such as the Fibonacci sequence are simply feedback shift registers, a well-known model of information-processing machines. We show that a mutation of any rewrite rule can cause a combinatorial explosion of errors and relate this to oncogenetic behavior. On the other hand, a mutation of sequences that are not rewrite rules leads to normal evolutionary change. Known experimental results support our hypothesis. (author). Refs
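
    The D0L-system construction mentioned above is easy to make concrete. The short Python sketch below (illustrative, not from the cited work) generates the Thue-Morse and Fibonacci words as iterates of their rewrite rules.

    def dol_iterate(axiom, rules, steps):
        # Deterministic parallel rewrite (D0L) system: every symbol is rewritten simultaneously.
        word = axiom
        for _ in range(steps):
            word = "".join(rules[c] for c in word)
        return word

    # Thue-Morse sequence as an iterate of its rewrite rule 0 -> 01, 1 -> 10
    thue_morse = dol_iterate("0", {"0": "01", "1": "10"}, 5)
    # '01101001100101101001011001101001'

    # Fibonacci word as an iterate of a -> ab, b -> a
    fibonacci_word = dol_iterate("a", {"a": "ab", "b": "a"}, 5)
    # 'abaababaabaab'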

  18. Unification of automatic target tracking and automatic target recognition

    Science.gov (United States)

    Schachter, Bruce J.

    2014-06-01

    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.

  19. The Parametric Identification Of A Stationary Process

    Directory of Open Access Journals (Sweden)

    Radu BELEA

    2003-12-01

    Full Text Available In identification problems it is assumed that the process has at least one measurable input signal and at least one measurable output signal. The identification of a process has three stages: obtaining a record of the process's measurable signals; choosing a suitable mathematical model for the process; and extracting the values of the model parameters from the recorded data. The parametric identification problem is an optimization problem in which the best combination of values for the model parameter set is sought. The paper presents the parametric identification of a water flow process in a laboratory stand. The identification had the following aims: a detailed understanding of how the stand works, finding a new illustrative experiment for the stand, the application of advanced automatic control techniques, and the development of a design for a new stand meant to allow a larger variety of experiments.

  20. Semi-automatic classification of textures in thoracic CT scans

    Science.gov (United States)

    Kockelkorn, Thessa T. J. P.; de Jong, Pim A.; Schaefer-Prokop, Cornelia M.; Wittenberg, Rianne; Tiehuis, Audrey M.; Gietema, Hester A.; Grutters, Jan C.; Viergever, Max A.; van Ginneken, Bram

    2016-08-01

    The textural patterns in the lung parenchyma, as visible on computed tomography (CT) scans, are essential to make a correct diagnosis in interstitial lung disease. We developed one automatic and two interactive protocols for classification of normal and seven types of abnormal lung textures. Lungs were segmented and subdivided into volumes of interest (VOIs) with homogeneous texture using a clustering approach. In the automatic protocol, VOIs were classified automatically by an extra-trees classifier that was trained using annotations of VOIs from other CT scans. In the interactive protocols, an observer iteratively trained an extra-trees classifier to distinguish the different textures, by correcting mistakes the classifier makes in a slice-by-slice manner. The difference between the two interactive methods was whether or not training data from previously annotated scans was used in classification of the first slice. The protocols were compared in terms of the percentages of VOIs that observers needed to relabel. Validation experiments were carried out using software that simulated observer behavior. In the automatic classification protocol, observers needed to relabel on average 58% of the VOIs. During interactive annotation without the use of previous training data, the average percentage of relabeled VOIs decreased from 64% for the first slice to 13% for the second half of the scan. Overall, 21% of the VOIs were relabeled. When previous training data was available, the average overall percentage of VOIs requiring relabeling was 20%, decreasing from 56% in the first slice to 13% in the second half of the scan.
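
    A hedged Python sketch of the interactive protocol described above, built on scikit-learn's ExtraTreesClassifier. The per-slice VOI feature extraction and the observer-correction callback are placeholders assumed for illustration.

    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier

    def interactive_voi_labelling(voi_features, ask_observer, prior_X=None, prior_y=None):
        # voi_features: list of per-slice arrays, each of shape (n_vois, n_features)
        # ask_observer: callback(slice_index, predicted_labels) -> corrected labels
        # prior_X, prior_y: optional training data from previously annotated scans
        X = [] if prior_X is None else [prior_X]
        y = [] if prior_y is None else [prior_y]
        clf = ExtraTreesClassifier(n_estimators=100)
        for i, feats in enumerate(voi_features):
            if y:
                clf.fit(np.vstack(X), np.concatenate(y))
                predicted = clf.predict(feats)
            else:
                predicted = np.zeros(len(feats), dtype=int)   # no training data yet: default label
            corrected = ask_observer(i, predicted)            # observer relabels the mistakes
            X.append(feats)
            y.append(np.asarray(corrected))
        clf.fit(np.vstack(X), np.concatenate(y))              # final model over everything labelled
        return clf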

  1. Automatic Generation of Technical Documentation

    CERN Document Server

    Reiter, E R; Levine, J; Reiter, Ehud; Mellish, Chris; Levine, John

    1994-01-01

    Natural-language generation (NLG) techniques can be used to automatically produce technical documentation from a domain knowledge base and linguistic and contextual models. We discuss this application of NLG technology from both a technical and a usefulness (costs and benefits) perspective. This discussion is based largely on our experiences with the IDAS documentation-generation project, and the reactions various interested people from industry have had to IDAS. We hope that this summary of our experiences with IDAS and the lessons we have learned from it will be beneficial for other researchers who wish to build technical-documentation generation systems.

  2. Unsupervised automatic music genre classification

    OpenAIRE

    Barreira, Luís Filipe Marques

    2010-01-01

    Work presented as part of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering. In this study we explore automatic music genre recognition and classification of digital music. Music has always been a reflection of cultural differences and an influence on our society. Today's digital content development has triggered the massive use of digital music. Nowadays, digital music is manually labeled without following a universa...

  3. Real time automatic scene classification

    OpenAIRE

    Israël, Menno; Broek, van den, Wouter; Putten, van, M.J.A.M.; Uyl, den, T.M.; Verbrugge, R.; Taatgen, N.; Schomaker, L.

    2004-01-01

    This work has been done as part of the EU VICAR (IST) project and the EU SCOFI project (IAP). The aim of the first project was to develop a real-time video indexing, classification, annotation and retrieval system. For our systems, we have adapted the approach of Picard and Minka [3], who categorized elements of a scene automatically with so-called 'stuff' categories (e.g., grass, sky, sand, stone). Campbell et al. [1] use similar concepts to describe certain parts of an image, which they named...

  4. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming, Volume 4 is a collection of papers that deals with the GIER ALGOL compiler, a parameterized compiler based on mechanical linguistics, and the JOVIAL language. A couple of papers describes a commercial use of stacks, an IBM system, and what an ideal computer program support system should be. One paper reviews the system of compilation, the development of a more advanced language, programming techniques, machine independence, and program transfer to other machines. Another paper describes the ALGOL 60 system for the GIER machine including running ALGOL pro

  5. Automatic transcription of polyphonic singing

    OpenAIRE

    Paščinski, Uroš

    2015-01-01

    In this work we focus on automatic transcription of polyphonic singing. In particular, we perform multiple fundamental frequency (F0) estimation. From field recordings, a test set of Slovenian folk songs with polyphonic singing is extracted and manually transcribed. On the test set we try a general algorithm for multiple F0 detection. An interactive visualization of the main parts of the algorithm is made to analyse how it works and to detect possible issues. As the data set is ne...

  6. Automatic analysis of multiparty meetings

    Indian Academy of Sciences (India)

    Steve Renals

    2011-10-01

    This paper is about the recognition and interpretation of multiparty meetings captured as audio, video and other signals. This is a challenging task since the meetings consist of spontaneous and conversational interactions between a number of participants: it is a multimodal, multiparty, multistream problem. We discuss the capture and annotation of the Augmented Multiparty Interaction (AMI) meeting corpus, the development of a meeting speech recognition system, and systems for the automatic segmentation, summarization and social processing of meetings, together with some example applications based on these systems.

  7. Coordinated hybrid automatic repeat request

    KAUST Repository

    Makki, Behrooz

    2014-11-01

    We develop a coordinated hybrid automatic repeat request (HARQ) approach. With the proposed scheme, if a user message is correctly decoded in the first HARQ rounds, its spectrum is allocated to other users, to improve the network outage probability and the users' fairness. The results, which are obtained for single- and multiple-antenna setups, demonstrate the efficiency of the proposed approach in different conditions. For instance, with a maximum of M retransmissions and single transmit/receive antennas, the diversity gain of a user increases from M to (J+1)(M-1)+1 where J is the number of users helping that user.

  8. A bar-code reader for an alpha-beta automatic counting system - FAG

    International Nuclear Information System (INIS)

    A bar-code laser system for sample number reading was integrated into the FAG Alpha-Beta automatic counting system. The sample identification by means of an attached bar-code label enables unmistakable and reliable attribution of results to the counted sample. Installation of the bar-code reader system required several modifications: Mechanical changes in the automatic sample changer, design and production of new sample holders, modification of the sample planchettes, changes in the electronic system, update of the operating software of the system (authors)

  9. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael

    2014-05-01

    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  10. Automatic scanning for nuclear emulsion

    International Nuclear Information System (INIS)

    Automatic scanning systems have recently been developed for application in neutrino experiments exploiting nuclear emulsion detectors of particle tracks. These systems speed up substantially the analysis of events in emulsion, allowing the realisation of experiments with unprecedented statistics. The pioneering work on automatic scanning has been done by the University of Nagoya (Japan). The so-called new track selector has a very good reproducibility in position (∼1 μm) and angle (∼3 mrad), with the possibility to reconstruct, in about 3 s, all the tracks in a view of 150x150 μm2 and 1 mm of thickness. A new system (the ultratrack selector), with a speed higher by one order of magnitude, has come into operation. R and D programmes are going on in Nagoya and in other laboratories for new systems. The scanning speed in nuclear emulsion can be further increased by an order of magnitude. The recent progress in the technology of digital signal processing and of image acquisition systems (CCDs and fast frame grabbers) allows the realisation of systems with high performance. New interesting applications of the technique in other fields (e.g. in biophysics) have recently been envisaged

  11. SRV-automatic handling device

    International Nuclear Information System (INIS)

    An automatic handling device for the steam relief valves (SRVs) is developed in order to achieve a decrease in worker exposure, an increase in availability factor, improvement in reliability, improvement in safety of operation, and labor saving. A survey is made during a periodical inspection to examine the actual SRV handling operation. An SRV automatic handling device consists of four components: conveyor, armed conveyor, lifting machine, and control/monitoring system. The conveyor is designed so that the existing I-rail installed in the containment vessel can be used without any modification. It is employed for conveying an SRV along the rail. The armed conveyor, designed for a box rail, is used for an SRV installed away from the rail. By using the lifting machine, an SRV installed away from the I-rail is brought to a spot just below the rail so that the SRV can be transferred by the conveyor. The control/monitoring system consists of a control computer, operation panel, TV monitor and annunciator. The SRV handling device is operated by remote control from a control room. A trial equipment is constructed and performance/function testing is carried out using actual SRVs. As a result, it is shown that the SRV handling device requires only two operators to serve satisfactorily. The required time for removal and replacement of one SRV is about 10 minutes. (Nogami, K.)

  12. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with ``Automatic Validation of Numerical Solutions''. The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings with intervals as initial values. A modification of the mean value enclosure of discrete mappings is considered, namely the extended mean value enclosure, which in most cases leads to even better enclosures. These methods have previously been described in connection with discretizing solutions of ordinary differential equations. A further method uses the mean value enclosure of an integral operator and interval Bernstein polynomials for enclosing the solution. Two numerical examples are given, using two orders of approximation and using different numbers of discretization points.

  13. Peak fitting and identification software library for high resolution gamma-ray spectra

    Science.gov (United States)

    Uher, Josef; Roach, Greg; Tickner, James

    2010-07-01

    A new gamma-ray spectral analysis software package is under development in our laboratory. It can be operated as a stand-alone program or called as a software library from Java, C, C++ and MATLAB environments. It provides an advanced graphical user interface for data acquisition, spectral analysis and radioisotope identification. The code uses a peak-fitting function that includes peak asymmetry, Compton continuum and flexible background terms. Peak fitting function parameters can be calibrated as functions of energy. Each parameter can be constrained to improve fitting of overlapping peaks. All of these features can be adjusted by the user. To assist with peak identification, the code can automatically measure half-lives of single or multiple overlapping peaks from a time series of spectra. It implements library-based peak identification, with options for restricting the search based on radioisotope half-lives and reaction types. The software also improves the reliability of isotope identification by utilizing Monte-Carlo simulation results.
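
    A rough Python sketch of the kind of peak model described: a Gaussian photopeak with an exponential low-energy tail and a linear background, fitted with scipy. The exact shape, tail parameterisation and calibration scheme of the package are not given above, so the function below is a generic assumption.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def peak_model(E, area, centroid, sigma, tail, beta, bkg0, bkg1):
        # Gaussian photopeak
        gauss = area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((E - centroid) / sigma) ** 2)
        # exponential low-energy tail folded with the Gaussian resolution (Hypermet-style term)
        tail_term = tail * np.exp((E - centroid) / beta) * erfc(
            (E - centroid) / (np.sqrt(2.0) * sigma) + sigma / (np.sqrt(2.0) * beta))
        # linear background
        return gauss + tail_term + bkg0 + bkg1 * (E - centroid)

    def fit_peak(energies, counts, centroid_guess, sigma_guess):
        p0 = [counts.max() * sigma_guess * 2.5, centroid_guess, sigma_guess,
              0.1 * counts.max(), 2.0 * sigma_guess, float(counts.min()), 0.0]
        popt, pcov = curve_fit(peak_model, energies, counts, p0=p0,
                               sigma=np.sqrt(np.maximum(counts, 1.0)))  # approximate Poisson weights
        return popt, pcov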

  14. Peak fitting and identification software library for high resolution gamma-ray spectra

    International Nuclear Information System (INIS)

    A new gamma-ray spectral analysis software package is under development in our laboratory. It can be operated as a stand-alone program or called as a software library from Java, C, C++ and MATLAB environments. It provides an advanced graphical user interface for data acquisition, spectral analysis and radioisotope identification. The code uses a peak-fitting function that includes peak asymmetry, Compton continuum and flexible background terms. Peak fitting function parameters can be calibrated as functions of energy. Each parameter can be constrained to improve fitting of overlapping peaks. All of these features can be adjusted by the user. To assist with peak identification, the code can automatically measure half-lives of single or multiple overlapping peaks from a time series of spectra. It implements library-based peak identification, with options for restricting the search based on radioisotope half-lives and reaction types. The software also improves the reliability of isotope identification by utilizing Monte-Carlo simulation results.

  15. Automatic Differentiation of Algorithms for Machine Learning

    OpenAIRE

    Baydin, Atilim Gunes; Pearlmutter, Barak A.

    2014-01-01

    Automatic differentiation --- the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately --- dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available...

  16. Automatic Speech Segmentation Based on HMM

    OpenAIRE

    M. Kroul

    2007-01-01

    This contribution deals with the problem of automatic phoneme segmentation using HMMs. Automation of the speech segmentation task is important for applications where a large amount of data needs to be processed, so that manual segmentation is out of the question. In this paper we focus on automatic segmentation of recordings which will be used to create a triphone synthesis unit database. For speech synthesis, the quality of the speech units is a crucial aspect, so maximal accuracy in segmentation is ...

  17. Automatic Control of Water Pumping Stations

    Institute of Scientific and Technical Information of China (English)

    Muhannad Alrheeh; JIANG Zhengfeng

    2006-01-01

    Automatic control of pumps is an attractive way to operate water pumping stations, whose designs vary widely according to their functions. In this paper, the pumping station considered is used for a water supply system. The paper introduces the idea of a pump controller and the important factors that must be considered when designing an automatic control system for water pumping stations. The automatic control circuit and the function of all its components are then introduced.

  18. Automatic inference of specifications using matching logic

    OpenAIRE

    Alpuente Frasnedo, María; Feliú Gabaldón, Marco Antonio; Villanueva García, Alicia

    2013-01-01

    Formal specifications can be used for various software engineering activities ranging from finding errors to documenting software and automatic test-case generation. Automatically discovering specifications for heap-manipulating programs is a challenging task. In this paper, we propose a technique for automatically inferring formal specifications from C code which is based on the symbolic execution and automated reasoning tandem "MATCHING LOGIC /K framework". We implemented our technique for ...

  19. An automatic visual analysis system for tennis

    OpenAIRE

    Connaghan, Damien; Moran, Kieran; O'Connor, Noel E.

    2013-01-01

    This article presents a novel video analysis system for coaching tennis players of all levels, which uses computer vision algorithms to automatically edit and index tennis videos into meaningful annotations. Existing tennis coaching software lacks the ability to automatically index a tennis match into key events, and therefore, a coach who uses existing software is burdened with time-consuming manual video editing. This work aims to explore the effectiveness of a system to automatically de...

  20. An Automated System for Garment Texture Design Class Identification

    OpenAIRE

    Emon Kumar Dey; Md. Nurul Ahad Tawhid; Mohammad Shoyaib

    2015-01-01

    Automatic identification of garment design class might play an important role in the garments and fashion industry. To achieve this, essential initial works are found in the literature. For example, construction of a garment database, automatic segmentation of garments from real life images, categorizing them into the type of garments such as shirts, jackets, tops, skirts, etc. It is now essential to find a system such that it will be possible to identify the particular design (printed, stri...

  1. Automatic generation of stop word lists for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
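
    A small Python sketch of the idea in the abstract above. The precise definitions of keyword frequency, keyword adjacency frequency and the truncation criterion are assumptions made for illustration, not the patented method itself.

    import re
    from collections import Counter

    def generate_stop_words(documents, keywords, min_ratio=1.0, max_size=100):
        # Terms that sit next to known keywords more often than they occur as keyword terms
        # become stop-word candidates; the candidate list is then truncated by corpus frequency.
        keyword_terms = {t for kw in keywords for t in kw.lower().split()}
        keyword_freq = Counter()    # occurrences of a term as (part of) a keyword
        adjacency_freq = Counter()  # occurrences of a term immediately before/after a keyword term
        term_freq = Counter()
        for doc in documents:
            tokens = re.findall(r"[a-z0-9]+", doc.lower())
            term_freq.update(tokens)
            for i, tok in enumerate(tokens):
                if tok in keyword_terms:
                    keyword_freq[tok] += 1
                    for j in (i - 1, i + 1):
                        if 0 <= j < len(tokens) and tokens[j] not in keyword_terms:
                            adjacency_freq[tokens[j]] += 1
        candidates = [t for t in term_freq
                      if adjacency_freq[t] / max(keyword_freq[t], 1) >= min_ratio]
        candidates.sort(key=lambda t: term_freq[t], reverse=True)
        return candidates[:max_size]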

  2. ANPS - AUTOMATIC NETWORK PROGRAMMING SYSTEM

    Science.gov (United States)

    Schroer, B. J.

    1994-01-01

    Development of some of the space program's large simulation projects -- like the project which involves simulating the countdown sequence prior to spacecraft liftoff -- requires the support of automated tools and techniques. The number of preconditions which must be met for a successful spacecraft launch and the complexity of their interrelationship account for the difficulty of creating an accurate model of the countdown sequence. Researchers developed ANPS for the Nasa Marshall Space Flight Center to assist programmers attempting to model the pre-launch countdown sequence. Incorporating the elements of automatic programming as its foundation, ANPS aids the user in defining the problem and then automatically writes the appropriate simulation program in GPSS/PC code. The program's interactive user dialogue interface creates an internal problem specification file from user responses which includes the time line for the countdown sequence, the attributes for the individual activities which are part of a launch, and the dependent relationships between the activities. The program's automatic simulation code generator receives the file as input and selects appropriate macros from the library of software modules to generate the simulation code in the target language GPSS/PC. The user can recall the problem specification file for modification to effect any desired changes in the source code. ANPS is designed to write simulations for problems concerning the pre-launch activities of space vehicles and the operation of ground support equipment and has potential for use in developing network reliability models for hardware systems and subsystems. ANPS was developed in 1988 for use on IBM PC or compatible machines. The program requires at least 640 KB memory and one 360 KB disk drive, PC DOS Version 2.0 or above, and GPSS/PC System Version 2.0 from Minuteman Software. The program is written in Turbo Prolog Version 2.0. GPSS/PC is a trademark of Minuteman Software. Turbo Prolog

  3. Automatic image enhancement by artificial bee colony algorithm

    Science.gov (United States)

    Yimit, Adiljan; Hagihara, Yoshihiro; Miyoshi, Tasuku; Hagihara, Yukari

    2013-03-01

    With regard to the improvement of image quality, image enhancement is an important process for assisting human perception. This paper presents an automatic image enhancement method based on the Artificial Bee Colony (ABC) algorithm. In this method, the ABC algorithm is applied to find the optimum parameters of a transformation function, which is used in the enhancement by utilizing the local and global information of the image. In order to solve the optimization problem with the ABC algorithm, an objective criterion in terms of entropy and edge information is introduced to measure image quality, making the enhancement an automatic process. Several images are used in experiments to compare the method with genetic algorithm-based and particle swarm optimization-based image enhancement methods.
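
    The objective criterion can be illustrated with a short Python sketch. The entropy-plus-edge fitness below is a common formulation in evolutionary image enhancement and is an assumption, not necessarily the exact criterion of the cited paper; the ABC search over the transform parameters is omitted.

    import numpy as np
    from scipy import ndimage

    def enhancement_fitness(image):
        # Reward strong/numerous edges and a high-entropy grey-level histogram (8-bit image assumed).
        gx = ndimage.sobel(image.astype(float), axis=0)
        gy = ndimage.sobel(image.astype(float), axis=1)
        edge_strength = np.hypot(gx, gy)
        n_edges = np.count_nonzero(edge_strength > edge_strength.mean())
        hist, _ = np.histogram(image, bins=256, range=(0, 255), density=True)
        hist = hist[hist > 0]
        entropy = -np.sum(hist * np.log2(hist))
        return np.log(np.log(edge_strength.sum() + np.e)) * (n_edges / image.size) * entropy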

  4. Detection of Off-normal Images for NIF Automatic Alignment

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J V; Awwal, A S; McClay, W A; Ferguson, S W; Burkhart, S C

    2005-07-11

    One of the major purposes of the National Ignition Facility at Lawrence Livermore National Laboratory is to accurately focus 192 high energy laser beams on a millimeter-scale fusion target at the precise location and time. The automatic alignment system developed for NIF is used to align the beams in order to achieve the required focusing effect. However, if a distorted image is inadvertently created by a faulty camera shutter or some other opto-mechanical malfunction, the resulting image, termed "off-normal", must be detected and rejected before further alignment processing occurs. The off-normal processor thus acts as a preprocessor to automatic alignment image processing. In this work, we discuss the development of an "off-normal" pre-processor capable of rapidly detecting and rejecting off-normal images. A wide variety of off-normal images from each loop is used to develop the rejection criteria accurately.

  5. Automatic gamma spectrometry analytical apparatus

    International Nuclear Information System (INIS)

    This invention falls within the area of quantitative or semi-quantitative analysis by gamma spectrometry and particularly refers to a device for bringing the samples into the counting position. The purpose of this invention is precisely to provide an automatic apparatus specifically adapted to the analysis of hard gamma radiations. To this effect, the invention relates to a gamma spectrometry analytical device comprising a lead containment, a detector of which the sensitive part is located inside the containment and additionally comprising a transfer system for bringing the analyzed samples in succession to a counting position inside the containment above the detector. A feed compartment enables the samples to be brought in turn one by one on to the transfer system through a duct connecting the compartment to the transfer system. Sequential systems for the coordinated forward feed of the samples in the compartment and the transfer system complete this device

  6. Automatic home medical product recommendation.

    Science.gov (United States)

    Luo, Gang; Thomas, Selena B; Tang, Chunqiang

    2012-04-01

    Web-based personal health records (PHRs) are being widely deployed. To improve PHR's capability and usability, we proposed the concept of intelligent PHR (iPHR). In this paper, we use automatic home medical product recommendation as a concrete application to demonstrate the benefits of introducing intelligence into PHRs. In this new application domain, we develop several techniques to address the emerging challenges. Our approach uses treatment knowledge and nursing knowledge, and extends the language modeling method to (1) construct a topic-selection input interface for recommending home medical products, (2) produce a global ranking of Web pages retrieved by multiple queries, and (3) provide diverse search results. We demonstrate the effectiveness of our techniques using USMLE medical exam cases. PMID:20703712

  7. Automatic sampling of radioactive liquors

    International Nuclear Information System (INIS)

    This paper describes the latest techniques in sampling radioactive liquors in an Irradiated Fuel Reprocessing Plant. Previously to obtain a sample from these liquors operators were involved at the point of sampling, the transport of samples in shielded containers to the laboratories and at the offloading of the samples at the laboratory. Penetration of the radioactive containments occurred at the sampling point and again in the laboratory, these operations could lead to possible radioactive contamination. The latest design consists of a Sample Bottle Despatch Facility Autosampler units, Pneumatic Transfer System and Receipt Facility which reduces considerably operator involvement, provides a safe rapid transport system and minimises any possibility of radioactive contamination. The system can be made fully automatic and ease of maintenance has been ensured by the design

  8. Automatic sampling of radioactive liquors

    International Nuclear Information System (INIS)

    This paper describes the latest techniques in sampling radioactive liquors in an Irradiated Fuel Reprocessing Plant. Previously to obtain a sample from these liquors operators were involved at the point of sampling, the transport of samples in shielded containers to the laboratories and at the offloading of the samples at the laboratory. Penetration of the radioactive containments occurred at the sampling point and again in the laboratory, these operations could lead to possible radioactive contamination. The latest design consists of a Sample Bottle Despatch Facility Autosampler units, Pneumatic Transfer System and Receipt Facility which reduces considerably operator involvement, provides a safe rapid transport system and minimises any possibility of radioactive contamination. The system can be made fully automatic and ease of maintenance has been ensured by the design. (author)

  9. Automatic sampling of radioactive liquors

    International Nuclear Information System (INIS)

    The latest techniques in sampling radioactive liquors in an Irradiated Fuel Reprocessing Plant are described. Previously to obtain a sample from these liquors operators were involved at the point of sampling, the transport of samples in shielded containers to the laboratories and at the offloading of the samples at the laboratory. Penetration of the radioactive containments occurred at the sampling point and again in the laboratory; these operations could lead to possible radioactive contamination. The latest design consists of a Sample Bottle Despatch Facility Autosampler units, Pneumatic Transfer System and Receipt Facility which reduces considerably operator involvement, provides a safe rapid transport system and minimises any possibility of radioactive contamination. The system can be made fully automatic and ease of maintenance has been ensured by the design. (author)

  10. Automatic Sequencing for Experimental Protocols

    Science.gov (United States)

    Hsieh, Paul F.; Stern, Ivan

    We present a paradigm and implementation of a system for the specification of the experimental protocols to be used for the calibration of AXAF mirrors. For the mirror calibration, several thousand individual measurements need to be defined. For each measurement, over one hundred parameters need to be tabulated for the facility test conductor and several hundred instrument parameters need to be set. We provide a high level protocol language which allows for a tractable representation of the measurement protocol. We present a procedure dispatcher which automatically sequences a protocol more accurately and more rapidly than is possible by an unassisted human operator. We also present back-end tools to generate printed procedure manuals and database tables required for review by the AXAF program. This paradigm has been tested and refined in the calibration of detectors to be used in mirror calibration.

  11. Autoclass: An automatic classification system

    Science.gov (United States)

    Stutz, John; Cheeseman, Peter; Hanson, Robin

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.
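
    AutoClass itself is a dedicated package; as a rough stand-in, the Python sketch below shows the same kind of behaviour, letting a variational Bayesian mixture settle on an effective number of classes, using scikit-learn on synthetic data.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(loc, 0.3, size=(200, 2)) for loc in ((0, 0), (3, 0), (0, 3))])

    # Superfluous components end up with near-zero weight, so the class count is inferred from data.
    model = BayesianGaussianMixture(n_components=10, weight_concentration_prior=1e-2,
                                    max_iter=500).fit(data)
    effective_classes = int(np.sum(model.weights_ > 0.01))
    labels = model.predict(data)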

  12. Techniques for automatic speech recognition

    Science.gov (United States)

    Moore, R. K.

    1983-05-01

    A brief insight into some of the algorithms that lie behind current automatic speech recognition systems is provided. Early phonetically based approaches were not particularly successful, due mainly to a lack of appreciation of the problems involved. These problems are summarized, and various recognition techniques are reviewed in the context of the solutions that they provide. It is pointed out that the majority of currently available speech recognition equipment employs a "whole-word" pattern matching approach which, although relatively simple, has proved particularly successful in its ability to recognize speech. The concept of time-normalization plays a central role in this type of recognition process, and a family of such algorithms is described in detail. The technique of dynamic time warping is not only capable of providing good performance for isolated word recognition, but can also be extended to the recognition of connected speech (thereby removing one of the most severe limitations of early speech recognition equipment).
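
    The dynamic time warping mentioned above is compact enough to sketch. The Python example below is the textbook formulation, without the slope constraints and pruning that practical recognisers add.

    import numpy as np

    def dtw_distance(a, b):
        # DTW distance between two feature sequences (one frame per row).
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(np.atleast_1d(a[i - 1]) - np.atleast_1d(b[j - 1]))
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    def recognise(utterance, templates):
        # Whole-word matching: pick the stored template with the smallest warped distance.
        return min(templates, key=lambda word: dtw_distance(utterance, templates[word]))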

  13. Automatic force balance calibration system

    Science.gov (United States)

    Ferris, Alice T.

    1995-05-01

    A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system affect each balance equally, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within +/-0.05%, the entire system has an accuracy of +/-0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.

  14. Automatic contact algorithm in DYNA3D for crashworthiness and impact problems

    International Nuclear Information System (INIS)

    This paper presents a new approach for the automatic definition and treatment of mechanical contact in explicit non-linear finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational cost. Key aspects of the proposed new method include automatic identification of adjacent and opposite surfaces in the global search phase, and the use of a well-defined surface normal which allows a consistent treatment of shell intersection and corner contact conditions without ad hoc rules. The paper concludes with three examples which illustrate the performance of the newly proposed algorithm in the public DYNA3D code. ((orig.))

  15. Design of multi-point automatic positioning pre-programmed crane control system

    International Nuclear Information System (INIS)

    This automatic positioning system is designed for cranes in common use. The crane hall is divided into several zones, position is detected by photoelectric switches, and a PLC and an inverter are used as the control devices. Automatic positioning control of the crane is accomplished by means of timed movement. The positioning error is less than 2.1 cm, the swing range when lifting an object is less than 5.7 cm, and the average speed of the crane is more than 90% of the rated speed. The paper gives further details of the positioning control, direction control, speed control and the identification of the starting point. Finally, it gives an analysis of the automatic positioning performance. (authors)

  16. Methods of automatic scanning of SSNTDs

    International Nuclear Information System (INIS)

    The methods of automatic scanning of solid state nuclear track detectors are reviewed. The paper deals with the transmission of light, charged particles, chemicals and electrical current through conventionally etched detectors. Special attention is given to the jumping spark technique and breakdown counters. Finally, optical automatic devices are examined. (orig.)

  17. Automatic control of nuclear power plants

    International Nuclear Information System (INIS)

    The fundamental concepts in automatic control are surveyed, and the purpose of the automatic control of pressurized water reactors is given. The response characteristics for the main components are then studied and block diagrams are given for the main control loops (turbine, steam generator, and nuclear reactors)

  18. Towards unifying inheritance and automatic program specialization

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2002-01-01

    inheritance with covariant specialization to control the automatic application of program specialization to class members. Lapis integrates object-oriented concepts, block structure, and techniques from automatic program specialization to provide both a language where object-oriented designs can be e...

  19. ANNUAL REPORT-AUTOMATIC INDEXING AND ABSTRACTING.

    Science.gov (United States)

    Lockheed Missiles and Space Co., Palo Alto, CA. Electronic Sciences Lab.

    THE INVESTIGATION IS CONCERNED WITH THE DEVELOPMENT OF AUTOMATIC INDEXING, ABSTRACTING, AND EXTRACTING SYSTEMS. BASIC INVESTIGATIONS IN ENGLISH MORPHOLOGY, PHONETICS, AND SYNTAX ARE PURSUED AS NECESSARY MEANS TO THIS END. IN THE FIRST SECTION THE THEORY AND DESIGN OF THE "SENTENCE DICTIONARY" EXPERIMENT IN AUTOMATIC EXTRACTION IS OUTLINED. SOME OF…

  20. Solar Powered Automatic Shrimp Feeding System

    Directory of Open Access Journals (Sweden)

    Dindo T. Ani

    2015-12-01

    Full Text Available - Automatic systems have brought many revolutions to existing technologies. One technology that has seen considerable development is the solar powered automatic shrimp feeding system. Solar power, a renewable energy source, can be an alternative solution to the energy crisis, and using it in an automatic system also reduces manpower requirements. The researchers believe an automatic shrimp feeding system may help solve problems with manual feeding operations. The project study aimed to design and develop a solar powered automatic shrimp feeding system. It specifically sought to prepare the design specifications of the project, to determine the methods of fabrication and assembly, and to test the response time of the automatic shrimp feeding system. The researchers designed and developed an automatic system which utilizes a 10 hour timer that can be set to intervals preferred by the user and undergoes a continuous process. The magnetic contactor acts as a switch connected to the 10 hour timer which controls the activation or termination of electrical loads; it is powered by a solar panel outputting electrical power, with a rechargeable battery in electrical communication with the solar panel for storing the power. Through a series of tests, the components of the modified system were proven functional and operated within the desired output. It was recommended that the timer to be used should be tested to avoid malfunction and achieve a fully automatic system, and that the system may be improved to handle changes in the scope of the project.

  1. Towards an intelligent system for the automatic assignment of domains in globular proteins.

    Science.gov (United States)

    Sternberg, M J; Hegyi, H; Islam, S A; Luo, J; Russell, R B

    1995-01-01

    The automatic identification of protein domains from coordinates is the first step in the classification of protein folds and hence is required for databases that guide structure prediction. Most algorithms encode a single concept of a domain and sometimes do not yield assignments that are consistent with the generally accepted perception. Our development of an automatic approach to identify domains reliably from protein coordinates is described. The algorithm is benchmarked against a manual identification of the domains in 284 representative protein chains. The first step is the domain assignment by distance (DAD) algorithm, which considers the density of inter-residue contacts represented in a contact matrix. The algorithm yields 85% agreement with the manual assignment. The paper then considers how the reliability of these assignments could be evaluated. Finally, the use of structural comparisons with the STAMP algorithm to validate domain assignment is reported on a test case. PMID:7584461
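
    A toy Python sketch in the spirit of the contact-density idea behind DAD, not the published algorithm: build a C-alpha contact matrix and scan for the single chain cut that minimises normalised inter-domain contacts.

    import numpy as np

    def contact_matrix(ca_coords, cutoff=8.0):
        # Boolean inter-residue contact matrix from C-alpha coordinates of shape (n_residues, 3).
        diff = ca_coords[:, None, :] - ca_coords[None, :, :]
        return np.linalg.norm(diff, axis=-1) < cutoff

    def best_single_split(contacts, min_domain=20):
        # One-cut version only: real assignments allow multiple and discontinuous domains.
        n = contacts.shape[0]
        best_cut, best_score = None, np.inf
        for k in range(min_domain, n - min_domain):
            inter = contacts[:k, k:].sum()
            score = inter / (k * (n - k))   # inter-domain contacts normalised by segment sizes
            if score < best_score:
                best_cut, best_score = k, score
        return best_cut, best_score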

  2. AUTOMATIC DESIGNING OF POWER SUPPLY SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. I. Kirspou

    2016-01-01

    Full Text Available The development of an automatic design system for the power supply of industrial enterprises is considered in the paper. Its complete structure and principle of operation are determined and established. A modern graphical interface and data scheme are developed, and the software is fully implemented. The methodology and software correspond to the requirements of up-to-date design practice, describe a general algorithm of the program process and also reveal the properties of the automatic design system's objects. The automatic design system is based on a modular principle and uses object-oriented programming. It makes it possible to carry out consistent design calculations of a power supply system and to select the required equipment, with subsequent output of all calculations in the form of an explanatory note. The automatic design system can be applied by design organizations under conditions of actual design work.

  3. Krsko source term analysis

    International Nuclear Information System (INIS)

    The Krsko Source Term Analysis (STA) has been provided as an integral part of the Krsko Individual Plant Examination (IPE) Project Level 2 (Containment Analysis). By definition, the STA quantifies the magnitude, time dependence and composition of the fission product releases which characterize each Release Category (RC). The Krsko STA also addresses the definition of each RC, the identification and choice of dominant accident sequences within a release category, the analysis of the representative accident sequences using MAAP 3.0B (Modular Accident Analysis Program) revision 18 to estimate the source term characteristics, and a discussion of identified major areas of uncertainty. (author)

  4. Automatic measurement and representation of prosodic features

    Science.gov (United States)

    Ying, Goangshiuan Shawn

    Effective measurement and representation of prosodic features of the acoustic signal for use in automatic speech recognition and understanding systems is the goal of this work. Prosodic features-stress, duration, and intonation-are variations of the acoustic signal whose domains are beyond the boundaries of each individual phonetic segment. Listeners perceive prosodic features through a complex combination of acoustic correlates such as intensity, duration, and fundamental frequency (F0). We have developed new tools to measure F0 and intensity features. We apply a probabilistic global error correction routine to an Average Magnitude Difference Function (AMDF) pitch detector. A new short-term frequency-domain Teager energy algorithm is used to measure the energy of a speech signal. We have conducted a series of experiments performing lexical stress detection on words in continuous English speech from two speech corpora. We have experimented with two different approaches, a segment-based approach and a rhythm unit-based approach, in lexical stress detection. The first approach uses pattern recognition with energy- and duration-based measurements as features to build Bayesian classifiers to detect the stress level of a vowel segment. In the second approach we define rhythm unit and use only the F0-based measurement and a scoring system to determine the stressed segment in the rhythm unit. A duration-based segmentation routine was developed to break polysyllabic words into rhythm units. The long-term goal of this work is to develop a system that can effectively detect the stress pattern for each word in continuous speech utterances. Stress information will be integrated as a constraint for pruning the word hypotheses in a word recognition system based on hidden Markov models.
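
    Two of the acoustic measurements named above are simple enough to sketch in Python. The discrete Teager-Kaiser operator and a basic AMDF pitch estimate are shown as generic formulations, not the exact algorithms developed in this work.

    import numpy as np

    def teager_energy(x):
        # Discrete Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]
        x = np.asarray(x, dtype=float)
        return x[1:-1] ** 2 - x[:-2] * x[2:]

    def amdf_pitch(frame, fs, f_min=60.0, f_max=400.0):
        # Average Magnitude Difference Function: D(tau) = mean |x[n] - x[n+tau]|,
        # minimised over candidate lags; the frame must be longer than fs / f_min samples.
        lags = np.arange(int(fs / f_max), int(fs / f_min))
        d = np.array([np.mean(np.abs(frame[:-lag] - frame[lag:])) for lag in lags])
        return fs / lags[np.argmin(d)]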

  5. Particle identification

    International Nuclear Information System (INIS)

    A variety of subjects are addressed within the general context of searching for limitations in capability of particle identification due to high average rates. Topics receiving attention included Cerenkov ring imaging, transition radiation, synchrotron radiation, time-of-flight, high P spectrometer, heavy quark tagging with leptons, general purpose muon and electron detector, and dE/dx. It is concluded that particle identification will probably not represent a primary obstacle at luminosities of 10^33 cm^-2 s^-1.

  6. CRISPR Recognition Tool (CRT: a tool for automatic detection of clustered regularly interspaced palindromic repeats

    Directory of Open Access Journals (Sweden)

    Brown Kyndall

    2007-06-01

    Full Text Available Abstract Background Clustered Regularly Interspaced Palindromic Repeats (CRISPRs) are a novel type of direct repeat found in a wide range of bacteria and archaea. CRISPRs are beginning to attract attention because of their proposed mechanism; that is, defending their hosts against invading extrachromosomal elements such as viruses. Existing repeat detection tools do a poor job of identifying CRISPRs due to the presence of unique spacer sequences separating the repeats. In this study, a new tool, CRT, is introduced that rapidly and accurately identifies CRISPRs in large DNA strings, such as genomes and metagenomes. Results CRT was compared to the CRISPR detection tools Patscan and Pilercr. In terms of correctness, CRT was shown to be very reliable, demonstrating significant improvements over Patscan for the measures precision, recall and quality. When compared to Pilercr, CRT showed improved performance for recall and quality. In terms of speed, CRT proved to be a huge improvement over Patscan. Both CRT and Pilercr were comparable in speed; however, CRT was faster for genomes containing large numbers of repeats. Conclusion In this paper a new tool was introduced for the automatic detection of CRISPR elements. This tool, CRT, showed some important improvements over current techniques for CRISPR identification. CRT's approach to detecting repetitive sequences is straightforward. It uses a simple sequential scan of a DNA sequence and detects repeats directly without any major conversion or preprocessing of the input. This leads to a program that is easy to describe and understand; yet it is very accurate, fast and memory efficient, being O(n) in space and O(nm/l) in time.
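
    The sequential-scan idea can be caricatured as sliding a short seed along the sequence and checking whether it reappears after a spacer of plausible length. The toy sketch below illustrates only that idea; it is not the published CRT algorithm, and the seed and spacer lengths are invented defaults.

```python
def find_direct_repeats(seq, k=25, min_spacer=20, max_spacer=60):
    """Toy scan for a k-mer that reappears after a spacer of plausible length."""
    hits = []
    for i in range(len(seq) - k):
        seed = seq[i:i + k]
        window = seq[i + k + min_spacer: i + k + max_spacer + k]
        j = window.find(seed)
        if j != -1:
            hits.append((i, i + k + min_spacer + j))   # start positions of the two repeat copies
    return hits
```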

  7. Automatic Color Sorting Machine Using TCS230 Color Sensor And PIC Microcontroller

    OpenAIRE

    Kunhimohammed C K; Muhammed Saifudeen K K; Sahna S; Gokul M S; Shaeez Usman Abdulla

    2015-01-01

    Sorting of products is a difficult industrial process, and continuous manual sorting creates consistency issues. This paper describes a working prototype designed for automatic sorting of objects based on color. A TCS230 sensor was used to detect the color of the product and a PIC16F628A microcontroller was used to control the overall process. The identification of the color is based on frequency analysis of the output of the TCS230 sensor. Two conveyor belts were used, each ...
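
    In such a design the sensor emits a square wave whose frequency scales with the light intensity seen through red, green and blue filters, so classification reduces to comparing the measured frequency triple against calibrated references. A minimal, hypothetical nearest-reference classifier follows; the calibration values are invented for illustration, not taken from the paper.

```python
# Hypothetical nearest-reference color classifier for (red, green, blue) frequency readings.
REFERENCE = {                      # calibrated frequency triples, made up for illustration
    "red":   (12000, 4000, 5000),
    "green": (5000, 11000, 6000),
    "blue":  (4500, 6000, 12500),
}

def classify(reading):
    """Return the reference color whose frequency triple is closest to the reading."""
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(reading, ref))
    return min(REFERENCE, key=lambda name: dist(REFERENCE[name]))

print(classify((11800, 4100, 5200)))   # -> "red"
```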

  8. Mathematical modelling and quality indices optimization of automatic control systems of reactor facility

    International Nuclear Information System (INIS)

    The mathematical modeling of automatic control systems of the WWER-1000 reactor facility with various regulator types is considered. Linear and nonlinear models of the neutron power control systems of the WWER-1000 nuclear reactor with various numbers of delayed-neutron groups are designed. The results of optimization of the direct quality indexes of the neutron power control systems of the WWER-1000 nuclear reactor are presented. The identification and optimization of steam generator level control systems with various regulator types are carried out

  9. Automatic Dependent Surveillance-Broadcast for Sense and Avoid on Small Unmanned Aircraft

    OpenAIRE

    Duffield, Matthew; McLain, Timothy

    2015-01-01

    This paper presents a time-based path planning optimizer for separation assurance for unmanned aerial systems (UAS). Given Automatic Dependent Surveillance-Broadcast (ADS-B) as a sensor, position, velocity, and identification information is available at ranges on the order of 50 nautical miles. Such long-range intruder detection facilitates path planning for separation assurance, but also poses computational and robustness challenges. The time-based path optimizer presented in this paper prov...

  10. A Magnetic Resonance Image Based Atlas of the Rabbit Brain for Automatic Parcellation

    OpenAIRE

    Emma Muñoz-Moreno; Ariadna Arbat-Plana; Dafnis Batalle; Guadalupe Soria; Miriam Illa; Alberto Prats-Galino; Elisenda Eixarch; Eduard Gratacos

    2013-01-01

    Rabbit brain has been used in several works for the analysis of neurodevelopment. However, there are no specific digital rabbit brain atlases that allow automatic identification of brain regions, which is a crucial step for various neuroimage analyses; instead, manual delineation of areas of interest must be performed in order to evaluate a specific structure. For this reason, we propose an atlas of the rabbit brain based on magnetic resonance imaging, including both structural and d...

  11. Automatic Modulation Recognition Using Wavelet Transform and Neural Networks in Wireless Systems

    OpenAIRE

    Dayoub I; Hamouda W; Hassan K; Berbineau M

    2010-01-01

    Modulation type is one of the most important characteristics used in signal waveform identification. In this paper, an algorithm for automatic digital modulation recognition is proposed. The proposed algorithm is verified using higher-order statistical moments (HOM) of the continuous wavelet transform (CWT) as a feature set. A multilayer feed-forward neural network trained with a resilient backpropagation learning algorithm is proposed as a classifier. The purpose is to discriminate among differe...

  12. Progress on Statistical Learning Systems as Data Mining Tools for the Creation of Automatic Databases in Fusion Environments

    International Nuclear Information System (INIS)

    Fusion devices produce tens of thousands of discharges, but only a very limited part of the collected information is analysed. The analysis of physical events requires their identification and temporal location, and the generation of specialized databases relative to these time instants. The automatic determination of the precise time instants at which events happen, and the automatic search for potentially relevant time intervals, can be carried out with classification and regression techniques. Classification and regression techniques have been used for the automatic creation of specialized databases for JET and have allowed the automatic determination of the disruptive / non-disruptive character of discharges. The validation of the recognition method has been carried out with 4400 JET discharges and the global success rate has been 99.02 per cent

  13. Sleep facilitates long-term face adaptation

    OpenAIRE

    Ditye, T.; A.H Javadi; Carbon, C.C.; Walsh, V

    2013-01-01

    Adaptation is an automatic neural mechanism supporting the optimization of visual processing on the basis of previous experiences. While the short-term effects of adaptation on behaviour and physiology have been studied extensively, perceptual long-term changes associated with adaptation are still poorly understood. Here, we show that the integration of adaptation-dependent long-term shifts in neural function is facilitated by sleep. Perceptual shifts induced by adaptation to a distorted imag...

  14. Evolutionary synthesis of automatic classification on astroinformatic big data

    Science.gov (United States)

    Kojecky, Lumir; Zelinka, Ivan; Saloun, Petr

    2016-06-01

    This article describes the initial experiments using a new approach to automatic identification of Be and B[e] star spectra in large archives. With the enormous amount of such data it is no longer feasible to analyze it using classical approaches. We introduce an evolutionary synthesis of the classification by means of analytic programming, one of the methods of symbolic regression. By this method, we synthesize the mathematical formulas that best approximate chosen samples of the stellar spectra. The category whose formula has the lowest difference compared to the particular spectrum is then selected as the result. The results show that classification of stellar spectra by means of analytic programming is able to identify different shapes of the spectra.

  15. Radio frequency identification technology and applications

    OpenAIRE

    Chen, Wenqi

    2015-01-01

    RFID technology is a form of automatic identification and a kind of wireless communication technology. It can be applied to many different aspects of our lives. RFID technology has developed rapidly in recent years and has enabled the use of low-cost electronic tags (EPC). RFID technology is expected to develop into a huge Internet of Things in the future; therefore, it has great potential in the new information age. This thesis introduces the system structure, operating principles, app...

  16. Automatic reactor power control device

    International Nuclear Information System (INIS)

    Anticipated transient without scram (ATWS) of a BWR type reactor is judged to generate a signal based on a reactor power signal and a scram actuation demand signal. The ATWS signal and a predetermined water level signal to be generated upon occurrence of ATWS are inputted, and an injection water flow rate signal exhibiting injection water flow rate optimum to reactor flooding and power suppression is outputted. In addition, a reactor pressure setting signal is outputted based on injection performance of a high pressure water injection system or a lower pressure water injection system upon occurrence of ATWS. Further, the reactor pressure setting signal is inputted to calculate opening/closing setting pressure of a main steam relief valve and output an opening setting pressure signal and a closure setting pressure signal for the main steam relief valve. As a result, the reactor power and the reactor water level can be automatically controlled even upon occurrence of ATWS due to failure of insertion of all of the control rods, thereby making it possible to maintain the integrity and safety of the reactor, the reactor pressure vessel and the reactor container. (N.H.)

  17. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  18. Sequentiality of daily life physiology: an automatized segmentation approach.

    Science.gov (United States)

    Fontecave-Jallon, J; Baconnier, P; Tanguy, S; Eymaron, M; Rongier, C; Guméry, P Y

    2013-09-01

    Based on the hypotheses that (1) a physiological organization exists inside each activity of daily life and (2) the pattern of evolution of physiological variables is characteristic of each activity, pattern changes should be detectable on daily life physiological recordings. The present study aims at investigating whether a simple segmentation method can be set up to detect pattern changes on physiological recordings carried out during daily life. Heart and breathing rates and skin temperature have been non-invasively recorded in volunteers following scenarios made of "daily life" steps (13 records). An observer, undergoing the scenario, wrote down annotations during the recording time. Two segmentation procedures have been compared to the annotations: a visual inspection of the signals and an automatic program based on a trend detection algorithm applied to one physiological signal (skin temperature). The annotations resulted in a total number of 213 segments defined on the 13 records; the best visual inspection detected fewer segments (120) than the automatic program (194). When evaluated in terms of the number of correspondences between the time marks given by annotations and those resulting from both physiologically based segmentations, the automatic program was better than the visual inspection. The mean time lags between annotation and program time marks remain variable. Time series recorded in common life conditions exhibit different successive patterns that can be detected by a simple trend detection algorithm. These sequences are coherent with the corresponding annotated activity. PMID:23943146

  19. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis.

    Science.gov (United States)

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide resulting in over 200,000 deaths. It is prevalent mainly in developing countries where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals by analyzing cough and whoop sounds. The algorithm consists of three main blocks to perform automatic cough detection, cough classification and whooping sound detection. Each of these extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose all pertussis cases successfully from all audio recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm coupled with its high accuracy demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for infection outbreak control. PMID:27583523
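
    Each block of the algorithm extracts acoustic features and classifies them with logistic regression. The sketch below is only a schematic stand-in for one such block; the feature dimensionality and the randomly generated training data are placeholders, not the authors' feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder training data: rows are audio segments, columns are acoustic
# features (energy, spectral and cepstral descriptors, etc.); labels mark
# whether the segment is a cough. Real features and labels come from the recordings.
X_train = np.random.rand(200, 12)
y_train = np.random.randint(0, 2, size=200)

cough_detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability that a new segment contains a cough; downstream blocks would
# further classify detected coughs and look for the whooping sound.
p_cough = cough_detector.predict_proba(np.random.rand(1, 12))[0, 1]
```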

  20. Automatic molecular structure perception for the universal force field.

    Science.gov (United States)

    Artemova, Svetlana; Jaillet, Léonard; Redon, Stephane

    2016-05-15

    The Universal Force Field (UFF) is a classical force field applicable to almost all atom types of the periodic table. Such flexibility makes this force field a potentially good candidate for simulations involving a large spectrum of systems and, indeed, UFF has been applied to various families of molecules. Unfortunately, initializing UFF, that is, performing molecular structure perception to determine which parameters should be used to compute the UFF energy and forces, appears to be a difficult problem. Although many perception methods exist, they mostly focus on organic molecules, and are thus not well-adapted to the diversity of systems potentially considered with UFF. In this article, we propose an automatic perception method for initializing UFF that includes the identification of the system's connectivity, the assignment of bond orders as well as UFF atom types. This perception scheme is proposed as a self-contained UFF implementation integrated in a new module for the SAMSON software platform for computational nanoscience (http://www.samson-connect.net). We validate both the automatic perception method and the UFF implementation on a series of benchmarks. PMID:26927616

  1. Reactor protection system with automatic self-testing and diagnostic

    International Nuclear Information System (INIS)

    A reactor protection system is disclosed having four divisions, with quad redundant sensors for each scram parameter providing input to four independent microprocessor-based electronic chassis. Each electronic chassis acquires the scram parameter data from its own sensor, digitizes the information, and then transmits the sensor reading to the other three electronic chassis via optical fibers. To increase system availability and reduce false scrams, the reactor protection system employs two levels of voting on a need for reactor scram. The electronic chassis perform software divisional data processing, vote 2/3 with spare based upon information from all four sensors, and send the divisional scram signals to the hardware logic panel, which performs a 2/4 division vote on whether or not to initiate a reactor scram. Each chassis makes a divisional scram decision based on data from all sensors. Automatic detection and discrimination against failed sensors allows the reactor protection system to automatically enter a known state when sensor failures occur. Cross communication of sensor readings allows comparison of four theoretically ''identical'' values. This permits identification of sensor errors such as drift or malfunction. A diagnostic request for service is issued for errant sensor data. Automated self-test and diagnostic monitoring, from sensor input through output relay logic, virtually eliminate the need for manual surveillance testing. This provides an ability for each division to cross-check all divisions and to sense failures of the hardware logic. 16 figs

  2. An automatic and effective tooth isolation method for dental radiographs

    Science.gov (United States)

    Lin, P.-L.; Huang, P.-W.; Cho, Y. S.; Kuo, C.-H.

    2013-03-01

    Tooth isolation is a very important step for both computer-aided dental diagnosis and automatic dental identification systems, because it directly affects the accuracy of feature extraction and, thereby, the final results of both types of systems. This paper presents an effective and fully automatic tooth isolation method for dental X-ray images, which comprises upper-lower jaw separation, single tooth isolation, over-segmentation verification, and under-segmentation detection. The upper-lower jaw separation mechanism is based on a gray-scale integral projection to avoid possible information loss and incorporates an angle adjustment to handle skewed images. In single tooth isolation, an adaptive windowing scheme for locating gap valleys is proposed to improve the accuracy. In over-segmentation, an isolation-curve verification scheme is proposed to remove excessive curves; and in under-segmentation, a missing-teeth detection scheme is proposed. The experimental results demonstrate that our method achieves accuracy rates of 95.63% and 98.71% for the upper and lower jaw images, respectively, on a test database of 60 bitewing dental radiographs, and performs better than Nomir and Abdel-Mottaleb's method for images with severe teeth occlusion, excessive dental works, and uneven illumination. The method without the upper-lower jaw separation step also works well for panoramic and periapical images.
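
    A gray-scale integral projection for jaw separation simply sums pixel intensities along each image row and looks for the dark valley marking the gap between the jaws. The sketch below is a simplified illustration of that single step; the skew correction and adaptive windowing of the full method are omitted.

```python
import numpy as np

def jaw_gap_row(image):
    """Return the row index of the darkest horizontal band, a rough
    estimate of the upper/lower jaw separation line in a bitewing image."""
    profile = image.astype(float).sum(axis=1)   # gray-scale integral projection per row
    return int(np.argmin(profile))

# Usage with any 2-D grayscale radiograph array `img`:
# split_row = jaw_gap_row(img)
# upper, lower = img[:split_row, :], img[split_row:, :]
```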

  3. Iris Recognition System using canny edge detection for Biometric Identification

    OpenAIRE

    Bhawna Chouhan; Dr.(Mrs) Shailja Shukla

    2011-01-01

    A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by the individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. Most commercial iris recognition systems use patented algorithms developed by Daugman, and these algorithms are able to produce perfect recognition rates. This work focuses especially on image segmentation and feature extraction for the iris recognition process...

  4. Device for single-phase or three-phase automatic reclosure of 500-750 kV transmission lines

    Energy Technology Data Exchange (ETDEWEB)

    Strelkov, V.I.; Fokin, G.C.; Yakubson, G.G.; Kostina, A.D.

    1985-08-01

    A device for automatic reclosure of 500-700 kV as well as 220-330 kV transmission lines in conjunction with the new PDE 2000 protective relaying and line automation equipment set has been developed by the All-Union Scientific Research Institute of Electrical Power Engineering and the All-Union State Planning-Surveying and Scientific Research Institute of Power Systems and Electrical Networks, jointly with the Chelyabinsk Electrical Equipment Plant, to replace the APV-751 device and later also the APV-503 device. The principal functions of this PDE 2004.01 are: identification of the faulty phase and its automatic reclosure after a phase-to-ground short, with the aid of selective elements; disconnection of three phases and their automatic reclosure once after any kind of polyphase short (including one evolved from a single phase-to-ground short or caused by faults in not yet disconnected phases) and prior to single-phase automatic reclosure, with any direct phase-to-phase short isolated immediately ahead of selective action; disconnection of three phases after any kind of short with possibility of three-phase automatic reclosure after unsuccessful single-phase automatic reclosure; three-phase automatic reclosure once after three phase had been disconnected for reasons other than a fault or a human error. Monitoring and other functions of the device are also described.

  5. Automatic onset phase picking for portable seismic array observation

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Automatic phase picking is a critical procedure for seismic data processing, especially for the huge amounts of seismic data recorded by a large-scale portable seismic array. In this study, a new method is presented for automatic, accurate onset phase picking based on the properties of dense seismic array observations. In our method, the Akaike information criterion (AIC) for single-channel observations and least-squares cross-correlation for multi-channel observations are combined. Tests on seismic array data triggered with the short-term average/long-term average (STA/LTA) technique show that the phase picking error is less than 0.3 s for local events when using the single-channel AIC algorithm. With the multi-channel least-squares cross-correlation technique, clear teleseismic P onsets can be detected reliably. Even for teleseismic records with a high noise level, our algorithm is also able to effectively avoid misdetections.
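
    For readers unfamiliar with the two ingredients mentioned here, a bare-bones single-trace STA/LTA trigger and AIC picker can be sketched as follows; the window lengths are arbitrary assumptions, and the multi-channel cross-correlation stage of the actual method is not reproduced.

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Ratio of short-term to long-term average of the squared signal."""
    x2 = np.asarray(x, dtype=float) ** 2
    sta = np.convolve(x2, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(x2, np.ones(nlta) / nlta, mode="same")
    return sta / np.maximum(lta, 1e-12)

def aic_pick(x):
    """AIC picker: the onset is taken at the sample minimising
    AIC(k) = k*log(var(x[:k])) + (N-k)*log(var(x[k:]))."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    aic = np.full(N, np.inf)
    for k in range(2, N - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (N - k) * np.log(v2)
    return int(np.argmin(aic))

# Typical use: run sta_lta on the continuous trace to find a trigger window,
# then refine the onset inside that window with aic_pick.
```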

  6. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    Science.gov (United States)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g. PDF, Word, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g. Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation leads us to extend the automatic knowledge extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction

  7. Automatic semi-continuous accumulation chamber for diffuse gas emissions monitoring in volcanic and non-volcanic areas

    Science.gov (United States)

    Lelli, Matteo; Raco, Brunella; Norelli, Francesco; Virgili, Giorgio; Continanza, Davide

    2016-04-01

    For several decades the accumulation chamber method has been intensively used in monitoring activities of diffuse gas emissions in volcanic areas. Although some improvements have been made in terms of sensitivity and reproducibility of the detectors, measuring the temporal variation of gas emissions usually requires expensive and bulky equipment. The unit described in this work is a low-cost, easy to install and manage instrument that will make possible the creation of low-cost monitoring networks. The Non-Dispersive Infrared detector used has a concentration range of 0-5% CO2, but substitution with another detector (range 0-5000 ppm) is possible and very easy. The power supply unit has a 12 V, 7 Ah battery, which is recharged by a 35 W solar panel (equipped with a charge regulator). The control unit contains a custom programmed CPU and remote transmission is assured by a GPRS modem. The chamber is activated by the DataLogger unit, using a linear actuator that moves it between the closed position (sampling) and the open position (idle). A probe for measuring soil temperature, soil electrical conductivity, soil volumetric water content, air pressure and air temperature is assembled on the device, which is already arranged for the connection of other external sensors, including an automatic weather station. The automatic station has been tested in the field on Lipari island (Sicily, Italy) over a period of three months, performing CO2 flux measurements (together with weather parameters) every hour. The possibility of measuring, in semi-continuous mode and at the same time, the gas fluxes from the soil and many external parameters helps the time series analysis aimed at distinguishing gas flux anomalies due to variations in the deep system (e.g. the onset of volcanic crises) from those triggered by external conditions.
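
    In the accumulation chamber method the flux is estimated from the rate of CO2 build-up inside the closed chamber. Below is a minimal post-processing sketch, assuming the concentration is logged at fixed intervals; the unit conversion constant and the chamber geometry values are illustrative assumptions, not the instrument's actual parameters.

```python
import numpy as np

def co2_flux(concentration_ppm, dt_s, chamber_volume_m3, chamber_area_m2):
    """Estimate the CO2 flux (mol m^-2 s^-1) from the initial slope of the
    chamber concentration curve, assuming ideal-gas conditions near 25 degC."""
    t = np.arange(len(concentration_ppm)) * dt_s
    slope_ppm_per_s = np.polyfit(t, concentration_ppm, 1)[0]   # linear fit to the build-up
    molar_volume = 0.0245                                      # m^3/mol at ~25 degC, 1 atm (assumed)
    dC_dt = slope_ppm_per_s * 1e-6 / molar_volume              # mol m^-3 s^-1
    return dC_dt * chamber_volume_m3 / chamber_area_m2

# flux = co2_flux(ppm_series, dt_s=1.0, chamber_volume_m3=0.006, chamber_area_m2=0.03)
```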

  8. Laser Scanner For Automatic Storage

    Science.gov (United States)

    Carvalho, Fernando D.; Correia, Bento A.; Rebordao, Jose M.; Rodrigues, F. Carvalho

    1989-01-01

    Automated magazines are being used in industry more and more. One of the problems related to the automation of a store house is the identification of the products involved. Already used for stock management, bar codes provide an easy way to identify a product. Applied to automated magazines, bar codes allow a great variety of items to be encoded in a small code. In order to be used by national producers of automated magazines, a dedicated laser scanner has been developed. The prototype uses an He-Ne laser whose beam scans a field angle of 75 degrees at 16 Hz. The scene reflectivity is transduced by a photodiode into an electrical signal, which is then binarized. This digital signal is the input of the decoding program. The machine is able to see barcodes and to decode the information. A parallel interface allows communication with the central unit, which is responsible for the management of the automated magazine.

  9. Traceability Through Automatic Program Generation

    Science.gov (United States)

    Richardson, Julian; Green, Jeff

    2003-01-01

    Program synthesis is a technique for automatically deriving programs from specifications of their behavior. One of the arguments made in favour of program synthesis is that it allows one to trace from the specification to the program. One way in which traceability information can be derived is to augment the program synthesis system so that manipulations and calculations it carries out during the synthesis process are annotated with information on what the manipulations and calculations were and why they were made. This information is then accumulated throughout the synthesis process, at the end of which, every artifact produced by the synthesis is annotated with a complete history relating it to every other artifact (including the source specification) which influenced its construction. This approach requires modification of the entire synthesis system - which is labor-intensive and hard to do without influencing its behavior. In this paper, we introduce a novel, lightweight technique for deriving traceability from a program specification to the corresponding synthesized code. Once a program has been successfully synthesized from a specification, small changes are systematically made to the specification and the effects on the synthesized program observed. We have partially automated the technique and applied it in an experiment to one of our program synthesis systems, AUTOFILTER, and to the GNU C compiler, GCC. The results are promising: 1. Manual inspection of the results indicates that most of the connections derived from the source (a specification in the case of AUTOFILTER, C source code in the case of GCC) to its generated target (C source code in the case of AUTOFILTER, assembly language code in the case of GCC) are correct. 2. Around half of the lines in the target can be traced to at least one line of the source. 3. Small changes in the source often induce only small changes in the target.

  10. Extensometer automatically measures elongation in elastomers

    Science.gov (United States)

    Hooper, C. D.

    1966-01-01

    Extensometer, with a calibrated shaft, measures the elongation of elastomers and automatically records this distance on a chart. It is adaptable to almost any tensile testing machine and is fabricated at a relatively low cost.

  11. Computer systems for automatic earthquake detection

    Science.gov (United States)

    Stewart, S.W.

    1974-01-01

    U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are efficiently monitored continuously.

  12. Automatic program debugging for intelligent tutoring systems

    Energy Technology Data Exchange (ETDEWEB)

    Murray, W.R.

    1986-01-01

    This thesis explores the process by which student programs can be automatically debugged in order to increase the instructional capabilities of these systems. This research presents a methodology and implementation for the diagnosis and correction of nontrivial recursive programs. In this approach, recursive programs are debugged by repairing induction proofs in the Boyer-Moore Logic. The potential of a program debugger to automatically debug widely varying novice programs in a nontrivial domain is proportional to its capabilities to reason about computational semantics. By increasing these reasoning capabilities a more powerful and robust system can result. This thesis supports these claims by examining related work in automated program debugging and by discussing the design, implementation, and evaluation of Talus, an automatic debugger for LISP programs. Talus relies on its abilities to reason about computational semantics to perform algorithm recognition, infer code teleology, and to automatically detect and correct nonsyntactic errors in student programs written in a restricted, but nontrivial, subset of LISP.

  13. Variable load automatically tests dc power supplies

    Science.gov (United States)

    Burke, H. C., Jr.; Sullivan, R. M.

    1965-01-01

    Continuously variable load automatically tests dc power supplies over an extended current range. External meters monitor current and voltage, and multipliers at the outputs facilitate plotting the power curve of the unit.

  14. Coke oven automatic combustion control system

    Energy Technology Data Exchange (ETDEWEB)

    Shihara, Y.

    1981-01-01

    This article describes and discusses the development and application of an automatic combustion control system for coke ovens that has been used at the Yawata Works of the Nippon Steel Corporation, Japan. (In Japanese)

  15. Automatic calibration system for pressure transducers

    Science.gov (United States)

    1968-01-01

    A fifty-channel automatic pressure transducer calibration system increases the quantity and accuracy of calibrations for test evaluation. The pressure transducers are installed in an environmental test chamber and manifolded to connect them to a pressure balance, so that a uniform pressure is applied to all of them.

  16. A Demonstration of Automatically Switched Optical Network

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    We build an automatically switched optical network (ASON) testbed with four optical cross-connect nodes. Many fundamental ASON features are demonstrated, implemented by control protocols based on the generalized multi-protocol label switching (GMPLS) framework.

  17. Automatic text categorisation of racist webpages

    OpenAIRE

    Greevy, Edel

    2004-01-01

    Automatic Text Categorisation (TC) involves the assignment of one or more predefined categories to text documents in order that they can be effectively managed. In this thesis we examine the possibility of applying automatic text categorisation to the problem of categorising texts (web pages) based on whether or not they are racist. TC has proven successful for topic-based problems such as news story categorisation. However, the problem of detecting racism is dissimilar to topic-based pro...

  18. Development of automatic weld strength testing machine

    International Nuclear Information System (INIS)

    In order to improve the testing process and its accuracy, and to carry out all manual work, including documentation, automatically and effortlessly, an automatic computerised strength testing machine was developed with the latest state-of-the-art technology, including both hardware and software. The operator has only to submit the weld to the machine for testing and start the testing process merely by pressing a switch. This paper depicts the salient features of this machine.

  19. Dynamic Automatic Noisy Speech Recognition System (DANSR)

    OpenAIRE

    Paul, Sheuli

    2014-01-01

    In this thesis we studied and investigated a very common but long-standing noise problem, and we provide a solution to it. The task is to deal with different types of noise that occur simultaneously, which we call hybrid. Although there are individual solutions for specific types, one cannot simply combine them, because each solution affects the whole speech signal. We developed an automatic speech recognition system, DANSR (Dynamic Automatic Noisy Speech Recognition System), for hybri...

  20. AUTOMATIC CAPTION GENERATION FOR ELECTRONICS TEXTBOOKS

    OpenAIRE

    Veena Thakur; Trupti Gedam

    2015-01-01

    Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify...

  1. Automatic Morphometry of Nerve Histological Sections

    OpenAIRE

    Romero, E.; Cuisenaire, O.; Denef, J.; Delbeke, J.; Macq, B.; Veraart, C.

    2000-01-01

    A method for the automatic segmentation, recognition and measurement of neuronal myelinated fibers in nerve histological sections is presented. In this method, the fiber parameters, i.e. perimeter, area, position of the fiber, and myelin sheath thickness, are automatically computed. Obliquity of the sections may be taken into account. First, the image is thresholded to provide a coarse classification between myelin and non-myelin pixels. Next, the resulting binary image is further simplified usi...

  2. An automatic system for multielement solvent extractions

    International Nuclear Information System (INIS)

    The automatic system described is suitable for multi-element separations by solvent extraction techniques with organic solvents heavier than water. The analysis is run automatically by a central control unit and includes steps such as pH regulation and reduction or oxidation. As an example, the separation of radioactive Hg2+, Cu2+, Mo6+, Cd2+, As5+, Sb5+, Fe3+, and Co3+ by means of diethyldithiocarbonate complexes is reported. (Auth.)

  3. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathaniel O.

    2010-05-31

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  4. Evaluation framework for automatic singing transcription

    OpenAIRE

    Molina, Emilio; Ana M. Barbancho; Tardón, Lorenzo J.; Barbancho, Isabel

    2014-01-01

    In this paper, we analyse the evaluation strategies used in previous works on automatic singing transcription, and we present a novel, comprehensive and freely available evaluation framework for automatic singing transcription. This framework consists of a cross-annotated dataset and a set of extended evaluation measures, which are integrated in a Matlab toolbox. The presented evaluation measures are based on standard MIREX note-tracking measures, but they provide extra information about the ...

  5. Automatic Programming with Ant Colony Optimization

    OpenAIRE

    Green, Jennifer; Jacqueline L. Whalley; Johnson, Colin G.

    2004-01-01

    Automatic programming is the use of search techniques to find programs that solve a problem. The most commonly explored automatic programming technique is genetic programming, which uses genetic algorithms to carry out the search. In this paper we introduce a new technique called Ant Colony Programming (ACP) which uses an ant colony based search in place of genetic algorithms. This algorithm is described and compared with other approaches in the literature.

  6. Face-Based Automatic Personality Perception

    OpenAIRE

    Al Moubayed, Noura; Vazquez-Alvarez, Yolanda; McKay, Alex; Vinciarelli, Alessandro

    2014-01-01

    Automatic Personality Perception is the task of automatically predicting the personality traits people attribute to others. This work presents experiments where such a task is performed by mapping facial appearance into the Big-Five personality traits, namely Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. The experiments are performed over the pictures of the FERET corpus, originally collected for biometrics purposes, for a total of 829 individuals. The results show...

  7. Automatic processing of dominance and submissiveness

    OpenAIRE

    Moors, Agnes; De Houwer, Jan

    2005-01-01

    We investigated whether people are able to detect in a relatively automatic manner the dominant or submissive status of persons engaged in social interactions. Using a variant of the affective Simon task (De Houwer & Eelen, 1998), we demonstrated that the verbal response DOMINANT or SUBMISSIVE was facilitated when it had to be made to a target person that was respectively dominant or submissive. These results provide new information about the automatic nature of appraisals and ...

  8. Efficient formulations of the material identification problem using full-field measurements

    Science.gov (United States)

    Pérez Zerpa, Jorge M.; Canelas, Alfredo

    2016-08-01

    The material identification problem addressed consists of determining the constitutive parameters distribution of a linear elastic solid using displacement measurements. This problem has been considered in important applications such as the design of methodologies for breast cancer diagnosis. Since the resolution of real life problems involves high computational costs, there is great interest in the development of efficient methods. In this paper two new efficient formulations of the problem are presented. The first formulation leads to a second-order cone optimization problem, and the second one leads to a quadratic optimization problem, both allowing the resolution of the problem with high efficiency and precision. Numerical examples are solved using synthetic input data with error. A regularization technique is applied using the Morozov criterion along with an automatic selection strategy of the regularization parameter. The proposed formulations present great advantages in terms of efficiency, when compared to other formulations that require the application of general nonlinear optimization algorithms.
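
    The regularization step pairs a penalized least-squares fit with the Morozov discrepancy principle, which selects the regularization parameter so that the data residual matches the known noise level. The sketch below is a generic Tikhonov illustration of that selection strategy, not the authors' cone or quadratic formulations.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def morozov_parameter(A, b, noise_level, lambdas):
    """Pick the largest lambda whose residual still stays within the noise level."""
    best = sorted(lambdas)[0]
    for lam in sorted(lambdas):
        x = tikhonov_solve(A, b, lam)
        if np.linalg.norm(A @ x - b) <= noise_level:
            best = lam          # residual still within the noise: keep increasing lambda
        else:
            break
    return best

# Usage idea: lam = morozov_parameter(A, b, noise_level=delta, lambdas=np.logspace(-8, 2, 50))
```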

  9. Musical Instrument Identification using Multiscale Mel-frequency Cepstral Coefficients

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Morvidone, Marcela; Daudet, Laurent

    We investigate the benefits of evaluating Mel-frequency cepstral coefficients (MFCCs) over several time scales in the context of automatic musical instrument identification for signals that are monophonic but derived from real musical settings. We define several sets of features derived from MFCCs...
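
    Evaluating MFCCs over several time scales can be approximated by recomputing them with different analysis window lengths and pooling statistics per scale. Below is a rough sketch using librosa; the window lengths and the mean-pooling are illustrative choices, not the paper's settings.

```python
import numpy as np
import librosa

def multiscale_mfcc(path, scales=(512, 2048, 8192), n_mfcc=13):
    """Return per-scale mean MFCC vectors computed with different frame lengths."""
    y, sr = librosa.load(path, sr=None, mono=True)
    features = []
    for n_fft in scales:
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                    n_fft=n_fft, hop_length=n_fft // 2)
        features.append(mfcc.mean(axis=1))   # summarise each time scale by its mean MFCC vector
    return np.concatenate(features)
```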

  10. 48 CFR 252.211-7006 - Radio Frequency Identification.

    Science.gov (United States)

    2010-10-01

    ... supply, as defined in DoD 4140.1-R, DoD Supply Chain Materiel Management Regulation, AP1.1.11: (A... immediate, automatic, and accurate identification of any item in the supply chain of any company, in any..., organizational tool kits, hand tools, and administrative and housekeeping supplies and equipment. (C) Class...

  11. An automatic damage detection algorithm based on the Short Time Impulse Response Function

    Science.gov (United States)

    Auletta, Gianluca; Carlo Ponzo, Felice; Ditommaso, Rocco; Iacovino, Chiara

    2016-04-01

    Structural health monitoring, together with dynamic identification and damage detection techniques, has been increasing in popularity in both the scientific and civil communities in recent years. The basic idea arises from the observation that spectral properties, described in terms of the so-called modal parameters (eigenfrequencies, mode shapes, and modal damping), are functions of the physical properties of the structure (mass, energy dissipation mechanisms and stiffness). Damage detection techniques traditionally consist of visual inspection and/or non-destructive testing. A different approach consists of vibration-based methods that detect changes in damage-related features. Structural damage exhibits its main effects in terms of stiffness and damping variation. Damage detection approaches based on dynamic monitoring of structural properties over time have received considerable attention in the recent scientific literature. We focus on structural damage localization and detection after an earthquake, based on the evaluation of mode curvature differences. The methodology is based on the acquisition of the structural dynamic response through a three-directional accelerometer installed on the top floor of the structure. It is able to assess the presence of any damage on the structure, also providing information about the position and severity of the damage. The procedure is based on a Band-Variable Filter (Ditommaso et al., 2012), used to extract the dynamic characteristics of systems that evolve over time by acting simultaneously in both the time and frequency domains. In this paper, using a combined approach based on the Fourier transform and on seismic interferometric analysis, a useful tool for the automatic fundamental frequency evaluation of nonlinear structures is proposed. Moreover, using this kind of approach it is possible to improve some of the existing methods for automatic damage detection, providing stable results
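
    The automatic tracking of a structure's fundamental frequency over time can be illustrated with a short-time Fourier transform of the roof acceleration record, taking the dominant spectral peak in each window. The sketch below is a simplified stand-in; the band-variable filter and interferometric steps of the actual procedure are not reproduced, and the search band is an assumed value.

```python
import numpy as np
from scipy.signal import stft

def track_fundamental(acc, fs, fmin=0.5, fmax=10.0, nperseg=1024):
    """Return (times, dominant frequency per window) from a roof acceleration trace."""
    f, t, Z = stft(acc, fs=fs, nperseg=nperseg)
    band = (f >= fmin) & (f <= fmax)          # search only a plausible structural band
    peak = np.abs(Z[band, :]).argmax(axis=0)
    return t, f[band][peak]

# A persistent drop of the tracked frequency after a seismic event is the kind
# of stiffness-related change this family of damage-detection methods looks for.
```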

  12. Identification of fast-changing signals by means of adaptive chaotic transformations

    OpenAIRE

    Berezowski, Marek; Lawnik, Marcin

    2016-01-01

    The adaptive approach to the identification of strongly non-linear, fast-changing signals is discussed. The approach is based on adaptive sampling that uses chaotic mapping of the signal itself. The presented sampling method may be used online in the automatic control of chemical reactors (through identification of concentration and temperature oscillations in real time), in medicine (through identification of ECG and EEG signals in real time), etc. In this paper, we present it to identify t...

  13. Is Mobile-Assisted Language Learning Really Useful? An Examination of Recall Automatization and Learner Autonomy

    Science.gov (United States)

    Sato, Takeshi; Murase, Fumiko; Burden, Tyler

    2015-01-01

    The aim of this study is to examine the advantages of Mobile-Assisted Language Learning (MALL), especially vocabulary learning of English as a foreign or second language (L2) in terms of the two strands: automatization and learner autonomy. Previous studies articulate that technology-enhanced L2 learning could bring about some positive effects.…

  14. Relatedness Proportion Effects in Semantic Categorization: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2014-01-01

    Semantic priming effects at a short prime-target stimulus onset asynchrony are commonly explained in terms of an automatic spreading activation process. According to this view, the proportion of related trials should have no impact on the size of the semantic priming effect. Using a semantic categorization task ("Is this a living…

  15. Reflecting and deflecting stereotypes : Assimilation and contrast in impression formation and automatic behavior

    NARCIS (Netherlands)

    Dijksterhuis, A; Spears, R; Lepinasse, V

    2001-01-01

    Factors influencing the tendency to represent a social stimulus primarily in stereotypic terms, or more as a distinct exemplar, were predicted to moderate automatic behavior effects, producing assimilation and contrast, respectively. In Experiment I, we demonstrated that when an impression pertained

  16. Studies on the Internal Relationship between Traditional Identification Terms in Chinese Medicine and Pharmaceutical Botany

    Institute of Scientific and Technical Information of China (English)

    林丽; 晋玲; 高素芳; 陈红刚; 施晓龙

    2015-01-01

    OBJECTIVE: To enrich the identification diversity of traditional Chinese medicine (TCM) and provide theoretical guidance for the quality evaluation of TCM. METHODS: According to literature references and traditional identification experiences, characteristics including medicinal shape, size, color and lustre, surface, texture, section, odor and other aspects were identified by sense organs such as eyes, hands, nose and mouth. Vivid traditional identification terms were obtained through systematic summarization in order to explore the internal relationship with pharmaceutical botany. RESULTS & CONCLUSIONS: As the simplest identification method, traditional identification can rapidly identify the species and quality of TCM, evaluate the quality, and is of great significance for solving the security issues of clinical medication and health care in daily life. There is a correlation between traditional identification and botanical research, which can provide theoretical guidance for the character identification and quality evaluation of TCM.

  17. Automatic learning-based beam angle selection for thoracic IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy; Marshall, Andrea [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, Toronto, Ontario M5G 1P5 (Canada); Levinshtein, Alex [Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4 (Canada); Hope, Andrew J.; Lindsay, Patricia [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Pekar, Vladimir [Philips Healthcare, Markham, Ontario L6C 2S3 (Canada)

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
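
    The core learning step maps anatomical features to a per-beam score with a random forest regressor. The sketch below only illustrates that mapping; the feature construction, the interbeam dependencies and the subsequent angle optimization are omitted, and all arrays shown are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder training set: each row holds anatomical features describing a candidate
# beam direction for a past plan; the target is that beam's score derived from the
# clinically approved plans. Real features come from the planning CT and contours.
X_train = np.random.rand(500, 20)
y_train = np.random.rand(500)

beam_scorer = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# For a new patient, score every candidate gantry angle and keep the best ones as the
# starting point for the angle-adjustment optimization (7 beams and 10-degree steps are
# hypothetical choices for illustration).
candidate_features = np.random.rand(36, 20)
scores = beam_scorer.predict(candidate_features)
top_angles = np.argsort(scores)[::-1][:7] * 10
```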

  18. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume

  19. Sparse discriminant analysis for breast cancer biomarker identification and classification

    Institute of Scientific and Technical Information of China (English)

    Yu Shi; Daoqing Dai; Chaochun Liu; Hong Yan

    2009-01-01

    Biomarker identification and cancer classification are two important procedures in microarray data analysis. We propose a novel unified method to carry out both tasks. We first preselect biomarker candidates by eliminating unrelated genes through the BSS/WSS ratio filter to reduce computational cost, and then use a sparse discriminant analysis method for simultaneous biomarker identification and cancer classification. Moreover, we give a mathematical justification for automatic biomarker identification. Experimental results show that the proposed method can identify key genes that have been verified in biochemical or biomedical research and classify the breast cancer type correctly.
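
    The BSS/WSS preselection ranks each gene by the ratio of its between-class to within-class sum of squares across samples. Below is a direct numpy sketch of that filter; how many top-ranked genes to keep before the sparse discriminant analysis step is left open.

```python
import numpy as np

def bss_wss_ratio(X, y):
    """X: samples x genes expression matrix, y: class labels.
    Returns the BSS/WSS ratio per gene (larger = more discriminative)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    overall = X.mean(axis=0)
    bss = np.zeros(X.shape[1])
    wss = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        bss += len(Xc) * (Xc.mean(axis=0) - overall) ** 2       # between-class spread
        wss += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)        # within-class spread
    return bss / np.maximum(wss, 1e-12)

# ranking = np.argsort(bss_wss_ratio(X, y))[::-1]   # keep the top-ranked genes
```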

  20. Towards Automatic Improvement of Patient Queries in Health Retrieval Systems

    Directory of Open Access Journals (Sweden)

    Nesrine KSENTINI

    2016-07-01

    Full Text Available With the adoption of health information technology for clinical health, e-health is becoming usual practice today. Users of this technology find it difficult to seek information relevant to their needs due to the increasing amount of clinical and medical data on the web and their lack of knowledge of medical jargon. In this regard, a method is described to improve users' queries by automatically adding new related terms that appear in the same context as the original query, in order to improve the final search results. This method is based on the assessment of semantic relationships, defined by a proposed statistical method, between a set of terms or keywords. Experiments were performed on the CLEF-eHealth-2015 database and the obtained results show the effectiveness of our proposed method.
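
    The expansion step appends terms that are statistically related to the original query terms. The toy illustration below uses a hand-made relatedness table standing in for the proposed statistical assessment of semantic relationships; the entries are invented, not taken from the paper.

```python
# Toy query expansion: the relatedness table is invented for illustration only.
RELATED_TERMS = {
    "heartburn": ["acid reflux", "gerd"],
    "rash": ["dermatitis", "eruption"],
}

def expand_query(query, max_added=3):
    """Append up to max_added related terms found for any word of the query."""
    added = []
    for term in query.lower().split():
        added.extend(RELATED_TERMS.get(term, []))
    return query + " " + " ".join(added[:max_added]) if added else query

print(expand_query("heartburn treatment"))   # -> "heartburn treatment acid reflux gerd"
```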

  1. Automatic Performance Debugging of SPMD-style Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Kunlin; Shi, Weisong; Yuan, Lin; Meng, Dan; Wang, Lei

    2011-01-01

    The single program, multiple data (SPMD) programming model is widely used for both high performance computing and Cloud computing. In this paper, we design and implement an innovative system, AutoAnalyzer, that automates the process of debugging performance problems of SPMD-style parallel programs, including data collection, performance behavior analysis, locating bottlenecks, and uncovering their root causes. AutoAnalyzer is unique in terms of two features: first, without any a priori knowledge, it automatically locates bottlenecks and uncovers their root causes for performance optimization; second, it is lightweight in terms of the size of performance data to be collected and analyzed. Our contributions are three-fold: first, we propose two effective clustering algorithms to investigate the existence of performance bottlenecks that cause process behavior dissimilarity or code region behavior disparity, respectively; meanwhile, we present two searching algorithms to locate bottlenecks; second, on a basis o...

  2. Reference Lists for the Evaluation of Term Extraction Tools

    OpenAIRE

    Loginova Clouet, Elizaveta; Gojun, Anita; Blancafort, Helena; Guegan, Marie; Gornostay, Tatiana; Heid, Ulrich

    2012-01-01

    In this paper, we discuss practical and methodological issues of the creation of reference term lists (RTLs) for the evaluation of monolingual and bilingual term candidate extraction from comparable corpora in the domains of wind energy and mobile technology. These reference term lists are intended to serve as a "gold standard" for the qualitative and quantitative evaluation of automatic term extraction tools. We present the preliminary results of the evaluation of the monolingual term ex...

  3. Automatic welding techniques for nuclear power plants

    International Nuclear Information System (INIS)

    Improved-type BWRs (ABWRs) further improved the overall plant characteristics of operability, safety and economic efficiency through simplification and higher performance. In particular, reactor internal pumps (RIPs), together with an improved control rod drive (CRD) system, promoted the simplification of the reactor system and the improvement of operability and safety. The structures of the RIP casing proper and its welded parts, the automatic TIG welder for RIP casings and the nondestructive inspection after welding, the three-dimensional automatic welding of CRD stubs, the narrow-gap welding of flow nozzles, and the automatic welding of spent fuel storage racks are reported. The work of replacing reactor recirculation pipings with 316L pipes that withstand stress corrosion cracking (SCC) by remote automatic welding, carried out to reduce the radiation exposure of workers, is introduced. A fully automatic TIG welding system for pipings was developed for the purpose of realizing unmanned welding, or welding that does not require special skill, and its constitution and performance are described. (K.I.)

  4. Aftershock identification

    OpenAIRE

    Zaliapin, Ilya; Gabrielov, Andrei; Keilis-Borok, Vladimir; Wong, Henry

    2007-01-01

    Earthquake aftershock identification is closely related to the question "Are aftershocks different from the rest of earthquakes?" We give a positive answer to this question and introduce a general statistical procedure for clustering analysis of seismicity that can be used, in particular, for aftershock detection. The proposed approach expands the analysis of Baiesi and Paczuski [PRE, 69, 066106 (2004)] based on the space-time-magnitude nearest-neighbor distance $\\eta$ between earthquakes. We...
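
    For orientation, the space-time-magnitude nearest-neighbor distance of Baiesi and Paczuski, which the proposed approach builds on, can be computed roughly as below; the constants (fractal dimension, b-value) and the toy catalog are assumptions for illustration only, and conventions vary between papers:

    ```python
    # Rough sketch of the Baiesi-Paczuski nearest-neighbor distance
    # eta_ij = t_ij * (r_ij ** d_f) * 10 ** (-b * m_i), where event i precedes j.
    # Each event's "parent" is the earlier event minimizing eta; small values
    # suggest likely aftershock links. Toy catalog and constants for illustration.
    import math

    D_F = 1.6   # assumed fractal dimension of epicenters
    B = 1.0     # assumed Gutenberg-Richter b-value

    # (time in days, x in km, y in km, magnitude)
    catalog = [
        (0.0,   0.0,  0.0, 6.1),
        (0.5,   2.0,  1.0, 3.2),
        (1.2,   1.0, -1.5, 3.8),
        (30.0, 80.0, 40.0, 4.0),
    ]

    def eta(parent, child):
        t_p, x_p, y_p, m_p = parent
        t_c, x_c, y_c, _ = child
        dt = t_c - t_p
        r = math.hypot(x_c - x_p, y_c - y_p)
        return dt * (r ** D_F) * 10 ** (-B * m_p)

    for j, child in enumerate(catalog[1:], start=1):
        parent_idx, dist = min(
            ((i, eta(catalog[i], child)) for i in range(j)), key=lambda p: p[1]
        )
        print(f"event {j}: nearest neighbor {parent_idx}, eta = {dist:.3g}")
    ```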

  5. On the malleability of automatic attitudes: combating automatic prejudice with images of admired and disliked individuals.

    Science.gov (United States)

    Dasgupta, N; Greenwald, A G

    2001-11-01

    Two experiments examined whether exposure to pictures of admired and disliked exemplars can reduce automatic preference for White over Black Americans and younger over older people. In Experiment 1, participants were exposed to either admired Black and disliked White individuals, disliked Black and admired White individuals, or nonracial exemplars. Immediately after exemplar exposure and 24 hr later, they completed an Implicit Association Test that assessed automatic racial attitudes and 2 explicit attitude measures. Results revealed that exposure to admired Black and disliked White exemplars significantly weakened automatic pro-White attitudes for 24 hr beyond the treatment but did not affect explicit racial attitudes. Experiment 2 provided a replication using automatic age-related attitudes. Together, these studies provide a strategy that attempts to change the social context and, through it, to reduce automatic prejudice and preference. PMID:11708558

  6. Automatic contrast: evidence that automatic comparison with the social self affects evaluative responses.

    Science.gov (United States)

    Ruys, Kirsten I; Spears, Russell; Gordijn, Ernestine H; de Vries, Nanne K

    2007-08-01

    The aim of the present research was to investigate whether unconsciously presented affective information may cause opposite evaluative responses depending on what social category the information originates from. We argue that automatic comparison processes between the self and the unconscious affective information produce this evaluative contrast effect. Consistent with research on automatic behaviour, we propose that when an intergroup context is activated, an automatic comparison to the social self may determine the automatic evaluative responses, at least for highly visible categories (e.g. sex, ethnicity). Contrary to previous research on evaluative priming, we predict automatic contrastive responses to affective information originating from an outgroup category such that the evaluative response to neutral targets is opposite to the valence of the suboptimal primes. Two studies using different intergroup contexts provide support for our hypotheses. PMID:17705936

  7. Identification of biomolecules by terahertz spectroscopy and fuzzy pattern recognition

    Science.gov (United States)

    Chen, Tao; Li, Zhi; Mo, Wei

    2013-04-01

    An approach for automatic identification of terahertz (THz) spectra of biomolecules is proposed based on principal component analysis (PCA) and fuzzy pattern recognition in this paper, and THz transmittance spectra of some typical amino acid and saccharide biomolecular samples are investigated to prove its feasibility. Firstly, PCA is applied to reduce the dimensionality of the original spectrum data and extract features of the data. Secondly, instead of the original spectrum variables, the selected principal component scores matrix is fed into the model of fuzzy pattern recognition, where a principle of fuzzy closeness based optimization is employed to identify those samples. Results demonstrate that THz spectroscopy combined with PCA and fuzzy pattern recognition can be efficiently utilized for automatic identification of biomolecules. The proposed approach provides a new effective method in the detection and identification of biomolecules using THz spectroscopy.
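
    A rough sketch of the PCA-plus-closeness idea on synthetic spectra follows; the inverse-distance closeness used here is a generic stand-in for the paper's fuzzy-closeness criterion, not its actual formulation:

    ```python
    # Sketch: reduce THz transmittance spectra with PCA, then assign a test
    # spectrum to the class whose prototype it is closest to in PCA space.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    n_per_class, n_points = 10, 300
    classes = ["amino_acid_A", "saccharide_B"]

    # Synthetic spectra: each class has a distinct baseline shape plus noise.
    base = {c: rng.normal(size=n_points) for c in classes}
    X = np.vstack([base[c] + 0.3 * rng.normal(size=(n_per_class, n_points))
                   for c in classes])
    y = np.repeat(classes, n_per_class)

    pca = PCA(n_components=3)
    scores = pca.fit_transform(X)
    prototypes = {c: scores[y == c].mean(axis=0) for c in classes}

    def classify(spectrum):
        s = pca.transform(spectrum.reshape(1, -1))[0]
        closeness = {c: 1.0 / (1.0 + np.linalg.norm(s - p))
                     for c, p in prototypes.items()}
        return max(closeness, key=closeness.get)

    test = base["saccharide_B"] + 0.3 * rng.normal(size=n_points)
    print(classify(test))   # expected: saccharide_B
    ```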

  8. 26 CFR 301.7701-12 - Employer identification number.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 18 2010-04-01 2010-04-01 false Employer identification number. 301.7701-12...) PROCEDURE AND ADMINISTRATION PROCEDURE AND ADMINISTRATION Definitions § 301.7701-12 Employer identification number. For purposes of this chapter, the term employer identification number means the...

  9. Unsupervised Identification of Isotope-Labeled Peptides.

    Science.gov (United States)

    Goldford, Joshua E; Libourel, Igor G L

    2016-06-01

    In vivo isotopic labeling coupled with high-resolution proteomics is used to investigate primary metabolism in techniques such as stable isotope probing (protein-SIP) and peptide-based metabolic flux analysis (PMFA). Isotopic enrichment of carbon substrates and intracellular metabolism determine the distribution of isotopes within amino acids. The resulting amino acid mass distributions (AMDs) are convoluted into peptide mass distributions (PMDs) during protein synthesis. With no a priori knowledge on metabolic fluxes, the PMDs are therefore unknown. This complicates labeled peptide identification because prior knowledge on PMDs is used in all available peptide identification software. An automated framework for the identification and quantification of PMDs for nonuniformly labeled samples is therefore lacking. To unlock the potential of peptide labeling experiments for high-throughput flux analysis and other complex labeling experiments, an unsupervised peptide identification and quantification method was developed that uses discrete deconvolution of mass distributions of identified peptides to inform on the mass distributions of otherwise unidentifiable peptides. Uniformly (13)C-labeled Escherichia coli protein was used to test the developed feature reconstruction and deconvolution algorithms. The peptide identification was validated by comparing MS(2)-identified peptides to peptides identified from PMDs using unlabeled E. coli protein. Nonuniformly labeled Glycine max protein was used to demonstrate the technology on a representative sample suitable for flux analysis. Overall, automatic peptide identification and quantification were comparable or superior to manual extraction, enabling proteomics-based technology for high-throughput flux analysis studies. PMID:27145348
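
    The convolution of amino acid mass distributions into a peptide mass distribution described above is an ordinary discrete convolution over integer mass shifts; a minimal sketch with made-up residue distributions:

    ```python
    # Sketch: convolve per-residue mass (isotopologue) distributions into a
    # peptide mass distribution. Distributions are indexed by extra neutrons
    # (mass shift in integer Da) and are illustrative, not measured values.
    import numpy as np

    # P(extra neutrons) for two hypothetical partially 13C-labeled residues.
    amd = {
        "G": np.array([0.70, 0.25, 0.05]),        # glycine-like
        "A": np.array([0.60, 0.30, 0.08, 0.02]),  # alanine-like
    }

    def peptide_mass_distribution(sequence):
        pmd = np.array([1.0])                     # delta at zero mass shift
        for residue in sequence:
            pmd = np.convolve(pmd, amd[residue])  # distributions of sums convolve
        return pmd / pmd.sum()

    pmd = peptide_mass_distribution("GAG")
    for shift, p in enumerate(pmd):
        print(f"+{shift} Da: {p:.4f}")
    ```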

  10. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. ((orig.))

  11. Automatic Detection of Childhood Absence Epilepsy Seizures: Toward a Monitoring Device

    DEFF Research Database (Denmark)

    Duun-Henriksen, Jonas; Madsen, Rasmus E.; Remvig, Line S.;

    2012-01-01

    Automatic detections of paroxysms in patients with childhood absence epilepsy have been neglected for several years. We acquire reliable detections using only a single-channel brainwave monitor, allowing for unobtrusive monitoring of antiepileptic drug effects. Ultimately we seek to obtain optimal...... long-term prognoses, balancing antiepileptic effects and side effects. The electroencephalographic appearance of paroxysms in childhood absence epilepsy is fairly homogeneous, making it feasible to develop patient-independent automatic detection. We implemented a state-of-the-art algorithm to...

  12. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append

  13. A versatile automatic TLD system under development

    International Nuclear Information System (INIS)

    This paper describes an automatic TLD personnel monitoring system intended to replace the film badges used by the Radiological Service Unit TNO. The basis of the system is a versatile automatic TLD reader in which the detectors are heated with hot nitrogen gas. After a short description of the reader and some experimental results, a prototype of a TLD badge designed for automatic processing is presented. In this badge - which is waterproof and cannot be opened by the wearer - up to four detectors can be mounted, covered by appropriate filters. A variety of TLDs may be used, such as discs, hot-pressed chips and rods. The system is completed with a finger ring dosemeter, the detector holder of which can be separated from the ring for other applications (e.g. dosimetry in X-ray diagnostics). (author)

  14. Fault injection system for automatic testing system

    Institute of Scientific and Technical Information of China (English)

    王胜文; 洪炳熔

    2003-01-01

    Considering the deficiency of the means for confirming the attribution of fault redundancy in research on Automatic Testing Systems (ATS), a fault injection system has been proposed to study the fault redundancy of automatic testing systems through comparison. By means of a fault-embedded environmental simulation, faults are injected at the input level of the software under test. These faults may induce inherent failure modes, thus bringing about unexpected output, and the anticipated goal of the test is attained. The fault injection system consists of a voltage signal generator, a current signal generator and a rear drive circuit, which are specially developed, and the ATS can work regularly by means of software simulation. The experimental results indicate that the fault injection system can find deficiencies in the automatic testing software and identify the preference of fault redundancy. On the other hand, some software deficiencies never exposed before can be identified by analyzing the testing results.

  15. Support vector machine for automatic pain recognition

    Science.gov (United States)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects the face in each stored video frame using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of an automatic pain recognition system.
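
    A minimal sketch of the classification stage, shape features fed to an SVM, is shown below, with synthetic features standing in for the face-detection and feature-computation steps (the actual feature definitions are not reproduced here):

    ```python
    # Sketch: classify "pain" vs "no pain" from facial location/shape features
    # with an SVM. Features are synthetic placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    n_frames, n_features = 400, 12          # e.g. eyebrow/mouth landmark distances
    X = rng.normal(size=(n_frames, n_features))
    y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.normal(size=n_frames) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
    ```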

  16. AUTOMATIC WEB SCRAPPING USING VISUAL SELECTORS

    Directory of Open Access Journals (Sweden)

    Rashmi Bhosale

    2015-12-01

    Full Text Available The amount of information that is currently available on the net grows at a very fast pace; thus the web can be considered the largest knowledge repository ever developed and made available to the public. A web data extraction system is a system that extracts data from web pages automatically. Web data analysis applications, such as extracting mutual fund information from a website or extracting the daily opening and closing prices of a stock from a web page, involve web data extraction. Early techniques constructed wrappers to visit those sites and collect data, which is time consuming. Thus a technique called Automatic Web Scrapping Using Visual Selectors (AWSUVS) is proposed. For selected data sections, AWSUVS discovers extraction patterns automatically. AWSUVS uses visual cues to identify data records while ignoring noise items such as advertisements and navigation bars.

  17. Research on an Intelligent Automatic Turning System

    Directory of Open Access Journals (Sweden)

    Lichong Huang

    2012-12-01

    Full Text Available The equipment manufacturing industry is a strategic industry of a country, and its core part is the CNC machine tool. Therefore, enhancing independent research on the relevant technology of CNC machines, especially the open CNC system, is of great significance. This paper presents some key techniques of an Intelligent Automatic Turning System and gives a viable solution for system integration. First of all, the integrated system architecture and the flexible and efficient workflow for performing the intelligent automatic turning process are illustrated. Secondly, innovative methods for workpiece feature recognition and expression and for process planning of the NC machining are put forward. Thirdly, the cutting tool auto-selection and cutting parameter optimization solutions are generated with an integrated inference of rule-based reasoning and case-based reasoning. Finally, an actual machining case based on the developed intelligent automatic turning system proved that the presented solutions are valid, practical and efficient.

  18. An Automatic Hierarchical Delay Analysis Tool

    Institute of Scientific and Technical Information of China (English)

    Farid Mheir-El-Saadi; Bozena Kaminska

    1994-01-01

    The performance analysis of VLSI integrated circuits (ICs) with flat tools is slow and sometimes even impossible to complete. Some hierarchical tools have been developed to speed up the analysis of these large ICs. However, these hierarchical tools suffer from poor interaction with the CAD database and poorly automated operations. We introduce a general hierarchical framework for performance analysis to solve these problems. The circuit analysis is automatic under the proposed framework. Information that has been automatically abstracted in the hierarchy is kept in database properties along with the topological information. A limited software implementation of the framework, PREDICT, has also been developed to analyze delay performance. Experimental results show that hierarchical analysis CPU time and memory requirements are low if heuristics are used during the abstraction process.

  19. Automatic Gain Control in Compact Spectrometers.

    Science.gov (United States)

    Protopopov, Vladimir

    2016-03-01

    An image intensifier installed in the optical path of a compact spectrometer may act not only as a fast gating unit, which is widely used for time-resolved measurements, but also as a variable attenuator-amplifier in continuous wave mode. This opens the possibility of automatic gain control, a new feature in spectroscopy. With it, the user is relieved of the need to manually adjust the signal level to a certain value; this is done automatically by means of an electronic feedback loop. More importantly, automatic gain control is performed without changing the exposure time, which is an additional benefit in time-resolved experiments. The concept, algorithm, design considerations, and experimental results are presented. PMID:26810181
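
    The feedback idea can be illustrated with a toy control loop that adjusts the intensifier gain until the peak signal reaches a setpoint, leaving exposure time untouched; the detector model, setpoint and update rule below are assumptions, not the instrument's algorithm:

    ```python
    # Sketch of an automatic gain control loop: adjust the image-intensifier gain
    # so that the peak signal tracks a setpoint without changing exposure time.
    def measure_peak_signal(gain, true_intensity=0.2, full_scale=65535):
        # Hypothetical detector response: counts proportional to gain, clipped.
        return min(int(gain * true_intensity * 1000), full_scale)

    def automatic_gain_control(setpoint=40000, gain=10.0, n_steps=12):
        for step in range(n_steps):
            signal = measure_peak_signal(gain)
            # Multiplicative feedback: move the gain toward the setpoint ratio.
            gain *= (setpoint / max(signal, 1)) ** 0.5
            print(f"step {step:2d}: gain = {gain:7.1f}, peak counts = {signal}")
        return gain

    automatic_gain_control()
    ```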

  20. Automatic inference of indexing rules for MEDLINE

    Directory of Open Access Journals (Sweden)

    Shooshan Sonya E

    2008-11-01

    Full Text Available Abstract Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI, a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.

  1. Oocytes Polar Body Detection for Automatic Enucleation

    Directory of Open Access Journals (Sweden)

    Di Chen

    2016-02-01

    Full Text Available Enucleation is a crucial step in cloning. In order to achieve automatic blind enucleation, we should detect the polar body of the oocyte automatically. Conventional polar body detection approaches have a low success rate or low efficiency. We propose a polar body detection method based on machine learning in this paper. On one hand, an improved Histogram of Oriented Gradient (HOG) algorithm is employed to extract features of polar body images, which increases the success rate. On the other hand, a position prediction method is put forward to narrow the search range of the polar body, which improves efficiency. Experimental results show that the success rate is 96% for various types of polar bodies. Furthermore, the method is applied to an enucleation experiment and improves the degree of automatic enucleation.
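
    A minimal sketch of the feature-plus-classifier stage is given below, using a standard HOG descriptor and a linear SVM on synthetic patches; the paper's improved HOG variant and position-prediction step are not reproduced:

    ```python
    # Sketch: extract HOG features from candidate image patches and train a
    # linear SVM to flag patches containing a polar body. Patches are synthetic.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(4)

    def synthetic_patch(has_polar_body):
        patch = rng.normal(0.5, 0.05, size=(64, 64))
        if has_polar_body:
            rr, cc = np.ogrid[:64, :64]
            patch[(rr - 32) ** 2 + (cc - 32) ** 2 < 8 ** 2] += 0.4  # bright blob
        return np.clip(patch, 0, 1)

    labels = rng.integers(0, 2, size=100)
    features = [hog(synthetic_patch(bool(l)), orientations=9,
                    pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for l in labels]

    clf = LinearSVC().fit(features, labels)
    print("training accuracy:", clf.score(features, labels))
    ```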

  2. Automatic detection of microcalcifications in mammography using a neuromimetic system based on retina.

    Science.gov (United States)

    Vibert, Jean-François; Valleron, Alain-jacques

    2003-01-01

    The incidence of breast cancer in France is roughly 26,000 cases per year and the annual number of deaths is 11,000. Mammography is the examination of choice for the early identification of tumours in an asymptomatic population. It is a simple, reliable, inexpensive examination that allows the identification of a serious and frequent pathology, one that can be treated effectively if detected early. Recognition of microcalcifications in mammograms is the key to early detection of cancers. Automatic detection methods have already been proposed, but they have very weak specificity and relatively low sensitivity. Currently, the eye of the expert still remains the better judge. We propose a neuromimetic method to localize microcalcifications automatically. In this method, we devise a network of formal neurons inspired by the architecture of the mammalian retina. The model mimics one characteristic of the retina: it is a sensor that automatically adapts to the characteristics of the image to be analysed, performing outline extraction and adaptive filtering of the pictures based on its network properties. The results were tested using a public standardized data set (DDSM), which was designed to test automatic detection methods. We show that our "retina" can extract most of the microcalcifications that can be grouped together in clusters. While we achieve 95% sensitivity, we must acknowledge a low specificity (22%). Current efforts focus on enhancing this latter parameter. PMID:14664051

  3. AUTOMATIC RECOGNITION OF BOTH INTER AND INTRA CLASSES OF DIGITAL MODULATED SIGNALS USING ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    JIDE JULIUS POPOOLA

    2014-04-01

    Full Text Available In radio communication systems, signal modulation format recognition is a significant characteristic used in radio signal monitoring and identification. Over the past few decades, modulation formats have become increasingly complex, which has led to the problem of how to accurately and promptly recognize a modulation format. In addressing these challenges, the development of automatic modulation recognition systems that can classify a radio signal’s modulation format has received worldwide attention. Decision-theoretic methods and pattern recognition solutions are the two typical automatic modulation recognition approaches. While decision-theoretic approaches use probabilistic or likelihood functions, pattern recognition uses feature-based methods. This study applies the pattern recognition approach based on statistical parameters, using an artificial neural network to classify five different digital modulation formats. The paper deals with automatic recognition of both inter- and intra-classes of digitally modulated signals, in contrast to most of the existing algorithms in the literature that deal with either inter-class or intra-class modulation format recognition. The results of this study show that accurate and prompt modulation recognition is possible beyond the lower bound of 5 dB commonly claimed in the literature. The other significant contribution of this paper is the use of the Python programming language, which reduces the computational complexity that characterizes other automatic modulation recognition classifiers developed using the conventional MATLAB neural network toolbox.

  4. Wearable Automatic External Defibrillators

    Institute of Scientific and Technical Information of China (English)

    罗华杰; 罗章源; 金勋; 张蕾蕾; 王长金; 张文赞; 涂权

    2015-01-01

    Defibrillation is the most effective method of treating ventricular fibrillation (VF). This paper introduces a wearable automatic external defibrillator based on an embedded system, which includes ECG measurement, bioelectrical impedance measurement and a discharge defibrillation module; it can automatically identify the VF signal and deliver a biphasic exponential waveform defibrillation discharge. As verified by animal tests, the device can realize ECG acquisition and automatic identification; after identifying the ventricular fibrillation signal, it can automatically defibrillate to abort ventricular fibrillation and realize cardiac electrical cardioversion.

  5. Aircraft noise effects on sleep: a systematic comparison of EEG awakenings and automatically detected cardiac activations

    International Nuclear Information System (INIS)

    Polysomnography is the gold standard for investigating noise effects on sleep, but data collection and analysis are laborious and expensive. We recently developed an algorithm for the automatic identification of cardiac activations associated with cortical arousals, which uses heart rate information derived from a single electrocardiogram (ECG) channel. We hypothesized that cardiac activations can be used as estimates for EEG awakenings. Polysomnographic EEG awakenings and automatically detected cardiac activations were systematically compared using laboratory data of 112 subjects (47 male, mean ± SD age 37.9 ± 13 years), 985 nights and 23 855 aircraft noise events (ANEs). The probability of automatically detected cardiac activations increased monotonically with increasing maximum sound pressure levels of ANEs, exceeding the probability of EEG awakenings by up to 18.1%. If spontaneous reactions were taken into account, exposure–response curves were practically identical for EEG awakenings and cardiac activations. Automatically detected cardiac activations may be used as estimates for EEG awakenings. More investigations are needed to further validate the ECG algorithm in the field and to investigate inter-individual differences in its ability to predict EEG awakenings. This inexpensive, objective and non-invasive method facilitates large-scale field studies on the effects of traffic noise on sleep

  6. Semi-automatic knee cartilage segmentation

    Science.gov (United States)

    Dam, Erik B.; Folkesson, Jenny; Pettersen, Paola C.; Christiansen, Claus

    2006-03-01

    Osteo-Arthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of the cartilage breakdown is central in monitoring disease progression and therefore cartilage segmentation is required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, the automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases will be averaged out in the overall results, this reduces the mean accuracy and precision and thereby necessitates larger/longer studies. Since the severe OA cases are often most problematic for the automatic methods, there is even a risk that the quantification will introduce a bias in the results. Therefore, interactive inspection and correction of these problematic cases is desirable. For diagnosis on individuals, this is even more crucial since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method for the task of segmenting the tibial cartilage sheet from low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.
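
    The interactive step described above combines a watershed with a classifier's posterior probability map; a toy sketch of that combination, with a synthetic probability map and seed markers standing in for user input, follows:

    ```python
    # Sketch: run a watershed on the negative of a posterior probability map from
    # a voxel classifier, with seed points (as user clicks) used as markers.
    import numpy as np
    from skimage.segmentation import watershed

    rng = np.random.default_rng(5)
    prob = np.zeros((128, 128))
    rr, cc = np.ogrid[:128, :128]
    prob[((rr - 64) ** 2 + (cc - 40) ** 2) < 20 ** 2] = 0.9     # cartilage-like blob
    prob += 0.05 * rng.random(prob.shape)

    # Markers: one seed inside the structure, one in the background.
    markers = np.zeros_like(prob, dtype=int)
    markers[64, 40] = 1       # foreground seed
    markers[5, 5] = 2         # background seed

    labels = watershed(-prob, markers)
    print("foreground voxels:", int((labels == 1).sum()))
    ```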

  7. Development of automatic laser welding system

    International Nuclear Information System (INIS)

    Lasers are a new production tool for high-speed, low-distortion welding, and applications to automatic welding lines are increasing. IHI has long experience of laser processing for the preservation of nuclear power plants, welding of airplane engines and so on. Moreover, YAG laser oscillators and various kinds of hardware have been developed for laser welding and automation. Combining these welding technologies and laser hardware technologies produces the automatic laser welding system. In this paper, the component technologies are described, including combined optics intended to improve welding stability, laser oscillators, monitoring system, seam tracking system and so on. (author)

  8. Automatic Keyword Extraction from Individual Documents

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.; Cowley, Wendy E.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
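
    A condensed sketch of this style of stopword-delimited keyword extraction is shown below; the stop list and scoring are simplified relative to the published method:

    ```python
    # Condensed sketch: candidate phrases are runs of content words between stop
    # words, and each phrase is scored by the sum of its words' degree/frequency
    # ratios (degree counts co-occurrence within candidate phrases).
    import re
    from collections import defaultdict

    STOP_WORDS = {"a", "an", "and", "for", "from", "in", "is", "of", "on",
                  "the", "this", "to", "we", "with"}

    def extract_keywords(text, top_k=5):
        words = re.findall(r"[a-zA-Z]+", text.lower())
        phrases, current = [], []
        for w in words:
            if w in STOP_WORDS:
                if current:
                    phrases.append(tuple(current))
                    current = []
            else:
                current.append(w)
        if current:
            phrases.append(tuple(current))

        freq, degree = defaultdict(int), defaultdict(int)
        for phrase in phrases:
            for w in phrase:
                freq[w] += 1
                degree[w] += len(phrase)

        scored = {p: sum(degree[w] / freq[w] for w in p) for p in set(phrases)}
        ranked = sorted(scored, key=scored.get, reverse=True)[:top_k]
        return [" ".join(p) for p in ranked]

    text = ("This paper introduces a domain-independent method for automatically "
            "extracting keywords from individual documents.")
    print(extract_keywords(text))
    ```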

  9. Automatic speech recognition a deep learning approach

    CERN Document Server

    Yu, Dong

    2015-01-01

    This book summarizes the recent advancement in the field of automatic speech recognition with a focus on discriminative and hierarchical models. This will be the first automatic speech recognition book to include a comprehensive coverage of recent developments such as conditional random field and deep learning techniques. It presents insights and theoretical foundation of a series of recent models such as conditional random field, semi-Markov and hidden conditional random field, deep neural network, deep belief network, and deep stacking models for sequential learning. It also discusses practical considerations of using these models in both acoustic and language modeling for continuous speech recognition.

  10. The Ballistic Flight of an Automatic Duck

    Directory of Open Access Journals (Sweden)

    Fabienne Collignon

    2012-10-01

    Full Text Available This article analyses Jacques de Vaucanson's automatic duck and its successive appearances in Thomas Pynchon's work (both Mason & Dixon and, by extension, Gravity's Rainbow) to discuss the correlations between (self-)evolving technologies and space age gadgets. The Cold War serves, therefore, as the frame of reference for this article, which is further preoccupied with the geographical positions that automatons or prototype cyborgs occupy: the last part of the essay analyses Walter Benjamin's Arcades Project, where mechanical hens stand at the entrance to dreamworlds. Automatic fowl guard, and usher into being, new technologised worlds.

  11. Automatic malware analysis an emulator based approach

    CERN Document Server

    Yin, Heng

    2012-01-01

    Malicious software (i.e., malware) has been a severe threat to interconnected computer systems for decades and has caused billions of dollars in damages each year. A large volume of new malware samples are discovered daily. Even worse, malware is rapidly evolving, becoming more sophisticated and evasive in order to strike against current malware analysis and defense systems. Automatic Malware Analysis presents a virtualized malware analysis framework that addresses common challenges in malware analysis. With regard to this new analysis framework, a series of analysis techniques for automatic malware analy

  12. Automatic emotional expression analysis from eye area

    Science.gov (United States)

    Akkoç, Betül; Arslan, Ahmet

    2015-02-01

    Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on the features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis through discrete wavelet transformation were obtained from the eye area. Using these parameters, emotional expression analysis was performed through artificial intelligence techniques. As a result of the experimental studies, 6 universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified at a success rate of 84% using artificial neural networks.

  13. Automatic and strategic processes in advertising effects

    DEFF Research Database (Denmark)

    Grunert, Klaus G.

    1996-01-01

    , and can easily be adapted to situational circumstances. Both the perception of advertising and the way advertising influences brand evaluation involve both processes. Automatic processes govern the recognition of advertising stimuli, the relevance decision which determines further higher-level processing...... are at variance with current notions about advertising effects. For example, the attention span problem will be relevant only for strategic processes, not for automatic processes, a certain amount of learning can occur with very little conscious effort, and advertising's effect on brand evaluation may be more stable...

  14. Automatic segmentation of mammogram and tomosynthesis images

    Science.gov (United States)

    Sargent, Dusty; Park, Sun Young

    2016-03-01

    Breast cancer is a one of the most common forms of cancer in terms of new cases and deaths both in the United States and worldwide. However, the survival rate with breast cancer is high if it is detected and treated before it spreads to other parts of the body. The most common screening methods for breast cancer are mammography and digital tomosynthesis, which involve acquiring X-ray images of the breasts that are interpreted by radiologists. The work described in this paper is aimed at optimizing the presentation of mammography and tomosynthesis images to the radiologist, thereby improving the early detection rate of breast cancer and the resulting patient outcomes. Breast cancer tissue has greater density than normal breast tissue, and appears as dense white image regions that are asymmetrical between the breasts. These irregularities are easily seen if the breast images are aligned and viewed side-by-side. However, since the breasts are imaged separately during mammography, the images may be poorly centered and aligned relative to each other, and may not properly focus on the tissue area. Similarly, although a full three dimensional reconstruction can be created from digital tomosynthesis images, the same centering and alignment issues can occur for digital tomosynthesis. Thus, a preprocessing algorithm that aligns the breasts for easy side-by-side comparison has the potential to greatly increase the speed and accuracy of mammogram reading. Likewise, the same preprocessing can improve the results of automatic tissue classification algorithms for mammography. In this paper, we present an automated segmentation algorithm for mammogram and tomosynthesis images that aims to improve the speed and accuracy of breast cancer screening by mitigating the above mentioned problems. Our algorithm uses information in the DICOM header to facilitate preprocessing, and incorporates anatomical region segmentation and contour analysis, along with a hidden Markov model (HMM) for

  15. Semi-automatic parcellation of the corpus striatum

    Science.gov (United States)

    Al-Hakim, Ramsey; Nain, Delphine; Levitt, James; Shenton, Martha; Tannenbaum, Allen

    2007-03-01

    The striatum is the input component of the basal ganglia from the cerebral cortex. It includes the caudate, putamen, and nucleus accumbens. Thus, the striatum is an important component in limbic frontal-subcortical circuitry and is believed to be relevant both for reward-guided behaviors and for the expression of psychosis. The dorsal striatum is composed of the caudate and putamen, both of which are further subdivided into pre- and post-commissural components. The ventral striatum (VS) is primarily composed of the nucleus accumbens. The striatum can be functionally divided into three broad regions: 1) a limbic; 2) a cognitive and 3) a sensorimotor region. The approximate corresponding anatomic subregions for these 3 functional regions are: 1) the VS; 2) the pre/post-commissural caudate and the pre-commissural putamen and 3) the post-commissural putamen. We believe assessing these subregions, separately, in disorders with limbic and cognitive impairment such as schizophrenia may yield more informative group differences in comparison with normal controls than prior parcellation strategies of the striatum such as assessing the caudate and putamen. The manual parcellation of the striatum into these subregions is currently defined using certain landmark points and geometric rules. Since identification of these areas is important to clinical research, a reliable and fast parcellation technique is required. Currently, only full manual parcellation using editing software is available; however, this technique is extremely time intensive. Previous work has shown successful application of heuristic rules into a semi-automatic platform [1]. We present here a semi-automatic algorithm which implements the rules currently used for manual parcellation of the striatum, but requires minimal user input and significantly reduces the time required for parcellation.

  16. Automatic segmentation of maxillofacial cysts in cone beam CT images.

    Science.gov (United States)

    Abdolali, Fatemeh; Zoroofi, Reza Aghaeizadeh; Otake, Yoshito; Sato, Yoshinobu

    2016-05-01

    Accurate segmentation of cysts and tumors is an essential step for diagnosis, monitoring and planning therapeutic intervention. This task is usually done manually; however, manual identification and segmentation are tedious. In this paper, an automatic method based on asymmetry analysis is proposed which is general enough to segment various types of jaw cysts. The key observation underlying this approach is that normal head and face structure is roughly symmetric with respect to the midsagittal plane: the left part and the right part can be divided equally by an axis of symmetry. Cysts and tumors typically disturb this symmetry. The proposed approach consists of three main steps as follows: At first, diffusion filtering is used for preprocessing and the symmetry axis is detected. Then, each image is divided into two parts. In the second stage, free form deformation (FFD) is used to correct slight displacement of corresponding pixels of the left part and a reflected copy of the right part. In the final stage, intensity differences are analyzed and a number of constraints are enforced to remove false positive regions. The proposed method has been validated on 97 Cone Beam Computed Tomography (CBCT) sets containing various jaw cysts which were collected from various image acquisition centers. Validation is performed using three similarity indicators (Jaccard index, Dice's coefficient and Hausdorff distance). The mean Dice's coefficient of 0.83, 0.87 and 0.80 is achieved for Radicular, Dentigerous and KCOT classes, respectively. For most of the experiments done, we achieved a high true positive (TP) rate. This means that a large number of cyst pixels are correctly classified. Quantitative results of automatic segmentation show that the proposed method is more effective than one of the recent methods in the literature. PMID:27035862
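
    The key observation lends itself to a very small illustration: mirror one half of a slice about the symmetry axis, subtract, and flag large differences. The sketch below omits symmetry-axis detection, free form deformation and the false-positive constraints of the paper, and uses a synthetic slice:

    ```python
    # Sketch of the core asymmetry idea: compare one half of an axial slice with
    # the mirrored other half and flag large intensity differences as candidates.
    import numpy as np

    rng = np.random.default_rng(6)
    slice_img = rng.normal(100, 5, size=(128, 128))
    slice_img[:, 64:] = slice_img[:, :64][:, ::-1]          # make it symmetric
    slice_img[60:75, 90:105] -= 40                           # simulate a cyst on one side

    left = slice_img[:, :64]
    right_mirrored = slice_img[:, 64:][:, ::-1]

    diff = np.abs(left - right_mirrored)
    candidates = diff > 3 * diff.std()                       # simple asymmetry threshold
    print("asymmetric pixels flagged:", int(candidates.sum()))
    ```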

  17. Automatic Synchronization as the Element of a Power System's Anti-Collapse Complex

    Science.gov (United States)

    Barkāns, J.; Žalostība, D.

    2008-01-01

    In the work, a new universal technical solution is proposed for blackout prevention in a power system, which combines the means for its optimal short-term sectioning and automatic self-restoration to normal conditions. The key element of self-restoration is automatic synchronization. The authors show that for this purpose it is possible to use automatic re-closing with a device for synchronism-check. The results of computations, with simplified formulas and a relevant mathematical model employed, indicate the area of application for this approach. The proposed solution has been created based on many-year experience in the liquidation of emergencies and on the potentialities of equipment, taking into account new features of blackout development that have come into being recently.

  18. Component protection based automatic control

    International Nuclear Information System (INIS)

    Control and safety systems as well as operation procedures are designed on the basis of critical process parameters limits. The expectation is that short and long term mechanical damage and process failures will be avoided by operating the plant within the specified constraints envelopes. In this paper, one of the Advanced Liquid Metal Reactor (ALMR) design duty cycles events is discussed to corroborate that the time has come to explicitly make component protection part of the control system. Component stress assessment and aging data should be an integral part of the control system. Then transient trajectory planning and operating limits could be aimed at minimizing component specific and overall plant component damage cost functions. The impact of transients on critical components could then be managed according to plant lifetime design goals. The need for developing methodologies for online transient trajectory planning and assessment of operating limits in order to facilitate the explicit incorporation of damage assessment capabilities to the plant control and protection systems is discussed. 12 refs

  19. Whale Identification

    Science.gov (United States)

    1991-01-01

    R:BASE for DOS, a computer program developed under NASA contract, has been adapted by the National Marine Mammal Laboratory and the College of the Atlantic to provide an advanced computerized photo matching technique for identification of humpback whales. The program compares photos with stored digitized descriptions, enabling researchers to track and determine distribution and migration patterns. R:BASE is a spinoff of RIM (Relational Information Manager), which was used to store data for analyzing heat shielding tiles on the Space Shuttle Orbiter. It is now the world's second largest selling line of microcomputer database management software.

  20. Prejudice and perception: the role of automatic and controlled processes in misperceiving a weapon.

    Science.gov (United States)

    Payne, B K

    2001-08-01

    Two experiments used a priming paradigm to investigate the influence of racial cues on the perceptual identification of weapons. In Experiment 1, participants identified guns faster when primed with Black faces compared with White faces. In Experiment 2, participants were required to respond quickly, causing the racial bias to shift from reaction time to accuracy. Participants misidentified tools as guns more often when primed with a Black face than with a White face. L. L. Jacoby's (1991) process dissociation procedure was applied to demonstrate that racial primes influenced automatic (A) processing, but not controlled (C) processing. The response deadline reduced the C estimate but not the A estimate. The motivation to control prejudice moderated the relationship between explicit prejudice and automatic bias. Implications are discussed on applied and theoretical levels. PMID:11519925
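
    For reference, one common formulation of the process dissociation estimates used in this paradigm can be computed directly from congruent-trial accuracy and incongruent-trial errors; the numbers below are illustrative, not the study's data:

    ```python
    # Sketch of process dissociation estimates (one common formulation):
    # P(correct | congruent)  = C + A * (1 - C)   (control, or bias when control fails)
    # P(error   | incongruent) = A * (1 - C)      (bias drives errors only when control fails)
    def process_dissociation(p_correct_congruent, p_error_incongruent):
        c = p_correct_congruent - p_error_incongruent               # controlled estimate
        a = p_error_incongruent / (1 - c) if c < 1 else float("nan")  # automatic estimate
        return c, a

    c, a = process_dissociation(p_correct_congruent=0.80, p_error_incongruent=0.30)
    print(f"C (controlled) = {c:.2f}, A (automatic) = {a:.2f}")
    ```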