WorldWideScience

Sample records for automatic term identification

  1. Automatic term identification for bibliometric mapping

    NARCIS (Netherlands)

    N.J.P. van Eck (Nees Jan); L. Waltman (Ludo); E.C.M. Noyons (Ed); R.K. Buter (Reindert)

    2010-01-01

    textabstractA term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subject

  2. Automatic Term Identification for Bibliometric Mapping

    NARCIS (Netherlands)

    N.J.P. van Eck (Nees Jan); L. Waltman (Ludo); E.C.M. Noyons (Ed); R.K. Buter (Reindert)

    2008-01-01

    textabstractA term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subject

  3. Automatic Identification and Organization of Index Terms for Interactive Browsing.

    Science.gov (United States)

    Wacholder, Nina; Evans, David K.; Klavans, Judith L.

    The potential of automatically generated indexes for information access has been recognized for several decades, but the quantity of text and the ambiguity of natural language processing have made progress at this task more difficult than was originally foreseen. Recently, a body of work on development of interactive systems to support phrase…

  4. Automatic identification and classification of muscle spasms in long-term EMG recordings.

    Science.gov (United States)

    Winslow, Jeffrey; Martinez, Adriana; Thomas, Christine K

    2015-03-01

    Spinal cord injured (SCI) individuals may be afflicted by spasticity, a condition in which involuntary muscle spasms are common. EMG recordings can be analyzed to quantify this symptom of spasticity, but manual identification and classification of spasms are time consuming. Here, an algorithm was created to find and classify spasm events automatically within 24-h recordings of EMG. The algorithm used expert rules and time-frequency techniques to classify spasm events as tonic, unit, or clonus spasms. A companion graphical user interface (GUI) program was also built to verify and correct the results of the automatic algorithm or manually defined events. Eight-channel EMG recordings were made from seven different SCI subjects. The algorithm was able to correctly identify an average (±SD) of 94.5 ± 3.6% of spasm events and correctly classify 91.6 ± 1.9% of spasm events, with an accuracy of 61.7 ± 16.2%. The accuracy improved to 85.5 ± 5.9% and the false positive rate decreased to 7.1 ± 7.3% if noise events between spasms were removed. On average, the algorithm was more than 11 times faster than manual analysis. Together, the algorithm and the GUI program provide a powerful tool for characterizing muscle spasms in 24-h EMG recordings, information that is important for clinical management of spasticity.
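The event-finding step described above can be sketched in a few lines. The following is a minimal illustration only (a moving-RMS amplitude threshold with parameter choices of my own), not the authors' expert-rule and time-frequency algorithm:

```python
def moving_rms(signal, window):
    """Moving root-mean-square envelope of an EMG signal."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half):i + half + 1]
        out.append((sum(x * x for x in seg) / len(seg)) ** 0.5)
    return out

def detect_events(signal, window=5, k=3.0):
    """Flag samples whose RMS envelope exceeds k times a robust baseline
    (the median envelope). Returns (start, end) index pairs of events."""
    env = moving_rms(signal, window)
    baseline = sorted(env)[len(env) // 2]  # median as a robust baseline
    active = [e > k * baseline for e in env]
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            events.append((start, i - 1))
            start = None
    if start is not None:
        events.append((start, len(active) - 1))
    return events
```

On a synthetic trace with a quiet baseline and one burst, `detect_events` returns a single event spanning the burst (widened slightly by the RMS window).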

  5. Automatic Kurdish Dialects Identification

    Directory of Open Access Journals (Sweden)

    Hossein Hassani

    2016-02-01

    Full Text Available Automatic dialect identification is a necessary language technology for processing multi-dialect languages in which the dialects are linguistically far from each other. It becomes particularly crucial where the dialects are mutually unintelligible: to perform computational activities on these languages, the system needs to identify the dialect that is the subject of the process. The Kurdish language encompasses various dialects. It is written using several different scripts, and the language lacks a standard orthography. This situation makes Kurdish dialect identification both more interesting and more necessary, from the research as well as the application perspective. In this research, we have applied a classification method, based on supervised machine learning, to identify the dialects of Kurdish texts. The research has focused on the two most widely spoken and dominant Kurdish dialects, namely Kurmanji and Sorani. The approach could be applied to the other Kurdish dialects as well. The method is also applicable to languages that are similar to Kurdish in their dialectal diversity and differences.

  6. Automatic Identification of Metaphoric Utterances

    Science.gov (United States)

    Dunn, Jonathan Edwin

    2013-01-01

    This dissertation analyzes the problem of metaphor identification in linguistic and computational semantics, considering both manual and automatic approaches. It describes a manual approach to metaphor identification, the Metaphoricity Measurement Procedure (MMP), and compares this approach with other manual approaches. The dissertation then…

  7. Toward the Automatic Identification of Sublanguage Vocabulary.

    Science.gov (United States)

    Haas, Stephanie W.; He, Shaoyi

    1993-01-01

    Describes the development of a method for the automatic identification of sublanguage vocabulary words as they occur in abstracts. Highlights include research relating to sublanguages and their vocabulary; domain terms; evaluation criteria, including recall and precision; and implications for natural language processing and information retrieval.…

  8. Automatic Language Identification

    Science.gov (United States)

    2000-08-01

    Phonology. A "phoneme" is an underlying mental representation of a phonological unit in a language; a "phone" is a realization of an acoustic-phonetic unit or segment, the actual sound produced. HMM-based language identification operating on phonetic transcriptions (sequences of symbols representing phones) was first proposed by House and Neuburg [17]. Savic

  9. Automatic sign language identification

    OpenAIRE

    Gebre, B.G.; Wittenburg, P.; Heskes, T.

    2013-01-01

    We propose a Random-Forest based sign language identification system. The system uses low-level visual features and is based on the hypothesis that sign languages have varying distributions of phonemes (hand-shapes, locations and movements). We evaluated the system on two sign languages -- British SL and Greek SL, both taken from a publicly available corpus, called Dicta Sign Corpus. Achieved average F1 scores are about 95% - indicating that sign languages can be identified with high accuracy...
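As a toy illustration of the stated hypothesis (that sign languages differ in their distributions of phonemes: hand-shapes, locations and movements), the sketch below classifies a normalized feature histogram by nearest centroid. The actual system uses Random Forests on low-level visual features; all names and data here are invented:

```python
def normalize(hist):
    """Scale a count histogram so its entries sum to 1."""
    total = float(sum(hist))
    return [h / total for h in hist]

def train_centroids(samples):
    """samples: {language: [histogram, ...]} -> per-language mean histogram."""
    centroids = {}
    for lang, hists in samples.items():
        hists = [normalize(h) for h in hists]
        n = len(hists)
        centroids[lang] = [sum(col) / n for col in zip(*hists)]
    return centroids

def identify(hist, centroids):
    """Return the language whose centroid is closest in L1 distance."""
    hist = normalize(hist)
    return min(centroids,
               key=lambda lang: sum(abs(a - b)
                                    for a, b in zip(hist, centroids[lang])))
```

A video whose feature distribution skews toward the first bin is assigned to the language trained on similarly skewed samples.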

  10. 2014 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2014 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  11. 2010 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2010 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  12. Automatic identification for standing tree limb pruning

    Institute of Scientific and Technical Information of China (English)

    Sun Renshan; Li Wenbin; Tian Yongchen; Hua Li

    2006-01-01

    To meet the demand of automatic pruning machines, this paper presents a new method for dynamic automatic identification of standing tree limbs and capture of digital images of Platycladus orientalis. Methods from computer vision, image processing and wavelet analysis were used to compress, filter, segment and denoise the pictures and to capture their outlines. We then present the algorithm for dynamic automatic identification of standing tree limbs, extracting basic growth characteristics of the standing trees such as form, size, degree of bending and relative spatial position. We use pattern recognition technology to confirm the proportional relationship matching the database, and thus achieve the goal of dynamic automatic identification of standing tree limbs.

  13. Intelligent Storage System Based on Automatic Identification

    Directory of Open Access Journals (Sweden)

    Kolarovszki Peter

    2014-09-01

    Full Text Available This article describes RFID technology in conjunction with warehouse management systems (WMS). It also deals with automatic identification and data capture (AIDC) technologies and the individual processes used in a warehouse management system, from the entry of goods out of production, through identification of goods, palletizing, storing and bin transfer, to the removal of goods from the warehouse. The article focuses on utilizing AMP middleware in WMS processes. Nowadays, the identification of goods in most warehouses is carried out through barcodes; in this article we specify how the processes described above can instead be handled through RFID technology. All results are verified by measurements in our AIDC laboratory at the University of Žilina and in the Laboratory of Automatic Identification of Goods and Services located at GS1 Slovakia. The results of our research bring a new point of view and indicate ways of using RFID technology in warehouse management systems.

  14. An efficient automatic firearm identification system

    Science.gov (United States)

    Chuan, Zun Liang; Liong, Choong-Yeun; Jemain, Abdul Aziz; Ghani, Nor Azura Md.

    2014-06-01

    Automatic firearm identification system (AFIS) is highly demanded in forensic ballistics to replace the traditional approach which uses comparison microscope and is relatively complex and time consuming. Thus, several AFIS have been developed for commercial and testing purposes. However, those AFIS are still unable to overcome some of the drawbacks of the traditional firearm identification approach. The goal of this study is to introduce another efficient and effective AFIS. A total of 747 firing pin impression images captured from five different pistols of same make and model are used to evaluate the proposed AFIS. It was demonstrated that the proposed AFIS is capable of producing firearm identification accuracy rate of over 95.0% with an execution time of less than 0.35 seconds per image.

  15. Automatic Palette Identification of Colored Graphics

    Science.gov (United States)

    Lacroix, Vinciane

    The median-shift, a new clustering algorithm, is proposed to automatically identify the palette of colored graphics, a prerequisite for graphics vectorization. The median-shift is an iterative process which shifts each data point to the "median" point of its neighborhood, defined by a distance measure and a maximum radius, the only parameter of the method. The process is viewed as a graph transformation which converges to a set of clusters made of one or several connected vertices. As palette identification depends on color perception, the clustering is performed in the L*a*b* feature space. As pixels located on edges are made of mixed colors not expected to be part of the palette, they are removed from the initial data set by an automatic pre-processing step. Results are shown on scanned maps and on the Macbeth color chart, and compared to well-established methods.
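The iteration described above (shift each point to the median of its neighborhood until nothing moves) can be sketched on scalar values for brevity; the paper itself works on colors in the L*a*b* space with a perceptual distance:

```python
def median(vals):
    """Median of a non-empty list (upper median for even lengths)."""
    s = sorted(vals)
    return s[len(s) // 2]

def median_shift(points, radius, max_iter=50):
    """Iteratively move each point to the median of its neighborhood
    (points within `radius`) until convergence. The distinct final
    values are the cluster centers, i.e. the recovered palette."""
    pts = list(points)
    for _ in range(max_iter):
        new = [median([q for q in pts if abs(q - p) <= radius]) for p in pts]
        if new == pts:
            break
        pts = new
    return pts
```

Three well-separated groups of values collapse to three distinct centers, one per palette entry.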

  16. On the advances of automatic modal identification for SHM

    Directory of Open Access Journals (Sweden)

    Cardoso Rharã

    2015-01-01

    Full Text Available Structural health monitoring of civil infrastructure has great practical importance for engineers, owners and stakeholders. Numerous studies have been carried out using long-term monitoring, for instance on the Rio-Niterói Bridge in Brazil, the former Z24 Bridge in Switzerland and the Millau Bridge in France, among others. In fact, some structures are monitored 24/7 in order to supply dynamic measurements that can be used for the identification of structural problems such as the presence of cracks, excessive vibration or damage, or even to perform a quite extensive structural evaluation concerning reliability and life cycle. The outputs of such an analysis, commonly called modal identification, are the so-called modal parameters, i.e. natural frequencies, damping ratios and mode shapes. Therefore, the development and validation of tools for the automatic identification of modal parameters based on the structural responses during normal operation is fundamental, as the success of subsequent damage detection algorithms depends on the accuracy of the modal parameter estimates. The proposed methodology uses the data-driven stochastic subspace identification method (SSI-DATA), complemented by a novel procedure developed for the automatic analysis of the stabilization diagrams provided by the SSI-DATA method. The efficiency of the proposed approach is attested via experimental investigations on a simply supported beam tested in the laboratory and on a motorway bridge.
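Automatic analysis of a stabilization diagram can be illustrated by its simplest ingredient: a pole frequency that recurs, within a tolerance, across many model orders is likely a physical mode, while spurious numerical poles do not repeat. This is a hedged sketch of that idea only, not the paper's procedure (which also checks damping and mode-shape stability):

```python
def stable_frequencies(freqs_by_order, tol=0.01, min_count=3):
    """freqs_by_order: list of identified-frequency lists, one per model
    order. Keep a frequency as a physical mode if values within relative
    tolerance `tol` appear at >= min_count model orders."""
    all_freqs = sorted(f for order in freqs_by_order for f in order)
    modes, cluster = [], [all_freqs[0]]
    for f in all_freqs[1:]:
        if abs(f - cluster[-1]) <= tol * cluster[-1]:
            cluster.append(f)          # still inside the current column
        else:
            if len(cluster) >= min_count:
                modes.append(sum(cluster) / len(cluster))
            cluster = [f]
    if len(cluster) >= min_count:
        modes.append(sum(cluster) / len(cluster))
    return modes
```

A frequency near 1.0 Hz identified at three model orders survives; one-off spurious poles are discarded.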

  17. Time Synchronization Module for Automatic Identification System

    Institute of Scientific and Technical Information of China (English)

    Choi Il-heung; Oh Sang-heon; Choi Dae-soo; Park Chan-sik; Hwang Dong-hwan; Lee Sang-jeong

    2003-01-01

    This paper proposed a design and implementation procedure of the Time Synchronization Module (TSM) for the Automatic Identification System (AIS). The proposed TSM module uses a Temperature Compensated Crystal Oscillator (TCXO) as a local reference clock, and consists of a Digitally Controlled Oscillator (DCO), a divider, a phase discriminator, and register blocks. The TSM measures time difference between the 1 PPS from the Global Navigation Satellite System (GNSS) receiver and the generated transmitter clock. The measured time difference is compensated by controlling the DCO and the transmit clock is synchronized to the Universal Time Coordinated (UTC). The designed TSM can also be synchronized to the reference time derived from the received message. The proposed module is tested using the experimental AIS transponder set. The experimental results show that the proposed module satisfies the functional and timing specification of the AIS technical standard, ITU-R M.1371.

  18. Statistical pattern recognition for automatic writer identification and verification

    NARCIS (Netherlands)

    Bulacu, Marius Lucian

    2007-01-01

    The thesis addresses the problem of automatic person identification using scanned images of handwriting. Identifying the author of a handwritten sample using automatic image-based methods is an interesting pattern recognition problem with direct applicability in the forensic and historic document ana

  19. Automatic handwriting identification on medieval documents

    NARCIS (Netherlands)

    Bulacu, M.L.; Schomaker, L.R.B.

    2007-01-01

    In this paper, we evaluate the performance of text-independent writer identification methods on a handwriting dataset containing medieval English documents. Applicable identification rates are achieved by combining textural features (joint directional probability distributions) with allographic feat

  20. All-optical automatic pollen identification: Towards an operational system

    Science.gov (United States)

    Crouzy, Benoît; Stella, Michelle; Konzelmann, Thomas; Calpini, Bertrand; Clot, Bernard

    2016-09-01

    We present results from the development and validation campaign of an optical pollen monitoring method based on time-resolved scattering and fluorescence. Focus is first set on supervised learning algorithms for pollen-taxa identification and on the determination of aerosol properties (particle size and shape). The identification capability provides the basis for a pre-operational automatic pollen season monitoring campaign performed in parallel to manual reference measurements (Hirst-type volumetric samplers). Airborne concentrations obtained from the automatic system are compatible with those from the manual method for total pollen, and the automatic device provides real-time data reliably (one week of interruption over five months). In addition, although the calibration dataset still needs to be completed, we are able to follow the grass pollen season. The high sampling rate of the automatic device allows us to go beyond the commonly presented daily values, and we obtain statistically significant hourly concentrations. Finally, we discuss the remaining challenges in obtaining an operational automatic monitoring system and how the generic validation environment developed for the present campaign could be used for further tests of automatic pollen monitoring devices.

  1. FORENSIC LINGUISTICS: AUTOMATIC WEB AUTHOR IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    A. A. Vorobeva

    2016-03-01

    Full Text Available The Internet is anonymous: it allows posting under a false name, on behalf of others, or simply anonymously. Thus, individuals and criminal or terrorist organizations can use the Internet for criminal purposes, hiding their identity to avoid prosecution. Existing approaches and algorithms for author identification of Russian-language web posts are not effective, so the development of proven methods, techniques and tools for author identification is an extremely important and challenging task. In this work an algorithm and software for authorship identification of web posts were developed. During the study the effectiveness of several classification and feature selection algorithms was tested. The algorithm includes several important steps: (1) feature extraction; (2) feature discretization; (3) feature selection with the most effective Relief-f algorithm (to find the feature set with the most discriminating power for each set of candidate authors and to maximize the accuracy of author identification); (4) author identification with a model based on the Random Forest algorithm. Random Forest and Relief-f are used here to identify the author of a short Russian-language text for the first time. An important step of author attribution is data preprocessing, namely discretization of continuous features; earlier this had not been applied to improve the efficiency of author identification. The software outputs the top q authors with the highest probabilities of authorship. This approach is helpful for manual analysis in forensic linguistics, when the developed tool is used to narrow the set of candidate authors. In experiments with 10 candidate authors, the real author appeared in the top 3 in 90.02% of cases and in first place in 70.5% of cases.
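As a rough illustration of profile-based authorship attribution, the sketch below ranks candidate authors by character-trigram similarity to the disputed text. The paper's actual pipeline (Relief-f feature selection plus a Random Forest classifier over many feature types) is considerably more elaborate; this is only the "narrow the candidate set" idea in miniature:

```python
from collections import Counter
import math

def trigrams(text):
    """Character-trigram frequency profile of a text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(a, b):
    """Cosine similarity between two Counter profiles."""
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_authors(unknown, corpus):
    """corpus: {author: training text}. Returns authors sorted by
    similarity, mirroring the 'top q candidate authors' output."""
    profile = trigrams(unknown)
    scores = {a: cosine(profile, trigrams(t)) for a, t in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

The first entries of the ranking are the candidates a forensic linguist would then inspect manually.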

  2. Person categorization and automatic racial stereotyping effects on weapon identification.

    Science.gov (United States)

    Jones, Christopher R; Fazio, Russell H

    2010-08-01

    Prior stereotyping research provides conflicting evidence regarding the importance of person categorization along a particular dimension for the automatic activation of a stereotype corresponding to that dimension. Experiment 1 replicated a racial stereotyping effect on object identification and examined whether it could be attenuated by encouraging categorization by age. Experiment 2 employed socially complex person stimuli and manipulated whether participants categorized spontaneously or by race. In Experiment 3, the distinctiveness of the racial dimension was manipulated by having Black females appear in the context of either Black males or White females. The results indicated that conditions fostering categorization by race consistently produced automatic racial stereotyping and that conditions fostering nonracial categorization can eliminate automatic racial stereotyping. Implications for the relation between automatic stereotype activation and dimension of categorization are discussed.

  3. Automatic seagrass pattern identification on sonar images

    Science.gov (United States)

    Rahnemoonfar, Maryam; Rahman, Abdullah

    2016-05-01

    Natural and human-induced disturbances are resulting in degradation and loss of seagrass. Freshwater flooding, severe meteorological events and invasive species are among the major natural disturbances, while human-induced disturbances are mainly due to boat propeller scars in shallow seagrass meadows and anchor scars in deeper areas. There is therefore a vital need to map seagrass ecosystems in order to determine worldwide abundance and distribution. Currently there is no established method for mapping potholes or scars in seagrass. One of the most precise sensors for mapping seagrass disturbance is side scan sonar, and here we propose an automatic method which detects seagrass potholes in sonar images. Side scan sonar images are notorious for speckle noise and uneven illumination across the image. Moreover, disturbance presents complex patterns on which most segmentation techniques will fail. In this paper, by applying mathematical morphology and calculating the local standard deviation of the image, the images were enhanced and the pothole patterns were identified. The proposed method was applied to sonar images taken from Laguna Madre in Texas. Experimental results show the effectiveness of the proposed method.
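The local-standard-deviation step mentioned in the abstract can be sketched directly. A pure-Python version over a small square neighborhood is shown below; the real pipeline also applies mathematical morphology and runs on full sonar images:

```python
def local_std(image, radius=1):
    """Per-pixel standard deviation over a (2*radius+1)^2 neighborhood
    (clipped at image borders). Smooth seagrass regions score low;
    textured pothole boundaries score high, which enhances them."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
    return out
```

A uniform patch yields zero everywhere, while a bright anomaly produces a high response around it.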

  4. A model for automatic identification of human pulse signals

    Institute of Scientific and Technical Information of China (English)

    Hui-yan WANG; Pei-yong ZHANG

    2008-01-01

    This paper presents a quantitative method for automatic identification of human pulse signals. The idea is to start with the extraction of characteristic parameters and then to construct the recognition model based on Bayesian networks. To identify depth, frequency and rhythm, several parameters are proposed. To distinguish the strength and shape, which cannot be represented by one or several parameters and are hard to recognize, the main time-domain feature parameters are computed based on the feature points of the pulse signal. Then the extracted parameters are taken as the input and five models for automatic pulse signal identification are constructed based on Bayesian networks. Experimental results demonstrate that the method is feasible and effective in recognizing depth, frequency, rhythm, strength and shape of pulse signals, which can be expected to facilitate the modernization of pulse diagnosis.
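Two of the pulse properties mentioned (frequency and rhythm) can be illustrated as parameters computed from detected pulse peaks. This is a generic sketch under my own definitions, not the paper's parameter set or its Bayesian-network models:

```python
def pulse_parameters(peaks, fs):
    """Given pulse-peak sample indices and sampling rate fs (Hz), return
    the pulse rate (beats/min) and a rhythm-regularity measure: the
    coefficient of variation of inter-beat intervals (near 0 means a
    regular rhythm)."""
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    mean = sum(intervals) / len(intervals)
    rate = 60.0 / mean
    var = sum((i - mean) ** 2 for i in intervals) / len(intervals)
    cv = var ** 0.5 / mean
    return rate, cv
```

Such scalar parameters would then serve as inputs to a classifier, as in the paper's Bayesian-network recognition models.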

  5. MAC, A System for Automatically IPR Identification, Collection and Distribution

    Science.gov (United States)

    Serrão, Carlos

    Controlling Intellectual Property Rights (IPR) in the digital world is a very hard challenge. The ease of creating multiple bit-by-bit identical copies of original IPR works creates opportunities for digital piracy. One of the industries most affected by this is the music industry, which has suffered huge losses in recent years as a result. Moreover, it also affects the way music rights collecting and distributing societies operate to assure correct music IPR identification, collection and distribution. In this article a system for automating this IPR identification, collection and distribution is presented and described. The system makes use of an advanced automatic audio identification system based on audio fingerprinting technology. This paper presents the details of the system and a use-case scenario in which it is being used.

  6. Automatic identification of artifacts in electrodermal activity data.

    Science.gov (United States)

    Taylor, Sara; Jaques, Natasha; Chen, Weixuan; Fedor, Szymon; Sano, Akane; Picard, Rosalind

    2015-01-01

    Recently, wearable devices have allowed for long-term, ambulatory measurement of electrodermal activity (EDA). Although ambulatory recording can be noisy and recording artifacts can easily be mistaken for physiological responses during analysis, to date there is no automatic method for detecting artifacts. This paper describes the development of a machine learning algorithm for automatically detecting EDA artifacts and provides an empirical evaluation of classification performance. We have encoded our results into a freely available web-based tool for artifact and peak detection.

  7. Rewriting and suppressing UMLS terms for improved biomedical term identification

    NARCIS (Netherlands)

    K.M. Hettne (Kristina); E.M. van Mulligen (Erik); M.J. Schuemie (Martijn); R.J.A. Schijvenaars (Bob); J.A. Kors (Jan)

    2010-01-01

    textabstractBackground: Identification of terms is essential for biomedical text mining. We concentrate here on the use of vocabularies for term identification, specifically the Unified Medical Language System (UMLS). To make the UMLS more suitable for biomedical text mining we implemented and evalu

  8. Optimization of the PubMed Automatic Term Mapping.

    Science.gov (United States)

    Thirion, Benoit; Robu, Ioana; Darmoni, Stéfan J

    2009-01-01

    PubMed, freely available on the internet, is the best known database for medical information. We propose a method of optimization of the PubMed Automatic Term Mapping (ATM) that includes MeSH terms. This method is evaluated using two queries constructed to emphasize the differences between the PubMed queries as they are at present and also between these queries and the optimized one. The proposed query is significantly more precise than the current PubMed query (54.5% vs. 27%). The optimized query proposed would be easy to implement into PubMed.

  9. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

    Full Text Available Person identification plays an important role in semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured from a fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing motion information from sensor platforms, such as smartphones carried on human bodies, with motion features extracted from camera video. More specifically, a sequence of motion features extracted from the camera video is compared with each of those collected from the accelerometers of the smartphones. When a strong correlation is detected, identity information transmitted from the corresponding smartphone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments were conducted, achieving impressive performance.
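The matching step can be sketched as follows: compute the correlation between a tracked person's video motion sequence and each phone's accelerometer magnitude sequence, and accept the best match above a threshold. The threshold value and function names below are my own choices:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def match_person(video_motion, phone_streams, threshold=0.8):
    """Return the phone id whose accelerometer sequence correlates most
    strongly with the person's video motion, or None if no correlation
    exceeds the threshold."""
    best, best_r = None, threshold
    for phone_id, accel in phone_streams.items():
        r = pearson(video_motion, accel)
        if r > best_r:
            best, best_r = phone_id, r
    return best
```

Because Pearson correlation is scale-invariant, a phone whose acceleration profile has the same shape as the video motion matches even if the units differ.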

  10. Automatic extraction of candidate nomenclature terms using the doublet method

    Directory of Open Access Journals (Sweden)

    Berman Jules J

    2005-10-01

    nomenclature. Results A 31+ Megabyte corpus of pathology journal abstracts was parsed using the doublet extraction method. This corpus consisted of 4,289 records, each containing an abstract title. The total number of words included in the abstract titles was 50,547. New candidate terms for the nomenclature were automatically extracted from the titles of abstracts in the corpus. Total execution time on a desktop computer with CPU speed of 2.79 GHz was 2 seconds. The resulting output consisted of 313 new candidate terms, each consisting of concatenated doublets found in the reference nomenclature. Human review of the 313 candidate terms yielded a list of 285 terms approved by a curator. A final automatic removal of duplicate terms yielded a final list of 222 new terms (71% of the original 313 extracted candidate terms) that could be added to the reference nomenclature. Conclusion The doublet method for automatically extracting candidate nomenclature terms can be used to quickly find new terms from vast amounts of text. The method can be immediately adapted for virtually any text and any nomenclature. An implementation of the algorithm, in the Perl programming language, is provided with this article.
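The doublet method is simple enough to sketch: build the set of word doublets (adjacent word pairs) occurring in the reference nomenclature, then emit title phrases all of whose consecutive doublets are already known. The article ships a Perl implementation; the Python below is a minimal re-sketch, with details such as casing and minimum phrase length chosen by me:

```python
def doublets(term):
    """Set of adjacent word pairs in a term."""
    words = term.lower().split()
    return {(a, b) for a, b in zip(words, words[1:])}

def extract_candidates(titles, nomenclature):
    """Return phrases (3+ words) from titles whose overlapping word
    doublets all occur in the reference nomenclature."""
    known = set()
    for term in nomenclature:
        known |= doublets(term)
    candidates = set()
    for title in titles:
        words = title.lower().split()
        i = 0
        while i < len(words) - 1:
            # grow a maximal run whose consecutive doublets are all known
            j = i
            while j < len(words) - 1 and (words[j], words[j + 1]) in known:
                j += 1
            if j - i >= 2:  # at least two doublets -> 3+ words
                candidates.add(' '.join(words[i:j + 1]))
            i = max(j, i + 1)
    return candidates
```

A phrase like "squamous cell carcinoma of skin" is extracted when each of its doublets already appears somewhere in the nomenclature, even if the full phrase does not.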

  11. Automatic annotation of protein motif function with Gene Ontology terms

    Directory of Open Access Journals (Sweden)

    Gopalakrishnan Vanathi

    2004-09-01

    Full Text Available Abstract Background Conserved protein sequence motifs are short stretches of amino acid sequence patterns that potentially encode the function of proteins. Several sequence pattern searching algorithms and programs exist for identifying candidate protein motifs at the whole genome level. However, a much needed and important task is to determine the functions of the newly identified protein motifs. The Gene Ontology (GO) project is an endeavor to annotate the function of genes or protein sequences with terms from a dynamic, controlled vocabulary, and these annotations serve well as a knowledge base. Results This paper presents methods to mine the GO knowledge base and use the association between the GO terms assigned to a sequence and the motifs matched by the same sequence as evidence for predicting the functions of novel protein motifs automatically. The task of assigning GO terms to protein motifs is viewed as both a binary classification and an information retrieval problem, where PROSITE motifs are used as samples for model training and functional prediction. The mutual information of a motif and a GO term association is found to be a very useful feature. We take advantage of the known motifs to train a logistic regression classifier, which allows us to combine mutual information with other frequency-based features and obtain a probability of correct association. The trained logistic regression model has intuitively meaningful and logically plausible parameter values, and performs very well empirically according to our evaluation criteria. Conclusions In this research, different methods for automatic annotation of protein motifs have been investigated. Empirical results demonstrated that the methods have great potential for detecting and augmenting information about the functions of newly discovered candidate protein motifs.

  12. Channel Access Algorithm Design for Automatic Identification System

    Institute of Scientific and Technical Information of China (English)

    Oh Sang-heon; Kim Seung-pum; Hwang Dong-hwan; Park Chan-sik; Lee Sang-jeong

    2003-01-01

    The Automatic Identification System (AIS) is maritime equipment that allows an efficient exchange of navigational data between ships and between ships and shore stations. It utilizes a channel access algorithm which can quickly resolve conflicts without any intervention from control stations. In this paper, a design of a channel access algorithm for the AIS is presented. The input/output relationship of each access algorithm module is defined by drawing the state transition diagram, dataflow diagram and flowchart based on the technical standard, ITU-R M.1371. In order to verify the designed channel access algorithm, a simulator was developed using the C/C++ programming language. The results show that the proposed channel access algorithm can properly allocate transmission slots and meet the operational performance requirements specified by the technical standard.
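For flavor, a much-simplified sketch of the slot-allocation idea: pick a random free slot inside a selection interval around the nominal transmission slot. This is an illustration only, not the actual ITU-R M.1371 SOTDMA procedure, which is considerably more involved (candidate-set sizing, slot timeout, reuse of the oldest occupied slot, etc.):

```python
import random

def choose_slot(frame_occupancy, nominal_slot, interval=5, rng=None):
    """Pick a transmission slot within +/-interval of the nominal slot,
    choosing randomly among currently free slots. frame_occupancy is a
    per-slot busy flag for one frame; returns None if no slot is free."""
    rng = rng or random.Random()
    n = len(frame_occupancy)
    candidates = [(nominal_slot + d) % n for d in range(-interval, interval + 1)]
    free = [s for s in candidates if not frame_occupancy[s]]
    return rng.choice(free) if free else None
```

With only one free slot in the selection interval, the choice is forced, which makes the behavior easy to check.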

  13. Automatic identification and normalization of dosage forms in drug monographs

    Directory of Open Access Journals (Sweden)

    Li Jiao

    2012-02-01

    Full Text Available Abstract Background Each day, millions of health consumers seek drug-related information on the Web. Despite some efforts in linking related resources, drug information is largely scattered across a wide variety of websites of differing quality and credibility. Methods As a step toward providing users with integrated access to multiple trustworthy drug resources, we aim to develop a method capable of identifying a drug's dosage form information in addition to drug name recognition. We developed rules and patterns for identifying dosage forms from different sections of full-text drug monographs, and subsequently normalized them to standardized RxNorm dosage forms. Results Our method represents a significant improvement over a baseline lookup approach, achieving an overall macro-averaged precision of 80%, recall of 98%, and F-measure of 85%. Conclusions We successfully developed an automatic approach for drug dosage form identification, which is critical for building links between different drug-related resources.
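The rule-and-pattern approach can be sketched with a small illustrative rule table. The patterns and the normalized names below are my own examples, not the authors' rule set, though the target names follow RxNorm-style dose form conventions:

```python
import re

# Illustrative surface-pattern -> normalized dosage form rules.
RULES = [
    (re.compile(r'\b(tablets?|tabs?)\b', re.I), 'Oral Tablet'),
    (re.compile(r'\bcapsules?\b', re.I), 'Oral Capsule'),
    (re.compile(r'\b(injection|injectable)\b', re.I), 'Injectable Solution'),
    (re.compile(r'\b(cream|ointment)\b', re.I), 'Topical Cream/Ointment'),
]

def identify_dosage_forms(text):
    """Scan monograph text and return the set of normalized dosage forms
    whose surface patterns match anywhere in the text."""
    return {norm for pat, norm in RULES if pat.search(text)}
```

Each surface variant ("tabs", "tablet", "tablets") maps to a single standardized form, which is what makes cross-resource linking possible.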

  14. Automatic Identification of Antibodies in the Protein Data Bank

    Institute of Scientific and Technical Information of China (English)

    LI Xun; WANG Renxiao

    2009-01-01

    An automatic method has been developed for identifying antibody entries in the Protein Data Bank (PDB). Our method, called KIAb (Keyword-based Identification of Antibodies), parses PDB-format files to search for particular keywords relevant to antibodies, and makes a judgment accordingly. Our method identified 780 entries as antibodies in the entire PDB. Among them, 767 entries were confirmed by manual inspection, indicating a high success rate of 98.3%. Our method recovered essentially all of the entries compiled in the Summary of Antibody Crystal Structures (SACS) database, and also identified a number of entries missed by SACS. Our method thus provides a more complete mining of antibody entries in the PDB with a very low false positive rate.
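A minimal sketch of the keyword-based idea follows. The keyword list and the set of record types scanned are assumptions for illustration; the actual KIAb keyword set and judgment logic are described in the paper.

```python
# Hypothetical keyword list; the real KIAb keyword set is not reproduced here.
ANTIBODY_KEYWORDS = ('ANTIBODY', 'IMMUNOGLOBULIN', 'FAB FRAGMENT', 'FV FRAGMENT')

def looks_like_antibody(pdb_text):
    """Scan header-like records of a PDB-format file for antibody keywords."""
    for line in pdb_text.splitlines():
        # PDB record names occupy the first six columns of each line.
        if line[:6].strip() in ('HEADER', 'TITLE', 'COMPND', 'KEYWDS'):
            upper = line.upper()
            if any(k in upper for k in ANTIBODY_KEYWORDS):
                return True
    return False
```

Restricting the scan to descriptive records (rather than the whole file) keeps false positives low, mirroring the low false-positive rate reported for KIAb.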

  15. AUTOMATIC LICENSE PLATE LOCALISATION AND IDENTIFICATION VIA SIGNATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Lorita Angeline

    2014-02-01

    Full Text Available A new algorithm for license plate localisation and identification is proposed on the basis of signature analysis. Signature analysis has been used to locate license plate candidates, and its properties can be further utilised in supporting and affirming license plate character recognition. This paper presents signature analysis and an improved conventional Connected Component Analysis (CCA) to design an automatic license plate localisation and identification system. A procedure called the Euclidean Distance Transform is added to the conventional CCA in order to tackle the multiple bounding boxes that occur. The developed algorithm, SAICCA, achieved a 92% success rate, with an 8% failed localisation rate due to restrictions such as insufficient lighting, poor clarity, and degraded license plate perceptual information. The processing time for license plate localisation and recognition is a crucial criterion that must be considered. Therefore, this paper has utilised several approaches to decrease the processing time to an optimal value. The results obtained show that the proposed system is capable of being implemented in both ideal and non-ideal environments.

  16. Automatic Identification System modular receiver for academic purposes

    Science.gov (United States)

    Cabrera, F.; Molina, N.; Tichavska, M.; Araña, V.

    2016-07-01

    The Automatic Identification System (AIS) standard is encompassed within the Global Maritime Distress and Safety System (GMDSS), in force since 1999. The GMDSS is a set of procedures, equipment, and communication protocols designed with the aim of increasing the safety of sea crossings, facilitating navigation, and aiding the rescue of vessels in danger. The system is not only increasingly attractive for security purposes but can also yield intelligence products from the added-value information that this network transmits from ships in real time (identification, position, course, speed, dimensions, and flag, among others). Within the marine electronics market, commercial receivers implement this standard and allow users to access vessel-broadcast information when within coverage range. In addition to satellite services, users may request actionable information from private or public AIS terrestrial networks, where real-time feeds or historical data can be accessed from their nodes. This paper describes the configuration of an AIS receiver based on a modular design. The modular design facilitates the evaluation of specific modules, a better understanding of the standard, and the possibility of changing hardware modules to improve the performance of the prototype. Thus, the aim of this paper is to describe the system's specifications and its main hardware components, and to present educational didactics on the setup and use of a modular, terrestrial AIS receiver for academic purposes in undergraduate studies such as electrical engineering, telecommunications, and maritime studies.

  17. Automatic Language Identification with Discriminative Language Characterization Based on SVM

    Science.gov (United States)

    Suo, Hongbin; Li, Ming; Lu, Ping; Yan, Yonghong

    Robust automatic language identification (LID) is the task of identifying the language from a short utterance spoken by an unknown speaker. The mainstream approaches include parallel phone recognition language modeling (PPRLM), support vector machines (SVM), and general Gaussian mixture models (GMMs). These systems map the cepstral features of spoken utterances into high-level scores using classifiers. In this paper, in order to increase the dimension of the score vector and alleviate inter-speaker variability within the same language, multiple data groups based on supervised speaker clustering are employed to generate discriminative language characterization score vectors (DLCSV). Back-end SVM classifiers are used to model the probability distribution of each target language in the DLCSV space. Finally, the output scores of the back-end classifiers are calibrated by a pair-wise posterior probability estimation (PPPE) algorithm. The proposed language identification frameworks are evaluated on the 2003 NIST Language Recognition Evaluation (LRE) database, and the experiments show that the system described in this paper produces results comparable to those of existing systems. In particular, the SVM framework achieves an equal error rate (EER) of 4.0% in the 30-second task and outperforms state-of-the-art systems by more than 30% relative error reduction. In addition, the proposed PPRLM and GMM algorithms achieve EERs of 5.1% and 5.0%, respectively.

  18. Automatic Identification of Interictal Epileptiform Discharges in Secondary Generalized Epilepsy

    Science.gov (United States)

    Chang, Won-Du; Cha, Ho-Seung; Lee, Chany; Kang, Hoon-Chul; Im, Chang-Hwan

    2016-01-01

    Interictal epileptiform discharges (EDs) are characteristic signal patterns of scalp electroencephalogram (EEG) or intracranial EEG (iEEG) recorded from patients with epilepsy, which assist with the diagnosis and characterization of various types of epilepsy. The EEG signal, however, is often recorded from patients with epilepsy over a long period of time, and thus detection and identification of EDs have been a burden on medical doctors. This paper proposes a new method for automatic identification of two types of EDs, repeated sharp-waves (sharps) and runs of sharp-and-slow-waves (SSWs), which helps to pinpoint epileptogenic foci in secondary generalized epilepsy such as Lennox-Gastaut syndrome (LGS). In experiments with iEEG data acquired from a patient with LGS, our proposed method detected EDs with an accuracy of 93.76% and classified three different signal patterns with a mean classification accuracy of 87.69%, which was significantly higher than that of a conventional wavelet-based method. Our study shows that it is possible to successfully detect and discriminate sharps and SSWs from background EEG activity using our proposed method. PMID:27379172

  19. Automatic Identification of Interictal Epileptiform Discharges in Secondary Generalized Epilepsy

    Directory of Open Access Journals (Sweden)

    Won-Du Chang

    2016-01-01

    Full Text Available Interictal epileptiform discharges (EDs) are characteristic signal patterns of scalp electroencephalogram (EEG) or intracranial EEG (iEEG) recorded from patients with epilepsy, which assist with the diagnosis and characterization of various types of epilepsy. The EEG signal, however, is often recorded from patients with epilepsy over a long period of time, and thus detection and identification of EDs have been a burden on medical doctors. This paper proposes a new method for automatic identification of two types of EDs, repeated sharp-waves (sharps) and runs of sharp-and-slow-waves (SSWs), which helps to pinpoint epileptogenic foci in secondary generalized epilepsy such as Lennox-Gastaut syndrome (LGS). In experiments with iEEG data acquired from a patient with LGS, our proposed method detected EDs with an accuracy of 93.76% and classified three different signal patterns with a mean classification accuracy of 87.69%, which was significantly higher than that of a conventional wavelet-based method. Our study shows that it is possible to successfully detect and discriminate sharps and SSWs from background EEG activity using our proposed method.

  20. Automatic validation of phosphopeptide identifications from tandem mass spectra.

    Science.gov (United States)

    Lu, Bingwen; Ruse, Cristian; Xu, Tao; Park, Sung Kyu; Yates, John

    2007-02-15

    We developed and compared two approaches for automated validation of phosphopeptide tandem mass spectra identified using database searching algorithms. Phosphopeptide identifications were obtained through SEQUEST searches of a protein database appended with its decoy (reversed sequences). Statistical evaluation and iterative searches were employed to create a high-quality data set of phosphopeptides. Automation of postsearch validation was approached by two different strategies. Using statistical multiple testing, we calculate a p value for each tentative peptide phosphorylation. In a second method, we use a support vector machine (SVM; a machine learning algorithm) binary classifier to predict whether a tentative peptide phosphorylation is true. We show good agreement (85%) between postsearch validation of phosphopeptide/spectrum matches by multiple testing and that from support vector machines. The automatic methods conform very well to manual expert validation in a blinded test. Additionally, the algorithms were tested on the identification of synthetic phosphopeptides. We show that phosphate neutral losses in tandem mass spectra can be used to assess the correctness of phosphopeptide/spectrum matches. An SVM classifier with a radial basis function provided classification accuracy of 95.7% to 96.8% on the positive data set, depending on the search algorithm used. Establishing the efficacy of an identification is a necessary step for further postsearch interrogation of the spectra for complete localization of phosphorylation sites. Our current implementation performs validation of phosphoserine/phosphothreonine-containing peptides having one or two phosphorylation sites from data gathered on an ion trap mass spectrometer. The SVM-based algorithm has been implemented in the software package DeBunker. We illustrate the application of the SVM-based software DeBunker on a large phosphorylation data set.

  1. System Identification and Automatic Mass Balancing of Ground-Based Three-Axis Spacecraft Simulator

    Science.gov (United States)

    2006-08-01

    System Identification and Automatic Mass Balancing of Ground-Based Three-Axis Spacecraft Simulator, Jae-Jun Kim and Brij N. Agrawal. Only report-form metadata is available for this record; no abstract.

  2. Salient Feature Identification and Analysis using Kernel-Based Classification Techniques for Synthetic Aperture Radar Automatic Target Recognition

    Science.gov (United States)

    2014-03-27

    Salient Feature Identification and Analysis using Kernel-Based Classification Techniques for Synthetic Aperture Radar Automatic Target Recognition. Thesis; only report metadata is available for this record, with no abstract.

  3. Rewriting and suppressing UMLS terms for improved biomedical term identification

    Directory of Open Access Journals (Sweden)

    Hettne Kristina M

    2010-03-01

    Full Text Available Abstract Background Identification of terms is essential for biomedical text mining. We concentrate here on the use of vocabularies for term identification, specifically the Unified Medical Language System (UMLS). To make the UMLS more suitable for biomedical text mining we implemented and evaluated nine term rewrite and eight term suppression rules. The rules rely on UMLS properties that have been identified in previous work by others, together with an additional set of new properties discovered by our group during our work with the UMLS. Our work complements the earlier work in that we measure the impact of the different rules on the number of terms identified in a MEDLINE corpus. The number of uniquely identified terms and their frequency in MEDLINE were computed before and after applying the rules. The 50 most frequently found terms, together with a sample of 100 randomly selected terms, were evaluated for every rule. Results Five of the nine rewrite rules were found to generate additional synonyms and spelling variants that correctly corresponded to the meaning of the original terms, and seven of the eight suppression rules were found to suppress only undesired terms. Using the five rewrite rules that passed our evaluation, we were able to identify 1,117,772 new occurrences of 14,784 rewritten terms in MEDLINE. Without the rewriting, we recognized 651,268 terms belonging to 397,414 concepts; with rewriting, we recognized 666,053 terms belonging to 410,823 concepts, which is an increase of 2.8% in the number of terms and an increase of 3.4% in the number of concepts recognized. Using the seven suppression rules, a total of 257,118 undesired terms were suppressed in the UMLS, notably decreasing its size; 7,397 terms were suppressed in the corpus. Conclusions We recommend applying the five rewrite rules and seven suppression rules that passed our evaluation when the UMLS is to be used for biomedical term identification in MEDLINE.
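To illustrate what a term rewrite rule looks like in practice, the sketch below applies three hypothetical rules of the kind the paper evaluates: stripping angle-bracket qualifiers, dropping ", NOS" suffixes, and undoing syntactic inversion. The actual nine rewrite rules differ in detail and are defined against real UMLS term properties.

```python
import re

def rewrite_umls_term(term):
    """Generate candidate variants of a UMLS term via illustrative rules."""
    variants = {term}
    # Rule 1: strip a trailing semantic qualifier, e.g. "Cold <temperature>".
    stripped = re.sub(r'\s*<[^>]*>\s*$', '', term)
    variants.add(stripped)
    # Rule 2: drop a ", NOS" (not otherwise specified) suffix.
    no_nos = re.sub(r',\s*NOS$', '', stripped)
    variants.add(no_nos)
    # Rule 3: undo syntactic inversion, e.g. "Failure, Renal" -> "Renal Failure".
    m = re.match(r'^([^,]+),\s*(.+)$', no_nos)
    if m:
        variants.add(f"{m.group(2)} {m.group(1)}")
    return variants
```

Each generated variant would then be added as a synonym of the original concept before matching against MEDLINE text.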

  4. Principal Component Analysis and Automatic Relevance Determination in Damage Identification

    CERN Document Server

    Mdlazi, L; Stander, C J; Scheffer, C; Heyns, P S

    2007-01-01

    This paper compares two neural network input selection schemes, Principal Component Analysis (PCA) and Automatic Relevance Determination (ARD) based on MacKay's evidence framework. PCA takes all the input data and projects it onto a lower-dimensional space, thereby reducing the dimension of the input space. This input reduction method often results in parameters that have a significant influence on the dynamics of the data being diluted by those that do not. ARD selects the most relevant input parameters and discards those that do not contribute significantly to the dynamics of the data being modelled. ARD sometimes results in important input parameters being discarded, thereby compromising the dynamics of the data. The PCA and ARD methods are implemented together with a Multi-Layer Perceptron (MLP) network for fault identification in structures, and the performance of the two methods is assessed. It is observed that ARD and PCA give similar accuracy le...

  5. Automatic Personal Identification Using Feature Similarity Index Matching

    Directory of Open Access Journals (Sweden)

    R. Gayathri

    2012-01-01

    Full Text Available Problem statement: Biometrics-based personal identification is an effective method for automatically recognizing a person's identity with high confidence. The palmprint is an essential biometric feature for use in access control and forensic applications. In this study, we present multi-feature extraction based on an edge detection scheme, applying a Log Gabor filter to enhance image structures and suppress noise. Approach: A novel Feature Similarity Indexing (FSIM) algorithm is used to generate the matching score between the original image in the database and the input test image. The Feature Similarity (FSIM) index for full-reference image quality assessment (IQA) is based on the fact that the Human Visual System (HVS) understands an image mainly according to its low-level features. Results and Conclusion: The experimental results achieve recognition accuracy using Canny and Prewitt FSIM of 97.3227% and 94.718%, respectively, on the publicly available database of Hong Kong Polytechnic University. In total, 500 images of 100 individuals, 4 samples for each palm, were randomly selected for training in this research. One image of each person's palm is then used as a template (100 in total). Experimental evaluation using palmprint image databases clearly demonstrates the efficient recognition performance of the proposed algorithm compared with conventional palmprint recognition algorithms.

  6. Automatic Identification of Systolic Time Intervals in Seismocardiogram

    Science.gov (United States)

    Shafiq, Ghufran; Tatinati, Sivanagaraja; Ang, Wei Tech; Veluvolu, Kalyana C.

    2016-11-01

    Continuous and non-invasive monitoring of hemodynamic parameters through unobtrusive wearable sensors can potentially aid in early detection of cardiac abnormalities, and provides a viable solution for long-term follow-up of patients with chronic cardiovascular diseases without disrupting daily life activities. Electrocardiogram (ECG) and seismocardiogram (SCG) signals can be readily acquired from light-weight electrodes and accelerometers, respectively, and can be employed to derive systolic time intervals (STI). For this purpose, automated and accurate annotation of the relevant peaks in these signals is required, which is challenging due to inter-subject morphological variability and the noise-prone nature of the SCG signal. In this paper, an approach is proposed to automatically annotate the desired peaks in the SCG signal that are related to STI, by utilizing the information of the peak detected in a sliding template to narrow down the search for the desired peak in the actual SCG signal. Experimental validation of this approach, performed in conventional/controlled supine and realistic/challenging seated conditions containing over 5600 heart beat cycles, shows good performance and robustness of the proposed approach in noisy conditions. Automated measurement of STI in a wearable configuration can provide a quantified cardiac health index for long-term monitoring of patients, elderly people at risk, and health enthusiasts.
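The template-guided search can be sketched as follows: a sliding template suggests a rough peak location, and the annotator then picks the local maximum of the actual beat within a window around that location. The window size and function names below are illustrative assumptions, not the paper's parameters.

```python
def annotate_peak(signal, template_peak_idx, search_half_width=10):
    """Locate the desired SCG peak near the position suggested by a template.

    `template_peak_idx` is the peak index found in the sliding template;
    the actual peak is taken as the local maximum of `signal` within
    +/- `search_half_width` samples of that index.
    """
    lo = max(0, template_peak_idx - search_half_width)
    hi = min(len(signal), template_peak_idx + search_half_width + 1)
    window = signal[lo:hi]
    return lo + window.index(max(window))
```

Narrowing the search this way is what makes the annotation robust to the inter-subject variability and noise mentioned above: the template rules out spurious peaks far from the expected location.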

  7. Automatic Language Identification for Romance Languages using Stop Words and Diacritics

    OpenAIRE

    Truică, Ciprian-Octavian; Velcin, Julien; Boicea, Alexandru

    2015-01-01

    Automatic language identification is a natural language processing problem that tries to determine the natural language of a given content. In this paper we present a statistical method for automatic language identification of written text using dictionaries containing stop words and diacritics. We propose different approaches that combine the two dictionaries to accurately determine the language of textual corpora. This method was chosen because stop words and diacrit...
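A minimal sketch of the stop-word half of the approach, with tiny hand-picked word lists standing in for the full dictionaries (the diacritics dictionary is ignored here entirely, and the word lists are assumptions for illustration):

```python
# Tiny illustrative stop-word lists; a real system would use full
# dictionaries per Romance language, combined with diacritic inventories.
STOP_WORDS = {
    'french':  {'le', 'la', 'les', 'et', 'de', 'un', 'une', 'est'},
    'spanish': {'el', 'la', 'los', 'y', 'de', 'un', 'una', 'es'},
    'italian': {'il', 'la', 'gli', 'e', 'di', 'un', 'una', 'è'},
}

def identify_language(text):
    """Return the language whose stop-word list best covers the text."""
    tokens = text.lower().split()
    scores = {lang: sum(t in words for t in tokens)
              for lang, words in STOP_WORDS.items()}
    return max(scores, key=scores.get)
```

Because stop words are the most frequent words in any text, even short passages usually contain enough of them for the counts to separate closely related languages.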

  8. Semi-automatic long-term acoustic surveying

    DEFF Research Database (Denmark)

    Andreassen, Tórur; Surlykke, Annemarie; Hallam, John

    2014-01-01

    data sampling rates (500 kHz). Using a sound energy threshold criterion for triggering recording, we collected 236 GB (Gi = 1024³) of data at full bandwidth. We implemented a simple automatic method using a Support Vector Machine (SVM) classifier based on a combination of temporal and spectral analyses...... in bat calls to reject short noise pulses, e.g. from rain. The SVM classifier reduced our dataset to 162 MB of candidate bat calls with an estimated accuracy of 96% for dry nights and 70% when it was raining. The automatic survey revealed calls from two species of bat not previously recorded in the area...

  9. Automatic Knowledge Extraction and Knowledge Structuring for a National Term Bank

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2011-01-01

    This paper gives an introduction to the plans and ongoing work in a project, the aim of which is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data from...... various existing sources, as well as methods for target group oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank....

  10. A Wireless Framework for Lecturers' Attendance System with Automatic Vehicle Identification (AVI) Technology

    Directory of Open Access Journals (Sweden)

    Emammer Khamis Shafter

    2015-10-01

    Full Text Available Automatic Vehicle Identification (AVI) technology is a type of Radio Frequency Identification (RFID) method which can be used to significantly improve the efficiency of lecturers' attendance systems. It provides the capability of automatic data capture for attendance records using a mobile device equipped in users' vehicles. The intent of this article is to propose a framework for an automatic lecturers' attendance system using AVI technology. The first objective of this work involves gathering requirements for the Automatic Lecturers' Attendance System and representing them using UML diagrams. The second objective is to put forward a framework that provides guidelines for developing the system. A prototype has also been created as a pilot project.

  11. Automatic player detection and identification for sports entertainment applications

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khattak, Shadid; Hasan, Laiq; Khan, Samee U.

    2014-01-01

    In this paper, we develop an augmented reality sports broadcasting application for automatic detection and recognition of players during play, followed by display of the players' personal information. The proposed application can be divided into four major steps. In the first step, each player in the image i

  12. Complete approach to automatic identification and subpixel center location for ellipse feature

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    To meet the need for automatic image feature extraction with high precision in visual inspection, a complete approach to automatic identification and sub-pixel center location for similar-ellipse features is proposed. In this method, the feature area is identified automatically based on edge attributes, and the sub-pixel center location is accomplished with a least-squares algorithm. Experiments show that the method is valid, practical, and highly precise. Moreover, the method can meet the needs of visual inspection instrumentation because it is easy to implement and requires no man-machine interaction.

  13. Automatic Classification of the Vestibulo-Ocular Reflex Nystagmus: Integration of Data Clustering and System Identification.

    Science.gov (United States)

    Ranjbaran, Mina; Smith, Heather L H; Galiana, Henrietta L

    2016-04-01

    The vestibulo-ocular reflex (VOR) plays an important role in our daily activities by enabling us to fixate on objects during head movements. Modeling and identification of the VOR improves our insight into the system's behavior and improves diagnosis of various disorders. However, the switching nature of eye movements (nystagmus), including the VOR, makes dynamic analysis challenging. The first step in such analysis is to segment data into its subsystem responses (here, slow and fast segment intervals). Misclassification of segments results in biased analysis of the system of interest. Here, we develop a novel three-step algorithm to classify VOR data into slow and fast intervals automatically. The proposed algorithm is initialized using a K-means clustering method. The initial classification is then refined using system identification approaches and prediction error statistics. The performance of the algorithm is evaluated on simulated and experimental data. It is shown that the new algorithm's performance is much improved over that of previous methods, in terms of higher specificity.
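The K-means initialization step can be sketched for the one-dimensional case of eye-velocity magnitude, where slow and fast nystagmus segments separate most clearly. This is only the first of the three steps described above, and the extremal seeding strategy is an assumption for illustration.

```python
def kmeans_1d_two_clusters(values, iters=20):
    """Label samples 'slow' or 'fast' with a minimal 1-D two-cluster K-means.

    Intended as the initialization stage only; the paper's full
    algorithm refines these labels with system identification and
    prediction-error statistics.
    """
    c_slow, c_fast = min(values), max(values)   # seed centers at the extremes
    labels = []
    for _ in range(iters):
        labels = ['fast' if abs(v - c_fast) < abs(v - c_slow) else 'slow'
                  for v in values]
        slow = [v for v, l in zip(values, labels) if l == 'slow']
        fast = [v for v, l in zip(values, labels) if l == 'fast']
        if slow:
            c_slow = sum(slow) / len(slow)
        if fast:
            c_fast = sum(fast) / len(fast)
    return labels
```

On real nystagmus data the two velocity populations overlap, which is exactly why the initial K-means labels need the subsequent model-based refinement.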

  14. Automatic script identification from images using cluster-based templates

    Energy Technology Data Exchange (ETDEWEB)

    Hochberg, J.; Kerns, L.; Kelly, P.; Thomas, T.

    1995-02-01

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
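The matching stage can be sketched as nearest-centroid classification over scaled symbols plus a majority vote across the document. Symbols are represented here as flattened pixel vectors; the squared-distance measure and the voting rule are illustrative assumptions rather than the paper's exact scoring.

```python
def nearest_script(symbol, templates):
    """Match one scaled symbol (flat pixel vector) to the closest template."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = None
    for script, centroids in templates.items():
        d = min(dist(symbol, c) for c in centroids)
        if best is None or d < best[1]:
            best = (script, d)
    return best[0]

def identify_script(symbols, templates):
    """Vote over a subset of a document's symbols; the majority script wins."""
    votes = [nearest_script(s, templates) for s in symbols]
    return max(set(votes), key=votes.count)
```

Voting over many symbols is what pushes accuracy above any single symbol's match reliability, since occasional ambiguous glyphs are outvoted.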

  15. 3-D Storm Automatic Identification Based on Mathematical Morphology

    Institute of Scientific and Technical Information of China (English)

    HAN Lei; ZHENG Yongguang; WANG Hongqing; LIN Yinjing

    2009-01-01

    The storm identification, tracking, and forecasting method is one of the important nowcasting techniques. Accurate storm identification is a prerequisite for successful storm tracking and forecasting. Storm identification faces two difficulties: one is false merger, and the other is failure to isolate adjacent storms within a cluster of storms. The TITAN (Thunderstorm Identification, Tracking, Analysis, and Nowcasting) algorithm is apt to identify adjacent storm cells as one storm because it uses a single reflectivity threshold. The SCIT (Storm Cell Identification and Tracking) algorithm uses seven reflectivity thresholds and therefore is capable of isolating adjacent storm cells, but it discards the results identified by the lower threshold, leading to the loss of the internal structure information of storms. Both TITAN and SCIT fail to satisfactorily identify false mergers. To overcome these shortcomings, this paper proposes a novel approach based on mathematical morphology. The approach first applies single-threshold identification followed by an erosion process to mitigate the false merger problem. During the multi-threshold identification stages, a dilation operation is performed on the storm cells just obtained by the higher-threshold identification, until the storm edges touch each other or touch the edges of the previous storms identified by the lower threshold. The experimental results show that by combining the strengths of the dilation and erosion operations, this approach is able to mitigate the false merger problem as well as maintain the internal structure of sub-storms when isolating storms within a cluster of storms.
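The erosion and dilation operations at the heart of the approach can be sketched on a binary grid of storm pixels, with dilation optionally constrained by the lower-threshold storm region so that grown cells cannot expand past its edge. A 3x3 structuring element is assumed here; the paper's radar-specific parameters are not reproduced.

```python
def erode(cells, shape):
    """Binary erosion with a 3x3 structuring element on a set of (r, c) cells."""
    rows, cols = shape
    return {(r, c) for (r, c) in cells
            if all((r + dr, c + dc) in cells
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if 0 <= r + dr < rows and 0 <= c + dc < cols)}

def dilate(cells, shape, limit=None):
    """Binary dilation; `limit` (a set of cells) models the lower-threshold
    storm region that dilated higher-threshold cells may not grow beyond."""
    rows, cols = shape
    grown = {(r + dr, c + dc) for (r, c) in cells
             for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if 0 <= r + dr < rows and 0 <= c + dc < cols}
    return grown & limit if limit is not None else grown
```

Eroding a single-threshold storm breaks thin "false merger" bridges between cells, while constrained dilation restores each cell's extent without re-merging them.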

  16. Automatic and Direct Identification of Blink Components from Scalp EEG

    Directory of Open Access Journals (Sweden)

    Guojun Dai

    2013-08-01

    Full Text Available Eye blink is an important and inevitable artifact during scalp electroencephalogram (EEG) recording. The main problem in EEG signal processing is how to identify eye blink components automatically with independent component analysis (ICA). Taking into account the fact that the eye blink, as an external source, has a higher sum of correlation with frontal EEG channels than all other sources, due to both its location and its significant amplitude, in this paper we propose a method based on a correlation index and the feature of power distribution to automatically detect eye blink components. Furthermore, we prove mathematically that the correlation between independent components and scalp EEG channels can be obtained directly from the mixing matrix of ICA. This helps to simplify calculations and understand the implications of the correlation. The proposed method doesn't need to select a template or thresholds in advance, and it works without simultaneously recording an electrooculography (EOG) reference. The experimental results demonstrate that the proposed method can automatically recognize eye blink components with high accuracy on entire datasets from 15 subjects.
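The mixing-matrix shortcut can be sketched as follows: assuming unit-variance, uncorrelated sources, the channel-component correlations are just row-normalized entries of the mixing matrix, so the blink component can be ranked without ever touching the component time courses. The frontal-channel indices are supplied by the caller; the ranking rule below is an illustrative reading of the method, not its exact formulation.

```python
import math

def blink_component_index(mixing, frontal_rows):
    """Rank ICA components by summed |correlation| with frontal channels.

    `mixing` is the ICA mixing matrix A (channels x components). Under
    unit-variance, uncorrelated sources, corr(channel i, component j)
    equals A[i][j] / sqrt(sum_k A[i][k]**2), so no component signals
    are needed. Returns the index of the most blink-like component.
    """
    def corr(i, j):
        norm = math.sqrt(sum(a * a for a in mixing[i]))
        return mixing[i][j] / norm if norm else 0.0
    n_comp = len(mixing[0])
    scores = [sum(abs(corr(i, j)) for i in frontal_rows)
              for j in range(n_comp)]
    return scores.index(max(scores))
```

Because the score uses only the mixing matrix, the selection needs no template, no thresholds, and no EOG reference channel, matching the properties claimed above.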

  17. Automatic and direct identification of blink components from scalp EEG.

    Science.gov (United States)

    Kong, Wanzeng; Zhou, Zhanpeng; Hu, Sanqing; Zhang, Jianhai; Babiloni, Fabio; Dai, Guojun

    2013-08-16

    Eye blink is an important and inevitable artifact during scalp electroencephalogram (EEG) recording. The main problem in EEG signal processing is how to identify eye blink components automatically with independent component analysis (ICA). Taking into account the fact that the eye blink, as an external source, has a higher sum of correlation with frontal EEG channels than all other sources, due to both its location and its significant amplitude, in this paper we propose a method based on a correlation index and the feature of power distribution to automatically detect eye blink components. Furthermore, we prove mathematically that the correlation between independent components and scalp EEG channels can be obtained directly from the mixing matrix of ICA. This helps to simplify calculations and understand the implications of the correlation. The proposed method doesn't need to select a template or thresholds in advance, and it works without simultaneously recording an electrooculography (EOG) reference. The experimental results demonstrate that the proposed method can automatically recognize eye blink components with high accuracy on entire datasets from 15 subjects.

  18. An Evaluation of Cellular Neural Networks for the Automatic Identification of Cephalometric Landmarks on Digital Images

    Directory of Open Access Journals (Sweden)

    Rosalia Leonardi

    2009-01-01

    Full Text Available Several efforts have been made to completely automate cephalometric analysis by automatic landmark search. However, the accuracy obtained was worse than manual identification in every study. The analogue-to-digital conversion of X-rays has been claimed to be the main problem. Therefore, the aim of this investigation was to evaluate the accuracy of the Cellular Neural Networks approach for automatic location of cephalometric landmarks on softcopies of direct digital cephalometric X-rays. Forty-one direct-digital lateral cephalometric radiographs, obtained with a Siemens Orthophos DS Ceph, were used in this study, and 10 landmarks (N, A Point, Ba, Po, Pt, B Point, Pg, PM, UIE, LIE) were the objects of automatic landmark identification. The mean errors and standard deviations from the best estimate of the cephalometric points were calculated for each landmark. Differences in the mean errors of automatic and manual landmarking were compared with a 1-way analysis of variance. The analyses indicated that the differences were very small, at most within 0.59 mm. Furthermore, only a few of these differences were statistically significant, and they were so small as to be in most instances clinically meaningless. Therefore, the use of native digital X-ray files rather than scanned X-rays improved the landmark accuracy of automatic detection. Investigations on softcopies of digital cephalometric X-rays, to search for more landmarks in order to enable a complete automatic cephalometric analysis, are strongly encouraged.

  19. Automatic Boat Identification System for VIIRS Low Light Imaging Data

    Directory of Open Access Journals (Sweden)

    Christopher D. Elvidge

    2015-03-01

    The ability of satellite sensors to detect lit fishing boats has been known since the 1970s. However, the use of the observations has been limited by the lack of an automatic algorithm for reporting the location and brightness of offshore lighting features arising from boats. An examination of lit fishing boat features in Visible Infrared Imaging Radiometer Suite (VIIRS) day/night band (DNB) data indicates that the features are essentially spikes. We have developed a set of algorithms for automatic detection of spikes and characterization of the sharpness of spike features. A spike detection algorithm generates a list of candidate boat detections. A second algorithm measures the height of the spikes for the discard of ionospheric energetic particle detections and to rate boat detections as either strong or weak. A sharpness index is used to label boat detections that appear blurry due to the scattering of light by clouds. The candidate spikes are then filtered to remove features on land and gas flares. A validation study conducted using analyst-selected boat detections found that the automatic algorithm detected 99.3% of the reference pixel set. VIIRS boat detection data can provide fishery agencies with up-to-date information on fishing boat activity and changes in this activity in response to new regulations and enforcement regimes. The data can provide indications of illegal fishing activity in restricted areas and incursions across Exclusive Economic Zone (EEZ) boundaries. VIIRS boat detections occur widely offshore from East and Southeast Asia, South America and several other regions.
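
    The spike detection and sharpness rating described above can be sketched roughly as follows. This is an illustrative stand-in, not the paper's actual detector: the 5x5 neighborhood, the spike ratio of 10 and the toy scene are all assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_spikes(radiance, spike_ratio=10.0):
    """Flag pixels that stand out as spikes against the local background.

    A pixel is a candidate boat detection when its radiance exceeds the
    median of its 5x5 neighborhood by `spike_ratio` (threshold values are
    assumptions for illustration).
    """
    background = median_filter(radiance, size=5)
    return radiance > spike_ratio * (background + 1e-12)

def sharpness_index(radiance, row, col):
    """Peak-to-neighbor ratio: high for crisp spikes, low for cloud-blurred ones."""
    peak = radiance[row, col]
    neigh = radiance[row - 1:row + 2, col - 1:col + 2].copy()
    neigh[1, 1] = np.nan  # exclude the peak itself
    return peak / np.nanmean(neigh)

# Toy scene: flat background with one bright "boat" pixel.
scene = np.full((20, 20), 1.0)
scene[10, 10] = 50.0
mask = detect_spikes(scene)
print(mask.sum(), sharpness_index(scene, 10, 10))  # 1 spike, sharpness 50.0
```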

  20. Pavement crack identification based on automatic threshold iterative method

    Science.gov (United States)

    Lu, Guofeng; Zhao, Qiancheng; Liao, Jianguo; He, Yongbiao

    2017-01-01

    Crack detection is an important issue in concrete infrastructure. The accuracy of crack geometry measurement, and hence of the detection system as a whole, depends directly on the accuracy of crack extraction; because cracks are unpredictable, random and irregular, it is difficult to establish a recognition model for them. In addition, image noise caused by irregular lighting, dark spots, freckles and bumps affects crack detection accuracy. This paper improves the peak threshold selection method: enhancement, smoothing and denoising are applied before iterative threshold selection, enabling automatic, stable selection of the threshold value in real time.
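
    The iterative threshold selection at the core of such methods is the classic mean-midpoint (Ridler-Calvard style) iteration. A minimal sketch; the crack and pavement intensity distributions are invented for illustration.

```python
import numpy as np

def iterative_threshold(gray, tol=0.5):
    """Automatic threshold iteration.

    Start from the global mean, then repeatedly set the threshold to the
    midpoint of the means of the two classes it induces, until it stabilizes.
    """
    t = gray.mean()
    while True:
        low, high = gray[gray <= t], gray[gray > t]
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Toy image: dark cracks (~30) covering ~5% of a brighter pavement (~200).
rng = np.random.default_rng(1)
img = np.where(rng.random((100, 100)) < 0.05,
               rng.normal(30, 5, (100, 100)),
               rng.normal(200, 10, (100, 100)))
t = iterative_threshold(img)
crack_mask = img <= t  # pixels classified as crack
```

    In a full pipeline the enhancement, smoothing and denoising steps would run before this function, exactly as the abstract orders them.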

  1. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, uniform corrosion is characterized by texture attributes extracted from the co-occurrence matrix and by the Self-Organizing Map (SOM) clustering algorithm. We present a technique for automatic inspection of oil and gas storage tanks and pipelines of petrochemical industries without disturbing their properties and performance. Experimental results are promising and encourage the possibility of using this methodology in designing trustworthy and robust early failure detection systems. (author)

  2. Performance Modelling of Automatic Identification System with Extended Field of View

    DEFF Research Database (Denmark)

    Lauersen, Troels; Mortensen, Hans Peter; Pedersen, Nikolaj Bisgaard

    2010-01-01

    This paper deals with AIS (Automatic Identification System) behavior, to investigate the severity of packet collisions in an extended field of view (FOV). This is an important issue for satellite-based AIS, and the main goal is a feasibility study to find out to what extent an increased FOV...

  3. Automatic diatom identification using contour analysis by morphological curvature scale spaces

    NARCIS (Netherlands)

    Jalba, Andrei C.; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.; Bayer, Micha M.; Juggins, Stephen

    2005-01-01

    A method for automatic identification of diatoms (single-celled algae with silica shells) based on extraction of features on the contour of the cells by multi-scale mathematical morphology is presented. After extracting the contour of the cell, it is smoothed adaptively, encoded using Freeman chain

  4. Exploring features for automatic identification of news queries through query logs

    Institute of Scientific and Technical Information of China (English)

    Xiaojuan ZHANG; Jian LI

    2014-01-01

    Purpose: Existing research on predicting queries with news intent has tried to extract classification features from external knowledge bases; this paper presents how features extracted from query logs can be applied to the automatic identification of news queries without using any external resources. Design/methodology/approach: First, we manually labeled 1,220 news queries from Sogou.com. Based on an analysis of these queries, we then identified three features of news queries in terms of query content, time of query occurrence and user click behavior. Afterwards, we used 12 effective features proposed in the literature as a baseline and conducted experiments based on a support vector machine (SVM) classifier. Finally, we compared the impact of the features used in this paper on the identification of news queries. Findings: Compared with the baseline features, the F-score improved from 0.6414 to 0.8368 after the use of the three newly identified features, among which the burst point (bst) was the most effective in predicting news queries. In addition, query expression (qes) was more useful than query terms, and among the click-behavior-based features, news URL was the most effective. Research limitations: Analyses based on features extracted from query logs might produce limited results, and the segmentation tool used in this study has been more widely applied to long texts than to short queries. Practical implications: The research will help general-purpose search engines address search intents for news events. Originality/value: Our approach provides a new and different perspective on recognizing queries with news intent without large news corpora such as blogs or Twitter.
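
    The classification step can be sketched with scikit-learn. Everything below is synthetic: the two features (a burst score and a news-URL click ratio) stand in for the paper's log-based features, and their distributions are invented, not drawn from Sogou data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Synthetic training data: news queries are given high burst scores and
# high news-URL click ratios; feature definitions are assumptions, not the
# paper's exact ones.
n = 200
news = np.column_stack([rng.normal(0.8, 0.1, n),    # burst point (bst)
                        rng.normal(0.7, 0.15, n)])  # news-URL click ratio
other = np.column_stack([rng.normal(0.2, 0.1, n),
                         rng.normal(0.1, 0.1, n)])
X = np.vstack([news, other])
y = np.array([1] * n + [0] * n)  # 1 = news intent, 0 = other

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.85, 0.75], [0.15, 0.05]]))  # → [1 0]
```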

  5. Defect Automatic Identification of Eddy Current Pulsed Thermography

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2014-01-01

    Eddy current pulsed thermography (ECPT) is an effective nondestructive testing and evaluation (NDT&E) technique that has been applied to a wide range of conductive materials. Manually selected frames have been used for defect detection and quantification, with defects indicated by high/low temperature in the frames. However, variation in surface emissivity sometimes introduces illusory temperature inhomogeneity and results in false alarms. To improve the probability of detection, this paper proposes a method based on two heat-balance states that can restrain the influence of emissivity. In addition, independent component analysis (ICA) is applied to automatically identify defect patterns and quantify the defects. An experiment was carried out to validate the proposed methods.

  6. Automatic identification of model reductions for discrete stochastic simulation

    Science.gov (United States)

    Wu, Sheng; Fu, Jin; Li, Hong; Petzold, Linda

    2012-07-01

    Multiple time scales in cellular chemical reaction systems present a challenge for the efficiency of stochastic simulation. Numerous model reductions have been proposed to accelerate the simulation of chemically reacting systems by exploiting time scale separation. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming, prone to error, and opportunities for model reduction may be missed, particularly for large models. We propose an automatic model analysis algorithm using an adaptively weighted Petri net to dynamically identify opportunities for model reductions for both the stochastic simulation algorithm and tau-leaping simulation, with no requirement of expert knowledge input. Results are presented to demonstrate the utility and effectiveness of this approach.

  7. Investigation of Ballistic Evidence through an Automatic Image Analysis and Identification System.

    Science.gov (United States)

    Kara, Ilker

    2016-05-01

    Automated firearms identification (AFI) systems contribute to shedding light on criminal events by comparison between different pieces of evidence on cartridge cases and bullets and by matching similar ones that were fired from the same firearm. Ballistic evidence can be rapidly analyzed and classified by means of an automatic image analysis and identification system. In addition, it can be used to narrow the range of possible matching evidence. In this study conducted on the cartridges ejected from the examined pistol, three imaging areas, namely the firing pin impression, capsule traces, and the intersection of these traces, were compared automatically using the image analysis and identification system through the correlation ranking method to determine the numeric values that indicate the significance of the similarities. These numerical features that signify the similarities and differences between pistol makes and models can be used in groupings to make a distinction between makes and models of pistols.

  8. Sensitivity Based Segmentation and Identification in Automatic Speech Recognition.

    Science.gov (United States)

    1984-03-30

    by a network constructed from phonemic, phonetic, and phonological rules. Regardless of the speech processing system used, Klatt has described ... analysis, and its use in the segmentation and identification of the phonetic units of speech, that was initiated during the 1982 Summer Faculty Research ... practicable framework for incorporation of acoustic-phonetic variance as well as time and talker normalization.

  9. Using Automatic Identification System Technology to Improve Maritime Border Security

    Science.gov (United States)

    2014-12-01

    passengers for hire; • High-speed passenger vessels with 12 or more passengers for hire; • Certain dredges and floating plants; • Vessels moving ... requirement did not apply to private visiting vessels. The Mexican government has also taken steps to improve identification requirements of vessels in ... government of Mexico to "locate and identify (in real time) any small vessels cruising Mexican National waters." As of June 2014, the Mexican

  10. Ontology-based automatic identification of public health-related Turkish tweets.

    Science.gov (United States)

    Küçük, Emine Ela; Yapar, Kürşad; Küçük, Dilek; Küçük, Doğan

    2017-02-04

    Social media analysis, such as the analysis of tweets, is a promising research topic for tracking public health concerns including epidemics. In this paper, we present an ontology-based approach to automatically identify public health-related Turkish tweets. The system is based on a public health ontology that we constructed through a semi-automated procedure. As the last stage of ontology development, the ontology concepts are expanded through a linguistically motivated relaxation scheme before being integrated into our system, to increase its coverage. The resulting lexical resource, which includes the terms corresponding to the ontology concepts, is used to filter the Twitter stream so that a plausible tweet subset, consisting mostly of public health-related tweets, can be obtained. Experiments are carried out on two million genuine tweets and promising precision rates are obtained. A Web-based interface for tracking the results of the identification system, intended for the relevant public health staff, was also implemented in the course of the current study. Hence, this social media analysis study makes both technical and practical contributions to the significant domain of public health.
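
    The final filtering stage amounts to matching tweets against the lexical resource derived from the ontology. A toy sketch; the Turkish terms below are illustrative guesses, not taken from the actual ontology.

```python
# Hypothetical fragment of the lexical resource (flu, fever, vaccine, epidemic).
health_terms = {"grip", "ateş", "aşı", "salgın"}

def is_health_related(tweet):
    """Keep a tweet if any token matches a term from the lexical resource."""
    tokens = tweet.lower().split()
    return any(t in health_terms for t in tokens)

tweets = ["bu yıl grip salgın çok kötü", "maç harikaydı"]
print([is_health_related(t) for t in tweets])  # → [True, False]
```

    A production filter would of course add normalization and multi-word term matching on top of this token lookup.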

  11. Automatic type classification and speaker identification of african elephant (Loxodonta africana) vocalizations

    Science.gov (United States)

    Clemins, Patrick J.; Johnson, Michael T.

    2003-04-01

    This paper presents a system for automatically classifying African elephant vocalizations based on systems used for human speech recognition and speaker identification. The experiments are performed on vocalizations collected from captive elephants in a naturalistic environment. Features used for classification include Mel-Frequency Cepstral Coefficients (MFCCs) and log energy which are the most common features used in human speech processing. Since African elephants use lower frequencies than humans in their vocalizations, the MFCCs are computed using a shifted Mel-Frequency filter bank to emphasize the infrasound range of the frequency spectrum. In addition to these features, the use of less traditional features such as those based on fundamental frequency and the phase of the frequency spectrum is also considered. A Hidden Markov Model with Gaussian mixture state probabilities is used to model each type of vocalization. Vocalizations are classified based on type, speaker and estrous cycle. Experiments on continuous call type recognition, which can classify multiple vocalizations in the same utterance, are also performed. The long-term goal of this research is to develop a universal analysis framework and robust feature set for animal vocalizations that can be applied to many species.
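
    The shifted Mel-frequency filter bank can be sketched by laying out filter centers evenly on the mel scale over a low-frequency band. The band edges (5-150 Hz) and filter count are assumptions chosen to illustrate the infrasound emphasis, not the paper's actual configuration.

```python
import numpy as np

def mel(f):      # Hz -> mel
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):  # mel -> Hz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def shifted_mel_centers(n_filters=12, f_lo=5.0, f_hi=150.0):
    """Center frequencies of a Mel filter bank shifted into the low band.

    Centers are equally spaced on the mel scale between the band edges and
    mapped back to Hz; the two endpoints are dropped, as in a standard
    triangular filter bank.
    """
    m = np.linspace(mel(f_lo), mel(f_hi), n_filters + 2)
    return inv_mel(m)[1:-1]

print(np.round(shifted_mel_centers(), 1))
```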

  12. A multi-algorithm-based automatic person identification system

    Science.gov (United States)

    Monwar, Md. Maruf; Gavrilova, Marina

    2010-04-01

    Multimodal biometrics is an emerging area of research that aims at increasing the reliability of biometric systems by utilizing more than one biometric in the decision-making process. In this work, we develop a multi-algorithm-based multimodal biometric system utilizing face and ear features with rank-level and decision-level fusion. We use multilayer perceptron network and fisherimage approaches for individual face and ear recognition. After face and ear recognition, we integrate the results of the two face matchers using rank-level fusion, experimenting with the highest rank, Borda count, logistic regression and Markov chain methods. Owing to its better recognition performance, we employ the Markov chain approach to combine the face decisions; similarly, we obtain a combined ear decision. These two decisions are combined for the final identification decision. We experiment with the 'AND'/'OR' rules, the majority voting rule and the weighted majority voting rule of decision fusion. From the experimental results, we observed that the weighted majority voting rule works better than the other decision fusion approaches, and hence we incorporate this fusion approach for the final identification decision. The final results indicate that a multi-algorithm-based approach can certainly improve the recognition performance of multibiometric systems.
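
    Of the rank-level fusion methods listed above, the Borda count is the simplest to show. A minimal sketch with hypothetical matcher outputs (the identity names are made up):

```python
def borda_fusion(rankings):
    """Borda-count rank-level fusion.

    Each matcher supplies an identity ranking (best first). An identity
    earns (n_candidates - position) points per matcher; the fused ranking
    orders identities by total points.
    """
    candidates = set().union(*rankings)
    n = len(candidates)
    scores = {c: 0 for c in candidates}
    for ranking in rankings:
        for pos, c in enumerate(ranking):
            scores[c] += n - pos
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs of a face matcher and an ear matcher that disagree:
face_rank = ["alice", "bob", "carol"]
ear_rank = ["bob", "carol", "alice"]
print(borda_fusion([face_rank, ear_rank]))  # → ['bob', 'alice', 'carol']
```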

  13. Perspective of the applications of automatic identification technologies in the Serbian Army

    Directory of Open Access Journals (Sweden)

    Velibor V. Jovanović

    2012-07-01

    Supply-chain management is almost impossible without modern information systems. Automatic identification technologies provide automated data capture, which improves conditions and supports decision making. Automatic identification media, notably barcode and RFID technology, serve as carriers of labels with high-quality data and an adequate description of materiel, providing crucial visibility of inventory levels throughout the supply chain. With these media and an adequate information system, the Ministry of Defense of the Republic of Serbia will be able to establish a system of codification and, in accordance with the NATO codification system, successfully implement unique codification, classification and determination of storage numbers for all tools, components and spare parts, for their unambiguous identification. In the long term, this will help end users perform everyday tasks without compromising the integrity of materiel security data, and help command structures obtain reliable information for decision making to ensure optimal management. Products and services that pass the codification procedure will have the opportunity to be offered in the largest market of armament and military equipment. This paper gives a comparative analysis of two automatic identification technologies - barcode, the most common one, and RFID, the most advanced one - with an emphasis on the advantages and disadvantages of their use in tracking inventory through the supply chain. Their possible application in the Serbian Army is discussed in general.

  14. Online automatic identification of the modal parameters of a long span arch bridge

    Science.gov (United States)

    Magalhães, Filipe; Cunha, Álvaro; Caetano, Elsa

    2009-02-01

    The "Infante D. Henrique" bridge is a concrete arch bridge with a span of 280 m that crosses the Douro River, linking the cities of Porto and Gaia in the north of Portugal. The structure is monitored by a recently installed dynamic monitoring system comprising 12 acceleration channels. This paper describes the bridge structure, its dynamic parameters identified in a previously performed ambient vibration test, the installed monitoring equipment and the software that continuously processes the data received from the bridge through an Internet connection. Special emphasis is given to the algorithms developed and implemented to perform online automatic identification of the structure's modal parameters from its measured responses during normal operation. The proposed methodology uses the covariance-driven stochastic subspace identification method (SSI-COV), complemented by a new algorithm developed for the automatic analysis of stabilization diagrams. This new tool, based on a hierarchical clustering algorithm, proved to be very efficient in identifying the bridge's first 12 modes. The results achieved during 2 months of observation, which involved the analysis of more than 2500 datasets, are presented in detail. It is demonstrated that, with the combination of high-quality equipment and powerful identification algorithms, it is possible to estimate accurate modal parameters for several modes in an automatic manner. These can then be used as inputs for damage detection algorithms.
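
    The core idea of automating a stabilization diagram with hierarchical clustering can be sketched in a few lines. This is a simplification of the paper's tool: real implementations cluster on frequency, damping and mode-shape distances together, whereas the sketch below clusters pole frequencies alone, with invented values.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Pole frequency estimates collected over many model orders (values invented):
pole_freqs = np.array([1.01, 0.99, 1.00, 1.00,  # estimates of mode 1 (Hz)
                       2.49, 2.51, 2.50,        # estimates of mode 2
                       7.30])                   # spurious pole
Z = linkage(pole_freqs.reshape(-1, 1), method="single")
labels = fcluster(Z, t=0.1, criterion="distance")

# Physical modes appear as clusters supported by many model orders;
# singletons (here the 7.30 Hz pole) are discarded as spurious.
for lab in np.unique(labels):
    members = pole_freqs[labels == lab]
    if len(members) >= 3:
        print(f"mode at {members.mean():.2f} Hz ({len(members)} poles)")
```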

  15. Managing Returnable Containers Logistics - A Case Study Part II - Improving Visibility through Using Automatic Identification Technologies

    Directory of Open Access Journals (Sweden)

    Gretchen Meiser

    2011-05-01

    This case study is the result of a project conducted on behalf of a company that uses its own returnable containers to transport purchased parts from suppliers. The objective of this project was to develop a proposal to enable the company to more effectively track and manage its returnable containers. The research activities in support of this project included (1) the analysis and documentation of the physical flow and the information flow associated with the containers and (2) the investigation of new technologies to improve the automatic identification and tracking of containers. This paper explains the automatic identification technologies and important criteria for selection. A companion paper details the flow of information and containers within the logistics chain, and it identifies areas for improving the management of the containers.

  16. Semi-automatic identification photo generation with facial pose and illumination normalization

    Science.gov (United States)

    Jiang, Bo; Liu, Sijiang; Wu, Song

    2016-07-01

    An identification photo is a category of facial image that has strict requirements on image quality such as size, illumination, user expression and dressing. Traditionally, these photos are taken in professional studios. With the rapid popularity of mobile devices, how to conveniently take identification photos at any time and anywhere with such devices is an interesting problem. In this paper, we propose a novel semi-automatic identification photo generation approach. Given a user image, facial pose and expression are first normalized to meet the basic requirements. To correct uneven lighting conditions, a facial illumination normalization approach is adopted to further improve the image quality. Finally, the foreground user is extracted and re-targeted to a specific photo size; the background can also be changed as required. Preliminary experimental results show that the proposed method is efficient and effective in identification photo generation compared with manual tuning in commercial software.

  17. Automatic Identification of Tomato Maturation Using Multilayer Feed Forward Neural Network with Genetic Algorithms (GA)

    Institute of Scientific and Technical Information of China (English)

    FANG Jun-long; ZHANG Chang-li; WANG Shu-wen

    2004-01-01

    We set up a computer vision system for tomato images. Using this system, the RGB values of a tomato image were converted into HSI values, whose H component was used to capture the color character of the tomato surface. A multilayer feed-forward neural network trained with a genetic algorithm (GA) then performs the automatic identification of tomato maturation. Experimental results showed that the accuracy was up to 94%.

  18. Semi-automatic term extraction for the African languages, with special reference to Northern Sotho

    OpenAIRE

    Elsabé Taljard; Gilles-Maurice de Schryver

    2002-01-01

    Abstract: Worldwide, semi-automatically extracting terms from corpora is becoming the norm for the compilation of terminology lists, term banks or dictionaries for special purposes. If African-language terminologists are willing to take their rightful place in the new millennium, they must not only take cognisance of this trend but also be ready to implement the new technology. In this article it is advocated that the best way to do the latter two at this stage is to opt for computat...

  19. Towards an automatic spectral and modal identification from operational modal analysis

    Science.gov (United States)

    Vu, V. H.; Thomas, M.; Lafleur, F.; Marcouiller, L.

    2013-01-01

    A method is developed for the automatic identification of the spectrum and modal parameters of an operational modal analysis using multi sensors. A multivariate autoregressive model is presented, and its parameters are estimated by least squares via the implementation of QR factorization. A noise-independent minimum model order, from which all available physical modes may be identified, is developed. This so-called optimal model order is selected from the convergence of a global order-wise signal-to-noise ratio index. At this model order or higher, the modes are classified based on a decreasing damped modal signal-to-noise (DMSN) criterion. This decreasing order classification allows for easy identification of all the physical modes. A significant change in the DMSN index enables the determination of the number of physical modes in a specific frequency range, and thus, an automatic procedure for identifying the modal parameters can be developed to discriminate harmonic and natural frequencies from spurious ones. Furthermore, a multispectral matrix can be constructed from selected frequencies by introducing a powered amplification factor, which provides a smooth, balanced, noise-free spectrum with all main peaks. The proposed method has been performed on simulated multi-degree-of-freedom systems, on a laboratory test bench, and on an industrial operating high power hydro-electric generator offering the potential for automatic operational modal analysis and structural health monitoring.
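
    The least-squares-via-QR estimation step can be shown on a toy case. A minimal sketch, univariate for brevity where the paper's model is multivariate, recovering the coefficients of a known AR(2) process:

```python
import numpy as np

def fit_ar(y, order):
    """Least-squares AR(p) fit via QR factorization.

    Solves y[t] = a_1*y[t-1] + ... + a_p*y[t-p] + e[t] by building the
    lagged regression matrix and solving the normal problem through QR.
    """
    p = order
    Phi = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    rhs = y[p:]
    Q, R = np.linalg.qr(Phi)
    return np.linalg.solve(R, Q.T @ rhs)

# Simulate a stable AR(2) process with known coefficients, then recover them.
rng = np.random.default_rng(3)
a_true = np.array([1.5, -0.7])  # lightly damped oscillatory mode
y = np.zeros(5000)
for t in range(2, len(y)):
    y[t] = a_true @ y[t - 2:t][::-1] + 0.1 * rng.standard_normal()
a_hat = fit_ar(y, 2)
print(np.round(a_hat, 2))  # close to [1.5, -0.7]
```

    Modal frequencies and damping ratios then follow from the roots of the fitted AR polynomial, which is the step the automatic classification above operates on.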

  20. An Automatic Identification Procedure to Promote the use of FES-Cycling Training for Hemiparetic Patients

    Directory of Open Access Journals (Sweden)

    Emilia Ambrosini

    2014-01-01

    Cycling induced by Functional Electrical Stimulation (FES) training currently requires manual setting of different parameters, which is a time-consuming and scarcely repeatable procedure. We proposed an automatic procedure for setting session-specific parameters optimized for hemiparetic patients. This procedure consisted of the identification of the stimulation strategy as the angular ranges during which FES drove the motion, the comparison between the identified strategy and the physiological muscular activation strategy, and the setting of the pulse amplitude and duration of each stimulated muscle. Preliminary trials on 10 healthy volunteers helped define the procedure. Feasibility tests on 8 hemiparetic patients (5 stroke, 3 traumatic brain injury) were performed. The procedure maximized the motor output within the tolerance constraint, identified a biomimetic strategy in 6 patients, and always lasted less than 5 minutes. Its reasonable duration and automatic nature make the procedure usable at the beginning of every training session, potentially enhancing the performance of FES-cycling training.

  1. An automatic identification procedure to promote the use of FES-cycling training for hemiparetic patients.

    Science.gov (United States)

    Ambrosini, Emilia; Ferrante, Simona; Schauer, Thomas; Ferrigno, Giancarlo; Molteni, Franco; Pedrocchi, Alessandra

    2014-01-01

    Cycling induced by Functional Electrical Stimulation (FES) training currently requires a manual setting of different parameters, which is a time-consuming and scarcely repeatable procedure. We proposed an automatic procedure for setting session-specific parameters optimized for hemiparetic patients. This procedure consisted of the identification of the stimulation strategy as the angular ranges during which FES drove the motion, the comparison between the identified strategy and the physiological muscular activation strategy, and the setting of the pulse amplitude and duration of each stimulated muscle. Preliminary trials on 10 healthy volunteers helped define the procedure. Feasibility tests on 8 hemiparetic patients (5 stroke, 3 traumatic brain injury) were performed. The procedure maximized the motor output within the tolerance constraint, identified a biomimetic strategy in 6 patients, and always lasted less than 5 minutes. Its reasonable duration and automatic nature make the procedure usable at the beginning of every training session, potentially enhancing the performance of FES-cycling training.

  2. Automatic identification of bullet signatures based on consecutive matching striae (CMS) criteria.

    Science.gov (United States)

    Chu, Wei; Thompson, Robert M; Song, John; Vorburger, Theodore V

    2013-09-10

    The consecutive matching striae (CMS) numeric criteria for firearm and toolmark identifications have been widely accepted by forensic examiners, although there have been questions concerning its observer subjectivity and limited statistical support. In this paper, based on signal processing and extraction, a model for the automatic and objective counting of CMS is proposed. The position and shape information of the striae on the bullet land is represented by a feature profile, which is used for determining the CMS number automatically. Rapid counting of CMS number provides a basis for ballistics correlations with large databases and further statistical and probability analysis. Experimental results in this report using bullets fired from ten consecutively manufactured barrels support this developed model.
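
    The counting step behind the CMS criterion is a longest-run computation. A simplified stand-in for the paper's feature-profile comparison, with invented striae positions:

```python
def max_consecutive_matching(profile_a, profile_b, tol=1.0):
    """Count the longest run of consecutive matching striae.

    Each profile lists striae positions along the bullet land (units are
    arbitrary here). Striae at corresponding indices "match" when their
    positions differ by less than `tol`; the CMS number is the longest
    consecutive run of matches.
    """
    best = run = 0
    for a, b in zip(profile_a, profile_b):
        run = run + 1 if abs(a - b) < tol else 0
        best = max(best, run)
    return best

# Two bullets from the same barrel agree on six consecutive striae:
evidence = [10.0, 12.1, 15.3, 18.0, 21.2, 25.0, 40.0, 44.0]
reference = [10.2, 12.0, 15.1, 18.3, 21.0, 25.3, 33.0, 51.0]
print(max_consecutive_matching(evidence, reference))  # → 6
```

    In practice the profiles would first be aligned and extracted from topography data; only the counting rule is shown here.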

  3. A new approach to the automatic identification of organism evolution using neural networks.

    Science.gov (United States)

    Kasperski, Andrzej; Kasperska, Renata

    2016-01-01

    Automatic identification of organism evolution remains a challenging task, which is especially exciting when the evolution of humans is considered. The main aim of this work is to present a new idea for analyzing organism evolution using neural networks. Here we show that it is possible to identify the evolution of any organism in a fully automatic way using the designed EvolutionXXI program, which contains an implemented neural network. The neural network was taught using cytochrome b sequences of selected organisms. Analyses were then carried out for various exemplary organisms in order to demonstrate the capabilities of the EvolutionXXI program. The presented idea supports existing hypotheses concerning evolutionary relationships between selected organisms, among others Sirenia and elephants, hippopotami and whales, scorpions and spiders, and dolphins and whales. Moreover, primate (including human), tree shrew and yeast evolution has been reconstructed.

  4. Identification of mycobacterium tuberculosis in sputum smear slide using automatic scanning microscope

    Science.gov (United States)

    Rulaningtyas, Riries; Suksmono, Andriyan B.; Mengko, Tati L. R.; Saptawati, Putri

    2015-04-01

    Sputum smear observation plays an important role in tuberculosis (TB) diagnosis, because accurate identification is needed to avoid diagnostic errors. In developing countries, sputum smear slides are commonly observed with a conventional light microscope on Ziehl-Neelsen stained tissue, which does not require a high maintenance cost. Clinicians screen sputum smear slides manually, which is time consuming and requires extensive training to detect the presence of TB bacilli (mycobacterium tuberculosis) accurately, especially on negative slides and slides with few TB bacilli. To help clinicians, we propose an automatic scanning microscope with automatic identification of TB bacilli. The designed system drives the field movement of a light microscope with a stepper motor controlled by a microcontroller, and every sputum smear field is captured by a camera. Several image processing techniques are then applied to the sputum smear images. Color thresholding on the hue channel of HSV color space is used for background subtraction, and the Sobel edge detection algorithm is used for TB bacilli image segmentation. Shape-based feature extraction is used to analyze the bacilli, and a neural network then classifies each object as a TB bacillus or not. The results indicate that the identification worked well and detected TB bacilli accurately on sputum smear slides with normal staining, but not on over-stained or under-stained tissue slides. Overall, the designed system can make sputum smear observation easier for clinicians.
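
    The hue-based background subtraction step can be sketched as a mask over the red hue band typical of Ziehl-Neelsen-stained bacilli. The hue window values and the toy image below are illustrative assumptions, not the paper's calibrated thresholds.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def bacilli_mask(rgb_image, hue_lo=0.9, hue_hi=0.05):
    """Keep pixels whose hue falls in the (wrapped) red band."""
    hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
    hue = hsv[..., 0]
    # red wraps around 0 on the hue circle, so the band has two pieces
    return (hue >= hue_lo) | (hue <= hue_hi)

# 2x2 toy image: one red "bacillus" pixel on a blue background.
img = np.array([[[200, 20, 30], [30, 40, 200]],
                [[25, 35, 190], [35, 30, 210]]], dtype=np.uint8)
print(bacilli_mask(img))
```

    The resulting mask would feed the Sobel segmentation and shape-feature stages described above.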

  5. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Yoshihisa Sakurai

    2016-04-01

    This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier’s subtechniques in course conditions.

  6. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors.

    Science.gov (United States)

    Sakurai, Yoshihisa; Fujita, Zenya; Ishige, Yusuke

    2016-04-02

    This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier's subtechniques in course conditions.

  7. Automatic derivation of domain terms and concept location based on the analysis of the identifiers

    CERN Document Server

    Vaclavik, Peter; Mezei, Marek

    2010-01-01

    Developers express the meaning of the domain ideas in specifically selected identifiers and comments that form the target implemented code. Software maintenance requires knowledge and understanding of the encoded ideas. This paper presents a way to automatically create a domain vocabulary. Knowledge of the domain vocabulary supports the comprehension of a specific domain for later code maintenance or evolution. We present experiments conducted in two selected domains: application servers and web frameworks. Knowledge of domain terms enables easy localization of chunks of code that belong to a certain term. We consider these chunks of code as "concepts" and their placement in the code as "concept location". Application developers may also benefit from the obtained domain terms. These terms are parts of speech that characterize a certain concept. Concepts are encoded in "classes" (OO paradigm) and the obtained vocabulary of terms supports the selection and the comprehension of the class' appropriate identifiers. ...
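    The paper's extraction procedure is not reproduced in this record; a common first step for building such a vocabulary is splitting identifiers into terms and ranking them by frequency, which might be sketched as follows (the function names and the `min_count` cut-off are illustrative assumptions, not the authors' algorithm):

```python
import re
from collections import Counter

def split_identifier(name):
    """Split a camelCase or snake_case identifier into lowercase terms."""
    parts = re.split(r'[_\W]+', name)
    terms = []
    for part in parts:
        # break camelCase / PascalCase boundaries, keeping acronyms intact
        terms += re.findall(r'[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+', part)
    return [t.lower() for t in terms if t]

def domain_vocabulary(identifiers, min_count=2):
    """Rank candidate domain terms by frequency across all identifiers."""
    counts = Counter(t for ident in identifiers for t in split_identifier(ident))
    return [term for term, n in counts.most_common() if n >= min_count]
```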

  8. Automatic Identification of Axis Orbit Based on Both Wavelet Moment Invariants and Neural Network

    Institute of Scientific and Technical Information of China (English)

    Fu Xiang-qian; Liu Guang-lin; Jiang Jing; Li You-ping

    2003-01-01

    Axis orbit is an important characteristic used in the condition monitoring and diagnosis of rotating machines. Wavelet moments are invariant to translation, scaling and rotation. A method that uses a neural network based on Radial Basis Functions (RBF) and wavelet moment invariants to identify the orbit of the shaft centerline of a rotating machine is discussed in this paper. The principle of the method and its application procedure are introduced in detail. Simulation results are given for the automatic identification of three typical axis orbits. The method is shown to be effective and practicable.

  9. Automatic Identification of Axis Orbit Based on Both Wavelet Moment Invariants and Neural Network

    Institute of Scientific and Technical Information of China (English)

    Fu Xiang-qian; Liu Guang-lin; Jiang Jing; Li You-ping

    2003-01-01

    Axis orbit is an important characteristic used in the condition monitoring and diagnosis of rotating machines. Wavelet moments are invariant to translation, scaling and rotation. A method that uses a neural network based on Radial Basis Functions (RBF) and wavelet moment invariants to identify the orbit of the shaft centerline of a rotating machine is discussed in this paper. The principle of the method and its application procedure are introduced in detail. Simulation results are given for the automatic identification of three typical axis orbits. The method is shown to be effective and practicable.

  10. Automatic classification and speaker identification of African elephant (Loxodonta africana) vocalizations

    Science.gov (United States)

    Clemins, Patrick J.; Johnson, Michael T.; Leong, Kirsten M.; Savage, Anne

    2005-02-01

    A hidden Markov model (HMM) system is presented for automatically classifying African elephant vocalizations. The development of the system is motivated by successful models from human speech analysis and recognition. Classification features include frequency-shifted Mel-frequency cepstral coefficients (MFCCs) and log energy, spectrally motivated features which are commonly used in human speech processing. Experiments, including vocalization type classification and speaker identification, are performed on vocalizations collected from captive elephants in a naturalistic environment. The system classified vocalizations with accuracies of 94.3% and 82.5% for the type classification and speaker identification experiments, respectively. Classification accuracy, statistical significance tests on the model parameters, and qualitative analysis support the effectiveness and robustness of this approach for vocalization analysis in nonhuman species.

  11. Deep learning for automatic localization, identification, and segmentation of vertebral bodies in volumetric MR images

    Science.gov (United States)

    Suzani, Amin; Rasoulian, Abtin; Seitel, Alexander; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2015-03-01

    This paper proposes an automatic method for vertebra localization, labeling, and segmentation in multi-slice Magnetic Resonance (MR) images. Prior work in this area on MR images mostly requires user interaction, while our method is fully automatic. Cubic intensity-based features are extracted from image voxels. A deep learning approach is used for simultaneous localization and identification of vertebrae. The localized points are refined by local thresholding in the region of the detected vertebral column. Thereafter, a statistical multi-vertebrae model is initialized on the localized vertebrae. An iterative Expectation Maximization technique is used to register the vertebral body of the model to the image edges and obtain a segmentation of the lumbar vertebral bodies. The method is evaluated by applying it to nine volumetric MR images of the spine. The results demonstrate 100% vertebra identification and a mean surface error below 2.8 mm for 3D segmentation. Computation time is less than three minutes per high-resolution volumetric image.

  12. Automatic Identification of Critical Data Items in a Database to Mitigate the Effects of Malicious Insiders

    Science.gov (United States)

    White, Jonathan; Panda, Brajendra

    A major concern for computer system security is the threat from malicious insiders who target and abuse critical data items in the system. In this paper, we propose a solution to enable automatic identification of critical data items in a database by way of data dependency relationships. This identification of critical data items is necessary because insider threats often target mission critical data in order to accomplish malicious tasks. Unfortunately, currently available systems fail to address this problem in a comprehensive manner. Identifying these critical data items is especially difficult for non-experts because of their lack of familiarity with the system and because data systems are constantly changing. By identifying the critical data items automatically, security engineers will be better prepared to protect what is critical to the mission of the organization and will also have the ability to focus their security efforts on these critical data items. We have developed an algorithm that scans the database logs and forms a directed graph showing which items influence a large number of other items and at what frequency this influence occurs. This graph is traversed to reveal the data items which have a large influence throughout the database system, using a novel metric-based formula. These items are critical to the system because if they are maliciously altered or stolen, the malicious alterations will spread throughout the system, delaying recovery and causing a much more malignant effect. As these items have significant influence, they are deemed to be critical and worthy of extra security measures. Our proposal is not intended to replace existing intrusion detection systems, but rather is intended to complement current and future technologies. To our knowledge this approach has not been attempted before, and our experimental results show that it is very effective in revealing critical data items automatically.
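    The paper's metric-based formula is not given in this record; a simplified sketch of the underlying idea (score each item by how many others it can influence transitively through the dependency graph, weighted by how often it is updated) might look like:

```python
from collections import defaultdict, deque

def influence_scores(edges):
    """Score each data item by how many other items its value can reach
    through dependency edges, weighted by its update frequency.
    edges: iterable of (source_item, dependent_item, frequency) triples
    parsed from the database log."""
    graph = defaultdict(list)
    freq = defaultdict(int)
    for src, dst, f in edges:
        graph[src].append(dst)
        freq[src] += f
    scores = {}
    for item in list(graph):
        # breadth-first search for all items transitively influenced by `item`
        seen, queue = {item}, deque([item])
        while queue:
            for nxt in graph.get(queue.popleft(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        scores[item] = (len(seen) - 1) * freq[item]
    return scores
```

    Items with the highest scores would then be flagged as candidates for extra security measures.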

  13. Evaluating current automatic de-identification methods with Veteran’s health administration clinical documents

    Directory of Open Access Journals (Sweden)

    Ferrández Oscar

    2012-07-01

    Full Text Available Abstract Background The increased use and adoption of Electronic Health Records (EHR) causes a tremendous growth in digital information useful for clinicians, researchers and many other operational purposes. However, this information is rich in Protected Health Information (PHI), which severely restricts its access and possible uses. A number of investigators have developed methods for automatically de-identifying EHR documents by removing PHI, as specified in the Health Insurance Portability and Accountability Act “Safe Harbor” method. This study focuses on the evaluation of existing automated text de-identification methods and tools, as applied to Veterans Health Administration (VHA) clinical documents, to assess which methods perform better with each category of PHI found in our clinical notes; and when new methods are needed to improve performance. Methods We installed and evaluated five text de-identification systems “out-of-the-box” using a corpus of VHA clinical documents. The systems based on machine learning methods were trained with the 2006 i2b2 de-identification corpora and evaluated with our VHA corpus, and also evaluated with a ten-fold cross-validation experiment using our VHA corpus. We counted exact, partial, and fully contained matches with reference annotations, considering each PHI type separately, or only one unique ‘PHI’ category. Performance of the systems was assessed using recall (equivalent to sensitivity) and precision (equivalent to positive predictive value) metrics, as well as the F2-measure. Results Overall, systems based on rules and pattern matching achieved better recall, and precision was always better with systems based on machine learning approaches. The highest “out-of-the-box” F2-measure was 67% for partial matches; the best precision and recall were 95% and 78%, respectively. Finally, the ten-fold cross validation experiment allowed for an increase of the F2-measure to 79% with partial matches
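    The F2-measure reported above is the F-beta score with beta = 2, which weights recall (sensitivity) twice as heavily as precision; for reference:

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score; beta = 2 weights recall twice as heavily as precision,
    as appropriate when missing PHI is costlier than a false positive."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```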

  14. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Amato, G. [ISTI-CNR, Area della Ricerca, Via Moruzzi 1, 56124, Pisa (Italy); Cristoforetti, G.; Legnaioli, S.; Lorenzetti, G.; Palleschi, V. [IPCF-CNR, Area della Ricerca, Via Moruzzi 1, 56124, Pisa (Italy); Sorrentino, F., E-mail: sorrentino@fi.infn.i [Dipartimento di Fisica e astronomia, Universita di Firenze, Polo Scientifico, via Sansone 1, 50019 Sesto Fiorentino (Italy); Istituto di Cibernetica CNR, via Campi Flegrei 34, 80078 Pozzuoli (Italy); Marwan Technology, c/o Dipartimento di Fisica 'E. Fermi', Largo Pontecorvo 3, 56127 Pisa (Italy); Tognoni, E. [INO-CNR, Area della Ricerca, Via Moruzzi 1, 56124 Pisa (Italy)

    2010-08-15

    In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks, in the database, in its wavelength neighbourhood. We assume a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.
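    A minimal sketch of the peak-weighting and similarity-ranking idea described above (the exact weighting formula and data layout here are simplified assumptions, not the authors' implementation) might look like:

```python
import math

def peak_weights(peaks, neighbour_counts):
    """peaks: {wavelength: intensity}; neighbour_counts: how many database
    peaks fall near each wavelength.  Weight = intensity divided by
    neighbourhood crowding, echoing tf-idf term weighting."""
    return {wl: inten / neighbour_counts.get(wl, 1)
            for wl, inten in peaks.items()}

def cosine(u, v):
    """Cosine similarity between two sparse weighted-peak vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_elements(sample_vec, element_vecs):
    """Rank candidate elements by similarity to the sample spectrum."""
    return sorted(element_vecs,
                  key=lambda e: cosine(sample_vec, element_vecs[e]),
                  reverse=True)
```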

  15. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    Science.gov (United States)

    Amato, G.; Cristoforetti, G.; Legnaioli, S.; Lorenzetti, G.; Palleschi, V.; Sorrentino, F.; Tognoni, E.

    2010-08-01

    In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks, in the database, in its wavelength neighbourhood. We assume a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.

  16. Automatic Threshold Determination for a Local Approach of Change Detection in Long-Term Signal Recordings

    Directory of Open Access Journals (Sweden)

    Khalil Mohamad

    2007-01-01

    Full Text Available CUSUM (cumulative sum) is a well-known method that can be used to detect changes in a signal when the parameters of this signal are known. This paper presents an adaptation of the CUSUM-based change detection algorithms to long-term signal recordings where the various hypotheses contained in the signal are unknown. The starting point of the work was the dynamic cumulative sum (DCS) algorithm, previously developed for application to long-term electromyography (EMG) recordings. DCS has been improved in two ways. The first was a new procedure to estimate the distribution parameters to ensure the respect of the detectability property. The second was the definition of two separate, automatically determined thresholds. One of them (the lower threshold) acted to stop the estimation process; the other (the upper threshold) was applied to the detection function. The automatic determination of the thresholds was based on the Kullback-Leibler distance, which gives information about the distance between the detected segments (events). Tests on simulated data demonstrated the efficiency of these improvements of the DCS algorithm.
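    The DCS algorithm estimates its distribution parameters dynamically; for reference, the classical one-sided CUSUM detector it builds on, with known pre- and post-change means (a textbook sketch, not the paper's adaptation), can be written as:

```python
def cusum_mean_shift(signal, mu0, mu1, sigma, threshold):
    """One-sided CUSUM for a shift in mean from mu0 to mu1 in Gaussian
    noise: accumulate the log-likelihood ratio and flag a change when
    the cumulative sum minus its running minimum exceeds the threshold."""
    s, s_min = 0.0, 0.0
    for k, x in enumerate(signal):
        # log-likelihood ratio increment of N(mu1, sigma) vs N(mu0, sigma)
        s += (mu1 - mu0) / sigma ** 2 * (x - (mu0 + mu1) / 2)
        s_min = min(s_min, s)
        if s - s_min > threshold:
            return k  # sample index at which the change is detected
    return None
```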

  17. Semi-automatic charge and mass identification in two-dimensional matrices

    CERN Document Server

    Gruyer, Diego; Chbihi, A; Frankland, J D; Barlini, S; Borderie, B; Bougault, R; Duenas, J A; Neindre, N Le; Lopez, O; Pastore, G; Piantelli, S; Valdre, S; Verde, G; Vient, E

    2016-01-01

    This article presents a new semi-automatic method for charge and mass identification in two-dimensional matrices. The proposed algorithm is based on the matrix's properties and uses as little information as possible on the global form of the identification lines, making it applicable to a large variety of matrices, including various $\Delta$E-E correlations, or those coming from Pulse Shape Analysis of the charge signal in silicon detectors. Particular attention has been paid to the implementation in a suitable graphical environment, so that only two mouse-clicks are required from the user to calculate all initialization parameters. Example applications to recent data from both INDRA and FAZIA telescopes are presented.

  18. A support vector machine approach to the automatic identification of fluorescence spectra emitted by biological agents

    Science.gov (United States)

    Gelfusa, M.; Murari, A.; Lungaroni, M.; Malizia, A.; Parracino, S.; Peluso, E.; Cenciarelli, O.; Carestia, M.; Pizzoferrato, R.; Vega, J.; Gaudio, P.

    2016-10-01

    Two of the major new concerns of modern societies are biosecurity and biosafety. Several biological agents (BAs) such as toxins, bacteria, viruses, fungi and parasites are able to cause damage to living systems, whether humans, animals or plants. Optical techniques, in particular LIght Detection And Ranging (LIDAR), based on the transmission of laser pulses and analysis of the return signals, can be successfully applied to monitoring the release of biological agents into the atmosphere. It is well known that most biological agents tend to emit specific fluorescence spectra, which in principle allow their detection and identification, if excited by light of the appropriate wavelength. For these reasons, the detection of the UV Light Induced Fluorescence (UV-LIF) emitted by BAs is particularly promising. On the other hand, the stand-off detection of BAs poses a series of challenging issues; one of the most severe is the automatic discrimination between various agents which emit very similar fluorescence spectra. In this paper, a new data analysis method, based on a combination of advanced filtering techniques and Support Vector Machines, is described. The proposed approach covers all the aspects of the data analysis process, from filtering and denoising to automatic recognition of the agents. A systematic series of numerical tests has been performed to assess the potential and limits of the proposed methodology. The first investigations of experimental data have already given very encouraging results.

  19. Identification of forensic samples by using an infrared-based automatic DNA sequencer.

    Science.gov (United States)

    Ricci, Ugo; Sani, Ilaria; Klintschar, Michael; Cerri, Nicoletta; De Ferrari, Francesco; Giovannucci Uzielli, Maria Luisa

    2003-06-01

    We have recently introduced a new protocol for analyzing all core loci of the Federal Bureau of Investigation's (FBI) Combined DNA Index System (CODIS) with an infrared (IR) automatic DNA sequencer (LI-COR 4200). The amplicons were labeled with forward oligonucleotide primers, covalently linked to a new infrared fluorescent molecule (IRDye 800). The alleles were displayed as familiar autoradiogram-like images with real-time detection. This protocol was employed for paternity testing, population studies, and identification of degraded forensic samples. We extensively analyzed some simulated forensic samples and mixed stains (blood, semen, saliva, bones, and fixed archival embedded tissues), comparing the results with donor samples. Sensitivity studies were also performed for the four multiplex systems. Our results show the efficiency, reliability, and accuracy of the IR system for the analysis of forensic samples. We also compared the efficiency of the multiplex protocol with ultraviolet (UV) technology. Paternity tests, undegraded DNA samples, and real forensic samples were analyzed with this approach based on IR technology and with UV-based automatic sequencers in combination with commercially-available kits. The comparability of the results with the widespread UV methods suggests that it is possible to exchange data between laboratories using the same core group of markers but different primer sets and detection methods.

  20. Automatic identification of pectoralis muscle on digital cranio-caudal-view mammograms

    Science.gov (United States)

    Ge, Mei; Mawdsley, Gordon; Yaffe, Martin

    2011-03-01

    To improve efficiency and reduce human error in the computerized calculation of volumetric breast density, we have developed an automatic identification process which suppresses the projected region of the pectoralis muscle on digital CC-view mammograms. The pixels in the image of the pectoralis muscle represent dense tissue that is not related to risk, and will cause an error in estimated breast density if counted as fibroglandular tissue. The pectoralis muscle on the CC-view is not always visible and has variable shape and location. Our algorithm robustly detects the existence of the pectoralis in the image and segments it as a semi-elliptical region that closely matches manually segmented images. We present a pipeline in which adaptive thresholding and distance transforms are used in the initial pectoralis region identification; statistical region growing is applied to explore the region within the identified location with the aim of refining the boundary; and a 2D shape descriptor is developed for target validation: the segmented region is identified as the pectoralis muscle if it has a semi-elliptical contour. After the pectoralis muscle is identified, 1D-FFT filtering is used for boundary smoothing. Quantitative evaluation was performed by comparing manual segmentation by a trained operator with analysis using the algorithm on a set of 174 randomly selected digital mammograms. Use of the algorithm is shown to improve accuracy in the automatic determination of the volumetric ratio of breast composition by removal of the pectoralis muscle from both the numerator and denominator. It also greatly improves efficiency and throughput in large-scale volumetric mammographic density studies, where previously interaction with an operator was required to obtain that level of accuracy.
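    The region-growing step can be illustrated with a simplified intensity-based variant (the tolerance criterion and nested-list image representation here are assumptions; the paper uses a statistical criterion on real mammograms):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel, adding 4-connected neighbours
    whose intensity stays within tol of the running region mean."""
    h, w = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region:
                if abs(image[ny][nx] - total / len(region)) <= tol:
                    region.add((ny, nx))
                    total += image[ny][nx]
                    queue.append((ny, nx))
    return region
```

    The grown region would then be checked against the semi-elliptical shape descriptor before being accepted as the pectoralis muscle.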

  1. Automatic Identification of Artifact-Related Independent Components for Artifact Removal in EEG Recordings.

    Science.gov (United States)

    Zou, Yuan; Nathan, Viswam; Jafari, Roozbeh

    2016-01-01

    Electroencephalography (EEG) is the recording of electrical activity produced by the firing of neurons within the brain. These activities can be decoded by signal processing techniques. However, EEG recordings are always contaminated with artifacts which hinder the decoding process. Therefore, identifying and removing artifacts is an important step. Researchers often clean EEG recordings with assistance from independent component analysis (ICA), since it can decompose EEG recordings into a number of artifact-related and event-related potential (ERP)-related independent components. However, existing ICA-based artifact identification strategies mostly restrict themselves to a subset of artifacts, e.g., identifying eye movement artifacts only, and have not been shown to reliably identify artifacts caused by nonbiological origins like high-impedance electrodes. In this paper, we propose an automatic algorithm for the identification of general artifacts. The proposed algorithm consists of two parts: 1) an event-related feature-based clustering algorithm used to identify artifacts which have physiological origins; and 2) the electrode-scalp impedance information employed for identifying nonbiological artifacts. The results on EEG data collected from ten subjects show that our algorithm can effectively detect, separate, and remove both physiological and nonbiological artifacts. Qualitative evaluation of the reconstructed EEG signals demonstrates that our proposed method can effectively enhance the signal quality, especially the quality of ERPs, even for those that barely display ERPs in the raw EEG. The performance results also show that our proposed method can effectively identify artifacts and subsequently enhance the classification accuracies compared to four commonly used automatic artifact removal methods.

  2. Long-term abacus training induces automatic processing of abacus numbers in children.

    Science.gov (United States)

    Du, Fenglei; Yao, Yuan; Zhang, Qiong; Chen, Feiyan

    2014-01-01

    Abacus-based mental calculation (AMC) is a unique strategy for arithmetic that is based on the mental abacus. AMC experts can solve calculation problems with extraordinarily fast speed and high accuracy. Previous studies have demonstrated that abacus experts showed superior performance and special neural correlates during numerical tasks. However, most of those studies focused on the perception and cognition of Arabic numbers. It remains unclear how abacus numbers are perceived. By applying a similar enumeration Stroop task, in which participants are presented with a visual display containing two abacus numbers and asked to compare the numerosity of the beads that make up each abacus number, in the present study we investigated the automatic processing of the numerical value of abacus numbers in abacus-trained children. The results demonstrated a significant congruity effect in the numerosity comparison task for abacus-trained children, in both reaction time and error rate analysis. These results suggest that the numerical value of abacus numbers was perceived automatically by abacus-trained children after long-term training.

  3. Automatic identification of bird targets with radar via patterns produced by wing flapping.

    Science.gov (United States)

    Zaugg, Serge; Saporta, Gilbert; van Loon, Emiel; Schmaljohann, Heiko; Liechti, Felix

    2008-09-01

    Bird identification with radar is important for bird migration research, environmental impact assessments (e.g. wind farms), aircraft security and radar meteorology. In a study on bird migration, radar signals from birds, insects and ground clutter were recorded. Signals from birds show a typical pattern due to wing flapping. The data were labelled by experts into the four classes BIRD, INSECT, CLUTTER and UFO (unidentifiable signals). We present a classification algorithm aimed at automatic recognition of bird targets. Variables related to signal intensity and wing flapping pattern were extracted (via continuous wavelet transform). We used support vector classifiers to build predictive models. We estimated classification performance via cross validation on four datasets. When data from the same dataset were used for training and testing the classifier, the classification performance was extremely to moderately high. When data from one dataset were used for training and the three remaining datasets were used as test sets, the performance was lower but still extremely to moderately high. This shows that the method generalizes well across different locations or times. Our method provides a substantial gain of time when birds must be identified in large collections of radar signals and it represents the first substantial step in developing a real time bird identification radar system. We provide some guidelines and ideas for future research.
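    The paper extracts wing-flapping variables via a continuous wavelet transform; as a much cruder stand-in for illustration only, the dominant flapping rate of an echo-intensity time series can be estimated from mean-crossings:

```python
def flap_frequency(signal, sample_rate):
    """Estimate the dominant oscillation rate of an echo-intensity time
    series by counting mean-crossings (a crude stand-in for the paper's
    continuous wavelet transform features)."""
    mean = sum(signal) / len(signal)
    crossings = sum(
        1 for a, b in zip(signal, signal[1:])
        if (a - mean) * (b - mean) < 0
    )
    # two mean-crossings per oscillation cycle
    duration = len(signal) / sample_rate
    return crossings / 2 / duration
```

    In practice a wavelet-based estimate is far more robust to the noise and amplitude modulation present in real radar echoes.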

  4. Automatic identification of seismic swarms and other spatio-temporal clustering from catalogs

    Science.gov (United States)

    Nava, F. Alejandro; Glowacka, Ewa

    1994-06-01

    Statistical analysis of seismic catalogs usually requires identification of swarms and foreshock-main event-aftershock sequences, a tedious and time-consuming chore. SWaRMSHoW, a simple but versatile QBASIC program for PC, graphically displays on screen catalog epicentral activity, with optional temporal distribution scaling; identifies spatio-temporal hypocentral clusters (SwrSeq), which may be swarms or foreshock-main event-aftershock sequences, and discriminates between these; and displays SwrSeq locations and limits, and assigns them equivalent magnitudes corresponding to those of single events having seismic energy equal to that of the whole SwrSeq. SWaRMSHoW features optional detailed disk output of swarms and clusters, including origin time, location, constituent events, equivalent magnitudes, and current parameters, that allows easy application of results. Graphic screen display includes optional maps and drawings. Operation can be completely automatic or interactive. Working parameters can be reset at any time during operation. Besides swarm and sequence identification, this program's modeling of the seismicity, scaled in both space and time, is useful for studying many aspects of spatio-temporal seismicity, such as fault activation, migration of activity, quiescence, etc.
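    The equivalent magnitude assigned to a SwrSeq follows from summing the seismic energy of its events; assuming the standard log10(E) proportional to 1.5*M energy-magnitude relation (an assumption, as the record does not state which relation the program uses), a sketch is:

```python
import math

def equivalent_magnitude(magnitudes):
    """Magnitude of a single event whose seismic energy equals the summed
    energy of all events in a swarm/sequence, assuming log10(E) = 1.5*M + c."""
    energy = sum(10 ** (1.5 * m) for m in magnitudes)
    return math.log10(energy) / 1.5
```

    Note that the additive constant in the energy-magnitude relation cancels, so only the 1.5 slope matters; two equal events raise the equivalent magnitude by (2/3)*log10(2), about 0.2 units.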

  5. A new technology for automatic identification and sorting of plastics for recycling.

    Science.gov (United States)

    Ahmad, S R

    2004-10-01

    A new technology for automatic sorting of plastics, based upon optical identification of fluorescence signatures of dyes incorporated in such materials in trace concentrations prior to product manufacturing, is described. Three commercial tracers were selected primarily on the basis of their good absorbency in the 310-370 nm spectral band and their identifiable narrow-band fluorescence signatures in the visible band of the spectrum when present in binary combinations. This absorption band was selected because of the availability of strong emission lines in this band from a commercial Hg-arc lamp and high fluorescence quantum yields of the tracers at this excitation wavelength band. The plastics chosen for tracing and identification are HDPE, LDPE, PP, EVA, PVC and PET, and the tracers were compatible and chemically non-reactive with the host matrices and did not affect the transparency of the plastics. The design of the monochromatic and collimated excitation source and the sensor system are described, and their performance in identifying and sorting plastics doped with tracers at a few parts per million concentration levels is evaluated. In an industrial sorting system, the sensor was able to sort 300 mm long plastic bottles at a conveyor belt speed of 3.5 m/s with a sorting purity of ~95%. The limitation was due to mechanical singulation irregularities at high speed and the limited processing speed of the computer used.
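    The identification logic reduces to mapping the presence/absence pattern of the three tracer fluorescence bands to a plastic type. A sketch, with a hypothetical code table (the actual tracer-to-plastic assignment is not given in this record):

```python
# Hypothetical assignment of binary tracer combinations to the six plastics;
# the paper's actual coding scheme is not stated in the abstract.
TRACER_CODE = {
    (1, 0, 0): "HDPE", (0, 1, 0): "LDPE", (0, 0, 1): "PP",
    (1, 1, 0): "EVA",  (1, 0, 1): "PVC",  (0, 1, 1): "PET",
}

def identify_plastic(band_intensities, threshold):
    """Map the measured intensities of the three tracer fluorescence bands
    to a plastic type; returns None if no tracer code matches."""
    code = tuple(1 if i > threshold else 0 for i in band_intensities)
    return TRACER_CODE.get(code)
```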

  6. Long term Suboxone™ emotional reactivity as measured by automatic detection in speech.

    Science.gov (United States)

    Hill, Edward; Han, David; Dumouchel, Pierre; Dehak, Najim; Quatieri, Thomas; Moehs, Charles; Oscar-Berman, Marlene; Giordano, John; Simpatico, Thomas; Barh, Debmalya; Blum, Kenneth

    2013-01-01

    Addictions to illicit drugs are among the nation's most critical public health and societal problems. The current opioid prescription epidemic, the need for buprenorphine/naloxone (Suboxone®; SUBX) as an opioid maintenance substance, and its growing street diversion provided the impetus to determine affective states ("true ground emotionality") in long-term SUBX patients. Toward the goal of effective monitoring, we utilized emotion detection in speech as a measure of "true" emotionality in 36 SUBX patients compared to 44 individuals from the general population (GP) and 33 members of Alcoholics Anonymous (AA). Other, less objective studies have investigated the emotional reactivity of heroin, methadone and opioid-abstinent patients. These studies indicate that current opioid users have abnormal emotional experience, characterized by heightened response to unpleasant stimuli and blunted response to pleasant stimuli. However, this is, to our knowledge, the first study to evaluate "true ground" emotionality in patients on long-term buprenorphine/naloxone combination therapy (Suboxone™). We found that long-term SUBX patients showed significantly flattened affect (p<0.01) and less self-awareness of being happy, sad, and anxious compared to both the GP and AA groups. We caution against definitive interpretation of these seemingly important results until we compare the emotional reactivity of an opioid-abstinent control group using automatic detection in speech. These findings encourage continued research strategies in SUBX patients to target the specific brain regions responsible for relapse prevention in opioid addiction.

  7. Long term Suboxone™ emotional reactivity as measured by automatic detection in speech.

    Directory of Open Access Journals (Sweden)

    Edward Hill

    Full Text Available Addictions to illicit drugs are among the nation's most critical public health and societal problems. The current opioid prescription epidemic, the need for buprenorphine/naloxone (Suboxone®; SUBX) as an opioid maintenance substance, and its growing street diversion provided the impetus to determine affective states ("true ground emotionality") in long-term SUBX patients. Toward the goal of effective monitoring, we utilized emotion detection in speech as a measure of "true" emotionality in 36 SUBX patients compared to 44 individuals from the general population (GP) and 33 members of Alcoholics Anonymous (AA). Other, less objective studies have investigated the emotional reactivity of heroin, methadone and opioid-abstinent patients. These studies indicate that current opioid users have abnormal emotional experience, characterized by heightened response to unpleasant stimuli and blunted response to pleasant stimuli. However, this is, to our knowledge, the first study to evaluate "true ground" emotionality in patients on long-term buprenorphine/naloxone combination therapy (Suboxone™). We found that long-term SUBX patients showed significantly flattened affect (p<0.01) and less self-awareness of being happy, sad, and anxious compared to both the GP and AA groups. We caution against definitive interpretation of these seemingly important results until we compare the emotional reactivity of an opioid-abstinent control group using automatic detection in speech. These findings encourage continued research strategies in SUBX patients to target the specific brain regions responsible for relapse prevention in opioid addiction.

  8. An automatic method to homogenize trends in long-term monthly precipitation series

    Science.gov (United States)

    Rustemeier, E.; Kapala, A.; Mächel, H.; Meyer-Christoffer, A.; Schneider, U.; Ziese, M.; Venema, V.; Becker, A.; Simmer, C.

    2012-04-01

    Lack of homogeneity in long-term series of in-situ precipitation observations is a known problem and normally requires time-consuming manual data correction before a robust trend analysis is possible. This work focuses on the development of an algorithm for automatic data correction of multiple stations. The algorithm relies on the similarity of climate signals between nearby stations and consists of three steps: 1) construction of networks of comparable precipitation behaviour; 2) detection of breakpoints; 3) trend correction. Detection and correction are based on the homogenization software Prodige adopted from Météo France (Caussinus and Mestre, 2004). The networks are constructed from monthly accumulated precipitation and several indices. For the classification, principal component analysis in S-mode is applied, followed by a VARIMAX rotation. Within each network, a segmentation method is used to detect the breaks. In order to obtain a fully automatic method, scaled time series are combined to create the reference series. The monthly correction applied is a multiple linear regression, as described in Mestre (2004), which also conserves the annual cycle. At present, the algorithm has been used to homogenize 100 years of precipitation records, without any missing values, from stations in Germany. The data were recently digitized by the Meteorological Institute of the University of Bonn and the Deutscher Wetterdienst. The resulting networks correspond well to the German geographical regions. The number of detected breaks varies between 0 and 7 per station. The majority of breaks are very small (below ±10 mm per year), apart from a few large ones (up to ±200 mm). In future, the algorithm will be used to generate a homogeneous global precipitation data set, HOMPRA, for the period 1951-2005 using more than 16000 stations, in collaboration with the Global Precipitation Climatology Centre (GPCC, Becker et al., 2012).
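The breakpoint-detection step can be illustrated with a much simpler stand-in for Prodige's penalized segmentation: scan a candidate-minus-reference difference series for the split that maximizes the size-weighted mean shift. All names and the minimum segment length below are illustrative, not taken from the HOMPRA code:

```python
import statistics

def detect_breakpoint(diff_series, min_seg=5):
    """Locate the single most likely break in a candidate-minus-reference
    difference series by maximizing the between-segment mean shift.
    A simplified stand-in for penalized segmentation (Prodige)."""
    n = len(diff_series)
    best_k, best_stat = None, 0.0
    for k in range(min_seg, n - min_seg):
        left, right = diff_series[:k], diff_series[k:]
        shift = abs(statistics.mean(left) - statistics.mean(right))
        # weight by segment sizes so breaks near the series edges
        # are not spuriously favoured
        stat = shift * (len(left) * len(right) / n) ** 0.5
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat
```

In a multi-break setting this scan would be applied recursively to the sub-segments, with a penalty term deciding when to stop splitting.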

  9. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    Science.gov (United States)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capturing, as implemented on, for example, airborne or mobile laser scanning systems, can efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, the user defines a workflow consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class and are next separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of over 4000 urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
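The PCA-driven local dimensionality analysis mentioned above typically reduces, per point, to eigenvalue ratios of the 3x3 covariance matrix of the point's neighbourhood; points whose neighbourhoods scatter in all three directions become tree candidates. A sketch of these standard features (not the actual IQmulus/Spark code):

```python
import numpy as np

def dimensionality_features(neighborhood):
    """Linearity, planarity and scattering of a local point neighbourhood,
    computed from the sorted eigenvalues l1 >= l2 >= l3 of its
    covariance matrix. High 'scatter' marks vegetation-like points."""
    pts = np.asarray(neighborhood, dtype=float)  # shape (n_points, 3)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(np.cov(pts.T)), reverse=True)
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
```

A neighbourhood sampled along a pole or wire gives high linearity, a facade or road patch gives high planarity, and foliage gives a scatter value near 1.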

  10. A Novel OD Estimation Method Based on Automatic Vehicle Identification Data

    Science.gov (United States)

    Sun, Jian; Feng, Yu

    With the development and application of Automatic Vehicle Identification (AVI) technologies, a novel high-resolution OD estimation method is proposed based on AVI detector information. In the first step, four detection categories (Ox + Dy, Ox/Dy + Path(s), Ox/Dy, Path(s)) are distinguished. The initial OD matrix is then updated using the Ox + Dy sample information, taking AVI detector errors into account. With reference to particle filtering, the link-path relationship data are revised using the information from the last three categories based on Bayesian inference, and the possible trajectories and OD are finally determined using a Monte Carlo random process. Based on the current deployment of video detectors in Shanghai, the North-South expressway, which includes 17 OD pairs and 9 AVI detectors, was selected as the testbed. The results show a calculated average relative error of 12.09% under the constraints that the simulation error is under 15% and the detector error is about 10%. They also show that the method is highly efficient and can fully use partial vehicle trajectories, which satisfies the requirements of dynamic traffic management applications in practice.

  11. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    Science.gov (United States)

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

    In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, the processed data capped at the speed limit and the unprocessed data retaining the original speeds, were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data were superior. Both multi-level models and random parameters models outperformed the Negative Binomial model, and the models with random parameters achieved the best model fit. The contributing factors identified imply that on the urban expressway lower speeds and higher speed variation could significantly increase the crash likelihood. Other significant geometric factors included auxiliary lanes and horizontal curvature.

  12. Adoption of automatic identification systems by grocery retailers in the Johannesburg area

    Directory of Open Access Journals (Sweden)

    Christopher C. Darlington

    2011-11-01

    Full Text Available Retailers not only need the right data capture technology to meet the requirements of their applications, they must also decide on the optimum technology from among the different symbologies that have been developed over the years. Automatic identification systems (AIS) are a priority for decision makers as they attempt to obtain the best blend of equipment to ensure greater loss prevention and higher reliability in data capture. However, there is a risk of taking too simplistic a view of adopting AIS, since no one solution is applicable across an industry or business model. This problem is addressed through an exploratory, descriptive study in which the nature and value of AIS adoption by grocery retailers in the Johannesburg area is interrogated. Mixed empirical results indicate that, as retailers adopt AIS in order to improve their supply chain management systems, different types of applications are associated with various constraints and opportunities. Overall, this study is in line with previous research supporting the notion that supply chain decisions are of a strategic nature, even though efficient management of information is a day-to-day operational business decision.

  13. Interchangeable Data Protocol for VMeS (Vessel Messaging System) and AIS (Automatic Identification System)

    Directory of Open Access Journals (Sweden)

    Farid Andhika

    2012-09-01

    Full Text Available VMeS (Vessel Messaging System) is a radio-based communication system for sending messages between VMeS terminals on ships at sea and a VMeS gateway on land. Systems for monitoring ships at sea generally use AIS (Automatic Identification System), which is deployed in ports everywhere to monitor vessel conditions and prevent collisions between ships. In this study, a data format suitable for VMeS is designed so that it can be made interchangeable with AIS and thus be read by AIS receivers, targeting vessels under 30 GT (Gross Tonnage). The VMeS data format is designed in three types, position data, ship information data, and short message data, which are made interchangeable with AIS message types 1, 4 and 8. Performance tests of the interchange system show that as the message transmission period increases, the total delay increases but packet loss decreases. When sending messages every 5 seconds at speeds of 0-40 km/h, 96.67% of the data were received correctly. Data suffer packet loss when the received power level falls below -112 dBm. The longest distance reached by the modem while moving was 530 meters, between the ITS Informatics building and Laboratory B406, with a received power level of -110 dBm.

  14. Automatic Identification Algorithm for KPIs

    Institute of Scientific and Technical Information of China (English)

    张卓

    2016-01-01

    Applying the principles of mathematical statistics, a method is presented for estimating the normal value range of traffic volume using actual monitoring data as a sample, and the result is extended to general KPIs such as traffic volume, handover success rate and paging volume. Actual monitoring data are used as a sample to estimate the mean and variance of a KPI, from which its distribution function and normal value range are inferred; finally, an automatic identification algorithm and an automatic control procedure are given.
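Under the normality assumption sketched in the abstract, the "normal value range" of a KPI reduces to a mean ± k·sigma band estimated from monitored samples. A generic sketch (the paper's exact thresholds are not given; k=3 covers ~99.7% of a normal distribution):

```python
import statistics

def kpi_normal_range(samples, k=3.0):
    """Estimate the normal operating range of a KPI (e.g. traffic volume)
    from monitored samples, assuming an approximately normal distribution:
    mean +/- k standard deviations."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mu - k * sigma, mu + k * sigma

def is_abnormal(value, samples, k=3.0):
    """Flag a new observation that falls outside the estimated range."""
    lo, hi = kpi_normal_range(samples, k)
    return not (lo <= value <= hi)
```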

  15. Multiple damage identification and imaging in an aluminum plate using effective Lamb wave response automatic extraction technology

    Science.gov (United States)

    Ouyang, Qinghua; Zhou, Li; Liu, Xiaotong

    2016-04-01

    In order to identify multiple damage sites in a structure, a method of multiple damage identification and imaging based on an effective Lamb wave response automatic extraction algorithm is proposed. In this method, the key inspection area of the structure is divided into a number of subregions, and the effective response signals containing the structural damage information are automatically extracted from the entire Lamb wave responses received by the piezoelectric sensors. A correlation-coefficient-based damage index is then calculated for every subregion from the effective response signals. Finally, damage identification and imaging are performed using the reconstruction algorithm for probabilistic inspection of damage (RAPID) technique. Experiments were conducted on an aluminum plate. The results show that the proposed method can quickly and effectively identify single or multiple damage and clearly image the damage in the inspected area.
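Both the correlation-coefficient damage index and the RAPID imaging step are standard and can be sketched as follows. The geometry parameter beta=1.05 is a typical value from the RAPID literature, not taken from this paper:

```python
import numpy as np

def damage_index(baseline, current):
    """DI = 1 - correlation coefficient between baseline and current
    Lamb-wave response windows; 0 for identical signals."""
    rho = np.corrcoef(baseline, current)[0, 1]
    return 1.0 - rho

def rapid_image(grid_x, grid_y, paths, beta=1.05):
    """Probability image via the RAPID reconstruction: each
    transmitter-receiver path spreads its damage index over an
    elliptical influence zone whose width is controlled by beta."""
    img = np.zeros((len(grid_y), len(grid_x)))
    for tx, rx, di in paths:  # tx, rx: (x, y) sensor positions
        d_path = np.hypot(rx[0] - tx[0], rx[1] - tx[1])
        for j, y in enumerate(grid_y):
            for i, x in enumerate(grid_x):
                # ratio of (dist to tx + dist to rx) over path length:
                # 1 on the direct path, growing off-axis
                d = (np.hypot(x - tx[0], y - tx[1]) +
                     np.hypot(x - rx[0], y - rx[1])) / d_path
                img[j, i] += di * (beta - min(d, beta)) / (beta - 1.0)
    return img
```

Summing the elliptical contributions of many crossing paths concentrates probability at the damage locations.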

  16. Automatic procedure for mass and charge identification of light isotopes detected in CsI(Tl) of the GARFIELD apparatus

    Science.gov (United States)

    Morelli, L.; Bruno, M.; Baiocco, G.; Bardelli, L.; Barlini, S.; Bini, M.; Casini, G.; D'Agostino, M.; Degerlier, M.; Gramegna, F.; Kravchuk, V. L.; Marchi, T.; Pasquali, G.; Poggi, G.

    2010-08-01

    Mass and charge identification of light charged particles detected with the 180 CsI(Tl) detectors of the GARFIELD apparatus is presented. A "tracking" method to automatically sample the Z and A ridges of "Fast-Slow" histograms is developed. An empirical analytic identification function is used to fit correlations between Fast and Slow, in order to determine, event by event, the atomic and mass numbers of the detected charged reaction products. A summary of the advantages of the proposed method with respect to "hand-based" procedures is reported.

  17. Automatic procedure for mass and charge identification of light isotopes detected in CsI(Tl) of the GARFIELD apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Morelli, L.; Bruno, M.; Baiocco, G. [Dipartimento di Fisica dell' Universita and INFN, Bologna (Italy); Bardelli, L.; Barlini, S.; Bini, M.; Casini, G. [Dipartimento di Fisica dell' Universita and INFN, Firenze (Italy); D' Agostino, M., E-mail: dagostino@bo.infn.i [Dipartimento di Fisica dell' Universita and INFN, Bologna (Italy); Degerlier, M.; Gramegna, F. [INFN, Laboratori Nazionali di Legnaro (Italy); Kravchuk, V.L. [Dipartimento di Fisica dell' Universita and INFN, Bologna (Italy); INFN, Laboratori Nazionali di Legnaro (Italy); Marchi, T. [Dipartimento di Fisica dell' Universita, Padova (Italy); NUCL-EX Collaboration (Italy); INFN, Laboratori Nazionali di Legnaro (Italy); Pasquali, G.; Poggi, G. [Dipartimento di Fisica dell' Universita and INFN, Firenze (Italy)

    2010-08-21

    Mass and charge identification of light charged particles detected with the 180 CsI(Tl) detectors of the GARFIELD apparatus is presented. A 'tracking' method to automatically sample the Z and A ridges of 'Fast-Slow' histograms is developed. An empirical analytic identification function is used to fit correlations between Fast and Slow, in order to determine, event by event, the atomic and mass numbers of the detected charged reaction products. A summary of the advantages of the proposed method with respect to 'hand-based' procedures is reported.

  18. Maritime surveillance with synthetic aperture radar (SAR) and automatic identification system (AIS) onboard a microsatellite constellation

    Science.gov (United States)

    Peterson, E. H.; Zee, R. E.; Fotopoulos, G.

    2012-11-01

    New developments in small spacecraft capabilities will soon enable formation-flying constellations of small satellites, performing cooperative distributed remote sensing at a fraction of the cost of traditional large spacecraft missions. As part of ongoing research into applications of formation-flight technology, recent work has developed a mission concept based on combining synthetic aperture radar (SAR) with automatic identification system (AIS) data. Two or more microsatellites would trail a large SAR transmitter in orbit, each carrying a SAR receiver antenna and one carrying an AIS antenna. Spaceborne AIS can receive and decode AIS data from a large area, but accurate decoding is limited in high traffic areas, and the technology relies on voluntary vessel compliance. Furthermore, vessel detection amidst speckle in SAR imagery can be challenging. In this constellation, AIS broadcasts of position and velocity are received and decoded, and used in combination with SAR observations to form a more complete picture of maritime traffic and identify potentially non-cooperative vessels. Due to the limited transmit power and ground station downlink time of the microsatellite platform, data will be processed onboard the spacecraft. Herein we present the onboard data processing portion of the mission concept, including methods for automated SAR image registration, vessel detection, and fusion with AIS data. Georeferencing in combination with a spatial frequency domain method is used for image registration. Wavelet-based speckle reduction facilitates vessel detection using a standard CFAR algorithm, while leaving sufficient detail for registration of the filtered and compressed imagery. Moving targets appear displaced from their actual position in SAR imagery, depending on their velocity and the image acquisition geometry; multiple SAR images acquired from different locations are used to determine the actual positions of these targets. Finally, a probabilistic inference
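The vessel-detection stage named above (a standard CFAR detector applied after speckle reduction) can be illustrated in one dimension. The window sizes and threshold scale below are illustrative, not the mission's actual parameters:

```python
import numpy as np

def ca_cfar_1d(signal, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: a cell is declared a detection if it exceeds
    scale times the mean of the surrounding training cells, with guard
    cells excluded so the target does not contaminate its own noise
    estimate. A 1-D sketch of the detector used for SAR vessel detection."""
    n = len(signal)
    hits = []
    for i in range(train + guard, n - train - guard):
        left = signal[i - guard - train: i - guard]
        right = signal[i + guard + 1: i + guard + train + 1]
        noise = np.mean(np.concatenate([left, right]))
        if signal[i] > scale * noise:
            hits.append(i)
    return hits
```

In the 2-D SAR case the training and guard cells form rings around each pixel, but the constant-false-alarm-rate logic is identical.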

  19. FragIdent – Automatic identification and characterisation of cDNA-fragments

    Directory of Open Access Journals (Sweden)

    Goehler Heike

    2009-03-01

    Full Text Available Abstract Background Many genetic studies and functional assays are based on cDNA fragments. After the generation of cDNA fragments from an mRNA sample, their content is at first unknown and must be assigned by sequencing reactions or hybridisation experiments. Even in characterised libraries, a considerable number of clones are wrongly annotated. Furthermore, mix-ups can happen in the laboratory. It is therefore essential to the relevance of experimental results to confirm or determine the identity of the employed cDNA fragments. However, the manual approach to characterising these fragments using BLAST web interfaces is not suited to larger numbers of sequences, and so far no user-friendly software has been publicly available. Results Here we present FragIdent, an application for the automatic identification of open reading frames (ORFs) within cDNA fragments. The software performs BLAST analyses to identify the genes represented by the sequences and suggests primers to complete the sequencing of the whole insert. Gene-specific information as well as the protein domains encoded by the cDNA fragment are retrieved from Internet-based databases and included in the output. The application features an intuitive graphical interface and is designed for researchers without any bioinformatics skills. It is suited to projects comprising up to several hundred different clones. Conclusion We used FragIdent to identify 84 cDNA clones from a yeast two-hybrid experiment. Furthermore, we identified 131 protein domains within our analysed clones. The source code is freely available from our homepage at http://compbio.charite.de/genetik/FragIdent/.
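The first step such a tool automates, finding candidate ORFs in a cDNA fragment before BLASTing them, can be sketched as a simple forward-strand scan. This is a minimal illustration, not the FragIdent implementation, which would also handle the reverse strand and incomplete ORFs:

```python
def find_orfs(seq, min_len=90):
    """Return (start, end, frame) tuples for forward-strand open reading
    frames: ATG ... first in-frame stop codon, at least min_len bases."""
    stops = {"TAA", "TAG", "TGA"}
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j + 3] not in stops:
                    j += 3
                if j + 3 <= len(seq) and j + 3 - i >= min_len:
                    orfs.append((i, j + 3, frame))
                    i = j  # resume scanning after this ORF
            i += 3
    return orfs
```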

  20. MetaboHunter: an automatic approach for identification of metabolites from 1H-NMR spectra of complex mixtures

    Directory of Open Access Journals (Sweden)

    Culf Adrian

    2011-10-01

    Full Text Available Abstract Background One-dimensional 1H-NMR spectroscopy is widely used for high-throughput characterization of metabolites in complex biological mixtures. However, the accurate identification of individual compounds is still a challenging task, particularly in spectral regions with higher peak densities. The need for automatic tools to facilitate and further improve the accuracy of such tasks, while using increasingly larger reference spectral libraries, has become a priority of current metabolomics research. Results We introduce a web server application, called MetaboHunter, which can be used for automatic assignment of 1H-NMR spectra of metabolites. MetaboHunter provides methods for automatic metabolite identification based on spectra or peak lists, with three different search methods and with the possibility of peak drift within a user-defined spectral range. The assignment is performed using, as reference libraries, manually curated data from two major publicly available databases of NMR metabolite standard measurements (HMDB and MMCD). Tests using a variety of synthetic and experimental spectra of single- and multi-metabolite mixtures show that MetaboHunter is able to identify, on average, more than 80% of detectable metabolites from spectra of synthetic mixtures and more than 50% from spectra corresponding to experimental mixtures. This work also suggests that better scoring functions improve the performance of MetaboHunter's metabolite identification methods by more than 30%. Conclusions MetaboHunter is a freely accessible, easy to use and user-friendly 1H-NMR-based web server application that provides efficient data input and pre-processing, flexible parameter settings, fast and automatic metabolite fingerprinting and results visualization via intuitive plotting and compound peak hit maps. Compared to other published and freely accessible metabolomics tools, MetaboHunter implements three efficient methods to search for metabolites in manually curated
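At its core, a peak-list search of this kind scores each reference metabolite by how many of its library peaks appear in the sample peak list within a chemical-shift tolerance. The sketch below is a crude analogue of such a search, not MetaboHunter's actual scoring; the 0.03 ppm tolerance and all names are illustrative:

```python
def match_metabolites(sample_peaks, reference_library, tol=0.03):
    """Score each reference metabolite by the fraction of its library
    peaks (ppm positions) found in the sample peak list within +/- tol.
    Returns (name, score) pairs, best match first."""
    scores = {}
    for name, ref_peaks in reference_library.items():
        hits = sum(1 for r in ref_peaks
                   if any(abs(r - s) <= tol for s in sample_peaks))
        scores[name] = hits / len(ref_peaks)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A drift-tolerant search would additionally slide each reference pattern within a small ppm window before scoring, which is what makes crowded spectral regions tractable.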

  1. Google Earth Visualizations of the Marine Automatic Identification System (AIS): Monitoring Ship Traffic in National Marine Sanctuaries

    Science.gov (United States)

    Schwehr, K.; Hatch, L.; Thompson, M.; Wiley, D.

    2007-12-01

    The Automatic Identification System (AIS) is a new technology that provides ship position reports with location, time, and identity information, without human intervention, from ships carrying the transponders to any receiver listening to the broadcasts. In collaboration with the USCG's Research and Development Center, NOAA's Stellwagen Bank National Marine Sanctuary (SBNMS) has installed 3 AIS receivers around Massachusetts Bay to monitor ship traffic transiting the sanctuary and surrounding waters. The SBNMS and the USCG also worked together to propose shifting the shipping lanes (termed the traffic separation scheme, TSS) that transit the sanctuary slightly to the north, to reduce the probability of ship strikes on the whales that frequent the sanctuary. Following approval by the United Nations' International Maritime Organization, AIS provided a means for NOAA to assess changes in the distribution of shipping traffic caused by the formal change in the TSS effective July 1, 2007. However, there was no easy way to visualize this type of time series data. We have created a software package called noaadata-py to process the AIS ship reports and produce KML files for viewing in Google Earth. Ship tracks can be shown changing over time to allow the viewer to feel the motion of traffic through the sanctuary. The ship tracks can also be gridded to create ship traffic density reports for specified periods of time. The density is displayed as a map draped on the sea surface or as vertical histogram columns. Additional visualizations such as bathymetry images, S57 nautical charts, and USCG Marine Information for Safety and Law Enforcement (MISLE) data can be combined with the ship traffic visualizations to give a more complete picture of the maritime environment. AIS traffic analyses have the potential to give managers throughout NOAA's National Marine Sanctuaries an improved ability to assess the impacts of ship traffic on the marine resources they seek to protect.
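The gridded density layer described above amounts to binning position reports into cells and counting. A sketch of that step only; noaadata-py's actual KML generation is not reproduced here, and the cell size is illustrative:

```python
from collections import Counter

def density_grid(positions, lon0, lat0, cell_deg=0.1):
    """Bin AIS position reports (lon, lat) into a regular grid anchored
    at (lon0, lat0) and count reports per cell -- the ship-traffic
    density layer later draped on the sea surface in Google Earth."""
    counts = Counter()
    for lon, lat in positions:
        i = int((lon - lon0) // cell_deg)
        j = int((lat - lat0) // cell_deg)
        counts[(i, j)] += 1
    return counts
```

Each cell count can then be written out as a KML polygon colored (or extruded into a histogram column) by its count.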

  2. Shipborne automatic identification system

    Institute of Scientific and Technical Information of China (English)

    叶猛; 高璐; 纪圣谋; 葛中芹; 徐健健

    2013-01-01

    This paper presents a new shipborne automatic identification system (AIS) design that complies with the relevant TDMA protocols. The system uses the ARM7TDMI-based S3C44130X as its processor, together with a CMX910 baseband signal-processor chip. The design implements the ITDMA, RATDMA and SOTDMA communication protocols and meets the requirements for system synchronization, transmission and reception; it also completes the communication software design, including information exchange between the keyboard and the display system. Experiments were carried out by connecting the I/Q signals directly, transmitting and receiving through a transponder, and varying the transmission rate; the relation between correlation and transmit/receive rate changes was analyzed, and a running test of the complete system was performed. The results show that the system can correctly join the network and exchange information stably with other shipborne stations and base stations according to the communication protocol.

  3. New semi-automatic method for reaction product charge and mass identification in heavy-ion collisions at Fermi energies

    Science.gov (United States)

    Gruyer, D.; Bonnet, E.; Chbihi, A.; Frankland, J. D.; Barlini, S.; Borderie, B.; Bougault, R.; Dueñas, J. A.; Galichet, E.; Kordyasz, A.; Kozik, T.; Le Neindre, N.; Lopez, O.; Pârlog, M.; Pastore, G.; Piantelli, S.; Valdré, S.; Verde, G.; Vient, E.

    2017-03-01

    This article presents a new semi-automatic method for charge and mass identification of charged nuclear fragments using either ΔE - E correlations between measured energy losses in two successive detectors or correlations between charge signal amplitude and rise time in a single silicon detector, derived from digital pulse shape analysis techniques. In both cases different nuclear species (defined by their atomic number Z and mass number A) can be visually identified from such correlations if they are presented as a two-dimensional histogram ('identification matrix'), in which case correlations for different species populate different ridge lines ('identification lines') in the matrix. The proposed algorithm is based on the identification matrix's properties and uses as little information as possible on the global form of the identification lines, making it applicable to a large variety of matrices. Particular attention has been paid to the implementation in a suitable graphical environment, so that only two mouse-clicks are required from the user to calculate all initialization parameters. Example applications to recent data from both INDRA and FAZIA telescopes are presented.

  4. Automatic classification of long-term ambulatory ECG records according to type of ischemic heart disease

    Directory of Open Access Journals (Sweden)

    Smrdel Aleš

    2011-12-01

    Full Text Available Abstract Background Elevated transient ischemic ST segment episodes in ambulatory electrocardiographic (AECG) records generally appear in patients with transmural ischemia (e.g. Prinzmetal's angina), while depressed ischemic episodes appear in patients with subendocardial ischemia (e.g. unstable or stable angina). The huge amount of AECG data necessitates automatic methods for analysis. We present an algorithm which determines the type of transient ischemic episodes in the leads of records (elevations/depressions) and classifies AECG records according to type of ischemic heart disease (Prinzmetal's angina; coronary artery diseases excluding patients with Prinzmetal's angina; other heart diseases). Methods The algorithm was developed using 24-hour AECG records of the Long Term ST Database (LTST DB). The algorithm robustly generates an ST segment level function in each AECG lead of the records, and tracks time-varying non-ischemic ST segment changes, such as slow drifts and axis shifts, to construct the ST segment reference function. The ST segment reference function is then subtracted from the ST segment level function to obtain the ST segment deviation function. Using the third statistical moment of the histogram of the ST segment deviation function, the algorithm determines deflections of leads according to the type of ischemic episodes present (elevations, depressions), and then classifies records according to type of ischemic heart disease. Results Using 74 records of the LTST DB (containing elevated or depressed ischemic episodes, mixed ischemic episodes, or no episodes), the algorithm correctly determined deflections of the majority of the leads of the records and correctly classified the majority of the records with Prinzmetal's angina into the Prinzmetal's angina category (7 out of 8); the majority of the records with other coronary artery diseases into the coronary artery diseases excluding patients with Prinzmetal's angina category (47 out of 55); and correctly
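The decision rule in Methods, using the third statistical moment (skewness) of the ST deviation histogram to label a lead as containing elevations or depressions, can be sketched directly. The ±0.5 threshold is illustrative; the abstract does not state the paper's value:

```python
def third_moment_skew(deviation):
    """Normalized third central moment (skewness) of an ST deviation
    function: markedly positive -> episodes are mostly ST elevations,
    markedly negative -> ST depressions, near zero -> no deflection."""
    n = len(deviation)
    mu = sum(deviation) / n
    m2 = sum((x - mu) ** 2 for x in deviation) / n
    m3 = sum((x - mu) ** 3 for x in deviation) / n
    return m3 / (m2 ** 1.5) if m2 > 0 else 0.0

def lead_deflection(deviation, threshold=0.5):
    g = third_moment_skew(deviation)
    if g > threshold:
        return "elevations"
    if g < -threshold:
        return "depressions"
    return "none"
```

Skewness works here because ischemic episodes are brief relative to the 24-hour record: they form a one-sided tail on an otherwise near-zero deviation histogram.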

  5. Introducing a semi-automatic method to simulate large numbers of forensic fingermarks for research on fingerprint identification.

    Science.gov (United States)

    Rodriguez, Crystal M; de Jongh, Arent; Meuwly, Didier

    2012-03-01

    Statistical research on fingerprint identification and the testing of automated fingerprint identification system (AFIS) performances require large numbers of forensic fingermarks. These fingermarks are rarely available. This study presents a semi-automatic method to create simulated fingermarks in large quantities that model minutiae features or images of forensic fingermarks. This method takes into account several aspects contributing to the variability of forensic fingermarks such as the number of minutiae, the finger region, and the elastic deformation of the skin. To investigate the applicability of the simulated fingermarks, fingermarks have been simulated with 5-12 minutiae originating from different finger regions for six fingers. An AFIS matching algorithm was used to obtain similarity scores for comparisons between the minutiae configurations of fingerprints and the minutiae configurations of simulated and forensic fingermarks. The results showed similar scores for both types of fingermarks suggesting that the simulated fingermarks are good substitutes for forensic fingermarks.

  6. Automatic solution for detection, identification and biomedical monitoring of a cow using remote sensing for optimised treatment of cattle

    Directory of Open Access Journals (Sweden)

    Yevgeny Beiderman

    2014-12-01

    Full Text Available In this paper we show how a novel photonic remote sensing system assembled on a robotic platform can extract vital biomedical parameters from cattle, including heartbeat, breathing and chewing activity. The sensor is based upon a camera and a laser, using self-interference phenomena. The whole system is intended to provide an automatic solution for detection, identification and biomedical monitoring of a cow. The detection algorithm is based upon image processing involving probability map construction. The identification algorithms involve well-known image pattern recognition techniques. The sensor is mounted on top of an automated robotic platform in order to support animal decision making. Field tests and computer-simulated results are presented.

  7. 6 CFR 37.21 - Temporary or limited-term driver's licenses and identification cards.

    Science.gov (United States)

    2010-01-01

    ... identification cards. 37.21 Section 37.21 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY REAL ID DRIVER'S LICENSES AND IDENTIFICATION CARDS Minimum Documentation, Verification, and Card... may only issue a temporary or limited-term REAL ID driver's license or identification card to...

  8. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

    Directory of Open Access Journals (Sweden)

    Ruben Zazo

    Full Text Available Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made these former results hardly reproducible. Further, we extend those previous experiments modeling unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), proving that with as little as 0.5s an accuracy of over 50% can be achieved.

  9. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

    Science.gov (United States)

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made these former results hardly reproducible. Further, we extend those previous experiments modeling unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s) proving that with as little as 0.5s an accuracy of over 50% can be achieved.

  10. Automatic Identification and Alert of Gust Fronts

    Institute of Scientific and Technical Information of China (English)

    郑佳锋; 张杰; 朱克云; 刘艳霞; 张涛

    2013-01-01

    Based on the echo signatures of gust fronts, this paper designs an automatic gust front identification algorithm: in the velocity field, convergence-line identification is considered; in the reflectivity field, narrowband echo identification; the two results are then combined through the spatial consistency of the narrowband and the convergence line to identify the gust front. On top of this algorithm, an alerting function is implemented in two forms: flashing of the front line and output of physical quantities. Identification performance was verified against surface automatic weather station data and 98 volume-scan samples of gust fronts detected by three radar stations (Shangqiu and Zhengzhou, Henan, on 3 June 2009, and Fuyang, Anhui, on 5 June 2009), and evaluated with the critical success index. The results show that the bilateral gradient method effectively filters out wide-area precipitation echoes while retaining the narrowband echo, and that the algorithm only needs the lower elevation scans, which greatly improves identification efficiency; the algorithm applied to the velocity field effectively identifies radial convergence lines and is also applicable to the identification of low-level radial wind shear and convergence lines. Evaluated on the 98 volume-scan samples with the critical success index, the identification rate reaches 68.4%. Gust fronts often cause serious ground gales and strong wind shear. Therefore, the short-term forecast, nowcasting and civil aviation departments pay close attention to research on gust fronts. Based on the echo characteristics of gust fronts in the reflectivity and velocity fields of Doppler radar, an identification algorithm for gust fronts is designed. In the velocity field, the convergence line is identified by finding consistently decreasing radial velocity and is inspected using a convergence parameter threshold, a gradient threshold and a flux threshold. In the reflectivity field, the reflectivity data are classified into different levels. Then, the narrowband is identified by an algorithm called bilateral gradient, which is designed by fully using the narrowband geometrical characteristic, the interval between the narrowband and the echo matrix. The bilateral gradient algorithm can effectively filter out the wide range of precipitation echoes and retain the narrowband in the reflectivity image. Meanwhile, in order to filter out the remaining noise, length calculated and image thinning
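
The "bilateral grads" (bilateral gradient) narrowband test lends itself to a one-dimensional toy version: a narrowband echo shows a steep rise and a steep fall in reflectivity within a small width, whereas wide precipitation echoes do not. This Python sketch is purely illustrative and is not the paper's radar implementation; the thresholds and function name are invented.

```python
def is_narrowband(profile, grad_thresh, max_width):
    """1-D sketch of the bilateral gradient test along a radial:
    flag the profile as a narrowband if a steep positive gradient
    is followed by a steep negative gradient within max_width bins."""
    rises = [i for i in range(1, len(profile))
             if profile[i] - profile[i - 1] >= grad_thresh]
    falls = [i for i in range(1, len(profile))
             if profile[i - 1] - profile[i] >= grad_thresh]
    return any(0 < f - r <= max_width for r in rises for f in falls)
```

A narrow spike passes the test; a broad precipitation block of the same intensity fails it, which is the property used to filter wide-area precipitation echoes.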

  11. Independent component analysis-based algorithm for automatic identification of Raman spectra applied to artistic pigments and pigment mixtures.

    Science.gov (United States)

    González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio

    2015-03-01

    A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
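
The core matching idea, comparing an unknown spectrum against a reference library and attaching a reliability figure to the best match, can be sketched very simply. Note the actual method above relies on PCA and ICA to handle multicomponent mixtures; this hypothetical Python fragment only illustrates library matching with cosine similarity, with invented names throughout.

```python
import math

def cosine(a, b):
    """Cosine similarity between two spectra sampled on the same grid."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(spectrum, library):
    """Rank reference pigments by spectral similarity; the top score
    doubles as a crude reliability factor for the identification."""
    scores = {name: cosine(spectrum, ref) for name, ref in library.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

A low top score would signal that no single reference explains the spectrum, which is exactly where the multicomponent (ICA-based) analysis becomes necessary.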

  12. Semi-automatic identification of counterfeit offers in online shopping platforms

    OpenAIRE

    Wartner, Christian; Arnold, Patrick; Rahm, Erhard

    2015-01-01

    Product counterfeiting is a serious problem causing the industry estimated losses of billions of dollars every year. With the increasing spread of e-commerce, the number of counterfeit products sold online increased substantially. We propose the adoption of a semi-automatic workflow to identify likely counterfeit offers in online platforms and to present these offers to a domain expert for manual verification. The workflow includes steps to generate search queries for relevant product offers,...

  13. The Effects of Degraded Vision and Automatic Combat Identification Reliability on Infantry Friendly Fire Engagements

    OpenAIRE

    Kogler, Timothy Michael

    2003-01-01

    Fratricide is one of the most devastating consequences of any military conflict. Target identification failures have been identified as the last link in a chain of mistakes that can lead to fratricide. Other links include weapon and equipment malfunctions, command, control, and communication failures, navigation failures, fire discipline failures, and situation awareness failures. This research examined the effects of degraded vision and combat identification reliability on the time-stress...

  14. PARAMETRIC OPTIMIZATION OF THE MULTIMODAL DECISION-LEVEL FUSION SCHEME IN AUTOMATIC BIOMETRIC PERSON’S IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    A. V. Timofeev

    2014-05-01

    Full Text Available This paper deals with an original method of structure-parametric optimization for a multimodal decision-level fusion scheme which combines the results of partial solutions of the classification task obtained from an assembly of monomodal classifiers. As a result, a multimodal fusion classifier which has the minimum value of the total error rate has been obtained. Properties of the proposed approach are proved rigorously. The suggested method has an important practical application in automatic multimodal biometric person identification systems and in systems for remote monitoring of extended objects. The proposed solution is easy to implement in real operating systems. The paper presents a simulation study of the effectiveness of this optimized multimodal fusion classifier carried out on a special bimodal biometric database. Simulation results showed the high practical effectiveness of the suggested method.
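
Decision-level fusion of monomodal classifiers can be illustrated with a weighted vote. This is only a sketch of the fusion step; the paper's contribution, optimizing the structure and weights to minimize the total error rate, is not shown, and all names here are hypothetical.

```python
def fused_decision(decisions, weights):
    """Decision-level fusion sketch: weighted vote over monomodal
    classifier decisions (1 = accept identity claim, 0 = reject).
    In the paper the weights would be tuned offline to minimize
    the total error rate of the fused classifier."""
    score = sum(w * d for w, d in zip(weights, decisions))
    return 1 if score >= sum(weights) / 2 else 0
```

With weights reflecting each modality's reliability, a strong modality can outvote two weak ones, which is the practical appeal of decision-level fusion in biometric identification.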

  15. Compensation of Cable Voltage Drops and Automatic Identification of Cable Parameters in 400 Hz Ground Power Units

    DEFF Research Database (Denmark)

    Borup, Uffe; Nielsen, Bo Vork; Blaabjerg, Frede

    2004-01-01

    In this paper a new cable voltage drop compensation scheme for ground power units (GPU) is presented. The scheme is able to predict and compensate the voltage drop in an output cable by measuring the current quantities at the source. The prediction is based on an advanced cable model that includes self and mutual impedance parameters. The model predicts the voltage drop at both symmetrical and unbalanced loads. In order to determine the cable model parameters an automatic identification concept is derived. The concept is tested in full scale on a 90-kVA 400-Hz GPU with two different cables. It is concluded that the performance is significantly improved both with symmetrical and unsymmetrical cables and with balanced and unbalanced loads.
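
The prediction step described above, estimating the load-end voltage from source-side measurements through a self/mutual impedance model, can be written as a small phasor computation. This is a hedged sketch assuming a three-phase cable represented by a 3×3 complex impedance matrix; the matrix values and function name are invented, not taken from the paper.

```python
def predicted_load_voltage(v_source, i_phase, Z):
    """Predict per-phase voltage at the cable end from source-side
    phasors: V_load[k] = V_src[k] - sum_j Z[k][j] * I[j], where Z is
    the 3x3 complex matrix of self (diagonal) and mutual (off-diagonal)
    cable impedances. Mutual terms let the model handle unbalanced loads."""
    return [v - sum(Z[k][j] * i_phase[j] for j in range(3))
            for k, v in enumerate(v_source)]

# Hypothetical example: 115 V per phase, 10 A balanced load,
# purely self-impedance cable of (0.1 + 0.2j) ohms per phase.
Z = [[(0.1 + 0.2j) if j == k else 0j for j in range(3)] for k in range(3)]
v_load = predicted_load_voltage([115 + 0j] * 3, [10 + 0j] * 3, Z)
```

The compensator would then raise the source voltage by the predicted drop so the aircraft connector sees the nominal voltage.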

  16. Associating fuzzy logic, neural networks and multivariable statistic methodologies in the automatic identification of oil reservoir lithologies through well logs

    Energy Technology Data Exchange (ETDEWEB)

    Carrasquilla, Abel [Universidade Estadual do Norte Fluminense Darcy Ribeiro (UENF), Macae, RJ (Brazil). Lab. de Engenharia e Exploracao de Petroleo]. E-mail: abel@lenep.uenf.br; Silva, Jadir da [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Dept. de Geologia; Flexa, Roosevelt [Baker Hughes do Brasil Ltda, Macae, RJ (Brazil)

    2008-07-01

    In this article, we present a new approach to the automatic identification of lithologies using only well log data, which associates fuzzy logic, neural networks and multivariable statistic methods. Firstly, we chose well logs that represent lithological types, such as gamma ray (GR) and density (RHOB), and then applied a fuzzy logic algorithm to determine the optimal number of clusters. In the following step, a competitive neural network is developed, based on Kohonen's learning rule, where the input layer is composed of two neurons, matching the number of logs used. On the other hand, the competitive layer is composed of several neurons, one per cluster as determined by the fuzzy logic algorithm. Finally, some elements of the lithological types are selected at random from the data bank to be the discriminant variables, which correspond to the input data of the multigroup discriminant analysis program. With the application of this methodology, the lithological types were automatically identified along a well of the Namorado Oil Field, Campos Basin; the results presented some difficulties, mainly because of the geological complexity of this field. (author)

  17. Generalizability and comparison of automatic clinical text de-identification methods and resources.

    Science.gov (United States)

    Ferrández, Óscar; South, Brett R; Shen, Shuying; Friedlin, F Jeff; Samore, Matthew H; Meystre, Stéphane M

    2012-01-01

    In this paper, we present an evaluation of the hybrid best-of-breed automated VHA (Veterans Health Administration) clinical text de-identification system, nicknamed BoB, developed within the VHA Consortium for Healthcare Informatics Research. We also evaluate two available machine learning-based text de-identification systems: MIST and HIDE. Two different clinical corpora were used for this evaluation: a manually annotated VHA corpus, and the 2006 i2b2 de-identification challenge corpus. These experiments focus on the generalizability and portability of the classification models across different document sources. BoB demonstrated good recall (92.6%), satisfactorily prioritizing patient privacy, and also achieved competitive precision (83.6%) for preserving subsequent document interpretability. MIST and HIDE reached very competitive results, in most cases with high precision (92.6% and 93.6%), although recall was sometimes lower than desired for the most sensitive PHI categories.

  18. Automatic NMR-based identification of chemical reaction types in mixtures of co-occurring reactions.

    Science.gov (United States)

    Latino, Diogo A R S; Aires-de-Sousa, João

    2014-01-01

    The combination of chemoinformatics approaches with NMR techniques and the increasing availability of data allow the resolution of problems far beyond the original application of NMR in structure elucidation/verification. The diversity of applications can range from process monitoring, metabolic profiling, authentication of products, to quality control. An application related to the automatic analysis of complex mixtures concerns mixtures of chemical reactions. We encoded mixtures of chemical reactions with the difference between the (1)H NMR spectra of the products and the reactants. All the signals arising from all the reactants of the co-occurring reactions were taken together (a simulated spectrum of the mixture of reactants) and the same was done for products. The difference spectrum is taken as the representation of the mixture of chemical reactions. A data set of 181 chemical reactions was used, each reaction manually assigned to one of 6 types. From this dataset, we simulated mixtures where two reactions of different types would occur simultaneously. Automatic learning methods were trained to classify the reactions occurring in a mixture from the (1)H NMR-based descriptor of the mixture. Unsupervised learning methods (self-organizing maps) produced a reasonable clustering of the mixtures by reaction type, and allowed the correct classification of 80% and 63% of the mixtures in two independent test sets of different similarity to the training set. With random forests (RF), the percentage of correct classifications was increased to 99% and 80% for the same test sets. The RF probability associated to the predictions yielded a robust indication of their reliability. This study demonstrates the possibility of applying machine learning methods to automatically identify types of co-occurring chemical reactions from NMR data. 
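
The descriptor construction described above, encoding a mixture of reactions as the simulated spectrum of all products minus that of all reactants, is straightforward to sketch. This hypothetical Python fragment assumes spectra are already binned onto a common chemical-shift grid; the function name is invented.

```python
def reaction_mixture_descriptor(reactant_spectra, product_spectra):
    """(1)H NMR based descriptor of a mixture of reactions: sum the
    binned spectra of all reactants and of all products, then take
    the product-minus-reactant difference spectrum."""
    n = len(reactant_spectra[0])
    reactants = [sum(s[i] for s in reactant_spectra) for i in range(n)]
    products = [sum(s[i] for s in product_spectra) for i in range(n)]
    return [p - r for p, r in zip(products, reactants)]
```

Signals unchanged by the reactions cancel out, so the descriptor highlights exactly the spectral regions where bonds are made or broken, which is what the classifiers learn from.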
Using no explicit structural information about the reactions participants, reaction elucidation is performed without structure elucidation of

  19. Automatic identification of corrosive factors categories according to the environmental factors

    Directory of Open Access Journals (Sweden)

    Qing Xu

    2013-11-01

    Full Text Available Time of wetness, and pollutants are three key factors for the selection of metal materials in engineering applications and the determination of atmospheric corrosivity categories. In the past, when data for one or more corrosive factors were missing, corrosive factor categories were often determined subjectively according to expert experience. To overcome this difficulty, this paper presents a method to automatically determine corrosive factor categories from detected environmental factor data instead of expert scoring. In this method, a Bayesian network was used to build the mathematical model, and inference was carried out with the clique tree algorithm. The validity of the model and algorithm was verified by simulation results.
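
The inference idea, inferring a corrosivity category from observed environmental factors under a probabilistic model, reduces in the single-factor case to Bayes' rule. This is a minimal sketch with invented category names and probabilities; a full Bayesian network would chain several conditional distributions, with the clique tree algorithm performing the inference efficiently.

```python
def posterior(prior, likelihood, evidence):
    """Bayes-rule sketch: posterior over corrosivity categories given
    one observed environmental factor value. prior maps category ->
    P(category); likelihood maps category -> {observation: probability}."""
    joint = {c: prior[c] * likelihood[c][evidence] for c in prior}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

# Hypothetical numbers: categories "C2"/"C3", observed pollution "high".
post = posterior(
    {"C2": 0.5, "C3": 0.5},
    {"C2": {"high": 0.2, "low": 0.8}, "C3": {"high": 0.6, "low": 0.4}},
    "high",
)
```

Here the observation shifts the belief toward the more corrosive category, which is exactly how missing expert judgment is replaced by measured environmental data.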

  20. 3D handheld laser scanner based approach for automatic identification and localization of EEG sensors.

    Science.gov (United States)

    Koessler, Laurent; Cecchin, Thierry; Ternisien, Eric; Maillard, Louis

    2010-01-01

    This paper describes and assesses for the first time the use of a handheld 3D laser scanner for scalp EEG sensor localization and co-registration with magnetic resonance images. Study on five subjects showed that the scanner had an equivalent accuracy, a better repeatability, and was faster than the reference electromagnetic digitizer. According to electrical source imaging, somatosensory evoked potentials experiments validated its ability to give precise sensor localization. With our automatic labeling method, the data provided by the scanner could be directly introduced in the source localization studies.

  1. Automatic Identification of Motion Artifacts in EHG Recording for Robust Analysis of Uterine Contractions

    Directory of Open Access Journals (Sweden)

    Yiyao Ye-Lin

    2014-01-01

    Full Text Available Electrohysterography (EHG) is a noninvasive technique for monitoring uterine electrical activity. However, the presence of artifacts in the EHG signal may give rise to erroneous interpretations and make it difficult to extract useful information from these recordings. The aim of this work was to develop an automatic system of segmenting EHG recordings that distinguishes between uterine contractions and artifacts. Firstly, the segmentation is performed using an algorithm that generates the TOCO-like signal derived from the EHG and detects windows with significant changes in amplitude. After that, these segments are classified into two groups: artifacted and nonartifacted signals. To develop a classifier, a total of eleven spectral, temporal, and nonlinear features were calculated from EHG signal windows from 12 women in the first stage of labor that had previously been classified by experts. The combination of characteristics that led to the highest degree of accuracy in detecting artifacts was then determined. The results showed that it is possible to obtain automatic detection of motion artifacts in segmented EHG recordings with a precision of 92.2% using only seven features. The proposed algorithm and classifier together compose a useful tool for analyzing EHG signals and would help to promote clinical applications of this technique.
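
The first stage, deriving an amplitude envelope from the EHG and flagging windows with significant changes, can be sketched as a moving-RMS threshold test. This hypothetical Python fragment is only an illustration of that windowing step, not the paper's TOCO-like algorithm; the window length and threshold are invented.

```python
def detect_active_windows(signal, win, threshold):
    """Flag non-overlapping windows whose RMS amplitude exceeds a
    threshold; each flagged window is a candidate segment to be
    classified later as a uterine contraction or an artifact."""
    windows = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        rms = (sum(x * x for x in seg) / win) ** 0.5
        if rms > threshold:
            windows.append((start, start + win))
    return windows
```

The feature-based classifier described in the abstract then operates only on these candidate windows, which keeps the expensive analysis off the quiescent parts of the recording.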

  2. Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents

    CERN Document Server

    Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L

    2008-01-01

    Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...

  3. Automatic identification of bird targets with radar via patterns produced by wing flapping

    NARCIS (Netherlands)

    Zaugg, S.; Saporta, G.; van Loon, E.; Schmaljohann, H.; Liechti, F.

    2008-01-01

    Bird identification with radar is important for bird migration research, environmental impact assessments (e.g. wind farms), aircraft security and radar meteorology. In a study on bird migration, radar signals from birds, insects and ground clutter were recorded. Signals from birds show a typical pa

  4. Price strategy and pricing strategy: terms and content identification

    OpenAIRE

    Panasenko Tetyana

    2015-01-01

    The article is devoted to the terminology and content identification of seemingly identical concepts "price strategy" and "pricing strategy". The article contains evidence that the price strategy determines the direction, principles and procedure of implementing the company price policy and pricing strategy creates a set of rules and practical methods of price formation in accordance with the pricing strategy of the company.

  5. A hybrid model for automatic identification of risk factors for heart disease.

    Science.gov (United States)

    Yang, Hui; Garibaldi, Jonathan M

    2015-12-01

    Coronary artery disease (CAD) is the leading cause of death in both the UK and worldwide. The detection of related risk factors and tracking their progress over time is of great importance for early prevention and treatment of CAD. This paper describes an information extraction system that was developed to automatically identify risk factors for heart disease in medical records while the authors participated in the 2014 i2b2/UTHealth NLP Challenge. Our approaches rely on several natural language processing (NLP) techniques such as machine learning, rule-based methods, and dictionary-based keyword spotting to cope with complicated clinical contexts inherent in a wide variety of risk factors. Our system achieved encouraging performance on the challenge test data with an overall micro-averaged F-measure of 0.915, which was competitive with the best system (F-measure of 0.927) of this challenge task.
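
Of the techniques named above, dictionary-based keyword spotting is the simplest to sketch: map surface terms found in a note to normalized risk-factor categories. This is a toy illustration with an invented dictionary, not the authors' system, which additionally layers machine learning and rules on top.

```python
# Hypothetical term dictionary: surface form -> normalized risk factor.
RISK_TERMS = {
    "hypertension": "hypertension",
    "high blood pressure": "hypertension",
    "diabetes": "diabetes",
    "smoker": "smoking",
}

def spot_risk_factors(note):
    """Dictionary-based spotting: return the sorted set of normalized
    risk-factor categories whose surface terms appear in the note."""
    text = note.lower()
    return sorted({cat for term, cat in RISK_TERMS.items() if term in text})
```

In practice such a spotter gives high recall on common phrasings, and the statistical components handle negation, abbreviations, and context that a plain dictionary misses.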

  6. Automatic identification of web-based risk markers for health events

    DEFF Research Database (Denmark)

    Yom-Tov, Elad; Borsa, Diana; Hayward, Andrew C.

    2015-01-01

    Background: The escalating cost of global health care is driving the development of new technologies to identify early indicators of an individual's risk of disease. Traditionally, epidemiologists have identified such risk factors using medical databases and lengthy clinical studies, but these are often limited in size and cost and can fail to take full account of diseases where there are social stigmas or to identify transient acute risk factors. Objective: Here we report that Web search engine queries coupled with information on Wikipedia access patterns can be used to infer health events ... a low-cost approach to automatically identify risk factors, and support more timely and personalized public health efforts to bring human and economic benefits.

  7. Price strategy and pricing strategy: terms and content identification

    Directory of Open Access Journals (Sweden)

    Panasenko Tetyana

    2015-11-01

    Full Text Available The article is devoted to the terminology and content identification of seemingly identical concepts "price strategy" and "pricing strategy". The article contains evidence that the price strategy determines the direction, principles and procedure of implementing the company price policy and pricing strategy creates a set of rules and practical methods of price formation in accordance with the pricing strategy of the company.

  8. REMI and ROUSE: Quantitative Models for Long-Term and Short-Term Priming in Perceptual Identification

    NARCIS (Netherlands)

    E.J. Wagenmakers (Eric-Jan); R. Zeelenberg (René); D.E. Huber (David); J.G.W. Raaijmakers (Jeroen)

    2003-01-01

    textabstractThe REM model originally developed for recognition memory (Shiffrin & Steyvers, 1997) has recently been extended to implicit memory phenomena observed during threshold identification of words. We discuss two REM models based on Bayesian principles: a model for long-term priming (REMI; Sc

  9. Automatic Spatially-Adaptive Balancing of Energy Terms for Image Segmentation

    CERN Document Server

    Rao, Josna; Abugharbieh, Rafeef

    2009-01-01

    Image segmentation techniques are predominantly based on parameter-laden optimization. The objective function typically involves weights for balancing competing image fidelity and segmentation regularization cost terms. Setting these weights suitably has been a painstaking, empirical process. Even if such ideal weights are found for a novel image, most current approaches fix the weight across the whole image domain, ignoring the spatially-varying properties of object shape and image appearance. We propose a novel technique that autonomously balances these terms in a spatially-adaptive manner through the incorporation of image reliability in a graph-based segmentation framework. We validate on synthetic data, achieving a reduction in mean error of 47% (p-value << 0.05) when compared to the best fixed-parameter segmentation. We also present results on medical images (including segmentations of the corpus callosum and brain tissue in MRI data) and on natural images.

  10. Long Term Suboxone™ Emotional Reactivity As Measured by Automatic Detection in Speech

    OpenAIRE

    2013-01-01

    Addictions to illicit drugs are among the nation’s most critical public health and societal problems. The current opioid prescription epidemic and the need for buprenorphine/naloxone (Suboxone®; SUBX) as an opioid maintenance substance, and its growing street diversion provided impetus to determine affective states (“true ground emotionality”) in long-term SUBX patients. Toward the goal of effective monitoring, we utilized emotion-detection in speech as a measure of “true” emotionality in 36 ...

  11. Automatic identification of NDA measured items: Use of E-tags

    Energy Technology Data Exchange (ETDEWEB)

    Chitumbo, K.; Olsen, R. [International Atomic Energy Agency (United States); Hatcher, C.R. [Los Alamos National Lab., NM (United States); Kadner, S.P. [Aquila Technologies Group, Inc. (United States)

    1995-07-01

    This paper describes how electronic identification devices or E-tags could reduce the time spent by IAEA inspectors making nondestructive assay (NDA) measurements. As one example, the use of E-tags with a high-level neutron coincidence counter (HLNC) is discussed in detail. Sections of the paper include inspection procedures, system description, software, and future plans. Mounting of E-tags, modifications to the HLNC, and the use of tamper-indicating devices are also discussed. The technology appears to have wide application to different types of nuclear facilities and inspections and could significantly change NDA inspection procedures.

  12. A smart pattern recognition system for the automatic identification of aerospace acoustic sources

    Science.gov (United States)

    Cabell, R. H.; Fuller, C. R.

    1989-01-01

    An intelligent air-noise recognition system is described that uses pattern recognition techniques to distinguish noise signatures of five different types of acoustic sources, including jet planes, propeller planes, a helicopter, train, and wind turbine. Information for classification is calculated using the power spectral density and autocorrelation taken from the output of a single microphone. Using this system, as many as 90 percent of test recordings were correctly identified, indicating that the linear discriminant functions developed can be used for aerospace source identification.

  13. Variable identification and automatic tuning of the main module of a servo system of parallel mechanism

    Institute of Scientific and Technical Information of China (English)

    YANG Zhiyong; XU Meng; HUANG Tian; NI Yanbing

    2007-01-01

    The variables of the main module of a servo system for a miniature reconfigurable parallel mechanism were identified and automatically tuned. From the inverse kinematics module of the translation, the module with the actuated translational joint was obtained, which included the position, velocity and acceleration of the parallelogram carriage-branch. The inverse rigid-body dynamic model was established using the virtual work principle. To identify the variables of the servo system, a triangle-shaped input signal with variable frequency was adopted to overcome the disadvantages of a pseudo-random number sequence, i.e., causing dramatic changes in the vibration amplitude of the motor, easily impacting the servo motor, opening the velocity loop, and so on. Moreover, all the variables, including the rotary inertia of the servo system, were identified using an additive mass. With overshoot and rise time as the optimization goals, and considering the load varying with attitude within limits, the range of the controller variables in the servo system was identified. The results of the experiments prove that the method is accurate.

  14. Automatic identification of mobile and rigid substructures in molecular dynamics simulations and fractional structural fluctuation analysis.

    Directory of Open Access Journals (Sweden)

    Leandro Martínez

    Full Text Available The analysis of structural mobility in molecular dynamics plays a key role in data interpretation, particularly in the simulation of biomolecules. The most common mobility measures computed from simulations are the Root Mean Square Deviation (RMSD) and Root Mean Square Fluctuations (RMSF) of the structures. These are computed after the alignment of atomic coordinates in each trajectory step to a reference structure. This rigid-body alignment is not robust, in the sense that if a small portion of the structure is highly mobile, the RMSD and RMSF increase for all atoms, resulting possibly in poor quantification of the structural fluctuations and, often, in overlooking important fluctuations associated with biological function. The motivation of this work is to provide a robust measure of structural mobility that is practical and easy to interpret. We propose a Low-Order-Value-Optimization (LOVO) strategy for the robust alignment of the least mobile substructures in a simulation. These substructures are automatically identified by the method. The algorithm consists of the iterative superposition of the fraction of the structure displaying the smallest displacements. Therefore, the least mobile substructures are identified, providing a clearer picture of the overall structural fluctuations. Examples are given to illustrate the interpretative advantages of this strategy. The software for performing the alignments was named MDLovoFit and it is available as free software at: http://leandro.iqm.unicamp.br/mdlovofit.
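
The iterative idea, align to the least-mobile fraction, re-select that fraction, and repeat, can be shown with a one-dimensional analogue where "alignment" is just estimating an offset. This is a deliberately simplified sketch: the real MDLovoFit performs 3-D rigid-body superposition, and all names here are invented.

```python
def lovo_shift(ref, mobile, fraction=0.7, iters=10):
    """Simplified 1-D analogue of LOVO alignment: estimate the offset
    between two coordinate sets using only the fraction of atoms with
    the smallest residual displacements, iterating so that highly
    mobile atoms are excluded from the fit."""
    n_keep = max(1, int(len(ref) * fraction))
    shift = 0.0
    core = list(range(n_keep))
    for _ in range(iters):
        order = sorted(range(len(ref)),
                       key=lambda i: abs(mobile[i] - shift - ref[i]))
        core = order[:n_keep]                       # least-mobile subset
        shift = sum(mobile[i] - ref[i] for i in core) / n_keep
    return shift, sorted(core)
```

Because the fit uses only the rigid core, a flexible loop no longer inflates the apparent mobility of every atom, which is the failure mode of plain RMSD alignment described above.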

  15. Automatic identification of critical follow-up recommendation sentences in radiology reports.

    Science.gov (United States)

    Yetisgen-Yildiz, Meliha; Gunn, Martin L; Xia, Fei; Payne, Thomas H

    2011-01-01

    Communication of follow-up recommendations when abnormalities are identified on imaging studies is prone to error. When recommendations are not systematically identified and promptly communicated to referrers, poor patient outcomes can result. Using information technology can improve communication and patient safety. In this paper, we describe a text processing approach that uses natural language processing (NLP) and supervised text classification methods to automatically identify critical recommendation sentences in radiology reports. To increase the classification performance, we enhanced the simple unigram token representation approach with lexical, semantic, knowledge-base, and structural features. We tested different combinations of those features with the Maximum Entropy (MaxEnt) classification algorithm. Classifiers were trained and tested with a gold standard corpus annotated by a domain expert. We applied 5-fold cross validation and our best performing classifier achieved 95.60% precision, 79.82% recall, 87.0% F-score, and 99.59% classification accuracy in identifying the critical recommendation sentences in radiology reports.
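
    Since maximum entropy classification over unigram features is equivalent to logistic regression, the core of such a sentence classifier can be sketched with scikit-learn. The sentences and labels below are invented toy examples, not the gold standard corpus used in the paper:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in corpus; the paper used a domain-expert-annotated gold standard.
sentences = [
    "Recommend follow-up CT in 3 months to evaluate the nodule.",
    "Suggest clinical correlation and repeat imaging if symptoms persist.",
    "Follow-up ultrasound is recommended to assess interval change.",
    "The lungs are clear without focal consolidation.",
    "Heart size is normal and mediastinal contours are unremarkable.",
    "No acute osseous abnormality is identified.",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = critical recommendation sentence

# Logistic regression == maximum-entropy classifier over unigram counts
clf = make_pipeline(CountVectorizer(lowercase=True), LogisticRegression())
clf.fit(sentences, labels)
pred = clf.predict(["Repeat chest CT is recommended in 6 months."])[0]
```

    The lexical, semantic, and structural features described above would be appended to the unigram vector before fitting.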

  16. Comparison between three implementations of automatic identification algorithms for the quantification and characterization of mesoscale eddies in the South Atlantic Ocean

    Directory of Open Access Journals (Sweden)

    J. M. A. C. Souza

    2011-03-01

    Full Text Available Three methods for the automatic detection of mesoscale coherent structures are applied to Sea Level Anomaly (SLA) fields in the South Atlantic. The first method is based on the wavelet packet decomposition of the SLA data, the second on the estimation of the Okubo-Weiss parameter and the third on a geometric criterion using the winding-angle approach. The results provide a comprehensive picture of the mesoscale eddies over the South Atlantic Ocean, emphasizing their main characteristics: amplitude, diameter, duration and propagation velocity. Five areas of particular eddy dynamics were selected: the Brazil Current, the Agulhas eddies propagation corridor, the Agulhas Current retroflexion, the Brazil-Malvinas confluence zone and the northern branch of the Antarctic Circumpolar Current (ACC). For these areas, mean propagation velocities and amplitudes were calculated. Two regions with long-duration eddies were observed, corresponding to the propagation of Agulhas and ACC eddies. Through the comparison between the identification methods, their main advantages and shortcomings were detailed. The geometric criterion presented the best performance, mainly in terms of number of detections, duration of the eddies and propagation velocities. The results are particularly good for the Agulhas Rings, which presented the longest lifetimes of all South Atlantic eddies.
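
    The Okubo-Weiss criterion mentioned above can be sketched directly from an SLA grid: geostrophic velocities follow from the SLA gradients, and eddy cores are flagged where rotation dominates strain. A minimal illustration (the grid, Coriolis parameter, and threshold are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def okubo_weiss(sla, dx, dy, f=-1e-4, g=9.81):
    """Okubo-Weiss parameter W from a sea-level-anomaly grid (sketch).
    W = sn^2 + ss^2 - omega^2; vortex cores satisfy W < -0.2*std(W)."""
    dndy, dndx = np.gradient(sla, dy, dx)   # axis 0 = y, axis 1 = x
    u = -(g / f) * dndy                      # geostrophic velocities
    v = (g / f) * dndx
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    sn = dudx - dvdy                         # normal strain
    ss = dvdx + dudy                         # shear strain
    omega = dvdx - dudy                      # relative vorticity
    W = sn**2 + ss**2 - omega**2
    return W, W < -0.2 * W.std()

# Gaussian SLA bump ~ an anticyclonic eddy on a 100 km x 100 km grid
x = np.linspace(-50e3, 50e3, 101)
X, Y = np.meshgrid(x, x)
sla = 0.2 * np.exp(-(X**2 + Y**2) / (2 * (15e3) ** 2))
W, core = okubo_weiss(sla, dx=1e3, dy=1e3)
```

    The winding-angle and wavelet methods operate on the same velocity fields but avoid the noise sensitivity of the second derivatives used here.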

  17. BOAT: automatic alignment of biomedical ontologies using term informativeness and candidate selection.

    Science.gov (United States)

    Chua, Watson Wei Khong; Kim, Jung-Jae

    2012-04-01

    The biomedical sciences are one of the few domains where ontologies are being widely developed to facilitate information retrieval and knowledge sharing, but there remains the problem that applications using different ontologies cannot share knowledge without explicit references between overlapping concepts. Ontology alignment is the task of identifying such equivalence relations between concepts across ontologies. Its application to the biomedical domain must address two open issues: (1) determining the equivalence of concept pairs whose names share overlapping terms, and (2) the high run-time required to align the large ontologies typical in the biomedical domain. To address them, we present a novel approach, named the Biomedical Ontologies Alignment Technique (BOAT), which is state-of-the-art in terms of F-measure, precision and speed. A key feature of BOAT is that it considers the informativeness of each component word in the concept labels, which has a significant impact on biomedical ontologies, resulting in a 12.2% increase in F-measure. Another important feature of BOAT is that it selects for comparison only concept pairs that show a high likelihood of equivalence, based on the similarity of their annotations. BOAT's F-measure of 0.88 for the alignment of the mouse and human anatomy ontologies is on par with that of another state-of-the-art matcher, AgreementMaker, while taking less time.
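
    The word-informativeness idea can be illustrated with an IDF-style weighting of label words: a shared rare word ("mandible") should count for more than shared common words ("structure of"). This is a sketch of the general idea with invented labels, not BOAT's exact scoring function:

```python
import math
from collections import Counter

def word_informativeness(labels):
    """IDF-style informativeness of each word across all concept labels."""
    df = Counter(w for lab in labels for w in set(lab.lower().split()))
    n = len(labels)
    return {w: math.log(n / c) for w, c in df.items()}

def label_similarity(a, b, info):
    """Weighted-overlap similarity: shared informative words count more
    than ubiquitous ones like 'of' or 'structure'."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    shared = sum(info.get(w, 0.0) for w in wa & wb)
    total = sum(info.get(w, 0.0) for w in wa | wb)
    return shared / total if total else 0.0

labels = ["structure of mandible", "structure of femur",
          "mandible", "femoral structure", "heart structure"]
info = word_informativeness(labels)
s1 = label_similarity("structure of mandible", "mandible", info)
s2 = label_similarity("structure of mandible", "structure of femur", info)
```

    Here s1 exceeds s2 even though the second pair shares more words, because "mandible" carries more information than "structure of".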

  18. Automatic estimation of aquifer parameters using long-term water supply pumping and injection records

    Science.gov (United States)

    Luo, Ning; Illman, Walter A.

    2016-09-01

    Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
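
    The Theis-model calibration step can be sketched with SciPy: the well function W(u) is the exponential integral `exp1`, and T and S are fitted to a (here synthetic) drawdown record by least squares, standing in for the WELLS/PEST coupling described above. The pumping rate and radius are illustrative values:

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

def theis_drawdown(t, T, S, Q=0.02, r=100.0):
    """Theis drawdown s = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic "observed" hydrograph from known parameters plus noise
t = np.logspace(2, 6, 40)            # seconds since pumping began
T_true, S_true = 5e-3, 2e-4          # m^2/s, dimensionless
rng = np.random.default_rng(1)
s_obs = theis_drawdown(t, T_true, S_true) + rng.normal(0, 1e-3, t.size)

# Calibrate T and S (the PEST-style step, done here with least squares)
(T_fit, S_fit), _ = curve_fit(theis_drawdown, t, s_obs,
                              p0=(1e-3, 1e-4), bounds=(1e-6, 1.0))
```

    Fitting each monitoring/production borehole pair this way yields the individual T and S estimates whose geometric means are discussed above.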

  19. Automatic Screening of Missing Objects and Identification with Group Coding of RF Tags

    Directory of Open Access Journals (Sweden)

    G. Vijayaraju

    2013-11-01

    Full Text Available A shipping container typically holds a large collection of objects, and radio frequency identification (RFID) allows each object in the container to be identified automatically. A limitation of existing designs is that they cannot reliably determine which objects are missing: detecting a missing object requires checking the tags that are read against the entire database of object IDs. A new technique is proposed to overcome this problem. The object IDs are divided into coordinated groups and encoded with a group code, so that missing objects can be detected directly from the tags that are present, without consulting the full database. Simulations on large data sets under different environmental conditions show that the method identifies missing objects accurately.

  20. Automatic identification of TCM terminology in Shanghan Lun based on conditional random field

    Institute of Scientific and Technical Information of China (English)

    孟洪宇; 谢晴宇; 常虹; 孟庆刚

    2015-01-01

    Objective To explore methods for the automatic identification of TCM terminology and to expand the forms of natural language processing applied to TCM documents. Methods Based on conditional random fields (CRF), annotation and automatic identification were performed on the terms for symptoms, diseases, pulse types and prescriptions recorded in Shanghan Lun. The effects on terminology identification of different combinations of features, such as the Chinese character itself, part of speech, word boundary and term category label, were analyzed, and the most effective combination was selected. Results The TCM terminology automatic identification model combining the features of the Chinese character itself, part of speech, word boundary and term category label had a precision of 85.00%, recall of 68.00% and F-score of 75.56%. Conclusion The multi-feature model combining the character itself, part of speech, word boundary and term category label achieved the best identification result of all combinations.
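
    The feature combinations compared in the study can be illustrated by the per-character feature dictionaries typically fed to a CRF. The fragment, POS tags, and boundary tags below are hypothetical stand-ins for the annotated Shanghan Lun text:

```python
def char_features(chars, pos_tags, boundaries, i):
    """Feature dict for character i, combining the feature types the
    study compared: the character itself, its part of speech, and its
    word-boundary tag (B/I/E/S). A sketch of typical CRF input."""
    feats = {
        "char": chars[i],
        "pos": pos_tags[i],
        "boundary": boundaries[i],
        "bias": 1,
    }
    if i > 0:
        feats["char-1"] = chars[i - 1]        # left context
    if i < len(chars) - 1:
        feats["char+1"] = chars[i + 1]        # right context
    return feats

# Hypothetical fragment: the disease term "太阳病" followed by "脉浮"
chars = list("太阳病脉浮")
pos = ["n", "n", "n", "n", "v"]
bnd = ["B", "I", "E", "S", "S"]
rows = [char_features(chars, pos, bnd, i) for i in range(len(chars))]
```

    Dropping keys from `feats` reproduces the ablation over feature combinations; the CRF is then trained on these rows against term-category labels.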

  1. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    Science.gov (United States)

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community.
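
    One common way to correct interplate systematic errors of the kind this pipeline targets is a median polish that strips additive row and column effects while preserving genuine hits. The sketch below is a generic B-score-style step on an invented plate, not the paper's exact algorithm:

```python
import numpy as np

def median_polish(plate, n_iter=10):
    """Remove additive row/column (systematic) effects from a plate of
    raw HTS readouts, leaving residuals for hit calling."""
    resid = plate.astype(float).copy()
    for _ in range(n_iter):
        resid -= np.median(resid, axis=1, keepdims=True)  # row effects
        resid -= np.median(resid, axis=0, keepdims=True)  # column effects
    return resid

# 8x12 plate: flat background, one edge-row artifact, one genuine hit
rng = np.random.default_rng(2)
plate = rng.normal(100.0, 1.0, size=(8, 12))
plate[0, :] += 25.0          # systematic first-row drift
plate[4, 6] += 40.0          # a genuine hit
resid = median_polish(plate)
```

    The row artifact vanishes in the residuals while the isolated hit survives, since medians are insensitive to single outliers.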

  2. Automatic Identification of Messages Related to Adverse Drug Reactions from Online User Reviews using Feature-based Classification.

    Directory of Open Access Journals (Sweden)

    Jingfang Liu

    2014-11-01

    Full Text Available User-generated medical messages on the Internet contain extensive information related to adverse drug reactions (ADRs) and are known as valuable resources for post-marketing drug surveillance. The aim of this study was to find an effective method to automatically identify messages related to ADRs in online user reviews. We conducted experiments on online user reviews using different feature sets and different classification techniques. First, messages were collected from three communities (allergy, schizophrenia and pain management) and 3000 messages were annotated. Second, an n-gram-based feature set and a medical domain-specific feature set were generated. Third, three classification techniques, SVM, C4.5 and Naïve Bayes, were used to perform the classification tasks separately. Finally, we evaluated the performance of each combination of feature set and classification technique by comparing metrics including accuracy and F-measure. In terms of accuracy, the SVM classifier exceeded 0.8, while the C4.5 and Naïve Bayes classifiers did not; the combined feature set (n-gram-based plus domain-specific features) consistently outperformed either single feature set. In terms of F-measure, the highest value, 0.895, was achieved by using the combined feature sets with an SVM classifier. In all, the combination of both feature sets with an SVM classifier provides an effective method to automatically identify ADR-related messages in online user reviews.

  3. Automatic Whole-Spectrum Matching Techniques for Identification of Pure and Mixed Minerals using Raman Spectroscopy

    Science.gov (United States)

    Dyar, M. D.; Carey, C. J.; Breitenfeld, L.; Tague, T.; Wang, P.

    2015-12-01

    In situ use of Raman spectroscopy on Mars is planned for three different instruments in the next decade. Although implementations differ, they share the potential to identify surface minerals and organics and inform Martian geology and geochemistry. Their success depends on the availability of appropriate databases and software for phase identification. For this project, we have consolidated all known publicly-accessible Raman data on minerals for which independent confirmation of phase identity is available, and added hundreds of additional spectra acquired using varying instruments and laser energies. Using these data, we have developed software tools to improve mineral identification accuracy. For pure minerals, whole-spectrum matching algorithms far outperform existing tools based on diagnostic peaks in individual phases. Optimal matching accuracy does depend on subjective end-user choices for data processing (such as baseline removal, intensity normalization, and intensity squashing), as well as specific dataset characteristics. So, to make this tuning process amenable to automated optimization methods, we developed a machine learning-based generalization of these choices within a preprocessing and matching framework. Our novel method dramatically reduces the burden on the user and results in improved matching accuracy. Moving beyond identifying pure phases into quantification of relative abundances is a complex problem because relationships between peak intensity and mineral abundance are obscured by complicating factors: exciting laser frequency, the Raman cross section of the mineral, crystal orientation, and long-range chemical and structural ordering in the crystal lattices. Solving this un-mixing problem requires adaptation of our whole-spectrum algorithms and a large number of test spectra of minerals in known volume proportions, which we are creating for this project. Key to this effort is acquisition of spectra from mixtures of pure minerals paired
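
    The whole-spectrum matching step can be sketched as preprocessing (a crude rolling-minimum baseline removal and max-normalization, standing in for the tunable choices discussed above) followed by cosine similarity against a small library. The synthetic spectra and peak positions below are illustrative only:

```python
import numpy as np

def preprocess(spectrum, window=50):
    """Crude rolling-minimum baseline removal plus max-normalization."""
    n = len(spectrum)
    baseline = np.array([spectrum[max(0, i - window):i + window].min()
                         for i in range(n)])
    s = spectrum - baseline
    return s / s.max()

def best_match(query, library):
    """Whole-spectrum matching: highest cosine similarity wins."""
    q = preprocess(query)
    scores = {}
    for name, s in library.items():
        ref = preprocess(s)
        scores[name] = float(np.dot(q, ref) /
                             (np.linalg.norm(q) * np.linalg.norm(ref)))
    return max(scores, key=scores.get), scores

# Synthetic Raman-like spectra: Gaussian peaks on a channel axis
x = np.arange(1000.0)
peak = lambda c, w: np.exp(-(x - c) ** 2 / (2 * w ** 2))
library = {"quartz": 10 * peak(465, 8) + 2 * peak(128, 6),
           "calcite": 3 * peak(281, 6) + 2 * peak(712, 5)}
query = 10 * peak(465, 8) + 2 * peak(128, 6) + 0.01 * x   # quartz + slope
name, scores = best_match(query, library)
```

    The machine-learning layer described above would tune `window` and the normalization choices per dataset instead of fixing them by hand.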

  4. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    Science.gov (United States)

    Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  5. Hybrid EEG—Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal

    Directory of Open Access Journals (Sweden)

    Malik M. Naeem Mannan

    2016-02-01

    Full Text Available Contamination of eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and can lead to misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy for brain-computer interface (BCI) development. In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data using a hybrid EEG and eye-tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. Comparison with two state-of-the-art techniques, ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm in removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information between the corrected EEG and artifact-free EEG data.
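
    The ICA-based removal step can be sketched with scikit-learn's FastICA on synthetic data: decompose multi-channel EEG, zero the component that best tracks an ocular reference (here a simulated blink train standing in for the eye-tracker signal), and reconstruct. This is a generic illustration, not the paper's full framework:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n = 2000
t = np.arange(n) / 250.0                      # 250 Hz, 8 s of data
neural = np.sin(2 * np.pi * 10 * t)           # 10 Hz alpha-like source
blink = (np.sin(2 * np.pi * 0.5 * t) > 0.98) * 5.0   # sparse blink bursts
mix = np.array([[1.0, 0.8], [0.7, 1.2], [1.1, 0.3]])  # 3 scalp channels
eeg = (mix @ np.vstack([neural, blink])).T + rng.normal(0, 0.05, (n, 3))

# ICA decomposition; drop the component that tracks the ocular reference
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(eeg)              # (n_samples, n_components)
corr = [abs(np.corrcoef(sources[:, k], blink)[0, 1]) for k in range(2)]
sources[:, int(np.argmax(corr))] = 0.0        # zero the ocular component
cleaned = ica.inverse_transform(sources)
```

    The hybrid design in the paper replaces the correlation heuristic with system identification against the measured eye-tracker signal.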

  6. Automatic Fingerprint Identification System Based on DSP

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    To meet the real-time, accuracy, low-power, small-size and portability requirements of a security access control system, this paper proposes an automatic fingerprint identification system based on the DSP TMS320VC5509A and the fingerprint acquisition sensor MBF310. The system uses a fingerprint image matching algorithm based on the degree of difference between fingerprint ridge lines; the algorithm was successfully implemented on the DSP and improved the detection accuracy. Experiments show that the system is highly intelligent and has good stability.

  7. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    Directory of Open Access Journals (Sweden)

    Vsevolod Afanasyev

    Full Text Available A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  8. Large data analysis: automatic visual personal identification in a demography of 1.2 billion persons

    Science.gov (United States)

    Daugman, John

    2014-05-01

    The largest biometric deployment in history is now underway in India, where the Government is enrolling the iris patterns (among other data) of all 1.2 billion citizens. The purpose of the Unique Identification Authority of India (UIDAI) is to ensure fair access to welfare benefits and entitlements, to reduce fraud, and enhance social inclusion. Only a minority of Indian citizens have bank accounts; only 4 percent possess passports; and less than half of all aid money reaches its intended recipients. A person who lacks any means of establishing their identity is excluded from entitlements and does not officially exist; thus the slogan of UIDAI is: "To give the poor an identity." This ambitious program enrolls a million people every day, across 36,000 stations run by 83 agencies, with a 3-year completion target for the entire national population. The halfway point was recently passed with more than 600 million persons now enrolled. In order to detect and prevent duplicate identities, every iris pattern that is enrolled is first compared against all others enrolled so far; thus the daily workflow now requires 600 trillion (or 600 million-million) iris cross-comparisons. Avoiding identity collisions (False Matches) requires high biometric entropy, and achieving the tremendous match speed requires phase bit coding. Both of these requirements are being delivered operationally by wavelet methods developed by the author for encoding and comparing iris patterns, which will be the focus of this "Large Data Award" presentation.
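
    The phase-bit matching that makes 600 trillion daily comparisons feasible reduces to fractional Hamming distance between binary iris codes. The toy codes below are random bits, not real iris statistics, but they show the genuine/impostor separation the system relies on:

```python
import numpy as np

rng = np.random.default_rng(4)
BITS = 2048                                   # phase bits per iris code

def hamming(a, b):
    """Fractional Hamming distance between two binary iris codes."""
    return np.count_nonzero(a != b) / a.size

# Different eyes agree on ~50% of phase bits by chance; the same eye
# re-imaged differs only by acquisition noise.
eye_a = rng.integers(0, 2, BITS, dtype=np.uint8)
eye_b = rng.integers(0, 2, BITS, dtype=np.uint8)
noise = (rng.random(BITS) < 0.05).astype(np.uint8)   # 5% bit flips
eye_a_again = eye_a ^ noise

d_impostor = hamming(eye_a, eye_b)            # expected ~0.5
d_genuine = hamming(eye_a, eye_a_again)       # expected ~0.05
```

    Because the comparison is pure bitwise XOR and popcount, billions of such tests per second are achievable on commodity hardware.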

  9. Multispectral hypercolorimetry and automatic guided pigment identification: some masterpieces case studies

    Science.gov (United States)

    Melis, Marcello; Miccoli, Matteo; Quarta, Donato

    2013-05-01

    A couple of years ago we proposed, in this same session, an extension to standard colorimetry (CIE '31) that we called Hypercolorimetry. It was based on an even sampling of the 300-1000 nm wavelength range, with the definition of 7 hypercolor matching functions optimally shaped to minimize metamerism. Since then we have consolidated the approach through a large number of multispectral analyses and specialized the system for non-invasive diagnosis of paintings and frescos. In this paper we describe the whole process, from multispectral image acquisition to the final 7-band computation, and we show the results on paintings from masters of colour. We describe and propose a systematic approach to non-invasive diagnosis that is able to change a subjective analysis into a repeatable measure independent of the specific lighting conditions and of the specific acquisition system. Along with Hypercolorimetry and its consolidation in the field of non-invasive diagnosis, we also developed a standard spectral reflectance database of pure pigments and of pigments painted with different bindings. As we will see, this database can be compared to the reflectances of a painting to help the diagnostician identify the proper material. We used a Nikon D800FR (Full Range) camera, a 36-megapixel reflex camera modified under a Nikon/Profilocolore joint project to achieve 300-1000 nm sensitivity. The large amount of data allowed us to perform very accurate pixel comparisons based on their spectral reflectance. All the original pigments and their bindings were provided by the Opificio delle Pietre Dure, Firenze, Italy, while the analyzed masterpieces belong to the collection of the Pinacoteca Nazionale of Bologna, Italy.

  10. Automatic identification and placement of measurement stations for hydrological discharge simulations at basin scale

    Science.gov (United States)

    Grassi, P. R.; Ceppi, A.; Cancarè, F.; Ravazzani, G.; Mancini, M.; Sciuto, D.

    2012-04-01

    corresponding data is used, and false that it is not used. Using this definition of the solution space, it is possible to apply various optimization algorithms, such as genetic algorithms and simulated annealing. Iterating over a large set of possible configurations, these algorithms provide the set of Pareto-optimal solutions, i.e., the number of measuring points is minimized while the forecasting accuracy is maximized. The identified Pareto curve is approximate, since identification of the complete Pareto curve is practically impossible due to the large number of possible configurations. From the experimental results, as expected, we notice that a certain set of weather data is essential for hydrological simulations while other data are negligible. By combining the outcomes of different optimization algorithms, it is possible to extract a reliable set of rules for placing measurement stations for forecast monitoring.
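
    The Pareto-optimal selection described above can be illustrated by the non-domination test over candidate station configurations. The (stations, forecast error) pairs below are invented values, and both objectives are minimized:

```python
def pareto_front(configs):
    """Return configurations not dominated by any other, minimizing both
    objectives: (number of stations, forecast error). A sketch of the
    selection step applied to the optimizer's candidate pool."""
    front = []
    for c in configs:
        dominated = any(o[0] <= c[0] and o[1] <= c[1] and o != c
                        for o in configs)
        if not dominated:
            front.append(c)
    return sorted(front)

# (stations used, forecast RMSE) for candidate station subsets
configs = [(3, 0.9), (4, 0.6), (4, 0.8), (5, 0.5), (6, 0.5), (7, 0.45)]
front = pareto_front(configs)
```

    Genetic algorithms and simulated annealing only change how the candidate pool is generated; this filter extracts the trade-off curve from whatever pool they produce.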

  11. Resolving Quasi-Synonym Relationships in Automatic Thesaurus Construction Using Fuzzy Rough Sets and an Inverse Term Frequency Similarity Function

    Science.gov (United States)

    Davault, Julius M., III.

    2009-01-01

    One of the problems associated with automatic thesaurus construction is determining the semantic relationship between word pairs. Quasi-synonyms provide a type of equivalence relationship: words are similar only for purposes of information retrieval. Determining such relationships automatically in a thesaurus is hard to achieve. The term…

  12. Application Research on Color Bit Code Automatic Identification Technology in Libraries

    Institute of Scientific and Technical Information of China (English)

    李海华

    2012-01-01

    This paper introduces the definition, basic identification principles and features of Color Bit Code automatic identification technology, and briefly surveys its applications in various fields abroad. It then analyzes the feasibility of applying Color Bit Code automatic identification technology in domestic libraries, and finally proposes a library management protocol format based on Color Bit Code technology.

  13. Automatic Assessment of Global Craniofacial Differences between Crouzon mice and Wild-type mice in terms of the Cephalic Index

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Oubel, Estanislao; Frangi, Alejandro F.;

    2006-01-01

    This paper presents the automatic assessment of differences between Wild-type mice and Crouzon mice based on high-resolution 3D Micro CT data. One factor used for the diagnosis of Crouzon syndrome in humans is the cephalic index, which is the skull width/length ratio. This index has traditionally been computed by time-consuming manual measurements that prevent large-scale populational studies. In this study, an automatic method to estimate the cephalic index for this mouse model of Crouzon syndrome is presented. The method is based on constructing a craniofacial atlas of Wild-type mice and then registering each mouse to the atlas using affine transformations. The skull length and width are then measured on the atlas and propagated to all subjects to obtain automatic measurements of the cephalic index. The registration accuracy was estimated by RMS landmark errors. Even though the accuracy…

  14. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    Science.gov (United States)

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated with complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery.

  15. Application of Automatic Identification Technology in US Military Logistics and Its Implications

    Institute of Scientific and Technical Information of China (English)

    王淼; 李孟研; 陈宝锋

    2011-01-01

    This paper presents the main types and technical characteristics of automatic identification technology, and analyzes how the US military selects and applies such technology in its logistics operations, offering lessons for the development of automatic identification technology in Chinese military logistics.

  16. Computer Domain Term Automatic Extraction and Hierarchical Structure Building

    Institute of Scientific and Technical Information of China (English)

    林源; 陈志泊; 孙俏

    2011-01-01

    This paper presents an automatic extraction method for computer-domain terms based on rules and statistics. It uses computer book titles from the Amazon.com website as the corpus; the data are preprocessed by word segmentation and by filtering out stop words and special characters. Terms are extracted by a set of rules combined with word-frequency statistics and inserted into a word tree built from ODP to form the hierarchical structure of computer-domain terms. Experimental results show that, compared with manually tagged terms, the automatically extracted terms achieve high precision and recall.
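
    The rules-plus-statistics extraction step can be sketched as tokenization, stop-word and special-character filtering, and a frequency threshold. The titles, stop-word list, and threshold below are illustrative assumptions, not the paper's corpus or rules:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "for", "and", "with", "to", "in", "of"}

def extract_terms(titles, min_freq=2):
    """Tokenize book titles, drop stop words and special characters,
    and keep tokens that occur at least min_freq times."""
    counts = Counter()
    for title in titles:
        tokens = re.findall(r"[a-z+#]+", title.lower())  # keeps c++ / c#
        counts.update(t for t in tokens if t not in STOPWORDS)
    return {t for t, c in counts.items() if c >= min_freq}

titles = [
    "Programming in Python for Data Analysis",
    "Python Machine Learning",
    "Data Structures and Algorithms in Java",
    "Machine Learning with Java",
]
terms = extract_terms(titles)
```

    Each surviving term would then be inserted under its matching ODP category node to build the hierarchy described above.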

  17. Automatic identification of IASLC-defined mediastinal lymph node stations on CT scans using multi-atlas organ segmentation

    Science.gov (United States)

    Hoffman, Joanne; Liu, Jiamin; Turkbey, Evrim; Kim, Lauren; Summers, Ronald M.

    2015-03-01

    Station-labeling of mediastinal lymph nodes is typically performed to identify the location of enlarged nodes for cancer staging. Stations are usually assigned in clinical radiology practice manually by qualitative visual assessment on CT scans, which is time consuming and highly variable. In this paper, we developed a method that automatically recognizes the lymph node stations in thoracic CT scans based on the anatomical organs in the mediastinum. First, the trachea, lungs, and spine are automatically segmented to locate the mediastinum region. Then, eight more anatomical organs are simultaneously identified by multi-atlas segmentation. Finally, with the segmentation of those anatomical organs, we convert the text definitions of the International Association for the Study of Lung Cancer (IASLC) lymph node map into patient-specific color-coded CT image maps. Thus, a lymph node station is automatically assigned to each lymph node. We applied this system to CT scans of 86 patients with 336 mediastinal lymph nodes measuring 10 mm or greater. 84.8% of the mediastinal lymph nodes were correctly mapped to their stations.

  18. Proliferating cell nuclear antigen (PCNA) allows the automatic identification of follicles in microscopic images of human ovarian tissue

    CERN Document Server

    Kelsey, Thomas W; Castillo, Luis; Wallace, W Hamish B; Gonzálvez, Francisco Cóppola; 10.2147/PLMI.S11116

    2010-01-01

    Human ovarian reserve is defined by the population of nongrowing follicles (NGFs) in the ovary. Direct estimation of ovarian reserve involves the identification of NGFs in prepared ovarian tissue. Previous studies involving human tissue have used hematoxylin and eosin (HE) stain, with NGF populations estimated by human examination either of tissue under a microscope, or of images taken of this tissue. In this study we replaced HE with proliferating cell nuclear antigen (PCNA), and automated the identification and enumeration of NGFs that appear in the resulting microscopic images. We compared the automated estimates to those obtained by human experts, with the "gold standard" taken to be the average of the conservative and liberal estimates by three human experts. The automated estimates were within 10% of the "gold standard", for images at both 100x and 200x magnifications. Automated analysis took longer than human analysis for several hundred images, not allowing for breaks from analysis needed by humans. O...

  19. Automatic de-identification of electronic medical records using token-level and character-level conditional random fields.

    Science.gov (United States)

    Liu, Zengjian; Chen, Yangxin; Tang, Buzhou; Wang, Xiaolong; Chen, Qingcai; Li, Haodi; Wang, Jingfeng; Deng, Qiwen; Zhu, Suisong

    2015-12-01

    De-identification, identifying and removing all protected health information (PHI) present in clinical data including electronic medical records (EMRs), is a critical step in making clinical data publicly available. The 2014 i2b2 (Center of Informatics for Integrating Biology and Bedside) clinical natural language processing (NLP) challenge set up a track for de-identification (track 1). In this study, we propose a hybrid system based on both machine learning and rule approaches for the de-identification track. In our system, PHI instances are first identified by two (token-level and character-level) conditional random fields (CRFs) and a rule-based classifier, and then are merged by some rules. Experiments conducted on the i2b2 corpus show that our system submitted for the challenge achieves the highest micro F-scores of 94.64%, 91.24% and 91.63% under the "token", "strict" and "relaxed" criteria respectively, which is among the top-ranked systems of the 2014 i2b2 challenge. After integrating some refined localization dictionaries, our system is further improved with F-scores of 94.83%, 91.57% and 91.95% under the "token", "strict" and "relaxed" criteria respectively.
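
    The rule-based merging of overlapping PHI predictions from the two CRFs can be illustrated with a toy rule (prefer the longer span on overlap); the spans, offsets and labels below are invented, and the authors' actual merge rules are more elaborate.

    ```python
    def merge_spans(spans_a, spans_b):
        """Merge two lists of (start, end, label) PHI spans from different
        taggers; on overlap keep the longer span (ties favor the first seen)."""
        merged = []
        # sort by start offset, longer spans first among equal starts
        for cand in sorted(spans_a + spans_b, key=lambda s: (s[0], -(s[1] - s[0]))):
            if merged and cand[0] < merged[-1][1]:          # overlaps previous span
                prev = merged[-1]
                if cand[1] - cand[0] > prev[1] - prev[0]:    # keep the longer one
                    merged[-1] = cand
            else:
                merged.append(cand)
        return merged

    # hypothetical predictions from a token-level and a character-level tagger
    token_level = [(0, 4, "NAME"), (10, 14, "DATE")]
    char_level  = [(0, 6, "NAME"), (20, 24, "ID")]
    print(merge_spans(token_level, char_level))
    # [(0, 6, 'NAME'), (10, 14, 'DATE'), (20, 24, 'ID')]
    ```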

  20. An MRI-derived definition of MCI-to-AD conversion for long-term, automatic prognosis of MCI patients.

    Directory of Open Access Journals (Sweden)

    Yaman Aksu

    Full Text Available Alzheimer's disease (AD) and mild cognitive impairment (MCI) are of great current research interest. While there is no consensus on whether MCIs actually "convert" to AD, this concept is widely applied. Thus, the more important question is not whether MCIs convert, but what is the best such definition. We focus on automatic prognostication, nominally using only a baseline brain image, of whether an MCI will convert within a multi-year period following the initial clinical visit. This is not a traditional supervised learning problem since, in ADNI, there are no definitive labeled conversion examples. It is not unsupervised, either, since there are (labeled) ADs and Controls, as well as cognitive scores for MCIs. Prior works have defined MCI subclasses based on whether or not clinical scores significantly change from baseline. There are concerns with these definitions, however, since, e.g., most MCIs (and ADs) do not change from a baseline CDR = 0.5 at any subsequent visit in ADNI, even while physiological changes may be occurring. These works ignore rich phenotypical information in an MCI patient's brain scan and labeled AD and Control examples, in defining conversion. We propose an innovative definition, wherein an MCI is a converter if any of the patient's brain scans are classified "AD" by a Control-AD classifier. This definition bootstraps the design of a second classifier, specifically trained to predict whether or not MCIs will convert. We thus predict whether an AD-Control classifier will predict that a patient has AD. Our results demonstrate that this definition leads not only to much higher prognostic accuracy than by-CDR conversion, but also to subpopulations more consistent with known AD biomarkers (including CSF markers). We also identify key prognostic brain region biomarkers.

  1. System identification of mGluR-dependent long-term depression.

    Science.gov (United States)

    Tambuyzer, Tim; Ahmed, Tariq; Taylor, C James; Berckmans, Daniel; Balschun, Detlef; Aerts, Jean-Marie

    2013-03-01

    Recent advances have started to uncover the underlying mechanisms of metabotropic glutamate receptor (mGluR)-dependent long-term depression (LTD). However, it is not completely clear how these mechanisms are linked, and it is believed that several crucial mechanisms remain to be revealed. In this study, we investigated whether system identification (SI) methods can be used to gain insight into the mechanisms of synaptic plasticity. SI methods have been shown to be an objective and powerful approach for describing how sensory neurons encode information about stimuli. However, to our knowledge, it is the first time that SI methods have been applied to electrophysiological brain slice recordings of synaptic plasticity responses. The results indicate that the SI approach is a valuable tool for reverse-engineering of mGluR-LTD responses. We suggest that such SI methods can aid in unraveling the complexities of synaptic function.

  2. Analysis of Long-Term Station Blackout without automatic depressurization at Peach Bottom using MELCOR (Version 1.8)

    Energy Technology Data Exchange (ETDEWEB)

    Madni, I.K. [Brookhaven National Lab., Upton, NY (United States)

    1994-05-01

    This report documents the results from MELCOR calculations of the Long-Term Station Blackout accident sequence, with failure to depressurize the reactor vessel, at the Peach Bottom (BWR Mark I) plant, and presents comparisons with Source Term Code Package (STCP) calculations of the same sequence. STCP calculated the transient out to 13.5 hours after core uncovery. Most of the MELCOR calculations presented have been carried out to between 15 and 16.7 hours after core uncovery. The results include the release of source terms to the environment. The results of several sensitivity calculations with MELCOR are also presented, which explore the impact of varying user-input modeling and timestep control parameters on the accident progression and release of source terms to the environment. Most of the calculations documented here were performed in FY1990 using MELCOR Version 1.8BC. However, the appendices also document the results of more recent calculations performed in FY1991 using MELCOR versions 1.8CZ and 1.8DNX.

  3. Automatic summarization method based on thematic term set

    Institute of Scientific and Technical Information of China (English)

    刘兴林; 郑启伦; 马千里

    2011-01-01

    This paper proposes an automatic summarization method based on a thematic term set, for automatically extracting abstracts from Chinese documents. According to the extracted thematic term set, the method weights each sentence by the weights of the thematic terms it contains, obtains the total weight of each sentence, selects by percentage the sentences with the highest weights, and finally outputs the summary sentences in their original order. Experiments were conducted on the HIT IR-Lab text summarization corpus, using intrinsic automatic evaluation measures to assess the performance of the proposed method. Experimental results show that the proposed method achieves an F-measure of 66.07%, suggesting that it generates summaries of high quality, close to the reference abstracts.
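
    The sentence-scoring step described in the abstract can be sketched as follows; the sentences, term weights and selection ratio are invented, and term matching is naive substring matching rather than proper Chinese word segmentation.

    ```python
    def summarize(sentences, term_weights, ratio=0.5):
        """Score each sentence by the summed weights of the thematic terms it
        contains, keep the top `ratio` fraction, and return the selected
        sentences in their original document order."""
        scored = []
        for idx, sent in enumerate(sentences):
            weight = sum(w for term, w in term_weights.items() if term in sent)
            scored.append((idx, weight))
        n_keep = max(1, int(len(sentences) * ratio))
        top = sorted(scored, key=lambda p: p[1], reverse=True)[:n_keep]
        keep = sorted(idx for idx, _ in top)   # restore original order
        return [sentences[i] for i in keep]

    # hypothetical sentences and thematic term weights
    sents = ["the model trains on corpus data",
             "weather was pleasant",
             "the corpus improves model accuracy"]
    weights = {"model": 2.0, "corpus": 1.5, "accuracy": 1.0}
    print(summarize(sents, weights, ratio=0.34))
    # ['the corpus improves model accuracy']
    ```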

  4. Automatic identification of type 2 diabetes, hypertension, ischaemic heart disease, heart failure and their levels of severity from Italian General Practitioners' electronic medical records: a validation study

    Science.gov (United States)

    Schuemie, Martijn J; Mazzaglia, Giampiero; Lapi, Francesco; Francesconi, Paolo; Pasqua, Alessandro; Bianchini, Elisa; Montalbano, Carmelo; Roberto, Giuseppe; Barletta, Valentina; Cricelli, Iacopo; Cricelli, Claudio; Dal Co, Giulia; Bellentani, Mariadonata; Sturkenboom, Miriam; Klazinga, Niek

    2016-01-01

    Objectives The Italian project MATRICE aimed to assess how well cases of type 2 diabetes (T2DM), hypertension, ischaemic heart disease (IHD) and heart failure (HF) and their levels of severity can be automatically extracted from the Health Search/CSD Longitudinal Patient Database (HSD). From the medical records of the general practitioners (GP) who volunteered to participate, cases were extracted by algorithms based on diagnosis codes, keywords, drug prescriptions and results of diagnostic tests. A random sample of identified cases was validated by interviewing their GPs. Setting HSD is a database of primary care medical records. A panel of 12 GPs participated in this validation study. Participants 300 patients were sampled for each disease, except for HF, where 243 patients were assessed. Outcome measures The positive predictive value (PPV) was assessed for the presence/absence of each condition against the GP's response to the questionnaire, and Cohen's κ was calculated for agreement on the severity level. Results The PPV was 100% (99% to 100%) for T2DM and hypertension, 98% (96% to 100%) for IHD and 55% (49% to 61%) for HF. Cohen's kappa for agreement on the severity level was 0.70 for T2DM and 0.69 for hypertension and IHD. Conclusions This study shows that individuals with T2DM, hypertension or IHD can be validly identified in HSD by automated identification algorithms. Automatic queries for levels of severity of the same diseases compare well with the corresponding clinical definitions, but some misclassification occurs. For HF, further research is needed to refine the current algorithm. PMID:27940627
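
    The two validation metrics used above can be computed as follows; the counts and the confusion matrix are invented for illustration and are not the study's data.

    ```python
    def ppv(true_pos, false_pos):
        """Positive predictive value: TP / (TP + FP)."""
        return true_pos / (true_pos + false_pos)

    def cohens_kappa(confusion):
        """Cohen's kappa for a square confusion matrix
        (rows: algorithm's severity level, columns: GP's severity level)."""
        total = sum(sum(row) for row in confusion)
        observed = sum(confusion[i][i] for i in range(len(confusion))) / total
        expected = sum(
            sum(confusion[i]) * sum(row[i] for row in confusion)
            for i in range(len(confusion))
        ) / total ** 2
        return (observed - expected) / (1 - expected)

    # hypothetical validation counts: 294 confirmed cases out of 300 sampled
    print(round(ppv(294, 6), 2))                         # 0.98
    # hypothetical 2-level severity agreement matrix
    print(round(cohens_kappa([[40, 10], [10, 40]]), 2))  # 0.6
    ```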

  5. Automatic Frequency Identification under Sample Loss in Sinusoidal Pulse Width Modulation Signals Using an Iterative Autocorrelation Algorithm

    Directory of Open Access Journals (Sweden)

    Alejandro Said

    2016-08-01

    Full Text Available In this work, we present a simple algorithm to automatically calculate the Fourier spectrum of a Sinusoidal Pulse Width Modulation (SPWM) signal. Modulated voltage signals of this kind are used in industry by speed drives to vary the speed of alternating current motors while maintaining a smooth torque. Nevertheless, the SPWM technique produces undesired harmonics, which yield stator heating and power losses. By monitoring these signals without human interaction, it is possible to identify the harmonic content of SPWM signals in a fast and continuous manner. The algorithm is based on the autocorrelation function, commonly used in radar and voice signal processing. Taking advantage of the symmetry properties of the autocorrelation, the algorithm estimates half of the period of the fundamental frequency, thus allowing estimation of the number of samples needed to produce an accurate Fourier spectrum. To deal with the loss of samples, i.e., the scan backlog, the algorithm iteratively acquires and trims the discrete sequence of samples until the required number of samples reaches a stable value. The simulation shows that the algorithm is affected by neither the magnitude of the switching pulses nor the acquisition noise.
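
    A minimal sketch of the core idea, period estimation from the autocorrelation function, on a synthetic square wave standing in for an SPWM fundamental; the sampling rate, fundamental frequency and the simple peak-walking heuristic are illustrative assumptions, not the paper's algorithm.

    ```python
    import math

    def autocorr(x, lag):
        """Unnormalized autocorrelation of x at the given lag."""
        return sum(x[i] * x[i + lag] for i in range(len(x) - lag))

    def estimate_period(x, min_lag=2):
        """Estimate the fundamental period (in samples) as the lag of the
        first autocorrelation peak after the initial decay."""
        lag = min_lag
        # walk down past the central peak while autocorrelation decreases
        while lag + 1 < len(x) and autocorr(x, lag + 1) < autocorr(x, lag):
            lag += 1
        # then climb to the next local maximum
        while lag + 1 < len(x) and autocorr(x, lag + 1) > autocorr(x, lag):
            lag += 1
        return lag

    fs, f0 = 1000, 50   # 1 kHz sampling, 50 Hz fundamental (assumed values)
    # square-wave test signal; the half-sample phase offset avoids exact zeros
    x = [math.copysign(1.0, math.sin(2 * math.pi * f0 * (i + 0.5) / fs))
         for i in range(200)]
    print(estimate_period(x))  # 20 samples = fs / f0
    ```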

  6. Automatic Virtual Entity Simulation of Conceptual Design Results - Part I: Symbolic Scheme Generation and Identification

    Institute of Scientific and Technical Information of China (English)

    WANG Yu-xin; LI Yu-tong

    2014-01-01

    The development of new products of high quality, low unit cost, and short lead time to market is a key element for any enterprise to obtain a competitive advantage. To shorten the lead time to market and improve the creativity and performance of the product, this paper presents a rule-based conceptual design approach and a methodology for automatically simulating, in virtual entity form, the conceptual design results generated during the conceptual design process. This part of the paper presents a rule-based conceptual design method for generating creative conceptual design schemes of mechanisms, based on Yan's kinematic chain regeneration creative design method. Design rules are adopted to describe the design requirements of the functional characteristics, the connection relationships, and the topological characteristics among mechanisms. Through the graph-based reasoning process, the conceptual design space is expanded greatly, and potential creative conceptual design results are dug out. By refining the design rules, the solution exploration problem is avoided and tendentious conceptual design schemes are generated. Since mechanical, electrical, and hydraulic subsystems can be transformed into general mechanisms, the conceptual design method presented in this paper can also be applied to the conceptual design of complex mechatronic systems. Finally, the method to identify conceptual design schemes is given.

  7. Automatic and rapid identification of glycopeptides by nano-UPLC-LTQ-FT-MS and proteomic search engine.

    Science.gov (United States)

    Giménez, Estela; Gay, Marina; Vilaseca, Marta

    2017-01-30

    Here we demonstrate the potential of nano-UPLC-LTQ-FT-MS and the Byonic™ proteomic search engine for the separation, detection, and identification of N- and O-glycopeptide glycoforms in standard glycoproteins. The use of a BEH C18 nanoACQUITY column allowed the separation of the glycopeptides present in the glycoprotein digest and a baseline-resolution of the glycoforms of the same glycopeptide on the basis of the number of sialic acids. Moreover, we evaluated several acquisition strategies in order to improve the detection and characterization of glycopeptide glycoforms with the maximum number of identification percentages. The proposed strategy is simple to set up with the technology platforms commonly used in proteomic labs. The method allows the straightforward and rapid obtention of a general glycosylated map of a given protein, including glycosites and their corresponding glycosylated structures. The MS strategy selected in this work, based on a gas phase fractionation approach, led to 136 unique peptides from four standard proteins, which represented 78% of the total number of peptides identified. Moreover, the method does not require an extra glycopeptide enrichment step, thus preventing the bias that this step could cause towards certain glycopeptide species. Data are available via ProteomeXchange with identifier PXD003578.

  8. Automatized near-real-time short-term Probabilistic Volcanic Hazard Assessment of tephra dispersion before eruptions: BET_VHst for Vesuvius and Campi Flegrei during recent exercises

    Science.gov (United States)

    Selva, Jacopo; Costa, Antonio; Sandri, Laura; Rouwet, Dmtri; Tonini, Roberto; Macedonio, Giovanni; Marzocchi, Warner

    2015-04-01

    Probabilistic Volcanic Hazard Assessment (PVHA) represents the most complete scientific contribution for planning rational strategies aimed at mitigating the risk posed by volcanic activity at different time scales. The definition of the space-time window for PVHA is related to the kind of risk mitigation actions that are under consideration. Short temporal intervals (days to weeks) are important for short-term risk mitigation actions like the evacuation of a volcanic area. During volcanic unrest episodes or eruptions, it is of primary importance to produce short-term tephra fallout forecast, and frequently update it to account for the rapidly evolving situation. This information is obviously crucial for crisis management, since tephra may heavily affect building stability, public health, transportations and evacuation routes (airports, trains, road traffic) and lifelines (electric power supply). In this study, we propose a methodology named BET_VHst (Selva et al. 2014) for short-term PVHA of volcanic tephra dispersal based on automatic interpretation of measures from the monitoring system and physical models of tephra dispersal from all possible vent positions and eruptive sizes based on frequently updated meteorological forecasts. The large uncertainty at all the steps required for the analysis, both aleatory and epistemic, is treated by means of Bayesian inference and statistical mixing of long- and short-term analyses. The BET_VHst model is here presented through its implementation during two exercises organized for volcanoes in the Neapolitan area: MESIMEX for Mt. Vesuvius, and VUELCO for Campi Flegrei. References Selva J., Costa A., Sandri L., Macedonio G., Marzocchi W. (2014) Probabilistic short-term volcanic hazard in phases of unrest: a case study for tephra fallout, J. Geophys. Res., 119, doi: 10.1002/2014JB011252

  9. Video-based automatic front-view human identification

    Institute of Scientific and Technical Information of China (English)

    贲晛烨; 王科俊; 马慧

    2012-01-01

    A system was designed to automatically identify a person from a front-view angle in a video sequence, including modules for Adaboost pedestrian detection, Adaboost face detection, complexion verification, gait preprocessing, period detection, feature extraction, and decision-level fusion and identification. The face detection and gait period detection modules are activated automatically by the pedestrian detection module. Experimental results show that the arm-swing region can be detected to obtain the front-view gait period accurately and with minimal computation, making the method suitable for real-time gait recognition. Applying gait features assisted by face features in decision-level fusion is a new approach to human identification in video sequences. Even in gait recognition with a single sample per person, the proposed scheme improves the correct recognition rate when face and gait information are fused, compared with using gait features alone.

  10. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic, and it may not be easy to name the rule by which a sequence is generated; however, such a rule always exists. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.
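
    A concrete instance of the concept: the Thue-Morse sequence is 2-automatic, produced by a two-state automaton that reads the binary digits of n; equivalently, term n is the parity of the number of 1-bits in n. (This is a standard fact about automatic sequences, not a quotation from the book.)

    ```python
    def thue_morse(n):
        """n-th Thue-Morse term via the digit-reading automaton: start in
        state 0 and flip the state on every 1-bit of n."""
        state = 0
        while n:
            state ^= n & 1   # transition on the current binary digit
            n >>= 1
        return state

    print([thue_morse(n) for n in range(8)])  # [0, 1, 1, 0, 1, 0, 0, 1]
    ```

    The sequence is not ultimately periodic, yet the two-state automaton above is a complete description of its generating rule.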

  11. System identification and automatic tuning of the controller in hydro power plants

    Energy Technology Data Exchange (ETDEWEB)

    Anz, R.

    2002-07-01

    In this work a method is presented to generate dynamic nonlinear models for speed- and power-controlled hydro power plants. The models are identified automatically from data measured during operation and can be used for the design or optimization of the controller parameters. Local linear neuro-fuzzy models are used, which are well suited to modelling nonlinear static and dynamic systems. For a given set of measured data, the structure and parameters of the model are generated largely automatically with the LOLIMOT algorithm, which is well known from the literature. Several modifications of this algorithm are investigated in its application to hydro power stations. Since sufficient measured data from real power plants were not available, theoretical models based on physical laws and equations had to be used instead. The parameters for speed and power control are optimized using a global optimization method; other optimization and design methods are also discussed. The controllers optimized with the experimentally generated local linear neuro-fuzzy model are tested against the theoretical model, and a clear improvement of the controller can be confirmed. (orig.)

  12. Automatic classification and robust identification of vestibulo-ocular reflex responses: from theory to practice: introducing GNL-HybELS.

    Science.gov (United States)

    Ghoreyshi, Atiyeh; Galiana, Henrietta

    2011-10-01

    The Vestibulo-Ocular Reflex (VOR) stabilizes images of the world on our retinae when our head moves. Basic daily activities are thus impaired if this reflex malfunctions. During the past few decades, scientists have modeled and identified this system mathematically to diagnose and treat VOR deficits. However, traditional methods do not analyze VOR data comprehensively because they disregard the switching nature of nystagmus; this can bias estimates of VOR dynamics. Here we propose, for the first time, an automated tool to analyze entire VOR responses (slow and fast phases), without a priori classification of nystagmus segments. We have developed GNL-HybELS (Generalized NonLinear Hybrid Extended Least Squares), an algorithmic tool to simultaneously classify and identify the responses of a multi-mode nonlinear system with delay, such as the horizontal VOR and its alternating slow and fast phases. This algorithm combines the procedures of Generalized Principal Component Analysis (GPCA) for classification, and Hybrid Extended Least Squares (HybELS) for identification, by minimizing a cost function in an optimization framework. It is validated here on clean and noisy VOR simulations and then applied to clinical VOR tests on controls and patients. Prediction errors were less than 1 deg for simulations and ranged from 0.69 deg to 2.1 deg for the clinical data. Nonlinearities, asymmetries, and dynamic parameters were detected in normal and patient data, in both fast and slow phases of the response. This objective approach to VOR analysis now allows the design of more complex protocols for the testing of oculomotor and other hybrid systems.

  13. Analysis on Automatic Identification Technology of Intelligent Building Access Control System

    Institute of Scientific and Technical Information of China (English)

    张卉

    2015-01-01

    The access control system is a necessary facility in intelligent buildings, providing security protection, automatic control and other functions. The fingerprint identification system is a new artificial intelligence system that provides a technological basis for automatic identification in access control. This paper analyzes the development trend of intelligent buildings and the basic structure of fingerprint identification systems, and introduces the application of automatic identification technology in intelligent building access control systems.

  14. The Availability Of Automatic Identification System (AIS) Based On Latency Position Reports In The Gulf Of Gdansk

    Directory of Open Access Journals (Sweden)

    Jaskólski Krzysztof

    2014-06-01

    Full Text Available The problem of determining geographic position, considered only in terms of measurement error, seems to be solved on a global scale. In view of the above, since the nineties the operational characteristics of radio-navigation systems have been equally important. Integrated navigation systems operate in a multi-sensor environment, and it is important to determine the temporal validity of data to make them usable in the data fusion process. In the age of digital data processing, the requirements for continuity, availability, reliability and integrity of information have grown. This article analyses the problem of time stamp discrepancies in dynamic position reports. For this purpose, a statistical summary of the latency of position reports is presented. Navigation data were recorded over 30 days in March 2014 from 19 vessels located in the area of the Gulf of Gdansk. On the basis of the latency of position reports, it is possible to determine the availability of the AIS system.
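
    The availability notion based on report latency can be illustrated with a toy calculation; the latencies and the 10 s nominal reporting interval below are invented for the example (actual AIS reporting intervals depend on vessel class, speed and manoeuvre).

    ```python
    def availability(latencies_s, nominal_interval_s=10.0):
        """Fraction of dynamic position reports whose latency stays within
        the nominal reporting interval."""
        on_time = sum(1 for t in latencies_s if t <= nominal_interval_s)
        return on_time / len(latencies_s)

    # hypothetical latencies (seconds) of five consecutive position reports
    reports = [2.1, 9.8, 10.0, 31.5, 4.0]
    print(availability(reports))  # 4 of 5 reports within 10 s -> 0.8
    ```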

  15. Monitoring of Intrusive Vessels Based on an Automatic Identification System (AIS)

    Institute of Scientific and Technical Information of China (English)

    郭浩; 张晰; 安居白; 李冠宇

    2013-01-01

    Vessels from neighboring countries often enter the territorial waters and exclusive economic zone of China illegally. In order to protect and develop marine resources effectively, this paper analyzes the characteristics and sailing features of ships from one neighboring country of China that entered its exclusive economic zone and the sea of China in April 2012. In particular, an automatic identification system (AIS) is used to collect dynamic information such as position, speed and heading, and static information such as name, call sign, draft and dangerous goods carried. The geographic distribution, velocity and regular route patterns of vessels are then used to develop a ship traffic information database. This paper provides an effective way of monitoring intrusive vessels, in order to protect China's maritime rights.

  16. Undirected Simple Graphs and Undirected Connected Graph Automatic Identification System

    Institute of Scientific and Technical Information of China (English)

    张娟

    2012-01-01

    In recent years, more and more attention has been paid to graph theory in mathematics and other scientific fields, and there has been rapid development in graph theory and its applications in physics, chemistry, operations research, computer science, electronics, information theory, cybernetics, network theory, social science, economic management and almost every other discipline. As an important part of graph theory, research on the connectivity of undirected graphs is of great significance. This paper introduces the design and implementation of an automatic identification system for undirected simple graphs and undirected connected graphs.

  17. Identification of Biocontrol Bacteria against Soybean Root Rot with Biolog Automatic Microbiology Analysis System

    Institute of Scientific and Technical Information of China (English)

    许艳丽; 刘海龙; 李春杰; 潘凤娟; 李淑娴; 刘新晶

    2012-01-01

    In order to determine the taxonomic position of two biocontrol bacteria against soybean root rot, traditional morphological identification and the BIOLOG automatic microbiology analysis system were used to identify strains B021a and B04b. The results showed that the similarity value of strain B021a with Vibrio tubiashii was 0.634, with a probability of 86% and a genetic distance of 4.00, and that the similarity value of strain B04b with Pasteurella trehalosi was 0.610, with a probability of 75% and a genetic distance of 2.77. Based on colony morphological properties and the BIOLOG analysis system, strain B021a was identified as Vibrio tubiashii and strain B04b as Pasteurella trehalosi.

  18. Inferring Sibling Relationships Using the Identifiler Typing System and Automatic STR Genotype Analysis

    Institute of Scientific and Technical Information of China (English)

    郭燕霞; 郝金萍; 刘路; 叶健; 徐小玉; 欧元; 张建; 林小健; 王华; 翟亚森; 米瑞华; 康艳荣; 李万水; 陈松; 张国臣; 刘开会; 郭燕东; 李嘉丽; 郭红玲

    2009-01-01

    Objective: To evaluate the feasibility of sibling identification with the Identifiler typing system using in-house software for automatic sibling analysis (ASI). Methods: Using ASI, genotypes at the 15 autosomal STR loci of the Identifiler system from 151 pairs of full siblings and 31,224 simulated pairs of unrelated individuals were analyzed; the paternity index (PI), sibling probability (W_(FS)) and allele matching were calculated, and the resulting data were statistically analyzed and automatically ranked. Results: When W_(FS) exceeded 99.999%, 39.07% of sibling pairs and 0% of unrelated pairs fell in this range; the two groups differed significantly, and a sibling relationship could be inferred. When W_(FS) was between 1% and 99.999%, sibling and unrelated pairs partially overlapped (60.93% of sibling pairs and 21.3% of unrelated pairs); additional STR loci, combined with case information and supplementary Y-STR and mtDNA testing, could then be used to decide whether two individuals are siblings. When W_(FS) was below 1%, 0% of sibling pairs and 78.7% of unrelated pairs fell in this range, and a sibling relationship could be excluded. Allele matching between individuals showed that, across the 15 STR loci of the Identifiler system, five or more loci with both alleles identical, or at most one locus with both alleles different, suggests a sibling relationship, whereas six or more loci with both alleles different, or at most one locus with both alleles identical, suggests unrelated individuals; these values can serve as cut-offs for predicting sibling relationships. Conclusion: The Identifiler system and the ASI automatic analysis software can be used to infer sibling relationships.

  19. Automatic identification method for multiple argumentation-information relationships in group decision-making

    Institute of Scientific and Technical Information of China (English)

    李欣苗; 张朋柱; 李靖

    2012-01-01

    Decision making is the core of management and runs through the whole management process. Group decision-making produces a large amount of argumentation information, and multiple relationships exist between the argumentation information and the decision solutions. This paper studies an automatic identification method for these multiple argumentation-information relationships in group decision-making, builds an automatic identification model, and applies it to an actual group decision process. The application results show that the model effectively identifies strongly supportive, supportive, neutral, against, and strongly against relationships between argumentation information and decision solutions; it can help group members organize and analyze the large amount of argumentation information and improves the efficiency of information organization in the group decision-making process.
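
The paper does not publish its model, but the five-way relationship labeling it describes can be illustrated with a toy lexicon-based stance scorer; the word lists, intensifier handling, and score-to-label mapping below are all illustrative assumptions, not the authors' method:

```python
# Hypothetical sentiment lexicons for the sketch (not from the paper).
SUPPORT_WORDS = {"agree", "support", "good", "feasible", "excellent"}
OPPOSE_WORDS = {"disagree", "oppose", "bad", "infeasible", "wrong"}
INTENSIFIERS = {"strongly", "very", "completely"}

def classify_stance(comment: str) -> str:
    """Map a comment to one of the five relationship categories by lexicon score."""
    tokens = comment.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        # An intensifier immediately before a sentiment word doubles its weight.
        weight = 2 if i > 0 and tokens[i - 1] in INTENSIFIERS else 1
        if tok in SUPPORT_WORDS:
            score += weight
        elif tok in OPPOSE_WORDS:
            score -= weight
    if score >= 2:
        return "strongly supportive"
    if score == 1:
        return "supportive"
    if score == 0:
        return "neutral"
    if score == -1:
        return "against"
    return "strongly against"
```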

  20. REMI and ROUSE: Quantitative models for long-term priming in perceptual identification.

    NARCIS (Netherlands)

    E.J.M. Wagenmakers; R. Zeelenberg; D. Huber; J.G.W. Raaijmakers; R.M. Shiffrin; L.J. Schooler

    2003-01-01

    (from the chapter) The REM model originally developed for recognition memory (R. M. Shiffrin and M. Steyvers, 1997) has recently been extended to implicit memory phenomena observed during threshold identification of words. The authors discuss 2 REM models based on Bayesian principles: a model for lo

  1. Multi-Innovation Stochastic Gradient Identification Algorithm for Hammerstein Controlled Autoregressive Autoregressive Systems Based on the Key Term Separation Principle and on the Model Decomposition

    Directory of Open Access Journals (Sweden)

    Huiyi Hu

    2013-01-01

    speed of the stochastic gradient algorithm. The key term separation principle can simplify the identification model of the input nonlinear system, and the decomposition technique can enhance computational efficiencies of identification algorithms. The simulation results show that the proposed algorithm is effective for estimating the parameters of IN-CARAR systems.
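
The record above is truncated, but the multi-innovation stochastic gradient idea it names can be illustrated on a plain linear-in-parameters model. The IN-CARAR structure, key-term separation, and model decomposition are omitted, and every value below (innovation length, inputs, sample count) is an illustrative assumption rather than the paper's algorithm:

```python
import random

def misg_identify(theta_true, n_samples=2000, p=5, seed=1):
    """Multi-innovation stochastic gradient estimation for y = phi^T theta.

    p is the innovation length: each update reuses the p most recent
    regressor/innovation pairs, which speeds up plain SG convergence.
    """
    rng = random.Random(seed)
    n = len(theta_true)
    theta = [0.0] * n
    history = []              # the p most recent (phi, y) pairs
    r = 1.0                   # step-size normalizer, grows with data
    for _ in range(n_samples):
        phi = [rng.uniform(-1, 1) for _ in range(n)]
        y = sum(a * b for a, b in zip(phi, theta_true))   # noise-free output
        history.append((phi, y))
        history = history[-p:]
        r += sum(x * x for x in phi)
        # Stacked-innovation gradient over the last p samples.
        grad = [0.0] * n
        for phi_j, y_j in history:
            e = y_j - sum(a * b for a, b in zip(phi_j, theta))
            for k in range(n):
                grad[k] += phi_j[k] * e
        theta = [theta[k] + grad[k] / r for k in range(n)]
    return theta
```

With p = 1 this reduces to the ordinary stochastic gradient algorithm; larger p trades a little per-step work for a noticeably faster decay of the estimation error.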

  2. Molecular identification of Taenia specimens after long-term preservation in formalin.

    Science.gov (United States)

    Jeon, Hyeong-Kyu; Kim, Kyu-Heon; Eom, Keeseon S

    2011-06-01

    The majority of Taenia tapeworm specimens in museum collections are kept in a formalin fixative for permanent preservation, mainly for use in morphological examinations. This study aims to improve Taenia tapeworm identification, even of specimens preserved in formalin for as long as 81 years. Taenia tapeworms were collected by the parasite collection unit of the Swiss Natural History Museum and from units in Indonesia, Japan and Korea. A small amount of formalin-fixed tissue (100 mg) was crushed in liquid nitrogen and then soaked in a Tris-EDTA buffer for 3-5 h. The sample was then digested in SDS and proteinase K (20 mg/ml) for 3-5 h at 56 °C. After the addition of proteinase K (20 mg/ml), SDS and hexadecyl-trimethyl-ammonium bromide (CTAB), incubation was continued for another 3 h at 65 °C. A maximum yield of genomic DNA was obtained from this additional step, and the quality of genomic DNA obtained with this extraction method appeared to be independent of the duration of storage in the formalin fixative. The molecular identification of Taenia tapeworms was performed by using PCR and DNA sequences corresponding to positions 80-428 of the cox1 gene. T. asiatica was detected in the isolates from Indonesia, Japan and Korea. Improvements in the genomic DNA extraction method for formalin-fixed museum collections will help in the molecular identification of parasites.

  3. Research on an Automatic Identification System for Target Pests in Agricultural Automated Spraying Machinery

    Institute of Scientific and Technical Information of China (English)

    张震; 高雄; 陈铁英; 王海超

    2016-01-01

    Recognition and localization of the spray target is one of the core technologies in agricultural automatic spraying machinery. For precision spraying of pesticides on diseased and pest-infested cabbage, accurate automatic identification of the damage is key. A machine-vision detection system based on Euclidean distance for automatic identification of cabbage moth damage was therefore combined with a spectral imaging system built around a Qualityspec spectrometer, and the color and spectral features of normal cabbage leaves and leaves damaged by the cabbage moth were analyzed. The Otsu threshold-selection algorithm for machine-vision segmentation and an adaptive band-selection method were used to extract the optimal geometric threshold for color difference and the characteristic spectral bands of the two kinds of leaves. The test results show that combining machine vision with spectral technology enables automatic and accurate identification of cabbage moth damage, with an accuracy of up to 94%. An integrated machine-vision and spectral identification framework can thus lay the foundation for developing spraying robots for automatic control of crop diseases and pests, so as to achieve real-time identification and timely treatment.
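
The Otsu step mentioned above can be illustrated with a minimal, self-contained implementation that picks the gray level maximizing between-class variance over a 256-bin histogram. This is a sketch of the standard algorithm only; the paper's actual pipeline and parameters are not published:

```python
def otsu_threshold(pixels):
    """Return the gray level (0-255) that maximizes between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0          # weighted sum of the background class
    w_bg = 0              # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue          # no background yet
        w_fg = total - w_bg
        if w_fg == 0:
            break             # no foreground left
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels at or below the returned level are treated as one class (e.g. background) and the rest as the other, which is how a leaf-damage region would be segmented from its surroundings.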

  4. Paraphrase Identification using Semantic Heuristic Features

    Directory of Open Access Journals (Sweden)

    Zia Ul-Qayyum

    2012-11-01

    Full Text Available The Paraphrase Identification (PI) problem is to classify whether or not two sentences are close enough in meaning to be termed paraphrases. PI is an important research dimension with practical applications in Information Extraction (IE), Machine Translation, Information Retrieval, Automatic Identification of Copyright Infringement, Question Answering Systems and Intelligent Tutoring Systems, to name a few. This study presents a novel approach to paraphrase identification using semantic heuristic features, aimed at improving accuracy compared to state-of-the-art PI systems. Finally, a comprehensive critical analysis of misclassifications is carried out to provide insightful evidence about the proposed approach and the corpora used in the experiments.
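
As a point of reference for what a PI classifier does (not the paper's semantic-heuristic method), a minimal lexical-overlap baseline can be sketched; the 0.5 Jaccard cutoff is an arbitrary illustrative choice:

```python
def jaccard_paraphrase(s1: str, s2: str, threshold: float = 0.5) -> bool:
    """Classify two sentences as paraphrases by word-set Jaccard overlap."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    if not a and not b:
        return True           # two empty strings are trivially paraphrases
    overlap = len(a & b) / len(a | b)
    return overlap >= threshold
```

Baselines like this fail exactly where semantic features are needed: sentences with little word overlap can still be paraphrases, which is the gap the study's heuristics target.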

  5. Research on the Hemotype fully automatic blood grouping system for identification of blood groups

    Institute of Scientific and Technical Information of China (English)

    马晓军; 李海军; 李凌波

    2012-01-01

    Objective: To evaluate a fully automatic blood grouping system for the identification of ABO and RhD blood groups. Methods: A microplate-based automatic blood grouping system was used; on disposable U-shaped plates, sample dispensing, centrifugation, oscillation, and result reading were performed to complete blood group testing, and reports were issued over the local area network. Results: The first-pass accurate ABO typing rate for 8013 donor samples was 99.91% (8006/8013); 7 samples showed discrepant forward and reverse typing (including O-cell agglutination), and 4 samples were RhD negative. Forward and reverse ABO typing agreed in all 60 chylous samples (100%, 60/60). Conclusion: The fully automatic blood grouping system makes ABO forward/reverse typing and RhD testing safer and more reliable; the procedure is standardized and automated, and the results can be stored permanently and retrieved conveniently.

  6. Automatic Reading

    Institute of Scientific and Technical Information of China (English)

    胡迪

    2007-01-01

    Reading is the key to school success and, like any skill, it takes practice. A child learns to walk by practising until he no longer has to think about how to put one foot in front of the other. The great athlete practises until he can play quickly, accurately and without thinking. Educators call it automaticity.

  7. Automatic structural modeling method based on process manufacturing system identification

    Institute of Scientific and Technical Information of China (English)

    韩中; 赵升吨; 张贵成; 阮卫平; 李建平; 沈红立

    2014-01-01

    Models can solve many problems in systems engineering, so a new automatic structural modeling method based on identification of process manufacturing systems is presented. By identifying the system's structural composition and the relationships among its units, the structural data of the model are extracted and a system simulation model is generated automatically. Graph theory is adopted as the mathematical representation of the industrial system in the modeling. Rule-based codes are assigned to the system units, identification functions are defined according to the properties of the system structure, and the automatic modeling process is carried out through iterative computation. An example verifies that the presented method is feasible and satisfies the requirements of usefulness, efficiency, and accuracy in system modeling.

  8. Automatic Tools for Diagnosis Support of Total Hip Replacement Follow-up

    Directory of Open Access Journals (Sweden)

    SULTANA, A.

    2011-11-01

    Full Text Available Total hip replacement is a common procedure in today's orthopedics, with a high rate of long-term success. Failure prevention is based on regular follow-up aimed at checking the fit and state of the prosthesis by means of visual inspection of radiographic images. Our purpose is to provide automatic means for aiding medical personnel in this task. We have therefore constructed tools for automatic identification of the component parts of the radiograph, followed by analysis of the interactions between the bone and the prosthesis. The results form a set of parameters of obvious interest in medical diagnosis.

  9. Records, record linkage, and the identification of long term environmental hazards

    Energy Technology Data Exchange (ETDEWEB)

    Acheson, E.D.

    1978-11-15

    Long-term effects of toxic substances in man which have been recognized so far have been noticed because they have involved gross relative risks, or bizarre effects, or have been stumbled upon by chance or because of special circumstances. These facts and some recent epidemiological evidence together suggest that a systematic approach with more precise methods and data would almost certainly reveal the effects of many more toxic substances, particularly in workers exposed in manufacturing industry. Additional ways are suggested in which record linkage techniques might be used to identify substances with long-term toxic effects. Obstacles to further progress in the field of monitoring for long-term hazards in man are: lack of a public policy dealing with confidentiality and informed consent in the use of identifiable personal records, which balances the needs of bona fide research workers with proper safeguards for the privacy of the individual, and lack of resources to improve the quality, accessibility and organization of the appropriate data. (PCS)

  10. Hydrogeochemical and isotopic tracers for identification of seasonal and long-term over-exploitation of the Pleistocene thermal waters.

    Science.gov (United States)

    Rman, Nina

    2016-04-01

    The aim of the study was to develop and test an optimal and cost-effective regional quality monitoring system in depleted transboundary low-temperature Neogene geothermal aquifers in the west Pannonian basin. Potential tracers for identification of seasonal and long-term quality changes of the Pleistocene thermal waters were investigated at four multiple-screened wells, 720 to 1570 m deep, in Slovenia. These thermal waters are of great balneological value owing to their curative effects and were sampled monthly between February 2014 and January 2015. Linear correlation and regression analyses, ANOVA, and the Kolmogorov-Smirnov two-sample test for independent samples were used to determine seasonal and long-term differences. Temperature, pH, electrical conductivity, redox potential and dissolved oxygen did not identify varying inflow conditions; however, they provided sufficient information to distinguish between the four end-members. Characteristic (sodium) and conservative (chloride) tracers outlined long-term trends in quality changes but could not differentiate between the seasons. Stable isotopes δ(18)O and δ(2)H were used to identify sequential monthly and long-term trends, and the origin and mixing of waters, but also failed to distinguish between the seasons. A new local paleo-meteoric water line (δ(2)H = 9.2·δ(18)O + 26.3) was outlined for the active regional groundwater flow system in the Pannonian to Pliocene loose sandstone and gravel. A new regression line (δ(2)H = 2.3·δ(18)O - 45.2) was calculated for thermomineral water from the more isolated Badenian to Lower Pannonian turbiditic sandstone, indicating dilution of formation water. Water composition was generally stable over the 1-year period, but long-term trends indicate that quality changes occur, implying deterioration of the aquifers' status.

  11. Screening local Lactobacilli from Iran in terms of production of lactic acid and identification of superior strains

    Directory of Open Access Journals (Sweden)

    Fatemeh Soleimanifard

    2015-12-01

    Full Text Available Introduction: Lactobacilli are a group of lactic acid bacteria whose final fermentation product is lactic acid. The objective of this research is the selection of local Lactobacilli producing L(+) lactic acid. Materials and methods: In this research, local strains were screened based on their ability to produce lactic acid. The screening was performed in two stages: a titration stage followed by an enzymatic stage. The superior strains obtained from the titration method were selected for the enzymatic test. Finally, the superior strains of the second (enzymatic) stage, which were able to produce L(+) lactic acid, were identified by biochemical tests, and molecular identification was then performed by 16S rRNA sequencing. Results: The ability of 79 local Lactobacillus strains to produce lactic acid was studied. The highest and lowest rates of lactic acid production were 34.8 and 12.4 mg/g. The superior Lactobacilli produced the L(+) optical isomer, with the highest L(+) lactic acid level at 3.99 mg/g and the lowest at 1.03 mg/g. Biochemical and molecular identification showed the superior strains to be Lactobacillus paracasei. The 16S rRNA sequences of the superior strains were deposited in NCBI under accession numbers KF735654, KF735655, KJ508201 and KJ508202. Discussion and conclusion: The amounts of lactic acid produced by local Lactobacilli varied widely, and some of these strains produced more than previously reported. The results of this research suggest the use of the superior Lactobacillus strains for production of pure L(+) lactic acid.

  12. Identification and localization of netrin-4 and neogenin in human first trimester and term placenta.

    Science.gov (United States)

    Dakouane-Giudicelli, M; Duboucher, C; Fortemps, J; Salama, S; Brulé, A; Rozenberg, P; de Mazancourt, P

    2012-09-01

    We describe here for the first time the characterization of a member of the netrin family, netrin-4, and its receptor neogenin during development of the placenta. Using western blots and RT-PCR, we demonstrated the presence of netrin-4 protein and its receptor neogenin, as well as their transcripts. Using immunohistochemistry, we studied the distribution of netrin-4 and neogenin in both first trimester and term placenta. We observed staining of netrin-4 in villous and extravillous cytotrophoblasts, syncytiotrophoblast, and endothelial cells, whereas staining in stromal cells was faint. In the decidua, we observed netrin-4 labelling in glandular epithelial cells, perivascular decidualized cells, and endothelial cells. However, neogenin was absent from villous and extravillous cytotrophoblasts and was expressed only on syncytiotrophoblast and placental stromal cells in first trimester and term placenta. This pattern of distribution suggests that a functional netrin-4-neogenin pathway might be restricted to syncytiotrophoblasts, mesenchymal cells, and villous endothelial cells. The function of this pathway might vary with its localization in the placenta; it is possibly involved in angiogenesis, morphogenesis, and differentiation.

  13. Comparison of Short-Term Estrogenicity Tests for Identification of Hormone-Disrupting Chemicals

    Science.gov (United States)

    Andersen, Helle Raun; Andersson, Anna-Maria; Arnold, Steven F.; Autrup, Herman; Barfoed, Marianne; Beresford, Nicola A.; Bjerregaard, Poul; Christiansen, Lisette B.; Gissel, Birgitte; Hummel, René; Jørgensen, Eva Bonefeld; Korsgaard, Bodil; Le Guevel, Remy; Leffers, Henrik; McLachlan, John; Møller, Anette; Bo Nielsen, Jesper; Olea, Nicolas; Oles-Karasko, Anita; Pakdel, Farzad; Pedersen, Knud L.; Perez, Pilar; Skakkebœk, Niels Erik; Sonnenschein, Carlos; Soto, Ana M.; Sumpter, John P.; Thorpe, Susan M.; Grandjean, Philippe

    1999-01-01

    The aim of this study was to compare results obtained by eight different short-term assays of estrogenlike actions of chemicals conducted in 10 different laboratories in five countries. Twenty chemicals were selected to represent direct-acting estrogens, compounds with estrogenic metabolites, estrogenic antagonists, and a known cytotoxic agent. Also included in the test panel were 17β-estradiol as a positive control and ethanol as solvent control. The test compounds were coded before distribution. Test methods included direct binding to the estrogen receptor (ER), proliferation of MCF-7 cells, transient reporter gene expression in MCF-7 cells, reporter gene expression in yeast strains stably transfected with the human ER and an estrogen-responsive reporter gene, and vitellogenin production in juvenile rainbow trout. 17β-Estradiol, 17α-ethynyl estradiol, and diethylstilbestrol induced a strong estrogenic response in all test systems. Colchicine caused cytotoxicity only. Bisphenol A induced an estrogenic response in all assays. The results obtained for the remaining test compounds—tamoxifen, ICI 182.780, testosterone, bisphenol A dimethacrylate, 4-n-octylphenol, 4-n-nonylphenol, nonylphenol dodecylethoxylate, butylbenzylphthalate, dibutylphthalate, methoxychlor, o,p′-DDT, p,p′-DDE, endosulfan, chlormequat chloride, and ethanol—varied among the assays. The results demonstrate that careful standardization is necessary to obtain a reasonable degree of reproducibility. Also, similar methods vary in their sensitivity to estrogenic compounds. Thus, short-term tests are useful for screening purposes, but the methods must be further validated by additional interlaboratory and interassay comparisons to document the reliability of the methods. PMID:10229711

  14. Automatic identification of fault zone head waves and direct P waves and its application in the Parkfield section of the San Andreas Fault, California

    Science.gov (United States)

    Li, Zefeng; Peng, Zhigang

    2016-06-01

    Fault zone head waves (FZHWs) are observed along major strike-slip faults and can provide high-resolution imaging of fault interface properties at seismogenic depth. In this paper, we present a new method to automatically detect FZHWs and pick direct-wave secondary arrivals (DWSAs). The algorithm identifies FZHWs by computing the amplitude ratios between the potential FZHWs and DWSAs. The polarities, polarizations and characteristic periods of FZHWs and DWSAs are then used to refine the picks or evaluate pick quality. We apply the method to the Parkfield section of the San Andreas Fault, where FZHWs have previously been identified by manual picks. We compare results from automatically and manually picked arrivals and find general agreement between them. The obtained velocity contrast at Parkfield is generally 5-10 per cent near Middle Mountain and decreases below 5 per cent near Gold Hill. We also find many FZHWs recorded by stations within 1 km of the background seismicity (i.e. the Southwest Fracture Zone) that have not been reported before. These FZHWs could be generated within a relatively wide low-velocity zone sandwiched between the fast Salinian block on the southwest side and the slow Franciscan Mélange on the northeast side. Station FROB on the southwest (fast) side also recorded a small portion of weak precursory signals before sharp P waves. However, the polarities of the weak signals are consistent with right-lateral strike-slip mechanisms, suggesting that they are unlikely to be genuine FZHW signals.
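
The amplitude-ratio screening step can be illustrated with a toy detector: given a pick for the impulsive direct arrival, it compares the mean absolute amplitude of a candidate precursor window against a noise window and the direct-wave window, and flags an emergent precursor when the ratio falls in an intermediate band. All window lengths and thresholds below are illustrative, not the paper's calibrated values, and the polarity/polarization refinements are omitted:

```python
def mean_abs(seg):
    return sum(abs(x) for x in seg) / len(seg)

def has_fzhw_precursor(trace, direct_idx, win=20, lo=0.15, hi=0.8):
    """Flag a possible fault zone head wave arriving before the direct P.

    trace      : list of amplitude samples
    direct_idx : sample index of the picked impulsive direct arrival
    win        : window length in samples (illustrative)
    lo, hi     : precursor/direct amplitude-ratio band (illustrative)
    """
    if direct_idx < 2 * win:
        return False
    noise = trace[direct_idx - 2 * win:direct_idx - win]
    precursor = trace[direct_idx - win:direct_idx]
    direct = trace[direct_idx:direct_idx + win]
    ratio = mean_abs(precursor) / mean_abs(direct)
    # The precursor must stand above noise yet stay weaker than the direct wave.
    return mean_abs(precursor) > 2 * mean_abs(noise) and lo <= ratio <= hi
```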

  15. Design of an Automatic Identification Algorithm for Pedestrian-Clustering Abnormal Events in a Channel

    Institute of Scientific and Technical Information of China (English)

    李鑫; 陈艳艳; 陈宁; 刘小明; 冯国臣

    2016-01-01

    To provide reasonable guidance and passenger flow organization for crowd-gathering abnormal events in the passageways of urban rail transit hubs, and to ensure the safe and efficient operation of such hubs, this paper proposes an algorithm that automatically identifies pedestrian-clustering abnormal events in a passageway. The algorithm first analyzes the stationarity and abrupt changes of the basic passenger flow data in the passageway and constructs a new data type that captures both characteristics. The key parameter of the automatic identification algorithm, the offset space difference, is then designed from passenger flow data measured at two cross-sections. Finally, by analyzing how this key parameter varies, an algorithm for automatic identification of pedestrian-clustering abnormal events in the passageway is established. Simulation results show a detection accuracy of 100% and a mean reaction time of 65 s, indicating that the algorithm has a strong automatic detection capability and a short reaction time for pedestrian clustering events.
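
The double-section idea can be sketched as a running occupancy check: counts entering at an upstream section minus counts leaving at a downstream section give the number of people currently between the two sections, and a sustained excess flags clustering. The capacity and persistence values below are illustrative, not the paper's calibrated offset-space-difference parameters:

```python
def detect_clustering(in_counts, out_counts, capacity=80, persist=3):
    """Return the first time step at which channel occupancy has exceeded
    `capacity` for `persist` consecutive steps, or None if it never does.

    in_counts / out_counts: pedestrians crossing the upstream and
    downstream cross-sections per time step.
    """
    occupancy, streak = 0, 0
    for t, (i, o) in enumerate(zip(in_counts, out_counts)):
        occupancy += i - o          # net change between the two sections
        streak = streak + 1 if occupancy > capacity else 0
        if streak >= persist:
            return t
    return None
```

Requiring several consecutive over-capacity steps is one simple way to avoid alarming on a momentary surge, analogous to the stability-versus-mutability distinction the paper draws in its flow data.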

  16. Identification of bacteria utilizing biphenyl, benzoate, and naphthalene in long-term contaminated soil.

    Directory of Open Access Journals (Sweden)

    Ondrej Uhlik

    Full Text Available Bacteria associated with biodegradation of the aromatic pollutants biphenyl, benzoate, and naphthalene were identified in a long-term polychlorinated biphenyl- and polyaromatic hydrocarbon-contaminated soil. In order to avoid the biases of culture-based approaches, stable isotope probing was applied in combination with sequence analysis of 16S rRNA gene pyrotags amplified from (13)C-enriched DNA fractions. Special attention was paid to pyrosequencing data analysis in order to eliminate errors caused either by generation of amplicons (random errors caused by DNA polymerase, formation of chimeric sequences) or by sequencing itself. Therefore, sample DNA was amplified, sequenced, and analyzed along with the DNA of a mock community constructed out of 8 bacterial strains. This warranted that appropriate tools and parameters were chosen for sequence data processing. (13)C-labeled metagenomes isolated after incubation of soil samples with all three studied aromatics were largely dominated by Proteobacteria, namely sequences clustering with the genera Rhodanobacter, Burkholderia, Pandoraea, and Dyella, as well as some Rudaea- and Skermanella-related ones. Pseudomonads were mostly labeled by (13)C from naphthalene and benzoate. The results of this study show that many biphenyl/benzoate-assimilating bacteria derive carbon also from naphthalene, pointing to the broader biodegradation abilities of some soil microbiota. The results also demonstrate that, in addition to traditionally isolated genera of degradative bacteria, yet-to-be-cultured bacteria are important players in bioremediation. Overall, the study contributes to our understanding of biodegradation processes in contaminated soil. At the same time, our results show the importance of sequencing and analyzing a mock community in order to process and analyze sequence data more correctly.

  17. Hazard identification of inhaled nanomaterials: making use of short-term inhalation studies.

    Science.gov (United States)

    Klein, Christoph L; Wiench, Karin; Wiemann, Martin; Ma-Hock, Lan; van Ravenzwaay, Ben; Landsiedel, Robert

    2012-07-01

    A major health concern for nanomaterials is their potential toxic effect after inhalation of dusts. Correspondingly, the core element of tier 1 in the currently proposed integrated testing strategy (ITS) is a short-term rat inhalation study (STIS) for this route of exposure. STIS comprises a comprehensive scheme of biological effect and marker determination in order to generate appropriate information on early key elements of pathogenesis, such as inflammatory reactions in the lung and indications of effects in other organs. Within the STIS, information on the persistence, progression and/or regression of effects is obtained. The STIS also addresses organ burden in the lung and potential translocation to other tissues. Up to now, STIS has been performed in research projects and in routine testing of nanomaterials. Meanwhile, rat STIS results are available for more than 20 nanomaterials, including the representative nanomaterials listed by the Organization for Economic Cooperation and Development (OECD) working party on manufactured nanomaterials (WPMN), which has endorsed a list of representative manufactured nanomaterials (MN) as well as a set of relevant endpoints to be addressed. Here, results of STIS carried out with different nanomaterials are discussed as case studies. The ranking of the potential of different nanomaterials to induce adverse effects, and the ranking of the respective NOAECs, are the same in the STIS and in the corresponding subchronic and chronic studies. In another case study, translocation of a coated silica nanomaterial was judged critical for its safety assessment. Thus, STIS enables application of the proposed ITS as long as reliable and relevant in vitro methods for tier 1 testing are still missing. Compared to traditional subacute and subchronic inhalation testing (according to OECD test guidelines 412 and 413), STIS uses fewer animals and resources and offers additional information on organ burden and on the progression or regression of potential effects.

  18. Comparison of Multi-shot Models for Short-term Re-identification of People using RGB-D Sensors

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Bahnsen, Chris; Moeslund, Thomas B.

    2015-01-01

    This work explores different types of multi-shot descriptors for re-identification in an on-the-fly enrolled environment using RGB-D sensors. We present a full re-identification pipeline complete with detection, segmentation, feature extraction, and re-identification, which expands on previous work...

  19. Identification of long-term trends in vegetation dynamics in the Guinea savannah region of Nigeria

    Science.gov (United States)

    Osunmadewa, Babatunde A.; Wessollek, Christine; Karrasch, Pierre

    2014-10-01

    The availability of newly generated data from the Advanced Very High Resolution Radiometer (AVHRR) covering the last three decades has broadened our understanding of vegetation dynamics (greening) from the global to the regional scale through quantitative analysis of seasonal trends in vegetation time series and climatic variability, especially in the Guinea savannah region of Nigeria, where the greening trend is inconsistent. Owing to the impact of changes in global climate and the sustainability of human livelihoods, interest in vegetation productivity has grown. The aim of this study is to examine the association between NDVI and rainfall using remotely sensed data, since vegetation dynamics (greening) is strongly associated with weather parameters. This study therefore analyses trends in regional vegetation dynamics in Kogi state, Nigeria, using bi-monthly AVHRR GIMMS 3g (Global Inventory Modelling and Mapping Studies) data and monthly TAMSAT (Tropical Applications of Meteorology using SATellite data) data, both from 1983 to 2011, to identify changes in vegetation greenness over time. Analysis of changes in the seasonal variation of vegetation greenness and climatic drivers was conducted for selected locations to further understand the causes of the observed interannual changes in vegetation dynamics. The Mann-Kendall (MK) monotonic method was used to analyse long-term inter-annual trends of NDVI and the climatic variable. The Theil-Sen median slope was used to calculate the rate of change as the median of the slopes over all pairwise combinations of observations. Trends were also analysed using a linear model after seasonality had been removed from the original NDVI and rainfall data. The results of the linear model are statistically significant (p < 0.01) in all the study locations, which can be interpreted as an increase in vegetation trend over time (greening). 
Also the result of the NDVI trend analysis using Mann-Kendall test shows an increasing
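
The Mann-Kendall statistic and Theil-Sen slope named above can be sketched in a few lines of pure Python (the significance test via the normalized Z statistic, and tie corrections, are omitted for brevity):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: sum of signs over all later-vs-earlier pairs.

    S > 0 indicates an increasing monotonic trend, S < 0 a decreasing one.
    """
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

def theil_sen_slope(series):
    """Median of the slopes over all pairwise combinations of points."""
    slopes = sorted(
        (series[j] - series[i]) / (j - i)
        for i in range(len(series) - 1)
        for j in range(i + 1, len(series))
    )
    mid = len(slopes) // 2
    if len(slopes) % 2:
        return slopes[mid]
    return 0.5 * (slopes[mid - 1] + slopes[mid])
```

Taking the median of pairwise slopes makes the Theil-Sen estimate robust to the outliers and noise typical of NDVI time series, which is why it is preferred over a least-squares slope in this kind of trend analysis.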

  20. A 100-m Fabry–Pérot Cavity with Automatic Alignment Controls for Long-Term Observations of Earth’s Strain

    Directory of Open Access Journals (Sweden)

    Akiteru Takamori

    2014-08-01

    Full Text Available We have developed and built a highly accurate laser strainmeter for geophysical observations. It features precise length measurement of a 100-m optical cavity with reference to a stable quantum standard. Unlike conventional laser strainmeters based on simple Michelson interferometers, which require uninterrupted fringe counting to track the evolution of ground deformations, this instrument is able to determine the absolute length of the cavity at any given time. The instrument offers an advantage in covering a variety of geophysical events, ranging from instantaneous earthquakes to crustal deformations associated with tectonic strain changes that persist over time. An automatic alignment control and an autonomous relocking system have been developed to realize stable performance and maximize observation time. The instrument was installed in a deep underground site at the Kamioka mine in Japan, and an effective resolution of 2 × (10^-8 - 10^-7) m was achieved. The regular tidal deformations and co-seismic strain changes were in good agreement with those from a theoretical model and a co-located conventional laser strainmeter. Only the new instrument was able to record large strain steps caused by a nearby large earthquake, owing to its capability of absolute length determination.

  1. Rapid identification of bacteria from positive blood culture bottles by MALDI-TOF MS following short-term incubation on solid media.

    Science.gov (United States)

    Altun, Osman; Botero-Kleiven, Silvia; Carlsson, Sarah; Ullberg, Måns; Özenci, Volkan

    2015-11-01

    Rapid identification of bacteria from blood cultures enables early initiation of appropriate antibiotic treatment in patients with bloodstream infections (BSI). The objective of the present study was to evaluate the use of matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) MS after a short incubation on solid media for rapid identification of bacteria from positive blood culture bottles. MALDI-TOF MS was performed after 2.5 and 5.5 h plate incubation of samples from positive blood cultures. Identification scores with values ≥ 1.7 were accepted as successful identification if the results were confirmed by conventional methods. Conventional methods included MALDI-TOF MS, Vitek 2, and diverse biochemical and agglutination tests after overnight culture. In total, 515 positive blood cultures with monomicrobial bacterial growth representing one blood culture per patient were included in the study. There were 229/515 (44.5%) and 286/515 (55.5%) blood culture bottles with Gram-negative bacteria (GNB) and Gram-positive bacteria (GPB), respectively. MALDI-TOF MS following short-term culture could accurately identify 300/515 (58.3%) isolates at 2.5 h, GNB being identified in greater proportion (180/229; 78.6%) than GPB (120/286; 42.0%). In an additional 124/515 bottles (24.1%), identification was successful at 5.5 h, leading to accurate identification of bacteria from 424/515 (82.3%) blood cultures after short-term culture. Interestingly, 11/24 of the isolated anaerobic bacteria could be identified after 5.5 h. The present study demonstrates, in a large number of clinical samples, that MALDI-TOF MS following short-term culture on solid medium is a reliable and rapid method for identification of bacteria from blood culture bottles with monomicrobial bacterial growth.

  2. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpaolo;

    2003-01-01

    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we...

  3. Automatized near-real-time short-term Probabilistic Volcanic Hazard Assessment of tephra dispersion before and during eruptions: BET_VHst for Mt. Etna

    Science.gov (United States)

    Selva, Jacopo; Scollo, Simona; Costa, Antonio; Brancato, Alfonso; Prestifilippo, Michele

    2015-04-01

    Tephra dispersal, even in small amounts, may heavily affect public health and critical infrastructures, such as airports, train and road networks, and electric power supply systems. Probabilistic Volcanic Hazard Assessment (PVHA) represents the most complete scientific contribution for planning rational strategies aimed at managing and mitigating the risk posed by volcanic activity during crises and eruptions. Short-term PVHA (over time intervals on the order of hours to a few days) must account for rapidly changing information coming from the monitoring system, as well as updated wind forecasts, and must be accomplished in near-real-time. In addition, while during unrest the primary goal is to forecast potential eruptions, during eruptions it is also fundamental to correctly account for the real-time status of the eruption and of tephra dispersal, as well as its potential evolution in the short term. Here, we present a preliminary application of the BET_VHst model (Selva et al. 2014) for Mt. Etna. The model has its roots in present deterministic procedures, and it deals with the large uncertainty that such procedures typically ignore, such as uncertainty on the potential position of the vent and the eruptive size, on the possible evolution of volcanological input during ongoing eruptions, as well as on the wind field. Uncertainty is treated by making use of Bayesian inference, alternative modeling procedures for tephra dispersal, and statistical mixing of long- and short-term analyses. References Selva J., Costa A., Sandri L., Macedonio G., Marzocchi W. (2014) Probabilistic short-term volcanic hazard in phases of unrest: a case study for tephra fallout, J. Geophys. Res., 119, doi: 10.1002/2014JB011252

  4. Automatic identification and real-time tracking based on multiple sensors for low-altitude moving targets

    Institute of Scientific and Technical Information of China (English)

    张作楠; 刘国栋; 娄建

    2011-01-01

    This paper discusses a method for low-altitude moving target detection and tracking in a TV tracking system. To improve automatic tracking and anti-interference capability, a multi-sensor integrated automatic identification and real-time servo algorithm is proposed, based on a variety of sensors and electronic measuring devices such as acoustic sensors, image sensors, and a laser range finder. The target is first located by passive acoustic localization; then the dynamic and static image features, together with the target's sound-source characteristics, are used for feature extraction and target classification and recognition. Based on video tracking and trajectory prediction, the desired target error signal is used to drive the servo mechanism for precise tracking. Experiments show that the algorithm is simple and effective, achieves sufficient precision and reliability, and validates the feasibility of using multiple sensors in a fully automatic intelligent tracking system.

  5. compMS2Miner: An Automatable Metabolite Identification, Visualization, and Data-Sharing R Package for High-Resolution LC-MS Data Sets.

    Science.gov (United States)

    Edmands, William M B; Petrick, Lauren; Barupal, Dinesh K; Scalbert, Augustin; Wilson, Mark J; Wickliffe, Jeffrey K; Rappaport, Stephen M

    2017-04-04

    A long-standing challenge of untargeted metabolomic profiling by ultrahigh-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS) is efficient transition from unknown mass spectral features to confident metabolite annotations. The compMS(2)Miner (Comprehensive MS(2) Miner) package was developed in the R language to facilitate rapid, comprehensive feature annotation using peak-picker output and MS(2) data files as inputs. The number of MS(2) spectra that can be collected during a metabolomic profiling experiment far exceeds what can be interpreted by painstaking manual review; therefore, a degree of software workflow autonomy is required for broad-scale metabolite annotation. CompMS(2)Miner integrates many useful tools in a single workflow for metabolite annotation and also provides a means to overview the MS(2) data with a Web application GUI, compMS(2)Explorer (Comprehensive MS(2) Explorer), that also facilitates data sharing and transparency. The automatable compMS(2)Miner workflow consists of the following steps: (i) matching unknown MS(1) features to precursor MS(2) scans, (ii) filtration of spectral noise (dynamic noise filter), (iii) generation of composite mass spectra by multiple similar spectrum signal summation and redundant/contaminant spectra removal, (iv) interpretation of possible fragment ion substructure using an internal database, (v) annotation of unknowns with chemical and spectral databases with prediction of mammalian biotransformation metabolites, wrapper functions for in silico fragmentation software, nearest neighbor chemical similarity scoring, random forest based retention time prediction, text-mining based false positive removal/true positive ranking, chemical taxonomic prediction and differential evolution based global annotation score optimization, and (vi) network graph visualizations, data curation, and sharing are made possible via the compMS(2)Explorer application. 
Metabolite identities and

  6. 16S rRNA Gene Sequence-Based Identification of Bacteria in Automatically Incubated Blood Culture Materials from Tropical Sub-Saharan Africa.

    Directory of Open Access Journals (Sweden)

    Hagen Frickmann

    Full Text Available The quality of microbiological diagnostic procedures depends on pre-analytic conditions. We compared the results of 16S rRNA gene PCR and sequencing from automatically incubated blood culture materials from tropical Ghana with the results of cultural growth after automated incubation. Real-time 16S rRNA gene PCR and subsequent sequencing were applied to 1500 retained blood culture samples of Ghanaian patients admitted to a hospital with an unknown febrile illness, after enrichment by automated culture. Out of all 1500 samples, 191 were culture-positive and 98 isolates were considered etiologically relevant. Out of the 191 culture-positive samples, 16S rRNA gene PCR and sequencing led to concordant results in 65 cases at species level and an additional 62 cases at genus level. PCR was positive in a further 360 out of 1309 culture-negative samples, the sequencing results of which suggested etiologically relevant pathogen detections in 62 instances, detections of uncertain relevance in 50 instances, and DNA contamination due to sample preparation in 248 instances. In two instances, PCR failed to detect contaminants from the skin flora that were culturally detectable. Pre-analytical errors caused many Enterobacteriaceae to be missed by culture. Potentially correctable pre-analytical conditions, and not the fastidious nature of the bacteria, caused most of the discrepancies. Although 16S rRNA gene PCR and sequencing in addition to culture led to an increase in detections of presumably etiologically relevant blood culture pathogens, the application of this procedure to samples from the tropics was hampered by a high contamination rate. Careful interpretation of diagnostic results is required.

  7. Automatic identification of origins of left and right coronary arteries in CT angiography for coronary arterial tree tracking and plaque detection

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Chightai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Agarwal, Prachi; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    Automatic tracking and segmentation of the coronary arterial tree is the basic step for computer-aided analysis of coronary disease. The goal of this study is to develop an automated method to identify the origins of the left coronary artery (LCA) and right coronary artery (RCA) as the seed points for tracking of the coronary arterial trees. The heart region and the contrast-filled structures in the heart region are first extracted using morphological operations and EM estimation. To identify the ascending aorta, we developed a new multiscale aorta search (MAS) method in which the aorta is identified based on a priori knowledge of its circular shape. Because the shape of the ascending aorta in the cCTA axial view is roughly a circle but its size can vary over a wide range for different patients, multiscale circular-shape priors are used to search for the best matching circular object in each CT slice, guided by the Hausdorff distance (HD) as the matching indicator. The location of the aorta is identified by finding the minimum HD in the heart region over the set of multiscale circular priors. An adaptive region growing method is then used to extend the initially identified aorta down to the aortic valves. The origins at the aortic sinus are finally identified by a morphological gray-level top-hat operation applied to the region-grown aorta, with a morphological structuring element designed for coronary arteries. For the 40 test cases, the aorta was correctly identified in 38 cases (95%). The aorta could be grown to the aortic root in 36 cases, and 36 LCA origins and 34 RCA origins were identified within 10 mm of the locations marked by radiologists.
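The multiscale circular-prior matching step can be sketched with synthetic data. Everything below (the circle templates, the center, the radii, the noise level) is invented for illustration, and SciPy's `directed_hausdorff` stands in for the paper's Hausdorff-distance indicator:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def circle_points(cx, cy, r, n=90):
    """Sample n points on a circle of radius r centred at (cx, cy)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([cx + r * np.cos(t), cy + r * np.sin(t)])

def best_matching_radius(edge_pts, cx, cy, radii):
    """Pick the template radius whose circular prior best matches the edge
    points, using the symmetric Hausdorff distance as the indicator."""
    best_r, best_d = None, np.inf
    for r in radii:
        tmpl = circle_points(cx, cy, r)
        d = max(directed_hausdorff(tmpl, edge_pts)[0],
                directed_hausdorff(edge_pts, tmpl)[0])
        if d < best_d:
            best_r, best_d = r, d
    return best_r, best_d

# synthetic "aorta" edge: a noisy circle of radius 15 centred at (50, 50)
rng = np.random.default_rng(0)
edge = circle_points(50, 50, 15) + rng.normal(0, 0.2, (90, 2))
r, d = best_matching_radius(edge, 50, 50, radii=[5, 10, 15, 20, 25])
```

In the paper the search additionally runs over candidate centers within the heart region; the sketch fixes the center to keep the matching logic visible.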

  8. Automatic sedimentary facies identification method based on genetic-BP neural networks

    Institute of Scientific and Technical Information of China (English)

    许少华; 陈可为; 梁久祯; 郑生民

    2001-01-01

    We propose an automatic sedimentary facies identification method based on the combination of a neural network with image-processing technology. First, digital well-logging curves and stratum parameters are converted into binary image patterns. Second, by encoding and compressing the binary data, the stratum pattern features characterized by the logging curves are extracted and stored. Finally, a multilayer feed-forward neural network is trained by combining a super-linear BP algorithm with a genetic algorithm. The resulting network is stable, converges quickly during learning, and has strong memorization and generalization ability; the model is well suited to the problem of automatic sedimentary facies identification.

  9. Automatic coal-gangue identification based on gray level co-occurrence matrix

    Institute of Scientific and Technical Information of China (English)

    吴开兴; 宋剑

    2016-01-01

    To improve the coal-gangue identification rate, an automatic identification method based on the gray level co-occurrence matrix (GLCM) is proposed. The basic principle and characteristic parameters of the GLCM are analyzed, and the GLCM is used to extract texture features of coal and gangue images, including angular second moment, correlation, contrast, and entropy, which are then classified using a support vector machine (SVM). The method was simulated in MATLAB, and the results show that texture-feature extraction with the GLCM combined with SVM recognition can effectively describe the texture characteristics of coal and gangue, providing an important reference for coal-gangue identification and sorting.
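The four GLCM features named above (angular second moment, correlation, contrast, entropy) can be computed directly in NumPy. The two 8-level "textures" below are invented, and the SVM classification stage is omitted; this is only a sketch of the feature-extraction step:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalised gray level co-occurrence matrix for one displacement."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """ASM (angular second moment), correlation, contrast, entropy."""
    i, j = np.indices(p.shape)
    asm = (p ** 2).sum()
    contrast = ((i - j) ** 2 * p).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
    corr = (((i - mu_i) * (j - mu_j)) * p).sum() / (sd_i * sd_j)
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([asm, corr, contrast, entropy])

# two invented 8-level textures: hard stripes vs a gentle one
stripes = np.tile(np.array([0, 7]), (16, 8))  # alternating 0,7 columns
gentle = np.tile(np.array([3, 4]), (16, 8))   # alternating 3,4 columns
f_stripes = glcm_features(glcm(stripes))
f_gentle = glcm_features(glcm(gentle))
```

High-contrast stripes yield a large GLCM contrast (every horizontal pair differs by 7 levels) and perfect anti-correlation, while the gentle texture scores low on contrast; such separations are what the SVM would exploit.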

  10. Checker pattern improvement and fully-automatic identification for camera calibration

    Institute of Scientific and Technical Information of China (English)

    张浩鹏; 王宗义; 吴攀超; 林欣堂

    2012-01-01

    To overcome the shortcomings that, during camera calibration, the user must supply additional information about the calibration pattern, or that fully-automatic identification algorithms fail to detect calibration points under significant occlusion, uneven illumination, extreme viewing angles, and lens distortion, an improved checker pattern based on fiducial markers is designed, and a corresponding fully-automatic identification algorithm for the calibration points is proposed. The new calibration pattern replaces the black and white squares of the traditional checkerboard with fiducial markers, so that the fully-automatic identification algorithm can locate the positions of the markers. Using the prior knowledge that the markers are arranged in ascending order of marker ID, the positions of missing calibration points are estimated. To improve the initial estimates of missing calibration points in the image, the algorithm estimates radial distortion parameters, overcoming the effect of distortion on identification. To improve localization accuracy, a high-precision saddle-point detector is used, achieving calibration-point localization accuracy better than 0.05 pixel. To verify the validity of the detected saddle points, two filtering criteria are proposed, yielding the final set of valid calibration points. The identification algorithm is effective and requires no parameters. Experimental results show that, for the same camera and background, camera calibration using points obtained with the improved checker pattern and its identification algorithm reduces the reprojection error by 70% compared with ARTag.

  11. Rapid Identification of Microorganisms from Positive Blood Culture by MALDI-TOF MS After Short-Term Incubation on Solid Medium.

    Science.gov (United States)

    Curtoni, Antonio; Cipriani, Raffaella; Marra, Elisa Simona; Barbui, Anna Maria; Cavallo, Rossana; Costa, Cristina

    2017-01-01

    Matrix-assisted laser-desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS) is a useful tool for rapid identification of microorganisms. Unfortunately, its direct application to positive blood cultures still lacks standardized procedures. In this study, we evaluated an easy- and rapid-to-perform protocol for MALDI-TOF MS direct identification of microorganisms from positive blood cultures after short-term incubation on solid medium. This protocol was used to evaluate direct identification of microorganisms from 162 positive monomicrobial blood cultures; at different incubation times (3, 5, 24 h), the MALDI-TOF MS assay was performed from the growing microorganism patina. Overall, MALDI-TOF MS concordance with conventional methods at species level was 60.5, 80.2, and 93.8% at 3, 5, and 24 h, respectively. Considering only bacteria, the identification performance at species level was 64.1, 85.0, and 94.1% at 3, 5, and 24 h, respectively. This protocol, applied to a commercially available MS typing system, may represent a fast and powerful diagnostic tool for direct pathogen identification and for prompt, pathogen-driven antimicrobial therapy in selected cases.

  12. UMLS-based automatic image indexing.

    Science.gov (United States)

    Sneiderman, C; Sneiderman, Charles Alan; Demner-Fushman, D; Demner-Fushman, Dina; Fung, K W; Fung, Kin Wah; Bray, B; Bray, Bruce

    2008-01-01

    To date, most accurate image retrieval techniques rely on textual descriptions of images. Our goal is to automatically generate indexing terms for an image extracted from a biomedical article by identifying Unified Medical Language System (UMLS) concepts in the image caption and its discussion in the text. In a pilot evaluation of the suggested image indexing method by five physicians, a third of the automatically identified index terms were found suitable for indexing.

  13. Identification effects of an automatic microbial analysis system on Brucella genus and species

    Institute of Scientific and Technical Information of China (English)

    肖春霞; 赵鸿雁; 侯临平; 荣蓉; 刘熹; 赵赤鸿; 朴东日; 赵娜; 姜海

    2015-01-01

    Objective To identify and analyse the biochemical characteristics of Brucella using the VITEK 2 COMPACT automatic microbial identification analyzer and to evaluate its clinical application. Methods Seventeen standard strains and 121 experimental strains were obtained from the Brucella bacteria bank of the Institute for Infectious Disease Prevention and Control, Chinese Center for Disease Control and Prevention. The experimental strains were collected from 26 provinces (municipalities and autonomous regions) between 1957 and 2014, and included strains isolated from patients and from goats, antelope, sheep, cattle, and pigs. Standard and experimental strains were analyzed with the GN identification card on the VITEK 2 COMPACT analyzer, and biochemical identification of the Brucella strains was performed. Strains with abnormal identification results were rechecked by traditional methods, including oxidase, urease, semisolid, hydrogen sulfide, basic fuchsin susceptibility, and phage lysis tests, and A/M single-phase specific serum agglutination. Results For the 138 Brucella strains analyzed by the automatic identification system, the main identification indicators at genus level were: L-proline arylamidase (ProA), tyrosine arylamidase (TyrA), urease (URE), glycine arylamidase (GlyA), L-lactate alkalinisation (1LATK), and ELLMAN (ELLM). Compared with the system reference values, the biochemical similarity rate of all strains was 97.99% (135.23/138): 96.71% (16.44/17) for standard strains and 98.17% (118.79/121) for experimental strains. The time required for identification was 6.1-7.7 h: 7.3 h for standard strains and 6.9 h for experimental strains. The indicators for distinguishing Brucella species were ProA, TyrA, URE, and GlyA; for distinguishing Brucella melitensis, ELLM; for distinguishing Brucella abortus, 1LATK; and for distinguishing Brucella suis was

  14. Experiment on automatic shape identification of hatching eggs based on improved genetic algorithm neural network

    Institute of Scientific and Technical Information of China (English)

    郁志宏; 王栓巧; 张平; 贾超

    2009-01-01

    Shape inspection of hatching eggs is an important and laborious task on farms; manual inspection lacks objectivity and is time-consuming. To solve these problems, an automatic shape identification method was proposed based on machine vision, moment techniques, and an improved genetic-algorithm neural network (GA-NN). Egg shape index and radius differences were extracted as shape feature parameters. An improved immune genetic algorithm was put forward to optimize the topology of a Levenberg-Marquardt back-propagation neural network (LMBP-NN). After eggs with an unqualified shape index were rejected, the radius differences were used as LMBP-NN inputs, and the network outputs determined whether the hatching egg shape was normal. The results indicated classification accuracies of 97.10% for longer eggs, 95.59% for shorter eggs, 94.87% for abnormal eggs, and 95.75% for normal eggs. Automatic shape identification of hatching eggs is significant for improving detection accuracy and working efficiency; the results show that the neural network system has high accuracy and generalization ability, and that the algorithm is feasible and robust.

  15. Automatic Thesaurus Construction Using Bayesian Networks.

    Science.gov (United States)

    Park, Young C.; Choi, Key-Sun

    1996-01-01

    Discusses automatic thesaurus construction and characterizes the statistical behavior of terms by using an inference network. Highlights include low-frequency terms and data sparseness, Bayesian networks, collocation maps and term similarity, constructing a thesaurus from a collocation map, and experiments with test collections. (Author/LRW)
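A heavily simplified sketch of the collocation-map idea: document-level co-occurrence counts feed a PMI-style association score between terms. The four toy documents are invented, and the paper's actual Bayesian inference-network machinery is far more elaborate than this:

```python
import math
from collections import Counter
from itertools import combinations

# invented toy corpus: each document is a bag of index terms
docs = [
    ["neural", "network", "training"],
    ["neural", "network", "inference"],
    ["network", "protocol", "routing"],
    ["protocol", "routing", "switch"],
]

# document frequency of each term, and of each unordered term pair
term_freq = Counter(t for d in docs for t in set(d))
pair_freq = Counter(frozenset(p) for d in docs for p in combinations(set(d), 2))

def similarity(a, b):
    """PMI-style association: log of observed vs expected co-occurrence."""
    n = len(docs)
    co = pair_freq[frozenset((a, b))]
    if co == 0:
        return 0.0
    return math.log2(co * n / (term_freq[a] * term_freq[b]))
```

Terms that co-occur more often than chance ("protocol"/"routing") score above unrelated pairs, which is the raw signal a collocation map organizes into thesaurus relations.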

  16. Experiment on Road Performance of Asphalt Mixture with Automatic Long-term Snowmelt Agent

    Institute of Scientific and Technical Information of China (English)

    李福普; 王志军

    2012-01-01

    Asphalt mixture in which Mafilon replaces the mineral filler and helps to deice is an automatic long-term snow-melting asphalt mixture. To verify the road performance of asphalt mixtures after Mafilon replaces the mineral filler, indoor tests were conducted, the field deicing capacity of the mixture was observed, and the performance of asphalt mixtures with different Mafilon contents was compared. The results show that (1) with the same aggregate gradation, the automatic long-term snow-melting asphalt mixture has good high-temperature (anti-rutting) performance, and the Mafilon content has little effect on the low-temperature performance of the mixture; (2) the moisture stability of the three asphalt mixtures meets the specification requirements, but it decreases as the Mafilon content increases. According to the indoor and test-road deicing results, the higher the proportion of Mafilon replacing mineral filler, the more pronounced the deicing effect.

  17. 12th Portuguese Conference on Automatic Control

    CERN Document Server

    Soares, Filomena; Moreira, António

    2017-01-01

    The biennial CONTROLO conferences are the main events promoted by the Portuguese Association for Automatic Control – APCA, the national member organization of the International Federation of Automatic Control – IFAC. CONTROLO 2016, the 12th Portuguese Conference on Automatic Control (Guimarães, Portugal, September 14th to 16th), was organized by Algoritmi, School of Engineering, University of Minho, in partnership with INESC TEC. The seventy-five papers published in this volume cover a wide range of topics. Thirty-one of them, of a more theoretical nature, are distributed among the first five parts: Control Theory; Optimal and Predictive Control; Fuzzy, Neural and Genetic Control; Modeling and Identification; Sensing and Estimation. The papers range from cutting-edge theoretical research to innovative control applications and show expressively how Automatic Control can be used to increase the well-being of people.

  18. Automatic formation of short-term plans based on the long-term plan for open-pit mines using 0-1 integer programming

    Institute of Scientific and Technical Information of China (English)

    孙效玉; 张维国; 陈毓; 王侠; 孙梦红

    2012-01-01

    To resolve the serious disconnect between long-term and short-term production plans in open-pit mines caused by unbalanced production, a 0-1 integer programming model of short-term planning by time period was established based on an analysis of the mine's space-time development. The concept of "super combo blocks" is proposed and the optimization logic of short-term planning is described. Long-term plan design, bench strip/block division, and dynamic display were implemented through secondary development in the TCL language on the Surpac software platform, and the short-term plan is automatically optimized by calling the LindoAPI mathematical software in a Visual C++ environment. The method solves, in only minutes to a few hours, a problem the traditional approach could not: automatically deriving a short-term plan from the long-term plan and thereby validating the long-term plan. Field practice shows the method is stable and efficient.
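The shape of such a 0-1 formulation can be illustrated with a toy model. All numbers below (block tonnages, the ore target, the capacity) are invented; the real system solves much larger instances via LindoAPI, whereas this sketch simply brute-forces the 0-1 decision variables:

```python
from itertools import product

# hypothetical "super combo blocks": (ore tonnage, waste tonnage) each
blocks = [(120, 40), (80, 100), (150, 60), (60, 20), (100, 80)]
ORE_TARGET = 250   # short-term ore requirement for the period (assumed)
CAPACITY = 420     # total mining capacity for the period (assumed)

best, best_x = None, None
for x in product([0, 1], repeat=len(blocks)):      # 0-1 decision variables
    ore = sum(xi * b[0] for xi, b in zip(x, blocks))
    total = sum(xi * (b[0] + b[1]) for xi, b in zip(x, blocks))
    if ore >= ORE_TARGET and total <= CAPACITY:    # feasibility constraints
        cost = total - ore                         # minimise waste moved
        if best is None or cost < best:
            best, best_x = cost, x
```

Enumerating 2^n combinations is only viable for tiny n; an ILP solver handles the same model at realistic scale, with additional constraints tying block selection to the long-term plan's mining sequence.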

  19. A New Method of OD Estimation Based on Automatic Vehicle Identification Data

    Institute of Scientific and Technical Information of China (English)

    孙剑; 冯羽

    2011-01-01

    Given the growing application in China of automatic vehicle identification (AVI) technologies, represented by video license-plate recognition systems, a new method is proposed for estimating a high-resolution origin-destination (OD) matrix from AVI detector information. The detected vehicle information is first divided into four categories (origin and destination known; origin or destination plus partial path known; only origin or destination known; only partial path known). The first category is used to directly expand and update the base OD matrix, accounting for AVI detection errors. For the other three categories, following the idea of particle filtering, the link-path flow relationships are updated and revised by Bayesian inference, and the possible paths and ODs are determined by a Monte Carlo random process. Finally, the OD is back-calculated and corrected using the path-flow information obtained from AVI. Based on the current deployment of the video license-plate recognition system in Shanghai, the North-South elevated expressway was selected as the study object, and the ODs of 17 on/off-ramps along the route, covered by 9 video detectors, were estimated from the dynamically detected vehicle information. The results show that, with a network simulation model error of ≤15%, AVI facility coverage of 27.2%, and a detection error of 10%, the overall mean relative error of the OD estimates is only 11.09%. The method makes full use of the incomplete path information of individual vehicles detected by AVI, is computationally efficient, and can meet the needs of practical dynamic traffic management.

  20. Establishment of an identification database of six common dermatophytes using the Biolog automatic analyzer for microbes

    Institute of Scientific and Technical Information of China (English)

    萧伊伦; 陈驰宇; 章强强

    2010-01-01

    Objective To investigate the application prospects of the Biolog automatic analyzer for microbes in the identification of common dermatophytes. Methods Clinical isolates of dermatophytes were identified to species level based on phenotypes and DNA sequences. Strains of Trichophyton rubrum, Trichophyton mentagrophytes, Trichophyton tonsurans, Microsporum canis, Microsporum gypseum and Epidermophyton floccosum were inoculated into FF microplates, and the utilization of 95 different carbon sources was recorded. The growth and reaction spectra of these strains were described and an identification database was set up. Results There were large differences in carbon-source utilization among the species. The utilization of raffinose could differentiate Trichophyton mentagrophytes and Trichophyton tonsurans from the other four species, and sebacic acid could differentiate Trichophyton mentagrophytes from Trichophyton tonsurans. Meanwhile, Trichophyton rubrum could be differentiated from Microsporum gypseum, Epidermophyton floccosum and Microsporum canis by the utilization of fumarate and succinate. Microsporum gypseum could be identified by the use of alanine and phenylalanine, and the utilization of dextrin could distinguish Epidermophyton floccosum from Microsporum canis. Conclusion The Biolog automatic analyzer for microbes is able to identify common dermatophytes to species level based on their specific phenotypes.
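Identification against such a carbon-source database can be caricatured as nearest-profile matching. The utilization profiles below are hypothetical, loosely distilled from the differentiating reactions the abstract reports (raffinose, sebacic acid, dextrin, fumarate); the actual Biolog database covers 95 sources and six species:

```python
# hypothetical utilization profiles (True = species utilises the source);
# the values are assumptions for illustration, not the Biolog database
profiles = {
    "T. mentagrophytes": {"raffinose": True, "sebacic_acid": True,
                          "dextrin": False, "fumarate": False},
    "T. tonsurans":      {"raffinose": True, "sebacic_acid": False,
                          "dextrin": False, "fumarate": False},
    "T. rubrum":         {"raffinose": False, "sebacic_acid": False,
                          "dextrin": False, "fumarate": True},
    "E. floccosum":      {"raffinose": False, "sebacic_acid": False,
                          "dextrin": True, "fumarate": False},
}

def identify(observed):
    """Nearest-profile identification: the species whose recorded reactions
    agree with the observed reactions on the most tested sources."""
    def score(sp):
        return sum(profiles[sp][k] == v for k, v in observed.items())
    return max(profiles, key=score)
```

With only two tested reactions the key resolves just as the abstract describes: raffinose-positive plus sebacic-acid-negative points to T. tonsurans.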

  1. Low-cost Design of VHF Channel in Automatic Identification System

    Institute of Scientific and Technical Information of China (English)

    张翼周

    2014-01-01

    With increasing competition in the Automatic Identification System (AIS) market, low-cost design is the inevitable trend in AIS product development. This paper presents a low-cost Very High Frequency (VHF) channel design scheme for AIS according to the International Maritime Organization (IMO) standards. The working principle of AIS and the working process of its VHF channel are introduced briefly, and a concise and practical transmitter and receiver design is given. The design method and component selection of core units such as the modulator, VHF power amplifier and demodulator are elaborated. Testing and field use show that all indexes of the VHF channel meet the specified requirements; furthermore, the VHF channel has low hardware cost and reliable performance.

  2. Automatic identification of transliterated names based on co-occurrence frequency statistics of characters

    Institute of Scientific and Technical Information of China (English)

    陈阳; 赵跃华; 程显毅

    2012-01-01

    To reduce the negative impact of word segmentation, an automatic recognition algorithm for transliterated names based on character co-occurrence frequency statistics is presented. First, the statistical character-usage features of transliterated names are summarized and the concept of the co-occurrence string is proposed; a table of characters not used in transliterated names is obtained from the transliteration character table and the common Chinese character table. Second, the boundary of a transliterated name is defined on this basis. Finally, an adjustment method for segmentation errors is designed based on the boundary definition. Experimental results on open corpora show that, compared with a maximum-word-frequency segmentation algorithm, the precision, recall, and F-measure of identification are all improved.
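The character-usage statistics at the core of the method can be sketched as a frequency score. The handful of "known transliterated names" below is invented, standing in for corpus-wide frequency tables, and the real algorithm adds co-occurrence strings, boundary rules, and segmentation adjustment on top:

```python
from collections import Counter

# toy statistics: characters drawn from a few known transliterated names
known_names = ["华盛顿", "伦敦", "巴顿", "顿巴斯"]
char_freq = Counter(c for name in known_names for c in name)
total = sum(char_freq.values())

def translit_score(candidate):
    """Mean relative frequency of the candidate's characters among known
    transliteration characters; higher suggests a transliterated name."""
    return sum(char_freq[c] for c in candidate) / (total * len(candidate))
```

A candidate built from common transliteration characters ("巴顿") scores well above an ordinary word ("工厂") whose characters never appear in the transliteration table, which is the signal the boundary-detection step exploits.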

  3. Automatic Identification Algorithm for Road Bottlenecks Based on Detector Data

    Institute of Scientific and Technical Information of China (English)

    弓晋丽; 彭贤武

    2013-01-01

    To study traffic congestion at road bottlenecks and characterize the distribution and variation of the recurrent congestion they cause, a new automatic identification algorithm for road bottlenecks is proposed. Based on historical traffic data from dual-loop detectors, the algorithm classifies the traffic state as either uncongested or congested and identifies bottlenecks according to the principle of bottleneck congestion, while also computing the duration and spatial extent of the congestion the bottlenecks cause. The effectiveness and practicality of the algorithm were verified using 10 days of loop detector data from the east side of the Shanghai North-South elevated road.
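A minimal sketch of the state-classification step, assuming a single speed threshold on detector readings (the threshold and the readings are invented; the paper's algorithm additionally localizes the bottleneck and its spatial extent):

```python
# Hypothetical 5-minute detector speeds (km/h) at one station.
speeds = [62, 58, 41, 22, 18, 20, 35, 55, 60]
THRESHOLD = 30  # assumed critical speed separating congested from uncongested

congested = [v < THRESHOLD for v in speeds]

def congestion_intervals(flags):
    """Return (start, end) index pairs of maximal congested runs."""
    runs, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(flags) - 1))
    return runs

# One congested episode spanning intervals 3..5 → duration of three periods.
print(congestion_intervals(congested))  # [(3, 5)]
```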

  4. Consideration Of The Change Of Material Emission Signatures Due To Long-term Emissions For Enhancing Voc Source Identification

    DEFF Research Database (Denmark)

    Han, K. H.; Zhang, J. S.; Knudsen, H. N.

    2011-01-01

    The objectives of this study were to characterize the changes of VOC material emission profiles over time and develop a method to account for such changes in order to enhance a source identification technique that is based on the measurements of mixed air samples and the emission signatures of in...

  5. An Automatic Clustering Technique for Optimal Clusters

    CERN Document Server

    Pavan, K Karteeka; Rao, A V Dattatreya; 10.5121/ijcsea.2011.1412

    2011-01-01

    This paper proposes a simple, automatic and efficient clustering algorithm, namely, Automatic Merging for Optimal Clusters (AMOC) which aims to generate nearly optimal clusters for the given datasets automatically. The AMOC is an extension to standard k-means with a two phase iterative procedure combining certain validation techniques in order to find optimal clusters with automation of merging of clusters. Experiments on both synthetic and real data have proved that the proposed algorithm finds nearly optimal clustering structures in terms of number of clusters, compactness and separation.
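The two-phase idea can be sketched in one dimension: run k-means with a deliberately large k, then merge clusters whose centers fall within a tolerance. This is a simplification that merges by center distance rather than by AMOC's validation techniques; the data, initial centers, and tolerance are invented.

```python
def kmeans_1d(data, centers, iters=20):
    """Plain 1-D k-means starting from the given initial centers."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in data:
            nearest = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return [c for c in clusters if c]

def merge_close(clusters, tol):
    """Merge neighbouring clusters whose means differ by less than tol."""
    mean = lambda c: sum(c) / len(c)
    merged = []
    for c in sorted(clusters, key=mean):
        if merged and abs(mean(c) - mean(merged[-1])) < tol:
            merged[-1] = merged[-1] + c
        else:
            merged.append(c)
    return merged

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
over_clustered = kmeans_1d(data, centers=[0.9, 1.0, 4.9, 5.0])  # k too large
optimal = merge_close(over_clustered, tol=1.0)
print([round(sum(c) / len(c), 2) for c in optimal])  # [1.0, 5.0]
```

Starting with four clusters on two well-separated groups, the merge pass recovers the two natural clusters.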

  6. Automatically ordering events and times in text

    CERN Document Server

    Derczynski, Leon R A

    2017-01-01

    The book offers a detailed guide to temporal ordering, exploring open problems in the field and providing solutions and extensive analysis. It addresses the challenge of automatically ordering events and times in text. Aided by TimeML, it also describes and presents concepts relating to time in easy-to-compute terms. Working out the order that events and times happen has proven difficult for computers, since the language used to discuss time can be vague and complex. Mapping out these concepts for a computational system, which does not have its own inherent idea of time, is, unsurprisingly, tough. Solving this problem enables powerful systems that can plan, reason about events, and construct stories of their own accord, as well as understand the complex narratives that humans express and comprehend so naturally. This book presents a theory and data-driven analysis of temporal ordering, leading to the identification of exactly what is difficult about the task. It then proposes and evaluates machine-learning so...

  7. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, a progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.
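As a toy illustration of the stabilizing mechanism (bracket thresholds and rates are invented), a progressive tax makes disposable income swing less across the cycle than gross income does:

```python
def tax(income):
    """Progressive tax with invented brackets: 0% to 50, 20% to 100, 40% above."""
    brackets = [(0, 0.0), (50, 0.2), (100, 0.4)]  # (threshold, marginal rate)
    owed, prev_t, prev_r = 0.0, 0, 0.0
    for t, r in brackets[1:]:
        owed += max(0, min(income, t) - prev_t) * prev_r
        prev_t, prev_r = t, r
    owed += max(0, income - prev_t) * prev_r
    return owed

boom, bust = 150, 75
swing_gross = boom - bust                             # swing in gross income
swing_net = (boom - tax(boom)) - (bust - tax(bust))   # swing after tax
print(swing_net < swing_gross)  # True: the tax dampens the fluctuation
```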

  8. Dismount Threat Recognition through Automatic Pose Identification

    Science.gov (United States)

    2012-03-01

    Uses the camera and joint estimation software of the Kinect for Xbox 360; a threat determination is made based on the pose identified by the network. [Remaining snippet is figure-list residue: depth mapping produced by the Kinect sensor; test subject and generated model; joint position estimates extracted from the Kinect; collecting orthogonal poses.]

  9. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (comp.)

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
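The chain-rule propagation described above can be demonstrated with the classic forward-mode trick of dual numbers; this is a minimal sketch supporting only addition and multiplication, not any package from the bibliography.

```python
class Dual:
    """Dual number a + b*eps (eps^2 = 0): carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule propagated automatically through the computation.
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def d(f, x):
    """Derivative of f at x, obtained by seeding the dual part with 1."""
    return f(Dual(x, 1.0)).dot

# d/dx (x^2 + 3x) = 2x + 3, so the derivative at x = 2 is 7.
print(d(lambda x: x * x + 3 * x, 2.0))  # 7.0
```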

  10. MassToMI - a Mathematica package for an automatic Mass Insertion expansion

    CERN Document Server

    Rosiek, Janusz

    2015-01-01

    We present a Mathematica package designed to automate the expansion of QFT transition amplitudes calculated in the mass eigenstate basis (i.e., expressed in terms of physical masses and mixing matrices) into series of "mass insertions", defined as off-diagonal entries of the mass matrices in the Lagrangian before diagonalization and identification of the physical states. The algorithm implemented in this package is based on the general "Flavor Expansion Theorem" proven in Ref. [FET]. The supplied routines are able to automatically analyze the structure of the amplitude, identify the parts which can be expanded, and expand them to any required order. They are capable of dealing with amplitudes depending on both scalar or vector (Hermitian) and Dirac or Majorana fermion (complex) mass matrices. The package can be downloaded from www.fuw.edu.pl/masstomi.
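Schematically, the expansion the package automates can be illustrated on a single propagator: split the mass-squared matrix into its diagonal part $M_d^2$ and off-diagonal insertions $\Delta$, then expand geometrically (an illustrative special case, not the general Flavor Expansion Theorem):

```latex
M^2 = M_d^2 + \Delta, \qquad
\frac{1}{p^2 - M^2}
  = \frac{1}{p^2 - M_d^2}
  + \frac{1}{p^2 - M_d^2}\,\Delta\,\frac{1}{p^2 - M_d^2}
  + \frac{1}{p^2 - M_d^2}\,\Delta\,\frac{1}{p^2 - M_d^2}\,\Delta\,\frac{1}{p^2 - M_d^2}
  + \cdots
```

Each factor of $\Delta$ is one mass insertion; truncating the series at a given order yields the familiar mass-insertion approximation.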

  11. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users

    Science.gov (United States)

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants’ auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, neither promptly after the training nor at the one-month follow-up. However, no significant between-groups difference nor an interaction between groups and session was observed. Conclusion: Audiovisual training may be considered in aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or an interaction between group and session calls for further research. PMID:28348542

  12. Study on Ground Automatic Identification Technology for Intelligent Vehicle Based on Vision Sensor

    Institute of Scientific and Technical Information of China (English)

    崔根群; 余建明; 赵娴; 赵丛琳

    2011-01-01

    The ground automatic identification technology for intelligent vehicles takes the Leobot-Edu autonomous vehicle as a test platform and uses a DH-HV2003UC-T vision sensor to collect image information of five common road surfaces (cobbled road, concrete road, dirt road, grass, and tiled road). The MATLAB image processing module is then used to perform coding compression, restoration and reconstruction, smoothing, sharpening, enhancement, feature extraction, and other related processing, after which the MATLAB BP neural network module carries out pattern recognition. Analysis of the recognition results shows that the error of the network training objective function is 20%; the road surface recognition rate meets the intended requirement of the system, and the approach can be applied broadly to intelligent vehicles, mobile robots, and related fields.

  13. Diatom Identification : a Double Challenge Called ADIAC

    NARCIS (Netherlands)

    Buf, Hans du; Bayer, Micha; Droop, Stephen; Head, Ritchie; Juggins, Steve; Fischer, Stefan; Bunke, Horst; Wilkinson, Michael; Roerdink, Jos; Pech-Pacheco, José; Cristóbal, Gabriel; Shahbazkia, Hamid; Ciobanu, Adrian

    1999-01-01

    This paper introduces the project ADIAC (Automatic Diatom Identification and Classification), which started in May 1998 and which is financed by the European MAST (Marine Science and Technology) programme. The main goal is to develop algorithms for an automatic identification of diatoms using image

  14. Mediation and Automatization.

    Science.gov (United States)

    Hutchins, Edwin

    This paper discusses the relationship between the mediation of task performance by some structure that is not inherent in the task domain itself and the phenomenon of automatization, in which skilled performance becomes effortless or phenomenologically "automatic" after extensive practice. The use of a common simple explicit mediating…

  15. Digital automatic gain control

    Science.gov (United States)

    Uzdy, Z.

    1980-01-01

    Performance analysis, used to evaluate the fitness of several circuits for digital automatic gain control (AGC), indicates that a digital integrator employing a coherent amplitude detector (CAD) is the device best suited for the application. The circuit reduces gain error to half that of conventional analog AGC while making it possible to automatically modify the response of the receiver to match incoming signal conditions.
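A sketch of the feedback principle (not the paper's circuit): a digital integrator accumulates the error between a detected amplitude and a reference, and the resulting gain scales the input samples. The reference level and loop gain are invented.

```python
import math

REF = 1.0  # target output amplitude
MU = 0.1   # integrator step (loop gain)

def agc(samples):
    """Digital AGC: a log-gain integrator driven by the amplitude error."""
    log_gain, out = 0.0, []
    for x in samples:
        y = x * math.exp(log_gain)
        out.append(y)
        # Integrate the error between the reference and the detected amplitude.
        log_gain += MU * (REF - abs(y))
    return out

out = agc([0.25] * 200)        # weak constant input
print(round(abs(out[-1]), 2))  # 1.0 once the loop has settled
```

The loop is stable here because the per-step error contraction factor, about 0.9, has magnitude below one.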

  16. Automatic Differentiation Package

    Energy Technology Data Exchange (ETDEWEB)

    2007-03-01

    Sacado is an automatic differentiation package for C++ codes using operator overloading and C++ templating. Sacado provides forward, reverse, and Taylor polynomial automatic differentiation classes and utilities for incorporating these classes into C++ codes. Users can compute derivatives of computations arising in engineering and scientific applications, including nonlinear equation solving, time integration, sensitivity analysis, stability analysis, optimization, and uncertainty quantification.

  17. Automatic stereoscopic system for person recognition

    Science.gov (United States)

    Murynin, Alexander B.; Matveev, Ivan A.; Kuznetsov, Victor D.

    1999-06-01

    A biometric access control system based on identification of the human face is presented. The system performs remote measurements of the necessary face features. Two different scenarios of system behavior are implemented. The first assumes verification of personal data entered by the visitor at a console using a keyboard or card reader; the system functions as an automatic checkpoint that strictly controls the access of different visitors. The other scenario makes it possible to identify visitors without any personal identifier or pass: only the person's biometrics are used, and the recognition system automatically finds the necessary identification information previously stored in the database. Two laboratory models of the recognition system were developed. The models are designed to use different information types and sources: in addition to stereoscopic images input to the computer from cameras, the models can use voice data and some physical characteristics, such as the person's height as measured by the imaging system.

  18. Presentation video retrieval using automatically recovered slide and spoken text

    Science.gov (United States)

    Cooper, Matthew

    2013-03-01

    Video is becoming a prevalent medium for e-learning. Lecture videos contain text information in both the presentation slides and lecturer's speech. This paper examines the relative utility of automatically recovered text from these sources for lecture video retrieval. To extract the visual information, we automatically detect slides within the videos and apply optical character recognition to obtain their text. Automatic speech recognition is used similarly to extract spoken text from the recorded audio. We perform controlled experiments with manually created ground truth for both the slide and spoken text from more than 60 hours of lecture video. We compare the automatically extracted slide and spoken text in terms of accuracy relative to ground truth, overlap with one another, and utility for video retrieval. Results reveal that automatically recovered slide text and spoken text contain different content with varying error profiles. Experiments demonstrate that automatically extracted slide text enables higher precision video retrieval than automatically recovered spoken text.
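A minimal sketch of fielded retrieval over the two recovered text sources, assuming (as the experiments suggest) that slide text deserves the higher weight; the documents, query, and weights are all invented.

```python
from collections import Counter

# Hypothetical index: per lecture, text recovered from slides (OCR) and speech (ASR).
videos = {
    "lec1": {"slide": "neural networks backpropagation", "asr": "today we cover networks"},
    "lec2": {"slide": "sorting algorithms quicksort", "asr": "we analyse quicksort today"},
}
W = {"slide": 2.0, "asr": 1.0}  # assumed: slide text is the higher-precision field

def score(query, fields):
    """Weighted term-frequency score of a query against one video's fields."""
    terms = query.lower().split()
    return sum(W[f] * Counter(fields[f].split())[t] for f in fields for t in terms)

def search(query):
    return max(videos, key=lambda v: score(query, videos[v]))

print(search("quicksort"))  # lec2
```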

  19. Exploring Automaticity in Text Processing: Syntactic Ambiguity as a Test Case

    Science.gov (United States)

    Rawson, Katherine A.

    2004-01-01

    A prevalent assumption in text comprehension research is that many aspects of text processing are automatic, with automaticity typically defined in terms of properties (e.g., speed and effort). The present research advocates conceptualization of automaticity in terms of underlying mechanisms and evaluates two such accounts, a…

  20. Memory-Based Processing as a Mechanism of Automaticity in Text Comprehension

    Science.gov (United States)

    Rawson, Katherine A.; Middleton, Erica L.

    2009-01-01

    A widespread theoretical assumption is that many processes involved in text comprehension are automatic, with automaticity typically defined in terms of properties (e.g., speed, effort). In contrast, the authors advocate for conceptualization of automaticity in terms of underlying cognitive mechanisms and evaluate one prominent account, the…

  1. Uterine electromyography for identification of first-stage labor arrest in term nulliparous women with spontaneous onset of labor

    NARCIS (Netherlands)

    Vasak, Blanka; Graatsma, Elisabeth M.; Hekman-Drost, Elske; Eijkemans, Marinus J.; van Leeuwen, Jules H. Schagen; Visser, Gerard H.; Jacod, Benoit C.

    2013-01-01

    OBJECTIVE: We sought to study whether uterine electromyography (EMG) can identify inefficient contractions leading to first-stage labor arrest followed by cesarean delivery in term nulliparous women with spontaneous onset of labor. STUDY DESIGN: EMG was recorded during spontaneous labor in 119 nulli

  2. Second-Language Learners' Identification of Target-Language Phonemes: A Short-Term Phonetic Training Study

    Science.gov (United States)

    Cebrian, Juli; Carlet, Angelica

    2014-01-01

    This study examined the effect of short-term high-variability phonetic training on the perception of English /b/, /v/, /d/, /ð/, /ae/, /? /, /i/, and /i/ by Catalan/Spanish bilinguals learning English as a foreign language. Sixteen English-major undergraduates were tested before and after undergoing a four-session perceptual training program…

  3. Automatic basal slice detection for cardiac analysis

    Science.gov (United States)

    Paknezhad, Mahsa; Marchesseau, Stephanie; Brown, Michael S.

    2016-03-01

    Identification of the basal slice in cardiac imaging is a key step to measuring the ejection fraction (EF) of the left ventricle (LV). Despite research on cardiac segmentation, basal slice identification is routinely performed manually. Manual identification, however, has been shown to have high inter-observer variability, with a variation of the EF by up to 8%. Therefore, an automatic way of identifying the basal slice is still required. Prior published methods operate by automatically tracking the mitral valve points from the long-axis view of the LV. These approaches assumed that the basal slice is the first short-axis slice below the mitral valve. However, guidelines published in 2013 by the society for cardiovascular magnetic resonance indicate that the basal slice is the uppermost short-axis slice with more than 50% myocardium surrounding the blood cavity. Consequently, these existing methods are at times identifying the incorrect short-axis slice. Correct identification of the basal slice under these guidelines is challenging due to the poor image quality and blood movement during image acquisition. This paper proposes an automatic tool that focuses on the two-chamber slice to find the basal slice. To this end, an active shape model is trained to automatically segment the two-chamber view for 51 samples using the leave-one-out strategy. The basal slice was detected using temporal binary profiles created for each short-axis slice from the segmented two-chamber slice. From the 51 successfully tested samples, 92% and 84% of detection results were accurate at the end-systolic and the end-diastolic phases of the cardiac cycle, respectively.
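The 2013 guideline rule quoted above reduces to a simple scan once each short-axis slice has been summarised by the fraction of the blood cavity surrounded by myocardium; the per-slice fractions below are invented stand-ins for what segmentation would provide.

```python
# Per-slice fraction of the blood cavity surrounded by myocardium, base → apex
# (hypothetical values that segmentation of the two-chamber view would yield).
myo_fraction = [0.20, 0.35, 0.62, 0.90, 0.95]

def basal_slice(fractions, thresh=0.5):
    """Uppermost short-axis slice with more than 50% surrounding myocardium."""
    for i, f in enumerate(fractions):
        if f > thresh:
            return i
    return None

print(basal_slice(myo_fraction))  # 2
```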

  4. Automatic polar ice thickness estimation from SAR imagery

    Science.gov (United States)

    Rahnemoonfar, Maryam; Yari, Masoud; Fox, Geoffrey C.

    2016-05-01

    Global warming has caused serious damage to our environment in recent years. Accelerated loss of ice from Greenland and Antarctica has been observed in recent decades. The melting of polar ice sheets and mountain glaciers has a considerable influence on sea level rise and altering ocean currents, potentially leading to the flooding of coastal regions and putting millions of people around the world at risk. Synthetic aperture radar (SAR) systems are able to provide relevant information about the subsurface structure of polar ice sheets. Manual layer identification is prohibitively tedious and expensive and is not practical for regular, long-term ice-sheet monitoring. Automatic layer finding in noisy radar images is quite challenging due to the large amount of noise, limited resolution, and variations in ice layers and bedrock. Here we propose an approach which automatically detects ice surface and bedrock boundaries using distance regularized level set evolution. In this approach the complex topology of ice and bedrock boundary layers can be detected simultaneously by evolving an initial curve in the radar imagery. Using a distance regularized term, the regularity of the level set function is intrinsically maintained, which resolves the reinitialization issues arising from conventional level set approaches. The results are evaluated on a large dataset of airborne radar imagery collected during the IceBridge mission over Antarctica and Greenland and show promising results with respect to hand-labeled ground truth.
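For contrast with the level-set method of the paper, a naive per-column picker marks, for each radar trace, the two strongest returns as candidate ice-surface and bedrock boundaries; it has none of the noise robustness or topology handling of the level-set approach, and the echo intensities are invented.

```python
# Each row is one radar trace (column of the radargram); values are
# hypothetical echo intensities over depth bins.
radargram = [
    [0.1, 0.9, 0.2, 0.1, 0.7, 0.1],  # surface near bin 1, bedrock near bin 4
    [0.2, 0.8, 0.1, 0.2, 0.6, 0.1],
]

def pick_layers(col):
    """Indices of the two strongest returns, ordered top-down."""
    ranked = sorted(range(len(col)), key=lambda i: col[i], reverse=True)[:2]
    surface, bed = sorted(ranked)
    return surface, bed

print([pick_layers(c) for c in radargram])  # [(1, 4), (1, 4)]
```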

  5. Research on automatic Chinese-English term extraction based on order and position feature of words

    Institute of Scientific and Technical Information of China (English)

    张莉; 刘昱显

    2015-01-01

    Bilingual term extraction and alignment play an important role in research areas such as cross-language retrieval, the construction of bilingual dictionaries, and machine translation. With the explosion of information in current society, knowledge spreads across different areas and different languages, which hinders understanding, retrieval, and the exchange of ideas. Bilingual terminology is an important language resource for natural language processing tasks such as machine translation, data mining, and bilingual information retrieval, but collecting it is challenging and time-consuming because the texts to be aligned are in different languages, such as Chinese and English, with significant differences between them. The development of bilingual terminology extraction and alignment also benefits the building of translation memories for machine-assisted translation and can improve machine translation quality when bilingual terminology information is added. We propose an automatic Chinese-English terminology alignment algorithm based on the order and position features of words. The algorithm improves the terminology alignment step of the two-step bilingual term extraction strategy by incorporating the word order and position feature from phrase-based machine translation into the alignment algorithm. Compared with a baseline method, the new method significantly improves alignment precision, especially when term translation probabilities are low, while avoiding the low computational efficiency of phrase-based machine translation.
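A sketch of scoring candidate term alignments by combining a lexical translation probability with a word-position agreement feature, in the spirit of phrase-based MT position features; all probabilities, the romanized tokens, and the weight ALPHA are invented for illustration.

```python
# Invented lexical translation probabilities (romanized Chinese → English).
trans_prob = {("shuju", "data"): 0.7, ("wajue", "mining"): 0.6,
              ("shuju", "mining"): 0.1, ("wajue", "data"): 0.1}
ALPHA = 0.7  # weight on lexical probability vs. position agreement

def align_score(src_terms, tgt_terms):
    """Each source word takes its best-scoring target word."""
    total = 0.0
    for i, s in enumerate(src_terms):
        total += max(
            ALPHA * trans_prob.get((s, t), 0.0)
            # Position feature: reward words at similar relative positions.
            + (1 - ALPHA) * (1.0 - abs(i / len(src_terms) - j / len(tgt_terms)))
            for j, t in enumerate(tgt_terms))
    return total

# The correctly ordered pair outscores the crossed (reordered) one.
print(align_score(["shuju", "wajue"], ["data", "mining"])
      > align_score(["shuju", "wajue"], ["mining", "data"]))  # True
```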

  6. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming, Volume 2 is a collection of papers that discusses the controversy about the suitability of COBOL as a common business-oriented language, and the development of different common languages for scientific computation. A couple of papers describe the use of the Genie system in numerical calculation and analyze Mercury Autocode in terms of a phrase structure language, covering the source language, the target language, the order structure of ATLAS, and the meta-syntactical language of the assembly program. Other papers explain interference or an "intermediate

  7. Iris Pattern Segmentation using Automatic Segmentation and Window Technique

    OpenAIRE

    Swati Pandey; Prof. Rajeev Gupta

    2013-01-01

    A biometric system is an automatic identification of an individual based on a unique feature or characteristic. Iris recognition has great advantages such as variability, stability and security. In this paper, two methods are used for iris segmentation: an automatic segmentation method and a window method. The window method is a novel approach comprising two steps: the first finds the pupil's center and then two radial coefficients, because sometimes the pupil is not a perfect circle. The second step extracts the i...
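The pupil-center step can be sketched as a threshold-and-centroid computation on a tiny hypothetical grayscale eye image (threshold and pixel values invented); real pipelines add noise filtering and the radial-coefficient fit mentioned above.

```python
# Hypothetical grayscale eye image: low values are the dark pupil region.
img = [
    [200, 200, 200, 200],
    [200,  10,  20, 200],
    [200,  15,  12, 200],
    [200, 200, 200, 200],
]
DARK = 50  # assumed intensity threshold for pupil pixels

# Centroid of the dark pixels approximates the pupil centre.
pts = [(r, c) for r, row in enumerate(img) for c, v in enumerate(row) if v < DARK]
center = (sum(r for r, _ in pts) / len(pts), sum(c for _, c in pts) / len(pts))
print(center)  # (1.5, 1.5)
```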

  8. Identification of intrinsic and reflexive contributions to low-back stiffness: medium-term reliability and construct validity.

    Science.gov (United States)

    Larivière, Christian; Ludvig, Daniel; Kearney, Robert; Mecheri, Hakim; Caron, Jean-Maxime; Preuss, Richard

    2015-01-21

    This study aimed at testing the reliability and construct validity of a trunk perturbation protocol (TPP) that estimates the intrinsic and reflexive contributions to low-back stiffness. The TPP consists of a series of pseudorandom position-controlled trunk perturbations in an apparatus measuring forces and displacements at the harness surrounding the thorax. Intrinsic and reflexive contributions to low-back stiffness were estimated using a system identification procedure, leading to 12 parameters. Study 1 methods (reliability): 30 subjects performed five 75-s trials, on each of two separate days (eight weeks apart). Reliability was assessed using the generalizability theory, which allowed computing indexes of dependability (ϕ, analogous to intraclass correlation coefficient) and standard errors of measurement (SEM). Study 2 methods (validity): 20 healthy subjects performed three 75-s trials for each of five experimental conditions assumed to provide different lumbar stiffness; testing the construct validity of the TPP using four conditions with different lumbar belt designs and one control condition without. Study 1 results (reliability): Learning was seen between the first and following trials. Consequently, reliability analyses were performed without the first trial. Simulations showed that averaging the scores of three trials can lead to acceptable reliability results for some TPP parameters. Study 2 results (validity): All lumbar belt designs increased low-back intrinsic stiffness, while only some of them decreased reflex stiffness, which support the construct validity of the TPP. Overall, these findings support the use of the TPP to test the effect of rehabilitation or between-groups differences with regards to trunk stiffness.

  9. Automatic Identification of Digital Labels in Assembly Drawings of Mechanical Parts Based on Computer Vision Technology

    Institute of Scientific and Technical Information of China (English)

    江能兴

    2011-01-01

    In order to realize precise and fast automatic identification of the numeric characters in assembly drawings of mechanical parts, a template matching method based on the Open Computer Vision library (OpenCV) is proposed. This paper introduces the basic framework of OpenCV and its typical application areas, and presents a comparative analysis of automatic identification of the numeric characters in assembly drawings of mechanical parts using the OpenCV development library. This work is of great significance for improving on the current practice of manual digit identification in mechanical drawings.

  10. Automatic identification of pump unit axis orbit based on invariant moment features and neural networks

    Institute of Scientific and Technical Information of China (English)

    陈坚; 叶渊杰; 陈抒; 陈光大; 于永海; 王建明

    2011-01-01

    To meet the needs of signal processing for pump unit fault diagnosis, the principle of invariant moment theory is introduced, and neural network modeling, including sample acquisition, is discussed in detail. Since the shape of the axis orbit, which reflects the operating state of the pump unit, is related to a variety of faults, the real-time detected shaft swing signals are processed into invariant moments, exploiting the invariance of moments under translation, scaling, and rotation; a BP neural network then performs pattern recognition to determine the shape of the axis orbit. To compensate for the shortage of neural network training samples, numerical simulation and on-site tests were combined: all acquired samples were processed into invariant moments and, together with their corresponding actual shapes, used as the neural network training set. After training, the network output was compared with the actual shapes of the axis loci to validate the method. Taking fault detection and diagnosis at the Dayudu Pump Station in Shanxi as an example, 10 sets of sample data were selected for comparison, and the results show that the neural network recognition results are accurate. The method can provide a basis for automatic identification of orbit shape and for making pump unit fault diagnosis systems more intelligent.
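A sketch of why moment features suit this task: the first Hu-type invariant (eta20 + eta02), computed here from an orbit's point set rather than an image, is unchanged when the orbit is translated; the orbit points are invented, and the paper additionally exploits scale and rotation invariance.

```python
def hu1(points):
    """First Hu-type moment of a 2-D point set: eta20 + eta02."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    # Central moments are taken about the centroid, hence translation invariance.
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    # For a point set, mu00 is simply the number of points.
    return (mu20 + mu02) / n ** 2

orbit = [(0, 1), (1, 0), (0, -1), (-1, 0)]    # idealized circular axis orbit
shifted = [(x + 5, y + 3) for x, y in orbit]  # same orbit, shaft displaced
print(hu1(orbit) == hu1(shifted))  # True
```

Such invariant features, stacked into a vector, form the input that the BP network classifies into orbit shapes.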

  11. Word Automaticity of Tree Automatic Scattered Linear Orderings Is Decidable

    CERN Document Server

    Huschenbett, Martin

    2012-01-01

    A tree automatic structure is a structure whose domain can be encoded by a regular tree language such that each relation is recognisable by a finite automaton processing tuples of trees synchronously. Words can be regarded as specific simple trees and a structure is word automatic if it is encodable using only these trees. The question naturally arises whether a given tree automatic structure is already word automatic. We prove that this problem is decidable for tree automatic scattered linear orderings. Moreover, we show that in case of a positive answer a word automatic presentation is computable from the tree automatic presentation.

  12. Enhancing Automaticity through Task-Based Language Learning

    Science.gov (United States)

    De Ridder, Isabelle; Vangehuchten, Lieve; Gomez, Marta Sesena

    2007-01-01

    In general terms automaticity could be defined as the subconscious condition wherein "we perform a complex series of tasks very quickly and efficiently, without having to think about the various components and subcomponents of action involved" (DeKeyser 2001: 125). For language learning, Segalowitz (2003) characterised automaticity as a…

  13. Automatic Syntactic Analysis of Free Text.

    Science.gov (United States)

    Schwarz, Christoph

    1990-01-01

    Discusses problems encountered with the syntactic analysis of free-text documents in indexing. Postcoordination and precoordination of terms are discussed, an automatic indexing system called COPSY (context operator syntax) that uses natural language processing techniques is described, and future developments are explained. (60 references) (LRW)

  14. Automated vertebra identification in CT images

    Science.gov (United States)

    Ehm, Matthias; Klinder, Tobias; Kneser, Reinhard; Lorenz, Cristian

    2009-02-01

    In this paper, we describe and compare methods for automatically identifying individual vertebrae in arbitrary CT images. The identification is an essential precondition for a subsequent model-based segmentation, which is used in a wide range of orthopedic, neurological, and oncological applications, e.g., spinal biopsies or the insertion of pedicle screws. Since adjacent vertebrae show similar characteristics, automated labeling of the spine column is a very challenging task, especially if no surrounding reference structures can be taken into account. Furthermore, vertebra identification is complicated by the fact that many images are bounded to a very limited field of view and may contain only a few vertebrae. We propose and evaluate two methods for automatically labeling the spine column by evaluating similarities between given models and vertebral objects. In one method, object boundary information is taken into account by applying a Generalized Hough Transform (GHT) for each vertebral object. In the other method, appearance models containing mean gray value information are registered to each vertebral object using cross correlation and local correlation as similarity measures for the optimization function. The GHT is advantageous in terms of computational performance but falls short in identification rate. A correct labeling of the vertebral column was successfully performed on 93% of the test set, consisting of 63 disparate input images, using rigid image registration with local correlation as the similarity measure.
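The registration similarity measure can be illustrated with plain normalized cross-correlation on 1-D profiles standing in for image patches; the values are invented, and the paper uses cross and local correlation on 3-D appearance models.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length profiles."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

model = [1.0, 3.0, 1.0, 0.0]
match = [2.0, 6.0, 2.0, 0.0]   # same shape, different gain → correlation 1
other = [0.0, 1.0, 3.0, 1.0]   # shifted shape → lower correlation
print(round(ncc(model, match), 2), ncc(model, match) > ncc(model, other))
```

Mean subtraction and normalization make the measure insensitive to brightness and contrast, which is why the gain-doubled profile still correlates perfectly.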

  15. Automatic Identification Method of Micro-blog Messages Containing Geographical Events

    Institute of Scientific and Technical Information of China (English)

    仇培元; 陆锋; 张恒才; 余丽

    2016-01-01

    Micro-blog texts contain abundant types of geographical event information, which can compensate for the shortcomings of traditional fixed-point monitoring technologies and improve the quality of emergency response. Identifying the micro-blog messages that contain geographical event information is the prerequisite for fully utilizing this data source. Trigger-based and supervised machine learning methods are commonly adopted to identify event-related texts; for unrestricted texts, the supervised methods perform better. Unfortunately, the general lack of large-scale tagged corpora means supervised learning cannot be applied directly to identify geographical-event-related messages. In this paper, we propose an automatic method for recognizing micro-blog messages containing geographical events that strengthens recognition using quickly acquired corpus resources. The method exploits the ability of topic models to extract the topic set of a document, filtering candidate corpus texts by topic to automatically build a geographical event corpus. In addition, a distributed word-vector (word embedding) model is introduced into the event-relatedness computation, using the semantic information implicit in word vectors to enrich the context of short micro-blog texts and further improve recognition. Experiments using Sina Weibo as the data source show that the proposed method achieves an F-1 score of 71.41% on messages from event-related micro-blog topics, 10.79% higher than a classical supervised SVM-based method, and an identification accuracy of 60% on a 5-million-message dataset simulating a real micro-blog environment.
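    The word-embedding part of the method can be sketched as follows (a minimal illustration only; the toy embedding table and function names are assumptions, not the authors' system): a message and an event description are each averaged into a single vector, and their cosine similarity serves as the event-relatedness score.

```python
import numpy as np

def text_vector(tokens, embeddings):
    """Average the word vectors of the tokens that have an embedding;
    this enriches short micro-blog texts with distributional semantics."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else None

def event_relatedness(tokens, event_tokens, embeddings):
    """Cosine similarity between a message and an event description."""
    a = text_vector(tokens, embeddings)
    b = text_vector(event_tokens, embeddings)
    if a is None or b is None:
        return 0.0
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

    With pretrained vectors, messages whose averaged vector lies close to an event description score high even when they share no exact trigger word with it.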

  16. Automatic Control of Personal Rapid Transit Vehicles

    Science.gov (United States)

    Smith, P. D.

    1972-01-01

    The requirements for automatic longitudinal control of a string of closely packed personal vehicles are outlined. Optimal control theory is used to design feedback controllers for strings of vehicles. An important modification of the usual optimal control scheme is the inclusion of jerk in the cost functional. While the inclusion of the jerk term was considered, its effect was not sufficiently studied; adding the jerk term will increase passenger comfort.
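    A quadratic cost functional augmented with a jerk penalty, of the kind described above, might take the following form (a sketch; the symbols and weights are assumptions, not the paper's exact formulation):

```latex
J = \int_0^{\infty} \left( x^{\mathsf{T}} Q\, x + r\, u^2 + \rho\, \dot{a}^2 \right) dt
```

    where x collects spacing and velocity errors, u is the control input, the jerk is the derivative of the acceleration a, and the weight rho trades tracking accuracy against passenger comfort.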

  17. Metaphor identification in large texts corpora.

    Science.gov (United States)

    Neuman, Yair; Assaf, Dan; Cohen, Yohai; Last, Mark; Argamon, Shlomo; Howard, Newton; Frieder, Ophir

    2013-01-01

    Identifying metaphorical language-use (e.g., sweet child) is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of the scope of metaphorical phrases and annotated corpus size. The algorithms' performance in identifying linguistic phrases as metaphorical or literal has been compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and a 27% averaged improvement in prediction over the base rate of metaphors in the corpus.

  18. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005, written by members of the IFIP Working Group 2.1, of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of automatic program derivation. The book also includes some papers...

  19. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS). It presents a recent state of the art, shows the main problems of ADS, the difficulties involved, and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches are statistical, linguistic, and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of automatic document summarization are not recent. Powerful algorithms have been developed

  20. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e., automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  1. The Role of Attentional Resources in Automatic Detection.

    Science.gov (United States)

    1981-01-01

    ...short-term memory is fully occupied with an attended message serves as a basic experimental separation of the two different processing stages. Atkinson... the letters are potential targets (Schneider and Shiffrin, 1977). It predicts that the slope of the RT vs. memory set size function should be greater... short-term memory. The "automatic attention response" described by Shiffrin and Schneider (1977) suggests that controlled and automatic detection may

  2. The realization of music identification and automatic harmony configuration based on Matlab

    Institute of Scientific and Technical Information of China (English)

    杨若芳; 项顶

    2011-01-01

    This paper presents a set of methods for using Matlab to recognize real musical performances and automatically configure the corresponding harmony. It analyzes in detail the entire process by which harmonious, multi-timbre accompaniment is generated automatically from an existing single-note piano melody, drawing on related knowledge from physics, mathematics, automation, and music, and finally realizes the function of automatic harmony configuration.

  3. State-dependent doubly weighted stochastic simulation algorithm for automatic characterization of stochastic biochemical rare events.

    Science.gov (United States)

    Roh, Min K; Daigle, Bernie J; Gillespie, Dan T; Petzold, Linda R

    2011-12-21

    In recent years there has been substantial growth in the development of algorithms for characterizing rare events in stochastic biochemical systems. Two such algorithms, the state-dependent weighted stochastic simulation algorithm (swSSA) and the doubly weighted SSA (dwSSA) are extensions of the weighted SSA (wSSA) by H. Kuwahara and I. Mura [J. Chem. Phys. 129, 165101 (2008)]. The swSSA substantially reduces estimator variance by implementing system state-dependent importance sampling (IS) parameters, but lacks an automatic parameter identification strategy. In contrast, the dwSSA provides for the automatic determination of state-independent IS parameters, thus it is inefficient for systems whose states vary widely in time. We present a novel modification of the dwSSA--the state-dependent doubly weighted SSA (sdwSSA)--that combines the strengths of the swSSA and the dwSSA without inheriting their weaknesses. The sdwSSA automatically computes state-dependent IS parameters via the multilevel cross-entropy method. We apply the method to three examples: a reversible isomerization process, a yeast polarization model, and a lac operon model. Our results demonstrate that the sdwSSA offers substantial improvements over previous methods in terms of both accuracy and efficiency.
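    The importance-sampling machinery common to the wSSA family can be sketched as follows (a simplified, state-independent illustration of the underlying idea, not the sdwSSA itself; the reaction model and all names are assumptions): reactions are selected from biased propensities b_j = gamma_j * a_j, while each trajectory carries the likelihood-ratio weight that keeps the rare-event probability estimate unbiased.

```python
import random

def wssa_estimate(x0, reactions, propensity, gamma, t_max, target,
                  n_runs=10000, seed=1):
    """Weighted-SSA sketch: estimate the probability that `target` is
    reached before `t_max`, biasing only the *choice* of reaction (the
    waiting time is still drawn from the unbiased total propensity)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        x, t, w = x0, 0.0, 1.0
        while not target(x):
            a = [propensity(j, x) for j in range(len(reactions))]
            a0 = sum(a)
            if a0 == 0.0:
                break
            tau = rng.expovariate(a0)        # waiting time from unbiased a0
            if t + tau > t_max:
                break
            t += tau
            b = [g * aj for g, aj in zip(gamma, a)]
            b0 = sum(b)
            r, acc = rng.random() * b0, 0.0
            for j, bj in enumerate(b):       # reaction chosen from biased b
                acc += bj
                if acc > r:
                    break
            w *= (a[j] / a0) / (b[j] / b0)   # likelihood-ratio correction
            x = reactions[j](x)
        if target(x):
            total += w
    return total / n_runs
```

    In the sdwSSA the gamma parameters would additionally depend on the current state and be tuned automatically via the multilevel cross-entropy method; here they are fixed constants for illustration.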

  4. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of the input. We describe a system to derive such time bounds automatically using abstract...
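    As a toy illustration of the kind of bound such a system derives (the cost model, counting list-cell operations of a naive reversal, is an assumption chosen for illustration): the recurrence T(0) = 0, T(n) = T(n-1) + n solves to the closed form below, and an instrumented run confirms it.

```python
def naive_reverse(xs, cost=None):
    """Naive reversal: reverse(x:rest) = reverse(rest) ++ [x].
    `cost` is a one-element list used as a mutable counter of the
    list-cell operations performed by the append."""
    if cost is None:
        cost = [0]
    if not xs:
        return []
    rest = naive_reverse(xs[1:], cost)
    cost[0] += len(rest) + 1      # ++ traverses `rest`, then adds one cell
    return rest + [xs[0]]

def time_bound(n):
    """Closed-form worst-case step count, T(n) = n(n+1)/2, i.e. the
    kind of expression an automatic complexity analyser could emit."""
    return n * (n + 1) // 2
```

    Running `naive_reverse` on an n-element list performs exactly `time_bound(n)` counted operations, so the derived bound is tight for this cost model.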

  5. Exploring Automatization Processes.

    Science.gov (United States)

    DeKeyser, Robert M.

    1996-01-01

    Presents the rationale for and the results of a pilot study attempting to document in detail how automatization takes place as the result of different kinds of intensive practice. Results show that reaction times and error rates gradually decline with practice, and the practice effect is skill-specific. (36 references) (CK)

  6. Framework for automatic information extraction from research papers on nanocrystal devices

    Directory of Open Access Journals (Sweden)

    Thaer M. Dieb

    2015-09-01

    Full Text Available To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called “NaDev” (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called “NaDevEx” (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39–73%); however, precision is better (75–97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for

  7. Framework for automatic information extraction from research papers on nanocrystal devices.

    Science.gov (United States)

    Dieb, Thaer M; Yoshioka, Masaharu; Hara, Shinjiro; Newton, Marcus C

    2015-01-01

    To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called "NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39-73%); however, precision is better (75-97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for characterization papers
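    The loose-agreement evaluation described above can be sketched as span matching (an illustrative reconstruction, not the NaDevEx evaluation code; the span representation and function names are assumptions): under strict matching a predicted term must equal a gold term exactly, while under loose agreement mere intersection with a gold span of the same category counts as correct.

```python
def overlaps(a, b):
    """True if two (start, end) character spans intersect."""
    return a[0] < b[1] and b[0] < a[1]

def evaluate(predicted, gold, loose=False):
    """Precision/recall over extracted term spans of one information
    category. Loose agreement credits a prediction that merely
    intersects a gold span (e.g. the two terms share the head noun)."""
    match = overlaps if loose else (lambda a, b: a == b)
    tp_pred = sum(any(match(p, g) for g in gold) for p in predicted)
    tp_gold = sum(any(match(p, g) for p in predicted) for g in gold)
    precision = tp_pred / len(predicted) if predicted else 0.0
    recall = tp_gold / len(gold) if gold else 0.0
    return precision, recall
```

    Evaluating per information category and averaging would reproduce the strict-versus-loose gap reported in the abstract (89%/69% versus 95%/74%).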

  8. Automaticity and Reading: Perspectives from the Instance Theory of Automatization.

    Science.gov (United States)

    Logan, Gordon D.

    1997-01-01

    Reviews recent literature on automaticity, defining the criteria that distinguish automatic processing from non-automatic processing, and describing modern theories of the underlying mechanisms. Focuses on evidence from studies of reading and draws implications from theory and data for practical issues in teaching reading. Suggests that…

  9. Profiling School Shooters: Automatic Text-Based Analysis

    Directory of Open Access Journals (Sweden)

    Yair eNeuman

    2015-06-01

    Full Text Available School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by six school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters' texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology.

  10. MINUTIAE EXTRACTION BASED ON ARTIFICIAL NEURAL NETWORKS FOR AUTOMATIC FINGERPRINT RECOGNITION SYSTEMS

    Directory of Open Access Journals (Sweden)

    Necla ÖZKAYA

    2007-01-01

    Full Text Available Automatic fingerprint recognition systems are utilised for personal identification by comparing local ridge characteristics and their relationships. A critical stage in personal identification is to extract features automatically, quickly, and reliably from the input fingerprint images. In this study, a new approach based on artificial neural networks to extract minutiae from fingerprint images is developed and introduced. The results have shown that artificial neural networks achieve minutiae extraction from fingerprint images with high accuracy.

  11. Automatic summarising factors and directions

    CERN Document Server

    Jones, K S

    1998-01-01

    This position paper suggests that progress with automatic summarising demands a better research methodology and a carefully focussed research strategy. In order to develop effective procedures it is necessary to identify and respond to the context factors, i.e. input, purpose, and output factors, that bear on summarising and its evaluation. The paper analyses and illustrates these factors and their implications for evaluation. It then argues that this analysis, together with the state of the art and the intrinsic difficulty of summarising, imply a nearer-term strategy concentrating on shallow, but not surface, text analysis and on indicative summarising. This is illustrated with current work, from which a potentially productive research programme can be developed.

  12. Improved Support Vector Machine Approach Based on Determining Thresholds Automatically

    Institute of Scientific and Technical Information of China (English)

    WANG Xiao-hua; YAN Xue-mei; WANG Xiao-guang

    2007-01-01

    To improve the training speed of the support vector machine (SVM), a method called the improved center distance ratio method (ICDRM), which determines thresholds automatically, is presented here; it does so without reducing the identification rate. In this method, border vectors are chosen from the given samples in advance by comparing sample vectors using the center distance ratio. The number of training samples is thus reduced greatly and the training speed is improved. The method is applied to the identification of license plate characters. Experimental results show that the improved SVM method, ICDRM, performs well in both identification rate and training speed.
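    The border-vector selection idea can be sketched as follows (an illustrative reconstruction of the center-distance-ratio principle, not the paper's exact ICDRM; the quantile-based automatic threshold is an assumption): samples whose distance to their own class center is large relative to their distance to the other class center lie near the decision boundary and are kept for SVM training.

```python
import numpy as np

def select_border_vectors(X, y, keep=0.5):
    """Keep the fraction `keep` of samples with the largest
    center-distance ratio (own-center distance / nearest-other-center
    distance); the threshold is set automatically from the ratio
    distribution via a quantile."""
    X, y = np.asarray(X, float), np.asarray(y)
    centers = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    ratios = np.empty(len(X))
    for i, (xi, yi) in enumerate(zip(X, y)):
        own = np.linalg.norm(xi - centers[yi])
        other = min(np.linalg.norm(xi - centers[c]) for c in centers if c != yi)
        ratios[i] = own / other if other > 0 else np.inf
    threshold = np.quantile(ratios[np.isfinite(ratios)], 1.0 - keep)
    mask = ratios >= threshold
    return X[mask], y[mask]
```

    Training an SVM on the reduced sample set is what yields the speedup, since SVM training cost grows quickly with the number of samples while the support vectors come from the boundary region anyway.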

  13. Automaticity or active control

    DEFF Research Database (Denmark)

    Tudoran, Ana Alina; Olsen, Svein Ottar

    This study addresses the quasi-moderating role of habit strength in explaining action loyalty. A model of loyalty behaviour is proposed that extends the traditional satisfaction–intention–action loyalty network. Habit strength is conceptualised as a cognitive construct referring to psychological aspects such as routine, inertia, automaticity, or very little conscious deliberation. The data consist of 2962 consumers participating in a large European survey. The results show that habit strength significantly moderates the association between satisfaction and action loyalty and, respectively, between intended loyalty and action loyalty. At high levels of habit strength, consumers are more likely to free up cognitive resources and incline the balance from controlled to routine and automatic-like responses.

  14. Automatic Ultrasound Scanning

    DEFF Research Database (Denmark)

    Moshavegh, Ramin

    Medical ultrasound has been a widely used imaging modality in healthcare platforms for examination, diagnostic purposes, and for real-time guidance during surgery. However, despite the recent advances, medical ultrasound remains the most operator-dependent imaging modality, as it heavily relies on the user adjustments on the scanner interface to optimize the scan settings. This explains the huge interest in the subject of this PhD project entitled “AUTOMATIC ULTRASOUND SCANNING”. The key goals of the project have been to develop automated techniques to minimize the unnecessary settings on the scanners, and to improve the computer-aided diagnosis (CAD) in ultrasound by introducing new quantitative measures. Thus, four major issues concerning automation of the medical ultrasound are addressed in this PhD project. They touch upon gain adjustments in ultrasound, automatic synthetic aperture image...

  15. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  16. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments...... and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices....

  17. Automatization of lexicographic work

    Directory of Open Access Journals (Sweden)

    Iztok Kosem

    2013-12-01

    Full Text Available A new approach to lexicographic work, in which the lexicographer is seen more as a validator of the choices made by computer, was recently envisaged by Rundell and Kilgarriff (2011). In this paper, we describe an experiment using such an approach during the creation of the Slovene Lexical Database (Gantar, Krek, 2011). The corpus data, i.e. grammatical relations, collocations, examples, and grammatical labels, were automatically extracted from the 1.18-billion-word Gigafida corpus of Slovene. The evaluation of the extracted data consisted of making a comparison between the time spent writing a manual entry and a (semi-)automatic entry, and identifying potential improvements in the extraction algorithm and in the presentation of data. An important finding was that the automatic approach was far more effective than the manual approach, without any significant loss of information. Based on our experience, we would propose a slightly revised version of the approach envisaged by Rundell and Kilgarriff, in which the validation of data is left to lower-level linguists or crowd-sourcing, whereas high-level tasks such as meaning description remain the domain of lexicographers. Such an approach indeed reduces the scope of the lexicographer’s work; however, it also makes it possible to bring the content to the users more quickly.

  18. Automatic Caption Generation for Electronics Textbooks

    Directory of Open Access Journals (Sweden)

    Veena Thakur

    2014-12-01

    Full Text Available Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify the parts of the textbooks which may be helpful for the students. The caption model describes the entities, attributes, roles, and relationships, plus the constraints that govern the problem domain. It is created in order to represent the vocabulary and key concepts of the problem domain, and it identifies the relationships among all the entities within the scope of the problem domain, commonly together with their attributes. It defines a vocabulary and is helpful as a communication tool. DOM-Sortze is a framework that enables the semi-automatic generation of the Caption Module for a technology supported learning system (TSLS) from electronic textbooks. The semi-automatic generation of the Caption Module entails the identification and elicitation of knowledge from the documents, to which end Natural Language Processing (NLP) techniques are combined with ontologies and heuristic reasoning.

  19. Exciter system identification and automatic tuning of linear combination-type power system stabilizers Prony analysis; Prony kaiseki ni motozuku reijikei no dotei to hiritsu kasangata PSS no jido sekkei hoho

    Energy Technology Data Exchange (ETDEWEB)

    Amano, M.; Watanabe, M.; Banjo, M. [Hitachi, Ltd., Tokyo (Japan)

    1997-07-01

    The objective of this paper is to present a new automatic tuning method for power system stabilizers using Prony analysis. Prony analysis is used for detecting oscillation frequency, damping, phase, and amplitude from power oscillation waveform data. By applying the method to the waveform data of the stabilizing signal and internal induced voltage, exciter system phase lag and oscillation frequency can be identified, and control parameters are decided using the identified values. Linear combination-type power system stabilizers are effective for damping low frequency oscillations using two control input signals, generator power and bus voltage frequency. The control parameters can be directly derived from the oscillation frequency and the excitation system phase lag without using phase compensation. Simulation results show that the proposed method is effective both in a one machine-infinite bus system and in a multimachine system. The method can be used for off-line controller design and also for on-line adaptive control. 9 refs., 15 figs., 4 tabs.
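    The core of Prony analysis, fitting damped sinusoids by linear prediction and reading frequency and damping off the roots of the characteristic polynomial, can be sketched as follows (a minimal illustration under the stated signal model, not the paper's implementation; names and the model order are assumptions):

```python
import numpy as np

def prony(y, dt, order=2):
    """Minimal Prony sketch: fit y[n] ~ sum_k A_k z_k^n by linear
    prediction, then read oscillation frequency (Hz) and damping (1/s)
    off the roots z_k = exp((sigma_k + i*2*pi*f_k) * dt).
    Returns (frequencies, dampings) for the positive-frequency modes."""
    y = np.asarray(y, float)
    N = len(y)
    # Linear prediction: y[n] = a_1*y[n-1] + ... + a_p*y[n-p]
    A = np.column_stack([y[order - 1 - k: N - 1 - k] for k in range(order)])
    b = y[order:]
    a = np.linalg.lstsq(A, b, rcond=None)[0]
    # Roots of z^p - a_1*z^(p-1) - ... - a_p
    z = np.roots(np.concatenate(([1.0], -a)))
    freqs = np.angle(z) / (2 * np.pi * dt)
    damps = np.log(np.abs(z)) / dt
    keep = freqs > 0
    return freqs[keep], damps[keep]
```

    For exciter-system identification, the same fit applied to the stabilizing signal and the internal voltage yields the phase of each mode as well, from which the phase lag between the two channels follows.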

  20. Automatic identification and real-time tracking based on multiple sensors for low-altitude moving targets

    Institute of Scientific and Technical Information of China (English)

    张作楠; 刘国栋; 王婷婷

    2011-01-01

    This paper discusses a multi-sensor anti-helicopter mine (AHM) tracking system designed to increase the mine's fully automatic tracking ability and firing accuracy. Building on traditional passive acoustic detection technology and combining the visual information of an image sensor with the depth information of a laser range finder, an automatic target detection, identification, and tracking algorithm based on joint acoustic-optical-electronic sensing is proposed. First, five-element cross-array acoustic source localization is used for low-altitude target detection and initial positioning; then image processing and feature extraction are performed on the target; finally, an image-feature-based visual servoing algorithm computes the rotation angles of the servo mechanism to achieve precise tracking.

  1. Automatic vs. manual curation of a multi-source chemical dictionary: the impact on text mining

    Directory of Open Access Journals (Sweden)

    Hettne Kristina M

    2010-03-01

    Full Text Available Abstract Background Previously, we developed a combined dictionary dubbed Chemlist for the identification of small molecules and drugs in text based on a number of publicly available databases and tested it on an annotated corpus. To achieve an acceptable recall and precision we used a number of automatic and semi-automatic processing steps together with disambiguation rules. However, it remained to be investigated which impact an extensive manual curation of a multi-source chemical dictionary would have on chemical term identification in text. ChemSpider is a chemical database that has undergone extensive manual curation aimed at establishing valid chemical name-to-structure relationships. Results We acquired the component of ChemSpider containing only manually curated names and synonyms. Rule-based term filtering, semi-automatic manual curation, and disambiguation rules were applied. We tested the dictionary from ChemSpider on an annotated corpus and compared the results with those for the Chemlist dictionary. The ChemSpider dictionary of ca. 80 k names was only 1/3 to 1/4 the size of Chemlist at around 300 k. The ChemSpider dictionary had a precision of 0.43 and a recall of 0.19 before the application of filtering and disambiguation and a precision of 0.87 and a recall of 0.19 after filtering and disambiguation. The Chemlist dictionary had a precision of 0.20 and a recall of 0.47 before the application of filtering and disambiguation and a precision of 0.67 and a recall of 0.40 after filtering and disambiguation. Conclusions We conclude the following: (1) The ChemSpider dictionary achieved the best precision but the Chemlist dictionary had a higher recall and the best F-score; (2) Rule-based filtering and disambiguation is necessary to achieve a high precision for both the automatically generated and the manually curated dictionary. 
ChemSpider is available as a web service at http://www.chemspider.com/ and the Chemlist dictionary is freely
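    Conclusion (1) of the record above, that Chemlist has the best F-score despite ChemSpider's higher precision, follows directly from the reported figures via the standard F1 definition (harmonic mean of precision and recall):

```python
def f1(precision, recall):
    """Standard F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

    With the post-filtering numbers from the abstract, f1(0.87, 0.19) is roughly 0.31 for ChemSpider, versus f1(0.67, 0.40), roughly 0.50, for Chemlist: the harmonic mean punishes ChemSpider's low recall.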

  2. Automatic Configuration in NTP

    Institute of Scientific and Technical Information of China (English)

    Jiang Zongli(蒋宗礼); Xu Binbin

    2003-01-01

    NTP is nowadays the most widely used distributed network time protocol, which aims at synchronizing the clocks of computers in a network and keeping the accuracy and validity of the time information transmitted in the network. Without an automatic configuration mechanism, the stability and flexibility of a synchronization network built upon the NTP protocol are not satisfying. P2P's resource discovery mechanism is used to look for time sources in a synchronization network, and according to the network environment and each node's quality, the synchronization network is constructed dynamically.

  3. [The maintenance of automatic analysers and associated documentation].

    Science.gov (United States)

    Adjidé, V; Fournier, P; Vassault, A

    2010-12-01

    The maintenance of automatic analysers and the associated documentation are part of the requirements of the ISO 15189 Standard and of French regulation, and have to be defined in the laboratory policy. The management of periodic maintenance and documentation shall be implemented and fulfilled. The organisation of corrective maintenance has to be managed to avoid interruption of the work of the laboratory. The different recommendations concern the identification of materials, including automatic analysers, the environmental conditions to take into account, the documentation provided by the manufacturer, and the documents prepared by the laboratory, including procedures for maintenance.

  4. Comparison of automatic control systems

    Science.gov (United States)

    Oppelt, W

    1941-01-01

    This report deals with a reciprocal comparison of an automatic pressure control, an automatic rpm control, an automatic temperature control, and an automatic directional control. It shows the difference between the "faultproof" regulator and the actual regulator which is subject to faults, and develops this difference as far as possible in a parallel manner with regard to the control systems under consideration. Such an analysis affords, particularly in its extension to the faults of the actual regulator, a deep insight into the mechanism of the regulator process.

  5. Automatic Fixture Planning

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Fixture planning is a crucial problem in the field of fixture design. In this paper, the research scope and research methods of computer-aided fixture planning are presented. Based on the positioning principles of typical workparts, an ANN algorithm, namely the Hopfield algorithm, is adopted for automatic fixture planning. The paper also investigates in depth the selection of positioning and clamping surfaces (or points) on workparts, using positioning-clamping-surface-selection rules and a matrix evaluation of deterministic workpart positioning. Finally, methods to select positioning and clamping elements from a database, and a layout algorithm to assemble the selected fixture elements into a tangible fixture, are developed.

  6. Automatic identification of address description in unstructured Chinese natural language

    Institute of Scientific and Technical Information of China (English)

    赵卫锋; 张勤

    2016-01-01

    The texts of address descriptions in natural language, which are massive and readily available on the Internet, imply a wealth of spatial information. Considering their unstructured character, a two-step approach is proposed in this paper to automatically extract word and syntax information from a corpus of address descriptions in Chinese natural language, for further discovery of the associated spatial knowledge. In the first step, a gazetteer-independent Chinese word segmentation algorithm is designed, based on statistical regularities of character-string co-occurrence in the address corpus. In this algorithm, a predefined list of common words used for indicating or restricting others in address statements can be introduced to improve the segmentation and facilitate part-of-speech tagging. In the second step, a finite state machine model is built to represent the common syntaxes of Chinese address descriptions, and is then applied to automatically match and recognize the syntactic structures of segmented and tagged address statements. Experiments in statistical segmentation and syntactic recognition on an abundant address corpus collected from the Internet demonstrate the effectiveness and practicality of the approach.
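The second step, syntactic matching with a finite state machine, can be sketched as a transition table over part-of-speech-like tags. The tag set, states and transitions below are illustrative assumptions, not the paper's actual model, which is learned from the tagged address corpus:

```python
# Hypothetical FSM over address-component tags (province -> city -> ...).
TRANSITIONS = {
    ("START", "province"): "PROV",
    ("START", "city"):     "CITY",
    ("PROV",  "city"):     "CITY",
    ("CITY",  "district"): "DIST",
    ("CITY",  "road"):     "ROAD",
    ("DIST",  "road"):     "ROAD",
    ("ROAD",  "number"):   "NUM",
}
ACCEPTING = {"ROAD", "NUM"}  # a valid address ends at a road or house number

def is_valid_address(tags):
    """Run a tag sequence through the FSM; accept iff it reaches an
    accepting state with no undefined transition along the way."""
    state = "START"
    for tag in tags:
        state = TRANSITIONS.get((state, tag))
        if state is None:
            return False
    return state in ACCEPTING
```

A tagged, segmented address statement is accepted exactly when its tag sequence traces a path through the machine, which is how syntactic recognition reduces to table lookups.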

  7. Neuro-fuzzy system modeling based on automatic fuzzy clustering

    Institute of Scientific and Technical Information of China (English)

    Yuangang TANG; Fuchun SUN; Zengqi SUN

    2005-01-01

    A neuro-fuzzy system model based on automatic fuzzy clustering is proposed. A hybrid model identification algorithm is also developed to decide the model structure and model parameters. The algorithm has three main parts: 1) automatic fuzzy C-means (AFCM), which is applied to generate fuzzy rules automatically and thereby determine the size of the neuro-fuzzy network, greatly reducing the complexity of system design at the price of some fitting capability; 2) recursive least squares estimation (RLSE), which is used to update the parameters of the Takagi-Sugeno model employed to describe the behavior of the system; and 3) a gradient descent algorithm, proposed for tuning the fuzzy membership values following the back-propagation algorithm of neural networks. Finally, modeling the dynamical equation of a two-link manipulator with the proposed approach is used to validate the feasibility of the method.
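The fuzzy C-means core of the first step can be sketched in one dimension as follows (a simplified illustration: the paper's AFCM additionally chooses the number of clusters automatically, whereas here the cluster count c is fixed and the initialization is a deterministic spread over the data range):

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means sketch with fixed cluster count c."""
    pts = sorted(points)
    # Deterministic init: spread the initial centers over the data range.
    centers = [pts[i * (len(pts) - 1) // (c - 1)] for i in range(c)]
    n = len(points)
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        u = []
        for x in points:
            d = [max(abs(x - v), 1e-12) for v in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # Center update: mean of the points weighted by u^m.
        centers = [sum(u[k][i] ** m * points[k] for k in range(n))
                   / sum(u[k][i] ** m for k in range(n))
                   for i in range(c)]
    return sorted(centers)
```

Each resulting cluster center would seed one fuzzy rule, fixing the size of the neuro-fuzzy network before RLSE and gradient descent refine the parameters.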

  8. Photo-identification methods reveal seasonal and long-term site-fidelity of Risso’s dolphins (Grampus griseus) in shallow waters (Cardigan Bay, Wales)

    NARCIS (Netherlands)

    Boer, de M.N.; Leopold, M.F.; Simmonds, M.P.; Reijnders, P.J.H.

    2013-01-01

    A photo-identification study on Risso’s dolphins was carried out off Bardsey Island in Wales (July to September, 1997-2007). Their local abundance was estimated using two different analytical techniques: 1) mark-recapture of well-marked dolphins using a “closed-population” model; and 2) a census technique.

  9. Automatization of hardware configuration for plasma diagnostic system

    Science.gov (United States)

    Wojenski, A.; Pozniak, K. T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R. D.; Zabolotny, W.; Linczuk, P.; Chernyshova, M.; Czarski, T.; Malinowski, K.

    2016-09-01

    Soft X-ray plasma measurement systems are mostly multi-channel, high-performance systems. In the case of a modular construction it is necessary to perform sophisticated system discovery in parallel with automatic system configuration. In this paper the structure of the modular system designed for tokamak plasma soft X-ray measurements is described. The concept of system discovery and subsequent automatic configuration is also presented. The FCS application (FMC/FPGA Configuration Software) is used to run the sophisticated system setup with automatic verification of proper configuration. In order to provide flexibility for further system configurations (e.g. user setup), a common communication interface is also described. The approach presented here is related to the automatic system firmware building presented in previous papers. Modular construction and multi-channel measurements are key requirements in terms of SXR diagnostics with GEM detectors.

  10. Fingerprint Recognition: Enhancement, Feature Extraction and Automatic Evaluation of Algorithms

    OpenAIRE

    Turroni, Francesco

    2012-01-01

    The identification of people by measuring some traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis is focused on improving fingerprint recognition systems considering three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint. If the fingerp...

  11. Isolation of Bacteria and Yeasts from Daqu Samples and their Identification by the Biolog Automatic Analyzer for Microbes

    Institute of Scientific and Technical Information of China (English)

    聂凌鸿; 樊璐; 季方

    2012-01-01

    [Objective] The study aimed to explore the application of the Biolog Automatic Analyzer for Microbes in identifying the bacteria and yeasts in Daqu. [Method] Using the dilution-plate method, the bacteria and yeasts in Daqu samples from single-grain liquor brewing were isolated and cultured on nutrient agar medium and malt-extract medium, respectively. After colony observation and microscopic examination, the isolated and purified bacteria and yeasts were identified with the Biolog Automatic Analyzer for Microbes. [Result] Seven purified bacterial strains and four purified yeast strains were isolated from single colonies in the Daqu samples. Biolog identification confirmed five of the seven bacterial strains as Bacillus pumilus (strain M1), Bacillus qingdaonensis (strain M2), Bacillus megaterium (strain M3), Bacillus amyloliquefaciens (strain M4) and Bacillus megaterium (strain M5), and two of the four yeast strains as Zygosaccharomyces cidri (strain N1) and Saccharomyces boulardii (strain N2). [Conclusion] The study lays a foundation for the analysis of the microbes in Daqu.

  12. Automatic aircraft recognition

    Science.gov (United States)

    Hmam, Hatem; Kim, Jijoong

    2002-08-01

    Automatic aircraft recognition is very complex because of clutter, shadows, clouds, self-occlusion and degraded imaging conditions. This paper presents an aircraft recognition system which assumes from the start that the image is possibly degraded, and implements a number of strategies to overcome edge fragmentation and distortion. The current vision system employs a bottom-up approach, where recognition begins by locating image primitives (e.g., lines and corners), which are then combined in an incremental fashion into larger sets of line groupings using knowledge about aircraft, as viewed from a generic viewpoint. Knowledge about aircraft is represented in the form of whole/part shape descriptions and the connectedness property, and is embedded in production rules, which primarily aim at finding instances of the aircraft parts in the image and checking the connectedness property between the parts. Once a match is found, a confidence score is assigned, and as evidence in support of an aircraft interpretation accumulates, the score is increased proportionally. Finally, a selection of the resulting image interpretations with the highest scores is subjected to competition tests, and only non-ambiguous interpretations are allowed to survive. Experimental results demonstrating the effectiveness of the current recognition system are given.

  13. Automatic control algorithm effects on energy production

    Science.gov (United States)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  14. Electronic amplifiers for automatic compensators

    CERN Document Server

    Polonnikov, D Ye

    1965-01-01

    Electronic Amplifiers for Automatic Compensators presents the design and operation of electronic amplifiers for use in automatic control and measuring systems. This book is composed of eight chapters that consider the problems of constructing input and output circuits of amplifiers, suppression of interference and ensuring high sensitivity.This work begins with a survey of the operating principles of electronic amplifiers in automatic compensator systems. The succeeding chapters deal with circuit selection and the calculation and determination of the principal characteristics of amplifiers, as

  15. The Automatic Telescope Network (ATN)

    CERN Document Server

    Mattox, J R

    1999-01-01

    Because of the scheduled GLAST mission by NASA, there is strong scientific justification for preparing very extensive blazar monitoring in the optical bands, to exploit the opportunity to learn about blazars through the correlation of the variability of the gamma-ray flux with the flux at lower frequencies. Current optical facilities do not provide the required capability. Developments in technology have enabled astronomers to readily deploy automatic telescopes. The effort to create an Automatic Telescope Network (ATN) for blazar monitoring in the GLAST era is described. Other scientific applications of networks of automatic telescopes are discussed. The potential of the ATN for science education is also discussed.

  16. Shift schedule of a dual-clutch automatic transmission based on driver type identification

    Institute of Scientific and Technical Information of China (English)

    刘玺; 何仁; 程秀生

    2015-01-01

    Shift schedule is one of the major factors in drivability. Traditional methods of establishing a shift schedule consider power performance and fuel economy but neglect driver characteristics: the speed and throttle parameters of the traditional two-parameter shift schedule reflect vehicle performance to some extent, but cannot account for the driving characteristics of different drivers. In this paper, to make vehicle drivability match driver needs, a driver type identification method based on data-fusion decision making is proposed and a shift schedule based on driver type is established. First, driver types are analyzed on the basis of driving behavior and driving intention, and an identification scheme based on driving style is designed. After selecting effective operating conditions and the corresponding signals that characterize driving style, a BP neural network classifier identifies the driving style; a Bayesian fusion decision method then fuses the identification results, first within each manoeuvre type and then across all manoeuvre types, to determine the driver type. According to the driver type, a power-performance coefficient is introduced; varying its value for different driver types adjusts the relative weight of power performance and fuel economy in the shift schedule, yielding a DCT shift schedule based on driver type. Finally, simulations of the shift process for different drivers, using a test vehicle equipped with a 6DCT, show that the driver-type-based DCT shift schedule can adapt to the needs of different driver types. This research provides a reference for driver type identification and for the design of intelligent shift schedules.

  17. Building an Automatic Thesaurus to Enhance Information Retrieval

    Directory of Open Access Journals (Sweden)

    Essam Said Hanandeh

    2013-01-01

    Full Text Available One of the major problems of modern Information Retrieval (IR) systems is the vocabulary problem: the discrepancy between the terms used to describe documents and the terms researchers use to describe their information needs. We have implemented an automatic thesaurus built on the Vector Space Model (VSM), using the cosine similarity measure. In this paper we use a selection of 242 Arabic abstract documents, all covering computer science and information systems. The main goal of this paper is to design and build an automatic Arabic thesaurus, using term-term similarity, that can be applied in any special field or domain to improve the query expansion process and retrieve more relevant documents for the user's query. The study concluded that the similarity thesaurus improved recall and precision over a traditional information retrieval system.
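Term-term similarity under the VSM with the cosine measure can be sketched as follows. Each term is represented by a vector of per-document weights; the toy terms and weights below are illustrative, not from the paper's Arabic collection:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length weight vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def build_thesaurus(term_doc, top_k=1):
    """term_doc: term -> per-document weight vector.
    For each term, return its top_k most cosine-similar other terms."""
    return {
        t: sorted((s for s in term_doc if s != t),
                  key=lambda s: cosine(term_doc[t], term_doc[s]),
                  reverse=True)[:top_k]
        for t in term_doc
    }
```

At query time, the terms most similar to each query term would be appended to the query, which is the expansion step the abstract describes.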

  18. Automatic modeling of the linguistic values for database fuzzy querying

    Directory of Open Access Journals (Sweden)

    Diana STEFANESCU

    2007-12-01

    Full Text Available In order to evaluate vague queries, each linguistic term is considered according to its fuzzy model. Usually, the linguistic terms are defined as fuzzy sets, during a classical knowledge acquisition off-line process. But they can also be automatically extracted from the actual content of the database, by an online process. In at least two situations, automatically modeling the linguistic values would be very useful: first, to simplify the knowledge engineer’s task by extracting the definitions from the database content; and second, where mandatory, to dynamically define the linguistic values in complex criteria queries evaluation. Procedures to automatically extract the fuzzy model of the linguistic values from the existing data are presented in this paper.
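One way to derive linguistic values automatically from stored data is to anchor simple triangular membership functions on the observed value range. The evenly spaced anchoring below is our simplifying assumption for illustration; the paper's online extraction procedure may differ:

```python
def triangular(a, b, c):
    """Membership function of a triangular fuzzy set with a <= b <= c."""
    def mu(x):
        if x <= a or x >= c:
            return 1.0 if x == b else 0.0
        if x <= b:
            return (x - a) / (b - a) if b > a else 1.0
        return (c - x) / (c - b) if c > b else 1.0
    return mu

def linguistic_terms_from_data(values, labels=("low", "medium", "high")):
    """Derive one triangular fuzzy set per label from the value range
    actually present in the database column."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / (len(labels) - 1)
    anchors = [lo + i * step for i in range(len(labels))]
    terms = {}
    for i, label in enumerate(labels):
        a = anchors[max(i - 1, 0)]          # left foot
        b = anchors[i]                      # peak
        c = anchors[min(i + 1, len(labels) - 1)]  # right foot
        terms[label] = triangular(a, b, c)
    return terms
```

A fuzzy query such as "salary is high" would then be evaluated by applying the extracted membership function for "high" to each row's value.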

  19. Comparison of the identification abilities of MicroflexTM MALDI-TOF MS and the Vitek 2 Compact automatic microbial analysis system for Enterobacteriaceae

    Institute of Scientific and Technical Information of China (English)

    刘瑛; 俞静; 陈峰; 刘婧娴; 李媛睿; 皇甫昱婵; 沈立松

    2015-01-01

    Objective: To compare the identification abilities of MicroflexTM matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) and the Vitek 2 Compact automatic microbial analysis system (Vitek 2 Compact) for Enterobacteriaceae. Methods: A total of 545 isolates of Enterobacteriaceae, comprising quality control strains and clinical isolates, were identified in parallel by MicroflexTM MALDI-TOF MS and Vitek 2 Compact. When the identification results of the two systems did not match, isolates were confirmed by 16S rDNA gene sequencing, and Salmonella was confirmed by serum agglutination testing. Results: MicroflexTM MALDI-TOF MS identified 97.1% of the 545 isolates to the species level and 2.9% to the genus level. Vitek 2 Compact identified 83.3%, 13.9% and 2.2% of the isolates to the species, group and genus levels, respectively, with a misidentification rate of 0.2% and a non-identification rate of 0.4%. Conclusions: The identification coincidence rate of MicroflexTM MALDI-TOF MS for Enterobacteriaceae is higher than that of Vitek 2 Compact; the method is fast, simple and low-cost, and can be used for routine rapid identification of Enterobacteriaceae in the clinical laboratory.

  20. Clothes Dryer Automatic Termination Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 2: Improved Sensor and Control Designs

    Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  1. Automatic Coarse Graining of Polymers

    OpenAIRE

    Faller, Roland

    2003-01-01

    Several recently proposed semi-automatic and fully automatic coarse-graining schemes for polymer simulations are discussed. All these techniques derive effective potentials for multi-atom units or super-atoms from atomistic simulations. These include techniques relying on single-chain simulations in vacuum and self-consistent optimizations from the melt, like the simplex method and the inverted Boltzmann method. The focus is on matching the polymer structure on different scales. Several ...

  2. Automatic Sarcasm Detection: A Survey

    OpenAIRE

    Joshi, Aditya; Bhattacharyya, Pushpak; Carman, Mark James

    2016-01-01

    Automatic sarcasm detection is the task of predicting sarcasm in text. This is a crucial step to sentiment analysis, considering prevalence and challenges of sarcasm in sentiment-bearing text. Beginning with an approach that used speech-based features, sarcasm detection has witnessed great interest from the sentiment analysis community. This paper is the first known compilation of past work in automatic sarcasm detection. We observe three milestones in the research so far: semi-supervised pat...

  3. Prospects for de-automatization.

    Science.gov (United States)

    Kihlstrom, John F

    2011-06-01

    Research by Raz and his associates has repeatedly found that suggestions for hypnotic agnosia, administered to highly hypnotizable subjects, reduce or even eliminate Stroop interference. The present paper sought unsuccessfully to extend these findings to negative priming in the Stroop task. Nevertheless, the reduction of Stroop interference has broad theoretical implications, both for our understanding of automaticity and for the prospect of de-automatizing cognition in meditation and other altered states of consciousness.

  4. The automatization of journalistic narrative

    Directory of Open Access Journals (Sweden)

    Naara Normande

    2013-06-01

    Full Text Available This paper proposes an initial discussion of the production of automatized journalistic narratives. Despite being a topic discussed on specialized sites and at international conferences in the communication area, the concepts are still underdeveloped in academic research. For this article, we studied the concepts of narrative, databases and algorithms, indicating a theoretical trend that explains these automatized journalistic narratives. As case studies, we use Los Angeles Times, Narrative Science and Automated Insights.

  5. Process automatization in system administration

    OpenAIRE

    Petauer, Janja

    2013-01-01

    The aim of the thesis is to present the automatization of user management in the company Studio Moderna. The company has grown exponentially in recent years, which is why we needed a faster, easier and cheaper way of managing user accounts. We automatized the processes of creating, changing and removing user accounts within Active Directory. We prepared a user interface inside an existing application, used JavaScript for drop-down menus, wrote a script in a scripting programming langu...

  6. Design and Implementation of Automatic Indexing for Information Retrieval with Arabic Documents.

    Science.gov (United States)

    Hmeidi, Ismail; Kanaan, Ghassan; Evens, Martha

    1997-01-01

    Describes automatic information retrieval system designed and built to handle Arabic data. Discusses cost-effectiveness of automatic indexing. Compares retrieval results using words as index terms versus stems and roots. Includes 19 tables; 60 queries using full words and relevance judgments are appended. (JAK)

  7. The Masked Semantic Priming Effect Is Task Dependent: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2015-01-01

    Semantic priming effects are popularly explained in terms of an automatic spreading activation process, according to which the activation of a node in a semantic network spreads automatically to interconnected nodes, preactivating a semantically related word. It is expected from this account that semantic priming effects should be routinely…

  8. Choosing Actuators for Automatic Control Systems of Thermal Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Gorbunov, A. I., E-mail: gor@tornado.nsk.ru [JSC “Tornado Modular Systems” (Russian Federation); Serdyukov, O. V. [Siberian Branch of the Russian Academy of Sciences, Institute of Automation and Electrometry (Russian Federation)

    2015-03-15

    Two types of actuators for automatic control systems of thermal power plants are analyzed: (i) pulse-controlled actuator and (ii) analog-controlled actuator with positioning function. The actuators are compared in terms of control circuit, control accuracy, reliability, and cost.

  9. Comparison of yeast identification ability between Bruker Microflex MALDI-TOF MS and Vitek 2 Compact automatic microbial analysis system

    Institute of Scientific and Technical Information of China (English)

    刘瑛; 俞静; 陈峰; 刘婧娴; 皇甫昱婵; 沈立松

    2015-01-01

    Objective: To evaluate and compare the yeast identification abilities of Bruker Microflex matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) and the Vitek 2 Compact automatic microbial analysis system. Methods: Retrospective study. A total of 742 yeast strains isolated from clinical specimens between March 2013 and March 2014 at Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, were identified by Bruker Microflex MALDI-TOF MS and the Vitek 2 Compact system simultaneously. Strains with discordant results were validated by gene sequencing. Results: The coincidence rates to the species level for the 699 Candida strains were 100.0% (699/699) for Bruker Microflex MALDI-TOF MS and 99.6% (696/699) for the Vitek 2 Compact system; for the 43 yeast-like fungal strains they were 90.7% (39/43) and 79.1% (34/43), respectively. Penicillium marneffei could not be identified by either instrument, but a protein profile of Penicillium marneffei was established by MALDI-TOF MS. Conclusions: The coincidence rate of yeast identification by Bruker Microflex MALDI-TOF MS is higher than that of the Vitek 2 Compact system. Using Bruker Microflex MALDI-TOF MS to identify yeasts, especially Candida and yeast-like fungi, is fast, simple, low-cost and accurate, and it can be used in the routine identification of common yeasts in the clinical microbiology laboratory.

  10. Exploring Behavioral Markers of Long-Term Physical Activity Maintenance: A Case Study of System Identification Modeling within a Behavioral Intervention

    Science.gov (United States)

    Hekler, Eric B.; Buman, Matthew P.; Poothakandiyil, Nikhil; Rivera, Daniel E.; Dzierzewski, Joseph M.; Aiken Morgan, Adrienne; McCrae, Christina S.; Roberts, Beverly L.; Marsiske, Michael; Giacobbi, Peter R., Jr.

    2013-01-01

    Efficacious interventions to promote long-term maintenance of physical activity are not well understood. Engineers have developed methods to create dynamical system models for modeling idiographic (i.e., within-person) relationships within systems. In behavioral research, dynamical systems modeling may assist in decomposing intervention effects…

  11. Child vocalization composition as discriminant information for automatic autism detection.

    Science.gov (United States)

    Xu, Dongxin; Gilkerson, Jill; Richards, Jeffrey; Yapanel, Umit; Gray, Sharmi

    2009-01-01

    Early identification is crucial for young children with autism to access early intervention. The existing screens require either a parent-report questionnaire and/or direct observation by a trained practitioner. Although an automatic tool would benefit parents, clinicians and children, there is no automatic screening tool in clinical use. This study reports a fully automatic mechanism for autism detection/screening for young children. This is a direct extension of the LENA (Language ENvironment Analysis) system, which utilizes speech signal processing technology to analyze and monitor a child's natural language environment and the vocalizations/speech of the child. It is discovered that child vocalization composition contains rich discriminant information for autism detection. By applying pattern recognition and machine learning approaches to child vocalization composition data, accuracy rates of 85% to 90% in cross-validation tests for autism detection have been achieved at the equal-error-rate (EER) point on a data set with 34 children with autism, 30 language delayed children and 76 typically developing children. Due to its easy and automatic procedure, it is believed that this new tool can serve a significant role in childhood autism screening, especially in regards to population-based or universal screening.

  12. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming focuses on the techniques of automatic programming used with digital computers. Topics covered range from the design of machine-independent programming languages to the use of recursive procedures in ALGOL 60. A multi-pass translation scheme for ALGOL 60 is described, along with some commercial source languages. The structure and use of the syntax-directed compiler is also considered.Comprised of 12 chapters, this volume begins with a discussion on the basic ideas involved in the description of a computing process as a program for a computer, expressed in

  13. Algorithms for skiascopy measurement automatization

    Science.gov (United States)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    Automatic dynamic infrared retinoscope was developed, which allows to run procedure at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate equipped with a long focus objective and 850 nm infrared light emitting diode as light source. Two servo motors driven by microprocessor control the rotation of semitransparent mirror and motion of retinoscope chassis. Image of eye pupil reflex is captured via software and analyzed along the horizontal plane. Algorithm for automatic accommodative state analysis is developed based on the intensity changes of the fundus reflex.

  14. Automatic Construction of Finite Algebras

    Institute of Scientific and Technical Information of China (English)

    张健

    1995-01-01

    This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for the automatic construction of finite algebras are described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.

  15. Automatic mapping of monitoring data

    DEFF Research Database (Denmark)

    Lophaven, Søren; Nielsen, Hans Bruun; Søndergaard, Jacob

    2005-01-01

    This paper presents an approach, based on universal kriging, for automatic mapping of monitoring data. The performance of the mapping approach is tested on two data-sets containing daily mean gamma dose rates in Germany reported by means of the national automatic monitoring network (IMIS). In the second dataset an accidental release of radioactivity into the environment was simulated in the south-western corner of the monitored area. The approach has a tendency to smooth the actual data values, and therefore it underestimates extreme values, as seen in the second dataset. However, it is capable...

  16. Early automatic detection of Parkinson's disease based on sleep recordings

    DEFF Research Database (Denmark)

    Kempfner, Jacob; Sorensen, Helge B D; Nikolic, Miki;

    2014-01-01

    SUMMARY: Idiopathic rapid-eye-movement (REM) sleep behavior disorder (iRBD) is most likely the earliest sign of Parkinson's Disease (PD) and is characterized by REM sleep without atonia (RSWA) and consequently increased muscle activity. However, some muscle twitching occurs in normal subjects during REM sleep. PURPOSE: There are no generally accepted methods for evaluation of this activity and a normal range has not been established. Consequently, there is a need for objective criteria. METHOD: In this study we propose a fully automatic method for detection of RSWA. REM sleep identification ... the number of outliers during REM sleep was used as a quantitative measure of muscle activity. RESULTS: The proposed method was able to automatically separate all iRBD test subjects from healthy elderly controls and subjects with periodic limb movement disorder. CONCLUSION: The proposed work is considered ...

  17. Automatic denoising of single-trial evoked potentials.

    Science.gov (United States)

    Ahmadi, Maryam; Quian Quiroga, Rodrigo

    2013-02-01

    We present an automatic denoising method based on the wavelet transform to obtain single trial evoked potentials. The method is based on the inter- and intra-scale variability of the wavelet coefficients and their deviations from baseline values. The performance of the method is tested with simulated event related potentials (ERPs) and with real visual and auditory ERPs. For the simulated data the presented method gives a significant improvement in the observation of single trial ERPs as well as in the estimation of their amplitudes and latencies, in comparison with a standard denoising technique (Donoho's thresholding) and in comparison with the noisy single trials. For the real data, the proposed method largely filters the spontaneous EEG activity, thus helping the identification of single trial visual and auditory ERPs. The proposed method provides a simple, automatic and fast tool that allows the study of single trial responses and their correlations with behavior.
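The decompose-threshold-reconstruct pattern behind wavelet denoising can be illustrated with a single-level Haar transform and soft thresholding. This is only a sketch of the general idea: the paper's method chooses thresholds automatically from inter- and intra-scale coefficient variability, which is not reproduced here:

```python
def haar_step(signal):
    """One level of the Haar transform: pairwise averages and details."""
    avg = [(signal[2 * i] + signal[2 * i + 1]) / 2 ** 0.5
           for i in range(len(signal) // 2)]
    det = [(signal[2 * i] - signal[2 * i + 1]) / 2 ** 0.5
           for i in range(len(signal) // 2)]
    return avg, det

def soft(x, t):
    """Soft thresholding: shrink toward zero by t, clip small values."""
    return (abs(x) - t) * (1 if x > 0 else -1) if abs(x) > t else 0.0

def denoise_haar(signal, threshold):
    """Decompose, soft-threshold the detail coefficients, reconstruct."""
    avg, det = haar_step(signal)
    det = [soft(d, threshold) for d in det]
    out = []
    for a, d in zip(avg, det):
        out.append((a + d) / 2 ** 0.5)
        out.append((a - d) / 2 ** 0.5)
    return out
```

With a zero threshold the reconstruction is exact; as the threshold grows, fast pairwise fluctuations (the noise model here) are suppressed while the slower trend survives.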

  18. Automatic classification of blank substrate defects

    Science.gov (United States)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since the mask will later act as a template for a considerable number of dies on the wafer. Defects on the initial blank substrate, and on subsequently cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and the separation of defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment inherent in the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask
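The decision-tree step can be illustrated with a toy classifier. The feature names, class codes, and thresholds below are invented for illustration and are not Calibre ADC's actual rules:

```python
# Hypothetical blank-defect record: size in microns, signal polarity in each
# review-image mode, and a signal-to-noise figure. Thresholds are illustrative.
def classify_defect(size_um, polarity_transmitted, polarity_reflected, snr):
    """Toy decision tree mapping review-image features to a class code."""
    if snr < 2.0:
        return "FALSE_DEFECT"          # indistinguishable from background noise
    if polarity_transmitted == "dark" and polarity_reflected == "dark":
        kind = "PARTICLE"              # opaque in both modes
    elif polarity_transmitted == "bright":
        kind = "PINHOLE"               # missing film transmits extra light
    else:
        kind = "COATING_ANOMALY"
    return ("CRITICAL_" if size_um >= 0.5 else "MINOR_") + kind

print(classify_defect(0.8, "dark", "dark", 9.0))    # -> CRITICAL_PARTICLE
print(classify_defect(0.2, "bright", "dark", 5.0))  # -> MINOR_PINHOLE
```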

  19. Automatic Induction of Rule Based Text Categorization

    Directory of Open Access Journals (Sweden)

    D.Maghesh Kumar

    2010-12-01

    Full Text Available The automated categorization of texts into predefined categories has witnessed a booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. This paper describes a novel method for the automatic induction of rule-based text classifiers. This method supports a hypothesis language of the form "if T1, … or Tn occurs in document d, and none of Tn+1, … Tn+m occurs in d, then classify d under category c," where each Ti is a conjunction of terms. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. Issues pertaining to three different problems, namely document representation, classifier construction, and classifier evaluation, were discussed in detail.
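The hypothesis language can be sketched directly in code. The rule set and category below are invented for illustration:

```python
# A rule is (category, positive terms, negative terms): a document matches if
# any positive term occurs and no negative term does, per the hypothesis
# language "if T1, ... or Tn occurs in d, and none of Tn+1, ... Tn+m occurs".
def term_occurs(term, words):
    return all(w in words for w in term)          # each term is a conjunction

def classify(document, rules):
    words = set(document.lower().split())
    for category, positive, negative in rules:
        if any(term_occurs(t, words) for t in positive) and \
           not any(term_occurs(t, words) for t in negative):
            return category
    return None

rules = [
    ("wheat", [("wheat",), ("grain", "export")], [("corn",)]),
]
print(classify("Wheat prices rose sharply", rules))      # -> wheat
print(classify("Corn and wheat harvest update", rules))  # -> None
```

The second document matches a positive term but is rejected by the negative term ("corn"), showing how induced rules can encode exceptions.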

  20. Identification and quantification of phytochelatins in roots of rice to long-term exposure: evidence of individual role on arsenic accumulation and translocation.

    Science.gov (United States)

    Batista, Bruno Lemos; Nigar, Meher; Mestrot, Adrien; Rocha, Bruno Alves; Barbosa Júnior, Fernando; Price, Adam H; Raab, Andrea; Feldmann, Jörg

    2014-04-01

    Rice has a predilection to take up arsenic in the form of methylated arsenic (o-As) and inorganic arsenic species (i-As). Plants defend themselves using i-As efflux systems and the production of phytochelatins (PCs) to complex i-As. Our study focused on the identification and quantification of phytochelatins by HPLC-ICP-MS/ESI-MS, relating them to several variables linked to As exposure. GSH, 11 PCs, and As-PC complexes from the roots of six rice cultivars (Italica Carolina, Dom Sofid, 9524, Kitrana 508, YRL-1, and Lemont) exposed to low and high levels of i-As were compared with total As, i-As, and o-As in roots, shoots, and grains. Only Dom Sofid, Kitrana 508, and 9524 were found to produce higher levels of PCs even when exposed to low levels of As. PCs were only correlated to i-As in the roots (r=0.884, P<0.001). However, significant negative correlations with the As transfer factors (TF) roots-grains (r=-0.739, P<0.05) and shoots-grains (r=-0.541, P<0.05) suggested that these peptides help trap i-As, but not o-As, in the roots, reducing i-As in grains. Italica Carolina reduced i-As in grains after high exposure, where some specific PCs had a special role in this reduction. In Lemont, exposure to elevated levels of i-As did not result in higher i-As levels in the grains, and there were no significant increases in PCs or thiols. Finally, the high production of PCs in Kitrana 508 and Dom Sofid in response to high As treatment did not relate to a reduction of i-As in grains, suggesting that other mechanisms, such as As-PC release and transport, seem to be important in determining grain As in these cultivars.

  1. Automatic quantification of iris color

    DEFF Research Database (Denmark)

    Christoffersen, S.; Harder, Stine; Andersen, J. D.;

    2012-01-01

    An automatic algorithm to quantify the eye colour and structural information from standard hi-resolution photos of the human iris has been developed. Initially, the major structures in the eye region are identified including the pupil, iris, sclera, and eyelashes. Based on this segmentation, the ...

  2. Trevi Park: Automatic Parking System

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    TreviPark is an underground, multi-story stacking system that holds cars efficiently, thus reducing the cost of each parking space, as a fully automatic parking system intended to maximize space utilization in parking structures. TreviPark costs less than the price of a conventional urban garage and takes up half the volume and 80% of the depth.

  3. Automatic agar tray inoculation device

    Science.gov (United States)

    Wilkins, J. R.; Mills, S. M.

    1972-01-01

    Automatic agar tray inoculation device is simple in design and foolproof in operation. It employs either conventional inoculating loop or cotton swab for uniform inoculation of agar media, and it allows technician to carry on with other activities while tray is being inoculated.

  4. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
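The idea of interval-based error propagation can be sketched with a minimal interval type (a toy stand-in, not the INTLAB toolbox); the measurement values below are illustrative:

```python
class Interval:
    """Minimal interval type for propagating worst-case error bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:g}, {self.hi:g}]"

# A measured resistance of 100 +/- 1 ohm carrying a current of 2 +/- 0.05 A:
R = Interval(99.0, 101.0)
I = Interval(1.95, 2.05)
P = I * I * R      # power P = I^2 * R, with bounds carried automatically
print(P)
```

Each arithmetic operation returns an enclosure of all possible results, so complicated formulas propagate measurement uncertainty without manual partial-derivative bookkeeping.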

  5. Automatic milking : a better understanding

    NARCIS (Netherlands)

    Meijering, A.; Hogeveen, H.; Koning, de C.J.A.M.

    2004-01-01

    In 2000 the book Robotic Milking, reflecting the proceedings of an International Symposium held in The Netherlands, came out. At that time, commercial introduction of automatic milking systems was no longer obstructed by technological inadequacies. Particularly in a few west-European countries...

  6. CRISPR Recognition Tool (CRT): a tool for automatic detection ofclustered regularly interspaced palindromic repeats

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Charles; Ramsey, Teresa L.; Sabree, Fareedah; Lowe,Micheal; Brown, Kyndall; Kyrpides, Nikos C.; Hugenholtz, Philip

    2007-05-01

    Clustered Regularly Interspaced Palindromic Repeats (CRISPRs) are a novel type of direct repeat found in a wide range of bacteria and archaea. CRISPRs are beginning to attract attention because of their proposed mechanism; that is, defending their hosts against invading extrachromosomal elements such as viruses. Existing repeat detection tools do a poor job of identifying CRISPRs due to the presence of unique spacer sequences separating the repeats. In this study, a new tool, CRT, is introduced that rapidly and accurately identifies CRISPRs in large DNA strings, such as genomes and metagenomes. CRT was compared to the CRISPR detection tools Patscan and Pilercr. In terms of correctness, CRT was shown to be very reliable, demonstrating significant improvements over Patscan for the measures precision, recall, and quality. When compared to Pilercr, CRT showed improved performance for recall and quality. In terms of speed, CRT also demonstrated superior performance, especially for genomes containing large numbers of repeats. In this paper a new tool was introduced for the automatic detection of CRISPR elements. This tool, CRT, was shown to be a significant improvement over current techniques for CRISPR identification. CRT's approach to detecting repetitive sequences is straightforward. It uses a simple sequential scan of a DNA sequence and detects repeats directly without any major conversion or preprocessing of the input. This leads to a program that is easy to describe and understand; yet it is very accurate, fast and memory efficient, being O(n) in space and O(nm/l) in time.
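The sequential-scan idea can be sketched as follows. This is a simplified illustration (exact-match repeats only, invented parameters), not CRT's actual algorithm:

```python
def find_crispr_like(seq, k=8, min_spacer=4, max_spacer=20, min_repeats=3):
    """Scan left to right; from each k-mer, chain exact re-occurrences whose
    gaps (spacers) fall in [min_spacer, max_spacer]. Returns (start, repeat, n)
    for each chain of at least min_repeats copies."""
    hits = []
    i, n = 0, len(seq)
    while i + k <= n:
        repeat, count, pos = seq[i:i + k], 1, i
        while True:
            lo = pos + k + min_spacer
            hi = min(pos + k + max_spacer, n - k)
            nxt = seq.find(repeat, lo, hi + k)   # next copy within spacer range
            if nxt == -1:
                break
            count, pos = count + 1, nxt
        if count >= min_repeats:
            hits.append((i, repeat, count))
            i = pos + k                          # skip past the repeat array
        else:
            i += 1
    return hits

rep, sp = "GTTTCCGT", ["ACGT", "TTGACA", "CCATG"]
seq = "AAAA" + rep + sp[0] + rep + sp[1] + rep + sp[2] + rep + "AAAA"
print(find_crispr_like(seq))   # -> [(4, 'GTTTCCGT', 4)]
```

Because the repeat is rediscovered by direct string search, no suffix structures or preprocessing are needed, which mirrors the O(n)-space character of the approach.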

  7. Strengthen the Supervision over Pharmaceuticals via Modern Automatic Identification

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Fake pharmaceuticals inflict severe harm on people's health through their circulation in markets. To strengthen supervision of the pharmaceutical market, China is improving and perfecting its national coding system in the field of pharmaceuticals. Both bar-code tags and IC tags are available to the coding system. This paper summarizes the significance of IC tags for the supervision of pharmaceuticals and outlines a general strategic prospect for pharmaceutical supervision.

  8. Automatic Identification System (AIS) Transmit Testing in Louisville Phase 2

    Science.gov (United States)

    2014-08-01

    Snippets from the report: "...also a concern – can these be modeled?" The chain-of-locks canal was mentioned as a key area, as was Lock 2 in Arkansas; it was recommended to start with the top ten. Abbreviations: time-division multiple access; LOMA, Lock Operations Management Application; LOS, line of sight; LPMS, Lock Performance Monitoring System; LTM, linked text... AIS message content covers hydrographic data, carriage of dangerous cargoes, safety and security zones, status of locks and Aids to Navigation (AtoNs), and other port/waterway information.

  9. Tracking Next Generation Automatic Identification Technology into 2035

    Science.gov (United States)

    2010-12-01

    Snippets from the report: radio architectures (LF, HF, UHF, very high frequency [VHF], etc.) and network architectures (Wi-Fi, ZigBee, ultra-wide band, mesh, ad hoc, cloud computing...). Citation fragments: Automatic Identification Technology Concept of Operations, 4-4; Silberglitt and Wong, Global Technology Revolution China, 77; ZigBee Alliance, "Awarepoint with ZigBee"; 2001, http://www.transcore.com/pdf/AIM%20shrouds_of_time.pdf; Legg, Gary, "ZigBee: Wireless Technology for Low-Power Sensor Networks," CommsDesign.

  10. Automatic identification and restriction of the cointegration space

    NARCIS (Netherlands)

    Omtzigt, P.H.

    2003-01-01

    We automate the process of finding the cointegration relations in a cointegrated VAR. There is a rigorous separation between the theory part (search directions must be defined, a final model chosen) and the automated search. The decision rules are set in such a way that a theoretical upper limit can

  11. Automatical identification of secondary craters with crater spatial distribution

    Science.gov (United States)

    Kinoshita, T.; Honda, C.; Hirata, N.; Morota, T.

    2013-12-01

    We can estimate relative and absolute ages of geological units on the lunar surface with crater counting. This method is called crater chronology and is based on the assumption that impact cratering occurs randomly across the surface. In contrast to these primary craters, secondary craters are impact craters formed by ejecta blocks, and they cluster. As a result of the clustering, secondary craters show a biased spatial distribution. For crater chronology, researchers have to exclude secondary craters and their regions from the surface image, which includes both primary and secondary craters, based on their own subjective views. We can identify most secondary craters by their unique shapes and spatial distribution. However, the secondary craters produced by high-velocity ejecta fragments are more circular and may be less clustered than adjacent secondary craters, and can therefore be difficult to distinguish from primary craters. It has thus been suggested that individual differences exist in the recognition of secondary craters. We propose an algorithm for evaluating the spatial distribution of craters in lunar images. We have developed two procedures, which evaluate the spatial distribution of craters either with the group-average method of hierarchical clustering or with a Voronoi diagram. In both procedures, we compare the evaluation of the observed spatial distribution of craters with the evaluation of an ideal random spatial distribution of craters. We demonstrated the algorithm for several regions on the lunar surface. As a result, almost all clustered secondary craters are identified quantitatively by our algorithm.
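Comparing an observed crater field against an ideal random distribution can be sketched with a mean nearest-neighbour statistic and Monte Carlo random fields. This is a simplified stand-in for the paper's group-average clustering and Voronoi procedures, run on synthetic data:

```python
import numpy as np

def mean_nn_dist(pts):
    """Mean distance from each point to its nearest neighbour."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def clustering_score(pts, trials=200, seed=1):
    """Fraction of ideal random (same-size, unit-square) point sets whose
    mean nearest-neighbour distance exceeds the observed one; values near
    1.0 indicate clustering, as expected for secondary-crater fields."""
    rng = np.random.default_rng(seed)
    obs = mean_nn_dist(pts)
    sims = [mean_nn_dist(rng.random(pts.shape)) for _ in range(trials)]
    return float(np.mean([s > obs for s in sims]))

rng = np.random.default_rng(42)
primaries = rng.random((40, 2))                       # spatially random field
centers = rng.random((8, 2))                          # 8 ejecta "sources"
secondaries = centers[np.arange(40) % 8] + 0.01 * rng.standard_normal((40, 2))
print(clustering_score(secondaries))                  # -> 1.0 (clustered)
print(clustering_score(primaries))                    # much lower for random
```

Clustered secondaries have far smaller nearest-neighbour distances than any random realization, which is the same contrast the paper's procedures quantify.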

  12. Automatic Identification of Nutritious Contexts for Learning Vocabulary Words

    Science.gov (United States)

    Mostow, Jack; Gates, Donna; Ellison, Ross; Goutam, Rahul

    2015-01-01

    Vocabulary knowledge is crucial to literacy development and academic success. Previous research has shown learning the meaning of a word requires encountering it in diverse informative contexts. In this work, we try to identify "nutritious" contexts for a word--contexts that help students build a rich mental representation of the word's…

  13. Automatic Identification of Inertial Sensors on the Human Body Segments

    NARCIS (Netherlands)

    Weenk, D.; Beijnum, van B.J.F.; Veltink, P.H.

    2011-01-01

    In the last few years, inertial sensors (accelerometers and gyroscopes) in combination with magnetic sensors were proven to be a suitable ambulatory alternative to traditional human motion tracking systems based on optical position measurements. While accurate full 6 degrees of freedom information is...

  14. Transmission technique of hybrid automatic repeat request based on frequency resource re-election in Long Term Evolution (LTE)

    Institute of Scientific and Technical Information of China (English)

    刘高华; 苏寒松; 张伟

    2012-01-01

    Concerning the optimization of resource blocks during the Hybrid Automatic Repeat reQuest (HARQ) process in the 3GPP Long Term Evolution (LTE) system, a new method of data retransmission based on regrouping of frequency resources was proposed. The frequency resources were grouped according to the bandwidth used, and the introduction of group-number judgement and frequency resource-block selection on top of the original fixed resources could obtain a higher diversity gain. With the constructed HARQ model incorporating Incremental Redundancy (IR) and Chase combining techniques, the simulation results indicate that the improved method enhances the Block Error Ratio (BLER) performance by about 1.2 dB under poor channel conditions and reduces the average number of retransmissions.
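The group-selection idea can be sketched as follows. The SINR values and group size are invented for illustration, and this is not the paper's full HARQ model:

```python
# Hypothetical sketch: resource blocks are split into groups; on a HARQ
# retransmission the sender re-selects the group with the best current channel
# quality instead of reusing the originally assigned blocks.
def best_group(channel_quality, group_size):
    groups = [channel_quality[i:i + group_size]
              for i in range(0, len(channel_quality), group_size)]
    avg = [sum(g) / len(g) for g in groups]
    best = max(range(len(groups)), key=avg.__getitem__)
    return best, list(range(best * group_size, best * group_size + group_size))

# Per-resource-block SINR estimates (dB) for 12 blocks, grouped in fours:
sinr = [3.1, 2.8, 2.5, 3.0, 9.2, 8.7, 9.9, 9.0, 5.1, 4.8, 5.5, 5.0]
group, blocks = best_group(sinr, 4)
print(group, blocks)   # -> 1 [4, 5, 6, 7]
```

Retransmitting on the currently best group rather than the original allocation is what yields the frequency-diversity gain the abstract describes.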

  15. Identification of Typical Left Bundle Branch Block Contraction by Strain Echocardiography Is Additive to Electrocardiography in Prediction of Long-Term Outcome After Cardiac Resynchronization Therapy

    DEFF Research Database (Denmark)

    Risum, Niels; Tayal, Bhupendar; Hansen, Thomas F;

    2015-01-01

    BACKGROUND: Current guidelines suggest that patients with left bundle branch block (LBBB) be treated with cardiac resynchronization therapy (CRT); however, one-third do not have a significant activation delay, which can result in nonresponse. By identifying characteristic opposing wall contraction, 2-dimensional strain echocardiography (2DSE) may detect true LBBB activation. OBJECTIVES: This study sought to investigate whether the absence of a typical LBBB mechanical activation pattern by 2DSE was associated with unfavorable long-term outcome and if this is additive to electrocardiographic (ECG) morphology and duration. METHODS: From 2 centers, 208 CRT candidates (New York Heart Association classes II to IV, ejection fraction ≤35%, QRS duration ≥120 ms) with LBBB by ECG were prospectively included. Before CRT implantation, longitudinal strain in the apical 4-chamber view determined...

  16. Identification of the growth hormone-releasing hormone analogue [Pro1, Val14]-hGHRH with an incomplete C-term amidation in a confiscated product.

    Science.gov (United States)

    Esposito, Simone; Deventer, Koen; Van Eenoo, Peter

    2014-01-01

    In this work, a modified version of the 44-amino-acid human growth hormone-releasing hormone (hGHRH(1-44)) containing an N-terminal proline extension, a valine residue in position 14, and a C-terminal amidation (sequence: PYADAIFTNSYRKVVLGQLSARKLLQDIMSRQQGESNQERGARARL-NH2) has been identified in a confiscated product by liquid chromatography-high resolution mass spectrometry (LC-HRMS). Investigation of the product also suggests an incomplete C-terminal amidation. Similarly to other hGHRH analogues available on black markets, this peptide can potentially be used as a performance-enhancing drug due to its growth hormone releasing activity, and it should therefore be considered a prohibited substance in sport. Additionally, the presence of the partially amidated molecule reveals the poor pharmaceutical quality of the preparation, an aspect which is also a major concern for public health.

  17. Long-term high frequency measurements of ethane, benzene and methyl chloride at Ragged Point, Barbados: Identification of long-range transport events

    Directory of Open Access Journals (Sweden)

    A.T. Archibald

    2015-09-01

    Full Text Available Here we present high frequency long-term observations of ethane, benzene and methyl chloride from the AGAGE Ragged Point, Barbados, monitoring station, made using a custom-built GC-MS system. Our analysis focuses on the first three years of data (2005–2007) and on the interpretation of periodic episodes of high concentrations of these compounds. We focus specifically on an exemplar episode during September 2007 to assess whether these measurements are impacted by long-range transport of biomass burning and biogenic emissions. We use the Lagrangian Particle Dispersion model, NAME, run forwards and backwards in time, to identify transport of air masses from the north-east of Brazil during these events. To assess whether biomass burning was the cause, we used hot spots detected by the MODIS instrument to act as point sources for simulating the release of biomass burning plumes. Excellent agreement between the arrival time of the simulated biomass burning plumes and the observed enhancements in the trace gases indicates that biomass burning strongly influenced these measurements. These modelling data were then used to determine the emissions required to match the observations, which were compared with bottom-up estimates based on burnt area and literature emission factors. Good agreement was found between the two techniques, highlighting the important role of biomass burning. The modelling constrained by in situ observations suggests that the emission factors were at their known upper limits, with the in situ data suggesting slightly greater emissions of ethane than the literature emission factors account for. Further analysis concluded that biogenic emissions of methyl chloride from South America play only a small role in the measurements at Ragged Point. These results highlight the importance of long-term high frequency measurements of NMHC and ODS and how these data can be used to determine sources of emissions.

  18. Identification of several cytoplasmic HSP70 genes from the Mediterranean mussel (Mytilus galloprovincialis) and their long-term evolution in Mollusca and Metazoa.

    Science.gov (United States)

    Kourtidis, Antonis; Drosopoulou, Elena; Nikolaidis, Nikolas; Hatzi, Vasiliki I; Chintiroglou, Chariton C; Scouras, Zacharias G

    2006-04-01

    The HSP70 protein family constitutes one of the most conserved and important systems for cellular homeostasis under both stress and physiological conditions. The genes of this family are poorly studied in Mollusca, which is the second largest metazoan phylum. To study these genes in Mollusca, we have isolated and identified five HSP70 genes from Mytilus galloprovincialis (Mediterranean mussel) and investigated their short-term evolution within Mollusca and their long-term evolution within Metazoa. Both sequence and phylogenetic analyses suggested that the isolated genes belong to the cytoplasmic (CYT) group of HSP70 genes. Two of these genes probably represent cognates, whereas the remaining ones probably represent heat-inducible genes. Phylogenetic analysis including several molluscan CYT HSP70s reveals that the cognate genes in two species have very similar sequences and form intraspecies phylogenetic clades, unlike most metazoan cognate genes studied thus far, implying either recent gene duplications or concerted evolution. The M. galloprovincialis heat-inducible genes show intraspecies phylogenetic clustering, which in combination with the higher amino acid than nucleotide identity suggests that both gene conversion and purifying selection are responsible for their sequence homogenization. Phylogenetic analysis including several metazoan HSP70s suggests that at least two types of CYT genes were present in the common ancestor of vertebrates and invertebrates, the first giving rise to the heat-inducible genes of invertebrates, and the other to both the heat-inducible genes of vertebrates and the cognate genes of all metazoans. These analyses also suggest that inducible and cognate genes seem to undergo divergent evolution.

  19. Bilirubin nomograms for identification of neonatal hyperbilirubinemia in healthy term and late-preterm infants: a systematic review and meta-analysis

    Institute of Scientific and Technical Information of China (English)

    Zhang-Bin Yu; Shu-Ping Han; Chao Chen

    2014-01-01

    Background: Hyperbilirubinemia occurs in most healthy term and late-preterm infants, and must be monitored to identify those who might develop severe hyperbilirubinemia. Total serum bilirubin (TSB) or transcutaneous bilirubin (TcB) nomograms have been developed and validated to identify neonatal hyperbilirubinemia. This study aimed to review previously published studies and compare the TcB nomograms with the TSB nomogram, and to determine if the former has the same predictive value for significant hyperbilirubinemia as the TSB nomogram does. Methods: A predefined search strategy and inclusion criteria were set up. We selected studies assessing the predictive ability of TSB/TcB nomograms to identify significant hyperbilirubinemia in healthy term and late-preterm infants. Two independent reviewers assessed the quality and extracted the data from the included studies. Meta-Disc 1.4 analysis software was used to calculate the pooled sensitivity, specificity, and positive likelihood ratio of TcB/TSB nomograms. A pooled summary of the receiver operating characteristic of the TcB/TSB nomograms was created. Results: After screening 187 publications from electronic database searches and reference lists of eligible articles, we included 14 studies in the systematic review and meta-analysis. Eleven studies were of medium methodological quality. The remaining three studies were of low methodological quality. Seven studies evaluated the TcB nomograms, and seven studies assessed TSB nomograms. There were no differences between the predictive abilities of the TSB and TcB nomograms (the pooled area under the curve was 0.819 vs. 0.817). Conclusions: This study showed that TcB nomograms had the same predictive value as TSB nomograms, both of which could be used to identify subsequent significant hyperbilirubinemia. But this result should be interpreted cautiously because some methodological limitations of the included studies were identified in this review.
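Pooling sensitivity and specificity across studies can be illustrated as below. Note that this naive pooling of 2x2 counts differs from the random-effects models Meta-Disc actually fits, and the study counts are invented:

```python
# Toy per-study 2x2 counts (TP, FP, FN, TN); values are illustrative, not
# taken from the reviewed studies.
studies = [(45, 10, 5, 140), (30, 8, 6, 110), (60, 15, 10, 200)]

tp = sum(s[0] for s in studies)
fp = sum(s[1] for s in studies)
fn = sum(s[2] for s in studies)
tn = sum(s[3] for s in studies)

sensitivity = tp / (tp + fn)            # pooled across studies
specificity = tn / (tn + fp)
positive_lr = sensitivity / (1 - specificity)
print(round(sensitivity, 3), round(specificity, 3), round(positive_lr, 2))
# -> 0.865 0.932 12.67
```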

  20. Automatic female dehumanization across the menstrual cycle.

    Science.gov (United States)

    Piccoli, Valentina; Fantoni, Carlo; Foroni, Francesco; Bianchi, Mauro; Carnaghi, Andrea

    2016-11-30

    In this study, we investigate whether hormonal shifts during the menstrual cycle contribute to the dehumanization of other women and men. Female participants with different levels of likelihood of conception (LoC) completed a semantic priming paradigm in a lexical decision task. When the word 'woman' was the prime, animal words were more accessible in high versus low LoC whereas human words were more inhibited in the high versus low LoC. When the word 'man' was used as the prime, no difference was found in terms of accessibility between high and low LoC for either animal or human words. These results show that the female dehumanization is automatically elicited by menstrual cycle-related processes and likely associated with an enhanced activation of mate-attraction goals.

  1. Automatic Queuing Model for Banking Applications

    Directory of Open Access Journals (Sweden)

    Dr. Ahmed S. A. AL-Jumaily

    2011-08-01

    Full Text Available Queuing is the process of moving customers in a specific sequence to a specific service according to the customer need. The term scheduling stands for the process of computing a schedule; this may be done by a queuing-based scheduler. This paper focuses on bank queuing systems, the different queuing algorithms that are used in banks to serve customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing bank queues that can analyse the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the testing results and the factor of the average waiting time. The main innovation of this work is that the average waiting time is modeled and taken into account in processing, together with switching to the scheduling algorithm that gives the best average waiting time.
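The switching criterion (pick the scheduling algorithm with the lowest average waiting time) can be sketched as follows; the two candidate schedulers and the service times are illustrative, not the paper's exact algorithm set:

```python
import statistics

def avg_wait(service_times, order):
    """Average waiting time when customers are served in the given order."""
    t, waits = 0, []
    for i in order:
        waits.append(t)
        t += service_times[i]
    return statistics.mean(waits)

def pick_scheduler(service_times):
    """Switch between FCFS and shortest-job-first based on which yields
    the lower average waiting time (the factor used for switching)."""
    fcfs = list(range(len(service_times)))
    sjf = sorted(fcfs, key=lambda i: service_times[i])
    candidates = {"FCFS": fcfs, "SJF": sjf}
    return min(candidates, key=lambda k: avg_wait(service_times, candidates[k]))

jobs = [8, 1, 2, 5]           # estimated service times per customer
print(pick_scheduler(jobs))   # -> SJF
```

Serving short jobs first minimizes the average wait here (3 vs. 7 time units under FCFS), so the model switches to SJF for this queue state.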

  2. Automatic feed system for ultrasonic machining

    Science.gov (United States)

    Calkins, Noel C.

    1994-01-01

    Method and apparatus for ultrasonic machining in which feeding of a tool assembly holding a machining tool toward a workpiece is accomplished automatically. In ultrasonic machining, a tool located just above a workpiece and vibrating in a vertical direction imparts vertical movement to particles of abrasive material which then remove material from the workpiece. The tool does not contact the workpiece. Apparatus for moving the tool assembly vertically is provided such that it operates with a relatively small amount of friction. Adjustable counterbalance means is provided which allows the tool to be immobilized in its vertical travel. A downward force, termed overbalance force, is applied to the tool assembly. The overbalance force causes the tool to move toward the workpiece as material is removed from the workpiece.

  3. Human-competitive automatic topic indexing

    CERN Document Server

    Medelyan, Olena

    2009-01-01

    Topic indexing is the task of identifying the main topics covered by a document. These are useful for many purposes: as subject headings in libraries, as keywords in academic publications and as tags on the web. Knowing a document’s topics helps people judge its relevance quickly. However, assigning topics manually is labor intensive. This thesis shows how to generate them automatically in a way that competes with human performance. Three kinds of indexing are investigated: term assignment, a task commonly performed by librarians, who select topics from a controlled vocabulary; tagging, a popular activity of web users, who choose topics freely; and a new method of keyphrase extraction, where topics are equated to Wikipedia article names. A general two-stage algorithm is introduced that first selects candidate topics and then ranks them by significance based on their properties. These properties draw on statistical, semantic, domain-specific and encyclopedic knowledge. They are combined using a machine learn...
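The general two-stage algorithm (candidate selection, then ranking by significance properties) can be sketched minimally. The scoring used here is a toy frequency heuristic, not the thesis's learned combination of statistical, semantic and encyclopedic features:

```python
import re
from collections import Counter

STOP = {"the", "of", "a", "an", "and", "to", "in", "is", "are", "for", "on"}

def candidates(text, max_len=2):
    """Stage 1: candidate topics = stopword-free word n-grams."""
    words = re.findall(r"[a-z]+", text.lower())
    cands = []
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            gram = words[i:i + n]
            if not (set(gram) & STOP):
                cands.append(" ".join(gram))
    return cands

def rank_topics(text, k=3):
    """Stage 2: rank candidates by a simple significance score
    (frequency weighted by phrase length), keeping repeated ones."""
    counts = Counter(candidates(text))
    scored = {c: n * len(c.split()) for c, n in counts.items() if n > 1}
    return [c for c, _ in sorted(scored.items(), key=lambda kv: -kv[1])][:k]

doc = ("Topic indexing assigns topics to a document. Automatic topic "
       "indexing competes with manual topic indexing by librarians.")
print(rank_topics(doc))   # -> ['topic indexing', 'topic', 'indexing']
```

Real systems replace the toy score with features learned from training data, but the candidate-then-rank structure is the same.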

  4. Semi-automatic removal of foreground stars from images of galaxies

    CERN Document Server

    Frei, Z

    1996-01-01

    A new procedure, designed to remove foreground stars from galaxy profiles, is presented. Although several programs exist for stellar and faint-object photometry, none of them treat star removal from the images very carefully. I present my attempt to develop such a system, and briefly compare the performance of my software to one of the well-known stellar photometry packages, DAOPhot. Major steps in my procedure are: (1) automatic construction of an empirical 2D point spread function from well separated stars that are situated off the galaxy; (2) automatic identification of those peaks that are likely to be foreground stars, scaling the PSF and removing these stars, and patching residuals (in the automatically determined smallest possible area where residuals are truly significant); and (3) cosmetic fixing of remaining degradations in the image. The algorithm and software presented here are significantly better for automatic removal of foreground stars from images of galaxies than DAOPhot or similar packages, since...
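Step (2), scaling an empirical PSF at a detected peak and subtracting it, can be sketched on synthetic data with a least-squares amplitude fit. The galaxy model, PSF, and tolerances below are illustrative, not the paper's implementation:

```python
import numpy as np

def remove_star(image, psf, peak_y, peak_x):
    """Least-squares scale of an empirical PSF stamp at a detected peak,
    then subtraction of the scaled stamp from a copy of the image."""
    h = psf.shape[0] // 2                       # psf is an odd-sized stamp
    patch = image[peak_y - h:peak_y + h + 1, peak_x - h:peak_x + h + 1]
    scale = (patch * psf).sum() / (psf ** 2).sum()   # best-fit amplitude
    cleaned = image.copy()
    cleaned[peak_y - h:peak_y + h + 1, peak_x - h:peak_x + h + 1] -= scale * psf
    return cleaned, scale

# Synthetic example: smooth "galaxy" gradient plus a star with a known PSF.
yy, xx = np.mgrid[0:64, 0:64]
galaxy = 0.01 * (64 - np.hypot(yy - 32, xx - 32))
py, px = np.mgrid[-3:4, -3:4]
psf = np.exp(-(py ** 2 + px ** 2) / 2.0)
image = galaxy.copy()
image[17:24, 40:47] += 50.0 * psf               # star of amplitude 50 at (20, 43)
cleaned, scale = remove_star(image, psf, 20, 43)
print(abs(scale - 50.0) < 2.0, abs(cleaned[20, 43] - galaxy[20, 43]) < 2.0)
# -> True True
```

The small residual bias in the fitted amplitude comes from the galaxy background under the stamp; a production tool would fit and subtract a local background first.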

  5. An Automatic Proof of Euler's Formula

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2005-05-01

    Full Text Available In this information age, everything is digitalized. The encoding of functions and the automatic proof of functions are important. This paper discusses the automatic calculation of Taylor expansion coefficients; as an example, the method is applied to prove Euler's formula automatically.
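The automatic calculation of Taylor coefficients and a numerical check of Euler's formula can be sketched as follows (a simple truncated-series evaluation, not the paper's proof system):

```python
import cmath  # noqa: F401  (complex math; 1j literals suffice here)
import math

def taylor_coefficients_exp(n):
    """Taylor coefficients of exp(z) about 0: c_k = 1/k!."""
    return [1.0 / math.factorial(k) for k in range(n + 1)]

def exp_via_taylor(z, n=30):
    """Evaluate exp(z) from its truncated Taylor series."""
    return sum(c * z ** k for k, c in enumerate(taylor_coefficients_exp(n)))

# Euler's formula: exp(i*x) = cos(x) + i*sin(x); check it numerically.
x = 1.2345
lhs = exp_via_taylor(1j * x)
rhs = complex(math.cos(x), math.sin(x))
print(abs(lhs - rhs) < 1e-12)   # -> True
```

With 30 terms the truncation error for |x| near 1 is far below floating-point precision, so the series evaluation and the trigonometric form agree to machine accuracy.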

  6. Self-Compassion and Automatic Thoughts

    Science.gov (United States)

    Akin, Ahmet

    2012-01-01

    The aim of this research is to examine the relationships between self-compassion and automatic thoughts. Participants were 299 university students. In this study, the Self-compassion Scale and the Automatic Thoughts Questionnaire were used. The relationships between self-compassion and automatic thoughts were examined using correlation analysis…

  7. Automatic Control System for Neutron Laboratory Safety

    Institute of Scientific and Technical Information of China (English)

    ZHAO Xiao; ZHANG Guo-guang; FENG Shu-qiang; SU Dan; YANG Guo-zhao; ZHANG Shuai

    2015-01-01

    In order to cooperate with the neutron generator experiment and realize automatic control in the experiment, an automatic control system for neutron laboratory safety was designed. The system block diagram is shown in Fig. 1. The automatic control device processes switch signals, so a PLC is selected as the core component

  8. De Novo Transcriptome Assembly and Identification of Gene Candidates for Rapid Evolution of Soil Al Tolerance in Anthoxanthum odoratum at the Long-Term Park Grass Experiment.

    Directory of Open Access Journals (Sweden)

    Billie Gould

    Full Text Available Studies of adaptation in the wild grass Anthoxanthum odoratum at the Park Grass Experiment (PGE) provided one of the earliest examples of rapid evolution in plants. Anthoxanthum has become locally adapted to differences in soil Al toxicity, which have developed there due to soil acidification from long-term experimental fertilizer treatments. In this study, we used transcriptome sequencing to identify Al stress responsive genes in Anthoxanthum and to identify candidates among them for further molecular study of rapid Al tolerance evolution at the PGE. We examined the Al content of Anthoxanthum tissues and conducted RNA sequencing of root tips, the primary site of Al-induced damage. We found that despite its high tolerance, Anthoxanthum is not an Al-accumulating species. Genes similar to those involved in organic acid exudation (TaALMT1, ZmMATE), cell wall modification (OsSTAR1), and internal Al detoxification (OsNRAT1) in cultivated grasses were responsive to Al exposure. Expression of a large suite of novel loci was also triggered by early exposure to Al stress in roots. Three hundred forty-five transcripts were significantly more up- or down-regulated in tolerant vs. sensitive Anthoxanthum genotypes, providing important targets for future study of rapid evolution at the PGE.

  9. Image simulation for automatic license plate recognition

    Science.gov (United States)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.

  10. Automatically Determining Scale Within Unstructured Point Clouds

    Science.gov (United States)

    Kadamen, Jayren; Sithole, George

    2016-06-01

    Three dimensional models obtained from imagery have an arbitrary scale and therefore have to be scaled. Automatically scaling these models requires the detection of objects in them, which can be computationally intensive. Real-time object detection may pose problems for applications such as indoor navigation. This investigation poses the idea that relational cues, specifically height ratios, within indoor environments may offer an easier means to obtain scales for models created from imagery. The investigation aimed to show two things: (a) that the size of objects, especially their height off the ground, is consistent within an environment, and (b) that based on this consistency, objects can be identified and their general size used to scale a model. To test the idea, a hypothesis is first tested on a terrestrial lidar scan of an indoor environment. Later, as a proof of concept, the same test is applied to a model created from imagery. The most notable finding was that the detection of objects can be more readily done by studying the ratio between the dimensions of objects whose dimensions are defined by human physiology. For example, the dimensions of desks and chairs are related to the height of an average person. In the test, the differences between generalised and actual dimensions of objects were assessed. A maximum difference of 3.96% (2.93 cm) was observed from automated scaling. By analysing the ratio between the heights (distance from the floor) of the tops of objects in a room, identification was also achieved.
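    The ratio-based scaling idea reduces to simple arithmetic once one object with a physiologically constrained height is identified. The numbers below (a desk top at 0.72 m, heights in model units) are invented for illustration.

```python
# Hypothetical scenario: a photogrammetric model is in arbitrary units, and a
# desk top detected in it sits 1.8 model units above the floor. Desks are
# built for human physiology at roughly 0.72 m, which fixes the scale.
DESK_HEIGHT_M = 0.72              # assumed generalised desk height

def scale_from_object(model_height_units, real_height_m=DESK_HEIGHT_M):
    """Metres per model unit, from one object of known real-world height."""
    return real_height_m / model_height_units

scale = scale_from_object(1.8)            # -> 0.4 m per model unit
door_model_units = 5.0
print(round(door_model_units * scale, 2)) # -> 2.0, a plausible door height in m
```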

  11. SPHOTOM - Package for an Automatic Multicolour Photometry

    Science.gov (United States)

    Parimucha, Š.; Vaňko, M.; Mikloš, P.

    2012-04-01

    We present basic information about the package SPHOTOM for automatic multicolour photometry. This package is in development for the creation of a photometric pipeline, which we plan to use in the near future with our new instruments. It can operate in two independent modes: (i) a GUI mode, in which the user can select images and control functions of the package through an interface, and (ii) a command-line mode, in which all processes are controlled using a main parameter file. SPHOTOM is developed as a universal package for Linux-based systems with easy implementation for different observatories. The photometric part of the package is based on the SExtractor code, which allows us to detect all objects on the images and perform their photometry with different apertures. We can also perform astrometric solutions for all images for a correct cross-identification of the stars on the images. The result is a catalogue of all objects with their instrumental photometric measurements, which are consequently used for differential magnitude calculations with one or more comparison stars, transformation to an international system, and determination of colour indices.
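    The differential-magnitude step of such a pipeline is a one-line formula: the magnitude difference between target and comparison star follows from their instrumental fluxes. The flux values below are made up; SPHOTOM's actual implementation may differ.

```python
import math

def diff_mag(flux_target, flux_comparison):
    """Differential magnitude from instrumental fluxes: -2.5 log10(F_t / F_c)."""
    return -2.5 * math.log10(flux_target / flux_comparison)

# A target a quarter as bright as the comparison star is ~1.5 mag fainter.
print(round(diff_mag(2500.0, 10000.0), 3))   # -> 1.505
print(diff_mag(10000.0, 10000.0))            # equal fluxes -> 0.0
```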

  12. Automatic Schema Evolution in Root

    Institute of Scientific and Technical Information of China (English)

    ReneBrun; FonsRademakers

    2001-01-01

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition, this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and its objects inspected, even when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case when multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file.

  13. Automatic spikes detection in seismogram

    Institute of Scientific and Technical Information of China (English)

    王海军; 靳平; 刘贵忠

    2003-01-01

    Data processing for a seismic network is complex and laborious, because a large amount of data is recorded in the network every day, which makes it impossible to process the data entirely by manual work. Therefore, seismic data should be processed automatically to produce initial results for event detection and location. Afterwards, these results are reviewed and modified by an analyst. In automatic processing, data quality checking is important. There are three main types of problem data in real seismic records: spikes, repeated data and dropouts. A spike is defined as an isolated large-amplitude point; the other two problem types share the feature that the amplitudes of sample points are uniform within an interval. In data quality checking, the first step is to detect and count problem data in a data segment; if the fraction of problem data exceeds a threshold, the whole segment is masked and excluded from later processing.
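    The spike definition above (an isolated large-amplitude point) can be sketched as a simple detector: flag samples whose robust z-score exceeds a threshold while both neighbours stay below it. The threshold and the MAD-based scale are illustrative choices, not the authors' exact criterion.

```python
import numpy as np

def find_spikes(x, threshold=8.0):
    """Flag isolated large-amplitude samples.

    A sample is a spike if it deviates from the series median by more than
    `threshold` robust standard deviations (median-absolute-deviation based)
    while both neighbours stay below the threshold, i.e. the excursion is a
    single point.
    """
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1.0       # avoid division by zero
    z = np.abs(x - med) / (1.4826 * mad)
    spikes = []
    for i in range(1, len(x) - 1):
        if z[i] > threshold and z[i - 1] <= threshold and z[i + 1] <= threshold:
            spikes.append(i)
    return spikes

trace = [0, 1, -1, 0, 50, 0, 1, -1, 0, 0]   # one spike at index 4
print(find_spikes(trace))                    # -> [4]
```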

  14. Physics of Automatic Target Recognition

    CERN Document Server

    Sadjadi, Firooz

    2007-01-01

    Physics of Automatic Target Recognition addresses the fundamental physical bases of sensing, and information extraction in the state-of-the art automatic target recognition field. It explores both passive and active multispectral sensing, polarimetric diversity, complex signature exploitation, sensor and processing adaptation, transformation of electromagnetic and acoustic waves in their interactions with targets, background clutter, transmission media, and sensing elements. The general inverse scattering, and advanced signal processing techniques and scientific evaluation methodologies being used in this multi disciplinary field will be part of this exposition. The issues of modeling of target signatures in various spectral modalities, LADAR, IR, SAR, high resolution radar, acoustic, seismic, visible, hyperspectral, in diverse geometric aspects will be addressed. The methods for signal processing and classification will cover concepts such as sensor adaptive and artificial neural networks, time reversal filt...

  15. Automatic design of magazine covers

    Science.gov (United States)

    Jahanian, Ali; Liu, Jerry; Tretter, Daniel R.; Lin, Qian; Damera-Venkata, Niranjan; O'Brien-Strain, Eamonn; Lee, Seungyon; Fan, Jian; Allebach, Jan P.

    2012-03-01

    In this paper, we propose a system for automatic design of magazine covers that quantifies a number of concepts from art and aesthetics. Our solution to automatic design of this type of media has been shaped by input from professional designers, magazine art directors and editorial boards, and journalists. Consequently, a number of principles in design and rules in designing magazine covers are delineated. Several techniques are derived and employed in order to quantify and implement these principles and rules in the format of a software framework. At this stage, our framework divides the task of design into three main modules: layout of magazine cover elements, choice of color for masthead and cover lines, and typography of cover lines. Feedback from professional designers on our designs suggests that our results are congruent with their intuition.

  16. An efficient scheme for automatic web pages categorization using the support vector machine

    Science.gov (United States)

    Bhalla, Vinod Kumar; Kumar, Neeraj

    2016-07-01

    In the past few years, with an evolution of the Internet and related technologies, the number of the Internet users grows exponentially. These users demand access to relevant web pages from the Internet within fraction of seconds. To achieve this goal, there is a requirement of an efficient categorization of web page contents. Manual categorization of these billions of web pages to achieve high accuracy is a challenging task. Most of the existing techniques reported in the literature are semi-automatic. Using these techniques, higher level of accuracy cannot be achieved. To achieve these goals, this paper proposes an automatic web pages categorization into the domain category. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, first extraction and evaluation of features are done followed by filtering the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on the collection of domain-specific keyword list developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of ids of keywords in keyword list. Also, stemming of keywords and tag text is done to achieve a higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis using support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy in different categories of web pages.
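    A toy version of the keyword-feature pipeline, with invented keyword lists and a hand-set linear scorer standing in for the paper's HTML-DOM feature extractor and SVM classifier:

```python
# Hypothetical domain keyword lists; the paper builds these from collections
# of domain pages and feeds weighted features to an SVM instead of the
# simple count-and-argmax scorer used here.
DOMAIN_KEYWORDS = {
    "sports":  {"match", "score", "team", "league"},
    "finance": {"stock", "market", "shares", "profit"},
}

def features(text):
    """Per-domain keyword counts extracted from page text."""
    words = text.lower().split()
    return {domain: sum(words.count(k) for k in kws)
            for domain, kws in DOMAIN_KEYWORDS.items()}

def categorize(text):
    """Assign the domain with the highest keyword-feature score."""
    f = features(text)
    return max(f, key=f.get)

print(categorize("The team won the match with a record score"))   # -> sports
print(categorize("stock market profit rose sharply"))             # -> finance
```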

  17. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim

    1997-01-01

    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The used strategy is a combination of two methods: the maximum correlation coefficient and the correlation in the subpixel range. F...... interactive software is also part of a computer-assisted learning program on digital photogrammetry....
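    The maximum-correlation-coefficient step can be sketched as an exhaustive normalized cross-correlation search at integer pixel positions; the sub-pixel refinement described in the paper is omitted. The image and template below are synthetic.

```python
import numpy as np

def ncc(patch, template):
    """Normalized correlation coefficient between an image patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum()) + 1e-12  # guard flat patches
    return float((p * t).sum() / denom)

def locate(image, template):
    """Integer-pixel target position: the patch maximizing the correlation."""
    th, tw = template.shape
    best, pos = -2.0, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            c = ncc(image[y:y + th, x:x + tw], template)
            if c > best:
                best, pos = c, (y, x)
    return pos

target = np.array([[0., 9., 0.],
                   [9., 9., 9.],
                   [0., 9., 0.]])
img = np.zeros((8, 8))
img[3:6, 2:5] = target                 # place the target at (3, 2)
print(locate(img, target))             # -> (3, 2)
```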

  18. Automatically-Programed Machine Tools

    Science.gov (United States)

    Purves, L.; Clerman, N.

    1985-01-01

    Software produces cutter-location files for numerically controlled machine tools. APT, an acronym for Automatically Programmed Tools, is among the most widely used software systems for computerized machine tools. APT was developed for the explicit purpose of providing an effective software system for programming NC machine tools. The APT system includes a specification of the APT programming language and a language processor, which executes APT statements and generates the NC machine-tool motions specified by the APT statements.

  19. Annual review in automatic programming

    CERN Document Server

    Halpern, Mark I; Bolliet, Louis

    2014-01-01

    Computer Science and Technology and their Application is an eight-chapter book that first presents a tutorial on database organization. Subsequent chapters describe the general concepts of Simula 67 programming language; incremental compilation and conversational interpretation; dynamic syntax; the ALGOL 68. Other chapters discuss the general purpose conversational system for graphical programming and automatic theorem proving based on resolution. A survey of extensible programming language is also shown.

  20. How CBO Estimates Automatic Stabilizers

    Science.gov (United States)

    2015-11-01

    To relate changes of wages and salaries and proprietors' incomes as recorded in the NIPAs to changes in the GDP gap, CBO uses separate regressions based on equation (1)... [Table 1 residue: deficit or surplus with and without automatic stabilizers; GDP gap; unemployment gap (percent); revenues; outlays.] The GDP gap equals the difference between actual or projected GDP and CBO's...

  1. Automatic translation among spoken languages

    Science.gov (United States)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  2. The Automatic Galaxy Collision Software

    CERN Document Server

    Smith, Beverly J; Pfeiffer, Phillip; Perkins, Sam; Barkanic, Jason; Fritts, Steve; Southerland, Derek; Manchikalapudi, Dinikar; Baker, Matt; Luckey, John; Franklin, Coral; Moffett, Amanda; Struck, Curtis

    2009-01-01

    The key to understanding the physical processes that occur during galaxy interactions is dynamical modeling, and especially the detailed matching of numerical models to specific systems. To make modeling interacting galaxies more efficient, we have constructed the `Automatic Galaxy Collision' (AGC) code, which requires less human intervention in finding good matches to data. We present some preliminary results from this code for the well-studied system Arp 284 (NGC 7714/5), and address questions of uniqueness of solutions.

  3. Automatic computation of transfer functions

    Science.gov (United States)

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.

  4. Memory as a function of attention, level of processing, and automatization.

    Science.gov (United States)

    Fisk, A D; Schneider, W

    1984-04-01

    The relationships between long-term memory (LTM) modification, attentional allocation, and type of processing are examined. Automatic/controlled processing theory (Schneider & Shiffrin, 1977) predicts that the nature and amount of controlled processing determine LTM storage and that stimuli can be automatically processed with no lasting LTM effect. Subjects performed the following: (a) intentional learning, (b) a semantic categorization, (c) a graphic categorization, (d) a distracting digit search while intentionally learning words, and (e) a distracting digit search while ignoring words. Frequency judgments were more accurate in the semantic and intentional conditions than in the graphic condition. Frequency judgments in the digit-search conditions were near chance. Experiment 2 extensively trained subjects to develop automatic categorization. Automatic categorization produced no frequency learning and little recognition. These results also disconfirm the Hasher and Zacks (1979) "automatic encoding" proposal regarding the nature of processing.

  5. Performance Test of System Identification Methods for a Nuclear Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Keuk Jong; Kim, Han Gon [KHNP, Daejeon (Korea, Republic of)

    2011-05-15

    An automatic controller that uses the model predictive control (MPC) method is being developed for automatic load-follow operation. As described in Ref. , a system identification method is important in MPC because MPC is based on a system model produced by system identification. There are many models and methods of system identification. In this study, the AutoRegressive eXogenous (ARX) model was selected from among them, and the recursive least squares (RLS) method and the least squares (LS) method associated with this model are used in a comparative performance analysis.
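    A minimal recursive least squares fit of a first-order ARX model y[k] = a*y[k-1] + b*u[k-1], the kind of identification step an MPC controller would run online. The true parameters and the noise-free simulation are test inventions, not the reactor model of the paper.

```python
import numpy as np

# Simulate a first-order ARX plant with arbitrary test parameters.
a_true, b_true = 0.8, 0.5
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

# Recursive least squares estimation of [a, b].
theta = np.zeros(2)                 # parameter estimate
P = np.eye(2) * 1000.0              # covariance; large = uninformative prior
for k in range(1, 200):
    phi = np.array([y[k - 1], u[k - 1]])        # regressor vector
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * (y[k] - phi @ theta) # correct by prediction error
    P = P - np.outer(gain, phi @ P)

print(np.round(theta, 3))           # close to [0.8, 0.5]
```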

  6. Semi-automated identification of leopard frogs

    Science.gov (United States)

    Petrovska-Delacrétaz, Dijana; Edwards, Aaron; Chiasson, John; Chollet, Gérard; Pilliod, David S.

    2014-01-01

    Principal component analysis is used to implement a semi-automatic recognition system to identify recaptured northern leopard frogs (Lithobates pipiens). Results of both open set and closed set experiments are given. The presented algorithm is shown to provide accurate identification of 209 individual leopard frogs from a total set of 1386 images.
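    The PCA-plus-nearest-neighbour idea in miniature, with random vectors standing in for frog spot-pattern images; the paper's preprocessing and open-set logic are omitted.

```python
import numpy as np

# A "gallery" of 10 known individuals, each a 50-pixel image stand-in.
rng = np.random.default_rng(1)
gallery = rng.standard_normal((10, 50))

# Principal axes from the SVD of the mean-centred gallery.
mean = gallery.mean(axis=0)
centered = gallery - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:5]                       # keep 5 principal components

def project(x):
    """Project an image into the low-dimensional eigenspace."""
    return components @ (x - mean)

def identify(query):
    """Closed-set identification: nearest gallery neighbour in eigenspace."""
    dists = [np.linalg.norm(project(query) - project(g)) for g in gallery]
    return int(np.argmin(dists))

query = gallery[3] + 0.05 * rng.standard_normal(50)   # noisy recapture of #3
print(identify(query))   # -> 3
```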

  7. A Cost Benefit Analysis of Radio Frequency Identification (RFID) Implementation at the Defense Microelectronics Activity (DMEA)

    Science.gov (United States)

    2011-12-01

    RFID is grouped in the category of automatic identification technologies, along with barcodes, magnetic stripes, smart cards and biometrics. RFID uses... (Thesis by James B. Gerber, December 2011.)

  8. Automatic Spectral Classification of Galaxies in the Infrared

    Science.gov (United States)

    Navarro, S. G.; Guzmán, V.; Dafonte, C.; Kemp, S. N.; Corral, L. J.

    2016-10-01

    Multi-object spectroscopy (MOS) provides us with numerous spectral data, and the projected new facilities and survey missions will increase the available spectra from stars and galaxies. In order to better understand this huge amount of data, we need to develop new techniques of analysis and classification. Over the past decades it has been demonstrated that artificial neural networks are excellent tools for automatic spectral classification and identification, being robust and highly resistant to the presence of noise. We present here the results of applying unsupervised neural networks, competitive neural networks (CNN) and self-organizing maps (SOM), to a sample of 747 galaxy spectra from the Infrared Spectrograph (IRS) of Spitzer. We obtained an automatic classification into 17 groups with the CNN, and we compare the results with those obtained with SOMs. The final goal of the project is to develop an automatic spectral classification tool for galaxies in the infrared, making use of artificial neural networks with unsupervised training, and to analyze the spectral characteristics of the galaxies that can give us clues to the physical processes taking place inside them.

  9. Refinements to the Boolean approach to automatic data editing

    Energy Technology Data Exchange (ETDEWEB)

    Liepins, G.E.

    1980-09-01

    Automatic data editing consists of three components: identification of erroneous records, identification of the most likely erroneous fields within an erroneous record (the fields to impute), and assignment of acceptable values to failing records. Moreover, the types of data considered naturally fall into three categories: coded (categorical) data, continuous data, and mixed data (both coded and continuous). For the case of coded data, a natural way to approach automatic data editing is commonly referred to as the Boolean approach, first developed by Fellegi and Holt. For the fields-to-impute problem, central to the operation of the Fellegi-Holt approach is the explicit recognition of certain implied edits; Fellegi and Holt originally required a complete set of edits, and their algorithm to generate this complete set has occasionally had the distinct disadvantage of failing to converge within reasonable time. The primary result of this paper is an algorithm that significantly prunes the Fellegi-Holt edit generation process yet nonetheless generates a collection of implied edits adequate for the solution of the fields-to-impute problem. 3 figures.

  10. Multilabel Learning for Automatic Web Services Tagging

    Directory of Open Access Journals (Sweden)

    Mustapha AZNAG

    2014-08-01

    Full Text Available Recently, some web services portals and search engines, such as Biocatalogue and Seekda!, have allowed users to manually annotate Web services using tags. User tags provide meaningful descriptions of services and allow users to index and organize their contents. The tagging technique is widely used to annotate objects in Web 2.0 applications. In this paper we propose a novel probabilistic topic model (which extends the CorrLDA model, Correspondence Latent Dirichlet Allocation) to automatically tag web services according to existing manual tags. Our probabilistic topic model is a latent variable model that exploits local label correlations. Indeed, exploiting label correlations is a challenging and crucial problem, especially in the multi-label learning context. Moreover, several existing systems can recommend tags for web services based on existing manual tags; in most cases, the manual tags have better quality. We also develop three strategies to automatically recommend the best tags for web services. We also propose, in this paper, WS-Portal, an enriched web services search engine which contains 7063 providers, 115 sub-classes of category and 22236 web services crawled from the Internet. In WS-Portal, several technologies are employed to improve the effectiveness of web service discovery (i.e. web services clustering, tag recommendation, service rating and monitoring). Our experiments are performed on real-world web services. The comparisons of Precision@n and Normalised Discounted Cumulative Gain (NDCGn) values for our approach indicate that the method presented in this paper outperforms the method based on CorrLDA in terms of ranking and quality of generated tags.

  11. Automatic evaluations and exercise setting preference in frequent exercisers.

    Science.gov (United States)

    Antoniewicz, Franziska; Brand, Ralf

    2014-12-01

    The goals of this study were to test whether exercise-related stimuli can elicit automatic evaluative responses and whether automatic evaluations reflect exercise setting preference in highly active exercisers. An adapted version of the Affect Misattribution Procedure was employed. Seventy-two highly active exercisers (26 years ± 9.03; 43% female) were subliminally primed (7 ms) with pictures depicting typical fitness center scenarios or gray rectangles (control primes). After each prime, participants consciously evaluated the "pleasantness" of a Chinese symbol. Controlled evaluations were measured with a questionnaire and were more positive in participants who regularly visited fitness centers than in those who reported avoiding this exercise setting. Only center exercisers gave automatic positive evaluations of the fitness center setting (partial eta squared = .08). It is proposed that a subliminal Affect Misattribution Procedure paradigm can elicit automatic evaluations to exercising and that, in highly active exercisers, these evaluations play a role in decisions about the exercise setting rather than the amounts of physical exercise. Findings are interpreted in terms of a dual systems theory of social information processing and behavior.

  12. Unification of automatic target tracking and automatic target recognition

    Science.gov (United States)

    Schachter, Bruce J.

    2014-06-01

    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including the retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment, building up a consistent perception, is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.

  13. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming, Volume 4 is a collection of papers that deals with the GIER ALGOL compiler, a parameterized compiler based on mechanical linguistics, and the JOVIAL language. A couple of papers describes a commercial use of stacks, an IBM system, and what an ideal computer program support system should be. One paper reviews the system of compilation, the development of a more advanced language, programming techniques, machine independence, and program transfer to other machines. Another paper describes the ALGOL 60 system for the GIER machine including running ALGOL pro

  14. On automatic machine translation evaluation

    Directory of Open Access Journals (Sweden)

    Darinka Verdonik

    2013-05-01

    Full Text Available An important task in developing machine translation (MT) is evaluating system performance. Automatic measures are most commonly used for this task, as manual evaluation is time-consuming and costly. However, performing an objective evaluation is not a trivial task. Automatic measures, such as BLEU, TER, NIST, METEOR etc., have their own weaknesses, while manual evaluations are also problematic since they are always to some extent subjective. In this paper we test the influence of the test set on the results of automatic MT evaluation for the subtitling domain. Translating subtitles is a rather specific task for MT, since subtitles are a sort of summarization of spoken text rather than a direct translation of (written) text. An additional problem when translating a language pair that does not include English, in our example Slovene-Serbian, is that the translations are commonly done from English to Serbian and from English to Slovene, and not directly, since most TV production is originally filmed in English. All this poses additional challenges to MT and consequently to MT evaluation. Automatic evaluation is based on a reference translation, which is usually taken from an existing parallel corpus and marked as a test set. In our experiments, we compare the evaluation results for the same MT system output using three types of test set. In the first round, the test set is 4000 subtitles from the SUMAT parallel corpus of subtitles. These subtitles are not direct translations from Serbian to Slovene or vice versa, but are based on an English original. In the second round, the test set is 1000 subtitles randomly extracted from the first test set and translated anew, from Serbian to Slovene, based solely on the Serbian written subtitles. In the third round, the test set is the same 1000 subtitles; however, this time the Slovene translations were obtained by manually correcting the Slovene MT outputs so that they are correct translations of the
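    The reference-dependence the paper investigates is visible even in the simplest BLEU building block, modified unigram precision, which clips each hypothesis word's count by its count in the reference:

```python
from collections import Counter

def modified_unigram_precision(hypothesis, reference):
    """Clipped unigram precision: each hypothesis word counts at most as
    often as it appears in the reference, then divide by hypothesis length."""
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    clipped = sum(min(count, ref[word]) for word, count in hyp.items())
    return clipped / sum(hyp.values())

ref = "the cat is on the mat"
# Degenerate hypothesis: without clipping it would score 1.0; clipping
# limits "the" to its reference count of 2, giving 2/4.
print(modified_unigram_precision("the the the the", ref))         # -> 0.5
print(modified_unigram_precision("the cat sat on the mat", ref))  # 5/6
```

Swapping in a different reference translation changes these counts, which is why the choice of test set matters so much for automatic scores.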

  15. Automatic Inference of DATR Theories

    CERN Document Server

    Barg, P

    1996-01-01

    This paper presents an approach for the automatic acquisition of linguistic knowledge from unstructured data. The acquired knowledge is represented in the lexical knowledge representation language DATR. A set of transformation rules that establish inheritance relationships and a default-inference algorithm make up the basis components of the system. Since the overall approach is not restricted to a special domain, the heuristic inference strategy uses criteria to evaluate the quality of a DATR theory, where different domains may require different criteria. The system is applied to the linguistic learning task of German noun inflection.

  16. Automatic analysis of multiparty meetings

    Indian Academy of Sciences (India)

    Steve Renals

    2011-10-01

    This paper is about the recognition and interpretation of multiparty meetings captured as audio, video and other signals. This is a challenging task since the meetings consist of spontaneous and conversational interactions between a number of participants: it is a multimodal, multiparty, multistream problem. We discuss the capture and annotation of the Augmented Multiparty Interaction (AMI) meeting corpus, the development of a meeting speech recognition system, and systems for the automatic segmentation, summarization and social processing of meetings, together with some example applications based on these systems.

  17. Commutated automatic gain control system

    Science.gov (United States)

    Yost, S. R.

    1982-01-01

    The commutated automatic gain control (AGC) system designed and built for a prototype Loran-C receiver is discussed. The current version of the prototype receiver, the Mini L-80, was tested initially in 1980. The receiver uses a Super Jolt microcomputer to control a memory-aided phase-locked loop (MAPLL). The microcomputer also controls the input/output, latitude/longitude conversion, and the recently added AGC system. The AGC adjusts the level of each station signal such that the early portion of each envelope rise is at about the same amplitude in the receiver's envelope detector.

  18. Coordinated hybrid automatic repeat request

    KAUST Repository

    Makki, Behrooz

    2014-11-01

    We develop a coordinated hybrid automatic repeat request (HARQ) approach. With the proposed scheme, if a user's message is correctly decoded in the early HARQ rounds, its spectrum is allocated to other users, improving the network outage probability and the users' fairness. The results, obtained for single- and multiple-antenna setups, demonstrate the efficiency of the proposed approach under different conditions. For instance, with a maximum of M retransmissions and single transmit/receive antennas, the diversity gain of a user increases from M to (J+1)(M-1)+1, where J is the number of users helping that user.
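    The diversity-gain expression quoted in the abstract is simple enough to tabulate directly; the sketch below does that (the function name is ours, and the formula reduces to plain HARQ's gain M when J = 0):

```python
def harq_diversity_gain(m: int, j: int) -> int:
    """Diversity gain (J+1)(M-1)+1 quoted for the coordinated HARQ scheme.

    m -- maximum number of retransmissions
    j -- number of users helping the user of interest (j = 0 gives M)
    """
    return (j + 1) * (m - 1) + 1

# e.g. M = 3 retransmissions: gain grows with each additional helper
print([harq_diversity_gain(3, j) for j in range(4)])  # → [3, 5, 7, 9]
```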

  19. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael

    2014-05-01

    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization.

  20. Quantum system identification

    CERN Document Server

    Raginsky, M

    2003-01-01

    We formulate and study, in general terms, the problem of quantum system identification, i.e., the determination (or estimation) of unknown quantum channels through their action on suitably chosen input density operators. We also present a quantitative analysis of the worst-case performance of these schemes.

  1. QXT-full Automatic Saccharify Instrument

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    QXT is a fully automatic eight-hole saccharification instrument. The instrument uses microcomputer process-control technology and can carry out the full saccharification process automatically and accurately. By adopting a high-precision expert PID control mode and the microcomputer's digital automatic calibration technology, it not only ensures the precision of the linearly rising temperature region (1 ℃/min) and of the constant-temperature region (temperature error ±0.2 ℃), but also overcomes the disturbance

  2. Automatic Control of Water Pumping Stations

    Institute of Scientific and Technical Information of China (English)

    Muhannad Alrheeh; JIANG Zhengfeng

    2006-01-01

    Automatic control of pumps is an attractive option for operating water pumping stations, which come in many kinds according to their functions. In this paper, the pumping station considered is used in a water supply system. The paper introduces the idea of a pump controller and the important factors that must be considered when designing an automatic control system for water pumping stations. The automatic control circuit, with the function of each component, is then introduced.

  3. Automatic generation of stop word lists for information retrieval and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
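    The procedure described above (build a term list, filter by the adjacency-to-frequency ratio, then truncate) can be sketched as follows. This is a minimal illustration, not the patented implementation: the 0.5 cutoff, the one-token adjacency window, and truncation by descending frequency are all our assumptions.

```python
from collections import Counter

def build_stop_words(documents, keywords, ratio_cutoff=0.5, max_size=100):
    """Sketch of stop-word generation: a term is kept as a stop word when
    it occurs next to known keywords often relative to how often it occurs
    overall (terms below the ratio cutoff are excluded from the list)."""
    keywords = set(keywords)
    term_freq = Counter()   # keyword frequency: total occurrences of each term
    adj_freq = Counter()    # keyword adjacency frequency
    for doc in documents:
        tokens = doc.lower().split()
        term_freq.update(tokens)
        for i, tok in enumerate(tokens):
            # adjacency window: the token immediately before and after
            neighbours = tokens[max(i - 1, 0):i] + tokens[i + 1:i + 2]
            if any(n in keywords for n in neighbours):
                adj_freq[tok] += 1
    stop_words = [t for t in term_freq
                  if t not in keywords
                  and adj_freq[t] / term_freq[t] >= ratio_cutoff]
    # truncate by descending frequency (one plausible criterion)
    stop_words.sort(key=lambda t: -term_freq[t])
    return stop_words[:max_size]
```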

  4. Rapid Automatic Motor Encoding of Competing Reach Options

    Directory of Open Access Journals (Sweden)

    Jason P. Gallivan

    2017-02-01

    Full Text Available Mounting neural evidence suggests that, in situations in which there are multiple potential targets for action, the brain prepares, in parallel, competing movements associated with these targets, prior to implementing one of them. Central to this interpretation is the idea that competing viewed targets, prior to selection, are rapidly and automatically transformed into corresponding motor representations. Here, by applying target-specific, gradual visuomotor rotations and dissociating, unbeknownst to participants, the visual direction of potential targets from the direction of the movements required to reach the same targets, we provide direct evidence for this provocative idea. Our results offer strong empirical support for theories suggesting that competing action options are automatically represented in terms of the movements required to attain them. The rapid motor encoding of potential targets may support the fast optimization of motor costs under conditions of target uncertainty and allow the motor system to inform decisions about target selection.

  5. Detection of Off-normal Images for NIF Automatic Alignment

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J V; Awwal, A S; McClay, W A; Ferguson, S W; Burkhart, S C

    2005-07-11

    One of the major purposes of the National Ignition Facility at Lawrence Livermore National Laboratory is to accurately focus 192 high-energy laser beams on a millimeter-scale fusion target at the precise location and time. The automatic alignment system developed for NIF is used to align the beams in order to achieve the required focusing effect. However, if a distorted image is inadvertently created by a faulty camera shutter or some other opto-mechanical malfunction, the resulting image, termed "off-normal", must be detected and rejected before further alignment processing occurs. Thus the off-normal processor acts as a preprocessor to automatic alignment image processing. In this work, we discuss the development of an "off-normal" pre-processor capable of rapidly detecting and rejecting off-normal images. A wide variety of off-normal images for each control loop is used to develop accurate rejection criteria.

  6. Grey-identification model based short-term wind power generation prediction

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    To predict the output power of wind turbines accurately, a short-term wind power prediction method based on the grey GM(1,1) model and an identification method is presented for a real wind farm. Residual-error revision is applied to forecast the wind speed and obtain an accurate predicted wind speed series. To further increase the precision of the wind power prediction, an FIR-MA iterative identification model is adopted to fit, piecewise, the real relationship between wind speed and wind power and obtain a proper FIR-MA model. The method is used to model a wind turbine with a rated capacity of 850 kW, and the predicted generation power is compared with observed data using the mean absolute percentage error, the root mean square error and the single-point error as evaluation indexes. The simulation shows the effectiveness and practical applicability of the presented method, which can predict the real-time generation power of wind turbines and raise the accuracy of wind power prediction. Finally, a simulation using actual data from a wind farm in China proves the efficiency of the

  7. Automatic Network Reconstruction using ASP

    CERN Document Server

    Ostrowski, Max; Durzinsky, Markus; Marwan, Wolfgang; Wagler, Annegret

    2011-01-01

    Building biological models by inferring functional dependencies from experimental data is an important issue in Molecular Biology. To relieve the biologist from this traditionally manual process, various approaches have been proposed to increase the degree of automation. However, available approaches often yield a single model only, rely on specific assumptions, and/or use dedicated, heuristic algorithms that are intolerant to changing circumstances or requirements in the view of the rapid progress made in Biotechnology. Our aim is to provide a declarative solution to the problem by appeal to Answer Set Programming (ASP) overcoming these difficulties. We build upon an existing approach to Automatic Network Reconstruction proposed by part of the authors. This approach has firm mathematical foundations and is well suited for ASP due to its combinatorial flavor providing a characterization of all models explaining a set of experiments. The usage of ASP has several benefits over the existing heuristic a...

  8. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with ``Automatic Validation of Numerical Solutions''. The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings...... is the possibility to combine the three methods in an extremely flexible way. We examine some applications where this flexibility is very useful. A method for Taylor expanding solutions of ordinary differential equations is presented, and a method for obtaining interval enclosures of the truncation errors incurred...... with intervals as initial values. A modification of the mean value enclosure of discrete mappings is considered, namely the extended mean value enclosure which in most cases leads to even better enclosures. These methods have previously been described in connection with discretizing solutions of ordinary...

  9. Autoclass: An automatic classification system

    Science.gov (United States)

    Stutz, John; Cheeseman, Peter; Hanson, Robin

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.

  10. Study on flaw identification of ultrasonic signal for large shafts based on optimal support vector machine

    Institute of Scientific and Technical Information of China (English)

    Zhao Xiufen; Yin Guofu; Tian Guiyun; Yin Ying

    2008-01-01

    Automatic identification of flaws is very important for ultrasonic nondestructive testing and evaluation of large shafts. A novel automatic defect identification system is presented. Wavelet packet analysis (WPA) was applied to feature extraction from the ultrasonic signal, and an optimal support vector machine (SVM) was used to perform the identification task. A comparative study of convergence speed and classification performance was also carried out between the SVM and several improved BP network models. To validate the method, experiments were performed; the results show that the proposed system has very high identification performance for large shafts, and that the optimal SVM achieves better classification performance and generalization potential than a BP neural network under small-sample conditions.

  11. Solar Powered Automatic Shrimp Feeding System

    Directory of Open Access Journals (Sweden)

    Dindo T. Ani

    2015-12-01

    Full Text Available Automatic systems have brought many revolutions to existing technologies. One technology that has seen considerable development is the solar powered automatic shrimp feeding system. Solar power, a renewable energy, can be an alternative solution to the energy crisis while reducing manpower when used in an automatic manner. The researchers believe an automatic shrimp feeding system may help solve problems with manual feeding operations. The project study aimed to design and develop a solar powered automatic shrimp feeding system. It specifically sought to prepare the design specifications of the project, to determine the methods of fabrication and assembly, and to test the response time of the automatic shrimp feeding system. The researchers designed and developed an automatic system which utilizes a 10-hour timer, set to intervals preferred by the user, that undergoes a continuous process. A magnetic contactor acts as a switch connected to the 10-hour timer, controlling the activation or termination of the electrical loads; it is powered by a solar panel outputting electrical power and a rechargeable battery in electrical communication with the solar panel for storing the power. Through a series of tests, the components of the modified system were proven functional and operated within the desired output. It was recommended that the timer be tested to avoid malfunction and achieve a fully automatic system, and that the system be improved to handle changes in the scope of the project.

  12. Automatic cobb angle determination from radiographic images

    NARCIS (Netherlands)

    Sardjono, Tri Arief; Wilkinson, Michael H.F.; Veldhuizen, Albert G.; Ooijen, van Peter M.A.; Purnama, Ketut E.; Verkerke, Gijsbertus J.

    2013-01-01

    Study Design. Automatic measurement of Cobb angle in patients with scoliosis. Objective. To test the accuracy of an automatic Cobb angle determination method from frontal radiographical images. Summary of Background Data. Thirty-six frontal radiographical images of patients with scoliosis. Met

  13. Automatic Cobb Angle Determination From Radiographic Images

    NARCIS (Netherlands)

    Sardjono, Tri Arief; Wilkinson, Michael H. F.; Veldhuizen, Albert G.; van Ooijen, Peter M. A.; Purnama, Ketut E.; Verkerke, Gijsbertus J.

    2013-01-01

    Study Design. Automatic measurement of Cobb angle in patients with scoliosis. Objective. To test the accuracy of an automatic Cobb angle determination method from frontal radiographical images. Summary of Background Data. Thirty-six frontal radiographical images of patients with scoliosis. Methods.

  14. Automatization for development of HPLC methods.

    Science.gov (United States)

    Pfeffer, M; Windt, H

    2001-01-01

    Within the frame of in-process analytics for the synthesis of pharmaceutical drugs, many HPLC methods are required for checking the quality of intermediates and drug substances. The methods have to be developed in terms of optimal selectivity, low limit of detection, minimum running time and chromatographic robustness. The goal was to shorten the method development process. Therefore, the screening of stationary phases was automated by means of switching modules equipped with 12 HPLC columns. The mobile phase and temperature could be optimized using DryLab after evaluating chromatograms of gradient elutions performed automatically. The column switching module was applied to more than three dozen substances, e.g. steroidal intermediates. Resolution (especially of isomers), peak shape and number of peaks turned out to be the criteria for selecting the appropriate stationary phase. On the basis of the "best" column, the composition of the "best" eluent was usually defined rapidly and with little effort. This approach reduces the manpower required by more than one third. Overnight, impurity profiles of the intermediates were obtained, yielding robust HPLC methods with high selectivity and minimized elution time.

  15. Development of a microcontroller-based automatic control system for the electrohydraulic total artificial heart.

    Science.gov (United States)

    Kim, H C; Khanwilkar, P S; Bearnson, G B; Olsen, D B

    1997-01-01

    An automatic physiological control system for the actively filled, alternately pumped ventricles of the volumetrically coupled, electrohydraulic total artificial heart (EHTAH) was developed for long-term use. The automatic control system must ensure that the device: 1) maintains a physiological cardiac output response, 2) compensates for nonphysiological conditions, and 3) is stable, reliable, and operates at high power efficiency. The developed automatic control system met these requirements both in vitro, in week-long continuous mock circulation tests, and in vivo, in acute open-chested animals (calves). Satisfactory results were also obtained in a series of chronic animal experiments, including 21 days of continuous operation in the fully automatic control mode, and 138 days of operation in a manual mode, in a 159-day calf implant.

  16. CRISPR Recognition Tool (CRT: a tool for automatic detection of clustered regularly interspaced palindromic repeats

    Directory of Open Access Journals (Sweden)

    Brown Kyndall

    2007-06-01

    Full Text Available Abstract Background Clustered Regularly Interspaced Palindromic Repeats (CRISPRs) are a novel type of direct repeat found in a wide range of bacteria and archaea. CRISPRs are beginning to attract attention because of their proposed mechanism; that is, defending their hosts against invading extrachromosomal elements such as viruses. Existing repeat detection tools do a poor job of identifying CRISPRs due to the presence of unique spacer sequences separating the repeats. In this study, a new tool, CRT, is introduced that rapidly and accurately identifies CRISPRs in large DNA strings, such as genomes and metagenomes. Results CRT was compared to the CRISPR detection tools Patscan and Pilercr. In terms of correctness, CRT was shown to be very reliable, demonstrating significant improvements over Patscan for the measures precision, recall and quality. When compared to Pilercr, CRT showed improved performance for recall and quality. In terms of speed, CRT proved to be a huge improvement over Patscan. CRT and Pilercr were comparable in speed, although CRT was faster for genomes containing large numbers of repeats. Conclusion In this paper a new tool was introduced for the automatic detection of CRISPR elements. This tool, CRT, showed some important improvements over current techniques for CRISPR identification. CRT's approach to detecting repetitive sequences is straightforward. It uses a simple sequential scan of a DNA sequence and detects repeats directly without any major conversion or preprocessing of the input. This leads to a program that is easy to describe and understand; yet it is very accurate, fast and memory efficient, being O(n) in space and O(nm/l) in time.
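    The kind of sequential scan the abstract describes can be illustrated with a toy sketch: look for a k-mer that recurs at roughly regular intervals, separated by short unique spacers. This is only an illustration of the idea, not CRT's actual algorithm, and all parameter values are ours.

```python
def find_direct_repeats(dna, repeat_len=8, min_spacer=2, max_spacer=12,
                        min_copies=3):
    """Toy scan for CRISPR-like arrays: a repeat unit followed by further
    copies of itself, each separated by a spacer of bounded length."""
    hits = []
    i, n = 0, len(dna)
    while i + repeat_len <= n:
        unit = dna[i:i + repeat_len]
        positions = [i]
        j = i
        # extend the candidate array: next copy must start within spacer range
        while True:
            lo = j + repeat_len + min_spacer
            hi = j + repeat_len + max_spacer
            k = dna.find(unit, lo, hi + repeat_len)  # end bound is exclusive
            if k == -1:
                break
            positions.append(k)
            j = k
        if len(positions) >= min_copies:
            hits.append((unit, positions))
            i = positions[-1] + repeat_len  # skip past this array
        else:
            i += 1
    return hits
```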

  17. An improved, SSH-based method to automatically identify mesoscale eddies in the ocean

    Institute of Scientific and Technical Information of China (English)

    WANG Xin; DU Yun-yan; ZHOU Cheng-hu; FAN Xing; YI Jia-wei

    2013-01-01

    Mesoscale eddies are an important component of oceanic features. How to automatically identify these mesoscale eddies from available data has become an important research topic. Through careful examination of existing methods, we propose an improved, SSH-based automatic identification method. Using the inclusion relation of enclosed SSH contours, the mesoscale eddy boundary and core(s) can be automatically identified. The time evolution of eddies can be examined by a threshold search algorithm and a tracking algorithm based on similarity. Sea-surface height (SSH) data from the Naval Research Laboratory Layered Ocean Model (NLOM) and sea-level anomaly (SLA) data from altimetry are used in many experiments, in which different automatic identification methods are compared. Our results indicate that the improved method is able to extract the mesoscale eddy boundary more precisely, retaining the multiple-core structure. In combination with the tracking algorithm, this method can capture complete mesoscale eddy processes. It can thus provide reliable information for further study of eddy dynamics, merging, splitting, and the evolution of a multi-core structure.

  18. 7 CFR 58.418 - Automatic cheese making equipment.

    Science.gov (United States)

    2010-01-01

    ... processing or packaging areas. (c) Automatic salter. The automatic salter shall be constructed of stainless.... The automatic salter shall be constructed so that it can be satisfactorily cleaned. The salting...

  19. A Short-term Forecasting Method Based on a Dynamic Identification Algorithm and Its Application

    Institute of Scientific and Technical Information of China (English)

    曹柬; 齐羽; 周根贵

    2013-01-01

    In short-term forecasting, the predicted values usually exhibit monthly, weekly or even daily periodicities. Short-term forecasting has played an increasingly important role in people's lives and work because of the rapid development of information technology. Short-term forecasts have many features, such as a shorter forecasting period, a more elusive regularity in the practical data, and many factors influencing the forecasting results; these features increase the difficulty of prediction. In order to reduce decision risks, decision-makers need proper forecasting tools to reveal the regularity of the predicted data and make reliable, accurate forecasts. A variety of models and algorithms for short-term forecasting have been proposed in recent years. Among them, traditional statistical techniques are still the most popular because they are easily accepted and applied by decision-makers. Statistical methods generally comprise two kinds of models: static models and dynamic models. The static model has a fixed structure, which usually reduces its ability to trace constant changes in the environment. The dynamic model is constructed based on the analysis of a certain stochastic process. Compared with the static model, the dynamic model can produce more accurate and credible results, but has the potential defect of low robustness in practical problems. Considering the features of current short-term forecasting as well as the imperfections of existing statistical forecasting methods, a new short-term forecasting method based on a dynamic relation identification approach is presented in this paper. Its forecasting process is briefly described as follows. Firstly, the dynamic relation model reflects the relational pattern between the forecast value and its correlative influencing factors. Secondly, in each forecasting period the optimal forecast precision is calculated from the precision-determination formula and the newly observed data. Thirdly, both the

  20. Sequentiality of daily life physiology: an automatized segmentation approach.

    Science.gov (United States)

    Fontecave-Jallon, J; Baconnier, P; Tanguy, S; Eymaron, M; Rongier, C; Guméry, P Y

    2013-09-01

    Based on the hypotheses that (1) a physiological organization exists inside each activity of daily life and (2) the pattern of evolution of physiological variables is characteristic of each activity, pattern changes should be detectable in daily life physiological recordings. The present study investigates whether a simple segmentation method can be set up to detect pattern changes in physiological recordings carried out during daily life. Heart rate, breathing rate and skin temperature were non-invasively recorded in volunteers following scenarios made of "daily life" steps (13 records). An observer, undergoing the scenario, wrote down annotations during the recording time. Two segmentation procedures were compared to the annotations: a visual inspection of the signals and an automatic program based on a trend-detection algorithm applied to one physiological signal (skin temperature). The annotations resulted in a total of 213 segments defined over the 13 records; the best visual inspection detected fewer segments (120) than the automatic program (194). Evaluated in terms of the number of correspondences between the time marks given by the annotations and those resulting from the two physiologically based segmentations, the automatic program was better than the visual inspection. The mean time lags between annotation and program time marks remain variable. Time series recorded in everyday conditions exhibit different successive patterns that can be detected by a simple trend-detection algorithm. These sequences are coherent with the corresponding annotated activity.
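    The abstract does not specify the trend-detection algorithm, but a generic sliding-window version of the idea can be sketched as follows: compare the least-squares slope of the signal before and after each sample and mark the points where the local trend changes abruptly. Window size and threshold are illustrative assumptions.

```python
def slope(values):
    """Least-squares slope of equally spaced samples."""
    n = len(values)
    xm = (n - 1) / 2
    ym = sum(values) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(values))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def detect_trend_changes(signal, window=5, threshold=0.5):
    """Mark sample indices where the local trend of the signal changes by
    more than `threshold` between the windows before and after the index."""
    marks = []
    for i in range(window, len(signal) - window):
        before = slope(signal[i - window:i])
        after = slope(signal[i:i + window])
        if abs(after - before) > threshold:
            marks.append(i)
    return marks
```

On longer signals, consecutive marked indices cluster around a single transition and can be merged into one change point.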

  1. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    Science.gov (United States)

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.

    2009-01-01

    Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.

  2. A Hierarchy of Tree-Automatic Structures

    CERN Document Server

    Finkel, Olivier

    2011-01-01

    We consider $\omega^n$-automatic structures, which are relational structures whose domain and relations are accepted by automata reading ordinal words of length $\omega^n$ for some integer $n\geq 1$. We show that all these structures are $\omega$-tree-automatic structures presentable by Muller or Rabin tree automata. We prove that the isomorphism relation for $\omega^2$-automatic (resp. $\omega^n$-automatic for $n>2$) boolean algebras (respectively, partial orders, rings, commutative rings, non-commutative rings, non-commutative groups) is not determined by the axiomatic system ZFC. We infer from the proof of the above result that the isomorphism problem for $\omega^n$-automatic boolean algebras, $n > 1$, (respectively, rings, commutative rings, non-commutative rings, non-commutative groups) is neither a $\Sigma_2^1$-set nor a $\Pi_2^1$-set. We obtain that there exist infinitely many $\omega^n$-automatic, hence also $\omega$-tree-automatic, atomless boolean algebras $B_n$, $n\geq 1$, which are pairwise isomorp...

  3. An Approach for Automatic Classification of Radiology Reports in Spanish.

    Science.gov (United States)

    Cotik, Viviana; Filippo, Darío; Castaño, José

    2015-01-01

    Automatic detection of relevant terms in medical reports is useful for educational purposes and for clinical research. Natural language processing (NLP) techniques can be applied in order to identify them. In this work we present an approach to classify radiology reports written in Spanish into two sets: those that indicate pathological findings and those that do not. In addition, the entities corresponding to pathological findings are identified in the reports. We use RadLex, a lexicon of English radiology terms, and NLP techniques to identify the occurrence of pathological findings. Reports are classified using a simple algorithm based on the presence of pathological findings, negation and hedge terms. The implemented algorithms were tested on a test set of 248 reports annotated by an expert, obtaining a best result of 0.72 F1 measure. The output of the classification task can be used to look for specific occurrences of pathological findings.
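    A rule classifier of the kind the abstract describes (finding terms plus negation and hedge cues) can be sketched minimally as follows. The term lists are illustrative placeholders, not entries from RadLex, and the three-token negation window is our assumption.

```python
# Illustrative term lists (not taken from RadLex or the paper)
FINDINGS = {"nodule", "fracture", "opacity", "effusion"}
NEGATIONS = {"no", "without", "sin"}            # "sin" = Spanish "without"
HEDGES = {"possible", "probable", "likely"}

def classify(report: str) -> str:
    """Label a report by scanning for finding terms, skipping negated
    mentions and flagging hedged ones."""
    tokens = report.lower().replace(",", " ").split()
    for i, tok in enumerate(tokens):
        if tok in FINDINGS:
            window = tokens[max(i - 3, 0):i]    # look back a few tokens
            if any(w in NEGATIONS for w in window):
                continue                        # negated mention, keep scanning
            if any(w in HEDGES for w in window):
                return "pathological (hedged)"
            return "pathological"
    return "non-pathological"
```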

  4. Evolutionary synthesis of automatic classification on astroinformatic big data

    Science.gov (United States)

    Kojecky, Lumir; Zelinka, Ivan; Saloun, Petr

    2016-06-01

    This article describes initial experiments using a new approach to the automatic identification of Be and B[e] star spectra in large archives. With the enormous amount of such data it is no longer feasible to analyze them using classical approaches. We introduce an evolutionary synthesis of the classification by means of analytic programming, a method of symbolic regression. By this method, we synthesize the mathematical formulas that best approximate chosen samples of the stellar spectra. The selected category is then the one whose formula has the lowest difference from the particular spectrum. The results show that classification of stellar spectra by means of analytic programming is able to identify different shapes of the spectra.

  5. Development of Automatic Extraction Weld for Industrial Radiographic Negative Inspection

    Institute of Scientific and Technical Information of China (English)

    张晓光; 林家骏; 李浴; 卢印举

    2003-01-01

    In industrial X-ray inspection, in order to identify weld defects automatically, raise the identification ratio, and avoid processing a complex background, extracting the weld from the image is an important step for subsequent processing. According to the characteristics of weld radiograph images, a median filter is adopted to reduce high-frequency noise; the relative gray scale of the image is then chosen as the fuzzy characteristic, an image gray-scale fuzzy matrix is constructed, and a suitable membership function is selected to describe the edge characteristic. A fuzzy algorithm is adopted to enhance the radiograph image. Based on the intensity distribution characteristic of the weld, a weld extraction methodology is then designed. This paper describes the whole weld extraction methodology, including noise reduction, fuzzy enhancement and the weld extraction process. To prove its effectiveness, the methodology was tested with 64 weld negative images available for this study. The experimental results show that this methodology is very effective for extracting linear welds.
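    The noise-reduction step mentioned above is a standard median filter; a generic 3×3 sketch (not the authors' implementation) over a grayscale image stored as a list of rows looks like this:

```python
from statistics import median

def median_filter_3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighbourhood,
    suppressing impulsive high-frequency noise; border pixels are left as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out
```

Unlike a mean filter, the median removes isolated bright or dark speckle pixels without blurring the weld edges needed for the later fuzzy-enhancement step.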

  6. Automatic target recognition based on cross-plot.

    Directory of Open Access Journals (Sweden)

    Kelvin Kian Loong Wong

    Full Text Available Automatic target recognition that relies on rapid feature extraction of a real-time target from photo-realistic imaging will enable efficient identification of target patterns. To achieve this objective, cross-plots of binary patterns are explored as potential signatures for the observed target, capturing the crucial spatial features at high speed using minimal computational resources. Target recognition was implemented based on the proposed pattern-recognition concept and tested rigorously for its precision and recall performance. We conclude that cross-plotting is able to produce a digital fingerprint of a target that correlates efficiently and effectively to signatures of patterns whose identity is held in a target repository.

  7. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  8. Automatic image cropping for republishing

    Science.gov (United States)

    Cheatle, Phil

    2010-02-01

    Image cropping is an important aspect of creating aesthetically pleasing web pages and repurposing content for different web or printed output layouts. Cropping provides both the possibility of improving the composition of the image, and also the ability to change the aspect ratio of the image to suit the layout design needs of different document or web page formats. This paper presents a method for aesthetically cropping images on the basis of their content. Underlying the approach is a novel segmentation-based saliency method which identifies some regions as "distractions", as an alternative to the conventional "foreground" and "background" classifications. Distractions are a particular problem with typical consumer photos found on social networking websites such as Facebook, Flickr etc. Automatic cropping is achieved by identifying the main subject area of the image and then using an optimization search to expand this to form an aesthetically pleasing crop. Evaluation of aesthetic functions like auto-crop is difficult as there is no single correct solution. A further contribution of this paper is an automated evaluation method which goes some way towards handling the complexity of aesthetic assessment. This allows crop algorithms to be easily evaluated against a large test set.

  9. Automatic segmentation of psoriasis lesions

    Science.gov (United States)

    Ning, Yang; Shi, Chenbo; Wang, Li; Shu, Chang

    2014-10-01

    The automatic segmentation of psoriatic lesions has been widely researched in recent years. It is an important step in computer-aided calculation of the PASI score for estimating lesion severity. Current algorithms can handle only erythema or only scaling segmentation, whereas in practice scaling and erythema are often mixed together. To segment the whole lesion area, this paper proposes an algorithm based on random forests with color and texture features. The algorithm has three steps. In the first step, polarized light is used during imaging, exploiting the skin's Tyndall effect, to eliminate specular reflection, and the Lab color space is used to match human perception. In the second step, a sliding window and its sub-windows are used to obtain texture and color features. In this step, an image-roughness feature is defined so that scaling can be easily separated from normal skin. Finally, random forests are used to ensure the generalization ability of the algorithm. The algorithm gives reliable segmentation results even when images have different lighting conditions and skin types. In the data set provided by Union Hospital, more than 90% of images can be segmented accurately.
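
A minimal sketch of the classification stage, assuming simple illustrative color and roughness features; the paper's exact descriptors and the hospital data are not available here, so all names and values below are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def block_features(patch):
    """Color + roughness features for one sliding-window patch (illustrative)."""
    means = patch.reshape(-1, 3).mean(axis=0)                   # mean of each Lab channel
    stds = patch.reshape(-1, 3).std(axis=0)                     # channel variability
    roughness = np.abs(np.diff(patch[..., 0], axis=0)).mean()   # crude texture proxy for scaling
    return np.concatenate([means, stds, [roughness]])

# Toy training set: patches labelled 0 = normal skin, 1 = erythema, 2 = scaling
rng = np.random.default_rng(0)
patches = rng.random((60, 8, 8, 3))
labels = rng.integers(0, 3, 60)
X = np.array([block_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
pred = clf.predict(X)
```

The roughness feature is the key design choice: scaling has high local intensity variation, so a single scalar roughness term lets the forest separate it from smooth skin.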

  10. Automatic Assessment of Programming assignment

    Directory of Open Access Journals (Sweden)

    Surendra Gupta

    2012-01-01

    Full Text Available In today's world, the study of computer languages is increasingly important. Effective programming skills are needed by all computer science students, and they can master programming only through intensive exercise practice. With the number of students in a class increasing day by day, the assessment of programming exercises creates an extensive workload for the teacher/instructor, particularly if it has to be carried out manually. In this paper, we propose an automatic assessment system for programming assignments that uses a verification program with random inputs. One of the most important properties of a program is that it carries out its intended function. The intended function of a program, or part of a program, can be verified by a verification program based on the inverse function; we use such a verification program for checking intended functionality and evaluating a program. This assessment system has been tested on basic C programming courses, and the results show that it works well for basic programming exercises, with some promising initial results.
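
The inverse-function verification idea can be sketched as follows; the run-length encode/decode pair and all helper names are hypothetical examples, not from the paper:

```python
import random

def encode(xs):
    """Hypothetical student submission: run-length encoding."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)
        else:
            out.append((x, 1))
    return out

def decode(pairs):
    """Instructor-supplied inverse used by the verification program."""
    return [x for x, n in pairs for _ in range(n)]

def verify(func, inverse, trials=100):
    """Check inverse(func(x)) == x on random inputs (sketch of the idea)."""
    for _ in range(trials):
        xs = [random.randint(0, 3) for _ in range(random.randint(0, 20))]
        if inverse(func(xs)) != xs:
            return False, xs        # counterexample: the submission fails
    return True, None

ok, counterexample = verify(encode, decode)
```

Random inputs sidestep the need for hand-written test fixtures, at the cost of only probabilistic coverage of edge cases.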

  11. Automatic Weather Station (AWS) Lidar

    Science.gov (United States)

    Rall, Jonathan A.R.; Abshire, James B.; Spinhirne, James D.; Smith, David E. (Technical Monitor)

    2000-01-01

    An autonomous, low-power atmospheric lidar instrument is being developed at NASA Goddard Space Flight Center. This compact, portable lidar will operate continuously in a temperature controlled enclosure, charge its own batteries through a combination of a small rugged wind generator and solar panels, and transmit its data from remote locations to ground stations via satellite. A network of these instruments will be established by co-locating them at remote Automatic Weather Station (AWS) sites in Antarctica under the auspices of the National Science Foundation (NSF). The NSF Office of Polar Programs provides support to place the weather stations in remote areas of Antarctica in support of meteorological research and operations. The AWS meteorological data will directly benefit the analysis of the lidar data while a network of ground based atmospheric lidar will provide knowledge regarding the temporal evolution and spatial extent of Type Ia polar stratospheric clouds (PSC). These clouds play a crucial role in the annual austral springtime destruction of stratospheric ozone over Antarctica, i.e. the ozone hole. In addition, the lidar will monitor and record the general atmospheric conditions (transmission and backscatter) of the overlying atmosphere which will benefit the Geoscience Laser Altimeter System (GLAS). Prototype lidar instruments have been deployed to the Amundsen-Scott South Pole Station (1995-96, 2000) and to an Automated Geophysical Observatory site (AGO 1) in January 1999. We report on data acquired with these instruments, instrument performance, and anticipated performance of the AWS Lidar.

  12. Automatic onset phase picking for portable seismic array observation

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Automatic phase picking is a critical procedure in seismic data processing, especially for the huge amounts of data recorded by a large-scale portable seismic array. This study presents a new method for automatic, accurate onset-phase picking that exploits the density of seismic array observations. In our method, the Akaike information criterion (AIC) for single-channel observations and least-squares cross-correlation for multi-channel observations are combined. Tests on seismic array data triggered with the short-term average/long-term average (STA/LTA) technique show that the phase-picking error is less than 0.3 s for local events when using the single-channel AIC algorithm. With the multi-channel least-squares cross-correlation technique, clear teleseismic P onsets can be detected reliably. Even for teleseismic records with a high noise level, the algorithm is able to effectively avoid misdetections.
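
The single-channel AIC picker follows the standard maximum-likelihood formulation; below is a sketch on a synthetic variance-change record, with all data and parameters invented for illustration:

```python
import numpy as np

def aic_pick(trace):
    """Single-channel AIC onset picker (maximum-likelihood form):
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]));
    the onset estimate is the index minimising AIC."""
    x = np.asarray(trace, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(2, n - 2):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic record: weak noise followed by a stronger arrival at sample 200
rng = np.random.default_rng(1)
trace = np.concatenate([0.1 * rng.standard_normal(200),
                        1.0 * rng.standard_normal(100)])
onset = aic_pick(trace)
```

In practice the AIC window is applied only to the short segment around an STA/LTA trigger, as in the paper, rather than to the whole trace.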

  13. Automatic Ration Material Distributions Based on GSM and RFID Technology

    Directory of Open Access Journals (Sweden)

    S.Valarmathy

    2013-10-01

    Full Text Available Now a day ration card is very important for every home and used for various field such as family members details, to get gas connection, it act as address proof for various purposes etc. All the people having a ration card to buy the various materials (sugar, rice, oil, kerosene, etc from the ration shops. But in this system having two draw backs, first one is weight of the material may be inaccurate due to human mistakes and secondly, if not buy the materials at the end of the month, they will sale to others without any intimation to the government and customers. In this paper, proposed an Automatic Ration Materials Distribution Based on GSM (Global System for Mobile and RFID (Radio Frequency Identification technology instead of ration cards. To get the materials in ration shops need to show the RFID tag into the RFID reader, then controller check the customer codes and details of amounts in the card. After verification, these systems show the amount details. Then customer need to enter they required materials by using keyboard, after receiving materials controller send the information to government office and customer through GSM technology. In this system provides the materials automatically without help of humans.

  14. Automatic detection of scoliotic curves in posteroanterior radiographs.

    Science.gov (United States)

    Duong, Luc; Cheriet, Farida; Labelle, Hubert

    2010-05-01

    Spinal deformities are diagnosed using posteroanterior (PA) radiographs. Automatic detection of the spine on conventional radiographs would be of interest to quantify curve severity, would help reduce observer variability and would allow large-scale retrospective studies on radiographic databases. The goal of this paper is to present a new method for automatic detection of spinal curves from a PA radiograph. A region of interest (ROI) is first extracted according to the 2-D shape variability of the spine obtained from a set of PA radiographs of scoliotic patients. This region includes 17 bounding boxes delimiting each vertebral level from T1 to L5. An adaptive filter combining shock with complex diffusion is used to individually restore the image of each vertebral level. Then, texture descriptors of small block elements are computed and submitted for training to support vector machines (SVM). Vertebral body locations are thereby inferred for a particular vertebral level. The classifications of block elements for all 17 SVMs are identified in the image and a voting system is introduced to accumulate correctly predicted blocks. A spline curve is then fitted through the centers of the predicted vertebral regions and compared to a manual identification using a Student t-test. A clinical validation is performed using 100 radiographs of scoliotic patients (not used for training) and the detected spinal curve is found to be statistically similar (p < 0.05) in 93% of cases to the manually identified curve.
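
The final curve-fitting step can be sketched as follows, with hypothetical vertebral-body centres; the smoothing parameter and coordinates are illustrative, not the authors' settings:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical detected vertebral-body centres, T1 (top) to L5 (bottom),
# in image coordinates (x = lateral position, y = vertical position)
y = np.linspace(50, 850, 17)
x = 300 + 40 * np.sin(np.linspace(0, np.pi, 17))   # toy scoliotic lateral deviation

# Fit a smoothing spline through the 17 centres and resample it densely,
# giving the detected spinal curve to compare with the manual identification
tck, _ = splprep([x, y], s=5.0)
xs, ys = splev(np.linspace(0, 1, 200), tck)
```

A parametric spline (rather than a function y(x)) is the natural choice here, since a severe scoliotic curve can double back laterally.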

  15. Requirements to a Norwegian national automatic gamma monitoring system

    DEFF Research Database (Denmark)

    Lauritzen, B.; Jensen, Per Hedemann; Nielsen, F.

    2005-01-01

    An assessment of the overall requirements to a Norwegian gamma-monitoring network is undertaken, with special emphasis on the geographical distribution of automatic gamma monitoring stations, the type of detectors in such stations and the sensitivity of the system in terms of ambient dose equivalent rate...... large distances using historical weather data; the minimum density is estimated from the requirement that a radioactive plume may not slip unnoticed in between stations of the monitoring network. The sensitivity of the gamma monitoring system is obtained from the condition that events that may require...

  16. Automatic Testing of a CANopen Node

    OpenAIRE

    Liang, Hui

    2013-01-01

    This Bachelor’s thesis was commissioned by TK Engineering Oy in Vaasa. The goals of the thesis were to test a prototype CANopen node, called UWASA Node for conformance to the CiA 301 standard, and to develop the automatic performance test software and the automatic CiA 401 test software. A test report that describes to the designer what needs to be corrected and improved is made in this thesis. For the CiA 301 test there is a CANopen conformance test tool that can be used. The automatic perfo...

  17. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    Science.gov (United States)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g. PDF, Word, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g. Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation leads us to extend the automatic knowledge extraction process of cTAKES for the biomedical research domain by improving the ontology guided information extraction

  18. Automatic semi-continuous accumulation chamber for diffuse gas emissions monitoring in volcanic and non-volcanic areas

    Science.gov (United States)

    Lelli, Matteo; Raco, Brunella; Norelli, Francesco; Virgili, Giorgio; Continanza, Davide

    2016-04-01

    For several decades the accumulation chamber method has been used intensively in monitoring diffuse gas emissions in volcanic areas. Although some improvements have been made in the sensitivity and reproducibility of the detectors, the equipment used to measure the temporal variation of gas emissions is usually expensive and bulky. The unit described in this work is a low-cost, easy to install-and-manage instrument that will make possible the creation of low-cost monitoring networks. The Non-Dispersive Infrared detector used has a concentration range of 0-5% CO2, but substitution with another detector (range 0-5000 ppm) is possible and very easy. The power supply unit has a 12V, 7Ah battery, which is recharged by a 35W solar panel (equipped with a charge regulator). The control unit contains a custom-programmed CPU, and remote transmission is assured by a GPRS modem. The chamber is activated by the datalogger unit, using a linear actuator to move between the closed position (sampling) and the open position (idle). A probe for measuring soil temperature, soil electrical conductivity, soil volumetric water content, air pressure and air temperature is assembled on the device, which is already arranged for the connection of other external sensors, including an automatic weather station. The automatic station was field-tested on Lipari island (Sicily, Italy) over a period of three months, performing CO2 flux measurements (and also weather-parameter measurements) every hour. The ability to measure, in semi-continuous mode and at the same time, the gas fluxes from soil and many external parameters aids time-series analysis aimed at distinguishing gas-flux anomalies due to variations in the deep system (e.g. onset of volcanic crises) from those triggered by external conditions.
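
Converting the chamber's concentration build-up into a flux uses the usual accumulation-chamber relation, flux = (dC/dt)·V/A; the sketch below uses invented numbers and leaves units symbolic:

```python
import numpy as np

def chamber_flux(t, c, volume, area):
    """Flux from the initial concentration build-up in the closed chamber:
    flux = (dC/dt) * V / A, with dC/dt from a linear fit (sketch)."""
    slope, _ = np.polyfit(t, c, 1)    # dC/dt over the early, linear part
    return slope * volume / area

t = np.arange(0.0, 60.0, 5.0)         # seconds after the chamber closes
c = 400.0 + 0.8 * t                   # ppm CO2, synthetic linear build-up
flux = chamber_flux(t, c, volume=0.006, area=0.03)  # chamber V in m^3, footprint A in m^2
# conversion of ppm·m/s to mol m^-2 s^-1 (via air density) is omitted here
```

Only the early, approximately linear part of the record should be fitted, since the build-up flattens as the chamber concentration approaches the soil-gas concentration.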

  19. An efficient approach to the evaluation of mid-term dynamic processes in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Zivanovic, R.M. (Pretoria Technikon (South Africa)); Popovic, D.P. (Nikola Tesla Inst., Belgrade (Yugoslavia). Power System Dept.)

    1993-01-01

    This paper presents some improvements in the methodology for analysing mid-term dynamic processes in power systems. These improvements are: an efficient application of the hierarchical clustering algorithm to adaptive identification of coherent generator groups, and a significant reduction of the mathematical model on the basis of monitoring the state of only one generator in each of the established coherent groups. This enables a flexible, simple and fast transformation from the full to the reduced model and vice versa, a significant acceleration of the simulation while keeping the desired accuracy, and automatic use in continual dynamic analysis. Verification of the above-mentioned contributions was performed on examples of the dynamic analysis of the New England and Yugoslav power systems. (author)
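
The coherency-grouping step can be sketched with an off-the-shelf hierarchical clustering of generator swing curves; the toy signals and the two-group cut below are assumptions for illustration, not the paper's data or distance measure:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy rotor-angle swing curves for six generators: two coherent groups
# whose angles move together after a disturbance
t = np.linspace(0, 5, 100)
rng = np.random.default_rng(2)
swing_a = np.sin(2 * np.pi * 0.7 * t)
swing_b = np.cos(2 * np.pi * 1.1 * t)
swings = np.vstack([swing_a + 0.05 * rng.standard_normal(100) for _ in range(3)] +
                   [swing_b + 0.05 * rng.standard_normal(100) for _ in range(3)])

# Hierarchical clustering on pairwise distances between swing curves;
# each resulting cluster is a coherent group represented by one generator
Z = linkage(swings, method='average')
groups = fcluster(Z, t=2, criterion='maxclust')
```

Once the groups are known, the reduced model keeps one representative generator per cluster, which is what makes the full/reduced model switch cheap.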

  20. Laser Scanner For Automatic Storage

    Science.gov (United States)

    Carvalho, Fernando D.; Correia, Bento A.; Rebordao, Jose M.; Rodrigues, F. Carvalho

    1989-01-01

    Automated magazines are being used in industry more and more. One of the problems related to the automation of a storehouse is the identification of the products involved. Already used for stock management, bar codes offer an easy way to identify a product. Applied to automated magazines, bar codes can represent a great variety of items in a small code. For use by national producers of automated magazines, a dedicated laser scanner has been developed. The prototype uses a He-Ne laser whose beam scans a field angle of 75 degrees at 16 Hz. The scene reflectivity is transduced by a photodiode into an electrical signal, which is then binarized. This digital signal is the input to the decoding program. The machine is able to see bar codes and to decode the information. A parallel interface allows communication with the central unit, which is responsible for the management of the automated magazine.

  1. Automatic tuning of myoelectric prostheses.

    Science.gov (United States)

    Bonivento, C; Davalli, A; Fantuzzi, C; Sacchetti, R; Terenzi, S

    1998-07-01

    This paper is concerned with the development of a software package for the automatic tuning of myoelectric prostheses. The package core consists of Fuzzy Logic Expert Systems (FLES) that embody skilled-operator heuristics for tuning prosthesis control parameters. The prosthesis system is an artificial arm-hand system developed at the National Institute of Accidents at Work (INAIL) laboratories. The prosthesis is powered by an electric motor that is controlled by a microprocessor using myoelectric signals acquired from skin-surface electrodes placed on a muscle in the residual limb of the subject. The software package, Microprocessor Controlled Arm (MCA) Auto Tuning, is a tool for aiding both INAIL expert operators and unskilled persons in the controller parameter tuning procedure. Prosthesis control parameter setup and subsequent recurrent adjustments are fundamental for the correct working of the prosthesis, especially when we consider that myoelectric parameters may vary greatly with environmental changes. The parameter adjustment requires the end-user to go to the manufacturer's laboratory for the control parameter setup because, generally, he/she does not have the necessary knowledge and instruments to do this at home. However, this procedure is not very practical and involves a waste of time for the technicians and uneasiness for the clients. The idea behind the MCA Auto Tuning package consists of translating technician expertise into an FLES knowledge database. The software interacts through a user-friendly graphic interface with an unskilled user, who is guided through a step-by-step prosthesis parameter tuning procedure that emulates the traditional expert-aided procedure. The adoption of this program on a large scale may yield considerable economic benefits and improve the service quality supplied to the users of prostheses. In fact, the time required to set the prosthesis parameters is remarkably reduced, as is the technician

  2. Automaticity in social-cognitive processes.

    Science.gov (United States)

    Bargh, John A; Schwader, Kay L; Hailey, Sarah E; Dyer, Rebecca L; Boothby, Erica J

    2012-12-01

    Over the past several years, the concept of automaticity of higher cognitive processes has permeated nearly all domains of psychological research. In this review, we highlight insights arising from studies in decision-making, moral judgments, close relationships, emotional processes, face perception and social judgment, motivation and goal pursuit, conformity and behavioral contagion, embodied cognition, and the emergence of higher-level automatic processes in early childhood. Taken together, recent work in these domains demonstrates that automaticity does not result exclusively from a process of skill acquisition (in which a process always begins as a conscious and deliberate one, becoming capable of automatic operation only with frequent use) - there are evolved substrates and early childhood learning mechanisms involved as well.

  3. Automatic lexical classification: bridging research and practice.

    Science.gov (United States)

    Korhonen, Anna

    2010-08-13

    Natural language processing (NLP)--the automatic analysis, understanding and generation of human language by computers--is vitally dependent on accurate knowledge about words. Because words change their behaviour between text types, domains and sub-languages, a fully accurate static lexical resource (e.g. a dictionary, word classification) is unattainable. Researchers are now developing techniques that could be used to automatically acquire or update lexical resources from textual data. If successful, the automatic approach could considerably enhance the accuracy and portability of language technologies, such as machine translation, text mining and summarization. This paper reviews the recent and on-going research in automatic lexical acquisition. Focusing on lexical classification, it discusses the many challenges that still need to be met before the approach can benefit NLP on a large scale.

  4. A Demonstration of Automatically Switched Optical Network

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    We build an automatically switched optical network (ASON) testbed with four optical cross-connect nodes. Many fundamental ASON features are demonstrated, which is implemented by control protocols based on generalized multi-protocol label switching (GMPLS) framework.

  5. Automatic acquisition of pattern collocations in GO

    Institute of Scientific and Technical Information of China (English)

    LIU Zhi-qing; DOU Qing; LI Wen-hong; LU Ben-jie

    2008-01-01

    The quality, quantity, and consistency of the knowledge used in GO-playing programs often determine their strength, and automatic acquisition of large amounts of high-quality and consistent GO knowledge is crucial for successful GO playing. In a previous article on this subject, we presented an algorithm for efficient and automatic acquisition of spatial patterns of GO, as well as their frequency of occurrence, from game records. In this article, we present two algorithms: one for efficient and automatic acquisition of pairs of spatial patterns that appear jointly in a local context, and the other for determining whether the joint pattern appearances are statistically significant and not just a coincidence. Results of the two algorithms include 1 779 966 pairs of spatial patterns acquired automatically from 16 067 game records of professional GO players, of which about 99.8% qualify as pattern collocations with a statistical confidence of 99.5% or higher.
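
One way to realise the significance test, assuming a simple one-sided z-test of the joint frequency against independence (the paper does not specify its exact statistic, so this is an illustrative stand-in):

```python
import math

def collocation_significant(n_joint, n_a, n_b, n_total, z_crit=2.576):
    """One-sided z-test: do patterns a and b co-occur more often than the
    independence assumption predicts?  z_crit = 2.576 corresponds to a
    one-sided confidence of 99.5%."""
    p_indep = (n_a / n_total) * (n_b / n_total)   # expected joint probability
    expected = n_total * p_indep
    sd = math.sqrt(n_total * p_indep * (1 - p_indep))
    z = (n_joint - expected) / sd
    return z > z_crit

# Patterns seen 400 and 500 times in 10 000 positions, jointly 80 times
print(collocation_significant(80, 400, 500, 10000))   # → True (z ≈ 13.4)
```

The test filters out coincidental co-occurrences: only pairs whose joint count far exceeds the product of their marginal frequencies are kept as collocations.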

  6. Automatization and familiarity in repeated checking

    NARCIS (Netherlands)

    Dek, Eliane C P; van den Hout, Marcel A.; Giele, Catharina L.; Engelhard, Iris M.

    2014-01-01

    Repeated checking paradoxically increases memory uncertainty. This study investigated the underlying mechanism of this effect. We hypothesized that as a result of repeated checking, familiarity with stimuli increases, and automatization of the checking procedure occurs, which should result in decrea

  7. Automatic Speech Segmentation Based on HMM

    Directory of Open Access Journals (Sweden)

    M. Kroul

    2007-06-01

    Full Text Available This contribution deals with the problem of automatic phoneme segmentation using HMMs. Automation of the speech segmentation task is important for applications where a large amount of data must be processed, so that manual segmentation is out of the question. In this paper we focus on automatic segmentation of recordings that will be used to create a triphone synthesis unit database. For speech synthesis, speech unit quality is a crucial aspect, so maximal segmentation accuracy is needed here. In this work, different kinds of HMMs with various parameters have been trained and their usefulness for automatic segmentation is discussed. At the end of this work, segmentation accuracy tests of all models are presented.

  8. Automatic coding of online collaboration protocols

    NARCIS (Netherlands)

    Erkens, Gijsbert; Janssen, J.J.H.M.

    2006-01-01

    An automatic coding procedure is described to determine the communicative functions of messages in chat discussions. Five main communicative functions are distinguished: argumentative (indicating a line of argumentation or reasoning), responsive (e.g., confirmations, denials, and answers), informati

  9. Collapsible truss structure is automatically expandable

    Science.gov (United States)

    1965-01-01

    Coil springs wound with maximum initial tension in a three-truss, closed loop structure form a collapsible truss structure. The truss automatically expands and provides excellent rigidity and close dimensional tolerance when expanded.

  10. Phoneme vs Grapheme Based Automatic Speech Recognition

    OpenAIRE

    Magimai.-Doss, Mathew; Dines, John; Bourlard, Hervé; Hermansky, Hynek

    2004-01-01

    In recent literature, different approaches have been proposed to use graphemes as subword units with implicit source of phoneme information for automatic speech recognition. The major advantage of using graphemes as subword units is that the definition of lexicon is easy. In previous studies, results comparable to phoneme-based automatic speech recognition systems have been reported using context-independent graphemes or context-dependent graphemes with decision trees. In this paper, we study...

  11. Automatic quiz generation for elderly people

    OpenAIRE

    Samuelsen, Jeanette

    2016-01-01

    Studies have indicated that games can be beneficial for the elderly, in areas such as cognitive functioning and well-being. Taking part in social activities, such as playing a game with others, could also be beneficial. One type of game is a computer-based quiz. One can create quiz questions manually; however, this can be time-consuming. Another approach is to generate quiz questions automatically. This project has examined how quizzes for Norwegian elderly can be automatically generated usin...

  12. Automatic Age Estimation System for Face Images

    OpenAIRE

    Chin-Teng Lin; Dong-Lin Li; Jian-Hao Lai; Ming-Feng Han; Jyh-Yeong Chang

    2012-01-01

    Humans are the most important tracking objects in surveillance systems. However, human tracking is not enough to provide the required information for personalized recognition. In this paper, we present a novel and reliable framework for automatic age estimation based on computer vision. It exploits global face features based on the combination of Gabor wavelets and orthogonal locality preserving projections. In addition, the proposed system can extract face aging features automatically in rea...

  13. Automatic Control of Freeboard and Turbine Operation

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Frigaard, Peter Bak; Friis-Madsen, Erik;

    The report deals with the modules for automatic control of freeboard and turbine operation on board the Wave Dragon, Nissum Bredning (WD-NB) prototype, and covers what has been going on up to ultimo 2003.

  14. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathaniel O.

    2010-05-31

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  15. Automatic Fringe Detection Of Dynamic Moire Patterns

    Science.gov (United States)

    Fang, Jing; Su, Xian-ji; Shi, Hong-ming

    1989-10-01

    The fringe-carrier method is used for automatic fringe-order numbering of dynamic in-plane moire patterns. In the experiment, both the static carrier and the dynamic moire patterns are recorded. Image files corresponding to successive instants are set up to assign fringe orders automatically. By subtracting the carrier image from the modulated ones, the moire patterns due to the dynamic deformations are restored, with fringe-order variation displayed by different grey levels.

  16. Automatic safety rod for reactors. [LMFBR

    Science.gov (United States)

    Germer, J.H.

    1982-03-23

    An automatic safety rod for a nuclear reactor containing neutron absorbing material and designed to be inserted into a reactor core after a loss-of-flow. Actuation is based upon either a sudden decrease in core pressure drop or a pressure drop below a predetermined minimum value. The automatic control rod includes a pressure regulating device whereby a controlled decrease in operating pressure due to reduced coolant flow does not cause the rod to drop into the core.
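The two actuation conditions can be written as a simple decision rule; all names and threshold values below are illustrative, not taken from the patent.

```python
def should_insert_rod(dp_history, dp_min, sudden_drop_threshold):
    """Decide whether the safety rod should drop into the core.

    dp_history: recent core pressure-drop samples, most recent last.
    Actuates on either an absolute minimum violation or a sudden
    sample-to-sample decrease in pressure drop.
    """
    current = dp_history[-1]
    if current < dp_min:                          # below predetermined minimum
        return True
    if len(dp_history) >= 2:
        if dp_history[-2] - current > sudden_drop_threshold:
            return True                           # sudden decrease detected
    return False
```

A slow, controlled pressure reduction (small sample-to-sample change, still above the minimum) does not trigger insertion, mirroring the pressure-regulating behavior described in the abstract.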

  17. Automatic penalty continuation in structural topology optimization

    DEFF Research Database (Denmark)

    Rojas Labanda, Susana; Stolpe, Mathias

    2015-01-01

    this issue is addressed. We propose an automatic continuation method, where the material penalization parameter is included as a new variable in the problem and a constraint guarantees that the requested penalty is eventually reached. The numerical results suggest that this approach is an appealing...... alternative to continuation methods. Automatic continuation also generally obtains better designs than the classical formulation using a reduced number of iterations....

  18. An automatic damage detection algorithm based on the Short Time Impulse Response Function

    Science.gov (United States)

    Auletta, Gianluca; Carlo Ponzo, Felice; Ditommaso, Rocco; Iacovino, Chiara

    2016-04-01

    Structural Health Monitoring, together with dynamic identification and damage detection techniques, has grown in popularity in both the scientific and civil communities in recent years. The basic idea arises from the observation that spectral properties, described in terms of the so-called modal parameters (eigenfrequencies, mode shapes, and modal damping), are functions of the physical properties of the structure (mass, energy dissipation mechanisms and stiffness). Damage detection traditionally consists of visual inspection and/or non-destructive testing. A different approach consists of vibration-based methods that detect changes in damage-related features. Structural damage exhibits its main effects in terms of stiffness and damping variation. Damage detection approaches based on dynamic monitoring of structural properties over time have received considerable attention in the recent scientific literature. We focused attention on structural damage localization and detection after an earthquake, based on the evaluation of the mode curvature difference. The methodology is based on the acquisition of the structural dynamic response through a three-directional accelerometer installed on the top floor of the structure. It is able to assess the presence of any damage on the structure, providing also information about the position and severity of the damage. The procedure is based on a Band-Variable Filter (Ditommaso et al., 2012), used to extract the dynamic characteristics of systems that evolve over time by acting simultaneously in the time and frequency domains. In this paper, using a combined approach based on the Fourier Transform and on seismic interferometric analysis, a useful tool for the automatic fundamental frequency evaluation of nonlinear structures is proposed. Moreover, using this kind of approach it is possible to improve some of the existing methods for automatic damage detection, providing stable results.
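The mode curvature difference used for localization can be sketched numerically: compute the second spatial derivative of a mode shape before and after damage, and look for peaks in the absolute difference (a generic sketch, not the paper's full band-variable-filter pipeline).

```python
import numpy as np

def mode_curvature_difference(phi_healthy, phi_damaged, dx=1.0):
    """Absolute difference of mode-shape curvatures (second spatial
    derivatives); peaks indicate the likely damage location."""
    k_h = np.gradient(np.gradient(phi_healthy, dx), dx)
    k_d = np.gradient(np.gradient(phi_damaged, dx), dx)
    return np.abs(k_d - k_h)
```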

  19. Is Mobile-Assisted Language Learning Really Useful? An Examination of Recall Automatization and Learner Autonomy

    Science.gov (United States)

    Sato, Takeshi; Murase, Fumiko; Burden, Tyler

    2015-01-01

    The aim of this study is to examine the advantages of Mobile-Assisted Language Learning (MALL), especially vocabulary learning of English as a foreign or second language (L2) in terms of the two strands: automatization and learner autonomy. Previous studies articulate that technology-enhanced L2 learning could bring about some positive effects.…

  20. An Evaluation of Response Cost in the Treatment of Inappropriate Vocalizations Maintained by Automatic Reinforcement

    Science.gov (United States)

    Falcomata, Terry S.; Roane, Henry S.; Hovanetz, Alyson N.; Kettering, Tracy L.; Keeney, Kris M.

    2004-01-01

    In the current study, we examined the utility of a procedure consisting of noncontingent reinforcement with and without response cost in the treatment of inappropriate vocalizations maintained by automatic reinforcement. Results are discussed in terms of examining the variables that contribute to the effectiveness of response cost as treatment for…

  1. 14 CFR 23.1329 - Automatic pilot system.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Automatic pilot system. 23.1329 Section 23...: Installation § 23.1329 Automatic pilot system. If an automatic pilot system is installed, it must meet the following: (a) Each system must be designed so that the automatic pilot can— (1) Be quickly and...

  2. Audio watermarking technologies for automatic cue sheet generation systems

    Science.gov (United States)

    Caccia, Giuseppe; Lancini, Rosa C.; Pascarella, Annalisa; Tubaro, Stefano; Vicario, Elena

    2001-08-01

    Usually a watermark is used as a way of hiding information on digital media. The watermarked information may be used to allow copyright protection or user and media identification. In this paper we propose a watermarking scheme for digital audio signals that allows automatic identification of musical pieces transmitted in TV broadcasting programs. In our application the watermark must be, obviously, imperceptible to the users, should be robust to standard TV and radio editing, and have a very low complexity. This last item is essential to allow a software real-time implementation of the insertion and detection of watermarks using only a minimum amount of the computation power of a modern PC. In the proposed method the input audio sequence is subdivided into frames. For each frame a watermark spread spectrum sequence is added to the original data. A two-step filtering procedure is used to generate the watermark from a Pseudo-Noise (PN) sequence. The filters approximate respectively the threshold and the frequency masking of the Human Auditory System (HAS). In the paper we first discuss the watermark embedding system and then the detection approach. The results of a large set of subjective tests are also presented to demonstrate the quality and robustness of the proposed approach.
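The frame-wise spread-spectrum idea can be sketched as follows: add a scaled PN sequence to each frame, then detect by correlating with the regenerated sequence. The HAS-based shaping filters of the paper are omitted, and all parameter names are illustrative.

```python
import numpy as np

def embed_watermark(audio, pn_seed, frame_len=1024, alpha=0.01):
    """Add a scaled +/-1 PN sequence to each audio frame (no perceptual
    shaping; a bare spread-spectrum sketch)."""
    rng = np.random.default_rng(pn_seed)
    out = audio.astype(float).copy()
    for start in range(0, len(out) - frame_len + 1, frame_len):
        pn = rng.choice([-1.0, 1.0], size=frame_len)   # PN sequence
        out[start:start + frame_len] += alpha * pn
    return out

def detect_watermark(audio, pn_seed, frame_len=1024):
    """Correlate each frame with the regenerated PN sequence; a clearly
    positive mean correlation indicates the watermark is present."""
    rng = np.random.default_rng(pn_seed)
    scores = []
    for start in range(0, len(audio) - frame_len + 1, frame_len):
        pn = rng.choice([-1.0, 1.0], size=frame_len)
        scores.append(float(np.dot(audio[start:start + frame_len], pn)))
    return float(np.mean(scores))
```

Because the PN sequence is regenerated from the same seed, detection needs no access to the original audio, matching the blind-detection setting of broadcast monitoring.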

  3. Automatic learning-based beam angle selection for thoracic IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy; Marshall, Andrea [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, Toronto, Ontario M5G 1P5 (Canada); Levinshtein, Alex [Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4 (Canada); Hope, Andrew J.; Lindsay, Patricia [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Pekar, Vladimir [Philips Healthcare, Markham, Ontario L6C 2S3 (Canada)

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
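The learning step can be sketched with scikit-learn: train a random forest regressor to map per-beam feature vectors to a beam score, then keep the top-scoring candidate angles. All data, sizes, and the feature-to-score relationship below are invented placeholders, not the paper's actual features or optimization scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Mock "anatomical feature" vectors and a mock beam-score target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # per-beam feature vectors
y = X[:, 0] - 0.5 * X[:, 1]                   # stand-in beam-score target

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

candidate_angles = np.arange(0, 360, 20)      # candidate gantry angles (deg)
candidate_feats = rng.normal(size=(len(candidate_angles), 5))
scores = model.predict(candidate_feats)
best = candidate_angles[np.argsort(scores)[::-1][:7]]   # keep 7 beams
```

The paper additionally adjusts the selected angles with an optimization over learned interbeam dependencies, which this sketch does not model.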

  4. Efficient formulations of the material identification problem using full-field measurements

    Science.gov (United States)

    Pérez Zerpa, Jorge M.; Canelas, Alfredo

    2016-08-01

    The material identification problem addressed consists of determining the constitutive parameters distribution of a linear elastic solid using displacement measurements. This problem has been considered in important applications such as the design of methodologies for breast cancer diagnosis. Since the resolution of real life problems involves high computational costs, there is great interest in the development of efficient methods. In this paper two new efficient formulations of the problem are presented. The first formulation leads to a second-order cone optimization problem, and the second one leads to a quadratic optimization problem, both allowing the resolution of the problem with high efficiency and precision. Numerical examples are solved using synthetic input data with error. A regularization technique is applied using the Morozov criterion along with an automatic selection strategy of the regularization parameter. The proposed formulations present great advantages in terms of efficiency, when compared to other formulations that require the application of general nonlinear optimization algorithms.

  5. Towards Automatic Improvement of Patient Queries in Health Retrieval Systems

    Directory of Open Access Journals (Sweden)

    Nesrine KSENTINI

    2016-07-01

    Full Text Available With the adoption of health information technology for clinical health, e-health is becoming usual practice today. Users of this technology find it difficult to seek information relevant to their needs due to the increasing amount of clinical and medical data on the web and their lack of knowledge of medical jargon. In this regard, a method is described to improve users' queries by automatically adding new related terms that appear in the same context as the original query, in order to improve final search results. This method is based on the assessment of semantic relationships, defined by a proposed statistical method, between a set of terms or keywords. Experiments were performed on the CLEF-eHealth-2015 database and the obtained results show the effectiveness of our proposed method.
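Context-based query expansion of this kind can be sketched with a simple co-occurrence count: add the terms that most often appear in documents containing the original query terms. This is a generic stand-in for the paper's statistical semantic-relationship measure.

```python
from collections import Counter

def expand_query(query_terms, corpus, top_k=2):
    """Append the top_k terms that co-occur most often with the query
    terms across the corpus (illustrative expansion strategy)."""
    co = Counter()
    query = set(query_terms)
    for doc in corpus:
        words = set(doc.lower().split())
        if words & query:                 # document shares a query term
            for w in words - query:
                co[w] += 1                # count co-occurring terms
    return list(query_terms) + [w for w, _ in co.most_common(top_k)]
```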

  6. Does long-term object priming depend on the explicit detection of object identity at encoding?

    Science.gov (United States)

    Gomes, Carlos A; Mayes, Andrew

    2015-01-01

    It is currently unclear whether objects have to be explicitly identified at encoding for reliable behavioral long-term object priming to occur. We conducted two experiments that investigated long-term object and non-object priming using a selective-attention encoding manipulation that reduces explicit object identification. In Experiment 1, participants either counted dots flashed within an object picture (shallow encoding) or engaged in an animacy task (deep encoding) at study, whereas, at test, they performed an object-decision task. Priming, as measured by reaction times (RTs), was observed for both types of encoding, and was of equivalent magnitude. In Experiment 2, non-object priming (faster RTs for studied relative to unstudied non-objects) was also obtained under the same selective-attention encoding manipulation as in Experiment 1, and the magnitude of the priming effect was equivalent between experiments. In contrast, we observed a linear decrement in recognition memory accuracy across conditions (deep encoding of Experiment 1 > shallow encoding of Experiment 1 > shallow encoding of Experiment 2), suggesting that priming was not contaminated by explicit memory strategies. We argue that our results are more consistent with the identification/production framework than the perceptual/conceptual distinction, and we conclude that priming of pictures largely ignored at encoding can be subserved by the automatic retrieval of two types of instances: one at the motor level and another at an object-decision level.

  7. An improved automatic detection method for earthquake-collapsed buildings from ADS40 image

    Institute of Scientific and Technical Information of China (English)

    GUO HuaDong; LU LinLin; MA JianWen; PESARESI Martino; YUAN FangYan

    2009-01-01

    Earthquake-collapsed building identification is important in earthquake damage assessment and is evidence for mapping seismic intensity. After the May 12th Wenchuan major earthquake occurred, experts from CEODE and IPSC collaborated to make a rapid earthquake damage assessment. A crucial task was to identify collapsed buildings from ADS40 images in the earthquake region. The difficulty was to differentiate collapsed buildings from concrete bridges, dry gravels, and landslide-induced rolling stones since they had a similar gray level range in the image. Based on the IPSC method, an improved automatic identification technique was developed and tested in the study area, a portion of Beichuan County. Final results showed that the technique's accuracy was over 95%. Procedures and results of this experiment are presented in this article. The theory of this technique indicates that it could be applied to collapsed building identification caused by other disasters.

  8. Sensitivity-based model updating for structural damage identification using total variation regularization

    Science.gov (United States)

    Grip, Niklas; Sabourova, Natalia; Tu, Yongming

    2017-02-01

    Sensitivity-based Finite Element Model Updating (FEMU) is one of the widely accepted techniques used for damage identification in structures. FEMU can be formulated as a numerical optimization problem and solved iteratively, automatically updating the unknown model parameters by minimizing the difference between measured and analytical structural properties. However, in the presence of noise in the measurements, the updating results are usually prone to errors. This is mathematically described as instability of the damage identification as an inverse problem. One way to resolve this problem is by using regularization. In this paper, we compare a well established interpolation-based regularization method against methods based on the minimization of the total variation of the unknown model parameters. These are new regularization methods for structural damage identification. We investigate how using Huber and pseudo-Huber functions in the definition of total variation affects important properties of the methods. For instance, for well-localized damages the results show a clear advantage of the total variation based regularization in terms of the identified location and severity of damage compared with the interpolation-based solution. For a practical test of the proposed method we use a reinforced concrete plate. Measurements and analysis were performed first on an undamaged plate, and then repeated after applying four different degrees of damage.
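The Huber-smoothed total variation regularizer has a compact form: apply the Huber function (quadratic near zero, linear in the tails) to successive differences of the parameter vector and sum. A minimal 1-D sketch:

```python
import numpy as np

def huber(t, delta):
    """Huber function: 0.5*t^2 for |t| <= delta, linear growth beyond."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def tv_huber(theta, delta=0.1):
    """Smoothed total variation of a 1-D parameter vector: sum of Huber
    penalties on successive differences (illustrative regularizer)."""
    return float(np.sum(huber(np.diff(theta), delta)))
```

The linear tails penalize large jumps only proportionally, which is what lets total variation preserve sharp, well-localized damage while still smoothing noise.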

  9. Developing a Speaker Identification System for the DARPA RATS Project

    DEFF Research Database (Denmark)

    Plchot, O; Matsoukas, S; Matejka, P

    2013-01-01

    This paper describes the speaker identification (SID) system developed by the Patrol team for the first phase of the DARPA RATS (Robust Automatic Transcription of Speech) program, which seeks to advance state of the art detection capabilities on audio from highly degraded communication channels. We...

  10. Musical Instrument Identification using Multiscale Mel-frequency Cepstral Coefficients

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Morvidone, Marcela; Daudet, Laurent

    2010-01-01

    We investigate the benefits of evaluating Mel-frequency cepstral coefficients (MFCCs) over several time scales in the context of automatic musical instrument identification for signals that are monophonic but derived from real musical settings. We define several sets of features derived from MFCCs...

  11. Influence of binary mask estimation errors on robust speaker identification

    DEFF Research Database (Denmark)

    May, Tobias

    2017-01-01

    and unreliable feature components in the context of automatic speaker identification (SID). A systematic evaluation under ideal and non-ideal conditions demonstrated that the robustness to errors in the binary mask varied substantially across the different missing-data strategies. Moreover, full and bounded...

  12. 48 CFR 252.211-7006 - Radio Frequency Identification.

    Science.gov (United States)

    2010-10-01

    ... supply, as defined in DoD 4140.1-R, DoD Supply Chain Materiel Management Regulation, AP1.1.11: (A... immediate, automatic, and accurate identification of any item in the supply chain of any company, in any..., organizational tool kits, hand tools, and administrative and housekeeping supplies and equipment. (C) Class...

  13. Automatic Fastening Large Structures: a New Approach

    Science.gov (United States)

    Lumley, D. F.

    1985-01-01

    The external tank (ET) intertank structure for the space shuttle, a 27.5 ft diameter, 22.5 ft long, externally stiffened, mechanically fastened skin-stringer-frame structure, was a labor-intensive manual structure built on a modified Saturn tooling position. A new approach was developed based on half-section subassemblies. The heart of this manufacturing approach will be a 33 ft high vertical automatic riveting system with a 28 ft rotary positioner coming on-line in mid 1985. The Automatic Riveting System incorporates many of the latest automatic riveting technologies. Key features include: vertical columns with two sets of independently operating CNC drill-riveting heads; the capability to drill, insert, and upset any one-piece fastener up to 3/8 inch diameter, including slugs, without displacing the workpiece; an offset bucking ram with programmable rotation and deep retraction; a vision system for automatic parts program re-synchronization and part edge margin control; and an automatic rivet selection/handling system.

  14. Automaticity: Componential, Causal, and Mechanistic Explanations.

    Science.gov (United States)

    Moors, Agnes

    2016-01-01

    The review first discusses componential explanations of automaticity, which specify non/automaticity features (e.g., un/controlled, un/conscious, non/efficient, fast/slow) and their interrelations. Reframing these features as factors that influence processes (e.g., goals, attention, and time) broadens the range of factors that can be considered (e.g., adding stimulus intensity and representational quality). The evidence reviewed challenges the view of a perfect coherence among goals, attention, and consciousness, and supports the alternative view that (a) these and other factors influence the quality of representations in an additive way (e.g., little time can be compensated by extra attention or extra stimulus intensity) and that (b) a first threshold of this quality is required for unconscious processing and a second threshold for conscious processing. The review closes with a discussion of causal explanations of automaticity, which specify factors involved in automatization such as repetition and complexity, and a discussion of mechanistic explanations, which specify the low-level processes underlying automatization.

  15. Some reflections on identification.

    Science.gov (United States)

    Szpilka, J

    1999-12-01

    The author presents a view of identification based on a rereading of two of Freud's key texts and an approach derived from an academic interpretation of Hegel dating from the 1930s. These aspects are considered at length. The importance of the human and anthropogenic element is stressed. The human subject is presented as coming into being through language; being called upon to be what he is not and not to be what he is, the subject appears as wishful in nature, desiring the wish of the other at the same time as he desires the object of the other's wish. The author argues that identification as a problem arises only in a human being who speaks or has received an injunction to speak; this raises the question of who or what he is and of being as such. Analytic treatment may in his view therefore proceed in one of two directions, one based on the interplay of projection and introjection with identification as an end, and the other on resistance and repression where the Oedipus complex is seen as the nuclear issue. Identification is seen in terms of overcoming the negative identity of not being all other subjects, and identity is found to be a conscious response that might even have a political element.

  16. Chinese Term Extraction Based on PAT Tree

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng; FAN Xiao-zhong; XU Yun

    2006-01-01

    A new method of automatic Chinese term extraction is proposed based on the Patricia (PAT) tree. Mutual information is calculated based on prefix searching in a PAT tree of the domain corpus to estimate the internal associative strength between Chinese characters in a string. This improves the speed of term candidate extraction greatly compared with methods based on the domain corpus directly. Common collocation suffix and prefix banks are constructed, and term part-of-speech (POS) composition rules are summarized to improve the precision of term extraction. Experimental results show that the F-measure is 74.97%.
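The associative-strength score can be sketched as pointwise mutual information over adjacent character pairs; here the counts come from a plain string rather than prefix searches in a PAT tree, so this only illustrates the scoring, not the paper's fast lookup structure.

```python
import math
from collections import Counter

def bigram_mi(text):
    """Pointwise mutual information of each adjacent character pair:
    log P(xy) / (P(x) * P(y)); higher values suggest the pair belongs
    inside one term candidate."""
    chars = Counter(text)
    pairs = Counter(zip(text, text[1:]))
    n, n_pairs = len(text), len(text) - 1
    return {p: math.log((c / n_pairs) /
                        ((chars[p[0]] / n) * (chars[p[1]] / n)))
            for p, c in pairs.items()}
```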

  17. Oocytes Polar Body Detection for Automatic Enucleation

    Directory of Open Access Journals (Sweden)

    Di Chen

    2016-02-01

    Full Text Available Enucleation is a crucial step in cloning. In order to achieve automatic blind enucleation, the polar body of the oocyte must be detected automatically. Conventional polar body detection approaches have a low success rate or low efficiency. We propose a polar body detection method based on machine learning in this paper. On one hand, an improved Histogram of Oriented Gradients (HOG) algorithm is employed to extract features of polar body images, which increases the success rate. On the other hand, a position prediction method is put forward to narrow the search range of the polar body, which improves efficiency. Experimental results show that the success rate is 96% for various types of polar bodies. Furthermore, the method is applied to an enucleation experiment and improves the degree of automatic enucleation.
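The HOG building block can be sketched directly: bin gradient magnitudes by gradient orientation within a cell and normalize. This is a minimal single-cell sketch, not the paper's improved HOG variant.

```python
import numpy as np

def hog_cell(patch, nbins=9):
    """Orientation histogram of one image cell: gradient magnitudes
    accumulated into unsigned-orientation bins, then L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0      # unsigned orientation
    bins = (ang / (180.0 / nbins)).astype(int) % nbins
    hist = np.zeros(nbins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m
    return hist / (np.linalg.norm(hist) + 1e-9)       # normalization
```

A full descriptor concatenates many such cell histograms over a detection window before feeding a classifier.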

  18. Support vector machine for automatic pain recognition

    Science.gov (United States)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion, and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects faces from the stored video frames using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of an automatic pain recognition system.
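The classification stage can be sketched with scikit-learn's SVC: feature vectors for "pain" and "no pain" frames train the classifier, which then labels new frames. The feature vectors below are synthetic placeholders, not real location/shape features.

```python
import numpy as np
from sklearn.svm import SVC

# Mock location/shape feature vectors for pain vs. neutral frames.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=1.0, size=(40, 6)),     # pain-like features
               rng.normal(loc=-1.0, size=(40, 6))])   # neutral features
y = np.array([1] * 40 + [0] * 40)

clf = SVC(kernel="rbf").fit(X, y)                     # train the SVM
pred = clf.predict(np.array([[1.0] * 6, [-1.0] * 6])) # classify new frames
```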

  19. Automatic Image-Based Pencil Sketch Rendering

    Institute of Scientific and Technical Information of China (English)

    王进; 鲍虎军; 周伟华; 彭群生; 徐迎庆

    2002-01-01

    This paper presents an automatic image-based approach for converting greyscale images to pencil sketches, in which strokes follow the image features. The algorithm first extracts a dense direction field automatically using Logical/Linear operators which embody the drawing mechanism. Next, a reconstruction approach based on a sampling-and-interpolation scheme is introduced to generate stroke paths from the direction field. Finally, pencil strokes are rendered along the specified paths with consideration of image tone and artificial illumination. As an important application, the technique is applied to render portraits from images with little user interaction. The experimental results demonstrate that the approach can automatically achieve compelling pencil sketches from reference images.
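A direction field of the kind described can be approximated very simply by taking stroke directions perpendicular to the local intensity gradient; this is a crude stand-in for the paper's Logical/Linear operators.

```python
import numpy as np

def stroke_direction_field(img):
    """Per-pixel stroke direction in radians, taken perpendicular to the
    intensity gradient so strokes follow image features (simplified)."""
    gy, gx = np.gradient(img.astype(float))
    return (np.arctan2(gy, gx) + np.pi / 2.0) % np.pi   # fold into [0, pi)
```

For a vertical edge the gradient is horizontal, so the recovered stroke direction is vertical (π/2), i.e. strokes run along the edge.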

  20. Research on an Intelligent Automatic Turning System

    Directory of Open Access Journals (Sweden)

    Lichong Huang

    2012-12-01

    Full Text Available Equipment manufacturing is a strategic industry for a country, and its core part is the CNC machine tool. Therefore, enhancing independent research on relevant CNC machine technology, especially the open CNC system, is of great significance. This paper presents some key techniques of an Intelligent Automatic Turning System and gives a viable solution for system integration. First of all, the integrated system architecture and the flexible and efficient workflow for performing the intelligent automatic turning process are illustrated. Secondly, innovative methods for workpiece feature recognition and expression and for process planning of the NC machining are put forward. Thirdly, the cutting tool auto-selection and the cutting parameter optimization solution are generated with an integrated inference of rule-based reasoning and case-based reasoning. Finally, an actual machining case based on the developed intelligent automatic turning system proved the presented solutions valid, practical and efficient.

  1. Automatic Phonetic Transcription for Danish Speech Recognition

    DEFF Research Database (Denmark)

    Kirkedal, Andreas Søeborg

    Automatic speech recognition (ASR) uses dictionaries that map orthographic words to their phonetic representation. To minimize the occurrence of out-of-vocabulary words, ASR requires large phonetic dictionaries to model pronunciation. Hand-crafted high-quality phonetic dictionaries are difficult...... of automatic phonetic transcriptions vary greatly with respect to language and transcription strategy. For some languages where the difference between the graphemic and phonetic representations are small, graphemic transcriptions can be used to create ASR systems with acceptable performance. In other languages......, like Danish, the graphemic and phonetic representations are very dissimilar and more complex rewriting rules must be applied to create the correct phonetic representation. Automatic phonetic transcribers use different strategies, from deep analysis to shallow rewriting rules, to produce phonetic...
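The shallow rewriting-rule strategy mentioned above can be sketched as ordered string rewrites applied longest-match-first; the rules below are invented for illustration and are not actual Danish phonology.

```python
# Ordered rewrite rules, longest match first (purely illustrative).
RULES = [("sch", "ʃ"), ("ch", "k"), ("ph", "f"), ("a", "æ")]

def g2p(word):
    """Shallow grapheme-to-phoneme conversion by ordered rewrite rules."""
    out, i = "", 0
    while i < len(word):
        for src, tgt in RULES:
            if word.startswith(src, i):   # apply the first matching rule
                out += tgt
                i += len(src)
                break
        else:
            out += word[i]                # no rule applies: copy as-is
            i += 1
    return out
```

Languages like Danish, where spelling and pronunciation diverge strongly, need far richer context-sensitive rules or deeper analysis than this.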

  2. An Automatic Hierarchical Delay Analysis Tool

    Institute of Scientific and Technical Information of China (English)

    Farid Mheir-El-Saadi; Bozena Kaminska

    1994-01-01

    The performance analysis of VLSI integrated circuits (ICs) with flat tools is slow and sometimes even impossible to complete. Some hierarchical tools have been developed to speed up the analysis of these large ICs. However, these hierarchical tools suffer from poor interaction with the CAD database and poorly automated operations. We introduce a general hierarchical framework for performance analysis to solve these problems. The circuit analysis is automatic under the proposed framework. Information that has been automatically abstracted in the hierarchy is kept in database properties along with the topological information. A limited software implementation of the framework, PREDICT, has also been developed to analyze the delay performance. Experimental results show that hierarchical analysis CPU time and memory requirements are low if heuristics are used during the abstraction process.

  3. Towards unifying inheritance and automatic program specialization

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2002-01-01

    Inheritance allows a class to be specialized and its attributes refined, but implementation specialization can only take place by overriding with manually implemented methods. Automatic program specialization can generate a specialized, efficient implementation. However, specialization of programs...... and specialization of classes (inheritance) are considered different abstractions. We present a new programming language, Lapis, that unifies inheritance and program specialization at the conceptual, syntactic, and semantic levels. This paper presents the initial development of Lapis, which uses inheritance...... with covariant specialization to control the automatic application of program specialization to class members. Lapis integrates object-oriented concepts, block structure, and techniques from automatic program specialization to provide both a language where object-oriented designs can be efficiently implemented...

  4. Automatic Age Estimation System for Face Images

    Directory of Open Access Journals (Sweden)

    Chin-Teng Lin

    2012-11-01

    Full Text Available Humans are the most important tracking objects in surveillance systems. However, human tracking is not enough to provide the required information for personalized recognition. In this paper, we present a novel and reliable framework for automatic age estimation based on computer vision. It exploits global face features based on the combination of Gabor wavelets and orthogonal locality preserving projections. In addition, the proposed system can extract face aging features automatically in real‐time. This means that the proposed system has more potential in applications compared to other semi‐automatic systems. The results obtained from this novel approach could provide clearer insight for operators in the field of age estimation to develop real‐world applications.

  5. Automatic weld torch guidance control system

    Science.gov (United States)

    Smaith, H. E.; Wall, W. A.; Burns, M. R., Jr.

    1982-01-01

    A highly reliable, fully digital, closed-circuit television optical type automatic weld seam tracking control system was developed. This automatic tracking equipment is used to reduce weld tooling costs and increase overall automatic welding reliability. The system utilizes a charge injection device digital camera which has 60,512 individual pixels as the light sensing elements. Through conventional scanning means, each pixel in the focal plane is sequentially scanned, the light level signal digitized, and an 8-bit word transmitted to scratch pad memory. From memory, the microprocessor performs an analysis of the digital signal and computes the tracking error. Lastly, the corrective signal is transmitted to a cross seam actuator digital drive motor controller to complete the closed-loop feedback tracking system. This weld seam tracking control system is capable of a tracking accuracy of ±0.2 mm or better. As configured, the system is applicable to square butt, V-groove, and lap joint weldments.
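The error-computation and correction steps can be sketched as locating the seam on one scanline and issuing a proportional cross-seam correction; the seam-finding rule, gains, and scale factors below are all illustrative.

```python
def seam_tracking_error(scanline):
    """Offset of the seam from the image centre, in pixels. Here the
    seam is naively taken as the brightest pixel on the scanline."""
    seam_px = max(range(len(scanline)), key=lambda i: scanline[i])
    return seam_px - len(scanline) // 2

def correction_step(error_px, mm_per_px=0.05, gain=0.5):
    """Proportional corrective signal (mm) for the cross-seam actuator,
    opposing the measured pixel error."""
    return -gain * error_px * mm_per_px
```

In the real system this loop runs per scan, with the digitized pixel words analyzed in scratch-pad memory before the motor controller closes the loop.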

  6. Automatic inference of indexing rules for MEDLINE

    Directory of Open Access Journals (Sweden)

    Shooshan Sonya E

    2008-11-01

    Full Text Available Abstract Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI, a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.

  7. Automatic EEG-assisted retrospective motion correction for fMRI (aE-REMCOR).

    Science.gov (United States)

    Wong, Chung-Ki; Zotev, Vadim; Misaki, Masaya; Phillips, Raquel; Luo, Qingfei; Bodurka, Jerzy

    2016-04-01

    Head motions during functional magnetic resonance imaging (fMRI) impair fMRI data quality and introduce systematic artifacts that can affect interpretation of fMRI results. Electroencephalography (EEG) recordings performed simultaneously with fMRI provide high-temporal-resolution information about ongoing brain activity as well as head movements. Recently, an EEG-assisted retrospective motion correction (E-REMCOR) method was introduced. E-REMCOR utilizes EEG motion artifacts to correct the effects of head movements in simultaneously acquired fMRI data on a slice-by-slice basis. While E-REMCOR is an efficient motion correction approach, it involves an independent component analysis (ICA) of the EEG data and identification of motion-related ICs. Here we report an automated implementation of E-REMCOR, referred to as aE-REMCOR, which we developed to facilitate the application of E-REMCOR in large-scale EEG-fMRI studies. The aE-REMCOR algorithm, implemented in MATLAB, enables an automated preprocessing of the EEG data, an ICA decomposition, and, importantly, an automatic identification of motion-related ICs. aE-REMCOR has been used to perform retrospective motion correction for 305 fMRI datasets from 16 subjects, who participated in EEG-fMRI experiments conducted on a 3T MRI scanner. Performance of aE-REMCOR has been evaluated based on improvement in temporal signal-to-noise ratio (TSNR) of the fMRI data, as well as correction efficiency defined in terms of spike reduction in fMRI motion parameters. The results show that aE-REMCOR is capable of substantially reducing head motion artifacts in fMRI data. In particular, when there are significant rapid head movements during the scan, a large TSNR improvement and high correction efficiency can be achieved. Depending on a subject's motion, an average TSNR improvement over the brain upon the application of aE-REMCOR can be as high as 27%, with top ten percent of the TSNR improvement values exceeding 55%. The average
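The TSNR metric used above to evaluate aE-REMCOR can be computed as in the following sketch (synthetic data, not the study's fMRI; the noise levels are assumptions chosen only to mimic "before" and "after" correction):

```python
import numpy as np

def tsnr(timeseries, axis=-1):
    """Temporal signal-to-noise ratio: the mean over time divided by the
    standard deviation over time, computed per voxel."""
    return timeseries.mean(axis=axis) / timeseries.std(axis=axis)

# Synthetic voxel time series before and after motion correction:
rng = np.random.default_rng(0)
signal = 100.0
before = signal + rng.normal(0, 5.0, size=(10, 10, 200))  # motion-corrupted: noisier
after = signal + rng.normal(0, 3.0, size=(10, 10, 200))   # spikes removed: quieter

# Percentage TSNR improvement averaged over the (synthetic) brain volume:
improvement = 100.0 * (tsnr(after).mean() - tsnr(before).mean()) / tsnr(before).mean()
print(round(improvement, 1))
```

A per-voxel TSNR map (rather than the volume average) is what would reveal regionally varying benefits of the correction.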

  8. AUTOMATIC RECOGNITION OF BOTH INTER AND INTRA CLASSES OF DIGITAL MODULATED SIGNALS USING ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    JIDE JULIUS POPOOLA

    2014-04-01

Full Text Available In radio communication systems, signal modulation format recognition is a significant characteristic used in radio signal monitoring and identification. Over the past few decades, modulation formats have become increasingly complex, which has led to the problem of how to accurately and promptly recognize a modulation format. In addressing these challenges, the development of automatic modulation recognition systems that can classify a radio signal’s modulation format has received worldwide attention. Decision-theoretic methods and pattern recognition solutions are the two typical automatic modulation recognition approaches. While decision-theoretic approaches use probabilistic or likelihood functions, pattern recognition uses feature-based methods. This study applies the pattern recognition approach based on statistical parameters, using an artificial neural network to classify five different digital modulation formats. The paper deals with automatic recognition of both inter- and intra-classes of digitally modulated signals, in contrast to most of the existing algorithms in the literature, which deal with either inter-class or intra-class modulation format recognition. The results of this study show that accurate and prompt modulation recognition is possible beyond the lower bound of 5 dB commonly cited in the literature. The other significant contribution of this paper is the use of the Python programming language, which reduces the computational complexity that characterizes other automatic modulation recognition classifiers developed using the conventional MATLAB neural network toolbox.
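Statistical-parameter features of the kind such classifiers feed to a neural network can be sketched as follows. This is illustrative only: the paper's exact feature set is not given in the abstract, and the four features below (amplitude spread, amplitude kurtosis, phase spread, and a second-order moment ratio) are common choices, not the authors'.

```python
import numpy as np

def modulation_features(iq):
    """A small vector of statistical features often used for digital
    modulation recognition (illustrative; not the paper's exact set)."""
    amp = np.abs(iq)
    phase = np.unwrap(np.angle(iq))
    amp_n = amp / amp.mean()                 # normalise out channel gain
    return np.array([
        amp_n.std(),                                          # amplitude spread
        ((amp_n - amp_n.mean()) ** 4).mean()
            / (amp_n.var() ** 2 + 1e-12),                     # amplitude kurtosis
        phase.std(),                                          # phase spread
        np.abs(np.mean(iq ** 2)) / np.mean(np.abs(iq) ** 2),  # |M20|/M21 moment ratio
    ])

# BPSK vs QPSK: the squared-signal moment ratio separates them cleanly,
# since squaring collapses BPSK to a constant but leaves QPSK zero-mean.
rng = np.random.default_rng(1)
bpsk = rng.choice([-1.0, 1.0], 1000) + 0j
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 1000) / np.sqrt(2)
print(modulation_features(bpsk)[3] > modulation_features(qpsk)[3])  # True
```

In a full classifier, such feature vectors (computed at varying SNR) would form the training set for the neural network.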

  9. An automatic synthesis method of compact models of integrated circuit devices based on equivalent circuits

    Science.gov (United States)

    Abramov, I. I.

    2006-05-01

An automatic method for synthesizing equivalent circuits of integrated circuit devices is described in the paper. The method is based on a physical approach to constructing a finite-difference approximation to the basic equations of semiconductor device physics. It makes it possible to synthesize compact equivalent circuits of different devices automatically, as an alternative to, for example, the rather formal BSIM2 and BSIM3 models used in SPICE-type circuit simulation programs. The method is one possible variant of a general methodology for the automatic synthesis of compact equivalent circuits of almost arbitrary devices and circuit-type structures of micro- and nanoelectronics [1], and it is easily extended, when necessary, to account for thermal effects in integrated circuits. It is shown that its application should be especially promising for the analysis of integrated circuit fragments as a whole and for the identification of significant collective physical effects, including parasitic effects in VLSI and ULSI. The paper considers examples illustrating the method's ability to automatically synthesize compact equivalent circuits of several semiconductor devices and integrated circuit devices. Special attention is given to examples of integrated circuit devices on coarse spatial discretization grids (fewer than 10 nodes).

  10. Semi-automatic knee cartilage segmentation

    Science.gov (United States)

    Dam, Erik B.; Folkesson, Jenny; Pettersen, Paola C.; Christiansen, Claus

    2006-03-01

Osteoarthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of the cartilage breakdown is central in monitoring disease progression, and therefore cartilage segmentation is required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, the automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases are averaged out in the overall results, they reduce the mean accuracy and precision and thereby necessitate larger/longer studies. Since the severe OA cases are often most problematic for the automatic methods, there is even a risk that the quantification will introduce a bias in the results. Therefore, interactive inspection and correction of these problematic cases is desirable. For diagnosis on individuals, this is even more crucial, since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method for the task of segmenting the tibial cartilage sheet from low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.

  11. Automatic malware analysis an emulator based approach

    CERN Document Server

    Yin, Heng

    2012-01-01

Malicious software (i.e., malware) has been a severe threat to interconnected computer systems for decades and causes billions of dollars in damage each year. A large volume of new malware samples is discovered daily. Even worse, malware is rapidly evolving, becoming more sophisticated and evasive in order to strike against current malware analysis and defense systems. Automatic Malware Analysis presents a virtualized malware analysis framework that addresses common challenges in malware analysis. In regards to this new analysis framework, a series of analysis techniques for automatic malware analy

  12. Automatic Target Detection Using Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Ganesan L

    2004-01-01

Full Text Available Automatic target recognition (ATR) involves processing images for detecting, classifying, and tracking targets embedded in a background scene. This paper presents an algorithm for detecting a specified set of target objects embedded in visual images for an ATR application. The developed algorithm employs a novel technique for automatically detecting man-made and non-man-made single, two, and multitargets from nontarget objects located within a cluttered environment, by evaluating nonoverlapping image blocks, where a block-by-block comparison of wavelet co-occurrence features is done. The results of the proposed algorithm are found to be satisfactory.

  13. Automatic and strategic processes in advertising effects

    DEFF Research Database (Denmark)

    Grunert, Klaus G.

    1996-01-01

... and can easily be adapted to situational circumstances. Both the perception of advertising and the way advertising influences brand evaluation involve both processes. Automatic processes govern the recognition of advertising stimuli, the relevance decision which determines further higher-level processing ... are at variance with current notions about advertising effects. For example, the attention span problem will be relevant only for strategic processes, not for automatic processes; a certain amount of learning can occur with very little conscious effort; and advertising's effect on brand evaluation may be more stable ...

  14. Automatic cell counting with ImageJ.

    Science.gov (United States)

    Grishagin, Ivan V

    2015-03-15

    Cell counting is an important routine procedure. However, to date there is no comprehensive, easy to use, and inexpensive solution for routine cell counting, and this procedure usually needs to be performed manually. Here, we report a complete solution for automatic cell counting in which a conventional light microscope is equipped with a web camera to obtain images of a suspension of mammalian cells in a hemocytometer assembly. Based on the ImageJ toolbox, we devised two algorithms to automatically count these cells. This approach is approximately 10 times faster and yields more reliable and consistent results compared with manual counting.
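The thresholding-and-labelling pipeline such a counter relies on can be sketched as follows. This is an illustrative Python analogue of the described ImageJ-based approach, not the authors' actual macros; the threshold and minimum blob size are assumed parameters.

```python
import numpy as np
from collections import deque

def count_cells(image, threshold=128, min_size=4):
    """Count dark blobs (cells) in a grayscale image by thresholding
    and 4-connected component labelling."""
    mask = image < threshold                   # cells darker than background
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        # Breadth-first flood fill over one connected component.
        queue, size = deque([(r, c)]), 0
        seen[r, c] = True
        while queue:
            y, x = queue.popleft()
            size += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        if size >= min_size:                   # ignore single-pixel noise specks
            count += 1
    return count

img = np.full((20, 20), 200, dtype=np.uint8)
img[2:6, 2:6] = 50                             # cell 1
img[10:14, 12:16] = 60                         # cell 2
img[18, 18] = 10                               # speck below min_size, ignored
print(count_cells(img))                        # 2
```

ImageJ's built-in Analyze Particles command performs essentially this labelling step, with additional size and circularity filters.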

  15. Robust, accurate and fast automatic segmentation of the spinal cord.

    Science.gov (United States)

    De Leener, Benjamin; Kadoury, Samuel; Cohen-Adad, Julien

    2014-09-01

Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automatizing this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field-of-view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: firstly, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Secondly, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Thirdly, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and on multi-echo T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1 min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord.
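The Dice coefficients reported above measure overlap between two binary segmentation masks and can be computed as in this minimal sketch (the masks here are synthetic, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two synthetic segmentations of a tubular structure, slightly offset:
auto = np.zeros((50, 50), dtype=bool)
auto[10:40, 20:30] = True          # automatic segmentation
manual = np.zeros((50, 50), dtype=bool)
manual[12:40, 20:30] = True        # manual reference, shorter by two rows

print(round(dice(auto, manual), 3))  # 0.966
```

A Dice value of 0.9, as reported for the T1- and T2-weighted cohorts, is generally considered strong agreement for spinal cord segmentation.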

  16. Time series modeling for automatic target recognition

    Science.gov (United States)

    Sokolnikov, Andre

    2012-05-01

    Time series modeling is proposed for identification of targets whose images are not clearly seen. The model building takes into account air turbulence, precipitation, fog, smoke and other factors obscuring and distorting the image. The complex of library data (of images, etc.) serving as a basis for identification provides the deterministic part of the identification process, while the partial image features, distorted parts, irrelevant pieces and absence of particular features comprise the stochastic part of the target identification. The missing data approach is elaborated that helps the prediction process for the image creation or reconstruction. The results are provided.

  17. Automatic segmentation of mammogram and tomosynthesis images

    Science.gov (United States)

    Sargent, Dusty; Park, Sun Young

    2016-03-01

Breast cancer is one of the most common forms of cancer in terms of new cases and deaths, both in the United States and worldwide. However, the survival rate with breast cancer is high if it is detected and treated before it spreads to other parts of the body. The most common screening methods for breast cancer are mammography and digital tomosynthesis, which involve acquiring X-ray images of the breasts that are interpreted by radiologists. The work described in this paper is aimed at optimizing the presentation of mammography and tomosynthesis images to the radiologist, thereby improving the early detection rate of breast cancer and the resulting patient outcomes. Breast cancer tissue has greater density than normal breast tissue, and appears as dense white image regions that are asymmetrical between the breasts. These irregularities are easily seen if the breast images are aligned and viewed side-by-side. However, since the breasts are imaged separately during mammography, the images may be poorly centered and aligned relative to each other, and may not properly focus on the tissue area. Similarly, although a full three-dimensional reconstruction can be created from digital tomosynthesis images, the same centering and alignment issues can occur for digital tomosynthesis. Thus, a preprocessing algorithm that aligns the breasts for easy side-by-side comparison has the potential to greatly increase the speed and accuracy of mammogram reading. Likewise, the same preprocessing can improve the results of automatic tissue classification algorithms for mammography. In this paper, we present an automated segmentation algorithm for mammogram and tomosynthesis images that aims to improve the speed and accuracy of breast cancer screening by mitigating the above mentioned problems. Our algorithm uses information in the DICOM header to facilitate preprocessing, and incorporates anatomical region segmentation and contour analysis, along with a hidden Markov model (HMM) for

  18. Sparse discriminant analysis for breast cancer biomarker identification and classification

    Institute of Scientific and Technical Information of China (English)

    Yu Shi; Daoqing Dai; Chaochun Liu; Hong Yan

    2009-01-01

Biomarker identification and cancer classification are two important procedures in microarray data analysis. We propose a novel unified method to carry out both tasks. We first preselect biomarker candidates by eliminating unrelated genes through the BSS/WSS ratio filter to reduce computational cost, and then use a sparse discriminant analysis method for simultaneous biomarker identification and cancer classification. Moreover, we give a mathematical justification of the automatic biomarker identification. Experimental results show that the proposed method can identify key genes that have been verified in biochemical or biomedical research and classify the breast cancer type correctly.
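The BSS/WSS ratio filter mentioned above scores each gene by how much its expression varies between classes relative to within classes. A minimal sketch on synthetic data (the expression matrix and labels here are fabricated for illustration, not the study's microarray data):

```python
import numpy as np

def bss_wss_ratio(X, y):
    """Per-gene BSS/WSS ratio for preselecting biomarker candidates.
    X: (samples, genes) expression matrix; y: class labels per sample.
    Genes with a high ratio vary more between classes than within them."""
    overall = X.mean(axis=0)
    bss = np.zeros(X.shape[1])
    wss = np.zeros(X.shape[1])
    for cls in np.unique(y):
        Xc = X[y == cls]
        centroid = Xc.mean(axis=0)
        bss += len(Xc) * (centroid - overall) ** 2          # between-class SS
        wss += ((Xc - centroid) ** 2).sum(axis=0)           # within-class SS
    return bss / wss

rng = np.random.default_rng(2)
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(0, 1, size=(40, 5))
X[y == 1, 0] += 3.0                 # gene 0 is differentially expressed

print(int(np.argmax(bss_wss_ratio(X, y))))   # 0
```

In practice one keeps the top-ranked genes by this ratio and passes only those to the sparse discriminant analysis step.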

  19. Automatic Synchronization as the Element of a Power System's Anti-Collapse Complex

    Science.gov (United States)

    Barkāns, J.; Žalostība, D.

    2008-01-01

In this work, a new universal technical solution is proposed for blackout prevention in a power system, which combines the means for its optimal short-term sectioning and automatic self-restoration to normal conditions. The key element of self-restoration is automatic synchronization. The authors show that for this purpose it is possible to use automatic re-closing with a synchronism-check device. The results of computations, carried out with simplified formulas and a relevant mathematical model, indicate the area of application for this approach. The proposed solution has been created based on many years of experience in emergency response and on the capabilities of existing equipment, taking into account new features of blackout development that have emerged recently.

  20. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
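The graph description of the network such a discovery pass produces can be sketched as follows. The device names and link records below are hypothetical illustrations, not output of the Octotron suite itself:

```python
from collections import defaultdict

# Hypothetical link records as a discovery pass might emit them:
# (device_a, device_b) pairs for Ethernet links between nodes and switches.
links = [
    ("node-001", "switch-A"), ("node-002", "switch-A"),
    ("node-003", "switch-B"), ("switch-A", "switch-B"),
    ("switch-B", "switch-core"),
]

# Describe the communication network as an undirected graph, in the spirit
# of the model's component-and-interconnection description:
graph = defaultdict(set)
for a, b in links:
    graph[a].add(b)
    graph[b].add(a)

switches = {d for d in graph if d.startswith("switch")}
nodes = sorted(set(graph) - switches)
print(nodes)                        # ['node-001', 'node-002', 'node-003']
print(sorted(graph["switch-A"]))    # ['node-001', 'node-002', 'switch-B']
```

A monitoring model can then attach health-check rules to each vertex and edge of this graph, which is the role the Octotron model plays for the real hardware.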