WorldWideScience

Sample records for automatic term identification

  1. Automatic term identification for bibliometric mapping

    NARCIS (Netherlands)

    N.J.P. van Eck (Nees Jan); L. Waltman (Ludo); E.C.M. Noyons (Ed); R.K. Buter (Reindert)

    2010-01-01

    A term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subject

  2. Automatic Kurdish Dialects Identification

    Directory of Open Access Journals (Sweden)

    Hossein Hassani

    2016-02-01

    Full Text Available Automatic dialect identification is a necessary language technology for processing multi-dialect languages in which the dialects are linguistically far from each other. It becomes particularly crucial where the dialects are mutually unintelligible, because the system must identify the dialect that is the subject of the process before performing any computational activity on such languages. The Kurdish language encompasses various dialects, is written using several different scripts, and lacks a standard orthography. This situation makes Kurdish dialect identification both interesting and necessary, from the research as well as the application perspective. In this research, we have applied a classification method based on supervised machine learning to identify the dialects of Kurdish texts. The research focuses on the two most widely spoken and dominant Kurdish dialects, namely Kurmanji and Sorani. The approach could be applied to other Kurdish dialects as well, and the method is also applicable to languages that resemble Kurdish in their dialectal diversity and differences.
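
    The record above describes supervised machine-learning classification of Kurmanji vs. Sorani text. As a rough illustration only (not the author's pipeline; the training sentences are invented placeholders), a character n-gram TF-IDF representation with a linear SVM is one conventional way to set up such a dialect classifier:

    ```python
    # Minimal sketch of supervised dialect classification (not the author's exact
    # pipeline): character n-gram TF-IDF features with a linear SVM. The toy
    # sentences below are placeholders, not real corpus data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    train_texts = ["ez diçim malê", "tu çawa yî", "min ewe nazanim", "to chon it"]
    train_labels = ["kurmanji", "kurmanji", "sorani", "sorani"]

    # Character n-grams are robust to differing scripts and spelling variation.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LinearSVC(),
    )
    model.fit(train_texts, train_labels)
    print(model.predict(["ew diçe bajêr"]))  # -> predicted dialect label
    ```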

  3. Automatic Identification of Metaphoric Utterances

    Science.gov (United States)

    Dunn, Jonathan Edwin

    2013-01-01

    This dissertation analyzes the problem of metaphor identification in linguistic and computational semantics, considering both manual and automatic approaches. It describes a manual approach to metaphor identification, the Metaphoricity Measurement Procedure (MMP), and compares this approach with other manual approaches. The dissertation then…

  4. Automatic sign language identification

    OpenAIRE

    Gebre, B.G.; Wittenburg, P.; Heskes, T.

    2013-01-01

    We propose a Random-Forest based sign language identification system. The system uses low-level visual features and is based on the hypothesis that sign languages have varying distributions of phonemes (hand-shapes, locations and movements). We evaluated the system on two sign languages -- British SL and Greek SL, both taken from a publicly available corpus, called Dicta Sign Corpus. Achieved average F1 scores are about 95% - indicating that sign languages can be identified with high accuracy...
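
    As a hedged sketch of the stated hypothesis (sign languages differing in their phoneme distributions), the snippet below trains a Random Forest on per-video feature histograms; the data are synthetic stand-ins, not the Dicta-Sign corpus features:

    ```python
    # Hedged sketch of the idea (not the authors' code): a Random Forest trained on
    # per-video histograms of low-level visual features (e.g. hand-shape/motion
    # codewords). Feature vectors here are synthetic stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_videos, n_bins = 200, 64
    # Two "languages" simulated as different codeword distributions.
    X_bsl = rng.dirichlet(np.linspace(1, 3, n_bins), size=n_videos // 2)
    X_gsl = rng.dirichlet(np.linspace(3, 1, n_bins), size=n_videos // 2)
    X = np.vstack([X_bsl, X_gsl])
    y = np.array([0] * (n_videos // 2) + [1] * (n_videos // 2))

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```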

  5. 2012 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2012 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  6. 2009 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2009 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  7. 2014 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2014 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  8. 2010 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2010 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  9. Intelligent Storage System Based on Automatic Identification

    Directory of Open Access Journals (Sweden)

    Kolarovszki Peter

    2014-09-01

    Full Text Available This article describes RFID technology in conjunction with warehouse management systems (WMS). The article also deals with automatic identification and data capture technologies and the individual processes used in a warehouse management system. It describes the processes from entering goods into production to identification of goods, as well as palletizing, storing, bin transferring and removing goods from the warehouse. The article focuses on utilizing AMP middleware in WMS processes. Nowadays, the identification of goods in most warehouses is carried out through barcodes. In this article we want to specify how the processes described above can be handled through RFID technology. All results are verified by measurement in our AIDC laboratory, which is located at the University of Žilina, and also in the Laboratory of Automatic Identification of Goods and Services located in GS1 Slovakia. The results of our research bring a new point of view and indicate ways of using RFID technology in warehouse management systems.

  10. Automatic identification of mass spectra

    International Nuclear Information System (INIS)

    Several approaches to preprocessing and comparison of low resolution mass spectra have been evaluated by various test methods related to library search. It is shown that there is a clear correlation between the nature of any contamination of a spectrum, the basic principle of the transformation or distance measure, and the performance of the identification system. The identification of functionality from low resolution spectra has also been evaluated using several classification methods. It is shown that there is an upper limit to the success of this approach, but also that this can be improved significantly by using a very limited amount of additional information. 10 refs

  11. An efficient automatic firearm identification system

    Science.gov (United States)

    Chuan, Zun Liang; Liong, Choong-Yeun; Jemain, Abdul Aziz; Ghani, Nor Azura Md.

    2014-06-01

    Automatic firearm identification systems (AFIS) are highly demanded in forensic ballistics to replace the traditional approach, which uses a comparison microscope and is relatively complex and time consuming. Thus, several AFIS have been developed for commercial and testing purposes. However, those AFIS are still unable to overcome some of the drawbacks of the traditional firearm identification approach. The goal of this study is to introduce another efficient and effective AFIS. A total of 747 firing pin impression images captured from five different pistols of the same make and model are used to evaluate the proposed AFIS. It was demonstrated that the proposed AFIS is capable of producing a firearm identification accuracy rate of over 95.0% with an execution time of less than 0.35 seconds per image.

  12. Abbreviation definition identification based on automatic precision estimates

    OpenAIRE

    Kim Won; Comeau Donald C; Sohn Sunghwan; Wilbur W John

    2008-01-01

    Abstract Background The rapid growth of biomedical literature presents challenges for automatic text processing, and one of the challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. ...

  13. Automatic Palette Identification of Colored Graphics

    Science.gov (United States)

    Lacroix, Vinciane

    The median-shift, a new clustering algorithm, is proposed to automatically identify the palette of colored graphics, a prerequisite for graphics vectorization. The median-shift is an iterative process which shifts each data point to the "median" point of its neighborhood, defined by a distance measure and a maximum radius, the only parameter of the method. The process is viewed as a graph transformation which converges to a set of clusters made of one or several connected vertices. As palette identification depends on color perception, the clustering is performed in the L*a*b* feature space. As pixels located on edges are made of mixed colors not expected to be part of the palette, they are removed from the initial data set by an automatic pre-processing step. Results are shown on scanned maps and on the Macbeth color chart and compared to well-established methods.
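
    A minimal sketch of how such a median-shift iteration could look, based only on the description above (the radius handling and convergence test are our assumptions, and the data are toy colour values rather than real L*a*b* pixels):

    ```python
    # Illustrative sketch of a median-shift-style clustering step (our reading of
    # the abstract, not the author's implementation): each point moves to the
    # coordinate-wise median of its neighbours within a fixed radius, iterated to
    # convergence; points ending near the same location form one palette colour.
    import numpy as np

    def median_shift(points, radius, n_iter=50, tol=1e-3):
        shifted = points.astype(float).copy()
        for _ in range(n_iter):
            new = np.empty_like(shifted)
            for i, p in enumerate(shifted):
                neigh = shifted[np.linalg.norm(shifted - p, axis=1) <= radius]
                new[i] = np.median(neigh, axis=0)   # coordinate-wise median
            moved = np.max(np.linalg.norm(new - shifted, axis=1))
            shifted = new
            if moved < tol:
                break
        # Group points whose converged positions coincide (within the radius).
        labels, centers = -np.ones(len(points), dtype=int), []
        for i, p in enumerate(shifted):
            for k, c in enumerate(centers):
                if np.linalg.norm(p - c) <= radius:
                    labels[i] = k
                    break
            else:
                centers.append(p)
                labels[i] = len(centers) - 1
        return labels, np.array(centers)

    # Toy example in a 3-D colour space (stand-in for L*a*b* pixel values).
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(c, 2.0, (50, 3)) for c in ([20, 0, 0], [70, 10, -5])])
    labels, palette = median_shift(data, radius=8.0)
    print("palette size:", len(palette))
    ```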

  14. On the advances of automatic modal identification for SHM

    Directory of Open Access Journals (Sweden)

    Cardoso Rharã

    2015-01-01

    Full Text Available Structural health monitoring of civil infrastructures has great practical importance for engineers, owners and stakeholders. Numerous studies have been carried out using long-term monitoring, for instance of the Rio-Niterói Bridge in Brazil, the former Z24 Bridge in Switzerland, and the Millau Bridge in France, among others. In fact, some structures are monitored 24/7 in order to supply dynamic measurements that can be used for the identification of structural problems such as the presence of cracks, excessive vibration or damage, or even to perform a quite extensive structural evaluation concerning reliability and life cycle. The outputs of such an analysis, commonly entitled modal identification, are the so-called modal parameters, i.e. natural frequencies, damping ratios and mode shapes. Therefore, the development and validation of tools for the automatic identification of modal parameters based on the structural responses during normal operation is fundamental, as the success of subsequent damage detection algorithms depends on the accuracy of the modal parameter estimates. The proposed methodology uses the data-driven stochastic subspace identification method (SSI-DATA), which is then complemented by a novel procedure developed for the automatic analysis of the stabilization diagrams provided by the SSI-DATA method. The efficiency of the proposed approach is attested via experimental investigations on a simply supported beam tested in the laboratory and on a motorway bridge.

  15. Time Synchronization Module for Automatic Identification System

    Institute of Scientific and Technical Information of China (English)

    Choi Il-heung; Oh Sang-heon; Choi Dae-soo; Park Chan-sik; Hwang Dong-hwan; Lee Sang-jeong

    2003-01-01

    This paper proposes a design and implementation procedure for the Time Synchronization Module (TSM) of the Automatic Identification System (AIS). The proposed TSM uses a Temperature Compensated Crystal Oscillator (TCXO) as a local reference clock, and consists of a Digitally Controlled Oscillator (DCO), a divider, a phase discriminator, and register blocks. The TSM measures the time difference between the 1 PPS from the Global Navigation Satellite System (GNSS) receiver and the generated transmitter clock. The measured time difference is compensated by controlling the DCO, and the transmit clock is synchronized to Coordinated Universal Time (UTC). The designed TSM can also be synchronized to the reference time derived from the received message. The proposed module is tested using an experimental AIS transponder set. The experimental results show that the proposed module satisfies the functional and timing specifications of the AIS technical standard, ITU-R M.1371.

  16. Statistical pattern recognition for automatic writer identification and verification

    NARCIS (Netherlands)

    Bulacu, Marius Lucian

    2007-01-01

    The thesis addresses the problem of automatic person identification using scanned images of handwriting. Identifying the author of a handwritten sample using automatic image-based methods is an interesting pattern recognition problem with direct applicability in forensic and historic document analysis

  17. Automatic handwriting identification on medieval documents

    NARCIS (Netherlands)

    Bulacu, M.L.; Schomaker, L.R.B.

    2007-01-01

    In this paper, we evaluate the performance of text-independent writer identification methods on a handwriting dataset containing medieval English documents. Applicable identification rates are achieved by combining textural features (joint directional probability distributions) with allographic features

  18. FORENSIC LINGUISTICS: AUTOMATIC WEB AUTHOR IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    A. A. Vorobeva

    2016-03-01

    Full Text Available The Internet is anonymous, which allows posting under a false name, on behalf of others, or simply anonymously. Thus, individuals and criminal or terrorist organizations can use the Internet for criminal purposes; they hide their identity to avoid prosecution. Existing approaches and algorithms for author identification of Russian-language web posts are not effective. The development of proven methods, techniques and tools for author identification is an extremely important and challenging task. In this work an algorithm and software for authorship identification of web posts were developed. During the study the effectiveness of several classification and feature selection algorithms was tested. The algorithm includes several important steps: (1) feature extraction; (2) feature discretization; (3) feature selection with the Relief-f algorithm, the most effective in our tests, to find the feature set with the most discriminating power for each set of candidate authors and maximize the accuracy of author identification; (4) author identification with a model based on the Random Forest algorithm. Random Forest and Relief-f are used here to identify the author of a short Russian-language text for the first time. An important step of author attribution is data preprocessing - discretization of continuous features; earlier this was not applied to improve the efficiency of author identification. The software outputs the top q authors with maximum probabilities of authorship. This approach is helpful for manual analysis in forensic linguistics, when the developed tool is used to narrow the set of candidate authors. In experiments with 10 candidate authors, the real author appeared in the top 3 in 90.02% of cases and in first place in 70.5% of cases.
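
    To make the pipeline shape concrete, here is a hedged sketch of steps (1)-(4) on synthetic data; mutual information stands in for the Relief-f selector used in the paper, so this is not the published implementation:

    ```python
    # Rough sketch of the pipeline shape only (features discretized, a filter-based
    # feature selection step, then Random Forest). Mutual information is used here
    # as a stand-in for the Relief-f selector described in the paper, and the
    # feature matrix is synthetic.
    import numpy as np
    from sklearn.preprocessing import KBinsDiscretizer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 120))        # stylometric features per post (synthetic)
    y = rng.integers(0, 10, size=300)      # 10 candidate authors

    pipeline = make_pipeline(
        KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile"),
        SelectKBest(mutual_info_classif, k=40),   # keep the most discriminative features
        RandomForestClassifier(n_estimators=300, random_state=0),
    )
    pipeline.fit(X, y)

    # Report the top-q most probable authors for a new post, as the tool does.
    proba = pipeline.predict_proba(X[:1])[0]
    top_q = np.argsort(proba)[::-1][:3]
    print("top-3 candidate authors:", pipeline.classes_[top_q])
    ```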

  19. All-optical automatic pollen identification: Towards an operational system

    Science.gov (United States)

    Crouzy, Benoît; Stella, Michelle; Konzelmann, Thomas; Calpini, Bertrand; Clot, Bernard

    2016-09-01

    We present results from the development and validation campaign of an optical pollen monitoring method based on time-resolved scattering and fluorescence. Focus is first set on supervised learning algorithms for pollen-taxa identification and on the determination of aerosol properties (particle size and shape). The identification capability provides a basis for a pre-operational automatic pollen season monitoring performed in parallel to manual reference measurements (Hirst-type volumetric samplers). Airborne concentrations obtained from the automatic system are compatible with those from the manual method regarding total pollen, and the automatic device provides real-time data reliably (one week of interruption over five months). In addition, although the calibration dataset still needs to be completed, we are able to follow the grass pollen season. The high sampling rate of the automatic device allows us to go beyond the commonly presented daily values, and we obtain statistically significant hourly concentrations. Finally, we discuss remaining challenges for obtaining an operational automatic monitoring system and how the generic validation environment developed for the present campaign could be used for further tests of automatic pollen monitoring devices.

  20. Automatic defect identification on PWR nuclear power station fuel pellets

    International Nuclear Information System (INIS)

    This article presents a new automatic identification technique for structural failures in nuclear green fuel pellets. The technique was developed to identify failures that occur during the fabrication process. It is based on a smart image analysis technique for automatic identification of failures on uranium oxide pellets used as fuel in PWR nuclear power stations. In order to achieve this goal, an artificial neural network (ANN) has been trained and validated on image histograms of pellets containing examples not only of normal pellets (flawless), but of defective pellets as well (with the main flaws normally found during the manufacturing process). Based on this technique, a new automatic identification system for flaws on nuclear fuel element pellets, composed of the association of image pre-processing and intelligent classification, will be developed and implemented in the Brazilian nuclear fuel production industry. Based on the theoretical performance of the technology proposed and presented in this article, it is believed that this new system, NuFAS (Nuclear Fuel Pellets Failures Automatic Identification Neural System), will be able to identify structural failures in nuclear fuel pellets with a virtually zero error margin. Once implemented, NuFAS will add value to the quality control process of the national production of nuclear fuel.
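
    A minimal sketch of the underlying idea (a neural network fed with pellet image histograms), under our own assumptions and with synthetic histograms rather than real pellet images; this is not the NuFAS code:

    ```python
    # Minimal sketch: a small feed-forward neural network trained on grey-level
    # histograms of pellet images, labelled flawless vs. defective. The histograms
    # below are synthetic stand-ins for real pellet images.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def fake_histogram(defective, n_bins=64):
        # Defective pellets are simulated with extra mass in dark bins (cracks/chips).
        pixels = rng.normal(0.7, 0.08, 5000)
        if defective:
            pixels[:600] = rng.normal(0.2, 0.05, 600)
        hist, _ = np.histogram(np.clip(pixels, 0, 1), bins=n_bins, range=(0, 1))
        return hist / hist.sum()

    X = np.array([fake_histogram(d) for d in ([False] * 200 + [True] * 200)])
    y = np.array([0] * 200 + [1] * 200)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    ann.fit(X_tr, y_tr)
    print("held-out accuracy:", ann.score(X_te, y_te))
    ```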

  1. Person categorization and automatic racial stereotyping effects on weapon identification.

    Science.gov (United States)

    Jones, Christopher R; Fazio, Russell H

    2010-08-01

    Prior stereotyping research provides conflicting evidence regarding the importance of person categorization along a particular dimension for the automatic activation of a stereotype corresponding to that dimension. Experiment 1 replicated a racial stereotyping effect on object identification and examined whether it could be attenuated by encouraging categorization by age. Experiment 2 employed socially complex person stimuli and manipulated whether participants categorized spontaneously or by race. In Experiment 3, the distinctiveness of the racial dimension was manipulated by having Black females appear in the context of either Black males or White females. The results indicated that conditions fostering categorization by race consistently produced automatic racial stereotyping and that conditions fostering nonracial categorization can eliminate automatic racial stereotyping. Implications for the relation between automatic stereotype activation and dimension of categorization are discussed.

  2. Automatic seagrass pattern identification on sonar images

    Science.gov (United States)

    Rahnemoonfar, Maryam; Rahman, Abdullah

    2016-05-01

    Natural and human-induced disturbances are resulting in degradation and loss of seagrass. Freshwater flooding, severe meteorological events and invasive species are among the major natural disturbances. Human-induced disturbances are mainly due to boat propeller scars in the shallow seagrass meadows and anchor scars in the deeper areas. Therefore, there is a vital need to map seagrass ecosystems in order to determine worldwide abundance and distribution. Currently there is no established method for mapping the potholes or scars in seagrass. One of the most precise sensors to map seagrass disturbance is side scan sonar. Here we propose an automatic method which detects seagrass potholes in sonar images. Side scan sonar images are notorious for having speckle noise and uneven illumination across the image. Moreover, disturbance presents complex patterns where most segmentation techniques will fail. In this paper, by applying mathematical morphology techniques and calculating the local standard deviation of the image, the images were enhanced and the pothole patterns were identified. The proposed method was applied on sonar images taken from Laguna Madre in Texas. Experimental results show the effectiveness of the proposed method.
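
    The enhancement step described above (morphological smoothing followed by a local standard deviation map) might be sketched as follows; window sizes and thresholds are illustrative assumptions and the sonar image is simulated:

    ```python
    # Sketch of the enhancement step under our own parameter assumptions:
    # morphological opening/closing suppresses speckle, and a local standard
    # deviation map flags smooth pothole/scar regions. Input is a synthetic image.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    sonar = rng.rayleigh(1.0, size=(256, 256))          # speckle-like background
    sonar[100:140, 60:200] *= 0.2                       # a smooth "pothole" strip

    # Morphological smoothing (grey opening then closing) reduces speckle noise.
    smoothed = ndimage.grey_closing(ndimage.grey_opening(sonar, size=5), size=5)

    # Local standard deviation via running moments over a sliding window.
    win = 15
    mean = ndimage.uniform_filter(smoothed, win)
    mean_sq = ndimage.uniform_filter(smoothed ** 2, win)
    local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

    # Low local variability relative to the image suggests a disturbance region.
    mask = local_std < 0.5 * local_std.mean()
    print("flagged pixels:", int(mask.sum()))
    ```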

  3. MAC, A System for Automatically IPR Identification, Collection and Distribution

    Science.gov (United States)

    Serrão, Carlos

    Controlling Intellectual Property Rights (IPR) in the digital world is a very hard challenge. The facility to create multiple bit-by-bit identical copies of original IPR works creates opportunities for digital piracy. One of the industries most affected by this fact is the music industry, which has suffered huge losses during the last few years as a result. Moreover, this fact is also affecting the way that music rights collecting and distributing societies are operating to assure correct music IPR identification, collection and distribution. In this article a system for automating this IPR identification, collection and distribution is presented and described. This system makes use of an advanced automatic audio identification system based on audio fingerprinting technology. This paper will present the details of the system and present a use-case scenario where this system is being used.

  4. Automatic Identification And Data Collection Via Barcode Laser Scanning.

    Science.gov (United States)

    Jacobeus, Michel

    1986-07-01

    How to earn over 100 million a year by investing 40 million? No, this is not the latest Wall Street "tip" but the cost savings obtained by the U.S. Department of Defense. 2% savings on annual turnover, claim supermarkets! Millions of dollars saved, report automotive companies! These are not daydreams, but tangible results measured by users after implementing Automatic Identification and Data Collection systems based on bar codes. To paraphrase the famous sentence "I think, thus I am", with AI/ADC systems "You know, thus you are". Indeed, in today's world, immediate, accurate and precise information is a vital management need for companies' growth and survival. AI/ADC techniques fulfill these objectives by supplying the right information automatically and without any delay or alteration.

  5. Automatic identification of algal community from microscopic images.

    Science.gov (United States)

    Santhi, Natchimuthu; Pradeepa, Chinnaraj; Subashini, Parthasarathy; Kalaiselvi, Senthil

    2013-01-01

    A good understanding of the population dynamics of algal communities is crucial in several ecological and pollution studies of freshwater and oceanic systems. This paper reviews the automatic identification of algal communities from microscope images using image processing techniques. The diverse techniques of image preprocessing, segmentation, feature extraction and recognition are considered one by one and their parameters are summarized. Automatic identification and classification of algal communities are very difficult due to various factors such as changes in size and shape with climatic changes, various growth periods, and the presence of other microbes. Therefore, the significance, uniqueness, and various approaches are discussed and the analyses in image processing methods are evaluated. Algal identification and associated problems in water organisms have been projected as challenges in image processing applications. Various image processing approaches based on textures, shapes, and object boundaries, as well as some segmentation methods such as edge detection and color segmentation, are highlighted. Finally, artificial neural networks and some machine learning algorithms were used to classify and identify the algae. Further, some of the benefits and drawbacks of these schemes are examined. PMID:24151424

  6. Semi-automatic long-term acoustic surveying

    DEFF Research Database (Denmark)

    Andreassen, Tórur; Surlykke, Annemarie; Hallam, John

    2014-01-01

    Increasing concern about decline in biodiversity has created a demand for population surveys. Acoustic monitoring is an efficient non-invasive method, which may be deployed for surveys of animals as diverse as insects, birds, and bats. Long-term unmanned automatic monitoring may provide unique...... to determine bat behavior and correct for the bias toward loud bats inherent in acoustic surveying. © 2013 Elsevier B.V....

  7. Automatic Identification of Modal, Breathy and Creaky Voices

    Directory of Open Access Journals (Sweden)

    Poonam Sharma

    2013-12-01

    Full Text Available This paper presents a way to automatically identify the different voice qualities present in a speech signal, which is very beneficial for detecting any kind of speech by an efficient speech recognition system. The proposed technique is based on three important characteristics of the speech signal, namely Zero Crossing Rate, Short Time Energy and Fundamental Frequency. The performance of the proposed algorithm is evaluated using data collected from three different speakers, and an overall accuracy of 87.2% is achieved.

  8. Automatic Person Identification in Camera Video by Motion Correlation

    Directory of Open Access Journals (Sweden)

    Dingbo Duan

    2014-01-01

    Full Text Available Person identification plays an important role in semantic analysis of video content. This paper presents a novel method to automatically label persons in video sequences captured from a fixed camera. Instead of leveraging traditional face recognition approaches, we deal with the task of person identification by fusing information from motion sensor platforms, such as smartphones carried on human bodies, with information extracted from camera video. More specifically, a sequence of motion features extracted from camera video is compared with each of those collected from the accelerometers of smartphones. When strong correlation is detected, identity information transmitted from the corresponding smartphone is used to identify the phone wearer. To test the feasibility and efficiency of the proposed method, extensive experiments were conducted, which achieved impressive performance.
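
    A toy illustration of the matching idea, with all signals synthetic: the video motion track is correlated against each phone's accelerometer magnitude, and the best-correlated phone supplies the identity. This is a sketch of the concept, not the paper's implementation:

    ```python
    # Toy illustration: normalized correlation between a video-derived motion
    # track and each phone's accelerometer magnitude; the best match gives the
    # identity. All signals below are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 300)
    true_motion = np.abs(np.sin(2 * t)) + 0.1 * rng.normal(size=t.size)

    video_track = true_motion + 0.2 * rng.normal(size=t.size)          # from camera
    phones = {
        "alice_phone": true_motion + 0.2 * rng.normal(size=t.size),    # same wearer
        "bob_phone": np.abs(np.cos(3 * t)) + 0.2 * rng.normal(size=t.size),
    }

    def ncc(a, b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))   # zero-lag normalized cross-correlation

    scores = {pid: ncc(video_track, sig) for pid, sig in phones.items()}
    best = max(scores, key=scores.get)
    print(scores, "-> identified as", best)
    ```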

  9. Development of an automatic identification algorithm for antibiogram analysis.

    Science.gov (United States)

    Costa, Luan F R; da Silva, Eduardo S; Noronha, Victor T; Vaz-Moreira, Ivone; Nunes, Olga C; Andrade, Marcelino M de

    2015-12-01

    Routinely, diagnostic and microbiology laboratories perform antibiogram analysis, which can present some difficulties leading to misreadings and intra- and inter-reader deviations. An Automatic Identification Algorithm (AIA) has been proposed as a solution to overcome some issues associated with the disc diffusion method, which is the main goal of this work. AIA allows automatic scanning of inhibition zones obtained by antibiograms. More than 60 environmental isolates were tested using susceptibility tests which were performed for 12 different antibiotics, for a total of 756 readings. Plate images were acquired and classified as standard or oddity. The inhibition zones were measured using the AIA and results were compared with the reference method (human reading), using the weighted kappa index and statistical analysis to evaluate, respectively, inter-reader agreement and correlation between AIA-based and human-based readings. Agreements were observed in 88% of cases, and 89% of the tests showed no difference or a reading problem such as overlapping inhibition zones, imperfect microorganism seeding, non-homogeneity of the circumference, partial action of the antimicrobial, or formation of a second halo of inhibition. Furthermore, AIA proved to overcome some of the limitations observed in other automatic methods. Therefore, AIA may be a practical tool for automated reading of antibiograms in diagnostic and microbiology laboratories. PMID:26513468

  10. Automatic extraction of candidate nomenclature terms using the doublet method

    Directory of Open Access Journals (Sweden)

    Berman Jules J

    2005-10-01

    nomenclature. Results A 31+ megabyte corpus of pathology journal abstracts was parsed using the doublet extraction method. This corpus consisted of 4,289 records, each containing an abstract title. The total number of words included in the abstract titles was 50,547. New candidate terms for the nomenclature were automatically extracted from the titles of abstracts in the corpus. Total execution time on a desktop computer with CPU speed of 2.79 GHz was 2 seconds. The resulting output consisted of 313 new candidate terms, each consisting of concatenated doublets found in the reference nomenclature. Human review of the 313 candidate terms yielded a list of 285 terms approved by a curator. A final automatic extraction of duplicate terms yielded a final list of 222 new terms (71% of the original 313 extracted candidate terms) that could be added to the reference nomenclature. Conclusion The doublet method for automatically extracting candidate nomenclature terms can be used to quickly find new terms from vast amounts of text. The method can be immediately adapted for virtually any text and any nomenclature. An implementation of the algorithm, in the Perl programming language, is provided with this article.
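
    The doublet method itself is simple enough to sketch compactly. The snippet below (in Python rather than the Perl of the original, with invented example terms) builds the set of adjacent-word doublets from a reference nomenclature and extracts title word runs whose consecutive pairs are all known doublets:

    ```python
    # Compact sketch of the doublet idea described above (terms and titles here
    # are invented examples): every adjacent word pair in the reference
    # nomenclature is a "doublet", and any run of title words whose consecutive
    # pairs are all known doublets becomes a candidate term.
    reference_nomenclature = [
        "squamous cell carcinoma",
        "basal cell carcinoma of skin",
    ]
    titles = ["a rare basal cell carcinoma variant", "cell carcinoma of skin margins"]

    def doublets(words):
        return {(words[i], words[i + 1]) for i in range(len(words) - 1)}

    known = set()
    for term in reference_nomenclature:
        known |= doublets(term.lower().split())

    candidates = set()
    for title in titles:
        w = title.lower().split()
        i = 0
        while i < len(w) - 1:
            j = i
            while j < len(w) - 1 and (w[j], w[j + 1]) in known:
                j += 1                      # extend the run of consecutive doublets
            if j > i:
                candidates.add(" ".join(w[i:j + 1]))
            i = max(j, i + 1)

    print(sorted(candidates))   # each candidate is a concatenation of known doublets
    ```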

  11. Channel Access Algorithm Design for Automatic Identification System

    Institute of Scientific and Technical Information of China (English)

    Oh Sang-heon; Kim Seung-pum; Hwang Dong-hwan; Park Chan-sik; Lee Sang-jeong

    2003-01-01

    The Automatic Identification System (AIS) is a maritime equipment to allow an efficient exchange of the navigational data between ships and between ships and shore stations. It utilizes a channel access algorithm which can quickly resolve conflicts without any intervention from control stations. In this paper, a design of channel access algorithm for the AIS is presented. The input/output relationship of each access algorithm module is defined by drawing the state transition diagram, dataflow diagram and flowchart based on the technical standard, ITU-R M.1371. In order to verify the designed channel access algorithm, the simulator was developed using the C/C++ programming language. The results show that the proposed channel access algorithm can properly allocate transmission slots and meet the operational performance requirements specified by the technical standard.

  12. Automatic Identification of Antibodies in the Protein Data Bank

    Institute of Scientific and Technical Information of China (English)

    LI Xun; WANG Renxiao

    2009-01-01

    An automatic method has been developed for identifying antibody entries in the Protein Data Bank (PDB). Our method, called KIAb (Keyword-based Identification of Antibodies), parses PDB-format files to search for particular keywords relevant to antibodies, and makes judgments accordingly. Our method identified 780 entries as antibodies in the entire PDB. Among them, 767 entries were confirmed by manual inspection, indicating a high success rate of 98.3%. Our method recovered essentially all of the entries compiled in the Summary of Antibody Crystal Structures (SACS) database. It also identified a number of entries missed by SACS. Our method thus provides a more complete mining of antibody entries in the PDB with a very low false positive rate.
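
    A minimal sketch of a keyword-based scan in the spirit of KIAb; the keyword list and record handling here are our own guesses, not the published rules, and the directory path is hypothetical:

    ```python
    # Illustrative sketch only (keyword list and record handling are assumptions,
    # not the published KIAb rules): scan the TITLE/COMPND records of PDB-format
    # files for antibody-related keywords and flag matching entries.
    import glob
    import re

    ANTIBODY_KEYWORDS = re.compile(
        r"\b(antibody|fab|fv|immunoglobulin|light chain|heavy chain)\b", re.IGNORECASE
    )

    def looks_like_antibody(pdb_path):
        hits = []
        with open(pdb_path, "r", errors="ignore") as fh:
            for line in fh:
                # Free-text descriptions live in TITLE and COMPND records.
                if line.startswith(("TITLE", "COMPND")) and ANTIBODY_KEYWORDS.search(line):
                    hits.append(line.strip())
        return bool(hits), hits

    for path in glob.glob("pdb_files/*.pdb"):   # hypothetical local directory
        is_ab, evidence = looks_like_antibody(path)
        if is_ab:
            print(path, "->", evidence[0])
    ```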

  13. Automatic Identification of Interictal Epileptiform Discharges in Secondary Generalized Epilepsy

    Science.gov (United States)

    Chang, Won-Du; Cha, Ho-Seung; Lee, Chany; Kang, Hoon-Chul; Im, Chang-Hwan

    2016-01-01

    Ictal epileptiform discharges (EDs) are characteristic signal patterns of scalp electroencephalogram (EEG) or intracranial EEG (iEEG) recorded from patients with epilepsy, which assist with the diagnosis and characterization of various types of epilepsy. The EEG signal, however, is often recorded from patients with epilepsy for a long period of time, and thus detection and identification of EDs have been a burden on medical doctors. This paper proposes a new method for automatic identification of two types of EDs, repeated sharp-waves (sharps), and runs of sharp-and-slow-waves (SSWs), which helps to pinpoint epileptogenic foci in secondary generalized epilepsy such as Lennox-Gastaut syndrome (LGS). In the experiments with iEEG data acquired from a patient with LGS, our proposed method detected EDs with an accuracy of 93.76% and classified three different signal patterns with a mean classification accuracy of 87.69%, which was significantly higher than that of a conventional wavelet-based method. Our study shows that it is possible to successfully detect and discriminate sharps and SSWs from background EEG activity using our proposed method. PMID:27379172

  14. Automatic Identification of Interictal Epileptiform Discharges in Secondary Generalized Epilepsy

    Directory of Open Access Journals (Sweden)

    Won-Du Chang

    2016-01-01

    Full Text Available Ictal epileptiform discharges (EDs) are characteristic signal patterns of scalp electroencephalogram (EEG) or intracranial EEG (iEEG) recorded from patients with epilepsy, which assist with the diagnosis and characterization of various types of epilepsy. The EEG signal, however, is often recorded from patients with epilepsy for a long period of time, and thus detection and identification of EDs have been a burden on medical doctors. This paper proposes a new method for automatic identification of two types of EDs, repeated sharp-waves (sharps) and runs of sharp-and-slow-waves (SSWs), which helps to pinpoint epileptogenic foci in secondary generalized epilepsy such as Lennox-Gastaut syndrome (LGS). In the experiments with iEEG data acquired from a patient with LGS, our proposed method detected EDs with an accuracy of 93.76% and classified three different signal patterns with a mean classification accuracy of 87.69%, which was significantly higher than that of a conventional wavelet-based method. Our study shows that it is possible to successfully detect and discriminate sharps and SSWs from background EEG activity using our proposed method.

  15. AUTOMATIC LICENSE PLATE LOCALISATION AND IDENTIFICATION VIA SIGNATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Lorita Angeline

    2014-02-01

    Full Text Available A new algorithm for license plate localisation and identification is proposed on the basis of signature analysis. Signature analysis has been used to locate license plate candidates, and its properties can be further utilised in supporting and affirming license plate character recognition. This paper presents signature analysis and an improved conventional Connected Component Analysis (CCA) to design an automatic license plate localisation and identification system. A procedure called the Euclidean Distance Transform is added to the conventional CCA in order to tackle the multiple bounding boxes that occur. The developed algorithm, SAICCA, achieved a 92% success rate, with an 8% failed localisation rate due to restrictions such as insufficient light level, clarity and license plate perceptual information. The processing time for license plate localisation and recognition is a crucial criterion that needs to be considered. Therefore, this paper has utilised several approaches to decrease the processing time to an optimal value. The results obtained show that the proposed system is capable of being implemented in both ideal and non-ideal environments.
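
    To illustrate what a projection ("signature") analysis step can look like, the sketch below sums vertical-edge magnitudes along rows and columns of a synthetic image and takes the high-signature band as the plate candidate; the parameters are assumptions, not SAICCA's:

    ```python
    # Simplified illustration of projection ("signature") analysis under our own
    # assumptions: plate regions show dense vertical edges, so row and column sums
    # of an edge map peak around the plate. The image here is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.normal(0.3, 0.02, (240, 320))
    img[150:180, 100:220] = rng.choice([0.0, 1.0], size=(30, 120))  # plate-like texture

    # Vertical-edge magnitude (simple horizontal gradient).
    edges = np.abs(np.diff(img, axis=1))

    row_signature = edges.sum(axis=1)    # one value per image row
    col_signature = edges.sum(axis=0)    # one value per image column

    def band(signature, frac=0.5):
        # Contiguous index range where the signature exceeds frac * max.
        idx = np.where(signature > frac * signature.max())[0]
        return int(idx.min()), int(idx.max())

    top, bottom = band(row_signature)
    left, right = band(col_signature)
    print("candidate plate region: rows", (top, bottom), "cols", (left, right))
    ```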

  16. Automatic Identification of Storm Cells Using Doppler Radars

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Three automatic storm identification algorithms for Doppler radar are discussed. The WSR-88D Build 7.0 (B7SI) tests the intensity and continuity of the objective echoes against multiple prescribed thresholds to build 3D storms, and when storms are merging, splitting, or clustered closely, the detection errors become larger. The B9SI algorithm is part of the Build 9.0 Radar Products Generator of the WSR-88D system. It uses multiple thresholds of reflectivity and introduces new techniques for cell nucleus extraction and close-storm processing, and is therefore capable of identifying embedded cells in multi-cellular storms. The strong area components at a long distance are saved as 2D storms. However, the B9SI cannot give information on the convective strength of a storm, because texture and gradient of reflectivity are not calculated and radial velocity data are not used. To overcome this limitation, the CSI (Convective Storm Identification) algorithm is designed in this paper. By using the fuzzy logic technique, and under the condition that the levels of the seven reflectivity thresholds of B9SI are lowered, the CSI processes the radar base data and the output of B9SI to obtain the convection index of a storm. Finally, the CSI is verified with the case of a supercell occurring in Guangzhou on 11 August 2004. The computational and analysis results show that the two rises of the convection index matched well with a merging growth and strong convergent growth of the supercell, and the index was 0.744 when the supercell was the strongest, and then decreased. Correspondingly, the height of the maximum reflectivity detected by the radar also reduced, and heavy rain also occurred over a large-scale area.

  17. Automatic Personal Identification Using Feature Similarity Index Matching

    Directory of Open Access Journals (Sweden)

    R. Gayathri

    2012-01-01

    Full Text Available Problem statement: Biometrics-based personal identification is an effective method for automatically recognizing a person's identity with high confidence. Palmprint is an essential biometric feature for use in access control and forensic applications. In this study, we present a multi-feature extraction based on an edge detection scheme, applying a Log Gabor filter to enhance image structures and suppress noise. Approach: A novel Feature-Similarity Indexing (FSIM) of image algorithm is used to generate the matching score between the original image in the database and the input test image. The Feature Similarity (FSIM) index for full-reference image quality assessment (IQA) is proposed based on the fact that the Human Visual System (HVS) understands an image mainly according to its low-level features. Results and Conclusion: The experimental results achieve recognition accuracy using Canny and Prewitt FSIM of 97.3227 and 94.718%, respectively, on the publicly available database of Hong Kong Polytechnic University. In total, 500 images of 100 individuals, 4 samples for each palm, were randomly selected for training in this research. Then we take each person's palm image as a template (100 in total). Experimental evaluation using palmprint image databases clearly demonstrates the efficient recognition performance of the proposed algorithm compared with conventional palmprint recognition algorithms.

  18. Rewriting and suppressing UMLS terms for improved biomedical term identification

    Directory of Open Access Journals (Sweden)

    Hettne Kristina M

    2010-03-01

    Full Text Available Abstract Background Identification of terms is essential for biomedical text mining. We concentrate here on the use of vocabularies for term identification, specifically the Unified Medical Language System (UMLS). To make the UMLS more suitable for biomedical text mining we implemented and evaluated nine term rewrite and eight term suppression rules. The rules rely on UMLS properties that have been identified in previous work by others, together with an additional set of new properties discovered by our group during our work with the UMLS. Our work complements the earlier work in that we measure the impact of the different rules on the number of terms identified in a MEDLINE corpus. The number of uniquely identified terms and their frequency in MEDLINE were computed before and after applying the rules. The 50 most frequently found terms together with a sample of 100 randomly selected terms were evaluated for every rule. Results Five of the nine rewrite rules were found to generate additional synonyms and spelling variants that correctly corresponded to the meaning of the original terms, and seven out of the eight suppression rules were found to suppress only undesired terms. Using the five rewrite rules that passed our evaluation, we were able to identify 1,117,772 new occurrences of 14,784 rewritten terms in MEDLINE. Without the rewriting, we recognized 651,268 terms belonging to 397,414 concepts; with rewriting, we recognized 666,053 terms belonging to 410,823 concepts, which is an increase of 2.8% in the number of terms and an increase of 3.4% in the number of concepts recognized. Using the seven suppression rules, a total of 257,118 undesired terms were suppressed in the UMLS, notably decreasing its size; 7,397 terms were suppressed in the corpus. Conclusions We recommend applying the five rewrite rules and seven suppression rules that passed our evaluation when the UMLS is to be used for biomedical term identification in MEDLINE. A software

  19. An automatic identification and monitoring system for coral reef fish

    Science.gov (United States)

    Wilder, Joseph; Tonde, Chetan; Sundar, Ganesh; Huang, Ning; Barinov, Lev; Baxi, Jigesh; Bibby, James; Rapport, Andrew; Pavoni, Edward; Tsang, Serena; Garcia, Eri; Mateo, Felix; Lubansky, Tanya M.; Russell, Gareth J.

    2012-10-01

    To help gauge the health of coral reef ecosystems, we developed a prototype of an underwater camera module to automatically census reef fish populations. Recognition challenges include pose and lighting variations, complicated backgrounds, within-species color variations and within-family similarities among species. An open frame holds two cameras, LED lights, and two `background' panels in an L-shaped configuration. High-resolution cameras send sequences of 300 synchronized image pairs at 10 fps to an on-shore PC. Approximately 200 sequences containing fish were recorded at the New York Aquarium's Glover's Reef exhibit. These contained eight `common' species with 85-672 images, and eight `rare' species with 5-27 images that were grouped into an `unknown/rare' category for classification. Image pre-processing included background modeling and subtraction, and tracking of fish across frames for depth estimation, pose correction, scaling, and disambiguation of overlapping fish. Shape features were obtained from PCA analysis of perimeter points, color features from opponent color histograms, and `banding' features from DCT of vertical projections. Images were classified to species using feedforward neural networks arranged in a three-level hierarchy in which errors remaining after each level are targeted by networks in the level below. Networks were trained and tested on independent image sets. Overall accuracy of species-specific identifications typically exceeded 96% across multiple training runs. A seaworthy version of our system will allow for population censuses with high temporal resolution, and therefore improved statistical power to detect trends. A network of such devices could provide an `early warning system' for coral ecosystem collapse.

  20. Gust Front Statistical Characteristics and Automatic Identification Algorithm for CINRAD

    Institute of Scientific and Technical Information of China (English)

    郑佳锋; 张杰; 朱克云; 刘黎平; 刘艳霞

    2014-01-01

    Gust front is a kind of meso- and micro-scale weather phenomenon that often causes serious ground wind and wind shear. This paper presents an automatic gust front identification algorithm. In total, 879 radar volume-scan samples selected from 21 gust front weather processes that occurred in China between 2009 and 2012 are examined and analyzed. Gust front echo statistical features in the reflectivity, velocity, and spectrum width fields are obtained. Based on these features, an algorithm is designed to recognize gust fronts and generate output products and quantitative indices. Then, 315 samples are used to verify the algorithm and 3 typical cases are analyzed. Major conclusions include: 1) for narrow-band echoes, intensity is between 5 and 30 dBZ, widths are between 2 and 10 km, maximum heights are less than 4 km (89.33% are lower than 3 km), and lengths are between 50 and 200 km; the narrow-band echo is higher than its surrounding echo. 2) Gust fronts present a convergence line or a wind shear in the velocity field; the frontal wind speed gradually decreases as the distance increases radially outward. Spectral widths of gust fronts are large, with 87.09% exceeding 4 m/s. 3) Using 315 gust front volume-scan samples to test the algorithm reveals that the algorithm is highly stable and has successfully recognized 277 samples. The algorithm also works for small-scale or weak gust fronts. 4) Radar data quality has a certain impact on the algorithm.

  1. Automatic identification technology tracking weapons and ammunition for the Norwegian Armed Forces

    OpenAIRE

    Lien, Tord Hjalmar.

    2011-01-01

    Approved for public release; distribution is unlimited. The purpose of this study is to recommend technology and solutions that improve the accountability and accuracy of small arms and ammunition inventories in the Norwegian Armed Forces (NAF). Radio Frequency Identification (RFID) and Item Unique Identification (IUID) are described, and challenges and benefits of these two major automatic identification technologies are discussed. A case study for the NAF is conducted where processes a...

  2. A Wireless Framework for Lecturers' Attendance System with Automatic Vehicle Identification (AVI Technology

    Directory of Open Access Journals (Sweden)

    Emammer Khamis Shafter

    2015-10-01

    Full Text Available Automatic Vehicle Identification (AVI) technology is a type of Radio Frequency Identification (RFID) method which can be used to significantly improve the efficiency of a lecturers' attendance system. It provides the capability of automatic data capture for attendance records using a mobile device equipped in users' vehicles. The intent of this article is to propose a framework for an automatic lecturers' attendance system using AVI technology. The first objective of this work involves gathering requirements for the Automatic Lecturers' Attendance System and representing them using UML diagrams. The second objective is to put forward a framework that will provide guidelines for developing the system. A prototype has also been created as a pilot project.

  3. Automatic Knowledge Extraction and Knowledge Structuring for a National Term Bank

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2011-01-01

    This paper gives an introduction to the plans and ongoing work in a project, the aim of which is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data fr...... various existing sources, as well as methods for target group oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank....

  4. Automatic player detection and identification for sports entertainment applications

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khattak, Shadid; Hasan, Laiq; Khan, Samee U.

    2014-01-01

    In this paper, we develop an augmented reality sports broadcasting application for automatic detection, recognition of players during play, followed by display of personal information of players. The proposed application can be divided into four major steps. In first step, each player in the image i

  5. Automatic Classification of the Vestibulo-Ocular Reflex Nystagmus: Integration of Data Clustering and System Identification.

    Science.gov (United States)

    Ranjbaran, Mina; Smith, Heather L H; Galiana, Henrietta L

    2016-04-01

    The vestibulo-ocular reflex (VOR) plays an important role in our daily activities by enabling us to fixate on objects during head movements. Modeling and identification of the VOR improves our insight into the system behavior and improves diagnosis of various disorders. However, the switching nature of eye movements (nystagmus), including the VOR, makes dynamic analysis challenging. The first step in such analysis is to segment data into its subsystem responses (here slow and fast segment intervals). Misclassification of segments results in biased analysis of the system of interest. Here, we develop a novel three-step algorithm to classify the VOR data into slow and fast intervals automatically. The proposed algorithm is initialized using a K-means clustering method. The initial classification is then refined using system identification approaches and prediction error statistics. The performance of the algorithm is evaluated on simulated and experimental data. It is shown that the new algorithm performance is much improved over the previous methods, in terms of higher specificity. PMID:26357393
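
    Only the first, clustering step of the algorithm is easy to sketch; the snippet below clusters eye-velocity samples into slow and fast phases with K-means on synthetic nystagmus-like data, and omits the system-identification refinement entirely:

    ```python
    # Simplified sketch of only the initial clustering step: eye-velocity samples
    # are clustered with K-means into slow- and fast-phase candidates. The paper's
    # system-identification refinement is not reproduced here; the signal is
    # synthetic nystagmus-like data.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Alternate slow drifts with brief fast resets (a crude nystagmus pattern).
    eye_velocity = np.concatenate(
        [np.r_[rng.normal(5, 1, 80), rng.normal(-60, 8, 8)] for _ in range(20)]
    )

    features = np.abs(eye_velocity).reshape(-1, 1)       # speed separates the phases
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

    # Call the cluster with the larger mean speed the "fast" phase.
    fast_cluster = int(np.argmax([features[labels == k].mean() for k in (0, 1)]))
    print("fast-phase samples:", int((labels == fast_cluster).sum()),
          "of", len(eye_velocity))
    ```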

  6. Automatic Classification of the Vestibulo-Ocular Reflex Nystagmus: Integration of Data Clustering and System Identification.

    Science.gov (United States)

    Ranjbaran, Mina; Smith, Heather L H; Galiana, Henrietta L

    2016-04-01

    The vestibulo-ocular reflex (VOR) plays an important role in our daily activities by enabling us to fixate on objects during head movements. Modeling and identification of the VOR improves our insight into the system behavior and improves diagnosis of various disorders. However, the switching nature of eye movements (nystagmus), including the VOR, makes dynamic analysis challenging. The first step in such analysis is to segment data into its subsystem responses (here slow and fast segment intervals). Misclassification of segments results in biased analysis of the system of interest. Here, we develop a novel three-step algorithm to classify the VOR data into slow and fast intervals automatically. The proposed algorithm is initialized using a K-means clustering method. The initial classification is then refined using system identification approaches and prediction error statistics. The performance of the algorithm is evaluated on simulated and experimental data. It is shown that the new algorithm performance is much improved over the previous methods, in terms of higher specificity.

  7. Automatic script identification from images using cluster-based templates

    Energy Technology Data Exchange (ETDEWEB)

    Hochberg, J.; Kerns, L.; Kelly, P.; Thomas, T.

    1995-02-01

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
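
    The matching stage can be sketched schematically as follows, assuming the clustering step has already produced fixed-size templates (everything below is synthetic, not the trained templates from the paper):

    ```python
    # Schematic sketch of the matching stage: each symbol from a new document is
    # compared to every script's templates, and the document is assigned to the
    # script whose templates match best. All data here are synthetic stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    SIZE = 16 * 16   # symbols rescaled to a fixed 16x16 grid, then flattened

    # Hypothetical template sets produced by the clustering step (one array per script).
    templates = {
        "Cyrillic": rng.integers(0, 2, (40, SIZE)),
        "Roman": rng.integers(0, 2, (40, SIZE)),
    }

    def best_script(document_symbols, templates):
        scores = {}
        for script, temps in templates.items():
            # For each symbol, distance to its closest template; lower mean is better.
            d = np.abs(document_symbols[:, None, :] - temps[None, :, :]).sum(axis=2)
            scores[script] = d.min(axis=1).mean()
        return min(scores, key=scores.get), scores

    doc_symbols = rng.integers(0, 2, (120, SIZE))   # symbols from the new document
    script, scores = best_script(doc_symbols, templates)
    print("identified script:", script, scores)
    ```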

  8. Automatic Identification of Personal Life Events in Twitter

    OpenAIRE

    Dickinson, Thomas; Fernández, Miriam; Thomas, Lisa A.; Mulholland, Paul; Briggs, Pam; Alani, Harith

    2015-01-01

    New social media has led to an explosion in personal digital data that encompasses both those expressions of self chosen by the individual as well as reflections of self provided by other, third parties. The resulting Digital Personhood (DP) data is complex and for many users it is too easy to become lost in the mire of digital data. This paper studies the automatic detection of personal life events in Twitter. Six relevant life events are considered from psychological research including: beg...

  9. Wavelet Packet Based Features for Automatic Script Identification

    OpenAIRE

    M.C. Padma & P. A. Vijaya

    2010-01-01

    In a multi script environment, an archive of documents printed in different scriptsis in practice. For automatic processing of such documents through OpticalCharacter Recognition (OCR), it is necessary to identify the script type of thedocument. In this paper, a novel texture-based approach is presented to identifythe script type of the collection of documents printed in ten Indian scripts -Bangla, Devanagari, Roman (English), Gujarati, Malayalam, Oriya, Tamil,Telugu, Kannada and Urdu. The do...

  10. Automatic and Direct Identification of Blink Components from Scalp EEG

    Directory of Open Access Journals (Sweden)

    Guojun Dai

    2013-08-01

    Full Text Available Eye blink is an important and inevitable artifact during scalp electroencephalogram (EEG) recording. The main problem in EEG signal processing is how to identify eye blink components automatically with independent component analysis (ICA). Taking into account the fact that the eye blink, as an external source, has a higher sum of correlation with frontal EEG channels than all other sources due to both its location and significant amplitude, in this paper we propose a method based on a correlation index and the feature of power distribution to automatically detect eye blink components. Furthermore, we prove mathematically that the correlation between independent components and scalp EEG channels can be translated directly from the mixing matrix of ICA. This helps to simplify calculations and understand the implications of the correlation. The proposed method doesn't need to select a template or thresholds in advance, and it works without simultaneously recording an electrooculography (EOG) reference. The experimental results demonstrate that the proposed method can automatically recognize eye blink components with a high accuracy on entire datasets from 15 subjects.

  11. Automatic and direct identification of blink components from scalp EEG.

    Science.gov (United States)

    Kong, Wanzeng; Zhou, Zhanpeng; Hu, Sanqing; Zhang, Jianhai; Babiloni, Fabio; Dai, Guojun

    2013-08-16

    Eye blink is an important and inevitable artifact during scalp electroencephalogram (EEG) recording. The main problem in EEG signal processing is how to identify eye blink components automatically with independent component analysis (ICA). Taking into account the fact that the eye blink, as an external source, has a higher sum of correlation with frontal EEG channels than all other sources due to both its location and significant amplitude, in this paper we propose a method based on a correlation index and the feature of power distribution to automatically detect eye blink components. Furthermore, we prove mathematically that the correlation between independent components and scalp EEG channels can be translated directly from the mixing matrix of ICA. This helps to simplify calculations and understand the implications of the correlation. The proposed method doesn't need to select a template or thresholds in advance, and it works without simultaneously recording an electrooculography (EOG) reference. The experimental results demonstrate that the proposed method can automatically recognize eye blink components with a high accuracy on entire datasets from 15 subjects.
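
    A rough sketch of the selection rule as we read it: after ICA, score each component by how strongly its mixing-matrix weights concentrate on the frontal channels and pick the maximum. The simulated EEG, channel layout and scoring details are our assumptions, not the authors' code:

    ```python
    # Sketch under our own assumptions: the component whose mixing-matrix weights
    # are most concentrated on frontal channels is taken as the blink component.
    # EEG is simulated with a blink-like source injected on the frontal channels.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_samples = 2000
    channels = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "O1", "O2"]
    frontal = [0, 1, 2, 3]

    sources = rng.laplace(size=(n_samples, 3))
    blink = np.zeros(n_samples)
    blink[::250] = 40.0                                  # sparse, large blink spikes
    mixing = rng.normal(size=(len(channels), 3))
    eeg = sources @ mixing.T
    eeg[:, frontal] += blink[:, None] * [1.0, 1.0, 0.6, 0.6]  # blinks load on frontal sites

    ica = FastICA(n_components=4, max_iter=1000, random_state=0)
    components = ica.fit_transform(eeg)                  # (samples, components)
    A = ica.mixing_                                      # (channels, components)

    # Per-component "frontal concentration" score derived from the mixing matrix,
    # standing in for the correlation-with-frontal-channels criterion.
    score = np.abs(A[frontal, :]).sum(axis=0) / np.abs(A).sum(axis=0)
    blink_ic = int(np.argmax(score))
    print("blink component index:", blink_ic, "scores:", np.round(score, 2))
    ```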

  12. Automatic Boat Identification System for VIIRS Low Light Imaging Data

    Directory of Open Access Journals (Sweden)

    Christopher D. Elvidge

    2015-03-01

    Full Text Available The ability of satellite sensors to detect lit fishing boats has been known since the 1970s. However, the use of the observations has been limited by the lack of an automatic algorithm for reporting the location and brightness of offshore lighting features arising from boats. An examination of lit fishing boat features in Visible Infrared Imaging Radiometer Suite (VIIRS) day/night band (DNB) data indicates that the features are essentially spikes. We have developed a set of algorithms for automatic detection of spikes and characterization of the sharpness of spike features. A spike detection algorithm generates a list of candidate boat detections. A second algorithm measures the height of the spikes for the discarding of ionospheric energetic particle detections and to rate boat detections as either strong or weak. A sharpness index is used to label boat detections that appear blurry due to the scattering of light by clouds. The candidate spikes are then filtered to remove features on land and gas flares. A validation study conducted using analyst-selected boat detections found that the automatic algorithm detected 99.3% of the reference pixel set. VIIRS boat detection data can provide fishery agencies with up-to-date information on fishing boat activity and changes in this activity in response to new regulations and enforcement regimes. The data can provide indications of illegal fishing activity in restricted areas and incursions across Exclusive Economic Zone (EEZ) boundaries. VIIRS boat detections occur widely offshore from East and Southeast Asia, South America and several other regions.
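
    A toy version of the spike-detection idea: compare each pixel with the median of its local background and keep pixels that stand far above it. The threshold and neighbourhood size are our assumptions, not the published VIIRS algorithm:

    ```python
    # Toy spike detector (threshold and window size are assumptions, not the
    # published algorithm): a DNB pixel is a candidate boat detection if it stands
    # far above the median of its local background.
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(0)
    dnb = rng.gamma(2.0, 1.0, size=(200, 200))     # synthetic background radiance
    boats = [(50, 60), (120, 133), (180, 20)]
    for r, c in boats:
        dnb[r, c] += 400.0                         # lit boats appear as isolated spikes

    background = median_filter(dnb, size=5)        # local background estimate
    spike_height = dnb - background

    # Candidate detections: spikes that rise well above the local background.
    threshold = 50.0                               # assumed detection threshold
    rows, cols = np.where(spike_height > threshold)
    print("candidate boat detections:", list(zip(rows.tolist(), cols.tolist())))
    ```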

  13. Automatic Identification and Classification of Moving Vehicles at Night

    Directory of Open Access Journals (Sweden)

    Atena Khodarahmi

    2012-07-01

    Full Text Available Today, moving object detection plays an important role in the computer vision field. Although many moving object detection methods have been proposed, monitoring at night is still a challenging topic. In this paper, a robust algorithm is proposed for the automatic detection of moving vehicles at night or in environments with low light levels and poor image quality. In this algorithm, preprocessing steps are conducted first. Then all vehicles in the frame are identified and classified according to their type. Finally, the moving vehicles are detected. The results demonstrate that the proposed algorithm significantly outperforms existing algorithms for the detection and classification of moving vehicles at night.

  14. Automatic Identification of Silence, Unvoiced and Voiced Chunks in Speech

    Directory of Open Access Journals (Sweden)

    Poonam Sharma

    2013-05-01

    Full Text Available The objective of this work is to automatically segment the speech signal into silence, voiced and unvoiced regions, which is very beneficial in increasing the accuracy and performance of recognition systems. The proposed algorithm is based on three important characteristics of the speech signal, namely Zero Crossing Rate, Short Time Energy and Fundamental Frequency. The performance of the proposed algorithm is evaluated using data collected from four different speakers, and an overall accuracy of 96.61% is achieved.
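
    A minimal sketch of two of the three measures (zero crossing rate and short-time energy) and a naive three-way decision is given below; the frame length and thresholds are illustrative assumptions, and the fundamental-frequency cue used in the paper is omitted.

```python
import numpy as np

def classify_frames(signal, fs, frame_ms=25, e_thresh=0.01, zcr_thresh=0.15):
    """Label each frame as 'silence', 'unvoiced' or 'voiced' (illustrative thresholds)."""
    frame_len = int(fs * frame_ms / 1000)
    labels = []
    for start in range(0, len(signal) - frame_len, frame_len):
        frame = signal[start:start + frame_len]
        energy = np.mean(frame ** 2)                         # short-time energy
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2   # zero crossings per sample
        if energy < e_thresh:
            labels.append("silence")
        elif zcr > zcr_thresh:
            labels.append("unvoiced")   # noise-like: high ZCR, low energy
        else:
            labels.append("voiced")     # periodic: low ZCR, high energy
    return labels

fs = 16000
tone = np.sin(2 * np.pi * 200 * np.arange(fs) / fs)   # one second of a voiced-like tone
print(classify_frames(tone, fs)[:5])
```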

  15. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil)]; Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)]

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, uniform corrosion is characterized by texture attributes extracted from the co-occurrence matrix and by the Self-Organizing Map (SOM) clustering algorithm. We present a technique for the automatic inspection of oil and gas storage tanks and pipelines of petrochemical industries without disturbing their properties and performance. Experimental results are promising and encourage the possibility of using this methodology in designing trustworthy and robust early failure detection systems. (author)
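
    The texture-plus-clustering idea can be sketched as follows; the co-occurrence features are computed by hand, and scikit-learn's KMeans stands in for the Self-Organizing Map used in the paper. Patch sizes, grey-level quantization, and the number of clusters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def glcm_features(patch, levels=16):
    """Contrast and energy from a horizontal co-occurrence matrix of a grey-level patch."""
    q = np.floor(patch.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):   # pairs of horizontal neighbours
        glcm[i, j] += 1
    glcm /= glcm.sum() + 1e-12
    idx = np.arange(levels)
    contrast = ((idx[:, None] - idx[None, :]) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return contrast, energy

# Cluster patch features into corroded / non-corroded groups.
# The paper uses a Self-Organizing Map; KMeans is used here only as a compact stand-in.
patches = [np.random.randint(0, 256, (32, 32)) for _ in range(50)]  # placeholder patches
features = np.array([glcm_features(p) for p in patches])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels[:10])
```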

  16. An Evaluation of Cellular Neural Networks for the Automatic Identification of Cephalometric Landmarks on Digital Images

    Directory of Open Access Journals (Sweden)

    Rosalia Leonardi

    2009-01-01

    Full Text Available Several efforts have been made to completely automate cephalometric analysis by automatic landmark search. However, the accuracy obtained was worse than that of manual identification in every study. The analogue-to-digital conversion of X-rays has been claimed to be the main problem. Therefore, the aim of this investigation was to evaluate the accuracy of the Cellular Neural Networks approach for automatic location of cephalometric landmarks on softcopies of direct digital cephalometric X-rays. Forty-one direct-digital lateral cephalometric radiographs, obtained with a Siemens Orthophos DS Ceph, were used in this study, and 10 landmarks (N, A Point, Ba, Po, Pt, B Point, Pg, PM, UIE, LIE) were the object of automatic landmark identification. The mean errors and standard deviations from the best estimate of the cephalometric points were calculated for each landmark. Differences in the mean errors of automatic and manual landmarking were compared with a 1-way analysis of variance. The analyses indicated that the differences were very small, and they were found at most within 0.59 mm. Furthermore, only a few of these differences were statistically significant, and the differences were so small as to be clinically meaningless in most instances. Therefore, the use of direct digital X-ray files, as opposed to scanned X-rays, improved the landmark accuracy of automatic detection. Investigations on softcopies of digital cephalometric X-rays, to search for more landmarks in order to enable a complete automatic cephalometric analysis, are strongly encouraged.

  17. Automatic Priming Effects for New Associations in Lexical Decision and Perceptual Identification

    NARCIS (Netherlands)

    D. Pecher (Diane); J.G.W. Raaijmakers (Jeroen)

    1999-01-01

    Information storage in semantic memory was investigated by looking at automatic priming effects for new associations in two experiments. In the study phase, word pairs were presented in a paired-associate learning task. Lexical decision and perceptual identification were used to examine priming.

  18. Performance Modelling of Automatic Identification System with Extended Field of View

    DEFF Research Database (Denmark)

    Lauersen, Troels; Mortensen, Hans Peter; Pedersen, Nikolaj Bisgaard;

    2010-01-01

    This paper deals with AIS (Automatic Identification System) behavior, to investigate the severity of packet collisions in an extended field of view (FOV). This is an important issue for satellite-based AIS, and the main goal is a feasibility study to find out to what extent an increased FOV...

  19. Exploring features for automatic identification of news queries through query logs

    Institute of Scientific and Technical Information of China (English)

    Xiaojuan ZHANG; Jian LI

    2014-01-01

    Purpose: Existing research on predicting queries with news intent has tried to extract classification features from external knowledge bases; this paper presents how to apply features extracted from query logs for the automatic identification of news queries without using any external resources. Design/methodology/approach: First, we manually labeled 1,220 news queries from Sogou.com. Based on the analysis of these queries, we then identified three features of news queries in terms of query content, time of query occurrence and user click behavior. Afterwards, we used 12 effective features proposed in the literature as a baseline and conducted experiments based on the support vector machine (SVM) classifier. Finally, we compared the impacts of the features used in this paper on the identification of news queries. Findings: Compared with the baseline features, the F-score improved from 0.6414 to 0.8368 after the use of the three newly identified features, among which the burst point (bst) was the most effective for predicting news queries. In addition, query expression (qes) was more useful than query terms, and among the click behavior-based features, news URL was the most effective one. Research limitations: Analyses based on features extracted from query logs might produce limited results. The segmentation tool used in this study has been more widely applied to long texts rather than short queries. Practical implications: The research will be helpful for general-purpose search engines to address search intents for news events. Originality/value: Our approach provides a new and different perspective on recognizing queries with news intent without such large news corpora as blogs or Twitter.

  20. Defect Automatic Identification of Eddy Current Pulsed Thermography

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2014-01-01

    Full Text Available Eddy current pulsed thermography (ECPT) is an effective nondestructive testing and evaluation (NDT&E) technique and has been applied to a wide range of conductive materials. Manually selected frames have been used for defect detection and quantification, with defects indicated by high/low temperature in the frames. However, variation of surface emissivity sometimes introduces illusory temperature inhomogeneity and results in false alarms. To improve the probability of detection, this paper proposes a method based on two heat-balance states which can restrain the influence of the emissivity. In addition, independent component analysis (ICA) is applied to automatically identify defect patterns and quantify the defects. An experiment was carried out to validate the proposed methods.

  1. Automatic identification of model reductions for discrete stochastic simulation

    Science.gov (United States)

    Wu, Sheng; Fu, Jin; Li, Hong; Petzold, Linda

    2012-07-01

    Multiple time scales in cellular chemical reaction systems present a challenge for the efficiency of stochastic simulation. Numerous model reductions have been proposed to accelerate the simulation of chemically reacting systems by exploiting time scale separation. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming, prone to error, and opportunities for model reduction may be missed, particularly for large models. We propose an automatic model analysis algorithm using an adaptively weighted Petri net to dynamically identify opportunities for model reductions for both the stochastic simulation algorithm and tau-leaping simulation, with no requirement of expert knowledge input. Results are presented to demonstrate the utility and effectiveness of this approach.

  2. Perspective of the applications of automatic identification technologies in the Serbian Army

    Directory of Open Access Journals (Sweden)

    Velibor V. Jovanović

    2012-07-01

    Full Text Available Without modern information systems, supply-chain management is almost impossible. Automatic identification technologies provide automated data processing, which improves working conditions and supports decision making. Automatic identification media, notably BARCODE and RFID technology, are used as carriers of labels with high-quality data and adequate descriptions of materiel, providing crucial visibility of inventory levels throughout the supply chain. With these media and the use of an adequate information system, the Ministry of Defense of the Republic of Serbia will be able to establish a system of codification and, in accordance with the NATO codification system, to successfully implement a unique codification, classification and determination of storage numbers for all tools, components and spare parts for their unequivocal identification. In the long run, this will help end users to perform everyday tasks without compromising the material integrity of security data. It will also help command structures to have reliable information for decision making to ensure optimal management. Products and services that pass the codification procedure will have the opportunity to be offered in the largest market of armament and military equipment. This paper gives a comparative analysis of two automatic identification technologies - BARCODE, the most common one, and RFID, the most advanced one - with an emphasis on the advantages and disadvantages of their use in tracking inventory through the supply chain. Their possible application in the Serbian Army is discussed in general.

  3. Managing Returnable Containers Logistics - A Case Study Part II - Improving Visibility through Using Automatic Identification Technologies

    Directory of Open Access Journals (Sweden)

    Gretchen Meiser

    2011-05-01

    Full Text Available This case study is the result of a project conducted on behalf of a company that uses its own returnable containers to transport purchased parts from suppliers. The objective of this project was to develop a proposal to enable the company to more effectively track and manage its returnable containers. The research activities in support of this project included (1) the analysis and documentation of the physical flow and the information flow associated with the containers and (2) the investigation of new technologies to improve the automatic identification and tracking of containers. This paper explains the automatic identification technologies and important criteria for selection. A companion paper details the flow of information and containers within the logistics chain, and it identifies areas for improving the management of the containers.

  4. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    Science.gov (United States)

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as the elderly, the bedridden and diabetics. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess the risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from the pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the stress experienced by each limb over time. The experimental results indicate high performance and more than 94% average accuracy of the proposed approach. PMID:27268736

  5. Automatic Identification of Tomato Maturation Using Multilayer Feed Forward Neural Network with Genetic Algorithms (GA)

    Institute of Scientific and Technical Information of China (English)

    FANG Jun-long; ZHANG Chang-li; WANG Shu-wen

    2004-01-01

    We set up a computer vision system for tomato images. Using this system, the RGB values of a tomato image were converted into HSI values, whose H (hue) component was used to capture the color character of the tomato surface. A multilayer feed-forward neural network trained with a genetic algorithm (GA) then performs the automatic identification of tomato maturation. The experimental results showed that the accuracy was up to 94%.
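
    The hue component referred to above comes from the standard RGB-to-HSI conversion; a minimal sketch is shown below. The example pixel values and the ripeness interpretation are illustrative assumptions.

```python
import numpy as np

def rgb_to_hue(r, g, b):
    """Hue component (in degrees) of the standard RGB-to-HSI conversion."""
    r, g, b = float(r), float(g), float(b)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return theta if b <= g else 360.0 - theta

# A ripe tomato pixel is dominated by red, so its hue lies near 0/360 degrees.
print(rgb_to_hue(200, 40, 30))
```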

  6. Rapid Identification of Volatile Compounds in Aromatic Plants by Automatic Thermal Desorption - GC-MS

    OpenAIRE

    Esteban, J. L.; Martínez-Castro, I.; Morales Valverde, Ramón; Fabrellas, B.; Sanz, J.

    1996-01-01

    Thermal desorption is a valuable method for the fractionation of plant volatile components, which can be carried out on-line with GC analysis. The use of coupled GC-MS affords additional qualitative information, of special interest for plant species whose composition has not been previously studied. Some examples of the application of automatic thermal desorption coupled to GC-MS to the identification and characterization of the volatile components of plants of different families are given.

  7. An Automatic Identification Procedure to Promote the use of FES-Cycling Training for Hemiparetic Patients

    Directory of Open Access Journals (Sweden)

    Emilia Ambrosini

    2014-01-01

    Full Text Available Cycling induced by Functional Electrical Stimulation (FES) training currently requires a manual setting of different parameters, which is a time-consuming and scarcely repeatable procedure. We proposed an automatic procedure for setting session-specific parameters optimized for hemiparetic patients. This procedure consisted of the identification of the stimulation strategy as the angular ranges during which FES drove the motion, the comparison between the identified strategy and the physiological muscular activation strategy, and the setting of the pulse amplitude and duration of each stimulated muscle. Preliminary trials on 10 healthy volunteers helped define the procedure. Feasibility tests on 8 hemiparetic patients (5 stroke, 3 traumatic brain injury) were performed. The procedure maximized the motor output within the tolerance constraint, identified a biomimetic strategy in 6 patients, and always lasted less than 5 minutes. Its reasonable duration and automatic nature make the procedure usable at the beginning of every training session, potentially enhancing the performance of FES-cycling training.

  8. A new approach to the automatic identification of organism evolution using neural networks.

    Science.gov (United States)

    Kasperski, Andrzej; Kasperska, Renata

    2016-01-01

    Automatic identification of organism evolution still remains a challenging task, which is especially exciting when the evolution of humans is considered. The main aim of this work is to present a new idea that allows organism evolution analysis using neural networks. Here we show that it is possible to identify the evolution of any organism in a fully automatic way using the designed EvolutionXXI program, which contains an implemented neural network. The neural network has been taught using cytochrome b sequences of selected organisms. Then, analyses have been carried out for various exemplary organisms in order to demonstrate the capabilities of the EvolutionXXI program. It is shown that the presented idea allows supporting existing hypotheses concerning evolutionary relationships between selected organisms, among others Sirenia and elephants, hippopotami and whales, scorpions and spiders, dolphins and whales. Moreover, primate (including human), tree shrew and yeast evolution has been reconstructed. PMID:26975238

  9. A new approach to the automatic identification of organism evolution using neural networks.

    Science.gov (United States)

    Kasperski, Andrzej; Kasperska, Renata

    2016-01-01

    Automatic identification of organism evolution still remains a challenging task, which is especially exciting when the evolution of humans is considered. The main aim of this work is to present a new idea that allows organism evolution analysis using neural networks. Here we show that it is possible to identify the evolution of any organism in a fully automatic way using the designed EvolutionXXI program, which contains an implemented neural network. The neural network has been taught using cytochrome b sequences of selected organisms. Then, analyses have been carried out for various exemplary organisms in order to demonstrate the capabilities of the EvolutionXXI program. It is shown that the presented idea allows supporting existing hypotheses concerning evolutionary relationships between selected organisms, among others Sirenia and elephants, hippopotami and whales, scorpions and spiders, dolphins and whales. Moreover, primate (including human), tree shrew and yeast evolution has been reconstructed.

  10. Automatic Active-Region Identification and Azimuth Disambiguation of the SOLIS/VSM Full-Disk Vector Magnetograms

    CERN Document Server

    Georgoulis, M K; Henney, C J

    2007-01-01

    The Vector Spectromagnetograph (VSM) of the NSO's Synoptic Optical Long-Term Investigations of the Sun (SOLIS) facility is now operational and obtains the first-ever vector magnetic field measurements of the entire visible solar hemisphere. To fully exploit the unprecedented SOLIS/VSM data, however, one must first address two critical problems: first, the study of solar active regions requires an automatic, physically intuitive technique for active-region identification in the solar disk; second, use of active-region vector magnetograms requires removal of the azimuthal $180^\circ$ ambiguity in the orientation of the transverse magnetic field component. Here we report on an effort to address both problems simultaneously and efficiently. To identify solar active regions we apply an algorithm designed to locate complex, flux-balanced, magnetic structures with a dominant E-W orientation on the disk. Each of the disk portions corresponding to active regions is thereafter extracted and subjected to the Nonpotential M...

  11. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors.

    Science.gov (United States)

    Sakurai, Yoshihisa; Fujita, Zenya; Ishige, Yusuke

    2016-04-02

    This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier's subtechniques in course conditions.

  12. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Yoshihisa Sakurai

    2016-04-01

    Full Text Available This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier’s subtechniques in course conditions.

  13. Automatic Identification of Subtechniques in Skating-Style Roller Skiing Using Inertial Sensors.

    Science.gov (United States)

    Sakurai, Yoshihisa; Fujita, Zenya; Ishige, Yusuke

    2016-01-01

    This study aims to develop and validate an automated system for identifying skating-style cross-country subtechniques using inertial sensors. In the first experiment, the performance of a male cross-country skier was used to develop an automated identification system. In the second, eight male and seven female college cross-country skiers participated to validate the developed identification system. Each subject wore inertial sensors on both wrists and both roller skis, and a small video camera on a backpack. All subjects skied through a 3450 m roller ski course using a skating style at their maximum speed. The adopted subtechniques were identified by the automated method based on the data obtained from the sensors, as well as by visual observations from a video recording of the same ski run. The system correctly identified 6418 subtechniques from a total of 6768 cycles, which indicates an accuracy of 94.8%. The precisions of the automatic system for identifying the V1R, V1L, V2R, V2L, V2AR, and V2AL subtechniques were 87.6%, 87.0%, 97.5%, 97.8%, 92.1%, and 92.0%, respectively. Most incorrect identification cases occurred during a subtechnique identification that included a transition and turn event. Identification accuracy can be improved by separately identifying transition and turn events. This system could be used to evaluate each skier's subtechniques in course conditions. PMID:27049388

  14. Automatic derivation of domain terms and concept location based on the analysis of the identifiers

    CERN Document Server

    Vaclavik, Peter; Mezei, Marek

    2010-01-01

    Developers express the meaning of domain ideas in specifically selected identifiers and comments that form the implemented code. Software maintenance requires knowledge and understanding of the encoded ideas. This paper presents a way to automatically create a domain vocabulary. Knowledge of the domain vocabulary supports the comprehension of a specific domain for later code maintenance or evolution. We present experiments conducted in two selected domains: application servers and web frameworks. Knowledge of domain terms enables easy localization of the chunks of code that belong to a certain term. We consider these chunks of code as "concepts" and their placement in the code as "concept location". Application developers may also benefit from the obtained domain terms. These terms are parts of speech that characterize a certain concept. Concepts are encoded in "classes" (OO paradigm) and the obtained vocabulary of terms supports the selection and the comprehension of the class' appropriate identifiers. ...

  15. RESEARCH ON AUTOMATIC FOG IDENTIFICATION TECHNOLOGY BY METEOROLOGICAL SATELLITE REMOTE SENSING

    Institute of Scientific and Technical Information of China (English)

    ZHOU Hong-mei; GE Wei-qiang; BAI Hua; LIU Dong-wei; YANG Ying-min

    2009-01-01

    There is an urgent need for the development of a method that can undertake rapid, effective, and accurate monitoring and identification of fog by satellite remote sensing, since heavy fog can cause enormous disasters to China's national economy and to people's lives and property in urban and coastal areas. In this paper, the correlative relationship between the reflectivity of the land surface and clouds in different time phases is found, based on an analysis of the radiative and satellite-based spectral characteristics of fog. Through calculation and analysis of the relative variability of the reflectivity in the images, the threshold to identify quasi-fog areas is generated automatically. Furthermore, using the technique of quick image run-length encoding, and in combination with such practical methods as analyzing texture and shape features, smoothness, and template characteristics, the automatic identification of fog and fog-cloud separation using meteorological satellite remote sensing images are studied, with good results in application.

  16. Deep learning for automatic localization, identification, and segmentation of vertebral bodies in volumetric MR images

    Science.gov (United States)

    Suzani, Amin; Rasoulian, Abtin; Seitel, Alexander; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2015-03-01

    This paper proposes an automatic method for vertebra localization, labeling, and segmentation in multi-slice Magnetic Resonance (MR) images. Prior work in this area on MR images mostly requires user interaction, while our method is fully automatic. Cubic intensity-based features are extracted from image voxels. A deep learning approach is used for simultaneous localization and identification of vertebrae. The localized points are refined by local thresholding in the region of the detected vertebral column. Thereafter, a statistical multi-vertebrae model is initialized on the localized vertebrae. An iterative Expectation Maximization technique is used to register the vertebral bodies of the model to the image edges and obtain a segmentation of the lumbar vertebral bodies. The method is evaluated by applying it to nine volumetric MR images of the spine. The results demonstrate 100% vertebra identification and a mean surface error of below 2.8 mm for 3D segmentation. Computation time is less than three minutes per high-resolution volumetric image.

  17. Automatic Identification of Critical Data Items in a Database to Mitigate the Effects of Malicious Insiders

    Science.gov (United States)

    White, Jonathan; Panda, Brajendra

    A major concern for computer system security is the threat from malicious insiders who target and abuse critical data items in the system. In this paper, we propose a solution to enable automatic identification of critical data items in a database by way of data dependency relationships. This identification of critical data items is necessary because insider threats often target mission-critical data in order to accomplish malicious tasks. Unfortunately, currently available systems fail to address this problem in a comprehensive manner. It is more difficult for non-experts to identify these critical data items because of their lack of familiarity and because data systems are constantly changing. By identifying the critical data items automatically, security engineers will be better prepared to protect what is critical to the mission of the organization and will also be able to focus their security efforts on these critical data items. We have developed an algorithm that scans the database logs and forms a directed graph showing which items influence a large number of other items and at what frequency this influence occurs. This graph is traversed to reveal the data items which have a large influence throughout the database system, using a novel metric-based formula. These items are critical to the system because if they are maliciously altered or stolen, the malicious alterations will spread throughout the system, delaying recovery and causing a much more malignant effect. As these items have significant influence, they are deemed to be critical and worthy of extra security measures. Our proposal is not intended to replace existing intrusion detection systems, but rather to complement current and future technologies. This approach has not been attempted before, and our experimental results have shown that it is very effective in revealing critical data items automatically.
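
    The log-derived influence graph can be sketched as follows; the decay factor for indirect influence and the scoring rule are illustrative assumptions and do not reproduce the paper's metric-based formula.

```python
from collections import defaultdict

def build_influence_graph(log_entries):
    """log_entries: iterable of (source_item, influenced_item) pairs taken from the DB log."""
    graph = defaultdict(lambda: defaultdict(int))
    for src, dst in log_entries:
        graph[src][dst] += 1                 # edge weight = how often src influenced dst
    return graph

def criticality_scores(graph):
    """Score each item by its frequency-weighted reach through the graph (illustrative metric)."""
    scores = {}
    for item in graph:
        seen, stack, score = {item}, [(item, 1.0)], 0.0
        while stack:
            node, weight = stack.pop()
            for nxt, freq in graph.get(node, {}).items():
                if nxt not in seen:
                    seen.add(nxt)
                    score += weight * freq
                    stack.append((nxt, weight * 0.5))   # decay for indirect influence, an assumption
        scores[item] = score
    return scores

log = [("a", "b"), ("a", "b"), ("b", "c"), ("a", "d"), ("d", "c")]
print(sorted(criticality_scores(build_influence_graph(log)).items(), key=lambda kv: -kv[1]))
```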

  18. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    Energy Technology Data Exchange (ETDEWEB)

    Amato, G. [ISTI-CNR, Area della Ricerca, Via Moruzzi 1, 56124, Pisa (Italy); Cristoforetti, G.; Legnaioli, S.; Lorenzetti, G.; Palleschi, V. [IPCF-CNR, Area della Ricerca, Via Moruzzi 1, 56124, Pisa (Italy); Sorrentino, F., E-mail: sorrentino@fi.infn.i [Dipartimento di Fisica e astronomia, Universita di Firenze, Polo Scientifico, via Sansone 1, 50019 Sesto Fiorentino (Italy); Istituto di Cibernetica CNR, via Campi Flegrei 34, 80078 Pozzuoli (Italy); Marwan Technology, c/o Dipartimento di Fisica 'E. Fermi', Largo Pontecorvo 3, 56127 Pisa (Italy); Tognoni, E. [INO-CNR, Area della Ricerca, Via Moruzzi 1, 56124 Pisa (Italy)

    2010-08-15

    In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks, in the database, in its wavelength neighborhood. We assume we have a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.

  19. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    Science.gov (United States)

    Amato, G.; Cristoforetti, G.; Legnaioli, S.; Lorenzetti, G.; Palleschi, V.; Sorrentino, F.; Tognoni, E.

    2010-08-01

    In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks, in the database, in its wavelength neighborhood. We assume we have a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.

  20. Progress towards an unassisted element identification from Laser Induced Breakdown Spectra with automatic ranking techniques inspired by text retrieval

    International Nuclear Information System (INIS)

    In this communication, we will illustrate an algorithm for automatic element identification in LIBS spectra which takes inspiration from the vector space model applied to text retrieval techniques. The vector space model prescribes that text documents and text queries are represented as vectors of weighted terms (words). Document ranking, with respect to relevance to a query, is obtained by comparing the vectors representing the documents with the vector representing the query. In our case, we represent elements and samples as vectors of weighted peaks, obtained from their spectra. The likelihood of the presence of an element in a sample is computed by comparing the corresponding vectors of weighted peaks. The weight of a peak is proportional to its intensity and to the inverse of the number of peaks, in the database, in its wavelength neighborhood. We assume we have a database containing the peaks of all elements we want to recognize, where each peak is represented by a wavelength and is associated with its expected relative intensity and the corresponding element. Detection of elements in a sample is obtained by ranking the elements according to the distance of the associated vectors from the vector representing the sample. The application of this approach to element identification using LIBS spectra obtained from several kinds of metallic alloys will also be illustrated. The possible extension of this technique towards an algorithm for fully automated LIBS analysis will be discussed.
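
    The peak-weighting and ranking scheme described in these records can be sketched as follows; the wavelength tolerance, the toy line database, and the use of cosine similarity as the comparison measure are assumptions made for illustration.

```python
import numpy as np

def peak_weights(peaks, all_db_wavelengths, window=0.2):
    """peaks: list of (wavelength_nm, intensity). Weight = intensity / crowding of that region."""
    weights = {}
    for wl, inten in peaks:
        crowding = sum(abs(wl - w) <= window for w in all_db_wavelengths)
        weights[round(wl, 1)] = inten / max(crowding, 1)
    return weights

def cosine(u, v):
    keys = sorted(set(u) | set(v))
    a = np.array([u.get(k, 0.0) for k in keys])
    b = np.array([v.get(k, 0.0) for k in keys])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Tiny fictitious database: element -> list of (wavelength, relative intensity).
db = {"Fe": [(371.9, 1.0), (374.5, 0.6)], "Cu": [(324.7, 1.0), (327.4, 0.8)]}
all_wl = [wl for lines in db.values() for wl, _ in lines]
element_vectors = {el: peak_weights(lines, all_wl) for el, lines in db.items()}

sample_peaks = [(371.9, 900.0), (374.5, 500.0), (500.0, 50.0)]   # measured spectrum peaks
sample_vec = peak_weights(sample_peaks, all_wl)
ranking = sorted(element_vectors, key=lambda el: cosine(sample_vec, element_vectors[el]), reverse=True)
print(ranking)   # the iron lines dominate this toy sample
```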

  1. Terminology of the public relations field: corpus — automatic term recognition — terminology database

    Directory of Open Access Journals (Sweden)

    Nataša Logar Berginc

    2013-12-01

    Full Text Available The article describes an analysis of automatic term recognition results performed for single- and multi-word terms with the LUIZ term extraction system. The target application of the results is a terminology database of Public Relations, and the main resource is the KoRP Public Relations Corpus. Our analysis is focused on two segments: (a) single-word noun term candidates, which we compare with the frequency list of nouns from KoRP and whose termhood we evaluate on the basis of the judgements of two domain experts, and (b) multi-word term candidates with a verb or a noun as headword. In order to better assess the performance of the system and the soundness of our approach we also performed an analysis of recall. Our results show that the terminological relevance of the extracted nouns is indeed higher than that of merely frequent nouns, and that verbal phrases only rarely count as proper terms. The most productive patterns of multi-word terms with a noun as headword have the following structure: [adjective + noun], [adjective + and + adjective + noun] and [adjective + adjective + noun]. The analysis of recall shows low inter-annotator agreement, but nevertheless very satisfactory recall levels.
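
    The reported multi-word patterns can be matched with a very small sketch over POS-tagged tokens; the tag names, the tagging step (assumed to be done elsewhere), and the example phrase are assumptions.

```python
def extract_candidates(tagged_tokens):
    """tagged_tokens: list of (word, tag) with tags 'ADJ', 'NOUN', 'CONJ', ...

    Matches the patterns reported as most productive:
    [ADJ NOUN], [ADJ CONJ ADJ NOUN] and [ADJ ADJ NOUN].
    """
    patterns = [("ADJ", "NOUN"),
                ("ADJ", "CONJ", "ADJ", "NOUN"),
                ("ADJ", "ADJ", "NOUN")]
    candidates = []
    for i in range(len(tagged_tokens)):
        for pat in patterns:
            window = tagged_tokens[i:i + len(pat)]
            if len(window) == len(pat) and all(t == p for (_, t), p in zip(window, pat)):
                candidates.append(" ".join(w for w, _ in window))
    return candidates

# Hypothetical pre-tagged Slovenian fragment, for illustration only.
sentence = [("strateško", "ADJ"), ("komuniciranje", "NOUN"), ("in", "CONJ"), ("odnosi", "NOUN")]
print(extract_candidates(sentence))   # -> ['strateško komuniciranje']
```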

  2. Semi-automatic charge and mass identification in two-dimensional matrices

    CERN Document Server

    Gruyer, Diego; Chbihi, A; Frankland, J D; Barlini, S; Borderie, B; Bougault, R; Duenas, J A; Neindre, N Le; Lopez, O; Pastore, G; Piantelli, S; Valdre, S; Verde, G; Vient, E

    2016-01-01

    This article presents a new semi-automatic method for charge and mass identification in two-dimensional matrices. The proposed algorithm is based on the matrix's properties and uses as little information as possible on the global form of the identification lines, making it applicable to a large variety of matrices, including various $\Delta$E-E correlations, or those coming from Pulse Shape Analysis of the charge signal in silicon detectors. Particular attention has been paid to the implementation in a suitable graphical environment, so that only two mouse-clicks are required from the user to calculate all initialization parameters. Example applications to recent data from both INDRA and FAZIA telescopes are presented.

  3. Identification of forensic samples by using an infrared-based automatic DNA sequencer.

    Science.gov (United States)

    Ricci, Ugo; Sani, Ilaria; Klintschar, Michael; Cerri, Nicoletta; De Ferrari, Francesco; Giovannucci Uzielli, Maria Luisa

    2003-06-01

    We have recently introduced a new protocol for analyzing all core loci of the Federal Bureau of Investigation's (FBI) Combined DNA Index System (CODIS) with an infrared (IR) automatic DNA sequencer (LI-COR 4200). The amplicons were labeled with forward oligonucleotide primers, covalently linked to a new infrared fluorescent molecule (IRDye 800). The alleles were displayed as familiar autoradiogram-like images with real-time detection. This protocol was employed for paternity testing, population studies, and identification of degraded forensic samples. We extensively analyzed some simulated forensic samples and mixed stains (blood, semen, saliva, bones, and fixed archival embedded tissues), comparing the results with donor samples. Sensitivity studies were also performed for the four multiplex systems. Our results show the efficiency, reliability, and accuracy of the IR system for the analysis of forensic samples. We also compared the efficiency of the multiplex protocol with ultraviolet (UV) technology. Paternity tests, undegraded DNA samples, and real forensic samples were analyzed with this approach based on IR technology and with UV-based automatic sequencers in combination with commercially-available kits. The comparability of the results with the widespread UV methods suggests that it is possible to exchange data between laboratories using the same core group of markers but different primer sets and detection methods.

  4. Automatic Identification of Artifact-Related Independent Components for Artifact Removal in EEG Recordings.

    Science.gov (United States)

    Zou, Yuan; Nathan, Viswam; Jafari, Roozbeh

    2016-01-01

    Electroencephalography (EEG) is the recording of electrical activity produced by the firing of neurons within the brain. These activities can be decoded by signal processing techniques. However, EEG recordings are always contaminated with artifacts which hinder the decoding process. Therefore, identifying and removing artifacts is an important step. Researchers often clean EEG recordings with assistance from independent component analysis (ICA), since it can decompose EEG recordings into a number of artifact-related and event-related potential (ERP)-related independent components. However, existing ICA-based artifact identification strategies mostly restrict themselves to a subset of artifacts, e.g., identifying eye movement artifacts only, and have not been shown to reliably identify artifacts caused by nonbiological origins like high-impedance electrodes. In this paper, we propose an automatic algorithm for the identification of general artifacts. The proposed algorithm consists of two parts: 1) an event-related feature-based clustering algorithm used to identify artifacts which have physiological origins; and 2) the electrode-scalp impedance information employed for identifying nonbiological artifacts. The results on EEG data collected from ten subjects show that our algorithm can effectively detect, separate, and remove both physiological and nonbiological artifacts. Qualitative evaluation of the reconstructed EEG signals demonstrates that our proposed method can effectively enhance the signal quality, especially the quality of ERPs, even for those that barely display ERPs in the raw EEG. The performance results also show that our proposed method can effectively identify artifacts and subsequently enhance the classification accuracies compared to four commonly used automatic artifact removal methods.

  5. Automatic identification of pectoralis muscle on digital cranio-caudal-view mammograms

    Science.gov (United States)

    Ge, Mei; Mawdsley, Gordon; Yaffe, Martin

    2011-03-01

    To improve efficiency and reduce human error in the computerized calculation of volumetric breast density, we have developed an automatic identification process which suppresses the projected region of the pectoralis muscle on digital CC-view mammograms. The pixels in the image of the pectoralis muscle represent dense tissue that is not related to risk and would cause an error in the estimated breast density if counted as fibroglandular tissue. The pectoralis muscle on the CC-view is not always visible and has variable shape and location. Our algorithm robustly detects the presence of the pectoralis in the image and segments it as a semi-elliptical region that closely matches manually segmented images. We present a pipeline in which adaptive thresholding and distance transforms are used in the initial pectoralis region identification; statistical region growing is applied to explore the region within the identified location, aimed at refining the boundary; and a 2D shape descriptor is developed for target validation: the segmented region is identified as the pectoralis muscle if it has a semi-elliptical contour. After the pectoralis muscle is identified, 1D-FFT filtering is used for boundary smoothing. Quantitative evaluation was performed by comparing manual segmentation by a trained operator and analysis using the algorithm in a set of 174 randomly selected digital mammograms. Use of the algorithm is shown to improve accuracy in the automatic determination of the volumetric ratio of breast composition by removal of the pectoralis muscle from both the numerator and denominator. It also greatly improves efficiency and throughput in large-scale volumetric mammographic density studies where previously interaction with an operator was required to obtain that level of accuracy.

  6. Automatic Identification of Artifact-Related Independent Components for Artifact Removal in EEG Recordings.

    Science.gov (United States)

    Zou, Yuan; Nathan, Viswam; Jafari, Roozbeh

    2016-01-01

    Electroencephalography (EEG) is the recording of electrical activity produced by the firing of neurons within the brain. These activities can be decoded by signal processing techniques. However, EEG recordings are always contaminated with artifacts which hinder the decoding process. Therefore, identifying and removing artifacts is an important step. Researchers often clean EEG recordings with assistance from independent component analysis (ICA), since it can decompose EEG recordings into a number of artifact-related and event-related potential (ERP)-related independent components. However, existing ICA-based artifact identification strategies mostly restrict themselves to a subset of artifacts, e.g., identifying eye movement artifacts only, and have not been shown to reliably identify artifacts caused by nonbiological origins like high-impedance electrodes. In this paper, we propose an automatic algorithm for the identification of general artifacts. The proposed algorithm consists of two parts: 1) an event-related feature-based clustering algorithm used to identify artifacts which have physiological origins; and 2) the electrode-scalp impedance information employed for identifying nonbiological artifacts. The results on EEG data collected from ten subjects show that our algorithm can effectively detect, separate, and remove both physiological and nonbiological artifacts. Qualitative evaluation of the reconstructed EEG signals demonstrates that our proposed method can effectively enhance the signal quality, especially the quality of ERPs, even for those that barely display ERPs in the raw EEG. The performance results also show that our proposed method can effectively identify artifacts and subsequently enhance the classification accuracies compared to four commonly used automatic artifact removal methods. PMID:25415992

  7. Automatic Threshold Determination for a Local Approach of Change Detection in Long-Term Signal Recordings

    Directory of Open Access Journals (Sweden)

    David Hewson

    2007-01-01

    Full Text Available CUSUM (cumulative sum) is a well-known method that can be used to detect changes in a signal when the parameters of this signal are known. This paper presents an adaptation of CUSUM-based change detection algorithms to long-term signal recordings where the various hypotheses contained in the signal are unknown. The starting point of the work was the dynamic cumulative sum (DCS) algorithm, previously developed for application to long-term electromyography (EMG) recordings. DCS has been improved in two ways. The first was a new procedure to estimate the distribution parameters so that the detectability property is respected. The second was the definition of two separate, automatically determined thresholds: one of them (the lower threshold) acted to stop the estimation process, while the other (the upper threshold) was applied to the detection function. The automatic determination of the thresholds was based on the Kullback-Leibler distance, which gives information about the distance between the detected segments (events). Tests on simulated data demonstrated the efficiency of these improvements to the DCS algorithm.
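
    For background, a classical CUSUM detector for a known mean shift can be sketched as follows; the fixed threshold below is exactly what the DCS approach described above avoids by estimating parameters and thresholds automatically. All parameter values are illustrative assumptions.

```python
import numpy as np

def cusum_mean_shift(x, mu0, mu1, sigma, threshold):
    """Classical CUSUM for a change from mean mu0 to mu1 (known parameters).

    Returns the index of the first alarm, or None. The DCS variant discussed
    above estimates the post-change parameters on the fly instead of assuming them.
    """
    # log-likelihood ratio of each sample under N(mu1, sigma) vs N(mu0, sigma)
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)
    g = 0.0
    for k, s in enumerate(llr):
        g = max(0.0, g + s)          # cumulative sum, reset at zero
        if g > threshold:
            return k
    return None

rng = np.random.default_rng(1)
signal = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.5, 1, 300)])
print(cusum_mean_shift(signal, mu0=0.0, mu1=1.5, sigma=1.0, threshold=10.0))
```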

  8. A new technology for automatic identification and sorting of plastics for recycling.

    Science.gov (United States)

    Ahmad, S R

    2004-10-01

    A new technology for the automatic sorting of plastics, based upon optical identification of the fluorescence signatures of dyes incorporated in such materials in trace concentrations prior to product manufacturing, is described. Three commercial tracers were selected primarily on the basis of their good absorbency in the 310-370 nm spectral band and their identifiable narrow-band fluorescence signatures in the visible band of the spectrum when present in binary combinations. This absorption band was selected because of the availability of strong emission lines in this band from a commercial Hg-arc lamp and the high fluorescence quantum yields of the tracers at this excitation wavelength band. The plastics chosen for tracing and identification are HDPE, LDPE, PP, EVA, PVC and PET; the tracers were compatible and chemically non-reactive with the host matrices and did not affect the transparency of the plastics. The design of a monochromatic and collimated excitation source and of the sensor system is described, and their performance in identifying and sorting plastics doped with tracers at concentration levels of a few parts per million is evaluated. In an industrial sorting system, the sensor was able to sort 300 mm long plastic bottles at a conveyor belt speed of 3.5 m/s with a sorting purity of ~95%. The limitation was imposed by mechanical singulation irregularities at high speed and by the limited processing speed of the computer used.

  9. Automatic identification of bird targets with radar via patterns produced by wing flapping.

    Science.gov (United States)

    Zaugg, Serge; Saporta, Gilbert; van Loon, Emiel; Schmaljohann, Heiko; Liechti, Felix

    2008-09-01

    Bird identification with radar is important for bird migration research, environmental impact assessments (e.g. wind farms), aircraft security and radar meteorology. In a study on bird migration, radar signals from birds, insects and ground clutter were recorded. Signals from birds show a typical pattern due to wing flapping. The data were labelled by experts into the four classes BIRD, INSECT, CLUTTER and UFO (unidentifiable signals). We present a classification algorithm aimed at automatic recognition of bird targets. Variables related to signal intensity and wing flapping pattern were extracted (via continuous wavelet transform). We used support vector classifiers to build predictive models. We estimated classification performance via cross validation on four datasets. When data from the same dataset were used for training and testing the classifier, the classification performance was extremely to moderately high. When data from one dataset were used for training and the three remaining datasets were used as test sets, the performance was lower but still extremely to moderately high. This shows that the method generalizes well across different locations or times. Our method provides a substantial gain of time when birds must be identified in large collections of radar signals and it represents the first substantial step in developing a real time bird identification radar system. We provide some guidelines and ideas for future research. PMID:18331979

  10. Automatic identification of bird targets with radar via patterns produced by wing flapping.

    Science.gov (United States)

    Zaugg, Serge; Saporta, Gilbert; van Loon, Emiel; Schmaljohann, Heiko; Liechti, Felix

    2008-09-01

    Bird identification with radar is important for bird migration research, environmental impact assessments (e.g. wind farms), aircraft security and radar meteorology. In a study on bird migration, radar signals from birds, insects and ground clutter were recorded. Signals from birds show a typical pattern due to wing flapping. The data were labelled by experts into the four classes BIRD, INSECT, CLUTTER and UFO (unidentifiable signals). We present a classification algorithm aimed at automatic recognition of bird targets. Variables related to signal intensity and wing flapping pattern were extracted (via continuous wavelet transform). We used support vector classifiers to build predictive models. We estimated classification performance via cross validation on four datasets. When data from the same dataset were used for training and testing the classifier, the classification performance was extremely to moderately high. When data from one dataset were used for training and the three remaining datasets were used as test sets, the performance was lower but still extremely to moderately high. This shows that the method generalizes well across different locations or times. Our method provides a substantial gain of time when birds must be identified in large collections of radar signals and it represents the first substantial step in developing a real time bird identification radar system. We provide some guidelines and ideas for future research.
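
    The classification pipeline described in these two records (wing-flapping features plus a support vector classifier) can be sketched as follows; a simple FFT-based modulation-frequency feature stands in for the continuous wavelet transform, and the synthetic echoes are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def flap_features(echo, fs):
    """Two illustrative features of a radar echo time series: mean intensity and
    dominant modulation frequency (a stand-in for the wavelet-based analysis)."""
    spectrum = np.abs(np.fft.rfft(echo - echo.mean()))
    freqs = np.fft.rfftfreq(len(echo), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]      # skip the DC bin
    return [echo.mean(), dominant]

rng = np.random.default_rng(0)
fs, n = 500, 1000
t = np.arange(n) / fs
# Synthetic examples: birds modulate the echo at a few Hz (wing beats), clutter does not.
birds = [1.0 + 0.5 * np.sin(2 * np.pi * rng.uniform(5, 15) * t) + 0.1 * rng.standard_normal(n)
         for _ in range(20)]
clutter = [1.0 + 0.1 * rng.standard_normal(n) for _ in range(20)]
X = np.array([flap_features(e, fs) for e in birds + clutter])
y = np.array([1] * 20 + [0] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([flap_features(birds[0], fs)]))
```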

  11. Long-term abacus training induces automatic processing of abacus numbers in children.

    Science.gov (United States)

    Du, Fenglei; Yao, Yuan; Zhang, Qiong; Chen, Feiyan

    2014-01-01

    Abacus-based mental calculation (AMC) is a unique strategy for arithmetic that is based on the mental abacus. AMC experts can solve calculation problems with extraordinarily fast speed and high accuracy. Previous studies have demonstrated that abacus experts showed superior performance and special neural correlates during numerical tasks. However, most of those studies focused on the perception and cognition of Arabic numbers. It remains unclear how abacus numbers are perceived. By applying a similar enumeration Stroop task, in which participants are presented with a visual display containing two abacus numbers and asked to compare the numerosity of the beads that constitute each abacus number, in the present study we investigated the automatic processing of the numerical value of abacus numbers in abacus-trained children. The results demonstrated a significant congruity effect in the numerosity comparison task for abacus-trained children, in both reaction time and error rate analysis. These results suggest that the numerical value of abacus numbers is perceived automatically by abacus-trained children after long-term training.

  12. Long-term abacus training induces automatic processing of abacus numbers in children.

    Science.gov (United States)

    Du, Fenglei; Yao, Yuan; Zhang, Qiong; Chen, Feiyan

    2014-01-01

    Abacus-based mental calculation (AMC) is a unique strategy for arithmetic that is based on the mental abacus. AMC experts can solve calculation problems with extraordinarily fast speed and high accuracy. Previous studies have demonstrated that abacus experts showed superior performance and special neural correlates during numerical tasks. However, most of those studies focused on the perception and cognition of Arabic numbers. It remains unclear how abacus numbers are perceived. By applying a similar enumeration Stroop task, in which participants are presented with a visual display containing two abacus numbers and asked to compare the numerosity of the beads that constitute each abacus number, in the present study we investigated the automatic processing of the numerical value of abacus numbers in abacus-trained children. The results demonstrated a significant congruity effect in the numerosity comparison task for abacus-trained children, in both reaction time and error rate analysis. These results suggest that the numerical value of abacus numbers is perceived automatically by abacus-trained children after long-term training. PMID:25223112

  13. Identification and Estimation of Gaussian Affine Term Structure Models

    OpenAIRE

    Hamilton, James D.; Jing Cynthia Wu

    2012-01-01

    This paper develops new results for identification and estimation of Gaussian affine term structure models. We establish that three popular canonical representations are unidentified, and demonstrate how unidentified regions can complicate numerical optimization. A separate contribution of the paper is the proposal of minimum-chi-square estimation as an alternative to MLE. We show that, although it is asymptotically equivalent to MLE, it can be much easier to compute. In some cases, MCSE allo...

  14. Interchangeable Data Protocol for VMeS (Vessel Messaging System) and AIS (Automatic Identification System)

    Directory of Open Access Journals (Sweden)

    Farid Andhika

    2012-09-01

    Full Text Available VMeS (Vessel Messaging System) is a radio-based communication system for exchanging messages between VMeS terminals on ships at sea and a VMeS gateway on shore. Vessel monitoring at sea generally relies on AIS (Automatic Identification System), which is already deployed in ports to monitor ship status and prevent collisions between ships. In this study, a data format suitable for VMeS is designed so that it can be made interchangeable with AIS and therefore readable by AIS receivers, targeting vessels under 30 GT (Gross Tonnage). The VMeS data format is designed in three types, namely position data, ship information data and short messages, which are made interchangeable with AIS message types 1, 4 and 8. Performance tests of the interchangeable system show that as the message transmission period increases, the total delay increases while packet loss decreases. When sending messages every 5 seconds at speeds of 0-40 km/h, 96.67% of the data were received correctly. Data suffer packet loss when the received power level drops below -112 dBm. The longest range reached by the modem under moving conditions was from the ITS Informatics building, at a distance of 530 meters from Laboratory B406, with a received power level of -110 dBm.

  15. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    Science.gov (United States)

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

    In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, which are the processed data capped at speed limit and the unprocessed data retaining the original speed were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model and the models with random parameters achieved the best model fitting. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other geometric factors were significant including auxiliary lanes and horizontal curvature. PMID:26722989
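
    The multi-level Bayesian and random-parameters models of the study are not reproduced here; as an illustration of the Negative Binomial reference model they compare against, a crash-frequency regression might be sketched as below. The column names and coefficients are invented for the example, not the authors' variables.

```python
# Hedged sketch of a Negative Binomial crash-frequency baseline (the paper's
# multi-level Bayesian models are not reproduced). Variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "mean_speed": rng.normal(60, 8, n),     # segment mean speed
    "speed_std": rng.gamma(2.0, 2.0, n),    # speed variation
    "aux_lane": rng.integers(0, 2, n),      # auxiliary lane present
    "curvature": rng.uniform(0, 0.02, n),   # horizontal curvature
})
mu = np.exp(1.0 - 0.02 * df.mean_speed + 0.10 * df.speed_std + 0.3 * df.aux_lane)
df["crash_count"] = rng.poisson(mu)         # synthetic crash counts

model = smf.glm("crash_count ~ mean_speed + speed_std + aux_lane + curvature",
                data=df, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(model.summary())
```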

  16. The Iqmulus Urban Showcase: Automatic Tree Classification and Identification in Huge Mobile Mapping Point Clouds

    Science.gov (United States)

    Böhm, J.; Bredif, M.; Gierlinger, T.; Krämer, M.; Lindenberg, R.; Liu, K.; Michel, F.; Sirmacek, B.

    2016-06-01

    Current 3D data capturing, as implemented on airborne or mobile laser scanning systems for example, is able to efficiently sample the surface of a city with billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~ 10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA-driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered into the tree class, and are then separated into individual trees. Five hours of processing on the 12-node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
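
    A small sketch of the PCA-driven local dimensionality idea mentioned above: the eigenvalues of the local neighbourhood covariance indicate whether points lie along a line, on a plane, or scatter in all three directions (the tree-candidate case). The thresholds and neighbourhood size are illustrative assumptions; the Spark/Hadoop machinery of IQmulus is not reproduced.

```python
# Hedged sketch: per-point dimensionality labels from the eigenvalues of the
# k-nearest-neighbour covariance matrix. Thresholds are illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def local_dimensionality(points, k=20):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    labels = np.empty(len(points), dtype="<U7")
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
        l1, l2, l3 = w / w.sum()
        if l3 > 0.15:
            labels[i] = "scatter"                    # spreads in 3 directions (tree candidate)
        elif l2 > 0.15:
            labels[i] = "planar"                     # facades, ground
        else:
            labels[i] = "linear"                     # poles, wires
    return labels

pts = np.random.default_rng(2).uniform(0, 10, size=(1000, 3))    # synthetic cloud
print(np.unique(local_dimensionality(pts), return_counts=True))
```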

  17. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    Science.gov (United States)

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

    In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, which are the processed data capped at speed limit and the unprocessed data retaining the original speed were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model and the models with random parameters achieved the best model fitting. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other geometric factors were significant including auxiliary lanes and horizontal curvature.

  18. Automatic Identification Algorithm of KPI (KPI指标的自动辨别算法)

    Institute of Scientific and Technical Information of China (English)

    张卓

    2016-01-01

    Applying the principles of mathematical statistics, this paper presents a method for estimating the normal value range of traffic volume using actual monitoring data as the sample, and extends the conclusion to general KPIs such as traffic volume, cutover success rate and paging volume. The monitoring data are used to estimate the mean and variance of a KPI, from which its distribution function and normal value range are inferred; finally, an automatic identification algorithm and an automatic control procedure are given.
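
    A minimal sketch of the statistical idea described above: estimate a KPI's mean and variance from monitored samples and flag values outside an assumed normal range. The 3-sigma band is an illustrative choice, not necessarily the paper's exact threshold.

```python
# Hedged sketch: KPI normal-range estimation from monitoring samples.
import numpy as np

def kpi_normal_range(samples, n_sigma=3.0):
    samples = np.asarray(samples, dtype=float)
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return mu - n_sigma * sigma, mu + n_sigma * sigma

traffic = np.random.default_rng(3).normal(500, 40, size=24 * 7)  # synthetic hourly traffic
low, high = kpi_normal_range(traffic)
new_value = 650.0
print("abnormal" if not (low <= new_value <= high) else "normal", (round(low, 1), round(high, 1)))
```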

  19. Automatic procedure for mass and charge identification of light isotopes detected in CsI(Tl) of the GARFIELD apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Morelli, L.; Bruno, M.; Baiocco, G. [Dipartimento di Fisica dell' Universita and INFN, Bologna (Italy); Bardelli, L.; Barlini, S.; Bini, M.; Casini, G. [Dipartimento di Fisica dell' Universita and INFN, Firenze (Italy); D' Agostino, M., E-mail: dagostino@bo.infn.i [Dipartimento di Fisica dell' Universita and INFN, Bologna (Italy); Degerlier, M.; Gramegna, F. [INFN, Laboratori Nazionali di Legnaro (Italy); Kravchuk, V.L. [Dipartimento di Fisica dell' Universita and INFN, Bologna (Italy); INFN, Laboratori Nazionali di Legnaro (Italy); Marchi, T. [Dipartimento di Fisica dell' Universita, Padova (Italy); NUCL-EX Collaboration (Italy); INFN, Laboratori Nazionali di Legnaro (Italy); Pasquali, G.; Poggi, G. [Dipartimento di Fisica dell' Universita and INFN, Firenze (Italy)

    2010-08-21

    Mass and charge identification of light charged particles detected with the 180 CsI(Tl) detectors of the GARFIELD apparatus is presented. A 'tracking' method to automatically sample the Z and A ridges of 'Fast-Slow' histograms is developed. An empirical analytic identification function is used to fit correlations between Fast and Slow, in order to determine, event by event, the atomic and mass numbers of the detected charged reaction products. A summary of the advantages of the proposed method with respect to 'hand-based' procedures is reported.
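
    The tracking procedure and the empirical analytic identification function of the paper are not reproduced here; as a rough illustration of the ridge-sampling idea, one could bin the events along the Fast axis and pick local maxima of the Slow histogram in each bin as candidate ridge points, one ridge per isotope. All parameters below are assumptions for the sketch.

```python
# Hedged sketch: sample candidate Z/A ridge points in a Fast-Slow scatter by
# finding local maxima of the Slow histogram within Fast bins. This only
# illustrates the idea of ridge sampling, not the GARFIELD procedure.
import numpy as np
from scipy.signal import find_peaks

def sample_ridges(fast, slow, n_bins=100, slow_bins=200):
    edges = np.linspace(fast.min(), fast.max(), n_bins + 1)
    ridge_points = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (fast >= lo) & (fast < hi)
        if sel.sum() < 20:                                   # skip sparse bins
            continue
        hist, sedges = np.histogram(slow[sel], bins=slow_bins)
        peaks, _ = find_peaks(hist, prominence=hist.max() * 0.2)
        centers = 0.5 * (sedges[peaks] + sedges[peaks + 1])
        ridge_points += [(0.5 * (lo + hi), c) for c in centers]
    return np.array(ridge_points)

rng = np.random.default_rng(6)
fast = rng.uniform(0, 100, 5000)
slow = 5 + 0.3 * fast + rng.normal(0, 1, 5000)               # one synthetic ridge
print(sample_ridges(fast, slow)[:5])
```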

  20. Automatic procedure for mass and charge identification of light isotopes detected in CsI(Tl) of the GARFIELD apparatus

    Science.gov (United States)

    Morelli, L.; Bruno, M.; Baiocco, G.; Bardelli, L.; Barlini, S.; Bini, M.; Casini, G.; D'Agostino, M.; Degerlier, M.; Gramegna, F.; Kravchuk, V. L.; Marchi, T.; Pasquali, G.; Poggi, G.

    2010-08-01

    Mass and charge identification of light charged particles detected with the 180 CsI(Tl) detectors of the GARFIELD apparatus is presented. A "tracking" method to automatically sample the Z and A ridges of "Fast-Slow" histograms is developed. An empirical analytic identification function is used to fit correlations between Fast and Slow, in order to determine, event by event, the atomic and mass numbers of the detected charged reaction products. A summary of the advantages of the proposed method with respect to "hand-based" procedures is reported.

  1. AROMA-AIRWICK: a CHLOE/CDC-3600 system for the automatic identification of spark images and their association into tracks

    International Nuclear Information System (INIS)

    The AROMA-AIRWICK System for CHLOE, an automatic film scanning equipment built at Argonne by Donald Hodges, and the CDC-3600 computer is a system for the automatic identification of spark images and their association into tracks. AROMA-AIRWICK has been an outgrowth of the generally recognized need for the automatic processing of high energy physics data and the fact that the Argonne National Laboratory has been a center of serious spark chamber development in recent years

  2. Evaluation of the algorithm for automatic identification of the common carotid artery in ARTSENS

    International Nuclear Information System (INIS)

    Arterial compliance (AC) is an indicator of the risk of cardiovascular diseases (CVDs) and it is generally estimated by B-mode ultrasound investigation. The number of sonologists in low- and middle-income countries is very disproportionate to the extent of CVD. To bridge this gap we are developing an image-free CVD risk screening tool–arterial stiffness evaluation for non-invasive screening (ARTSENS™) which can be operated with minimal training. ARTSENS uses a single element ultrasound transducer to investigate the wall dynamics of the common carotid artery (CCA) and subsequently measure the AC. Identification of the proximal and distal walls of the CCA, in the ultrasound frames, is an important step in the process of the measurement of AC. The image-free nature of ARTSENS creates some unique issues which necessitate the development of a new algorithm that can automatically identify the CCA from a sequence of A-mode radio-frequency (RF) frames. We have earlier presented the concept and preliminary results for an algorithm that employed clues from the relative positions and temporal motion of CCA walls, for identifying the CCA and finding the approximate wall positions. In this paper, we present the detailed algorithm and its extensive evaluation based on simulation and clinical studies. The algorithm identified the wall position correctly in more than 90% of all simulated datasets where the signal-to-noise ratio was greater than 3 dB. The algorithm was then tested extensively on RF data obtained from the CCA of 30 human volunteers, where it successfully located the arterial walls in more than 70% of all measurements. The algorithm could successfully reject frames where the CCA was not present thus assisting the operator to place the probe correctly in the image-free system, ARTSENS. It was demonstrated that the algorithm can be used in real-time with few trade-offs which do not affect the accuracy of CCA identification. A new method for depth range selection

  3. MetaboHunter: an automatic approach for identification of metabolites from 1H-NMR spectra of complex mixtures

    Directory of Open Access Journals (Sweden)

    Culf Adrian

    2011-10-01

    Full Text Available Abstract Background One-dimensional 1H-NMR spectroscopy is widely used for high-throughput characterization of metabolites in complex biological mixtures. However, the accurate identification of individual compounds is still a challenging task, particularly in spectral regions with higher peak densities. The need for automatic tools to facilitate and further improve the accuracy of such tasks, while using increasingly larger reference spectral libraries, becomes a priority of current metabolomics research. Results We introduce a web server application, called MetaboHunter, which can be used for automatic assignment of 1H-NMR spectra of metabolites. MetaboHunter provides methods for automatic metabolite identification based on spectra or peak lists with three different search methods and with the possibility of peak drift in a user-defined spectral range. The assignment is performed using as reference libraries manually curated data from two major publicly available databases of NMR metabolite standard measurements (HMDB and MMCD). Tests using a variety of synthetic and experimental spectra of single and multi-metabolite mixtures show that MetaboHunter is able to identify, on average, more than 80% of detectable metabolites from spectra of synthetic mixtures and more than 50% from spectra corresponding to experimental mixtures. This work also suggests that better scoring functions improve the performance of MetaboHunter's metabolite identification methods by more than 30%. Conclusions MetaboHunter is a freely accessible, easy to use and user friendly 1H-NMR-based web server application that provides efficient data input and pre-processing, flexible parameter settings, fast and automatic metabolite fingerprinting and results visualization via intuitive plotting and compound peak hit maps. Compared to other published and freely accessible metabolomics tools, MetaboHunter implements three efficient methods to search for metabolites in manually curated
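
    A toy sketch of the core matching idea: score each reference metabolite by the fraction of its reference peaks that find a query peak within a chemical-shift tolerance. MetaboHunter's actual search modes, drift handling and scoring functions are richer than this; the reference shifts below are rounded illustrative values.

```python
# Hedged sketch: peak-list matching of a query 1H-NMR spectrum against a small
# reference library, scored by the fraction of reference peaks matched.
import numpy as np

def match_score(query_peaks, reference_peaks, tol=0.02):
    query = np.asarray(query_peaks, dtype=float)
    hits = sum(np.any(np.abs(query - p) <= tol) for p in reference_peaks)
    return hits / len(reference_peaks)

library = {                                   # illustrative chemical shifts (ppm)
    "lactate": [1.33, 4.11],
    "alanine": [1.48, 3.78],
    "glucose": [3.24, 3.47, 3.53, 3.72, 3.84, 4.64, 5.23],
}
query = [1.32, 3.46, 3.72, 4.12, 5.22]
scores = {name: match_score(query, peaks) for name, peaks in library.items()}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```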

  4. Google Earth Visualizations of the Marine Automatic Identification System (AIS): Monitoring Ship Traffic in National Marine Sanctuaries

    Science.gov (United States)

    Schwehr, K.; Hatch, L.; Thompson, M.; Wiley, D.

    2007-12-01

    The Automatic Identification System (AIS) is a new technology that provides ship position reports with location, time, and identity information without human intervention from ships carrying the transponders to any receiver listening to the broadcasts. In collaboration with the USCG's Research and Development Center, NOAA's Stellwagen Bank National Marine Sanctuary (SBNMS) has installed 3 AIS receivers around Massachusetts Bay to monitor ship traffic transiting the sanctuary and surrounding waters. The SBNMS and the USCG also worked together to propose shifting the shipping lanes (termed the traffic separation scheme; TSS) that transit the sanctuary slightly to the north to reduce the probability of ship strikes of whales that frequent the sanctuary. Following approval by the United Nations' International Maritime Organization, AIS provided a means for NOAA to assess changes in the distribution of shipping traffic caused by the formal change in the TSS effective July 1, 2007. However, there was no easy way to visualize this type of time series data. We have created a software package called noaadata-py to process the AIS ship reports and produce KML files for viewing in Google Earth. Ship tracks can be shown changing over time to allow the viewer to feel the motion of traffic through the sanctuary. The ship tracks can also be gridded to create ship traffic density reports for specified periods of time. The density is displayed as a map draped on the sea surface or as vertical histogram columns. Additional visualizations such as bathymetry images, S57 nautical charts, and USCG Marine Information for Safety and Law Enforcement (MISLE) can be combined with the ship traffic visualizations to give a more complete picture of the maritime environment. AIS traffic analyses have the potential to give managers throughout NOAA's National Marine Sanctuaries an improved ability to assess the impacts of ship traffic on the marine resources they seek to protect. Viewing ship traffic

  5. Automatic classification of long-term ambulatory ECG records according to type of ischemic heart disease

    Directory of Open Access Journals (Sweden)

    Smrdel Aleš

    2011-12-01

    Full Text Available Abstract Background Elevated transient ischemic ST segment episodes in the ambulatory electrocardiographic (AECG) records appear generally in patients with transmural ischemia (e.g. Prinzmetal's angina), while depressed ischemic episodes appear in patients with subendocardial ischemia (e.g. unstable or stable angina). The huge amount of AECG data necessitates automatic methods for analysis. We present an algorithm which determines the type of transient ischemic episodes in the leads of records (elevations/depressions) and classifies AECG records according to type of ischemic heart disease (Prinzmetal's angina; coronary artery diseases excluding patients with Prinzmetal's angina; other heart diseases). Methods The algorithm was developed using 24-hour AECG records of the Long Term ST Database (LTST DB). The algorithm robustly generates the ST segment level function in each AECG lead of the records, and tracks time varying non-ischemic ST segment changes such as slow drifts and axis shifts to construct the ST segment reference function. The ST segment reference function is then subtracted from the ST segment level function to obtain the ST segment deviation function. Using the third statistical moment of the histogram of the ST segment deviation function, the algorithm determines deflections of leads according to the type of ischemic episodes present (elevations, depressions), and then classifies records according to type of ischemic heart disease. Results Using 74 records of the LTST DB (containing elevated or depressed ischemic episodes, mixed ischemic episodes, or no episodes), the algorithm correctly determined deflections of the majority of the leads of the records and correctly classified the majority of the records with Prinzmetal's angina into the Prinzmetal's angina category (7 out of 8); the majority of the records with other coronary artery diseases into the coronary artery diseases excluding patients with Prinzmetal's angina category (47 out of 55); and correctly

  6. Introducing a semi-automatic method to simulate large numbers of forensic fingermarks for research on fingerprint identification.

    Science.gov (United States)

    Rodriguez, Crystal M; de Jongh, Arent; Meuwly, Didier

    2012-03-01

    Statistical research on fingerprint identification and the testing of automated fingerprint identification system (AFIS) performances require large numbers of forensic fingermarks. These fingermarks are rarely available. This study presents a semi-automatic method to create simulated fingermarks in large quantities that model minutiae features or images of forensic fingermarks. This method takes into account several aspects contributing to the variability of forensic fingermarks such as the number of minutiae, the finger region, and the elastic deformation of the skin. To investigate the applicability of the simulated fingermarks, fingermarks have been simulated with 5-12 minutiae originating from different finger regions for six fingers. An AFIS matching algorithm was used to obtain similarity scores for comparisons between the minutiae configurations of fingerprints and the minutiae configurations of simulated and forensic fingermarks. The results showed similar scores for both types of fingermarks suggesting that the simulated fingermarks are good substitutes for forensic fingermarks. PMID:22103733

  7. Contribution to automatic speech recognition. Analysis of the direct acoustical signal. Recognition of isolated words and phoneme identification

    International Nuclear Information System (INIS)

    This report deals with the acoustical-phonetic step of the automatic recognition of speech. The parameters used are the extrema of the acoustical signal (coded in amplitude and duration). This coding method, the properties of which are described, is simple and well adapted to digital processing. The quality and the intelligibility of the coded signal after reconstruction are particularly satisfactory. An experiment on the automatic recognition of isolated words has been carried out using this coding system. We have designed a filtering algorithm operating on the parameters of the coding. Thus the characteristics of the formants can be derived under certain conditions, which are discussed. Using these characteristics, the identification of a large part of the phonemes for a given speaker was achieved. Carrying on these studies required the development of a particular methodology of real-time processing which allowed immediate evaluation of the improvement of the programs. Such processing on temporal coding of the acoustical signal is extremely powerful and could represent, used in connection with other methods, an efficient tool for the automatic processing of speech. (author)

  8. Automatic Identification of the Repolarization Endpoint by Computing the Dominant T-wave on a Reduced Number of Leads.

    Science.gov (United States)

    Giuliani, C; Agostinelli, A; Di Nardo, F; Fioretti, S; Burattini, L

    2016-01-01

    Electrocardiographic (ECG) T-wave endpoint (Tend) identification suffers from a lack of reliability due to the presence of noise and variability among leads. Tend identification can be improved by using global repolarization waveforms obtained by combining several leads. The dominant T-wave (DTW) is a global repolarization waveform that proved to improve Tend identification when computed using the 15 (I to III, aVr, aVl, aVf, V1 to V6, X, Y, Z) leads usually available in clinics, of which only 8 (I, II, V1 to V6) are independent. The aim of the present study was to evaluate if the 8 independent leads are sufficient to obtain a DTW which allows a reliable Tend identification. To this aim, Tend measures automatically identified from 15-dependent-lead DTWs of 46 control healthy subjects (CHS) and 103 acute myocardial infarction patients (AMIP) were compared with those obtained from 8-independent-lead DTWs. Results indicate that the Tend distributions do not have statistically different median values (CHS: 340 ms vs. 340 ms, respectively; AMIP: 325 ms vs. 320 ms, respectively), besides being strongly correlated (CHS: ρ=0.97, AMIP: 0.88). For Tend identification from DTW, the 8 independent leads can be used without a statistically significant loss of accuracy but with a significant decrement of computational effort. The lead dependence of 7 out of 15 leads does not introduce a significant bias in the Tend determination from 15 dependent lead DTWs. PMID:27347218

  9. 6 CFR 37.21 - Temporary or limited-term driver's licenses and identification cards.

    Science.gov (United States)

    2010-01-01

    ... identification cards. 37.21 Section 37.21 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY REAL ID DRIVER'S LICENSES AND IDENTIFICATION CARDS Minimum Documentation, Verification, and Card... may only issue a temporary or limited-term REAL ID driver's license or identification card to...

  10. Language Identification in Short Utterances Using Long Short-Term Memory (LSTM) Recurrent Neural Networks.

    Science.gov (United States)

    Zazo, Ruben; Lozano-Diez, Alicia; Gonzalez-Dominguez, Javier; Toledano, Doroteo T; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    Long Short Term Memory (LSTM) Recurrent Neural Networks (RNNs) have recently outperformed other state-of-the-art approaches, such as i-vector and Deep Neural Networks (DNNs), in automatic Language Identification (LID), particularly when dealing with very short utterances (∼3s). In this contribution we present an open-source, end-to-end, LSTM RNN system running on limited computational resources (a single GPU) that outperforms a reference i-vector system on a subset of the NIST Language Recognition Evaluation (8 target languages, 3s task) by up to 26%. This result is in line with previously published research using proprietary LSTM implementations and huge computational resources, which made those former results hardly reproducible. Further, we extend those previous experiments to model unseen languages (out of set, OOS, modeling), which is crucial in real applications. Results show that an LSTM RNN with OOS modeling is able to detect these languages and generalizes robustly to unseen OOS languages. Finally, we also analyze the effect of even more limited test data (from 2.25s to 0.1s), proving that with as little as 0.5s an accuracy of over 50% can be achieved.
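
    A minimal sketch of an LSTM-based language-identification classifier: a stack of LSTM layers over acoustic feature frames whose final hidden state feeds a softmax over the target languages plus one out-of-set class. Layer sizes, feature dimension and the OOS handling are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: end-to-end LSTM language identifier in PyTorch.
import torch
import torch.nn as nn

class LSTMLID(nn.Module):
    def __init__(self, n_feats=39, hidden=256, n_layers=2, n_langs=8 + 1):  # +1 = out-of-set
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, n_layers, batch_first=True)
        self.out = nn.Linear(hidden, n_langs)

    def forward(self, x):                     # x: (batch, frames, n_feats)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])                # logits: (batch, n_langs)

model = LSTMLID()
frames = torch.randn(4, 300, 39)              # roughly 3 s of features, batch of 4
logits = model(frames)
print(logits.argmax(dim=1))                   # predicted language indices
```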

  11. Algorithms for the automatic identification of MARFEs and UFOs in JET database of visible camera videos

    International Nuclear Information System (INIS)

    MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potential harmful consequences of these events, particularly as triggers of disruptions, it would be important to have the means of detecting them automatically. In this paper, the results of various algorithms to identify automatically the MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos, which have captured these events, exploring the whole JET database of images, as a preliminary step to the development of real-time identifiers in the future. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm manages to identify the videos with MARFEs with a success rate exceeding 80%. Due to the lack of a complete statistics of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)

  12. The Effects of Degraded Vision and Automatic Combat Identification Reliability on Infantry Friendly Fire Engagements

    OpenAIRE

    Kogler, Timothy Michael

    2003-01-01

    Fratricide is one of the most devastating consequences of any military conflict. Target identification failures have been identified as the last link in a chain of mistakes that can lead to fratricide. Other links include weapon and equipment malfunctions, command, control, and communication failures, navigation failures, fire discipline failures, and situation awareness failures. This research examined the effects of degraded vision and combat identification reliability on the time-stress...

  13. Automatic Screening of Missing Objects and Identification with Group Coding of RF Tags

    OpenAIRE

    G. Vijayaraju

    2013-01-01

    In shipping applications, a container holds a group of physical objects that are handled together, and radio frequency identification (RFID) enables the objects to be identified automatically and handled at the container level ...

  14. Call recognition and individual identification of fish vocalizations based on automatic speech recognition: An example with the Lusitanian toadfish.

    Science.gov (United States)

    Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C

    2015-12-01

    The study of acoustic communication in animals often requires not only the recognition of species specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools to extract the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented inspired by successful results obtained in the most widely known and complex acoustical communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover this method also proved to be a powerful tool to assess signal durations in large data sets. However, the system failed in recognizing other sound types. PMID:26723348
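
    A compact sketch of the hidden-Markov-model classification idea described above: train one Gaussian HMM per sound class (or per individual) on its feature sequences and label a new sequence with the model of highest log-likelihood. This assumes the third-party hmmlearn package; the paper's actual HMM topology and speech-style front end are not reproduced.

```python
# Hedged sketch: per-class Gaussian HMMs for call recognition, classification
# by maximum log-likelihood. Feature extraction (e.g. spectral frames) is assumed.
import numpy as np
from hmmlearn import hmm

def train_models(training_data, n_states=5):
    """training_data: dict class_name -> list of (n_frames, n_feats) arrays."""
    models = {}
    for name, seqs in training_data.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[name] = m
    return models

def classify(models, seq):
    # Pick the class whose HMM assigns the highest log-likelihood to the sequence.
    return max(models, key=lambda name: models[name].score(seq))
```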

  15. A Method for Automatic Identification of Reliable Heart Rates Calculated from ECG and PPG Waveforms

    OpenAIRE

    Yu, Chenggang; Liu, Zhenqiu; McKenna, Thomas; Reisner, Andrew T.; Reifman, Jaques

    2006-01-01

    Objective: The development and application of data-driven decision-support systems for medical triage, diagnostics, and prognostics pose special requirements on physiologic data. In particular, that data are reliable in order to produce meaningful results. The authors describe a method that automatically estimates the reliability of reference heart rates (HRr) derived from electrocardiogram (ECG) waveforms and photoplethysmogram (PPG) waveforms recorded by vital-signs monitors. The reliabilit...

  16. Compensation of Cable Voltage Drops and Automatic Identification of Cable Parameters in 400 Hz Ground Power Units

    DEFF Research Database (Denmark)

    Borup, Uffe; Nielsen, Bo Vork; Blaabjerg, Frede

    2004-01-01

    In this paper a new cable voltage drop compensation scheme for ground power units (GPU) is presented. The scheme is able to predict and compensate the voltage drop in an output cable by measuring the current quantities at the source. The prediction is based on an advanced cable model that includes self and mutual impedance parameters. The model predicts the voltage drop at both symmetrical and unbalanced loads. In order to determine the cable model parameters an automatic identification concept is derived. The concept is tested in full scale on a 90-kVA 400-Hz GPU with two different cables. It is concluded that the performance is significantly improved both with symmetrical and unsymmetrical cables and with balanced and unbalanced loads.
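
    A small sketch of the voltage-drop prediction idea: with per-phase self and mutual impedances of the output cable arranged in a 3x3 complex matrix Z, the load-end voltage can be predicted from source-end measurements as V_load = V_source - Z·I. The impedance values, voltages and currents below are invented placeholders, not identified cable parameters.

```python
# Hedged sketch: predicting the load-end voltage of a 400 Hz output cable from
# source-side measurements and a self/mutual impedance model.
import numpy as np

f = 400.0                                     # supply frequency (Hz)
zs = 0.05 + 1j * 2 * np.pi * f * 3e-6         # assumed self impedance per phase (ohm)
zm = 0.01 + 1j * 2 * np.pi * f * 1e-6         # assumed mutual impedance between phases (ohm)
Z = np.full((3, 3), zm, dtype=complex)
np.fill_diagonal(Z, zs)

V_source = 115.0 * np.exp(1j * np.deg2rad([0.0, -120.0, 120.0]))   # measured phase voltages (V)
I_load = 50.0 * np.exp(1j * np.deg2rad([-10.0, -130.0, 110.0]))    # measured, unbalanced currents (A)

V_load = V_source - Z @ I_load                # predicted voltages at the aircraft plug
print(np.abs(V_load).round(2))
```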

  17. Semi-automatic identification of punching areas for tissue microarray building: the tubular breast cancer pilot study

    Directory of Open Access Journals (Sweden)

    Beltrame Francesco

    2010-11-01

    Full Text Available Abstract Background Tissue MicroArray technology aims to perform immunohistochemical staining on hundreds of different tissue samples simultaneously. It allows faster analysis, considerably reducing costs incurred in staining. A time consuming phase of the methodology is the selection of tissue areas within paraffin blocks: no utilities have been developed for the identification of areas to be punched from the donor block and assembled in the recipient block. Results The presented work supports, in the specific case of a primary subtype of breast cancer (tubular breast cancer, the semi-automatic discrimination and localization between normal and pathological regions within the tissues. The diagnosis is performed by analysing specific morphological features of the sample such as the absence of a double layer of cells around the lumen and the decay of a regular glands-and-lobules structure. These features are analysed using an algorithm which performs the extraction of morphological parameters from images and compares them to experimentally validated threshold values. Results are satisfactory since in most of the cases the automatic diagnosis matches the response of the pathologists. In particular, on a total of 1296 sub-images showing normal and pathological areas of breast specimens, algorithm accuracy, sensitivity and specificity are respectively 89%, 84% and 94%. Conclusions The proposed work is a first attempt to demonstrate that automation in the Tissue MicroArray field is feasible and it can represent an important tool for scientists to cope with this high-throughput technique.

  18. Need of a consistent and convenient nucleus identification in ENDF files for the automatic construction of the depletion chains

    Science.gov (United States)

    Mosca, Pietro; Mounier, Claude

    2016-03-01

    The automatic construction of evolution chains recently implemented in the GALILEE system is based on the analysis of several ENDF files: the multigroup production cross sections present in the GENDF files processed by NJOY from the ENDF evaluation, the decay file and the fission product yields (FPY) file. In this context, this paper highlights the importance of the nucleus identification to properly interconnect the data mentioned above. The first part of the paper describes the present status of the nucleus identification among the several ENDF files, focusing in particular on the use of the excited state number and of the isomeric state number. The second part reviews the problems encountered during the automatic construction of the depletion chains using recent ENDF data. The processing of the JEFF-3.1.1, ENDF/B-VII.0 (decay and FPY) and the JEFF-3.2 (production cross section) points out problems about the compliance or not of the nucleus identifiers with the ENDF-6 format and sometimes the inconsistencies among the various ENDF files. In addition, the analysis of EAF-2003 and EAF-2010 shows some incoherence between the ZA product identifier and the reaction identifier MT for the reactions (n, pα) and (n, 2np). As a main result of this work, our suggestion is to change the ENDF format by systematically using the isomeric state number to identify the nuclei. This proposal is already consistent with a large amount of ENDF data that is not in agreement with the present ENDF format. This choice is the most convenient because, ultimately, it allows one to give human-readable names to the nuclei of the depletion chains.

  19. An automatic method for atom identification in scanning tunnelling microscopy images of Fe-chalcogenide superconductors.

    Science.gov (United States)

    Perasso, A; Toraci, C; Massone, A M; Piana, M; Gerbi, A; Buzio, R; Kawale, S; Bellingeri, E; Ferdeghini, C

    2015-12-01

    We describe a computational approach for the automatic recognition and classification of atomic species in scanning tunnelling microscopy images. The approach is based on a pipeline of image processing methods in which the classification step is performed by means of a Fuzzy Clustering algorithm. As a representative example, we use the computational tool to characterize the nanoscale phase separation in thin films of the Fe-chalcogenide superconductor FeSex Te1-x , starting from synthetic data sets and experimental topographies. We quantify the stoichiometry fluctuations on length scales from tens to a few nanometres. PMID:26291960

  20. Automatic ECG wave extraction in long-term recordings using Gaussian mesa function models and nonlinear probability estimators.

    Science.gov (United States)

    Dubois, Rémi; Maison-Blanche, Pierre; Quenet, Brigitte; Dreyfus, Gérard

    2007-12-01

    This paper describes the automatic extraction of the P, Q, R, S and T waves of electrocardiographic recordings (ECGs), through the combined use of a new machine-learning algorithm termed generalized orthogonal forward regression (GOFR) and of a specific parameterized function termed Gaussian mesa function (GMF). GOFR breaks up the heartbeat signal into Gaussian mesa functions, in such a way that each wave is modeled by a single GMF; the model thus generated is easily interpretable by the physician. GOFR is an essential ingredient in a global procedure that locates the R wave after some simple pre-processing, extracts the characteristic shape of each heart beat, assigns P, Q, R, S and T labels through automatic classification, discriminates normal beats (NB) from abnormal beats (AB), and extracts features for diagnosis. The efficiency of the detection of the QRS complex, and of the discrimination of NB from AB, is assessed on the MIT and AHA databases; the labeling of the P and T wave is validated on the QTDB database. PMID:17997186

  1. Automatic identification of bird targets with radar via patterns produced by wing flapping

    NARCIS (Netherlands)

    S. Zaugg; G. Saporta; E. van Loon; H. Schmaljohann; F. Liechti

    2008-01-01

    Bird identification with radar is important for bird migration research, environmental impact assessments (e.g. wind farms), aircraft security and radar meteorology. In a study on bird migration, radar signals from birds, insects and ground clutter were recorded. Signals from birds show a typical pa

  2. Analysis and Development of FACE Automatic Apparatus for Rapid Identification of Transuranium Isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Sebesta, E.H.

    1978-09-01

    A description of and operating manual for the FACE Automatic Apparatus has been written, along with documentation of the FACE machine operating program, to provide a user manual for the FACE Automatic Apparatus. In addition, FACE machine performance was investigated to improve transuranium throughput. Analysis of the causes of transuranium isotope loss, both chemical and radioactive, was undertaken. To lower radioactive loss, the dynamics of the most time-consuming step of the FACE machine, the chromatographic column output droplet drying and flaming, in preparation of the sample for alpha spectroscopy and counting, was investigated. A series of droplets were dried in an experimental apparatus, demonstrating that droplets could be dried significantly faster through more intensive heating, enabling the FACE machine cycle to be shortened by 30-60 seconds. Proposals incorporating these ideas were provided for FACE machine development. The 66% chemical loss of product was analyzed and changes were proposed to reduce the radioisotope product loss. An analysis of the chromatographic column was also provided. All operating steps in the FACE machine are described and analyzed to provide a complete guide, along with the proposals for machine improvement.

  3. Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents

    CERN Document Server

    Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L

    2008-01-01

    Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...

  4. Price strategy and pricing strategy: terms and content identification

    OpenAIRE

    Panasenko Tetyana

    2015-01-01

    The article is devoted to the terminology and content identification of seemingly identical concepts "price strategy" and "pricing strategy". The article contains evidence that the price strategy determines the direction, principles and procedure of implementing the company price policy and pricing strategy creates a set of rules and practical methods of price formation in accordance with the pricing strategy of the company.

  5. A smart pattern recognition system for the automatic identification of aerospace acoustic sources

    Science.gov (United States)

    Cabell, R. H.; Fuller, C. R.

    1989-01-01

    An intelligent air-noise recognition system is described that uses pattern recognition techniques to distinguish noise signatures of five different types of acoustic sources, including jet planes, propeller planes, a helicopter, train, and wind turbine. Information for classification is calculated using the power spectral density and autocorrelation taken from the output of a single microphone. Using this system, as many as 90 percent of test recordings were correctly identified, indicating that the linear discriminant functions developed can be used for aerospace source identification.

  6. Towards the automatic identification of cloudiness condition by means of solar global irradiance measurements

    Science.gov (United States)

    Sanchez, G.; Serrano, A.; Cancillo, M. L.

    2010-09-01

    This study focuses on the design of an automatic algorithm for classification of the cloudiness condition based only on global irradiance measurements. Clouds are a major modulating factor for the Earth radiation budget. They attenuate the solar radiation and control the terrestrial radiation participating in the energy balance. Generally, cloudiness is a limiting factor for the solar radiation reaching the ground, highly contributing to the Earth albedo. Additionally, it is mainly responsible for the high variability shown by the downward irradiance measured at ground level. Being a major source of the attenuation and high-frequency variability of the solar radiation available for energy purposes in solar power plants, the characterization of the cloudiness condition is of great interest. This importance is even higher in Southern Europe, where very high irradiation values are reached during long periods within the year. Thus, several indexes have been proposed in the literature for the characterization of the cloudiness condition of the sky. Among these indexes, those exclusively involving global irradiance are of special interest since this variable is the most widely available measurement in most radiometric stations. Taking this into account, this study proposes an automatic algorithm for classifying the cloudiness condition of the sky into three categories: cloud-free, partially cloudy and overcast. For that aim, solar global irradiance was measured by a Kipp&Zonen CMP11 pyranometer installed on the terrace of the Physics building in the Campus of Badajoz (Spain) of the University of Extremadura. Measurements were recorded on a one-minute basis for a period of study extending from 23 November 2009 to 31 March 2010. The algorithm is based on the clearness index kt, which is calculated as the ratio between the solar global downward irradiance measured at the ground and the solar downward irradiance at the top of the atmosphere. Since partially cloudy conditions
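
    A minimal sketch of the clearness-index idea: kt is the ratio of the measured global irradiance to the extraterrestrial irradiance on a horizontal plane, and kt thresholds separate overcast, partially cloudy and cloud-free minutes. The thresholds below are illustrative assumptions; the paper derives its own criteria, including the variability of kt.

```python
# Hedged sketch: clearness index kt and a simple three-class sky classification.
import numpy as np

SOLAR_CONSTANT = 1361.0  # W m-2

def clearness_index(ghi, cos_zenith, day_of_year):
    e0 = 1 + 0.033 * np.cos(2 * np.pi * day_of_year / 365.0)   # Earth-Sun distance factor
    toa = SOLAR_CONSTANT * e0 * np.maximum(cos_zenith, 0.0)    # horizontal TOA irradiance
    return np.where(toa > 0, ghi / toa, np.nan)

def classify_sky(kt, clear=0.65, overcast=0.35):               # illustrative thresholds
    labels = np.full(kt.shape, "partially cloudy", dtype=object)
    labels[kt >= clear] = "cloud-free"
    labels[kt <= overcast] = "overcast"
    return labels

kt = clearness_index(ghi=np.array([850.0, 420.0, 120.0]),
                     cos_zenith=np.array([0.9, 0.9, 0.9]),
                     day_of_year=80)
print(classify_sky(kt))
```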

  7. Price strategy and pricing strategy: terms and content identification

    Directory of Open Access Journals (Sweden)

    Panasenko Tetyana

    2015-11-01

    Full Text Available The article is devoted to the terminology and content identification of seemingly identical concepts "price strategy" and "pricing strategy". The article contains evidence that the price strategy determines the direction, principles and procedure of implementing the company price policy and pricing strategy creates a set of rules and practical methods of price formation in accordance with the pricing strategy of the company.

  8. Automatic identification of mobile and rigid substructures in molecular dynamics simulations and fractional structural fluctuation analysis.

    Directory of Open Access Journals (Sweden)

    Leandro Martínez

    Full Text Available The analysis of structural mobility in molecular dynamics plays a key role in data interpretation, particularly in the simulation of biomolecules. The most common mobility measures computed from simulations are the Root Mean Square Deviation (RMSD and Root Mean Square Fluctuations (RMSF of the structures. These are computed after the alignment of atomic coordinates in each trajectory step to a reference structure. This rigid-body alignment is not robust, in the sense that if a small portion of the structure is highly mobile, the RMSD and RMSF increase for all atoms, resulting possibly in poor quantification of the structural fluctuations and, often, to overlooking important fluctuations associated to biological function. The motivation of this work is to provide a robust measure of structural mobility that is practical, and easy to interpret. We propose a Low-Order-Value-Optimization (LOVO strategy for the robust alignment of the least mobile substructures in a simulation. These substructures are automatically identified by the method. The algorithm consists of the iterative superposition of the fraction of structure displaying the smallest displacements. Therefore, the least mobile substructures are identified, providing a clearer picture of the overall structural fluctuations. Examples are given to illustrate the interpretative advantages of this strategy. The software for performing the alignments was named MDLovoFit and it is available as free-software at: http://leandro.iqm.unicamp.br/mdlovofit.

  9. Automatic Identification of Critical Follow-Up Recommendation Sentences in Radiology Reports

    Science.gov (United States)

    Yetisgen-Yildiz, Meliha; Gunn, Martin L.; Xia, Fei; Payne, Thomas H.

    2011-01-01

    Communication of follow-up recommendations when abnormalities are identified on imaging studies is prone to error. When recommendations are not systematically identified and promptly communicated to referrers, poor patient outcomes can result. Using information technology can improve communication and improve patient safety. In this paper, we describe a text processing approach that uses natural language processing (NLP) and supervised text classification methods to automatically identify critical recommendation sentences in radiology reports. To increase the classification performance we enhanced the simple unigram token representation approach with lexical, semantic, knowledge-base, and structural features. We tested different combinations of those features with the Maximum Entropy (MaxEnt) classification algorithm. Classifiers were trained and tested with a gold standard corpus annotated by a domain expert. We applied 5-fold cross validation and our best performing classifier achieved 95.60% precision, 79.82% recall, 87.0% F-score, and 99.59% classification accuracy in identifying the critical recommendation sentences in radiology reports. PMID:22195225
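
    A minimal sketch of the classification step: a Maximum Entropy model (multinomial logistic regression) over simple unigram features deciding whether a sentence contains a follow-up recommendation. The richer lexical, semantic, knowledge-base and structural features of the paper are not reproduced, and the example sentences are invented.

```python
# Hedged sketch: unigram MaxEnt (logistic regression) recommendation-sentence classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "Recommend follow-up chest CT in 3 months to evaluate the nodule.",
    "The lungs are clear without focal consolidation.",
    "Suggest dedicated ultrasound for further characterization.",
    "No acute intracranial abnormality is identified.",
]
labels = [1, 0, 1, 0]          # 1 = critical recommendation sentence

clf = make_pipeline(CountVectorizer(ngram_range=(1, 1), lowercase=True),
                    LogisticRegression(max_iter=1000))
clf.fit(sentences, labels)
print(clf.predict(["Follow-up imaging is recommended in six months."]))
```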

  10. Variable identification and automatic tuning of the main module of a servo system of parallel mechanism

    Institute of Scientific and Technical Information of China (English)

    YANG Zhiyong; XU Meng; HUANG Tian; NI Yanbing

    2007-01-01

    The variables of the main module of a servo system for a miniature reconfigurable parallel mechanism were identified and automatically tuned. From the inverse solution module of the translation, the module with the actuated translational joint was obtained, which included the position, velocity and acceleration of the parallelogram carriage-branch. The inverse rigid-body dynamic model was formulated using the virtual work principle. To identify the variables of the servo system, a triangle-shaped input signal with variable frequency was adopted to overcome the disadvantages of the pseudo-random number sequence, namely that it changes the vibration amplitude of the motor dramatically, easily shocks the servo motor and opens the velocity loop. Moreover, all the variables, including the rotary inertia of the servo system, were identified by means of an additive mass. With overshoot and rise time as the optimization goals, and the limited load variation with attitude taken into account, the range of the controller variables in the servo system was identified. The results of the experiments prove that the method is accurate.

  11. REMI and ROUSE: Quantitative Models for Long-Term and Short-Term Priming in Perceptual Identification

    NARCIS (Netherlands)

    E.J. Wagenmakers (Eric-Jan); R. Zeelenberg (René); D.E. Huber (David); J.G.W. Raaijmakers (Jeroen)

    2003-01-01

    textabstractThe REM model originally developed for recognition memory (Shiffrin & Steyvers, 1997) has recently been extended to implicit memory phenomena observed during threshold identification of words. We discuss two REM models based on Bayesian principles: a model for long-term priming (REMI; Sc

  12. Comparison between three implementations of automatic identification algorithms for the quantification and characterization of mesoscale eddies in the South Atlantic Ocean

    Directory of Open Access Journals (Sweden)

    J. M. A. C. Souza

    2011-03-01

    Full Text Available Three methods for automatic detection of mesoscale coherent structures are applied to Sea Level Anomaly (SLA) fields in the South Atlantic. The first method is based on the wavelet packet decomposition of the SLA data, the second on the estimation of the Okubo-Weiss parameter and the third on a geometric criterion using the winding-angle approach. The results provide a comprehensive picture of the mesoscale eddies over the South Atlantic Ocean, emphasizing their main characteristics: amplitude, diameter, duration and propagation velocity. Five areas of particular eddy dynamics were selected: the Brazil Current, the Agulhas eddies propagation corridor, the Agulhas Current retroflexion, the Brazil-Malvinas confluence zone and the northern branch of the Antarctic Circumpolar Current (ACC). For these areas, mean propagation velocities and amplitudes were calculated. Two regions with long-duration eddies were observed, corresponding to the propagation of Agulhas and ACC eddies. Through the comparison between the identification methods, their main advantages and shortcomings were detailed. The geometric criterion presents a better performance, mainly in terms of number of detections, duration of the eddies and propagation velocities. The results are particularly good for the Agulhas Rings, which presented the longest lifetimes of all South Atlantic eddies.
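
    A compact sketch of the Okubo-Weiss criterion on a gridded SLA field: derive geostrophic velocities from SLA, compute W = s_n² + s_s² − ω² (normal strain, shear strain, relative vorticity) and mark candidate eddy cores where W falls below a negative threshold, commonly −0.2 times the standard deviation of W. The f-plane assumption, grid spacing and threshold below are simplifications of the published method.

```python
# Hedged sketch: Okubo-Weiss parameter from a sea level anomaly grid.
import numpy as np

def okubo_weiss(sla, dx, dy, f=1e-4, g=9.81):
    u = -(g / f) * np.gradient(sla, dy, axis=0)      # geostrophic u = -(g/f) dSLA/dy
    v = (g / f) * np.gradient(sla, dx, axis=1)       # geostrophic v =  (g/f) dSLA/dx
    du_dx = np.gradient(u, dx, axis=1); du_dy = np.gradient(u, dy, axis=0)
    dv_dx = np.gradient(v, dx, axis=1); dv_dy = np.gradient(v, dy, axis=0)
    s_n = du_dx - dv_dy                               # normal strain
    s_s = dv_dx + du_dy                               # shear strain
    omega = dv_dx - du_dy                             # relative vorticity
    return s_n**2 + s_s**2 - omega**2

sla = np.random.default_rng(4).normal(0, 0.05, size=(80, 80))   # synthetic SLA field (m)
W = okubo_weiss(sla, dx=25e3, dy=25e3)
eddy_core = W < -0.2 * W.std()                        # common threshold choice
print("candidate eddy-core pixels:", int(eddy_core.sum()))
```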

  13. Automatic Spatially-Adaptive Balancing of Energy Terms for Image Segmentation

    CERN Document Server

    Rao, Josna; Abugharbieh, Rafeef

    2009-01-01

    Image segmentation techniques are predominately based on parameter-laden optimization. The objective function typically involves weights for balancing competing image fidelity and segmentation regularization cost terms. Setting these weights suitably has been a painstaking, empirical process. Even if such ideal weights are found for a novel image, most current approaches fix the weight across the whole image domain, ignoring the spatially-varying properties of object shape and image appearance. We propose a novel technique that autonomously balances these terms in a spatially-adaptive manner through the incorporation of image reliability in a graph-based segmentation framework. We validate on synthetic data achieving a reduction in mean error of 47% (p-value << 0.05) when compared to the best fixed parameter segmentation. We also present results on medical images (including segmentations of the corpus callosum and brain tissue in MRI data) and on natural images.

  14. Automatic Screening of Missing Objects and Identification with Group Coding of RF Tags

    Directory of Open Access Journals (Sweden)

    G. Vijayaraju

    2013-11-01

    Full Text Available In shipping applications, a container holds a group of physical objects that are handled together, and radio frequency identification (RFID) enables the objects to be identified automatically and handled at the container level. A shortcoming of existing designs is that there is no reliable mechanism for detecting which objects are missing from a container, since accurate identification of missing objects normally requires access to the complete back-end database of container contents. A new technique is proposed to overcome this problem: the IDs of the grouped objects are encoded jointly across the RF tags (group coding), so that the identities of missing objects can be determined from the tags that remain, without consulting the entire database. The key aspect of the method is to divide the data describing the group and distribute it over the tags in a coordinated way. Simulations conducted on a large number of data sets under different environmental conditions indicate that missing objects can be identified accurately.

  15. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    Science.gov (United States)

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. PMID:27313114
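
    The pipeline's own algorithms are not described in enough detail here to reproduce; as an illustration of the kind of inter-plate systematic-error correction such a QC step performs, one widely used approach is a B-score style two-way median polish that removes row and column gradients. This is a generic sketch under that assumption, not the cited pipeline.

```python
# Hedged sketch: B-score correction of plate row/column systematic errors
# (median polish followed by MAD scaling). Illustrative, not the paper's algorithm.
import numpy as np

def median_polish(plate, n_iter=10):
    residual = plate.astype(float).copy()
    for _ in range(n_iter):
        residual -= np.median(residual, axis=1, keepdims=True)   # remove row effects
        residual -= np.median(residual, axis=0, keepdims=True)   # remove column effects
    return residual

def b_score(plate):
    residual = median_polish(plate)
    mad = np.median(np.abs(residual - np.median(residual)))
    return residual / (1.4826 * mad)

plate = np.random.default_rng(5).normal(100, 10, size=(16, 24))  # synthetic 384-well plate
plate += np.linspace(0, 20, 24)                                  # synthetic column gradient
scores = b_score(plate)
print(scores.std(axis=0).round(2))                               # gradient largely removed
```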

  16. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    Science.gov (United States)

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community.

  17. Automatic identification of resting state networks: an extended version of multiple template-matching

    Science.gov (United States)

    Guaje, Javier; Molina, Juan; Rudas, Jorge; Demertzi, Athena; Heine, Lizette; Tshibanda, Luaba; Soddu, Andrea; Laureys, Steven; Gómez, Francisco

    2015-12-01

    Functional magnetic resonance imaging in resting state (fMRI-RS) constitutes an informative protocol to investigate several pathological and pharmacological conditions. A common approach to study this data source is through the analysis of changes in the so-called resting state networks (RSNs). These networks correspond to well-defined functional entities that have been associated with different low- and high-order brain functions. RSNs may be characterized by using Independent Component Analysis (ICA). ICA provides a decomposition of the fMRI-RS signal into sources of brain activity, but it lacks information about the nature of the signal, i.e., whether the source is artifactual or not. Recently, a multiple template-matching (MTM) approach was proposed to automatically recognize RSNs in a set of Independent Components (ICs). This method provides valuable information to assess subjects at the individual level. Nevertheless, it lacks a mechanism to quantify how much certainty there is about the existence/absence of each network. This information may be important for the assessment of patients with severely damaged brains, in which RSNs may be greatly affected as a result of the pathological condition. In this work we propose a set of changes to the original MTM that improves the RSN recognition task and also extends the functionality of the method. The key points of this improvement are a standardization strategy and a modification of the method's constraints that adds flexibility to the approach. Additionally, we also introduce an analysis of the trustworthiness measurement of each RSN obtained by using the template-matching approach. This analysis consists of a thresholding strategy applied over the computed Goodness-of-Fit (GOF) between the set of templates and the ICs. The proposed method was validated on two independent studies (Baltimore, 23 healthy subjects and Liege, 27 healthy subjects) with different configurations of MTM. Results suggest that the method will provide
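
    As a rough illustration of the thresholding idea described above (not the authors' implementation), each IC can be scored against each RSN template with a goodness-of-fit measure -- taken here, for simplicity, as the mean absolute IC value inside the template mask minus the mean outside -- and only matches above a threshold are accepted. The function names and GOF definition below are illustrative assumptions.

        import numpy as np

        def goodness_of_fit(ic_map, template_mask):
            """Mean |IC| inside the template mask minus the mean |IC| outside it."""
            ic = np.abs(np.asarray(ic_map, dtype=float).ravel())
            mask = np.asarray(template_mask).ravel().astype(bool)
            return ic[mask].mean() - ic[~mask].mean()

        def match_rsns(ic_maps, templates, threshold=0.5):
            """templates: dict name -> binary mask; returns name -> (best IC index, GOF)."""
            matches = {}
            for name, mask in templates.items():
                scores = [goodness_of_fit(ic, mask) for ic in ic_maps]
                best = int(np.argmax(scores))
                if scores[best] >= threshold:
                    matches[name] = (best, scores[best])
            return matches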

  18. Automatic identification of TCM terminology in Shanghan Lun based on conditional random field

    Institute of Scientific and Technical Information of China (English)

    孟洪宇; 谢晴宇; 常虹; 孟庆刚

    2015-01-01

    Objective To explore methods for the automatic identification of TCM terminology and to expand the forms of natural language processing applied to TCM documents. Methods Using a conditional random field (CRF) approach, terms for symptoms, diseases, pulse types and prescriptions recorded in Shanghan Lun were annotated and automatically identified, and the effects of different combinations of features, such as the Chinese character itself, part of speech, word boundary and term category label, on term identification were analyzed to select the most effective combination. Results The TCM terminology automatic identification model combining the features of the Chinese character itself, part of speech, word boundary and term category label had a precision of 85.00%, recall of 68.00% and F score of 75.56%. Conclusion The multi-feature model combining the Chinese character itself, part of speech, word boundary and term category label achieved the best identification result among all combinations.
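
    As a rough sketch of how such a character-level CRF tagger can be set up (assuming the sklearn-crfsuite package and a BIO labeling scheme such as B-SYMPTOM/I-SYMPTOM/O; the feature names and parameters below are illustrative, not those of the paper):

        import sklearn_crfsuite

        def char_features(sent, i):
            """sent is a list of (character, POS tag, word-boundary flag) tuples."""
            char, pos, boundary = sent[i]
            feats = {"char": char, "pos": pos, "boundary": boundary}
            if i > 0:
                feats["prev_char"] = sent[i - 1][0]
            if i < len(sent) - 1:
                feats["next_char"] = sent[i + 1][0]
            return feats

        def to_features(sent):
            return [char_features(sent, i) for i in range(len(sent))]

        # X: list of sentences (each a list of (char, pos, boundary) tuples)
        # y: corresponding lists of BIO labels, e.g. ["B-SYMPTOM", "I-SYMPTOM", "O", ...]
        def train_crf(X, y):
            crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
            crf.fit([to_features(s) for s in X], y)
            return crf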

  19. Automatic estimation of aquifer parameters using long-term water supply pumping and injection records

    Science.gov (United States)

    Luo, Ning; Illman, Walter A.

    2016-09-01

    Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
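
    For reference, the Theis solution that the WELLS/PEST workflow calibrates relates drawdown to T and S through the well function W(u); the short sketch below simply evaluates it with SciPy's exponential integral, using made-up parameter values.

        # Worked example of the Theis solution: s = Q/(4*pi*T) * W(u), with
        # u = r^2 * S / (4*T*t) and W(u) the well function (exponential integral E1).
        import numpy as np
        from scipy.special import exp1

        def theis_drawdown(Q, T, S, r, t):
            """Drawdown [m] at radius r [m] and time t [s] for pumping rate Q [m^3/s]."""
            u = r**2 * S / (4.0 * T * t)
            return Q / (4.0 * np.pi * T) * exp1(u)

        s = theis_drawdown(Q=0.05, T=5e-3, S=2e-4, r=100.0, t=86400.0)  # after one day
        print(f"predicted drawdown: {s:.2f} m")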

  20. Automatic estimation of aquifer parameters using long-term water supply pumping and injection records

    Science.gov (United States)

    Luo, Ning; Illman, Walter A.

    2016-04-01

    Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.

  1. Automatic Identification of Messages Related to Adverse Drug Reactions from Online User Reviews using Feature-based Classification.

    Directory of Open Access Journals (Sweden)

    Jingfang Liu

    2014-11-01

    Full Text Available User-generated medical messages on Internet contain extensive information related to adverse drug reactions (ADRs and are known as valuable resources for post-marketing drug surveillance. The aim of this study was to find an effective method to identify messages related to ADRs automatically from online user reviews.We conducted experiments on online user reviews using different feature set and different classification technique. Firstly, the messages from three communities, allergy community, schizophrenia community and pain management community, were collected, the 3000 messages were annotated. Secondly, the N-gram-based features set and medical domain-specific features set were generated. Thirdly, three classification techniques, SVM, C4.5 and Naïve Bayes, were used to perform classification tasks separately. Finally, we evaluated the performance of different method using different feature set and different classification technique by comparing the metrics including accuracy and F-measure.In terms of accuracy, the accuracy of SVM classifier was higher than 0.8, the accuracy of C4.5 classifier or Naïve Bayes classifier was lower than 0.8; meanwhile, the combination feature sets including n-gram-based feature set and domain-specific feature set consistently outperformed single feature set. In terms of F-measure, the highest F-measure is 0.895 which was achieved by using combination feature sets and a SVM classifier. In all, we can get the best classification performance by using combination feature sets and SVM classifier.By using combination feature sets and SVM classifier, we can get an effective method to identify messages related to ADRs automatically from online user reviews.

  2. Automatic Whole-Spectrum Matching Techniques for Identification of Pure and Mixed Minerals using Raman Spectroscopy

    Science.gov (United States)

    Dyar, M. D.; Carey, C. J.; Breitenfeld, L.; Tague, T.; Wang, P.

    2015-12-01

    In situ use of Raman spectroscopy on Mars is planned for three different instruments in the next decade. Although implementations differ, they share the potential to identify surface minerals and organics and inform Martian geology and geochemistry. Their success depends on the availability of appropriate databases and software for phase identification. For this project, we have consolidated all known publicly-accessible Raman data on minerals for which independent confirmation of phase identity is available, and added hundreds of additional spectra acquired using varying instruments and laser energies. Using these data, we have developed software tools to improve mineral identification accuracy. For pure minerals, whole-spectrum matching algorithms far outperform existing tools based on diagnostic peaks in individual phases. Optimal matching accuracy does depend on subjective end-user choices for data processing (such as baseline removal, intensity normalization, and intensity squashing), as well as specific dataset characteristics. So, to make this tuning process amenable to automated optimization methods, we developed a machine learning-based generalization of these choices within a preprocessing and matching framework. Our novel method dramatically reduces the burden on the user and results in improved matching accuracy. Moving beyond identifying pure phases into quantification of relative abundances is a complex problem because relationships between peak intensity and mineral abundance are obscured by complicating factors: exciting laser frequency, the Raman cross section of the mineral, crystal orientation, and long-range chemical and structural ordering in the crystal lattices. Solving this un-mixing problem requires adaptation of our whole-spectrum algorithms and a large number of test spectra of minerals in known volume proportions, which we are creating for this project. Key to this effort is acquisition of spectra from mixtures of pure minerals paired
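
    To make the whole-spectrum matching idea concrete, here is a deliberately simplified sketch (not the project's tuned pipeline): remove a polynomial baseline, normalize, and rank library spectra by cosine similarity. The preprocessing choices are exactly the kind of knobs the abstract says are tuned by machine learning.

        import numpy as np

        def preprocess(spectrum, wavenumbers, baseline_deg=3):
            """Subtract a fitted polynomial baseline and normalize to unit length."""
            baseline = np.polyval(np.polyfit(wavenumbers, spectrum, baseline_deg), wavenumbers)
            corrected = np.clip(spectrum - baseline, 0, None)
            norm = np.linalg.norm(corrected)
            return corrected / norm if norm > 0 else corrected

        def best_match(query, library, wavenumbers):
            """library: dict of mineral name -> raw spectrum on the same wavenumber grid."""
            q = preprocess(query, wavenumbers)
            scores = {name: float(np.dot(q, preprocess(s, wavenumbers)))
                      for name, s in library.items()}
            return max(scores, key=scores.get), scores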

  3. Hybrid EEG—Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal

    Directory of Open Access Journals (Sweden)

    Malik M. Naeem Mannan

    2016-02-01

    Full Text Available Contamination by eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and could result in misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy to develop a brain-computer interface (BCI). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data by using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. The comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm can achieve lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data.
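
    The general idea can be sketched as follows (a conceptual illustration only, using scikit-learn's FastICA; the published method additionally uses a system-identification step with the eye-tracker signal, which is omitted here):

        import numpy as np
        from sklearn.decomposition import FastICA

        def remove_ocular_components(eeg, eye_signal, corr_threshold=0.6):
            """eeg: array (n_samples, n_channels); eye_signal: array (n_samples,)."""
            ica = FastICA(n_components=eeg.shape[1], random_state=0)
            sources = ica.fit_transform(eeg)                 # (n_samples, n_components)
            for k in range(sources.shape[1]):
                r = np.corrcoef(sources[:, k], eye_signal)[0, 1]
                if abs(r) >= corr_threshold:
                    sources[:, k] = 0.0                      # suppress ocular component
            return ica.inverse_transform(sources)            # cleaned EEG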

  4. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    Science.gov (United States)

    Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  5. Hybrid EEG--Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal.

    Science.gov (United States)

    Mannan, Malik M Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M Ahmad

    2016-01-01

    Contamination by eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and could result in misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy to develop the brain-computer interface (BCI). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data by using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. The comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm can achieve lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data. PMID:26907276

  6. Hybrid EEG—Eye Tracker: Automatic Identification and Removal of Eye Movement and Blink Artifacts from Electroencephalographic Signal

    Science.gov (United States)

    Mannan, Malik M. Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M. Ahmad

    2016-01-01

    Contamination by eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and could result in misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy to develop the brain-computer interface (BCI). In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data by using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. The comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm can achieve lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data. PMID:26907276

  7. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    Directory of Open Access Journals (Sweden)

    Vsevolod Afanasyev

    Full Text Available A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  8. AN AUTOMATIC LEAF RECOGNITION SYSTEM FOR PLANT IDENTIFICATION USING MACHINE VISION TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    VIJAY SATTI

    2013-04-01

    Full Text Available Plants are the backbone of all life on Earth and an essential resource for human well-being. Plant recognition is very important in agriculture for the management of plant species, and botanists can also use this application for medicinal purposes. Leaves of different plants have different characteristics which can be used to classify them. This paper presents a simple and computationally efficient method for plant identification using digital image processing and machine vision technology. The proposed approach consists of three phases: pre-processing, feature extraction and classification. Pre-processing is the technique of enhancing data images prior to computational processing. The feature extraction phase derives features based on the color and shape of the leaf image. These features are used as inputs to the classifier for efficient classification, and the results were tested and compared using an Artificial Neural Network (ANN) and a Euclidean (KNN) classifier. The network was trained with 1907 sample leaves of 33 different plant species taken from the Flavia dataset. The proposed approach is 93.3 percent accurate using the ANN classifier, and the comparison of classifiers shows that the ANN takes less average time for execution than the Euclidean distance method.

  9. Large data analysis: automatic visual personal identification in a demography of 1.2 billion persons

    Science.gov (United States)

    Daugman, John

    2014-05-01

    The largest biometric deployment in history is now underway in India, where the Government is enrolling the iris patterns (among other data) of all 1.2 billion citizens. The purpose of the Unique Identification Authority of India (UIDAI) is to ensure fair access to welfare benefits and entitlements, to reduce fraud, and enhance social inclusion. Only a minority of Indian citizens have bank accounts; only 4 percent possess passports; and less than half of all aid money reaches its intended recipients. A person who lacks any means of establishing their identity is excluded from entitlements and does not officially exist; thus the slogan of UIDAI is: "To give the poor an identity." This ambitious program enrolls a million people every day, across 36,000 stations run by 83 agencies, with a 3-year completion target for the entire national population. The halfway point was recently passed with more than 600 million persons now enrolled. In order to detect and prevent duplicate identities, every iris pattern that is enrolled is first compared against all others enrolled so far; thus the daily workflow now requires 600 trillion (or 600 million-million) iris cross-comparisons. Avoiding identity collisions (False Matches) requires high biometric entropy, and achieving the tremendous match speed requires phase bit coding. Both of these requirements are being delivered operationally by wavelet methods developed by the author for encoding and comparing iris patterns, which will be the focus of this "Large Data Award" presentation.
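
    The comparison step at the heart of this de-duplication workload is, in Daugman's published approach, a fractional Hamming distance between binary iris codes computed only over mutually valid (unmasked) bits. A toy numpy version is sketched below purely for illustration; real deployments operate on packed bit words with hardware-level XOR and population counts.

        import numpy as np

        def fractional_hamming(code_a, mask_a, code_b, mask_b):
            """Boolean arrays of equal shape; masks are True where bits are valid."""
            valid = mask_a & mask_b
            n_valid = valid.sum()
            if n_valid == 0:
                return 1.0
            disagreements = (code_a ^ code_b) & valid
            return disagreements.sum() / n_valid

        rng = np.random.default_rng(0)
        a = rng.random(2048) < 0.5             # toy 2048-bit iris codes
        b = a.copy(); b[:100] ^= True          # differ in the first 100 bits
        m = np.ones(2048, dtype=bool)
        print(fractional_hamming(a, m, b, m))  # ~0.049, i.e. a close match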

  10. Multispectral hypercolorimetry and automatic guided pigment identification: some masterpieces case studies

    Science.gov (United States)

    Melis, Marcello; Miccoli, Matteo; Quarta, Donato

    2013-05-01

    A couple of years ago we proposed, in this same session, an extension to standard colorimetry (CIE '31) that we called Hypercolorimetry. It was based on an even sampling of the 300-1000 nm wavelength range, with the definition of 7 hypercolor matching functions optimally shaped to minimize metamerism. Since then we have consolidated the approach through a large number of multispectral analyses and specialized the system for non-invasive diagnosis of paintings and frescos. In this paper we describe the whole process, from the multispectral image acquisition to the final 7-band computation, and we show the results on paintings from masters of colour. We describe and propose a systematic approach to non-invasive diagnosis that is able to change a subjective analysis into a repeatable measure independent of the specific lighting conditions and of the specific acquisition system. Along with Hypercolorimetry and its consolidation in the field of non-invasive diagnosis, we also developed a standard spectral reflectance database of pure pigments and pigments painted with different bindings. As we will see, this database can be compared to the reflectances of the painting to help the diagnostician in identifying the proper matter. We used a Nikon D800FR (Full Range) camera. This is a 36-megapixel reflex camera modified under a Nikon/Profilocolore common project to achieve 300-1000 nm sensitivity. The large amount of data allowed us to perform very accurate pixel comparisons, based on their spectral reflectance. All the original pigments and their bindings were provided by the Opificio delle Pietre Dure, Firenze, Italy, while the analyzed masterpieces belong to the collection of the Pinacoteca Nazionale of Bologna, Italy.

  11. Automatic identification and placement of measurement stations for hydrological discharge simulations at basin scale

    Science.gov (United States)

    Grassi, P. R.; Ceppi, A.; Cancarè, F.; Ravazzani, G.; Mancini, M.; Sciuto, D.

    2012-04-01

    corresponding data is used, and false that it is not used. Using this definition of the solution space it is possible to apply various optimization algorithms such as genetic algorithms and simulated annealing. Iterating over a large set of possible configurations, these algorithms provide the set of Pareto-optimal solutions, i.e. the number of measuring points is minimized while the forecasting accuracy is maximised. The identified Pareto curve is approximate, since the identification of the complete Pareto curve is practically impossible due to the large number of possible configurations. From the experimental results, as expected, we notice that a certain set of weather data is essential for hydrological simulations while other data are negligible. By combining the outcomes of different optimization algorithms it is possible to extract a reliable set of rules for placing measurement stations for forecast monitoring.

  12. Hybrid ICA – regression: automatic identification and removal of ocular artifacts from electroencephalographic signals

    Directory of Open Access Journals (Sweden)

    Malik Muhammad Naeem Mannan

    2016-05-01

    Full Text Available Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record the electrical activity of the brain. However, it is difficult to analyze EEG signals due to the contamination of ocular artifacts, which potentially results in misleading conclusions. Also, it is a proven fact that contamination by ocular artifacts reduces the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove/reduce these artifacts before the analysis of EEG signals for applications like BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression and high-order statistics has been proposed to identify and eliminate artifactual activities from EEG data. We used simulated, experimental and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in the EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA) and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removal of ocular activities from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between reconstructed and original EEG.

  13. Hybrid ICA-Regression: Automatic Identification and Removal of Ocular Artifacts from Electroencephalographic Signals.

    Science.gov (United States)

    Mannan, Malik M Naeem; Jeong, Myung Y; Kamran, Muhammad A

    2016-01-01

    Electroencephalography (EEG) is a portable brain-imaging technique with the advantage of high temporal resolution that can be used to record the electrical activity of the brain. However, it is difficult to analyze EEG signals due to the contamination of ocular artifacts, which potentially results in misleading conclusions. Also, it is a proven fact that contamination by ocular artifacts reduces the classification accuracy of a brain-computer interface (BCI). It is therefore very important to remove/reduce these artifacts before the analysis of EEG signals for applications like BCI. In this paper, a hybrid framework that combines independent component analysis (ICA), regression and high-order statistics has been proposed to identify and eliminate artifactual activities from EEG data. We used simulated, experimental and standard EEG signals to evaluate and analyze the effectiveness of the proposed method. Results demonstrate that the proposed method can effectively remove ocular artifacts while preserving the neuronal signals present in the EEG data. A comparison with four methods from the literature, namely ICA, regression analysis, wavelet-ICA (wICA), and regression-ICA (REGICA), confirms the significantly enhanced performance and effectiveness of the proposed method for removal of ocular activities from EEG, in terms of lower mean square error and mean absolute error values and higher mutual information between reconstructed and original EEG. PMID:27199714

  14. Short-term price overreaction: Identification, testing, exploitation

    OpenAIRE

    Caporale, Guglielmo Maria; Gil-Alana, Luis; Plastun, Alex

    2014-01-01

    This paper examines short-term price reactions after one-day abnormal price changes and whether they create exploitable profit opportunities in various financial markets. A t-test confirms the presence of overreactions and also suggests that there is an “inertia anomaly”, i.e. after an overreaction day prices tend to move in the same direction for some time. A trading robot approach is then used to test two trading strategies aimed at exploiting the detected anomalies to make abnormal profits...

  15. Short-Term Price Overreactions: Identification, Testing, Exploitation

    OpenAIRE

    Caporale, Guglielmo Maria; Luis A. Gil-Alana; Plastun, Alex

    2014-01-01

    This paper examines short-term price reactions after one-day abnormal price changes and whether they create exploitable profit opportunities in various financial markets. A t-test confirms the presence of overreactions and also suggests that there is an “inertia anomaly”, i.e. after an overreaction day prices tend to move in the same direction for some time. A trading robot approach is then used to test two trading strategies aimed at exploiting the detected anomalies to make abnormal profits...

  16. Identification of terms to define unconstrained air transportation demands

    Science.gov (United States)

    Jacobson, I. D.; Kuhilhau, A. R.

    1982-01-01

    The factors involved in the evaluation of unconstrained air transportation systems were carefully analyzed. By definition an unconstrained system is taken to be one in which the design can employ innovative and advanced concepts no longer limited by present environmental, social, political or regulatory settings. Four principal evaluation criteria are involved: (1) service utilization, based on the operating performance characteristics as viewed by potential patrons; (2) community impacts, reflecting decisions based on the perceived impacts of the system; (3) technological feasibility, estimating what is required to reduce the system to practice; and (4) financial feasibility, predicting the ability of the concepts to attract financial support. For each of these criteria, a set of terms or descriptors was identified, which should be used in the evaluation to render it complete. It is also demonstrated that these descriptors have the following properties: (a) their interpretation may be made by different groups of evaluators; (b) their interpretations and the way they are used may depend on the stage of development of the system in which they are used; (c) in formulating the problem, all descriptors should be addressed independent of the evaluation technique selected.

  17. Distributed and Overlapping Neural Substrates for Object Individuation and Identification in Visual Short-Term Memory.

    Science.gov (United States)

    Naughtin, Claire K; Mattingley, Jason B; Dux, Paul E

    2016-02-01

    Object individuation and identification are 2 key processes involved in representing visual information in short-term memory (VSTM). Individuation involves the use of spatial and temporal cues to register an object as a distinct perceptual event relative to other stimuli, whereas object identification involves extraction of featural and related conceptual properties of a stimulus. Together, individuation and identification provide the "what," "where," and "when" of visual perception. In the current study, we asked whether individuation and identification processes are underpinned by distinct neural substrates, and to what extent brain regions that reflect these 2 operations are consistent across encoding, maintenance, and retrieval stages of VSTM. We used functional magnetic resonance imaging to identify brain regions that represent the number of objects (individuation) and/or object features (identification) in an array. Using univariate and multivariate analyses, we found substantial overlap between these 2 operations in the brain. Moreover, we show that regions supporting individuation and identification vary across distinct stages of information processing. Our findings challenge influential models of multiple-object encoding in VSTM, which argue that individuation and identification are underpinned by a limited set of nonoverlapping brain regions. PMID:25217471

  18. Application Research on Color Bit Code Automatic Identification Technology in Libraries

    Institute of Scientific and Technical Information of China (English)

    李海华

    2012-01-01

    This paper outlines the definition and working principle of the color bit code, briefly introduces the current applications of this technology in various fields abroad, and analyzes the feasibility of applying color bit code automatic identification technology in domestic libraries. Finally, a library management protocol format based on color bit code technology is proposed.

  19. Automatic Assessment of Global Craniofacial Differences between Crouzon mice and Wild-type mice in terms of the Cephalic Index

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Oubel, Estanislao; Frangi, Alejandro F.;

    2006-01-01

    This paper presents the automatic assessment of differences between Wild-Type mice and Crouzon mice based on high-resolution 3D Micro CT data. One factor used for the diagnosis of Crouzon syndrome in humans is the cephalic index, which is the skull width/length ratio. This index has traditionally been computed by time-consuming manual measurements that prevent large-scale populational studies. In this study, an automatic method to estimate cephalic index for this mouse model of Crouzon syndrome is presented. The method is based on constructing a craniofacial atlas of Wild-type mice and then registering each mouse to the atlas using affine transformations. The skull length and width are then measured on the atlas and propagated to all subjects to obtain automatic measurements of the cephalic index. The registration accuracy was estimated by RMS landmark errors. Even though the accuracy

  20. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    Science.gov (United States)

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated to complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery.
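
    The DSM accuracy check quoted here (RMSE below 0.5 m against field GPS heights) is a plain root-mean-square-error computation over check points, for example (toy values only):

        import numpy as np

        def rmse(dsm_heights, gps_heights):
            """Root mean square error between DSM and GPS check-point heights [m]."""
            d = np.asarray(dsm_heights) - np.asarray(gps_heights)
            return float(np.sqrt(np.mean(d**2)))

        print(rmse([101.2, 98.7, 103.4], [101.0, 99.1, 103.0]))  # illustrative check points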

  1. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle.

    Science.gov (United States)

    Diaz-Varela, R A; Zarco-Tejada, P J; Angileri, V; Loudjani, P

    2014-02-15

    Agricultural terraces are features that provide a number of ecosystem services. As a result, their maintenance is supported by measures established by the European Common Agricultural Policy (CAP). In the framework of CAP implementation and monitoring, there is a current and future need for the development of robust, repeatable and cost-effective methodologies for the automatic identification and monitoring of these features at farm scale. This is a complex task, particularly when terraces are associated to complex vegetation cover patterns, as happens with permanent crops (e.g. olive trees). In this study we present a novel methodology for automatic and cost-efficient identification of terraces using only imagery from commercial off-the-shelf (COTS) cameras on board unmanned aerial vehicles (UAVs). Using state-of-the-art computer vision techniques, we generated orthoimagery and digital surface models (DSMs) at 11 cm spatial resolution with low user intervention. In a second stage, these data were used to identify terraces using a multi-scale object-oriented classification method. Results show the potential of this method even in highly complex agricultural areas, both regarding DSM reconstruction and image classification. The UAV-derived DSM had a root mean square error (RMSE) lower than 0.5 m when the height of the terraces was assessed against field GPS data. The subsequent automated terrace classification yielded an overall accuracy of 90% based exclusively on spectral and elevation data derived from the UAV imagery. PMID:24473345

  2. Computer Domain Term Automatic Extraction and Hierarchical Structure Building

    Institute of Scientific and Technical Information of China (English)

    林源; 陈志泊; 孙俏

    2011-01-01

    This paper presents a computer domain term automatic extraction method based on rules and statistics. It uses computer book titles from the Amazon.com website as the corpus; the data are preprocessed by word segmentation and by filtering out stop words and special characters. Terms are extracted by a set of rules and frequency statistics and inserted into a word tree built from ODP to form the hierarchical structure. Experimental results show high precision and recall of the automatically extracted terms compared with manually tagged terms.
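
    A toy sketch of the rule-plus-frequency idea (the stop-word list, regular expression and threshold are placeholders; the paper's actual rules and Chinese-language preprocessing are more elaborate):

        import re
        from collections import Counter

        STOP_WORDS = {"a", "an", "the", "for", "and", "of", "to", "in", "with"}

        def candidates(title):
            """Unigram and bigram candidates from a title after simple filtering."""
            words = [w for w in re.findall(r"[a-z+#.]+", title.lower()) if w not in STOP_WORDS]
            return words + [" ".join(p) for p in zip(words, words[1:])]

        def extract_terms(titles, min_freq=2):
            counts = Counter(c for t in titles for c in candidates(t))
            return {term for term, n in counts.items() if n >= min_freq}

        titles = ["Learning Python", "Python for Data Analysis", "Effective Python"]
        print(extract_terms(titles))   # {'python'}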

  3. Automatic methods for long-term tracking and the detection and decoding of communication dances in honeybees

    Directory of Open Access Journals (Sweden)

    Fernando Wario

    2015-09-01

    Full Text Available The honeybee waggle dance communication system is an intriguing example of abstract animal communication and has been investigated thoroughly throughout the last seven decades. Typically, observables such as durations or angles are extracted manually directly from the observation hive or from video recordings to quantify dance properties, particularly to determine where bees have foraged. In recent years, biology has profited from automation, improving measurement precision, removing human bias, and accelerating data collection. As a further step, we have developed technologies to track all individuals of a honeybee colony and detect and decode communication dances automatically. In strong contrast to conventional approaches that focus on a small subset of the hive life, whether this regards time, space, or animal identity, our more inclusive system will help the understanding of the dance comprehensively in its spatial, temporal, and social context. In this contribution, we present full specifications of the recording setup and the software for automatic recognition and decoding of tags and dances, and we discuss potential research directions that may benefit from automation. Lastly, to exemplify the power of the methodology, we show experimental data and respective analyses for a continuous, experimental recording of nine weeks duration.

  4. Proliferating cell nuclear antigen (PCNA) allows the automatic identification of follicles in microscopic images of human ovarian tissue

    CERN Document Server

    Kelsey, Thomas W; Castillo, Luis; Wallace, W Hamish B; Gonzálvez, Francisco Cóppola; 10.2147/PLMI.S11116

    2010-01-01

    Human ovarian reserve is defined by the population of nongrowing follicles (NGFs) in the ovary. Direct estimation of ovarian reserve involves the identification of NGFs in prepared ovarian tissue. Previous studies involving human tissue have used hematoxylin and eosin (HE) stain, with NGF populations estimated by human examination either of tissue under a microscope, or of images taken of this tissue. In this study we replaced HE with proliferating cell nuclear antigen (PCNA), and automated the identification and enumeration of NGFs that appear in the resulting microscopic images. We compared the automated estimates to those obtained by human experts, with the "gold standard" taken to be the average of the conservative and liberal estimates by three human experts. The automated estimates were within 10% of the "gold standard", for images at both 100x and 200x magnifications. Automated analysis took longer than human analysis for several hundred images, not allowing for breaks from analysis needed by humans. O...

  5. An MRI-derived definition of MCI-to-AD conversion for long-term, automatic prognosis of MCI patients.

    Directory of Open Access Journals (Sweden)

    Yaman Aksu

    Full Text Available Alzheimer's disease (AD) and mild cognitive impairment (MCI) are of great current research interest. While there is no consensus on whether MCIs actually "convert" to AD, this concept is widely applied. Thus, the more important question is not whether MCIs convert, but what is the best such definition. We focus on automatic prognostication, nominally using only a baseline brain image, of whether an MCI will convert within a multi-year period following the initial clinical visit. This is not a traditional supervised learning problem since, in ADNI, there are no definitive labeled conversion examples. It is not unsupervised, either, since there are (labeled) ADs and Controls, as well as cognitive scores for MCIs. Prior works have defined MCI subclasses based on whether or not clinical scores significantly change from baseline. There are concerns with these definitions, however, since, e.g., most MCIs (and ADs) do not change from a baseline CDR = 0.5 at any subsequent visit in ADNI, even while physiological changes may be occurring. These works ignore rich phenotypical information in an MCI patient's brain scan and labeled AD and Control examples in defining conversion. We propose an innovative definition, wherein an MCI is a converter if any of the patient's brain scans are classified "AD" by a Control-AD classifier. This definition bootstraps the design of a second classifier, specifically trained to predict whether or not MCIs will convert. We thus predict whether an AD-Control classifier will predict that a patient has AD. Our results demonstrate that this definition leads not only to much higher prognostic accuracy than by-CDR conversion, but also to subpopulations more consistent with known AD biomarkers (including CSF markers). We also identify key prognostic brain region biomarkers.

  6. Video-based automatic front-view human identification

    Institute of Scientific and Technical Information of China (English)

    贲晛烨; 王科俊; 马慧

    2012-01-01

    A system was designed to automatically identify a person from a front-view angle in a video sequence, including modules for Adaboost pedestrian detection, Adaboost face detection, complexion verification, gait preprocessing, period detection, feature extraction, and decision-level fusion and identification. The face detection module and gait period detection module are activated automatically by the pedestrian detection module. The experimental results show that the swinging-arm region can be used to determine the front-view gait period accurately with minimal computation, making the approach suitable for real-time gait recognition. Applying gait features assisted by face features in decision-level fusion is a new way to solve human identification in a video sequence. Even in gait recognition with a single sample per person, the proposed scheme achieves a higher correct recognition rate when face and gait information are fused than when gait features are used alone.

  7. Automatic segmentation of the hippocampus for preterm neonates from early-in-life to term-equivalent age

    Directory of Open Access Journals (Sweden)

    Ting Guo

    2015-01-01

    Conclusions: MAGeT-Brain is capable of segmenting hippocampi accurately in preterm neonates, even at early-in-life. Hippocampal asymmetry with a larger right side is demonstrated on early-in-life images, suggesting that this phenomenon has its onset in the 3rd trimester of gestation. Hippocampal volume assessed at the time of early-in-life and term-equivalent age is linearly associated with GA at birth, whereby smaller volumes are associated with earlier birth.

  8. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic, and they may look rather complicated, in the sense that it may not be easy to name the rule by which the sequence is generated; however, there exists a rule which generates the sequence. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.

  9. Automatized near-real-time short-term Probabilistic Volcanic Hazard Assessment of tephra dispersion before eruptions: BET_VHst for Vesuvius and Campi Flegrei during recent exercises

    Science.gov (United States)

    Selva, Jacopo; Costa, Antonio; Sandri, Laura; Rouwet, Dmtri; Tonini, Roberto; Macedonio, Giovanni; Marzocchi, Warner

    2015-04-01

    Probabilistic Volcanic Hazard Assessment (PVHA) represents the most complete scientific contribution for planning rational strategies aimed at mitigating the risk posed by volcanic activity at different time scales. The definition of the space-time window for PVHA is related to the kind of risk mitigation actions that are under consideration. Short temporal intervals (days to weeks) are important for short-term risk mitigation actions like the evacuation of a volcanic area. During volcanic unrest episodes or eruptions, it is of primary importance to produce short-term tephra fallout forecast, and frequently update it to account for the rapidly evolving situation. This information is obviously crucial for crisis management, since tephra may heavily affect building stability, public health, transportations and evacuation routes (airports, trains, road traffic) and lifelines (electric power supply). In this study, we propose a methodology named BET_VHst (Selva et al. 2014) for short-term PVHA of volcanic tephra dispersal based on automatic interpretation of measures from the monitoring system and physical models of tephra dispersal from all possible vent positions and eruptive sizes based on frequently updated meteorological forecasts. The large uncertainty at all the steps required for the analysis, both aleatory and epistemic, is treated by means of Bayesian inference and statistical mixing of long- and short-term analyses. The BET_VHst model is here presented through its implementation during two exercises organized for volcanoes in the Neapolitan area: MESIMEX for Mt. Vesuvius, and VUELCO for Campi Flegrei. References Selva J., Costa A., Sandri L., Macedonio G., Marzocchi W. (2014) Probabilistic short-term volcanic hazard in phases of unrest: a case study for tephra fallout, J. Geophys. Res., 119, doi: 10.1002/2014JB011252

  10. A New Color Facial Identification Feature Extraction Method and Automatic Identification

    Institute of Scientific and Technical Information of China (English)

    高燕; 明曙军; 刘永俊

    2011-01-01

    Face recognition has achieved some success, and algorithms are constantly being improved. Addressing the common reliance of traditional linear analysis methods on an average sample, this paper proposes face recognition based on intermediate samples. This method effectively removes the influence of interfering samples on the average sample. Combined with color face recognition, a color facial identification feature extraction and automatic identification method based on intermediate samples is proposed. Finally, extensive experiments performed on the internationally used AR standard color face database verify the effectiveness of the proposed method.

  11. Automatic identification of organ/tissue regions in CT image data for the implementation of patient specific phantoms for treatment planning in cancer therapy

    Science.gov (United States)

    Sparks, Richard Blaine

    In vivo targeted radiotherapy has the potential to be an effective treatment for many types of cancer. Agents which show preferred uptake by cancerous tissue are labeled with radio-nuclides and administered to the patient. The preferred uptake by the cancerous tissue allows for the delivery of therapeutically effective radiation absorbed doses to tumors, while sparing normal tissue. Accurate absorbed dose estimation for targeted radiotherapy would be of great clinical value in a patient's treatment planning. One of the problems with calculating absorbed dose involves the use of geometric mathematical models of the human body for the simulation of the radiation transport. Since many patients differ markedly from these models, errors in the absorbed dose estimation procedure result from using these models. Patient specific models developed using individual patient's anatomical structure would greatly enhance the accuracy of dosimetry calculations. Patient specific anatomy data is available from CT or MRI images, but the very time consuming process of manual organ and tissue identification limits its practicality for routine clinical use. This study uses a statistical classifier to automatically identify organs and tissues from CT image data. In this study, image ``slices'' from thirty- five different subjects at approximately the same anatomical position are used to ``train'' the statistical classifier. Multi-dimensional probability distributions of image characteristics, such as location and intensity, are generated from the training images. Statistical classification rules are then used to identify organs and tissues in five previously unseen images. A variety of pre-processing and post-processing techniques are then employed to enhance the classification procedure. This study demonstrated the promise of statistical classifiers for solving segmentation problems involving human anatomy where there is an underlying pattern of structure. Despite the poor quality of

  12. Automatic classification and robust identification of vestibulo-ocular reflex responses: from theory to practice: introducing GNL-HybELS.

    Science.gov (United States)

    Ghoreyshi, Atiyeh; Galiana, Henrietta

    2011-10-01

    The Vestibulo-Ocular Reflex (VOR) stabilizes images of the world on our retinae when our head moves. Basic daily activities are thus impaired if this reflex malfunctions. During the past few decades, scientists have modeled and identified this system mathematically to diagnose and treat VOR deficits. However, traditional methods do not analyze VOR data comprehensively because they disregard the switching nature of nystagmus; this can bias estimates of VOR dynamics. Here we propose, for the first time, an automated tool to analyze entire VOR responses (slow and fast phases), without a priori classification of nystagmus segments. We have developed GNL-HybELS (Generalized NonLinear Hybrid Extended Least Squares), an algorithmic tool to simultaneously classify and identify the responses of a multi-mode nonlinear system with delay, such as the horizontal VOR and its alternating slow and fast phases. This algorithm combines the procedures of Generalized Principle Component Analysis (GPCA) for classification, and Hybrid Extended Least Squares (HybELS) for identification, by minimizing a cost function in an optimization framework. It is validated here on clean and noisy VOR simulations and then applied to clinical VOR tests on controls and patients. Prediction errors were less than 1 deg for simulations and ranged from .69 deg to 2.1 deg for the clinical data. Nonlinearities, asymmetries, and dynamic parameters were detected in normal and patient data, in both fast and slow phases of the response. This objective approach to VOR analysis now allows the design of more complex protocols for the testing of oculomotor and other hybrid systems.

  13. Analysis of Automatic Identification Technology for Intelligent Building Access Control Systems

    Institute of Scientific and Technical Information of China (English)

    张卉

    2015-01-01

    The access control system is a necessary facility in intelligent buildings, providing security protection, automatic control and other functions. The fingerprint identification system is a new artificial-intelligence-based system that provides a technological means for automatic identification in access control. This paper analyzes the development trend of intelligent buildings and the basic structure of fingerprint identification systems, and introduces how automatic identification technology is applied in intelligent building access control systems.

  14. Monitoring of Intrusive Vessels Based on an Automatic Identification System (AIS)

    Institute of Scientific and Technical Information of China (English)

    郭浩; 张晰; 安居白; 李冠宇

    2013-01-01

    China is rich in marine resources, and vessels from neighboring countries often enter China's territorial waters or exclusive economic zone illegally. In order to protect and develop marine resources effectively, this paper analyzes the vessel characteristics and sailing features of ships from one neighboring country that navigated in its exclusive economic zone and in Chinese waters in April 2012. In particular, an automatic identification system (AIS) is used to collect dynamic information such as position, speed and heading, together with static information such as ship name, call sign, draft and dangerous goods carried. The geographic distribution, velocity and regular route patterns of the vessels are then used to develop a ship traffic information database. This work provides an effective way to monitor intrusive vessels and protect China's marine rights.

  15. Identification of Biocontrol Bacteria against Soybean Root Rot with the Biolog Automatic Microbiology Analysis System

    Institute of Scientific and Technical Information of China (English)

    许艳丽; 刘海龙; 李春杰; 潘凤娟; 李淑娴; 刘新晶

    2012-01-01

    In order to determine the taxonomic position of two biocontrol bacteria against soybean root rot, traditional morphological identification and the Biolog automatic microbiology analysis system were used to identify strains B021a and B04b. The results showed that the similarity value of strain B021a with Vibrio tubiashii was 0.634, with a probability of 86% and a genetic distance of 4.00, and the similarity value of strain B04b with Pasteurella trehalosi was 0.610, with a probability of 75% and a genetic distance of 2.77. Combining colony morphological properties with the Biolog analysis, strain B021a was identified as Vibrio tubiashii and strain B04b as Pasteurella trehalosi.

  16. Automatic Number Plate Recognition System

    OpenAIRE

    Rajshree Dhruw; Dharmendra Roy

    2014-01-01

    Automatic Number Plate Recognition (ANPR) is a mass surveillance system that captures the image of vehicles and recognizes their license number. The objective is to design an efficient automatic authorized vehicle identification system by using the Indian vehicle number plate. In this paper we discuss different methodologies for number plate localization, character segmentation and recognition of the number plate. The system is mainly applicable to non-standard Indian number plates by recognizing...

  17. REMI and ROUSE: Quantitative models for long-term priming in perceptual identification.

    NARCIS (Netherlands)

    E.J.M. Wagenmakers; R. Zeelenberg; D. Huber; J.G.W. Raaijmakers; R.M. Shiffrin; L.J. Schooler

    2003-01-01

    (from the chapter) The REM model originally developed for recognition memory (R. M. Shiffrin and M. Steyvers, 1997) has recently been extended to implicit memory phenomena observed during threshold identification of words. The authors discuss 2 REM models based on Bayesian principles: a model for lo

  18. Numerical method of identification of an unknown source term in a heat equation

    Directory of Open Access Journals (Sweden)

    Fatullayev Afet Golayoğlu

    2002-01-01

    Full Text Available A numerical procedure for an inverse problem of identification of an unknown source in a heat equation is presented. The approach of the proposed method is to approximate the unknown function by piecewise linear segments that are determined consecutively from the solution of a minimization problem based on the overspecified data. Numerical examples are presented.

  19. Automatic segmentation of diatom images for classification

    NARCIS (Netherlands)

    Jalba, Andrei C.; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.

    2004-01-01

    A general framework for automatic segmentation of diatom images is presented. This segmentation is a critical first step in contour-based methods for automatic identification of diatoms by computerized image analysis. We review existing results and adapt popular segmentation methods to this difficult problem...

  20. Progress in Research on Digital Image Processing Technology for Automatic Insect Identification and Counting

    Institute of Scientific and Technical Information of China (English)

    姚青; 吕军; 杨保军; 薛杰; 郑宏海; 唐健

    2011-01-01

    With the rapid development of information technology, digitization, precision and intelligence have become important characteristics of modern agriculture, and automatic identification and counting of agricultural insects has become a hot research topic. The main methods and applications of automatic insect identification and counting by image processing technology are reviewed, the advantages and disadvantages of these methods are compared, and the relevant open problems and prospects are discussed.

  1. Identification of the Capability of a Steel Pipe Weld Automatic Ultrasonic Testing System

    Institute of Scientific and Technical Information of China (English)

    甘正红; 方晓东; 余洋; 苏继权

    2013-01-01

    This article introduces the main items to be checked in a multichannel steel pipe weld automatic ultrasonic testing system, the method for calibrating the testing system (equipment), and the service conditions of the testing system. Combined with the requirements for automatic ultrasonic testing of steel pipe welds specified in the API SPEC 5L/ISO 3183 standard, it discusses the main performance indicators and the identification method for a multichannel steel pipe weld automatic ultrasonic testing system, and gives specific requirements for linearity, horizontal linearity, dynamic range, comprehensive performance and other indicators. Practical application demonstrates the feasibility of this capability identification.

  2. Automatic Reading

    Institute of Scientific and Technical Information of China (English)

    胡迪

    2007-01-01

    Reading is the key to school success and, like any skill, it takes practice. A child learns to walk by practising until he no longer has to think about how to put one foot in front of the other. The great athlete practises until he can play quickly, accurately and without thinking. Educators call it automaticity.

  3. Paraphrase Identification using Semantic Heuristic Features

    Directory of Open Access Journals (Sweden)

    Zia Ul-Qayyum

    2012-11-01

    Full Text Available The Paraphrase Identification (PI) problem is to classify whether or not two sentences are close enough in meaning to be termed paraphrases. PI is an important research dimension with practical applications in Information Extraction (IE), Machine Translation, Information Retrieval, Automatic Identification of Copyright Infringement, Question Answering Systems and Intelligent Tutoring Systems, to name a few. This study presents a novel approach to paraphrase identification using semantic heuristic features, envisaging improved accuracy compared to state-of-the-art PI systems. Finally, a comprehensive critical analysis of misclassifications is carried out to provide insightful evidence about the proposed approach and the corpora used in the experiments.

  4. Automatic structural modeling method based on process manufacturing system identification

    Institute of Scientific and Technical Information of China (English)

    韩中; 赵升吨; 张贵成; 阮卫平; 李建平; 沈红立

    2014-01-01

    Models can solve many problems in systems engineering, so a new automatic structural modeling method based on process manufacturing system identification is presented. By identifying the system composition and the relationships between units, structural model data are extracted and a system simulation model is generated automatically. Graph theory is adopted as the mathematical representation of the industrial system in the modeling. In addition, rule-based codes are assigned to the system units, and identification functions are defined according to the properties of the system structure. The auto-modeling process is achieved through iterative computation. Finally, an example is given to verify that the presented method is feasible and can satisfy the requirements of usefulness, efficiency and accuracy in system modeling.

  5. The effect of generation on long-term repetition priming in auditory and visual perceptual identification.

    Science.gov (United States)

    Mulligan, Neil W

    2011-05-01

    Perceptual implicit memory is typically most robust when the perceptual processing at encoding matches the perceptual processing required during retrieval. A consistent exception is the robust priming that semantic generation produces on the perceptual identification test (Masson & MacLeod, 2002), a finding which has been attributed to either (1) conceptual influences in this nominally perceptual task, or (2) covert orthographic processing during generative encoding. The present experiments assess these possibilities using both auditory and visual perceptual identification, tests in which participants identify auditory words in noise or rapidly-presented visual words. During the encoding phase of the experiments, participants generated some words and perceived others in an intermixed study list. The perceptual control condition was visual (reading) or auditory (hearing), and varied across participants. The reading and hearing conditions exhibited the expected modality-specificity, producing robust intra-modal priming and non-significant cross-modal priming. Priming in the generate condition depended on the perceptual control condition. With a read control condition, semantic generation produced robust visual priming but no auditory priming. With a hear control condition, the results were reversed: semantic generation produced robust auditory priming but not visual priming. This set of results is not consistent with a straightforward application of either the conceptual-influence or covert-orthography account, and implies that the nature of encoding in the generate condition is influenced by the broader list context. PMID:21388613

  6. 21 CFR 892.1900 - Automatic radiographic film processor.

    Science.gov (United States)

    2010-04-01

    § 892.1900 Automatic radiographic film processor. (a) Identification. An automatic radiographic film processor is a device intended to be used...

  7. Modeling of Automatic Generation Control for Power System Transient, Medium-Term and Long-Term Stability Simulations

    Institute of Scientific and Technical Information of China (English)

    宋新立; 王成山; 仲悟之; 汤涌; 卓峻峰; 旸吴国; 苏志达

    2013-01-01

    In order to dynamically simulate secondary frequency control in large power systems, a new automatic generation control (AGC) model, applicable to power system electromechanical transient, medium-term and long-term dynamics simulation, is proposed based on the modeling method of hybrid systems. It mainly consists of three parts: calculation of the area control error (ACE), simulation of the control strategy, and calculation of the generating power regulation. The first module is modeled by the method of continuous dynamic systems, and the last two modules are modeled by the method of discrete-event dynamic systems. By interfacing to the existing models in the power system unified dynamic simulation program, it is capable of simulating not only the three main AGC control modes for large power systems, i.e., flat frequency control (FFC), constant net interchange control (CIC), and tie-line bias frequency control (TBC), but also the widely used control strategies based on the CPS and A performance standards. Two simulation cases related to the active power control of UHVAC tie-lines in China show that the model provides an effective simulation tool for practical grid problems such as limiting tie-line power fluctuations in large grids, coordinating multi-area AGC control strategies, and optimizing secondary frequency control.

  8. Long-term forecasting of hourly electricity load: Identification of consumption profiles and segmentation of customers

    DEFF Research Database (Denmark)

    Møller Andersen, Frits; Larsen, Helge V.; Boomsma, Trine Krogh

    2013-01-01

    Data for aggregated hourly electricity demand shows systematic variations over the day, week, and seasons, and forecasting of aggregated hourly electricity load has been the subject of many studies. With hourly metering of individual customers, data for individual consumption profiles is available. Using this data and analysing the case of Denmark, we show that consumption profiles for categories of customers are equally systematic but very different for distinct categories, that is, distinct categories of customers contribute differently to the aggregated electricity load profile. Therefore, to model and forecast long-term changes in the aggregated electricity load profile, we identify profiles for different categories of customers and link these to projections of the aggregated annual consumption by categories of customers. Long-term projection of the aggregated load is important for future...
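
    The linking step described above, tying category-level profiles to projections of annual consumption, amounts to a weighted sum; the sketch below shows that arithmetic under the assumption that each category profile is normalized to sum to one over the year. Names and numbers are illustrative, not taken from the study.

```python
import numpy as np

def aggregate_hourly_load(profiles, annual_consumption):
    """Sum category profiles (each normalized over the year) scaled by the
    projected annual consumption of that category to get the aggregated hourly load."""
    total = np.zeros(next(iter(profiles.values())).shape)
    for category, profile in profiles.items():
        total += annual_consumption[category] * profile
    return total

# Toy example with two categories over an 8760-hour year
hours = np.arange(8760)
base = np.full(hours.size, 1.0 / hours.size)
profiles = {"households": base * (1 + 0.2 * np.sin(2 * np.pi * hours / 24)),
            "industry": base}
annual = {"households": 10_000.0, "industry": 25_000.0}   # e.g. GWh per year
print(aggregate_hourly_load(profiles, annual).sum())      # ~35_000
```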

  9. Liabilities identification and long-term management - Review of French situation

    International Nuclear Information System (INIS)

    In France, long term liabilities due to nuclear activities concern four main operators: Electricite de France (EDF), AREVA (an industrial group created on September 3, 2001 and covering the entire fuel cycle from ore extraction and transformation to the recycling of spent fuel), the Atomic Energy Commission (CEA, the French public research organism in the nuclear sector) and the French Agency for radioactive waste management (ANDRA, in charge of the long term operation of radioactive waste installations). Long term liabilities are due to the financing of both decommissioning of nuclear installations and radioactive waste long term management. In the current French organisational scheme, the different operators must take the responsibility for these long term liabilities. The setting of national policies and the establishment of the legislation are carried out at a national level by the French state. These include the supervision of the three operators through different Ministries and the regulatory control of safety through the Nuclear Safety Authority (ASN). EDF, AREVA, CEA and ANDRA are responsible for all aspects of the decommissioning (from a technical and financial point of view). Within a safety regulatory frame, they have their own initiative concerning future expenses, based on estimated costs and the expected operational lifetime of the installations. They are responsible for the definition and implementation of the technical options. Through its supervision activities, the French State regularly requires updating studies of these estimated costs, which are conducted by the operators. A general review of the management of these long-term liabilities is also carried out on a four-year basis by the French Court of Accounts. Operators are due to constitute provisions during the life cycle of their installations. Provisions are calculated for each installation on the basis of the decommissioning expenses and of the reasonably estimated lifetime. They are re

  10. Screening local Lactobacilli from Iran in terms of production of lactic acid and identification of superior strains

    Directory of Open Access Journals (Sweden)

    Fatemeh Soleimanifard

    2015-12-01

    Full Text Available Introduction: Lactobacilli are a group of lactic acid bacteria whose final fermentation product is lactic acid. The objective of this research is the selection of local Lactobacilli producing L(+) lactic acid. Materials and methods: In this research the local strains were screened based on their ability to produce lactic acid. The screening was performed in two stages: the first stage used a titration method and the second stage an enzymatic method. The superior strains obtained from the titration method were selected for the enzymatic test. Finally, the superior strains from the second (enzymatic) stage that were able to produce L(+) lactic acid were identified by biochemical tests. Then, molecular identification of the strains was performed using 16S rRNA sequencing. Results: In this study, the ability of 79 strains of local Lactobacilli to produce lactic acid was examined. The highest and lowest rates of lactic acid production were 34.8 and 12.4 mg/g. The superior Lactobacilli produced the L(+) optical isomer; the highest level of L(+) lactic acid was 3.99 mg/g and the lowest 1.03 mg/g. The biochemical and molecular identification of the superior strains showed that they are Lactobacillus paracasei. The 16S rRNA sequences of the superior strains were deposited in NCBI under accession numbers KF735654, KF735655, KJ508201 and KJ508202. Discussion and conclusion: The amounts of lactic acid produced by the local Lactobacilli varied widely, and some of these strains produced more than has been reported elsewhere. The results of this research suggest the use of the superior strains of Lactobacilli for production of pure L(+) lactic acid.

  11. Identification and localization of netrin-4 and neogenin in human first trimester and term placenta.

    Science.gov (United States)

    Dakouane-Giudicelli, M; Duboucher, C; Fortemps, J; Salama, S; Brulé, A; Rozenberg, P; de Mazancourt, P

    2012-09-01

    We describe here for the first time the characterization of family member of netrins, netrin-4 and its receptor neogenin, during the development of the placenta. By using western blots and RT-PCR, we demonstrated the presence of netrin-4 and its receptor neogenin protein as well as their transcripts. Using immunohistochemistry, we studied the distribution of netrin-4 and neogenin in both the first trimester and term placenta. We observed staining of netrin-4 in villous and extravillous cytotrophoblasts, syncytiotrophoblast, and endothelial cells whereas staining in stromal cells was faint. In decidua, we observed netrin-4 labelling in glandular epithelial cells, perivascular decidualized cells, and endothelial cells. However, neogenin was absent in villous and extravillous cytotrophoblasts and was expressed only on syncytiotrophoblast and placental stromal cells in the first trimester and at term placenta. The pattern of distribution suggests that a functional netrin-4-neogenin pathway might be restricted to syncytiotrophoblasts, mesenchymal cells, and villous endothelial cells. This pathway function might vary with its localization in the placenta. It is possibly involved in angiogenesis, morphogenesis, and differentiation.

  12. Identification of long-term containment/stabilization technology performance issues

    International Nuclear Information System (INIS)

    U.S. Department of Energy (DOE) faces a somewhat unique challenge when addressing in situ remedial alternatives that leave long-lived radionuclides and hazardous contaminants onsite. These contaminants will remain a potential hazard for thousands of years. However, the risks, costs, and uncertainties associated with removal and offsite disposal are leading many sites to select in situ disposal alternatives. Improvements in containment, stabilization, and monitoring technologies will enhance the viability of such alternatives for implementation. DOE's Office of Science and Technology sponsored a two day workshop designed to investigate issues associated with the long-term in situ stabilization and containment of buried, long-lived hazardous and radioactive contaminants. The workshop facilitated communication among end users representing most sites within the DOE, regulators, and technologists to define long-term performance issues for in situ stabilization and containment alternatives. Participants were divided into groups to identify issues and a strategy to address priority issues. This paper presents the results of the working groups and summarizes the conclusions. A common issue identified by the work groups is communication. Effective communication between technologists, risk assessors, end users, regulators, and other stakeholders would contribute greatly to resolution of both technical and programmatic issues

  13. Automatic identification of fault zone head waves and direct P waves and its application in the Parkfield section of the San Andreas Fault, California

    Science.gov (United States)

    Li, Zefeng; Peng, Zhigang

    2016-06-01

    Fault zone head waves (FZHWs) are observed along major strike-slip faults and can provide high-resolution imaging of fault interface properties at seismogenic depth. In this paper, we present a new method to automatically detect FZHWs and pick direct P wave secondary arrivals (DWSAs). The algorithm identifies FZHWs by computing the amplitude ratios between the potential FZHWs and DWSAs. The polarities, polarizations and characteristic periods of FZHWs and DWSAs are then used to refine the picks or evaluate the pick quality. We apply the method to the Parkfield section of the San Andreas Fault, where FZHWs have previously been identified by manual picks. We compare results from automatically and manually picked arrivals and find general agreement between them. The obtained velocity contrast at Parkfield is generally 5-10 per cent near Middle Mountain, while it decreases below 5 per cent near Gold Hill. We also find many FZHWs recorded by stations within 1 km of the background seismicity (i.e. the Southwest Fracture Zone) that have not been reported before. These FZHWs could be generated within a relatively wide low velocity zone sandwiched between the fast Salinian block on the southwest side and the slow Franciscan Mélange on the northeast side. Station FROB on the southwest (fast) side also recorded a small portion of weak precursory signals before sharp P waves. However, the polarities of the weak signals are consistent with the right-lateral strike-slip mechanisms, suggesting that they are unlikely to be genuine FZHW signals.
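
    As a rough illustration of the amplitude-ratio screening described above, the sketch below scans a short window immediately before an impulsive P pick and keeps the trace as a head-wave candidate when that window carries emergent energy well above the noise level. The window lengths, the threshold and the function name are illustrative assumptions, not values from the paper, and the polarity/polarization checks are omitted.

```python
import numpy as np

def fzhw_candidate(trace, p_pick, dt, pre_win=0.5, noise_win=2.0, snr_thresh=3.0):
    """Flag a possible fault-zone head wave: emergent energy in a short window
    before the impulsive direct-P pick, followed by a larger impulsive arrival."""
    i_pick = int(p_pick / dt)
    n_pre, n_noise = int(pre_win / dt), int(noise_win / dt)
    noise = trace[i_pick - n_noise - n_pre:i_pick - n_pre]
    precursor = trace[i_pick - n_pre:i_pick]
    direct = trace[i_pick:i_pick + n_pre]
    snr = np.std(precursor) / (np.std(noise) + 1e-12)
    amp_ratio = np.max(np.abs(direct)) / (np.max(np.abs(precursor)) + 1e-12)
    return snr > snr_thresh and amp_ratio > 1.0
```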

  14. Comparison of Short-Term Estrogenicity Tests for Identification of Hormone-Disrupting Chemicals

    Science.gov (United States)

    Andersen, Helle Raun; Andersson, Anna-Maria; Arnold, Steven F.; Autrup, Herman; Barfoed, Marianne; Beresford, Nicola A.; Bjerregaard, Poul; Christiansen, Lisette B.; Gissel, Birgitte; Hummel, René; Jørgensen, Eva Bonefeld; Korsgaard, Bodil; Le Guevel, Remy; Leffers, Henrik; McLachlan, John; Møller, Anette; Bo Nielsen, Jesper; Olea, Nicolas; Oles-Karasko, Anita; Pakdel, Farzad; Pedersen, Knud L.; Perez, Pilar; Skakkebæk, Niels Erik; Sonnenschein, Carlos; Soto, Ana M.; Sumpter, John P.; Thorpe, Susan M.; Grandjean, Philippe

    1999-01-01

    The aim of this study was to compare results obtained by eight different short-term assays of estrogen-like actions of chemicals conducted in 10 different laboratories in five countries. Twenty chemicals were selected to represent direct-acting estrogens, compounds with estrogenic metabolites, estrogenic antagonists, and a known cytotoxic agent. Also included in the test panel were 17β-estradiol as a positive control and ethanol as solvent control. The test compounds were coded before distribution. Test methods included direct binding to the estrogen receptor (ER), proliferation of MCF-7 cells, transient reporter gene expression in MCF-7 cells, reporter gene expression in yeast strains stably transfected with the human ER and an estrogen-responsive reporter gene, and vitellogenin production in juvenile rainbow trout. 17β-Estradiol, 17α-ethynyl estradiol, and diethylstilbestrol induced a strong estrogenic response in all test systems. Colchicine caused cytotoxicity only. Bisphenol A induced an estrogenic response in all assays. The results obtained for the remaining test compounds (tamoxifen, ICI 182.780, testosterone, bisphenol A dimethacrylate, 4-n-octylphenol, 4-n-nonylphenol, nonylphenol dodecylethoxylate, butylbenzylphthalate, dibutylphthalate, methoxychlor, o,p′-DDT, p,p′-DDE, endosulfan, chlormequat chloride, and ethanol) varied among the assays. The results demonstrate that careful standardization is necessary to obtain a reasonable degree of reproducibility. Also, similar methods vary in their sensitivity to estrogenic compounds. Thus, short-term tests are useful for screening purposes, but the methods must be further validated by additional interlaboratory and interassay comparisons to document the reliability of the methods. PMID:10229711

  15. Design of an Automatic Identification Algorithm for Pedestrian Clustering in Channels

    Institute of Scientific and Technical Information of China (English)

    李鑫; 陈艳艳; 陈宁; 刘小明; 冯国臣

    2016-01-01

    In order to carry out reasonable guidance and passenger flow organization for crowd-gathering abnormal events in the transfer channels of urban rail transit hubs, and to ensure the safe and efficient operation of these hubs, this paper puts forward an algorithm that automatically recognizes crowd-gathering abnormal events in a channel. First, the stability and mutability of the basic pedestrian volume data in the channel are analysed, and a new data type characterized by both stability and mutability is created from this analysis. The key parameter of the automatic identification algorithm, the difference of space offset, is then designed based on double-section pedestrian volume data, and analysis of the variation characteristics of this key parameter is used to establish the algorithm for automatically identifying crowd-gathering abnormal events. The simulation experiment results show that the detection accuracy of the algorithm is 100% and the mean reaction time is 65 s, indicating that the algorithm has a strong automatic detection capability and a short reaction time for pedestrian clustering events.

  16. Identification of long-term trends in vegetation dynamics in the Guinea savannah region of Nigeria

    Science.gov (United States)

    Osunmadewa, Babatunde A.; Wessollek, Christine; Karrasch, Pierre

    2014-10-01

    The availability of newly generated data from the Advanced Very High Resolution Radiometer (AVHRR) covering the last three decades has broadened our understanding of vegetation dynamics (greening) from the global to the regional scale through quantitative analysis of seasonal trends in vegetation time series and climatic variability, especially in the Guinea savannah region of Nigeria where the greening trend is inconsistent. Due to the impact of changes in global climate and the sustainability of human livelihoods, interest in vegetation productivity has grown. The aim of this study is to examine the association between NDVI and rainfall using remotely sensed data, since vegetation dynamics (greening) has a high degree of association with weather parameters. This study therefore analyses trends in regional vegetation dynamics in Kogi state, Nigeria using bi-monthly AVHRR GIMMS 3g (Global Inventory Modelling and Mapping Studies) data and TAMSAT (Tropical Applications of Meteorology Satellite) monthly data, both from 1983 to 2011, to identify changes in vegetation greenness over time. Analysis of changes in the seasonal variation of vegetation greenness and climatic drivers was conducted for selected locations to further understand the causes of observed interannual changes in vegetation dynamics. For this study, the Mann-Kendall (MK) monotonic method was used to analyse long-term inter-annual trends of NDVI and the climatic variable. The Theil-Sen median slope was used to calculate the rate of change, taking the median of the slopes over all pairwise combinations of observations in time. Trends were also analysed using a linear model after seasonality had been removed from the original NDVI and rainfall data. The results of the linear model are statistically significant (p < 0.01) in all the study locations, which can be interpreted as an increase in vegetation over time (greening). The result of the NDVI trend analysis using the Mann-Kendall test also shows an increasing...
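
    For readers who want to reproduce the trend statistics named above, the snippet below applies a Mann-Kendall-style monotonic test (via Kendall's tau against time) and the Theil-Sen median slope to a deseasonalized NDVI series using SciPy. The synthetic data at the end are purely illustrative.

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes

def ndvi_trend(years, ndvi):
    """Monotonic trend test and robust slope estimate for an annual NDVI series."""
    t = np.asarray(years, dtype=float)
    y = np.asarray(ndvi, dtype=float)
    tau, p_value = kendalltau(t, y)               # direction and significance of the trend
    slope, intercept, lo, hi = theilslopes(y, t)  # median of pairwise slopes, with CI
    return {"tau": tau, "p": p_value, "slope": slope, "slope_ci": (lo, hi)}

# Example: 29 years of annual mean NDVI with a weak upward drift plus noise
years = np.arange(1983, 2012)
ndvi = 0.45 + 0.002 * (years - 1983) + 0.01 * np.random.randn(len(years))
print(ndvi_trend(years, ndvi))
```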

  17. Automatic personnel contamination monitor

    International Nuclear Information System (INIS)

    United Nuclear Industries, Inc. (UNI) has developed an automatic personnel contamination monitor (APCM), which uniquely combines the design features of both portal and hand and shoe monitors. In addition, this prototype system also has a number of new features, including: micro computer control and readout, nineteen large area gas flow detectors, real-time background compensation, self-checking for system failures, and card reader identification and control. UNI's experience in operating the Hanford N Reactor, located in Richland, Washington, has shown the necessity of automatically monitoring plant personnel for contamination after they have passed through the procedurally controlled radiation zones. This final check ensures that each radiation zone worker has been properly checked before leaving company controlled boundaries. Investigation of the commercially available portal and hand and shoe monitors indicated that they did not have the sensitivity or sophistication required for UNI's application, therefore, a development program was initiated, resulting in the subject monitor. Field testing shows good sensitivity to personnel contamination with the majority of alarms showing contaminants on clothing, face and head areas. In general, the APCM has sensitivity comparable to portal survey instrumentation. The inherent stand-in, walk-on feature of the APCM not only makes it easy to use, but makes it difficult to bypass. (author)

  18. A 100-m Fabry–Pérot Cavity with Automatic Alignment Controls for Long-Term Observations of Earth’s Strain

    Directory of Open Access Journals (Sweden)

    Akiteru Takamori

    2014-08-01

    Full Text Available We have developed and built a highly accurate laser strainmeter for geophysical observations. It features the precise length measurement of a 100-m optical cavity with reference to a stable quantum standard. Unlike conventional laser strainmeters based on simple Michelson interferometers that require uninterrupted fringe counting to track the evolution of ground deformations, this instrument is able to determine the absolute length of a cavity at any given time. The instrument offers an advantage in covering a variety of geophysical events, ranging from instantaneous earthquakes to crustal deformations associated with tectonic strain changes that persist over time. An automatic alignment control and an autonomous relocking system have been developed to realize stable performance and maximize observation times. It was installed in a deep underground site at the Kamioka mine in Japan, and an effective resolution of 2 × (10⁻⁸–10⁻⁷) m was achieved. The regular tidal deformations and co-seismic strain changes were in good agreement with those from a theoretical model and a co-located conventional laser strainmeter. Only the new instrument was able to record large strain steps caused by a nearby large earthquake because of its capability of absolute length determination.

  19. Rapid identification of bacteria from positive blood culture bottles by MALDI-TOF MS following short-term incubation on solid media.

    Science.gov (United States)

    Altun, Osman; Botero-Kleiven, Silvia; Carlsson, Sarah; Ullberg, Måns; Özenci, Volkan

    2015-11-01

    Rapid identification of bacteria from blood cultures enables early initiation of appropriate antibiotic treatment in patients with bloodstream infections (BSI). The objective of the present study was to evaluate the use of matrix-assisted laser desorption ionization-time of flight (MALDI-TOF) MS after a short incubation on solid media for rapid identification of bacteria from positive blood culture bottles. MALDI-TOF MS was performed after 2.5 and 5.5 h plate incubation of samples from positive blood cultures. Identification scores with values ≥ 1.7 were accepted as successful identification if the results were confirmed by conventional methods. Conventional methods included MALDI-TOF MS, Vitek 2, and diverse biochemical and agglutination tests after overnight culture. In total, 515 positive blood cultures with monomicrobial bacterial growth representing one blood culture per patient were included in the study. There were 229/515 (44.5%) and 286/515 (55.5%) blood culture bottles with Gram-negative bacteria (GNB) and Gram-positive bacteria (GPB), respectively. MALDI-TOF MS following short-term culture could accurately identify 300/515 (58.3%) isolates at 2.5 h, GNB being identified in greater proportion (180/229; 78.6%) than GPB (120/286; 42.0%). In an additional 124/515 bottles (24.1%), identification was successful at 5.5 h, leading to accurate identification of bacteria from 424/515 (82.3%) blood cultures after short-term culture. Interestingly, 11/24 of the isolated anaerobic bacteria could be identified after 5.5 h. The present study demonstrates, in a large number of clinical samples, that MALDI-TOF MS following short-term culture on solid medium is a reliable and rapid method for identification of bacteria from blood culture bottles with monomicrobial bacterial growth.

  20. Liabilities identification and long-term management at national level (Spain)

    International Nuclear Information System (INIS)

    economic uncertainties in high level waste disposal systems is a constant line of work, and in this respect ENRESA attempts to incorporate the most adequate techniques for cost analysis in a probabilistic framework. Even though the economical calculations are revised every year, tempering forecasting inaccuracies, in the longer term, it is felt that problems might arise if there were a particularly significant time difference between the dates of plant decommissioning and the initiation of repository construction work. Under these conditions, any delay in constructing the definitive disposal facility might lead to not having sufficient financial resources available for its construction, operation or dismantling. The Spanish legislation includes no indications in this respect. Conceptually, various treatment hypotheses could be envisaged, such as legally increasing the period of fee collection, the creation of an extra fee during the last few years of collection, the obligation for the waste producers to contract additional guarantees in order to address uncovered risks, or acceptance by the State of responsibilities in relation to this issue. Obviously, the case of a surplus of money after the completion of waste disposal is also to be taken into account. In relation to this hypothesis, criteria and procedures for liquidation or distribution should have to be set out. It is considered that, at present, it is too soon to approach such a question

  1. SU-E-J-182: A Feasibility Study Evaluating Automatic Identification of Gross Tumor Volume for Breast Cancer Radiotherapy Using Dynamic Contrast-Enhanced MR Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C; Horton, J; Yin, F; Blitzblau, R; Palta, M; Chang, Z [Duke University Medical Center, Durham, NC (United States)]

    2014-06-01

    Purpose: To develop a computerized pharmacokinetic model-free Gross Tumor Volume (GTV) segmentation method based on dynamic contrast-enhanced MRI (DCE-MRI) data that can improve physician GTV contouring efficiency. Methods: 12 patients with biopsy-proven early stage breast cancer with post-contrast enhanced DCE-MRI images were analyzed in this study. A fuzzy c-means (FCM) clustering-based method was applied to segment 3D GTV from pre-operative DCE-MRI data. A region of interest (ROI) is selected by a clinician/physicist, and the normalized signal evolution curves were calculated by dividing the signal intensity enhancement value at each voxel by the pre-contrast signal intensity value at the corresponding voxel. Three semi-quantitative metrics were analyzed based on normalized signal evolution curves: initial Area Under signal evolution Curve (iAUC), Immediate Enhancement Ratio (IER), and Variance of Enhancement Slope (VES). The FCM algorithm was applied to partition ROI voxels into GTV voxels and non-GTV voxels using the three analyzed metrics. The partition map for the smaller cluster is then generated and binarized with an automatically calculated threshold. To reduce spurious structures resulting from background, a labeling operation was performed to keep the largest three-dimensional connected component as the identified target. Basic morphological operations including hole-filling and spur removal were utilized to improve the target smoothness. Each segmented GTV was compared to that drawn by experienced radiation oncologists. An agreement index was proposed to quantify the overlap between the GTVs identified using the two approaches, and a threshold value of 0.4 is regarded as acceptable. Results: The GTVs identified by the proposed method overlapped with the ones drawn by radiation oncologists in all cases, and in 10 out of 12 cases, the agreement indices were above the threshold of 0.4. Conclusion: The proposed automatic segmentation method was shown to
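
    The clustering step at the heart of this abstract can be sketched with a minimal fuzzy c-means routine applied to per-voxel feature vectors such as (iAUC, IER, VES). This is a generic sketch under stated assumptions only; the membership thresholding, connected-component labeling and morphological clean-up described above are not reproduced, and all names are hypothetical.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and per-sample memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u_new = d ** (-2.0 / (m - 1))
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.max(np.abs(u_new - u)) < tol:
            return centers, u_new
        u = u_new
    return centers, u

# Toy usage: 500 voxels described by three semi-quantitative DCE metrics
X = np.random.rand(500, 3)
centers, memberships = fuzzy_cmeans(X)
gtv_mask = memberships[:, 0] > 0.5   # whichever cluster is treated as tumour-like
```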

  2. Wavelet Entropy Detection and Automatic Identification of Submerged Seismic Signals

    Institute of Scientific and Technical Information of China (English)

    杨建平; 帅晓勇; 陶黄林

    2015-01-01

    In order to detect micro-seismic events before large earthquakes and protect important facilities such as large coal mines, oil fields and other mines, seismic data processing techniques for real-time processing, automatic recognition and extraction of submerged seismic onset points are urgently needed. A multi-resolution complexity parameter, the wavelet entropy, is obtained by combining the wavelet transform with information entropy theory; this parameter clearly reveals the change in the exploration data caused by the arrival of seismic waves even when the signal is submerged in noise. A simulation was carried out with measured exploration data and the monitoring performance was compared with that of a plain wavelet transform and a digital band-pass filter. The results show that the wavelet entropy parameter identifies the micro-seismic onset point automatically and more reliably.
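
    A wavelet-entropy detector of the kind described can be sketched in a few lines with PyWavelets: compute the Shannon entropy of the relative wavelet energy in a sliding window and look for an abrupt drop where an ordered seismic arrival emerges from noise. The wavelet, window length and onset rule below are illustrative assumptions rather than the parameters used in the paper.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_entropy(window, wavelet="db4", level=4):
    """Shannon entropy of the relative wavelet energy across decomposition levels."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    energy = np.array([np.sum(c ** 2) for c in coeffs])
    p = energy / (energy.sum() + 1e-12)
    return -np.sum(p * np.log(p + 1e-12))

def onset_index(trace, win=256, step=32):
    """Return the window start where the entropy drops most steeply (candidate onset)."""
    starts = list(range(0, len(trace) - win, step))
    we = np.array([wavelet_entropy(trace[i:i + win]) for i in starts])
    return starts[int(np.argmin(np.diff(we)))]
```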

  3. Automatic identification and real-time tracking based on multiple sensors for low-altitude moving targets

    Institute of Scientific and Technical Information of China (English)

    张作楠; 刘国栋; 娄建

    2011-01-01

    This paper discusses a method for low-altitude moving target detection and tracking in a TV tracking system. In order to increase the capability of automatic tracking and anti-interference, a multi-sensor integrated automatic identification and real-time servo algorithm is proposed, based on a variety of sensors and electronic measuring devices such as acoustic sensors, image sensors and a laser range finder. First, the target is coarsely located by passive acoustic localization; then the dynamic and static image features, together with the sound source characteristics of the target, are used for target feature extraction, classification and recognition. According to the video tracking and trajectory prediction algorithm, the resulting target error signal is used to drive the servo mechanism for precise tracking. Experiments show that the algorithm is simple and effective, achieves the required precision and reliability, and validates the feasibility of using multiple sensors in a fully automatic intelligent tracking system.

  4. 16S rRNA Gene Sequence-Based Identification of Bacteria in Automatically Incubated Blood Culture Materials from Tropical Sub-Saharan Africa.

    Directory of Open Access Journals (Sweden)

    Hagen Frickmann

    Full Text Available The quality of microbiological diagnostic procedures depends on pre-analytic conditions. We compared the results of 16S rRNA gene PCR and sequencing from automatically incubated blood culture materials from tropical Ghana with the results of cultural growth after automated incubation. Real-time 16S rRNA gene PCR and subsequent sequencing were applied to 1500 retained blood culture samples of Ghanaian patients admitted to a hospital with an unknown febrile illness, after enrichment by automated culture. Out of all 1500 samples, 191 were culture-positive and 98 isolates were considered etiologically relevant. Out of the 191 culture-positive samples, 16S rRNA gene PCR and sequencing led to concordant results in 65 cases at the species level and an additional 62 cases at the genus level. PCR was positive in a further 360 out of 1309 culture-negative samples, the sequencing results of which suggested etiologically relevant pathogen detections in 62 instances, detections of uncertain relevance in 50 instances, and DNA contamination due to sample preparation in 248 instances. In two instances, PCR failed to detect contaminants from the skin flora that were culturally detectable. Pre-analytical errors caused many Enterobacteriaceae to be missed by culture. Potentially correctable pre-analytical conditions, and not the fastidious nature of the bacteria, caused most of the discrepancies. Although 16S rRNA gene PCR and sequencing in addition to culture led to an increase in detections of presumably etiologically relevant blood culture pathogens, the application of this procedure to samples from the tropics was hampered by a high contamination rate. Careful interpretation of diagnostic results is required.

  5. Automatic identification of origins of left and right coronary arteries in CT angiography for coronary arterial tree tracking and plaque detection

    Science.gov (United States)

    Zhou, Chuan; Chan, Heang-Ping; Chightai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Agarwal, Prachi; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    Automatic tracking and segmentation of the coronary arterial tree is the basic step for computer-aided analysis of coronary disease. The goal of this study is to develop an automated method to identify the origins of the left coronary artery (LCA) and right coronary artery (RCA) as the seed points for the tracking of the coronary arterial trees. The heart region and the contrast-filled structures in the heart region are first extracted using morphological operations and EM estimation. To identify the ascending aorta, we developed a new multiscale aorta search (MAS) method in which the aorta is identified based on a priori knowledge of its circular shape. Because the shape of the ascending aorta in the cCTA axial view is roughly a circle but its size can vary over a wide range for different patients, multiscale circular-shape priors are used to search for the best matching circular object in each CT slice, guided by the Hausdorff distance (HD) as the matching indicator. The location of the aorta is identified by finding the minimum HD in the heart region over the set of multiscale circular priors. An adaptive region growing method is then used to extend the initially identified aorta down to the aortic valves. The origins at the aortic sinus are finally identified by a morphological gray level top-hat operation applied to the region-grown aorta with a morphological structuring element designed for coronary arteries. For the 40 test cases, the aorta was correctly identified in 38 cases (95%). The aorta could be grown to the aortic root in 36 cases, and 36 LCA origins and 34 RCA origins were identified within 10 mm of the locations marked by radiologists.
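
    The multiscale circular matching step can be illustrated with SciPy's directed Hausdorff distance: generate circle priors of several radii around candidate centres and keep the candidate whose prior is closest to the detected edge points. This is a simplified 2-D sketch of the matching idea only; the morphological preprocessing, region growing and top-hat steps are not shown, and all names are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def circle_points(cx, cy, r, n=72):
    a = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.column_stack([cx + r * np.cos(a), cy + r * np.sin(a)])

def best_circular_match(edge_points, candidate_centers, radii=range(10, 31, 2)):
    """Score every (center, radius) prior by the symmetric Hausdorff distance
    to the edge points and return the best match as the aorta candidate."""
    best = (np.inf, None)
    for cx, cy in candidate_centers:
        for r in radii:
            prior = circle_points(cx, cy, r)
            d = max(directed_hausdorff(prior, edge_points)[0],
                    directed_hausdorff(edge_points, prior)[0])
            if d < best[0]:
                best = (d, (cx, cy, r))
    return best  # (hausdorff_distance, (cx, cy, radius))
```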

  6. Research on automatic military intelligence term extraction using a CRF model

    Institute of Scientific and Technical Information of China (English)

    贾美英; 杨炳儒; 郑德权; 杨靖

    2009-01-01

    This paper introduces a Conditional Random Field (CRF) based method for term extraction intended for use in military intelligence processing. The method treats domain term extraction as a sequence labeling problem, quantifies the distributional characteristics of domain terms and uses them as training features, and leverages the CRF toolkit to train a domain term feature template, which is then used for domain term extraction. In the experiments, the training material consists of news data from the military channel of Sohu Networks, and the test material consists of all articles from issues 1 to 8 (2007) of the magazine Modern Military. The experimental results are positive, with a precision of 73.24%, a recall of 69.57%, and an F-measure of 71.36%, showing that the method is simple, feasible, and applicable to other domains.
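
    To make the sequence-labelling formulation concrete, the sketch below trains a linear-chain CRF with BIO tags using the sklearn-crfsuite package (assumed to be available; any CRF toolkit would do). The surface features, the tiny training example and all names are placeholders standing in for the distribution-based features described in the paper.

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Generic per-token features standing in for the paper's term-distribution features."""
    w = sent[i]
    return {"word": w.lower(), "is_title": w.istitle(), "prefix2": w[:2], "suffix2": w[-2:],
            "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
            "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>"}

def to_features(sentences):
    return [[token_features(s, i) for i in range(len(s))] for s in sentences]

# Toy training data: tokens with B/I/O labels marking domain terms
train_sents = [["enemy", "radar", "installation", "was", "observed"]]
train_tags = [["O", "B-TERM", "I-TERM", "O", "O"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(to_features(train_sents), train_tags)
print(crf.predict(to_features([["radar", "installation", "destroyed"]])))
```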

  7. Study on Automatic English Synonym Term Discovery from the Web and the System Implementation

    Institute of Scientific and Technical Information of China (English)

    刘伟; 黄小江; 万小军; 王星

    2012-01-01

    There are extremely abundant synonym term resources on the Web. Three effective approaches for acquiring them automatically are proposed in this paper: syntactic pattern learning, online synonym dictionary extraction, and static synonym category crawling. On this basis, a prototype system, Web Synonym Term Searcher, has been implemented. The experimental results show that automatically obtaining synonym terms from the Web is a very promising approach.

  8. Checker pattern improvement and fully-automatic identification for camera calibration

    Institute of Scientific and Technical Information of China (English)

    张浩鹏; 王宗义; 吴攀超; 林欣堂

    2012-01-01

    In order to overcome the shortcomings that in camera calibration the user needs to give additional information about the calibration pattern, or that fully-automatic identification algorithms cannot detect calibration points under significant occlusion, uneven illumination, extreme viewing angles and lens distortion, an improved checker pattern based on fiducial markers is designed, and the corresponding fully-automatic identification algorithm for the calibration points is proposed. The new camera calibration pattern replaces the black and white squares of the traditional checker pattern with fiducial markers, so the fully-automatic identification algorithm can locate the positions of the markers. Using the a priori knowledge that the markers are arranged sequentially in the calibration pattern in increasing order of marker ID, the positions of missing calibration points are estimated. To improve the initial estimates of the missing calibration points in the image, the algorithm estimates the radial distortion parameters, thereby overcoming the influence of distortion on identification. To improve the localization accuracy of the calibration points, a high-precision saddle point detector is used, so that the localization accuracy is better than 0.05 pixel. To check the validity of the saddle points, two filtering criteria are proposed, yielding the final set of valid calibration points. The identification algorithm is effective and requires no parameters. Experimental results show that, for the same camera and background, camera calibration with the calibration points obtained by the improved checker pattern and its identification algorithm reduces the reprojection error by 70% compared with ARTag.

  9. Identification effects of an automatic microbial analysis system on Brucella genus and species

    Institute of Scientific and Technical Information of China (English)

    肖春霞; 赵鸿雁; 侯临平; 荣蓉; 刘熹; 赵赤鸿; 朴东日; 赵娜; 姜海

    2015-01-01

    Objective: To identify and analyse the biochemical characteristics of Brucella and to evaluate the clinical application of the VITEK2 COMPACT automatic microbial identification analyzer. Methods: Seventeen standard strains and 121 experimental strains were obtained from the Brucella strain bank of the Institute for Infectious Disease Prevention and Control, Chinese Center for Disease Control and Prevention. The experimental strains were collected from 26 provinces (municipalities and autonomous regions) between 1957 and 2014, including strains from patients and from goats, antelope, sheep, cattle, and pigs. Reference standard strains and experimental strains were analyzed using the GN identification card on the VITEK2 COMPACT automatic microbial identification analyzer, and biochemical identification of the Brucella strains was carried out. Strains with abnormal identification results were rechecked by traditional test methods, including the oxidase test, urease test, semisolid agar test, hydrogen sulfide production test, basic fuchsin susceptibility test, phage lysis test, and A/M monospecific serum agglutination test. Results: For the 138 strains of Brucella analyzed by the automatic microbial identification system, the main identification indicators at the genus level were L-proline arylamidase (ProA), tyrosine arylamidase (TyrA), urease (URE), glycine arylamidase (GlyA), L-lactate alkalinisation (1LATK), and ELLMAN (ELLM). Compared with the system reference values, the overall similarity rate of the biochemical reactions was 97.99% (135.23/138), being 96.71% (16.44/17) for the standard strains and 98.17% (118.79/121) for the experimental strains; the time required for strain identification was 6.1-7.7 h, 7.3 h for the standard strains and 6.9 h for the experimental strains. The identification indicators for distinguishing Brucella species were ProA, TyrA, URE, and GlyA; for distinguishing Brucella melitensis, ELLM; for distinguishing Brucella abortus, 1LATK; and for distinguishing Brucella suis...

  10. UMLS-based automatic image indexing.

    Science.gov (United States)

    Sneiderman, Charles Alan; Demner-Fushman, Dina; Fung, Kin Wah; Bray, Bruce

    2008-01-01

    To date, most accurate image retrieval techniques rely on textual descriptions of images. Our goal is to automatically generate indexing terms for an image extracted from a biomedical article by identifying Unified Medical Language System (UMLS) concepts in image caption and its discussion in the text. In a pilot evaluation of the suggested image indexing method by five physicians, a third of the automatically identified index terms were found suitable for indexing.

  11. 21 CFR 870.5925 - Automatic rotating tourniquet.

    Science.gov (United States)

    2010-04-01

    § 870.5925 Automatic rotating tourniquet. (a) Identification. An automatic rotating tourniquet is a device that prevents...

  12. Automatic determination of total alkalinity based on image identification technology

    Institute of Scientific and Technical Information of China (English)

    秦玉华; 王东兵; 张海燕; 欧佳; 徐志明

    2011-01-01

    A new automatic measurement method and device for the total alkalinity of water based on image identification technology are proposed in this paper. Based on the principle of acid-base titration, hydrochloric acid is used as the titrant and bromocresol green as the indicator; the equivalence point of the titration is identified from the jump in the solution's R, G and B values, and the total alkalinity of the water is thereby measured. Experimental results show that the linear range of alkalinity detection is 0.2-40 mmol/L, with a relative standard deviation of 0.43% and a spike recovery of 96.4%-102.6%. The method is simple and accurate when applied to the total alkalinity measurement of industrial circulating cooling water, and enables automatic measurement of alkalinity.
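
    The endpoint rule can be illustrated by watching the mean colour of the solution after each titrant increment and flagging the largest jump. The Euclidean RGB distance and the toy numbers below are assumptions standing in for whatever colour metric the actual device uses.

```python
import numpy as np

def titration_endpoint(rgb_per_step):
    """Index of the titration step with the largest jump in mean (R, G, B) colour."""
    rgb = np.asarray(rgb_per_step, dtype=float)
    jumps = np.linalg.norm(np.diff(rgb, axis=0), axis=1)
    return int(np.argmax(jumps)) + 1

# Toy example: bromocresol green shifting from blue to yellow near step 12
steps = [(40, 70, 160)] * 12 + [(180, 170, 60)] * 5
endpoint = titration_endpoint(steps)
print(endpoint)  # alkalinity then follows from the titrant volume at this step
```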

  13. Automatic Identification Algorithm for Road Bottlenecks Based on Detector Data

    Institute of Scientific and Technical Information of China (English)

    弓晋丽; 彭贤武

    2013-01-01

    To study traffic congestion at road bottlenecks and to understand the distribution and variation patterns of the recurrent congestion that bottlenecks cause, a new automatic identification algorithm for road bottlenecks is proposed. Based on historical traffic data from dual-loop detectors on the road, the algorithm qualitatively classifies the traffic state as either free-flowing or congested and, following the principle of bottleneck congestion, identifies the locations of road bottlenecks while also determining the duration and spatial extent of the congestion they cause. The output of the algorithm includes the bottleneck location together with the congestion duration and the spatial range of influence. The effectiveness and practicality of the algorithm are verified using ten days of loop detector data from the east side of the Shanghai North-South elevated road.
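
    A minimal version of the congestion-labelling step might look like the sketch below: each detector interval is marked congested when occupancy exceeds a threshold, and consecutive congested intervals are merged into runs whose start and length give the congestion duration. The occupancy criterion and threshold are illustrative stand-ins for the state-classification rule used in the paper.

```python
import numpy as np

def congested_runs(occupancy, threshold=0.25):
    """Return (start_index, length) for each contiguous run of congested intervals."""
    congested = np.asarray(occupancy) > threshold
    runs, start = [], None
    for i, c in enumerate(congested):
        if c and start is None:
            start = i
        elif not c and start is not None:
            runs.append((start, i - start))
            start = None
    if start is not None:
        runs.append((start, len(congested) - start))
    return runs

# A bottleneck candidate is a detector with long congested runs while the detector
# immediately downstream stays uncongested over the same intervals.
print(congested_runs([0.1, 0.1, 0.3, 0.4, 0.35, 0.1, 0.1, 0.3]))
```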

  14. Automatic Performance Debugging of SPMD Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Jianfeng; Tu, Bibo; Meng, Dan

    2010-01-01

    Automatic performance debugging of parallel applications usually involves two steps: automatic detection of performance bottlenecks and uncovering their root causes for performance optimization. Previous work fails to resolve this challenging issue in several ways: first, several previous efforts automate analysis processes, but present the results in a confined way that only identifies performance problems with a priori knowledge; second, several tools take exploratory or confirmatory data analysis to automatically discover relevant performance data relationships. However, these efforts do not focus on locating performance bottlenecks or uncovering their root causes. In this paper, we design and implement an innovative system, AutoAnalyzer, to automatically debug the performance problems of single-program multiple-data (SPMD) parallel programs. Our system is unique in terms of two dimensions: first, without any a priori knowledge, we automatically locate bottlenecks and uncover their root causes for performance o...

  15. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpaolo;

    2003-01-01

    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we demonstrate that these techniques suffice for identifying a number of authentication flaws in symmetric key protocols such as Needham-Schroeder, Otway-Rees, Yahalom and Andrew Secure RPC.

  16. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  17. Automatic input rectification

    OpenAIRE

    Long, Fan; Ganesh, Vijay; Carbin, Michael James; Sidiroglou, Stelios; Rinard, Martin

    2012-01-01

    We present a novel technique, automatic input rectification, and a prototype implementation, SOAP. SOAP learns a set of constraints characterizing typical inputs that an application is highly likely to process correctly. When given an atypical input that does not satisfy these constraints, SOAP automatically rectifies the input (i.e., changes the input so that it satisfies the learned constraints). The goal is to automatically convert potentially dangerous inputs into typical inputs that the ...

  18. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (comp.)

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equation, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.

  19. Experimental research on the nozzle device of an automatic identification separator for mixed waste plastics

    Institute of Scientific and Technical Information of China (English)

    胡彪; 王树桐; 李健毅; 于立云; 汤桂兰; 张毅民

    2013-01-01

    To determine the optimal nozzle shape for an automatic identification separator for mixed waste plastics, the analysis first shows that the output pressure is the key factor in sorting the plastics, and the minimum required output pressure is calculated. The parameters influencing the output pressure are then discussed; calculation and simulation give the degree to which the output pressure is attenuated relative to the input pressure, from which the input pressure is preliminarily estimated. Experiments then measure, at the same input pressure, how the output pressure varies with nozzle diameter and tube length, and the correlation-coefficient method is used to plot the relation curves of diameter and tube length versus output pressure. The experimental data are analysed by curve fitting to select the best nozzle parameters. Finally, the jet range of the chosen nozzle is measured and a specific nozzle layout scheme is given.
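
    The final step described above, fitting the measured output pressures and picking the best nozzle parameters, could look roughly like the following sketch; the attenuation model, the sample measurements and the candidate diameters and lengths are all hypothetical stand-ins for the paper's data.

      import numpy as np
      from scipy.optimize import curve_fit

      def model(X, a, b, c):
          d, L = X
          return a * d**b * np.exp(-c * L)   # assumed attenuation law, not the paper's

      d = np.array([1.0, 1.5, 2.0, 1.0, 1.5, 2.0])             # nozzle diameter, mm (hypothetical)
      L = np.array([50, 50, 50, 100, 100, 100])                 # tube length, mm (hypothetical)
      p_out = np.array([0.42, 0.55, 0.61, 0.35, 0.48, 0.54])    # measured output pressure, MPa (hypothetical)

      params, _ = curve_fit(model, (d, L), p_out, p0=(0.5, 0.5, 0.01))
      candidates = [(di, Li) for di in (1.0, 1.5, 2.0) for Li in (50, 100)]
      best = max(candidates, key=lambda x: model(x, *params))   # highest predicted output pressure
      print("fitted parameters:", params, "best (diameter, length):", best)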

  20. Study on Ground Automatic Identification Technology for Intelligent Vehicle Based on Vision Sensor

    Institute of Scientific and Technical Information of China (English)

    崔根群; 余建明; 赵娴; 赵丛琳

    2011-01-01

    The ground automatic identification technology for intelligent vehicles takes the Leobot-Edu autonomous vehicle as the test platform and uses a DH-HV2003UC-T vision sensor to collect image information for five common road surfaces (cobblestone, concrete, dirt, grass and brick). The MATLAB image processing module is used to perform coding compression, restoration and reconstruction, smoothing, sharpening, enhancement, feature extraction and other related processing, and the MATLAB BP neural network module is then applied for pattern recognition. Analysis of the recognition results shows that the error of the network training objective function is 20% and that the road-surface recognition rate reaches the intended requirement of the system, so the approach can be widely applied to intelligent vehicles, mobile robots and related fields.
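
    As a rough illustration of the classification stage only (not the MATLAB pipeline used in the paper), the sketch below trains a small multilayer perceptron, standing in for the BP network, on placeholder texture features for the five surface classes.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X = rng.random((100, 16))            # 16-dimensional texture feature vectors (placeholder data)
      y = rng.integers(0, 5, size=100)     # 0..4: cobblestone, concrete, dirt, grass, brick

      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, y)
      print("training accuracy on the toy data:", clf.score(X, y))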

  1. MassToMI - a Mathematica package for an automatic Mass Insertion expansion

    CERN Document Server

    Rosiek, Janusz

    2015-01-01

    We present a Mathematica package designed to automatize the expansion of QFT transition amplitudes calculated in the mass eigenstates basis (i.e. expressed in terms of physical masses and mixing matrices) into series of "mass insertions", defined as off-diagonal entries of mass matrices in Lagrangian before diagonalization and identification of the physical states. The algorithm implemented in this package is based on the general "Flavor Expansion Theorem" proven in Ref.~\\cite{FET}. The supplied routines are able to automatically analyze the structure of the amplitude, identify the parts which could be expanded and expand them to any required order. They are capable of dealing with amplitudes depending on both scalar or vector (Hermitian) and Dirac or Majorana fermion (complex) mass matrices. The package can be downloaded from the address www.fuw.edu.pl/masstomi.

  2. Fast automatic analysis of antenatal dexamethasone on micro-seizure activity in the EEG

    International Nuclear Information System (INIS)

    Full text: In this work we develop an automatic scheme for studying the effect of antenatal dexamethasone on EEG activity. To do so, an FFT (Fast Fourier Transform) based detector was designed and applied to the EEG recordings obtained from two groups of fetal sheep. Both groups received two injections with a time delay of 24 h between them; however, the applied medicine was different for each group (Dex and saline). The detector developed was used to automatically identify and classify micro-seizures that occurred in the frequency bands corresponding to the EEG transients known as slow waves (2.5–14 Hz). For each second of the data recordings the spectrum was computed and a rise of the energy in each predefined frequency band was counted when the energy level exceeded a predefined corresponding threshold level (where the threshold level was obtained from the long-term average of the spectral points in each band). Our results demonstrate that it was possible to automatically count the micro-seizures for the three different bands in a time-effective manner. It was found that the number of transients did not strongly depend on the nature of the injected medicine, which was consistent with the results manually obtained by an EEG expert. In conclusion, the automatic detection scheme presented here would allow for rapid micro-seizure event identification in hours of highly sampled EEG data, thus providing a valuable time-saving device.
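
    The detector described above lends itself to a compact implementation; the sketch below computes a per-second spectrum, sums the energy in each band, and counts an event whenever the band energy exceeds a multiple of its long-term average. The band edges, threshold factor and synthetic signal are assumptions, not the study's settings.

      import numpy as np

      def count_micro_seizures(eeg, fs, bands, factor=3.0):
          n = int(fs)                                            # one-second epochs
          epochs = [eeg[i:i + n] for i in range(0, len(eeg) - n + 1, n)]
          freqs = np.fft.rfftfreq(n, d=1.0 / fs)
          band_energy = {name: [] for name in bands}
          for ep in epochs:
              power = np.abs(np.fft.rfft(ep)) ** 2
              for name, (lo, hi) in bands.items():
                  band_energy[name].append(power[(freqs >= lo) & (freqs < hi)].sum())
          counts = {}
          for name, energies in band_energy.items():
              energies = np.asarray(energies)
              # threshold derived from the long-term average energy of the band
              counts[name] = int((energies > factor * energies.mean()).sum())
          return counts

      fs = 256
      eeg = np.random.default_rng(1).standard_normal(fs * 600)   # 10 minutes of synthetic signal
      bands = {"band1": (0.5, 2.5), "band2": (2.5, 7.0), "band3": (7.0, 14.0)}  # assumed edges
      print(count_micro_seizures(eeg, fs, bands))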

  3. Automatic stereoscopic system for person recognition

    Science.gov (United States)

    Murynin, Alexander B.; Matveev, Ivan A.; Kuznetsov, Victor D.

    1999-06-01

    A biometric access control system based on identification of human face is presented. The system developed performs remote measurements of the necessary face features. Two different scenarios of the system behavior are implemented. The first one assumes the verification of personal data entered by visitor from console using keyboard or card reader. The system functions as an automatic checkpoint, that strictly controls access of different visitors. The other scenario makes it possible to identify visitors without any person identifier or pass. Only person biometrics are used to identify the visitor. The recognition system automatically finds necessary identification information preliminary stored in the database. Two laboratory models of recognition system were developed. The models are designed to use different information types and sources. In addition to stereoscopic images inputted to computer from cameras the models can use voice data and some person physical characteristics such as person's height, measured by imaging system.

  4. Automatic basal slice detection for cardiac analysis

    Science.gov (United States)

    Paknezhad, Mahsa; Marchesseau, Stephanie; Brown, Michael S.

    2016-03-01

    Identification of the basal slice in cardiac imaging is a key step to measuring the ejection fraction (EF) of the left ventricle (LV). Despite research on cardiac segmentation, basal slice identification is routinely performed manually. Manual identification, however, has been shown to have high inter-observer variability, with a variation of the EF by up to 8%. Therefore, an automatic way of identifying the basal slice is still required. Prior published methods operate by automatically tracking the mitral valve points from the long-axis view of the LV. These approaches assumed that the basal slice is the first short-axis slice below the mitral valve. However, guidelines published in 2013 by the society for cardiovascular magnetic resonance indicate that the basal slice is the uppermost short-axis slice with more than 50% myocardium surrounding the blood cavity. Consequently, these existing methods are at times identifying the incorrect short-axis slice. Correct identification of the basal slice under these guidelines is challenging due to the poor image quality and blood movement during image acquisition. This paper proposes an automatic tool that focuses on the two-chamber slice to find the basal slice. To this end, an active shape model is trained to automatically segment the two-chamber view for 51 samples using the leave-one-out strategy. The basal slice was detected using temporal binary profiles created for each short-axis slice from the segmented two-chamber slice. From the 51 successfully tested samples, 92% and 84% of detection results were accurate at the end-systolic and the end-diastolic phases of the cardiac cycle, respectively.

  5. Automatic polar ice thickness estimation from SAR imagery

    Science.gov (United States)

    Rahnemoonfar, Maryam; Yari, Masoud; Fox, Geoffrey C.

    2016-05-01

    Global warming has caused serious damage to our environment in recent years. Accelerated loss of ice from Greenland and Antarctica has been observed in recent decades. The melting of polar ice sheets and mountain glaciers has a considerable influence on sea level rise and altering ocean currents, potentially leading to the flooding of the coastal regions and putting millions of people around the world at risk. Synthetic aperture radar (SAR) systems are able to provide relevant information about subsurface structure of polar ice sheets. Manual layer identification is prohibitively tedious and expensive and is not practical for regular, longterm ice-sheet monitoring. Automatic layer finding in noisy radar images is quite challenging due to huge amount of noise, limited resolution and variations in ice layers and bedrock. Here we propose an approach which automatically detects ice surface and bedrock boundaries using distance regularized level set evolution. In this approach the complex topology of ice and bedrock boundary layers can be detected simultaneously by evolving an initial curve in radar imagery. Using a distance regularized term, the regularity of the level set function is intrinsically maintained that solves the reinitialization issues arising from conventional level set approaches. The results are evaluated on a large dataset of airborne radar imagery collected during IceBridge mission over Antarctica and Greenland and show promising results in respect to hand-labeled ground truth.

  6. Second-Language Learners' Identification of Target-Language Phonemes: A Short-Term Phonetic Training Study

    Science.gov (United States)

    Cebrian, Juli; Carlet, Angelica

    2014-01-01

    This study examined the effect of short-term high-variability phonetic training on the perception of English /b/, /v/, /d/, /ð/, /ae/, /? /, /i/, and /i/ by Catalan/Spanish bilinguals learning English as a foreign language. Sixteen English-major undergraduates were tested before and after undergoing a four-session perceptual training program…

  7. Uterine electromyography for identification of first-stage labor arrest in term nulliparous women with spontaneous onset of labor

    NARCIS (Netherlands)

    Vasak, Blanka; Graatsma, Elisabeth M.; Hekman-Drost, Elske; Eijkemans, Marinus J.; van Leeuwen, Jules H. Schagen; Visser, Gerard H.; Jacod, Benoit C.

    2013-01-01

    OBJECTIVE: We sought to study whether uterine electromyography (EMG) can identify inefficient contractions leading to first-stage labor arrest followed by cesarean delivery in term nulliparous women with spontaneous onset of labor. STUDY DESIGN: EMG was recorded during spontaneous labor in 119 nulli

  8. Research on automatic Chinese-English term extraction based on the order and position features of words

    Institute of Scientific and Technical Information of China (English)

    张莉; 刘昱显

    2015-01-01

    With the explosion of information in modern society, knowledge spreads across many domains and many languages, which creates serious obstacles to understanding, retrieval and the exchange of ideas. Bilingual terminology is an important language resource for natural language processing tasks such as machine translation, data mining and bilingual information retrieval, but collecting it is challenging and time-consuming because the texts to be aligned are written in languages as different as Chinese and English. Bilingual terminology extraction and alignment has therefore attracted increasing attention in information processing; it plays an important role in cross-language retrieval, bilingual dictionary construction and machine translation research, supports the building of translation memories for machine-aided translation, and can improve machine translation quality when bilingual terminology information is added. We propose an automatic Chinese-English term alignment algorithm based on the order and position features of words. The algorithm improves the two-step strategy for extracting bilingual terms by integrating the word order and position information used in phrase-based machine translation. The experimental corpus consists of the Chinese and English titles and abstracts of CSSCI journals from 1998 to 2012, comprising 37,206 complete English titles and abstracts with about 1.63 million Chinese words and 1.91 million English words. The algorithm improves the accuracy of term alignment, especially when the translation probability of the terms is low ...

  9. Automatic query formulations in information retrieval.

    Science.gov (United States)

    Salton, G; Buckley, C; Fox, E A

    1983-07-01

    Modern information retrieval systems are designed to supply relevant information in response to requests received from the user population. In most retrieval environments the search requests consist of keywords, or index terms, interrelated by appropriate Boolean operators. Since it is difficult for untrained users to generate effective Boolean search requests, trained search intermediaries are normally used to translate original statements of user need into useful Boolean search formulations. Methods are introduced in this study which reduce the role of the search intermediaries by making it possible to generate Boolean search formulations completely automatically from natural language statements provided by the system patrons. Frequency considerations are used automatically to generate appropriate term combinations as well as Boolean connectives relating the terms. Methods are covered to produce automatic query formulations both in a standard Boolean logic system, as well as in an extended Boolean system in which the strict interpretation of the connectives is relaxed. Experimental results are supplied to evaluate the effectiveness of the automatic query formulation process, and methods are described for applying the automatic query formulation process in practice. PMID:10299297
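
    A toy sketch of frequency-based Boolean query construction in the spirit of this record: rarer (more discriminating) request terms are joined by AND while more frequent ones are grouped into an OR clause. The cut-off rule and the sample frequencies are illustrative assumptions, not Salton's exact procedure.

      def build_boolean_query(request_terms, collection_freq, cutoff=1000):
          # terms rarer than the cutoff are treated as discriminating and AND-ed together
          rare = [t for t in request_terms if collection_freq.get(t, 0) < cutoff]
          frequent = [t for t in request_terms if t not in rare]
          clauses = list(rare)
          if frequent:
              clauses.append("(" + " OR ".join(frequent) + ")")
          return " AND ".join(clauses)

      freqs = {"retrieval": 5200, "boolean": 800, "automatic": 4100, "formulation": 300}
      print(build_boolean_query(["automatic", "boolean", "formulation", "retrieval"], freqs))
      # boolean AND formulation AND (automatic OR retrieval)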

  10. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming, Volume 2 is a collection of papers that discusses the controversy about the suitability of COBOL as a common business oriented language, and the development of different common languages for scientific computation. A couple of papers describes the use of the Genie system in numerical calculation and analyzes Mercury autocode in terms of a phrase structure language, such as in the source language, target language, the order structure of ATLAS, and the meta-syntactical language of the assembly program. Other papers explain interference or an ""intermediate

  11. Automatic Payroll Deposit System.

    Science.gov (United States)

    Davidson, D. B.

    1979-01-01

    The Automatic Payroll Deposit System in Yakima, Washington's Public School District No. 7, directly transmits each employee's salary amount for each pay period to a bank or other financial institution. (Author/MLF)

  12. Automatic quantitative analysis of morphology of apoptotic HL-60 cells

    OpenAIRE

    Liu, Yahui; Lin, Wang; Yang, Xu; Liang, Weizi; Zhang, Jun; Meng, Maobin; Rice, John R.; Sa, Yu; Feng, Yuanming

    2014-01-01

    Morphological identification is a widespread procedure to assess the presence of apoptosis by visual inspection of the morphological characteristics or the fluorescence images. The procedure is lengthy and results are observer dependent. A quantitative automatic analysis is objective and would greatly help the routine work. We developed an image processing and segmentation method which combined the Otsu thresholding and morphological operators for apoptosis study. An automatic determina...

  13. Automatic Identification of Digital Labels in Assembly Drawings of Mechanical Parts Based on Computer Vision Technology

    Institute of Scientific and Technical Information of China (English)

    江能兴

    2011-01-01

    To identify the numeric characters in assembly drawings of mechanical parts accurately and quickly, a template matching method based on the open-source computer vision library OpenCV is proposed. The paper introduces the basic framework of OpenCV and its typical application areas, and presents a comparative analysis of using the OpenCV library to automatically identify the numeric characters in mechanical part assembly drawings. This work is of great significance for improving the current practice of manually reading digit labels from mechanical drawings.
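
    A minimal sketch of the template-matching idea using OpenCV's Python bindings is shown below; the digit templates, file names and correlation threshold are placeholders, not values from the paper.

      import cv2
      import numpy as np

      def find_digit_labels(drawing, templates, threshold=0.8):
          """drawing: grayscale image; templates: dict mapping digit -> grayscale template image."""
          hits = []
          for digit, template in templates.items():
              result = cv2.matchTemplate(drawing, template, cv2.TM_CCOEFF_NORMED)
              ys, xs = np.where(result >= threshold)        # keep only strong correlations
              hits.extend((digit, int(x), int(y)) for x, y in zip(xs, ys))
          return hits

      # usage (file names are placeholders):
      # drawing = cv2.imread("assembly_drawing.png", cv2.IMREAD_GRAYSCALE)
      # templates = {d: cv2.imread(f"templates/{d}.png", cv2.IMREAD_GRAYSCALE) for d in range(10)}
      # print(find_digit_labels(drawing, templates))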

  14. Automatic Identification Method of Micro-blog Messages Containing Geographical Events

    Institute of Scientific and Technical Information of China (English)

    仇培元; 陆锋; 张恒才; 余丽

    2016-01-01

    Micro-blog texts contain rich geographical event information that can compensate for the shortcomings of traditional fixed-point monitoring and improve the quality of emergency response; identifying the micro-blog messages that contain such information is the prerequisite for fully exploiting this data source. Trigger-based methods and supervised machine learning are commonly used to identify event-related texts, and supervised learning generally performs better on unrestricted text. However, because large-scale annotated corpora are generally lacking, supervised learning cannot be applied directly to recognize micro-blog texts containing geographical event information. This paper therefore proposes an automatic identification method for micro-blog messages containing geographical events that strengthens recognition with rapidly acquired corpus resources. Exploiting the ability of topic models to extract the set of topics in a document collection, candidate corpus texts are filtered by topic so that a geographical event corpus is built automatically. At the same time, a distributed-representation word vector model is introduced into the event relevance computation, and the semantic information carried by the word vectors enriches the context of short micro-blog texts and further improves the identification of event messages. Experiments with Sina Weibo as the data source show that the proposed method reaches an F-1 of 71.41% in identifying message texts from event micro-blog topics, 10.79% higher than the classical SVM-based supervised learning approach, and achieves 60% identification accuracy on a five-million-message dataset simulating the real micro-blog environment.
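
    Of the two ingredients described above, the word-vector relevance computation is the easier to sketch; the toy example below trains word vectors on a tiny corpus and scores each message by its mean similarity to a set of event seed words. The corpus, seed words and scoring rule are illustrative assumptions, and the topic-model filtering step is not shown.

      import numpy as np
      from gensim.models import Word2Vec

      corpus = [["flood", "road", "closed", "downtown"],
                ["great", "movie", "tonight"],
                ["earthquake", "felt", "near", "station"]]
      seed_words = ["flood", "earthquake", "accident"]

      w2v = Word2Vec(corpus, vector_size=50, min_count=1, seed=0)   # toy vectors, not meaningful

      def event_score(tokens):
          pairs = [(t, s) for t in tokens for s in seed_words
                   if t in w2v.wv and s in w2v.wv]
          if not pairs:
              return 0.0
          return float(np.mean([w2v.wv.similarity(t, s) for t, s in pairs]))

      for msg in corpus:
          print(msg, round(event_score(msg), 3))    # higher scores suggest event-related messages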

  15. Automatically predicting mood from expressed emotions

    NARCIS (Netherlands)

    Katsimerou, C.

    2016-01-01

    Affect-adaptive systems have the potential to assist users that experience systematically negative moods. This thesis aims at building a platform for predicting automatically a person’s mood from his/her visual expressions. The key word is mood, namely a relatively long-term, stable and diffused aff

  16. Automated vertebra identification in CT images

    Science.gov (United States)

    Ehm, Matthias; Klinder, Tobias; Kneser, Reinhard; Lorenz, Cristian

    2009-02-01

    In this paper, we describe and compare methods for automatically identifying individual vertebrae in arbitrary CT images. The identification is an essential precondition for a subsequent model-based segmentation, which is used in a wide field of orthopedic, neurological, and oncological applications, e.g., spinal biopsies or the insertion of pedicle screws. Since adjacent vertebrae show similar characteristics, an automated labeling of the spine column is a very challenging task, especially if no surrounding reference structures can be taken into account. Furthermore, vertebra identification is complicated due to the fact that many images are bounded to a very limited field of view and may contain only few vertebrae. We propose and evaluate two methods for automatically labeling the spine column by evaluating similarities between given models and vertebral objects. In one method, object boundary information is taken into account by applying a Generalized Hough Transform (GHT) for each vertebral object. In the other method, appearance models containing mean gray value information are registered to each vertebral object using cross and local correlation as similarity measures for the optimization function. The GHT is advantageous in terms of computational performance but cuts back concerning the identification rate. A correct labeling of the vertebral column has been successfully performed on 93% of the test set consisting of 63 disparate input images using rigid image registration with local correlation as similarity measure.
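
    One simple way to turn per-vertebra similarity scores (from the GHT or the registration step) into a consistent labelling is to shift the model sequence along the detected objects and keep the shift with the highest summed similarity, as in the hypothetical sketch below; the actual matching used in the paper may differ.

      import numpy as np

      def label_vertebrae(similarity):
          """similarity[i, j]: score of detected object i against model vertebra j, both ordered head to foot."""
          n_obj, n_models = similarity.shape
          best_shift, best_score = 0, -np.inf
          for shift in range(n_models - n_obj + 1):
              score = sum(similarity[i, shift + i] for i in range(n_obj))
              if score > best_score:
                  best_shift, best_score = shift, score
          return list(range(best_shift, best_shift + n_obj))   # model indices assigned to the objects

      # three detected vertebrae scored against a five-vertebra model sequence (hypothetical scores)
      sim = np.array([[0.2, 0.7, 0.3, 0.1, 0.1],
                      [0.1, 0.3, 0.8, 0.2, 0.1],
                      [0.1, 0.2, 0.3, 0.9, 0.2]])
      print(label_vertebrae(sim))   # -> [1, 2, 3]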

  17. Source term identification of environmental radioactive Pu/U particles by their characterization with non-destructive spectrochemical analytical techniques

    Science.gov (United States)

    Eriksson, M.; Osán, J.; Jernström, J.; Wegrzynek, D.; Simon, R.; Chinea-Cano, E.; Markowicz, A.; Bamford, S.; Tamborini, G.; Török, S.; Falkenberg, G.; Alsecz, A.; Dahlgaard, H.; Wobrauschek, P.; Streli, C.; Zoeger, N.; Betti, M.

    2005-04-01

    Six radioactive particles stemming from Thule area (NW-Greenland) were investigated by gamma-ray and L X-ray spectrometry based on radioactive disintegration, scanning electron microscopy coupled with energy-dispersive and wavelength-dispersive X-ray spectrometer, synchrotron radiation based techniques as microscopic X-ray fluorescence, microscopic X-ray absorption near-edge structure (μ-XANES) as well as combined X-ray absorption and fluorescence microtomography. Additionally, one particle from Mururoa atoll was examined by microtomography. From the results obtained, it was found out that the U and Pu were mixed in the particles. The U/Pu intensity ratios in the Thule particles varied between 0.05 and 0.36. The results from the microtomography showed that U/Pu ratio was not homogeneously distributed. The 241Am/ 238 + 239 + 240 Pu activity ratios varied between 0.13 and 0.17, indicating that the particles originate from different source terms. The oxidation states of U and Pu as determined by μ-XANES showed that U(IV) is the preponderant species and for Pu, two types of particles could be evidenced. One set had about 90% Pu(IV) while in the other the ratio Pu(IV)/Pu(VI) was about one third.

  18. Short-term ECG recording for the identification of cardiac autonomic neuropathy in people with diabetes mellitus

    Science.gov (United States)

    Jelinek, Herbert F.; Pham, Phuong; Struzik, Zbigniew R.; Spence, Ian

    2007-07-01

    Diabetes mellitus (DM) is a serious and increasing health problem worldwide. Compared to non-diabetics, patients experience an increased risk of all cardiovascular diseases, including dysfunctional neural control of the heart. Poor diagnosis of cardiac autonomic neuropathy (CAN) may result in increased incidence of silent myocardial infarction and ischaemia, which can lead to sudden death. Traditionally the Ewing battery of tests is used to identify CAN. The purpose of this study is to examine the usefulness of heart rate variability (HRV) analyses of short-term ECG recordings as a method for detecting CAN. HRV may be able to identify asymptomatic individuals, which the Ewing battery is not able to do. Several HRV parameters are assessed, including time and frequency domain, as well as nonlinear parameters. Eighteen out of thirty-eight individuals with diabetes were positive for two or more of the Ewing battery of tests indicating CAN. Approximate Entropy (ApEn), log normalized total power (LnTP) and log normalized high frequency (LnHF) power demonstrate a significant difference (p < 0.05) between these groups in the short-term ECG recordings. Our study paves the way to assess the utility of nonlinear parameters in identifying asymptomatic CAN.
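
    Approximate entropy, one of the nonlinear parameters mentioned above, can be computed from an RR-interval series as in the sketch below; the parameter choices (m = 2, r = 0.2 times the standard deviation) follow common practice and are assumptions here, as is the synthetic data.

      import numpy as np

      def approximate_entropy(x, m=2, r_factor=0.2):
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()
          def phi(m):
              n = len(x) - m + 1
              templates = np.array([x[i:i + m] for i in range(n)])
              # fraction of templates within tolerance r of each template (self-matches included)
              counts = [np.sum(np.max(np.abs(templates - t), axis=1) <= r) / n for t in templates]
              return np.mean(np.log(counts))
          return phi(m) - phi(m + 1)

      rr = np.random.default_rng(2).normal(0.8, 0.05, size=300)   # synthetic RR intervals in seconds
      print("ApEn:", round(approximate_entropy(rr), 3))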

  19. Metaphor identification in large texts corpora.

    Science.gov (United States)

    Neuman, Yair; Assaf, Dan; Cohen, Yohai; Last, Mark; Argamon, Shlomo; Howard, Newton; Frieder, Ophir

    2013-01-01

    Identifying metaphorical language-use (e.g., sweet child) is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and the New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of scope of metaphorical phrases and annotated corpora size. Algorithms' performance in identifying linguistic phrases as metaphorical or literal has been compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and 27% averaged improvement in prediction over the base-rate of metaphors in the corpus.

  20. Metaphor identification in large texts corpora.

    Directory of Open Access Journals (Sweden)

    Yair Neuman

    Full Text Available Identifying metaphorical language-use (e.g., sweet child is one of the challenges facing natural language processing. This paper describes three novel algorithms for automatic metaphor identification. The algorithms are variations of the same core algorithm. We evaluate the algorithms on two corpora of Reuters and the New York Times articles. The paper presents the most comprehensive study of metaphor identification in terms of scope of metaphorical phrases and annotated corpora size. Algorithms' performance in identifying linguistic phrases as metaphorical or literal has been compared to human judgment. Overall, the algorithms outperform the state-of-the-art algorithm with 71% precision and 27% averaged improvement in prediction over the base-rate of metaphors in the corpus.

  1. VEHICLE IDENTIFICATION TASK SOLUTION BY WINDSCREEN MARKING WITH A BARCODE

    Directory of Open Access Journals (Sweden)

    A. Levterov

    2012-01-01

    Full Text Available The vehicle identification means are considered and the present-day traffic requirements are set. The vehicle automatic identification method concerned with barcode use is proposed and described.

  2. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS). We performed a recent state of the art. The book shows the main problems of ADS, difficulties and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several exemples are included in order to clarify the theoretical concepts.  The books currently available in the area of Automatic Document Summarization are not recent. Powerful algorithms have been develop

  3. Automatic utilities auditing

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Colin Boughton [Energy Metering Technology (United Kingdom)

    2000-08-01

    At present, energy audits represent only snapshot situations of the flow of energy. The normal pattern of energy audits as seen through the eyes of an experienced energy auditor is described. A brief history of energy auditing is given. It is claimed that the future of energy auditing lies in automatic meter reading with expert data analysis providing continuous automatic auditing thereby reducing the skill element. Ultimately, it will be feasible to carry out auditing at intervals of say 30 minutes rather than five years.

  4. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate howto translate a shot list for a virtual scene into a series of virtual camera configurations — i.e automatically controlling the virtual...... camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  5. An Automat for the Semantic Processing of Structured Information

    OpenAIRE

    Leiva-Mederos, Amed; Senso, Jos?? A.; Dom??nguez-Velasco, Sandor; H??pola, Pedro

    2012-01-01

    Using the database of the PuertoTerm project, an indexing system based on the cognitive model of Brigitte Enders was built. By analyzing the cognitive strategies of three abstractors, we built an automat that serves to simulate human indexing processes. The automat allows the texts integrated in the system to be assessed, evaluated and grouped by means of the Bipartite Spectral Graph Partitioning algorithm, which also permits visualization of the terms and the documents. The system features a...

  6. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of input. We describe a system to derive such time bounds automatically using abstract...

  7. Profiling School Shooters: Automatic Text-Based Analysis

    Directory of Open Access Journals (Sweden)

    Yair eNeuman

    2015-06-01

    Full Text Available School shooters present a challenge to both forensic psychiatry and law enforcement agencies. The relatively small number of school shooters, their various characteristics, and the lack of in-depth analysis of all of the shooters prior to the shooting add complexity to our understanding of this problem. In this short paper, we introduce a new methodology for automatically profiling school shooters. The methodology involves automatic analysis of texts and the production of several measures relevant for the identification of the shooters. Comparing texts written by six school shooters to 6056 texts written by a comparison group of male subjects, we found that the shooters' texts scored significantly higher on the Narcissistic Personality dimension as well as on the Humiliated and Revengeful dimensions. Using a ranking/prioritization procedure, similar to the one used for the automatic identification of sexual predators, we provide support for the validity and relevance of the proposed methodology.

  8. Framework for automatic information extraction from research papers on nanocrystal devices

    Directory of Open Access Journals (Sweden)

    Thaer M. Dieb

    2015-09-01

    Full Text Available To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called “ NaDev” (Nanocrystal Device Development for this purpose. We also proposed an automatic information extraction system called “NaDevEx” (Nanocrystal Device Automatic Information Extraction Framework. NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms, the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material. However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39–73%; however, precision is better (75–97%. The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for

  9. Framework for automatic information extraction from research papers on nanocrystal devices.

    Science.gov (United States)

    Dieb, Thaer M; Yoshioka, Masaharu; Hara, Shinjiro; Newton, Marcus C

    2015-01-01

    To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called " NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39-73%); however, precision is better (75-97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for characterization papers
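
    The strict versus loose evaluation described above can be illustrated with a small scoring routine: under loose matching a predicted term counts as correct if its span overlaps a gold term of the same category. The tiny gold and predicted spans below are hypothetical.

      def prf(predicted, gold, loose=False):
          """Precision and recall for (start, end, category) term spans."""
          def match(p, g):
              same_cat = p[2] == g[2]
              if loose:
                  return same_cat and p[0] < g[1] and g[0] < p[1]   # character spans overlap
              return same_cat and (p[0], p[1]) == (g[0], g[1])
          tp = sum(any(match(p, g) for g in gold) for p in predicted)
          precision = tp / len(predicted) if predicted else 0.0
          recall = sum(any(match(p, g) for p in predicted) for g in gold) / len(gold)
          return precision, recall

      gold = [(10, 25, "source_material"), (40, 52, "physical_quantity")]
      pred = [(12, 25, "source_material"), (60, 70, "physical_quantity")]
      print("strict:", prf(pred, gold), "loose:", prf(pred, gold, loose=True))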

  10. Genotypic Identification

    Science.gov (United States)

    In comparison with traditional, phenotype-based procedures for detection and identification of foodborne pathogen Listeria monocytogenes, molecular techniques are superior in terms of sensitivity, specificity and speed. This chapter provides a comprehensive review on the use of molecular methods for...

  11. Automatic summarising factors and directions

    CERN Document Server

    Jones, K S

    1998-01-01

    This position paper suggests that progress with automatic summarising demands a better research methodology and a carefully focussed research strategy. In order to develop effective procedures it is necessary to identify and respond to the context factors, i.e. input, purpose, and output factors, that bear on summarising and its evaluation. The paper analyses and illustrates these factors and their implications for evaluation. It then argues that this analysis, together with the state of the art and the intrinsic difficulty of summarising, imply a nearer-term strategy concentrating on shallow, but not surface, text analysis and on indicative summarising. This is illustrated with current work, from which a potentially productive research programme can be developed.

  12. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube which is always used to identify a fault by the minimum value. The biggest challenge in automatic fault extraction is noise, including that of seismic data. However, a fault has a better spatial continuity in certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model test results show that this method is feasible and effective in automatic fault extraction and noise suppression. The application of field data further illustrates its validity and superiority. (paper)

  13. Automatic trend estimation

    CERN Document Server

    Vamos¸, C˘alin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  14. Automatic Program Reports

    OpenAIRE

    Lígia Maria da Silva Ribeiro; Gabriel de Sousa Torcato David

    2007-01-01

    To profit from the data collected by the SIGARRA academic IS, a systematic set of graphs and statistics has been added to it and are available on-line. This analytic information can be automatically included in a flexible yearly report for each program as well as in a synthesis report for the whole school. Some difficulties in the interpretation of some graphs led to the definition of new key indicators and the development of a data warehouse across the university where effective data consolidation...

  15. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments...... and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices....

  16. Automatic Differentiation Variational Inference

    OpenAIRE

    Kucukelbir, Alp; Tran, Dustin; Ranganath, Rajesh; Gelman, Andrew; Blei, David M.

    2016-01-01

    Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally challenging, which makes it difficult to efficiently cycle through the steps. To this end, we develop automatic differentiation variational inference (ADVI). Using our method, the scientist on...

  17. The ALDB box: automatic testing of cognitive performance in groups of aviary-housed pigeons.

    Science.gov (United States)

    Huber, Ludwig; Heise, Nils; Zeman, Christopher; Palmers, Christian

    2015-03-01

    The combination of highly controlled experimental testing and the voluntary participation of unrestrained animals has many advantages over traditional, laboratory-based learning environments in terms of animal welfare, learning speed, and resource economy. Such automatic learning environments have recently been developed for primates (Fagot & Bonté, 2010; Fagot & Paleressompoulle, 2009;) but, so far, has not been achieved with highly mobile creatures such as birds. Here, we present a novel testing environment for pigeons. Living together in small groups in outside aviaries, they can freely choose to participate in learning experiments by entering and leaving the automatic learning box at any time. At the single-access entry, they are individualized using radio frequency identification technology and then trained or tested in a stress-free and self-terminating manner. The voluntary nature of their participation according to their individual biorhythm guarantees high motivation levels and good learning and test performance. Around-the-clock access allows for massed-trials training, which in baboons has been proven to have facilitative effects on discrimination learning. The performance of 2 pigeons confirmed the advantages of the automatic learning device for birds box. The latter is the result of a development process of several years that required us to deal with and overcome a number of technical challenges: (1) mechanically controlled access to the box, (2) identification of the birds, (3) the release of a bird and, at the same time, prevention of others from entering the box, and (4) reliable functioning of the device despite long operation times and exposure to high dust loads and low temperatures.

  18. The ALDB box: automatic testing of cognitive performance in groups of aviary-housed pigeons.

    Science.gov (United States)

    Huber, Ludwig; Heise, Nils; Zeman, Christopher; Palmers, Christian

    2015-03-01

    The combination of highly controlled experimental testing and the voluntary participation of unrestrained animals has many advantages over traditional, laboratory-based learning environments in terms of animal welfare, learning speed, and resource economy. Such automatic learning environments have recently been developed for primates (Fagot & Bonté, 2010; Fagot & Paleressompoulle, 2009;) but, so far, has not been achieved with highly mobile creatures such as birds. Here, we present a novel testing environment for pigeons. Living together in small groups in outside aviaries, they can freely choose to participate in learning experiments by entering and leaving the automatic learning box at any time. At the single-access entry, they are individualized using radio frequency identification technology and then trained or tested in a stress-free and self-terminating manner. The voluntary nature of their participation according to their individual biorhythm guarantees high motivation levels and good learning and test performance. Around-the-clock access allows for massed-trials training, which in baboons has been proven to have facilitative effects on discrimination learning. The performance of 2 pigeons confirmed the advantages of the automatic learning device for birds box. The latter is the result of a development process of several years that required us to deal with and overcome a number of technical challenges: (1) mechanically controlled access to the box, (2) identification of the birds, (3) the release of a bird and, at the same time, prevention of others from entering the box, and (4) reliable functioning of the device despite long operation times and exposure to high dust loads and low temperatures. PMID:24737096

  19. A Joint Approach for Single-Channel Speaker Identification and Speech Separation

    DEFF Research Database (Denmark)

    Mowlaee, Pejman; Saeidi, Rahim; Christensen, Mads Græsbøll;

    2012-01-01

    ) accuracy, here, we report the objective and subjective results as well. The results show that the proposed system performs as well as the best of the state-of-the-art in terms of perceived quality while its performance in terms of speaker identification and automatic speech recognition results...... are generally lower. It outperforms the state-of-the-art in terms of intelligibility showing that the ASR results are not conclusive. The proposed method achieves on average, 52.3% ASR accuracy, 41.2 points in MUSHRA and 85.9% in speech intelligibility....... a situation where we have prior information of codebook indices, speaker identities and SSR-level, and then, by relaxing these assumptions one by one, we demonstrate the efficiency of the proposed fully blind system. In contrast to previous studies that mostly focus on automatic speech recognition (ASR...

  20. AUTOMATIC CAPTION GENERATION FOR ELECTRONICS TEXTBOOKS

    Directory of Open Access Journals (Sweden)

    Veena Thakur

    2015-10-01

    Full Text Available Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify the parts of the textbooks which may be helpful for the students it describes the entities, attributes, role and their relationship plus the constraints that govern the problem domain. The caption model is created in order to represent the vocabulary and key concepts of the problem domain. The caption model also identifies the relationships among all the entities within the scope of the problem domain, and commonly identifies their attributes. It defines a vocabulary and is helpful as a communication tool. DOM-Sortze, a framework that enables the semi-automatic generation of the Caption Module for technology supported learning system (TSLS from electronic textbooks. The semiautomatic generation of the Caption Module entails the identification and elicitation of knowledge from the documents to which end Natural Language Processing (NLP techniques are combined with ontologies and heuristic reasoning.

  1. Automatic Caption Generation for Electronics Textbooks

    Directory of Open Access Journals (Sweden)

    Veena Thakur

    2014-12-01

    Full Text Available Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify the parts of the textbooks which may be helpful for the students it describes the entities, attributes, role and their relationship plus the constraints that govern the problem domain. The caption model is created in order to represent the vocabulary and key concepts of the problem domain. The caption model also identifies the relationships among all the entities within the scope of the problem domain, and commonly identifies their attributes. It defines a vocabulary and is helpful as a communication tool. DOM-Sortze, a framework that enables the semi-automatic generation of the Caption Module for technology supported learning system (TSLS from electronic textbooks. The semiautomatic generation of the Caption Module entails the identification and elicitation of knowledge from the documents to which end Natural Language Processing (NLP techniques are combined with ontologies and heuristic reasoning.

  2. Automatic identification and real-time tracking based on multiple sensors for low-altitude moving targets

    Institute of Scientific and Technical Information of China (English)

    张作楠; 刘国栋; 王婷婷

    2011-01-01

    A multi-sensor anti-helicopter mine (AHM) tracking system is discussed with the aim of improving the mine's fully automatic tracking capability and firing accuracy. Building on traditional passive acoustic detection and combining the visual information of an image sensor with the depth information of a laser range finder, an acoustic-optical-electronic multi-sensor algorithm for automatic target detection, identification and real-time tracking is proposed. First, five-element cross-array acoustic source localization is used to detect and initially locate the low-altitude target; the target image is then processed to extract its features; finally, an image-feature-based visual servoing algorithm computes the rotation angle of the servo mechanism so that the target is tracked precisely.
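
    The final servoing step, converting the target's offset in the image into a rotation command, might be sketched as below; the image size, fields of view and gain are assumed values, not those of the AHM system.

      def servo_angles(target_px, image_size=(640, 480), fov_deg=(60.0, 45.0), gain=0.5):
          """Return (pan, tilt) corrections in degrees from the target's pixel position."""
          cx, cy = image_size[0] / 2, image_size[1] / 2
          ex, ey = target_px[0] - cx, target_px[1] - cy          # pixel error from image centre
          pan = gain * ex / image_size[0] * fov_deg[0]           # rotate right for positive error
          tilt = -gain * ey / image_size[1] * fov_deg[1]         # rotate up for targets above centre
          return pan, tilt

      print(servo_angles((400, 200)))   # e.g. a small rotation right and up toward the target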

  3. Automatic Configuration in NTP

    Institute of Scientific and Technical Information of China (English)

    Jiang Zongli(蒋宗礼); Xu Binbin

    2003-01-01

    NTP is nowadays the most widely used distributed network time protocol, which aims at synchronizing the clocks of computers in a network and keeping the accuracy and validation of the time information which is transmitted in the network. Without automatic configuration mechanism, the stability and flexibility of the synchronization network built upon NTP protocol are not satisfying. P2P's resource discovery mechanism is used to look for time sources in a synchronization network, and according to the network environment and node's quality, the synchronization network is constructed dynamically.

  4. Automatically predicting mood from expressed emotions

    OpenAIRE

    Katsimerou, C.

    2016-01-01

    Affect-adaptive systems have the potential to assist users that experience systematically negative moods. This thesis aims at building a platform for predicting automatically a person’s mood from his/her visual expressions. The key word is mood, namely a relatively long-term, stable and diffused affective state, as opposed to the short-term, volatile and intense emotion. This is emphasized, because mood and emotion often tend to be used as synonyms. However, since their differences are well e...

  5. System Identification

    NARCIS (Netherlands)

    Keesman, K.J.

    2011-01-01

    Summary System Identification Introduction.- Part I: Data-based Identification.- System Response Methods.- Frequency Response Methods.- Correlation Methods.- Part II: Time-invariant Systems Identification.- Static Systems Identification.- Dynamic Systems Identification.- Part III: Time-varying Syste

  6. Ballistics Image Processing and Analysis for Firearm Identification

    OpenAIRE

    Li, Dongguang

    2009-01-01

    Firearm identification is an intensive and time-consuming process that requires physical interpretation of forensic ballistics evidence. Especially as the level of violent crime involving firearms escalates, the number of firearms to be identified accumulates dramatically. The demand for an automatic firearm identification system arises. This chapter proposes a new, analytic system for automatic firearm identification based on the cartridge and projectile specimens. Not only do we present an ...

  7. Neuro-fuzzy system modeling based on automatic fuzzy clustering

    Institute of Scientific and Technical Information of China (English)

    Yuangang TANG; Fuchun SUN; Zengqi SUN

    2005-01-01

    A neuro-fuzzy system model based on automatic fuzzy clustering is proposed. A hybrid model identification algorithm is also developed to decide the model structure and model parameters. The algorithm mainly includes three parts: 1) automatic fuzzy C-means (AFCM), which is applied to generate fuzzy rules automatically and then fix the size of the neuro-fuzzy network, by which the complexity of system design is reduced greatly at the price of the fitting capability; 2) recursive least square estimation (RLSE), which is used to update the parameters of the Takagi-Sugeno model employed to describe the behavior of the system; 3) a gradient descent algorithm, proposed for the fuzzy values according to the back propagation algorithm of neural networks. Finally, modeling the dynamical equation of a two-link manipulator with the proposed approach is illustrated to validate the feasibility of the method.
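
    A bare-bones fuzzy C-means routine, whose cluster centres would seed the fuzzy rules of the Takagi-Sugeno model, is sketched below; the automatic selection of the cluster number and the RLSE and gradient-descent updates described in the abstract are not shown.

      import numpy as np

      def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)                      # initial membership matrix
          for _ in range(iters):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]        # weighted cluster centres
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              U = 1.0 / (d ** (2 / (m - 1)))                      # standard FCM membership update
              U /= U.sum(axis=1, keepdims=True)
          return centers, U

      X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
                     np.random.default_rng(2).normal(5, 1, (50, 2))])
      centers, U = fuzzy_c_means(X, c=2)
      print("rule centres:\n", centers)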

  8. Photo-identification methods reveal seasonal and long-term site-fidelity of Risso’s dolphins (Grampus griseus) in shallow waters (Cardigan Bay, Wales)

    NARCIS (Netherlands)

    Boer, de M.N.; Leopold, M.F.; Simmonds, M.P.; Reijnders, P.J.H.

    2013-01-01

    A photo-identification study on Risso’s dolphins was carried out off Bardsey Island in Wales (July to September, 1997-2007). Their local abundance was estimated using two different analytical techniques: 1) mark-recapture of well-marked dolphins using a “closed-population” model; and 2) a census tec

  9. Optimal Coordination of Automatic Line Switches for Distribution Systems

    OpenAIRE

    Jyh-Cherng Gu; Ming-Ta Yang

    2012-01-01

    For the Taiwan Power Company (Taipower), the margins of coordination times between the lateral circuit breakers (LCB) of underground 4-way automatic line switches and the protection equipment of high voltage customers are often too small. This could lead to sympathy tripping by the feeder circuit breaker (FCB) of the distribution feeder and create difficulties in protection coordination between upstream and downstream protection equipment, identification of faults, and restoration operations....

  10. Automatic segmentation of relevant textures in agricultural images

    OpenAIRE

    Guijarro, Maria; Pajares, Gonzalo; Riomoros, I.; Herrera, P.J.; Burgos Artizzu, Xavier; Ribeiro Seijas, Angela

    2011-01-01

    One important issue emerging strongly in agriculture is related to the automation of tasks, where optical sensors play an important role. They provide images that must be conveniently processed. The most relevant image processing procedures require the identification of green plants; in our experiments these come from barley and corn crops, including weeds, so that some types of action can be carried out, including site-specific treatments with chemical products or mechanical manipulat...

  11. Automatic target validation based on neuroscientific literature mining for tractography

    OpenAIRE

    Xavier Vasques; Renaud Richardet; Etienne Pralong; LAURA CIF

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so t...

  12. Hydra: Automatic algorithm exploration from linear algebra equations

    OpenAIRE

    Duchâteau, Alexandre; Padua, David; Barthou, Denis

    2013-01-01

    Hydra accepts an equation written in terms of operations on matrices and automatically produces highly efficient code to solve these equations. Processing of the equation starts by tiling the matrices. This transforms the equation into either a single new equation containing terms involving tiles or into multiple equations some of which can be solved in parallel with each other. Hydra continues transforming the equations using tiling and seeking terms that Hydra know...

  13. Semi-automatic analysis of fire debris

    Science.gov (United States)

    Touron; Malaquin; Gardebas; Nicolai

    2000-05-01

    Automated analysis of fire residues involves a strategy which deals with the wide variety of received criminalistic samples. Because the concentration of accelerant in a sample is unknown and the range of flammable products is wide, full attention from the analyst is required. Primary detection with a photoionisator resolves the first problem by determining the right method to use: either the less responsive classical head-space determination or absorption on an active charcoal tube, a better-fitted method more suited to low concentrations, can thus be chosen. The latter method is suitable for automatic thermal desorption (ATD400), to avoid any risk of cross-contamination. A PONA column (50 m x 0.2 mm i.d.) allows the separation of volatile hydrocarbons from C(1) to C(15) and the update of a database. A specific second column is used for heavy hydrocarbons. Heavy products (C(13) to C(40)) were extracted from residues using a very small amount of pentane, concentrated to 1 ml at 50 degrees C and then placed on an automatic carousel. Comparison of flammables with referenced chromatograms provided the expected identification, possibly using mass spectrometry. This analytical strategy belongs to the IRCGN quality program, resulting in analysis of 1500 samples per year by two technicians. PMID:10802196

  14. Electronic amplifiers for automatic compensators

    CERN Document Server

    Polonnikov, D Ye

    1965-01-01

    Electronic Amplifiers for Automatic Compensators presents the design and operation of electronic amplifiers for use in automatic control and measuring systems. This book is composed of eight chapters that consider the problems of constructing input and output circuits of amplifiers, suppression of interference and ensuring high sensitivity.This work begins with a survey of the operating principles of electronic amplifiers in automatic compensator systems. The succeeding chapters deal with circuit selection and the calculation and determination of the principal characteristics of amplifiers, as

  15. The Automatic Telescope Network (ATN)

    CERN Document Server

    Mattox, J R

    1999-01-01

    Because of the scheduled GLAST mission by NASA, there is strong scientific justification for preparation for very extensive blazar monitoring in the optical bands to exploit the opportunity to learn about blazars through the correlation of variability of the gamma-ray flux with flux at lower frequencies. Current optical facilities do not provide the required capability. Developments in technology have enabled astronomers to readily deploy automatic telescopes. The effort to create an Automatic Telescope Network (ATN) for blazar monitoring in the GLAST era is described. Other scientific applications of the networks of automatic telescopes are discussed. The potential of the ATN for science education is also discussed.

  16. Effects of moderate maternal energy restriction on the offspring metabolic health, in terms of obesity and related diseases, and identification of determinant factors and early biomarkers

    OpenAIRE

    Torrens García, Juana María

    2015-01-01

    Introduction A growing body of evidence, from epidemiological studies in humans and animal models, indicates that maternal health and nutritional status during gestation and lactation can program the propensity to develop obesity in their offspring. Huge efforts are now being directed toward understanding the molecular mechanisms underlying this developmental programming. Identification of these mechanisms could give some clues about potential strategies to prevent or revert programmed prop...

  17. Building an Automatic Thesaurus to Enhance Information Retrieval

    Directory of Open Access Journals (Sweden)

    Essam Said Hanandeh

    2013-01-01

    Full Text Available One of the major problems of modern Information Retrieval (IR) systems is the vocabulary problem, which concerns the discrepancies between the terms used for describing documents and the terms used by researchers to describe their information need. We have implemented an automatic thesaurus; the system was built using the Vector Space Model (VSM). In this model, we used the cosine similarity measure. In this paper we selected 242 Arabic abstract documents. All these abstracts involve computer science and information systems. The main goal of this paper is to design and build an automatic Arabic thesaurus using term-term similarity that can be used in any special field or domain to improve the expansion process and to get more relevant documents for the user's query. The study concluded that the similarity thesaurus improved recall and precision over a traditional information retrieval system.
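
    A term–term similarity thesaurus of the kind described can be sketched by building a term–document matrix and taking cosine similarities between term vectors; the toy corpus below stands in for the 242 Arabic abstracts and the parameters are illustrative assumptions.

```python
import numpy as np
from collections import Counter

docs = ["information retrieval systems", "vector space model for retrieval",
        "arabic thesaurus for information systems"]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# term-document frequency matrix
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w, c in Counter(d.split()).items():
        A[idx[w], j] = c

# cosine similarity between term vectors
norms = np.linalg.norm(A, axis=1, keepdims=True) + 1e-12
S = (A / norms) @ (A / norms).T

def related_terms(term, k=3):
    """Top-k most similar terms, usable for query expansion."""
    i = idx[term]
    order = np.argsort(-S[i])
    return [vocab[j] for j in order if j != i][:k]

print(related_terms("retrieval"))
```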

  18. Automatic programming of simulation models

    Science.gov (United States)

    Schroer, Bernard J.; Tseng, Fan T.; Zhang, Shou X.; Dwan, Wen S.

    1988-01-01

    The objective of automatic programming is to improve the overall environment for describing the program. This improved environment is realized by a reduction in the amount of detail that the programmer needs to know and is exposed to. Furthermore, this improved environment is achieved by a specification language that is more natural to the user's problem domain and to the user's way of thinking and looking at the problem. The goal of this research is to apply the concepts of automatic programming (AP) to modeling discrete event simulation systems. Specific emphasis is on the design and development of simulation tools to assist the modeler in defining or constructing a model of the system and to then automatically write the corresponding simulation code in the target simulation language, GPSS/PC. A related goal is to evaluate the feasibility of various languages for constructing automatic programming simulation tools.

  19. Clothes Dryer Automatic Termination Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 2: Improved Sensor and Control Designs Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  20. Photo-identification methods reveal seasonal and long-term site-fidelity of Risso’s dolphins (Grampus griseus) in shallow waters (Cardigan Bay, Wales)

    OpenAIRE

    Boer; Leopold, M.F.; Simmonds, M.P.; Reijnders, P.J.H.

    2013-01-01

    A photo-identification study on Risso’s dolphins was carried out off Bardsey Island in Wales (July to September, 1997-2007). Their local abundance was estimated using two different analytical techniques: 1) mark-recapture of well-marked dolphins using a “closed-population” model; and 2) a census technique based on the total number of identified individual dolphins sighted over the study period. The mark-recapture estimates of 121 (left sides; 64 - 178, 95% CI; CV 0.24) and 145 dolphins (righ...
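
    The "closed-population" mark–recapture estimate used here can be illustrated with the Chapman-corrected Lincoln–Petersen estimator; the counts below are invented and are not the study's data.

```python
def chapman_estimate(n1, n2, m):
    """Chapman-corrected Lincoln-Petersen estimate of a closed population.

    n1: individuals marked (photo-identified) in the first sample
    n2: individuals caught in the second sample
    m : marked individuals recaptured in the second sample
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# illustrative numbers only
print(chapman_estimate(n1=60, n2=55, m=30))   # ~109 animals
```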

  1. The Masked Semantic Priming Effect Is Task Dependent: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2015-01-01

    Semantic priming effects are popularly explained in terms of an automatic spreading activation process, according to which the activation of a node in a semantic network spreads automatically to interconnected nodes, preactivating a semantically related word. It is expected from this account that semantic priming effects should be routinely…

  2. THEORETICAL CONSIDERATIONS REGARDING THE AUTOMATIC FISCAL STABILIZERS OPERATING MECHANISM

    Directory of Open Access Journals (Sweden)

    Gondor Mihaela

    2012-07-01

    Full Text Available This paper examines the role of Automatic Fiscal Stabilizers (AFS) for stabilizing the cyclical fluctuations of macroeconomic output as an alternative to discretionary fiscal policy, admitting their huge potential as an anti-crisis solution. The objectives of the study are the identification of the general features of the concept of automatic fiscal stabilizers and their logical assessment from an economic perspective. Based on the literature in the field, this paper points out the disadvantages of discretionary fiscal policy and argues the need to use Automatic Fiscal Stabilizers in order to provide a faster decision-making process, shielded from political interference, and reduced uncertainty for households and the business environment. The paper concludes on the need to use fiscal policy for smoothing the economic cycle, but in a way which includes among its features transparency, responsibility and clear operating mechanisms. Based on the research results, the present paper assumes that pro-cyclicality reduces the effectiveness of the Automatic Fiscal Stabilizers and as a result concludes that it is very important to avoid pro-cyclicality in fiscal rule design. Moreover, by committing in advance to specific fiscal policy actions contingent on economic developments, uncertainty about the fiscal policy framework during a recession should be reduced. Being based on logical analysis and not on empirical, contextualized analysis, the paper presents some features of the AFS operating mechanism and also identifies and systematizes the factors which provide its importance and national individuality. Reaching common understanding on the Automatic Fiscal Stabilizer concept as an institutional device for smoothing the gaps of the economic cycles across different countries, particularly for the European Union Member States, will facilitate efforts to coordinate fiscal policy responses during a crisis, especially in the context of the fiscal

  3. Exploring Behavioral Markers of Long-Term Physical Activity Maintenance: A Case Study of System Identification Modeling within a Behavioral Intervention

    Science.gov (United States)

    Hekler, Eric B.; Buman, Matthew P.; Poothakandiyil, Nikhil; Rivera, Daniel E.; Dzierzewski, Joseph M.; Aiken Morgan, Adrienne; McCrae, Christina S.; Roberts, Beverly L.; Marsiske, Michael; Giacobbi, Peter R., Jr.

    2013-01-01

    Efficacious interventions to promote long-term maintenance of physical activity are not well understood. Engineers have developed methods to create dynamical system models for modeling idiographic (i.e., within-person) relationships within systems. In behavioral research, dynamical systems modeling may assist in decomposing intervention effects…

  4. Automatic measurement system for long term LED parameters

    Science.gov (United States)

    Budzyński, Łukasz; Zajkowski, Maciej

    2015-09-01

    During the past years, the number of LED models available on the market has increased significantly. However, not all of them have parameters which allow for use in professional lighting systems. The article discusses the international standards which should be met by modern LEDs. Among them, one of the most important parameters is the decline in luminous flux during the operation of the LEDs. Its value is influenced by many factors, among others the junction temperature of the diode and the average and maximum values of the supply current. Other parameters important for lighting are the stability of the correlated color temperature and the stability of the chromaticity coordinates of the emitted light. The paper presents a system to measure the luminous flux and colorimetric parameters of LEDs. The measurement system also allows for measuring changes in these parameters during operation of the LED.

  5. Automatic mapping of monitoring data

    DEFF Research Database (Denmark)

    Lophaven, Søren; Nielsen, Hans Bruun; Søndergaard, Jacob

    2005-01-01

    This paper presents an approach, based on universal kriging, for automatic mapping of monitoring data. The performance of the mapping approach is tested on two data-sets containing daily mean gamma dose rates in Germany reported by means of the national automatic monitoring network (IMIS......). In the second dataset an accidental release of radioactivity in the environment was simulated in the South-Western corner of the monitored area. The approach has a tendency to smooth the actual data values, and therefore it underestimates extreme values, as seen in the second dataset. However, it is capable...
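
    The mapping approach is based on universal kriging; as a rough illustration of the kriging idea, the sketch below interpolates a handful of invented dose-rate readings with ordinary kriging and an exponential covariance model (the station coordinates, readings and covariance parameters are all assumptions, and the drift terms of universal kriging are omitted).

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, range_=50.0, sill=1.0, nugget=1e-6):
    """Ordinary kriging with an exponential covariance model (illustrative parameters)."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-d / range_)

    n = len(z)
    # kriging system with a Lagrange multiplier enforcing weights that sum to one
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy, xy) + nugget * np.eye(n)
    K[n, n] = 0.0
    k = np.ones((n + 1, len(xy_new)))
    k[:n, :] = cov(xy, xy_new)
    w = np.linalg.solve(K, k)
    return w[:n].T @ z

# invented monitoring stations (x, y) and dose-rate readings
xy = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [45.0, 35.0]])
z = np.array([0.08, 0.09, 0.11, 0.10])
grid = np.array([[20.0, 20.0], [40.0, 10.0]])
print(ordinary_kriging(xy, z, grid))
```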

  6. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming focuses on the techniques of automatic programming used with digital computers. Topics covered range from the design of machine-independent programming languages to the use of recursive procedures in ALGOL 60. A multi-pass translation scheme for ALGOL 60 is described, along with some commercial source languages. The structure and use of the syntax-directed compiler is also considered.Comprised of 12 chapters, this volume begins with a discussion on the basic ideas involved in the description of a computing process as a program for a computer, expressed in

  7. Algorithms for skiascopy measurement automatization

    Science.gov (United States)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to be run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate equipped with a long-focus objective and an 850 nm infrared light emitting diode as light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic accommodative state analysis is developed based on the intensity changes of the fundus reflex.

  8. Automatic Construction of Finite Algebras

    Institute of Scientific and Technical Information of China (English)

    张健

    1995-01-01

    This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for automatic construction of finite algebras is described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.

  9. Face Prediction Model for an Automatic Age-invariant Face Recognition System

    OpenAIRE

    Yadav, Poonam

    2015-01-01

    Automated face recognition and identification software is becoming part of our daily life; it finds its abode not only in Facebook's auto photo tagging, Apple's iPhoto, Google's Picasa and Microsoft's Kinect, but also in the Homeland Security Department's dedicated biometric face detection systems. Most of these automatic face identification systems fail where the effects of aging come into the picture. Little work exists in the literature on the subject of face prediction that accounts for agin...

  10. Topical Session on Liabilities identification and long-term management at national level - Topical Session held during the 36. Meeting of the RWMC

    International Nuclear Information System (INIS)

    These proceedings cover a topical session that was held at the March 2003 meeting of the Radioactive Waste Management Committee. The topical session focused on liability assessment and management for decommissioning of all types of nuclear installations, including decontamination of historic sites and waste management, as applicable. The presentations covered the current, national situations. The first oral presentation, from Switzerland, set the scene by providing a broad coverage of the relevant issues. The subsequent presentations - five from Member countries and one from the EC - described additional national positions and the evolving EC proposed directives. Each oral presentation was followed by a brief period of Q and As for clarification only. A plenary discussion took place on the ensemble of presentations and a Rapporteur provided a report on points made and lessons learnt. Additionally, written contributions were provided by RWMC delegates from several other countries. These are included in the proceedings as are the papers from the oral sessions, and the Rapporteur's report. These papers are not intended to be exhaustive, but to give an informed glimpse of NEA countries' approaches to liability identification and management in the context of nuclear facilities decommissioning and dismantling

  11. Early Automatic Detection of Parkinson's Disease Based on Sleep Recordings

    DEFF Research Database (Denmark)

    Kempfner, Jacob; Sorensen, Helge B D; Nikolic, Miki;

    2014-01-01

    SUMMARY: Idiopathic rapid-eye-movement (REM) sleep behavior disorder (iRBD) is most likely the earliest sign of Parkinson's Disease (PD) and is characterized by REM sleep without atonia (RSWA) and consequently increased muscle activity. However, some muscle twitching in normal subjects occurs...... during REM sleep. PURPOSE: There are no generally accepted methods for evaluation of this activity and a normal range has not been established. Consequently, there is a need for objective criteria. METHOD: In this study we propose a full-automatic method for detection of RSWA. REM sleep identification...... the number of outliers during REM sleep was used as a quantitative measure of muscle activity. RESULTS: The proposed method was able to automatically separate all iRBD test subjects from healthy elderly controls and subjects with periodic limb movement disorder. CONCLUSION: The proposed work is considered...

  12. Automatic classification of blank substrate defects

    Science.gov (United States)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since this mask is to later act as a template for a considerable number of dies on a wafer. Defects on the initial blank substrate, and subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical defects or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, distinguishing defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment compared to the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask

  13. Automatic Radiation Monitoring in Slovenia

    International Nuclear Information System (INIS)

    Full text: The automatic radiation monitoring system in Slovenia started in the early nineties and now it comprises measurements of: 1. External gamma radiation: For the time being there are forty-three probes with GM tubes integrated into a common automatic network, operated at the SNSA. The probes measure dose rate in 30 minute intervals. 2. Aerosol radioactivity: Three automatic aerosol stations measure the concentration of artificial alpha and beta activity in the air, gamma emitting radionuclides, radioactive iodine 131 in the air (in all chemical forms), and natural radon and thoron progeny. 3. Radon progeny concentration: Radon progeny concentration is measured hourly and results are displayed as the equilibrium equivalent concentrations (EEC). 4. Radioactive deposition measurements: As a support to gamma dose rate measurements, the SNSA developed and installed an automatic measuring station for surface contamination equipped with a gamma spectrometry system (with 3x3' NaI(Tl) detector). All data are transferred through the different communication pathways to the SNSA. They are collected in 30 minute intervals. Within these intervals the central computer analyses and processes the collected data, and creates different reports. Every month a QA/QC analysis of data is performed, showing the statistics of acquisition errors and availability of measuring results. All results are promptly available at our WEB pages. The data are checked and daily sent to the EURDEP system at Ispra (Italy) and also to the Austrian, Croatian and Hungarian authorities. (author)

  14. Automatically Preparing Safe SQL Queries

    Science.gov (United States)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
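
    The transformation target the authors describe, PREPARE-style parameterized statements in place of string-built SQL, looks like the following generic illustration using Python's sqlite3 (the table, data and query are invented; this is not the paper's transformation tool).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"          # classic injection attempt

# UNSAFE: query built by string concatenation, input is parsed as SQL
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(len(conn.execute(unsafe).fetchall()))   # 1 row leaked despite the bogus name

# SAFE: parameterized (prepared) statement, input is bound as data only
safe = "SELECT * FROM users WHERE name = ?"
print(len(conn.execute(safe, (user_input,)).fetchall()))   # 0 rows
```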

  15. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim

    1997-01-01

    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The strategy used is a combination of two methods: the maximum correlation coefficient and the correlation in the subpixel range...

  16. Automatic quantification of iris color

    DEFF Research Database (Denmark)

    Christoffersen, S.; Harder, Stine; Andersen, J. D.;

    2012-01-01

    An automatic algorithm to quantify the eye colour and structural information from standard high-resolution photos of the human iris has been developed. Initially, the major structures in the eye region are identified, including the pupil, iris, sclera, and eyelashes. Based on this segmentation, the ...

  17. Automatic Association of News Items.

    Science.gov (United States)

    Carrick, Christina; Watters, Carolyn

    1997-01-01

    Discussion of electronic news delivery systems and the automatic generation of electronic editions focuses on the association of related items of different media type, specifically photos and stories. The goal is to be able to determine to what degree any two news items refer to the same news event. (Author/LRW)

  18. Automatic milking : a better understanding

    NARCIS (Netherlands)

    Meijering, A.; Hogeveen, H.; Koning, de C.J.A.M.

    2004-01-01

    In 2000 the book Robotic Milking, reflecting the proceedings of an International Symposium which was held in The Netherlands came out. At that time, commercial introduction of automatic milking systems was no longer obstructed by technological inadequacies. Particularly in a few west-European countr

  19. Towards automatic identification of mismatched image pairs through loop constraints

    Science.gov (United States)

    Elibol, Armagan; Kim, Jinwhan; Gracias, Nuno; Garcia, Rafael

    2013-12-01

    Obtaining image sequences has become easier and easier thanks to the rapid progress on optical sensors and robotic platforms. Processing of image sequences (e.g., mapping, 3D reconstruction, Simultaneous Localisation and Mapping (SLAM)) usually requires 2D image registration. Recently, image registration is accomplished by detecting salient points in two images and then matching their descriptors. To eliminate outliers and to compute a planar transformation (homography) between the coordinate frames of images, robust methods (such as Random Sample Consensus (RANSAC) and Least Median of Squares (LMedS)) are employed. However, the image registration pipeline can sometimes provide a sufficient number of inliers within the error bounds even when images do not overlap. Such mismatches occur especially when the scene has repetitive texture and shows structural similarity. In this study, we present a method to identify the mismatches using closed-loop (cycle) constraints. The method exploits the fact that images forming a cycle should have an identity mapping when all the homographies between images in the cycle are multiplied. Cycles appear when the camera revisits an area that was imaged before, which is a common practice especially for mapping purposes. Our proposal extracts several cycles to obtain error statistics for each matched image pair. Then, it searches for image pairs that have extreme error histograms compared to the other pairs. We present experimental results with artificially added mismatched image pairs on real underwater image sequences.
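
    The loop constraint exploited here is that composing the homographies around a closed cycle should give, up to scale, the identity mapping; a minimal numerical check of that property might look like the sketch below (the homographies are invented test data, not part of the authors' pipeline).

```python
import numpy as np

def cycle_error(homographies):
    """Deviation from identity of the homographies composed around one cycle."""
    H = np.eye(3)
    for Hi in homographies:
        H = Hi @ H
    H /= H[2, 2]                       # homographies are defined only up to scale
    return np.linalg.norm(H - np.eye(3))

# consistent cycle: H3 = (H2 H1)^-1, so H3 H2 H1 = I
H1 = np.array([[1.0, 0.01, 5], [0.02, 1.0, -3], [0, 0, 1.0]])
H2 = np.array([[0.98, 0.0, -2], [0.0, 1.03, 4], [0, 0, 1.0]])
H3 = np.linalg.inv(H2 @ H1)
print(cycle_error([H1, H2, H3]))       # ~0: cycle is consistent (no mismatch)

# corrupt one link to simulate a mismatched image pair
H3_bad = H3 + np.array([[0, 0, 8], [0, 0, 0], [0, 0, 0]])
print(cycle_error([H1, H2, H3_bad]))   # large: the cycle flags the mismatch
```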

  20. Automatic Identification used in Audio-Visual indexing and Analysis

    Directory of Open Access Journals (Sweden)

    A. Satish Chowdary

    2011-09-01

    Full Text Available Locating a video clip in large collections is very important for retrieval applications, especially for digital rights management. We attempt to provide a comprehensive and high-level review of audiovisual features that can be extracted from the standard compressed domains, such as MPEG-1 and MPEG-2. This paper presents a graph transformation and matching approach to identify occurrences with potentially different ordering or length due to content editing. With a novel batch query algorithm to retrieve similar frames, the mapping relationship between the query and database video is first represented by a bipartite graph. The densely matched parts along the long sequence are then extracted, followed by a filter-and-refine search strategy to prune some irrelevant subsequences. During the filtering stage, Maximum Size Matching is deployed for each subgraph constructed by the query and candidate subsequence to obtain a smaller set of candidates. During the refinement stage, Sub-Maximum Similarity Matching is devised to identify the subsequence with the highest aggregate score from all candidates, according to a robust video similarity model that incorporates visual content, temporal order, and frame alignment information. This new algorithm is based on dynamic programming that fully uses the temporal dimension to measure the similarity between two video sequences. A normalized chromaticity histogram is used as a feature which is illumination invariant. Dynamic programming is applied at shot level to find the optimal nonlinear mapping between video sequences. Two new normalized distance measures are presented for video sequence matching. One measure is based on the normalization of the optimal path found by dynamic programming. The other measure combines both the visual features and the temporal information. The proposed distance measures are suitable for variable-length comparisons.

  1. Strengthen the Supervision over Pharmaceuticals via Modern Automatic Identification

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Fake pharmaceuticals inflict severe harm on people's health through their circulation in markets. To strengthen the supervision of the pharmaceutical market, China is improving and perfecting its national coding system in the field of pharmaceuticals. Bar-code tags and IC tags are available to the coding system. This paper summarizes the significance of the IC tag to the supervision of pharmaceuticals and gives a general strategic prospect of pharmaceutical supervision.

  2. 33 CFR 401.20 - Automatic Identification System.

    Science.gov (United States)

    2010-07-01

    ... close to the primary conning position in the navigation bridge and a standard 120 Volt, AC, 3-prong power receptacle accessible for the pilot's laptop computer; and (5) The Minimum Keyboard Display (MKD) shall be located as close as possible to the primary conning position and be visible; (6) Computation...

  3. Automatic Control of Configuration of Web Anonymization

    Directory of Open Access Journals (Sweden)

    Tomas Sochor

    2013-01-01

    Full Text Available Anonymization of Internet traffic usually hides details about the request originator from the target server. Such a disguise might be required in some situations, especially in the case of web browsing. Although web traffic anonymization is not a part of the HTTP specification, it can be achieved using a certain extra tool. Significant deceleration of anonymized traffic compared to normal traffic is inevitable, but it can be controlled in some cases, as this article suggests. The results presented here focus on measuring the parameters of such deceleration in terms of response time, transmission speed and latency, and on proposing a way to control it. This study focuses on TOR primarily because recent studies have concluded that other tools (like I2P and JAP) provide worse service. Sets of 14 file locations and 30 web pages have been formed and the latency, response time and transmission speed during the page or file download were measured repeatedly, both with TOR active in various configurations and without TOR. The main result presented here comprises several ways to improve TOR anonymization efficiency and a proposal for its automatic control. In spite of the fact that efficiency still remains too low compared to normal web traffic for ordinary use, its automatic control could make TOR a useful tool in special cases.

  4. Automatic Induction of Rule Based Text Categorization

    Directory of Open Access Journals (Sweden)

    D. Maghesh Kumar

    2010-12-01

    Full Text Available The automated categorization of texts into predefined categories has witnessed a booming interest in the last 10 years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. This paper describes a novel method for the automatic induction of rule-based text classifiers. This method supports a hypothesis language of the form "if T1, … or Tn occurs in document d, and none of Tn+1, …, Tn+m occurs in d, then classify d under category c," where each Ti is a conjunction of terms. This survey discusses the main approaches to text categorization that fall within the machine learning paradigm. Issues pertaining to three different problems, namely, document representation, classifier construction, and classifier evaluation, are discussed in detail.
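
    Evaluating one rule in the quoted hypothesis language amounts to checking that at least one positive term-conjunction occurs in the document and that no negative one does; the sketch below is a generic illustration with an invented rule and documents, not the induction algorithm itself.

```python
# one rule: positive term-conjunctions (any may fire) and negative ones (none may occur)
rule = {
    "category": "sports",
    "positive": [{"match", "score"}, {"league"}],   # T1 ... Tn (conjunctions of terms)
    "negative": [{"election"}],                     # Tn+1 ... Tn+m
}

def classify(document, rule):
    words = set(document.lower().split())
    fires = any(conj <= words for conj in rule["positive"])
    blocked = any(conj <= words for conj in rule["negative"])
    return rule["category"] if fires and not blocked else None

print(classify("The league match ended with a record score", rule))   # sports
print(classify("The election result was a close score match", rule))  # None (blocked)
```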

  5. Automatic sensor placement

    Science.gov (United States)

    Abidi, Besma R.

    1995-10-01

    Active sensing is the process of exploring the environment using multiple views of a scene captured by sensors from different points in space under different sensor settings. Applications of active sensing are numerous and can be found in the medical field (limb reconstruction), in archeology (bone mapping), in the movie and advertisement industry (computer simulation and graphics), in manufacturing (quality control), as well as in the environmental industry (mapping of nuclear dump sites). In this work, the focus is on the use of a single vision sensor (camera) to perform the volumetric modeling of an unknown object in an entirely autonomous fashion. The camera moves to acquire the necessary information in two ways: (a) viewing closely each local feature of interest using 2D data; and (b) acquiring global information about the environment via 3D sensor locations and orientations. A single object is presented to the camera and an initial arbitrary image is acquired. A 2D optimization process is developed. It brings the object in the field of view of the camera, normalizes it by centering the data in the image plane, aligns the principal axis with one of the camera's axes (arbitrarily chosen), and finally maximizes its resolution for better feature extraction. The enhanced image at each step is projected along the corresponding viewing direction. The new projection is intersected with previously obtained projections for volume reconstruction. During the global exploration of the scene, the current image as well as previous images are used to maximize the information in terms of shape irregularity as well as contrast variations. The scene on the borders of occlusion (contours) is modeled by an entropy-based objective functional. This functional is optimized to determine the best next view, which is recovered by computing the pose of the camera. A criterion based on the minimization of the difference between consecutive volume updates is set for termination of the

  6. Identification and quantification of phytochelatins in roots of rice to long-term exposure: evidence of individual role on arsenic accumulation and translocation.

    Science.gov (United States)

    Batista, Bruno Lemos; Nigar, Meher; Mestrot, Adrien; Rocha, Bruno Alves; Barbosa Júnior, Fernando; Price, Adam H; Raab, Andrea; Feldmann, Jörg

    2014-04-01

    Rice has the predilection to take up arsenic in the form of methylated arsenic (o-As) and inorganic arsenic species (i-As). Plants defend themselves using i-As efflux systems and the production of phytochelatins (PCs) to complex i-As. Our study focused on the identification and quantification of phytochelatins by HPLC-ICP-MS/ESI-MS, relating them to the several variables linked to As exposure. GSH, 11 PCs, and As-PC complexes from the roots of six rice cultivars (Italica Carolina, Dom Sofid, 9524, Kitrana 508, YRL-1, and Lemont) exposed to low and high levels of i-As were compared with total, i-As, and o-As in roots, shoots, and grains. Only Dom Sofid, Kitrana 508, and 9524 were found to produce higher levels of PCs even when exposed to low levels of As. PCs were only correlated to i-As in the roots (r=0.884, P <0.001). However, significant negative correlations to As transfer factors (TF) roots-grains (r= -0.739, P <0.05) and shoots-grains (r= -0.541, P <0.05), suggested that these peptides help in trapping i-As but not o-As in the roots, reducing grains' i-As. Italica Carolina reduced i-As in grains after high exposure, where some specific PCs had a special role in this reduction. In Lemont, exposure to elevated levels of i-As did not result in higher i-As levels in the grains and there were no significant increases in PCs or thiols. Finally, the high production of PCs in Kitrana 508 and Dom Sofid in response to high As treatment did not relate to a reduction of i-As in grains, suggesting that other mechanisms such as As-PC release and transport seems to be important in determining grain As in these cultivars.

  7. Proteomics-based identification of haptoglobin as a favourable serum biomarker for predicting long-term response to splenectomy in patients with primary immune thrombocytopenia

    Directory of Open Access Journals (Sweden)

    Zheng Chao-Xu

    2012-10-01

    Full Text Available Abstract Background Splenectomy is the most effective treatment for patients with primary immune thrombocytopenia (ITP) who fail to respond to steroid therapy. Thus far, there is no effective means to predict the long-term haematological response of the procedure. The purpose of this study was to identify serum biomarkers as predictors of long-term response based on a proteomics approach. Methods The serum samples of ITP patients were collected before splenectomy and seven days after surgery. After depletion of the abundant serum proteins, pooled preoperative serum samples from four responders to splenectomy, four nonresponders and four healthy controls were subjected to two-dimensional gel electrophoresis (2-DE). Nine protein spots with at least a five-fold alteration in expression between responders and nonresponders were all identified as haptoglobin (Hp) by matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF) mass spectrometer (MS) analysis. The validation of serum Hp expression was performed using enzyme-linked immunosorbent assays (ELISA) in thirty-seven responders, thirteen nonresponders and twenty-one healthy controls. Results The preoperative serum levels of Hp in the nonresponders (925.9 ± 293.5 μg/ml) were significantly lower than those in the responders (1417.4 ± 315.0 μg/ml, p < …) … (p > 0.05). The preoperative serum levels of Hp did not significantly correlate with the preoperative platelet count of the same blood samples (r = 0.244, p = 0.087), while they positively correlated with the postoperative peak platelet count (r = 0.622, p < …). Conclusions These results suggest that serum Hp levels may serve as a favourable predictor for the long-term response to splenectomy in ITP and may help to understand the pathophysiological differences between responders and nonresponders.

  8. CRISPR Recognition Tool (CRT): a tool for automatic detection of clustered regularly interspaced palindromic repeats

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Charles; Ramsey, Teresa L.; Sabree, Fareedah; Lowe,Micheal; Brown, Kyndall; Kyrpides, Nikos C.; Hugenholtz, Philip

    2007-05-01

    Clustered Regularly Interspaced Palindromic Repeats (CRISPRs) are a novel type of direct repeat found in a wide range of bacteria and archaea. CRISPRs are beginning to attract attention because of their proposed mechanism; that is, defending their hosts against invading extrachromosomal elements such as viruses. Existing repeat detection tools do a poor job of identifying CRISPRs due to the presence of unique spacer sequences separating the repeats. In this study, a new tool, CRT, is introduced that rapidly and accurately identifies CRISPRs in large DNA strings, such as genomes and metagenomes. CRT was compared to CRISPR detection tools, Patscan and Pilercr. In terms of correctness, CRT was shown to be very reliable, demonstrating significant improvements over Patscan for measures precision, recall and quality. When compared to Pilercr, CRT showed improved performance for recall and quality. In terms of speed, CRT also demonstrated superior performance, especially for genomes containing large numbers of repeats. In this paper a new tool was introduced for the automatic detection of CRISPR elements. This tool, CRT, was shown to be a significant improvement over the current techniques for CRISPR identification. CRT's approach to detecting repetitive sequences is straightforward. It uses a simple sequential scan of a DNA sequence and detects repeats directly without any major conversion or preprocessing of the input. This leads to a program that is easy to describe and understand; yet it is very accurate, fast and memory efficient, being O(n) in space and O(nm/l) in time.

  9. Synthesis of digital locomotive receiver of automatic locomotive signaling

    Directory of Open Access Journals (Sweden)

    K. V. Goncharov

    2013-02-01

    Full Text Available Purpose. Automatic locomotive signaling of continuous type with numeric coding (ALSN) has several disadvantages: a small number of signal indications, low noise stability, high inertia and low functional flexibility. The search for new and more advanced methods of signal processing for automatic locomotive signaling and the synthesis of a noise-proof digital locomotive receiver are essential. Methodology. The proposed algorithm for detection and identification of locomotive signaling codes is based on the mutual correlations of the received oscillation and the reference signals. For selecting the threshold levels of the decision element the following criterion has been formulated: the locomotive receiver should maximize the probability of the correct decision for a given probability of dangerous errors. Findings. It has been found that the random nature of the ALSN signal amplitude does not affect the detection algorithm. However, the distribution law and numeric characteristics of the signal amplitude affect the probability of errors, and should be considered when selecting threshold levels. According to the obtained algorithm of detection and identification of ALSN signals, the digital locomotive receiver has been synthesized. It contains a band-pass filter, peak limiter, normalizing amplifier with automatic gain control circuit, analog-to-digital converter and digital signal processor. Originality. The ALSN system is improved by transferring the technical means to a modern microelectronic element base; more advanced methods of detection and identification of locomotive signaling codes are applied. Practical value. Use of digital technology in the construction of the locomotive receiver ALSN will expand its functionality and will increase the noise immunity and operation stability of the locomotive signal system under various destabilizing factors.
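
    The detection principle described, correlating the received oscillation against reference signals and picking the best match, can be illustrated with a normalized-correlation sketch; the code templates, frequencies and noise level below are invented, and the threshold logic that controls dangerous errors is not reproduced.

```python
import numpy as np

def identify_code(received, references):
    """Pick the reference code whose normalized correlation with the received signal is largest."""
    def ncorr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = {name: ncorr(received, ref) for name, ref in references.items()}
    return max(scores, key=scores.get), scores

# toy reference code templates (one period each) and a noisy received signal
t = np.linspace(0, 1, 200)
refs = {"green": np.sign(np.sin(2 * np.pi * 3 * t)),
        "yellow": np.sign(np.sin(2 * np.pi * 5 * t))}
rx = refs["yellow"] + 0.3 * np.random.randn(t.size)
print(identify_code(rx, refs)[0])   # 'yellow' with high probability
```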

  10. Automatic feed system for ultrasonic machining

    Science.gov (United States)

    Calkins, Noel C.

    1994-01-01

    Method and apparatus for ultrasonic machining in which feeding of a tool assembly holding a machining tool toward a workpiece is accomplished automatically. In ultrasonic machining, a tool located just above a workpiece and vibrating in a vertical direction imparts vertical movement to particles of abrasive material which then remove material from the workpiece. The tool does not contact the workpiece. Apparatus for moving the tool assembly vertically is provided such that it operates with a relatively small amount of friction. Adjustable counterbalance means is provided which allows the tool to be immobilized in its vertical travel. A downward force, termed overbalance force, is applied to the tool assembly. The overbalance force causes the tool to move toward the workpiece as material is removed from the workpiece.

  11. Automatic Queuing Model for Banking Applications

    Directory of Open Access Journals (Sweden)

    Dr. Ahmed S. A. AL-Jumaily

    2011-08-01

    Full Text Available Queuing is the process of moving customers in a specific sequence to a specific service according to the customer need. The term scheduling stands for the process of computing a schedule. This may be done by a queuing-based scheduler. This paper focuses on bank line systems, the different queuing algorithms that are used in banks to serve the customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing the bank queuing process that can analyse the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the testing results and the factor of the average waiting time. The main innovation of this work is that the average waiting time is taken into account in processing, together with the process of switching to the scheduling algorithm that gives the best average waiting time.
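
    The quantity the proposed scheduler optimizes, the average waiting time, can be computed for a single-teller first-come-first-served line as in the sketch below; the arrival and service times are invented and the multi-algorithm switching of the paper is not reproduced.

```python
def fifo_average_wait(arrivals, service_times):
    """Average waiting time for one server that serves customers in arrival order."""
    t = 0.0                     # time at which the server becomes free
    total_wait = 0.0
    for arrive, service in sorted(zip(arrivals, service_times)):
        start = max(arrive, t)  # a customer waits if the server is still busy
        total_wait += start - arrive
        t = start + service
    return total_wait / len(arrivals)

# illustrative data: arrival minutes and service durations
print(fifo_average_wait([0, 1, 2, 8], [4, 3, 5, 2]))   # 3.0 minutes
```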

  12. Human-competitive automatic topic indexing

    CERN Document Server

    Medelyan, Olena

    2009-01-01

    Topic indexing is the task of identifying the main topics covered by a document. These are useful for many purposes: as subject headings in libraries, as keywords in academic publications and as tags on the web. Knowing a document’s topics helps people judge its relevance quickly. However, assigning topics manually is labor intensive. This thesis shows how to generate them automatically in a way that competes with human performance. Three kinds of indexing are investigated: term assignment, a task commonly performed by librarians, who select topics from a controlled vocabulary; tagging, a popular activity of web users, who choose topics freely; and a new method of keyphrase extraction, where topics are equated to Wikipedia article names. A general two-stage algorithm is introduced that first selects candidate topics and then ranks them by significance based on their properties. These properties draw on statistical, semantic, domain-specific and encyclopedic knowledge. They are combined using a machine learn...

  13. Automatic radar target recognition of objects falling on railway tracks

    International Nuclear Information System (INIS)

    This paper presents an automatic radar target recognition procedure based on complex resonances using the signals provided by ultra-wideband radar. This procedure is dedicated to detection and identification of objects lying on railway tracks. For an efficient complex resonance extraction, a comparison between several pole extraction methods is illustrated. Therefore, preprocessing methods are presented aiming to remove most of the erroneous poles interfering with the discrimination scheme. Once physical poles are determined, a specific discrimination technique is introduced based on the Euclidean distances. Both simulation and experimental results are depicted showing an efficient discrimination of different targets including guided transport passengers

  14. Identification of the growth hormone-releasing hormone analogue [Pro1, Val14]-hGHRH with an incomplete C-term amidation in a confiscated product.

    Science.gov (United States)

    Esposito, Simone; Deventer, Koen; Van Eenoo, Peter

    2014-01-01

    In this work, a modified version of the 44 amino acid human growth hormone-releasing hormone (hGHRH(1-44)) containing an N-terminal proline extension, a valine residue in position 14, and a C-terminus amidation (sequence: PYADAIFTNSYRKVVLGQLSARKLLQDIMSRQQGESNQERGARARL-NH2 ) has been identified in a confiscated product by liquid chromatography-high resolution mass spectrometry (LC-HRMS). Investigation of the product suggests also an incomplete C-term amidation. Similarly to other hGHRH analogues, available in black markets, this peptide can potentially be used as performance-enhancing drug due to its growth hormone releasing activity and therefore it should be considered as a prohibited substance in sport. Additionally, the presence of partially amidated molecule reveals the poor pharmaceutical quality of the preparation, an aspect which represents a big concern for public health as well. PMID:25283153

  15. Semi-automatic removal of foreground stars from images of galaxies

    CERN Document Server

    Frei, Z

    1996-01-01

    A new procedure, designed to remove foreground stars from galaxy profiles is presented. Although several programs exist for stellar and faint object photometry, none of them treat star removal from the images very carefully. I present my attempt to develop such a system, and briefly compare the performance of my software to one of the well known stellar photometry packages, DAOPhot. Major steps in my procedure are: (1) automatic construction of an empirical 2D point spread function from well separated stars that are situated off the galaxy; (2) automatic identification of those peaks that are likely to be foreground stars, scaling the PSF and removing these stars, and patching residuals (in the automatically determined smallest possible area where residuals are truly significant); and (3) cosmetic fix of remaining degradations in the image. The algorithm and software presented here is significantly better for automatic removal of foreground stars from images of galaxies than DAOPhot or similar packages, since...

  16. Long-term high frequency measurements of ethane, benzene and methyl chloride at Ragged Point, Barbados: Identification of long-range transport events

    Directory of Open Access Journals (Sweden)

    A.T. Archibald

    2015-09-01

    Full Text Available AbstractHere we present high frequency long-term observations of ethane, benzene and methyl chloride from the AGAGE Ragged Point, Barbados, monitoring station made using a custom built GC-MS system. Our analysis focuses on the first three years of data (2005–2007 and on the interpretation of periodic episodes of high concentrations of these compounds. We focus specifically on an exemplar episode during September 2007 to assess if these measurements are impacted by long-range transport of biomass burning and biogenic emissions. We use the Lagrangian Particle Dispersion model, NAME, run forwards and backwards in time to identify transport of air masses from the North East of Brazil during these events. To assess whether biomass burning was the cause we used hot spots detected using the MODIS instrument to act as point sources for simulating the release of biomass burning plumes. Excellent agreement for the arrival time of the simulated biomass burning plumes and the observations of enhancements in the trace gases indicates that biomass burning strongly influenced these measurements. These modelling data were then used to determine the emissions required to match the observations and compared with bottom up estimates based on burnt area and literature emission factors. Good agreement was found between the two techniques highlight the important role of biomass burning. The modelling constrained by in situ observations suggests that the emission factors were representative of their known upper limits, with the in situ data suggesting slightly greater emissions of ethane than the literature emission factors account for. Further analysis was performed concluding only a small role for biogenic emissions of methyl chloride from South America impacting measurements at Ragged Point. These results highlight the importance of long-term high frequency measurements of NMHC and ODS and highlight how these data can be used to determine sources of emissions

  17. Bilirubin nomograms for identification of neonatal hyperbilirubinemia in healthy term and late-preterm infants: a systematic review and meta-analysis

    Institute of Scientific and Technical Information of China (English)

    Zhang-Bin Yu; Shu-Ping Han; Chao Chen

    2014-01-01

    Background: Hyperbilirubinemia occurs in most healthy term and late-preterm infants, and must be monitored to identify those who might develop severe hyperbilirubinemia. Total serum bilirubin (TSB) or transcutaneous bilirubin (TcB) nomograms have been developed and validated to identify neonatal hyperbilirubinemia. This study aimed to review previously published studies and compare the TcB nomograms with the TSB nomogram, and to determine if the former has the same predictive value for significant hyperbilirubinemia as the TSB nomogram does. Methods: A predefined search strategy and inclusion criteria were set up. We selected studies assessing the predictive ability of TSB/TcB nomograms to identify significant hyperbilirubinemia in healthy term and late-preterm infants. Two independent reviewers assessed the quality and extracted the data from the included studies. Meta-Disc 1.4 analysis software was used to calculate the pooled sensitivity, specificity, and positive likelihood ratio of TcB/TSB nomograms. A pooled summary of the receiver operating characteristic of the TcB/TSB nomograms was created. Results: After screening 187 publications from electronic database searches and reference lists of eligible articles, we included 14 studies in the systematic review and meta-analysis. Eleven studies were of medium methodological quality. The remaining three studies were of low methodological quality. Seven studies evaluated the TcB nomograms, and seven studies assessed TSB nomograms. There were no differences between the predictive abilities of the TSB and TcB nomograms (the pooled area under curve was 0.819 vs. 0.817). Conclusions: This study showed that TcB nomograms had the same predictive value as TSB nomograms, both of which could be used to identify subsequent significant hyperbilirubinemia. But this result should be interpreted cautiously because some methodological limitations of these included studies were identified in this review.

  18. An Automatic Proof of Euler's Formula

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2005-05-01

    Full Text Available In this information age, everything is digitalized. The encoding of functions and the automatic proof of functions are important. This paper discusses the automatic calculation of Taylor expansion coefficients; as an example, it can be applied to prove Euler's formula automatically.
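
    The idea of computing Taylor expansion coefficients automatically and using them to check Euler's formula e^(ix) = cos x + i sin x can be sketched with SymPy as below; this is only a generic illustration of the approach, not the paper's encoding or proof system.

```python
import sympy as sp

x = sp.symbols('x')

def taylor_coeffs(expr, n):
    """First n Taylor coefficients of expr around x = 0, computed automatically."""
    return [sp.simplify(expr.diff(x, k).subs(x, 0) / sp.factorial(k)) for k in range(n)]

lhs = taylor_coeffs(sp.exp(sp.I * x), 8)
rhs = taylor_coeffs(sp.cos(x) + sp.I * sp.sin(x), 8)
print(lhs == rhs)   # True: the two expansions agree term by term
```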

  19. Self-Compassion and Automatic Thoughts

    Science.gov (United States)

    Akin, Ahmet

    2012-01-01

    The aim of this research is to examine the relationships between self-compassion and automatic thoughts. Participants were 299 university students. In this study, the Self-compassion Scale and the Automatic Thoughts Questionnaire were used. The relationships between self-compassion and automatic thoughts were examined using correlation analysis…

  20. Automatically Determining Scale Within Unstructured Point Clouds

    Science.gov (United States)

    Kadamen, Jayren; Sithole, George

    2016-06-01

    Three dimensional models obtained from imagery have an arbitrary scale and therefore have to be scaled. Automatically scaling these models requires the detection of objects in these models, which can be computationally intensive. Real-time object detection may pose problems for applications such as indoor navigation. This investigation poses the idea that relational cues, specifically height ratios, within indoor environments may offer an easier means to obtain scales for models created using imagery. The investigation aimed to show two things, (a) that the size of objects, especially the height off the ground, is consistent within an environment, and (b) that based on this consistency, objects can be identified and their general size used to scale a model. To test the idea, a hypothesis is first tested on a terrestrial lidar scan of an indoor environment. Later, as a proof of concept, the same test is applied to a model created using imagery. The most notable finding was that the detection of objects can be more readily done by studying the ratio between the dimensions of objects that have their dimensions defined by human physiology. For example, the dimensions of desks and chairs are related to the height of an average person. In the test, the difference between generalised and actual dimensions of objects was assessed. A maximum difference of 3.96% (2.93 cm) was observed from automated scaling. By analysing the ratio between the heights (distance from the floor) of the tops of objects in a room, identification was also achieved.
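
    Once an object with a physiologically constrained height has been identified, the scale follows from the ratio of its known real-world height to its height in model units; the sketch below uses assumed reference heights and invented model measurements purely for illustration.

```python
# heights of detected objects in model units (arbitrary scale) and
# their typical real-world heights in metres (assumed reference values)
detections = {
    "desk_top":   {"model": 1.84, "reference": 0.72},
    "door_top":   {"model": 5.10, "reference": 2.00},
    "chair_seat": {"model": 1.15, "reference": 0.45},
}

# one scale estimate per object, combined robustly with the median
scales = sorted(o["reference"] / o["model"] for o in detections.values())
scale = scales[len(scales) // 2]
print(f"model units -> metres: multiply by {scale:.3f}")
```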

  1. SPHOTOM - Package for an Automatic Multicolour Photometry

    Science.gov (United States)

    Parimucha, Š.; Vaňko, M.; Mikloš, P.

    2012-04-01

    We present basic information about the SPHOTOM package for automatic multicolour photometry. The package is being developed as part of a photometric pipeline, which we plan to use in the near future with our new instruments. It can operate in two independent modes: (i) a GUI mode, in which the user selects images and controls the functions of the package through the interface, and (ii) a command-line mode, in which all processes are controlled using a main parameter file. SPHOTOM is developed as a universal package for Linux-based systems with easy implementation for different observatories. The photometric part of the package is based on the SExtractor code, which allows us to detect all objects in the images and perform their photometry with different apertures. We can also compute astrometric solutions for all images for a correct cross-identification of the stars across images. The result is a catalogue of all objects with their instrumental photometric measurements, which are subsequently used for differential magnitude calculations with one or more comparison stars, transformations to an international system, and determinations of colour indices.

  2. Automatic schema evolution in Root

    International Nuclear Information System (INIS)

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and objects inspected, even when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case when multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file.

  3. Automatic spikes detection in seismogram

    Institute of Scientific and Technical Information of China (English)

    王海军; 靳平; 刘贵忠

    2003-01-01

    Data processing for a seismic network is complex and tedious, because a large amount of data is recorded every day, making it impossible to process all of it manually. Seismic data should therefore be processed automatically to produce initial results for event detection and location, which are afterwards reviewed and modified by an analyst. In automatic processing, data quality checking is important. Three main kinds of problem data exist in real seismic records: spikes, repeated data and dropouts. A spike is defined as an isolated large-amplitude point; the other two kinds share the feature that the amplitudes of the sample points are uniform over an interval. In data quality checking, the first step is to detect and count problem data in a data segment; if the proportion of problem data exceeds a threshold, the whole segment is masked and excluded from later processing.
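
    A simple illustrative spike detector along the lines of the definition above (an isolated large-amplitude point); the robust threshold based on the median absolute deviation is an assumption, not the authors' algorithm.

```python
# Illustrative detector: flag isolated samples whose amplitude deviates
# strongly from a robust estimate of the trace's typical variation.
import numpy as np

def detect_spikes(signal, k=8.0):
    """Return indices of isolated large-amplitude samples."""
    signal = np.asarray(signal, dtype=float)
    med = np.median(signal)
    mad = np.median(np.abs(signal - med))      # robust scale estimate
    outlier = np.abs(signal - med) > k * mad   # large-amplitude points
    # keep only isolated outliers (both neighbours are ordinary samples)
    isolated = outlier.copy()
    isolated[1:-1] &= ~outlier[:-2] & ~outlier[2:]
    isolated[0] = isolated[-1] = False
    return np.where(isolated)[0]

rng = np.random.default_rng(0)
trace = rng.normal(0, 1, 1000)
trace[400] = 25.0             # injected spike
print(detect_spikes(trace))   # -> [400]
```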

  4. Physics of Automatic Target Recognition

    CERN Document Server

    Sadjadi, Firooz

    2007-01-01

    Physics of Automatic Target Recognition addresses the fundamental physical bases of sensing and information extraction in the state-of-the-art automatic target recognition field. It explores both passive and active multispectral sensing, polarimetric diversity, complex signature exploitation, sensor and processing adaptation, transformation of electromagnetic and acoustic waves in their interactions with targets, background clutter, transmission media, and sensing elements. The general inverse scattering and advanced signal processing techniques and scientific evaluation methodologies used in this multidisciplinary field are part of this exposition. The issues of modeling target signatures in various spectral modalities (LADAR, IR, SAR, high resolution radar, acoustic, seismic, visible, hyperspectral) in diverse geometric aspects are addressed. The methods for signal processing and classification cover concepts such as sensor adaptive and artificial neural networks, time reversal filt...

  5. Automatic Schema Evolution in Root

    Institute of Scientific and Technical Information of China (English)

    Rene Brun; Fons Rademakers

    2001-01-01

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition, this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and objects inspected, even when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case when multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file.

  6. The Automaticity of Social Life

    OpenAIRE

    Bargh, John A.; Williams, Erin L.

    2006-01-01

    Much of social life is experienced through mental processes that are not intended and about which one is fairly oblivious. These processes are automatically triggered by features of the immediate social environment, such as the group memberships of other people, the qualities of their behavior, and features of social situations (e.g., norms, one's relative power). Recent research has shown these nonconscious influences to extend beyond the perception and interpretation of the social world to ...

  7. Automatically-Programed Machine Tools

    Science.gov (United States)

    Purves, L.; Clerman, N.

    1985-01-01

    Software produces cutter location files for numerically-controlled machine tools. APT, an acronym for Automatically Programed Tools, is among the most widely used software systems for computerized machine tools. APT was developed for the explicit purpose of providing an effective software system for programming NC machine tools. The APT system includes the specification of the APT programming language and a language processor, which executes APT statements and generates the NC machine-tool motions specified by those statements.

  8. Automatic Generation of Technical Documentation

    OpenAIRE

    Reiter, Ehud; Mellish, Chris; Levine, John

    1994-01-01

    Natural-language generation (NLG) techniques can be used to automatically produce technical documentation from a domain knowledge base and linguistic and contextual models. We discuss this application of NLG technology from both a technical and a usefulness (costs and benefits) perspective. This discussion is based largely on our experiences with the IDAS documentation-generation project, and the reactions various interested people from industry have had to IDAS. We hope that this summary of ...

  9. Annual review in automatic programming

    CERN Document Server

    Halpern, Mark I; Bolliet, Louis

    2014-01-01

    Computer Science and Technology and their Application is an eight-chapter book that first presents a tutorial on database organization. Subsequent chapters describe the general concepts of the Simula 67 programming language; incremental compilation and conversational interpretation; dynamic syntax; and ALGOL 68. Other chapters discuss a general-purpose conversational system for graphical programming and automatic theorem proving based on resolution. A survey of extensible programming languages is also given.

  10. The Automatic Galaxy Collision Software

    CERN Document Server

    Smith, Beverly J; Pfeiffer, Phillip; Perkins, Sam; Barkanic, Jason; Fritts, Steve; Southerland, Derek; Manchikalapudi, Dinikar; Baker, Matt; Luckey, John; Franklin, Coral; Moffett, Amanda; Struck, Curtis

    2009-01-01

    The key to understanding the physical processes that occur during galaxy interactions is dynamical modeling, and especially the detailed matching of numerical models to specific systems. To make modeling interacting galaxies more efficient, we have constructed the `Automatic Galaxy Collision' (AGC) code, which requires less human intervention in finding good matches to data. We present some preliminary results from this code for the well-studied system Arp 284 (NGC 7714/5), and address questions of uniqueness of solutions.

  11. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with ``Automatic Validation of Numerical Solutions''. The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings...... of an integral operator and uses interval Bernstein polynomials for enclosing the solution. Two numerical examples are given, using two orders of approximation and using different numbers of discretization points....

  12. De Novo Transcriptome Assembly and Identification of Gene Candidates for Rapid Evolution of Soil Al Tolerance in Anthoxanthum odoratum at the Long-Term Park Grass Experiment.

    Directory of Open Access Journals (Sweden)

    Billie Gould

    Full Text Available Studies of adaptation in the wild grass Anthoxanthum odoratum at the Park Grass Experiment (PGE) provided one of the earliest examples of rapid evolution in plants. Anthoxanthum has become locally adapted to differences in soil Al toxicity, which have developed there due to soil acidification from long-term experimental fertilizer treatments. In this study, we used transcriptome sequencing to identify Al stress-responsive genes in Anthoxanthum and identify candidates among them for further molecular study of rapid Al tolerance evolution at the PGE. We examined the Al content of Anthoxanthum tissues and conducted RNA-sequencing of root tips, the primary site of Al-induced damage. We found that despite its high tolerance Anthoxanthum is not an Al-accumulating species. Genes similar to those involved in organic acid exudation (TaALMT1, ZmMATE), cell wall modification (OsSTAR1), and internal Al detoxification (OsNRAT1) in cultivated grasses were responsive to Al exposure. Expression of a large suite of novel loci was also triggered by early exposure to Al stress in roots. Three hundred and forty-five transcripts were significantly more up- or down-regulated in tolerant vs. sensitive Anthoxanthum genotypes, providing important targets for future study of rapid evolution at the PGE.

  13. Metabolite Profiling of Diverse Rice Germplasm and Identification of Conserved Metabolic Markers of Rice Roots in Response to Long-Term Mild Salinity Stress

    Directory of Open Access Journals (Sweden)

    Myung Hee Nam

    2015-09-01

    Full Text Available The sensitivity of rice to salt stress greatly depends on growth stages, organ types and cultivars. In particular, the roots of young rice seedlings are highly salt-sensitive organs that limit plant growth, even under mild soil salinity conditions. In an attempt to identify metabolic markers of rice roots responding to salt stress, metabolite profiling was performed by 1H-NMR spectroscopy in 38 rice genotypes that varied in biomass accumulation under long-term mild salinity conditions. Multivariate statistical analysis showed separation of the control and salt-treated rice roots and of rice genotypes with differential growth potential. By quantitative analyses of the 1H-NMR data, five conserved salt-responsive metabolic markers of rice roots were identified. Sucrose, allantoin and glutamate accumulated under salt stress, whereas the levels of glutamine and alanine decreased. A positive correlation of metabolite changes with the growth potential and salt tolerance of rice genotypes was observed for allantoin and glutamine. Adjustment of nitrogen metabolism in rice roots is likely to be closely related to maintaining growth potential and increasing the stress tolerance of rice.

  14. De Novo Transcriptome Assembly and Identification of Gene Candidates for Rapid Evolution of Soil Al Tolerance in Anthoxanthum odoratum at the Long-Term Park Grass Experiment.

    Science.gov (United States)

    Gould, Billie; McCouch, Susan; Geber, Monica

    2015-01-01

    Studies of adaptation in the wild grass Anthoxanthum odoratum at the Park Grass Experiment (PGE) provided one of the earliest examples of rapid evolution in plants. Anthoxanthum has become locally adapted to differences in soil Al toxicity, which have developed there due to soil acidification from long-term experimental fertilizer treatments. In this study, we used transcriptome sequencing to identify Al stress-responsive genes in Anthoxanthum and identify candidates among them for further molecular study of rapid Al tolerance evolution at the PGE. We examined the Al content of Anthoxanthum tissues and conducted RNA-sequencing of root tips, the primary site of Al-induced damage. We found that despite its high tolerance Anthoxanthum is not an Al-accumulating species. Genes similar to those involved in organic acid exudation (TaALMT1, ZmMATE), cell wall modification (OsSTAR1), and internal Al detoxification (OsNRAT1) in cultivated grasses were responsive to Al exposure. Expression of a large suite of novel loci was also triggered by early exposure to Al stress in roots. Three hundred and forty-five transcripts were significantly more up- or down-regulated in tolerant vs. sensitive Anthoxanthum genotypes, providing important targets for future study of rapid evolution at the PGE. PMID:26148203

  15. An efficient scheme for automatic web pages categorization using the support vector machine

    Science.gov (United States)

    Bhalla, Vinod Kumar; Kumar, Neeraj

    2016-07-01

    In the past few years, with the evolution of the Internet and related technologies, the number of Internet users has grown exponentially. These users demand access to relevant web pages within a fraction of a second. To achieve this goal, an efficient categorization of web page contents is required. Manual categorization of billions of web pages with high accuracy is a challenging task, and most of the existing techniques reported in the literature are semi-automatic, with which a high level of accuracy cannot be achieved. To address this, this paper proposes an automatic categorization of web pages into domain categories. The proposed scheme is based on the identification of specific and relevant features of the web pages. In the proposed scheme, extraction and evaluation of features are done first, followed by filtering of the feature set for categorization of domain web pages. A feature extraction tool based on the HTML document object model of the web page is developed in the proposed scheme. Feature extraction and weight assignment are based on a collection of domain-specific keyword lists developed by considering various domain pages. Moreover, the keyword list is reduced on the basis of keyword IDs, and stemming of keywords and tag text is applied to achieve higher accuracy. An extensive feature set is generated to develop a robust classification technique. The proposed scheme was evaluated using a machine learning method in combination with feature extraction and statistical analysis, using a support vector machine kernel as the classification tool. The results obtained confirm the effectiveness of the proposed scheme in terms of its accuracy for different categories of web pages.
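
    A much-simplified sketch of the overall idea (keyword-based text features from HTML plus an SVM classifier); the tiny training set, the TF-IDF weighting and the linear kernel are stand-ins, not the authors' exact feature extraction or parameters.

```python
# Rough sketch: extract visible text from web pages and classify the domain
# category with an SVM.
from html.parser import HTMLParser
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def page_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# Hypothetical tiny training set: (HTML string, domain category) pairs.
pages = ["<html><body>cheap flights hotel booking</body></html>",
         "<html><body>python tutorial compiler programming</body></html>"]
labels = ["travel", "technology"]

clf = make_pipeline(TfidfVectorizer(stop_words="english"), SVC(kernel="linear"))
clf.fit([page_text(p) for p in pages], labels)
print(clf.predict([page_text("<html><body>hotel booking deals</body></html>")]))
```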

  16. Identification of genes co-upregulated with Arc during BDNF-induced long-term potentiation in adult rat dentate gyrus in vivo.

    Science.gov (United States)

    Wibrand, Karin; Messaoudi, Elhoucine; Håvik, Bjarte; Steenslid, Vibeke; Løvlie, Roger; Steen, Vidar M; Bramham, Clive R

    2006-03-01

    Brain-derived neurotrophic factor (BDNF) is a critical regulator of transcription-dependent adaptive neuronal responses, such as long-term potentiation (LTP). Brief infusion of BDNF into the dentate gyrus of adult anesthetized rats triggers stable LTP at medial perforant path-granule synapses that is transcription-dependent and requires induction of the immediate early gene Arc. Rather than acting alone, Arc is likely to be part of a larger BDNF-induced transcriptional program. Here, we used cDNA microarray expression profiling to search for genes co-upregulated with Arc 3 h after BDNF-LTP induction. Of nine cDNAs encoding for known genes and up-regulated more than four-fold, we selected five genes, Narp, neuritin, ADP-ribosylation factor-like protein-4 (ARL4L), TGF-beta-induced immediate early gene-1 (TIEG1) and CARP, for further validation. Real-time PCR confirmed robust up-regulation of these genes in an independent set of BDNF-LTP experiments, whereas infusion of the control protein cytochrome C had no effect. In situ hybridization histochemistry further revealed up-regulation of all five genes in somata of post-synaptic granule cells following both BDNF-LTP and high-frequency stimulation-induced LTP. While Arc synthesis is critical for local actin polymerization and stable LTP formation, several of the co-upregulated genes have known functions in excitatory synaptogenesis, axon guidance and glutamate receptor clustering. These results provide novel insight into gene expression responses underlying BDNF-induced synaptic consolidation in the adult brain in vivo. PMID:16553613

  17. Extension Matching of Automatic Assembly for Mass Personalized Products Customization%大批量个性定制产品自动化装配的可拓匹配

    Institute of Scientific and Technical Information of China (English)

    张国伟

    2011-01-01

    The matter-element theory is applied to classify customized accessories in order to improve the efficiency of mass production of customized products, reduce inventory costs and achieve a rapid market response for individualized products. The control method of extension theory is used to improve the automatic assembly line in terms of similarity identification, accurate control and quality identification.

  18. Multilabel Learning for Automatic Web Services Tagging

    Directory of Open Access Journals (Sweden)

    Mustapha AZNAG

    2014-08-01

    Full Text Available Recently, some web services portals and search engines, such as Biocatalogue and Seekda!, have allowed users to manually annotate Web services using tags. User tags provide meaningful descriptions of services and allow users to index and organize their contents. The tagging technique is widely used to annotate objects in Web 2.0 applications. In this paper we propose a novel probabilistic topic model (which extends the CorrLDA model, Correspondence Latent Dirichlet Allocation) to automatically tag web services according to existing manual tags. Our probabilistic topic model is a latent variable model that exploits local label correlations. Indeed, exploiting label correlations is a challenging and crucial problem, especially in the multi-label learning context. Moreover, several existing systems can recommend tags for web services based on existing manual tags, which in most cases have better quality. We also develop three strategies to automatically recommend the best tags for web services. In addition, we propose WS-Portal, an enriched web services search engine that contains 7063 providers, 115 sub-classes of category and 22236 web services crawled from the Internet. In WS-Portal, several technologies are employed to improve the effectiveness of web service discovery (i.e. web services clustering, tag recommendation, service rating and monitoring). Our experiments are carried out on real-world web services. The comparisons of Precision@n and Normalised Discounted Cumulative Gain (NDCGn) values for our approach indicate that the method presented in this paper outperforms the method based on CorrLDA in terms of ranking and quality of generated tags.

  19. Digital movie-based on automatic titrations.

    Science.gov (United States)

    Lima, Ricardo Alexandre C; Almeida, Luciano F; Lyra, Wellington S; Siqueira, Lucas A; Gaião, Edvaldo N; Paiva Junior, Sérgio S L; Lima, Rafaela L F C

    2016-01-15

    This study proposes the use of digital movies (DMs) in a flow-batch analyzer (FBA) to perform automatic, fast and accurate titrations. The term used for this process is "Digital movie-based on automatic titrations" (DMB-AT). A webcam records the DM during the addition of the titrant to the mixing chamber (MC). While the DM is recorded, it is decompiled into frames ordered sequentially at a constant rate of 26 frames per second (FPS). The first frame is used as a reference to define the region of interest (ROI) of 28×13 pixels and the R, G and B values, which are used to calculate the Hue (H) values for each frame. The Pearson correlation coefficient (r) is calculated between the H values of the initial frame and each subsequent frame. The titration curves are plotted in real time using the r values and the opening time of the titrant valve. The end point is estimated by the second derivative method. Software written in the C language manages all analytical steps and data treatment in real time. The feasibility of the method was demonstrated by application to acid/base test samples and edible oils. Results were compared with classical titration and did not present statistically significant differences when the paired t-test was applied at the 95% confidence level. The proposed method is able to process about 117-128 samples per hour for the test and edible oil samples, respectively, and its precision was confirmed by overall relative standard deviation (RSD) values, always less than 1.0%. PMID:26592600
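
    An illustrative sketch of the signal processing described (per-pixel Hue of the ROI, Pearson r against the first frame, endpoint from the second derivative), assuming the decompiled frames are already available as RGB arrays; it is not the authors' C program.

```python
# Sketch: Hue-based titration curve from decompiled movie frames and a
# second-derivative endpoint estimate.
import colorsys
import numpy as np

def hue_image(rgb_roi):
    """Per-pixel Hue (0..1) of an RGB region of interest (H x W x 3, uint8)."""
    pixels = rgb_roi.reshape(-1, 3) / 255.0
    return np.array([colorsys.rgb_to_hsv(*p)[0] for p in pixels])

def titration_curve(roi_frames):
    """Pearson r between the first frame's Hue values and every frame's."""
    ref = hue_image(roi_frames[0])
    return np.array([np.corrcoef(ref, hue_image(f))[0, 1] for f in roi_frames])

def endpoint_index(r_values):
    """Second-derivative method: endpoint near a zero crossing of d2r."""
    d1 = np.gradient(r_values)
    d2 = np.gradient(d1)
    crossings = np.where(np.diff(np.sign(d2)) != 0)[0]
    if crossings.size == 0:
        return None
    # choose the crossing where the curve is changing fastest
    return int(crossings[np.argmax(np.abs(d1[crossings]))])
```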

  20. The RNA world, automatic sequences and oncogenetics

    International Nuclear Information System (INIS)

    We construct a model of the RNA world in terms of naturally evolving nucleotide sequences assuming only Crick-Watson base pairing and self-cleaving/splicing capability. These sequences have the following properties. 1) They are recognizable by an automaton (or automata). That is, for each k-sequence there exists a k-automaton which accepts, recognizes or generates the k-sequence. These are known as automatic sequences. Fibonacci and Morse-Thue sequences are the most natural outcome of pre-biotic chemical conditions. 2) Infinite (resp. large) sequences are self-similar (resp. nearly self-similar) under certain rewrite rules and consequently give rise to fractal (resp. fractal-like) structures. Computationally, such sequences can also be generated by their corresponding deterministic parallel rewrite system, known as a DOL system. The self-similar sequences are fixed points of their respective rewrite rules. Some of these automatic sequences have the capability to read or 'accept' other sequences, while others can detect errors and trigger error-correcting mechanisms. They can be enlarged and have block and/or palindrome structure. Linear recurring sequences such as the Fibonacci sequence are simply feedback shift registers, a well-known model of information-processing machines. We show that a mutation of any rewrite rule can cause a combinatorial explosion of errors and relate this to oncogenetical behavior. On the other hand, a mutation of sequences that are not rewrite rules leads to normal evolutionary change. Known experimental results support our hypothesis. (author). Refs

  1. Unification of automatic target tracking and automatic target recognition

    Science.gov (United States)

    Schachter, Bruce J.

    2014-06-01

    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.

  2. Automatic evaluations and exercise setting preference in frequent exercisers.

    Science.gov (United States)

    Antoniewicz, Franziska; Brand, Ralf

    2014-12-01

    The goals of this study were to test whether exercise-related stimuli can elicit automatic evaluative responses and whether automatic evaluations reflect exercise setting preference in highly active exercisers. An adapted version of the Affect Misattribution Procedure was employed. Seventy-two highly active exercisers (26 years ± 9.03; 43% female) were subliminally primed (7 ms) with pictures depicting typical fitness center scenarios or gray rectangles (control primes). After each prime, participants consciously evaluated the "pleasantness" of a Chinese symbol. Controlled evaluations were measured with a questionnaire and were more positive in participants who regularly visited fitness centers than in those who reported avoiding this exercise setting. Only center exercisers gave automatic positive evaluations of the fitness center setting (partial eta squared = .08). It is proposed that a subliminal Affect Misattribution Procedure paradigm can elicit automatic evaluations to exercising and that, in highly active exercisers, these evaluations play a role in decisions about the exercise setting rather than the amounts of physical exercise. Findings are interpreted in terms of a dual systems theory of social information processing and behavior.

  3. Semi-automatic classification of textures in thoracic CT scans

    Science.gov (United States)

    Kockelkorn, Thessa T. J. P.; de Jong, Pim A.; Schaefer-Prokop, Cornelia M.; Wittenberg, Rianne; Tiehuis, Audrey M.; Gietema, Hester A.; Grutters, Jan C.; Viergever, Max A.; van Ginneken, Bram

    2016-08-01

    The textural patterns in the lung parenchyma, as visible on computed tomography (CT) scans, are essential to make a correct diagnosis in interstitial lung disease. We developed one automatic and two interactive protocols for classification of normal and seven types of abnormal lung textures. Lungs were segmented and subdivided into volumes of interest (VOIs) with homogeneous texture using a clustering approach. In the automatic protocol, VOIs were classified automatically by an extra-trees classifier that was trained using annotations of VOIs from other CT scans. In the interactive protocols, an observer iteratively trained an extra-trees classifier to distinguish the different textures, by correcting mistakes the classifier makes in a slice-by-slice manner. The difference between the two interactive methods was whether or not training data from previously annotated scans was used in classification of the first slice. The protocols were compared in terms of the percentages of VOIs that observers needed to relabel. Validation experiments were carried out using software that simulated observer behavior. In the automatic classification protocol, observers needed to relabel on average 58% of the VOIs. During interactive annotation without the use of previous training data, the average percentage of relabeled VOIs decreased from 64% for the first slice to 13% for the second half of the scan. Overall, 21% of the VOIs were relabeled. When previous training data was available, the average overall percentage of VOIs requiring relabeling was 20%, decreasing from 56% in the first slice to 13% in the second half of the scan.
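
    A schematic sketch of the automatic protocol, assuming per-VOI texture feature vectors are already available; the feature dimensionality, labels and classifier settings are placeholders, not the authors' configuration.

```python
# Sketch: an extra-trees classifier trained on VOI texture features from
# previously annotated scans, then applied to VOIs of a new scan.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(42)

# Hypothetical texture feature vectors (e.g. intensity histograms or filter
# responses per VOI) from already-annotated scans, with 8 texture labels.
X_train = rng.normal(size=(500, 40))
y_train = rng.integers(0, 8, size=500)

clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# VOIs from a new scan: predictions would be shown to the observer for
# slice-by-slice correction in the interactive protocols.
X_new = rng.normal(size=(60, 40))
predicted = clf.predict(X_new)
```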

  4. Automatic evaluations and exercise setting preference in frequent exercisers.

    Science.gov (United States)

    Antoniewicz, Franziska; Brand, Ralf

    2014-12-01

    The goals of this study were to test whether exercise-related stimuli can elicit automatic evaluative responses and whether automatic evaluations reflect exercise setting preference in highly active exercisers. An adapted version of the Affect Misattribution Procedure was employed. Seventy-two highly active exercisers (26 years ± 9.03; 43% female) were subliminally primed (7 ms) with pictures depicting typical fitness center scenarios or gray rectangles (control primes). After each prime, participants consciously evaluated the "pleasantness" of a Chinese symbol. Controlled evaluations were measured with a questionnaire and were more positive in participants who regularly visited fitness centers than in those who reported avoiding this exercise setting. Only center exercisers gave automatic positive evaluations of the fitness center setting (partial eta squared = .08). It is proposed that a subliminal Affect Misattribution Procedure paradigm can elicit automatic evaluations to exercising and that, in highly active exercisers, these evaluations play a role in decisions about the exercise setting rather than the amounts of physical exercise. Findings are interpreted in terms of a dual systems theory of social information processing and behavior. PMID:25602145

  5. Unsupervised automatic music genre classification

    OpenAIRE

    Barreira, Luís Filipe Marques

    2010-01-01

    Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering. In this study we explore automatic music genre recognition and classification of digital music. Music has always been a reflection of cultural differences and an influence on our society. Today's digital content development triggered the massive use of digital music. Nowadays, digital music is manually labeled without following a universa...

  6. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming, Volume 4 is a collection of papers that deals with the GIER ALGOL compiler, a parameterized compiler based on mechanical linguistics, and the JOVIAL language. A couple of papers describes a commercial use of stacks, an IBM system, and what an ideal computer program support system should be. One paper reviews the system of compilation, the development of a more advanced language, programming techniques, machine independence, and program transfer to other machines. Another paper describes the ALGOL 60 system for the GIER machine including running ALGOL pro

  7. The Automaticity of Social Life.

    Science.gov (United States)

    Bargh, John A; Williams, Erin L

    2006-02-01

    Much of social life is experienced through mental processes that are not intended and about which one is fairly oblivious. These processes are automatically triggered by features of the immediate social environment, such as the group memberships of other people, the qualities of their behavior, and features of social situations (e.g., norms, one's relative power). Recent research has shown these nonconscious influences to extend beyond the perception and interpretation of the social world to the actual guidance, over extended time periods, of one's important goal pursuits and social interactions.

  8. Automatic analysis of multiparty meetings

    Indian Academy of Sciences (India)

    Steve Renals

    2011-10-01

    This paper is about the recognition and interpretation of multiparty meetings captured as audio, video and other signals. This is a challenging task since the meetings consist of spontaneous and conversational interactions between a number of participants: it is a multimodal, multiparty, multistream problem. We discuss the capture and annotation of the Augmented Multiparty Interaction (AMI) meeting corpus, the development of a meeting speech recognition system, and systems for the automatic segmentation, summarization and social processing of meetings, together with some example applications based on these systems.

  9. Automatic Inference of DATR Theories

    CERN Document Server

    Barg, P

    1996-01-01

    This paper presents an approach for the automatic acquisition of linguistic knowledge from unstructured data. The acquired knowledge is represented in the lexical knowledge representation language DATR. A set of transformation rules that establish inheritance relationships and a default-inference algorithm make up the basis components of the system. Since the overall approach is not restricted to a special domain, the heuristic inference strategy uses criteria to evaluate the quality of a DATR theory, where different domains may require different criteria. The system is applied to the linguistic learning task of German noun inflection.

  10. The Automaticity of Social Life.

    Science.gov (United States)

    Bargh, John A; Williams, Erin L

    2006-02-01

    Much of social life is experienced through mental processes that are not intended and about which one is fairly oblivious. These processes are automatically triggered by features of the immediate social environment, such as the group memberships of other people, the qualities of their behavior, and features of social situations (e.g., norms, one's relative power). Recent research has shown these nonconscious influences to extend beyond the perception and interpretation of the social world to the actual guidance, over extended time periods, of one's important goal pursuits and social interactions. PMID:18568084

  11. Automatic Generation of Technical Documentation

    CERN Document Server

    Reiter, E R; Levine, J; Reiter, Ehud; Mellish, Chris; Levine, John

    1994-01-01

    Natural-language generation (NLG) techniques can be used to automatically produce technical documentation from a domain knowledge base and linguistic and contextual models. We discuss this application of NLG technology from both a technical and a usefulness (costs and benefits) perspective. This discussion is based largely on our experiences with the IDAS documentation-generation project, and the reactions various interested people from industry have had to IDAS. We hope that this summary of our experiences with IDAS and the lessons we have learned from it will be beneficial for other researchers who wish to build technical-documentation generation systems.

  12. Coordinated hybrid automatic repeat request

    KAUST Repository

    Makki, Behrooz

    2014-11-01

    We develop a coordinated hybrid automatic repeat request (HARQ) approach. With the proposed scheme, if a user message is correctly decoded in the first HARQ rounds, its spectrum is allocated to other users, to improve the network outage probability and the users' fairness. The results, which are obtained for single- and multiple-antenna setups, demonstrate the efficiency of the proposed approach in different conditions. For instance, with a maximum of M retransmissions and single transmit/receive antennas, the diversity gain of a user increases from M to (J+1)(M-1)+1 where J is the number of users helping that user.

  13. Automatic transcription of polyphonic singing

    OpenAIRE

    Paščinski, Uroš

    2015-01-01

    In this work we focus on the automatic transcription of polyphonic singing, in particular on multiple fundamental frequency (F0) estimation. From field recordings, a test set of Slovenian folk songs with polyphonic singing is extracted and manually transcribed. On the test set we apply a general algorithm for multiple F0 detection. An interactive visualization of the main parts of the algorithm is made to analyse how it works and to detect possible issues. As the data set is ne...

  14. Toward an Automatic Calibration of Dual Fluoroscopy Imaging Systems

    Science.gov (United States)

    Al-Durgham, Kaleel; Lichti, Derek; Kuntze, Gregor; Sharma, Gulshan; Ronsky, Janet

    2016-06-01

    High-speed dual fluoroscopy (DF) imaging provides a novel, in-vivo solution to quantify the six-degree-of-freedom skeletal kinematics of humans and animals with sub-millimetre accuracy and high temporal resolution. A rigorous geometric calibration of DF system parameters is essential to ensure precise bony rotation and translation measurements. One way to achieve the system calibration is by performing a bundle adjustment with self-calibration. A first-time bundle adjustment-based system calibration was recently achieved. The system calibration through the bundle adjustment has been shown to be robust, precise, and straightforward. Nevertheless, due to the inherent absence of colour/semantic information in DF images, a significant amount of user input is needed to prepare the image observations for the bundle adjustment. This paper introduces a semi-automated methodology to minimise the amount of user input required to process calibration images and hence to facilitate the calibration task. The methodology is optimized for processing images acquired over a custom-made calibration frame with radio-opaque spherical targets. Canny edge detection is used to find distinct structural components of the calibration images. Edge-linking is applied to cluster the edge pixels into unique groups. Principal components analysis is utilized to automatically detect the calibration targets from the groups and to filter out possible outliers. Ellipse fitting is utilized to achieve the spatial measurements as well as to perform quality analysis over the detected targets. Single photo resection is used together with a template matching procedure to establish the image-to-object point correspondence and to simplify target identification. The proposed methodology provided 56,254 identified targets from 411 images that were used to run a second bundle adjustment-based DF system calibration. Compared to a previous fully manual procedure, the proposed methodology has

  15. A bar-code reader for an alpha-beta automatic counting system - FAG

    International Nuclear Information System (INIS)

    A bar-code laser system for sample number reading was integrated into the FAG Alpha-Beta automatic counting system. The sample identification by means of an attached bar-code label enables unmistakable and reliable attribution of results to the counted sample. Installation of the bar-code reader system required several modifications: Mechanical changes in the automatic sample changer, design and production of new sample holders, modification of the sample planchettes, changes in the electronic system, update of the operating software of the system (authors)

  16. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael

    2014-05-01

    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  17. Automatic Speech Segmentation Based on HMM

    OpenAIRE

    M. Kroul

    2007-01-01

    This contribution deals with the problem of automatic phoneme segmentation using HMMs. Automation of the speech segmentation task is important for applications where a large amount of data needs to be processed, so that manual segmentation is out of the question. In this paper we focus on the automatic segmentation of recordings that will be used to create a database of triphone synthesis units. For speech synthesis, the speech unit quality is a crucial aspect, so the maximal accuracy in segmentation is ...

  18. Towards unifying inheritance and automatic program specialization

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2002-01-01

    Inheritance allows a class to be specialized and its attributes refined, but implementation specialization can only take place by overriding with manually implemented methods. Automatic program specialization can generate a specialized, efficient implementation. However, specialization of programs ... with covariant specialization to control the automatic application of program specialization to class members. Lapis integrates object-oriented concepts, block structure, and techniques from automatic program specialization to provide both a language where object-oriented designs can be efficiently implemented...

  19. Automatic Control of Water Pumping Stations

    Institute of Scientific and Technical Information of China (English)

    Muhannad Alrheeh; JIANG Zhengfeng

    2006-01-01

    Automatic control of pumps is an attractive way to operate water pumping stations, which come in many kinds according to their functions. In this paper, the pumping station considered is used in a water supply system. The paper introduces the idea of a pump controller and the important factors that must be considered when designing an automatic control system for water pumping stations. The automatic control circuit and the function of all its components are then introduced.

  20. An automatic visual analysis system for tennis

    OpenAIRE

    Connaghan, Damien; Moran, Kieran; O'Connor, Noel E.

    2013-01-01

    This article presents a novel video analysis system for coaching tennis players of all levels, which uses computer vision algorithms to automatically edit and index tennis videos into meaningful annotations. Existing tennis coaching software lacks the ability to automatically index a tennis match into key events, and therefore, a coach who uses existing software is burdened with time-consuming manual video editing. This work aims to explore the effectiveness of a system to automatically de...

  1. 78 FR 58785 - Unique Device Identification System

    Science.gov (United States)

    2013-09-24

    ... device identification system, as required by section 519(f) of the FD&C Act (see 77 FR 40736). On July 9... how this term should be applied to HCT/Ps, ``where the donor identification is of singular importance... by lot or batch, unless the lot or batch identification is associated with a single donor, as...

  2. ANPS - AUTOMATIC NETWORK PROGRAMMING SYSTEM

    Science.gov (United States)

    Schroer, B. J.

    1994-01-01

    Development of some of the space program's large simulation projects -- like the project which involves simulating the countdown sequence prior to spacecraft liftoff -- requires the support of automated tools and techniques. The number of preconditions which must be met for a successful spacecraft launch and the complexity of their interrelationship account for the difficulty of creating an accurate model of the countdown sequence. Researchers developed ANPS for the Nasa Marshall Space Flight Center to assist programmers attempting to model the pre-launch countdown sequence. Incorporating the elements of automatic programming as its foundation, ANPS aids the user in defining the problem and then automatically writes the appropriate simulation program in GPSS/PC code. The program's interactive user dialogue interface creates an internal problem specification file from user responses which includes the time line for the countdown sequence, the attributes for the individual activities which are part of a launch, and the dependent relationships between the activities. The program's automatic simulation code generator receives the file as input and selects appropriate macros from the library of software modules to generate the simulation code in the target language GPSS/PC. The user can recall the problem specification file for modification to effect any desired changes in the source code. ANPS is designed to write simulations for problems concerning the pre-launch activities of space vehicles and the operation of ground support equipment and has potential for use in developing network reliability models for hardware systems and subsystems. ANPS was developed in 1988 for use on IBM PC or compatible machines. The program requires at least 640 KB memory and one 360 KB disk drive, PC DOS Version 2.0 or above, and GPSS/PC System Version 2.0 from Minuteman Software. The program is written in Turbo Prolog Version 2.0. GPSS/PC is a trademark of Minuteman Software. Turbo Prolog

  3. The Parametric Identification Of A Stationary Process

    Directory of Open Access Journals (Sweden)

    Radu BELEA

    2003-12-01

    Full Text Available In identification problems it is assumed that the process has at least one measurable input quantity and at least one measurable output quantity. The identification of a process has three stages: obtaining a record of the process's measurable quantities; choosing a proper mathematical model for the process; and extracting the values of the model parameters from the recorded data. The parametric identification problem is an optimization problem, in which the best combination of values for the model parameter set is sought. The paper presents the parametric identification of a water flow process in a laboratory stand. The identification had the following aims: a detailed understanding of how the stand works, finding a new illustrative experiment for the stand, the application of advanced automatic control techniques, and the design of a new stand intended to allow a larger variety of experiments.

  4. Automatic generation of stop word lists for information retrieval and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
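
    A sketch of the described ratio test; the terminology follows the abstract, but the tokenization, the threshold value and the truncation criterion are assumptions.

```python
# Sketch: build a stop word list from a corpus and a set of keywords by
# comparing keyword adjacency frequency with overall term frequency.
from collections import Counter
import re

def build_stop_list(documents, keywords, min_ratio=0.5, max_size=200):
    """Keep terms that frequently occur next to keywords; exclude a term when
    its adjacency/frequency ratio falls below min_ratio (assumed threshold)."""
    keywords = {k.lower() for k in keywords}
    term_freq = Counter()       # total occurrences of each term
    adjacency_freq = Counter()  # occurrences adjacent to a keyword
    for doc in documents:
        tokens = re.findall(r"[a-z']+", doc.lower())
        for i, tok in enumerate(tokens):
            term_freq[tok] += 1
            neighbours = tokens[max(i - 1, 0):i] + tokens[i + 1:i + 2]
            if any(n in keywords for n in neighbours):
                adjacency_freq[tok] += 1
    stop = [t for t in term_freq
            if t not in keywords
            and adjacency_freq[t] / term_freq[t] >= min_ratio]
    # truncate, most frequent terms first (one possible truncation criterion)
    stop.sort(key=lambda t: term_freq[t], reverse=True)
    return stop[:max_size]
```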

  5. Automatic image enhancement by artificial bee colony algorithm

    Science.gov (United States)

    Yimit, Adiljan; Hagihara, Yoshihiro; Miyoshi, Tasuku; Hagihara, Yukari

    2013-03-01

    With regard to the improvement of image quality, image enhancement is an important process that assists human perception. This paper presents an automatic image enhancement method based on the Artificial Bee Colony (ABC) algorithm. In this method, the ABC algorithm is applied to find the optimum parameters of a transformation function, which is used in the enhancement by utilizing the local and global information of the image. In order to solve the optimization problem with the ABC algorithm, an objective criterion in terms of entropy and edge information is introduced to measure the image quality, making the enhancement an automatic process. Several images are used in experiments to compare the proposed method with genetic algorithm-based and particle swarm optimization-based image enhancement methods.
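
    A sketch of an entropy-plus-edge objective of the kind described; the Sobel gradients and the particular combination of the two terms are assumptions, and the ABC search itself is omitted.

```python
# Sketch of an objective function an optimizer such as ABC could maximise:
# grey-level entropy combined with edge content of the enhanced image.
import numpy as np
from scipy import ndimage

def enhancement_objective(image):
    """Higher is better: combines grey-level entropy with edge strength."""
    image = np.asarray(image, dtype=float)
    hist, _ = np.histogram(image, bins=256, range=(0, 255), density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))
    edges = np.hypot(ndimage.sobel(image, axis=0), ndimage.sobel(image, axis=1))
    edge_strength = edges.mean()
    # assumed way of combining the two terms into a single fitness value
    return entropy * np.log(np.log(edge_strength + np.e))
```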

  6. Detection of Off-normal Images for NIF Automatic Alignment

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J V; Awwal, A S; McClay, W A; Ferguson, S W; Burkhart, S C

    2005-07-11

    One of the major purposes of the National Ignition Facility at Lawrence Livermore National Laboratory is to accurately focus 192 high-energy laser beams on a millimeter-scale fusion target at the precise location and time. The automatic alignment system developed for NIF is used to align the beams in order to achieve the required focusing effect. However, if a distorted image is inadvertently created by a faulty camera shutter or some other opto-mechanical malfunction, the resulting image, termed "off-normal", must be detected and rejected before further alignment processing occurs. The off-normal processor thus acts as a preprocessor to automatic alignment image processing. In this work, we discuss the development of an "off-normal" pre-processor capable of rapidly detecting off-normal images and performing the rejection. A wide variety of off-normal images for each loop is used to develop an accurate rejection criterion.

  7. Autoclass: An automatic classification system

    Science.gov (United States)

    Stutz, John; Cheeseman, Peter; Hanson, Robin

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework, and using various mathematical and algorithmic approximations, the AutoClass System searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit, or share, model parameters through a class hierarchy. The mathematical foundations of AutoClass are summarized.

  8. Automatic Sequencing for Experimental Protocols

    Science.gov (United States)

    Hsieh, Paul F.; Stern, Ivan

    We present a paradigm and implementation of a system for the specification of the experimental protocols to be used for the calibration of AXAF mirrors. For the mirror calibration, several thousand individual measurements need to be defined. For each measurement, over one hundred parameters need to be tabulated for the facility test conductor and several hundred instrument parameters need to be set. We provide a high level protocol language which allows for a tractable representation of the measurement protocol. We present a procedure dispatcher which automatically sequences a protocol more accurately and more rapidly than is possible by an unassisted human operator. We also present back-end tools to generate printed procedure manuals and database tables required for review by the AXAF program. This paradigm has been tested and refined in the calibration of detectors to be used in mirror calibration.

  9. Study on flaw identification of ultrasonic signal for large shafts based on optimal support vector machine

    Institute of Scientific and Technical Information of China (English)

    Zhao Xiufen; Yin Guofu; Tian Guiyun; Yin Ying

    2008-01-01

    Automatic identification of flaws is very important for ultrasonic nondestructive testing and evaluation of large shafts. A novel automatic defect identification system is presented. Wavelet packet analysis (WPA) was applied to feature extraction of the ultrasonic signal, and an optimal support vector machine (SVM) was used to perform the identification task. Meanwhile, a comparative study of convergence speed and classification performance was carried out between the SVM and several improved BP network models. To validate the method, experiments were performed; the results show that the proposed system has very high identification performance for large shafts, and that the optimal SVM offers better classification performance and generalization potential than BP artificial neural networks under small-sample conditions.
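
    A conceptual sketch of the feature extraction and classification stages, using PyWavelets and scikit-learn as stand-ins; the wavelet, decomposition level, SVM parameters and synthetic signals are assumptions, not the authors' setup.

```python
# Sketch: wavelet-packet energy features of an ultrasonic A-scan fed to an SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def wp_energy_features(signal, wavelet="db4", level=3):
    """Relative energy of each terminal wavelet-packet node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    return energies / energies.sum()

rng = np.random.default_rng(1)
# Hypothetical training signals: label 1 = flaw echo present, 0 = no flaw.
labels = [0, 1] * 20
signals = [rng.normal(0, 1, 512) + lbl * np.sin(np.linspace(0, 40, 512))
           for lbl in labels]
X = np.array([wp_energy_features(s) for s in signals])

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X, labels)
```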

  10. An Automated System for Garment Texture Design Class Identification

    OpenAIRE

    Emon Kumar Dey; Md. Nurul Ahad Tawhid; Mohammad Shoyaib

    2015-01-01

    Automatic identification of garment design class might play an important role in the garments and fashion industry. To achieve this, essential initial works are found in the literature. For example, construction of a garment database, automatic segmentation of garments from real life images, categorizing them into the type of garments such as shirts, jackets, tops, skirts, etc. It is now essential to find a system such that it will be possible to identify the particular design (printed, stri...

  11. Solar Powered Automatic Shrimp Feeding System

    Directory of Open Access Journals (Sweden)

    Dindo T. Ani

    2015-12-01

    Full Text Available Automatic systems have brought many revolutions to existing technologies. One such technology is the solar-powered automatic shrimp feeding system. Solar power, a renewable energy source, can be an alternative solution to the energy crisis, and using it in an automatic manner reduces manpower. The researchers believe an automatic shrimp feeding system may help solve problems with manual feeding operations. The project study aimed to design and develop a solar-powered automatic shrimp feeding system. It specifically sought to prepare the design specifications of the project, to determine the methods of fabrication and assembly, and to test the response time of the automatic shrimp feeding system. The researchers designed and developed an automatic system that utilizes a 10-hour timer, which can be set to intervals preferred by the user and runs continuously. A magnetic contactor acts as a switch connected to the 10-hour timer and controls the activation or termination of the electrical loads; the system is powered by a solar panel, with a rechargeable battery connected to the panel for storing the power. Through a series of tests, the components of the modified system were proven functional and operated within the desired output. It was recommended that the timer be tested to avoid malfunction and achieve a fully automatic system, and that the system be improved to handle changes in the scope of the project.

  12. Automatic control of nuclear power plants

    International Nuclear Information System (INIS)

    The fundamental concepts in automatic control are surveyed, and the purpose of the automatic control of pressurized water reactors is given. The response characteristics for the main components are then studied and block diagrams are given for the main control loops (turbine, steam generator, and nuclear reactors)

  13. Peak fitting and identification software library for high resolution gamma-ray spectra

    Science.gov (United States)

    Uher, Josef; Roach, Greg; Tickner, James

    2010-07-01

    A new gamma-ray spectral analysis software package is under development in our laboratory. It can be operated as a stand-alone program or called as a software library from Java, C, C++ and MATLAB™ environments. It provides an advanced graphical user interface for data acquisition, spectral analysis and radioisotope identification. The code uses a peak-fitting function that includes peak asymmetry, Compton continuum and flexible background terms. Peak fitting function parameters can be calibrated as functions of energy. Each parameter can be constrained to improve fitting of overlapping peaks. All of these features can be adjusted by the user. To assist with peak identification, the code can automatically measure half-lives of single or multiple overlapping peaks from a time series of spectra. It implements library-based peak identification, with options for restricting the search based on radioisotope half-lives and reaction types. The software also improves the reliability of isotope identification by utilizing Monte-Carlo simulation results.
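
    A minimal stand-in for the kind of peak-fitting function described (Gaussian photopeak, smoothed-step continuum, linear background) fitted with SciPy; the real code additionally models peak asymmetry and calibrates the shape parameters as functions of energy, and the spectrum below is synthetic.

```python
# Sketch: fit a single photopeak with a Gaussian + step continuum + linear
# background model using non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def peak_model(E, area, centroid, sigma, step, b0, b1):
    gauss = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-(E - centroid) ** 2 / (2 * sigma ** 2))
    continuum = step * 0.5 * erfc((E - centroid) / (np.sqrt(2) * sigma))
    return gauss + continuum + b0 + b1 * E

# Hypothetical spectrum around a single photopeak (values are made up).
E = np.linspace(600, 700, 200)
rng = np.random.default_rng(3)
counts = peak_model(E, 5000, 661.7, 1.2, 15, 40, -0.02) + rng.poisson(5, E.size)

popt, pcov = curve_fit(peak_model, E, counts, p0=[4000, 660, 1.0, 10, 30, 0])
print(f"Fitted centroid: {popt[1]:.2f} keV")
```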

  14. AUTOMATIC DESIGNING OF POWER SUPPLY SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. I. Kirspou

    2016-01-01

    Full Text Available The development of an automatic designing system for power supply of industrial enterprises is considered in the paper. Its complete structure and principle of operation are determined and established. A modern graphical interface and data scheme are developed, and the software is fully implemented. The methodology and software meet the requirements of up-to-date design practice, describe a general algorithm of the program process and also reveal the properties of the objects of the automatic designing system. The automatic designing system is based on a modular principle and uses object-oriented programming. It makes it possible to carry out power supply design calculations consistently and to select the required equipment, with all calculations output in the form of an explanatory note. The automatic designing system can be applied by design organizations in actual design work.

  15. Automatic measurement and representation of prosodic features

    Science.gov (United States)

    Ying, Goangshiuan Shawn

    Effective measurement and representation of prosodic features of the acoustic signal for use in automatic speech recognition and understanding systems is the goal of this work. Prosodic features-stress, duration, and intonation-are variations of the acoustic signal whose domains are beyond the boundaries of each individual phonetic segment. Listeners perceive prosodic features through a complex combination of acoustic correlates such as intensity, duration, and fundamental frequency (F0). We have developed new tools to measure F0 and intensity features. We apply a probabilistic global error correction routine to an Average Magnitude Difference Function (AMDF) pitch detector. A new short-term frequency-domain Teager energy algorithm is used to measure the energy of a speech signal. We have conducted a series of experiments performing lexical stress detection on words in continuous English speech from two speech corpora. We have experimented with two different approaches, a segment-based approach and a rhythm unit-based approach, in lexical stress detection. The first approach uses pattern recognition with energy- and duration-based measurements as features to build Bayesian classifiers to detect the stress level of a vowel segment. In the second approach we define rhythm unit and use only the F0-based measurement and a scoring system to determine the stressed segment in the rhythm unit. A duration-based segmentation routine was developed to break polysyllabic words into rhythm units. The long-term goal of this work is to develop a system that can effectively detect the stress pattern for each word in continuous speech utterances. Stress information will be integrated as a constraint for pruning the word hypotheses in a word recognition system based on hidden Markov models.
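
    As a hedged illustration of the Average Magnitude Difference Function idea mentioned above, the sketch below estimates F0 for one synthetic voiced frame by locating the AMDF valley; the sampling rate, search range and frame length are assumptions, and the probabilistic error-correction and Teager energy stages are not reproduced.

    # Hedged sketch of an AMDF-based pitch estimate for one voiced frame.
    import numpy as np

    def amdf_pitch(frame, fs, f0_min=60.0, f0_max=400.0):
        lag_min = int(fs / f0_max)
        lag_max = int(fs / f0_min)
        amdf = np.array([
            np.mean(np.abs(frame[lag:] - frame[:-lag]))
            for lag in range(lag_min, lag_max)
        ])
        best_lag = lag_min + int(np.argmin(amdf))   # valley of the AMDF
        return fs / best_lag

    fs = 16000
    t = np.arange(0, 0.03, 1.0 / fs)
    frame = np.sin(2 * np.pi * 120 * t)             # synthetic 120 Hz voiced frame
    print("estimated F0 (Hz):", round(amdf_pitch(frame, fs), 1))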

  16. Towards an intelligent system for the automatic assignment of domains in globular proteins.

    Science.gov (United States)

    Sternberg, M J; Hegyi, H; Islam, S A; Luo, J; Russell, R B

    1995-01-01

    The automatic identification of protein domains from coordinates is the first step in the classification of protein folds and hence is required for databases to guide structure prediction. Most algorithms encode a single concept of a domain and sometimes do not yield assignments that are consistent with the generally accepted perception. Our development of an automatic approach to reliably identify domains from protein coordinates is described. The algorithm is benchmarked against a manual identification of the domains in 284 representative protein chains. The first step is the domain assignment by distance (DAD) algorithm, which considers the density of inter-residue contacts represented in a contact matrix. The algorithm yields 85% agreement with the manual assignment. The paper then considers how the reliability of these assignments can be evaluated. Finally, the use of structural comparisons with the STAMP algorithm to validate domain assignments is reported on a test case. PMID:7584461

  17. An improved, SSH-based method to automatically identify mesoscale eddies in the ocean

    Institute of Scientific and Technical Information of China (English)

    WANG Xin; DU Yun-yan; ZHOU Cheng-hu; FAN Xing; YI Jia-wei

    2013-01-01

      Mesoscale eddies are an important component of oceanic features. How to automatically identify these mesoscale eddies from available data has become an important research topic. Through careful examination of existing methods, we propose an improved, SSH-based automatic identification method. Using the inclusion relation of enclosed SSH contours, the mesoscale eddy boundary and core(s) can be automatically identified. The time evolution of eddies can be examined by a threshold search algorithm and a tracking algorithm based on similarity. Sea-surface height (SSH) data from the Naval Research Laboratory Layered Ocean Model (NLOM) and sea-level anomaly (SLA) data from altimeters are used in a series of experiments in which different automatic identification methods are compared. Our results indicate that the improved method is able to extract the mesoscale eddy boundary more precisely, retaining the multiple-core structure. In combination with the tracking algorithm, this method can capture complete mesoscale eddy processes. It can thus provide reliable information for further study of eddy dynamics, such as the merging, splitting, and evolution of multi-core structures.
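
      As a hedged, much-simplified illustration of the central idea (closed SSH contours and their inclusion relation), the sketch below extracts closed contours from a synthetic SSH field with matplotlib and tests whether an outer contour encloses an inner one; the threshold search, multi-core handling and tracking algorithm of the paper are not reproduced.

      # Hedged sketch: find closed SSH contours and test their inclusion relation.
      import numpy as np
      import matplotlib.pyplot as plt
      from matplotlib.path import Path

      x, y = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
      ssh = 0.3 * np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 2.0)   # one synthetic eddy

      cs = plt.contour(x, y, ssh, levels=np.arange(0.05, 0.30, 0.05))
      closed = []
      for level, segs in zip(cs.levels, cs.allsegs):
          for seg in segs:
              if len(seg) > 3 and np.allclose(seg[0], seg[-1]):   # closed contour
                  closed.append((level, Path(seg)))

      # Inclusion relation: does the outermost closed contour enclose the innermost?
      outer = min(closed, key=lambda c: c[0])[1]
      inner = max(closed, key=lambda c: c[0])[1]
      print("inner contour lies inside outer:", outer.contains_point(inner.vertices[0]))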

  18. CRISPR Recognition Tool (CRT): a tool for automatic detection of clustered regularly interspaced palindromic repeats

    Directory of Open Access Journals (Sweden)

    Brown Kyndall

    2007-06-01

    Full Text Available Abstract Background Clustered Regularly Interspaced Palindromic Repeats (CRISPRs) are a novel type of direct repeat found in a wide range of bacteria and archaea. CRISPRs are beginning to attract attention because of their proposed mechanism; that is, defending their hosts against invading extrachromosomal elements such as viruses. Existing repeat detection tools do a poor job of identifying CRISPRs due to the presence of unique spacer sequences separating the repeats. In this study, a new tool, CRT, is introduced that rapidly and accurately identifies CRISPRs in large DNA strings, such as genomes and metagenomes. Results CRT was compared to the CRISPR detection tools Patscan and Pilercr. In terms of correctness, CRT was shown to be very reliable, demonstrating significant improvements over Patscan for the measures precision, recall and quality. When compared to Pilercr, CRT showed improved performance for recall and quality. In terms of speed, CRT proved to be a huge improvement over Patscan. Both CRT and Pilercr were comparable in speed, however CRT was faster for genomes containing large numbers of repeats. Conclusion In this paper a new tool was introduced for the automatic detection of CRISPR elements. This tool, CRT, showed some important improvements over current techniques for CRISPR identification. CRT's approach to detecting repetitive sequences is straightforward. It uses a simple sequential scan of a DNA sequence and detects repeats directly without any major conversion or preprocessing of the input. This leads to a program that is easy to describe and understand; yet it is very accurate, fast and memory efficient, being O(n) in space and O(nm/l) in time.
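
    CRT itself is a Java tool; purely as a hedged illustration of a "simple sequential scan" for repeats separated by spacer-sized gaps, the following Python sketch looks for a short exact repeat recurring at CRISPR-like spacings. Window sizes and thresholds are assumptions and do not mirror CRT's actual parameters.

    # Hedged sketch of a sequential scan for short exact repeats separated by
    # spacer-sized gaps (CRISPR-like). Illustration of the idea only.
    def find_crispr_like(seq, k=8, min_spacer=20, max_spacer=60, min_copies=3):
        hits = []
        i = 0
        while i < len(seq) - k:
            kmer = seq[i:i + k]
            positions = [i]
            j = i
            while True:
                window = seq[j + k + min_spacer: j + k + max_spacer + k]
                off = window.find(kmer)
                if off == -1:
                    break
                j = j + k + min_spacer + off         # absolute position of next copy
                positions.append(j)
            if len(positions) >= min_copies:
                hits.append((kmer, positions))
                i = positions[-1] + k                 # jump past the array just found
            else:
                i += 1
        return hits

    toy = ("ACGTACGTGATT" + "GTTTTAGAGCTA" + "A" * 30) * 3 + "ACGT"
    for repeat, pos in find_crispr_like(toy, k=12, min_spacer=25, max_spacer=45):
        print(repeat, pos)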

  19. Automatic Dependent Surveillance-Broadcast for Sense and Avoid on Small Unmanned Aircraft

    OpenAIRE

    Duffield, Matthew; McLain, Timothy

    2015-01-01

    This paper presents a time-based path planning optimizer for separation assurance for unmanned aerial systems (UAS). Given Automatic Dependent Surveillance-Broadcast (ADS-B) as a sensor, position, velocity, and identification information is available at ranges on the order of 50 nautical miles. Such long-range intruder detection facilitates path planning for separation assurance, but also poses computational and robustness challenges. The time-based path optimizer presented in this paper prov...

  20. A Magnetic Resonance Image Based Atlas of the Rabbit Brain for Automatic Parcellation

    OpenAIRE

    Emma Muñoz-Moreno; Ariadna Arbat-Plana; Dafnis Batalle; Guadalupe Soria; Miriam Illa; Alberto Prats-Galino; Elisenda Eixarch; Eduard Gratacos

    2013-01-01

    Rabbit brain has been used in several works for the analysis of neurodevelopment. However, there are not specific digital rabbit brain atlases that allow an automatic identification of brain regions, which is a crucial step for various neuroimage analyses, and, instead, manual delineation of areas of interest must be performed in order to evaluate a specific structure. For this reason, we propose an atlas of the rabbit brain based on magnetic resonance imaging, including both structural and d...

  1. Automatic detection of esophageal pressure events. Is there an alternative to rule-based criteria?

    DEFF Research Database (Denmark)

    Kruse-Andersen, S; Rütz, K; Kolberg, Jens Godsk;

    1995-01-01

    Ambulatory long-term motility recording is used increasingly for evaluation of esophageal function. The enormous amount of motility data recorded by this method demands subsequent computer analysis. One of the most crucial steps of this analysis is the process of automatic selection of relevant events.

  2. Progress on Statistical Learning Systems as Data Mining Tools for the Creation of Automatic Databases in Fusion Environments

    International Nuclear Information System (INIS)

    Fusion devices produce tens of thousands of discharges but only a very limited part of the collected information is analysed. The analysis of physical events requires their identification and temporal location and the generation of specialized databases related to these time instants. The automatic determination of the precise time instants at which events happen and the automatic search for potentially relevant time intervals can be carried out with classification and regression techniques. Classification and regression techniques have been used for the automatic creation of specialized databases for JET and have allowed the automatic determination of the disruptive or non-disruptive character of discharges. The validation of the recognition method has been carried out with 4400 JET discharges, and the global success rate was 99.02 per cent.

  3. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  4. Development of Automatic Extraction Weld for Industrial Radiographic Negative Inspection

    Institute of Scientific and Technical Information of China (English)

    张晓光; 林家骏; 李浴; 卢印举

    2003-01-01

    In industrial X-ray inspection, in order to identify weld defects automatically, raise the identification ratio, and avoid processing a complex background, extracting the weld from the image is an important step for subsequent processing. According to the characteristics of weld radiograph images, a median filter is adopted to reduce high-frequency noise, the relative gray scale of the image is chosen as the fuzzy characteristic, an image gray-scale fuzzy matrix is constructed, and a suitable membership function is selected to describe the edge characteristic. A fuzzy algorithm is adopted to enhance the radiograph image. Based on the intensity distribution characteristics of the weld, a weld extraction methodology is then designed. This paper describes the complete weld extraction methodology, including noise reduction, fuzzy enhancement and the weld extraction process. To prove its effectiveness, this methodology was tested with 64 weld negative images available for this study. The experimental results show that this methodology is very effective for extracting linear welds.

  5. Evolutionary synthesis of automatic classification on astroinformatic big data

    Science.gov (United States)

    Kojecky, Lumir; Zelinka, Ivan; Saloun, Petr

    2016-06-01

    This article describes initial experiments using a new approach to the automatic identification of Be and B[e] star spectra in large archives. With the enormous amount of such data, it is no longer feasible to analyze them using classical approaches. We introduce an evolutionary synthesis of the classification by means of analytic programming, one of the methods of symbolic regression. By this method, we synthesize the mathematical formulas that best approximate chosen samples of the stellar spectra. The selected category is then the one whose formula has the lowest difference from the particular spectrum. The results show that classification of stellar spectra by means of analytic programming is able to identify different shapes of the spectra.

  6. An Approach for Automatic Classification of Radiology Reports in Spanish.

    Science.gov (United States)

    Cotik, Viviana; Filippo, Darío; Castaño, José

    2015-01-01

    Automatic detection of relevant terms in medical reports is useful for educational purposes and for clinical research. Natural language processing (NLP) techniques can be applied in order to identify them. In this work we present an approach to classify radiology reports written in Spanish into two sets: the ones that indicate pathological findings and the ones that do not. In addition, the entities corresponding to pathological findings are identified in the reports. We use RadLex, a lexicon of English radiology terms, and NLP techniques to identify the occurrence of pathological findings. Reports are classified using a simple algorithm based on the presence of pathological findings, negation and hedge terms. The implemented algorithms were tested with a test set of 248 reports annotated by an expert, obtaining a best result of 0.72 F1 measure. The output of the classification task can be used to look for specific occurrences of pathological findings. PMID:26262128
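
    As a hedged illustration of the report-level rule described (presence of finding terms, modulated by negation cues), the sketch below classifies toy Spanish sentences; the tiny term lists and the negation window are illustrative assumptions, not RadLex or the authors' resources, and hedge terms could be handled in the same way.

    # Hedged sketch of a presence/negation rule for flagging pathological findings.
    FINDINGS = {"nodulo", "derrame", "fractura", "masa"}   # tiny stand-in for RadLex terms
    NEGATIONS = {"sin", "no", "ausencia"}

    def has_pathological_finding(report):
        tokens = report.lower().replace(".", " ").replace(",", " ").split()
        for i, tok in enumerate(tokens):
            if tok in FINDINGS:
                context = tokens[max(0, i - 3):i]          # short pre-window
                if any(w in NEGATIONS for w in context):
                    continue                               # negated mention
                return True                                # asserted finding
        return False

    print(has_pathological_finding("Se observa un nodulo en lobulo superior derecho."))  # True
    print(has_pathological_finding("Sin derrame pleural. No se observa masa."))          # False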

  7. Automatic molecular structure perception for the universal force field.

    Science.gov (United States)

    Artemova, Svetlana; Jaillet, Léonard; Redon, Stephane

    2016-05-15

    The Universal Force Field (UFF) is a classical force field applicable to almost all atom types of the periodic table. Such flexibility makes this force field a potentially good candidate for simulations involving a large spectrum of systems and, indeed, UFF has been applied to various families of molecules. Unfortunately, initializing UFF, that is, performing molecular structure perception to determine which parameters should be used to compute the UFF energy and forces, appears to be a difficult problem. Although many perception methods exist, they mostly focus on organic molecules, and are thus not well-adapted to the diversity of systems potentially considered with UFF. In this article, we propose an automatic perception method for initializing UFF that includes the identification of the system's connectivity, the assignment of bond orders as well as UFF atom types. This perception scheme is proposed as a self-contained UFF implementation integrated in a new module for the SAMSON software platform for computational nanoscience (http://www.samson-connect.net). We validate both the automatic perception method and the UFF implementation on a series of benchmarks. PMID:26927616

  8. A Cough-Based Algorithm for Automatic Diagnosis of Pertussis.

    Science.gov (United States)

    Pramono, Renard Xaviero Adhi; Imtiaz, Syed Anas; Rodriguez-Villegas, Esther

    2016-01-01

    Pertussis is a contagious respiratory disease which mainly affects young children and can be fatal if left untreated. The World Health Organization estimates 16 million pertussis cases annually worldwide resulting in over 200,000 deaths. It is prevalent mainly in developing countries where it is difficult to diagnose due to the lack of healthcare facilities and medical professionals. Hence, a low-cost, quick and easily accessible solution is needed to provide pertussis diagnosis in such areas to contain an outbreak. In this paper we present an algorithm for automated diagnosis of pertussis using audio signals by analyzing cough and whoop sounds. The algorithm consists of three main blocks to perform automatic cough detection, cough classification and whooping sound detection. Each of these blocks extracts relevant features from the audio signal and subsequently classifies them using a logistic regression model. The output from these blocks is collated to provide a pertussis likelihood diagnosis. The performance of the proposed algorithm is evaluated using audio recordings from 38 patients. The algorithm is able to diagnose pertussis successfully from all audio recordings without any false diagnoses. It can also automatically detect individual cough sounds with 92% accuracy and a PPV of 97%. The low complexity of the proposed algorithm coupled with its high accuracy demonstrates that it can be readily deployed using smartphones and can be extremely useful for quick identification or early screening of pertussis and for the control of infection outbreaks. PMID:27583523

  9. Device for single-phase or three-phase automatic reclosure of 500-750 kV transmission lines

    Energy Technology Data Exchange (ETDEWEB)

    Strelkov, V.I.; Fokin, G.C.; Yakubson, G.G.; Kostina, A.D.

    1985-08-01

    A device for automatic reclosure of 500-750 kV as well as 220-330 kV transmission lines, used in conjunction with the new PDE 2000 protective relaying and line automation equipment set, has been developed by the All-Union Scientific Research Institute of Electrical Power Engineering and the All-Union State Planning-Surveying and Scientific Research Institute of Power Systems and Electrical Networks, jointly with the Chelyabinsk Electrical Equipment Plant, to replace the APV-751 device and later also the APV-503 device. The principal functions of this PDE 2004.01 are: identification of the faulty phase and its automatic reclosure after a phase-to-ground short, with the aid of selective elements; disconnection of three phases and their automatic reclosure once after any kind of polyphase short (including one evolved from a single phase-to-ground short or caused by faults in not yet disconnected phases) and prior to single-phase automatic reclosure, with any direct phase-to-phase short isolated immediately ahead of selective action; disconnection of three phases after any kind of short with the possibility of three-phase automatic reclosure after unsuccessful single-phase automatic reclosure; and three-phase automatic reclosure once after three phases have been disconnected for reasons other than a fault or a human error. Monitoring and other functions of the device are also described.

  10. Automatic onset phase picking for portable seismic array observation

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Automatic phase picking is a critical procedure for seismic data processing, especially for the huge amount of seismic data recorded by a large-scale portable seismic array. This study presents a new method for accurate automatic onset phase picking that exploits the properties of dense seismic array observations. In our method, the Akaike information criterion (AIC) for single-channel observations and least-squares cross-correlation for multi-channel observations are combined. Tests on seismic array data, after triggering with the short-term average/long-term average (STA/LTA) technique, show that the phase picking error is less than 0.3 s for local events when the single-channel AIC algorithm is used. With the multi-channel least-squares cross-correlation technique, clear teleseismic P onsets can be detected reliably. Even for teleseismic records with a high noise level, our algorithm is able to effectively avoid the misdetections that manual picking is prone to.
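
    As a hedged illustration of the single-channel AIC picker referenced above, the sketch below applies the usual two-segment variance form of the criterion to a synthetic pre-triggered trace; the STA/LTA trigger and the multi-channel cross-correlation stage are not reproduced.

    # Hedged sketch of an AIC onset picker for a single pre-triggered window.
    # AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:])); the onset is taken
    # at the AIC minimum.
    import numpy as np

    def aic_pick(x):
        n = len(x)
        aic = np.full(n, np.inf)
        for k in range(2, n - 2):
            v1 = np.var(x[:k])
            v2 = np.var(x[k:])
            if v1 > 0 and v2 > 0:
                aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
        return int(np.argmin(aic))

    rng = np.random.default_rng(0)
    noise = rng.normal(0, 0.1, 500)
    trace = np.concatenate([noise[:300], noise[300:] + np.sin(np.arange(200) * 0.3)])
    print("picked onset sample:", aic_pick(trace), "(true onset at 300)")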

  11. Sleep facilitates long-term face adaptation

    OpenAIRE

    Ditye, T.; A.H Javadi; Carbon, C.C.; Walsh, V

    2013-01-01

    Adaptation is an automatic neural mechanism supporting the optimization of visual processing on the basis of previous experiences. While the short-term effects of adaptation on behaviour and physiology have been studied extensively, perceptual long-term changes associated with adaptation are still poorly understood. Here, we show that the integration of adaptation-dependent long-term shifts in neural function is facilitated by sleep. Perceptual shifts induced by adaptation to a distorted imag...

  12. Robust Fallback Scheme for the Danish Automatic Voltage Control System

    DEFF Research Database (Denmark)

    Qin, Nan; Dmitrova, Evgenia; Lund, Torsten;

    2015-01-01

    This paper proposes a fallback scheme for the Danish automatic voltage control system. It is activated in case the local station loses telecommunication with the control center and/or the local station voltage violates the acceptable operational limits. It cuts in/out switchable and tapable shunts to maintain the voltage locally. The fallback scheme is fully self-regulated according to the predefined triggering logic. In order to keep the scheme robust and avoid many shunts being triggered within a short time, an inverse time characteristic is used to trigger switching one by one. This scheme...

  13. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    Science.gov (United States)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser based document reader with built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we called Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g. PDF, Words, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g. Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specially to extract information from clinical medical records. Our investigation leads us to extend the automatic knowledge extraction process of cTAKES for biomedical research domain by improving the ontology guided information extraction

  14. Laser Scanner For Automatic Storage

    Science.gov (United States)

    Carvalho, Fernando D.; Correia, Bento A.; Rebordao, Jose M.; Rodrigues, F. Carvalho

    1989-01-01

    Automated storage magazines are being used in industry more and more. One of the problems related to the automation of a storehouse is the identification of the products involved. Already used for stock management, bar codes offer an easy way to identify a product. Applied to automated magazines, bar codes allow a great variety of items to be encoded in a small code. In order to be used by the national producers of automated magazines, a dedicated laser scanner has been developed. The prototype uses a He-Ne laser whose beam scans a field angle of 75 degrees at 16 Hz. The scene reflectivity is transduced by a photodiode into an electrical signal, which is then binarized. This digital signal is the input of the decoding program. The machine is able to read bar codes and decode the information. A parallel interface allows communication with the central unit, which is responsible for the management of the automated magazine.

  15. Automatic semi-continuous accumulation chamber for diffuse gas emissions monitoring in volcanic and non-volcanic areas

    Science.gov (United States)

    Lelli, Matteo; Raco, Brunella; Norelli, Francesco; Virgili, Giorgio; Continanza, Davide

    2016-04-01

    For several decades the accumulation chamber method has been used intensively in monitoring activities of diffuse gas emissions in volcanic areas. Although some improvements have been made in terms of sensitivity and reproducibility of the detectors, the instrumentation used to measure the temporal variation of gas emissions is usually expensive and bulky. The unit described in this work is a low-cost, easy to install and manage instrument that will make the creation of low-cost monitoring networks possible. The Non-Dispersive Infrared detector used has a concentration range of 0-5% CO2, but substitution with another detector (range 0-5000 ppm) is possible and very easy. The power supply unit has a 12V, 7Ah battery, which is recharged by a 35W solar panel (equipped with a charge regulator). The control unit contains a custom programmed CPU and remote transmission is assured by a GPRS modem. The chamber is actuated by the data logger unit, using a linear actuator to move between the closed position (sampling) and the open position (idle). A probe for measuring soil temperature, soil electrical conductivity, soil volumetric water content, air pressure and air temperature is assembled on the device, which is already arranged for the connection of other external sensors, including an automatic weather station. The automatic station has been field tested at Lipari island (Sicily, Italy) over a period of three months, performing a CO2 flux measurement (together with the weather parameters) every hour. The possibility of measuring soil gas fluxes and many external parameters simultaneously in semi-continuous mode helps the time series analysis aimed at distinguishing gas flux anomalies due to variations in the deep system (e.g. the onset of volcanic crises) from those triggered by external conditions.
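
    For readers unfamiliar with the method, a hedged sketch of how one accumulation-chamber record is typically reduced to a flux follows: fit the initial slope of the CO2 build-up and scale it by the chamber volume-to-area ratio. The chamber geometry, units and fitting window are illustrative assumptions, not the specifications of the station described.

    # Hedged sketch: reduce one accumulation-chamber record to a flux estimate,
    # flux ~ (dC/dt) * V / A, with dC/dt from a linear fit to the early build-up.
    import numpy as np

    V = 0.003      # chamber volume, m^3 (assumed)
    A = 0.03       # chamber footprint, m^2 (assumed)

    t = np.arange(0, 120, 5.0)                                   # s
    co2 = 400 + 0.8 * t + np.random.normal(0, 2, t.size)         # ppm, synthetic build-up

    slope, intercept = np.polyfit(t[:12], co2[:12], 1)           # ppm/s over the first minute
    flux = slope * V / A                                         # ppm * m / s
    print(f"dC/dt = {slope:.2f} ppm/s, flux = {flux:.3f} ppm*m/s")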

  16. Traceability Through Automatic Program Generation

    Science.gov (United States)

    Richardson, Julian; Green, Jeff

    2003-01-01

    Program synthesis is a technique for automatically deriving programs from specifications of their behavior. One of the arguments made in favour of program synthesis is that it allows one to trace from the specification to the program. One way in which traceability information can be derived is to augment the program synthesis system so that manipulations and calculations it carries out during the synthesis process are annotated with information on what the manipulations and calculations were and why they were made. This information is then accumulated throughout the synthesis process, at the end of which, every artifact produced by the synthesis is annotated with a complete history relating it to every other artifact (including the source specification) which influenced its construction. This approach requires modification of the entire synthesis system - which is labor-intensive and hard to do without influencing its behavior. In this paper, we introduce a novel, lightweight technique for deriving traceability from a program specification to the corresponding synthesized code. Once a program has been successfully synthesized from a specification, small changes are systematically made to the specification and the effects on the synthesized program observed. We have partially automated the technique and applied it in an experiment to one of our program synthesis systems, AUTOFILTER, and to the GNU C compiler, GCC. The results are promising: 1. Manual inspection of the results indicates that most of the connections derived from the source (a specification in the case of AUTOFILTER, C source code in the case of GCC) to its generated target (C source code in the case of AUTOFILTER, assembly language code in the case of GCC) are correct. 2. Around half of the lines in the target can be traced to at least one line of the source. 3. Small changes in the source often induce only small changes in the target.

  17. A Demonstration of Automatically Switched Optical Network

    Institute of Scientific and Technical Information of China (English)

    Weisheng Hu; Qingji Zeng; Yaohui Jin; Chun Jiang; Yue Wang; Xiaodong Wang; Chunlei Zhang; Yang Lu; Buwei Xu; Peigang Hu

    2003-01-01

    We build an automatically switched optical network (ASON) testbed with four optical cross-connect nodes. Many fundamental ASON features are demonstrated, which is implemented by control protocols based on generalized multi-protocol label switching (GMPLS) framework.

  18. Computer systems for automatic earthquake detection

    Science.gov (United States)

    Stewart, S.W.

    1974-01-01

    U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are efficiently monitored continuously.

  19. Automatic acquisition of pattern collocations in GO

    Institute of Scientific and Technical Information of China (English)

    LIU Zhi-qing; DOU Qing; LI Wen-hong; LU Ben-jie

    2008-01-01

    The quality, quantity, and consistency of the knowledge used in GO-playing programs often determine their strengths, and automatic acquisition of large amounts of high-quality and consistent GO knowledge is crucial for successful GO playing. In a previous article on this subject, we presented an algorithm for efficient and automatic acquisition of spatial patterns of GO, as well as their frequency of occurrence, from game records. In this article, we present two algorithms: one for efficient and automatic acquisition of pairs of spatial patterns that appear jointly in a local context, and the other for determining whether the joint pattern appearances are of statistical significance and not just a coincidence. Results of the two algorithms include 1 779 966 pairs of spatial patterns acquired automatically from 16 067 game records of professional GO players, of which about 99.8% are qualified as pattern collocations with a statistical confidence of 99.5% or higher.

  20. Automatic program debugging for intelligent tutoring systems

    Energy Technology Data Exchange (ETDEWEB)

    Murray, W.R.

    1986-01-01

    This thesis explores the process by which student programs can be automatically debugged in order to increase the instructional capabilities of these systems. This research presents a methodology and implementation for the diagnosis and correction of nontrivial recursive programs. In this approach, recursive programs are debugged by repairing induction proofs in the Boyer-Moore Logic. The potential of a program debugger to automatically debug widely varying novice programs in a nontrivial domain is proportional to its capabilities to reason about computational semantics. By increasing these reasoning capabilities a more powerful and robust system can result. This thesis supports these claims by examining related work in automated program debugging and by discussing the design, implementation, and evaluation of Talus, an automatic debugger for LISP programs. Talus relies on its abilities to reason about computational semantics to perform algorithm recognition, infer code teleology, and to automatically detect and correct nonsyntactic errors in student programs written in a restricted, but nontrivial, subset of LISP.

  1. Three layered framework for automatic service composition

    Science.gov (United States)

    Liu, Xinqiong; Xia, Ping; Wan, Junli

    2009-10-01

    For automatic service composition, a planning-based framework named MOCIS is proposed. Planning is based on two major techniques, service reasoning and constraint satisfaction. Constraint satisfaction can be divided into quality constraint satisfaction and quantity constraint satisfaction. In contrast to traditional methods, which realize these techniques by interleaving activity, message and provider concerns, the novelty of the framework is that it divides these concerns into three layers, with the activity layer handling service reasoning, the message layer handling quality constraints and the provider layer handling quantity constraints. The layered architecture makes automatic web service composition possible: an activity tree, an abstract BPEL list and a concrete BPEL list are obtained automatically from the respective layers, and users can select the proper abstract or concrete BPEL to satisfy their request. E-travel composition cases have been tested, demonstrating that complex services can be composed automatically through the three layers.

  2. Variable load automatically tests dc power supplies

    Science.gov (United States)

    Burke, H. C., Jr.; Sullivan, R. M.

    1965-01-01

    Continuously variable load automatically tests dc power supplies over an extended current range. External meters monitor current and voltage, and multipliers at the outputs facilitate plotting the power curve of the unit.

  3. Automatic coding of online collaboration protocols

    NARCIS (Netherlands)

    Erkens, Gijsbert; Janssen, J.J.H.M.

    2006-01-01

    An automatic coding procedure is described to determine the communicative functions of messages in chat discussions. Five main communicative functions are distinguished: argumentative (indicating a line of argumentation or reasoning), responsive (e.g., confirmations, denials, and answers), informati

  4. Automatization and familiarity in repeated checking

    NARCIS (Netherlands)

    Dek, Eliane C P; van den Hout, Marcel A.; Giele, Catharina L.; Engelhard, Iris M.

    2014-01-01

    Repeated checking paradoxically increases memory uncertainty. This study investigated the underlying mechanism of this effect. We hypothesized that as a result of repeated checking, familiarity with stimuli increases, and automatization of the checking procedure occurs, which should result in decrea

  5. Automatic safety rod for reactors. [LMFBR

    Science.gov (United States)

    Germer, J.H.

    1982-03-23

    An automatic safety rod for a nuclear reactor containing neutron absorbing material and designed to be inserted into a reactor core after a loss-of-flow. Actuation is based upon either a sudden decrease in core pressure drop or a decrease of the pressure drop below a predetermined minimum value. The automatic control rod includes a pressure regulating device whereby a controlled decrease in operating pressure due to reduced coolant flow does not cause the rod to drop into the core.

  6. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathaniel O.

    2010-05-31

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  7. Automatic Programming with Ant Colony Optimization

    OpenAIRE

    Green, Jennifer; Jacqueline L. Whalley; Johnson, Colin G.

    2004-01-01

    Automatic programming is the use of search techniques to find programs that solve a problem. The most commonly explored automatic programming technique is genetic programming, which uses genetic algorithms to carry out the search. In this paper we introduce a new technique called Ant Colony Programming (ACP) which uses an ant colony based search in place of genetic algorithms. This algorithm is described and compared with other approaches in the literature.

  8. Automatic Morphometry of Nerve Histological Sections

    OpenAIRE

    Romero, E.; Cuisenaire, O.; Denef, J.; Delbeke, J.; Macq, B.; Veraart, C.

    2000-01-01

    A method for the automatic segmentation, recognition and measurement of neuronal myelinated fibers in nerve histological sections is presented. In this method, the fiber parameters, i.e. perimeter, area, position of the fiber and myelin sheath thickness, are automatically computed. Obliquity of the sections may be taken into account. First, the image is thresholded to provide a coarse classification between myelin and non-myelin pixels. Next, the resulting binary image is further simplified usi...

  9. Automatic processing of dominance and submissiveness

    OpenAIRE

    Moors, Agnes; De Houwer, Jan

    2005-01-01

    We investigated whether people are able to detect in a relatively automatic manner the dominant or submissive status of persons engaged in social interactions. Using a variant of the affective Simon task (De Houwer & Eelen, 1998), we demonstrated that the verbal response DOMINANT or SUBMISSIVE was facilitated when it had to be made to a target person that was respectively dominant or submissive. These results provide new information about the automatic nature of appraisals and ...

  10. AUTOMATIC CAPTION GENERATION FOR ELECTRONICS TEXTBOOKS

    OpenAIRE

    Veena Thakur; Trupti Gedam

    2015-01-01

    Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify...

  11. Automatic text categorisation of racist webpages

    OpenAIRE

    Greevy, Edel

    2004-01-01

    Automatic Text Categorisation (TC) involves the assignment of one or more predefined categories to text documents in order that they can be effectively managed. In this thesis we examine the possibility of applying automatic text categorisation to the problem of categorising texts (web pages) based on whether or not they are racist. TC has proven successful for topic-based problems such as news story categorisation. However, the problem of detecting racism is dissimilar to topic-based pro...

  12. Automatic Control of Freeboard and Turbine Operation

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Frigaard, Peter Bak; Friis-Madsen, Erik;

    The report deals with the modules for automatic control of freeboard and turbine operation on board the Wave dragon, Nissum Bredning (WD-NB) prototype, and covers what has been going on up to ultimo 2003.

  13. Iris Recognition System using canny edge detection for Biometric Identification

    OpenAIRE

    Bhawna Chouhan; Dr.(Mrs) Shailja Shukla

    2011-01-01

    A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by the individual. Iris recognition is regarded as the most reliable and accurate biometric identification system available. Most commercial iris recognition systems use patented algorithms developed by Daugman, and these algorithms are able to produce perfect recognition rates. This work focuses in particular on image segmentation and feature extraction for the iris recognition process...
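
    As a hedged OpenCV illustration of the edge-based localization step implied by the title (Canny edges and a circular Hough transform on a smoothed image), the sketch below runs on a synthetic stand-in image; the thresholds and radii are assumptions, and Daugman-style encoding and matching are not shown.

    # Hedged sketch: locate a roughly circular iris/pupil boundary with Canny
    # edges and a Hough circle transform on a synthetic stand-in image.
    import cv2
    import numpy as np

    # Synthetic stand-in for an eye image: dark iris/pupil discs on a brighter background.
    img = np.full((240, 320), 200, dtype=np.uint8)
    cv2.circle(img, (160, 120), 60, 60, -1)      # iris
    cv2.circle(img, (160, 120), 25, 10, -1)      # pupil

    blur = cv2.medianBlur(img, 5)
    edges = cv2.Canny(blur, 50, 150)             # explicit edge map of the boundaries

    # HoughCircles applies Canny internally, with param1 as the upper edge threshold.
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=150, param2=20, minRadius=20, maxRadius=120)
    if circles is not None:
        x, y, r = np.round(circles[0, 0]).astype(int)
        print(f"iris candidate at ({x}, {y}), radius {r}, edge pixels: {int(edges.sum() / 255)}")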

  14. An efficient approach to the evaluation of mid-term dynamic processes in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Zivanovic, R.M. (Pretoria Technikon (South Africa)); Popovic, D.P. (Nikola Tesla Inst., Belgrade (Yugoslavia). Power System Dept.)

    1993-01-01

    This paper presents some improvements in the methodology for analysing mid-term dynamic processes in power systems. These improvements are: an efficient application of the hierarchical clustering algorithm to adaptive identification of coherent generator groups and a significant reduction of the mathematical model, on the basis of monitoring the state of only one generator in one of the established coherent groups. This enables a flexible, simple and fast transformation from the full to the reduced model and vice versa, a significant acceleration of the simulation while keeping the desired accuracy and the automatic use in continual dynamic analysis. Verification of the above mentioned contributions was performed on examples of the dynamic analysis of New England and Yugoslav power systems. (author)
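
    As a hedged illustration of the coherency-grouping idea (clustering generators whose swing trajectories stay close), the sketch below applies hierarchical clustering to synthetic rotor-angle swings; the distance measure and the requested number of groups are illustrative choices, not the paper's adaptive identification scheme.

    # Hedged sketch: group generators into coherent sets by hierarchically
    # clustering their rotor-angle swing trajectories.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    time = np.linspace(0, 5, 200)
    swings = np.vstack([
        np.sin(2 * np.pi * 0.8 * time),            # generators 1-2 swing together
        np.sin(2 * np.pi * 0.8 * time + 0.1),
        np.sin(2 * np.pi * 1.5 * time),            # generators 3-4 form a second group
        np.sin(2 * np.pi * 1.5 * time + 0.1),
    ])

    Z = linkage(swings, method="average", metric="euclidean")
    groups = fcluster(Z, t=2, criterion="maxclust")   # ask for two coherent groups
    print("coherent group per generator:", groups)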

  15. Audio watermarking technologies for automatic cue sheet generation systems

    Science.gov (United States)

    Caccia, Giuseppe; Lancini, Rosa C.; Pascarella, Annalisa; Tubaro, Stefano; Vicario, Elena

    2001-08-01

    Usually watermark is used as a way for hiding information on digital media. The watermarked information may be used to allow copyright protection or user and media identification. In this paper we propose a watermarking scheme for digital audio signals that allow automatic identification of musical pieces transmitted in TV broadcasting programs. In our application the watermark must be, obviously, imperceptible to the users, should be robust to standard TV and radio editing and have a very low complexity. This last item is essential to allow a software real-time implementation of the insertion and detection of watermarks using only a minimum amount of the computation power of a modern PC. In the proposed method the input audio sequence is subdivided in frames. For each frame a watermark spread spectrum sequence is added to the original data. A two steps filtering procedure is used to generate the watermark from a Pseudo-Noise (PN) sequence. The filters approximate respectively the threshold and the frequency masking of the Human Auditory System (HAS). In the paper we discuss first the watermark embedding system then the detection approach. The results of a large set of subjective tests are also presented to demonstrate the quality and robustness of the proposed approach.
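
    As a hedged illustration of frame-wise spread-spectrum embedding and correlation detection, the sketch below adds a scaled PN sequence to each frame of a synthetic host signal and recovers it by correlation; the psychoacoustic shaping filters described in the paper are replaced here by a fixed small gain, which is an assumption.

    # Hedged sketch of frame-wise spread-spectrum audio watermarking:
    # add a scaled PN sequence to each frame and detect it by correlation.
    import numpy as np

    rng = np.random.default_rng(42)
    frame_len = 1024
    pn = rng.choice([-1.0, 1.0], size=frame_len)      # pseudo-noise watermark
    gain = 0.005                                      # fixed (non-psychoacoustic) gain

    fs = 44100
    t = np.arange(0, 2.0, 1 / fs)
    audio = 0.5 * np.sin(2 * np.pi * 440 * t)         # synthetic host signal

    marked = audio.copy()
    n_frames = len(audio) // frame_len
    for i in range(n_frames):
        sl = slice(i * frame_len, (i + 1) * frame_len)
        marked[sl] += gain * pn                       # embed

    # Detection: correlate each frame with the known PN sequence.
    scores = [np.dot(marked[i * frame_len:(i + 1) * frame_len], pn) / frame_len
              for i in range(n_frames)]
    print(f"mean frame correlation: {np.mean(scores):.4f} (embedding gain {gain})")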

  16. An automatic damage detection algorithm based on the Short Time Impulse Response Function

    Science.gov (United States)

    Auletta, Gianluca; Carlo Ponzo, Felice; Ditommaso, Rocco; Iacovino, Chiara

    2016-04-01

    Structural Health Monitoring, together with dynamic identification and damage detection techniques, has been increasing in popularity in both the scientific and civil communities in recent years. The basic idea arises from the observation that spectral properties, described in terms of the so-called modal parameters (eigenfrequencies, mode shapes, and modal damping), are functions of the physical properties of the structure (mass, energy dissipation mechanisms and stiffness). Damage detection techniques traditionally consist of visual inspection and/or non-destructive testing. A different approach consists of vibration-based methods that detect changes in features related to damage. Structural damage exhibits its main effects in terms of stiffness and damping variation. The damage detection approach based on dynamic monitoring of structural properties over time has received considerable attention in the recent scientific literature. We focus attention on structural damage localization and detection after an earthquake, based on the evaluation of the mode curvature difference. The methodology is based on the acquisition of the structural dynamic response through a three-directional accelerometer installed on the top floor of the structure. It is able to assess the presence of any damage on the structure, also providing information about the position and severity of the damage. The procedure is based on a Band-Variable Filter (Ditommaso et al., 2012), used to extract the dynamic characteristics of systems that evolve over time by acting simultaneously in both the time and frequency domains. In this paper, using a combined approach based on the Fourier Transform and on seismic interferometric analysis, a useful tool for the automatic fundamental frequency evaluation of nonlinear structures is proposed. Moreover, using this kind of approach it is possible to improve some of the existing methods for automatic damage detection, providing stable results

  17. 14 CFR 23.1329 - Automatic pilot system.

    Science.gov (United States)

    2010-01-01

    14 CFR 23.1329 (Aeronautics and Space), Automatic pilot system - Installation: If an automatic pilot system is installed, it must meet the following: (a) Each system must be designed so that the automatic pilot can (1) be quickly and...

  18. Relatedness Proportion Effects in Semantic Categorization: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2014-01-01

    Semantic priming effects at a short prime-target stimulus onset asynchrony are commonly explained in terms of an automatic spreading activation process. According to this view, the proportion of related trials should have no impact on the size of the semantic priming effect. Using a semantic categorization task ("Is this a living…

  19. Is Mobile-Assisted Language Learning Really Useful? An Examination of Recall Automatization and Learner Autonomy

    Science.gov (United States)

    Sato, Takeshi; Murase, Fumiko; Burden, Tyler

    2015-01-01

    The aim of this study is to examine the advantages of Mobile-Assisted Language Learning (MALL), especially vocabulary learning of English as a foreign or second language (L2) in terms of the two strands: automatization and learner autonomy. Previous studies articulate that technology-enhanced L2 learning could bring about some positive effects.…

  20. An Evaluation of Response Cost in the Treatment of Inappropriate Vocalizations Maintained by Automatic Reinforcement

    Science.gov (United States)

    Falcomata, Terry S.; Roane, Henry S.; Hovanetz, Alyson N.; Kettering, Tracy L.; Keeney, Kris M.

    2004-01-01

    In the current study, we examined the utility of a procedure consisting of noncontingent reinforcement with and without response cost in the treatment of inappropriate vocalizations maintained by automatic reinforcement. Results are discussed in terms of examining the variables that contribute to the effectiveness of response cost as treatment for…

  1. Reflecting and deflecting stereotypes : Assimilation and contrast in impression formation and automatic behavior

    NARCIS (Netherlands)

    Dijksterhuis, A; Spears, R; Lepinasse, V

    2001-01-01

    Factors influencing the tendency to represent a social stimulus primarily in stereotypic terms, or more as a distinct exemplar, were predicted to moderate automatic behavior effects, producing assimilation and contrast respectively. In Experiment I, we demonstrated that when an impression pertained

  2. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume

  3. Automatic learning-based beam angle selection for thoracic IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy; Marshall, Andrea [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, Toronto, Ontario M5G 1P5 (Canada); Levinshtein, Alex [Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4 (Canada); Hope, Andrew J.; Lindsay, Patricia [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Pekar, Vladimir [Philips Healthcare, Markham, Ontario L6C 2S3 (Canada)

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
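
    As a hedged illustration of the learning component described in both records above (a random forest regressor mapping anatomical features of a candidate beam angle to a beam score), the sketch below uses scikit-learn on synthetic placeholder data; the feature set, targets and model settings are assumptions, not the clinical dataset or the authors' configuration.

    # Hedged sketch of the learning step: a random forest regressor that maps
    # anatomical features of a candidate beam angle to a beam score.
    # Features and targets are synthetic placeholders, not clinical data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n_samples, n_features = 500, 12          # e.g. target/OAR overlap, depth, geometry
    X = rng.normal(size=(n_samples, n_features))
    y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n_samples)   # synthetic score

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[:400], y[:400])
    print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))

    # Scoring candidate angles for a new patient would then rank angles by
    # model.predict(features_per_angle) before the interbeam optimization step.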

  4. Practical automatic Arabic license plate recognition system

    Science.gov (United States)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Since the 1970s, the need for automatic license plate recognition systems has been increasing. A license plate recognition system is an automatic system that is able to recognize a license plate number extracted from image sensors. Automatic License Plate Recognition systems are used in conjunction with various transportation systems in application areas such as law enforcement (e.g. speed limit enforcement) and commercial uses such as parking enforcement, automatic toll payment, private and public entrances, border control, and theft and vandalism control. Vehicle license plate recognition has been intensively studied in many countries, and due to the different types of license plates being used, the requirements of an automatic license plate recognition system are different for each country. Generally, an automatic license plate localization and recognition system is made up of three modules: license plate localization, character segmentation and optical character recognition. This paper presents an Arabic license plate recognition system that is insensitive to character size, font, shape and orientation with an extremely high accuracy rate. The proposed system is based on a combination of enhancement, license plate localization, morphological processing, and feature vector extraction using the Haar transform. The system is fast due to the classification of alphabet and numerals based on the license plate organization. Experimental results for license plates from two different Arab countries show an average of 99% successful license plate localization and recognition in a total of more than 20 different images captured from a complex outdoor environment. The run times are shorter than those of conventional and many state-of-the-art methods.

  5. Automatic Performance Debugging of SPMD-style Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Kunlin; Shi, Weisong; Yuan, Lin; Meng, Dan; Wang, Lei

    2011-01-01

    The single program, multiple data (SPMD) programming model is widely used for both high performance computing and Cloud computing. In this paper, we design and implement an innovative system, AutoAnalyzer, that automates the process of debugging performance problems of SPMD-style parallel programs, including data collection, performance behavior analysis, locating bottlenecks, and uncovering their root causes. AutoAnalyzer is unique in terms of two features: first, without any a priori knowledge, it automatically locates bottlenecks and uncovers their root causes for performance optimization; second, it is lightweight in terms of the size of performance data to be collected and analyzed. Our contributions are three-fold: first, we propose two effective clustering algorithms to investigate the existence of performance bottlenecks that cause process behavior dissimilarity or code region behavior disparity, respectively; meanwhile, we present two searching algorithms to locate bottlenecks; second, on a basis o...

  6. Towards Automatic Improvement of Patient Queries in Health Retrieval Systems

    Directory of Open Access Journals (Sweden)

    Nesrine KSENTINI

    2016-07-01

    Full Text Available With the adoption of health information technology for clinical health, e-health is becoming common practice today. Users of this technology find it difficult to locate information relevant to their needs due to the increasing amount of clinical and medical data on the web and their lack of knowledge of medical jargon. In this regard, a method is described that improves users' queries by automatically adding new related terms which appear in the same context as the original query, in order to improve the final search results. The method is based on the assessment of semantic relationships, defined by a proposed statistical method, between a set of terms or keywords. Experiments were performed on the CLEF-eHealth-2015 database and the obtained results show the effectiveness of the proposed method.
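
    As an illustration of the general idea, adding terms that occur in the same context as the original query, the sketch below expands a query with its most frequent co-occurring terms. The simple count-based scoring and the toy documents are placeholders for the statistical semantic-relatedness measure proposed in the paper.

    ```python
    # Sketch: expand a patient query with terms that frequently co-occur with
    # the query terms in a document collection (count-based scoring standing in
    # for the paper's statistical semantic-relatedness measure).
    from collections import Counter

    documents = [
        "shortness of breath and chest pain after exercise",
        "chest pain radiating to the left arm may indicate angina",
        "treatment of angina includes nitroglycerin",
    ]

    def expand_query(query, documents, top_k=3):
        query_terms = set(query.lower().split())
        cooccur = Counter()
        for doc in documents:
            tokens = set(doc.lower().split())
            if tokens & query_terms:                  # document shares a query term
                cooccur.update(tokens - query_terms)  # count candidate expansion terms
        return list(query_terms) + [t for t, _ in cooccur.most_common(top_k)]

    print(expand_query("chest pain", documents))
    ```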

  7. Efficient formulations of the material identification problem using full-field measurements

    Science.gov (United States)

    Pérez Zerpa, Jorge M.; Canelas, Alfredo

    2016-08-01

    The material identification problem addressed consists of determining the constitutive parameters distribution of a linear elastic solid using displacement measurements. This problem has been considered in important applications such as the design of methodologies for breast cancer diagnosis. Since the resolution of real life problems involves high computational costs, there is great interest in the development of efficient methods. In this paper two new efficient formulations of the problem are presented. The first formulation leads to a second-order cone optimization problem, and the second one leads to a quadratic optimization problem, both allowing the resolution of the problem with high efficiency and precision. Numerical examples are solved using synthetic input data with error. A regularization technique is applied using the Morozov criterion along with an automatic selection strategy of the regularization parameter. The proposed formulations present great advantages in terms of efficiency, when compared to other formulations that require the application of general nonlinear optimization algorithms.
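
    The regularization-parameter selection mentioned above can be illustrated with a minimal sketch of the Morozov discrepancy principle applied to Tikhonov-regularized least squares on synthetic data; the second-order cone and quadratic formulations of the identification problem itself are not reproduced here.

    ```python
    # Sketch: choose the Tikhonov regularization parameter by the Morozov
    # discrepancy principle, i.e. accept the first alpha whose residual norm
    # does not exceed the (assumed known) noise level delta.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((50, 20))
    x_true = rng.random(20)
    noise = 0.01 * rng.standard_normal(50)
    b = A @ x_true + noise
    delta = np.linalg.norm(noise)          # noise level, known in this synthetic test

    def tikhonov(A, b, alpha):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

    alpha = 1.0
    while np.linalg.norm(A @ tikhonov(A, b, alpha) - b) > delta and alpha > 1e-12:
        alpha *= 0.5                        # decrease until the discrepancy matches delta
    print("selected alpha:", alpha)
    ```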

  8. TEXT-DEPENDENT METHOD FOR GENDER IDENTIFICATION THROUGH SYNTHESIS OF VOICED SEGMENTS

    Directory of Open Access Journals (Sweden)

    SUMIT KUMAR BANCHHOR,

    2011-06-01

    Full Text Available Differences in the physiological properties of the glottis and the vocal tract are partly due to age and/or gender differences. Since these differences are reflected in the speech signal, acoustic measures related to those properties can be helpful for automatic gender classification. Acoustic measures of the voice source were extracted from 10 utterances spoken by 20 male and 20 female talkers (aged 19 to 25 years). Long-term speech features, including amplitude, zero crossing rate, short-time energy, spectrum flux, and the spectrogram, are proposed for sex identification. An experimental framework is set up for this classification task, and the result of about 97% accuracy for gender classification clearly validates this hypothesis.
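
    Two of the features listed above, zero crossing rate and short-time energy, can be computed frame by frame roughly as follows; the frame length, hop size and test signal are illustrative.

    ```python
    # Sketch: frame-wise zero crossing rate and short-time energy,
    # two of the long-term features used for gender classification above.
    import numpy as np

    def frame_features(signal, frame_len=400, hop=200):
        zcr, energy = [], []
        for start in range(0, len(signal) - frame_len + 1, hop):
            frame = signal[start:start + frame_len]
            zcr.append(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
            energy.append(np.sum(frame.astype(float) ** 2) / frame_len)
        return np.array(zcr), np.array(energy)

    # Illustrative signal: 1 s of a 200 Hz tone sampled at 8 kHz.
    t = np.arange(8000) / 8000.0
    zcr, energy = frame_features(np.sin(2 * np.pi * 200 * t))
    print(zcr.mean(), energy.mean())
    ```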

  9. An improved automatic detection method for earthquake-collapsed buildings from ADS40 image

    Institute of Scientific and Technical Information of China (English)

    GUO HuaDong; LU LinLin; MA JianWen; PESARESI Martino; YUAN FangYan

    2009-01-01

    Earthquake-collapsed building identification is important in earthquake damage assessment and is evidence for mapping seismic intensity. After the May 12th Wenchuan major earthquake occurred, experts from CEODE and IPSC collaborated to make a rapid earthquake damage assessment. A crucial task was to identify collapsed buildings from ADS40 images in the earthquake region. The difficulty was to differentiate collapsed buildings from concrete bridges, dry gravels, and landslide-induced rolling stones since they had a similar gray level range in the image. Based on the IPSC method, an improved automatic identification technique was developed and tested in the study area, a portion of Beichuan County. Final results showed that the technique's accuracy was over 95%. Procedures and results of this experiment are presented in this article. Theory of this technique indicates that it could be applied to collapsed building identification caused by other disasters.

  10. 48 CFR 252.211-7006 - Radio Frequency Identification.

    Science.gov (United States)

    2010-10-01

    ... supply, as defined in DoD 4140.1-R, DoD Supply Chain Materiel Management Regulation, AP1.1.11: (A... immediate, automatic, and accurate identification of any item in the supply chain of any company, in any..., organizational tool kits, hand tools, and administrative and housekeeping supplies and equipment. (C) Class...

  11. Developing a Speaker Identification System for the DARPA RATS Project

    DEFF Research Database (Denmark)

    Plchot, O; Matsoukas, S; Matejka, P;

    2013-01-01

    This paper describes the speaker identification (SID) system developed by the Patrol team for the first phase of the DARPA RATS (Robust Automatic Transcription of Speech) program, which seeks to advance state of the art detection capabilities on audio from highly degraded communication channels. We...

  12. Musical Instrument Identification using Multiscale Mel-frequency Cepstral Coefficients

    DEFF Research Database (Denmark)

    Sturm, Bob L.; Morvidone, Marcela; Daudet, Laurent

    2010-01-01

    We investigate the benefits of evaluating Mel-frequency cepstral coefficients (MFCCs) over several time scales in the context of automatic musical instrument identification for signals that are monophonic but derived from real musical settings. We define several sets of features derived from MFCCs...
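
    A minimal sketch of evaluating MFCCs over several time scales with librosa is given below; the window lengths, the synthetic test tone and the per-scale averaging are illustrative, and the classification stage of the paper is not reproduced.

    ```python
    # Sketch: MFCCs evaluated over several analysis window lengths (time scales),
    # then stacked into one feature vector per excerpt (window sizes illustrative).
    import numpy as np
    import librosa

    sr = 22050
    y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)    # 1 s test tone

    features = []
    for n_fft in (512, 1024, 2048, 4096):                # several time scales
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                    n_fft=n_fft, hop_length=n_fft // 2)
        features.append(mfcc.mean(axis=1))               # summarize each scale
    feature_vector = np.concatenate(features)
    print(feature_vector.shape)                          # (52,) = 4 scales x 13 MFCCs
    ```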

  13. Automatic prejudice in childhood and early adolescence.

    Science.gov (United States)

    Degner, Juliane; Wentura, Dirk

    2010-03-01

    Four cross-sectional studies are presented that investigated the automatic activation of prejudice in children and adolescents (aged 9 years to 15 years). Therefore, 4 different versions of the affective priming task were used, with pictures of ingroup and outgroup members being presented as prejudice-related prime stimuli. In all 4 studies, a pattern occurred that suggests a linear developmental increase of automatic prejudice with significant effects of outgroup negativity appearing only around the ages of 12 to 13 years. Results of younger children, on the contrary, did not indicate any effect of automatic prejudice activation. In contrast, prejudice effects in an Implicit Association Test (IAT) showed high levels of prejudice independent of age (Study 3). Results of Study 4 suggest that these age differences are due to age-related differences in spontaneous categorization processes. Introducing a forced-categorization into the affective priming procedure produced a pattern of results equivalent to that obtained with the IAT. These results suggest that although children are assumed to acquire prejudice at much younger ages, automatization of such attitudes might be related to developmental processes in early adolescence. We discuss possible theoretical implications of these results for a developmental theory of prejudice representation and automatization during childhood and adolescence. PMID:20175618

  14. Some reflections on identification.

    Science.gov (United States)

    Szpilka, J

    1999-12-01

    The author presents a view of identification based on a rereading of two of Freud's key texts and an approach derived from an academic interpretation of Hegel dating from the 1930s. These aspects are considered at length. The importance of the human and anthropogenic element is stressed. The human subject is presented as coming into being through language; being called upon to be what he is not and not to be what he is, the subject appears as wishful in nature, desiring the wish of the other at the same time as he desires the object of the other's wish. The author argues that identification as a problem arises only in a human being who speaks or has received an injunction to speak; this raises the question of who or what he is and of being as such. Analytic treatment may in his view therefore proceed in one of two directions, one based on the interplay of projection and introjection with identification as an end, and the other on resistance and repression where the Oedipus complex is seen as the nuclear issue. Identification is seen in terms of overcoming the negative identity of not being all other subjects, and identity is found to be a conscious response that might even have a political element.

  15. Automatic contrast: evidence that automatic comparison with the social self affects evaluative responses.

    Science.gov (United States)

    Ruys, Kirsten I; Spears, Russell; Gordijn, Ernestine H; de Vries, Nanne K

    2007-08-01

    The aim of the present research was to investigate whether unconsciously presented affective information may cause opposite evaluative responses depending on what social category the information originates from. We argue that automatic comparison processes between the self and the unconscious affective information produce this evaluative contrast effect. Consistent with research on automatic behaviour, we propose that when an intergroup context is activated, an automatic comparison to the social self may determine the automatic evaluative responses, at least for highly visible categories (e.g. sex, ethnicity). Contrary to previous research on evaluative priming, we predict automatic contrastive responses to affective information originating from an outgroup category such that the evaluative response to neutral targets is opposite to the valence of the suboptimal primes. Two studies using different intergroup contexts provide support for our hypotheses. PMID:17705936

  16. An Automatic Hierarchical Delay Analysis Tool

    Institute of Scientific and Technical Information of China (English)

    FaridMheir-El-Saadi; BozenaKaminska

    1994-01-01

    The performance analysis of VLSI integrated circuits (ICs) with flat tools is slow and even sometimes impossible to complete. Some hierarchical tools have been developed to speed up the analysis of these large ICs. However, these hierarchical tools suffer from a poor interaction with the CAD database and poorly automatized operations. We introduce a general hierarchical framework for performance analysis to solve these problems. The circuit analysis is automatic under the proposed framework. Information that has been automatically abstracted in the hierarchy is kept in database properties along with the topological information. A limited software implementation of the framework, PREDICT, has also been developed to analyze the delay performance. Experimental results show that hierarchical analysis CPU time and memory requirements are low if heuristics are used during the abstraction process.

  17. Research on an Intelligent Automatic Turning System

    Directory of Open Access Journals (Sweden)

    Lichong Huang

    2012-12-01

    Full Text Available Equipment manufacturing is a strategic industry for a country, and its core is the CNC machine tool. Therefore, enhancing independent research on the relevant technology of CNC machines, especially open CNC systems, is of great significance. This paper presents some key techniques of an Intelligent Automatic Turning System and gives a viable solution for system integration. First of all, the integrated system architecture and the flexible and efficient workflow for performing the intelligent automatic turning process are illustrated. Secondly, innovative methods for workpiece feature recognition and expression, and for process planning of the NC machining, are put forward. Thirdly, the cutting tool auto-selection and cutting parameter optimization solutions are generated with an integrated inference combining rule-based and case-based reasoning. Finally, an actual machining case based on the developed intelligent automatic turning system proves that the presented solutions are valid, practical and efficient.

  18. Automatic and strategic processes in advertising effects

    DEFF Research Database (Denmark)

    Grunert, Klaus G.

    1996-01-01

    Two kinds of cognitive processes can be distinguished: automatic processes, which are mostly subconscious, are learned and changed very slowly, and are not subject to the capacity limitations of working memory, and strategic processes, which are conscious, are subject to capacity limitations, and can easily be adapted to situational circumstances. Both the perception of advertising and the way advertising influences brand evaluation involve both kinds of processes. Automatic processes govern the recognition of advertising stimuli and the relevance decision which determines further higher-level processing... These observations are at variance with current notions about advertising effects. For example, the attention span problem will be relevant only for strategic processes, not for automatic processes, a certain amount of learning can occur with very little conscious effort, and advertising's effect on brand evaluation may be more stable...

  19. Automatic inference of indexing rules for MEDLINE

    Directory of Open Access Journals (Sweden)

    Shooshan Sonya E

    2008-11-01

    Full Text Available Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.

  20. Support vector machine for automatic pain recognition

    Science.gov (United States)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion, and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects faces in the stored video frames using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of an automatic pain recognition system.
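
    A minimal sketch of the classification stage, an SVM trained on shape and location feature vectors, is shown below; the face detection and feature extraction steps are not reproduced, and the data are random placeholders.

    ```python
    # Sketch: SVM classification of pain vs. no-pain from face shape/location
    # features (feature extraction not reproduced; data are synthetic).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((200, 16))        # hypothetical shape/location feature vectors
    y = rng.integers(0, 2, 200)      # 1 = pain expression, 0 = neutral

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_tr, y_tr)
    print("accuracy:", clf.score(X_te, y_te))
    ```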

  1. Fault injection system for automatic testing system

    Institute of Scientific and Technical Information of China (English)

    王胜文; 洪炳熔

    2003-01-01

    Considering the deficiency of the means for confirming the attribution of fault redundancy in research on Automatic Testing Systems (ATS), a fault-injection system has been proposed to study the fault redundancy of an automatic testing system through comparison. By means of a fault-embedded environmental simulation, faults are injected at the input level of the software under test. These faults may induce inherent failure modes, thus bringing about unexpected output, and the anticipated goal of the test is attained. The fault injection consists of a voltage signal generator, a current signal generator and a rear drive circuit, which were specially developed, and the ATS can work regularly by means of software simulation. The experimental results indicate that the fault-injection system can find deficiencies in the automatic testing software and identify the preference of fault redundancy. On the other hand, some software deficiencies never exposed before can be identified by analyzing the testing results.

  2. Oocytes Polar Body Detection for Automatic Enucleation

    Directory of Open Access Journals (Sweden)

    Di Chen

    2016-02-01

    Full Text Available Enucleation is a crucial step in cloning. In order to achieve automatic blind enucleation, we should detect the polar body of the oocyte automatically. Conventional polar body detection approaches have a low success rate or low efficiency. We propose a polar body detection method based on machine learning in this paper. On one hand, an improved Histogram of Oriented Gradients (HOG) algorithm is employed to extract features of polar body images, which increases the success rate. On the other hand, a position prediction method is put forward to narrow the search range of the polar body, which improves efficiency. Experimental results show that the success rate is 96% for various types of polar bodies. Furthermore, the method is applied to an enucleation experiment and improves the degree of automation of enucleation.
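
    The feature extraction and classification idea can be sketched with the standard HOG implementation from scikit-image and a linear classifier on hypothetical image patches; the paper's improved HOG variant and the position prediction step are not reproduced.

    ```python
    # Sketch: HOG feature extraction from candidate image patches, followed by a
    # classifier deciding whether a patch contains the polar body (standard HOG
    # from scikit-image; the paper's improved HOG is not reproduced).
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    patches = rng.random((40, 64, 64))     # hypothetical grayscale oocyte patches
    labels = rng.integers(0, 2, 40)        # 1 = contains polar body

    X = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for p in patches])
    clf = LinearSVC().fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```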

  3. Automatic Detection of Childhood Absence Epilepsy Seizures: Toward a Monitoring Device

    DEFF Research Database (Denmark)

    Duun-Henriksen, Jonas; Madsen, Rasmus E.; Remvig, Line S.;

    2012-01-01

    Automatic detections of paroxysms in patients with childhood absence epilepsy have been neglected for several years. We acquire reliable detections using only a single-channel brainwave monitor, allowing for unobtrusive monitoring of antiepileptic drug effects. Ultimately we seek to obtain optimal long-term prognoses, balancing antiepileptic effects and side effects. The electroencephalographic appearance of paroxysms in childhood absence epilepsy is fairly homogeneous, making it feasible to develop patient-independent automatic detection. We implemented a state-of-the-art algorithm...

  4. Identification of fast-changing signals by means of adaptive chaotic transformations

    OpenAIRE

    Berezowski, Marek; Lawnik, Marcin

    2016-01-01

    An adaptive approach to the identification of strongly non-linear, fast-changing signals is discussed. The approach is devised as adaptive sampling based on a chaotic mapping of the signal itself. The presented sampling method may be used online in the automatic control of chemical reactors (through the identification of concentration and temperature oscillations in real time), in medicine (through the identification of ECG and EEG signals in real time), etc. In this paper, we present it to identify t...

  5. Semi-automatic knee cartilage segmentation

    Science.gov (United States)

    Dam, Erik B.; Folkesson, Jenny; Pettersen, Paola C.; Christiansen, Claus

    2006-03-01

    Osteo-Arthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of the cartilage breakdown is central in monitoring disease progression and therefore cartilage segmentation is required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, the automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases will be averaged out in the overall results, this reduces the mean accuracy and precision and thereby necessitates larger/longer studies. Since the severe OA cases are often most problematic for the automatic methods, there is even a risk that the quantification will introduce a bias in the results. Therefore, interactive inspection and correction of these problematic cases is desirable. For diagnosis on individuals, this is even more crucial since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method for the task of segmenting the tibial cartilage sheet from low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.

  6. Aircraft noise effects on sleep: a systematic comparison of EEG awakenings and automatically detected cardiac activations

    International Nuclear Information System (INIS)

    Polysomnography is the gold standard for investigating noise effects on sleep, but data collection and analysis are laborious and expensive. We recently developed an algorithm for the automatic identification of cardiac activations associated with cortical arousals, which uses heart rate information derived from a single electrocardiogram (ECG) channel. We hypothesized that cardiac activations can be used as estimates for EEG awakenings. Polysomnographic EEG awakenings and automatically detected cardiac activations were systematically compared using laboratory data of 112 subjects (47 male, mean ± SD age 37.9 ± 13 years), 985 nights and 23 855 aircraft noise events (ANEs). The probability of automatically detected cardiac activations increased monotonically with increasing maximum sound pressure levels of ANEs, exceeding the probability of EEG awakenings by up to 18.1%. If spontaneous reactions were taken into account, exposure–response curves were practically identical for EEG awakenings and cardiac activations. Automatically detected cardiac activations may be used as estimates for EEG awakenings. More investigations are needed to further validate the ECG algorithm in the field and to investigate inter-individual differences in its ability to predict EEG awakenings. This inexpensive, objective and non-invasive method facilitates large-scale field studies on the effects of traffic noise on sleep

  7. AUTOMATIC RECOGNITION OF BOTH INTER AND INTRA CLASSES OF DIGITAL MODULATED SIGNALS USING ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    JIDE JULIUS POPOOLA

    2014-04-01

    Full Text Available In radio communication systems, signal modulation format recognition is a significant characteristic used in radio signal monitoring and identification. Over the past few decades, modulation formats have become increasingly complex, which has led to the problem of how to accurately and promptly recognize a modulation format. In addressing these challenges, the development of automatic modulation recognition systems that can classify a radio signal’s modulation format has received worldwide attention. Decision-theoretic methods and pattern recognition solutions are the two typical automatic modulation recognition approaches. While decision-theoretic approaches use probabilistic or likelihood functions, pattern recognition uses feature-based methods. This study applies the pattern recognition approach based on statistical parameters, using an artificial neural network to classify five different digital modulation formats. The paper deals with automatic recognition of both inter- and intra-class digitally modulated signals, in contrast to most of the existing algorithms in the literature, which deal with either inter-class or intra-class modulation format recognition. The results of this study show that accurate and prompt modulation recognition is possible beyond the lower bound of 5 dB commonly acclaimed in the literature. The other significant contribution of this paper is the use of the Python programming language, which reduces the computational complexity that characterizes other automatic modulation recognition classifiers developed using the conventional MATLAB neural network toolbox.
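
    A sketch of the feature-based pattern-recognition approach, statistical parameters of the received signal fed to a small neural network classifier, is given below. The features, the synthetic signals and the network size are illustrative placeholders, not the configuration used in the paper.

    ```python
    # Sketch: feature-based modulation classification with a small neural network
    # (statistical features and signals are synthetic placeholders).
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def statistical_features(signal):
        amp = np.abs(signal)
        phase = np.angle(signal)
        return [amp.std() / amp.mean(), phase.std(),
                ((amp - amp.mean()) ** 4).mean() / amp.var() ** 2]  # amplitude kurtosis

    # Synthetic complex "signals" standing in for labeled modulation examples.
    X = np.array([statistical_features(rng.standard_normal(512)
                                       + 1j * rng.standard_normal(512))
                  for _ in range(300)])
    y = rng.integers(0, 5, 300)            # five modulation classes

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```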

  8. Wearable Automatic External Defibrillators%穿戴式自动体外除颤仪

    Institute of Scientific and Technical Information of China (English)

    罗华杰; 罗章源; 金勋; 张蕾蕾; 王长金; 张文赞; 涂权

    2015-01-01

    Defibrillation is the most effective method of treating ventricular fibrillation (VF). This paper introduces a wearable automatic external defibrillator based on an embedded system, which includes ECG measurement, bioelectrical impedance measurement and a discharge defibrillation module; it can automatically identify the VF signal and deliver a biphasic exponential waveform defibrillation discharge. As verified by animal tests, the device can acquire the ECG signal and identify VF automatically; after identifying the ventricular fibrillation signal, it can automatically deliver a defibrillation shock to terminate the fibrillation and achieve cardioversion.

  9. Automatic malware analysis an emulator based approach

    CERN Document Server

    Yin, Heng

    2012-01-01

    Malicious software (i.e., malware) has become a severe threat to interconnected computer systems for decades and has caused billions of dollars in damages each year. A large volume of new malware samples is discovered daily. Even worse, malware is rapidly evolving, becoming more sophisticated and evasive in order to strike against current malware analysis and defense systems. Automatic Malware Analysis presents a virtualized malware analysis framework that addresses common challenges in malware analysis. Within this new analysis framework, a series of analysis techniques for automatic malware analy...

  10. Development of automatic laser welding system

    International Nuclear Information System (INIS)

    Lasers are a new production tool for high-speed, low-distortion welding, and their application to automatic welding lines is increasing. IHI has long experience of laser processing for the preservation of nuclear power plants, the welding of airplane engines and so on. Moreover, YAG laser oscillators and various kinds of hardware have been developed for laser welding and automation. Combining these welding technologies and laser hardware technologies produces the automatic laser welding system. In this paper, the component technologies are described, including combined optics intended to improve welding stability, laser oscillators, a monitoring system, a seam tracking system and so on. (author)

  11. Automatic emotional expression analysis from eye area

    Science.gov (United States)

    Akkoç, Betül; Arslan, Ahmet

    2015-02-01

    Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis were obtained from the eye area through discrete wavelet transformation. Using these parameters, emotional expression analysis was performed through artificial intelligence techniques. As a result of the experimental studies, 6 universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified at a success rate of 84% using artificial neural networks.

  12. Automatic Keyword Extraction from Individual Documents

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.; Cowley, Wendy E.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
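
    The core idea, candidate phrases delimited by stop words and punctuation and scored from word co-occurrence statistics, can be sketched as follows. The stop-word list is abbreviated and the scoring is a simplified degree/frequency ratio, so this is an approximation of the published method rather than a faithful reimplementation.

    ```python
    # Sketch: split text into candidate phrases at stop words and punctuation,
    # then score each phrase by the summed degree/frequency ratio of its words.
    import re
    from collections import defaultdict

    STOP_WORDS = {"a", "an", "the", "of", "and", "for", "in", "on", "to", "is", "we"}

    def extract_keywords(text, top_k=5):
        chunks = [re.findall(r"[a-z0-9]+", chunk.lower())
                  for chunk in re.split(r"[.,;:!?()]", text)]
        phrases = []
        for chunk in chunks:
            current = []
            for w in chunk:
                if w in STOP_WORDS:
                    if current:
                        phrases.append(current)
                    current = []
                else:
                    current.append(w)
            if current:
                phrases.append(current)

        freq, degree = defaultdict(int), defaultdict(int)
        for phrase in phrases:
            for w in phrase:
                freq[w] += 1
                degree[w] += len(phrase) - 1   # co-occurrences within the phrase

        def score(phrase):
            return sum((degree[w] + freq[w]) / freq[w] for w in phrase)

        ranked = sorted({" ".join(p): score(p) for p in phrases}.items(),
                        key=lambda kv: -kv[1])
        return ranked[:top_k]

    print(extract_keywords("Automatic keyword extraction identifies important "
                           "sequences of words in individual documents."))
    ```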

  13. Automatic speech recognition a deep learning approach

    CERN Document Server

    Yu, Dong

    2015-01-01

    This book summarizes the recent advancement in the field of automatic speech recognition with a focus on discriminative and hierarchical models. This will be the first automatic speech recognition book to include a comprehensive coverage of recent developments such as conditional random field and deep learning techniques. It presents insights and theoretical foundation of a series of recent models such as conditional random field, semi-Markov and hidden conditional random field, deep neural network, deep belief network, and deep stacking models for sequential learning. It also discusses practical considerations of using these models in both acoustic and language modeling for continuous speech recognition.

  14. 中药材传统经验鉴别术语与药用植物学的内在联系研究%Studies on the Internal Relationship between Traditional Identification Terms in Chinese Medicine and Pharmaceutical Botany

    Institute of Scientific and Technical Information of China (English)

    林丽; 晋玲; 高素芳; 陈红刚; 施晓龙

    2015-01-01

    OBJECTIVE: To enrich the diversity of identification methods for traditional Chinese medicine (TCM) and provide theoretical guidance for the quality evaluation of TCM. METHODS: According to literature references and traditional identification experience, characteristics including medicinal shape, size, color and lustre, surface, texture, section, odor and other aspects were identified by sense organs such as the eyes, hands, nose and mouth. Vivid traditional identification terms were obtained through systematic summarization in order to explore their internal relationship with pharmaceutical botany. RESULTS & CONCLUSIONS: As the simplest identification method, traditional identification can rapidly identify the species and quality of TCM and evaluate quality, and it is of great significance for solving the safety issues of clinical medication and of health care in daily life. There is a correlation between traditional identification and botanical research, which can provide theoretical guidance for the character identification and quality evaluation of TCM.

  15. Automatic segmentation of mammogram and tomosynthesis images

    Science.gov (United States)

    Sargent, Dusty; Park, Sun Young

    2016-03-01

    Breast cancer is one of the most common forms of cancer in terms of new cases and deaths both in the United States and worldwide. However, the survival rate with breast cancer is high if it is detected and treated before it spreads to other parts of the body. The most common screening methods for breast cancer are mammography and digital tomosynthesis, which involve acquiring X-ray images of the breasts that are interpreted by radiologists. The work described in this paper is aimed at optimizing the presentation of mammography and tomosynthesis images to the radiologist, thereby improving the early detection rate of breast cancer and the resulting patient outcomes. Breast cancer tissue has greater density than normal breast tissue, and appears as dense white image regions that are asymmetrical between the breasts. These irregularities are easily seen if the breast images are aligned and viewed side-by-side. However, since the breasts are imaged separately during mammography, the images may be poorly centered and aligned relative to each other, and may not properly focus on the tissue area. Similarly, although a full three dimensional reconstruction can be created from digital tomosynthesis images, the same centering and alignment issues can occur for digital tomosynthesis. Thus, a preprocessing algorithm that aligns the breasts for easy side-by-side comparison has the potential to greatly increase the speed and accuracy of mammogram reading. Likewise, the same preprocessing can improve the results of automatic tissue classification algorithms for mammography. In this paper, we present an automated segmentation algorithm for mammogram and tomosynthesis images that aims to improve the speed and accuracy of breast cancer screening by mitigating the above mentioned problems. Our algorithm uses information in the DICOM header to facilitate preprocessing, and incorporates anatomical region segmentation and contour analysis, along with a hidden Markov model (HMM) for

  16. Automatic leather inspection of defective patterns

    Science.gov (United States)

    Tafuri, Maria; Branca, Antonella; Attolico, Giovanni; Distante, Arcangelo; Delaney, William

    1996-02-01

    Constant and consistent quality levels in the manufacturing industry increasingly require automatic inspection. This paper describes a vision system for leather inspection based upon visual textural properties of the material surface. As visual appearances of both leather and defects exhibit a wide range of variations due to original skin characteristics, curing processes and defect causes, location and classification of defective areas become hard tasks. This paper describes a method for separating the oriented structures of defects from normal leather, a background not homogeneous in color, thickness, brightness and finally in wrinkledness. The first step requires the evaluation of the orientation field from the image of the leather. Such a field associates to each point of the image a 2D vector having as direction the dominant local orientation of gradient vectors and the length proportional to their coherence evaluated in a neighborhood of fixed size. The second step analyzes such a vector flow field by projecting it on a set of basis vectors (elementary texture vectors) spanning the vector space where the vector fields associated to the defects can be defined. The coefficients of these projections are the parameters by means of which both detection and classification can be performed. Since the set of basis vectors is neither orthogonal nor complete, the projection requires the definition of a global optimization criteria that has been chosen to be the minimum difference between the original flow field and the vector field obtained as a linear combination of the basis vectors using the estimated coefficients. This optimization step is performed through a neural network initialized to recognize a limited number of patterns (corresponding to the basis vectors). This second step estimates the parameter vector in each point of the original image. Both leather without defects and defects can be characterized in terms of coefficient vectors making it possible to

  17. Sparse discriminant analysis for breast cancer biomarker identification and classification

    Institute of Scientific and Technical Information of China (English)

    Yu Shi; Daoqing Dai; Chaochun Liu; Hong Yan

    2009-01-01

    Biomarker identification and cancer classification are two important procedures in microarray data analysis. We propose a novel unified method to carry out both tasks. We first preselect biomarker candidates by eliminating unrelated genes through the BSS/WSS ratio filter to reduce computational cost, and then use a sparse discriminant analysis method for simultaneous biomarker identification and cancer classification. Moreover, we give a mathematical justification about automatic biomarker identification. Experimental results show that the proposed method can identify key genes that have been verified in biochemical or biomedical research and classify the breast cancer type correctly.
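
    The pre-selection step mentioned above, ranking genes by the ratio of between-group to within-group sums of squares (BSS/WSS), can be sketched as follows on a synthetic expression matrix; the subsequent sparse discriminant analysis is not reproduced.

    ```python
    # Sketch: pre-select biomarker candidates by the between-group to within-group
    # sum of squares (BSS/WSS) ratio of each gene (synthetic expression matrix).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 500))        # 60 samples x 500 genes
    y = rng.integers(0, 2, 60)                # two cancer subtypes

    def bss_wss(X, y):
        overall = X.mean(axis=0)
        bss = np.zeros(X.shape[1])
        wss = np.zeros(X.shape[1])
        for c in np.unique(y):
            Xc = X[y == c]
            bss += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
            wss += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        return bss / wss

    ratios = bss_wss(X, y)
    top_genes = np.argsort(ratios)[::-1][:50]  # keep the 50 highest-ratio genes
    print(top_genes[:10])
    ```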

  18. Yuma proving grounds automatic UXO detection using biomorphic robots

    Energy Technology Data Exchange (ETDEWEB)

    Tilden, M.W.

    1996-07-01

    The current variety and dispersion of Unexploded Ordnance (UXO) is a daunting technological problem for current sensory and extraction techniques. The bottom line is that the only way to ensure a live UXO has been found and removed is to step on it. As this is an upsetting proposition for biological organisms like animals, farmers, or Yuma field personnel, this paper details a non-biological approach to developing inexpensive, automatic machines that will find, tag, and may eventually remove UXO from a variety of terrains by several proposed methods. The Yuma proving grounds (Arizona) have been pelted with bombs, mines, missiles, and shells since the 1940s. The idea of automatic machines that can clean up after such testing is an old one but as yet unrealized because of the daunting cost, power and complexity requirements of capable robot mechanisms. A researcher at Los Alamos National Laboratory has invented and developed a new variety of living robots that are solar powered, legged, autonomous, adaptive to massive damage, and very inexpensive. This technology, called Nervous Networks (Nv), allows for the creation of capable walking mechanisms (known as Biomorphic robots, or Biomechs for short) that rather than work from task principles use instead a survival-based design philosophy. This allows Nv based machines to continue doing work even after multiple limbs and sensors have been removed or damaged, and to dynamically negotiate complex terrains as an emergent property of their operation (fighting to proceed, as it were). They are not programmed, and indeed, the twelve transistor Nv controller keeps their electronic cost well below that of most pocket radios. It is suspected that advanced forms of these machines in huge numbers may be an interesting, capable solution to the problem of general and specific UXO identification, tagging, and removal.

  19. Semi-automatic parcellation of the corpus striatum

    Science.gov (United States)

    Al-Hakim, Ramsey; Nain, Delphine; Levitt, James; Shenton, Martha; Tannenbaum, Allen

    2007-03-01

    The striatum is the input component of the basal ganglia from the cerebral cortex. It includes the caudate, putamen, and nucleus accumbens. Thus, the striatum is an important component in limbic frontal-subcortical circuitry and is believed to be relevant both for reward-guided behaviors and for the expression of psychosis. The dorsal striatum is composed of the caudate and putamen, both of which are further subdivided into pre- and post-commissural components. The ventral striatum (VS) is primarily composed of the nucleus accumbens. The striatum can be functionally divided into three broad regions: 1) a limbic; 2) a cognitive and 3) a sensorimotor region. The approximate corresponding anatomic subregions for these 3 functional regions are: 1) the VS; 2) the pre/post-commissural caudate and the pre-commissural putamen and 3) the post-commissural putamen. We believe assessing these subregions, separately, in disorders with limbic and cognitive impairment such as schizophrenia may yield more informative group differences in comparison with normal controls than prior parcellation strategies of the striatum such as assessing the caudate and putamen. The manual parcellation of the striatum into these subregions is currently defined using certain landmark points and geometric rules. Since identification of these areas is important to clinical research, a reliable and fast parcellation technique is required. Currently, only full manual parcellation using editing software is available; however, this technique is extremely time intensive. Previous work has shown successful application of heuristic rules into a semi-automatic platform [1]. We present here a semi-automatic algorithm which implements the rules currently used for manual parcellation of the striatum, but requires minimal user input and significantly reduces the time required for parcellation.

  20. Chinese Term Extraction Based on PAT Tree

    Institute of Scientific and Technical Information of China (English)

    ZHANG Feng; FAN Xiao-zhong; XU Yun

    2006-01-01

    A new method of automatic Chinese term extraction is proposed based on the Patricia (PAT) tree. Mutual information is calculated based on prefix searching in a PAT tree of the domain corpus to estimate the internal associative strength between Chinese characters in a string. This greatly improves the speed of term candidate extraction compared with methods based directly on the domain corpus. Common collocation suffix and prefix banks are constructed, and term part-of-speech (POS) composition rules are summarized to improve the precision of term extraction. Experimental results show that the F-measure is 74.97%.
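
    The association measure at the heart of the method can be illustrated as follows: a candidate string is scored by the pointwise mutual information of its weakest internal split, estimated from raw substring counts of a toy corpus. The PAT tree itself, which only serves to make these counts fast over a large corpus, is not reproduced.

    ```python
    # Sketch: score a candidate Chinese string by the pointwise mutual information
    # between its parts, using raw substring counts of a toy corpus.
    import math

    corpus = "自动化术语抽取自动化术语识别术语抽取方法"

    def count(s):
        # occurrences of s in the corpus, overlaps included
        return sum(corpus.startswith(s, i) for i in range(len(corpus)))

    def pmi(candidate):
        n = len(corpus)
        best = float("inf")
        for i in range(1, len(candidate)):          # weakest internal split
            left, right = candidate[:i], candidate[i:]
            p_xy = count(candidate) / n
            p_x, p_y = count(left) / n, count(right) / n
            best = min(best, math.log(p_xy / (p_x * p_y)))
        return best

    for cand in ["术语", "术语抽取", "语抽"]:
        print(cand, round(pmi(cand), 3))
    ```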

  1. Automatic Synchronization as the Element of a Power System's Anti-Collapse Complex

    Science.gov (United States)

    Barkāns, J.; Žalostība, D.

    2008-01-01

    In the work, a new universal technical solution is proposed for blackout prevention in a power system, which combines the means for its optimal short-term sectioning and automatic self-restoration to normal conditions. The key element of self-restoration is automatic synchronization. The authors show that for this purpose it is possible to use automatic re-closing with a device for synchronism-check. The results of computations, with simplified formulas and a relevant mathematical model employed, indicate the area of application for this approach. The proposed solution has been created based on many-year experience in the liquidation of emergencies and on the potentialities of equipment, taking into account new features of blackout development that have come into being recently.

  2. Improvement and automatization of a proportional alpha-beta counting system - FAG

    International Nuclear Information System (INIS)

    An alpha-beta counting system (FAG) for planchette samples is operated at the Health Physics department's laboratory of the NRCN. The original operation mode of the system was based on manual tasks handled by the FHT 1100 electronics. An option for basic operation from a computer keyboard was available too. A computer with an appropriate I/O card was connected to the system, and a new operating program was developed which enables fully automatic control of the various components. The program includes activity calculations and statistical checks as well as data management. A bar-code laser system for sample number reading was integrated into the alpha-beta automatic counting system. Sample identification by means of an attached bar-code label enables unmistakable and reliable attribution of results to the counted sample. (authors)

  3. Validation of crowdsourced automatic rain gauge measurements in Amsterdam

    Science.gov (United States)

    de Vos, Lotte; Leijnse, Hidde; Overeem, Aart; Uijlenhoet, Remko

    2016-04-01

    over time in most stations showed an underestimation of rainfall compared to the accumulative values found in the corresponding radar pixel of the reference. Special consideration is given to the identification of faulty measurements without the need to obtain additional meta-data, such as setup and surroundings. This validation will show the potential of crowdsourced automatic weather stations for future urban rainfall monitoring.

  4. Automatic assessment of cardiac perfusion MRI

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Stegmann, Mikkel Bille; Larsson, Henrik B.W.

    2004-01-01

    In this paper, a method based on Active Appearance Models (AAM) is applied for automatic registration of myocardial perfusion MRI. A semi-quantitative perfusion assessment of the registered image sequences is presented. This includes the formation of perfusion maps for three parameters; maximum up...

  5. Feedback Improvement in Automatic Program Evaluation Systems

    Science.gov (United States)

    Skupas, Bronius

    2010-01-01

    Automatic program evaluation is a way to assess source program files. These techniques are used in learning management environments, programming exams and contest systems. However, use of automated program evaluation encounters problems: some evaluations are not clear for the students and the system messages do not show reasons for lost points.…

  6. Experiments in Automatic Library of Congress Classification.

    Science.gov (United States)

    Larson, Ray R.

    1992-01-01

    Presents the results of research into the automatic selection of Library of Congress Classification numbers based on the titles and subject headings in MARC records from a test database at the University of California at Berkeley Library School library. Classification clustering and matching techniques are described. (44 references) (LRW)

  7. Automatic Radiometric Normalization of Multitemporal Satellite Imagery

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg; Schmidt, Michael

    2004-01-01

    The linear scale invariance of the multivariate alteration detection (MAD) transformation is used to obtain invariant pixels for automatic relative radiometric normalization of time series of multispectral data. Normalization by means of ordinary least squares regression method is compared...... normalization, compare favorably with results from normalization from manually obtained time-invariant features....

  8. An automatic hinge system for leg orthoses

    NARCIS (Netherlands)

    Rietman, J.S.; Goudsmit, J.; Meulemans, D.; Halbertsma, J.P.K.; Geertzen, J.H.B.

    2004-01-01

    This paper describes a new, automatic hinge system for leg orthoses, which provides knee stability in stance, and allows knee-flexion during swing. Indications for the hinge system are a paresis or paralysis of the quadriceps muscles. Instrumented gait analysis was performed in three patients, fitte

  9. 42 CFR 407.17 - Automatic enrollment.

    Science.gov (United States)

    2010-10-01

    ... SUPPLEMENTARY MEDICAL INSURANCE (SMI) ENROLLMENT AND ENTITLEMENT Individual Enrollment and Entitlement for SMI... enrolled for SMI if he or she: (1) Resides in the United States, except in Puerto Rico; (2) Becomes... chapter; and (3) Does not decline SMI enrollment. (b) Opportunity to decline automatic enrollment. (1)...

  10. Automatic extraction of legal concepts and definitions

    NARCIS (Netherlands)

    R. Winkels; R. Hoekstra

    2012-01-01

    In this paper we present the results of an experiment in automatic concept and definition extraction from written sources of law using relatively simple natural language and standard semantic web technology. The software was tested on six laws from the tax domain.

  11. A Statistical Approach to Automatic Speech Summarization

    Science.gov (United States)

    Hori, Chiori; Furui, Sadaoki; Malkin, Rob; Yu, Hua; Waibel, Alex

    2003-12-01

    This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
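
    A much-simplified sketch of the extraction step follows: choose exactly k words from the transcription so that the sum of word scores plus word-concatenation scores is maximal, using dynamic programming. The significance and concatenation scores here are toy placeholders for the linguistic, confidence and dependency models described in the paper.

    ```python
    # Sketch: DP extraction of a k-word summary maximizing word significance
    # plus concatenation scores (toy scores standing in for the paper's models).
    import numpy as np

    words = "the company announced record quarterly profits today analysts said".split()
    sig = {w: len(w) / 10.0 for w in words}            # toy significance score
    def concat_score(a, b):                            # toy word-concatenation score
        return 0.1 if words.index(b) == words.index(a) + 1 else 0.0

    k, n, NEG = 5, len(words), -1e9
    # dp[j, i]: best score of a summary with j words that ends at word i
    dp = np.full((k + 1, n), NEG)
    back = np.zeros((k + 1, n), dtype=int)
    dp[1, :] = [sig[w] for w in words]
    for j in range(2, k + 1):
        for i in range(n):
            for p in range(i):                         # previously selected word
                cand = dp[j - 1, p] + sig[words[i]] + concat_score(words[p], words[i])
                if cand > dp[j, i]:
                    dp[j, i], back[j, i] = cand, p

    # Backtrack the best k-word summary in original word order.
    end = int(np.argmax(dp[k]))
    chosen = [end]
    for j in range(k, 1, -1):
        end = int(back[j, end])
        chosen.append(end)
    print(" ".join(words[i] for i in reversed(chosen)))
    ```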

  12. Neuroanatomical automatic segmentation in brain cancer patients

    OpenAIRE

    D’Haese, P.; Niermann, K; Cmelak, A.; Donnelly, E.; Duay, V.; Li, R; Dawant, B.

    2003-01-01

    Conformally prescribed radiation therapy for brain cancer requires precisely defining the target treatment area, as well as delineating vital brain structures which must be spared from radiotoxicity. The current clinical practice of manually segmenting brain structures can be complex and exceedingly time consuming. Automatic computer-aided segmentation methods have been proposed to increase efficiency and reproducibility in developing radiation treatment plans. Previous studies have establishe...

  13. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan;

    2007-01-01

    Modem development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic...... incrementalizing a broad range of static analyses....

  14. Automatic alignment of hieroglyphs and transliteration

    OpenAIRE

    Nederhof, Mark Jan

    2009-01-01

    Automatic alignment has important applications in philology, facilitating study of texts on the basis of electronic resources produced by different scholars. A simple technique is presented to realise such alignment for Ancient Egyptian hieroglyphic texts and transliteration. Preliminary experiments with the technique are reported, and plans for future work are discussed.

  15. Learning slip behavior using automatic mechanical supervision

    OpenAIRE

    Angelova, Anelia; Matthies, Larry; Helmick, Daniel; Perona, Pietro

    2007-01-01

    We address the problem of learning terrain traversability properties from visual input, using automatic mechanical supervision collected from sensors onboard an autonomous vehicle. We present a novel probabilistic framework in which the visual information and the mechanical supervision interact to learn particular terrain types and their properties. The proposed method is applied to learning of rover slippage from visual information in a completely auto...

  16. Automatic Synthesis of Robust and Optimal Controllers

    DEFF Research Database (Denmark)

    Cassez, Franck; Jessen, Jan Jacob; Larsen, Kim Guldstrand;

    2009-01-01

    In this paper, we show how to apply recent tools for the automatic synthesis of robust and near-optimal controllers for a real industrial case study. We show how to use three different classes of models and their supporting existing tools, Uppaal-TiGA for synthesis, phaver for verification, and S...

  17. Automatic Guidance System for Welding Torches

    Science.gov (United States)

    Smith, H.; Wall, W.; Burns, M. R., Jr.

    1984-01-01

    Digital system automatically guides welding torch to produce square-butt, V-groove and lap-joint weldments within tracking accuracy of ±0.2 millimeter. Television camera observes and traverses weld joint, carrying welding torch behind. Image of joint digitized, and resulting data used to derive control signals that enable torch to track joint.

  18. What is automatized during perceptual categorization?

    Science.gov (United States)

    Roeder, Jessica L; Ashby, F Gregory

    2016-09-01

    An experiment is described that tested whether stimulus-response associations or an abstract rule are automatized during extensive practice at perceptual categorization. Twenty-seven participants each completed 12,300 trials of perceptual categorization, either on rule-based (RB) categories that could be learned explicitly or information-integration (II) categories that required procedural learning. Each participant practiced predominantly on a primary category structure, but every third session they switched to a secondary structure that used the same stimuli and responses. Half the stimuli retained their same response on the primary and secondary categories (the congruent stimuli) and half switched responses (the incongruent stimuli). Several results stood out. First, performance on the primary categories met the standard criteria of automaticity by the end of training. Second, for the primary categories in the RB condition, accuracy and response time (RT) were identical on congruent and incongruent stimuli. In contrast, for the primary II categories, accuracy was higher and RT was lower for congruent than for incongruent stimuli. These results are consistent with the hypothesis that rules are automatized in RB tasks, whereas stimulus-response associations are automatized in II tasks. A cognitive neuroscience theory is proposed that accounts for these results. PMID:27232521

  19. Automatic Pilot For Flight-Test Maneuvers

    Science.gov (United States)

    Duke, Eugene L.; Jones, Frank P.; Roncoli, Ralph B.

    1992-01-01

    Autopilot replaces pilot during automatic maneuvers. Pilot, based on ground, flies aircraft to required altitude, then turns control over to autopilot. Increases quality of maneuvers significantly beyond that attainable through remote manual control by pilot on ground. Also increases quality of maneuvers because it performs maneuvers faster than pilot could and because it does not have to repeat poorly executed maneuvers.

  20. Automatic bootstrapping and tracking of object contours.

    Science.gov (United States)

    Chiverton, John; Xie, Xianghua; Mirmehdi, Majid

    2012-03-01

    A new fully automatic object tracking and segmentation framework is proposed. The framework consists of a motion-based bootstrapping algorithm concurrent to a shape-based active contour. The shape-based active contour uses finite shape memory that is automatically and continuously built from both the bootstrap process and the active-contour object tracker. A scheme is proposed to ensure that the finite shape memory is continuously updated but forgets unnecessary information. Two new ways of automatically extracting shape information from image data given a region of interest are also proposed. Results demonstrate that the bootstrapping stage provides important motion and shape information to the object tracker. This information is found to be essential for good (fully automatic) initialization of the active contour. Further results also demonstrate convergence properties of the content of the finite shape memory and similar object tracking performance in comparison with an object tracker with unlimited shape memory. Tests with an active contour using a fixed-shape prior also demonstrate superior performance for the proposed bootstrapped finite-shape-memory framework and similar performance when compared with a recently proposed active contour that uses an alternative online learning model. PMID:21908256

  1. Automatic program generation: future of software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, J.H.

    1979-01-01

    At this moment software development is still more of an art than an engineering discipline. Each piece of software is lovingly engineered, nurtured, and presented to the world as a tribute to the writer's skill. When will this change? When will the craftsmanship be removed and the programs be turned out like so many automobiles from an assembly line? Sooner or later it will happen: economic necessities will demand it. With the advent of cheap microcomputers and ever more powerful supercomputers doubling capacity, much more software must be produced. The choices are to double the number of programmers, double the efficiency of each programmer, or find a way to produce the needed software automatically. Producing software automatically is the only logical choice. How will automatic programming come about? Some of the preliminary actions which need to be done, and are being done, are to encourage programmer plagiarism of existing software through public library mechanisms, produce well-understood packages such as compilers automatically, develop languages capable of producing software as output, and learn enough about the whole process of programming to be able to automate it. Clearly, the emphasis must not be on efficiency or size, since ever larger and faster hardware is coming.

  2. Automatically extracting class diagrams from spreadsheets

    NARCIS (Netherlands)

    Hermans, F.; Pinzger, M.; Van Deursen, A.

    2010-01-01

    The use of spreadsheets to capture information is widespread in industry. Spreadsheets can thus be a rich source of domain information. We propose to automatically extract this information and transform it into class diagrams. The resulting class diagram can be used by software engineers to understand...

  3. Automatic visual inspection of hybrid microcircuits

    Energy Technology Data Exchange (ETDEWEB)

    Hines, R.E.

    1980-05-01

    An automatic visual inspection system using a minicomputer and a video digitizer was developed for inspecting hybrid microcircuits (HMC) and thin-film networks (TFN). The system performed well in detecting missing components on HMCs and reduced the testing time for each HMC by 75%.

  4. MARZ: Manual and automatic redshifting software

    Science.gov (United States)

    Hinton, S. R.; Davis, Tamara M.; Lidman, C.; Glazebrook, K.; Lewis, G. F.

    2016-04-01

    The Australian Dark Energy Survey (OzDES) is a 100-night spectroscopic survey underway on the Anglo-Australian Telescope using the fibre-fed 2-degree-field (2dF) spectrograph. We have developed a new redshifting application, MARZ, with greater usability, flexibility, and the capacity to analyse a wider range of object types than the RUNZ software package previously used for redshifting spectra from 2dF. MARZ is an open-source, client-based, JavaScript web application which provides an intuitive interface and powerful automatic matching capabilities on spectra generated from the AAOmega spectrograph to produce high-quality spectroscopic redshift measurements. The software can be run interactively or via the command line, and is easily adaptable to other instruments and pipelines if conforming to the current FITS file standard is not possible. Behind the scenes, a modified version of the AUTOZ cross-correlation algorithm is used to match input spectra against a variety of stellar and galaxy templates, and automatic matching performance for OzDES spectra has increased from 54% (RUNZ) to 91% (MARZ). Spectra not matched correctly by the automatic algorithm can easily be redshifted manually by cycling through the automatic results, by manual template comparison, or by marking spectral features.
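
    The core of such automatic matching is correlating the observed spectrum against redshifted templates and keeping the best-scoring trial redshift. The following brute-force sketch illustrates the idea only and is not the AUTOZ algorithm used by MARZ; the function name, the wavelength grids, the linear resampling, and the synthetic single-line template are assumptions (production codes typically cross-correlate in log-wavelength space via FFTs).

        import numpy as np

        def estimate_redshift(obs_wave, obs_flux, tpl_wave, tpl_flux,
                              z_grid=np.linspace(0.0, 1.5, 3001)):
            """Try each trial redshift, resample the shifted template onto the
            observed wavelength grid, and score the overlap with a normalized
            cross-correlation."""
            obs = (obs_flux - obs_flux.mean()) / obs_flux.std()
            scores = []
            for z in z_grid:
                shifted = np.interp(obs_wave, tpl_wave * (1.0 + z), tpl_flux,
                                    left=np.nan, right=np.nan)
                mask = ~np.isnan(shifted)
                if mask.sum() < 10:
                    scores.append(-np.inf)
                    continue
                s = shifted[mask]
                s = (s - s.mean()) / (s.std() + 1e-12)
                scores.append(np.mean(obs[mask] * s))
            best = int(np.argmax(scores))
            return z_grid[best], scores[best]

        # Synthetic example: a template with one emission line, observed at z = 0.3.
        tpl_wave = np.linspace(3000, 9000, 6000)
        tpl_flux = np.exp(-0.5 * ((tpl_wave - 4000) / 5) ** 2)
        obs_wave = np.linspace(4500, 9000, 4500)
        obs_flux = np.interp(obs_wave, tpl_wave * 1.3, tpl_flux) + 0.01 * np.random.randn(obs_wave.size)
        z_best, score = estimate_redshift(obs_wave, obs_flux, tpl_wave, tpl_flux)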

  5. Automatic invariant detection in dynamic web applications

    NARCIS (Netherlands)

    Groeneveld, F.; Mesbah, A.; Van Deursen, A.

    2010-01-01

    The complexity of modern web applications increases as client-side JavaScript and dynamic DOM programming are used to offer a more interactive web experience. In this paper, we focus on improving the dependability of such applications by automatically inferring invariants from the client side and using...

  6. Automatic prejudice in childhood and early adolescence

    NARCIS (Netherlands)

    J. Degner; D. Wentura

    2010-01-01

    Four cross-sectional studies are presented that investigated the automatic activation of prejudice in children and adolescents (aged 9 years to 15 years). To this end, 4 different versions of the affective priming task were used, with pictures of ingroup and outgroup members being presented as prejudice...

  7. Automatic thematic mapping in the EROS program

    Science.gov (United States)

    Edson, D. T.

    1972-01-01

    A specified approach to the automatic extraction and cartographic presentation of thematic data contained in multispectral photographic images is presented. Experimental efforts were directed toward the mapping of open waters, snow and ice, infrared-reflective vegetation, and massed works of man. The system must also be able to process data from a wide variety of sources.
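
    At its simplest, thematic extraction of this kind reduces to per-pixel decision rules on band values and band ratios. The toy classifier below sketches that idea, assuming reflectance-calibrated red and near-infrared bands; the thresholds and theme names are illustrative and are not the EROS production rules.

        import numpy as np

        def classify_pixels(red, nir):
            """Assign a coarse theme to every pixel from two reflectance bands."""
            ndvi = (nir - red) / (nir + red + 1e-6)        # normalized difference vegetation index
            theme = np.full(red.shape, "other", dtype=object)
            theme[nir < 0.05] = "open water"               # water absorbs strongly in the near-IR
            theme[ndvi > 0.4] = "vegetation"               # infrared-reflective vegetation
            theme[(red > 0.6) & (nir > 0.6)] = "snow/ice"  # bright in both bands
            return theme

        red = np.random.rand(64, 64)
        nir = np.random.rand(64, 64)
        labels = classify_pixels(red, nir)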

  8. Automatic quality assurance in cutting and machining

    International Nuclear Information System (INIS)

    Requirements, economics, and the possibility of automatic data acquisition and processing are discussed for different production stages. Which of the stages (materials and measuring-equipment handling, data acquisition, and data processing) is to have priority in automation depends on the time requirements of these stages. (orig.)

  9. A Statistical Approach to Automatic Speech Summarization

    Directory of Open Access Journals (Sweden)

    Chiori Hori

    2003-02-01

    This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context-free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
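
    The word-extraction step can be illustrated with a small dynamic program that selects a fixed number of words, in their original order, so as to maximize the sum of per-word importance scores and pairwise concatenation scores. The sketch below is a simplified stand-in, assuming toy scoring functions rather than the paper's SDCFG-based word significance and concatenation probabilities.

        import numpy as np

        def summarize(words, word_score, concat_score, target_len):
            """Dynamic-programming extraction of `target_len` words in order."""
            n, NEG = len(words), -1e18
            dp = np.full((n, target_len + 1), NEG)          # dp[i][k]: best k-word summary ending at word i
            back = np.full((n, target_len + 1), -1, dtype=int)
            for i in range(n):
                dp[i][1] = word_score(words[i])
            for k in range(2, target_len + 1):
                for i in range(n):
                    for j in range(i):
                        if dp[j][k - 1] == NEG:
                            continue
                        cand = dp[j][k - 1] + word_score(words[i]) + concat_score(words[j], words[i])
                        if cand > dp[i][k]:
                            dp[i][k], back[i][k] = cand, j
            i, k, summary = int(np.argmax(dp[:, target_len])), target_len, []
            while i >= 0 and k >= 1:                        # walk the back-pointers
                summary.append(words[i])
                i, k = back[i][k], k - 1
            return list(reversed(summary))

        words = "the president said the new budget will cut taxes next year".split()
        print(summarize(words, lambda w: len(w) / 10, lambda a, b: 0.1, target_len=5))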

  10. Automatic Positioning System of Small Agricultural Robot

    Science.gov (United States)

    Momot, M. V.; Proskokov, A. V.; Natalchenko, A. S.; Biktimirov, A. S.

    2016-08-01

    The present article discusses automatic positioning systems of agricultural robots used in field work. The existing solutions in this area have been analyzed. The article proposes an original solution, which is easy to implement and is characterized by high-accuracy positioning.

  11. Automatic Water Sensor Window Opening System

    KAUST Repository

    Percher, Michael

    2013-12-05

    A system can automatically open at least one window of a vehicle when the vehicle is being submerged in water. The system can include a water collector and a water sensor, and when the water sensor detects water in the water collector, at least one window of the vehicle opens.
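
    The described behavior amounts to a simple sensor-triggered control loop: monitor the water sensor in the collector and, once a threshold is crossed, command the window to open. The sketch below only illustrates that loop with simulated readings; the threshold value and the function names are assumptions, not the patented design.

        import time

        WATER_THRESHOLD = 0.8                      # assumed normalized sensor reading

        def simulated_readings():
            """Stand-in for the sensor in the water collector: dry, then submerged."""
            yield from [0.0, 0.0, 0.1, 0.5, 0.9]

        def open_windows():
            print("opening at least one window")

        for level in simulated_readings():         # illustrative control loop
            if level > WATER_THRESHOLD:
                open_windows()
                break
            time.sleep(0.05)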

  12. Automatic characterization of dynamics in Absence Epilepsy

    DEFF Research Database (Denmark)

    Petersen, Katrine N. H.; Nielsen, Trine N.; Kjær, Troels W.;

    2013-01-01

    Dynamics of the spike-wave paroxysms in Childhood Absence Epilepsy (CAE) are automatically characterized using novel approaches. Features are extracted from scalograms formed by Continuous Wavelet Transform (CWT). Detection algorithms are designed to identify an estimate of the temporal development...
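
    A scalogram of this kind can be produced by convolving the EEG with scaled Morlet wavelets and taking magnitudes, after which features such as the dominant scale at each sample can be tracked across the paroxysm. The numpy-only sketch below is a generic illustration, assuming a 256 Hz sampling rate and a synthetic ~3 Hz spike-wave stand-in; it is not the authors' detection pipeline.

        import numpy as np

        def morlet(points, scale, w0=6.0):
            """Complex Morlet wavelet sampled at `points` points for a given scale."""
            t = np.arange(-points // 2, points // 2) / scale
            return np.exp(1j * w0 * t) * np.exp(-0.5 * t ** 2) / np.sqrt(scale)

        def scalogram(signal, scales):
            """Magnitude of the continuous wavelet transform of a 1-D signal."""
            out = np.empty((len(scales), len(signal)))
            for i, s in enumerate(scales):
                kernel = morlet(min(len(signal), int(10 * s)), s)
                out[i] = np.abs(np.convolve(signal, kernel, mode="same"))
            return out

        fs = 256                                             # assumed EEG sampling rate (Hz)
        t = np.arange(0, 4, 1 / fs)
        eeg = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.randn(t.size)
        S = scalogram(eeg, scales=np.arange(2, 64))
        dominant_scale = S.argmax(axis=0)                    # crude per-sample feature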

  13. The CHilean Automatic Supernova sEarch

    DEFF Research Database (Denmark)

    Hamuy, M.; Pignata, G.; Maza, J.;

    2012-01-01

    The CHilean Automatic Supernova sEarch (CHASE) project began in 2007 with the goal to discover young, nearby southern supernovae in order to (1) better understand the physics of exploding stars and their progenitors, and (2) refine the methods to derive extragalactic distances. During the first...

  14. ASAM: Automatic architecture synthesis and application mapping

    DEFF Research Database (Denmark)

    Jozwiak, Lech; Lindwer, Menno; Corvino, Rosilde;

    2013-01-01

    This paper focuses on mastering the automatic architecture synthesis and application mapping for heterogeneous massively-parallel MPSoCs based on customizable application-specific instruction-set processors (ASIPs). It presents an overview of the research being currently performed in the scope of...

  15. Hierarchical word clustering - automatic thesaurus generation

    OpenAIRE

    Hodge, V.J.; Austin, J.

    2002-01-01

    In this paper, we propose a hierarchical, lexical clustering neural network algorithm that automatically generates a thesaurus (synonym abstraction) using purely stochastic information derived from unstructured text corpora and requiring no prior word classifications. The lexical hierarchy overcomes the Vocabulary Problem by accommodating paraphrasing through the use of synonym clusters and overcomes Information Overload by focusing search within cohesive clusters. We describe existing word categorization...
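
    The underlying idea of grouping words by the similarity of their co-occurrence profiles can be illustrated, without the authors' neural network, by running off-the-shelf agglomerative clustering over a word-by-word co-occurrence matrix. The scipy sketch below does exactly that; the tiny corpus, the two-word context window, and the distance cut-off are illustrative assumptions.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        corpus = ["the cat sat on the mat", "a cat chased a mouse",
                  "the dog sat on the rug", "a dog chased a cat"]
        tokens = [doc.split() for doc in corpus]
        vocab = sorted({w for doc in tokens for w in doc})
        index = {w: i for i, w in enumerate(vocab)}

        # Word-by-word co-occurrence counts within a two-word window.
        cooc = np.zeros((len(vocab), len(vocab)))
        for doc in tokens:
            for i, w in enumerate(doc):
                for j in range(max(0, i - 2), min(len(doc), i + 3)):
                    if i != j:
                        cooc[index[w], index[doc[j]]] += 1

        # Agglomerative clustering of the co-occurrence rows; words falling in
        # the same cluster form a (very rough) synonym/association group.
        Z = linkage(cooc, method="average", metric="cosine")
        groups = fcluster(Z, t=0.7, criterion="distance")
        for g in sorted(set(groups)):
            print(g, [w for w, c in zip(vocab, groups) if c == g])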

  16. Automatization and familiarity in repeated checking

    NARCIS (Netherlands)

    Dek, E.C.P.; van den Hout, M.A.; Giele, C.L.; Engelhard, I.M.

    2015-01-01

    Repetitive, compulsive-like checking of an object leads to reductions in memory confidence, vividness, and detail. Experimental research suggests that this is caused by increased familiarity with perceptual characteristics of the stimulus and automatization of the checking procedure (Dek, van den Hout, ...

  17. Automatically identifying periodic social events from Twitter

    NARCIS (Netherlands)

    Kunneman, F.A.; Bosch, A.P.J. van den

    2015-01-01

    Many events referred to on Twitter are of a periodic nature, characterized by roughly constant time intervals in between occurrences. Examples are annual music festivals, weekly television programs, and the full moon cycle. We propose a system that can automatically identify periodic events from Twitter...
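
    A minimal periodicity test along these lines looks at the gaps between successive occurrences of a candidate event and flags the event as periodic when those gaps are nearly constant. The sketch below uses the coefficient of variation of the gaps as that test; the threshold and the toy date lists are assumptions, not the system described in the paper.

        from datetime import date
        import numpy as np

        def is_periodic(dates, max_rel_spread=0.1):
            """True when the inter-occurrence gaps are roughly constant."""
            days = np.sort(np.array([d.toordinal() for d in dates], dtype=float))
            gaps = np.diff(days)
            if len(gaps) < 2:
                return False
            return (gaps.std() / gaps.mean()) <= max_rel_spread

        festival = [date(y, 7, 14) for y in range(2010, 2015)]               # roughly annual
        scattered = [date(2014, m, d) for m, d in [(1, 3), (2, 27), (6, 1), (6, 4)]]
        print(is_periodic(festival), is_periodic(scattered))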

  18. Automatic Estimation of Movement Statistics of People

    DEFF Research Database (Denmark)

    Ægidiussen Jensen, Thomas; Rasmussen, Henrik Anker; Moeslund, Thomas B.

    2012-01-01

    Automatic analysis of how people move about in a particular environment has a number of potential applications. However, no system has so far been able to do detection and tracking robustly. Instead, trajectories are often broken into tracklets. The key idea behind this paper is based around...
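
    Even when trajectories are only available as broken tracklets, useful movement statistics can be accumulated directly from them, for instance an occupancy heatmap and a mean step speed, without stitching the tracklets back into full tracks. The sketch below illustrates that aggregation; the (frame, x, y) tracklet format, grid size, and frame size are assumptions for the example, not details taken from the paper.

        import numpy as np

        def movement_statistics(tracklets, grid=(32, 32), frame_size=(480, 640)):
            """Accumulate an occupancy heatmap and mean speed from tracklets."""
            heat, speeds = np.zeros(grid), []
            for tr in tracklets:                               # tr: rows of (frame, x, y)
                tr = np.asarray(tr, dtype=float)
                gy = (tr[:, 2] / frame_size[0] * grid[0]).astype(int).clip(0, grid[0] - 1)
                gx = (tr[:, 1] / frame_size[1] * grid[1]).astype(int).clip(0, grid[1] - 1)
                np.add.at(heat, (gy, gx), 1)                   # occupancy counts per grid cell
                steps = np.diff(tr[:, 1:], axis=0)
                speeds.extend(np.hypot(steps[:, 0], steps[:, 1]))
            return heat, float(np.mean(speeds))

        tracklet = np.column_stack([np.arange(20), 100 + 3 * np.arange(20), np.full(20, 200)])
        heat, mean_speed = movement_statistics([tracklet])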

  19. Reduction of Dutch Sentences for Automatic Subtitling

    NARCIS (Netherlands)

    Tjong Kim Sang, E.F.; Daelemans, W.; Höthker, A.

    2004-01-01

    We compare machine learning approaches for sentence length reduction for automatic generation of subtitles for deaf and hearing-impaired people with a method which relies on hand-crafted deletion rules. We describe building the necessary resources for this task: a parallel corpus of examples of news

  20. A Novel Cascade Classifier for Automatic Microcalcification Detection.

    Science.gov (United States)

    Shin, Seung Yeon; Lee, Soochahn; Yun, Il Dong; Jung, Ho Yub; Heo, Yong Seok; Kim, Sun Mi; Lee, Kyoung Mu

    2015-01-01

    In this paper, we present a novel cascaded classification framework for automatic detection of individual and clusters of microcalcifications (μC). Our framework comprises three classification stages: i) a random forest (RF) classifier for simple features capturing the second order local structure of individual μCs, where non-μC pixels in the target mammogram are efficiently eliminated; ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and iii) a detector to detect clusters of μCs from the individual μC detection results, using two different criteria. From the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as learn implicit features that are able to further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for detection of individual μCs and free-response receiver operating characteristic (FROC) curve for detection of clustered μCs. PMID:26630496
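
    The first cascade stage can be sketched with an off-the-shelf random forest applied to per-pixel feature vectors, using a deliberately low probability threshold so that almost no true microcalcifications are lost before the more expensive DRBM stage. The scikit-learn example below uses synthetic features and labels as stand-ins; it illustrates the cascade idea only and omits the DRBM and clustering stages.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        # Synthetic stand-ins for per-pixel feature vectors (e.g., local contrast,
        # Hessian responses capturing second-order structure) and labels.
        X = rng.normal(size=(5000, 8))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

        stage1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

        # Keep only candidates whose probability exceeds a deliberately low
        # threshold: most background pixels are discarded, while very few true
        # microcalcifications are lost before the second stage.
        proba = stage1.predict_proba(X)[:, 1]
        candidates = np.flatnonzero(proba > 0.05)
        print(f"{candidates.size} of {X.shape[0]} pixels passed to the next stage")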