WorldWideScience

Sample records for samplex automatic mapping

  1. Automatic mapping of monitoring data

    DEFF Research Database (Denmark)

    Lophaven, Søren; Nielsen, Hans Bruun; Søndergaard, Jacob

    2005-01-01

    This paper presents an approach, based on universal kriging, for automatic mapping of monitoring data. The performance of the mapping approach is tested on two data-sets containing daily mean gamma dose rates in Germany reported by means of the national automatic monitoring network (IMIS). In the second dataset an accidental release of radioactivity into the environment was simulated in the south-western corner of the monitored area. The approach has a tendency to smooth the actual data values, and therefore it underestimates extreme values, as seen in the second dataset. However, it is capable of identifying a release of radioactivity provided that the number of sampling locations is sufficiently high. Consequently, we believe that a combination of the presented mapping approach and physical knowledge of the transport processes of radioactivity should be used to predict the extreme values.
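
    A minimal sketch of what such an automatic interpolation step might look like, assuming gamma dose rate measurements at scattered station coordinates and using the PyKrige library's universal kriging with a regional linear drift (the variogram model, grid, and synthetic data below are illustrative assumptions, not the parameters used in the paper):

```python
# Hedged sketch: universal kriging of scattered monitoring data onto a regular grid.
# Station coordinates, measured dose rates, and variogram settings are illustrative.
import numpy as np
from pykrige.uk import UniversalKriging

rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 50)                        # station x coordinates [km]
y = rng.uniform(0, 100, 50)                        # station y coordinates [km]
z = 0.1 + 0.001 * x + rng.normal(0, 0.005, 50)     # gamma dose rate [uSv/h], synthetic

uk = UniversalKriging(
    x, y, z,
    variogram_model="spherical",       # assumed model
    drift_terms=["regional_linear"],   # universal kriging: linear trend in the mean
)

grid_x = np.linspace(0, 100, 101)
grid_y = np.linspace(0, 100, 101)
z_map, z_var = uk.execute("grid", grid_x, grid_y)  # interpolated map + kriging variance
print(z_map.shape, float(z_var.max()))
```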

  2. Automatic generation of tourist maps

    OpenAIRE

    Grabler, Floraine; Agrawala, Maneesh; Sumner, Robert W.; Pauly, Mark

    2008-01-01

    Tourist maps are essential resources for visitors to an unfamiliar city because they visually highlight landmarks and other points of interest. Yet, hand-designed maps are static representations that cannot adapt to the needs and tastes of the individual tourist. In this paper we present an automated system for designing tourist maps that selects and highlights the information that is most important to tourists. Our system determines the salience of map elements using bottom-up vision-based i...

  3. ASAM: Automatic architecture synthesis and application mapping

    DEFF Research Database (Denmark)

    Jozwiak, Lech; Lindwer, Menno; Corvino, Rosilde

    2013-01-01

    This paper focuses on mastering the automatic architecture synthesis and application mapping for heterogeneous massively-parallel MPSoCs based on customizable application-specific instruction-set processors (ASIPs). It presents an overview of the research being currently performed in the scope of...

  4. ASAM: Automatic Architecture Synthesis and Application Mapping

    DEFF Research Database (Denmark)

    Jozwiak, L.; Lindwer, M.; Corvino, R.

    2012-01-01

    This paper focuses on mastering the automatic architecture synthesis and application mapping for heterogeneous massively-parallel MPSoCs based on customizable application-specific instruction-set processors (ASIPs). It presents an overview of the research currently being performed in the scope of the European project ASAM of the ARTEMIS program. The paper briefly presents the results of our analysis of the main problems to be solved and challenges to be faced in the design of such heterogeneous MPSoCs. It explains which system, design, and electronic design automation (EDA) concepts seem to be adequate to resolve the problems and address the challenges. Finally, it introduces and briefly discusses the ASAM design flow and its main stages.

  5. Automatic crown cover mapping to improve forest inventory

    Science.gov (United States)

    Claude Vidal; Jean-Guy Boureau; Nicolas Robert; Nicolas Py; Josiane Zerubia; Xavier Descombes; Guillaume Perrin

    2009-01-01

    To automatically analyze near infrared aerial photographs, the French National Institute for Research in Computer Science and Control developed, together with the French National Forest Inventory (NFI), a method for automatic crown cover mapping. This method uses a Reversible Jump Markov Chain Monte Carlo algorithm to locate the crowns and describe them using ellipses or...

  6. Automatically Annotated Mapping for Indoor Mobile Robot Applications

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Howard, Thomas J.

    2012-01-01

    This paper presents a new and practical method for mapping and annotating indoor environments for mobile robot use. The method makes use of 2D occupancy grid maps for metric representation, and topology maps to indicate the connectivity of the 'places-of-interest' in the environment. Novel use ... consistent, automatically annotated hybrid metric-topological maps that are needed by mobile service robots.

  7. Projector: automatic contig mapping for gap closure purposes

    OpenAIRE

    van Hijum, Sacha A. F. T.; Zomer, Aldert L.; Kuipers, Oscar P.; Kok, Jan

    2003-01-01

    Projector was designed for automatic positioning of contigs from an unfinished prokaryotic genome onto a template genome of a closely related strain or species. Projector mapped 84 contigs of Lactococcus lactis MG1363 (corresponding to 81% of the assembly nucleotides) against the genome of L. lactis IL1403. Ninety-three percent of subsequent gap closure PCRs were successful. Moreover, a significant improvement in the N50 and N80 values (describing the assembly quality) was observed after the u...

  8. AUTOMATIC TEXTURE MAPPING OF ARCHITECTURAL AND ARCHAEOLOGICAL 3D MODELS

    Directory of Open Access Journals (Sweden)

    T. P. Kersten

    2012-07-01

    Full Text Available Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem, the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  9. Automatic Tagging as a Support Strategy for Creating Knowledge Maps

    Directory of Open Access Journals (Sweden)

    Leonardo Moura De Araújo

    2017-06-01

    Full Text Available Graph organizers are powerful tools for both structuring and transmitting knowledge. Because of their unique characteristics, these organizers are valuable for cultural institutions, which own large amounts of information assets and need to constantly make sense of them. On one hand, graph organizers are tools for connecting numerous chunks of data. On the other hand, because they are visual media, they offer a bird's-eye view perspective on complexity, which is digestible to the human eye. They are effective tools for information synthesis, and are capable of providing valuable insights on data. Information synthesis is essential for Heritage Interpretation, since institutions depend on constant generation of new content to preserve relevance among their audiences. While Mind Maps are simpler to structure and comprehend, Knowledge Maps offer challenges that require new methods to minimize the difficulties encountered during their assembly. This paper presents strategies based on manual and automatic tagging as an answer to this problem. In addition, we describe the results of a usability test and qualitative analysis performed to compare the workflows employed to construct both Mind Maps and Knowledge Maps. Furthermore, we also discuss how well concepts can be communicated through the visual representation of trees and networks. Depending on the employed method, different results can be achieved, because of their unique topological characteristics. Our findings suggest that automatic tagging supports and accelerates the construction of graphs.

  10. A Radio-Map Automatic Construction Algorithm Based on Crowdsourcing

    Science.gov (United States)

    Yu, Ning; Xiao, Chenxian; Wu, Yinfeng; Feng, Renjian

    2016-01-01

    Traditional radio-map-based localization methods need to sample a large number of location fingerprints offline, which requires a huge amount of human and material resources. To solve the high sampling cost problem, an automatic radio-map construction algorithm based on crowdsourcing is proposed. The algorithm employs the crowd-sourced information provided by a large number of users when they are walking in the buildings as the source of location fingerprint data. Through the variation characteristics of users’ smartphone sensors, the indoor anchors (doors) are identified and their locations are regarded as reference positions of the whole radio-map. The AP-Cluster method is used to cluster the crowdsourced fingerprints to acquire the representative fingerprints. According to the reference positions and the similarity between fingerprints, the representative fingerprints are linked to their corresponding physical locations and the radio-map is generated. Experimental results demonstrate that the proposed algorithm reduces the cost of fingerprint sampling and radio-map construction and guarantees the localization accuracy. The proposed method does not require users’ explicit participation, which effectively solves the resource-consumption problem when a location fingerprint database is established. PMID:27070623
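
    The clustering step can be illustrated with scikit-learn's AffinityPropagation, which picks exemplar fingerprints from crowdsourced RSS vectors without specifying the number of clusters in advance (the synthetic RSS data and parameters below are assumptions for illustration only, not the paper's AP-Cluster configuration):

```python
# Hedged sketch: clustering crowdsourced RSS fingerprints to obtain representative ones.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)
# Synthetic RSS vectors (dBm) for 200 crowdsourced samples over 6 access points.
rss = np.vstack([
    rng.normal(loc=c, scale=3.0, size=(50, 6))
    for c in ([-40, -70, -80, -60, -90, -55],
              [-75, -45, -65, -85, -50, -70],
              [-60, -80, -42, -70, -65, -88],
              [-85, -60, -75, -48, -72, -64])
])

ap = AffinityPropagation(damping=0.9, random_state=0).fit(rss)
representative_fps = ap.cluster_centers_     # exemplar fingerprints
print(len(representative_fps), "representative fingerprints from", len(rss), "samples")
```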

  11. IceMap250—Automatic 250 m Sea Ice Extent Mapping Using MODIS Data

    Directory of Open Access Journals (Sweden)

    Charles Gignac

    2017-01-01

    Full Text Available The sea ice cover in the North evolves at a rapid rate. To adequately monitor this evolution, tools with high temporal and spatial resolution are needed. This paper presents IceMap250, an automatic sea ice extent mapping algorithm using MODIS reflective/emissive bands. Hybrid cloud-masking using both the MOD35 mask and a visibility mask, combined with downscaling of Bands 3–7 to 250 m, is utilized to delineate sea ice extent using a decision tree approach. IceMap250 was tested on scenes from the freeze-up, stable cover, and melt seasons in the Hudson Bay complex, in Northeastern Canada. IceMap250's first product is a daily composite sea ice presence map at 250 m. Validation based on comparisons with photo-interpreted ground-truth shows the ability of the algorithm to achieve high classification accuracy, with kappa values systematically over 90%. IceMap250's second product is a weekly clear-sky map that provides a synthesis of 7 days of daily composite maps. This map, produced using a majority filter, makes the sea ice presence map even more accurate by filtering out the effects of isolated classification errors. The synthesis maps show spatial consistency through time when compared to passive microwave and national ice service maps.
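
    As a rough illustration of the decision-tree classification and the kappa-based validation mentioned above, the following sketch trains a small tree on per-pixel reflectance features and scores it with Cohen's kappa; the band values, labels, and tree depth are made-up stand-ins, not the IceMap250 thresholds:

```python
# Hedged sketch: decision-tree sea-ice / open-water classification with kappa validation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
n = 1000
# Synthetic per-pixel features: visible and NIR/SWIR reflectances (bands downscaled to 250 m).
ice = np.column_stack([rng.normal(0.7, 0.05, n), rng.normal(0.6, 0.05, n), rng.normal(0.1, 0.03, n)])
water = np.column_stack([rng.normal(0.05, 0.02, n), rng.normal(0.04, 0.02, n), rng.normal(0.02, 0.01, n)])
X = np.vstack([ice, water])
y = np.array([1] * n + [0] * n)          # 1 = sea ice, 0 = open water

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[::2], y[::2])
pred = clf.predict(X[1::2])
print("kappa:", round(cohen_kappa_score(y[1::2], pred), 3))
```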

  12. Automatic detection and mapping of oil spill using SAR images

    Energy Technology Data Exchange (ETDEWEB)

    Assilzadeh, H.; Gao, Y. [Calgary Univ., AB (Canada). Schulich School of Engineering, Dept. of Geomatics Engineering

    2008-07-01

    Satellite remote sensing can act as a supplement to other aerial observations of offshore oil spills that cover vast areas of the marine environment. A method for automatic detection and mapping of oil spills in synthetic aperture radar (SAR) images was presented. The proposed SAR framework includes detecting and classifying spilled areas based on texture analysis, thresholding, Gamma filtering and unsupervised classification. SAR is suitable for spills in situations when the oil cannot be seen or discriminated from the background. The automatic processing of a radar image for oil spill detection and delineation in seawater and coastal areas was demonstrated in this paper. Radar signatures were shown to be effective in distinguishing different regions of an oil spill, and in classifying the oil spill region into 3 classes according to spill thickness. Classification is based on a gradient of the backscattering value in spilled regions through different oil concentrations. However, there are some limitations regarding weather conditions in identifying oil slicks. At high winds, the oil may be washed down into the sea, leaving no surface effect in the SAR image. In addition, since SAR signals cannot be received at very low winds, no slicks can be observed. Experienced operators are needed to distinguish false alarms that may occur when processing oil spill look-alikes such as natural surfactants, sea algae and grease ice. The proposed SAR framework was shown to provide valuable information about oil spill scenarios and the extent of polluted areas. 26 refs., 4 figs.

  13. Automatized system for realizing solar irradiation maps of lands

    Energy Technology Data Exchange (ETDEWEB)

    Biasini, A.; Fanucci, O.; Visentin, R.

    This work explains in detail the methodological, operational and graphical procedures for producing "solar irradiation maps" of the Italian territory. Starting from a topographic representation based on contour lines, the graphic results are produced through acquisition, classification, digitization and automatic superimposition of data describing the orography of the area (slope classes and slope aspect), followed by association of the corresponding relative solar irradiation classes to the areas delineated in this way. The method has been applied and tested successfully in an area of about 400 km², corresponding to the NE quadrant of I.G.M.I. sheet no. 221 at scale 1:100,000, because this area seems to represent almost all possible combinations of slope and aspect. An example of the final result and of the intermediate stages is reported.

  14. Automatic traveltime picking using local time-frequency maps

    KAUST Repository

    Saragiotis, Christos

    2011-01-01

    The arrival times of distinct and sufficiently concentrated signals can be computed using Fourier transforms. In real seismograms, however, signals are far from distinct. We use local time-frequency maps of the seismograms and their frequency derivatives to obtain frequency-dependent (instantaneous) traveltimes. A smooth division is utilized to control the resolution of the instantaneous traveltimes to allow for a trade-off between resolution and stability. We average these traveltimes over a data-dependent frequency band. The resulting traveltime attribute is used to isolate different signals in seismic traces. We demonstrate the effectiveness of this automatic method for picking arrivals by applying it to synthetic and real data. © 2011 Society of Exploration Geophysicists.

  15. Research on Geological Survey Data Management and Automatic Mapping Technology

    Directory of Open Access Journals (Sweden)

    Dong Huang

    2017-01-01

    Full Text Available The data management of a large geological survey is not an easy task. To efficiently store and manage the huge datasets, a database of geological information on the basis of Microsoft Access has been created. By using this geological information database, the large volume of geological information can be stored and managed easily and scientifically. The geological maps, such as borehole diagrams, rose diagrams of joint trends, and joint isointensity diagrams, are traditionally drawn by hand, which is not efficient; moreover, such drawings are not easy to modify. Therefore, to solve these problems, an automatic mapping method and associated interfaces have been developed by using VS2010 and the geological information database; these developments are presented in this article. This article describes the theoretical basis of the new method in detail and provides a case study of practical engineering to demonstrate its application.

  16. AUTOMATIC ROAD SIGN INVENTORY USING MOBILE MAPPING SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Soilán

    2016-06-01

    Full Text Available The periodic inspection of certain infrastructure features plays a key role for road network safety and preservation, and for developing optimal maintenance planning that minimizes the life-cycle cost of the inspected features. Mobile Mapping Systems (MMS) use laser scanner technology in order to collect dense and precise three-dimensional point clouds that gather both geometric and radiometric information of the road network. Furthermore, time-stamped RGB imagery that is synchronized with the MMS trajectory is also available. In this paper a methodology for the automatic detection and classification of road signs from point cloud and imagery data provided by a LYNX Mobile Mapper System is presented. First, road signs are detected in the point cloud. Subsequently, the inventory is enriched with geometrical and contextual data such as orientation or distance to the trajectory. Finally, semantic content is given to the detected road signs. As the point cloud resolution is insufficient, RGB imagery is used by projecting the 3D points onto the corresponding images and analysing the RGB data within the bounding box defined by the projected points. The methodology was tested in urban and road environments in Spain, obtaining global recall results greater than 95%, and F-scores greater than 90%. In this way, inventory data are obtained in a fast, reliable manner, and can be applied to improve the maintenance planning of the road network, or to feed a Spatial Information System (SIS), so that road sign information is available for use in a Smart City context.
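
    The projection of detected 3D sign points into the synchronized RGB frames can be sketched with a basic pinhole camera model; the intrinsic matrix, pose, and point cloud below are placeholders, not the LYNX system's actual calibration:

```python
# Hedged sketch: project laser-scanner points of a detected sign into an RGB frame
# and take the bounding box of the projected pixels for later semantic classification.
import numpy as np

K = np.array([[1500.0, 0.0, 960.0],      # assumed camera intrinsics
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # assumed camera rotation (world -> camera)
t = np.array([0.0, 0.0, 0.0])            # assumed camera translation

sign_points = np.array([[2.0, 1.0, 10.0],   # 3D points of one detected road sign [m]
                        [2.4, 1.0, 10.0],
                        [2.0, 1.4, 10.0],
                        [2.4, 1.4, 10.0]])

cam = (R @ sign_points.T).T + t          # world -> camera coordinates
uv = (K @ cam.T).T
uv = uv[:, :2] / uv[:, 2:3]              # perspective division -> pixel coordinates

u_min, v_min = uv.min(axis=0)
u_max, v_max = uv.max(axis=0)
print("bounding box (px):", (u_min, v_min, u_max, v_max))
```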

  17. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    Science.gov (United States)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    Increased application of high resolution spatial data, such as high resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to develop automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies, such as climate-related changes, as well as increasing access to high-resolution satellite images underline the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis along with improving the accuracy of the results. About 98% overall accuracy and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
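
    A small sketch of the SOM step, using the MiniSom package on synthetic pixel feature vectors; the grid size, learning rate, and quantization-error check are illustrative assumptions rather than the parameters used in the study:

```python
# Hedged sketch: training a Self-Organizing Map on pixel feature vectors and
# reporting the quantization error, as a stand-in for the bedform extraction step.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(3)
features = rng.random((5000, 4))            # e.g. brightness + texture features per pixel

som = MiniSom(x=10, y=10, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(features)
som.train_random(features, num_iteration=10000)

print("quantization error:", som.quantization_error(features))
winning_nodes = np.array([som.winner(f) for f in features])   # cluster label per pixel
```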

  18. Assessing the impact of graphical quality on automatic text recognition in digital maps

    Science.gov (United States)

    Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang

    2016-08-01

    Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system, accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.

  19. Automatic Scaffolding and Measurement of Concept Mapping for EFL Students to Write Summaries

    Science.gov (United States)

    Yang, Yu-Fen

    2015-01-01

    An incorrect concept map may obstruct a student's comprehension when writing summaries if they are unable to grasp key concepts when reading texts. The purpose of this study was to investigate the effects of automatic scaffolding and measurement of three-layer concept maps on improving university students' writing summaries. The automatic…

  20. Towards semi-automatic annotation of toponyms on old maps

    OpenAIRE

    Simon, Rainer; Pilgerstorfer, Peter; Isaksen, Leif; Barker, Elton

    2014-01-01

    Present-day map digitization methods produce data that is semantically opaque; that is, to a machine, a digitized map is merely a collection of bits and bytes. The area it depicts, the places it mentions, any text contained within legends or written on its margins remain unknown - unless a human appraises the image and manually adds this information to its metadata. This problem is especially severe in the case of old maps: these are typically handwritten, may contain text in varying orientati...

  1. Knowledge-based segmentation for automatic Map interpretation

    NARCIS (Netherlands)

    Hartog, J. den; Kate, T. ten; Gerbrands, J.

    1996-01-01

    In this paper, a knowledge-based framework for the top-down interpretation and segmentation of maps is presented. The interpretation is based on a priori knowledge about map objects, their mutual spatial relationships and potential segmentation problems. To reduce computational costs, a global

  2. Semi-Automatic Construction of Skeleton Concept Maps from Case Judgments

    NARCIS (Netherlands)

    Boer, A.; Sijtsma, B.; Winkels, R.; Lettieri, N.

    2014-01-01

    This paper proposes an approach to generating Skeleton Conceptual Maps (SCM) semi-automatically from legal case documents provided by the United Kingdom’s Supreme Court. SCM are incomplete knowledge representations for the purpose of scaffolding learning. The proposed system intends to provide

  3. Automatic estimation of excavation volume from laser mobile mapping data for mountain road widening

    NARCIS (Netherlands)

    Wang, J.; González-Jorge, H.; Lindenbergh, R.; Arias-Sánchez, P.; Menenti, M.

    2013-01-01

    Roads play an indispensable role as part of the infrastructure of society. In recent years, society has witnessed the rapid development of laser mobile mapping systems (LMMS) which, at high measurement rates, acquire dense and accurate point cloud data. This paper presents a way to automatically

  4. AUTOMATIC CONSTRUCTION OF WI-FI RADIO MAP USING SMARTPHONES

    Directory of Open Access Journals (Sweden)

    T. Liu

    2016-06-01

    Full Text Available Indoor positioning could provide interesting services and applications. As one of the most popular indoor positioning methods, location fingerprinting determines the location of mobile users by matching the received signal strength (RSS), which is location dependent. However, fingerprinting-based indoor positioning requires calibration and updating of the fingerprints, which is labor-intensive and time-consuming. In this paper, we propose a visual-based approach for the construction of a radio map for anonymous indoor environments without any prior knowledge. This approach collects multi-sensor data, e.g. video, accelerometer, gyroscope, Wi-Fi signals, etc., when people (with smartphones) walk freely in indoor environments. Then, it uses the multi-sensor data to reconstruct the trajectories of people based on an integrated structure from motion (SFM) and image matching method, and finally estimates the locations of sampling points on the trajectories and constructs the Wi-Fi radio map. Experiment results show that the average location error of the fingerprints is about 0.53 m.

  5. A feasible and automatic free tool for T1 and ECV mapping.

    Science.gov (United States)

    Altabella, Luisa; Borrazzo, Cristian; Carnì, Marco; Galea, Nicola; Francone, Marco; Fiorelli, Andrea; Di Castro, Elisabetta; Catalano, Carlo; Carbone, Iacopo

    2017-01-01

    Cardiac magnetic resonance (CMR) is a useful non-invasive tool for characterizing tissues and detecting myocardial fibrosis and edema. Estimation of extracellular volume fraction (ECV) using T1 sequences is emerging as an accurate biomarker in cardiac diseases associated with diffuse fibrosis. In this study, automatic software for T1 and ECV map generation consisting of an executable file was developed and validated using phantom and human data. T1 mapping was performed in phantoms and 30 subjects (22 patients and 8 healthy subjects) on a 1.5T MR scanner using the modified Look-Locker inversion-recovery (MOLLI) sequence prototype before and 15 min after contrast agent administration. T1 maps were generated using a Fast Nonlinear Least Squares algorithm. Myocardial ECV maps were generated using both pre- and post-contrast T1 image registration and automatic extraction of blood relaxation rates. Using our software, pre- and post-contrast T1 maps were obtained in phantoms and healthy subjects resulting in a robust and reliable quantification as compared to reference software. Coregistration of pre- and post-contrast images improved the quality of ECV maps. Mean ECV value in healthy subjects was 24.5%±2.5%. This study demonstrated that it is possible to obtain accurate T1 maps and informative ECV maps using our software. Pixel-wise ECV maps obtained with this automatic software made it possible to visualize and evaluate the extent and severity of ECV alterations. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
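
    The T1 fitting of MOLLI data is commonly done with a three-parameter model S(TI) = |A - B*exp(-TI/T1*)| followed by the Look-Locker correction T1 = T1*(B/A - 1); the snippet below sketches such a per-pixel fit with SciPy together with a hematocrit-based ECV formula. This is a generic reconstruction of the standard approach, not the authors' executable, and all numbers are synthetic:

```python
# Hedged sketch: per-pixel three-parameter MOLLI fit and ECV from pre/post-contrast R1.
import numpy as np
from scipy.optimize import curve_fit

def molli(ti, A, B, t1_star):
    return np.abs(A - B * np.exp(-ti / t1_star))

ti = np.array([100., 180., 260., 1000., 1080., 1900., 2700., 3500.])  # inversion times [ms]
true_t1 = 950.0
signal = molli(ti, 1.0, 1.9, true_t1 / 0.9) + np.random.default_rng(4).normal(0, 0.01, ti.size)

(A, B, t1_star), _ = curve_fit(molli, ti, signal, p0=[1.0, 2.0, 1000.0], maxfev=5000)
t1 = t1_star * (B / A - 1.0)             # Look-Locker correction
print("fitted T1 [ms]:", round(t1, 1))

# ECV from native and post-contrast T1 of myocardium and blood, given hematocrit (hct).
def ecv(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hct):
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - hct) * d_r1_myo / d_r1_blood

print("ECV:", round(ecv(950., 400., 1550., 280., 0.42) * 100, 1), "%")
```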

  6. A framework for mapping, visualisation and automatic model creation of signal-transduction networks.

    Science.gov (United States)

    Tiger, Carl-Fredrik; Krause, Falko; Cedersund, Gunnar; Palmér, Robert; Klipp, Edda; Hohmann, Stefan; Kitano, Hiroaki; Krantz, Marcus

    2012-04-24

    Intracellular signalling systems are highly complex. This complexity makes handling, analysis and visualisation of available knowledge a major challenge in current signalling research. Here, we present a novel framework for mapping signal-transduction networks that avoids the combinatorial explosion by breaking down the network into reaction and contingency information. It provides two new visualisation methods and automatic export to mathematical models. We use this framework to compile the presently most comprehensive map of the yeast MAP kinase network. Our method improves previous strategies by combining (I) more concise mapping adapted to empirical data, (II) individual referencing for each piece of information, (III) visualisation without simplifications or added uncertainty, (IV) automatic visualisation in multiple formats, (V) automatic export to mathematical models and (VI) compatibility with established formats. The framework is supported by an open source software tool that facilitates integration of the three levels of network analysis: definition, visualisation and mathematical modelling. The framework is species independent and we expect that it will have wider impact in signalling research on any system.

  7. The Automatic Method of EEG State Classification by Using Self-Organizing Map

    Science.gov (United States)

    Tamura, Kazuhiro; Shimada, Takamasa; Saito, Yoichi

    In psychiatry, the sleep stage is one of the most important pieces of evidence for diagnosing mental disease. However, when doctors diagnose the sleep stage, much labor and skill are required, and a quantitative and objective method is needed for more accurate diagnosis. For this reason, an automatic diagnosis system must be developed. In this paper, we propose an automatic sleep stage diagnosis method using Self Organizing Maps (SOM). The neighborhood learning of the SOM causes input data with similar features to be mapped close together in the output. This property is effective for the understandable, automatic classification of complex input data. We applied an Elman-type feedback SOM to the EEG of not only normal subjects but also subjects suffering from disease. The spectrum of characteristic waves in the EEG of diseased subjects is often different from that of normal subjects, so it is difficult to classify the EEG of diseased subjects with the rules for normal subjects. On the other hand, the Elman-type feedback SOM classifies the EEG according to the features the data contain, and the classification rule is formed automatically, so even the EEG of diseased subjects can be classified automatically. This Elman-type feedback SOM also has context units, so sleep stages are diagnosed in consideration of the contextual information of the EEG. Experimental results indicate that the proposed method is able to achieve sleep stage judgment in agreement with the doctors' diagnoses.

  8. AN AUTOMATIC UAV MAPPING SYSTEM FOR SUPPORTING UN (UNITED NATIONS) FIELD OPERATIONS

    Directory of Open Access Journals (Sweden)

    K. Choi

    2016-06-01

    Full Text Available The United Nations (UN) has performed field operations worldwide, such as peacekeeping or rescue missions. When such an operation is needed, the UN dispatches an operation team, usually with a GIS (Geographic Information System) customized to the specific operation. The base maps for the GIS are generated mostly from satellite images, which may not offer high resolution or reflect the current situation. To build an up-to-date high resolution map, we propose a UAV (unmanned aerial vehicle) based automatic mapping system, which can operate in a fully automatic way from the acquisition of sensory data to the data processing for the generation of geospatial products, such as a mosaicked orthoimage of a target area. In this study, we analyse the requirements for UN field operations, suggest a UAV mapping system with an operation scenario, and investigate the applicability of the system. With the proposed system, we can efficiently construct a tailored GIS with up-to-date and high resolution base maps for a specific operation.

  9. Breast Contrast Enhanced MR Imaging: Semi-Automatic Detection of Vascular Map and Predominant Feeding Vessel.

    Science.gov (United States)

    Petrillo, Antonella; Fusco, Roberta; Filice, Salvatore; Granata, Vincenza; Catalano, Orlando; Vallone, Paolo; Di Bonito, Maurizio; D'Aiuto, Massimiliano; Rinaldo, Massimo; Capasso, Immacolata; Sansone, Mario

    2016-01-01

    To obtain a breast vascular map and to assess the correlation between the predominant feeding vessel and tumor location with a semi-automatic method compared to conventional radiologic reading. 148 malignant and 75 benign breast lesions were included. All patients underwent bilateral MR imaging. Written informed consent was obtained from the patients before MRI. The local ethics committee granted approval for this study. Semi-automatic breast vascular map and predominant vessel detection was performed on MRI for each patient. Semi-automatic detection (depending on a grey-level threshold manually chosen by the radiologist) was compared with the results of two expert radiologists; inter-observer variability and reliability of the semi-automatic approach were assessed. Anatomic analysis of breast lesions revealed that 20% of patients had masses in the internal half, 50% in the external half and 30% in the subareolar/central area. As regards the 44 tumors in the internal half, based on radiologic consensus, 40 demonstrated a predominant feeding vessel (61% were supplied by internal thoracic vessels, 14% by lateral thoracic vessels, 16% by both thoracic vessels and 9% had no predominant feeding vessel) ... feeding vessel (66% were supplied by internal thoracic vessels, 11% by lateral thoracic vessels, 9% by both thoracic vessels and 14% had no predominant feeding vessel) ... feeding vessel (25% were supplied by internal thoracic vessels, 39% by lateral thoracic vessels, 18% by both thoracic vessels and 18% had no predominant feeding vessel) ... feeding vessel (27% were supplied by internal thoracic vessels, 45% by lateral thoracic vessels, 4% by both thoracic vessels and 24% had no predominant feeding vessel) ... feeding vessel. An excellent reliability for the semi-automatic assessment (Cronbach's alpha = 0.96) was reported. Predominant feeding vessel location was correlated with breast lesion location: the internal thoracic artery supplied the highest proportion of breasts with tumor in the internal half and the lateral thoracic

  10. Automatic Prompt System in the Process of Mapping plWordNet on Princeton WordNet

    Directory of Open Access Journals (Sweden)

    Paweł Kędzia

    2015-06-01

    Full Text Available The paper offers a critical evaluation of the power and usefulness of an automatic prompt system based on the extended Relaxation Labelling algorithm in the process of (manual) mapping of plWordNet onto Princeton WordNet. To this end, the results of manual mapping – that is, inter-lingual relations between plWN and PWN synsets – are juxtaposed with the automatic prompts that were generated for the source language synsets to be mapped. We check the number and type of inter-lingual relations introduced on the basis of automatic prompts and the distance of the respective prompt synsets from the actual target language synsets.

  11. Conversion of KEGG metabolic pathways to SBGN maps including automatic layout.

    Science.gov (United States)

    Czauderna, Tobias; Wybrow, Michael; Marriott, Kim; Schreiber, Falk

    2013-08-16

    Biologists make frequent use of databases containing large and complex biological networks. One popular database is the Kyoto Encyclopedia of Genes and Genomes (KEGG) which uses its own graphical representation and manual layout for pathways. While some general drawing conventions exist for biological networks, arbitrary graphical representations are very common. Recently, a new standard has been established for displaying biological processes, the Systems Biology Graphical Notation (SBGN), which aims to unify the look of such maps. Ideally, online repositories such as KEGG would automatically provide networks in a variety of notations including SBGN. Unfortunately, this is non-trivial, since converting between notations may add, remove or otherwise alter map elements so that the existing layout cannot be simply reused. Here we describe a methodology for automatic translation of KEGG metabolic pathways into the SBGN format. We infer important properties of the KEGG layout and treat these as layout constraints that are maintained during the conversion to SBGN maps. This allows for the drawing and layout conventions of SBGN to be followed while creating maps that are still recognizably the original KEGG pathways. This article details the steps in this process and provides examples of the final result.

  12. Virtual Character Animations from Human Body Motion by Automatic Direct and Inverse Kinematics-based Mapping

    Directory of Open Access Journals (Sweden)

    Andrea Sanna

    2015-02-01

    Full Text Available Motion capture systems provide an efficient and interactive solution for extracting information related to a human skeleton, which is often exploited to animate virtual characters. When the character cannot be assimilated to an anthropometric shape, the task of mapping motion capture data onto the armature to be animated can be extremely challenging. This paper presents two methodologies for the automatic mapping of a human skeleton onto virtual character armatures. Kinematic chains of the human skeleton are analyzed in order to map joints, bones and end-effectors onto arbitrarily shaped armatures. Both forward and inverse kinematics are considered. A prototype implementation has been developed using the Microsoft Kinect as the body tracking device. Results show that the proposed solution can already be used to animate truly different characters, ranging from a Pixar-like lamp to different kinds of animals.

  13. Automatic concrete cracks detection and mapping of terrestrial laser scan data

    Directory of Open Access Journals (Sweden)

    Mostafa Rabah

    2013-12-01

    The current paper presents a method for automatic concrete crack detection and mapping from the data obtained during a laser scanning survey. Crack detection and mapping is achieved in three steps: shading correction of the original image, crack detection, and crack mapping and processing. The detected crack is defined in a pixel coordinate system. To remap the crack into the reference coordinate system, reverse engineering is used. This is achieved by a hybrid concept of terrestrial laser-scanner point clouds and the corresponding camera image, i.e. a conversion from the pixel coordinate system to the terrestrial laser-scanner or global coordinate system. The results of the experiment show that the mean differences between the terrestrial laser scan and the total station are about 30.5, 16.4 and 14.3 mm in the x, y and z directions, respectively.

  14. Application of Multilayer Perceptron with Automatic Relevance Determination on Weed Mapping Using UAV Multispectral Imagery.

    Science.gov (United States)

    Tamouridou, Afroditi A; Alexandridis, Thomas K; Pantazi, Xanthoula E; Lagopodi, Anastasia L; Kashefi, Javid; Kasampalis, Dimitris; Kontouris, Georgios; Moshou, Dimitrios

    2017-10-11

    Remote sensing techniques are routinely used in plant species discrimination and weed mapping. In the presented work, successful Silybum marianum detection and mapping using multilayer neural networks is demonstrated. A multispectral camera (green-red-near infrared) attached to a fixed wing unmanned aerial vehicle (UAV) was utilized for the acquisition of high-resolution images (0.1 m resolution). The Multilayer Perceptron with Automatic Relevance Determination (MLP-ARD) was used to identify S. marianum among other vegetation, mostly Avena sterilis L. The three spectral bands of Red, Green, Near Infrared (NIR) and the texture layer resulting from local variance were used as input. The S. marianum identification rates using MLP-ARD reached an accuracy of 99.54%. The study had a one-year duration, meaning that the results are specific, although the accuracy shows the interesting potential of S. marianum mapping with MLP-ARD on multispectral UAV imagery.
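
    Automatic Relevance Determination priors are not available in scikit-learn's MLP, so the sketch below uses a plain multilayer perceptron on the three bands plus a local-variance texture layer as a stand-in for the MLP-ARD classifier; the features, labels, and hyperparameters are illustrative assumptions:

```python
# Hedged sketch: pixel-wise weed/other-vegetation classification from
# green, red, NIR bands plus a local-variance texture layer (ARD not included).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
green = rng.random((100, 100))
red = rng.random((100, 100))
nir = rng.random((100, 100))

# Local variance as a texture layer: E[x^2] - (E[x])^2 over a 5x5 window.
texture = uniform_filter(nir**2, size=5) - uniform_filter(nir, size=5) ** 2

X = np.column_stack([b.ravel() for b in (green, red, nir, texture)])
y = (nir.ravel() > 0.5).astype(int)       # placeholder labels (e.g. S. marianum vs. other)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0).fit(X, y)
weed_map = clf.predict(X).reshape(100, 100)
```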

  15. Automatic determination of reaction mappings and reaction center information. 2. Validation on a biochemical reaction database.

    Science.gov (United States)

    Apostolakis, Joannis; Sacher, Oliver; Körner, Robert; Gasteiger, Johann

    2008-06-01

    The correct identification of the reacting bonds and atoms is a prerequisite for the analysis of the reaction mechanism. We have recently developed a method based on the Imaginary Transition State Energy Minimization approach for automatically determining the reaction center information and the atom-atom mapping numbers. We test here the accuracy of this ITSE approach by comparing the predictions of the method against more than 1500 manually annotated reactions from BioPath, a comprehensive database of biochemical reactions. The results show high agreement between manually annotated mappings and computational predictions (98.4%), with significant discrepancies in only 24 cases out of 1542 (1.6%). This result validates both the computational prediction and the database, at the same time, as the results of the former agree with expert knowledge and the latter appears largely self-consistent, and consistent with a simple principle. In 10 of the discrepant cases, simple chemical arguments or independent literature studies support the predicted reaction center. In five reaction instances the differences in the automatically and manually annotated mappings are described in detail. Finally, in approximately 200 cases the algorithm finds alternate reaction centers, which need to be studied on a case by case basis, as the exact choice of the alternative may depend on the enzyme catalyzing the reaction.

  16. Perfusion CT in acute stroke: effectiveness of automatically-generated colour maps.

    Science.gov (United States)

    Ukmar, Maja; Degrassi, Ferruccio; Pozzi Mucelli, Roberta Antea; Neri, Francesca; Mucelli, Fabio Pozzi; Cova, Maria Assunta

    2017-04-01

    To evaluate the accuracy of perfusion CT (pCT) in the definition of the infarcted core and the penumbra, comparing the data obtained from the evaluation of parametric maps [cerebral blood volume (CBV), cerebral blood flow (CBF) and mean transit time (MTT)] with software-generated colour maps. A retrospective analysis was performed to identify patients with suspected acute ischaemic strokes who had undergone unenhanced CT and pCT carried out within 4.5 h from the onset of the symptoms. A qualitative evaluation of the CBV, CBF and MTT maps was performed, followed by an analysis of the colour maps automatically generated by the software. 26 patients were identified, but a direct CT follow-up was performed only on 19 patients after 24-48 h. In the qualitative analysis, 14 patients showed perfusion abnormalities. Specifically, 29 perfusion deficit areas were detected, of which 15 areas suggested the penumbra and the remaining 14 areas suggested the infarct. As for the automatically software-generated maps, 12 patients showed perfusion abnormalities. 25 perfusion deficit areas were identified, 15 of which suggested the penumbra and the other 10 the infarct. McNemar's test showed no statistically significant difference between the two methods of evaluation in highlighting infarcted areas proved later at CT follow-up. We demonstrated that pCT provides good diagnostic accuracy in the identification of acute ischaemic lesions. The limits of identification of the lesions mainly lie at the pons level and in the basal ganglia area. Qualitative analysis proved to be more efficient in the identification of perfusion lesions in comparison with software-generated maps. However, software-generated maps have proven to be very useful in the emergency setting. Advances in knowledge: The use of CT perfusion is requested in an increasing number of patients in order to optimize treatment, thanks also to the technological evolution of CT, which now allows a whole

  17. An Automatic Procedure for Early Disaster Change Mapping Based on Optical Remote Sensing

    Directory of Open Access Journals (Sweden)

    Yong Ma

    2016-03-01

    Full Text Available Disaster change mapping, which can provide accurate and timely change information (e.g., damaged buildings, accessibility of roads and shelter sites) for decision makers to guide and support a plan for coordinating emergency rescue, is critical for early disaster rescue. In this paper, we focus on optical remote sensing data to propose an automatic procedure that reduces the impacts of optical data limitations and provides emergency information in the early phases of a disaster. The procedure utilizes a series of new methods, such as an Optimizable Variational Model (OptVM) for image fusion and a scale-invariant feature transform (SIFT) constrained optical flow method (SIFT-OFM) for image registration, to produce product maps, including cloudless backdrop maps and change-detection maps for catastrophic event regions, helping people to be aware of the whole scope of the disaster and assess the distribution and magnitude of damage. These product maps have rather high accuracy, as they are based on high-precision preprocessing results in terms of spectral consistency and geometry, compared with traditional fusion and registration methods by visual qualitative or quantitative analysis. The procedure is fully automated without any manual intervention to save response time. It also can be applied to many situations.

  18. DTM-based automatic mapping and fractal clustering of putative mud volcanoes in Arabia Terra craters

    Science.gov (United States)

    Pozzobon, R. P.; Mazzarini, F. M.; Massironi, M. M.; Cremonese, G. C.; Rossi, A. P. R.; Pondrelli, M. P.; Marinangeli, L. M.

    2017-09-01

    Arabia Terra is a region of Mars where the occurrence of past water manifests at the surface and in the subsurface. To date, several landforms associated with this activity have been recognized and mapped, directly influencing the models of fluid circulation. In particular, within several craters such as Firsoff and an unnamed southern crater, putative mud volcanoes have been described by several authors. Numerous mounds (from 30 m in diameter in the case of monogenic cones, up to 300-400 m in the case of coalescing mounds) present an apical vent-like depression, resembling subaerial Azerbaijan mud volcanoes and gryphons. To date, landform analysis through the topographic position index and topography-based curvatures had never been attempted. We hereby present a landform classification method suitable for automatic mound mapping. The resulting spatial distribution of the mounds is then studied in terms of self-similar clustering.
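
    A minimal sketch of a topographic position index (TPI) computed from a DTM, the kind of terrain derivative such an automatic mound-mapping step could start from; the window size and threshold are assumptions, not the values used by the authors:

```python
# Hedged sketch: topographic position index (elevation minus neighbourhood mean)
# and a simple positive-TPI mask as a first cut at mound candidates.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(6)
dem = rng.normal(0, 1, (200, 200)).cumsum(axis=0).cumsum(axis=1)   # synthetic DTM

window = 15                                   # neighbourhood size [pixels], assumed
tpi = dem - uniform_filter(dem, size=window)  # positive = higher than surroundings

mound_candidates = tpi > 2.0 * tpi.std()      # illustrative threshold
print("candidate pixels:", int(mound_candidates.sum()))
```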

  19. Computational solution to automatically map metabolite libraries in the context of genome scale metabolic networks

    Directory of Open Access Journals (Sweden)

    Benjamin eMerlet

    2016-02-01

    Full Text Available This article describes a generic programmatic method for mapping chemical compound libraries onto organism-specific metabolic networks from various databases (KEGG, BioCyc) and flat file formats (SBML and Matlab files). We show how this pipeline was successfully applied to decipher the coverage of chemical libraries set up by two metabolomics facilities, MetaboHub (French National Infrastructure for Metabolomics and Fluxomics) and Glasgow Polyomics, on the metabolic networks available in the MetExplore web server. The present generic protocol is designed to formalize and reduce the volume of information transfer between the library and the network database. Matching of metabolites between libraries and metabolic networks is based on InChIs or InChIKeys and therefore requires that these identifiers are specified in both libraries and networks. In addition to providing coverage statistics, this pipeline also allows the visualization of mapping results in the context of metabolic networks. In order to achieve this goal, we tackled issues concerning the programmatic interaction between two servers, the improvement of metabolite annotation in metabolic networks and the automatic loading of a mapping into the genome-scale metabolic network analysis tool MetExplore. It is important to note that this mapping can also be performed on a single organism or a selection of organisms of interest and is thus not limited to large facilities.
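
    The core matching step, comparing library InChIKeys against network metabolite annotations, reduces to a set intersection; the snippet below sketches it with made-up identifiers and a coverage statistic, independent of the MetExplore web services used in the paper (the "compoundX" key and network identifiers are hypothetical):

```python
# Hedged sketch: mapping a compound library onto a metabolic network via InChIKeys.
library = {
    "glucose":   "WQZGKKKJIJFFOK-GASJEMHNSA-N",
    "pyruvate":  "LCTONWCANYUPML-UHFFFAOYSA-N",
    "compoundX": "AAAAAAAAAAAAAA-AAAAAAAAAA-N",   # hypothetical, not in the network
}
network_metabolites = {
    "WQZGKKKJIJFFOK-GASJEMHNSA-N": "M_glc__D_c",
    "LCTONWCANYUPML-UHFFFAOYSA-N": "M_pyr_c",
}

mapping = {name: network_metabolites[key]
           for name, key in library.items() if key in network_metabolites}
coverage = len(mapping) / len(library)
print(mapping, f"coverage = {coverage:.0%}")
```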

  20. Towards the Optimal Pixel Size of DEM for Automatic Mapping of Landslide Areas

    Science.gov (United States)

    Pawłuszek, K.; Borkowski, A.; Tarolli, P.

    2017-05-01

    Determining the appropriate spatial resolution of a digital elevation model (DEM) is a key step for effective landslide analysis based on remote sensing data. Several studies have demonstrated that choosing the finest DEM resolution is not always the best solution. Various DEM resolutions can be applicable for diverse landslide applications. Thus, this study aims to assess the influence of spatial resolution on automatic landslide mapping. A pixel-based approach using parametric and non-parametric classification methods, namely a feed-forward neural network (FFNN) and maximum likelihood (ML) classification, was applied in this study. Additionally, this allowed the impact of the classification method used on the selection of DEM resolution to be determined. Landslide-affected areas were mapped based on four DEMs generated at 1 m, 2 m, 5 m and 10 m spatial resolution from airborne laser scanning (ALS) data. The performance of the landslide mapping was then evaluated by applying a landslide inventory map and computing the confusion matrix. The results of this study suggest that the finest DEM scale is not always the best fit; however, working at 1 m DEM resolution on the micro-topography scale can show different results. The best performance was found using 5 m DEM resolution for the FFNN and 1 m DEM resolution for the ML classification.

  1. Automatic Detection of Secondary Craters and Mapping of Planetary Surface Age Based on Lunar Orbital Images

    Science.gov (United States)

    Salih, A. L.; Lompart, A.; Grumpe, A.; Wöhler, C.; Hiesinger, H.

    2017-07-01

    Ages of planetary surfaces are typically obtained by manually determining the impact crater size-frequency distribution (CSFD) in spacecraft imagery, which is a very intricate and time-consuming procedure. In this work, an image-based crater detection algorithm that relies on a generative template matching technique is applied to establish the CSFD of the floor of the lunar farside crater Tsiolkovsky. The automatic detection threshold value is calibrated based on a 100 km² test area for which the CSFD has been determined by manual crater counting in a previous study. This allows for the construction of an age map of the complete crater floor. It is well known that the CSFD may be affected by secondary craters. Hence, our detection results are refined by applying a secondary candidate detection (SCD) algorithm relying on Voronoi tessellation of the spatial crater distribution, which searches for clusters of craters. The detected clusters are assumed to result from the presence of secondary craters, which are then removed from the CSFD. We found it favourable to apply the SCD algorithm separately to each diameter bin of the CSFD histogram. In comparison with the original age map, the refined age map obtained after removal of secondary candidates has a more homogeneous appearance and does not exhibit regions of spuriously high age resulting from contamination by secondary craters.
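
    One way to realize the Voronoi-based secondary candidate detection is to compute the Voronoi cell area of every detected crater and flag craters whose cells are anomalously small, i.e. tightly clustered; the sketch below does this with SciPy on synthetic crater centres and an arbitrary area threshold (the actual SCD criterion in the paper may differ):

```python
# Hedged sketch: flag clustered craters via small Voronoi cell areas.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(7)
background = rng.uniform(0, 100, (300, 2))                   # roughly random primaries
cluster = rng.normal(loc=(30, 70), scale=1.0, size=(40, 2))  # tight secondary cluster
centres = np.vstack([background, cluster])

vor = Voronoi(centres)
areas = np.full(len(centres), np.nan)
for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:                 # skip unbounded cells
        continue
    areas[i] = ConvexHull(vor.vertices[region]).volume   # 2D hull "volume" is an area

threshold = np.nanmedian(areas) * 0.2                    # illustrative cut-off
secondary_candidates = np.flatnonzero(areas < threshold)
print(len(secondary_candidates), "crater(s) flagged as clustered")
```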

  2. MR-based automatic delineation of volumes of interest in human brain PET images using probability maps

    DEFF Research Database (Denmark)

    Svarer, Claus; Madsen, Karina; Hasselbalch, Steen G.

    2005-01-01

    The purpose of this study was to develop and validate an observer-independent approach for automatic generation of volume-of-interest (VOI) brain templates to be used in emission tomography studies of the brain. The method utilizes a VOI probability map created on the basis of a database of several ... delineation of the VOI set. The approach was also shown to work equally well in individuals with pronounced cerebral atrophy. Probability-map-based automatic delineation of VOIs is a fast, objective, reproducible, and safe way to assess regional brain values from PET or SPECT scans. In addition, the method....
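
    The probability-map idea can be illustrated by averaging manually delineated binary VOI masks that have been warped to a common space and thresholding the result; the array shapes, synthetic masks, and the 50% inclusion threshold below are assumptions for illustration only:

```python
# Hedged sketch: build a VOI probability map from several subjects' binary masks
# (assumed already co-registered) and derive an automatic template VOI.
import numpy as np

rng = np.random.default_rng(8)
n_subjects, shape = 10, (64, 64, 64)

# Synthetic manual delineations: a sphere whose centre jitters across subjects.
zz, yy, xx = np.mgrid[:shape[0], :shape[1], :shape[2]]
masks = []
for _ in range(n_subjects):
    c = np.array([32, 32, 32]) + rng.integers(-2, 3, size=3)
    masks.append((zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2 <= 8**2)

probability_map = np.mean(masks, axis=0)          # voxel-wise fraction of subjects
template_voi = probability_map >= 0.5             # assumed 50% inclusion threshold
print("template VOI voxels:", int(template_voi.sum()))
```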

  3. AUTOMATIC PEDESTRIAN CROSSING DETECTION AND IMPAIRMENT ANALYSIS BASED ON MOBILE MAPPING SYSTEM

    Directory of Open Access Journals (Sweden)

    X. Liu

    2017-09-01

    Full Text Available Pedestrian crossings, as an important part of transportation infrastructure, serve to secure pedestrians’ lives and possessions and to keep traffic flow in order. As a prominent feature in the street scene, detection of pedestrian crossings contributes to 3D road marking reconstruction and to diminishing the adverse impact of outliers in 3D street scene reconstruction. Since pedestrian crossings are subject to wear and tear from heavy traffic flow, it is imperative to monitor their condition. On this account, an approach for automatic pedestrian crossing detection using images from a vehicle-based Mobile Mapping System is put forward, and crossing defilement and impairment are analyzed in this paper. Firstly, a pedestrian crossing classifier is trained with a low recall rate. Then initial detections are refined by utilizing projection filtering, contour information analysis, and monocular vision. Finally, a pedestrian crossing detection and analysis system with high recall rate, precision and robustness is achieved. This system works for pedestrian crossing detection under different situations and light conditions. It can also recognize defiled and impaired crossings automatically, which facilitates monitoring and maintenance of traffic facilities, so as to reduce potential traffic safety problems and secure lives and property.

  4. Automatic Pedestrian Crossing Detection and Impairment Analysis Based on Mobile Mapping System

    Science.gov (United States)

    Liu, X.; Zhang, Y.; Li, Q.

    2017-09-01

    Pedestrian crossings, as an important part of transportation infrastructure, serve to secure pedestrians' lives and possessions and to keep traffic flow in order. As a prominent feature in the street scene, detection of pedestrian crossings contributes to 3D road marking reconstruction and to diminishing the adverse impact of outliers in 3D street scene reconstruction. Since pedestrian crossings are subject to wear and tear from heavy traffic flow, it is imperative to monitor their condition. On this account, an approach for automatic pedestrian crossing detection using images from a vehicle-based Mobile Mapping System is put forward, and crossing defilement and impairment are analyzed in this paper. Firstly, a pedestrian crossing classifier is trained with a low recall rate. Then initial detections are refined by utilizing projection filtering, contour information analysis, and monocular vision. Finally, a pedestrian crossing detection and analysis system with high recall rate, precision and robustness is achieved. This system works for pedestrian crossing detection under different situations and light conditions. It can also recognize defiled and impaired crossings automatically, which facilitates monitoring and maintenance of traffic facilities, so as to reduce potential traffic safety problems and secure lives and property.

  5. Semi-automatic mapping for identifying complex geobodies in seismic images

    Science.gov (United States)

    Domínguez-C, Raymundo; Romero-Salcedo, Manuel; Velasquillo-Martínez, Luis G.; Shemeretov, Leonid

    2017-03-01

    Seismic images are composed of positive and negative seismic wave traces with different amplitudes (Robein 2010 Seismic Imaging: A Review of the Techniques, their Principles, Merits and Limitations (Houten: EAGE)). The association of these amplitudes together with a color palette forms complex visual patterns. The color intensity of such patterns is directly related to impedance contrasts: the higher the contrast, the higher the color intensity. Generally speaking, low impedance contrasts are depicted with low tone colors, creating zones with different patterns whose features are not evident for a 3D automated mapping option available on commercial software. In this work, a workflow for a semi-automatic mapping of seismic images focused on those areas with low-intensity colored zones that may be associated with geobodies of petroleum interest is proposed. The CIE L*A*B* color space was used to perform the seismic image processing, which helped find small but significant differences between pixel tones. This process generated binary masks that bound color regions to low-intensity colors. The three-dimensional-mask projection allowed the construction of 3D structures for such zones (geobodies). The proposed method was applied to a set of digital images from a seismic cube and tested on four representative study cases. The obtained results are encouraging because interesting geobodies are obtained with a minimum of information.
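
    The colour-space step can be sketched with scikit-image: convert a seismic section image to CIE L*a*b*, then threshold the lightness channel to obtain a binary mask of the low-intensity zones that may correspond to geobodies; the threshold and the synthetic image below are assumptions, not the authors' parameters:

```python
# Hedged sketch: binary mask of low-intensity colour zones in a seismic image slice.
import numpy as np
from skimage.color import rgb2lab

rng = np.random.default_rng(9)
rgb = rng.random((256, 256, 3))            # stand-in for one seismic section image

lab = rgb2lab(rgb)                         # L in [0, 100], a/b roughly in [-128, 127]
L = lab[..., 0]

low_intensity_mask = L < 40.0              # illustrative lightness threshold
print("masked pixels:", int(low_intensity_mask.sum()))
```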

  6. AUTOMATIC PADDY RICE MAPPING INTERFACE USING ARCENGINE AND LANDSAT8 IMAGERY (CASE STUDY IN NORTH PART OF IRAN)

    Directory of Open Access Journals (Sweden)

    Sh. Bahramvash Shams

    2014-10-01

    Full Text Available Recognition of paddy rice boundaries is an essential step for many agricultural processes such as yield estimation, cadastre and water management. In this study, an automatic rice paddy mapping method is proposed. The algorithm is based on two temporal images: an initial period of flooding and after harvesting. The proposed method includes several steps: finding flooded pixels and masking unwanted pixels which contain water bodies, clouds, forests, and swamps. In order to achieve the final paddy map, indexes such as the Normalized Difference Vegetation Index (NDVI) and the Land Surface Water Index (LSWI) are used. Validation is performed against rice paddy boundaries drawn by an expert operator in Google maps; this appraisal shows good agreement (close to 90%). The algorithm is applied to Gilan province, located in the northern part of Iran, using Landsat 8 data from 2013. An automatic interface is designed based on the proposed algorithm using Arc Engine and Visual Studio. In the interface, the inputs are Landsat bands of two time periods, including red (0.66 μm), blue (0.48 μm), NIR (0.87 μm), and SWIR (2.20 μm), which should be defined by the user. The whole process runs automatically and the final result provides the paddy map of the desired year.

  7. Automatic Paddy Rice Mapping Interface Using Arcengine and LANDSAT8 Imagery (case Study in North Part of Iran)

    Science.gov (United States)

    Bahramvash Shams, Sh.

    2014-10-01

    Recognition of paddy rice boundaries is an essential step for many agricultural processes such as yield estimation, cadastre and water management. In this study, an automatic rice paddy mapping method is proposed. The algorithm is based on two temporal images: an initial period of flooding and after harvesting. The proposed method includes several steps: finding flooded pixels and masking unwanted pixels which contain water bodies, clouds, forests, and swamps. In order to achieve the final paddy map, indexes such as the Normalized Difference Vegetation Index (NDVI) and the Land Surface Water Index (LSWI) are used. Validation is performed against rice paddy boundaries drawn by an expert operator in Google maps; this appraisal shows good agreement (close to 90%). The algorithm is applied to Gilan province, located in the northern part of Iran, using Landsat 8 data from 2013. An automatic interface is designed based on the proposed algorithm using Arc Engine and Visual Studio. In the interface, the inputs are Landsat bands of two time periods, including red (0.66 μm), blue (0.48 μm), NIR (0.87 μm), and SWIR (2.20 μm), which should be defined by the user. The whole process runs automatically and the final result provides the paddy map of the desired year.
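
The flooding test at transplanting time described above can be sketched with the two indexes the abstract names. The rule used below (a pixel counts as flooded when LSWI exceeds NDVI by a small margin, and as paddy when it is vegetated later in the season) and every threshold are illustrative assumptions, not the exact criteria of the paper.

```python
import numpy as np

def ndvi_lswi(red, nir, swir):
    """NDVI and LSWI from surface reflectance bands (float arrays in [0, 1])."""
    ndvi = (nir - red) / (nir + red + 1e-10)
    lswi = (nir - swir) / (nir + swir + 1e-10)
    return ndvi, lswi

def paddy_map(red_t1, nir_t1, swir_t1, red_t2, nir_t2, mask, margin=0.05, veg_thresh=0.4):
    """Two-date paddy map: flooded at transplanting (t1), vegetated near harvest (t2).

    mask flags unwanted pixels (water bodies, clouds, forests, swamps) that must be
    excluded; margin and veg_thresh are illustrative values only.
    """
    ndvi1, lswi1 = ndvi_lswi(red_t1, nir_t1, swir_t1)
    flooded = (lswi1 + margin) >= ndvi1                     # water signal dominates at t1
    ndvi2 = (nir_t2 - red_t2) / (nir_t2 + red_t2 + 1e-10)   # vegetation signal at t2
    return flooded & (ndvi2 > veg_thresh) & ~mask
```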

  8. By the People, for the People: the Crowdsourcing of "STREETBUMP": AN Automatic Pothole Mapping App

    Science.gov (United States)

    Carrera, F.; Guerin, S.; Thorp, J. B.

    2013-05-01

    This paper traces the genesis and development of StreetBump, a smartphone application to map the location of potholes in Boston, Massachusetts. StreetBump belongs to a special category of "subliminal" crowdsourcing mobile applications that turn humans into sensors. Once started, it automatically collects road condition information without any human intervention, using the accelerometers and GPS inside smartphones. The StreetBump app evolved from a hardware device designed and built by WPI's City Lab starting in 2003, which was originally intended to measure and map boat wakes in the city of Venice, Italy (Chiu, 2004). A second version of the custom hardware with onboard GPS and accelerometers was adapted for use in Boston, Massachusetts, to map road damage (potholes) in 2006 (Angelini, 2006). In 2009, Prof. Carrera proposed to the newly created office of New Urban Mechanics in the City of Boston to migrate the concept to smartphones, based on the Android platform. The first prototype of the mobile app, called StreetBump, was released in 2010 by the authors (Harmon, 2010). In 2011, the app provided the basis for a worldwide Innocentive competition to develop the best postprocessing algorithms to identify the real potholes vs. other phone bumps (Moskowitz, 2011). Starting in 2012, the City of Boston began using a subsequent version of the app to operationally manage road repairs based on the data collected by StreetBump. The novelty of this app is not purely technological, but lies also in the top-to-bottom crowdsourcing of all its components. The app was designed to rely on the crowd to confirm the presence of damage through repeat hits (or lack thereof) as more users travel the same roads over time. Moreover, the non-trivial post-processing of the StreetBump data was itself the subject of a crowdsourced competition through an Innocentive challenge for the best algorithm. The release of the StreetBump code as open-source allowed the development of the final

  9. MR-based automatic delineation of volumes of interest in human brain PET images using probability maps

    DEFF Research Database (Denmark)

    Svarer, Claus; Madsen, Karina; Hasselbalch, Steen G.

    2005-01-01

    The purpose of this study was to develop and validate an observer-independent approach for automatic generation of volume-of-interest (VOI) brain templates to be used in emission tomography studies of the brain. The method utilizes a VOI probability map created on the basis of a database of several...... subjects' MR-images, where VOI sets have been defined manually. High-resolution structural MR-images and 5-HT(2A) receptor binding PET-images (in terms of (18)F-altanserin binding) from 10 healthy volunteers and 10 patients with mild cognitive impairment were included for the analysis. A template including...... delineation of the VOI set. The approach was also shown to work equally well in individuals with pronounced cerebral atrophy. Probability-map-based automatic delineation of VOIs is a fast, objective, reproducible, and safe way to assess regional brain values from PET or SPECT scans. In addition, the method...

  10. Automatic blood vessels segmentation based on different retinal maps from OCTA scans.

    Science.gov (United States)

    Eladawi, Nabila; Elmogy, Mohammed; Helmy, Omar; Aboelfetouh, Ahmed; Riad, Alaa; Sandhu, Harpal; Schaal, Shlomit; El-Baz, Ayman

    2017-10-01

    The retinal vascular network reflects the health of the retina, which is a useful diagnostic indicator of systemic vascular disease. Therefore, the segmentation of retinal blood vessels is a powerful method for diagnosing vascular diseases. This paper presents an automatic segmentation system for retinal blood vessels from Optical Coherence Tomography Angiography (OCTA) images. The system segments blood vessels from the superficial and deep retinal maps for normal and diabetic cases. Initially, we reduced the noise and improved the contrast of the OCTA images by using the Generalized Gauss-Markov random field (GGMRF) model. Secondly, we proposed a joint Markov-Gibbs random field (MGRF) model to segment the retinal blood vessels from other background tissues. It integrates both appearance and spatial models in addition to the prior probability model of OCTA images. The higher-order MGRF (HO-MGRF) model, in addition to the 1st-order intensity model, is used to consider the spatial information in order to overcome the low contrast between vessels and other tissues. Finally, we refined the segmentation by extracting connected regions using a 2D connectivity filter. The proposed segmentation system was trained and tested on 47 data sets, which are 23 normal data sets and 24 data sets for diabetic patients. To evaluate the accuracy and robustness of the proposed segmentation framework, we used three different metrics, which are the Dice similarity coefficient (DSC), absolute vessels volume difference (VVD), and area under the curve (AUC). The results on OCTA data sets (DSC=95.04±3.75%, VVD=8.51±1.49%, and AUC=95.20±1.52%) show the promise of the proposed segmentation approach. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Classification of building infrastructure and automatic building footprint delineation using airborne laser swath mapping data

    Science.gov (United States)

    Caceres, Jhon

    Three-dimensional (3D) models of urban infrastructure comprise critical data for planners working on problems in wireless communications, environmental monitoring, civil engineering, and urban planning, among other tasks. Photogrammetric methods have been the most common approach to date to extract building models. However, Airborne Laser Swath Mapping (ALSM) observations offer a competitive alternative because they overcome some of the ambiguities that arise when trying to extract 3D information from 2D images. Regardless of the source data, the building extraction process requires segmentation and classification of the data and building identification. In this work, approaches for classifying ALSM data, separating building and tree points, and delineating ALSM footprints from the classified data are described. Digital aerial photographs are used in some cases to verify results, but the objective of this work is to develop methods that can work on ALSM data alone. A robust approach for separating tree and building points in ALSM data is presented. The method is based on supervised learning of the classes (tree vs. building) in a high dimensional feature space that yields good class separability. Features used for classification are based on the generation of local mappings, from three-dimensional space to two-dimensional space, known as "spin images" for each ALSM point to be classified. The method discriminates ALSM returns in compact spaces and even where the classes are very close together or overlapping spatially. A modified algorithm of the Hough Transform is used to orient the spin images, and the spin image parameters are specified such that the mutual information between the spin image pixel values and class labels is maximized. This new approach to ALSM classification allows us to fully exploit the 3D point information in the ALSM data while still achieving good class separability, which has been a difficult trade-off in the past. Supported by the spin

  12. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    Science.gov (United States)

    Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin

    2014-06-01

    This article uses a methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity, to analyse large numbers of landslide conditioning factors. This new algorithm was developed to overcome the subjectivity of the manual categorization of scale data of landslide conditioning factors, and to predict a rainfall-induced susceptibility map for Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to perform the best classification fit for each conditioning factor and then to combine it with logistic regression (LR). The LR model was used to find the corresponding coefficients of the best-fitting function that assesses the optimal terminal nodes. A cluster pattern of landslide locations was extracted in a previous study using the nearest neighbour index (NNI), which was then used to identify the range of clustered landslide locations. Clustered locations were used as model training data with 14 landslide conditioning factors such as topography-derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. An area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations, respectively. This study proved the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping. It also provides a valuable scientific basis for spatial decision making in planning and urban management studies.
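
The two-stage idea (let a tree-based method bin the conditioning factors into terminal nodes, then fit a logistic regression on those nodes) can be sketched as follows. scikit-learn has no CHAID implementation, so a CART DecisionTreeClassifier stands in for it here, and the data, depth and sample counts are synthetic placeholders rather than study values.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 14))                                        # 14 conditioning factors (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)  # landslide / no landslide

# Stage 1: a shallow tree bins the factor space into terminal nodes
# (CART is used here as a stand-in for CHAID).
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=25).fit(X, y)
leaves = tree.apply(X).reshape(-1, 1)        # terminal-node id per observation

# Stage 2: logistic regression on one-hot encoded terminal nodes
enc = OneHotEncoder(handle_unknown="ignore")
Z = enc.fit_transform(leaves)
lr = LogisticRegression(max_iter=1000).fit(Z, y)
susceptibility = lr.predict_proba(Z)[:, 1]   # probabilities that would feed the LSM
```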

  13. Automatic Estimation of Excavation Volume from Laser Mobile Mapping Data for Mountain Road Widening

    Directory of Open Access Journals (Sweden)

    Massimo Menenti

    2013-09-01

    Full Text Available Roads play an indispensable role as part of the infrastructure of society. In recent years, society has witnessed the rapid development of laser mobile mapping systems (LMMS which, at high measurement rates, acquire dense and accurate point cloud data. This paper presents a way to automatically estimate the required excavation volume when widening a road from point cloud data acquired by an LMMS. Firstly, the input point cloud is down-sampled to a uniform grid and outliers are removed. For each of the resulting grid points, both on and off the road, the local surface normal and 2D slope are estimated. Normals and slopes are consecutively used to separate road from off-road points which enables the estimation of the road centerline and road boundaries. In the final step, the left and right side of the road points are sliced in 1-m slices up to a distance of 4 m, perpendicular to the roadside. Determining and summing each sliced volume enables the estimation of the required excavation for a widening of the road on the left or on the right side. The procedure, including a quality analysis, is demonstrated on a stretch of a mountain road that is approximately 132 m long as sampled by a Lynx LMMS. The results in this particular case show that the required excavation volume on the left side is 8% more than that on the right side. In addition, the error in the results is assessed in two ways. First, by adding up estimated local errors, and second, by comparing results from two different datasets sampling the same piece of road both acquired by the Lynx LMMS. Results of both approaches indicate that the error in the estimated volume is below 4%. The proposed method is relatively easy to implement and runs smoothly on a desktop PC. The whole workflow of the LMMS data acquisition and subsequent volume computation can be completed in one or two days and provides road engineers with much more detail than traditional single-point surveying methods such as
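
The final slicing-and-summing step can be sketched as follows, under the assumption that the road-side points have already been down-sampled to a uniform grid and expressed as (distance from roadside, distance along road, height). The design height, cell size and slice geometry below are illustrative values, not taken from the paper.

```python
import numpy as np

def excavation_volume(points, design_z, cell=0.5, slice_width=1.0, max_dist=4.0):
    """Excavation volume from road-side points on a uniform grid of spacing `cell`.

    points   : (N, 3) array of (dist_from_roadside, along_road, height)
    design_z : target surface height of the widened road
    Returns per-slice volumes (1 m slices perpendicular to the roadside) and the total;
    each grid point is taken to represent a cell*cell footprint.
    """
    d, z = points[:, 0], points[:, 2]
    cut = np.clip(z - design_z, 0.0, None)            # only material above grade is excavated
    edges = np.arange(0.0, max_dist + slice_width, slice_width)
    per_slice = [cut[(d >= lo) & (d < hi)].sum() * cell ** 2
                 for lo, hi in zip(edges[:-1], edges[1:])]
    return per_slice, float(sum(per_slice))

# Synthetic example for a 132 m stretch of road
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 4, 2000), rng.uniform(0, 132, 2000),
                       rng.uniform(100, 102, 2000)])
slices, total = excavation_volume(pts, design_z=100.0)
```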

  14. Automatic determination of cardiovascular risk by CT attenuation correction maps in Rb-82 PET/CT.

    Science.gov (United States)

    Išgum, Ivana; de Vos, Bob D; Wolterink, Jelmer M; Dey, Damini; Berman, Daniel S; Rubeaux, Mathieu; Leiner, Tim; Slomka, Piotr J

    2017-04-04

    We investigated fully automatic coronary artery calcium (CAC) scoring and cardiovascular disease (CVD) risk categorization from CT attenuation correction (CTAC) acquired at rest and stress during cardiac PET/CT and compared it with manual annotations in CTAC and with dedicated calcium scoring CT (CSCT). We included 133 consecutive patients undergoing myocardial perfusion 82Rb PET/CT with the acquisition of low-dose CTAC at rest and stress. Additionally, a dedicated CSCT was performed for all patients. Manual CAC annotations in CTAC and CSCT provided the reference standard. In CTAC, CAC was scored automatically using a previously developed machine learning algorithm. Patients were assigned to a CVD risk category based on their Agatston score (0, 1-10, 11-100, 101-400, >400). Agreement in CVD risk categorization between manual and automatic scoring in CTAC at rest and stress resulted in Cohen's linearly weighted κ of 0.85 and 0.89, respectively. The agreement between CSCT and CTAC at rest resulted in κ of 0.82 and 0.74, using manual and automatic scoring, respectively. For CTAC at stress, these were 0.79 and 0.70, respectively. Automatic CAC scoring from CTAC PET/CT may allow routine CVD risk assessment from the CTAC component of PET/CT without any additional radiation dose or scan time.
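
The categorization and agreement metric used above are easy to reproduce in outline: map each Agatston score to one of the five categories and compare two raters with a linearly weighted Cohen's kappa. The scores in the example are made up, not study data.

```python
from sklearn.metrics import cohen_kappa_score

def cvd_risk_category(agatston_score: float) -> str:
    """Map an Agatston calcium score to the risk categories used in the study
    (0, 1-10, 11-100, 101-400, >400)."""
    if agatston_score <= 0:
        return "0"
    if agatston_score <= 10:
        return "1-10"
    if agatston_score <= 100:
        return "11-100"
    if agatston_score <= 400:
        return "101-400"
    return ">400"

# Agreement between manual and automatic categorization as a linearly weighted kappa.
categories = ["0", "1-10", "11-100", "101-400", ">400"]
manual = [cvd_risk_category(s) for s in [0, 5, 250, 800, 37]]
automatic = [cvd_risk_category(s) for s in [0, 12, 230, 770, 37]]
kappa = cohen_kappa_score([categories.index(c) for c in manual],
                          [categories.index(c) for c in automatic],
                          weights="linear")
```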

  15. Challenges and opportunities: One stop processing of automatic large-scale base map production using airborne LiDAR data within GIS environment. Case study: Makassar City, Indonesia

    NARCIS (Netherlands)

    Widyaningrum, E.; Gorte, B.G.H.

    2017-01-01

    LiDAR data acquisition is recognized as one of the fastest solutions to provide basis data for large-scale topographical base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme to accelerate the large-scale topographic base map provision by the Geospatial Information

  16. Automatic Extraction of Appendix from Ultrasonography with Self-Organizing Map and Shape-Brightness Pattern Learning

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2016-01-01

    Full Text Available Accurate diagnosis of acute appendicitis is a difficult problem in practice, especially when the patient is very young or pregnant. In this paper, we propose a fully automatic appendix extractor from ultrasonography by applying a series of image processing algorithms and an unsupervised neural learning algorithm, the self-organizing map. From the suggestions of clinical practitioners, we define four shape patterns of the appendix, and the self-organizing map learns those patterns in the pixel clustering phase. In the experiment designed to test the performance for those four frequently found shape patterns, our method is successful for 3 types (1 failure out of 45 cases) but leaves a question for one shape pattern (80% correct).

  17. Automatic Classification of Protein Structure Using the Maximum Contact Map Overlap Metric

    NARCIS (Netherlands)

    R. Andonov (Rumen); H. Djidjev (Hristo); G.W. Klau (Gunnar); M. Le Boudic-Jamin (Mathilde); I. Wohlers (Inken)

    2015-01-01

    In this work, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of

  18. Automatic classification of protein structure using the maximum contact map overlap metric

    NARCIS (Netherlands)

    Andonov, Rumen; Djidjev, Hristo; Klau, Gunnar W.; Boudic-Jamin, Mathilde Le; Wohlers, Inken

    2015-01-01

    In this work, we propose a new distance measure for comparing two protein structures based on their contact map representations. We show that our novel measure, which we refer to as the maximum contact map overlap (max-CMO) metric, satisfies all properties of a metric on the space of protein

  19. High-Resolution, Semi-Automatic Fault Mapping Using Unmanned Aerial Vehicles and Computer Vision: Mapping from an Armchair

    Science.gov (United States)

    Micklethwaite, S.; Vasuki, Y.; Turner, D.; Kovesi, P.; Holden, E.; Lucieer, A.

    2012-12-01

    Our ability to characterise fractures depends upon the accuracy and precision of field techniques, as well as the quantity of data that can be collected. Unmanned Aerial Vehicles (UAVs; otherwise known as "drones") and photogrammetry provide exciting new opportunities for the accurate mapping of fracture networks over large surface areas. We use a highly stable, 8-rotor UAV platform (Oktokopter) with a digital SLR camera and the Structure-from-Motion computer vision technique to generate point clouds, wireframes, digital elevation models and orthorectified photo mosaics. Furthermore, new image analysis methods such as phase congruency are applied to the data to semiautomatically map fault networks. A case study is provided of intersecting fault networks and associated damage from Piccaninny Point in Tasmania, Australia. Outcrops >1 km in length can be surveyed in a single 5-10 minute flight, with pixel resolution ~1 cm. Centimetre-scale precision can be achieved when selected ground control points are measured using a total station. These techniques have the potential to provide rapid, ultra-high resolution mapping of fracture networks from many different lithologies, enabling us to more accurately assess the "fit" of observed data relative to model predictions over a wide range of boundary conditions. (Figure: high-resolution DEM of faulted outcrop, Piccaninny Point, Tasmania, generated using the Oktokopter UAV (inset) and photogrammetric techniques.)

  20. Automatic mapping of event landslides at basin scale in Taiwan using a Montecarlo approach and synthetic land cover fingerprints

    Science.gov (United States)

    Mondini, Alessandro C.; Chang, Kang-Tsung; Chiang, Shou-Hao; Schlögel, Romy; Notarnicola, Claudia; Saito, Hitoshi

    2017-12-01

    We propose a framework to systematically generate event landslide inventory maps from satellite images in southern Taiwan, where landslides are frequent and abundant. The spectral information is used to assess the pixel land cover class membership probability through a Maximum Likelihood classifier trained with randomly generated synthetic land cover spectral fingerprints, which are obtained from an independent training image dataset. Pixels are classified as landslides when the calculated landslide class membership probability, weighted by a susceptibility model, is higher than the membership probabilities of the other classes. We generated synthetic fingerprints from two FORMOSAT-2 images acquired in 2009 and tested the procedure on two other images, one from 2005 and the other from 2009. We also obtained two landslide maps through manual interpretation. The agreement between the two sets of inventories is given by Cohen's kappa coefficients of 0.62 and 0.64, respectively. This procedure can now classify a new FORMOSAT-2 image automatically, facilitating the production of landslide inventory maps.
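
A minimal sketch of the susceptibility-weighted maximum-likelihood assignment outlined above. The Gaussian class model, the multiplicative use of the susceptibility prior and all variable names are assumptions made for illustration; they are not taken from the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def landslide_map(pixels, class_stats, susceptibility, landslide_class="landslide"):
    """Maximum-likelihood land-cover classification with a susceptibility-weighted landslide class.

    pixels         : (H, W, B) array of band values
    class_stats    : {class name: (mean vector, covariance)} estimated from
                     synthetic land cover spectral fingerprints
    susceptibility : (H, W) prior in [0, 1] used to weight the landslide class
    """
    h, w, b = pixels.shape
    flat = pixels.reshape(-1, b)
    names = list(class_stats)
    probs = np.stack(
        [multivariate_normal(mean=m, cov=c).pdf(flat) for m, c in class_stats.values()],
        axis=1)
    probs[:, names.index(landslide_class)] *= susceptibility.reshape(-1)
    labels = np.argmax(probs, axis=1).reshape(h, w)
    return labels == names.index(landslide_class)   # boolean landslide inventory raster
```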

  1. Automatic spline-smoothing approach applied to denoise Moroccan resistivity data phosphate deposit “disturbances” map

    Directory of Open Access Journals (Sweden)

    Saad Bakkali

    2010-04-01

    Full Text Available This paper focuses on presenting a method which is able to filter out noise and suppress outliers of sampled real functions under fairly general conditions. The automatic optimal spline-smoothing approach automatically determines how a cubic spline should be adjusted in a least-squares optimal sense from an a priori selection of the number of points defining an adjusting spline, but not their location on that curve. The method is fast and easily allows for selecting several knots, thereby adding desirable flexibility to the procedure. As an illustration, we apply the AOSSA method to a Moroccan resistivity data phosphate deposit "disturbances" map. The AOSSA smoothing method is an efficient tool in interpreting geophysical potential field data which is particularly suitable for denoising, filtering and analysing resistivity data singularities. The AOSSA smoothing and filtering approach was found to be consistently useful when applied to modeling surface phosphate "disturbances".
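
A least-squares cubic-spline smoother in the spirit of AOSSA can be put together with SciPy. In this sketch the number of interior knots is fixed a priori and their positions are simply taken from data quantiles, which is a stand-in for the paper's optimal placement, not its algorithm.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def spline_denoise(x, y, n_knots=8):
    """Least-squares cubic-spline smoothing of a noisy 1-D resistivity profile.

    n_knots interior knots are placed at data quantiles (illustrative choice);
    the spline is then fitted in a least-squares sense.
    """
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    knots = np.quantile(xs, np.linspace(0, 1, n_knots + 2)[1:-1])
    spline = LSQUnivariateSpline(xs, ys, t=knots, k=3)
    return spline(xs), spline

# Example: denoise a synthetic noisy profile
x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.3 * np.random.default_rng(0).normal(size=x.size)
smoothed, _ = spline_denoise(x, y)
```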

  2. Acceleration of Topographic Map Production Using Semi-Automatic DTM from Dsm Radar Data

    Science.gov (United States)

    Rizaldy, Aldino; Mayasari, Ratna

    2016-06-01

    Badan Informasi Geospasial (BIG) is the government institution in Indonesia responsible for providing topographic maps at several map scales. For medium map scales, e.g. 1:25,000 or 1:50,000, DSMs from radar data are a very good solution, since radar is able to penetrate the clouds that usually cover tropical areas in Indonesia. The radar DSM is produced using radargrammetry and interferometry techniques. The conventional method of DTM production uses a "stereo-mate", the stereo image created from the radar DSM and ORRI (Ortho Rectified Radar Image), and a human operator digitizes mass points and breaklines manually using a digital stereoplotter workstation. This technique is accurate but very costly and time consuming, and it needs a large number of human operators. Since the DSMs are already generated, it is possible to filter the DSM to a DTM using several techniques. This paper studies the possibility of DSM-to-DTM filtering using techniques usually used in LIDAR point cloud filtering. The accuracy of this method is also calculated using a sufficient number of check points. If the accuracy meets the requirement, this method has great potential to accelerate the production of topographic maps in Indonesia.

  3. ACCELERATION OF TOPOGRAPHIC MAP PRODUCTION USING SEMI-AUTOMATIC DTM FROM DSM RADAR DATA

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2016-06-01

    Full Text Available Badan Informasi Geospasial (BIG) is the government institution in Indonesia responsible for providing topographic maps at several map scales. For medium map scales, e.g. 1:25,000 or 1:50,000, DSMs from radar data are a very good solution, since radar is able to penetrate the clouds that usually cover tropical areas in Indonesia. The radar DSM is produced using radargrammetry and interferometry techniques. The conventional method of DTM production uses a "stereo-mate", the stereo image created from the radar DSM and ORRI (Ortho Rectified Radar Image), and a human operator digitizes mass points and breaklines manually using a digital stereoplotter workstation. This technique is accurate but very costly and time consuming, and it needs a large number of human operators. Since the DSMs are already generated, it is possible to filter the DSM to a DTM using several techniques. This paper studies the possibility of DSM-to-DTM filtering using techniques usually used in LIDAR point cloud filtering. The accuracy of this method is also calculated using a sufficient number of check points. If the accuracy meets the requirement, this method has great potential to accelerate the production of topographic maps in Indonesia.
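
One simple member of the family of filters the abstract refers to, a morphological-opening ground filter applied to the gridded DSM, is sketched below. The window size and height threshold are illustrative, and operational LiDAR-style filters typically use progressive window sizes rather than the single pass shown here.

```python
import numpy as np
from scipy import ndimage

def dsm_to_dtm(dsm, window=15, height_threshold=2.0):
    """Filter a gridded DSM towards a DTM with a simple morphological-opening ground filter.

    dsm              : 2-D array of surface heights
    window           : opening window size in grid cells (illustrative)
    height_threshold : height difference (m) above which a cell is treated as non-ground
    Returns the filtered DTM (non-ground cells set to NaN, to be interpolated) and the
    non-ground mask (buildings, vegetation).
    """
    opened = ndimage.grey_opening(dsm, size=(window, window))
    nonground = (dsm - opened) > height_threshold
    dtm = dsm.astype(float).copy()
    dtm[nonground] = np.nan
    return dtm, nonground
```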

  4. Automatic Galaxy Classification via Machine Learning Techniques: Parallelized Rotation/Flipping INvariant Kohonen Maps (PINK)

    Science.gov (United States)

    Polsterer, K. L.; Gieseke, F.; Igel, C.

    2015-09-01

    In the last decades, more and more all-sky surveys have created an enormous amount of data which is publicly available on the Internet. Crowd-sourcing projects such as Galaxy-Zoo and Radio-Galaxy-Zoo encouraged users from all over the world to manually conduct various classification tasks. The combination of the pattern-recognition capabilities of thousands of volunteers enabled scientists to finish the data analysis within acceptable time. For upcoming surveys with billions of sources, however, this approach is not feasible anymore. In this work, we present an unsupervised method that can automatically process large amounts of galaxy data and which generates a set of prototypes. This resulting model can be used both to visualize the given galaxy data and to classify so-far unseen images.

  5. Challenges and Opportunities: One Stop Processing of Automatic Large-Scale Base Map Production Using Airborne LIDAR Data Within GIS Environment. Case Study: Makassar City, Indonesia

    Science.gov (United States)

    Widyaningrum, E.; Gorte, B. G. H.

    2017-05-01

    LiDAR data acquisition is recognized as one of the fastest solutions to provide basis data for large-scale topographical base maps worldwide. Automatic LiDAR processing is believed to be one possible scheme to accelerate the large-scale topographic base map provision by the Geospatial Information Agency in Indonesia. As a progressive advanced technology, Geographic Information Systems (GIS) open possibilities for automatic geospatial data processing and analyses. Considering further needs for spatial data sharing and integration, the one-stop processing of LiDAR data in a GIS environment is considered a powerful and efficient approach for base map provision. The quality of the automated topographic base map is assessed and analysed based on its completeness, correctness, quality, and the confusion matrix.

  6. Automatic Registration of Scanned Satellite Imagery with a Digital Map Data Base.

    Science.gov (United States)

    1980-11-01

    The associative, complex mental interpretation of an image by the human expert will hardly ever be matched by ...

  7. Play estimation with motions and textures with automatic generation of template space-time map

    Science.gov (United States)

    Aoki, Kyota; Aita, Ryo; Fukiba, Takuro

    2015-07-01

    It is easy to retrieve small parts from small videos, and it is also easy to retrieve middle-sized parts from large videos. However, it is difficult to retrieve small parts from large videos. There is a strong need for estimating plays in sports videos. Plays in sports are described by the motions of players. This paper proposes a play retrieving method based on both motion compensation vectors and normal color frames in MPEG sports videos. This work uses 1-dimensional degenerated descriptions of each motion image between two adjacent frames. Connecting the 1-dimensional degenerated descriptions along the time direction, we obtain the space-time map. This space-time map describes a sequence of frames as a 2-dimensional image. Using this space-time map on motion compensation vector frames and normal color frames, this work shows a method to create a new, better template from a single template for retrieving a small number of plays in a huge number of frames. In an experiment, the resulting F-measure reaches 0.955.

  8. Geological lineament mapping in arid area by semi-automatic extraction from satellite images: example at the El Kseïbat region (Algerian Sahara)

    Energy Technology Data Exchange (ETDEWEB)

    Hammad, N.; Djidel, M.; Maabedi, N.

    2016-07-01

    Geologists in charge of detailed lineament mapping in arid and desert areas face the extent of the land and the abundance of eolian deposits. This study presents a semi-automatic approach to lineament extraction that differs from other methods, such as fully automatic extraction and manual extraction, by being both fast and objective. It consists of a series of digital processing steps (textural and spatial filtering, binarization by thresholding and mathematical morphology, etc.) applied to a Landsat 7 ETM+ scene. This semi-automatic approach has produced a detailed map of lineaments while taking into account the tectonic directions recognized in the region. It helps mitigate the effect of dune deposits and meets the specific requirements of an arid environment. The visual validation of these linear structures, by geoscientists and field data, allowed the identification of the majority of structural lineaments, or at least those confirmed as geological. (Author)
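
The processing chain named in the abstract (textural/spatial filtering, binarization by thresholding, mathematical morphology) can be approximated with scikit-image as follows; the specific filters, thresholds and structuring sizes are illustrative choices, not the authors' parameters.

```python
import numpy as np
from skimage import filters, morphology

def extract_lineaments(band, min_length_px=50):
    """Rough lineament traces from a single Landsat band (2-D float array).

    Steps: gradient (spatial) filtering, Otsu binarization, morphological cleaning,
    then skeletonization to one-pixel-wide lineament traces.
    """
    edges = filters.sobel(band.astype(float))              # spatial/gradient filtering
    binary = edges > filters.threshold_otsu(edges)         # binarization by thresholding
    binary = morphology.closing(binary, np.ones((3, 3), bool))
    binary = morphology.remove_small_objects(binary, min_size=min_length_px)
    return morphology.skeletonize(binary)
```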

  9. Approach of automatic 3D geological mapping: the case of the Kovdor phoscorite-carbonatite complex, NW Russia.

    Science.gov (United States)

    Kalashnikov, A O; Ivanyuk, G Yu; Mikhailova, J A; Sokharev, V A

    2017-07-31

    We have developed an approach for automatic 3D geological mapping based on the conversion of the chemical composition of rocks to mineral composition by logical computation. It allows calculating the mineral composition from bulk rock chemistry, interpolating the mineral composition in the same way as the chemical composition, and, finally, building a 3D geological model. The approach was developed for the Kovdor phoscorite-carbonatite complex containing the Kovdor baddeleyite-apatite-magnetite deposit. We used four bulk rock chemistry analyses: Femagn, P2O5, CO2 and SiO2. We used four techniques for the prediction of rock types: calculation of normative mineral compositions (norms), multiple regression, an artificial neural network, and the method developed by logical evaluation. The latter two performed best. As a result, we distinguished 14 types of phoscorites (forsterite-apatite-magnetite-carbonate rocks), carbonatite and host rocks. The results show good convergence with our petrographical studies of the deposit and with recent manually built maps. The proposed approach can be used as a tool for reconstructing a deposit's genesis and for preliminary geometallurgical modelling.

  10. Semi-automatic supervised classification of minerals from x-ray mapping images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Flesche, Harald; Larsen, Rasmus

    1998-01-01

    This paper addresses the problem of assessing the robustness with respect to change in parameters of an integrated training and classification routine for minerals commonly encountered in siliciclastic or carbonate rocks. Twelve chemical elements are mapped from thin sections by energy dispersive...... spectroscopy (EDS) in a scanning electron microscope (SEM). Extensions to traditional multivariate statistical methods are applied to perform the classification. Training sets are grown from one or a few seed points by a method that ensures spatial and spectral closeness of observations. Spectral closeness...

  11. Mineral Mapping Using the Automatized Gaussian Model (AGM)—Application to Two Industrial French Sites at Gardanne and Thann

    Directory of Open Access Journals (Sweden)

    Rodolphe Marion

    2018-01-01

    Full Text Available The identification and mapping of the mineral composition of by-products and residues on industrial sites is a topic of growing interest because it may provide information on plant-processing activities and their impact on the surrounding environment. Imaging spectroscopy can provide such information based on the spectral signatures of soil mineral markers. In this study, we use the automatized Gaussian model (AGM), an automated, physically based method relying on spectral deconvolution. Originally developed for the short-wavelength infrared (SWIR) range, it has been extended to include information from the visible and near-infrared (VNIR) range to take iron oxides/hydroxides into account. We present the results of its application to two French industrial sites: (i) the Altéo Environnement site in Gardanne, southern France, dedicated to the extraction of alumina from bauxite; and (ii) the Millennium Inorganic Chemicals site in Thann, eastern France, which produces titanium dioxide from ilmenite and rutile, and its associated Séché Éco Services site used to neutralize the resulting effluents, producing gypsum. HySpex hyperspectral images were acquired over Gardanne in September 2013 and an APEX image was acquired over Thann in June 2013. In both cases, reflectance spectra were measured and samples were collected in the field and analyzed for mineralogical and chemical composition. When applying the AGM to the images, both in the VNIR and SWIR ranges, we successfully identified and mapped minerals of interest characteristic of each site: bauxite, Bauxaline® and alumina for Gardanne; and red and white gypsum and calcite for Thann. Identifications and maps were consistent with in situ measurements.

  12. Automatic mapping of the base of aquifer — A case study from Morrill, Nebraska

    Science.gov (United States)

    Gulbrandsen, Mats Lundh; Ball, Lyndsay B.; Minsley, Burke J.; Hansen, Thomas Mejer

    2017-01-01

    When a geologist sets up a geologic model, various types of disparate information may be available, such as exposures, boreholes, and (or) geophysical data. In recent years, the amount of geophysical data available has been increasing, a trend that is only expected to continue. It is nontrivial (and often, in practice, impossible) for the geologist to take all the details of the geophysical data into account when setting up a geologic model. We have developed an approach that allows for the objective quantification of information from geophysical data and borehole observations in a way that is easy to integrate in the geologic modeling process. This will allow the geologist to make a geologic interpretation that is consistent with the geophysical information at hand. We have determined that automated interpretation of geologic layer boundaries using information from boreholes and geophysical data alone can provide a good geologic layer model, even before manual interpretation has begun. The workflow is implemented on a set of boreholes and airborne electromagnetic (AEM) data from Morrill, Nebraska. From the borehole logs, information about the depth to the base of aquifer (BOA) is extracted and used together with the AEM data to map a surface that represents this geologic contact. Finally, a comparison between our automated approach and a previous manual mapping of the BOA in the region validates the quality of the proposed method and suggests that this workflow will allow a much faster and objective geologic modeling process that is consistent with the available data.

  13. A Fully Automatic Burnt Area Mapping Processor Based on AVHRR Imagery—A TIMELINE Thematic Processor

    Directory of Open Access Journals (Sweden)

    Simon Plank

    2018-02-01

    Full Text Available The German Aerospace Center's (DLR) TIMELINE project ("Time Series Processing of Medium Resolution Earth Observation Data Assessing Long-Term Dynamics in our Natural Environment") aims to develop an operational processing and data management environment to process 30 years of National Oceanic and Atmospheric Administration (NOAA) Advanced Very High-Resolution Radiometer (AVHRR) raw data into Level (L) 1b, L2, and L3 products. This article presents the current status of the fully automated L3 burnt area mapping processor, which is based on multi-temporal datasets. The advantages of the proposed approach are (I) the combined use of different indices to improve the classification result, (II) the provision of a fully automated processor, (III) the generation and usage of an up-to-date cloud-free pre-fire dataset, (IV) classification with adaptive thresholding, and (V) the assignment of five different probability levels to the burnt areas detected. The results of the AVHRR data-based burn scar mapping processor were validated with the Moderate Resolution Imaging Spectroradiometer (MODIS) burnt area product MCD64 at four different European study sites. In addition, the accuracy of the AVHRR-based classification and that of the MCD64 itself were assessed by means of Landsat imagery.
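
A toy version of the multi-temporal, adaptive-threshold classification with probability levels might look like this. The NDVI-drop index, the mean-plus-k-sigma thresholds and the five k values are illustrative stand-ins for the index combination actually used by the TIMELINE processor.

```python
import numpy as np

def burnt_probability(pre_nir, pre_red, post_nir, post_red, k_levels=(1, 1.5, 2, 2.5, 3)):
    """Multi-temporal burnt-area flagging with adaptive thresholds.

    Uses the NDVI drop between a cloud-free pre-fire composite and the post-fire scene;
    each pixel is assigned one of five probability levels depending on how many adaptive
    (mean + k*sigma) thresholds its NDVI drop exceeds.
    """
    ndvi_pre = (pre_nir - pre_red) / (pre_nir + pre_red + 1e-10)
    ndvi_post = (post_nir - post_red) / (post_nir + post_red + 1e-10)
    drop = ndvi_pre - ndvi_post
    mu, sigma = np.nanmean(drop), np.nanstd(drop)
    levels = np.zeros(drop.shape, dtype=np.uint8)
    for k in k_levels:
        levels += (drop > mu + k * sigma).astype(np.uint8)
    return levels            # 0 = unburnt, 1..5 = increasing burn probability
```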

  14. Automatic prediction of infarct growth in acute ischemic stroke from MR apparent diffusion coefficient maps.

    Science.gov (United States)

    Montiel, Nidiyare Hevia; Rosso, Charlotte; Chupin, Marie; Deltour, Sandrine; Bardinet, Eric; Dormont, Didier; Samson, Yves; Baillet, Sylvain

    2008-01-01

    We introduce a new approach to the prediction of final infarct growth in human acute ischemic stroke based on image analysis of the apparent diffusion coefficient (ADC) maps obtained from magnetic resonance imaging. Evidence from multiple previous studies indicates that ADC maps are likely to reveal brain regions belonging to the ischemic penumbra, that is, areas that may be at risk of infarction in the few hours following stroke onset. In a context where "time is brain," and contrary to the alternative (and still-debated) perfusion-diffusion weighted image (PWI/DWI) mismatch approach, the DWI magnetic resonance sequences are standardized, fast to acquire, and do not necessitate injection of a contrast agent. The image analysis approach presented here consists of the segmentation of the ischemic penumbra using a fast three-dimensional region-growing technique that mimics the growth of the infarct lesion during acute stroke. The method was evaluated both with numerical simulations and on two groups of 20 ischemic stroke patients (40 patients in total). The first group of patient data was used to adjust the parameters of the model ruling the region-growing procedure. The second group of patient data was dedicated to evaluation purposes only, with no subsequent adjustment of the free parameters of the image-analysis procedure. Results indicate that the predicted final infarct volumes are significantly correlated with the true final lesion volumes as revealed by follow-up measurements from DWI sequences. The DWI-ADC mismatch method is an encouraging, fast alternative to the PWI-DWI mismatch approach for evaluating the likelihood of infarct growth during the acute stage of ischemic stroke.
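
The core idea, region growing over the ADC map from the acute lesion outward, can be sketched as follows; the ADC threshold, connectivity and stopping rule are illustrative placeholders rather than the calibrated parameters of the study.

```python
import numpy as np
from scipy import ndimage

def grow_infarct(adc, seed_mask, adc_threshold=620, max_iters=50):
    """Toy 3-D region growing over an ADC map (values in 1e-6 mm^2/s).

    Starting from the acute DWI lesion (seed_mask), the region grows into neighbouring
    voxels whose ADC stays below a penumbra-like threshold, mimicking the general idea
    of lesion growth described in the abstract.
    """
    region = seed_mask.astype(bool)
    candidate = adc < adc_threshold
    structure = ndimage.generate_binary_structure(3, 1)   # 6-connectivity
    for _ in range(max_iters):
        grown = ndimage.binary_dilation(region, structure) & candidate
        if np.array_equal(grown, region):                  # no further growth
            break
        region = grown
    return region
```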

  15. A fully automatic processing chain to produce Burn Scar Mapping products, using the full Landsat archive over Greece

    Science.gov (United States)

    Kontoes, Charalampos; Papoutsis, Ioannis; Herekakis, Themistoklis; Michail, Dimitrios; Ieronymidi, Emmanuela

    2013-04-01

    Remote sensing tools for the accurate, robust and timely assessment of the damage inflicted by forest wildfires provide information that is of paramount importance to public environmental agencies and related stakeholders before, during and after the crisis. The Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens (IAASARS/NOA) has developed a fully automatic single- and/or multi-date processing chain that takes as input archived Landsat 4, 5 or 7 raw images and produces precise diachronic burnt area polygons and damage assessments over the Greek territory. The methodology consists of three fully automatic stages: 1) the pre-processing stage, where the metadata of the raw images are extracted, followed by the application of the LEDAPS software platform for calibration and mask production and the Automated Precise Orthorectification Package, developed by NASA, for image geo-registration and orthorectification; 2) the core BSM (Burn Scar Mapping) processing stage, which incorporates a published classification algorithm based on a series of physical indexes, the application of two filters for noise removal using graph-based techniques, and the grouping of pixels classified as burnt into appropriate pixel clusters before conversion from raster to vector; and 3) the post-processing stage, where the products are thematically refined and enriched using auxiliary GIS layers (underlying land cover/use, administrative boundaries, etc.) and human logic/evidence to suppress false alarms and omission errors. The established processing chain has been successfully applied to the entire archive of Landsat imagery over Greece spanning from 1984 to 2012, which has been collected and managed in IAASARS/NOA. The number of full Landsat frames processed in the framework of the study was 415. These burn scar mapping products are generated for the first time to such a temporal and spatial

  16. Generalized Self-Organizing Maps for Automatic Determination of the Number of Clusters and Their Multiprototypes in Cluster Analysis.

    Science.gov (United States)

    Gorzalczany, Marian B; Rudzinski, Filip

    2017-06-07

    This paper presents a generalization of self-organizing maps with 1-D neighborhoods (neuron chains) that can be effectively applied to complex cluster analysis problems. The essence of the generalization consists in introducing mechanisms that allow the neuron chain--during learning--to disconnect into subchains, to reconnect some of the subchains again, and to dynamically regulate the overall number of neurons in the system. These features enable the network--working in a fully unsupervised way (i.e., using unlabeled data without a predefined number of clusters)--to automatically generate collections of multiprototypes that are able to represent a broad range of clusters in data sets. First, the operation of the proposed approach is illustrated on some synthetic data sets. Then, this technique is tested using several real-life, complex, and multidimensional benchmark data sets available from the University of California at Irvine (UCI) Machine Learning repository and the Knowledge Extraction based on Evolutionary Learning data set repository. A sensitivity analysis of our approach to changes in control parameters and a comparative analysis with an alternative approach are also performed.
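
The basic building block of the approach, a 1-D self-organizing map (neuron chain), can be written in a few lines of NumPy. The splitting and re-joining of subchains and the dynamic neuron count that constitute the actual generalization are not reproduced here; the chain length and learning parameters are illustrative.

```python
import numpy as np

def train_chain_som(data, n_neurons=20, n_iter=5000, lr0=0.5, sigma0=3.0, seed=0):
    """Classic 1-D self-organizing map (neuron chain) in plain NumPy.

    data : (N, D) array of unlabeled samples.
    Returns the trained neuron weights, which act as cluster multiprototypes;
    assigning each sample to its nearest prototype yields a clustering.
    """
    rng = np.random.default_rng(seed)
    weights = data[rng.choice(len(data), n_neurons, replace=False)].astype(float)
    positions = np.arange(n_neurons)                     # positions along the chain
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        lr = lr0 * (1 - t / n_iter)                      # decaying learning rate
        sigma = max(sigma0 * (1 - t / n_iter), 0.5)      # decaying neighborhood width
        h = np.exp(-((positions - winner) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights
```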

  17. Semi-automatic 10/20 Identification Method for MRI-Free Probe Placement in Transcranial Brain Mapping Techniques.

    Science.gov (United States)

    Xiao, Xiang; Zhu, Hao; Liu, Wei-Jie; Yu, Xiao-Ting; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-01-01

    The International 10/20 system is an important head-surface-based positioning system for transcranial brain mapping techniques, e.g., fNIRS and TMS. As guidance for probe placement, the 10/20 system permits both proper ROI coverage and spatial consistency among multiple subjects and experiments in an MRI-free context. However, the traditional manual approach to the identification of 10/20 landmarks faces problems in reliability and time cost. In this study, we propose a semi-automatic method to address these problems. First, a novel head surface reconstruction algorithm reconstructs head geometry from a set of points uniformly and sparsely sampled on the subject's head. Second, virtual 10/20 landmarks are determined on the reconstructed head surface in computational space. Finally, a visually-guided real-time navigation system guides the experimenter to each of the identified 10/20 landmarks on the physical head of the subject. Compared with the traditional manual approach, our proposed method provides a significant improvement both in reliability and time cost and thus could contribute to improving both the effectiveness and efficiency of 10/20-guided MRI-free probe placement.

  18. Effective Generation and Update of a Building Map Database Through Automatic Building Change Detection from LiDAR Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2015-10-01

    Full Text Available Periodic building change detection is important for many applications, including disaster management. Building map databases need to be updated based on detected changes so as to ensure their currency and usefulness. This paper first presents a graphical user interface (GUI developed to support the creation of a building database from building footprints automatically extracted from LiDAR (light detection and ranging point cloud data. An automatic building change detection technique by which buildings are automatically extracted from newly-available LiDAR point cloud data and compared to those within an existing building database is then presented. Buildings identified as totally new or demolished are directly added to the change detection output. However, for part-building demolition or extension, a connected component analysis algorithm is applied, and for each connected building component, the area, width and height are estimated in order to ascertain if it can be considered as a demolished or new building-part. Using the developed GUI, a user can quickly examine each suggested change and indicate his/her decision to update the database, with a minimum number of mouse clicks. In experimental tests, the proposed change detection technique was found to produce almost no omission errors, and when compared to the number of reference building corners, it reduced the human interaction to 14% for initial building map generation and to 3% for map updating. Thus, the proposed approach can be exploited for enhanced automated building information updating within a topographic database.
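
The per-component screening step (area, width and height of each connected building component) might look roughly like the following; the raster representation, the thresholds and the helper names are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def changed_building_parts(change_mask, heights, cell_size=0.5,
                           min_area=10.0, min_width=1.5, min_height=2.5):
    """Connected-component screening of candidate demolished or new building parts.

    change_mask : boolean raster of LiDAR-detected change
    heights     : per-cell height above ground (same shape as change_mask)
    For each connected component the area, width and height are estimated and compared
    with minimum sizes (illustrative thresholds in metres).
    """
    labels, n = ndimage.label(change_mask)
    accepted = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        comp = labels[sl] == i
        area = comp.sum() * cell_size ** 2
        width = min(comp.shape) * cell_size                 # bounding-box based width proxy
        height = float(np.nanmax(np.where(comp, heights[sl], np.nan)))
        if area >= min_area and width >= min_width and height >= min_height:
            accepted.append(i)
    return labels, accepted
```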

  19. Semi-automatic mapping of fault rocks on a Digital Outcrop Model, Gole Larghe Fault Zone (Southern Alps, Italy)

    Science.gov (United States)

    Mittempergher, Silvia; Vho, Alice; Bistacchi, Andrea

    2016-04-01

    A quantitative analysis of fault-rock distribution in outcrops of exhumed fault zones is of fundamental importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation. We present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM), developed on the Gole Larghe Fault Zone (GLFZ), a well exposed strike-slip fault in the Adamello batholith (Italian Southern Alps). The GLFZ has been exhumed from ca. 8-10 km depth, and consists of hundreds of individual seismogenic slip surfaces lined by green cataclasites (crushed wall rocks cemented by hydrothermal epidote and K-feldspar) and black pseudotachylytes (solidified frictional melts, considered as a marker for seismic slip). A digital model of selected outcrop exposures was reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with VisualSFM software. The resulting DOM has a resolution up to 0.2 mm/pixel. Most of the outcrop was imaged using images each covering a 1 x 1 m2 area, while selected structural features, such as sidewall ripouts or stepovers, were covered with higher-resolution images covering 30 x 40 cm2 areas. Image processing algorithms were preliminarily tested using the ImageJ-Fiji package, then a workflow in Matlab was developed to process a large collection of images sequentially. Particularly in the detailed 30 x 40 cm images, cataclasites and hydrothermal veins were successfully identified using spectral analysis in RGB and HSV color spaces. This allows mapping the network of cataclasites and veins which provided the pathway for hydrothermal fluid circulation, and also the volume of mineralization, since we are able to measure the thickness of cataclasites and veins on the outcrop surface. The spectral signature of pseudotachylyte veins is indistinguishable from that of biotite grains in the wall rock (tonalite), so we tested morphological analysis tools to discriminate

  20. Combining Quantitative Susceptibility Mapping with Automatic Zero Reference (QSM0) and Myelin Water Fraction Imaging to Quantify Iron-Related Myelin Damage in Chronic Active MS Lesions.

    Science.gov (United States)

    Yao, Y; Nguyen, T D; Pandya, S; Zhang, Y; Hurtado Rúa, S; Kovanlikaya, I; Kuceyeski, A; Liu, Z; Wang, Y; Gauthier, S A

    2018-02-01

    A hyperintense rim on susceptibility in chronic MS lesions is consistent with iron deposition, and the purpose of this study was to quantify iron-related myelin damage within these lesions as compared with those without a rim. Forty-six patients had 2 longitudinal quantitative susceptibility mapping with automatic zero reference scans with a mean interval of 28.9 ± 11.4 months. Myelin water fraction mapping using fast acquisition with spiral trajectory and T2 prep was obtained at the second time point to measure myelin damage. Mixed-effects models were used to assess lesion quantitative susceptibility mapping and myelin water fraction values. Quantitative susceptibility mapping values were on average 6.8 parts per billion higher in 116 rim-positive lesions compared with 441 rim-negative lesions (P …), as were the quantitative susceptibility mapping values of both the rim and core regions (P …). Quantitative susceptibility mapping and myelin water fraction in rim-positive lesions decreased from rim to core, which is consistent with rim iron deposition. Whole-lesion myelin water fractions for rim-positive and rim-negative lesions were 0.055 ± 0.07 and 0.066 ± 0.04, respectively. In the mixed-effects model, rim-positive lesions had on average 0.01 lower myelin water fraction compared with rim-negative lesions (P …), and the quantitative susceptibility mapping value was negatively associated with follow-up myelin water fraction (P …). Quantitative susceptibility mapping rim-positive lesions maintained a hyperintense rim, increased in susceptibility, and had more myelin damage compared with rim-negative lesions. Our results are consistent with the identification of chronic active MS lesions and may provide a target for therapeutic interventions to reduce myelin damage. © 2018 by American Journal of Neuroradiology.

  1. The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    Full Text Available The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high-resolution digital vertical aerial photographs and point clouds obtained using LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2%, validated through ground-truthing.

  2. Terrane daylight mapping on large dip-slope terrain based on high-resolution DTM and semi-automatic geoprocessing processes

    Science.gov (United States)

    Yeh, Chih-Hsiang; Lin, Ming-Lang; Chan, Yu-Chang; Chang, Kuo-Jen; Hsieh, Yu-Chung

    2015-04-01

    "Daylight" in slope engineering means a lineament appearing on the ground surface casued by a internal weak plane of a rock slope. The morphology of the daylight implies the free surface condition of the rock mass upper the weak plane, directly affecting the slope stability and safety. Traditionally, the reconnaissance of daylight employs field investigation and drillings in local dip slope area, but when mapping in large area, it would be subjected to vegetation cover and budget limitation to get a simply result not used for engineering applications. Therefore, the purpose of this study is to develop a rapid and reliable mapping program based on high-resolution DTM, and to generate a large-scale daylight map for large dip slope area. The methodology can be divided into two phases: the first is re-mapping terrane boundary lineaments using LiDAR data and 3D GIS mapping technology; the second is automatically mapping daylight tracks by trend surface analysis and python scripts based on above terrane boundary lineaments. This study takes the area of Keelung River north bank, which is mainly cuesta topography, for an example. Recently, in the area, the frequency of dip slope landslide occurrence becomes more higher because of human development. One major reason to cause the daylight appearing on downslope is the slope toe cutting or river incision. Hereby, according to the final results of the daylight map, we can assess where the potential landsides dip slops are, and further differentiate three different risks of dip slope from the daylight's morphology, expecting to provide more detail engineering and geological information for furture engineering site selection and the design and application of disaster prevention.

  3. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    Science.gov (United States)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has increased strongly in the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant, requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  4. Analysis of the relationship of automatically and manually extracted lineaments from DEM and geologically mapped tectonic faults around the Main Ethiopian Rift and the Ethiopian Highlands, Ethiopia

    Directory of Open Access Journals (Sweden)

    Michal Kusák

    2017-02-01

    Full Text Available The paper deals with the functions that automatically extract lineaments from the 90 m Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM) (Consortium for Spatial Information 2014) in the software ArcGIS 10.1 and PCI Geomatica. They were performed for the Main Ethiopian Rift and the Ethiopian Highlands (transregional scale 1,060,000 km2), which are among the most tectonically active areas in the world. The values of the input parameters – RADI (filter radius value), GTHR (edge gradient threshold), LTHR (curve length), FTHR (line fitting error), ATHR (angular difference), and DTHR (linked distance threshold) – and their influence on the final shape and number of lineaments are discussed. A map of automatically extracted lineaments was created and compared with (1) the tectonic faults on the geological map by the Geological Survey of Ethiopia (Mangesha et al. 1996) and (2) the lineaments based on visual interpretation by the author from the same data set. The predominant azimuth of lineaments is similar to the azimuth of the faults on the geological map. The comparison of lineaments by automated visualization in GIS and visual interpretation of lineaments carried out by the authors around the Jemma River Basin (regional scale 16,000 km2) proved that both sets of lineaments are of the same NE–SW azimuth, which is the orientation of the rift. However, lineament mapping by automated visualization in GIS identifies a larger number of shorter lineaments than lineaments created by visual interpretation.

  5. Automatic Mapping Extraction from Multiecho T2-Star Weighted Magnetic Resonance Images for Improving Morphological Evaluations in Human Brain

    Directory of Open Access Journals (Sweden)

    Shaode Yu

    2013-01-01

    Full Text Available Mapping extraction is useful in medical image analysis. Similarity coefficient mapping (SCM) replaced signal response to time course in tissue similarity mapping with signal response to TE changes in multiecho T2-star weighted magnetic resonance imaging without contrast agent. Since different tissues have different sensitivities to reference signals, a new algorithm is proposed by adding a sensitivity index to SCM. It generates two mappings. One measures relative signal strength (SSM) and the other depicts fluctuation magnitude (FMM). Meanwhile, the new method adaptively generates a proper reference signal by maximizing the sum of the contrast index (CI) from SSM and FMM without manual delineation. Based on four groups of images from multiecho T2-star weighted magnetic resonance imaging, the capacity of SSM and FMM in enhancing image contrast and morphological evaluation is validated. The average contrast improvement index (CII) of SSM is 1.57, 1.38, 1.34, and 1.41. The average CII of FMM is 2.42, 2.30, 2.24, and 2.35. Visual analysis of regions of interest demonstrates that SSM and FMM show better morphological structures than the original images, T2-star mapping and SCM. These extracted mappings can be further applied in information fusion, signal investigation, and tissue segmentation.
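
    As a rough illustration of the similarity-coefficient idea, the sketch below computes a per-pixel correlation between each voxel's multi-echo decay curve and a reference signal; the exact SCM/SSM/FMM definitions and the sensitivity index of the paper are not reproduced, and the array layout is an assumption.

    import numpy as np

    def similarity_coefficient_map(echoes, reference):
        # echoes: (n_te, H, W) multi-echo T2*-weighted stack;
        # reference: (n_te,) signal of a chosen reference region.
        # Returns the per-pixel correlation with the reference decay curve.
        x = echoes - echoes.mean(axis=0)
        r = reference - reference.mean()
        num = (x * r[:, None, None]).sum(axis=0)
        den = np.sqrt((x ** 2).sum(axis=0) * (r ** 2).sum()) + 1e-12
        return num / den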

  6. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Directory of Open Access Journals (Sweden)

    Nakamura Satoshi

    2004-01-01

    Full Text Available We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  7. Development of a System to Assist Automatic Translation of Hand-Drawn Maps into Tactile Graphics and Its Usability Evaluation

    Directory of Open Access Journals (Sweden)

    Jianjun Chen

    2014-01-01

    Full Text Available Tactile graphics are images that use raised surfaces so that a visually impaired person can feel them. Tactile maps are used by blind and partially sighted people when navigating around an environment, and they are also used prior to a visit for orientation purposes. Since the ability to read tactile graphics differs greatly between individuals, tactile graphics need to be provided on an individual basis. This implies that producing tactile graphics should be as simple as possible. Based on this background, we are developing a system for automating the production of tactile maps from hand-drawn figures. In this paper, we first present a pattern recognition method for hand-drawn maps. The usability of our system is then evaluated by comparing it with two other methods of producing tactile graphics.

  8. AUTOMATIC TEXTURE MAPPING WITH AN OMNIDIRECTIONAL CAMERA MOUNTED ON A VEHICLE TOWARDS LARGE SCALE 3D CITY MODELS

    Directory of Open Access Journals (Sweden)

    F. Deng

    2012-07-01

    Full Text Available Today, high-resolution panoramic images of competitive quality are widely used for rendering in some commercial systems. However, potential applications such as mapping, augmented reality and modelling, which need accurate orientation information, are still poorly studied. Urban models can be quickly obtained from aerial images or LIDAR, but with limited quality or efficiency due to low-resolution textures and a manual texture-mapping workflow. We combine an Extended Kalman Filter (EKF) with the traditional Structure from Motion (SFM) method, without any prior information, based on a general camera model which can handle various kinds of omnidirectional and other single-perspective image sequences, even with unconnected or weakly connected frames. The orientation results are then applied to map the textures from panoramas onto the existing building models obtained from aerial photogrammetry. This largely improves the quality of the models and the efficiency of the modelling procedure.

  9. Automatic reduction of large X-ray fluorescence data-sets applied to XAS and mapping experiments

    Energy Technology Data Exchange (ETDEWEB)

    Martin Montoya, Ligia Andrea

    2017-02-15

    In this thesis two automatic methods for the reduction of large fluorescence data sets are presented. The first method is proposed in the framework of BioXAS experiments. The challenge of this experiment is to deal with samples in ultra-dilute concentrations where the signal-to-background ratio is low. The experiment is performed in fluorescence-mode X-ray absorption spectroscopy with a 100-pixel high-purity Ge detector. The first step consists of reducing 100 fluorescence spectra into one. In this step, outliers are identified by means of the shot noise. Furthermore, a fitting routine whose model includes Gaussian functions for the fluorescence lines and exponentially modified Gaussian (EMG) functions for the scattering lines (with long tails at lower energies) is proposed to extract the line of interest from the fluorescence spectrum. Additionally, the fitting model has an EMG function for each scattering line (elastic and inelastic) at incident energies where they start to be discerned. At these energies, the data reduction is done per detector column to include the angular dependence of scattering. In the second part of this thesis, an automatic method for text separation on palimpsests is presented. Scanning X-ray fluorescence is performed on the parchment, where a spectrum per scanned point is collected. Within this method, each spectrum is treated as a vector forming a basis which is to be transformed so that the basis vectors are the spectra of each ink. Principal Component Analysis is employed to provide an initial guess of the sought basis. This basis is further transformed by means of an optimization routine that maximizes the contrast and minimizes the non-negative entries in the spectra. The method is tested on original and self-made palimpsests.
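
    The Gaussian-plus-EMG fitting idea can be sketched as follows; the single fluorescence line plus single scattering line is a simplification of the model described above, and the peak positions, widths and starting values are placeholders (an assumed SciPy-based illustration, not the author's code).

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    def gaussian(e, a, mu, sigma):
        return a * np.exp(-0.5 * ((e - mu) / sigma) ** 2)

    def emg(e, a, mu, sigma, lam):
        # exponentially modified Gaussian with its long tail towards lower energies
        return a * (lam / 2.0) * np.exp((lam / 2.0) * (lam * sigma ** 2 + 2.0 * (e - mu))) \
               * erfc((e - mu + lam * sigma ** 2) / (np.sqrt(2.0) * sigma))

    def spectrum_model(e, a1, mu1, s1, a2, mu2, s2, lam):
        # one fluorescence line (Gaussian) + one scattering line (EMG)
        return gaussian(e, a1, mu1, s1) + emg(e, a2, mu2, s2, lam)

    # popt, _ = curve_fit(spectrum_model, energy_keV, counts,
    #                     p0=[1e3, 6.4, 0.08, 5e2, 17.4, 0.15, 3.0])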

  10. Establishing New Mappings between Familiar Phones: Neural and Behavioral Evidence for Early Automatic Processing of Nonnative Contrasts

    Science.gov (United States)

    Barrios, Shannon L.; Namyst, Anna M.; Lau, Ellen F.; Feldman, Naomi H.; Idsardi, William J.

    2016-01-01

    To attain native-like competence, second language (L2) learners must establish mappings between familiar speech sounds and new phoneme categories. For example, Spanish learners of English must learn that [d] and [ð], which are allophones of the same phoneme in Spanish, can distinguish meaning in English (i.e., /deɪ/ “day” and /ðeɪ/ “they”). Because adult listeners are less sensitive to allophonic than phonemic contrasts in their native language (L1), novel target language contrasts between L1 allophones may pose special difficulty for L2 learners. We investigate whether advanced Spanish late-learners of English overcome native language mappings to establish new phonological relations between familiar phones. We report behavioral and magnetoencephalographic (MEG) evidence from two experiments that measured the sensitivity and pre-attentive processing of three listener groups (L1 English, L1 Spanish, and advanced Spanish late-learners of English) to differences between three nonword stimulus pairs ([idi]-[iði], [idi]-[iɾi], and [iði]-[iɾi]) which differ in phones that play a different functional role in Spanish and English. Spanish and English listeners demonstrated greater sensitivity (larger d' scores) for nonword pairs distinguished by phonemic than by allophonic contrasts, mirroring previous findings. Spanish late-learners demonstrated sensitivity (large d' scores and MMN responses) to all three contrasts, suggesting that these L2 learners may have established a novel [d]-[ð] contrast despite the phonological relatedness of these sounds in the L1. Our results suggest that phonological relatedness influences perceived similarity, as evidenced by the results of the native speaker groups, but may not cause persistent difficulty for advanced L2 learners. Instead, L2 learners are able to use cues that are present in their input to establish new mappings between familiar phones. PMID:27445949

  11. Morphotectonic mapping from the analysis of automatically extracted lineaments using Landsat 8 images and SRTM data in the Hindukush-Pamir

    Science.gov (United States)

    Rahnama, Mehdi; Gloaguen, Richard

    2014-05-01

    Modern deformation, fault movements, and induced earthquakes in the Hindukush-Pamir region are driven by the collision between the northward-moving Indian subcontinent and Eurasia. We investigated neotectonic activity and generated tectonic maps of this area. We developed a Matlab-based toolbox for the automatic extraction of image discontinuities. The approach consists of frequency-domain filtering, edge detection in the spatial domain, Hough transformation, segment grouping, polynomial interpolation and geostatistical analysis of the lineament patterns. Statistical quantifications of counts, lengths, azimuth frequency, density distribution, and orientations are analyzed to understand the tectonic activities, to explain the prominent structural trends, and to demarcate the contribution of different faulting styles. Morphotectonic lineaments observed in the study area were automatically extracted from the panchromatic band of Landsat 8 with 15-m resolution and the SRTM digital elevation model (DEM) with 90-m resolution. These data were then analyzed to characterize the tectonic trends that dominated the geologic evolution of this area. We show that the SW-Pamir is mainly controlled by the Chaman-Herat-Central Badakhshan fault systems and, to a lesser extent, by the Darvaz fault zone. Extracted lineaments and the intensity of the characterized tectonic trends correspond well with reference data. In addition, the results are consistent with the styles of faulting determined from focal mechanisms of the historical earthquake epicenters in the region. The presented results could be applicable in different geological aspects that are based on a good knowledge of the system patterns and the spatial relationship between them. These aspects include geodynamics, seismic and risk assessment, mineral exploration and hydrogeological research.
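
    The record describes a Matlab toolbox; the fragment below is only an illustrative scikit-image equivalent of the edge-detection and Hough-transform stages (parameter values and the azimuth convention are assumptions, not taken from the toolbox).

    import numpy as np
    from skimage import feature, transform

    def extract_lineaments(image, sigma=3, threshold=10, line_length=50, line_gap=5):
        # Straight discontinuities in a raster (e.g. a hillshade or band image):
        # Canny edge detection followed by a probabilistic Hough transform.
        edges = feature.canny(image.astype(float), sigma=sigma)
        return transform.probabilistic_hough_line(
            edges, threshold=threshold, line_length=line_length, line_gap=line_gap)

    def azimuths(segments):
        # Segment orientations in degrees from north, folded into [0, 180).
        return np.asarray([np.degrees(np.arctan2(x1 - x0, y0 - y1)) % 180.0
                           for (x0, y0), (x1, y1) in segments])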

  12. Towards an Automatic Framework for Urban Settlement Mapping from Satellite Images: Applications of Geo-referenced Social Media and One Class Classification

    Science.gov (United States)

    Miao, Zelang

    2017-04-01

    Currently, urban dwellers comprise more than half of the world's population and this percentage is still dramatically increasing. The explosive urban growth over the next two decades will have a profound long-term impact on people as well as on the environment. Accurate and up-to-date delineation of urban settlements plays a fundamental role in defining planning strategies and in supporting sustainable development of urban settlements. In order to provide adequate data about urban extents and land covers, classifying satellite data has become a common practice, usually with accurate enough results. Indeed, a number of supervised learning methods have proven effective in urban area classification, but they usually depend on a large number of training samples, whose collection is a time- and labor-expensive task. This issue becomes particularly serious when classifying large areas at the regional/global level. As an alternative to manual ground truth collection, in this work we use geo-referenced social media data. Cities and densely populated areas are an extremely fertile ground for the production of individual geo-referenced data (such as GPS and social network data). Training samples derived from geo-referenced social media have several advantages: they are easy to collect; they are usually freely exploitable; and, finally, data from social media are spatially available in many locations, and without doubt in most urban areas around the world. Despite these advantages, the selection of training samples from social media meets two challenges: 1) there are many duplicated points; 2) a method is required to automatically label them as "urban/non-urban". The objective of this research is to validate automatic sample selection from geo-referenced social media and its applicability in one-class classification for urban extent mapping from satellite images. The findings in this study shed new light on social media applications in the field of remote sensing.
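
    A minimal sketch of the one-class-classification step, using scikit-learn's OneClassSVM on positive-only samples; the feature representation, the deduplication of points and all parameter values are assumptions for illustration only.

    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.preprocessing import StandardScaler

    def urban_mask(pixel_features, urban_samples, nu=0.1, gamma="scale"):
        # pixel_features: (n_pixels, n_bands) image features to classify;
        # urban_samples: features at deduplicated geo-tagged social-media points.
        # The classifier is trained on the positive ("urban") class only.
        scaler = StandardScaler().fit(urban_samples)
        clf = OneClassSVM(nu=nu, gamma=gamma).fit(scaler.transform(urban_samples))
        pred = clf.predict(scaler.transform(pixel_features))  # +1 urban-like, -1 outlier
        return pred == 1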

  13. Evaluation of methods to produce an image library for automatic patient model localization for dose mapping during fluoroscopically guided procedures

    Science.gov (United States)

    Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.

    2017-03-01

    The purpose of this work is to evaluate methods for producing a library of 2D-radiographic images to be correlated to clinical images obtained during a fluoroscopically-guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient-model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D-radiographic image, were correlated to a 2D-fluoroscopic image of a phantom, which represented the clinical-fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient-model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic-image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm and can be produced quickly; however, the DRR method produces more accurate grey-levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
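
    For reference, Matlab's corr2 is the plain 2-D correlation coefficient; a NumPy equivalent might look like the sketch below (the library and fluoro_frame names in the commented usage line are hypothetical).

    import numpy as np

    def corr2(a, b):
        # 2-D correlation coefficient of two equally sized images
        # (the quantity the localization algorithm thresholds at 50%).
        a = a - a.mean()
        b = b - b.mean()
        return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

    # best_match = max(library, key=lambda img: corr2(img, fluoro_frame))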

  14. Fully Automatic Feature-Based Registration of Mobile Mapping and Aerial Nadir Images for Enabling the Adjustment of Mobile Platform Locations in Gnss-Denied Urban Environments

    Science.gov (United States)

    Jende, P.; Nex, F.; Gerke, M.; Vosselman, G.

    2017-05-01

    Mobile Mapping (MM) has gained significant importance in the realm of high-resolution data acquisition techniques. MM is able to record georeferenced street-level data in a continuous (laser scanners) and/or discrete (cameras) fashion. MM's georeferencing relies on a conjunction of Global Navigation Satellite Systems (GNSS), Inertial Measurement Units (IMU) and optionally on odometry sensors. While this technique does not pose a problem for absolute positioning in open areas, its reliability and accuracy may be diminished in urban areas where high-rise buildings and other tall objects can obstruct the direct line-of-sight between the satellite and the receiver unit. Consequently, multipath measurements or complete signal outages impede the MM platform's localisation and may affect the accurate georeferencing of collected data. This paper presents a technique to recover correct orientation parameters for MM imaging platforms by utilising aerial images as an external georeferencing source. This is achieved by a fully automatic registration strategy which takes into account the overall differences between aerial and MM data, such as scale, illumination, perspective and content. Based on these correspondences, MM data can be verified and/or corrected by using an adjustment solution. The registration strategy is discussed and results in a success rate of about 95 %.

  15. Automatic Imitation

    Science.gov (United States)

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  16. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic, and they may look rather complicated, in the sense that it may not be easy to name the rule by which the sequence is generated; however, there exists a rule which generates the sequence. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.

  17. Semi-Automatic Mapping of Tidal Cracks in the Fast Ice Region near Zhongshan Station in East Antarctica Using Landsat-8 OLI Imagery

    Directory of Open Access Journals (Sweden)

    Fengming Hui

    2016-03-01

    Full Text Available Tidal cracks are linear features that appear parallel to coastlines in fast ice regions due to the actions of periodic and non-periodic sea level oscillations. They can influence energy and heat exchange between the ocean, ice, and atmosphere, as well as human activities. In this paper, the LINE module of Geomatica 2015 software was used to automatically extract tidal cracks in fast ice regions near the Chinese Zhongshan Station in East Antarctica from Landsat-8 Operational Land Imager (OLI) data with resolutions of 15 m (panchromatic band 8) and 30 m (multispectral bands 1–7). The detected tidal cracks were determined based on matching between the output from the LINE module and manually-interpreted tidal cracks in OLI images. The ratio of the length of detected tidal cracks to the total length of interpreted cracks was used to evaluate the automated detection method. Results show that the vertical direction gradient is a better input to the LINE module than the top-of-atmosphere (TOA) reflectance input for estimating the presence of cracks, regardless of the examined resolution. Data with a resolution of 15 m also give better results in crack detection than data with a resolution of 30 m. The statistics also show that, in the results from the 15-m-resolution data, Band 8 performed best, with values of the above-mentioned ratio of 50.92 and 31.38 percent using the vertical gradient and the TOA reflectance methods, respectively. On the other hand, in the results from the 30-m-resolution data, Band 5 performed best, with ratios of 47.43 and 17.8 percent using the same methods, respectively. This implies that Band 8 was better for tidal crack detection than the multispectral fusion data (Bands 1–7), and that Band 5 with a resolution of 30 m was best among the multispectral data. The semi-automatic mapping of tidal cracks will improve the safety of vehicle travel in fast ice regions.
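
    The two candidate input layers mentioned above can be sketched as follows; the DN-to-reflectance rescaling factors and the sun elevation come from the scene metadata, and treating the along-column image gradient as the "vertical direction gradient" is an assumption of this illustration.

    import numpy as np

    def toa_reflectance(dn, mult, add, sun_elev_deg):
        # Landsat-8 OLI digital numbers to top-of-atmosphere reflectance;
        # `mult` and `add` are the per-band rescaling factors from the MTL file.
        rho = mult * dn.astype(float) + add
        return rho / np.sin(np.radians(sun_elev_deg))

    def vertical_gradient(band):
        # Along-column brightness gradient used as the alternative input layer;
        # linear tidal cracks appear as narrow ridges in this image.
        return np.abs(np.gradient(band.astype(float), axis=0))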

  18. An Automatic Mosaicking Algorithm for the Generation of a Large-Scale Forest Height Map Using Spaceborne Repeat-Pass InSAR Correlation Magnitude

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2015-05-01

    Full Text Available This paper describes an automatic mosaicking algorithm for creating large-scale mosaic maps of forest height. In contrast to existing mosaicking approaches that use SAR backscatter power and/or InSAR phase, this paper utilizes the forest height estimates that are inverted from spaceborne repeat-pass cross-pol InSAR correlation magnitude. By using repeat-pass InSAR correlation measurements that are dominated by temporal decorrelation, it has been shown that a simplified inversion approach can be utilized to create a height-sensitive measure over the whole interferometric scene, where two scene-wide fitting parameters are able to characterize the mean behavior of the random motion and dielectric changes of the volume scatterers within the scene. In order to combine these single-scene results into a mosaic, a matrix formulation is used with nonlinear least squares and observations in adjacent-scene overlap areas to create a self-consistent estimate of forest height over the larger region. This automated mosaicking method has the benefit of suppressing the global fitting error and, thus, mitigating the “wallpapering” problem in the manual mosaicking process. The algorithm is validated over the U.S. state of Maine by using InSAR correlation magnitude data from ALOS/PALSAR and comparing the inverted forest height with Laser Vegetation Imaging Sensor (LVIS) height and National Biomass and Carbon Dataset (NBCD) basal area weighted (BAW) height. This paper serves as a companion work to previously demonstrated results, the combination of which is meant to be an observational prototype for NASA’s DESDynI-R (now called NISAR) and JAXA’s ALOS-2 satellite missions.
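
    The adjacent-scene adjustment idea can be sketched in a heavily simplified form: assume each scene carries two fitting parameters and an (unspecified) height_fn maps scene data and parameters to height, and let the solver force agreement in the overlap areas. Everything below (names, data layout, model) is an illustrative assumption, not the paper's formulation.

    import numpy as np
    from scipy.optimize import least_squares

    def mosaic_adjust(n_scenes, overlaps, height_fn, x0):
        # overlaps: list of (i, j, data_i, data_j) samples shared by scenes i and j;
        # x0: (n_scenes, 2) initial guess of the two fitting parameters per scene.
        def residuals(params):
            p = params.reshape(n_scenes, 2)
            res = [np.atleast_1d(height_fn(di, p[i]) - height_fn(dj, p[j]))
                   for i, j, di, dj in overlaps]
            return np.concatenate(res)

        sol = least_squares(residuals, np.asarray(x0, float).ravel())
        return sol.x.reshape(n_scenes, 2)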

  19. Neural Bases of Automaticity.

    Science.gov (United States)

    Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F; Logan, Gordon D

    2017-09-21

    Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that, with practice, processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this prediction; however, these experiments did not use the canonical paradigms used to study automaticity. Specifically, automaticity is typically studied using practice regimes with consistent mapping between targets and distractors and spaced practice with individual targets, features that these previous studies lacked. The aim of the present work was to examine whether the practice-induced shift from working memory to long-term memory inferred from subjects' ERPs is observed under the conditions in which automaticity is traditionally studied. We found that to be the case in 3 experiments, firmly supporting the predictions of these theories. In addition, we found that the temporal distribution of practice (massed vs. spaced) modulates the shape of learning curves. The ERP data revealed that the switch to long-term memory is slower for spaced than massed practice, suggesting that memory systems are used in a strategic manner. This finding provides new constraints for theories of learning and automaticity. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Automatic Collision Avoidance Technology (ACAT)

    Science.gov (United States)

    Swihart, Donald E.; Skoog, Mark A.

    2007-01-01

    This document presents two views of the Automatic Collision Avoidance Technology (ACAT). One viewgraph presentation reviews the development and system design of ACAT. Two types of ACAT exist: Automatic Ground Collision Avoidance (AGCAS) and Automatic Air Collision Avoidance (AACAS). The AGCAS uses Digital Terrain Elevation Data (DTED) for mapping functions and navigation data to place the aircraft on the map. It then scans the DTED in front of and around the aircraft and uses the future aircraft trajectory (5g) to provide an automatic fly-up maneuver when required. The AACAS uses a data link to determine position and closing rate. It contains several canned maneuvers to avoid collision. Automatic maneuvers can occur at the last instant, and both aircraft maneuver when using the data link. The system can use a sensor in place of the data link. The second viewgraph presentation reviews the development of a flight test and an evaluation of the test. A review of the operation of the AGCAS and a comparison with a pilot's performance are given. The same review is given for the AACAS.

  1. Pupillary automatism

    Directory of Open Access Journals (Sweden)

    Menon V

    1989-01-01

    Full Text Available An unusual case of cyclic pupillary movements in an otherwise complete oculomotor nerve palsy in a five-year-old girl is reported. This is considered to be due to destruction of the somatic and visceral nuclei of the oculomotor nerve following injury to its fascicular part. Pupillary automatism has been explained on the basis of the presence of aberrant autonomic cells in the ciliary ganglion which discharge in a regular rhythm independent of higher control.

  2. Automatic Evolution of Molecular Nanotechnology Designs

    Science.gov (United States)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper describes strategies for automatically generating designs for analog circuits at the molecular level. Software maps out the edges and vertices of potential nanotechnology systems on graphs, then selects appropriate ones through evolutionary or genetic paradigms.

  3. NLP techniques associated with the OpenGALEN ontology for semi-automatic textual extraction of medical knowledge: abstracting and mapping equivalent linguistic and logical constructs.

    Science.gov (United States)

    do Amaral, M B; Roberts, A; Rector, A L

    2000-01-01

    This research project presents methodological and theoretical issues related to the inter-relationship between linguistic and conceptual semantics, analysing the results obtained by the application of an NLP parser to a set of radiology reports. Our objective is to define a technique for associating linguistic methods with domain-specific ontologies for semi-automatic extraction of intermediate representation (IR) information formats and medical ontological knowledge from clinical texts. We have applied the Edinburgh LTG natural language parser to 2810 clinical narratives describing radiology procedures. In a second step, we have used medical expertise and ontology formalism for identification of semantic structures and abstraction of IR schemas related to the processed texts. These IR schemas are an association of linguistic and conceptual knowledge, based on their semantic contents. This methodology aims to contribute to the elaboration of models relating linguistic and logical constructs based on empirical data analysis. Advances in this field might lead to the development of computational techniques for automatic enrichment of medical ontologies from real clinical environments, using descriptive knowledge implicit in large text corpora sources.

  4. Fully automatic feature-based registration of mobile mapping and aerial nadir images for enabling the adjustment of mobile platform locations in gnss-denied urban environments

    NARCIS (Netherlands)

    Jende, P.; Nex, F.; Gerke, M.; Vosselman, G.; Heipke, C.

    2017-01-01

    Mobile Mapping (MM) has gained significant importance in the realm of high-resolution data acquisition techniques. MM is able to record georeferenced street-level data in a continuous (laser scanners) and/or discrete (cameras) fashion. MM's georeferencing relies on a conjunction of Global Navigation

  5. Automatic vs semi-automatic global cardiac function assessment using 64-row CT

    Science.gov (United States)

    Greupner, J; Zimmermann, E; Hamm, B; Dewey, M

    2012-01-01

    Objective Global cardiac function assessment using multidetector CT (MDCT) is time-consuming. Therefore we sought to compare an automatic software tool with an established semi-automatic method. Methods A total of 36 patients underwent CT with 64×0.5 mm detector collimation, and global left ventricular function was subsequently assessed by two independent blinded readers using both an automatic region-growing-based software tool (with and without manual adjustment) and an established semi-automatic software tool. We also analysed automatic motion mapping to identify end-systole. Results The time needed for assessment using the semi-automatic approach (12:12±6:19 min) was reduced by 75–85% with the automatic software tool (unadjusted, 01:34±0:29 min; adjusted, 02:53±1:19 min; both p<0.05). Ejection fraction (EF) was similar for the automatic (58.6±14.9%) and the semi-automatic (58.0±15.3%) approaches. Also the manually adjusted automatic approach led to significantly smaller limits of agreement than the unadjusted automatic approach for end-diastolic volume (±36.4 ml vs ±58.5 ml, p>0.05). Using motion mapping to automatically identify end-systole reduced analysis time by 95% compared with the semi-automatic approach, but showed inferior precision for EF and end-systolic volume. Conclusion Automatic function assessment using MDCT with manual adjustment shows good agreement with an established semi-automatic approach, while reducing the analysis time by 75% to less than 3 min. This suggests that automatic CT function assessment with manual correction may be used for fast, comfortable and reliable evaluation of global left ventricular function. PMID:22045953
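
    The limits of agreement quoted above are conventionally computed Bland-Altman style; the following is just that standard calculation, not code from the study.

    import numpy as np

    def limits_of_agreement(x, y):
        # Bland-Altman agreement between two methods measured on the same
        # patients: mean difference (bias) and the 95% limits (bias +/- 1.96 SD).
        d = np.asarray(x, float) - np.asarray(y, float)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)
        return bias, (bias - half_width, bias + half_width)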

  6. Automatic Techniques for generation of environmental sensitivity index map to oil spill in the Guajará Bay , Belém-PA.

    Directory of Open Access Journals (Sweden)

    Fernando Pellon de Miranda

    2006-12-01

    Full Text Available A routine of techniques and procedures was established in order to produce Environmental Sensitivity Index (ESI) maps for oil spills based on optical (Landsat 7 ETM+) and radar (Radarsat-1 Wide 1) remote sensing, multi-sensor data fusion and a geographic information system. Seven landscape units were recognized, related to highland environments (coastal plateaus with artificial structures - ISA 8B; estuarine wall - ISA 1B) and recent coastal environments, together with their ESI classes (floodplain - ISA 10B; mangrove - ISA 10A; vegetated muddy banks - ISA 9B; estuarine beach - ISA 4; cliff - ISA 3). The results obtained in the investigation have opened new perspectives in the oil industry regarding operational security and environmental protection, social-environmental assessment, technology for emergencies, coastal management and environmental information systems.

  7. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    differential equations, but in this thesis, we describe how to use the methods for enclosing iterates of discrete mappings, and then later use them for discretizing solutions of ordinary differential equations. The theory of automatic differentiation is introduced, and three methods for obtaining derivatives...... is the possibility to combine the three methods in an extremely flexible way. We examine some applications where this flexibility is very useful. A method for Taylor expanding solutions of ordinary differential equations is presented, and a method for obtaining interval enclosures of the truncation errors incurred......, using this method has been developed. (ADIODES is an abbreviation of ``Automatic Differentiation Interval Ordinary Differential Equation Solver''). ADIODES is used to prove existence and uniqueness of periodic solutions to specific ordinary differential equations occurring in dynamical systems theory...

  8. [Wearable Automatic External Defibrillators].

    Science.gov (United States)

    Luo, Huajie; Luo, Zhangyuan; Jin, Xun; Zhang, Leilei; Wang, Changjin; Zhang, Wenzan; Tu, Quan

    2015-11-01

    Defibrillation is the most effective method of treating ventricular fibrillation (VF). This paper introduces a wearable automatic external defibrillator based on an embedded system, which includes ECG measurement, bioelectrical impedance measurement, and a discharge defibrillation module; it can automatically identify the VF signal and deliver a biphasic exponential defibrillation waveform. As verified by animal tests, the device can perform ECG acquisition and automatic identification. After identifying the ventricular fibrillation signal, it can automatically defibrillate to abort ventricular fibrillation and achieve cardioversion.

  9. Automatic fluid dispenser

    Science.gov (United States)

    Sakellaris, P. C. (Inventor)

    1977-01-01

    Fluid automatically flows to individual dispensing units at predetermined times from a fluid supply and is available only for a predetermined interval of time after which an automatic control causes the fluid to drain from the individual dispensing units. Fluid deprivation continues until the beginning of a new cycle when the fluid is once again automatically made available at the individual dispensing units.

  10. Automatic assessment of cardiac perfusion MRI

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Stegmann, Mikkel Bille; Larsson, Henrik B.W.

    2004-01-01

    In this paper, a method based on Active Appearance Models (AAM) is applied for automatic registration of myocardial perfusion MRI. A semi-quantitative perfusion assessment of the registered image sequences is presented. This includes the formation of perfusion maps for three parameters; maximum up...

  11. Analysis of engineering drawings and raster map images

    CERN Document Server

    Henderson, Thomas C

    2013-01-01

    Presents up-to-date methods and algorithms for the automated analysis of engineering drawings and digital cartographic maps. Discusses automatic engineering drawing and map analysis techniques. Covers detailed accounts of the use of unsupervised segmentation algorithms to map images.

  12. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, the payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  13. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [comp.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.

  14. Mediation and Automatization.

    Science.gov (United States)

    Hutchins, Edwin

    This paper discusses the relationship between the mediation of task performance by some structure that is not inherent in the task domain itself and the phenomenon of automatization, in which skilled performance becomes effortless or phenomenologically "automatic" after extensive practice. The use of a common simple explicit mediating…

  15. Digital automatic gain control

    Science.gov (United States)

    Uzdy, Z.

    1980-01-01

    Performance analysis, used to evaluate the fitness of several circuits for digital automatic gain control (AGC), indicates that a digital integrator employing a coherent amplitude detector (CAD) is the device best suited for the application. The circuit reduces gain error to half that of conventional analog AGC while making it possible to automatically modify the receiver response to match incoming signal conditions.

  16. AUTOMATIC INTRAVENOUS DRIP CONTROLLER*

    African Journals Online (AJOL)

    Both the nursing staff shortage and the need for precise control in the administration of dangerous drugs intra- venously have led to the development of various devices to achieve an automatic system. The continuous automatic control of the drip rate eliminates errors due to any physical effect such as movement of the ...

  17. Automatic wire twister.

    Science.gov (United States)

    Smith, J F; Rodeheaver, G T; Thacker, J G; Morgan, R F; Chang, D E; Fariss, B L; Edlich, R F

    1988-06-01

    This automatic wire twister used in surgery consists of a 6-inch needle holder attached to a twisting mechanism. The major advantage of this device is that it twists wires significantly more rapidly than the conventional manual techniques. Testing has found that the ultimate force required to disrupt the wires twisted by either the automatic wire twister or manual techniques did not differ significantly and was directly related to the number of twists. The automatic wire twister reduces the time needed for wire twisting without altering the security of the twisted wire.

  18. Automatic traveltime picking using instantaneous traveltime

    KAUST Repository

    Saragiotis, Christos

    2013-02-08

    Event picking is used in many steps of seismic processing. We present an automatic event picking method that is based on a new attribute of seismic signals, instantaneous traveltime. The calculation of the instantaneous traveltime consists of two separate but interrelated stages. First, a trace is mapped onto the time-frequency domain. Then the time-frequency representation is mapped back onto the time domain by an appropriate operation. The computed instantaneous traveltime equals the recording time at those instances at which there is a seismic event, a feature that is used to pick the events. We analyzed the concept of the instantaneous traveltime and demonstrated the application of our automatic picking method on dynamite and Vibroseis field data.

  19. Automatic transmission vehicle injuries.

    Science.gov (United States)

    Fidler, M

    1973-04-07

    Four drivers sustained severe injuries when run down by their own automatic cars while adjusting the carburettor or throttle linkages. The transmission had been left in the "Drive" position and the engine was idling. This accident is easily avoidable.

  20. Automatic Payroll Deposit System.

    Science.gov (United States)

    Davidson, D. B.

    1979-01-01

    The Automatic Payroll Deposit System in Yakima, Washington's Public School District No. 7, directly transmits each employee's salary amount for each pay period to a bank or other financial institution. (Author/MLF)

  1. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael

    2014-05-01

    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
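
    The route-aware simplification described here, reduced to its core idea, amounts to keeping only street segments that lie on some POI-to-POI shortest path; the sketch below does exactly that with networkx and omits the paper's energy-minimization refinement (graph and attribute names are assumptions).

    import itertools
    import networkx as nx

    def route_aware_subgraph(streets, pois, weight="length"):
        # streets: weighted street graph; pois: nodes corresponding to the POIs.
        # Keep only edges that lie on at least one POI-to-POI shortest path,
        # giving a clutter-free skeleton that still connects every POI.
        keep = set()
        for a, b in itertools.combinations(pois, 2):
            path = nx.shortest_path(streets, a, b, weight=weight)
            keep.update(zip(path[:-1], path[1:]))
        return streets.edge_subgraph(keep).copy()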

  2. Automaticity or active control

    DEFF Research Database (Denmark)

    Tudoran, Ana Alina; Olsen, Svein Ottar

    aspects of the construct, such as routine, inertia, automaticity, or very little conscious deliberation. The data consist of 2962 consumers participating in a large European survey. The results show that habit strength significantly moderates the association between satisfaction and action loyalty, and......, respectively, between intended loyalty and action loyalty. At high levels of habit strength, consumers are more likely to free up cognitive resources and incline the balance from controlled to routine and automatic-like responses....

  3. Application of nonlinear transformations to automatic flight control

    Science.gov (United States)

    Meyer, G.; Su, R.; Hunt, L. R.

    1984-01-01

    The theory of transformations of nonlinear systems to linear ones is applied to the design of an automatic flight controller for the UH-1H helicopter. The helicopter mathematical model is described and it is shown to satisfy the necessary and sufficient conditions for transformability. The mapping is constructed, taking the nonlinear model to canonical form. The performance of the automatic control system in a detailed simulation on the flight computer is summarized.

  4. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his...... honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of the automatic program derivation. The book also includes some papers...... by members of the IFIP Working Group 2.1 of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers...

  5. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his...... by members of the IFIP Working Group 2.1 of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers...

  6. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS). We provide a recent state of the art. The book shows the main problems of ADS, the difficulties, and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of Automatic Document Summarization are not recent. Powerful algorithms have been develop

  7. Automatization and training in visual search.

    Science.gov (United States)

    Czerwinski, M; Lightfoot, N; Shiffrin, R M

    1992-01-01

    In several search tasks, the amount of practice on particular combinations of targets and distractors was equated in varied-mapping (VM) and consistent-mapping (CM) conditions. The results indicate the importance of distinguishing between memory and visual search tasks, and implicate a number of factors that play important roles in visual search and its learning. Visual search was studied in Experiment 1. VM and CM performance were almost equal, and slope reductions occurred during practice for both, suggesting the learning of efficient attentive search based on features, and no important role for automatic attention attraction. However, positive transfer effects occurred when previous CM targets were re-paired with previous CM distractors, even though these targets and distractors had not been trained together. Also, the introduction of a demanding simultaneous task produced advantages of CM over VM. These latter two results demonstrated the operation of automatic attention attraction. Visual search was further studied in Experiment 2, using novel characters for which feature overlap and similarity were controlled. The design and many of the findings paralleled Experiment 1. In addition, enormous search improvement was seen over 35 sessions of training, suggesting the operation of perceptual unitization for the novel characters. Experiment 3 showed a large, persistent advantage for CM over VM performance in memory search, even when practice on particular combinations of targets and distractors was equated in the two training conditions. A multifactor theory of automatization and attention is put forth to account for these findings and others in the literature.

  8. Automatic Commercial Permit Sets

    Energy Technology Data Exchange (ETDEWEB)

    Grana, Paul [Folsom Labs, Inc., San Francisco, CA (United States)

    2017-12-21

    Final report for Folsom Labs’ Solar Permit Generator project, which has successfully completed, resulting in the development and commercialization of a software toolkit within the cloud-based HelioScope software environment that enables solar engineers to automatically generate and manage draft documents for permit submission.

  9. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of the input. We describe a system to derive such time bounds automatically using abstract

  10. Exploring Automatization Processes.

    Science.gov (United States)

    DeKeyser, Robert M.

    1996-01-01

    Presents the rationale for and the results of a pilot study attempting to document in detail how automatization takes place as the result of different kinds of intensive practice. Results show that reaction times and error rates gradually decline with practice, and the practice effect is skill-specific. (36 references) (CK)

  11. Automaticity and Reading: Perspectives from the Instance Theory of Automatization.

    Science.gov (United States)

    Logan, Gordon D.

    1997-01-01

    Reviews recent literature on automaticity, defining the criteria that distinguish automatic processing from non-automatic processing, and describing modern theories of the underlying mechanisms. Focuses on evidence from studies of reading and draws implications from theory and data for practical issues in teaching reading. Suggests that…

  12. Mapping out Map Libraries

    Directory of Open Access Journals (Sweden)

    Ferjan Ormeling

    2008-09-01

    Full Text Available Discussing the requirements for map data quality, map users and their library/archives environment, the paper focuses on the metadata the user would need for a correct and efficient interpretation of the map data. For such a correct interpretation, knowledge of the rules and guidelines according to which the topographers/cartographers work (such as the kind of data categories to be collected), and the degree to which these rules and guidelines were indeed followed, are essential. This is not only valid for the old maps stored in our libraries and archives, but perhaps even more so for the new digital files as the format in which we now have to access our geospatial data. As this would be too much to ask from map librarians/curators, some sort of web 2.0 environment is sought where comments about data quality, completeness and up-to-dateness from knowledgeable map users regarding the specific maps or map series studied can be collected and tagged to scanned versions of these maps on the web. In order not to be subject to the same disadvantages as Wikipedia, where the ‘communis opinio’, rather than scholarship, seems to be decisive, some checking by map curators of this tagged map use information would still be needed. Cooperation between map curators and the International Cartographic Association (ICA) map and spatial data use commission to this end is suggested.

  13. A Bayesian Network Approach to Ontology Mapping

    National Research Council Canada - National Science Library

    Pan, Rong; Ding, Zhongli; Yu, Yang; Peng, Yun

    2005-01-01

    This paper presents our ongoing effort on developing a principled methodology for automatic ontology mapping based on BayesOWL, a probabilistic framework we developed for modeling uncertainty in semantic web...

  14. Management of natural resources through automatic cartographic inventory

    Science.gov (United States)

    Rey, P. A.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Significant correspondence codes relating ERTS imagery to ground truth from vegetation and geology maps have been established. The use of color equidensity and color composite methods for selecting zones of equal densitometric value on ERTS imagery was perfected. Primary interest of temporal color composite is stressed. A chain of transfer operations from ERTS imagery to the automatic mapping of natural resources was developed.

  15. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, floor plan drawings in Computer-aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and real-time. Such an analysis system provides high accuracy and is also evaluated on a public website that, on average, archives more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.

  16. Automatic Ultrasound Scanning

    DEFF Research Database (Denmark)

    Moshavegh, Ramin

    Medical ultrasound has been a widely used imaging modality in healthcare platforms for examination, diagnostic purposes, and for real-time guidance during surgery. However, despite the recent advances, medical ultrasound remains the most operator-dependent imaging modality, as it heavily relies...... on the user adjustments on the scanner interface to optimize the scan settings. This explains the huge interest in the subject of this PhD project entitled “AUTOMATIC ULTRASOUND SCANNING”. The key goals of the project have been to develop automated techniques to minimize the unnecessary settings...... on the scanners, and to improve the computer-aided diagnosis (CAD) in ultrasound by introducing new quantitative measures. Thus, four major issues concerning automation of the medical ultrasound are addressed in this PhD project. They touch upon gain adjustments in ultrasound, automatic synthetic aperture image...

  17. Automatic speech recognition

    Science.gov (United States)

    Espy-Wilson, Carol

    2005-04-01

    Great strides have been made in the development of automatic speech recognition (ASR) technology over the past thirty years. Most of this effort has been centered around the extension and improvement of Hidden Markov Model (HMM) approaches to ASR. Current commercially-available and industry systems based on HMMs can perform well for certain situational tasks that restrict variability such as phone dialing or limited voice commands. However, the holy grail of ASR systems is performance comparable to humans-in other words, the ability to automatically transcribe unrestricted conversational speech spoken by an infinite number of speakers under varying acoustic environments. This goal is far from being reached. Key to the success of ASR is effective modeling of variability in the speech signal. This tutorial will review the basics of ASR and the various ways in which our current knowledge of speech production, speech perception and prosody can be exploited to improve robustness at every level of the system.

  18. Automatic trend estimation

    CERN Document Server

    Vamos¸, C˘alin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  19. Automatic Language Identification

    Science.gov (United States)

    2000-08-01

… hundreds of input languages would need to be supported, the cost of … distinguish one language from another. The reader is referred to the linguistics literature … [Figure: the training algorithm takes training speech utterances in French, German and Spanish and produces a set of models, one model per language.] … (i.e. vowels) for each speech utterance are located automatically. Next, feature vectors … normalized to be insensitive to overall amplitude, pitch and …

  20. Towards Automatic Threat Recognition

    Science.gov (United States)

    2006-12-01

Towards Automatic Threat Recognition. Dr. Ulrich Schade, Joachim Biermann, Miłosław Frey; FGAN – FKIE (Forschungsinstitut für Kommunikation, Informationsverarbeitung und Ergonomie; Informationstechnik und Führungssysteme), Germany. [Slide outline fragments: "… as Processing Principle", "Back to the Example", "Conclusion and Outlook".]

  1. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments...... and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices....

  2. Automatization of lexicographic work

    Directory of Open Access Journals (Sweden)

    Iztok Kosem

    2013-12-01

Full Text Available A new approach to lexicographic work, in which the lexicographer is seen more as a validator of the choices made by the computer, was recently envisaged by Rundell and Kilgarriff (2011). In this paper, we describe an experiment using such an approach during the creation of the Slovene Lexical Database (Gantar, Krek, 2011). The corpus data, i.e. grammatical relations, collocations, examples, and grammatical labels, were automatically extracted from the 1.18-billion-word Gigafida corpus of Slovene. The evaluation of the extracted data consisted of comparing the time spent writing a manual entry with that spent on a (semi-)automatic entry, and of identifying potential improvements in the extraction algorithm and in the presentation of data. An important finding was that the automatic approach was far more effective than the manual approach, without any significant loss of information. Based on our experience, we would propose a slightly revised version of the approach envisaged by Rundell and Kilgarriff, in which the validation of data is left to lower-level linguists or crowd-sourcing, whereas high-level tasks such as meaning description remain the domain of lexicographers. Such an approach indeed reduces the scope of the lexicographer's work; however, it also makes it possible to bring the content to the users more quickly.

  3. The automatic checking and the seismic attributes map in a boundary region: the Niger delta deep sea bottom; Le pointe automatique et la carte des attributs sismiques dans une region frontiere: le grand fond du Delta du Niger

    Energy Technology Data Exchange (ETDEWEB)

Montagnier, Ph.; Rossi, T.; Clergeat, B.; Dall'asta, M.; Weigerber, F. [Societe Nationale Elf-Aquitaine (France)]

    1997-03-01

Most of the interpretation teams involved in the deep sea exploration of the Niger delta had to take up a major challenge: interpreting a huge volume of 3D seismic surveys in a short time, while understanding the turbidite deposits and identifying the stratigraphic traps, without any calibration or reference well. The ELF team has used the advanced functionalities provided by the SISMAGE interpretation workstation to complete the seismic mesh and to statistically calculate seismic attribute maps relative to surfaces and profiles. Abstract only. (J.S.)

  4. Automatic Seismic Signal Processing

    Science.gov (United States)

    1982-02-04

Automatic Seismic Signal Processing, Final Technical Report SAS-FR-81-04, 4 February 1982, Contract F08606-80-C-0021, prepared by Ilkka Noponen, Robert Sax and Steven … Having observed, as did Swindell and Snell (1977), that the distribution of x was slightly skewed, we used the median of x instead of the average of x for U(x…

  5. Automatic Program Development

    DEFF Research Database (Denmark)

    by members of the IFIP Working Group 2.1 of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers...... a renewed stimulus for continuing and deepening Bob's research visions. A familiar touch is given to the book by some pictures kindly provided to us by his wife Nieba, the personal recollections of his brother Gary and some of his colleagues and friends....

  6. Automatic readout micrometer

    Science.gov (United States)

    Lauritzen, T.

A measuring system is described for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibility of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment, until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  7. Automatic Aircraft Collision Avoidance System and Method

    Science.gov (United States)

    Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)

    2014-01-01

    The invention is a system and method of compressing a DTM to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three dimensional map to be compressed and dividing the three dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate terrain data of the three dimensional map data. The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine if they fall within selected tolerances. If the approximation for a specific regular area is within specified tolerance, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision system for an aircraft.
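A minimal sketch of the recursive tile-approximation idea described above, assuming a regular elevation grid, a least-squares plane fit, and a quadtree-style split; the tolerance value, helper names and toy terrain are illustrative and are not taken from the patented Auto-GCAS format.

```python
import numpy as np

def fit_plane(tile):
    """Least-squares fit z = a*x + b*y + c over a rectangular elevation tile."""
    rows, cols = tile.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(tile.size)])
    coeffs, *_ = np.linalg.lstsq(A, tile.ravel(), rcond=None)
    residual = np.abs(A @ coeffs - tile.ravel()).max()   # worst-case approximation error
    return coeffs, residual

def compress(tile, tol, origin=(0, 0), out=None):
    """Recursively split a tile until every piece is approximated within tol."""
    if out is None:
        out = []
    coeffs, err = fit_plane(tile)
    rows, cols = tile.shape
    if err <= tol or min(rows, cols) <= 2:          # accept this flat surface
        out.append((origin, tile.shape, coeffs))
        return out
    r2, c2 = rows // 2, cols // 2                   # otherwise split into four sub-areas
    for dr, dc in [(0, 0), (0, c2), (r2, 0), (r2, c2)]:
        sub = tile[dr:dr + (r2 if dr == 0 else rows - r2),
                   dc:dc + (c2 if dc == 0 else cols - c2)]
        compress(sub, tol, (origin[0] + dr, origin[1] + dc), out)
    return out

# toy terrain: a smooth slope plus a bump that forces local subdivision
y, x = np.mgrid[0:64, 0:64]
terrain = 0.1 * x + 0.05 * y + 5.0 * np.exp(-((x - 40)**2 + (y - 20)**2) / 50.0)
patches = compress(terrain, tol=0.5)
print(f"{len(patches)} flat patches approximate a {terrain.shape} grid")
```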

  8. An automatic measuring system for mapping of spectral and angular dependence of direct and diffuse solar radiation; Et automatisk maalesystem for kartlegging av vinkel- og spektralfordeling av direkte og diffus solstraaling

    Energy Technology Data Exchange (ETDEWEB)

    Grandum, Oddbjoern

    1997-12-31

In optimizing solar systems, it is necessary to know the spectral and angular dependence of the radiation. The general nonlinear character of most solar energy systems accentuates this. This thesis describes a spectroradiometer that will measure both the direct component of the solar radiation and the angular dependence of the diffuse component. Radiation from a selected part of the sky is transported through a movable set of tube sections on to a stationary set of three monochromators with detectors. The beam transport system may effectively be looked upon as a single long tube aimed at a particular spot in the sky. The half value of the effective opening angle is 1.3° for diffuse radiation and 2.8° for direct radiation. The whole measurement process is controlled and operated by a PC and normally runs without manual attention. The instrument is built into a caravan. The thesis describes in detail the experimental apparatus, calibration and measurement accuracies. To map the diffuse radiation, one divides the sky into 26 sectors of equal solid angle. A complete measurement cycle is then made at a random point within each sector. These measurements are modelled by fitting to spherical harmonics, enforcing symmetry around the solar direction and the horizontal plane. The direct radiation is measured separately. Also the circumsolar sector is given special treatment. The measurements are routinely checked against global radiation measured in parallel by a standard pyranometer, and direct solar radiation by a pyrheliometer. An extensive improvement programme is being planned for the instrument, including the use of a photomultiplier tube to measure the UV part of the spectrum, a diode array for the 400-1100 nm range, and use of a Ge diode for the 1000-1900 nm range. 78 refs., 90 figs., 31 tabs.

  9. AUTOMATIC ARCHITECTURAL STYLE RECOGNITION

    Directory of Open Access Journals (Sweden)

    M. Mathias

    2012-09-01

    Full Text Available Procedural modeling has proven to be a very valuable tool in the field of architecture. In the last few years, research has soared to automatically create procedural models from images. However, current algorithms for this process of inverse procedural modeling rely on the assumption that the building style is known. So far, the determination of the building style has remained a manual task. In this paper, we propose an algorithm which automates this process through classification of architectural styles from facade images. Our classifier first identifies the images containing buildings, then separates individual facades within an image and determines the building style. This information could then be used to initialize the building reconstruction process. We have trained our classifier to distinguish between several distinct architectural styles, namely Flemish Renaissance, Haussmannian and Neoclassical. Finally, we demonstrate our approach on various street-side images.

  10. Automatic speech recognition systems

    Science.gov (United States)

    Catariov, Alexandru

    2005-02-01

This paper presents an analysis of automatic speech recognition (ASR) to establish the state of the art in this field and, eventually, to serve as a starting point for the implementation of a real ASR system. The second chapter of the work describes the structure of a typical speech recognition system and the methods used in each step of the recognition process; in particular, two kinds of speech recognition algorithms are described, namely Dynamic Time Warping (DTW) and Hidden Markov Models (HMM). The work continues with some ASR results, in order to draw conclusions about what needs to be improved and what is most suitable for implementing an ASR system.
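Of the two algorithm families named in the abstract, Dynamic Time Warping is the simpler to illustrate; below is a textbook DTW sketch in Python (not code from the paper), with toy feature sequences standing in for acoustic frames.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic DTW: cumulative cost of the best monotonic alignment of two
    feature sequences (each row is one frame's feature vector)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],              # insertion
                                 cost[i, j - 1],              # deletion
                                 cost[i - 1, j - 1])          # match
    return cost[n, m]

# two toy "utterances": the second is a time-stretched version of the first
t = np.linspace(0, 1, 50)
a = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
b = a[np.clip((np.arange(80) * 50) // 80, 0, 49)]
print("DTW distance:", round(dtw_distance(a, b), 3))
```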

  11. AUTOMATIC FUSION OF PARTIAL RECONSTRUCTIONS

    Directory of Open Access Journals (Sweden)

    A. Wendel

    2012-07-01

Full Text Available Novel image acquisition tools such as micro aerial vehicles (MAVs) in the form of quad- or octo-rotor helicopters support the creation of 3D reconstructions with ground sampling distances below 1 cm. The limitation of aerial photogrammetry to nadir and oblique views at heights of several hundred meters is bypassed, allowing close-up photos of facades and ground features. However, the new acquisition modality also introduces challenges: First, flight space might be restricted in urban areas, which leads to missing views for accurate 3D reconstruction and causes fracturing of large models. This could also happen due to vegetation or simply a change of illumination during image acquisition. Second, accurate geo-referencing of reconstructions is difficult because of shadowed GPS signals in urban areas, so alignment based on GPS information is often not possible. In this paper, we address the automatic fusion of such partial reconstructions. Our approach is largely based on the work of (Wendel et al., 2011a), but does not require an overhead digital surface model for fusion. Instead, we exploit the fact that patch-based semi-dense reconstruction of the fractured model typically results in several point clouds covering overlapping areas, even if sparse feature correspondences cannot be established. We approximate orthographic depth maps for the individual parts and iteratively align them in a global coordinate system. As a result, we are able to generate point clouds which are visually more appealing and serve as an ideal basis for further processing. Mismatches between parts of the fused models depend only on the individual point density, which allows us to achieve a fusion accuracy in the range of ±1 cm on our evaluation dataset.

  12. Quality Assessment of Pre-Classification Maps Generated from Spaceborne/Airborne Multi-Spectral Images by the Satellite Image Automatic Mapper™ and Atmospheric/Topographic Correction™-Spectral Classification Software Products: Part 2 — Experimental Results

    Directory of Open Access Journals (Sweden)

    Andrea Baraldi

    2013-10-01

Full Text Available This paper complies with the Quality Assurance Framework for Earth Observation (QA4EO) international guidelines to provide a metrological/statistically-based quality assessment of the Spectral Classification of surface reflectance signatures (SPECL) secondary product, implemented within the popular Atmospheric/Topographic Correction (ATCOR™) commercial software suite, and of the Satellite Image Automatic Mapper™ (SIAM™) software product, proposed to the remote sensing (RS) community in recent years. The ATCOR™-SPECL and SIAM™ physical model-based expert systems are considered of potential interest to a wide RS audience: in operating mode, they require neither user-defined parameters nor training data samples to map, in near real-time, a spaceborne/airborne multi-spectral (MS) image into a discrete and finite set of (pre-attentional) first-stage spectral-based semi-concepts (e.g., “vegetation”), whose informative content is always equal or inferior to that of target (attentional) second-stage land cover (LC) concepts (e.g., “deciduous forest”). For the sake of simplicity, this paper is split into two: Part 1—Theory and Part 2—Experimental results. The Part 1 provides the present Part 2 with an interdisciplinary terminology and a theoretical background. To comply with the principle of statistics and the QA4EO guidelines discussed in the Part 1, the present Part 2 applies an original adaptation of a novel probability sampling protocol for thematic map quality assessment to the ATCOR™-SPECL and SIAM™ pre-classification maps, generated from three spaceborne/airborne MS test images. Collected metrological/statistically-based quality indicators (QIs) comprise: (i) an original Categorical Variable Pair Similarity Index (CVPSI), capable of estimating the degree of match between a test pre-classification map’s legend and a reference LC map’s legend that do not coincide and must be harmonized (reconciled); (ii) pixel-based Thematic (symbolic

  13. The Perugia University Automatic Observatory

    Science.gov (United States)

    Tosti, Gino; Pascolini, Sergio; Fiorucci, Massimo

    1996-08-01

    In this paper we describe the hardware and software architecture of the Automatic Imaging Telescope (AIT), recently developed at the Perugia University Observatory. It is based on an existing 0.4 m telescope which was transformed into an automatic device. During the night, all the observatory functions are controlled by two PCs in an unattended mode. The system is equipped with an autoguider and the software was designed to allow the automatic reduction of the data at the end of the night. Since October 1994 the AIT has been collecting a large amount of BVR_cI_c data for about 30 blazars. (SECTION: Astronomical Instrumentation)

  14. Electronic amplifiers for automatic compensators

    CERN Document Server

    Polonnikov, D Ye

    1965-01-01

Electronic Amplifiers for Automatic Compensators presents the design and operation of electronic amplifiers for use in automatic control and measuring systems. This book is composed of eight chapters that consider the problems of constructing input and output circuits of amplifiers, suppression of interference and ensuring high sensitivity. This work begins with a survey of the operating principles of electronic amplifiers in automatic compensator systems. The succeeding chapters deal with circuit selection and the calculation and determination of the principal characteristics of amplifiers, as

  15. Cognitive maps

    OpenAIRE

    Kitchin, Rob

    2001-01-01

    A cognitive map is a representative expression of an individual's cognitive map knowledge, where cognitive map knowledge is an individual's knowledge about the spatial and environmental relations of geographic space. For example, a sketch map drawn to show the route between two locations is a cognitive map — a representative expression of the drawer's knowledge of the route between the two locations. This map can be analyzed using classification schemes or quantitatively using spatial statist...

  16. Automatic Nuclei Detection Based on Generalized Laplacian of Gaussian Filters.

    Science.gov (United States)

    Hongming Xu; Cheng Lu; Berendt, Richard; Jha, Naresh; Mandal, Mrinal

    2017-05-01

    Efficient and accurate detection of cell nuclei is an important step toward automatic analysis in histopathology. In this work, we present an automatic technique based on generalized Laplacian of Gaussian (gLoG) filter for nuclei detection in digitized histological images. The proposed technique first generates a bank of gLoG kernels with different scales and orientations and then performs convolution between directional gLoG kernels and the candidate image to obtain a set of response maps. The local maxima of response maps are detected and clustered into different groups by mean-shift algorithm based on their geometrical closeness. The point which has the maximum response in each group is finally selected as the nucleus seed. Experimental results on two datasets show that the proposed technique provides a superior performance in nuclei detection compared to existing techniques.
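A simplified illustration of the multi-scale response-map idea: it uses isotropic Laplacian-of-Gaussian filters from SciPy instead of the oriented gLoG bank, and a plain local-maximum filter instead of mean-shift grouping, so the scales, thresholds and window size below are illustrative only.

```python
import numpy as np
from scipy import ndimage

def detect_blobs(image, sigmas=(3, 5, 7), peak_threshold=0.2):
    """Simplified multi-scale LoG blob detection: build response maps at several
    scales, take the per-pixel maximum, then keep local maxima above a threshold."""
    responses = [sigma**2 * -ndimage.gaussian_laplace(image, sigma) for sigma in sigmas]
    best = np.max(responses, axis=0)                     # strongest response per pixel
    local_max = ndimage.maximum_filter(best, size=9)     # crude non-maximum suppression
    peaks = (best == local_max) & (best > peak_threshold * best.max())
    return np.argwhere(peaks)

# synthetic image with three bright "nuclei" on a dark background
img = np.zeros((128, 128))
for cy, cx in [(30, 40), (70, 90), (100, 25)]:
    yy, xx = np.mgrid[0:128, 0:128]
    img += np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 4.0**2))
print("detected seeds:", detect_blobs(img))
```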

  17. AVID: Automatic Visualization Interface Designer

    National Research Council Canada - National Science Library

    Chuah, Mei

    2000-01-01

    .... Automatic generation offers great flexibility in performing data and information analysis tasks, because new designs are generated on a case by case basis to suit current and changing future needs...

  18. Clothes Dryer Automatic Termination Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    TeGrotenhuis, Ward E.

    2014-10-01

Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  19. A State-of-the-Art Assessment of Automatic Name Placement.

    Science.gov (United States)

    1986-08-01

… develop an automatic name placement system. [11] Balodis, M., "Positioning of typography on maps," Proc. ACSM Fall Convention, Salt Lake City, Utah, Sept. 1983, pp. 28-44. This article deals with the selection of typography for maps. It describes psycho-visual experiments with groups of individuals to … Polytechnic Institute, Troy, NY 12181, May 1984 (also available as Tech. Rept. IPL-TR-063). Balodis, M., "Positioning of typography on maps," Proc. …

  20. Automatic change detection using mobile laser scanning

    Science.gov (United States)

    Hebel, M.; Hammer, M.; Gordon, M.; Arens, M.

    2014-10-01

    Automatic change detection in 3D environments requires the comparison of multi-temporal data. By comparing current data with past data of the same area, changes can be automatically detected and identified. Volumetric changes in the scene hint at suspicious activities like the movement of military vehicles, the application of camouflage nets, or the placement of IEDs, etc. In contrast to broad research activities in remote sensing with optical cameras, this paper addresses the topic using 3D data acquired by mobile laser scanning (MLS). We present a framework for immediate comparison of current MLS data to given 3D reference data. Our method extends the concept of occupancy grids known from robot mapping, which incorporates the sensor positions in the processing of the 3D point clouds. This allows extracting the information that is included in the data acquisition geometry. For each single range measurement, it becomes apparent that an object reflects laser pulses in the measured range distance, i.e., space is occupied at that 3D position. In addition, it is obvious that space is empty along the line of sight between sensor and the reflecting object. Everywhere else, the occupancy of space remains unknown. This approach handles occlusions and changes implicitly, such that the latter are identifiable by conflicts of empty space and occupied space. The presented concept of change detection has been successfully validated in experiments with recorded MLS data streams. Results are shown for test sites at which MLS data were acquired at different time intervals.
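A toy 2D version of the occupancy-conflict idea, assuming known sensor positions and point hits; the cell states, ray sampling step and grid layout below are illustrative and much coarser than the MLS processing described above.

```python
import numpy as np

UNKNOWN, EMPTY, OCCUPIED = 0, 1, 2

def build_grid(sensor, hits, shape, step=0.25):
    """Mark cells along each sensor-to-hit ray as EMPTY and the hit cell as OCCUPIED;
    everything never observed stays UNKNOWN (a coarse 2D stand-in for the 3D case)."""
    grid = np.full(shape, UNKNOWN, dtype=np.uint8)
    for hit in hits:
        vec = np.asarray(hit, float) - sensor
        n = int(np.linalg.norm(vec) / step)
        for t in np.linspace(0.0, 1.0, max(n, 2), endpoint=False):
            r, c = np.floor(sensor + t * vec).astype(int)
            grid[r, c] = EMPTY                           # line of sight is empty space
        grid[tuple(np.floor(hit).astype(int))] = OCCUPIED  # the reflecting object
    return grid

def detect_changes(reference, current):
    """Changes are conflicts: empty space now occupied, or occupied space now empty."""
    appeared = (reference == EMPTY) & (current == OCCUPIED)
    vanished = (reference == OCCUPIED) & (current == EMPTY)
    return appeared, vanished

sensor = np.array([0.0, 0.0])
ref = build_grid(sensor, [(9.5, 3.5), (9.5, 8.5)], (12, 12))    # two objects seen earlier
cur = build_grid(sensor, [(4.75, 1.75), (9.5, 8.5)], (12, 12))  # a closer object now blocks one ray
appeared, vanished = detect_changes(ref, cur)
print("cells that became occupied:", np.argwhere(appeared).tolist())
```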

  1. An automatic image recognition approach

    Directory of Open Access Journals (Sweden)

    Tudor Barbu

    2007-07-01

    Full Text Available Our paper focuses on the graphical analysis domain. We propose an automatic image recognition technique. This approach consists of two main pattern recognition steps. First, it performs an image feature extraction operation on an input image set, using statistical dispersion features. Then, an unsupervised classification process is performed on the previously obtained graphical feature vectors. An automatic region-growing based clustering procedure is proposed and utilized in the classification stage.

  2. Prospects for de-automatization.

    Science.gov (United States)

    Kihlstrom, John F

    2011-06-01

    Research by Raz and his associates has repeatedly found that suggestions for hypnotic agnosia, administered to highly hypnotizable subjects, reduce or even eliminate Stroop interference. The present paper sought unsuccessfully to extend these findings to negative priming in the Stroop task. Nevertheless, the reduction of Stroop interference has broad theoretical implications, both for our understanding of automaticity and for the prospect of de-automatizing cognition in meditation and other altered states of consciousness. Copyright © 2010 Elsevier Inc. All rights reserved.

  3. The automatization of journalistic narrative

    Directory of Open Access Journals (Sweden)

    Naara Normande

    2013-06-01

Full Text Available This paper proposes an initial discussion of the production of automatized journalistic narratives. Despite being a topic discussed on specialized sites and at international conferences in the communication area, the concepts are still underdeveloped in academic research. For this article, we studied the concepts of narrative, databases and algorithms, indicating a theoretical trend that explains these automatized journalistic narratives. As illustrations, we use the cases of the Los Angeles Times, Narrative Science and Automated Insights.

  4. On the automaticity of response inhibition in individuals with alcoholism.

    Science.gov (United States)

    Noël, Xavier; Brevers, Damien; Hanak, Catherine; Kornreich, Charles; Verbanck, Paul; Verbruggen, Frederick

    2016-06-01

Response inhibition is usually considered a hallmark of executive control. However, recent work indicates that stop performance can become associatively mediated ('automatic') over practice. This study investigated automatic response inhibition in sober and recently detoxified individuals with alcoholism. We administered a modified stop-signal task to forty recently detoxified alcoholics and forty healthy participants; the task consisted of a training phase in which a subset of the stimuli was consistently associated with stopping or going, and a test phase in which this mapping was reversed. In the training phase, stop performance improved for the consistent stop stimuli, compared with control stimuli that were not associated with going or stopping. In the test phase, go performance tended to be impaired for old stop stimuli. Combined, these findings support the automatic inhibition hypothesis. Importantly, performance was similar in both groups, which indicates that automatic inhibitory control develops normally in individuals with alcoholism. This finding is specific to individuals with alcoholism without other psychiatric disorders, which is rather atypical and prevents generalization. Personalized stimuli with a stronger affective content should be used in future studies. These results advance our understanding of behavioral inhibition in individuals with alcoholism. Furthermore, intact automatic inhibitory control may be an important element of successful cognitive remediation of addictive behaviors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Schema matching and mapping

    CERN Document Server

    Bellahsene, Zohra; Rahm, Erhard

    2011-01-01

    Requiring heterogeneous information systems to cooperate and communicate has now become crucial, especially in application areas like e-business, Web-based mash-ups and the life sciences. Such cooperating systems have to automatically and efficiently match, exchange, transform and integrate large data sets from different sources and of different structure in order to enable seamless data exchange and transformation. The book edited by Bellahsene, Bonifati and Rahm provides an overview of the ways in which the schema and ontology matching and mapping tools have addressed the above requirements

  6. A Continuously Updated, Global Land Classification Map Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to demonstrate a fully automatic capability for generating a global, high resolution (30 m) land classification map, with continuous updates from...

  7. Concept Mapping.

    Science.gov (United States)

    Callison, Daniel

    2001-01-01

    Explains concept mapping as a heuristic device that is helpful in visualizing the relationships between and among ideas. Highlights include how to begin a map; brainstorming; map applications, including document or information summaries and writing composition; and mind mapping to strengthen note-taking. (LRW)

  8. Concept Maps

    OpenAIRE

    Schwendimann, Beat Adrian

    2014-01-01

    A concept map is a node-link diagram showing the semantic relationships among concepts. The technique for constructing concept maps is called "concept mapping". A concept map consists of nodes, arrows as linking lines, and linking phrases that describe the relationship between nodes. Two nodes connected with a labeled arrow are called a proposition. Concept maps are versatile graphic organizers that can represent many different forms of relationships between concepts. The relationship between...

  9. An automatization of Barnsley's algorithm for the inverse problem of iterated function systems.

    Science.gov (United States)

    Wadströmer, Niclas

    2003-01-01

    We present an automatization of Barnsley's manual algorithm for the solution of the inverse problem of iterated function systems (IFSs). The problem is to retrieve the number of mappings and the parameters of an IFS from a digital binary image approximating the attractor induced by the IFS. M.F. Barnsley et al. described a way to solve manually the inverse problem by identifying the fragments of which the collage is composed, and then computing the parameters of the mappings (Barnsley et al., Proc. Nat. Acad. Sci. USA, vol.83, p.1975-7, 1986; Barnsley, "Fractals Everywhere", Academic, 1988; Barnsley and Hurd, L., "Fractal Image Compression", A.K. Peters, 1992). The automatic algorithm searches through a finite set of points in the parameter space determining a set of affine mappings. The algorithm uses the collage theorem and the Hausdorff metric. The inverse problem of IFSs is related to the image coding of binary images. If the number of mappings and the parameters of an IFS, with not too many mappings, could be obtained from a binary image, then this would give an efficient representation of the image. It is shown that the inverse problem solved by the automatic algorithm has a solution and some experiments show that the automatic algorithm is able to retrieve an IFS, including the number of mappings, from a digital binary image approximating the attractor induced by the IFS.
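A small sketch of how one candidate set of affine maps could be scored against a point set sampled from the attractor, using the collage idea and the Hausdorff metric via SciPy; the exhaustive search over the finite parameter grid described in the abstract is omitted, and the Sierpinski-triangle target is just a convenient test case.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def apply_maps(points, maps):
    """Apply every affine map (A, t) to the point set and stack the images:
    this 'collage' should cover the attractor if the maps are right."""
    return np.vstack([points @ A.T + t for A, t in maps])

def collage_error(points, maps):
    """Symmetric Hausdorff distance between the target set and its collage."""
    collage = apply_maps(points, maps)
    return max(directed_hausdorff(points, collage)[0],
               directed_hausdorff(collage, points)[0])

# target: points approximating the Sierpinski triangle attractor (chaos game)
rng = np.random.default_rng(0)
true_maps = [(0.5 * np.eye(2), np.array(t)) for t in [(0, 0), (0.5, 0), (0.25, 0.5)]]
pts = np.zeros((5000, 2))
for i in range(1, len(pts)):
    A, t = true_maps[rng.integers(3)]
    pts[i] = A @ pts[i - 1] + t

good = collage_error(pts, true_maps)
bad = collage_error(pts, [(0.4 * np.eye(2), np.array(t)) for t in [(0, 0), (0.6, 0), (0.3, 0.6)]])
print(f"collage error, true maps: {good:.3f}   wrong maps: {bad:.3f}")
```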

  10. Harvesting geographic features from heterogeneous raster maps

    Science.gov (United States)

    Chiang, Yao-Yi

    2010-11-01

    Raster maps offer a great deal of geospatial information and are easily accessible compared to other geospatial data. However, harvesting geographic features locked in heterogeneous raster maps to obtain the geospatial information is challenging. This is because of the varying image quality of raster maps (e.g., scanned maps with poor image quality and computer-generated maps with good image quality), the overlapping geographic features in maps, and the typical lack of metadata (e.g., map geocoordinates, map source, and original vector data). Previous work on map processing is typically limited to a specific type of map and often relies on intensive manual work. In contrast, this thesis investigates a general approach that does not rely on any prior knowledge and requires minimal user effort to process heterogeneous raster maps. This approach includes automatic and supervised techniques to process raster maps for separating individual layers of geographic features from the maps and recognizing geographic features in the separated layers (i.e., detecting road intersections, generating and vectorizing road geometry, and recognizing text labels). The automatic technique eliminates user intervention by exploiting common map properties of how road lines and text labels are drawn in raster maps. For example, the road lines are elongated linear objects and the characters are small connected-objects. The supervised technique utilizes labels of road and text areas to handle complex raster maps, or maps with poor image quality, and can process a variety of raster maps with minimal user input. The results show that the general approach can handle raster maps with varying map complexity, color usage, and image quality. By matching extracted road intersections to another geospatial dataset, we can identify the geocoordinates of a raster map and further align the raster map, separated feature layers from the map, and recognized features from the layers with the geospatial

  11. Automatic detection of artifacts in converted S3D video

    Science.gov (United States)

    Bokov, Alexander; Vatolin, Dmitriy; Zachesov, Anton; Belous, Alexander; Erofeev, Mikhail

    2014-03-01

In this paper we present algorithms for automatically detecting issues specific to converted S3D content. When a depth-image-based rendering approach produces a stereoscopic image, the quality of the result depends on both the depth maps and the warping algorithms. The most common problem with converted S3D video is edge-sharpness mismatch. This artifact may appear owing to depth-map blurriness at semitransparent edges: after warping, the object boundary becomes sharper in one view and blurrier in the other, yielding binocular rivalry. To detect this problem we estimate the disparity map, extract boundaries with noticeable differences, and analyze edge-sharpness correspondence between views. We pay additional attention to cases involving a complex background and large occlusions. Another problem is detection of scenes that lack depth volume: we present algorithms for detecting flat scenes and scenes with flat foreground objects. To identify these problems we analyze the features of the RGB image as well as uniform areas in the depth map. Testing of our algorithms involved examining 10 Blu-ray 3D releases with converted S3D content, including Clash of the Titans, The Avengers, and The Chronicles of Narnia: The Voyage of the Dawn Treader. The algorithms we present enable improved automatic quality assessment during the production stage.
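A very rough sketch of comparing edge sharpness between two views via gradient magnitudes; it assumes the views are already aligned (zero disparity) and skips the disparity estimation and occlusion handling that the paper's detector relies on.

```python
import numpy as np
from scipy import ndimage

def edge_sharpness_map(image, edge_quantile=0.9):
    """Gradient magnitude restricted to the strongest edges of one view."""
    gx, gy = ndimage.sobel(image, axis=1), ndimage.sobel(image, axis=0)
    mag = np.hypot(gx, gy)
    edges = mag > np.quantile(mag, edge_quantile)
    return mag, edges

def sharpness_mismatch(left, right):
    """Mean sharpness ratio over pixels that are edges in either view; values far
    from 1.0 hint at the edge-sharpness mismatch described above."""
    mag_l, edges_l = edge_sharpness_map(left)
    mag_r, edges_r = edge_sharpness_map(right)
    mask = edges_l | edges_r
    return (mag_l[mask].mean() + 1e-9) / (mag_r[mask].mean() + 1e-9)

# toy views: the "right" view is blurred, mimicking a blurrier warped boundary
left = np.zeros((100, 100))
left[:, 50:] = 1.0
right = ndimage.gaussian_filter(left, sigma=2.0)
print("sharpness ratio left/right:", round(sharpness_mismatch(left, right), 2))
```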

  12. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

Annual Review in Automatic Programming focuses on the techniques of automatic programming used with digital computers. Topics covered range from the design of machine-independent programming languages to the use of recursive procedures in ALGOL 60. A multi-pass translation scheme for ALGOL 60 is described, along with some commercial source languages. The structure and use of the syntax-directed compiler is also considered. Comprised of 12 chapters, this volume begins with a discussion on the basic ideas involved in the description of a computing process as a program for a computer, expressed in

  13. Algorithms for skiascopy measurement automatization

    Science.gov (United States)

Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

An automatic dynamic infrared retinoscope was developed, which allows the procedure to run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic accommodative state analysis is developed based on the intensity changes of the fundus reflex.

  14. Traduction automatique et terminologie automatique (Automatic Translation and Automatic Terminology

    Science.gov (United States)

    Dansereau, Jules

    1978-01-01

    An exposition of reasons why a system of automatic translation could not use a terminology bank except as a source of information. The fundamental difference between the two tools is explained and examples of translation and mistranslation are given as evidence of the limits and possibilities of each process. (Text is in French.) (AMH)

  15. autoimage: Multiple Heat Maps for Projected Coordinates.

    Science.gov (United States)

    French, Joshua P

    2017-01-01

Heat maps are commonly used to display the spatial distribution of a response observed on a two-dimensional grid. The autoimage package provides convenient functions for constructing multiple heat maps in a unified, seamless way, particularly when working with projected coordinates. The autoimage package natively supports: 1. automatic inclusion of a color scale with the plotted image, 2. construction of heat maps for responses observed on regular or irregular grids, as well as non-gridded data, 3. construction of a matrix of heat maps with a common color scale, 4. construction of a matrix of heat maps with individual color scales, 5. projecting coordinates before plotting, 6. easily adding geographic borders, points, and other features to the heat maps. After comparing the autoimage package's capabilities for constructing heat maps to those of existing tools, a carefully selected set of examples is used to highlight the capabilities of the autoimage package.

  16. Testing Metadata Existence of Web Map Services

    Directory of Open Access Journals (Sweden)

    Jan Růžička

    2011-05-01

Full Text Available For a general user it is quite common to use data sources available on the WWW. Almost all GIS software allows the use of data sources available via the Web Map Service (ISO/OGC standard interface). The opportunity to use different sources and combine them brings a lot of problems that have been discussed many times at conferences or in journal papers. One of the problems is the non-existence of metadata for published sources. The question was: were the discussions effective? The article is partly based on a comparison of the metadata situation between the years 2007 and 2010. The second part of the article is focused only on the 2010 situation. The paper is written in the context of research on intelligent map systems, which can be used for automatic or semi-automatic map creation or map evaluation.
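A small sketch of how the presence of basic metadata in a WMS GetCapabilities response could be checked; the endpoint URL is a placeholder, and the handful of element names tested here is an illustrative subset rather than the full set examined in the article.

```python
import requests
import xml.etree.ElementTree as ET

def check_wms_metadata(base_url):
    """Fetch GetCapabilities and report which basic metadata elements are present.
    Only a few elements are checked; a full audit would follow the ISO/OGC specs."""
    params = {"SERVICE": "WMS", "REQUEST": "GetCapabilities"}
    xml_text = requests.get(base_url, params=params, timeout=30).text
    root = ET.fromstring(xml_text)
    wanted = ["Abstract", "ContactInformation", "AccessConstraints", "MetadataURL"]
    found = {tag: False for tag in wanted}
    for element in root.iter():
        tag = element.tag.split("}")[-1]          # drop any XML namespace prefix
        if tag in found:
            found[tag] = True
    return found

# hypothetical endpoint; substitute a real WMS base URL
print(check_wms_metadata("https://example.org/geoserver/wms"))
```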

  17. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    Directory of Open Access Journals (Sweden)

    Ed Baker

    2013-09-01

Full Text Available Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated field in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.
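The module itself is a Drupal (PHP) component, but the extract-and-map idea can be sketched in Python with Pillow; the EXIF tags read here and the target field names on the Scratchpads side are assumptions for illustration only.

```python
from PIL import Image, ExifTags

# hypothetical mapping from embedded EXIF tags to target form fields
TAG_TO_FIELD = {
    "Artist": "creator",
    "Copyright": "licence",
    "DateTime": "date_captured",
    "ImageDescription": "caption",
}

def extract_mapped_metadata(path):
    """Read EXIF from an image and translate known tags to target field names."""
    exif = Image.open(path).getexif()
    named = {ExifTags.TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}
    return {field: named[tag] for tag, field in TAG_TO_FIELD.items() if tag in named}

# example bulk use: one record per uploaded file (placeholder filenames)
for filename in ["specimen_001.jpg", "specimen_002.jpg"]:
    try:
        print(filename, extract_mapped_metadata(filename))
    except FileNotFoundError:
        print(filename, "missing (placeholder path)")
```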

  18. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal.

    Science.gov (United States)

    Baker, Ed

    2013-01-01

Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated field in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.

  19. CEPH maps.

    Science.gov (United States)

    Cann, H M

    1992-06-01

    There are CEPH genetic maps on each homologous human chromosome pair. Genotypes for these maps have been generated in 88 laboratories that receive DNA from a reference panel of large nuclear pedigrees/families supplied by the Centre d'Etude du Polymorphisme Humain. These maps serve as useful tools for the localization of both disease genes and other genes of interest.

  20. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
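INTLAB is a MATLAB toolbox; the same idea of propagating bounds through a formula can be sketched with a tiny hand-rolled interval class in Python. The resistor example and the 1% tolerances below are illustrative, not taken from the article.

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic needed for simple error analysis."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (lo if hi is None else hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def __truediv__(self, other):
        assert other.lo > 0 or other.hi < 0, "division by an interval containing zero"
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)
    def __repr__(self):
        return f"[{self.lo:.6g}, {self.hi:.6g}]"

# example: parallel resistance R = R1*R2/(R1+R2) with roughly 1% measurement uncertainty
R1 = Interval(99.0, 101.0)
R2 = Interval(198.0, 202.0)
print("parallel resistance bounds:", R1 * R2 / (R1 + R2))
```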

  1. Automatic animacy classification for Dutch

    NARCIS (Netherlands)

    Bloem, J.; Bouma, G.

    2013-01-01

    We present an automatic animacy classifier for Dutch that can determine the animacy status of nouns -- how alive the noun's referent is (human, inanimate, etc.). Animacy is a semantic property that has been shown to play a role in human sentence processing, felicity and grammaticality. Although

  2. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim

    1997-01-01

    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The used strategy is a combination of two methods: the maximum correlation coefficient and the correlation in the subpixel range...

  3. The automatic lumber planing mill

    Science.gov (United States)

    Peter Koch

    1957-01-01

It is probable that a truly automatic planing operation could be devised if some of the variables commonly present in the mill-run lumber were eliminated and the remaining variables kept under close control. This paper will deal with the more general situation faced by most lumber manufacturing plants. In other words, it will be assumed that the incoming lumber has...

  4. Automatic quantification of iris color

    DEFF Research Database (Denmark)

    Christoffersen, S.; Harder, Stine; Andersen, J. D.

    2012-01-01

    An automatic algorithm to quantify the eye colour and structural information from standard hi-resolution photos of the human iris has been developed. Initially, the major structures in the eye region are identified including the pupil, iris, sclera, and eyelashes. Based on this segmentation, the ...

  5. Automatic Identification of Metaphoric Utterances

    Science.gov (United States)

    Dunn, Jonathan Edwin

    2013-01-01

    This dissertation analyzes the problem of metaphor identification in linguistic and computational semantics, considering both manual and automatic approaches. It describes a manual approach to metaphor identification, the Metaphoricity Measurement Procedure (MMP), and compares this approach with other manual approaches. The dissertation then…

  6. Automatic agar tray inoculation device

    Science.gov (United States)

    Wilkins, J. R.; Mills, S. M.

    1972-01-01

    Automatic agar tray inoculation device is simple in design and foolproof in operation. It employs either conventional inoculating loop or cotton swab for uniform inoculation of agar media, and it allows technician to carry on with other activities while tray is being inoculated.

  7. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpablo

    2003-01-01

    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we demons...

  8. Automatically Preparing Safe SQL Queries

    Science.gov (United States)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
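The cited work is a source-to-source transformation for legacy web applications; the sketch below only contrasts the unsafe concatenated query with its parameterized (PREPARE-style) equivalent, using Python's sqlite3 as a stand-in driver and a made-up users table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"          # classic injection payload

# Unsafe: attacker-controlled text is spliced into the SQL string itself.
unsafe_sql = "SELECT role FROM users WHERE name = '" + user_input + "'"
print("unsafe query returns:", conn.execute(unsafe_sql).fetchall())   # leaks every row

# Safe: the query shape is fixed up front and the input is bound as pure data,
# which is what rewriting legacy code to use PREPARE-style statements achieves.
safe_sql = "SELECT role FROM users WHERE name = ?"
print("parameterized query returns:", conn.execute(safe_sql, (user_input,)).fetchall())
```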

  9. Automatic affective associations and psychopathology

    NARCIS (Netherlands)

    Huijding, Jorg

    2006-01-01

The aim of this thesis was to examine the theoretical and clinical relevance of distinguishing between (dysfunctional) automatic and more deliberated affective associations. This interest was sired by the development of so-called 'implicit measures', better referred to as indirect measures of

  10. Automatic evaluation of test answers

    Directory of Open Access Journals (Sweden)

    Annalina Fabrizio

    2006-01-01

Full Text Available Presentation of a method to allow automatic evaluation of the open-ended responses provided in the context of evaluation tests. Besides the description of the theoretical framework, the authors describe the implementation they have developed to address the problem and the testing of the solution.

  11. Topographic mapping

    Science.gov (United States)

    ,

    2008-01-01

    The U.S. Geological Survey (USGS) produced its first topographic map in 1879, the same year it was established. Today, more than 100 years and millions of map copies later, topographic mapping is still a central activity for the USGS. The topographic map remains an indispensable tool for government, science, industry, and leisure. Much has changed since early topographers traveled the unsettled West and carefully plotted the first USGS maps by hand. Advances in survey techniques, instrumentation, and design and printing technologies, as well as the use of aerial photography and satellite data, have dramatically improved mapping coverage, accuracy, and efficiency. Yet cartography, the art and science of mapping, may never before have undergone change more profound than today.

  12. Automatisms: bridging clinical neurology with criminal law.

    Science.gov (United States)

    Rolnick, Joshua; Parvizi, Josef

    2011-03-01

    The law, like neurology, grapples with the relationship between disease states and behavior. Sometimes, the two disciplines share the same terminology, such as automatism. In law, the "automatism defense" is a claim that action was involuntary or performed while unconscious. Someone charged with a serious crime can acknowledge committing the act and yet may go free if, relying on the expert testimony of clinicians, the court determines that the act of crime was committed in a state of automatism. In this review, we explore the relationship between the use of automatism in the legal and clinical literature. We close by addressing several issues raised by the automatism defense: semantic ambiguity surrounding the term automatism, the presence or absence of consciousness during automatisms, and the methodological obstacles that have hindered the study of cognition during automatisms. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Automatic differentiation algorithms in model analysis

    NARCIS (Netherlands)

    Huiskes, M.J.

    2002-01-01

    Title: Automatic differentiation algorithms in model analysis
    Author: M.J. Huiskes
    Date: 19 March, 2002

    In this thesis automatic differentiation algorithms and derivative-based methods

  14. Diffraction phase microscopy realized with an automatic digital pinhole

    Science.gov (United States)

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Zhang, Zhimin; Liu, Xu

    2017-12-01

We report a novel approach to diffraction phase microscopy (DPM) with automatic pinhole alignment. The pinhole, which serves as a spatial low-pass filter to generate a uniform reference beam, is made out of a liquid crystal display (LCD) device that allows for electrical control. We have made DPM more accessible to users, while maintaining high phase measurement sensitivity and accuracy, by exploring low-cost optical components and replacing the tedious pinhole alignment process with an automatic pinhole optical alignment procedure. Due to its flexibility in modifying the size and shape, this LCD device serves as a universal filter, requiring no future replacement. Moreover, a graphical user interface for real-time phase imaging has also been developed using a USB CMOS camera. Experimental results of height maps of a bead sample and live red blood cell (RBC) dynamics are also presented, making this system ready for broad adoption in biological imaging and material metrology.

  15. Self-Compassion and Automatic Thoughts

    Science.gov (United States)

    Akin, Ahmet

    2012-01-01

    The aim of this research is to examine the relationships between self-compassion and automatic thoughts. Participants were 299 university students. In this study, the Self-compassion Scale and the Automatic Thoughts Questionnaire were used. The relationships between self-compassion and automatic thoughts were examined using correlation analysis…

  16. An Automatic Proof of Euler's Formula

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2005-05-01

Full Text Available In this information age, everything is digitalized. The encoding of functions and the automatic proof of functions are important. This paper discusses the automatic calculation of Taylor expansion coefficients; as an example, the method can be applied to prove Euler's formula automatically.

  17. Adding Automatic Evaluation to Interactive Virtual Labs

    Science.gov (United States)

    Farias, Gonzalo; Muñoz de la Peña, David; Gómez-Estern, Fabio; De la Torre, Luis; Sánchez, Carlos; Dormido, Sebastián

    2016-01-01

    Automatic evaluation is a challenging field that has been addressed by the academic community in order to reduce the assessment workload. In this work we present a new element for the authoring tool Easy Java Simulations (EJS). This element, which is named automatic evaluation element (AEE), provides automatic evaluation to virtual and remote…

  18. Automatically scramming nuclear reactor system

    Science.gov (United States)

    Ougouag, Abderrafi M.; Schultz, Richard R.; Terry, William K.

    2004-10-12

    An automatically scramming nuclear reactor system. One embodiment comprises a core having a coolant inlet end and a coolant outlet end. A cooling system operatively associated with the core provides coolant to the coolant inlet end and removes heated coolant from the coolant outlet end, thus maintaining a pressure differential therebetween during a normal operating condition of the nuclear reactor system. A guide tube is positioned within the core with a first end of the guide tube in fluid communication with the coolant inlet end of the core, and a second end of the guide tube in fluid communication with the coolant outlet end of the core. A control element is positioned within the guide tube and is movable therein between upper and lower positions, and automatically falls under the action of gravity to the lower position when the pressure differential drops below a safe pressure differential.

  19. Automatic design of magazine covers

    Science.gov (United States)

    Jahanian, Ali; Liu, Jerry; Tretter, Daniel R.; Lin, Qian; Damera-Venkata, Niranjan; O'Brien-Strain, Eamonn; Lee, Seungyon; Fan, Jian; Allebach, Jan P.

    2012-03-01

    In this paper, we propose a system for automatic design of magazine covers that quantifies a number of concepts from art and aesthetics. Our solution to automatic design of this type of media has been shaped by input from professional designers, magazine art directors and editorial boards, and journalists. Consequently, a number of principles in design and rules in designing magazine covers are delineated. Several techniques are derived and employed in order to quantify and implement these principles and rules in the format of a software framework. At this stage, our framework divides the task of design into three main modules: layout of magazine cover elements, choice of color for masthead and cover lines, and typography of cover lines. Feedback from professional designers on our designs suggests that our results are congruent with their intuition.

  20. Physics of Automatic Target Recognition

    CERN Document Server

    Sadjadi, Firooz

    2007-01-01

    Physics of Automatic Target Recognition addresses the fundamental physical bases of sensing, and information extraction in the state-of-the art automatic target recognition field. It explores both passive and active multispectral sensing, polarimetric diversity, complex signature exploitation, sensor and processing adaptation, transformation of electromagnetic and acoustic waves in their interactions with targets, background clutter, transmission media, and sensing elements. The general inverse scattering, and advanced signal processing techniques and scientific evaluation methodologies being used in this multi disciplinary field will be part of this exposition. The issues of modeling of target signatures in various spectral modalities, LADAR, IR, SAR, high resolution radar, acoustic, seismic, visible, hyperspectral, in diverse geometric aspects will be addressed. The methods for signal processing and classification will cover concepts such as sensor adaptive and artificial neural networks, time reversal filt...

  1. 實徵研究/基於自動查詢語句擴展之主題地圖智慧型新聞搜尋引擎/陳志銘;張美華;邱偉嘉 │ An Intelligent News Search Engine with Topic Map User Interface Based on Automatic Query Expansion / Chih-Ming Chen; Mei-Hua Chang; Wei-Chia Chiu

    Directory of Open Access Journals (Sweden)

    陳志銘、張美華、邱偉嘉 Chih-Ming Chen; Mei-Hua Chang; Wei-Chia Chiu

    2008-10-01

Full Text Available With the rapid development of computer and Internet techniques, a number of news aggregator sites containing large numbers of news articles have appeared on the Internet, leading to advanced information retrieval requirements. In particular, some news sites, such as Google News, provide automatically classified news information and a keyword-based search mechanism for readers to retrieve news events of interest. However, most news sites currently do not support retrieving the developing clues of a news event or displaying the search results as a topic map with a visualization user interface. Therefore, this study presents a novel news search engine with an automatic user query expansion mechanism and a friendly topic map user interface, based on an automatic generation scheme for a news ontology constructed by Modified Hopfield Neural Networks. The experimental

  2. Automatic renal segmentation for MR urography using 3D-GrabCut and random forests.

    Science.gov (United States)

    Yoruk, Umit; Hargreaves, Brian A; Vasanawala, Shreyas S

    2018-03-01

    To introduce and evaluate a fully automated renal segmentation technique for glomerular filtration rate (GFR) assessment in children. An image segmentation method based on iterative graph cuts (GrabCut) was modified to work on time-resolved 3D dynamic contrast-enhanced MRI data sets. A random forest classifier was trained to further segment the renal tissue into cortex, medulla, and the collecting system. The algorithm was tested on 26 subjects and the segmentation results were compared to the manually drawn segmentation maps using the F1-score metric. A two-compartment model was used to estimate the GFR of each subject using both automatically and manually generated segmentation maps. Segmentation maps generated automatically showed high similarity to the manually drawn maps for the whole-kidney (F1 = 0.93) and renal cortex (F1 = 0.86). GFR estimations using whole-kidney segmentation maps from the automatic method were highly correlated (Spearman's ρ = 0.99) to the GFR values obtained from manual maps. The mean GFR estimation error of the automatic method was 2.98 ± 0.66% with an average segmentation time of 45 s per patient. The automatic segmentation method performs as well as the manual segmentation for GFR estimation and reduces the segmentation time from several hours to 45 s. Magn Reson Med 79:1696-1707, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
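A small sketch of the F1-score (Dice) overlap used to compare automatic and manual masks; the segmentation pipeline itself (3D GrabCut plus the random forest tissue classifier) is not reproduced, and the toy masks are illustrative.

```python
import numpy as np

def f1_overlap(auto_mask, manual_mask):
    """F1-score (equivalently the Dice coefficient) between two binary masks."""
    auto_mask = auto_mask.astype(bool)
    manual_mask = manual_mask.astype(bool)
    tp = np.logical_and(auto_mask, manual_mask).sum()
    fp = np.logical_and(auto_mask, ~manual_mask).sum()
    fn = np.logical_and(~auto_mask, manual_mask).sum()
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# toy example: an automatic mask that slightly over-segments the manual one
manual = np.zeros((64, 64), dtype=bool)
manual[20:40, 20:40] = True
auto = np.zeros_like(manual)
auto[19:42, 19:41] = True
print("whole-kidney F1:", round(f1_overlap(auto, manual), 3))
```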

  3. Automatic translation among spoken languages

    Science.gov (United States)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  4. Automatic Detection of Fake News

    OpenAIRE

    Pérez-Rosas, Verónica; Kleinberg, Bennett; Lefevre, Alexandra; Mihalcea, Rada

    2017-01-01

    The proliferation of misleading information in everyday access media outlets such as social media feeds, news blogs, and online newspapers have made it challenging to identify trustworthy news sources, thus increasing the need for computational tools able to provide insights into the reliability of online content. In this paper, we focus on the automatic identification of fake content in online news. Our contribution is twofold. First, we introduce two novel datasets for the task of fake news...

  5. Annual review in automatic programming

    CERN Document Server

    Halpern, Mark I; Bolliet, Louis

    2014-01-01

    Computer Science and Technology and their Application is an eight-chapter book that first presents a tutorial on database organization. Subsequent chapters describe the general concepts of Simula 67 programming language; incremental compilation and conversational interpretation; dynamic syntax; the ALGOL 68. Other chapters discuss the general purpose conversational system for graphical programming and automatic theorem proving based on resolution. A survey of extensible programming language is also shown.

  6. Automatic Summarization of Online Debates

    OpenAIRE

    Sanchan, Nattapong; Aker, Ahmet; Bontcheva, Kalina

    2017-01-01

    Debate summarization is one of the novel and challenging research areas in automatic text summarization which has been largely unexplored. In this paper, we develop a debate summarization pipeline to summarize key topics which are discussed or argued in the two opposing sides of online debates. We view that the generation of debate summaries can be achieved by clustering, cluster labeling, and visualization. In our work, we investigate two different clustering approaches for the generation of...

  7. Automatic computation of transfer functions

    Science.gov (United States)

    Atcitty, Stanley; Watson, Luke Dale

    2015-04-14

    Technologies pertaining to the automatic computation of transfer functions for a physical system are described herein. The physical system is one of an electrical system, a mechanical system, an electromechanical system, an electrochemical system, or an electromagnetic system. A netlist in the form of a matrix comprises data that is indicative of elements in the physical system, values for the elements in the physical system, and structure of the physical system. Transfer functions for the physical system are computed based upon the netlist.
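
    The record does not spell out how the transfer functions are derived from the netlist matrix; one common route is symbolic nodal analysis. A minimal sketch for a single-node RC low-pass filter using SymPy (the circuit, symbols and node equation are illustrative, not the patented method):

```python
import sympy as sp

s, R, C = sp.symbols('s R C', positive=True)
V_in, V_out = sp.symbols('V_in V_out')

# Nodal (KCL) equation at the output node of an RC low-pass filter:
# current through R from the input plus current into C equals zero.
kcl = sp.Eq((V_out - V_in) / R + s * C * V_out, 0)

# Solve for V_out and form the transfer function H(s) = V_out / V_in.
H = sp.simplify(sp.solve(kcl, V_out)[0] / V_in)
print(H)  # 1/(C*R*s + 1)
```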

  8. Group Dynamics in Automatic Imitation.

    Directory of Open Access Journals (Sweden)

    Ilka H Gleibs

    Full Text Available Imitation, matching the configural body movements of another individual, plays a crucial part in social interaction. We investigated whether automatic imitation is not only influenced by who we imitate (ingroup vs. outgroup member) but also by the nature of an expected interaction situation (competitive vs. cooperative). In line with assumptions from Social Identity Theory, we predicted that both social group membership and the expected situation impact on the level of automatic imitation. We adopted a 2 (group membership of target: ingroup, outgroup) x 2 (situation: cooperative, competitive) design. The dependent variable was the degree to which participants imitated the target in a reaction time automatic imitation task. 99 female students from two British universities participated. We found a significant two-way interaction on the imitation effect. When interacting in expectation of cooperation, imitation was stronger for an ingroup target compared to an outgroup target. However, this was not the case in the competitive condition, where imitation did not differ between ingroup and outgroup targets. This demonstrates that the goal structure of an expected interaction will determine the extent to which intergroup relations influence imitation, supporting a social identity approach.

  9. Social influence effects on automatic racial prejudice.

    Science.gov (United States)

    Lowery, B S; Hardin, C D; Sinclair, S

    2001-11-01

    Although most research on the control of automatic prejudice has focused on the efficacy of deliberate attempts to suppress or correct for stereotyping, the reported experiments tested the hypothesis that automatic racial prejudice is subject to common social influence. In experiments involving actual interethnic contact, both tacit and expressed social influence reduced the expression of automatic prejudice, as assessed by two different measures of automatic attitudes. Moreover, the automatic social tuning effect depended on participant ethnicity. European Americans (but not Asian Americans) exhibited less automatic prejudice in the presence of a Black experimenter than a White experimenter (Experiments 2 and 4), although both groups exhibited reduced automatic prejudice when instructed to avoid prejudice (Experiment 3). Results are consistent with shared reality theory, which postulates that social regulation is central to social cognition.

  10. Real-time automatic interpolation of ambient gamma dose rates from the Dutch radioactivity monitoring network

    NARCIS (Netherlands)

    Hiemstra, P.H.; Pebesma, E.J.; Twenhöfel, C.J.W.; Heuvelink, G.B.M.

    2009-01-01

    Detection of radiological accidents and monitoring the spread of the contamination is of great importance. Following the Chernobyl accident many European countries have installed monitoring networks to perform this task. Real-time availability of automatically interpolated maps showing the spread of
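
    The record does not state which interpolation algorithm the Dutch network uses beyond it being automatic (kriging-type methods are typical). As a simple stand-in, an inverse-distance-weighting sketch that grids sparse station readings; the station coordinates and dose-rate values are made up:

```python
import numpy as np

def idw_grid(xy, values, grid_x, grid_y, power=2.0):
    """Inverse-distance-weighted interpolation of point readings onto a grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    d = np.sqrt((gx[..., None] - xy[:, 0])**2 + (gy[..., None] - xy[:, 1])**2)
    d = np.maximum(d, 1e-9)                       # avoid division by zero at stations
    w = 1.0 / d**power
    return (w * values).sum(axis=-1) / w.sum(axis=-1)

# Hypothetical gamma dose-rate stations (x, y) and readings in nSv/h.
stations = np.array([[0.1, 0.2], [0.8, 0.3], [0.4, 0.9], [0.6, 0.6]])
readings = np.array([80.0, 95.0, 120.0, 300.0])   # one elevated value
field = idw_grid(stations, readings, np.linspace(0, 1, 50), np.linspace(0, 1, 50))
print(field.min(), field.max())
```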

  11. A simple semi-automatic approach for land cover classification from multispectral remote sensing imagery.

    Science.gov (United States)

    Jiang, Dong; Huang, Yaohuan; Zhuang, Dafang; Zhu, Yunqiang; Xu, Xinliang; Ren, Hongyan

    2012-01-01

    Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use using multispectral remote sensing images based on change detection and a semi-supervised classifier. The satellite image can be automatically classified using only the prior land cover map and existing images; therefore human involvement is reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images of 2009 with 30 m spatial resolution, the areas were classified into five main types of land cover based on previous land cover data and spectral features. The classified maps agreed well with the validation data, with a Kappa value of 0.79 and proportional area biases of less than 6%. This study proposed a simple semi-automatic approach for land cover classification that uses prior maps and achieves satisfactory accuracy, integrating the accuracy of visual interpretation with the performance of automatic classification methods. The method can be used for land cover mapping in areas lacking ground reference information or for conveniently identifying regions of rapid land cover change (such as rapid urbanization).
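
    The Kappa agreement reported above can be computed from a classified map and reference samples with scikit-learn. A minimal sketch, assuming both are available as label arrays; the class codes and the simulated disagreement rate are illustrative:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical labels for validation pixels: 0=water, 1=vegetation, 2=built-up,
# 3=bare soil, 4=other (five classes, as in the study).
reference = np.random.default_rng(0).integers(0, 5, size=1000)
predicted = reference.copy()
flip = np.random.default_rng(1).random(1000) < 0.15   # simulate ~15% disagreement
predicted[flip] = (predicted[flip] + 1) % 5

print(f"Kappa = {cohen_kappa_score(reference, predicted):.2f}")
print(confusion_matrix(reference, predicted))
```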

  12. Automatic Georeferencing of Aerial Images by Means of Topographic Database Information

    DEFF Research Database (Denmark)

    Høhle, Joachim

    The book includes a preface and four articles which deal with the automatic georeferencing of aerial images. The articles are the written contributions of a seminar held at Aalborg University in October 2002. The georeferencing or orientation of aerial images is the first step in mapping tasks l...

  13. A simple semi-automatic approach for land cover classification from multispectral remote sensing imagery.

    Directory of Open Access Journals (Sweden)

    Dong Jiang

    Full Text Available Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use using multispectral remote sensing images based on change detection and a semi-supervised classifier. The satellite image can be automatically classified using only the prior land cover map and existing images; therefore human involvement is reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images of 2009 with 30 m spatial resolution, the areas were classified into five main types of land cover based on previous land cover data and spectral features. The classified maps agreed well with the validation data, with a Kappa value of 0.79 and proportional area biases of less than 6%. This study proposed a simple semi-automatic approach for land cover classification that uses prior maps and achieves satisfactory accuracy, integrating the accuracy of visual interpretation with the performance of automatic classification methods. The method can be used for land cover mapping in areas lacking ground reference information or for conveniently identifying regions of rapid land cover change (such as rapid urbanization).

  14. MODULEWRITER: a program for automatic generation of database interfaces.

    Science.gov (United States)

    Zheng, Christina L; Fana, Fariba; Udupi, Poornaprajna V; Gribskov, Michael

    2003-05-01

    MODULEWRITER is a PERL object relational mapping (ORM) tool that automatically generates database specific application programming interfaces (APIs) for SQL databases. The APIs consist of a package of modules providing access to each table row and column. Methods for retrieving, updating and saving entries are provided, as well as other generally useful methods (such as retrieval of the highest numbered entry in a table). MODULEWRITER provides for the inclusion of user-written code, which can be preserved across multiple runs of the MODULEWRITER program.
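
    MODULEWRITER itself is a Perl tool; the underlying idea of generating a per-table access layer from a database schema can be sketched in Python with sqlite3. The table introspection and the generated class and method names below are purely illustrative, not MODULEWRITER's actual output:

```python
import sqlite3

def generate_api(db_path: str) -> str:
    """Emit a tiny accessor class per table, in the spirit of ORM code generation."""
    con = sqlite3.connect(db_path)
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    src = []
    for table in tables:
        cols = [r[1] for r in con.execute(f"PRAGMA table_info({table})")]
        src.append(f"class {table.capitalize()}:")
        src.append(f"    COLUMNS = {cols!r}")
        src.append("    @staticmethod")
        src.append("    def fetch_all(con):")
        src.append(f"        return con.execute('SELECT * FROM {table}').fetchall()")
        src.append("")
    con.close()
    return "\n".join(src)

# Usage: print(generate_api('experiments.db')) and save the output as a module.
```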

  15. Analysis of Phonetic Transcriptions for Danish Automatic Speech Recognition

    DEFF Research Database (Denmark)

    Kirkedal, Andreas Søeborg

    2013-01-01

    Automatic speech recognition (ASR) relies on three resources: audio, orthographic transcriptions and a pronunciation dictionary. The dictionary or lexicon maps orthographic words to sequences of phones or phonemes that represent the pronunciation of the corresponding word. The quality of a speech recognition system depends heavily on the dictionary and the transcriptions therein. This paper presents an analysis of phonetic/phonemic features that are salient for current Danish ASR systems. This preliminary study consists of a series of experiments using an ASR system trained on the DK-PAROLE corpus...

  16. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, the uniform corrosion is characterized by texture attributes extracted from co-occurrence matrix and the Self Organizing Mapping (SOM) clustering algorithm. We present a technique for automatic inspection of oil and gas storage tanks and pipelines of petrochemical industries without disturbing their properties and performance. Experimental results are promising and encourage the possibility of using this methodology in designing trustful and robust early failure detection systems. (author)
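
    A rough sketch of the texture pipeline described above, i.e. grey-level co-occurrence features per image patch followed by unsupervised clustering, using scikit-image and scikit-learn. KMeans stands in here for the SOM used in the paper, and the patch size, texture properties and cluster count are illustrative:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def patch_features(patch: np.ndarray) -> list:
    """Haralick-style attributes from the co-occurrence matrix of one 8-bit patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

def cluster_patches(image: np.ndarray, size: int = 32, n_clusters: int = 2):
    """Split a greyscale uint8 image into patches and cluster them by texture."""
    feats, coords = [], []
    for i in range(0, image.shape[0] - size + 1, size):
        for j in range(0, image.shape[1] - size + 1, size):
            feats.append(patch_features(image[i:i + size, j:j + size]))
            coords.append((i, j))
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    return coords, labels  # e.g. one cluster ~ corroded texture, the other ~ sound metal

# Usage: coords, labels = cluster_patches(gray_uint8_image)
```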

  17. Automatically ordering events and times in text

    CERN Document Server

    Derczynski, Leon R A

    2017-01-01

    The book offers a detailed guide to temporal ordering, exploring open problems in the field and providing solutions and extensive analysis. It addresses the challenge of automatically ordering events and times in text. Aided by TimeML, it also describes and presents concepts relating to time in easy-to-compute terms. Working out the order that events and times happen has proven difficult for computers, since the language used to discuss time can be vague and complex. Mapping out these concepts for a computational system, which does not have its own inherent idea of time, is, unsurprisingly, tough. Solving this problem enables powerful systems that can plan, reason about events, and construct stories of their own accord, as well as understand the complex narratives that humans express and comprehend so naturally. This book presents a theory and data-driven analysis of temporal ordering, leading to the identification of exactly what is difficult about the task. It then proposes and evaluates machine-learning so...

  18. Question Mapping

    Science.gov (United States)

    Martin, Josh

    2012-01-01

    After accepting the principal position at Farmersville (TX) Junior High, the author decided to increase instructional rigor through question mapping because of the success he saw using this instructional practice at his prior campus. Teachers are the number one influence on student achievement (Marzano, 2003), so question mapping provides a…

  19. Causal mapping

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard

    2006-01-01

    The lecture note explains how to use the causal mapping method as well as the theoretical framework associated with the method.

  20. On automatic machine translation evaluation

    Directory of Open Access Journals (Sweden)

    Darinka Verdonik

    2013-05-01

    Full Text Available An important task of developing machine translation (MT) is evaluating system performance. Automatic measures are most commonly used for this task, as manual evaluation is time-consuming and costly. However, to perform an objective evaluation is not a trivial task. Automatic measures, such as BLEU, TER, NIST, METEOR etc., have their own weaknesses, while manual evaluations are also problematic since they are always to some extent subjective. In this paper we test the influence of a test set on the results of automatic MT evaluation for the subtitling domain. Translating subtitles is a rather specific task for MT, since subtitles are a sort of summarization of spoken text rather than a direct translation of (written) text. An additional problem when translating a language pair that does not include English, in our example Slovene-Serbian, is that commonly the translations are done from English to Serbian and from English to Slovenian, and not directly, since most of the TV production is originally filmed in English. All this poses additional challenges to MT and consequently to MT evaluation. Automatic evaluation is based on a reference translation, which is usually taken from an existing parallel corpus and marked as a test set. In our experiments, we compare the evaluation results for the same MT system output using three types of test set. In the first round, the test set is 4000 subtitles from the parallel corpus of subtitles SUMAT. These subtitles are not direct translations from Serbian to Slovene or vice versa, but are based on an English original. In the second round, the test set is 1000 subtitles randomly extracted from the first test set and translated anew, from Serbian to Slovenian, based solely on the Serbian written subtitles. In the third round, the test set is the same 1000 subtitles, however this time the Slovene translations were obtained by manually correcting the Slovene MT outputs so that they are correct translations of the
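
    A small illustration of how the choice of reference test set changes an automatic score: scoring the same hypothetical MT output against two different references with NLTK's sentence-level BLEU. All sentences below are invented and merely stand in for subtitle data:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

hypothesis = "he said he would come tomorrow".split()
# Reference taken from a subtitle corpus (hypothetically translated via English).
ref_corpus = ["he told that he will arrive tomorrow".split()]
# Reference translated anew, directly from the source language (hypothetical).
ref_direct = ["he said he would come tomorrow morning".split()]

print("BLEU vs corpus reference:", sentence_bleu(ref_corpus, hypothesis, smoothing_function=smooth))
print("BLEU vs direct reference:", sentence_bleu(ref_direct, hypothesis, smoothing_function=smooth))
```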

  1. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming, Volume 4 is a collection of papers that deals with the GIER ALGOL compiler, a parameterized compiler based on mechanical linguistics, and the JOVIAL language. A couple of papers describes a commercial use of stacks, an IBM system, and what an ideal computer program support system should be. One paper reviews the system of compilation, the development of a more advanced language, programming techniques, machine independence, and program transfer to other machines. Another paper describes the ALGOL 60 system for the GIER machine including running ALGOL pro

  2. Automatic Inference of DATR Theories

    CERN Document Server

    Barg, P

    1996-01-01

    This paper presents an approach for the automatic acquisition of linguistic knowledge from unstructured data. The acquired knowledge is represented in the lexical knowledge representation language DATR. A set of transformation rules that establish inheritance relationships and a default-inference algorithm make up the basis components of the system. Since the overall approach is not restricted to a special domain, the heuristic inference strategy uses criteria to evaluate the quality of a DATR theory, where different domains may require different criteria. The system is applied to the linguistic learning task of German noun inflection.

  3. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming, Volume 2 is a collection of papers that discusses the controversy about the suitability of COBOL as a common business oriented language, and the development of different common languages for scientific computation. A couple of papers describes the use of the Genie system in numerical calculation and analyzes Mercury autocode in terms of a phrase structure language, such as in the source language, target language, the order structure of ATLAS, and the meta-syntactical language of the assembly program. Other papers explain interference or an ""intermediate

  4. Automatic construction of dipole subtraction

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, K.; Moch, S. [Deutsches Elektronen-Synchrotron DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Uwer, P. [Institut fuer Theoretische Teilchenphysik, Universitaet Karlsruhe, D-76128 Karlsruhe (Germany)

    2009-01-15

    An automatization of the dipole subtraction method is reported. We have completed three essential steps: the creation of the dipole terms, the calculation of the color linked squared Born matrix elements, and the evaluation of different helicity amplitudes. The routines have been tested for a number of complex processes. For example we have compared the output of our program for gg → ttbar gg with the results of Refs. [S. Dittmaier, P. Uwer and S. Weinzierl, Phys. Rev. Lett. 98, 262002 (2007), S. Dittmaier, P. Uwer and S. Weinzierl, arXiv:0810.0452 [hep-ph

  5. Coordinated hybrid automatic repeat request

    KAUST Repository

    Makki, Behrooz

    2014-11-01

    We develop a coordinated hybrid automatic repeat request (HARQ) approach. With the proposed scheme, if a user message is correctly decoded in the first HARQ rounds, its spectrum is allocated to other users, to improve the network outage probability and the users' fairness. The results, which are obtained for single- and multiple-antenna setups, demonstrate the efficiency of the proposed approach in different conditions. For instance, with a maximum of M retransmissions and single transmit/receive antennas, the diversity gain of a user increases from M to (J+1)(M-1)+1 where J is the number of users helping that user.
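
    A quick numerical check of the quoted diversity-gain expression, (J+1)(M-1)+1, against the uncoordinated baseline M; the values of M and J below are chosen arbitrarily:

```python
def harq_diversity_gain(M: int, J: int) -> int:
    """Diversity gain of a user with M HARQ rounds and J helping users (per the abstract)."""
    return (J + 1) * (M - 1) + 1

for M in (2, 3, 4):
    for J in (0, 1, 2):
        print(f"M={M}, J={J}: baseline {M} -> coordinated {harq_diversity_gain(M, J)}")
```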

  6. THE RIEDER AUTOMATIC RIFLE ATTACHMENT

    OpenAIRE

    W.M. Bisset

    2012-01-01

    In March 1981, Mrs H. J. R. Rieder donated her husband's presentation British .303 SMLE Rifle No 1 Mark III (number M-45374) with the Rieder Automatic Rifle Attachment to the Military Museum at the Castle in Cape Town. With it were a number of photographs, letters, documents and plans concerning this once secret invention which was tested outside the Castle during the Second World War. Fortunately, the documents donated by Mrs Rieder include a list of the numbers of the 18 rifles to which Mr ...

  7. Participatory Maps

    DEFF Research Database (Denmark)

    Salovaara-Moring, Inka

    2016-01-01

    practice. In particular, mapping environmental damage, endangered species, and human-made disasters has become one focal point for environmental knowledge production. This type of digital map has been highlighted as a processual turn in critical cartography, whereas in related computational journalism......, it can be seen as an interactive and iterative process of mapping complex and fragile ecological developments. This article looks at computer-assisted cartography as part of environmental knowledge production. It uses InfoAmazonia, the data-journalism platform on Amazon rainforests, as an example...

  8. Automatic change detection using very high-resolution SAR images and prior knowledge about the scene

    Science.gov (United States)

    Villamil Lopez, C.; Kempf, T.; Speck, R.; Anglberger, H.; Stilla, U.

    2017-05-01

    Change detection using very high resolution SAR images is an important source of information for reconnaissance applications. Modern SAR sensors are capable of acquiring many images in short periods of time, which creates the need for a reliable automatic change detection method. In this paper, we will describe a new automatic change detection approach that combines very high resolution SAR images with prior knowledge about the imaged scene. In this case, the prior knowledge about the scene will come from vector maps, which can be obtained from a Geographic Information System (GIS). These vector maps will allow us to determine which regions are of interest for the change detection, and what kind of changes/objects can be expected there. The algorithm described in this paper will be applied to a time series of high resolution TerraSAR-X images of a port with military shipyards, and used to automatically detect ship activity and extract information about the detected ships. In this case, the vector maps were obtained from a Geographic Information System (GIS) containing map data from OpenStreetMap
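
    A bare-bones stand-in for the core operation described above: a log-ratio change map between two SAR acquisitions, evaluated only inside a region of interest that would in practice be rasterized from the GIS vector map. The thresholds, array names and toy scene are illustrative:

```python
import numpy as np

def log_ratio_changes(amplitude_t0, amplitude_t1, roi_mask, threshold=1.5):
    """Flag pixels inside the ROI whose backscatter changed strongly between dates."""
    eps = 1e-6
    log_ratio = np.abs(np.log((amplitude_t1 + eps) / (amplitude_t0 + eps)))
    return (log_ratio > threshold) & roi_mask

# Toy example: a "ship" appears inside the harbour ROI between the two acquisitions.
rng = np.random.default_rng(0)
t0 = rng.gamma(2.0, 1.0, size=(100, 100))
t1 = t0.copy()
t1[40:45, 60:70] *= 20.0                  # strong new scatterer
roi = np.zeros((100, 100), dtype=bool)
roi[30:70, 50:90] = True                  # harbour basin taken from the vector map
print(log_ratio_changes(t0, t1, roi).sum(), "changed pixels inside the ROI")
```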

  9. Future shock: automatic external defibrillators.

    Science.gov (United States)

    Einav, Sharon; Weissman, Charles; Kark, Jeremy; Lotan, Chaim; Matot, Idit

    2005-04-01

    This review provides a practical overview of the performance capabilities of automatic external defibrillators (AEDs), and of advances in technology and dissemination programmes for these devices. Arrhythmia analysis by AEDs is extremely reliable in most settings (sensitivity 81-100%, specificity 99.9-97.6%). Accurate detection of arrhythmias has also been demonstrated in children, leading the US Food and Drug Administration to approve the use of several AEDs in children aged 8 years or younger. Factors that potentially may reduce the quality of arrhythmia detection are the presence of wide complex supraventricular tachycardia and location of an arrythmic event near to high-power lines. AED use by professional basic life support providers resulted in increased survival in the prehospital setting. However, provision of AEDs to nonmedical rescue services did not result in universal improvement in patient outcome. Public access defibrillation programmes have led to higher rates of survival from cardiac arrest. The role of AEDs in hospitals has yet to be elucidated, although in-hospital mortality from ventricular arrhythmias has been shown to decrease following AED deployment. Given the correct setting, AEDs can ensure that defibrillation is not limited by lack of medical knowledge or difficulties in decision making. However, event-related variables and operator-related factors, that are yet to be determined, can significantly affect the efficacy of automatic external defibrillation.

  10. Automatic temperature controlled retinal photocoagulation

    Science.gov (United States)

    Schlott, Kerstin; Koinzer, Stefan; Ptaszynski, Lars; Bever, Marco; Baade, Alex; Roider, Johann; Birngruber, Reginald; Brinkmann, Ralf

    2012-06-01

    Laser coagulation is a treatment method for many retinal diseases. Due to variations in fundus pigmentation and light scattering inside the eye globe, different lesion strengths are often achieved. The aim of this work is to realize an automatic feedback algorithm to generate desired lesion strengths by controlling the retinal temperature increase with the irradiation time. Optoacoustics afford non-invasive retinal temperature monitoring during laser treatment. A 75 ns/523 nm Q-switched Nd:YLF laser was used to excite the temperature-dependent pressure amplitudes, which were detected at the cornea by an ultrasonic transducer embedded in a contact lens. A 532 nm continuous wave Nd:YAG laser served for photocoagulation. The ED50 temperatures, for which the probability of ophthalmoscopically visible lesions after one hour in vivo in rabbits was 50%, varied from 63°C for 20 ms to 49°C for 400 ms. Arrhenius parameters were extracted as ΔE = 273 kJ mol^-1 and A = 3×10^44 s^-1. Control algorithms for mild and strong lesions were developed, which led to average lesion diameters of 162 ± 34 μm and 189 ± 34 μm, respectively. It could be demonstrated that the sizes of the automatically controlled lesions were widely independent of the treatment laser power and the retinal pigmentation.
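
    The temperature control above rests on the Arrhenius damage model Ω = A ∫ exp(−ΔE/(R·T(t))) dt. A small numerical sketch with the parameters quoted above (taking ΔE in kJ/mol); the temperature course itself is invented:

```python
import numpy as np

A = 3e44        # pre-exponential factor, 1/s (as quoted above)
dE = 273e3      # activation energy, J/mol (assuming the quoted 273 is kJ/mol)
R = 8.314       # gas constant, J/(mol K)

def damage_integral(temps_celsius: np.ndarray, dt: float) -> float:
    """Arrhenius damage Omega accumulated over a sampled temperature course."""
    T = temps_celsius + 273.15
    return float(np.sum(A * np.exp(-dE / (R * T)) * dt))

# Hypothetical 20 ms exposure at a constant peak temperature of 63 deg C.
temps = np.full(200, 63.0)                 # 200 samples over the exposure
omega = damage_integral(temps, dt=0.02 / 200)
print(f"Omega ~ {omega:.2f}")              # Omega ~ 1 marks the threshold of a visible lesion
```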

  11. ACIR: automatic cochlea image registration

    Science.gov (United States)

    Al-Dhamari, Ibraheem; Bauer, Sabine; Paulus, Dietrich; Lissek, Friedrich; Jacob, Roland

    2017-02-01

    Efficient Cochlear Implant (CI) surgery requires prior knowledge of the cochlea's size and its characteristics. This information helps to select suitable implants for different patients. To get these measurements, a segmentation method of cochlea medical images is needed. An important pre-processing step for good cochlea segmentation involves efficient image registration. The cochlea's small size and complex structure, in addition to the different resolutions and head positions during imaging, reveals a big challenge for the automated registration of the different image modalities. In this paper, an Automatic Cochlea Image Registration (ACIR) method for multi- modal human cochlea images is proposed. This method is based on using small areas that have clear structures from both input images instead of registering the complete image. It uses the Adaptive Stochastic Gradient Descent Optimizer (ASGD) and Mattes's Mutual Information metric (MMI) to estimate 3D rigid transform parameters. The use of state of the art medical image registration optimizers published over the last two years are studied and compared quantitatively using the standard Dice Similarity Coefficient (DSC). ACIR requires only 4.86 seconds on average to align cochlea images automatically and to put all the modalities in the same spatial locations without human interference. The source code is based on the tool elastix and is provided for free as a 3D Slicer plugin. Another contribution of this work is a proposed public cochlea standard dataset which can be downloaded for free from a public XNAT server.
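
    ACIR itself is built on elastix; the same ingredients (Mattes mutual information, a gradient-descent optimizer, a 3D rigid transform) can be sketched with SimpleITK. The file names and parameter values below are illustrative, not ACIR's actual settings:

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("cochlea_ct.nii.gz", sitk.sitkFloat32)    # hypothetical files
moving = sitk.ReadImage("cochlea_mr.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)                 # 3D rigid transform parameters
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "cochlea_mr_aligned.nii.gz")
```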

  12. An automatic holographic adaptive phoropter

    Science.gov (United States)

    Amirsolaimani, Babak; Peyghambarian, N.; Schwiegerling, Jim; Bablumyan, Arkady; Savidis, Nickolaos; Peyman, Gholam

    2017-08-01

    Phoropters are the most common instrument used to detect refractive errors. During a refractive exam, lenses are flipped in front of the patient who looks at the eye chart and tries to read the symbols. The procedure is fully dependent on the cooperation of the patient to read the eye chart, provides only a subjective measurement of visual acuity, and can at best provide a rough estimate of the patient's vision. Phoropters are difficult to use for mass screenings requiring a skilled examiner, and it is hard to screen young children and the elderly etc. We have developed a simplified, lightweight automatic phoropter that can measure the optical error of the eye objectively without requiring the patient's input. The automatic holographic adaptive phoropter is based on a Shack-Hartmann wave front sensor and three computercontrolled fluidic lenses. The fluidic lens system is designed to be able to provide power and astigmatic corrections over a large range of corrections without the need for verbal feedback from the patient in less than 20 seconds.

  13. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    National Research Council Canada - National Science Library

    Chunhua Li; Pengpeng Zhao; Victor S. Sheng; Xuefeng Xian; Jian Wu; Zhiming Cui

    2017-01-01

    .... Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases...

  14. Automatic Planning Research Applied To Orbital Construction

    Science.gov (United States)

    Park, William T.

    1987-02-01

    Artificial intelligence research on automatic planning could result in a new class of management aids to reduce the cost of constructing the Space Station, and would have economically important spinoffs to terrestrial industry as well. Automatic planning programs could be used to plan and schedule launch activities, material deliveries to orbit, construction procedures, and the use of machinery and tools. Numerous automatic planning programs have been written since the 1950s. We describe PARPLAN, a recently-developed experimental automatic planning program written in the AI language Prolog, that can generate plans with parallel activities.

  15. Automatic Thermal Infrared Panoramic Imaging Sensor

    National Research Council Canada - National Science Library

    Gutin, Mikhail; Tsui, Eddy K; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey

    2006-01-01

    .... Automatic detection, location, and tracking of targets outside protected area ensures maximum protection and at the same time reduces the workload on personnel, increases reliability and confidence...

  16. Mining Software Repositories for Automatic Interface Recommendation

    National Research Council Canada - National Science Library

    Sun, Xiaobing; Li, Bin; Duan, Yucong; Shi, Wei; Liu, Xiangyue

    2016-01-01

    .... To help developers better take advantage of the available interfaces used in software repositories, we previously proposed an approach to automatically recommend interfaces by mining existing open...

  17. Automatic Evaluation of Machine Translation

    DEFF Research Database (Denmark)

    Martinez, Mercedes Garcia; Koglin, Arlene; Mesa-Lao, Bartolomé

    2015-01-01

    The availability of systems capable of producing fairly accurate translations has increased the popularity of machine translation (MT). The translation industry is steadily incorporating MT in their workflows, engaging the human translator to post-edit the raw MT output in order to comply with a set of quality criteria in as few edits as possible. The quality of MT systems is generally measured by automatic metrics, producing scores that should correlate with human evaluation. In this study, we investigate correlations between one such metric, i.e. Translation Edit Rate (TER), and actual post-editing effort as shown in post-editing process data collected under experimental conditions. Using the CasMaCat workbench as a post-editing tool, process data were collected using keystrokes and eye-tracking data from five professional translators under two different conditions: i) traditional post...

  18. Automatic Regulation of Wastewater Discharge

    Directory of Open Access Journals (Sweden)

    Bolea Yolanda

    2017-01-01

    Full Text Available Wastewater plants, mainly those with only secondary treatment, discharge polluted water to the environment that cannot be used in any human activity. When those discharges are made to the sea, most of the biological pollutants are expected to die or almost disappear before the water reaches areas of human use. This natural die-off of bacteria, viruses and other pathogens is due to conditions such as the salt water of the sea and the effect of sunlight, and the discharge areas are calculated taking these conditions into account. However, under certain meteorological phenomena water reaches the coast without the full disappearance of pollutant elements. In the Mediterranean Sea there are periods of adverse climatic conditions that pollute the coast near the wastewater discharge. In this paper, the authors present an automatic control that prevents such pollution episodes using two mathematical models, one for pollutant transport and the other for pollutant removal in wastewater spills.

  19. Automatic Detection of Terminology Evolution

    Science.gov (United States)

    Tahmasebi, Nina

    As archives contain documents that span over a long period of time, the language used to create these documents and the language used for querying the archive can differ. This difference is due to evolution in both terminology and semantics and will cause a significant number of relevant documents being omitted. A static solution is to use query expansion based on explicit knowledge banks such as thesauri or ontologies. However as we are able to archive resources with more varied terminology, it will be infeasible to use only explicit knowledge for this purpose. There exist only few or no thesauri covering very domain specific terminologies or slang as used in blogs etc. In this Ph.D. thesis we focus on automatically detecting terminology evolution in a completely unsupervised manner as described in this technical paper.

  20. Automatic Differentiation and Deep Learning

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Statistical learning has been getting more and more interest from the particle-physics community in recent times, with neural networks and gradient-based optimization being a focus. In this talk we shall discuss three things: (1) automatic differentiation tools, i.e. tools to quickly build computation DAGs that are fully differentiable; we shall focus on one such tool, PyTorch; (2) easy deployment of trained neural networks into large systems with many constraints, for example deploying a model at the reconstruction phase, where the neural network has to be integrated into CERN's bulk data-processing, C++-only environment; and (3) some recent models in deep learning for segmentation and generation that might be useful for particle physics problems.
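
    A minimal PyTorch autograd example of the kind of fully differentiable computation DAG the talk refers to; the function itself is arbitrary:

```python
import torch

# Leaf tensors that require gradients become nodes of the computation DAG.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)

# Building the expression records the DAG; no derivative code is written by hand.
y = torch.tanh(w @ x) + (x ** 2).sum()

y.backward()        # reverse-mode automatic differentiation through the DAG
print(x.grad)       # dy/dx
print(w.grad)       # dy/dw
```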

  1. Automatic Image Interpolation Using Homography

    Directory of Open Access Journals (Sweden)

    Tang Cheng-Yuan

    2010-01-01

    Full Text Available While taking photographs, we often face the problem that unwanted foreground objects (e.g., vehicles, signs, and pedestrians) occlude the main subject(s). We propose to apply image interpolation (also known as inpainting) techniques to remove unwanted objects in the photographs and to automatically patch the vacancy after the unwanted objects are removed. When given only a single image, if the information loss after the unwanted objects are removed is too great, the patching results are usually unsatisfactory. The proposed inpainting techniques employ homographic constraints in geometry to incorporate multiple images taken from different viewpoints. Our experimental results showed that the proposed techniques could effectively reduce the search for potential patches from multiple input images and decide the best patches for the missing regions.
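
    A condensed sketch of the geometric step the method relies on: estimating a homography between two views with OpenCV and warping the auxiliary view into the main view so its pixels can patch the masked (removed-object) region. The feature detector, match count and RANSAC threshold are illustrative choices, not the authors' exact pipeline:

```python
import cv2
import numpy as np

def patch_from_second_view(main_bgr, aux_bgr, hole_mask):
    """Fill the masked region of main_bgr with pixels warped from aux_bgr."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(cv2.cvtColor(main_bgr, cv2.COLOR_BGR2GRAY), None)
    k2, d2 = orb.detectAndCompute(cv2.cvtColor(aux_bgr, cv2.COLOR_BGR2GRAY), None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:300]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # maps aux -> main view

    warped = cv2.warpPerspective(aux_bgr, H, (main_bgr.shape[1], main_bgr.shape[0]))
    result = main_bgr.copy()
    result[hole_mask > 0] = warped[hole_mask > 0]          # patch only the hole pixels
    return result
```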

  2. Research on Semi-automatic Bomb Fetching for an EOD Robot

    Directory of Open Access Journals (Sweden)

    Qian Jun

    2008-11-01

    Full Text Available An EOD robot system, SUPER-PLUS, which has a novel semi-automatic bomb-fetching function, is presented in this paper. With limited human support, SUPER-PLUS scans the cluttered environment with a wrist-mounted laser distance sensor and plans a collision-free path for the manipulator to fetch the bomb. The construction of the manipulator, bomb and environment models, the C-space map, path planning and the operation procedure are introduced in detail. The semi-automatic bomb-fetching function has greatly improved the operation performance of the EOD robot.

  3. AUTOMATIC CORRECTION ALGORITHM OF HYFROLOGY FEATURE ATTRIBUTE IN NATIONAL GEOGRAPHIC CENSUS

    Directory of Open Access Journals (Sweden)

    C. Li

    2017-09-01

    Full Text Available A subset of the attributes of hydrologic feature data in the national geographic census is unclear; the current solution is manual completion, which is inefficient and prone to mistakes. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structure robustness and topology ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that the method is reasonable and efficient.

  4. Research on Semi-Automatic Bomb Fetching for an EOD Robot

    Directory of Open Access Journals (Sweden)

    Zeng Jian-Jun

    2007-06-01

    Full Text Available An EOD robot system, SUPER-PLUS, which has a novel semi-automatic bomb-fetching function, is presented in this paper. With limited human support, SUPER-PLUS scans the cluttered environment with a wrist-mounted laser distance sensor and plans a collision-free path for the manipulator to fetch the bomb. The construction of the manipulator, bomb and environment models, the C-space map, path planning and the operation procedure are introduced in detail. The semi-automatic bomb-fetching function has greatly improved the operation performance of the EOD robot.

  5. AUTOMATIC 3D BUILDING MODEL GENERATIONS WITH AIRBORNE LiDAR DATA

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2017-11-01

    Full Text Available LiDAR systems become more and more popular because of the potential use for obtaining the point clouds of vegetation and man-made objects on the earth surface in an accurate and quick way. Nowadays, these airborne systems have been frequently used in wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3 dimensional (3D modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. The 3D building model generation is the one of the most prominent applications of LiDAR system, which has the major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication and mobile navigation etc. The manual or semi-automatic 3D building model generation is costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed in a simple and quick way for many studies which includes building modelling. In this study, automatic 3D building models generation is aimed with airborne LiDAR data. An approach is proposed for automatic 3D building models generation including the automatic point based classification of raw LiDAR point cloud. The proposed point based classification includes the hierarchical rules, for the automatic production of 3D building models. The detailed analyses for the parameters which used in hierarchical rules have been performed to improve classification results using different test areas identified in the study area. The proposed approach have been tested in the study area which has partly open areas, forest areas and many types of the buildings, in Zekeriyakoy, Istanbul using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point based classification. The obtained results of this research on study area verified

  6. Automatic 3d Building Model Generations with Airborne LiDAR Data

    Science.gov (United States)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems become more and more popular because of the potential use for obtaining the point clouds of vegetation and man-made objects on the earth surface in an accurate and quick way. Nowadays, these airborne systems have been frequently used in wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3 dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. The 3D building model generation is the one of the most prominent applications of LiDAR system, which has the major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication and mobile navigation etc. The manual or semi-automatic 3D building model generation is costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed in a simple and quick way for many studies which includes building modelling. In this study, automatic 3D building models generation is aimed with airborne LiDAR data. An approach is proposed for automatic 3D building models generation including the automatic point based classification of raw LiDAR point cloud. The proposed point based classification includes the hierarchical rules, for the automatic production of 3D building models. The detailed analyses for the parameters which used in hierarchical rules have been performed to improve classification results using different test areas identified in the study area. The proposed approach have been tested in the study area which has partly open areas, forest areas and many types of the buildings, in Zekeriyakoy, Istanbul using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point based classification. The obtained results of this research on study area verified that automatic 3D

  7. Automatic toilet seat lowering apparatus

    Science.gov (United States)

    Guerty, Harold G.

    1994-09-06

    A toilet seat lowering apparatus includes a housing defining an internal cavity for receiving water from the water supply line to the toilet holding tank. A descent delay assembly of the apparatus can include a stationary dam member and a rotating dam member for dividing the internal cavity into an inlet chamber and an outlet chamber and controlling the intake and evacuation of water in a delayed fashion. A descent initiator is activated when the internal cavity is filled with pressurized water and automatically begins the lowering of the toilet seat from its upright position, which lowering is also controlled by the descent delay assembly. In an alternative embodiment, the descent initiator and the descent delay assembly can be combined in a piston linked to the rotating dam member and provided with a water channel for creating a resisting pressure to the advancing piston and thereby slowing the associated descent of the toilet seat.

  8. AUTOMATIC APPROACH TO VHR SATELLITE IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Kupidura

    2016-06-01

    Full Text Available In this paper, we present a proposition of a fully automatic classification of VHR satellite images. Unlike the most widespread approaches: supervised classification, which requires prior defining of class signatures, or unsupervised classification, which must be followed by an interpretation of its results, the proposed method requires no human intervention except for the setting of the initial parameters. The presented approach bases on both spectral and textural analysis of the image and consists of 3 steps. The first step, the analysis of spectral data, relies on NDVI values. Its purpose is to distinguish between basic classes, such as water, vegetation and non-vegetation, which all differ significantly spectrally, thus they can be easily extracted basing on spectral analysis. The second step relies on granulometric maps. These are the product of local granulometric analysis of an image and present information on the texture of each pixel neighbourhood, depending on the texture grain. The purpose of texture analysis is to distinguish between different classes, spectrally similar, but yet of different texture, e.g. bare soil from a built-up area, or low vegetation from a wooded area. Due to the use of granulometric analysis, based on mathematical morphology opening and closing, the results are resistant to the border effect (qualifying borders of objects in an image as spaces of high texture, which affect other methods of texture analysis like GLCM statistics or fractal analysis. Therefore, the effectiveness of the analysis is relatively high. Several indices based on values of different granulometric maps have been developed to simplify the extraction of classes of different texture. The third and final step of the process relies on a vegetation index, based on near infrared and blue bands. Its purpose is to correct partially misclassified pixels. All the indices used in the classification model developed relate to reflectance values, so the
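
    The first, spectral step described above rests on NDVI. A minimal sketch of how NDVI is computed and thresholded into the basic water / non-vegetation / vegetation classes; the band arrays and threshold values are illustrative:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index, (NIR - red) / (NIR + red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-6)

def basic_classes(nir, red, water_thr=0.0, veg_thr=0.4):
    """0 = water, 1 = non-vegetation, 2 = vegetation (illustrative thresholds)."""
    v = ndvi(nir, red)
    classes = np.ones_like(v, dtype=np.uint8)      # default: non-vegetation
    classes[v < water_thr] = 0
    classes[v > veg_thr] = 2
    return classes
```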

  9. Automatic segmentation of diatom images for classification

    NARCIS (Netherlands)

    Jalba, Andrei C.; Wilkinson, Michael H.F.; Roerdink, Jos B.T.M.
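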

    A general framework for automatic segmentation of diatom images is presented. This segmentation is a critical first step in contour-based methods for automatic identification of diatoms by computerized image analysis. We review existing results, adapt popular segmentation methods to this difficult

  10. Integrated Systems of Automatic Radio Equipment,

    Science.gov (United States)

    1982-05-20

    Contents include Chapter 4, Integrated Automatic Goniometric Systems, and Chapter 5, Integrated Automatic Ranging Systems. In the integrated systems, such as the goniometric channels of homing systems, the correction of the position of the gyroscope axis can be performed frequently, namely in the cases of measuring the coordinates of moving targets from a moving object. Thus, the input value of the goniometrical

  11. ANNUAL REPORT-AUTOMATIC INDEXING AND ABSTRACTING.

    Science.gov (United States)

    Lockheed Missiles and Space Co., Palo Alto, CA. Electronic Sciences Lab.

    THE INVESTIGATION IS CONCERNED WITH THE DEVELOPMENT OF AUTOMATIC INDEXING, ABSTRACTING, AND EXTRACTING SYSTEMS. BASIC INVESTIGATIONS IN ENGLISH MORPHOLOGY, PHONETICS, AND SYNTAX ARE PURSUED AS NECESSARY MEANS TO THIS END. IN THE FIRST SECTION THE THEORY AND DESIGN OF THE "SENTENCE DICTIONARY" EXPERIMENT IN AUTOMATIC EXTRACTION IS OUTLINED. SOME OF…

  12. Solar Powered Automatic Shrimp Feeding System

    Directory of Open Access Journals (Sweden)

    Dindo T. Ani

    2015-12-01

    Full Text Available Automation has brought many advances to existing technologies; one application is the solar-powered automatic shrimp feeding system. Solar power, a renewable energy source, offers an alternative in times of energy shortage and, used in an automatic system, reduces the manpower required. The researchers believe an automatic shrimp feeding system may help solve the problems of manual feeding operations. The project aimed to design and develop a solar-powered automatic shrimp feeding system; specifically, it sought to prepare the design specifications, determine the methods of fabrication and assembly, and test the response time of the system. The system utilizes a 10-hour timer that can be set to intervals preferred by the user and runs as a continuous process. A magnetic contactor acts as a switch connected to the timer, controlling the activation or termination of the electrical loads; it is powered by a solar panel and a rechargeable battery that stores the panel's output. Through a series of tests, the components of the system were shown to be functional and to operate within the desired output. It was recommended that the timer be tested to avoid malfunction so as to achieve a fully automatic system, and that the system be improved to handle changes in the scope of the project.

  13. Mental imagery affects subsequent automatic defense responses

    NARCIS (Netherlands)

    Hagenaars, Muriel; Mesbah, Rahele; Cremers, Henk

    Automatic defense responses promote survival and appropriate action under threat. They have also been associated with the development of threat-related psychiatric syndromes. Targeting such automatic responses during threat may be useful in populations with frequent threat exposure. Here, two

  14. Mental Imagery Affects Subsequent Automatic Defense Responses

    NARCIS (Netherlands)

    Hagenaars, M.A.; Mesbah, R.; Cremers, H.R.

    2015-01-01

    Automatic defense responses promote survival and appropriate action under threat. They have also been associated with the development of threat-related psychiatric syndromes. Targeting such automatic responses during threat may be useful in populations with frequent threat exposure. Here, two

  15. 49 CFR 236.750 - Interlocking, automatic.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Interlocking, automatic. 236.750 Section 236.750 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION... Interlocking, automatic. An arrangement of signals, with or without other signal appliances, which functions...

  16. Towards Automatic Trunk Classification on Young Conifers

    DEFF Research Database (Denmark)

    Petri, Stig; Immerkær, John

    2009-01-01

    In the garden nursery industry providing young Nordmann firs for Christmas tree plantations, there is a rising interest in automatic classification of their products to ensure consistently high quality and reduce the cost of manual labor. This paper describes a fully automatic single-view algorithm...

  17. Automatic Grading of Spreadsheet and Database Skills

    Science.gov (United States)

    Kovacic, Zlatko J.; Green, John Steven

    2012-01-01

    Growing enrollment in distance education has increased student-to-lecturer ratios and, therefore, increased the workload of the lecturer. This growing enrollment has resulted in mounting efforts to develop automatic grading systems in an effort to reduce this workload. While research in the design and development of automatic grading systems has a…

  18. Automatic Identification System (AIS) User Requirements

    Science.gov (United States)

    2000-12-01

    Automatic Identification System (AIS) User Requirements, December 2000, Final Report. This document is available to the U.S. public. The original document contains color images. Abstract: Automatic Identification System (AIS) is a new technology that should improve situational

  19. CALS Mapping

    DEFF Research Database (Denmark)

    Collin, Ib; Nielsen, Povl Holm; Larsen, Michael Holm

    1998-01-01

    To enhance the industrial applications of CALS, CALS Center Danmark has developed a cost-efficient and transparent assessment, CALS Mapping, to uncover the potential of CALS - primarily dedicated to small and medium sized enterprises. The idea behind CALS Mapping is that the CALS state of the enterprise is compared with a Reference Enterprise Model (REM). The REM is a CALS idealised enterprise providing full product support throughout the extended enterprise and containing different manufacturing aspects, e.g. component industry, process industry, and one-piece production. This CALS idealised enterprise is, when applied in a given organisation, modified with respect to the industry regarded; hence irrelevant measure parameters are eliminated to avoid redundancy. This CALS Mapping assessment quantifies the CALS potential of an organisation with the purpose of providing decision support to the top...

  20. Cognitive maps

    DEFF Research Database (Denmark)

    Minder, Bettina; Laursen, Linda Nhu; Lassen, Astrid Heidemann

    2014-01-01

    . Conceptual clustering is used to analyse and order information according to concepts or variables from within the data. The cognitive maps identified are validated through the comments of some of the same experts. The study presents three cognitive maps and respective world-views explaining how the design...... and innovation field are related and under which dimensions they differ. The paper draws preliminary conclusions on the implications of the different world- views on the innovation process. With the growing importance of the design approach in innovation e.g. design thinking, a clear conception...

  1. Participatory maps

    DEFF Research Database (Denmark)

    Salovaara-Moring, Inka

    towards a new political ecology. This type of digital cartographies has been highlighted as the ‘processual turn’ in critical cartography, whereas in related computational journalism it can be seen as an interactive and iterative process of mapping complex and fragile ecological developments. This paper...... looks at computer-assisted cartography as part of environmental knowledge production. It uses InfoAmazonia, the databased platform on Amazon rainforests, as an example of affective geo-visualization within information mapping that enhances embodiment in the experience of the information. Amazonia...

  2. MAPPING INNOVATION

    DEFF Research Database (Denmark)

    Thuesen, Christian Langhoff; Koch, Christian

    2011-01-01

    trends as globalization. Three niches (Lean Construction, BIM and System Deliveries) are subject to a detailed analysis showing partly incompatible rationales and various degrees of innovation potential. The paper further discusses how existing policymaking operates in a number of tensions one being......, the innovation map can act as a medium in which policymakers, interest organization and companies can develop and coordinate future innovation activities....

  3. Meal mapping

    DEFF Research Database (Denmark)

    Scholderer, Joachim; Kügler, Jens; Olsen, Nina Veflen

    2013-01-01

    probabilities are subjected to multiple correspondence analysis and mapped into low-dimensional space. In a third step, the principal coordinates representing meal centres and side components in the correspondence analysis solution are subjected to cluster analysis to identify distinct groups of compatible...

  4. Mapping filmmaking

    DEFF Research Database (Denmark)

    Gilje, Øystein; Frølunde, Lisbeth; Lindstrand, Fredrik

    2010-01-01

    This chapter concerns mapping patterns with regard to how young filmmakers (aged 15–20) in the Scandinavian countries learn about filmmaking. To uncover the patterns, we present portraits of four young filmmakers who participated in the Scandinavian research project Making a filmmaker. The focus...

  5. Affective Maps

    DEFF Research Database (Denmark)

    Salovaara-Moring, Inka

    of digital cartographies has been highlighted as the ‘processual turn’ in critical cartography, whereas in related computational journalism it can be seen as an interactive and iterative process of mapping complex and fragile ecological developments. This paper looks at computer-assisted cartography as part...

  6. 7 CFR 58.418 - Automatic cheese making equipment.

    Science.gov (United States)

    2010-01-01

    ... processing or packaging areas. (c) Automatic salter. The automatic salter shall be constructed of stainless.... The automatic salter shall be constructed so that it can be satisfactorily cleaned. The salting system...

  7. Automatic learning-based beam angle selection for thoracic IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy; Marshall, Andrea [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, Toronto, Ontario M5G 1P5 (Canada); Levinshtein, Alex [Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4 (Canada); Hope, Andrew J.; Lindsay, Patricia [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Pekar, Vladimir [Philips Healthcare, Markham, Ontario L6C 2S3 (Canada)

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
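
    As a rough illustration of the learning step described above, the sketch below trains a random forest regressor to map anatomical features to a per-angle beam score and then greedily picks well-separated high-scoring angles. The feature layout, score definition, angular-separation rule, and use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: score candidate beam angles from anatomical features
# (feature definitions and data are illustrative, not the authors' pipeline).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Training data: one row per (plan, candidate angle); columns might encode
# target-to-OAR distances, overlap volumes, the angle itself, patient geometry, etc.
X_train = rng.random((500, 12))
# Score: e.g. 1.0 if the angle was used in the clinical plan, 0.0 otherwise,
# possibly smoothed over neighbouring angles.
y_train = rng.random(500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# At planning time: score every candidate angle for a new patient and keep
# the top-k, subject to a minimum interbeam-separation constraint.
candidate_angles = np.arange(0, 360, 5)
X_new = rng.random((len(candidate_angles), 12))
scores = model.predict(X_new)

selected = []
for idx in np.argsort(scores)[::-1]:
    angle = candidate_angles[idx]
    if all(min(abs(angle - a), 360 - abs(angle - a)) >= 20 for a in selected):
        selected.append(angle)
    if len(selected) == 6:
        break
print("selected beam angles:", sorted(selected))
```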

  8. Automatic categorization of land-water cover types of the Green Swamp, Florida, using Skylab multispectral scanner (S-192) data

    Science.gov (United States)

    Coker, A. E.; Higer, A. L.; Rogers, R. H.; Shah, N. J.; Reed, L. E.; Walker, S.

    1975-01-01

    The techniques used and the results achieved in the successful application of Skylab Multispectral Scanner (EREP S-192) high-density digital tape data for the automatic categorizing and mapping of land-water cover types in the Green Swamp of Florida were summarized. Data was provided from Skylab pass number 10 on 13 June 1973. Significant results achieved included the automatic mapping of a nine-category and a three-category land-water cover map of the Green Swamp. The land-water cover map was used to make interpretations of a hydrologic condition in the Green Swamp. This type of use marks a significant breakthrough in the processing and utilization of EREP S-192 data.

  9. Interactive Boundary Detection for Automatic Definition of 2D Opacity Transfer Function

    Science.gov (United States)

    Rauberger, Martin; Overhoff, Heinrich Martin

    In computer-assisted diagnostics, high-value 3-D visualization nowadays takes on a supporting role alongside traditional 2-D slice-wise visualization. 3-D visualization may create intuitive visual appearances of the spatial relations of anatomical structures, based upon transfer functions mapping data values to visual parameters, e.g. color or opacity. Manual definition of these transfer functions, however, requires expert knowledge and can be tedious. In this paper an approach to automating 2-D opacity transfer function definition is presented. Given a few parameters characterizing the image volume and a user-depicted area of interest, the procedure detects organ surfaces automatically, upon which transfer functions may then be defined automatically. Parameter setting still requires experience with the imaging properties of the modalities, and improper settings can cause falsely detected organ surfaces. Tests with CT and MRI image volumes show that real-time structure detection is possible even for noisy image volumes.
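
    A minimal sketch of what a 2-D opacity transfer function can look like is given below: opacity is assigned from voxel intensity and gradient magnitude, peaking around values that characterize a detected organ surface. The Gaussian form, the parameter values, and the toy volume are assumptions for illustration only.

```python
# Minimal sketch of a 2D opacity transfer function: opacity is assigned from
# voxel intensity and gradient magnitude, peaking around a detected organ
# surface. The surface statistics here are made up for illustration.
import numpy as np

def opacity_tf(volume, surface_intensity, surface_gradient,
               sigma_i=40.0, sigma_g=30.0, max_opacity=0.8):
    """Return a per-voxel opacity in [0, max_opacity]."""
    gz, gy, gx = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    # 2D Gaussian bump centred on the (intensity, gradient) pair that
    # characterises the organ boundary of interest.
    d_i = (volume - surface_intensity) / sigma_i
    d_g = (grad_mag - surface_gradient) / sigma_g
    return max_opacity * np.exp(-0.5 * (d_i**2 + d_g**2))

# Toy volume: a bright sphere in a dark background.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = np.where(x**2 + y**2 + z**2 < 20**2, 1000.0, 0.0)
opacity = opacity_tf(volume, surface_intensity=500.0, surface_gradient=250.0)
print(opacity.shape, round(float(opacity.max()), 3))
```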

  10. Reading music modifies spatial mapping in pianists

    OpenAIRE

    Stewart, Lauren; Walsh, Vincent; Frith, Uta

    2004-01-01

    We used a novel musical Stroop task to demonstrate that musical notation is automatically processed in trained pianists. Numbers were superimposed onto musical notes, and participants played five-note sequences by mapping from numbers to fingers instead of from notes to fingers. Pianists’ reaction times were significantly affected by the congruence of the note/number pairing. Nonmusicians were unaffected. In a nonmusical analogue of the task, pianists and nonmusicians showed a qualitative ...

  11. Automatic Transmission Of Liquid Nitrogen

    Directory of Open Access Journals (Sweden)

    Sumedh Mhatre

    2015-08-01

    Full Text Available Liquid nitrogen (LN2) is one of the major substances used as a chiller in industries such as ice cream factories, milk dairies, blood sample storage, blood banks, etc. It helps to maintain the required product at a lower temperature for preservation purposes. The LN2 cannot be fully utilised: practically, if 3.75 litres of LN2 are used in a single day, then around 12% of the LN2 (450 ml) is wasted due to vaporisation. A pressure relief valve is provided to create a pressure difference; if there is no pressure difference between the cylinder carrying LN2 and its surroundings, it will result in damage to the container as well as wastage of LN2. Transmission of LN2 from TA55 to BA3 is carried out manually, so care must be taken during transmission to avoid wastage. With this project concept, the transmission of LN2 will be carried out automatically so as to reduce the wastage incurred with manual operation.

  12. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
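
    The core idea, recognizing a frequently occurring computational pattern so it can be replaced by a parallel equivalent, can be illustrated on a much smaller scale. The sketch below uses Python's ast module to spot a simple sum-reduction loop; the knowledge-based system described in the article works on Fortran codes with far richer dependence analysis, so this is only an analogy.

```python
# Toy illustration of idiom recognition: detect a simple "sum reduction" loop
# in Python source with the ast module. A real system like the one described
# works on Fortran-level dependence information; this only shows the idea of
# matching a code pattern that could be replaced by a parallel primitive.
import ast

SOURCE = """
total = 0.0
for i in range(len(a)):
    total = total + a[i]
"""

def find_sum_reductions(tree):
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.For) and len(node.body) == 1:
            stmt = node.body[0]
            # Pattern: acc = acc + <expr>  inside the loop body.
            if (isinstance(stmt, ast.Assign)
                    and isinstance(stmt.targets[0], ast.Name)
                    and isinstance(stmt.value, ast.BinOp)
                    and isinstance(stmt.value.op, ast.Add)
                    and isinstance(stmt.value.left, ast.Name)
                    and stmt.value.left.id == stmt.targets[0].id):
                hits.append((node.lineno, stmt.targets[0].id))
    return hits

tree = ast.parse(SOURCE)
for lineno, acc in find_sum_reductions(tree):
    print(f"line {lineno}: loop accumulates into '{acc}' "
          f"-> candidate for a parallel reduction (e.g. sum(a))")
```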

  13. Automatic image cropping for republishing

    Science.gov (United States)

    Cheatle, Phil

    2010-02-01

    Image cropping is an important aspect of creating aesthetically pleasing web pages and repurposing content for different web or printed output layouts. Cropping provides both the possibility of improving the composition of the image and the ability to change the aspect ratio of the image to suit the layout design needs of different document or web page formats. This paper presents a method for aesthetically cropping images on the basis of their content. Underlying the approach is a novel segmentation-based saliency method which identifies some regions as "distractions", as an alternative to the conventional "foreground" and "background" classifications. Distractions are a particular problem with typical consumer photos found on social networking websites such as Facebook, Flickr, etc. Automatic cropping is achieved by identifying the main subject area of the image and then using an optimization search to expand this to form an aesthetically pleasing crop. Evaluation of aesthetic functions like auto-crop is difficult as there is no single correct solution. A further contribution of this paper is an automated evaluation method which goes some way towards handling the complexity of aesthetic assessment. This allows crop algorithms to be easily evaluated against a large test set.
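
    The crop-construction step, finding the main subject and expanding it to a target aspect ratio, can be sketched as follows. The thresholding rule, the expansion strategy, and the toy saliency map are assumptions; the paper's segmentation-based saliency and distraction handling are not reproduced.

```python
# Hypothetical sketch of content-aware cropping: threshold a saliency map to
# find the main subject, then grow the crop to a target aspect ratio.
import numpy as np

def auto_crop(saliency, target_aspect):
    """saliency: 2D array in [0, 1]; target_aspect = width / height."""
    h, w = saliency.shape
    ys, xs = np.where(saliency > 0.5 * saliency.max())
    top, bottom = ys.min(), ys.max() + 1
    left, right = xs.min(), xs.max() + 1

    # Expand the subject box to the requested aspect ratio.
    box_h, box_w = bottom - top, right - left
    if box_w / box_h < target_aspect:          # too narrow -> widen
        box_w = int(round(box_h * target_aspect))
    else:                                      # too flat -> make taller
        box_h = int(round(box_w / target_aspect))
    cy, cx = (top + bottom) // 2, (left + right) // 2
    top = int(np.clip(cy - box_h // 2, 0, max(h - box_h, 0)))
    left = int(np.clip(cx - box_w // 2, 0, max(w - box_w, 0)))
    return top, left, min(box_h, h), min(box_w, w)

# Toy saliency map with a bright subject off-centre.
sal = np.zeros((300, 400))
sal[60:180, 250:350] = 1.0
print(auto_crop(sal, target_aspect=1.0))   # (top, left, height, width)
```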

  14. Reading music modifies spatial mapping in pianists.

    Science.gov (United States)

    Stewart, Lauren; Walsh, Vincent; Frith, Uta

    2004-02-01

    We used a novel musical Stroop task to demonstrate that musical notation is automatically processed in trained pianists. Numbers were superimposed onto musical notes, and participants played five-note sequences by mapping from numbers to fingers instead of from notes to fingers. Pianists' reaction times were significantly affected by the congruence of the note/number pairing. Nonmusicians were unaffected. In a nonmusical analogue of the task, pianists and nonmusicians showed a qualitative difference on performance of a vertical-to-horizontal stimulus-response mapping task. Pianists were faster when stimuli specifying a leftward response were presented in vertically lower locations and stimuli specifying a rightward response were presented in vertically higher locations. Nonmusicians showed the reverse pattern. No group differences were found on a task that required horizontal-to-horizontal mappings. We suggest that, as a result of learning to read and play keyboard music, pianists acquire vertical-to-horizontal visuomotor mappings that generalize outside the musical context.

  15. Automatic model acquisition and aerial image understanding

    Science.gov (United States)

    Jaynes, Christopher O.

    This thesis introduces a model-based technique for the automatic recognition and three-dimensional reconstruction of buildings directly from a single range image or stereo processing of multiple optical views of an urban site. Initially, focus-of-attention regions that are likely to contain buildings are segmented from the scene. A perceptual grouping algorithm detects building boundaries as closed polygons in the optical image. When a digital elevation map (DEM) is the only input source available, building regions are detected through direct analysis of the elevation data. Both methods then utilize the key idea of matching a database of shape models against the DEM using a model-indexing procedure that compares orientation histograms for each parameterized model in the database to a histogram that corresponds to the DEM region. The set of models (surfaces) that most closely match the DEM region are used as the initial estimates in a robust surface fitting technique that refines the model parameters (such as orientation and peak-roof angle) of each hypothesized roof surface. The surface model that converges to the DEM with the lowest residual fit error is retained as the most likely description of the surface. The database of surface models contains a limited number of canonical shapes common to rooftops, such as planes, peaks, domes, and gables. Reconstruction of complex shapes is achieved through a composition of different parameterizations of the canonical shape models. We show how the technique can be recursively applied to a range image to segment and reconstruct buildings as well as rooftop substructure. The ability of the model-indexing technique to separate surface models under different resolutions of the parameter space and different levels of noise in the DEM is studied. The approach is evaluated on several datasets, and we demonstrate that this two-phase reconstruction approach allows robust and accurate reconstruction of a wide variety of building

  16. MAPPING INNOVATION

    DEFF Research Database (Denmark)

    Thuesen, Christian Langhoff; Koch, Christian

    2011-01-01

    By adopting a theoretical framework from strategic niche management research (SNM) this paper presents an analysis of the innovation system of the Danish Construction industry. The analysis shows a multifaceted landscape of innovation around an existing regime, built around existing ways of working...... and developed over generations. The regime is challenged from various niches and the socio-technical landscape through trends as globalization. Three niches (Lean Construction, BIM and System Deliveries) are subject to a detailed analysis showing partly incompatible rationales and various degrees of innovation...... potential. The paper further discusses how existing policymaking operates in a number of tensions one being between government and governance. Based on the concepts from SNM the paper introduces an innovation map in order to support the development of meta-governance policymaking. By mapping some...

  17. Classification Space: A Multivariate Procedure For Automatic? Document Indexing And Retrieval.

    Science.gov (United States)

    Ossorio, P G

    1966-10-01

    A conceptual approach to linguistic data processing problems is sketched and empirical illustrations are presented of the major software components - indexing, storage, and retrieval - of a document processing system which offers, in principle, the advantages of complete automation, unlimited cross-indexing, effective sequential retrieval, sub-documentary indexing reflecting heterogeneity of subject matter within a document, and a procedure for automatically identifying retrieval requests which would be inadequately handled by the system. The indexing schema, designated as a "Classification Space", consists of a Euclidean model for mapping subject matter similarity within a given subject matter domain. A schema of this kind is empirically derived for certain fields of Engineering and Chemistry. A set of five related empirical studies provides convincing evidence that when appropriate experimental procedures are followed a very stable C-Space for a given content domain can be constructed on a surprisingly small data base. Other empirical studies demonstrate specific computational procedures for effective automatic indexing of documents in a C-Space, using a relatively small system vocabulary. One study demonstrates that a C-Space maps subject matter relevance as well as subject matter similarity, and thereby promotes effective sequential retrieval; this result is also shown under conditions of automatic indexing. Negative results are found in an attempt to use the structural linguistic distinction of subject and object as a means of improving techniques for automatic indexing.
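
    The underlying retrieval idea, placing documents and queries in a common geometric space and ranking by proximity, can be sketched as below. Ossorio's Classification Space was derived empirically from expert judgements; the simple term-count vectors here are only a stand-in for that space.

```python
# Rough sketch of the idea behind a "classification space": documents and a
# query are placed in a common vector space and retrieval ranks documents by
# geometric proximity. Term-count vectors stand in for the empirically
# derived space, purely for illustration.
import numpy as np

vocabulary = ["valve", "pressure", "reactor", "polymer", "catalyst", "stress"]
documents = {
    "doc1": "pressure valve stress analysis of reactor piping",
    "doc2": "polymer catalyst reaction kinetics",
    "doc3": "catalyst deactivation in polymer reactors",
}

def to_vector(text):
    words = text.split()
    return np.array([words.count(term) for term in vocabulary], dtype=float)

def rank(query):
    q = to_vector(query)
    scores = {}
    for name, text in documents.items():
        d = to_vector(text)
        scores[name] = float(np.linalg.norm(q - d))   # smaller = closer
    return sorted(scores.items(), key=lambda kv: kv[1])

print(rank("reactor pressure valve"))
```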

  18. Traceability Through Automatic Program Generation

    Science.gov (United States)

    Richardson, Julian; Green, Jeff

    2003-01-01

    Program synthesis is a technique for automatically deriving programs from specifications of their behavior. One of the arguments made in favour of program synthesis is that it allows one to trace from the specification to the program. One way in which traceability information can be derived is to augment the program synthesis system so that manipulations and calculations it carries out during the synthesis process are annotated with information on what the manipulations and calculations were and why they were made. This information is then accumulated throughout the synthesis process, at the end of which, every artifact produced by the synthesis is annotated with a complete history relating it to every other artifact (including the source specification) which influenced its construction. This approach requires modification of the entire synthesis system - which is labor-intensive and hard to do without influencing its behavior. In this paper, we introduce a novel, lightweight technique for deriving traceability from a program specification to the corresponding synthesized code. Once a program has been successfully synthesized from a specification, small changes are systematically made to the specification and the effects on the synthesized program observed. We have partially automated the technique and applied it in an experiment to one of our program synthesis systems, AUTOFILTER, and to the GNU C compiler, GCC. The results are promising: 1. Manual inspection of the results indicates that most of the connections derived from the source (a specification in the case of AUTOFILTER, C source code in the case of GCC) to its generated target (C source code in the case of AUTOFILTER, assembly language code in the case of GCC) are correct. 2. Around half of the lines in the target can be traced to at least one line of the source. 3. Small changes in the source often induce only small changes in the target.

  19. Automatic adjustment of astrochronologic correlations

    Science.gov (United States)

    Zeeden, Christian; Kaboth, Stefanie; Hilgen, Frederik; Laskar, Jacques

    2017-04-01

    Here we present an algorithm for the automated adjustment and optimisation of correlations between proxy data and an orbital tuning target (or similar datasets, e.g. ice models) for the R environment (R Development Core Team 2008), building on the 'astrochron' package (Meyers et al. 2014). The basis of this approach is an initial tuning on orbital (precession, obliquity, eccentricity) scale. We use filters of orbital frequency ranges related to e.g. precession, obliquity or eccentricity of data and compare these filters to an ensemble of target data, which may consist of e.g. different combinations of obliquity and precession, different phases of precession and obliquity, a mix of orbital and other data (e.g. ice models), or different orbital solutions. This approach allows for the identification of an ideal mix of precession and obliquity to be used as tuning target. In addition, the uncertainty related to different tuning tie points (and also precession- and obliquity contributions of the tuning target) can easily be assessed. Our message is to suggest an initial tuning and then obtain a reproducible tuned time scale, avoiding arbitrarily chosen tie points and replacing these by automatically chosen ones, representing filter maxima (or minima). We present and discuss the above outlined approach and apply it to artificial and geological data. Artificial data are assessed to find optimal filter settings; real datasets are used to demonstrate the possibilities of such an approach. References: Meyers, S.R. (2014). Astrochron: An R Package for Astrochronology. http://cran.r-project.org/package=astrochron R Development Core Team (2008). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.
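
    The described workflow is implemented in R on top of astrochron; the sketch below only illustrates the general idea in Python: band-pass a proxy record around an orbital frequency band and score candidate tuning targets (pure precession, pure obliquity, or a mix) by correlation. The sample spacing, filter design, and synthetic data are assumptions.

```python
# Illustrative sketch (not the authors' R/astrochron code): band-pass a proxy
# record around an orbital frequency and score candidate tuning targets by
# correlation, so the best precession/obliquity mix can be chosen objectively.
import numpy as np
from scipy.signal import butter, filtfilt

dt = 1.0                      # sample spacing in kyr (assumed)
t = np.arange(0, 1000, dt)
rng = np.random.default_rng(1)
proxy = np.sin(2 * np.pi * t / 41.0) + 0.5 * rng.standard_normal(t.size)

def bandpass(x, low_period, high_period):
    nyq = 0.5 / dt
    b, a = butter(3, [1 / high_period / nyq, 1 / low_period / nyq], "bandpass")
    return filtfilt(b, a, x)

obliquity_band = bandpass(proxy, 35.0, 50.0)     # ~41 kyr band

# Candidate targets: pure obliquity, pure precession, and a 50/50 mix.
targets = {
    "obliquity": np.sin(2 * np.pi * t / 41.0),
    "precession": np.sin(2 * np.pi * t / 23.0),
    "mix": 0.5 * np.sin(2 * np.pi * t / 41.0) + 0.5 * np.sin(2 * np.pi * t / 23.0),
}
for name, target in targets.items():
    r = np.corrcoef(obliquity_band, target)[0, 1]
    print(f"{name:10s} correlation with filtered proxy: {r:+.2f}")
```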

  20. Automatic thoracic body region localization

    Science.gov (United States)

    Bai, PeiRui; Udupa, Jayaram K.; Tong, YuBing; Xie, ShiPeng; Torigian, Drew A.

    2017-03-01

    Radiological imaging and image interpretation for clinical decision making are mostly specific to each body region such as head & neck, thorax, abdomen, pelvis, and extremities. For automating image analysis and consistency of results, standardizing definitions of body regions and the various anatomic objects, tissue regions, and zones in them becomes essential. Assuming that a standardized definition of body regions is available, a fundamental early step needed in automated image and object analytics is to automatically trim the given image stack into image volumes exactly satisfying the body region definition. This paper presents a solution to this problem based on the concept of virtual landmarks and evaluates it on whole-body positron emission tomography/computed tomography (PET/CT) scans. The method first selects a (set of) reference object(s), segments it (them) roughly, and identifies virtual landmarks for the object(s). The geometric relationship between these landmarks and the boundary locations of body regions in the craniocaudal direction is then learned through a neural network regressor, and the locations are predicted. Based on low-dose unenhanced CT images of 180 near whole-body PET/CT scans (which includes 34 whole-body PET/CT scans), the mean localization error for the superior thoracic boundary (TS) and the inferior thoracic boundary (TI), expressed as a number of slices (slice spacing ≈ 4 mm) and using either the skeleton or the pleural spaces as reference objects, is found to be 3 and 2 slices (using the skeleton) and 3 and 5 slices (using the pleural spaces), respectively, or 13 and 10 mm (using the skeleton) and 10.5 and 20 mm (using the pleural spaces), respectively. Improvements of this performance via optimal selection of objects and virtual landmarks, and other object analytics applications, are currently being pursued.
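
    The regression step, learning a mapping from virtual-landmark geometry to craniocaudal boundary locations, might look roughly like the sketch below. The landmark features, the synthetic targets, and the use of scikit-learn's MLPRegressor in place of the paper's neural network regressor are all assumptions.

```python
# Sketch of the regression step: learn a mapping from virtual-landmark
# coordinates of a reference object to the craniocaudal boundary slices of a
# body region. Data here are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_scans, n_landmarks = 150, 10

# Features: flattened (z, y, x) coordinates of virtual landmarks per scan.
X = rng.random((n_scans, n_landmarks * 3))
# Targets: superior and inferior thoracic boundary slice indices (synthetic
# linear relation plus noise, just so the example has something to fit).
w = rng.random(X.shape[1])
y = np.column_stack([X @ w * 50 + 100, X @ w * 50 + 20]) + rng.normal(0, 2, (n_scans, 2))

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
model.fit(X[:120], y[:120])
pred = model.predict(X[120:])
err_slices = np.abs(pred - y[120:]).mean(axis=0)
print("mean |error| in slices (TS, TI):", np.round(err_slices, 1))
```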

  1. 2010 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2010 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  2. Authentic Material and Automaticity for Teaching English

    National Research Council Canada - National Science Library

    Widyastuti Widyastuti

    2017-01-01

    .... It has been suggested that Authentic Material and Automaticity Theory not only creates a friendly and fun condition in teaching reading but helps students to study comprehensibly so they are able...

  3. Automatic coding of online collaboration protocols

    NARCIS (Netherlands)

    Erkens, Gijsbert; Janssen, J.J.H.M.

    2006-01-01

    An automatic coding procedure is described to determine the communicative functions of messages in chat discussions. Five main communicative functions are distinguished: argumentative (indicating a line of argumentation or reasoning), responsive (e.g., confirmations, denials, and answers),

  4. Automatic lexical classification: bridging research and practice.

    Science.gov (United States)

    Korhonen, Anna

    2010-08-13

    Natural language processing (NLP)--the automatic analysis, understanding and generation of human language by computers--is vitally dependent on accurate knowledge about words. Because words change their behaviour between text types, domains and sub-languages, a fully accurate static lexical resource (e.g. a dictionary, word classification) is unattainable. Researchers are now developing techniques that could be used to automatically acquire or update lexical resources from textual data. If successful, the automatic approach could considerably enhance the accuracy and portability of language technologies, such as machine translation, text mining and summarization. This paper reviews the recent and on-going research in automatic lexical acquisition. Focusing on lexical classification, it discusses the many challenges that still need to be met before the approach can benefit NLP on a large scale.

  5. Automatic Radiometric Normalization of Multitemporal Satellite Imagery

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg; Schmidt, Michael

    2004-01-01

    The linear scale invariance of the multivariate alteration detection (MAD) transformation is used to obtain invariant pixels for automatic relative radiometric normalization of time series of multispectral data. Normalization by means of ordinary least squares regression method is compared...
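
    The normalization step itself, once invariant (no-change) pixels have been identified, reduces to a per-band ordinary least squares fit, as sketched below with synthetic data; the MAD transformation that selects the invariant pixels is not shown.

```python
# Sketch of the normalization step only: given invariant (no-change) pixels
# already identified (e.g. by the MAD transformation), fit a per-band linear
# map from the target image to the reference image. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_invariant = 2000

reference = rng.uniform(0, 1, (n_invariant, 4))            # 4 spectral bands
gain_true, offset_true = 1.3, 0.05
target = (reference - offset_true) / gain_true + rng.normal(0, 0.01, reference.shape)

gains, offsets = [], []
for band in range(reference.shape[1]):
    # Ordinary least squares: reference = gain * target + offset
    gain, offset = np.polyfit(target[:, band], reference[:, band], deg=1)
    gains.append(gain)
    offsets.append(offset)

print("per-band gains:  ", np.round(gains, 3))
print("per-band offsets:", np.round(offsets, 3))
# Normalizing a full target scene would then be: gain * scene_band + offset.
```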

  6. 2009 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2009 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  7. 2014 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2014 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  8. 2012 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2012 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  9. 2011 United States Automatic Identification System Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 2011 United States Automatic Identification System Database contains vessel traffic data for planning purposes within the U.S. coastal waters. The database is...

  10. 12th Portuguese Conference on Automatic Control

    CERN Document Server

    Soares, Filomena; Moreira, António

    2017-01-01

    The biennial CONTROLO conferences are the main events promoted by the Portuguese Association for Automatic Control – APCA, national member organization of the International Federation of Automatic Control – IFAC. CONTROLO 2016 – the 12th Portuguese Conference on Automatic Control, Guimarães, Portugal, September 14th to 16th – was organized by Algoritmi, School of Engineering, University of Minho, in partnership with INESC TEC, and promoted by APCA. The seventy-five papers published in this volume cover a wide range of topics. Thirty-one of them, of a more theoretical nature, are distributed among the first five parts: Control Theory; Optimal and Predictive Control; Fuzzy, Neural and Genetic Control; Modeling and Identification; Sensing and Estimation. The papers go from cutting-edge theoretical research to innovative control applications and show expressively how Automatic Control can be used to increase the well-being of people.

  11. The Rationalization of Automatic Units for HPDC Technology

    Directory of Open Access Journals (Sweden)

    A. Herman

    2012-04-01

    Full Text Available The paper deals with the problem of optimally using an automatic workplace for HPDC technology - mainly from the aspects of operation sequence, work-cycle efficiency, and planning the use and servicing of the HPDC casting machine. Possible ways to analyse automatic units for HPDC are presented. The experimental part focused on rationalizing the current work-cycle time for die casting of an aluminium alloy. The working place was described in detail in the project, and detailed measurements were carried out with the help of charts and graphs that mapped the cycle of the casting workplace; other parameters and settings were also identified. Proposals for improvements were made after the first measurements, and these improvements were subsequently verified. The main actions were chiefly software modifications of the casting centre, because today's sophisticated workplaces allow a relatively wide range of modifications without any physical harm to the machines themselves: it is possible to change settings or unlock some unsatisfactory parameters.

  12. An automatic taxonomy of galaxy morphology using unsupervised machine learning

    Science.gov (United States)

    Hocking, Alex; Geach, James E.; Sun, Yi; Davey, Neil

    2018-01-01

    We present an unsupervised machine learning technique that automatically segments and labels galaxies in astronomical imaging surveys using only pixel data. Distinct from previous unsupervised machine learning approaches used in astronomy we use no pre-selection or pre-filtering of target galaxy type to identify galaxies that are similar. We demonstrate the technique on the Hubble Space Telescope (HST) Frontier Fields. By training the algorithm using galaxies from one field (Abell 2744) and applying the result to another (MACS 0416.1-2403), we show how the algorithm can cleanly separate early and late type galaxies without any form of pre-directed training for what an 'early' or 'late' type galaxy is. We then apply the technique to the HST Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) fields, creating a catalogue of approximately 60 000 classifications. We show how the automatic classification groups galaxies of similar morphological (and photometric) type and make the classifications public via a catalogue, a visual catalogue and galaxy similarity search. We compare the CANDELS machine-based classifications to human-classifications from the Galaxy Zoo: CANDELS project. Although there is not a direct mapping between Galaxy Zoo and our hierarchical labelling, we demonstrate a good level of concordance between human and machine classifications. Finally, we show how the technique can be used to identify rarer objects and present lensed galaxy candidates from the CANDELS imaging.
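
    As a loose analogy only (the paper's pipeline is quite different), unsupervised grouping of morphologies can be sketched by extracting a couple of per-image features and clustering them; the synthetic cutouts, features, and use of k-means below are illustrative assumptions.

```python
# Generic sketch of unsupervised morphology grouping (not the authors'
# pipeline): extract a few simple per-image features and cluster them with
# k-means. Images here are synthetic blobs standing in for galaxy cutouts.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def make_galaxy(concentrated):
    """Toy 32x32 cutout: concentrated (early-type-like) or extended."""
    y, x = np.mgrid[-16:16, -16:16]
    r = np.sqrt(x**2 + y**2)
    scale = 2.0 if concentrated else 8.0
    return np.exp(-r / scale) + 0.02 * rng.standard_normal((32, 32))

def features(img):
    total = img.sum()
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    cy, cx = (img * y).sum() / total, (img * x).sum() / total
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    half_light_radius = np.average(r, weights=np.clip(img, 0, None))
    central_fraction = img[12:20, 12:20].sum() / total
    return [half_light_radius, central_fraction]

images = [make_galaxy(i % 2 == 0) for i in range(60)]
X = np.array([features(im) for im in images])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```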

  13. Automatic Thread-Level Parallelization in the Chombo AMR Library

    Energy Technology Data Exchange (ETDEWEB)

    Christen, Matthias; Keen, Noel; Ligocki, Terry; Oliker, Leonid; Shalf, John; Van Straalen, Brian; Williams, Samuel

    2011-05-26

    The increasing on-chip parallelism has some substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite difference type PDE solvers. In Chombo, core algorithms are specified in the ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language forms an already used target language for an automatic migration of the large number of existing algorithms into a hybrid MPI+OpenMP implementation. It also provides access to the auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  14. Automatic inpainting scheme for video text detection and removal.

    Science.gov (United States)

    Mosleh, Ali; Bouguila, Nizar; Ben Hamza, Abdessamad

    2013-11-01

    We present a two-stage framework for automatic video text removal that detects and removes embedded video text and fills in the remaining regions with appropriate data. In the video text detection stage, text locations in each frame are found via an unsupervised clustering performed on the connected components produced by the stroke width transform (SWT). Since SWT needs an accurate edge map, we develop a novel edge detector which benefits from the geometric features revealed by the bandlet transform. Next, the motion patterns of the text objects of each frame are analyzed to localize video texts. The detected video text regions are removed, then the video is restored by an inpainting scheme. The proposed video inpainting approach applies spatio-temporal geometric flows extracted by bandlets to reconstruct the missing data. A 3D volume regularization algorithm, which takes advantage of bandlet bases in exploiting the anisotropic regularities, is introduced to carry out the inpainting task. The method does not need extra processes to satisfy visual consistency. The experimental results demonstrate the effectiveness of both our proposed video text detection approach and the video completion technique, and consequently the entire automatic video text removal and restoration process.
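
    A heavily simplified sketch of the remove-and-fill stage is shown below: once the text region is localized (here a hand-placed mask instead of the SWT-based detector), the hole is filled with OpenCV's inpainting. The bandlet-based spatio-temporal reconstruction described in the paper is not reproduced.

```python
# Highly simplified sketch of the remove-and-fill step: mask a detected text
# box and fill the hole with OpenCV inpainting. The detection stage and the
# bandlet-based 3D regularization of the paper are not reproduced.
import numpy as np
import cv2

# Toy frame: smooth gradient background with white "caption" text burned in.
frame = np.tile(np.linspace(50, 200, 320, dtype=np.uint8), (240, 1))
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
cv2.putText(frame, "CAPTION", (60, 200), cv2.FONT_HERSHEY_SIMPLEX,
            1.5, (255, 255, 255), 3)

# Mask of the detected text region (assumed output of the detection stage).
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[175:215, 50:270] = 255

restored = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)  # radius 5 px
cv2.imwrite("restored_frame.png", restored)
print("inpainted", int(mask.sum() / 255), "pixels")
```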

  15. Calculation of climatic reference values and its use for automatic outlier detection in meteorological datasets

    Directory of Open Access Journals (Sweden)

    B. Téllez

    2008-04-01

    Full Text Available The climatic reference values for monthly and annual average air temperature and total precipitation in Catalonia – northeast of Spain – are calculated using a combination of statistical methods and geostatistical techniques of interpolation. In order to estimate the uncertainty of the method, the initial dataset is split into two parts that are, respectively, used for estimation and validation. The resulting maps are then used in the automatic outlier detection in meteorological datasets.
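
    The outlier-detection idea, comparing each observation against a reference value interpolated from its neighbours, can be sketched as below. Linear interpolation via scipy stands in for the paper's geostatistical (kriging) interpolation, and the station data and tolerance are synthetic assumptions.

```python
# Illustrative sketch (not the paper's kriging workflow): interpolate a
# reference value from neighbouring stations and flag observations that
# depart from it by more than a tolerance. Station data are synthetic.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
n_stations = 200
xy = rng.uniform(0, 100, (n_stations, 2))                 # station coordinates (km)
temp = 15 + 0.05 * xy[:, 0] - 0.03 * xy[:, 1] + rng.normal(0, 0.3, n_stations)
temp[17] += 8.0                                            # inject one gross error

flags = np.zeros(n_stations, dtype=bool)
for i in range(n_stations):
    others = np.ones(n_stations, dtype=bool)
    others[i] = False                                      # leave-one-out
    ref = griddata(xy[others], temp[others], xy[i:i + 1], method="linear")[0]
    if not np.isnan(ref) and abs(temp[i] - ref) > 3.0:     # 3 degC tolerance
        flags[i] = True

print("flagged stations:", np.where(flags)[0])
```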

  16. Developing an automatic text cartographer tool for assisting dyslexic learners with text comprehension

    OpenAIRE

    Laurent, Mario; Chanier, Thierry

    2013-01-01

    People with language impairment, like dyslexia, have significant difficulty reading and writing. This difficulty persists when they use a computer environment such as a word processor, so they need adapted tools. For this purpose, we are developing LICI, a tool to automatically generate a map, conceptual or heuristic, from a text. This will facilitate text understanding in a short time and will greatly help dyslexics during school activities or learning tasks.

  17. Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia

    Science.gov (United States)

    Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin

    2013-10-01

    This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the horse larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained by the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough Transform method is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach performs better at extracting the targeted contours of the equine larynx than the gPb-OWT-UCM method alone.

  18. Automatic shape model building based on principal geodesic analysis bootstrapping

    DEFF Research Database (Denmark)

    Dam, Erik B; Fletcher, P Thomas; Pizer, Stephen M

    2008-01-01

    We present a novel method for automatic shape model building from a collection of training shapes. The result is a shape model consisting of the mean model and the major modes of variation with a dense correspondence map between individual shapes. The framework consists of iterations where a medial...... shape representation is deformed into the training shapes followed by computation of the shape mean and modes of shape variation. In the first iteration, a generic shape model is used as starting point - in the following iterations in the bootstrap method, the resulting mean and modes from the previous...... iteration are used. Thereby, we gradually capture the shape variation in the training collection better and better. Convergence of the method is explicitly enforced. The method is evaluated on collections of artificial training shapes where the expected shape mean and modes of variation are known by design...
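
    A linear stand-in for the notion of a mean shape plus modes of variation is sketched below using ordinary PCA on corresponding landmark points; the paper itself works with medial representations and principal geodesic analysis, so this is only an analogy.

```python
# Linear stand-in for "mean shape plus modes of variation": ordinary PCA on
# corresponding 2D landmark points. The paper uses medial representations and
# principal geodesic analysis; this only shows the same structural idea.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 40, 30
angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)

shapes = []
for _ in range(n_shapes):
    a, b = 1.0 + 0.2 * rng.standard_normal(), 0.6 + 0.1 * rng.standard_normal()
    pts = np.column_stack([a * np.cos(angles), b * np.sin(angles)])
    shapes.append(pts.ravel())                    # corresponding landmarks
X = np.array(shapes)

mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first two modes:", np.round(explained[:2], 2))

# A new shape instance can be synthesised as mean + sum_k b_k * mode_k.
b = np.array([1.5, -0.5])
new_shape = (mean_shape + b @ Vt[:2]).reshape(n_points, 2)
print("synthesised shape, first landmark:", np.round(new_shape[0], 3))
```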

  19. Automatic Review of Abstract State Machines by Meta Property Verification

    Science.gov (United States)

    Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia

    2010-01-01

    A model review is a validation technique aimed at determining if a model is of sufficient quality and allows defects to be identified early in the system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first detect a family of typical vulnerabilities and defects a developer can introduce during the modeling activity using the ASMs and we express such faults as the violation of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the result of applying this ASM review process to several specifications.

  20. Automatic Control of Freeboard and Turbine Operation

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Frigaard, Peter Bak; Friis-Madsen, Erik

    The report deals with the modules for automatic control of freeboard and turbine operation on board the Wave dragon, Nissum Bredning (WD-NB) prototype, and covers what has been going on up to ultimo 2003.

  1. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathan

    2010-05-31

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  2. An Automatic Monitoring System for Vehicle Drivers

    OpenAIRE

    Selime Ozaktas; Feyza Galip; Ibrahim Furkan Ince; Md. Haidar Sharif

    2016-01-01

    It is a key issue to protect life and property from accidents caused by vehicle drivers. Alcohol can, speed, drowsy driving, and sudden heart attack are the major reasons for road accidents, which can lead to severe physical injuries, deaths, and serious economic losses. Various methods have been proposed to automatically detect those causes in order to prevent accidents. We have addressed an automatic system to provide protection of drivers and travelers by dint of computer vision techniques al...

  3. An Automatic Monitoring System for Vehicle Drivers

    OpenAIRE

    Selime Ozaktas; Feyza Galip; Ibrahim Furkan Ince; Md. Haidar Sharif

    2016-01-01

    It is a key issue to protect life and property from accidents caused by vehicle drivers. Alcohol can, speed, drowsy driving, and sudden heart attack are the major reasons for road accidents, which can lead to severe physical injuries, deaths, and serious economic losses. Various methods have been proposed to automatically detect those causes in order to prevent accidents. We have addressed an automatic system to provide protection of drivers and travelers by dint of computer vision techniques alon...

  4. Automatic attitudes and health information avoidance.

    Science.gov (United States)

    Howell, Jennifer L; Ratliff, Kate A; Shepperd, James A

    2016-08-01

    Early detection of disease is often crucially important for positive health outcomes, yet people sometimes decline opportunities for early detection (e.g., opting not to screen). Although some health-information avoidance reflects a deliberative decision, we propose that information avoidance can also reflect an automatic, nondeliberative reaction. In the present research, we investigated whether people's automatic attitude toward learning health information predicted their avoidance of risk feedback. In 3 studies, we gave adults the opportunity to learn their risk for a fictitious disease (Study 1), melanoma skin cancer (Study 2), or heart disease (Study 3), and examined whether they opted to learn their risk. The primary predictors were participants' attitudes about learning health information measured using a traditional (controlled) self-report instrument and using speeded (automatic) self-report measure. In addition, we prompted participants in Study 3 to contemplate their motives for seeking or avoiding information prior to making their decision. Across the 3 studies, self-reported (controlled) and implicitly measured (automatic) attitudes about learning health information independently predicted avoidance of the risk feedback, suggesting that automatic attitudes explain unique variance in the decision to avoid health information. In Study 3, prompting participants to contemplate their reasons for seeking versus avoiding health information reduced information avoidance. Surprisingly, it did so by inducing reliance on automatic, rather than controlled, attitudes. The data suggests that automatic processes play an important role in predicting health information avoidance and suggest that interventionists aiming to increase information seeking might fruitfully target automatic processes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Towards Automatic Resource Bound Analysis for OCaml

    OpenAIRE

    Hoffmann, Jan; Das, Ankush; Weng, Shu-Chun

    2016-01-01

    This article presents a resource analysis system for OCaml programs. This system automatically derives worst-case resource bounds for higher-order polymorphic programs with user-defined inductive types. The technique is parametric in the resource and can derive bounds for time, memory allocations and energy usage. The derived bounds are multivariate resource polynomials which are functions of different size parameters that depend on the standard OCaml types. Bound inference is fully automatic...

  6. Self-Control Over Automatic Associations

    OpenAIRE

    Gonsalkorale, K; Sherman, JW; Allen, TJ

    2010-01-01

    © 2010 by Oxford University Press, Inc. All rights reserved. Processes that permit control over automatic impulses are critical to a range of goaldirected behaviors. This chapter examines the role of self-control in implicit attitudes. It is widely assumed that implicit attitude measures reflect the automatic activation of stored associations, whose expression cannot be altered by controlled processes. We review research from the Quad model (Sherman et al., 2008) to highlight the importance o...

  7. Online Automatic Post-Editing across Domains

    OpenAIRE

    Chatterjee, Rajen; Gebremelak, Gebremedhen; Negri, Matteo; Turchi, Marco

    2017-01-01

    Recent advances in automatic post-editing (APE) have shown that it is possible to automatically correct systematic errors made by machine translation systems. However, most of the current APE techniques have only been tested in controlled batch environments, where training and test data are sampled from the same distribution and the training set is fully available. In this paper, we propose an online APE system based on an instance selection mechanism that is able to efficiently work with a s...

  8. Tinnitus, anxiety and automatic processing of affective information: an explorative study.

    Science.gov (United States)

    Ooms, Els; Vanheule, Stijn; Meganck, Reitske; Vinck, Bart; Watelet, Jean-Baptiste; Dhooge, Ingeborg

    2013-03-01

    Anxiety is found to play an important role in the severity complaint of tinnitus patients. However, when investigating anxiety in tinnitus patients, most studies make use of verbal reports of affect (e.g., self-report questionnaires and/or interviews). These methods reflect conscious appraisals of anxiety, but do not map underlying processing mechanisms. Nonetheless, such mechanisms, like the automatic processing of affective information, are important as they modulate emotional experience and emotion-related behaviour. Research showed that highly anxious people process threatening information (e.g., fearful and angry faces) faster than non-anxious people. Therefore, this study investigates whether tinnitus patients process affective stimuli (happy, sad, fearful, and angry faces) in the same way as highly anxious people do. Our sample consisted out of 67 consecutive tinnitus patients. Relationships between tinnitus severity, pitch, loudness, hearing loss, and the automatic processing of affective information were explored. Results indicate that especially in severely distressed tinnitus patients, the severity complaint is highly related to the automatic processing of fearful (r = 0.37, p anxiety, we did find that the audiological characteristic, loudness, tends to be in some degree related to the automatic processing of fearful faces (r = 0.25, p = 0.08). We conclude that tinnitus is an anxiety-related problem on an automatic processing level.

  9. Reasoning Maps

    OpenAIRE

    Falcão, Renato Pinto de Queiroz

    2003-01-01

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Produção. This dissertation presents a decision-support tool, based on the Multicriteria Decision Aid methodology - MCDA, through the development of a software package called Reasoning Maps. The software allows, in an integrated manner, the construction of cognitive maps, their various topological analyses, and the registration and analysis of alternatives...

  10. Projective mapping

    DEFF Research Database (Denmark)

    Dehlholm, Christian; Brockhoff, Per B.; Bredie, Wender Laurentius Petrus

    2012-01-01

    instructions and heavily influence the product placements and the descriptive vocabulary (Dehlholm et al., 2012b). The type of assessor performing the method influences results, an extra aspect in Projective Mapping compared to more analytical tests, as the given spontaneous perceptions are much dependent...... the applied framework, semantic restrictions, the choice of type of assessors and the validation of product separations. The applied framework concerns the response surface as presented to the assessor in different shapes, e.g. rectangular, square or round. Semantic restrictions are a part of the assessor...

  11. Automated lineament mapping from remotely sensed data: case ...

    African Journals Online (AJOL)

    The automatic procedures of PCI LINE and Imagine Objective Line Extraction were adopted to extract lineaments from Landsat OLI imagery (band 6) and Digital Elevation Models (SPOT DEM and ASTER DEM) of Osun Drainage Basin, Southwestern Nigeria. This was with a view to optimally map lineaments within the basin ...

  12. 30 CFR 77.314 - Automatic temperature control instruments.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Automatic temperature control instruments. 77... UNDERGROUND COAL MINES Thermal Dryers § 77.314 Automatic temperature control instruments. (a) Automatic temperature control instruments for thermal dryer system shall be of the recording type. (b) Automatic...

  13. Mapping of

    Directory of Open Access Journals (Sweden)

    Sayed M. Arafat

    2014-06-01

    Full Text Available A land cover map of North Sinai was produced based on the FAO Land Cover Classification System (LCCS) of 2004. The standard FAO classification scheme provides a standardized system of classification that can be used to analyze spatial and temporal land cover variability in the study area. This approach also has the advantage of facilitating the integration of the Sinai land cover mapping products into regional and global land cover datasets. The study area covers a total of 20,310.4 km2 (203,104 hectare). The landscape classification was based on SPOT4 data acquired in 2011 using combined multispectral bands of 20 m spatial resolution. A Geographic Information System (GIS) was used to manipulate the attributed layers of classification in order to reach the maximum possible accuracy, and to include all other necessary information. The identified vegetative land cover classes of the study area are irrigated herbaceous crops, irrigated tree crops and rain-fed tree crops. The non-vegetated land covers in the study area include bare rock, bare soils (stony, very stony and salt crusts), loose and shifting sands, and sand dunes. The water bodies were classified as artificial perennial water bodies (fish ponds and irrigation canals) and natural perennial water bodies such as lakes (standing). The artificial surfaces include linear and non-linear features.

  14. Practical automatic Arabic license plate recognition system

    Science.gov (United States)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Since the 1970s, the need for an automatic license plate recognition system, sometimes referred to as an Automatic License Plate Recognition system, has been increasing. A license plate recognition system is an automatic system that is able to recognize a license plate number extracted from image sensors. In particular, Automatic License Plate Recognition systems are being used in conjunction with various transportation systems in application areas such as law enforcement (e.g. speed limit enforcement) and commercial usages such as parking enforcement, automatic toll payment, private and public entrances, border control, and theft and vandalism control. Vehicle license plate recognition has been intensively studied in many countries. Due to the different types of license plates being used, the requirements of an automatic license plate recognition system differ for each country. Generally, an automatic license plate localization and recognition system is made up of three modules: license plate localization, character segmentation and optical character recognition modules. This paper presents an Arabic license plate recognition system that is insensitive to character size, font, shape and orientation, with an extremely high accuracy rate. The proposed system is based on a combination of enhancement, license plate localization, morphological processing, and feature vector extraction using the Haar transform. The performance of the system is fast due to classification of alphabet and numerals based on the license plate organization. Experimental results for license plates from two different Arab countries show an average of 99% successful license plate localization and recognition in a total of more than 20 different images captured from a complex outdoor environment. The run times are shorter than those of conventional and many state-of-the-art methods.

  15. Attention to Automatic Movements in Parkinson's Disease: Modified Automatic Mode in the Striatum.

    Science.gov (United States)

    Wu, Tao; Liu, Jun; Zhang, Hejia; Hallett, Mark; Zheng, Zheng; Chan, Piu

    2015-10-01

    We investigated neural correlates when attending to a movement that could be made automatically in healthy subjects and Parkinson's disease (PD) patients. Subjects practiced a visuomotor association task until they could perform it automatically, and then directed their attention back to the automated task. Functional MRI was obtained during the early-learning, automatic stage, and when re-attending. In controls, attention to automatic movement induced more activation in the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex, and rostral supplementary motor area. The motor cortex received more influence from the cortical motor association regions. In contrast, the pattern of the activity and connectivity of the striatum remained at the level of the automatic stage. In PD patients, attention enhanced activity in the DLPFC, premotor cortex, and cerebellum, but the connectivity from the putamen to the motor cortex decreased. Our findings demonstrate that, in controls, when a movement achieves the automatic stage, attention can influence the attentional networks and cortical motor association areas, but has no apparent effect on the striatum. In PD patients, attention induces a shift from the automatic mode back to the controlled pattern within the striatum. The shifting between controlled and automatic behaviors relies in part on striatal function. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. a Laser-Slam Algorithm for Indoor Mobile Mapping

    Science.gov (United States)

    Zhang, Wenjun; Zhang, Qiao; Sun, Kai; Guo, Sheng

    2016-06-01

    A novel Laser-SLAM algorithm is presented for real indoor environment mobile mapping. SLAM algorithms can be divided into two classes: Bayes filter-based and graph optimization-based. The former often struggles to guarantee consistency and accuracy in large-scale environment mapping because of the error that accumulates during incremental mapping. Graph optimization-based SLAM methods often assume predetermined landmarks, which are difficult to obtain when mapping unknown environments, and there is most likely a large difference between the optimized result and the real data because the constraints are too few. This paper designs a sub-map method that maps more accurately without predetermined landmarks and avoids the impact of the already-drawn map on the agent's localization. The tree structure of the sub-maps can be indexed quickly and reduces memory consumption during mapping. The algorithm combines Bayes-based and graph optimization-based SLAM: it creates virtual landmarks automatically by associating sub-map data for graph optimization, and the graph optimization then guarantees consistency and accuracy in large-scale environment mapping and improves the reasonableness and reliability of the optimization results. Experimental results are presented with a laser sensor (UTM 30LX) in official buildings and shopping centres, which prove that the proposed algorithm can obtain 2D maps with 10 cm precision in indoor environments ranging from several hundred to 12,000 square metres.

  17. Automatic Fastening Large Structures: a New Approach

    Science.gov (United States)

    Lumley, D. F.

    1985-01-01

    The external tank (ET) intertank structure for the space shuttle, a 27.5 ft diameter, 22.5 ft long, externally stiffened, mechanically fastened skin-stringer-frame structure, was a labor-intensive manual assembly built on a modified Saturn tooling position. A new approach was developed based on half-section subassemblies. The heart of this manufacturing approach will be a 33 ft high vertical automatic riveting system with a 28 ft rotary positioner coming on-line in mid 1985. The Automatic Riveting System incorporates many of the latest automatic riveting technologies. Key features include: vertical columns with two sets of independently operating CNC drill-riveting heads; the capability to drill, insert, and upset any one-piece fastener up to 3/8 inch diameter, including slugs, without displacing the workpiece; an offset bucking ram with programmable rotation and deep retraction; a vision system for automatic parts program re-synchronization and part edge margin control; and an automatic rivet selection/handling system.

  18. Automaticity: Componential, Causal, and Mechanistic Explanations.

    Science.gov (United States)

    Moors, Agnes

    2016-01-01

    The review first discusses componential explanations of automaticity, which specify non/automaticity features (e.g., un/controlled, un/conscious, non/efficient, fast/slow) and their interrelations. Reframing these features as factors that influence processes (e.g., goals, attention, and time) broadens the range of factors that can be considered (e.g., adding stimulus intensity and representational quality). The evidence reviewed challenges the view of a perfect coherence among goals, attention, and consciousness, and supports the alternative view that (a) these and other factors influence the quality of representations in an additive way (e.g., little time can be compensated by extra attention or extra stimulus intensity) and that (b) a first threshold of this quality is required for unconscious processing and a second threshold for conscious processing. The review closes with a discussion of causal explanations of automaticity, which specify factors involved in automatization such as repetition and complexity, and a discussion of mechanistic explanations, which specify the low-level processes underlying automatization.

  19. Automatic Power Line Inspection Using UAV Images

    Directory of Open Access Journals (Sweden)

    Yong Zhang

    2017-08-01

    Full Text Available Power line inspection ensures the safe operation of a power transmission grid. Using unmanned aerial vehicle (UAV) images of power line corridors is an effective way to carry out these vital inspections. In this paper, we propose an automatic inspection method for power lines using UAV images. This method, known as the power line automatic measurement method based on epipolar constraints (PLAMEC), acquires the spatial position of the power lines. Then, the semi patch matching based on epipolar constraints (SPMEC) dense matching method is applied to automatically extract dense point clouds within the power line corridor. Obstacles can then be automatically detected by calculating the spatial distance between a power line and the point cloud representing the ground. Experimental results show that the PLAMEC automatically measures power lines effectively with a measurement accuracy consistent with that of manual stereo measurements. The height root mean square (RMS) error of the point cloud was 0.233 m, and the RMS error of the power line was 0.205 m. In addition, we verified the detected obstacles in the field and measured the distance between the canopy and the power line using a laser range finder. The results show that the difference between these two distances was within ±0.5 m.
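
    The obstacle check described above reduces to a clearance test between the reconstructed power line and the dense point cloud. A rough sketch of that test follows; the geometry helper, the sample coordinates, and the 4 m clearance threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

def point_to_segment_distance(points, a, b):
    """Shortest 3-D distance from each point to the segment a-b."""
    ab = b - a
    t = np.clip((points - a) @ ab / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(points - closest, axis=1)

# Hypothetical data: one power-line span and a small canopy point cloud.
span_start = np.array([0.0, 0.0, 20.0])
span_end = np.array([100.0, 0.0, 22.0])
canopy_points = np.array([
    [50.0, 1.0, 15.0],
    [60.0, 0.5, 19.5],
    [70.0, 3.0, 10.0],
])

clearance = 4.0  # required safety distance in metres (assumed value)
d = point_to_segment_distance(canopy_points, span_start, span_end)
obstacles = canopy_points[d < clearance]
print("distances:", np.round(d, 2))
print("obstacle points:\n", obstacles)
```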

  20. a Conceptual Framework for Indoor Mapping by Using Grammars

    Science.gov (United States)

    Hu, X.; Fan, H.; Zipf, A.; Shang, J.; Gu, F.

    2017-09-01

    Maps are the foundation of indoor location-based services. Many automatic indoor mapping approaches have been proposed, but they rely highly on sensor data, such as point clouds and users' location traces. To address this issue, this paper presents a conceptual framework to represent the layout principle of research buildings by using grammars. This framework can benefit the indoor mapping process by improving the accuracy of generated maps and by dramatically reducing the volume of the sensor data required by traditional reconstruction approaches. In addition, we present further details of several core modules of the framework. An example using the proposed framework is given to show the generation process of a semantic map. This framework is part of ongoing research on developing an approach for reconstructing semantic maps.

  1. Mapping Resilience

    DEFF Research Database (Denmark)

    Carruth, Susan

    2015-01-01

    Resilience theory is a growing discipline with great relevance for the discipline of planning, particularly in fields like energy planning that face great uncertainty and rapidly transforming contexts. Building on the work of the Stockholm Resilience Centre, this paper begins by outlining...... the relationship between resilience and energy planning, suggesting that planning in, and with, time is a core necessity in this domain. It then reviews four examples of graphically mapping with time, highlighting some of the key challenges, before tentatively proposing a graphical language to be employed...... by planners when aiming to construct resilient energy plans. It concludes that a graphical language has the potential to be a significant tool, flexibly facilitating cross-disciplinary communication and decision-making, while emphasising that its role is to support imaginative, resilient planning rather than...

  2. Urban forest topographical mapping using UAV LIDAR

    Science.gov (United States)

    Putut Ash Shidiq, Iqbal; Wibowo, Adi; Kusratmoko, Eko; Indratmoko, Satria; Ardhianto, Ronni; Prasetyo Nugroho, Budi

    2017-12-01

    Topographical data are highly needed by many parties, such as government institutions, mining companies, and the agricultural sector. Precision is not the only concern; acquisition time and data processing must also be carefully considered. In relation to forest management, a high-accuracy topographic map is necessary for planning, close monitoring, and evaluating forest changes. One solution for mapping topography quickly and precisely is a remote sensing system. In this study, we test high-resolution Light Detection and Ranging (LiDAR) data collected from unmanned aerial vehicles (UAV) to map topography and differentiate vegetation classes by height in the urban forest area of the University of Indonesia (UI). Semi-automatic and manual classifications were applied to divide the point cloud into two main classes, namely ground and vegetation. A total of 15,806,380 points were obtained during post-processing, of which 2.39% were classified as ground.

  3. Automatic and Accurate Shadow Detection Using Near-Infrared Information.

    Science.gov (United States)

    Rüfenacht, Dominic; Fredembach, Clément; Süsstrunk, Sabine

    2014-08-01

    We present a method to automatically detect shadows in a fast and accurate manner by taking advantage of the inherent sensitivity of digital camera sensors to the near-infrared (NIR) part of the spectrum. Dark objects, which confound many shadow detection algorithms, often have much higher reflectance in the NIR. We can thus build an accurate shadow candidate map based on image pixels that are dark both in the visible and NIR representations. We further refine the shadow map by incorporating ratios of the visible to the NIR image, based on the observation that commonly encountered light sources have very distinct spectra in the NIR band. The results are validated on a new database, which contains visible/NIR images for a large variety of real-world shadow creating illuminant conditions, as well as manually labeled shadow ground truth. Both quantitative and qualitative evaluations show that our method outperforms current state-of-the-art shadow detection algorithms in terms of accuracy and computational efficiency.
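
    The core of the candidate-map step can be sketched in a few lines: flag pixels that are dark in both the visible and the NIR channel, and keep a visible/NIR ratio image for the later refinement. This is only an illustration of the idea; the threshold, the input normalisation, and the function names are assumptions rather than the authors' actual pipeline.

```python
import numpy as np

def shadow_candidates(vis_lum, nir, dark_thresh=0.3, eps=1e-6):
    """Toy shadow-candidate map: pixels dark in both the visible luminance
    and the NIR channel are flagged, and a visible/NIR ratio map is returned
    for further refinement. Inputs are float images scaled to [0, 1]."""
    dark_vis = vis_lum < dark_thresh
    dark_nir = nir < dark_thresh
    candidates = dark_vis & dark_nir
    ratio = vis_lum / (nir + eps)   # helps separate dark objects from true shadows
    return candidates, ratio

# Example with random arrays standing in for registered VIS/NIR images.
rng = np.random.default_rng(0)
vis = rng.random((4, 4))
nir = rng.random((4, 4))
mask, ratio = shadow_candidates(vis, nir)
print(mask.astype(int))
```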

  4. Automatic structural matching of 3D image data

    Science.gov (United States)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  5. Automatic inference of indexing rules for MEDLINE

    Directory of Open Access Journals (Sweden)

    Shooshan Sonya E

    2008-11-01

    Full Text Available Abstract Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.

  6. Automatic welding of stainless steel tubing

    Science.gov (United States)

    Clautice, W. E.

    1978-01-01

    The use of automatic welding for making girth welds in stainless steel tubing was investigated as well as the reduction in fabrication costs resulting from the elimination of radiographic inspection. Test methodology, materials, and techniques are discussed, and data sheets for individual tests are included. Process variables studied include welding amperes, revolutions per minute, and shielding gas flow. Strip chart recordings, as a definitive method of insuring weld quality, are studied. Test results, determined by both radiographic and visual inspection, are presented and indicate that once optimum welding procedures for specific sizes of tubing are established, and the welding machine operations are certified, then the automatic tube welding process produces good quality welds repeatedly, with a high degree of reliability. Revised specifications for welding tubing using the automatic process and weld visual inspection requirements at the Kennedy Space Center are enumerated.

  7. Applications of automatic differentiation in topology optimization

    DEFF Research Database (Denmark)

    Nørgaard, Sebastian A.; Sagebaum, Max; Gauger, Nicolas R.

    2017-01-01

    The goal of this article is to demonstrate the applicability and to discuss the advantages and disadvantages of automatic differentiation in topology optimization. The technique makes it possible to wholly or partially automate the evaluation of derivatives for optimization problems and is demonstrated on two separate, previously published types of problems in topology optimization. Two separate software packages for automatic differentiation, CoDiPack and Tapenade, are considered, and their performance and usability trade-offs are discussed and compared to a hand-coded adjoint gradient...
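
    For readers unfamiliar with the technique itself, operator-overloading automatic differentiation can be illustrated with a minimal forward-mode sketch based on dual numbers; this toy class only illustrates the principle and is unrelated to the CoDiPack or Tapenade implementations discussed above.

```python
class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def f(x):
    # Example objective: f(x) = x**3 + 2*x  ->  f'(x) = 3*x**2 + 2
    return x * x * x + 2 * x

x = Dual(2.0, 1.0)           # seed the derivative of the input with 1
y = f(x)
print(y.value, y.deriv)      # 12.0 and 14.0
```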

  8. Automatic inference of indexing rules for MEDLINE.

    Science.gov (United States)

    Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent

    2008-11-19

    Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.

  9. Automatic weld torch guidance control system

    Science.gov (United States)

    Smaith, H. E.; Wall, W. A.; Burns, M. R., Jr.

    1982-01-01

    A highly reliable, fully digital, closed-circuit television optical type automatic weld seam tracking control system was developed. This automatic tracking equipment is used to reduce weld tooling costs and increase overall automatic welding reliability. The system utilizes a charge injection device digital camera which has 60,512 individual pixels as the light sensing elements. Through conventional scanning means, each pixel in the focal plane is sequentially scanned, the light level signal digitized, and an 8-bit word transmitted to scratch pad memory. From memory, the microprocessor performs an analysis of the digital signal and computes the tracking error. Lastly, the corrective signal is transmitted to a cross seam actuator digital drive motor controller to complete the closed-loop feedback tracking system. This weld seam tracking control system is capable of a tracking accuracy of ±0.2 mm, or better. As configured, the system is applicable to square butt, V-groove, and lap joint weldments.

  10. Oocytes Polar Body Detection for Automatic Enucleation

    Directory of Open Access Journals (Sweden)

    Di Chen

    2016-02-01

    Full Text Available Enucleation is a crucial step in cloning. In order to achieve automatic blind enucleation, we should detect the polar body of the oocyte automatically. Conventional polar body detection approaches have low success rates or low efficiency. We propose a polar body detection method based on machine learning in this paper. On one hand, an improved Histogram of Oriented Gradients (HOG) algorithm is employed to extract features of polar body images, which increases the success rate. On the other hand, a position prediction method is put forward to narrow the search range of the polar body, which improves efficiency. Experimental results show that the success rate is 96% for various types of polar bodies. Furthermore, the method is applied to an enucleation experiment and improves the degree of automation of enucleation.

  11. Automaticity in reading isiZulu

    Directory of Open Access Journals (Sweden)

    Sandra Land

    2016-03-01

    Full Text Available Automaticity, or instant recognition of combinations of letters as units of language, is essential for proficient reading in any language. The article explores automaticity amongst competent adult first-language readers of isiZulu, and the factors associated with it or its opposite - active decoding. Whilst the transparent spelling patterns of isiZulu aid learner readers, some of its orthographical features may militate against their gaining automaticity. These features are agglutination; a conjoined writing system; comparatively long, complex words; and a high rate of recurring strings of particular letters. This implies that optimal strategies for teaching reading in orthographically opaque languages such as English should not be assumed to apply to languages with dissimilar orthographies. Keywords: Orthography; Eye movement; Reading; isiZulu

  12. Testing interactive effects of automatic and conflict control processes during response inhibition - A system neurophysiological study.

    Science.gov (United States)

    Chmielewski, Witold X; Beste, Christian

    2017-02-01

    In everyday life, successful acting often requires inhibiting automatic responses that might not be appropriate in the current situation. These response inhibition processes have been shown to become aggravated with increasing automaticity of pre-potent response tendencies. Likewise, it has been shown that inhibitory processes are complicated by concurrent engagement in additional cognitive control processes (e.g. conflict monitoring). Therefore, opposing processes (i.e. automaticity and cognitive control) seem to strongly impact response inhibition. However, possible interactive effects of automaticity and cognitive control on the modulation of response inhibition processes have not yet been examined. In the current study we examine this question using a novel experimental paradigm combining a Go/NoGo with a Simon task in a system neurophysiological approach combining EEG recordings with source localization analyses. The results show that response inhibition is less accurate in non-conflicting than in conflicting stimulus-response mappings. Thus it seems that conflicts and the resulting engagement in conflict monitoring processes, as reflected in the N2 amplitude, may foster response inhibition processes. This engagement in conflict monitoring processes leads to an increase in cognitive control, as reflected by increased activity in the anterior and posterior cingulate areas, while simultaneously the automaticity of response tendencies is decreased. Most importantly, this study suggests that the quality of conflict processes in anterior cingulate areas, and especially the resulting interaction of cognitive control and automaticity of pre-potent response tendencies, are important factors to consider when it comes to the modulation of response inhibition processes. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Automatic emotional expression analysis from eye area

    Science.gov (United States)

    Akkoç, Betül; Arslan, Ahmet

    2015-02-01

    Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on the features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis through discrete wavelet transformation were obtained from the eye area. Using these parameters, emotional expression analysis was performed through artificial intelligence techniques. As a result of the experimental studies, 6 universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified at a success rate of 84% using artificial neural networks.

  14. Automatic speech recognition a deep learning approach

    CERN Document Server

    Yu, Dong

    2015-01-01

    This book summarizes the recent advancement in the field of automatic speech recognition with a focus on discriminative and hierarchical models. This will be the first automatic speech recognition book to include a comprehensive coverage of recent developments such as conditional random field and deep learning techniques. It presents insights and theoretical foundation of a series of recent models such as conditional random field, semi-Markov and hidden conditional random field, deep neural network, deep belief network, and deep stacking models for sequential learning. It also discusses practical considerations of using these models in both acoustic and language modeling for continuous speech recognition.

  15. The Ballistic Flight of an Automatic Duck

    Directory of Open Access Journals (Sweden)

    Fabienne Collignon

    2012-10-01

    Full Text Available This article analyses Jacques de Vaucanson's automatic duck and its successive appearances in Thomas Pynchon's work (both 'Mason & Dixon' and, by extension, 'Gravity's Rainbow') to discuss the correlations between (self-)evolving technologies and space age gadgets. The Cold War serves, therefore, as the frame of reference for this article, which is further preoccupied with the geographical positions that automatons or prototype cyborgs occupy: the last part of the essay analyses Walter Benjamin's 'Arcades Project', where mechanical hens stand at the entrance to dreamworlds. Automatic fowl guard, and usher into being, new technologised worlds.

  17. Automatic Keyword Extraction from Individual Documents

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.; Cowley, Wendy E.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
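
    A minimal sketch of this style of keyword extraction is shown below: candidate phrases are formed by splitting the text at stop words and punctuation, member words are scored by degree/frequency, and phrases are ranked by the sum of their word scores. The stop-word list and scoring details here are simplified assumptions, not the exact configuration evaluated in the paper.

```python
import re
from collections import defaultdict

STOP_WORDS = {"a", "an", "and", "the", "of", "for", "in", "on",
              "to", "is", "we", "from", "using"}

def extract_keywords(text, top_k=5):
    # Tokenize into words and sentence punctuation, lower-cased.
    tokens = re.findall(r"[a-zA-Z][a-zA-Z-]*|[.,;:!?]", text.lower())

    # Split into candidate phrases at stop words and punctuation.
    phrases, current = [], []
    for tok in tokens:
        if tok in STOP_WORDS or re.match(r"[.,;:!?]", tok):
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        phrases.append(current)

    # Word scores: degree (co-occurrence within phrases) / frequency.
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)
    word_score = {w: degree[w] / freq[w] for w in freq}

    # Phrase score = sum of member word scores.
    scored = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(extract_keywords("Automatic keyword extraction selects sequences of words "
                       "from individual documents, using stop word lists."))
```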

  18. Automatic identification of species with neural networks.

    Science.gov (United States)

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  19. Automatic identification of species with neural networks

    Directory of Open Access Journals (Sweden)

    Andrés Hernández-Serna

    2014-11-01

    Full Text Available A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% of true positive fish identifications, 92.87% of plants and 93.25% of butterflies. Our results highlight how the neural networks are complementary to species identification.

  20. Automatic Control Of Length Of Welding Arc

    Science.gov (United States)

    Iceland, William F.

    1991-01-01

    Nonlinear relationships among current, voltage, and length stored in electronic memory. Conceptual microprocessor-based control subsystem maintains constant length of welding arc in gas/tungsten arc-welding system, even when welding current varied. Uses feedback of current and voltage from welding arc. Directs motor to set position of torch according to previously measured relationships among current, voltage, and length of arc. Signal paths marked "calibration" or "welding" used during those processes only. Other signal paths used during both processes. Control subsystem added to existing manual or automatic welding system equipped with automatic voltage control.

  1. Automatic malware analysis an emulator based approach

    CERN Document Server

    Yin, Heng

    2012-01-01

    Malicious software (i.e., malware) has been a severe threat to interconnected computer systems for decades and causes billions of dollars in damages each year. A large volume of new malware samples is discovered daily. Even worse, malware is rapidly evolving, becoming more sophisticated and evasive in order to strike against current malware analysis and defense systems. Automatic Malware Analysis presents a virtualized malware analysis framework that addresses common challenges in malware analysis. In regard to this new analysis framework, a series of analysis techniques for automatic malware analysis...

  2. Automatic segmentation of vertebrae from radiographs

    DEFF Research Database (Denmark)

    Mysling, Peter; Petersen, Peter Kersten; Nielsen, Mads

    2011-01-01

    Segmentation of vertebral contours is an essential task in the design of automatic tools for vertebral fracture assessment. In this paper, we propose a novel segmentation technique which does not require operator interaction. The proposed technique solves the segmentation problem in a hierarchical...... is constrained by a conditional shape model, based on the variability of the coarse spine location estimates. The technique is evaluated on a data set of manually annotated lumbar radiographs. The results compare favorably to the previous work in automatic vertebra segmentation, in terms of both segmentation...

  3. Study on Reactive Automatic Compensation System Design

    Science.gov (United States)

    Zhe, Sun; Qingyang, Liang; Peiqing, Luo; Chenfei, Zhang

    At present, the low-voltage side of public transformers supplies urban distribution networks. As the inductive load from household appliances increases, the power factor decreases, which leads to large losses on the low-voltage side of the public transformer, and the supply voltage indicators cannot meet users' requirements. Therefore, the design of reactive power compensation systems has become another popular research topic. This paper introduces the principle of reactive power compensation, analyzes the key technologies of reactive power compensation, and designs an overall scheme for an automatic reactive power compensation system that overcomes various deficiencies of existing automatic reactive power compensation equipment.

  4. Automatic and strategic processes in advertising effects

    DEFF Research Database (Denmark)

    Grunert, Klaus G.

    1996-01-01

    , and can easily be adapted to situational circumstances. Both the perception of advertising and the way advertising influences brand evaluation involve both processes. Automatic processes govern the recognition of advertising stimuli, the relevance decision which determines further higher-level processing... are at variance with current notions about advertising effects. For example, the attention span problem will be relevant only for strategic processes, not for automatic processes, a certain amount of learning can occur with very little conscious effort, and advertising's effect on brand evaluation may be more stable...

  5. The automatic component of habit in health behavior: habit as cue-contingent automaticity.

    Science.gov (United States)

    Orbell, Sheina; Verplanken, Bas

    2010-07-01

    Habit might be usefully characterized as a form of automaticity that involves the association of a cue and a response. Three studies examined habitual automaticity in regard to different aspects of the cue-response relationship characteristic of unhealthy and healthy habits. In each study, habitual automaticity was assessed by the Self-Report Habit Index (SRHI). In Study 1 SRHI scores correlated with attentional bias to smoking cues in a Stroop task. Study 2 examined the ability of a habit cue to elicit an unwanted habit response. In a prospective field study, habitual automaticity in relation to smoking when drinking alcohol in a licensed public house (pub) predicted the likelihood of cigarette-related action slips 2 months later after smoking in pubs had become illegal. In Study 3 experimental group participants formed an implementation intention to floss in response to a specified situational cue. Habitual automaticity of dental flossing was rapidly enhanced compared to controls. The studies provided three different demonstrations of the importance of cues in the automatic operation of habits. Habitual automaticity assessed by the SRHI captured aspects of a habit that go beyond mere frequency or consistency of the behavior. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  6. Mapped Landmark Algorithm for Precision Landing

    Science.gov (United States)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel images. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
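
    The spatial-domain building block used in the refinement step, normalized correlation of a small template against an image, can be sketched directly. This is a brute-force illustration only; the flight code works in the frequency domain for large templates and adds sub-pixel refinement, and the array sizes below are arbitrary.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Brute-force NCC of a small template at every valid position in an image."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom > 0:
                out[r, c] = (p * t).sum() / denom
    return out

# Hypothetical example: find a 5x5 landmark template in a 32x32 "map" image.
rng = np.random.default_rng(1)
image = rng.random((32, 32))
template = image[10:15, 20:25].copy()
scores = normalized_cross_correlation(image, template)
print("best match at", np.unravel_index(scores.argmax(), scores.shape))  # (10, 20)
```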

  7. ShakeMaps during the Emilia sequence

    Directory of Open Access Journals (Sweden)

    Valentino Lauciani

    2012-10-01

    Full Text Available ShakeMap is a software package that can be used to generate maps of ground shaking for various peak ground motion (PGM) parameters, including peak ground acceleration (PGA), peak ground velocity, and spectral acceleration response at 0.3 s, 1.0 s and 3.0 s, and instrumentally derived intensities. ShakeMap has been implemented in Italy at the Istituto Nazionale di Geofisica e Vulcanologia (INGV; National Institute of Geophysics and Volcanology) since 2006 (http://shakemap.rm.ingv.it), with the primary aim being to help the Dipartimento della Protezione Civile (DPC; Civil Protection Department) civil defense agency in the definition of rapid and accurate information on where earthquake damage is located, to correctly direct rescue teams and to organize emergency responses. Based on the ShakeMap software package [Wald et al. 1999, Worden et al. 2010], which was developed by the U.S. Geological Survey (USGS), the INGV is constructing shake maps for Ml ≥3.0, with the adoption of a fully automatic procedure based on manually revised locations and magnitudes [Michelini et al. 2008]. The focus of this study is the description of the progressive generation of these shake maps for the sequence that struck the Emilia-Romagna Region in May 2012. […]

  8. Portable Map-Reduce Utility for MIT SuperCloud Environment

    Science.gov (United States)

    2015-09-17

    Google [2]. The open source community has its own implementations such as Hadoop MapReduce framework [3]. Although its underlying concept has...popular with the Hadoop MapReduce framework for the Java community. The Map Reduce programming model provides a number of benefits such as automatic...utility to MIT SuperCloud systems [5], which works on a central storage system instead of distributed filesystem such as Hadoop distributed filesystem
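
    Independently of the specific utility, the map-reduce programming model that the report builds on can be illustrated with a tiny in-memory word count; this sketch only shows the map, shuffle, and reduce phases and is not related to the MIT SuperCloud or Hadoop implementations.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit (word, 1) pairs for one document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group intermediate pairs by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: sum the counts for one word."""
    return key, sum(values)

documents = ["map reduce word count", "map reduce on a central storage system"]
intermediate = chain.from_iterable(map_phase(d) for d in documents)
result = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(result)   # e.g. {'map': 2, 'reduce': 2, ...}
```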

  9. Mapping the Climate.

    Science.gov (United States)

    1981-01-19

    Use of Synagraphic Computer Mapping in Geoecology, The Harvard Library of Computer Graphics, 1979 Mapping Collection, Vol. 5, pp. 11-27. Robinson, A.H. (1974) A New Map...

  10. Human Mind Maps

    Science.gov (United States)

    Glass, Tom

    2016-01-01

    When students generate mind maps, or concept maps, the maps are usually on paper, computer screens, or a blackboard. Human Mind Maps require few resources and little preparation. The main requirements are space where students can move around and a little creativity and imagination. Mind maps can be used for a variety of purposes, and Human Mind…

  11. THE ACCURACY OF AUTOMATIC PHOTOGRAMMETRIC TECHNIQUES ON ULTRA-LIGHT UAV IMAGERY

    Directory of Open Access Journals (Sweden)

    O. Küng

    2012-09-01

    Full Text Available This paper presents an affordable, fully automated and accurate mapping solution based on ultra-light UAV imagery. Several datasets are analysed and their accuracy is estimated. We show that the accuracy highly depends on the ground resolution (flying height) of the input imagery. When chosen appropriately, this mapping solution can compete with traditional mapping solutions that capture fewer high-resolution images from airplanes and that rely on highly accurate orientation and positioning sensors on board. Due to the careful integration of recent computer vision techniques, the post-processing is robust and fully automatic and can deal with inaccurate position and orientation information, which is typically problematic with traditional techniques.
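
    The dependence of accuracy on ground resolution follows directly from the ground sample distance (GSD) of the imagery, which scales linearly with flying height. A small sketch of that relationship follows; the focal length and pixel pitch are assumed camera values, not those of the UAV used in the paper.

```python
def ground_sample_distance(flying_height_m, focal_length_mm, pixel_pitch_um):
    """Ground sample distance (metres per pixel) for a nadir image:
    GSD = H * pixel_pitch / focal_length."""
    return flying_height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

for h in (100, 200, 400):   # flying height in metres (assumed values)
    gsd = ground_sample_distance(h, focal_length_mm=5.0, pixel_pitch_um=1.5)
    print(f"height {h:4d} m -> GSD {gsd * 100:.1f} cm/pixel")
```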

  12. Automatic landmark annotation and dense correspondence registration for 3D human facial images.

    Science.gov (United States)

    Guo, Jianya; Mei, Xi; Tang, Kun

    2013-07-22

    Traditional anthropometric studies of the human face rely on manual measurements of simple features, which are labor intensive and lack full comprehensive inference. Dense surface registration of three-dimensional (3D) human facial images holds great potential for high-throughput quantitative analyses of complex facial traits. However, there is a lack of automatic high-density registration methods for 3D facial images. Furthermore, current approaches to landmark recognition require further improvement in accuracy to support anthropometric applications. Here we describe a novel non-rigid registration method for fully automatic 3D facial image mapping. This method comprises two steps: first, seventeen facial landmarks are automatically annotated, mainly via PCA-based feature recognition following 3D-to-2D data transformation. Second, an efficient thin-plate spline (TPS) protocol is used to establish the dense anatomical correspondence between facial images, under the guidance of the predefined landmarks. We demonstrate that this method is highly accurate in landmark recognition, with an average RMS error of ~1.7 mm. The registration process is highly robust, even across ethnicities. This method supports fully automatic registration of dense 3D facial images, with 17 landmarks annotated at greatly improved accuracy. A stand-alone software package has been implemented to assist high-throughput, high-content anthropometric analysis.
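
    The second step, a landmark-guided thin-plate spline mapping, can be sketched with SciPy's radial basis function interpolator (SciPy >= 1.7). The 2-D landmark coordinates below are made up for illustration, whereas the paper fits the TPS on seventeen 3-D landmarks.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator  # requires SciPy >= 1.7

# Landmarks on a source face and the matching landmarks on a target face
# (hypothetical 2-D coordinates for illustration only).
source_landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                             [1.0, 1.0], [0.5, 0.5]])
target_landmarks = np.array([[0.1, 0.0], [1.1, 0.1], [0.0, 1.2],
                             [1.0, 1.1], [0.55, 0.6]])

# Thin-plate-spline mapping from source space to target space,
# fitted so that the landmarks correspond exactly.
tps = RBFInterpolator(source_landmarks, target_landmarks,
                      kernel='thin_plate_spline')

# Warp a dense grid of source vertices into the target space.
dense_source = np.stack(np.meshgrid(np.linspace(0, 1, 4),
                                    np.linspace(0, 1, 4)), axis=-1).reshape(-1, 2)
dense_target = tps(dense_source)
print(dense_target.shape)   # (16, 2)
```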

  13. Assessment of automatic segmentation of teeth using a watershed-based method.

    Science.gov (United States)

    Galibourg, Antoine; Dumoncel, Jean; Telmon, Norbert; Calvet, Adèle; Michetti, Jérôme; Maret, Delphine

    2018-01-01

    Tooth 3D automatic segmentation (AS) is being actively developed in research and clinical fields. Here, we assess the effect of automatic segmentation using a watershed-based method on the accuracy and reproducibility of 3D reconstructions in volumetric measurements by comparing it with a semi-automatic segmentation (SAS) method that has already been validated. The study sample comprised 52 teeth, scanned with micro-CT (41 µm voxel size) and CBCT (76, 200 and 300 µm voxel size). Each tooth was segmented by AS based on a watershed method and by SAS. For all surface reconstructions, volumetric measurements were obtained and analysed statistically. Surfaces were then aligned using the SAS surfaces as the reference. The topography of the geometric discrepancies was displayed by using a colour map allowing the maximum differences to be located. AS reconstructions showed similar tooth volumes when compared with SAS for the 41 µm voxel size. A difference in volumes was observed, and increased with the voxel size for CBCT data. The maximum differences were mainly found at the cervical margins and incisal edges but the general form was preserved. Micro-CT, a modality used in dental research, provides data that can be segmented automatically, which is time-saving. AS with CBCT data enables the general form of the region of interest to be displayed. However, our AS method can still be used for metrically reliable measurements in the field of clinical dentistry if some manual refinements are applied.
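
    Marker-controlled watershed segmentation of touching objects, the general technique assessed here, can be sketched with scikit-image on a toy 2-D image; the synthetic discs, the manual marker placement, and the use of a distance transform are illustrative assumptions and not the paper's CBCT pipeline.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Toy binary image: two overlapping discs standing in for two touching teeth.
yy, xx = np.mgrid[0:80, 0:80]
binary = ((xx - 28) ** 2 + (yy - 40) ** 2 < 15 ** 2) | \
         ((xx - 52) ** 2 + (yy - 40) ** 2 < 15 ** 2)

# Distance transform: high values at the centre of each object.
distance = ndi.distance_transform_edt(binary)

# Markers placed inside each object (here by hand; in practice from local maxima).
markers = np.zeros(binary.shape, dtype=int)
markers[40, 28] = 1   # marker inside the left "tooth"
markers[40, 52] = 2   # marker inside the right "tooth"

# Watershed on the inverted distance map splits the touching objects.
labels = watershed(-distance, markers, mask=binary)
print("regions found:", labels.max())   # 2
```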

  14. Automatic TLI recognition system beta prototype testing

    Energy Technology Data Exchange (ETDEWEB)

    Lassahn, G.D.

    1996-06-01

    This report describes the beta prototype automatic target recognition system ATR3, and some performance tests done with this system. This is a fully operational system, with a high computational speed. It is useful for finding any kind of target in digitized image data, and as a general-purpose image analysis tool.

  15. Automatic visual inspection of hybrid microcircuits

    Energy Technology Data Exchange (ETDEWEB)

    Hines, R.E.

    1980-05-01

    An automatic visual inspection system using a minicomputer and a video digitizer was developed for inspecting hybrid microcircuits (HMC) and thin-film networks (TFN). The system performed well in detecting missing components on HMCs and reduced the testing time for each HMC by 75%.

  16. Automatic Water Sensor Window Opening System

    KAUST Repository

    Percher, Michael

    2013-12-05

    A system can automatically open at least one window of a vehicle when the vehicle is being submerged in water. The system can include a water collector and a water sensor, and when the water sensor detects water in the water collector, at least one window of the vehicle opens.

  17. Emotional characters for automatic plot creation

    NARCIS (Netherlands)

    Theune, Mariet; Rensen, S.; op den Akker, Hendrikus J.A.; Heylen, Dirk K.J.; Nijholt, Antinus; Göbel, S.; Spierling, U.; Hoffmann, A.; Iurgel, I.; Schneider, O.; Dechau, J.; Feix, A.

    The Virtual Storyteller is a multi-agent framework for automatic story generation. In this paper we describe how plots emerge from the actions of semi-autonomous character agents, focusing on the influence of the characters’ emotions on plot development.

  18. 8 CFR 1205.1 - Automatic revocation.

    Science.gov (United States)

    2010-01-01

    ... 8 Aliens and Nationality 1 2010-01-01 2010-01-01 false Automatic revocation. 1205.1 Section 1205.1 Aliens and Nationality EXECUTIVE OFFICE FOR IMMIGRATION REVIEW, DEPARTMENT OF JUSTICE IMMIGRATION... 18 years of age or older, an emancipated minor, or a corporation incorporated in the United States...

  19. A Statistical Approach to Automatic Speech Summarization

    Directory of Open Access Journals (Sweden)

    Chiori Hori

    2003-02-01

    Full Text Available This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
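
    The word-extraction step can be pictured as a dynamic program that keeps the best-scoring selection of exactly M words in their original order. The sketch below uses stand-in significance and concatenation scores (the paper estimates these from language and dependency models), so only the DP structure, not the scoring, reflects the method.

```python
import math

def summarize(words, sig, link, target_ratio=0.5):
    """Choose M = ratio * N words in original order, maximising the sum of
    per-word significance scores plus concatenation scores between
    adjacent selected words. Scoring functions here are toy stand-ins."""
    n = len(words)
    m_target = max(1, round(target_ratio * n))
    NEG = -math.inf
    best = [[NEG] * (m_target + 1) for _ in range(n)]
    back = [[None] * (m_target + 1) for _ in range(n)]
    for j in range(n):
        best[j][1] = sig[j]
        for m in range(2, m_target + 1):
            for i in range(j):
                if best[i][m - 1] == NEG:
                    continue
                cand = best[i][m - 1] + link(words[i], words[j]) + sig[j]
                if cand > best[j][m]:
                    best[j][m], back[j][m] = cand, i
    # Recover the best-scoring selection of exactly m_target words.
    j = max(range(n), key=lambda k: best[k][m_target])
    chosen, m = [], m_target
    while j is not None:
        chosen.append(j)
        j, m = back[j][m], m - 1
    return [words[k] for k in reversed(chosen)]

words = "the summarization score indicates the appropriateness of the summary".split()
sig = [0.1, 0.9, 0.8, 0.7, 0.1, 0.9, 0.2, 0.1, 0.8]        # toy significance scores
link = lambda a, b: 0.3 if a != b else 0.0                  # toy concatenation score
print(summarize(words, sig, link, target_ratio=0.4))
```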

  20. Automatic Smoker Detection from Telephone Speech Signals

    DEFF Research Database (Denmark)

    Alavijeh, Amir Hossein Poorjam; Hesaraki, Soheila; Safavi, Saeid

    2017-01-01

    This paper proposes an automatic smoking habit detection from spontaneous telephone speech signals. In this method, each utterance is modeled using i-vector and non-negative factor analysis (NFA) frameworks, which yield low-dimensional representation of utterances by applying factor analysis on G...

  1. Automatic alignment of audiobooks in Afrikaans

    CSIR Research Space (South Africa)

    Van Heerden, CJ

    2012-11-01

    Full Text Available to perform Maximum A Posteriori adaptation on the baseline models. The corresponding value for models trained on the audiobook data is 0.996. An automatic measure of alignment accuracy is also introduced and compared to accuracies measured relative to a gold...

  2. Performance evaluation of automatic voltage regulators ...

    African Journals Online (AJOL)

    The performance of various Automatic Voltage Regulators (AVRs) in Nigeria and the causes of their inability to regulate at their set points have been investigated. The results indicate that the imported AVRs fail to give the 220 volts displayed on the name plate at the specified low set points (such as 100, 120 volts etc.) on ...

  3. Automatic speech recognition in air traffic control

    Science.gov (United States)

    Karlsson, Joakim

    1990-01-01

    Automatic Speech Recognition (ASR) technology and its application to the Air Traffic Control system are described. The advantages of applying ASR to Air Traffic Control, as well as criteria for choosing a suitable ASR system are presented. Results from previous research and directions for future work at the Flight Transportation Laboratory are outlined.

  4. MARZ: Manual and automatic redshifting software

    Science.gov (United States)

    Hinton, S. R.; Davis, Tamara M.; Lidman, C.; Glazebrook, K.; Lewis, G. F.

    2016-04-01

    The Australian Dark Energy Survey (OzDES) is a 100-night spectroscopic survey underway on the Anglo-Australian Telescope using the fibre-fed 2-degree-field (2dF) spectrograph. We have developed a new redshifting application MARZ with greater usability, flexibility, and the capacity to analyse a wider range of object types than the RUNZ software package previously used for redshifting spectra from 2dF. MARZ is an open-source, client-based, Javascript web-application which provides an intuitive interface and powerful automatic matching capabilities on spectra generated from the AAOmega spectrograph to produce high quality spectroscopic redshift measurements. The software can be run interactively or via the command line, and is easily adaptable to other instruments and pipelines if conforming to the current FITS file standard is not possible. Behind the scenes, a modified version of the AUTOZ cross-correlation algorithm is used to match input spectra against a variety of stellar and galaxy templates, and automatic matching performance for OzDES spectra has increased from 54% (RUNZ) to 91% (MARZ). Spectra not matched correctly by the automatic algorithm can be easily redshifted manually by cycling automatic results, manual template comparison, or marking spectral features.
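
    The automatic matching at the heart of such tools is a cross-correlation of the observed spectrum against templates over trial redshifts. A toy sketch of the idea on a common log-wavelength grid follows; it is not the AUTOZ algorithm used by MARZ, and the grid step, noise level, and synthetic spectra are assumptions.

```python
import numpy as np

def estimate_redshift(template_flux, observed_flux, dloglam, z_max=1.0):
    """Toy cross-correlation redshift estimate: both spectra are assumed
    continuum-subtracted and rebinned onto the same log10-wavelength grid
    with step dloglam, so a redshift corresponds to an integer bin shift."""
    t = template_flux - template_flux.mean()
    o = observed_flux - observed_flux.mean()
    best_z, best_corr = 0.0, -np.inf
    max_shift = int(np.log10(1.0 + z_max) / dloglam)
    for shift in range(0, max_shift + 1):
        overlap = len(o) - shift
        if overlap < 10:
            break
        corr = np.dot(o[shift:], t[:overlap]) / (
            np.linalg.norm(o[shift:]) * np.linalg.norm(t[:overlap]) + 1e-12)
        if corr > best_corr:
            best_corr, best_z = corr, 10 ** (shift * dloglam) - 1.0
    return best_z, best_corr

# Synthetic example: the "observed" spectrum is the template shifted by 40 bins.
rng = np.random.default_rng(2)
template = rng.normal(size=2000)
dloglam = 1e-4
observed = np.roll(template, 40) + 0.1 * rng.normal(size=2000)
print(estimate_redshift(template, observed, dloglam))   # z ~ 10**(40*1e-4) - 1
```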

  5. Automatic Assessment of 3D Modeling Exams

    Science.gov (United States)

    Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.

    2012-01-01

    Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…

  6. Semantic-Aware Automatic Video Editing

    NARCIS (Netherlands)

    S. Bocconi

    2004-01-01

    One of the challenges of multimedia applications is to provide user-tailored access to information encoded in different media. Particularly, previous research has not yet fully explored how to automatically compose different video segments according to a communicative goal. We propose a

  7. Automatic thematic classification of election manifestos

    NARCIS (Netherlands)

    Verberne, S.; D'hondt, E.; van den Bosch, A.; Marx, M.

    2014-01-01

    We digitized three years of Dutch election manifestos annotated by the Dutch political scientist Isaac Lipschits. We used these data to train a classifier that can automatically label new, unseen election manifestos with themes. Having the manifestos in a uniform XML format with all paragraphs

  8. An automatic lightning detection and photographic system

    Science.gov (United States)

    Wojtasinski, R. J.; Holley, L. D.; Gray, J. L.; Hoover, R. B.

    1973-01-01

    Conventional 35-mm camera is activated by an electronic signal every time lightning strikes in general vicinity. Electronic circuit detects lightning by means of antenna which picks up atmospheric radio disturbances. Camera is equipped with fish-eye lens, automatic shutter advance, and small 24-hour clock to indicate time when exposures are made.

  9. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic... incrementalizing a broad range of static analyses....

  10. Two Systems for Automatic Music Genre Recognition

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2012-01-01

    We re-implement and test two state-of-the-art systems for automatic music genre classification; but unlike past works in this area, we look closer than ever before at their behavior. First, we look at specific instances where each system consistently applies the same wrong label across multiple...

  11. Creative Automaticity: The Writing of Business Spanish.

    Science.gov (United States)

    Nino-Murcia, Mercedes

    Ways to develop "creative automaticity" for writing in a foreign language are examined. The paper focuses on transactional writing (i.e., writing with the purpose of transferring precise information by such means as telex messages, business letters, memoranda, and short reports), and its main features of clarity and conventionality. It is noted…

  12. Semi-automatic approach for music classification

    Science.gov (United States)

    Zhang, Tong

    2003-11-01

    Audio categorization is essential when managing a music database, either a professional library or a personal collection. However, a complete automation in categorizing music into proper classes for browsing and searching is not yet supported by today's technology. Also, the issue of music classification is subjective to some extent as each user may have his own criteria for categorizing music. In this paper, we propose the idea of semi-automatic music classification. With this approach, a music browsing system is set up which contains a set of tools for separating music into a number of broad types (e.g. male solo, female solo, string instruments performance, etc.) using existing music analysis methods. With results of the automatic process, the user may further cluster music pieces in the database into finer classes and/or adjust misclassifications manually according to his own preferences and definitions. Such a system may greatly improve the efficiency of music browsing and retrieval, while at the same time guarantee accuracy and the user's satisfaction with the results. Since this semi-automatic system has two parts, i.e. the automatic part and the manual part, they are described separately in the paper, with detailed descriptions and examples of each step of the two parts included.

  13. Automatic Synthesis of Robust and Optimal Controllers

    DEFF Research Database (Denmark)

    Cassez, Franck; Jessen, Jan Jacob; Larsen, Kim Guldstrand

    2009-01-01

    In this paper, we show how to apply recent tools for the automatic synthesis of robust and near-optimal controllers for a real industrial case study. We show how to use three different classes of models and their supporting existing tools, Uppaal-TiGA for synthesis, phaver for verification...

  14. Very Portable Remote Automatic Weather Stations

    Science.gov (United States)

    John R. Warren

    1987-01-01

    Remote Automatic Weather Stations (RAWS) were introduced to Forest Service and Bureau of Land Management field units in 1978 following development, test, and evaluation activities conducted jointly by the two agencies. The original configuration was designed for semi-permanent installation. Subsequently, a need for a more portable RAWS was expressed, and one was...

  15. Low Speed Control for Automatic Welding

    Science.gov (United States)

    Iceland, W. E.

    1982-01-01

    Amplifier module allows rotating positioner of automatic welding machine to operate at speeds below normal range. Low speeds are precisely regulated by a servomechanism as are normal-range speeds. Addition of module to standard welding machine makes it unnecessary to purchase new equipment for low-speed welding.

  16. Full-Automatic Parking registration and payment

    DEFF Research Database (Denmark)

    Agerholm, Niels; Lahrmann, Harry; Jørgensen, Brian

    2014-01-01

    As part of ITS Platform North Denmark, a full-automatic GNSS-based parking payment (PP) system was developed (PP app). On the basis of the parking position and parking time, the PP app can determine the price of parking and collect the amount from the car owner’s bank account. The driver is infor...

  17. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob

    benchmarks, finding that the code parallelized in this way runs up to 8.3 times faster on an octo-core Intel Xeon 5570 system and up to 12.5 times faster on a quad-core IBM POWER6 system. Benchmark performance varies significantly between the systems. This suggests that semi-automatic parallelization should...

  18. Automatization of Student Assessment Using Multimedia Technology.

    Science.gov (United States)

    Taniar, David; Rahayu, Wenny

    Most use of multimedia technology in teaching and learning to date has emphasized the teaching aspect only. An application of multimedia in examinations has been neglected. This paper addresses how multimedia technology can be applied to the automatization of assessment, by proposing a prototype of a multimedia question bank, which is able to…

  19. Natural language processing techniques for automatic test ...

    African Journals Online (AJOL)

    ... user and allows him/her to submit essay answers back into the application system. Evaluation results with the system show that the generated questions achieved average accuracies of 87.5% and 88.1% by two human experts. Keywords: Discourse Connectives, Machine Learning, Automatic Test Generation E-Learning.

  20. Automatic invariant detection in dynamic web applications

    NARCIS (Netherlands)

    Groeneveld, F.; Mesbah, A.; Van Deursen, A.

    2010-01-01

    The complexity of modern web applications increases as client-side JavaScript and dynamic DOM programming are used to offer a more interactive web experience. In this paper, we focus on improving the dependability of such applications by automatically inferring invariants from the client-side and

  1. Automatic program generation: future of software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, J.H.

    1979-01-01

    At this moment software development is still more of an art than an engineering discipline. Each piece of software is lovingly engineered, nurtured, and presented to the world as a tribute to the writer's skill. When will this change? When will the craftsmanship be removed and the programs be turned out like so many automobiles from an assembly line? Sooner or later it will happen: economic necessities will demand it. With the advent of cheap microcomputers and ever more powerful supercomputers doubling capacity, much more software must be produced. The choices are to double the number of programmers, double the efficiency of each programmer, or find a way to produce the needed software automatically. Producing software automatically is the only logical choice. How will automatic programming come about? Some of the preliminary actions which need to be done, and are being done, are to encourage programmer plagiarism of existing software through public library mechanisms, produce well-understood packages such as compilers automatically, develop languages capable of producing software as output, and learn enough about the whole process of programming to be able to automate it. Clearly, the emphasis must not be on efficiency or size, since ever larger and faster hardware is coming.

  2. Automatic Thesaurus Generation for Chinese Documents.

    Science.gov (United States)

    Tseng, Yuen-Hsien

    2002-01-01

    Reports an approach to automatic thesaurus construction for Chinese documents. Presents an effective Chinese keyword extraction algorithm. Compared to previous studies, this method speeds up the thesaurus generation process drastically. It also achieves a similar percentage level of term relatedness. Includes three tables and four figures.…

  3. Automatic prejudice in childhood and early adolescence

    NARCIS (Netherlands)

    Degner, J.; Wentura, D.

    2010-01-01

    Four cross-sectional studies are presented that investigated the automatic activation of prejudice in children and adolescents (aged 9 years to 15 years). Therefore, 4 different versions of the affective priming task were used, with pictures of ingroup and outgroup members being presented as

  4. The CHilean Automatic Supernova sEarch

    DEFF Research Database (Denmark)

    Hamuy, M.; Pignata, G.; Maza, J.

    2012-01-01

    The CHilean Automatic Supernova sEarch (CHASE) project began in 2007 with the goal to discover young, nearby southern supernovae in order to (1) better understand the physics of exploding stars and their progenitors, and (2) refine the methods to derive extragalactic distances. During the first...

  5. Automatic Positioning System of Small Agricultural Robot

    Science.gov (United States)

    Momot, M. V.; Proskokov, A. V.; Natalchenko, A. S.; Biktimirov, A. S.

    2016-08-01

    The present article discusses automatic positioning systems of agricultural robots used in field work. The existing solutions in this area have been analyzed. The article proposes an original solution, which is easy to implement and is characterized by high-accuracy positioning.

  6. Semi-Automatic Identification of Humpback Whales

    NARCIS (Netherlands)

    E.B. Ranguelova (Elena); M.J. Huiskes (Mark); E.J. Pauwels (Eric); K. Dawson-Howe; A.C. Kokaram; F. Shevlin

    2004-01-01

    This paper describes current work on a photo-id system for humpback whales. Individuals of this species can be uniquely identified by the light and dark pigmentation patches on their tails. We propose a semi-automatic algorithm based on marker-controlled watershed transformation for

  7. Automatic Amharic text news classification: A neural networks ...

    African Journals Online (AJOL)

    The study is on classification of Amharic news automatically using neural networks approach. Learning Vector Quantization (LVQ) algorithm is employed to classify new instance of Amharic news based on classifier developed using training dataset. Two weighting schemes, Term Frequency (TF) and Term Frequency by ...

  8. A Statistical Approach to Automatic Speech Summarization

    Science.gov (United States)

    Hori, Chiori; Furui, Sadaoki; Malkin, Rob; Yu, Hua; Waibel, Alex

    2003-12-01

    This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
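
    A minimal sketch of the extraction step described above, under simplifying assumptions: per-word significance scores and pairwise concatenation scores are given as toy arrays (in the paper they come from the recognizer and the SDCFG dependency model), and dynamic programming selects a fixed number of words in their original order. It illustrates the idea, not the authors' implementation.

    ```python
    import numpy as np

    def summarize(words, sig, concat, m):
        """Pick m of the n transcribed words (order preserved), maximizing the sum of
        per-word significance scores plus pairwise concatenation scores, via DP."""
        n = len(words)
        dp = np.full((n, m + 1), -np.inf)       # dp[i, j]: best j-word summary ending at word i
        back = np.zeros((n, m + 1), dtype=int)
        dp[:, 1] = sig
        for j in range(2, m + 1):
            for i in range(j - 1, n):
                cand = dp[:i, j - 1] + concat[:i, i]
                k = int(np.argmax(cand))
                dp[i, j] = cand[k] + sig[i]
                back[i, j] = k
        idx = [int(np.argmax(dp[:, m]))]        # best ending word, then walk back
        for j in range(m, 1, -1):
            idx.append(int(back[idx[-1], j]))
        return [words[i] for i in reversed(idx)]

    words = "the president said taxes will probably rise next year".split()
    sig = np.array([0.1, 0.9, 0.6, 0.8, 0.5, 0.2, 0.9, 0.4, 0.7])          # toy significance scores
    concat = np.random.default_rng(0).random((len(words), len(words)))     # toy concatenation scores
    print(" ".join(summarize(words, sig, concat, m=5)))
    ```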

  9. Computer Corner: Automatic Differentiation and APL.

    Science.gov (United States)

    Neidinger, Richard D.

    1989-01-01

    Described are several programs that enable the user to evaluate derivatives to order n of any elementary function by using the combination of automatic differentiation method and A Programming Language (APL). Programs calculating first- and higher-order derivatives are presented. Selected APL symbols are appended. (YP)
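
    The article works in APL; as a language-neutral illustration of the same forward-mode idea, here is a minimal dual-number sketch in Python (not the article's code):

    ```python
    import math

    class Dual:
        """Minimal forward-mode automatic differentiation via dual numbers."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot            # value and derivative carried together
        def _wrap(self, other):
            return other if isinstance(other, Dual) else Dual(other)
        def __add__(self, other):
            o = self._wrap(other)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, other):
            o = self._wrap(other)
            return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
        __rmul__ = __mul__
        def sin(self):
            return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

    def derivative(f, x):
        return f(Dual(x, 1.0)).dot                   # seed dx/dx = 1 and read off f'(x)

    # f(x) = x*sin(x) + 3x, so f'(1) = sin(1) + cos(1) + 3
    print(derivative(lambda x: x * x.sin() + 3 * x, 1.0))
    ```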

  10. A Semi-Automatic Variability Search

    Science.gov (United States)

    Maciejewski, G.; Niedzielski, A.

    Technical features of the Semi-Automatic Variability Search (SAVS) operating at the Astronomical Observatory of the Nicolaus Copernicus University and the results of the first year of observations are presented. The user-friendly software developed for reduction of acquired CCD images and detection of new variable stars is also described.

  11. Automatically predicting mood from expressed emotions

    NARCIS (Netherlands)

    Katsimerou, C.

    2016-01-01

    Affect-adaptive systems have the potential to assist users that experience systematically negative moods. This thesis aims at building a platform for predicting automatically a person’s mood from his/her visual expressions. The key word is mood, namely a relatively long-term, stable and diffused

  12. Facilitating online discussions by automatic summarization

    NARCIS (Netherlands)

    Wubben, S.; Verberne, S.; Krahmer, E.J.; Bosch, A.P.J. van den

    2015-01-01

    In the DISCOSUMO project, we aim to develop a computational toolkit to automatically summarize discussion forum threads. In this paper, we present the initial design of the toolkit, the data that we work with and the challenges we face. Discussion threads on a single topic can easily consist

  13. Accurate automatic profile monitoring. Genaue automatische Profilkontrolle

    Energy Technology Data Exchange (ETDEWEB)

    Sacher, F. (Amberg Messtechnik AG (Germany))

    1994-06-09

    It is almost inconceivable that the present tunnelling methods will not employ modern surveying and monitoring technologies. Accurate, automatic profile monitoring is an aid to optimization of construction work in technical, financial and scheduling respects. These aspects are explained in more detail on the basis of a description of use, various practical examples and a cost analysis. (orig.)

  14. Concept Mapping

    Science.gov (United States)

    Brennan, Laura K.; Brownson, Ross C.; Kelly, Cheryl; Ivey, Melissa K.; Leviton, Laura C.

    2016-01-01

    Background From 2003 to 2008, 25 cross-sector, multidisciplinary community partnerships funded through the Active Living by Design (ALbD) national program designed, planned, and implemented policy and environmental changes, with complementary programs and promotions. This paper describes the use of concept-mapping methods to gain insights into promising active living intervention strategies based on the collective experience of community representatives implementing ALbD initiatives. Methods Using Concept Systems software, community representatives (n=43) anonymously generated actions and changes in their communities to support active living (183 original statements, 79 condensed statements). Next, respondents (n=26, from 23 partnerships) sorted the 79 statements into self-created categories, or active living intervention approaches. Respondents then rated statements based on their perceptions of the most important strategies for creating community changes (n=25, from 22 partnerships) and increasing community rates of physical activity (n=23, from 20 partnerships). Cluster analysis and multidimensional scaling were used to describe data patterns. Results ALbD community partnerships identified three active living intervention approaches with the greatest perceived importance to create community change and increase population levels of physical activity: changes to the built and natural environment, partnership and collaboration efforts, and land-use and transportation policies. The relative importance of intervention approaches varied according to subgroups of partnerships working with different populations. Conclusions Decision makers, practitioners, and community residents can incorporate what has been learned from the 25 community partnerships to prioritize active living policy, physical project, promotional, and programmatic strategies for work in different populations and settings. PMID:23079266

  15. Automatic segmentation of MR brain images in multiple sclerosis patients

    Science.gov (United States)

    Avula, Ramesh T. V.; Erickson, Bradley J.

    1996-04-01

    A totally automatic scheme for segmenting brain from extracranial tissues and for classifying all intracranial voxels as CSF, gray matter (GM), white matter (WM), or abnormality such as multiple sclerosis (MS) lesions is presented in this paper. It is observed that in MR head images, if a tissue's intensity values are normalized, its relationship to the other tissues is essentially constant for a given type of image. Based on this approach, the subcutaneous fat surrounding the head is normalized to classify other tissues. Spatially registered 3 mm MR head image slices of T1 weighted, fast spin echo [dual echo T2 weighted and proton density (PD) weighted images] and fast fluid attenuated inversion recovery (FLAIR) sequences are used for segmentation. Subcutaneous fat surrounding the skull was identified based on intensity thresholding from T1 weighted images. A multiparametric space map was developed for CSF, GM and WM by normalizing each tissue with respect to the mean value of corresponding subcutaneous fat on each pulse sequence. To reduce the low frequency noise without blurring the fine morphological high frequency details, an anisotropic diffusion filter was applied to all images before segmentation. An initial slice-by-slice classification was followed by morphological operations to delete any bridges connecting extracranial segments. Finally, 3-dimensional region growing of the segmented brain extracts GM, WM and pathology. The algorithm was tested on sequential scans of 10 patients with MS lesions. For well registered sequences, tissues and pathology have been accurately classified. This procedure does not require user input or image training data sets, and shows promise for automatic classification of brain and pathology.
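
    A schematic of the normalization-and-classification idea for a single pulse sequence, assuming the subcutaneous-fat and intracranial masks are already available; the prototype ratios are illustrative placeholders rather than the paper's multiparametric map:

    ```python
    import numpy as np

    # Illustrative prototypes: tissue intensity expressed as a ratio to the mean
    # subcutaneous-fat intensity, for one pulse sequence only (hypothetical values).
    PROTOTYPES = {"CSF": 0.25, "GM": 0.55, "WM": 0.75}

    def classify_slice(image, fat_mask, brain_mask):
        """Normalize a slice by its subcutaneous-fat mean and assign every
        intracranial voxel to the nearest tissue prototype."""
        norm = image / image[fat_mask].mean()              # intensity relative to fat
        names = list(PROTOTYPES)
        ratios = np.array([PROTOTYPES[n] for n in names])
        nearest = np.abs(norm[brain_mask][:, None] - ratios[None, :]).argmin(axis=1)
        labels = np.full(image.shape, "background", dtype=object)
        labels[brain_mask] = np.array(names, dtype=object)[nearest]
        return labels
    ```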

  16. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); the production was then launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, the infrastructure to store (up to 40 TB between results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months, as well as the stability of the software (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
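
    A toy illustration of the hydrological criterion mentioned above (flow accumulation over a gridded terrain model), using the simple D8 rule; it is a sketch of the concept, not the production pipeline, and the tiny DEM is hypothetical:

    ```python
    import numpy as np

    def d8_flow_accumulation(dem):
        """Each cell drains to its steepest downslope neighbour (D8); the accumulation
        grid counts how many cells drain through each cell, so high values trace channels."""
        rows, cols = dem.shape
        neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
        acc = np.ones_like(dem, dtype=float)              # every cell contributes itself
        # Visit cells from highest to lowest so upstream contributions are resolved first.
        for flat in np.argsort(dem, axis=None)[::-1]:
            r, c = np.unravel_index(flat, dem.shape)
            best_slope, target = 0.0, None
            for dr, dc in neighbours:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    slope = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if slope > best_slope:
                        best_slope, target = slope, (rr, cc)
            if target is not None:                        # pits and outlets keep their totals
                acc[target] += acc[r, c]
        return acc

    dem = np.array([[5., 4., 3.],
                    [4., 3., 2.],
                    [3., 2., 1.]])
    print(d8_flow_accumulation(dem))       # accumulation peaks in the lower-right outlet cell
    ```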

  18. ShakeMap

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — ShakeMap is a product of the USGS Earthquake Hazards Program in conjunction with the regional seismic networks. ShakeMaps provide near-real-time maps of ground...

  19. Coaxial Filters Optimization Using Tuning Space Mapping in CST Studio

    Directory of Open Access Journals (Sweden)

    D. Wolansky

    2011-04-01

    Full Text Available This paper deals with the optimization of coaxial filters using the Tuning Space Mapping (TSM) method implemented in the CST environment. The roles of the fine and coarse models and the link between them are explained. In addition, supporting macros programmed in the VBA language, which are used for maximum efficiency of the optimization from the user's point of view, are mentioned. Macros are programmed in CST and are also used for automatic determination of calibration constants and for the automatic calibration process between the coarse model and the fine model. The whole algorithm is illustrated on a particular seventh-order filter design, and the optimized results are compared to measured ones.
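
    A one-dimensional sketch of the space-mapping loop this approach builds on: parameter extraction aligns a cheap coarse model with an expensive fine model through a calibration constant, and the calibrated coarse model is re-optimized to propose the next design. The models and target value below are hypothetical stand-ins for the circuit and full-wave simulations.

    ```python
    from scipy.optimize import minimize_scalar

    def fine(x):                 # stand-in for the expensive full-wave filter simulation
        return 3.0 * x + 1.0

    def coarse(x, c=0.0):        # cheap circuit model, deliberately misaligned; c is the calibration
        return 3.2 * x + 0.5 + c

    target = 10.0                # desired response value

    # Start from the optimum of the uncalibrated coarse model.
    x = minimize_scalar(lambda x: (coarse(x) - target) ** 2, bounds=(0, 5), method="bounded").x
    for _ in range(5):
        # Parameter extraction: choose c so the coarse model matches the fine model at x.
        c = minimize_scalar(lambda c: (coarse(x, c) - fine(x)) ** 2,
                            bounds=(-5, 5), method="bounded").x
        # Re-optimize the calibrated coarse model; its optimum becomes the next design.
        x = minimize_scalar(lambda v: (coarse(v, c) - target) ** 2,
                            bounds=(0, 5), method="bounded").x

    print(x, fine(x))            # x approaches 3.0, where the fine model hits the target
    ```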

  20. Automatic contrast : Evidence that automatic comparison with the social self affects evaluative responses

    NARCIS (Netherlands)

    Ruys, Kirsten I.; Spears, Russell; Gordijn, Ernestine H.; de Vries, Nanne K.

    The aim of the present research was to investigate whether unconsciously presented affective information may cause opposite evaluative responses depending on what social category the information originates from. We argue that automatic comparison processes between the self and the unconscious

  1. Venus Quadrangle Geological Mapping: Use of Geoscience Data Visualization Systems in Mapping and Training

    Science.gov (United States)

    Head, James W.; Huffman, J. N.; Forsberg, A. S.; Hurwitz, D. M.; Basilevsky, A. T.; Ivanov, M. A.; Dickson, J. L.; Kumar, P. Senthil

    2008-01-01

    We are currently investigating new technological developments in computer visualization and analysis in order to assess their importance and utility in planetary geological analysis and mapping [1,2]. Last year we reported on the range of technologies available and on our application of these to various problems in planetary mapping [3]. In this contribution we focus on the application of these techniques and tools to Venus geological mapping at the 1:5M quadrangle scale. In our current Venus mapping projects we have utilized and tested the various platforms to understand their capabilities and assess their usefulness in defining units, establishing stratigraphic relationships, mapping structures, reaching consensus on interpretations and producing map products. We are specifically assessing how computer visualization display qualities (e.g., level of immersion, stereoscopic vs. monoscopic viewing, field of view, large vs. small display size, etc.) influence performance on scientific analysis and geological mapping. We have been exploring four different environments: 1) conventional desktops (DT), 2) semi-immersive Fishtank VR (FT) (i.e., a conventional desktop with head-tracked stereo and 6DOF input), 3) tiled wall displays (TW), and 4) fully immersive virtual reality (IVR) (e.g., "Cave Automatic Virtual Environment," or Cave system). Formal studies demonstrate that fully immersive Cave environments are superior to desktop systems for many tasks [e.g., 4].

  2. Lozi-like maps

    OpenAIRE

    Misiurewicz, Michal; Štimac, Sonja

    2017-01-01

    We define a broad class of piecewise smooth plane homeomorphisms which have properties similar to the properties of Lozi maps, including the existence of a hyperbolic attractor. We call those maps Lozi-like. For those maps one can apply our previous results on kneading theory for Lozi maps. We show strong numerical evidence that there exist Lozi-like maps that have kneading sequences different from those of Lozi maps.
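
    For reference, the classical Lozi map that these homeomorphisms generalize is the piecewise-linear map (x, y) -> (1 - a|x| + y, bx); a short iteration sketch (the parameter values are the standard ones used to display the attractor):

    ```python
    import numpy as np

    def lozi_orbit(x, y, a=1.7, b=0.5, n=20000):
        """Iterate the classical Lozi map (x, y) -> (1 - a*|x| + y, b*x)."""
        pts = np.empty((n, 2))
        for i in range(n):
            x, y = 1.0 - a * abs(x) + y, b * x
            pts[i] = x, y
        return pts

    orbit = lozi_orbit(0.1, 0.1)   # for a=1.7, b=0.5 the points trace the familiar Lozi attractor
    ```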

  3. Mapping Mutations on Phylogenies

    DEFF Research Database (Denmark)

    Nielsen, Rasmus

    2005-01-01

    This chapter provides a short review of recent methodologies developed for mapping mutations on phylogenies. Mapping of mutations, or character changes in general, using the maximum parsimony principle has been one of the most powerful tools in phylogenetics, and it has been used in a variety...... uncertainty in the mapping. Recently developed probabilistic methods can incorporate statistical uncertainty in the character mappings. In these methods, focus is on a probability distribution of mutational mappings instead of a single estimate of the mutational mapping....
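
    As a concrete illustration of the parsimony mapping mentioned above (not the probabilistic mappings the chapter emphasizes), a minimal Fitch count of the fewest character changes on a small hypothetical tree:

    ```python
    def fitch_count(tree, leaf_states, root="root"):
        """Minimum number of character changes on a rooted binary tree (Fitch parsimony).
        `tree` maps each internal node to its two children; leaves carry observed states."""
        changes = 0
        def states(node):
            nonlocal changes
            if node in leaf_states:                    # leaf: observed character state
                return {leaf_states[node]}
            left, right = tree[node]
            s1, s2 = states(left), states(right)
            if s1 & s2:
                return s1 & s2                         # intersection: no change needed here
            changes += 1                               # disjoint sets: one change inferred
            return s1 | s2
        states(root)
        return changes

    # Hypothetical 4-taxon tree ((A,B),(C,D)) with nucleotide states at one site
    tree = {"root": ("n1", "n2"), "n1": ("A", "B"), "n2": ("C", "D")}
    print(fitch_count(tree, {"A": "G", "B": "G", "C": "T", "D": "G"}))     # -> 1
    ```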

  4. SYSTEM FOR AUTOMATIC SELECTION OF THE SPEED RATE OF ELECTRIC VEHICLES FOR REDUCING THE POWER CONSUMPTION

    Directory of Open Access Journals (Sweden)

    K. O. Soroka

    2017-06-01

    Full Text Available Purpose. The work aims to design a system for automatic selection of the optimal traffic modes and automatic monitoring of the electric energy consumption of electric transport. This automatic system should provide for minimum energy expenses. Methodology. Current methodologies: 1) mathematical modeling of traffic modes of ground electric vehicles; 2) comparison of modelling results with statistical monitoring; 3) development of a system for automatic choice of traffic modes of electric transport with minimal electrical energy consumption, taking into account the given route schedules and the limitations imposed by the general traffic rules. Findings. The authors obtained a mathematical dependency of the energy consumption of electric transport enterprises on the monthly averaged environment temperature. A system which allows for automatic selection of the speed limit and provides automatic monitoring of the electrical energy consumption of electric vehicles was proposed in the form of a local network, which works together with the existing GPS system. Originality. A mathematical model for calculating the motion curves and energy consumption of electric vehicles has been developed. This model takes into account the characteristic values of the motor engine and the steering system, the change of the mass when loading or unloading passengers, the slopes and radii of the roads, the limitations given by the general traffic rules, and other factors. The dependency of the energy consumption on the averaged monthly environment temperature for public electric transport companies has been calculated. Practical value. The developed mathematical model simplifies the calculations of the traffic dynamics and energy consumption. It can be used for calculating the routing maps, for design and upgrade of the power networks, and for development of electricity saving measures. The system simplifies the work of the vehicle driver and allows reducing
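
    A rough sketch of the kind of longitudinal-dynamics energy estimate such a model rests on; the vehicle parameters, drive cycle, and efficiency are illustrative placeholders, not the authors' calibrated model:

    ```python
    import numpy as np

    def traction_energy(v, dt=1.0, mass=18000.0, crr=0.008, cd_a=6.5,
                        rho=1.225, grade=0.0, eta=0.8):
        """Integrate traction power over a speed profile v [m/s] sampled every dt seconds.
        Returns consumed electrical energy in kWh (regeneration ignored for simplicity)."""
        g = 9.81
        a = np.gradient(v, dt)                                  # longitudinal acceleration
        force = (mass * a                                       # inertia
                 + mass * g * np.sin(grade)                     # road grade
                 + crr * mass * g * np.cos(grade)               # rolling resistance
                 + 0.5 * rho * cd_a * v ** 2)                   # aerodynamic drag
        power = np.maximum(force * v, 0.0) / eta                # traction only, drivetrain losses
        return power.sum() * dt / 3.6e6                         # J -> kWh

    # Toy profile: accelerate to 40 km/h, cruise, then coast to a stop.
    v = np.concatenate([np.linspace(0, 11.1, 20), np.full(60, 11.1), np.linspace(11.1, 0, 20)])
    print(f"{traction_energy(v):.2f} kWh per stop-to-stop run")
    ```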

  5. Automatically classifying question types for consumer health questions

    National Research Council Canada - National Science Library

    Roberts, Kirk; Kilicoglu, Halil; Fiszman, Marcelo; Demner-Fushman, Dina

    2014-01-01

    We present a method for automatically classifying consumer health questions. Our thirteen question types are designed to aid in the automatic retrieval of medical answers from consumer health resources...

  6. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    Science.gov (United States)

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki

    2014-09-01

    Mathematical modeling has become a standard technique to understand the dynamics of complex biochemical systems. To promote the modeling, we had developed the CADLIVE dynamic simulator that automatically converted a biochemical map into its associated mathematical model, simulated its dynamic behaviors and analyzed its robustness. To enhance the feasibility by CADLIVE and extend its functions, we propose the CADLIVE toolbox available for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to the research of systems biology and synthetic biology. This application can be freely downloaded from http://www.cadlive.jp/CADLIVE_MATLAB/ together with an instruction.
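
    The central step, turning a reaction map into a dynamic model, can be sketched with a hypothetical two-reaction network and mass-action kinetics; this mimics the spirit of the conversion, not CADLIVE's actual rate-equation generation:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    species = ["A", "B", "C"]
    reactions = [                            # (reactants, products, rate constant), hypothetical network
        ({"A": 1, "B": 1}, {"C": 1}, 0.5),   # A + B -> C
        ({"C": 1}, {"A": 1, "B": 1}, 0.1),   # C -> A + B
    ]

    def rhs(t, y):
        """Mass-action right-hand side assembled automatically from the reaction list."""
        conc = dict(zip(species, y))
        dydt = dict.fromkeys(species, 0.0)
        for reactants, products, k in reactions:
            rate = k * np.prod([conc[s] ** n for s, n in reactants.items()])
            for s, n in reactants.items():
                dydt[s] -= n * rate
            for s, n in products.items():
                dydt[s] += n * rate
        return [dydt[s] for s in species]

    sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 1.0, 0.0])   # initial concentrations of A, B, C
    print(sol.y[:, -1])                                   # concentrations at the end of the run
    ```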

  7. Development of a Motion Sensing and Automatic Positioning Universal Planisphere Using Augmented Reality Technology

    Directory of Open Access Journals (Sweden)

    Wernhuar Tarng

    2017-01-01

    Full Text Available This study combines the augmented reality technology and the sensor functions of GPS, electronic compass, and 3-axis accelerometer on mobile devices to develop a motion sensing and automatic positioning universal planisphere. It can create local star charts according to the current date, time, and position and help users locate constellations on the planisphere easily through motion sensing operation. By holding the mobile device towards the target constellation in the sky, the azimuth and elevation angles are obtained automatically for mapping to its correct position on the star chart. The proposed system combines observational activities with physical operation and spatial cognition for developing correct astronomical concepts, thus making learning more effective. It contains a built-in 3D virtual starry sky to enable observation in classroom for supporting teaching applications. The learning process can be shortened by setting varying observation date, time, and latitude. Therefore, it is a useful tool for astronomy education.
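
    The sensor-to-chart mapping reduces to a standard horizontal-to-equatorial coordinate conversion; a sketch is shown below, with the local sidereal time taken as an input (the app would derive it from GPS time and longitude):

    ```python
    import numpy as np

    def horizontal_to_equatorial(az_deg, alt_deg, lat_deg, lst_hours):
        """Map a device pointing direction (azimuth from North, elevation) to equatorial
        coordinates so it can be located on the star chart."""
        az, alt, lat = np.radians([az_deg, alt_deg, lat_deg])
        dec = np.arcsin(np.sin(lat) * np.sin(alt) + np.cos(lat) * np.cos(alt) * np.cos(az))
        ha = np.arctan2(-np.sin(az) * np.cos(alt),
                        np.sin(alt) * np.cos(lat) - np.cos(alt) * np.sin(lat) * np.cos(az))
        ra_hours = (lst_hours - np.degrees(ha) / 15.0) % 24.0
        return ra_hours, np.degrees(dec)

    # Example: pointing due south at 45 deg elevation from latitude 45 N
    print(horizontal_to_equatorial(180.0, 45.0, 45.0, lst_hours=6.0))   # dec = 0, RA on the meridian
    ```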

  8. The XH-map algorithm: A method to process stereo video to produce a real-time obstacle map

    Science.gov (United States)

    Rosselot, Donald; Hall, Ernest L.

    2005-10-01

    This paper presents a novel, simple and fast algorithm to produce a "floor plan" obstacle map in real time using video. The XH-map algorithm is a transformation of stereo vision data in disparity map space into a two dimensional obstacle map space using a method that can be likened to a histogram reduction of image information. The classic floor-ground background noise problem is addressed with a simple one-time semi-automatic calibration method incorporated into the algorithm. The implementation of the algorithm utilizes the Intel Performance Primitives library and OpenCV libraries for extremely fast and efficient execution, creating a scaled obstacle map from a 480x640x256 stereo pair in 1.4 milliseconds. This algorithm has many applications in robotics and computer vision including enabling an "Intelligent Robot" to "see" for path planning and obstacle avoidance.
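
    A minimal rendering of the histogram-reduction idea, assuming a dense disparity image is already available from the stereo pair; the bin count and support threshold are illustrative, and the fast IPP/OpenCV implementation described above is not reproduced:

    ```python
    import numpy as np

    def xh_obstacle_map(disparity, n_bins=64, min_count=25):
        """Collapse a dense disparity image into a 2-D 'floor plan' obstacle map:
        one disparity histogram per image column; bins with enough support mark an
        obstacle at that column (x) and disparity (a proxy for distance)."""
        h, w = disparity.shape
        obstacle = np.zeros((n_bins, w), dtype=bool)
        edges = np.linspace(disparity.min(), disparity.max(), n_bins + 1)
        for col in range(w):
            counts, _ = np.histogram(disparity[:, col], bins=edges)
            obstacle[:, col] = counts >= min_count      # enough pixels at this depth => obstacle
        return obstacle
    ```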

  9. Automatic TLI recognition system, user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Lassahn, G.D.

    1997-02-01

    This report describes how to use an automatic target recognition system (version 14). In separate volumes are a general description of the ATR system, Automatic TLI Recognition System, General Description, and a programmer's manual, Automatic TLI Recognition System, Programmer's Guide.

  10. 46 CFR 78.47-53 - Automatic ventilation dampers.

    Science.gov (United States)

    2010-10-01

    46 CFR Shipping (2010-10-01) ... Fire and Emergency Equipment, Etc. § 78.47-53 Automatic ventilation dampers. (a) The manual operating positions for automatic fire dampers in ventilation ducts passing through main vertical zone bulkheads shall...

  11. 30 CFR 77.1401 - Automatic controls and brakes.

    Science.gov (United States)

    2010-07-01

    30 CFR Mineral Resources (2010-07-01) ... MINES, Personnel Hoisting, § 77.1401 Automatic controls and brakes. Hoists and elevators shall be equipped with overspeed, overwind, and automatic stop controls and with brakes capable of stopping the elevator...

  12. The Use of Automatic Indexing for Authority Control.

    Science.gov (United States)

    Dillon, Martin; And Others

    1981-01-01

    Uses an experimental system for authority control on a collection of bibliographic records to demonstrate the resemblance between thesaurus-based automatic indexing and automatic authority control. Details of the automatic indexing system are given, results discussed, and the benefits of the resemblance examined. Included are a rules appendix and…

  13. 49 CFR 236.825 - System, automatic train control.

    Science.gov (United States)

    2010-10-01

    49 CFR Transportation (2010-10-01) ... INSPECTION, MAINTENANCE, AND REPAIR OF SIGNAL AND TRAIN CONTROL SYSTEMS, DEVICES, AND APPLIANCES, Definitions, § 236.825 System, automatic train control. A system so arranged that its operation will automatically...

  14. The ‘Continuing Misfortune’ of Automatism in Early Surrealism

    Directory of Open Access Journals (Sweden)

    Tessel M. Bauduin

    2015-09-01

    Full Text Available In the 1924 Manifesto of Surrealism surrealist leader André Breton (1896-1966 defined Surrealism as ‘psychic automatism in its pure state,’ positioning ‘psychic automatism’ as both a concept and a technique. This definition followed upon an intense period of experimentation with various forms of automatism among the proto-surrealist group; predominantly automatic writing, but also induced dream states. This article explores how surrealist ‘psychic automatism’ functioned as a mechanism for communication, or the expression of thought as directly as possible through the unconscious, in the first two decades of Surrealism. It touches upon automatic writing, hysteria as an automatic bodily performance of the unconscious, dreaming and the experimentation with induced dream states, and automatic drawing and other visual arts-techniques that could be executed more or less automatically as well. For all that the surrealists reinvented automatism for their own poetic, artistic and revolutionary aims, the automatic techniques were primarily drawn from contemporary Spiritualism, psychical research and experimentation with mediums, and the article teases out the connections to mediumistic automatism. It is demonstrated how the surrealists effectively and successfully divested automatism of all things spiritual. It furthermore becomes clear that despite various mishaps, automatism in many forms was a very successful creative technique within Surrealism.

  15. Automatic Evaluations in Clinically Anxious and Nonanxious Children and Adolescents

    Science.gov (United States)

    Vervoort, Leentje; Wolters, Lidewij H.; Hogendoorn, Sanne M.; Prins, Pier J. M.; de Haan, Else; Nauta, Maaike H.; Boer, Frits

    2010-01-01

    Automatic evaluations of clinically anxious and nonanxious children (n = 40, aged 8-16, 18 girls) were compared using a pictorial performance-based measure of automatic affective associations. Results showed a threat-related evaluation bias in clinically anxious but not in nonanxious children. In anxious participants, automatic evaluations of…

  16. Automaticity of walking: functional significance, mechanisms, measurement and rehabilitation strategies

    Directory of Open Access Journals (Sweden)

    David J Clark

    2015-05-01

    Full Text Available Automaticity is a hallmark feature of walking in adults who are healthy and well-functioning. In the context of walking, ‘automaticity’ refers to the ability of the nervous system to successfully control typical steady state walking with minimal use of attention-demanding executive control resources. Converging lines of evidence indicate that walking deficits and disorders are characterized in part by a shift in the locomotor control strategy from healthy automaticity to compensatory executive control. This is potentially detrimental to walking performance, as an executive control strategy is not optimized for locomotor control. Furthermore, it places excessive demands on a limited pool of executive reserves. The result is compromised ability to perform basic and complex walking tasks and heightened risk for adverse mobility outcomes including falls. Strategies for rehabilitation of automaticity are not well defined, which is due to both a lack of systematic research into the causes of impaired automaticity and to a lack of robust neurophysiological assessments by which to gauge automaticity. These gaps in knowledge are concerning given the serious functional implications of compromised automaticity. Therefore, the objective of this article is to advance the science of automaticity of walking by consolidating evidence and identifying gaps in knowledge regarding: a functional significance of automaticity; b neurophysiology of automaticity; c measurement of automaticity; d mechanistic factors that compromise automaticity; and e strategies for rehabilitation of automaticity.

  17. 14 CFR 171.267 - Glide path automatic monitor system.

    Science.gov (United States)

    2010-01-01

    14 CFR Aeronautics and Space (2010-01-01) ... Landing System (ISMLS), § 171.267 Glide path automatic monitor system. (a) The ISMLS glide path equipment must provide an automatic monitor system that transmits a warning to designated local and remote...

  18. 14 CFR 171.263 - Localizer automatic monitor system.

    Science.gov (United States)

    2010-01-01

    14 CFR Aeronautics and Space (2010-01-01) ... System (ISMLS), § 171.263 Localizer automatic monitor system. (a) The ISMLS localizer equipment must provide an automatic monitor system that transmits a warning to designated local and remote control points...

  19. Automatic polygon layers integration and its implementation

    Directory of Open Access Journals (Sweden)

    Ondřej Skoupý

    2012-01-01

    Full Text Available Land cover change analysis is one of the most important tools for landscape management purposes, as it enables exploring of long-term natural processes especially in contrast with anthropogenic factors. Such analysis is always dependent on quality of available data. Due to long tradition of map making and quality and accuracy of preserved historical cartographic data in the Czech Republic it is possible to perform an effective land use change analysis using maps dating even back to early nineteenth century. Clearly, because map making methodology has evolved since then, the primary problem of land cover change analysis are different sources and thus different formats of analyzed data which need to be integrated, both spatially and contextually, into one coherent data set. One of the most difficult problems is caused by the fact that due to different map acquisition methodologies the maps are loaded with various errors originating from measurement, map drawing, storage, digitalization and finally georeferencing and possible vectorization. This means that some apparent changes may be for example caused by different methodology and accuracy of mapping a landscape feature that has not actually changed its shape and spatial position through the time. This work deals with spatial integration of data, namely identifying corresponding lines in map layers from different epochs and adjusting the borders plotted in the less accurate map to spatially correspond to the more accurate map. For such a purpose, a special program had to be created. It basically follows the work by Malach et al., 2009 who introduced their Layer Integrator. This work however presents a significantly different approach to creating an integration tool.

  20. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Science.gov (United States)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, the photorealistic 3D city models are increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most of the texture reconstruction approaches are probably leading to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic framework of texture reconstruction to generate textures from oblique images for photorealistic visualization. Our approach include three major steps as follows: mesh parameterization, texture atlas generation and texture blending. Firstly, mesh parameterization procedure referring to mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D texture to 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in texture domain is reconstructed from all visible images with exterior orientation and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can get textured by created texture without resampling. Experiment results show that our method can effectively mitigate the occurrence of texture fragmentation. It is demonstrated that the proposed framework is effective and useful for automatic texture reconstruction of 3D city model.

  2. 11th Portuguese Conference on Automatic Control

    CERN Document Server

    Matos, Aníbal; Veiga, Germano

    2015-01-01

    During the last 20 years the Portuguese association of automatic control, Associação Portuguesa de Controlo Automático, with the sponsorship of IFAC have established the CONTROLO conference as a reference international forum where an effective exchange of knowledge and experience amongst researchers active in various theoretical and applied areas of systems and control can take place, always including considerable space for promoting new technical applications and developments, real-world challenges and success stories. In this 11th edition the CONTROLO conference evolved by introducing two strategic partnerships with Spanish and Brazilian associations in automatic control, Comité Español de Automática and Sociedade Brasileira de Automatica, respectively.

  3. Automatic Species Identification of Live Moths

    Science.gov (United States)

    Mayo, Michael; Watson, Anna T.

    A collection consisting of the images of 774 live moth individuals, each moth belonging to one of 35 different UK species, was analysed to determine if data mining techniques could be used effectively for automatic species identification. Feature vectors were extracted from each of the moth images and the machine learning toolkit WEKA was used to classify the moths by species using the feature vectors. Whereas a previous analysis of this image dataset reported in the literature [1] required that each moth's least worn wing region be highlighted manually for each image, WEKA was able to achieve a greater level of accuracy (85%) using support vector machines without manual specification of a region of interest at all. This paper describes the features that were extracted from the images, and the various experiments using different classifiers and datasets that were performed. The results show that data mining can be usefully applied to the problem of automatic species identification of live specimens in the field.
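
    A sketch of the classification stage, assuming the feature vectors have already been extracted from the images; the paper used WEKA, and scikit-learn with hypothetical input files stands in for it here:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X = np.load("moth_features.npy")   # hypothetical file: one feature vector per moth image
    y = np.load("moth_species.npy")    # hypothetical file: species label per image

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=10)          # 10-fold cross-validation
    print(f"mean accuracy: {scores.mean():.3f}")
    ```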

  4. Method for automatically scramming a nuclear reactor

    Science.gov (United States)

    Ougouag, Abderrafi M.; Schultz, Richard R.; Terry, William K.

    2005-12-27

    An automatically scramming nuclear reactor system. One embodiment comprises a core having a coolant inlet end and a coolant outlet end. A cooling system operatively associated with the core provides coolant to the coolant inlet end and removes heated coolant from the coolant outlet end, thus maintaining a pressure differential therebetween during a normal operating condition of the nuclear reactor system. A guide tube is positioned within the core with a first end of the guide tube in fluid communication with the coolant inlet end of the core, and a second end of the guide tube in fluid communication with the coolant outlet end of the core. A control element is positioned within the guide tube and is movable therein between upper and lower positions, and automatically falls under the action of gravity to the lower position when the pressure differential drops below a safe pressure differential.

  5. The automatic alignment system of GEO 600

    CERN Document Server

    Grote, H; Freise, A; Gossler, S; Willke, B; Lück, H B; Ward, H; Casey, M; Strain, K A; Robertson, D; Hough, J; Danzmann, K

    2002-01-01

    This paper gives an overview of the automatic mirror alignment system of the modecleaner and main interferometer of the GEO 600 gravitational wave detector. In order to achieve the required sensitivity of the detector, the eigenmodes of all optical cavities have to be aligned with respect to the incoming beams (or vice versa) and kept aligned for long measuring periods. Moreover the beam spots have to be centred on the mirrors to minimize coupling of residual angular mirror motion into changes of the optical path length. An overview of the principles and setup for the automatic alignment is given, and first results of the modecleaner and 1200 m cavity alignment system are presented, including the error-point spectra of mirror angular motions, which are smaller than 10^-8 rad Hz^-1/2 below 10 Hz.

  6. Automatic modulation recognition of communication signals

    CERN Document Server

    Azzouz, Elsayed Elsayed

    1996-01-01

    Automatic modulation recognition is a rapidly evolving area of signal analysis. In recent years, interest from the academic and military research institutes has focused around the research and development of modulation recognition algorithms. Any communication intelligence (COMINT) system comprises three main blocks: receiver front-end, modulation recogniser and output stage. Considerable work has been done in the area of receiver front-ends. The work at the output stage is concerned with information extraction, recording and exploitation and begins with signal demodulation, that requires accurate knowledge about the signal modulation type. There are, however, two main reasons for knowing the current modulation type of a signal; to preserve the signal information content and to decide upon the suitable counter action, such as jamming. Automatic Modulation Recognition of Communications Signals describes in depth this modulation recognition process. Drawing on several years of research, the authors provide a cr...

  7. Automatic photointerpretation via texture and morphology analysis

    Science.gov (United States)

    Tou, J. T.

    1982-01-01

    Computer-based techniques for automatic photointerpretation based upon information derived from texture and morphology analysis of images are discussed. By automatic photointerpretation is meant the determination of semantic descriptions of the content of the images by computer. To perform semantic analysis of morphology, a hierarchical structure of knowledge representation was developed. The simplest elements in a morphology are strokes, which are used to form alphabets. The alphabets are the elements for generating words, which are used to describe the function or property of an object or a region. The words are the elements for constructing sentences, which are used for semantic description of the content of the image. Photointerpretation based upon morphology is then augmented by textural information. Textural analysis is performed using a pixel-vector approach.

  8. Automatic balancing valves in distribution networks today

    Energy Technology Data Exchange (ETDEWEB)

    Golestan, F. [Flow Design, Inc., Dallas, TX (United States)

    1996-12-31

    Automatic flow-limiting (self-actuated) valves have been in the heating, ventilating, and air-conditioning (HVAC) market for some time now. Their principle of operation is based on fluid momentum and Bernoulli's theorem. Basically, they absorb pressure to keep the flow rate constant. The general operation and their flow characteristics are described in the 1992 ASHRAE Handbook--Systems and Equipment, chapter 43 (ASHRAE 1992). The application and interaction of these valves with other system components, when installed in hydronic distribution networks, are outlined in this presentation. A simple, multilevel piping network is analyzed. The network consists of a pump, connecting piping, an automatic temperature control valve (ATC), a coil, and balancing valves.

  9. Paediatric Automatic Phonological Analysis Tools (APAT).

    Science.gov (United States)

    Saraiva, Daniela; Lousada, Marisa; Hall, Andreia; Jesus, Luis M T

    2017-12-01

    To develop the pediatric Automatic Phonological Analysis Tools (APAT) and to estimate inter and intrajudge reliability, content validity, and concurrent validity. The APAT were constructed using Excel spreadsheets with formulas. The tools were presented to an expert panel for content validation. The corpus used in the Portuguese standardized test Teste Fonético-Fonológico - ALPE produced by 24 children with phonological delay or phonological disorder was recorded, transcribed, and then inserted into the APAT. Reliability and validity of APAT were analyzed. The APAT present strong inter- and intrajudge reliability (>97%). The content validity was also analyzed (ICC = 0.71), and concurrent validity revealed strong correlations between computerized and manual (traditional) methods. The development of these tools contributes to fill existing gaps in clinical practice and research, since previously there were no valid and reliable tools/instruments for automatic phonological analysis, which allowed the analysis of different corpora.

  10. Automatic document navigation for digital content remastering

    Science.gov (United States)

    Lin, Xiaofan; Simske, Steven J.

    2003-12-01

    This paper presents a novel method of automatically adding navigation capabilities to re-mastered electronic books. We first analyze the need for a generic and robust system to automatically construct navigation links into re-mastered books. We then introduce the core algorithm based on text matching for building the links. The proposed method utilizes the tree-structured dictionary and directional graph of the table of contents to efficiently conduct the text matching. Information fusion further increases the robustness of the algorithm. The experimental results on the MIT Press digital library project are discussed and the key functional features of the system are illustrated. We have also investigated how the quality of the OCR engine affects the linking algorithm. In addition, the analogy between this work and Web link mining has been pointed out.

  11. Development of an automatic pipeline scanning system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae H.; Lee, Jae C.; Moon, Soon S.; Eom, Heung S.; Choi, Yu R

    1999-11-01

    Pressure pipe inspection in nuclear power plants is one of the mandatory regulation items. Compared to manual ultrasonic inspection, automatic inspection has the benefits of more accurate and reliable inspection results and reduction of radiation disposal. The final objective of this project is to develop an automatic pipeline inspection system for pressure pipe welds in nuclear power plants. We developed a pipeline scanning robot with four magnetic wheels and a 2-axis manipulator for controlling ultrasonic transducers, and developed the robot control computer, which controls the robot to navigate along the inspection path exactly. We expect our system can contribute to reduction of inspection time, performance enhancement, and effective management of inspection results. The system developed by this project can be practically used for inspection work after field tests. (author)

  12. Meteorological Automatic Weather Station (MAWS) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Holdridge, Donna J [Argonne National Lab. (ANL), Argonne, IL (United States); Kyrouac, Jenni A [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-08-01

    The Meteorological Automatic Weather Station (MAWS) is a surface meteorological station, manufactured by Vaisala, Inc., dedicated to the balloon-borne sounding system (BBSS), providing surface measurements of the thermodynamic state of the atmosphere and the wind speed and direction for each radiosonde profile. These data are automatically provided to the BBSS during the launch procedure and included in the radiosonde profile as the surface measurements of record for the sounding. The MAWS core set of measurements is: Barometric Pressure (hPa), Temperature (°C), Relative Humidity (%), Arithmetic-Averaged Wind Speed (m/s), and Vector-Averaged Wind Direction (deg). The sensors that collect the core variables are mounted at the standard heights defined for each variable.

  13. Intelligent Storage System Based on Automatic Identification

    Directory of Open Access Journals (Sweden)

    Kolarovszki Peter

    2014-09-01

    Full Text Available This article describes RFID technology in conjunction with warehouse management systems. The article also deals with automatic identification and data capture technologies and the individual processes used in a warehouse management system. It describes processes from entering goods into production to identification of goods, as well as palletizing, storing, bin transferring and removing goods from the warehouse. The article focuses on utilizing AMP middleware in WMS processes. Nowadays, the identification of goods in most warehouses is carried out through barcodes. In this article we want to specify how the processes described above can be identified through RFID technology. All results are verified by measurement in our AIDC laboratory, which is located at the University of Žilina, and also in the Laboratory of Automatic Identification Goods and Services located in GS1 Slovakia. The results of our research bring a new point of view and indicate ways of using RFID technology in a warehouse management system.

  14. Automatic Classification of Attacks on IP Telephony

    Directory of Open Access Journals (Sweden)

    Jakub Safarik

    2013-01-01

    Full Text Available This article proposes an algorithm for automatic analysis of attack data in an IP telephony network with a neural network. Data for the analysis are gathered from the various monitoring applications running in the network. These monitoring systems are a typical part of today's networks, and information from them is usually used only after an attack. It is possible to use automatic classification of IP telephony attacks for nearly real-time classification and for countering or mitigating potential attacks. The classification uses the proposed neural network, and the article covers the design of the neural network and its practical implementation. It also contains methods for neural network learning and data gathering functions from a honeypot application.

  15. Automatic-Control System for Safer Brazing

    Science.gov (United States)

    Stein, J. A.; Vanasse, M. A.

    1986-01-01

    Automatic-control system for radio-frequency (RF) induction brazing of metal tubing reduces probability of operator errors, increases safety, and ensures high-quality brazed joints. Unit combines functions of gas control and electric-power control. Minimizes unnecessary flow of argon gas into work area and prevents electrical shocks from RF terminals. Controller will not allow power to flow from RF generator to brazing head unless work has been firmly attached to head and has actuated micro-switch. Potential shock hazard eliminated. Flow of argon for purging and cooling must be turned on and adjusted before brazing power applied. Provision ensures power not applied prematurely, causing damaged work or poor-quality joints. Controller automatically turns off argon flow at conclusion of brazing so potentially suffocating gas does not accumulate in confined areas.

  16. Automatic Indexing Based on Term Activity

    Science.gov (United States)

    Matsumura, Naohiro; Ohsawa, Yukio; Ishizuka, Mitsuru

    With the increasing number of electronic documents, automatic indexing from a document is an essential approach in information retrieval systems, such as search engines. This paper proposes an automatic indexing method named PAI (Priming Activation Indexing) which extracts keywords expressing assertions of a document. The basic idea is that since an author writes a document for insisting on his/her main point, impressive terms to be born in the mind of the reader could represent the asserted keywords of the document. Our approach employs a spreading activation model to extract keywords based on the activity of terms without using corpus, thesaurus, syntactic analysis, dependency relations between terms, and the other knowledge except for stop-word list. Experimental evaluations are reported by applying PAI to both papers and the archives of a mailing-list.

  17. Automatic Phonetic Transcription for Danish Speech Recognition

    DEFF Research Database (Denmark)

    Kirkedal, Andreas Søeborg

    to acquire and expensive to create. For languages with productive compounding or agglutinative languages like German and Finnish, respectively, phonetic dictionaries are also hard to maintain. For this reason, automatic phonetic transcription tools have been produced for many languages. The quality...... of automatic phonetic transcriptions vary greatly with respect to language and transcription strategy. For some languages where the difference between the graphemic and phonetic representations are small, graphemic transcriptions can be used to create ASR systems with acceptable performance. In other languages...... for English and now extended to cover 50 languages. Due to the nature of open source software, the quality of language support depends greatly on who encoded them. The Danish version was created by a Danish native speaker and contains more than 8,600 spelling-to-phoneme rules and more than 11,000 rules...

  18. Differential automatic zero-adjusting amplifier.

    Science.gov (United States)

    Broersen, B; Van Krevelen, F; van Heusden, J T; van Heukelom, J S

    1979-07-01

    A method is described for building a low-voltage-drift differential dc amplifier featuring automatic zero adjustment, a high input impedance, and a bandwidth of 10 kHz. This is achieved by an asymmetric two-step process between the input signal and ground. Bandwidth can be extended by the use of a second amplifier during the ground-sampling time. The amplifier can be made with standard electronic components. A major advantage of this method is that an existing amplifier can easily be converted into a low-voltage-drift amplifier by adding the essential elements of the described automatic zero-adjusting amplifier to its input stage. To illustrate the method a practical example is constructed featuring a drift of 0.2 microvolts per degree Celsius.

  19. Towards Automatic Decentralized Control Structure Selection

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2000-01-01

    . The control structure selection problem is formulated as a special MILP employing cost coefficients which are computed using Parseval's theorem combined with RGA and IMC concepts. This approach enables selection and tuning of large-scale plant-wide decentralized controllers through efficient combination......A subtask in integration of design and control of chemical processes is the selection of a control structure. Automating the selection of the control structure enables sequential integration of process and control design. As soon as the process is specified or computed, a structure...... for decentralized control is determined automatically, and the resulting decentralized control structure is automatically tuned using standard techniques. Dynamic simulation of the resulting process system gives immediate feedback to the process design engineer regarding practical operability of the process...

  1. Smart Automatic Newspaper Vending Machine Controller IC

    OpenAIRE

    Pandey Sumit; Pal Amrindra; Sharma Sandeep

    2017-01-01

    A machine used for dispensing items like snacks, beverages, lottery tickets, etc. to customers automatically, meaning without manual intervention, is referred to as a Vending Machine. Vending Machines are part of life in most of the major cities in India and across the globe. The objective of this paper is to design a Smart Automatic News Paper Vending Machine Controller IC. The input to this machine is currency in Indian rupees and it delivers the product to the customer. Thi...

  2. Automatic selection of resting-state networks with functional magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Silvia Francesca eStorti

    2013-05-01

    Full Text Available Functional magnetic resonance imaging (fMRI) during a resting-state condition can reveal the co-activation of specific brain regions in distributed networks, called resting-state networks, which are selected by independent component analysis (ICA) of the fMRI data. One of the major difficulties with component analysis is the automatic selection of the ICA features related to brain activity. In this study we describe a method designed to automatically select networks of potential functional relevance, specifically, those regions known to be involved in motor function, visual processing, executive functioning, auditory processing, memory, and the default-mode network. To do this, image analysis was based on probabilistic ICA as implemented in FSL software. After decomposition, the optimal number of components was selected by applying a novel algorithm which takes into account, for each component, Pearson's median coefficient of skewness of the spatial maps generated by FSL, followed by clustering, segmentation, and spectral analysis. To evaluate the performance of the approach, we investigated the resting-state networks in 25 subjects. For each subject, three resting-state scans were obtained with a Siemens Allegra 3 T scanner (NYU data set). Comparison of the visually and the automatically identified neuronal networks showed that the algorithm had high accuracy (first scan: 95%, second scan: 95%, third scan: 93%) and precision (90%, 90%, 84%). The reproducibility of the networks for visual and automatic selection was very close: it was highly consistent in each subject for the default-mode network (≥ 92%) and the occipital network, which includes the medial visual cortical areas (≥ 94%), and consistent for the attention network (≥ 80%), the right and/or left lateralized frontoparietal attention networks, and the temporal-motor network (≥ 80%). The automatic selection method may be used to detect neural networks and reduce subjectivity in ICA
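
    A minimal illustration of the component-ranking statistic named in this record: Pearson's median (second) skewness coefficient, 3*(mean - median)/std, applied to the voxel values of an ICA spatial map. The clustering, segmentation, and spectral-analysis stages of the algorithm are not shown, and the function name is an assumption.

      import numpy as np

      def pearson_median_skewness(spatial_map):
          # Pearson's second skewness coefficient of the map's voxel values
          values = np.asarray(spatial_map, dtype=float).ravel()
          return 3.0 * (values.mean() - np.median(values)) / values.std()

      # Components whose spatial maps have strongly skewed value distributions
      # are the usual candidates for neuronally plausible networks.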

  3. Automatic Verification of Autonomous Robot Missions

    Science.gov (United States)

    2014-01-01

    for a mission related to the search for a biohazard. Keywords: mobile robots, formal verification, performance guarantees, automatic translation 1...tested. 2 Related Work Formal verification of systems is critical when failure creates a high cost, such as life or death scenarios. A variety of...robot. 3.3 PARS Process algebras are specification languages that allow for formal verification of concurrent systems. Process Algebra for Robot

  4. Automatic segmentation of speech into syllables

    OpenAIRE

    Mertens, Piet

    1987-01-01

    A multiple pass procedure for the automatic segmentation of syllabic units is described which involves (1) a broad segmentation triggered by the dips in the intensity curve of band-pass filtered speech, (2) a further segmentation on the basis of the shape of the curve, and (3) the readjustment of the syllabic nucleus within syllable boundaries, based on the intensity of the unfiltered speech.

  5. Automatic sleep monitoring using ear-EEG

    OpenAIRE


    2017-01-01

    The monitoring of sleep patterns without inconvenience to the patient or the involvement of a medical specialist is a clinical question of significant importance. To this end, we propose an automatic sleep stage monitoring system based on an affordable, unobtrusive, discreet, and long-term wearable in-ear sensor for recording the electroencephalogram (ear-EEG). The selected features for sleep pattern classification from a single ear-EEG channel include the spectral edge frequency and multi-scale fuzzy...
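
    As a rough illustration of one feature named above, the sketch below computes a spectral edge frequency (here SEF95, the frequency below which 95% of the Welch-spectrum power lies) from a single ear-EEG epoch. The sampling rate, epoch length, band limit, and 95% edge are assumed values, not necessarily those of the paper.

      import numpy as np
      from scipy.signal import welch

      def spectral_edge_frequency(epoch, fs, edge=0.95, fmax=32.0):
          # Welch power spectral density of one EEG epoch
          f, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 4 * int(fs)))
          band = f <= fmax                               # keep the EEG band only
          cum = np.cumsum(psd[band]) / np.sum(psd[band])
          return f[band][np.searchsorted(cum, edge)]     # frequency at the edge

      fs = 250.0                                         # assumed sampling rate
      epoch = np.random.randn(int(30 * fs))              # placeholder 30-s epoch
      print(spectral_edge_frequency(epoch, fs))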

  6. Automatic Algorithm Selection for Complex Simulation Problems

    CERN Document Server

    Ewald, Roland

    2012-01-01

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge on simulation. Roland Ewald analyzes and discusses existing approaches to solve the algorithm selection problem in the context of simulation. He introduces a framework for automatic simulation algorithm selection and

  7. The Automatic External Cardioverter-Defibrillator

    OpenAIRE

    Antoni Martínez-Rubio; Gonzalo Barón-Esquivias

    2004-01-01

    In-hospital cardiac arrest remains a major problem but new technologies allowing fully automatic external defibrillation are available. These technologies allow the concept of “external therapeutic monitoring” of lethal arrhythmias. Since early defibrillation improves outcome by decreasing morbidity and mortality, the use of this device should improve the outcome of in-hospital cardiac arrest victims. Furthermore, the use of these devices could allow safe monitoring and treatment of patients ...

  8. Automatic determination of seismic phase arrival times

    Science.gov (United States)

    Kang, T. S.; Kim, M.; Rhie, J.

    2016-12-01

    Determination of P- and S-wave phase arrival times is a significant factor in microseismic detection and thus in hypocenter source inversion. If analysts pick P- and S-wave phase arrival times of microseismic events manually, the picks are prone to inconsistency, because the determination is subjective, and the work takes too much time. This study presents a method for the automatic detection of events and determination of the arrival times of seismic phases. An implementation of the method consists of five steps. The first is the initial declaration of an event in continuous seismic data using a characteristic function which is also designed specifically in this study. The second is the automatic determination of the P-wave phase arrival time using the normalized squared-envelope function. The third is the application of three-axis rotation using an energy ratio among the three-component seismograms of the event. The fourth is the automatic determination of the S-wave phase arrival time. The final step is the removal of falsely determined times in some records using the Wadati diagram, which plots S-P times against P-wave phase arrival times over the stations used in the picking stage. Application of the method to continuous waveform data from a temporary broadband seismograph network consisting of 20 stations distributed in Jeju Island shows that the automatic event detection and determination of phase arrival times are carried out with accuracy.
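
    The characteristic and envelope functions designed in the study are not reproduced here, but the sketch below shows the general shape of such a picker: a P arrival is declared at the first sample whose normalized squared envelope exceeds a threshold. The threshold value and the synthetic test signal are illustrative assumptions.

      import numpy as np
      from scipy.signal import hilbert

      def pick_p_arrival(trace, fs, threshold=0.2):
          # Normalized squared envelope as a simple characteristic function
          envelope = np.abs(hilbert(trace)) ** 2
          envelope /= envelope.max()
          idx = np.flatnonzero(envelope >= threshold)
          return idx[0] / fs if idx.size else None       # arrival time in seconds

      fs = 100.0                                         # samples per second
      t = np.arange(0, 10, 1 / fs)
      trace = 0.01 * np.random.randn(t.size)             # background noise
      onset = t >= 3.0                                    # synthetic event at 3 s
      trace[onset] += np.sin(2 * np.pi * 5 * t[onset]) * np.exp(-(t[onset] - 3.0))
      print(pick_p_arrival(trace, fs))                    # approximately 3.0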

  9. Automatic system for ionization chamber current measurements.

    Science.gov (United States)

    Brancaccio, Franco; Dias, Mauro S; Koskinas, Marina F

    2004-12-01

    The present work describes an automatic system developed for current integration measurements at the Laboratório de Metrologia Nuclear of Instituto de Pesquisas Energéticas e Nucleares. This system includes software (graphic user interface and control) and a module connected to a microcomputer by means of a commercial data acquisition card. Measurements were performed in order to check the performance and to validate the proposed design.

  10. Design and Optimize an Automatic Medicine Box

    OpenAIRE

    Lai, Lin; Wu, Shengmin

    2010-01-01

    Our work is to design an automatic and intelligent medicine box that helps people take their medicine easily, especially those with arthritis, the elderly, the forgetful, and children. First, five different structures were designed with CAD Inventor software. The one that can be realized most easily in practice was chosen to be completed with a dynamic system. Considering the characteristics of medicine, two schemes among the structural designs are ideal. Several c...

  11. Automatic Extraction of JPF Options and Documentation

    Science.gov (United States)

    Luks, Wojciech; Tkachuk, Oksana; Buschnell, David

    2011-01-01

    Documenting existing Java PathFinder (JPF) projects or developing new extensions is a challenging task. JPF provides a platform for creating new extensions and relies on key-value properties for their configuration. Keeping track of all possible options and extension mechanisms in JPF can be difficult. This paper presents jpf-autodoc-options, a tool that automatically extracts JPF projects options and other documentation-related information, which can greatly help both JPF users and developers of JPF extensions.

  12. Linguistic challenges in automatic summarization technology

    OpenAIRE

    Diedrichsen, Elke

    2017-01-01

    [EN] Automatic summarization is a field of Natural Language Processing that is increasingly used in industry today. The goal of the summarization process is to create a summary of one document or a multiplicity of documents that will retain the sense and the most important aspects while reducing the length considerably, to a size that may be user-defined. One differentiates between extraction-based and abstraction-based summarization. In an extraction-based system, the words and sentences are...

  13. Automatic Crowd Analysis from Airborne Images

    OpenAIRE

    Sirmacek, Beril; Reinartz, Peter

    2011-01-01

    Recently, automatic detection of people and crowded areas from images has become a very important research field, since it can provide crucial information, especially for police departments and crisis management teams. Detecting crowds and measuring the density of people can prevent possible accidents or unpleasant conditions from arising. Understanding the behavioral dynamics of large groups of people can also help to estimate future states of underground passages, shopping-center-like public entrances...

  14. Automatic Detection of Cyberbullying on Social Media

    OpenAIRE

    Engman, Love

    2016-01-01

    Bullying on social media is a dire problem for many youths, leading to severe health problems. In this thesis we describe the construction of a software prototype capable of automatically identifying bullying comments on the social media platform ASKfm using Natural Language Processing (NLP) and Machine Learning (ML) techniques. State of the art NLP and ML algorithms from previous research are studied and evaluated for the task of identifying bullying comments in a data set from ASKfm. The be...

  15. Automatic recognition of quarantine citrus diseases

    OpenAIRE

    Stegmayer, Georgina; Milone, Diego Humberto; Garran, Sergio; Burdyn, Lourdes

    2017-01-01

    Citrus exports to foreign markets are severely limited today by fruit diseases. Some of them, like citrus canker, black spot and scab, are quarantine diseases for these markets. For this reason, it is important to perform strict controls before fruits are exported to avoid the inclusion of citrus affected by them. Nowadays, technical decisions are based on visual diagnosis by human experts, highly dependent on individual skill. This work presents a model capable of automatically recognizing the...

  16. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M. [VTT Energy, Espoo (Finland); Hakola, T.; Antila, E. [ABB Power Oy, Helsinki (Finland); Seppaenen, M. [North-Carelian Power Company (Finland)

    1996-12-31

    In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of the computer systems of medium voltage distribution network automation, is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA, systems are studied. Then the integration of the substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  17. Automatic Control System Switching Roadway Lighting

    OpenAIRE

    Agus Trimuji Susilo; Lingga Hermanto Drs. MM

    2002-01-01

    Lack of attention by the officers responsible for street lighting means that the lights along this city's main roads are not switched at the right times. When it is already dark the lights have not yet been lit, which can endanger road users, and when it is already bright the lights are switched off late, so much electricity is wasted for nothing. Given the problems above, automatic switching is required that can control all the lights - the existing l...

  18. Automatic modulation classification principles, algorithms and applications

    CERN Document Server

    Zhu, Zhechen

    2014-01-01

    Automatic Modulation Classification (AMC) has been a key technology in many military, security, and civilian telecommunication applications for decades. In military and security applications, modulation often serves as another level of encryption; in modern civilian applications, multiple modulation types can be employed by a signal transmitter to control the data rate and link reliability. This book offers comprehensive documentation of AMC models, algorithms and implementations for successful modulation recognition. It provides an invaluable theoretical and numerical comparison of AMC algo

  19. Automatic program generation from specifications using PROLOG

    Science.gov (United States)

    Pelin, Alex; Morrow, Paul

    1988-01-01

    An automatic program generator which creates PROLOG programs from input/output specifications is described. The generator takes as input descriptions of the input and output data types, a set of transformations and the input/output relation. Abstract data types are used as models for data. They are defined as sets of terms satisfying a system of equations. The tests, the transformations and the input/output relation are also specified by equations.

  20. Automatic generation of multilingual sports summaries

    OpenAIRE

    Hasan, Fahim Muhammad

    2011-01-01

    Natural Language Generation is a subfield of Natural Language Processing, which is concerned with automatically creating human readable text from non-linguistic forms of information. A template-based approach to Natural Language Generation utilizes base formats for different types of sentences, which are subsequently transformed to create the final readable forms of the output. In this thesis, we investigate the suitability of a template-based approach to multilingual Natural Language Generat...

  1. Research on automatic control system of greenhouse

    Science.gov (United States)

    Liu, Yi; Qi, Guoyang; Li, Zeyu; Wu, Qiannan; Meng, Yupeng

    2017-03-01

    This paper introduces an automatic greenhouse control system based on a single-chip microcomputer and a temperature and humidity sensor, and describes the system's hardware structure, working principle and workflow. A large number of experiments on the performance of the control system show that the system can reliably control room temperature and humidity, can be used in indoor breeding and planting, and offers versatility and portability.

  2. Automatic surface inoculation of agar trays.

    Science.gov (United States)

    Wilkins, J. R.; Mills, S. M.; Boykin, E. H.

    1972-01-01

    Description of a machine and technique for the automatic inoculation of a plastic tray containing agar media with a culture, using either a conventional inoculation loop or a cotton swab. The design of the machine is simple, it is easy to use, and it relieves the operator from the manual task of streaking cultures. The described technique makes possible the visualization of the overall qualitative and, to some extent, quantitative relationships of various bacterial types in a sample tested.

  3. Automatic code generator for higher order integrators

    Science.gov (United States)

    Mushtaq, Asif; Olaussen, Kåre

    2014-05-01

    Some explicit algorithms for higher order symplectic integration of a large class of Hamilton's equations have recently been discussed by Mushtaq et al. Here we present a Python program for automatic numerical implementation of these algorithms for a given Hamiltonian, both for double precision and multiprecision computations. We provide examples of how to use this program, and illustrate behavior of both the code generator and the generated solver module(s).
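
    A minimal sketch of the kind of solver such a generator emits: one Stormer-Verlet (second-order symplectic) step for a separable Hamiltonian H(q, p) = p^2/(2m) + V(q). Higher-order schemes are built by composing steps of this form; the multiprecision support mentioned in the record is omitted, and the harmonic-oscillator example is an assumption for illustration.

      def verlet_step(q, p, dt, grad_V, m=1.0):
          # One symplectic Stormer-Verlet step for H = p^2/(2m) + V(q)
          p_half = p - 0.5 * dt * grad_V(q)
          q_new = q + dt * p_half / m
          p_new = p_half - 0.5 * dt * grad_V(q_new)
          return q_new, p_new

      # Harmonic oscillator V(q) = q^2/2: the energy p^2/2 + q^2/2 stays near 0.5
      q, p = 1.0, 0.0
      for _ in range(1000):
          q, p = verlet_step(q, p, dt=0.01, grad_V=lambda x: x)
      print(q, p, 0.5 * (p * p + q * q))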

  4. Automatic Singing Performance Evaluation for Untrained Singers

    Science.gov (United States)

    Cao, Chuan; Li, Ming; Wu, Xiao; Suo, Hongbin; Liu, Jian; Yan, Yonghong

    In this letter, we present an automatic approach to objective singing performance evaluation for untrained singers by relating acoustic measurements to perceptual ratings of singing voice quality. Several acoustic parameters and their combination features are investigated to find objective correspondences to the perceptual evaluation criteria. Experimental results show a relatively strong correlation between perceptual ratings and the combined features, and the reliability of the proposed evaluation system is found to be comparable to that of human judges.

  5. Automatically Detecting Authors’ Native Language

    Science.gov (United States)

    2011-03-01

    models that can help them to build their own system for automatically evaluating essays for TOEFL exams that are tailored to the student’s native...available. Therefore, there is good chance that the test data may have data that never appeared in the training data, which would result zero...words. In other words, LDA can be used to test if each document in a subcorpus has similar topic proportions that are different from the topic

  6. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M. [VTT Energy, Espoo (Finland); Hakola, T.; Antila, E. [ABB Power Oy (Finland); Seppaenen, M. [North-Carelian Power Company (Finland)

    1998-08-01

    In this chapter, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of the computer systems of medium voltage distribution network automation, is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA, systems are studied. Then the integration of the substation telecontrol system and computerized relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed.

  7. Automatic Control of Personal Rapid Transit Vehicles

    Science.gov (United States)

    Smith, P. D.

    1972-01-01

    The requirements for automatic longitudinal control of a string of closely packed personal vehicles are outlined. Optimal control theory is used to design feedback controllers for strings of vehicles. An important modification of the usual optimal control scheme is the inclusion of jerk in the cost functional. While the inclusion of the jerk term was considered, the effect of its inclusion was not sufficiently studied. Adding the jerk term will increase passenger comfort.

  8. Automatically Modeling Linguistic Categories in Spanish

    Science.gov (United States)

    de Luise, M. D. López; Hisgen, D.; Soffer, M.

    This paper presents an approach to processing Spanish linguistic categories automatically. The approach is based on a module of a prototype named WIH (Word Intelligent Handler), a project to develop a conversational bot. It basically learns the sequence of category usage in a sentence, and extracts a weighting metric to discriminate the most common structures in real dialogs. Such a metric is important for defining the preferred organization to be used by the robot to build an answer.

  9. Automatic Anthropometric System Development Using Machine Learning

    Directory of Open Access Journals (Sweden)

    Long The Nguyen

    2016-08-01

    Full Text Available A contactless automatic anthropometric system is proposed for the reconstruction of a 3D model of the human body using a conventional smartphone. Our approach involves three main steps. The first step is the extraction of 12 anthropological features. Then we determine the most important features. Finally, we employ these features to build the 3D model of the human body and classify them according to gender and the commonly used sizes.

  10. Automatic evidence retrieval for systematic reviews.

    Science.gov (United States)

    Choong, Miew Keen; Galgani, Filippo; Dunn, Adam G; Tsafnat, Guy

    2014-10-01

    Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing's effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. Our goal was to evaluate an automatic method for citation snowballing's capacity to identify and retrieve the full text and/or abstracts of cited articles. Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. The automatic method was able to correctly identify 633 (as proportion of included citations: recall=66.7%, F1 score=79.3%; as proportion of citations in MAS: recall=85.5%, F1 score=91.2%) of citations with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews.
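
    The precision, recall, and F1 figures quoted above follow the standard set-overlap definitions; a small sketch with made-up citation identifiers is given below.

      def precision_recall_f1(retrieved, relevant):
          retrieved, relevant = set(retrieved), set(relevant)
          true_pos = len(retrieved & relevant)
          precision = true_pos / len(retrieved) if retrieved else 0.0
          recall = true_pos / len(relevant) if relevant else 0.0
          f1 = (2 * precision * recall / (precision + recall)
                if precision + recall else 0.0)
          return precision, recall, f1

      gold = {"pmid:101", "pmid:102", "pmid:103", "pmid:104"}   # manual search
      auto = {"pmid:101", "pmid:102", "pmid:105"}               # snowballing output
      print(precision_recall_f1(auto, gold))                    # about (0.67, 0.50, 0.57)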

  11. Mapping the Heart

    Science.gov (United States)

    Hulse, Grace

    2012-01-01

    In this article, the author describes how her fourth graders made ceramic heart maps. The impetus for this project came from reading "My Map Book" by Sara Fanelli. This book is a collection of quirky, hand-drawn and collaged maps that diagram a child's world. There are maps of her stomach, her day, her family, and her heart, among others. The…

  12. Automatic Detection of Dominance and Expected Interest

    Science.gov (United States)

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia; Vitrià, Jordi; Anguera, M. Teresa

    2010-12-01

    Social Signal Processing is an emergent area of research that focuses on the analysis of social constructs. Dominance and interest are two of these social constructs. Dominance refers to the level of influence a person has in a conversation. Interest, in terms of group interactions, can be defined as the degree of engagement that the members of a group collectively display during their interaction. In this paper, we argue that, using only behavioral motion information, we are able to predict the interest of observers when looking at face-to-face interactions as well as the dominant people. First, we propose a simple set of movement-based features from body, face, and mouth activity in order to define a higher-level set of interaction indicators. The considered indicators are manually annotated by observers. Based on the opinions obtained, we define an automatic binary dominance detection problem and a multiclass interest quantification problem. The Error-Correcting Output Codes framework is used to learn to rank the observers' perceived interest in face-to-face interactions, while AdaBoost is used to solve the dominance detection problem. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers in both the dominance and interest detection problems.
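
    A minimal sketch of the two classifiers named above, trained on placeholder movement-based feature vectors: AdaBoost for the binary dominance task and an Error-Correcting Output Codes wrapper for the multiclass interest task. The feature dimension, class counts, and data are invented for illustration and are not those of the study.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.multiclass import OutputCodeClassifier
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 10))             # placeholder motion features
      y_dominance = rng.integers(0, 2, 120)      # dominant vs. not dominant
      y_interest = rng.integers(0, 4, 120)       # four interest levels

      dominance_clf = AdaBoostClassifier(n_estimators=50).fit(X, y_dominance)
      interest_clf = OutputCodeClassifier(LinearSVC(), code_size=2.0,
                                          random_state=0).fit(X, y_interest)
      print(dominance_clf.predict(X[:3]), interest_clf.predict(X[:3]))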

  13. Mental imagery affects subsequent automatic defense responses

    Directory of Open Access Journals (Sweden)

    Muriel A Hagenaars

    2015-06-01

    Full Text Available Automatic defense responses promote survival and appropriate action under threat. They have also been associated with the development of threat-related psychiatric syndromes. Targeting such automatic responses during threat may be useful in populations with frequent threat exposure. Here, two experiments explored whether mental imagery as a pre-trauma manipulation could influence fear bradycardia (a core characteristic of freezing) during subsequent analogue trauma (affective picture viewing). Image-based interventions have proven successful in the treatment of threat-related disorders, and are easily applicable. In Experiment 1, 43 healthy participants were randomly assigned to an imagery script condition. Participants executed a passive viewing task with blocks of neutral, pleasant and unpleasant pictures after listening to an auditory script that was either related (with a positive or a negative outcome) or unrelated to the unpleasant pictures from the passive viewing task. Heart rate was assessed during script listening and during passive viewing. Imagining negative related scripts resulted in greater bradycardia (neutral-unpleasant contrast) than imagining positive scripts, especially unrelated ones. This effect was replicated in Experiment 2 (N = 51), again in the neutral-unpleasant contrast. An extra no-script condition showed that bradycardia was not induced by the negative related script, but rather that a positive script attenuated bradycardia. These preliminary results might indicate reduced vigilance after unrelated positive events. Future research should replicate these findings using a larger sample. Either way, the findings show that highly automatic defense behavior can be influenced by relatively simple mental imagery manipulations.

  14. Evaluation of automatic vacuum- assisted compaction solutions

    Directory of Open Access Journals (Sweden)

    M. Brzeziński

    2011-01-01

    Full Text Available Currently, companies such as DiSA, KUENKEL WAGNER, HAFLINGER, HEINRICH WAGNER SINTO, HUNTER, SAVELLI and TECHNICAL play a significant role in the mould-making machine market. These companies manufacture various machine and installation solutions applied in foundry engineering. Automatic foundry machines for the compaction of green sand play the major role in the mechanisation and automation of mould making. The operation of automatic machines is based on static and dynamic methods of compacting the green sand. A method gaining in importance is compaction using the energy of air pressure, either as the initial stage or as a supporting process of compacting the green sand. However, in automatic moulding machines using this method it is essential to apply additional compaction of the sand in order to achieve the final parameters of the mould. The constructional solutions of the machines are further divided according to the method of putting the sand into the mould box: transport of the sand with simultaneous compaction, or placing the sand without pre-compaction. As the solutions of the major manufacturers are often applied in various foundries, the authors present their own evaluation, supported by their own research and an independent analysis of the producers' solutions.

  15. Automatic indexing in a drug information portal.

    Science.gov (United States)

    Sakji, Saoussen; Letord, Catherine; Dahamna, Badisse; Kergourlay, Ivan; Pereira, Suzanne; Joubert, Michel; Darmoni, Stéfan

    2009-01-01

    The objective of this work is to create a bilingual (French/English) Drug Information Portal (DIP) in a multi-terminological context and to emphasize its exploitation through automatic ATC indexing, which provides more pertinent information about the substances, organs or systems on which drugs act, and about their therapeutic and chemical characteristics. The development of the DIP was based on the CISMeF portal, which catalogues and indexes the most important and quality-controlled sources of institutional health information in French. The DIP provides specific functionalities and uses specific drug terminologies such as the ATC classification, which is used to automatically index the DIP resources. The DIP is the result of a collaboration between the CISMeF team and the VIDAL Company, which specializes in drug information. The DIP is designed to facilitate user information retrieval. The automatic ATC indexing provided relevant results in 76% of cases. In a multi-terminological context, and within the drug field, indexing drugs with the appropriate codes and/or terms proved to be very important for appropriate information storage and retrieval. The main challenge in the coming year is to increase the accuracy of the approach.

  16. An Automatic Indirect Immunofluorescence Cell Segmentation System

    Directory of Open Access Journals (Sweden)

    Yung-Kuan Chan

    2014-01-01

    Full Text Available Indirect immunofluorescence (IIF) with HEp-2 cells has been used for the detection of antinuclear autoantibodies (ANA) in systemic autoimmune diseases. ANA testing allows us to scan a broad range of autoantibody entities and to describe them by distinct fluorescence patterns. Automatic inspection of fluorescence patterns in an IIF image can assist physicians, without relevant experience, in making a correct diagnosis. How to segment the cells from an IIF image is essential in developing an automatic inspection system for ANA testing. This paper focuses on cell detection and segmentation; an efficient method is proposed for automatically detecting the cells with fluorescence patterns in an IIF image. Cell culture is a process in which cells grow under control. Cell counting technology plays an important role in measuring the cell density in a culture tank. Moreover, assessing medium suitability, determining population doubling times, and monitoring cell growth in cultures all require a means of quantifying the cell population. The proposed method can also be used to count the cells in an image taken under a fluorescence microscope.
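
    As a rough illustration of the cell-counting use case mentioned above (not the paper's actual detection method), the sketch below thresholds a fluorescence image and counts connected components above a minimum area; the threshold rule and minimum area are assumed values.

      import numpy as np
      from scipy import ndimage

      def count_cells(image, threshold=None, min_area=20):
          # image: 2-D array of fluorescence intensities
          if threshold is None:
              threshold = image.mean() + image.std()     # crude automatic threshold
          mask = image > threshold
          labels, n = ndimage.label(mask)                # connected components
          areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
          return int(np.sum(areas >= min_area))          # ignore tiny specks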

  17. Improving suspended sediment measurements by automatic samplers.

    Science.gov (United States)

    Gettel, Melissa; Gulliver, John S; Kayhanian, Masoud; DeGroot, Gregory; Brand, Joshua; Mohseni, Omid; Erickson, Andrew J

    2011-10-01

    Suspended solids either as total suspended solids (TSS) or suspended sediment concentration (SSC) is an integral particulate water quality parameter that is important in assessing particle-bound contaminants. At present, nearly all stormwater runoff quality monitoring is performed with automatic samplers in which the sampling intake is typically installed at the bottom of a storm sewer or channel. This method of sampling often results in a less accurate measurement of suspended sediment and associated pollutants due to the vertical variation in particle concentration caused by particle settling. In this study, the inaccuracies associated with sampling by conventional intakes for automatic samplers have been verified by testing with known suspended sediment concentrations and known particle sizes ranging from approximately 20 μm to 355 μm under various flow rates. Experimental results show that, for samples collected at a typical automatic sampler intake position, the ratio of sampled to feed suspended sediment concentration is up to 6600% without an intake strainer and up to 300% with a strainer. When the sampling intake is modified with multiple sampling tubes and fitted with a wing to provide lift (winged arm sampler intake), the accuracy of sampling improves substantially. With this modification, the differences between sampled and feed suspended sediment concentration were more consistent and the sampled to feed concentration ratio was accurate to within 10% for particle sizes up to 250 μm.

  18. Authentic Material and Automaticity for Teaching English

    Directory of Open Access Journals (Sweden)

    Widyastuti Widyastuti

    2017-07-01

    Full Text Available This article discusses how to make first-year students of Science Education feel interested in English lessons, understand texts well, and communicate in English fluently. It is suggested that Authentic Material and Automaticity Theory not only create a friendly and fun atmosphere in teaching reading but also help students study comprehensibly, so that they can understand the text, structure and vocabulary easily, read fluently, and communicate in English. Authentic material can make the teaching and learning process fun and eliminate boredom, because the topics and materials can be found on the internet, making lessons more visual and interactive. Automaticity theory can solve the problem of students who must memorize words, which makes them feel bored and forget the words soon afterwards. A further benefit is that students are exposed to real language used in a real context, which stimulates their ideas and encourages them to relate the material to real-life experiences. These strategies can make students understand easily and enjoy the teaching and learning process. Combining authentic material and automaticity strategies for teaching English in science education will develop readers (students) who are fully competent and fluent.

  19. Automatization and working memory capacity in schizophrenia.

    Science.gov (United States)

    van Raalten, Tamar R; Ramsey, Nick F; Jansma, J Martijn; Jager, Gerry; Kahn, René S

    2008-03-01

    Working memory (WM) dysfunction in schizophrenia is characterized by inefficient WM recruitment and reduced capacity, but it is not yet clear how these relate to one another. In controls practice of certain cognitive tasks induces automatization, which is associated with reduced WM recruitment and increased capacity of concurrent task performance. We therefore investigated whether inefficient function and reduced capacity in schizophrenia was associated with a failure in automatization. FMRI data was acquired with a verbal WM task with novel and practiced stimuli in 18 schizophrenia patients and 18 controls. Participants performed a dual-task outside the scanner to test WM capacity. Patients showed intact performance on the WM task, which was paralleled by excessive WM activity. Practice improved performance and reduced WM activity in both groups. The difference in WM activity after practice predicted performance cost in controls but not in patients. In addition, patients showed disproportionately poor dual-task performance compared to controls, especially when processing information that required continuous adjustment in WM. Our findings support the notion of inefficient WM function and reduced capacity in schizophrenia. This was not related to a failure in automatization, but was evident when processing continuously changing information. This suggests that inefficient WM function and reduced capacity may be related to an inability to process information requiring frequent updating.

  20. Automatic lumbar spine measurement in CT images

    Science.gov (United States)

    Mao, Yunxiang; Zheng, Dong; Liao, Shu; Peng, Zhigang; Yan, Ruyi; Liu, Junhua; Dong, Zhongxing; Gong, Liyan; Zhou, Xiang Sean; Zhan, Yiqiang; Fei, Jun

    2017-03-01

    Accurate lumbar spine measurement in CT images provides an essential way for quantitative spinal diseases analysis such as spondylolisthesis and scoliosis. In today's clinical workflow, the measurements are manually performed by radiologists and surgeons, which is time consuming and irreproducible. Therefore, automatic and accurate lumbar spine measurement algorithm becomes highly desirable. In this study, we propose a method to automatically calculate five different lumbar spine measurements in CT images. There are three main stages of the proposed method: First, a learning based spine labeling method, which integrates both the image appearance and spine geometry information, is used to detect lumbar and sacrum vertebrae in CT images. Then, a multiatlases based image segmentation method is used to segment each lumbar vertebra and the sacrum based on the detection result. Finally, measurements are derived from the segmentation result of each vertebra. Our method has been evaluated on 138 spinal CT scans to automatically calculate five widely used clinical spine measurements. Experimental results show that our method can achieve more than 90% success rates across all the measurements. Our method also significantly improves the measurement efficiency compared to manual measurements. Besides benefiting the routine clinical diagnosis of spinal diseases, our method also enables the large scale data analytics for scientific and clinical researches.

  1. How automatic is manual gear shifting?

    Science.gov (United States)

    Shinar, D; Meir, M; Ben-Shoham, I

    1998-12-01

    Manual gear shifting is often used as an example of an automated (vs. controlled) process in driving. The present study provided an empirical evaluation of this assumption by evaluating sign detection and recall performance of novice and experienced drivers driving manual shift and automatic transmission cars in a downtown area requiring frequent gear shifting. The results showed that manual gear shifting significantly impaired sign detection performance of novice drivers using manual gears compared with novice drivers using an automatic transmission, whereas no such differences existed between the two transmission types for experienced drivers. The results clearly demonstrate that manual gear shifting is a complex psychomotor skill that is not easily (or quickly) automated and that until it becomes automated, it is an attention-demanding task that may impair other monitoring aspects of driving performance. Actual or potential applications of this research include a reevaluation of the learning process in driving and the need for phased instruction in driving from automatic gears to manual gears.

  2. Posttraining sleep enhances automaticity in perceptual discrimination.

    Science.gov (United States)

    Atienza, Mercedes; Cantero, Jose L; Stickgold, Robert

    2004-01-01

    Perceptual learning can develop over extended periods, with slow, at times sleep-dependent, improvement seen several days after training. As a result, performance can become more automatic, that is, less dependent on voluntary attention. This study investigates whether the brain correlates of this enhancement of automaticity are sleep-dependent. Event-related potentials produced in response to complex auditory stimuli were recorded while subjects' attention was focused elsewhere. We report here that following training on an auditory discrimination task, performance continued to improve, without significant further training, for 72 hr. At the same time, several event-related potential components became evident 48-72 hr after training. Posttraining sleep deprivation prevented neither the continued performance improvement nor the slow development of cortical dynamics related to an enhanced familiarity with the task. However, those brain responses associated with the automatic shift of attention to unexpected stimuli failed to develop. Thus, in this auditory learning paradigm, posttraining sleep appears to reduce the voluntary attentional effort required for successful perceptual discrimination by facilitating the intrusion of a potentially meaningful stimulus into one's focus of attention for further evaluation.

  3. What Automaticity Deficit? Activation of Lexical Information by Readers with Dyslexia in a Rapid Automatized Naming Stroop-Switch Task

    Science.gov (United States)

    Jones, Manon W.; Snowling, Margaret J.; Moll, Kristina

    2016-01-01

    Reading fluency is often predicted by rapid automatized naming (RAN) speed, which as the name implies, measures the automaticity with which familiar stimuli (e.g., letters) can be retrieved and named. Readers with dyslexia are considered to have less "automatized" access to lexical information, reflected in longer RAN times compared with…

  4. USGS Map Indices Overlay Map Service from The National Map

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The USGS Map Indices service from The National Map (TNM) consists of 1x1 Degree, 30x60 Minute (100K), 15 Minute (63K), 7.5 Minute (24K), and 3.75 Minute grid...

  5. Applicability of vulnerability maps

    Energy Technology Data Exchange (ETDEWEB)

    Andersen, L.J.; Gosk, E. (Geological Survey of Denmark, Copenhagen (Denmark))

    A number of aspects of vulnerability maps are discussed: the vulnerability concept, mapping purposes, possible users, and the applicability of vulnerability maps. Problems associated with general-type vulnerability mapping, including large-scale maps, a universal pollutant, and a universal pollution scenario, are also discussed. An alternative approach to vulnerability assessment - specific vulnerability mapping for limited areas, a specific pollutant, and a predefined pollution scenario - is suggested. A simplification of the vulnerability concept is proposed in order to make vulnerability mapping more objective and by this means more comparable. An extension of the vulnerability concept to the rest of the hydrogeological cycle (lakes, rivers, and the sea) is proposed. Some recommendations regarding future activities are given.

  6. 7. Annex II: Maps

    OpenAIRE

    Aeberli, Annina

    2012-01-01

    Map 1: States of South Sudan UN OCHA (2012) Republic of South Sudan – States, as of 15 July 2012, Reliefweb http://reliefweb.int/map/south-sudan-republic/republic-south-sudan-states-15-july-2012-reference-map, accessed 31 July 2012. Map 2: Counties of South Sudan UN OCHA (2012) Republic of South Sudan – Counties, as of 16 July 2012, Reliefweb http://reliefweb.int/map/south-sudan-republic/republic-south-sudan-counties-16-july-2012-reference-map, accessed 31 July 2012. Map 3: Eastern Equato...

  7. The digital geologic map of Wyoming in ARC/INFO format

    Science.gov (United States)

    Green, G.N.; Drouillard, P.H.

    1994-01-01

    This geologic map was prepared as part of a study of digital methods and techniques as applied to complex geologic maps. The geologic map was digitized from the original scribe sheets used to prepare the published Geologic Map of Wyoming (Love and Christiansen, 1985). Consequently, the digital version is at 1:500,000 scale using the Lambert Conformal Conic map projection parameters of the State base map. Stable base contact prints of the scribe sheets were scanned on a Tektronix 4991 digital scanner. The scanner automatically converts the scanned image to an ASCII vector format. These vectors were transferred to a VAX minicomputer, where they were then loaded into ARC/INFO. Each vector and polygon was given attributes derived from the original 1985 geologic map. Descriptors: The Digital Geologic Map of Wyoming in ARC/INFO Format Open-File Report 94-0425

  8. Google Maps: You Are Here

    Science.gov (United States)

    Jacobsen, Mikael

    2008-01-01

    Librarians use online mapping services such as Google Maps, MapQuest, Yahoo Maps, and others to check traffic conditions, find local businesses, and provide directions. However, few libraries are using one of Google Maps most outstanding applications, My Maps, for the creation of enhanced and interactive multimedia maps. My Maps is a simple and…

  9. AIRS Maps from Space Processing Software

    Science.gov (United States)

    Thompson, Charles K.; Licata, Stephen J.

    2012-01-01

    This software package processes Atmospheric Infrared Sounder (AIRS) Level 2 swath standard product geophysical parameters, and generates global, colorized, annotated maps. It automatically generates daily and multi-day averaged colorized and annotated maps of various AIRS Level 2 swath geophysical parameters. It also generates AIRS input data sets for Eyes on Earth, Puffer-sphere, and Magic Planet. This program is tailored to AIRS Level 2 data products. It re-projects data into 1/4-degree grids that can be combined and averaged for any number of days. The software scales and colorizes global grids utilizing AIRS-specific color tables, and annotates images with title and color bar. This software can be tailored for use with other swath data products for the purposes of visualization.
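
    A minimal sketch of the re-gridding step described above: binning swath samples (latitude, longitude, value) onto a 1/4-degree global grid and averaging. Colorization, annotation, and the AIRS-specific color tables are omitted, and the grid orientation is an assumption.

      import numpy as np

      def grid_quarter_degree(lat, lon, values):
          # lat, lon, values: 1-D arrays; returns a 720 x 1440 (0.25-degree) grid
          rows = np.clip(((90.0 - lat) / 0.25).astype(int), 0, 719)
          cols = np.clip(((lon + 180.0) / 0.25).astype(int), 0, 1439)
          total = np.zeros((720, 1440))
          count = np.zeros((720, 1440))
          np.add.at(total, (rows, cols), values)
          np.add.at(count, (rows, cols), 1)
          with np.errstate(invalid="ignore", divide="ignore"):
              return np.where(count > 0, total / count, np.nan)

      # Daily grids built this way can be stacked and combined with np.nanmean
      # to produce the multi-day averaged maps mentioned above.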

  10. [The effects of rumination on automatic thoughts and depressive symptoms].

    Science.gov (United States)

    Nishikawa, Daiji; Matsunaga, Miki; Furutani, Kaichiro

    2013-12-01

    This study investigated the effects of rumination (reflective pondering and brooding) on automatic thoughts (both negative and positive) and depressive symptoms. University students (N=183; 96 men) completed the Self-Rating Depression Scale (SDS), Automatic Thoughts Questionnaire-Revised (ATQ-R), and Response Style Scale (RSS). We conducted a path analysis which included gender as a factor. The results revealed that brooding was associated with negative automatic thoughts. Negative automatic thoughts contributed to the aggravation of depressive symptoms. In contrast, reflective pondering was associated with positive automatic thoughts. Positive automatic thoughts contributed to the reduction of depressive symptoms. These results indicate that rumination does not affect depressive symptoms directly. We suggest that rumination affects depressive symptoms indirectly through automatic thoughts, and that there are gender differences in the influence process.

  11. Automatic Estimation of Volcanic Ash Plume Height using WorldView-2 Imagery

    Science.gov (United States)

    McLaren, David; Thompson, David R.; Davies, Ashley G.; Gudmundsson, Magnus T.; Chien, Steve

    2012-01-01

    We explore the use of machine learning, computer vision, and pattern recognition techniques to automatically identify volcanic ash plumes and plume shadows in WorldView-2 imagery. Using information on the relative positions of the sun and spacecraft, terrain information in the form of a digital elevation map, and the classification results, the height of the ash plume can also be inferred. We present the results of applying this approach to six scenes of the Eyjafjallajokull eruption in Iceland acquired on two separate days in April and May of 2010. These results show rough agreement with ash plume height estimates from visual and radar-based measurements.

  12. Automatic Language Identification with Discriminative Language Characterization Based on SVM

    Science.gov (United States)

    Suo, Hongbin; Li, Ming; Lu, Ping; Yan, Yonghong

    Robust automatic language identification (LID) is the task of identifying the language from a short utterance spoken by an unknown speaker. The mainstream approaches include parallel phone recognition language modeling (PPRLM), support vector machine (SVM) and the general Gaussian mixture models (GMMs). These systems map the cepstral features of spoken utterances into high level scores by classifiers. In this paper, in order to increase the dimension of the score vector and alleviate the inter-speaker variability within the same language, multiple data groups based on supervised speaker clustering are employed to generate the discriminative language characterization score vectors (DLCSV). The back-end SVM classifiers are used to model the probability distribution of each target language in the DLCSV space. Finally, the output scores of back-end classifiers are calibrated by a pair-wise posterior probability estimation (PPPE) algorithm. The proposed language identification frameworks are evaluated on 2003 NIST Language Recognition Evaluation (LRE) databases and the experiments show that the system described in this paper produces comparable results to the existing systems. Especially, the SVM framework achieves an equal error rate (EER) of 4.0% in the 30-second task and outperforms the state-of-art systems by more than 30% relative error reduction. Besides, the performances of proposed PPRLM and GMMs algorithms achieve an EER of 5.1% and 5.0% respectively.
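
    A minimal sketch of the back-end stage described above: an SVM trained on fixed-length language-characterization score vectors, with Platt-scaled class probabilities standing in loosely for the pair-wise posterior probability calibration. The vector dimension, number of target languages, and data are placeholders.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(300, 64))       # placeholder DLCSV score vectors
      y_train = rng.integers(0, 3, 300)          # three target languages (placeholder)

      backend = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
      posteriors = backend.predict_proba(rng.normal(size=(5, 64)))
      print(posteriors.round(2))                 # one calibrated row per test utterance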

  13. Management of natural resources through automatic cartographic inventory

    Science.gov (United States)

    Rey, P.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. Over those parts of the ARNICA test site where ERTS-1 data were available, the search for correspondences between images and ground truth acquired by the vegetation and geology maps was quite positive. The probability of recognition of soil use types can be estimated at: (1) 100% for water plans, rivers, canals, swamplands, and wetlands; (2) 80%-100% for the major types of forestry, farmland zones, moorlands and pasturelands, and urbanization; (3) 20%-50% for communication lines; (4) 60%-80% for forestry species and organization of agricultural areas; (5) 40%-60% for finer discrimination between forest types and more accurate identification of cultivations; (6) 60%-90% for major geological features. These percentages will be improved upon as soon as it is possible to use the repetitive imagery. An early use of automatic cartography using ERTS-1 imagery was made possible for pine forests in the Central Pyrenees, the densitometric signature of which were particularly significant. Important observations were made in related fields of water resources, snow survey, estuary dynamics, and meteorology.

  14. Statistical Lip-Appearance Models Trained Automatically Using Audio Information

    Directory of Open Access Journals (Sweden)

    Daubias Philippe

    2002-01-01

    Full Text Available We aim at modeling the appearance of the lower face region to assist visual feature extraction for audio-visual speech processing applications. In this paper, we present a neural network based statistical appearance model of the lips which classifies pixels as belonging to the lips, skin, or inner mouth classes. This model requires labeled examples to be trained, and we propose to label images automatically by employing a lip-shape model and a red-hue energy function. To improve the performance of lip-tracking, we propose to use blue marked-up image sequences of the same subject uttering the identical sentences as natural nonmarked-up ones. The easily extracted lip shapes from blue images are then mapped to the natural ones using acoustic information. The lip-shape estimates obtained simplify lip-tracking on the natural images, as they reduce the parameter space dimensionality in the red-hue energy minimization, thus yielding better contour shape and location estimates. We applied the proposed method to a small audio-visual database of three subjects, achieving errors in pixel classification around 6%, compared to 3% for hand-placed contours and 20% for filtered red-hue.

  15. Trends of Science Education Research: An Automatic Content Analysis

    Science.gov (United States)

    Chang, Yueh-Hsia; Chang, Chun-Yen; Tseng, Yuen-Hsien

    2010-08-01

    This study used scientometric methods to conduct an automatic content analysis of the development trends in science education research, based on articles published from 1990 to 2007 in four journals: International Journal of Science Education, Journal of Research in Science Teaching, Research in Science Education, and Science Education. A multi-stage clustering technique was employed to investigate with what topics, along what development trends, and through whose contributions the journal publications constructed science education as a research field. The study found that Conceptual Change & Concept Mapping was the most studied research topic, although the number of publications declined slightly in the 2000s. Studies in the themes of Professional Development, Nature of Science and Socio-Scientific Issues, and Conceptual Change and Analogy were found to be gaining attention over the years. The study also found that, embedded in the most cited references, the supporting disciplines and theories of science education research are constructivist learning, cognitive psychology, pedagogy, and philosophy of science.

  16. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    Science.gov (United States)

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
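
    A minimal sketch of the first stage of the method above: fit a Gaussian mixture to the gray-level distribution and locate the crossing points of adjacent weighted components, which partition the input dynamic range. The number of components is an assumption, and the interval-to-interval gray-level remapping is omitted.

      import numpy as np
      from scipy.stats import norm
      from sklearn.mixture import GaussianMixture

      def gmm_partition(gray_values, n_components=3):
          # gray_values: 1-D array of pixel gray levels in [0, 255]
          gmm = GaussianMixture(n_components=n_components, random_state=0)
          gmm.fit(gray_values.reshape(-1, 1))
          order = np.argsort(gmm.means_.ravel())
          means = gmm.means_.ravel()[order]
          stds = np.sqrt(gmm.covariances_.ravel()[order])
          weights = gmm.weights_[order]
          grid = np.linspace(0.0, 255.0, 2561)
          cuts = []
          for i in range(n_components - 1):
              # crossing point of the two weighted pdfs between adjacent means
              seg = grid[(grid >= means[i]) & (grid <= means[i + 1])]
              diff = np.abs(weights[i] * norm.pdf(seg, means[i], stds[i]) -
                            weights[i + 1] * norm.pdf(seg, means[i + 1], stds[i + 1]))
              cuts.append(float(seg[np.argmin(diff)]))
          return cuts    # gray levels at which the dynamic range is split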

  17. AUTOMATIC EXTRACTION OF ROAD MARKINGS FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    H. Ma

    2017-09-01

    Full Text Available Road markings, as critical features of the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain 3D information on the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth; points with small elevation differences from their neighborhood are considered to be ground points. The ground points are then partitioned into a set of profiles according to trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with laser distance. The separated points are used as seed points for intensity-based region growing so as to obtain complete road markings. A point cloud template-matching method is used to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres of a city center, our method provides a promising solution to road marking extraction from MLS data.

  18. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    Science.gov (United States)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings are a critical feature of the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain 3D information about the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method, under the basic assumption that the road surface is smooth: points with small elevation differences within their neighborhood are considered ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. An intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with the laser range. The separated points are used as seeds for intensity-based region growing so as to obtain complete road markings. We use a point-cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres of a city centre, our method provides a promising solution to road marking extraction from MLS data.

  19. Segmenting into Adequate Units for Automatic Recognition of Emotion-Related Episodes: A Speech-Based Approach

    Directory of Open Access Journals (Sweden)

    Anton Batliner

    2010-01-01

    Full Text Available We deal with the topic of segmenting emotion-related (emotional/affective) episodes into adequate units for analysis and automatic processing/classification, a topic that has not been addressed adequately so far. We concentrate on speech and illustrate promising approaches using a database of children's emotional speech. We argue in favour of the word as the basic unit, map sequences of words onto both syntactic and "emotionally consistent" chunks, and report classification performances for an exhaustive modelling of our data obtained by mapping word-based paralinguistic emotion labels onto three classes representing valence (positive, neutral, negative) and onto a fourth rest (garbage) class.
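
    A toy sketch of the word-to-chunk mapping is given below. The emotion label names and their valence mapping are assumptions loosely modelled on children's-speech corpora, not the authors' label set; the function simply groups consecutive words that map to the same valence class into "emotionally consistent" chunks.

```python
# Placeholder emotion labels and valence mapping (assumptions, not the authors' set).
VALENCE = {
    "joyful": "positive", "motherese": "positive",
    "neutral": "neutral",
    "touchy": "negative", "reprimanding": "negative", "angry": "negative",
}

def to_valence(label):
    # Anything unmapped falls into the rest/garbage class.
    return VALENCE.get(label, "rest")

def chunk_by_valence(words, labels):
    """Group consecutive words whose labels map to the same valence class."""
    chunks, current, current_class = [], [], None
    for word, label in zip(words, labels):
        cls = to_valence(label)
        if current and cls != current_class:
            chunks.append((current_class, current))
            current = []
        current_class = cls
        current.append(word)
    if current:
        chunks.append((current_class, current))
    return chunks

# Example:
# chunk_by_valence(["no", "stop", "good", "dog"],
#                  ["angry", "angry", "motherese", "neutral"])
# -> [("negative", ["no", "stop"]), ("positive", ["good"]), ("neutral", ["dog"])]
```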

  20. Automatic identification of IASLC-defined mediastinal lymph node stations on CT scans using multi-atlas organ segmentation

    Science.gov (United States)

    Hoffman, Joanne; Liu, Jiamin; Turkbey, Evrim; Kim, Lauren; Summers, Ronald M.

    2015-03-01

    Station labeling of mediastinal lymph nodes is typically performed to identify the location of enlarged nodes for cancer staging. Stations are usually assigned manually in clinical radiology practice by qualitative visual assessment on CT scans, which is time consuming and highly variable. In this paper, we developed a method that automatically recognizes the lymph node stations in thoracic CT scans based on the anatomical organs in the mediastinum. First, the trachea, lungs, and spine are automatically segmented to locate the mediastinal region. Then, eight more anatomical organs are simultaneously identified by multi-atlas segmentation. Finally, using the segmentations of these anatomical organs, we convert the text definitions of the International Association for the Study of Lung Cancer (IASLC) lymph node map into patient-specific, color-coded CT image maps, so that a lymph node station is automatically assigned to each lymph node. We applied this system to CT scans of 86 patients with 336 mediastinal lymph nodes measuring 10 mm or greater; 84.8% of the mediastinal lymph nodes were correctly mapped to their stations.
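
    To illustrate how organ segmentations can be turned into a patient-specific station map, the sketch below builds an integer label volume from a few organ masks and looks up a node centroid in it. The region rules and station codes are invented placeholders; they do not reproduce the IASLC definitions or the authors' multi-atlas pipeline.

```python
import numpy as np

def build_station_map(trachea, aortic_arch, carina_z, shape):
    """Build an integer volume whose voxels carry a (placeholder) station code,
    derived from segmented organ masks; the rules here are illustrative only."""
    stations = np.zeros(shape, dtype=np.uint8)
    zz = np.arange(shape[0])[:, None, None]      # slice index, broadcast over (y, x)
    stations[(zz < carina_z) & ~trachea] = 2     # placeholder "upper paratracheal"
    stations[(zz >= carina_z) & ~trachea] = 4    # placeholder "lower paratracheal"
    stations[aortic_arch] = 6                    # placeholder "para-aortic"
    return stations

def assign_station(node_centroid_voxel, station_map):
    """Look up the station code at a lymph node centroid given as (z, y, x) voxels."""
    z, y, x = np.round(np.asarray(node_centroid_voxel)).astype(int)
    return int(station_map[z, y, x])
```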