WorldWideScience

Sample records for automatic data processing

  1. Automatically processing physical data from LHD experiments

    Energy Technology Data Exchange (ETDEWEB)

    Emoto, M., E-mail: emoto.masahiko@nifs.ac.jp; Ida, K.; Suzuki, C.; Yoshida, M.; Akiyama, T.; Nakamura, Y.; Sakamoto, R.; Yokoyama, M.; Yoshinuma, M.

    2014-05-15

    Physical data produced by Large Helical Device (LHD) experiments are supplied by the Kaiseki server, which registers more than 200 types of diagnostic data. Dependencies exist among the data; i.e., in many cases, calculating one data item requires other data. Therefore, to obtain unregistered data, one must calculate not only the diagnostic data itself but also the data it depends on; however, because the data are registered by different scientists, each scientist must separately calculate and register their respective data. To simplify this complicated procedure, we have developed an automatic calculation system called AutoAna. The calculation programs of AutoAna are distributed over a network, and their number can easily be increased dynamically. Our system is therefore scalable and ready for substantial increases in the size of the target data.
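    The dependency handling that AutoAna automates can be pictured as a small recursive resolver: before a diagnostic quantity is computed, every quantity it depends on is computed (and cached) first. The sketch below is illustrative only; the registry contents, the quantity names and the compute callables are hypothetical and are not part of the Kaiseki/AutoAna interface.

```python
# Minimal sketch of dependency-aware calculation, assuming a hypothetical
# registry mapping each diagnostic name to (dependencies, compute function).
from typing import Callable, Dict, List, Tuple

Registry = Dict[str, Tuple[List[str], Callable[[dict], float]]]

def resolve(name: str, registry: Registry, cache: dict) -> float:
    """Compute `name`, first computing any unregistered dependencies."""
    if name in cache:                      # already calculated and registered
        return cache[name]
    deps, compute = registry[name]
    inputs = {d: resolve(d, registry, cache) for d in deps}
    cache[name] = compute(inputs)          # register the freshly calculated data
    return cache[name]

# Hypothetical example: 'ion_temperature' depends on two raw diagnostics.
registry: Registry = {
    "raw_spectrum":    ([], lambda _: 1.0),
    "calibration":     ([], lambda _: 0.98),
    "ion_temperature": (["raw_spectrum", "calibration"],
                        lambda x: x["raw_spectrum"] * x["calibration"]),
}
print(resolve("ion_temperature", registry, cache={}))
```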

  2. Automatic data processing of nondestructive testing results

    International Nuclear Information System (INIS)

    The ADP system for the documentation of inservice inspection results of nuclear power plants is described. The same system can be used during the whole operational life time of the plant. To make this possible the ADP system has to be independent of the type of hardware, data recording and software. The computer programs are made using Fortran IV programming language. The results of nondestructive testing are recorded in an inspection register by ADP methods. Different outputs can be utilized for planning, performance and reporting of inservice inspections. (author)

  3. Towards Automatic Capturing of Manual Data Processing Provenance

    OpenAIRE

    Wombacher, Andreas; Huq, Mohammad R.

    2011-01-01

    Often data processing is not implemented by a workflow system or an integration application but is performed manually by humans along the lines of a more or less specified procedure. Collecting provenance information during manual data processing cannot be automated. Further, manual collection of provenance information is error prone and time consuming. Therefore, we propose to infer provenance information based on the read and write access of users. The derived provenance information is comp...

  4. Automatic Data Processing, 4-1. Military Curriculum Materials for Vocational and Technical Education.

    Science.gov (United States)

    Army Ordnance Center and School, Aberdeen Proving Ground, MD.

    These two texts and student workbook for a secondary/postsecondary-level correspondence course in automatic data processing comprise one of a number of military-developed curriculum packages selected for adaptation to vocational instruction and curriculum development in a civilian setting. The purpose stated for the individualized, self-paced…

  5. Automatic classification of oranges using image processing and data mining techniques

    OpenAIRE

    Mercol, Juan Pablo; Gambini, María Juliana; Santos, Juan Miguel

    2008-01-01

    Data mining is the discovery of patterns and regularities from large amounts of data using machine learning algorithms. This can be applied to object recognition using image processing techniques. In fruit and vegetable production lines, quality assurance is done by trained people who inspect the fruits while they move on a conveyor belt, and classify them into several categories based on visual features. In this paper we present an automatic orange classification system, which us...

  6. Automatic processing of high-rate, high-density multibeam echosounder data

    Science.gov (United States)

    Calder, B. R.; Mayer, L. A.

    2003-06-01

    Multibeam echosounders (MBES) are currently the best way to determine the bathymetry of large regions of the seabed with high accuracy. They are becoming the standard instrument for hydrographic surveying and are also used in geological studies, mineral exploration and scientific investigation of the earth's crustal deformations and life cycle. The significantly increased data density provided by an MBES has significant advantages in accurately delineating the morphology of the seabed, but comes with the attendant disadvantage of having to handle and process a much greater volume of data. Current data processing approaches typically involve (computer aided) human inspection of all data, with time-consuming and subjective assessment of all data points. As data rates increase with each new generation of instrument and required turn-around times decrease, manual approaches become unwieldy and automatic methods of processing essential. We propose a new method for automatically processing MBES data that attempts to address concerns of efficiency, objectivity, robustness and accuracy. The method attributes each sounding with an estimate of vertical and horizontal error, and then uses a model of information propagation to transfer information about the depth from each sounding to its local neighborhood. Embedded in the survey area are estimation nodes that aim to determine the true depth at an absolutely defined location, along with its associated uncertainty. As soon as soundings are made available, the nodes independently assimilate propagated information to form depth hypotheses which are then tracked and updated on-line as more data is gathered. Consequently, we can extract at any time a "current-best" estimate for all nodes, plus co-located uncertainties and other metrics. The method can assimilate data from multiple surveys, multiple instruments or repeated passes of the same instrument in real-time as data is being gathered. The data assimilation scheme is
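    A minimal sketch of the kind of sounding assimilation described above: each estimation node keeps a depth estimate and a variance, and updates them with an inverse-variance (Kalman-style) rule as propagated soundings arrive. This is only an illustration of the idea, not the CUBE algorithm itself; the information-propagation model, multiple-hypothesis tracking and error attribution of the actual method are omitted.

```python
import math

class EstimationNode:
    """Tracks a 'current-best' depth estimate and its uncertainty at a fixed location."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.depth = None       # current-best depth estimate (m)
        self.var = None         # variance of that estimate (m^2)

    def assimilate(self, sounding_depth, sounding_var):
        """Inverse-variance update with a new propagated sounding."""
        if self.depth is None:
            self.depth, self.var = sounding_depth, sounding_var
            return
        w = self.var / (self.var + sounding_var)
        self.depth = self.depth + w * (sounding_depth - self.depth)
        self.var = self.var * sounding_var / (self.var + sounding_var)

node = EstimationNode(0.0, 0.0)
for depth, var in [(52.3, 0.25), (52.1, 0.16), (52.6, 0.36)]:
    node.assimilate(depth, var)
print(f"depth = {node.depth:.2f} m, 1-sigma = {math.sqrt(node.var):.2f} m")
```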

  7. Automatic Scaling Hadoop in the Cloud for Efficient Process of Big Geospatial Data

    Directory of Open Access Journals (Sweden)

    Zhenlong Li

    2016-09-01

    Efficient processing of big geospatial data is crucial for tackling global and regional challenges such as climate change and natural disasters, but it is challenging not only due to the massive data volume but also due to the intrinsic complexity and high dimensions of the geospatial datasets. While traditional computing infrastructure does not scale well with the rapidly increasing data volume, Hadoop has attracted increasing attention in geoscience communities for handling big geospatial data. Recently, many studies were carried out to investigate adopting Hadoop for processing big geospatial data, but how to adjust the computing resources to efficiently handle the dynamic geoprocessing workload was barely explored. To bridge this gap, we propose a novel framework to automatically scale the Hadoop cluster in the cloud environment to allocate the right amount of computing resources based on the dynamic geoprocessing workload. The framework and auto-scaling algorithms are introduced, and a prototype system was developed to demonstrate the feasibility and efficiency of the proposed scaling mechanism using Digital Elevation Model (DEM) interpolation as an example. Experimental results show that this auto-scaling framework could (1) significantly reduce the computing resource utilization (by 80% in our example) while delivering similar performance as a full-powered cluster; and (2) effectively handle the spike processing workload by automatically increasing the computing resources to ensure the processing is finished within an acceptable time. Such an auto-scaling approach provides a valuable reference to optimize the performance of geospatial applications to address data- and computational-intensity challenges in GIScience in a more cost-efficient manner.
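    A hedged sketch of the auto-scaling decision logic described above: the cluster is grown when the pending workload exceeds what the current nodes can absorb within a target time, and shrunk when nodes sit idle. The thresholds, throughput constant and monitoring inputs are assumptions for illustration; the paper's actual algorithms and the Hadoop/cloud provisioning APIs are not reproduced here.

```python
# Illustrative auto-scaling loop (not the paper's implementation).
# The pending-task and idle-node counts stand in for whatever monitoring
# metrics the real cluster exposes.

TARGET_SECONDS = 600          # finish the queued work within ~10 minutes
TASKS_PER_NODE_PER_SEC = 0.5  # assumed throughput of one worker node

def scaling_decision(pending_tasks: int, active_nodes: int, idle_nodes: int) -> int:
    """Return the number of nodes to add (positive) or remove (negative)."""
    needed = pending_tasks / (TASKS_PER_NODE_PER_SEC * TARGET_SECONDS)
    if needed > active_nodes:                    # spike: scale out
        return int(needed - active_nodes + 0.5)
    if idle_nodes > 1 and needed < active_nodes - idle_nodes:
        return -(idle_nodes - 1)                 # keep one spare, release the rest
    return 0

# Example: 900 DEM-interpolation tasks queued on a 2-node cluster.
print(scaling_decision(pending_tasks=900, active_nodes=2, idle_nodes=0))  # -> 1 node added
```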

  8. Automatic processing and modeling of GPR data for pavement thickness and properties

    Science.gov (United States)

    Olhoeft, Gary R.; Smith, Stanley S., III

    2000-04-01

    A GSSI SIR-8 with 1 GHz air-launched horn antennas has been modified to acquire data from a moving vehicle. Algorithms have been developed to acquire the data, and to automatically calibrate, position, process, and full waveform model it without operator intervention. Vehicle suspension system bounce is automatically compensated (for varying antenna height). Multiple scans are modeled by full waveform inversion that is remarkably robust and relatively insensitive to noise. Statistical parameters and histograms are generated for the thickness and dielectric permittivity of concrete or asphalt pavements. The statistical uncertainty with which the thickness is determined is given with each thickness measurement, along with the dielectric permittivity of the pavement material and of the subgrade material at each location. Permittivities are then converted into equivalent density and water content. Typical statistical uncertainties in thickness are better than 0.4 cm in 20 cm thick pavement. On a Pentium laptop computer, the data may be processed and modeled to have cross-sectional images and computed pavement thickness displayed in real time at highway speeds.
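    The conversion from a measured two-way travel time and the dielectric permittivity to pavement thickness follows the standard radar relation thickness = c * t / (2 * sqrt(eps_r)). The snippet below is a worked example of that textbook relation only, not the authors' full-waveform inversion; the numerical values are illustrative.

```python
# Standard GPR thickness relation (not the paper's full-waveform inversion):
# thickness = c * t_twt / (2 * sqrt(eps_r)), with c ~ 0.3 m/ns in free space.
import math

def pavement_thickness_m(two_way_time_ns: float, rel_permittivity: float) -> float:
    c = 0.2998  # speed of light in m/ns
    return c * two_way_time_ns / (2.0 * math.sqrt(rel_permittivity))

# Example: a 3.3 ns two-way reflection in asphalt with eps_r ~ 6 gives ~0.20 m (20 cm).
print(f"{pavement_thickness_m(3.3, 6.0):.3f} m")
```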

  9. Explodet Project: Methods of Automatic Data Processing and Analysis for the Detection of Hidden Explosive

    Science.gov (United States)

    Lecca, Paola

    2003-12-01

    The research of the INFN Gruppo Collegato di Trento within the EXPLODET project for humanitarian demining is devoted to the development of a software procedure for automating data analysis and decision making about the presence of hidden explosives. Innovative algorithms for likely-background calculation, a neural-network-based system for energy calibration, and simple statistical methods for the qualitative consistency check of the signals are the main parts of the software performing the automatic data processing.

  10. Data mining process automatization of air pollution data by the LISp-Miner system

    OpenAIRE

    Ochodnická, Zuzana

    2014-01-01

    This thesis focuses on the area of automated data mining. Its aims are to describe the area of automated data mining, to design a process for the automated creation of data mining tasks for verifying given domain knowledge and searching for new knowledge, and to implement the verification of given domain knowledge of the attribute-dependency (influence) type with search-space adjustments. The implementation language is the LMCL language that enables usage of the LISp-Mi...

  11. Grid infrastructure for automatic processing of SAR data for flood applications

    Science.gov (United States)

    Kussul, Natalia; Skakun, Serhiy; Shelestov, Andrii

    2010-05-01

    More and more geosciences applications are being moved onto Grids. Geosciences applications are complex: they involve complex workflows, computationally intensive environmental models, and the management and integration of heterogeneous data sets, and Grid computing offers solutions to tackle these problems. Many geosciences applications, especially those related to disaster management and mitigation, require geospatial services to be delivered in a timely manner. For example, information on flooded areas should be provided to the corresponding organizations (local authorities, civil protection agencies, UN agencies, etc.) within 24 h so that the resources required to mitigate the disaster can be allocated effectively. Therefore, providing an infrastructure and services that enable automatic generation of products based on the integration of heterogeneous data is a task of great importance. In this paper we present a Grid infrastructure for automatic processing of synthetic-aperture radar (SAR) satellite images to derive flood products. In particular, we use SAR data acquired by ESA's ENVISAT satellite, and neural networks to derive flood extent. The data are provided in operational mode from the ESA rolling archive (within an ESA Category-1 grant). We developed a portal that is based on the OpenLayers framework and provides an access point to the developed services. Through the portal the user can define a geographical region and search for the required data. Upon selection of data sets a workflow is automatically generated and executed on the resources of the Grid infrastructure. For workflow execution and management we use the Karajan language. The workflow of SAR data processing consists of the following steps: image calibration, image orthorectification, image processing with neural networks, topographic effects removal, geocoding and transformation to lat/long projection, and visualisation. These steps are executed by different software, and can be

  12. Automatic layout of ventilation systems by means of electronic data processing

    Energy Technology Data Exchange (ETDEWEB)

    Altena, H.; Priess, H.; Fries, E.; Hoffmann, G.

    1982-12-09

    A working group developed a method for the automatic planning of ventilation systems by means of electronic data processing. The purpose was to increase the information content of the ventilation plan and to obtain a useful tool for ventilation planning while reducing the effort required to prepare ventilation plans. A program system was developed by means of which ventilation plans can be plotted in compliance with the regulations set by the mining authorities. The program system was applied for the first time at the Osterfeld mine. The plan is clearly organized, accurate, and easy to understand. This positive experience suggests that computer-aided plans should be more widely applied. The mining authorities support this view.

  13. Overview of the SOFIA Data Processing System: A generalized system for manual and automatic data processing at the SOFIA Science Center

    Science.gov (United States)

    Shuping, Ralph; Krzaczek, Robert; Vacca, William D.; Charcos-Llorens, Miguel; Reach, William T.; Alles, Rosemary; Clarke, Melanie; Melchiorri, Riccardo; Radomski, James T.; Shenoy, Sachindev S.; Sandel, David; Omelian, Eric

    2015-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is an airborne astronomical observatory comprised of a 2.5-meter telescope mounted in the aft section of a Boeing 747SP aircraft. SOFIA is designed to execute observations at altitudes between 37,000 and 45,000 feet, above 99% of atmospheric water vapor. During routine operations, several instruments will be available to the astronomical community, including cameras and spectrographs in the near- to far-IR. Raw data obtained in-flight require a significant amount of processing to correct for background emission (from both the telescope and atmosphere), remove instrumental artifacts, correct for atmospheric absorption, and apply both wavelength and flux calibration. In general, this processing is highly specific to the instrument and telescope. Once this post-processing is complete, the data can be used in scientific analysis and publications. In order to maximize the scientific output of the observatory, the SOFIA Science Center must provide these post-processed data sets to Guest Investigators in a timely manner. To meet this requirement, we have designed and built the SOFIA Data Processing System (DPS): an in-house set of tools and services that can be used in both automatic ("pipeline") and manual modes to process data from a variety of instruments. In this poster paper, we present an overview of the DPS concepts and architecture, as well as operational results from the first two SOFIA observing cycles (2013-2014).

  14. Automatic data processing and analysis system for monitoring region around a planned nuclear power plant

    Science.gov (United States)

    Kortström, Jari; Tiira, Timo; Kaisko, Outi

    2016-03-01

    The Institute of Seismology of the University of Helsinki is building a new local seismic network, called the OBF network, around a planned nuclear power plant in Northern Ostrobothnia, Finland. The network will consist of nine new stations and one existing station. The network should be dense enough to provide azimuthal coverage better than 180° and automatic detection capability down to ML -0.1 within a radius of 25 km from the site. The network construction work began in 2012 and the first four stations started operation at the end of May 2013. We applied an automatic seismic signal detection and event location system to a network of 13 stations consisting of the four new stations and the nearest stations of the Finnish and Swedish national seismic networks. Between the end of May and December 2013 the network detected 214 events inside the predefined area of 50 km radius surrounding the planned nuclear power plant site. Of those detections, 120 were identified as spurious events. A total of 74 events were associated with known quarries and mining areas. The average location error, calculated as the difference between the location announced by environmental authorities and companies and the automatic location, was 2.9 km. During the same time period eight earthquakes in the magnitude range 0.1-1.0 occurred within the area. Of these, seven could be detected automatically. The results from the phase 1 stations of the OBF network indicate that the planned network can achieve its goals.

  15. Automatic segmentation of blood vessels from retinal fundus images through image processing and data mining techniques

    Indian Academy of Sciences (India)

    R Geetharamani; Lakshmi Balasubramanian

    2015-09-01

    Machine Learning techniques have been useful in almost every field of concern. Data Mining, a branch of Machine Learning, is one of the most extensively used techniques. The ever-increasing demands in the field of medicine are being addressed by computational approaches in which Big Data analysis, image processing and data mining are top priorities. These techniques have been exploited in the domain of ophthalmology for better retinal fundus image analysis. Blood vessels, one of the most significant retinal anatomical structures, are analysed for diagnosis of many diseases like retinopathy, occlusion and many other vision-threatening diseases. Vessel segmentation can also be a pre-processing step for segmentation of other retinal structures like the optic disc, fovea, microaneurysms, etc. In this paper, blood vessel segmentation is attempted through image processing and data mining techniques. The retinal blood vessels were segmented through color space conversion and color channel extraction, image pre-processing, Gabor filtering, image post-processing, feature construction through application of principal component analysis, k-means clustering and first-level classification using the Naïve Bayes classification algorithm, and second-level classification using C4.5 enhanced with bagging techniques. Association of every pixel with the feature vector necessitates Big Data analysis. The proposed methodology was evaluated on a publicly available database, STARE. The results reported 95.05% accuracy on the entire dataset; the accuracy was 95.20% on normal images and 94.89% on pathological images. A comparison of these results with existing methodologies is also reported. This methodology can help ophthalmologists in better and faster analysis and hence earlier treatment of patients.
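    A compressed sketch of a pipeline in the same spirit as the one described above (green-channel extraction, a small Gabor filter bank, PCA-based feature construction, and a first-level Naïve Bayes pixel classifier). It is illustrative only: the actual study adds further pre/post-processing, k-means clustering, a second-level C4.5/bagging classifier and evaluation on STARE, none of which are reproduced here; the fundus image and mask names in the usage comment are hypothetical.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

def pixel_features(rgb_image: np.ndarray) -> np.ndarray:
    """Per-pixel features: green channel plus responses of a small Gabor bank."""
    green = rgb_image[..., 1].astype(float)
    maps = [green]
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, _ = gabor(green, frequency=0.2, theta=theta)
        maps.append(real)
    return np.stack(maps, axis=-1).reshape(-1, len(maps))

def train_vessel_classifier(image: np.ndarray, vessel_mask: np.ndarray):
    """PCA feature construction followed by first-level Naïve Bayes classification."""
    X = pixel_features(image)
    model = make_pipeline(PCA(n_components=3), GaussianNB())
    model.fit(X, vessel_mask.reshape(-1))
    return model

# Usage (with a hypothetical labelled fundus image and ground-truth vessel mask):
# model = train_vessel_classifier(fundus_rgb, manual_vessel_mask)
# predicted_mask = model.predict(pixel_features(fundus_rgb)).reshape(fundus_rgb.shape[:2])
```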

  16. Automatic defect detection for TFT-LCD array process using quasiconformal kernel support vector data description.

    Science.gov (United States)

    Liu, Yi-Hung; Chen, Yan-Jen

    2011-01-01

    Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve the generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves the generalization performance of SVDD. The improvement is shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.
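    The core idea of the quasiconformal kernel transformation is to rescale a base kernel K(x, y) by a positive factor function c(·), giving K~(x, y) = c(x)·c(y)·K(x, y), which magnifies the feature-space resolution around chosen regions. Below is a minimal sketch under assumed choices (an RBF base kernel and a simple Gaussian-weighted c(·) centred on a set of expansion points); the paper's actual construction of c(·) and the SVDD training itself are not reproduced.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Base RBF kernel K(x, y)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def conformal_factor(x, expansion_points, tau=1.0):
    """Assumed positive factor c(x); larger near the chosen expansion points."""
    d2 = min(np.sum((x - e) ** 2) for e in expansion_points)
    return 1.0 + np.exp(-d2 / tau)

def qk(x, y, expansion_points, gamma=1.0, tau=1.0):
    """Quasiconformal kernel K~(x, y) = c(x) * c(y) * K(x, y)."""
    return (conformal_factor(x, expansion_points, tau)
            * conformal_factor(y, expansion_points, tau)
            * rbf(x, y, gamma))

# Example: the transformed kernel boosts similarity values near the expansion points.
pts = [np.array([0.0, 0.0])]
print(qk(np.array([0.1, 0.0]), np.array([0.0, 0.1]), pts))
```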

  17. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2011-09-01

    Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve the generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves the generalization performance of SVDD. The improvement is shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.

  18. Automatic mapping of monitoring data

    DEFF Research Database (Denmark)

    Lophaven, Søren; Nielsen, Hans Bruun; Søndergaard, Jacob

    2005-01-01

    This paper presents an approach, based on universal kriging, for automatic mapping of monitoring data. The performance of the mapping approach is tested on two data sets containing daily mean gamma dose rates in Germany reported by means of the national automatic monitoring network (IMIS) ... In the second dataset an accidental release of radioactivity into the environment was simulated in the south-western corner of the monitored area. The approach has a tendency to smooth the actual data values, and therefore it underestimates extreme values, as seen in the second dataset. However, it is capable...
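    For orientation, a kriging prediction interpolates a value at an unobserved location as a weighted sum of the observations, with weights obtained from a covariance (variogram) model. The sketch below shows plain ordinary kriging with an assumed exponential covariance and made-up dose-rate values; the paper uses universal kriging (with a drift term), which this simplified illustration omits.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=50.0):
    """Exponential covariance model C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / rng)

def ordinary_kriging(xy_obs, z_obs, xy_new, sill=1.0, rng=50.0):
    """Predict z at xy_new from observations (xy_obs, z_obs)."""
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d_obs, sill, rng)
    A[n, n] = 0.0                               # unbiasedness (Lagrange) row/column
    d_new = np.linalg.norm(xy_obs - xy_new, axis=-1)
    b = np.append(exp_cov(d_new, sill, rng), 1.0)
    w = np.linalg.solve(A, b)[:n]               # kriging weights
    return float(w @ z_obs)

# Example: three hypothetical gamma dose-rate stations (x, y in km), one target point.
xy = np.array([[0.0, 0.0], [30.0, 10.0], [10.0, 40.0]])
z = np.array([0.10, 0.12, 0.09])                # microSv/h, illustrative values
print(ordinary_kriging(xy, z, np.array([15.0, 15.0])))
```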

  19. Software for the FODS installation automatic control, data acquisition and processing

    International Nuclear Information System (INIS)

    Software for the focusing two-arm spectrometer designed for the study of particle production processes with large transverse momenta is described. The interaction of the programs, the flow of information files, and the special software for operation with an intensive particle beam are considered in detail. The organization of beam-characteristic monitoring and of drift-chamber and electronics calibration is described. Basic principles used in designing the software for the two-computer complex comprising an HP-2100A and an ES-1040 are presented. The software described is used when carrying out experiments on the study of hadron production in proton-proton, proton-deuteron and proton-nuclear interactions, in the course of which azimuthal correlations of γ-quanta with large transverse momenta are measured

  20. Overview of the SOFIA Data Processing System: A generalized system for manual and automatic data processing at the SOFIA Science Center

    CERN Document Server

    Shuping, R Y; Vacca, W D; Charcos-Llorens, M; Reach, W T; Alles, R; Clarke, M; Melchiorri, R; Radomski, J; Shenoy, S; Sandel, D; Omelian, E B

    2014-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is an airborne astronomical observatory comprised of a 2.5-meter telescope mounted in the aft section of a Boeing 747SP aircraft. During routine operations, several instruments will be available to the astronomical community including cameras and spectrographs in the near- to far-IR. Raw data obtained in-flight require a significant amount of processing to correct for background emission (from both the telescope and atmosphere), remove instrumental artifacts, correct for atmospheric absorption, and apply both wavelength and flux calibration. In general, this processing is highly specific to the instrument and telescope. In order to maximize the scientific output of the observatory, the SOFIA Science Center must provide these post-processed data sets to Guest Investigators in a timely manner. To meet this requirement, we have designed and built the SOFIA Data Processing System (DPS): an in-house set of tools and services that can be used in both auto...

  1. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated, a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2-meter grid size and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow-accumulation river network), and finally the production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files; the infrastructure to store (up to 40 TB between results and intermediate files) and process the data, using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; the stability of the software (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri); and finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement of the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
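    A toy illustration of the hydrological criterion mentioned above (flow accumulation on a gridded terrain model): each cell drains to its steepest downslope neighbour, cells are processed from highest to lowest, and cells whose accumulated area exceeds a threshold are flagged as channel. This is a simplified D8 scheme on a tiny array, not IGN-ES's production workflow, which runs on 2 m hydrological terrain models with TerraScan, GlobalMapper, FME and ArcGIS.

```python
import numpy as np

def d8_flow_accumulation(dem: np.ndarray) -> np.ndarray:
    """Simplified D8 flow accumulation: every cell passes its area downslope."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)            # each cell contributes itself
    order = np.dstack(np.unravel_index(np.argsort(dem, axis=None)[::-1], dem.shape))[0]
    for r, c in order:                               # highest cells first
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    drop = dem[r, c] - dem[nr, nc]
                    if drop > best:
                        best, target = drop, (nr, nc)
        if target is not None:
            acc[target] += acc[r, c]
    return acc

dem = np.array([[5.0, 4.0, 3.0],
                [4.0, 3.0, 2.0],
                [3.0, 2.0, 1.0]])
acc = d8_flow_accumulation(dem)
print(acc)
print(acc >= 3)    # crude channel mask with an accumulation threshold of 3 cells
```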

  2. A novel GIS-based tool for estimating present-day ocean reference depth using automatically processed gridded bathymetry data

    Science.gov (United States)

    Jurecka, Mirosława; Niedzielski, Tomasz; Migoń, Piotr

    2016-05-01

    This paper presents a new method for computing the present-day value of the reference depth (dr), which is an essential input for the assessment of past sea-level changes. The method applies a novel automatic geoprocessing tool developed using Python scripting and ArcGIS, and uses recent data on ocean floor depth, sediment thickness, and the age of the oceanic crust. The procedure is multi-step and involves creation of a bathymetric dataset corrected for sediment loading and isostasy, delineation of subduction zones, computation of perpendicular sea-floor profiles, and statistical analysis of these profiles versus crust age. The analysis of site-specific situations near subduction zones all around the world shows a number of instances where the depth of the oceanic crust stabilizes at a certain level before reaching the subduction zone, and this occurs at depths much lower than proposed in previous approaches to the reference depth issue. An analysis of Jurassic and Cretaceous oceanic lithosphere shows that the most probable interval at which the reference depth occurs is 5300-5800 m. This interval is broadly consistent with dr estimates determined using the Global Depth-Heatflow model (GDH1), but is significantly lower than dr estimates calculated on the basis of the Parsons-Sclater Model (PSM).

  3. GAIT-ER-AID: An Expert System for Analysis of Gait with Automatic Intelligent Pre-Processing of Data

    OpenAIRE

    Bontrager, EL.; Perry, J.; Bogey, R.; Gronley, J.; Barnes, L.; Bekey, G.; Kim, JW

    1990-01-01

    This paper describes the architecture and applications of an expert system designed to identify the specific muscles responsible for a given dysfunctional gait pattern. The system consists of two parts: a data analysis expert system (DA/ES) and a gait pathology expert system (GP/ES). The DA/ES processes raw data on joint angles, foot-floor contact patterns and EMGs from the relevant muscles and synthesizes them into a data frame for use by the GP/ES. Various aspects of the intelligent data pre-p...

  4. Automatic and strategic processes in advertising effects

    DEFF Research Database (Denmark)

    Grunert, Klaus G.

    1996-01-01

    Two kinds of cognitive processes can be distinguished: automatic processes, which are mostly subconscious, are learned and changed very slowly, and are not subject to the capacity limitations of working memory; and strategic processes, which are conscious, are subject to capacity limitations, and can easily be adapted to situational circumstances. Both the perception of advertising and the way advertising influences brand evaluation involve both kinds of processes. Automatic processes govern the recognition of advertising stimuli and the relevance decision which determines further higher-level processing... These implications are at variance with current notions about advertising effects. For example, the attention span problem will be relevant only for strategic processes, not for automatic processes; a certain amount of learning can occur with very little conscious effort; and advertising's effect on brand evaluation may be more stable...

  5. A CityGML extension for traffic-sign objects that guides the automatic processing of data collected using Mobile Mapping technology

    Science.gov (United States)

    Varela-González, M.; Riveiro, B.; Arias-Sánchez, P.; González-Jorge, H.; Martínez-Sánchez, J.

    2014-11-01

    The rapid evolution of integral schemes accounting for geometric and semantic data has been strongly motivated by the advances in mobile laser scanning technology over the last decade; automation in data processing has also recently influenced the expansion of new model concepts. This paper reviews some important issues involved in the new paradigms of city 3D modelling: an interoperable schema for city 3D modelling (CityGML) and mobile mapping technology to provide the features that compose the city model. The paper focuses on traffic signs, discussing their characterization using CityGML in order to ease the implementation of LiDAR technology in road management software, as well as analysing some limitations of the current technology in automatic detection and classification.

  6. Etna_NETVIS: A dedicated tool for automatically pre-processing high frequency data useful to extract geometrical parameters and track the evolution of the lava field

    Science.gov (United States)

    Marsella, Maria; Junior Valentino D'Aranno, Peppe; De Bonis, Roberto; Nardinocchi, Carla; Scifoni, Silvia; Scutti, Marianna; Sonnessa, Alberico; Wahbeh, Wissam; Biale, Emilio; Coltelli, Mauro; Pecora, Emilio; Prestifilippo, Michele; Proietti, Cristina

    2016-04-01

    In volcanic areas, where it can be difficult to gain access to the most critical zones for carrying out direct surveys, digital photogrammetry techniques are rarely applied, although in many cases they have proved to have remarkable potential, such as the possibility of following the evolution of volcanic processes (fracturing, vent positions, lava fields, lava front positions) and deformation processes (inflation/deflation and instability phenomena induced by volcanic activity). These results can be obtained, in the framework of standard surveillance activities, by acquiring multi-temporal datasets including Digital Orthophotos (DO) and Digital Elevation Models (DEM) to be used for implementing a quantitative and comparative analysis. The frequency of the surveys can be intensified during emergency phases to implement quasi-real-time monitoring for supporting civil protection actions. The high level of accuracy and the short time required for image processing make digital photogrammetry a suitable tool for monitoring the evolution of volcanic processes, which are usually characterized by large and rapid mass displacements. In order to optimize and extend the existing permanent ground NEtwork of Thermal and VIsible Sensors located on Mt. Etna (Etna_NETVIS) and to improve the observation of the most active areas, an approach for monitoring surface syn-eruptive processes was implemented. A dedicated tool for automatically pre-processing high-frequency data, useful for extracting geometrical parameters as well as for tracking the evolution of the lava field, was developed and tested in both simulated and real scenarios. The tool allows the extraction of a coherent multi-temporal dataset of orthophotos useful for evaluating the active flow area and estimating effusion rates. Furthermore, Etna_NETVIS data were used to downscale the information derived from satellite data and/or to integrate the satellite datasets in case of incomplete coverage or missing acquisitions. This work was developed in the

  7. Data processing

    CERN Document Server

    Fry, T F

    2013-01-01

    Data Processing discusses the principles, practices, and associated tools in data processing. The book is comprised of 17 chapters that are organized into three parts. The first part covers the characteristics, systems, and methods of data processing. Part 2 deals with the data processing practice; this part discusses the data input, output, and storage. The last part discusses topics related to systems and software in data processing, which include checks and controls, computer language and programs, and program elements and structures. The text will be useful to practitioners of computer-rel

  8. Automatic recognition of lactating sow behaviors through depth image processing

    Science.gov (United States)

    Manual observation and classification of animal behaviors is laborious, time-consuming, and of limited ability to process large amounts of data. A computer vision-based system was developed that automatically recognizes sow behaviors (lying, sitting, standing, kneeling, feeding, drinking, and shiftin...

  9. The Caltech Tomography Database and Automatic Processing Pipeline.

    Science.gov (United States)

    Ding, H Jane; Oikonomou, Catherine M; Jensen, Grant J

    2015-11-01

    Here we describe the Caltech Tomography Database and automatic image processing pipeline, designed to process, store, display, and distribute electron tomographic data including tilt-series, sample information, data collection parameters, 3D reconstructions, correlated light microscope images, snapshots, segmentations, movies, and other associated files. Tilt-series are typically uploaded automatically during collection to a user's "Inbox" and processed automatically, but can also be entered and processed in batches via scripts or file-by-file through an internet interface. As with the video website YouTube, each tilt-series is represented on the browsing page with a link to the full record, a thumbnail image and a video icon that delivers a movie of the tomogram in a pop-out window. Annotation tools allow users to add notes and snapshots. The database is fully searchable, and sets of tilt-series can be selected and re-processed, edited, or downloaded to a personal workstation. The results of further processing and snapshots of key results can be recorded in the database, automatically linked to the appropriate tilt-series. While the database is password-protected for local browsing and searching, datasets can be made public and individual files can be shared with collaborators over the Internet. Together these tools facilitate high-throughput tomography work by both individuals and groups. PMID:26087141

  10. XML-Based Automatic Test Data Generation

    OpenAIRE

    Halil Ibrahim Bulbul; Turgut Bakir

    2012-01-01

    Software engineering aims at increasing quality and reliability while decreasing the cost of the software. Testing is one of the most time-consuming phases of the software development lifecycle. Improvement in software testing results in decrease in cost and increase in quality of the software. Automation in software testing is one of the most popular ways of software cost reduction and reliability improvement. In our work we propose a system called XML-based automatic test data generation th...

  11. DEVELOPMENT AND TESTING OF GEO-PROCESSING MODELS FOR THE AUTOMATIC GENERATION OF REMEDIATION PLAN AND NAVIGATION DATA TO USE IN INDUSTRIAL DISASTER REMEDIATION

    Directory of Open Access Journals (Sweden)

    G. Lucas

    2015-08-01

    This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan in which the clean-up parcels are least numerous, considering it an optimal spatial configuration. The second model drifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern with a sampling distance of a fifth of a parcel width and keeps the most optimal drifted version, here also with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that optimal orientation identification can result in saving work, time and money in remediation. The various tests demonstrated that the model gains efficiency when (1) the individual features of the contaminated area present a significant orientation in their geometry (features are long), and (2) the size of pollution extent features becomes closer to the size of the parcels (scale effect). The second model shows only 1% difference with the variation of feature number; so this last is less interesting for

  12. Development and Testing of Geo-Processing Models for the Automatic Generation of Remediation Plan and Navigation Data to Use in Industrial Disaster Remediation

    Science.gov (United States)

    Lucas, G.; Lénárt, C.; Solymosi, J.

    2015-08-01

    This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan in which the clean-up parcels are least numerous, considering it an optimal spatial configuration. The second model drifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern with a sampling distance of a fifth of a parcel width and keeps the most optimal drifted version, here also with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that optimal orientation identification can result in saving work, time and money in remediation. The various tests demonstrated that the model gains efficiency when (1) the individual features of the contaminated area present a significant orientation in their geometry (features are long), and (2) the size of pollution extent features becomes closer to the size of the parcels (scale effect). The second model shows only 1% difference with the variation of feature number; so this last is less interesting for planning
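    A minimal sketch of the first model's orientation test: the pollution polygon is rotated to each candidate angle, covered with an axis-aligned grid of clean-up parcels, and the orientation producing the fewest intersecting parcels is kept. It uses shapely for illustration; the study's actual Python/ArcGIS scripts, parcel dimensions and the drifting (second) model are not reproduced, and the example polygon is hypothetical.

```python
from shapely.geometry import Polygon, box
from shapely.affinity import rotate

def parcel_count(poly: Polygon, width: float, height: float, angle: float) -> int:
    """Number of width x height parcels needed to cover `poly` at this orientation."""
    rotated = rotate(poly, -angle, origin="centroid")   # grid stays axis-aligned
    minx, miny, maxx, maxy = rotated.bounds
    count = 0
    y = miny
    while y < maxy:
        x = minx
        while x < maxx:
            if box(x, y, x + width, y + height).intersects(rotated):
                count += 1
            x += width
        y += height
    return count

def best_orientation(poly: Polygon, width: float, height: float):
    """Test the four candidate orientations and keep the least numerous plan."""
    return min((parcel_count(poly, width, height, a), a) for a in (0, 45, 90, 135))

# Hypothetical elongated pollution patch, covered by 20 m x 5 m parcels.
patch = Polygon([(0, 0), (100, 30), (110, 45), (10, 15)])
print(best_orientation(patch, 20.0, 5.0))
```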

  13. [Use of the Elektronika-T3-16M special-purpose computer for the automatic processing of cytophotometric and cytofluorimetric data].

    Science.gov (United States)

    Loktionov, A S; Prianishnikov, V A

    1981-05-01

    A system has been proposed to provide automatic analysis of data from: (a) point cytophotometry, (b) two-wavelength cytophotometry, and (c) cytofluorimetry. The system provides input of the data from a photomultiplier to the specialized computer "Elektronika-T3-16M" together with simultaneous statistical analysis of these data. Information on the programs used is presented. The advantages of the system, compared with some commercially available cytophotometers, are indicated.

  14. Automatic inline defect detection for a thin film transistor–liquid crystal display array process using locally linear embedding and support vector data description

    International Nuclear Information System (INIS)

    Defect detection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacturing. This paper proposes an inline defect-detection (IDD) system by which defects can be automatically detected in a TFT array process. The IDD system is composed of three stages: the image preprocessing, the appearance-based classification and the decision-making stages. In the first stage, the pixels are segmented from an input image by the designed pixel segmentation method. The pixels are then sent into the appearance-based classification stage for defect and non-defect classification. Two novel methods are embedded in this stage: locally linear embedding (LLE) and support vector data description (SVDD). LLE is able to substantially reduce the dimensions of the input pixels by manifold learning, and SVDD is able to effectively discriminate the normal pixels from the defective ones with a hypersphere by one-class classification. After aggregating the classification results, the third stage outputs the final detection result. Experimental results, carried out on real images provided by an LCD manufacturer, show that the IDD system can not only achieve a high defect-detection rate of over 98%, but also accomplish the task of inline defect detection within 4 s for one input image

  15. Automatic generation of combinatorial test data

    CERN Document Server

    Zhang, Jian; Ma, Feifei

    2014-01-01

    This book reviews the state-of-the-art in combinatorial testing, with particular emphasis on the automatic generation of test data. It describes the most commonly used approaches in this area - including algebraic construction, greedy methods, evolutionary computation, constraint solving and optimization - and explains major algorithms with examples. In addition, the book lists a number of test generation tools, as well as benchmarks and applications. Addressing a multidisciplinary topic, it will be of particular interest to researchers and professionals in the areas of software testing, combi

  16. Modeling of a data exchange process in the Automatic Process Control System on the base of the universal SCADA-system

    Science.gov (United States)

    Topolskiy, D.; Topolskiy, N.; Solomin, E.; Topolskaya, I.

    2016-04-01

    In the present paper the authors discuss some ways of solving energy-saving problems in mechanical engineering. In the authors' opinion, one way of solving this problem is the integrated modernization of the power engineering facilities of mechanical engineering companies, aimed at increasing the efficiency of energy supply control and improving the commercial accounting of electric energy. The authors propose the use of digital current and voltage transformers for these purposes. To check the compliance of this equipment with the IEC 61850 International Standard, we have built a mathematical model of the data exchange process between the measuring transformers and a universal SCADA system. The modeling results show that the discussed equipment meets the requirements of the Standard and that the use of the universal SCADA system for these purposes is preferable and economically reasonable. In the modeling the authors used the following software: MasterScada, Master OPC_DI_61850, OPNET.

  17. Data processing

    International Nuclear Information System (INIS)

    The 1988 progress report of the Data Processing laboratory (Polytechnic School, France), is presented. The laboratory research fields are: the semantics, the tests and the semantic analysis of the codes, the formal calculus, the software applications, the algorithms, the neuron networks and VLSI (Very Large Scale Integration). The investigations concerning the polynomial rings are performed by means of the standard basis approach. Among the research topics, the Pascal codes, the parallel processing, the combinatorial, statistical and asymptotic properties of the fundamental data processing tools, the signal processing and the pattern recognition. The published papers, the congress communications and the thesis are also included

  18. Automatic Control Technology of Process Equipment Based on Process Data Driven

    Institute of Scientific and Technical Information of China (English)

    柳敏

    2015-01-01

    With increasingly fierce market competition, manufacturing enterprises must improve product quality and shorten the manufacturing cycle in order to maintain a market advantage, and the motion precision and degree of automation of process equipment directly affect both, so they have received wide attention. To improve the production effectiveness of complex-product manufacturing, the automatic control system of the process equipment needs to be studied systematically on the basis of process-data driving. By analysing the process data provided by the system, an industrial information model oriented to the automatic control of process tooling is built, and a control and management platform based on process equipment automatic control technology is formed, which guarantees product quality while promoting automated and digital development. This paper studies the process-data-driven automatic control system of process equipment and, combined with practical experience, forms the corresponding process information model and process equipment automatic control technology and validates its application, in an effort to promote manufacturing production capacity.

  19. Data mining of geospatial data: combining visual and automatic methods

    OpenAIRE

    Demšar, Urška

    2006-01-01

    Most of the largest databases currently available have a strong geospatial component and contain potentially useful information which might be of value. The discipline concerned with extracting this information and knowledge is data mining. Knowledge discovery is performed by applying automatic algorithms which recognise patterns in the data. Classical data mining algorithms assume that data are independently generated and identically distributed. Geospatial data are multidimensional, spatial...

  20. The automatic calibration of Korean VLBI Network data

    CERN Document Server

    Hodgson, Jeffrey A; Zhao, Guang-Yao; Algaba, Juan-Carlos; Yun, Youngjoo; Jung, Taehyun; Byun, Do-Young

    2016-01-01

    The calibration of Very Long Baseline Interferometry (VLBI) data has long been a time-consuming process. The Korean VLBI Network (KVN) is a simple array consisting of three identical antennas. Because four frequencies are observed simultaneously, phase solutions can be transferred from lower frequencies to higher frequencies in order to improve phase coherence and hence sensitivity at higher frequencies. Due to the homogeneous nature of the array, the KVN is also well suited for automatic calibration. In this paper we describe the automatic calibration of single-polarisation KVN data using the KVN Pipeline and compare the results against VLBI data that have been manually reduced. We find that the pipelined data using phase transfer produce better results than a manually reduced dataset not using the phase transfer. Additionally, we compared the pipeline results with a manually reduced phase-transferred dataset and found the results to be identical.

  1. The Automatic Calibration of Korean VLBI Network Data

    Science.gov (United States)

    Hodgson, Jeffrey A.; Lee, Sang-Sung; Zhao, Guang-Yao; Algaba, Juan-Carlos; Yun, Youngjoo; Jung, Taehyun; Byun, Do-Young

    2016-08-01

    The calibration of Very Long Baseline Interferometry (VLBI) data has long been a time-consuming process. The Korean VLBI Network (KVN) is a simple array consisting of three identical antennas. Because four frequencies are observed simultaneously, phase solutions can be transferred from lower frequencies to higher frequencies in order to improve phase coherence and hence sensitivity at higher frequencies. Due to the homogeneous nature of the array, the KVN is also well suited for automatic calibration. In this paper we describe the automatic calibration of single-polarisation KVN data using the KVN Pipeline and compare the results against VLBI data that have been manually reduced. We find that the pipelined data using phase transfer produce better results than a manually reduced dataset not using the phase transfer. Additionally, we compared the pipeline results with a manually reduced phase-transferred dataset and found the results to be identical.

  2. Spectral Curve Fitting for Automatic Hyperspectral Data Analysis

    CERN Document Server

    Brown, Adrian J

    2014-01-01

    Automatic discovery and curve fitting of absorption bands in hyperspectral data can enable the analyst to identify materials present in a scene by comparison with library spectra. This procedure is common in laboratory spectra, but is challenging for sparse hyperspectral data. A procedure for robust discovery of overlapping bands in hyperspectral data is described in this paper. The method is capable of automatically discovering and fitting symmetric absorption bands, can separate overlapping absorption bands in a stable manner, and has relatively low sensitivity to noise. A comparison with techniques already available in the literature is presented using simulated spectra. An application is demonstrated utilizing the shortwave infrared (2.0-2.5 micron or 5000-4000 cm-1) region. A small hyperspectral scene is processed to demonstrate the ability of the method to detect small shifts in absorption wavelength caused by varying white mica chemistry in a natural setting.
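    A hedged sketch of the band-fitting step: a single Gaussian absorption feature is fitted to a continuum-removed reflectance spectrum with scipy, and the fitted centre reports the absorption wavelength, whose small shifts trace, for example, varying white mica chemistry. The paper's method additionally discovers bands automatically and separates overlapping bands in a stable manner, which this sketch does not attempt; the synthetic spectrum below is purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_absorption(wl, depth, center, width):
    """Continuum-removed reflectance: 1 minus a Gaussian absorption band."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

# Synthetic SWIR spectrum with an absorption band near 2.20 micron plus noise.
wl = np.linspace(2.0, 2.5, 200)
rng = np.random.default_rng(0)
spectrum = gaussian_absorption(wl, 0.3, 2.20, 0.02) + rng.normal(0, 0.005, wl.size)

popt, _ = curve_fit(gaussian_absorption, wl, spectrum, p0=(0.2, 2.21, 0.03))
depth, center, width = popt
print(f"fitted band centre: {center:.4f} micron (depth {depth:.2f})")
```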

  3. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    Science.gov (United States)

    Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.

    2016-04-01

    Aftershock sequences following very large earthquakes present enormous challenges to near-realtime generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched
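    The iterative strategy can be summarized as: locate aftershocks within the designated source region with a targeted semi-automatic process, predict the arrival times of their phases at each station, strip matching detections from the parametric data streams, and only then re-run the global phase association. Below is a toy sketch of the stripping step with a crude constant-velocity travel-time stand-in; a real system would use proper travel-time tables and per-phase association rules, and the station name and distances are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    station: str
    time: float          # seconds after some reference epoch

def predicted_arrival(origin_time: float, dist_km: float, velocity_kms: float = 8.0) -> float:
    """Crude P-arrival prediction; stands in for a real travel-time table."""
    return origin_time + dist_km / velocity_kms

def strip_aftershock_phases(detections, aftershock_origin_times, station_dists, tol_s=5.0):
    """Remove detections explainable by located aftershocks before re-association."""
    kept = []
    for det in detections:
        explained = any(
            abs(det.time - predicted_arrival(ot, station_dists[det.station])) <= tol_s
            for ot in aftershock_origin_times
        )
        if not explained:
            kept.append(det)
    return kept

dets = [Detection("ARRAY1", 100.0), Detection("ARRAY1", 630.0)]
print(strip_aftershock_phases(dets, aftershock_origin_times=[0.0],
                              station_dists={"ARRAY1": 800.0}))
```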

  4. [Automatic analysis pipeline of next-generation sequencing data].

    Science.gov (United States)

    Wenke, Li; Fengyu, Li; Siyao, Zhang; Bin, Cai; Na, Zheng; Yu, Nie; Dao, Zhou; Qian, Zhao

    2014-06-01

    The development of next-generation sequencing has generated high demand for data processing and analysis. Although many software tools exist for analyzing next-generation sequencing data, most of them are designed for one specific function (e.g., alignment, variant calling or annotation). Therefore, it is necessary to combine them for data analysis and to generate interpretable results for biologists. This study designed a pipeline to process Illumina sequencing data based on the Perl programming language and the SGE system. The pipeline takes original sequence data (fastq format) as input, calls the standard data processing software (e.g., BWA, Samtools, GATK, and Annovar), and finally outputs a list of annotated variants that researchers can further analyze. The pipeline simplifies manual operation and improves efficiency through automation and parallel computation. Users can easily run the pipeline by editing the configuration file or clicking the graphical interface. Our work will facilitate research projects using sequencing technology.
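    A skeletal version of such a pipeline, expressed here in Python rather than the Perl/SGE setup described above: each stage is an external command that is run only if its output file is missing, so an interrupted run can be resumed. The command lines (bwa mem, samtools sort/index, and the later GATK/ANNOVAR stages) are indicative placeholders only; the exact options depend on the installed tool versions.

```python
import subprocess
from pathlib import Path

def run_stage(cmd: list[str], output: Path) -> None:
    """Run one pipeline stage, skipping it if its output already exists."""
    if output.exists():
        print(f"skip {output.name}")
        return
    print("run:", " ".join(cmd))
    subprocess.run(cmd, check=True)        # raise if the stage fails

def pipeline(ref: str, fq1: str, fq2: str, prefix: str) -> None:
    # Indicative commands only; real flags depend on the installed tool versions.
    run_stage(["bwa", "mem", "-o", f"{prefix}.sam", ref, fq1, fq2], Path(f"{prefix}.sam"))
    run_stage(["samtools", "sort", "-o", f"{prefix}.bam", f"{prefix}.sam"], Path(f"{prefix}.bam"))
    run_stage(["samtools", "index", f"{prefix}.bam"], Path(f"{prefix}.bam.bai"))
    # ... variant-calling (GATK) and annotation (ANNOVAR) stages would follow here.

# pipeline("hg19.fa", "sample_R1.fastq", "sample_R2.fastq", "sample")
```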

  5. Automatic processing of dominance and submissiveness

    OpenAIRE

    Moors, Agnes; De Houwer, Jan

    2005-01-01

    We investigated whether people are able to detect in a relatively automatic manner the dominant or submissive status of persons engaged in social interactions. Using a variant of the affective Simon task (De Houwer & Eelen, 1998), we demonstrated that the verbal response DOMINANT or SUBMISSIVE was facilitated when it had to be made to a target person that was respectively dominant or submissive. These results provide new information about the automatic nature of appraisals and ...

  6. Automatic rebalancing of data in ATLAS distributed data management

    CERN Document Server

    Barisits, Martin-Stefan; The ATLAS collaboration; Garonne, Vincent; Lassnig, Mario

    2016-01-01

    The ATLAS Distributed Data Management system stores more than 220PB of physics data across more than 130 sites globally. Rucio, the next generation data management system of the ATLAS collaboration has now been successfully operated for over a year. However, with the forthcoming start of run-2 and its expected workload and utilization, more automated and advanced methods of managing the data are needed. In this article we present an extension to the data management system, which is in charge of detecting and foreseeing data imbalances as well as storage elements reaching and surpassing their capacity limit. The system automatically and dynamically rebalances the data to other storage elements, while respecting and guaranteeing data distribution policies and ensuring the availability of the data. This concept not only lowers the operational burden, as these cumbersome procedures had previously to be done manually, but it also enables the system to use its distributed resources more efficiently, which not only ...
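
    The detect-and-rebalance idea can be sketched as follows; this is an illustration only, not the actual Rucio extension, and the storage-element dictionaries, threshold and policy_allows check are invented for the example.

        # Sketch: move the least-recently accessed datasets off over-full storage
        # elements onto the least-utilised elements that the policy allows.
        def rebalance(storage_elements, threshold=0.85):
            """storage_elements: list of dicts with 'name', 'used', 'capacity', 'datasets'."""
            fill = lambda se: se["used"] / se["capacity"]
            over = [se for se in storage_elements if fill(se) > threshold]
            moves = []
            for se in over:
                for ds in sorted(se["datasets"], key=lambda d: d["last_access"]):
                    if fill(se) <= threshold:
                        break
                    targets = sorted(storage_elements, key=fill)
                    target = next((t for t in targets
                                   if t is not se and policy_allows(ds, t)), None)
                    if target is None:
                        continue
                    moves.append((ds["name"], se["name"], target["name"]))
                    se["used"] -= ds["size"]
                    target["used"] += ds["size"]
            return moves

        def policy_allows(dataset, target):
            # placeholder for distribution-policy checks (e.g. required replicas per cloud)
            return True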

  7. Reliability and effectiveness of clickthrough data for automatic image annotation

    NARCIS (Netherlands)

    Tsikrika, T.; Diou, C.; Vries, A.P. de; Delopoulos, A.

    2010-01-01

    Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manua

  8. Automatic image processing as a means of safeguarding nuclear material

    International Nuclear Information System (INIS)

    Problems involved in computerized analysis of pictures taken by automatic film or video cameras in the context of international safeguards implementation are described. They include technical ones as well as the need to establish objective criteria for assessing image information. In the near future automatic image processing systems will be useful in verifying the identity and integrity of IAEA seals. (author)

  9. Advancements in Big Data Processing

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for Distributed Computing and Grid-technologies. The emerging Big Data revolution drives new discoveries in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields by pushing Big Data limits, enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprised of many jobs) became a unit of petascale data processing on the Grid. Splitting of a large data processing task into jobs enabled fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of the dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable Six Sigma produc...
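
    The retry mechanism mentioned above can be sketched as a bounded re-submission loop; run_job, the failure rate and the back-off policy below are placeholders, not the production Grid machinery.

        import random
        import time

        def run_job(job_id):
            # stand-in for submitting one job of a larger task; fails transiently now and then
            if random.random() < 0.2:
                raise RuntimeError("transient grid failure")
            return f"output of job {job_id}"

        def run_with_retries(job_id, max_retries=3, backoff=2.0):
            for attempt in range(max_retries + 1):
                try:
                    return run_job(job_id)
                except RuntimeError:
                    if attempt == max_retries:
                        raise
                    time.sleep(backoff * (attempt + 1))    # simple linear back-off

        results = [run_with_retries(i) for i in range(10)]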

  10. Automatic neutron PSD transmission from a process computer to a timeshare system

    International Nuclear Information System (INIS)

    A method for automatically telephoning, connecting, and transmitting neutron power-spectral density data from a CDC-1700 process control computer to a PDP-10 time-share system is described. Detailed program listings and block diagrams are included

  11. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
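
    The following toy example illustrates the general code-generation idea (a hand-rolled sketch, not the Memops framework or its UML tooling): a small metadata description is turned into Python classes whose constructors perform the validity checking, so that no per-class input-parsing code has to be written by hand.

        # Each entry maps a class name to its attributes and their expected types.
        MODEL = {
            "Molecule": [("name", str), ("residueCount", int)],
            "Shift":    [("value", float), ("error", float)],
        }

        def generate_class(name, attrs):
            lines = [f"class {name}:"]
            args = ", ".join(a for a, _ in attrs)
            lines.append(f"    def __init__(self, {args}):")
            for attr, typ in attrs:
                lines.append(f"        if not isinstance({attr}, {typ.__name__}):")
                lines.append(f"            raise TypeError('{attr} must be {typ.__name__}')")
                lines.append(f"        self.{attr} = {attr}")
            return "\n".join(lines)

        source = "\n\n".join(generate_class(n, a) for n, a in MODEL.items())
        exec(source)                       # the generated classes are now usable
        m = Molecule("lysozyme", 129)      # raises TypeError on wrongly typed input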

  12. An Automat for the Semantic Processing of Structured Information

    OpenAIRE

    Leiva-Mederos, Amed; Senso, José A.; Domínguez-Velasco, Sandor; Hípola, Pedro

    2012-01-01

    Using the database of the PuertoTerm project, an indexing system based on the cognitive model of Brigitte Enders was built. By analyzing the cognitive strategies of three abstractors, we built an automat that serves to simulate human indexing processes. The automat allows the texts integrated in the system to be assessed, evaluated and grouped by means of the Bipartite Spectral Graph Partitioning algorithm, which also permits visualization of the terms and the documents. The system features a...

  13. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    interpretation and for processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance...

  14. Automatic data acquisition of anthropological measurements

    DEFF Research Database (Denmark)

    Lynnerup, N; Lynnerup, O

    1993-01-01

    A computer program in BASIC is presented which enables the input of measurement data from a caliper directly into specific records in a dBASE IV or PARADOX database. The program circumvents the tedious procedure of first recording measurement data manually and then entering the data into a comput...

  15. From Automatic to Adaptive Data Acquisition

    DEFF Research Database (Denmark)

    Chang, Marcus

    2009-01-01

    Sensornets have been used for ecological monitoring for the past decade, yet the main driving force behind these deployments is still computer scientists. The denser sampling and added modalities offered by sensornets could drive these fields in new directions, but not until the domain scientists become familiar with sensornets and use them as any other instrument in their toolbox. We explore three different directions in which sensornets can become easier to deploy, collect data of higher quality, and offer more flexibility, and we postulate that sensornets should be instruments for domain scientists... the flexibility of sensornets and reduce the complexity for the domain scientist, we developed an AI-based controller to act as a proxy between the scientist and the sensornet. This controller is driven by the scientist's requirements for the collected data, and uses adaptive sampling in order to reach these goals.

  16. Robust indexing for automatic data collection

    Energy Technology Data Exchange (ETDEWEB)

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-12-09

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation.

  17. Micro-Analyzer: automatic preprocessing of Affymetrix microarray data.

    Science.gov (United States)

    Guzzi, Pietro Hiram; Cannataro, Mario

    2013-08-01

    A current trend in genomics is the investigation of the cell mechanism using different technologies, in order to explain the relationship among genes, molecular processes and diseases. For instance, the combined use of gene-expression arrays and genomic arrays has been demonstrated as an effective instrument in clinical practice. Consequently, in a single experiment different kind of microarrays may be used, resulting in the production of different types of binary data (images and textual raw data). The analysis of microarray data requires an initial preprocessing phase, that makes raw data suitable for use on existing analysis platforms, such as the TIGR M4 (TM4) Suite. An additional challenge to be faced by emerging data analysis platforms is the ability to treat in a combined way those different microarray formats coupled with clinical data. In fact, resulting integrated data may include both numerical and symbolic data (e.g. gene expression and SNPs regarding molecular data), as well as temporal data (e.g. the response to a drug, time to progression and survival rate), regarding clinical data. Raw data preprocessing is a crucial step in analysis but is often performed in a manual and error prone way using different software tools. Thus novel, platform independent, and possibly open source tools enabling the semi-automatic preprocessing and annotation of different microarray data are needed. The paper presents Micro-Analyzer (Microarray Analyzer), a cross-platform tool for the automatic normalization, summarization and annotation of Affymetrix gene expression and SNP binary data. It represents the evolution of the μ-CS tool, extending the preprocessing to SNP arrays that were not allowed in μ-CS. The Micro-Analyzer is provided as a Java standalone tool and enables users to read, preprocess and analyse binary microarray data (gene expression and SNPs) by invoking TM4 platform. It avoids: (i) the manual invocation of external tools (e.g. the Affymetrix Power

  19. Towards Automatic Processing of Virtual City Models for Simulations

    Science.gov (United States)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

    Especially in the field of numerical simulations, such as flow and acoustic simulations, the interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations were already carried out in practice have been associated with an extremely high manual and therefore uneconomical effort for the processing of models. Using different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) increases the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce unnecessary information for a numerical simulation.

  20. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Wu, Kesheng; Prabhat,; Weber, Gunther H.; Ushizima, Daniela M.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high-energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  1. Automatic beam path analysis of laser wakefield particle acceleration data

    Energy Technology Data Exchange (ETDEWEB)

    Ruebel, Oliver; Wu, Kesheng; Prabhat; Weber, Gunther H; Ushizima, Daniela M; Hamann, Bernd; Bethel, Wes [Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720 (United States); Geddes, Cameron G R; Cormier-Michel, Estelle [LOASIS program of Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720 (United States); Messmer, Peter [Tech-X Corporation, 5621 Arapahoe Avenue Suite A, Boulder, CO 80303 (United States); Hagen, Hans [International Research Training Group ' Visualization of Large and Unstructured Data Sets-Applications in Geospatial Planning, Modeling, and Engineering' , Technische Universitaet Kaiserslautern, Erwin-Schroedinger-Strasse, D-67653 Kaiserslautern (Germany)], E-mail: oruebel@lbl.gov

    2009-01-01

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high-energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the information derived, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  2. Automatic retrieval of bone fracture knowledge using natural language processing.

    Science.gov (United States)

    Do, Bao H; Wu, Andrew S; Maley, Joan; Biswal, Sandip

    2013-08-01

    Natural language processing (NLP) techniques to extract data from unstructured text into formal computer representations are valuable for creating robust, scalable methods to mine data in medical documents and radiology reports. As voice recognition (VR) becomes more prevalent in radiology practice, there is opportunity for implementing NLP in real time for decision-support applications such as context-aware information retrieval. For example, as the radiologist dictates a report, an NLP algorithm can extract concepts from the text and retrieve relevant classification or diagnosis criteria or calculate disease probability. NLP can work in parallel with VR to potentially facilitate evidence-based reporting (for example, automatically retrieving the Bosniak classification when the radiologist describes a kidney cyst). For these reasons, we developed and validated an NLP system which extracts fracture and anatomy concepts from unstructured text and retrieves relevant bone fracture knowledge. We implement our NLP in an HTML5 web application to demonstrate a proof-of-concept feedback NLP system which retrieves bone fracture knowledge in real time. PMID:23053906
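
    A minimal, rule-based sketch of the concept-extraction step is shown below; the term lists and matching logic are illustrative stand-ins, not the validated NLP system described by the authors.

        import re

        # illustrative vocabularies only
        FRACTURE_TERMS = ["comminuted", "spiral", "greenstick", "avulsion", "fracture"]
        ANATOMY_TERMS = ["radius", "ulna", "tibia", "femur", "humerus", "scaphoid"]

        def extract_concepts(report):
            text = report.lower()
            def found(terms):
                return [t for t in terms if re.search(r"\b" + t + r"\b", text)]
            return {"fracture": found(FRACTURE_TERMS), "anatomy": found(ANATOMY_TERMS)}

        print(extract_concepts("Spiral fracture of the distal tibia is noted."))
        # {'fracture': ['spiral', 'fracture'], 'anatomy': ['tibia']}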

  3. An Automatic Development Process for Integrated Modular Avionics Software

    Directory of Open Access Journals (Sweden)

    Ying Wang

    2013-05-01

    Full Text Available With the ever-growing avionics functions, the modern avionics architecture is evolving from the traditional federated architecture to Integrated Modular Avionics (IMA). ARINC653 is a major industry standard that supports the partitioning concept introduced in IMA to achieve security isolation between avionics functions with different criticalities. To decrease the complexity and improve the reliability of the design and implementation of IMA-based avionics software, this paper proposes an automatic development process based on the Architecture Analysis & Design Language. An automatic model transformation approach from domain-specific models to platform-specific ARINC653 models and safety-critical ARINC653-compliant code generation technology are respectively presented during this process. A simplified multi-task flight application as a case study with preliminary experimental results is given to show the validity of this process.

  4. Semiautomatic and Automatic Cooperative Inversion of Seismic and Magnetotelluric Data

    Science.gov (United States)

    Le, Cuong V. A.; Harris, Brett D.; Pethick, Andrew M.; Takam Takougang, Eric M.; Howe, Brendan

    2016-09-01

    Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity model and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for prior model 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic, electrical logs and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion for an iterative practical and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we hold confidence that significant and practical advances in this direction have been accomplished.

  5. SimWorld – Automatic Generation of realistic Landscape models for Real Time Simulation Environments – a Remote Sensing and GIS-Data based Processing Chain

    OpenAIRE

    Sparwasser, Nils; Stöbe, Markus; Friedl, Hartmut; Krauß, Thomas; Meisner, Robert

    2007-01-01

    The interdisciplinary project “SimWorld” - initiated by the German Aerospace Center (DLR) - aims to improve and to facilitate the generation of virtual landscapes for driving simulators. It integrates the expertise of different research institutes working in the field of car simulation and remote sensing technology. SimWorld will provide detailed virtual copies of the real world derived from air- and satellite-borne remote sensing data, using automated geo-scientific analysis techniques for m...

  6. Randomized algorithms in automatic control and data mining

    CERN Document Server

    Granichin, Oleg; Toledano-Kitai, Dvora

    2015-01-01

    In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces the readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book guarantee that the computational complexity of classical algorithms and the conservativeness of standard robust control techniques will be reduced. It is shown that when a problem requires "brute force" in selecting among options, algorithms based on random selection of alternatives offer good results with certain probability for a restricted time and significantly reduce the volume of operations.

  7. Automatic Discovery of Non-Compositional Compounds in Parallel Data

    CERN Document Server

    Melamed, I D

    1997-01-01

    Automatic segmentation of text into minimal content-bearing units is an unsolved problem even for languages like English. Spaces between words offer an easy first approximation, but this approximation is not good enough for machine translation (MT), where many word sequences are not translated word-for-word. This paper presents an efficient automatic method for discovering sequences of words that are translated as a unit. The method proceeds by comparing pairs of statistical translation models induced from parallel texts in two languages. It can discover hundreds of non-compositional compounds on each iteration, and constructs longer compounds out of shorter ones. Objective evaluation on a simple machine translation task has shown the method's potential to improve the quality of MT output. The method makes few assumptions about the data, so it can be applied to parallel data other than parallel texts, such as word spellings and pronunciations.

  8. Automatic and controlled processing and the Broad Autism Phenotype.

    Science.gov (United States)

    Camodeca, Amy; Voelker, Sylvia

    2016-01-30

    Research related to verbal fluency in the Broad Autism Phenotype (BAP) is limited and dated, but generally suggests intact abilities in the context of weaknesses in other areas of executive function (Hughes et al., 1999; Wong et al., 2006; Delorme et al., 2007). Controlled processing, the generation of search strategies after initial, automated responses are exhausted (Spat, 2013), has yet to be investigated in the BAP, and may be evidenced in verbal fluency tasks. One hundred twenty-nine participants completed the Delis-Kaplan Executive Function System Verbal Fluency test (D-KEFS; Delis et al., 2001) and the Broad Autism Phenotype Questionnaire (BAPQ; Hurley et al., 2007). The BAP group (n=53) produced significantly fewer total words during the 2nd 15" interval compared to the Non-BAP (n=76) group. Partial correlations indicated similar relations between verbal fluency variables for each group. Regression analyses predicting 2nd 15" interval scores suggested differentiation between controlled and automatic processing skills in both groups. Results suggest adequate automatic processing, but slowed development of controlled processing strategies in the BAP, and provide evidence for similar underlying cognitive constructs for both groups. Controlled processing was predictive of Block Design score for Non-BAP participants, and was predictive of Pragmatic Language score on the BAPQ for BAP participants. These results are similar to past research related to strengths and weaknesses in the BAP, respectively, and suggest that controlled processing strategy use may be required in instances of weak lower-level skills. PMID:26652842

  9. Towards automatic building of continuous and discrete process simulator

    International Nuclear Information System (INIS)

    The problem to be solved is the simulation of essentially continuous processes that involve a limited number of events leading to discontinuities. The NEPTUNIX simulation package solves this problem in the following way: a description of the process model is made using a non-procedural language; the model is then analysed and, if it is found correct, NEPTUNIX automatically generates the corresponding simulator. This simulator is efficient and transportable. The model description and other compiler outputs build up a complete documentation of the model, and this documentation is also fundamental for easy and efficient operation of the simulator.

  10. Automatic Defect Detection in X-Ray Images Using Image Data Fusion

    Institute of Scientific and Technical Information of China (English)

    TIAN Yuan; DU Dong; CAI Guorui; WANG Li; ZHANG Hua

    2006-01-01

    Automatic defect detection in X-ray images is currently a focus of much research at home and abroad. The technology requires computerized image processing, image analysis, and pattern recognition. This paper describes an image processing method for automatic defect detection using image data fusion which synthesizes several methods including edge extraction, wave profile analyses, segmentation with dynamic threshold, and weld district extraction. Test results show that defects that induce an abrupt change over a predefined extent of the image intensity can be segmented regardless of the number, location, shape, or size. Thus, the method is more robust and practical than the current methods using only one method.
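
    One ingredient named above, segmentation with a dynamic threshold, can be sketched as a locally adaptive test: a pixel is flagged when it departs from the local mean intensity by more than k local standard deviations. The window size and k are arbitrary example values.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def dynamic_threshold(image, window=31, k=3.0):
            img = image.astype(float)
            mean = uniform_filter(img, window)
            sq_mean = uniform_filter(img ** 2, window)
            std = np.sqrt(np.maximum(sq_mean - mean ** 2, 1e-12))
            return np.abs(img - mean) > k * std       # boolean mask of candidate defects

        # mask = dynamic_threshold(weld_image)        # weld_image: 2D numpy array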

  11. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    OpenAIRE

    Chuqing Cao; Ying Sun

    2014-01-01

    Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data....

  12. Real-time wireless acquisition of process data

    OpenAIRE

    Zhang, Ye

    2011-01-01

    This study discusses a novel method called automatic process measurement, which is based on the idea of mining process data from workflow logs. We improve the process mining technique by using Bluetooth wireless technology to do real-time acquisition of process data. The automatic measurement system is capable of collecting process data of elderly people's daily process as well as nursing personnel's behavior in the open healthcare. Similarly, retail and logistics processes can be measured wi...

  13. Automatic Multimedia Creation Enriched with Dynamic Conceptual Data

    Directory of Open Access Journals (Sweden)

    Angel Martín

    2012-12-01

    Full Text Available There is a growing gap between multimedia production and context-centric multimedia services. The main problem is the under-exploitation of the content creation design. The idea is to support dynamic content generation adapted to the user or display profile. Our work is an implementation of a web platform for automatic generation of multimedia presentations based on the SMIL (Synchronized Multimedia Integration Language) standard. The system is able to produce rich media with dynamic multimedia content retrieved automatically from different content databases matching the semantic context. For this purpose, we extend the standard interpretation of SMIL tags in order to accomplish a semantic translation of multimedia objects into database queries. This permits services to benefit from the production process to create customized content enhanced with real-time information fed from databases. The described system has been successfully deployed to create advanced context-centric weather forecasts.

  14. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    International Nuclear Information System (INIS)

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same
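
    A simplified stand-in for the idea (the Bayesian network and EM procedure are replaced here by a generic iterative imputer and an off-the-shelf classifier) might look as follows; the toy catalogue values and class labels are invented.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.pipeline import make_pipeline

        X = np.array([[1.2, np.nan, 0.3],      # rows: objects, columns: catalogue features
                      [0.9, 5.1, np.nan],
                      [1.1, 4.8, 0.4],
                      [2.3, np.nan, 1.1]])
        y = np.array([0, 0, 0, 1])             # e.g. variable-star classes

        model = make_pipeline(IterativeImputer(max_iter=10), RandomForestClassifier())
        model.fit(X, y)
        print(model.predict([[1.0, np.nan, 0.35]]))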

  15. Automatic Identification And Data Collection Via Barcode Laser Scanning.

    Science.gov (United States)

    Jacobeus, Michel

    1986-07-01

    How to earn over 100 million a year by investing 40 million? No, this is not the latest Wall Street "tip" but the cost savings obtained by the U.S. Department of Defense. 2% savings on annual turnover, claim supermarkets! Millions of dollars saved, report automotive companies! These are not daydreams, but tangible results measured by users after implementing Automatic Identification and Data Collection systems based on bar codes. To paraphrase the famous sentence "I think, thus I am", with AI/ADC systems "You know, thus you are". Indeed, in today's world, immediate, accurate and precise information is a vital management need for companies' growth and survival. AI/ADC techniques fulfill these objectives by supplying the right information automatically and without any delay or alteration.

  16. DMET-Analyzer: automatic analysis of Affymetrix DMET Data

    Directory of Open Access Journals (Sweden)

    Guzzi Pietro

    2012-10-01

    Full Text Available Abstract Background Clinical Bioinformatics is currently growing and is based on the integration of clinical and omics data aiming at the development of personalized medicine. Thus the introduction of novel technologies able to investigate the relationship among clinical states and biological machineries may help the development of this field. For instance the Affymetrix DMET platform (drug metabolism enzymes and transporters) is able to study the relationship among the variation of the genome of patients and drug metabolism, detecting SNPs (Single Nucleotide Polymorphism) on genes related to drug metabolism. This may allow for instance to find genetic variants in patients which present different drug responses, in pharmacogenomics and clinical studies. Despite this, there is currently a lack in the development of open-source algorithms and tools for the analysis of DMET data. Existing software tools for DMET data generally allow only the preprocessing of binary data (e.g. the DMET-Console provided by Affymetrix) and simple data analysis operations, but do not allow to test the association of the presence of SNPs with the response to drugs. Results We developed DMET-Analyzer, a tool for the automatic association analysis among the variation of the patient genomes and the clinical conditions of patients, i.e. the different response to drugs. The proposed system allows: (i) to automatize the workflow of analysis of DMET-SNP data avoiding the use of multiple tools; (ii) the automatic annotation of DMET-SNP data and the search in existing databases of SNPs (e.g. dbSNP); (iii) the association of SNPs with pathways through the search in PharmGKB, a major knowledge base for pharmacogenomic studies. DMET-Analyzer has a simple graphical user interface that allows users (doctors/biologists) to upload and analyse DMET files produced by the Affymetrix DMET-Console in an interactive way. The effectiveness and easy use of DMET Analyzer is demonstrated through different

  17. Automatic humidification system to support the assessment of food drying processes

    Science.gov (United States)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

    This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows creating and improving control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory, where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build experimental curves.

  18. Automatic Boat Identification System for VIIRS Low Light Imaging Data

    Directory of Open Access Journals (Sweden)

    Christopher D. Elvidge

    2015-03-01

    Full Text Available The ability for satellite sensors to detect lit fishing boats has been known since the 1970s. However, the use of the observations has been limited by the lack of an automatic algorithm for reporting the location and brightness of offshore lighting features arising from boats. An examination of lit fishing boat features in Visible Infrared Imaging Radiometer Suite (VIIRS) day/night band (DNB) data indicates that the features are essentially spikes. We have developed a set of algorithms for automatic detection of spikes and characterization of the sharpness of spike features. A spike detection algorithm generates a list of candidate boat detections. A second algorithm measures the height of the spikes for the discard of ionospheric energetic particle detections and to rate boat detections as either strong or weak. A sharpness index is used to label boat detections that appear blurry due to the scattering of light by clouds. The candidate spikes are then filtered to remove features on land and gas flares. A validation study conducted using analyst-selected boat detections found the automatic algorithm detected 99.3% of the reference pixel set. VIIRS boat detection data can provide fishery agencies with up-to-date information on fishing boat activity and changes in this activity in response to new regulations and enforcement regimes. The data can provide indications of illegal fishing activity in restricted areas and incursions across Exclusive Economic Zone (EEZ) boundaries. VIIRS boat detections occur widely offshore from East and Southeast Asia, South America and several other regions.
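
    The spike-detection step can be sketched as a comparison against a local median background, as below; the window size and threshold ratio are illustrative and not the values used in the VIIRS boat-detection algorithm.

        import numpy as np
        from scipy.ndimage import median_filter

        def detect_spikes(dnb_radiance, window=5, ratio=10.0):
            background = median_filter(dnb_radiance, size=window)
            height = dnb_radiance - background
            scale = np.median(np.abs(height)) + 1e-12      # robust noise estimate
            return np.argwhere(height > ratio * scale)     # pixel indices of candidate boats

        # candidates = detect_spikes(dnb_image)   # dnb_image: 2D array of DNB radiances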

  19. Automatic Generation of OWL Ontology from XML Data Source

    CERN Document Server

    Yahia, Nora; Ahmed, AbdelWahab

    2012-01-01

    The eXtensible Markup Language (XML) can be used as data exchange format in different domains. It allows different parties to exchange data by providing common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for reasoning. Ontology can provide a semantic representation of domain knowledge which supports efficient reasoning and expressive power. One of the most popular ontology languages is the Web Ontology Language (OWL). It can represent domain knowledge using classes, properties, axioms and instances for the use in a distributed environment such as the World Wide Web. This paper presents a new method for automatic generation of OWL ontology from XML data sources.
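
    A toy sketch of the mapping idea follows: XML elements become OWL classes and nested elements become object properties. A real converter would also have to handle attributes, datatypes and cardinalities; everything here, including the namespace, is a made-up example.

        import xml.etree.ElementTree as ET

        def xml_to_owl(xml_text, ns="http://example.org/onto#"):
            root = ET.fromstring(xml_text)
            classes, properties = set(), set()

            def walk(elem):
                classes.add(elem.tag)
                for child in elem:
                    properties.add((f"has{child.tag.capitalize()}", elem.tag, child.tag))
                    walk(child)

            walk(root)
            lines = [f"<owl:Class rdf:about='{ns}{c}'/>" for c in sorted(classes)]
            for prop, dom, rng in sorted(properties):
                lines.append(f"<owl:ObjectProperty rdf:about='{ns}{prop}'>"
                             f"<rdfs:domain rdf:resource='{ns}{dom}'/>"
                             f"<rdfs:range rdf:resource='{ns}{rng}'/></owl:ObjectProperty>")
            return "\n".join(lines)

        print(xml_to_owl("<library><book><author/></book></library>"))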

  20. Enhancement of the automatic ultrasonic signal processing system using digital technology

    International Nuclear Information System (INIS)

    The objective of this study is to develop an automatic ultrasonic signal processing system which can be used in the inspection equipment to assess the integrity of the reactor vessel by enhancing the performance of the ultrasonic signal processing system. The main activities of this study were divided into three categories: the development of the circuits for generating the ultrasonic signal and receiving the signal from the inspection equipment, the development of the signal processing algorithm and the H/W of the data processing system, and the development of the specification for application programs and system S/W for the analysis and evaluation computer. The results of the main activities are as follows: 1) the design of the ultrasonic detector and the automatic ultrasonic signal processing system based on an investigation of the state-of-the-art technology inside and outside the country; 2) the development of the H/W and S/W of the data processing system based on these results. In particular, the H/W of the data processing system, which has the advantages of both digital and analog controls through real-time digital signal processing, was developed using a DSP which can process the digital signal in real time, and not only the firmware of the data processing system for the peripherals but also the test algorithm for the calibration specimen was developed. The application programs and the system S/W of the analysis/evaluation computer were developed. The developed equipment was verified by a performance test. Based on the developed prototype of the automatic ultrasonic signal processing system, localization of the overall ultrasonic inspection equipment for nuclear industries can be expected through further studies on establishing the H/W for real applications and developing the S/W specification of the analysis computer. (author)

  1. Enhancement of the automatic ultrasonic signal processing system using digital technology

    Energy Technology Data Exchange (ETDEWEB)

    Koo, In Soo; Park, H. Y.; Suh, Y. S.; Kim, D. Hoon; Huh, S.; Sung, S. H.; Jang, G. S.; Ryoo, S. G.; Choi, J. H.; Kim, Y. H.; Lee, J. C.; Kim, D. Hyun; Park, H. J.; Kim, Y. C.; Lee, J. P.; Park, C. H.; Kim, M. S

    1999-12-01

    The objective of this study is to develop an automatic ultrasonic signal processing system which can be used in the inspection equipment to assess the integrity of the reactor vessel by enhancing the performance of the ultrasonic signal processing system. The main activities of this study were divided into three categories: the development of the circuits for generating the ultrasonic signal and receiving the signal from the inspection equipment, the development of the signal processing algorithm and the H/W of the data processing system, and the development of the specification for application programs and system S/W for the analysis and evaluation computer. The results of the main activities are as follows: 1) the design of the ultrasonic detector and the automatic ultrasonic signal processing system based on an investigation of the state-of-the-art technology inside and outside the country; 2) the development of the H/W and S/W of the data processing system based on these results. In particular, the H/W of the data processing system, which has the advantages of both digital and analog controls through real-time digital signal processing, was developed using a DSP which can process the digital signal in real time, and not only the firmware of the data processing system for the peripherals but also the test algorithm for the calibration specimen was developed. The application programs and the system S/W of the analysis/evaluation computer were developed. The developed equipment was verified by a performance test. Based on the developed prototype of the automatic ultrasonic signal processing system, localization of the overall ultrasonic inspection equipment for nuclear industries can be expected through further studies on establishing the H/W for real applications and developing the S/W specification of the analysis computer. (author)

  2. Automatic removal of outliers in hydrologic time series and quality control of rainfall data: processing a real-time database of the Local System for Flood Monitoring in Klodzko County, Poland

    Science.gov (United States)

    Mizinski, Bartlomiej; Niedzielski, Tomasz; Kryza, Maciej; Szymanowski, Mariusz

    2013-04-01

    Real-time hydrological forecasting requires the highest quality of both hydrologic and meteorological data collected in a given river basin. Large outliers may lead to inaccurate predictions, with substantial departures between observations and prognoses considered even in short term. Although we need the correctness of both riverflow and rainfall data, they cannot be processed in the same way to produce a filtered output. Indeed, hydrologic time series at a given gauge can be interpolated in time domain after having detected suspicious values, however if no outlier has been detected at the upstream sites. In the case of rainfall data, interpolation is not suitable as we cannot verify the potential outliers at a given site against data from other sites especially in the complex terrain. This is due to the fact that very local convective events may occur, leading to large rainfall peaks at a limited space. Hence, instead of interpolating data, we rather perform a flagging procedure that only ranks outliers according to the likelihood of occurrence. Following the aforementioned assumptions, we have developed a few modules that serve a purpose of a fully automated correction of a database that is updated in real-time every 15 minutes, and the main objective of the work was to produce a high-quality database for a purpose of hydrologic rainfall-runoff modeling and ensemble prediction. The database in question is available courtesy of the County Office in Kłodzko (SW Poland), the institution which owns and maintains the Local System for Flood Monitoring in Kłodzko County. The dedicated prediction system, known as HydroProg, is now being built at the University of Wrocław (Poland). As the entire prediction system, the correction modules work automatically in real time and are developed in R language. They are plugged in to a larger IT infrastructure. Hydrologic time series, which are water levels recorded every 15 minutes at 22 gauges located in Kłodzko County, are
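
    The flag-rather-than-interpolate approach for the rainfall-type series can be sketched with a rolling median and a robust spread estimate; the window length and threshold are assumptions for illustration, not the values used in the HydroProg modules.

        import pandas as pd

        def flag_outliers(series, window=12, k=5.0):
            med = series.rolling(window, center=True, min_periods=1).median()
            mad = (series - med).abs().rolling(window, center=True, min_periods=1).median()
            robust_sigma = 1.4826 * mad + 1e-9          # avoid division by zero
            score = (series - med).abs() / robust_sigma
            return score > k                            # True marks a suspicious sample

        # ts = pd.Series(values, index=timestamps)      # 15-minute gauge or rain readings
        # flags = flag_outliers(ts)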

  3. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan;

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two... group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre...

  4. Automatic Road Pavement Assessment with Image Processing: Review and Comparison

    Directory of Open Access Journals (Sweden)

    Sylvie Chambon

    2011-01-01

    Full Text Available In the field of noninvasive sensing techniques for civil infrastructure monitoring, this paper addresses the problem of crack detection in the surface of French national roads by automatic analysis of optical images. The first contribution is a review of the state of the art of image-processing tools applied to civil engineering. The second contribution is about fine-defect detection in pavement surfaces. The approach is based on a multi-scale extraction and a Markovian segmentation. Third, an evaluation and comparison protocol designed for evaluating this difficult task—road pavement crack detection—is introduced. Finally, the proposed method is validated, analysed, and compared to a detection approach based on morphological tools.

  5. Controlled versus Automatic Processes: Which Is Dominant to Safety? The Moderating Effect of Inhibitory Control

    OpenAIRE

    Yaoshan Xu; Yongjuan Li; Weidong Ding; Fan Lu

    2014-01-01

    This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study...

  6. Automatic Railway Power Line Extraction Using Mobile Laser Scanning Data

    Science.gov (United States)

    Zhang, Shanxin; Wang, Cheng; Yang, Zhuang; Chen, Yiping; Li, Jonathan

    2016-06-01

    Research on power line extraction from mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. Firstly, according to the spatial structure characteristics of the power line and the trajectory, the significant data are segmented piecewise. Then, a self-adaptive spatial region growing method is used to extract power lines parallel with the rails. Finally, PCA (Principal Component Analysis) combined with information entropy theory is used to judge whether a section of the power line is a junction and, if so, which type of junction it belongs to. A least squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on a complicated railway point cloud acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.
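
    The PCA ingredient can be sketched as an eigenvalue test on a section of points: a single wire is strongly line-like (one dominant eigenvalue), whereas a junction area is not. The threshold is an invented example value, and the information-entropy part of the authors' criterion is omitted here.

        import numpy as np

        def linearity(points):
            """points: (N, 3) array of one power-line section."""
            cov = np.cov(points - points.mean(axis=0), rowvar=False)
            ev = np.sort(np.linalg.eigvalsh(cov))[::-1]        # eigenvalues, descending
            return (ev[0] - ev[1]) / (ev[0] + 1e-12)           # close to 1 for a single wire

        def is_junction(points, threshold=0.9):
            return linearity(points) < threshold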

  7. Neural network for automatic analysis of motility data

    DEFF Research Database (Denmark)

    Jakobsen, Erik; Kruse-Andersen, S; Kolberg, Jens Godsk

    1994-01-01

    ...events. Due to great variation in events, this method often fails to detect biologically relevant pressure variations. We have tried to develop a new concept for recognition of pressure events based on a neural network. Pressures were recorded for over 23 hours in 29 normal volunteers by means of a portable data recording system. A number of pressure events and non-events were selected from 9 recordings and used for training the network. The performance of the trained network was then verified on recordings from the remaining 20 volunteers. The accuracy and sensitivity of the two systems were comparable. However, the neural network recognized pressure peaks clearly generated by muscular activity that had escaped detection by the conventional program. In conclusion, we believe that neurocomputing has potential advantages for automatic analysis of gastrointestinal motility data.

  8. Automatic Weissenberg data collection system for time-resolved protein crystallography

    CERN Document Server

    Sakabe, N; Higashi, T; Igarashi, N; Suzuki, M; Watanabe, N; Sasaki, K

    2001-01-01

    A totally new type of fully automatic Weissenberg data-collection system called 'Galaxy' was developed and was installed at the Photon Factory. This automatic data collection system consists of a rotated-inclined focusing monochromator, a screenless Weissenberg type camera, an image reader, an eraser, a cassette transportation mechanism, a control console and a safety and high-speed computer network system linking a control console, data processing computers and data servers. The special characteristics of this system are a Weissenberg camera with a fully cylindrical cassette which can be rotated to exchange a frame, a maximum number of 36 images to be recorded in an IP cassette, and a very high speed IP reader with five reading heads. Since the frame exchange time is only a few seconds, this system is applicable for time-resolved protein crystallography at seconds or minutes of time-scale.

  9. System design of the METC automatic data acquisition and control system

    Energy Technology Data Exchange (ETDEWEB)

    Goff, D. R.; Armstrong, D. L.

    1982-02-01

    A system of computer programs and hardware was developed by the Instrumentation Branch of the Morgantown Energy Technology Center (METC) to provide data acquisition and control features for research projects at the site. The Automatic Data Acquisition and Control System (ADACS) has the capability of servicing up to eight individual projects simultaneously, providing data acquisition, data feedback, and process control where needed. Several novel software features - including a data table driven program, extensive feedback in real time, free format English commands, and high reliability - were incorporated to provide these functions.

  10. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows the model reconstruction directly from 3D shapes and takes the whole building into account.

  11. Sensitometric comparison of E and F dental radiographic films using manual and automatic processing systems

    Directory of Open Access Journals (Sweden)

    Dabaghi A.

    2008-04-01

    Full Text Available Background and Aim: Processing conditions affect the sensitometric properties of X-ray films. In this study, we aimed to evaluate the sensitometric characteristics of InSight (IP), a new F-speed film, in fresh and used processing solutions under dental office conditions and compare them with Ektaspeed Plus (EP). Materials and Methods: In this experimental in vitro study, an aluminium step wedge was used to construct characteristic curves for InSight and Ektaspeed Plus films (Kodak Eastman, Rochester, USA). All films were processed in Champion solution (X-ray Iran, Tehran, Iran) both manually and automatically over a period of six days. Unexposed films of both types were processed manually and automatically to determine base plus fog density. Speed and film contrast were measured according to the ISO definition. Data were analyzed using one-way ANOVA and t-tests with P<0.05 as the level of significance. Results: IP was 20 to 22% faster than EP and was shown to be an F-speed film when processed automatically and an E-F speed film when processed manually. It was also F-speed in fresh solution and E-speed in old solution. IP and EP contrasts were similar in automatic processing, but EP contrast was higher when processed manually. Both EP and IP films had standard values of base plus fog (<0.35), and B+F densities decreased in old solution. Conclusion: Based on the results of this study, InSight is an F-speed film with a speed at least 20% greater than Ektaspeed. In addition, it reduces patient exposure with no damage to image quality.

  12. Prosody's Contribution to Fluency: An Examination of the Theory of Automatic Information Processing

    Science.gov (United States)

    Schrauben, Julie E.

    2010-01-01

    LaBerge and Samuels' (1974) theory of automatic information processing in reading offers a model that explains how and where the processing of information occurs and the degree to which it occurs. These processes depend upon two criteria: accurate word decoding and automatic word recognition. However, LaBerge and…

  13. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2013-08-01

    Full Text Available. Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of the speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realised as distributed, strongly interconnected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention to the spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in the perifoveal area outside the visual focus of attention, while the subjects' attention was concentrated on a concurrent non-linguistic visual dual task in the centre of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found a significant visual MMN, reported here for the first time for unattended lexical stimuli presented perifoveally. The data suggest early automatic lexical processing of visually presented language outside the focus of attention.

  14. Automatic processing of unattended object features by functional connectivity

    Directory of Open Access Journals (Sweden)

    Katja Martina Mayer

    2013-05-01

    Full Text Available. Observers can selectively attend to object features that are relevant for a task. However, unattended task-irrelevant features may still be processed and possibly integrated with the attended features. This study investigated the neural mechanisms for processing both task-relevant (attended) and task-irrelevant (unattended) object features. The Garner paradigm was adapted for functional magnetic resonance imaging (fMRI) to test whether specific brain areas process the conjunction of features or whether multiple interacting areas are involved in this form of feature integration. Observers attended to shape, colour, or non-rigid motion of novel objects while unattended features changed from trial to trial (change blocks) or remained constant (no-change blocks) during a given block. This block manipulation allowed us to measure the extent to which unattended features affected neural responses, which would reflect the extent to which multiple object features are automatically processed. We did not find Garner interference at the behavioural level. However, we designed the experiment to equate performance across block types so that any fMRI results could not be due solely to differences in task difficulty between change and no-change blocks. Attention to specific features localised several areas known to be involved in object processing. No area showed larger responses on change blocks compared to no-change blocks. However, psychophysiological interaction analyses revealed that several functionally localised areas showed significant positive interactions, dependent on block type, with occipito-temporal and frontal areas. Overall, these findings suggest that both regional responses and functional connectivity are crucial for processing multi-featured objects.

  15. Automatic Identification of Antibodies in the Protein Data Bank

    Institute of Scientific and Technical Information of China (English)

    LI Xun; WANG Renxiao

    2009-01-01

    An automatic method has been developed for identifying antibody entries in the Protein Data Bank (PDB). Our method, called KIAb (Keyword-based Identification of Antibodies), parses PDB-format files to search for particular keywords relevant to antibodies and makes a judgment accordingly. Our method identified 780 entries as antibodies in the entire PDB. Among them, 767 entries were confirmed by manual inspection, indicating a high success rate of 98.3%. Our method recovered essentially all of the entries compiled in the Summary of Antibody Crystal Structures (SACS) database. It also identified a number of entries missed by SACS. Our method thus provides a more complete mining of antibody entries in the PDB with a very low false positive rate.
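
    The abstract does not list KIAb's keyword vocabulary or the record types it scans, so the following is only a minimal sketch of a keyword-based PDB-file filter; the keyword list, the record selection and the input path are assumptions made for illustration.

```python
import glob

# Hypothetical keyword list; the actual KIAb vocabulary is not given in the abstract.
ANTIBODY_KEYWORDS = ("ANTIBODY", "IMMUNOGLOBULIN", "FAB FRAGMENT", "FV FRAGMENT")

def looks_like_antibody(pdb_path):
    """Return True if the descriptive records of a PDB-format file mention antibody terms."""
    with open(pdb_path, "r", errors="ignore") as handle:
        for line in handle:
            # Restrict the search to header-style records, as a keyword method would.
            if line.startswith(("HEADER", "TITLE", "COMPND", "KEYWDS")):
                if any(keyword in line.upper() for keyword in ANTIBODY_KEYWORDS):
                    return True
    return False

hits = [path for path in glob.glob("pdb_files/*.pdb") if looks_like_antibody(path)]
print(f"{len(hits)} candidate antibody entries")
```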

  16. Automatic delimitation of microwatershed using SRTM data of the NASA

    Directory of Open Access Journals (Sweden)

    Freddy Aníbal Jumbo Castillo

    2015-12-01

    Full Text Available. The watershed, as the basic territorial unit for the planning and management of water resources, requires proper delimitation of its catchment or drainage area. Faced with this situation, the lack of geographic information on the micro watersheds of the Casacay river hydrographic unit had to be resolved. For this purpose, the research was aimed at the automatic delimitation of micro watersheds using Geographic Information Systems (GIS) techniques and data from the Shuttle Radar Topography Mission (SRTM) at 30 meters spatial resolution. The selected methodology was the Pfafstetter one, with which nine micro watersheds were obtained with their respective codification, allowing continuation of the watershed standardization adopted by Ecuador's Water Secretariat. With the results of the investigation, the watersheds will be updated with more detailed information, promoting the execution of tasks and activities related to the integrated management of the hydrographic unit studied.

  17. Evolutionary synthesis of automatic classification on astroinformatic big data

    Science.gov (United States)

    Kojecky, Lumir; Zelinka, Ivan; Saloun, Petr

    2016-06-01

    This article describes initial experiments using a new approach to the automatic identification of Be and B[e] star spectra in large archives. With the enormous amount of such data it is no longer feasible to analyze them using classical approaches. We introduce an evolutionary synthesis of the classification by means of analytic programming, one of the methods of symbolic regression. With this method, we synthesize the mathematical formulas that best approximate chosen samples of the stellar spectra. The selected category is then the one whose formula has the lowest difference compared to the particular spectrum. The results show that classification of stellar spectra by means of analytic programming is able to identify different shapes of the spectra.

  18. Data processing made simple

    CERN Document Server

    Wooldridge, Susan

    2013-01-01

    Data Processing: Made Simple, Second Edition presents discussions of a number of trends and developments in the world of commercial data processing. The book covers the rapid growth of micro- and mini-computers for both home and office use; word processing and the 'automated office'; the advent of distributed data processing; and the continued growth of database-oriented systems. The text also discusses modern digital computers; fundamental computer concepts; information and data processing requirements of commercial organizations; and the historical perspective of the computer industry. The

  19. Automatic Extraction of Mangrove Vegetation from Optical Satellite Data

    Science.gov (United States)

    Agrawal, Mayank; Sushma Reddy, Devireddy; Prasad, Ram Chandra

    2016-06-01

    Mangroves, the intertidal halophytic vegetation, are one of the most significant and diverse ecosystems in the world. They protect the coast from sea erosion and other natural disasters like tsunamis and cyclones. In view of their increasing destruction and degradation, mapping of this vegetation is a priority. Globally, researchers have mapped mangrove vegetation using visual interpretation, digital classification approaches, or a combination of both (hybrid approaches), using varied spatial and spectral data sets. In the recent past, techniques have been developed to extract this coastal vegetation automatically using varied algorithms. In the current study we delineated mangrove vegetation using LISS III and Landsat 8 data sets for selected locations of the Andaman and Nicobar islands. Towards this we attempted a segmentation method, which characterizes the mangrove vegetation based on tone and texture, and a pixel-based classification method, where the mangroves are identified based on their pixel values. The results obtained from both approaches were validated using maps available for the selected region, and good delineation accuracy was obtained. The main focus of this paper is the simplicity of the methods and the availability of the data on which these methods are applied, as these data (Landsat) are readily available for many regions. Our methods are very flexible and can be applied to any region.
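
    As a minimal illustration of the pixel-based route, the sketch below thresholds NDVI computed from red and near-infrared bands; the band choice and the 0.4 threshold are assumptions rather than the decision rule used in the paper, and a real mangrove map would also need coastal masking and validation against reference maps.

```python
import numpy as np

def ndvi_mangrove_mask(red, nir, ndvi_threshold=0.4):
    """Pixel-based vegetation mask from red and near-infrared reflectance arrays.

    The 0.4 threshold is illustrative only; the paper does not state its decision rule.
    """
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    return ndvi > ndvi_threshold

# Toy 2x2 scene: one strongly vegetated pixel, three non-vegetated pixels.
red = np.array([[0.05, 0.30], [0.25, 0.28]])
nir = np.array([[0.45, 0.32], [0.27, 0.30]])
print(ndvi_mangrove_mask(red, nir))
```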

  20. A Bottom-Up Approach for Automatically Grouping Sensor Data Layers by their Observed Property

    Directory of Open Access Journals (Sweden)

    Steve H.L. Liang

    2013-01-01

    Full Text Available. The Sensor Web is a growing phenomenon where an increasing number of sensors collect data in the physical world, to be made available over the Internet. To help realize the Sensor Web, the Open Geospatial Consortium (OGC) has developed open standards to standardize the communication protocols for sharing sensor data. Spatial Data Infrastructures (SDIs) are systems that have been developed to access, process, and visualize geospatial data from heterogeneous sources, and SDIs can be designed specifically for the Sensor Web. However, there are interoperability problems associated with a lack of standardized naming, even with data collected using the same open standard. The objective of this research is to automatically group similar sensor data layers. We propose a methodology to automatically group similar sensor data layers based on the phenomenon they measure. Our methodology is based on a unique bottom-up approach that uses text processing, approximate string matching, and semantic string matching of data layers. We use WordNet as a lexical database to compute word-pair similarities and derive a set-based dissimilarity function using those scores. Two approaches are taken to group data layers: mapping is defined between all the data layers, and clustering is performed to group similar data layers. We evaluate the results of our methodology.
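
    A minimal sketch of the WordNet word-pair step, assuming NLTK with the WordNet corpus installed (nltk.download("wordnet")); Wu-Palmer similarity is used as the word-pair score and a simple averaged form stands in for the paper's set-based dissimilarity function, which is not reproduced here.

```python
from itertools import product
from nltk.corpus import wordnet as wn

def word_similarity(word_a, word_b):
    """Best Wu-Palmer similarity over all noun synset pairs of two words (0 if none)."""
    pairs = product(wn.synsets(word_a, pos=wn.NOUN), wn.synsets(word_b, pos=wn.NOUN))
    scores = [s1.wup_similarity(s2) or 0.0 for s1, s2 in pairs]
    return max(scores, default=0.0)

def layer_dissimilarity(tokens_a, tokens_b):
    """Set-based dissimilarity between two tokenized layer names (illustrative form)."""
    best = [max(word_similarity(a, b) for b in tokens_b) for a in tokens_a]
    return 1.0 - sum(best) / len(best)

# Layer names would come from OGC service metadata; these tokens are made up.
print(layer_dissimilarity(["air", "temperature"], ["water", "level"]))
```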

  1. Analysis on the Influence of Automatic Station Temperature Data on the Sequence Continuity of Historical Meteorological Data

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The research aimed to study the influence of automatic station data on the sequence continuity of historical meteorological data. [Method] Based on the temperature data measured by the automatic meteorological station and the corresponding manual observation data from January to December 2001, the monthly average, maximum and minimum temperatures at the automatic station were compared with the corresponding manual observation temperature data in the parallel observation peri...

  2. Automatic Registration of Multi-Source Data Using Mutual Information

    Science.gov (United States)

    Parmehr, E. G.; Zhang, C.; Fraser, C. S.

    2012-07-01

    Automatic image registration is a basic step in multi-sensor data integration in remote sensing and photogrammetric applications such as data fusion. The effectiveness of Mutual Information (MI) as a technique for automated multi-sensor image registration has previously been demonstrated for medical and remote sensing applications. In this paper, a new General Weighted MI (GWMI) approach that improves the robustness of MI to local maxima, particularly in the case of registering optical imagery and 3D point clouds, is presented. Two different methods, a Gaussian Mixture Model (GMM) and Kernel Density Estimation, have been used to define the weight function of the joint probability, regardless of the modality of the data being registered. The Expectation Maximization method is then used to estimate the parameters of the GMM, and in order to reduce the cost of computation, a multi-resolution strategy has been used. The performance of the proposed GWMI method for the registration of aerial orthoimagery and LiDAR range and intensity information has been experimentally evaluated and the results obtained are presented.
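
    For orientation, the sketch below computes plain mutual information between two aligned image patches from a joint histogram; the weighted-MI (GWMI) extension, the GMM weight function and the multi-resolution strategy described in the paper are not shown, and the bin count is an assumption.

```python
import numpy as np

def mutual_information(image_a, image_b, bins=32):
    """Shannon mutual information (in nats) of two equally shaped intensity arrays."""
    joint, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(0)
reference = rng.random((64, 64))
shifted = np.roll(reference, 3, axis=1) + 0.05 * rng.random((64, 64))
# MI is highest when the images are perfectly aligned.
print(mutual_information(reference, reference), mutual_information(reference, shifted))
```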

  3. Automatic fault detection on BIPV systems without solar irradiation data

    CERN Document Server

    Leloux, Jonathan; Luna, Alberto; Desportes, Adrien

    2014-01-01

    BIPV systems are small PV generation units spread out over the territory, with very diverse characteristics. This makes a cost-effective procedure for monitoring, fault detection, performance analysis, operation and maintenance difficult. As a result, many problems affecting BIPV systems go undetected. In order to carry out effective automatic fault detection procedures, we need a performance indicator that is reliable and that can be applied to many PV systems at a very low cost. The existing approaches for analyzing the performance of PV systems are often based on the Performance Ratio (PR), whose accuracy depends on good solar irradiation data, which in turn can be very difficult to obtain or cost-prohibitive for the BIPV owner. We present an alternative fault detection procedure based on a performance indicator that can be constructed on the sole basis of the energy production data measured at the BIPV systems. This procedure does not require the input of operating conditions data, such as solar ...
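
    The paper's actual indicator is not specified in the abstract; as an illustrative stand-in for an irradiation-free check, the sketch below compares each system's normalized daily energy to the fleet median, with the 0.7 threshold and the kWh/kWp normalization being assumptions.

```python
import numpy as np

def flag_underperformers(daily_energy_kwh, rated_power_kw, threshold=0.7):
    """Flag systems whose normalized yield falls below a fraction of the fleet median."""
    specific_yield = daily_energy_kwh / rated_power_kw          # kWh per kWp
    fleet_median = np.median(specific_yield)
    return specific_yield < threshold * fleet_median

energy = np.array([12.1, 11.4, 4.2, 12.8])   # one clearly underperforming system
power = np.array([3.0, 2.9, 3.1, 3.2])
print(flag_underperformers(energy, power))
```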

  4. AUTOMATICALLY CONVERTING TABULAR DATA TO RDF: AN ONTOLOGICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Kumar Sharma

    2015-07-01

    Full Text Available. Information residing in relational databases and delimited file systems is inadequate for reuse and sharing over the web. These file systems do not adhere to commonly set principles for maintaining data harmony. For these reasons, the resources suffer from a lack of uniformity, from heterogeneity, and from redundancy throughout the web. Ontologies have been widely used for solving such problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files serve as individual concepts and are grouped into a particular domain, called the domain ontology. Furthermore, this domain ontology is used for capturing CSV data, represented in RDF format while retaining links among files or concepts. Datatype and object properties are automatically detected from header fields. This reduces the user involvement needed to generate mapping files. A detailed analysis has been performed on baseball tabular data, and the result shows a rich set of semantic information.
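
    A minimal sketch of the CSV-to-RDF idea using rdflib; the namespace, the Player class, and the integer-versus-string datatype detection are illustrative choices, not the mapping defined in the paper.

```python
import csv
import io
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/baseball/")      # hypothetical namespace
csv_text = "name,team,home_runs\nBabe Ruth,Yankees,714\n"

graph = Graph()
graph.bind("ex", EX)
for row_number, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
    subject = EX[f"player/{row_number}"]
    graph.add((subject, RDF.type, EX.Player))
    for header, value in row.items():
        # Crude datatype detection: integer-looking values become typed literals.
        literal = Literal(int(value)) if value.isdigit() else Literal(value)
        graph.add((subject, EX[header], literal))

print(graph.serialize(format="turtle"))
```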

  5. AUTOMATIC REGISTRATION OF MULTI-SOURCE DATA USING MUTUAL INFORMATION

    Directory of Open Access Journals (Sweden)

    E. G. Parmehr

    2012-07-01

    Full Text Available. Automatic image registration is a basic step in multi-sensor data integration in remote sensing and photogrammetric applications such as data fusion. The effectiveness of Mutual Information (MI) as a technique for automated multi-sensor image registration has previously been demonstrated for medical and remote sensing applications. In this paper, a new General Weighted MI (GWMI) approach that improves the robustness of MI to local maxima, particularly in the case of registering optical imagery and 3D point clouds, is presented. Two different methods, a Gaussian Mixture Model (GMM) and Kernel Density Estimation, have been used to define the weight function of the joint probability, regardless of the modality of the data being registered. The Expectation Maximization method is then used to estimate the parameters of the GMM, and in order to reduce the cost of computation, a multi-resolution strategy has been used. The performance of the proposed GWMI method for the registration of aerial orthoimagery and LiDAR range and intensity information has been experimentally evaluated and the results obtained are presented.

  6. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, referring to the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data ranges over data mining, data analysis and decision making, drawing statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our lives better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing-system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in the effective management of dynamic and large-scale data, and the efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing data monitoring, data processing and decision-making in a realistic and effective way. Considering a big size of data, variety of data and frequent chan...

  7. Algorithm of automatic generation of technology process and process relations of automotive wiring harnesses

    Institute of Scientific and Technical Information of China (English)

    XU Benzhu; ZHU Jiman; LIU Xiaoping

    2012-01-01

    Identifying each process and its constraint relations from complex wiring harness drawings quickly and accurately is the basis for formulating process routes. According to knowledge of automotive wiring harnesses and the characteristics of wiring harness components, we established a model of the wiring harness graph. We then researched an algorithm for identifying technology processes automatically, and finally we describe the relationships between processes by introducing a constraint matrix, in order to lay a good foundation for harness process planning and production scheduling.

  8. Measuring Service Reliability Using Automatic Vehicle Location Data

    Directory of Open Access Journals (Sweden)

    Zhenliang Ma

    2014-01-01

    Full Text Available. Bus service reliability has become a major concern for both operators and passengers. Buffer time measures are believed to be appropriate for approximating passengers' experienced reliability in the context of departure planning. Two issues with regard to buffer time estimation are addressed, namely, performance disaggregation and capturing passengers' perspectives on reliability. A Gaussian mixture model-based method is applied to disaggregate the performance data. Based on the mixture distribution, a reliability buffer time (RBT) measure is proposed from the passengers' perspective. A set of expected reliability buffer time measures is developed for operators by using combinations of RBTs at different spatial-temporal levels. Average and latest trip duration measures are proposed for passengers, which can be used to choose a service mode and determine the departure time. Using empirical data from the automatic vehicle location system in Brisbane, Australia, the existence of mixture service states is verified and the advantage of the mixture distribution model in fitting the travel time profile is demonstrated. Numerical experiments validate that the proposed reliability measure is capable of quantifying service reliability consistently, while conventional ones may provide inconsistent results. Potential applications for operators and passengers are also illustrated, including reliability improvement and trip planning.
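
    The sketch below combines the two ingredients named in the abstract, a Gaussian mixture fit of travel times and a percentile-style buffer time; the 95th-percentile-minus-median form and the synthetic data are assumptions, not the exact RBT definition used in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic travel times: a punctual service state and a delayed one (minutes).
travel_times = np.concatenate([rng.normal(22, 1.5, 400), rng.normal(30, 3.0, 100)])

# Disaggregate the performance data into latent service states.
mixture = GaussianMixture(n_components=2, random_state=0)
mixture.fit(travel_times.reshape(-1, 1))
print("state means (min):", mixture.means_.ravel())

# Illustrative buffer time: extra margin above the typical trip duration.
buffer_time = np.percentile(travel_times, 95) - np.median(travel_times)
print("reliability buffer time (min):", round(float(buffer_time), 2))
```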

  9. An Automatic Cycle-Slip Processing Method and Its Precision Analysis

    Institute of Scientific and Technical Information of China (English)

    ZHENG Zuoya; LU Xiushan

    2006-01-01

    On the basis of analyzing and researching the current algorithms for cycle-slip detection and correction, a new method of cycle-slip detection and correction is put forward in this paper: a reasonable cycle-slip detection condition and algorithm, with a corresponding program, COMPRE (COMpass PRE-processing), to detect and correct cycle slips automatically. Through comparison with the GIPSY and GAMIT software, it is shown that this method is effective and credible for cycle-slip detection and correction in GPS data pre-processing.

  10. An Automatic Number Plate Recognition System under Image Processing

    Directory of Open Access Journals (Sweden)

    Sarbjit Kaur

    2016-03-01

    Full Text Available. An Automatic Number Plate Recognition (ANPR) system is an application of computer vision and image processing technology that takes photographs of vehicles as input images and, by extracting the number plate from the whole vehicle image, displays the number plate information as text. The ANPR system mainly consists of 4 phases: acquisition of the vehicle image and pre-processing, extraction of the number plate area, character segmentation, and character recognition. The overall accuracy and efficiency of the whole ANPR system depend on the number plate extraction phase, as the character segmentation and character recognition phases also depend on its output. Further, the accuracy of the number plate extraction phase depends on the quality of the captured vehicle image: the higher the quality of the captured input image, the better the chances of properly extracting the vehicle number plate area. The existing methods of ANPR work well for dark and bright/light category images, but they do not work well for low-contrast, blurred and noisy images, and the detection of the exact number plate area using the existing ANPR approach is not successful even after applying existing filtering and enhancement techniques for these types of images. Due to wrong extraction of the number plate area, character segmentation and character recognition also fail in this case with the existing method. To overcome these drawbacks, an efficient approach for ANPR is proposed in which the input vehicle image is first pre-processed by iterative bilateral filtering and adaptive histogram equalization, and the number plate is extracted from the pre-processed vehicle image using morphological operations, image subtraction, image binarization/thresholding, Sobel vertical edge detection and bounding box analysis. Sometimes the extracted plate area also contains noise, bolts, frames, etc., so the extracted plate area is enhanced by using morphological operations to improve the quality of
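
    A sketch of several of the named pre-processing and plate-candidate steps using OpenCV; the synthetic input image, kernel sizes, iteration count and aspect-ratio limits are assumptions, and the image-subtraction step and the recognition stages are not shown.

```python
import cv2
import numpy as np

# Synthetic stand-in for a captured vehicle image: grey scene with a bright
# plate-like rectangle containing dark vertical strokes (characters).
image = np.full((240, 320), 90, dtype=np.uint8)
image[150:190, 100:220] = 230
image[155:185, 110:215:15] = 40

# Iterative bilateral filtering and adaptive histogram equalization.
gray = image
for _ in range(3):
    gray = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
gray = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)

# Sobel vertical edge detection, Otsu binarization and morphological closing.
edges = cv2.Sobel(gray, cv2.CV_8U, 1, 0, ksize=3)
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Bounding-box analysis: keep wide, plate-shaped regions as candidates.
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = []
for contour in contours:
    x, y, w, h = cv2.boundingRect(contour)
    if h > 0 and 2.0 < w / h < 6.0:
        candidates.append((x, y, w, h))
print(f"{len(candidates)} plate-like candidate regions")
```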

  11. Automatic Inspection of Nuclear-Reactor Tubes During Production and Processing, Using Eddy-Current Methods

    International Nuclear Information System (INIS)

    The possibilities of automatic and semi-automatic inspection of tubes using eddy-current methods are described. The paper deals in particular with modern processes, compared to the use of other non-destructive methods. The essence of the paper is that the methods discussed are ideal for objective automatic inspection. Not only are the known methods described, but certain new methods and their application to the detection of flaws in reactor tubes are discussed. (author)

  12. Controlled versus automatic processes: which is dominant to safety? The moderating effect of inhibitory control.

    Science.gov (United States)

    Xu, Yaoshan; Li, Yongjuan; Ding, Weidong; Lu, Fan

    2014-01-01

    This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with lower level of inhibitory control are influenced more by automatic association, whereas those of employees with higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end.

  13. Controlled versus automatic processes: which is dominant to safety? The moderating effect of inhibitory control.

    Directory of Open Access Journals (Sweden)

    Yaoshan Xu

    Full Text Available. This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with a lower level of inhibitory control are influenced more by automatic association, whereas those of employees with a higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end.

  14. Processing NOAA Spectroradiometric Data

    OpenAIRE

    Broenkow, William W.; Greene, Nancy, T.; Feinholz, Michael, E.

    1993-01-01

    This report outlines the NOAA spectroradiometer data processing system implemented by the MLML_DBASE programs. This is done by presenting the algorithms and graphs showing the effects of each step in the algorithms. [PDF contains 32 pages]

  15. Information Processing - Administrative Data Processing

    Science.gov (United States)

    Bubenko, Janis

    A three semester, 60-credit course package in the topic of Administrative Data Processing (ADP), offered in 1966 at Stockholm University (SU) and the Royal Institute of Technology (KTH) is described. The package had an information systems engineering orientation. The first semester focused on datalogical topics, while the second semester focused on the infological topics. The third semester aimed to deepen the students’ knowledge in different parts of ADP and at writing a bachelor thesis. The concluding section of this paper discusses various aspects of the department’s first course effort. The course package led to a concretisation of our discipline and gave our discipline an identity. Our education seemed modern, “just in time”, and well adapted to practical needs. The course package formed the first concrete activity of a group of young teachers and researchers. In a forty-year perspective, these people have further developed the department and the topic to an internationally well-reputed body of knowledge and research. The department has produced more than thirty professors and more than one hundred doctoral degrees.

  16. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data

    Science.gov (United States)

    Li, Lin; Li, Dalin; Zhu, Haihong; Li, You

    2016-10-01

    Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. Two components of each individual tree, a trunk and a crown, can be extracted by the dual growing method. This method consists of coarse classification, through which most artifacts are removed; the automatic selection of appropriate seeds for individual trees, by which the common manual initial setting is avoided; a dual growing process that separates one tree from others by circumscribing a trunk within an adaptive growing radius and segmenting a crown in constrained growing regions; and a refining process that extracts a single trunk from the other interlaced objects. The method is verified on two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.

  17. Automatic processing of induced events in the geothermal reservoirs Landau and Insheim, Germany

    Science.gov (United States)

    Olbert, Kai; Küperkoch, Ludger; Meier, Thomas

    2016-04-01

    Induced events can pose a risk to local infrastructure that needs to be understood and evaluated. They also represent an opportunity to learn more about reservoir behavior and characteristics. Prior to the analysis, the waveform data must be processed consistently and accurately to avoid erroneous interpretations. In the framework of the MAGS2 project, an automatic off-line event detection and a phase onset time determination algorithm are applied to induced seismic events in the geothermal systems in Landau and Insheim, Germany. The off-line detection algorithm is based on a cross-correlation of continuous data taken from the local seismic network with master events. It distinguishes events between different reservoirs and within the individual reservoirs. Furthermore, it provides a location and magnitude estimation. Data from 2007 to 2014 are processed and compared with other detections using the SeisComp3 cross-correlation detector and an STA/LTA detector. The detected events are analyzed concerning spatial or temporal clustering, and the number of events is compared to the existing detection lists. The automatic phase picking algorithm combines an AR-AIC approach with a cost function to find precise P1- and S1-phase onset times which can be used for localization and tomography studies. 800 induced events are processed, yielding 5000 P1- and 6000 S1-picks. The phase onset times show high precision, with mean residuals relative to manual phase picks of 0 s (P1) to 0.04 s (S1) and standard deviations below ±0.05 s. The resulting automatic picks are used to relocate a selected number of events to evaluate the influence on location precision.
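
    For reference, the sketch below is a pure-NumPy version of an STA/LTA detector of the kind mentioned as a comparison method; the window lengths, trigger threshold and synthetic trace are illustrative only, and the cross-correlation master-event detector and AR-AIC picker are not shown.

```python
import numpy as np

def sta_lta(trace, n_sta, n_lta):
    """Ratio of short-term to long-term average of the squared signal."""
    energy = trace.astype(float) ** 2
    sta = np.convolve(energy, np.ones(n_sta) / n_sta, mode="same")
    lta = np.convolve(energy, np.ones(n_lta) / n_lta, mode="same")
    return sta / np.maximum(lta, 1e-12)

rng = np.random.default_rng(2)
trace = rng.normal(0, 1, 2000)
trace[1200:1260] += 8 * np.sin(np.linspace(0, 20 * np.pi, 60))   # synthetic event
ratio = sta_lta(trace, n_sta=20, n_lta=200)
print("first trigger samples:", np.flatnonzero(ratio > 4.0)[:5])
```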

  18. Are Automatic Imitation and Spatial Compatibility Mediated by Different Processes?

    Science.gov (United States)

    Cooper, Richard P.; Catmur, Caroline; Heyes, Cecilia

    2013-01-01

    Automatic imitation or "imitative compatibility" is thought to be mediated by the mirror neuron system and to be a laboratory model of the motor mimicry that occurs spontaneously in naturalistic social interaction. Imitative compatibility and spatial compatibility effects are known to depend on different stimulus dimensions--body…

  19. Automatic Mapping Of Large Signal Processing Systems To A Parallel Machine

    Science.gov (United States)

    Printz, Harry; Kung, H. T.; Mummert, Todd; Scherer, Paul M.

    1989-12-01

    Since the spring of 1988, Carnegie Mellon University and the Naval Air Development Center have been working together to implement several large signal processing systems on the Warp parallel computer. In the course of this work, we have developed a prototype of a software tool that can automatically and efficiently map signal processing systems to distributed-memory parallel machines, such as Warp. We have used this tool to produce Warp implementations of small test systems. The automatically generated programs compare favorably with hand-crafted code. We believe this tool will be a significant aid in the creation of high speed signal processing systems. We assume that signal processing systems have the following characteristics: • They can be described by directed graphs of computational tasks; these graphs may contain thousands of task vertices. • Some tasks can be parallelized in a systolic or data-partitioned manner, while others cannot be parallelized at all. • The side effects of each task, if any, are limited to changes in local variables. • Each task has a data-independent execution time bound, which may be expressed as a function of the way it is parallelized, and the number of processors it is mapped to. In this paper we describe techniques to automatically map such systems to Warp-like parallel machines. We identify and address key issues in gracefully combining different parallel programming styles, in allocating processor, memory and communication bandwidth, and in generating and scheduling efficient parallel code. When iWarp, the VLSI version of the Warp machine, becomes available in 1990, we will extend this tool to generate efficient code for very large applications, which may require as many as 3000 iWarp processors, with an aggregate peak performance of 60 gigaflops.

  20. Automatic evaluation and data generation for analytical chemistry instrumental analysis exercises

    Directory of Open Access Journals (Sweden)

    Arsenio Muñoz de la Peña

    2014-01-01

    Full Text Available. In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for the design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and whose results can be evaluated immediately. The proposed system provides each student with a different set of experimental data generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to provide students with a set of practical problems to complement their regular laboratory work, along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS), an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first exercise involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests, a comparison test of the mean and a paired t-test.
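
    A sketch of the kind of computation the first exercise asks for: a calibration-line fit and common quality parameters. The data are invented, and the 3.3·s/m and 10·s/m detection and quantification limits are the usual textbook forms, not necessarily the ones implemented in Goodle GMS (which uses Matlab rather than Python).

```python
import numpy as np

concentration = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # e.g. mg/L standards
signal = np.array([0.02, 0.21, 0.39, 0.82, 1.60])            # instrument response

# Least-squares calibration line and residual standard deviation.
slope, intercept = np.polyfit(concentration, signal, 1)
predicted = slope * concentration + intercept
residual_sd = np.sqrt(np.sum((signal - predicted) ** 2) / (len(signal) - 2))
r_squared = 1 - np.sum((signal - predicted) ** 2) / np.sum((signal - signal.mean()) ** 2)

print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r_squared:.4f}")
print(f"LOD={3.3 * residual_sd / slope:.3f}, LOQ={10 * residual_sd / slope:.3f}")
```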

  1. The Masked Semantic Priming Effect Is Task Dependent: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2015-01-01

    Semantic priming effects are popularly explained in terms of an automatic spreading activation process, according to which the activation of a node in a semantic network spreads automatically to interconnected nodes, preactivating a semantically related word. It is expected from this account that semantic priming effects should be routinely…

  2. Automatic physiological waveform processing for FMRI noise correction and analysis.

    Directory of Open Access Journals (Sweden)

    Daniel J Kelley

    Full Text Available. Functional MRI resting state and connectivity studies of the brain focus on neural fluctuations at low frequencies which share power with physiological fluctuations originating from the lungs and heart. Due to the lack of automated software to process physiological signals collected at high magnetic fields, a gap exists in the processing pathway between the acquisition of physiological data and its use in fMRI software for both physiological noise correction and functional analyses of brain activation and connectivity. To fill this gap, we developed an open-source physiological signal processing program, called PhysioNoise, in the Python language. We tested its automated processing algorithms and dynamic signal visualization on resting monkey cardiac and respiratory waveforms. PhysioNoise consistently identifies physiological fluctuations for fMRI noise correction and also generates covariates for subsequent analyses of brain activation and connectivity.
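
    As a small illustration of the underlying task, the sketch below detects beats in a synthetic cardiac trace with SciPy; PhysioNoise's own algorithms are not reproduced here, and the sampling rate, waveform shape and peak-detection parameters are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # assumed sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
heart_rate_hz = 1.2
cardiac = np.sin(2 * np.pi * heart_rate_hz * t) ** 21        # sharp periodic pulses
cardiac += 0.05 * np.random.default_rng(3).normal(size=t.size)

# Identify the cardiac fluctuations that would feed a noise-correction regressor.
peaks, _ = find_peaks(cardiac, height=0.5, distance=int(0.5 * fs))
print(f"{peaks.size} beats detected, mean interval {np.mean(np.diff(peaks)) / fs:.2f} s")
```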

  3. Study on the automatic process of line heating for pillow shape plate

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper focuses on the process of forming pillow shape plates by the line heating technique, which is widely applied in the production of ship hulls. Based on the analysis of the primary parameters and experimental data of the line heating process, the amount of local contraction generated by line heating is illustrated. Then, combined with the computational result of the local deformation determined by shell plate development, an optimization method for the line heating parameters is studied. This prediction system may provide rational arrangements of heating lines and technical parameters for the process. By integrating the prediction system into the line heating robot for pillow shape plates, the automatic process of line heating for pillow shape plates can be achieved.

  4. Experience with automatic orientation from different data sets

    DEFF Research Database (Denmark)

    Potucková, Marketa

    2003-01-01

    Automatic orientation of aerial images based on existing databases was a topic of the OEEPE research project running in 1998 and 1999. Different approaches for solving this task have been published to date. The method developed at Aalborg University uses the existing topographic database and or...

  5. Automatic Building Extraction From LIDAR Data Covering Complex Urban Scenes

    Science.gov (United States)

    Awrangjeb, M.; Lu, G.; Fraser, C.

    2014-08-01

    This paper presents a new method for the segmentation of LIDAR point cloud data for automatic building extraction. Using the ground height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points. Points on walls are removed from the set of non-ground points by applying the following two approaches. First, if a plane fitted at a point and its neighbourhood is perpendicular to a fictitious horizontal plane, then this point is designated as a wall point. Second, when LIDAR points are projected on a dense grid, points within a narrow area close to an imaginary vertical line on the wall should fall into the same grid cell; if three or more points fall into the same cell, then the intermediate points are removed as wall points. The remaining non-ground points are then divided into clusters based on height and local neighbourhood. One or more clusters are initialised based on the maximum height of the points and then each cluster is extended by applying height and neighbourhood constraints. Planar roof segments are extracted from each cluster of points following a region-growing technique. Planes are initialised using coplanar points as seed points and then grown using plane compatibility tests. If the estimated height of a point is similar to its LIDAR-generated height, or if its normal distance to a plane is within a predefined limit, then the point is added to the plane. Once all the planar segments are extracted, the common points between the neighbouring planes are assigned to the appropriate planes based on the plane intersection line, locality and the angle between the normal at a common point and the corresponding plane. A rule-based procedure is applied to remove tree planes, which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual building boundaries, which are regularised based on long line segments. Experimental results on ISPRS benchmark data sets show that the
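
    A toy sketch of the first step described, separating non-ground returns using the ground height from a DEM; the grid lookup, the flat synthetic terrain and the 2 m threshold are assumptions, and the wall-removal and roof-plane growing stages are not shown.

```python
import numpy as np

def non_ground_mask(points_xyz, dem, cell_size, origin_xy, height_threshold=2.0):
    """Mark points that sit more than height_threshold above the DEM cell below them."""
    cols = ((points_xyz[:, 0] - origin_xy[0]) / cell_size).astype(int)
    rows = ((points_xyz[:, 1] - origin_xy[1]) / cell_size).astype(int)
    ground = dem[rows, cols]
    return (points_xyz[:, 2] - ground) > height_threshold

dem = np.zeros((10, 10))                          # flat terrain at 0 m elevation
points = np.array([[2.5, 2.5, 0.3],               # ground return
                   [5.5, 5.5, 6.0],               # building roof
                   [7.5, 1.5, 9.5]])              # tree crown
print(non_ground_mask(points, dem, cell_size=1.0, origin_xy=(0.0, 0.0)))
```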

  6. Image Processing Method for Automatic Discrimination of Hoverfly Species

    Directory of Open Access Journals (Sweden)

    Vladimir Crnojević

    2014-01-01

    Full Text Available. An approach to automatic hoverfly species discrimination based on the detection and extraction of vein junctions in the wing venation patterns of insects is presented in this paper. The dataset used in our experiments consists of high-resolution microscopic wing images of several hoverfly species collected over a relatively long period of time at different geographic locations. Junctions are detected using a combination of the well-known HOG (histograms of oriented gradients) and a robust version of the recently proposed CLBP (complete local binary pattern). These features are used to train an SVM classifier to detect junctions in wing images. Once the junctions are identified, they are used to extract statistics characterizing the constellations of these points. Such simple features can be used to automatically discriminate four selected hoverfly species with a polynomial-kernel SVM and achieve high classification accuracy.
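
    A minimal HOG plus polynomial-kernel SVM sketch of the classification machinery named in the abstract; the wing images are replaced by synthetic striped patterns, and the CLBP features, junction detection and constellation statistics of the real pipeline are not shown.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def synthetic_wing(klass):
    """Fake 64x64 'wing' image: class 0 has vertical stripes, class 1 horizontal."""
    image = rng.random((64, 64)) * 0.2
    if klass == 0:
        image[:, ::8] = 1.0
    else:
        image[::8, :] = 1.0
    return image

images = [synthetic_wing(k) for k in (0, 1) for _ in range(20)]
labels = [k for k in (0, 1) for _ in range(20)]
features = [hog(im, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            for im in images]

classifier = SVC(kernel="poly", degree=3)
classifier.fit(features[::2], labels[::2])              # even samples for training
print("held-out accuracy:", classifier.score(features[1::2], labels[1::2]))
```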

  7. Gaia Data Processing Architecture

    CERN Document Server

    O'Mullane, W; Bailer-Jones, C; Bastian, U; Brown, A; Drimmel, R; Eyer, L; Huc, C; Jansen, F; Katz, D; Lindegren, L; Pourbaix, D; Luri, X; Mignard, F; Torra, J; van Leeuwen, F

    2006-01-01

    Gaia is ESA's ambitious space astrometry mission, the main objective of which is to astrometrically and spectro-photometrically map 1000 million celestial objects (mostly in our galaxy) with unprecedented accuracy. The announcement of opportunity for the data processing will be issued by ESA late in 2006. The Gaia Data Processing and Analysis Consortium (DPAC) has been formed recently and is preparing an answer. The satellite will downlink close to 100 TB of raw telemetry data over 5 years. To achieve its required accuracy of a few tens of microarcseconds in astrometry, a highly involved processing of this data is required. In addition to the main astrometric instrument, Gaia will host a Radial Velocity instrument, two low-resolution dispersers for multi-color photometry and two Star Mappers. Gaia is a flying Giga Pixel camera. The various instruments each require relatively complex processing while at the same time being interdependent. We describe the overall composition of the DPAC and the envisaged overall archi...

  8. Automatic Detection and Characterization of Subsurface Features from Mars Radar Sounder Data

    Science.gov (United States)

    Ferro, A.; Bruzzone, L.; Heggy, E.; Plaut, J. J.

    2010-12-01

    MARSIS and SHARAD are currently orbiting Mars in an attempt to explore structural and volatile elements in its subsurface. The data returned from these two experiments are complementary in nature, providing different penetration capabilities and vertical resolutions that are crucial to constrain the ambiguities on the subsurface structural and geophysical properties. To this day, both radars have acquired a substantially large volume of data that is yet to be quantitatively analyzed with more accurate radar inversion algorithms. Manual investigation of the radargrams is a time-consuming task that is often dependent on the user's visual ability to distinguish subsurface reflectors. Such a process induces substantial ambiguity in data analysis from user to user, limits the amount of data that can be explored and reduces the efficiency of fusion studies that compile MARSIS and SHARAD data in a metric process. To address this deficiency, we started the development of automated techniques for the extraction of subsurface information from the radar sounding data. Such methods will greatly improve the ability to perform scientific analysis on larger scale areas using the two data sets from MARSIS and SHARAD simultaneously [Ferro and Bruzzone, 2009]. Our automated data analysis chain has been preliminarily applied only to SHARAD data for the statistical characterization of the radargrams and the automatic detection of linear subsurface features [Ferro and Bruzzone, 2010]. Our current development has been extended to the integration of both SHARAD and MARSIS data. We identified two targets of interest to test and validate our automated tools to explore subsurface features: (1) the North Polar Layered Deposits, and (2) Elysium Planitia. On the NPLD, the technique was able to extract the position and the extension of the returns coming from the basal unit from SHARAD radargrams, both in range and azimuth. Therefore, it was possible to map the depth and thickness of the icy polar cap. The

  9. Real-time hyperspectral processing for automatic nonferrous material sorting

    Science.gov (United States)

    Picón, Artzai; Ghita, Ovidiu; Bereciartua, Aranzazu; Echazarra, Jone; Whelan, Paul F.; Iriondo, Pedro M.

    2012-01-01

    The application of hyperspectral sensors in the development of machine vision solutions has become increasingly popular as the spectral characteristics of the imaged materials are better modeled in the hyperspectral domain than in the standard trichromatic red, green, blue data. While there is no doubt that the availability of detailed spectral information is opportune as it opens the possibility to construct robust image descriptors, it also raises a substantial challenge when this high-dimensional data is used in the development of real-time machine vision systems. To alleviate the computational demand, often decorrelation techniques are commonly applied prior to feature extraction. While this approach has reduced to some extent the size of the spectral descriptor, data decorrelation alone proved insufficient in attaining real-time classification. This fact is particularly apparent when pixel-wise image descriptors are not sufficiently robust to model the spectral characteristics of the imaged materials, a case when the spatial information (or textural properties) also has to be included in the classification process. The integration of spectral and spatial information entails a substantial computational cost, and as a result the prospects of real-time operation for the developed machine vision system are compromised. To answer this requirement, in this paper we have reengineered the approach behind the integration of the spectral and spatial information in the material classification process to allow the real-time sorting of the nonferrous fractions that are contained in the waste of electric and electronic equipment scrap.

  10. NGS-Trex: an automatic analysis workflow for RNA-Seq data.

    Science.gov (United States)

    Boria, Ilenia; Boatti, Lara; Saggese, Igor; Mignone, Flavio

    2015-01-01

    RNA-Seq technology allows the rapid analysis of whole transcriptomes, taking advantage of next-generation sequencing platforms. Moreover, with the constant decrease in the cost of NGS analysis, RNA-Seq is becoming very popular and widespread. Unfortunately, data analysis is quite demanding in terms of the bioinformatic skills and infrastructure required, thus limiting the potential users of this method. Here we describe the complete analysis of sample data, from raw sequences to data mining of results, by using the NGS-Trex platform, a low-user-interaction, fully automatic analysis workflow. Used through a web interface, NGS-Trex processes data and profiles the transcriptome of the samples, identifying expressed genes, transcripts, and new and known splice variants. It also detects differentially expressed genes and transcripts across different experiments.

  11. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, uniform corrosion is characterized by texture attributes extracted from the co-occurrence matrix and by the Self-Organizing Map (SOM) clustering algorithm. We present a technique for the automatic inspection of oil and gas storage tanks and pipelines of petrochemical industries without disturbing their properties and performance. Experimental results are promising and encourage the possibility of using this methodology in designing trustworthy and robust early failure detection systems. (author)

  12. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    Science.gov (United States)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step, the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence achieves pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
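
    For illustration, the sketch below runs OpenCV's standard Harris operator on a synthetic ortho-image patch; the adaptive Harris variant, the LATCH descriptors and the outlier filtering described in the paper are not shown, and the parameter values are assumptions.

```python
import cv2
import numpy as np

# Synthetic ortho-image patch with a bright rectangle, whose corners should respond.
patch = np.zeros((200, 200), dtype=np.float32)
patch[60:140, 50:150] = 1.0

# Standard Harris response; strong responses mark corner-like points.
response = cv2.cornerHarris(patch, blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())
print(f"{len(corners)} corner pixels, e.g. {corners[0]}")
```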

  13. Automatic processing of atmospheric CO2 and CH4 mole fractions at the ICOS Atmosphere Thematic Centre

    Science.gov (United States)

    Hazan, Lynn; Tarniewicz, Jérôme; Ramonet, Michel; Laurent, Olivier; Abbaris, Amara

    2016-09-01

    The Integrated Carbon Observation System Atmosphere Thematic Centre (ICOS ATC) automatically processes atmospheric greenhouse gas mole fraction data coming from sites of the ICOS network. Daily transferred raw data files are automatically processed and archived. Data are stored in the ICOS atmospheric database, the backbone of the system, which has been developed with an emphasis on the traceability of the data processing. Many data products, updated daily, explore the data from different angles to support the quality control of the dataset performed by the principal operators in charge of the instruments. The automatic processing includes calibration and water vapor corrections as described in the paper. The mole fractions calculated in near-real time (NRT) are automatically re-evaluated as soon as a new instrument calibration is processed or when the station supervisors perform quality control. By analyzing data from 11 sites, we determined that the average calibration corrections are equal to 1.7 ± 0.3 µmol mol-1 for CO2 and 2.8 ± 3 nmol mol-1 for CH4. These biases are important to correct in order to avoid artificial gradients between stations that could lead to errors in flux estimates when using atmospheric inversion techniques. We also calculated that the average drift between two successive calibrations separated by 15 days amounts to ±0.05 µmol mol-1 and ±0.7 nmol mol-1 for CO2 and CH4, respectively. Outliers are generally due to errors in the instrument configuration and can be readily detected thanks to the data products provided by the ATC. Several developments are still ongoing to improve the processing, including automated spike detection and calculation of time-varying uncertainties.

  14. AUTOMATIC RECOGNITION OF PIPING SYSTEM FROM LARGE-SCALE TERRESTRIAL LASER SCAN DATA

    Directory of Open Access Journals (Sweden)

    K. Kawashima

    2012-09-01

    Full Text Available. Recently, changes in plant equipment have become more frequent because of the short lifetime of the products, and constructing 3D shape models of existing plants (as-built models) from large-scale laser scanned data is expected to make their rebuilding processes more efficient. However, the laser scanned data of an existing plant contains a massive number of points, captures tangled objects and includes a large amount of noise, so that the manual reconstruction of a 3D model is very time-consuming and costly. Piping systems, in particular, account for the greatest proportion of plant equipment. Therefore, the purpose of this research was to propose an algorithm which can automatically recognize a piping system from terrestrial laser scan data of the plant equipment. The straight portions of pipes, the connecting parts and the connection relationships of the piping system can be recognized by this algorithm. Eigenvalue analysis of the point clouds and of the normal vectors allows for the recognition. Using only point clouds, the recognition algorithm can be applied to registered point clouds and can be performed in a fully automatic way. The preliminary results of the recognition for large-scale scanned data from an oil rig plant have shown the effectiveness of the algorithm.
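
    A sketch of the eigenvalue-analysis idea: a local neighbourhood whose covariance matrix has one dominant eigenvalue is line-like and hence a candidate for a straight pipe portion. The linearity measure, the 0.9 threshold and the synthetic neighbourhoods are assumptions, not the paper's actual criteria, and the normal-vector analysis is not shown.

```python
import numpy as np

def linearity(neighbourhood_xyz):
    """Linearity in [0, 1] from the sorted eigenvalues of the 3x3 covariance matrix."""
    centered = neighbourhood_xyz - neighbourhood_xyz.mean(axis=0)
    eigenvalues = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    return (eigenvalues[0] - eigenvalues[1]) / max(eigenvalues[0], 1e-12)

rng = np.random.default_rng(5)
pipe_points = np.c_[np.linspace(0, 5, 200), rng.normal(0, 0.02, (200, 2))]   # along x
blob_points = rng.normal(0, 0.5, (200, 3))                                   # no axis
print("pipe-like:", linearity(pipe_points) > 0.9, "blob:", linearity(blob_points) > 0.9)
```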

  15. Automatic cortical surface reconstruction of high-resolution T1 echo planar imaging data.

    Science.gov (United States)

    Renvall, Ville; Witzel, Thomas; Wald, Lawrence L; Polimeni, Jonathan R

    2016-07-01

    Echo planar imaging (EPI) is the method of choice for the majority of functional magnetic resonance imaging (fMRI), yet EPI is prone to geometric distortions and thus misaligns with conventional anatomical reference data. The poor geometric correspondence between functional and anatomical data can lead to severe misplacements and corruption of detected activation patterns. However, recent advances in imaging technology have provided EPI data with increasing quality and resolution. Here we present a framework for deriving cortical surface reconstructions directly from high-resolution EPI-based reference images that provide anatomical models exactly geometric distortion-matched to the functional data. Anatomical EPI data with 1 mm isotropic voxel size were acquired using a fast multiple inversion recovery time EPI sequence (MI-EPI) at 7T, from which quantitative T1 maps were calculated. Using these T1 maps, volumetric data mimicking the tissue contrast of standard anatomical data were synthesized using the Bloch equations, and these T1-weighted data were automatically processed using FreeSurfer. The spatial alignment between T2*-weighted EPI data and the synthetic T1-weighted anatomical MI-EPI-based images was improved compared to the conventional anatomical reference. In particular, the alignment near the regions vulnerable to distortion due to magnetic susceptibility differences was improved, and sampling of the adjacent tissue classes outside of the cortex was reduced when using cortical surface reconstructions derived directly from the MI-EPI reference. The MI-EPI method therefore produces high-quality anatomical data that can be automatically segmented with standard software, providing cortical surface reconstructions that are geometrically matched to the BOLD fMRI data. PMID:27079529
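
    A minimal sketch of synthesizing a T1-weighted contrast image from a quantitative T1 map, using a simplified inversion-recovery signal equation; the TI/TR values and the random volume are placeholders, not the MI-EPI protocol described in the paper.

    import numpy as np

    def synthesize_t1w(t1_map_ms, ti=1100.0, tr=2500.0, m0=1.0):
        """Magnitude inversion-recovery signal evaluated voxel-wise on a T1 map (ms)."""
        t1 = np.clip(t1_map_ms, 1.0, None)                # guard against division by zero
        signal = m0 * (1.0 - 2.0 * np.exp(-ti / t1) + np.exp(-tr / t1))
        return np.abs(signal)

    t1_map = np.random.uniform(300, 3000, size=(64, 64, 64))   # placeholder T1 volume
    t1w_volume = synthesize_t1w(t1_map)                        # input for segmentation tools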

  16. Research on HJ-1A/B satellite data automatic geometric precision correction design

    Institute of Scientific and Technical Information of China (English)

    Xiong Wencheng; Shen Wenming; Wang Qiao; Shi Yuanli; Xiao Rulin; Fu Zhuo

    2014-01-01

    Developed independently by China, the HJ-1A/B satellites have operated well on-orbit for five years and acquired a large amount of high-quality observation data. Geometric precision correction of the observation data is of great significance for macro and dynamic ecological environment monitoring. The paper analyzed the parameter characteristics of the HJ-1 satellites and the geometric features of HJ-1 satellite level 2 data (systematically geo-corrected data). Based on this, the overall HJ-1 multi-sensor geometric correction flow and a charge-coupled device (CCD) automatic geometric precision correction method were designed. Actual operating data showed that the method achieves good results for automatic geometric precision correction of HJ-1 satellite data: automatic HJ-1 CCD image geometric precision correction accuracy could be achieved within two pixels, and automatic matching accuracy between images of the same satellite could be obtained at less than one pixel.

  17. Cell Processing Engineering for Regenerative Medicine : Noninvasive Cell Quality Estimation and Automatic Cell Processing.

    Science.gov (United States)

    Takagi, Mutsumi

    2016-01-01

    Cell processing engineering, including automatic cell processing and noninvasive cell quality estimation of adherent mammalian cells for regenerative medicine, is reviewed. Automatic cell processing, which is necessary for the industrialization of regenerative medicine, is introduced. Cell quality, such as cell heterogeneity, should be noninvasively estimated before transplantation to the patient, because cultured cells are usually not homogeneous but heterogeneous and most protocols of regenerative medicine are autologous systems. The differentiation level can be estimated by two-dimensional cell morphology analysis using a conventional phase-contrast microscope. The phase-shifting laser microscope (PLM) can determine the laser phase shift at every pixel in a view, which is caused by the laser transmitted through the cell, and might be more noninvasive and more useful than the atomic force microscope and the digital holographic microscope. The noninvasive determination of the laser phase shift of a cell using a PLM was carried out to determine the three-dimensional cell morphology and to estimate the cell cycle phase of each adhesive cell and the mean proliferation activity of a cell population. The noninvasive discrimination of cancer cells from normal cells by measuring the phase shift was performed based on the difference in cytoskeleton density. Chemical analysis of the culture supernatant was also useful to estimate the differentiation level of a cell population. A probe beam, an infrared beam, and Raman spectroscopy are useful for diagnosing the viability, apoptosis, and differentiation of each adhesive cell. PMID:25373455

  18. Efficient Parallel Data Processing in the Cloud

    Directory of Open Access Journals (Sweden)

    THANAPAL.P

    2013-05-01

    Full Text Available Cloud computing is a distributed computing technology combining hardware and software delivered as a service to store, manage and process data. A new system is proposed to allocate resources dynamically for task scheduling and execution. Virtual machines are introduced in the proposed architecture for efficient parallel data processing in the cloud, and are automatically instantiated and terminated during job execution. An extended evaluation of MapReduce is also used in this approach.

  19. An Automatic Framework Using Space-Time Processing and TR-MUSIC for Subsurface and Through-Wall Multitarget Imaging

    Directory of Open Access Journals (Sweden)

    Si-hao Tan

    2012-01-01

    Full Text Available We present an automatic framework combining space-time signal processing with Time Reversal electromagnetic (EM) inversion for subsurface and through-wall multitarget imaging using electromagnetic waves. This framework is composed of a frequency-wavenumber (FK) filter to suppress the direct wave and medium bounce, an FK migration algorithm to automatically estimate the number of targets and identify target regions, which can be used to reduce the computational complexity of the subsequent imaging algorithm, and an EM inversion algorithm using Time Reversal Multiple Signal Classification (TR-MUSIC) to reconstruct hidden objects. The feasibility of the framework is demonstrated with simulated data generated by GPRMAX.

  20. Automatic Determination of Fiber-Length Distribution in Composite Material Using 3D CT Data

    Directory of Open Access Journals (Sweden)

    Günther Greiner

    2010-01-01

    Full Text Available Determining the fiber length distribution in fiber reinforced polymer components is a crucial step in quality assurance, since fiber length has a strong influence on the overall strength, stiffness, and stability of the material. The approximate fiber length distribution is usually determined early in the development process, as conventional methods require destruction of the sample component. In this paper, a novel, automatic, and nondestructive approach for the determination of fiber length distribution in fiber reinforced polymers is presented. For this purpose, high-resolution computed tomography is used as the imaging method together with subsequent image analysis for evaluation. The image analysis consists of an iterative process where single fibers are detected automatically in each iteration step after image enhancement algorithms have been applied. Subsequently, a model-based approach is used together with a priori information in order to guide a fiber tracing and segmentation process. Thereby, the length of the segmented fibers can be calculated and a length distribution can be deduced. The performance and the robustness of the segmentation method are demonstrated by applying it to artificially generated test data and selected real components.

  1. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    Science.gov (United States)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data for extracting ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly needed to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform and non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to identify the upper and lower parts of each building in an urban scene, which are needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces containing roofs, roads and ground, which is used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested using two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification results for the building, vegetation and road classes.

  2. Laplace domain automatic data assimilation of contaminant transport using a Wireless Sensor Network

    Science.gov (United States)

    Barnhart, K.; Illangasekare, T. H.

    2011-12-01

    Emerging in situ sensors and distributed network technologies have the potential to monitor dynamic hydrological and environmental processes more effectively than traditional monitoring and data acquisition techniques by sampling at greater spatial and temporal resolutions. In particular, Wireless Sensor Networks, the combination of low-power telemetry and energy-harvesting with miniaturized sensors, could play a large role in monitoring the environment on nature's time scale. Since sensor networks supply data with little or no delay, applications exist where automatic or real-time assimilation of this data would be useful, for example during smart remediation procedures where tracking of the plume response will reinforce real-time decisions. As a foray into this new data context, we consider the estimation of hydraulic conductivity when incorporating subsurface plume concentration data. Current practice optimizes the model in the time domain, which is often slow and overly sensitive to data anomalies. Instead, we perform model inversion in Laplace space and are able to do so because data gathered using new technologies can be sampled densely in time. An intermediate-scale synthetic aquifer is used to illustrate the developed technique. Data collection and model (re-)optimization are automatic. Electric conductivity values of passing sodium bromide plumes are sent through a wireless sensor network, stored in a database, scrubbed and passed to a modeling server which transforms the data and assimilates it into a Laplace domain model. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000
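
    A minimal sketch of moving a densely sampled concentration time series into Laplace space, the transform step that the densely sampled sensor-network data makes practical; the sampling rate, breakthrough curve and set of Laplace frequencies are illustrative assumptions.

    import numpy as np

    def laplace_transform(t, c, s_values):
        """Approximate C(s) = integral of c(t) * exp(-s t) dt with the trapezoid rule."""
        return np.array([np.trapz(c * np.exp(-s * t), t) for s in s_values])

    t = np.linspace(0, 3600, 3601)                 # 1 Hz sampling over one hour (s)
    conc = np.exp(-((t - 1200.0) / 300.0) ** 2)    # placeholder breakthrough curve
    s_values = np.logspace(-4, -1, 20)             # Laplace frequencies (1/s)
    C_s = laplace_transform(t, conc, s_values)     # data assimilated into the Laplace domain model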

  3. Adaptive Automatic Gauge Control of a Cold Strip Rolling Process

    Directory of Open Access Journals (Sweden)

    ROMAN, N.

    2010-02-01

    Full Text Available The paper deals with the thickness control structure for cold rolled strips. This structure is based on the roll position control of a reversible quarto rolling mill. The main feature of the system proposed in the paper consists in the compensation of the errors introduced by the deficient dynamics of the hydraulic servo-system used for roll positioning, by means of a dynamic compensator that approximates the inverse of the servo-system. Because the servo-system is considered time-varying, on-line identification of the servo-system and parameter adaptation of the compensator are performed. The results obtained by numerical simulation are presented together with data taken from the real process. These results illustrate the efficiency of the proposed solutions.

  4. Automatic Creation of quality multi-word Lexica from noisy text data

    OpenAIRE

    Frontini, Francesca; Quochi, Valeria; Rubino, Francesco

    2012-01-01

    This paper describes the design of a tool for the automatic creation of multi-word lexica that is deployed as a web service and runs on automatically web-crawled data within the framework of the PANACEA platform. The main purpose of our task is to provide a (computationally "light") tool that creates a full high quality lexical resource of multi-word items. Within the platform, this tool is typically inserted in a work flow whose first step is automatic web-crawling. Therefore, the input data...

  5. Development of an automatic sample changer and a data acquisition system

    International Nuclear Information System (INIS)

    An automatic electro-pneumatic sample changer with a rotating sample holder is described. The changer is coupled through an electronic interface with the data acquisition station. The software to automate the system has been designed. (author)

  6. Gaussian process classification using automatic relevance determination for SAR target recognition

    Science.gov (United States)

    Zhang, Xiangrong; Gou, Limin; Hou, Biao; Jiao, Licheng

    2010-10-01

    In this paper, a Synthetic Aperture Radar Automatic Target Recognition approach based on Gaussian process (GP) classification is proposed. It adopts kernel principal component analysis to extract sample features and implements target recognition by using GP classification with an automatic relevance determination (ARD) covariance function. Compared with k-Nearest Neighbor, the Naïve Bayes classifier and the Support Vector Machine, GP with ARD has the advantage of automatic model selection and hyper-parameter optimization. Experiments on UCI datasets and the MSTAR database show that our algorithm is self-tuning and achieves better recognition accuracy.
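
    A minimal sketch of a Gaussian process classifier with an ARD kernel (one length scale per feature dimension), here via scikit-learn; the random feature matrix standing in for KPCA features and the kernel bounds are illustrative assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    X = np.random.rand(200, 10)                     # placeholder KPCA feature vectors
    y = np.random.randint(0, 2, size=200)           # placeholder target-class labels

    ard_kernel = RBF(length_scale=np.ones(X.shape[1]),      # one length scale per dimension
                     length_scale_bounds=(1e-2, 1e2))
    gpc = GaussianProcessClassifier(kernel=ard_kernel).fit(X, y)

    # After fitting, small learned length scales indicate the most relevant features.
    print(gpc.kernel_.length_scale)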

  7. Application of an automatic process data acquisition and analysis system for a tandem cold mill

    Institute of Scientific and Technical Information of China (English)

    侯永刚; 秦大伟; 费静; 张岩; 刘宝权; 宋君

    2012-01-01

    Modern cold strip rolling production lines are characterized by high production speed and high control precision. During the actual production process, a data acquisition system that can efficiently handle all kinds of process data from the cold strip mill is therefore required. Based on the practical application at the Angang Putian Cold Strip Mill, a data acquisition system was built with PDA equipment from the iba Company, which collects, monitors, records and analyzes all kinds of running process data of the tandem cold mill in real time and at high speed. This system provides strong data support for production engineers to rapidly diagnose faults in the tandem cold rolling mill.

  8. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  9. 76 FR 52581 - Automated Data Processing and Information Retrieval System Requirements

    Science.gov (United States)

    2011-08-23

    ... and Information Retrieval System Requirements AGENCY: Food and Nutrition Service, USDA. ACTION... automatic data processing (ADP) and information retrieval system, including the evaluation of data from... data processing and information retrieval system and to provide clarifications and updates which...

  10. Automatic Segmentation of Raw LIDAR Data for Extraction of Building Roofs

    OpenAIRE

    Mohammad Awrangjeb; Fraser, Clive S.

    2014-01-01

    Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging) data. Using the ground height from a DEM (digital elevation model), the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the b...

  11. Automatic generation of optimal business processes from business rules

    OpenAIRE

    Steen, Bas; Ferreira Pires, Luis; Iacob, Maria-Eugenia

    2010-01-01

    In recent years, business process models are increasingly being used as a means for business process improvement. Business rules can be seen as requirements for business processes, in that they describe the constraints that must hold for business processes that implement these business rules. Therefore, in principle one could devise (automated) transformations from business rules to business processes. These transformations should improve the quality (correctness) of business processes, by im...

  12. An automatic precipitation-phase distinction algorithm for optical disdrometer data over the global ocean

    Science.gov (United States)

    Burdanowitz, Jörg; Klepp, Christian; Bakan, Stephan

    2016-04-01

    The lack of high-quality in situ surface precipitation data over the global ocean so far limits the capability to validate satellite precipitation retrievals. The first systematic ship-based surface precipitation data set OceanRAIN (Ocean Rainfall And Ice-phase precipitation measurement Network) aims at providing a comprehensive statistical basis of in situ precipitation reference data from optical disdrometers at 1 min resolution deployed on various research vessels (RVs). Deriving the precipitation rate for rain and snow requires a priori knowledge of the precipitation phase (PP). Therefore, we present an automatic PP distinction algorithm using available data based on more than 4 years of atmospheric measurements onboard RV Polarstern that covers all climatic regions of the Atlantic Ocean. A time-consuming manual PP distinction within the OceanRAIN post-processing serves as reference, mainly based on 3-hourly present weather information from a human observer. For automation, we find that the combination of air temperature, relative humidity, and 99th percentile of the particle diameter predicts best the PP with respect to the manually determined PP. Excluding mixed phase, this variable combination reaches an accuracy of 91 % when compared to the manually determined PP for 149 635 min of precipitation from RV Polarstern. Including mixed phase (165 632 min), an accuracy of 81.2 % is reached for two independent PP distributions with a slight snow overprediction bias of 0.93. Using two independent PP distributions represents a new method that outperforms the conventional method of using only one PP distribution to statistically derive the PP. The new statistical automatic PP distinction method considerably speeds up the data post-processing within OceanRAIN while introducing an objective PP probability for each PP at 1 min resolution.
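
    A minimal sketch of a statistical phase classifier built on the three predictors named above (air temperature, relative humidity, 99th-percentile particle diameter); the logistic-regression choice and the tiny training arrays are illustrative assumptions, not the OceanRAIN algorithm itself.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # columns: air temperature (deg C), relative humidity (%), D99 (mm)
    X_train = np.array([[ 8.0, 70.0, 2.1],
                        [ 2.5, 85.0, 1.4],
                        [-0.5, 92.0, 3.0],
                        [-4.0, 88.0, 4.2]])
    y_train = np.array([1, 1, 0, 0])                 # 1 = rain, 0 = snow

    clf = LogisticRegression().fit(X_train, y_train)
    p_rain = clf.predict_proba([[0.2, 90.0, 2.8]])[0, 1]
    print(f"per-minute probability of rain: {p_rain:.2f}")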

  13. Profiling animal toxicants by automatically mining public bioassay data: a big data approach for computational toxicology.

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    Full Text Available In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has resulted in an enormous amount of publicly available bioassay data having been generated for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which records the responses induced when the compound interacts with different cellular systems or biological targets. Profiling compounds of environmental or pharmaceutical interest using useful toxicity bioassay data is a promising method to study complex animal toxicity. In this study, we developed an automatic virtual profiling tool to evaluate potential animal toxicants. First, we automatically acquired all PubChem bioassay data for a set of 4,841 compounds with publicly available rat acute toxicity results. Next, we developed a scoring system to evaluate the relevance between these extracted bioassays and animal acute toxicity. Finally, the top ranked bioassays were selected to profile the compounds of interest. The resulting response profiles proved to be useful to prioritize untested compounds for their animal toxicity potentials and form a potential in vitro toxicity testing panel. The protocol developed in this study could be combined with structure-activity approaches and used to explore additional publicly available bioassay datasets for modeling a broader range of animal toxicities.

  14. The development of automatic and controlled inhibitory retrieval processes in true and false recall

    OpenAIRE

    Knott, L.; Howe, M. L.; Wimmer, M. C.; Dewhurst, S

    2011-01-01

    In three experiments we investigated the role of automatic and controlled inhibitory retrieval processes in true and false memory development in children and adults. Experiment 1 incorporated a directed forgetting task to examine controlled retrieval inhibition. Experiments 2 and 3 utilized a part-set cue and retrieval practice task to examine automatic retrieval inhibition. In the first experiment, the forget cue had no effect on false recall for adults but reduced false recall for children....

  15. Automatic Data Filter Customization Using a Genetic Algorithm

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as the Fischer discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
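
    A compact sketch of the genetic search over per-dimension [left, right] rejection thresholds; the synthetic data, the fitness trade-off and the GA settings (population size, mutation scale) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 50))                  # placeholder 50-dimensional soundings
    bad = rng.random(5000) < 0.3                     # placeholder "useless run" flag

    def fitness(thresholds):
        lo, hi = thresholds[:, 0], thresholds[:, 1]
        rejected = np.any((X < lo) | (X > hi), axis=1)
        # reward rejecting bad runs, penalize rejecting good ones
        return (rejected & bad).sum() - 2.0 * (rejected & ~bad).sum()

    def evolve(pop_size=40, generations=100):
        pop = np.stack([np.sort(rng.normal(0.0, 3.0, size=(50, 2)), axis=1)
                        for _ in range(pop_size)])
        for _ in range(generations):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]            # keep the best half
            children = np.sort(parents + rng.normal(0.0, 0.1, parents.shape), axis=2)
            pop = np.concatenate([parents, children])
        return pop[np.argmax([fitness(ind) for ind in pop])]

    best_thresholds = evolve()                       # shape (50, 2): [left, right] per dimension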

  16. A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis.

    Science.gov (United States)

    McBride, Dawn M; Anne Dosher, Barbara

    2002-09-01

    Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes. PMID:12435377
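
    A minimal sketch of the standard process dissociation estimates of conscious (C) and automatic (A) influences derived from inclusion/exclusion performance, following Jacoby's (1991) equations; the example proportions are made up for illustration.

    def process_dissociation(inclusion, exclusion):
        """C = I - E;  A = E / (1 - C)."""
        c = inclusion - exclusion
        a = exclusion / (1.0 - c) if c < 1.0 else float("nan")
        return c, a

    c_est, a_est = process_dissociation(inclusion=0.62, exclusion=0.25)
    print(f"conscious estimate C = {c_est:.2f}, automatic estimate A = {a_est:.2f}")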

  17. Automatic Key-Frame Extraction from Optical Motion Capture Data

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qiang; YU Shao-pei; ZHOU Dong-sheng; WEI Xiao-peng

    2013-01-01

    Optical motion capture is an increasingly popular animation technique. In the last few years, plenty of methods have been proposed for key-frame extraction from motion capture data, and extracting key-frames using quaternions is a common approach. Here, one main difficulty is due to the fact that previous algorithms often need various parameters to be set manually. In addition, it is problematic to predefine an appropriate threshold without knowing the data content. In this paper, we present a novel adaptive threshold-based extraction method. Key-frames can be found according to quaternion distance. We propose a simple and efficient algorithm to extract key-frames from a motion sequence based on an adaptive threshold. It is convenient, with no need to predefine parameters to meet a certain compression ratio. Experimental results on many motion capture sequences with different traits demonstrate the good performance of the proposed algorithm. Our experiments show that one can typically cut down the extraction process from several minutes to a couple of seconds.
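
    A minimal sketch of key-frame extraction by thresholding the quaternion distance between consecutive poses; the distance definition and the data-driven threshold (a multiple of the mean distance) are illustrative stand-ins for the paper's adaptive scheme.

    import numpy as np

    def quat_distance(q1, q2):
        """Sign-invariant distance between two unit quaternions."""
        return 1.0 - abs(np.dot(q1, q2))

    def extract_keyframes(quats, factor=1.5):
        d = np.array([quat_distance(quats[i], quats[i + 1])
                      for i in range(len(quats) - 1)])
        threshold = factor * d.mean()                 # derived from the data, no manual tuning
        return [0] + [i + 1 for i, di in enumerate(d) if di > threshold]

    q = np.random.randn(300, 4)                       # placeholder per-frame rotations
    q /= np.linalg.norm(q, axis=1, keepdims=True)     # normalize to unit quaternions
    keyframes = extract_keyframes(q)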

  18. An automatic precipitation phase distinction algorithm for optical disdrometer data over the global ocean

    Science.gov (United States)

    Burdanowitz, J.; Klepp, C.; Bakan, S.

    2015-12-01

    The lack of high-quality in situ surface precipitation data over the global ocean so far limits the capability to validate satellite precipitation retrievals. The first systematic ship-based surface precipitation dataset OceanRAIN (Ocean Rainfall And Ice-phase precipitation measurement Network) aims at providing a comprehensive statistical basis of in situ precipitation reference data from optical disdrometers at 1 min resolution deployed on various research vessels (RVs). Deriving the precipitation rate for rain and snow requires a priori knowledge of the precipitation phase (PP). Therefore, we present an automatic PP distinction algorithm using available data based on more than four years of atmospheric measurements onboard RV Polarstern that covers all climatic regions of the Atlantic Ocean. A time-consuming manual PP distinction within the OceanRAIN post-processing serves as reference, mainly based on 3 hourly present weather information from a human observer. For automation, we find that the combination of air temperature, relative humidity and 99th percentile of the particle diameter predicts best the PP with respect to the manually determined PP. Excluding mixed-phase, this variable combination reaches an accuracy of 91 % when compared to the manually determined PP for about 149 000 min of precipitation from RV Polarstern. Including mixed-phase (165 000 min), an accuracy of 81.2 % is reached with a slight snow overprediction bias of 0.93 for two independent PP distributions. In that respect, a method using two independent PP distributions outperforms a method based on only one PP distribution. The new statistical automatic PP distinction method significantly speeds up the data post-processing within OceanRAIN while introducing an objective PP probability for each PP at 1 min resolution.

  19. Research on automatic loading & unloading technology for vertical hot ring rolling process

    Directory of Open Access Journals (Sweden)

    Xiaokai Wang

    2015-01-01

    Full Text Available The automatic loading & unloading technology is the key to an automatic ring production line. In this paper, the automatic vertical hot ring rolling (VHRR) process is taken as the target, a loading & unloading method for VHRR is proposed, and the mechanical structure of the loading & unloading system is designed. A virtual prototype model of the VHRR mill and the loading & unloading mechanism is established, the coordinated control method of the VHRR mill and the loading & unloading auxiliaries is studied, and the movement traces and dynamic characteristics of the critical components are obtained. Finally, a series of hot ring rolling tests are conducted on the VHRR mill, and the production rhythm and the geometric precision of the formed rings are analysed. The test results show that the loading & unloading technology can meet the requirements of high-quality and high-efficiency ring production. The research conclusions have practical significance for large-scale automatic ring production.

  20. Automatic Conveyor System with In-Process Sorting Mechanism using PLC and HMI System

    Directory of Open Access Journals (Sweden)

    Y V Aruna

    2015-11-01

    Full Text Available Programmable logic controllers (PLCs) are widely used in many manufacturing processes, such as machinery packaging, material handling and automatic assembly. They are a special type of microprocessor-based controller used for any application that needs electrical control, including lighting control and HVAC control systems. The automatic conveyor system is a computerized method of controlling and managing the sorting mechanism while maintaining the efficiency of the industry and the quality of the products. The HMI for the automatic conveyor system is considered the primary way of controlling each operation. Text displays are available as well as graphical touch screens. It is used in touch panels and for local monitoring of machines. This paper deals with the efficient use of a PLC in an automatic conveyor system and with building accuracy into it.

  1. Data Processing for Scientists.

    Science.gov (United States)

    Heumann, K F

    1956-10-26

    This brief survey of integrated and electronic data processing has touched on such matters as the origin of the concepts, their use in business, machines that are available, indexing problems, and, finally, some scientific uses that surely foreshadow further development. The purpose of this has been to present for the consideration of scientists a point of view and some techniques which have had a phenomenal growth in the business world and to suggest that these are worth consideration in scientific data-handling problems (30). To close, let me quote from William Bamert on the experience of the C. and O. Railroad once more (8, p. 121): "Frankly, we have been asked whether we weren't planning for Utopia-the implication being that everyone except starry-eyed visionaries knows that Utopia is unattainable. Our answer is that of course we are! Has anyone yet discovered a better way to begin program planning of this nature? Our feeling is that compromise comes early enough in the normal order of things."

  2. A Development Process for Enterprise Information Systems Based on Automatic Generation of the Components

    Directory of Open Access Journals (Sweden)

    Adrian ALEXANDRESCU

    2008-01-01

    Full Text Available This paper contains some ideas concerning Enterprise Information Systems (EIS) development. It combines known elements from the software engineering domain with original elements which the author has conceived and experimented with. The author has followed two major objectives: to use a simple description for the concepts of an EIS, and to achieve a rapid and reliable EIS development process with minimal cost. The first goal was achieved by defining models which describe the conceptual elements of the EIS domain: entities, events, actions, states and attribute-domains. The second goal is based on a predefined architectural model for the EIS, on predefined analysis and design models for the elements of the domain and, finally, on the automatic generation of the system components. The proposed methods do not depend on a particular programming language or database management system. They are general and may be applied to any combination of such technologies.

  3. Automatic subject classification of textual documents using limited or no training data

    OpenAIRE

    Joorabchi, Arash

    2010-01-01

    With the explosive growth in the number of electronic documents available on the internet, intranets, and digital libraries, there is a growing need, more than ever, for automatic systems capable of indexing and organising such large volumes of data. Automatic Text Classification (ATC) has become one of the principal means for enhancing the performance of information retrieval systems and organising digital libraries and other textual collections. Within this context, the use of ...

  4. Automatic structural matching of 3D image data

    Science.gov (United States)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable for successfully matching pictures of natural landscapes taken in differing seasons, from differing aspect angles, by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). At present (in the reported version), the algorithm has been enhanced by the additional use of information on the third spatial coordinate of observed points on object surfaces. Thus, it is now capable of matching images of 3D scenes in tasks of automatic navigation of extremely low-flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.

  5. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  6. Automatic metaphor processing in adults with Asperger syndrome: a metaphor interference effect task.

    Science.gov (United States)

    Hermann, Ismene; Haser, Verena; van Elst, Ludger Tebartz; Ebert, Dieter; Müller-Feldmeth, Daniel; Riedel, Andreas; Konieczny, Lars

    2013-11-01

    This paper investigates automatic processing of novel metaphors in adults with Asperger Syndrome (AS) and typically developing controls. We present an experiment combining a semantic judgment task and a recognition task. Four types of sentences were compared: Literally true high-typical sentences, literally true low-typical sentences, apt metaphors, and scrambled metaphors (literally false sentences which are not readily interpretable as metaphors). Participants were asked to make rapid decisions about the literal truth of such sentences. The results revealed that AS and control participants showed significantly slower RTs for metaphors than for scrambled metaphors and made more mistakes in apt metaphoric sentences than in scrambled metaphors. At the same time, there was higher recognition of apt metaphors compared with scrambled metaphors. The findings indicate intact automatic metaphor processing in AS and replicate previous findings on automatic metaphor processing in typically developing individuals.

  7. An Automatic and Real-time Restoration of Gamma Dose Data by Radio Telemetry

    International Nuclear Information System (INIS)

    An on-line gamma monitoring system based on a high-pressure ionization chamber has been used for determining airborne doses surrounding the HANARO research reactor at KAERI (Korea Atomic Energy Research Institute). It is composed of a network of six monitoring stations and an on-line computer system. It is operated by radio telemetry at a frequency of 468.8 MHz, transmitting the real-time dose data measured by each remote ion chamber to the central computer at intervals of ten seconds. Although radio telemetry has several advantages, such as effective and economical transmission, one main problem is that data loss can occur because each monitoring post only stores 300 radiation data points, which cover only the previous 50 minutes of sequential data at a recording interval of 10 seconds. It is possible to restore the lost data by an off-line process such as a floppy disk or a portable memory disk, but this is an ineffective method for a real-time monitoring system. Restoration, storage, and display of the current data as well as the lost data are also difficult in the present system. In this paper, an automatic and real-time restoration method by radio telemetry is introduced.

  8. Towards Effective Sentence Simplification for Automatic Processing of Biomedical Text

    CERN Document Server

    Jonnalagadda, Siddhartha; Hakenberg, Jorg; Baral, Chitta; Gonzalez, Graciela

    2010-01-01

    The complexity of sentences characteristic to biomedical articles poses a challenge to natural language parsers, which are typically trained on large-scale corpora of non-technical text. We propose a text simplification process, bioSimplify, that seeks to reduce the complexity of sentences in biomedical abstracts in order to improve the performance of syntactic parsers on the processed sentences. Syntactic parsing is typically one of the first steps in a text mining pipeline. Thus, any improvement in performance would have a ripple effect over all processing steps. We evaluated our method using a corpus of biomedical sentences annotated with syntactic links. Our empirical results show an improvement of 2.90% for the Charniak-McClosky parser and of 4.23% for the Link Grammar parser when processing simplified sentences rather than the original sentences in the corpus.

  9. Process concepts for semi-automatic dismantling of LCD televisions

    OpenAIRE

    Elo, Kristofer; Sundin, Erik

    2014-01-01

    There is a large variety of electrical and electronic equipment products, for example liquid crystal display television sets (LCD TVs), in the waste stream today. Many LCD TVs contain mercury, which is a challenge to treat at the recycling plants. Two currently used processes to recycle LCD TVs are automated shredding and manual disassembly. This paper aims to present concepts for semi-automated dismantling processes for LCD TVs in order to achieve higher productivity and flexibility, and in tu...

  10. Automatic digital document processing and management problems, algorithms and techniques

    CERN Document Server

    Ferilli, Stefano

    2011-01-01

    This text reviews the issues involved in handling and processing digital documents. Examining the full range of a document's lifetime, this book covers acquisition, representation, security, pre-processing, layout analysis, understanding, analysis of single components, information extraction, filing, indexing and retrieval. This title: provides a list of acronyms and a glossary of technical terms; contains appendices covering key concepts in machine learning, and providing a case study on building an intelligent system for digital document and library management; discusses issues of security,

  11. Modular toolkit for Data Processing (MDP: a Python data processing framework

    Directory of Open Access Journals (Sweden)

    Tiziano Zito

    2009-01-01

    Full Text Available Modular toolkit for Data Processing (MDP is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework, which can easily be expanded. The implementation of new algorithms is easy and intuitive. The new implemented units are then automatically integrated with the rest of the library. MDP has been written in the context of theoretical research in neuroscience, but it has been designed to be helpful in any context where trainable data processing algorithms are used. Its simplicity on the user's side, the variety of readily available algorithms, and the reusability of the implemented units make it also a useful educational tool.
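
    A minimal sketch of the usage pattern described above, assuming the mdp package is installed: nodes are combined into a Flow, the flow is trained, and then executed on new data; the particular nodes and dimensions are arbitrary examples.

    import numpy as np
    import mdp

    x = np.random.rand(1000, 20)                     # placeholder training data

    flow = mdp.Flow([mdp.nodes.PCANode(output_dim=5),
                     mdp.nodes.PolynomialExpansionNode(2),
                     mdp.nodes.SFANode(output_dim=3)])
    flow.train(x)                                    # trainable nodes are trained in sequence
    y = flow(np.random.rand(100, 20))                # execute the trained flow on new data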

  12. Modular Toolkit for Data Processing (MDP): A Python Data Processing Framework.

    Science.gov (United States)

    Zito, Tiziano; Wilbert, Niko; Wiskott, Laurenz; Berkes, Pietro

    2008-01-01

    Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework, which can easily be expanded. The implementation of new algorithms is easy and intuitive. The new implemented units are then automatically integrated with the rest of the library. MDP has been written in the context of theoretical research in neuroscience, but it has been designed to be helpful in any context where trainable data processing algorithms are used. Its simplicity on the user's side, the variety of readily available algorithms, and the reusability of the implemented units make it also a useful educational tool. PMID:19169361

  13. AUTOMATIC EXTRACTION OF ROAD SURFACE AND CURBSTONE EDGES FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    A. Miraliakbari

    2015-05-01

    Full Text Available We present a procedure for the automatic extraction of the road surface from geo-referenced mobile laser scanning data. The basic assumption of the procedure is that the road surface is smooth and limited by curbstones. Two variants of jump detection are investigated for detecting curbstone edges, one based on height differences, the other based on histograms of the height data. Region growing algorithms are proposed which use the irregular laser point cloud. Two- and four-neighbourhood growing strategies utilize the two height criteria for examining the neighbourhood. Both height criteria rely on an assumption about the minimum height of a low curbstone. Road boundaries with lower or no jumps will not stop the region growing process; in contrast, objects on the road can terminate the process. Therefore further processing, such as bridging gaps between detected road boundary points and removing wrongly detected curbstone edges, is necessary. Road boundaries are finally approximated by splines. Experiments are carried out on a ca. 2 km network of small streets located in the neighbourhood of the University of Applied Sciences in Stuttgart. For accuracy assessment of the extracted road surfaces, ground truth measurements are digitized manually from the laser scanner data. For completeness and correctness of the region growing result, values between 92% and 95% are achieved.
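
    A minimal sketch of the growing idea on a rasterized height grid: starting from a seed cell on the road, 4-neighbours are added as long as the height jump stays below a curbstone-sized threshold; the grid representation and the 5 cm threshold are illustrative assumptions (the paper grows regions on the irregular point cloud).

    import numpy as np
    from collections import deque

    def grow_road(height_grid, seed, max_jump=0.05):
        """Boolean mask of cells reachable from seed without a curb-sized height jump."""
        rows, cols = height_grid.shape
        mask = np.zeros_like(height_grid, dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):     # 4-neighbourhood
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols and not mask[nr, nc]
                        and abs(height_grid[nr, nc] - height_grid[r, c]) < max_jump):
                    mask[nr, nc] = True
                    queue.append((nr, nc))
        return mask

    grid = np.random.normal(0.0, 0.01, size=(200, 200))    # placeholder height raster (m)
    grid[:, 120:] += 0.12                                   # a 12 cm curbstone step
    road_mask = grow_road(grid, seed=(100, 10))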

  14. Eye movements in pedophiles: automatic and controlled attentional processes while viewing prepubescent stimuli.

    Science.gov (United States)

    Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo

    2013-05-01

    Recent theories in sexuality highlight the importance of automatic and controlled attentional processes in viewing sexually relevant stimuli. The model of Spiering and Everaerd (2007) assumes that sexually relevant features of a stimulus are preattentively selected and automatically induce focal attention to these sexually relevant aspects. Whether this assumption proves true for pedophiles is unknown. It is aim of this study to test this assumption empirically for people suffering from pedophilic interests. Twenty-two pedophiles, 8 nonpedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult while eye movements were measured. Entry time was assessed as a measure of automatic attentional processes and relative fixation time in order to assess controlled attentional processes. Pedophiles demonstrated significantly shorter entry time to child stimuli than to adult stimuli. The opposite was the case for nonpedophiles, as they showed longer relative fixation time for adult stimuli, and, against all expectations, pedophiles also demonstrated longer relative fixation time for adult stimuli. The results confirmed the hypothesis that pedophiles automatically selected sexually relevant stimuli (children). Contrary to all expectations, this automatic selection did not trigger the focal attention to these sexually relevant pictures. Furthermore, pedophiles were first and longest attracted by faces and pubic regions of children; nonpedophiles were first and longest attracted by faces and breasts of adults. The results demonstrated, for the first time, that the face and pubic region are the most attracting regions in children for pedophiles. PMID:23206281

  15. Automatic fracture density update using smart well data and artificial neural networks

    Science.gov (United States)

    Al-Anazi, A.; Babadagli, T.

    2010-03-01

    This paper presents a new methodology to continuously update and improve fracture network models. We begin with a hypothetical model whose fracture network parameters and geological information are known. After generating the "exact" fracture network with known characteristics, the data were exported to a reservoir simulator and simulations were run over a period of time. Intelligent wells equipped with downhole multiple pressure and flow sensors were placed throughout the reservoir and put into production. These producers were completed in different fracture zones to create a representative pressure and production response. We then considered a number of wells of which static (cores and well logs) and dynamic (production) data were used to model well fracture density. As new wells were opened, historical static and dynamic data from previous wells and static data from the new wells were used to update the fracture density using Artificial Neural Networks (ANN). The accuracy of the prediction model depends significantly on the representation of the available data of the existing fracture network. The importance of conventional data (surface production data) and smart well data prediction capability was also investigated. Highly sensitive input data were selected through a forward selection scheme to train the ANN. Well geometric locations were included as a new link in the ANN regression process. Once the relationship between fracture network parameters and well performance data was established, the ANN model was used to predict fracture density at newly drilled locations. Finally, an error analysis through a correlation coefficient and percentage absolute relative error performance was performed to examine the accuracy of the proposed inverse modeling methodology. It was shown that fracture dominated production performance data collected from both conventional and smart wells allow for automatically updating the fracture network model. The proposed technique helps

  16. Uncertain Training Data Edition for Automatic Object-Based Change Map Extraction

    Science.gov (United States)

    Hajahmadi, S.; Mokhtarzadeh, M.; Mohammadzadeh, A.; Valadanzouj, M. J.

    2013-09-01

    Due to the rapid transformation of societies and the consequent growth of cities, it is necessary to study these changes in order to achieve better control and management of urban areas and assist decision-makers. Change detection involves the ability to quantify temporal effects using multi-temporal data sets. The available maps of the area under study are one of the most important sources for this purpose. Although old databases and maps are a great resource, it is more than likely that the training data extracted from them contain errors, which affects the classification procedure; as a result, editing the training samples is essential. Due to the urban nature of the area studied and the problems caused by pixel-based methods, object-based classification is applied. To this end, the image is segmented into 4 scale levels using a multi-resolution segmentation procedure. After obtaining the segments at the required levels, training samples are extracted automatically using the existing old map. Due to the age of the map, these samples are uncertain and may contain wrong data. To handle this issue, an editing process is proposed based on the K-nearest neighbour and k-means algorithms. Next, the image is classified in a multi-resolution object-based manner and the effects of training sample refinement are evaluated. As a final step, this classified image is compared with the existing map and the changed areas are detected.
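
    A minimal sketch of one common editing rule in the spirit of the approach above: a training sample is dropped if its label disagrees with the majority label of its k nearest neighbours; the feature matrix, labels and k are illustrative placeholders.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def edit_training_samples(X, y, k=5):
        """Keep samples whose label matches the majority label of their k neighbours."""
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
        _, idx = nn.kneighbors(X)                    # first neighbour is the sample itself
        keep = np.empty(len(y), dtype=bool)
        for i, neighbours in enumerate(idx[:, 1:]):
            keep[i] = np.bincount(y[neighbours]).argmax() == y[i]
        return keep

    X = np.random.rand(500, 8)                       # placeholder segment features
    y = np.random.randint(0, 4, size=500)            # labels taken from the old map
    mask = edit_training_samples(X, y)
    X_clean, y_clean = X[mask], y[mask]              # edited training set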

  17. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    Science.gov (United States)

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

    In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, which are the processed data capped at speed limit and the unprocessed data retaining the original speed were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model and the models with random parameters achieved the best model fitting. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other geometric factors were significant including auxiliary lanes and horizontal curvature. PMID:26722989
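
    A minimal sketch of the Negative Binomial baseline against which the Bayesian models are compared, fitted here with statsmodels; the predictor columns and the simulated crash counts are illustrative placeholders, not the Central Florida expressway data.

    import numpy as np
    import statsmodels.api as sm

    n = 300
    X = np.column_stack([np.random.uniform(50, 110, n),   # mean segment speed
                         np.random.uniform(1, 15, n),     # speed variation
                         np.random.randint(0, 2, n)])     # auxiliary-lane indicator
    crashes = np.random.poisson(2, size=n)                # placeholder crash counts

    nb_model = sm.GLM(crashes, sm.add_constant(X),
                      family=sm.families.NegativeBinomial(alpha=1.0))
    print(nb_model.fit().summary())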

  19. Automatic Synthesis of Panoramic Radiographs from Dental Cone Beam Computed Tomography Data

    Science.gov (United States)

    Luo, Ting; Shi, Changrong; Zhao, Xing; Zhao, Yunsong; Xu, Jinqiu

    2016-01-01

    In this paper, we propose an automatic method of synthesizing panoramic radiographs from dental cone beam computed tomography (CBCT) data for directly observing the whole dentition without the superimposition of other structures. This method consists of three major steps. First, the dental arch curve is generated from the maximum intensity projection (MIP) of 3D CBCT data. Then, based on this curve, the long axial curves of the upper and lower teeth are extracted to create a 3D panoramic curved surface describing the whole dentition. Finally, the panoramic radiograph is synthesized by developing this 3D surface. Both open-bite shaped and closed-bite shaped dental CBCT datasets were applied in this study, and the resulting images were analyzed to evaluate the effectiveness of this method. With the proposed method, a single-slice panoramic radiograph can clearly and completely show the whole dentition without the blur and superimposition of other dental structures. Moreover, thickened panoramic radiographs can also be synthesized with increased slice thickness to show more features, such as the mandibular nerve canal. One feature of the proposed method is that it is automatically performed without human intervention. Another feature of the proposed method is that it requires thinner panoramic radiographs to show the whole dentition than those produced by other existing methods, which contributes to the clarity of the anatomical structures, including the enamel, dentine and pulp. In addition, this method can rapidly process common dental CBCT data. The speed and image quality of this method make it an attractive option for observing the whole dentition in a clinical setting. PMID:27300554
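
    A minimal sketch of the first step described above, the maximum intensity projection (MIP) of the CBCT volume from which the dental arch curve is traced; the axis ordering of the placeholder volume is an assumption.

    import numpy as np

    volume = np.random.rand(300, 400, 400)     # placeholder CBCT volume, axes (z, y, x)
    mip = volume.max(axis=0)                   # collapse along z into a single 2D MIP image
    # The arch curve would then be fitted to the high-intensity (tooth/bone) band in `mip`.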

  20. Automatic Detection of Steel Ball's Surface Flaws Based on Image Processing

    Institute of Scientific and Technical Information of China (English)

    YU Zheng-lin; TAN Wei; YANG Dong-lin; CAO Guo-hua

    2007-01-01

    A new method to detect surface flaws of steel balls is presented, based on computer techniques of image processing and pattern recognition. Surface flaws of steel balls are the primary factor causing bearing failure. The presented method enables efficient and precise detection of surface flaws, including spots, abrasion, burns, scratches and cracks. The design of the main components of the detection system is described in detail, including the automatic feeding mechanism, the automatic surface-spreading mechanism for the steel ball, the optical system of the microscope, the image acquisition system and the image processing system. The whole automatic system is controlled by an industrial control computer, which can effectively carry out the recognition of flaws on the steel ball's surface.

  1. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David

    2013-12-01

    Nanoimprint Lithography (NIL) is a promising technology for low cost and large scale nanostructure fabrication. This technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. The results are independent of the device that captures the image (optical, confocal or electron microscope). The use of numerical images makes it possible to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.

  2. A study investigating variability of left ventricular ejection fraction using manual and automatic processing modes in a single setting

    International Nuclear Information System (INIS)

    Purpose: A planar multi-gated cardiac blood pool acquisition is a non-invasive technique commonly used to measure left ventricular ejection fraction (LVEF). It is essential that the calculation of LVEF be accurate, repeatable and reproducible for serial monitoring of patients. Different processing modes, which require various degrees of manipulation, may be used in calculating the LVEF. In addition, different operators with varying levels of experience may process the same data set. It is not known whether the inter-operator variability of LVEF values within a single nuclear medicine department has the potential to affect the calculated LVEF and in turn affect patient management. The aim of the study was to determine variability of LVEF values among operators with different levels of experience using two processing modes. Methods: A descriptive cross-sectional study was carried out in a single setting. Four operators with varying levels of experience analysed 120 left anterior oblique projections using manual and automatic processing modes to calculate the LVEF. Inter- and intra-operator correlation was determined. Results: Manual processing showed moderate to strong agreement (r1 = 0.653) between operators. Automatic processing indicated almost perfect (r1 = 0.812) inter-operator correlation. Intra-operator correlation demonstrated a trend of decreasing variability between processing modes with increasing levels of experience. Conclusion: Despite the overall inter-operator agreement, significant intra-operator variability was evident in results from operators with less experience. However, the discrepancies were such that the differences in LVEF would not play a role in patient management. It is recommended that automatic processing be used for determining LVEF to limit inter-operator variability. Additionally, operator experience should be considered in the absence of standardised processing protocols when different processing modes are available in a single setting.

  3. Adaptive Clutch Engaging Process Control for Automatic Mechanical Transmission

    Institute of Scientific and Technical Information of China (English)

    LIU Hai-ou; CHEN Hui-yan; DING Hua-rong; HE Zhong-bo

    2005-01-01

    Based on a detailed analysis of clutch engaging process control targets and adaptive demands, a control strategy based on the speed signal, rather than on the main clutch displacement signal, is put forward. It considers both jerk and slipping work, which are the most commonly used quality indexes for the vehicle starting phase. The adaptive control system and its reference model are discussed in depth. Taking the adaptability to different starting gears and different road conditions as examples, proving ground test records are shown to illustrate the main clutch adaptive control strategy at the starting phase. The proving ground tests give acceptable results.

  4. Automatic diagnosis of pathological myopia from heterogeneous biomedical data.

    Directory of Open Access Journals (Sweden)

    Zhuo Zhang

    Full Text Available Pathological myopia is one of the leading causes of blindness worldwide. The condition is particularly prevalent in Asia. Unlike myopia, pathological myopia is accompanied by degenerative changes in the retina, which if left untreated can lead to irrecoverable vision loss. The accurate diagnosis of pathological myopia will enable timely intervention and facilitate better disease management to slow down the progression of the disease. Current methods of assessment typically consider only one type of data, such as that from retinal imaging. However, different kinds of data, including genetic, demographic and clinical information, may contain different and independent information, which can provide different perspectives on the visually observable, genetic or environmental mechanisms for the disease. The combination of these potentially complementary pieces of information can enhance the understanding of the disease, providing a holistic appreciation of the multiple risk factors as well as improving the detection outcomes. In this study, we propose a computer-aided diagnosis framework for Pathological Myopia diagnosis through Biomedical and Image Informatics (PM-BMII). Through the use of multiple kernel learning (MKL) methods, PM-BMII intelligently fuses heterogeneous biomedical information to improve the accuracy of disease diagnosis. Data from 2,258 subjects of a population-based study, in which demographic and clinical information, retinal fundus imaging data and genotyping data were collected, are used to evaluate the proposed framework. The experimental results show that PM-BMII achieves an AUC of 0.888, outperforming the detection results from the use of demographic and clinical information alone (0.607, an increase of 46.3%, p < 0.005), genotyping data alone (0.774, an increase of 14.7%, p < 0.005) or imaging data alone (0.852, an increase of 4.2%, p = 0.19). The accuracy of the results obtained demonstrates the feasibility of using heterogeneous data for improved disease detection.
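
    The kernel-fusion idea behind PM-BMII can be sketched as follows, with synthetic feature matrices standing in for the clinical, genotyping and imaging data; the fixed kernel weights replace the MKL step that would normally learn them.

    ```python
    # Hedged sketch of kernel fusion: one kernel per data source, a weighted
    # combination, then an SVM on the precomputed kernel. Weights and data are
    # placeholders, not the PM-BMII implementation.
    import numpy as np
    from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n = 200
    X_clinical = rng.normal(size=(n, 10))    # demographic/clinical features
    X_genetic  = rng.normal(size=(n, 50))    # genotyping features
    X_imaging  = rng.normal(size=(n, 30))    # retinal image features
    y = rng.integers(0, 2, size=n)           # pathological myopia label

    K = (0.2 * linear_kernel(X_clinical)
         + 0.3 * rbf_kernel(X_genetic, gamma=0.01)
         + 0.5 * rbf_kernel(X_imaging, gamma=0.05))

    clf = SVC(kernel="precomputed").fit(K, y)
    print(clf.score(K, y))  # training accuracy only, for illustration
    ```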

  5. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage

    Directory of Open Access Journals (Sweden)

    Syed Ali Naqi Gilani

    2016-03-01

    Full Text Available The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally imposed, which adversely affects the detection performance. Buildings that are small, under shadows or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from the point cloud and orthoimagery. The building delineation process is carried out by identifying the candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting the image lines in the building regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate a per-area completeness of 83% to 93% with a correctness above 95%, demonstrating the robustness of the approach. The absence of over-segmentation and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. Compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets and performs better than or equal to its counterparts on the ISPRS benchmark.

  6. A method for unsupervised change detection and automatic radiometric normalization in multispectral data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton John

    2011-01-01

    Rhine-Westphalia, Germany. A link to an example with ASTER data to detect change with the same method after the 2005 Kashmir earthquake is given. The method is also used to automatically normalize multitemporal, multispectral Landsat ETM+ data radiometrically. IDL/ENVI, Python and Matlab software...
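
    A hedged sketch of the radiometric normalization step mentioned in this record: fit a per-band linear mapping to the reference image over pixels assumed to be invariant (in the paper those pixels are selected automatically by the MAD-based change detection; here the mask is simply given).

    ```python
    # Per-band linear radiometric normalization over invariant pixels.
    # The invariant mask and images are synthetic placeholders.
    import numpy as np

    def normalize_band(target: np.ndarray, reference: np.ndarray,
                       invariant_mask: np.ndarray) -> np.ndarray:
        t = target[invariant_mask].ravel()
        r = reference[invariant_mask].ravel()
        slope, intercept = np.polyfit(t, r, deg=1)   # ordinary least squares
        return slope * target + intercept

    ref = np.random.rand(100, 100)
    tgt = 0.8 * ref + 0.1 + 0.01 * np.random.randn(100, 100)  # gain/offset drift
    mask = np.ones_like(ref, dtype=bool)
    print(np.abs(normalize_band(tgt, ref, mask) - ref).mean())  # small residual
    ```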

  7. Automatic data generation scheme for finite-element method /FEDGE/ - Computer program

    Science.gov (United States)

    Akyuz, F.

    1970-01-01

    Algorithm provides for automatic input data preparation for the analysis of continuous domains in the fields of structural analysis, heat transfer, and fluid mechanics. The computer program utilizes the natural coordinate systems concept and the finite element method for data generation.

  8. Cooperative processing data bases

    Science.gov (United States)

    Hasta, Juzar

    1991-01-01

    Cooperative processing for the 1990's using client-server technology is addressed. The main theme is concepts of downsizing from mainframes and minicomputers to workstations on a local area network (LAN). This document is presented in view graph form.

  9. Analysis of Fiber deposition using Automatic Image Processing Method

    Science.gov (United States)

    Belka, M.; Lizal, F.; Jedelsky, J.; Jicha, M.

    2013-04-01

    Fibers are a permanent threat to human health. They are able to penetrate deep into the human lung, deposit there and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways with an inspiratory flow rate of 30 l/min. The replica included the human airways from the oral cavity up to the seventh generation of branching. After the delivery, the deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high definition camera attached to a phase contrast microscope. Results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and deposition fraction and deposition efficiency were calculated afterwards.
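
    A toy sketch of the image-analysis principle the abstract describes, assuming a grayscale microscope image as a NumPy array: binarize and count the connected objects on the filter. The threshold and image are placeholders, not the authors' pipeline.

    ```python
    # Binarize an image and count connected objects (stand-in for fiber counting).
    import numpy as np
    from scipy import ndimage

    def count_objects(image: np.ndarray, threshold: float) -> int:
        binary = image > threshold            # separate bright objects from background
        labeled, n_objects = ndimage.label(binary)
        return n_objects

    img = np.random.rand(512, 512)            # stand-in for a phase-contrast image
    print(count_objects(img, threshold=0.99))
    ```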

  10. Analysis of Fiber deposition using Automatic Image Processing Method

    Directory of Open Access Journals (Sweden)

    Jicha M.

    2013-04-01

    Full Text Available Fibers are a permanent threat to human health. They are able to penetrate deep into the human lung, deposit there and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways with an inspiratory flow rate of 30 l/min. The replica included the human airways from the oral cavity up to the seventh generation of branching. After the delivery, the deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high definition camera attached to a phase contrast microscope. Results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and deposition fraction and deposition efficiency were calculated afterwards.

  11. Towards the development of Hyperspectral Images of trench walls. Robotrench: Automatic Data acquisition

    Science.gov (United States)

    Ragona, D. E.; Minster, B.; Rockwell, T. K.; Fialko, Y.; Bloom, R. G.; Hemlinger, M.

    2004-12-01

    Previous studies on imaging spectrometry of paleoseismological excavations (Ragona et al., 2003, 2004) showed that a low resolution hyperspectral image of a trench wall, processed with a supervised classification algorithm, provided more stratigraphic information than a high-resolution digital photograph of the same exposure. Although the low-resolution images depicted the most important variations, a higher resolution hyperspectral image is necessary to assist in the recognition and documentation of paleoseismic events. Because our spectroradiometer can only acquire one pixel at a time, creating a 25 psi image of a 1 x 1 m area of a trench wall requires 40000 individual measurements. To ease this extensive task we designed and built a device that can automatically position the spectroradiometer probe along the x-z plane of a trench wall. This device, informally named Robotrench, has two 7 feet long axes of motion (horizontal and vertical) commanded by a stepper motor controller board and a laptop computer. A platform provides the set up for the spectroradiometer probe and for the calibrated illumination system. A small circuit provided the interface between the Robotrench motion and the spectroradiometer data collection. At its best, the Robotrench-spectroradiometer symbiotic pair can automatically record 1500-2000 pixels/hour, making the image acquisition process slow but feasible. At the time of this abstract submission only a small calibration experiment had been completed. This experiment was designed to calibrate the X-Z axes and to test the instrument performance. We measured a 20 x 10 cm brick wall at a 25 psi resolution. Three reference marks were set up on the trench wall as control points for the image registration process. The experiment was conducted at night under artificial light (stabilized 2 x 50 W halogen lamps). The data obtained were processed with the Spectral Angle Mapper algorithm. The image recovered from the data showed an
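
    For reference, the Spectral Angle Mapper classification mentioned above assigns each measured spectrum to the reference spectrum with the smallest spectral angle; a minimal sketch with synthetic spectra follows.

    ```python
    # Spectral Angle Mapper (SAM): classify a pixel by the reference spectrum
    # with the smallest angle. Spectra here are synthetic placeholders.
    import numpy as np

    def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    def sam_classify(pixel: np.ndarray, references: np.ndarray) -> int:
        angles = [spectral_angle(pixel, ref) for ref in references]
        return int(np.argmin(angles))

    references = np.random.rand(4, 512)            # 4 endmember spectra, 512 bands
    pixel = references[2] + 0.02 * np.random.randn(512)
    print(sam_classify(pixel, references))         # expected: 2
    ```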

  12. Bayesian Updating in the EEG : Differentiation between Automatic and Controlled Processes of Human Economic Decision Making

    OpenAIRE

    Hügelschäfer, Sabine

    2011-01-01

    Research has shown that economic decision makers often do not behave according to the prescriptions of rationality, but instead show systematic deviations from rational behavior (e.g., Starmer, 2000). One approach to explain these deviations is taking a dual-process perspective (see Evans, 2008; Sanfey & Chang, 2008; Weber & Johnson, 2009) in which a distinction is made between deliberate, resource-consuming controlled processes and fast, effortless automatic processes. In many cases, deviati...

  13. Evaluation of automatic face recognition for automatic border control on actual data recorded of travellers at Schiphol Airport

    NARCIS (Netherlands)

    Spreeuwers, L.J.; Hendrikse, A.J.; Gerritsen, K.J.; Brömme, A.; Busch, C.

    2012-01-01

    Automatic border control at airports using automated facial recognition for checking the passport is becoming more and more common. A problem is that it is not clear how reliable these automatic gates are. Very few independent studies exist that assess the reliability of automated facial recognition

  14. Relatedness Proportion Effects in Semantic Categorization: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2014-01-01

    Semantic priming effects at a short prime-target stimulus onset asynchrony are commonly explained in terms of an automatic spreading activation process. According to this view, the proportion of related trials should have no impact on the size of the semantic priming effect. Using a semantic categorization task ("Is this a living…

  15. Automatic and Manual Processes in End-User Multimedia Authoring Tools: Where is the Balance?

    NARCIS (Netherlands)

    Guimarães, R.L.

    2010-01-01

    This thesis aims to analyze, model, and develop a framework for next-generation multimedia authoring tools targeted to end-users. In particular, I concentrate on the combination of automatic and manual processes for the realization of such framework. My contributions are realized in the context of a

  16. Examining the influence of psychopathy, hostility biases, and automatic processing on criminal offenders' Theory of Mind

    NARCIS (Netherlands)

    L. Nentjes; D. Bernstein; A. Arntz; G. van Breukelen; M. Slaats

    2015-01-01

    Theory of Mind (ToM) is a social perceptual skill that refers to the ability to take someone else's perspective and infer what others think. The current study examined the effect of potential hostility biases, as well as controlled (slow) versus automatic (fast) processing on ToM performance in psyc

  17. The Development of Automatic and Controlled Inhibitory Retrieval Processes in True and False Recall

    Science.gov (United States)

    Knott, Lauren M.; Howe, Mark L.; Wimmer, Marina C.; Dewhurst, Stephen A.

    2011-01-01

    In three experiments, we investigated the role of automatic and controlled inhibitory retrieval processes in true and false memory development in children and adults. Experiment 1 incorporated a directed forgetting task to examine controlled retrieval inhibition. Experiments 2 and 3 used a part-set cue and retrieval practice task to examine…

  18. REALIZATION OF TRAINING PROGRAMME ON THE BASIS OF LINGUISTIC DATABASE FOR AUTOMATIC TEXTS PROCESSING SYSTEM

    Directory of Open Access Journals (Sweden)

    M. A. Makarych

    2016-01-01

    Full Text Available Due to the constant increase of electronic textual information, modern society needs automatic processing of natural language (NL). The main purpose of NL automatic text processing systems is to analyze and create texts and represent their content. The purpose of the paper is the development of the linguistic and software bases of an automatic system for processing English publicistic texts. This article discusses examples of different approaches to the creation of linguistic databases for processing systems. The author gives a detailed description of the basic building blocks of a new linguistic processor: lexical-semantic, syntactical and semantic-syntactical. The main advantage of the processor is the use of special semantic codes in the alphabetical dictionary. The semantic codes have been developed in accordance with a lexical-semantic classification. This helps to precisely define the semantic functions of the keywords situated in parsing groups and allows the automatic system to avoid typical mistakes. The author also presents the realization of the developed linguistic database in the form of a training computer program.

  19. SNPflow: a lightweight application for the processing, storing and automatic quality checking of genotyping assays.

    Science.gov (United States)

    Weissensteiner, Hansi; Haun, Margot; Schönherr, Sebastian; Neuner, Mathias; Forer, Lukas; Specht, Günther; Kloss-Brandstätter, Anita; Kronenberg, Florian; Coassin, Stefan

    2013-01-01

    Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs, which allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both stand-alone version and multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at. PMID:23527209
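
    A small sketch, not taken from SNPflow itself, of the standard quality-control values the record lists for a single SNP: call rate, minor allele frequency and a chi-square Hardy-Weinberg test, with genotypes coded 0/1/2 (minor-allele count) and None for missing calls.

    ```python
    # Per-SNP QC metrics: call rate, MAF and a 1-df chi-square HWE test.
    from scipy.stats import chi2

    def snp_qc(genotypes):
        called = [g for g in genotypes if g is not None]
        call_rate = len(called) / len(genotypes)
        n = len(called)
        p = sum(called) / (2 * n)                  # frequency of the '1' allele
        maf = min(p, 1 - p)

        # observed genotype counts vs. Hardy-Weinberg expectations
        obs = [called.count(0), called.count(1), called.count(2)]
        exp = [n * (1 - p) ** 2, 2 * n * p * (1 - p), n * p ** 2]
        chi_sq = sum((o - e) ** 2 / e for o, e in zip(obs, exp) if e > 0)
        hwe_p = 1 - chi2.cdf(chi_sq, df=1)
        return call_rate, maf, hwe_p

    print(snp_qc([0, 0, 1, 2, 1, None, 0, 1, 2, 0]))
    ```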

  20. SNPflow: a lightweight application for the processing, storing and automatic quality checking of genotyping assays.

    Directory of Open Access Journals (Sweden)

    Hansi Weissensteiner

    Full Text Available Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs, which allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both stand-alone version and multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at.

  1. Processing LHC data

    CERN Multimedia

    CERN IT department

    2013-01-01

    The LHC produces 600 million collisions every second in each detector, which generates approximately one petabyte of data per second. None of today’s computing systems are capable of recording such rates. Hence sophisticated selection systems are used for a first fast electronic pre-selection, only passing one out of 10 000 events. Tens of thousands of processor cores then select 1% of the remaining events. Even after such a drastic data reduction, the four big experiments, ALICE, ATLAS, CMS and LHCb, together need to store over 25 petabytes per year. The LHC data are aggregated in the CERN Data Centre, where initial data reconstruction is performed, and a copy is archived to long-term tape storage. Another copy is sent to several large scale data centres around the world. Subsequently hundreds of thousands of computers from around the world come into action: harnessed in a distributed computing service, they form the Worldwide LHC Computing Grid (WLCG), which provides the resources to store, distribute, an...

  2. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  3. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed

  4. From Automatic to Adaptive Data Acquisition:- towards scientific sensornets

    OpenAIRE

    Chang, Marcus

    2009-01-01

    Sensornets have been used for ecological monitoring the past decade, yet the main driving force behind these deployments are still computer scientists. The denser sampling and added modalities offered by sensornets could drive these fields in new directions, but not until the domain scientists become familiar with sensornets and use them as any other instrument in their toolbox. We explore three different directions in which sensornets can become easier to deploy, collect data of higher quality, and o...

  5. Automatic analysis of eye tracker data from a driving simulator

    OpenAIRE

    Bergstrand, Martin

    2008-01-01

    The movement of a person's eyes is an interesting factor to study in different research areas where attention is important, for example driving. In 2004 the Swedish national road and transport research institute (VTI) introduced Simulator III – their third generation of driving simulators. Inside Simulator III a camera based eye tracking system is installed that records the eye movements of the driver. To be useful, the raw data from the eye tracking system needs to be analyzed and concentrate...

  6. An algorithm for discovering Lagrangians automatically from data

    CERN Document Server

    Hills, D J A; Hudson, J J

    2015-01-01

    An activity fundamental to science is building mathematical models. These models are used to both predict the results of future experiments and gain insight into the structure of the system under study. We present an algorithm that automates the model building process in a scientifically principled way. The algorithm can take observed trajectories from a wide variety of mechanical systems and, without any other prior knowledge or tuning of parameters, predict the future evolution of the system. It does this by applying the principle of least action and searching for the simplest Lagrangian that describes the system's behaviour. By generating this Lagrangian in a human interpretable form, it also provides insight into the working of the system.
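
    The search the abstract describes can be illustrated by its scoring step: given an observed trajectory, measure how well a candidate Lagrangian satisfies the Euler-Lagrange equation along it. The harmonic-oscillator data and candidate forms below are illustrative assumptions, not the authors' procedure.

    ```python
    # Score a candidate Lagrangian L(x, v) on an observed trajectory x(t) by the
    # mean-squared Euler-Lagrange residual d/dt(dL/dv) - dL/dx.
    import numpy as np

    def el_residual(L, x, t):
        v = np.gradient(x, t)
        eps = 1e-6
        dL_dv = (L(x, v + eps) - L(x, v - eps)) / (2 * eps)   # numerical dL/dv
        dL_dx = (L(x + eps, v) - L(x - eps, v)) / (2 * eps)   # numerical dL/dx
        residual = np.gradient(dL_dv, t) - dL_dx
        return np.mean(residual[2:-2] ** 2)   # trim edge effects of np.gradient

    t = np.linspace(0, 10, 2000)
    x = np.cos(2.0 * t)                        # unit-mass oscillator, k = 4

    good = lambda x, v: 0.5 * v**2 - 2.0 * x**2    # correct Lagrangian
    bad  = lambda x, v: 0.5 * v**2 - 0.5 * x**2    # wrong spring constant
    print(el_residual(good, x, t), el_residual(bad, x, t))  # good << bad
    ```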

  7. Big Data Analysis of Manufacturing Processes

    Science.gov (United States)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-11-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results.

  8. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML.

    Science.gov (United States)

    Cusack, Rhodri; Vicente-Grabovetsky, Alejandro; Mitchell, Daniel J; Wild, Conor J; Auer, Tibor; Linke, Annika C; Peelle, Jonathan E

    2014-01-01

    Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, for simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
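
    A toy sketch of the bookkeeping idea behind such a processing engine, assuming hypothetical module names and an arbitrary set of already-completed results; it is not the aa code base (requires Python 3.9+ for graphlib).

    ```python
    # Order modules by declared dependencies and skip those already up to date.
    from graphlib import TopologicalSorter

    modules = {
        "realign":        [],
        "coregister":     ["realign"],
        "normalise":      ["coregister"],
        "smooth":         ["normalise"],
        "firstlevel_glm": ["smooth"],
    }
    done = {"realign", "coregister"}          # results assumed already on disk

    for module in TopologicalSorter(modules).static_order():
        if module in done:
            print(f"skip  {module} (up to date)")
        else:
            print(f"run   {module}")
    ```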

  9. Automatic analysis (aa: efficient neuroimaging workflows and parallel processing using Matlab and XML

    Directory of Open Access Journals (Sweden)

    Rhodri eCusack

    2015-01-01

    Full Text Available Recent years have seen neuroimaging data becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complex to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast and efficient, for simple single subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.

  10. Organ dose calculation in CT based on scout image data and automatic image registration

    Energy Technology Data Exchange (ETDEWEB)

    Kortesniemi, Mika; Salli, Eero; Seuri, Raija [HUS Helsinki Medical Imaging Center, Univ. of Helsinki, Helsinki (Finland)], E-mail: mika.kortesniemi@hus.fi

    2012-10-15

    Background: Computed tomography (CT) has become the main contributor to the cumulative radiation exposure in radiology. Information on the cumulative exposure history of the patient should be available for efficient management of radiation exposures and for radiological justification. Purpose: To develop and evaluate automatic image registration for organ dose calculation in CT. Material and Methods: Planning radiograph (scout) image data describing CT scan ranges from 15 thoracic CT examinations (9 men and 6 women) and 10 abdominal CT examinations (6 men and 4 women) were co-registered with a reference trunk CT scout image. A 2-D affine transformation and a normalized correlation metric were used for image registration. Longitudinal (z-axis) scan range coordinates on the reference scout image were converted into slice locations on the CT-Expo anthropomorphic male and female models, followed by organ and effective dose calculations. Results: The average deviation of the z-location of the studied patient images from the corresponding location in the reference scout image was 6.2 mm. The ranges of organ and effective doses with constant exposure parameters were from 0 to 28.0 mGy and from 7.3 to 14.5 mSv, respectively. The mean deviation of the doses for fully irradiated organs (inside the scan range), partially irradiated organs and non-irradiated organs (outside the scan range) was 1%, 5%, and 22%, respectively, due to image registration. Conclusion: The automated image processing method to register individual chest and abdominal CT scout radiographs with the reference scout radiograph is feasible. It can be used to determine the individual scan range coordinates in the z-direction to calculate organ dose values. The presented method could be utilized in automatic organ dose calculation in CT for radiation exposure tracking of patients.
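
    A simplified sketch of the registration idea, reducing the 2-D affine registration with a normalized correlation metric to a 1-D search for the longitudinal offset that maximizes correlation between scout profiles; the profiles are synthetic.

    ```python
    # Find the z shift of a reference scout profile that best matches a patient
    # profile by normalized correlation (toy stand-in for 2-D affine registration).
    import numpy as np

    def best_z_shift(patient_profile, reference_profile, max_shift=100):
        best, best_score = 0, -np.inf
        for shift in range(-max_shift, max_shift + 1):
            rolled = np.roll(reference_profile, shift)
            score = np.corrcoef(patient_profile, rolled)[0, 1]
            if score > best_score:
                best, best_score = shift, score
        return best, best_score

    ref = np.convolve(np.random.rand(400), np.ones(15) / 15, mode="same")
    pat = np.roll(ref, 37) + 0.01 * np.random.randn(400)
    print(best_z_shift(pat, ref))   # shift close to 37
    ```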

  11. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2013-11-01

    Full Text Available Data registration is a prerequisite for the integration of multi-platform laser scanning in various applications. A new approach is proposed for the semi-automatic registration of airborne and terrestrial laser scanning data with buildings without eaves. Firstly, an automatic calculation procedure for thresholds in the density of projected points (DoPP) method is introduced to extract boundary segments from terrestrial laser scanning data. A new algorithm, using a self-extending procedure, is developed to recover the extracted boundary segments, which then intersect to form the corners of buildings. The building corners extracted from airborne and terrestrial laser scanning are reliably matched through an automatic iterative process in which boundaries from two datasets are compared for the reliability check. The experimental results illustrate that the proposed approach provides both high reliability and high geometric accuracy (average error of 0.44 m/0.15 m in the horizontal/vertical direction for corresponding building corners) for the final registration of airborne laser scanning (ALS) and tripod mounted terrestrial laser scanning (TLS) data.

  12. Automatic testing system design and data analysis of permafrost temperature in Qinghai-Tibet Railway

    Institute of Scientific and Technical Information of China (English)

    尚迎春; 齐红元

    2008-01-01

    Aimed at the characteristics of permafrost temperature that influence the safety of the Qinghai-Tibet Railway and its on-line testing system, and comparing domestic achievements in permafrost study with those worldwide, an automatic testing system for permafrost temperature, comprising a master computer and several slave computers, was designed. By choosing high-precision thermistors as temperature sensors, designing and positioning the depth and interval of the testing sections, having the slave computers test, store and send permafrost temperature data on schedule, and having the master computer receive, process and analyze the collected permafrost temperature data, the change of the permafrost temperature can be described and analyzed, which can provide information for permafrost railway engineering design. Moreover, taking permafrost temperature testing in a certain section of the Qinghai-Tibet Railway as an instance, the collected permafrost temperature data were analyzed and the behavior of the permafrost under the railway was depicted; in addition, a BP model was set up to predict the permafrost characteristics. This testing system will provide timely information about changes in the permafrost to support safe operation of the Qinghai-Tibet Railway.

  13. Automatic and Accurate Conflation of Different Road-Network Vector Data towards Multi-Modal Navigation

    Directory of Open Access Journals (Sweden)

    Meng Zhang

    2016-05-01

    Full Text Available With the rapid improvement of geospatial data acquisition and processing techniques, a variety of geospatial databases from public or private organizations have become available. Quite often, one dataset may be superior to other datasets in one, but not all aspects. In Germany, for instance, there were three major road network vector datasets, viz. Tele Atlas (which is now “TOMTOM”), NAVTEQ (which is now “here”), and ATKIS. However, none of them was qualified for the purpose of multi-modal navigation (e.g., driving + walking): Tele Atlas and NAVTEQ consist of comprehensive routing-relevant information, but many pedestrian ways are missing; ATKIS covers more pedestrian areas but the road objects are not fully attributed. To satisfy the requirements of multi-modal navigation, an automatic approach has been proposed to conflate different road networks together, which involves five routines: (a) road-network matching between datasets; (b) identification of the pedestrian ways; (c) geometric transformation to eliminate geometric inconsistency; (d) topologic remodeling of the conflated road network; and (e) error checking and correction. The proposed approach demonstrates high performance in a number of large test areas and therefore has been successfully utilized for real-world data production in the whole region of Germany. As a result, the conflated road network allows the multi-modal navigation of “driving + walking”.

  14. Automatic cross-talk removal from multi-channel data

    CERN Document Server

    Allen, B; Ottewill, A; Allen, Bruce; Hua, Wensheng; Ottewill, Adrian

    1999-01-01

    A technique is described for removing interference from a signal of interest ("channel 1") which is one of a set of N time-domain instrumental signals ("channels 1 to N"). We assume that channel 1 is a linear combination of "true" signal plus noise, and that the "true" signal is not correlated with the noise. We also assume that part of this noise is produced, in a poorly-understood way, by the environment, and that the environment is monitored by channels 2 to N. Finally, we assume that the contribution of channel n to channel 1 is described by an (unknown!) linear transfer function R_n(t-t'). Our technique estimates the R_i and provides a way to subtract the environmental contamination from channel 1, giving an estimate of the "true" signal which minimizes its variance. It also provides some insights into how the environment is contaminating the signal of interest. The method is illustrated with data from a prototype interferometric gravitational-wave detector, in which the channel of interest (differential...
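
    A toy sketch of the subtraction idea, simplified to a single instantaneous coupling coefficient per environmental channel (instead of the full transfer functions R_n) estimated by least squares; the signals are synthetic.

    ```python
    # Estimate how each environmental channel leaks into channel 1 and subtract
    # the predicted contamination. A deliberately simplified, hedged sketch.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    signal = np.sin(np.linspace(0, 60, n))            # "true" signal of interest
    env = rng.normal(size=(3, n))                     # channels 2..4 (environment)
    channel1 = signal + 0.8 * env[0] - 0.3 * env[2] + 0.05 * rng.normal(size=n)

    # least-squares estimate of the coupling coefficients
    coeffs, *_ = np.linalg.lstsq(env.T, channel1, rcond=None)
    cleaned = channel1 - env.T @ coeffs

    print(np.std(channel1 - signal), np.std(cleaned - signal))  # residual noise drops
    ```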

  15. Automatic detection of alpine rockslides in continuous seismic data using hidden Markov models

    Science.gov (United States)

    Dammeier, Franziska; Moore, Jeffrey R.; Hammer, Conny; Haslinger, Florian; Loew, Simon

    2016-02-01

    Data from continuously recording permanent seismic networks can contain information about rockslide occurrence and timing complementary to eyewitness observations and thus aid in construction of robust event catalogs. However, detecting infrequent rockslide signals within large volumes of continuous seismic waveform data remains challenging and often requires demanding manual intervention. We adapted an automatic classification method using hidden Markov models to detect rockslide signals in seismic data from two stations in central Switzerland. We first processed 21 known rockslides, with event volumes spanning 3 orders of magnitude and station event distances varying by 1 order of magnitude, which resulted in 13 and 19 successfully classified events at the two stations. Retraining the models to incorporate seismic noise from the day of the event improved the respective results to 16 and 19 successful classifications. The missed events generally had low signal-to-noise ratio and small to medium volumes. We then processed nearly 14 years of continuous seismic data from the same two stations to detect previously unknown events. After postprocessing, we classified 30 new events as rockslides, of which we could verify three through independent observation. In particular, the largest new event, with estimated volume of 500,000 m3, was not generally known within the Swiss landslide community, highlighting the importance of regional seismic data analysis even in densely populated mountainous regions. Our method can be easily implemented as part of existing earthquake monitoring systems, and with an average event detection rate of about two per month, manual verification would not significantly increase operational workload.
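
    A hedged sketch of HMM-based classification using the third-party hmmlearn package: train one Gaussian HMM per class and assign a new observation sequence to the model with the higher log-likelihood. Feature extraction is reduced to random placeholders and this is not the authors' pipeline.

    ```python
    # One Gaussian HMM per class; classify a windowed feature stream by the
    # model with the higher log-likelihood. Features are synthetic placeholders.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    rockslide_features = rng.normal(1.0, 1.0, size=(500, 6))   # e.g. spectral bands
    noise_features     = rng.normal(0.0, 1.0, size=(500, 6))

    rockslide_model = hmm.GaussianHMM(n_components=4).fit(rockslide_features)
    noise_model     = hmm.GaussianHMM(n_components=4).fit(noise_features)

    candidate = rng.normal(1.0, 1.0, size=(120, 6))             # new observation sequence
    is_rockslide = rockslide_model.score(candidate) > noise_model.score(candidate)
    print(is_rockslide)
    ```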

  16. Towards Automatic Music Transcription: Extraction of MIDI-Data out of Polyphonic Piano Music

    Directory of Open Access Journals (Sweden)

    Jens Wellhausen

    2005-06-01

    Full Text Available Driven by the increasing amount of music available electronically, the need for automatic search and retrieval systems for music becomes more and more important. In this paper an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications and music analysis. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined to extract the notes played. An algorithm for chord separation based on Independent Subspace Analysis is presented. Finally, the results are used to build a MIDI file.

  17. GPU applications for data processing

    Energy Technology Data Exchange (ETDEWEB)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); Aleksandrov, Andrey [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); INFN sezione di Napoli, I-80125 Napoli (Italy); Tioukov, Valeri [INFN sezione di Napoli, I-80125 Napoli (Italy)

    2015-12-31

    Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. The new approaches in developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  18. Automatic perceptual simulation of first language meanings during second language sentence processing in bilinguals.

    Science.gov (United States)

    Vukovic, Nikola; Williams, John N

    2014-01-01

    Research supports the claim that, when understanding language, people perform mental simulation using those parts of the brain which support sensation, action, and emotion. A major criticism of the findings quoted as evidence for embodied simulation, however, is that they could be a result of conscious image generation strategies. Here we exploit the well-known fact that bilinguals routinely and automatically activate both their languages during comprehension to test whether this automatic process is, in turn, modulated by embodied simulatory processes. Dutch participants heard English sentences containing interlingual homophones and implying specific distance relations, and had to subsequently respond to pictures of objects matching or mismatching this implied distance. Participants were significantly slower to reject critical items when their perceptual features matched said distance relationship. These results suggest that bilinguals not only activate task-irrelevant meanings of interlingual homophones, but also automatically simulate these meanings in a detailed perceptual fashion. Our study supports the claim that embodied simulation is not due to participants' conscious strategies, but is an automatic component of meaning construction.

  19. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML

    OpenAIRE

    Rhodri Cusack; Alejandro Vicente-Grabovetsky; Daniel J Mitchell; Peelle, Jonathan E.

    2015-01-01

    Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by m...

  20. Necessary Processing of Personal Data

    DEFF Research Database (Denmark)

    Tranberg, Charlotte Bagger

    2006-01-01

    The Data Protection Directive prohibits processing of sensitive data (racial or ethnic origin, political, religious or philosophical convictions, trade union membership and information on health and sex life). All other personal data may be processed, provided processing is deemed necessary...... Handelsgesellschaft. The aim of this article is to clarify the necessity requirement of the Data Protection Directive in terms of the general principle of proportionality. The usefulness of the principle of proportionality as the standard by which processing of personal data may be weighed is illustrated by the Peck...

  1. Automatic detection of zebra crossings from mobile LiDAR data

    Science.gov (United States)

    Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.

    2015-07-01

    An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested to be applied for road management purposes. The algorithm consists of several subsequent processes starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painting area from the rest of the pavement, median filtering to avoid noisy points, and mathematical morphology to fill the gaps between the pixels in the border of white marks. Once the road marking is detected, its position is calculated. This information is valuable for inventorying purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. That test showed a completeness of 83%. Non-detected marks mainly come from painting deterioration of the zebra crossing or by occlusions in the point cloud produced by other vehicles on the road.
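
    A simplified sketch of the processing chain described above using OpenCV on a synthetic intensity raster: Otsu binarization of the bright paint, median filtering, morphological closing and a standard Hough transform on the stripe edges. Parameter values are placeholders, not those of the published algorithm.

    ```python
    # Binarize, denoise, close gaps, then look for stripe edges with HoughLines.
    import cv2
    import numpy as np

    # synthetic stand-in for a rasterized LiDAR intensity image with bright stripes
    img = np.zeros((400, 400), dtype=np.uint8)
    for x0 in range(60, 340, 60):
        cv2.rectangle(img, (x0, 150), (x0 + 30, 260), color=255, thickness=-1)
    img = cv2.add(img, (np.random.rand(400, 400) * 40).astype(np.uint8))

    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.medianBlur(binary, 5)                           # suppress noisy points
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 7))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # fill small gaps

    edges = cv2.Canny(binary, 50, 150)
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80)
    print(0 if lines is None else len(lines), "candidate stripe edges")
    ```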

  2. Progress on statistical learning systems as data mining tools for the creation of automatic databases in Fusion environments

    International Nuclear Information System (INIS)

    Nowadays, processing all information of a fusion database is a much more important issue than acquiring more data. Although typically fusion devices produce tens of thousands of discharges, specialized databases for physics studies are normally limited to a few tens of shots. This is due to the fact that these databases are almost always generated manually, which is a very time consuming and unreliable activity. The development of automatic methods to create specialized databases ensures first, the reduction of human efforts to identify and locate physical events, second, the standardization of criteria (reducing the vulnerability to human errors) and, third, the improvement of statistical relevance. Classification and regression techniques have been used for these purposes. The objective has been the automatic recognition of physical events (that can appear in a random and/or infrequent way) in waveforms and video-movies. Results are shown for the JET database.

  3. Evaluation of Automatic Building Detection Approaches Combining High Resolution Images and LiDAR Data

    Directory of Open Access Journals (Sweden)

    Javier Estornell

    2011-06-01

    Full Text Available In this paper, two main approaches for automatic building detection and localization using high spatial resolution imagery and LiDAR data are compared and evaluated: thresholding-based and object-based classification. The thresholding-based approach is founded on the establishment of two threshold values: one refers to the minimum height to be considered as building, defined using the LiDAR data, and the other refers to the presence of vegetation, which is defined according to the spectral response. The other approach follows the standard scheme of object-based image classification: segmentation, feature extraction and selection, and classification, here performed using decision trees. In addition, the effect of the inclusion in the building detection process of contextual relations with the shadows is evaluated. Quality assessment is performed at two different levels: area and object. Area-level evaluates the building delineation performance, whereas object-level assesses the accuracy in the spatial location of individual buildings. The results obtained show a high efficiency of the evaluated methods for building detection techniques, in particular the thresholding-based approach, when the parameters are properly adjusted and adapted to the type of urban landscape considered.
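
    A minimal sketch of the thresholding-based approach under the stated assumptions: label as building every raster cell that is higher than a minimum height in the LiDAR-derived nDSM and below a maximum NDVI; the arrays and threshold values are placeholders.

    ```python
    # Threshold-based building mask from a normalized DSM and an NDVI raster.
    import numpy as np

    def building_mask(ndsm: np.ndarray, ndvi: np.ndarray,
                      min_height: float = 2.5, max_ndvi: float = 0.3) -> np.ndarray:
        return (ndsm > min_height) & (ndvi < max_ndvi)

    ndsm = np.random.rand(100, 100) * 12.0     # heights above ground in metres
    ndvi = np.random.rand(100, 100) * 2 - 1    # NDVI in [-1, 1]
    mask = building_mask(ndsm, ndvi)
    print(mask.sum(), "candidate building cells")
    ```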

  4. Entropy algorithm for automatic detection of oil spill from radarsat-2 SAR data

    International Nuclear Information System (INIS)

    Synthetic aperture radar (SAR) is a valuable foundation for oil spill detection, surveying and monitoring, and various approaches have been used to improve oil spill detection from it. The main objective of this work is to design an automatic detection procedure for oil spills in synthetic aperture radar (SAR) satellite data. To this end, an entropy algorithm tool was designed to investigate the occurrence of an oil spill in the Gulf of Mexico using RADARSAT-2 SAR satellite data. The study shows that the entropy algorithm provides an accurate pattern of the oil slick in SAR data. This is shown by 90% for oil spill, 3% for look-alikes and 7% for sea roughness using the receiver operating characteristic (ROC) curve. It can therefore be concluded that the entropy algorithm can be used as an automatic tool for oil spill detection in RADARSAT-2 SAR data.
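
    As a hedged illustration of an entropy-based texture measure (not necessarily the exact algorithm used in the record), the sketch below computes local entropy on a synthetic SAR-like image with scikit-image; dark, homogeneous slick areas show lower local entropy than the rougher surrounding sea.

    ```python
    # Local entropy on a synthetic SAR-like amplitude image; low-entropy pixels
    # are candidate slick areas. The scene and threshold are placeholders.
    import numpy as np
    from skimage.filters.rank import entropy
    from skimage.morphology import disk

    sar = (np.random.rand(256, 256) * 255).astype(np.uint8)   # rough open sea
    sar[100:150, 80:200] = np.uint8(20)                        # smooth dark "slick"

    ent = entropy(sar, disk(5))                 # local entropy in a 5-pixel radius
    slick_candidates = ent < ent.mean() - ent.std()
    print(slick_candidates.sum(), "low-entropy pixels")
    ```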

  5. Big data processing with Hadoop

    OpenAIRE

    Wu, Shiqi

    2015-01-01

    Computing technology has changed the way we work, study, and live. The distributed data processing technology is one of the popular topics in the IT field. It provides a simple and centralized computing platform by reducing the cost of the hardware. The characteristics of distributed data processing technology have changed the whole industry. Hadoop, as the open source project of Apache foundation, is the most representative platform of distributed big data processing. The Hadoop distribu...

  6. Evaluation of automatic building detection approaches combining high resolution images and LiDAR data

    OpenAIRE

    Javier Estornell; Recio, Jorge A.; Txomin Hermosilla; Ruiz, Luis A.

    2011-01-01

    In this paper, two main approaches for automatic building detection and localization using high spatial resolution imagery and LiDAR data are compared and evaluated: thresholding-based and object-based classification. The thresholding-based approach is founded on the establishment of two threshold values: one refers to the minimum height to be considered as building, defined using the LiDAR data, and the other refers to the presence of vegetation, which is defined according to the spectral re...

  7. SDPG: Spatial Data Processing Grid

    Institute of Scientific and Technical Information of China (English)

    XIAO Nong(肖侬); FU Wei(付伟)

    2003-01-01

    Spatial applications will gain high complexity as the volume of spatial data increases rapidly. A suitable data processing and computing infrastructure for spatial applications needs to be established. Over the past decade, grid has become a powerful computing environment for data intensive and computing intensive applications. Integrating grid computing with spatial data processing technology, the authors designed a spatial data processing grid (called SDPG) to address the related problems. Requirements of spatial applications are examined and the architecture of SDPG is described in this paper. Key technologies for implementing SDPG are discussed with emphasis.

  8. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study

    OpenAIRE

    Tongran Liu; Tong Xiao; Xiaoyan Li; Jiannong Shi

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross change...

  9. Automatic Clustering of Flow Cytometry Data with Density-Based Merging

    Directory of Open Access Journals (Sweden)

    Guenther Walther

    2009-01-01

    made this technology ubiquitous and indispensable in the clinical and laboratory setting. A current limit to the potential of this technology is the lack of automated tools for analyzing the resulting data. We describe methodology and software to automatically identify cell populations in flow cytometry data. Our approach advances the paradigm of manually gating sequential two-dimensional projections of the data to a procedure that automatically produces gates based on statistical theory. Our approach is nonparametric and can reproduce nonconvex subpopulations that are known to occur in flow cytometry samples, but which cannot be produced with current parametric model-based approaches. We illustrate the methodology with a sample of mouse spleen and peritoneal cavity cells.
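
    The paper's density-based merging procedure is not reproduced here; as a loose analogue, the sketch below applies an off-the-shelf density-based clustering algorithm (DBSCAN from scikit-learn) to two synthetic scatter channels, to show automatic, nonparametric gating that can recover irregularly shaped populations.

```python
# Analogous density-based clustering (not the paper's merging method) applied
# to synthetic flow-cytometry-like events in two channels.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
pop_a = rng.normal([200, 400], 20, size=(1000, 2))    # compact population
pop_b = rng.normal([600, 150], 35, size=(800, 2))     # broader second population
events = np.vstack([pop_a, pop_b])                    # two scatter-like channels

# Density-based labels: -1 marks sparse events treated as noise/debris.
labels = DBSCAN(eps=25, min_samples=15).fit_predict(events)
for lab in sorted(set(labels)):
    name = "noise" if lab == -1 else f"population {lab}"
    print(name, int((labels == lab).sum()))
```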

  10. NMRFx Processor: a cross-platform NMR data processing program.

    Science.gov (United States)

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A

    2016-08-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.

  11. NMRFx Processor: a cross-platform NMR data processing program.

    Science.gov (United States)

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A

    2016-08-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis. PMID:27457481

  12. Automatic Descriptor-Based Co-Registration of Frame Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Maria Vakalopoulou

    2014-04-01

    Full Text Available Frame hyperspectral sensors, in contrast to push-broom or line-scanning ones, produce hyperspectral datasets with, in general, better geometry but with unregistered spectral bands. Being acquired at different instances and due to platform motion and movements (UAVs, aircraft, etc.), every spectral band is displaced and acquired with a different geometry. The automatic and accurate registration of hyperspectral datasets from frame sensors remains a challenge. Powerful local feature descriptors, when computed over the spectrum, fail to extract enough correspondences to successfully complete the registration procedure. To this end, we propose a generic and automated framework which decomposes the problem and enables the efficient computation of a sufficient amount of accurate correspondences over the given spectrum, without using any ancillary data (e.g., from GPS/IMU). First, the spectral bands are divided into spectral groups according to their wavelength. The spectral borders of each group are not strict and their formulation allows certain overlaps. The spectral variance and proximity determine the applicability of every spectral band to act as a reference during the registration procedure. The proposed decomposition allows the descriptor and the robust estimation process to deliver numerous inliers. The search space of possible solutions has been effectively narrowed by sorting and selecting the optimal spectral bands which, in an unsupervised manner, can quickly recover the hypercube's geometry. The developed approach has been qualitatively and quantitatively evaluated with six different datasets obtained by frame sensors onboard aerial platforms and UAVs. Experimental results appear promising.
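
    A minimal sketch of descriptor-based registration between two spectral bands using OpenCV: SIFT keypoints are matched and a homography is estimated with RANSAC. The file names are hypothetical stand-ins for two bands of a frame-sensor hypercube, and this simplified version does not include the paper's spectral grouping step.

```python
# Descriptor-based band-to-band registration sketch with OpenCV (>= 4.4 for SIFT).
import cv2
import numpy as np

# Hypothetical file names; substitute two spectral-band images from the frame sensor.
band_ref = cv2.imread("band_550nm.tif", cv2.IMREAD_GRAYSCALE)
band_mov = cv2.imread("band_850nm.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(band_ref, None)
kp2, des2 = sift.detectAndCompute(band_mov, None)

# Cross-checked brute-force matching of the SIFT descriptors.
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
# Robust (RANSAC) homography keeps only geometrically consistent correspondences.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
registered = cv2.warpPerspective(band_mov, H, (band_ref.shape[1], band_ref.shape[0]))
```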

  13. An object-based classification method for automatic detection of lunar impact craters from topographic data

    Science.gov (United States)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement to study past geological processes such as impact history. They are also used as proxies for measuring relative ages of various planetary or satellite bodies and help to understand the evolution of planetary surfaces. In this paper, we present a new method using an object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology developed in a training area in parts of Mare Imbrium, in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied in another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.

  14. Fully automatic characterization and data collection from crystals of biological macromolecules

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier, E-mail: nurizzo@esrf.fr [European Synchrotron Radiation Facility, 71 Avenue des Martyrs, CS 40220, 38043 Grenoble (France); Bowler, Matthew W., E-mail: nurizzo@esrf.fr [European Molecular Biology Laboratory, Grenoble Outstation, 71 Avenue des Martyrs, CS 90181, 38042 Grenoble (France); Université Grenoble Alpes–EMBL–CNRS, Grenoble Outstation, 71 Avenue des Martyrs, CS 90181, 38042 Grenoble (France); European Synchrotron Radiation Facility, 71 Avenue des Martyrs, CS 40220, 38043 Grenoble (France)

    2015-07-31

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  15. Automatic differentiation tools in the dynamic simulation of chemical engineering processes

    Directory of Open Access Journals (Sweden)

    Castro M.C.

    2000-01-01

    Full Text Available Automatic differentiation is a relatively recent technique for the differentiation of functions that is applied directly to the source code computing the function, written in standard programming languages. The technique permits the automation of the differentiation step, which is crucial for dynamic simulation and optimization of processes. The values of the derivatives obtained with AD are exact (to roundoff). The theoretical exactness of AD comes from the fact that it uses the same rules of differentiation as in differential calculus, but these rules are applied to an algorithmic specification of the function rather than to a formula. The main purpose of this contribution is to discuss the impact of Automatic Differentiation in the field of dynamic simulation of chemical engineering processes. The influence of the differentiation technique on the behavior of the integration code, the performance of the generated code and the incorporation of AD tools in consistent initialization tools are discussed from the viewpoint of dynamic simulation of typical models in chemical engineering.
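
    To make the "exact to roundoff" point concrete, here is a minimal forward-mode automatic differentiation sketch using dual numbers in Python; it is a generic illustration of the technique, not the AD tool discussed in the paper.

```python
# Forward-mode AD with dual numbers: each value carries its derivative and the
# chain rule is applied operation by operation, so the result is exact to
# roundoff (no symbolic formula, no finite differences).
import math

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def exp(x):
    # d/dx exp(x) = exp(x), propagated through the chain rule
    return Dual(math.exp(x.val), math.exp(x.val) * x.der)

# d/dx [x * exp(x) + 3x] evaluated at x = 1.5
x = Dual(1.5, 1.0)                    # seed derivative dx/dx = 1
y = x * exp(x) + 3 * x
print(y.val, y.der)                   # derivative equals exp(x)*(1 + x) + 3
```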

  16. Automatic Identification of Critical Data Items in a Database to Mitigate the Effects of Malicious Insiders

    Science.gov (United States)

    White, Jonathan; Panda, Brajendra

    A major concern for computer system security is the threat from malicious insiders who target and abuse critical data items in the system. In this paper, we propose a solution to enable automatic identification of critical data items in a database by way of data dependency relationships. This identification of critical data items is necessary because insider threats often target mission critical data in order to accomplish malicious tasks. Unfortunately, currently available systems fail to address this problem in a comprehensive manner. It is more difficult for non-experts to identify these critical data items because of their lack of familiarity and due to the fact that data systems are constantly changing. By identifying the critical data items automatically, security engineers will be better prepared to protect what is critical to the mission of the organization and also have the ability to focus their security efforts on these critical data items. We have developed an algorithm that scans the database logs and forms a directed graph showing which items influence a large number of other items and at what frequency this influence occurs. This graph is traversed to reveal the data items which have a large influence throughout the database system by using a novel metric based formula. These items are critical to the system because if they are maliciously altered or stolen, the malicious alterations will spread throughout the system, delaying recovery and causing a much more malignant effect. As these items have significant influence, they are deemed to be critical and worthy of extra security measures. Our proposal is not intended to replace existing intrusion detection systems, but rather is intended to complement current and future technologies. Our proposal has never been performed before, and our experimental results have shown that it is very effective in revealing critical data items automatically.
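
    The paper's metric formula is not given in the abstract, so the sketch below only illustrates the general idea: build a weighted influence graph from logged dependencies and rank items by a simple combination of influence frequency and reachability. The log format and scoring rule are invented for illustration.

```python
# Toy criticality ranking from an influence log (illustrative format and metric).
from collections import defaultdict

# Illustrative log of influence edges: (writer_item, influenced_item), e.g.
# "price was read while a transaction updated total".
log_edges = [("price", "tax"), ("price", "total"), ("tax", "total"),
             ("price", "total"), ("qty", "total")]

graph = defaultdict(set)          # item -> items it directly influences
weight = defaultdict(int)         # (src, dst) -> how often that influence occurs
for src, dst in log_edges:
    graph[src].add(dst)
    weight[(src, dst)] += 1

def reachable(node, seen=None):
    """All items transitively influenced by `node`."""
    seen = set() if seen is None else seen
    for nxt in graph.get(node, ()):
        if nxt not in seen:
            seen.add(nxt)
            reachable(nxt, seen)
    return seen

# Simple criticality score: frequency of direct influence plus breadth of reach.
scores = {n: sum(weight[(n, d)] for d in graph[n]) + len(reachable(n))
          for n in list(graph)}
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(item, score)            # top items are candidates for extra protection
```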

  17. The automatic conservative: ideology-based attentional asymmetries in the processing of valenced information.

    Directory of Open Access Journals (Sweden)

    Luciana Carraro

    Full Text Available Research has widely explored the differences between conservatives and liberals, and it has also recently been demonstrated that conservatives display different reactions toward valenced stimuli. However, previous studies have not yet fully illuminated the cognitive underpinnings of these differences. In the current work, we argued that political ideology is related to selective attention processes, so that negative stimuli are more likely to automatically grab the attention of conservatives as compared to liberals. In Experiment 1, we demonstrated that negative (vs. positive) information impaired the performance of conservatives, more than liberals, in an Emotional Stroop Task. This finding was confirmed in Experiment 2 and in Experiment 3 employing a Dot-Probe Task, demonstrating that threatening stimuli were more likely to attract the attention of conservatives. Overall, results support the conclusion that people embracing conservative views of the world display an automatic selective attention for negative stimuli.

  18. Experimental Data Processing. Part 2

    Directory of Open Access Journals (Sweden)

    Wilhelm LAURENZI

    2011-03-01

    Full Text Available This paper represents the second part of a study regarding the processing of experimental monofactorial data, and it presents the original program developed by the author for processing experimental data. Using established methods and relations, this program allows establishing the number of samples, generating the experimental plan, entering and saving the measured data, identifying the data corrupted by aberrant errors, verifying the randomness, verifying the normality of the data distribution, calculating the main statistical parameters and exporting the experimental data to Excel or to other programs for statistical data processing.

  19. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    Science.gov (United States)

    Zlokazov, V. B.

    2013-05-01

    and estimation of parameters of interest. VACTIV can run on any standard modern laptop. Reasons for the new version: At the time of its creation (1999) VACTIV was seemingly the first attempt to apply the newest programming languages and styles to systems of spectrum analysis. Its goal was to both get a convenient and efficient technique for data processing, and to elaborate the formalism of spectrum analysis in terms of classes, their properties, their methods and events of an object-oriented programming language. Summary of revisions: Compared with ACTIV, VACTIV preserves all the mathematical algorithms, but provides the user with all the benefits of an interface, based on a graphical dialog. It allows him to make a quick intervention in the work of the program; in particular, to carry out the on-line control of the fitting process: depending on the intermediate results and using the visual form of data representation, to change the conditions for the fitting and so achieve the optimum performance, selecting the optimum strategy. To find the best conditions for the fitting one can compress the spectrum, delete the blunders from it, smooth it using a high-frequency spline filter and build the background using a low-frequency spline filter; use not only automatic methods for the blunder deletion, the peak search, the peak model forming and the calibration, but also use manual mouse clicking on the spectrum graph. Restrictions: To enhance the reliability and portability of the program the majority of the most important arrays have a static allocation; all the arrays are allocated with a surplus, and the total pool of the program is restricted only by the size of the computer virtual memory. A spectrum has the static size of 32 K real words. The maximum size of the least-square matrix is 314 (the maximum number of fitted parameters per one analyzed spectrum interval, not for the whole spectrum), from which it follows that the maximum number of peaks in one spectrum

  20. Process Mining Online Assessment Data

    Science.gov (United States)

    Pechenizkiy, Mykola; Trcka, Nikola; Vasilyeva, Ekaterina; van der Aalst, Wil; De Bra, Paul

    2009-01-01

    Traditional data mining techniques have been extensively applied to find interesting patterns, build descriptive and predictive models from large volumes of data accumulated through the use of different information systems. The results of data mining can be used for getting a better understanding of the underlying educational processes, for…

  1. Big Data in Market Research: Why More Data Does Not Automatically Mean Better Information

    Directory of Open Access Journals (Sweden)

    Bosch Volker

    2016-11-01

    Full Text Available Big data will change market research at its core in the long term because consumption of products and media can be logged electronically more and more, making it measurable on a large scale. Unfortunately, big data datasets are rarely representative, even if they are huge. Smart algorithms are needed to achieve high precision and prediction quality for digital and non-representative approaches. Also, big data can only be processed with complex and therefore error-prone software, which leads to measurement errors that need to be corrected. Another challenge is posed by missing but critical variables. The amount of data can indeed be overwhelming, but it often lacks important information. The missing observations can only be filled in by using statistical data imputation. This requires an additional data source with the additional variables, for example a panel. Linear imputation is a statistical procedure that is anything but trivial. It is an instrument to “transport information,” and the higher the observed data correlates with the data to be imputed, the better it works. It makes structures visible even if the depth of the data is limited.

  2. AN AUTOMATIC PROCEDURE FOR COMBINING DIGITAL IMAGES AND LASER SCANNER DATA

    OpenAIRE

    Moussa, W.; Abdel-Wahab, M.; D. Fritsch

    2012-01-01

    Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning techniques aims at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution and recovering simple structures with less geometric details. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach com...

  3. Automatic calibration and signal switching system for the particle beam fusion research data acquisition facility

    International Nuclear Information System (INIS)

    This report describes both the hardware and software components of an automatic calibration and signal switching system (Autocal) for the data acquisition system of the Sandia particle beam fusion research accelerators Hydra, Proto I, and Proto II. The Autocal hardware consists of off-the-shelf commercial equipment. The various hardware components, special modifications and the overall system configuration are described. Special software has been developed to support the Autocal hardware. Software operation and maintenance are described.

  4. Graphical Language for Data Processing

    Science.gov (United States)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as LIDAR, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
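
    A minimal sketch of the interpreted execution model described above, written in Python rather than the .NET classes the system actually uses; node names, operations and the ready-input scheduling rule are illustrative only.

```python
# Dataflow-graph sketch: each node wraps a function, "wires" name the upstream
# nodes whose outputs feed it, and the interpreter runs a node once all of its
# inputs are available.
class Node:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, list(inputs)

def execute(nodes):
    """Interpret the graph: repeatedly run nodes whose inputs are ready."""
    results, pending = {}, list(nodes)
    while pending:
        for node in list(pending):
            if all(dep in results for dep in node.inputs):
                results[node.name] = node.func(*[results[d] for d in node.inputs])
                pending.remove(node)
    return results

graph = [
    Node("load",   lambda: [1.0, 2.0, 5.0, 3.0]),                  # toy point source
    Node("scale",  lambda pts: [p * 10 for p in pts], inputs=["load"]),
    Node("filter", lambda pts: [p for p in pts if p > 15], inputs=["scale"]),
]
print(execute(graph)["filter"])   # -> [20.0, 50.0, 30.0]
```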

  5. Modeling, Learning, and Processing of Text Technological Data Structures

    CERN Document Server

    Kühnberger, Kai-Uwe; Lobin, Henning; Lüngen, Harald; Storrer, Angelika; Witt, Andreas

    2012-01-01

    Researchers in many disciplines have been concerned with modeling textual data in order to account for texts as the primary information unit of written communication. The book “Modelling, Learning and Processing of Text-Technological Data Structures” deals with this challenging information unit. It focuses on theoretical foundations of representing natural language texts as well as on concrete operations of automatic text processing. Following this integrated approach, the present volume includes contributions to a wide range of topics in the context of processing of textual data. This relates to the learning of ontologies from natural language texts, the annotation and automatic parsing of texts as well as the detection and tracking of topics in texts and hypertexts. In this way, the book brings together a wide range of approaches to procedural aspects of text technology as an emerging scientific discipline.

  6. Influence of the automatic regulator parameters on the power transition processes of the IBR-2 reactor

    International Nuclear Information System (INIS)

    With the help of IBR-2 reactor models based on a block structure with z-transformation of variables and experimentally determined feedback parameters, the power transition processes at various values of the automatic regulator (AR) parameters are calculated. It is shown that, for regular reactivity disturbances, the best transition processes correspond to the greatest speed of the AR with the AR smoothing unit eliminated. Recommendations for the selection of the AR parameters are given for the random reactivity disturbances that occur during normal operation of the IBR-2 reactor. (author)

  7. REAL TIME DATA PROCESSING FRAMEWORKS

    Directory of Open Access Journals (Sweden)

    Yash Sakaria

    2015-09-01

    Full Text Available On a business level, everyone wants to get hold of the business value and other organizational advantages that big data has to offer. Analytics has arisen as the primary path to business value from big data. Hadoop is not just a storage platform for big data; it’s also a computational and processing platform for business analytics. Hadoop is, however, unsuccessful in fulfilling business requirements when it comes to live data streaming. The initial architecture of Apache Hadoop did not solve the problem of live stream data mining. In summary, the traditional view that equates big data with Hadoop is false; focus needs to be given to business value as well. Data warehousing, Hadoop and stream processing complement each other very well. In this paper, we review a few frameworks and products that provide real-time data streaming through modifications to Hadoop.

  8. Reliability Engineering for ATLAS Petascale Data Processing on the Grid

    CERN Document Server

    Golubkov, D V; The ATLAS collaboration; Vaniachine, A V

    2012-01-01

    The ATLAS detector is in its third year of continuous LHC running taking data for physics analysis. A starting point for ATLAS physics analysis is reconstruction of the raw data. First-pass processing takes place shortly after data taking, followed later by reprocessing of the raw data with updated software and calibrations to improve the quality of the reconstructed data for physics analysis. Data reprocessing involves a significant commitment of computing resources and is conducted on the Grid. The reconstruction of one petabyte of ATLAS data with 1B collision events from the LHC takes about three million core-hours. Petascale data processing on the Grid involves millions of data processing jobs. At such scales, the reprocessing must handle a continuous stream of failures. Automatic job resubmission recovers transient failures at the cost of CPU time used by the failed jobs. Orchestrating ATLAS data processing applications to ensure efficient usage of tens of thousands of CPU-cores, reliability engineering ...

  9. REAL TIME DATA PROCESSING FRAMEWORKS

    OpenAIRE

    Yash Sakaria; Chetashri Bhadane

    2015-01-01

    On a business level, everyone wants to get hold of the business value and other organizational advantages that big data has to offer. Analytics has arisen as the primitive path to business value from big data. Hadoop is not just a storage platform for big data; it’s also a computational and processing platform for business analytics. Hadoop is, however, unsuccessful in fulfilling business requirements when it comes to live data streaming. The initial architecture of Apache Hadoop did not solv...

  10. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet and c) the organization and structure of the data base. (Author) 4 refs

  11. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.
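
    The tool's own algorithms are not described in detail in the record; as a generic illustration of locating and extracting tabular data from an HTML page, the sketch below uses BeautifulSoup on an invented market-price table.

```python
# Generic HTML table extraction sketch; the snippet and column meanings are
# invented for illustration, not taken from the described platform.
from bs4 import BeautifulSoup

html = """
<table>
  <tr><th>Product</th><th>Market</th><th>Price (CNY/kg)</th></tr>
  <tr><td>Tilapia</td><td>Shanghai</td><td>12.4</td></tr>
  <tr><td>Carp</td><td>Wuhan</td><td>9.8</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
rows = []
for tr in soup.find_all("tr"):
    cells = [c.get_text(strip=True) for c in tr.find_all(["th", "td"])]
    if cells:
        rows.append(cells)

header, records = rows[0], rows[1:]
for rec in records:
    print(dict(zip(header, rec)))    # filtered, structured representation
```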

  12. BRICORK: an automatic machine with image processing for the production of corks

    Science.gov (United States)

    Davies, Roger; Correia, Bento A. B.; Carvalho, Fernando D.; Rodrigues, Fernando C.

    1991-06-01

    The production of cork stoppers from raw cork strip is a manual and labour-intensive process in which a punch-operator quickly inspects all sides of the cork strip for defects and decides where to punch out stoppers. He then positions the strip underneath a rotating tubular cutter and punches out the stoppers one at a time. This procedure is somewhat subjective and prone to error, being dependent on the judgement and accuracy of the operator. This paper describes the machine being developed jointly by Mecanova, Laboratorio Nacional de Engenharia e Tecnologia (LNETI) and Empresa de Investigação e Desenvolvimento de Electronica SA (EID) which automatically processes cork strip introduced by an unskilled operator. The machine uses both image processing and laser inspection techniques to examine the strip. Defects in the cork are detected and categorised in order to determine regions where stoppers may be punched. The precise locations are then automatically optimised for best usage of the raw material (quantity and quality of stoppers). In order to achieve the required speed of production these image processing techniques may be implemented in hardware. The paper presents results obtained using the vision system software under development together with descriptions of both the image processing and mechanical aspects of the proposed machine.

  13. Automatic Inspection and Processing of Accessory Based on Vision Stitching and Spectral Illumination

    Directory of Open Access Journals (Sweden)

    Wen-Yang Chang

    2014-08-01

    Full Text Available The study investigates automatic inspection and processing of stem accessories based on vision stitching and spectral illumination. The vision stitching mainly involves algorithms for white balance, scale-invariant feature transform (SIFT) and roundness for whole-image automatic accessory inspection. The illumination intensities, angles, and spectral analyses of the light sources are analyzed for optimal image inspection. Unrealistic color casts in feature inspection are removed using a white balance algorithm for global automatic adjustment. SIFT is used to extract and detect the image features for large image stitching. The Hough transform is used to detect the parameters of a circle for the roundness of the bicycle accessories. The feature inspections of a stem include geometry size, roundness, and image stitching. Results showed that the maximum errors at 0°, 10°, 30°, and 50° for the spectral illumination of white-light LED arrays with differential shift displacements are 4.4%, 4.2%, 6.8%, and 3.5%, respectively. The deviation error of image stitching for the stem accessory in the x and y coordinates is 2 pixels. SIFT and RANSAC enable transformation of the stem image into local feature coordinates.

  14. Automatic Generation of Data Types for Classification of Deep Web Sources

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.

  15. Linear Processes for Functional Data

    OpenAIRE

    Mas, André; Pumo, Besnik

    2009-01-01

    Linear processes on functional spaces were born about fifteen years ago. And this original topic went through the same fast development as the other areas of functional data modeling such as PCA or regression. They aim at generalizing to random curves the classical ARMA models widely known in time series analysis. They offer a wide spectrum of models suited to the statistical inference on continuous time stochastic processes within the paradigm of functional data. Es...

  16. Visa: AN Automatic Aware and Visual Aids Mechanism for Improving the Correct Use of Geospatial Data

    Science.gov (United States)

    Hong, J. H.; Su, Y. T.

    2016-06-01

    With the fast growth of internet-based sharing mechanisms and OpenGIS technology, users nowadays enjoy the luxury to quickly locate and access a variety of geospatial data for the tasks at hand. While this sharing innovation tremendously expands the possibilities of application and reduces the development cost, users nevertheless have to deal with all kinds of "differences" implicitly hidden behind the acquired georesources. We argue that the next generation of GIS-based environments, regardless of whether internet-based or not, must have built-in knowledge to automatically and correctly assess the fitness of data use and present the analyzed results to users in an intuitive and meaningful way. The VISA approach proposed in this paper refers to four different types of visual aids that can be respectively used for addressing analyzed results, namely, virtual layer, informative window, symbol transformation and augmented TOC. The VISA-enabled interface works in an automatic-aware fashion, where the standardized metadata serve as the known facts about the selected geospatial resources, algorithms analyze the differences in temporality and quality of the geospatial resources, and the analyzed results are automatically transformed into visual aids. It successfully presents a new way for bridging the communication gaps between systems and users. GIS has long been seen as a powerful integration tool, but its achievements would be highly restricted if it fails to provide a friendly and correct working platform.

  17. Performing the processing required for automatically get a PDF/A version of the CERN Library documentation

    CERN Document Server

    Molina Garcia-Retamero, Antonio

    2015-01-01

    The aim of the project was to perform the processing required to automatically obtain a PDF/A version of the CERN Library documentation. For this, it is necessary to extract as much metadata as possible from the source files and inject the required data into the original source files, creating new ones ready to be compiled with all related dependencies. Besides, I’ve proposed the creation of an HTML version consistent with the PDF and navigable for easy access; I’ve been trying to perform some Natural Language Processing for extracting metadata; and I’ve proposed the injection of the CERN Library documentation into the HTML version of the long writeups where it is referenced (for instance, when a CERN Library function is referenced in a sample code). Finally, I’ve designed and implemented a Graphical User Interface in order to simplify the process for the user.

  18. Design of a modern automatic control system for the activated sludge process in wastewater treatment

    Institute of Scientific and Technical Information of China (English)

    Alexandros D. Kotzapetros; Panayotis A. Paraskevas; Athanasios S. Stasinakis

    2015-01-01

    The Activated Sludge Process (ASP) exhibits highly nonlinear properties. The design of an automatic control system that is robust against disturbance of inlet wastewater flow rate and has short process settling times is a chal enging matter. The proposed control method is an I-P modified controller automatic control system with state variable feedback and control canonical form simulation diagram for the process. A more stable response is achieved with this type of modern control. Settling times of 0.48 days are achieved for the concentration of microorganisms, (reference value step increase of 50 mg·L−1) and 0.01 days for the concentration of oxygen (reference value step increase of 0.1 mg·L−1). Fluctuations of concentrations of oxygen and microorganisms after an inlet disturbance of 5 × 103m3·d−1 are smal . Changes in the reference values of oxygen and microorganisms (increases by 10%, 20%and 30%) show satisfactory response of the system in al cases. Changes in the value of inlet wastewater flow rate disturbance (increases by 10%, 25%, 50%and 100%) are stabilized by the control system in short time. Maximum percent overshoot is also taken in consideration in all cases and the largest value is 25%which is acceptable. The proposed method with I-P controller is better for disturbance rejection and process settling times compared to the same method using PI control er. This method can substitute optimal control systems in ASP.

  19. High speed television camera system processes photographic film data for digital computer analysis

    Science.gov (United States)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  20. Digital Data Processing of Stilbene

    International Nuclear Information System (INIS)

    Stilbene is a proven spectrometric detector for mixed fields of neutrons and gamma rays. By digital processing of the shapes of output pulses from the detector, it is possible to obtain information about the energy of the interacting neutron/photon and to distinguish which of these two particles interacted in the detector. Further numerical processing of the digital data can yield the energy spectrum of both components of the mixed field. The quality of the digitized data is highly dependent on the parameters of the hardware used for digitization and on the quality of the software processing. Our results also show how the quality of the particle type identification depends on the sampling rate as well as on the method of processing the sampled data. (authors)
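
    The record does not specify the discrimination algorithm, so the following sketch shows one common digital approach (charge comparison): the ratio of the tail integral to the total integral of each digitized pulse separates neutron from gamma events. Pulse shape, gate lengths and the decision threshold are synthetic placeholders.

```python
# Charge-comparison pulse-shape discrimination sketch on a synthetic pulse.
import numpy as np

def psd_ratio(samples, peak_idx, short_gate=10, long_gate=60):
    """Tail-to-total charge ratio; larger for neutron (proton-recoil) pulses."""
    total = samples[peak_idx:peak_idx + long_gate].sum()
    tail = samples[peak_idx + short_gate:peak_idx + long_gate].sum()
    return tail / total if total > 0 else 0.0

t = np.arange(200)
pulse = np.exp(-(t - 50) / 8.0) * (t >= 50)     # toy fast-decay scintillation pulse
pulse += 0.01 * np.random.randn(t.size)          # digitiser noise

ratio = psd_ratio(pulse, peak_idx=50)
particle = "neutron" if ratio > 0.15 else "gamma"   # threshold would come from calibration
print(ratio, particle)
```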

  1. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Science.gov (United States)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information. PMID:26375031

  2. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Directory of Open Access Journals (Sweden)

    Tongran Liu

    Full Text Available The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information.

  3. Study on Rear-end Real-time Data Quality Control Method of Regional Automatic Weather Station

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The aim was to study the rear-end real-time data quality control method of regional automatic weather station. [Method] The basic content and steps of rear-end real-time data quality control of regional automatic weather station were introduced. Each element was treated with systematic quality control procedure. The existence of rear-end real time data of regional meteorological station in Guangxi was expounded. Combining with relevant elements and linear changes, improvement based on traditiona...

  4. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    Science.gov (United States)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce 3D high-fidelity road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design are then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  5. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children. Pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords the opportunity for clinicians to rapidly, and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be automatically created using the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols. PMID:26518205

  6. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children. Pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords the opportunity for clinicians to rapidly, and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be automatically created using the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols.

  7. Key issues in automatic classification of defects in post-inspection review process of photomasks

    Science.gov (United States)

    Pereira, Mark; Maji, Manabendra; Pai, Ravi R.; B. V. R., Samir; Seshadri, R.; Patil, Pradeepkumar

    2012-11-01

    very small real defects, registering grey-level defect images with the layout database, automatically finding the maximum critical dimension (CD) variation for defective patterns (where patterns could have Manhattan as well as all-angle edges), etc. This paper discusses many such key issues and suggests strategies to address some of them, based upon our experience while developing the NxADC and evaluating it on production mask defects.

  8. Data Processing: Large Data Processing Class Leads to Innovations

    Science.gov (United States)

    Stair, Ralph M.; Render, Barry

    1977-01-01

    Experience with mass sections in the Introduction to Business Data Processing course at the University of New Orleans has been positive. The innovations described in this article have not only helped to conserve scarce resources but have, in the author's opinion, provided the potential for a more effective and efficient course. (HD)

  9. Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents

    CERN Document Server

    Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L

    2008-01-01

    Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...

  10. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald

    2013-01-10

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention of other forms of surface reconstruction or of image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.

  11. Automatic geocoding of high-value targets using structural image analysis and GIS data

    Science.gov (United States)

    Soergel, Uwe; Thoennessen, Ulrich

    1999-12-01

    Geocoding based merely on navigation data and a sensor model is often not possible or precise enough. In these cases an improvement of the preregistration through image-based approaches is a solution. Due to the large amount of data in remote sensing, automatic geocoding methods are necessary. For geocoding purposes, appropriate tie points, which are present in both image and map, have to be detected and matched. The tie points are the basis of the transformation function. Assigning the tie points is a combinatorial problem that depends on the number of tie points. This number can be reduced using structural tie points like corners or crossings of prominent extended targets (e.g., harbors, airfields). Additionally, the reliability of the tie points is improved. Our approach extracts structural tie points independently in the image and in the vector map by a model-based image analysis. The vector map is provided by a GIS using the ATKIS data base. The model parameters are extracted from maps or collateral information of the scenario. The two sets of tie points are automatically matched with a Geometric Hashing algorithm. The algorithm was successfully applied to VIS, IR and SAR data.

  12. Are scalar implicatures automatically processed and different for each individual? A mismatch negativity (MMN) study.

    Science.gov (United States)

    Zhao, Ming; Liu, Tao; Chen, Gang; Chen, Feiyan

    2015-03-01

    Scalar implicatures are ordinarily activated in human communication when the speaker uses a weak expression (e.g., some) from a set of stronger alternatives (e.g., many, all). It has been debated whether scalar inferences are generated by default. To clarify this issue and examine whether individual pragmatic ability affects the mechanism of scalar inference processing, we performed an experiment with an MMN paradigm to capture the neurophysiological indicators of automatic processing of spoken sentences, and divided participants into high and low pragmatic ability groups. Experimental results showed that, compared with the condition in which an informative sentence ("Some animals have tails") was the deviant stimulus, when an underinformative sentence ("Some tigers have tails") was the deviant stimulus, the high pragmatic ability group showed mismatch negativity (MMN) and sustained negativity, while the low pragmatic ability group showed no ERP effects. These results indicate that at least some people can automatically activate scalar implicatures when encountering scalar trigger words, even in an inattentive state. PMID:25542387

  13. Automatic License Plate Recognition System Based on Image Processing Using LabVIEW

    Directory of Open Access Journals (Sweden)

    Rachana Chahar

    2014-05-01

    Full Text Available An automatic license plate recognition (ALPR) system is one kind of intelligent transport system and is of considerable interest because of its potential applications in highway electronic toll collection and traffic monitoring systems. It allows traffic fines to be automatically generated and sent to the appropriate violator without the need for human intervention. An ALPR system can be located at the side of or above a roadway, at a toll booth, or at another type of entrance way. All ALPR systems follow a basic high-level process. The process starts when a sensor detects the presence of a vehicle and signals the system camera to record an image of the passing vehicle. The image is passed on to a computer, where software extracts the license plate number from the image. License plate numbers can then be recorded in a database together with other information such as the time the vehicle passed and its speed. Finally, a chain-code concept with different parameters is used for recognition of the characters. The performance of the proposed algorithm has been tested on real images. The proposed system has been implemented using Vision Assistant & LabVIEW.

  14. Abnormalities in Automatic Processing of Illness-Related Stimuli in Self-Rated Alexithymia.

    Directory of Open Access Journals (Sweden)

    Laura Brandt

    Full Text Available To investigate abnormalities in automatic information processing related to self- and observer-rated alexithymia, especially with regard to somatization, controlling for confounding variables such as depression and affect. 89 healthy subjects (60% female, aged 19-71 years, M = 32.1) participated; 58 subjects were additionally rated by an observer. Measures: alexithymia (self-rating: TAS-20, observer rating: OAS); automatic information processing (priming task including verbal [illness-related, negative, positive, neutral] and facial [negative, positive, neutral] stimuli); somatoform symptoms (SOMS-7T); confounders: depression (BDI) and affect (PANAS). Higher self-reported alexithymia scores were associated with lower reaction times for negative (r = .19, p < .10) and positive (r = .26, p < .05) verbal primes when the target was illness-related. Self-reported alexithymia was correlated with the number (r = .42, p < .01) and intensity (r = .36, p < .01) of current somatoform symptoms, but unrelated to observer-rated alexithymia (r = .11, p = .42). Results indicate a faster allocation of attentional resources away from task-irrelevant information towards illness-related stimuli in alexithymia. Considering the close relationship between alexithymia and somatization, these findings are compatible with the theoretical view that alexithymics focus strongly on bodily sensations of emotional arousal. A single observer rating (OAS) does not seem to be an adequate alexithymia measure in community samples.

  15. Automatic Gauge Control in Rolling Process Based on Multiple Smith Predictor Models

    Directory of Open Access Journals (Sweden)

    Jiangyun Li

    2014-01-01

    Full Text Available The automatic rolling process is a high-speed system which always requires high-speed control and communication capabilities. Meanwhile, it is also a typical complex electromechanical system, and distributed control has become the mainstream of computer control systems for rolling mills. Generally, the control system adopts a 2-level control structure—basic automation (Level 1) and process control (Level 2)—to achieve automatic gauge control. In Level 1, there is always a certain distance between the roll gap of each stand and the thickness testing point, leading to a time delay in gauge control. The Smith predictor is a method for coping with time-delay systems, but practical feedback control based on the traditional Smith predictor cannot achieve the ideal control result, because the time delay is hard to measure precisely and in some situations it may vary within a certain range. In this paper, based on the adaptive Smith predictor, we employ multiple models to cover the uncertainties of the time delay. The optimal model is selected by the proposed switching mechanism. Simulations show that the proposed multiple Smith model method exhibits excellent performance in improving the control result, even for systems with jumping time delay.
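
    The sketch below illustrates the multiple-model idea in a minimal form: a bank of Smith predictor models, each assuming a different transport delay, shares one delay-free internal model, and a simple switching rule selects the candidate whose delayed prediction best matches the measured output. The plant parameters, candidate delays, scoring window and PI gains are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

# Multiple-model Smith predictor sketch: all parameters are illustrative.
a, b = 0.9, 0.1                      # first-order plant: y[k] = a*y[k-1] + b*u[k-1-d]
d_true = 7                           # true transport delay (unknown to the controller)
candidate_delays = [3, 5, 7, 9]      # delays covered by the model bank
window = 20                          # samples used to score each candidate delay
kp, ki = 0.6, 0.05                   # PI gains acting on the Smith feedback signal

steps, setpoint = 300, 1.0
u_hist = np.zeros(steps)             # applied control inputs
y0_hist = np.zeros(steps)            # delay-free internal model outputs
y, y0, integ = 0.0, 0.0, 0.0
err = {d: [] for d in candidate_delays}

for k in range(steps):
    # plant and delay-free model respond to previously applied inputs
    u_plant = u_hist[k - 1 - d_true] if k - 1 - d_true >= 0 else 0.0
    y = a * y + b * u_plant
    u_model = u_hist[k - 1] if k - 1 >= 0 else 0.0
    y0 = a * y0 + b * u_model
    y0_hist[k] = y0

    # each candidate model predicts y as the delay-free output shifted by its delay
    for d in candidate_delays:
        y_pred = y0_hist[k - d] if k - d >= 0 else 0.0
        err[d].append(abs(y_pred - y))
    best = min(candidate_delays, key=lambda d: np.mean(err[d][-window:]))

    # Smith feedback: delay-free prediction plus mismatch of the selected model
    mismatch = y - (y0_hist[k - best] if k - best >= 0 else 0.0)
    e = setpoint - (y0 + mismatch)
    integ += e
    u_hist[k] = kp * e + ki * integ

print("selected delay:", best, "output approaches setpoint:", round(y, 3))
```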

  16. Automatic Prompt System in the Process of Mapping plWordNet on Princeton WordNet

    Directory of Open Access Journals (Sweden)

    Paweł Kędzia

    2015-06-01

    Full Text Available The paper offers a critical evaluation of the power and usefulness of an automatic prompt system based on the extended Relaxation Labelling algorithm in the process of (manual) mapping of plWordNet onto Princeton WordNet. To this end, the results of manual mapping – that is, inter-lingual relations between plWN and PWN synsets – are juxtaposed with the automatic prompts that were generated for the source language synsets to be mapped. We check the number and type of inter-lingual relations introduced on the basis of automatic prompts and the distance of the respective prompt synsets from the actual target language synsets.

  17. Automatic electricity markets data extraction for realistic multi-agent simulations

    DEFF Research Database (Denmark)

    Pereira, Ivo F.; Sousa, Tiago M.; Praca, Isabel;

    2014-01-01

    markets data available on-line; capability of dealing with different file formats and types, some of them inserted by the user, resulting from information obtained not on-line but based on the possible collaboration with market entities; definition and implementation of database gathering information from...... different market sources, even including different market types; machine learning approach for automatic definition of downloads periodicity of new information available on-line. This is a crucial tool to go a step forward in electricity markets simulation, since the integration of this database...

  18. Interpreting sign components from accelerometer and sEMG data for automatic sign language recognition.

    Science.gov (United States)

    Li, Yun; Chen, Xiang; Zhang, Xu; Wang, Kongqiao; Yang, Jihai

    2011-01-01

    The identification of constituent components of each sign gesture is a practical way of establishing large-vocabulary sign language recognition (SLR) system. Aiming at developing such a system using portable accelerometer (ACC) and surface electromyographic (sEMG) sensors, this work proposes a method for automatic SLR at the component level. The preliminary experimental results demonstrate the effectiveness of the proposed method and the feasibility of interpreting sign components from ACC and sEMG data. Our study improves the performance of SLR based on ACC and sEMG sensors and will promote the realization of a large-vocabulary portable SLR system. PMID:22255059

  19. Automatic Ethical Filtering using Semantic Vectors Creating Normative Tag Cloud from Big Data

    Directory of Open Access Journals (Sweden)

    Ahsan N. Khan

    2015-03-01

    Full Text Available Ethical filtering has been a painful and controversial issue, seen from different angles worldwide. Stalwarts of freedom find ever newer methods to circumvent banned URLs, while the generative power of the Internet outpaces the velocity of censorship. Hence, keeping online content safe from anti-religious and sexually provocative material is a growing issue in conservative countries in Asia and the Middle East. Solutions for online ethical filters are linearly upper-bounded given computation and big-data growth scales. In this scenario, Semantic Vectors are applied as automatic ethical filters, and accuracy and efficiency metrics are calculated. The results show a normative tag cloud generated with performance superior to industry solutions.

  20. Fast data processing with Spark

    CERN Document Server

    Sankar, Krishna

    2015-01-01

    Fast Data Processing with Spark - Second Edition is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have had problems that were too big to be dealt with on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of either Java, Scala, or Python.

  1. Automatic Segmentation of Raw LIDAR Data for Extraction of Building Roofs

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2014-04-01

    Full Text Available Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging data. Using the ground height from a DEM (digital elevation model, the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the building mask. A cluster of points usually represents an individual building or tree. During segmentation, the planar roof segments are extracted from each cluster of points and refined using rules, such as the coplanarity of points and their locality. Planes on trees are removed using information, such as area and point height difference. Experimental results on nine areas of six different data sets show that the proposed method can successfully remove vegetation and, so, offers a high success rate for building detection (about 90% correctness and completeness and roof plane extraction (about 80% correctness and completeness, when LIDAR point density is as low as four points/m2. Thus, the proposed method can be exploited in various applications.
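
    A rough sketch of the first two steps described above (ground/non-ground separation against a DEM, then clustering of non-ground points into individual buildings or trees) is given below; the grid geometry, the 2.5 m height threshold and the DBSCAN parameters are assumptions for illustration rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def split_and_cluster(points, dem, cell_size, origin, height_thr=2.5):
    """points: (N, 3) array of x, y, z; dem: 2D array of ground heights."""
    cols = ((points[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((points[:, 1] - origin[1]) / cell_size).astype(int)
    rows = np.clip(rows, 0, dem.shape[0] - 1)
    cols = np.clip(cols, 0, dem.shape[1] - 1)
    above_ground = points[:, 2] - dem[rows, cols]

    ground_mask = above_ground < height_thr          # ground points
    non_ground = points[~ground_mask]

    # cluster non-ground points; each cluster roughly one building or tree
    labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(non_ground[:, :2])
    return ground_mask, non_ground, labels

# toy example: flat ground at z = 0 with one elevated "roof" blob
rng = np.random.default_rng(0)
ground_pts = np.column_stack([rng.uniform(0, 50, 2000), rng.uniform(0, 50, 2000), rng.normal(0, 0.1, 2000)])
roof_pts = np.column_stack([rng.uniform(20, 30, 500), rng.uniform(20, 30, 500), rng.normal(6, 0.2, 500)])
pts = np.vstack([ground_pts, roof_pts])
dem = np.zeros((50, 50))
mask, non_ground, labels = split_and_cluster(pts, dem, cell_size=1.0, origin=(0.0, 0.0))
print(non_ground.shape[0], "non-ground points in", len(set(labels) - {-1}), "cluster(s)")
```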

  2. [Data integration, data mining and visualization analysis of traditional Chinese medicine manufacturing process].

    Science.gov (United States)

    Li, Zheng; Kang, Li-Yuan; Fan, Xiao-Hui

    2014-08-01

    A huge amount of data becomes available from the pharmaceutical manufacturing process with the wide application of industrial automatic control technology in the traditional Chinese medicine (TCM) industry. This industrial big data provides golden opportunities to better understand the manufacturing process and improve process performance. It is therefore important to implement data integration and management systems in TCM plants to easily collect, integrate, store, analyze, communicate and visualize the data with high efficiency. This can break down data islands and uncover useful information and knowledge to improve manufacturing process performance. The key supporting technologies for TCM manufacturing and industrial big data management are introduced in this paper, with a specific focus on data mining and visualization technologies. Using historical data collected from a manufacturing plant of Shengmai injection of the SZYY group, we illustrate the usefulness and discuss future prospects of data mining and visualization technologies. PMID:25507568

  3. Long-term abacus training induces automatic processing of abacus numbers in children.

    Science.gov (United States)

    Du, Fenglei; Yao, Yuan; Zhang, Qiong; Chen, Feiyan

    2014-01-01

    Abacus-based mental calculation (AMC) is a unique strategy for arithmetic that is based on the mental abacus. AMC experts can solve calculation problems with extraordinarily fast speed and high accuracy. Previous studies have demonstrated that abacus experts show superior performance and special neural correlates during numerical tasks. However, most of those studies focused on the perception and cognition of Arabic numbers; it remains unclear how abacus numbers are perceived. By applying a similar enumeration Stroop task, in which participants are presented with a visual display containing two abacus numbers and asked to compare the numerosity of the beads that constitute each abacus number, the present study investigated the automatic processing of the numerical value of abacus numbers in abacus-trained children. The results demonstrated a significant congruity effect in the numerosity comparison task for abacus-trained children, in both reaction time and error rate analyses. These results suggest that the numerical value of abacus numbers is perceived automatically by abacus-trained children after long-term training.

  5. Towards fully automatic modelling of the fracture process in quasi-brittle and ductile materials: a unified crack growth criterion

    Institute of Scientific and Technical Information of China (English)

    Zhen-jun YANG; Guo-hua LIU

    2008-01-01

    Fully automatic finite element (FE) modelling of the fracture process in quasi-brittle materials such as concrete and rocks and ductile materials such as metals and alloys, is of great significance in assessing structural integrity and presents tremendous challenges to the engineering community. One challenge lies in the adoption of an objective and effective crack propagation criterion. This paper proposes a crack propagation criterion based on the principle of energy conservation and the cohesive zone model (CZM). The virtual crack extension technique is used to calculate the differential terms in the criterion. A fully-automatic discrete crack modelling methodology, integrating the developed criterion, the CZM to model the crack, a simple remeshing procedure to accommodate crack propagation, the J2 flow theory implemented within the incremental plasticity framework to model the ductile materials, and a local arc-length solver to the nonlinear equation system, is developed and implemented in an in-house program. Three examples, i.e., a plain concrete beam with a single shear crack, a reinforced concrete (RC) beam with multiple cracks and a compact-tension steel specimen, are simulated. Good agreement between numerical predictions and experimental data is found, which demonstrates the applicability of the criterion to both quasi-brittle and ductile materials.

  6. Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis

    Science.gov (United States)

    Gupta, Manish

    1992-01-01

    Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of

  7. Automatic machine learning based prediction of cardiovascular events in lung cancer screening data

    Science.gov (United States)

    de Vos, Bob D.; de Jong, Pim A.; Wolterink, Jelmer M.; Vliegenthart, Rozemarijn; Wielingen, Geoffrey V. F.; Viergever, Max A.; Išgum, Ivana

    2015-03-01

    Calcium burden determined in CT images acquired in lung cancer screening is a strong predictor of cardiovascular events (CVEs). This study investigated whether subjects undergoing such screening who are at risk of a CVE can be identified using automatic image analysis and subject characteristics. Moreover, the study examined whether these individuals can be identified using solely image information, or whether a combination of image and subject data is needed. A set of 3559 male subjects from the Dutch-Belgian lung cancer screening trial was included. Low-dose non-ECG-synchronized chest CT images acquired at baseline were analyzed (1834 scanned in the University Medical Center Groningen, 1725 in the University Medical Center Utrecht). Aortic and coronary calcifications were identified using previously developed automatic algorithms. A set of features describing the number, volume and size distribution of the detected calcifications was computed. The age of the participants was extracted from image headers. Features describing participants' smoking status, smoking history and past CVEs were obtained. CVEs that occurred within three years after imaging were used as the outcome. Support vector machine classification was performed employing different feature sets: either only image features, or a combination of image and subject-related characteristics. Classification based solely on the image features resulted in an area under the ROC curve (Az) of 0.69. A combination of image and subject features resulted in an Az of 0.71. The results demonstrate that subjects undergoing lung cancer screening who are at risk of CVE can be identified using automatic image analysis. Adding subject information slightly improved the performance.
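
    The comparison described above can be sketched as follows with scikit-learn: an SVM is scored by cross-validated ROC AUC (Az) once on image-derived calcification features alone and once on image plus subject features. The synthetic data, feature counts and effect sizes are placeholders, not the screening-trial data or the study's preprocessing.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1000
image_feats = rng.normal(size=(n, 5))        # stand-ins for number/volume/size of calcifications
subject_feats = rng.normal(size=(n, 3))      # stand-ins for age, smoking history, past CVE
risk = image_feats[:, 0] + 0.5 * subject_feats[:, 0] + rng.normal(scale=1.5, size=n)
y = (risk > np.percentile(risk, 80)).astype(int)   # ~20% experience an event

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
az_image = cross_val_score(clf, image_feats, y, cv=5, scoring="roc_auc").mean()
az_combined = cross_val_score(clf, np.hstack([image_feats, subject_feats]), y, cv=5, scoring="roc_auc").mean()
print(f"image only: Az={az_image:.2f}, image + subject: Az={az_combined:.2f}")
```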

  8. A Paper on Automatic Fabrics Fault Processing Using Image Processing Technique In MATLAB

    Directory of Open Access Journals (Sweden)

    R.Thilepa

    2011-02-01

    Full Text Available The main objective of this paper is to elaborate how defective fabric parts can be processed using Matlab with image processing techniques. In a developing country like India, and especially in Tamilnadu, Tirupur, the knitwear capital of the country, has yielded a major income for the country for three decades. The city also employs, directly or indirectly, more than 3 lakh people and earns an income of almost 12,000 crores per annum for the country, as it has for the past three decades [2]. To upgrade this process, the faults present on fabrics processed in textile mills can be identified using Matlab with image processing techniques. The image processing is done using Matlab 7.3; for the acquired image, noise filtering, histogram and thresholding techniques are applied and the output is obtained in this paper. This research thus implements a textile defect detector with a system vision methodology in image processing.
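
    A minimal Python stand-in for the pipeline named above (noise filtering, histogram analysis and thresholding) is sketched below on a synthetic fabric image; it is only an illustration of the idea, not the paper's Matlab 7.3 implementation, and the filter size and defect contrast are assumed values.

```python
import numpy as np
from scipy.ndimage import median_filter

def otsu_threshold(img):
    """Pick the threshold maximizing between-class variance of the histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist.astype(float) / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# synthetic fabric: uniform texture with noise and one dark defect patch
rng = np.random.default_rng(1)
fabric = np.full((200, 200), 180.0) + rng.normal(0, 10, (200, 200))
fabric[80:100, 50:120] -= 90                      # the "fault"
fabric = np.clip(fabric, 0, 255)

denoised = median_filter(fabric, size=3)          # noise filtering
t = otsu_threshold(denoised)                      # histogram-based threshold
defect_mask = denoised < t                        # dark pixels flagged as defects
print("threshold:", t, "defect pixels:", int(defect_mask.sum()))
```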

  9. Automatic detection of referral patients due to retinal pathologies through data mining.

    Science.gov (United States)

    Quellec, Gwenolé; Lamard, Mathieu; Erginay, Ali; Chabouis, Agnès; Massin, Pascale; Cochener, Béatrice; Cazuguel, Guy

    2016-04-01

    With the increased prevalence of retinal pathologies, automating the detection of these pathologies is becoming more and more relevant. In the past few years, many algorithms have been developed for the automated detection of a specific pathology, typically diabetic retinopathy, using eye fundus photography. No matter how good these algorithms are, we believe many clinicians would not use automatic detection tools focusing on a single pathology and ignoring any other pathology present in the patient's retinas. To solve this issue, an algorithm for characterizing the appearance of abnormal retinas, as well as the appearance of the normal ones, is presented. This algorithm does not focus on individual images: it considers examination records consisting of multiple photographs of each retina, together with contextual information about the patient. Specifically, it relies on data mining in order to learn diagnosis rules from characterizations of fundus examination records. The main novelty is that the content of examination records (images and context) is characterized at multiple levels of spatial and lexical granularity: 1) spatial flexibility is ensured by an adaptive decomposition of composite retinal images into a cascade of regions, 2) lexical granularity is ensured by an adaptive decomposition of the feature space into a cascade of visual words. This multigranular representation allows for great flexibility in automatically characterizing normality and abnormality: it is possible to generate diagnosis rules whose precision and generalization ability can be traded off depending on data availability. A variation on usual data mining algorithms, originally designed to mine static data, is proposed so that contextual and visual data at adaptive granularity levels can be mined. This framework was evaluated in e-ophtha, a dataset of 25,702 examination records from the OPHDIAT screening network, as well as in the publicly-available Messidor dataset. It was successfully

  10. Summer Student Work Project Report: SCADA Bridge Tool Development Automatically Capturing Data from SCADA to the Maintenance System

    CERN Document Server

    Alhambra-Moron, Alfonso

    2015-01-01

    The main purpose of this report is to summarize the work project I have been doing at CERN during the last 3 months as a Summer Student. My name is Alfonso Alhambra Morón and on 8 June 2015 I joined the EN-HE-LM team as a summer student supervised by Damien Lafarge in order to collaborate in the automation of the transfer of meter readings from SCADA to Infor EAM, the computerized maintenance management system at CERN. The main objective of my project was to enable automatic updates of meters in Infor EAM by fetching data from SCADA, so as to automate a process which was previously done manually and consumed resources: the meters had to be consulted physically, the information imported into Infor EAM by hand, and the errors that can occur when doing all of this manually detected and corrected. This problem is shared by several other teams at CERN apart from the Lift Maintenance team, and for this reason the main target I had when developing my solution was flexibility and scalability so as to make...

  11. Automatic Estimation of Live Coffee Leaf Infection Based on Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Eric Hitimana

    2014-02-01

    Full Text Available Image segmentation is the most challenging issue in computer vision applications, and a major difficulty for crop management in agriculture is the lack of appropriate methods for detecting leaf damage for pest treatment. In this paper we propose an automatic method for leaf damage detection and severity estimation of coffee leaves that avoids defoliation. After enhancing the contrast of the original image using LUT-based gamma correction, the image is processed to remove the background, and the output leaf is clustered using fuzzy c-means segmentation in the V channel of the YUV color space to maximize leaf damage detection; finally, the severity of the leaf is estimated as the ratio of leaf pixel distribution between the normal leaf and the detected leaf damage. The results of each proposed step were compared to current research, and the accuracy is evident in both background removal and damage detection.

  12. An automatic detection method to the field wheat based on image processing

    Science.gov (United States)

    Wang, Yu; Cao, Zhiguo; Bai, Xiaodong; Yu, Zhenghong; Li, Yanan

    2013-10-01

    Automatic observation of field crops has attracted more and more attention recently. Using image processing technology instead of the existing manual observation method enables timely observation and consistent management. Extracting the wheat from field wheat images is the basis for this. In order to improve the accuracy of wheat segmentation, a novel two-stage wheat image segmentation method is proposed. The training stage adjusts several key thresholds, which will be used in the segmentation stage to achieve the best segmentation results, and records these thresholds. The segmentation stage compares different color index values to determine the class of each pixel. To verify the superiority of the proposed algorithm, we compared our method with other crop segmentation methods. Experimental results show that the proposed method has the best performance.
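
    As a toy illustration of the segmentation stage, the sketch below thresholds a colour index to classify pixels as wheat or background; the excess-green index and the fixed threshold are assumptions used for illustration, whereas the paper trains its thresholds in a separate stage.

```python
import numpy as np

def segment_wheat(rgb, threshold=20.0):
    """rgb: (H, W, 3) uint8 image; returns a boolean vegetation mask."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    exg = 2.0 * g - r - b            # excess-green colour index (illustrative choice)
    return exg > threshold

# toy image: brown soil background with a greenish stripe of "wheat"
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[...] = (120, 90, 60)             # soil
img[40:60, :] = (70, 140, 60)        # vegetation
mask = segment_wheat(img)
print("vegetation fraction:", round(mask.mean(), 3))
```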

  13. SAUNA—a system for automatic sampling, processing, and analysis of radioactive xenon

    Science.gov (United States)

    Ringbom, A.; Larson, T.; Axelsson, A.; Elmgren, K.; Johansson, C.

    2003-08-01

    A system for automatic sampling, processing, and analysis of atmospheric radioxenon has been developed. From an air sample of about 7 m3 collected during 12 h, 0.5 cm3 of xenon is extracted, and the atmospheric activities from the four xenon isotopes 133Xe, 135Xe, 131mXe, and 133mXe are determined with a beta-gamma coincidence technique. The collection is performed using activated charcoal and molecular sieves at ambient temperature. The sample preparation and quantification are performed using preparative gas chromatography. The system was tested under routine conditions for a 5-month period, with average minimum detectable concentrations below 1 mBq/m3 for all four isotopes.

  14. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Full Text Available Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block of an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly, and then apply a fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiments verify that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.

  15. Raster Data Partitioning for Supporting Distributed GIS Processing

    Science.gov (United States)

    Nguyen Thai, B.; Olasz, A.

    2015-08-01

    In the geospatial sector, the big data concept has already had an impact. Several studies apply originally computer-science techniques to GIS processing of huge amounts of geospatial data, while in other studies geospatial data is considered as if it had always been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, not only in the amount of raw data but also in its spectral, spatial and temporal resolution. A significant portion of big data is geospatial data, and the size of such data is growing rapidly, by at least 20% every year (Dasgupta, 2013). The increasing volume of raw data is produced in different formats and representations and for different purposes, and only the wealth of information derived from these data sets represents valuable results. However, computing capability and processing speed face limitations, even when semi-automatic or automatic procedures are aimed at complex geospatial data (Kristóf et al., 2014). Lately, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing approaches. Cloud computing, even more so, requires appropriate processing algorithms that can be distributed to handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capability to process non-GIS big data. However, it is sometimes inconvenient or inefficient to rewrite existing algorithms to the Map-Reduce programming model, and GIS data cannot be partitioned like text-based data by lines or by bytes. Hence, we would like to find an alternative solution for data partitioning, data distribution and the execution of existing algorithms without rewriting them, or with only minor modifications. This paper focuses on a technical overview of currently available distributed computing environments, as well as GIS (raster) data partitioning, distribution and distributed processing of GIS algorithms.
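
    The kind of raster partitioning discussed above can be sketched as follows: a large raster is cut into tiles with a small overlap (halo) so that an unmodified per-tile algorithm sees the same neighbourhood near tile borders as it would in the full raster. Tile size and halo width are arbitrary illustration values, and the sketch ignores the distribution machinery itself.

```python
import numpy as np

def partition_raster(raster, tile_size=256, halo=8):
    """Yield (row, col, tile) blocks, each padded with a halo so that local
    filters near tile borders see the same neighbourhood as in the full raster."""
    rows, cols = raster.shape
    for r0 in range(0, rows, tile_size):
        for c0 in range(0, cols, tile_size):
            r_lo, r_hi = max(0, r0 - halo), min(rows, r0 + tile_size + halo)
            c_lo, c_hi = max(0, c0 - halo), min(cols, c0 + tile_size + halo)
            yield r0, c0, raster[r_lo:r_hi, c_lo:c_hi]

raster = np.random.default_rng(0).random((1000, 1200))
tiles = list(partition_raster(raster))
print(len(tiles), "tiles; first tile shape:", tiles[0][2].shape)
```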

  16. Automatic extraction of building boundaries using aerial LiDAR data

    Science.gov (United States)

    Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian

    2016-01-01

    Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extraction from aerial LiDAR data. First, by segmenting height information generated from the LiDAR data, the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from nonbuilding ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis verification paradigm. These algorithms include constrained searching in Hough space, enhanced Hough transformation, and the sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.

  17. Sla-Oriented Semi-Automatic Management Of Data Storage And Applications In Distributed Environments

    Directory of Open Access Journals (Sweden)

    Dariusz Król

    2010-01-01

    Full Text Available In this paper we describe a semi-automatic programming framework for supporting users with managing the deployment of distributed applications along with storing large amounts of data in order to maintain Quality of Service in highly dynamic and distributed environments, e.g., Grid. The Polish national PL-GRID project aims to provide Polish science with both hardware and software infrastructures which will allow scientists to perform complex simulations and in-silico experiments on a scale greater than ever before. We highlight the issues and challenges related to data storage strategies that arise at the analysis stage of user requirements coming from different areas of science. Next we present a solution to the discussed issues along with a description of sample usage scenarios. At the end we provide remarks on the current status of the implementation work and some results from the tests performed.

  18. A Paper on Automatic Fabrics Fault Processing Using Image Processing Technique In MATLAB

    OpenAIRE

    R.Thilepa; M.THANIKACHALAM

    2015-01-01

    The main objective of this paper is to elaborate how defective fabric parts can be processed using Matlab with image processing techniques. In developing countries like India especially in Tamilnadu, Tirupur the Knitwear capital of the country in three decades yields a major income for the country. The city also employs either directly or indirectly more than 3 lakhs of people and earns almost an income of 12, 000 crores per annum for the country in past three decades [2]. To u...

  19. Automatic monitoring system for high-steep slope in open-pit mine based on GPS and data analysis

    Science.gov (United States)

    Zhou, Chunmei; Li, Xianfu; Qin, Sunwei; Qiu, Dandan; Wu, Yanlin; Xiao, Yun; Zhou, Jian

    2008-12-01

    Recently, GPS has been increasingly applied to open-pit mine slope safety monitoring. The Daye Iron Mine open-pit high-steep slope automatic monitoring system mainly consists of three modules, namely a GPS data processing module, a monitoring and warning module, and an emergency plans module. According to the rock mass structural features and the slope stability evaluation, seven GPS deformation monitoring points were arranged along Fault F9 at Daye Iron Mine; observations are carried out with a combination of single-frequency static GPS receivers and data-transmission radio, and the data processing mainly uses a three-transect interpolation method to address discontinuity and effectiveness in the data series. Based on displacement monitoring data from 1990 to 1996 for Landslide A2 on Shizi Mountain in the Daye Iron Mine East Open Pit, the displacement, rate, acceleration and creep-curve tangent-angle criteria of landslide failure were studied; the results show that Landslide A2 is a collapse-type rock landslide whose movement proceeds in three phases, namely a creep stage, an accelerated stage and a destruction stage. The failure criterion differs between stages and between positions, i.e., at the rear, central and front margins of the landslide. Putting forward a comprehensive failure criterion for the seven newly installed monitoring points, combining slope deformation and macroscopic evidence, has important guiding significance.

  20. Simple automatic strategy for background drift correction in chromatographic data analysis.

    Science.gov (United States)

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-01

    Chromatographic background drift correction, which influences peak detection and time-shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers in this vector, which belong to the chromatographic peaks, and to update the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight data acquired in a metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, the moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use.
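
    The sketch below follows the steps described in this abstract in simplified form: local minima serve as initial baseline anchors, anchors sitting on peaks are iteratively rejected as outliers against a low-order polynomial trend, and the surviving anchors are linearly interpolated back to the full chromatogram. The polynomial degree, the rejection threshold and the synthetic chromatogram are assumptions, not the paper's exact optimization.

```python
import numpy as np

def correct_baseline(signal, degree=3, z=2.0, max_iter=30):
    x = np.arange(len(signal), dtype=float)
    # local minima of the chromatogram (plus both endpoints) as initial anchors
    is_min = (signal[1:-1] <= signal[:-2]) & (signal[1:-1] <= signal[2:])
    idx = np.concatenate(([0], np.where(is_min)[0] + 1, [len(signal) - 1]))
    for _ in range(max_iter):
        coeffs = np.polyfit(x[idx], signal[idx], degree)
        resid = signal[idx] - np.polyval(coeffs, x[idx])
        keep = resid <= z * resid.std()        # anchors far above the trend sit on peaks
        if keep.all():
            break
        idx = idx[keep]
    baseline = np.interp(x, x[idx], signal[idx])   # expand anchors to every sample
    return signal - baseline, baseline

# synthetic chromatogram: slow drift plus two Gaussian peaks plus noise
x = np.arange(2000)
drift = 0.002 * x + 5.0 * np.sin(x / 400.0)
peaks = 40.0 * np.exp(-0.5 * ((x - 600) / 15.0) ** 2) + 25.0 * np.exp(-0.5 * ((x - 1400) / 20.0) ** 2)
chrom = drift + peaks + np.random.default_rng(3).normal(0, 0.2, x.size)
corrected, baseline = correct_baseline(chrom)
print("residual drift away from the peaks:", round(np.abs(corrected[:400]).max(), 2))
```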

  1. Automatic spikes detection in seismogram

    Institute of Scientific and Technical Information of China (English)

    王海军; 靳平; 刘贵忠

    2003-01-01

    Data processing for a seismic network is complex and tedious, because a large amount of data is recorded every day, which makes it impossible to process all of it manually. Therefore, seismic data should be processed automatically to produce initial results on event detection and location. Afterwards, these results are reviewed and modified by an analyst. In automatic processing, data quality checking is important. There are three main types of problem data in real seismic records: spikes, repeated data and dropouts. A spike is defined as an isolated large-amplitude point; the other two problem types share the feature that the amplitudes of the sample points are uniform over an interval. In data quality checking, the first step is to detect and count problem data in a data segment; if the percentage of problem data exceeds a threshold, the whole data segment is masked and excluded from later processing.
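
    A minimal version of the spike check described above is sketched below: a spike is flagged when a sample deviates strongly from both of its neighbours, using a robust (MAD-based) scale estimate; the threshold factor is an assumption for illustration.

```python
import numpy as np

def find_spikes(trace, factor=10.0):
    """Return indices of isolated large-amplitude points in a 1D trace."""
    mad = np.median(np.abs(trace - np.median(trace))) + 1e-12   # robust scale
    left = np.abs(trace[1:-1] - trace[:-2])
    right = np.abs(trace[1:-1] - trace[2:])
    return np.where((left > factor * mad) & (right > factor * mad))[0] + 1

rng = np.random.default_rng(7)
trace = rng.normal(0, 1.0, 5000)
trace[1234] = 80.0        # inject one isolated large-amplitude point
print("spike samples:", find_spikes(trace))
```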

  2. Automatic processing of an orientation map into a finite element mesh that conforms to grain boundaries

    Science.gov (United States)

    Dancette, S.; Browet, A.; Martin, G.; Willemet, M.; Delannay, L.

    2016-06-01

    A new procedure for microstructure-based finite element modeling of polycrystalline aggregates is presented. The proposed method relies (i) on an efficient graph-based community detection algorithm for crystallographic data segmentation and feature contour extraction and (ii) on the generation of selectively refined meshes conforming to grain boundaries. It constitutes a versatile and close to automatic environment for meshing complex microstructures. The procedure is illustrated with polycrystal microstructures characterized by orientation imaging microscopy. Hot deformation of a Duplex stainless steel is investigated based on ex-situ EBSD measurements performed on the same region of interest before and after deformation. A finite element mesh representing the initial microstructure is generated and then used in a crystal plasticity simulation of the plane strain compression. Simulation results and experiments are in relatively good agreement, confirming a large potential for such directly coupled experimental and modeling analyses, which is facilitated by the present image-based meshing procedure.

  3. Interchangeable Data Protocol for VMeS (Vessel Messaging System) and AIS (Automatic Identification System)

    Directory of Open Access Journals (Sweden)

    Farid Andhika

    2012-09-01

    Full Text Available VMeS (Vessel Messaging System) is radio-based communication for sending messages between ship-borne VMeS terminals at sea and a VMeS gateway on shore. Vessel monitoring systems at sea generally use AIS (Automatic Identification System), which is deployed in ports to monitor vessel status and prevent collisions between ships. In this study, a data format suitable for VMeS is designed so that it can be made interchangeable with AIS and thus read by AIS receivers, targeting vessels below 30 GT (gross tonnage). The VMeS data format is designed in three types, namely position data, vessel information data and short message data, which are made interchangeable with AIS message types 1, 4 and 8. Performance tests of the interchange system show that increasing the message transmission period increases the total delay but reduces packet loss. When sending messages every 5 seconds at speeds of 0-40 km/h, 96.67% of the data was received correctly. Packet loss occurs when the received power level falls below -112 dBm. The longest range reached by the modem while moving was from the ITS Informatics building, 530 meters from Laboratory B406, at a received power level of -110 dBm.

  4. Process-scheme-driven automatic construction of NC machining cell for aircraft structural parts

    Institute of Scientific and Technical Information of China (English)

    Chen Shulin; Zheng Guolei; Zhou Min; Du Baorui; Chu Hongzhen

    2013-01-01

    In order to enhance the NC programming efficiency and quality of aircraft structural parts (ASPs), an intelligent NC programming pattern driven by process schemes is presented. In this pattern, the NC machining cell is the minimal organizational structure in the technological process, consisting of an operation machining volume cell and the type and parameters of the machining operation. After the machining cell construction, the final NC program can be easily obtained in a CAD/CAM system by instantiating the machining operation for each machining cell. Accordingly, how to automatically establish the machining cells is a key issue in intelligent NC programming. On the basis of the NC machining craft of ASP, the paper aims to conduct in-depth research on this issue. Firstly, some new terms about the residual volume and the machinable volume are defined, and the technological process is modeled with a process scheme. Secondly, the approach to building the machining cells is introduced, in which real-time complement machining is mainly considered to avoid interference and overcutting. Thirdly, the implementing algorithm is designed and applied to the Intelligent NC Programming System of ASP. Finally, the developed algorithm is validated through two case studies.

  5. Examining the influence of psychopathy, hostility biases, and automatic processing on criminal offenders' Theory of Mind.

    Science.gov (United States)

    Nentjes, Lieke; Bernstein, David; Arntz, Arnoud; van Breukelen, Gerard; Slaats, Mariëtte

    2015-01-01

    Theory of Mind (ToM) is a social perceptual skill that refers to the ability to take someone else's perspective and infer what others think. The current study examined the effect of potential hostility biases, as well as controlled (slow) versus automatic (fast) processing, on ToM performance in psychopathy. ToM abilities (as assessed with the Reading the Mind in the Eyes Test; RMET; Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001) were compared between 39 PCL-R-diagnosed psychopathic offenders, 37 non-psychopathic offenders, and 26 nonoffender controls. Contrary to our hypothesis, psychopathic individuals presented with intact overall RMET performance when restrictions were imposed on how long task stimuli could be processed. In addition, psychopaths did not over-ascribe hostility to task stimuli (i.e., they showed no hostility bias). However, there was a significant three-way interaction between hostility, processing speed, and psychopathy: when there was no time limit on stimulus presentation, psychopathic offenders made fewer errors in identifying more hostile eye stimuli compared to nonoffender controls, who seemed to be less accurate in detecting hostility. Psychopaths' more realistic appraisal of others' malevolent mental states is discussed in the light of theories that stress its potential adaptive function.

  6. The role of automaticity and attention in neural processes underlying empathy for happiness, sadness, and anxiety

    Directory of Open Access Journals (Sweden)

    Sylvia A. Morelli

    2013-05-01

    Full Text Available Although many studies have examined the neural basis of experiencing empathy, relatively little is known about how empathic processes are affected by different attentional conditions. Thus, we examined whether instructions to empathize might amplify responses in empathy-related regions and whether cognitive load would diminish the involvement of these regions. 32 participants completed a functional magnetic resonance imaging session assessing empathic responses to individuals experiencing happy, sad, and anxious events. Stimuli were presented under three conditions: watching naturally, while instructed to empathize, and under cognitive load. Across analyses, we found evidence for a core set of neural regions that support empathic processes (dorsomedial prefrontal cortex, DMPFC; medial prefrontal cortex, MPFC; temporoparietal junction, TPJ; amygdala; ventral anterior insula, AI; septal area, SA. Two key regions – the ventral AI and SA – were consistently active across all attentional conditions, suggesting that they are automatically engaged during empathy. In addition, watching versus empathizing with targets was not markedly different and instead led to similar subjective and neural responses to others’ emotional experiences. In contrast, cognitive load reduced the subjective experience of empathy and diminished neural responses in several regions related to empathy (DMPFC, MPFC, TPJ, amygdala and social cognition. The current results reveal how attention impacts empathic processes and provides insight into how empathy may unfold in everyday interactions.

  7. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    Science.gov (United States)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data that is able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Fields (MRF) model, and employs efficient linear programming for reaching the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the former converging in a few minutes and the latter in a few seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.

  8. ANIGAM: a computer code for the automatic calculation of nuclear group data

    International Nuclear Information System (INIS)

    The computer code ANIGAM consists mainly of the well-known programmes GAM-I and ANISN, as well as of a subroutine which reads the THERMOS cross section library and prepares it for ANISN. ANIGAM has been written for the automatic calculation of microscopic and macroscopic cross sections of light water reactor fuel assemblies. In a single computer run, both the cross sections representative of fuel assemblies in reactor core calculations and the cross sections of each cell type of a fuel assembly are calculated. The calculated data are delivered to EXTERMINATOR and CITATION by an auxiliary programme for subsequent diffusion or burn-up calculations. This report contains a detailed description of the computer codes and methods used in ANIGAM, a description of the subroutines and of the OVERLAY structure, and an input and output description. (orig.)

  9. Automatic data distribution and load balancing with space-filling curves: implementation in CONQUEST

    International Nuclear Information System (INIS)

    We present an automatic, spatially local data distribution and load balancing scheme applicable to many-body problems running on parallel architectures. The particle distribution is based on spatial decomposition of the simulation cell. A one-dimensional Hilbert curve is mapped onto the three-dimensional real space cell, which reduces the dimensionality of the problem and provides a way to assign different spatially local parts of the cell to each processor. The scheme is independent of the number of processors. It can be used for both ordered and disordered structures and does not depend on the dimensionality or shape of the system. Details of implementation in the linear-scaling density functional code CONQUEST, as well as several case studies of systems of various complexity, containing up to 55 755 particles, are given
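
    The distribution idea can be sketched as follows. CONQUEST maps a Hilbert curve onto the cell; the sketch below uses a Morton (Z-order) key instead, simply because it is much shorter to write, but the principle is the same: order particles along a locality-preserving one-dimensional curve and hand out contiguous, nearly equal-sized chunks of that ordering to processors.

```python
import numpy as np

def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of three grid indices into a single Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def distribute(positions, cell, n_proc, bits=10):
    """Assign each particle (row of positions, in [0, cell)^3) to a processor."""
    grid = ((positions / cell) * (1 << bits)).astype(int).clip(0, (1 << bits) - 1)
    keys = np.array([morton_key(*g) for g in grid])
    order = np.argsort(keys)
    owner = np.empty(len(positions), dtype=int)
    # contiguous, nearly equal-sized chunks along the curve give the load balance
    for p, chunk in enumerate(np.array_split(order, n_proc)):
        owner[chunk] = p
    return owner

pos = np.random.default_rng(0).random((10000, 3)) * 20.0
owner = distribute(pos, cell=20.0, n_proc=8)
print(np.bincount(owner))      # roughly 1250 particles per processor
```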

  10. Influence of vinasse on water movement in soil, using automatic acquisition and handling data system

    International Nuclear Information System (INIS)

    Vinasse, a by-product of the ethyl alcohol industry obtained from the yeast fermentation of sugar cane juice or molasses, has been incorporated into the soil as fertilizer due to its high organic matter (2-6%), potassium and sulphate (0.1-0.5%) and other nutrient contents. By employing the monoenergetic gamma-ray beam attenuation technique (241Am; 59.5 keV; 100 mCi), the influence of vinasse on water movement in the soil was studied. For this, an automatic data acquisition and handling system was used, based on a multichannel analyser operated in multi-scaling mode, coupled to a personal microcomputer and plotter. Despite the small depth studied (6 cm), it was observed that vinasse decreases the water infiltration velocity in the soil. (Author)

  11. Automatic sleep classification using a data-driven topic model reveals latent sleep states

    DEFF Research Database (Denmark)

    Koch, Henriette; Christensen, Julie Anja Engelhard; Frandsen, Rune;

    2014-01-01

    Background: The golden standard for sleep classification uses manual scoring of polysomnography despite points of criticism such as oversimplification, low inter-rater reliability and the standard being designed on young and healthy subjects. New method: To meet the criticism and reveal the latent sleep states, this study developed a general and automatic sleep classifier using a data-driven approach. Spectral EEG and EOG measures and eye correlation in 1 s windows were calculated, and each sleep epoch was expressed as a mixture of probabilities of latent sleep states by using the topic model Latent Dirichlet Allocation. Model application was tested on control subjects and patients with periodic leg movements (PLM) representing a non-neurodegenerative group, and patients with idiopathic REM sleep behavior disorder (iRBD) and Parkinson's Disease (PD) representing a neurodegenerative group.
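
    One rough way to reproduce the data-driven idea with off-the-shelf tools is sketched below: each sleep epoch is treated as a "document" whose "words" are discretized 1-second feature values, and scikit-learn's Latent Dirichlet Allocation expresses every epoch as a mixture of latent states. The simulated feature, bin edges and number of states are placeholders, not the study's EEG/EOG pipeline.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
n_epochs, secs_per_epoch, n_bins, n_states = 200, 30, 16, 5

# simulate a 1-s feature (e.g. a relative band power) that drifts across epochs
feature = rng.normal(loc=np.linspace(-2, 2, n_epochs)[:, None], scale=1.0,
                     size=(n_epochs, secs_per_epoch))
words = np.digitize(feature, np.linspace(-3, 3, n_bins - 1))   # discrete "words"

# bag-of-words count matrix: epochs x feature bins
counts = np.zeros((n_epochs, n_bins), dtype=int)
for e in range(n_epochs):
    counts[e] = np.bincount(words[e], minlength=n_bins)

lda = LatentDirichletAllocation(n_components=n_states, random_state=0)
state_mixture = lda.fit_transform(counts)      # per-epoch probabilities of latent states
print("epoch 0 state mixture:", np.round(state_mixture[0], 2))
```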

  12. MASSIF-1: a beamline dedicated to the fully automatic characterization and data collection from crystals of biological macromolecules

    Energy Technology Data Exchange (ETDEWEB)

    Bowler, Matthew W., E-mail: mbowler@embl.fr [European Molecular Biology Laboratory, Grenoble Outstation, 71 avenue des Martyrs, F-38042 Grenoble (France); Université Grenoble Alpes-EMBL-CNRS, 71 avenue des Martyrs, F-38042 Grenoble (France); Nurizzo, Didier, E-mail: mbowler@embl.fr; Barrett, Ray; Beteva, Antonia; Bodin, Marjolaine [European Synchrotron Radiation Facility, 71 avenue des Martyrs, F-38043 Grenoble (France)

    2015-10-03

    MASSIF-1 (ID30A-1) is a new beamline dedicated to the completely automatic characterization and data collection from crystals of biological macromolecules. MASSIF-1 (ID30A-1) is an ESRF undulator beamline operating at a fixed wavelength of 0.969 Å (12.8 keV) that is dedicated to the completely automatic characterization of and data collection from crystals of biological macromolecules. The first of the ESRF Upgrade MASSIF beamlines to be commissioned, it has been open since September 2014, providing a unique automated data collection service to academic and industrial users. Here, the beamline characteristics and details of the new service are outlined.

  13. Automatic extraction of insulators from 3D LiDAR data of an electrical substation

    Science.gov (United States)

    Arastounia, M.; Lichti, D. D.

    2013-10-01

    A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data, while in this paper the aim is to extract insulators, which are key objects in electrical substations. This paper proposes a segmentation method based on a new approach of finding the principal direction of the points' distribution. This is done by forming and analysing the distribution matrix whose elements are the range of points in 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time-consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as diameter and the number and the spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized while the 3 unrecognized ones were highly occluded. Check point analysis was performed by manually cropping all points on insulators. The results of check point analysis show that the accuracy, precision and recall of insulator recognition are 98%, 86% and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our developed approach to determine the directional distribution of points is computationally more efficient for segmentation of objects in electrical substations compared to PCA. Finally, our knowledge-based method is promising to recognize points on electrical objects as it was successfully applied for
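
    The direction-finding step described above can be sketched as follows: the spread (range) of a point cluster is evaluated along a small set of fixed 3D directions and the direction of maximum range is taken as the principal direction. The particular nine directions used below are an assumption for illustration; the abstract does not list them.

```python
import numpy as np

directions = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [1, 1, 0], [1, 0, 1], [0, 1, 1],
    [1, -1, 0], [1, 0, -1], [0, 1, -1],
], dtype=float)
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

def principal_direction(points):
    """points: (N, 3). Returns (direction of max range, ranges along all directions)."""
    proj = points @ directions.T                  # (N, 9) projections
    ranges = proj.max(axis=0) - proj.min(axis=0)  # spread along each direction
    return directions[np.argmax(ranges)], ranges

# toy elongated cluster, roughly insulator-shaped along z
rng = np.random.default_rng(0)
pts = np.column_stack([rng.normal(0, 0.05, 500), rng.normal(0, 0.05, 500), rng.uniform(0, 1.2, 500)])
direction, ranges = principal_direction(pts)
print("principal direction:", np.round(direction, 2))
```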

  14. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.

  15. Automatic registration of UAV-borne sequent images and LiDAR data

    Science.gov (United States)

    Yang, Bisheng; Chen, Chi

    2015-03-01

    Use of direct geo-referencing data leads to registration failure between sequent images and LiDAR data captured by mini-UAV platforms because of low-cost sensors. This paper therefore proposes a novel automatic registration method for sequent images and LiDAR data captured by mini-UAVs. First, the proposed method extracts building outlines from LiDAR data and images and estimates the exterior orientation parameters (EoPs) of the images with building objects in the LiDAR data coordinate framework based on corresponding corner points derived indirectly by using linear features. Second, the EoPs of the sequent images in the image coordinate framework are recovered using a structure from motion (SfM) technique, and the transformation matrices between the LiDAR coordinate and image coordinate frameworks are calculated using corresponding EoPs, resulting in a coarse registration between the images and the LiDAR data. Finally, 3D points are generated from sequent images by multi-view stereo (MVS) algorithms. Then the EoPs of the sequent images are further refined by registering the LiDAR data and the 3D points using an iterative closest-point (ICP) algorithm with the initial results from coarse registration, resulting in a fine registration between sequent images and LiDAR data. Experiments were performed to check the validity and effectiveness of the proposed method. The results show that the proposed method achieves high-precision robust co-registration of sequent images and LiDAR data captured by mini-UAVs.

  16. A comparison of density of Insight and Ektaspeed plus dental x-ray films using automatic and manual processing

    International Nuclear Information System (INIS)

    To compare the film density of Insight dental X-ray film (Eastman Kodak Co., Rochester, NY, USA) with that of Ektaspeed Plus film (Eastman Kodak) under manual and automatic processing conditions. Insight and Ektaspeed Plus films were exposed through an aluminum step wedge placed on the film at three different exposure times. The exposed films were processed both manually and automatically. The base-plus-fog density and the optical densities produced by exposing the step wedge were measured using a digital densitometer (model 07-443, Victoreen Inc., Cleveland, Ohio, USA). The optical densities of the Insight and Ektaspeed Plus films versus the thickness of the aluminum wedge at the same exposure time were plotted on graphs. Statistical analyses were applied to compare the optical densities of the two films. The film density of both Insight and Ektaspeed Plus films was significantly higher under automatic processing than under manual processing. The film density of Insight was higher than that of Ektaspeed Plus film. To take full advantage of the reduction in exposure time, Insight film should be processed automatically

  17. Automatic registration of optical imagery with 3d lidar data using local combined mutual information

    Science.gov (United States)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor imagery has previously been reported for medical and remote sensing applications. In this paper, a new multivariable MI approach is presented that exploits the complementary information of inherently registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to a LiDAR point cloud. LiDAR DSM and intensity information are utilised in measuring the similarity of the LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of the 3D probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity of optimisation at higher dimensions and the computational cost. The reliability of registration is thereby improved due to the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results obtained are discussed.
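    For readers unfamiliar with the underlying estimator, a two-variable histogram-based MI is sketched below; the Combined MI of the paper extends this to a 3D pdf over image, DSM and intensity, which is not reproduced here.

    import numpy as np

    def mutual_information(a, b, bins=64):
        """MI between two co-registered layers, estimated from a joint histogram."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                  # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    img = np.random.rand(256, 256)
    dsm = img * 2.0 + np.random.normal(0, 0.1, img.shape)   # correlated layer
    print(mutual_information(img, dsm))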

  18. Iqpc 2015 Track: Evaluation of Automatically Generated 2d Footprints from Urban LIDAR Data

    Science.gov (United States)

    Truong-Hong, L.; Laefer, D.; Bisheng, Y.; Ronggang, H.; Jianping, L.

    2015-08-01

    Over the last decade, several automatic approaches have been proposed to extract and reconstruct 2D building footprints and 2D road profiles from ALS data, satellite images, and/or aerial imagery. Since these methods have to date been applied to various data sets and assessed through a variety of different quality indicators and ground truths, comparing the relative effectiveness of the techniques and identifying their strengths and shortcomings has not been possible in a systematic way. This contest, held as part of IQPC15, was designed to determine the pros and cons of the submitted approaches in generating 2D footprints of a city region from ALS data. Specifically, participants were asked to submit 2D footprints (building outlines and road profiles) derived from a highly dense ALS dataset (approximately 225 points/m2) covering 1 km2 of Dublin, Ireland's city centre. The proposed evaluation strategies were designed to measure not only the capacity of each method to detect and reconstruct 2D buildings and roads but also the quality of the reconstructed building and road models in terms of shape similarity and positional accuracy.

  19. A Fast Algorithm for Automatic Detection of Ionospheric Disturbances Using GPS Slant Total Electron Content Data

    Science.gov (United States)

    Efendi, Emre; Arikan, Feza; Yarici, Aysenur

    2016-07-01

    Solar, geomagnetic, gravitational and seismic activities cause disturbances in the ionospheric region of the upper atmosphere that affect space-based communication, navigation and positioning systems. These disturbances can be categorized with respect to their amplitude, duration and frequency. Typically, in the literature, ionospheric disturbances are investigated with gradient-based methods on Total Electron Content (TEC) data estimated from ground-based dual-frequency Global Positioning System (GPS) receivers. In this study, a detection algorithm is developed to determine the variability in Slant TEC (STEC) data. The developed method, namely Differential Rate of TEC (DRoT), is based on the Rate of TEC (RoT) method that is widely used in the literature. RoT is usually applied to Vertical TEC (VTEC) and can be defined as the normalized derivative of VTEC. Unfortunately, the data obtained from applying RoT to VTEC suffer from inaccuracies due to the mapping function, and the resultant values are very noisy, which makes it difficult to automatically detect disturbances due to ionospheric variability. The developed DRoT method can be defined as the normalized metric norm (L2) between the RoT and its baseband trend structure. In this study, the error performance of DRoT is determined using synthetic data with variable bounds on the parameter set of amplitude, frequency and period of the disturbance. It is observed that the DRoT method can detect disturbances in three categories. For DRoT values less than 50%, there is no significant disturbance in the STEC data. For DRoT values between 50% and 70%, a medium-scale disturbance can be observed. For DRoT values over 70%, severe disturbances such as Large Scale Travelling Ionospheric Disturbances (TIDs) or plasma bubbles can be observed. When DRoT is applied to GPS-STEC data for stations in high-latitude, equatorial and mid-latitude regions, it is observed that disturbances with amplitudes larger than 10% of the difference between
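    A rough sketch of the RoT/DRoT idea on synthetic STEC series; the moving-average trend and the percentage normalisation are illustrative assumptions, not the paper's exact baseband-trend estimator.

    import numpy as np

    def rate_of_tec(stec, dt=30.0):
        """Normalised time derivative of (S)TEC."""
        return np.diff(stec) / dt

    def drot(stec, dt=30.0, window=21):
        rot = rate_of_tec(stec, dt)
        trend = np.convolve(rot, np.ones(window) / window, mode="same")   # baseband trend
        return 100.0 * np.linalg.norm(rot - trend) / np.linalg.norm(rot)

    t = np.arange(0, 3600, 30.0)
    quiet = 20 + 0.001 * t
    disturbed = quiet + 2.0 * np.sin(2 * np.pi * t / 600.0)   # wave-like disturbance
    print(drot(quiet), drot(disturbed))    # the disturbed series yields the larger value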

  20. Comparison of edge detection techniques for the automatic information extraction of Lidar data

    Science.gov (United States)

    Li, H.; di, L.; Huang, X.; Li, D.

    2008-05-01

    In recent years, there has been much interest in information extraction from Lidar point cloud data. Many automatic edge detection algorithms have been applied to extracting information from Lidar data. Generally, they can be divided into three major categories: early-vision gradient operators, optimal detectors, and operators using parametric fitting models. A Lidar point cloud includes both intensity information and geographic information. Thus, traditional edge detectors used on remotely sensed images can take advantage of the coordinate information provided by the point data. However, derivation of complex terrain features from Lidar data points depends on the intensity properties and topographic relief of each scene. Take roads, for example: in some urban areas, roads have intensity similar to buildings, but the topographic relationship of roads is distinct. The edge detector for roads in urban areas is different from the detector for buildings. Therefore, in Lidar extraction, each kind of scene has its own suitable edge detector. This paper compares the application of the different edge detectors from the previous paragraph to various terrain areas, in order to determine the proper algorithm for each terrain type. The Canny, EDISON and SUSAN algorithms were applied to data points with the intensity character and topographic relationship of Lidar data. The Lidar test data cover different terrain areas, such as an urban area with many buildings, a rural area with vegetation, an area with slope, and an area with a bridge. Results using these edge detectors are compared to determine which algorithm is suitable for a specific terrain area. Key words: Edge detector, Extraction, Lidar, Point data
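    A small illustration of the pre-processing implied above, assuming the points are rasterised before edge detection; only the Canny detector is shown (EDISON and SUSAN are not bundled with OpenCV), and cell size and thresholds are arbitrary.

    import numpy as np
    import cv2

    def rasterize_intensity(points_xy, intensity, cell=0.5):
        """Grid the mean intensity per cell; empty cells stay 0."""
        xy = ((points_xy - points_xy.min(axis=0)) / cell).astype(int)
        shape = xy.max(axis=0) + 1
        acc, cnt = np.zeros(shape), np.zeros(shape)
        np.add.at(acc, (xy[:, 0], xy[:, 1]), intensity)
        np.add.at(cnt, (xy[:, 0], xy[:, 1]), 1.0)
        img = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
        return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    pts = np.random.rand(50000, 2) * 100.0
    inten = np.where(pts[:, 0] > 50.0, 200.0, 40.0)   # sharp intensity boundary
    edges = cv2.Canny(rasterize_intensity(pts, inten), 50, 150)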

  1. On the Automatic Generation of Plans for Life Cycle Assembly Processes

    Energy Technology Data Exchange (ETDEWEB)

    CALTON,TERRI L.

    2000-01-01

    Designing products for easy assembly and disassembly during their entire life cycles for purposes including product assembly, product upgrade, product servicing and repair, and product disposal is a process that involves many disciplines. In addition, finding the best solution often involves considering the design as a whole and considering its intended life cycle. Different goals and manufacturing plan selection criteria, as compared to initial assembly, require re-visiting significant fundamental assumptions and methods that underlie current assembly planning techniques. Previous work in this area has been limited either to academic studies of issues in assembly planning or to applied studies of life cycle assembly processes that give no attention to automatic planning. It is believed that merging these two areas will result in a much greater ability to design for, optimize, and analyze life cycle assembly processes. The study of assembly planning is at the very heart of manufacturing research facilities and academic engineering institutions, and in recent years a number of significant advances in the field of assembly planning have been made. These advances have ranged from the development of automated assembly planning systems, such as Sandia's Automated Assembly Analysis System Archimedes 3.0©, to the startling revolution in microprocessors and computer-controlled production tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), flexible manufacturing systems (FMS), and computer-integrated manufacturing (CIM). These results have kindled considerable interest in the study of algorithms for life cycle related assembly processes and have blossomed into a field of intense interest. The intent of this manuscript is to bring together the fundamental results in this area, so that the unifying principles and underlying concepts of algorithm design may more easily be implemented in practice.

  2. The new automatic precipitation phase distinction algorithm for OceanRAIN data over the global ocean

    Science.gov (United States)

    Burdanowitz, Jörg; Klepp, Christian; Bakan, Stephan

    2015-04-01

    The hitherto lack of surface precipitation data over the global ocean limits the capability to validate recent and future satellite precipitation retrievals. The first systematic ship-based surface precipitation data set OceanRAIN (Ocean Rain And Ice-phase precipitation measurement Network) aims at providing in-situ precipitation data through particle size distributions (PSDs) from optical disdrometers deployed on research vessels (RVs). From the RV Polarstern, OceanRAIN currently contains more than four years of 1-minute resolution precipitation data, which corresponds to more than 200,000 minutes of precipitation. The calculation of the precipitation rate requires knowledge of the precipitation phase (PP) of the falling particles. We develop a novel algorithm to automatically retrieve the PP using OceanRAIN data and ancillary meteorological measurements from RVs. The main objective is to improve the accuracy and efficiency of the current time-consuming manual method of discriminating liquid and solid precipitation particles. The new PP distinction algorithm is based on the relation of air temperature and relative humidity (T-rH) with respect to PP. For first-time usage over oceanic areas, the land-retrieved coefficients of this empirical relationship are adjusted to OceanRAIN data. The measured PSD supports determining the PP in certain cases where large snow aggregates exist at distinctly positive air temperatures. The classification, based on T-rH and PSD, is statistically exploited and weighted with respect to the current weather conditions to obtain an overall PP probability at 1-minute resolution. The new PP distinction algorithm agrees in more than 92% (94% excluding mixed-phase) of precipitating cases with the manually determined PP in the RV Polarstern data. The PP distinction algorithm complements the valuable information of OceanRAIN surface precipitation over the ocean.
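    A toy sketch of a T-rH phase rule; the sigmoid form and coefficients below are placeholders rather than the adjusted OceanRAIN coefficients, and the PSD-based correction for large aggregates is omitted.

    import numpy as np

    def solid_phase_probability(temp_c, rel_hum, a=1.5, b=0.5, c=-0.1):
        """Probability that precipitation is solid, given air temperature (deg C) and rH (%)."""
        score = a - b * temp_c + c * (rel_hum - 90.0)   # warmer and moister -> more likely liquid
        return 1.0 / (1.0 + np.exp(-score))

    for t, rh in [(-3.0, 85.0), (0.5, 95.0), (4.0, 98.0)]:
        print(t, rh, round(float(solid_phase_probability(t, rh)), 2))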

  3. Profiling Animal Toxicants by Automatically Mining Public Bioassay Data: A Big Data Approach for Computational Toxicology

    OpenAIRE

    Jun Zhang; Jui-Hua Hsieh; Hao Zhu

    2014-01-01

    In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has resulted in an enormous amount of publicly available bioassay data having been generated for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which...

  4. SU-D-BRD-07: Automatic Patient Data Audit and Plan Quality Check to Support ARIA and Eclipse

    International Nuclear Information System (INIS)

    Purpose: To ensure patient safety and treatment quality in RT departments that use Varian ARIA and Eclipse, we developed a computer software system and interface functions that allow previously developed electronic chart checking (EcCk) methodologies to support these Varian systems. Methods: ARIA and Eclipse store most patient information in an MSSQL database. We studied the contents of the hundreds of database tables and identified the data elements used for patient treatment management and treatment planning. Interface functions were developed in both C# and MATLAB to support data access from ARIA and Eclipse servers using SQL queries. These functions and additional data processing functions allowed the existing rules and logic from EcCk to support ARIA and Eclipse. Dose and structure information are important for plan quality checks; however, they are not stored in the MSSQL database but as files in Varian private formats, and cannot be processed by external programs. We have therefore implemented a service program, which uses the DB Daemon and File Daemon services on the ARIA server to automatically and seamlessly retrieve dose and structure data as DICOM files. This service was designed to 1) continuously monitor the data access requests from EcCk programs, 2) translate the requests for ARIA daemon services to obtain dose and structure DICOM files, and 3) monitor the process and return the obtained DICOM files to EcCk programs for plan quality check purposes. Results: EcCk, which was previously designed to support only MOSAIQ TMS and Pinnacle TPS, can now support Varian ARIA and Eclipse. The new EcCk software has been tested and worked well in physics new-start plan checks, IMRT plan integrity and plan quality checks. Conclusion: Methods and computer programs have been implemented to allow EcCk to support Varian ARIA and Eclipse systems. This project was supported by a research grant from Varian Medical Systems

  5. Automatic cardiac gating of small-animal PET from list-mode data

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Udias, J.M. [Universidad Complutense de Madrid Univ. (Spain). Grupo de Fisica Nuclear; Vaquero, J.J.; Desco, M. [Universidad Carlos III de Madrid (Spain). Dept. de Bioingenieria e Ingenieria Aeroespacial; Cusso, L. [Hospital General Universitario Gregorio Maranon, Madrid (Spain). Unidad de Medicina y Cirugia Experimental

    2011-07-01

    This work presents a method to automatically obtain the cardiac gating signal in a PET study of rats by employing the variation with time of the counts in the cardiac region, which can be extracted from list-mode data. In an initial step, the cardiac region is identified in image space by backward-projecting a small fraction of the acquired data and studying the variation with time of the counts in each voxel, at frequencies between 2 and 8 Hz. The region obtained corresponds accurately to the left ventricle of the rat's heart. In a second step, the lines-of-response (LORs) connected with this region are found by forward-projecting the region. The time variation of the number of counts in these LORs contains the cardiac motion information we want to extract. This variation of counts with time is band-pass filtered to reduce noise, and the resulting time signal is used to create the gating signal. The result was compared with a cardiac gating signal obtained from an ECG acquired simultaneously with the PET study. Reconstructed gated images obtained from both gating signals are similar. The proposed method demonstrates that valid cardiac gating signals can be obtained for rats from PET list-mode data. (orig.)
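    A minimal sketch of the filtering step, assuming the per-time-bin counts of the heart-region LORs are already available; sampling rate and filter order are illustrative.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def cardiac_gating_signal(counts, fs=100.0, band=(2.0, 8.0), order=4):
        """Zero-phase band-pass of the count-rate series between 2 and 8 Hz."""
        b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        return filtfilt(b, a, counts - counts.mean())

    t = np.arange(0, 10, 1 / 100.0)
    counts = 1000 + 5.0 * np.sin(2 * np.pi * 5.0 * t) + np.random.normal(0, 3, t.size)
    gating = cardiac_gating_signal(counts)
    triggers = np.where(np.diff(np.sign(gating)) > 0)[0]   # rising zero crossings as gates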

  6. Automatic Classification of the Vestibulo-Ocular Reflex Nystagmus: Integration of Data Clustering and System Identification.

    Science.gov (United States)

    Ranjbaran, Mina; Smith, Heather L H; Galiana, Henrietta L

    2016-04-01

    The vestibulo-ocular reflex (VOR) plays an important role in our daily activities by enabling us to fixate on objects during head movements. Modeling and identification of the VOR improves our insight into the system behavior and improves diagnosis of various disorders. However, the switching nature of eye movements (nystagmus), including the VOR, makes dynamic analysis challenging. The first step in such analysis is to segment data into its subsystem responses (here slow and fast segment intervals). Misclassification of segments results in biased analysis of the system of interest. Here, we develop a novel three-step algorithm to classify the VOR data into slow and fast intervals automatically. The proposed algorithm is initialized using a K-means clustering method. The initial classification is then refined using system identification approaches and prediction error statistics. The performance of the algorithm is evaluated on simulated and experimental data. It is shown that the new algorithm performance is much improved over the previous methods, in terms of higher specificity. PMID:26357393
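    A sketch of the initialisation step only (the system-identification refinement is not reproduced); using absolute eye velocity as the clustering feature is an assumption.

    import numpy as np
    from sklearn.cluster import KMeans

    def initial_fast_slow_labels(eye_position, fs=500.0):
        """K-means split of samples into slow- and fast-phase classes; True = fast phase."""
        velocity = np.gradient(eye_position) * fs
        features = np.abs(velocity).reshape(-1, 1)
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
        fast_cluster = np.argmax(km.cluster_centers_.ravel())
        return km.labels_ == fast_cluster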

  8. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    Science.gov (United States)

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-09-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements an interior point line search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine-ionic-channel conductance model to obtain complete models, which we then used to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature that broadens the global minimum of the objective function to an ellipsoidal domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of the extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large-scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner, treating all data as equal quantities and requiring minimal additional insight.

  9. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    International Nuclear Information System (INIS)

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only. (paper)
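    For reference, the Dice similarity index used in the comparison, written out for binary masks (the distance measure used in the paper is not shown):

    import numpy as np

    def dice_index(mask_a, mask_b):
        """2*|A∩B| / (|A|+|B|) for boolean arrays of equal shape."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom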

  10. Development of Web Tools for the automatic Upload of Calibration Data into the CMS Condition Data

    Science.gov (United States)

    di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2010-04-01

    This article explains the recent evolution of the Condition Database Application Service. The Condition Database Application Service is part of the condition database system of the CMS experiment, and it is used for handling and monitoring the CMS detector condition data and the corresponding computing resources, such as Oracle databases, the storage service and network devices. We deployed a service, the offline Dropbox service, that will be used by the Alignment and Calibration Group to upload from the offline network (GPN) the calibration constants produced by running offline analysis.

  11. Composable Data Processing in Environmental Science - A Process View

    OpenAIRE

    Wombacher, A.

    2008-01-01

    Data processing in environmental science is essential for doing science. The heterogeneity of data sources, data processing operations and infrastructures results in a lot of manual data and process integration work done by each scientist individually. This is very inefficient and time consuming. The aim is to provide a view based approach on accessing and processing data supporting a more generic infrastructure to integrate processing steps from different organizations, systems and libraries...

  12. Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data

    Science.gov (United States)

    Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.

    2016-06-01

    The quality of Remote Sensing data is an important parameter that defines the extent of its usability in various applications. The data from Remote Sensing satellites is received as raw data frames at the ground station. This data may be corrupted with data losses due to interferences during data transmission, data acquisition and sensor anomalies. Thus it is important to assess the quality of the raw data before product generation for early anomaly detection, faster corrective actions and product rejection minimization. Manual screening of raw images is a time consuming process and not very accurate. In this paper, an automated process for identification and quantification of losses in raw data like pixel drop out, line loss and data loss due to sensor anomalies is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced in the data pre-processing stage and gives crucial data quality information to users at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.

  13. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    Science.gov (United States)

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.

    2012-12-01

    The Surface Water and Ocean Topography (SWOT) wide-swath altimetry mission will provide global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (such as river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge over the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed with the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is performed using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, which is considered a computationally cheaper alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is corrupted through multiplication by a spatially uniform factor that is the only factor optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions

  14. Critical properties of the diffusive epidemic process obtained via an automatic search technique

    International Nuclear Information System (INIS)

    The diffusive epidemic process (DEP) is composed of A and B species that independently diffuse on a lattice with diffusion rates DA and DB and follow the probabilistic dynamical rules A+B→2B and B→A. This model belongs to the category of non-equilibrium systems with an absorbing state and a phase transition between active and inactive states. We investigate the critical behavior of the one-dimensional DEP using an auto-adaptive algorithm to find critical points: the method of automatic searching for critical points (MASCP). We compare our results with the literature and find that the MASCP successfully finds the critical exponents 1/ν and 1/zν in all the cases DA = DB, DA < DB and DA > DB. The simulations show that the DEP has the same critical exponents as expected from field-theoretical arguments. Moreover, we find that, contrary to a renormalization group prediction, the system does not show a discontinuous phase transition in the regime DA > DB

  15. Automatic polishing process of plastic injection molds on a 5-axis milling center

    CERN Document Server

    Pessoles, Xavier; 10.1016/j.jmatprotec.2008.08.034

    2010-01-01

    The plastic injection mold manufacturing process includes polishing operations when surface roughness is critical or mirror effect is required to produce transparent parts. This polishing operation is mainly carried out manually by skilled workers of subcontractor companies. In this paper, we propose an automatic polishing technique on a 5-axis milling center in order to use the same means of production from machining to polishing and reduce the costs. We develop special algorithms to compute 5-axis cutter locations on free-form cavities in order to imitate the skills of the workers. These are based on both filling curves and trochoidal curves. The polishing force is ensured by the compliance of the passive tool itself and set-up by calibration between displacement and force based on a force sensor. The compliance of the tool helps to avoid kinematical error effects on the part during 5-axis tool movements. The effectiveness of the method in terms of the surface roughness quality and the simplicity of impleme...

  16. Automated Data Processing as an AI Planning Problem

    Science.gov (United States)

    Golden, Keith; Pang, Wanlin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth Science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we have developed a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products. Data processing domains are substantially different from other planning domains that have been explored, and this has led us to substantially different choices in terms of representation and algorithms. We discuss some of these differences and describe the approach we have adopted.

  17. Automatic extraction of property norm-like data from large text corpora.

    Science.gov (United States)

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties. PMID:25019134

  18. Simple Approaches to Improve the Automatic Inventory of ZEBRA Crossing from Mls Data

    Science.gov (United States)

    Arias, P.; Riveiro, B.; Soilán, M.; Díaz-Vilariño, L.; Martínez-Sánchez, J.

    2015-08-01

    City management is increasingly supported by information technologies, leading to paradigms such as smart cities, where decision-makers, companies and citizens are continuously interconnected. 3D modelling becomes highly relevant when the city has to be managed using geospatial databases or Geographic Information Systems. On the other hand, laser scanning technology has experienced significant growth in recent years, and terrestrial mobile laser scanning platforms in particular are increasingly used for inventory purposes in both city and road environments. Consequently, large datasets are available to produce the geometric basis for the city model; however, these data are not directly exploitable by management systems, which constrains the implementation of the technology for such applications. This paper presents a new algorithm for the automatic detection of zebra crossings. The algorithm is divided into three main steps: road segmentation (based on a PCA analysis of the points contained in each scan cycle collected by the mobile laser system), rasterization (conversion of the point cloud to a raster image coloured as a function of intensity data), and zebra crossing detection (using the Hough transform and logical constraints for line classification). After evaluating different datasets collected in three cities located in northwest Spain (comprising 25 strips with 30 visible zebra crossings), a completeness of 83% was achieved.
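    A condensed sketch of the third step only, assuming the road segmentation and rasterization have already produced an 8-bit intensity raster; thresholds and the parallelism filter are illustrative, not the paper's exact constraints.

    import numpy as np
    import cv2

    def detect_stripe_lines(raster, min_len=20, max_gap=5):
        """Probabilistic Hough transform plus a simple shared-orientation constraint."""
        edges = cv2.Canny(raster, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                                minLineLength=min_len, maxLineGap=max_gap)
        if lines is None:
            return []
        angles = np.arctan2(lines[:, 0, 3] - lines[:, 0, 1],
                            lines[:, 0, 2] - lines[:, 0, 0])
        keep = np.abs(angles - np.median(angles)) < np.deg2rad(10)   # zebra stripes share one orientation
        return lines[keep]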

  19. Lifetime-Based Memory Management for Distributed Data Processing Systems

    DEFF Research Database (Denmark)

    Lu, Lu; Shi, Xuanhua; Zhou, Yongluan;

    2016-01-01

    In-memory caching of intermediate data and eager combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in distributed data processing systems like Spark and Flink. However, it has also been widely reported that these techniques would create a large amount of long-living data objects in the heap, which may quickly saturate the garbage collector, especially when handling a large dataset, and hence would limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework which, by automatically analyzing the user-defined functions and data types, obtains the expected lifetime of the data objects, and then allocates and releases memory space accordingly to minimize the garbage collection overhead. In particular, we present Deca, a concrete implementation of our proposal on top of Spark.

  20. Visualization process of Temporal Data

    OpenAIRE

    Daassi, Chaouki; Nigay, Laurence; Fauvet, Marie-Christine

    2004-01-01

    Temporal data are abundantly present in many application domains such as banking, financial, clinical and geographical applications. Temporal data have been extensively studied from data mining and database perspectives. Complementary to these studies, our work focuses on visualization techniques for temporal data: a wide range of visualization techniques have been designed to assist users in visually analyzing and manipulating temporal data. All the techni...

  1. {sup 13}C-detected NMR experiments for automatic resonance assignment of IDPs and multiple-fixing SMFT processing

    Energy Technology Data Exchange (ETDEWEB)

    Dziekański, Paweł; Grudziąż, Katarzyna [University of Warsaw, Faculty of Chemistry, Biological and Chemical Research Centre (Poland); Jarvoll, Patrik [Agilent Technologies (United Kingdom); Koźmiński, Wiktor; Zawadzka-Kazimierczuk, Anna, E-mail: anzaw@chem.uw.edu.pl [University of Warsaw, Faculty of Chemistry, Biological and Chemical Research Centre (Poland)

    2015-06-15

    Intrinsically disordered proteins (IDPs) have recently attracted much interest due to their role in many biological processes, including signaling and regulation mechanisms. High-dimensional {sup 13}C direct-detected NMR experiments have proven exceptionally useful in the case of IDPs, providing spectra with superior peak dispersion. Here, two such novel experiments recorded with non-uniform sampling are introduced: 5D HabCabCO(CA)NCO and 5D HNCO(CA)NCO. Together with the 4D (HACA)CON(CA)NCO, an extension of the previously published 3D experiments (Pantoja-Uceda and Santoro in J Biomol NMR 59:43–50, 2014. doi: 10.1007/s10858-014-9827-1), they form a set allowing complete and reliable resonance assignment of difficult IDPs. The processing is performed with the sparse multidimensional Fourier transform, based on the concept of restricting (fixing) some of the spectral dimensions to a priori known resonance frequencies. In our study, a multiple-fixing method was developed that allows easy access to the spectral data. The experiments were tested on a resolution-demanding alpha-synuclein sample. Due to the superior peak dispersion in the high-dimensional spectra and the availability of sequential connectivities between four consecutive residues, the overwhelming majority of resonances could be assigned automatically using the TSAR program.

  2. Advancements in Big Data Processing in the ATLAS and CMS Experiments

    CERN Document Server

    Vaniachine, A

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for distributed computing and Grid technologies. The emerging Big Data revolution drives exploration in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprised of many jobs) became a unit of petascale data processing on the Grid. Splitting of a large data processing task into jobs enabled fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of the dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable six sigma production quality in petascale data ...

  3. Automatic Speech Segmentation Based on HMM

    OpenAIRE

    M. Kroul

    2007-01-01

    This contribution deals with the problem of automatic phoneme segmentation using HMMs. Automation of the speech segmentation task is important for applications where a large amount of data needs to be processed, so that manual segmentation is out of the question. In this paper we focus on automatic segmentation of recordings which will be used to create a triphone synthesis unit database. For speech synthesis, the speech unit quality is a crucial aspect, so the maximal accuracy in segmentation is ...

  4. Automatic detection and agronomic characterization of olive groves using high-resolution imagery and LIDAR data

    Science.gov (United States)

    Caruso, T.; Rühl, J.; Sciortino, R.; Marra, F. P.; La Scalia, G.

    2014-10-01

    The Common Agricultural Policy of the European Union grants subsidies for olive production. Areas of intensified olive farming will be of major importance for the increasing demand for oil production in the next decades, and countries with a high ratio of intensively and super-intensively managed olive groves will be more competitive than others, since they are able to reduce production costs. It can be estimated that about 25-40% of Sicilian oliviculture must be defined as "marginal". Modern olive cultivation systems, which permit the mechanization of pruning and harvest operations, are limited. Agronomists, landscape planners, policy decision-makers and other professionals have a growing need for accurate and cost-effective information on land use in general and on agronomic parameters in particular. The availability of high spatial resolution imagery has enabled researchers to propose analysis tools at the agricultural parcel and tree level. In our study, we test the performance of WorldView-2 imagery for the detection of olive groves and the delineation of olive tree crowns, using an object-oriented approach to image classification in combined use with LIDAR data. We selected two sites, which differ in their environmental conditions and in the agronomic parameters of olive grove cultivation. The main advantages of the proposed methodology are the small quantity of input data required and its potential for automation. However, it should be applied in other study areas to test whether the good accuracy assessment results can be confirmed. Data extracted by the proposed methodology can be used as input for decision-making support systems for olive grove management.

  5. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    Science.gov (United States)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  6. Automatic Differentiation Variational Inference

    OpenAIRE

    Kucukelbir, Alp; Tran, Dustin; Ranganath, Rajesh; Gelman, Andrew; Blei, David M.

    2016-01-01

    Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally challenging, which makes it difficult to efficiently cycle through the steps. To this end, we develop automatic differentiation variational inference (ADVI). Using our method, the scientist on...

  7. The speed of magnitude processing and executive functions in controlled and automatic number comparison in children: an electro-encephalography study

    Directory of Open Access Journals (Sweden)

    Jármi Éva

    2007-04-01

    Abstract Background In the numerical Stroop paradigm (NSP), participants decide whether a digit is numerically or physically larger than another simultaneously presented digit. This paradigm is frequently used to assess the automatic number processing abilities of children. Currently it is unclear whether an equally refined evaluation of numerical magnitude occurs in both controlled (the numerical comparison task of the NSP) and automatic (the physical comparison task of the NSP) numerical comparison in both children and adults. One of our objectives was to answer this question by measuring the speed of controlled and automatic magnitude processing in children and adults in the NSP. Another objective was to determine how the immature executive functions of children affect their cognitive functions relative to adults in numerical comparison. Methods and results The speed of numerical comparison was determined by monitoring the electro-encephalographic (EEG) numerical distance effect: the amplitude of EEG measures is modulated as a function of the numerical distance between the to-be-compared digits. EEG numerical distance effects occurred between 140–320 ms after stimulus presentation in both controlled and automatic numerical comparison in all age groups. Executive functions were assessed by analyzing facilitation and interference effects on the latency of the P3b event-related potential component and the lateralized readiness potential (LRP). Interference effects were more related to response processing than to stimulus processing in children as compared with adults. The LRP revealed that the difficulty of inhibiting irrelevant response tendencies was a major factor behind interference in the numerical task in children. Conclusion The timing of the EEG distance effect suggests that a refined evaluation of numerical magnitude happened at a similar speed in each age group during both controlled and automatic magnitude processing. The larger response interference in

  8. Automatic Thickness and Volume Estimation of Sprayed Concrete on Anchored Retaining Walls from Terrestrial LIDAR Data

    Science.gov (United States)

    Martínez-Sánchez, J.; Puente, I.; GonzálezJorge, H.; Riveiro, B.; Arias, P.

    2016-06-01

    When ground conditions are weak, particularly in free-formed tunnel linings or retaining walls, sprayed concrete can be applied on the exposed surfaces immediately after excavation for shotcreting rock outcrops. In these situations, shotcrete is normally applied together with rock bolts and mesh, thereby supporting the loose material that causes many of the small ground falls. On the other hand, contractors want to determine the thickness and volume of sprayed concrete for both technical and economic reasons: to guarantee its structural strength, but also to avoid delivering excess material that they will not be paid for. In this paper, we first introduce a terrestrial LiDAR-based method for the automatic detection of rock bolts, as typically used in anchored retaining walls. These ground support elements are segmented based on their geometry, and they serve as control points for the co-registration of two successive scans, before and after shotcreting. We then compare both point clouds to estimate the sprayed concrete thickness and the expended volume on the wall. This novel methodology is demonstrated on repeated scan data from a retaining wall in the city of Vigo (Spain), resulting in a rock bolt detection rate of 91%, which makes it possible to obtain detailed thickness information and to calculate a total volume of 3597 litres of concrete. These results verify the effectiveness of the developed approach, increasing productivity and improving on previous empirical proposals for real-time thickness estimation.

  9. Semi-Automatic Detection of Swimming Pools from Aerial High-Resolution Images and LIDAR Data

    Directory of Open Access Journals (Sweden)

    Borja Rodríguez-Cuenca

    2014-03-01

    Bodies of water, particularly swimming pools, are land covers of high interest. Their maintenance involves energy costs that authorities must take into consideration. In addition, swimming pools are important water sources for firefighting. However, they also provide a habitat for mosquitoes to breed, potentially posing a serious health threat of mosquito-borne disease. This paper presents a novel semi-automatic method for detecting swimming pools in urban environments from aerial images and LIDAR data. A new index for detecting swimming pools is presented (the Normalized Difference Swimming Pools Index), which is combined with three other decision indices using the Dempster–Shafer theory to determine the locations of swimming pools. The proposed method was tested in an urban area of the city of Alcalá de Henares in Madrid, Spain. The method detected all existing swimming pools in the studied area with an overall accuracy of 99.86%, similar to the results obtained by support vector machine (SVM) supervised classification.
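    A generic sketch of Dempster's rule of combination over a two-element frame; the four decision indices and their actual mass assignments are not reproduced, so the numbers below are purely illustrative.

    from itertools import product

    def combine(m1, m2):
        """Combine two mass functions over {'pool', 'not_pool'} with ignorance mass 'any'."""
        frame = {"pool": {"pool"}, "not_pool": {"not_pool"}, "any": {"pool", "not_pool"}}
        combined = {k: 0.0 for k in frame}
        conflict = 0.0
        for a, b in product(m1, m2):
            inter = frame[a] & frame[b]
            mass = m1[a] * m2[b]
            if not inter:
                conflict += mass
            else:
                key = "any" if inter == {"pool", "not_pool"} else inter.pop()
                combined[key] += mass
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    ndspi_mass = {"pool": 0.7, "not_pool": 0.1, "any": 0.2}   # hypothetical index evidence
    lidar_mass = {"pool": 0.6, "not_pool": 0.2, "any": 0.2}
    print(combine(ndspi_mass, lidar_mass))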

  10. Algorithm for the Automatic Estimation of Agricultural Tree Geometric Parameters Using Airborne Laser Scanning Data

    Science.gov (United States)

    Hadaś, E.; Borkowski, A.; Estornell, J.

    2016-06-01

    The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time-consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired over orchard areas allow orchard structures and geometric parameters of individual trees to be determined. In this research we propose an automatic method to determine the geometric parameters of individual olive trees using ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For the points located inside each polygon, we select the maximum and minimum heights and then estimate the tree height and the crown base height. We use the first two components of a Principal Component Analysis (PCA) as estimators of the crown diameters. The α-shape algorithm requires a radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size by comparing the results obtained with various settings of R against reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points/m2. We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. We compared our estimates with field measurements. The RMSE of the differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longest and shortest crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.

  11. Automatic calibration of a global hydrological model using satellite data as a proxy for stream flow data

    Science.gov (United States)

    Revilla-Romero, B.; Beck, H.; Salamon, P.; Burek, P.; Thielen, J.; de Roo, A.

    2014-12-01

    Model calibration and validation are commonly restricted due to the limited availability of historical in situ observational data. Several studies have demonstrated that using complementary remotely sensed datasets such as soil moisture for model calibration have led to improvements. The aim of this study was to evaluate the use of remotely sensed signal of the Global Flood Detection System (GFDS) as a proxy for stream flow data to calibrate a global hydrological model used in operational flood forecasting. This is done in different river basins located in Africa, South and North America for the time period 1998-2010 by comparing model calibration using the raw satellite signal as a proxy for river discharge with a model calibration using in situ stream flow observations. River flow is simulated using the LISFLOOD hydrological model for the flow routing in the river network and the groundwater mass balance. The model is set up on global coverage with horizontal grid resolution of 0.1 degree and daily time step for input/output data. Based on prior tests, a set of seven model parameters was used for calibration. The parameter space was defined by specifying lower and upper limits on each parameter. The objective functions considered were Pearson correlation (R), Nash-Sutcliffe Efficiency log (NSlog) and Kling-Gupta Efficiency (KGE') where both single- and multi-objective functions were employed. After multiple iterations, for each catchment, the algorithm generated a set of Pareto-optimal front of solutions. A single parameter set was selected which had the lowest distance to R=1 for the single-objective and NSlog=1 and KGE'=1 for the multi-objective function. The results of the different test river basins are compared against the performance obtained using the same objective functions by in situ discharge observations. Automatic calibration strategies of the global hydrological model using satellite data as a proxy for stream flow data are outlined and discussed.
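    For clarity, the two less common objective functions mentioned above in standard form (the paper's exact variants are assumed to match these):

    import numpy as np

    def ns_log(sim, obs, eps=1e-6):
        """Nash-Sutcliffe efficiency computed on log-transformed flows."""
        ls, lo = np.log(sim + eps), np.log(obs + eps)
        return 1.0 - np.sum((ls - lo) ** 2) / np.sum((lo - lo.mean()) ** 2)

    def kge_prime(sim, obs):
        """Modified Kling-Gupta efficiency (correlation, bias ratio, variability ratio)."""
        r = np.corrcoef(sim, obs)[0, 1]
        beta = sim.mean() / obs.mean()
        gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
        return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)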

  12. Processing multidimensional nuclear physics data

    Energy Technology Data Exchange (ETDEWEB)

    Becker, J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Modern Ge detector arrays for gamma-ray spectroscopy are producing data sets unprecedented in size and event multiplicity. Gammasphere, the DOE-sponsored array, has the following characteristics: (1) high granularity (110 detectors); (2) high efficiency (10%); and (3) precision energy measurements (ΔE/E = 0.2%). Characteristics of the detector line shape, the data set, and the standard practice in the nuclear physics community for extracting nuclear gamma-ray cascades from the 4096 × 4096 × 4096 data cube will be discussed.

  13. Extended morphological processing: a practical method for automatic spot detection of biological markers from microscopic images

    Directory of Open Access Journals (Sweden)

    Kimori Yoshitaka

    2010-07-01

    Abstract Background A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. Results A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed the better performance of our method. The spots in real microscope images can be quantified to confirm that the method is applicable in practice. Conclusions Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. The features of our method allow its broad application in biological and biomedical image information analysis.
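    A condensed sketch of the rotated line-opening idea described above (rotating the image rather than the structuring element, which is equivalent up to interpolation); angle step and line length are assumptions.

    import numpy as np
    from scipy import ndimage

    def spot_enhance(image, line_length=15, angle_step=15):
        """Subtract the union of directional line openings, leaving small isotropic spots."""
        union = np.zeros_like(image, dtype=float)
        selem = np.ones((1, line_length))                 # straight line-segment footprint
        for angle in range(0, 180, angle_step):
            rotated = ndimage.rotate(image, angle, reshape=False, order=1)
            opened = ndimage.grey_opening(rotated, footprint=selem)
            back = ndimage.rotate(opened, -angle, reshape=False, order=1)
            union = np.maximum(union, back)
        return np.clip(image - union, 0, None)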

  14. Pipeline Processing of VLBI Data

    CERN Document Server

    Reynolds, C; Garrett, M

    2002-01-01

    As part of an on-going effort to simplify the data analysis path for VLBI experiments, a pipeline procedure has been developed at JIVE to carry out much of the data reduction required for EVN experiments in an automated fashion. This pipeline procedure runs entirely within AIPS, the standard data reduction package used in astronomical VLBI, and is used to provide preliminary calibration of EVN experiments correlated at the EVN MkIV data processor. As well as simplifying the analysis for EVN users, the pipeline reduces the delay in providing information on the data quality to participating telescopes, hence improving the overall performance of the array. A description of this pipeline is presented here.

  15. Differences in semantic category priming in the left and right cerebral hemispheres under automatic and controlled processing conditions.

    Science.gov (United States)

    Collins, M

    1999-08-01

    The contribution of each cerebral hemisphere to the generation of semantic category meanings at automatic and strategic levels of processing was investigated in a priming experiment where prime and target words were independently projected to the left or right visual fields (LVF or RVF). Non-associated category exemplars were employed as related pairs in a lexical decision task and presented in two experimental conditions. The first condition was designed to elicit automatic processing, so related pairs comprised 20% of the positive set, stimulus pairs were temporally separated by a stimulus onset asynchrony (SOA) of 250 ms, and there was no allusion to the presence of related pairs in the instructions to subjects. The second condition, designed to invoke controlled processing, incorporated a relatedness proportion of 50%, stimulus pairs separated by an SOA of 750 ms, and instructions which informed subjects of the presence and use of category exemplar pairs in the stimulus set. In the first condition, a prime directed to either visual field facilitated responses to categorically related targets subsequently projected to the RVF, while in the second condition a prime directed to either visual field facilitated responses to related targets projected to the LVF. The facilitation effects obtained in both conditions appeared to reflect automatic processes, while strategic processes were invoked in the left, but not the right hemisphere in the second condition. The results suggest that both hemispheres have automatic access to semantic category meanings, although the timecourse of activation of semantic category meanings is slower in the right hemisphere than in the left.

  16. Data warehouse building process based on data transformation templates

    OpenAIRE

    Paulavičiūtė, Kristina

    2006-01-01

    The growing amount of data and the need for data analysis drive the need for data warehouses. In many organizations, operational data accumulate in OLTP DBMS databases, while historical data accumulate in data warehouses, where they are prepared for data analysis. The ETL tools shipped with DBMSs offer limited support for building data warehouses. The ETL tool created for MS SQL Server makes the data warehouse building process easier and faster.

  17. Automatic processing of facial affects in patients with borderline personality disorder: associations with symptomatology and comorbid disorders

    OpenAIRE

    Donges, Uta-Susan; Dukalski, Bibiana; Kersting, Anette; Suslow, Thomas

    2015-01-01

    Background Instability of affects and interpersonal relations are important features of borderline personality disorder (BPD). Interpersonal problems of individuals suffering from BPD might develop based on abnormalities in the processing of facial affects and high sensitivity to negative affective expressions. The aims of the present study were to examine automatic evaluative shifts and latencies as a function of masked facial affects in patients with BPD compared to healthy individuals. As ...

  18. Experiences with automatic N and P measurements of an activated sludge process in a research environment

    DEFF Research Database (Denmark)

    Isaacs, Steven Howard; Temmink, H.

    1996-01-01

    Some of the advantages of on-line automatic measurement of ammonia, nitrate and phosphate for studying activated sludge systems are pointed out with the help of examples of batch experiments. Sample taking is performed by cross-flow filtration and measurement of all three analytes is performed...

  19. Designing a Method for AN Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    Science.gov (United States)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors describing human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without going to the affected locations. However, this can be hard work if the polls are not properly automated. Taking into account that the answers given to these polls are subjective and that a number of them have already been classified for some past earthquakes, it is possible to use data mining techniques in order to automate this process and to obtain preliminary results based on the on-line polls. In order to achieve this goal, a predictive model has been built using a classifier based on a supervised learning technique, the decision tree algorithm, and a group of polls based on the MMI and EMS-98 scales. The model summarizes the most important questions of the poll and recursively divides the instance space; each node corresponds to a question and splits the space depending on the possible answers. Its implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, which is an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities with 73% correctly classified polls. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, basing it on just one scale, for instance the MMI. Besides, the integration of an automatic seismic intensities methodology with a low error probability and a basic georeferencing system will allow generating preliminary isoseismal maps
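
    As an illustration of the classification step, the sketch below trains a decision tree on encoded poll answers using scikit-learn's DecisionTreeClassifier as a stand-in for Weka's J48/C4.5; the features, intensity labels and data are entirely hypothetical.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import accuracy_score

        # each row: encoded answers to the poll questions (e.g. 0 = "not felt" .. 4 = "strong"),
        # each label: intensity class assigned by an expert (hypothetical data)
        rng = np.random.default_rng(0)
        X = rng.integers(0, 5, size=(400, 8))
        y = np.clip(X.mean(axis=1).round().astype(int), 2, 5)   # 4 intensity classes

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5)  # C4.5-like settings
        tree.fit(X_train, y_train)
        print("correctly classified polls:", accuracy_score(y_test, tree.predict(X_test)))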

  20. Automatic Estimation of Excavation Volume from Laser Mobile Mapping Data for Mountain Road Widening

    Directory of Open Access Journals (Sweden)

    Massimo Menenti

    2013-09-01

    Full Text Available Roads play an indispensable role as part of the infrastructure of society. In recent years, society has witnessed the rapid development of laser mobile mapping systems (LMMS which, at high measurement rates, acquire dense and accurate point cloud data. This paper presents a way to automatically estimate the required excavation volume when widening a road from point cloud data acquired by an LMMS. Firstly, the input point cloud is down-sampled to a uniform grid and outliers are removed. For each of the resulting grid points, both on and off the road, the local surface normal and 2D slope are estimated. Normals and slopes are consecutively used to separate road from off-road points which enables the estimation of the road centerline and road boundaries. In the final step, the left and right side of the road points are sliced in 1-m slices up to a distance of 4 m, perpendicular to the roadside. Determining and summing each sliced volume enables the estimation of the required excavation for a widening of the road on the left or on the right side. The procedure, including a quality analysis, is demonstrated on a stretch of a mountain road that is approximately 132 m long as sampled by a Lynx LMMS. The results in this particular case show that the required excavation volume on the left side is 8% more than that on the right side. In addition, the error in the results is assessed in two ways. First, by adding up estimated local errors, and second, by comparing results from two different datasets sampling the same piece of road both acquired by the Lynx LMMS. Results of both approaches indicate that the error in the estimated volume is below 4%. The proposed method is relatively easy to implement and runs smoothly on a desktop PC. The whole workflow of the LMMS data acquisition and subsequent volume computation can be completed in one or two days and provides road engineers with much more detail than traditional single-point surveying methods such as
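
    A schematic sketch of the final volume-summation step described above is given below: grid cells beside the road are binned into 1-m slices out to 4 m and their heights above the target road surface are summed. The grid, cell size and distances are hypothetical, not the paper's data.

        import numpy as np

        def excavation_volume(dist, height_above_road, cell_area=0.25, max_dist=4.0, slice_width=1.0):
            """Sum cell volumes in 1-m slices perpendicular to the roadside.

            dist              : distance of each grid cell centre from the road boundary (m)
            height_above_road : cell height above the target (widened) road surface (m)
            cell_area         : plan area of one grid cell (m2)
            Returns per-slice volumes and their total (m3)."""
            edges = np.arange(0.0, max_dist + slice_width, slice_width)
            volumes = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                sel = (dist >= lo) & (dist < hi) & (height_above_road > 0)
                volumes.append(np.sum(height_above_road[sel]) * cell_area)
            return volumes, float(np.sum(volumes))

        # hypothetical roadside cells
        d = np.random.uniform(0, 4, 5000)
        h = np.random.uniform(-0.2, 1.5, 5000)
        per_slice, total = excavation_volume(d, h)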

  1. Automatic Preprocessing of Tidal Gravity Observation Data

    Institute of Scientific and Technical Information of China (English)

    许闯; 罗志才; 林旭; 周波阳

    2013-01-01

    The preprocessing of tidal gravity observation data is very important for obtaining high-quality tidal harmonic analysis results. The preprocessing methods for tidal gravity observation data are studied systematically. The average filtering method and the wavelet filtering method for downsampling the original tidal gravity observation data are given in the paper, as well as the linear interpolation method and the cubic spline interpolation method for processing interrupted data. The automatic preprocessing software for tidal gravity observation data (APTsoft) was developed, which can automatically calibrate and correct abnormal data such as spikes, steps and interruptions. Finally, the experimental results show that the preprocessing methods and APTsoft are very effective, and APTsoft can be applied to the automatic preprocessing of tidal gravity observation data.
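
    The sketch below illustrates two of the preprocessing operations named above, block-average downsampling and cubic spline interpolation across an interruption, using NumPy and SciPy; the sampling rate, gap and signal are made up and the code is not APTsoft.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def average_downsample(signal, factor):
            """Downsample by block averaging (simple average filter), e.g. 1 s samples -> 1 min means."""
            n = len(signal) // factor * factor
            return signal[:n].reshape(-1, factor).mean(axis=1)

        def fill_gaps_spline(t, values):
            """Fill NaN gaps (interruptions) with a cubic spline fitted to the valid samples."""
            good = ~np.isnan(values)
            spline = CubicSpline(t[good], values[good])
            filled = values.copy()
            filled[~good] = spline(t[~good])
            return filled

        # hypothetical 1 Hz gravity record with a short interruption
        t = np.arange(0, 3600, 1.0)
        g = np.sin(2 * np.pi * t / 1800) + 0.01 * np.random.randn(t.size)
        g[1000:1060] = np.nan
        g_filled = fill_gaps_spline(t, g)
        g_minute = average_downsample(g_filled, 60)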

  2. Evaluation and processing of covariance data

    International Nuclear Information System (INIS)

    These proceedings of a specialists' meeting on evaluation and processing of covariance data are divided into 4 parts: part 1 - needs for evaluated covariance data (2 papers), part 2 - generation of covariance data (15 papers), part 3 - processing of covariance files (2 papers), part 4 - experience in the use of evaluated covariance data (2 papers)

  3. Sentinel-1 automatic processing chain for volcanic and seismic areas monitoring within the Geohazards Exploitation Platform (GEP)

    Science.gov (United States)

    De Luca, Claudio; Zinno, Ivana; Manunta, Michele; Lanari, Riccardo; Casu, Francesco

    2016-04-01

    these issues, ESA recently funded the development of the Geohazards Exploitation Platform (GEP), a project aimed at putting together data, processing tools and results to make them accessible to the EO scientific community, with particular emphasis on the Geohazard Supersites & Natural Laboratories and the CEOS Seismic Hazards and Volcanoes Pilots. In this work we present the integration of the parallel version of a well-known DInSAR algorithm, referred to as Small BAseline Subset (P-SBAS), within the GEP platform for processing Sentinel-1 data. The integration allowed us to set up an operational on-demand web tool, open to every user, aimed at automatically processing S1A data for the generation of SBAS displacement time-series. The main characteristics as well as a number of experimental results obtained by using the implemented web tool will also be shown. This work is partially supported by: the RITMARE project of Italian MIUR, the DPC-CNR agreement and the ESA GEP project.

  4. Machine Beats Experts: Automatic Discovery of Skill Models for Data-Driven Online Course Refinement

    Science.gov (United States)

    Matsuda, Noboru; Furukawa, Tadanobu; Bier, Norman; Faloutsos, Christos

    2015-01-01

    How can we automatically determine which skills must be mastered for the successful completion of an online course? Large-scale online courses (e.g., MOOCs) often contain a broad range of contents frequently intended to be a semester's worth of materials; this breadth often makes it difficult to articulate an accurate set of skills and knowledge…

  5. Automatic Cataloguing and Searching for Retrospective Data by Use of OCR Text.

    Science.gov (United States)

    Tseng, Yuen-Hsien

    2001-01-01

    Describes efforts in supporting information retrieval from OCR (optical character recognition) degraded text. Reports on approaches used in an automatic cataloging and searching contest for books in multiple languages, including a vector space retrieval model, an n-gram indexing method, and a weighting scheme; and discusses problems of Asian…

  6. Making sense of sensor data : detecting clinical mastitis in automatic milking systems

    NARCIS (Netherlands)

    Kamphuis, C.

    2010-01-01

    Farmers milking dairy cows are obliged to exclude milk with abnormal homogeneity or color for human consumption (e.g., Regulation (EC) No 853/2004), where most abnormal milk is caused by clinical mastitis (CM). With automatic milking (AM), farmers are no longer physically present during the milking

  7. Comparative analysis of automatic approaches to building detection from multi-source aerial data

    NARCIS (Netherlands)

    Frontoni, E.; Khoshelham, K.; Nardinocchi, C.; Nedkov, S.; Zingaretti, P.

    2008-01-01

    Automatic building detection has been a hot topic since the early 1990’s. Early approaches were based on a single aerial image. Detecting buildings is a difficult task so it can be more effective when multiple sources of information are obtained and fused. The objective of this paper is to provide a

  8. On the meaning of meaning when being mean: commentary on Berkowitz's "on the consideration of automatic as well as controlled psychological processes in aggression".

    Science.gov (United States)

    Dodge, Kenneth A

    2008-01-01

    Berkowitz (this issue) makes a cogent case for his cognitive neo-associationist (CNA) model that some aggressive behaviors occur automatically, emotionally, and through conditioned association with other stimuli. He also proposes that they can occur without "processing," that is, without meaning. He contrasts his position with that of social information processing (SIP) models, which he casts as positing only controlled processing mechanisms for aggressive behavior. However, both CNA and SIP models posit automatic as well as controlled processes in aggressive behavior. Most aggressive behaviors occur through automatic processes, which are nonetheless rule governed. SIP models differ from the CNA model in asserting the essential role of meaning (often through nonconscious, automatic, and emotional processes) in mediating the link between a stimulus and an angry aggressive behavioral response. PMID:18203196

  9. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    Science.gov (United States)

    Sonoda, Jun; Yamaki, Kota

    We develop an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis and so on. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, which is one of the Linux distributions. It is also easy to install/uninstall packages and to enable/disable init daemons. When rebuilding a Live Linux CD using our system, the number of operations is 8, and the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in an information processing class at our college. According to a questionnaire survey of the 43 students who used the Live Linux CD, about 80 percent of the students found it useful. From these results, we conclude that our system is able to easily and automatically rebuild a useful Live Linux in a short time.

  10. Automatic 3D building reconstruction from airbornelaser scanning and cadastral data using hough transform

    DEFF Research Database (Denmark)

    Bodum, Lars; Overby, Jens; Kjems, Erik;

    2004-01-01

    degree of details. However, it is possible to create virtual 3D models of buildings, by processing these data. Roof polygons are generated using airborne laser scanning of 1x1 meter grid and ground plans (footprints) extracted from technical feature maps. An effective algorithm is used for fixing...

  11. Automatic Spectroscopic Data Categorization by Clustering Analysis (ASCLAN): A Data-Driven Approach for Distinguishing Discriminatory Metabolites for Phenotypic Subclasses.

    Science.gov (United States)

    Zou, Xin; Holmes, Elaine; Nicholson, Jeremy K; Loo, Ruey Leng

    2016-06-01

    We propose a novel data-driven approach aiming to reliably distinguish discriminatory metabolites from nondiscriminatory metabolites for a given spectroscopic data set containing two biological phenotypic subclasses. The automatic spectroscopic data categorization by clustering analysis (ASCLAN) algorithm aims to categorize spectral variables within a data set into three clusters corresponding to noise, nondiscriminatory and discriminatory metabolite regions. This is achieved by clustering each spectral variable based on the r(2) value representing the loading weight of each spectral variable as extracted from an orthogonal partial least-squares discriminant analysis (OPLS-DA) model of the data set. The variables are ranked according to r(2) values and a series of principal component analysis (PCA) models are then built for subsets of these spectral data corresponding to ranges of r(2) values. The Q(2)X value for each PCA model is extracted. K-means clustering is then applied to the Q(2)X values to generate two clusters based on a minimum Euclidean distance criterion. The cluster consisting of lower Q(2)X values is deemed devoid of metabolic information (noise), while the cluster consisting of higher Q(2)X values is further subclustered into two groups based on the r(2) values. We considered the cluster with high Q(2)X but low r(2) values as nondiscriminatory, and the cluster with high Q(2)X and high r(2) values as discriminatory variables. The boundaries between these three clusters of spectral variables, on the basis of the r(2) values, were considered as the cutoff values for defining the noise, nondiscriminatory and discriminatory variables. We evaluated the ASCLAN algorithm using six simulated (1)H NMR spectroscopic data sets representing small, medium and large data sets (N = 50, 500, and 1000 samples per group, respectively), each with a reduced and full resolution set of variables (0.005 and 0.0005 ppm, respectively). ASCLAN correctly identified all discriminatory
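
    The sketch below mimics the clustering steps described: K-means first separates noise (low Q(2)X) from informative variables, and the informative cluster is then split on r(2) into discriminatory and nondiscriminatory variables. The per-variable r(2) and Q(2)X values are mocked here for illustration; this is not the ASCLAN code.

        import numpy as np
        from sklearn.cluster import KMeans

        # hypothetical per-variable statistics: r2 loading weights from an OPLS-DA model
        # and Q2X values of PCA models built on r2-ranked subsets of the spectral variables
        rng = np.random.default_rng(1)
        r2 = np.sort(rng.random(200))[::-1]
        q2x = np.where(r2 > 0.2, rng.uniform(0.5, 0.9, 200), rng.uniform(0.0, 0.2, 200))

        # step 1: K-means on Q2X separates noise (low Q2X) from informative variables
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(q2x.reshape(-1, 1))
        noise_label = labels[np.argmin(q2x)]
        informative = labels != noise_label

        # step 2: the informative cluster is split again on r2 into
        # discriminatory (high r2) and nondiscriminatory (low r2) variables
        sub = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(r2[informative].reshape(-1, 1))
        high_r2_label = sub[np.argmax(r2[informative])]
        discriminatory = np.where(informative)[0][sub == high_r2_label]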

  12. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    Science.gov (United States)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

    The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.

  13. Automatically Building Diagnostic Bayesian Networks from On-line Data Sources and the SMILE Web-based Interface

    OpenAIRE

    Tungkasthan, Anucha; Jongsawat, Nipat; Poompuang, Pittaya; Intarasema, Sarayut; Premchaiswadi, Wichian

    2010-01-01

    This paper presents a practical framework for automating the building of diagnostic BN models from data sources obtained from the WWW and demonstrates the use of a SMILE web-based interface to represent them. The framework consists of the following components: RSS agent, transformation/conversion tool, core reasoning engine, and the SMILE web-based interface. The RSS agent automatically collects and reads the provided RSS feeds according to the agent's predefined URLs. A transformation/conve...

  14. Adaptive Filter Used as a Dynamic Compensator in Automatic Gauge Control of Strip Rolling Processes

    Directory of Open Access Journals (Sweden)

    N. ROMAN

    2010-12-01

    Full Text Available The paper deals with a control structure for the strip thickness in a rolling mill of quarto type (AGC – Automatic Gauge Control). It performs two functions: the compensation of errors induced by the non-ideal dynamics of the tracking systems led by the AGC system, and the adaptation of the control to changes in the dynamic properties of the tracking systems. The compensation of dynamical errors is achieved through inverse models of the tracking systems, implemented as adaptive filters.
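
    As background for the adaptive-filter idea, the sketch below implements a basic least-mean-squares (LMS) adaptive FIR filter, the standard building block for this kind of compensator; the plant model and signals are hypothetical and unrelated to the rolling mill described in the paper.

        import numpy as np

        def lms_filter(x, d, n_taps=8, mu=0.05):
            """Least-mean-squares adaptive FIR filter.

            x : input signal, d : desired signal.
            Returns the filter output and the error d - y that the adaptation minimizes."""
            w = np.zeros(n_taps)
            y = np.zeros_like(x)
            e = np.zeros_like(x)
            for n in range(n_taps, len(x)):
                u = x[n - n_taps:n][::-1]          # most recent samples first
                y[n] = w @ u
                e[n] = d[n] - y[n]
                w += 2 * mu * e[n] * u             # LMS weight update
            return y, e

        # hypothetical identification of an unknown tracking-system response
        x = np.random.randn(2000)
        d = np.convolve(x, [0.4, 0.25, -0.1], mode="full")[:2000] + 0.01 * np.random.randn(2000)
        y, e = lms_filter(x, d)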

  15. Process monitoring using automatic physical measurement based on electrical and physical variability analysis

    Science.gov (United States)

    Shauly, Eitan N.; Levi, Shimon; Schwarzband, Ishai; Adan, Ofer; Latinsky, Sergey

    2015-04-01

    A fully automated silicon-based methodology for systematic analysis of electrical features is shown. The system was developed for process monitoring and electrical variability reduction. A mapping step was created by dedicated structures such as a static-random-access-memory (SRAM) array or a standard cell library, or by using a simple design rule checking run-set. The resulting database was then used as an input for choosing locations for critical dimension scanning electron microscope images and for specific layout parameter extraction, which was then input to SPICE compact modeling simulation. Based on the experimental data, we identified two items that must be checked and monitored using the method described here: the transistor's sensitivity to the distance between the poly end cap and the edge of the active area (AA) due to AA rounding, and SRAM leakage due to an N-well placed too close to a P-well. Based on this example, for process monitoring and variability analyses, we extensively used this method to analyze transistor gates having different shapes. In addition, an analysis of a large area of a high-density standard cell library was done. Another set of monitoring, focused on a high-density SRAM array, is also presented. These examples provided information on the poly and AA layers, using transistor parameters such as leakage current and drive current. We successfully defined "robust" and "less-robust" transistor configurations included in the library and identified unsymmetrical transistors in the SRAM bit-cells. These data were compared to data extracted from the same devices at the end of the line. Another set of analyses was done on samples after Cu M1 etch. Process monitoring information on M1 enclosed contact was extracted based on contact resistance as a feedback. Guidelines for the optimal M1 space for different layout configurations were also extracted. All these data showed the successful in-field implementation of our methodology as a useful process monitoring method.

  16. Detection of pneumoconiosis opacities on X-ray images by contour line processing and its application to automatic diagnosis

    International Nuclear Information System (INIS)

    This paper presents a study on automatic diagnosis of pneumoconiosis by X-ray image processing. Contour line processing method for identifying small opacities of pneumoconiosis is proposed and a new feature vector for classifying the profusion of small opacities is also proposed. This method is superior to the methods which are based on texture analysis because it is robust against variations of film quality and individual differences of structural patterns such as ribs and blood vessels. ILO standard films and 140 CR (computed radiography) images were used to test the performance of the proposed method. Experimental results show the effectiveness of the proposed method. (author)

  17. An automatic segmentation method for building facades from vehicle-borne LiDAR point cloud data based on fundamental geographical data

    Science.gov (United States)

    Li, Yongqiang; Mao, Jie; Cai, Lailiang; Zhang, Xitong; Li, Lixue

    2016-03-01

    In this paper, the authors propose a segmentation method based on fundamental geographic data. The algorithm is as follows: firstly, convert the coordinate system of the fundamental geographic data to that of the vehicle-borne LiDAR point cloud through some data preprocessing work, so that the two datasets are registered in a common coordinate system; secondly, simplify the features of the fundamental geographic data, extract effective contour information of the buildings, set a suitable buffer threshold value around the building contours, and automatically segment out the point cloud data of the building facades; thirdly, apply a reasonable quality assessment mechanism to check and evaluate the segmentation results and control their quality. Experiments show that the proposed method is simple and effective. The method also provides a reference for the automatic segmentation of surface features in other types of point clouds.
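
    The sketch below illustrates the buffer-threshold step in its simplest form with Shapely: points falling inside a narrow band around the building footprint contour are labelled as facade points. The footprint polygon, buffer width, height cutoff and point cloud are hypothetical and much simpler than the paper's workflow.

        import numpy as np
        from shapely.geometry import Point, Polygon

        # hypothetical building footprint from the fundamental geographic data
        # (already transformed into the LiDAR coordinate system) and a point cloud
        footprint = Polygon([(0, 0), (20, 0), (20, 12), (0, 12)])
        points = np.random.uniform(-5, 25, size=(10000, 3))        # x, y, z

        # buffer around the building contour: points inside this narrow band
        # (and above ground level) are labelled as facade points
        buffer_width = 0.5                                          # metres, tuning parameter
        band = footprint.exterior.buffer(buffer_width)
        facade_mask = np.array([band.contains(Point(x, y)) and z > 0.3
                                for x, y, z in points])
        facade_points = points[facade_mask]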

  18. Simplified Processing Method for Meter Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Kimberly M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Colotelo, Alison H. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Downs, Janelle L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ham, Kenneth D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Henderson, Jordan W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Montgomery, Sadie A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vernon, Christopher R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Parker, Steven A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-11-01

    A simple, quick metered-data processing method that can be used for Army Metered Data Management System (MDMS) and Logistics Innovation Agency data, but may also be useful for other large data sets. It is intended for large data sets when the analyst has little information about the buildings.

  19. Using data from automatic planetary stations for solving problems in astronomy and space physics

    Science.gov (United States)

    Stoeva, Penka; Stoev, Alexey; Bojurova, Eva

    The specific nature of Astronomy and Space Physics problems promotes students' interest in the relevant sciences and provokes their creativity. This is illustrated by numerous examples of positive response from participants in the Astronomy Olympiad to extraordinary moments in problems, especially those related to space flight and to scientific data and photographs from satellites and automatic interplanetary stations (AIS). Jupiter's satellite Io is one of the satellites with the highest volcanic activity in the solar system. So far, the volcanoes of Io have been photographed for a short time only by the interplanetary stations Voyager 1, Galileo and New Horizons, sent by NASA. By monitoring these often erupting volcanoes, however, one can quickly gather detailed information and establish methods for the prediction of eruptions, including for the Earth's volcanoes. This could push forward research on volcanism in the Solar system. Therefore, this topic was used to create problems in astronomy. The report shows how the heights of the jets emitted by volcanoes are determined through measurements on images of Io taken by AIS. Knowing the mass and radius of the satellite, the initial speed of the emitted particles is evaluated. Similarly, the initial ejection speeds of Earth's volcanoes and of the ice geysers on Saturn's satellite Enceladus are also evaluated. An attempt is made to explain the rings of ejecta around the volcanoes on Io. The ratio of the diameter of the dispersed material to the height of the stream is studied. In fact, the maximum speed of the particles is evaluated, since the boundaries of the volcanic "fountain" are determined by the fastest moving particles, which reach the maximum height. The observed ratio is compared with the theoretical one derived by the students. The results show that although the volcanoes of Io, Earth's volcanoes and even the ice geysers of Enceladus operate under very different conditions and arise from different causes, the initial
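
    A short worked sketch of the kind of calculation described follows: the surface gravity is obtained from the satellite's mass and radius, and the ballistic launch speed from the observed plume height via v = sqrt(2 g h). The plume height used here is an illustrative value, not a measurement from the report.

        import math

        G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
        M_io = 8.93e22         # mass of Io, kg
        R_io = 1.8216e6        # mean radius of Io, m

        g_io = G * M_io / R_io**2            # surface gravity of Io (~1.8 m/s^2)

        h = 300e3                            # plume height, m (illustrative value)
        v0 = math.sqrt(2 * g_io * h)         # ballistic launch speed, ignoring drag

        print(f"g(Io) = {g_io:.2f} m/s^2, launch speed ~ {v0/1000:.2f} km/s")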

  20. Data near processing support for climate data analysis

    Science.gov (United States)

    Kindermann, Stephan; Ehbrecht, Carsten; Hempelmann, Nils

    2016-04-01

    Climate data repositories grow in size exponentially. Scalable data near processing capabilities are required to meet future data analysis requirements and to replace current "data download and process at home" workflows and approaches. On the one hand, these processing capabilities should be accessible via standardized interfaces (e.g. OGC WPS); on the other hand, a large variety of processing tools, toolboxes and deployment alternatives have to be supported and maintained at the data/processing center. We present a community approach of a modular and flexible system supporting the development, deployment and maintenance of OGC-WPS based web processing services. This approach is organized in an open source github project (called "bird-house") supporting individual processing services ("birds", e.g. climate index calculations, model data ensemble calculations), which rely on basic common infrastructural components (e.g. installation and deployment recipes, analysis code dependency management). To support easy deployment at data centers as well as at home institutes (e.g. for testing and development), the system supports the management of the often very complex package dependency chain of climate data analysis packages as well as docker-based packaging and installation. We present a concrete deployment scenario at the German Climate Computing Center (DKRZ). DKRZ on the one hand hosts a multi-petabyte climate archive which is integrated e.g. into the European ENES and worldwide ESGF data infrastructures, and on the other hand hosts an HPC center supporting (model) data production and data analysis. The deployment scenario also includes OpenStack-based data cloud services to support data import and data distribution for bird-house based WPS web processing services. Current challenges for inter-institutional deployments of web processing services supporting the European and international climate modeling community as well as the climate impact community are highlighted

  1. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Science.gov (United States)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with an unmanned aerial vehicle (UAV) system has used UAVs and remote control through connections to a ground control system over a radio frequency (RF) modem operating at a bandwidth of about 430 MHz. However, the existing method of using an RF modem has limitations in long-distance communication. The smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi capabilities were used to implement a UAV communication module system, which carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image capturing device for the drone in areas that need image capturing, and software for loading and managing the smart camera. This system is composed of automatic shooting using the sensors of the smart camera and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  2. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Directory of Open Access Journals (Sweden)

    J. W. Park

    2016-06-01

    Full Text Available Recently, aerial photography with an unmanned aerial vehicle (UAV) system has used UAVs and remote control through connections to a ground control system over a radio frequency (RF) modem operating at a bandwidth of about 430 MHz. However, the existing method of using an RF modem has limitations in long-distance communication. The smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi capabilities were used to implement a UAV communication module system, which carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image capturing device for the drone in areas that need image capturing, and software for loading and managing the smart camera. This system is composed of automatic shooting using the sensors of the smart camera and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  3. Automatic Performance Debugging of SPMD Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Jianfeng; Tu, Bibo; Meng, Dan

    2010-01-01

    Automatic performance debugging of parallel applications usually involves two steps: automatic detection of performance bottlenecks and uncovering their root causes for performance optimization. Previous work fails to resolve this challenging issue in several ways: first, several previous efforts automate analysis processes, but present the results in a confined way that only identifies performance problems with a priori knowledge; second, several tools take exploratory or confirmatory data analysis to automatically discover relevant performance data relationships. However, these efforts do not focus on locating performance bottlenecks or uncovering their root causes. In this paper, we design and implement an innovative system, AutoAnalyzer, to automatically debug the performance problems of single program multi-data (SPMD) parallel programs. Our system is unique in two dimensions: first, without any a priori knowledge, we automatically locate bottlenecks and uncover their root causes for performance o...

  4. Semi-Automatic Detection of Swimming Pools from Aerial High-Resolution Images and LIDAR Data

    OpenAIRE

    Borja Rodríguez-Cuenca; Maria C. Alonso

    2014-01-01

    Bodies of water, particularly swimming pools, are land covers of high interest. Their maintenance involves energy costs that authorities must take into consideration. In addition, swimming pools are important water sources for firefighting. However, they also provide a habitat for mosquitoes to breed, potentially posing a serious health threat of mosquito-borne disease. This paper presents a novel semi-automatic method of detecting swimming pools in urban environments from aerial images and L...

  5. Data processing framework for decision making

    DEFF Research Database (Denmark)

    Larsen, Jan

    The aim of the talk is (1) to provide insight into some of the issues in data processing and detection systems, and (2) to hint at possible solutions using statistical signal processing and machine learning methodologies...

  6. Utilization of a genetic algorithm for the automatic detection of oil spill from RADARSAT-2 SAR satellite data

    International Nuclear Information System (INIS)

    Highlights: • An oil platform located 70 km from the coast of Louisiana sank on Thursday. • Oil spill has backscatter values of −25 dB in RADARSAT-2 SAR. • Oil spill is portrayed in SCNB mode by shallower incidence angle. • Ideal detection of oil spills in SAR images requires moderate wind speeds. • Genetic algorithm is excellent tool for automatic detection of oil spill in RADARSAT-2 SAR data. - Abstract: In this work, a genetic algorithm is applied for the automatic detection of oil spills. The procedure is implemented using sequences from RADARSAT-2 SAR ScanSAR Narrow single-beam data acquired in the Gulf of Mexico. The study demonstrates that the implementation of crossover allows for the generation of an accurate oil spill pattern. This conclusion is confirmed by the receiver-operating characteristic (ROC) curve. The ROC curve indicates that the existence of oil slick footprints can be identified using the area between the ROC curve and the no-discrimination line of 90%, which is greater than that of other surrounding environmental features. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills, and the ScanSAR Narrow single-beam mode serves as an excellent sensor for oil spill detection and survey
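
    As a toy illustration of the genetic-algorithm idea, the sketch below evolves a single backscatter threshold (in dB) separating oil-covered from oil-free sea pixels, using selection, blend crossover and mutation. The labelled samples, fitness function and parameter ranges are all hypothetical and much simpler than the study's setup.

        import numpy as np

        rng = np.random.default_rng(0)
        # hypothetical labelled backscatter samples (dB): oil slicks darker (~-25 dB) than sea (~-15 dB)
        oil = rng.normal(-25, 2, 500)
        sea = rng.normal(-15, 3, 1500)
        x = np.concatenate([oil, sea])
        y = np.concatenate([np.ones(500), np.zeros(1500)])          # 1 = oil

        def fitness(threshold):
            """Classification accuracy when pixels below the threshold are labelled as oil."""
            pred = (x < threshold).astype(int)
            return (pred == y).mean()

        # simple genetic algorithm: selection, blend crossover, Gaussian mutation
        pop = rng.uniform(-35, -5, 30)
        for _ in range(40):
            scores = np.array([fitness(t) for t in pop])
            parents = pop[np.argsort(scores)[-10:]]                  # keep the 10 fittest
            children = []
            while len(children) < 20:
                a, b = rng.choice(parents, 2, replace=False)
                children.append(0.5 * (a + b) + rng.normal(0, 0.5))  # crossover + mutation
            pop = np.concatenate([parents, children])

        best = pop[np.argmax([fitness(t) for t in pop])]
        print(f"best threshold ~ {best:.1f} dB, accuracy = {fitness(best):.3f}")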

  7. Automatic multi-modal intelligent seizure acquisition (MISA) system for detection of motor seizures from electromyographic data and motion data

    DEFF Research Database (Denmark)

    Conradsen, Isa; Beniczky, Sándor; Wolf, Peter;

    2012-01-01

    The objective is to develop a non-invasive automatic method for detection of epileptic seizures with motor manifestations. Ten healthy subjects who simulated seizures and one patient participated in the study. Surface electromyography (sEMG) and motion sensor features were extracted as energy...

  8. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime
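
    For readers who want a concrete starting point, the sketch below fits a Gaussian process regression model of the kind the book covers, using scikit-learn on synthetic noisy observations of a smooth curve; the kernel choice and data are illustrative only.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # synthetic functional data: noisy observations of a smooth curve
        rng = np.random.default_rng(0)
        X = np.sort(rng.uniform(0, 10, 40)).reshape(-1, 1)
        y = np.sin(X).ravel() + 0.1 * rng.standard_normal(40)

        kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

        X_new = np.linspace(0, 10, 200).reshape(-1, 1)
        mean, std = gp.predict(X_new, return_std=True)   # posterior mean and pointwise uncertainty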

  9. Multidimensional data modeling for business process analysis

    OpenAIRE

    Mansmann, Svetlana; Neumuth, Thomas; Scholl, Marc H.

    2007-01-01

    The emerging area of business process intelligence attempts to enhance the analytical capabilities of business process management systems by employing data warehousing and mining technologies. This paper presents an approach to re-engineering the business process modeling in conformity with the multidimensional data model. Since the business process and the multidimensional model are driven by rather different objectives and assumptions, there is no straightforward solution to converging thes...

  10. THE METHOD OF DATA PROCESSING OF THE ELECTRICAL SURVEYING AND THE PROGRAM SYSTEM USED ON MICROCOMPUTER

    Institute of Scientific and Technical Information of China (English)

    李志聃; 高绋麟

    1990-01-01

    The ESS software package is prepared for electrical data processing in the fields of coal prospecting, hydrogeological engineering, and can be used in the other fields of electrical data processing. It can be operated on any kind of microcomputer which has an internal memories of moro than 512kB. The ESS software package would be leading the office operation to an automatic data processing period and the field work free from the tedious, repeated data treating and mapping, so that the engineers would have more time to analyse and interpret field data. Undoubtedly, it is of benefit to improving the relibility of the geological evaluation.

  11. Processing Solutions for Big Data in Astronomy

    Science.gov (United States)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
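
    The MapReduce schema mentioned above can be illustrated with a minimal PySpark word count; a local Spark installation is assumed and the input path is hypothetical.

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

        # MapReduce-style word count: map each line to (word, 1) pairs, reduce by key
        lines = spark.sparkContext.textFile("hdfs:///data/sample.txt")   # hypothetical path
        counts = (lines.flatMap(lambda line: line.split())
                       .map(lambda word: (word, 1))
                       .reduceByKey(lambda a, b: a + b))

        for word, n in counts.takeOrdered(10, key=lambda kv: -kv[1]):    # top 10 words
            print(word, n)

        spark.stop()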

  12. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    Science.gov (United States)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for the orientation; they are evaluated for the correctness of tie-point detection and the computation time, and the difficulties in their implementation are assessed. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
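
    The sketch below shows CV-style tie-point detection and matching on two panoramic intensity images with OpenCV, using ORB as a representative detector in place of the algorithms listed above; the file names are hypothetical.

        import cv2

        # two panoramic intensity images rendered from neighbouring TLS scans (hypothetical files)
        img1 = cv2.imread("scan_A_panorama.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("scan_B_panorama.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=5000)            # detector/descriptor (stand-in for BRISK, SIFT, ...)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # brute-force Hamming matching with cross-check; matches act as candidate tie points
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        tie_points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:200]]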

  13. Age effects shrink when motor learning is predominantly supported by nondeclarative, automatic memory processes: evidence from golf putting.

    Science.gov (United States)

    Chauvel, Guillaume; Maquestiaux, François; Hartley, Alan A; Joubert, Sven; Didierjean, André; Masters, Rich S W

    2012-01-01

    Can motor learning be equivalent in younger and older adults? To address this question, 48 younger (M = 23.5 years) and 48 older (M = 65.0 years) participants learned to perform a golf-putting task in two different motor learning situations: one that resulted in infrequent errors or one that resulted in frequent errors. The results demonstrated that infrequent-error learning predominantly relied on nondeclarative, automatic memory processes whereas frequent-error learning predominantly relied on declarative, effortful memory processes: After learning, infrequent-error learners verbalized fewer strategies than frequent-error learners; at transfer, a concurrent, attention-demanding secondary task (tone counting) left motor performance of infrequent-error learners unaffected but impaired that of frequent-error learners. The results showed age-equivalent motor performance in infrequent-error learning but age deficits in frequent-error learning. Motor performance of frequent-error learners required more attention with age, as evidenced by an age deficit on the attention-demanding secondary task. The disappearance of age effects when nondeclarative, automatic memory processes predominated suggests that these processes are preserved with age and are available even early in motor learning. PMID:21736434

  14. Apache Flink: Distributed Stream Data Processing

    CERN Document Server

    Jacobs, Kevin; CERN. Geneva. IT Department

    2016-01-01

    The amount of data has grown significantly over the past few years. Therefore, the need for distributed data processing frameworks is growing. Currently, there are two well-known data processing frameworks with an API for data batches and an API for data streams, named Apache Flink and Apache Spark. Both Apache Spark and Apache Flink improve upon the MapReduce implementation of the Apache Hadoop framework. MapReduce is the first programming model for distributed processing on a large scale that is available in Apache Hadoop. This report compares the Stream API and the Batch API of both frameworks.

  15. Embed XRF Data Processing System Development

    International Nuclear Information System (INIS)

    This paper introduces a project for an XRF data processing system. The project adopted the embedded processor LPC2148 as the core of the data processing. The system is equipped with a graphic LCD with a resolution of 320 x 240 dots. A large-capacity Secure Digital memory card is used as data memory. It can exchange data with a PC via a USB interface. In addition, some improvements have been made to the XRF data processing functions. The system runs stably, is reliable, and is convenient to use, so it has good prospects for application and extension. (authors)

  16. ACRF Data Collection and Processing Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Macduff, M; Egan, D

    2004-12-01

    We present a description of the data flow from measurement to long-term archive. We also discuss data communications infrastructure. The data handling processes presented include collection, transfer, ingest, quality control, creation of Value-Added Products (VAP), and data archiving.

  17. MASSIF-1: a beamline dedicated to the fully automatic characterization and data collection from crystals of biological macromolecules

    Science.gov (United States)

    Bowler, Matthew W.; Nurizzo, Didier; Barrett, Ray; Beteva, Antonia; Bodin, Marjolaine; Caserotto, Hugo; Delagenière, Solange; Dobias, Fabian; Flot, David; Giraud, Thierry; Guichard, Nicolas; Guijarro, Mattias; Lentini, Mario; Leonard, Gordon A.; McSweeney, Sean; Oskarsson, Marcus; Schmidt, Werner; Snigirev, Anatoli; von Stetten, David; Surr, John; Svensson, Olof; Theveneau, Pascal; Mueller-Dieckmann, Christoph

    2015-01-01

    MASSIF-1 (ID30A-1) is an ESRF undulator beamline operating at a fixed wavelength of 0.969 Å (12.8 keV) that is dedicated to the completely automatic characterization of and data collection from crystals of biological macromolecules. The first of the ESRF Upgrade MASSIF beamlines to be commissioned, it has been open since September 2014, providing a unique automated data collection service to academic and industrial users. Here, the beamline characteristics and details of the new service are outlined. PMID:26524320

  18. Shadow-Based Hierarchical Matching for the Automatic Registration of Airborne LiDAR Data and Space Imagery

    OpenAIRE

    Alireza Safdarinezhad; Mehdi Mokhtarzade; Mohammad Javad Valadan Zoej

    2016-01-01

    The automatic registration of LiDAR data and optical images, which are heterogeneous data sources, has been a major research challenge in recent years. In this paper, a novel hierarchical method is proposed in which the least amount of interaction of a skilled operator is required. Thereby, two shadow extraction schemes, one from LiDAR and the other from high-resolution satellite images, were used, and the obtained 2D shadow maps were then considered as prospective matching entities. Taken as...

  19. MASSIF-1: a beamline dedicated to the fully automatic characterization and data collection from crystals of biological macromolecules.

    Science.gov (United States)

    Bowler, Matthew W; Nurizzo, Didier; Barrett, Ray; Beteva, Antonia; Bodin, Marjolaine; Caserotto, Hugo; Delagenière, Solange; Dobias, Fabian; Flot, David; Giraud, Thierry; Guichard, Nicolas; Guijarro, Mattias; Lentini, Mario; Leonard, Gordon A; McSweeney, Sean; Oskarsson, Marcus; Schmidt, Werner; Snigirev, Anatoli; von Stetten, David; Surr, John; Svensson, Olof; Theveneau, Pascal; Mueller-Dieckmann, Christoph

    2015-11-01

    MASSIF-1 (ID30A-1) is an ESRF undulator beamline operating at a fixed wavelength of 0.969 Å (12.8 keV) that is dedicated to the completely automatic characterization of and data collection from crystals of biological macromolecules. The first of the ESRF Upgrade MASSIF beamlines to be commissioned, it has been open since September 2014, providing a unique automated data collection service to academic and industrial users. Here, the beamline characteristics and details of the new service are outlined.

  20. ARP: Automatic rapid processing for the generation of problem dependent SAS2H/ORIGEN-s cross section libraries

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Hermann, O.W.; Bowman, S.M.; Parks, C.V.

    1998-04-01

    In this report, a methodology is described which serves as an alternative to the SAS2H path of the SCALE system to generate cross sections for point-depletion calculations with the ORIGEN-S code. ARP, Automatic Rapid Processing, is an algorithm that allows the generation of cross-section libraries suitable to the ORIGEN-S code by interpolation over pregenerated SAS2H libraries. The interpolations are carried out on the following variables: burnup, enrichment, and water density. The adequacy of the methodology is evaluated by comparing measured and computed spent fuel isotopic compositions for PWR and BWR systems.
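
    The interpolation over pregenerated libraries described above can be pictured with the small SciPy sketch below, which performs trilinear interpolation of a tabulated cross section over a grid of burnup, enrichment and moderator density points; the grid and values are placeholders, not the ARP data.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # hypothetical pregenerated library grid (as produced from SAS2H calculations)
        burnup = np.array([0.0, 10.0, 20.0, 30.0, 40.0])      # GWd/MTU
        enrichment = np.array([2.0, 3.0, 4.0, 5.0])           # wt% U-235
        water_density = np.array([0.4, 0.6, 0.8])             # g/cm3

        # one-group cross section tabulated on that grid (random placeholder values)
        xs_table = np.random.rand(len(burnup), len(enrichment), len(water_density))

        interp = RegularGridInterpolator((burnup, enrichment, water_density), xs_table)

        # cross section for the actual problem conditions, by trilinear interpolation
        sigma = interp([[25.0, 3.6, 0.72]])[0]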

  1. Satellite radar altimetry over ice. Volume 1: Processing and corrections of Seasat data over Greenland

    Science.gov (United States)

    Zwally, H. Jay; Brenner, Anita C.; Major, Judith A.; Martin, Thomas V.; Bindschadler, Robert A.

    1990-01-01

    The data-processing methods and ice data products derived from Seasat radar altimeter measurements over the Greenland ice sheet and surrounding sea ice are documented. The corrections derived and applied to the Seasat radar altimeter data over ice are described in detail, including the editing and retracking algorithm to correct for height errors caused by lags in the automatic range tracking circuit. The methods for radial adjustment of the orbits and estimation of the slope-induced errors are given.

  2. Data acquisition system for TRIGA Mark I nuclear reactor and a proposal for its automatic operation

    International Nuclear Information System (INIS)

    The TRIGA IPR-R1 Nuclear Research Reactor, located at the Nuclear Technology Development Center (CDTN/CNEN) in Belo Horizonte, Brazil, has been in operation for 44 years. During these years the main operational parameters have been monitored by analog recorders and counters located in the reactor control console, and the most important operational parameters and data have been registered by the reactor operators in the reactor logbook. This process is quite useful, but it can involve some human errors. It is also impossible for the operators to take notes of all variables involved in the process, mainly during fast power transient operations. A PC-based data acquisition system was developed for the reactor that allows on-line monitoring, through graphical interfaces, and shows the evolution of the operational parameters to the operators. Some parameters that were never measured on line before, like the thermal power and the coolant flow rate in the primary loop, are now monitored on the computer video monitor. The developed system allows measuring all parameters at a frequency of up to 1 kHz. These data are also recorded in text files available for consultation and analysis. (author)

  3. Data processing and visualisation in the Rosetta Science Ground Segment

    Science.gov (United States)

    Geiger, Bernhard

    2016-09-01

    Rosetta is the first space mission to rendezvous with a comet. The spacecraft encountered its target 67P/Churyumov-Gerasimenko in 2014 and currently escorts the comet through a complete activity cycle during perihelion passage. The Rosetta Science Ground Segment (RSGS) is in charge of planning and coordinating the observations carried out by the scientific instruments on board the Rosetta spacecraft. We describe the data processing system implemented at the RSGS in order to support data analysis and science operations planning. The system automatically retrieves and processes telemetry data in near real-time. The generated products include spacecraft and instrument housekeeping parameters, scientific data for some instruments, and derived quantities. Based on spacecraft and comet trajectory information a series of geometric variables are calculated in order to assess the conditions for scheduling the observations of the scientific instruments and analyse the respective measurements obtained. Images acquired by the Rosetta Navigation Camera are processed and distributed in near real-time to the instrument team community. A quicklook web-page displaying the images allows the RSGS team to monitor the state of the comet and the correct acquisition and downlink of the images. Consolidated datasets are later delivered to the long-term archive.

  4. Telemetry Data Processing Methodology: An ASLV Experience

    OpenAIRE

    R. Varaprasad

    1998-01-01

    In any launch vehicle mission, post-flight analysis (PFA) of vehicle telemetry data turns out to be all important, because it helps in evaluating the detailed in-flight performance of the various subsystems of the vehicle. An integrated processing methodology was adopted and generalised software was developed for processing the telemetry data of the augmented satellite launch vehicle (ASLV).

  5. Business Data Processing: A Teacher's Guide.

    Science.gov (United States)

    Virginia State Dept. of Education, Richmond. Business Education Service.

    The curriculum guide, which was prepared to serve as an aid to all teachers of business data processing, gives a complete outline for a high-school level course in both Common Business Oriented Language (COBOL) and Report Program Generator (RPG). Parts one and two of the guide together comprise an introduction to data processing, which deals with…

  6. Automatic quality assurance in cutting and machining

    International Nuclear Information System (INIS)

    Requirements, economics, and the possibility of automatic data acquisition and processing are discussed for different production stages. Which of the stages (materials and measuring equipment handling, data acquisition, and data processing) is to have priority in automation depends on the time requirements of these stages. (orig.)

  7. Environmental monitoring based on automatic change detection from remotely sensed data: kernel-based approach

    Science.gov (United States)

    Shah-Hosseini, Reza; Homayouni, Saeid; Safari, Abdolreza

    2015-01-01

    In the event of a natural disaster, such as a flood or earthquake, using fast and efficient methods for estimating the extent of the damage is critical. Automatic change mapping and estimation are important in order to monitor environmental changes, e.g., deforestation. Traditional change detection (CD) approaches are time consuming, user dependent, and strongly influenced by noise and/or complex spectral classes in a region. Change maps obtained by these methods usually suffer from isolated changed pixels and have low accuracy. To deal with this, an automatic CD framework based on the integration of the change vector analysis (CVA) technique, kernel-based C-means clustering (KCMC), and a kernel-based minimum distance (KBMD) classifier is proposed. In parallel with the proposed algorithm, a support vector machine (SVM) CD method is presented and analyzed. In the first step, a differential image is generated via two approaches in a high dimensional Hilbert space. Next, by using CVA and automatically determining a threshold, the pseudo-training samples of the change and no-change classes are extracted. These training samples are used for determining the initial values of the KCMC parameters and for training the SVM-based CD method. Then, the optimization of a cost function based on geometrical and spectral similarity in the kernel space is employed in order to estimate the KCMC parameters and to select the precise training samples. These training samples are used to train the KBMD classifier. Last, the class label of each unknown pixel is determined using the KBMD classifier and the SVM-based CD method. In order to evaluate the efficiency of the proposed algorithm for various remote sensing images and applications, two different datasets acquired by Quickbird and Landsat TM/ETM+ are used. The results show a good flexibility and effectiveness of this automatic CD method for environmental change monitoring. In addition, the comparative analysis of results from the proposed method
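
    A minimal sketch of the CVA-plus-threshold step used to extract pseudo-training samples is given below: the change-vector magnitude between two co-registered images is thresholded automatically (Otsu used here as one common choice), and confident pixels on either side become pseudo-training samples. The images and the confidence margins are synthetic assumptions, not the paper's data.

        import numpy as np
        from skimage.filters import threshold_otsu

        # two co-registered multispectral images (rows x cols x bands), synthetic here
        rng = np.random.default_rng(0)
        img_t1 = rng.random((100, 100, 4))
        img_t2 = img_t1.copy()
        img_t2[40:60, 40:60, :] += 0.5                      # simulated change patch

        # change vector analysis: magnitude of the per-pixel spectral difference vector
        magnitude = np.linalg.norm(img_t2 - img_t1, axis=2)

        # automatic threshold separates likely-changed from likely-unchanged pixels;
        # confident pixels on either side become pseudo-training samples for the classifiers
        t = threshold_otsu(magnitude)
        change_samples = magnitude > 1.2 * t                # high-confidence change
        no_change_samples = magnitude < 0.8 * t             # high-confidence no change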

  8. Data processing device for computed tomography system

    International Nuclear Information System (INIS)

    A data processing device applied to a computed tomography system which examines a living body utilizing radiation of X-rays is disclosed. The X-rays which have penetrated the living body are converted into electric signals in a detecting section. The electric signals are acquired and converted from an analog form into a digital form in a data acquisition section, and then supplied to a matrix data-generating section included in the data processing device. By this matrix data-generating section are generated matrix data which correspond to a plurality of projection data. These matrix data are supplied to a partial sum-producing section. The partial sums respectively corresponding to groups of the matrix data are calculated in this partial sum-producing section and then supplied to an accumulation section. In this accumulation section, the final value corresponding to the total sum of the matrix data is calculated, whereby the calculation for image reconstruction is performed

  9. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    International Nuclear Information System (INIS)

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches. (paper)

  10. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    Science.gov (United States)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  11. Process mining data science in action

    CERN Document Server

    van der Aalst, Wil

    2016-01-01

    The first to cover this missing link between data mining and process modeling, this book provides real-world techniques for monitoring and analyzing processes in real time. It is a powerful new tool destined to play a key role in business process management.

  12. Synthetic Aperture Radar (SAR) data processing

    Science.gov (United States)

    Beckner, F. L.; Ahr, H. A.; Ausherman, D. A.; Cutrona, L. J.; Francisco, S.; Harrison, R. E.; Heuser, J. S.; Jordan, R. L.; Justus, J.; Manning, B.

    1978-01-01

    The available and optimal methods for generating SAR imagery for NASA applications were identified. The SAR image quality and data processing requirements associated with these applications were studied. Mathematical operations and algorithms required to process sensor data into SAR imagery were defined. The architecture of SAR image formation processors was discussed, and technology necessary to implement the SAR data processors used in both general purpose and dedicated imaging systems was addressed.

  13. Automatic first-break picking using the instantaneous traveltime attribute

    KAUST Repository

    Saragiotis, Christos

    2012-01-01

    Picking the first breaks is an important step in seismic processing. The large volume of the seismic data calls for automatic and objective picking. We introduce a new automatic first-break picker, which uses specifically designed time windows and an iterative procedure based on the instantaneous traveltime attribute. The method is fast as it only uses a few FFTs per trace. We demonstrate the effectiveness of this automatic method by applying it on real test data.
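
    As a stand-in for the instantaneous-traveltime picker (whose exact formulation is not reproduced here), the sketch below picks a first break per trace with a simple energy-ratio (STA/LTA-like) criterion; the window lengths and the synthetic trace are assumptions.

```python
# Simple energy-ratio first-break picker: a generic stand-in, not the
# instantaneous traveltime attribute method described in the abstract.
import numpy as np

def pick_first_break(trace, short_win=10, long_win=50):
    energy = trace.astype(float) ** 2
    csum = np.cumsum(energy)
    ratios = np.zeros(trace.size)
    for i in range(long_win, trace.size - short_win):
        sta = (csum[i + short_win] - csum[i]) / short_win   # short-term average
        lta = (csum[i] - csum[i - long_win]) / long_win     # long-term average
        ratios[i] = sta / (lta + 1e-12)
    return int(np.argmax(ratios))  # sample index of the strongest onset

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    trace = rng.normal(scale=0.05, size=500)
    trace[200:] += np.sin(np.linspace(0, 20 * np.pi, 300))  # arrival near sample 200
    print("picked first break near sample", pick_first_break(trace))
```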

  14. Automatic quantitative analysis of microstructure of ductile cast iron using digital image processing

    Directory of Open Access Journals (Sweden)

    Abhijit Malage

    2015-09-01

    Full Text Available Ductile cast iron is also referred to as nodular iron or spheroidal graphite iron. Ductile cast iron contains graphite in the form of discrete nodules in a matrix of ferrite and pearlite. In order to determine the mechanical properties, one needs to determine the volume of phases in the matrix and the nodularity in the microstructure of a metal sample. The manual methods available for this are time consuming and their accuracy depends on expertise. The paper proposes a novel method for automatic quantitative analysis of the microstructure of Ferritic Pearlitic Ductile Iron which calculates the volume of phases and the nodularity of a sample. This gives results within a very short time (approximately 5 s) with 98% accuracy for the phase volumes of the matrix and 90% accuracy for nodule detection and analysis, which are within the range specified in the standard for SG 500/7 and were validated by a metallurgist.
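
    A greatly simplified version of this kind of measurement can be sketched with thresholding and connected-component analysis; the threshold value, the circularity cut-off, and the use of scikit-image are assumptions for illustration, not the authors' algorithm.

```python
# Rough sketch: graphite nodule counting and phase area fraction from a
# grayscale micrograph, using simple thresholds (illustrative values only).
import numpy as np
from skimage.measure import label, regionprops

def analyze_microstructure(gray, graphite_thr=80, circularity_min=0.6):
    graphite = gray < graphite_thr                       # dark graphite particles
    labels = label(graphite)
    nodules = 0
    for region in regionprops(labels):
        circ = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        if circ >= circularity_min:                      # roughly round => nodule
            nodules += 1
    nodularity = nodules / max(labels.max(), 1)          # fraction of round particles
    graphite_fraction = graphite.mean()                  # area fraction of graphite
    return nodularity, graphite_fraction

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.integers(120, 255, size=(256, 256)).astype(np.uint8)
    rr, cc = np.ogrid[:256, :256]
    img[(rr - 60) ** 2 + (cc - 60) ** 2 < 100] = 30      # one synthetic nodule
    print(analyze_microstructure(img))
```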

  15. Automatic Segmentation of Drosophila Neural Compartments Using GAL4 Expression Data Reveals Novel Visual Pathways.

    Science.gov (United States)

    Panser, Karin; Tirian, Laszlo; Schulze, Florian; Villalba, Santiago; Jefferis, Gregory S X E; Bühler, Katja; Straw, Andrew D

    2016-08-01

    Identifying distinct anatomical structures within the brain and developing genetic tools to target them are fundamental steps for understanding brain function. We hypothesize that enhancer expression patterns can be used to automatically identify functional units such as neuropils and fiber tracts. We used two recent, genome-scale Drosophila GAL4 libraries and associated confocal image datasets to segment large brain regions into smaller subvolumes. Our results (available at https://strawlab.org/braincode) support this hypothesis because regions with well-known anatomy, namely the antennal lobes and central complex, were automatically segmented into familiar compartments. The basis for the structural assignment is clustering of voxels based on patterns of enhancer expression. These initial clusters are agglomerated to make hierarchical predictions of structure. We applied the algorithm to central brain regions receiving input from the optic lobes. Based on the automated segmentation and manual validation, we can identify and provide promising driver lines for 11 previously identified and 14 novel types of visual projection neurons and their associated optic glomeruli. The same strategy can be used in other brain regions and likely other species, including vertebrates. PMID:27426516
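
    The core idea, clustering voxels by their pattern of expression across many driver lines, can be illustrated with a generic k-means sketch (the Braincode pipeline itself uses additional agglomerative steps that are not reproduced here); the matrix shape and the scikit-learn usage are assumptions.

```python
# Toy sketch: cluster voxels by their expression profile across driver lines.
# Rows = voxels, columns = driver lines; not the actual Braincode pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_voxels, n_lines = 2000, 50
profiles = rng.random((n_voxels, n_lines))
profiles[:500, :10] += 2.0      # one synthetic "compartment" with shared expression

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(profiles)
labels = kmeans.labels_         # cluster id per voxel => candidate compartments
print(np.bincount(labels))
```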

  16. Initial borehole acoustic televiewer data processing algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Moore, T.K.

    1988-06-01

    With the development of a new digital televiewer, several algorithms have been developed in support of off-line data processing. This report describes the initial set of utilities developed to support data handling as well as data display. Functional descriptions, implementation details, and instructions for use of the seven algorithms are provided. 5 refs., 33 figs., 1 tab.

  17. A method of automatically registering point cloud data based on range images

    Institute of Scientific and Technical Information of China (English)

    田慧; 周绍光; 李浩

    2012-01-01

    Point cloud registration is an indispensable step in processing 3D laser scanning data. Registration based on targets is one of the classic approaches; such schemes are semi-automatic and require the targets to be scanned separately at each station. This paper presents a registration strategy that converts the point cloud of a single scan station into a range image using the central projection principle, extracts the targets automatically with digital image processing techniques, fits the coordinates of each target's center point, and uses photogrammetric principles to register the point clouds automatically. Experiments demonstrate the effectiveness of the proposed method.
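
    A minimal sketch of turning a single-station scan into a range image is given below, using a simple spherical projection rather than the exact central-projection formulation of the paper; the angular resolution and array names are assumptions.

```python
# Convert an N x 3 point cloud (scanner at the origin) into a range image by
# binning azimuth/elevation angles; a simplified stand-in for the projection
# step described in the abstract.
import numpy as np

def point_cloud_to_range_image(points, az_bins=360, el_bins=180):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng_ = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    azimuth = np.arctan2(y, x)                     # [-pi, pi]
    elevation = np.arcsin(np.clip(z / (rng_ + 1e-12), -1.0, 1.0))
    ai = ((azimuth + np.pi) / (2 * np.pi) * (az_bins - 1)).astype(int)
    ei = ((elevation + np.pi / 2) / np.pi * (el_bins - 1)).astype(int)
    image = np.full((el_bins, az_bins), np.nan)
    image[ei, ai] = rng_                           # keep the last range per pixel
    return image

if __name__ == "__main__":
    pts = np.random.default_rng(4).normal(size=(10000, 3)) + 5.0
    img = point_cloud_to_range_image(pts)
    print("filled pixels:", np.count_nonzero(~np.isnan(img)))
```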

  18. Automatic Reconstruction of 3D Building Models from Terrestrial Laser Scanner Data

    Science.gov (United States)

    El Meouche, R.; Rezoug, M.; Hijazi, I.; Maes, D.

    2013-11-01

    With modern 3D laser scanners we can acquire a large amount of 3D data in only a few minutes. This technology results in a growing number of applications ranging from the digitalization of historical artifacts to facial authentication. The modeling process demands a lot of time and work (Tim Volodine, 2007). In comparison with the other two stages, the acquisition and the registration, the degree of automation of the modeling stage is almost zero. In this paper, we propose a new surface reconstruction technique for buildings to process the data obtained by a 3D laser scanner. These data are called a point cloud, which is a collection of points sampled from the surface of a 3D object. Such a point cloud can consist of millions of points. In order to work more efficiently, we worked with simplified models which contain fewer points and thus less detail than a point cloud obtained in situ. The goal of this study was to facilitate the modeling process of a building starting from 3D laser scanner data. In order to do this, we wrote two scripts for Rhinoceros 5.0 based on intelligent algorithms. The first script finds the exterior outline of a building. With a minimum of human interaction, a thin box is drawn around the surface of a wall. This box is able to rotate 360° around an axis in a corner of the wall in search of the points of other walls. In this way we can eliminate noise points, which are unwanted or irrelevant points. If there is an angled roof, the box can also turn around the edge of the wall and the roof. With the different positions of the box we can calculate the exterior outline. The second script draws the interior outline in a surface of a building. By interior outline we mean the outline of openings such as windows or doors. This script is based on the distances between the points and vector characteristics. Two consecutive points with a relatively large distance will form the outline of an opening. Once those points are found, the interior outline

  19. A Domain Description Language for Data Processing

    Science.gov (United States)

    Golden, Keith

    2003-01-01

    We discuss an application of planning to data processing, a planning problem which poses unique challenges for domain description languages. We discuss these challenges and why the current PDDL standard does not meet them. We discuss DPADL (Data Processing Action Description Language), a language for describing planning domains that involve data processing. DPADL is a declarative, object-oriented language that supports constraints and embedded Java code, object creation and copying, explicit inputs and outputs for actions, and metadata descriptions of existing and desired data. DPADL is supported by the IMAGEbot system, which we are using to provide automation for an ecological forecasting application. We compare DPADL to PDDL and discuss changes that could be made to PDDL to make it more suitable for representing planning domains that involve data processing actions.

  20. Linking DICOM pixel data with radiology reports using automatic semantic annotation

    Science.gov (United States)

    Pathak, Sayan D.; Kim, Woojin; Munasinghe, Indeera; Criminisi, Antonio; White, Steve; Siddiqui, Khan

    2012-02-01

    Improved access to DICOM studies by both physicians and patients is changing the ways medical imaging studies are visualized and interpreted beyond the confines of radiologists' PACS workstations. While radiologists are trained for viewing and image interpretation, a non-radiologist physician relies on the radiologists' reports. Consequently, patients have historically been informed about their imaging findings via oral communication with their physicians, even though clinical studies have shown that patients respond to a physician's advice significantly better when the individual patients are shown their own actual data. Our previous work on automated semantic annotation of DICOM Computed Tomography (CT) images allows us to further link radiology reports with the corresponding images, enabling us to bridge the gap between image data and the human-interpreted textual description of the corresponding imaging studies. The mapping of radiology text is facilitated by a natural language processing (NLP) based search application. When combined with our automated semantic annotation of images, it enables navigation in large DICOM studies by clicking hyperlinked text in the radiology reports. An added advantage of using semantic annotation is the ability to render the organs to their default window level setting, thus eliminating another barrier to image sharing and distribution. We believe such approaches would potentially enable consumers to have access to their imaging data and navigate them in an informed manner.

  1. Towards automatic lithological classification from remote sensing data using support vector machines

    Science.gov (United States)

    Yu, Le; Porwal, Alok; Holden, Eun-Jung; Dentith, Michael

    2010-05-01

    Remote sensing data can be effectively used as a means to build geological knowledge for poorly mapped terrains. Spectral remote sensing data from space- and air-borne sensors have been widely used for geological mapping, especially in areas of high outcrop density in arid regions. However, spectral remote sensing information by itself cannot be efficiently used for a comprehensive lithological classification of an area because (1) the diagnostic spectral response of a rock within an image pixel is conditioned by several factors, including atmospheric effects, the spectral and spatial resolution of the image, sub-pixel level heterogeneity in the chemical and mineralogical composition of the rock, and the presence of soil and vegetation cover; and (2) it provides only surface information and is therefore highly sensitive to noise due to weathering, soil cover, and vegetation. Consequently, for efficient lithological classification, spectral remote sensing data need to be supplemented with other remote sensing datasets that provide geomorphological and subsurface geological information, such as a digital elevation model (DEM) and aeromagnetic data. Each of the datasets contains significant information about geology that, in conjunction, can potentially be used for automated lithological classification using supervised machine learning algorithms. In this study, the support vector machine (SVM), which is a kernel-based supervised learning method, was applied to automated lithological classification of a study area in northwestern India using remote sensing data, namely, ASTER, DEM and aeromagnetic data. Several digital image processing techniques were used to produce derivative datasets that contained enhanced information relevant to lithological discrimination. A series of SVMs (trained using k-fold cross-validation with grid search) were tested using various combinations of input datasets selected from among 50 datasets including the original 14 ASTER bands and 36 derivative datasets (including 14
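
    A minimal sketch of the SVM classification stage (a grid-searched RBF kernel with k-fold cross-validation) is shown below; the feature matrix that would be built from ASTER/DEM/aeromagnetic layers is stubbed with random data, and the parameter grid values are assumptions.

```python
# Sketch: grid-searched SVM on a stacked feature matrix (pixels x layers).
# Random data stands in for the real ASTER/DEM/aeromagnetic derivative layers.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.random((1500, 20))                       # 20 input layers per pixel
y = (X[:, 0] + X[:, 5] > 1.0).astype(int)        # toy lithology labels

param_grid = {"svc__C": [1, 10, 100], "svc__gamma": [0.01, 0.1, 1.0]}
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
search = GridSearchCV(model, param_grid, cv=5).fit(X, y)   # k-fold grid search
print(search.best_params_, round(search.best_score_, 3))
```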

  2. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    Science.gov (United States)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

    Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on the depth and material of the object and the surrounding material. To obtain these parameters, the shape of the hyperbola has to be fitted. In recent years several methods were developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas we apply a simple Hough Transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough Transform, the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems are needed as input for the learning algorithm.
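
    A minimal sketch of running a trained OpenCV cascade over a radargram image follows; the cascade file name `hyperbola_cascade.xml` and the input image path are hypothetical (the cascade would have to be trained on labelled hyperbola patches), and only the detection call is shown, not the training or the subsequent Hough fitting.

```python
# Sketch: detect candidate hyperbola regions in a grayscale radargram with a
# trained Viola-Jones cascade. The cascade XML file is a hypothetical,
# user-trained model, not something shipped with OpenCV.
import cv2

def detect_hyperbolas(radargram_path, cascade_path="hyperbola_cascade.xml"):
    cascade = cv2.CascadeClassifier(cascade_path)
    image = cv2.imread(radargram_path, cv2.IMREAD_GRAYSCALE)
    # Returns rectangles (x, y, w, h) around candidate reflection hyperbolas;
    # these regions would then be passed to a Hough transform for exact fitting.
    return cascade.detectMultiScale(image, scaleFactor=1.1, minNeighbors=4)

if __name__ == "__main__":
    for (x, y, w, h) in detect_hyperbolas("radargram.png"):
        print("candidate hyperbola at", x, y, w, h)
```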

  3. Automatic spline-smoothing approach applied to denoise Moroccan resistivity data phosphate deposit “disturbances” map

    Directory of Open Access Journals (Sweden)

    Saad Bakkali

    2010-04-01

    Full Text Available This paper focuses on presenting a method which is able to filter out noise and suppress outliers of sampled real functions under fairly general conditions. The automatic optimal spline-smoothing approach automatically determines how a cubic spline should be adjusted in a least-squares optimal sense from an a priori selection of the number of points defining an adjusting spline, but not their location on that curve. The method is fast and easily allows for selecting several knots, thereby adding desirable flexibility to the procedure. As an illustration, we apply the AOSSA method to a Moroccan resistivity data phosphate deposit “disturbances” map. The AOSSA smoothing method is an efficient tool in interpreting geophysical potential field data and is particularly suitable for denoising, filtering and analysing resistivity data singularities. The AOSSA smoothing and filtering approach was found to be consistently useful when applied to modeling surface phosphate “disturbances”.
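
    The general spline-smoothing idea (not the exact AOSSA knot-selection scheme) can be sketched with SciPy's smoothing spline; the smoothing factor and the synthetic profile are assumptions.

```python
# Sketch: least-squares cubic smoothing spline over a noisy resistivity profile.
# SciPy chooses the knots for a given smoothing factor s; this is a generic
# stand-in for the AOSSA procedure, not its exact knot-selection rule.
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0.0, 10.0, 200)
truth = np.sin(x) + 0.3 * x
noisy = truth + np.random.default_rng(6).normal(scale=0.2, size=x.size)

spline = UnivariateSpline(x, noisy, k=3, s=len(x) * 0.04)   # cubic, tuned smoothing
denoised = spline(x)
print("residual RMS:", float(np.sqrt(np.mean((denoised - truth) ** 2))))
```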

  4. Automatic chemical design using a data-driven continuous representation of molecules

    CERN Document Server

    Gómez-Bombarelli, Rafael; Hernández-Lobato, José Miguel; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D; Adams, Ryan P; Aspuru-Guzik, Alán

    2016-01-01

    We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This generative model allows efficient search and optimization through open-ended spaces of chemical compounds. We train deep neural networks on hundreds of thousands of existing chemical structures to construct two coupled functions: an encoder and a decoder. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to the discrete representation from this latent space. Continuous representations allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our metho...

  5. Modeling Earthen Dike Stability: Sensitivity Analysis and Automatic Calibration of Diffusivities Based on Live Sensor Data

    CERN Document Server

    Melnikova, N B; Sloot, P M A

    2012-01-01

    The paper describes the concept and implementation details of integrating a finite element module for dike stability analysis, Virtual Dike, into an early warning system for flood protection. The module operates in real-time mode and includes fluid and structural sub-models for simulation of porous flow through the dike and for dike stability analysis. Real-time measurements obtained from pore pressure sensors are fed into the simulation module, to be compared with simulated pore pressure dynamics. Implementation of the module has been performed for a real-world test case: an earthen levee protecting a sea-port in Groningen, the Netherlands. Sensitivity analysis and calibration of diffusivities have been performed for tidal fluctuations. An algorithm for automatic diffusivity calibration for a heterogeneous dike is proposed and studied. Analytical solutions describing tidal propagation in a one-dimensional saturated aquifer are employed in the algorithm to generate initial estimates of the diffusivities.
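
    One classical analytical result for tidal propagation in a 1D saturated aquifer relates the damping of the tidal amplitude with distance to the hydraulic diffusivity, which is one plausible way to produce the initial estimates mentioned above. The sketch below uses that textbook relation, A(x)/A(0) = exp(-x * sqrt(w / (2 D))); it is a generic illustration, not the Virtual Dike module, and the sensor values are synthetic.

```python
# Sketch: initial diffusivity estimate from tidal amplitude damping in a 1D
# aquifer, using A(x)/A(0) = exp(-x * sqrt(w / (2 D))). Input amplitudes would
# come from pore-pressure sensors; the values here are synthetic.
import math

def diffusivity_from_damping(amplitude_at_sensor, amplitude_at_boundary,
                             distance_m, tidal_period_s=12.42 * 3600):
    omega = 2.0 * math.pi / tidal_period_s          # semi-diurnal tide frequency
    ratio = amplitude_at_sensor / amplitude_at_boundary
    return omega * distance_m ** 2 / (2.0 * math.log(ratio) ** 2)

# Example: tide amplitude 1.0 m at the boundary, 0.2 m at a sensor 25 m inside.
print(diffusivity_from_damping(0.2, 1.0, 25.0))     # diffusivity in m^2/s
```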

  6. A framework for automatic construction of 3D PDM from segmented volumetric neuroradiological data sets.

    Science.gov (United States)

    Fu, Yili; Gao, Wenpeng; Xiao, Yongfei; Liu, Jimin

    2010-03-01

    3D point distribution models (PDM) of subcortical structures can be applied in medical image analysis by providing a priori knowledge. However, accurate shape representation and point correspondence are still challenging for building 3D PDMs. This paper presents a novel framework for the automated construction of 3D PDMs from a set of segmented volumetric images. First, a template shape is generated according to the spatial overlap. Then the corresponding landmarks among shapes are automatically identified by a novel hierarchical global-to-local approach, which combines iterative closest point based global registration and active surface model based local deformation to transform the template shape to all other shapes. Finally, a 3D PDM is constructed. Experimental results on four subcortical structures show that the proposed method is able to construct 3D PDMs with high quality in compactness, generalization and specificity, and is more efficient and effective than state-of-the-art methods such as MDL and SPHARM. PMID:19631401

  7. Utilizing Linked Open Data Sources for Automatic Generation of Semantic Metadata

    Science.gov (United States)

    Nummiaho, Antti; Vainikainen, Sari; Melin, Magnus

    In this paper we present an application that can be used to automatically generate semantic metadata for tags given as simple keywords. The application that we have implemented in Java programming language creates the semantic metadata by linking the tags to concepts in different semantic knowledge bases (CrunchBase, DBpedia, Freebase, KOKO, Opencyc, Umbel and/or WordNet). The steps that our application takes in doing so include detecting possible languages, finding spelling suggestions and finding meanings from amongst the proper nouns and common nouns separately. Currently, our application supports English, Finnish and Swedish words, but other languages could be included easily if the required lexical tools (spellcheckers, etc.) are available. The created semantic metadata can be of great use in, e.g., finding and combining similar contents, creating recommendations and targeting advertisements.

  8. Automatic landmark extraction from image data using modified growing neural gas network.

    Science.gov (United States)

    Fatemizadeh, Emad; Lucas, Caro; Soltanian-Zadeh, Hamid

    2003-06-01

    A new method for automatic landmark extraction from MR brain images is presented. In this method, landmark extraction is accomplished by modifying the growing neural gas (GNG) algorithm, which is a neural-network-based cluster-seeking algorithm. Using modified GNG (MGNG), corresponding dominant points of contours extracted from two corresponding images are found. These contours are the borders of segmented anatomical regions from brain images. The presented method is compared to: 1) the node splitting-merging Kohonen model and 2) the Teh-Chin algorithm (a well-known approach for dominant point extraction from ordered curves). It is shown that the proposed algorithm has lower distortion error, the ability to extract landmarks from two corresponding curves simultaneously, and also generates the best match according to five medical experts. PMID:12834162

  9. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    International Nuclear Information System (INIS)

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is light weight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. Database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisitions systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks

  10. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    Energy Technology Data Exchange (ETDEWEB)

    Urban, J., E-mail: urban@ipp.cas.cz [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Pipek, J.; Hron, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Janky, F.; Papřok, R.; Peterka, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Department of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, 180 00 Praha 8 (Czech Republic); Duarte, A.S. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-05-15

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is light weight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. Database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisitions systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks.
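
    The dependency-rule-driven post-processing mentioned above can be illustrated with a small generic scheduler: each task declares the signals it needs, and tasks run (in parallel where possible) once their dependencies are available. This is a hypothetical sketch, not CDB's actual server code or API; the rule names are invented.

```python
# Toy dependency-driven post-processing runner: a task is executed as soon as
# all signals it depends on exist. Purely illustrative; not the CDB server.
from concurrent.futures import ThreadPoolExecutor

RULES = {
    # produced signal: (dependencies, function computing it)
    "ne_profile":    (("raw_ts", "raw_interferometer"), lambda d: "ne from " + d["raw_ts"]),
    "stored_energy": (("ne_profile",),                  lambda d: "W from " + d["ne_profile"]),
}

def run_post_processing(available):
    data = dict(available)
    pending = dict(RULES)
    with ThreadPoolExecutor() as pool:
        while pending:
            ready = {k: v for k, v in pending.items()
                     if all(dep in data for dep in v[0])}
            if not ready:
                break                                    # unmet dependencies remain
            futures = {k: pool.submit(func, data) for k, (deps, func) in ready.items()}
            for key, fut in futures.items():
                data[key] = fut.result()
                del pending[key]
    return data

print(run_post_processing({"raw_ts": "TS#1234", "raw_interferometer": "IF#1234"}))
```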

  11. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    Science.gov (United States)

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or differing morphological characteristics, and it is difficult to measure and validate their accuracy through conventional measurement methods. To address these practical problems and allow tree crown projection and crown volume to be extracted automatically by a computer program, this paper proposes an automatic non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon 120), using a planar scattered-data-point convex hull algorithm and a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments cover 22 common tree species of Beijing, China. The results show that the correlation coefficient of the crown projection between Av calculated by the new method and the conventional method A4 reaches 0.964 (ppoint or sixteen-point projection with fixed angles to estimate crown projections, and (2) different regular volume formulas to simulate crown volume according to the tree crown shapes. Based on the high-resolution 3D LIDAR point cloud data of an individual tree, the tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection and volume of the individual tree were extracted by this automatic non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry.
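
    The convex-hull part of the computation (projecting crown points onto the ground plane and taking the hull area) can be sketched as follows; the slice segmentation and accumulation step for crown volume is omitted, and the SciPy usage and synthetic crown are assumptions.

```python
# Sketch: crown projection area as the 2D convex hull area of the crown points
# projected onto the ground plane. (In 2D, ConvexHull.volume is the area.)
import numpy as np
from scipy.spatial import ConvexHull

def crown_projection_area(points_xyz):
    xy = np.asarray(points_xyz)[:, :2]       # drop height: project to ground plane
    return ConvexHull(xy).volume             # 2D hull "volume" == enclosed area

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    crown = rng.normal(scale=2.0, size=(5000, 3)) + [0, 0, 10]
    print("projection area [m^2]:", round(crown_projection_area(crown), 2))
```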

  12. A fully automatic tool to perform accurate flood mapping by merging remote sensing imagery and ancillary data

    Science.gov (United States)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido

    2016-04-01

    Flooding is one of the most frequent and costly natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events, and from the practical point of view of the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, moreover offering a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide an accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each one with a particular signature in the presence of flooding, requires modelling the behavior of different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. It supplies flood maps

  13. US Air Force Data Processing Manuals

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data Processing Reference manual for United States Air Force surface stations, circa 1960s. TDF-13 stands for Tape Deck Format number 13, the format in which the...

  14. Lobster Processing and Sales Trip Report Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a federally mandated log which is required to be mailed in to NMFS after a fishing trip. This data set includes lobster processing and sales information...

  15. A practical data processing workflow for multi-OMICS projects.

    Science.gov (United States)

    Kohl, Michael; Megger, Dominik A; Trippler, Martin; Meckel, Hagen; Ahrens, Maike; Bracht, Thilo; Weber, Frank; Hoffmann, Andreas-Claudius; Baba, Hideo A; Sitek, Barbara; Schlaak, Jörg F; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin

    2014-01-01

    Multi-OMICS approaches aim at the integration of quantitative data obtained for different biological molecules in order to understand their interrelation and the functioning of larger systems. This paper deals with several data integration and data processing issues that frequently occur within this context. To this end, the data processing workflow within the PROFILE project is presented, a multi-OMICS project that aims at the identification of novel biomarkers and the development of new therapeutic targets for seven important liver diseases. Furthermore, a software tool called CrossPlatformCommander is sketched, which facilitates several steps of the proposed workflow in a semi-automatic manner. Application of the software is presented for the detection of novel biomarkers, their ranking and annotation with existing knowledge using the example of corresponding Transcriptomics and Proteomics data sets obtained from patients suffering from hepatocellular carcinoma. Additionally, a linear regression analysis of Transcriptomics vs. Proteomics data is presented and its performance assessed. It was shown that for capturing profound relations between Transcriptomics and Proteomics data, a simple linear regression analysis is not sufficient and implementation and evaluation of alternative statistical approaches are needed. Additionally, the integration of multivariate variable selection and classification approaches is intended for further development of the software. Although this paper focuses only on the combination of data obtained from quantitative Proteomics and Transcriptomics experiments, several approaches and data integration steps are also applicable to other OMICS technologies. Keeping specific restrictions in mind, the suggested workflow (or at least parts of it) may be used as a template for similar projects that make use of different high throughput techniques. This article is part of a Special Issue entitled: Computational Proteomics in the Post
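
    A minimal version of the transcript-versus-protein regression step mentioned above can be written as generic SciPy code (this is not CrossPlatformCommander): fit a linear model between matched log-abundance vectors and report the fit quality. The synthetic data stand in for the real measurements.

```python
# Sketch: simple linear regression of protein abundance on transcript abundance
# for matched genes; synthetic data stand in for the PROFILE measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
log_mrna = rng.normal(loc=8.0, scale=2.0, size=500)
log_protein = 0.6 * log_mrna + rng.normal(scale=1.5, size=500)   # weak coupling

fit = stats.linregress(log_mrna, log_protein)
print(f"slope={fit.slope:.2f}  r^2={fit.rvalue**2:.2f}  p={fit.pvalue:.1e}")
# A low r^2 here mirrors the paper's point that a plain linear model often
# cannot capture the transcript-protein relationship.
```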

  16. Modular Toolkit for Data Processing (MDP): A Python Data Processing Framework

    OpenAIRE

    Zito, Tiziano; Wilbert, Niko; Wiskott, Laurenz; Berkes, Pietro

    2009-01-01

    Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework, which can eas...

  17. 12 CFR 7.5006 - Data processing.

    Science.gov (United States)

    2010-01-01

    12 CFR 7.5006 (2010), Banks and Banking, Comptroller of the Currency, Department of the Treasury, Bank Activities and Operations, Electronic Activities, § 7.5006 Data processing. (a) Eligible activities. It is part of the business of banking under 12 U.S.C. 24(Seventh) for a...

  18. Automatic Recognition of Isolated And Interacting Manufacturing Features In Milling Process

    Directory of Open Access Journals (Sweden)

    Abdelilah El Mesbahi

    2014-10-01

    Full Text Available Manufacturing features play an important role as a link between design information and manufacturing activities. Recently, various efforts have been concentrated on the development of automatic feature recognition systems. However, only a limited number of features could be recognized, and intersecting features were generally not handled. This paper presents a simple system in which manufacturing features are easily detected using a Chain of Faces and Base of Faces (CF-BF) graph. A feature is modeled by a series/parallel association of an opened Chain of Faces (OCF) or a Closed Chain of Faces (CCF) that rests on a Base Face (BF). The feature is considered a Perfect Manufacturing Feature (PMF) if all faces that participate in the constitution of the OCF/CCF are blank faces; otherwise it is an Imperfect Manufacturing Feature (IMF). In order to establish new Virtual Faces that satisfy this necessary condition, a judicious analysis of the orientation of the frontier faces that rest on the BF is performed. The technique was tested on several parts taken from the literature and the results were satisfactory.

  19. EARLINET Single Calculus Chain – technical – Part 1: Pre-processing of raw lidar data

    Directory of Open Access Journals (Sweden)

    G. D'Amico

    2015-10-01

    Full Text Available In this paper we describe an automatic tool for the pre-processing of lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. The ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, the ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. The ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of the ELPP module, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of the ELPP module is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of the ELPP module. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. The ELPP module has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.

  20. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    Science.gov (United States)

    D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina

    2016-02-01

    In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
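
    Two of the corrections named above, dead-time correction and background subtraction, have simple textbook forms that can be sketched as follows; the non-paralyzable dead-time formula and the far-range background window are assumptions about typical practice, not ELPP's exact implementation.

```python
# Sketch of two standard lidar pre-processing steps: non-paralyzable dead-time
# correction of photon-counting rates and atmospheric/electronic background
# subtraction using the far-range tail of the profile. Illustrative only.
import numpy as np

def dead_time_correction(count_rate_mhz, dead_time_ns=3.5):
    rate = count_rate_mhz * 1e6                      # counts per second
    corrected = rate / (1.0 - rate * dead_time_ns * 1e-9)
    return corrected / 1e6                           # back to MHz

def subtract_background(profile, n_tail_bins=500):
    background = profile[-n_tail_bins:].mean()       # far range ~ pure background
    return profile - background

if __name__ == "__main__":
    bins = np.arange(4000)
    raw = 50.0 * np.exp(-bins / 800.0) + 0.8         # decaying signal + background
    signal = subtract_background(dead_time_correction(raw))
    print("residual background:", round(float(signal[-500:].mean()), 3))
```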

  1. Automatic input rectification

    OpenAIRE

    Long, Fan; Ganesh, Vijay; Carbin, Michael James; Sidiroglou, Stelios; Rinard, Martin

    2012-01-01

    We present a novel technique, automatic input rectification, and a prototype implementation, SOAP. SOAP learns a set of constraints characterizing typical inputs that an application is highly likely to process correctly. When given an atypical input that does not satisfy these constraints, SOAP automatically rectifies the input (i.e., changes the input so that it satisfies the learned constraints). The goal is to automatically convert potentially dangerous inputs into typical inputs that the ...

  2. Standards for the Analysis and Processing of Surface-Water Data and Information Using Electronic Methods

    Science.gov (United States)

    Sauer, Vernon B.

    2002-01-01

    Surface-water computation methods and procedures are described in this report to provide standards from which a completely automated electronic processing system can be developed. To the greatest extent possible, the traditional U. S. Geological Survey (USGS) methodology and standards for streamflow data collection and analysis have been incorporated into these standards. Although USGS methodology and standards are the basis for this report, the report is applicable to other organizations doing similar work. The proposed electronic processing system allows field measurement data, including data stored on automatic field recording devices and data recorded by the field hydrographer (a person who collects streamflow and other surface-water data) in electronic field notebooks, to be input easily and automatically. A user of the electronic processing system can easily monitor the incoming data and verify and edit the data, if necessary. Input of the computational procedures, rating curves, shift requirements, and other special methods are interactive processes between the user and the electronic processing system, with much of this processing being automatic. Special computation procedures are provided for complex stations such as velocity-index, slope, control structures, and unsteady-flow models, such as the Branch-Network Dynamic Flow Model (BRANCH). Navigation paths are designed to lead the user through the computational steps for each type of gaging station (stage-only, stage-discharge, velocity-index, slope, rate-of-change in stage, reservoir, tide, structure, and hydraulic model stations). The proposed electronic processing system emphasizes the use of interactive graphics to provide good visual tools for unit values editing, rating curve and shift analysis, hydrograph comparisons, data-estimation procedures, data review, and other needs. Documentation, review, finalization, and publication of records are provided for with the electronic processing system, as
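
    One elementary piece of the stage-discharge computation described above is the application of a rating curve with a shift. A minimal sketch under the usual power-law rating assumption is shown below; the coefficients are illustrative values, not actual USGS ratings or standards.

```python
# Sketch: discharge from stage via a power-law rating with a shift applied to
# the gage height. Coefficients are illustrative, not an actual USGS rating.
def rating_discharge(gage_height_ft, shift_ft=0.0, e=1.2, C=35.0, b=2.1):
    effective = gage_height_ft + shift_ft - e      # effective head above gage zero
    if effective <= 0.0:
        return 0.0
    return C * effective ** b                      # Q = C * (h + shift - e)^b

for stage in (2.0, 4.0, 6.0):
    print(stage, "ft ->", round(rating_discharge(stage, shift_ft=-0.1), 1), "cfs")
```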

  3. Interactive data-processing system for metallurgy

    Science.gov (United States)

    Rathz, T. J.

    1978-01-01

    Equipment indicates that the system can rapidly and accurately process metallurgical and materials-processing data for a wide range of applications. Advantages include increased contrast between areas of an image, the ability to analyze images via operator-written programs, and the space available for storing images.

  4. Simultaneous estimation of absolute and relative permeability by automatic history matching of three-phase flow production data

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A.C.; Li, R.; Oliver, D.S. [Tulsa Univ., Tulsa, OK (United States)

    2001-06-01

    A study was conducted in petroleum engineering to determine the feasibility of estimating absolute permeability fields and parameters that define relative permeability functions by automatic history matching of production data obtained under multiphase flow conditions. A prior model is used to assume irreducible water saturation, critical gas saturation and residual oil saturations. The three-phase oil relative permeability curve was calculated from the two sets of two-phase curves using Stone's Model II. The study considered data regarding pressure, gas-oil ratio or water-oil ratio. It was concluded that when the parameters that characterize the relative permeability functions of a reservoir are known, then it is possible to estimate the relative permeability curves and log-permeability fields by history matching production data derived under three-phase flow conditions. 30 refs., 5 tabs., 14 figs.
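
    Stone's Model II, used above to combine the two two-phase relative permeability curve sets into a three-phase oil relative permeability, has a commonly quoted normalized form that can be sketched directly; the input curve values below are illustrative, and the exact normalization used in the study may differ.

```python
# Sketch of the normalized Stone II combination of two-phase relative
# permeability data into a three-phase oil relative permeability.
def stone_ii_kro(krow, krw, krog, krg, krocw=1.0):
    """krow, krw: oil/water pair; krog, krg: gas/oil pair; krocw: kro at connate water."""
    kro = krocw * ((krow / krocw + krw) * (krog / krocw + krg) - (krw + krg))
    return max(kro, 0.0)                      # negative values are truncated to zero

# Illustrative point: moderate water and gas saturations.
print(stone_ii_kro(krow=0.4, krw=0.1, krog=0.5, krg=0.05, krocw=0.8))
```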

  5. AUTOMATIC DESIGNING OF POWER SUPPLY SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. I. Kirspou

    2016-01-01

    Full Text Available The development of an automatic design system for the power supply of industrial enterprises is considered in the paper. Its complete structure and principle of operation are determined and established. A modern graphical interface and data scheme have been developed, and the software has been fully implemented. The methodology and software correspond to the requirements of up-to-date design practice, describe a general algorithm of the program process and also reveal the properties of the automatic design system objects. The automatic design system is based on a modular principle and uses object-oriented programming. It makes it possible to carry out design calculations of the power supply system consistently and to select the required equipment, with subsequent output of all calculations in the form of an explanatory note. The automatic design system can be applied by design organizations in actual design work.

  6. The NERIES Data Portal : building a distributed heterogeneous data search, access, and processing tool set

    Science.gov (United States)

    Kamb, Linus; Spinuso, Alessandro; Frobert, Laurent; Trani, Luca; Bossu, Remy; van Eck, Torild

    2010-05-01

    , and then operate on these data sets using processing services or utilities that are available as services. The results of these processing steps are themselves managed within the data workbench. In this way, users are able to build workflows, with automatic metadata and data set provenance tracking.

  7. Big Data Components for Business Process Optimization

    Directory of Open Access Journals (Sweden)

    Mircea Raducu TRIFU

    2016-01-01

    Full Text Available These days, more and more people talk about Big Data, Hadoop, noSQL and so on, but very few technical people have the necessary expertise and knowledge to work with those concepts and technologies. The present issue explains one of the concepts that stands behind two of those keywords: the MapReduce concept. The MapReduce model is what makes Big Data and Hadoop so powerful, fast, and diverse for business process optimization. MapReduce is a programming model, with an implementation built to process and generate large data sets. In addition, the benefits of integrating Hadoop in the context of Business Intelligence and Data Warehousing applications are presented. The concepts and technologies behind big data let organizations reach a variety of objectives. Like other new information technologies, the main objective of big data technology is to bring dramatic cost reduction.
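
    The MapReduce idea described above is easiest to see in the canonical word-count example; the sketch below is plain Python showing the map, shuffle/group, and reduce phases, not Hadoop code.

```python
# Minimal word-count MapReduce in plain Python: map emits (word, 1) pairs,
# the shuffle groups pairs by key, and reduce sums each group.
from collections import defaultdict

def map_phase(document):
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["Big Data needs Hadoop", "Hadoop runs MapReduce", "MapReduce processes big data"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))
```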

  8. Verifiable process monitoring through enhanced data authentication

    International Nuclear Information System (INIS)

    To ensure the peaceful intent for production and processing of nuclear fuel, verifiable process monitoring of the fuel production cycle is required. As part of a U.S. Department of Energy (DOE)-EURATOM collaboration in the field of international nuclear safeguards, the DOE Sandia National Laboratories (SNL), the European Commission Joint Research Centre (JRC) and Directorate General-Energy (DG-ENER) developed and demonstrated a new concept in process monitoring, enabling the use of operator process information by branching a second, authenticated data stream to the Safeguards inspectorate. This information would be complementary to independent safeguards data, improving the understanding of the plant's operation. The concept is called the Enhanced Data Authentication System (EDAS). EDAS transparently captures, authenticates, and encrypts communication data that is transmitted between operator control computers and connected analytical equipment utilized in nuclear processes controls. The intent is to capture information as close to the sensor point as possible to assure the highest possible confidence in the branched data. Data must be collected transparently by the EDAS: Operator processes should not be altered or disrupted by the insertion of the EDAS as a monitoring system for safeguards. EDAS employs public key authentication providing 'jointly verifiable' data and private key encryption for confidentiality. Timestamps and data source are also added to the collected data for analysis. The core of the system hardware is in a security enclosure with both active and passive tamper indication. Further, the system has the ability to monitor seals or other security devices in close proximity. This paper will discuss the EDAS concept, recent technical developments, intended application philosophy and the planned future progression of this system.

  9. Data processing in high energy physics and vector processing computers

    International Nuclear Information System (INIS)

    The data handling done in high energy physics in order to extract the results from the large volumes of data collected in typical experiments is a very large consumer of computing capacity. More than 70 vector processing computers have now been installed and many fields of applications have been tried on such computers as the ILLIAC IV, the TI ASC, the CDC STAR-100 and more recently on the CRAY-1, the CDC Cyber 205, the ICL DAP and the CRAY X-MP. This paper attempts to analyze the reasons for the lack of use of these computers in processing results from high energy physics experiments. Little work has been done to look at the possible vectorisation of the large codes in this field, but the motivation to apply vector processing computers in high energy physics data handling may be increasing as the gap between the scalar performance and the vector performance offered by large computers available on the market widens

  10. A swarm-trained k-nearest prototypes adaptive classifier with automatic feature selection for interval data.

    Science.gov (United States)

    Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C

    2016-08-01

    Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes, and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. PMID:27152933
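
    A weighted squared Euclidean distance for interval data can be written, under a common symbolic-data convention, as a weighted sum of squared differences of the lower and upper bounds per feature; the sketch below uses that convention, which may differ in detail from the generalized distance defined in the paper.

```python
# Sketch: weighted squared Euclidean distance between two interval-valued
# samples, each feature being a (lower, upper) pair. The exact definition in
# the paper may differ; this follows a common symbolic-data convention.
import numpy as np

def interval_sq_distance(a, b, weights):
    a, b = np.asarray(a, float), np.asarray(b, float)   # shape (features, 2)
    diffs = (a - b) ** 2                                 # squared bound differences
    return float(np.sum(weights * diffs.sum(axis=1)))

x = [(1.0, 2.0), (10.0, 14.0)]       # sample with two interval features
p = [(1.5, 2.5), (9.0, 13.0)]        # a prototype
print(interval_sq_distance(x, p, weights=np.array([1.0, 0.5])))
```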

  11. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    Science.gov (United States)

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) mission is a swath mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The revisit frequency depends upon latitude and varies from two (low latitudes) to ten (high latitudes) revisits per 22-day orbit repeat period. The high resolution and the global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at a daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous works showed that the use of ENVISAT data enables the reduction of the uncertainty in some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE using a "true" set of parameters. Only pixels representing rivers larger than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference

  12. Automatic Region-Based Brain Classification of MRI-T1 Data.

    Science.gov (United States)

    Yazdani, Sepideh; Yusof, Rubiyah; Karimian, Alireza; Mitsukira, Yasue; Hematian, Amirshahram

    2016-01-01

    Image segmentation of medical images is a challenging problem with several still not totally solved issues, such as noise interference and image artifacts. Region-based and histogram-based segmentation methods have been widely used in image segmentation. Problems arise when we use these methods, such as the selection of a suitable threshold value for the histogram-based method and the over-segmentation followed by the time-consuming merge processing in the region-based algorithm. To provide an efficient approach that not only produces better results but also maintains low computational complexity, a new region-dividing technique is developed for image segmentation, which combines the advantages of both region-based and histogram-based methods. The proposed method is applied to a challenging application: gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) segmentation in brain MR images. The method is evaluated on both simulated and real data, and compared with other segmentation techniques. The obtained results demonstrate its improved performance and robustness. PMID:27096925

  13. Dolphin: a tool for automatic targeted metabolite profiling using 1D and 2D (1)H-NMR data.

    Science.gov (United States)

    Gómez, Josep; Brezmes, Jesús; Mallol, Roger; Rodríguez, Miguel A; Vinaixa, Maria; Salek, Reza M; Correig, Xavier; Cañellas, Nicolau

    2014-12-01

    One of the main challenges in nuclear magnetic resonance (NMR) metabolomics is to obtain valuable metabolic information from large datasets of raw NMR spectra in a high throughput, automatic, and reproducible way. To date, established software packages used to match and quantify metabolites in NMR spectra remain mostly manually operated, leading to low resolution results and subject to inconsistencies not attributable to the NMR technique itself. Here, we introduce a new software package, called Dolphin, able to automatically quantify a set of target metabolites in multiple sample measurements using an approach based on 1D and 2D NMR techniques to overcome the inherent limitations of 1D (1)H-NMR spectra in metabolomics. Dolphin takes advantage of the 2D J-resolved NMR spectroscopy signal dispersion to avoid inconsistencies in signal position detection, enhancing the reliability and confidence in metabolite matching. Furthermore, in order to improve accuracy in quantification, Dolphin uses 2D NMR spectra to obtain additional information on all neighboring signals surrounding the target metabolite. We have compared the targeted profiling results of Dolphin, recorded from standard biological mixtures, with those of two well established approaches in NMR metabolomics. Overall, Dolphin produced more accurate results with the added advantage of being a fully automated and high throughput processing package. PMID:25370160

  14. Differential method for processing scanning lidar data.

    Science.gov (United States)

    Kovalev, Vladimir

    2015-11-20

    The significant deficiency of the classic multiangle data-processing technique is that the accuracy of the lidar-data inversion strongly depends on whether the assumption of the horizontal stratification of the searched atmosphere is valid. The aggravating factor is that no reliable methodology exists that would allow establishment of whether the above assumption is met; even the thorough analysis of the measured lidar signals rarely allows for a reliable conclusion about the fulfillment of this requirement. In this study, a new multiangle differential data-processing method is considered, which provides the renewed interpretation of multiangle measurements. It allows for distinguishing and separating the data points from the areas where the backscatter extinction coefficient is not constant in the horizontal directions. Simulated and experimental data are presented that illustrate the principle and specifics of such a differential technique. PMID:26836537

  15. The MEM in Measuring Data Processing

    Institute of Scientific and Technical Information of China (English)

    LÜ Wen; TONG Ling; CHEN Guang-ju

    2004-01-01

    A new method for measurement data processing, the maximum entropy method (MEM), is introduced. The probability density function (pdf) is deduced by MEM under constraints on the powers (moments) of the data. A set of experimental data is processed with the method, and the result obtained is close to the real distribution. The pdfs obtained under moment constraints of different orders and with different sample sizes are discussed. It is concluded that, using MEM, moment constraints of order higher than three can essentially recover the distribution of the data, and that the moment constraints are not suitable when the number of samples is small.
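
    The abstract does not give the estimation formulas; a minimal numerical sketch of a maximum-entropy density subject to sample moment constraints, solved through the convex dual problem on a bounded grid, is shown below. It is an illustrative formulation, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def maxent_pdf(samples, order=4, n_grid=1000):
        """Maximum-entropy density subject to the first `order` sample moments."""
        lo, hi = samples.min() - 1.0, samples.max() + 1.0
        grid = np.linspace(lo, hi, n_grid)
        dx = grid[1] - grid[0]
        powers = np.vstack([grid ** k for k in range(1, order + 1)])    # (order, n_grid)
        target = np.array([np.mean(samples ** k) for k in range(1, order + 1)])

        def dual(lam):
            # log partition function minus lambda . target, stabilised against overflow
            log_unnorm = lam @ powers
            m = log_unnorm.max()
            log_z = m + np.log(np.sum(np.exp(log_unnorm - m)) * dx)
            return log_z - lam @ target

        res = minimize(dual, x0=np.zeros(order), method="BFGS")
        unnorm = np.exp(res.x @ powers)
        return grid, unnorm / (np.sum(unnorm) * dx)

    # Example: recover an approximately normal density from 500 samples.
    rng = np.random.default_rng(0)
    x_grid, pdf = maxent_pdf(rng.normal(size=500), order=4)
    ```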

  16. Developing numerical methods for experimental data processing

    International Nuclear Information System (INIS)

    Materials studies rely on experimental measurements, the results of which are always affected by noise. To perform numerical data processing, for instance numerical differentiation preceded by smoothing, instabilities must be avoided. This implies extracting the noise from the experimental data. When a large amount of data can be obtained, many of the noise-related problems can be solved by using statistical indicators. In the case of high-cost experiments or problems of a unique type, the task of extracting useful information on given material parameters is of paramount significance. The paper presents several numerical methods for processing experimental data developed at INR Pitesti. These were employed in treating the experimental data obtained in nuclear materials studies aimed at materials characterization and fabrication technology development. To refine the methods and determine the accuracy of processing real experimental data, computerized simulations were used extensively. The methods cover the transfer relations for important statistical indicators in the case of mediated measurements, increasing the resolution of measurements carried out with linear detectors, and numerical smoothing of experimental data. A figure is given with results obtained by applying the numerical smoothing method to experimental data from X-ray diffraction measurements on Zircaloy-4. The numerical methods developed were applied in studies of the structural materials used in the CANDU 600 reactor and of advanced CANDU-type fuels, as well as of natural uranium, thorium and thorium-uranium fuel pellets. These methods helped in increasing the measurements' accuracy and confidence level
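
    The specific smoothing formulas developed at INR Pitesti are not given in the abstract; one common, off-the-shelf choice for noise-tolerant smoothing followed by numerical differentiation is a Savitzky-Golay filter, sketched here purely as an illustration of the kind of processing described.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    # Synthetic noisy measurement, standing in for e.g. an X-ray diffraction scan.
    x = np.linspace(0.0, 10.0, 501)
    rng = np.random.default_rng(1)
    y_noisy = np.exp(-(x - 5.0) ** 2) + 0.05 * rng.normal(size=x.size)

    # Local polynomial smoothing (31-sample window, cubic fit) ...
    y_smooth = savgol_filter(y_noisy, window_length=31, polyorder=3)

    # ... and a numerically stable first derivative from the same local fits.
    dx = x[1] - x[0]
    dy_dx = savgol_filter(y_noisy, window_length=31, polyorder=3, deriv=1, delta=dx)
    ```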

  17. NISAR ISRO science data processing and products

    Science.gov (United States)

    Agrawal, Krishna Murari; Mehra, Raghav; Ryali, Usha Sundari

    2016-05-01

    NASA-ISRO Synthetic Aperture Radar (NISAR) is a dual-frequency (L & S band) mission which will be operating in SweepSAR mode. Compared to traditional SAR imaging modes, in which swath and resolution trade off against each other, the SweepSAR imaging concept can acquire data over a large swath (240 km) without compromising azimuth resolution (approximately 6 m). The NISAR L-band and S-band sensors will be developed by JPL-NASA and ISRO respectively. NISAR science data will be downlinked at both NASA and ISRO ground stations. SAC-ISRO will develop the SAR processor for both L & S band data to generate products in compliance with science requirements. Moreover, JPL will develop an L-band SAR processor, and all data products will be available to users. A distributed data processing architecture will be used for handling the large volume of data resulting from the moderate resolution and larger swath of the SweepSAR mode. Data products will be available in multiple processing levels, such as raw signal products, signal-processed single-look and multi-look products, ground range products and geo-referenced products, in HDF5 & GeoTIFF formats. Derived geo-referenced polarimetric and interferometric data products will also be available for dissemination to the users. A rigorous calibration exercise will be performed by acquiring data over reference targets like Amazon rain-forest & corner reflector sites for the generation of calibrated data products. Furthermore, various science data products (for science applications) will also be derived from basic data products for operational dissemination.

  18. Automatic Generation of Assembly Sequence for the Planning of Outfitting Processes in Shipbuilding

    NARCIS (Netherlands)

    Wei, Y.

    2012-01-01

    The most important characteristics of the outfitting processes in shipbuilding are: 1. The processes involve many interferences between yard and different subcontractors. In recent years, the use of outsourcing and subcontracting has become a widespread strategy of western shipyards. There exists no

  19. A Web-based Tool for Automatizing the Software Process Improvement Initiatives in Small Software Enterprises

    NARCIS (Netherlands)

    Garcia, I.; Pacheco, C.

    2010-01-01

    Top-down process improvement approaches provide a high-level model of what the process of a software development organization should be. Such models are based on the consensus of a designated working group on how software should be developed or maintained. They are very useful in that they provide g

  20. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    Science.gov (United States)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

    This study aims to extract building roof planes automatically from airborne LIDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building is performed by applying extensions of the RHT associated with additional constraint criteria during the random selection of the 3 points, aiming at the optimum adaptation to the building rooftops, as well as a simple design of the accumulator that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of the point and the use of additional information. An indicative experimental comparison is carried out to verify the advantages of the extended RHT over the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in terms of quality and computational time compared to the default RHT. Further, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
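
    For readers unfamiliar with the RHT, the sketch below shows its core voting idea for planes: sample three points, fit a plane, and vote for its quantized parameters in an accumulator. The paper's additional constraint criteria, accumulator design and refinement stage are not reproduced; bin sizes and function names are hypothetical.

    ```python
    import numpy as np
    from collections import defaultdict

    def rht_planes(points, n_iter=20000, angle_bin=2.0, rho_bin=0.2, seed=0):
        """Basic Randomized Hough Transform for plane detection in an (N, 3)
        point cloud: repeatedly fit a plane to 3 random points and vote for its
        quantized (theta, phi, rho) parameters."""
        rng = np.random.default_rng(seed)
        acc = defaultdict(int)
        for _ in range(n_iter):
            p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p2 - p1, p3 - p1)
            norm = np.linalg.norm(n)
            if norm < 1e-9:                       # degenerate (collinear) triple
                continue
            n /= norm
            if n[2] < 0:                          # fix normal orientation
                n = -n
            rho = float(n @ p1)                   # plane distance parameter
            theta = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))   # tilt
            phi = np.degrees(np.arctan2(n[1], n[0])) % 360.0          # azimuth
            key = (round(theta / angle_bin), round(phi / angle_bin), round(rho / rho_bin))
            acc[key] += 1
        # Return the most-voted accumulator cells (candidate roof planes).
        return sorted(acc.items(), key=lambda kv: -kv[1])[:10]
    ```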

  1. Love thy neighbour: automatic animal behavioural classification of acceleration data using the K-nearest neighbour algorithm.

    Directory of Open Access Journals (Sweden)

    Owen R Bidder

    Full Text Available Researchers hoping to elucidate the behaviour of species that aren't readily observed are able to do so using biotelemetry methods. Accelerometers in particular are proving particularly effective and have been used on terrestrial, aquatic and volant species with success. In the past, behavioural modes were detected in accelerometer data through manual inspection, but with developments in technology, modern accelerometers now record at frequencies that make this impractical. In light of this, some researchers have suggested the use of various machine learning approaches as a means to classify accelerometer data automatically. We feel uptake of this approach by the scientific community is inhibited for two reasons; 1 Most machine learning algorithms require selection of summary statistics which obscure the decision mechanisms by which classifications are arrived, and 2 they are difficult to implement without appreciable computational skill. We present a method which allows researchers to classify accelerometer data into behavioural classes automatically using a primitive machine learning algorithm, k-nearest neighbour (KNN. Raw acceleration data may be used in KNN without selection of summary statistics, and it is easily implemented using the freeware program R. The method is evaluated by detecting 5 behavioural modes in 8 species, with examples of quadrupedal, bipedal and volant species. Accuracy and Precision were found to be comparable with other, more complex methods. In order to assist in the application of this method, the script required to run KNN analysis in R is provided. We envisage that the KNN method may be coupled with methods for investigating animal position, such as GPS telemetry or dead-reckoning, in order to implement an integrated approach to movement ecology research.
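
    The study's own script is in R; the sketch below is an equivalent minimal illustration in Python of the same idea, classifying raw (unsummarized) acceleration windows with k-nearest neighbours. The window length, sampling rate, labels and synthetic data are all hypothetical.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical training set: each row is a raw tri-axial acceleration window
    # (here 2 s at 25 Hz -> 50 samples x 3 axes = 150 values), flattened, with a
    # behavioural label assigned from direct observation.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 150))
    y_train = rng.choice(["resting", "walking", "running"], size=200)

    # k-nearest neighbours on the raw values; no summary statistics required.
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

    X_new = rng.normal(size=(5, 150))            # unlabelled field recordings
    print(knn.predict(X_new))
    ```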

  2. Definition of an automatic information retrieval system independent from the data base used

    International Nuclear Information System (INIS)

    A bibliographic information retrieval system using data stored in the standardized interchange format ISO 2709 or ANSI Z39.2 is specified. A set of commands for interchange-format manipulation is used which allows access to the data at the logical level, achieving data independence. A data base description language, a storage structure and data base manipulation commands are specified, using retrieval techniques which take the application's needs into account. (Author)

  3. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathaniel O.

    2010-05-31

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  4. The effects of total sleep deprivation on semantic priming: event-related potential evidence for automatic and controlled processing strategies.

    Science.gov (United States)

    López Zunini, Rocío; Muller-Gass, Alexandra; Campbell, Kenneth

    2014-02-01

    There is general consensus that performance on a number of cognitive tasks deteriorates following total sleep deprivation. At times, however, subjects manage to maintain performance. This may be because of an ability to switch cognitive strategies including the exertion of compensatory effort. The present study examines the effects of total sleep deprivation on a semantic word priming task. Word priming is unique because it can be carried out using different strategies involving either automatic, effortless or controlled, effortful processing. Twelve subjects were presented with word pairs, a prime and a target, that were either highly semantically associated (cat…dog), weakly associated (cow…barn) or unassociated (apple…road). In order to increase the probability of the use of controlled processing following normal sleep, the subject's task was to determine if the target word was semantically related to the prime. Furthermore, the time between the offset of the prime and the onset of the target was relatively long, permitting the use of an effortful, expectancy-predictive strategy. Event-related potentials (ERPs) were recorded from 64 electrode sites. After normal sleep, RTs were faster and accuracy higher to highly associated targets; this performance advantage was also maintained following sleep deprivation. A large negative deflection, the N400, was larger to weakly associated and unassociated targets in both sleep-deprived and normal conditions. The overall N400 was however larger in the normal sleep condition. Moreover, a long-lasting negative slow wave developed between the offset of the prime and the onset of the target. These physiological measures are consistent with the use of an effortful, predictive strategy following normal sleep but an automatic, effortless strategy following total sleep deprivation. A picture priming task was also run. This task benefits less from the use of a predictive strategy. Accordingly, in this task, ERPs following the

  5. Internally- and Externally-Driven Network Transitions as a Basis for Automatic and Strategic Processes in Semantic Priming: Theory and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Itamar Lerner

    2014-04-01

    Full Text Available For the last four decades, semantic priming – the facilitation in recognition of a target word when it follows the presentation of a semantically related prime word – has been a central topic in research of human cognitive processing. Studies have drawn a complex picture of findings which demonstrated the sensitivity of this priming effect to a unique combination of variables, including, but not limited to, the type of relatedness between primes and targets, the prime-target SOA, the relatedness proportion in the stimuli list and the specific task subjects are required to perform. Automatic processes depending on the activation patterns of semantic representations in memory and controlled strategies adapted by individuals when attempting to maximize their recognition performance have both been implicated in contributing to the results. Lately, we have published a new model of semantic priming that addresses the majority of these findings within one conceptual framework. In our model, semantic memory is depicted as an attractor neural network in which stochastic transitions from one stored pattern to another are continually taking place due to synaptic depression mechanisms. We have shown how such transitions, in combination with a reinforcement-learning rule that adjusts their pace, resemble the classic automatic and controlled processes involved in semantic priming and account for a great number of the findings in the literature. Here, we review the core findings of our model and present new simulations that show how similar principles of parameter-adjustments could account for additional data not addressed in our previous studies, such as the relation between expectancy and inhibition in priming, target frequency and target degradation effects. Finally, we describe two human experiments that validate several key predictions of the model.

  6. Improved SDT Process Data Compression Algorithm

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Process data compression and trending are essential for improving control system performance. The Swing Door Trending (SDT) algorithm is well designed to adapt to the process trend while retaining the merit of simplicity. However, it cannot handle outliers or adapt to the fluctuations of actual data. An Improved SDT (ISDT) algorithm is proposed in this paper. The effectiveness and applicability of the ISDT algorithm are demonstrated by computations on both synthetic and real process data. By applying an adaptive recording limit as well as outlier-detecting rules, a higher compression ratio is achieved and outliers are identified and eliminated. The fidelity of the algorithm is also improved. It can be used both in online and batch mode, and integrated into existing software packages without change.
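
    For reference, a minimal sketch of the basic SDT idea follows: a sample is archived only when the slope corridor of admissible lines through the last archived point (of half-width equal to the compression deviation) closes. The paper's ISDT extensions, i.e. the adaptive recording limit and outlier rules, are not reproduced here.

    ```python
    def swinging_door(times, values, deviation):
        """Basic Swinging Door Trending compression: return indices of retained
        samples such that linear interpolation of the archive stays within
        `deviation` of every discarded sample (times assumed strictly increasing)."""
        archived = [0]                               # indices of retained samples
        a = 0                                        # last archived index
        lo, hi = -float("inf"), float("inf")         # admissible slope corridor
        for i in range(1, len(times)):
            dt = times[i] - times[a]
            lo = max(lo, (values[i] - values[a] - deviation) / dt)
            hi = min(hi, (values[i] - values[a] + deviation) / dt)
            if lo > hi:                              # doors closed: archive previous point
                a = i - 1
                archived.append(a)
                dt = times[i] - times[a]
                lo = (values[i] - values[a] - deviation) / dt
                hi = (values[i] - values[a] + deviation) / dt
        if archived[-1] != len(times) - 1:           # always keep the last sample
            archived.append(len(times) - 1)
        return archived

    # Example: compress a slowly varying signal with deviation 0.5.
    t = list(range(20))
    v = [0.1 * x for x in t[:10]] + [1.0 - 0.2 * x for x in range(10)]
    print(swinging_door(t, v, 0.5))
    ```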

  7. Data processing system of GA and PPPL

    International Nuclear Information System (INIS)

    Results of a 1997 research visit to General Atomics (GA) and the Princeton Plasma Physics Laboratory (PPPL) are reported. The author visited the computer system of the fusion group at GA. He joined the DIII-D tokamak experiment, in particular the demonstration of remote experimentation within the U.S., and investigated the data processing system of DIII-D, the computer network, etc. After the visit to GA, he visited PPPL and exchanged information about the equipment for remote experiments between JAERI and PPPL based on the US-Japan fusion energy research cooperation. He also investigated the data processing system of the TFTR tokamak, the computer network and so on. Results of a second visit to GA in 2000 are also reported, describing the rapid progress of each piece of data processing equipment, driven by advances in computer technology, in just three years. (author)

  8. Data processing system of GA and PPPL

    Energy Technology Data Exchange (ETDEWEB)

    Oshima, Takayuki [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2001-11-01

    Results of a 1997 research visit to General Atomics (GA) and the Princeton Plasma Physics Laboratory (PPPL) are reported. The author visited the computer system of the fusion group at GA. He joined the DIII-D tokamak experiment, in particular the demonstration of remote experimentation within the U.S., and investigated the data processing system of DIII-D, the computer network, etc. After the visit to GA, he visited PPPL and exchanged information about the equipment for remote experiments between JAERI and PPPL based on the US-Japan fusion energy research cooperation. He also investigated the data processing system of the TFTR tokamak, the computer network and so on. Results of a second visit to GA in 2000 are also reported, describing the rapid progress of each piece of data processing equipment, driven by advances in computer technology, in just three years. (author)

  9. Brain activation associated with automatic processing of alcohol‐related cues in young heavy drinkers and its modulation by alcohol administration

    NARCIS (Netherlands)

    F. Kreusch; V. Goffaux; N. Siep; K. Houben; E. Quertemont; R.W. Wiers

    2015-01-01

    Background: While the automatic processing of alcohol-related cues by alcohol abusers is well established in experimental psychopathology approaches, the cerebral regions involved in this phenomenon and the influence of alcohol intake on this process remain unknown. The aim of this functional magnet

  10. A fully automatic processing chain to produce Burn Scar Mapping products, using the full Landsat archive over Greece

    Science.gov (United States)

    Kontoes, Charalampos; Papoutsis, Ioannis; Herekakis, Themistoklis; Michail, Dimitrios; Ieronymidi, Emmanuela

    2013-04-01

    Remote sensing tools for the accurate, robust and timely assessment of the damages inflicted by forest wildfires provide information that is of paramount importance to public environmental agencies and related stakeholders before, during and after the crisis. The Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing of the National Observatory of Athens (IAASARS/NOA) has developed a fully automatic single and/or multi date processing chain that takes as input archived Landsat 4, 5 or 7 raw images and produces precise diachronic burnt area polygons and damage assessments over the Greek territory. The methodology consists of three fully automatic stages: 1) the pre-processing stage, where the metadata of the raw images are extracted, followed by the application of the LEDAPS software platform for calibration and mask production and the Automated Precise Orthorectification Package, developed by NASA, for image geo-registration and orthorectification; 2) the core-BSM (Burn Scar Mapping) processing stage, which incorporates a published classification algorithm based on a series of physical indexes, the application of two filters for noise removal using graph-based techniques, and the grouping of pixels classified as burnt to form the appropriate pixel clusters before proceeding to conversion from raster to vector; and 3) the post-processing stage, where the products are thematically refined and enriched using auxiliary GIS layers (underlying land cover/use, administrative boundaries, etc.) and human logic/evidence to suppress false alarms and omission errors. The established processing chain has been successfully applied to the entire archive of Landsat imagery over Greece spanning from 1984 to 2012, which has been collected and managed in IAASARS/NOA. The number of full Landsat frames that were processed in the framework of the study was 415. These burn scar mapping products are generated for the first time to such a temporal and spatial
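
    The core-BSM classification algorithm itself is published elsewhere and not reproduced here; as a simple illustration of a physically based burnt-area index of the kind such chains rely on, the sketch below computes a differenced Normalized Burn Ratio (dNBR) mask from pre- and post-fire Landsat NIR/SWIR reflectances. The band choice and threshold are illustrative assumptions, not the values used by core-BSM.

    ```python
    import numpy as np

    def dnbr_burn_mask(nir_pre, swir_pre, nir_post, swir_post, threshold=0.27):
        """Boolean mask of candidate burnt pixels from pre- and post-fire
        NIR/SWIR reflectance arrays (e.g., Landsat TM bands 4 and 7)."""
        eps = 1e-6                                   # avoid division by zero
        nbr_pre = (nir_pre - swir_pre) / (nir_pre + swir_pre + eps)
        nbr_post = (nir_post - swir_post) / (nir_post + swir_post + eps)
        dnbr = nbr_pre - nbr_post                    # large positive values suggest burning
        return dnbr > threshold
    ```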

  11. Data processing for fabrication of GMT primary segments: raw data to final surface maps

    Science.gov (United States)

    Tuell, Michael T.; Hubler, William; Martin, Hubert M.; West, Steven C.; Zhou, Ping

    2014-07-01

    The Giant Magellan Telescope (GMT) primary mirror is a 25 meter f/0.7 surface composed of seven 8.4 meter circular segments, six of which are identical off-axis segments. The fabrication and testing challenges with these severely aspheric segments (about 14 mm of aspheric departure, mostly astigmatism) are well documented. Converting the raw phase data to useful surface maps involves many steps and compensations. They include large corrections for: image distortion from the off-axis null test; misalignment of the null test; departure from the ideal support forces; and temperature gradients in the mirror. The final correction simulates the active-optics correction that will be made at the telescope. Data are collected and phase maps are computed in 4D Technology's 4Sight™ software. The data are saved to a .h5 (HDF5) file and imported into MATLAB® for further analysis. A semi-automated data pipeline has been developed to reduce the analysis time as well as the potential for error. As each operation is performed, results and analysis parameters are appended to a data file, so in the end the history of data processing is embedded in the file. A report and a spreadsheet are automatically generated to display the final statistics as well as how each compensation term varied during data acquisition. This gives us valuable statistics and provides a quick starting point for investigating atypical results.
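
    As a small illustration of one step such a pipeline performs, the sketch below loads a phase/surface map from an HDF5 file and reports its residual RMS after removing piston and tilt with a least-squares plane fit. The file path and dataset name are hypothetical, and the actual GMT pipeline applies many more compensations than this.

    ```python
    import h5py
    import numpy as np

    def surface_rms(path, dataset="phase"):
        """RMS of a surface map after piston/tilt removal over valid pixels."""
        with h5py.File(path, "r") as f:
            z = np.asarray(f[dataset], dtype=float)
        valid = np.isfinite(z)
        yy, xx = np.mgrid[0:z.shape[0], 0:z.shape[1]]
        # Least-squares fit of z = a + b*x + c*y over the valid pixels.
        A = np.column_stack([np.ones(valid.sum()), xx[valid], yy[valid]])
        coeffs, *_ = np.linalg.lstsq(A, z[valid], rcond=None)
        residual = z[valid] - A @ coeffs
        return float(np.sqrt(np.mean(residual ** 2)))
    ```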

  12. Advanced instrumentation for the collection, retrieval, and processing of urban stormwater data

    Science.gov (United States)

    Robinson, Jerald B.; Bales, Jerad D.; Young, Wendi S.; ,

    1995-01-01

    The U.S. Geological Survey, in cooperation with the City of Charlotte and Mecklenburg County, North Carolina, has developed a data-collection network that uses advanced instrumentation to automatically collect, retrieve, and process urban stormwater data. Precipitation measurement and water-quality networks provide data for (1) planned watershed simulation models, (2) early warning of possible flooding, (3) computation of material export, and (4) characterization of water quality in relation to basin conditions. Advantages of advanced instrumentation include remote access to real-time data, reduced demands on and more efficient use of limited human resources, and direct importation of data into a geographical information system for display and graphic analysis.

  13. A Prototype Expert System for Automatic Generation of Image Processing Programs

    Institute of Scientific and Technical Information of China (English)

    宋茂强; Felix Grimm; et al.

    1991-01-01

    A prototype expert system for generating image processing programs using the subroutine package SPIDER is described in this paper. Based on an interactive dialog, the system can generate a complete application program using SPIDER routines.

  14. Data Processing at the Pierre Auger Observatory

    CERN Document Server

    Vicha, J

    2015-01-01

    Cosmic-ray particles with ultra-high energies (above $10^{18}$ eV) are studied through the properties of extensive air showers which they initiate in the atmosphere. The Pierre Auger Observatory detects these showers with unprecedented exposure and precision and the collected data are processed via dedicated software codes. Monte Carlo simulations of extensive air showers are very computationally expensive, especially at the highest energies and calculations are performed on the GRID for this purpose. The processing of measured and simulated data is described, together with a brief list of physics results which have been achieved.

  15. Sensitometric comparison of E and F dental radiographic films using manual and automatic processing systems

    OpenAIRE

    Dabaghi A.; Tahmasbi MJ.; Karbasi N.; Tabesh H.

    2008-01-01

    Background and Aim: Processing conditions affect sensitometric properties of X-ray films. In this study, we aimed to evaluate the sensitometric characteristics of InSight (IP), a new F-speed film, in fresh and used processing solutions in dental office condition and compare them with Ektaspeed Plus (EP).Materials and Methods: In this experimental in vitro study, an aluminium step wedge was used to construct characteristic curves for InSight and Ektaspeed Plus films (Kodak Eastman, Rochester, ...

  16. Seismic and Tilt Data Processing for Monitoring Groundwater Contamination

    Science.gov (United States)

    Zhang, J.; Spetzler, H. A.

    2003-12-01

    We are conducting a feasibility study to see if we can detect changes in the state of saturation in groundwater by seismic means. This field study is based on laboratory experiments that show large changes in seismic attenuation when contaminants change the wettability of porous rocks. Three tiltmeters and three seismometers were installed at different distances from a controlled irrigation site near Maricopa, AZ. The research site has a facility to controllably irrigate a 50 m by 50 m area with water and chemical surfactants. The instruments are used to record naturally occurring, low-frequency strain and seismic signals before, during and after irrigations. The purpose of the data analysis is to develop techniques for looking for differences in station response due to local differences, such as contamination in the vadose zone and groundwater. Ours is not a conventional way of processing the data, reflecting our non-traditional use of it, since the variations in instrument response caused by the trace amount of contaminants are very small. We are looking for small changes in the relative response between the instruments. For the seismic data, not only do we examine large events, such as earthquakes, but also microseisms. We use microseisms as our source, and the related processing is an attempt to measure the tiny changes in instrument response caused by differences in irrigation and contamination at the three different locations. In the tilt data processing, the large events caused by regional water pumping, oil production, earthquakes, etc. need to be removed, since we wish to use the Earth's solid tide as our strain source. The key issue during the process of removing the large events is to make sure that the tide signals are not also removed or greatly distorted. A method and corresponding codes were developed for automatically removing large-event-induced data at the three stations. After completing this processing, the signal left is the local Earth tide

  17. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic; they may also look rather complicated, in the sense that it may not be easy to name the rule by which the sequence is generated, yet there exists a rule which generates the sequence. The concept of automatic sequences has applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular:· a general introduction to automatic sequences· the basic (combinatorial) properties of automatic sequences· the algebraic approach to automatic sequences· geometric objects related to automatic sequences.

  18. Big Bicycle Data Processing: from Personal Data to Urban Applications

    Science.gov (United States)

    Pettit, C. J.; Lieske, S. N.; Leao, S. Z.

    2016-06-01

    Understanding the flows of people moving through the built environment is a vital source of information for the planners and policy makers who shape our cities. Smart phone applications enable people to trace themselves through the city and these data can potentially be then aggregated and visualised to show hot spots and trajectories of macro urban movement. In this paper our aim is to develop procedures for cleaning, aggregating and visualising human movement data and translating this into policy relevant information. In conducting this research we explore using bicycle data collected from a smart phone application known as RiderLog. We focus on the RiderLog application initially in the context of Sydney, Australia and discuss the procedures and challenges in processing and cleaning this data before any analysis can be made. We then present some preliminary map results using the CartoDB online mapping platform where data are aggregated and visualised to show hot spots and trajectories of macro urban movement. We conclude the paper by highlighting some of the key challenges in working with such data and outline some next steps in processing the data and conducting higher volume and more extensive analysis.

  19. Algorithm of Dynamic Operation Process of Hydraulic Automatically Operated Canals with Constant-Downstream Level Gates

    Institute of Scientific and Technical Information of China (English)

    ZHANG Li-wei; FENG Xiao-bo; WANG Chang-de

    2005-01-01

    On the basis of an analysis of the governing process of the downstream water level gates AVIO and AVIS, a mathematical model for simulating the dynamic operation of hydraulically automated irrigation canals equipped with AVIO and AVIS gates is presented. The main point of this mathematical model is to first apply a set of unsteady flow equations (the St. Venant equations here), treating the gate movement as a dynamic boundary condition, and then to decouple this interaction of the gate movement with the change of the canal flow. In this process, it is necessary to give the gates' open-loop transfer function, whose input is the water level deviation and whose output is the gate discharge. The result of this simulation for a practical reach has shown that it has satisfactory accuracy.
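
    For reference, the abstract cites the St. Venant equations without stating them; their standard one-dimensional form (continuity and momentum) is shown below, with the caveat that the paper's exact notation and source terms may differ.

    ```latex
    \begin{aligned}
    \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} &= q_l \\
    \frac{\partial Q}{\partial t}
      + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
      + g A \,\frac{\partial h}{\partial x} &= g A \,(S_0 - S_f)
    \end{aligned}
    ```

    Here A is the wetted cross-sectional area, Q the discharge, h the water depth, q_l the lateral inflow per unit length, S_0 the bed slope and S_f the friction slope; the gate law enters through the boundary conditions.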

  20. Near Real Time Processing Chain for Suomi NPP Satellite Data

    Science.gov (United States)

    Monsorno, Roberto; Cuozzo, Giovanni; Costa, Armin; Mateescu, Gabriel; Ventura, Bartolomeo; Zebisch, Marc

    2014-05-01

    Since 2009, the EURAC satellite receiving station, located at Corno del Renon, in an obstacle-free site at 2260 m a.s.l., has been acquiring data from the Aqua and Terra NASA satellites equipped with Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. The experience gained with this local ground segment has given the opportunity of adapting and modifying the processing chain for MODIS data to Suomi NPP, the natural successor to the Terra and Aqua satellites. The processing chain, initially implemented by means of a proprietary system supplied by Seaspace and Advanced Computer System, was further developed by the engineers of EURAC's Institute for Applied Remote Sensing. Several algorithms have been developed using MODIS and Visible Infrared Imaging Radiometer Suite (VIIRS) data to produce snow cover, particulate matter estimation and meteo maps. These products are implemented on a common processor structure based on the use of configuration files and a generic processor. Data and products are then automatically delivered to customers such as the Autonomous Province of Bolzano Civil Protection office. For the processing phase we defined two goals: i) the adaptation and implementation of the products already available for MODIS (and possibly new ones) to VIIRS, one of the sensors onboard Suomi NPP; ii) the use of an open source processing chain in order to process NPP data in near real time, exploiting the knowledge we acquired on parallel computing. In order to achieve the second goal, the S-NPP data received and ingested are sent as input to the RT-STPS (Real-time Software Telemetry Processing System) software developed by the NASA Direct Readout Laboratory (DRL), which gives as output RDR (Raw Data Record) files for the VIIRS, ATMS (Advanced Technology Microwave Sounder) and CrIS (Cross-track Infrared Sounder) sensors. The RDR files are then transferred to a server equipped with the CSPP (Community Satellite Processing Package) software developed by the University of

  1. Automatic Radiation Monitoring in Slovenia

    International Nuclear Information System (INIS)

    Full text: The automatic radiation monitoring system in Slovenia started in the early nineties and now comprises measurements of: 1. External gamma radiation: For the time being there are forty-three probes with GM tubes integrated into a common automatic network, operated at the SNSA. The probes measure the dose rate in 30-minute intervals. 2. Aerosol radioactivity: Three automatic aerosol stations measure the concentration of artificial alpha and beta activity in the air, gamma emitting radionuclides, radioactive iodine-131 in the air (in all chemical forms), and natural radon and thoron progeny. 3. Radon progeny concentration: Radon progeny concentration is measured hourly and results are displayed as equilibrium equivalent concentrations (EEC). 4. Radioactive deposition measurements: As a support to gamma dose rate measurements, the SNSA developed and installed an automatic measuring station for surface contamination equipped with a gamma spectrometry system (with 3x3' NaI(Tl) detector). All data are transferred through different communication pathways to the SNSA. They are collected in 30-minute intervals. Within these intervals the central computer analyses and processes the collected data, and creates different reports. Every month a QA/QC analysis of the data is performed, showing the statistics of acquisition errors and availability of measuring results. All results are promptly available on our web pages. The data are checked and sent daily to the EURDEP system at Ispra (Italy) and also to the Austrian, Croatian and Hungarian authorities. (author)

  2. Reduced capacity in automatic processing of facial expression in restrictive anorexia nervosa and obesity

    NARCIS (Netherlands)

    Cserjesi, Renata; Vermeulen, Nicolas; Lenard, Laszlo; Luminet, Olivier

    2011-01-01

    There is growing evidence that disordered eating is associated with facial expression recognition and emotion processing problems. In this study, we investigated the question of whether anorexia and obesity occur on a continuum of attention bias towards negative facial expressions in comparison with

  3. A Web-Based Data Management and Analysis System for CO2 Capture Process

    OpenAIRE

    Wu, Yuxiang; Chan, Christine W.

    2010-01-01

    A web-based data management and analysis system for the CO2 capture process called CO2DMA has been developed. The system has a user friendly interface and therefore does not require a steep learning curve for the user. Since the system is built as a web service application, there is no need to install any software in the user’s computer. By automatically

  4. Robust processing of mining subsidence monitoring data

    Energy Technology Data Exchange (ETDEWEB)

    Wang Mingzhong; Huang Guogang [Pingdingshan Mining Bureau (China); Wang Yunjia; Guogangli [China Univ. of Mining and Technology, Xuzhou (China)

    1996-12-31

    Since China began research on mining subsidence in the 1950s, more than one thousand observation lines have been measured. Yet, monitoring data sometimes contain quite a lot of outliers because of the limits of the observation and geological mining conditions. In China, the method of processing mining subsidence monitoring data is nowadays based on the principle of least squares. This can produce lower accuracy, less reliability, or even errors. For the reasons given above, the authors, taking the actual situation in China into account, have done research on the robust processing of mining subsidence monitoring data with respect to how to obtain prediction parameters. The authors have derived the related formulas, designed computational programmes, carried out a great quantity of actual calculation and simulation, and achieved good results. (orig.)
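
    The abstract does not give the authors' robust estimators; a standard robust alternative to ordinary least squares is an M-estimator with a Huber loss, sketched below for fitting prediction parameters of a subsidence profile to monitoring points with outliers. The trough model, parameter names and scale are illustrative assumptions only.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def fit_robust(observed_x, observed_w, model, p0):
        """Fit prediction parameters with a Huber-loss M-estimator, which
        down-weights outlying monitoring points compared with plain least squares."""
        def residuals(params):
            return model(observed_x, params) - observed_w

        result = least_squares(residuals, x0=p0, loss="huber", f_scale=10.0)
        return result.x

    # Example with a simple (illustrative) Gaussian-shaped subsidence trough model.
    def trough(x, p):
        w_max, x0, r = p
        return -w_max * np.exp(-((x - x0) / r) ** 2)

    x = np.linspace(-300, 300, 61)
    rng = np.random.default_rng(2)
    w = trough(x, (800.0, 0.0, 120.0)) + rng.normal(scale=15.0, size=x.size)
    w[10] += 400.0                                  # one gross outlier
    params = fit_robust(x, w, trough, p0=(500.0, 10.0, 100.0))
    ```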

  5. Multimission image processing and science data visualization

    Science.gov (United States)

    Green, William B.

    1993-01-01

    The Operational Science Analysis (OSA) functional area supports science instrument data display, analysis, visualization and photo processing in support of flight operations of planetary spacecraft managed by the Jet Propulsion Laboratory (JPL). This paper describes the data products generated by the OSA functional area and the current computer system used to generate these data products. The objectives of a system upgrade now in progress are described. The design approach to development of the new system is reviewed, including the use of the Unix operating system and X-Window display standards to provide platform independence, portability, and modularity within the new system. The new system should provide a modular and scalable capability supporting a variety of future missions at JPL.

  6. Review of Automatic Feature Extraction from High-Resolution Optical Sensor Data for UAV-Based Cadastral Mapping

    Directory of Open Access Journals (Sweden)

    Sophie Crommelinck

    2016-08-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) have emerged as a rapid, low-cost and flexible acquisition system that appears feasible for application in cadastral mapping: high-resolution imagery, acquired using UAVs, enables a new approach for defining property boundaries. However, UAV-derived data are arguably not exploited to their full potential: based on UAV data, cadastral boundaries are visually detected and manually digitized. A workflow that automatically extracts boundary features from UAV data could increase the pace of current mapping procedures. This review introduces a workflow considered applicable for automated boundary delineation from UAV data. This is done by reviewing approaches for feature extraction from various application fields and synthesizing these into a hypothetical generalized cadastral workflow. The workflow consists of preprocessing, image segmentation, line extraction, contour generation and postprocessing. The review lists example methods per workflow step—including a description, trialed implementation, and a list of case studies applying individual methods. Furthermore, accuracy assessment methods are outlined. Advantages and drawbacks of each approach are discussed in terms of their applicability on UAV data. This review can serve as a basis for future work on the implementation of the most suitable methods in a UAV-based cadastral mapping workflow.

  7. Haptic Landmark Positioning and Automatic Landmark Transfer in 4D Lung CT Data

    Science.gov (United States)

    Färber, Matthias; Gawenda, Björn; Bohn, Christian-Arved; Handels, Heinz

    Manual landmark positioning in volumetric image data is a complex task and often results in erroneous landmark positions. The landmark positioning tool presented uses image curvature features to precompute suitable candidates for landmark positions on surface data of anatomical structures. A force-feedback I/O device is then used to haptically guide the user during the definition of the correct landmarks in the 3D data volume. Furthermore, existing landmarks in a time-point of a sequence of 3D volumes (4D data set) can iteratively be transferred to other time-points using a surface based registration technique. First results show significant time savings and small interobserver variability (IROV) compared to the IROV of manually defined landmark positions using orthogonal slices of the image data.

  8. ICESat-2 Data Management Services and Processes

    Science.gov (United States)

    Tanner, S.; Fowler, D. K.; Bond, C.; Stowe, M.; Webster, D.; Steiker, A. E.; Fowler, C.; McAllister, M.

    2015-12-01

    NASA's Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) will be launching in 2017 and will quickly begin generating an enormous amount of data. Close to a terabyte (TB) of data per day will be associated with the satellite's Advanced Topographic Laser Altimeter System (ATLAS) instrument. These data will be archived and made available to the public through NASA's Distributed Active Archive Center (DAAC) located at the National Snow and Ice Data Center (NSIDC) in Boulder, Colorado. Because of the expected volume of data, NSIDC and its partners are working on new capabilities and preparations that will be required to fully support the user community. These include using new processes and protocols simply to move the data to the NSIDC, as well as new tools for helping users find and download only the data they need. Subsetting, visualization and analysis capabilities across all of the ICESat-2 data products will be critical to dealing with these data. This presentation will explore the steps being taken by NSIDC and others to implement and make these capabilities available.

  9. Square Kilometre Array Science Data Processing

    Science.gov (United States)

    Nikolic, Bojan; SDP Consortium, SKA

    2014-04-01

    The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent being made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of some further derived data products, archiving and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of SDP are: - Identifying sufficient parallelism to utilise very large numbers of separate compute cores that will be required to provide exascale computing throughput - Managing efficiently the high internal data flow rates - A conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases - System management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system In my talk I will present possible initial architectures for the SDP system that attempt to address these and other challenges.

  10. Automatic control strategy for step feed anoxic/aerobic biological nitrogen removal process

    Institute of Scientific and Technical Information of China (English)

    ZHU Gui-bing; PENG Yong-zhen; WU Shu-yun; WANG Shu-ying

    2005-01-01

    Control of sludge age and mixed liquor suspended solids concentration in the activated sludge process is critical for ensuring effective wastewater treatment. A nonlinear dynamic model for a step-feed activated sludge process was developed in this study. The system is based on the control of the sludge age and the mixed liquor suspended solids in the aerator of the last stage by adjusting the sludge recycle and wastage flow rates, respectively. The simulation results showed that the sludge age remained nearly constant at a value of 16 d under variation of the influent characteristics. The mixed liquor suspended solids in the aerator of the last stage were also maintained at a desired value of 2500 g/m3 by adjusting the wastage flow rates.

  11. Automatic Process Optimization Of Sheet Metal Forming With Multi-objective

    International Nuclear Information System (INIS)

    It is crucial for process engineers to determine the optimal values and combination of process parameters in the design of sheet metal forming. The multi-objective genetic algorithm (MOGA) based on the Pareto approach and numerical simulation codes were integrated in this paper to achieve optimal formability in sheet metal forming. Three objective functions for local formability with respect to fracture, wrinkling and insufficient stretching were formulated based on the strain state at the end of the forming process in the Forming Limit Diagram. By using the Pareto-based MOGA, the optimal global formability, which represents the trade-off between the different local formability measures, was determined. For the efficiency and accuracy of the optimization procedure, both inverse and incremental finite element analyses were used to evaluate the objective functions. This method was applied to a complex engineering optimization problem: an engine hood outer panel, for which the optimal blank holder force and draw bead restraining forces were determined to satisfy the given objective functions for the forming of the auto body panel. The approach proposed in this paper has been shown to be a more powerful tool than the manual numerical simulation procedure
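
    The MOGA itself is not reproduced here; the core of any Pareto-based approach is the non-domination test among candidate parameter sets, sketched below for minimization of several formability objectives. The candidate scores are a toy example, not values from the paper.

    ```python
    import numpy as np

    def pareto_front(objectives):
        """Return indices of non-dominated rows of an (n_candidates, n_objectives)
        array, assuming every objective is to be minimized (e.g., fracture,
        wrinkling and insufficient-stretching measures)."""
        obj = np.asarray(objectives, dtype=float)
        keep = []
        for i in range(obj.shape[0]):
            dominated = np.any(
                np.all(obj <= obj[i], axis=1) & np.any(obj < obj[i], axis=1)
            )
            if not dominated:
                keep.append(i)
        return keep

    # Toy example: candidate process-parameter settings scored on three objectives.
    scores = [[0.2, 0.5, 0.3], [0.1, 0.7, 0.4], [0.3, 0.4, 0.2], [0.4, 0.8, 0.5]]
    print(pareto_front(scores))    # -> indices of the trade-off (Pareto-optimal) candidates
    ```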

  12. SoilJ - An ImageJ plugin for semi-automatized image-processing of 3-D X-ray images of soil columns

    Science.gov (United States)

    Koestel, John

    2016-04-01

    3-D X-ray imaging is a formidable tool for quantifying soil structural properties, which are known to be extremely diverse. This diversity necessitates the collection of large sample sizes to adequately represent the spatial variability of soil structure at a specific sampling site. One important bottleneck of using X-ray imaging is, however, the large amount of time required by a trained specialist to process the image data, which makes it difficult to process larger numbers of samples. The software SoilJ aims at removing this bottleneck by automating most of the image processing steps needed to analyze image data of cylindrical soil columns. SoilJ is a plugin of the free Java-based image-processing software ImageJ. The plugin is designed to automatically process all images located within a designated folder. In a first step, SoilJ recognizes the outlines of the soil column, upon which the column is rotated to an upright position and placed in the center of the canvas. Excess canvas is removed from the images. Then, SoilJ samples the grey values of the column material as well as the surrounding air in the Z-direction. Assuming that the column material (mostly PVC or aluminium) exhibits a spatially constant density, these grey values serve as a proxy for the image illumination at a specific Z-coordinate. Together with the grey values of the air they are used to correct image illumination fluctuations, which often occur along the axis of rotation during image acquisition. SoilJ also includes an algorithm for beam-hardening artefact removal and extended image segmentation options. Finally, SoilJ integrates the morphology analysis plugins of BoneJ (Doube et al., 2010, BoneJ: Free and extensible bone image analysis in ImageJ. Bone 47: 1076-1079) and provides an ASCII file summarizing these measures for each investigated soil column. In the future it is planned to integrate SoilJ into FIJI, the maintained and updated edition of ImageJ with selected

  13. BepiColombo Science Data Processing and Archiving System

    Science.gov (United States)

    Martinez, Santa; Ortiz de Landaluce, Inaki

    2015-12-01

    The approach selected for BepiColombo for the processing, analysis and archiving of the science data represents a significant change with respect to previous ESA planetary missions, and the Science Ground Segment (SGS), located at ESAC, will play a key role in these activities. This contribution will summarise the key features of the selected approach, and will describe its implementation, with focus on the following aspects: - The use of state-of-the-art virtualisation technology for automatic build, deployment and execution of the pipelines as independent application containers. This will allow specific software environments, and underlying hardware resources, to be isolated, scaled and accessed in a homogeneous fashion. - A set of core libraries under development at the SGS (e.g. telemetry decoding, PDS product generation/validation, conversion to engineering units, Java to SPICE binding, geometry computations) aimed to be reused for certain processing steps in different pipelines. The implementation follows a quite generic and modular architecture providing a high level of flexibility and adaptability, which will allow its re-usability by future ESA planetary missions.

  14. Automatic data-quality monitoring for continuous GPS tracking stations in Taiwan

    Science.gov (United States)

    Yeh, T. K.; Wang, C. S.; Chao, B. F.; Chen, C. S.; Lee, C. W.

    2007-10-01

    Taiwan has more than 300 Global Positioning System (GPS) tracking stations maintained by the Ministry of the Interior (MOI), Academia Sinica, the Central Weather Bureau and the Central Geological Survey. In the future, GPS tracking stations may replace the GPS control points after being given a legal status. Hence, the data quality of the tracking stations is an increasingly significant factor. This study considers the feasibility of establishing a system for monitoring GPS receivers. This investigation employs many data-quality indices and examines the relationship of these indices and the positioning precision. The frequency stability of the GPS receiver is the most important index; the cycle slip is the second index and the multipath is the third index. An auto-analytical system for analysing GPS data quality and monitoring the MOI's tracking stations can quickly find and resolve problems, or changes in station environment, to maintain high data quality for the tracking stations.

  15. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    OpenAIRE

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-01-01

    Road extraction in urban areas is a difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also have some characteristics that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. For these...

  16. Trends in Automatic Individual Tree Crown Detection and Delineation—Evolution of LiDAR Data

    Directory of Open Access Journals (Sweden)

    Zhen Zhen

    2016-04-01

    Full Text Available Automated individual tree crown detection and delineation (ITCD) using remotely sensed data plays an increasingly significant role in efficiently, accurately, and completely monitoring forests. This paper reviews trends in ITCD research from 1990–2015 from several perspectives—data/forest type, method applied, accuracy assessment and research objective—with a focus on studies using LiDAR data. This review shows that active sources are becoming more prominent in ITCD studies. Studies using active data—LiDAR in particular—accounted for 80% of the total increase over the entire time period, while those using passive data or fusion of passive and active data comprised relatively small proportions of the total increase (8% and 12%, respectively). Additionally, ITCD research has moved from incremental adaptations of algorithms developed for passive data sources to innovative approaches that take advantage of the novel characteristics of active datasets like LiDAR. These improvements make it possible to explore more complex forest conditions (e.g., closed hardwood forests, suburban/urban forests) rather than a single forest type, although most published ITCD studies still focused on closed softwood (41%) or mixed forest (22%). Approximately one-third of studies applied individual tree level (30%) assessment, with only a quarter reporting more comprehensive multi-level assessment (23%). Almost one-third of studies (32%) that concentrated on forest parameter estimation based on ITCD results had no ITCD-specific evaluation. Comparison of methods continues to be complicated by both choice of reference data and assessment metric; it is imperative to establish a standardized two-level assessment framework to evaluate and compare ITCD algorithms in order to provide specific recommendations about suitable applications of particular algorithms. However, the evolution of active remotely sensed data and novel platforms implies that automated ITCD will continue to be a

  17. Automatic Synthesis of UML Designs from Requirements in an Iterative Process

    Science.gov (United States)

    Schumann, Johann; Whittle, Jon; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The Unified Modeling Language (UML) is gaining wide popularity for the design of object-oriented systems. UML combines various object-oriented graphical design notations under one common framework. A major factor in the broad acceptance of UML is that it can be conveniently used in a highly iterative, Use Case (or scenario-based) process (although the process is not a part of UML). Here, the (pre-)requirements for the software are specified rather informally as Use Cases and a set of scenarios. A scenario can be seen as an individual trace of a software artifact. Besides first sketches of a class diagram to illustrate the static system breakdown, scenarios are a favorite means of communication with the customer, because scenarios describe concrete interactions between entities and are thus easy to understand. Scenarios with a high level of detail are often expressed as sequence diagrams. Later, in the design and implementation stage (elaboration and implementation phases), a design of the system's behavior is often developed as a set of statecharts. From there (and the full-fledged class diagram), actual code development is started. Current commercial UML tools support this phase by providing code generators for class diagrams and statecharts. In practice, it can be observed that the transition from requirements to design to code is a highly iterative process. In this talk, a set of algorithms is presented which perform reasonable synthesis and transformations between different UML notations (sequence diagrams, Object Constraint Language (OCL) constraints, statecharts). More specifically, we will discuss the following transformations: statechart synthesis, introduction of hierarchy, consistency of modifications, and "design-debugging".
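    As a rough, hedged illustration of scenario-to-statechart synthesis, the sketch below merges event traces (one per scenario) into a flat state machine by sharing common prefixes; the actual algorithms discussed in the talk additionally introduce hierarchy and check consistency. The trace format and naming are assumptions for illustration only.

```python
# Hypothetical sketch: merge scenario event traces into a flat state machine
# {state: {event: next_state}} by sharing common prefixes. Real statechart
# synthesis would also introduce hierarchy and check OCL-style constraints.
def synthesize_statechart(scenarios):
    """Build a prefix-tree state machine from a list of event traces."""
    transitions = {}
    state_of_prefix = {(): "S0"}   # empty prefix is the initial state
    counter = 1
    for trace in scenarios:
        prefix = ()
        for event in trace:
            nxt = prefix + (event,)
            if nxt not in state_of_prefix:
                state_of_prefix[nxt] = f"S{counter}"
                counter += 1
            src, dst = state_of_prefix[prefix], state_of_prefix[nxt]
            transitions.setdefault(src, {})[event] = dst
            prefix = nxt
    return transitions

if __name__ == "__main__":
    # Two illustrative scenarios sharing a common prefix.
    scenarios = [
        ["insertCard", "enterPIN", "withdraw"],
        ["insertCard", "enterPIN", "checkBalance"],
    ]
    for state, outgoing in synthesize_statechart(scenarios).items():
        print(state, outgoing)
```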

  18. Automatic detection of thermal damage in grinding process by artificial neural network

    Directory of Open Access Journals (Sweden)

    Fábio Romano Lofrano Dotto

    2003-12-01

    Full Text Available This work aims to develop an intelligent system for detecting workpiece burn in the surface grinding process by using a multilayer perceptron neural network trained to generalize the process and, in turn, obtain the burning threshold. In general, burning in the grinding process can be detected by the DPO and FKS parameters; however, these parameters were not efficient under the grinding conditions used in this work. The acoustic emission signal and the electric power of the grinding wheel drive motor are the input variables to the neural network, and the output variable is the burning occurrence. In the experimental work, one type of steel (ABNT 1045, annealed) and one type of grinding wheel, referred to as TARGA model ART 3TG80.3 NVHB, were employed.
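    A minimal sketch of the idea, assuming synthetic training data, is shown below: a small multilayer perceptron maps acoustic emission and motor power readings to a burn/no-burn decision. The network size, data distributions and decision rule are illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical sketch: a small MLP that classifies burn vs. no-burn from
# acoustic emission and motor power. Training data is synthetic; in the paper
# these signals come from the actual grinding experiments.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Synthetic samples: [acoustic_emission_rms, motor_power_kW]; burn cases are
# assumed to show higher values of both signals.
normal = rng.normal(loc=[1.0, 2.0], scale=0.2, size=(200, 2))
burn = rng.normal(loc=[1.8, 3.2], scale=0.2, size=(200, 2))
X = np.vstack([normal, burn])
y = np.r_[np.zeros(200), np.ones(200)]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Query a new measurement against the learned burn threshold.
sample = [[1.6, 3.0]]
print("burn probability:", clf.predict_proba(sample)[0, 1])
```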

  19. A new automatic SAR-based flood mapping application hosted on the European Space Agency's grid processing on demand fast access to imagery environment

    Science.gov (United States)

    Hostache, Renaud; Chini, Marco; Matgen, Patrick; Giustarini, Laura

    2013-04-01

    There is a clear need for developing innovative processing chains based on earth observation (EO) data to generate products supporting emergency response and flood management at a global scale. Here, an automatic flood mapping application is introduced. The latter is currently hosted on the Grid Processing on Demand (G-POD) Fast Access to Imagery (Faire) environment of the European Space Agency. The main objective of the online application is to deliver flooded areas using both recent and historical acquisitions of SAR data in an operational framework. It is worth mentioning that the method can be applied to both medium and high resolution SAR images. The flood mapping application consists of two main blocks: 1) a set of query tools for selecting the "crisis image" and the optimal corresponding pre-flood "reference image" from the G-POD archive; 2) an algorithm for extracting flooded areas using the previously selected "crisis image" and "reference image". The proposed method is a hybrid methodology combining histogram thresholding, region growing and change detection to enable automatic, objective and reliable flood extent extraction from SAR images. The method is based on the calibration of a statistical distribution of "open water" backscatter values inferred from SAR images of floods. Change detection with respect to a pre-flood reference image helps reduce the over-detection of inundated areas. The algorithms are computationally efficient and operate with minimum data requirements, taking only a flood image and a reference image as input. Stakeholders in flood management and service providers are able to log onto the flood mapping application to get support for the retrieval, from the rolling archive, of the most appropriate pre-flood reference image. Potential users will also be able to apply the implemented flood delineation algorithm. Case studies of several recent high-magnitude flooding events (e.g. July 2007 Severn River flood
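    A minimal sketch of such a hybrid scheme, under assumed backscatter thresholds and synthetic dB images, is given below: dark pixels in the crisis image that also show a significant drop relative to the reference image are used as seeds and grown into a flood mask.

```python
# Hypothetical sketch: combine thresholding of dark ("open water") backscatter,
# change detection against a pre-flood reference image, and a simple seeded
# region growing step. Threshold values and the synthetic dB images are
# illustrative assumptions, not the application's calibrated parameters.
import numpy as np
from scipy.ndimage import binary_propagation

def map_flood(flood_db, reference_db, seed_db=-18.0, grow_db=-14.0, min_drop_db=3.0):
    """Return a boolean flood mask from co-registered SAR backscatter (dB) images."""
    decreased = (reference_db - flood_db) > min_drop_db   # change detection
    seeds = (flood_db < seed_db) & decreased               # confident open-water pixels
    candidates = (flood_db < grow_db) & decreased           # mask for region growing
    return binary_propagation(seeds, mask=candidates)       # grow seeds within candidates

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    reference = rng.normal(-8.0, 1.0, (100, 100))
    flood = reference.copy()
    flood[30:60, 20:80] = rng.normal(-20.0, 1.0, (30, 60))  # synthetic inundated patch
    mask = map_flood(flood, reference)
    print("flooded pixels:", int(mask.sum()))
```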

  20. Telemedicine optoelectronic biomedical data processing system

    Science.gov (United States)

    Prosolovska, Vita V.

    2010-08-01

    The telemedicine optoelectronic biomedical data processing system was created to share medical information for the oversight of health rights and for a timely, rapid response to crises. The system includes the following main blocks: a bioprocessor, an analog-to-digital converter for biomedical images, an optoelectronic module for image processing, an optoelectronic module for parallel recording and storage of biomedical images, and a matrix screen for displaying biomedical images. The rated temporal characteristics of the blocks are defined by the triggering optoelectronic couple in the analog-to-digital converters and the imaging time of the matrix screen. The element base for the hardware implementation of the developed matrix screen consists of integrated optoelectronic couples produced by selective epitaxy.