WorldWideScience

Sample records for automatic data processing

  1. Automatically processing physical data from LHD experiments

    Energy Technology Data Exchange (ETDEWEB)

    Emoto, M., E-mail: emoto.masahiko@nifs.ac.jp; Ida, K.; Suzuki, C.; Yoshida, M.; Akiyama, T.; Nakamura, Y.; Sakamoto, R.; Yokoyama, M.; Yoshinuma, M.

    2014-05-15

    Physical data produced by Large Helical Device (LHD) experiments is supplied by the Kaiseki server, which registers more than 200 types of diagnostic data. Dependencies exist among the data; in many cases, calculating one quantity requires other data. Therefore, to obtain unregistered data, one must calculate not only the diagnostic data itself but also the data it depends on; however, because the data is registered by different scientists, each scientist must separately calculate and register their respective data. To simplify this complicated procedure, we have developed an automatic calculation system called AutoAna. The calculation programs of AutoAna are distributed on a network, and their number can easily be increased dynamically. Our system is therefore scalable and ready for substantial increases in the size of the target data.
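
    The dependency handling described above is essentially an ordering problem: an unregistered quantity can only be calculated once all of the data it depends on has been calculated. A minimal sketch of that idea is shown below; the dependency map, quantity names and compute() stub are illustrative assumptions, not the actual AutoAna interface.

```python
# Sketch: resolve the calculation order for dependent diagnostic data.
# The dependency map and the compute() stub are hypothetical examples,
# not AutoAna's real registration API.
from graphlib import TopologicalSorter

# each quantity maps to the set of quantities it depends on (assumed names)
dependencies = {
    "te_profile": {"thomson_raw", "flux_mapping"},
    "flux_mapping": {"equilibrium"},
    "equilibrium": set(),
    "thomson_raw": set(),
}

def compute(name):
    """Placeholder for the per-quantity calculation program."""
    print(f"calculating {name}")

# static_order() yields each quantity only after all of its dependencies,
# so every calculation sees the data it needs already registered.
for quantity in TopologicalSorter(dependencies).static_order():
    compute(quantity)
```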

  2. Automatic data processing of nondestructive testing results

    International Nuclear Information System (INIS)

    The ADP system for the documentation of inservice inspection results of nuclear power plants is described. The same system can be used during the whole operational life time of the plant. To make this possible the ADP system has to be independent of the type of hardware, data recording and software. The computer programs are made using Fortran IV programming language. The results of nondestructive testing are recorded in an inspection register by ADP methods. Different outputs can be utilized for planning, performance and reporting of inservice inspections. (author)

  3. Automatic data processing and crustal modeling on Brazilian Seismograph Network

    Science.gov (United States)

    Moreira, L. P.; Chimpliganond, C.; Peres Rocha, M.; Franca, G.; Marotta, G. S.; Von Huelsen, M. G.

    2014-12-01

    The Brazilian Seismograph Network (RSBR) is a joint project of four Brazilian research institutions with the support of Petrobras; its main goals are to monitor seismic activity, generate seismic hazard alerts and provide data for research on Brazilian tectonics and structure. Each institution operates and maintains its own seismic network, sharing data over a virtual private network. These networks have seismic stations transmitting raw data in real time (or near real time) to their respective data centers, where the seismogram files are then shared with the other institutions. Currently RSBR has 57 broadband stations, some of them operating since 1994, transmitting data through mobile phone data networks or satellite links. Station management, data acquisition and storage, and earthquake data processing at the Seismological Observatory of the University of Brasilia are performed automatically by SeisComP3 (SC3). However, SC3 data processing is limited to event detection, location and magnitude. An automatic crustal modeling system was therefore designed to process raw seismograms and generate 1D S-velocity profiles. This system automatically calculates receiver function (RF) traces, the Vp/Vs ratio (h-k stacking) and surface-wave dispersion (SWD) curves. These traces and curves are then used to calibrate lithospheric seismic velocity models with a joint inversion scheme. The results can be reviewed by an analyst, who can change processing parameters and select or reject the RF traces and SWD curves used in the model calibration. The results obtained from this system will be used to generate and update a quasi-3D crustal model of Brazil's territory.
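
    The joint inversion mentioned above combines misfits from the different data types into a single objective. A minimal sketch of such a weighted joint misfit is given below; the equal weights and simple mean-squared terms are assumptions for illustration, not the scheme actually used for RSBR.

```python
# Sketch of a weighted joint misfit of the kind minimised when calibrating a
# 1D S-velocity model against receiver functions (RF) and surface-wave
# dispersion (SWD). Weights and the mean-squared form are assumptions.
import numpy as np

def joint_misfit(rf_obs, rf_pred, swd_obs, swd_pred, w_rf=0.5, w_swd=0.5):
    rf_term = np.mean((np.asarray(rf_obs) - np.asarray(rf_pred)) ** 2)
    swd_term = np.mean((np.asarray(swd_obs) - np.asarray(swd_pred)) ** 2)
    return w_rf * rf_term + w_swd * swd_term
```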

  4. Guidelines for Automatic Data Processing Physical Security and Risk Management. Federal Information Processing Standards Publication 31.

    Science.gov (United States)

    National Bureau of Standards (DOC), Washington, DC.

    These guidelines provide a handbook for use by federal organizations in structuring physical security and risk management programs for their automatic data processing facilities. This publication discusses security analysis, natural disasters, supporting utilities, system reliability, procedural measures and controls, off-site facilities,…

  5. Towards Automatic Capturing of Manual Data Processing Provenance

    OpenAIRE

    Wombacher, Andreas; Huq, Mohammad R.

    2011-01-01

    Often data processing is not implemented by a workflow system or an integration application but is performed manually by humans along the lines of a more or less specified procedure. Collecting provenance information during manual data processing cannot be automated. Further, manual collection of provenance information is error prone and time consuming. Therefore, we propose to infer provenance information based on the read and write accesses of users. The derived provenance information is comp...

  6. Nuclear demagnetization refrigerator with automatic control, pick up and data process system

    International Nuclear Information System (INIS)

    A nuclear demagnetization refrigerator for various physics research at ultralow temperatures, with an automatic control, pick-up and data processing system, has been developed. The design of the main units and the performance of the refrigerator and the automatic system are described. The possibilities of operating the set-up in various regimes are analyzed for the case of NMR investigation of helium quantum crystals.

  7. Dialog system for automatic data input/output and processing with two BESM-6 computers

    International Nuclear Information System (INIS)

    This paper presents a system for conducting experiments with fully automatic processing of data from multichannel recorders in dialog mode. The system acquires data at a rate of 2.5 × 10³ readings/s, processes them in real time, and outputs digital and graphical material in a multitasking environment.

  8. Automatic post processing algorithm for passive seismic monitoring data

    International Nuclear Information System (INIS)

    The monitoring of different types of seismic events – geoacoustic precursors of earthquakes, industrial and field explosions, impact sites of separating stages of launch vehicles, etc. – is one of the key problems of modern environmental monitoring. A peculiarity of this kind of monitoring is that it relies on mobile seismic arrays, which must be deployed in the expected area of occurrence of the events. One of the most important steps is solving the problems connected with the detection and identification of data recorded by the passive sensors of a mobile seismic array (MSA). The task of determining the nature of the source and its coordinates lies at the basis of direction finding, referred to as geoacoustic location. Using a new approach (based not on location but on neural classification of waveform portraits), the usability of an algorithm based on quantitative signal parameters is demonstrated.

  9. Automatic Geometric Processing for Very High Resolution Optical Satellite Data Based on Vector Roads and Orthophotos

    Directory of Open Access Journals (Sweden)

    Peter Pehani

    2016-04-01

    In response to the increasing need for fast satellite image processing, SPACE-SI developed STORM, a fully automatic image processing chain that performs all processing steps from the input optical images to web-delivered map-ready products for various sensors. This paper focuses on the automatic geometric corrections module and its adaptation to very high resolution (VHR) multispectral images. In the automatic ground control point (GCP) extraction sub-module, a two-step algorithm that utilizes vector roads as a reference layer and delivers GCPs for high resolution RapidEye images with near-pixel accuracy was initially implemented. Super-fine positioning of individual GCPs onto an aerial orthophoto was introduced for VHR images. The enhanced algorithm is capable of achieving an accuracy of approximately 1.5 pixels on WorldView-2 data. In the case of RapidEye images, the accuracies of the physical sensor model reach sub-pixel values at independent check points. When compared to the reference national aerial orthophoto, the accuracies of WorldView-2 orthoimages automatically produced with the rational function model reach near-pixel values. On a heterogeneous set of 41 RapidEye images the rate of automatic processing reached 97.6%. Image processing times remained under one hour for standard-size images of both sensor types.
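
    The pixel-level accuracies quoted above come down to residuals between automatically extracted GCPs and their reference positions. A small sketch of that check is shown below; the array shapes and the assumed pixel size are illustrative and not part of the STORM chain itself.

```python
# Sketch: express GCP accuracy in pixel units as the RMSE of residuals between
# extracted and reference positions. Inputs are (n, 2) arrays in metres; the
# pixel size is an assumed example value, not a STORM parameter.
import numpy as np

def gcp_rmse_pixels(extracted_xy, reference_xy, pixel_size_m):
    residuals = np.asarray(extracted_xy) - np.asarray(reference_xy)
    rmse_m = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
    return rmse_m / pixel_size_m
```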

  10. Automatic classification of oranges using image processing and data mining techniques

    OpenAIRE

    Mercol, Juan Pablo; Gambini, María Juliana; Santos, Juan Miguel

    2008-01-01

    Data mining is the discovery of patterns and regularities from large amounts of data using machine learning algorithms. This can be applied to object recognition using image processing techniques. In fruit and vegetable production lines, quality assurance is done by trained people who inspect the fruits while they move on a conveyor belt, and classify them into several categories based on visual features. In this paper we present an automatic orange classification system, which us...

  11. Automatic processing of macromolecular crystallography X-ray diffraction data at the ESRF.

    Science.gov (United States)

    Monaco, Stéphanie; Gordon, Elspeth; Bowler, Matthew W; Delagenière, Solange; Guijarro, Matias; Spruce, Darren; Svensson, Olof; McSweeney, Sean M; McCarthy, Andrew A; Leonard, Gordon; Nanao, Max H

    2013-06-01

    The development of automated high-intensity macromolecular crystallography (MX) beamlines at synchrotron facilities has resulted in a remarkable increase in sample throughput. Developments in X-ray detector technology now mean that complete X-ray diffraction datasets can be collected in less than one minute. Such high-speed collection, and the volumes of data that it produces, often make it difficult for even the most experienced users to cope with the deluge. However, the careful reduction of data during experimental sessions is often necessary for the success of a particular project or as an aid in decision making for subsequent experiments. Automated data reduction pipelines provide a fast and reliable alternative to user-initiated processing at the beamline. In order to provide such a pipeline for the MX user community of the European Synchrotron Radiation Facility (ESRF), a system for the rapid automatic processing of MX diffraction data from single and multiple positions on a single or multiple crystals has been developed. Standard integration and data analysis programs have been incorporated into the ESRF data collection, storage and computing environment, with the final results stored and displayed in an intuitive manner in the ISPyB (information system for protein crystallography beamlines) database, from which they are also available for download. In some cases, experimental phase information can be automatically determined from the processed data. Here, the system is described in detail. PMID:23682196

  12. preAssemble: a tool for automatic sequencer trace data processing

    Directory of Open Access Journals (Sweden)

    Laerdahl Jon K

    2006-01-01

    Background: Trace or chromatogram files (raw data) are produced by automatic nucleic acid sequencing equipment, or sequencers. Each file contains information which can be interpreted by specialised software to reveal the sequence (base calling). This is done by the sequencer's proprietary software or by publicly available programs. Depending on the size of a sequencing project, the number of trace files can vary from just a few to thousands of files. Sequence quality assessment on various criteria is important at the stage preceding clustering and contig assembly. Two major publicly available packages, Phred and Staden, are used by preAssemble to perform sequence quality processing. Results: The preAssemble pre-assembly sequence processing pipeline has been developed for small to large scale automatic processing of DNA sequencer chromatogram (trace) data. The Staden Package Pregap4 module and the base-calling program Phred are utilized in the pipeline, which produces detailed and self-explanatory output that can be displayed with a web browser. preAssemble can be used successfully with very little previous experience; however, options for parameter tuning are provided for advanced users. preAssemble runs under UNIX and Linux operating systems. It is available for download and will run as stand-alone software. It can also be accessed on the Norwegian Salmon Genome Project web site, where preAssemble jobs can be run on the project server. Conclusion: preAssemble is a tool for performing quality assessment of sequences generated by automatic sequencing equipment. preAssemble is flexible since both interactive jobs on the preAssemble server and a stand-alone downloadable version are available. Virtually no previous experience is necessary to run a default preAssemble job; on the other hand, options for parameter tuning are provided. Consequently preAssemble can be used as efficiently for just several trace files as for large scale sequence
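
    Quality assessment of traces before assembly usually amounts to inspecting the Phred scores produced by base calling. The sketch below shows one simple windowed trimming rule on those scores; the rule and its thresholds are assumptions, not the actual Phred/Pregap4 behaviour inside preAssemble.

```python
# Sketch of a simple quality trim on per-base Phred scores: keep the read up to
# the first window whose mean score drops below a threshold. The windowed rule
# and the default values are illustrative assumptions.
def trim_read(bases, phred_scores, threshold=20, window=10):
    end = len(bases)
    for i in range(len(bases) - window + 1):
        if sum(phred_scores[i:i + window]) / window < threshold:
            end = i
            break
    return bases[:end], phred_scores[:end]
```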

  13. Automatic processing and modeling of GPR data for pavement thickness and properties

    Science.gov (United States)

    Olhoeft, Gary R.; Smith, Stanley S., III

    2000-04-01

    A GSSI SIR-8 with 1 GHz air-launched horn antennas has been modified to acquire data from a moving vehicle. Algorithms have been developed to acquire the data, and to automatically calibrate, position, process, and full waveform model it without operator intervention. Vehicle suspension system bounce is automatically compensated (for varying antenna height). Multiple scans are modeled by full waveform inversion that is remarkably robust and relatively insensitive to noise. Statistical parameters and histograms are generated for the thickness and dielectric permittivity of concrete or asphalt pavements. The statistical uncertainty with which the thickness is determined is given with each thickness measurement, along with the dielectric permittivity of the pavement material and of the subgrade material at each location. Permittivities are then converted into equivalent density and water content. Typical statistical uncertainties in thickness are better than 0.4 cm in 20 cm thick pavement. On a Pentium laptop computer, the data may be processed and modeled to have cross-sectional images and computed pavement thickness displayed in real time at highway speeds.
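
    The conversion from modeled permittivity and two-way travel time to a layer thickness rests on a simple relation, d = c·t / (2·√εr). The sketch below evaluates it; the example numbers are illustrative, and the paper's full waveform inversion is of course far more involved than this final conversion step.

```python
# Sketch of the basic relation behind GPR pavement-thickness estimation:
# a layer of relative permittivity eps_r crossed in two-way travel time t
# has thickness d = c * t / (2 * sqrt(eps_r)).
C = 299_792_458.0  # speed of light in vacuum, m/s

def layer_thickness_m(two_way_time_s, eps_r):
    return C * two_way_time_s / (2.0 * eps_r ** 0.5)

# e.g. a 3 ns two-way time in concrete with eps_r ~ 6 gives roughly 0.18 m
print(layer_thickness_m(3e-9, 6.0))
```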

  14. Advanced method for automatic processing of seismic and infra-sound data

    International Nuclear Information System (INIS)

    Governmental organizations have expressed their need for rapid and precise information in the two main fields covered by operational seismology, i.e. major earthquake alerts and the detection of nuclear explosions. To satisfy both of these constraints, it is necessary to implement increasingly elaborate automatic methods for processing the data. Automatic processing methods are mainly based on the following elementary steps: detection of a seismic signal on a recording; identification of the type of wave associated with the signal; association of the different detected arrivals with the same seismic event; localization of the source, which also determines the characteristics of the event. Furthermore, two main categories of processing may be distinguished: methods suitable for large-aperture networks, which are characterized by single-channel treatment for detection and identification, and antenna-type methods, which are based on searching for consistent signals on the scale of the network. Within the two main fields of research mentioned here, our effort has focused on regional-scale seismic waves in relation to large-aperture networks as well as on detection techniques using a mini-network (antenna). We have taken advantage of an extensive set of examples in order to implement an automatic procedure for identifying regional seismic waves on single-channel recordings. With the mini-networks, we have developed a novel, universally applicable method successfully applied to various types of recordings (e.g. seismic, micro-barometric, etc.) and to networks adapted to different wavelength bands. (authors)
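
    The first elementary step listed above, detecting a signal on a single recording, is very often implemented as a short-term/long-term average (STA/LTA) trigger. The sketch below shows that classical detector; it is an illustration of the step, not the specific single-channel procedure developed by the authors.

```python
# Sketch of a classical STA/LTA trigger for single-channel signal detection.
# Window lengths and the threshold are illustrative values.
import numpy as np

def sta_lta_onsets(trace, fs, sta_s=1.0, lta_s=30.0, threshold=4.0):
    nsta, nlta = int(sta_s * fs), int(lta_s * fs)
    energy = np.asarray(trace, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta      # short-term average
    lta = (csum[nlta:] - csum[:-nlta]) / nlta      # long-term average
    ratio = sta[nlta - nsta:] / np.maximum(lta, 1e-12)
    # sample indices (window ends) where the STA/LTA ratio exceeds the threshold
    return np.flatnonzero(ratio > threshold) + nlta - 1
```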

  15. Explodet Project:. Methods of Automatic Data Processing and Analysis for the Detection of Hidden Explosive

    Science.gov (United States)

    Lecca, Paola

    2003-12-01

    The research of the INFN Gruppo Collegato di Trento within the EXPLODET project for humanitarian demining is devoted to the development of a software procedure for the automation of data analysis and decision making about the presence of hidden explosive. Innovative algorithms for likely background calculation, a system based on neural networks for energy calibration, and simple statistical methods for the qualitative consistency check of the signals are the main parts of the software performing the automatic data elaboration.

  16. Data mining process automatization of air pollution data by the LISp-Miner system

    OpenAIRE

    Ochodnická, Zuzana

    2014-01-01

    This thesis focuses on the area of automated data mining. Its aims are to describe the area of automated data mining, to design a process for the automated creation of data mining tasks for verifying given domain knowledge and searching for new knowledge, and to implement the verification of given domain knowledge of the attribute-dependency (influence) type with search space adjustments. The implementation language is the LMCL language, which enables usage of the LISp-Mi...

  17. Grid infrastructure for automatic processing of SAR data for flood applications

    Science.gov (United States)

    Kussul, Natalia; Skakun, Serhiy; Shelestov, Andrii

    2010-05-01

    More and more geoscience applications are being moved onto Grids. Given the complexity of geoscience applications, caused by complex workflows, the use of computationally intensive environmental models, and the need to manage and integrate heterogeneous data sets, the Grid offers solutions to tackle these problems. Many geoscience applications, especially those related to disaster management and mitigation, require geospatial services to be delivered in a timely manner. For example, information on flooded areas should be provided to the relevant organizations (local authorities, civil protection agencies, UN agencies, etc.) within 24 h so that the resources required to mitigate the disaster can be allocated effectively. Therefore, providing an infrastructure and services that enable the automatic generation of products based on the integration of heterogeneous data is a task of great importance. In this paper we present a Grid infrastructure for the automatic processing of synthetic-aperture radar (SAR) satellite images to derive flood products. In particular, we use SAR data acquired by ESA's Envisat satellite, and neural networks to derive flood extent. The data are provided in operational mode from the ESA rolling archive (within an ESA Category-1 grant). We developed a portal that is based on the OpenLayers framework and provides an access point to the developed services. Through the portal the user can define a geographical region and search for the required data. Upon selection of the data sets, a workflow is automatically generated and executed on the resources of the Grid infrastructure. For workflow execution and management we use the Karajan language. The workflow of SAR data processing consists of the following steps: image calibration, image orthorectification, image processing with neural networks, topographic effects removal, geocoding and transformation to lat/long projection, and visualisation. These steps are executed by different software, and can be

  18. Automatic Sample and Data Processing in Studies of Calcium Metabolism in Rats

    International Nuclear Information System (INIS)

    The study of calcium metabolism in rats as a function of age or various forms of treatment entails experiments on large numbers of animals. These investigations involve: (i) studying the way in which the serum concentration of a tracer dose of 45Ca injected intravenously varies as a function of time, and (ii) carrying out measurements of chemical and radiochemical balance. By combining these two types of information and subjecting them to mathematical analysis it is possible to evolve a general model of calcium metabolism. This model can then be used to deduce the size of the exchangeable compartments and the relative importance of the different metabolic paths, such as intestinal absorption, renal and intestinal excretion, and deposition and elimination of bone calcium. The authors' work on these subjects was facilitated by the development of automatic methods for measuring the samples and processing the data, and these methods are the subject of their paper. Processing of samples: the radioactivity measurements are carried out on small samples (20-40 λ) of plasma, removed at repeated intervals, and on total quantities of faeces and urine excreted in a given period. The measuring apparatus used comprises a feed, a low-background anti-coincidence counter and a digital computer; the measurements obtained from the computer are then recorded on a printer. The novel features of the sample preparation techniques used and the performance achieved by the measuring apparatus, are discussed, with special reference to the (statistical) counting conditions, which are checked by the computer each time new measurements have to be calculated. Processing of data: this is done by an IBM-7040 digital computer, into which are fed the programme for the calculation and all the un-corrected experimental data in the form of punched cards, separately for each animal. There are three stages to the data-processing operation, namely: (1) converting the raw data and calculating the standard

  19. Automatic mapping of monitoring data

    DEFF Research Database (Denmark)

    Lophaven, Søren; Nielsen, Hans Bruun; Søndergaard, Jacob

    2005-01-01

    This paper presents an approach, based on universal kriging, for automatic mapping of monitoring data. The performance of the mapping approach is tested on two data-sets containing daily mean gamma dose rates in Germany reported by means of the national automatic monitoring network (IMIS). In the second dataset an accidental release of radioactivity in the environment was simulated in the South-Western corner of the monitored area. The approach has a tendency to smooth the actual data values, and therefore it underestimates extreme values, as seen in the second dataset. However, it is capable of identifying a release of radioactivity provided that the number of sampling locations is sufficiently high. Consequently, we believe that a combination of applying the presented mapping approach and the physical knowledge of the transport processes of radioactivity should be used to predict the extreme values...
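
    A much simplified sketch of the kriging prediction underlying such mapping is given below: ordinary kriging with a fixed exponential covariance, solved as a small linear system with a Lagrange multiplier. The paper uses universal kriging with a fitted variogram, so this only illustrates the core idea, and all parameter values are assumptions.

```python
# Simplified sketch of kriging interpolation for mapping monitoring data:
# ordinary kriging with a fixed exponential covariance model. The paper's
# universal kriging (trend terms, fitted variogram) is more general.
import numpy as np

def exp_cov(h, sill=1.0, rng=50.0):
    return sill * np.exp(-h / rng)

def ordinary_krige(xy_obs, z_obs, xy_new, sill=1.0, rng=50.0):
    xy_obs, xy_new = np.asarray(xy_obs, float), np.asarray(xy_new, float)
    z_obs = np.asarray(z_obs, float)
    n = len(xy_obs)
    d_oo = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # kriging system [C 1; 1 0] with a Lagrange multiplier for unbiasedness
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d_oo, sill, rng)
    A[n, n] = 0.0
    d_no = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    b = np.ones((len(xy_new), n + 1))
    b[:, :n] = exp_cov(d_no, sill, rng)
    weights = np.linalg.solve(A, b.T)   # (n+1, m): kriging weights + multiplier
    return weights[:n].T @ z_obs        # predictions at the new locations
```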

  20. Well scintillation counter with automatic sample changing and data processing. An inexpensive instrument incorporating consumer products

    International Nuclear Information System (INIS)

    An automatic well scintillation counting system suitable for in-vitro assays with 125I has been designed with the express purpose of allowing effective operation and maintenance in laboratories in developing countries. The system incorporates comparatively simple components, notably two consumer products: A Kodak Carousel slide projector as sample changer and a Hewlett-Packard HP-97 programmable printing calculator as system controller and data processor. The instrument can accommodate 80 counting vials of dimensions 12 mm diameter × 75 mm, or 40 vials of 16 mm diameter × 100 mm. The calculator provides on-line control and data reduction with the mediation of an interface somewhat resembling that required between a scaler and a printer. Its program capacity is adequate for fairly complicated on-line operations, including, for example, deduction of concentration of hormone in unknown sample by interpolation from a standard curve in logit-log space, calculation of error in hormone concentration, and termination of counting when the counting error is reduced to a prescribed fraction of the composite of other random assay errors (as stored in the calculator's memory). This system is inexpensive, robust, and capable of being operated manually if automatic accessories fail. It could be improved in several ways, particularly by providing for operation from batteries and, no doubt in the immediate future, substitution of the next generation of cheaper and more powerful calculators for the calculator used at present. The instrument may be cost-effective in any small to medium-sized laboratory. (author)
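
    The stopping rule mentioned above follows directly from counting statistics: with N recorded counts the relative counting error is 1/√N, so counting can stop as soon as that error falls below the prescribed fraction of the other random assay errors. A small sketch with illustrative numbers follows.

```python
# Sketch of the counting-termination rule: stop when the relative counting
# error 1/sqrt(N) is no more than a prescribed fraction of the combined other
# random assay errors. The fraction and error values are illustrative.
def counting_complete(counts, other_rel_error, fraction=0.5):
    counting_rel_error = counts ** -0.5 if counts > 0 else float("inf")
    return counting_rel_error <= fraction * other_rel_error

# e.g. with 5% other assay error and fraction 0.5, counting stops once
# 1/sqrt(N) <= 0.025, i.e. after N >= 1600 counts.
print(counting_complete(1600, other_rel_error=0.05))  # True
```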

  1. Overview of the SOFIA Data Processing System: A generalized system for manual and automatic data processing at the SOFIA Science Center

    Science.gov (United States)

    Shuping, Ralph; Krzaczek, Robert; Vacca, William D.; Charcos-Llorens, Miguel; Reach, William T.; Alles, Rosemary; Clarke, Melanie; Melchiorri, Riccardo; Radomski, James T.; Shenoy, Sachindev S.; Sandel, David; Omelian, Eric

    2015-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is an airborne astronomical observatory comprised of a 2.5-meter telescope mounted in the aft section of a Boeing 747SP aircraft. SOFIA is designed to execute observations at altitudes between 37,000 and 45,000 feet, above 99% of atmospheric water vapor. During routine operations, several instruments will be available to the astronomical community, including cameras and spectrographs in the near- to far-IR. Raw data obtained in-flight require a significant amount of processing to correct for background emission (from both the telescope and atmosphere), remove instrumental artifacts, correct for atmospheric absorption, and apply both wavelength and flux calibration. In general, this processing is highly specific to the instrument and telescope. Once this post-processing is complete, the data can be used in scientific analysis and publications. In order to maximize the scientific output of the observatory, the SOFIA Science Center must provide these post-processed data sets to Guest Investigators in a timely manner. To meet this requirement, we have designed and built the SOFIA Data Processing System (DPS): an in-house set of tools and services that can be used in both automatic ("pipeline") and manual modes to process data from a variety of instruments. In this poster paper, we present an overview of the DPS concepts and architecture, as well as operational results from the first two SOFIA observing cycles (2013-2014).

  2. Automatic Data Processing Equipment (ADPE) acquisition plan for the medical sciences

    Science.gov (United States)

    1979-01-01

    An effective mechanism for meeting the SLSD/MSD data handling/processing requirements for Shuttle is discussed. The ability to meet these requirements depends upon the availability of a general purpose high speed digital computer system. This system is expected to implement those data base management and processing functions required across all SLSD/MSD programs during training, laboratory operations/analysis, simulations, mission operations, and post mission analysis/reporting.

  3. Automatic drawing and CAD actualization in processing data of radiant sampling in physics prospect

    International Nuclear Information System (INIS)

    This paper discusses a method of processing radiant sampling data with a computer. With this method, the curve of the radiant sampling data can be obtained and explained, mineral masses can be combined, analysed and calculated, and the results can be recorded in a notebook. The method has many merits: it is easy to learn, simple to use and highly efficient. It adapts to all sorts of mines. (authors)

  4. Automatic drawing and CAD actualization in processing data of radiant sampling in physics prospect

    International Nuclear Information System (INIS)

    This paper discusses a method of processing radiant sampling data with a computer. With this method, the curve of the radiant sampling data can be obtained and explained, mineral masses can be combined, analysed and calculated, and the results can be recorded in a notebook. The method has many merits: it is easy to learn, simple to use and highly efficient. It adapts to all sorts of mines. (authors)

  5. Process automatization in system administration

    OpenAIRE

    Petauer, Janja

    2013-01-01

    The aim of the thesis is to present the automation of user management in the company Studio Moderna. The company has grown exponentially in recent years, which is why we needed to find a faster, easier and cheaper way of managing user accounts. We automated the processes of creating, changing and removing user accounts within Active Directory. We prepared a user interface inside an existing application, used JavaScript for drop-down menus, wrote a script in the scripting programming langu...

  6. Automatic data processing and analysis system for monitoring region around a planned nuclear power plant

    Science.gov (United States)

    Kortström, Jari; Tiira, Timo; Kaisko, Outi

    2016-03-01

    The Institute of Seismology of the University of Helsinki is building a new local seismic network, called the OBF network, around a planned nuclear power plant in Northern Ostrobothnia, Finland. The network will consist of nine new stations and one existing station. The network should be dense enough to provide azimuthal coverage better than 180° and automatic detection capability down to ML -0.1 within a radius of 25 km from the site. The network construction work began in 2012 and the first four stations started operation at the end of May 2013. We applied an automatic seismic signal detection and event location system to a network of 13 stations consisting of the four new stations and the nearest stations of the Finnish and Swedish national seismic networks. Between the end of May and December 2013 the network detected 214 events inside the predefined area of 50 km radius surrounding the planned nuclear power plant site. Of those detections, 120 were identified as spurious events. A total of 74 events were associated with known quarries and mining areas. The average location error, calculated as the difference between the locations announced by environmental authorities and companies and the automatic locations, was 2.9 km. During the same time period eight earthquakes in the magnitude range 0.1-1.0 occurred within the area. Of these, seven could be automatically detected. The results from the phase 1 stations of the OBF network indicate that the planned network can achieve its goals.
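
    The 2.9 km figure quoted above is a mean epicentral distance between announced and automatically determined locations. A sketch of that computation using the haversine great-circle distance is shown below; the coordinate lists are placeholders.

```python
# Sketch: mean location error as the average great-circle (haversine) distance
# between announced and automatically determined epicentres. Input coordinate
# pairs (lat, lon in degrees) are placeholders.
import math

def haversine_km(lat1, lon1, lat2, lon2, r_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r_km * math.asin(math.sqrt(a))

def mean_location_error_km(announced, automatic):
    dists = [haversine_km(a[0], a[1], b[0], b[1]) for a, b in zip(announced, automatic)]
    return sum(dists) / len(dists)
```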

  7. Automatic segmentation of blood vessels from retinal fundus images through image processing and data mining techniques

    Indian Academy of Sciences (India)

    R Geetharamani; Lakshmi Balasubramanian

    2015-09-01

    Machine learning techniques have been useful in almost every field of concern. Data mining, a branch of machine learning, is one of the most extensively used techniques. The ever-increasing demands in the field of medicine are being addressed by computational approaches in which Big Data analysis, image processing and data mining are a top priority. These techniques have been exploited in the domain of ophthalmology for better retinal fundus image analysis. Blood vessels, one of the most significant retinal anatomical structures, are analysed for the diagnosis of many diseases like retinopathy, occlusion and many other vision-threatening diseases. Vessel segmentation can also be a pre-processing step for the segmentation of other retinal structures like the optic disc, fovea, microaneurysms, etc. In this paper, blood vessel segmentation is attempted through image processing and data mining techniques. The retinal blood vessels were segmented through color space conversion and color channel extraction, image pre-processing, Gabor filtering, image post-processing, feature construction through application of principal component analysis, k-means clustering, first-level classification using the Naïve Bayes classification algorithm and second-level classification using C4.5 enhanced with bagging techniques. Association of every pixel with the feature vector necessitates Big Data analysis. The proposed methodology was evaluated on a publicly available database, STARE. The results reported 95.05% accuracy on the entire dataset; the accuracy was 95.20% on normal images and 94.89% on pathological images. A comparison of these results with existing methodologies is also reported. This methodology can help ophthalmologists in better and faster analysis and hence earlier treatment of patients.
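
    A scikit-learn flavoured sketch of the two-level classification stage described above is given below: PCA-reduced pixel features, a k-means cluster label as an extra feature, a Naïve Bayes first level and a bagged decision-tree second level (standing in for C4.5 with bagging). Feature extraction, parameter values and the rule for combining the two levels are all assumptions, not the authors' exact configuration.

```python
# Sketch of the two-level pixel classification: PCA features + k-means label,
# Naive Bayes (level 1) and bagged decision trees (level 2, approximating C4.5
# with bagging). Parameters and the combination rule are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def classify_pixels(features, labels, new_features):
    pca = PCA(n_components=5).fit(features)
    x_train, x_new = pca.transform(features), pca.transform(new_features)

    # unsupervised cluster label appended as an extra per-pixel feature
    km = KMeans(n_clusters=2, n_init=10).fit(x_train)
    x_train = np.hstack([x_train, km.labels_[:, None]])
    x_new = np.hstack([x_new, km.predict(x_new)[:, None]])

    level1 = GaussianNB().fit(x_train, labels)
    level2 = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10).fit(x_train, labels)

    # assumed combination rule: call a pixel "vessel" (label 1) only if both levels agree
    return (level1.predict(x_new) == 1) & (level2.predict(x_new) == 1)
```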

  8. Data processing in a small transit company using an automatic passenger counter

    OpenAIRE

    Avadhani, Umesh D.

    1986-01-01

    This thesis describes the work done in the second stage of the implementation of the Automatic Passenger Counter (APC) system at the Roanoke Valley - Metro Transit Company. This second stage deals with the preparation of a few reports and plots that would help the transit managers in efficiently managing the transit system. The reports and plots give an evaluation of the system and service operations by which the decision makers can support their decisions. For an efficie...

  9. To the problem of topological optimization of data processing and transmission networks in development of the automatic control system ''Atom''

    International Nuclear Information System (INIS)

    Some optimization problems occurring in the development of the automatic control system (ACS) of a commercial amalgamation (ACS-ATOM) are considered, in particular assessments of the economically optimal structure for the location of computation centres and data transmission facilities.

  10. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2011-09-01

    Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them can cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve the generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves the generalization performance of SVDD. The improvement is shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.
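
    The quasiconformal transformation at the heart of QK-SVDD rescales a base kernel as k̃(x, x′) = c(x)·c(x′)·k(x, x′) with a positive weighting function c. The sketch below builds such a kernel matrix; the Gaussian base kernel and the particular choice of c are illustrative assumptions, not the paper's settings.

```python
# Sketch of a quasiconformal kernel: k~(x, x') = c(x) * c(x') * k(x, x').
# The RBF base kernel and the weighting function c are illustrative choices.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def quasiconformal_kernel(X, Y, c_func, gamma=1.0):
    return c_func(X)[:, None] * c_func(Y)[None, :] * rbf_kernel(X, Y, gamma)

def make_c(anchors, width=1.0):
    """Example weighting: emphasise points near an assumed set of anchor points."""
    anchors = np.asarray(anchors, float)
    def c(X):
        d2 = ((np.asarray(X, float)[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
        return 1.0 + np.exp(-d2 / width).sum(axis=1)
    return c
```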

  11. AUTORED - the JADE automatic data reduction system

    International Nuclear Information System (INIS)

    The design and implementation of and experience with an automatic data processing system for the reduction of data from the JADE experiment at DESY is described. The central elements are a database and a job submitter which combine powerfully to minimise the need for manual intervention. (author)

  12. Software for the FODS installation automatic control, data acquisition and processing

    International Nuclear Information System (INIS)

    Software for the focusing two-arm spectrometer designed for the study of particle production processes with large transverse momenta is described. The interaction of the programs, the flow of information files, and the special software for operation with an intensive particle beam are considered in detail. The organization of beam-characteristics monitoring and of drift chamber and electronics calibration is described. The basic principles used in designing the software of the two-computer complex comprising the HP-2100A and ES-1040 are presented. The software described is used in experiments on hadron production in proton-proton, proton-deuteron and proton-nucleus interactions, in the course of which azimuthal correlations of γ-quanta with large transverse momenta are measured.

  13. Automatic data acquisition and processing with the APEX goniometer, PDP 11/03 and IBM 370 computer, with application to surface texture studies of magnox fuel cladding

    International Nuclear Information System (INIS)

    This report is written in two parts and takes the form of a working manual enabling the user to operate the described system and make modifications to suit individual requirements. Part 1 describes the general procedures required for automatic data acquisition and processing incorporating the APEX goniometer and the PDP11/03 and IBM370/168 computers. A listing of the program illustrating the retrieval of data from the PDP11/03 floppy disc system is also included. Part 2 describes in detail the application of automatic data collection to texture studies of magnox fuel cladding. It is designed to enable the user to understand the method of data collection and the use of the computer facilities at Harwell, including obtaining a graphical display via the GHOST system. This section incorporates a listing of the display program and the results obtained from the magnox fuel cladding. (author)

  14. Overview of the SOFIA Data Processing System: A generalized system for manual and automatic data processing at the SOFIA Science Center

    CERN Document Server

    Shuping, R Y; Vacca, W D; Charcos-Llorens, M; Reach, W T; Alles, R; Clarke, M; Melchiorri, R; Radomski, J; Shenoy, S; Sandel, D; Omelian, E B

    2014-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is an airborne astronomical observatory comprised of a 2.5-meter telescope mounted in the aft section of a Boeing 747SP aircraft. During routine operations, several instruments will be available to the astronomical community including cameras and spectrographs in the near- to far-IR. Raw data obtained in-flight require a significant amount of processing to correct for background emission (from both the telescope and atmosphere), remove instrumental artifacts, correct for atmospheric absorption, and apply both wavelength and flux calibration. In general, this processing is highly specific to the instrument and telescope. In order to maximize the scientific output of the observatory, the SOFIA Science Center must provide these post-processed data sets to Guest Investigators in a timely manner. To meet this requirement, we have designed and built the SOFIA Data Processing System (DPS): an in-house set of tools and services that can be used in both auto...

  15. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to get an accurate and updated river network, automatically extracted as far as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally the production was launched. The key points of this work have been managing a big data environment (more than 160,000 LiDAR data files), the infrastructure to store (up to 40 TB of results and intermediate files) and process the data, using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; also important were the stability of the software (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatically extracted river network for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  16. A novel GIS-based tool for estimating present-day ocean reference depth using automatically processed gridded bathymetry data

    Science.gov (United States)

    Jurecka, Mirosława; Niedzielski, Tomasz; Migoń, Piotr

    2016-05-01

    This paper presents a new method for computing the present-day value of the reference depth (dr), which is essential input information for the assessment of past sea-level changes. The method applies a novel automatic geoprocessing tool developed using Python scripting and ArcGIS, and uses recent data on ocean floor depth, sediment thickness, and the age of the oceanic crust. The procedure is multi-step and involves creation of a bathymetric dataset corrected for sediment loading and isostasy, delineation of subduction zones, computation of perpendicular sea-floor profiles, and statistical analysis of these profiles versus crust age. The analysis of site-specific situations near subduction zones all around the world shows a number of instances where the depth of the oceanic crust stabilizes at a certain level before reaching the subduction zone, and this occurs at depths much shallower than proposed in previous approaches to the reference depth issue. An analysis of Jurassic and Cretaceous oceanic lithosphere shows that the most probable interval in which the reference depth occurs is 5300-5800 m. This interval is broadly consistent with dr estimates determined using the Global Depth-Heatflow model (GDH1), but is significantly shallower than dr estimates calculated on the basis of the Parsons-Sclater Model (PSM).
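
    For context, the two reference models named above are usually quoted in the following depth-age form; evaluating them for old (Jurassic-Cretaceous) lithosphere shows why a 5300-5800 m interval sits close to GDH1 and well above the PSM asymptote. The coefficients are the commonly cited published values, reproduced here as a reference sketch rather than as the paper's own fit.

```python
# Commonly quoted depth-age relations (depth in metres, age t in Ma):
# GDH1 (Stein & Stein, 1992) and PSM (Parsons & Sclater, 1977). Reference
# forms only; not the fit performed in the paper.
import math

def depth_gdh1(t):
    return 2600 + 365 * math.sqrt(t) if t <= 20 else 5651 - 2473 * math.exp(-0.0278 * t)

def depth_psm(t):
    return 2500 + 350 * math.sqrt(t) if t < 70 else 6400 - 3200 * math.exp(-t / 62.8)

# At 150 Ma: GDH1 ~ 5.6 km, PSM ~ 6.1 km, so a 5300-5800 m reference depth is
# broadly compatible with GDH1 but clearly shallower than the PSM asymptote.
print(round(depth_gdh1(150)), round(depth_psm(150)))
```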

  17. GAIT-ER-AID: An Expert System for Analysis of Gait with Automatic Intelligent Pre-Processing of Data

    OpenAIRE

    Bontrager, EL.; Perry, J.; Bogey, R.; Gronley, J.; Barnes, L.; Bekey, G.; Kim, JW

    1990-01-01

    This paper describes the architecture and applications of an expert system designed to identify the specific muscles responsible for a given dysfunctional gait pattern. The system consists of two parts: a data analysis expert system (DA/ES) and a gait pathology expert system (GP/ES). The DA/ES processes raw data on joint angles, foot-floor contact patterns and EMGs from relevant muscles and synthesizes them into a data frame for use by the GP/ES. Various aspects of the intelligent data pre-p...

  18. Implementation plan for automatic data processing equipment as part of the DYMAC advanced accountability system. Addendum 3 to applications of advanced accountability concepts in mixed oxide fabrication

    International Nuclear Information System (INIS)

    The Phase I study of the application of advanced accountability methods (DYMAC) in a uranium/plutonium mixed oxide facility was extended to include an implementation plan for the Automatic Data Processing System, as required by ERDA Manual Appendix 1830. The proposed system consists of a dual-control computer system with a minimum complement of peripheral equipment, which will be interfaced to the necessary measuring and display devices. Technical specifications for hardware and software system requirements are included, and cost estimates based on these specifications have been obtained

  19. Automatic and strategic processes in advertising effects

    DEFF Research Database (Denmark)

    Grunert, Klaus G.

    1996-01-01

    ..., and can easily be adapted to situational circumstances. Both the perception of advertising and the way advertising influences brand evaluation involve both processes. Automatic processes govern the recognition of advertising stimuli, the relevance decision which determines further higher-level processing...... are at variance with current notions about advertising effects. For example, the attention span problem will be relevant only for strategic processes, not for automatic processes, a certain amount of learning can occur with very little conscious effort, and advertising's effect on brand evaluation may be more stable...

  20. An Automatic Design Flow for Data Parallel and Pipelined Signal Processing Applications on Embedded Multiprocessor with NoC: Application to Cryptography

    Directory of Open Access Journals (Sweden)

    Omar Hammami

    2009-01-01

    Embedded system design is increasingly based on single-chip multiprocessors because of the high performance and flexibility requirements. Embedded multiprocessors on FPGA provide additional flexibility by allowing customization through the addition of hardware accelerators on the FPGA when a parallel software implementation does not provide the expected performance, while the overall multiprocessor architecture is kept for additional applications. This provides a transition to software-only parallel implementation while avoiding pure hardware implementation. An automatic design flow is proposed, well suited for data-flow signal processing exhibiting both pipelined and data-parallel modes of execution. Fork-Join model-based software parallelization is explored to find the best parallelization configuration. A C-based synthesis coprocessor is added to improve performance at the cost of more hardware resource usage. The Triple Data Encryption Standard (TDES) cryptographic algorithm on a 48-PE single-chip distributed-memory multiprocessor is selected as an application example of the flow.

  1. Control of automatic processes: A parallel distributed-processing model of the stroop effect. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1988-06-16

    A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.

  2. A CityGML extension for traffic-sign objects that guides the automatic processing of data collected using Mobile Mapping technology

    Science.gov (United States)

    Varela-González, M.; Riveiro, B.; Arias-Sánchez, P.; González-Jorge, H.; Martínez-Sánchez, J.

    2014-11-01

    The rapid evolution of integral schemes, accounting for geometric and semantic data, has been strongly motivated by the advances of the last decade in mobile laser scanning technology; automation in data processing has also recently influenced the expansion of the new model concepts. This paper reviews some important issues involved in the new paradigms of city 3D modelling: an interoperable schema for city 3D modelling (cityGML) and mobile mapping technology to provide the features composing the city model. This paper focuses on traffic signs, discussing their characterization using cityGML in order to ease the implementation of LiDAR technology in road management software, as well as analysing some limitations of the current technology in the task of automatic detection and classification.

  3. Semi-automatic process partitioning for parallel computation

    Science.gov (United States)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1988-01-01

    On current multiprocessor architectures one must carefully distribute data in memory in order to achieve high performance. Process partitioning is the operation of rewriting an algorithm as a collection of tasks, each operating primarily on its own portion of the data, to carry out the computation in parallel. A semi-automatic approach to process partitioning is considered in which the compiler, guided by advice from the user, automatically transforms programs into such an interacting task system. This approach is illustrated with a picture processing example written in BLAZE, which is transformed into a task system maximizing locality of memory reference.

  4. Automatic transformations in the inference process

    Energy Technology Data Exchange (ETDEWEB)

    Veroff, R. L.

    1980-07-01

    A technique for incorporating automatic transformations into processes such as the application of inference rules, subsumption, and demodulation provides a mechanism for improving search strategies for theorem proving problems arising from the field of program verification. The incorporation of automatic transformations into the inference process can alter the search space for a given problem, and is particularly useful for problems having broad rather than deep proofs. The technique can also be used to permit the generation of inferences that might otherwise be blocked and to build some commutativity or associativity into the unification process. Appropriate choice of transformations, and new literal clashing and unification algorithms for applying them, showed significant improvement on several real problems according to several distinct criteria. 22 references, 1 figure.

  5. Etna_NETVIS: A dedicated tool for automatically pre-processing high frequency data useful to extract geometrical parameters and track the evolution of the lava field

    Science.gov (United States)

    Marsella, Maria; Junior Valentino D'Aranno, Peppe; De Bonis, Roberto; Nardinocchi, Carla; Scifoni, Silvia; Scutti, Marianna; Sonnessa, Alberico; Wahbeh, Wissam; Biale, Emilio; Coltelli, Mauro; Pecora, Emilio; Prestifilippo, Michele; Proietti, Cristina

    2016-04-01

    In volcanic areas, where it can be difficult to gain access to the most critical zones for carrying out direct surveys, digital photogrammetry techniques are rarely employed, although in many cases they have proved to have remarkable potential, such as the possibility of following the evolution of volcanic processes (fracturing, vent positions, lava fields, lava front positions) and deformation processes (inflation/deflation and instability phenomena induced by volcanic activity). These results can be obtained, in the framework of standard surveillance activities, by acquiring multi-temporal datasets including Digital Orthophotos (DO) and Digital Elevation Models (DEM) to be used for implementing a quantitative and comparative analysis. The frequency of the surveys can be intensified during emergency phases to implement quasi real-time monitoring for supporting civil protection actions. The high level of accuracy and the short time required for image processing make digital photogrammetry a suitable tool for controlling the evolution of volcanic processes, which are usually characterized by large and rapid mass displacements. In order to optimize and extend the existing permanent ground NEtwork of Thermal and VIsible Sensors located on Mt. Etna (Etna_NETVIS) and to improve the observation of the most active areas, an approach for monitoring syn-eruptive surface processes was implemented. A dedicated tool for automatically pre-processing high frequency data, useful to extract geometrical parameters as well as to track the evolution of the lava field, was developed and tested both in simulated and real scenarios. The tool allows the extraction of a coherent multi-temporal dataset of orthophotos useful to evaluate the active flow area and to estimate effusion rates. Furthermore, Etna_NETVIS data were used to downscale the information derived from satellite data and/or to integrate the satellite datasets in case of incomplete coverage or missing acquisitions. This work was developed in the

  6. Data processing

    CERN Document Server

    Fry, T F

    2013-01-01

    Data Processing discusses the principles, practices, and associated tools in data processing. The book is comprised of 17 chapters that are organized into three parts. The first part covers the characteristics, systems, and methods of data processing. Part 2 deals with the data processing practice; this part discusses the data input, output, and storage. The last part discusses topics related to systems and software in data processing, which include checks and controls, computer language and programs, and program elements and structures. The text will be useful to practitioners of computer-rel

  7. Automatic recognition of lactating sow behaviors through depth image processing

    Science.gov (United States)

    Manual observation and classification of animal behaviors is laborious, time-consuming, and of limited ability to process large amounts of data. A computer vision-based system was developed that automatically recognizes sow behaviors (lying, sitting, standing, kneeling, feeding, drinking, and shiftin...

  8. The Caltech Tomography Database and Automatic Processing Pipeline.

    Science.gov (United States)

    Ding, H Jane; Oikonomou, Catherine M; Jensen, Grant J

    2015-11-01

    Here we describe the Caltech Tomography Database and automatic image processing pipeline, designed to process, store, display, and distribute electron tomographic data including tilt-series, sample information, data collection parameters, 3D reconstructions, correlated light microscope images, snapshots, segmentations, movies, and other associated files. Tilt-series are typically uploaded automatically during collection to a user's "Inbox" and processed automatically, but can also be entered and processed in batches via scripts or file-by-file through an internet interface. As with the video website YouTube, each tilt-series is represented on the browsing page with a link to the full record, a thumbnail image and a video icon that delivers a movie of the tomogram in a pop-out window. Annotation tools allow users to add notes and snapshots. The database is fully searchable, and sets of tilt-series can be selected and re-processed, edited, or downloaded to a personal workstation. The results of further processing and snapshots of key results can be recorded in the database, automatically linked to the appropriate tilt-series. While the database is password-protected for local browsing and searching, datasets can be made public and individual files can be shared with collaborators over the Internet. Together these tools facilitate high-throughput tomography work by both individuals and groups. PMID:26087141

  9. Processing medical reports to automatically populate ontologies.

    Science.gov (United States)

    Borrego, Luís; Quaresma, Paulo

    2013-01-01

    Medical reports are, quite often, written and stored in computer systems in a non-structured free text form. As a consequence, the information contained in these reports is not easily available and it is not possible to take it into account by medical decision support systems. We propose a methodology to automatically process and analyze medical reports, identifying concepts and their instances, and populating a new ontology. This methodology is based in natural language processing techniques using linguistic and statistical information. The proposed system was applied successfully to a set of medical reports from the Veterinary Hospital of the University of Évora. PMID:23388282

  10. Development and Testing of Geo-Processing Models for the Automatic Generation of Remediation Plan and Navigation Data to Use in Industrial Disaster Remediation

    Science.gov (United States)

    Lucas, G.; Lénárt, C.; Solymosi, J.

    2015-08-01

    This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consists of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan with the fewest clean-up parcels, taking that to be the optimal spatial configuration. The second model drifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern with a sampling distance of a fifth of a parcel width and keeps the most optimal drifted version, again with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that identifying the optimal orientation can save work, time and money in remediation. The various tests demonstrated that the model gains efficiency when 1/ the individual features of the contaminated area have a pronounced orientation in their geometry (the features are long), 2/ the size of the pollution extent features becomes closer to the size of the parcels (scale effect). The second model changes the feature number by only about 1%, so it is less interesting for planning …
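
    As an illustration of the orientation test in the first model, the following minimal Python sketch tiles a polygon's rotated bounding box with fixed-width parcels and keeps the orientation that needs the fewest of them. It is illustrative only: the parcel size, the test polygon and the helper names are assumptions, not taken from the paper, which works on real shapefile geometries.

      import math

      def rotate(points, angle_deg):
          """Rotate a list of (x, y) points about the origin."""
          a = math.radians(angle_deg)
          cos_a, sin_a = math.cos(a), math.sin(a)
          return [(x * cos_a - y * sin_a, x * sin_a + y * cos_a) for x, y in points]

      def parcel_count(points, angle_deg, parcel_w=10.0, parcel_h=25.0):
          """Number of parcel_w x parcel_h parcels needed to cover the axis-aligned
          bounding box of the polygon rotated by angle_deg (a crude stand-in for
          the real parcel-drawing step)."""
          xs, ys = zip(*rotate(points, angle_deg))
          return math.ceil((max(xs) - min(xs)) / parcel_w) * math.ceil((max(ys) - min(ys)) / parcel_h)

      # Hypothetical elongated contaminated area (coordinates in metres).
      area = [(0, 0), (120, 40), (140, 55), (20, 15)]
      counts = {angle: parcel_count(area, angle) for angle in (0, 45, 90, 135)}
      print(counts, "-> keep orientation", min(counts, key=counts.get), "degrees")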

  11. DEVELOPMENT AND TESTING OF GEO-PROCESSING MODELS FOR THE AUTOMATIC GENERATION OF REMEDIATION PLAN AND NAVIGATION DATA TO USE IN INDUSTRIAL DISASTER REMEDIATION

    Directory of Open Access Journals (Sweden)

    G. Lucas

    2015-08-01

    Full Text Available This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consists of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan with the fewest clean-up parcels, taking that to be the optimal spatial configuration. The second model drifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern with a sampling distance of a fifth of a parcel width and keeps the most optimal drifted version, again with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that identifying the optimal orientation can save work, time and money in remediation. The various tests demonstrated that the model gains efficiency when 1/ the individual features of the contaminated area have a pronounced orientation in their geometry (the features are long), 2/ the size of the pollution extent features becomes closer to the size of the parcels (scale effect). The second model changes the feature number by only about 1%, so it is less interesting for planning …

  12. XML-Based Automatic Test Data Generation

    OpenAIRE

    Halil Ibrahim Bulbul; Turgut Bakir

    2012-01-01

    Software engineering aims at increasing quality and reliability while decreasing the cost of the software. Testing is one of the most time-consuming phases of the software development lifecycle. Improvement in software testing results in decrease in cost and increase in quality of the software. Automation in software testing is one of the most popular ways of software cost reduction and reliability improvement. In our work we propose a system called XML-based automatic test data generation th...

  13. Automatic inline defect detection for a thin film transistor–liquid crystal display array process using locally linear embedding and support vector data description

    International Nuclear Information System (INIS)

    Defect detection plays a critical role in thin film transistor liquid crystal display (TFT-LCD) manufacturing. This paper proposes an inline defect-detection (IDD) system, by which defects can be automatically detected in a TFT array process. The IDD system is composed of three stages: the image preprocessing, the appearance-based classification and the decision-making stages. In the first stage, the pixels are segmented from an input image by the designed pixel segmentation method. The pixels are then sent to the appearance-based classification stage for defect and non-defect classification. Two novel methods are embedded in this stage: locally linear embedding (LLE) and the support vector data description (SVDD). LLE is able to substantially reduce the dimensions of the input pixels by manifold learning and SVDD is able to effectively discriminate the normal pixels from the defective ones with a hypersphere by one-class classification. After aggregating the classification results, the third stage outputs the final detection result. Experimental results, carried out on real images provided by an LCD manufacturer, show that the IDD system can not only achieve a high defect-detection rate of over 98%, but also accomplish the task of inline defect detection within 4 s for one input image.
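
    As a rough illustration of the two methods named in the classification stage, the sketch below chains scikit-learn's LocallyLinearEmbedding with a one-class SVM, used here as a readily available stand-in for SVDD; the pixel features are synthetic and every parameter value is an assumption rather than a setting from the paper.

      import numpy as np
      from sklearn.manifold import LocallyLinearEmbedding
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(0)
      normal_pixels = rng.normal(0.0, 1.0, size=(500, 64))            # synthetic "normal" pixel features
      test_pixels = np.vstack([rng.normal(0.0, 1.0, size=(20, 64)),
                               rng.normal(4.0, 1.0, size=(5, 64))])   # last 5 rows mimic defects

      # Stage 1: reduce dimensionality by manifold learning (LLE).
      lle = LocallyLinearEmbedding(n_neighbors=10, n_components=5, random_state=0)
      train_low = lle.fit_transform(normal_pixels)
      test_low = lle.transform(test_pixels)

      # Stage 2: one-class boundary around normal pixels (SVDD-like hypersphere).
      boundary = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(train_low)
      labels = boundary.predict(test_low)        # +1 = normal, -1 = defect candidate
      print("defect candidates:", int((labels == -1).sum()))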

  14. Control of automatic processes: A parallel distributed-processing account of the Stroop effect. Technical report

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J.D.; Dunbar, K.; McClelland, J.L.

    1989-11-22

    A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a processing pathway and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning. This was accomplished by combining the cascade mechanism described by McClelland (1979) with the back propagation learning algorithm (Rumelhart, Hinton, Williams, 1986). The model is able to simulate performance in the standard Stroop task, as well as aspects of performance in variants of this task which manipulate SOA, response set, and degree of practice. In the discussion we contrast our model with other models, and indicate how it relates to many of the central issues in the literature on attention, automaticity, and interference.
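
    The following toy calculation is not the authors' simulation, but it illustrates the central claim that interference falls out of relative pathway strength: with an invented stronger word-reading pathway, an incongruent word slows colour naming far more than an incongruent colour slows word reading.

      import math

      def cycles_to_respond(task_strength, distractor_strength, congruent,
                            distractor_leak=0.3, rate=0.05, threshold=1.0):
          """Cycles until the attended response unit reaches threshold; the unattended
          pathway leaks in, helping when congruent and hurting when incongruent."""
          sign = 1.0 if congruent else -1.0
          net = task_strength + sign * distractor_leak * distractor_strength
          return math.inf if net <= 0 else math.ceil(threshold / (rate * net))

      WORD, COLOUR = 2.0, 1.0   # word-reading pathway assumed stronger (more practice)
      print("colour naming, congruent  :", cycles_to_respond(COLOUR, WORD, True))    # fast
      print("colour naming, incongruent:", cycles_to_respond(COLOUR, WORD, False))   # slow (Stroop interference)
      print("word reading,  incongruent:", cycles_to_respond(WORD, COLOUR, False))   # barely slowed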

  15. Modeling of a data exchange process in the Automatic Process Control System on the base of the universal SCADA-system

    Science.gov (United States)

    Topolskiy, D.; Topolskiy, N.; Solomin, E.; Topolskaya, I.

    2016-04-01

    In the present paper the authors discuss some ways of solving energy saving problems in mechanical engineering. In the authors' opinion, one way of solving this problem is the integrated modernization of the power engineering facilities of mechanical engineering companies, aimed at increasing the efficiency of energy supply control and improving the commercial accounting of electric energy. The authors have proposed the use of digital current and voltage transformers for these purposes. To check the compliance of this equipment with the IEC 61850 International Standard, we have built a mathematical model of the data exchange process between measuring transformers and a universal SCADA-system. The modeling results show that the discussed equipment meets the requirements of the Standard and that using the universal SCADA-system for these purposes is preferable and economically reasonable. In the modeling the authors have used the following software: MasterScada, Master OPC_DI_61850, OPNET.

  16. Automatic generation of combinatorial test data

    CERN Document Server

    Zhang, Jian; Ma, Feifei

    2014-01-01

    This book reviews the state-of-the-art in combinatorial testing, with particular emphasis on the automatic generation of test data. It describes the most commonly used approaches in this area - including algebraic construction, greedy methods, evolutionary computation, constraint solving and optimization - and explains major algorithms with examples. In addition, the book lists a number of test generation tools, as well as benchmarks and applications. Addressing a multidisciplinary topic, it will be of particular interest to researchers and professionals in the areas of software testing, combi…

  17. Data processing

    International Nuclear Information System (INIS)

    The 1988 progress report of the Data Processing laboratory (Polytechnic School, France) is presented. The laboratory's research fields are: semantics, testing and semantic analysis of code, formal calculus, software applications, algorithms, neural networks and VLSI (Very Large Scale Integration). The investigations concerning polynomial rings are performed by means of the standard basis approach. Research topics include Pascal codes, parallel processing, the combinatorial, statistical and asymptotic properties of fundamental data processing tools, signal processing and pattern recognition. The published papers, the congress communications and the theses are also included

  18. Data mining of geospatial data: combining visual and automatic methods

    OpenAIRE

    Demšar, Urška

    2006-01-01

    Most of the largest databases currently available have a strong geospatial component and contain potentially useful information which might be of value. The discipline concerned with extracting this information and knowledge is data mining. Knowledge discovery is performed by applying automatic algorithms which recognise patterns in the data. Classical data mining algorithms assume that data are independently generated and identically distributed. Geospatial data are multidimensional, spatial...

  19. The automatic calibration of Korean VLBI Network data

    CERN Document Server

    Hodgson, Jeffrey A; Zhao, Guang-Yao; Algaba, Juan-Carlos; Yun, Youngjoo; Jung, Taehyun; Byun, Do-Young

    2016-01-01

    The calibration of Very Long Baseline Interferometry (VLBI) data has long been a time consuming process. The Korean VLBI Network (KVN) is a simple array consisting of three identical antennas. Because four frequencies are observed simultaneously, phase solutions can be transferred from lower frequencies to higher frequencies in order to improve phase coherence and hence sensitivity at higher frequencies. Due to the homogeneous nature of the array, the KVN is also well suited for automatic calibration. In this paper we describe the automatic calibration of single-polarisation KVN data using the KVN Pipeline and compare the results against VLBI data that has been manually reduced. We find that the pipelined data using phase transfer produces better results than a manually reduced dataset not using the phase transfer. Additionally we compared the pipeline results with a manually reduced phase-transferred dataset and found the results to be identical.

  20. The Automatic Calibration of Korean VLBI Network Data

    Science.gov (United States)

    Hodgson, Jeffrey A.; Lee, Sang-Sung; Zhao, Guang-Yao; Algaba, Juan-Carlos; Yun, Youngjoo; Jung, Taehyun; Byun, Do-Young

    2016-08-01

    The calibration of Very Long Baseline Interferometry (VLBI) data has long been a time consuming process. The Korean VLBI Network (KVN) is a simple array consisting of three identical antennas. Because four frequencies are observed simultaneously, phase solutions can be transferred from lower frequencies to higher frequencies in order to improve phase coherence and hence sensitivity at higher frequencies. Due to the homogeneous nature of the array, the KVN is also well suited for automatic calibration. In this paper we describe the automatic calibration of single-polarisation KVN data using the KVN Pipeline and compare the results against VLBI data that has been manually reduced. We find that the pipelined data using phase transfer produces better results than a manually reduced dataset not using the phase transfer. Additionally we compared the pipeline results with a manually reduced phase-transferred dataset and found the results to be identical.

  1. Automatic Classification of Seafloor Image Data by Geospatial Texture Descriptors

    OpenAIRE

    Lüdtke, Andree

    2014-01-01

    A novel approach for automatic context-sensitive classification of spatially distributed image data is introduced. The proposed method targets applications of seafloor habitat mapping but is generally not limited to this domain or use case. Spatial context information is incorporated in a two-stage classification process, where in the second step a new descriptor for patterns of feature class occurrence according to a generically defined classification scheme is applied. The method is based o...

  2. Semi-automatic film processing unit

    International Nuclear Information System (INIS)

    The design concept applied in the development of a semi-automatic film processing unit needs creativity and user support in channelling the required information to select materials and an operation system that suit the design produced. Low cost and efficient operation are the challenges that need to be faced alongside fast technological advancement. In producing this processing unit, there are a few elements which need to be considered in order to produce a high quality image. Consistent movement and correct time coordination for developing and drying are a few elements which need to be controlled. Other elements which need serious attention are temperature, liquid density and the amount of time for the chemical liquids to react. Subsequent chemical reactions that take place will cause the liquid chemicals to age, and this will adversely affect the quality of the image produced. This unit is also equipped with a liquid chemical drainage system and a disposal chemical tank. This unit would be useful in GP clinics, especially in rural areas which practice a manual system for developing and require low operational cost. (Author)

  3. Spectral Curve Fitting for Automatic Hyperspectral Data Analysis

    CERN Document Server

    Brown, Adrian J

    2014-01-01

    Automatic discovery and curve fitting of absorption bands in hyperspectral data can enable the analyst to identify materials present in a scene by comparison with library spectra. This procedure is common in laboratory spectra, but is challenging for sparse hyperspectral data. A procedure for robust discovery of overlapping bands in hyperspectral data is described in this paper. The method is capable of automatically discovering and fitting symmetric absorption bands, can separate overlapping absorption bands in a stable manner, and has relatively low sensitivity to noise. A comparison with techniques already available in the literature is presented using simulated spectra. An application is demonstrated utilizing the shortwave infrared (2.0-2.5 micron or 5000-4000 cm-1) region. A small hyperspectral scene is processed to demonstrate the ability of the method to detect small shifts in absorption wavelength caused by varying white mica chemistry in a natural setting.
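
    A minimal curve-fitting sketch in this spirit is shown below; it fits a single Gaussian-shaped absorption band to a synthetic shortwave-infrared spectrum with scipy's generic curve_fit, so the band parameters, the noise level and the fitting routine are all assumptions and not the algorithm of the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def absorption(wl, depth, centre, width, continuum):
          """Flat continuum minus one Gaussian-shaped absorption band."""
          return continuum - depth * np.exp(-0.5 * ((wl - centre) / width) ** 2)

      wl = np.linspace(2.0, 2.5, 200)                        # microns, SWIR region
      rng = np.random.default_rng(1)
      spectrum = absorption(wl, 0.15, 2.20, 0.02, 0.9) + rng.normal(0, 0.005, wl.size)

      p0 = [0.1, 2.2, 0.05, 1.0]                             # rough initial guess
      params, _ = curve_fit(absorption, wl, spectrum, p0=p0)
      print("fitted band centre: %.4f micron, depth: %.3f" % (params[1], params[0]))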

  4. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    Science.gov (United States)

    Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.

    2016-04-01

    Aftershock sequences following very large earthquakes present enormous challenges to near-realtime generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched
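
    A heavily simplified sketch of the "strip associated phases before re-association" step is given below; the detection lists, the single-velocity travel-time prediction and the tolerance are all invented for illustration, whereas the real pipeline operates on array parametric data and proper travel-time models.

      # Drop detections that match an arrival predicted from a located aftershock,
      # so the next phase-association pass only sees unexplained picks.
      def predicted_arrival(origin_time, distance_km, velocity_km_s=8.0):
          return origin_time + distance_km / velocity_km_s

      def strip_associated(detections, aftershock_origins, distances_km, tol_s=5.0):
          """detections: {station: [pick times]}; aftershock_origins: [origin times];
          distances_km: {station: distance to the aftershock region}."""
          kept = {}
          for station, picks in detections.items():
              predictions = [predicted_arrival(t0, distances_km[station]) for t0 in aftershock_origins]
              kept[station] = [p for p in picks
                               if all(abs(p - pred) > tol_s for pred in predictions)]
          return kept

      detections = {"ARCES": [100.0, 412.3, 530.0], "MKAR": [95.0, 428.8, 610.0]}
      aftershock_origins = [350.0]                      # one located aftershock (origin time in s)
      distances_km = {"ARCES": 500.0, "MKAR": 630.0}
      print(strip_associated(detections, aftershock_origins, distances_km))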

  5. Automatic processing of dominance and submissiveness

    OpenAIRE

    Moors, Agnes; De Houwer, Jan

    2005-01-01

    We investigated whether people are able to detect in a relatively automatic manner the dominant or submissive status of persons engaged in social interactions. Using a variant of the affective Simon task (De Houwer & Eelen, 1998), we demonstrated that the verbal response DOMINANT or SUBMISSIVE was facilitated when it had to be made to a target person that was respectively dominant or submissive. These results provide new information about the automatic nature of appraisals and ...

  6. The Interplay between Automatic and Control Processes in Reading.

    Science.gov (United States)

    Walczyk, Jeffrey J.

    2000-01-01

    Reviews prominent reading theories in light of their accounts of how automatic and control processes combine to produce successful text comprehension, and the trade-offs between the two. Presents the Compensatory-Encoding Model of reading, which explicates how, when, and why automatic and control processes interact. Notes important educational…

  7. Automatic process control for the food industry: an introduction

    Science.gov (United States)

    In order to ensure food security in food manufacturing operations, automatic process control is desired. With the operation of automatic process control systems, the deviation of the controlled variables from the standards can be consistently perceived, adjusted, and minimized to improve the proce...

  8. Automatic image processing as a means of safeguarding nuclear material

    International Nuclear Information System (INIS)

    Problems involved in computerized analysis of pictures taken by automatic film or video cameras in the context of international safeguards implementation are described. They include technical ones as well as the need to establish objective criteria for assessing image information. In the near future automatic image processing systems will be useful in verifying the identity and integrity of IAEA seals. (author)

  9. Reliability and effectiveness of clickthrough data for automatic image annotation

    NARCIS (Netherlands)

    Tsikrika, T.; Diou, C.; De Vries, A.P.; Delopoulos, A.

    2010-01-01

    Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manual …

  10. Advancements in Big Data Processing

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for Distributed Computing and Grid-technologies. The emerging Big Data revolution drives new discoveries in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields by pushing Big Data limits enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprised of many jobs) became a unit of petascale data processing on the Grid. Splitting of a large data processing task into jobs enabled fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of the dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable Six Sigma produc...
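
    A minimal sketch of the automatic re-try idea described above is given below in generic Python; the job function, failure rate and retry limit are invented and stand in for the actual Grid production machinery.

      import random

      def run_job(job_id):
          """Stand-in for a Grid job that fails transiently about 30% of the time."""
          if random.random() < 0.3:
              raise RuntimeError("transient failure")
          return f"output of job {job_id}"

      def run_task(job_ids, max_retries=3):
          """Run every job of a task, re-trying transient failures automatically,
          analogous to re-sending dropped TCP/IP packets."""
          results = {}
          for job_id in job_ids:
              for attempt in range(1, max_retries + 1):
                  try:
                      results[job_id] = run_job(job_id)
                      break
                  except RuntimeError:
                      if attempt == max_retries:
                          results[job_id] = None   # give up; flag for bookkeeping
          return results

      print(run_task(range(5)))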

  11. Automatic neutron PSD transmission from a process computer to a timeshare system

    Energy Technology Data Exchange (ETDEWEB)

    Bullock, J.B.; Sides, W.H. Jr.

    1977-04-01

    A method for automatically telephoning, connecting, and transmitting neutron power-spectral density data from a CDC-1700 process control computer to a PDP-10 time-share system is described. Detailed program listings and block diagrams are included.

  12. Automatic neutron PSD transmission from a process computer to a timeshare system

    International Nuclear Information System (INIS)

    A method for automatically telephoning, connecting, and transmitting neutron power-spectral density data from a CDC-1700 process control computer to a PDP-10 time-share system is described. Detailed program listings and block diagrams are included

  13. RFID: A Revolution in Automatic Data Recognition

    Science.gov (United States)

    Deal, Walter F., III

    2004-01-01

    Radio frequency identification, or RFID, is a generic term for technologies that use radio waves to automatically identify people or objects. There are several methods of identification, but the most common is to store a serial number that identifies a person or object, and perhaps other information, on a microchip that is attached to an antenna…

  14. Automatic processing of nuclear emulsion in the modern experiments

    International Nuclear Information System (INIS)

    This article is devoted to methods for processing nuclear emulsions with an automatic scanning station. A method for reconstructing tracks located in the plane of the emulsion, based on the Hough Transform (HT) algorithm, is described here
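
    An illustrative numpy sketch of straight-track finding with the Hough Transform in the emulsion plane follows; the grain coordinates, accumulator binning and peak selection are assumptions chosen only to show the voting idea.

      import numpy as np

      def hough_lines(points, n_theta=180, n_rho=200):
          """Accumulate (theta, rho) votes for 2D points; peaks are line candidates."""
          thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
          rho_max = np.hypot(points[:, 0], points[:, 1]).max()
          accumulator = np.zeros((n_theta, n_rho), dtype=int)
          for x, y in points:
              rho = x * np.cos(thetas) + y * np.sin(thetas)           # one rho per theta
              bins = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
              accumulator[np.arange(n_theta), bins] += 1
          return accumulator, thetas

      # Grains along a straight track (y = 0.5x + 3) plus random fog.
      rng = np.random.default_rng(2)
      xs = np.linspace(0, 100, 40)
      track = np.column_stack([xs, 0.5 * xs + 3])
      fog = rng.uniform(0, 100, size=(60, 2))
      acc, thetas = hough_lines(np.vstack([track, fog]))
      i, j = np.unravel_index(acc.argmax(), acc.shape)
      print("best line: theta = %.2f rad, votes = %d" % (thetas[i], acc[i, j]))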

  15. Colorized linear CCD data acquisition system with automatic exposure control

    Science.gov (United States)

    Li, Xiaofan; Sui, Xiubao

    2014-11-01

    Colorized linear cameras deliver superb color fidelity at the fastest line rates in industrial inspection. Their RGB trilinear sensor eliminates image artifacts by placing a separate row of pixels for each color on a single sensor, and the advanced design minimizes the distance between rows to reduce image artifacts due to synchronization. In this paper, a high-speed colorized linear CCD data acquisition system was designed to take advantage of the linear CCD sensor μpd3728. The hardware and software design of the system, based on FPGA, is introduced and the design of the functional modules is described. The whole system is composed of a CCD driver module, a data buffering module, a data processing module and a computer interface module. The image data are transferred to the computer through a Camera Link interface. The system automatically adjusts the exposure time of the linear CCD using a new method: the integration time of the CCD can be controlled by the program, is adjusted automatically for different illumination intensities under the control of the FPGA, and responds quickly to brightness changes. The data acquisition system also offers programmable gains and offsets for each color. The quality of the image can be improved after calibration in the FPGA. The design has high expansibility and application value and can be used in many application situations.
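
    A simplified feedback loop of the kind described can be written as a few lines of Python; the target grey level, the proportional gain and the toy sensor model below are invented and merely stand in for the FPGA logic.

      def capture_line(integration_ms, illumination):
          """Toy sensor model: mean grey level grows with exposure, clipped at 255."""
          return min(255.0, illumination * integration_ms)

      def auto_exposure(illumination, target=128.0, integration_ms=5.0, gain=0.02, steps=20):
          """Proportional control of the integration time towards a target mean level."""
          for _ in range(steps):
              level = capture_line(integration_ms, illumination)
              integration_ms += gain * (target - level)     # respond to brightness changes
              integration_ms = max(0.1, integration_ms)     # keep the time physical
          return integration_ms, capture_line(integration_ms, illumination)

      for lux in (5.0, 20.0, 80.0):
          t, level = auto_exposure(lux)
          print(f"illumination {lux:5.1f}: integration {t:6.2f} ms -> mean level {level:5.1f}")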

  16. QCS: Driving automatic data analysis programs for TFTR

    International Nuclear Information System (INIS)

    QCS (Queue Control System) executes on the VAX Cluster, driving programs which provide automatic analysis of per-shot data for the TFTR experiment at PPPL. QCS works in conjunction with site-specific programs to provide these noninteractive user programs with shot numbers for which all necessary conditions have been satisfied to permit processing. A typical condition is the existence of a particular data file or set of files for a shot. The user provides a boolean expression of the conditions upon which a shot number should be entered into a private Queue. The user program requests a "ready-to-process" shot number through a call to a specially provided function. If the specified Queue is empty, the program hibernates until another shot number is available.
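
    The condition-driven queue can be sketched in a few lines of generic Python; the file-existence conditions, the path layout and the function names below are invented and are not the actual QCS interface.

      import os
      import queue
      import threading
      import time

      shot_queue = queue.Queue()

      def watcher(shots, required_files, poll_s=1.0):
          """Enqueue a shot number once every required data file exists for it."""
          pending = set(shots)
          while pending:
              for shot in sorted(pending):
                  if all(os.path.exists(path.format(shot=shot)) for path in required_files):
                      shot_queue.put(shot)
                      pending.discard(shot)
              time.sleep(poll_s)

      def get_ready_shot():
          """User-program side: block ('hibernate') until a ready shot is available."""
          return shot_queue.get()

      required = ["/data/magnetics_{shot}.dat", "/data/neutrons_{shot}.dat"]   # hypothetical paths
      threading.Thread(target=watcher, args=([101, 102, 103], required), daemon=True).start()
      # shot = get_ready_shot()   # would block until all files for some shot exist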

  17. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-01-01

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology. PMID:20375445
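
    A toy illustration of metadata-driven code generation in this spirit is shown below: a tiny invented model dictionary is turned into Python source for a class with type-checked accessors and used immediately. The real framework works from UML models, covers far richer validity rules, and also emits C and Java.

      MODEL = {"Molecule": {"name": "str", "residue_count": "int"}}   # invented mini data model

      def generate_class(class_name, attributes):
          """Emit Python source for a class with type-checked attribute setters."""
          lines = [f"class {class_name}:"]
          for attr, typ in attributes.items():
              lines += [
                  "    @property",
                  f"    def {attr}(self): return self._{attr}",
                  f"    @{attr}.setter",
                  f"    def {attr}(self, value):",
                  f"        if not isinstance(value, {typ}): raise TypeError('{attr} must be {typ}')",
                  f"        self._{attr} = value",
              ]
          return "\n".join(lines)

      namespace = {}
      exec(generate_class("Molecule", MODEL["Molecule"]), namespace)   # generated API is usable at once
      mol = namespace["Molecule"]()
      mol.name, mol.residue_count = "lysozyme", 129
      print(mol.name, mol.residue_count)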

  18. An Automat for the Semantic Processing of Structured Information

    OpenAIRE

    Leiva-Mederos, Amed; Senso, José A.; Domínguez-Velasco, Sandor; Hípola, Pedro

    2012-01-01

    Using the database of the PuertoTerm project, an indexing system based on the cognitive model of Brigitte Enders was built. By analyzing the cognitive strategies of three abstractors, we built an automat that serves to simulate human indexing processes. The automat allows the texts integrated in the system to be assessed, evaluated and grouped by means of the Bipartite Spectral Graph Partitioning algorithm, which also permits visualization of the terms and the documents. The system features a...

  19. Automatic data acquisition of anthropological measurements

    DEFF Research Database (Denmark)

    Lynnerup, N; Lynnerup, O

    1993-01-01

    A computer program in BASIC is presented which enables the input of measurement data from a caliper directly into specific records in a dBASE IV or PARADOX database. The program circumvents the tedious procedure of first recording measurement data manually and then entering the data into a computer. Thus much time can be saved and the risk of wrong data entry is lowered. The program was easy to use, and no significant problems were encountered. Necessary hardware is a standard IBM compatible desktop computer, Mitotoyu Digimatic (TM) calipers and a Mitotoyu Digimatic MUX-10 Multiplexer (TM).

  20. Robust indexing for automatic data collection

    Energy Technology Data Exchange (ETDEWEB)

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-12-09

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation.

  1. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. ((orig.))

  2. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append

  3. Automatic retrieval of bone fracture knowledge using natural language processing.

    Science.gov (United States)

    Do, Bao H; Wu, Andrew S; Maley, Joan; Biswal, Sandip

    2013-08-01

    Natural language processing (NLP) techniques to extract data from unstructured text into formal computer representations are valuable for creating robust, scalable methods to mine data in medical documents and radiology reports. As voice recognition (VR) becomes more prevalent in radiology practice, there is opportunity for implementing NLP in real time for decision-support applications such as context-aware information retrieval. For example, as the radiologist dictates a report, an NLP algorithm can extract concepts from the text and retrieve relevant classification or diagnosis criteria or calculate disease probability. NLP can work in parallel with VR to potentially facilitate evidence-based reporting (for example, automatically retrieving the Bosniak classification when the radiologist describes a kidney cyst). For these reasons, we developed and validated an NLP system which extracts fracture and anatomy concepts from unstructured text and retrieves relevant bone fracture knowledge. We implement our NLP in an HTML5 web application to demonstrate a proof-of-concept feedback NLP system which retrieves bone fracture knowledge in real time. PMID:23053906
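
    A bare-bones illustration of the extract-then-retrieve idea is given below using plain regular expressions and an invented two-entry knowledge table; a real system would use a full NLP pipeline and a curated knowledge base rather than these assumptions.

      import re

      # Tiny invented knowledge table keyed by (fracture pattern, anatomy).
      KNOWLEDGE = {
          ("comminuted", "tibia"): "Comminuted tibial fracture: consider CT for surgical planning.",
          ("transverse", "radius"): "Transverse radial fracture: assess displacement and angulation.",
      }
      FRACTURE = re.compile(r"\b(comminuted|transverse|spiral|oblique)\b.{0,40}?\bfracture\b", re.I)
      ANATOMY = re.compile(r"\b(tibia|fibula|radius|ulna|femur)\b", re.I)

      def retrieve(report_text):
          """Extract fracture/anatomy concepts from dictated text and look up knowledge."""
          fracture = FRACTURE.search(report_text)
          anatomy = ANATOMY.search(report_text)
          if fracture and anatomy:
              key = (fracture.group(1).lower(), anatomy.group(1).lower())
              return KNOWLEDGE.get(key, "No entry for %s of the %s." % key)
          return "No fracture concept recognised."

      print(retrieve("There is a comminuted fracture of the proximal tibia."))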

  4. Application of software technology to automatic test data analysis

    Science.gov (United States)

    Stagner, J. R.

    1991-01-01

    The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.

  5. From Automatic to Adaptive Data Acquisition

    DEFF Research Database (Denmark)

    Chang, Marcus

    2009-01-01

    Sensornets have been used for ecological monitoring for the past decade, yet the main driving force behind these deployments is still computer scientists. The denser sampling and added modalities offered by sensornets could drive these fields in new directions, but not until the domain scientists become familiar with sensornets and use them as any other instrument in their toolbox. We explore three different directions in which sensornets can become easier to deploy, collect data of higher quality, and offer more flexibility, and we postulate that sensornets should be instruments for domain scientists. As a tool to ease designing and deploying sensornets, we developed a methodology to characterize mote performance and predict the resource consumption for applications on different platforms, without actually having to execute them. This enables easy comparison of different platforms. In order to reduce...

  6. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Wu, Kesheng; Prabhat,; Weber, Gunther H.; Ushizima, Daniela M.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and based on the information derived perform analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  7. An Automatic Development Process for Integrated Modular Avionics Software

    Directory of Open Access Journals (Sweden)

    Ying Wang

    2013-05-01

    Full Text Available With the ever-growing avionics functions, the modern avionics architecture is evolving from the traditional federated architecture to Integrated Modular Avionics (IMA). ARINC653 is a major industry standard that supports the partitioning concept introduced in IMA to achieve security isolation between avionics functions with different criticalities. To decrease the complexity and improve the reliability of the design and implementation of IMA-based avionics software, this paper proposes an automatic development process based on the Architecture Analysis & Design Language. An automatic model transformation approach from domain-specific models to platform-specific ARINC653 models and a safety-critical ARINC653-compliant code generation technology are respectively presented during this process. A simplified multi-task flight application is given as a case study, with preliminary experimental results, to show the validity of this process.

  8. Semiautomatic and Automatic Cooperative Inversion of Seismic and Magnetotelluric Data

    Science.gov (United States)

    Le, Cuong V. A.; Harris, Brett D.; Pethick, Andrew M.; Takam Takougang, Eric M.; Howe, Brendan

    2016-09-01

    Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity model and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for prior model 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic, electrical logs and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion for an iterative practical and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we hold confidence that significant and practical advances in this direction have been accomplished.

  9. The protein crystallography beamline BW6 at DORIS - automatic operation and high-throughput data collection

    International Nuclear Information System (INIS)

    The wiggler beamline BW6 at DORIS has been optimized for de-novo solution of protein structures on the basis of MAD phasing. Facilities for automatic data collection, rapid data transfer and storage, and online processing have been developed which provide adequate conditions for high-throughput applications, e.g., in structural genomics

  10. SimWorld – Automatic Generation of realistic Landscape models for Real Time Simulation Environments – a Remote Sensing and GIS-Data based Processing Chain

    OpenAIRE

    Sparwasser, Nils; Stöbe, Markus; Friedl, Hartmut; Krauß, Thomas; Meisner, Robert

    2007-01-01

    The interdisciplinary project “SimWorld” - initiated by the German Aerospace Center (DLR) - aims to improve and to facilitate the generation of virtual landscapes for driving simulators. It integrates the expertise of different research institutes working in the field of car simulation and remote sensing technology. SimWorld will provide detailed virtual copies of the real world derived from air- and satellite-borne remote sensing data, using automated geo-scientific analysis techniques for m...

  11. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan;

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent... experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were...

  12. Randomized algorithms in automatic control and data mining

    CERN Document Server

    Granichin, Oleg; Toledano-Kitai, Dvora

    2015-01-01

    In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces the readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book guarantee that the computational complexity of classical algorithms and the conservativeness of standard robust control techniques will be reduced. It is shown that when a problem requires "brute force" in selecting among options, algorithms based on random selection of alternatives offer good results with certain probability for a restricted time and significantly reduce the volume of operations.

  13. AUTOMATIC REGISTRATION OF MULTI-SOURCE DATA USING MUTUAL INFORMATION

    OpenAIRE

    E. G. Parmehr; Zhang, C.; C. S. Fraser

    2012-01-01

    Automatic image registration is a basic step in multi-sensor data integration in remote sensing and photogrammetric applications such as data fusion. The effectiveness of Mutual Information (MI) as a technique for automated multi-sensor image registration has previously been demonstrated for medical and remote sensing applications. In this paper, a new General Weighted MI (GWMI) approach that improves the robustness of MI to local maxima, particularly in the case of registering optical imager...
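
    For reference, the basic mutual-information similarity measure that such approaches build on can be computed from a joint grey-level histogram; the numpy sketch below uses two synthetic images and does not reproduce the weighting extension proposed in the paper.

      import numpy as np

      def mutual_information(img_a, img_b, bins=32):
          """MI of two equally sized images from their joint grey-level histogram."""
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nonzero = pxy > 0
          return float((pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])).sum())

      rng = np.random.default_rng(3)
      reference = rng.random((64, 64))
      aligned = 0.8 * reference + 0.2 * rng.random((64, 64))   # correlated view of the same scene
      shifted = np.roll(aligned, 8, axis=1)                    # misregistered version
      print("MI aligned   :", round(mutual_information(reference, aligned), 3))
      print("MI misaligned:", round(mutual_information(reference, shifted), 3))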

  14. The FAST-DATA System: Fully Automatic Stochastic Technology for Data Acquisition, Transmission, and Analysis

    International Nuclear Information System (INIS)

    The potential to automatically collect, classify, and report on stochastic data (signals with random, time-varying components) from power plants has long been discussed by utilities, government, industries, national laboratories and universities. It has become clear to all concerned that such signals often contain information about plant conditions which may provide the basis for increased plant availability through early detection and warning of developing malfunctions. Maintenance can then be scheduled at opportune times. Inopportune failures of major and minor power plant components are a major cause of down-time and detract significantly from availability of the plant. A complete system to realize automatic stochastic data processing has been conceptually designed. Development of the FAST-DATA system has been initiated through a program of periodic measurements performed on the vibration and loose parts monitoring system of the Trojan reactor (1130-MW(e)PWR) operated by Portland General Electric Company. The development plan for the system consists of a six-step procedure. The initial steps depend on a significant level of human involvement. In the course of development of the system, the routine duties of operators and analysts are gradually replaced by computerized automatic data handling procedures. In the final configuration, the operator and analysts are completely freed of routine chores by logical machinery. The results achieved to date from actual application of the proof-of-principle system are discussed. The early developmental phases have concentrated on system organization and examination of a representative data base. Preliminary results from the signature analysis program using Trojan data indicate that the performance specifications predicted for the FAST-DATA system are achievable in practice. (author)

  15. Fast Automatic Precision Tree Models from Terrestrial Laser Scanner Data

    Directory of Open Access Journals (Sweden)

    Mathias Disney

    2013-01-01

    Full Text Available This paper presents a new method for quickly and automatically constructing precision tree models from point clouds of the trunk and branches obtained by terrestrial laser scanning. The input of the method is a point cloud of a single tree scanned from multiple positions. The surface of the visible parts of the tree is robustly reconstructed by making a flexible cylinder model of the tree. The thorough quantitative model also records the topological branching structure. In this paper, every major step of the whole model reconstruction process, from the input to the finished model, is presented in detail. The model is constructed by a local approach in which the point cloud is covered with small sets corresponding to connected surface patches in the tree surface. The neighbor-relations and geometrical properties of these cover sets are used to reconstruct the details of the tree and, step by step, the whole tree. The point cloud and the sets are segmented into branches, after which the branches are modeled as collections of cylinders. From the model, the branching structure and size properties, such as volume and branch size distributions, for the whole tree or some of its parts, can be approximated. The approach is validated using both measured and modeled terrestrial laser scanner data from real trees and detailed 3D models. The results show that the method allows an easy extraction of various tree attributes from terrestrial or mobile laser scanning point clouds.
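
    To give a flavour of the cylinder-based modelling, the sketch below fits a circle (one cylinder cross-section) to a single horizontal slice of synthetic stem points by linear least squares; the slice selection, the cover-set construction and the branching logic of the paper are not reproduced.

      import numpy as np

      def fit_circle(points):
          """Least-squares circle fit using x^2 + y^2 = 2*a*x + 2*b*y + c."""
          x, y = points[:, 0], points[:, 1]
          A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
          a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
          return (a, b), np.sqrt(c + a**2 + b**2)

      # Synthetic laser returns from one slice of a trunk of radius 0.23 m centred at (1.7, 3.1).
      rng = np.random.default_rng(4)
      angles = rng.uniform(0, 2 * np.pi, 200)
      r = 0.23 + rng.normal(0, 0.005, 200)
      slice_pts = np.column_stack([1.7 + r * np.cos(angles), 3.1 + r * np.sin(angles)])
      centre, radius = fit_circle(slice_pts)
      print("centre: (%.3f, %.3f) m, radius: %.3f m" % (centre[0], centre[1], radius))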

  16. Automatic and controlled processing and the Broad Autism Phenotype.

    Science.gov (United States)

    Camodeca, Amy; Voelker, Sylvia

    2016-01-30

    Research related to verbal fluency in the Broad Autism Phenotype (BAP) is limited and dated, but generally suggests intact abilities in the context of weaknesses in other areas of executive function (Hughes et al., 1999; Wong et al., 2006; Delorme et al., 2007). Controlled processing, the generation of search strategies after initial, automated responses are exhausted (Spat, 2013), has yet to be investigated in the BAP, and may be evidenced in verbal fluency tasks. One hundred twenty-nine participants completed the Delis-Kaplan Executive Function System Verbal Fluency test (D-KEFS; Delis et al., 2001) and the Broad Autism Phenotype Questionnaire (BAPQ; Hurley et al., 2007). The BAP group (n=53) produced significantly fewer total words during the 2nd 15" interval compared to the Non-BAP (n=76) group. Partial correlations indicated similar relations between verbal fluency variables for each group. Regression analyses predicting 2nd 15" interval scores suggested differentiation between controlled and automatic processing skills in both groups. Results suggest adequate automatic processing, but slowed development of controlled processing strategies in the BAP, and provide evidence for similar underlying cognitive constructs for both groups. Controlled processing was predictive of Block Design score for Non-BAP participants, and was predictive of Pragmatic Language score on the BAPQ for BAP participants. These results are similar to past research related to strengths and weaknesses in the BAP, respectively, and suggest that controlled processing strategy use may be required in instances of weak lower-level skills. PMID:26652842

  17. Automatic high power RF processing system using PLC

    International Nuclear Information System (INIS)

    We have developed an automatic control system using a Programmable Logic Controller (PLC) for high power RF processing, which is used for the C-band (5712-MHz) accelerating structure and the klystron in the SPring-8 Compact SASE Source (SCSS) project. PLCs are widely used in industry and have many advantages: they are reliable, compact and low-cost. In addition, the PLC has recently become able to communicate with an upper-layer controller through a network. We use this system for the klystron RF power test. In this paper, we describe the configuration of the system and the details of the high power RF processing. (author)

  18. Towards automatic building of continuous and discrete process simulator

    International Nuclear Information System (INIS)

    The problem to be solved is the simulation of essentially continuous processes that nevertheless involve a limited number of events leading to discontinuities. The NEPTUNIX simulation package solves this problem in the following way: a description of the process model is made using a non-procedural language; the model is then analysed and, if it is found correct, NEPTUNIX automatically generates the corresponding simulator. This simulator is efficient and transportable. The model description and other compiler outputs build up complete documentation of the model, which is also fundamental for easy and efficient operation of the simulator

  19. Automatic Defect Detection in X-Ray Images Using Image Data Fusion

    Institute of Scientific and Technical Information of China (English)

    TIAN Yuan; DU Dong; CAI Guorui; WANG Li; ZHANG Hua

    2006-01-01

    Automatic defect detection in X-ray images is currently a focus of much research at home and abroad. The technology requires computerized image processing, image analysis, and pattern recognition. This paper describes an image processing method for automatic defect detection using image data fusion which synthesizes several methods including edge extraction, wave profile analyses, segmentation with dynamic threshold, and weld district extraction. Test results show that defects that induce an abrupt change over a predefined extent of the image intensity can be segmented regardless of the number, location, shape, or size. Thus, the method is more robust and practical than the current methods using only one method.

  20. Automatic Discovery of Non-Compositional Compounds in Parallel Data

    CERN Document Server

    Melamed, I D

    1997-01-01

    Automatic segmentation of text into minimal content-bearing units is an unsolved problem even for languages like English. Spaces between words offer an easy first approximation, but this approximation is not good enough for machine translation (MT), where many word sequences are not translated word-for-word. This paper presents an efficient automatic method for discovering sequences of words that are translated as a unit. The method proceeds by comparing pairs of statistical translation models induced from parallel texts in two languages. It can discover hundreds of non-compositional compounds on each iteration, and constructs longer compounds out of shorter ones. Objective evaluation on a simple machine translation task has shown the method's potential to improve the quality of MT output. The method makes few assumptions about the data, so it can be applied to parallel data other than parallel texts, such as word spellings and pronunciations.

  1. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    OpenAIRE

    Chuqing Cao; Ying Sun

    2014-01-01

    Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data...

  2. Automatic Generation of Thematically Focused Information Portals from Web Data

    OpenAIRE

    Sizov, Sergej

    2005-01-01

    Finding the desired information on the Web is often a hard and time-consuming task. This thesis presents the methodology of automatic generation of thematically focused portals from Web data. The key component of the proposed Web retrieval framework is the thematically focused Web crawler that is interested only in a specific, typically small, set of topics. The focused crawler uses classification methods for filtering of fetched documents and identifying most likely relevant Web source...

  3. Real-time wireless acquisition of process data

    OpenAIRE

    Zhang, Ye

    2011-01-01

    This study discusses a novel method called automatic process measurement, which is based on the idea of mining process data from workflow logs. We improve the process mining technique by using Bluetooth wireless technology to do real-time acquisition of process data. The automatic measurement system is capable of collecting process data on elderly people's daily activities as well as nursing personnel's behavior in open healthcare. Similarly, retail and logistics processes can be measured wi...

  4. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    International Nuclear Information System (INIS)

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same
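
    A toy sketch of the general impute-then-classify idea is given below with scikit-learn's IterativeImputer and a random forest; this is a generic stand-in with synthetic features and labels, not the Bayesian-network model of the paper.

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables the import below)
      from sklearn.impute import IterativeImputer
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(5)
      X = rng.normal(size=(300, 6))                    # synthetic "light-curve features"
      y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # synthetic class labels
      X_missing = X.copy()
      X_missing[rng.random(X.shape) < 0.2] = np.nan    # 20% of entries missing, as in a sparse catalog

      X_filled = IterativeImputer(max_iter=10, random_state=0).fit_transform(X_missing)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_filled[:200], y[:200])
      print("held-out accuracy with imputed features:", clf.score(X_filled[200:], y[200:]))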

  5. Automatic Identification And Data Collection Via Barcode Laser Scanning.

    Science.gov (United States)

    Jacobeus, Michel

    1986-07-01

    How to earn over 100 million a year by investing 40 million? No, this is not the latest Wall Street "tip" but the cost savings obtained by the U.S. Department of Defense. Supermarkets claim 2% savings on annual turnover! Automotive companies report millions of dollars saved! These are not daydreams, but tangible results measured by users after implementing Automatic Identification and Data Collection systems based on bar codes. To paraphrase the famous sentence "I think, thus I am", with AI/ADC systems "You know, thus you are". Indeed, in today's world, immediate, accurate and precise information is a vital management need for companies' growth and survival. AI/ADC techniques fulfill these objectives by supplying the right information automatically and without any delay or alteration.

  6. DMET-Analyzer: automatic analysis of Affymetrix DMET Data

    Directory of Open Access Journals (Sweden)

    Guzzi Pietro

    2012-10-01

    Full Text Available Abstract Background Clinical Bioinformatics is currently growing and is based on the integration of clinical and omics data aiming at the development of personalized medicine. Thus the introduction of novel technologies able to investigate the relationship among clinical states and biological machineries may help the development of this field. For instance, the Affymetrix DMET platform (drug metabolism enzymes and transporters) is able to study the relationship between variation in patient genomes and drug metabolism, detecting SNPs (Single Nucleotide Polymorphisms) on genes related to drug metabolism. This may allow, for instance, finding genetic variants in patients who present different drug responses, in pharmacogenomics and clinical studies. Despite this, there is currently a lack of open-source algorithms and tools for the analysis of DMET data. Existing software tools for DMET data generally allow only the preprocessing of binary data (e.g. the DMET-Console provided by Affymetrix) and simple data analysis operations, but do not allow testing the association of the presence of SNPs with the response to drugs. Results We developed DMET-Analyzer, a tool for the automatic association analysis between variation in patient genomes and the clinical conditions of patients, i.e. their different responses to drugs. The proposed system allows: (i) automating the workflow of analysis of DMET-SNP data, avoiding the use of multiple tools; (ii) the automatic annotation of DMET-SNP data and the search in existing databases of SNPs (e.g. dbSNP); (iii) the association of SNPs with pathways through a search in PharmGKB, a major knowledge base for pharmacogenomic studies. DMET-Analyzer has a simple graphical user interface that allows users (doctors/biologists) to upload and analyse DMET files produced by the Affymetrix DMET-Console in an interactive way. The effectiveness and ease of use of DMET-Analyzer is demonstrated through different

  7. ATLASWatchMan, a tool for automatized data analysis

    International Nuclear Information System (INIS)

    The ATLAS detector will soon start to take data, and many New Physics phenomena are expected. The ATLASWatchMan package has been developed following the principles of CASE (Computer Aided Software Engineering), and it helps the user set up an analysis by automatically generating the actual analysis code and data files from user settings. ATLASWatchMan provides a light and transparent framework to plug in user-defined cuts and algorithms for as many channels as the user wants, running the analysis both locally and on the Grid. Examples of analyses run with the package using the latest release of the ATLAS software are shown

  8. Automatic reconstruction of a bacterial regulatory network using Natural Language Processing

    OpenAIRE

    Collado-Vides Julio; Martínez-Flores Irma; Salgado Heladia; Rodríguez-Penagos Carlos

    2007-01-01

    Abstract Background Manual curation of biological databases, an expensive and labor-intensive process, is essential for high quality integrated data. In this paper we report the implementation of a state-of-the-art Natural Language Processing system that creates computer-readable networks of regulatory interactions directly from different collections of abstracts and full-text papers. Our major aim is to understand how automatic annotation using Text-Mining techniques can complement manual cu...

  9. Automatic humidification system to support the assessment of food drying processes

    Science.gov (United States)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

    This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows creating and improving control strategies to supply drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory, where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build experimental curves.

  10. Automatic Boat Identification System for VIIRS Low Light Imaging Data

    Directory of Open Access Journals (Sweden)

    Christopher D. Elvidge

    2015-03-01

    Full Text Available The ability for satellite sensors to detect lit fishing boats has been known since the 1970s. However, the use of the observations has been limited by the lack of an automatic algorithm for reporting the location and brightness of offshore lighting features arising from boats. An examination of lit fishing boat features in Visible Infrared Imaging Radiometer Suite (VIIRS) day/night band (DNB) data indicates that the features are essentially spikes. We have developed a set of algorithms for automatic detection of spikes and characterization of the sharpness of spike features. A spike detection algorithm generates a list of candidate boat detections. A second algorithm measures the height of the spikes for the discard of ionospheric energetic particle detections and to rate boat detections as either strong or weak. A sharpness index is used to label boat detections that appear blurry due to the scattering of light by clouds. The candidate spikes are then filtered to remove features on land and gas flares. A validation study conducted using analyst selected boat detections found the automatic algorithm detected 99.3% of the reference pixel set. VIIRS boat detection data can provide fishery agencies with up-to-date information of fishing boat activity and changes in this activity in response to new regulations and enforcement regimes. The data can provide indications of illegal fishing activity in restricted areas and incursions across Exclusive Economic Zone (EEZ) boundaries. VIIRS boat detections occur widely offshore from East and Southeast Asia, South America and several other regions.
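
    The spike-detection idea lends itself to a compact sketch. The fragment below is a much-simplified stand-in for the algorithms described above: it flags pixels that rise sharply above their local background, rates them strong or weak from the spike height, and marks blurry detections with a crude sharpness ratio. The window size and thresholds are illustrative assumptions, not the authors' values, and the land/gas-flare filtering step is omitted.

    ```python
    import numpy as np

    def detect_spikes(dnb, height_thresh=5.0, sharp_thresh=2.0):
        """Flag pixels that stand out sharply above their local background.

        Simplified illustration of boat detection in day/night band imagery;
        thresholds and the 5x5 background window are assumptions.
        """
        dnb = np.asarray(dnb, dtype=float)
        rows, cols = dnb.shape
        detections = []
        for r in range(2, rows - 2):
            for c in range(2, cols - 2):
                centre = dnb[r, c]
                window = dnb[r - 2:r + 3, c - 2:c + 3].copy()
                window[2, 2] = np.nan
                background = np.nanmedian(window)          # local background level
                spike_height = centre - background         # basis of strong/weak rating
                neighbours = dnb[r - 1:r + 2, c - 1:c + 2].copy()
                neighbours[1, 1] = np.nan
                sharpness = centre / (np.nanmean(neighbours) + 1e-9)  # blur indicator
                if spike_height > height_thresh:
                    detections.append({
                        "row": r, "col": c,
                        "height": float(spike_height),
                        "strong": spike_height > 2 * height_thresh,
                        "blurry": sharpness < sharp_thresh,
                    })
        return detections
    ```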

  11. Automatically identifying scatter in fluorescence data using robust techniques

    DEFF Research Database (Denmark)

    Engelen, S.; Frosch, Stina; Hubert, M.

    2007-01-01

    complicates the analysis instead and contributes to model inadequacy. As such, scatter can be considered as an example of element-wise outliers. However, no straightforward method for identifying the scatter region can be found in the literature. In this paper an automatic scatter identification method is...... input data for three different PARAFAC methods. Firstly inserting missing values in the scatter regions are tested, secondly an interpolation of the scatter regions is performed and finally the scatter regions are down-weighted. These results show that the PARAFAC method to choose after scatter...

  12. Enhancement of the automatic ultrasonic signal processing system using digital technology

    International Nuclear Information System (INIS)

    The objective of this study is to develop an automatic ultrasonic signal processing system which can be used in the inspection equipment to assess the integrity of the reactor vessel, by enhancing the performance of the ultrasonic signal processing system. The main activities of this study were divided into three categories: the development of the circuits for generating the ultrasonic signal and receiving the signal from the inspection equipment, the development of the signal processing algorithm and the H/W of the data processing system, and the development of the specification for application programs and system S/W for the analysis and evaluation computer. The results of the main activities are as follows: 1) the design of the ultrasonic detector and the automatic ultrasonic signal processing system, based on an investigation of the state-of-the-art technology inside and outside the country; 2) the development of the H/W and S/W of the data processing system based on these results. In particular, the H/W of the data processing system, which has the advantages of both digital and analog control through real-time digital signal processing, was developed using a DSP which can process the digital signal in real time; in addition to the firmware of the data processing system for the peripherals, the test algorithm for the calibration specimen was also developed. The application programs and the system S/W of the analysis/evaluation computer were developed. The developed equipment was verified by a performance test. Based on the developed prototype of the automatic ultrasonic signal processing system, the localization of the overall ultrasonic inspection equipment for nuclear industries can be expected through further studies on establishing the H/W for real applications and developing the S/W specification of the analysis computer. (author)

  13. Enhancement of the automatic ultrasonic signal processing system using digital technology

    Energy Technology Data Exchange (ETDEWEB)

    Koo, In Soo; Park, H. Y.; Suh, Y. S.; Kim, D. Hoon; Huh, S.; Sung, S. H.; Jang, G. S.; Ryoo, S. G.; Choi, J. H.; Kim, Y. H.; Lee, J. C.; Kim, D. Hyun; Park, H. J.; Kim, Y. C.; Lee, J. P.; Park, C. H.; Kim, M. S

    1999-12-01

    The objective of this study is to develop an automatic ultrasonic signal processing system which can be used in the inspection equipment to assess the integrity of the reactor vessel, by enhancing the performance of the ultrasonic signal processing system. The main activities of this study were divided into three categories: the development of the circuits for generating the ultrasonic signal and receiving the signal from the inspection equipment, the development of the signal processing algorithm and the H/W of the data processing system, and the development of the specification for application programs and system S/W for the analysis and evaluation computer. The results of the main activities are as follows: 1) the design of the ultrasonic detector and the automatic ultrasonic signal processing system, based on an investigation of the state-of-the-art technology inside and outside the country; 2) the development of the H/W and S/W of the data processing system based on these results. In particular, the H/W of the data processing system, which has the advantages of both digital and analog control through real-time digital signal processing, was developed using a DSP which can process the digital signal in real time; in addition to the firmware of the data processing system for the peripherals, the test algorithm for the calibration specimen was also developed. The application programs and the system S/W of the analysis/evaluation computer were developed. The developed equipment was verified by a performance test. Based on the developed prototype of the automatic ultrasonic signal processing system, the localization of the overall ultrasonic inspection equipment for nuclear industries can be expected through further studies on establishing the H/W for real applications and developing the S/W specification of the analysis computer. (author)

  14. ASAP (Automatic Software for ASL Processing): A toolbox for processing Arterial Spin Labeling images.

    Science.gov (United States)

    Mato Abad, Virginia; García-Polo, Pablo; O'Daly, Owen; Hernández-Tamames, Juan Antonio; Zelaya, Fernando

    2016-04-01

    The method of Arterial Spin Labeling (ASL) has experienced a significant rise in its application to functional imaging, since it is the only technique capable of measuring blood perfusion in a truly non-invasive manner. Currently, there are no commercial packages for processing ASL data and there is no recognized standard for normalizing ASL data to a common frame of reference. This work describes a new Automated Software for ASL Processing (ASAP) that can automatically process several ASL datasets. ASAP includes functions for all stages of image pre-processing: quantification, skull-stripping, co-registration, partial volume correction and normalization. To assess the applicability and validity of the toolbox, this work shows its application in the study of hypoperfusion in a sample of healthy subjects at risk of progressing to Alzheimer's disease. ASAP requires limited user intervention, minimizing the possibility of random and systematic errors, and produces cerebral blood flow maps that are ready for statistical group analysis. The software is easy to operate and results in excellent quality of spatial normalization. The results found in this evaluation study are consistent with previous studies that find decreased perfusion in Alzheimer's patients in similar regions and demonstrate the applicability of ASAP. PMID:26612079

  15. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    Directory of Open Access Journals (Sweden)

    Chuqing Cao

    2014-09-01

    Full Text Available Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data. The proposed method combines road color feature with road GPS data to detect road centerline seed points. After global alignment of road GPS data, a novel road centerline extraction algorithm is developed to extract each individual road centerline in local regions. Through road connection, road centerline network is generated as the final output. Extensive experiments demonstrate that our proposed method can rapidly and accurately extract road centerline from remotely sensed imagery.

  16. Automatic Generation of OWL Ontology from XML Data Source

    CERN Document Server

    Yahia, Nora; Ahmed, AbdelWahab

    2012-01-01

    The eXtensible Markup Language (XML) can be used as data exchange format in different domains. It allows different parties to exchange data by providing common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for reasoning. Ontology can provide a semantic representation of domain knowledge which supports efficient reasoning and expressive power. One of the most popular ontology languages is the Web Ontology Language (OWL). It can represent domain knowledge using classes, properties, axioms and instances for the use in a distributed environment such as the World Wide Web. This paper presents a new method for automatic generation of OWL ontology from XML data sources.

  17. Automatic removal of outliers in hydrologic time series and quality control of rainfall data: processing a real-time database of the Local System for Flood Monitoring in Klodzko County, Poland

    Science.gov (United States)

    Mizinski, Bartlomiej; Niedzielski, Tomasz; Kryza, Maciej; Szymanowski, Mariusz

    2013-04-01

    Real-time hydrological forecasting requires the highest quality of both hydrologic and meteorological data collected in a given river basin. Large outliers may lead to inaccurate predictions, with substantial departures between observations and prognoses even in the short term. Although we need the correctness of both riverflow and rainfall data, they cannot be processed in the same way to produce a filtered output. Indeed, hydrologic time series at a given gauge can be interpolated in the time domain after suspicious values have been detected, provided that no outlier has been detected at the upstream sites. In the case of rainfall data, interpolation is not suitable, as we cannot verify potential outliers at a given site against data from other sites, especially in complex terrain. This is due to the fact that very local convective events may occur, leading to large rainfall peaks over a limited area. Hence, instead of interpolating data, we rather perform a flagging procedure that only ranks outliers according to the likelihood of occurrence. Following the aforementioned assumptions, we have developed a few modules that serve the purpose of a fully automated correction of a database that is updated in real time every 15 minutes, and the main objective of the work was to produce a high-quality database for hydrologic rainfall-runoff modeling and ensemble prediction. The database in question is available courtesy of the County Office in Kłodzko (SW Poland), the institution which owns and maintains the Local System for Flood Monitoring in Kłodzko County. The dedicated prediction system, known as HydroProg, is now being built at the University of Wrocław (Poland). Like the entire prediction system, the correction modules work automatically in real time and are developed in the R language. They are plugged in to a larger IT infrastructure. Hydrologic time series, which are water levels recorded every 15 minutes at 22 gauges located in Kłodzko County, are
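
    The flagging idea for rainfall (rank suspicious values rather than interpolate them) can be sketched briefly. The original modules are written in R and use their own likelihood-based ranking; the Python fragment below merely illustrates the concept with a robust median/MAD score over a sliding one-day window of 15-minute data.

    ```python
    import numpy as np

    def flag_rain_outliers(values, window=96):
        """Score how suspicious each 15-minute rain-gauge reading is.

        Illustrative stand-in for likelihood-based flagging: a robust z-score
        (median/MAD) over the preceding one-day window. Higher score = more
        suspicious; rainfall values are flagged, never interpolated.
        """
        values = np.asarray(values, dtype=float)
        scores = np.zeros_like(values)
        for i in range(len(values)):
            block = values[max(0, i - window):i + 1]
            med = np.median(block)
            mad = max(float(np.median(np.abs(block - med))), 1e-6)  # avoid divide-by-zero
            scores[i] = abs(values[i] - med) / (1.4826 * mad)
        return scores
    ```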

  18. Automatic Road Pavement Assessment with Image Processing: Review and Comparison

    Directory of Open Access Journals (Sweden)

    Sylvie Chambon

    2011-01-01

    Full Text Available In the field of noninvasive sensing techniques for civil infrastructures monitoring, this paper addresses the problem of crack detection, in the surface of the French national roads, by automatic analysis of optical images. The first contribution is a state of the art of the image-processing tools applied to civil engineering. The second contribution is about fine-defect detection in pavement surface. The approach is based on a multi-scale extraction and a Markovian segmentation. Third, an evaluation and comparison protocol which has been designed for evaluating this difficult task—the road pavement crack detection—is introduced. Finally, the proposed method is validated, analysed, and compared to a detection approach based on morphological tools.

  19. Controlled versus Automatic Processes: Which Is Dominant to Safety? The Moderating Effect of Inhibitory Control

    OpenAIRE

    Yaoshan Xu; Yongjuan Li; Weidong Ding; Fan Lu

    2014-01-01

    This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study...

  20. Sensitometric comparison of E and F dental radiographic films using manual and automatic processing systems

    Directory of Open Access Journals (Sweden)

    Dabaghi A.

    2008-04-01

    Full Text Available Background and Aim: Processing conditions affect the sensitometric properties of X-ray films. In this study, we aimed to evaluate the sensitometric characteristics of InSight (IP), a new F-speed film, in fresh and used processing solutions under dental office conditions, and compare them with Ektaspeed Plus (EP). Materials and Methods: In this experimental in vitro study, an aluminium step wedge was used to construct characteristic curves for InSight and Ektaspeed Plus films (Kodak Eastman, Rochester, USA). All films were processed in Champion solution (X-ray Iran, Tehran, Iran) both manually and automatically over a period of six days. Unexposed films of both types were processed manually and automatically to determine base plus fog density. Speed and film contrast were measured according to the ISO definition. Data were analyzed using one-way ANOVA and t-tests with P<0.05 as the level of significance. Results: IP was 20 to 22% faster than EP and proved to be an F-speed film when processed automatically and an E-F speed film when processed manually. It was also F-speed in fresh solution and E-speed in old solution. IP and EP contrasts were similar in automatic processing, but EP contrast was higher when processed manually. Both EP and IP films had standard values of base plus fog (<0.35), and B+F densities decreased in old solution. Conclusion: Based on the results of this study, InSight is an F-speed film with a speed at least 20% greater than Ektaspeed. In addition, it reduces patient exposure with no damage to image quality.

  1. Automatic Railway Power Line Extraction Using Mobile Laser Scanning Data

    Science.gov (United States)

    Zhang, Shanxin; Wang, Cheng; Yang, Zhuang; Chen, Yiping; Li, Jonathan

    2016-06-01

    Research on power line extraction technology using mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. Firstly, according to the spatial structure characteristics of the power line and the trajectory, the relevant data is segmented piecewise. Then, a self-adaptive space region growing method is used to extract the power lines parallel to the rails. Finally, PCA (Principal Components Analysis) combined with an information entropy method is used to judge whether a section of the power line is a junction or not, and which type of junction it belongs to. The least squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on a complicated railway point cloud acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.
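
    The PCA-plus-entropy test for telling straight power-line spans from junctions can be illustrated with a short numerical sketch. The eigenvalue-based linearity measure, the entropy of the normalized eigenvalues and any decision thresholds below are assumptions for illustration, not the authors' exact formulation.

    ```python
    import numpy as np

    def linearity_and_entropy(points):
        """Describe how line-like a candidate power-line section is.

        `points` is an (N, 3) array of laser points from one section. A straight
        wire gives linearity close to 1 and low entropy; a junction spreads the
        variance over several directions, raising the entropy.
        """
        pts = np.asarray(points, dtype=float)
        cov = np.cov(pts - pts.mean(axis=0), rowvar=False)
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]        # l1 >= l2 >= l3
        p = eigvals / eigvals.sum()
        linearity = (eigvals[0] - eigvals[1]) / eigvals[0]
        entropy = -np.sum(p * np.log(p + 1e-12))
        return linearity, entropy

    # e.g. treat the section as a junction when linearity < 0.9 or entropy > 0.5
    ```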

  2. An Engineering Process for Automatic Seismic Trip System Implementation

    International Nuclear Information System (INIS)

    To cope with potential seismic risks of catastrophic earthquakes, an Automatic Seismic Trip System (ASTS) is being developed for the operating nuclear power plants in Korea. The ASTS is designed to automatically trip the reactor upon occurrence of the Safe Shutdown Earthquake (SSE) at the plant site. By the end of 2012, the ASTS will have been installed at all twenty (20) nuclear power plants currently under operation in Korea as of 2010. The system must be designed and constructed so that it not only ensures a highly reliable reactor trip upon the SSE but also minimizes spurious trips during normal operation and maintenance. Since the ASTS is designed for the currently operating plants, the system design must consider different reactor types, including the Pressurized Water Reactor (PWR) and the Pressurized Heavy Water Reactor (PHWR), as well as easy operation and maintenance. This paper presents an engineering process for the design and implementation of the ASTS for the nuclear power plants. The presentation is mainly focused on the application of a systematic design process and a rigorous verification process for the various nuclear power plants. The systematic design process is based on the concepts of modularization and standardization. For the systematic design, the ASTS is functionally divided into three separate modules consisting of a sensor module, a trip logic module, and a trip actuation module. Rigorous verification is applied to the hardware qualification and the software verification. Environmental, seismic, and EMI qualifications are included in hardware verification. Strict software verification is also performed throughout the entire software development life cycle. Even though the ASTS is not designed as a safety-related Class 1E system per the requirement of ANS-51.1, the intent of safety-related design standards is applied, to the extent practical, to the hardware qualification and software verification. Application of this engineering

  3. Automatic Weissenberg data collection system for time-resolved protein crystallography

    CERN Document Server

    Sakabe, N; Higashi, T; Igarashi, N; Suzuki, M; Watanabe, N; Sasaki, K

    2001-01-01

    A totally new type of fully automatic Weissenberg data-collection system called 'Galaxy' was developed and installed at the Photon Factory. This automatic data collection system consists of a rotated-inclined focusing monochromator, a screenless Weissenberg-type camera, an image reader, an eraser, a cassette transportation mechanism, a control console, and a safety and high-speed computer network system linking the control console, data processing computers and data servers. The special characteristics of this system are a Weissenberg camera with a fully cylindrical cassette which can be rotated to exchange a frame, a maximum of 36 images recorded in an IP cassette, and a very high speed IP reader with five reading heads. Since the frame exchange time is only a few seconds, this system is applicable to time-resolved protein crystallography on a time scale of seconds or minutes.

  4. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammar and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the different segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  5. Automatic processing of unattended object features by functional connectivity

    Directory of Open Access Journals (Sweden)

    Katja Martina Mayer

    2013-05-01

    Full Text Available Observers can selectively attend to object features that are relevant for a task. However, unattended task-irrelevant features may still be processed and possibly integrated with the attended features. This study investigated the neural mechanisms for processing both task-relevant (attended) and task-irrelevant (unattended) object features. The Garner paradigm was adapted for functional magnetic resonance imaging (fMRI) to test whether specific brain areas process the conjunction of features or whether multiple interacting areas are involved in this form of feature integration. Observers attended to shape, colour, or non-rigid motion of novel objects while unattended features changed from trial to trial (change blocks) or remained constant (no-change blocks) during a given block. This block manipulation allowed us to measure the extent to which unattended features affected neural responses which would reflect the extent to which multiple object features are automatically processed. We did not find Garner interference at the behavioural level. However, we designed the experiment to equate performance across block types so that any fMRI results could not be due solely to differences in task difficulty between change and no-change blocks. Attention to specific features localised several areas known to be involved in object processing. No area showed larger responses on change blocks compared to no-change blocks. However, psychophysiological interaction analyses revealed that several functionally-localised areas showed significant positive interactions with areas in occipito-temporal and frontal areas that depended on block type. Overall, these findings suggest that both regional responses and functional connectivity are crucial for processing multi-featured objects.

  6. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2013-08-01

    Full Text Available Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of the speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realised as distributed, strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention on spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in the perifoveal area outside the visual focus of attention, while the subjects’ attention was concentrated on a concurrent non-linguistic visual dual task in the centre of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found a significant visual MMN, reported here for the first time for unattended lexical stimuli presented perifoveally. The data suggest early automatic lexical processing of visually presented language outside the focus of attention.

  7. An automatic integrated image segmentation, registration and change detection method for water-body extraction using HSR images and GIS data

    OpenAIRE

    H.G. Sui; Chen, G.; Hua, L.

    2013-01-01

    Automatic water-body extraction from remote sensing images is a challenging problem. Using GIS data to update and extract water bodies is an old but active topic. However, automatic registration and change detection between the two data sets often present difficulties. In this paper, a novel automatic water-body extraction method is proposed. The core idea is to integrate image segmentation, image registration and change detection with GIS data in a single processing chain. A new iterative segmentat...

  8. Data processing made simple

    CERN Document Server

    Wooldridge, Susan

    2013-01-01

    Data Processing: Made Simple, Second Edition presents discussions of a number of trends and developments in the world of commercial data processing. The book covers the rapid growth of micro- and mini-computers for both home and office use; word processing and the 'automated office'; the advent of distributed data processing; and the continued growth of database-oriented systems. The text also discusses modern digital computers; fundamental computer concepts; information and data processing requirements of commercial organizations; and the historical perspective of the computer industry. The

  9. Automatic delimitation of microwatershed using SRTM data of the NASA

    Directory of Open Access Journals (Sweden)

    Freddy Aníbal Jumbo Castillo

    2015-12-01

    Full Text Available The watershed, as the basic territorial unit for the planning and management of water resources, requires proper delimitation of its catchment or drainage area. In this situation, the lack of geographic information on the micro watersheds of the Casacay river hydrographic unit needed to be resolved. For this purpose the research was aimed at the automatic delimitation of micro watersheds using Geographic Information Systems (GIS) techniques and Shuttle Radar Topographic Mission (SRTM) data with 30 meters spatial resolution. The selected methodology was the Pfafstetter one, with which nine micro watersheds were obtained with their respective codification, allowing continuation of the watershed standardization adopted by Ecuador's Water Secretariat. With the results of the investigation, the watersheds will be updated with more detailed information, promoting the execution of tasks and activities related to the integrated management of the hydrographic unit studied

  10. Evolutionary synthesis of automatic classification on astroinformatic big data

    Science.gov (United States)

    Kojecky, Lumir; Zelinka, Ivan; Saloun, Petr

    2016-06-01

    This article describes initial experiments using a new approach to the automatic identification of Be and B[e] star spectra in large archives. With the enormous amount of these data it is no longer feasible to analyze them using classical approaches. We introduce an evolutionary synthesis of the classification by means of analytic programming, one of the methods of symbolic regression. By this method, we synthesize the most suitable mathematical formulas that approximate chosen samples of the stellar spectra. The category whose formula has the lowest difference compared to the particular spectrum is then selected as the result. The results show that classification of stellar spectra by means of analytic programming is able to identify different shapes of the spectra.

  11. Automatic Lameness Detection in a Milking Robot : Instrumentation, measurement software, algorithms for data analysis and a neural network model

    OpenAIRE

    Pastell, Matti

    2007-01-01

    The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis and a neural network model for lameness detection were developed. Automatic milking has become a common practice in dairy husbandry, and in the year 2006 about 4000 farms worldwide used over 6000 milking robots. There is a worldwide movement with the objective of fully automating every process from feedi...

  12. Automatic Extraction of Mangrove Vegetation from Optical Satellite Data

    Science.gov (United States)

    Agrawal, Mayank; Sushma Reddy, Devireddy; Prasad, Ram Chandra

    2016-06-01

    Mangroves, the intertidal halophytic vegetation, are one of the most significant and diverse ecosystems in the world. They protect the coast from sea erosion and other natural disasters like tsunamis and cyclones. In view of their increased destruction and degradation in the current scenario, mapping of this vegetation is a priority. Globally, researchers have mapped mangrove vegetation using visual interpretation methods, digital classification approaches, or a combination of both (hybrid) approaches using varied spatial and spectral data sets. In the recent past, techniques have been developed to extract this coastal vegetation automatically using varied algorithms. In the current study we tried to delineate mangrove vegetation using LISS III and Landsat 8 data sets for selected locations of the Andaman and Nicobar islands. Towards this we made an attempt to use a segmentation method, which characterizes the mangrove vegetation based on tone and texture, and a pixel-based classification method, where the mangroves are identified based on their pixel values. The results obtained from both approaches were validated using maps available for the selected region, and good delineation accuracy was obtained. The main focus of this paper is the simplicity of the methods and the availability of the data on which these methods are applied, as these data (Landsat) are readily available for many regions. Our methods are very flexible and can be applied to any region.

  13. Analysis on the Influence of Automatic Station Temperature Data on the Sequence Continuity of Historical Meteorological Data

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The research aimed to study the influence of automatic station data on the sequence continuity of historical meteorological data. [Method] Based on the temperature data which were measured by the automatic meteorological station and the corresponding artificial observation data during January-December in 2001, the monthly average, maximum and minimum temperatures in the automatic station were compared with the corresponding artificial observation temperature data in the parallel observation peri...

  14. AUTOMATIC AND SEMI-AUTOMATIC PROCESSES OF WORDSMITH 3.0 AS A TEXTBOOK EVALUATION INSTRUMENT: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Jayakaran Mukundan

    2006-01-01

    Full Text Available As checklists developed for textbook evaluation are questionable in terms of reliability and validity, other ways are being sought to bring about more systematic, efficient and objective evaluation instruments, which can provide greater insight into the strengths and weaknesses of textbooks. With this in mind, the researchers explored the abilities of WordSmith 3.0, a concordance software, in providing some insights into the structure of textbooks. This study will provide findings on data WordSmith 3.0 generates automatically and semi-automatically, and how this information could be used in the evaluation of textbooks.

  15. Process monitoring using three dimensional computed tomography and automatic image processing

    International Nuclear Information System (INIS)

    In this paper we present a fast three dimensional computed tomography system in combination with automatic 3d image processing, which gathers necessary information for production control of sugar beet seed. We outline the design of the inspection system and the three dimensional image processing. Within one hour the geometrical parameters of more than 1000 seeds (about 3-4 mm diameter) are measured with accuracy better than 0.1 mm. (authors)

  16. Automatic Registration of Multi-Source Data Using Mutual Information

    Science.gov (United States)

    Parmehr, E. G.; Zhang, C.; Fraser, C. S.

    2012-07-01

    Automatic image registration is a basic step in multi-sensor data integration in remote sensing and photogrammetric applications such as data fusion. The effectiveness of Mutual Information (MI) as a technique for automated multi-sensor image registration has previously been demonstrated for medical and remote sensing applications. In this paper, a new General Weighted MI (GWMI) approach that improves the robustness of MI to local maxima, particularly in the case of registering optical imagery and 3D point clouds, is presented. Two different methods, including a Gaussian Mixture Model (GMM) and Kernel Density Estimation, have been used to define the weight function of the joint probability, regardless of the modality of the data being registered. The Expectation Maximization method is then used to estimate the parameters of the GMM, and in order to reduce the cost of computation, a multi-resolution strategy has been used. The performance of the proposed GWMI method for the registration of aerial orthoimagery and LiDAR range and intensity information has been experimentally evaluated and the results obtained are presented.
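
    For readers unfamiliar with MI-based registration, the baseline (unweighted) mutual information that such a method maximises over trial transformations can be computed from a joint histogram, as sketched below. The GWMI weighting of the joint probability described in the abstract is not reproduced here; the bin count is an illustrative choice.

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Mutual information between two co-registered rasters of equal shape.

        A registration search would re-render/resample one data set under each
        trial shift or rotation and keep the transform that maximises this value.
        """
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = hist / hist.sum()                      # joint probability
        px = pxy.sum(axis=1, keepdims=True)          # marginals
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
    ```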

  17. AUTOMATIC REGISTRATION OF MULTI-SOURCE DATA USING MUTUAL INFORMATION

    Directory of Open Access Journals (Sweden)

    E. G. Parmehr

    2012-07-01

    Full Text Available Automatic image registration is a basic step in multi-sensor data integration in remote sensing and photogrammetric applications such as data fusion. The effectiveness of Mutual Information (MI) as a technique for automated multi-sensor image registration has previously been demonstrated for medical and remote sensing applications. In this paper, a new General Weighted MI (GWMI) approach that improves the robustness of MI to local maxima, particularly in the case of registering optical imagery and 3D point clouds, is presented. Two different methods, including a Gaussian Mixture Model (GMM) and Kernel Density Estimation, have been used to define the weight function of the joint probability, regardless of the modality of the data being registered. The Expectation Maximization method is then used to estimate the parameters of the GMM, and in order to reduce the cost of computation, a multi-resolution strategy has been used. The performance of the proposed GWMI method for the registration of aerial orthoimagery and LiDAR range and intensity information has been experimentally evaluated and the results obtained are presented.

  18. Automatic fault detection on BIPV systems without solar irradiation data

    CERN Document Server

    Leloux, Jonathan; Luna, Alberto; Desportes, Adrien

    2014-01-01

    BIPV systems are small PV generation units spread out over the territory, and whose characteristics are very diverse. This makes a cost-effective procedure for monitoring, fault detection, performance analysis, operation and maintenance difficult. As a result, many problems affecting BIPV systems go undetected. In order to carry out effective automatic fault detection procedures, we need a performance indicator that is reliable and that can be applied to many PV systems at a very low cost. The existing approaches for analyzing the performance of PV systems are often based on the Performance Ratio (PR), whose accuracy depends on good solar irradiation data, which in turn can be very difficult to obtain or cost-prohibitive for the BIPV owner. We present an alternative fault detection procedure based on a performance indicator that can be constructed on the sole basis of the energy production data measured at the BIPV systems. This procedure does not require the input of operating conditions data, such as solar ...
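
    A minimal sketch of fault detection from production data alone, assuming a fleet of BIPV systems reporting daily energy normalised by nominal power: each system is compared with the fleet median of the day, which stands in for the missing irradiation reference. The 7-day persistence window and the 0.8 ratio threshold are illustrative assumptions, not the authors' criteria.

    ```python
    import pandas as pd

    def flag_underperformers(daily_energy: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
        """Flag BIPV systems producing well below the fleet norm, without irradiance data.

        `daily_energy` has one row per day and one column per system, in kWh/kWp.
        Returns a boolean frame marking days of persistent under-performance.
        """
        fleet_reference = daily_energy.median(axis=1)            # per-day fleet norm
        ratio = daily_energy.div(fleet_reference, axis=0)        # each system vs. the fleet
        flags = ratio.rolling(window=7).median() < threshold     # persistent deficit only
        return flags
    ```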

  19. AUTOMATICALLY CONVERTING TABULAR DATA TO RDF: AN ONTOLOGICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Kumar Sharma

    2015-07-01

    Full Text Available Information residing in relational databases and delimited file systems is inadequate for reuse and sharing over the web. These file systems do not adhere to commonly set principles for maintaining data harmony. For these reasons, such resources have suffered from lack of uniformity, heterogeneity and redundancy throughout the web. Ontologies have been widely used for solving such problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files are treated as individual concepts and grouped into a particular domain, called the domain ontology. Furthermore, this domain ontology is used for capturing CSV data, represented in RDF format while retaining links among files or concepts. Datatype and object properties are automatically detected from header fields. This reduces the need for user involvement in generating mapping files. A detailed analysis has been performed on Baseball tabular data, and the result shows a rich set of semantic information.
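
    A minimal sketch of the tabular-to-RDF idea using the rdflib library: each CSV file becomes a concept, rows become instances, and header fields become properties with a naive datatype guess. The example.org namespace, the integer/string heuristic and the omission of object properties linking files are all simplifications relative to the approach described above.

    ```python
    import csv
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS, XSD

    EX = Namespace("http://example.org/baseball/")   # hypothetical namespace

    def csv_to_rdf(path, concept):
        """Convert one CSV file into an RDF graph for the given concept."""
        g = Graph()
        g.bind("ex", EX)
        cls = EX[concept]
        g.add((cls, RDF.type, RDFS.Class))
        with open(path, newline="") as f:
            for i, row in enumerate(csv.DictReader(f)):
                subject = EX[f"{concept}/{i}"]
                g.add((subject, RDF.type, cls))
                for header, value in row.items():
                    # naive datatype property detection from the cell value
                    datatype = XSD.integer if value.isdigit() else XSD.string
                    predicate = EX[header.strip().replace(" ", "_")]
                    g.add((subject, predicate, Literal(value, datatype=datatype)))
        return g

    # g = csv_to_rdf("players.csv", "Player"); print(g.serialize(format="turtle"))
    ```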

  20. Algorithm of automatic generation of technology process and process relations of automotive wiring harnesses

    Institute of Scientific and Technical Information of China (English)

    XU Benzhu; ZHU Jiman; LIU Xiaoping

    2012-01-01

    Identifying each process and the constraint relations among them from complex wiring harness drawings quickly and accurately is the basis for formulating process routes. Based on knowledge of automotive wiring harnesses and the characteristics of wiring harness components, we established a wiring harness graph model. We then investigated an algorithm for identifying technology processes automatically, and finally we describe the relationships between processes by introducing a constraint matrix, in order to lay a good foundation for harness process planning and production scheduling.

  1. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, meaning the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data ranges over data mining, data analysis and decision making, drawing statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our life better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in effective management of dynamic and large-scale data, and efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing data monitoring, data processing and decision-making in a realistic and effective way. Considering a big size of data, variety of data and frequent chan...

  2. Measuring Service Reliability Using Automatic Vehicle Location Data

    Directory of Open Access Journals (Sweden)

    Zhenliang Ma

    2014-01-01

    Full Text Available Bus service reliability has become a major concern for both operators and passengers. Buffer time measures are believed to be appropriate to approximate passengers’ experienced reliability in the context of departure planning. Two issues with regard to buffer time estimation are addressed, namely performance disaggregation and capturing passengers’ perspectives on reliability. A Gaussian mixture model based method is applied to disaggregate the performance data. Based on the mixture distribution, a reliability buffer time (RBT) measure is proposed from the passengers’ perspective. A set of expected reliability buffer time measures is developed for operators by using combinations of RBTs at different spatial-temporal levels. The average and latest trip duration measures, which can be used to choose a service mode and determine the departure time, are proposed for passengers. Using empirical data from the automatic vehicle location system in Brisbane, Australia, the existence of mixed service states is verified and the advantage of the mixture distribution model in fitting travel time profiles is demonstrated. Numerical experiments validate that the proposed reliability measure is capable of quantifying service reliability consistently, while the conventional ones may provide inconsistent results. Potential applications for operators and passengers are also illustrated, including reliability improvement and trip planning.
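
    A small sketch of the buffer-time idea, assuming trip durations extracted from AVL records: a Gaussian mixture is fitted to disaggregate service states, and the reliability buffer time is taken here as the 95th percentile minus the median of the fitted distribution. That percentile-minus-median convention is a common buffer-time definition and an assumption, not necessarily the paper's exact formulation.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def reliability_buffer_time(trip_times, n_states=2, percentile=95):
        """Extra minutes beyond the median trip time a passenger should budget.

        Fits a Gaussian mixture to the observed trip durations (to capture
        distinct service states) and reports percentile - median of the fit.
        """
        x = np.asarray(trip_times, dtype=float).reshape(-1, 1)
        gmm = GaussianMixture(n_components=n_states, random_state=0).fit(x)
        samples, _ = gmm.sample(20000)                 # draw from the fitted mixture
        return float(np.percentile(samples, percentile) - np.percentile(samples, 50))
    ```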

  3. An Automatic Number Plate Recognition System under Image Processing

    Directory of Open Access Journals (Sweden)

    Sarbjit Kaur

    2016-03-01

    Full Text Available An Automatic Number Plate Recognition system is an application of computer vision and image processing technology that takes a photograph of a vehicle as the input image and, by extracting the number plate from the whole vehicle image, displays the number plate information as text. The ANPR system mainly consists of four phases: acquisition of the vehicle image and pre-processing, extraction of the number plate area, character segmentation, and character recognition. The overall accuracy and efficiency of the whole ANPR system depend on the number plate extraction phase, as the character segmentation and character recognition phases also depend on its output. Further, the accuracy of the number plate extraction phase depends on the quality of the captured vehicle image: the higher the quality of the captured input image, the greater the chances of properly extracting the vehicle number plate area. The existing methods of ANPR work well for dark and bright/light images, but they do not work well for low-contrast, blurred and noisy images, and detection of the exact number plate area with the existing ANPR approach is not successful even after applying existing filtering and enhancement techniques to these types of images. Due to wrong extraction of the number plate area, character segmentation and character recognition also fail in this case with the existing method. To overcome these drawbacks, I propose an efficient approach for ANPR in which the input vehicle image is first pre-processed by iterative bilateral filtering and adaptive histogram equalization, and the number plate is extracted from the pre-processed vehicle image using morphological operations, image subtraction, image binarization/thresholding, Sobel vertical edge detection and bounding box analysis. Sometimes the extracted plate area also contains noise, bolts, frames, etc., so the extracted plate area is enhanced by using morphological operations to improve the quality of
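
    The plate-extraction chain named in the abstract maps naturally onto standard OpenCV calls. The sketch below strings together bilateral filtering, CLAHE-based adaptive histogram equalisation, vertical Sobel edges, morphological closing and bounding-box analysis; kernel sizes, thresholds and the aspect-ratio test are illustrative assumptions, and character segmentation/recognition are not shown.

    ```python
    import cv2

    def extract_plate_region(bgr):
        """Return a candidate number-plate crop from a vehicle photo, or None."""
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        smooth = cv2.bilateralFilter(gray, 9, 75, 75)               # edge-preserving denoise
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        enhanced = clahe.apply(smooth)                              # adaptive hist. equalisation
        edges = cv2.Sobel(enhanced, cv2.CV_8U, 1, 0, ksize=3)       # vertical edges
        _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
        closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # join character edges
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for cnt in sorted(contours, key=cv2.contourArea, reverse=True):
            x, y, w, h = cv2.boundingRect(cnt)
            if h > 0 and 2.0 < w / h < 6.0:                         # plate-like aspect ratio
                return bgr[y:y + h, x:x + w]
        return None
    ```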

  4. Automatic Inspection of Nuclear-Reactor Tubes During Production and Processing, Using Eddy-Current Methods

    International Nuclear Information System (INIS)

    The possibilities of automatic and semi-automatic inspection of tubes using eddy-current methods are described. The paper deals in particular with modern processes, compared to the use of other non-destructive methods. The essence of the paper is that the methods discussed are ideal for objective automatic inspection. Not only are the known methods described, but certain new methods and their application to the detection of flaws in reactor tubes are discussed. (author)

  5. Automatic Creation of Structural Models from Point Cloud Data: the Case of Masonry Structures

    Science.gov (United States)

    Riveiro, B.; Conde-Carnero, B.; González-Jorge, H.; Arias, P.; Caamaño, J. C.

    2015-08-01

    One of the fields where 3D modelling has an important role is the application of such 3D models to structural engineering purposes. The literature shows intense activity on the conversion of 3D point cloud data to detailed structural models, which has special relevance in masonry structures, where geometry plays a key role. In the work presented in this paper, color data (from the Intensity attribute) is used to automatically segment masonry structures with the aim of isolating masonry blocks and defining interfaces in an automatic manner using a 2.5D approach. An algorithm for the automatic processing of laser scanning data based on an improved marker-controlled watershed segmentation was proposed, and successful results were found. Geometric accuracy and resolution of the point cloud are constrained by the scanning instruments, giving accuracy levels reaching a few millimetres in the case of static instruments and a few centimetres in the case of mobile systems. In any case, the algorithm is not significantly sensitive to low quality images, because acceptable segmentation results were found in cases where blocks could not be visually segmented.
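
    A compact sketch of marker-controlled watershed segmentation with scikit-image, assuming the rasterised intensity makes mortar joints darker than the blocks; the Otsu threshold, the distance-transform markers and the 15-pixel minimum peak distance are illustrative choices rather than the improved marker strategy of the paper.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.filters import threshold_otsu
    from skimage.segmentation import watershed

    def segment_blocks(intensity):
        """Split a rasterised masonry face into labelled blocks."""
        stone = intensity > threshold_otsu(intensity)              # blocks vs. mortar joints
        distance = ndi.distance_transform_edt(stone)
        coords = peak_local_max(distance, min_distance=15, labels=stone)
        marker_mask = np.zeros(distance.shape, dtype=bool)
        marker_mask[tuple(coords.T)] = True
        markers, _ = ndi.label(marker_mask)                        # one seed per block interior
        return watershed(-distance, markers, mask=stone)           # flood from seeds, stop at joints
    ```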

  6. Historical Patterns Based on Automatically Extracted Data: the Case of Classical Composers

    DEFF Research Database (Denmark)

    Borowiecki, Karol; O'Hagan, John

    2012-01-01

    The purpose of this paper is to demonstrate the potential for generating interesting aggregate data on certain aspects of the lives of thousands of composers, and indeed other creative groups, from large on-line dictionaries and to be able to do so relatively quickly. A purpose-built java application that automatically extracts and processes information was developed to generate data on the birth location, occupations and importance (using word count methods) of over 12,000 composers over six centuries. Quantitative measures of the relative importance of different types of music and of the...

  7. Processing NOAA Spectroradiometric Data

    OpenAIRE

    Broenkow, William W.; Greene, Nancy, T.; Feinholz, Michael, E.

    1993-01-01

    This report outlines the NOAA spectroradiometer data processing system implemented by the MLML_DBASE programs. This is done by presenting the algorithms and graphs showing the effects of each step in the algorithms. [PDF contains 32 pages]

  8. Information Processing - Administrative Data Processing

    Science.gov (United States)

    Bubenko, Janis

    A three semester, 60-credit course package in the topic of Administrative Data Processing (ADP), offered in 1966 at Stockholm University (SU) and the Royal Institute of Technology (KTH) is described. The package had an information systems engineering orientation. The first semester focused on datalogical topics, while the second semester focused on the infological topics. The third semester aimed to deepen the students’ knowledge in different parts of ADP and at writing a bachelor thesis. The concluding section of this paper discusses various aspects of the department’s first course effort. The course package led to a concretisation of our discipline and gave our discipline an identity. Our education seemed modern, “just in time”, and well adapted to practical needs. The course package formed the first concrete activity of a group of young teachers and researchers. In a forty-year perspective, these people have further developed the department and the topic to an internationally well-reputed body of knowledge and research. The department has produced more than thirty professors and more than one hundred doctoral degrees.

  9. Image processing applied to automatic detection of defects during ultrasonic examination

    International Nuclear Information System (INIS)

    This work is a study of image processing applied to ultrasonic BSCAN images obtained in the field of non-destructive testing of welds. The goal is to define what image processing techniques can bring to improve the exploitation of the collected data and, more precisely, what image processing can do to extract the meaningful echoes which make it possible to characterize and size the defects. The report presents non-destructive testing by ultrasound in the nuclear field and indicates the specificities of the propagation of ultrasonic waves in austenitic welds. It gives a state of the art of data processing applied to ultrasonic images in non-destructive evaluation. A new image analysis is then developed. It is based on a powerful tool, the co-occurrence matrix. This matrix makes it possible to represent, in a single representation, the relations between the amplitudes of pairs of pixels. From the matrix analysis, a new complete and automatic method has been set down in order to define a threshold which separates echoes from noise. An automatic interpretation of the ultrasonic echoes is then possible. Complete validation has been done with standard pieces

  10. Automatic processing of induced events in the geothermal reservoirs Landau and Insheim, Germany

    Science.gov (United States)

    Olbert, Kai; Küperkoch, Ludger; Meier, Thomas

    2016-04-01

    Induced seismic events can pose a risk to local infrastructure and need to be understood and evaluated. They also represent a chance to learn more about reservoir behavior and characteristics. Prior to the analysis, the waveform data must be processed consistently and accurately to avoid erroneous interpretations. In the framework of the MAGS2 project an automatic off-line event detection and a phase onset time determination algorithm are applied to induced seismic events in geothermal systems in Landau and Insheim, Germany. The off-line detection algorithm is based on cross-correlation of continuous data from the local seismic network with master events. It distinguishes events between different reservoirs and within the individual reservoirs. Furthermore, it provides a location and magnitude estimation. Data from 2007 to 2014 are processed and compared with other detections using the SeisComp3 cross-correlation detector and a STA/LTA detector. The detected events are analyzed concerning spatial or temporal clustering. Furthermore, the number of events is compared to the existing detection lists. The automatic phase picking algorithm combines an AR-AIC approach with a cost function to find precise P1- and S1-phase onset times which can be used for localization and tomography studies. 800 induced events are processed, determining 5000 P1- and 6000 S1-picks. The phase onset times show a high precision, with mean residuals to manual phase picks of 0 s (P1) to 0.04 s (S1) and standard deviations below ±0.05 s. The resulting automatic picks are used to relocate a selected number of events to evaluate their influence on the location precision.
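
    As an illustration of the master-event cross-correlation idea (not the MAGS2 implementation), the sketch below slides a template over a continuous trace and flags windows whose Pearson correlation with the master exceeds a threshold; the sampling rate, threshold value and synthetic data are assumptions.

```python
import numpy as np

def detect_events(trace, master, fs, threshold=0.7):
    """Correlate a master-event template with a continuous trace and return
    the times of windows exceeding the correlation threshold."""
    n = len(master)
    # Plain loop kept for clarity; an FFT-based normalized cross-correlation
    # would be used for long continuous records in practice.
    ncc = np.array([np.corrcoef(trace[i:i + n], master)[0, 1]
                    for i in range(len(trace) - n + 1)])
    picks = np.flatnonzero(ncc > threshold)
    return picks / fs, ncc

# Synthetic example: a 0.5 s template buried twice in noise.
fs = 200.0
t = np.arange(0, 0.5, 1 / fs)
template = np.sin(2 * np.pi * 10 * t) * np.hanning(len(t))
trace = np.random.normal(0, 0.1, 5000)
trace[1000:1100] += template
trace[3200:3300] += 0.5 * template
times, ncc = detect_events(trace, template, fs, threshold=0.6)
print("candidate detections (s):", np.round(times, 2)[:5])
```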

  11. Automatic Mapping Of Large Signal Processing Systems To A Parallel Machine

    Science.gov (United States)

    Printz, Harry; Kung, H. T.; Mummert, Todd; Scherer, Paul M.

    1989-12-01

    Since the spring of 1988, Carnegie Mellon University and the Naval Air Development Center have been working together to implement several large signal processing systems on the Warp parallel computer. In the course of this work, we have developed a prototype of a software tool that can automatically and efficiently map signal processing systems to distributed-memory parallel machines, such as Warp. We have used this tool to produce Warp implementations of small test systems. The automatically generated programs compare favorably with hand-crafted code. We believe this tool will be a significant aid in the creation of high speed signal processing systems. We assume that signal processing systems have the following characteristics: • They can be described by directed graphs of computational tasks; these graphs may contain thousands of task vertices. • Some tasks can be parallelized in a systolic or data-partitioned manner, while others cannot be parallelized at all. • The side effects of each task, if any, are limited to changes in local variables. • Each task has a data-independent execution time bound, which may be expressed as a function of the way it is parallelized, and the number of processors it is mapped to. In this paper we describe techniques to automatically map such systems to Warp-like parallel machines. We identify and address key issues in gracefully combining different parallel programming styles, in allocating processor, memory and communication bandwidth, and in generating and scheduling efficient parallel code. When iWarp, the VLSI version of the Warp machine, becomes available in 1990, we will extend this tool to generate efficient code for very large applications, which may require as many as 3000 iWarp processors, with an aggregate peak performance of 60 gigaflops.
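
    The paper's mapping tool is not described in code here, but the stated assumption that each task has a data-independent execution time expressible as a function of its processor allocation can be illustrated with a toy model. The task names, timing functions and the greedy allocation below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    time: Callable[[int], float]   # execution time as a function of processor count
    parallel: bool                 # whether the task can be parallelized at all

# Hypothetical three-stage pipeline; timing functions are illustrative only.
tasks = [
    Task("fft",    lambda p: 12.0 / p + 0.2 * p, True),   # speedup plus comm. overhead
    Task("detect", lambda p: 3.0, False),                  # not parallelizable
    Task("filter", lambda p: 8.0 / p + 0.1 * p, True),
]

def greedy_map(tasks, total_procs):
    """Assign processors one at a time to whichever task currently dominates
    the pipeline period (a toy version of processor/bandwidth allocation)."""
    alloc = {t.name: 1 for t in tasks}
    for _ in range(total_procs - len(tasks)):
        worst = max(tasks, key=lambda t: t.time(alloc[t.name]))
        if not worst.parallel:
            break   # the period is bounded by a serial task; more processors won't help
        alloc[worst.name] += 1
    period = max(t.time(alloc[t.name]) for t in tasks)
    return alloc, period

print(greedy_map(tasks, 16))
```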

  12. The neural signatures of processing semantic end values in automatic number comparisons

    Directory of Open Access Journals (Sweden)

    Michal Pinhas

    2015-11-01

    Full Text Available The brain activity associated with processing numerical end values has received limited research attention. The present study explored the neural correlates associated with processing semantic end values under conditions of automatic number processing. Event-related potentials (ERPs) were recorded while participants performed the numerical Stroop task, in which they were asked to compare the physical size of pairs of numbers, while ignoring their numerical values. The smallest end value in the set, which is a task-irrelevant factor, was manipulated between participant groups. We focused on the processing of the lower end values of 0 and 1 because these numbers were found to be automatically tagged as the “smallest.” Behavioral results showed that the size congruity effect was modulated by the presence of the smallest end value in the pair. ERP data revealed a spatially extended centro-parieto-occipital P3 that was enhanced for congruent versus incongruent trials. Importantly, over centro-parietal sites, the P3 congruity effect (congruent minus incongruent) was larger for pairs containing the smallest end value than for pairs containing nonsmallest values. These differences in the congruency effect were localized to the precuneus. The presence of an end value within the pair also modulated P3 latency. Our results provide the first neural evidence for the encoding of numerical end values. They further demonstrate that the use of end values as anchors is a primary aspect of processing symbolic numerical information.

  13. The Masked Semantic Priming Effect Is Task Dependent: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2015-01-01

    Semantic priming effects are popularly explained in terms of an automatic spreading activation process, according to which the activation of a node in a semantic network spreads automatically to interconnected nodes, preactivating a semantically related word. It is expected from this account that semantic priming effects should be routinely…

  14. Automatic evaluation and data generation for analytical chemistry instrumental analysis exercises

    Directory of Open Access Journals (Sweden)

    Arsenio Muñoz de la Peña

    2014-01-01

    Full Text Available In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and the results evaluated immediately. The proposed system provides each student with a different set of experimental data generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to provide students with a set of practical problems to complement their regular laboratory work along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS, an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first exercise involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests, a comparison test of the mean and a t-paired test.
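
    The published exercises are designed in Matlab, but the underlying idea of generating constrained random calibration data per student and automatically computing the regression quality parameters can be sketched as follows. The concentration range, noise levels, and the 3.3·s/slope detection-limit formula are illustrative assumptions, not necessarily the constraints or criteria used in the Goodle GMS exercises.

```python
import numpy as np
from scipy import stats

def generate_exercise(n_standards=6, seed=None):
    """Generate a personalized calibration data set under simple constraints
    (slope, intercept and noise level drawn from bounded ranges)."""
    rng = np.random.default_rng(seed)
    slope = rng.uniform(0.05, 0.2)            # sensitivity (signal per mg/L), assumed range
    intercept = rng.uniform(0.0, 0.05)
    noise = rng.uniform(0.002, 0.01)
    conc = np.linspace(0, 10, n_standards)    # standard concentrations, mg/L
    signal = intercept + slope * conc + rng.normal(0, noise, n_standards)
    return conc, signal

def evaluate(conc, signal):
    """Quality parameters a student would report for the calibration line."""
    fit = stats.linregress(conc, signal)
    resid = signal - (fit.intercept + fit.slope * conc)
    s_y = np.std(resid, ddof=2)               # residual standard deviation (n - 2 dof)
    lod = 3.3 * s_y / fit.slope               # a common detection-limit estimate
    return {"slope": fit.slope, "intercept": fit.intercept,
            "r_squared": fit.rvalue ** 2, "LOD": lod}

conc, signal = generate_exercise(seed=42)
print(evaluate(conc, signal))
```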

  15. FULLY AUTOMATIC IMAGE-BASED REGISTRATION OF UNORGANIZED TLS DATA

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2012-09-01

    Full Text Available The estimation of the transformation parameters between different point clouds is still a crucial task as it is usually followed by scene reconstruction, object detection or object recognition. Therefore, the estimates should be as accurate as possible. Recent developments show that it is feasible to utilize both the measured range information and the reflectance information sampled as image, as 2D imagery provides additional information. In this paper, an image-based registration approach for TLS data is presented which consists of two major steps. In the first step, the order of the scans is calculated by checking the similarity of the respective reflectance images via the total number of SIFT correspondences between them. Subsequently, in the second step, for each SIFT correspondence the respective SIFT features are filtered with respect to their reliability concerning the range information and projected to 3D space. Combining the 3D points with 2D observations on a virtual plane yields 3D-to-2D correspondences from which the coarse transformation parameters can be estimated via a RANSAC-based registration scheme including the EPnP algorithm. After this coarse registration, the 3D points are again checked for consistency by using constraints based on the 3D distance, and, finally, the remaining 3D points are used for an ICP-based fine registration. Thus, the proposed methodology provides a fast, reliable, accurate and fully automatic image-based approach for the registration of unorganized point clouds without the need of a priori information about the order of the scans, the presence of regular surfaces or human interaction.
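
    A compressed sketch of the coarse registration stage described above, using OpenCV's SIFT and its RANSAC/EPnP solver. The `pts3d_lookup_a` callable (mapping a reflectance-image pixel of scan A to its 3D point), the camera matrix `K` of the virtual plane, the 0.75 ratio test and the 3-pixel reprojection threshold are assumptions; the paper's range-based reliability filtering and the final ICP refinement are not reproduced here.

```python
import cv2
import numpy as np

def coarse_register(refl_img_a, refl_img_b, pts3d_lookup_a, K):
    """Coarse TLS registration from reflectance images: SIFT matches in 2D,
    3D points looked up from scan A, pose estimated with RANSAC + EPnP."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(refl_img_a, None)
    kp_b, des_b = sift.detectAndCompute(refl_img_b, None)

    # Ratio-test filtering of descriptor matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            matches.append(pair[0])

    # 3D points from scan A and corresponding 2D observations in scan B.
    obj_pts = np.float32([pts3d_lookup_a(kp_a[m.queryIdx].pt) for m in matches])
    img_pts = np.float32([kp_b[m.trainIdx].pt for m in matches])

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts, img_pts, K, None, flags=cv2.SOLVEPNP_EPNP,
        reprojectionError=3.0)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec, inliers

# Usage sketch (images and the pixel-to-3D lookup come from the TLS scans):
#   R, t, inl = coarse_register(img_a, img_b, lookup_a, K)
```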

  16. Image processing techniques for remote sensing data

    Digital Repository Service at National Institute of Oceanography (India)

    RameshKumar, M.R.

    Image processing techniques for remote sensing data. M. R. Ramesh Kumar, National Institute of Oceanography, Dona Paula, Goa-403004. Digital image processing is used for the improvement of pictorial information for human interpretation and for the processing of scene data for autonomous machine perception. The techniques of digital image processing are used for automatic character/pattern recognition, industrial robots for product assembly and inspection, military reconnaissance...

  17. Experience with automatic orientation from different data sets

    DEFF Research Database (Denmark)

    Potucková, Marketa

    2003-01-01

    Automatic orientation of aerial images based on existing databases was a topic of the OEEPE research project running in 1998 and 1999. Different approaches for solving this task have been published until now. The method developed at Aalborg University uses the existing topographic database and or...

  18. Automatic Building Extraction From LIDAR Data Covering Complex Urban Scenes

    Science.gov (United States)

    Awrangjeb, M.; Lu, G.; Fraser, C.

    2014-08-01

    This paper presents a new method for segmentation of LIDAR point cloud data for automatic building extraction. Using the ground height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points. Points on walls are removed from the set of non-ground points by applying the following two approaches: If a plane fitted at a point and its neighbourhood is perpendicular to a fictitious horizontal plane, then this point is designated as a wall point. When LIDAR points are projected on a dense grid, points within a narrow area close to an imaginary vertical line on the wall should fall into the same grid cell. If three or more points fall into the same cell, then the intermediate points are removed as wall points. The remaining non-ground points are then divided into clusters based on height and local neighbourhood. One or more clusters are initialised based on the maximum height of the points and then each cluster is extended by applying height and neighbourhood constraints. Planar roof segments are extracted from each cluster of points following a region-growing technique. Planes are initialised using coplanar points as seed points and then grown using plane compatibility tests. If the estimated height of a point is similar to its LIDAR-generated height, or if its normal distance to a plane is within a predefined limit, then the point is added to the plane. Once all the planar segments are extracted, the common points between the neighbouring planes are assigned to the appropriate planes based on the plane intersection line, locality and the angle between the normal at a common point and the corresponding plane. A rule-based procedure is applied to remove tree planes which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual building boundaries, which are regularised based on long line segments. Experimental results on ISPRS benchmark data sets show that the
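
    The first wall-removal test (a local plane fitted at a point being perpendicular to a horizontal plane) can be sketched as follows; the neighbourhood size `k`, the 10° tilt tolerance and the toy roof/wall cloud are assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def wall_point_mask(points, k=12, max_tilt_deg=10.0):
    """Flag points whose local best-fit plane is (nearly) vertical, i.e.
    perpendicular to a horizontal plane."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        nbh = points[nb] - points[nb].mean(axis=0)
        # Plane normal = direction of the smallest singular value of the neighbourhood.
        _, _, vt = np.linalg.svd(nbh, full_matrices=False)
        normals[i] = vt[-1]
    # A vertical plane has a normal that is close to horizontal (z-component near 0).
    tilt = np.degrees(np.arcsin(np.abs(normals[:, 2])))
    return tilt < max_tilt_deg

# Toy cloud: a horizontal roof patch and a vertical wall patch.
roof = np.column_stack([np.random.rand(200), np.random.rand(200), np.full(200, 5.0)])
wall = np.column_stack([np.random.rand(200), np.zeros(200), 5.0 * np.random.rand(200)])
mask = wall_point_mask(np.vstack([roof, wall]) + 0.01 * np.random.randn(400, 3))
print("points flagged as wall:", mask.sum())
```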

  19. Automatic "pipeline" analysis of 3-D MRI data for clinical trials: application to multiple sclerosis.

    Science.gov (United States)

    Zijdenbos, Alex P; Forghani, Reza; Evans, Alan C

    2002-10-01

    The quantitative analysis of magnetic resonance imaging (MRI) data has become increasingly important in both research and clinical studies aiming at human brain development, function, and pathology. Inevitably, the role of quantitative image analysis in the evaluation of drug therapy will increase, driven in part by requirements imposed by regulatory agencies. However, the prohibitive length of time involved and the significant intra- and inter-rater variability of the measurements obtained from manual analysis of large MRI databases represent major obstacles to the wider application of quantitative MRI analysis. We have developed a fully automatic "pipeline" image analysis framework and have successfully applied it to a number of large-scale, multicenter studies (more than 1,000 MRI scans). This pipeline system is based on robust image processing algorithms, executed in a parallel, distributed fashion. This paper describes the application of this system to the automatic quantification of multiple sclerosis lesion load in MRI, in the context of a phase III clinical trial. The pipeline results were evaluated through an extensive validation study, revealing that the obtained lesion measurements are statistically indistinguishable from those obtained by trained human observers. Given that intra- and inter-rater measurement variability is eliminated by automatic analysis, this system enhances the ability to detect small treatment effects not readily detectable through conventional analysis techniques. While useful for clinical trial analysis in multiple sclerosis, this system holds widespread potential for applications in other neurological disorders, as well as for the study of neurobiology in general. PMID:12585710

  20. Gaia Data Processing Architecture

    CERN Document Server

    O'Mullane, W; Bailer-Jones, C; Bastian, U; Brown, A; Drimmel, R; Eyer, L; Huc, C; Jansen, F; Katz, D; Lindegren, L; Pourbaix, D; Luri, X; Mignard, F; Torra, J; van Leeuwen, F

    2006-01-01

    Gaia is ESA's ambitious space astrometry mission the main objective of which is to astrometrically and spectro-photometrically map 1000 Million celestial objects (mostly in our galaxy) with unprecedented accuracy. The announcement of opportunity for the data processing will be issued by ESA late in 2006. The Gaia Data Processing and Analysis Consortium (DPAC) has been formed recently and is preparing an answer. The satellite will downlink close to 100 TB of raw telemetry data over 5 years. To achieve its required accuracy of a few 10s of Microarcsecond astrometry, a highly involved processing of this data is required. In addition to the main astrometric instrument Gaia will host a Radial Velocity instrument, two low-resolution dispersers for multi-color photometry and two Star Mappers. Gaia is a flying Giga Pixel camera. The various instruments each require relatively complex processing while at the same time being interdependent. We describe the overall composition of the DPAC and the envisaged overall archi...

  1. Principles and methods for automatic and semi-automatic tissue segmentation in MRI data.

    Science.gov (United States)

    Wang, Lei; Chitiboi, Teodora; Meine, Hans; Günther, Matthias; Hahn, Horst K

    2016-04-01

    The development of magnetic resonance imaging (MRI) revolutionized both the medical and scientific worlds. A large variety of MRI options have generated a huge amount of image data to interpret. The investigation of a specific tissue in 3D or 4D MR images can be facilitated by image processing techniques, such as segmentation and registration. In this work, we provide a brief review of the principles and methods that are commonly applied to achieve superior tissue segmentation results in MRI. The impacts of MR image acquisition on segmentation outcome and the principles of selecting and exploiting segmentation techniques tailored for specific tissue identification tasks are discussed. In the end, two exemplary applications, breast and fibroglandular tissue segmentation in MRI and myocardium segmentation in short-axis cine and real-time MRI, are discussed to explain the typical challenges that can be posed in practical segmentation tasks in MRI data. The corresponding solutions that are adopted to deal with these challenges of the two practical segmentation tasks are thoroughly reviewed. PMID:26755062

  2. Online data processing system

    International Nuclear Information System (INIS)

    A pulse height analyzer terminal system, PHATS, has been developed for online data processing via the JAERI-TOKAI computer network. The system is controlled by a micro-computer, MICRO-8, which was developed for the JAERI-TOKAI network. The system program consists of two subprograms, the online control system ONLCS and the pulse height analyzer control system PHACS. ONLCS links the terminal with the conversational programming system of the FACOM 230/75 through the JAERI-TOKAI network and controls data processing in TSS and remote batch modes. PHACS is used to control the input/output of data between the pulse height analyzer and cassette-MT or typewriter. This report describes the hardware configuration and the system program in detail. The appendix explains the real-time monitor, the message types, and the PEX-to-PEX and Host-to-Host protocols required for the system programming. (author)

  3. Methods and automatic procedures for processing images based on blind evaluation of noise type and characteristics

    Science.gov (United States)

    Lukin, Vladimir V.; Abramov, Sergey K.; Ponomarenko, Nikolay N.; Uss, Mikhail L.; Zriakhov, Mikhail; Vozel, Benoit; Chehdi, Kacem; Astola, Jaakko T.

    2011-01-01

    In many modern applications, methods and algorithms used for image processing require a priori knowledge or estimates of the noise type and its characteristics. The noise type and basic parameters can sometimes be known in advance or determined in an interactive manner. However, it happens more and more often that they must be estimated in a blind manner. The results of blind noise-type determination can be false, and the estimates of noise parameters have limited accuracy. Such false decisions and estimation errors affect the performance of image-processing techniques that rely on the obtained information. We address some issues of this negative influence. Possible structures of automatic procedures are presented and discussed for several typical applications of image processing, such as remote sensing data preprocessing and compression.
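
    As one concrete example of blind noise-parameter estimation (not the authors' procedure), the sketch below estimates the standard deviation of additive Gaussian noise from a single image using a robust median-absolute-deviation statistic of a Laplacian high-pass residual.

```python
import numpy as np
from scipy.ndimage import laplace

def blind_sigma_estimate(img):
    """Blindly estimate additive-noise standard deviation from a single image."""
    hp = laplace(img.astype(float))
    # MAD is robust to edges and texture that contaminate a plain variance estimate.
    mad = np.median(np.abs(hp - np.median(hp)))
    # 0.6745 converts MAD to sigma for Gaussian noise; sqrt(20) accounts for the
    # variance gain of the 5-point Laplacian kernel (1 + 1 + 1 + 1 + 16 = 20).
    return mad / 0.6745 / np.sqrt(20)

# Check on a synthetic image with a known noise level.
clean = np.outer(np.linspace(0, 255, 256), np.ones(256))
noisy = clean + np.random.normal(0, 7.0, clean.shape)
print("true sigma: 7.0, estimated:", round(blind_sigma_estimate(noisy), 2))
```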

  4. Automatic Detection and Characterization of Subsurface Features from Mars Radar Sounder Data

    Science.gov (United States)

    Ferro, A.; Bruzzone, L.; Heggy, E.; Plaut, J. J.

    2010-12-01

    MARSIS and SHARAD are currently orbiting Mars in an attempt to explore structural and volatile elements in its subsurface. The data returned from these two experiments are complementary in nature, providing different penetration capabilities and vertical resolutions that are crucial to constraining the ambiguities on the subsurface structural and geophysical properties. To this day, both radars have acquired a substantially large volume of data that is yet to be quantitatively analyzed with more accurate radar inversion algorithms. Manual investigation of the radargrams is a time-consuming task that is often dependent on the user's visual ability to distinguish subsurface reflectors. Such a process introduces substantial user-to-user ambiguity in the data analysis, limits the amount of data that can be explored, and reduces the efficiency of fusion studies that compile MARSIS and SHARAD data in a metric process. To address this deficiency, we started the development of automated techniques for the extraction of subsurface information from the radar sounding data. Such methods will greatly improve the ability to perform scientific analysis on larger-scale areas using the two data sets from MARSIS and SHARAD simultaneously [Ferro and Bruzzone, 2009]. Our automated data analysis chain has been preliminarily applied only to SHARAD data for the statistical characterization of the radargrams and the automatic detection of linear subsurface features [Ferro and Bruzzone, 2010]. Our current development has been extended for the integration of both SHARAD and MARSIS data. We identified two targets of interest to test and validate our automated tools to explore subsurface features: (1) the North Polar Layered Deposits, and (2) Elysium Planitia. On the NPLD, the technique was able to extract the position and the extension of the returns coming from the basal unit from SHARAD radargrams, both in range and azimuth. Therefore, it was possible to map the depth and thickness of the icy polar cap. The

  5. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, uniform corrosion is characterized by texture attributes extracted from the co-occurrence matrix and the Self Organizing Mapping (SOM) clustering algorithm. We present a technique for automatic inspection of oil and gas storage tanks and pipelines of petrochemical industries without disturbing their properties and performance. Experimental results are promising and encourage the possibility of using this methodology in designing trustworthy and robust early failure detection systems. (author)
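
    A minimal self-organizing map, written from scratch, illustrating the clustering step; the 4x4 map size, the learning-rate schedule and the toy three-dimensional feature vectors are assumptions, and the texture attributes themselves (e.g. contrast, homogeneity, energy from a co-occurrence matrix) are only mimicked here by synthetic data.

```python
import numpy as np

def train_som(features, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map for clustering texture feature vectors."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    w = rng.normal(size=(n_units, features.shape[1]))
    # 2D coordinates of the map units, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    n_steps, step = epochs * len(features), 0
    for _ in range(epochs):
        for x in rng.permutation(features):
            lr = lr0 * (1 - step / n_steps)
            sigma = sigma0 * (1 - step / n_steps) + 1e-3
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))      # best matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))                  # neighbourhood kernel
            w += lr * h[:, None] * (x - w)
            step += 1
    return w

def assign(features, w):
    return np.argmin(np.linalg.norm(features[:, None] - w[None], axis=2), axis=1)

# Toy data: two texture "classes" (corroded vs. clean) in a 3D feature space.
a = np.random.normal([2.0, 0.3, 0.1], 0.1, (50, 3))
b = np.random.normal([0.5, 0.8, 0.6], 0.1, (50, 3))
w = train_som(np.vstack([a, b]))
print(assign(np.vstack([a[:3], b[:3]]), w))
```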

  6. Cell Processing Engineering for Regenerative Medicine : Noninvasive Cell Quality Estimation and Automatic Cell Processing.

    Science.gov (United States)

    Takagi, Mutsumi

    2016-01-01

    Cell processing engineering for regenerative medicine, including automatic cell processing and noninvasive cell quality estimation of adherent mammalian cells, is reviewed. Automatic cell processing, necessary for the industrialization of regenerative medicine, is introduced. Cell quality, such as cell heterogeneity, should be noninvasively estimated before transplantation to the patient, because cultured cells are usually heterogeneous rather than homogeneous and most protocols of regenerative medicine are autologous. The differentiation level can be estimated by two-dimensional cell morphology analysis using a conventional phase-contrast microscope. The phase-shifting laser microscope (PLM) can determine the laser phase shift, caused by the laser transmitted through the cell, at every pixel in a view, and might be more noninvasive and more useful than the atomic force microscope or the digital holographic microscope. Noninvasive determination of the laser phase shift of a cell using a PLM was carried out to determine the three-dimensional cell morphology and to estimate the cell cycle phase of each adherent cell and the mean proliferation activity of a cell population. Noninvasive discrimination of cancer cells from normal cells by measuring the phase shift was performed based on differences in cytoskeleton density. Chemical analysis of the culture supernatant is also useful for estimating the differentiation level of a cell population. A probe beam, an infrared beam, and Raman spectroscopy are useful for diagnosing the viability, apoptosis, and differentiation of each adherent cell. PMID:25373455

  7. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    Science.gov (United States)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which are often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences reaches the pixel level, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to the pixel level.

  8. An Automatic Framework Using Space-Time Processing and TR-MUSIC for Subsurface and Through-Wall Multitarget Imaging

    Directory of Open Access Journals (Sweden)

    Si-hao Tan

    2012-01-01

    Full Text Available We present an automatic framework combining space-time signal processing with Time Reversal electromagnetic (EM) inversion for subsurface and through-wall multitarget imaging using electromagnetic waves. The framework is composed of a frequency-wavenumber (FK) filter to suppress the direct wave and medium bounce, an FK migration algorithm to automatically estimate the number of targets and identify target regions, which can be used to reduce the computational complexity of the subsequent imaging algorithm, and an EM inversion algorithm using Time Reversal Multiple Signal Classification (TR-MUSIC) to reconstruct hidden objects. The feasibility of the framework is demonstrated with simulated data generated by GPRMAX.
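
    The FK filtering step can be sketched as a 2D Fourier mask that mutes energy below a chosen apparent velocity, which is a common way to suppress slow direct-wave and medium-bounce energy; the sampling intervals, the velocity cut-off and the synthetic gather below are assumptions, not the paper's parameters.

```python
import numpy as np

def fk_filter(gather, dt, dx, vmin):
    """Frequency-wavenumber filter: transform a time-space gather to the f-k
    domain, mute energy whose apparent velocity |f/k| is below vmin, and
    transform back."""
    spec = np.fft.fft2(gather)
    f = np.fft.fftfreq(gather.shape[0], d=dt)[:, None]   # temporal frequency (Hz)
    k = np.fft.fftfreq(gather.shape[1], d=dx)[None, :]   # spatial wavenumber (1/m)
    v_app = np.abs(f) / np.maximum(np.abs(k), 1e-30)     # apparent velocity per cell
    return np.real(np.fft.ifft2(spec * (v_app >= vmin)))

# Toy usage: 1 ns sampling, 5 cm trace spacing, keep only near-EM-speed energy;
# on real data the slow direct arrival along the array would be removed.
gather = np.random.normal(0.0, 0.01, (512, 64))
filtered = fk_filter(gather, dt=1e-9, dx=0.05, vmin=1e8)
print(filtered.shape)
```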

  9. Automatic cortical surface reconstruction of high-resolution T1 echo planar imaging data.

    Science.gov (United States)

    Renvall, Ville; Witzel, Thomas; Wald, Lawrence L; Polimeni, Jonathan R

    2016-07-01

    Echo planar imaging (EPI) is the method of choice for the majority of functional magnetic resonance imaging (fMRI), yet EPI is prone to geometric distortions and thus misaligns with conventional anatomical reference data. The poor geometric correspondence between functional and anatomical data can lead to severe misplacements and corruption of detected activation patterns. However, recent advances in imaging technology have provided EPI data with increasing quality and resolution. Here we present a framework for deriving cortical surface reconstructions directly from high-resolution EPI-based reference images that provide anatomical models exactly geometric distortion-matched to the functional data. Anatomical EPI data with 1 mm isotropic voxel size were acquired using a fast multiple inversion recovery time EPI sequence (MI-EPI) at 7T, from which quantitative T1 maps were calculated. Using these T1 maps, volumetric data mimicking the tissue contrast of standard anatomical data were synthesized using the Bloch equations, and these T1-weighted data were automatically processed using FreeSurfer. The spatial alignment between T2*-weighted EPI data and the synthetic T1-weighted anatomical MI-EPI-based images was improved compared to the conventional anatomical reference. In particular, the alignment near the regions vulnerable to distortion due to magnetic susceptibility differences was improved, and sampling of the adjacent tissue classes outside of the cortex was reduced when using cortical surface reconstructions derived directly from the MI-EPI reference. The MI-EPI method therefore produces high-quality anatomical data that can be automatically segmented with standard software, providing cortical surface reconstructions that are geometrically matched to the BOLD fMRI data. PMID:27079529

  10. Automatic extraction of highway light poles and towers from mobile LiDAR data

    Science.gov (United States)

    Yan, Wai Yeung; Morsy, Salem; Shaker, Ahmed; Tulloch, Mark

    2016-03-01

    Mobile LiDAR has recently been demonstrated as a viable technique for pole-like object detection and classification. Although a desirable accuracy (around 80%) has been reported in existing studies, the majority of them were carried out at the street level with relatively flat ground, and very few of them addressed how to extract the entire pole structure from the ground or curb surface. Therefore, this paper attempts to fill the research gap by presenting a workflow for automatic extraction of light poles and towers from a mobile LiDAR point cloud, with a particular focus on municipal highways. The data processing workflow includes (1) an automatic ground filtering mechanism to separate aboveground and ground features, (2) an unsupervised clustering algorithm to cluster the aboveground point cloud, (3) a set of decision rules to identify and classify potential light poles and towers, and (4) a least-squares circle fitting algorithm to fit the circular pole structure so as to remove the ground points. The workflow was tested with a set of mobile LiDAR data collected for a section of Highway 401 located in Toronto, Ontario, Canada. The results showed that the proposed method can achieve a detection rate of over 91% for five types of light poles and towers along the study area.
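
    Step (4), the least-squares circle fit to a pole cross-section, can be illustrated with the classic algebraic (Kåsa) formulation; the abstract does not state which circle-fit variant the authors use, and the pole radius and noise level below are invented for the example.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2D points, e.g. a horizontal
    slice through a pole: solve x^2 + y^2 = 2*a*x + 2*b*y + c for (a, b, c)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)     # since c = r^2 - a^2 - b^2
    return (a, b), radius

# Toy slice: noisy points on a 15 cm radius pole cross-section centred at (1, 2).
theta = np.random.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([1.0 + 0.15 * np.cos(theta), 2.0 + 0.15 * np.sin(theta)])
pts += np.random.normal(0, 0.005, pts.shape)
center, r = fit_circle(pts)
print("center:", np.round(center, 3), "radius:", round(r, 3))
```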

  11. AUTOMATIC RECOGNITION OF PIPING SYSTEM FROM LARGE-SCALE TERRESTRIAL LASER SCAN DATA

    Directory of Open Access Journals (Sweden)

    K. Kawashima

    2012-09-01

    Full Text Available Recently, changes in plant equipment have become more frequent because of the short lifetime of products, and constructing 3D shape models of existing plants (as-built models) from large-scale laser scanned data is expected to make the rebuilding processes more efficient. However, the laser scanned data of an existing plant contains a massive number of points, captures tangled objects and includes a large amount of noise, so that manual reconstruction of a 3D model is very time-consuming and costly. Piping systems, in particular, account for the greatest proportion of plant equipment. Therefore, the purpose of this research was to propose an algorithm which can automatically recognize a piping system from terrestrial laser scan data of the plant equipment. The straight portions of pipes, the connecting parts and the connection relationships of the piping system can be recognized by this algorithm. Eigenvalue analysis of the point clouds and of the normal vectors allows for the recognition. Using only point clouds, the recognition algorithm can be applied to registered point clouds and can be performed in a fully automatic way. Preliminary recognition results for large-scale scanned data from an oil rig plant have shown the effectiveness of the algorithm.
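
    The eigenvalue analysis of local point neighbourhoods can be sketched as follows: eigenvalues of the local covariance give linearity/planarity/scatter descriptors that help separate straight pipe runs from connecting parts. The neighbourhood size, the specific descriptors and the toy cluster are assumptions, not the authors' exact features.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_shape_features(points, k=20):
    """Per-point linearity, planarity and scatter from the eigenvalues of the
    local neighbourhood covariance (sorted l1 >= l2 >= l3)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        nbh = points[nb] - points[nb].mean(axis=0)
        l = np.sort(np.linalg.eigvalsh(nbh.T @ nbh / k))[::-1]
        l = np.maximum(l, 1e-12)
        feats[i] = [(l[0] - l[1]) / l[0],      # linearity
                    (l[1] - l[2]) / l[0],      # planarity
                    l[2] / l[0]]               # scatter / sphericity
    return feats

# Toy example: a thin, elongated cluster of points scores high on linearity.
pts = np.random.normal(0, 0.005, (500, 3))
pts[:, 0] += np.linspace(0, 2, 500)
print(local_shape_features(pts)[:3])
```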

  12. Investigation of registration algorithms for the automatic tile processing system

    Science.gov (United States)

    Tamir, Dan E.

    1995-01-01

    The Robotic Tile Inspection System (RTPS), under development at NASA-KSC, is expected to automate the processes of post-flight re-water-proofing and inspection of the Shuttle heat-absorbing tiles. An important task of the robot vision sub-system is to register the 'real-world' coordinates with the coordinates of the robot model of the Shuttle tiles. The model coordinates relate to a tile data-base and pre-flight tile-images. In the registration process, current (post-flight) images are aligned with pre-flight images to detect the rotation and translation displacement required for rectification of the coordinate systems. The research activities performed this summer included a study and evaluation of the registration algorithm currently implemented by the RTPS, as well as an investigation of the utility of other registration algorithms. It has been found that the current algorithm is not robust enough. This algorithm has a success rate of less than 80% and is, therefore, not suitable for complying with the requirements of the RTPS. Modifications to the current algorithm have been developed and tested. These modifications can improve the performance of the registration algorithm in a significant way. However, this improvement is not sufficient to satisfy the system requirements. A new algorithm for registration has been developed and tested. This algorithm presented a very high degree of robustness, with a success rate of 96%.

  13. Intelligent radar data processing

    Science.gov (United States)

    Holzbaur, Ulrich D.

    The application of artificial intelligence principles to the processing of radar signals is considered theoretically. The main capabilities required are learning and adaptation in a changing environment, processing and modeling information (especially dynamics and uncertainty), and decision-making based on all available information (taking its reliability into account). For the application to combat-aircraft radar systems, the tasks include the combination of data from different types of sensors, reacting to electronic counter-countermeasures, evaluation of how much data should be acquired (energy and radiation management), control of the radar, tracking, and identification. Also discussed are related uses such as monitoring the avionics systems, supporting pilot decisions with respect to the radar system, and general applications in radar-system R&D.

  14. Research on HJ-1A/B satellite data automatic geometric precision correction design

    Institute of Scientific and Technical Information of China (English)

    Xiong Wencheng; Shen Wenming; Wang Qiao; Shi Yuanli; Xiao Rulin; Fu Zhuo

    2014-01-01

    Developed independently by China, the HJ-1A/B satellites have operated well on-orbit for five years and acquired a large amount of high-quality observation data. The realization of geometric precision correction of the observation data is of great significance for macroscopic and dynamic ecological environment monitoring. The paper analyzed the parameter characteristics of the HJ-1 satellites and the geometric features of HJ-1 level 2 data (systematically geo-corrected data). Based on this, the overall HJ-1 multi-sensor geometric correction flow and a charge-coupled device (CCD) automatic geometric precision correction method were designed. Actual operating data showed that the method achieves good results for automatic geometric precision correction of HJ-1 satellite data: the automatic geometric precision correction accuracy of HJ-1 CCD images is within two pixels, and the automatic matching accuracy between images from the same satellite is better than one pixel.

  15. Automatic Determination of Fiber-Length Distribution in Composite Material Using 3D CT Data

    Directory of Open Access Journals (Sweden)

    Günther Greiner

    2010-01-01

    Full Text Available Determining fiber length distribution in fiber reinforced polymer components is a crucial step in quality assurance, since fiber length has a strong influence on the overall strength, stiffness, and stability of the material. The approximate fiber length distribution is usually determined early in the development process, as conventional methods require the destruction of the sample component. In this paper, a novel, automatic, and nondestructive approach for the determination of fiber length distribution in fiber reinforced polymers is presented. For this purpose, high-resolution computed tomography is used as the imaging method together with subsequent image analysis for evaluation. The image analysis consists of an iterative process where single fibers are detected automatically in each iteration step after having applied image enhancement algorithms. Subsequently, a model-based approach is used together with a priori information in order to guide a fiber tracing and segmentation process. Thereby, the length of the segmented fibers can be calculated and a length distribution can be deduced. The performance and the robustness of the segmentation method are demonstrated by applying it to artificially generated test data and selected real components.

  16. Adaptive Automatic Gauge Control of a Cold Strip Rolling Process

    Directory of Open Access Journals (Sweden)

    ROMAN, N.

    2010-02-01

    Full Text Available The paper deals with the thickness control structure for cold rolled strips. This structure is based on the roll position control of a reversible quarto rolling mill. The main feature of the proposed system is the compensation of the errors introduced by the deficient dynamics of the hydraulic servo-system used for roll positioning, by means of a dynamic compensator that approximates the inverse of the servo-system. Because the servo-system is considered time-varying, on-line identification of the servo-system and parameter adaptation of the compensator are performed. The results obtained by numerical simulation are presented together with data taken from the real process. These results illustrate the efficiency of the proposed solutions.
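
    The abstract does not specify the identification scheme, but on-line identification of a slowly time-varying servo-system is commonly done with recursive least squares and a forgetting factor; the sketch below shows that generic approach on a hypothetical first-order model, with all numerical values invented for the example.

```python
import numpy as np

class RLSIdentifier:
    """Recursive least squares with forgetting, estimating theta in
    y[k] = phi[k]^T theta for a slowly time-varying system."""
    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3        # large initial covariance
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        err = y - phi @ self.theta                           # prediction error
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Identify a first-order servo model y[k] = a*y[k-1] + b*u[k-1] from simulated data.
a_true, b_true = 0.9, 0.12
rls = RLSIdentifier(2)
y = 0.0
for k in range(500):
    u = np.sin(0.05 * k) + np.random.normal(0, 0.05)         # excitation signal
    y_new = a_true * y + b_true * u + np.random.normal(0, 0.01)
    rls.update([y, u], y_new)
    y = y_new
print("estimated [a, b]:", np.round(rls.theta, 3))
```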

  17. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    Science.gov (United States)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the Earth's surface. Classification of LiDAR data for extracting ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to segment the entire point cloud preliminarily into upper and lower contours, uniform surfaces, non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. It led to successful classification results for the building, vegetation and road classes.

  18. Gaussian process classification using automatic relevance determination for SAR target recognition

    Science.gov (United States)

    Zhang, Xiangrong; Gou, Limin; Hou, Biao; Jiao, Licheng

    2010-10-01

    In this paper, a Synthetic Aperture Radar Automatic Target Recognition approach based on Gaussian process (GP) classification is proposed. It adopts kernel principal component analysis to extract sample features and implements target recognition by using GP classification with an automatic relevance determination (ARD) function. Compared with k-Nearest Neighbor, the Naïve Bayes classifier and the Support Vector Machine, GP with ARD has the advantage of automatic model selection and hyper-parameter optimization. The experiments on UCI datasets and the MSTAR database show that our algorithm is self-tuning and has better recognition accuracy as well.
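
    The combination of an anisotropic (per-dimension length-scale) RBF kernel with Gaussian process classification is what gives automatic relevance determination; a small scikit-learn sketch of that idea is shown below on synthetic features standing in for KPCA features of SAR chips (the data, kernel scaling and train/test split are assumptions, not the paper's setup).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy features: only the first 2 of 5 dimensions carry class information,
# so ARD should shrink their length-scales relative to the irrelevant ones.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# One length-scale per input dimension = automatic relevance determination;
# hyper-parameters are tuned by maximizing the marginal likelihood during fit.
kernel = 1.0 * RBF(length_scale=np.ones(X.shape[1]))
gpc = GaussianProcessClassifier(kernel=kernel).fit(X[:150], y[:150])

print("test accuracy:", gpc.score(X[150:], y[150:]))
print("learned length-scales:", np.round(gpc.kernel_.k2.length_scale, 2))
```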

  19. Application of automatic acquisition and analysis system of process data for tandem cold mill

    Institute of Scientific and Technical Information of China (English)

    侯永刚; 秦大伟; 费静; 张岩; 刘宝权; 宋君

    2012-01-01

    Modern tandem cold rolling lines for steel strip are characterized by high production speed and high control precision, so the actual production process requires a data acquisition system that can efficiently handle the various kinds of process data of the rolling line. Based on the practical application at Angang Putian Cold Strip Mill, a data acquisition system was built with PDA equipment from iba, which collects, monitors, records and analyzes all kinds of running process data of the tandem cold mill in real time and at high speed. The use of this system provides strong data support for production engineers in rapidly diagnosing faults of the tandem cold rolling unit.

  20. Automatic generation of optimal business processes from business rules

    OpenAIRE

    Steen, Bas; Ferreira Pires, Luis; Iacob, Maria-Eugenia

    2010-01-01

    In recent years, business process models are increasingly being used as a means for business process improvement. Business rules can be seen as requirements for business processes, in that they describe the constraints that must hold for business processes that implement these business rules. Therefore, in principle one could devise (automated) transformations from business rules to business processes. These transformations should improve the quality (correctness) of business processes, by im...

  1. Automatic Creation of quality multi-word Lexica from noisy text data

    OpenAIRE

    Frontini, Francesca; Quochi, Valeria; Rubino, Francesco

    2012-01-01

    This paper describes the design of a tool for the automatic creation of multi-word lexica that is deployed as a web service and runs on automatically web-crawled data within the framework of the PANACEA platform. The main purpose of our task is to provide a (computationally "light") tool that creates a full high quality lexical resource of multi-word items. Within the platform, this tool is typically inserted in a work flow whose first step is automatic web-crawling. Therefore, the input data...

  2. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  3. Development of an automatic sample changer and a data acquisition system

    International Nuclear Information System (INIS)

    An automatic electro-pneumatic sample changer with a rotating sample holder is described. The changer is coupled through an electronic interface with the data acquisition station. The software to automate the system has been designed. (author)

  4. The development of automatic and controlled inhibitory retrieval processes in true and false recall

    OpenAIRE

    Knott, L.; Howe, M. L.; Wimmer, M. C.; Dewhurst, S

    2011-01-01

    In three experiments we investigated the role of automatic and controlled inhibitory retrieval processes in true and false memory development in children and adults. Experiment 1 incorporated a directed forgetting task to examine controlled retrieval inhibition. Experiments 2 and 3 utilized a part-set cue and retrieval practice task to examine automatic retrieval inhibition. In the first experiment, the forget cue had no effect on false recall for adults but reduced false recall for children....

  5. Towards Automatic Classification of Exoplanet-Transit-Like Signals: A Case Study on Kepler Mission Data

    Science.gov (United States)

    Valizadegan, Hamed; Martin, Rodney; McCauliff, Sean D.; Jenkins, Jon Michael; Catanzarite, Joseph; Oza, Nikunj C.

    2015-08-01

    Building new catalogues of planetary candidates, astrophysical false alarms, and non-transiting phenomena is a challenging task that currently requires a reviewing team of astrophysicists and astronomers. These scientists need to examine more than 100 diagnostic metrics and associated graphics for each candidate exoplanet-transit-like signal to classify it into one of the three classes. Considering that the NASA Explorer Program's TESS mission and ESA's PLATO mission survey an even larger area of space, the classification of their transit-like signals is more time-consuming for human agents and a bottleneck to successfully constructing the new catalogues in a timely manner. This encourages building automatic classification tools that can quickly and reliably classify the new signal data from these missions. The standard tool for building automatic classification systems is supervised machine learning, which requires a large set of highly accurate labeled examples in order to build an effective classifier. This requirement cannot be easily met for classifying transit-like signals because not only are existing labeled signals very limited, but also the current labels may not be reliable (because the labeling process is a subjective task). Our experiments using different supervised classifiers to categorize transit-like signals verify that the labeled signals are not rich enough to provide the classifier with enough power to generalize well beyond the observed cases (e.g. to unseen or test signals). That motivated us to utilize a new category of learning techniques, so-called semi-supervised learning, that combines the label information from the costly labeled signals and distribution information from the cheaply available unlabeled signals in order to construct more effective classifiers. Our study on the Kepler Mission data shows that semi-supervised learning can significantly improve the result of multiple base classifiers (e.g. Support Vector Machines, Ada
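
    A minimal illustration of the supervised-versus-semi-supervised comparison using scikit-learn's LabelSpreading (one of several graph-based semi-supervised methods; the abstract does not say which algorithm the authors used). The synthetic three-class data, the number of labeled examples and the RBF kernel width are assumptions.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading
from sklearn.svm import SVC

# Toy stand-in for transit-signal diagnostic metrics: 2 features, 3 classes,
# and only a small fraction of reliably labeled signals.
rng = np.random.default_rng(1)
centers = np.array([[0, 0], [3, 0], [0, 3]])
X = np.vstack([rng.normal(c, 0.7, (200, 2)) for c in centers])
y_true = np.repeat([0, 1, 2], 200)

y_partial = np.full(len(X), -1)                  # -1 marks unlabeled signals
labeled = rng.choice(len(X), 30, replace=False)  # only 30 vetted labels
y_partial[labeled] = y_true[labeled]

# Supervised baseline trained on the labeled subset only...
svm = SVC().fit(X[labeled], y_true[labeled])
print("SVM (labeled only):", svm.score(X, y_true))

# ...versus a semi-supervised model that also exploits the unlabeled distribution.
semi = LabelSpreading(kernel="rbf", gamma=2.0).fit(X, y_partial)
print("LabelSpreading     :", semi.score(X, y_true))
```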

  6. Automatic Segmentation of Raw LIDAR Data for Extraction of Building Roofs

    OpenAIRE

    Mohammad Awrangjeb; Fraser, Clive S.

    2014-01-01

    Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging) data. Using the ground height from a DEM (digital elevation model), the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the b...

  7. An automatic precipitation-phase distinction algorithm for optical disdrometer data over the global ocean

    Science.gov (United States)

    Burdanowitz, Jörg; Klepp, Christian; Bakan, Stephan

    2016-04-01

    The lack of high-quality in situ surface precipitation data over the global ocean so far limits the capability to validate satellite precipitation retrievals. The first systematic ship-based surface precipitation data set OceanRAIN (Ocean Rainfall And Ice-phase precipitation measurement Network) aims at providing a comprehensive statistical basis of in situ precipitation reference data from optical disdrometers at 1 min resolution deployed on various research vessels (RVs). Deriving the precipitation rate for rain and snow requires a priori knowledge of the precipitation phase (PP). Therefore, we present an automatic PP distinction algorithm using available data based on more than 4 years of atmospheric measurements onboard RV Polarstern that covers all climatic regions of the Atlantic Ocean. A time-consuming manual PP distinction within the OceanRAIN post-processing serves as reference, mainly based on 3-hourly present weather information from a human observer. For automation, we find that the combination of air temperature, relative humidity, and 99th percentile of the particle diameter predicts best the PP with respect to the manually determined PP. Excluding mixed phase, this variable combination reaches an accuracy of 91 % when compared to the manually determined PP for 149 635 min of precipitation from RV Polarstern. Including mixed phase (165 632 min), an accuracy of 81.2 % is reached for two independent PP distributions with a slight snow overprediction bias of 0.93. Using two independent PP distributions represents a new method that outperforms the conventional method of using only one PP distribution to statistically derive the PP. The new statistical automatic PP distinction method considerably speeds up the data post-processing within OceanRAIN while introducing an objective PP probability for each PP at 1 min resolution.

  8. Profiling animal toxicants by automatically mining public bioassay data: a big data approach for computational toxicology.

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    Full Text Available In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has resulted in an enormous amount of publicly available bioassay data having been generated for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which records the responses induced when the compound interacts with different cellular systems or biological targets. Profiling compounds of environmental or pharmaceutical interest using useful toxicity bioassay data is a promising method to study complex animal toxicity. In this study, we developed an automatic virtual profiling tool to evaluate potential animal toxicants. First, we automatically acquired all PubChem bioassay data for a set of 4,841 compounds with publicly available rat acute toxicity results. Next, we developed a scoring system to evaluate the relevance between these extracted bioassays and animal acute toxicity. Finally, the top ranked bioassays were selected to profile the compounds of interest. The resulting response profiles proved to be useful to prioritize untested compounds for their animal toxicity potentials and form a potential in vitro toxicity testing panel. The protocol developed in this study could be combined with structure-activity approaches and used to explore additional publicly available bioassay datasets for modeling a broader range of animal toxicities.

  9. Automatic Data Filter Customization Using a Genetic Algorithm

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire data and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the yield of the tossed-out data (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values, to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
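
    A toy version of the described search: each individual encodes a [left, right] acceptance band per input dimension, and a simple genetic algorithm maximizes the number of unproductive soundings rejected while penalizing the loss of good ones. Population size, mutation scale, the fitness weighting and the synthetic data are assumptions, not the values used in the JPL implementation.

```python
import numpy as np

def fitness(filters, X, good):
    """Reward filters that reject many bad soundings while keeping good ones:
    a sounding is rejected if any feature falls outside its [left, right] band."""
    keep = np.all((X >= filters[:, 0]) & (X <= filters[:, 1]), axis=1)
    rejected_bad = np.sum(~keep & ~good)
    rejected_good = np.sum(~keep & good)
    return rejected_bad - 5.0 * rejected_good          # penalize losing good runs

def evolve(X, good, pop=40, gens=60, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    lo, hi = X.min(0), X.max(0)
    # Each individual: a (d, 2) array of left/right thresholds per dimension.
    popn = [np.sort(rng.uniform(lo, hi, (d, 2)), axis=1) for _ in range(pop)]
    for _ in range(gens):
        scores = np.array([fitness(p, X, good) for p in popn])
        parents = [popn[i] for i in np.argsort(scores)[::-1][: pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random((d, 1)) < 0.5                     # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            child += rng.normal(0, 0.05, child.shape) * (hi - lo)[:, None]  # mutation
            children.append(np.sort(child, axis=1))
        popn = parents + children
    return max(popn, key=lambda p: fitness(p, X, good))

# Toy data: 2 features; "bad" soundings cluster at large feature-0 values.
X = np.random.rand(500, 2)
good = X[:, 0] < 0.7
print("evolved thresholds per dimension:\n", np.round(evolve(X, good), 2))
```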

  10. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers.

    Science.gov (United States)

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-04-15

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique for identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original

  11. Automatic Key-Frame Extraction from Optical Motion Capture Data

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qiang; YU Shao-pei; ZHOU Dong-sheng; WEI Xiao-peng

    2013-01-01

    Optical motion capture is an increasingly popular animation technique. In the last few years, plenty of methods have been proposed for key-frame extraction from motion capture data, and it is common to extract key-frames using quaternions. One main difficulty is that previous algorithms often need various parameters to be set manually. In addition, it is problematic to predefine an appropriate threshold without knowing the data content. In this paper, we present a novel adaptive threshold-based extraction method. Key-frames can be found according to quaternion distance. We propose a simple and efficient algorithm to extract key-frames from a motion sequence based on an adaptive threshold. It is convenient, with no need to predefine parameters to meet a certain compression ratio. Experimental results on many motion-capture sequences with different traits demonstrate the good performance of the proposed algorithm. Our experiments show that one can typically cut down the extraction process from several minutes to a couple of seconds.
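
    A minimal sketch of the adaptive-threshold idea is given below: frame-to-frame quaternion distances derive the threshold from the sequence itself, so no parameter has to be chosen in advance. The per-joint summation and the mean-plus-one-standard-deviation threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def quat_distance(q1, q2):
    """Angular distance between two unit quaternions."""
    d = abs(np.dot(q1, q2))
    return 2.0 * np.arccos(np.clip(d, -1.0, 1.0))

def extract_keyframes(frames):
    """frames: (n_frames, n_joints, 4) array of unit quaternions per joint.
    A frame becomes a key-frame when its summed joint rotation distance to the
    last key-frame exceeds a threshold derived from the sequence itself."""
    dists = np.array([
        sum(quat_distance(frames[i, j], frames[i - 1, j])
            for j in range(frames.shape[1]))
        for i in range(1, len(frames))
    ])
    threshold = dists.mean() + dists.std()   # adaptive: no hand-set parameter
    keyframes, last = [0], 0
    for i in range(1, len(frames)):
        d = sum(quat_distance(frames[i, j], frames[last, j])
                for j in range(frames.shape[1]))
        if d > threshold:
            keyframes.append(i)
            last = i
    return keyframes
```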

  12. An automatic precipitation phase distinction algorithm for optical disdrometer data over the global ocean

    Science.gov (United States)

    Burdanowitz, J.; Klepp, C.; Bakan, S.

    2015-12-01

    The lack of high-quality in situ surface precipitation data over the global ocean so far limits the capability to validate satellite precipitation retrievals. The first systematic ship-based surface precipitation dataset OceanRAIN (Ocean Rainfall And Ice-phase precipitation measurement Network) aims at providing a comprehensive statistical basis of in situ precipitation reference data from optical disdrometers at 1 min resolution deployed on various research vessels (RVs). Deriving the precipitation rate for rain and snow requires a priori knowledge of the precipitation phase (PP). Therefore, we present an automatic PP distinction algorithm using available data based on more than four years of atmospheric measurements onboard RV Polarstern that cover all climatic regions of the Atlantic Ocean. A time-consuming manual PP distinction within the OceanRAIN post-processing serves as reference, mainly based on 3-hourly present-weather information from a human observer. For automation, we find that the combination of air temperature, relative humidity and the 99th percentile of the particle diameter best predicts the PP with respect to the manually determined PP. Excluding mixed phase, this variable combination reaches an accuracy of 91 % when compared to the manually determined PP for about 149 000 min of precipitation from RV Polarstern. Including mixed phase (165 000 min), 81.2 % accuracy is reached with a slight snow overprediction bias of 0.93 for two independent PP distributions. In that respect, a method using two independent PP distributions outperforms a method based on only one PP distribution. The new statistical automatic PP distinction method significantly speeds up the data post-processing within OceanRAIN while introducing an objective PP probability for each PP at 1 min resolution.
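
    The kind of statistical phase classifier described above can be sketched as a simple model trained on the three best predictors (air temperature, relative humidity and the 99th percentile of particle diameter). The synthetic data, labels and the logistic-regression choice below are placeholders; the published OceanRAIN method and its coefficients are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training table: one row per minute of precipitation.
# Predictors: air temperature (deg C), relative humidity (%), 99th percentile
# particle diameter (mm); label: 0 = rain, 1 = snow (mixed phase excluded).
rng = np.random.default_rng(2)
n = 5000
T = rng.normal(3, 6, n)
RH = rng.uniform(60, 100, n)
D99 = rng.gamma(2.0, 1.0, n)
snow = (T < 1) & (D99 > 1.5)                 # synthetic labels for the sketch only
X = np.column_stack([T, RH, D99])

clf = LogisticRegression().fit(X, snow)
phase_prob = clf.predict_proba(X)[:, 1]       # per-minute snow probability
```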

  13. Research on automatic loading & unloading technology for vertical hot ring rolling process

    Directory of Open Access Journals (Sweden)

    Xiaokai Wang

    2015-01-01

    Full Text Available The automatic loading & unloading technology is the key to an automatic ring production line. In this paper, the automatic vertical hot ring rolling (VHRR) process is taken as the target, the method of loading & unloading for VHRR is proposed, and the mechanical structure of the loading & unloading system is designed. The virtual prototype model of the VHRR mill and the loading & unloading mechanism is established, the coordinated control method of the VHRR mill and the loading & unloading auxiliaries is studied, and the movement trace and dynamic characteristics of the critical components are obtained. Finally, a series of hot ring rolling tests are conducted on the VHRR mill, and the production rhythm and the formed rings' geometric precision are analysed. The test results show that the loading & unloading technology can meet the requirements of high-quality and high-efficiency ring production. The research conclusions have practical significance for large-scale automatic ring production.

  14. Automatic Conveyor System with In-Process Sorting Mechanism using PLC and HMI System

    Directory of Open Access Journals (Sweden)

    Y V Aruna

    2015-11-01

    Full Text Available Programmable logic controllers are widely used in many manufacturing processes such as machinery, packaging, material handling and automatic assembly. These are a special type of microprocessor-based controller used for any application that needs electrical control, including lighting controllers and HVAC control systems. An automatic conveyor system is a computerized method of controlling and managing the sorting mechanism while maintaining the efficiency of the industry and the quality of the products. The HMI for the automatic conveyor system is considered the primary way of controlling each operation. Text displays are available as well as graphical touch screens. It is used in touch panels and local monitoring of machines. This paper deals with the efficient use of a PLC in an automatic conveyor system and with improving its accuracy.

  15. A Development Process for Enterprise Information Systems Based on Automatic Generation of the Components

    Directory of Open Access Journals (Sweden)

    Adrian ALEXANDRESCU

    2008-01-01

    Full Text Available This paper contains some ideas concerning Enterprise Information Systems (EIS) development. It combines known elements from the software engineering domain with original elements that the author has conceived and tested. The author has followed two major objectives: to use a simple description for the concepts of an EIS, and to achieve a rapid and reliable EIS development process with minimal cost. The first goal was achieved by defining some models that describe the conceptual elements of the EIS domain: entities, events, actions, states and attribute-domains. The second goal is based on a predefined architectural model for the EIS, on predefined analysis and design models for the elements of the domain and finally on the automatic generation of the system components. The proposed methods do not depend on a specific programming language or database management system. They are general and may be applied to any combination of such technologies.

  16. Automatic subject classification of textual documents using limited or no training data

    OpenAIRE

    Joorabchi, Arash

    2010-01-01

    With the explosive growth in the number of electronic documents available on the internet, intranets, and digital libraries, there is a greater need than ever for automatic systems capable of indexing and organising such large volumes of data. Automatic Text Classification (ATC) has become one of the principal means for enhancing the performance of information retrieval systems and organising digital libraries and other textual collections. Within this context, the use of ...

  17. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  18. Automatic structural matching of 3D image data

    Science.gov (United States)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable for successfully matching pictures of natural landscapes taken in differing seasons, from differing aspect angles, and by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). In the reported version, the algorithm is enhanced by additionally using information on the third spatial coordinate of observed points on object surfaces. Thus, it is now capable of matching images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.

  19. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    Science.gov (United States)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  20. Towards Effective Sentence Simplification for Automatic Processing of Biomedical Text

    CERN Document Server

    Jonnalagadda, Siddhartha; Hakenberg, Jorg; Baral, Chitta; Gonzalez, Graciela

    2010-01-01

    The complexity of sentences characteristic to biomedical articles poses a challenge to natural language parsers, which are typically trained on large-scale corpora of non-technical text. We propose a text simplification process, bioSimplify, that seeks to reduce the complexity of sentences in biomedical abstracts in order to improve the performance of syntactic parsers on the processed sentences. Syntactic parsing is typically one of the first steps in a text mining pipeline. Thus, any improvement in performance would have a ripple effect over all processing steps. We evaluated our method using a corpus of biomedical sentences annotated with syntactic links. Our empirical results show an improvement of 2.90% for the Charniak-McClosky parser and of 4.23% for the Link Grammar parser when processing simplified sentences rather than the original sentences in the corpus.

  1. Process concepts for semi-automatic dismantling of LCD televisions

    OpenAIRE

    Elo, Kristofer; Sundin, Erik

    2014-01-01

    There is a large variety of electrical and electronic equipment products, for example liquid crystal display television sets (LCD TVs), in the waste stream today. Many LCD TVs contain mercury, which is a challenge to treat at recycling plants. Two currently used processes to recycle LCD TVs are automated shredding and manual disassembly. This paper aims to present concepts for semi-automated dismantling processes for LCD TVs in order to achieve higher productivity and flexibility, and in tu...

  2. Automatic digital document processing and management problems, algorithms and techniques

    CERN Document Server

    Ferilli, Stefano

    2011-01-01

    This text reviews the issues involved in handling and processing digital documents. Examining the full range of a document's lifetime, this book covers acquisition, representation, security, pre-processing, layout analysis, understanding, analysis of single components, information extraction, filing, indexing and retrieval. This title: provides a list of acronyms and a glossary of technical terms; contains appendices covering key concepts in machine learning, and providing a case study on building an intelligent system for digital document and library management; discusses issues of security,

  3. Modular toolkit for Data Processing (MDP): a Python data processing framework

    Directory of Open Access Journals (Sweden)

    Tiziano Zito

    2009-01-01

    Full Text Available Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework, which can easily be expanded. The implementation of new algorithms is easy and intuitive. The newly implemented units are then automatically integrated with the rest of the library. MDP has been written in the context of theoretical research in neuroscience, but it has been designed to be helpful in any context where trainable data processing algorithms are used. Its simplicity on the user's side, the variety of readily available algorithms, and the reusability of the implemented units make it also a useful educational tool.
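
    As a brief usage example — assuming MDP's documented Flow and node interfaces (mdp.Flow, mdp.nodes) — a small processing sequence might look like the following sketch.

```python
import numpy as np
import mdp  # Modular toolkit for Data Processing

# A feed-forward processing sequence: PCA for dimensionality reduction,
# followed by Slow Feature Analysis on the reduced data.
flow = mdp.Flow([
    mdp.nodes.PCANode(output_dim=10),
    mdp.nodes.SFANode(output_dim=3),
])

x = np.random.random((1000, 50))   # 1000 observations, 50 variables
flow.train(x)                      # each node in the flow is trained in turn
y = flow(x)                        # execute the whole sequence on data
```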

  4. An Automatic and Real-time Restoration of Gamma Dose Data by Radio Telemetry

    International Nuclear Information System (INIS)

    An on-line gamma monitoring system based on a high-pressure ionization chamber has been used for determining airborne doses around the HANARO research reactor at KAERI (Korea Atomic Energy Research Institute). It is composed of a network of six monitoring stations and an on-line computer system. It has been operated by radio telemetry at a radio frequency of 468.8 MHz, which is able to transmit the real-time dose data measured by a remote ion chamber to the central computer at intervals of ten seconds. Although radio telemetry has several advantages such as effective and economical transmission, there is one main problem: data loss happens because each monitoring post only stores 300 radiation data points, which cover the previous 50 minutes of sequential data in the case of a recording interval of 10 seconds. It is possible to restore the lost data by an off-line process such as a floppy disk or portable memory disk, but this is an ineffective method for a real-time monitoring system. Restoration, storage, and display of the current data as well as the lost data are also difficult in the present system. In this paper, an automatic and real-time restoration method by radio telemetry is introduced.

  5. AUTOMATIC EXTRACTION OF ROAD SURFACE AND CURBSTONE EDGES FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    A. Miraliakbari

    2015-05-01

    Full Text Available We present a procedure for automatic extraction of the road surface from geo-referenced mobile laser scanning data. The basic assumption of the procedure is that the road surface is smooth and limited by curbstones. Two variants of jump detection are investigated for detecting curbstone edges, one based on height differences, the other based on histograms of the height data. Region growing algorithms are proposed which use the irregular laser point cloud. Two- and four-neighbourhood growing strategies utilize the two height criteria for examining the neighbourhood. Both height criteria rely on an assumption about the minimum height of a low curbstone. Road boundaries with lower or no jumps will not stop the region growing process. In contrast, objects on the road can terminate the process. Therefore further processing, such as bridging gaps between detected road boundary points and removing wrongly detected curbstone edges, is necessary. Road boundaries are finally approximated by splines. Experiments are carried out with a ca. 2 km network of small streets located in the neighbourhood of the University of Applied Sciences in Stuttgart. For accuracy assessment of the extracted road surfaces, ground truth measurements are digitized manually from the laser scanner data. For completeness and correctness of the region growing result, values between 92% and 95% are achieved.
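
    The height-difference variant of curbstone jump detection can be sketched as below. The window size and minimum jump height are placeholder parameters standing in for the paper's minimum-curbstone-height assumption, not the published values.

```python
import numpy as np

def detect_curb_jumps(profile, min_jump=0.05, window=0.5):
    """profile: (n, 2) array of (distance along scan line, height) points,
    ordered by distance.  Flags positions where the height rise within a short
    look-ahead window exceeds the assumed minimum curbstone height (metres)."""
    d, z = profile[:, 0], profile[:, 1]
    jumps = []
    for i in range(len(d)):
        near = (d > d[i]) & (d <= d[i] + window)
        if near.any() and (z[near].max() - z[i]) >= min_jump:
            jumps.append(i)
    return np.array(jumps)
```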

  6. Uncertain Training Data Edition for Automatic Object-Based Change Map Extraction

    Science.gov (United States)

    Hajahmadi, S.; Mokhtarzadeh, M.; Mohammadzadeh, A.; Valadanzouj, M. J.

    2013-09-01

    Due to the rapid transformation of the societies, and the consequent growth of the cities, it is necessary to study these changes in order to achieve better control and management of urban areas and assist the decision-makers. Change detection involves the ability to quantify temporal effects using multi-temporal data sets. The available maps of the under study area is one of the most important sources for this reason. Although old data bases and maps are a great resource, it is more than likely that the training data extracted from them might contain errors, which affects the procedure of the classification; and as a result the process of the training sample editing is an essential matter. Due to the urban nature of the area studied and the problems caused in the pixel base methods, object-based classification is applied. To reach this, the image is segmented into 4 scale levels using a multi-resolution segmentation procedure. After obtaining the segments in required levels, training samples are extracted automatically using the existing old map. Due to the old nature of the map, these samples are uncertain containing wrong data. To handle this issue, an editing process is proposed according to K-nearest neighbour and k-means algorithms. Next, the image is classified in a multi-resolution object-based manner and the effects of training sample refinement are evaluated. As a final step this classified image is compared with the existing map and the changed areas are detected.

  7. On the feasibility of automatic detection of range deviations from in-beam PET data

    Science.gov (United States)

    Helmbrecht, S.; Santiago, A.; Enghardt, W.; Kuess, P.; Fiedler, F.

    2012-03-01

    In-beam PET is a clinically proven method for monitoring ion beam cancer treatment. The objective is predominantly the verification of the range of the primary particles. Because different processes lead to dose and activity, evaluation is done by comparing measured data to simulated data. Up to now, the comparison has been performed by well-trained observers (clinicians, physicists). This process is very time consuming and low in reproducibility; hence, an automatic method is desirable. A one-dimensional algorithm for range comparison has been enhanced and extended to three dimensions. System-inherent uncertainties are handled by means of a statistical approach. To test the method, a set of data was prepared. Distributions of β+-activity calculated from treatment plans were compared to measurements performed in the framework of the German Heavy Ion Tumor Therapy Project at GSI Helmholtz Centre for Heavy Ion Research, Darmstadt, Germany. Artificial range deviations in the simulations served as test objects for the algorithm. Range modifications of different depths (4, 6 and 10 mm water equivalent path length) can be detected. Even though the sensitivity and specificity of a visual evaluation are higher, the method is feasible as the basis for the selection of patients from the data pool for retrospective evaluation of treatment and treatment plans and correlation with follow-up data. Furthermore, it can be used for the development of an assistance tool for a clinical application.

  8. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David

    2013-12-01

    Nanoimprint Lithography (NIL) is a promising technology for low cost and large scale nanostructure fabrication. This technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. Results are independent of the device which captures the image (optical, confocal or electron microscope). The use of digital images allows the detection to be automated and a statistical analysis of defects to be computed. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.

  9. Automatic Detection of Steel Ball's Surface Flaws Based on Image Processing

    Institute of Scientific and Technical Information of China (English)

    YU Zheng-lin; TAN Wei; YANG Dong-lin; CAO Guo-hua

    2007-01-01

    A new method to detect steel ball surface flaws is presented, based on computer techniques of image processing and pattern recognition. Surface flaws of steel balls are the primary factor causing bearing failure. Highly efficient and precise detection of steel ball surface flaws, including spots, abrasion, burns, scratches and cracks, can be conducted with the presented method. The design of the main components of the detection system is described in detail, including the automatic feeding mechanism, the automatic spreading mechanism for the steel ball surface, the optical system of the microscope, the image acquisition system and the image processing system. The whole automatic system is controlled by an industrial control computer, which can effectively recognise steel ball surface flaws.

  10. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    Science.gov (United States)

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

    In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, which are the processed data capped at speed limit and the unprocessed data retaining the original speed were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model and the models with random parameters achieved the best model fitting. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other geometric factors were significant including auxiliary lanes and horizontal curvature. PMID:26722989
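
    As a point of reference, the Negative Binomial baseline that the multi-level and random-parameters models are compared against can be fitted in a few lines. The covariates, the synthetic data and the use of statsmodels below are illustrative assumptions, not the study's dataset or its Bayesian implementation.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical segment-level data: crash counts with traffic and geometry covariates.
rng = np.random.default_rng(4)
n = 300
mean_speed = rng.normal(95, 10, n)        # km/h, e.g. from AVI data
speed_std = rng.gamma(2, 2, n)            # speed variation
aux_lane = rng.integers(0, 2, n)          # auxiliary lane indicator
curvature = rng.gamma(1.5, 0.002, n)      # horizontal curvature
X = sm.add_constant(np.column_stack([mean_speed, speed_std, aux_lane, curvature]))
crashes = rng.poisson(2, n)               # synthetic response for the sketch

nb = sm.GLM(crashes, X, family=sm.families.NegativeBinomial()).fit()
print(nb.summary())
```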

  11. A study investigating variability of left ventricular ejection fraction using manual and automatic processing modes in a single setting

    International Nuclear Information System (INIS)

    Purpose: A planar multi-gated cardiac blood pool acquisition is a non-invasive technique commonly used to measure left ventricular ejection fraction (LVEF). It is essential that the calculation of LVEF be accurate, repeatable and reproducible for serial monitoring of patients. Different processing modes may be used in calculating the LVEF which require various degrees of manipulation. In addition, different operators with varying levels of experience may process the same data set. It is not known whether the inter-operator variability of LVEF values within a single nuclear medicine department has the potential to affect the calculated LVEF and in turn affect patient management. The aim of the study was to determine variability of LVEF values among operators with different levels of experience using two processing modes. Methods: A descriptive cross-sectional study was carried out in a single setting. Four operators with varying levels of experience analysed 120 left anterior oblique projections using manual and automatic processing modes to calculate the LVEF. Inter- and intra-operator correlation was determined. Results: Manual processing showed moderate to strong agreement (r1 = 0.653) between operators. Automatic processing indicated almost perfect (r1 = 0.812) inter-operator correlation. Intra-operator correlation demonstrated a trend of decreasing variability between processing modes with increasing levels of experience. Conclusion: Despite the overall inter-operator agreement, significant intra-operator variability was evident in results from operators with less experience. However, the discrepancies were such that the differences in LVEF would not play a role in patient management. It is recommended that automatic processing be used for determining LVEF to limit inter-operator variability. Additionally operator experience should be considered in the absence of standardised processing protocols when different processing modes are available in a single

  12. Adaptive Clutch Engaging Process Control for Automatic Mechanical Transmission

    Institute of Scientific and Technical Information of China (English)

    LIU Hai-ou; CHEN Hui-yan; DING Hua-rong; HE Zhong-bo

    2005-01-01

    Based on a detailed analysis of clutch engaging process control targets and adaptive demands, a control strategy based on the speed signal, different from one based on the main clutch displacement signal, is put forward. It considers both jerk and slipping work, which are the most commonly used quality evaluation indexes of the vehicle starting phase. The adaptive control system and its reference model are discussed in depth. Taking the adaptability to different starting gears and different road conditions as examples, some proving ground test records are shown to illustrate the main clutch adaptive control strategy in the starting phase. The proving ground tests give acceptable results.

  13. Fast Implementation of Matched Filter Based Automatic Alignment Image Processing

    Energy Technology Data Exchange (ETDEWEB)

    Awwal, A S; Rice, K; Taha, T

    2008-04-02

    Video images of laser beams imprinted with distinguishable features are used for alignment of 192 laser beams at the National Ignition Facility (NIF). Algorithms designed to determine the position of these beams enable the control system to perform the task of alignment. Centroiding is a common approach used for determining the position of beams. However, real-world beam images suffer from intensity fluctuations or other distortions, which make such an approach susceptible to higher position measurement variability. Matched filtering used for identifying the beam position results in greater stability of position measurement compared to that obtained using the centroiding technique. However, this gain is achieved at the expense of extra processing time required for each beam image. In this work we explore the possibility of using a field programmable gate array (FPGA) to speed up these computations. The results indicate a performance improvement of a factor of 20 using the FPGA relative to a 3 GHz Pentium 4 processor.
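
    The software counterpart of the matched-filter position estimate is a cross-correlation peak search, sketched below with SciPy. The mean-subtraction and template handling here are simplifying assumptions; the NIF/FPGA implementation is not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def matched_filter_position(image, template):
    """Estimate the position of a beam feature as the peak of the
    cross-correlation between the image and a template of the feature."""
    t = template - template.mean()                            # zero-mean template
    corr = fftconvolve(image, t[::-1, ::-1], mode="same")     # correlation via convolution
    row, col = np.unravel_index(np.argmax(corr), corr.shape)  # peak location
    return row, col
```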

  14. Plutonium monitor: data processing

    International Nuclear Information System (INIS)

    The principle of the real-time determination of the air volumetric activity from the measurement of the filter activity is described. The ''Pu'' measurement processing has to complete the Pu/natural-radioactivity discrimination that the sampler cannot do alone. The basic process of the measurement processing is described. For the operational checkout and the examination of the performance of the processing, and for the technical success of a measurement-processing system, it is possible to use a real-time simulation of the different sensors; in the case of ''Pu'' processing, a mock-up of the sampler has been preferred; it gives the elementary countings due to natural radioactivity for the two ''Pu'' and ''RaA'' windows; it has been associated with a simulator giving the pulses corresponding, in the ''Pu'' window, to ''Pu'' only, according to the chosen profile. The main results obtained after several hundred simulations are given; eight quite representative diagrams are presented. To conclude, the performance of the BFSA monitor for plutonium aerosol monitoring, in which the TMAPU2 measurement-processing system and a high-performance detection head are associated, is reviewed.

  15. Dynamic Data Driven Applications Systems (DDDAS) modeling for automatic target recognition

    Science.gov (United States)

    Blasch, Erik; Seetharaman, Guna; Darema, Frederica

    2013-05-01

    The Dynamic Data Driven Applications System (DDDAS) concept uses applications modeling, mathematical algorithms, and measurement systems to work with dynamic systems. A dynamic system such as Automatic Target Recognition (ATR) is subject to sensor, target, and environment variations over space and time. We use the DDDAS concept to develop an ATR methodology for multiscale-multimodal analysis that seeks to integrate sensing, processing, and exploitation. In the analysis, we use computer vision techniques to explore the capabilities and analogies that DDDAS has with information fusion. The key attribute of coordination is the use of sensor management as a data-driven technique to improve performance. In addition, DDDAS supports the need for modeling, from which uncertainty and variations are used within the dynamic models for advanced performance. As an example, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between ATR and DDDAS systems that warrant an integrated perspective. This elementary work is aimed at triggering a sequence of deeper, insightful research towards exploiting sparsely sampled piecewise dense WAMI measurements - an application where the challenges of big data with regard to mathematical fusion relationships and high-performance computation remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenges of exponentially increasing data volume for advanced information fusion system solutions such as simultaneous target tracking and ATR.

  16. Bayesian Updating in the EEG : Differentiation between Automatic and Controlled Processes of Human Economic Decision Making

    OpenAIRE

    Hügelschäfer, Sabine

    2011-01-01

    Research has shown that economic decision makers often do not behave according to the prescriptions of rationality, but instead show systematic deviations from rational behavior (e.g., Starmer, 2000). One approach to explain these deviations is taking a dual-process perspective (see Evans, 2008; Sanfey & Chang, 2008; Weber & Johnson, 2009) in which a distinction is made between deliberate, resource-consuming controlled processes and fast, effortless automatic processes. In many cases, deviati...

  17. Cooperative processing data bases

    Science.gov (United States)

    Hasta, Juzar

    1991-01-01

    Cooperative processing for the 1990's using client-server technology is addressed. The main theme is concepts of downsizing from mainframes and minicomputers to workstations on a local area network (LAN). This document is presented in view graph form.

  18. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage

    Directory of Open Access Journals (Sweden)

    Syed Ali Naqi Gilani

    2016-03-01

    Full Text Available The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences between urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally employed, which adversely affect the detection performance. Buildings that are small in size, under shadows, or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from the point cloud and orthoimagery. The building delineation process is carried out by identifying the candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the building regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate per-area completeness of 83% to 93% with correctness above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. When compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets and performs better than or equal to its counterparts on the ISPRS benchmark.

  19. Automatic diagnosis of pathological myopia from heterogeneous biomedical data.

    Directory of Open Access Journals (Sweden)

    Zhuo Zhang

    Full Text Available Pathological myopia is one of the leading causes of blindness worldwide. The condition is particularly prevalent in Asia. Unlike myopia, pathological myopia is accompanied by degenerative changes in the retina, which if left untreated can lead to irrecoverable vision loss. The accurate diagnosis of pathological myopia will enable timely intervention and facilitate better disease management to slow down the progression of the disease. Current methods of assessment typically consider only one type of data, such as that from retinal imaging. However, different kinds of data, including genetic, demographic and clinical information, may contain different and independent information, which can provide different perspectives on the visually observable, genetic or environmental mechanisms for the disease. The combination of these potentially complementary pieces of information can enhance the understanding of the disease, providing a holistic appreciation of the multiple risk factors as well as improving the detection outcomes. In this study, we propose a computer-aided diagnosis framework for Pathological Myopia diagnosis through Biomedical and Image Informatics (PM-BMII). Through the use of multiple kernel learning (MKL) methods, PM-BMII intelligently fuses heterogeneous biomedical information to improve the accuracy of disease diagnosis. Data from 2,258 subjects of a population-based study, in which demographic and clinical information, retinal fundus imaging data and genotyping data were collected, are used to evaluate the proposed framework. The experimental results show that PM-BMII achieves an AUC of 0.888, outperforming the detection results from the use of demographic and clinical information (AUC 0.607; increase 46.3%, p<0.005), genotyping data (0.774; increase 14.7%, p<0.005) or imaging data (0.852; increase 4.2%, p=0.19) alone. The accuracy of the results obtained demonstrates the feasibility of using heterogeneous data for improved disease
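
    A common way to fuse such heterogeneous sources is to combine per-source kernels into one Gram matrix, as in the sketch below. The fixed kernel weights and the RBF choice are simplifying assumptions; PM-BMII learns the combination with MKL, which is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(views, weights):
    """views: list of (n_samples, n_features) blocks, one per data source
    (e.g. clinical, genotype, image features); weights: one weight per view.
    Returns a single fused Gram matrix."""
    return sum(w * rbf_kernel(v) for v, w in zip(views, weights))

rng = np.random.default_rng(3)
n = 200
clinical = rng.normal(size=(n, 8))
genotype = rng.normal(size=(n, 30))
imaging = rng.normal(size=(n, 64))
y = rng.integers(0, 2, n)                         # synthetic labels for the sketch

K = combined_kernel([clinical, genotype, imaging], weights=[0.2, 0.3, 0.5])
clf = SVC(kernel="precomputed").fit(K, y)
pred = clf.predict(K)   # in practice, split the Gram matrix into train/test blocks
```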

  20. The Development of Automatic and Controlled Inhibitory Retrieval Processes in True and False Recall

    Science.gov (United States)

    Knott, Lauren M.; Howe, Mark L.; Wimmer, Marina C.; Dewhurst, Stephen A.

    2011-01-01

    In three experiments, we investigated the role of automatic and controlled inhibitory retrieval processes in true and false memory development in children and adults. Experiment 1 incorporated a directed forgetting task to examine controlled retrieval inhibition. Experiments 2 and 3 used a part-set cue and retrieval practice task to examine…

  1. Relatedness Proportion Effects in Semantic Categorization: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2014-01-01

    Semantic priming effects at a short prime-target stimulus onset asynchrony are commonly explained in terms of an automatic spreading activation process. According to this view, the proportion of related trials should have no impact on the size of the semantic priming effect. Using a semantic categorization task ("Is this a living…

  2. Semi-Automatic Post-Processing for Improved Usability of Electure Podcasts

    Science.gov (United States)

    Hurst, Wolfgang; Welte, Martina

    2009-01-01

    Purpose: Playing back recorded lectures on handheld devices offers interesting perspectives for learning, but suffers from small screen sizes. The purpose of this paper is to propose several semi-automatic post-processing steps in order to improve usability by providing a better readability and additional navigation functionality.…

  3. Towards the development of Hyperspectral Images of trench walls. Robotrench: Automatic Data acquisition

    Science.gov (United States)

    Ragona, D. E.; Minster, B.; Rockwell, T. K.; Fialko, Y.; Bloom, R. G.; Hemlinger, M.

    2004-12-01

    Previous studies on imaging spectrometry of paleoseismological excavations (Ragona et al., 2003, 2004) showed that low-resolution hyperspectral imagery of a trench wall, processed with a supervised classification algorithm, provided more stratigraphic information than a high-resolution digital photograph of the same exposure. Although the low-resolution images depicted the most important variations, a higher-resolution hyperspectral image is necessary to assist in the recognition and documentation of paleoseismic events. Because our spectroradiometer can only acquire one pixel at a time, creating a 25 psi image of a 1 x 1 m area of a trench wall would require 40000 individual measurements. To ease this extensive task we designed and built a device that can automatically position the spectroradiometer probe along the x-z plane of a trench wall. This device, informally named Robotrench, has two 7-foot-long axes of motion (horizontal and vertical) commanded by a stepper motor controller board and a laptop computer. A platform provides the set-up for the spectroradiometer probe and for the calibrated illumination system. A small circuit provided the interface between the Robotrench motion and the spectroradiometer data collection. At its best, the Robotrench-spectroradiometer symbiotic pair can automatically record 1500-2000 pixels/hour, making the image acquisition process slow but feasible. At the time of this abstract submission only a small calibration experiment had been completed. This experiment was designed to calibrate the X-Z axes and to test the instrument's performance. We measured a 20 x 10 cm brick wall at 25 psi resolution. Three reference marks were set up on the trench wall as control points for the image registration process. The experiment was conducted at night under artificial light (stabilized 2 x 50 W halogen lamps). The data obtained were processed with the Spectral Angle Mapper algorithm. The image recovered from the data showed an

  4. Bioinformatics big data processing

    OpenAIRE

    Cohen-Boulakia, Sarah; Valduriez, Patrick

    2016-01-01

    The volumes of bioinformatics data available on the Web are constantly increasing. Access and joint exploitation of these highly distributed data (i.e., available in distributed Web data sources) and highly heterogeneous data (in text or tabulated files including images, in different formats, described with different levels of detail and different levels of quality ...) is essential for biological knowledge to progress. The purpose of this short report is to present in a simple way the problems of the joint...

  5. Processing LHC data

    CERN Multimedia

    CERN IT department

    2013-01-01

    The LHC produces 600 million collisions every second in each detector, which generates approximately one petabyte of data per second. None of today’s computing systems are capable of recording such rates. Hence sophisticated selection systems are used for a first fast electronic pre-selection, only passing one out of 10 000 events. Tens of thousands of processor cores then select 1% of the remaining events. Even after such a drastic data reduction, the four big experiments, ALICE, ATLAS, CMS and LHCb, together need to store over 25 petabytes per year. The LHC data are aggregated in the CERN Data Centre, where initial data reconstruction is performed, and a copy is archived to long-term tape storage. Another copy is sent to several large scale data centres around the world. Subsequently hundreds of thousands of computers from around the world come into action: harnessed in a distributed computing service, they form the Worldwide LHC Computing Grid (WLCG), which provides the resources to store, distribute, an...

  6. Evaluation of automatic face recognition for automatic border control on actual data recorded of travellers at Schiphol Airport

    NARCIS (Netherlands)

    Spreeuwers, L.J.; Hendrikse, A.J.; Gerritsen, K.J.; Brömme, A.; Busch, C.

    2012-01-01

    Automatic border control at airports using automated facial recognition for checking the passport is becoming more and more common. A problem is that it is not clear how reliable these automatic gates are. Very few independent studies exist that assess the reliability of automated facial recognition

  7. From Automatic to Adaptive Data Acquisition:- towards scientific sensornets

    OpenAIRE

    Chang, Marcus

    2009-01-01

    Sensornets have been used for ecological monitoring the past decade, yet the main driving force behind these deployments are still computer scientists. The denser sampling and added modalities offered by sensornets could drive these fields in new directions, but not until the domain scientists become familiar with sensornets and use them as any other instrument in their toolbox. We explore three different directions in which sensornets can become easier to deploy, collect data of higher quality, and o...

  8. Automatic registration of multi-source medium resolution satellite data

    OpenAIRE

    L. Barazzetti; M. Gianinetto; M. Scaioni

    2014-01-01

    Multi-temporal and multi-source images gathered from satellite platforms are nowadays a fundamental source of information in several domains. One of the main challenges in the fusion of different data sets consists in the registration issue, i.e., the integration into the same framework of images collected with different spatial resolution and acquisition geometry. This paper presents a novel methodology to accomplish this task on the basis of a method that stands out from existing a...

  9. Parallelization and automatic data distribution for nuclear reactor simulations

    International Nuclear Information System (INIS)

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed

  10. Automatic analysis of eye tracker data from a driving simulator

    OpenAIRE

    Bergstrand, Martin

    2008-01-01

    The movement of a person's eyes is an interesting factor to study in different research areas where attention is important, for example driving. In 2004 the Swedish national road and transport research institute (VTI) introduced Simulator III – their third generation of driving simulators. Inside Simulator III a camera-based eye tracking system is installed that records the eye movements of the driver. To be useful, the raw data from the eye tracking system needs to be analyzed and concentrated...

  11. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  12. Measuring Service Reliability Using Automatic Vehicle Location Data

    OpenAIRE

    2014-01-01

    Bus service reliability has become a major concern for both operators and passengers. Buffer time measures are believed to be appropriate to approximate passengers' experienced reliability in the context of departure planning. Two issues with regard to buffer time estimation are addressed, namely, performance disaggregation and capturing passengers' perspectives on reliability. A Gaussian mixture model based method is applied to disaggregate the performance data. Based on the mixture models ...
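
    A two-component mixture of the kind mentioned above can be fitted directly to AVL journey times, as in the sketch below. The synthetic data and the simple percentile-based buffer-time measure are illustrative assumptions, not the paper's estimators.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Hypothetical AVL journey times (minutes) mixing a normal regime and a delayed one.
times = np.concatenate([rng.normal(22, 1.5, 800), rng.normal(30, 4.0, 200)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(times.reshape(-1, 1))
regime = gmm.predict(times.reshape(-1, 1))        # disaggregate the performance data

# Buffer time: extra time over the median a passenger should plan for.
buffer_time = np.percentile(times, 95) - np.percentile(times, 50)
```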

  13. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML

    Directory of Open Access Journals (Sweden)

    Rhodri eCusack

    2015-01-01

    Full Text Available Recent years have seen neuroimaging data becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complex to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast and efficient, for simple single subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.

  14. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML.

    Science.gov (United States)

    Cusack, Rhodri; Vicente-Grabovetsky, Alejandro; Mitchell, Daniel J; Wild, Conor J; Auer, Tibor; Linke, Annika C; Peelle, Jonathan E

    2014-01-01

    Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, for simple-single subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address. PMID:25642185

  15. Big Data Analysis of Manufacturing Processes

    Science.gov (United States)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-11-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results.

  16. An algorithm for discovering Lagrangians automatically from data

    CERN Document Server

    Hills, D J A; Hudson, J J

    2015-01-01

    An activity fundamental to science is building mathematical models. These models are used to both predict the results of future experiments and gain insight into the structure of the system under study. We present an algorithm that automates the model building process in a scientifically principled way. The algorithm can take observed trajectories from a wide variety of mechanical systems and, without any other prior knowledge or tuning of parameters, predict the future evolution of the system. It does this by applying the principle of least action and searching for the simplest Lagrangian that describes the system's behaviour. By generating this Lagrangian in a human interpretable form, it also provides insight into the working of the system.
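
    The core idea — scoring a candidate Lagrangian by how well the observed trajectory satisfies its Euler-Lagrange equation — can be illustrated in a few lines. The polynomial basis, the synthetic harmonic-oscillator data and the SVD null-space solve below are assumptions for illustration only; they are not the authors' search algorithm, which also accounts for model simplicity.

```python
import numpy as np
import sympy as sp

x, v = sp.symbols("x v")
basis = [v**2, x**2, x]                      # candidate Lagrangian terms

# Synthetic observed trajectory: a unit-frequency harmonic oscillator.
t = np.linspace(0, 10, 400)
xs, vs, accs = np.cos(t), -np.sin(t), -np.cos(t)

# Euler-Lagrange residual of each basis term along the trajectory:
# d/dt(dphi/dv) - dphi/dx = phi_vx * v + phi_vv * a - phi_x
cols = []
for phi in basis:
    derivs = [sp.diff(phi, v, x), sp.diff(phi, v, v), sp.diff(phi, x)]
    f = sp.lambdify((x, v), derivs, "numpy")
    pvx, pvv, px = [np.broadcast_to(np.asarray(d, float), xs.shape) for d in f(xs, vs)]
    cols.append(pvx * vs + pvv * accs - px)
A = np.column_stack(cols)

# The coefficient vector of the simplest consistent Lagrangian lies in the null space of A.
_, _, Vt = np.linalg.svd(A)
coeffs = Vt[-1]
print(dict(zip(map(str, basis), np.round(coeffs / np.abs(coeffs).max(), 3))))
# -> coefficients proportional to (1, -1, 0), i.e. L ∝ v^2 - x^2 (up to scale and sign)
```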

  17. AUTOMR: An automatic processing program system for the molecular replacement method

    International Nuclear Information System (INIS)

    An automatic processing program system of the molecular replacement method AUTMR is presented. The program solves the initial model of the target crystal structure using a homologous molecule as the search model. It processes the structure-factor calculation of the model molecule, the rotation function, the translation function and the rigid-group refinement successively in one computer job. Test calculations were performed for six protein crystals and the structures were solved in all of these cases. (orig.)

  18. Automatic derivation of earth observation products from satellite data within the Siberian Earth System Science Cluster (SIB-ESS-C)

    Science.gov (United States)

    Eberle, J.; Schmullius, C. C.

    2011-12-01

    The Siberian Earth System Science Cluster (SIB-ESS-C), established at the University of Jena (Germany), is a spatial data infrastructure implementing standards published by the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO), aimed at providing researchers focusing on Siberia with the technical means for data discovery, data access, data publication and data analysis when working with earth observation data. At the current development stage the SIB-ESS-C system comprises a federated metadata catalogue accessible through the SIB-ESS-C Web Portal or from any OGC-CSW compliant client. The Web Portal also contains a simple map-like visualization component which is currently being extended into a comprehensive visualization and analysis tool. The visualization component enables users to overlay different datasets found during a catalogue search. All data products are accessible as Web Mapping, Web Feature or Web Coverage Services, allowing users to incorporate the data directly into their applications. New developments aim at the automatic registration and processing of raw earth observation data in order to derive earth observation products on a permanent basis. A data registry system, embedded in an overall processing system with process chains that implement algorithms, is currently being designed. This will be extended with a system that processes incoming data automatically and permanently, depending on the registered algorithms: algorithms should know which input data they require, and registered data should know which algorithms can be executed on them. This paper describes current developments as well as future ideas for building useful and user-friendly access to satellite data, algorithms and the products derived from them, using state-of-the-art web technologies and OGC standards.

  19. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2013-11-01

    Data registration is a prerequisite for the integration of multi-platform laser scanning in various applications. A new approach is proposed for the semi-automatic registration of airborne and terrestrial laser scanning data for buildings without eaves. Firstly, an automatic calculation procedure for thresholds in the density of projected points (DoPP) method is introduced to extract boundary segments from terrestrial laser scanning data. A new algorithm, using a self-extending procedure, is developed to recover the extracted boundary segments, which then intersect to form the corners of buildings. The building corners extracted from airborne and terrestrial laser scanning are reliably matched through an automatic iterative process in which boundaries from the two datasets are compared as a reliability check. The experimental results illustrate that the proposed approach provides both high reliability and high geometric accuracy (average error of 0.44 m/0.15 m in the horizontal/vertical direction) for corresponding building corners for the final registration of airborne laser scanning (ALS) and tripod-mounted terrestrial laser scanning (TLS) data.

  20. Organ dose calculation in CT based on scout image data and automatic image registration

    Energy Technology Data Exchange (ETDEWEB)

    Kortesniemi, Mika; Salli, Eero; Seuri, Raija [HUS Helsinki Medical Imaging Center, Univ. of Helsinki, Helsinki (Finland)], E-mail: mika.kortesniemi@hus.fi

    2012-10-15

    Background: Computed tomography (CT) has become the main contributor to the cumulative radiation exposure in radiology. Information on the cumulative exposure history of the patient should be available for efficient management of radiation exposures and for radiological justification. Purpose: To develop and evaluate automatic image registration for organ dose calculation in CT. Material and Methods: Planning radiograph (scout) image data describing CT scan ranges from 15 thoracic CT examinations (9 men and 6 women) and 10 abdominal CT examinations (6 men and 4 women) were co-registered with the reference trunk CT scout image. A 2-D affine transformation and a normalized correlation metric were used for image registration. Longitudinal (z-axis) scan range coordinates on the reference scout image were converted into slice locations on the CT-Expo anthropomorphic male and female models, followed by organ and effective dose calculations. Results: The average deviation of the z-location of the studied patient images from the corresponding location in the reference scout image was 6.2 mm. The ranges of organ and effective doses with constant exposure parameters were from 0 to 28.0 mGy and from 7.3 to 14.5 mSv, respectively. The mean deviation of the doses for fully irradiated organs (inside the scan range), partially irradiated organs and non-irradiated organs (outside the scan range) was 1%, 5%, and 22%, respectively, due to image registration. Conclusion: The automated image processing method to register individual chest and abdominal CT scout radiographs with the reference scout radiograph is feasible. It can be used to determine the individual scan range coordinates in the z-direction to calculate the organ dose values. The presented method could be utilized for automatic organ dose calculation in CT for radiation exposure tracking of patients.
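
    A minimal sketch of the kind of registration described above, reduced to estimating the longitudinal (z) shift between a patient scout and the reference scout by maximizing the normalized correlation of their row-mean intensity profiles. The full method uses a 2-D affine transformation; the array names, equal pixel spacing and search range below are assumptions.

```python
import numpy as np

def z_shift_by_ncc(reference_scout: np.ndarray, patient_scout: np.ndarray,
                   max_shift: int = 200) -> int:
    """Estimate the longitudinal (row-axis) shift that best aligns a patient
    scout radiograph to the reference scout, using the normalized correlation
    of their row-mean intensity profiles."""
    ref = reference_scout.mean(axis=1)   # 1-D intensity profile along z
    pat = patient_scout.mean(axis=1)
    n = min(len(ref), len(pat))
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = ref[s:n], pat[:n - s]
        else:
            a, b = ref[:n + s], pat[-s:n]
        if len(a) < 10:
            continue
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        score = float(np.mean(a * b))    # normalized correlation at this shift
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift  # in rows; multiply by pixel spacing for millimetres
```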

  1. Organ dose calculation in CT based on scout image data and automatic image registration

    International Nuclear Information System (INIS)

    Background: Computed tomography (CT) has become the main contributor to the cumulative radiation exposure in radiology. Information on the cumulative exposure history of the patient should be available for efficient management of radiation exposures and for radiological justification. Purpose: To develop and evaluate automatic image registration for organ dose calculation in CT. Material and Methods: Planning radiograph (scout) image data describing CT scan ranges from 15 thoracic CT examinations (9 men and 6 women) and 10 abdominal CT examinations (6 men and 4 women) were co-registered with the reference trunk CT scout image. A 2-D affine transformation and a normalized correlation metric were used for image registration. Longitudinal (z-axis) scan range coordinates on the reference scout image were converted into slice locations on the CT-Expo anthropomorphic male and female models, followed by organ and effective dose calculations. Results: The average deviation of the z-location of the studied patient images from the corresponding location in the reference scout image was 6.2 mm. The ranges of organ and effective doses with constant exposure parameters were from 0 to 28.0 mGy and from 7.3 to 14.5 mSv, respectively. The mean deviation of the doses for fully irradiated organs (inside the scan range), partially irradiated organs and non-irradiated organs (outside the scan range) was 1%, 5%, and 22%, respectively, due to image registration. Conclusion: The automated image processing method to register individual chest and abdominal CT scout radiographs with the reference scout radiograph is feasible. It can be used to determine the individual scan range coordinates in the z-direction to calculate the organ dose values. The presented method could be utilized for automatic organ dose calculation in CT for radiation exposure tracking of patients.

  2. Automatic testing system design and data analysis of permafrost temperature in Qinghai-Tibet Railway

    Institute of Scientific and Technical Information of China (English)

    尚迎春; 齐红元

    2008-01-01

    Aimed at the characteristics of permafrost temperature that influence the safety of the Qinghai-Tibet Railway and at its on-line testing system, and comparing domestic achievements in permafrost study with those worldwide, an automatic testing system for permafrost temperature, consisting of a master computer and several slave computers, was designed. By choosing high-precision thermistors as temperature sensors, designing and positioning the depth and interval of the testing sections, acquiring, storing and transmitting the permafrost temperature data at regular times on the slave computers, and receiving, processing and analyzing the collected data on the master computer, the change of the permafrost temperature can be described and analyzed, which provides information for permafrost railway engineering design. Moreover, taking permafrost temperature testing in a certain section of the Qinghai-Tibet Railway as an instance, the collected permafrost temperature data were analyzed, the behavior of the permafrost under the railway was depicted, and a BP (back-propagation) neural network model was set up to predict the permafrost characteristics. This testing system will provide timely information about changes in the permafrost to support safe operation of the Qinghai-Tibet Railway.

  3. Automatic and Accurate Conflation of Different Road-Network Vector Data towards Multi-Modal Navigation

    Directory of Open Access Journals (Sweden)

    Meng Zhang

    2016-05-01

    With the rapid improvement of geospatial data acquisition and processing techniques, a variety of geospatial databases from public or private organizations have become available. Quite often, one dataset may be superior to other datasets in one, but not all, aspects. In Germany, for instance, there were three major road-network vector datasets, viz. Tele Atlas (which is now “TOMTOM”), NAVTEQ (which is now “here”), and ATKIS. However, none of them was qualified for the purpose of multi-modal navigation (e.g., driving + walking): Tele Atlas and NAVTEQ consist of comprehensive routing-relevant information, but many pedestrian ways are missing; ATKIS covers more pedestrian areas but the road objects are not fully attributed. To satisfy the requirements of multi-modal navigation, an automatic approach has been proposed to conflate different road networks together, which involves five routines: (a) road-network matching between datasets; (b) identification of the pedestrian ways; (c) geometric transformation to eliminate geometric inconsistency; (d) topologic remodeling of the conflated road network; and (e) error checking and correction. The proposed approach demonstrates high performance in a number of large test areas and has therefore been successfully utilized for real-world data production in the whole region of Germany. As a result, the conflated road network allows the multi-modal navigation of “driving + walking”.

  4. GPU applications for data processing

    Energy Technology Data Exchange (ETDEWEB)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); Aleksandrov, Andrey [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); INFN sezione di Napoli, I-80125 Napoli (Italy); Tioukov, Valeri [INFN sezione di Napoli, I-80125 Napoli (Italy)

    2015-12-31

    Modern experiments that use nuclear photoemulsion require that fast and efficient data acquisition from the emulsion can be performed. The new approaches in developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  5. GPU applications for data processing

    Science.gov (United States)

    Vladymyrov, Mykhailo; Aleksandrov, Andrey; Tioukov, Valeri

    2015-12-01

    Modern experiments that use nuclear photoemulsion require that fast and efficient data acquisition from the emulsion can be performed. The new approaches in developing scanning systems require real-time processing of large amounts of data. Methods that use Graphical Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  6. Automatic detection of alpine rockslides in continuous seismic data using hidden Markov models

    Science.gov (United States)

    Dammeier, Franziska; Moore, Jeffrey R.; Hammer, Conny; Haslinger, Florian; Loew, Simon

    2016-02-01

    Data from continuously recording permanent seismic networks can contain information about rockslide occurrence and timing complementary to eyewitness observations and thus aid in construction of robust event catalogs. However, detecting infrequent rockslide signals within large volumes of continuous seismic waveform data remains challenging and often requires demanding manual intervention. We adapted an automatic classification method using hidden Markov models to detect rockslide signals in seismic data from two stations in central Switzerland. We first processed 21 known rockslides, with event volumes spanning 3 orders of magnitude and station event distances varying by 1 order of magnitude, which resulted in 13 and 19 successfully classified events at the two stations. Retraining the models to incorporate seismic noise from the day of the event improved the respective results to 16 and 19 successful classifications. The missed events generally had low signal-to-noise ratio and small to medium volumes. We then processed nearly 14 years of continuous seismic data from the same two stations to detect previously unknown events. After postprocessing, we classified 30 new events as rockslides, of which we could verify three through independent observation. In particular, the largest new event, with estimated volume of 500,000 m3, was not generally known within the Swiss landslide community, highlighting the importance of regional seismic data analysis even in densely populated mountainous regions. Our method can be easily implemented as part of existing earthquake monitoring systems, and with an average event detection rate of about two per month, manual verification would not significantly increase operational workload.
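
    A compact sketch of the general workflow (not the authors' exact features, stations or model settings), assuming the third-party package hmmlearn, very simple per-window features, and NumPy arrays of waveform samples as inputs.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party package, assumed installed

def feature_windows(trace, sr, win_s=5.0):
    """Very simple per-window features: log energy and dominant frequency."""
    n = int(win_s * sr)
    feats = []
    for i in range(0, len(trace) - n, n):
        w = trace[i:i + n]
        spec = np.abs(np.fft.rfft(w))
        freqs = np.fft.rfftfreq(n, d=1.0 / sr)
        feats.append([np.log(np.sum(w ** 2) + 1e-12), freqs[np.argmax(spec)]])
    return np.asarray(feats)

def train_model(traces, sr, n_states=4):
    """Fit one HMM on the concatenated features of a list of waveforms."""
    parts = [feature_windows(tr, sr) for tr in traces]
    X, lengths = np.vstack(parts), [len(p) for p in parts]
    return GaussianHMM(n_components=n_states, covariance_type="diag",
                       n_iter=50).fit(X, lengths)

def classify(segment, sr, event_model, noise_model):
    """Compare log-likelihoods under an 'event' model and a 'noise' model."""
    X = feature_windows(segment, sr)
    return "rockslide" if event_model.score(X) > noise_model.score(X) else "noise"
```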

  7. Towards Automatic Music Transcription: Extraction of MIDI-Data out of Polyphonic Piano Music

    Directory of Open Access Journals (Sweden)

    Jens Wellhausen

    2005-06-01

    Driven by the increasing amount of music available electronically, the need for automatic search and retrieval systems for music becomes more and more important. In this paper an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications and music analysis. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined to extract the notes played. An algorithm for chord separation based on Independent Subspace Analysis is presented. Finally, the results are used to build a MIDI file.
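
    A drastically simplified, monophonic sketch of the same pipeline (onset segmentation, per-segment pitch estimate, MIDI output); it does not implement the paper's Independent Subspace Analysis chord separation. It assumes the third-party packages librosa and mido and a hypothetical input file name.

```python
import numpy as np
import librosa   # third-party packages, assumed installed
import mido

y, sr = librosa.load("piano.wav", mono=True)                  # hypothetical input
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
bounds = list(onsets) + [len(y)]

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)
tempo = mido.bpm2tempo(120)

for start, end in zip(bounds[:-1], bounds[1:]):
    seg = y[start:end]
    if len(seg) < 1024:
        continue
    # Naive pitch estimate: dominant bin of the windowed spectrum
    spec = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    f0 = np.fft.rfftfreq(len(seg), 1.0 / sr)[np.argmax(spec)]
    if f0 <= 0:
        continue
    note = int(np.clip(round(69 + 12 * np.log2(f0 / 440.0)), 0, 127))
    ticks = int(mido.second2tick((end - start) / sr, mid.ticks_per_beat, tempo))
    track.append(mido.Message("note_on", note=note, velocity=64, time=0))
    track.append(mido.Message("note_off", note=note, velocity=64, time=ticks))

mid.save("piano_transcription.mid")
```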

  8. Prejudice and perception: the role of automatic and controlled processes in misperceiving a weapon.

    Science.gov (United States)

    Payne, B K

    2001-08-01

    Two experiments used a priming paradigm to investigate the influence of racial cues on the perceptual identification of weapons. In Experiment 1, participants identified guns faster when primed with Black faces compared with White faces. In Experiment 2, participants were required to respond quickly, causing the racial bias to shift from reaction time to accuracy. Participants misidentified tools as guns more often when primed with a Black face than with a White face. L. L. Jacoby's (1991) process dissociation procedure was applied to demonstrate that racial primes influenced automatic (A) processing, but not controlled (C) processing. The response deadline reduced the C estimate but not the A estimate. The motivation to control prejudice moderated the relationship between explicit prejudice and automatic bias. Implications are discussed on applied and theoretical levels. PMID:11519925
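
    The process dissociation logic can be written out explicitly. Under the standard assumptions of the procedure, accuracy on congruent trials reflects C + A(1 - C) and stereotype-consistent errors on incongruent trials reflect A(1 - C), which yields the estimates below; the input proportions in the example are made up, not data from the study.

```python
def process_dissociation(p_correct_congruent, p_error_incongruent):
    """Return (C, A): estimates of the controlled and automatic components."""
    C = p_correct_congruent - p_error_incongruent          # controlled estimate
    A = p_error_incongruent / (1.0 - C) if C < 1.0 else float("nan")
    return C, A

# Illustrative (made-up) response proportions, not data from the study:
C, A = process_dissociation(p_correct_congruent=0.90, p_error_incongruent=0.30)
print(f"C = {C:.2f}, A = {A:.2f}")   # C = 0.60, A = 0.75
```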

  9. EXPERIMENTAL INVESTIGATION OF TIME DELAYS DATA TRANSMISSION IN AUTOMATIC CONTROL SYSTEMS

    OpenAIRE

    RYABENKIY Vladimir Mikhailovich; USHKARENKO Alexander Olegovich

    2015-01-01

    A method for the statistical analysis of the time delays in the transmission of information and control packets over the network in automatic control systems is considered in this article. The results of time-delay measurements, obtained on the basis of the analytic dependence of the probability density, allow the time delays for data transmission to be determined theoretically.

  10. Automatic sample changer and microprocessor controlled data router for a small bulk-sample counter

    International Nuclear Information System (INIS)

    We have designed a gamma-ray counting system for small bulk samples that incorporates an automatic sample changer and a multiple data-output device. The system includes an inexpensive microprocessor and is constructed mainly of materials and equipment commonly available at most institutions engaged in nuclear research.

  11. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML

    OpenAIRE

    Rhodri Cusack; Alejandro Vicente-Grabovetsky; Daniel J Mitchell; Peelle, Jonathan E.

    2015-01-01

    Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by m...

  12. Necessary Processing of Personal Data

    DEFF Research Database (Denmark)

    Tranberg, Charlotte Bagger

    2006-01-01

    The Data Protection Directive prohibits processing of sensitive data (racial or ethnic origin, political, religious or philosophical convictions, trade union membership and information on health and sex life). All other personal data may be processed, provided processing is deemed necessary in...... Handelsgesellschaft. The aim of this article is to clarify the necessity requirement of the Data Protection Directive in terms of the general principle of proportionality. The usefulness of the principle of proportionality as the standard by which processing of personal data may be weighed is illustrated by the Peck...

  13. Automatic detection of zebra crossings from mobile LiDAR data

    Science.gov (United States)

    Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.

    2015-07-01

    An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested for application to road management purposes. The algorithm consists of several subsequent processes, starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painted area from the rest of the pavement, median filtering to avoid noisy points, and mathematical morphology to fill the gaps between the pixels in the border of white marks. Once the road marking is detected, its position is calculated. This information is valuable for the inventorying purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. That test showed a completeness of 83%. Non-detected marks mainly result from painting deterioration of the zebra crossing or from occlusions in the point cloud produced by other vehicles on the road.
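
    A rough sketch of the image-processing portion of such a pipeline, assuming the rasterized LiDAR intensity image is already available as an 8-bit array and using OpenCV; the probabilistic Hough variant and all threshold values here are illustrative choices, not those of the paper.

```python
import cv2
import numpy as np

def detect_zebra_stripes(intensity_image: np.ndarray):
    """Rough pipeline on a rasterized LiDAR intensity image (uint8):
    binarize, despeckle, close gaps, then find stripe-like line segments."""
    _, binary = cv2.threshold(intensity_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.medianBlur(binary, 5)                          # remove noisy points
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill mark borders
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    return binary, (lines if lines is not None else [])
```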

  14. Progress on statistical learning systems as data mining tools for the creation of automatic databases in Fusion environments

    International Nuclear Information System (INIS)

    Nowadays, processing all information of a fusion database is a much more important issue than acquiring more data. Although typically fusion devices produce tens of thousands of discharges, specialized databases for physics studies are normally limited to a few tens of shots. This is due to the fact that these databases are almost always generated manually, which is a very time consuming and unreliable activity. The development of automatic methods to create specialized databases ensures first, the reduction of human efforts to identify and locate physical events, second, the standardization of criteria (reducing the vulnerability to human errors) and, third, the improvement of statistical relevance. Classification and regression techniques have been used for these purposes. The objective has been the automatic recognition of physical events (that can appear in a random and/or infrequent way) in waveforms and video-movies. Results are shown for the JET database.

  15. Evaluation of Automatic Building Detection Approaches Combining High Resolution Images and LiDAR Data

    Directory of Open Access Journals (Sweden)

    Javier Estornell

    2011-06-01

    In this paper, two main approaches for automatic building detection and localization using high spatial resolution imagery and LiDAR data are compared and evaluated: thresholding-based and object-based classification. The thresholding-based approach is founded on the establishment of two threshold values: one refers to the minimum height to be considered a building, defined using the LiDAR data, and the other refers to the presence of vegetation, which is defined according to the spectral response. The other approach follows the standard scheme of object-based image classification: segmentation, feature extraction and selection, and classification, here performed using decision trees. In addition, the effect of including contextual relations with the shadows in the building detection process is evaluated. Quality assessment is performed at two different levels: area and object. Area-level evaluation assesses the building delineation performance, whereas object-level evaluation assesses the accuracy in the spatial location of individual buildings. The results show a high efficiency of the evaluated building detection methods, in particular the thresholding-based approach, when the parameters are properly adjusted and adapted to the type of urban landscape considered.
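
    The thresholding-based approach can be stated in a few lines. The sketch below assumes a LiDAR-derived normalized DSM and red/NIR image bands as NumPy arrays, and uses NDVI as one possible spectral vegetation criterion; the threshold values are illustrative, not those of the paper.

```python
import numpy as np

def building_mask(ndsm: np.ndarray, red: np.ndarray, nir: np.ndarray,
                  min_height: float = 2.5, ndvi_max: float = 0.3) -> np.ndarray:
    """Thresholding-based detection: a pixel is 'building' if it is higher than
    min_height metres in the normalized DSM (from LiDAR) and not vegetated
    according to NDVI (from the high-resolution imagery)."""
    ndvi = (nir - red) / (nir + red + 1e-12)
    return (ndsm > min_height) & (ndvi < ndvi_max)
```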

  16. Big data processing with Hadoop

    OpenAIRE

    Wu, Shiqi

    2015-01-01

    Computing technology has changed the way we work, study, and live. The distributed data processing technology is one of the popular topics in the IT field. It provides a simple and centralized computing platform by reducing the cost of the hardware. The characteristics of distributed data processing technology have changed the whole industry. Hadoop, as the open source project of Apache foundation, is the most representative platform of distributed big data processing. The Hadoop distribu...

  17. Event-related potential evidence of dysfunction in automatic processing in abstinent alcoholics.

    Science.gov (United States)

    Realmuto, G; Begleiter, H; Odencrantz, J; Porjesz, B

    The preattentive automatic processing of 63 alcoholics and 27 controls was evaluated with an auditory inattentive event-related oddball paradigm. We examined the mismatch negativity and the N2-P3 complex. Results showed significantly greater amplitude for N2, P3 and the N2-P3 complex for controls but no individual lead (Fz, Cz, Pz) differences by group. A group-by-lead interaction was found for N2 and for the N2-P3 complex. There were no significant latency differences between groups; however, a significant age-by-group interaction effect on latency was greatest at the Cz electrode. Results reflect a possible aberration of automatic processing in alcoholics because of a defect in the mnemonic template necessary to match an infrequent deviant stimulus. We also found suggestive evidence of a relative weakness of frontal cortical organization in alcoholics. Future studies are suggested that would help clarify these differences in alcoholics. PMID:8329490

  18. Entropy algorithm for automatic detection of oil spill from radarsat-2 SAR data

    International Nuclear Information System (INIS)

    Synthetic aperture radar (SAR) is a valuable foundation for oil spill detection, surveying and monitoring, and it improves oil spill detection through various approaches. The main objective of this work is to design automatic detection procedures for oil spills in synthetic aperture radar (SAR) satellite data. In doing so, an entropy algorithm tool was designed to investigate the occurrence of oil spills in the Gulf of Mexico using RADARSAT-2 SAR satellite data. The study shows that the entropy algorithm provides an accurate pattern of the oil slick in SAR data, with 90% accuracy for oil spill, 3% for look-alikes and 7% for sea roughness according to the receiver operating characteristic (ROC) curve. It can therefore be concluded that the entropy algorithm can be used as an automatic tool for oil spill detection in RADARSAT-2 SAR data.
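
    A hedged sketch of an entropy-based screening step, assuming a non-negative SAR intensity image and the scikit-image library; the window radius and threshold are illustrative, and the low-entropy criterion reflects the general observation that slicks damp capillary waves and appear smoother than the surrounding sea, not the paper's exact formulation.

```python
import numpy as np
from skimage.filters.rank import entropy      # third-party package scikit-image
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def oil_spill_candidates(sar_intensity, radius=5, ent_threshold=3.0):
    """Threshold a local-entropy map of a SAR intensity image: slick-like areas
    are texturally smooth (low local entropy) compared with wind-roughened sea."""
    scaled = np.clip(sar_intensity / (sar_intensity.max() + 1e-12), 0.0, 1.0)
    ent = entropy(img_as_ubyte(scaled), disk(radius))
    return ent < ent_threshold
```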

  19. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study

    OpenAIRE

    Tongran Liu; Tong Xiao; Xiaoyan Li; Jiannong Shi

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross change...

  20. Experimental Data Processing. Part 2

    OpenAIRE

    Wilhelm LAURENZI

    2011-01-01

    This paper represents the second part of a study regarding the processing of experimental monofactorial data, and it presents the original program developed by the author for processing experimental data. Using established methods and relations, this program allows establishing the number of samples, generating the experimental plan, entering and saving the measured data, identifying the data corrupted by aberrant errors, verifying the randomness, verifying the normality of data repartition, calc...

  1. SDPG: Spatial Data Processing Grid

    Institute of Scientific and Technical Information of China (English)

    XIAO Nong(肖侬); FU Wei(付伟)

    2003-01-01

    Spatial applications will gain high complexity as the volume of spatial data increases rapidly. A suitable data processing and computing infrastructure for spatial applications needs to be established. Over the past decade, grid has become a powerful computing environment for data intensive and computing intensive applications. Integrating grid computing with spatial data processing technology, the authors designed a spatial data processing grid (called SDPG) to address the related problems. Requirements of spatial applications are examined and the architecture of SDPG is described in this paper. Key technologies for implementing SDPG are discussed with emphasis.

  2. Evaluation of automatic building detection approaches combining high resolution images and LiDAR data

    OpenAIRE

    Javier Estornell; Recio, Jorge A.; Txomin Hermosilla; Ruiz, Luis A.

    2011-01-01

    In this paper, two main approaches for automatic building detection and localization using high spatial resolution imagery and LiDAR data are compared and evaluated: thresholding-based and object-based classification. The thresholding-based approach is founded on the establishment of two threshold values: one refers to the minimum height to be considered as building, defined using the LiDAR data, and the other refers to the presence of vegetation, which is defined according to the spectral re...

  3. NMRFx Processor: a cross-platform NMR data processing program.

    Science.gov (United States)

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A

    2016-08-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis. PMID:27457481

  4. Process acceptance and adjustment techniques for Swiss automatic screw machine parts. Final report

    International Nuclear Information System (INIS)

    Product tolerance requirements for small, cylindrical, piece parts produced on swiss automatic screw machines have progressed to the reliability limits of inspection equipment. The miniature size, configuration, and tolerance requirements (plus or minus 0.0001 in.) (0.00254 mm) of these parts preclude the use of screening techniques to accept product or adjust processes during setup and production runs; therefore, existing means of product acceptance and process adjustment must be refined or new techniques must be developed. The purpose of this endeavor has been to determine benefits gained through the implementation of a process acceptance technique (PAT) to swiss automatic screw machine processes. PAT is a statistical approach developed for the purpose of accepting product and centering processes for parts produced by selected, controlled processes. Through this endeavor a determination has been made of the conditions under which PAT can benefit a controlled process and some specific types of screw machine processes upon which PAT could be applied. However, it was also determined that PAT, if used indiscriminately, may become a record keeping burden when applied to more than one dimension at a given machining operation

  5. An object-based classification method for automatic detection of lunar impact craters from topographic data

    Science.gov (United States)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement to study past geological processes such as impact history. They are also used as proxies for measuring relative ages of various planetary or satellite bodies and help to understand the evolution of planetary surfaces. In this paper, we present a new method using object-based image analysis (OBIA) technique to detect impact craters of wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from the NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria resulting in 95% detection accuracy. The methodology developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset when applied in another area, detected impact craters with 90% accuracy. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.

  6. Automatic Descriptor-Based Co-Registration of Frame Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Maria Vakalopoulou

    2014-04-01

    Frame hyperspectral sensors, in contrast to push-broom or line-scanning ones, produce hyperspectral datasets with, in general, better geometry but with unregistered spectral bands. Being acquired at different instances, and due to platform motion and movements (UAVs, aircraft, etc.), every spectral band is displaced and acquired with a different geometry. The automatic and accurate registration of hyperspectral datasets from frame sensors remains a challenge. Powerful local feature descriptors, when computed over the spectrum, fail to extract enough correspondences and successfully complete the registration procedure. To this end, we propose a generic and automated framework which decomposes the problem and enables the efficient computation of a sufficient amount of accurate correspondences over the given spectrum, without using any ancillary data (e.g., from GPS/IMU). First, the spectral bands are divided into spectral groups according to their wavelength. The spectral borders of each group are not strict and their formulation allows certain overlaps. The spectral variance and proximity determine the applicability of every spectral band to act as a reference during the registration procedure. The proposed decomposition allows the descriptor and the robust estimation process to deliver numerous inliers. The search space of possible solutions has been effectively narrowed by sorting and selecting the optimal spectral bands which, in an unsupervised manner, can quickly recover the hypercube’s geometry. The developed approach has been qualitatively and quantitatively evaluated with six different datasets obtained by frame sensors onboard aerial platforms and UAVs. Experimental results appear promising.
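
    A minimal sketch of feature-based co-registration of one spectral band to a reference band, using ORB features and a RANSAC homography from OpenCV rather than the descriptors and band-grouping strategy of the paper; the image names, single-band pairing and parameter values are assumptions.

```python
import cv2
import numpy as np

def register_band(band: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Warp one spectral band (uint8) onto a reference band using local features
    and a RANSAC-estimated homography."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(band, None)
    k2, d2 = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(band, H, (w, h))
```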

  7. Data acquisition and automatic processing by 123D Catch

    OpenAIRE

    Francesca Picchio

    2013-01-01

    The Masada project grew out of an ongoing collaborative research effort between the Department of Interior Building and Environment Design of Shenkar College of Design and Engineering, the Department of Architecture of the University of Florence and the Department of Architecture and Civil Engineering of the University of Pavia. Besides its research aspects, the project also has educational aspects. It consists of a proposal for the digital documentation of the archaeological sites of Masada ...

  8. Automatic differentiation tools in the dynamic simulation of chemical engineering processes

    Directory of Open Access Journals (Sweden)

    Castro M.C.

    2000-01-01

    Automatic Differentiation is a relatively recent technique developed for the differentiation of functions, applicable directly to the source code that computes the function, written in standard programming languages. The technique permits the automation of the differentiation step, which is crucial for the dynamic simulation and optimization of processes. The values of the derivatives obtained with AD are exact (up to roundoff). The theoretical exactness of AD comes from the fact that it uses the same rules of differentiation as in differential calculus, but these rules are applied to an algorithmic specification of the function rather than to a formula. The main purpose of this contribution is to discuss the impact of Automatic Differentiation in the field of dynamic simulation of chemical engineering processes. The influence of the differentiation technique on the behavior of the integration code, the performance of the generated code and the incorporation of AD tools in consistent initialization tools are discussed from the viewpoint of dynamic simulation of typical models in chemical engineering.
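
    A tiny illustration of the idea behind forward-mode automatic differentiation (dual numbers propagated through ordinary arithmetic), not the specific tools discussed in the paper.

```python
import math

class Dual:
    """Forward-mode automatic differentiation with dual numbers:
    carries a value and its derivative through ordinary arithmetic."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    # chain rule applied alongside the value
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx [ x*sin(x) + 3x ] at x = 2.0, exact to roundoff:
x = Dual(2.0, 1.0)                      # seed derivative dx/dx = 1
y = x * sin(x) + 3 * x
print(y.val, y.der)                     # value and derivative
```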

  9. Data processing code system for foil experiments

    International Nuclear Information System (INIS)

    A code system has been developed for the efficient measurement of reaction rates in foil irradiation experiments. The code system consists of four codes, namely, (i) setting up experimental parameters and collecting γ-ray spectrum data, (ii) analysing the γ-ray spectrum, (iii) calculating reaction rate distributions, and (iv) furnishing utility programs. This code system provides a useful tool for processing irradiated-foil data to obtain the γ-ray spectrum and the reaction rate distribution. These procedures can be executed automatically. The routine for processing foil counting data covers the following functions: data smoothing, peak searching by means of first and second derivative methods, and determination of the photopeak area and its error using a functional fitted by a non-linear least-squares method. The code for the reaction rate calculation has the following functions: determination of the decay constants of each isotope using the decay data of foil counting, and calculation of reaction rates after correcting for irradiation time and the weight of a foil. These codes are written in the FORTRAN-77 programming language for the mini-computer PDP-11/44 (DEC), whose maximum program memory size is limited to 32k bytes. (author)
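
    As an illustration of the photopeak-fitting step, the sketch below fits a Gaussian on a linear background by non-linear least squares with SciPy; the functional form, starting values and error model are assumptions, since the report does not specify them.

```python
import numpy as np
from scipy.optimize import curve_fit

def peak_model(ch, area, centroid, sigma, b0, b1):
    """Gaussian photopeak on a linear background (counts per channel)."""
    gauss = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((ch - centroid) / sigma) ** 2)
    return gauss + b0 + b1 * ch

def fit_photopeak(channels, counts, guess):
    """Non-linear least-squares fit; returns the peak area, its 1-sigma error
    and the full width at half maximum."""
    popt, pcov = curve_fit(peak_model, channels, counts, p0=guess,
                           sigma=np.sqrt(np.maximum(counts, 1)))
    area, area_err = popt[0], np.sqrt(pcov[0, 0])
    fwhm = 2.3548 * popt[2]
    return area, area_err, fwhm
```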

  10. Process and Data Flow Control in KLOE

    Science.gov (United States)

    Pasqualucci, E.; KLOE Collaboration

    2001-10-01

    The core of the KLOE distributed event building system is a switched network. The online processes are distributed over a large set of processors in this network. All processes have to coherently change their state of activity as a consequence of local or remote commands. A fast and reliable message system based on the SNMP protocol has been developed. A command server has been implemented as a non-privileged daemon able to respond to "set" and "get" queries on private SNMP variables. This process is able to convert remote set operations into local commands and to map an SNMP subtree automatically onto a user-defined set of process variables. Process activity can be continuously monitored by remotely accessing their variables by means of the command server. Only the command server is involved in these operations, without disturbing the process flow. Subevents coming from subdetectors are sent to different nodes of a computing farm for the last stage of event building. Based on features of the SNMP protocol and of the KLOE message system, the Data Flow Control System (DFC) is able to rapidly redirect network traffic, taking into account the dynamics of the whole DAQ system in order to assure coherent subevent addressing in an asynchronous "push" architecture, without introducing dead time. The KLOE DFC is currently working in the KLOE DAQ system. Its main characteristics and performance are discussed.

  11. Process Mining Online Assessment Data

    Science.gov (United States)

    Pechenizkiy, Mykola; Trcka, Nikola; Vasilyeva, Ekaterina; van der Aalst, Wil; De Bra, Paul

    2009-01-01

    Traditional data mining techniques have been extensively applied to find interesting patterns, build descriptive and predictive models from large volumes of data accumulated through the use of different information systems. The results of data mining can be used for getting a better understanding of the underlying educational processes, for…

  12. Fully automatic characterization and data collection from crystals of biological macromolecules

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier, E-mail: nurizzo@esrf.fr [European Synchrotron Radiation Facility, 71 Avenue des Martyrs, CS 40220, 38043 Grenoble (France); Bowler, Matthew W., E-mail: nurizzo@esrf.fr [European Molecular Biology Laboratory, Grenoble Outstation, 71 Avenue des Martyrs, CS 90181, 38042 Grenoble (France); Université Grenoble Alpes–EMBL–CNRS, Grenoble Outstation, 71 Avenue des Martyrs, CS 90181, 38042 Grenoble (France); European Synchrotron Radiation Facility, 71 Avenue des Martyrs, CS 40220, 38043 Grenoble (France)

    2015-07-31

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  13. A semi-automatic software tool for batch processing of yeast colony images

    Czech Academy of Sciences Publication Activity Database

    Schier, Jan; Kovář, Bohumil

    Innsbruck: The International Association of Science and Technology for Development (IASTED), 2011 - (Zhang, J.), s. 206-212 ISBN 978-0-88986-865-6. [Eighth IASTED International Conference on Signal Processing, Pattern Recognition, and Applications (SPPRA). Innsbruck (AT), 16.02.2011-18.02.2011] R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : yeast colonies * Petri dish * Image segmentation * Fast radial transform Subject RIV: JC - Computer Hardware ; Software http://library.utia.cas.cz/separaty/2011/ZS/schier-a semi-automatic software tool for batch processing of yeast colony images.pdf

  14. Influence of the automatic regulator parameters on the power transition processes of the IBR-2 reactor

    International Nuclear Information System (INIS)

    With the help of IBR-2 reactor models based on a block structure with z-transformation of variables and experimentally determined feedback parameters, the power transition processes at various values of the parameters of the automatic regulator (AR) are calculated. It is shown that, for regular reactivity disturbances, the best transition processes correspond to the greatest speed of the AR with the AR smoothing unit eliminated. Recommendations for the selection of the AR parameters are given for the random reactivity disturbances that occur during normal operation of the IBR-2 reactor. (author)

  15. Graphical Language for Data Processing

    Science.gov (United States)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as LIDAR, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
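
    A language-agnostic sketch of the interpreted execution model described above (nodes wired by data dependencies, executed in dependency order); the original system is built on .NET classes with a drag-and-drop interface, so the Python below and its node names are purely illustrative.

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

class Node:
    def __init__(self, name, func, inputs=()):
        self.name, self.func, self.inputs = name, func, tuple(inputs)

def run_graph(nodes):
    """Execute a wired process graph: each node's inputs name upstream nodes,
    and a node runs once all of its inputs have produced data."""
    deps = {n.name: set(n.inputs) for n in nodes}
    by_name = {n.name: n for n in nodes}
    results = {}
    for name in TopologicalSorter(deps).static_order():
        node = by_name[name]
        results[name] = node.func(*[results[i] for i in node.inputs])
    return results

# Illustrative three-step pipeline (names and functions are placeholders):
graph = [
    Node("load",   lambda: list(range(10))),
    Node("filter", lambda pts: [p for p in pts if p % 2 == 0], inputs=("load",)),
    Node("stats",  lambda pts: {"n": len(pts), "mean": sum(pts) / len(pts)},
         inputs=("filter",)),
]
print(run_graph(graph)["stats"])
```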

  16. Modeling, Learning, and Processing of Text Technological Data Structures

    CERN Document Server

    Kühnberger, Kai-Uwe; Lobin, Henning; Lüngen, Harald; Storrer, Angelika; Witt, Andreas

    2012-01-01

    Researchers in many disciplines have been concerned with modeling textual data in order to account for texts as the primary information unit of written communication. The book “Modelling, Learning and Processing of Text-Technological Data Structures” deals with this challenging information unit. It focuses on theoretical foundations of representing natural language texts as well as on concrete operations of automatic text processing. Following this integrated approach, the present volume includes contributions to a wide range of topics in the context of processing of textual data. This relates to the learning of ontologies from natural language texts, the annotation and automatic parsing of texts as well as the detection and tracking of topics in texts and hypertexts. In this way, the book brings together a wide range of approaches to procedural aspects of text technology as an emerging scientific discipline.

  17. Reliability Engineering for ATLAS Petascale Data Processing on the Grid

    CERN Document Server

    Golubkov, D V; The ATLAS collaboration; Vaniachine, A V

    2012-01-01

    The ATLAS detector is in its third year of continuous LHC running taking data for physics analysis. A starting point for ATLAS physics analysis is reconstruction of the raw data. First-pass processing takes place shortly after data taking, followed later by reprocessing of the raw data with updated software and calibrations to improve the quality of the reconstructed data for physics analysis. Data reprocessing involves a significant commitment of computing resources and is conducted on the Grid. The reconstruction of one petabyte of ATLAS data with 1B collision events from the LHC takes about three million core-hours. Petascale data processing on the Grid involves millions of data processing jobs. At such scales, the reprocessing must handle a continuous stream of failures. Automatic job resubmission recovers transient failures at the cost of CPU time used by the failed jobs. Orchestrating ATLAS data processing applications to ensure efficient usage of tens of thousands of CPU-cores, reliability engineering ...

  18. Automatic calibration and signal switching system for the particle beam fusion research data acquisition facility

    International Nuclear Information System (INIS)

    This report describes both the hardware and software components of an automatic calibration and signal system (Autocal) for the data acquisition system for the Sandia particle beam fusion research accelerators Hydra, Proto I, and Proto II. The Autocal hardware consists of off-the-shelf commercial equipment. The various hardware components, special modifications and overall system configuration are described. Special software has been developed to support the Autocal hardware. Software operation and maintenance are described

  19. Protokol Interchangeable Data pada VMeS (Vessel Messaging System) dan AIS (Automatic Identification System)

    OpenAIRE

    Farid Andhika; Trika Pitana; Achmad Affandi

    2012-01-01

    VMeS (Vessel Messaging System) is a radio-based communication system for sending messages between VMeS terminals on ships at sea and a VMeS gateway on shore. Ship monitoring systems at sea generally rely on AIS (Automatic Identification System), which is already used in all ports to monitor vessel conditions and prevent collisions between ships. In this research, a data format suitable for VMeS will be designed so that an interchange process to AIS can be performed ...

  20. Sla-Oriented Semi-Automatic Management Of Data Storage And Applications In Distributed Environments

    OpenAIRE

    Dariusz Król,; Włodzimierz Funika; Renata Słota; Jacek Kitowski

    2010-01-01

    In this paper we describe a semi-automatic programming framework for supporting users with managing the deployment of distributed applications along with storing large amounts of data in order to maintain Quality of Service in highly dynamic and distributed environments, e.g., Grid. The Polish national PL-GRID project aims to provide Polish science with both hardware and software infrastructures which will allow scientists to perform complex simulations and in-silico experiments on a scale greater...

  1. AN AUTOMATIC PROCEDURE FOR COMBINING DIGITAL IMAGES AND LASER SCANNER DATA

    OpenAIRE

    Moussa, W.; Abdel-Wahab, M.; D. Fritsch

    2012-01-01

    Besides improving both the geometry and the visual quality of the model, the integration of close-range photogrammetry and terrestrial laser scanning techniques aims at filling gaps in laser scanner point clouds to avoid modeling errors, reconstructing more details in higher resolution and recovering simple structures with less geometric detail. Thus, within this paper a flexible approach for the automatic combination of digital images and laser scanner data is presented. Our approach com...

  2. REAL TIME DATA PROCESSING FRAMEWORKS

    Directory of Open Access Journals (Sweden)

    Yash Sakaria

    2015-09-01

    On a business level, everyone wants to get hold of the business value and other organizational advantages that big data has to offer. Analytics has arisen as the primary path to business value from big data. Hadoop is not just a storage platform for big data; it’s also a computational and processing platform for business analytics. Hadoop is, however, unsuccessful in fulfilling business requirements when it comes to live data streaming. The initial architecture of Apache Hadoop did not solve the problem of live stream data mining. In summary, the traditional view that big data is synonymous with Hadoop is misleading; focus needs to be placed on business value as well. Data warehousing, Hadoop and stream processing complement each other very well. In this paper, we review a few frameworks and products which support real-time data streaming by providing modifications to Hadoop.

  3. REAL TIME DATA PROCESSING FRAMEWORKS

    OpenAIRE

    Yash Sakaria; Chetashri Bhadane

    2015-01-01

    On a business level, everyone wants to get hold of the business value and other organizational advantages that big data has to offer. Analytics has arisen as the primary path to business value from big data. Hadoop is not just a storage platform for big data; it’s also a computational and processing platform for business analytics. Hadoop is, however, unsuccessful in fulfilling business requirements when it comes to live data streaming. The initial architecture of Apache Hadoop did not solv...

  4. BRICORK: an automatic machine with image processing for the production of corks

    Science.gov (United States)

    Davies, Roger; Correia, Bento A. B.; Carvalho, Fernando D.; Rodrigues, Fernando C.

    1991-06-01

    The production of cork stoppers from raw cork strip is a manual and labour-intensive process in which a punch-operator quickly inspects all sides of the cork strip for defects and decides where to punch out stoppers. He then positions the strip underneath a rotating tubular cutter and punches out the stoppers one at a time. This procedure is somewhat subjective and prone to error, being dependent on the judgement and accuracy of the operator. This paper describes the machine being developed jointly by Mecanova, Laboratorio Nacional de Engenharia e Tecnologia (LNETI) and Empresa de Investigação e Desenvolvimento de Electrónica SA (EID) which automatically processes cork strip introduced by an unskilled operator. The machine uses both image processing and laser inspection techniques to examine the strip. Defects in the cork are detected and categorised in order to determine regions where stoppers may be punched. The precise locations are then automatically optimised for best usage of the raw material (quantity and quality of stoppers). In order to achieve the required speed of production these image processing techniques may be implemented in hardware. The paper presents results obtained using the vision system software under development together with descriptions of both the image processing and mechanical aspects of the proposed machine.

  5. Using Hybrid Decision Tree - Hough Transform Approach For Automatic Bank Check Processing

    Directory of Open Access Journals (Sweden)

    Heba A. Elnemr

    2012-05-01

    One of the first steps in the realization of an automatic system for bank check processing is the automatic classification of checks and the extraction of the handwritten area. This paper presents a new hybrid method which couples together the statistical color histogram features, the entropy, the energy and the Hough transform to achieve the automatic classification of checks as well as the segmentation and recognition of the various information on the check. The proposed method relies on two stages. First, a two-step classification algorithm is implemented. In the first step, a decision classification tree is built using the entropy, the energy, the logo location and histogram features of colored bank checks. These features are used to classify checks into several groups. Each group may contain one or more types of checks. Therefore, in the second step the bank logo or bank name is matched against its stored template to identify the correct prototype. Second, the Hough transform is utilized to detect lines in the classified checks. These lines are used as indicators of the bank check fields. A group of experiments is performed showing that the proposed technique is promising as regards classifying bank checks and extracting the important fields in those checks.

  6. Automatic Inspection and Processing of Accessory Based on Vision Stitching and Spectral Illumination

    Directory of Open Access Journals (Sweden)

    Wen-Yang Chang

    2014-08-01

    The study investigates the automatic inspection and processing of stem accessories based on vision stitching and spectral illumination. The vision stitching mainly involves algorithms for white balance, scale-invariant feature transforms (SIFT) and roundness for the whole image in automatic accessory inspection. The illumination intensities, angles, and spectral analyses of the light sources are analyzed for optimal image inspection. Unrealistic color casts in feature inspection are removed using a white balance algorithm for global automatic adjustment. SIFT is used to extract and detect the image features for stitching large images. The Hough transform is used to detect the parameters of a circle for the roundness of the bicycle accessories. The feature inspection of a stem covers geometry size, roundness, and image stitching. Results showed that the maximum errors at 0°, 10°, 30°, and 50° for the spectral illumination of white-light LED arrays with differential shift displacements are 4.4, 4.2, 6.8, and 3.5%, respectively. The deviation error of image stitching for the stem accessory in the x and y coordinates is 2 pixels. SIFT and RANSAC enable the transformation of the stem image into local feature coordinates.

  7. Automatic processing of gamma ray spectra employing classical and modified Fourier transform approach

    International Nuclear Information System (INIS)

    This report describes methods for automatic processing of gamma ray spectra acquired with HPGe detectors. The processing incorporated both classical and signal processing approaches. The classical method was used for smoothing, detecting significant peaks, finding peak envelope limits and, with a proposed method, finding peak limits, peak significance index and full width at half maximum, and detecting doublets for further analysis. To facilitate application of signal processing to nuclear spectra, Madan et al. gave a new classification of signals, identified nuclear spectra as Type II signals, mathematically formalized the modified Fourier transform and pioneered its application to processing doublet envelopes acquired with modern spectrometers. This was extended to facilitate routine analysis of the spectra. A facility for energy and efficiency calibration was also included. The results obtained by analyzing observed gamma-ray spectra using the above approach compared favourably with those obtained with SAMPO and also with those derived from the table of radioisotopes. (author). 15 refs., 3 figs., 3 tabs
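
    The classical steps (smoothing followed by a significance-based peak search) can be illustrated with generic SciPy routines; the sketch below uses placeholder data and thresholds and does not reproduce the report's own algorithms or the modified Fourier transform.

        # Sketch: smooth a gamma-ray spectrum and flag peaks that rise well
        # above the local statistical fluctuation of the counts.
        import numpy as np
        from scipy.signal import savgol_filter, find_peaks

        counts = np.loadtxt("spectrum.txt")        # one count per channel (placeholder file)
        smooth = savgol_filter(counts, window_length=7, polyorder=3)

        background = np.median(smooth)
        peaks, _ = find_peaks(smooth, prominence=3.0 * np.sqrt(background + 1.0))
        print("significant peaks at channels:", peaks)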

  8. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet and c) the organization and structure of the data base. (Author) 4 refs
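
    The report does not list which dating models the macros implement, but a commonly applied 210Pb model is the constant rate of supply (CRS) model; the sketch below is only an illustration of that kind of calculation, with placeholder activity and thickness values.

        # Hypothetical CRS-model sketch: age(z) = (1/lambda) * ln(A(0) / A(z)),
        # where A(z) is the unsupported 210Pb inventory below depth z.
        import numpy as np

        LAMBDA_PB210 = np.log(2) / 22.3                          # decay constant, 1/yr

        activity = np.array([250.0, 180.0, 120.0, 70.0, 35.0])   # Bq/kg, placeholder values
        mass_per_slice = np.array([1.0, 1.0, 1.0, 1.0, 1.0])     # g/cm2 per slice, placeholders

        inventory_below = (activity * mass_per_slice)[::-1].cumsum()[::-1]
        ages = np.log(inventory_below[0] / inventory_below) / LAMBDA_PB210
        print("CRS ages (years):", np.round(ages, 1))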

  9. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.
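
    A minimal sketch of the kind of tabular extraction described above, using pandas to parse HTML tables; the URL and column names are placeholders, not the tool's actual targets.

        # Sketch: pull every <table> from a page and keep the ones that look
        # like price listings (requires lxml or beautifulsoup4 to be installed).
        import pandas as pd

        url = "http://example.com/aquatic-product-prices.html"   # placeholder URL
        tables = pd.read_html(url)

        for df in tables:
            if {"Product", "Price"}.issubset(df.columns):
                print(df.head())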

  10. Performing the processing required for automatically get a PDF/A version of the CERN Library documentation

    CERN Document Server

    Molina Garcia-Retamero, Antonio

    2015-01-01

    The aim of the project was to perform the processing required to automatically obtain a PDF/A version of the CERN Library documentation. For this, it is necessary to extract as much metadata as possible from the source files and to inject the required data into the original source files, creating new ones ready for being compiled with all related dependencies. In addition, I proposed the creation of an HTML version consistent with the PDF and navigable for easy access, experimented with Natural Language Processing for extracting metadata, and proposed the injection of the CERN Library documentation into the HTML version of the long write-ups where it is referenced (for instance, when a CERN Library function is referenced in a sample code). Finally, I designed and implemented a Graphical User Interface in order to simplify the process for the user.

  11. Linear Processes for Functional Data

    OpenAIRE

    Mas, André; Pumo, Besnik

    2009-01-01

    Linear processes on functional spaces were born about fifteen years ago, and this original topic went through the same fast development as the other areas of functional data modeling such as PCA or regression. They aim at generalizing to random curves the classical ARMA models widely known in time series analysis. They offer a wide spectrum of models suited to the statistical inference on continuous time stochastic processes within the paradigm of functional data. Es...

  12. Visual Execution and Data Visualisation in Natural Language Processing

    OpenAIRE

    Rodgers, Peter; Gaizauskas, Robert; Humphreys, Kevin; Cunningham, Hamish

    1997-01-01

    We describe GGI, a visual system that allows the user to execute an automatically generated data flow graph containing code modules that perform natural language processing tasks. These code modules operate on text documents. GGI has a suite of text visualisation tools that provides the user with useful views of the annotation data produced by the modules in the executable graph. GGI forms part of the GATE natural language engineering system.

  13. Automatic Generation of Data Types for Classification of Deep Web Sources

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and presented a quantitative evaluation of these two solutions.

  14. Sensitometric and archival evaluation of Kodak RA films in dental automatic processing.

    Science.gov (United States)

    Thunthy, K H; Yeadon, W R; Winberg, R

    1994-04-01

    The Kodak Rapid Access System is an extension of the T-grain technology (T-Mat film). Unlike the T-Mat film, the T-Mat/Rapid Access film is forehardened by the addition of more hardener to the film emulsion. It can thus be processed rapidly in a Kodak Rapid Access medical processor with a dry-to-dry cycle of only 45 seconds by using the Kodak X-Omat RA/30 developer, which does not contain a hardener. The absence of the hardener, glutaraldehyde, in the developer also makes this solution environmentally safer. In medical radiography, the T-Mat film has been replaced by the T-Mat/Rapid Access film. However, in dental extraoral radiography the T-Mat film is still being used because the processing solutions and the comparatively low temperatures of dental automatic processors are different from those used in medical radiography. This report shows that, for panoramic and other dental extraoral radiography, T-Mat films can be replaced by T-Mat/RA films by processing them in conventional Kodak X-Omat RP solutions using a dental automatic processing cycle. The differences in the sensitometric properties of the T-Mat and T-Mat/Rapid Access films were negligible and therefore clinically insignificant. All films tested well for archival quality. PMID:8015810

  15. Design of a modern automatic control system for the activated sludge process in wastewater treatment

    Institute of Scientific and Technical Information of China (English)

    Alexandros D. Kotzapetros; Panayotis A. Paraskevas; Athanasios S. Stasinakis

    2015-01-01

    The Activated Sludge Process (ASP) exhibits highly nonlinear properties. The design of an automatic control system that is robust against disturbance of inlet wastewater flow rate and has short process settling times is a challenging matter. The proposed control method is an I-P modified controller automatic control system with state variable feedback and a control canonical form simulation diagram for the process. A more stable response is achieved with this type of modern control. Settling times of 0.48 days are achieved for the concentration of microorganisms (reference value step increase of 50 mg·L−1) and 0.01 days for the concentration of oxygen (reference value step increase of 0.1 mg·L−1). Fluctuations of the concentrations of oxygen and microorganisms after an inlet disturbance of 5 × 103 m3·d−1 are small. Changes in the reference values of oxygen and microorganisms (increases by 10%, 20% and 30%) show satisfactory response of the system in all cases. Changes in the value of the inlet wastewater flow rate disturbance (increases by 10%, 25%, 50% and 100%) are stabilized by the control system in a short time. Maximum percent overshoot is also taken into consideration in all cases and the largest value is 25%, which is acceptable. The proposed method with the I-P controller is better for disturbance rejection and process settling times than the same method using a PI controller. This method can substitute optimal control systems in the ASP.
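
    The distinguishing feature of the I-P structure is that integral action works on the error while proportional action works on the measured output, which avoids a proportional kick on setpoint steps. The sketch below illustrates that structure on a generic first-order plant with placeholder gains; it is not the paper's activated sludge model.

        # Sketch: I-P controller (u = Ki*integral(e) - Kp*y) on dy/dt = (-y + u)/tau.
        import numpy as np

        Kp, Ki = 2.0, 1.5                  # placeholder gains
        tau, dt, n = 1.0, 0.01, 1000
        setpoint, y, integral = 1.0, 0.0, 0.0
        history = np.zeros(n)

        for k in range(n):
            error = setpoint - y
            integral += Ki * error * dt    # integral action on the error
            u = integral - Kp * y          # proportional action on the output only
            y += dt * (-y + u) / tau       # simple first-order plant
            history[k] = y

        print("output after", n * dt, "time units:", round(history[-1], 3))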

  16. Data processing in radiology: Resume and prospects

    International Nuclear Information System (INIS)

    The technical aspects of radiology are particularly suitable for electronic data processing. In addition to automation of radiological apparatus and tumour registration, there are three areas in radiology particularly suitable for electronic data processing: treatment planning, dose calculations and supervision of radiotherapy techniques in radio-oncology. It can be used for word processing in the office and for documentation, both in diagnostic and therapeutic radiology, and digital techniques can be employed for image transmission, storage and manipulation. Computers for treatment planning and dose calculation are standard techniques, and suitable computers allow one to spot occasional and systematic errors during radiation treatment and to eliminate these. They also provide for the automatic generation of the required protocols. Word processors have proved particularly valuable in private practice. They are valuable for composing reports from their basic elements, but less valuable for texts that are stereotypes. The most important developments are in digital imaging, image storage and image transmission. The storage of images on video discs, transmission through fibre-optic cables and computer manipulation of images are described and the consequences and problems, which may affect the radiologist, are discussed. (orig.)

  17. [Data processing in radiology: summary and prospects].

    Science.gov (United States)

    Heilmann, H P; Tiemann, J

    1985-12-01

    The technical aspects of radiology are particularly suitable for electronic data processing. In addition to automation of radiological apparatus and tumour registration, there are three areas in radiology particularly suitable for electronic data processing: treatment planning, dose calculations and supervision of radiotherapy techniques in radio-oncology. It can be used for word processing in the office and for documentation, both in diagnostic and therapeutic radiology, and digital techniques can be employed for image transmission, storage and manipulation. Computers for treatment planning and dose calculation are standard techniques and suitable computers allow one to spot occasional and systematic errors during radiation treatment and to eliminate these. They also provide for the automatic generation of the required protocols. Word processors have proved particularly valuable in private practice. They are valuable for composing reports from their basic elements, but less valuable for texts that are stereotypes. The most important developments are in digital imaging, image storage and image transmission. The storage of images on video discs, transmission through fibre-optic cables and computer manipulation of images are described and the consequences and problems, which may affect the radiologist, are discussed. PMID:3001861

  18. Visa: AN Automatic Aware and Visual Aids Mechanism for Improving the Correct Use of Geospatial Data

    Science.gov (United States)

    Hong, J. H.; Su, Y. T.

    2016-06-01

    With the fast growth of internet-based sharing mechanisms and OpenGIS technology, users nowadays enjoy the luxury of quickly locating and accessing a variety of geospatial data for the tasks at hand. While this sharing innovation tremendously expands the possibilities of application and reduces development cost, users nevertheless have to deal with all kinds of "differences" implicitly hidden behind the acquired georesources. We argue that the next generation of GIS-based environments, whether internet-based or not, must have built-in knowledge to automatically and correctly assess the fitness of data use and present the analyzed results to users in an intuitive and meaningful way. The VISA approach proposed in this paper refers to four different types of visual aids that can be respectively used for presenting analyzed results, namely, virtual layer, informative window, symbol transformation and augmented TOC. The VISA-enabled interface works in an automatic-aware fashion, where the standardized metadata serve as the known facts about the selected geospatial resources, algorithms for analyzing the differences in temporality and quality of the geospatial resources were designed, and the transformation of analyzed results into visual aids is executed automatically. It successfully presents a new way of bridging the communication gaps between systems and users. GIS has long been seen as a powerful integration tool, but its achievements would be highly restricted if it fails to provide a friendly and correct working platform.

  19. High speed television camera system processes photographic film data for digital computer analysis

    Science.gov (United States)

    Habbal, N. A.

    1970-01-01

    Data acquisition system translates and processes graphical information recorded on high speed photographic film. It automatically scans the film and stores the information with a minimal use of the computer memory.

  20. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Directory of Open Access Journals (Sweden)

    Tongran Liu

    Full Text Available The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information.

  1. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Science.gov (United States)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information. PMID:26375031

  2. Digital Data Processing of Stilbene

    International Nuclear Information System (INIS)

    Stilbene is a proven spectrometric detector for mixed fields of neutrons and gamma rays. By digital processing of the shape of the output pulses from the detector it is possible to obtain information about the energy of the interacting neutron/photon and to distinguish which of these two particles interacted in the detector. Further numerical processing of the digital data can provide the energy spectrum of both components of the mixed field. The quality of the digitized data is highly dependent on the parameters of the hardware used for digitization and on the quality of the software processing. Our results also show how the quality of the particle type identification depends on the sampling rate as well as on the method of processing the sampled data. (authors)
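
    A common way to realize the neutron/gamma separation described above is the charge-comparison (tail-to-total) method; the sketch below is a generic illustration with placeholder sample indices, file name and threshold, not the authors' processing chain.

        # Sketch: tail-to-total ratio per digitized pulse; neutrons produce more
        # light in the slow scintillation component, so their ratio is larger.
        import numpy as np

        def tail_to_total(pulse, start=20, tail_start=60):
            total = pulse[start:].sum()
            return pulse[tail_start:].sum() / total if total > 0 else 0.0

        pulses = np.load("digitized_pulses.npy")   # one baseline-subtracted pulse per row
        ratios = np.array([tail_to_total(p) for p in pulses])

        is_neutron = ratios > 0.18                 # placeholder discrimination threshold
        print("neutron fraction:", is_neutron.mean())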

  3. Automatic Rice Crop Height Measurement Using a Field Server and Digital Image Processing

    OpenAIRE

    Tanakorn Sritarapipat; Preesan Rakwatin; Teerasit Kasetkasem

    2014-01-01

    Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height based on images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be indirectly measured from the images by measuring the hei...

  4. Choice architectural nudge interventions to promote vegetable consumption based on automatic processes decision-making

    DEFF Research Database (Denmark)

    Skov, Laurits Rohden; Friis Rasmussen, Rasmus; Møller Andersen, Pernille;

    2014-01-01

    Objective: To test the effectiveness of three types of choice architectural nudges to promote vegetable consumption among Danish people. The experiment aims at providing evidence on the influence of the automatic processing system in the food choice situation in an all-you-can-eat buffet setting.......001) but no significant change in vegetable intake (p=0.16). Nudge 2 (N=33) found a significant increase in vegetable consumption (p=0.018) while Nudge 3 (N=32) found no impact on vegetable intake (p=0.56) but a decrease in total energy intake due to a decrease in meat intake (p

  5. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check

    OpenAIRE

    Liang Cheng; Lihua Tong; Manchun Li; Yongxue Liu

    2013-01-01

    Data registration is a prerequisite for the integration of multi-platform laser scanning in various applications. A new approach is proposed for the semi-automatic registration of airborne and terrestrial laser scanning data with buildings without eaves. Firstly, an automatic calculation procedure for thresholds in density of projected points (DoPP) method is introduced to extract boundary segments from terrestrial laser scanning data. A new algorithm, using a self-extending procedure, is dev...

  6. Key issues in automatic classification of defects in post-inspection review process of photomasks

    Science.gov (United States)

    Pereira, Mark; Maji, Manabendra; Pai, Ravi R.; B. V. R., Samir; Seshadri, R.; Patil, Pradeepkumar

    2012-11-01

    very small real defects, registering grey level defect images with the layout database, automatically finding out the maximum critical dimension (CD) variation for defective patterns (where patterns could have Manhattan as well as all-angle edges), etc. This paper discusses many such key issues and suggests strategies to address some of them based upon our experience while developing the NxADC and evaluating it on production mask defects.

  7. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    Science.gov (United States)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce 3D high-fidelity road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design are then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  8. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children. Pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords the opportunity for clinicians to rapidly and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be automatically created using the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols. PMID:26518205

  9. Data Processing: Large Data Processing Class Leads to Innovations

    Science.gov (United States)

    Stair, Ralph M.; Render, Barry

    1977-01-01

    Experience with mass sections in the Introduction to Business Data Processing course at the University of New Orleans has been positive. The innovations described in this article have not only helped to conserve scarce resources but have, in the authors' opinion, provided the potential for a more effective and efficient course. (HD)

  10. Communicating Processes with Data for Supervisory Coordination

    Directory of Open Access Journals (Sweden)

    Jasen Markovski

    2012-08-01

    Full Text Available We employ supervisory controllers to safely coordinate the high-level discrete(-event) behavior of distributed components of complex systems. Supervisory controllers observe discrete-event system behavior, make a decision on allowed activities, and communicate the control signals to the involved parties. Models of the supervisory controllers can be automatically synthesized based on formal models of the system components and a formalization of the safe coordination (control) requirements. Based on the obtained models, code generation can be used to implement the supervisory controllers in software, on a PLC, or on an embedded (micro)processor. In this article, we develop a process theory with data that supports a model-based systems engineering framework for supervisory coordination. We employ communication to distinguish between the different flows of information, i.e., observation and supervision, whereas we employ data to specify the coordination requirements more compactly and to increase the expressivity of the framework. To illustrate the framework, we remodel an industrial case study involving coordination of maintenance procedures of a printing process of a high-tech Océ printer.

  11. Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents

    CERN Document Server

    Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L

    2008-01-01

    Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...

  12. The place-value of a digit in multi-digit numbers is processed automatically.

    Science.gov (United States)

    Kallai, Arava Y; Tzelgov, Joseph

    2012-09-01

    The automatic processing of the place-value of digits in a multi-digit number was investigated in 4 experiments. Experiment 1 and two control experiments employed a numerical comparison task in which the place-value of a non-zero digit was varied in a string composed of zeros. Experiment 2 employed a physical comparison task in which strings of digits varied in their physical sizes. In both types of tasks, the place-value of the non-zero digit in the string was irrelevant to the task performed. Interference of the place-value information was found in both tasks. When the non-zero digit occupied a lower place-value, it was recognized more slowly as the larger digit or as the one written in a larger font size. We concluded that place-value in a multi-digit number is processed automatically. These results support the notion of a decomposed representation of multi-digit numbers in memory. PMID:22449132

  13. Are scalar implicatures automatically processed and different for each individual? A mismatch negativity (MMN) study.

    Science.gov (United States)

    Zhao, Ming; Liu, Tao; Chen, Gang; Chen, Feiyan

    2015-03-01

    Scalar implicatures are ordinarily activated in human communication when the speaker uses a weak expression (e.g., some) from a set of stronger alternatives (e.g., many, all). It has been debated whether scalar inferences are generated by default. To clarify this issue and examine whether individual pragmatic ability affects the mechanism of scalar inference processing, we performed an experiment with an MMN paradigm to capture the neurophysiological indicators of automatic processing of spoken sentences and divided participants into high and low pragmatic ability groups. Experimental results showed that, compared with the condition in which an informative sentence ("Some animals have tails") is the deviant stimulus, when an underinformative sentence ("Some tigers have tails") is the deviant stimulus, the high pragmatic ability group showed mismatch negativity (MMN) and sustained negativity, while the low pragmatic ability group showed no ERP effects. These results indicated that at least some people can automatically activate scalar implicatures when encountering scalar trigger words, even in an inattentive state. PMID:25542387

  14. Abnormalities in Automatic Processing of Illness-Related Stimuli in Self-Rated Alexithymia.

    Directory of Open Access Journals (Sweden)

    Laura Brandt

    Full Text Available To investigate abnormalities in automatic information processing related to self- and observer-rated alexithymia, especially with regard to somatization, controlling for confounding variables such as depression and affect. 89 healthy subjects (60% female, aged 19-71 years, M = 32.1) took part; 58 subjects were additionally rated by an observer. Measures: alexithymia (self-rating: TAS-20, observer rating: OAS); automatic information processing (priming task including verbal [illness-related, negative, positive, neutral] and facial [negative, positive, neutral] stimuli); somatoform symptoms (SOMS-7T); confounders: depression (BDI), affect (PANAS). Higher self-reported alexithymia scores were associated with lower reaction times for negative (r = .19, p < .10) and positive (r = .26, p < .05) verbal primes when the target was illness-related. Self-reported alexithymia was correlated with the number (r = .42, p < .01) and intensity of current somatoform symptoms (r = .36, p < .01), but unrelated to observer-rated alexithymia (r = .11, p = .42). Results indicate a faster allocation of attentional resources away from task-irrelevant information towards illness-related stimuli in alexithymia. Considering the close relationship between alexithymia and somatization, these findings are compatible with the theoretical view that alexithymics focus strongly on bodily sensations of emotional arousal. A single observer rating (OAS) does not seem to be an adequate alexithymia measure in community samples.

  15. Automatic License Plate Recognition System Based on Image Processing Using LabVIEW

    Directory of Open Access Journals (Sweden)

    Rachana Chahar

    2014-05-01

    Full Text Available An automatic license plate recognition (ALPR) system is a kind of intelligent transport system and is of considerable interest because of its potential applications in highway electronic toll collection and traffic monitoring systems. It allows traffic fines to be automatically generated and sent to the appropriate violator without the need for human intervention. An ALPR system can be located on the side of or above a roadway, at a toll booth, or at another type of entrance way. All ALPR systems follow a basic high-level process. The process starts when a sensor detects the presence of a vehicle and signals the system camera to record an image of the passing vehicle. The image is passed on to a computer where software running on the computer extracts the license plate number from the image. License plate numbers can then be recorded in a database with other information such as the time the vehicle passed and the speed of the vehicle. Finally, a chain code concept with different parameters is used for recognition of the characters. The performance of the proposed algorithm has been tested on real images. The proposed system has been implemented using Vision Assistant and LabVIEW

  16. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald

    2013-01-10

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention of other forms of surface reconstruction or of image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.
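
    The distortion measure argued for above, the Hausdorff metric, is straightforward to evaluate between an original terrain cloud and a simplified representation; the sketch below uses SciPy with placeholder file names.

        # Sketch: symmetric Hausdorff distance between two terrain point sets.
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        original = np.loadtxt("terrain_points.xyz")         # N x 3, placeholder file
        simplified = np.loadtxt("terrain_tree_approx.xyz")  # M x 3, placeholder file

        d_ab = directed_hausdorff(original, simplified)[0]
        d_ba = directed_hausdorff(simplified, original)[0]
        print("Hausdorff distortion:", max(d_ab, d_ba))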

  17. Automatic Prompt System in the Process of Mapping plWordNet on Princeton WordNet

    Directory of Open Access Journals (Sweden)

    Paweł Kędzia

    2015-06-01

    Full Text Available Automatic Prompt System in the Process of Mapping plWordNet on Princeton WordNet. The paper offers a critical evaluation of the power and usefulness of an automatic prompt system based on the extended Relaxation Labelling algorithm in the process of (manual) mapping of plWordNet on Princeton WordNet. To this end, the results of manual mapping – that is, inter-lingual relations between plWN and PWN synsets – are juxtaposed with the automatic prompts that were generated for the source language synsets to be mapped. We check the number and type of inter-lingual relations introduced on the basis of automatic prompts and the distance of the respective prompt synsets from the actual target language synsets.

  18. Fast data processing with Spark

    CERN Document Server

    Sankar, Krishna

    2015-01-01

    Fast Data Processing with Spark - Second Edition is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have had problems that were too big to be dealt with on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of either Java, Scala, or Python.

  19. Automatic Ethical Filtering using Semantic Vectors Creating Normative Tag Cloud from Big Data

    Directory of Open Access Journals (Sweden)

    Ahsan N. Khan

    2015-03-01

    Full Text Available Ethical filtering has been a painful and controversial issue, seen from different angles worldwide. Stalwarts of freedom find ever newer methods to circumvent banned URLs, while the generative power of the Internet outpaces the velocity of censorship. Hence, keeping online content free of anti-religious and sexually provocative material is a growing issue in conservative countries in Asia and the Middle East. Solutions for online ethical filtering are linearly upper-bounded given current computation and big data growth scales. In this scenario, Semantic Vectors are applied as automatic ethical filters to calculate accuracy and efficiency metrics. The results show a normative tag cloud generated with superior performance to industry solutions.

  20. A framework for automatic segmentation in three dimensions of microstructural tomography data

    DEFF Research Database (Denmark)

    Jørgensen, Peter Stanley; Hansen, Karin Vels; Larsen, Rasmus; Bowen, Jacob R.

    2010-01-01

    segmentation schemes. We present here a framework for performing automatic segmentation of complex microstructures using a level set method. The technique is based on numerical approximations to partial differential equations to evolve a 3D surface to capture the phase boundaries. Vector fields derived from...... the experimentally acquired data are used as the driving forces. The framework performs the segmentation in 3D rather than on a slice by slice basis. It naturally supplies sub-voxel precision of segmented surfaces and allows constraints on the surface curvature to enforce a smooth surface in the...

  1. Automatic Segmentation of Raw LIDAR Data for Extraction of Building Roofs

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2014-04-01

    Full Text Available Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging) data. Using the ground height from a DEM (digital elevation model), the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the building mask. A cluster of points usually represents an individual building or tree. During segmentation, the planar roof segments are extracted from each cluster of points and refined using rules, such as the coplanarity of points and their locality. Planes on trees are removed using information such as area and point height difference. Experimental results on nine areas of six different data sets show that the proposed method can successfully remove vegetation and, so, offers a high success rate for building detection (about 90% correctness and completeness) and roof plane extraction (about 80% correctness and completeness), when LIDAR point density is as low as four points/m2. Thus, the proposed method can be exploited in various applications.
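
    The first step of the method, separating raw LIDAR points into ground and non-ground groups using the DEM ground height, can be sketched as follows; the height threshold and the assumption that the DEM height is already sampled at each point location are placeholders, not the paper's values.

        # Sketch: split LIDAR points by height above the DEM ground surface.
        import numpy as np

        points = np.loadtxt("lidar_points.xyz")          # columns: x, y, z (placeholder file)
        ground_height = np.loadtxt("dem_at_points.txt")  # DEM height per point (placeholder)

        above_ground = points[:, 2] - ground_height
        threshold = 1.0                                  # metres, placeholder

        ground = points[above_ground <= threshold]       # contributes to the "building mask"
        non_ground = points[above_ground > threshold]    # clustered into buildings or trees
        print(len(ground), "ground points,", len(non_ground), "non-ground points")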

  2. Automatic visualization of OpenDAP data resources using OGC services.

    Science.gov (United States)

    Plieger, M.

    2012-04-01

    More and more scientific datasets used in meteorology and climate sciences are becoming available via the Open-source Project for a Network Data Access Protocol (OpenDAP). OpenDAP allows for browsing and accessing huge amounts of data over the internet, without the need to download the data itself. OpenDAP provides functionality to access, describe and subset large files without the need for downloading a full copy. OpenDAP does not itself provide quick looks or visualizations over the internet. Within the IS-ENES project, we present a way to generate visualizations of OpenDAP services over the internet by using automatically configured Web Map Services (WMS), enabling visualization of interesting OpenDAP data in a web browser on user request. This functionality is achieved by passing OpenDAP URLs to the GetCapabilities request of an OGC service, allowing OpenDAP datasets to be visualized without any necessary configuration. The same method can also be used to configure an OGC Web Coverage Service (WCS), allowing data re-projection, subsetting and conversion to other formats. This chaining of services is achieved by using the ADAGUC OGC server as an OpenDAP client. ADAGUC uses the NetCDF C library to access data, which has built-in support for OpenDAP. Currently the software is capable of providing visualizations of datasets in raster format described by the climate and forecast (CF) conventions. Legends and colours are selected according to CF standard names and units, e.g. temperature in Celsius is displayed with different colours than precipitation in kg/m2. Automatic generation of OGC services with OpenDAP as a resource enables previewing interesting data in a web browser without the need to download the data itself. During this presentation this method is described in more detail and real-life examples are given.
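
    On the client side, accessing a CF-convention variable through OpenDAP without downloading the full file only takes a few lines; the sketch below uses the netCDF4 Python library (assuming a NetCDF C library built with OpenDAP support) with a placeholder URL and variable name.

        # Sketch: open an OpenDAP URL and read a server-side subset of a variable.
        from netCDF4 import Dataset

        url = "http://example.org/opendap/temperature.nc"    # placeholder OpenDAP URL
        ds = Dataset(url)                                     # no full download happens here

        temp = ds.variables["air_temperature"]                # placeholder variable name
        print("shape:", temp.shape)

        first_field = temp[0, :, :]                           # only this slice is transferred
        print("min/max:", first_field.min(), first_field.max())
        ds.close()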

  3. [Data integration, data mining and visualization analysis of traditional Chinese medicine manufacturing process].

    Science.gov (United States)

    Li, Zheng; Kang, Li-Yuan; Fan, Xiao-Hui

    2014-08-01

    A huge amount of data becomes available from the pharmaceutical manufacturing process with the wide application of industrial automatic control technology in the traditional Chinese medicine (TCM) industry. This industrial big data provides golden opportunities to better understand the manufacturing process and improve process performance. It is therefore important to implement data integration and management systems in TCM plants to easily collect, integrate, store, analyze, communicate and visualize the data with high efficiency. This could break the data islands and uncover useful information and knowledge to improve manufacturing process performance. The key supporting technologies for TCM manufacturing and industrial big data management are introduced in this paper, with a specific focus on data mining and visualization technologies. Using historical data collected from a manufacturing plant of Shengmai injection of the SZYY group, we illustrate the usefulness and discuss future prospects of data mining and visualization technologies. PMID:25507568

  4. Image Processing and Data Analysis

    Science.gov (United States)

    Starck, Jean-Luc; Murtagh, Fionn D.; Bijaoui, Albert

    1998-07-01

    Powerful techniques have been developed in recent years for the analysis of digital data, especially the manipulation of images. This book provides an in-depth introduction to a range of these innovative, avant-garde data-processing techniques. It develops the reader's understanding of each technique and then shows with practical examples how they can be applied to improve the skills of graduate students and researchers in astronomy, electrical engineering, physics, geophysics and medical imaging. What sets this book apart from others on the subject is the complementary blend of theory and practical application. Throughout, it is copiously illustrated with real-world examples from astronomy, electrical engineering, remote sensing and medicine. It also shows how many, more traditional, methods can be enhanced by incorporating the new wavelet and multiscale methods into the processing. For graduate students and researchers already experienced in image processing and data analysis, this book provides an indispensable guide to a wide range of exciting and original data-analysis techniques.

  5. A Paper on Automatic Fabrics Fault Processing Using Image Processing Technique In MATLAB

    Directory of Open Access Journals (Sweden)

    R.Thilepa

    2011-02-01

    Full Text Available The main objective of this paper is to elaborate how defective fabric parts can be processed using Matlab with image processing techniques. In developing countries like India, and especially in Tamilnadu, Tirupur, the knitwear capital of the country, has for three decades yielded a major income for the country. The city also employs, either directly or indirectly, more than 3 lakhs of people and has earned an income of almost 12,000 crores per annum for the country over the past three decades [2]. To upgrade this process, when the fabrics are processed in textiles, the faults present on the fabrics can be identified using Matlab with image processing techniques. This image processing is done using Matlab 7.3; for the acquired image, noise filtering, histogram and thresholding techniques are applied and the output is obtained in this paper. This research thus implements a textile defect detector with system vision methodology in image processing.
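
    The pipeline outlined above (noise filtering, histogram inspection, thresholding) can be sketched with OpenCV instead of Matlab; the file name and the choice of Otsu's threshold are illustrative assumptions, not the paper's exact settings.

        # Sketch: denoise a fabric image, inspect its histogram and threshold it
        # so that pixels deviating from the dominant weave appear as defects.
        import cv2
        import numpy as np

        img = cv2.imread("fabric.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
        denoised = cv2.medianBlur(img, 5)

        hist = cv2.calcHist([denoised], [0], None, [256], [0, 256])
        print("dominant grey level:", int(np.argmax(hist)))

        _, defects = cv2.threshold(denoised, 0, 255,
                                   cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        print("defect pixel fraction:", round(float(defects.mean()) / 255.0, 4))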

  6. Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis

    Science.gov (United States)

    Gupta, Manish

    1992-01-01

    Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of

  7. Computerization of reporting and data storage using automatic coding method in the department of radiology

    International Nuclear Information System (INIS)

    The authors developed a computer program for use in printing reports as well as for data storage and retrieval in the radiology department. This program runs on an IBM PC AT and was written in the dBASE III plus language. The automatic coding method of the ACR code, developed by Kim et al., was applied in this program, and the framework of this program is the same as that developed for the surgical pathology department. The working sheet, which contained the name card for X-ray film identification and the results of previous radiologic studies, was printed during registration. The word processing function was applied for issuing the formal report of a radiologic study, and data storage was carried out while the report was typed. Two kinds of data files were stored on the hard disk: the temporary file contained full information, and the permanent file contained the patient's identification data and ACR code. Searching for a specific case was performed by chart number, patient's name, date of study, or ACR code within a second. All cases were arranged by ACR codes of procedure code, anatomy code, and pathology code. All new data were automatically copied to a diskette after daily work, so that data could be restored in case of hard disk failure. The main advantage of this program in comparison to a larger computer system is its low price. Based on the experience in the Seoul District Armed Forces General Hospital, we assume that this program provides a solution to various problems in radiology departments where a large computer system with well-designed software is not available.

  8. Automatic machine learning based prediction of cardiovascular events in lung cancer screening data

    Science.gov (United States)

    de Vos, Bob D.; de Jong, Pim A.; Wolterink, Jelmer M.; Vliegenthart, Rozemarijn; Wielingen, Geoffrey V. F.; Viergever, Max A.; Išgum, Ivana

    2015-03-01

    Calcium burden determined in CT images acquired in lung cancer screening is a strong predictor of cardiovascular events (CVEs). This study investigated whether subjects undergoing such screening who are at risk of a CVE can be identified using automatic image analysis and subject characteristics. Moreover, the study examined whether these individuals can be identified using solely image information, or if a combination of image and subject data is needed. A set of 3559 male subjects undergoing the Dutch-Belgian lung cancer screening trial was included. Low-dose non-ECG-synchronized chest CT images acquired at baseline were analyzed (1834 scanned in the University Medical Center Groningen, 1725 in the University Medical Center Utrecht). Aortic and coronary calcifications were identified using previously developed automatic algorithms. A set of features describing the number, volume and size distribution of the detected calcifications was computed. The age of the participants was extracted from the image headers. Features describing participants' smoking status, smoking history and past CVEs were obtained. CVEs that occurred within three years after the imaging were used as the outcome. Support vector machine classification was performed employing different feature sets: sets of only image features, or a combination of image and subject-related characteristics. Classification based solely on the image features resulted in an area under the ROC curve (Az) of 0.69. A combination of image and subject features resulted in an Az of 0.71. The results demonstrate that subjects undergoing lung cancer screening who are at risk of CVE can be identified using automatic image analysis. Adding subject information slightly improved the performance.
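
    The classification step can be sketched with scikit-learn; the feature file, feature layout and cross-validation setup below are placeholder assumptions for illustration, not the study's data or protocol.

        # Sketch: SVM on calcification features to predict a CVE within 3 years,
        # evaluated with cross-validated area under the ROC curve.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        data = np.loadtxt("calcium_features.csv", delimiter=",", skiprows=1)  # placeholder
        X, y = data[:, :-1], data[:, -1]        # last column: CVE within 3 years (0/1)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
        print("cross-validated Az:", auc.mean().round(2))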

  9. An automatic detection method to the field wheat based on image processing

    Science.gov (United States)

    Wang, Yu; Cao, Zhiguo; Bai, Xiaodong; Yu, Zhenghong; Li, Yanan

    2013-10-01

    The automatic observation of field crops has attracted more and more attention recently. The use of image processing technology instead of the existing manual observation method allows timely observation and consistent management. Extracting the wheat from field wheat images is the basis of this task. In order to improve the accuracy of wheat segmentation, a novel two-stage wheat image segmentation method is proposed. The training stage adjusts several key thresholds, which will be used in the segmentation stage, to achieve the best segmentation results, and records these thresholds. The segmentation stage compares the different values of a color index to determine the class of each pixel. To verify the superiority of the proposed algorithm, we compared our method with other crop segmentation methods. Experimental results show that the proposed method has the best performance.
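
    The abstract does not specify which color index is used, so the sketch below illustrates the general idea with the common excess-green index and a placeholder threshold standing in for the thresholds tuned in the training stage.

        # Sketch: segment vegetation pixels with the excess-green colour index.
        import cv2
        import numpy as np

        img = cv2.imread("wheat_field.png").astype(np.float32)   # placeholder file
        b, g, r = cv2.split(img)

        total = b + g + r + 1e-6
        exg = 2.0 * g / total - r / total - b / total             # excess-green index

        threshold = 0.1                                            # placeholder value
        wheat_mask = (exg > threshold).astype(np.uint8) * 255
        print("wheat pixel fraction:", round(float((wheat_mask > 0).mean()), 3))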

  10. Automatic Estimation of Live Coffee Leaf Infection Based on Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Eric Hitimana

    2014-02-01

    Full Text Available Image segmentation is one of the most challenging issues in computer vision applications, and one of the main difficulties for crop management in agriculture is the lack of appropriate methods for detecting leaf damage for pest treatment. In this paper we propose an automatic method for leaf damage detection and severity estimation of coffee leaves without defoliation. After enhancing the contrast of the original image using LUT-based gamma correction, the image is processed to remove the background, and the resulting leaf is clustered using fuzzy c-means segmentation in the V channel of the YUV color space to maximize the detection of leaf damage. Finally, the severity of the leaf damage is estimated as the ratio of the pixel distribution between the normal leaf and the detected damage. The results of each proposed step were compared to current research, and the accuracy is evident in both the background removal and the damage detection.
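
    The contrast-enhancement step, LUT-based gamma correction, is simple to sketch; the gamma value and file names are placeholders, and the later fuzzy c-means clustering step is not reproduced here.

        # Sketch: apply gamma correction via a 256-entry lookup table.
        import cv2
        import numpy as np

        img = cv2.imread("coffee_leaf.png")          # placeholder file
        gamma = 0.7                                  # placeholder; values < 1 brighten shadows

        lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                       dtype=np.uint8)
        enhanced = cv2.LUT(img, lut)
        cv2.imwrite("coffee_leaf_enhanced.png", enhanced)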

  11. Application of digital process controller for automatic pulse operation in the NSRR

    International Nuclear Information System (INIS)

    The NSRR at JAERI is a modified TRIGA reactor. It was built for investigating reactor fuel behavior under reactivity initiated accident (RIA) conditions. Recently, there has been a need to improve the flexibility of pulsing operations in the NSRR to cover a wide range of accidental situations, including RIA events at elevated power levels and various abnormal power transients. To satisfy this need, we developed a new reactor control system which allows us to perform 'Shaped Pulse Operation: SP' and 'Combined Pulse Operation: CP'. Quick, accurate and complicated manipulation of the control rods was required to realize these operations. Therefore we installed a new reactor control system, which we call an automatic pulse control system. This control system is composed of digital processing controllers and other digital equipment, and is fully automated and highly accurate. (author)

  12. Raster Data Partitioning for Supporting Distributed GIS Processing

    Science.gov (United States)

    Nguyen Thai, B.; Olasz, A.

    2015-08-01

    In the geospatial sector the big data concept has already had an impact. Several studies apply techniques that originated in computer science to GIS processing of huge amounts of geospatial data. In other research studies geospatial data is treated as if it had always been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, increasing not only the amount of raw data but also its spectral, spatial and temporal resolution. A significant portion of big data is geospatial data, and the size of such data is growing rapidly, by at least 20% every year (Dasgupta, 2013). Of the resulting increasing volume of raw data, produced in different formats and representations and for different purposes, only the wealth of information derived from these data sets represents valuable results. However, computing capability and processing speed still face limitations, even if semi-automatic or automatic procedures are aimed at complex geospatial data (Kristóf et al., 2014). In recent times, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing. Cloud computing, even more so, requires appropriate processing algorithms to be distributed in order to handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capabilities for processing non-GIS big data. However, it is sometimes inconvenient or inefficient to rewrite existing algorithms for the Map-Reduce programming model, and GIS data cannot be partitioned like text-based data by line or by bytes. Hence, we would like to find an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting them or with only minor modifications. This paper gives a technical overview of currently available distributed computing environments, as well as of GIS (raster) data partitioning, distribution and distributed processing of GIS algorithms
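
    As a minimal illustration of raster partitioning for distribution, the sketch below cuts a raster array into fixed-size tiles that existing algorithms could process independently; the tile size and file name are placeholders, and the paper's actual partitioning strategy may differ.

        # Sketch: split a 2-D raster into tiles for distributed processing.
        import numpy as np

        raster = np.load("scene.npy")          # placeholder 2-D array of pixel values
        tile = 512                             # placeholder tile size

        tiles = []
        for row in range(0, raster.shape[0], tile):
            for col in range(0, raster.shape[1], tile):
                # Edge tiles are simply smaller; overlap can be added when an
                # algorithm needs neighbourhood context across tile borders.
                tiles.append(((row, col), raster[row:row + tile, col:col + tile]))

        print(len(tiles), "tiles ready for distribution")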

  13. A Paper on Automatic Fabrics Fault Processing Using Image Processing Technique In MATLAB

    OpenAIRE

    R.Thilepa; M.THANIKACHALAM

    2015-01-01

    The main objective of this paper is to elaborate how defective fabric parts can be processed using Matlab with image processing techniques. In developing countries like India, and especially in Tamil Nadu, Tirupur, the knitwear capital of the country, has yielded a major income for the country over three decades. The city also employs, either directly or indirectly, more than 3 lakh people and has earned an income of almost 12,000 crores per annum for the country over the past three decades [2]. To u...

  14. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Full Text Available Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: (1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and (2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left-hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.

  15. Summer Student Work Project Report: SCADA Bridge Tool Development Automatically Capturing Data from SCADA to the Maintenance System

    CERN Document Server

    Alhambra-Moron, Alfonso

    2015-01-01

    The main purpose of this report is to summarize the work project I have been doing at CERN during the last 3 months as a Summer Student. My name is Alfonso Alhambra Morón and on the 8th of June 2015 I joined the EN-HE-LM team as a summer student supervised by Damien Lafarge in order to collaborate in the automation of the transfer of meter readings from SCADA to Infor EAM, the computerized maintenance management system at CERN. The main objective of my project was to enable automatic updates of meters in Infor EAM by fetching data from SCADA, so as to automate a process which was previously done manually and consumed resources in terms of having to consult the meters physically, import this information into Infor EAM by hand, and detect and correct the errors that can occur when doing all of this manually. This problem is shared by several other teams at CERN apart from the Lift Maintenance team, and for this reason the main target I had when developing my solution was flexibility and scalability so as to make...

  16. Automatic spikes detection in seismogram

    Institute of Scientific and Technical Information of China (English)

    王海军; 靳平; 刘贵忠

    2003-01-01

    Data processing for a seismic network is complex and tedious, because a large amount of data is recorded by the network every day, which makes it impossible to process all of these data manually. Therefore, seismic data should be processed automatically to produce initial results for event detection and location; afterwards, these results are reviewed and modified by an analyst. In automatic processing, data quality checking is important. There are three main kinds of problem data in real seismic records: spikes, repeated data and dropouts. A spike is defined as an isolated large-amplitude point; the other two kinds of problem data share the feature that the amplitude of sample points is uniform over an interval. In data quality checking, the first step is to detect and count problem data in a data segment; if the percentage of problem data exceeds a threshold, the whole segment is masked and excluded from later processing.
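
    A minimal sketch of such a quality check is given below: spikes are flagged as points far from the segment median (using a MAD-based robust scale), runs of identical sample values stand in for repeated data and dropouts, and the segment is masked when the fraction of problem data exceeds a threshold. The thresholds and function name are illustrative, not taken from the paper.

```python
import numpy as np

def check_segment_quality(x, spike_k=8.0, run_len=20, max_bad_fraction=0.05):
    """Flag spikes and constant-amplitude runs in a waveform segment.

    Returns (bad_fraction, mask_segment) -- a sketch of the quality check
    described above; the thresholds are illustrative, not from the paper.
    """
    x = np.asarray(x, dtype=float)
    bad = np.zeros(x.size, dtype=bool)

    # Spikes: points far from the segment median (robust scale via MAD)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) + 1e-12
    bad |= np.abs(x - med) > spike_k * 1.4826 * mad

    # Repeated data / dropouts: runs of identical sample values
    same = np.concatenate(([False], np.diff(x) == 0))
    run = np.zeros(x.size, dtype=int)
    for i in range(1, x.size):
        run[i] = run[i - 1] + 1 if same[i] else 0
    for i in np.where(run >= run_len - 1)[0]:
        bad[i - run[i]:i + 1] = True

    bad_fraction = bad.mean()
    return bad_fraction, bad_fraction > max_bad_fraction
```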

  17. Automatic processing of an orientation map into a finite element mesh that conforms to grain boundaries

    Science.gov (United States)

    Dancette, S.; Browet, A.; Martin, G.; Willemet, M.; Delannay, L.

    2016-06-01

    A new procedure for microstructure-based finite element modeling of polycrystalline aggregates is presented. The proposed method relies (i) on an efficient graph-based community detection algorithm for crystallographic data segmentation and feature contour extraction and (ii) on the generation of selectively refined meshes conforming to grain boundaries. It constitutes a versatile and close to automatic environment for meshing complex microstructures. The procedure is illustrated with polycrystal microstructures characterized by orientation imaging microscopy. Hot deformation of a Duplex stainless steel is investigated based on ex-situ EBSD measurements performed on the same region of interest before and after deformation. A finite element mesh representing the initial microstructure is generated and then used in a crystal plasticity simulation of the plane strain compression. Simulation results and experiments are in relatively good agreement, confirming a large potential for such directly coupled experimental and modeling analyses, which is facilitated by the present image-based meshing procedure.

  18. Automatic monitoring system for high-steep slope in open-pit mine based on GPS and data analysis

    Science.gov (United States)

    Zhou, Chunmei; Li, Xianfu; Qin, Sunwei; Qiu, Dandan; Wu, Yanlin; Xiao, Yun; Zhou, Jian

    2008-12-01

    Recently, GPS has become more and more applicable to slope safety monitoring in open-pit mines. The automatic monitoring system for the high-steep slope of the Daye Iron Mine open pit mainly consists of three modules, namely a GPS data processing module, a monitoring and warning module, and an emergency plans module. According to the rock mass structural features and the slope stability evaluation, seven GPS deformation monitoring points were arranged along Fault F9 at the Daye Iron Mine, and observations were carried out with a combination of single-frequency static GPS receivers and data-transmission radio; the data processing mainly uses a three-transect interpolation method to deal with discontinuity and effectiveness problems in the data series. Based on the displacement monitoring data from 1990 to 1996 for Landslide A2 on Shizi mountain in the East Open Pit of the Daye Iron Mine, the displacement criterion, rate criterion, acceleration criterion and creep-curve tangent-angle criterion of landslide failure were studied. The results show that Landslide A2 is a collapse-type rock landslide whose movement proceeds in three phases, namely a creep stage, an accelerated stage and a destruction stage, and that the failure criteria differ between stages and between positions at the rear, central and front margins of the landslide. Putting forward a comprehensive failure criterion for the seven newly installed monitoring points, combining slope deformation and macroscopic evidence, is of important guiding significance.
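
    The displacement, rate and acceleration criteria mentioned above can be sketched as simple derivatives of the GPS displacement series, as in the hypothetical example below; the staging thresholds are placeholders, not values from the Daye Iron Mine study.

```python
import numpy as np

def landslide_kinematics(t_days, displacement_mm,
                         rate_warn=2.0, accel_warn=0.1):
    """Derive rate and acceleration criteria from a GPS displacement series.

    t_days: measurement epochs in days; displacement_mm: cumulative displacement.
    The warning thresholds are placeholders, not values from the study.
    """
    t = np.asarray(t_days, dtype=float)
    d = np.asarray(displacement_mm, dtype=float)
    rate = np.gradient(d, t)          # displacement rate, mm/day
    accel = np.gradient(rate, t)      # acceleration, mm/day^2
    stage = np.full(t.size, "creep", dtype=object)
    stage[rate > rate_warn] = "accelerated"
    stage[(rate > rate_warn) & (accel > accel_warn)] = "destruction"
    return rate, accel, stage
```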

  19. Sla-Oriented Semi-Automatic Management Of Data Storage And Applications In Distributed Environments

    Directory of Open Access Journals (Sweden)

    Dariusz Król

    2010-01-01

    Full Text Available In this paper we describe a semi-automatic programming framework for supporting users with managing the deployment of distributed applications along with storing large amounts of data in order to maintain Quality of Service in highly dynamic and distributed environments, e.g., Grid. The Polish national PL-GRID project aims to provide Polish science with both hardware and software infrastructures which will allow scientists to perform complex simulations and in-silico experiments on a scale greater than ever before. We highlight the issues and challenges related to data storage strategies that arise at the analysis stage of user requirements coming from different areas of science. Next we present a solution to the discussed issues along with a description of sample usage scenarios. At the end we provide remarks on the current status of the implementation work and some results from the tests performed.

  20. Automatic extraction of building boundaries using aerial LiDAR data

    Science.gov (United States)

    Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian

    2016-01-01

    Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extractions from aerial LiDAR data. First, segmenting height information generated from LiDAR data, the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from nonbuilding ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis verification paradigm. These algorithms include constrained searching in Hough space, enhanced Hough transformation, and the sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.
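
    The first step, expressing the outer boundaries of above-ground objects as chains of edge pixels, could look roughly like the sketch below, assuming the LiDAR heights have been rasterised into a normalised height grid; the shape evaluation and the constrained Hough-space rectangle fitting described in the paper are not reproduced.

```python
import numpy as np
from scipy import ndimage

def aboveground_boundaries(height_grid, ground_height=0.0, min_height=2.5,
                           min_pixels=50):
    """Extract outer-boundary pixel sets of above-ground objects from a
    rasterised LiDAR height grid (a simplified stand-in for the paper's
    segmentation step; thresholds are illustrative).
    """
    above = height_grid - ground_height > min_height
    labels, n = ndimage.label(above)
    boundaries = []
    for obj_id in range(1, n + 1):
        blob = labels == obj_id
        if blob.sum() < min_pixels:
            continue
        # boundary = object pixels that are lost under a one-pixel erosion
        edge = blob & ~ndimage.binary_erosion(blob)
        boundaries.append(np.argwhere(edge))   # (row, col) pixel coordinates
    return boundaries
```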

  1. Real-time environmental radiation monitoring system with automatic restoration of backup data in site detector via communication using radio frequency

    International Nuclear Information System (INIS)

    An environmental radiation monitoring system based on a high-pressurized ionization chamber has been used for on-line gamma monitoring around KAERI (Korea Atomic Energy Research Institute); it transmits the dose data measured by the ion chamber on site via radio frequency to a central processing computer, which stores the transmitted real-time data. Although communication using radio frequency has several advantages, such as effective and economical transmission, storage and data processing, there is one main disadvantage: data loss during transmission often happens because of unexpected communication problems. It is possible to restore the lost data off-line, for example via floppy disk, but simultaneous processing and display of the current data as well as the backup data is very difficult in the present on-line system. In this work, a new electronic circuit board and the operating software applicable to the conventional environmental radiation monitoring system are developed, and automatic synchronization of the ion chamber unit and the central processing computer is carried out every day. This system is able to automatically restore the backup data within 34 hours without additional equipment and also to display the current data together with the transmitted backup data after checking the time flag.

  2. Automatic Code Generation for Recurring Code Patterns in Web Based Applications and Increasing Efficiency of Data Access Code

    OpenAIRE

    Senthil, J; Arumugam, S.; S Margret Anouncia; Abhinav Kapoor

    2012-01-01

    Today, a lot of web applications and web sites are data driven. These web applications have all their static and dynamic data stored in relational databases. The aim of this thesis is to automatically generate code for data access to relational databases in minimum time.

  3. Interchangeable Data Protocol for VMeS (Vessel Messaging System) and AIS (Automatic Identification System)

    Directory of Open Access Journals (Sweden)

    Farid Andhika

    2012-09-01

    Full Text Available VMeS (Vessel Messaging System) is a radio-based communication system for sending messages between a ship's VMeS terminal at sea and a VMeS gateway on shore. Ship monitoring systems at sea generally use AIS (Automatic Identification System), which is already deployed in ports to monitor ship status and prevent collisions between ships. In this research, a data format suitable for VMeS is designed so that it can be interchanged with AIS and thereby read by AIS receivers, targeting vessels of less than 30 GT (Gross Tonnage). The VMeS data format is designed in three types, namely position data, ship information data and short message data, which are interchanged with AIS message types 1, 4 and 8. Performance testing of the interchange system shows that as the message transmission period increases, the total delay increases but packet loss decreases. When sending messages every 5 seconds at speeds of 0-40 km/h, 96.67% of the data was received correctly. Data suffers packet loss when the received power level is below -112 dBm. The longest distance the modem could reach while moving was at the ITS Informatics building, 530 meters from Laboratory B406, with a received power level of -110 dBm.

  4. Examining the influence of psychopathy, hostility biases, and automatic processing on criminal offenders' Theory of Mind.

    Science.gov (United States)

    Nentjes, Lieke; Bernstein, David; Arntz, Arnoud; van Breukelen, Gerard; Slaats, Mariëtte

    2015-01-01

    Theory of Mind (ToM) is a social perceptual skill that refers to the ability to take someone else's perspective and infer what others think. The current study examined the effect of potential hostility biases, as well as controlled (slow) versus automatic (fast) processing on ToM performance in psychopathy. ToM abilities (as assessed with the Reading the Mind in the Eyes Test; RMET; Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001), was compared between 39 PCL-R diagnosed psychopathic offenders, 37 non-psychopathic offenders, and 26 nonoffender controls. Contrary to our hypothesis, psychopathic individuals presented with intact overall RMET performance when restrictions were imposed on how long task stimuli could be processed. In addition, psychopaths did not over-ascribe hostility to task stimuli (i.e., lack of hostility bias). However, there was a significant three-way interaction between hostility, processing speed, and psychopathy: when there was no time limit on stimulus presentation, psychopathic offenders made fewer errors in identifying more hostile eye stimuli compared to nonoffender controls, who seemed to be less accurate in detecting hostility. Psychopaths' more realistic appraisal of others' malevolent mental states is discussed in the light of theories that stress its potential adaptive function. PMID:25655653

  5. The role of automaticity and attention in neural processes underlying empathy for happiness, sadness, and anxiety

    Directory of Open Access Journals (Sweden)

    Sylvia A. Morelli

    2013-05-01

    Full Text Available Although many studies have examined the neural basis of experiencing empathy, relatively little is known about how empathic processes are affected by different attentional conditions. Thus, we examined whether instructions to empathize might amplify responses in empathy-related regions and whether cognitive load would diminish the involvement of these regions. 32 participants completed a functional magnetic resonance imaging session assessing empathic responses to individuals experiencing happy, sad, and anxious events. Stimuli were presented under three conditions: watching naturally, while instructed to empathize, and under cognitive load. Across analyses, we found evidence for a core set of neural regions that support empathic processes (dorsomedial prefrontal cortex, DMPFC; medial prefrontal cortex, MPFC; temporoparietal junction, TPJ; amygdala; ventral anterior insula, AI; septal area, SA. Two key regions – the ventral AI and SA – were consistently active across all attentional conditions, suggesting that they are automatically engaged during empathy. In addition, watching versus empathizing with targets was not markedly different and instead led to similar subjective and neural responses to others’ emotional experiences. In contrast, cognitive load reduced the subjective experience of empathy and diminished neural responses in several regions related to empathy (DMPFC, MPFC, TPJ, amygdala and social cognition. The current results reveal how attention impacts empathic processes and provides insight into how empathy may unfold in everyday interactions.

  6. The role of automaticity and attention in neural processes underlying empathy for happiness, sadness, and anxiety.

    Science.gov (United States)

    Morelli, Sylvia A; Lieberman, Matthew D

    2013-01-01

    Although many studies have examined the neural basis of empathy, relatively little is known about how empathic processes are affected by different attentional conditions. Thus, we examined whether instructions to empathize might amplify responses in empathy-related regions and whether cognitive load would diminish the involvement of these regions. Thirty-two participants completed a functional magnetic resonance imaging session assessing empathic responses to individuals experiencing happy, sad, and anxious events. Stimuli were presented under three conditions: watching naturally, actively empathizing, and under cognitive load. Across analyses, we found evidence for a core set of neural regions that support empathic processes (dorsomedial prefrontal cortex, DMPFC; medial prefrontal cortex, MPFC; temporoparietal junction, TPJ; amygdala; ventral anterior insula, AI; and septal area, SA). Two key regions-the ventral AI and SA-were consistently active across all attentional conditions, suggesting that they are automatically engaged during empathy. In addition, watching vs. empathizing with targets was not markedly different and instead led to similar subjective and neural responses to others' emotional experiences. In contrast, cognitive load reduced the subjective experience of empathy and diminished neural responses in several regions related to empathy and social cognition (DMPFC, MPFC, TPJ, and amygdala). The results reveal how attention impacts empathic processes and provides insight into how empathy may unfold in everyday interactions. PMID:23658538

  7. Automatic processing of taxonomic and thematic relations in semantic priming - Differentiation by early N400 and late frontal negativity.

    Science.gov (United States)

    Chen, Qingfei; Ye, Chun; Liang, Xiuling; Cao, Bihua; Lei, Yi; Li, Hong

    2014-09-16

    Most current models of knowledge organization are based on hierarchical (plant-pine) or taxonomic categories (animal-plant). Another important organizational pattern is thematic categories, which performs external or complementary roles in the same scenario or event (bee-honey). The goal of this study was to explore the processing of hierarchical categories and thematic categories under automatic processing conditions that minimize strategic influences. The Evoked response potential (ERP) procedure was used to examine the time course of semantic priming for category members with a short stimulus onset asynchrony (SOA) of 300ms as participants performed a lexical decision task. Six experimental conditions were compared: hierarchical relations (offspring-grandson), internal features (gold-golden), productive relations (bee-honey), script relations (room-tenant), unrelated (star-spoon), and non-word trials (star-derf). We found faster reaction times for related prime-target pairs than unrelated pairs except for productive relations. The ERP data showed that an early N400 effect (200-400ms) was more negative for unrelated words than for all related words. Furthermore, a frontal negativity (400-550ms) elicited by productive relations was smaller (more positive) than other related words. We suggest that the smaller frontal negativity in the processing of productive relations indicates their increased salience in knowledge structure compared to less prominent hierarchical relations. Indeed, the allocation of attentional resources and subsequent recruitment of additional memory processing might be two of the hallmarks of thematic relations. PMID:25234647

  8. System of automatic control over data Acquisition and Transmission to IGR NNC RK Data Center

    International Nuclear Information System (INIS)

    An automated system for seismic and acoustic data acquisition and transmission in real time was established in the Data Center of IGR NNC RK, and it functions very successfully. The system monitors the quality and volume of acquired information and also controls the status of the system and communication channels. Statistical data on system operation are accumulated in the created database. Information on system status is presented on the Center's Web page. (author)

  9. MASSIF-1: a beamline dedicated to the fully automatic characterization and data collection from crystals of biological macromolecules

    Energy Technology Data Exchange (ETDEWEB)

    Bowler, Matthew W., E-mail: mbowler@embl.fr [European Molecular Biology Laboratory, Grenoble Outstation, 71 avenue des Martyrs, F-38042 Grenoble (France); Université Grenoble Alpes-EMBL-CNRS, 71 avenue des Martyrs, F-38042 Grenoble (France); Nurizzo, Didier, E-mail: mbowler@embl.fr; Barrett, Ray; Beteva, Antonia; Bodin, Marjolaine [European Synchrotron Radiation Facility, 71 avenue des Martyrs, F-38043 Grenoble (France)

    2015-10-03

    MASSIF-1 (ID30A-1) is a new beamline dedicated to the completely automatic characterization and data collection from crystals of biological macromolecules. MASSIF-1 (ID30A-1) is an ESRF undulator beamline operating at a fixed wavelength of 0.969 Å (12.8 keV) that is dedicated to the completely automatic characterization of and data collection from crystals of biological macromolecules. The first of the ESRF Upgrade MASSIF beamlines to be commissioned, it has been open since September 2014, providing a unique automated data collection service to academic and industrial users. Here, the beamline characteristics and details of the new service are outlined.

  10. MASSIF-1: a beamline dedicated to the fully automatic characterization and data collection from crystals of biological macromolecules

    International Nuclear Information System (INIS)

    MASSIF-1 (ID30A-1) is a new beamline dedicated to the completely automatic characterization and data collection from crystals of biological macromolecules. MASSIF-1 (ID30A-1) is an ESRF undulator beamline operating at a fixed wavelength of 0.969 Å (12.8 keV) that is dedicated to the completely automatic characterization of and data collection from crystals of biological macromolecules. The first of the ESRF Upgrade MASSIF beamlines to be commissioned, it has been open since September 2014, providing a unique automated data collection service to academic and industrial users. Here, the beamline characteristics and details of the new service are outlined

  11. Automatic data distribution and load balancing with space-filling curves: implementation in CONQUEST

    International Nuclear Information System (INIS)

    We present an automatic, spatially local data distribution and load balancing scheme applicable to many-body problems running on parallel architectures. The particle distribution is based on spatial decomposition of the simulation cell. A one-dimensional Hilbert curve is mapped onto the three-dimensional real space cell, which reduces the dimensionality of the problem and provides a way to assign different spatially local parts of the cell to each processor. The scheme is independent of the number of processors. It can be used for both ordered and disordered structures and does not depend on the dimensionality or shape of the system. Details of implementation in the linear-scaling density functional code CONQUEST, as well as several case studies of systems of various complexity, containing up to 55 755 particles, are given
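
    The idea of ordering particles along a space-filling curve and cutting the curve into equal pieces can be sketched as below. For brevity the sketch uses a Morton (Z-order) curve as a simpler stand-in for the Hilbert curve used in CONQUEST; the binning resolution and function names are illustrative.

```python
import numpy as np

def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of 3-D grid indices into a single Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def assign_particles(positions, cell, n_procs, bins=64):
    """Map particle coordinates onto a space-filling curve and split the
    curve into contiguous, nearly equal pieces, one per processor."""
    frac = (positions / cell) % 1.0                     # fractional coordinates
    idx = np.minimum((frac * bins).astype(int), bins - 1)
    keys = np.array([morton_key(i, j, k) for i, j, k in idx])
    order = np.argsort(keys)                            # order along the curve
    owner = np.empty(len(positions), dtype=int)
    for p, chunk in enumerate(np.array_split(order, n_procs)):
        owner[chunk] = p
    return owner
```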

  12. ANIGAM: a computer code for the automatic calculation of nuclear group data

    International Nuclear Information System (INIS)

    The computer code ANIGAM consists mainly of the well-known programmes GAM-I and ANISN, as well as of a subroutine which reads the THERMOS cross section library and prepares it for ANISN. ANIGAM has been written for the automatic calculation of microscopic and macroscopic cross sections of light water reactor fuel assemblies. In a single computer run, both the cross sections representative of fuel assemblies in reactor core calculations and the cross sections of each cell type of a fuel assembly are calculated. The calculated data are delivered by an auxiliary programme to EXTERMINATOR and CITATION for subsequent diffusion or burn-up calculations. This report contains a detailed description of the computer codes and methods used in ANIGAM, a description of the subroutines and of the OVERLAY structure, and an input and output description. (orig.)

  13. INfluence of vinasse on water movement in soil, using automatic acquisition and handling data system

    International Nuclear Information System (INIS)

    Vinasse, a by-product of the ethyl alcohol industry from the fermentation of sugar cane juice or molasses by yeast, has been incorporated into the soil as fertilizer due to its high organic matter (2-6%), potassium and sulphate (0.1-0.5%) and other nutrient contents. By employing the monoenergetic gamma-ray beam attenuation technique (241Am; 59.5 keV; 100 mCi), the influence of vinasse on water movement in the soil was studied. For this, an automatic data acquisition and handling system was used, based on a multichannel analyser operated in multi-scaling mode and coupled to a personal microcomputer and plotter. Despite the small depth studied (6 cm), it was observed that vinasse decreases the water infiltration velocity in the soil. (Author)

  14. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    Science.gov (United States)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Fields (MRF) model, and employs efficient linear programming for reaching the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the first converging in a few minutes and the second in a few seconds. Regarding registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.

  15. Automatic extraction of insulators from 3D LiDAR data of an electrical substation

    Science.gov (United States)

    Arastounia, M.; Lichti, D. D.

    2013-10-01

    A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data, while in this paper the aim is to extract insulators, which are key objects in electrical substations. This paper proposes a segmentation method based on a new approach to finding the principal direction of the points' distribution. This is done by forming and analysing the distribution matrix whose elements are the ranges of the points in 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time-consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as diameter and the number and spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized, while the 3 unrecognized ones were highly occluded. Check point analysis was performed by manually cropping all points on insulators. The results of the check point analysis show that the accuracy, precision and recall of insulator recognition are 98%, 86% and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our developed approach to determining the directional distribution of points is computationally more efficient for segmentation of objects in electrical substations compared to PCA. Finally, our knowledge-based method is promising for recognizing points on electrical objects as it was successfully applied for
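
    The directional-range idea behind the segmentation can be sketched as follows: project the points of a cluster onto a fixed set of directions, record the range (max minus min) of each projection, and take the direction of maximal range as the principal direction, avoiding the covariance computation of PCA. The particular 9-direction set and the function name are assumptions for illustration.

```python
import numpy as np

def principal_direction_by_range(points, directions=None):
    """Estimate the principal direction of a point cluster from the range
    (max - min) of its projections onto a fixed set of 3-D directions.

    A zero-order alternative to PCA: only projection extrema are needed,
    no covariance (second-moment) computation. The 9-direction set below
    is an illustrative choice, not necessarily the one used in the paper.
    """
    pts = np.asarray(points, dtype=float)
    if directions is None:
        directions = np.array([
            [1, 0, 0], [0, 1, 0], [0, 0, 1],
            [1, 1, 0], [1, 0, 1], [0, 1, 1],
            [1, -1, 0], [1, 0, -1], [0, 1, -1],
        ], dtype=float)
    directions = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    proj = pts @ directions.T                       # projections on each direction
    ranges = proj.max(axis=0) - proj.min(axis=0)    # one "distribution matrix" row
    return directions[np.argmax(ranges)], ranges
```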

  16. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has some properties that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method which has three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark dataset provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction using LiDAR data.

  17. A comparison of density of Insight and Ektaspeed plus dental x-ray films using automatic and manual processing

    International Nuclear Information System (INIS)

    To compare the film density of Insight dental X-ray film (Eastman Kodak Co., Rochester, NY, USA) with that of Ektaspeed Plus film (Eastman Kodak) under manual and automatic processing conditions. Insight and Ektaspeed Plus films were exposed with an aluminum step wedge on the film at three different exposure times. The exposed films were processed both manually and automatically. The base-plus-fog density and the optical densities produced by exposing the step wedge were measured using a digital densitometer (model 07-443, Victoreen Inc, Cleveland, Ohio, USA). The optical densities of the Insight and Ektaspeed films versus the thickness of the aluminum wedge at the same exposure time were plotted on graphs, and statistical analyses were applied to compare the optical densities of the two films. The film density of both Insight and Ektaspeed Plus films was significantly higher under automatic processing than under manual processing. The film density of Insight over Ektaspeed Plus film. To take full advantage of the reduced exposure time, Insight film should be processed automatically.

  18. Automatic domain-general processing of sound source identity in the left posterior middle frontal gyrus.

    Science.gov (United States)

    Giordano, Bruno L; Pernet, Cyril; Charest, Ian; Belizaire, Guylaine; Zatorre, Robert J; Belin, Pascal

    2014-09-01

    Identifying sound sources is fundamental to developing a stable representation of the environment in the face of variable auditory information. The cortical processes underlying this ability have received little attention. In two fMRI experiments, we investigated passive adaptation to (Exp. 1) and explicit discrimination of (Exp. 2) source identities for different categories of auditory objects (voices, musical instruments, environmental sounds). All cortical effects of source identity were independent of high-level category information, and were accounted for by sound-to-sound differences in low-level structure (e.g., loudness). A conjunction analysis revealed that the left posterior middle frontal gyrus (pMFG) adapted to identity repetitions during both passive listening and active discrimination tasks. These results indicate that the comparison of sound source identities in a stream of auditory stimulation recruits the pMFG in a domain-general way, i.e., independent of the sound category, based on information contained in the low-level acoustical structure. pMFG recruitment during both passive listening and explicit identity comparison tasks also suggests its automatic engagement in sound source identity processing. PMID:25038309

  19. Automatic registration of UAV-borne sequent images and LiDAR data

    Science.gov (United States)

    Yang, Bisheng; Chen, Chi

    2015-03-01

    Use of direct geo-referencing data leads to registration failure between sequent images and LiDAR data captured by mini-UAV platforms because of low-cost sensors. This paper therefore proposes a novel automatic registration method for sequent images and LiDAR data captured by mini-UAVs. First, the proposed method extracts building outlines from LiDAR data and images and estimates the exterior orientation parameters (EoPs) of the images with building objects in the LiDAR data coordinate framework based on corresponding corner points derived indirectly by using linear features. Second, the EoPs of the sequent images in the image coordinate framework are recovered using a structure from motion (SfM) technique, and the transformation matrices between the LiDAR coordinate and image coordinate frameworks are calculated using corresponding EoPs, resulting in a coarse registration between the images and the LiDAR data. Finally, 3D points are generated from sequent images by multi-view stereo (MVS) algorithms. Then the EoPs of the sequent images are further refined by registering the LiDAR data and the 3D points using an iterative closest-point (ICP) algorithm with the initial results from coarse registration, resulting in a fine registration between sequent images and LiDAR data. Experiments were performed to check the validity and effectiveness of the proposed method. The results show that the proposed method achieves high-precision robust co-registration of sequent images and LiDAR data captured by mini-UAVs.
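
    The fine-registration step can be illustrated with a textbook point-to-point ICP sketch, assuming NumPy and SciPy and that the coarse registration has already brought the MVS points roughly onto the LiDAR cloud; it is a generic ICP, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iter=50, tol=1e-6):
    """Point-to-point ICP: refine the rigid transform aligning `source`
    (e.g. MVS points from the images) to `target` (the LiDAR cloud)."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)                 # closest-point correspondences
        matched = tgt[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t                         # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```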

  20. ARCPAS - Automatic radiation control point access system an automated data collection terminal for radiation dose and access control

    International Nuclear Information System (INIS)

    Nuclear facilities such as nuclear power plants or fuel processing facilities are required to maintain accurate records of personnel access, exposure and work performed. Most facilities today have some sort of computerized data collection system for radiation dose and access control. The great majority rely on handwritten records, i.e., dose card or sign-in sheet which in turn are transferred to a computerized records management system manually. The ARCPAS terminal provides a method for automating personnel exposure data collection and processing. The terminal is a user interactive device which contains a unit for automatically reading and zeroing pocket dosemeters, a security badge reader for personnel identification, a 16 digit key pad for RWP information entry, a high resolution color CRT for interactive communication and a high speed tape printer providing an entry chit. The chit provides the individual worker with a record of the transaction including an individual identifying number, remaining dose for the quarter or period and RWP under which the worker entered the controlled area. The purpose of automating the access control is to provide fast, accurate, realtime data to the records management system. A secondary purpose is to relieve trained health physics technicians of control point duties so that their training and skills can be utilized more effectively in a facility's health physics program

  1. Automatic registration of optical imagery with 3d lidar data using local combined mutual information

    Science.gov (United States)

    Parmehr, E. G.; Fraser, C. S.; Zhang, C.; Leach, J.

    2013-10-01

    Automatic registration of multi-sensor data is a basic step in data fusion for photogrammetric and remote sensing applications. The effectiveness of intensity-based methods such as Mutual Information (MI) for automated registration of multi-sensor imagery has been previously reported for medical and remote sensing applications. In this paper, a new multivariable MI approach that exploits the complementary information of inherently registered LiDAR DSM and intensity data to improve the robustness of registering optical imagery to a LiDAR point cloud is presented. LiDAR DSM and intensity information have been utilised in measuring the similarity of LiDAR and optical imagery via the Combined MI. An effective histogramming technique is adopted to facilitate estimation of the 3D probability density function (pdf). In addition, a local similarity measure is introduced to decrease the complexity of optimisation at higher dimensions and the computation cost, so the reliability of registration is improved due to the use of redundant observations of similarity. The performance of the proposed method for registration of satellite and aerial images with LiDAR data in urban and rural areas is experimentally evaluated and the results obtained are discussed.
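
    For orientation, a basic (two-variable) mutual information measure computed from a joint grey-level histogram is sketched below; the paper's Combined MI would add a third histogram axis for the LiDAR intensity, and the local similarity measure is not reproduced. The bin count and function name are illustrative.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two aligned image patches, estimated
    from their joint grey-level histogram (similarity measure sketch only).
    """
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint pdf
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginal pdfs
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```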

  2. Iqpc 2015 Track: Evaluation of Automatically Generated 2d Footprints from Urban LIDAR Data

    Science.gov (United States)

    Truong-Hong, L.; Laefer, D.; Bisheng, Y.; Ronggang, H.; Jianping, L.

    2015-08-01

    Over the last decade, several automatic approaches have been proposed to extract and reconstruct 2D building footprints and 2D road profiles from ALS data, satellite images, and/or aerial imagery. Since these methods have to date been applied to various data sets and assessed through a variety of different quality indicators and ground truths, comparing the relative effectiveness of the techniques and identifying their strengths and shortcomings has not been possible in a systematic way. This contest, as part of IQPC15, was designed to determine the pros and cons of submitted approaches in generating 2D footprints of a city region from ALS data. Specifically, participants were asked to submit 2D footprints (building outlines and road profiles) derived from ALS data from a highly dense dataset (approximately 225 points/m2) across 1 km2 of Dublin, Ireland's city centre. The proposed evaluation strategies were designed to measure not only the capacity of each method to detect and reconstruct 2D buildings and roads but also the quality of the reconstructed building and road models in terms of shape similarity and positional accuracy.

  3. Automatic and robust method for registration of optical imagery with point cloud data

    Science.gov (United States)

    Wu, Yingdan; Ming, Yang

    2015-12-01

    Aiming at the difficulty of automatic and robust registration of optical imagery with point cloud data, this paper proposes a new method based on SIFT and Mutual Information (MI). SIFT features are first extracted and matched, and the result is used to derive the coarse geometric relationship between the optical imagery and the point cloud data. Secondly, the MI-based similarity measure is used to derive conjugate points, and the RANSAC algorithm is adopted to eliminate erroneous matching points. The procedure of MI matching and mismatch deletion is repeated down to the finest pyramid image level. Using the matching results, the transform model is determined. The experiments demonstrate the potential of the MI-based measure for the registration of optical imagery with point cloud data, and highlight the feasibility and robustness of the proposed method for automated registration of multi-modal, multi-temporal remote sensing data for a wide range of applications.
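
    The coarse stage described above can be sketched with standard OpenCV calls, assuming the point cloud has been rendered as an intensity or DSM image: SIFT features are matched with a ratio test and a RANSAC-filtered homography gives the initial transform. The ratio and RANSAC threshold are illustrative defaults, and the MI-based pyramid refinement is not shown.

```python
import cv2
import numpy as np

def coarse_register(optical_img, lidar_raster_img, ratio=0.75):
    """Coarse registration of an optical image to a rasterised view of the
    point cloud via SIFT matching and a RANSAC-filtered homography."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(optical_img, None)
    kp2, des2 = sift.detectAndCompute(lidar_raster_img, None)

    # Lowe ratio test on 2-nearest-neighbour matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, int(inliers.sum())
```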

  4. On the Automatic Generation of Plans for Life Cycle Assembly Processes

    Energy Technology Data Exchange (ETDEWEB)

    CALTON,TERRI L.

    2000-01-01

    Designing products for easy assembly and disassembly during their entire life cycles for purposes including product assembly, product upgrade, product servicing and repair, and product disposal is a process that involves many disciplines. In addition, finding the best solution often involves considering the design as a whole and its intended life cycle. Different goals and manufacturing plan selection criteria, as compared to initial assembly, require re-visiting significant fundamental assumptions and methods that underlie current assembly planning techniques. Previous work in this area has been limited to either academic studies of issues in assembly planning or to applied studies of life cycle assembly processes that give no attention to automatic planning. It is believed that merging these two areas will result in a much greater ability to design for, optimize, and analyze life cycle assembly processes. The study of assembly planning is at the very heart of manufacturing research facilities and academic engineering institutions, and in recent years a number of significant advances in the field of assembly planning have been made. These advances have ranged from the development of automated assembly planning systems, such as Sandia's Automated Assembly Analysis System Archimedes 3.0©, to the startling revolution in microprocessors and computer-controlled production tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), flexible manufacturing systems (FMS), and computer-integrated manufacturing (CIM). These results have kindled considerable interest in the study of algorithms for life cycle related assembly processes and have blossomed into a field of intense interest. The intent of this manuscript is to bring together the fundamental results in this area, so that the unifying principles and underlying concepts of algorithm design may more easily be implemented in practice.

  5. Data taking and processing system for nuclear experimental physics study

    International Nuclear Information System (INIS)

    A multi-input, multi-mode, multi-user data taking and processing system was developed. This system has the following special features. 1) It is a multi-computer system constituted of two special processors and two minicomputers. 2) Pseudo devices are introduced to make operating procedures simple and easy; in particular, the selection or modification of the 1-8 coincidence mode can be done very easily and quickly. 3) A 16 Kch spectrum storage has 8 partitions. All partitions have floating sizes and are handled automatically by the data taking software SHINE. 4) On-line real-time data processing can be performed. Using the FORTRAN language, users may prepare the processing software separately from the data taking software. Under the RSX-11D system software, it runs concurrently with the data taking software in a multi-programming mode. 5) Data communication between arbitrary external devices and this system is possible. With these communication procedures, not only data transfer between computers but also control of the experimental devices is realized. Like the real-time processing software, this software can be prepared by users and run concurrently with other software. 6) For data monitoring, two different graphic displays are used complementarily: a refresh-type high-speed display and a storage-type large-screen display. Raw data are displayed on the former; processed data or multi-parametric large-volume data are displayed on the latter. (author)

  6. Children's and Adults' Automatic Processing of Proportion in a Stroop-Like Task

    Science.gov (United States)

    Yang, Ying; Hu, Qingfen; Wu, Di; Yang, Shuqi

    2015-01-01

    This current study examined human children's and adults' automatic processing of proportion using a Stroop-like paradigm. Preschool children and university students compared the areas of two sectors that varied not only in absolute areas but also in the proportions they occupied in their original rounds. A congruity effect was found in…

  7. An Investigation of the Stroop Effect among Deaf Signers in English and Japanese: Automatic Processing or Memory Retrieval?

    Science.gov (United States)

    Flaherty, Mary; Moran, Aidan

    2007-01-01

    Most studies on the Stroop effect (unintentional automatic word processing) have been restricted to English speakers using vocal responses. Little is known about this effect with deaf signers. The study compared Stroop task responses among four different samples: deaf participants from a Japanese-language environment and from an English-language…

  8. Comparison of edge detection techniques for the automatic information extraction of Lidar data

    Science.gov (United States)

    Li, H.; di, L.; Huang, X.; Li, D.

    2008-05-01

    In recent years, there has been much interest in information extraction from Lidar point cloud data, and many automatic edge detection algorithms have been applied to extracting information from Lidar data. Generally they can be divided into three major categories: early-vision gradient operators, optimal detectors, and operators using parametric fitting models. A Lidar point cloud includes both intensity information and geographic information, so traditional edge detectors used on remotely sensed images can take advantage of the coordinate information provided by the point data. However, derivation of complex terrain features from Lidar data points depends on the intensity properties and topographic relief of each scene. Take roads for example: in some urban areas, roads have intensity similar to buildings, but the topographic relationship of roads is distinct, so the edge detector for roads in urban areas differs from the detector for buildings. Therefore, in Lidar extraction, each kind of scene has its own suitable edge detector. This paper compares the application of different edge detectors to various terrain areas, in order to figure out the proper algorithm for each terrain type. The Canny, EDISON and SUSAN algorithms were applied to data points with the intensity character and topographic relationship of Lidar data. The Lidar test data cover different terrain areas, such as an urban area with a mass of buildings, a rural area with vegetation, an area with slope, and an area with a bridge. Results using these edge detectors are compared to determine which algorithm is suitable for a specific terrain area. Key words: Edge detector, Extraction, Lidar, Point data
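
    As a small illustration of applying one of the compared detectors to Lidar data, the sketch below rasterises point intensities onto a grid and runs the Canny detector from OpenCV; the cell size and hysteresis thresholds are illustrative only and not taken from the paper.

```python
import cv2
import numpy as np

def lidar_edges(points_xyz, intensity, cell=0.5, low=50, high=150):
    """Rasterise Lidar intensities onto a grid and run the Canny detector."""
    xy = points_xyz[:, :2]
    mins = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - mins) / cell).astype(int) + 1
    grid = np.zeros((rows, cols), dtype=np.float32)
    idx = ((xy - mins) / cell).astype(int)
    grid[idx[:, 1], idx[:, 0]] = intensity          # last point in a cell wins

    # scale to 8-bit and apply Canny hysteresis thresholding
    img = cv2.normalize(grid, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.Canny(img, low, high)
```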

  9. A Fast Algorithm for Automatic Detection of Ionospheric Disturbances Using GPS Slant Total Electron Content Data

    Science.gov (United States)

    Efendi, Emre; Arikan, Feza; Yarici, Aysenur

    2016-07-01

    Solar, geomagnetic, gravitational and seismic activities cause disturbances in the ionospheric region of the upper atmosphere that affect space-based communication, navigation and positioning systems. These disturbances can be categorized with respect to their amplitude, duration and frequency. Typically in the literature, ionospheric disturbances are investigated with gradient-based methods on Total Electron Content (TEC) data estimated from ground-based dual-frequency Global Positioning System (GPS) receivers. In this study, a detection algorithm is developed to determine the variability in Slant TEC (STEC) data. The developed method, namely Differential Rate of TEC (DRoT), is based on the Rate of TEC (RoT) method that is widely used in the literature. RoT is usually applied to Vertical TEC (VTEC) and can be defined as the normalized derivative of VTEC. Unfortunately, the data obtained from the application of RoT to VTEC suffer from inaccuracies due to the mapping function, and the resultant values are very noisy, which makes it difficult to automatically detect disturbances due to variability in the ionosphere. The developed DRoT method can be defined as the normalized L2 norm between the RoT and its baseband trend structure. In this study, the error performance of DRoT is determined using synthetic data with variable bounds on the parameter set of amplitude, frequency and period of disturbance. It is observed that the DRoT method can detect disturbances in three categories. For DRoT values less than 50%, there is no significant disturbance in the STEC data. For DRoT values between 50 and 70%, a medium-scale disturbance can be observed. For DRoT values over 70%, severe disturbances such as Large Scale Travelling Ionospheric Disturbances (TIDs) or plasma bubbles can be observed. When DRoT is applied to the GPS-STEC data for stations in high-latitude, equatorial and mid-latitude regions, it is observed that disturbances with amplitudes larger than 10% of the difference between
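
    A rough sketch of the RoT/DRoT computation is given below, assuming a regularly sampled STEC series; the first difference stands in for the normalised derivative and a moving average stands in for the baseband trend, so the exact normalisation used in the paper may differ.

```python
import numpy as np

def rot_and_drot(stec, dt=30.0, trend_window=41):
    """Rate of TEC (RoT) and a DRoT-style disturbance index for a STEC series.

    RoT is taken here as the time-normalised first difference of STEC, and
    the 'baseband trend' as a moving average of RoT; treat this as a sketch.
    """
    stec = np.asarray(stec, dtype=float)
    rot = np.diff(stec) / (dt / 60.0)                    # TECU per minute
    kernel = np.ones(trend_window) / trend_window
    trend = np.convolve(rot, kernel, mode="same")        # baseband trend of RoT
    drot = np.linalg.norm(rot - trend) / (np.linalg.norm(rot) + 1e-12)
    return rot, 100.0 * drot                             # DRoT as a percentage

# Interpretation thresholds reported in the abstract: <50% quiet,
# 50-70% medium-scale disturbance, >70% severe disturbance.
```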

  10. Composable Data Processing in Environmental Science - A Process View

    OpenAIRE

    Wombacher, A.

    2008-01-01

    Data processing in environmental science is essential for doing science. The heterogeneity of data sources, data processing operations and infrastructures results in a lot of manual data and process integration work done by each scientist individually. This is very inefficient and time consuming. The aim is to provide a view based approach on accessing and processing data supporting a more generic infrastructure to integrate processing steps from different organizations, systems and libraries...

  11. Hardware and Software platform for Real-Time Processing and Visualization of Automatic Ultrasonic Signal Evaluation in NPP

    International Nuclear Information System (INIS)

    In this paper the architecture of a hardware and software platform for ultrasonic wave evaluation is presented, which is under development as part of a project to develop an automatic ultrasonic wave acquisition, analysis and evaluation system. The platform, used in conjunction with analog front-end hardware for driving the ultrasonic transducers of commercial devices (μ-Plus II and MDU II, Veritec, England) and having access to the radio-frequency echo signal, makes it possible to build a powerful ultrasonic system for experimenting with any processing technique. The platform will become a more efficient system through tuning of the automatic ultrasonic hardware and evaluation software. The evaluation software is a supporting tool for the efficient evaluation of automatic ultrasonic wave signals.

  12. SU-D-BRD-07: Automatic Patient Data Audit and Plan Quality Check to Support ARIA and Eclipse

    International Nuclear Information System (INIS)

    Purpose: To ensure patient safety and treatment quality in RT departments that use Varian ARIA and Eclipse, we developed a computer software system and interface functions that allow previously developed electron chart checking (EcCk) methodologies to support these Varian systems. Methods: ARIA and Eclipse store most patient information in its MSSQL database. We studied the contents in the hundreds database tables and identified the data elements used for patient treatment management and treatment planning. Interface functions were developed in both c-sharp and MATLAB to support data access from ARIA and Eclipse servers using SQL queries. These functions and additional data processing functions allowed the existing rules and logics from EcCk to support ARIA and Eclipse. Dose and structure information are important for plan quality check, however they are not stored in the MSSQL database but as files in Varian private formats, and cannot be processed by external programs. We have therefore implemented a service program, which uses the DB Daemon and File Daemon services on ARIA server to automatically and seamlessly retrieve dose and structure data as DICOM files. This service was designed to 1) consistently monitor the data access requests from EcCk programs, 2) translate the requests for ARIA daemon services to obtain dose and structure DICOM files, and 3) monitor the process and return the obtained DICOM files back to EcCk programs for plan quality check purposes. Results: EcCk, which was previously designed to only support MOSAIQ TMS and Pinnacle TPS, can now support Varian ARIA and Eclipse. The new EcCk software has been tested and worked well in physics new start plan check, IMRT plan integrity and plan quality checks. Conclusion: Methods and computer programs have been implemented to allow EcCk to support Varian ARIA and Eclipse systems. This project was supported by a research grant from Varian Medical System

  13. SU-D-BRD-07: Automatic Patient Data Audit and Plan Quality Check to Support ARIA and Eclipse

    Energy Technology Data Exchange (ETDEWEB)

    Li, X; Li, H; Wu, Y; Mutic, S; Yang, D [Washington University School of Medicine, St. Louis, MO (United States)

    2014-06-01

    Purpose: To ensure patient safety and treatment quality in RT departments that use Varian ARIA and Eclipse, we developed a computer software system and interface functions that allow previously developed electron chart checking (EcCk) methodologies to support these Varian systems. Methods: ARIA and Eclipse store most patient information in its MSSQL database. We studied the contents in the hundreds database tables and identified the data elements used for patient treatment management and treatment planning. Interface functions were developed in both c-sharp and MATLAB to support data access from ARIA and Eclipse servers using SQL queries. These functions and additional data processing functions allowed the existing rules and logics from EcCk to support ARIA and Eclipse. Dose and structure information are important for plan quality check, however they are not stored in the MSSQL database but as files in Varian private formats, and cannot be processed by external programs. We have therefore implemented a service program, which uses the DB Daemon and File Daemon services on ARIA server to automatically and seamlessly retrieve dose and structure data as DICOM files. This service was designed to 1) consistently monitor the data access requests from EcCk programs, 2) translate the requests for ARIA daemon services to obtain dose and structure DICOM files, and 3) monitor the process and return the obtained DICOM files back to EcCk programs for plan quality check purposes. Results: EcCk, which was previously designed to only support MOSAIQ TMS and Pinnacle TPS, can now support Varian ARIA and Eclipse. The new EcCk software has been tested and worked well in physics new start plan check, IMRT plan integrity and plan quality checks. Conclusion: Methods and computer programs have been implemented to allow EcCk to support Varian ARIA and Eclipse systems. This project was supported by a research grant from Varian Medical System.

  14. An Automatic Data-Logging System for Meteorological Studies in Reactor Environments

    International Nuclear Information System (INIS)

    An automatic data-logging system has been designed for meteorological studies for the Tarapur power reactor. The system is designed to log data from 256 sensors divided into two groups of 128 each. The outputs from the sensors, in analog form and varying from 0 to 100 mV, can be scanned sequentially. The scanning unit, used for time multiplexing, consists of a bank of 256 pairs of reed relays. It sequentially connects the outputs from the two groups of sensors to two chopper-modulated d.c. amplifiers. The output from each chopper-modulated d.c. amplifier varies from -4 to -10 V. A linear and highly stable A-D converter, connected alternately to the chopper-modulated d.c. amplifiers, digitizes the amplified outputs. The digitized data are stored in a ferrite core memory with a capacity of 256 5-digit words. The data are handled in binary-coded decimal form. Each memory location corresponds to a particular input sensor. When sensor Mi is selected by the scanning unit, its digitized output is added to the data previously stored in the Mi-th memory location and the result is stored back in the same location. Sensor Mi+1 is selected next. The scanning unit selects all the sensors every second. At the end of 10 min the memory locations contain the averages of the outputs of all the sensors. These data are punched on a paper tape in the next 2 min. The sensors are scanned again after clearing the memory. The logical operations are controlled with a 100-kc/s crystal-controlled time clock. The data are fed to a digital computer for analysis. (author)
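
    The scan-accumulate-average cycle described above (every sensor sampled once per second, each reading added into that sensor's memory location, averages produced after 10 minutes) can be illustrated with a minimal sketch; read_sensor below is a stand-in for the real analog multiplexer and A-D converter chain.

```python
# Sketch of the scan/accumulate/average cycle described in the abstract:
# each sensor is sampled once per second, its reading is added into the
# memory location assigned to it, and after 10 minutes the accumulated
# sums are converted to averages and logged. `read_sensor` stands in for
# the real analog multiplexer / ADC chain.
import random

N_SENSORS = 256
SCANS_PER_CYCLE = 600          # one scan per second for 10 minutes

def read_sensor(i):
    return random.uniform(0.0, 100.0)   # placeholder for a 0-100 mV reading

def run_cycle():
    memory = [0.0] * N_SENSORS           # one accumulator per sensor
    for _ in range(SCANS_PER_CYCLE):
        for i in range(N_SENSORS):
            memory[i] += read_sensor(i)  # add to previously stored value
    averages = [m / SCANS_PER_CYCLE for m in memory]
    return averages                      # punched to paper tape in the original system

if __name__ == "__main__":
    print(run_cycle()[:5])
```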

  15. The new automatic precipitation phase distinction algorithm for OceanRAIN data over the global ocean

    Science.gov (United States)

    Burdanowitz, Jörg; Klepp, Christian; Bakan, Stephan

    2015-04-01

    The hitherto lack of surface precipitation data over the global ocean limits the ability to validate recent and future precipitation satellite retrievals. The first systematic ship-based surface precipitation data set OceanRAIN (Ocean Rain And Ice-phase precipitation measurement Network) aims at providing in-situ precipitation data through particle size distributions (PSDs) from optical disdrometers deployed on research vessels (RVs). From the RV Polarstern, OceanRAIN currently contains more than four years of 1-minute resolution precipitation data, which corresponds to more than 200,000 minutes of precipitation. The calculation of the precipitation rate requires knowledge of the precipitation phase (PP) of the falling particles. We develop a novel algorithm to automatically retrieve the PP using OceanRAIN data and ancillary meteorological measurements from RVs. The main objective is to improve the accuracy and efficiency of the current time-consuming manual method of discriminating liquid and solid precipitation particles. The new PP distinction algorithm is based on the relation of air temperature and relative humidity (T-rH) with respect to PP. For first-time usage over oceanic areas, the land-retrieved coefficients of this empirical relationship are adjusted to OceanRAIN data. The measured PSD helps to determine the PP in certain cases where large snow aggregates exist at distinctly positive air temperatures. The classification, based on T-rH and PSD, is statistically exploited and weighted with respect to the current weather conditions to obtain an overall PP probability at 1-minute resolution. The new PP distinction algorithm agrees in more than 92% (94% excluding mixed phase) of precipitating cases with the manually determined PP in the RV Polarstern data. The PP distinction algorithm complements the valuable information of OceanRAIN surface precipitation over the ocean.
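
    As a rough illustration of the T-rH idea described above, the sketch below turns air temperature and relative humidity into a solid-phase probability with a logistic-style relation. The functional form, the coefficients and the thresholds are invented placeholders; the actual OceanRAIN relation and its land-derived, ocean-adjusted coefficients are not reproduced here.

```python
# Illustrative sketch of a temperature/relative-humidity (T-rH) based
# precipitation-phase probability. The logistic form and the coefficients
# below are placeholders -- the actual OceanRAIN relation and its coefficients
# (adjusted from land-based studies) are not reproduced here.
import math

def solid_phase_probability(t_air_c, rel_hum_pct, a=1.4, b=-0.08):
    """Return a 0..1 probability that precipitation is solid (snow)."""
    # Warmer and moister air -> lower probability of solid precipitation.
    x = a * (0.0 - t_air_c) + b * (rel_hum_pct - 90.0)
    return 1.0 / (1.0 + math.exp(-x))

def classify(t_air_c, rel_hum_pct, snow_thresh=0.6, rain_thresh=0.4):
    p = solid_phase_probability(t_air_c, rel_hum_pct)
    if p >= snow_thresh:
        return "solid"
    if p <= rain_thresh:
        return "liquid"
    return "mixed"

# Example: classify(0.5, 95) -> "liquid" or "mixed" with these toy coefficients
```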

  16. Automatic Classification of the Vestibulo-Ocular Reflex Nystagmus: Integration of Data Clustering and System Identification.

    Science.gov (United States)

    Ranjbaran, Mina; Smith, Heather L H; Galiana, Henrietta L

    2016-04-01

    The vestibulo-ocular reflex (VOR) plays an important role in our daily activities by enabling us to fixate on objects during head movements. Modeling and identification of the VOR improve our insight into the system's behavior and aid the diagnosis of various disorders. However, the switching nature of eye movements (nystagmus), including the VOR, makes dynamic analysis challenging. The first step in such analysis is to segment data into its subsystem responses (here slow and fast segment intervals). Misclassification of segments results in biased analysis of the system of interest. Here, we develop a novel three-step algorithm to classify VOR data into slow and fast intervals automatically. The proposed algorithm is initialized using a K-means clustering method. The initial classification is then refined using system identification approaches and prediction error statistics. The performance of the algorithm is evaluated on simulated and experimental data. It is shown that the new algorithm's performance is much improved over that of previous methods, in terms of higher specificity. PMID:26357393
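
    The sketch below illustrates only the first step of the described algorithm, the K-means initialization: samples of an eye-velocity trace are clustered into two groups as an initial guess at slow- and fast-phase intervals. The feature choice is a simple assumption, and the subsequent refinement by system identification and prediction-error statistics is omitted.

```python
# Sketch of the K-means initialisation step only: cluster samples of an eye
# velocity trace into two groups as a first guess at slow- vs fast-phase
# intervals. The refinement with system identification and prediction-error
# statistics (the core of the published method) is omitted.
import numpy as np
from sklearn.cluster import KMeans

def initial_segmentation(eye_position, fs):
    velocity = np.gradient(eye_position) * fs            # deg/s
    features = np.column_stack([velocity, np.abs(velocity)])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    # Call the cluster with the larger mean |velocity| the "fast" phase.
    fast_cluster = np.argmax([np.abs(velocity[labels == k]).mean() for k in (0, 1)])
    return labels == fast_cluster                         # True where a fast phase is guessed

# Example with synthetic data:
# pos = np.cumsum(np.random.randn(2000)); fast = initial_segmentation(pos, fs=500.0)
```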

  17. Automatic delineation of tumor volumes by co-segmentation of combined PET/MR data

    International Nuclear Information System (INIS)

    Combined PET/MRI may be highly beneficial for radiotherapy treatment planning in terms of tumor delineation and characterization. To standardize tumor volume delineation, an automatic algorithm for the co-segmentation of head and neck (HN) tumors based on PET/MR data was developed. Ten HN patient datasets acquired in a combined PET/MR system were available for this study. The proposed algorithm uses both the anatomical T2-weighted MR and FDG-PET data. For both imaging modalities tumor probability maps were derived, assigning each voxel a probability of being cancerous based on its signal intensity. A combination of these maps was subsequently segmented using a threshold level set algorithm. To validate the method, tumor delineations from three radiation oncologists were available. Inter-observer variabilities and variabilities between the algorithm and each observer were quantified by means of the Dice similarity index and a distance measure. Inter-observer variabilities and variabilities between observers and algorithm were found to be comparable, suggesting that the proposed algorithm is adequate for PET/MR co-segmentation. Moreover, taking into account combined PET/MR data resulted in more consistent tumor delineations compared to MR information only. (paper)

  18. Automatic cardiac gating of small-animal PET from list-mode data

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Udias, J.M. [Universidad Complutense de Madrid Univ. (Spain). Grupo de Fisica Nuclear; Vaquero, J.J.; Desco, M. [Universidad Carlos III de Madrid (Spain). Dept. de Bioingenieria e Ingenieria Aeroespacial; Cusso, L. [Hospital General Universitario Gregorio Maranon, Madrid (Spain). Unidad de Medicina y Cirugia Experimental

    2011-07-01

    This work presents a method to obtain the cardiac gating signal automatically in a PET study of rats, by employing the variation with time of the counts in the cardiac region, which can be extracted from list-mode data. In an initial step, the cardiac region is identified in image space by backward-projecting a small fraction of the acquired data and studying the variation with time of the counts in each voxel, keeping those with frequencies between 2 and 8 Hz. The region obtained corresponds accurately to the left ventricle of the rat's heart. In a second step, the lines-of-response (LORs) connected with this region are found by forward-projecting this region. The time variation of the number of counts in these LORs contains the cardiac motion information that we want to extract. This variation of counts with time is band-pass filtered to reduce noise, and the resulting time signal is used to create the gating signal. The result was compared with a cardiac gating signal obtained from an ECG acquired simultaneously with the PET study. Reconstructed gated images obtained from both gating signals are similar. The proposed method demonstrates that valid cardiac gating signals can be obtained for rats from PET list-mode data. (orig.)
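
    The final step described above (band-pass filtering the count-rate signal from the heart-region LORs and deriving gate times from it) can be sketched as follows. The binning parameters and the peak-based trigger are simplifying assumptions, not the exact published implementation.

```python
# Sketch of the final step described in the abstract: band-pass filter the
# count-rate signal from the heart-region LORs and derive gate triggers from
# it. Binning parameters and the peak-based trigger are simplified choices.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def cardiac_gates(event_times_s, bin_ms=10.0, band=(2.0, 8.0)):
    fs = 1000.0 / bin_ms                                     # sampling rate of the count signal
    edges = np.arange(event_times_s.min(), event_times_s.max(), bin_ms / 1000.0)
    counts, _ = np.histogram(event_times_s, bins=edges)      # counts per time bin
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, counts.astype(float))          # keep the 2-8 Hz cardiac component
    peaks, _ = find_peaks(filtered)                          # one trigger per cardiac cycle
    return edges[peaks]                                      # gate times in seconds

# gates = cardiac_gates(listmode_event_times)  # event times taken from list-mode data
```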

  19. Profiling Animal Toxicants by Automatically Mining Public Bioassay Data: A Big Data Approach for Computational Toxicology

    OpenAIRE

    Jun Zhang; Jui-Hua Hsieh; Hao Zhu

    2014-01-01

    In vitro bioassays have been developed and are currently being evaluated as potential alternatives to traditional animal toxicity models. Already, the progress of high throughput screening techniques has resulted in an enormous amount of publicly available bioassay data having been generated for a large collection of compounds. When a compound is tested using a collection of various bioassays, all the testing results can be considered as providing a unique bio-profile for this compound, which...

  20. Critical properties of the diffusive epidemic process obtained via an automatic search technique

    International Nuclear Information System (INIS)

    The diffusive epidemic process (DEP) is composed of A and B species that independently diffuse on a lattice with diffusion rates DA and DB and follow the probabilistic dynamical rules A+B→2B and B→A. This model belongs to the category of non-equilibrium systems with an absorbing state and a phase transition between active and inactive states. We investigate the critical behavior of the one-dimensional DEP using an auto-adaptive algorithm to find critical points: the method of automatic searching for critical points (MASCP). We compare our results with the literature and find that the MASCP successfully finds the critical exponents 1/ν and 1/zν in all the cases DA = DB, DA < DB and DA > DB. The simulations show that the DEP has the same critical exponents as are expected from field-theoretical arguments. Moreover, we find that, contrary to a renormalization group prediction, the system does not show a discontinuous phase transition in the regime of DA > DB.

  1. Automatic polishing process of plastic injection molds on a 5-axis milling center

    CERN Document Server

    Pessoles, Xavier; 10.1016/j.jmatprotec.2008.08.034

    2010-01-01

    The plastic injection mold manufacturing process includes polishing operations when surface roughness is critical or mirror effect is required to produce transparent parts. This polishing operation is mainly carried out manually by skilled workers of subcontractor companies. In this paper, we propose an automatic polishing technique on a 5-axis milling center in order to use the same means of production from machining to polishing and reduce the costs. We develop special algorithms to compute 5-axis cutter locations on free-form cavities in order to imitate the skills of the workers. These are based on both filling curves and trochoidal curves. The polishing force is ensured by the compliance of the passive tool itself and set-up by calibration between displacement and force based on a force sensor. The compliance of the tool helps to avoid kinematical error effects on the part during 5-axis tool movements. The effectiveness of the method in terms of the surface roughness quality and the simplicity of impleme...
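
    A minimal sketch of a planar trochoidal path of the kind mentioned above: small circular loops superimposed on a straight guide line. Computing true 5-axis cutter locations on a free-form cavity is well beyond this sketch, and all parameter values are arbitrary illustration choices.

```python
# Sketch of a planar trochoidal polishing path: circular loops superimposed on
# a straight guide line. Real 5-axis cutter locations on a free-form cavity
# are not computed here; the parameters are arbitrary illustration values.
import numpy as np

def trochoidal_path(length_mm, feed_pitch_mm=1.0, loop_radius_mm=2.0, pts_per_loop=60):
    n_loops = int(length_mm / feed_pitch_mm)
    t = np.linspace(0.0, 2.0 * np.pi * n_loops, n_loops * pts_per_loop)
    x = feed_pitch_mm * t / (2.0 * np.pi) + loop_radius_mm * np.cos(t)
    y = loop_radius_mm * np.sin(t)
    return np.column_stack([x, y])      # XY tool positions; Z / tool axis handled elsewhere

# path = trochoidal_path(50.0)  # a 50 mm long trochoidal polishing pass
```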

  2. Automatic Assessment of Acquisition and Transmission Losses in Indian Remote Sensing Satellite Data

    Science.gov (United States)

    Roy, D.; Purna Kumari, B.; Manju Sarma, M.; Aparna, N.; Gopal Krishna, B.

    2016-06-01

    The quality of remote sensing data is an important parameter that defines the extent of its usability in various applications. The data from remote sensing satellites are received as raw data frames at the ground station. These data may be corrupted by losses due to interference during data transmission, data acquisition and sensor anomalies. It is therefore important to assess the quality of the raw data before product generation, for early anomaly detection, faster corrective actions and minimization of product rejection. Manual screening of raw images is a time-consuming process and not very accurate. In this paper, an automated process for the identification and quantification of losses in raw data, such as pixel dropouts, line loss and data loss due to sensor anomalies, is discussed. Quality assessment of raw scenes based on these losses is also explained. This process is introduced in the data pre-processing stage and gives crucial data quality information to users at the time of browsing data for product ordering. It has also improved the product generation workflow by enabling faster and more accurate quality estimation.
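
    A minimal sketch of the kind of loss checks described above, operating on a raw frame held as a 2-D array: whole rows carrying a fill value are flagged as line losses, remaining fill-value pixels are counted as dropouts, and a simple quality label is derived. The fill value and the quality threshold are illustrative assumptions only.

```python
# Sketch of simple loss checks on a raw frame held as a 2-D numpy array:
# a "line loss" is flagged when an entire row carries the fill value, and
# pixel dropouts are counted as isolated fill-value pixels elsewhere.
import numpy as np

def assess_raw_frame(frame, fill_value=0):
    lost_rows = np.where((frame == fill_value).all(axis=1))[0]
    dropout_mask = (frame == fill_value)
    dropout_mask[lost_rows, :] = False              # don't double-count lost lines
    n_dropouts = int(dropout_mask.sum())
    loss_fraction = (lost_rows.size * frame.shape[1] + n_dropouts) / frame.size
    quality = "good" if loss_fraction < 0.01 else "degraded"
    return {"lost_lines": lost_rows.tolist(),
            "dropout_pixels": n_dropouts,
            "loss_fraction": loss_fraction,
            "quality": quality}

# report = assess_raw_frame(np.load("raw_frame.npy"))  # hypothetical input file
```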

  3. Development of Web Tools for the automatic Upload of Calibration Data into the CMS Condition Data

    Science.gov (United States)

    di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2010-04-01

    This article explains the recent evolution of the Condition Database Application Service. The Condition Database Application Service is part of the condition database system of the CMS experiment, and it is used for handling and monitoring the CMS detector condition data and the corresponding computing resources, such as Oracle databases, storage services and network devices. We deployed a service, the offline Dropbox service, that will be used by the Alignment and Calibration Group in order to upload from the offline network (GPN) the calibration constants produced by running offline analysis.

  4. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    Science.gov (United States)

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.

    2012-12-01

    The Surface Water and Ocean Topography (SWOT) wide swath altimetry mission will provide global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (such as river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge on the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is achieved using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, which is considered a computationally cheaper alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is corrupted through multiplication by a spatially uniform factor that is the only factor optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions

  5. Automated Data Processing as an AI Planning Problem

    Science.gov (United States)

    Golden, Keith; Pang, Wanlin; Nemani, Ramakrishna; Votava, Petr

    2003-01-01

    NASA's vision for Earth Science is to build a "sensor web": an adaptive array of heterogeneous satellites and other sensors that will track important events, such as storms, and provide real-time information about the state of the Earth to a wide variety of customers. Achieving this vision will require automation not only in the scheduling of the observations but also in the processing of the resulting data. To address this need, we have developed a planner-based agent to automatically generate and execute data-flow programs to produce the requested data products. Data processing domains are substantially different from other planning domains that have been explored, and this has led us to substantially different choices in terms of representation and algorithms. We discuss some of these differences and discuss the approach we have adopted.

  6. Lifetime-Based Memory Management for Distributed Data Processing Systems

    DEFF Research Database (Denmark)

    Lu, Lu; Shi, Xuanhua; Zhou, Yongluan;

    2016-01-01

    In-memory caching of intermediate data and eager combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in distributed data processing systems like Spark and Flink. However, it has also been widely reported that these techniques would create a large amount of long-living data objects in the heap, which may quickly saturate the garbage collector, especially when handling a large dataset, and hence would limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which, by automatically analyzing the user-defined functions and data types, obtains the expected lifetime of the data objects, and then allocates and releases memory space accordingly to minimize the garbage collection overhead. In particular, we present Deca, a concrete implementation of our proposal on top of Spark...

  7. 13C-detected NMR experiments for automatic resonance assignment of IDPs and multiple-fixing SMFT processing

    International Nuclear Information System (INIS)

    Intrinsically disordered proteins (IDPs) have recently attracted much interest, due to their role in many biological processes, including signaling and regulation mechanisms. High-dimensional 13C direct-detected NMR experiments have proven exceptionally useful in the case of IDPs, providing spectra with superior peak dispersion. Here, two such novel experiments recorded with non-uniform sampling are introduced: 5D HabCabCO(CA)NCO and 5D HNCO(CA)NCO. Together with the 4D (HACA)CON(CA)NCO, an extension of the previously published 3D experiments (Pantoja-Uceda and Santoro in J Biomol NMR 59:43–50, 2014. doi: 10.1007/s10858-014-9827-1), they form a set allowing for complete and reliable resonance assignment of difficult IDPs. The processing is performed with a sparse multidimensional Fourier transform based on the concept of restricting (fixing) some of the spectral dimensions to a priori known resonance frequencies. In our study, a multiple-fixing method was developed that allows easy access to the spectral data. The experiments were tested on a resolution-demanding alpha-synuclein sample. Owing to the superior peak dispersion in the high-dimensional spectra and the availability of sequential connectivities between four consecutive residues, the overwhelming majority of resonances could be assigned automatically using the TSAR program.

  8. Automatic extraction of property norm-like data from large text corpora.

    Science.gov (United States)

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties. PMID:25019134

  9. Simple Approaches to Improve the Automatic Inventory of Zebra Crossing from MLS Data

    Science.gov (United States)

    Arias, P.; Riveiro, B.; Soilán, M.; Díaz-Vilariño, L.; Martínez-Sánchez, J.

    2015-08-01

    City management is increasingly supported by information technologies, leading to paradigms such as smart cities, where decision-makers, companies and citizens are continuously interconnected. 3D modelling becomes highly relevant when the city has to be managed making use of geospatial databases or Geographic Information Systems. On the other hand, laser scanning technology has experienced significant growth in recent years, and in particular, terrestrial mobile laser scanning platforms are more and more used for inventory purposes in both city and road environments. Consequently, large datasets are available to produce the geometric basis for the city model; however, these data are not directly exploitable by management systems, which constrains the implementation of the technology for such applications. This paper presents a new algorithm for the automatic detection of zebra crossings. The algorithm is divided into three main steps: road segmentation (based on a PCA analysis of the points contained in each cycle collected by a mobile laser system), rasterization (conversion of the point cloud to a raster image coloured as a function of intensity data), and zebra crossing detection (using the Hough Transform and logical constraints for line classification). After evaluating different datasets collected in three cities located in Northwest Spain (comprising 25 strips with 30 visible zebra crossings), a completeness of 83% was achieved.

  10. An automatic locating and data logging system for radiological walkover surveys

    International Nuclear Information System (INIS)

    Oak Ridge National Laboratory has developed an Ultrasonic Ranging and Data System (USRADS) to track a radiation surveyor in the field, to log his instrument's readings automatically, and to provide tabular and graphical data display in the field or in the office. Once each second, USRADS computes the position of the radiation surveyor by using the time-of-flight of an ultrasonic chirp, emitted by a transducer carried in a backpack, to stationary receivers deployed in the field. When the ultrasonic transducer is pulsed, a microprocessor in the backpack radios the start time and the survey instrument's reading to the master receiver at the base station (a van or truck). A portable computer connected to the master receiver plots the surveyor's position on the display and stores his position and instrument reading. The CHEMRAD Corporation has just completed a survey of the ORNL main plant area using two radiation survey instruments simultaneously: a ratemeter connected to a NaI crystal that is swung in an arc near the ground, to look for surface contamination; and a small pressurized ionization chamber (PIC), attached to the backpack frame at a height of 3 ft, to measure the exposure rate. 3 refs., 5 figs
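
    The positioning principle described above (time-of-flight of an ultrasonic chirp to stationary receivers) can be sketched as a 2-D multilateration solved by non-linear least squares. The receiver layout, the nominal speed of sound and the absence of outlier handling are simplifying assumptions, not the USRADS design.

```python
# Sketch of 2-D multilateration from ultrasonic time-of-flight measurements:
# each receiver gives a range (time-of-flight x speed of sound), and the
# surveyor position is found by non-linear least squares.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, nominal; a real system would correct for temperature

def locate(receiver_xy, times_of_flight_s):
    ranges = SPEED_OF_SOUND * np.asarray(times_of_flight_s)
    def residuals(p):
        return np.linalg.norm(receiver_xy - p, axis=1) - ranges
    guess = receiver_xy.mean(axis=0)                 # start in the middle of the array
    return least_squares(residuals, guess).x         # estimated (x, y) of the surveyor

# receivers = np.array([[0, 0], [30, 0], [30, 30], [0, 30]], float)
# pos = locate(receivers, [0.05, 0.07, 0.09, 0.08])
```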

  11. Advancements in Big Data Processing in the ATLAS and CMS Experiments

    CERN Document Server

    Vaniachine, A

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for distributed computing and Grid technologies. The emerging Big Data revolution drives exploration in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprised of many jobs) became a unit of petascale data processing on the Grid. Splitting of a large data processing task into jobs enabled fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of the dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable six sigma production quality in petascale data ...

  12. Automatic Speech Segmentation Based on HMM

    OpenAIRE

    M. Kroul

    2007-01-01

    This contribution deals with the problem of automatic phoneme segmentation using HMMs. Automating the speech segmentation task is important for applications in which a large amount of data needs to be processed, so that manual segmentation is out of the question. In this paper we focus on automatic segmentation of recordings which will be used to create a triphone synthesis unit database. For speech synthesis, the quality of the speech units is a crucial aspect, so the maximal accuracy in segmentation is ...

  13. Visualization process of Temporal Data

    OpenAIRE

    Daassi, Chaouki; Nigay, Laurence; Fauvet, Marie-Christine

    2004-01-01

    Temporal data are abundantly present in many application domains such as banking, financial, clinical and geographical applications, and so on. Temporal data have been extensively studied from data mining and database perspectives. Complementary to these studies, our work focuses on visualization techniques for temporal data: a wide range of visualization techniques have been designed to assist users in visually analyzing and manipulating temporal data. All the techni...

  14. Automatic Differentiation Variational Inference

    OpenAIRE

    Kucukelbir, Alp; Tran, Dustin; Ranganath, Rajesh; Gelman, Andrew; Blei, David M.

    2016-01-01

    Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally challenging, which makes it difficult to efficiently cycle through the steps. To this end, we develop automatic differentiation variational inference (ADVI). Using our method, the scientist on...

  15. The speed of magnitude processing and executive functions in controlled and automatic number comparison in children: an electro-encephalography study

    Directory of Open Access Journals (Sweden)

    Jármi Éva

    2007-04-01

    Background In the numerical Stroop paradigm (NSP) participants decide whether a digit is numerically or physically larger than another simultaneously presented digit. This paradigm is frequently used to assess the automatic number processing abilities of children. Currently it is unclear whether an equally refined evaluation of numerical magnitude occurs in both controlled (the numerical comparison task of the NSP) and automatic (the physical comparison task of the NSP) numerical comparison in both children and adults. One of our objectives was to answer this question by measuring the speed of controlled and automatic magnitude processing in children and adults in the NSP. Another objective was to determine how the immature executive functions of children affect their cognitive functions relative to adults in numerical comparison. Methods and results The speed of numerical comparison was determined by monitoring the electro-encephalographic (EEG) numerical distance effect: the amplitude of EEG measures is modulated as a function of the numerical distance between the to-be-compared digits. EEG numerical distance effects occurred between 140–320 ms after stimulus presentation in both controlled and automatic numerical comparison in all age groups. Executive functions were assessed by analyzing facilitation and interference effects on the latency of the P3b event-related potential component and the lateralized readiness potential (LRP). Interference effects were more related to response than to stimulus processing in children as compared with adults. The LRP revealed that the difficulty of inhibiting irrelevant response tendencies was a major factor behind interference in the numerical task in children. Conclusion The timing of the EEG distance effect suggests that a refined evaluation of numerical magnitude happened at a similar speed in each age group during both controlled and automatic magnitude processing. The larger response interference in

  16. An automatic sample changer and microprocessor-controlled data router for a small bulk-sample counter

    International Nuclear Information System (INIS)

    Bulk samples with volumes in the range 500-2000 cm3 are too large for commercial counters with automatic sample changers. A gamma-ray counting system for small bulk samples, incorporating an automatic sample changer and multiple data-output devices, has been designed. It includes an inexpensive microprocessor and is constructed mainly from commonly available materials. The changer is designed to take one-pint straight-sided cardboard food containers, which will accommodate most routinely measured samples. Larger samples with volumes up to 50,000 cm3 can be counted by demounting the removable sample positioner. (author)

  17. An automatic method for producing robust regression models from hyperspectral data using multiple simple genetic algorithms

    Science.gov (United States)

    Sykas, Dimitris; Karathanassi, Vassilia

    2015-06-01

    This paper presents a new method for automatically determining the optimum regression model for the estimation of a parameter. The concept lies in the combination of k spectral pre-processing algorithms (SPPAs) that enhance spectral features correlated to the desired parameter. Initially, a pre-processing algorithm uses a single spectral signature as input and transforms it according to the SPPA function. A k-step combination of SPPAs uses k pre-processing algorithms serially. The result of each SPPA is used as input to the next SPPA, and so on until the k desired pre-processed signatures are reached. These signatures are then used as input to three different regression methods: Normalized band Difference Regression (NDR), Multiple Linear Regression (MLR) and Partial Least Squares Regression (PLSR). Three Simple Genetic Algorithms (SGAs) are used, one for each regression method, for the selection of the optimum combination of k SPPAs. The performance of the SGAs is evaluated based on the RMS error of the regression models. The evaluation indicates not only the optimum SPPA combination but also the regression method that produces the optimum prediction model. The proposed method was applied to soil spectral measurements in order to predict Soil Organic Matter (SOM). In this study, the maximum value assigned to k was 3. PLSR yielded the highest accuracy, while NDR's accuracy was satisfactory compared to its complexity. The MLR method showed severe drawbacks due to the presence of noise, in terms of collinearity among the spectral bands. Most of the regression methods required a 3-step combination of SPPAs to achieve the highest performance. The selected pre-processing algorithms were different for each regression method, since each regression method handles the explanatory variables in a different way.

  18. Semi-Automatic Detection of Swimming Pools from Aerial High-Resolution Images and LIDAR Data

    Directory of Open Access Journals (Sweden)

    Borja Rodríguez-Cuenca

    2014-03-01

    Bodies of water, particularly swimming pools, are land covers of high interest. Their maintenance involves energy costs that authorities must take into consideration. In addition, swimming pools are important water sources for firefighting. However, they also provide a habitat for mosquitoes to breed, potentially posing a serious health threat of mosquito-borne disease. This paper presents a novel semi-automatic method of detecting swimming pools in urban environments from aerial images and LIDAR data. A new index for detecting swimming pools is presented (the Normalized Difference Swimming Pools Index), which is combined with three other decision indices using the Dempster–Shafer theory to determine the locations of swimming pools. The proposed method was tested in an urban area of the city of Alcalá de Henares in Madrid, Spain. The method detected all existing swimming pools in the studied area with an overall accuracy of 99.86%, similar to the results obtained by support vector machine (SVM) supervised classification.

  19. Automatic Thickness and Volume Estimation of Sprayed Concrete on Anchored Retaining Walls from Terrestrial LIDAR Data

    Science.gov (United States)

    Martínez-Sánchez, J.; Puente, I.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2016-06-01

    When ground conditions are weak, particularly in free-formed tunnel linings or retaining walls, sprayed concrete can be applied on the exposed surfaces immediately after excavation for shotcreting rock outcrops. In these situations, shotcrete is normally applied conjointly with rock bolts and mesh, thereby supporting the loose material that causes many of the small ground falls. On the other hand, contractors want to determine the thickness and volume of sprayed concrete for both technical and economic reasons: to guarantee its structural strength, but also to avoid delivering excess material that they will not be paid for. In this paper, we first introduce a terrestrial LiDAR-based method for the automatic detection of rock bolts, as typically used in anchored retaining walls. These ground support elements are segmented based on their geometry and serve as control points for the co-registration of two successive scans, before and after shotcreting. We then compare both point clouds to estimate the sprayed concrete thickness and the expended volume on the wall. This novel methodology is demonstrated on repeated scan data from a retaining wall in the city of Vigo (Spain), resulting in a rock bolt detection rate of 91%, which permits detailed thickness information to be obtained and a total volume of 3597 litres of concrete to be calculated. These results verify the effectiveness of the developed approach, increasing productivity and improving previous empirical proposals for real-time thickness estimation.
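
    A rough sketch of the scan-comparison step: for each point of the post-shotcrete scan, the distance to the nearest point of the pre-shotcrete scan is taken as a local thickness and integrated into a volume using an assumed representative area per point. Registration via the detected rock bolts and the use of surface normals, which the method above relies on, are omitted.

```python
# Sketch of the scan-comparison step: nearest-neighbour distances from the
# post-shotcrete scan to the pre-shotcrete scan approximate local thickness,
# and thickness x representative area per point approximates sprayed volume.
# Surface normals and co-registration are ignored in this simplification.
import numpy as np
from scipy.spatial import cKDTree

def thickness_and_volume(points_before, points_after, area_per_point_m2):
    tree = cKDTree(points_before)
    thickness_m, _ = tree.query(points_after)          # nearest-neighbour distances
    volume_m3 = float(np.sum(thickness_m) * area_per_point_m2)
    return thickness_m, volume_m3

# t, v = thickness_and_volume(scan1_xyz, scan2_xyz, area_per_point_m2=0.01)
# print(f"mean thickness {t.mean():.3f} m, volume {v * 1000:.0f} litres")
```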

  20. Processing multidimensional nuclear physics data

    Energy Technology Data Exchange (ETDEWEB)

    Becker, J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Modern Ge detector arrays for gamma-ray spectroscopy are producing data sets unprecedented in size and event multiplicity. Gammasphere, the DOE-sponsored array, has the following characteristics: (1) high granularity (110 detectors); (2) high efficiency (10%); and (3) precision energy measurements (ΔE/E = 0.2%). Characteristics of the detector line shape, the data set, and the standard practice in the nuclear physics community for extracting nuclear gamma-ray cascades from the 4096 × 4096 × 4096 data cube will be discussed.

  1. Automatic calibration of a global hydrological model using satellite data as a proxy for stream flow data

    Science.gov (United States)

    Revilla-Romero, B.; Beck, H.; Salamon, P.; Burek, P.; Thielen, J.; de Roo, A.

    2014-12-01

    Model calibration and validation are commonly restricted due to the limited availability of historical in situ observational data. Several studies have demonstrated that using complementary remotely sensed datasets such as soil moisture for model calibration have led to improvements. The aim of this study was to evaluate the use of remotely sensed signal of the Global Flood Detection System (GFDS) as a proxy for stream flow data to calibrate a global hydrological model used in operational flood forecasting. This is done in different river basins located in Africa, South and North America for the time period 1998-2010 by comparing model calibration using the raw satellite signal as a proxy for river discharge with a model calibration using in situ stream flow observations. River flow is simulated using the LISFLOOD hydrological model for the flow routing in the river network and the groundwater mass balance. The model is set up on global coverage with horizontal grid resolution of 0.1 degree and daily time step for input/output data. Based on prior tests, a set of seven model parameters was used for calibration. The parameter space was defined by specifying lower and upper limits on each parameter. The objective functions considered were Pearson correlation (R), Nash-Sutcliffe Efficiency log (NSlog) and Kling-Gupta Efficiency (KGE') where both single- and multi-objective functions were employed. After multiple iterations, for each catchment, the algorithm generated a set of Pareto-optimal front of solutions. A single parameter set was selected which had the lowest distance to R=1 for the single-objective and NSlog=1 and KGE'=1 for the multi-objective function. The results of the different test river basins are compared against the performance obtained using the same objective functions by in situ discharge observations. Automatic calibration strategies of the global hydrological model using satellite data as a proxy for stream flow data are outlined and discussed.
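
    Two of the objective functions named above have widely used closed forms; the sketch below computes the modified Kling-Gupta efficiency (KGE') and the Nash-Sutcliffe efficiency of log-transformed flows (NSlog) for a simulated and an observed discharge series. These are the standard formulations; the exact implementation used in the study is not reproduced here.

```python
# Standard formulations of two objective functions named in the abstract:
# the modified Kling-Gupta efficiency (KGE') and the Nash-Sutcliffe efficiency
# of log-transformed flows (NSlog). A calibration loop would run the model
# with candidate parameters and maximise these scores.
import numpy as np

def kge_prime(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]                                # linear correlation
    beta = sim.mean() / obs.mean()                                 # bias ratio
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())    # variability ratio (CVs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

def ns_log(sim, obs, eps=1e-6):
    ls, lo = np.log(np.asarray(sim) + eps), np.log(np.asarray(obs) + eps)
    return 1.0 - np.sum((ls - lo) ** 2) / np.sum((lo - lo.mean()) ** 2)

# Both scores equal 1 for a perfect simulation; the calibration selects the
# parameter set closest to KGE' = 1 and NSlog = 1 on the Pareto front.
```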

  2. Pipeline Processing of VLBI Data

    CERN Document Server

    Reynolds, C; Garrett, M

    2002-01-01

    As part of an on-going effort to simplify the data analysis path for VLBI experiments, a pipeline procedure has been developed at JIVE to carry out much of the data reduction required for EVN experiments in an automated fashion. This pipeline procedure runs entirely within AIPS, the standard data reduction package used in astronomical VLBI, and is used to provide preliminary calibration of EVN experiments correlated at the EVN MkIV data processor. As well as simplifying the analysis for EVN users, the pipeline reduces the delay in providing information on the data quality to participating telescopes, hence improving the overall performance of the array. A description of this pipeline is presented here.

  3. Data warehouse building process based on data transformation templates

    OpenAIRE

    Paulavičiūtė, Kristina

    2006-01-01

    The growing amount of data and the need for data analysis drive the need for data warehouses. Much of an organization's operational data accumulates in OLTP DBMS databases, while historical data accumulate in data warehouses, where they are structured for analysis. DBMS ETL tools offer limited support for building data warehouses. The ETL tool created for MS SQL Server makes the data warehouse building process easier and faster.

  4. Front-end data processing the SLD data acquisition system

    International Nuclear Information System (INIS)

    The data acquisition system for the SLD detector will make extensive use of parallel processing at the front-end level. Fastbus acquisition modules are being built with powerful processing capabilities for calibration, data reduction and further pre-processing of the large amount of analog data handled by each module. This paper describes the read-out electronics chain and the data pre-processing system adapted for most of the detector channels, exemplified by the central drift chamber waveform digitization and processing system.

  5. Automatic processing of facial affects in patients with borderline personality disorder: associations with symptomatology and comorbid disorders

    OpenAIRE

    Donges, Uta-Susan; Dukalski, Bibiana; Kersting, Anette; Suslow, Thomas

    2015-01-01

    Background Instability of affects and interpersonal relations are important features of borderline personality disorder (BPD). Interpersonal problems of individuals suffering from BPD might develop based on abnormalities in the processing of facial affects and high sensitivity to negative affective expressions. The aims of the present study were to examine automatic evaluative shifts and latencies as a function of masked facial affects in patients with BPD compared to healthy individuals. As ...

  6. Morpho-syntactic post-processing of N-best lists for improved French automatic speech recognition

    OpenAIRE

    Huet, Stéphane; Gravier, Guillaume; Sébillot, Pascale

    2010-01-01

    Many automatic speech recognition (ASR) systems rely solely on pronunciation dictionaries and language models to take into account information about language. Implicitly, morphology and syntax are to a certain extent embedded in the language models, but the richness of such linguistic knowledge is not exploited. This paper studies the use of morpho-syntactic (MS) information in a post-processing stage of an ASR system, by reordering N-best lists. Each sentence hypothesis ...

  7. AUTOMATIC UNSUPERVISED CLASSIFICATION OF ALL SLOAN DIGITAL SKY SURVEY DATA RELEASE 7 GALAXY SPECTRA

    International Nuclear Information System (INIS)

    Using the k-means cluster analysis algorithm, we carry out an unsupervised classification of all galaxy spectra in the seventh and final Sloan Digital Sky Survey data release (SDSS/DR7). Except for the shift to rest-frame wavelengths and the normalization to the g-band flux, no manipulation is applied to the original spectra. The algorithm guarantees that galaxies with similar spectra belong to the same class. We find that 99% of the galaxies can be assigned to only 17 major classes, with 11 additional minor classes including the remaining 1%. The classification is not unique since many galaxies appear in between classes; however, our rendering of the algorithm overcomes this weakness with a tool to identify borderline galaxies. Each class is characterized by a template spectrum, which is the average of all the spectra of the galaxies in the class. These low-noise template spectra vary smoothly and continuously along a sequence labeled from 0 to 27, from the reddest class to the bluest class. Our Automatic Spectroscopic K-means-based (ASK) classification separates galaxies in colors, with classes characteristic of the red sequence, the blue cloud, as well as the green valley. When red sequence galaxies and green valley galaxies present emission lines, they are characteristic of active galactic nucleus activity. Blue galaxy classes have emission lines corresponding to star formation regions. We find the expected correlation between spectroscopic class and Hubble type, but this relationship exhibits a high intrinsic scatter. Several potential uses of the ASK classification are identified and sketched, including fast determination of physical properties by interpolation, classes as templates in redshift determinations, and target selection in follow-up works (we find classes of Seyfert galaxies, green valley galaxies, as well as a significant number of outliers). The ASK classification is publicly accessible through various Web sites.
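
    The core of the ASK approach, k-means applied directly to normalized spectra, can be sketched as follows; each row of the input array is one galaxy spectrum resampled onto a common rest-frame wavelength grid. The mean-flux normalization below is a simplification of the paper's g-band normalization, and the 28 clusters correspond to the 17 major plus 11 minor classes.

```python
# Sketch of k-means classification applied directly to spectra, as in the ASK
# classification: each row of `spectra` is one galaxy spectrum resampled onto
# a common rest-frame wavelength grid. The normalisation below (mean flux) is
# a simplification of the paper's g-band normalisation.
import numpy as np
from sklearn.cluster import KMeans

def classify_spectra(spectra, n_classes=28, random_state=0):
    spectra = np.asarray(spectra, float)
    normed = spectra / spectra.mean(axis=1, keepdims=True)   # crude flux normalisation
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=random_state).fit(normed)
    templates = km.cluster_centers_          # low-noise class templates (class averages)
    return km.labels_, templates

# labels, templates = classify_spectra(flux_matrix)   # flux_matrix: (n_galaxies, n_pixels)
```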

  8. Designing a Method for an Automatic Earthquake Intensities Calculation System Based on Data Mining and On-Line Polls

    Science.gov (United States)

    Liendo Sanchez, A. K.; Rojas, R.

    2013-05-01

    Seismic intensities can be calculated using the Modified Mercalli Intensity (MMI) scale or the European Macroseismic Scale (EMS-98), among others, which are based on a series of qualitative aspects related to a group of subjective factors describing human perception, effects on nature or objects, and structural damage due to the occurrence of an earthquake. On-line polls allow experts to get an overview of the consequences of an earthquake without going to the affected locations. However, this can be hard work if the polls are not properly automated. Taking into account that the answers given to these polls are subjective, and that a number of them have already been classified for some past earthquakes, it is possible to use data mining techniques in order to automate this process and to obtain preliminary results based on the on-line polls. To achieve this goal, a predictive model has been used, based on a supervised-learning classifier (a decision tree algorithm) and a group of polls based on the MMI and EMS-98 scales. It summarizes the most important questions of the poll and recursively divides the instance space corresponding to each question (nodes), while each node splits the space depending on the possible answers. The implementation was done with Weka, a collection of machine learning algorithms for data mining tasks, using the J48 algorithm, which is an implementation of the C4.5 algorithm for decision tree models. By doing this, it was possible to obtain a preliminary model able to identify up to 4 different seismic intensities with 73% of polls correctly classified. The error obtained is rather high; therefore, we will update the on-line poll in order to improve the results, based on just one scale, for instance the MMI. Besides, the integration of the automatic seismic intensities methodology, with a low error probability, with a basic georeferencing system will allow preliminary isoseismal maps to be generated
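
    The classification step can be sketched with scikit-learn's CART decision tree standing in for Weka's J48/C4.5: rows are polls with answers encoded as integers, labels are intensity classes. The encoding, the train/test split and the synthetic example data are placeholders, not the study's actual poll data.

```python
# Sketch of the classification step with scikit-learn's CART decision tree in
# place of Weka's J48/C4.5: rows are polls with answers encoded as integers,
# labels are macroseismic intensity classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def train_intensity_classifier(encoded_answers, intensities):
    X_tr, X_te, y_tr, y_te = train_test_split(
        encoded_answers, intensities, test_size=0.3, random_state=0)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    accuracy = clf.score(X_te, y_te)       # fraction of correctly classified polls
    return clf, accuracy

# X = np.random.randint(0, 4, size=(500, 12))   # 500 polls, 12 questions (synthetic)
# y = np.random.randint(2, 6, size=500)         # intensities II..V (synthetic)
# model, acc = train_intensity_classifier(X, y)
```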

  9. Data-Driven Multimodal Sleep Apnea Events Detection: Synchrosqueezing Transform Processing and Riemannian Geometry Classification Approaches.

    Science.gov (United States)

    Rutkowski, Tomasz M

    2016-07-01

    A novel multimodal and bio-inspired approach to biomedical signal processing and classification is presented in the paper. This approach allows for automatic semantic labeling (interpretation) of sleep apnea events based on the proposed data-driven biomedical signal processing and classification. The presented signal processing and classification methods have already been successfully applied to real-time unimodal brainwave (EEG-only) decoding in brain-computer interfaces developed by the author. In the current project, very encouraging results are obtained using multimodal biomedical (brainwave and peripheral physiological) signals in a unified processing approach allowing for automatic semantic data description. The results thus support the hypothesis that a data-driven and bio-inspired signal processing approach is valid for medical data semantic interpretation based on machine-learning classification of sleep apnea events. PMID:27194241

  10. Evaluation and processing of covariance data

    International Nuclear Information System (INIS)

    These proceedings of a specialists' meeting on the evaluation and processing of covariance data are divided into four parts: Part 1, needs for evaluated covariance data (2 papers); Part 2, generation of covariance data (15 papers); Part 3, processing of covariance files (2 papers); and Part 4, experience in the use of evaluated covariance data (2 papers).

  11. Dosimetric data processing in TRIGA-Nr 1 building

    International Nuclear Information System (INIS)

    Due to radiation sources in the reactor core, in the auxiliary circuits, in the irradiation devices and in the adjacent rooms, a background of gamma radiation may be present in these areas, as well as a slight contamination of the air. Radiation is permanently monitored by the system of stationary dosemeters. Data storage and processing are carried out by the CORAL computer. This paper gives details of the work done on data storage and processing with the computer. The computer-processed data are needed for controlling reactor operation. Under normal operating conditions, when all parameters of the reactor and linked equipment are within the limits and the technical operating requirements, dosimetric data acquisition is done once every 10 minutes; when a single parameter exceeds its normal operating value, acquisition is done once a minute. This allows automatic recording of the data needed to trace the time evolution of the unusual operating regime that led to a rise in the dosimetric values, so that subsequent safety analyses of damaged equipment and devices can be carried out more thoroughly. The stored dosimetric data are part of the operation records and are reported to the regulatory body if approved limits and technical conditions are exceeded. With computer recording and processing, the radiation status in the reactor building can be analysed easily. Also, the radioactive release to the environment in accidental situations may be evaluated with higher precision.

  12. Automatic Estimation of Excavation Volume from Laser Mobile Mapping Data for Mountain Road Widening

    Directory of Open Access Journals (Sweden)

    Massimo Menenti

    2013-09-01

    Roads play an indispensable role as part of the infrastructure of society. In recent years, society has witnessed the rapid development of laser mobile mapping systems (LMMS) which, at high measurement rates, acquire dense and accurate point cloud data. This paper presents a way to automatically estimate the required excavation volume when widening a road from point cloud data acquired by an LMMS. Firstly, the input point cloud is down-sampled to a uniform grid and outliers are removed. For each of the resulting grid points, both on and off the road, the local surface normal and 2D slope are estimated. Normals and slopes are consecutively used to separate road from off-road points, which enables the estimation of the road centerline and road boundaries. In the final step, the points to the left and right of the road are sliced in 1-m slices up to a distance of 4 m, perpendicular to the roadside. Determining and summing each sliced volume enables the estimation of the required excavation for a widening of the road on the left or on the right side. The procedure, including a quality analysis, is demonstrated on a stretch of a mountain road that is approximately 132 m long, as sampled by a Lynx LMMS. The results in this particular case show that the required excavation volume on the left side is 8% more than that on the right side. In addition, the error in the results is assessed in two ways: first, by adding up estimated local errors, and second, by comparing results from two different datasets sampling the same piece of road, both acquired by the Lynx LMMS. Results of both approaches indicate that the error in the estimated volume is below 4%. The proposed method is relatively easy to implement and runs smoothly on a desktop PC. The whole workflow of the LMMS data acquisition and subsequent volume computation can be completed in one or two days and provides road engineers with much more detail than traditional single-point surveying methods such as

  13. Sentinel-1 automatic processing chain for volcanic and seismic areas monitoring within the Geohazards Exploitation Platform (GEP)

    Science.gov (United States)

    De Luca, Claudio; Zinno, Ivana; Manunta, Michele; Lanari, Riccardo; Casu, Francesco

    2016-04-01

    To address these issues, ESA recently funded the development of the Geohazards Exploitation Platform (GEP), a project aimed at bringing together data, processing tools and results to make them accessible to the EO scientific community, with particular emphasis on the Geohazard Supersites & Natural Laboratories and the CEOS Seismic Hazards and Volcanoes Pilots. In this work we present the integration of the parallel version of a well-known DInSAR algorithm, referred to as the Small BAseline Subset (P-SBAS), within the GEP platform for processing Sentinel-1 data. The integration allowed us to set up an operational on-demand web tool, open to every user, aimed at automatically processing S1A data for the generation of SBAS displacement time series. The main characteristics, as well as a number of experimental results obtained by using the implemented web tool, will also be shown. This work is partially supported by the RITMARE project of the Italian MIUR, the DPC-CNR agreement and the ESA GEP project.

  14. On the meaning of meaning when being mean: commentary on Berkowitz's "on the consideration of automatic as well as controlled psychological processes in aggression".

    Science.gov (United States)

    Dodge, Kenneth A

    2008-01-01

    Berkowitz (this issue) makes a cogent case for his cognitive neo-associationist (CNA) model that some aggressive behaviors occur automatically, emotionally, and through conditioned association with other stimuli. He also proposes that they can occur without "processing," that is, without meaning. He contrasts his position with that of social information processing (SIP) models, which he casts as positing only controlled processing mechanisms for aggressive behavior. However, both CNA and SIP models posit automatic as well as controlled processes in aggressive behavior. Most aggressive behaviors occur through automatic processes, which are nonetheless rule governed. SIP models differ from the CNA model in asserting the essential role of meaning (often through nonconscious, automatic, and emotional processes) in mediating the link between a stimulus and an angry aggressive behavioral response. PMID:18203196

  15. Machine Beats Experts: Automatic Discovery of Skill Models for Data-Driven Online Course Refinement

    Science.gov (United States)

    Matsuda, Noboru; Furukawa, Tadanobu; Bier, Norman; Faloutsos, Christos

    2015-01-01

    How can we automatically determine which skills must be mastered for the successful completion of an online course? Large-scale online courses (e.g., MOOCs) often contain a broad range of contents frequently intended to be a semester's worth of materials; this breadth often makes it difficult to articulate an accurate set of skills and knowledge…

  16. Automatic Building of an Ontology from a Corpus of Text Documents Using Data Mining Tools

    Directory of Open Access Journals (Sweden)

    J. I. Toledo-Alvarado

    2012-06-01

    In this paper we show a procedure to automatically build an ontology from a corpus of text documents without external help such as dictionaries or thesauri. The proposed method finds relevant concepts in the form of multi-words in the corpus, and non-hierarchical relations between them, in an unsupervised manner.

  17. Automatic Cataloguing and Searching for Retrospective Data by Use of OCR Text.

    Science.gov (United States)

    Tseng, Yuen-Hsien

    2001-01-01

    Describes efforts in supporting information retrieval from OCR (optical character recognition) degraded text. Reports on approaches used in an automatic cataloging and searching contest for books in multiple languages, including a vector space retrieval model, an n-gram indexing method, and a weighting scheme; and discusses problems of Asian…

  18. Fuzzy logic and image processing techniques for the interpretation of seismic data

    International Nuclear Information System (INIS)

    Since the interpretation of seismic data is usually a tedious and repetitive task, the ability to do so automatically or semi-automatically has become an important objective of recent research. We believe that the vagueness and uncertainty in the interpretation process make fuzzy logic an appropriate tool to deal with seismic data. In this work we developed a semi-automated fuzzy inference system to detect the internal architecture of a mass transport complex (MTC) in seismic images. We propose that the observed characteristics of an MTC can be expressed as fuzzy if-then rules consisting of linguistic values associated with fuzzy membership functions. The construction of the fuzzy inference system and various image processing techniques are presented. We conclude that this is a well-suited problem for fuzzy logic, since the application of the proposed methodology yields a semi-automatically interpreted MTC which closely resembles the MTC from expert manual interpretation.
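
    A minimal sketch of the fuzzy if-then idea, assuming invented attributes, membership shapes and a single Mamdani-style (min) rule: seismic attributes at each sample are mapped to memberships with triangular functions and combined into an "MTC-like" score. The actual rule base and membership functions of the described system are not reproduced.

```python
# Minimal sketch of the fuzzy if-then idea: seismic attributes at a sample are
# mapped to memberships via triangular functions and combined with a min
# (Mamdani-style) rule to score "MTC-like" character. Attributes, membership
# shapes and the single rule are invented for illustration only.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def mtc_score(amplitude, continuity):
    low_amplitude = triangular(amplitude, 0.0, 0.2, 0.5)
    chaotic = triangular(1.0 - continuity, 0.3, 0.7, 1.0)
    # Rule: IF amplitude is low AND reflections are chaotic THEN MTC-like.
    return np.minimum(low_amplitude, chaotic)

# score = mtc_score(amplitude=np.array([0.15, 0.8]), continuity=np.array([0.2, 0.9]))
```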

  19. Adaptive Filter Used as a Dynamic Compensator in Automatic Gauge Control of Strip Rolling Processes

    Directory of Open Access Journals (Sweden)

    N. ROMAN

    2010-12-01

    Full Text Available The paper deals with a control structure for strip thickness in a rolling mill of quarto type (AGC – Automatic Gauge Control). It performs two functions: the compensation of errors induced by the non-ideal dynamics of the tracking systems led by the AGC system, and the adaptation of the control to changes in the dynamic properties of the tracking systems. The compensation of dynamic errors is achieved through inverse models of the tracking system, implemented as adaptive filters.
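    The record does not give the filter equations; as a minimal sketch of the generic building block (an adaptive FIR filter with an LMS update), the snippet below identifies a toy plant from its input and output. An inverse model, as used in the record, would be obtained by swapping the roles of the filter input and the desired signal. The simulated plant, filter length and step size are assumptions.

```python
# Sketch: a standard LMS adaptive FIR filter, the generic mechanism behind
# "inverse models implemented as adaptive filters". Plant, filter length and
# step size are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_taps, mu, n_samples = 8, 0.05, 2000

x = rng.standard_normal(n_samples)                    # excitation signal
plant = np.array([1.0, 0.5, -0.2])                    # unknown dynamics
d = np.convolve(x, plant)[:n_samples]                 # desired signal

w = np.zeros(n_taps)                                  # adaptive weights
for n in range(n_taps, n_samples):
    u = x[n - n_taps + 1:n + 1][::-1]                 # latest inputs, newest first
    e = d[n] - w @ u                                  # output error
    w += 2.0 * mu * e * u                             # LMS update

print(np.round(w, 3))   # leading taps should approach the plant coefficients
```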

  20. Process monitoring using automatic physical measurement based on electrical and physical variability analysis

    Science.gov (United States)

    Shauly, Eitan N.; Levi, Shimon; Schwarzband, Ishai; Adan, Ofer; Latinsky, Sergey

    2015-04-01

    A fully automated silicon-based methodology for systematic analysis of electrical features is shown. The system was developed for process monitoring and electrical variability reduction. A mapping step was created by dedicated structures such as static-random-access-memory (SRAM) arrays or a standard cell library, or by using a simple design-rule-checking run-set. The resulting database was then used as an input for choosing locations for critical-dimension scanning electron microscope images and for extracting specific layout parameters, which were then input to SPICE compact-model simulation. Based on the experimental data, we identified two items that must be checked and monitored using the method described here: the transistor's sensitivity to the distance between the poly end cap and the edge of the active area (AA) due to AA rounding, and SRAM leakage due to an N-well placed too close to a P-well. Based on this example, we used this method extensively for process monitoring and variability analyses of transistor gates having different shapes. In addition, an analysis of a large area of a high-density standard cell library was done. Another set of monitoring, focused on a high-density SRAM array, is also presented. These examples provided information on the poly and AA layers, using transistor parameters such as leakage current and drive current. We successfully defined "robust" and "less-robust" transistor configurations included in the library and identified unsymmetrical transistors in the SRAM bit-cells. These data were compared to data extracted from the same devices at the end of the line. Another set of analyses was done on samples after Cu M1 etch. Process monitoring information on M1 enclosed contact was extracted using contact resistance as feedback. Guidelines for the optimal M1 space for different layout configurations were also extracted. All these data demonstrate the successful in-field implementation of our methodology as a useful process monitoring method.

  1. Detection of pneumoconiosis opacities on X-ray images by contour line processing and its application to automatic diagnosis

    International Nuclear Information System (INIS)

    This paper presents a study on the automatic diagnosis of pneumoconiosis by X-ray image processing. A contour line processing method for identifying the small opacities of pneumoconiosis is proposed, together with a new feature vector for classifying the profusion of small opacities. This method is superior to methods based on texture analysis because it is robust against variations in film quality and individual differences in structural patterns such as ribs and blood vessels. ILO standard films and 140 CR (computed radiography) images were used to test the performance of the proposed method. Experimental results show the effectiveness of the proposed method. (author)

  2. Automatic Bayesian classification of healthy controls, bipolar disorder and schizophrenia using intrinsic connectivity maps from fMRI data

    OpenAIRE

    Arribas, Juan I.; Calhoun, Vince D.; Adalı, Tülay

    2010-01-01

    We present a method for supervised, automatic and reliable classification of healthy controls, patients with bipolar disorder and patients with schizophrenia using brain imaging data. The method uses four supervised classification learning machines trained with a stochastic gradient learning rule based on the minimization of Kullback-Leibler divergence and an optimal model complexity search through posterior probability estimation. Prior to classification, given the high dimensionality of fun...

  3. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    Science.gov (United States)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

    The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.

  4. Study of Automatic Forest Road Design Model Considering Shallow Landslides with LiDAR Data of Funyu Experimental Forest

    OpenAIRE

    Saito, Masashi; Goshima, Msahiro; Aruga, Kazuhiro; Matsue, Keigo; Shuin, Yasuhiro; Tasaka, Toshiaki

    2013-01-01

    In this study, a model to automatically design a forest road considering shallow landslides using LiDAR data was examined. First, in order to develop a shallow landslide risk map of the Funyu Experimental Forest, a slope stability analysis was carried out using the infinite slope stability analysis formula. The soil depth was surveyed at 167 points using simple penetration tests, and the frequency distributions of the soil depth were estimated as logarithmic normal distributions. A soil depth...

  5. An automatic high precision registration method between large area aerial images and aerial light detection and ranging data

    OpenAIRE

    Du, Q.; Xie, D; Sun, Y.

    2015-01-01

    The integration of digital aerial photogrammetry and Light Detection And Ranging (LiDAR) is an inevitable trend in the Surveying and Mapping field. We calculate the exterior orientation elements of images in the LiDAR coordinate frame to realize automatic high-precision registration between aerial images and LiDAR data. There are two ways to calculate the orientation elements. One is single-image spatial resection using image-matched 3D points registered to the LiDAR data. The other o...

  6. Automatically Building Diagnostic Bayesian Networks from On-line Data Sources and the SMILE Web-based Interface

    OpenAIRE

    Tungkasthan, Anucha; Jongsawat, Nipat; Poompuang, Pittaya; Intarasema, Sarayut; Premchaiswadi, Wichian

    2010-01-01

    This paper presents a practical framework for automating the building of diagnostic BN models from data sources obtained from the WWW and demonstrates the use of a SMILE web-based interface to represent them. The framework consists of the following components: RSS agent, transformation/conversion tool, core reasoning engine, and the SMILE web-based interface. The RSS agent automatically collects and reads the provided RSS feeds according to the agent's predefined URLs. A transformation/conve...

  7. Automatic Spectroscopic Data Categorization by Clustering Analysis (ASCLAN): A Data-Driven Approach for Distinguishing Discriminatory Metabolites for Phenotypic Subclasses.

    Science.gov (United States)

    Zou, Xin; Holmes, Elaine; Nicholson, Jeremy K; Loo, Ruey Leng

    2016-06-01

    We propose a novel data-driven approach aiming to reliably distinguish discriminatory metabolites from nondiscriminatory metabolites for a given spectroscopic data set containing two biological phenotypic subclasses. The automatic spectroscopic data categorization by clustering analysis (ASCLAN) algorithm aims to categorize spectral variables within a data set into three clusters corresponding to noise, nondiscriminatory and discriminatory metabolite regions. This is achieved by clustering each spectral variable based on the r(2) value representing the loading weight of each spectral variable as extracted from an orthogonal partial least-squares discriminant analysis (OPLS-DA) model of the data set. The variables are ranked according to r(2) values and a series of principal component analysis (PCA) models are then built for subsets of these spectral data corresponding to ranges of r(2) values. The Q(2)X value for each PCA model is extracted. K-means clustering is then applied to the Q(2)X values to generate two clusters based on the minimum Euclidean distance criterion. The cluster consisting of lower Q(2)X values is deemed devoid of metabolic information (noise), while the cluster consisting of higher Q(2)X values is further subclustered into two groups based on the r(2) values. We considered the cluster with high Q(2)X but low r(2) values as nondiscriminatory, and the cluster with high Q(2)X and high r(2) values as discriminatory variables. The boundaries between these three clusters of spectral variables, on the basis of the r(2) values, were considered as the cut-off values for defining the noise, nondiscriminatory and discriminatory variables. We evaluated the ASCLAN algorithm using six simulated (1)H NMR spectroscopic data sets representing small, medium and large data sets (N = 50, 500, and 1000 samples per group, respectively), each with a reduced and full resolution set of variables (0.005 and 0.0005 ppm, respectively). ASCLAN correctly identified all discriminatory
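    The record spells out the clustering step in prose only; as a minimal sketch of that single step, the snippet below applies k-means (k = 2) to a vector of hypothetical Q2X values to split noise from informative variables, in the spirit of the ASCLAN description. The Q2X values are invented.

```python
# Sketch: the k-means step described in the ASCLAN record, i.e. splitting
# per-subset Q2X values into a "noise" and an "informative" cluster.
# The Q2X values below are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

q2x = np.array([0.05, 0.08, 0.11, 0.52, 0.61, 0.66, 0.71]).reshape(-1, 1)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(q2x)
centers = km.cluster_centers_.ravel()
noise_label = int(np.argmin(centers))          # cluster with lower Q2X = noise

informative = q2x.ravel()[km.labels_ != noise_label]
print("informative Q2X values:", informative)
```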

  8. Simplified Processing Method for Meter Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Kimberly M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Colotelo, Alison H. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Downs, Janelle L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ham, Kenneth D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Henderson, Jordan W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Montgomery, Sadie A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vernon, Christopher R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Parker, Steven A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-11-01

    A simple, quick metered-data processing method that can be used for Army Metered Data Management System (MDMS) and Logistics Innovation Agency data, but may also be useful for other large data sets. It is intended for large data sets when the analyst has little information about the buildings.

  9. An automatic segmentation method for building facades from vehicle-borne LiDAR point cloud data based on fundamental geographical data

    Science.gov (United States)

    Li, Yongqiang; Mao, Jie; Cai, Lailiang; Zhang, Xitong; Li, Lixue

    2016-03-01

    In this paper, the authors propose a segmentation method based on fundamental geographic data. The algorithm is as follows: first, convert the coordinate system of the fundamental geographic data to that of the vehicle-borne LiDAR point cloud through data preprocessing, so that the two datasets share a common coordinate frame; second, simplify the fundamental geographic data, extract effective contour information for the buildings, set a suitable buffer threshold around each building contour, and automatically segment out the point cloud data of the building facades; third, apply a quality assessment mechanism to check and evaluate the segmentation results and control their quality. Experiments show that the proposed method is simple and effective. The method also has reference value for the automatic segmentation of surface features in other types of point clouds.
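    The record describes buffering a building footprint and keeping the points that fall inside the buffer; the snippet below sketches that single step with Shapely on a toy footprint and a handful of 2D points. The footprint coordinates, buffer distance and points are placeholders.

```python
# Sketch: the "buffer the building contour, keep the facade points" step from
# the record, on toy data. Footprint, buffer width and points are placeholders.
from shapely.geometry import Point, Polygon

footprint = Polygon([(0, 0), (10, 0), (10, 6), (0, 6)])   # building contour
buffer_m = 1.5                                             # buffer threshold

zone = footprint.buffer(buffer_m).difference(footprint)    # ring around the walls

points = [Point(0.5, -0.8), Point(5.0, 3.0), Point(10.9, 2.0), Point(20, 20)]
facade_points = [p for p in points if zone.contains(p)]

print(len(facade_points), "candidate facade points")
```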

  10. Seeing race: N170 responses to race and their relation to automatic racial attitudes and controlled processing.

    Science.gov (United States)

    Ofan, Renana H; Rubin, Nava; Amodio, David M

    2011-10-01

    We examined the relation between neural activity reflecting early face perception processes and automatic and controlled responses to race. Participants completed a sequential evaluative priming task, in which two-tone images of Black faces, White faces, and cars appeared as primes, followed by target words categorized as pleasant or unpleasant, while electroencephalography was recorded. Half of these participants were alerted that the task assessed racial prejudice and could reveal their personal bias ("alerted" condition). To assess face perception processes, the N170 component of the ERP was examined. For all participants, stronger automatic pro-White bias was associated with larger N170 amplitudes to Black than White faces. For participants in the alerted condition only, larger N170 amplitudes to Black versus White faces were also associated with less controlled processing on the word categorization task. These findings suggest that preexisting racial attitudes affect early face processing and that situational factors moderate the link between early face processing and behavior. PMID:21452950

  11. An original method for optimization of external radiotherapy using a data-processing system

    International Nuclear Information System (INIS)

    The authors describe a method for automatic optimization of doses using a small data-processing system specifically adapted for dosimetry. Optimization is obtained by using the traditional technique of linear programming, which enables the most uniform dose distribution possible in the target volume. The operator's part in the procedure has been reduced to a minimum.
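    The record names linear programming but gives no formulation; below is a purely illustrative sketch in which beam weights are chosen with scipy.optimize.linprog so that a few target points receive at least a prescribed dose while total beam weight is minimized. The dose matrix and prescription are invented numbers, not the authors' model.

```python
# Sketch: a toy linear-programming dose problem in the spirit of the record.
# Minimize total beam weight subject to each target point receiving at least
# the prescribed dose. Dose matrix and prescription are invented.
import numpy as np
from scipy.optimize import linprog

# dose[i, j] = dose delivered to target point i by unit weight of beam j
dose = np.array([[1.0, 0.2, 0.4],
                 [0.3, 1.1, 0.5],
                 [0.5, 0.4, 0.9]])
prescription = np.array([60.0, 60.0, 60.0])   # required dose per point

c = np.ones(dose.shape[1])                     # minimize sum of beam weights
res = linprog(c, A_ub=-dose, b_ub=-prescription, bounds=(0, None))

print("beam weights:", np.round(res.x, 2))
```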

  12. Discovery of mass spectral characteristics and automatic identification of wax esters from gas chromatography mass spectrometry data.

    Science.gov (United States)

    Zhang, Liang-xiao; Yun, Yi-feng; Liang, Yi-zeng; Cao, Dong-sheng

    2010-06-01

    The mass spectral characteristics of wax esters were systematically summarized and interpreted through data mining of their standard mass spectra taken from the NIST standard mass spectral library. Combined with the rules for retention indices described in a previous study, an automatic system was subsequently developed to identify structural information for wax esters from GC/MS data. After being tested and illustrated with both simulated and real GC/MS data, the results indicate that this system can identify wax esters, except the polyunsaturated ones, and that the mass spectral characteristics are useful and effective information for the identification of wax esters. PMID:20417935

  13. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Science.gov (United States)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used the UAV and remote controls connected to a ground control system over a radio frequency (RF) modem operating at about 430 MHz. However, this existing RF-modem approach has limitations for long-distance communication. Using the Smart Camera's LTE (long-term evolution), Bluetooth, and Wi-Fi, a UAV communication module system was developed and used to carry out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone, for areas that need image capture, and software for loading and managing the smart camera. The system is composed of automatic shooting using the sensors of the smart camera and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  14. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Directory of Open Access Journals (Sweden)

    J. W. Park

    2016-06-01

    Full Text Available Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used the UAV and remote controls connected to a ground control system over a radio frequency (RF) modem operating at about 430 MHz. However, this existing RF-modem approach has limitations for long-distance communication. Using the Smart Camera's LTE (long-term evolution), Bluetooth, and Wi-Fi, a UAV communication module system was developed and used to carry out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone, for areas that need image capture, and software for loading and managing the smart camera. The system is composed of automatic shooting using the sensors of the smart camera and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  15. AFIDS, a Precision Automatic Co-Registration Process for Spacecraft Sensors

    Science.gov (United States)

    Bryant, N. A.; Zobrist, A. L.; Logan, T. L.; Bunch, W. L.

    2004-12-01

    AFIDS is the acronym for the Automated Fusion of Image Data System developed recently at JPL under funding from ESTO and other sources, and currently being distributed to interested users in the US government. Automated sub-pixel co-registration and ortho-rectification of satellite imagery is required for precise change detection and analysis of low- (e.g. 1-4km weather satellite), moderate- (e.g. 30m Landsat) and high-resolution (e.g. Ikonos and Quickbird) space sensors. The procedure is "automated" in the sense that human-initiated tiepoint selection is not required, but ephemeris information associated with an image is relied upon to initiate the co-registration process. The methodology employs the additive composition of all pertinent dependent and independent parameters contributing to image-to-image tiepoint misregistration within a satellite scene. Mapping and orthorectification (correction for elevation effects) of satellite imagery defies exact projective solutions because the data are not obtained from a single point (like a camera), but as a continuous process from the orbital path. Standard image processing techniques can apply approximate solutions with sufficient accuracy, but some advances in the state-of-the-art had to be made for precision change-detection and time-series applications where relief offsets become a controlling factor. The basic technique first involves correlation and warping of raw satellite data points to an orthorectified Landsat (30m) or Controlled Image Base (1 or 5m) database to give an approximate mapping. Then digital elevation models are used to correct perspective shifts due to height and view-angle. This image processing approach requires from two (e.g. geosynchronous weather satellite imagery) to four (e.g. polar weather satellite imagery) sequential processing steps that warp the dataset by resampling pixel values. To avoid degradation of the data by multiple resampling, each warp is represented by an ultra-fine grid

  16. Data near processing support for climate data analysis

    Science.gov (United States)

    Kindermann, Stephan; Ehbrecht, Carsten; Hempelmann, Nils

    2016-04-01

    Climate data repositories grow in size exponentially. Scalable data near processing capabilities are required to meet future data analysis requirements and to replace current "data download and process at home" workflows and approaches. On the one hand, these processing capabilities should be accessible via standardized interfaces (e.g. OGC WPS); on the other hand, a large variety of processing tools, toolboxes and deployment alternatives have to be supported and maintained at the data/processing center. We present a community approach of a modular and flexible system supporting the development, deployment and maintenance of OGC-WPS-based web processing services. This approach is organized in an open source github project (called "bird-house") supporting individual processing services ("birds", e.g. climate index calculations, model data ensemble calculations), which rely on basic common infrastructural components (e.g. installation and deployment recipes, analysis code dependency management). To support easy deployment at data centers as well as home institutes (e.g. for testing and development), the system supports the management of the often very complex package dependency chain of climate data analysis packages as well as Docker-based packaging and installation. We present a concrete deployment scenario at the German Climate Computing Center (DKRZ). DKRZ hosts, on the one hand, a multi-petabyte climate archive which is integrated e.g. into the European ENES and worldwide ESGF data infrastructures, and, on the other hand, an HPC center supporting (model) data production and data analysis. The deployment scenario also includes OpenStack-based data cloud services to support data import and data distribution for bird-house-based WPS web processing services. Current challenges for inter-institutional deployments of web processing services supporting the European and international climate modeling community as well as the climate impact community are highlighted.
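    The record centers on OGC WPS interfaces; as a small, hedged illustration of what a client call to such a service can look like, the snippet below issues a WPS 1.0.0 GetCapabilities request over HTTP with the requests library and lists the advertised process identifiers. The endpoint URL is a placeholder, not an actual bird-house deployment.

```python
# Sketch: querying an OGC WPS endpoint for its process list. The URL is a
# placeholder; real deployments expose their own endpoints.
import requests
import xml.etree.ElementTree as ET

WPS_URL = "https://example.org/wps"   # placeholder endpoint

resp = requests.get(
    WPS_URL,
    params={"service": "WPS", "request": "GetCapabilities", "version": "1.0.0"},
    timeout=30,
)
resp.raise_for_status()

# List the advertised process identifiers from the capabilities document.
ns = {
    "wps": "http://www.opengis.net/wps/1.0.0",
    "ows": "http://www.opengis.net/ows/1.1",
}
root = ET.fromstring(resp.content)
for proc in root.findall(".//wps:Process/ows:Identifier", ns):
    print(proc.text)
```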

  17. Automatic Performance Debugging of SPMD Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Jianfeng; Tu, Bibo; Meng, Dan

    2010-01-01

    Automatic performance debugging of parallel applications usually involves two steps: automatic detection of performance bottlenecks and uncovering their root causes for performance optimization. Previous work fails to resolve this challenging issue in several ways: first, several previous efforts automate analysis processes, but present the results in a confined way that only identifies performance problems with a priori knowledge; second, several tools take exploratory or confirmatory data analysis to automatically discover relevant performance data relationships. However, these efforts do not focus on locating performance bottlenecks or uncovering their root causes. In this paper, we design and implement an innovative system, AutoAnalyzer, to automatically debug the performance problems of single program multi-data (SPMD) parallel programs. Our system is unique in terms of two dimensions: first, without any a priori knowledge, we automatically locate bottlenecks and uncover their root causes for performance o...

  18. Data processing framework for decision making

    DEFF Research Database (Denmark)

    Larsen, Jan

    The aim of the talk is (1) to provide insight into some of the issues in data processing and detection systems, and (2) to hint at possible solutions using statistical signal processing and machine learning methodologies...

  19. Gaussian process regression analysis for functional data

    CERN Document Server

    Shi, Jian Qing

    2011-01-01

    Gaussian Process Regression Analysis for Functional Data presents nonparametric statistical methods for functional regression analysis, specifically the methods based on a Gaussian process prior in a functional space. The authors focus on problems involving functional response variables and mixed covariates of functional and scalar variables.Covering the basics of Gaussian process regression, the first several chapters discuss functional data analysis, theoretical aspects based on the asymptotic properties of Gaussian process regression models, and new methodological developments for high dime

  20. Multidimensional data modeling for business process analysis

    OpenAIRE

    Mansmann, Svetlana; Neumuth, Thomas; Scholl, Marc H.

    2007-01-01

    The emerging area of business process intelligence attempts to enhance the analytical capabilities of business process management systems by employing data warehousing and mining technologies. This paper presents an approach to re-engineering the business process modeling in conformity with the multidimensional data model. Since the business process and the multidimensional model are driven by rather different objectives and assumptions, there is no straightforward solution to converging thes...

  1. Multidimensional Data Modeling for Business Process Analysis

    Science.gov (United States)

    Mansmann, Svetlana; Neumuth, Thomas; Scholl, Marc H.

    The emerging area of business process intelligence attempts to enhance the analytical capabilities of business process management systems by employing data warehousing and mining technologies. This paper presents an approach to re-engineering the business process modeling in conformity with the multidimensional data model. Since the business process and the multidimensional model are driven by rather different objectives and assumptions, there is no straightforward solution to converging these models.

  2. THE METHOD OF DATA PROCESSING OF THE ELECTRICAL SURVEYING AND THE PROGRAM SYSTEM USED ON MICROCOMPUTER

    Institute of Scientific and Technical Information of China (English)

    李志聃; 高绋麟

    1990-01-01

    The ESS software package is prepared for electrical data processing in the fields of coal prospecting and hydrogeological engineering, and can also be used in other fields of electrical data processing. It can be operated on any kind of microcomputer with an internal memory of more than 512 kB. The ESS software package moves office operation toward automatic data processing and frees field work from tedious, repetitive data treatment and mapping, so that engineers have more time to analyse and interpret field data. Undoubtedly, it is of benefit to improving the reliability of the geological evaluation.

  3. Processing Solutions for Big Data in Astronomy

    Science.gov (United States)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
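    The record mentions the MapReduce schema only at the level of concepts; as a minimal, stand-alone illustration of that schema (not of Hadoop itself), the snippet below runs the classic word-count example with a map phase, a shuffle that groups by key, and a reduce phase, in plain Python.

```python
# Sketch: the MapReduce pattern mentioned in the record, shown as plain Python
# word count (map -> shuffle/group by key -> reduce). Conceptual only, not Hadoop.
from collections import defaultdict

documents = ["big data needs parallel processing",
             "parallel processing splits big data"]

# Map: emit (word, 1) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group values by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce: sum the values for each key.
counts = {word: sum(values) for word, values in grouped.items()}
print(counts)
```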

  4. Utilization of a genetic algorithm for the automatic detection of oil spill from RADARSAT-2 SAR satellite data

    International Nuclear Information System (INIS)

    Highlights: • An oil platform located 70 km from the coast of Louisiana sank on Thursday. • The oil spill has backscatter values of −25 dB in RADARSAT-2 SAR. • The oil spill is portrayed in SCNB mode by a shallower incidence angle. • Ideal detection of oil spills in SAR images requires moderate wind speeds. • The genetic algorithm is an excellent tool for automatic detection of oil spills in RADARSAT-2 SAR data. - Abstract: In this work, a genetic algorithm is applied for the automatic detection of oil spills. The procedure is implemented using sequences from RADARSAT-2 SAR ScanSAR Narrow single-beam data acquired in the Gulf of Mexico. The study demonstrates that the implementation of crossover allows for the generation of an accurate oil spill pattern. This conclusion is confirmed by the receiver-operating characteristic (ROC) curve. The ROC curve indicates that the existence of oil slick footprints can be identified using the area between the ROC curve and the no-discrimination line, which is 90% and greater than that of other surrounding environmental features. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills, and the ScanSAR Narrow single-beam mode serves as an excellent sensor for oil spill detection and survey
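    The record credits the crossover operator but does not show the algorithm; below is a generic, self-contained genetic-algorithm sketch (selection, crossover, mutation) that evolves a backscatter threshold separating two synthetic pixel populations. The populations, fitness function and GA parameters are all invented for illustration and are not the authors' formulation.

```python
# Sketch: a generic genetic algorithm with crossover and mutation, evolving a
# single backscatter threshold (in dB) separating two synthetic pixel
# populations. All data and parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
slick = rng.normal(-25.0, 1.5, 500)      # synthetic "oil slick" backscatter
sea   = rng.normal(-18.0, 2.0, 500)      # synthetic open-water backscatter

def fitness(threshold):
    """Fraction of pixels correctly classified by the threshold."""
    return 0.5 * (np.mean(slick < threshold) + np.mean(sea >= threshold))

pop = rng.uniform(-35.0, -10.0, 30)      # initial population of thresholds
for _ in range(50):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-10:]]            # selection (top 10)
    kids = []
    while len(kids) < len(pop):
        a, b = rng.choice(parents, 2, replace=False)
        child = 0.5 * (a + b)                           # arithmetic crossover
        child += rng.normal(0.0, 0.3)                   # mutation
        kids.append(child)
    pop = np.array(kids)

best = pop[np.argmax([fitness(t) for t in pop])]
print(f"evolved threshold: {best:.2f} dB, accuracy: {fitness(best):.3f}")
```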

  5. Automatic Descriptor-Based Co-Registration of Frame Hyperspectral Data

    OpenAIRE

    Maria Vakalopoulou; Konstantinos Karantzalos

    2014-01-01

    Frame hyperspectral sensors, in contrast to push-broom or line-scanning ones, produce hyperspectral datasets with, in general, better geometry but with unregistered spectral bands. Being acquired at different instances and due to platform motion and movements (UAVs, aircraft, etc.), every spectral band is displaced and acquired with a different geometry. The automatic and accurate registration of hyperspectral datasets from frame sensors remains a challenge. Powerful local feature descriptor...

  6. Automatic electricity markets data extraction for realistic multi-agent simulations

    DEFF Research Database (Denmark)

    Pereira, Ivo F.; Sousa, Tiago M.; Praca, Isabel;

    2014-01-01

    different market sources, even including different market types; machine learning approach for automatic definition of downloads periodicity of new information available on-line. This is a crucial tool to go a step forward in electricity markets simulation, since the integration of this database with a...... scenarios generation tool, based on knowledge discovery techniques, provides a framework to study real market scenarios allowing simulators improvement and validation....

  7. Automatic Incident Classification for Big Traffic Data by Adaptive Boosting SVM

    OpenAIRE

    Wang, Li-li; Ngan, Henry Y. T.; Yung, Nelson H. C.

    2015-01-01

    Modern cities experience heavy traffic flows and congestions regularly across space and time. Monitoring traffic situations becomes an important challenge for the Traffic Control and Surveillance Systems (TCSS). In advanced TCSS, it is helpful to automatically detect and classify different traffic incidents such as severity of congestion, abnormal driving pattern, abrupt or illegal stop on road, etc. Although most TCSS are equipped with basic incident detection algorithms, they are however cr...

  8. Semi-Automatic Detection of Swimming Pools from Aerial High-Resolution Images and LIDAR Data

    OpenAIRE

    Borja Rodríguez-Cuenca; Maria C. Alonso

    2014-01-01

    Bodies of water, particularly swimming pools, are land covers of high interest. Their maintenance involves energy costs that authorities must take into consideration. In addition, swimming pools are important water sources for firefighting. However, they also provide a habitat for mosquitoes to breed, potentially posing a serious health threat of mosquito-borne disease. This paper presents a novel semi-automatic method of detecting swimming pools in urban environments from aerial images and L...

  9. Data assimilation techniques in modeling ocean processes

    Digital Repository Service at National Institute of Oceanography (India)

    Mahadevan, R.; Fernandes, A.A.; Naqvi, S.W.A.

    Three main classes of procedures in data analysis and assimilation viz. Objective Analysis, Optimal Interpolation and variational method, used to process the observed data on atmospheric and ocean parameters are briefly reviewed. The variational...

  10. Age effects shrink when motor learning is predominantly supported by nondeclarative, automatic memory processes: evidence from golf putting.

    Science.gov (United States)

    Chauvel, Guillaume; Maquestiaux, François; Hartley, Alan A; Joubert, Sven; Didierjean, André; Masters, Rich S W

    2012-01-01

    Can motor learning be equivalent in younger and older adults? To address this question, 48 younger (M = 23.5 years) and 48 older (M = 65.0 years) participants learned to perform a golf-putting task in two different motor learning situations: one that resulted in infrequent errors or one that resulted in frequent errors. The results demonstrated that infrequent-error learning predominantly relied on nondeclarative, automatic memory processes whereas frequent-error learning predominantly relied on declarative, effortful memory processes: After learning, infrequent-error learners verbalized fewer strategies than frequent-error learners; at transfer, a concurrent, attention-demanding secondary task (tone counting) left motor performance of infrequent-error learners unaffected but impaired that of frequent-error learners. The results showed age-equivalent motor performance in infrequent-error learning but age deficits in frequent-error learning. Motor performance of frequent-error learners required more attention with age, as evidenced by an age deficit on the attention-demanding secondary task. The disappearance of age effects when nondeclarative, automatic memory processes predominated suggests that these processes are preserved with age and are available even early in motor learning. PMID:21736434

  11. Embed XRF Data Processing System Development

    International Nuclear Information System (INIS)

    This paper introduces a project for an XRF data processing system. The project adopts the embedded processor LPC2148 as the core of the data processing. The system is equipped with a graphic LCD with a resolution of 320 x 240 dots. A large-capacity Secure Digital memory card is used as data memory, and data can be exchanged with a PC over a USB interface. We have also made some improvements to the XRF data processing functions. The system runs stably and reliably and is convenient to use, so it has good prospects for application and extension. (authors)

  12. Apache Flink: Distributed Stream Data Processing

    CERN Document Server

    Jacobs, Kevin; CERN. Geneva. IT Department

    2016-01-01

    The amount of data has grown significantly over the past few years. Therefore, the need for distributed data processing frameworks is growing. Currently, there are two well-known data processing frameworks with an API for data batches and an API for data streams, namely Apache Flink and Apache Spark. Both Apache Spark and Apache Flink improve upon the MapReduce implementation of the Apache Hadoop framework. MapReduce is the first programming model for distributed processing at large scale that is available in Apache Hadoop. This report compares the Stream API and the Batch API for both frameworks.

  13. ACRF Data Collection and Processing Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Macduff, M; Egan, D

    2004-12-01

    We present a description of the data flow from measurement to long-term archive. We also discuss data communications infrastructure. The data handling processes presented include collection, transfer, ingest, quality control, creation of Value-Added Products (VAP), and data archiving.

  14. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    Science.gov (United States)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are considered as panoramic images enriched by the depth map. Computer vision (CV) algorithms are used for the orientation; they are tested for the correctness of tie-point detection and computation time, and assessed for difficulties in their implementation. The BRISK, FASRT, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
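    The record lists several key-point detectors without code; the snippet below sketches the generic tie-point idea with OpenCV's ORB detector and a brute-force matcher on two synthetic intensity images (ORB is used here for convenience and is not one of the detectors evaluated in the record; the images stand in for panoramas rendered from two scan stations).

```python
# Sketch: detecting and matching key-points between two intensity images, the
# generic CV step behind TLS orientation. ORB is used for convenience; the
# record itself evaluates BRISK, MSER, SIFT, SURF, etc.
import cv2
import numpy as np

# Two synthetic grayscale images: the second is a shifted copy of the first.
rng = np.random.default_rng(0)
img1 = np.zeros((480, 640), dtype=np.uint8)
for _ in range(60):  # draw random high-contrast rectangles to create corners
    x, y = int(rng.integers(10, 560)), int(rng.integers(10, 400))
    w, h = int(rng.integers(15, 60)), int(rng.integers(15, 60))
    cv2.rectangle(img1, (x, y), (x + w, y + h), int(rng.integers(60, 255)), -1)
img2 = np.roll(img1, 25, axis=1)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher with cross-checking for more reliable tie points.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate tie points between the two images")
```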

  15. ARP: Automatic rapid processing for the generation of problem dependent SAS2H/ORIGEN-s cross section libraries

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Hermann, O.W.; Bowman, S.M.; Parks, C.V.

    1998-04-01

    In this report, a methodology is described which serves as an alternative to the SAS2H path of the SCALE system to generate cross sections for point-depletion calculations with the ORIGEN-S code. ARP, Automatic Rapid Processing, is an algorithm that allows the generation of cross-section libraries suitable to the ORIGEN-S code by interpolation over pregenerated SAS2H libraries. The interpolations are carried out on the following variables: burnup, enrichment, and water density. The adequacy of the methodology is evaluated by comparing measured and computed spent fuel isotopic compositions for PWR and BWR systems.
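    The report's interpolation scheme is not reproduced here; as a generic illustration of interpolating a nuclide cross section over burnup, enrichment and water density, the snippet below uses scipy's RegularGridInterpolator on an invented 3-D grid. The grid points and values are placeholders, not SAS2H data.

```python
# Sketch: trilinear interpolation of a cross section over (burnup, enrichment,
# water density), the kind of lookup ARP performs over pregenerated libraries.
# Grid points and values below are invented placeholders, not SAS2H output.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

burnup     = np.array([0.0, 10.0, 20.0, 40.0])   # GWd/MTU
enrichment = np.array([2.0, 3.0, 4.0, 5.0])      # wt% U-235
water_rho  = np.array([0.4, 0.6, 0.8])           # g/cm^3

# Invented cross-section values on the 4 x 4 x 3 grid.
rng = np.random.default_rng(0)
sigma = 1.0 + 0.01 * rng.random((4, 4, 3))

interp = RegularGridInterpolator((burnup, enrichment, water_rho), sigma)
print(interp([[15.0, 3.5, 0.7]]))   # cross section at an off-grid state point
```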

  16. Automatic rapid process for the generation of problem-dependent SAS2H/ORIGEN-S cross-section libraries

    International Nuclear Information System (INIS)

    A methodology is described that serves as an alternative to the SAS2H path of the SCALE system to generate cross sections for point-depletion calculations with the ORIGEN-S code. Automatic Rapid Processing (ARP) is an algorithm that allows the generation of cross-section libraries suitable to the ORIGEN-S code by interpolation over pregenerated SAS2H libraries. The interpolations are carried out on the following variables: burnup, enrichment, and water density. The adequacy of the methodology is evaluated by comparing measured and computed spent-fuel isotopic compositions for pressurized water reactor and boiling water reactor systems

  17. EPR design tools. Integrated data processing tools

    International Nuclear Information System (INIS)

    In all technical areas, planning and design have been supported by electronic data processing for many years. New data processing tools had to be developed for the European Pressurized Water Reactor (EPR). The work to be performed was split between KWU and Framatome and laid down in the Basic Design contract. The entire plant was reduced to a logical data structure; the circuit diagrams and flowsheets of the systems were drafted, the central data pool was established, the outlines of building structures were defined, the layout of plant components was planned, and the electrical systems were documented. Building construction engineering was also supported by data processing. The tasks laid down in the Basic Design were completed as so-called milestones. Additional data processing tools, also based on the central data pool, are required for the phases following the Basic Design phase, i.e. Basic Design Optimization; Detailed Design; Management; Construction; and Commissioning. (orig.)

  18. Satellite radar altimetry over ice. Volume 1: Processing and corrections of Seasat data over Greenland

    Science.gov (United States)

    Zwally, H. Jay; Brenner, Anita C.; Major, Judith A.; Martin, Thomas V.; Bindschadler, Robert A.

    1990-01-01

    The data-processing methods and ice data products derived from Seasat radar altimeter measurements over the Greenland ice sheet and surrounding sea ice are documented. The corrections derived and applied to the Seasat radar altimeter data over ice are described in detail, including the editing and retracking algorithm to correct for height errors caused by lags in the automatic range tracking circuit. The methods for radial adjustment of the orbits and estimation of the slope-induced errors are given.

  19. Shadow-Based Hierarchical Matching for the Automatic Registration of Airborne LiDAR Data and Space Imagery

    OpenAIRE

    Alireza Safdarinezhad; Mehdi Mokhtarzade; Mohammad Javad Valadan Zoej

    2016-01-01

    The automatic registration of LiDAR data and optical images, which are heterogeneous data sources, has been a major research challenge in recent years. In this paper, a novel hierarchical method is proposed in which the least amount of interaction of a skilled operator is required. Thereby, two shadow extraction schemes, one from LiDAR and the other from high-resolution satellite images, were used, and the obtained 2D shadow maps were then considered as prospective matching entities. Taken as...

  20. Automatic quality assurance in cutting and machining

    International Nuclear Information System (INIS)

    Requirements, economics, and the possibility of automatic data acquisition and processing are discussed for different production stages. Which of the stages (materials and measuring equipment handling, data acquisition, or data processing) is to have priority in automation depends on the time requirements of these stages. (orig.)

  1. Data processing and visualisation in the Rosetta Science Ground Segment

    Science.gov (United States)

    Geiger, Bernhard

    2016-09-01

    Rosetta is the first space mission to rendezvous with a comet. The spacecraft encountered its target 67P/Churyumov-Gerasimenko in 2014 and currently escorts the comet through a complete activity cycle during perihelion passage. The Rosetta Science Ground Segment (RSGS) is in charge of planning and coordinating the observations carried out by the scientific instruments on board the Rosetta spacecraft. We describe the data processing system implemented at the RSGS in order to support data analysis and science operations planning. The system automatically retrieves and processes telemetry data in near real-time. The generated products include spacecraft and instrument housekeeping parameters, scientific data for some instruments, and derived quantities. Based on spacecraft and comet trajectory information a series of geometric variables are calculated in order to assess the conditions for scheduling the observations of the scientific instruments and analyse the respective measurements obtained. Images acquired by the Rosetta Navigation Camera are processed and distributed in near real-time to the instrument team community. A quicklook web-page displaying the images allows the RSGS team to monitor the state of the comet and the correct acquisition and downlink of the images. Consolidated datasets are later delivered to the long-term archive.

  2. Telemetry Data Processing Methodology: An ASLV Experience

    OpenAIRE

    R. Varaprasad

    1998-01-01

    In any launch vehicle mission, post-flight analysis (PFA) of vehicle telemetry data turns out to be all-important, because it helps in evaluating the detailed in-flight performance of the various subsystems of the vehicle. An integrated processing methodology was adopted and generalised software was developed for processing the telemetry data of the Augmented Satellite Launch Vehicle (ASLV).

  3. Business Data Processing: A Teacher's Guide.

    Science.gov (United States)

    Virginia State Dept. of Education, Richmond. Business Education Service.

    The curriculum guide, which was prepared to serve as an aid to all teachers of business data processing, gives a complete outline for a high-school level course in both Common Business Oriented Language (COBOL) and Report Program Generator (RPG). Parts one and two of the guide together comprise an introduction to data processing, which deals with…

  4. Data acquisition system for TRIGA Mark I nuclear reactor and a proposal for its automatic operation

    International Nuclear Information System (INIS)

    The TRIGA IPR-R1 Nuclear Research Reactor, located at the Nuclear Technology Development Center (CDTN/CNEN) in Belo Horizonte, Brazil, has been operated for 44 years. During these years the main operational parameters were monitored by analog recorders and counters located in the reactor control console, and the most important operational parameters and data were registered by the reactor operators in the reactor logbook. This process is quite useful, but it can involve human error, and it is impossible for the operators to take notes of all process variables, particularly during fast power transients. A PC-based data acquisition system was developed for the reactor that allows on-line monitoring through graphic interfaces and shows the evolution of operational parameters to the operators. Some parameters that were never measured on line, such as the thermal power and the coolant flow rate in the primary loop, are now monitored on the computer screen. The developed system allows measuring all parameters at a rate of up to 1 kHz. These data are also recorded in text files available for consultation and analysis. (author)

  5. Data processing device for computed tomography system

    International Nuclear Information System (INIS)

    A data processing device applied to a computed tomography system which examines a living body utilizing X-ray radiation is disclosed. The X-rays which have penetrated the living body are converted into electric signals in a detecting section. The electric signals are acquired and converted from analog form into digital form in a data acquisition section, and then supplied to a matrix data-generating section included in the data processing device. This matrix data-generating section generates matrix data which correspond to a plurality of projection data. These matrix data are supplied to a partial sum-producing section. The partial sums respectively corresponding to groups of the matrix data are calculated in this partial sum-producing section and then supplied to an accumulation section. In this accumulation section, the final value corresponding to the total sum of the matrix data is calculated, whereby the calculation for image reconstruction is performed

  6. Process mining data science in action

    CERN Document Server

    van der Aalst, Wil

    2016-01-01

    The first to cover this missing link between data mining and process modeling, this book provides real-world techniques for monitoring and analyzing processes in real time. It is a powerful new tool destined to play a key role in business process management.

  7. Data processing at the European Space Agency

    International Nuclear Information System (INIS)

    A short introduction explains ESA's activity and programmes. Satellites are seen as generators of space data. The aim of ESA missions is to deliver data to the end users. The end-to-end system from satellite to end user is examined in order to understand where and how the data processing activity is performed. Centralized processing done by the Agency versus decentralized processing done by the end user is analysed. A concrete example of the data processing chain for an ESA Scientific Satellite is presented in order to understand the main characteristics of ESA Data Processing Systems. These characteristics require a rigorous software engineering approach, which is enforced at ESA through standard practices issued by a Board for Software Standardization. The main features of the standard practices are presented. Finally, some ideas are presented concerning future standardized means for interchange of data. (orig.)

  8. An ontology driven data mining process

    OpenAIRE

    BRISSON, Laurent; Collard, Martine

    2008-01-01

    This paper deals with knowledge integration in a data mining process. We suggest to model domain knowledge during business understanding and data understanding steps in order to build an ontology driven information system (ODIS). We present the KEOPS Methodology based on this approach. In KEOPS, the ODIS is dedicated to data mining tasks. It allows using expert knowledge for efficient data selection, data preparation and model interpretation. In this paper, we detail each of these ontology dr...

  9. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    International Nuclear Information System (INIS)

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches. (paper)
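    The framework's implementation is not part of the record; as a small illustration of the core recognition idea (Fourier descriptors of a closed contour used as shape features for an SVM), the snippet below computes a few descriptors for synthetic contours and trains scikit-learn's SVC on them. The contours, labels and number of descriptors are invented for illustration only.

```python
# Sketch: Fourier descriptors of closed 2-D contours as shape features for an
# SVM classifier, the recognition idea described in the record. Contours,
# labels and descriptor count are invented.
import numpy as np
from sklearn.svm import SVC

def fourier_descriptors(contour_xy, n_desc=8):
    """Magnitudes of low-order Fourier coefficients of a closed contour,
    normalized for scale (translation drops out by skipping the DC term)."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_desc + 1])
    return mags / (mags[0] + 1e-12)           # scale invariance

def synthetic_contour(bumpiness, n=128):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    r = 1.0 + bumpiness * np.sin(5 * t)
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

# Two invented shape classes: smooth contours vs. bumpy contours.
X = np.array([fourier_descriptors(synthetic_contour(b))
              for b in [0.02, 0.05, 0.08, 0.4, 0.5, 0.6]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.predict([fourier_descriptors(synthetic_contour(0.45))]))  # expected: bumpy class
```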

  10. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    Science.gov (United States)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  11. Environmental monitoring based on automatic change detection from remotely sensed data: kernel-based approach

    Science.gov (United States)

    Shah-Hosseini, Reza; Homayouni, Saeid; Safari, Abdolreza

    2015-01-01

    In the event of a natural disaster, such as a flood or earthquake, using fast and efficient methods for estimating the extent of the damage is critical. Automatic change mapping and estimating are important in order to monitor environmental changes, e.g., deforestation. Traditional change detection (CD) approaches are time consuming, user dependent, and strongly influenced by noise and/or complex spectral classes in a region. Change maps obtained by these methods usually suffer from isolated changed pixels and have low accuracy. To deal with this, an automatic CD framework, based on the integration of the change vector analysis (CVA) technique, kernel-based C-means clustering (KCMC), and a kernel-based minimum distance (KBMD) classifier, is proposed. In parallel with the proposed algorithm, a support vector machine (SVM) CD method is presented and analyzed. In the first step, a differential image is generated via two approaches in a high-dimensional Hilbert space. Next, by using CVA and automatically determining a threshold, the pseudo-training samples of the change and no-change classes are extracted. These training samples are used for determining the initial values of the KCMC parameters and for training the SVM-based CD method. Then, a cost function based on geometrical and spectral similarity in the kernel space is optimized in order to estimate the KCMC parameters and to select precise training samples. These training samples are used to train the KBMD classifier. Finally, the class label of each unknown pixel is determined using the KBMD classifier and the SVM-based CD method. In order to evaluate the efficiency of the proposed algorithm for various remote sensing images and applications, two different datasets acquired by Quickbird and Landsat TM/ETM+ are used. The results show the good flexibility and effectiveness of this automatic CD method for environmental change monitoring. In addition, the comparative analysis of results from the proposed method
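    The record's kernel machinery is beyond a short snippet, but the plain CVA step it builds on is simple; the sketch below computes change-vector magnitudes between two co-registered multi-band images and thresholds them into a change map. The image arrays and the threshold rule are placeholders standing in for the kernel-based clustering.

```python
# Sketch: the plain change-vector-analysis (CVA) step that the record's
# kernel-based framework builds on. Images and threshold are placeholders.
import numpy as np

rng = np.random.default_rng(0)
bands, rows, cols = 4, 100, 100
t1 = rng.random((bands, rows, cols))          # image at time 1 (co-registered)
t2 = t1.copy()
t2[:, 40:60, 40:60] += 0.8                    # simulate a changed patch

diff = t2 - t1                                # per-band change vectors
magnitude = np.sqrt((diff ** 2).sum(axis=0))  # CVA magnitude per pixel

# Simple global threshold (mean + 2*std) as a stand-in for the kernel clustering.
threshold = magnitude.mean() + 2.0 * magnitude.std()
change_map = magnitude > threshold
print("changed pixels:", int(change_map.sum()))
```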

  12. Automatic first-break picking using the instantaneous traveltime attribute

    KAUST Repository

    Saragiotis, Christos

    2012-01-01

    Picking the first breaks is an important step in seismic processing. The large volume of the seismic data calls for automatic and objective picking. We introduce a new automatic first-break picker, which uses specifically designed time windows and an iterative procedure based on the instantaneous traveltime attribute. The method is fast as it only uses a few FFTs per trace. We demonstrate the effectiveness of this automatic method by applying it on real test data.

  13. Automatic Detection and Tracking of CMEs II: Multiscale Filtering of Coronagraph Data

    CERN Document Server

    Byrne, Jason P; Habbal, Shadia R; Gallagher, Peter T; 10.1088/0004-637X/752/2/145

    2012-01-01

    Studying CMEs in coronagraph data can be challenging due to their diffuse structure and transient nature, and user-specific biases may be introduced through visual inspection of the images. The large amount of data available from the SOHO, STEREO, and future coronagraph missions, also makes manual cataloguing of CMEs tedious, and so a robust method of detection and analysis is required. This has led to the development of automated CME detection and cataloguing packages such as CACTus, SEEDS and ARTEMIS. Here we present the development of a new CORIMP (coronal image processing) CME detection and tracking technique that overcomes many of the drawbacks of current catalogues. It works by first employing the dynamic CME separation technique outlined in a companion paper, and then characterising CME structure via a multiscale edge-detection algorithm. The detections are chained through time to determine the CME kinematics and morphological changes as it propagates across the plane-of-sky. The effectiveness of the...

  14. Input of graphic and alphabetic-numeric data into a computer during dosimetric planning of irradiation by means of the semi-automatic data coding device

    International Nuclear Information System (INIS)

    The system developed for input of the data necessary for dosimetric planning of patient irradiation into the ES-1022 computer memory is described. The data are obtained by means of the semi-automatic data coding device. The system can be used for dosimetric planning in two modes: direct calculation of dose distributions and irradiation conditions, or archiving of the initial patient data on magnetic tape or any other external carrier.

  15. Initial borehole acoustic televiewer data processing algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Moore, T.K.

    1988-06-01

    With the development of a new digital televiewer, several algorithms have been developed in support of off-line data processing. This report describes the initial set of utilities developed to support data handling as well as data display. Functional descriptions, implementation details, and instructions for use of the seven algorithms are provided. 5 refs., 33 figs., 1 tab.

  16. Semantic process mining of enterprise transaction data

    OpenAIRE

    Ingvaldsen, Jon Espen

    2011-01-01

    Process mining technologies provide capabilities for discovering and describing multiple perspectives of the real business process flows in an organization. Enterprise Resource Planning (ERP) systems are commonly stated in research as promising areas for process mining. ERP systems are application packages that have received wide industrial adoption, and they contain extensive amounts of data related to business process performance. However, very little research work describes actual experien...

  17. Efficient Organization of Collective Data-Processing

    OpenAIRE

    Cukrowski, Jacek; Manfred M. Fischer

    1998-01-01

    The paper examines the application of the concept of economic efficiency to organizational issues of collective information processing in decision making. Information processing is modeled in the framework of the dynamic parallel-processing model of associative computation with an endogenous set-up cost of the processors. The model is extended to include the specific features of collective information processing in the team of decision makers which could cause an error in data ...

  18. Automatic Segmentation of Drosophila Neural Compartments Using GAL4 Expression Data Reveals Novel Visual Pathways.

    Science.gov (United States)

    Panser, Karin; Tirian, Laszlo; Schulze, Florian; Villalba, Santiago; Jefferis, Gregory S X E; Bühler, Katja; Straw, Andrew D

    2016-08-01

    Identifying distinct anatomical structures within the brain and developing genetic tools to target them are fundamental steps for understanding brain function. We hypothesize that enhancer expression patterns can be used to automatically identify functional units such as neuropils and fiber tracts. We used two recent, genome-scale Drosophila GAL4 libraries and associated confocal image datasets to segment large brain regions into smaller subvolumes. Our results (available at https://strawlab.org/braincode) support this hypothesis because regions with well-known anatomy, namely the antennal lobes and central complex, were automatically segmented into familiar compartments. The basis for the structural assignment is clustering of voxels based on patterns of enhancer expression. These initial clusters are agglomerated to make hierarchical predictions of structure. We applied the algorithm to central brain regions receiving input from the optic lobes. Based on the automated segmentation and manual validation, we can identify and provide promising driver lines for 11 previously identified and 14 novel types of visual projection neurons and their associated optic glomeruli. The same strategy can be used in other brain regions and likely other species, including vertebrates. PMID:27426516
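
    The clustering idea (group voxels by their expression profile across many driver lines, then agglomerate the clusters into a hierarchy) can be sketched roughly as follows; the parameters and function below are hypothetical and not the braincode pipeline itself.

        import numpy as np
        from sklearn.cluster import KMeans, AgglomerativeClustering

        def segment_region(expression, n_voxel_clusters=60, n_compartments=12):
            """`expression` is an (n_voxels, n_driver_lines) matrix of per-voxel
            expression values. Voxels are first clustered by expression pattern,
            then the cluster centroids are agglomerated into coarser compartments."""
            km = KMeans(n_clusters=n_voxel_clusters, n_init=10, random_state=0)
            voxel_labels = km.fit_predict(expression)

            # Agglomerate the fine clusters into candidate compartments
            agg = AgglomerativeClustering(n_clusters=n_compartments)
            compartment_of_cluster = agg.fit_predict(km.cluster_centers_)

            return compartment_of_cluster[voxel_labels]   # compartment label per voxel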

  19. Virtual endoscopy post-processing of helical CT data sets

    International Nuclear Information System (INIS)

    Purpose: The purpose of this work was to test newly developed post-processing software for virtual CT endoscopic methods. Virtual endoscopic images were generated from helical CT data sets in the region of the shoulder joint (n=2), the tracheobronchial system (n=3), the nasal sinuses (n=2), the colon (n=2), and the common carotid artery (n=1). Software developed specifically for virtual endoscopy ('Navigator') was used which, after a prior threshold value selection, makes the reconstruction of internal body surfaces possible through an automatic segmentation process. We evaluated the usability of the software, the reconstruction time for individual images and sequences of images, as well as the quality of the reconstruction. All pathological findings of the virtual endoscopy were confirmed by surgery. Results: The post-processing program is easy to use and provides virtual endoscopic images within 50 seconds. Depending on the extent of the data set, virtual tracheobronchoscopy as a cine loop sequence required about 15 minutes. Through use of the threshold-dependent surface reconstruction the demands on the computer configuration are limited; however, this also created quality problems in image calculation as a consequence of the accompanying loss of data. Conclusions: The Navigator software enables the calculation of virtual endoscopic models with only moderate demands on the hardware. (orig.)
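
    Threshold-based surface reconstruction of the kind described here is commonly implemented with marching cubes; the snippet below is a generic sketch (not the Navigator software) that extracts an inner-surface mesh from a CT volume at an assumed Hounsfield-unit threshold.

        import numpy as np
        from skimage import measure

        def extract_surface(ct_volume, threshold_hu=-500.0, spacing=(1.0, 1.0, 1.0)):
            """Extract a triangle mesh of the air/tissue interface from a CT volume
            (e.g. for virtual bronchoscopy) at a given HU threshold."""
            verts, faces, normals, values = measure.marching_cubes(
                ct_volume.astype(np.float32), level=threshold_hu, spacing=spacing)
            return verts, faces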

  20. A Domain Description Language for Data Processing

    Science.gov (United States)

    Golden, Keith

    2003-01-01

    We discuss an application of planning to data processing, a planning problem which poses unique challenges for domain description languages. We discuss these challenges and why the current PDDL standard does not meet them. We discuss DPADL (Data Processing Action Description Language), a language for describing planning domains that involve data processing. DPADL is a declarative, object-oriented language that supports constraints and embedded Java code, object creation and copying, explicit inputs and outputs for actions, and metadata descriptions of existing and desired data. DPADL is supported by the IMAGEbot system, which we are using to provide automation for an ecological forecasting application. We compare DPADL to PDDL and discuss changes that could be made to PDDL to make it more suitable for representing planning domains that involve data processing actions.

  1. Linking DICOM pixel data with radiology reports using automatic semantic annotation

    Science.gov (United States)

    Pathak, Sayan D.; Kim, Woojin; Munasinghe, Indeera; Criminisi, Antonio; White, Steve; Siddiqui, Khan

    2012-02-01

    Improved access to DICOM studies for both physicians and patients is changing the ways medical imaging studies are visualized and interpreted beyond the confines of radiologists' PACS workstations. While radiologists are trained for viewing and image interpretation, a non-radiologist physician relies on the radiologists' reports. Consequently, patients have historically been informed about their imaging findings via oral communication with their physicians, even though clinical studies have shown that patients respond to a physician's advice significantly better when they are shown their own actual data. Our previous work on automated semantic annotation of DICOM Computed Tomography (CT) images allows us to link the radiology report with the corresponding images, enabling us to bridge the gap between image data and the human-interpreted textual description of the corresponding imaging studies. The mapping of radiology text is facilitated by a natural language processing (NLP) based search application. When combined with our automated semantic annotation of images, it enables navigation in large DICOM studies by clicking hyperlinked text in the radiology reports. An added advantage of using semantic annotation is the ability to render the organs at their default window level settings, thus eliminating another barrier to image sharing and distribution. We believe such approaches would potentially enable consumers to access their imaging data and navigate them in an informed manner.
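
    The default window-level rendering mentioned above can be illustrated with pydicom; this is a generic sketch with a hypothetical organ-to-window table, not the authors' system. It reads a CT slice and maps it to 8-bit grayscale using a window centre/width.

        import numpy as np
        import pydicom

        # Hypothetical default window settings per annotated organ (centre, width in HU)
        DEFAULT_WINDOWS = {"lung": (-600, 1500), "liver": (60, 150), "bone": (300, 1500)}

        def render_with_window(dicom_path, organ="liver"):
            ds = pydicom.dcmread(dicom_path)
            hu = (ds.pixel_array * float(getattr(ds, "RescaleSlope", 1))
                  + float(getattr(ds, "RescaleIntercept", 0)))
            center, width = DEFAULT_WINDOWS[organ]
            lo, hi = center - width / 2.0, center + width / 2.0
            img = np.clip((hu - lo) / (hi - lo), 0.0, 1.0)
            return (img * 255).astype(np.uint8)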

  2. Information-management data base for fusion-target fabrication processes

    International Nuclear Information System (INIS)

    A computer-based data-management system has been developed to handle data associated with target-fabrication processes including glass microballoon characterization, gas filling, materials coating, and storage locations. The system provides automatic data storage and computation, flexible data-entry procedures, fast access, automated report generation, and secure data transfer. It resides on a CDC CYBER 175 computer and is compatible with the CDC data-base language Query Update, but is based on custom FORTRAN software interacting directly with the CYBER's file-management system. The described data base maintains detailed, accurate, and readily available records of fusion-target information

  3. Automatic disease screening method using image processing for dried blood microfluidic drop stain pattern recognition.

    Science.gov (United States)

    Sikarwar, Basant S; Roy, Mukesh; Ranjan, Priya; Goyal, Ayush

    2016-07-01

    This paper examines programmed automatic recognition of infection from samples of dried stains of micro-scale drops of patient blood. This technique has the advantage of being low-cost and less intrusive, not requiring puncturing the patient with a needle to draw blood, which is especially critical for infants and the elderly. It also does not require expensive pathological blood test laboratory equipment. The method is shown in this work to be successful for ailment identification in patients suffering from tuberculosis and anaemia. Illness affects the physical properties of blood, which in turn influence the patterns of dried micro-scale blood drop stains. For instance, if a patient has a severe drop in platelet count, which is often the case for dengue or malaria patients, the blood's viscosity drops substantially, i.e. the blood is thinner. Thus, micro-scale blood drop stain samples can be utilised for diagnosing maladies. This paper presents programmed automatic examination of dried micro-scale blood drop stain patterns utilising an algorithm based on pattern recognition. The stain samples of ordinary non-infected people are clearly distinguishable from those of sick people, due to key distinguishing features. As a contextual analysis, the micro-scale blood drop stains of patients infected with tuberculosis have been contrasted with those of typical healthy people. The paper delves into the fundamental flow mechanics behind how the dried micro-scale blood drop stains are formed. What has been found is a thick ring-like feature in the dried micro-scale blood drop stains of non-ailing people and thin line-like shapes in the dried micro-scale blood drop stains of patients with anaemia or tuberculosis. The ring-like feature at the periphery is caused by an outward stream conveying suspended particles to the edge
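
    The peripheral ring lends itself to a simple quantitative check: a radial intensity profile of the dried-drop image. The sketch below (an illustrative analysis, not the authors' classifier) averages grey levels in concentric rings around the stain centre; a pronounced deposit near the rim then shows up as a distinct feature of the profile.

        import numpy as np

        def radial_profile(gray_image, center=None, n_bins=50):
            """Mean intensity as a function of normalised radius from the stain centre."""
            h, w = gray_image.shape
            cy, cx = center if center is not None else (h / 2.0, w / 2.0)
            yy, xx = np.indices(gray_image.shape)
            r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
            bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
            sums = np.bincount(bins.ravel(), weights=gray_image.ravel(), minlength=n_bins)
            counts = np.bincount(bins.ravel(), minlength=n_bins)
            return sums / np.maximum(counts, 1)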

  4. Improvements of the tubes ut inspection using a rotating probe and data processing

    International Nuclear Information System (INIS)

    An ultrasonic method has been developed to test straight or bent steam generator tubes in power plants. A new type of rotating probe for crack and wall-thickness measurements has been built and successfully tested. The data acquisition and processing system SPARTACUS is used. It allows high-frequency digitization and powerful signal processing with automatic reporting. Actual performance was assessed on natural and artificial defects under representative operating conditions

  5. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    Energy Technology Data Exchange (ETDEWEB)

    Urban, J., E-mail: urban@ipp.cas.cz [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Pipek, J.; Hron, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Janky, F.; Papřok, R.; Peterka, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Department of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, 180 00 Praha 8 (Czech Republic); Duarte, A.S. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-05-15

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is light weight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. Database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisitions systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks.
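
    The dependency-rule idea (run a post-processing task only after the signals it depends on are available, in parallel where possible) can be sketched with the Python standard library; this is a schematic example with made-up signal names, not the actual CDB server.

        from graphlib import TopologicalSorter          # Python 3.9+
        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical dependency rules: derived signal -> signals it depends on
        RULES = {
            "electron_density": {"interferometer_raw"},
            "stored_energy": {"magnetics_raw", "electron_density"},
            "confinement_time": {"stored_energy"},
        }

        def run_post_processing(task_funcs, workers=4):
            """`task_funcs` maps signal names to zero-argument callables that compute
            and store that signal. Tasks whose inputs are ready run in parallel."""
            ts = TopologicalSorter(RULES)
            ts.prepare()
            with ThreadPoolExecutor(max_workers=workers) as pool:
                while ts.is_active():
                    ready = list(ts.get_ready())
                    # run all currently runnable tasks concurrently (raw signals are no-ops)
                    list(pool.map(lambda name: task_funcs.get(name, lambda: None)(), ready))
                    ts.done(*ready)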

  6. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    International Nuclear Information System (INIS)

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is light weight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. Database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisitions systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks

  7. Knowledge Management Based on Big Data Processing

    Directory of Open Access Journals (Sweden)

    Li Baoan

    2014-01-01

    At present, many large enterprises, such as those in the oil industry, have accumulated over the years large amounts of data with a range of potentially valuable knowledge from their value-chain activities. How to help them turn these data into wealth is a common problem faced by the IT industry and academia. This study analyzed five key problems of big data processing and knowledge management in depth and then explained the composition and technical characteristics of a knowledge management system based on big data processing. It explored a new approach to knowledge management that can adapt to the ever-changing demands of enterprises.

  8. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    Science.gov (United States)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

    Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape that depends on the depth and material of the object and the surrounding material. To obtain the parameters, the shape of the hyperbola has to be fitted. In recent years several methods were developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas we apply a simple Hough transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough transform, the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems is necessary as input for the learning algorithm.
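
    With a cascade trained on hyperbola patches (OpenCV ships the training tools), the detection stage reduces to a few calls. The sketch below is a schematic use of the OpenCV API under the assumption that a trained cascade file hyperbola_cascade.xml exists; it is not the authors' program.

        import cv2

        def detect_hyperbola_regions(radargram_path, cascade_path="hyperbola_cascade.xml"):
            """Return candidate bounding boxes (x, y, w, h) of reflection hyperbolas."""
            cascade = cv2.CascadeClassifier(cascade_path)
            gray = cv2.imread(radargram_path, cv2.IMREAD_GRAYSCALE)
            # detectMultiScale slides the trained Viola-Jones cascade over the image
            boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
            return boxes   # each box can then be refined with a hyperbola Hough transform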

  9. Automatic spline-smoothing approach applied to denoise Moroccan resistivity data phosphate deposit “disturbances” map

    Directory of Open Access Journals (Sweden)

    Saad Bakkali

    2010-04-01

    This paper focuses on presenting a method which is able to filter out noise and suppress outliers of sampled real functions under fairly general conditions. The automatic optimal spline-smoothing approach (AOSSA) automatically determines how a cubic spline should be adjusted in a least-squares optimal sense from an a priori selection of the number of points defining the adjusting spline, but not their location on that curve. The method is fast and easily allows for selecting several knots, thereby adding desirable flexibility to the procedure. As an illustration, we apply the AOSSA method to a Moroccan phosphate deposit resistivity “disturbances” map. The AOSSA smoothing method is an efficient tool for interpreting geophysical potential field data and is particularly suitable for denoising, filtering and analysing resistivity data singularities. The AOSSA smoothing and filtering approach was found to be consistently useful when applied to modeling surface phosphate “disturbances”.
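
    The underlying operation, fitting a least-squares cubic spline whose number of knots is fixed in advance, can be approximated with SciPy. The sketch below uses equally spaced interior knots as a simplification and is not the AOSSA knot-selection rule itself.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        def smooth_profile(x, y, n_knots=12):
            """Least-squares cubic smoothing spline with `n_knots` equally spaced
            interior knots; returns the smoothed values at the sample positions."""
            order = np.argsort(x)
            xs, ys = np.asarray(x)[order], np.asarray(y)[order]
            # interior knots must lie strictly inside (xs[0], xs[-1])
            t = np.linspace(xs[0], xs[-1], n_knots + 2)[1:-1]
            spline = LSQUnivariateSpline(xs, ys, t, k=3)
            return spline(xs)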

  10. US Air Force Data Processing Manuals

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data Processing Reference manual for United States Air Force surface stations, circa 1960s. TDF-13 stands for Tape Deck Format number 13, the format in which the...

  11. Lobster Processing and Sales Trip Report Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a federally mandated log which is required to be mailed in to NMFS after a fishing trip. This data set includes lobster processing and sales information...

  12. A practical data processing workflow for multi-OMICS projects.

    Science.gov (United States)

    Kohl, Michael; Megger, Dominik A; Trippler, Martin; Meckel, Hagen; Ahrens, Maike; Bracht, Thilo; Weber, Frank; Hoffmann, Andreas-Claudius; Baba, Hideo A; Sitek, Barbara; Schlaak, Jörg F; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin

    2014-01-01

    Multi-OMICS approaches aim at the integration of quantitative data obtained for different biological molecules in order to understand their interrelation and the functioning of larger systems. This paper deals with several data integration and data processing issues that frequently occur within this context. To this end, the data processing workflow within the PROFILE project is presented, a multi-OMICS project that aims at the identification of novel biomarkers and the development of new therapeutic targets for seven important liver diseases. Furthermore, a software tool called CrossPlatformCommander is sketched, which facilitates several steps of the proposed workflow in a semi-automatic manner. Application of the software is presented for the detection of novel biomarkers, their ranking and their annotation with existing knowledge, using the example of corresponding Transcriptomics and Proteomics data sets obtained from patients suffering from hepatocellular carcinoma. Additionally, a linear regression analysis of Transcriptomics vs. Proteomics data is presented and its performance assessed. It was shown that, for capturing profound relations between Transcriptomics and Proteomics data, a simple linear regression analysis is not sufficient, and implementation and evaluation of alternative statistical approaches are needed. Additionally, the integration of multivariate variable selection and classification approaches is intended for further development of the software. Although this paper focuses only on the combination of data obtained from quantitative Proteomics and Transcriptomics experiments, several approaches and data integration steps are also applicable to other OMICS technologies. Keeping specific restrictions in mind, the suggested workflow (or at least parts of it) may be used as a template for similar projects that make use of different high-throughput techniques. This article is part of a Special Issue entitled: Computational Proteomics in the Post...
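
    The per-gene Transcriptomics-vs-Proteomics regression mentioned above can be expressed compactly; the sketch below is a generic illustration (not CrossPlatformCommander) that fits an ordinary least-squares line for each matched transcript/protein pair and reports slope and correlation.

        import numpy as np
        from scipy import stats

        def per_gene_regression(transcript, protein, gene_ids):
            """`transcript` and `protein` are (n_genes, n_samples) arrays with rows
            matched via `gene_ids`. Returns per-gene slope, intercept, Pearson r and p."""
            results = {}
            for i, gene in enumerate(gene_ids):
                slope, intercept, r, p, stderr = stats.linregress(transcript[i], protein[i])
                results[gene] = {"slope": slope, "intercept": intercept, "r": r, "p": p}
            return results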

  13. MiDas: automatic extraction of a common domain of discourse in sleep medicine for multi-center data integration.

    Science.gov (United States)

    Sahoo, Satya S; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S; Zhang, Guo-Qiang

    2011-01-01

    Clinical studies often use data dictionaries with controlled sets of terms to facilitate data collection, limited interoperability and sharing at a local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper level ontology that models a common set of terms at multiple-levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine, from Physio-MIMI's Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart and Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry. PMID:22195180

  14. Modeling Earthen Dike Stability: Sensitivity Analysis and Automatic Calibration of Diffusivities Based on Live Sensor Data

    CERN Document Server

    Melnikova, N B; Sloot, P M A

    2012-01-01

    The paper describes concept and implementation details of integrating a finite element module for dike stability analysis Virtual Dike into an early warning system for flood protection. The module operates in real-time mode and includes fluid and structural sub-models for simulation of porous flow through the dike and for dike stability analysis. Real-time measurements obtained from pore pressure sensors are fed into the simulation module, to be compared with simulated pore pressure dynamics. Implementation of the module has been performed for a real-world test case - an earthen levee protecting a sea-port in Groningen, the Netherlands. Sensitivity analysis and calibration of diffusivities have been performed for tidal fluctuations. An algorithm for automatic diffusivities calibration for a heterogeneous dike is proposed and studied. Analytical solutions describing tidal propagation in one-dimensional saturated aquifer are employed in the algorithm to generate initial estimates of diffusivities.
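
    The analytical solution referred to here is presumably the classical damped-wave solution of the one-dimensional diffusion equation with a sinusoidal boundary, h(x,t) = A exp(-kx) sin(wt - kx) with k = sqrt(w / 2D); the sketch below (an assumed form, not the Virtual Dike code) evaluates it and inverts the amplitude decay for an initial diffusivity estimate.

        import numpy as np

        def tidal_head(x, t, amplitude, omega, diffusivity):
            """Head fluctuation at distance x into a semi-infinite 1D aquifer whose
            boundary oscillates as amplitude*sin(omega*t); diffusivity is D."""
            k = np.sqrt(omega / (2.0 * diffusivity))
            return amplitude * np.exp(-k * x) * np.sin(omega * t - k * x)

        def diffusivity_from_damping(x, amp_boundary, amp_sensor, omega):
            """Initial estimate of D from the observed amplitude decay
            amp_sensor = amp_boundary * exp(-k * x)."""
            k = np.log(amp_boundary / amp_sensor) / x
            return omega / (2.0 * k ** 2)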

  15. Utilizing Linked Open Data Sources for Automatic Generation of Semantic Metadata

    Science.gov (United States)

    Nummiaho, Antti; Vainikainen, Sari; Melin, Magnus

    In this paper we present an application that can be used to automatically generate semantic metadata for tags given as simple keywords. The application that we have implemented in Java programming language creates the semantic metadata by linking the tags to concepts in different semantic knowledge bases (CrunchBase, DBpedia, Freebase, KOKO, Opencyc, Umbel and/or WordNet). The steps that our application takes in doing so include detecting possible languages, finding spelling suggestions and finding meanings from amongst the proper nouns and common nouns separately. Currently, our application supports English, Finnish and Swedish words, but other languages could be included easily if the required lexical tools (spellcheckers, etc.) are available. The created semantic metadata can be of great use in, e.g., finding and combining similar contents, creating recommendations and targeting advertisements.
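
    One of the linking steps, looking a tag up in a lexical resource to enumerate its candidate meanings, can be sketched with NLTK's WordNet interface; this is a generic illustration, while the application described above also queries DBpedia, Freebase and the other listed knowledge bases.

        from nltk.corpus import wordnet   # requires a one-time nltk.download("wordnet")

        def tag_meanings(tag):
            """Return candidate senses for a keyword tag as (name, definition) pairs."""
            return [(s.name(), s.definition()) for s in wordnet.synsets(tag)]

        # Example: tag_meanings("jaguar") lists both the animal and the car senses,
        # which downstream logic could disambiguate using co-occurring tags.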

  16. Automatic landmark extraction from image data using modified growing neural gas network.

    Science.gov (United States)

    Fatemizadeh, Emad; Lucas, Caro; Soltanian-Zadeh, Hamid

    2003-06-01

    A new method for automatic landmark extraction from MR brain images is presented. In this method, landmark extraction is accomplished by modifying growing neural gas (GNG), a neural-network-based cluster-seeking algorithm. Using the modified GNG (MGNG), corresponding dominant points of contours extracted from two corresponding images are found. These contours are the borders of segmented anatomical regions from the brain images. The presented method is compared to: 1) the node splitting-merging Kohonen model and 2) the Teh-Chin algorithm (a well-known approach for dominant point extraction from ordered curves). It is shown that the proposed algorithm has a lower distortion error, can extract landmarks from two corresponding curves simultaneously, and generates the best match according to five medical experts. PMID:12834162

  17. Modular Toolkit for Data Processing (MDP): A Python Data Processing Framework

    OpenAIRE

    Zito, Tiziano; Wilbert, Niko; Wiskott, Laurenz; Berkes, Pietro

    2009-01-01

    Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory requirements. From the scientific developer's perspective, MDP is a modular framework, which can eas...
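
    A minimal MDP flow might look like the following; this is a small usage sketch based on the publicly documented MDP API, with made-up data standing in for real measurements.

        import numpy as np
        import mdp

        # 1000 samples of 20-dimensional data (placeholder for real measurements)
        x = np.random.random((1000, 20))

        # Chain a PCA node and a Slow Feature Analysis node into one flow
        flow = mdp.Flow([mdp.nodes.PCANode(output_dim=5), mdp.nodes.SFANode(output_dim=2)])
        flow.train(x)   # each node in the flow is trained in sequence
        y = flow(x)     # execute the whole processing sequence on the data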

  18. Automatic Building Process of Self-Closed Modified N-tree

    Directory of Open Access Journals (Sweden)

    Yu Li

    2013-07-01

    Some features of prevailing workflow models, such as Petri nets and Grid workflows, prevent them from adapting to dynamic operation. We therefore propose a modified N-tree model to control a workflow. The modified N-tree model can remedy some of the problems that exist in these prevailing workflow models. First, we prove that the proposed modified N-tree model is self-closed. This feature ensures that the workflow can still accomplish its tasks when nodes of a well-running modified N-tree workflow are changed before or during its execution, and it is the prerequisite for the dynamic characteristics of the modified N-tree model. We then give a method to change the tree dynamically based on the self-closed property. Finally, based on the dynamic characteristics of this model, we give a method to build the N-tree workflow model automatically using the LR analysis method proposed by D. Knuth.

  19. A fully automatic tool to perform accurate flood mapping by merging remote sensing imagery and ancillary data

    Science.gov (United States)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido

    2016-04-01

    Flooding is one of the most frequent and expansive natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events and, from the practical point of view, for the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, which moreover offer a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide an accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each with a particular signature in the presence of flooding, requires modelling the behavior of the different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information, or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. It supplies flood maps...
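
    The flavour of BN-based fusion can be conveyed with a drastically simplified per-pixel Bayes update (hypothetical likelihood curves, not the authors' network): SAR backscatter and terrain elevation each contribute a likelihood, and the posterior flood probability is their normalised product with a prior, assuming conditional independence.

        import numpy as np

        def flood_posterior(sar_db, elevation_m, prior=0.1):
            """Per-pixel posterior probability of 'flooded' from two evidence layers.

            Likelihoods are illustrative logistic curves: low backscatter (smooth
            open water looks dark to SAR) and low elevation both favour flooding."""
            p_sar_flood = 1.0 / (1.0 + np.exp((sar_db + 15.0) / 2.0))
            p_elev_flood = 1.0 / (1.0 + np.exp((elevation_m - 5.0) / 2.0))
            num = prior * p_sar_flood * p_elev_flood
            den = num + (1.0 - prior) * (1.0 - p_sar_flood) * (1.0 - p_elev_flood)
            return num / den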

  20. 12 CFR 7.5006 - Data processing.

    Science.gov (United States)

    2010-01-01

    12 CFR Banks and Banking (2010-01-01): Comptroller of the Currency, Department of the Treasury, Bank Activities and Operations, Electronic Activities. § 7.5006 Data processing. (a) Eligible activities. It is part of the business of banking under 12 U.S.C. 24(Seventh) for a...