WorldWideScience

Sample records for automatic data processing

  1. Guidelines for Automatic Data Processing Physical Security and Risk Management. Federal Information Processing Standards Publication 31.

    Science.gov (United States)

    National Bureau of Standards (DOC), Washington, DC.

    These guidelines provide a handbook for use by federal organizations in structuring physical security and risk management programs for their automatic data processing facilities. This publication discusses security analysis, natural disasters, supporting utilities, system reliability, procedural measures and controls, off-site facilities,…

  2. A Handbook for Automatic Data Processing Equipment Acquisition.

    Science.gov (United States)

    1981-12-01

    Data transmission or communications equipment, including front-end processors, terminals, sensors, and other similar devices, designed primarily for use... needs at the lowest overall cost to the Government, price and other... to the extent...

  3. Discrete regulation of transfer function of a circuit in experimental data automatic collection and processing systems

    Science.gov (United States)

    Lyubashevskiy, G. S.; Petrov, A. A.; Sanayev, I. A.; Frishberg, V. E.

    1973-01-01

    A device for discrete control of the circuit transfer function in automatic analog data processing systems is reported that coordinates the dynamic range of the vibration level change with the signal range of the processing device output. Experimental verification of the device demonstrates that its maximum control speed does not exceed 0.5 sec for a frequency nonuniformity of about 10%.

  4. 10 CFR 95.49 - Security of automatic data processing (ADP) systems.

    Science.gov (United States)

    2010-01-01

    Title 10 (Energy), Vol. 2, revised as of January 1, 2010: Security of automatic data processing (ADP) systems. Section 95.49, Energy, NUCLEAR REGULATORY COMMISSION (CONTINUED), FACILITY SECURITY CLEARANCE AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION AND RESTRICTED DATA, Control of Information, § 95.49 Security...

  5. Automatic Data Processing, 4-1. Military Curriculum Materials for Vocational and Technical Education.

    Science.gov (United States)

    Army Ordnance Center and School, Aberdeen Proving Ground, MD.

    These two texts and student workbook for a secondary/postsecondary-level correspondence course in automatic data processing comprise one of a number of military-developed curriculum packages selected for adaptation to vocational instruction and curriculum development in a civilian setting. The purpose stated for the individualized, self-paced…

  6. [Increasing effectiveness of the use of laboratory data in the therapeutic-diagnostic process through automatization].

    Science.gov (United States)

    Makarovskiĭ, V V; Shcherbatkin, D D; Nazarov, G D

    1989-01-01

    The introduction of comprehensive computer-aided mechanization and automation into the laboratory process, and their integration with other automated hospital information systems, significantly raises the efficacy of laboratory data use in treatment and diagnosis, thus reducing the workload of the medical staff. The structure of biochemical investigations for clinical therapeutic and surgical departments is presented, along with the main biochemical diagnostic programmes for some diseases.

  7. preAssemble: a tool for automatic sequencer trace data processing

    Directory of Open Access Journals (Sweden)

    Laerdahl Jon K

    2006-01-01

    Full Text Available Abstract Background Trace or chromatogram files (raw data) are produced by automatic nucleic acid sequencing equipment, or sequencers. Each file contains information which can be interpreted by specialised software to reveal the sequence (base calling). This is done by the sequencer's proprietary software or by publicly available programs. Depending on the size of a sequencing project, the number of trace files can vary from just a few to thousands of files. Sequence quality assessment against various criteria is important at the stage preceding clustering and contig assembly. Two major publicly available packages, Phred and Staden, are used by preAssemble to perform sequence quality processing. Results The preAssemble pre-assembly sequence processing pipeline has been developed for small to large scale automatic processing of DNA sequencer chromatogram (trace) data. The Staden Package Pregap4 module and the base-calling program Phred are utilized in the pipeline, which produces detailed and self-explanatory output that can be displayed with a web browser. preAssemble can be used successfully with very little previous experience; however, options for parameter tuning are provided for advanced users. preAssemble runs under UNIX and LINUX operating systems. It is available for download and runs as stand-alone software. It can also be accessed on the Norwegian Salmon Genome Project web site, where preAssemble jobs can be run on the project server. Conclusion preAssemble is a tool for performing quality assessment of sequences generated by automatic sequencing equipment. preAssemble is flexible, since both interactive jobs on the preAssemble server and a stand-alone downloadable version are available. Virtually no previous experience is necessary to run a default preAssemble job; on the other hand, options for parameter tuning are provided. Consequently preAssemble can be used as efficiently for just several trace files as for large scale sequence

  8. Automatic processing of high-rate, high-density multibeam echosounder data

    Science.gov (United States)

    Calder, B. R.; Mayer, L. A.

    2003-06-01

    Multibeam echosounders (MBES) are currently the best way to determine the bathymetry of large regions of the seabed with high accuracy. They are becoming the standard instrument for hydrographic surveying and are also used in geological studies, mineral exploration and scientific investigation of the earth's crustal deformations and life cycle. The significantly increased data density provided by an MBES has significant advantages in accurately delineating the morphology of the seabed, but comes with the attendant disadvantage of having to handle and process a much greater volume of data. Current data processing approaches typically involve (computer aided) human inspection of all data, with time-consuming and subjective assessment of all data points. As data rates increase with each new generation of instrument and required turn-around times decrease, manual approaches become unwieldy and automatic methods of processing essential. We propose a new method for automatically processing MBES data that attempts to address concerns of efficiency, objectivity, robustness and accuracy. The method attributes each sounding with an estimate of vertical and horizontal error, and then uses a model of information propagation to transfer information about the depth from each sounding to its local neighborhood. Embedded in the survey area are estimation nodes that aim to determine the true depth at an absolutely defined location, along with its associated uncertainty. As soon as soundings are made available, the nodes independently assimilate propagated information to form depth hypotheses which are then tracked and updated on-line as more data is gathered. Consequently, we can extract at any time a "current-best" estimate for all nodes, plus co-located uncertainties and other metrics. The method can assimilate data from multiple surveys, multiple instruments or repeated passes of the same instrument in real-time as data is being gathered. The data assimilation scheme is
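
    The information-propagation idea can be sketched compactly: each sounding contributes a depth estimate to a nearby node, weighted by an uncertainty that grows with horizontal distance from the node, and the node combines contributions by inverse-variance weighting. The sketch below is a minimal illustration of that weighting step only, assuming an invented propagation model and hypothetical field names; it is not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of uncertainty-weighted depth estimation at a single
# grid node, in the spirit of the information-propagation approach described
# above (not the authors' implementation). Each sounding carries a depth and
# vertical/horizontal error estimates; its contribution to a node is weighted
# by a variance that grows with distance from the node (assumed model).

def node_depth_estimate(node_xy, soundings, scale=1.0):
    """soundings: array with rows (x, y, depth, v_err, h_err)."""
    node = np.asarray(node_xy, dtype=float)
    xy, depth = soundings[:, :2], soundings[:, 2]
    v_err, h_err = soundings[:, 3], soundings[:, 4]
    dist = np.linalg.norm(xy - node, axis=1)
    # Propagated variance: vertical variance inflated with horizontal distance
    # relative to the sounding's horizontal error (purely illustrative).
    var = v_err**2 + scale * (dist / np.maximum(h_err, 1e-6))**2
    w = 1.0 / var
    est = np.sum(w * depth) / np.sum(w)      # inverse-variance weighted depth
    est_var = 1.0 / np.sum(w)                # uncertainty of the combined estimate
    return est, np.sqrt(est_var)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_depth = 52.0
    s = np.column_stack([
        rng.uniform(0, 10, 200), rng.uniform(0, 10, 200),
        true_depth + rng.normal(0, 0.3, 200),
        np.full(200, 0.3), np.full(200, 2.0),
    ])
    d, u = node_depth_estimate((5.0, 5.0), s)
    print("depth ~ %.2f m +/- %.2f m" % (d, u))
```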

  9. Automatic Scaling Hadoop in the Cloud for Efficient Process of Big Geospatial Data

    Directory of Open Access Journals (Sweden)

    Zhenlong Li

    2016-09-01

    Full Text Available Efficient processing of big geospatial data is crucial for tackling global and regional challenges such as climate change and natural disasters, but it is challenging not only due to the massive data volume but also due to the intrinsic complexity and high dimensions of the geospatial datasets. While traditional computing infrastructure does not scale well with the rapidly increasing data volume, Hadoop has attracted increasing attention in geoscience communities for handling big geospatial data. Recently, many studies were carried out to investigate adopting Hadoop for processing big geospatial data, but how to adjust the computing resources to efficiently handle the dynamic geoprocessing workload was barely explored. To bridge this gap, we propose a novel framework to automatically scale the Hadoop cluster in the cloud environment to allocate the right amount of computing resources based on the dynamic geoprocessing workload. The framework and auto-scaling algorithms are introduced, and a prototype system was developed to demonstrate the feasibility and efficiency of the proposed scaling mechanism using Digital Elevation Model (DEM) interpolation as an example. Experimental results show that this auto-scaling framework could (1) significantly reduce the computing resource utilization (by 80% in our example) while delivering similar performance as a full-powered cluster; and (2) effectively handle the spike processing workload by automatically increasing the computing resources to ensure the processing is finished within an acceptable time. Such an auto-scaling approach provides a valuable reference to optimize the performance of geospatial applications to address data- and computational-intensity challenges in GIScience in a more cost-efficient manner.
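
    The scaling decision itself can be illustrated with a small threshold rule: grow the cluster when pending work outstrips capacity, shrink it when utilization stays low. The sketch below is a hypothetical illustration of such a rule; the function name, thresholds, and node-count logic are invented and do not reproduce the paper's auto-scaling algorithms.

```python
# A minimal, hypothetical sketch of a threshold-based auto-scaling decision.
# The monitor samples pending work and cluster load and returns how many
# worker nodes to add (positive) or remove (negative). All names and
# thresholds are illustrative, not the paper's algorithm.

def scaling_decision(pending_tasks, busy_nodes, total_nodes,
                     scale_up_ratio=2.0, scale_down_util=0.3,
                     min_nodes=2, max_nodes=32):
    utilization = busy_nodes / max(total_nodes, 1)
    # Spike in workload: more pending tasks than the cluster can absorb soon.
    if pending_tasks > scale_up_ratio * total_nodes and total_nodes < max_nodes:
        return min(max_nodes - total_nodes, pending_tasks // int(scale_up_ratio))
    # Sustained idling: shrink the cluster to cut resource consumption.
    if utilization < scale_down_util and total_nodes > min_nodes:
        return -(total_nodes - max(min_nodes, busy_nodes))
    return 0

print(scaling_decision(pending_tasks=50, busy_nodes=8, total_nodes=8))   # grow
print(scaling_decision(pending_tasks=0, busy_nodes=1, total_nodes=8))    # shrink
```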

  10. Automatic procedure for quasi-real time seismic data processing at Campi Flegrei (Italy)

    Science.gov (United States)

    Capuano, Paolo; Ciaramella, Angelo; De Lauro, Enza; De Martino, Salvatore; Falanga, Mariarosaria; Petrosino, Simona

    2014-05-01

    The accuracy of automatic procedures for detecting seismic events and locating their sources is influenced by several factors, such as errors in picking seismic phases that are often buried in high-level ambient noise, network geometry, and modelling errors. A fundamental objective is the improvement of these procedures by developing accurate algorithms for quasi-real time seismic data processing that are easily managed in observatory practice. Recently a robust automatic procedure has been implemented for detecting, onset picking, and identifying signal phases in the continuous seismic signal, with an application to the seismicity recorded at Campi Flegrei Caldera (Italy) during the 2006 ground uplift (Ciaramella et al. 2011). An Independent Component Analysis based approach for the Blind Source Separation of convolutive mixtures (CICA) has been adopted to obtain a clear separation of low-energy Long Period events (LPs) from the high-level ambient noise, allowing a complete seismic catalogue to be compiled and the seismic energy release to be better quantified. In this work, we apply CICA to the seismic signal continuously recorded throughout 2006 at Campi Flegrei. First, we performed tests on synthetic data in order to improve the reliability and accuracy of the procedure. The performance test using very noisy synthetic data shows that the method works even in the case of very poor quality data characterized by a very low signal-to-noise ratio (SNR). Second, we improved the CICA automatic procedure by recovering information on the amplitudes of the extracted independent components. This is crucial for further analysis, starting from a prompt estimate of the magnitude/energy of the highlighted events. Data used for the present analysis were collected by four broadband three-component seismic stations (ASB2, AMS2, TAGG, BGNG) belonging to the Campi Flegrei seismic monitoring network, managed by the 'Istituto Nazionale di Geofisica e Vulcanologia-Osservatorio Vesuviano (INGV-OV)' (see for
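
    The separation idea can be illustrated on synthetic traces. The paper's CICA handles convolutive mixtures; the sketch below instead uses scikit-learn's instantaneous-mixture FastICA purely to show how independent components are recovered from mixed recordings, with invented signals standing in for an LP event and ambient noise.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Simplified illustration of ICA-based source separation on synthetic traces.
# This is an instantaneous-mixture stand-in for the convolutive-mixture CICA
# described above, intended only to show the separation idea.

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 4000)
lp_event = np.exp(-((t - 10) ** 2)) * np.sin(2 * np.pi * 2.0 * t)  # low-frequency "LP" wavelet
noise = 0.8 * rng.standard_normal(t.size)                          # high-level ambient noise
sources = np.column_stack([lp_event, noise])

mixing = np.array([[1.0, 0.7],
                   [0.5, 1.0]])
observed = sources @ mixing.T            # two "stations" recording mixed signals

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)  # columns approximate the sources (up to scale/sign)
print(recovered.shape)                   # (4000, 2)
```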

  11. Automatic layout of ventilation systems by means of electronic data processing

    Energy Technology Data Exchange (ETDEWEB)

    Altena, H.; Priess, H.; Fries, E.; Hoffmann, G.

    1982-12-09

    A working group developed a method for the automatic layout of ventilation systems by means of electronic data processing. The purpose was to increase the information content of the ventilation plan and to obtain a useful tool for ventilation planning while reducing the effort required to elaborate ventilation plans. A program system was developed by means of which ventilation plans can be plotted in consideration of the regulations set by the mining authorities. The program system was applied for the first time at Osterfeld mine. The resulting plan is clearly organized, accurate, and easy to understand. This positive experience suggests that computer-aided plans should be more widely applied. The mining authorities support this view.

  12. Forest point processes for the automatic extraction of networks in raster data

    Science.gov (United States)

    Schmidt, Alena; Lafarge, Florent; Brenner, Claus; Rottensteiner, Franz; Heipke, Christian

    2017-04-01

    In this paper, we propose a new stochastic approach for the automatic detection of network structures in raster data. We represent a network as a set of trees, i.e., acyclic planar graphs. We embed this model in the probabilistic framework of spatial point processes and determine the most probable configuration of trees by stochastic sampling. That is, different configurations are constructed randomly by modifying the graph parameters and by adding or removing nodes and edges to/from the current trees. Each configuration is evaluated based on the probabilities for these changes and an energy function describing the conformity with a predefined model. By using the reversible jump Markov chain Monte Carlo (RJMCMC) sampler, an approximation of the global optimum of the energy function is iteratively reached. Although our main target application is the extraction of rivers and tidal channels in digital terrain models, experiments with other types of networks in images show the transferability to further applications. Qualitative and quantitative evaluations demonstrate the competitiveness of our approach with respect to existing algorithms.
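
    The core sampling step can be sketched as a Metropolis-style accept/reject rule on an energy function. The sketch below is a simplified illustration under assumed names (the energy, the proposal function, and the configuration fields are all invented); a full reversible jump sampler additionally needs dimension-matching proposal ratios for the add/remove moves, which are omitted here.

```python
import math
import random

# Illustrative accept/reject step for sampling network configurations scored
# by an energy, in the spirit of the stochastic sampling described above.
# A symmetric proposal is assumed for simplicity; not the paper's full RJMCMC.

def energy(config):
    # Placeholder energy: penalize the number of edges, reward data fit.
    return 0.5 * len(config["edges"]) - config["data_fit"]

def mcmc_step(config, propose, temperature=1.0):
    candidate = propose(config)
    delta = energy(candidate) - energy(config)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return candidate          # accept the modified configuration
    return config                 # reject: keep the current configuration

cfg = {"edges": [("a", "b"), ("b", "c")], "data_fit": 1.2}

def propose(c):
    # Toy proposal: randomly drop or add one edge (placeholder for the
    # paper's add/remove/modify moves on nodes and edges).
    edges = list(c["edges"])
    if edges and random.random() < 0.5:
        edges.pop(random.randrange(len(edges)))
    else:
        edges.append(("new", str(random.random())))
    return {"edges": edges, "data_fit": c["data_fit"] + random.uniform(-0.1, 0.1)}

for _ in range(100):
    cfg = mcmc_step(cfg, propose)
print(len(cfg["edges"]), round(cfg["data_fit"], 3))
```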

  13. Exploring Automatization Processes.

    Science.gov (United States)

    DeKeyser, Robert M.

    1996-01-01

    Presents the rationale for and the results of a pilot study attempting to document in detail how automatization takes place as the result of different kinds of intensive practice. Results show that reaction times and error rates gradually decline with practice, and the practice effect is skill-specific. (36 references) (CK)

  14. Automatic data processing and analysis system for monitoring region around a planned nuclear power plant

    Science.gov (United States)

    Kortström, Jari; Tiira, Timo; Kaisko, Outi

    2016-03-01

    The Institute of Seismology of the University of Helsinki is building a new local seismic network, called the OBF network, around a planned nuclear power plant in Northern Ostrobothnia, Finland. The network will consist of nine new stations and one existing station. The network should be dense enough to provide azimuthal coverage better than 180° and automatic detection capability down to ML -0.1 within a radius of 25 km from the site. The network construction work began in 2012 and the first four stations started operation at the end of May 2013. We applied an automatic seismic signal detection and event location system to a network of 13 stations consisting of the four new stations and the nearest stations of the Finnish and Swedish national seismic networks. Between the end of May and December 2013 the network detected 214 events inside the predefined area of 50 km radius surrounding the planned nuclear power plant site. Of those detections, 120 were identified as spurious events. A total of 74 events were associated with known quarries and mining areas. The average location error, calculated as the difference between the locations announced by environmental authorities and companies and the automatic locations, was 2.9 km. During the same time period eight earthquakes in the magnitude range 0.1-1.0 occurred within the area. Of these, seven could be automatically detected. The results from the phase 1 stations of the OBF network indicate that the planned network can achieve its goals.

  15. Process automatization in system administration

    OpenAIRE

    Petauer, Janja

    2013-01-01

    The aim of the thesis is to present the automation of user management in the company Studio Moderna. The company has grown exponentially in recent years, which is why we needed to find a faster, easier and cheaper way of managing user accounts. We automated the processes of creating, changing and removing user accounts within Active Directory. We prepared a user interface inside an existing application, used JavaScript for drop-down menus, wrote a script in a scripting programming langu...

  16. Automatic segmentation of blood vessels from retinal fundus images through image processing and data mining techniques

    Indian Academy of Sciences (India)

    R Geetharamani; Lakshmi Balasubramanian

    2015-09-01

    Machine Learning techniques have been useful in almost every field of concern. Data Mining, a branch of Machine Learning, is one of the most extensively used techniques. The ever-increasing demands in the field of medicine are being addressed by computational approaches in which Big Data analysis, image processing and data mining are top priorities. These techniques have been exploited in the domain of ophthalmology for better retinal fundus image analysis. Blood vessels, one of the most significant retinal anatomical structures, are analysed for diagnosis of many diseases like retinopathy, occlusion and many other vision-threatening diseases. Vessel segmentation can also be a pre-processing step for segmentation of other retinal structures like the optic disc, fovea, microaneurysms, etc. In this paper, blood vessel segmentation is attempted through image processing and data mining techniques. The retinal blood vessels were segmented through color space conversion and color channel extraction, image pre-processing, Gabor filtering, image post-processing, feature construction through application of principal component analysis, k-means clustering, first-level classification using the Naïve Bayes classification algorithm, and second-level classification using C4.5 enhanced with bagging techniques. Association of every pixel against the feature vector necessitates Big Data analysis. The proposed methodology was evaluated on a publicly available database, STARE. The results reported 95.05% accuracy on the entire dataset; the accuracy was 95.20% on normal images and 94.89% on pathological images. A comparison of these results with existing methodologies is also reported. This methodology can help ophthalmologists in better and faster analysis and hence earlier treatment of patients.
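
    The two-level classification chain can be sketched with off-the-shelf components. The sketch below uses scikit-learn stand-ins (GaussianNB and a bagged decision tree in place of bagged C4.5) on random placeholder features instead of Gabor-filter responses; it only illustrates the PCA, first-level, and second-level stages, not the paper's full per-pixel pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Compressed sketch of a two-level classification chain. The per-pixel
# feature vectors here are random placeholders for Gabor-filter responses,
# and the labels are synthetic; illustrative only.

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))                   # per-pixel feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # synthetic vessel / background labels

X_pca = PCA(n_components=5).fit_transform(X)      # feature construction via PCA

level1 = GaussianNB().fit(X_pca, y)               # first-level classifier
p1 = level1.predict_proba(X_pca)                  # its posterior becomes an extra feature

level2 = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0)
level2.fit(np.hstack([X_pca, p1]), y)             # second-level, bagging-enhanced tree

print("training accuracy:", level2.score(np.hstack([X_pca, p1]), y))
```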

  17. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2011-09-01

    Full Text Available Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves generalization performance of SVDD. The improvement has shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.

  18. Automatic defect detection for TFT-LCD array process using quasiconformal kernel support vector data description.

    Science.gov (United States)

    Liu, Yi-Hung; Chen, Yan-Jen

    2011-01-01

    Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD) to address this issue. The QK-SVDD can significantly improve generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves generalization performance of SVDD. The improvement has shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.
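
    The one-class training step behind both of the records above can be illustrated with a related method. The sketch below uses scikit-learn's OneClassSVM as a stand-in for SVDD (with an RBF kernel the two formulations are closely related) and synthetic feature vectors in place of LCD image descriptors; the quasiconformal kernel transformation of QK-SVDD is not reproduced.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# One-class training for defect detection, sketched with OneClassSVM as a
# stand-in for SVDD. Features are synthetic placeholders for image descriptors.

rng = np.random.default_rng(0)
normal_panels = rng.normal(0.0, 1.0, size=(500, 16))          # defect-free training images
test_panels = np.vstack([rng.normal(0.0, 1.0, size=(20, 16)),
                         rng.normal(4.0, 1.0, size=(5, 16))])  # last 5 simulate defects

detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal_panels)
labels = detector.predict(test_panels)    # +1 = accepted as normal, -1 = flagged defect
print(labels)
```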

  19. Automatic mapping of monitoring data

    DEFF Research Database (Denmark)

    Lophaven, Søren; Nielsen, Hans Bruun; Søndergaard, Jacob

    2005-01-01

    This paper presents an approach, based on universal kriging, for automatic mapping of monitoring data. The performance of the mapping approach is tested on two data-sets containing daily mean gamma dose rates in Germany reported by means of the national automatic monitoring network (IMIS......). In the second dataset an accidental release of radioactivity in the environment was simulated in the South-Western corner of the monitored area. The approach has a tendency to smooth the actual data values, and therefore it underestimates extreme values, as seen in the second dataset. However, it is capable...
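
    The interpolation step behind such mapping can be sketched with a small ordinary-kriging example. The code below assumes an exponential variogram with invented parameters and a handful of made-up observations; it is a simplification of the universal kriging used in the paper, intended only to show how the kriging system is assembled and solved.

```python
import numpy as np

# Compact ordinary-kriging sketch with an assumed exponential variogram.
# Variogram parameters and observations are invented, not fitted to IMIS data.

def variogram(h, sill=1.0, corr_range=50.0, nugget=0.05):
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / corr_range))

def ordinary_krige(xy_obs, z_obs, xy_new):
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    gamma = variogram(d_obs)
    np.fill_diagonal(gamma, 0.0)            # gamma(0) = 0: exact interpolation at data points
    # Kriging system: variogram matrix bordered by the unbiasedness constraint.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0
    b = np.append(variogram(np.linalg.norm(xy_obs - xy_new, axis=1)), 1.0)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z_obs)             # kriged estimate at xy_new

xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dose = np.array([0.09, 0.11, 0.10, 0.25])   # e.g. gamma dose rates, arbitrary units
print(ordinary_krige(xy, dose, np.array([5.0, 5.0])))
```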

  20. The Role of Feature Enhanced Processing for Automatic Target Recognition using High Resolution Polarimetric SAR Data

    NARCIS (Netherlands)

    Broek, A.C. van den; Steeghs, T.P.H.; Dekker, R.J.

    2005-01-01

    We have studied the effect of feature-enhanced processing on the discrimination of targets in high-resolution polarimetric ISAR and SAR images. This is done by comparing feature-based classification results for original images and images which have been pre-processed to enhance target features. The d

  1. Overview of the SOFIA Data Processing System: A generalized system for manual and automatic data processing at the SOFIA Science Center

    CERN Document Server

    Shuping, R Y; Vacca, W D; Charcos-Llorens, M; Reach, W T; Alles, R; Clarke, M; Melchiorri, R; Radomski, J; Shenoy, S; Sandel, D; Omelian, E B

    2014-01-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) is an airborne astronomical observatory comprised of a 2.5-meter telescope mounted in the aft section of a Boeing 747SP aircraft. During routine operations, several instruments will be available to the astronomical community including cameras and spectrographs in the near- to far-IR. Raw data obtained in-flight require a significant amount of processing to correct for background emission (from both the telescope and atmosphere), remove instrumental artifacts, correct for atmospheric absorption, and apply both wavelength and flux calibration. In general, this processing is highly specific to the instrument and telescope. In order to maximize the scientific output of the observatory, the SOFIA Science Center must provide these post-processed data sets to Guest Investigators in a timely manner. To meet this requirement, we have designed and built the SOFIA Data Processing System (DPS): an in-house set of tools and services that can be used in both auto...

  2. Automatic gray scale correction of video data

    Science.gov (United States)

    Chochia, Pavel A.

    1995-01-01

    Automatic gray scale correction of captured video data (both still and moving images) is one of the least researched questions in the image processing area, yet the question is touched on in almost every book concerned with image processing. Classically it is related to image enhancement, and it is frequently classified among histogram modification techniques. Traditionally used algorithms, based on analysis of the image histogram, are not able to solve the problem properly. The difficulty lies in the absence of a formal quantitative estimate of image quality; until now the most often used criteria have been human visual perception and experience. Hence, the problem of finding measurable properties of real images, which might be the basis for automatically building a gray scale correction function (sometimes also identified as a gamma-correction function), is still unsolved. In this paper we try to discern some common properties of real images that could help us to evaluate gray scale image distortion and, finally, to construct an appropriate correction function to enhance an image. Such a method could be used for automatic image processing procedures, like enhancing medical images, reproducing pictures in the publishing industry, correcting remote sensing images, preprocessing captured data in the computer vision area, and many other applications. The question of the complexity of the analysis procedure becomes important when the algorithm is realized in real time (for example in video input devices, like video cameras).
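
    One simple way to build such a correction function is to choose a gamma exponent from a global image statistic. The sketch below maps the mean intensity to mid-gray; this criterion is an assumed illustration, not the author's proposed measure of gray scale distortion.

```python
import numpy as np

# Minimal sketch of deriving a gray-scale (gamma) correction function from
# image statistics. The criterion used here (mean intensity -> mid-gray) is
# an assumed illustration only.

def estimate_gamma(img):
    """img: float array scaled to [0, 1]."""
    mean = np.clip(img.mean(), 1e-3, 1 - 1e-3)
    # Choose gamma so that mean**gamma == 0.5, i.e. the average pixel maps to mid-gray.
    return np.log(0.5) / np.log(mean)

def gamma_correct(img):
    g = estimate_gamma(img)
    return np.clip(img, 0.0, 1.0) ** g

dark = np.random.default_rng(0).beta(2.0, 8.0, size=(64, 64))   # synthetic under-exposed image
print(dark.mean(), gamma_correct(dark).mean())                  # mean pulled toward 0.5
```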

  3. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to get an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage for the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated, a methodology was developed to automate each production phase (hydrological terrain model generation with 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network)), and finally the production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, and the infrastructure to store (up to 40 Tb between results and intermediate files) and process the data, using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months. Also important were the software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  4. Automatic data processing method based on PL/SQL

    Institute of Scientific and Technical Information of China (English)

    孔明华

    2013-01-01

    Based on oilfield development data management and application work, this paper proposes a data processing method in which PL/SQL stored procedures are executed automatically by Oracle scheduled (timed) tasks. Summarizing the various key techniques applied in this work, the application of this automatic processing method is introduced in detail with examples. The method can also be extended to other kinds of data processing work that use Oracle as the data management server, as a reference for improving work efficiency and increasing the speed of data processing.

  5. A novel GIS-based tool for estimating present-day ocean reference depth using automatically processed gridded bathymetry data

    Science.gov (United States)

    Jurecka, Mirosława; Niedzielski, Tomasz; Migoń, Piotr

    2016-05-01

    This paper presents a new method for computing the present-day value of the reference depth (dr), which is essential input information for the assessment of past sea-level changes. The method applies a novel automatic geoprocessing tool developed using Python script and ArcGIS, and uses recent data about ocean floor depth, sediment thickness, and age of oceanic crust. The procedure is multi-step and involves creation of a bathymetric dataset corrected for sediment loading and isostasy, delineation of subduction zones, computation of perpendicular sea-floor profiles, and statistical analysis of these profiles versus crust age. The analysis of site-specific situations near the subduction zones all around the world shows a number of instances where the depth of the oceanic crust stabilizes at a certain level before reaching the subduction zone, and this occurs at depths much lower than proposed in previous approaches to the reference depth issue. An analysis of Jurassic and Cretaceous oceanic lithosphere shows that the most probable interval at which the reference depth occurs is 5300-5800 m. This interval is broadly consistent with dr estimates determined using the Global Depth-Heatflow model (GDH1), but is significantly lower than dr estimates calculated on the basis of the Parsons-Sclater Model (PSM).

  6. Automaticity in social-cognitive processes.

    Science.gov (United States)

    Bargh, John A; Schwader, Kay L; Hailey, Sarah E; Dyer, Rebecca L; Boothby, Erica J

    2012-12-01

    Over the past several years, the concept of automaticity of higher cognitive processes has permeated nearly all domains of psychological research. In this review, we highlight insights arising from studies in decision-making, moral judgments, close relationships, emotional processes, face perception and social judgment, motivation and goal pursuit, conformity and behavioral contagion, embodied cognition, and the emergence of higher-level automatic processes in early childhood. Taken together, recent work in these domains demonstrates that automaticity does not result exclusively from a process of skill acquisition (in which a process always begins as a conscious and deliberate one, becoming capable of automatic operation only with frequent use) - there are evolved substrates and early childhood learning mechanisms involved as well.

  7. Automatic and strategic processes in advertising effects

    DEFF Research Database (Denmark)

    Grunert, Klaus G.

    1996-01-01

    …, and can easily be adapted to situational circumstances. Both the perception of advertising and the way advertising influences brand evaluation involve both processes. Automatic processes govern the recognition of advertising stimuli, the relevance decision which determines further higher-level processing…… are at variance with current notions about advertising effects. For example, the attention span problem will be relevant only for strategic processes, not for automatic processes, a certain amount of learning can occur with very little conscious effort, and advertising's effect on brand evaluation may be more stable...

  8. Conscious and Automatic Processes in Language Learning.

    Science.gov (United States)

    Carroll, John B.

    1981-01-01

    Proposes theory that the learning processes of first- and second-language learners are fundamentally the same, differing only in kinds of information used by both kinds of learners and the degree of automatization attained. Suggests designing second-language learning processes to simulate those occurring in natural settings. (Author/BK)

  9. A CityGML extension for traffic-sign objects that guides the automatic processing of data collected using Mobile Mapping technology

    Science.gov (United States)

    Varela-González, M.; Riveiro, B.; Arias-Sánchez, P.; González-Jorge, H.; Martínez-Sánchez, J.

    2014-11-01

    The rapid evolution of integral schemes, accounting for geometric and semantic data, has been largely motivated by the advances in mobile laser scanning technology in the last decade; automation in data processing has also recently influenced the expansion of the new model concepts. This paper reviews some important issues involved in the new paradigms of city 3D modelling: an interoperable schema for city 3D modelling (CityGML) and mobile mapping technology to provide the features that compose the city model. This paper focuses on traffic signs, discussing their characterization using CityGML in order to ease the implementation of LiDAR technology in road management software, as well as analysing some limitations of the current technology in the task of automatic detection and classification.

  10. Data processing

    CERN Document Server

    Fry, T F

    2013-01-01

    Data Processing discusses the principles, practices, and associated tools in data processing. The book is comprised of 17 chapters that are organized into three parts. The first part covers the characteristics, systems, and methods of data processing. Part 2 deals with the data processing practice; this part discusses the data input, output, and storage. The last part discusses topics related to systems and software in data processing, which include checks and controls, computer language and programs, and program elements and structures. The text will be useful to practitioners of computer-rel

  11. Automatic recognition of lactating sow behaviors through depth image processing

    Science.gov (United States)

    Manual observation and classification of animal behaviors is laborious, time-consuming, and of limited ability to process large amount of data. A computer vision-based system was developed that automatically recognizes sow behaviors (lying, sitting, standing, kneeling, feeding, drinking, and shiftin...

  12. Towards automatic planning for manufacturing generative processes

    Energy Technology Data Exchange (ETDEWEB)

    CALTON,TERRI L.

    2000-05-24

    Generative process planning describes methods process engineers use to modify manufacturing/process plans after designs are complete. A completed design may be the result from the introduction of a new product based on an old design, an assembly upgrade, or modified product designs used for a family of similar products. An engineer designs an assembly and then creates plans capturing manufacturing processes, including assembly sequences, component joining methods, part costs, labor costs, etc. When new products originate as a result of an upgrade, component geometry may change, and/or additional components and subassemblies may be added to or are omitted from the original design. As a result process engineers are forced to create new plans. This is further complicated by the fact that the process engineer is forced to manually generate these plans for each product upgrade. To generate new assembly plans for product upgrades, engineers must manually re-specify the manufacturing plan selection criteria and re-run the planners. To remedy this problem, special-purpose assembly planning algorithms have been developed to automatically recognize design modifications and automatically apply previously defined manufacturing plan selection criteria and constraints.

  13. DEVELOPMENT AND TESTING OF GEO-PROCESSING MODELS FOR THE AUTOMATIC GENERATION OF REMEDIATION PLAN AND NAVIGATION DATA TO USE IN INDUSTRIAL DISASTER REMEDIATION

    Directory of Open Access Journals (Sweden)

    G. Lucas

    2015-08-01

    Full Text Available This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan where the clean-up parcels are least numerous, considering it an optimal spatial configuration. The second model drifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern with a sampling distance of a fifth of a parcel width and keeps the most optimal drifted version, here also with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that optimal orientation identification can result in saving work, time and money in remediation. The various tests demonstrated that the model gains efficiency when 1/ the individual features of the contaminated area present a significant orientation with their geometry (features are long), 2/ the size of pollution extent features becomes closer to the size of the parcels (scale effect). The second model shows only a 1% difference with the variation of feature number; so this last is less interesting for

  14. [Use of the Elektronika-T3-16M special-purpose computer for the automatic processing of cytophotometric and cytofluorimetric data].

    Science.gov (United States)

    Loktionov, A S; Prianishnikov, V A

    1981-05-01

    A system has been proposed to provide automatic analysis of data from: a) point cytophotometry, b) two-wave cytophotometry, c) cytofluorimetry. The system provides input of the data from a photomultiplier to a specialized "Elektronika-T3-16M" computer together with simultaneous statistical analysis of these data. Information on the programs used is presented. The advantages of the system, compared with some commercially available cytophotometers, are indicated.

  15. Modeling of a data exchange process in the Automatic Process Control System on the base of the universal SCADA-system

    Science.gov (United States)

    Topolskiy, D.; Topolskiy, N.; Solomin, E.; Topolskaya, I.

    2016-04-01

    In the present paper the authors discuss some ways of solving energy saving problems in mechanical engineering. In the authors' opinion, one way of solving this problem is the integrated modernization of power engineering objects of mechanical engineering companies, intended to increase the efficiency of energy supply control and to improve the commercial accounting of electric energy. The authors propose the use of digital current and voltage transformers for these purposes. To check the compliance of this equipment with the IEC 61850 International Standard, we have built a mathematical model of the data exchange process between measuring transformers and a universal SCADA-system. The results of the modeling show that the discussed equipment meets the requirements of the Standard and that the use of a universal SCADA-system for these purposes is preferable and economically reasonable. In the modeling the authors used the following software: MasterScada, Master OPC_DI_61850, OPNET.

  16. Automatic generation of combinatorial test data

    CERN Document Server

    Zhang, Jian; Ma, Feifei

    2014-01-01

    This book reviews the state-of-the-art in combinatorial testing, with particular emphasis on the automatic generation of test data. It describes the most commonly used approaches in this area - including algebraic construction, greedy methods, evolutionary computation, constraint solving and optimization - and explains major algorithms with examples. In addition, the book lists a number of test generation tools, as well as benchmarks and applications. Addressing a multidisciplinary topic, it will be of particular interest to researchers and professionals in the areas of software testing, combi

  17. Automatic Data Normalization and Parameterization for Optical Motion Tracking

    Directory of Open Access Journals (Sweden)

    Leif Kobbelt

    2006-09-01

    Full Text Available Methods for optical motion capture often require time-consuming manual processing before the data can be used for subsequent tasks such as retargeting or character animation. These processing steps restrict the applicability of motion capturing especially for dynamic VR-environments with real time requirements. To solve these problems, we present two additional, fast and automatic processing stages based on our motion capture pipeline presented in [ HSK05 ]. A normalization step aligns the recorded coordinate systems with the skeleton structure to yield a common and intuitive data basis across different recording sessions. A second step computes a parameterization based on automatically extracted main movement axes to generate a compact motion description. Our method does not restrict the placement of marker bodies nor the recording setup, and only requires a short calibration phase.

  18. Word Processing in Dyslexics: An Automatic Decoding Deficit?

    Science.gov (United States)

    Yap, Regina; Van der Leij, Aryan

    1993-01-01

    Compares dyslexic children with normal readers on measures of phonological decoding and automatic word processing. Finds that dyslexics have a deficit in automatic phonological decoding skills. Discusses results within the framework of the phonological deficit and the automatization deficit hypotheses. (RS)

  19. The automatic calibration of Korean VLBI Network data

    CERN Document Server

    Hodgson, Jeffrey A; Zhao, Guang-Yao; Algaba, Juan-Carlos; Yun, Youngjoo; Jung, Taehyun; Byun, Do-Young

    2016-01-01

    The calibration of Very Long Baseline Interferometry (VLBI) data has long been a time-consuming process. The Korean VLBI Network (KVN) is a simple array consisting of three identical antennas. Because four frequencies are observed simultaneously, phase solutions can be transferred from lower frequencies to higher frequencies in order to improve phase coherence and hence sensitivity at higher frequencies. Due to the homogeneous nature of the array, the KVN is also well suited for automatic calibration. In this paper we describe the automatic calibration of single-polarisation KVN data using the KVN Pipeline and compare the results against VLBI data that have been manually reduced. We find that the pipelined data using phase transfer produce better results than a manually reduced dataset not using the phase transfer. Additionally we compared the pipeline results with a manually reduced phase-transferred dataset and found the results to be identical.

  20. The Automatic Calibration of Korean VLBI Network Data

    Science.gov (United States)

    Hodgson, Jeffrey A.; Lee, Sang-Sung; Zhao, Guang-Yao; Algaba, Juan-Carlos; Yun, Youngjoo; Jung, Taehyun; Byun, Do-Young

    2016-08-01

    The calibration of Very Long Baseline Interferometry (VLBI) data has long been a time-consuming process. The Korean VLBI Network (KVN) is a simple array consisting of three identical antennas. Because four frequencies are observed simultaneously, phase solutions can be transferred from lower frequencies to higher frequencies in order to improve phase coherence and hence sensitivity at higher frequencies. Due to the homogeneous nature of the array, the KVN is also well suited for automatic calibration. In this paper we describe the automatic calibration of single-polarisation KVN data using the KVN Pipeline and compare the results against VLBI data that have been manually reduced. We find that the pipelined data using phase transfer produce better results than a manually reduced dataset not using the phase transfer. Additionally we compared the pipeline results with a manually reduced phase-transferred dataset and found the results to be identical.

  1. Spectral Curve Fitting for Automatic Hyperspectral Data Analysis

    CERN Document Server

    Brown, Adrian J

    2014-01-01

    Automatic discovery and curve fitting of absorption bands in hyperspectral data can enable the analyst to identify materials present in a scene by comparison with library spectra. This procedure is common in laboratory spectra, but is challenging for sparse hyperspectral data. A procedure for robust discovery of overlapping bands in hyperspectral data is described in this paper. The method is capable of automatically discovering and fitting symmetric absorption bands, can separate overlapping absorption bands in a stable manner, and has relatively low sensitivity to noise. A comparison with techniques already available in the literature is presented using simulated spectra. An application is demonstrated utilizing the shortwave infrared (2.0-2.5 micron or 5000-4000 cm-1) region. A small hyperspectral scene is processed to demonstrate the ability of the method to detect small shifts in absorption wavelength caused by varying white mica chemistry in a natural setting.
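
    The band-fitting step can be illustrated with a least-squares fit of a Gaussian absorption feature on a flat continuum. The sketch below uses scipy's curve_fit on a synthetic SWIR spectrum with invented band parameters; the actual method also handles automatic band discovery and overlapping bands, which are not shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy illustration of fitting a symmetric absorption band in a reflectance
# spectrum: a Gaussian subtracted from a flat continuum. Band centre, depth
# and width values below are invented for the example.

def band(w, continuum, depth, centre, width):
    return continuum - depth * np.exp(-0.5 * ((w - centre) / width) ** 2)

wavelength = np.linspace(2.0, 2.5, 300)                        # microns (SWIR)
rng = np.random.default_rng(0)
spectrum = band(wavelength, 0.9, 0.25, 2.20, 0.02) + rng.normal(0, 0.005, wavelength.size)

p0 = [0.9, 0.2, 2.21, 0.03]                                    # rough initial guess
popt, _ = curve_fit(band, wavelength, spectrum, p0=p0)
print("fitted band centre: %.4f micron, depth: %.3f" % (popt[2], popt[1]))
```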

  2. Iterative Strategies for Aftershock Classification in Automatic Seismic Processing Pipelines

    Science.gov (United States)

    Gibbons, Steven J.; Kværna, Tormod; Harris, David B.; Dodge, Douglas A.

    2016-04-01

    Aftershock sequences following very large earthquakes present enormous challenges to near-realtime generation of seismic bulletins. The increase in analyst resources needed to relocate an inflated number of events is compounded by failures of phase association algorithms and a significant deterioration in the quality of underlying fully automatic event bulletins. Current processing pipelines were designed a generation ago and, due to computational limitations of the time, are usually limited to single passes over the raw data. With current processing capability, multiple passes over the data are feasible. Processing the raw data at each station currently generates parametric data streams which are then scanned by a phase association algorithm to form event hypotheses. We consider the scenario where a large earthquake has occurred and propose to define a region of likely aftershock activity in which events are detected and accurately located using a separate specially targeted semi-automatic process. This effort may focus on so-called pattern detectors, but here we demonstrate a more general grid search algorithm which may cover wider source regions without requiring waveform similarity. Given many well-located aftershocks within our source region, we may remove all associated phases from the original detection lists prior to a new iteration of the phase association algorithm. We provide a proof-of-concept example for the 2015 Gorkha sequence, Nepal, recorded on seismic arrays of the International Monitoring System. Even with very conservative conditions for defining event hypotheses within the aftershock source region, we can automatically remove over half of the original detections which could have been generated by Nepal earthquakes and reduce the likelihood of false associations and spurious event hypotheses. Further reductions in the number of detections in the parametric data streams are likely using correlation and subspace detectors and/or empirical matched

  3. [Automatic analysis pipeline of next-generation sequencing data].

    Science.gov (United States)

    Wenke, Li; Fengyu, Li; Siyao, Zhang; Bin, Cai; Na, Zheng; Yu, Nie; Dao, Zhou; Qian, Zhao

    2014-06-01

    The development of next-generation sequencing has generated high demand for data processing and analysis. Although there are a lot of software for analyzing next-generation sequencing data, most of them are designed for one specific function (e.g., alignment, variant calling or annotation). Therefore, it is necessary to combine them together for data analysis and to generate interpretable results for biologists. This study designed a pipeline to process Illumina sequencing data based on Perl programming language and SGE system. The pipeline takes original sequence data (fastq format) as input, calls the standard data processing software (e.g., BWA, Samtools, GATK, and Annovar), and finally outputs a list of annotated variants that researchers can further analyze. The pipeline simplifies the manual operation and improves the efficiency by automatization and parallel computation. Users can easily run the pipeline by editing the configuration file or clicking the graphical interface. Our work will facilitate the research projects using the sequencing technology.

  4. Automatic rebalancing of data in ATLAS distributed data management

    CERN Document Server

    Barisits, Martin-Stefan; The ATLAS collaboration

    2017-01-01

    The ATLAS Distributed Data Management system stores more than 220PB of physics data across more than 130 sites globally. Rucio, the next generation data management system of the ATLAS collaboration has now been successfully operated for two years. However, with the increasing workload and utilization, more automated and advanced methods of managing the data are needed. In this article we present an extension to the data management system, which is in charge of detecting and foreseeing storage elements reaching and surpassing their capacity limit. The system automatically and dynamically rebalances the data to other storage elements, while respecting and guaranteeing data distribution policies and ensuring the availability of the data. This concept not only lowers the operational burden, as these cumbersome procedures had previously to be done manually, but it also enables the system to use its distributed resources more efficiently, which not only affects the data management system itself, but in consequence ...

  5. Automatic rebalancing of data in ATLAS distributed data management

    CERN Document Server

    Barisits, Martin-Stefan; The ATLAS collaboration; Garonne, Vincent; Lassnig, Mario; Beermann, Thomas

    2016-01-01

    The ATLAS Distributed Data Management system stores more than 220PB of physics data across more than 130 sites globally. Rucio, the next generation data management system of the ATLAS collaboration has now been successfully operated for over a year. However, with the forthcoming start of run-2 and its expected workload and utilization, more automated and advanced methods of managing the data are needed. In this article we present an extension to the data management system, which is in charge of detecting and foreseeing data imbalances as well as storage elements reaching and surpassing their capacity limit. The system automatically and dynamically rebalances the data to other storage elements, while respecting and guaranteeing data distribution policies and ensuring the availability of the data. This concept not only lowers the operational burden, as these cumbersome procedures had previously to be done manually, but it also enables the system to use its distributed resources more efficiently, which not only ...

  6. Scheduling algorithms for automatic control systems for technological processes

    Science.gov (United States)

    Chernigovskiy, A. S.; Tsarev, R. Yu; Kapulin, D. V.

    2017-01-01

    The wide use of automatic process control systems and of high-performance systems containing a number of computers (processors) provides opportunities for creating high-quality and fast production that increases the competitiveness of an enterprise. Exact and fast calculations, control computation, and the processing of big data arrays all require a high level of productivity and, at the same time, a minimum time for data handling and obtaining results. In order to reach the best time, it is necessary not only to use computing resources optimally, but also to design and develop the software so that the time gain is maximal. For this purpose, task (job or operation) scheduling techniques for multi-machine/multiprocessor systems are applied. Some basic task scheduling methods for multi-machine process control systems are considered in this paper, their advantages and disadvantages are brought to light, and some considerations on their use when developing software for automatic process control systems are also made.
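
    As an illustration of the kind of technique surveyed, the sketch below implements one classic static heuristic: longest-processing-time-first list scheduling of independent jobs on identical processors. It is a generic textbook method with invented job times, not a specific algorithm from the paper.

```python
import heapq

# Longest-processing-time-first (LPT) list scheduling of independent jobs on
# identical processors; a standard heuristic shown purely for illustration.

def lpt_schedule(job_times, n_processors):
    heap = [(0.0, p) for p in range(n_processors)]        # (current load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_processors)}
    for job, t in sorted(enumerate(job_times), key=lambda jt: -jt[1]):
        load, p = heapq.heappop(heap)                      # least-loaded processor so far
        assignment[p].append(job)
        heapq.heappush(heap, (load + t, p))
    makespan = max(load for load, _ in heap)               # completion time of the busiest processor
    return assignment, makespan

jobs = [7, 3, 5, 2, 8, 4, 6]
print(lpt_schedule(jobs, 3))
```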

  7. Advancements in Big Data Processing

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for Distributed Computing and Grid-technologies. The emerging Big Data revolution drives new discoveries in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields by pushing Big Data limits, enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprised of many jobs) became a unit of petascale data processing on the Grid. Splitting of a large data processing task into jobs enabled fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of the dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable Six Sigma produc...

  8. Reliability and effectiveness of clickthrough data for automatic image annotation

    NARCIS (Netherlands)

    Tsikrika, T.; Diou, C.; De Vries, A.P.; Delopoulos, A.

    2010-01-01

    Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manua

  9. Data Processing

    Science.gov (United States)

    Grangeat, P.

    A new area of biology has been opened up by nanoscale exploration of the living world. This has been made possible by technological progress, which has provided the tools needed to make devices that can measure things on such length and time scales. In a sense, this is a new window upon the living world, so rich and so diverse. Many of the investigative methods described in this book seek to obtain complementary physical, chemical, and biological data to understand the way it works and the way it is organised. At these length and time scales, only dedicated instrumentation could apprehend the relevant phenomena. There is no way for our senses to observe these things directly. One important field of application is molecular medicine, which aims to explain the mechanisms of life and disease by the presence and quantification of specific molecular entities. This involves combining information about genes, proteins, cells, and organs. This in turn requires the association of instruments for molecular diagnosis, either in vitro, e.g., the microarray or the lab-on-a-chip, or in vivo, e.g., probes for molecular biopsy, and tools for molecular imaging, used to localise molecular information in living organisms in a non-invasive way. These considerations concern both preclinical research for drug design and human medical applications. With the development of DNA and RNA chips [1], genomics has revolutionised investigative methods for cells and cell processes [2,3]. By sequencing the human genome, new ways have been found for understanding the fundamental mechanisms of life [4]. A revolution is currently under way with the analysis of the proteome [5-8], i.e., the complete set of proteins that can be found in some given biological medium, such as the blood plasma. The goal is to characterise certain diseases by recognisable signatures in the proteomic profile, as determined from a blood sample or a biopsy, for example [9-13]. What is at stake is the early detection of

  10. Refinements to the Boolean approach to automatic data editing

    Energy Technology Data Exchange (ETDEWEB)

    Liepins, G.E.

    1980-09-01

    Automatic data editing consists of three components: identification of erroneous records, identification of the most likely erroneous fields within an erroneous record (fields to impute), and assignment of acceptable values to failing records. Moreover, the types of data considered naturally fall into three categories: coded (categorical) data, continuous data, and mixed data (both coded and continuous). For the case of coded data, a natural way to approach automatic data editing is commonly referred to as the Boolean approach, first developed by Fellegi and Holt. For the fields-to-impute problem, central to the operation of the Fellegi-Holt approach is the explicit recognition of certain implied edits; Fellegi and Holt originally required a complete set of edits, and their algorithm to generate this complete set has occasionally had the distinct disadvantage of failing to converge within a reasonable time. The primary result of this paper is an algorithm that significantly prunes the Fellegi-Holt edit generation process, yet nonetheless generates a collection of implied edits adequate for the solution of the fields-to-impute problem. 3 figures.
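
    The Boolean view of coded data can be sketched as follows: an edit lists, per field, the value sets that jointly constitute a conflict, records failing any edit are erroneous, and a greedy covering of the failed edits approximates the fields to impute. The example edits and the greedy heuristic below are illustrative simplifications, not the pruned edit-generation algorithm of the paper.

      # Each edit names, per field, the set of category values that jointly constitute a conflict.
      EDITS = [
          {"age_group": {"child"}, "marital_status": {"married", "widowed"}},
          {"employment": {"retired"}, "age_group": {"child", "young_adult"}},
      ]

      def failed_edits(record, edits=EDITS):
          # An edit fails when the record's value lies in the edit's value set for every field.
          return [e for e in edits if all(record.get(f) in vals for f, vals in e.items())]

      def fields_to_impute(record, edits=EDITS):
          # Greedy cover: repeatedly pick the field appearing in the most still-failing edits.
          failing, chosen = failed_edits(record, edits), set()
          while failing:
              counts = {}
              for e in failing:
                  for f in e:
                      counts[f] = counts.get(f, 0) + 1
              best = max(counts, key=counts.get)
              chosen.add(best)
              failing = [e for e in failing if best not in e]
          return chosen

      record = {"age_group": "child", "marital_status": "married", "employment": "retired"}
      print(fields_to_impute(record))   # {'age_group'} covers both failed edits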

  11. Automatic processing of multimodal tomography datasets.

    Science.gov (United States)

    Parsons, Aaron D; Price, Stephen W T; Wadeson, Nicola; Basham, Mark; Beale, Andrew M; Ashton, Alun W; Mosselmans, J Frederick W; Quinn, Paul D

    2017-01-01

    With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data that will be collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required in order to be able to address the core scientific problems during the experimental data collection. Savu is an accessible and flexible big data processing framework that is able to deal with both the variety and the volume of multimodal and multidimensional scientific datasets, such as those output by chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.

  12. Automatic Information Processing and High Performance Skills

    Science.gov (United States)

    1992-10-01


  13. AUTOCLASS III - AUTOMATIC CLASS DISCOVERY FROM DATA

    Science.gov (United States)

    Cheeseman, P. C.

    1994-01-01

    The program AUTOCLASS III, Automatic Class Discovery from Data, uses Bayesian probability theory to provide a simple and extensible approach to problems such as classification and general mixture separation. Its theoretical basis is free from ad hoc quantities, and in particular free of any measures which alter the data to suit the needs of the program. As a result, the elementary classification model used lends itself easily to extensions. The standard approach to classification in much of artificial intelligence and statistical pattern recognition research involves partitioning of the data into separate subsets, known as classes. AUTOCLASS III uses the Bayesian approach in which classes are described by probability distributions over the attributes of the objects, specified by a model function and its parameters. The calculation of the probability of each object's membership in each class provides a more intuitive classification than absolute partitioning techniques. AUTOCLASS III is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or omitted. The user specifies a class probability distribution function by associating attribute sets with supplied likelihood function terms. AUTOCLASS then searches in the space of class numbers and parameters for the maximally probable combination. It returns the set of class probability function parameters, and the class membership probabilities for each data instance. AUTOCLASS III is written in Common Lisp, and is designed to be platform independent. This program has been successfully run on Symbolics and Explorer Lisp machines. It has been successfully used with the following implementations of Common LISP on the Sun: Franz Allegro CL, Lucid Common Lisp, and Austin Kyoto Common Lisp and similar UNIX platforms; under the Lucid Common Lisp implementations on VAX/VMS v5
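
    AUTOCLASS III itself is a Common Lisp program that handles mixed discrete and continuous attributes; the sketch below only illustrates the central idea of soft, probabilistic class membership, using a Gaussian mixture from scikit-learn (an assumed dependency) in place of AUTOCLASS's own model.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Two synthetic "classes" of instances, each described by 2 continuous attributes.
      data = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
                        rng.normal(4.0, 1.0, (100, 2))])

      gm = GaussianMixture(n_components=2, random_state=0).fit(data)
      membership = gm.predict_proba(data)   # P(class | instance) rather than a hard partition
      print(membership[:3].round(3))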

  14. Beyond behaviorism: on the automaticity of higher mental processes.

    Science.gov (United States)

    Bargh, J A; Ferguson, M J

    2000-11-01

    The first 100 years of experimental psychology were dominated by 2 major schools of thought: behaviorism and cognitive science. Here the authors consider the common philosophical commitment to determinism by both schools, and how the radical behaviorists' thesis of the determined nature of higher mental processes is being pursued today in social cognition research on automaticity. In harmony with "dual process" models in contemporary cognitive science, which equate determined processes with those that are automatic and which require no intervening conscious choice or guidance, as opposed to "controlled" processes which do, the social cognition research on the automaticity of higher mental processes provides compelling evidence for the determinism of those processes. This research has revealed that social interaction, evaluation and judgment, and the operation of internal goal structures can all proceed without the intervention of conscious acts of will and guidance of the process.

  15. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  16. Image analysis techniques associated with automatic data base generation.

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.; Atkinson, R. J.; Hodges, B. C.; Thomas, D. T.

    1973-01-01

    This paper considers some basic problems relating to automatic data base generation from imagery, the primary emphasis being on fast and efficient automatic extraction of relevant pictorial information. Among the techniques discussed are recursive implementations of some particular types of filters which are much faster than FFT implementations, a 'sequential similarity detection' technique of implementing matched filters, and sequential linear classification of multispectral imagery. Several applications of the above techniques are presented including enhancement of underwater, aerial and radiographic imagery, detection and reconstruction of particular types of features in images, automatic picture registration and classification of multiband aerial photographs to generate thematic land use maps.

  17. Automatization techniques for processing biomedical signals using machine learning methods

    OpenAIRE

    Artés Rodríguez, Antonio

    2008-01-01

    The Signal Processing Group (Department of Signal Theory and Communications, University Carlos III, Madrid, Spain) offers the expertise of its members in the automatic processing of biomedical signals. The main advantages of this technology are decreased cost, time saved and increased reliability of the results. Technical cooperation for research and development, with internal and external funding, is sought.

  18. Automatic/Control Processing and Attention.

    Science.gov (United States)

    1982-04-01


  19. Automatic data acquisition of anthropological measurements

    DEFF Research Database (Denmark)

    Lynnerup, N; Lynnerup, O

    1993-01-01

    A computer program in BASIC is presented which enables the input of measurement data from a caliper directly into specific records in a dBASE IV or PARADOX database. The program circumvents the tedious procedure of first recording measurement data manually and then entering the data into a computer. Thus much time can be saved and the risk of wrong data entry is lowered. The program was easy to use, and no significant problems were encountered. Necessary hardware is a standard IBM compatible desktop computer, Mitotoyu Digimatic (TM) calipers and a Mitotoyu Digimatic MUX-10 Multiplexer (TM).

  20. A Simple Blueprint for Automatic Boolean Query Processing.

    Science.gov (United States)

    Salton, G.

    1988-01-01

    Describes a new Boolean retrieval environment in which an extended soft Boolean logic is used to automatically construct queries from original natural language formulations provided by users. Experimental results that compare the retrieval effectiveness of this method to conventional Boolean and vector processing are discussed. (27 references)…

  1. The development of automatic associative processes and children's false memories.

    Science.gov (United States)

    Wimmer, Marina C; Howe, Mark L

    2009-12-01

    We investigated children's ability to generate associations and how automaticity of associative activation unfolds developmentally. Children generated associative responses using a single associate paradigm (Experiment 1) or a Deese/Roediger-McDermott (DRM)-like multiple associates paradigm (Experiment 2). The results indicated that children's ability to generate meaningful word associates, and the automaticity with which they were generated, increased between 5, 7, and 11 years of age. These findings suggest that children's domain-specific knowledge base and the associative connections among related concepts are present and continue to develop from a very early age. Moreover, there is an increase in how these concepts are automatically activated with age, something that results from domain-general developments in speed of processing. These changes are consistent with the neurodevelopmental literature and together may provide a more complete explanation of the development of memory illusions.

  2. On the Control of Automatic Processes: A Parallel Distributed Processing Account of the Stroop Effect.

    Science.gov (United States)

    Cohen, Jonathan D.; And Others

    1990-01-01

    It is proposed that attributes of automatization depend on the strength of a processing pathway, and that strength increases with training. With the Stroop effect as an example, automatic processes are shown through simulation to be continuous and to emerge gradually with practice. (SLD)

  3. Micro-Analyzer: automatic preprocessing of Affymetrix microarray data.

    Science.gov (United States)

    Guzzi, Pietro Hiram; Cannataro, Mario

    2013-08-01

    A current trend in genomics is the investigation of the cell mechanism using different technologies, in order to explain the relationship among genes, molecular processes and diseases. For instance, the combined use of gene-expression arrays and genomic arrays has been demonstrated as an effective instrument in clinical practice. Consequently, in a single experiment different kinds of microarrays may be used, resulting in the production of different types of binary data (images and textual raw data). The analysis of microarray data requires an initial preprocessing phase that makes raw data suitable for use on existing analysis platforms, such as the TIGR M4 (TM4) Suite. An additional challenge to be faced by emerging data analysis platforms is the ability to treat in a combined way those different microarray formats coupled with clinical data. In fact, the resulting integrated data may include both numerical and symbolic data (e.g. gene expression and SNPs regarding molecular data), as well as temporal data (e.g. the response to a drug, time to progression and survival rate) regarding clinical data. Raw data preprocessing is a crucial step in analysis but is often performed in a manual and error-prone way using different software tools. Thus novel, platform-independent, and possibly open-source tools enabling the semi-automatic preprocessing and annotation of different microarray data are needed. The paper presents Micro-Analyzer (Microarray Analyzer), a cross-platform tool for the automatic normalization, summarization and annotation of Affymetrix gene expression and SNP binary data. It represents the evolution of the μ-CS tool, extending the preprocessing to SNP arrays that were not allowed in μ-CS. The Micro-Analyzer is provided as a Java standalone tool and enables users to read, preprocess and analyse binary microarray data (gene expression and SNPs) by invoking the TM4 platform. It avoids: (i) the manual invocation of external tools (e.g. the Affymetrix Power

  4. From Automatic to Adaptive Data Acquisition

    DEFF Research Database (Denmark)

    Chang, Marcus

    2009-01-01

    Sensornets have been used for ecological monitoring the past decade, yet the main driving force behind these deployments are still computer scientists. The denser sampling and added modalities offered by sensornets could drive these fields in new directions, but not until the domain scientists become familiar with sensornets and use them as any other instrument in their toolbox. We explore three different directions in which sensornets can become easier to deploy, collect data of higher quality, and offer more flexibility, and we postulate that sensornets should be instruments for domain scientists... To increase the flexibility of sensornets and reduce the complexity for the domain scientist, we developed an AI-based controller to act as a proxy between the scientist and the sensornet. This controller is driven by the scientist's requirements on the collected data, and uses adaptive sampling in order to reach these goals.

  5. Towards Automatic Processing of Virtual City Models for Simulations

    Science.gov (United States)

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2016-10-01

    Especially in the field of numerical simulations, such as flow and acoustic simulations, the interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations were already carried out in practice have been associated with an extremely high manual and therefore uneconomical effort for the processing of models. The different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) further increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce information unnecessary for a numerical simulation.
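
    For reference, a bilinearly blended Coons surface interpolates four boundary curves; the sketch below evaluates the standard Coons formula for arbitrary boundary-curve callables. The straight-line boundaries in the example are an illustrative assumption and not the paper's LoD2 processing pipeline.

      import numpy as np

      def coons_point(u, v, c0, c1, d0, d1):
          # Bilinearly blended Coons patch at (u, v) in [0,1]^2.
          # c0(u), c1(u): boundary curves at v=0 and v=1; d0(v), d1(v): boundaries at u=0 and u=1.
          ruled_u = (1 - v) * c0(u) + v * c1(u)
          ruled_v = (1 - u) * d0(v) + u * d1(v)
          corners = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
                     + (1 - u) * v * c1(0) + u * v * c1(1))
          return ruled_u + ruled_v - corners

      # Example: four straight boundary edges of a planar quad (hypothetical corner points).
      p00, p10, p01, p11 = map(np.array, ([0, 0, 0], [2, 0, 0], [0, 1, 0.5], [2, 1, 0.5]))
      c0 = lambda u: (1 - u) * p00 + u * p10
      c1 = lambda u: (1 - u) * p01 + u * p11
      d0 = lambda v: (1 - v) * p00 + v * p01
      d1 = lambda v: (1 - v) * p10 + v * p11
      print(coons_point(0.5, 0.5, c0, c1, d0, d1))   # centre of the patch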

  6. Robust indexing for automatic data collection

    Energy Technology Data Exchange (ETDEWEB)

    Sauter, Nicholas K.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.

    2003-12-09

    We present improved methods for indexing diffraction patterns from macromolecular crystals. The novel procedures include a more robust way to verify the position of the incident X-ray beam on the detector, an algorithm to verify that the deduced lattice basis is consistent with the observations, and an alternative approach to identify the metric symmetry of the lattice. These methods help to correct failures commonly experienced during indexing, and increase the overall success rate of the process. Rapid indexing, without the need for visual inspection, will play an important role as beamlines at synchrotron sources prepare for high-throughput automation.

  7. An Automatic Number Plate Recognition System under Image Processing

    OpenAIRE

    Sarbjit Kaur

    2016-01-01

    An Automatic Number Plate Recognition system is an application of computer vision and image processing technology that takes photographs of vehicles as input images and, by extracting the number plate from the whole vehicle image, displays the number plate information as text. The ANPR system mainly consists of 4 phases: Acquisition of Vehicle Image and Pre-Processing, Extraction of Number Plate Area, Character Segmentation and Character Recognition. The overall accuracy and efficiency of whol...
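
    A minimal sketch of the first two phases (image pre-processing and number plate area extraction) is given below using OpenCV, an assumed dependency; the contour heuristics and the input file name are illustrative, and character segmentation and recognition are omitted.

      import cv2

      def locate_plate_candidates(path="car.jpg"):
          # Return bounding boxes of roughly plate-shaped contours (illustrative heuristics).
          image = cv2.imread(path)
          gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
          gray = cv2.bilateralFilter(gray, 11, 17, 17)     # smooth while keeping edges
          edges = cv2.Canny(gray, 30, 200)
          contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

          candidates = []
          for c in contours:
              x, y, w, h = cv2.boundingRect(c)
              aspect = w / float(h)
              if 2.0 < aspect < 6.0 and cv2.contourArea(c) > 500:   # plate-like aspect ratio
                  candidates.append((x, y, w, h))
          return candidates

      print(locate_plate_candidates())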

  9. Automatic retrieval of bone fracture knowledge using natural language processing.

    Science.gov (United States)

    Do, Bao H; Wu, Andrew S; Maley, Joan; Biswal, Sandip

    2013-08-01

    Natural language processing (NLP) techniques to extract data from unstructured text into formal computer representations are valuable for creating robust, scalable methods to mine data in medical documents and radiology reports. As voice recognition (VR) becomes more prevalent in radiology practice, there is opportunity for implementing NLP in real time for decision-support applications such as context-aware information retrieval. For example, as the radiologist dictates a report, an NLP algorithm can extract concepts from the text and retrieve relevant classification or diagnosis criteria or calculate disease probability. NLP can work in parallel with VR to potentially facilitate evidence-based reporting (for example, automatically retrieving the Bosniak classification when the radiologist describes a kidney cyst). For these reasons, we developed and validated an NLP system which extracts fracture and anatomy concepts from unstructured text and retrieves relevant bone fracture knowledge. We implement our NLP in an HTML5 web application to demonstrate a proof-of-concept feedback NLP system which retrieves bone fracture knowledge in real time.
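
    The real-time idea can be illustrated with a toy sketch that extracts fracture and anatomy terms from dictated text and uses them to look up entries in a small knowledge table. The vocabularies and knowledge entries below are illustrative assumptions, not the authors' validated NLP system.

      import re

      ANATOMY = {"radius", "scaphoid", "femur", "tibia"}
      FRACTURE_TERMS = {"fracture", "fractured", "break"}

      # Hypothetical knowledge base keyed by anatomy term.
      KNOWLEDGE = {
          "radius": "Distal radius fractures: consider Frykman classification.",
          "femur": "Femoral neck fractures: consider Garden classification.",
      }

      def retrieve_knowledge(report_text):
          # Return knowledge entries whose anatomy term co-occurs with a fracture term.
          tokens = set(re.findall(r"[a-z]+", report_text.lower()))
          if tokens & FRACTURE_TERMS:
              return [KNOWLEDGE[t] for t in tokens & ANATOMY if t in KNOWLEDGE]
          return []

      print(retrieve_knowledge("There is a mildly displaced fracture of the distal radius."))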

  10. Automatic Beam Path Analysis of Laser Wakefield Particle Acceleration Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver; Geddes, Cameron G.R.; Cormier-Michel, Estelle; Wu, Kesheng; Prabhat,; Weber, Gunther H.; Ushizima, Daniela M.; Messmer, Peter; Hagen, Hans; Hamann, Bernd; Bethel, E. Wes

    2009-10-19

    Numerical simulations of laser wakefield particle accelerators play a key role in the understanding of the complex acceleration process and in the design of expensive experimental facilities. As the size and complexity of simulation output grows, an increasingly acute challenge is the practical need for computational techniques that aid in scientific knowledge discovery. To that end, we present a set of data-understanding algorithms that work in concert in a pipeline fashion to automatically locate and analyze high energy particle bunches undergoing acceleration in very large simulation datasets. These techniques work cooperatively by first identifying features of interest in individual timesteps, then integrating features across timesteps, and, based on the derived information, performing analysis of temporally dynamic features. This combination of techniques supports accurate detection of particle beams, enabling a deeper level of scientific understanding of physical phenomena than has been possible before. By combining efficient data analysis algorithms and state-of-the-art data management we enable high-performance analysis of extremely large particle datasets in 3D. We demonstrate the usefulness of our methods for a variety of 2D and 3D datasets and discuss the performance of our analysis pipeline.

  11. An Automatic Development Process for Integrated Modular Avionics Software

    Directory of Open Access Journals (Sweden)

    Ying Wang

    2013-05-01

    With ever-growing avionics functions, modern avionics architecture is evolving from the traditional federated architecture to Integrated Modular Avionics (IMA). ARINC653 is a major industry standard supporting the partitioning concept introduced in IMA to achieve security isolation between avionics functions of different criticalities. To decrease the complexity and improve the reliability of the design and implementation of IMA-based avionics software, this paper proposes an automatic development process based on the Architecture Analysis & Design Language. An automatic model transformation approach from domain-specific models to platform-specific ARINC653 models, and safety-critical ARINC653-compliant code generation technology, are presented for this process. A simplified multi-task flight application is given as a case study, with preliminary experimental results, to show the validity of this process.

  12. Semiautomatic and Automatic Cooperative Inversion of Seismic and Magnetotelluric Data

    Science.gov (United States)

    Le, Cuong V. A.; Harris, Brett D.; Pethick, Andrew M.; Takam Takougang, Eric M.; Howe, Brendan

    2016-09-01

    Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity model and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for prior model 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic, electrical logs and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion for an iterative practical and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we hold confidence that significant and practical advances in this direction have been accomplished.

  13. Improved automatic tuning of PID controller for stable processes.

    Science.gov (United States)

    Kumar Padhy, Prabin; Majhi, Somanath

    2009-10-01

    This paper presents an improved automatic tuning method for stable processes using a modified relay in the presence of static load disturbances and measurement noise. The modified relay consists of a standard relay in series with a PI controller of unity proportional gain. The integral time constant of the PI controller of the modified relay is chosen so as to ensure a minimum loop phase margin of 30°. A limit cycle is then obtained using the modified relay. Thereafter, the PID controller is designed using the limit cycle output data. The derivative time constant is obtained by maintaining the above-mentioned loop phase margin. Minimizing the distance of the Nyquist curve of the loop transfer function from the imaginary axis of the complex plane gives the proportional gain. The integral time constant of the PID controller is set equal to the integral time constant of the PI controller of the modified relay. The effectiveness of the proposed technique is verified by simulation results.
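
    For context, the classic relay-feedback calculation (Åström-Hägglund) from which such methods descend can be sketched as follows; this is not the paper's modified-relay procedure, and the Ziegler-Nichols tuning constants used here are standard textbook values, included only as an assumption.

      import math

      def relay_autotune(relay_amplitude, oscillation_amplitude, oscillation_period):
          # Estimate ultimate gain/period from a relay test, then apply Ziegler-Nichols PID rules.
          ku = 4.0 * relay_amplitude / (math.pi * oscillation_amplitude)   # ultimate gain
          tu = oscillation_period                                          # ultimate period
          return {"Kp": 0.6 * ku, "Ti": 0.5 * tu, "Td": 0.125 * tu}

      # Example: relay height d=1.0, measured limit-cycle amplitude a=0.4 and period 12 s.
      print(relay_autotune(1.0, 0.4, 12.0))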

  14. Randomized algorithms in automatic control and data mining

    CERN Document Server

    Granichin, Oleg; Toledano-Kitai, Dvora

    2015-01-01

    In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces the readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book guarantee that the computational complexity of classical algorithms and the conservativeness of standard robust control techniques will be reduced. It is shown that when a problem requires "brute force" in selecting among options, algorithms based on random selection of alternatives offer good results with certain probability for a restricted time and significantly reduce the volume of operations.

  15. The Automatic and Controlled Processing of Temporal and Spatial Patterns.

    Science.gov (United States)

    1980-02-01

    Besides the frame size, Schneider and Shiffrin (1977) also varied the memory set size to study the differential load requirements of CM and VM. At the theoretical level, Shiffrin and Schneider (1977) described an automatic process as a sequence of memory nodes that nearly always become active in

  16. Altering automatic verbal processes with transcranial direct current stimulation

    Directory of Open Access Journals (Sweden)

    Tracy D Vannorsdall

    2012-08-01

    Background: Word retrieval during verbal fluency tasks utilizes both automatic and controlled cognitive processes. A distinction has been made between the generation of clusters and switches on verbal fluency tasks. Clusters, or the reporting of contiguous words within semantic or phonemic subcategories, are thought to reflect a relatively automatic process. In contrast, switching from one subcategory to another is thought to represent a more controlled, effortful form of cognitive processing. Objective: In this single-blind experiment, we investigated whether tDCS can modify qualitative aspects of verbal fluency, such as clustering and switching, in healthy adults. Methods: Participants were randomly assigned to receive 1 mA of either anodal/excitatory or cathodal/inhibitory active tDCS over the left prefrontal cortex, in addition to sham stimulation. In the last segment of each 30-minute session, participants completed letter- and category-cued fluency tasks. Results: Anodal tDCS increased both overall productivity and the number and proportion of words in clusters during category-guided verbal fluency, whereas cathodal stimulation produced the opposite effect. Conclusions: tDCS can selectively alter automatic aspects of speeded lexical retrieval in a polarity-dependent fashion during a category-guided fluency task.

  17. Automatic and controlled processing in the corticocerebellar system.

    Science.gov (United States)

    Ramnani, Narender

    2014-01-01

    During learning, performance changes often involve a transition from controlled processing in which performance is flexible and responsive to ongoing error feedback, but effortful and slow, to a state in which processing becomes swift and automatic. In this state, performance is unencumbered by the requirement to process feedback, but its insensitivity to feedback reduces its flexibility. Many properties of automatic processing are similar to those that one would expect of forward models, and many have suggested that these may be instantiated in cerebellar circuitry. Since hierarchically organized frontal lobe areas can both send and receive commands, I discuss the possibility that they can act both as controllers and controlled objects and that their behaviors can be independently modeled by forward models in cerebellar circuits. Since areas of the prefrontal cortex contribute to this hierarchically organized system and send outputs to the cerebellar cortex, I suggest that the cerebellum is likely to contribute to the automation of cognitive skills, and to the formation of habitual behavior which is resistant to error feedback. An important prerequisite to these ideas is that cerebellar circuitry should have access to higher order error feedback that signals the success or failure of cognitive processing. I have discussed the pathways through which such feedback could arrive via the inferior olive and the dopamine system. Cerebellar outputs inhibit both the inferior olive and the dopamine system. It is possible that learned representations in the cerebellum use this as a mechanism to suppress the processing of feedback in other parts of the nervous system. Thus, cerebellar processes that control automatic performance may be completed without triggering the engagement of controlled processes by prefrontal mechanisms.

  18. Fast Automatic Precision Tree Models from Terrestrial Laser Scanner Data

    Directory of Open Access Journals (Sweden)

    Mathias Disney

    2013-01-01

    This paper presents a new method for quickly and automatically constructing precision tree models from point clouds of the trunk and branches obtained by terrestrial laser scanning. The input of the method is a point cloud of a single tree scanned from multiple positions. The surface of the visible parts of the tree is robustly reconstructed by making a flexible cylinder model of the tree. The thorough quantitative model also records the topological branching structure. In this paper, every major step of the whole model reconstruction process, from the input to the finished model, is presented in detail. The model is constructed by a local approach in which the point cloud is covered with small sets corresponding to connected surface patches in the tree surface. The neighbor-relations and geometrical properties of these cover sets are used to reconstruct the details of the tree and, step by step, the whole tree. The point cloud and the sets are segmented into branches, after which the branches are modeled as collections of cylinders. From the model, the branching structure and size properties, such as volume and branch size distributions, for the whole tree or some of its parts, can be approximated. The approach is validated using both measured and modeled terrestrial laser scanner data from real trees and detailed 3D models. The results show that the method allows an easy extraction of various tree attributes from terrestrial or mobile laser scanning point clouds.

  19. Automatic auditory intelligence: an expression of the sensory-cognitive core of cognitive processes.

    Science.gov (United States)

    Näätänen, Risto; Astikainen, Piia; Ruusuvirta, Timo; Huotilainen, Minna

    2010-09-01

    In this article, we present a new view on the nature of cognitive processes suggesting that there is a common core, viz., automatic sensory-cognitive processes that form the basis for higher-order cognitive processes. It has been shown that automatic sensory-cognitive processes are shared by humans and various other species and occur at different developmental stages and even in different states of consciousness. This evidence, based on the automatic electrophysiological change-detection response mismatch negativity (MMN), its magnetoencephalographic equivalent MMNm, and behavioral data, indicates that in audition surprisingly complex processes occur automatically and mainly in the sensory-specific cortical regions. These processes include, e.g. stimulus anticipation and extrapolation, sequential stimulus-rule extraction, and pattern and pitch-interval encoding. Furthermore, these complex perceptual-cognitive processes, first found in waking adults, occur similarly even in sleeping newborns, anesthetized animals, and deeply sedated adult humans, suggesting that they form the common perceptual-cognitive core of cognitive processes in general. Although the present evidence originates mainly from the auditory modality, it is likely that analogous evidence could be obtained from other sensory modalities when measures corresponding to those used in the study of the auditory modality become available.

  20. Automatic Defect Detection in X-Ray Images Using Image Data Fusion

    Institute of Scientific and Technical Information of China (English)

    TIAN Yuan; DU Dong; CAI Guorui; WANG Li; ZHANG Hua

    2006-01-01

    Automatic defect detection in X-ray images is currently a focus of much research at home and abroad. The technology requires computerized image processing, image analysis, and pattern recognition. This paper describes an image processing method for automatic defect detection using image data fusion which synthesizes several methods including edge extraction, wave profile analyses, segmentation with dynamic threshold, and weld district extraction. Test results show that defects that induce an abrupt change over a predefined extent of the image intensity can be segmented regardless of the number, location, shape, or size. Thus, the method is more robust and practical than the current methods using only one method.
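
    A rough sketch of fusing two of the cues mentioned above, dynamic (adaptive) thresholding and edge extraction, is shown below using OpenCV as an assumed dependency; the parameters and file names are illustrative and do not reproduce the authors' weld-inspection pipeline.

      import cv2

      def defect_mask(path="weld.png"):
          # Fuse a dynamic-threshold map with an edge map to flag abrupt intensity changes.
          gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)

          # Dynamic (locally adaptive) threshold: dark anomalies against the weld background.
          thresh = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY_INV, 31, 5)
          edges = cv2.Canny(blurred, 50, 150)
          edges = cv2.dilate(edges, None, iterations=2)   # thicken edges before fusion

          return cv2.bitwise_and(thresh, edges)           # keep regions supported by both cues

      cv2.imwrite("defect_mask.png", defect_mask())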

  1. Automatic Discovery of Non-Compositional Compounds in Parallel Data

    CERN Document Server

    Melamed, I D

    1997-01-01

    Automatic segmentation of text into minimal content-bearing units is an unsolved problem even for languages like English. Spaces between words offer an easy first approximation, but this approximation is not good enough for machine translation (MT), where many word sequences are not translated word-for-word. This paper presents an efficient automatic method for discovering sequences of words that are translated as a unit. The method proceeds by comparing pairs of statistical translation models induced from parallel texts in two languages. It can discover hundreds of non-compositional compounds on each iteration, and constructs longer compounds out of shorter ones. Objective evaluation on a simple machine translation task has shown the method's potential to improve the quality of MT output. The method makes few assumptions about the data, so it can be applied to parallel data other than parallel texts, such as word spellings and pronunciations.

  2. Automatic Multimedia Creation Enriched with Dynamic Conceptual Data

    Directory of Open Access Journals (Sweden)

    Angel Martín

    2012-12-01

    There is a growing gap between multimedia production and context-centric multimedia services. The main problem is the under-exploitation of the content creation design. The idea is to support dynamic content generation adapted to the user or display profile. Our work is an implementation of a web platform for automatic generation of multimedia presentations based on the SMIL (Synchronized Multimedia Integration Language) standard. The system is able to produce rich media with dynamic multimedia content retrieved automatically from different content databases matching the semantic context. For this purpose, we extend the standard interpretation of SMIL tags in order to accomplish a semantic translation of multimedia objects into database queries. This permits services to take advantage of the production process to create customized content enhanced with real-time information fed from databases. The described system has been successfully deployed to create advanced context-centric weather forecasts.

  3. Automatic Classification of Variable Stars in Catalogs with missing data

    CERN Document Server

    Pichara, Karim

    2013-01-01

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks, a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilises sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model we use three catalogs with missing data (SAGE, 2MASS and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches and at what computational cost. Integrating these catalogs with missing data we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational co...

  4. Automatic registration method for mobile LiDAR data

    Science.gov (United States)

    Wang, Ruisheng; Ferrie, Frank P.

    2015-01-01

    We present an automatic mutual information (MI) registration method for mobile LiDAR and panoramas collected from a driving vehicle. The suitability of MI for registration of aerial LiDAR and aerial oblique images has been demonstrated under an assumption that minimization of joint entropy (JE) is a sufficient approximation of maximization of MI. We show that this assumption is invalid for the ground-level data. The entropy of a LiDAR image cannot be regarded as approximately constant for small perturbations. Instead of minimizing the JE, we directly maximize MI to estimate corrections of camera poses. Our method automatically registers mobile LiDAR with spherical panoramas over an approximate 4-km drive, and is the first example we are aware of that tests MI registration in a large-scale context.
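
    Mutual information between two co-registered images can be estimated from their joint histogram; the NumPy sketch below shows that computation under the assumption of equally sized single-channel images and a fixed bin count, and is not the authors' full pose-correction optimization.

      import numpy as np

      def mutual_information(img_a, img_b, bins=32):
          # Estimate MI (in nats) between two equally sized images from their joint histogram.
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nonzero = pxy > 0
          return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

      a = np.random.default_rng(1).integers(0, 256, (64, 64))
      print(mutual_information(a, a))                       # MI of an image with itself
      print(mutual_information(a, np.roll(a, 8, axis=1)))   # lower MI after misalignment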

  5. Cognitive effort and pupil dilation in controlled and automatic processes

    Science.gov (United States)

    Querino, Emanuel; dos Santos, Lafaiete; Ginani, Giuliano; Nicolau, Eduardo; Miranda, Débora; Romano-Silva, Marco; Malloy-Diniz, Leandro

    2015-01-01

    The Five Digits Test (FDT) is a Stroop paradigm test that aims to evaluate executive functions. It is composed of four parts, two of which are related to automatic and two of which are related to controlled processes. It is known that pupillary diameter increases as the task’s cognitive demand increases. In the present study, we evaluated whether the pupillary diameter could distinguish cognitive effort between automated and controlled cognitive processing during the FDT as the task progressed. As a control task, we used a simple reading paradigm with a visual aspect similar to the FDT. We then divided each of the four parts into two blocks in order to evaluate the differences between the first and second half of the task. Results indicated that, compared to the control task, the FDT required higher cognitive effort for each consecutive part. Moreover, the first half of every part of the FDT induced more dilation than the second. The differences in pupil dilation during the first half of the four FDT parts were statistically significant between parts 2 and 4 (p=0.023), and between parts 3 and 4 (p=0.006). These results provide further evidence that cognitive effort and pupil diameter can distinguish controlled from automatic processes.

  6. Using Dual-Task Methodology to Dissociate Automatic from Nonautomatic Processes Involved in Artificial Grammar Learning

    Science.gov (United States)

    Hendricks, Michelle A.; Conway, Christopher M.; Kellogg, Ronald T.

    2013-01-01

    Previous studies have suggested that both automatic and intentional processes contribute to the learning of grammar and fragment knowledge in artificial grammar learning (AGL) tasks. To explore the relative contribution of automatic and intentional processes to knowledge gained in AGL, we utilized dual-task methodology to dissociate automatic and…

  7. Automatic meta-data collection of STP observation data

    Science.gov (United States)

    Ishikura, S.; Kimura, E.; Murata, K.; Kubo, T.; Shinohara, I.

    2006-12-01

    For geoscience and STP (Solar-Terrestrial Physics) studies, various observations have been made by satellites and ground-based observatories up to now. These data are saved and managed at many organizations, but there is no common procedure or rule for providing and/or sharing these data files. Researchers have had difficulty searching and analyzing such different types of data distributed over the Internet. To support such cross-over analyses of observation data, we have developed the STARS (Solar-Terrestrial data Analysis and Reference System). The STARS consists of a client application (STARS-app), the meta-database (STARS-DB), the portal Web service (STARS-WS) and the download agent Web service (STARS DLAgent-WS). The STARS-DB includes directory information, access permissions, protocol information to retrieve data files, hierarchy information of mission/team/data and user information. Users of the STARS are able to download observation data files without knowing the locations of the files by using the STARS-DB. We have implemented the Portal-WS to retrieve meta-data from the meta-database. One reason we use the Web service is to overcome firewall restrictions, which have been getting stricter in recent years and make it difficult for the STARS client application to access the STARS-DB directly by sending SQL queries. Using the Web service, we succeeded in placing the STARS-DB behind the Portal-WS and avoided exposing it on the Internet. The STARS accesses the Portal-WS by sending a SOAP (Simple Object Access Protocol) request over HTTP. Meta-data are received as a SOAP Response. The STARS DLAgent-WS provides clients with data files downloaded from data sites. The data files are provided with a variety of protocols (e.g., FTP, HTTP, FTPS and SFTP). These protocols are individually selected at each site. The clients send a SOAP request with download request messages and receive observation data files as a SOAP Response with

  8. Automatic humidification system to support the assessment of food drying processes

    Science.gov (United States)

    Ortiz Hernández, B. D.; Carreño Olejua, A. R.; Castellanos Olarte, J. M.

    2016-07-01

    This work shows the main features of an automatic humidification system that provides drying air matching the environmental conditions of different climate zones. This conditioned air is then used to assess the drying process of different agro-industrial products at the Automation and Control for Agro-industrial Processes Laboratory of the Pontifical Bolivarian University of Bucaramanga, Colombia. The automatic system allows control strategies to be created and improved for supplying drying air under specified conditions of temperature and humidity. The development of automatic routines to control and acquire real-time data was made possible by the use of robust control systems and suitable instrumentation. The signals are read and directed to a controller memory where they are scaled and transferred to a memory unit. Using the IP address, it is possible to access the data to perform supervision tasks. One important characteristic of this automatic system is the Dynamic Data Exchange (DDE) server, which allows direct communication between the control unit and the computer used to build experimental curves.

  9. Automatic solar feature detection using image processing and pattern recognition techniques

    Science.gov (United States)

    Qu, Ming

    The objective of the research in this dissertation is to develop a software system to automatically detect and characterize solar flares, filaments and Corona Mass Ejections (CMEs), the core of so-called solar activity. These tools will assist us to predict space weather caused by violent solar activity. Image processing and pattern recognition techniques are applied to this system. For automatic flare detection, the advanced pattern recognition techniques such as Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Support Vector Machine (SVM) are used. By tracking the entire process of flares, the motion properties of two-ribbon flares are derived automatically. In the applications of the solar filament detection, the Stabilized Inverse Diffusion Equation (SIDE) is used to enhance and sharpen filaments; a new method for automatic threshold selection is proposed to extract filaments from background; an SVM classifier with nine input features is used to differentiate between sunspots and filaments. Once a filament is identified, morphological thinning, pruning, and adaptive edge linking methods are applied to determine filament properties. Furthermore, a filament matching method is proposed to detect filament disappearance. The automatic detection and characterization of flares and filaments have been successfully applied on Halpha full-disk images that are continuously obtained at Big Bear Solar Observatory (BBSO). For automatically detecting and classifying CMEs, the image enhancement, segmentation, and pattern recognition techniques are applied to Large Angle Spectrometric Coronagraph (LASCO) C2 and C3 images. The processed LASCO and BBSO images are saved to file archive, and the physical properties of detected solar features such as intensity and speed are recorded in our database. Researchers are able to access the solar feature database and analyze the solar data efficiently and effectively. The detection and characterization system greatly improves

  10. Automatic Boat Identification System for VIIRS Low Light Imaging Data

    Directory of Open Access Journals (Sweden)

    Christopher D. Elvidge

    2015-03-01

    The ability of satellite sensors to detect lit fishing boats has been known since the 1970s. However, the use of the observations has been limited by the lack of an automatic algorithm for reporting the location and brightness of offshore lighting features arising from boats. An examination of lit fishing boat features in Visible Infrared Imaging Radiometer Suite (VIIRS) day/night band (DNB) data indicates that the features are essentially spikes. We have developed a set of algorithms for automatic detection of spikes and characterization of the sharpness of spike features. A spike detection algorithm generates a list of candidate boat detections. A second algorithm measures the height of the spikes to discard ionospheric energetic particle detections and to rate boat detections as either strong or weak. A sharpness index is used to label boat detections that appear blurry due to the scattering of light by clouds. The candidate spikes are then filtered to remove features on land and gas flares. A validation study conducted using analyst-selected boat detections found the automatic algorithm detected 99.3% of the reference pixel set. VIIRS boat detection data can provide fishery agencies with up-to-date information on fishing boat activity and changes in this activity in response to new regulations and enforcement regimes. The data can provide indications of illegal fishing activity in restricted areas and incursions across Exclusive Economic Zone (EEZ) boundaries. VIIRS boat detections occur widely offshore from East and Southeast Asia, South America and several other regions.
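
    The spike character of lit boats suggests a simple detector: compare each pixel with a local background estimate and keep isolated exceedances. The median-filter background, threshold and synthetic scene below are illustrative assumptions, not the published VIIRS boat-detection algorithm.

      import numpy as np
      from scipy.ndimage import median_filter

      def detect_spikes(radiance, background_size=5, min_ratio=10.0):
          # Return (row, col) indices of pixels standing sharply above the local background.
          background = median_filter(radiance, size=background_size)
          spikes = radiance > np.maximum(background * min_ratio, 1e-12)
          return np.argwhere(spikes)

      rng = np.random.default_rng(2)
      scene = rng.uniform(0.0, 1.0, (100, 100))   # dim background radiances
      scene[40, 60] += 500.0                      # a single lit-boat-like spike
      print(detect_spikes(scene))                 # -> [[40 60]]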

  11. Automatic and effortful processing of self-statements in depression.

    Science.gov (United States)

    Wang, Catharina E; Brennen, Tim; Holte, Arne

    2006-01-01

    Clark and Beck (1999) and Williams et al. (1997) have come up with quite different conclusions regarding which cognitive processes are most affected by negative self-schemata and negative knowledge structures. In order to increase the understanding of differences in effortful and automatic processing in depression, we compared never depressed (ND), previously depressed (PD) and clinically depressed (CD) individuals on free recall, recognition and fabrication of positive and negative self-statements. The results showed that: (i) overall NDs and PDs recalled more positive self-statements than CDs, whereas CDs correctly recognized more negative self-statements than NDs and PDs; and (ii) CDs and PDs fabricated more negative than positive self-statements, whereas no difference was obtained for NDs. The results seem to be in line with Clark and Beck's suggestions. However, there are several aspects of the present findings that make the picture more complicated.

  12. Enhancement of the automatic ultrasonic signal processing system using digital technology

    Energy Technology Data Exchange (ETDEWEB)

    Koo, In Soo; Park, H. Y.; Suh, Y. S.; Kim, D. Hoon; Huh, S.; Sung, S. H.; Jang, G. S.; Ryoo, S. G.; Choi, J. H.; Kim, Y. H.; Lee, J. C.; Kim, D. Hyun; Park, H. J.; Kim, Y. C.; Lee, J. P.; Park, C. H.; Kim, M. S

    1999-12-01

    The objective of this study is to develop an automatic ultrasonic signal processing system which can be used in inspection equipment to assess the integrity of the reactor vessel, by enhancing the performance of the ultrasonic signal processing system. The main activities of this study were divided into three categories: the development of the circuits for generating the ultrasonic signal and receiving the signal from the inspection equipment, the development of the signal processing algorithm and the H/W of the data processing system, and the development of the specification for the application programs and system S/W of the analysis and evaluation computer. The results of the main activities are as follows: 1) the design of the ultrasonic detector and the automatic ultrasonic signal processing system, based on an investigation of the state-of-the-art technology at home and abroad; 2) the development of the H/W and S/W of the data processing system based on these results. In particular, the H/W of the data processing system, which has the advantages of both digital and analog control through real-time digital signal processing, was developed using a DSP that can process the digital signal in real time; in addition, the firmware of the data processing system for the peripherals and the specimen test algorithm for calibration were developed. The application programs and the system S/W of the analysis/evaluation computer were also developed. The developed equipment was verified by performance testing. Based on the developed prototype of the automatic ultrasonic signal processing system, localization of the overall ultrasonic inspection equipment for the nuclear industry is expected through further studies on establishing the H/W for real applications and developing the S/W specification of the analysis computer. (author)

  13. Automatic removal of outliers in hydrologic time series and quality control of rainfall data: processing a real-time database of the Local System for Flood Monitoring in Klodzko County, Poland

    Science.gov (United States)

    Mizinski, Bartlomiej; Niedzielski, Tomasz; Kryza, Maciej; Szymanowski, Mariusz

    2013-04-01

    Real-time hydrological forecasting requires the highest quality of both hydrologic and meteorological data collected in a given river basin. Large outliers may lead to inaccurate predictions, with substantial departures between observations and prognoses even in the short term. Although we need correct riverflow and rainfall data alike, they cannot be processed in the same way to produce a filtered output. Indeed, hydrologic time series at a given gauge can be interpolated in the time domain after suspicious values have been detected, provided that no outlier has been detected at the upstream sites. In the case of rainfall data, interpolation is not suitable, as we cannot verify potential outliers at a given site against data from other sites, especially in complex terrain. This is because very local convective events may occur, leading to large rainfall peaks over a limited area. Hence, instead of interpolating the data, we perform a flagging procedure that only ranks outliers according to their likelihood of occurrence. Following the aforementioned assumptions, we have developed a few modules that serve the purpose of fully automated correction of a database that is updated in real time every 15 minutes; the main objective of the work was to produce a high-quality database for hydrologic rainfall-runoff modeling and ensemble prediction. The database in question is available courtesy of the County Office in Kłodzko (SW Poland), the institution which owns and maintains the Local System for Flood Monitoring in Kłodzko County. The dedicated prediction system, known as HydroProg, is now being built at the University of Wrocław (Poland). Like the entire prediction system, the correction modules work automatically in real time and are developed in the R language. They are plugged in to a larger IT infrastructure. Hydrologic time series, which are water levels recorded every 15 minutes at 22 gauges located in Kłodzko County, are
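
    A common building block for this kind of automatic screening is a rolling-median (Hampel-type) filter that flags values deviating strongly from their local median, after which gauge readings can be interpolated while rainfall values are only ranked. The pandas sketch below is a generic illustration under assumed window and threshold values, not the HydroProg correction modules themselves.

      import pandas as pd

      def hampel_flags(series, window=7, n_sigmas=3.0):
          # Flag points deviating from the rolling median by more than n_sigmas robust std-devs.
          med = series.rolling(window, center=True, min_periods=1).median()
          mad = (series - med).abs().rolling(window, center=True, min_periods=1).median()
          return (series - med).abs() > n_sigmas * 1.4826 * mad

      levels = pd.Series([1.0, 1.1, 1.2, 9.9, 1.3, 1.2, 1.1])   # water levels, one spurious spike
      flags = hampel_flags(levels)
      cleaned = levels.mask(flags).interpolate()                # interpolate flagged gauge values
      print(flags.tolist())
      print(cleaned.tolist())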

  14. Automatic Generation of OWL Ontology from XML Data Source

    CERN Document Server

    Yahia, Nora; Ahmed, AbdelWahab

    2012-01-01

    The eXtensible Markup Language (XML) can be used as data exchange format in different domains. It allows different parties to exchange data by providing common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for reasoning. Ontology can provide a semantic representation of domain knowledge which supports efficient reasoning and expressive power. One of the most popular ontology languages is the Web Ontology Language (OWL). It can represent domain knowledge using classes, properties, axioms and instances for the use in a distributed environment such as the World Wide Web. This paper presents a new method for automatic generation of OWL ontology from XML data sources.
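
    The mapping idea can be illustrated with a toy rule that declares an OWL class for every XML element name, using xml.etree and rdflib (assumed dependencies); the one-rule mapping and the tiny XML fragment below are illustrative and far simpler than the method proposed in the paper.

      import xml.etree.ElementTree as ET
      from rdflib import Graph, Namespace, RDF
      from rdflib.namespace import OWL

      XML = "<library><book><title>Dune</title></book></library>"
      EX = Namespace("http://example.org/onto#")

      graph = Graph()
      graph.bind("ex", EX)

      def add_classes(element):
          # Declare an OWL class for every distinct XML element name (one-rule mapping).
          graph.add((EX[element.tag.capitalize()], RDF.type, OWL.Class))
          for child in element:
              add_classes(child)

      add_classes(ET.fromstring(XML))
      print(graph.serialize(format="turtle"))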

  15. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    Directory of Open Access Journals (Sweden)

    Chuqing Cao

    2014-09-01

    Full Text Available Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data. The proposed method combines road color feature with road GPS data to detect road centerline seed points. After global alignment of road GPS data, a novel road centerline extraction algorithm is developed to extract each individual road centerline in local regions. Through road connection, road centerline network is generated as the final output. Extensive experiments demonstrate that our proposed method can rapidly and accurately extract road centerline from remotely sensed imagery.

  16. Exploring Automaticity in Text Processing: Syntactic Ambiguity as a Test Case

    Science.gov (United States)

    Rawson, Katherine A.

    2004-01-01

    A prevalent assumption in text comprehension research is that many aspects of text processing are automatic, with automaticity typically defined in terms of properties (e.g., speed and effort). The present research advocates conceptualization of automaticity in terms of underlying mechanisms and evaluates two such accounts, a…

  17. Memory-Based Processing as a Mechanism of Automaticity in Text Comprehension

    Science.gov (United States)

    Rawson, Katherine A.; Middleton, Erica L.

    2009-01-01

    A widespread theoretical assumption is that many processes involved in text comprehension are automatic, with automaticity typically defined in terms of properties (e.g., speed, effort). In contrast, the authors advocate for conceptualization of automaticity in terms of underlying cognitive mechanisms and evaluate one prominent account, the…

  18. Adapting histogram for automatic noise data removal in building interior point cloud data

    Science.gov (United States)

    Shukor, S. A. Abdul; Rushforth, E. J.

    2015-05-01

    3D point cloud data is now preferred by researchers to generate 3D models. These models can be used in a variety of applications, including 3D building interior models. The rise of Building Information Modeling (BIM) for Architectural, Engineering and Construction (AEC) applications has recently given 3D interior modelling more attention. To generate a 3D model representing the building interior, a laser scanner is used to collect the point cloud data. However, this data often comes with noise, due to several factors including the surrounding objects, lighting and the specifications of the laser scanner. This paper highlights the use of histograms to remove the noise data. Histograms, familiar from statistics and probability, are regularly used in a number of applications like image processing, where a histogram can represent the total number of pixels in an image at each intensity level. Here, histograms represent the number of points recorded at range-distance intervals in various projections. As unwanted noise data has a sparser cloud density compared to the required data and is usually situated at a notable distance from the required data, noise data will have lower frequencies in the histogram. By defining the acceptable range using the average frequency, points below this range can be removed. This research has shown that these histograms are capable of automatically removing unwanted data from 3D point cloud data representing building interiors. This feature will aid the data pre-processing step in producing an ideal 3D model from the point cloud data.
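
    A minimal sketch of the histogram idea follows: bin the point coordinates along one projection, treat the average bin frequency as the acceptance threshold, and drop points falling in sparsely populated bins. The axis choice, bin count and thresholding rule are assumptions, not the authors' exact settings.

```python
import numpy as np

def remove_sparse_points(points, axis=2, bins=100):
    """Drop points whose range-distance bin is sparsely populated."""
    pts = np.asarray(points, dtype=float)
    coords = pts[:, axis]
    counts, edges = np.histogram(coords, bins=bins)
    keep_bins = counts >= counts.mean()          # acceptable range via average frequency
    bin_idx = np.clip(np.digitize(coords, edges) - 1, 0, bins - 1)
    return pts[keep_bins[bin_idx]]
```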

  19. Automatic Road Pavement Assessment with Image Processing: Review and Comparison

    Directory of Open Access Journals (Sweden)

    Sylvie Chambon

    2011-01-01

    Full Text Available In the field of noninvasive sensing techniques for civil infrastructures monitoring, this paper addresses the problem of crack detection, in the surface of the French national roads, by automatic analysis of optical images. The first contribution is a state of the art of the image-processing tools applied to civil engineering. The second contribution is about fine-defect detection in pavement surface. The approach is based on a multi-scale extraction and a Markovian segmentation. Third, an evaluation and comparison protocol which has been designed for evaluating this difficult task—the road pavement crack detection—is introduced. Finally, the proposed method is validated, analysed, and compared to a detection approach based on morphological tools.

  20. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan

    2015-01-01

    males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two...... analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50–130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320–450 ms), the high IQ group had greater vMMN responses than the average IQ...... group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre...

  1. Data processing made simple

    CERN Document Server

    Wooldridge, Susan

    2013-01-01

    Data Processing: Made Simple, Second Edition presents discussions of a number of trends and developments in the world of commercial data processing. The book covers the rapid growth of micro- and mini-computers for both home and office use; word processing and the 'automated office'; the advent of distributed data processing; and the continued growth of database-oriented systems. The text also discusses modern digital computers; fundamental computer concepts; information and data processing requirements of commercial organizations; and the historical perspective of the computer industry. The

  2. Automatic Weissenberg data collection system for time-resolved protein crystallography

    CERN Document Server

    Sakabe, N; Higashi, T; Igarashi, N; Suzuki, M; Watanabe, N; Sasaki, K

    2001-01-01

    A totally new type of fully automatic Weissenberg data-collection system called 'Galaxy' was developed and was installed at the Photon Factory. This automatic data collection system consists of a rotated-inclined focusing monochromator, a screenless Weissenberg type camera, an image reader, an eraser, a cassette transportation mechanism, a control console and a safety and high-speed computer network system linking a control console, data processing computers and data servers. The special characteristics of this system are a Weissenberg camera with a fully cylindrical cassette which can be rotated to exchange a frame, a maximum number of 36 images to be recorded in an IP cassette, and a very high speed IP reader with five reading heads. Since the frame exchange time is only a few seconds, this system is applicable for time-resolved protein crystallography at seconds or minutes of time-scale.

  3. Sensitometric comparison of E and F dental radiographic films using manual and automatic processing systems

    Directory of Open Access Journals (Sweden)

    Dabaghi A.

    2008-04-01

    Full Text Available Background and Aim: Processing conditions affect the sensitometric properties of X-ray films. In this study, we aimed to evaluate the sensitometric characteristics of InSight (IP), a new F-speed film, in fresh and used processing solutions under dental office conditions and to compare them with Ektaspeed Plus (EP). Materials and Methods: In this experimental in vitro study, an aluminium step wedge was used to construct characteristic curves for InSight and Ektaspeed Plus films (Eastman Kodak, Rochester, USA). All films were processed in Champion solution (X-ray Iran, Tehran, Iran), both manually and automatically, over a period of six days. Unexposed films of both types were processed manually and automatically to determine base plus fog density. Speed and film contrast were measured according to the ISO definition. Data were analyzed using one-way ANOVA and t tests with P<0.05 as the level of significance. Results: IP was 20 to 22% faster than EP and proved to be an F-speed film when processed automatically and an E-F speed film when processed manually. It was also F-speed in fresh solution and E-speed in old solution. IP and EP contrasts were similar in automatic processing, but EP contrast was higher when processed manually. Both EP and IP films had standard values of base plus fog (<0.35), and B+F densities decreased in old solution. Conclusion: Based on the results of this study, InSight is an F-speed film with a speed at least 20% greater than Ektaspeed. In addition, it reduces patient exposure with no damage to image quality.

  4. Automatic Railway Power Line Extraction Using Mobile Laser Scanning Data

    Science.gov (United States)

    Zhang, Shanxin; Wang, Cheng; Yang, Zhuang; Chen, Yiping; Li, Jonathan

    2016-06-01

    Research on power line extraction technology using mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. First, according to the spatial structure characteristics of the power lines and the trajectory, the significant data are segmented piecewise. Then, a self-adaptive spatial region growing method is used to extract the power lines running parallel to the rails. Finally, PCA (Principal Components Analysis) combined with information entropy theory is used to judge whether a section of the power line is a junction and, if so, which type of junction it belongs to. The least squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on a complicated railway point cloud acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.
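
    The PCA-with-entropy test for junctions can be sketched as follows. The snippet computes the covariance eigenvalues of a candidate section and their Shannon entropy; the interpretation that low entropy indicates a single wire and higher entropy a junction is an assumed reading of the method, and any threshold would have to be tuned.

```python
import numpy as np

def section_shape_entropy(points):
    """Eigenvalue-based shape entropy of a power-line section (illustrative only)."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts - pts.mean(axis=0), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    p = eigvals / eigvals.sum()          # normalised eigenvalues
    p = np.clip(p, 1e-12, None)
    return float(-(p * np.log(p)).sum())

# entropy below some tuned threshold (assumption) -> plain wire; above -> candidate junction
```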

  5. System design of the METC automatic data acquisition and control system

    Energy Technology Data Exchange (ETDEWEB)

    Goff, D. R.; Armstrong, D. L.

    1982-02-01

    A system of computer programs and hardware was developed by the Instrumentation Branch of the Morgantown Energy Technology Center (METC) to provide data acquisition and control features for research projects at the site. The Automatic Data Acquisition and Control System (ADACS) has the capability of servicing up to eight individual projects simultaneously, providing data acquisition, data feedback, and process control where needed. Several novel software features - including a data table driven program, extensive feedback in real time, free format English commands, and high reliability - were incorporated to provide these functions.

  6. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammar and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  7. Prosody's Contribution to Fluency: An Examination of the Theory of Automatic Information Processing

    Science.gov (United States)

    Schrauben, Julie E.

    2010-01-01

    LaBerge and Samuels' (1974) theory of automatic information processing in reading offers a model that explains how and where the processing of information occurs and the degree to which processing of information occurs. These processes are dependent upon two criteria: accurate word decoding and automatic word recognition. However, LaBerge and…

  8. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence.

    Science.gov (United States)

    Shtyrov, Yury; Goryainova, Galina; Tugin, Sergei; Ossadtchi, Alexey; Shestakova, Anna

    2013-01-01

    Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words, as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realized as distributed strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention on spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in perifoveal area outside the visual focus of attention, as the subjects' attention was concentrated on a concurrent non-linguistic visual dual task in the center of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found significant visual MMN, reported here for the first time for unattended perifoveal lexical stimuli. The data suggest early automatic lexical processing of visually presented language which commences rapidly and can take place outside the focus of attention.

  9. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2013-08-01

    Full Text Available Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realised as distributed strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention on spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in perifoveal area outside the visual focus of attention, as the subjects’ attention was concentrated on a concurrent non-linguistic visual dual task in the centre of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found significant visual MMN, reported here for the first time for unattended lexical stimuli presented perifoveally. The data suggest early automatic lexical processing of visually presented language outside the focus of attention.

  10. Automatic processing of unattended object features by functional connectivity

    Directory of Open Access Journals (Sweden)

    Katja Martina Mayer

    2013-05-01

    Full Text Available Observers can selectively attend to object features that are relevant for a task. However, unattended task-irrelevant features may still be processed and possibly integrated with the attended features. This study investigated the neural mechanisms for processing both task-relevant (attended) and task-irrelevant (unattended) object features. The Garner paradigm was adapted for functional magnetic resonance imaging (fMRI) to test whether specific brain areas process the conjunction of features or whether multiple interacting areas are involved in this form of feature integration. Observers attended to shape, colour, or non-rigid motion of novel objects while unattended features changed from trial to trial (change blocks) or remained constant (no-change blocks) during a given block. This block manipulation allowed us to measure the extent to which unattended features affected neural responses which would reflect the extent to which multiple object features are automatically processed. We did not find Garner interference at the behavioural level. However, we designed the experiment to equate performance across block types so that any fMRI results could not be due solely to differences in task difficulty between change and no-change blocks. Attention to specific features localised several areas known to be involved in object processing. No area showed larger responses on change blocks compared to no-change blocks. However, psychophysiological interaction analyses revealed that several functionally-localised areas showed significant positive interactions with areas in occipito-temporal and frontal areas that depended on block type. Overall, these findings suggest that both regional responses and functional connectivity are crucial for processing multi-featured objects.

  11. Automatic delimitation of microwatershed using SRTM data of the NASA

    Directory of Open Access Journals (Sweden)

    Freddy Aníbal Jumbo Castillo

    2015-12-01

    Full Text Available The watershed, as the basic territorial unit for planning and management of water resources, requires proper delimitation of its catchment or drainage area. Faced with the lack of geographic information on the micro watersheds of the Casacay river hydrographic unit, this research aimed at the automatic delimitation of micro watersheds using Geographic Information Systems (GIS) techniques and data from the Shuttle Radar Topographic Mission (SRTM) project at 30 meters spatial resolution. The selected methodology was the Pfafstetter one, with which nine micro watersheds were obtained with their respective codification, allowing continuation of the watershed standardization adopted by Ecuador's Water Secretariat. With the results of the investigation, the watersheds will be updated with more detailed information, promoting the execution of tasks or activities related to the integrated management of the studied hydrographic unit

  12. Automatic Identification of Antibodies in the Protein Data Bank

    Institute of Scientific and Technical Information of China (English)

    LI Xun; WANG Renxiao

    2009-01-01

    An automatic method has been developed for identifying antibody entries in the Protein Data Bank (PDB). Our method, called KIAb (Keyword-based Identification of Antibodies), parses PDB-format files to search for particular keywords relevant to antibodies and makes a judgment accordingly. Our method identified 780 entries as antibodies in the entire PDB. Among them, 767 entries were confirmed by manual inspection, indicating a high success rate of 98.3%. Our method recovered essentially all of the entries compiled in the Summary of Antibody Crystal Structures (SACS) database. It also identified a number of entries missed by SACS. Our method thus provides a more complete mining of antibody entries in the PDB with a very low false positive rate.
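
    A keyword screen of this kind is straightforward to sketch. The snippet below scans the header records of a PDB-format file for antibody-related terms; the function name and keyword list are illustrative assumptions and are much cruder than the published KIAb rules.

```python
# Hypothetical keyword screen in the spirit of KIAb (keyword list is an assumption).
ANTIBODY_KEYWORDS = ("ANTIBODY", "IMMUNOGLOBULIN", "FAB FRAGMENT", "FV FRAGMENT", "IGG")

def looks_like_antibody(pdb_path):
    """Scan the header records of a PDB-format file for antibody-related keywords."""
    with open(pdb_path, "r", errors="ignore") as fh:
        header_text = " ".join(
            line for line in fh
            if line.startswith(("HEADER", "TITLE", "COMPND", "KEYWDS"))
        ).upper()
    return any(keyword in header_text for keyword in ANTIBODY_KEYWORDS)
```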

  13. Evolutionary synthesis of automatic classification on astroinformatic big data

    Science.gov (United States)

    Kojecky, Lumir; Zelinka, Ivan; Saloun, Petr

    2016-06-01

    This article describes initial experiments using a new approach to the automatic identification of Be and B[e] star spectra in large archives. With the enormous amount of such data, it is no longer feasible to analyze them using classical approaches. We introduce an evolutionary synthesis of the classification by means of analytic programming, one of the methods of symbolic regression. With this method, we synthesize the mathematical formulas that best approximate chosen samples of the stellar spectra; the category whose formula shows the lowest difference from a particular spectrum is then selected as its class. The results show that classification of stellar spectra by means of analytic programming is able to identify different shapes of the spectra.

  14. Research on Geological Survey Data Management and Automatic Mapping Technology

    Directory of Open Access Journals (Sweden)

    Dong Huang

    2017-01-01

    Full Text Available The data management of a large geological survey is not an easy task. To efficiently store and manage the huge datasets, a database of geological information has been created on the basis of Microsoft Access. Using this database, large amounts of geological information can be stored and managed easily and scientifically. The geological maps—borehole diagrams, rose diagrams of joint trends, and joint isointensity diagrams—are traditionally drawn by hand, which is neither efficient nor easy to modify. To solve these problems, an automatic mapping method and the associated interfaces have been developed using VS2010 and the geological information database; these developments are presented in this article. The article describes the theoretical basis of the new method in detail and provides a case study of practical engineering to demonstrate its application.

  15. Automatic Detection and Processing of Attributes Inconsistency for Fuzzy Ontologies Merging

    Directory of Open Access Journals (Sweden)

    Yonghong Luo

    2013-11-01

    Full Text Available Semantic fusion of multiple data sources and semantic interoperability between heterogeneous systems in a distributed environment can be implemented by integrating multiple fuzzy local ontologies, and ontology merging is one of the valid ways to achieve such integration. In order to solve the problem of attribute inconsistency in concept mapping during fuzzy ontology merging, we present an automatic detection algorithm for inconsistencies in the range, number and membership grade of attributes between mapped concepts, and adopt a corresponding processing strategy during fuzzy ontology merging according to the type of attribute inconsistency. Experimental results show that, with regard to merging accuracy, the fuzzy ontology merging system in which the automatic detection algorithm and processing strategy for attribute inconsistency are embedded outperforms traditional ontology merging systems such as GLUE, PROMPT and Chimaera.

  16. Improving the automatic wavelength calibration of EMIR spectroscopic data

    Science.gov (United States)

    Cardiel, N.; Pascual, S.; Picazo, P.; Gallego, J.; Garzón, F.; Castro-Rodríguez, N.; González-Fernández, C.; Hammersley, P.; Insausti, M.; Manjavacas, E.; Miluzio, M.

    2017-03-01

    EMIR, the near-infrared (NIR) camera-spectrograph operating in the 0.9-2.5 μm wavelength range, is being commissioned at the Nasmyth focus of the Gran Telescopio CANARIAS. One of the most outstanding capabilities of EMIR will be its multi-object spectroscopic mode which, with the help of a robotic reconfigurable slit system, will allow around 53 spectra to be taken simultaneously. A data reduction pipeline, PyEmir, based on Python, is being developed in order to facilitate the automatic reduction of EMIR data taken in both imaging and spectroscopy modes. Focusing on the reduction of spectroscopic data, critical manipulations include the geometric distortion correction and the wavelength calibration. Although these reduction steps are usually carried out separately, it is important to realise that such manipulations involve data rebinning and interpolation, which unavoidably increase error correlation and degrade resolution. In order to minimise these effects, it is possible to incorporate those data manipulations as a single geometric transformation. This approach is being used in the development of PyEmir. For this purpose, the geometric transformations available in the Python package Scikit-image are being used. This work was funded by the Spanish Programa Nacional de Astronomía y Astrofísica under grant AYA2013-46724-P.
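
    The benefit of combining the corrections is that the spectrum is resampled only once. The sketch below shows the general pattern with scikit-image, composing two transforms and applying them in a single warp; the affine stand-ins are placeholders, since the actual PyEmir transformations are more complex than what is shown here.

```python
# Illustrative only: chaining two geometric corrections into a single warp with
# scikit-image so the data are interpolated once instead of twice.
import numpy as np
from skimage import transform

distortion = transform.AffineTransform(shear=0.02)                         # stand-in for distortion correction
wavelength_rectification = transform.AffineTransform(scale=(1.001, 1.0))   # stand-in for wavelength calibration

combined = distortion + wavelength_rectification   # compose into one transformation

spectrum_image = np.random.rand(256, 2048)          # dummy 2D spectrum
rectified = transform.warp(spectrum_image, combined.inverse, order=3)
```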

  17. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, understood as the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data ranges over data mining, data analysis and decision making, drawing statistical rules and mathematical patterns through systematic or automatic reasoning. Big data helps serve our lives better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing system integrity and implementing intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in effective management of dynamic and large-scale data, and efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing the data monitoring, data processing and decision-making in a realistic and effective way. Considering the big size of data, variety of data and frequent chan...

  18. A Bottom-Up Approach for Automatically Grouping Sensor Data Layers by their Observed Property

    Directory of Open Access Journals (Sweden)

    Steve H.L. Liang

    2013-01-01

    Full Text Available The Sensor Web is a growing phenomenon where an increasing number of sensors are collecting data in the physical world, to be made available over the Internet. To help realize the Sensor Web, the Open Geospatial Consortium (OGC has developed open standards to standardize the communication protocols for sharing sensor data. Spatial Data Infrastructures (SDIs are systems that have been developed to access, process, and visualize geospatial data from heterogeneous sources, and SDIs can be designed specifically for the Sensor Web. However, there are problems with interoperability associated with a lack of standardized naming, even with data collected using the same open standard. The objective of this research is to automatically group similar sensor data layers. We propose a methodology to automatically group similar sensor data layers based on the phenomenon they measure. Our methodology is based on a unique bottom-up approach that uses text processing, approximate string matching, and semantic string matching of data layers. We use WordNet as a lexical database to compute word pair similarities and derive a set-based dissimilarity function using those scores. Two approaches are taken to group data layers: mapping is defined between all the data layers, and clustering is performed to group similar data layers. We evaluate the results of our methodology.

  19. Automatic Extraction of Mangrove Vegetation from Optical Satellite Data

    Science.gov (United States)

    Agrawal, Mayank; Sushma Reddy, Devireddy; Prasad, Ram Chandra

    2016-06-01

    Mangroves, the intertidal halophytic vegetation, form one of the most significant and diverse ecosystems in the world. They protect the coast from sea erosion and other natural disasters like tsunamis and cyclones. In view of their increasing destruction and degradation, mapping of this vegetation is a priority. Globally, researchers have mapped mangrove vegetation using visual interpretation methods, digital classification approaches, or a combination of both (hybrid) approaches using varied spatial and spectral data sets. In the recent past, techniques have been developed to extract this coastal vegetation automatically using various algorithms. In the current study we delineate mangrove vegetation using LISS III and Landsat 8 data sets for selected locations of the Andaman and Nicobar islands. Towards this, we attempt a segmentation method, which characterizes the mangrove vegetation based on its tone and texture, and a pixel-based classification method, where the mangroves are identified based on their pixel values. The results obtained from both approaches are validated using maps available for the selected region, with good delineation accuracy. The main focus of this paper is the simplicity of the methods and the availability of the data on which they are applied, as these data (Landsat) are readily available for many regions. Our methods are very flexible and can be applied to any region.

  20. Automatic array alignment in data-parallel programs

    Science.gov (United States)

    Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert; Teng, Shang-Hua

    1993-01-01

    FORTRAN 90 and other data-parallel languages express parallelism in the form of operations on data aggregates such as arrays. Misalignment of the operands of an array operation can reduce program performance on a distributed-memory parallel machine by requiring nonlocal data accesses. Determining array alignments that reduce communication is therefore a key issue in compiling such languages. We present a framework for the automatic determination of array alignments in array-based, data-parallel languages. Our language model handles array sectioning, reductions, spreads, transpositions, and masked operations. We decompose alignment functions into three constituents: axis, stride, and offset. For each of these subproblems, we show how to solve the alignment problem for a basic block of code, possibly containing common subexpressions. Alignments are generated for all array objects in the code, both named program variables and intermediate results. We assign computation to processors by virtue of explicit alignment of all temporaries; the resulting work assignment is in general better than that provided by the 'owner-computes' rule. Finally, we present some ideas for dealing with control flow, replication, and dynamic alignments that depend on loop induction variables.

  1. Analysis on the Influence of Automatic Station Temperature Data on the Sequence Continuity of Historical Meteorological Data

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The research aimed to study the influence of automatic station data on the sequence continuity of historical meteorological data. [Method] Based on the temperature data measured by the automatic meteorological station and the corresponding manual observation data from January to December 2001, the monthly average, maximum and minimum temperatures at the automatic station were compared with the corresponding manually observed temperature data in the parallel observation peri...

  2. Photonic curvilinear data processing

    Science.gov (United States)

    Browning, Clyde; Quaglio, Thomas; Figueiro, Thiago; Pauliac, Sébastien; Belledent, Jérôme; Fay, Aurélien; Bustos, Jessy; Marusic, Jean-Christophe; Schiavone, Patrick

    2014-10-01

    With more and more photonic data presence in e-beam lithography, the need for efficient and accurate data fracturing is required to meet acceptable manufacturing cycle time. Large photonic based layouts now create high shot count patterns for VSB based tools. Multiple angles, sweeping curves, and non-orthogonal data create a challenge for today's e-beam tools that are more efficient on Manhattan style data. This paper describes techniques developed and used for creating fractured data for VSB based pattern generators. Proximity Effect Correction is also applied during the fracture process, taking into account variable shot sizes to apply for accuracy and design style. Choosing different fracture routines for pattern data on-the-fly allows for fast and efficient processing. Data interpretation is essential for processing curvilinear data as to its size, angle, and complexity. Fracturing complex angled data into "efficient" shot counts is no longer practical as shot creation now requires knowledge of the actual data content as seen in photonic based pattern data. Simulation and physical printing results prove the implementations for accuracy and write times compared to traditional VSB writing strategies on photonic data. Geometry tolerance is used as part of the fracturing algorithm for controlling edge placement accuracy and tuning to different e-beam processing parameters.

  3. Designing and Building an Automatic Information Retrieval System for Handling the Arabic Data

    Directory of Open Access Journals (Sweden)

    Ibrahiem M.M. El Emary

    2005-01-01

    Full Text Available This paper aims to design and build an automatic information retrieval system for handling Arabic data. It also presents a comparison between the retrieval results obtained with the vector space model under two different indexing methods: full-word indexing and root indexing. The proposed automatic information retrieval system was implemented and built using a traditional model technique, the Vector Space Model (VSM), with the cosine similarity measure. The output results indicate that root indexing improves retrieval performance on Arabic documents more than full-word indexing; furthermore, it reduces the size of the stored data and minimizes the system processing time.
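
    The retrieval core of such a system is small. The sketch below implements the cosine measure over raw term-frequency vectors; whether the tokens are Arabic roots or full words depends on the indexing method, and the naive tokenisation and ranking helper are assumptions.

```python
import math
from collections import Counter

def cosine_similarity(doc_tokens, query_tokens):
    """Cosine measure between raw term-frequency vectors of a document and a query."""
    d, q = Counter(doc_tokens), Counter(query_tokens)
    shared = set(d) & set(q)
    dot = sum(d[t] * q[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in d.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

def rank(documents, query_tokens):
    """Return document ids sorted by decreasing cosine similarity to the query."""
    scores = {doc_id: cosine_similarity(tokens, query_tokens) for doc_id, tokens in documents.items()}
    return sorted(scores, key=scores.get, reverse=True)
```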

  4. Automatic fault detection on BIPV systems without solar irradiation data

    CERN Document Server

    Leloux, Jonathan; Luna, Alberto; Desportes, Adrien

    2014-01-01

    BIPV systems are small PV generation units spread out over the territory, and whose characteristics are very diverse. This makes difficult a cost-effective procedure for monitoring, fault detection, performance analyses, operation and maintenance. As a result, many problems affecting BIPV systems go undetected. In order to carry out effective automatic fault detection procedures, we need a performance indicator that is reliable and that can be applied on many PV systems at a very low cost. The existing approaches for analyzing the performance of PV systems are often based on the Performance Ratio (PR), whose accuracy depends on good solar irradiation data, which in turn can be very difficult to obtain or cost-prohibitive for the BIPV owner. We present an alternative fault detection procedure based on a performance indicator that can be constructed on the sole basis of the energy production data measured at the BIPV systems. This procedure does not require the input of operating conditions data, such as solar ...

  5. AUTOMATIC REGISTRATION OF MULTI-SOURCE DATA USING MUTUAL INFORMATION

    Directory of Open Access Journals (Sweden)

    E. G. Parmehr

    2012-07-01

    Full Text Available Automatic image registration is a basic step in multi-sensor data integration in remote sensing and photogrammetric applications such as data fusion. The effectiveness of Mutual Information (MI) as a technique for automated multi-sensor image registration has previously been demonstrated for medical and remote sensing applications. In this paper, a new General Weighted MI (GWMI) approach that improves the robustness of MI to local maxima, particularly in the case of registering optical imagery and 3D point clouds, is presented. Two different methods, a Gaussian Mixture Model (GMM) and Kernel Density Estimation, have been used to define the weight function of the joint probability, regardless of the modality of the data being registered. The Expectation Maximization method is then used to estimate the parameters of the GMM, and in order to reduce the cost of computation, a multi-resolution strategy has been used. The performance of the proposed GWMI method for the registration of aerial orthoimagery and LiDAR range and intensity information has been experimentally evaluated, and the obtained results are presented.
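
    As a baseline for what GWMI extends, the snippet below computes plain (unweighted) mutual information from a joint histogram of two co-registered images; the bin count is an assumption, and the weighting by a Gaussian mixture that defines GWMI is deliberately not shown.

```python
import numpy as np

def mutual_information(image_a, image_b, bins=64):
    """Baseline (unweighted) mutual information from a joint grey-level histogram."""
    hist_2d, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```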

  6. Automatic processing of CERN video, audio and photo archives

    Energy Technology Data Exchange (ETDEWEB)

    Kwiatek, M [CERN, Geneva (Switzerland)], E-mail: Michal.Kwiatek@cem.ch

    2008-07-15

    The digitization of CERN audio-visual archives, a major task currently in progress, will generate over 40 TB of video, audio and photo files. Storing these files is one issue, but a far more important challenge is to provide long-term coherence of the archive and to make these files available on-line with minimum manpower investment. An infrastructure, based on standard CERN services, has been implemented, whereby master files, stored in the CERN Distributed File System (DFS), are discovered and scheduled for encoding into lightweight web formats based on predefined profiles. Changes in master files, conversion profiles or in the metadata database (read from CDS, the CERN Document Server) are automatically detected and the media re-encoded whenever necessary. The encoding processes are run on virtual servers provided on-demand by the CERN Server Self Service Centre, so that new servers can be easily configured to adapt to higher load. Finally, the generated files are made available from the CERN standard web servers with streaming implemented using Windows Media Services.

  7. Algorithm of automatic generation of technology process and process relations of automotive wiring harnesses

    Institute of Scientific and Technical Information of China (English)

    XU Benzhu; ZHU Jiman; LIU Xiaoping

    2012-01-01

    Identifying each process and the constraint relations among processes quickly and accurately from complex wiring harness drawings is the basis for formulating process routes. According to knowledge of automotive wiring harnesses and the characteristics of wiring harness components, we established a wiring harness graph model. We then investigated an algorithm for identifying technology processes automatically, and finally we describe the relationships between processes by introducing a constraint matrix, in order to lay a good foundation for harness process planning and production scheduling.

  8. An Automatic Cycle-Slip Processing Method and Its Precision Analysis

    Institute of Scientific and Technical Information of China (English)

    ZHENG Zuoya; LU Xiushan

    2006-01-01

    On the basis of analyzing and researching the current algorithms of cycle-slip detection and correction, a new method of cycle-slip detection and correction is put forward in this paper: a reasonable cycle-slip detection condition and algorithm, with the corresponding program COMPRE (COMpass PRE-processing), to detect and correct cycle slips automatically. Comparison with the GIPSY and GAMIT software, for example, shows that this method is effective and credible for cycle-slip detection and correction in GPS data pre-processing.

  9. Measuring Service Reliability Using Automatic Vehicle Location Data

    Directory of Open Access Journals (Sweden)

    Zhenliang Ma

    2014-01-01

    Full Text Available Bus service reliability has become a major concern for both operators and passengers. Buffer time measures are believed to be appropriate for approximating passengers' experienced reliability in the context of departure planning. Two issues with regard to buffer time estimation are addressed, namely, performance disaggregation and capturing passengers' perspectives on reliability. A Gaussian mixture models based method is applied to disaggregate the performance data. Based on the mixture distribution, a reliability buffer time (RBT) measure is proposed from the passengers' perspective. A set of expected reliability buffer time measures is developed for operators by combining RBTs at different spatial-temporal levels. Average and latest trip duration measures are proposed for passengers, which can be used to choose a service mode and determine the departure time. Using empirical data from the automatic vehicle location system in Brisbane, Australia, the existence of mixture service states is verified and the advantage of the mixture distribution model in fitting the travel time profile is demonstrated. Numerical experiments validate that the proposed reliability measure is capable of quantifying service reliability consistently, while conventional measures may provide inconsistent results. Potential applications for operators and passengers are also illustrated, including reliability improvement and trip planning.
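
    A compact sketch of the two ingredients, mixture-based disaggregation and a buffer-time style indicator, is given below using scikit-learn; the number of service states and the percentiles are assumptions rather than the paper's calibrated values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def reliability_buffer_time(travel_times_min, n_states=2, upper_pct=95):
    """Buffer time = high percentile minus median of observed travel times,
    reported together with the fitted mixture 'service states'."""
    x = np.asarray(travel_times_min, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_states, random_state=0).fit(x)
    buffer_time = np.percentile(x, upper_pct) - np.percentile(x, 50)
    states = sorted(zip(gmm.means_.ravel(), gmm.weights_))  # (mean, weight) per state
    return float(buffer_time), states
```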

  10. Information Processing - Administrative Data Processing

    Science.gov (United States)

    Bubenko, Janis

    A three semester, 60-credit course package in the topic of Administrative Data Processing (ADP), offered in 1966 at Stockholm University (SU) and the Royal Institute of Technology (KTH) is described. The package had an information systems engineering orientation. The first semester focused on datalogical topics, while the second semester focused on the infological topics. The third semester aimed to deepen the students’ knowledge in different parts of ADP and at writing a bachelor thesis. The concluding section of this paper discusses various aspects of the department’s first course effort. The course package led to a concretisation of our discipline and gave our discipline an identity. Our education seemed modern, “just in time”, and well adapted to practical needs. The course package formed the first concrete activity of a group of young teachers and researchers. In a forty-year perspective, these people have further developed the department and the topic to an internationally well-reputed body of knowledge and research. The department has produced more than thirty professors and more than one hundred doctoral degrees.

  11. Memory as a function of attention, level of processing, and automatization.

    Science.gov (United States)

    Fisk, A D; Schneider, W

    1984-04-01

    The relationships between long-term memory (LTM) modification, attentional allocation, and type of processing are examined. Automatic/controlled processing theory (Schneider & Shiffrin, 1977) predicts that the nature and amount of controlled processing determines LTM storage and that stimuli can be automatically processed with no lasting LTM effect. Subjects performed the following: (a) an intentional learning, (b) a semantic categorization, (c) a graphic categorization, (d) a distracting digit-search while intentionally learning words, and (e) a distracting digit-search while ignoring words. Frequency judgments were more accurate in the semantic and intentional conditions than the graphic condition. Frequency judgments in the digit-search conditions were near chance. Experiment 2 extensively trained subjects to develop automatic categorization. Automatic categorization produced no frequency learning and little recognition. These results also disconfirm the Hasher and Zacks (1979) "automatic encoding" proposal regarding the nature of processing.

  12. An Automatic Number Plate Recognition System under Image Processing

    Directory of Open Access Journals (Sweden)

    Sarbjit Kaur

    2016-03-01

    Full Text Available Automatic Number Plate Recognition (ANPR) is an application of computer vision and image processing technology that takes photographs of vehicles as input images and, by extracting the number plate from the whole vehicle image, displays the number plate information as text. The ANPR system mainly consists of four phases: acquisition of the vehicle image and pre-processing, extraction of the number plate area, character segmentation, and character recognition. The overall accuracy and efficiency of the whole ANPR system depends on the number plate extraction phase, since the character segmentation and character recognition phases also depend on its output. Further, the accuracy of the number plate extraction phase depends on the quality of the captured vehicle image: the higher the quality of the captured input image, the better the chances of properly extracting the vehicle number plate area. The existing ANPR methods work well for dark and bright/light images but not for low-contrast, blurred and noisy images, and detection of the exact number plate area with the existing approach is unsuccessful even after applying existing filtering and enhancement techniques to these types of images. Owing to incorrect extraction of the number plate area, character segmentation and character recognition also fail in this case with the existing method. To overcome these drawbacks, an efficient approach for ANPR is proposed in which the input vehicle image is first pre-processed by iterative bilateral filtering and adaptive histogram equalization, and the number plate is extracted from the pre-processed vehicle image using morphological operations, image subtraction, image binarization/thresholding, Sobel vertical edge detection and bounding box analysis. Sometimes the extracted plate area also contains noise, bolts, frames, etc., so the extracted plate area is enhanced by using morphological operations to improve the quality of
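
    The pre-processing chain named in the abstract maps naturally onto OpenCV primitives. The sketch below strings together bilateral filtering, CLAHE, vertical Sobel edges, morphological closing and bounding-box analysis; all parameter values and the aspect-ratio filter are assumptions, not the author's settings.

```python
import cv2
import numpy as np

def extract_plate_candidates(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)      # edge-preserving smoothing
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(smoothed)                                             # adaptive histogram equalization
    edges = cv2.Sobel(enhanced, cv2.CV_8U, dx=1, dy=0, ksize=3)                  # vertical edges dominate on plates
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)                   # join edges into plate-like blobs
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    return [(x, y, w, h) for x, y, w, h in boxes if 2.0 < w / max(h, 1) < 6.0]   # plate-like aspect ratio
```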

  13. Automatic detection and segmentation of lymph nodes from CT data.

    Science.gov (United States)

    Barbu, Adrian; Suehling, Michael; Xu, Xun; Liu, David; Zhou, S Kevin; Comaniciu, Dorin

    2012-02-01

    Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning based approach to solid lymph node detection that relies on marginal space learning to achieve great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding a 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding a 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for axillary areas and 15-40 s for pelvic. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.

  14. Controlled versus automatic processes: which is dominant to safety? The moderating effect of inhibitory control.

    Science.gov (United States)

    Xu, Yaoshan; Li, Yongjuan; Ding, Weidong; Lu, Fan

    2014-01-01

    This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with lower level of inhibitory control are influenced more by automatic association, whereas those of employees with higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end.

  15. Controlled versus automatic processes: which is dominant to safety? The moderating effect of inhibitory control.

    Directory of Open Access Journals (Sweden)

    Yaoshan Xu

    Full Text Available This study explores the precursors of employees' safety behaviors based on a dual-process model, which suggests that human behaviors are determined by both controlled and automatic cognitive processes. Employees' responses to a self-reported survey on safety attitudes capture their controlled cognitive process, while the automatic association concerning safety measured by an Implicit Association Test (IAT) reflects employees' automatic cognitive processes about safety. In addition, this study investigates the moderating effects of inhibition on the relationship between self-reported safety attitude and safety behavior, and that between automatic associations towards safety and safety behavior. The results suggest significant main effects of self-reported safety attitude and automatic association on safety behaviors. Further, the interaction between self-reported safety attitude and inhibition and that between automatic association and inhibition each predict unique variances in safety behavior. Specifically, the safety behaviors of employees with lower level of inhibitory control are influenced more by automatic association, whereas those of employees with higher level of inhibitory control are guided more by self-reported safety attitudes. These results suggest that safety behavior is the joint outcome of both controlled and automatic cognitive processes, and the relative importance of these cognitive processes depends on employees' individual differences in inhibitory control. The implications of these findings for theoretical and practical issues are discussed at the end.

  16. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data

    Science.gov (United States)

    Li, Lin; Li, Dalin; Zhu, Haihong; Li, You

    2016-10-01

    Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. Two components of each individual tree - a trunk and a crown can be extracted by the dual growing method. This method consists of coarse classification, through which most of artifacts are removed; the automatic selection of appropriate seeds for individual trees, by which the common manual initial setting is avoided; a dual growing process that separates one tree from others by circumscribing a trunk in an adaptive growing radius and segmenting a crown in constrained growing regions; and a refining process that draws a singular trunk from the interlaced other objects. The method is verified by two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.

  17. Automatic assignment of protein backbone resonances by direct spectrum inspection in targeted acquisition of NMR data.

    Science.gov (United States)

    Wong, Leo E; Masse, James E; Jaravine, Victor; Orekhov, Vladislav; Pervushin, Konstantin

    2008-10-01

    The necessity to acquire large multidimensional datasets, a basis for assignment of NMR resonances, results in long data acquisition times during which substantial degradation of a protein sample might occur. Here we propose a method applicable for such a protein for automatic assignment of backbone resonances by direct inspection of multidimensional NMR spectra. In order to establish an optimal balance between completeness of resonance assignment and losses of cross-peaks due to dynamic processes/degradation of protein, assignment of backbone resonances is set as a stirring criterion for dynamically controlled targeted nonlinear NMR data acquisition. The result is demonstrated with the 12 kDa (13)C,(15) N-labeled apo-form of heme chaperone protein CcmE, where hydrolytic cleavage of 29 C-terminal amino acids is detected. For this protein, 90 and 98% of manually assignable resonances are automatically assigned within 10 and 40 h of nonlinear sampling of five 3D NMR spectra, respectively, instead of 600 h needed to complete the full time domain grid. In addition, resonances stemming from degradation products are identified. This study indicates that automatic resonance assignment might serve as a guiding criterion for optimal run-time allocation of NMR resources in applications to proteins prone to degradation.

  18. Automatic assignment of protein backbone resonances by direct spectrum inspection in targeted acquisition of NMR data

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Leo E. [Nanyang Technological University, School of Biological Sciences (Singapore); Masse, James E. [National Institutes of Health (United States); Jaravine, Victor [J. W. Goethe-University Frankfurt, Institute of Biophysical Chemistry (Germany); Orekhov, Vladislav [Gothenburg University, Swedish NMR Centre (Sweden); Pervushin, Konstantin [Nanyang Technological University, School of Biological Sciences (Singapore)], E-mail: kpervushin@ntu.edu.sg

    2008-10-15

    The necessity to acquire large multidimensional datasets, a basis for assignment of NMR resonances, results in long data acquisition times during which substantial degradation of a protein sample might occur. Here we propose a method applicable for such a protein for automatic assignment of backbone resonances by direct inspection of multidimensional NMR spectra. In order to establish an optimal balance between completeness of resonance assignment and losses of cross-peaks due to dynamic processes/degradation of protein, assignment of backbone resonances is set as a stirring criterion for dynamically controlled targeted nonlinear NMR data acquisition. The result is demonstrated with the 12 kDa {sup 13}C,{sup 15} N-labeled apo-form of heme chaperone protein CcmE, where hydrolytic cleavage of 29 C-terminal amino acids is detected. For this protein, 90 and 98% of manually assignable resonances are automatically assigned within 10 and 40 h of nonlinear sampling of five 3D NMR spectra, respectively, instead of 600 h needed to complete the full time domain grid. In addition, resonances stemming from degradation products are identified. This study indicates that automatic resonance assignment might serve as a guiding criterion for optimal run-time allocation of NMR resources in applications to proteins prone to degradation.

  19. Are Automatic Imitation and Spatial Compatibility Mediated by Different Processes?

    Science.gov (United States)

    Cooper, Richard P.; Catmur, Caroline; Heyes, Cecilia

    2013-01-01

    Automatic imitation or "imitative compatibility" is thought to be mediated by the mirror neuron system and to be a laboratory model of the motor mimicry that occurs spontaneously in naturalistic social interaction. Imitative compatibility and spatial compatibility effects are known to depend on different stimulus dimensions--body…

  20. The Masked Semantic Priming Effect Is Task Dependent: Reconsidering the Automatic Spreading Activation Process

    Science.gov (United States)

    de Wit, Bianca; Kinoshita, Sachiko

    2015-01-01

    Semantic priming effects are popularly explained in terms of an automatic spreading activation process, according to which the activation of a node in a semantic network spreads automatically to interconnected nodes, preactivating a semantically related word. It is expected from this account that semantic priming effects should be routinely…

  1. Automatic evaluation and data generation for analytical chemistry instrumental analysis exercises

    Directory of Open Access Journals (Sweden)

    Arsenio Muñoz de la Peña

    2014-01-01

    Full Text Available In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and the results evaluated immediately. The proposed system provides each student with a different set of experimental data generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to provide students with a set of practical problems to complement their regular laboratory work along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS, an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first exercise involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests, a comparison test of the mean and a t-paired test.
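
    As an illustration of the kind of computation the first and third exercises call for, the following Python sketch fits a calibration line, derives common quality parameters, and runs a paired comparison test. All data values and the LOD/LOQ factors are illustrative assumptions, not taken from the exercises themselves.

      # Illustrative sketch only: calibration regression with quality parameters,
      # plus a paired t-test, as in the exercises described above.
      import numpy as np
      from scipy import stats

      conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])           # standard concentrations (e.g. mg/L)
      signal = np.array([0.02, 0.21, 0.39, 0.83, 1.58])    # instrument response (made up)

      fit = stats.linregress(conc, signal)
      residuals = signal - (fit.intercept + fit.slope * conc)
      s_yx = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))  # standard error of the fit
      lod = 3.3 * s_yx / fit.slope                               # a common limit-of-detection estimate
      loq = 10.0 * s_yx / fit.slope                              # limit of quantification
      print(f"slope={fit.slope:.3f} intercept={fit.intercept:.3f} r={fit.rvalue:.4f} LOD={lod:.2f} LOQ={loq:.2f}")

      # Paired comparison of two methods applied to the same samples.
      method_a = np.array([5.1, 4.8, 6.2, 5.5, 5.9])
      method_b = np.array([5.0, 4.9, 6.0, 5.7, 5.8])
      t_stat, p_value = stats.ttest_rel(method_a, method_b)
      print(f"paired t-test: t={t_stat:.3f}, p={p_value:.3f}")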

  2. Automatic physiological waveform processing for FMRI noise correction and analysis.

    Directory of Open Access Journals (Sweden)

    Daniel J Kelley

    Full Text Available Functional MRI resting state and connectivity studies of brain focus on neural fluctuations at low frequencies which share power with physiological fluctuations originating from lung and heart. Due to the lack of automated software to process physiological signals collected at high magnetic fields, a gap exists in the processing pathway between the acquisition of physiological data and its use in fMRI software for both physiological noise correction and functional analyses of brain activation and connectivity. To fill this gap, we developed an open source, physiological signal processing program, called PhysioNoise, in the python language. We tested its automated processing algorithms and dynamic signal visualization on resting monkey cardiac and respiratory waveforms. PhysioNoise consistently identifies physiological fluctuations for fMRI noise correction and also generates covariates for subsequent analyses of brain activation and connectivity.

  3. RACORO aerosol data processing

    Energy Technology Data Exchange (ETDEWEB)

    Elisabeth Andrews

    2011-10-31

    The RACORO aerosol data (cloud condensation nuclei (CCN), condensation nuclei (CN) and aerosol size distributions) need further processing to be useful for model evaluation (e.g., GCM droplet nucleation parameterizations) and other investigations. These tasks include: (1) Identification and flagging of 'splash' contaminated Twin Otter aerosol data. (2) Calculation of actual supersaturation (SS) values in the two CCN columns flown on the Twin Otter. (3) Interpolation of CCN spectra from SGP and Twin Otter to 0.2% SS. (4) Processing of data for spatial variability studies. (5) Calculation of light scattering from measured aerosol size distributions. Below we first briefly describe the measurements and then describe the results of several data processing tasks that have been completed, paving the way for the scientific analyses for which the campaign was designed. The end result of this research will be several aerosol data sets which can be used to achieve some of the goals of the RACORO mission including the enhanced understanding of cloud-aerosol interactions and improved cloud simulations in climate models.

  4. PALSAR ground data processing

    Science.gov (United States)

    Frick, Heinrich; Palsetia, Marzban; Carande, Richard; Curlander, James C.

    2002-02-01

    The upcoming launches of new satellites like ALOS, Envisat, Radarsat2 and ECHO will pose a significant challenge for many ground stations, namely to integrate new SAR processing software into their existing systems. Vexcel Corporation in Boulder, Colorado, has built a SAR processing system, named APEX-Suite, for spaceborne SAR satellites that can easily be expanded for the next generation of SAR satellites. APEX-Suite includes an auto-satellite-detecting Level 0 Processor that includes bit-error correction, data quality characterization, and as a unique feature, a sophisticated and very accurate Doppler centroid estimator. The Level 1 processing is divided into the strip mode processor FOCUST, based on the well-proven range-Doppler algorithm, and the SWATHT ScanSAR processor that uses the Chirp Z Transform algorithm. A high-accuracy ortho-rectification processor produces systematic and precision corrected Level 2 SAR image products. The PALSAR instrument is an L-band SAR with multiple fine and standard resolution beams in strip mode, and several wide-swath ScanSAR modes. We will address the adaptation process of Vexcel's APEX-Suite processing system for the PALSAR sensor and discuss image quality characteristics based on processed simulated point target phase history data.

  5. FULLY AUTOMATIC IMAGE-BASED REGISTRATION OF UNORGANIZED TLS DATA

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2012-09-01

    Full Text Available The estimation of the transformation parameters between different point clouds is still a crucial task as it is usually followed by scene reconstruction, object detection or object recognition. Therefore, the estimates should be as accurate as possible. Recent developments show that it is feasible to utilize both the measured range information and the reflectance information sampled as image, as 2D imagery provides additional information. In this paper, an image-based registration approach for TLS data is presented which consists of two major steps. In the first step, the order of the scans is calculated by checking the similarity of the respective reflectance images via the total number of SIFT correspondences between them. Subsequently, in the second step, for each SIFT correspondence the respective SIFT features are filtered with respect to their reliability concerning the range information and projected to 3D space. Combining the 3D points with 2D observations on a virtual plane yields 3D-to-2D correspondences from which the coarse transformation parameters can be estimated via a RANSAC-based registration scheme including the EPnP algorithm. After this coarse registration, the 3D points are again checked for consistency by using constraints based on the 3D distance, and, finally, the remaining 3D points are used for an ICP-based fine registration. Thus, the proposed methodology provides a fast, reliable, accurate and fully automatic image-based approach for the registration of unorganized point clouds without the need of a priori information about the order of the scans, the presence of regular surfaces or human interaction.
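
    A minimal sketch of the coarse step described above (SIFT matching on the reflectance images followed by a RANSAC-wrapped EPnP pose estimate), assuming OpenCV. The function that returns the filtered 3D point for a reference keypoint, the camera matrix of the virtual plane and the ratio-test threshold are placeholders, and the ICP fine registration is not shown.

      import cv2
      import numpy as np

      def coarse_register(img_ref, img_new, point3d_of, K):
          """img_*: 8-bit reflectance images; point3d_of: maps a reference keypoint to its
          range-derived 3D point; K: 3x3 camera matrix of the virtual image plane."""
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(img_ref, None)
          kp2, des2 = sift.detectAndCompute(img_new, None)

          # SIFT correspondences via a standard ratio test.
          pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
          matches = [m for m, n in (p for p in pairs if len(p) == 2) if m.distance < 0.75 * n.distance]

          obj_pts = np.float32([point3d_of(kp1[m.queryIdx]) for m in matches])
          img_pts = np.float32([kp2[m.trainIdx].pt for m in matches])

          # RANSAC-based scheme around the EPnP solver for the coarse transformation.
          ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None,
                                                       flags=cv2.SOLVEPNP_EPNP)
          return rvec, tvec, inliers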

  6. Automatic identification of artifacts in electrodermal activity data.

    Science.gov (United States)

    Taylor, Sara; Jaques, Natasha; Chen, Weixuan; Fedor, Szymon; Sano, Akane; Picard, Rosalind

    2015-01-01

    Recently, wearable devices have allowed for long term, ambulatory measurement of electrodermal activity (EDA). Despite the fact that ambulatory recording can be noisy, and recording artifacts can easily be mistaken for a physiological response during analysis, to date there is no automatic method for detecting artifacts. This paper describes the development of a machine learning algorithm for automatically detecting EDA artifacts, and provides an empirical evaluation of classification performance. We have encoded our results into a freely available web-based tool for artifact and peak detection.

  7. Study on the automatic process of line heating for pillow shape plate

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper focuses on the forming process for pillow-shape plates by the line heating technique, which is widely applied in the production of ship hulls. Based on the analysis of the primary parameters and experimental data of the line heating process, the amount of local contraction generated by line heating is illustrated. Then, combined with the computational result of local deformation determined by shell plate development, an optimization method for the line heating parameters is studied. The prediction system may provide rational arrangements of heating lines and the technical parameters of the process. By integrating the prediction system into the line heating robot for pillow-shape plates, automatic line heating of pillow-shape plates can be achieved.

  8. Processing of intentional and automatic number magnitudes in children born prematurely: evidence from fMRI.

    Science.gov (United States)

    Klein, Elise; Moeller, Korbinian; Kiechl-Kohlendorfer, Ursula; Kremser, Christian; Starke, Marc; Cohen Kadosh, Roi; Pupp-Peglow, Ulrike; Schocke, Michael; Kaufmann, Liane

    2014-01-01

    This study examined the neural correlates of intentional and automatic number processing (indexed by number comparison and physical Stroop task, respectively) in 6- and 7-year-old children born prematurely. Behavioral results revealed significant numerical distance and size congruity effects. Imaging results disclosed (1) largely overlapping fronto-parietal activation for intentional and automatic number processing, (2) a frontal to parietal shift of activation upon considering the risk factors gestational age and birth weight, and (3) a task-specific link between math proficiency and functional magnetic resonance imaging (fMRI) signal within distinct regions of the parietal lobes-indicating commonalities but also specificities of intentional and automatic number processing.

  9. Conditioning reaction time: evidence for a process of conditioned automatization.

    Science.gov (United States)

    Montare, A

    1992-12-01

    The classical conditioning of the standard, simple reaction time (RT) in 140 college men and women is described. Consequent to an anticipatory instructed conditioning procedure, two experimental and two control groups acquired voluntary, controlled US(light)-URTR (unconditioned reaction-time response) associations which then served as the foundation for subsequent classical conditioning when a novel CS (auditory click) was simultaneously paired with the US. Conditioned reaction-time responses (CRTRs) occurred significantly more often during test trials in the two experimental groups than in the two control groups. Statistical and introspective findings support the notion that observed CRTRs may be products of cognitively unconscious conditioned automatization whereby the conditioning of relatively slow, voluntary, and controlled US-URTR associations leads to the acquisition of relatively fast, involuntary, and automatic CS-CRTR associations.

  10. Image Processing Method for Automatic Discrimination of Hoverfly Species

    Directory of Open Access Journals (Sweden)

    Vladimir Crnojević

    2014-01-01

    Full Text Available An approach to automatic hoverfly species discrimination based on detection and extraction of vein junctions in wing venation patterns of insects is presented in the paper. The dataset used in our experiments consists of high resolution microscopic wing images of several hoverfly species collected over a relatively long period of time at different geographic locations. Junctions are detected using the combination of the well-known HOG (histograms of oriented gradients) and the robust version of the recently proposed CLBP (complete local binary pattern). These features are used to train an SVM classifier to detect junctions in wing images. Once the junctions are identified they are used to extract statistics characterizing the constellations of these points. Such simple features can be used to automatically discriminate four selected hoverfly species with polynomial kernel SVM and achieve high classification accuracy.
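
    A minimal sketch of the classification chain named above (HOG patch descriptors feeding a polynomial-kernel SVM), assuming scikit-image and scikit-learn. The patch size, HOG parameters and the random training data are stand-ins, and the CLBP component is omitted.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import SVC

      def patch_features(patches):
          """patches: iterable of small grayscale crops around candidate junction points."""
          return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                               cells_per_block=(2, 2)) for p in patches])

      rng = np.random.default_rng(0)
      train_patches = [rng.random((32, 32)) for _ in range(40)]   # synthetic stand-in patches
      labels = rng.integers(0, 2, size=40)                        # 1 = junction, 0 = background

      clf = SVC(kernel="poly", degree=3, C=1.0)
      clf.fit(patch_features(train_patches), labels)
      print(clf.predict(patch_features([rng.random((32, 32))])))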

  11. Historical Patterns Based on Automatically Extracted Data: the Case of Classical Composers

    DEFF Research Database (Denmark)

    Borowiecki, Karol; O'Hagan, John

    2012-01-01

    application that automatically extracts and processes information was developed to generate data on the birth location, occupations and importance (using word count methods) of over 12,000 composers over six centuries. Quantitative measures of the relative importance of different types of music...... and of the different music instruments over the centuries were also generated. Finally quantitative indicators of the importance of different cities over the different centuries in the lives of these composers are constructed. A range of interesting findings emerge in relation to all of these aspects of the lives...

  12. An Automatic Development Process for Integrated Modular Avionics Software

    OpenAIRE

    2013-01-01

    With the ever-growing avionics functions, the modern avionics architecture is evolving from traditional federated architecture to Integrated Modular Avionics (IMA). ARINC653 is a major industry standard to support partitioning concept introduced in IMA to achieve security isolation between avionics functions with different criticalities. To decrease the complexity and improve the reliability of the design and implementation of IMA-based avionics software, this paper proposes an automatic deve...

  13. Automaticity and Reading: Perspectives from the Instance Theory of Automatization.

    Science.gov (United States)

    Logan, Gordon D.

    1997-01-01

    Reviews recent literature on automaticity, defining the criteria that distinguish automatic processing from non-automatic processing, and describing modern theories of the underlying mechanisms. Focuses on evidence from studies of reading and draws implications from theory and data for practical issues in teaching reading. Suggests that…

  14. The N400 and Late Positive Complex (LPC) Effects Reflect Controlled Rather than Automatic Mechanisms of Sentence Processing

    Directory of Open Access Journals (Sweden)

    Boris Kotchoubey

    2012-08-01

    Full Text Available This study compared automatic and controlled cognitive processes that underlie event-related potentials (ERPs) effects during speech perception. Sentences were presented to French native speakers, and the final word could be congruent or incongruent, and presented at one of four levels of degradation (using a modulation with pink noise): no degradation, mild degradation (2 levels), or strong degradation. We assumed that degradation impairs controlled more than automatic processes. The N400 and Late Positive Complex (LPC) effects were defined as the differences between the corresponding wave amplitudes to incongruent words minus congruent words. Under mild degradation, where controlled sentence-level processing could still occur (as indicated by behavioral data), both N400 and LPC effects were delayed and the latter effect was reduced. Under strong degradation, where sentence processing was rather automatic (as indicated by behavioral data), no ERP effect remained. These results suggest that ERP effects elicited in complex contexts, such as sentences, reflect controlled rather than automatic mechanisms of speech processing. These results differ from the results of experiments that used word-pair or word-list paradigms.

  15. Real-time hyperspectral processing for automatic nonferrous material sorting

    Science.gov (United States)

    Picón, Artzai; Ghita, Ovidiu; Bereciartua, Aranzazu; Echazarra, Jone; Whelan, Paul F.; Iriondo, Pedro M.

    2012-01-01

    The application of hyperspectral sensors in the development of machine vision solutions has become increasingly popular as the spectral characteristics of the imaged materials are better modeled in the hyperspectral domain than in the standard trichromatic red, green, blue data. While there is no doubt that the availability of detailed spectral information is opportune as it opens the possibility to construct robust image descriptors, it also raises a substantial challenge when this high-dimensional data is used in the development of real-time machine vision systems. To alleviate the computational demand, often decorrelation techniques are commonly applied prior to feature extraction. While this approach has reduced to some extent the size of the spectral descriptor, data decorrelation alone proved insufficient in attaining real-time classification. This fact is particularly apparent when pixel-wise image descriptors are not sufficiently robust to model the spectral characteristics of the imaged materials, a case when the spatial information (or textural properties) also has to be included in the classification process. The integration of spectral and spatial information entails a substantial computational cost, and as a result the prospects of real-time operation for the developed machine vision system are compromised. To answer this requirement, in this paper we have reengineered the approach behind the integration of the spectral and spatial information in the material classification process to allow the real-time sorting of the nonferrous fractions that are contained in the waste of electric and electronic equipment scrap.

  16. Flash Lidar Data Processing

    Science.gov (United States)

    Bergkoetter, M. D.; Ruppert, L.; Weimer, C. S.; Ramond, T.; Lefsky, M. A.; Burke, I. C.; Hu, Y.

    2009-12-01

    Late last year, a prototype Flash LIDAR instrument flew on a series of airborne tests to demonstrate its potential for improved vegetation measurements. The prototype is a precursor to the Electronically Steerable Flash LIDAR (ESFL) currently under development at Ball Aerospace and Technology Corp. with funding from the NASA Earth Science Technology Office. ESFL may soon significantly expand our ability to measure vegetation and forests and better understand the extent of their role in global climate change and the carbon cycle - all critical science questions relating to the upcoming NASA DESDynI and ESA BIOMASS missions. In order to more efficiently exploit data returned from the experimental Flash Lidar system and plan for data exploitation from future flights, Ball funded a graduate student project (through the Ball Summer Intern Program, summer 2009) to develop and implement algorithms for post-processing of the 3-Dimensional Flash Lidar data. This effort included developing autonomous algorithms to resample the data to a uniform rectangular grid, geolocation of the data, and visual display of large swaths of data. The resampling, geolocation, surface hit detection, and aggregation of frame data are implemented with new MATLAB code, and the efficient visual display is achieved with free commercial viewing software. These efforts directly support additional tests flights planned as early as October 2009, including possible flights over Niwot Ridge, CO, for which there is ICESat data, and a sea-level coastal area in California to test the effect of higher altitude (above ground level) on the divergence of the beams and the beam spot sizes.

  17. Automatic processing of atmospheric CO2 and CH4 mole fractions at the ICOS Atmosphere Thematic Centre

    Science.gov (United States)

    Hazan, Lynn; Tarniewicz, Jérôme; Ramonet, Michel; Laurent, Olivier; Abbaris, Amara

    2016-09-01

    The Integrated Carbon Observation System Atmosphere Thematic Centre (ICOS ATC) automatically processes atmospheric greenhouse gas mole fraction data coming from sites of the ICOS network. Daily transferred raw data files are automatically processed and archived. Data are stored in the ICOS atmospheric database, the backbone of the system, which has been developed with an emphasis on the traceability of the data processing. Many data products, updated daily, explore the data from different angles to support the quality control of the dataset performed by the principal operators in charge of the instruments. The automatic processing includes calibration and water vapor corrections as described in the paper. The mole fractions calculated in near-real time (NRT) are automatically re-evaluated as soon as a new instrument calibration is processed or when the station supervisors perform quality control. By analyzing data from 11 sites, we determined that the average calibration corrections are equal to 1.7 ± 0.3 µmol mol-1 for CO2 and 2.8 ± 3 nmol mol-1 for CH4. These biases are important to correct to avoid artificial gradients between stations that could lead to errors in flux estimates when using atmospheric inversion techniques. We also calculated that the average drift between two successive calibrations separated by 15 days amounts to ±0.05 µmol mol-1 and ±0.7 nmol mol-1 for CO2 and CH4, respectively. Outliers are generally due to errors in the instrument configuration and can be readily detected thanks to the data products provided by the ATC. Several developments are still ongoing to improve the processing, including automated spike detection and calculation of time-varying uncertainties.
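
    The two corrections quantified above can be pictured with a short sketch: a linear calibration fitted against reference gases is applied to the raw readings, and the drift is taken as the change between two successive calibrations. Reference values, readings and the evaluation point below are invented for illustration.

      import numpy as np

      def fit_calibration(measured, assigned):
          """Least-squares line mapping analyser readings onto assigned reference values."""
          slope, offset = np.polyfit(measured, assigned, 1)
          return slope, offset

      refs = np.array([380.0, 400.0, 420.0])                           # assigned CO2 values (µmol/mol)
      cal1 = fit_calibration(np.array([378.2, 398.1, 418.3]), refs)    # calibration episode 1
      cal2 = fit_calibration(np.array([378.4, 398.3, 418.5]), refs)    # episode 2, ~15 days later

      raw = np.array([405.3, 406.1, 404.8])                            # raw ambient readings
      corrected = cal1[0] * raw + cal1[1]                              # calibrated mole fractions

      # Drift: difference between the two calibrations evaluated at a typical ambient value.
      drift = (cal2[0] * 400.0 + cal2[1]) - (cal1[0] * 400.0 + cal1[1])
      print(corrected, round(drift, 3))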

  18. Automatic data acquisition system for a photovoltaic solar plant

    Energy Technology Data Exchange (ETDEWEB)

    Santos, A.; Barrio, C.L.; Guerra, A.G.

    1986-01-01

    An autonomous monitoring system for photovoltaic solar plants is described. The system is able to collect data about the plant's physical and electrical characteristics and also about the environmental conditions. It may present the results on a display, if requested, but its main function is measuring periodically a set of parameters, including several points in the panel I-V characteristics, in an unattended mode. The data are stored on a magnetic tape for later processing on a computer. The system hardware and software are described, as well as their main functions.

  19. NGS-Trex: an automatic analysis workflow for RNA-Seq data.

    Science.gov (United States)

    Boria, Ilenia; Boatti, Lara; Saggese, Igor; Mignone, Flavio

    2015-01-01

    RNA-Seq technology allows the rapid analysis of whole transcriptomes taking advantage of next-generation sequencing platforms. Moreover, with the constant decrease in the cost of NGS analysis, RNA-Seq is becoming very popular and widespread. Unfortunately, data analysis is quite demanding in terms of the bioinformatic skills and infrastructure required, thus limiting the potential users of this method. Here we describe the complete analysis of sample data from raw sequences to data mining of results by using the NGS-Trex platform, a low user interaction, fully automatic analysis workflow. Used through a web interface, NGS-Trex processes data and profiles the transcriptome of the samples, identifying expressed genes, transcripts, and new and known splice variants. It also detects differentially expressed genes and transcripts across different experiments.

  20. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, the uniform corrosion is characterized by texture attributes extracted from co-occurrence matrix and the Self Organizing Mapping (SOM) clustering algorithm. We present a technique for automatic inspection of oil and gas storage tanks and pipelines of petrochemical industries without disturbing their properties and performance. Experimental results are promising and encourage the possibility of using this methodology in designing trustful and robust early failure detection systems. (author)

  1. Normalising orthographic and dialectal variants for the automatic processing of Swiss German

    OpenAIRE

    Samardzic, Tanja; Scherrer, Yves; Glaser, Elvira

    2015-01-01

    Swiss dialects of German are, unlike most dialects of well standardised languages, widely used in everyday communication. Despite this fact, they lack tools and resources for natural language processing. The main reason for this is the fact that the dialects are mostly spoken and that written resources are small and highly inconsistent. This paper addresses the great variability in writing that poses a problem for automatic processing. We propose an automatic approach to normalising the varia...

  2. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    Science.gov (United States)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence achieved pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
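
    A rough sketch of the matching chain (corner detection on the road markings, binary descriptors, Hamming matching), assuming OpenCV. ORB stands in for the LATCH descriptor used in the paper (LATCH itself requires opencv-contrib), the Harris variant shown is not adaptive, and the distance/angle outlier filter is omitted.

      import cv2
      import numpy as np

      def match_patches(aerial_patch, mls_ortho_patch):
          """Both patches are 8-bit grayscale images of the same ground area."""
          harris1 = cv2.cornerHarris(np.float32(aerial_patch), blockSize=2, ksize=3, k=0.04)
          harris2 = cv2.cornerHarris(np.float32(mls_ortho_patch), blockSize=2, ksize=3, k=0.04)

          # Keep strong corner responses as keypoints (fixed threshold here; the paper adapts it).
          kp1 = [cv2.KeyPoint(float(x), float(y), 7)
                 for y, x in zip(*np.where(harris1 > 0.01 * harris1.max()))]
          kp2 = [cv2.KeyPoint(float(x), float(y), 7)
                 for y, x in zip(*np.where(harris2 > 0.01 * harris2.max()))]

          orb = cv2.ORB_create()                        # stand-in binary descriptor
          kp1, des1 = orb.compute(aerial_patch, kp1)
          kp2, des2 = orb.compute(mls_ortho_patch, kp2)

          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          return kp1, kp2, sorted(matcher.match(des1, des2), key=lambda m: m.distance)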

  3. Features and Ground Automatic Extraction from Airborne LIDAR Data

    Science.gov (United States)

    Costantino, D.; Angelini, M. G.

    2011-09-01

    The aim of the research has been to develop and implement an algorithm for automated extraction of features from LIDAR scenes with varying terrain and coverage types. The approach applies the statistical moments of third order (skewness) and fourth order (kurtosis). While the first has been applied to produce an initial filtering and data classification, the second, through the introduction of weights for the measures, provided the desired result, namely a finer and less noisy classification. The process has been carried out in Matlab but, to reduce processing time given the large data density, the analysis has been restricted to a moving window. The scene was therefore split into subscenes in order to cover the entire area. The performance of the algorithm confirms its robustness and the quality of the results. Employment of effective processing strategies to improve automation is key to the implementation of this algorithm. The results of this work will serve the increasing demand for automation in 3D information extraction from large remotely sensed datasets. After obtaining the geometric features from the LiDAR data, we intend to complete the research by creating an algorithm to vectorize the features and extract the DTM.
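
    The moment-based filtering idea can be sketched as follows: per moving window, the skewness and kurtosis of the point heights are computed and used to separate mostly-ground windows from windows containing objects. The window size and thresholds below are illustrative, not the values used in the research.

      import numpy as np
      from scipy.stats import skew, kurtosis

      def classify_windows(heights, window=256, skew_thr=0.5, kurt_thr=3.5):
          """heights: 1D array of LiDAR point elevations ordered along the scan."""
          labels = []
          for start in range(0, len(heights) - window + 1, window):
              w = heights[start:start + window]
              s, k = skew(w), kurtosis(w, fisher=False)
              labels.append("objects/mixed" if (abs(s) > skew_thr or k > kurt_thr) else "mostly ground")
          return labels

      rng = np.random.default_rng(1)
      heights = rng.normal(100.0, 0.2, 512)              # flat terrain heights
      heights[256:] += rng.exponential(3.0, 256)         # add canopy-like returns to one half
      print(classify_windows(heights))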

  4. Research on HJ-1A/B satellite data automatic geometric precision correction design

    Institute of Scientific and Technical Information of China (English)

    Xiong Wencheng; Shen Wenming; Wang Qiao; Shi Yuanli; Xiao Rulin; Fu Zhuo

    2014-01-01

    Developed independently by China, the HJ-1A/B satellites have operated well on orbit for five years and acquired a large amount of high-quality observation data. Geometric precision correction of these observation data is of great significance for large-scale and dynamic ecological environment monitoring. The paper analyzed the parameter characteristics of the HJ-1 satellites and the geometric features of HJ-1 level 2 data (systematically geo-corrected data). Based on this, the overall HJ-1 multi-sensor geometric correction flow and a charge-coupled device (CCD) automatic geometric precision correction method were designed. Actual operating data showed that the method achieves good results for automatic geometric precision correction of HJ-1 satellite data: automatic HJ-1 CCD image geometric precision correction accuracy is within two pixels, and automatic matching accuracy between images from the same satellite is better than one pixel.

  5. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  6. Automatic decision support system based on SAR data for oil spill detection

    Science.gov (United States)

    Mera, David; Cotos, José M.; Varela-Pet, José; Rodríguez, Pablo G.; Caro, Andrés

    2014-11-01

    Global trade is mainly supported by maritime transport, which generates important pollution problems. Thus, effective surveillance and intervention means are necessary to ensure a proper response to environmental emergencies. Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillages on the ocean surface. Several decision support systems have been based on this technology. This paper presents an automatic oil spill detection system based on SAR data which was developed on the basis of confirmed spillages and adapted to an important international shipping route off the Galician coast (northwest Iberian Peninsula). The system was supported by an adaptive segmentation process based on wind data as well as a shape-oriented characterization algorithm. Moreover, two classifiers were developed and compared. Image testing revealed up to 95.1% candidate labeling accuracy. Shared-memory parallel programming techniques were used in the development of the algorithms, improving the system processing time by more than 25%.

  7. Automatic Determination of Fiber-Length Distribution in Composite Material Using 3D CT Data

    Directory of Open Access Journals (Sweden)

    Günther Greiner

    2010-01-01

    Full Text Available Determining fiber length distribution in fiber reinforced polymer components is a crucial step in quality assurance, since fiber length has a strong influence on overall strength, stiffness, and stability of the material. The approximate fiber length distribution is usually determined early in the development process, as conventional methods require a destruction of the sample component. In this paper, a novel, automatic, and nondestructive approach for the determination of fiber length distribution in fiber reinforced polymers is presented. For this purpose, high-resolution computed tomography is used as imaging method together with subsequent image analysis for evaluation. The image analysis consists of an iterative process where single fibers are detected automatically in each iteration step after having applied image enhancement algorithms. Subsequently, a model-based approach is used together with a priori information in order to guide a fiber tracing and segmentation process. Thereby, the length of the segmented fibers can be calculated and a length distribution can be deduced. The performance and the robustness of the segmentation method is demonstrated by applying it to artificially generated test data and selected real components.

  8. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    Science.gov (United States)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data for extracting ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of derived data for geographical information systems (GIS), mapping, and navigation. Regardless of what the scan data will be used for, an automatic process is required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform surfaces, non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower part of each building in an urban scene, as needed to model building façades, and, on the other hand, to extract the point cloud of uniform surfaces containing the roofs, roads and ground used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested on two areas: the first a housing complex and the second a primary school. It led to successful classification of the building, vegetation and road classes.

  9. Laplace domain automatic data assimilation of contaminant transport using a Wireless Sensor Network

    Science.gov (United States)

    Barnhart, K.; Illangasekare, T. H.

    2011-12-01

    Emerging in situ sensors and distributed network technologies have the potential to monitor dynamic hydrological and environmental processes more effectively than traditional monitoring and data acquisition techniques by sampling at greater spatial and temporal resolutions. In particular, Wireless Sensor Networks, the combination of low-power telemetry and energy-harvesting with miniaturized sensors, could play a large role in monitoring the environment on nature's time scale. Since sensor networks supply data with little or no delay, applications exist where automatic or real-time assimilation of this data would be useful, for example during smart remediation procedures where tracking of the plume response will reinforce real-time decisions. As a foray into this new data context, we consider the estimation of hydraulic conductivity when incorporating subsurface plume concentration data. Current practice optimizes the model in the time domain, which is often slow and overly sensitive to data anomalies. Instead, we perform model inversion in Laplace space and are able to do so because data gathered using new technologies can be sampled densely in time. An intermediate-scale synthetic aquifer is used to illustrate the developed technique. Data collection and model (re-)optimization are automatic. Electric conductivity values of passing sodium bromide plumes are sent through a wireless sensor network, stored in a database, scrubbed and passed to a modeling server which transforms the data and assimilates it into a Laplace domain model. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000
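
    On the data side, the Laplace-domain idea amounts to transforming the densely sampled concentration time series before comparing it with the model. A toy numerical transform, with a synthetic breakthrough curve and arbitrary Laplace variables, might look like this:

      import numpy as np

      def laplace_transform(t, c, s_values):
          """Approximate the transform of c(t), i.e. the integral of c(t)*exp(-s*t) dt,
          with the trapezoid rule over the sampled record."""
          return np.array([np.trapz(c * np.exp(-s * t), t) for s in s_values])

      t = np.linspace(0.0, 48.0, 2000)                  # hours, densely sampled by the sensor network
      c = np.exp(-(t - 12.0) ** 2 / 8.0)                # synthetic plume breakthrough curve
      s = np.array([0.05, 0.1, 0.2, 0.5])               # Laplace variables at which the model is matched
      print(laplace_transform(t, c, s))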

  10. Suprasegmental speech cues are automatically processed by the human brain: a mismatch negativity study.

    Science.gov (United States)

    Honbolygó, Ferenc; Csépe, Valéria; Ragó, Anett

    2004-06-03

    This study investigates the electrical brain activity correlates of the automatic detection of suprasegmental and local speech cues by using a passive oddball paradigm, in which the standard Hungarian word 'banán' ('banana' in English) was contrasted with two deviants: a voiceless phoneme deviant ('panán'), and a stress deviant, where the stress was on the second syllable, instead of the obligatory first one. As a result, we obtained the mismatch negativity component (MMN) of event-related brain potentials in each condition. The stress deviant elicited two MMNs: one as a response to the lack of stress as compared to the standard stimulus, and another to the additional stress. Our results support that the MMN is as valuable in investigating processing characteristics of suprasegmental features as in that of phonemic features. MMN data may provide further insight into pre-attentive processes contributing to spoken word recognition.

  11. Adaptive Automatic Gauge Control of a Cold Strip Rolling Process

    Directory of Open Access Journals (Sweden)

    ROMAN, N.

    2010-02-01

    Full Text Available The paper addresses the thickness control structure of cold rolled strips. This structure is based on the roll position control of a reversible quarto rolling mill. The main feature of the proposed system is the compensation of the errors introduced by the deficient dynamics of the hydraulic servo-system used for roll positioning, by means of a dynamic compensator that approximates the inverse of the servo-system. Because the servo-system is considered time-variant, on-line identification of the servo-system and parameter adaptation of the compensator are performed. The results obtained by numerical simulation are presented together with data taken from the real process. These results illustrate the efficiency of the proposed solutions.

  12. Automatic interpretation of magnetic data using Euler deconvolution with nonlinear background

    Digital Repository Service at National Institute of Oceanography (India)

    Dewangan, P.; Ramprasad, T.; Ramana, M.V.; Desa, M.; Shailaja, B.

    The voluminous gravity and magnetic data sets demand automatic interpretation techniques like Naudy, Euler and Werner deconvolution. Of these techniques, the Euler deconvolution has become a popular choice because the method assumes no particular...
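
    For context, the relation that Euler deconvolution schemes invert is the standard Euler homogeneity equation, quoted here in its usual textbook form with a constant background B (the nonlinear-background formulation of the record above is not reproduced):

      (x - x_0)\,\frac{\partial T}{\partial x} + (y - y_0)\,\frac{\partial T}{\partial y} + (z - z_0)\,\frac{\partial T}{\partial z} = N\,(B - T)

    where T is the observed field at (x, y, z), (x_0, y_0, z_0) are the source coordinates, and N is the structural index characterising the source geometry.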

  13. Data Processing for Scientists.

    Science.gov (United States)

    Heumann, K F

    1956-10-26

    This brief survey of integrated and electronic data processing has touched on such matters as the origin of the concepts, their use in business, machines that are available, indexing problems, and, finally, some scientific uses that surely foreshadow further development. The purpose of this has been to present for the consideration of scientists a point of view and some techniques which have had a phenomenal growth in the business world and to suggest that these are worth consideration in scientific data-handling problems (30). To close, let me quote from William Bamert on the experience of the C. and O. Railroad once more (8, p. 121): "Frankly, we have been asked whether we weren't planning for Utopia-the implication being that everyone except starry-eyed visionaries knows that Utopia is unattainable. Our answer is that of course we are! Has anyone yet discovered a better way to begin program planning of this nature? Our feeling is that compromise comes early enough in the normal order of things."

  14. SAS implementation of automatic statistical analysis reporting for measurement data in phase Ⅰ drug clinical trials

    Institute of Scientific and Technical Information of China (English)

    段锋; 魏振满; 陈大为; 朱珍真; 屠舒; 毕京峰; 李文淑; 孙斌

    2013-01-01

    Objective To automate the reporting of statistical analyses of measurement data in phase Ⅰ clinical trials using SAS software. Methods Based on SAS, we used its functions for splitting tables, statistical analysis, selecting results, transposing tables, and merging tables horizontally or vertically to build the report tables for measurement data step by step. Results The statistical analysis results were exported automatically to Excel tables in the required report format. Conclusion The process is accurate and reliable, and saves time and effort. It provides non-statistician medical researchers with a simple and dependable way to generate reports of measurement data automatically.

  15. Methodology and Implications of Reconstruction and Automatic Processing of Natural Language of the Classroom.

    Science.gov (United States)

    Marlin, Marjorie; Barron, Nancy

    This paper discusses in some detail the procedural areas of reconstruction and automatic processing used by the Classroom Interaction Project of the University of Missouri's Center for Research in Social Behavior in the analysis of classroom language. First discussed is the process of reconstruction, here defined as the "process of adding to…

  16. Automatic Data Filter Customization Using a Genetic Algorithm

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    This work predicts whether a retrieval algorithm will usefully determine CO2 concentration from an input spectrum of GOSAT (Greenhouse Gases Observing Satellite). This was done to eliminate needless runtime on atmospheric soundings that would never yield useful results. A space of 50 dimensions was examined for predictive power on the final CO2 results. Retrieval algorithms are frequently expensive to run, and wasted effort defeats requirements and expends needless resources. This algorithm could be used to help predict and filter unneeded runs in any computationally expensive regime. Traditional methods such as Fisher discriminant analysis and decision trees can attempt to predict whether a sounding will be properly processed. However, this work sought to detect a subsection of the dimensional space that can be simply filtered out to eliminate unwanted runs. LDAs (linear discriminant analyses) and other systems examine the entire dataset and judge a "best fit," giving equal weight to complex and problematic regions as well as simple, clear-cut regions. In this implementation, a genetic space of "left" and "right" thresholds outside of which all data are rejected was defined. These left/right pairs are created for each of the 50 input dimensions. A genetic algorithm then runs through countless potential filter settings using a JPL computer cluster, optimizing the tossed-out data's yield (proper vs. improper run removal) and the number of points tossed. This solution is robust to an arbitrary decision boundary within the data and avoids the global optimization problem of whole-dataset fitting using LDA or decision trees. It filters out runs that would not have produced useful CO2 values to save needless computation. This would be an algorithmic preprocessing improvement to any computationally expensive system.
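
    A toy version of the filter representation described above: the genome holds a (left, right) threshold pair per input dimension, a sounding is rejected if any dimension falls outside its interval, and a simple mutation-only loop searches for thresholds that reject bad runs while sparing good ones. Dimensions, data and the fitness weighting are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(42)
      DIMS = 5                                           # the real system examined ~50 dimensions
      X = rng.normal(size=(400, DIMS))                   # synthetic candidate soundings
      good = (X[:, 0] > -1.0) & (X[:, 3] < 1.5)          # synthetic "retrieval would succeed" flag

      def rejected(genome, X):
          left, right = genome[:, 0], genome[:, 1]
          return ((X < left) | (X > right)).any(axis=1)

      def fitness(genome):
          rej = rejected(genome, X)
          return np.sum(rej & ~good) - 5 * np.sum(rej & good)   # penalise tossing good runs

      pop = [np.column_stack([rng.uniform(-3, 0, DIMS), rng.uniform(0, 3, DIMS)])
             for _ in range(30)]
      for _ in range(50):                                # mutation-only evolutionary loop
          pop.sort(key=fitness, reverse=True)
          parents = pop[:10]
          pop = parents + [p + rng.normal(0, 0.1, p.shape) for p in parents for _ in range(2)]
      best = max(pop, key=fitness)
      print("best fitness:", fitness(best))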

  17. Automatic 3D modelling of metal frame connections from LiDAR data for structural engineering purposes

    Science.gov (United States)

    Cabaleiro, M.; Riveiro, B.; Arias, P.; Caamaño, J. C.; Vilán, J. A.

    2014-10-01

    The automatic generation of 3D as-built models from LiDAR data is a topic where significant progress has been made in recent years. This paper describes a new method for the detection and automatic 3D modelling of frame connections and the formation of profiles comprising a metal frame from LiDAR data. The method has been developed using an approach to create 2.5D density images for subsequent processing using the Hough transform. The structure connections can be automatically identified after selecting areas in the point cloud. As a result, the coordinates of the connection centre, composition (profiles, size and shape of the haunch) and direction of their profiles are extracted. A standard file is generated with the data obtained from the geometric and semantic characterisation of the connections. The 3D model of connections and metal frames, which are suitable for processing software for structural engineering applications, are generated automatically based on this file. The algorithm presented in this paper has been tested under laboratory conditions and also with several industrial portal frames, achieving promising results. Finally, 3D models were generated, and structural calculations were performed.
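
    The 2.5D-image step can be sketched as follows: the points around a candidate connection are rasterised into a density image, and a probabilistic Hough transform recovers the straight profiles that meet there. Cell size, Canny and Hough parameters are placeholders, and the subsequent extraction of the connection centre and haunch geometry is not shown.

      import cv2
      import numpy as np

      def density_image(points_xy, cell=0.02):
          """points_xy: Nx2 array of coordinates (m) from the selected point-cloud region."""
          mins = points_xy.min(axis=0)
          idx = np.floor((points_xy - mins) / cell).astype(int)
          img = np.zeros(idx.max(axis=0) + 1, dtype=np.float32)
          np.add.at(img, (idx[:, 0], idx[:, 1]), 1.0)          # accumulate point density per cell
          return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

      def profile_lines(points_xy):
          edges = cv2.Canny(density_image(points_xy), 50, 150)
          return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                                 minLineLength=20, maxLineGap=5)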

  18. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers.

    Science.gov (United States)

    Salimi-Khorshidi, Gholamreza; Douaud, Gwenaëlle; Beckmann, Christian F; Glasser, Matthew F; Griffanti, Ludovica; Smith, Stephen M

    2014-04-15

    Many sources of fluctuation contribute to the fMRI signal, and this makes identifying the effects that are truly related to the underlying neuronal activity difficult. Independent component analysis (ICA) - one of the most widely used techniques for the exploratory analysis of fMRI data - has been shown to be a powerful technique in identifying various sources of neuronally-related and artefactual fluctuation in fMRI data (both with the application of external stimuli and with the subject "at rest"). ICA decomposes fMRI data into patterns of activity (a set of spatial maps and their corresponding time series) that are statistically independent and add linearly to explain voxel-wise time series. Given the set of ICA components, if the components representing "signal" (brain activity) can be distinguished from the "noise" components (effects of motion, non-neuronal physiology, scanner artefacts and other nuisance sources), the latter can then be removed from the data, providing an effective cleanup of structured noise. Manual classification of components is labour intensive and requires expertise; hence, a fully automatic noise detection algorithm that can reliably detect various types of noise sources (in both task and resting fMRI) is desirable. In this paper, we introduce FIX ("FMRIB's ICA-based X-noiseifier"), which provides an automatic solution for denoising fMRI data via accurate classification of ICA components. For each ICA component FIX generates a large number of distinct spatial and temporal features, each describing a different aspect of the data (e.g., what proportion of temporal fluctuations are at high frequencies). The set of features is then fed into a multi-level classifier (built around several different classifiers). Once trained through the hand-classification of a sufficient number of training datasets, the classifier can then automatically classify new datasets. The noise components can then be subtracted from (or regressed out of) the original
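
    The overall recipe (not FIX itself) can be caricatured in a few lines: an ICA decomposition, one hand-crafted feature per component, and a classifier trained on hand-labelled components. The feature shown (fraction of spectral power above a cutoff) and the random stand-in data are assumptions for illustration; FIX uses a much larger feature set and a multi-level classifier.

      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.ensemble import RandomForestClassifier

      def high_freq_fraction(ts, tr=2.0, cutoff_hz=0.1):
          """Fraction of a component time series' spectral power above cutoff_hz."""
          freqs = np.fft.rfftfreq(len(ts), d=tr)
          power = np.abs(np.fft.rfft(ts)) ** 2
          return power[freqs > cutoff_hz].sum() / power.sum()

      rng = np.random.default_rng(0)
      data = rng.normal(size=(200, 500))                # (time points, voxels), synthetic stand-in
      timecourses = FastICA(n_components=10, random_state=0).fit_transform(data)

      features = np.array([[high_freq_fraction(timecourses[:, i])] for i in range(10)])
      labels = rng.integers(0, 2, size=10)              # hand labels in practice: 1 = noise, 0 = signal

      clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)
      print(clf.predict(features))                      # predicted noise components would be regressed out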

  19. Testing interactive effects of automatic and conflict control processes during response inhibition - A system neurophysiological study.

    Science.gov (United States)

    Chmielewski, Witold X; Beste, Christian

    2017-02-01

    In everyday life successful acting often requires to inhibit automatic responses that might not be appropriate in the current situation. These response inhibition processes have been shown to become aggravated with increasing automaticity of pre-potent response tendencies. Likewise, it has been shown that inhibitory processes are complicated by a concurrent engagement in additional cognitive control processes (e.g. conflicting monitoring). Therefore, opposing processes (i.e. automaticity and cognitive control) seem to strongly impact response inhibition. However, possible interactive effects of automaticity and cognitive control for the modulation of response inhibition processes have yet not been examined. In the current study we examine this question using a novel experimental paradigm combining a Go/NoGo with a Simon task in a system neurophysiological approach combining EEG recordings with source localization analyses. The results show that response inhibition is less accurate in non-conflicting than in conflicting stimulus-response mappings. Thus it seems that conflicts and the resulting engagement in conflict monitoring processes, as reflected in the N2 amplitude, may foster response inhibition processes. This engagement in conflict monitoring processes leads to an increase in cognitive control, as reflected by an increased activity in the anterior and posterior cingulate areas, while simultaneously the automaticity of response tendencies is decreased. Most importantly, this study suggests that the quality of conflict processes in anterior cingulate areas and especially the resulting interaction of cognitive control and automaticity of pre-potent response tendencies are important factors to consider, when it comes to the modulation of response inhibition processes.

  20. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region.

    Directory of Open Access Journals (Sweden)

    Jing Tian

    Full Text Available Optical coherence tomography (OCT) is a high speed, high resolution and non-invasive imaging modality that enables the capturing of the 3D structure of the retina. The fast and automatic analysis of 3D volume OCT data is crucial taking into account the increased amount of patient-specific 3D imaging data. In this work, we have developed an automatic algorithm, OCTRIMA 3D (OCT Retinal IMage Analysis 3D), that could segment OCT volume data in the macular region fast and accurately. The proposed method is implemented using the shortest-path based graph search, which detects the retinal boundaries by searching the shortest-path between two end nodes using Dijkstra's algorithm. Additional techniques, such as inter-frame flattening, inter-frame search region refinement, masking and biasing were introduced to exploit the spatial dependency between adjacent frames for the reduction of the processing time. Our segmentation algorithm was evaluated by comparing with the manual labelings and three state of the art graph-based segmentation methods. The processing time for the whole OCT volume of 496×644×51 voxels (captured by Spectralis SD-OCT) was 26.15 seconds which is at least a 2-8-fold increase in speed compared to other, similar reference algorithms used in the comparisons. The average unsigned error was about 1 pixel (∼ 4 microns), which was also lower compared to the reference algorithms. We believe that OCTRIMA 3D is a leap forward towards achieving reliable, real-time analysis of 3D OCT retinal data.
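
    The shortest-path idea can be illustrated with a stripped-down Dijkstra search over a single B-scan: edge weights are cheap where the vertical intensity gradient is strong, and the lowest-cost left-to-right path traces one boundary. The weighting, connectivity and the inter-frame refinements mentioned above are simplified or omitted.

      import heapq
      import numpy as np

      def boundary_path(bscan):
          """bscan: 2D array (rows = depth, cols = A-scans); returns one row index per column."""
          grad = np.gradient(bscan.astype(float), axis=0)
          weight = 1.0 / (1e-6 + np.maximum(grad, 0))        # cheap where the vertical gradient is strong (toy weighting)
          rows, cols = bscan.shape
          dist = np.full((rows, cols), np.inf)
          prev = np.full((rows, cols), -1, dtype=int)
          pq = [(weight[r, 0], r, 0) for r in range(rows)]   # virtual start node = entire first column
          for d, r, _ in pq:
              dist[r, 0] = d
          heapq.heapify(pq)
          while pq:
              d, r, c = heapq.heappop(pq)
              if d > dist[r, c] or c == cols - 1:
                  continue
              for dr in (-1, 0, 1):                          # step one column right, at most one row up/down
                  nr = r + dr
                  if 0 <= nr < rows and d + weight[nr, c + 1] < dist[nr, c + 1]:
                      dist[nr, c + 1] = d + weight[nr, c + 1]
                      prev[nr, c + 1] = r
                      heapq.heappush(pq, (dist[nr, c + 1], nr, c + 1))
          path = [int(np.argmin(dist[:, -1]))]               # cheapest arrival row in the last column
          for c in range(cols - 1, 0, -1):
              path.append(prev[path[-1], c])
          return path[::-1]

      print(boundary_path(np.random.default_rng(0).random((50, 40))))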

  1. Automatic Key-Frame Extraction from Optical Motion Capture Data

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qiang; YU Shao-pei; ZHOU Dong-sheng; WEI Xiao-peng

    2013-01-01

    Optical motion capture is an increasingly popular animation technique. In the last few years, plenty of methods have been proposed for key-frame extraction from motion capture data, and extracting key-frames using quaternions is a common approach. Here, one main difficulty is due to the fact that previous algorithms often need various parameters to be set manually. In addition, it is problematic to predefine an appropriate threshold without knowing the data content. In this paper, we present a novel adaptive threshold-based extraction method. Key-frames can be found according to quaternion distance. We propose a simple and efficient algorithm to extract key-frames from a motion sequence based on an adaptive threshold. It is convenient, with no need to predefine parameters to meet a certain compression ratio. Experimental results on many motion captures with different traits demonstrate the good performance of the proposed algorithm. Our experiments show that one can typically cut down the extraction process from several minutes to a couple of seconds.
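
    A compact sketch of the core measure: a per-frame pose distance built from unit quaternions, with a frame kept as a key-frame when its distance to the last key-frame exceeds a threshold derived from the clip itself. The specific adaptive rule (mean plus one standard deviation) and the random data are placeholders, not the paper's formulation.

      import numpy as np

      def quat_distance(q1, q2):
          """Rotation angle (radians) between unit quaternions; arrays of shape (..., 4)."""
          dot = np.clip(np.abs(np.sum(q1 * q2, axis=-1)), 0.0, 1.0)
          return 2.0 * np.arccos(dot)

      def extract_keyframes(quats):
          """quats: (n_frames, n_joints, 4) unit quaternions from a motion capture clip."""
          per_frame = np.array([quat_distance(quats[i], quats[i - 1]).sum()
                                for i in range(1, len(quats))])
          threshold = per_frame.mean() + per_frame.std()   # adaptive: derived from the data content
          keyframes, last = [0], quats[0]
          for i in range(1, len(quats)):
              if quat_distance(quats[i], last).sum() > threshold:
                  keyframes.append(i)
                  last = quats[i]
          return keyframes

      rng = np.random.default_rng(0)
      q = rng.normal(size=(120, 20, 4))
      q /= np.linalg.norm(q, axis=-1, keepdims=True)       # normalise to unit quaternions
      print(extract_keyframes(q))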

  2. Automatic Conveyor System with In-Process Sorting Mechanism using PLC and HMI System

    Directory of Open Access Journals (Sweden)

    Y V Aruna

    2015-11-01

    Full Text Available Programmable logic controllers (PLCs) are widely used in many manufacturing processes such as machinery, packaging, material handling and automatic assembly. They are a special type of microprocessor-based controller used for any application that needs an electrical controller, including lighting and HVAC control systems. An automatic conveyor system is a computerized control method for controlling and managing the sorting mechanism while maintaining the efficiency of the plant and the quality of the products. The HMI for the automatic conveyor system is considered the primary way of controlling each operation; text displays are available as well as graphical touch screens, used in touch panels and for local monitoring of machines. This paper deals with the efficient use of a PLC in an automatic conveyor system and with building up its accuracy.

  3. Research on automatic loading & unloading technology for vertical hot ring rolling process

    Directory of Open Access Journals (Sweden)

    Xiaokai Wang

    2015-01-01

    Full Text Available The automatic loading & unloading technology is the key to an automatic ring production line. In this paper, the automatic vertical hot ring rolling (VHRR) process is taken as the target: a loading & unloading method for VHRR is proposed, and the mechanical structure of the loading & unloading system is designed. The virtual prototype model of the VHRR mill and the loading & unloading mechanism is established, the coordinated control method of the VHRR mill and the loading & unloading auxiliaries is studied, and the movement trace and dynamic characteristics of the critical components are obtained. Finally, a series of hot ring rolling tests is conducted on the VHRR mill, and the production rhythm and the geometric precision of the formed rings are analysed. The test results show that the loading & unloading technology can meet the requirements of high-quality and high-efficiency ring production. The research conclusions have practical significance for large-scale automatic ring production.

  4. A Development Process for Enterprise Information Systems Based on Automatic Generation of the Components

    Directory of Open Access Journals (Sweden)

    Adrian ALEXANDRESCU

    2008-01-01

    Full Text Available This paper contains some ideas concerning Enterprise Information Systems (EIS) development. It combines known elements from the software engineering domain with original elements which the author has conceived and experimented with. The author has pursued two major objectives: to use a simple description for the concepts of an EIS, and to achieve a rapid and reliable EIS development process with minimal cost. The first goal was achieved by defining models that describe the conceptual elements of the EIS domain: entities, events, actions, states and attribute domains. The second goal is based on a predefined architectural model for the EIS, on predefined analysis and design models for the elements of the domain, and finally on the automatic generation of the system components. The proposed methods do not depend on a particular programming language or database management system; they are general and may be applied to any combination of such technologies.

  5. A generative statistical approach to automatic 3D building roof reconstruction from laser scanning data

    Science.gov (United States)

    Huang, Hai; Brenner, Claus; Sester, Monika

    2013-05-01

    This paper presents a generative statistical approach to automatic 3D building roof reconstruction from airborne laser scanning point clouds. Previous work has widely used bottom-up methods, e.g., point clustering, plane detection, and contour extraction. Due to data artefacts caused by tree clutter, reflection from windows, water features, etc., bottom-up reconstruction in urban areas may suffer from a number of incomplete or irregular roof parts, and manually supplied geometric constraints are usually needed to ensure plausible results. In this work we propose an automatic process with emphasis on top-down approaches. The input point cloud is first pre-segmented into subzones containing a limited number of buildings to reduce the computational complexity for large urban scenes. For building extraction and reconstruction within the subzones we propose a purely top-down statistical scheme, in which bottom-up efforts or additional data such as building footprints are no longer required. Based on a predefined primitive library we conduct generative modeling to reconstruct roof models that fit the data. Primitives are assembled into an entire roof with given rules for combination and merging; overlaps of primitives are allowed in the assembly. The selection of roof primitives, as well as the sampling of their parameters, is driven by a variant of the Markov chain Monte Carlo technique with a specified jump mechanism. Experiments are performed on data sets of different building types (from simple houses and high-rise buildings to combined building groups) and resolutions. The results show robustness despite the data artefacts mentioned above and plausibility in the reconstruction.

  6. Designing and Building an Automatic Information Retrieval System for Handling the Arabic Data

    OpenAIRE

    2005-01-01

    This paper aimed to design and build an automatic information retrieval system to handle Arabic data. It also presents a comparison between the retrieval results obtained with the vector space model under two different indexing methods: full-word indexing and root indexing. The proposed automatic information retrieval system was implemented and built using a traditional model, the Vector Space Model (VSM), with the cosine similarity measure. The output resu...
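
A minimal sketch of the vector space model with cosine similarity assumed from the abstract; term weighting and the Arabic-specific full-word versus root indexing steps are out of scope here.

```python
# Rank documents against a query by cosine similarity of their term vectors.
import numpy as np

def cosine_similarity(query_vec, doc_vec):
    denom = np.linalg.norm(query_vec) * np.linalg.norm(doc_vec)
    return float(np.dot(query_vec, doc_vec) / denom) if denom else 0.0

def rank_documents(query_vec, doc_matrix):
    """doc_matrix: (n_docs, n_terms); returns document indices, best match first."""
    scores = [cosine_similarity(query_vec, d) for d in doc_matrix]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

docs = np.array([[1, 0, 2], [0, 1, 1], [3, 1, 0]], dtype=float)  # toy term counts
print(rank_documents(np.array([1.0, 0.0, 1.0]), docs))
```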

  7. Data mining spacecraft telemetry: towards generic solutions to automatic health monitoring and status characterisation

    Science.gov (United States)

    Royer, P.; De Ridder, J.; Vandenbussche, B.; Regibo, S.; Huygen, R.; De Meester, W.; Evans, D. J.; Martinez, J.; Korte-Stapff, M.

    2016-07-01

    We present the first results of a study aimed at finding new and efficient ways to automatically process spacecraft telemetry for health monitoring. The goal is to reduce the load on the flight control team while extending the "checkability" to the entire telemetry database, and to provide efficient, robust and more accurate detection of anomalies in near real time. We present a set of effective methods to (a) detect outliers in the telemetry or in its statistical properties, (b) uncover and visualise special properties of the telemetry and (c) detect new behaviour. Our results are structured around two main families of solutions. For parameters visiting a restricted set of signal values, i.e. all status parameters and about one third of the others, we focus on a transition analysis, exploiting properties of Poincaré plots. For parameters with an arbitrarily high number of possible signal values, we describe the statistical properties of the signal via its kernel density estimate. We demonstrate that this allows a generic and dynamic approach to soft-limit definition. Thanks to a much more accurate description of the signal and of its time evolution, we are more sensitive and more responsive to outliers than traditional checks against hard limits. Our methods were validated on two years of Venus Express telemetry. They are generic for assisting in health monitoring of any complex system with large amounts of diagnostic sensor data; not only spacecraft systems but also present-day astronomical observatories can benefit from them.
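
A sketch of the kernel-density "soft limit" idea described above: flag samples that fall in low-density regions of the signal's estimated distribution. The 1% density quantile used as the threshold is an assumption, not the mission's actual setting.

```python
# Flag telemetry samples whose estimated density falls below a dynamic soft limit.
import numpy as np
from scipy.stats import gaussian_kde

def kde_outliers(history, new_samples, quantile=0.01):
    kde = gaussian_kde(history)                      # density estimate from past telemetry
    baseline = kde(history)
    threshold = np.quantile(baseline, quantile)      # dynamic "soft limit" on density
    density = kde(new_samples)
    return np.asarray(new_samples)[density < threshold]

rng = np.random.default_rng(0)
history = rng.normal(20.0, 0.5, 5000)                # nominal housekeeping temperature [C]
new = np.array([19.8, 20.1, 23.7, 20.4])
print(kde_outliers(history, new))                    # the 23.7 C sample is flagged
```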

  8. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  9. Automatic metaphor processing in adults with Asperger syndrome: a metaphor interference effect task.

    Science.gov (United States)

    Hermann, Ismene; Haser, Verena; van Elst, Ludger Tebartz; Ebert, Dieter; Müller-Feldmeth, Daniel; Riedel, Andreas; Konieczny, Lars

    2013-11-01

    This paper investigates automatic processing of novel metaphors in adults with Asperger Syndrome (AS) and typically developing controls. We present an experiment combining a semantic judgment task and a recognition task. Four types of sentences were compared: Literally true high-typical sentences, literally true low-typical sentences, apt metaphors, and scrambled metaphors (literally false sentences which are not readily interpretable as metaphors). Participants were asked to make rapid decisions about the literal truth of such sentences. The results revealed that AS and control participants showed significantly slower RTs for metaphors than for scrambled metaphors and made more mistakes in apt metaphoric sentences than in scrambled metaphors. At the same time, there was higher recognition of apt metaphors compared with scrambled metaphors. The findings indicate intact automatic metaphor processing in AS and replicate previous findings on automatic metaphor processing in typically developing individuals.

  10. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    Science.gov (United States)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  11. Towards Effective Sentence Simplification for Automatic Processing of Biomedical Text

    CERN Document Server

    Jonnalagadda, Siddhartha; Hakenberg, Jorg; Baral, Chitta; Gonzalez, Graciela

    2010-01-01

    The complexity of sentences characteristic to biomedical articles poses a challenge to natural language parsers, which are typically trained on large-scale corpora of non-technical text. We propose a text simplification process, bioSimplify, that seeks to reduce the complexity of sentences in biomedical abstracts in order to improve the performance of syntactic parsers on the processed sentences. Syntactic parsing is typically one of the first steps in a text mining pipeline. Thus, any improvement in performance would have a ripple effect over all processing steps. We evaluated our method using a corpus of biomedical sentences annotated with syntactic links. Our empirical results show an improvement of 2.90% for the Charniak-McClosky parser and of 4.23% for the Link Grammar parser when processing simplified sentences rather than the original sentences in the corpus.

  12. Automatic digital document processing and management problems, algorithms and techniques

    CERN Document Server

    Ferilli, Stefano

    2011-01-01

    This text reviews the issues involved in handling and processing digital documents. Examining the full range of a document's lifetime, this book covers acquisition, representation, security, pre-processing, layout analysis, understanding, analysis of single components, information extraction, filing, indexing and retrieval. This title: provides a list of acronyms and a glossary of technical terms; contains appendices covering key concepts in machine learning, and providing a case study on building an intelligent system for digital document and library management; discusses issues of security,

  13. Process concepts for semi-automatic dismantling of LCD televisions

    OpenAIRE

    Elo, Kristofer; Sundin, Erik

    2014-01-01

    There is a large variety of electrical and electronic equipment products, for example liquid crystal display television sets (LCD TVs), in the waste stream today. Many LCD TVs contain mercury, which is a challenge to treat at recycling plants. Two currently used processes for recycling LCD TVs are automated shredding and manual disassembly. This paper aims to present concepts for semi-automated dismantling processes for LCD TVs in order to achieve higher productivity and flexibility, and in tu...

  14. Automatic concrete cracks detection and mapping of terrestrial laser scan data

    Directory of Open Access Journals (Sweden)

    Mostafa Rabah

    2013-12-01

    The current paper presents a method for automatic concrete crack detection and mapping from data obtained during a terrestrial laser scanning survey. Crack detection and mapping is achieved in three steps: shading correction in the original image, crack detection, and finally crack mapping and processing. The detected crack is defined in a pixel coordinate system. To remap the crack into the reference coordinate system, a reverse-engineering approach is used, based on a hybrid concept of terrestrial laser-scanner point clouds and the corresponding camera image, i.e. a conversion from the pixel coordinate system to the terrestrial laser-scanner or global coordinate system. The results of the experiment show that the mean differences between the terrestrial laser scan and the total station are about 30.5, 16.4 and 14.3 mm in the x, y and z directions, respectively.

  15. Eye movements in pedophiles: automatic and controlled attentional processes while viewing prepubescent stimuli.

    Science.gov (United States)

    Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo

    2013-05-01

    Recent theories in sexuality highlight the importance of automatic and controlled attentional processes in viewing sexually relevant stimuli. The model of Spiering and Everaerd (2007) assumes that sexually relevant features of a stimulus are preattentively selected and automatically induce focal attention to these sexually relevant aspects. Whether this assumption holds true for pedophiles is unknown. The aim of this study is to test this assumption empirically for people suffering from pedophilic interests. Twenty-two pedophiles, 8 nonpedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult while eye movements were measured. Entry time was assessed as a measure of automatic attentional processes and relative fixation time as a measure of controlled attentional processes. Pedophiles demonstrated significantly shorter entry time to child stimuli than to adult stimuli. The opposite was the case for nonpedophiles, as they showed longer relative fixation time for adult stimuli, and, against all expectations, pedophiles also demonstrated longer relative fixation time for adult stimuli. The results confirmed the hypothesis that pedophiles automatically selected sexually relevant stimuli (children). Contrary to all expectations, this automatic selection did not trigger focal attention to these sexually relevant pictures. Furthermore, pedophiles were first and longest attracted by the faces and pubic regions of children; nonpedophiles were first and longest attracted by the faces and breasts of adults. The results demonstrated, for the first time, that the face and pubic region are the most attracting regions in children for pedophiles.

  16. AUTOMATIC EXTRACTION OF ROAD SURFACE AND CURBSTONE EDGES FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    A. Miraliakbari

    2015-05-01

    Full Text Available We present a procedure for automatic extraction of the road surface from geo-referenced mobile laser scanning data. The basic assumption of the procedure is that the road surface is smooth and limited by curbstones. Two variants of jump detection are investigated for detecting curbstone edges, one based on height differences, the other on histograms of the height data. Region growing algorithms are proposed which use the irregular laser point cloud. Two- and four-neighbourhood growing strategies utilise the two height criteria for examining the neighbourhood. Both height criteria rely on an assumption about the minimum height of a low curbstone; road boundaries with lower or no jumps will not stop the region growing process. In contrast, objects on the road can terminate the process, so further processing such as bridging gaps between detected road boundary points and removing wrongly detected curbstone edges is necessary. Road boundaries are finally approximated by splines. Experiments are carried out with a ca. 2 km network of small streets located in the neighbourhood of the University of Applied Sciences in Stuttgart. For accuracy assessment of the extracted road surfaces, ground truth measurements are digitised manually from the laser scanner data. Completeness and correctness values of the region growing result between 92% and 95% are achieved.
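
A simplified four-neighbourhood region growing sketch on a rasterised height grid; the paper works on the irregular point cloud directly, so this grid version and the assumed minimum curbstone height of 0.07 m are illustrative only.

```python
# Grow the road surface from a seed cell, stopping where a curbstone-like height jump occurs.
from collections import deque
import numpy as np

def grow_road(height, seed, min_curb_height=0.07):
    """height: 2-D array of ground heights [m]; seed: (row, col) known to be on the road."""
    rows, cols = height.shape
    road = np.zeros_like(height, dtype=bool)
    road[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # four-neighbourhood
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not road[nr, nc]:
                # Stop growing where the height jump suggests a curbstone edge.
                if abs(height[nr, nc] - height[r, c]) < min_curb_height:
                    road[nr, nc] = True
                    queue.append((nr, nc))
    return road
```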

  17. Cooperative processing data bases

    Science.gov (United States)

    Hasta, Juzar

    1991-01-01

    Cooperative processing for the 1990's using client-server technology is addressed. The main theme is concepts of downsizing from mainframes and minicomputers to workstations on a local area network (LAN). This document is presented in view graph form.

  18. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    Science.gov (United States)

    Ortner, Mathias; Descombe, Xavier; Zerubia, Josiane

    2008-01-01

    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial repartition of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments, alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute) consisting of low-quality DEMs of various types.

  19. Automatic fracture density update using smart well data and artificial neural networks

    Science.gov (United States)

    Al-Anazi, A.; Babadagli, T.

    2010-03-01

    This paper presents a new methodology to continuously update and improve fracture network models. We begin with a hypothetical model whose fracture network parameters and geological information are known. After generating the "exact" fracture network with known characteristics, the data were exported to a reservoir simulator and simulations were run over a period of time. Intelligent wells equipped with downhole multiple pressure and flow sensors were placed throughout the reservoir and put into production. These producers were completed in different fracture zones to create a representative pressure and production response. We then considered a number of wells of which static (cores and well logs) and dynamic (production) data were used to model well fracture density. As new wells were opened, historical static and dynamic data from previous wells and static data from the new wells were used to update the fracture density using Artificial Neural Networks (ANN). The accuracy of the prediction model depends significantly on the representation of the available data of the existing fracture network. The importance of conventional data (surface production data) and smart well data prediction capability was also investigated. Highly sensitive input data were selected through a forward selection scheme to train the ANN. Well geometric locations were included as a new link in the ANN regression process. Once the relationship between fracture network parameters and well performance data was established, the ANN model was used to predict fracture density at newly drilled locations. Finally, an error analysis through a correlation coefficient and percentage absolute relative error performance was performed to examine the accuracy of the proposed inverse modeling methodology. It was shown that fracture dominated production performance data collected from both conventional and smart wells allow for automatically updating the fracture network model. The proposed technique helps
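
A hedged sketch of the inverse-modeling step described above: train a neural network on static and dynamic data from existing wells, then predict fracture density at a new location. The feature set, network size and synthetic values are assumptions for illustration only.

```python
# Toy fracture-density regression from well location, log and production features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Columns (assumed): well x, well y, log porosity, cumulative production (synthetic data).
X_train = rng.random((40, 4))
y_train = 0.5 * X_train[:, 2] + 0.3 * X_train[:, 3] + 0.05 * rng.standard_normal(40)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                   random_state=0))
model.fit(X_train, y_train)                      # relationship: well data -> fracture density

X_new_well = rng.random((1, 4))                  # features of a newly drilled location
print("predicted fracture density:", model.predict(X_new_well)[0])
```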

  20. Multi-level Bayesian safety analysis with unprocessed Automatic Vehicle Identification data for an urban expressway.

    Science.gov (United States)

    Shi, Qi; Abdel-Aty, Mohamed; Yu, Rongjie

    2016-03-01

    In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, which are the processed data capped at speed limit and the unprocessed data retaining the original speed were incorporated in the analysis along with road geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data was superior. Both multi-level models and random parameters models outperformed the Negative Binomial model and the models with random parameters achieved the best model fitting. The contributing factors identified imply that on the urban expressway lower speed and higher speed variation could significantly increase the crash likelihood. Other geometric factors were significant including auxiliary lanes and horizontal curvature.

  1. Automatic Model-Based Generation of Parameterized Test Cases Using Data Abstraction

    NARCIS (Netherlands)

    Calamé, Jens R.; Ioustinova, Natalia; Pol, van de Jaco; Romijn, J.M.T.; Smith, G.; Pol, van de J.C.

    2007-01-01

    Developing test suites is a costly and error-prone process. Model-based test generation tools facilitate this process by automatically generating test cases from system models. The applicability of these tools, however, depends on the size of the target systems. Here, we propose an approach to gener

  2. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    Science.gov (United States)

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.

  3. Processing LHC data

    CERN Multimedia

    CERN IT department

    2013-01-01

    The LHC produces 600 million collisions every second in each detector, which generates approximately one petabyte of data per second. None of today’s computing systems are capable of recording such rates. Hence sophisticated selection systems are used for a first fast electronic pre-selection, only passing one out of 10 000 events. Tens of thousands of processor cores then select 1% of the remaining events. Even after such a drastic data reduction, the four big experiments, ALICE, ATLAS, CMS and LHCb, together need to store over 25 petabytes per year. The LHC data are aggregated in the CERN Data Centre, where initial data reconstruction is performed, and a copy is archived to long-term tape storage. Another copy is sent to several large scale data centres around the world. Subsequently hundreds of thousands of computers from around the world come into action: harnessed in a distributed computing service, they form the Worldwide LHC Computing Grid (WLCG), which provides the resources to store, distribute, an...

  4. Automatic Detection of Steel Ball's Surface Flaws Based on Image Processing

    Institute of Scientific and Technical Information of China (English)

    YU Zheng-lin; TAN Wei; YANG Dong-lin; CAO Guo-hua

    2007-01-01

    A new method to detect surface flaws on steel balls is presented, based on computer techniques of image processing and pattern recognition. Surface flaws on steel balls are the primary factor causing bearing failure. The presented method enables highly efficient and precise detection of surface flaws, including spots, abrasion, burns, scratches and cracks. The design of the main components of the detection system is described in detail, including the automatic feeding mechanism, the automatic mechanism for spreading the steel ball's surface, the optical system of the microscope, the image acquisition system and the image processing system. The whole automatic system is controlled by an industrial control computer, which can carry out the recognition of steel ball surface flaws effectively.

  5. Effect of Data Processing on Data Quality

    Directory of Open Access Journals (Sweden)

    A. R. Samih

    2008-01-01

    Full Text Available Problem statement: Great attention has been paid to spatial data quality by the scientific community, because poor spatial data quality has a negative impact on the competitiveness of an organization. On the other hand, we can never obtain good-quality data from poor-quality data. In this study, we demonstrate the effects of different processing and preprocessing steps on the quality of spatial data, as each type of processing introduces errors and deformations into the original spatial data. Approach: Field applications and real samples were used to demonstrate the effect of data processing on data quality. We used spectrally and spatially processed satellite images covering the following areas: (i) the Greater Amman area, and (ii) the Walla and Habisse basins. Different types of processing at different scales and resolutions were applied to the field applications to evaluate the effects of scale, resolution and electronic transfer from vector to raster. Results: The vector layers extracted from these spatial data at different scales and resolutions were compared to each other. The comparison showed large deformations in shape and value. This research demonstrates the influence of scale, resolution and the transformation from vector to raster on the accuracy of a spatial database. Conclusion: We conclude that scale, resolution and electronic transfer have great effects on data quality. These effects should be considered when building any database, and every database must have a history file for evaluating its accuracy and quality.

  6. Automatic detection of NIL defects using microscopy and image processing

    KAUST Repository

    Pietroy, David

    2013-12-01

    Nanoimprint Lithography (NIL) is a promising technology for low-cost and large-scale nanostructure fabrication. The technique is based on a contact molding-demolding process that can produce a number of defects such as incomplete filling, negative patterns and sticking. In this paper, microscopic imaging combined with a specific processing algorithm is used to numerically detect defects in printed patterns. Results obtained for 1D and 2D imprinted gratings with different microscopic image magnifications are presented. The results are independent of the device that captures the image (optical, confocal or electron microscope). The use of numerical images makes it possible to automate the detection and to compute a statistical analysis of defects. This method provides a fast analysis of printed gratings and could be used to monitor the production of such structures. © 2013 Elsevier B.V. All rights reserved.

  7. Automatic optimized discovery, creation and processing of astronomical catalogs

    NARCIS (Netherlands)

    Buddelmeijer, Hugo; Boxhoorn, Danny; Valentijn, Edwin A.

    2013-01-01

    We present the design of a novel way of handling astronomical catalogs in Astro-WISE in order to achieve the scalability required for the data produced by large scale surveys. A high level of automation and abstraction is achieved in order to facilitate interoperation with visualization software for

  8. Quality assessment of automatically extracted data from GPs' EPR.

    Science.gov (United States)

    de Clercq, Etienne; Moreels, Sarah; Van Casteren, Viviane; Bossuyt, Nathalie; Goderis, Geert; Bartholomeeusen, Stefaan

    2012-01-01

    There are many secondary benefits to collecting routine primary care data, but we first need to understand some of the properties of this data. In this paper we describe the method used to assess the PPV and sensitivity of data extracted from Belgian GPs' EPR (diagnoses, drug prescriptions, referrals, and certain parameters), using data collected through an electronic questionnaire as a gold standard. We describe the results of the ResoPrim phase 2 project, which involved 4 software systems and 43 practices (10,307 patients). This method of assessment could also be applied to other research networks.
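
A minimal sketch of the two quality measures named above, PPV and sensitivity, computed from confusion counts of extracted EPR items against the questionnaire gold standard; the numbers are illustrative, not ResoPrim results.

```python
# PPV and sensitivity of automatically extracted data against a gold standard.
def ppv_sensitivity(true_positives, false_positives, false_negatives):
    ppv = true_positives / (true_positives + false_positives)
    sensitivity = true_positives / (true_positives + false_negatives)
    return ppv, sensitivity

# Example: 180 diagnoses correctly extracted, 20 spurious extractions, 40 missed.
print(ppv_sensitivity(180, 20, 40))   # -> (0.9, 0.818...)
```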

  9. Cognitive regulation of smoking behavior within a cigarette: Automatic and nonautomatic processes.

    Science.gov (United States)

    Motschman, Courtney A; Tiffany, Stephen T

    2016-06-01

    There has been limited research on cognitive processes governing smoking behavior in individuals who are tobacco dependent. In a replication (Baxter & Hinson, 2001) and extension, this study examined the theory (Tiffany, 1990) that drug use may be controlled by automatic processes that develop over repeated use. Heavy and occasional cigarette smokers completed a button-press reaction time (RT) task while concurrently smoking a cigarette, pretending to smoke a lit cigarette, or not smoking. Slowed RT during the button-press task indexed the cognitive disruption associated with nonautomatic control of behavior. Occasional smokers' RTs were slowed when smoking or pretending to smoke compared with when not smoking. Heavy smokers' RTs were slowed when pretending to smoke versus not smoking; however, their RTs were similarly fast when smoking compared with not smoking. The results indicated that smoking behavior was more highly regulated by controlled, nonautomatic processes among occasional smokers and by automatic processes among heavy smokers. Patterns of RT across the interpuff interval indicated that occasional smokers were significantly slowed in anticipation of and immediately after puffing onset, whereas heavy smokers were only slowed significantly after puffing onset. These findings suggest that the entirety of the smoking sequence becomes automatized, with the behaviors leading up to puffing becoming more strongly regulated by automatic processes with experience. These results have relevance to theories on the cognitive regulation of cigarette smoking and support the importance of interventions that focus on routinized behaviors that individuals engage in during and leading up to drug use.

  10. Sample Data Processing.

    Science.gov (United States)

    1982-08-01

    the relative practicality of compensating the channel with an approach of predistorting the masking sequence, by processing in a filter that … replicates the channel response, with a conventional approach of equalizing the channel with an inverse filter. The predistortion method demonstrated a … compensate for the channel distortion is to predistort the encryption stream in the receiver by means of a filter which replicates the impulse response of

  11. Adaptive Clutch Engaging Process Control for Automatic Mechanical Transmission

    Institute of Scientific and Technical Information of China (English)

    LIU Hai-ou; CHEN Hui-yan; DING Hua-rong; HE Zhong-bo

    2005-01-01

    Based on a detailed analysis of the control targets and adaptive demands of the clutch engaging process, a control strategy based on the speed signal, rather than on the main clutch displacement signal, is put forward. It considers both jerk and slipping work, which are the most commonly used quality-evaluation indexes of the vehicle starting phase. The adaptive control system and its reference model are discussed in depth. Taking the adaptability to different starting gears and different road conditions as examples, some proving field test records are shown to illustrate the main clutch adaptive control strategy in the starting phase. The proving field tests give acceptable results.

  12. Automatic land vehicle navigation using road map data

    Energy Technology Data Exchange (ETDEWEB)

    Schindwolf, R.

    1984-06-01

    A land navigation system has been developed that provides accurate navigation data while the vehicle is traveling on mapped roads. The system is autonomous and consists of a simple dead-reckoning navigator that is updated with stored road map data. Simulation and preliminary test results indicate that accuracies on the order of 50 feet can be achieved. Accuracy is independent of time.
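
An illustrative dead-reckoning update with a map correction step: propagate position from heading and distance, then snap to the nearest stored road point. The snap distance threshold is an assumption, not a value from the report.

```python
# Dead reckoning corrected by stored road map data (toy 2-D example, units in metres).
import math

def dead_reckon(pos, heading_deg, distance):
    x, y = pos
    h = math.radians(heading_deg)
    return (x + distance * math.sin(h), y + distance * math.cos(h))

def map_update(pos, road_points, max_snap=15.0):
    nearest = min(road_points, key=lambda p: math.dist(p, pos))
    return nearest if math.dist(nearest, pos) <= max_snap else pos

road = [(0.0, 0.0), (0.0, 50.0), (0.0, 100.0)]          # a straight mapped road
pos = (0.0, 0.0)
pos = dead_reckon(pos, heading_deg=2.0, distance=50.0)  # small heading error drifts east
pos = map_update(pos, road)                             # corrected back onto the road
print(pos)
```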

  13. Assessing the Utility of Automatic Cancer Registry Notifications Data Extraction from Free-Text Pathology Reports.

    Science.gov (United States)

    Nguyen, Anthony N; Moore, Julie; O'Dwyer, John; Philpot, Shoni

    2015-01-01

    Cancer Registries record cancer data by reading and interpreting pathology cancer specimen reports. For some Registries this can be a manual process, which is labour and time intensive and subject to errors. A system for automatic extraction of cancer data from HL7 electronic free-text pathology reports has been proposed to improve the workflow efficiency of the Cancer Registry. The system is currently processing an incoming trickle feed of HL7 electronic pathology reports from across the state of Queensland in Australia to produce an electronic cancer notification. Natural language processing and symbolic reasoning using SNOMED CT were adopted in the system; Queensland Cancer Registry business rules were also incorporated. A set of 220 unseen pathology reports selected from patients with a range of cancers was used to evaluate the performance of the system. The system achieved overall recall of 0.78, precision of 0.83 and F-measure of 0.80 over seven categories, namely, basis of diagnosis (3 classes), primary site (66 classes), laterality (5 classes), histological type (94 classes), histological grade (7 classes), metastasis site (19 classes) and metastatic status (2 classes). These results are encouraging given the large cross-section of cancers. The system allows for the provision of clinical coding support as well as indicative statistics on the current state of cancer, which is not otherwise available.
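
A sketch of how an overall recall, precision and F-measure across notification categories can be micro-averaged from per-category confusion counts; the counts below are illustrative, not the study's figures.

```python
# Micro-averaged precision, recall and F-measure over several extraction categories.
def micro_prf(counts):
    """counts: list of (true_positives, false_positives, false_negatives) per category."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# e.g. (tp, fp, fn) for basis of diagnosis, primary site and laterality
print(micro_prf([(200, 30, 40), (150, 40, 60), (180, 20, 30)]))
```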

  14. Dynamic Data Driven Applications Systems (DDDAS) modeling for automatic target recognition

    Science.gov (United States)

    Blasch, Erik; Seetharaman, Guna; Darema, Frederica

    2013-05-01

    The Dynamic Data Driven Applications Systems (DDDAS) concept uses applications modeling, mathematical algorithms, and measurement systems to work with dynamic systems. A dynamic system such as Automatic Target Recognition (ATR) is subject to sensor, target, and environment variations over space and time. We use the DDDAS concept to develop an ATR methodology for multiscale-multimodal analysis that seeks to integrate sensing, processing, and exploitation. In the analysis, we use computer vision techniques to explore the capabilities and analogies that DDDAS has with information fusion. The key attribute of coordination is the use of sensor management as a data-driven technique to improve performance. In addition, DDDAS supports the need for modeling, from which uncertainty and variations are used within the dynamic models for advanced performance. As an example, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between ATR and DDDAS systems that warrant an integrated perspective. This elementary work is aimed at triggering a sequence of deeper, insightful research towards exploiting sparsely sampled, piecewise-dense WAMI measurements - an application where the challenges of big data with regard to mathematical fusion relationships and high-performance computation remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenges of exponentially increasing data volume for advanced information fusion system solutions such as simultaneous target tracking and ATR.

  15. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage

    Directory of Open Access Journals (Sweden)

    Syed Ali Naqi Gilani

    2016-03-01

    Full Text Available The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally applied, which adversely affects the detection performance: buildings that are small, under shadows or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point cloud and orthoimagery. The building delineation process is carried out by identifying candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of partially occluded building parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the building regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with a correctness above 95%, demonstrating the robustness of the approach. The absence of over-segmentation and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. Compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets, and it does better than or equal to its counterparts on the ISPRS benchmark.

  16. Automatic mapping of urban areas from Landsat data using impervious surface fraction algorithm

    Science.gov (United States)

    Nguyen, S. T.; Chen, C. F.; Chen, C. R.

    2014-12-01

    Urbanization results from the aggregation of people in urban areas; it can help advance socioeconomic development and lift people out of poverty, but if not monitored well it can also lead to the loss of farmland and natural forest as well as to societal impacts including the burgeoning growth of slums, pollution, and crime. Spatiotemporal information on urbanization is thus critical to the urban planning process. The overall objective of this study is to develop an impervious surface fraction algorithm (ISFA) for automatically mapping urban areas from Landsat data. We processed the data for 1986, 2001 and 2014 to trace the multi-decadal spatiotemporal change of the Honduran capital city using a three-step procedure: (1) data pre-processing to perform image normalization and to produce the difference (DVSS) between the simple ratio (SR) of the green and shortwave bands and the soil-adjusted vegetation index (SAVI); (2) quantification of urban areas using ISFA; and (3) accuracy assessment of the mapping results using ground reference data constructed from land-cover maps and FORMOSAT-2 imagery. The accuracy assessment for 2001 and 2014 against the ground reference data indicated satisfactory results, with overall accuracies and kappa coefficients generally higher than 90% and 0.8, respectively. When examining the urbanization between these years, the urban area was observed to expand significantly from 1986 to 2014, driven mainly by rapid population growth and socioeconomic development. This study demonstrates the merit of using ISFA for multi-decadal monitoring of the urbanization of the Honduran capital city from Landsat data. Results from this research can be used by urban planners as a general indicator to quantify urban change and environmental impacts. The methods are thus transferable to monitoring urban growth in cities and their peri…
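
A sketch of the spectral quantities named above, computed per pixel from Landsat band reflectances. Following the abstract's wording, DVSS is taken here simply as SR minus SAVI, and the soil adjustment factor L = 0.5 is the conventional default, not a value from the paper.

```python
# Per-pixel SAVI, simple ratio and their difference (DVSS) from reflectance arrays.
import numpy as np

def savi(nir, red, L=0.5):
    return (1 + L) * (nir - red) / (nir + red + L)

def simple_ratio(green, swir):
    return green / swir

def dvss(green, red, nir, swir):
    return simple_ratio(green, swir) - savi(nir, red)

green, red = np.array([0.10, 0.08]), np.array([0.09, 0.12])
nir, swir = np.array([0.30, 0.18]), np.array([0.15, 0.20])
print(dvss(green, red, nir, swir))   # higher values suggest impervious / built-up pixels
```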

  17. Fast Implementation of Matched Filter Based Automatic Alignment Image Processing

    Energy Technology Data Exchange (ETDEWEB)

    Awwal, A S; Rice, K; Taha, T

    2008-04-02

    Video images of laser beams imprinted with distinguishable features are used for the alignment of the 192 laser beams at the National Ignition Facility (NIF). Algorithms designed to determine the positions of these beams enable the control system to perform the alignment task. Centroiding is a common approach for determining beam position, but real-world beam images suffer from intensity fluctuations and other distortions which make this approach susceptible to higher position-measurement variability. Matched filtering for identifying the beam position results in greater stability of the position measurement compared with the centroiding technique; however, this gain is achieved at the expense of the extra processing time required for each beam image. In this work we explore the possibility of using a field-programmable gate array (FPGA) to speed up these computations. The results indicate a performance improvement of a factor of 20 using the FPGA relative to a 3 GHz Pentium 4 processor.
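
A comparison of the two position estimators discussed above on a synthetic 1-D beam profile: an intensity-weighted centroid versus the peak of a matched-filter (cross-correlation) response. The beam shape, noise level and template are assumptions; the NIF templates and FPGA implementation are not modeled here.

```python
# Centroid vs. matched-filter position estimate on a noisy 1-D beam profile.
import numpy as np

def centroid(signal):
    idx = np.arange(signal.size)
    return float(np.sum(idx * signal) / np.sum(signal))

def matched_filter_position(signal, template):
    response = np.correlate(signal, template, mode="same")
    return int(np.argmax(response))

x = np.arange(200)
template = np.exp(-0.5 * ((x - 100) / 5.0) ** 2)        # known beam feature shape
signal = np.exp(-0.5 * ((x - 130) / 5.0) ** 2)          # actual beam centred near pixel 130
signal += 0.05 * np.random.default_rng(0).random(200)   # intensity fluctuation / background

# The background pulls the centroid away from the peak; the correlation peak stays put.
print(centroid(signal), matched_filter_position(signal, template))
```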

  18. Native language shapes automatic neural processing of speech.

    Science.gov (United States)

    Intartaglia, Bastien; White-Schwoch, Travis; Meunier, Christine; Roman, Stéphane; Kraus, Nina; Schön, Daniele

    2016-08-01

    The development of the phoneme inventory is driven by the acoustic-phonetic properties of one's native language. Neural representation of speech is known to be shaped by language experience, as indexed by cortical responses, and recent studies suggest that subcortical processing also exhibits this attunement to native language. However, most work to date has focused on the differences between tonal and non-tonal languages that use pitch variations to convey phonemic categories. The aim of this cross-language study is to determine whether subcortical encoding of speech sounds is sensitive to language experience by comparing native speakers of two non-tonal languages (French and English). We hypothesized that neural representations would be more robust and fine-grained for speech sounds that belong to the native phonemic inventory of the listener, and especially for the dimensions that are phonetically relevant to the listener such as high frequency components. We recorded neural responses of American English and French native speakers, listening to natural syllables of both languages. Results showed that, independently of the stimulus, American participants exhibited greater neural representation of the fundamental frequency compared to French participants, consistent with the importance of the fundamental frequency to convey stress patterns in English. Furthermore, participants showed more robust encoding and more precise spectral representations of the first formant when listening to the syllable of their native language as compared to non-native language. These results align with the hypothesis that language experience shapes sensory processing of speech and that this plasticity occurs as a function of what is meaningful to a listener.

  19. Analysis of Fiber deposition using Automatic Image Processing Method

    Science.gov (United States)

    Belka, M.; Lizal, F.; Jedelsky, J.; Jicha, M.

    2013-04-01

    Fibers are a permanent threat to human health. They have the ability to penetrate deep into the human lung, deposit there and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of the human airways at an inspiratory flow rate of 30 l/min. The replica included the human airways from the oral cavity up to the seventh generation of branching. After the delivery, the deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high-definition camera attached to a phase contrast microscope. Results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and deposition fraction and deposition efficiency were calculated afterwards.

  20. Analysis of Fiber deposition using Automatic Image Processing Method

    Directory of Open Access Journals (Sweden)

    Jicha M.

    2013-04-01

    Full Text Available Fibers are a permanent threat to human health. They have the ability to penetrate deep into the human lung, deposit there and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of the human airways at an inspiratory flow rate of 30 l/min. The replica included the human airways from the oral cavity up to the seventh generation of branching. After the delivery, the deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high-definition camera attached to a phase contrast microscope. Results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and deposition fraction and deposition efficiency were calculated afterwards.
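
A toy version of the image-analysis principle described above: threshold a microscope image and count the connected objects (candidate fibers) on the filter. The threshold and minimum object size are assumptions, not the study's settings.

```python
# Count fiber-like objects in a grayscale image via thresholding and connected components.
import numpy as np
from scipy import ndimage

def count_fibers(image, threshold=0.5, min_pixels=20):
    mask = image > threshold                       # separate bright fibers from background
    labels, n = ndimage.label(mask)                # connected-component labelling
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return int(np.sum(sizes >= min_pixels))        # ignore tiny noise specks

rng = np.random.default_rng(0)
img = rng.random((128, 128)) * 0.3                 # background noise
img[30:90, 60:63] = 0.9                            # one synthetic fiber
print(count_fibers(img))                           # -> 1
```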

  1. Automatically identifying scatter in fluorescence data using robust techniques

    DEFF Research Database (Denmark)

    Engelen, S.; Frosch, Stina; Hubert, M.

    2007-01-01

    First- and second-order Rayleigh and Raman scatter is a common problem when fitting Parallel Factor Analysis (PARAFAC) models to fluorescence excitation-emission (EEM) data. The scatter does not contain any relevant chemical information and does not conform to the low-rank trilinear model. An automated scatter identification method is developed based on robust statistical methods. The method does not demand any visual inspection of the data prior to modeling, and can handle first- and second-order Rayleigh scatter as well as Raman scatter in various types of EEM data. The results of the automated scatter identification method were used as input data for three different PARAFAC approaches: firstly, inserting missing values in the scatter regions; secondly, interpolating the scatter regions; and finally, down-weighting the scatter regions. These results show that the PARAFAC method to choose after scatter…

  2. Examining the influence of psychopathy, hostility biases, and automatic processing on criminal offenders' Theory of Mind

    NARCIS (Netherlands)

    L. Nentjes; D. Bernstein; A. Arntz; G. van Breukelen; M. Slaats

    2015-01-01

    Theory of Mind (ToM) is a social perceptual skill that refers to the ability to take someone else's perspective and infer what others think. The current study examined the effect of potential hostility biases, as well as controlled (slow) versus automatic (fast) processing on ToM performance in psyc

  3. Automatic and Manual Processes in End-User Multimedia Authoring Tools: Where is the Balance?

    NARCIS (Netherlands)

    Guimarães, R.L.

    2010-01-01

    This thesis aims to analyze, model, and develop a framework for next-generation multimedia authoring tools targeted to end-users. In particular, I concentrate on the combination of automatic and manual processes for the realization of such framework. My contributions are realized in the context of a

  4. Semi-Automatic Post-Processing for Improved Usability of Electure Podcasts

    Science.gov (United States)

    Hurst, Wolfgang; Welte, Martina

    2009-01-01

    Purpose: Playing back recorded lectures on handheld devices offers interesting perspectives for learning, but suffers from small screen sizes. The purpose of this paper is to propose several semi-automatic post-processing steps in order to improve usability by providing a better readability and additional navigation functionality.…

  5. Automatic image processing solutions for MRI-guided minimally invasive intervention planning

    NARCIS (Netherlands)

    Noorda, YH

    2016-01-01

    In this thesis, automatic image processing methods are discussed for the purpose of improving treatment planning of MRI-guided minimally invasive interventions. Specifically, the following topics are addressed: rib detection in MRI, liver motion modeling in MRI and MR-CT registration of planning images for HIFU treatment.

  6. Automatic image processing solutions for MRI-guided minimally invasive intervention planning

    OpenAIRE

    Noorda, YH

    2016-01-01

    In this thesis, automatic image processing methods are discussed for the purpose of improving treatment planning of MRI-guided minimally invasive interventions. Specifically, the following topics are addressed: rib detection in MRI, liver motion modeling in MRI and MR-CT registration of planning image for HIFU treatment.

  7. Automatic Processing of Emotional Faces in High-Functioning Pervasive Developmental Disorders: An Affective Priming Study

    Science.gov (United States)

    Kamio, Yoko; Wolf, Julie; Fein, Deborah

    2006-01-01

    This study examined automatic processing of emotional faces in individuals with high-functioning Pervasive Developmental Disorders (HFPDD) using an affective priming paradigm. Sixteen participants (HFPDD and matched controls) were presented with happy faces, fearful faces or objects in both subliminal and supraliminal exposure conditions, followed…

  8. SNPflow: a lightweight application for the processing, storing and automatic quality checking of genotyping assays.

    Directory of Open Access Journals (Sweden)

    Hansi Weissensteiner

    Full Text Available Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs, which allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both stand-alone version and multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at.

  9. SNPflow: A Lightweight Application for the Processing, Storing and Automatic Quality Checking of Genotyping Assays

    Science.gov (United States)

    Schönherr, Sebastian; Neuner, Mathias; Forer, Lukas; Specht, Günther; Kloss-Brandstätter, Anita; Kronenberg, Florian; Coassin, Stefan

    2013-01-01

    Single nucleotide polymorphisms (SNPs) play a prominent role in modern genetics. Current genotyping technologies such as Sequenom iPLEX, ABI TaqMan and KBioscience KASPar made the genotyping of huge SNP sets in large populations straightforward and allow the generation of hundreds of thousands of genotypes even in medium sized labs. While data generation is straightforward, the subsequent data conversion, storage and quality control steps are time-consuming, error-prone and require extensive bioinformatic support. In order to ease this tedious process, we developed SNPflow. SNPflow is a lightweight, intuitive and easily deployable application, which processes genotype data from Sequenom MassARRAY (iPLEX) and ABI 7900HT (TaqMan, KASPar) systems and is extendible to other genotyping methods as well. SNPflow automatically converts the raw output files to ready-to-use genotype lists, calculates all standard quality control values such as call rate, expected and real amount of replicates, minor allele frequency, absolute number of discordant replicates, discordance rate and the p-value of the HWE test, checks the plausibility of the observed genotype frequencies by comparing them to HapMap/1000-Genomes, provides a module for the processing of SNPs, which allow sex determination for DNA quality control purposes and, finally, stores all data in a relational database. SNPflow runs on all common operating systems and comes as both stand-alone version and multi-user version for laboratory-wide use. The software, a user manual, screenshots and a screencast illustrating the main features are available at http://genepi-snpflow.i-med.ac.at. PMID:23527209
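
A sketch of two of the quality-control values listed above, minor allele frequency and the Hardy-Weinberg equilibrium p-value, computed from genotype counts. This mirrors the standard statistics only and is not SNPflow's actual implementation; the counts are made up.

```python
# MAF and HWE chi-square p-value from genotype counts (AA, Aa, aa) for one SNP.
from scipy.stats import chi2

def maf_and_hwe(n_aa_major, n_het, n_aa_minor):
    n = n_aa_major + n_het + n_aa_minor
    p = (2 * n_aa_major + n_het) / (2 * n)             # major allele frequency
    q = 1 - p
    maf = min(p, q)
    expected = [p * p * n, 2 * p * q * n, q * q * n]   # genotype counts expected under HWE
    observed = [n_aa_major, n_het, n_aa_minor]
    chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(chi_sq, df=1)                    # 1 degree of freedom
    return maf, p_value

print(maf_and_hwe(630, 330, 40))
```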

  10. GPU applications for data processing

    Energy Technology Data Exchange (ETDEWEB)

    Vladymyrov, Mykhailo, E-mail: mykhailo.vladymyrov@cern.ch [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); Aleksandrov, Andrey [LPI - Lebedev Physical Institute of the Russian Academy of Sciences, RUS-119991 Moscow (Russian Federation); INFN sezione di Napoli, I-80125 Napoli (Italy); Tioukov, Valeri [INFN sezione di Napoli, I-80125 Napoli (Italy)

    2015-12-31

    Modern experiments that use nuclear photoemulsion require fast and efficient data acquisition from the emulsion. New approaches to developing scanning systems require real-time processing of large amounts of data. Methods that use Graphics Processing Unit (GPU) computing power for emulsion data processing are presented here. It is shown how GPU-accelerated emulsion processing helped us to raise the scanning speed by a factor of nine.

  11. Parallelization and automatic data distribution for nuclear reactor simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, L.M. [Liebrock-Hicks Research, Calumet, MI (United States)

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  12. FEATURES AND GROUND AUTOMATIC EXTRACTION FROM AIRBORNE LIDAR DATA

    OpenAIRE

    D. Costantino; M. G. Angelini

    2012-01-01

    The aim of the research has been to develop and implement an algorithm for automated extraction of features from LIDAR scenes with varying terrain and coverage types. This applies the moments of third order (skewness) and fourth order (kurtosis). While the first has been applied in order to produce an initial filtering and data classification, the second, through the introduction of the weights of the measures, provided the desired results, which is a finer classification and l...

  13. Necessary Processing of Personal Data

    DEFF Research Database (Denmark)

    Tranberg, Charlotte Bagger

    2006-01-01

    The Data Protection Directive prohibits processing of sensitive data (racial or ethnic origin, political, religious or philosophical convictions, trade union membership and information on health and sex life). All other personal data may be processed, provided processing is deemed necessary...... Handelsgesellschaft. The aim of this article is to clarify the necessity requirement of the Data Protection Directive in terms of the general principle of proportionality. The usefulness of the principle of proportionality as the standard by which processing of personal data may be weighed is illustrated by the Peck...

  14. Automatic analysis (aa: efficient neuroimaging workflows and parallel processing using Matlab and XML

    Directory of Open Access Journals (Sweden)

    Rhodri eCusack

    2015-01-01

    Full Text Available Recent years have seen neuroimaging data becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complex to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast and efficient, for simple single subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.

  15. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML.

    Science.gov (United States)

    Cusack, Rhodri; Vicente-Grabovetsky, Alejandro; Mitchell, Daniel J; Wild, Conor J; Auer, Tibor; Linke, Annika C; Peelle, Jonathan E

    2014-01-01

    Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, for simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
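
    The record above describes a processing engine that tracks module dependencies and skips work that has already been completed. Below is a minimal, illustrative sketch of that idea in Python; the module names and the dependency map are invented, and this is not the aa code itself.

        # Sketch of a dependency-tracking pipeline engine (illustrative only).
        from graphlib import TopologicalSorter

        # Hypothetical modules and their upstream dependencies.
        dependencies = {
            "realign": [],
            "normalise": ["realign"],
            "smooth": ["normalise"],
            "firstlevel_stats": ["smooth"],
        }

        completed = {"realign"}          # e.g. restored from a previous run

        def run(module):
            print(f"running {module}")   # placeholder for the real work

        for module in TopologicalSorter(dependencies).static_order():
            if module in completed:
                print(f"skipping {module} (already done)")
                continue
            run(module)
            completed.add(module)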

  16. Intentional and Automatic Numerical Processing as Predictors of Mathematical Abilities in Primary School Children

    Directory of Open Access Journals (Sweden)

    Violeta ePina

    2015-03-01

    Full Text Available Previous studies have suggested that numerical processing relates to mathematical performance, but it seems that such relationship is more evident for intentional than for automatic numerical processing. In the present study we assessed the relationship between the two types of numerical processing and specific mathematical abilities in a sample of 109 children in grades 1 to 6. Participants were tested in an ample range of mathematical tests and also performed both a numerical and a size comparison task. The results showed that numerical processing related to mathematical performance only when inhibitory control was involved in the comparison tasks. Concretely, we found that intentional numerical processing, as indexed by the numerical distance effect in the numerical comparison task, was related to mathematical reasoning skills only when the task-irrelevant dimension (the physical size) was incongruent; whereas automatic numerical processing, indexed by the congruency effect in the size comparison task, was related to mathematical calculation skills only when digits were separated by small distance. The observed double dissociation highlights the relevance of both intentional and automatic numerical processing in mathematical skills, but when inhibitory control is also involved.

  17. Intentional and automatic numerical processing as predictors of mathematical abilities in primary school children.

    Science.gov (United States)

    Pina, Violeta; Castillo, Alejandro; Cohen Kadosh, Roi; Fuentes, Luis J

    2015-01-01

    Previous studies have suggested that numerical processing relates to mathematical performance, but it seems that such relationship is more evident for intentional than for automatic numerical processing. In the present study we assessed the relationship between the two types of numerical processing and specific mathematical abilities in a sample of 109 children in grades 1-6. Participants were tested in an ample range of mathematical tests and also performed both a numerical and a size comparison task. The results showed that numerical processing related to mathematical performance only when inhibitory control was involved in the comparison tasks. Concretely, we found that intentional numerical processing, as indexed by the numerical distance effect in the numerical comparison task, was related to mathematical reasoning skills only when the task-irrelevant dimension (the physical size) was incongruent; whereas automatic numerical processing, indexed by the congruency effect in the size comparison task, was related to mathematical calculation skills only when digits were separated by small distance. The observed double dissociation highlights the relevance of both intentional and automatic numerical processing in mathematical skills, but when inhibitory control is also involved.

  18. Neural network for automatic analysis of motility data

    DEFF Research Database (Denmark)

    Jakobsen, Erik; Kruse-Andersen, S; Kolberg, Jens Godsk

    1994-01-01

    events. Due to great variation in events, this method often fails to detect biologically relevant pressure variations. We have tried to develop a new concept for recognition of pressure events based on a neural network. Pressures were recorded for over 23 hours in 29 normal volunteers by means...... of a portable data recording system. A number of pressure events and non-events were selected from 9 recordings and used for training the network. The performance of the trained network was then verified on recordings from the remaining 20 volunteers. The accuracy and sensitivity of the two systems were...

  19. An algorithm for discovering Lagrangians automatically from data

    Directory of Open Access Journals (Sweden)

    Daniel J.A. Hills

    2015-11-01

    Full Text Available An activity fundamental to science is building mathematical models. These models are used to both predict the results of future experiments and gain insight into the structure of the system under study. We present an algorithm that automates the model building process in a scientifically principled way. The algorithm can take observed trajectories from a wide variety of mechanical systems and, without any other prior knowledge or tuning of parameters, predict the future evolution of the system. It does this by applying the principle of least action and searching for the simplest Lagrangian that describes the system’s behaviour. By generating this Lagrangian in a human interpretable form, it can also provide insight into the workings of the system.

  20. Automatically Creating Design Models from 3D Anthropometry Data

    CERN Document Server

    Wuhrer, Stefanie; Bose, Prosenjit

    2011-01-01

    When designing a product that needs to fit the human shape, designers often use a small set of 3D models, called design models, either in physical or digital form, as representative shapes to cover the shape variabilities of the population for which the products are designed. Until recently, the process of creating these models has been an art involving manual interaction and empirical guesswork. The availability of the 3D anthropometric databases provides an opportunity to create design models optimally. In this paper, we propose a novel way to use 3D anthropometric databases to generate design models that represent a given population for design applications such as the sizing of garments and gear. We generate the representative shapes by solving a covering problem in a parameter space. Well-known techniques in computational geometry are used to solve this problem. We demonstrate the method using examples in designing glasses and helmets.

  1. Low-cost automatic activity data recording system

    Directory of Open Access Journals (Sweden)

    Moraes M.F.D.

    1997-01-01

    Full Text Available We describe a low-cost, high quality device capable of monitoring indirect activity by detecting touch-release events on a conducting surface, i.e., the animal's cage cover. In addition to the detecting sensor itself, the system includes an IBM PC interface for prompt data storage. The hardware/software design, while serving for other purposes, is used to record the circadian activity rhythm pattern of rats with time in an automated computerized fashion using minimal cost computer equipment (IBM PC XT). Once the sensor detects a touch-release action of the rat in the upper portion of the cage, the interface sends a command to the PC which records the time (hours-minutes-seconds) when the activity occurred. As a result, the computer builds up several files (one per detector/sensor) containing a time list of all recorded events. Data can be visualized in terms of actograms, indicating the number of detections per hour, and analyzed by mathematical tools such as Fast Fourier Transform (FFT) or cosinor. In order to demonstrate method validation, an experiment was conducted on 8 Wistar rats under 12/12-h light/dark cycle conditions (lights on at 7:00 a.m.). Results show a biological validation of the method since it detected the presence of circadian activity rhythm patterns in the behavior of the rats.
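
    As a rough illustration of the data handling described above (the timestamps are invented and this is not the original IBM PC software), the sketch below reduces a list of timestamped touch-release detections to the hourly counts that would feed an actogram or an FFT.

        # Illustrative reduction of timestamped touch-release events to hourly counts.
        from collections import Counter
        from datetime import datetime

        events = [                      # hypothetical detections from one sensor
            datetime(1997, 5, 1, 7, 12, 3),
            datetime(1997, 5, 1, 7, 45, 51),
            datetime(1997, 5, 1, 20, 5, 17),
        ]

        counts = Counter(e.hour for e in events)        # detections per hour of day
        actogram = [counts.get(h, 0) for h in range(24)]
        print(actogram)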

  2. Automatic testing system design and data analysis of permafrost temperature in Qinghai-Tibet Railway

    Institute of Scientific and Technical Information of China (English)

    尚迎春; 齐红元

    2008-01-01

    To address the influence of permafrost temperature on the safety of the Qinghai-Tibet Railway and the need for an on-line testing system, and drawing on permafrost studies both in China and worldwide, an automatic permafrost temperature testing system consisting of a master computer and several slave computers was designed. By choosing high-precision thermistors as temperature sensors, designing and positioning the depth and interval of the testing sections, having the slave computers measure, store and transmit permafrost temperature data on schedule, and having the master computer receive, process and analyze the collected data, the change of the permafrost temperature can be described and analyzed, which provides information for permafrost railway engineering design. Moreover, taking permafrost temperature testing in one section of the Qinghai-Tibet Railway as an instance, the collected temperature data were analyzed, the behavior of the permafrost beneath the railway was depicted, and a BP model was set up to predict the permafrost characteristics. This testing system will provide timely information about changes in the permafrost to support safe operation of the Qinghai-Tibet Railway.

  3. Automatic detection of alpine rockslides in continuous seismic data using hidden Markov models

    Science.gov (United States)

    Dammeier, Franziska; Moore, Jeffrey R.; Hammer, Conny; Haslinger, Florian; Loew, Simon

    2016-02-01

    Data from continuously recording permanent seismic networks can contain information about rockslide occurrence and timing complementary to eyewitness observations and thus aid in construction of robust event catalogs. However, detecting infrequent rockslide signals within large volumes of continuous seismic waveform data remains challenging and often requires demanding manual intervention. We adapted an automatic classification method using hidden Markov models to detect rockslide signals in seismic data from two stations in central Switzerland. We first processed 21 known rockslides, with event volumes spanning 3 orders of magnitude and station event distances varying by 1 order of magnitude, which resulted in 13 and 19 successfully classified events at the two stations. Retraining the models to incorporate seismic noise from the day of the event improved the respective results to 16 and 19 successful classifications. The missed events generally had low signal-to-noise ratio and small to medium volumes. We then processed nearly 14 years of continuous seismic data from the same two stations to detect previously unknown events. After postprocessing, we classified 30 new events as rockslides, of which we could verify three through independent observation. In particular, the largest new event, with estimated volume of 500,000 m3, was not generally known within the Swiss landslide community, highlighting the importance of regional seismic data analysis even in densely populated mountainous regions. Our method can be easily implemented as part of existing earthquake monitoring systems, and with an average event detection rate of about two per month, manual verification would not significantly increase operational workload.

  4. Automatic perceptual simulation of first language meanings during second language sentence processing in bilinguals.

    Science.gov (United States)

    Vukovic, Nikola; Williams, John N

    2014-01-01

    Research supports the claim that, when understanding language, people perform mental simulation using those parts of the brain which support sensation, action, and emotion. A major criticism of the findings quoted as evidence for embodied simulation, however, is that they could be a result of conscious image generation strategies. Here we exploit the well-known fact that bilinguals routinely and automatically activate both their languages during comprehension to test whether this automatic process is, in turn, modulated by embodied simulatory processes. Dutch participants heard English sentences containing interlingual homophones and implying specific distance relations, and had to subsequently respond to pictures of objects matching or mismatching this implied distance. Participants were significantly slower to reject critical items when their perceptual features matched said distance relationship. These results suggest that bilinguals not only activate task-irrelevant meanings of interlingual homophones, but also automatically simulate these meanings in a detailed perceptual fashion. Our study supports the claim that embodied simulation is not due to participants' conscious strategies, but is an automatic component of meaning construction.

  5. Gemini Planet Imager Calibrations, Pipeline Updates, and Campaign Data Processing

    Science.gov (United States)

    Perrin, Marshall D.; Follette, Katherine B.; Millar-Blanchaer, Max; Wang, Jason; Wolff, Schulyer; Hung, Li-Wei; Arriaga, Pauline; Savransky, Dmitry; Bailey, Vanessa P.; Bruzzone, Sebastian; Chilcote, Jeffrey K.; De Rosa, Robert J.; Draper, Zachary; Fitzgerald, Michael P.; Greenbaum, Alexandra; Ingraham, Patrick; Konopacky, Quinn M.; Macintosh, Bruce; Marchis, Franck; Marois, Christian; Maire, Jerome; Nielsen, Eric L.; Rajan, Abhijith; Rameau, Julien; Rantakyro, Fredrik; Ruffio, Jean-Baptise; Tran, Debby; Ward-Duong, Kimberly; Zalesky, Joe; GPIES Team

    2017-01-01

    In support of GPI imaging and spectroscopy of exoplanets, polarimetry of disks, and the ongoing Exoplanet Survey we continue to refine calibrations, improve data reduction methods, and develop other enhancements to the data pipeline. We summarize here the latest updates to the open-source GPI Data Reduction Pipeline, including recent improvements to spectroscopic and photometric calibrations and to polarimetric data processing. For the GPI Exoplanet Survey we have incorporated the GPI Data Pipeline into a larger campaign data system that provides automatic data processing including rapid PSF subtraction and contrast measurements in real time during observations and fully automated PSF subtractions using several state-of-the-art algorithms shortly after each observation completes.

  6. Methods for automatic cloud classification from MODIS data

    Science.gov (United States)

    Astafurov, V. G.; Kuriyanovich, K. V.; Skorokhodov, A. V.

    2016-12-01

    In this paper, different texture-analysis methods are used to describe different cloud types in MODIS satellite images. A universal technique is suggested for the formation of efficient sets of textural features using the algorithm of truncated scanning of the features for different classifiers based on neural networks and cluster-analysis methods. Efficient sets of textural features are given for the considered classifiers; the cloud-image classification results are discussed. The characteristics of the classification methods used in this work are described: the probabilistic neural network, K-nearest neighbors, self-organizing Kohonen network, fuzzy C-means, and density clustering algorithm methods. It is shown that the algorithm based on a probabilistic neural network is the most efficient. It provides for the best classification reliability for 25 cloud types and allows the recognition of 11 cloud types with a probability greater than 0.7. As an example, the cloud classification results are given for the Tomsk region. The classifications were carried out using full-size satellite cloud images and different methods. The results agree with each other and agree well with the observational data from ground-based weather stations.

  7. Automatic cross-talk removal from multi-channel data

    CERN Document Server

    Allen, B; Ottewill, A; Allen, Bruce; Hua, Wensheng; Ottewill, Adrian

    1999-01-01

    A technique is described for removing interference from a signal of interest ("channel 1") which is one of a set of N time-domain instrumental signals ("channels 1 to N"). We assume that channel 1 is a linear combination of "true" signal plus noise, and that the "true" signal is not correlated with the noise. We also assume that part of this noise is produced, in a poorly-understood way, by the environment, and that the environment is monitored by channels 2 to N. Finally, we assume that the contribution of channel n to channel 1 is described by an (unknown!) linear transfer function R_n(t-t'). Our technique estimates the R_i and provides a way to subtract the environmental contamination from channel 1, giving an estimate of the "true" signal which minimizes its variance. It also provides some insights into how the environment is contaminating the signal of interest. The method is illustrated with data from a prototype interferometric gravitational-wave detector, in which the channel of interest (differential...
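
    A simplified sketch of the subtraction idea, under the strong assumption that the transfer functions R_n can be approximated by instantaneous gains rather than full convolution kernels (the data are synthetic, not detector data): channel 1 is regressed on the monitor channels and the predicted contamination is subtracted, which minimizes the residual variance.

        # Least-squares removal of environmental contamination (instantaneous-gain
        # approximation of the transfer functions; synthetic data for illustration).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 10000
        env = rng.normal(size=(n, 3))                 # channels 2..N (monitors)
        true_signal = np.sin(np.linspace(0, 50, n))   # signal of interest
        channel1 = true_signal + env @ np.array([0.5, -0.2, 0.1])

        coeffs, *_ = np.linalg.lstsq(env, channel1, rcond=None)
        cleaned = channel1 - env @ coeffs             # estimate of the "true" signal
        print(np.var(channel1 - true_signal), np.var(cleaned - true_signal))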

  8. Automatic estimation of excavation volume from laser mobile mapping data for mountain road widening

    NARCIS (Netherlands)

    Wang, J.; González-Jorge, H.; Lindenbergh, R.; Arias-Sánchez, P.; Menenti, M.

    2013-01-01

    Roads play an indispensable role as part of the infrastructure of society. In recent years, society has witnessed the rapid development of laser mobile mapping systems (LMMS) which, at high measurement rates, acquire dense and accurate point cloud data. This paper presents a way to automatically est

  9. Energy balance of a glacier surface: analysis of Automatic Weather Station data from the Morteratschgletscher, Switzerland

    NARCIS (Netherlands)

    Oerlemans, J.; Klok, E.J.

    2002-01-01

    We describe and analyze a complete 1-yr data set from an automatic weather station (AWS) located on the snout of the Morteratschgletscher, Switzerland. The AWS stands freely on the glacier surface and measures pressure, windspeed, wind direction, air temperature and humidity, incoming and reflected

  10. Oscillatory brain dynamics associated with the automatic processing of emotion in words.

    Science.gov (United States)

    Wang, Lin; Bastiaansen, Marcel

    2014-10-01

    This study examines the automaticity of processing the emotional aspects of words, and characterizes the oscillatory brain dynamics that accompany this automatic processing. Participants read emotionally negative, neutral and positive nouns while performing a color detection task in which only perceptual-level analysis was required. Event-related potentials and time frequency representations were computed from the concurrently measured EEG. Negative words elicited a larger P2 and a larger late positivity than positive and neutral words, indicating deeper semantic/evaluative processing of negative words. In addition, sustained alpha power suppressions were found for the emotional compared to neutral words, in the time range from 500 to 1000ms post-stimulus. These results suggest that sustained attention was allocated to the emotional words, whereas the attention allocated to the neutral words was released after an initial analysis. This seems to hold even when the emotional content of the words is task-irrelevant.

  11. SDPG: Spatial Data Processing Grid

    Institute of Scientific and Technical Information of China (English)

    XIAO Nong(肖侬); FU Wei(付伟)

    2003-01-01

    Spatial applications will gain high complexity as the volume of spatial data increases rapidly. A suitable data processing and computing infrastructure for spatial applications needs to be established. Over the past decade, grid has become a powerful computing environment for data intensive and computing intensive applications. Integrating grid computing with spatial data processing technology, the authors designed a spatial data processing grid (called SDPG) to address the related problems. Requirements of spatial applications are examined and the architecture of SDPG is described in this paper. Key technologies for implementing SDPG are discussed with emphasis.

  12. UNCERTAIN TRAINING DATA EDITION FOR AUTOMATIC OBJECT-BASED CHANGE MAP EXTRACTION

    Directory of Open Access Journals (Sweden)

    S. Hajahmadi

    2013-09-01

    Full Text Available Due to the rapid transformation of societies and the consequent growth of cities, it is necessary to study these changes in order to achieve better control and management of urban areas and assist decision-makers. Change detection involves the ability to quantify temporal effects using multi-temporal data sets. The available maps of the study area are one of the most important sources for this purpose. Although old databases and maps are a great resource, the training data extracted from them are likely to contain errors, which affect the classification procedure; as a result, editing of the training samples is essential. Due to the urban nature of the area studied and the problems this causes for pixel-based methods, object-based classification is applied. To this end, the image is segmented into 4 scale levels using a multi-resolution segmentation procedure. After obtaining the segments at the required levels, training samples are extracted automatically using the existing old map. Because the map is old, these samples are uncertain and contain erroneous data. To handle this issue, an editing process based on the k-nearest-neighbour and k-means algorithms is proposed. Next, the image is classified in a multi-resolution object-based manner and the effects of training sample refinement are evaluated. As a final step, this classified image is compared with the existing map and the changed areas are detected.
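
    A schematic sketch of the training-sample editing step (the features, labels and neighbourhood size are invented, and scikit-learn is used here as a convenient stand-in, not the authors' implementation): samples whose label disagrees with the majority of their k nearest neighbours are dropped before classification.

        # k-NN based editing of uncertain training samples (illustrative only).
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        X = np.random.rand(200, 4)                    # segment features (hypothetical)
        y = np.random.randint(0, 3, size=200)         # labels taken from the old map

        knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
        agrees = knn.predict(X) == y                  # neighbourhood-consistent labels
        X_clean, y_clean = X[agrees], y[agrees]       # edited training set
        print(len(X), "->", len(X_clean), "samples kept")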

  13. Automatic detection of zebra crossings from mobile LiDAR data

    Science.gov (United States)

    Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.

    2015-07-01

    An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested to be applied for road management purposes. The algorithm consists of several subsequent processes starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossing using the Standard Hough Transform and logical constrains. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painting area from the rest of the pavement, median filtering to avoid noisy points, and mathematical morphology to fill the gaps between the pixels in the border of white marks. Once the road marking is detected, its position is calculated. This information is valuable for inventorying purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. That test showed a completeness of 83%. Non-detected marks mainly come from painting deterioration of the zebra crossing or by occlusions in the point cloud produced by other vehicles on the road.
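
    A compressed sketch of the image-processing chain described above, using OpenCV on an intensity raster; the file name, thresholds and kernel size are placeholders rather than the authors' values.

        # Binarize, denoise, close gaps in the marks and run a standard Hough transform.
        import cv2
        import numpy as np

        img = cv2.imread("intensity_raster.png", cv2.IMREAD_GRAYSCALE)   # placeholder
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        binary = cv2.medianBlur(binary, 5)                     # remove noisy points
        kernel = np.ones((5, 5), np.uint8)
        binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill mark borders

        lines = cv2.HoughLines(binary, 1, np.pi / 180, 150)    # standard Hough transform
        print(0 if lines is None else len(lines), "candidate stripe lines")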

  14. Adaptive automatic data analysis in full-field fringe-pattern-based optical metrology

    Science.gov (United States)

    Trusiak, Maciej; Patorski, Krzysztof; Sluzewski, Lukasz; Pokorski, Krzysztof; Sunderland, Zofia

    2016-12-01

    Fringe pattern processing and analysis is an important task of full-field optical measurement techniques like interferometry, digital holography, structural illumination and moiré. In this contribution we present several adaptive automatic data analysis solutions based on the notion of Hilbert-Huang transform for measurand retrieval via fringe pattern phase and amplitude demodulation. The Hilbert-Huang transform consists of 2D empirical mode decomposition algorithm and Hilbert spiral transform analysis. Empirical mode decomposition adaptively dissects a meaningful number of same-scale subimages from the analyzed pattern - it is a data-driven method. Appropriately managing this set of unique subimages results in a very powerful fringe pre-filtering tool. Phase/amplitude demodulation is performed using Hilbert spiral transform aided by the local fringe orientation estimator. We describe several optical measurement techniques for technical and biological objects characterization basing on the especially tailored Hilbert-Huang algorithm modifications for fringe pattern denoising, detrending and amplitude/phase demodulation.

  15. AMPERE Science Data Reduction and Processing

    Science.gov (United States)

    Korth, H.; Dyrud, L.; Anderson, B.; Waters, C. L.; Barnes, R. J.

    2010-12-01

    The Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) uses the constellation of Iridium Communications satellites in 780-km-altitude, circular, near-polar orbits to monitor the electro-dynamic coupling of the ionosphere to the surrounding space environment in real time. The constellation consists of 66 satellites plus on-orbit spares, and each satellite carries a magnetometer for attitude determination. The magnetometer data are continuously sent from Iridium Satellite Network Operations Center to the AMPERE Science Data Center, where they are processed to extract the magnetic perturbation signatures associated with the Birkeland currents. This is accomplished by first merging real-time telemetry packets from each satellite into time-ordered sets of records, formatting and compiling a database. Subsequent processing automatically evaluates baselines, inter-calibrates magnetic field data between satellites, and quantifies the magnetic field residuals with the goal to reduce errors to the 30-nT digitization resolution of the magnetometers. The magnetic field residuals are then used to rank the quality of the data from the individual satellites and weight the data in subsequent science processing. Because magnetic fields generated by the Birkeland currents represent typically less than one percent of the total magnetic field, numerous challenges must be overcome to derive reliable magnetic perturbation signals. For example, corrections to the IGRF magnetic field model must be applied and adverse effects due to missing data must be mitigated. In the final processing step the Birkeland currents are derived by applying Ampere's law to the spherical harmonic fit of the perturbation data. We present the processing methodology, discuss the sensitivity of the Birkeland currents on the accuracy of the derived magnetic perturbations, and show a preliminary analysis of the 3-5 August 2010 geomagnetic storm.

  16. NMRFx Processor: a cross-platform NMR data processing program.

    Science.gov (United States)

    Norris, Michael; Fetler, Bayard; Marchant, Jan; Johnson, Bruce A

    2016-08-01

    NMRFx Processor is a new program for the processing of NMR data. Written in the Java programming language, NMRFx Processor is a cross-platform application and runs on Linux, Mac OS X and Windows operating systems. The application can be run in both a graphical user interface (GUI) mode and from the command line. Processing scripts are written in the Python programming language and executed so that the low-level Java commands are automatically run in parallel on computers with multiple cores or CPUs. Processing scripts can be generated automatically from the parameters of NMR experiments or interactively constructed in the GUI. A wide variety of processing operations are provided, including methods for processing of non-uniformly sampled datasets using iterative soft thresholding. The interactive GUI also enables the use of the program as an educational tool for teaching basic and advanced techniques in NMR data analysis.

  17. Computer processing of tomography data

    OpenAIRE

    Konečný, Jan

    2011-01-01

    Computer processing of tomography data Tomographs are one of the most important diagnostic devices, which are used in every hospital nowadays; they have already been so for a considerable period of time. The different types of tomographs and the processing of tomographic data and imaging of these data are the subject of this thesis. I have described the four most common types of tomography: X-ray Computed Tomography, Magnetic Resonance Imaging, Positron Emission Tomography and Single Photon E...

  18. Experimental Data Processing. Part 2

    Directory of Open Access Journals (Sweden)

    Wilhelm LAURENZI

    2011-03-01

    Full Text Available This paper represents the second part of a study regarding the processing of experimental monofactorial data, and it presents the original program developed by the author for processing experimental data. Using established methods and relations, this program allows establishing the number of samples, generating the experimental plan, entering and saving the measured data, identifying the data corrupted by aberrant errors, verifying the randomness, verifying the normality of data repartition, calculating the main statistical parameters and exporting the experimental data to Excel or to other programs for statistical data processing.
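
    A minimal sketch, not the author's program, of the kinds of checks listed above using NumPy/SciPy: screen aberrant values with a simple z-score rule, test the normality of the repartition, and compute the main statistical parameters (the measurement values are made up).

        # Outlier screening, normality check and summary statistics (illustrative).
        import numpy as np
        from scipy import stats

        data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 14.7, 9.9])  # made-up run

        z = np.abs(stats.zscore(data))
        clean = data[z < 2.5]                     # crude screening for aberrant errors
        w_stat, p_norm = stats.shapiro(clean)     # normality of the data repartition
        print(f"mean={clean.mean():.3f} std={clean.std(ddof=1):.3f} "
              f"normality p={p_norm:.3f}, removed {len(data) - len(clean)} values")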

  19. Process Mining Online Assessment Data

    Science.gov (United States)

    Pechenizkiy, Mykola; Trcka, Nikola; Vasilyeva, Ekaterina; van der Aalst, Wil; De Bra, Paul

    2009-01-01

    Traditional data mining techniques have been extensively applied to find interesting patterns, build descriptive and predictive models from large volumes of data accumulated through the use of different information systems. The results of data mining can be used for getting a better understanding of the underlying educational processes, for…

  20. Graphical Language for Data Processing

    Science.gov (United States)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as LIDAR, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
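
    The record describes an internal graph representation that is traversed and interpreted node by node. The toy sketch below illustrates that execution model in Python; the node functions are invented, and the real system is built on .NET classes rather than this code.

        # Toy interpreter for a data-flow graph: each node consumes the outputs of the
        # nodes wired to its inputs (illustrative, not the described .NET system).
        class Node:
            def __init__(self, func, inputs=()):
                self.func, self.inputs = func, list(inputs)

            def run(self, cache):
                if self not in cache:                       # interpret each node once
                    args = [n.run(cache) for n in self.inputs]
                    cache[self] = self.func(*args)
                return cache[self]

        source = Node(lambda: [1.0, 2.0, 5.0, 3.0])         # e.g. a point-cloud loader
        scaled = Node(lambda pts: [2 * p for p in pts], [source])
        total = Node(lambda pts: sum(pts), [scaled])
        print(total.run({}))                                # -> 22.0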

  1. Automatic Clustering of Flow Cytometry Data with Density-Based Merging

    Directory of Open Access Journals (Sweden)

    Guenther Walther

    2009-01-01

    made this technology ubiquitous and indispensable in the clinical and laboratory setting. A current limit to the potential of this technology is the lack of automated tools for analyzing the resulting data. We describe methodology and software to automatically identify cell populations in flow cytometry data. Our approach advances the paradigm of manually gating sequential two-dimensional projections of the data to a procedure that automatically produces gates based on statistical theory. Our approach is nonparametric and can reproduce nonconvex subpopulations that are known to occur in flow cytometry samples, but which cannot be produced with current parametric model-based approaches. We illustrate the methodology with a sample of mouse spleen and peritoneal cavity cells.
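
    The described approach is a nonparametric, density-based merging procedure. As a plain stand-in (not the authors' algorithm), the sketch below uses DBSCAN on synthetic two-channel events to show how density-based clustering can recover populations without a parametric model.

        # Density-based clustering as a stand-in for automated gating (illustrative).
        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(1)
        pop_a = rng.normal([2, 2], 0.3, size=(500, 2))     # two synthetic populations
        pop_b = rng.normal([5, 4], 0.5, size=(300, 2))     # in a 2-D scatter channel
        events = np.vstack([pop_a, pop_b])

        labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(events)
        print({int(k): int((labels == k).sum()) for k in set(labels)})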

  2. Process acceptance and adjustment techniques for Swiss automatic screw machine parts. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Robb, J.M.

    1976-01-01

    Product tolerance requirements for small, cylindrical, piece parts produced on swiss automatic screw machines have progressed to the reliability limits of inspection equipment. The miniature size, configuration, and tolerance requirements (plus or minus 0.0001 in.) (0.00254 mm) of these parts preclude the use of screening techniques to accept product or adjust processes during setup and production runs; therefore, existing means of product acceptance and process adjustment must be refined or new techniques must be developed. The purpose of this endeavor has been to determine benefits gained through the implementation of a process acceptance technique (PAT) to swiss automatic screw machine processes. PAT is a statistical approach developed for the purpose of accepting product and centering processes for parts produced by selected, controlled processes. Through this endeavor a determination has been made of the conditions under which PAT can benefit a controlled process and some specific types of screw machine processes upon which PAT could be applied. However, it was also determined that PAT, if used indiscriminately, may become a record keeping burden when applied to more than one dimension at a given machining operation. (auth)

  3. Modeling, Learning, and Processing of Text Technological Data Structures

    CERN Document Server

    Kühnberger, Kai-Uwe; Lobin, Henning; Lüngen, Harald; Storrer, Angelika; Witt, Andreas

    2012-01-01

    Researchers in many disciplines have been concerned with modeling textual data in order to account for texts as the primary information unit of written communication. The book “Modelling, Learning and Processing of Text-Technological Data Structures” deals with this challenging information unit. It focuses on theoretical foundations of representing natural language texts as well as on concrete operations of automatic text processing. Following this integrated approach, the present volume includes contributions to a wide range of topics in the context of processing of textual data. This relates to the learning of ontologies from natural language texts, the annotation and automatic parsing of texts as well as the detection and tracking of topics in texts and hypertexts. In this way, the book brings together a wide range of approaches to procedural aspects of text technology as an emerging scientific discipline.

  4. REAL TIME DATA PROCESSING FRAMEWORKS

    Directory of Open Access Journals (Sweden)

    Yash Sakaria

    2015-09-01

    Full Text Available On a business level, everyone wants to get hold of the business value and other organizational advantages that big data has to offer. Analytics has arisen as the primary path to business value from big data. Hadoop is not just a storage platform for big data; it’s also a computational and processing platform for business analytics. Hadoop is, however, unsuccessful in fulfilling business requirements when it comes to live data streaming. The initial architecture of Apache Hadoop did not solve the problem of live stream data mining. In summary, the traditional approach of equating big data with Hadoop is misleading; focus needs to be given to business value as well. Data warehousing, Hadoop and stream processing complement each other very well. In this paper, we review a few frameworks and products which support real-time data streaming by providing modifications to Hadoop.

  5. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

    Full Text Available Taking architectural plans as the data source, we propose a method which can automatically generate indoor map spatial data. Firstly, referring to the spatial data demands of indoor maps, we analyzed the basic characteristics of architectural plans and introduced the concepts of wall segment, adjoining node and adjoining wall segment, based on which the basic workflow for automatic generation of indoor map spatial data was established. Then, according to the adjoining relation between wall lines at intersections with columns, we constructed a method to repair wall connectivity around columns. Using gradual expansion and graphic reasoning to judge the local feature type of the wall symbols on both sides of a door or window, and by updating the enclosing rectangle of the door or window, we developed a method to repair wall connectivity around doors and windows and a method to transform doors and windows into indoor map point features. Finally, on the basis of the geometric relation between the median lines of adjoining wall segments, a wall center-line extraction algorithm is presented. Taking the architectural plan of an exhibition hall as an example, experiments show that the proposed methods cope well with various complex situations and extract indoor map spatial data effectively.

  6. REAL TIME DATA PROCESSING FRAMEWORKS

    OpenAIRE

    Yash Sakaria; Chetashri Bhadane

    2015-01-01

    On a business level, everyone wants to get hold of the business value and other organizational advantages that big data has to offer. Analytics has arisen as the primitive path to business value from big data. Hadoop is not just a storage platform for big data; it’s also a computational and processing platform for business analytics. Hadoop is, however, unsuccessful in fulfilling business requirements when it comes to live data streaming. The initial architecture of Apache Hadoop did not solv...

  7. Reliability Engineering for ATLAS Petascale Data Processing on the Grid

    CERN Document Server

    Golubkov, D V; The ATLAS collaboration; Vaniachine, A V

    2012-01-01

    The ATLAS detector is in its third year of continuous LHC running taking data for physics analysis. A starting point for ATLAS physics analysis is reconstruction of the raw data. First-pass processing takes place shortly after data taking, followed later by reprocessing of the raw data with updated software and calibrations to improve the quality of the reconstructed data for physics analysis. Data reprocessing involves a significant commitment of computing resources and is conducted on the Grid. The reconstruction of one petabyte of ATLAS data with 1B collision events from the LHC takes about three million core-hours. Petascale data processing on the Grid involves millions of data processing jobs. At such scales, the reprocessing must handle a continuous stream of failures. Automatic job resubmission recovers transient failures at the cost of CPU time used by the failed jobs. Orchestrating ATLAS data processing applications to ensure efficient usage of tens of thousands of CPU-cores, reliability engineering ...

  8. Automatic registration of imaging mass spectrometry data to the Allen Brain Atlas transcriptome

    Science.gov (United States)

    Abdelmoula, Walid M.; Carreira, Ricardo J.; Shyti, Reinald; Balluff, Benjamin; Tolner, Else; van den Maagdenberg, Arn M. J. M.; Lelieveldt, B. P. F.; McDonnell, Liam; Dijkstra, Jouke

    2014-03-01

    Imaging Mass Spectrometry (IMS) is an emerging molecular imaging technology that provides spatially resolved information on biomolecular structures; each image pixel effectively represents a molecular mass spectrum. By combining the histological images and IMS-images, neuroanatomical structures can be distinguished based on their biomolecular features as opposed to morphological features. The combination of IMS data with spatially resolved gene expression maps of the mouse brain, as provided by the Allen Mouse Brain atlas, would enable comparative studies of spatial metabolic and gene expression patterns in life-sciences research and biomarker discovery. As such, it would be highly desirable to spatially register IMS slices to the Allen Brain Atlas (ABA). In this paper, we propose a multi-step automatic registration pipeline to register ABA histology to IMS- images. Key novelty of the method is the selection of the best reference section from the ABA, based on pre-processed histology sections. First, we extracted a hippocampus-specific geometrical feature from the given experimental histological section to initially localize it among the ABA sections. Then, feature-based linear registration is applied to the initially localized section and its two neighbors in the ABA to select the most similar reference section. A non-rigid registration yields a one-to-one mapping of the experimental IMS slice to the ABA. The pipeline was applied on 6 coronal sections from two mouse brains, showing high anatomical correspondence, demonstrating the feasibility of complementing biomolecule distributions from individual mice with the genome-wide ABA transcriptome.

  9. A Novel OD Estimation Method Based on Automatic Vehicle Identification Data

    Science.gov (United States)

    Sun, Jian; Feng, Yu

    With the development and application of Automatic Vehicle Identification (AVI) technologies, a novel high-resolution OD estimation method is proposed based on AVI detector information. In the first step, detections are divided into 4 categories (Ox + Dy, Ox/Dy + Path(s), Ox/Dy, Path(s)). Then the initial OD matrix is updated using the Ox + Dy sample information, taking AVI detector errors into account. With reference to the particle filter, the link-path relationship data are revised using the information from the last 3 categories based on Bayesian inference, and the possible trajectories and OD are finally determined using a Monte Carlo random process. Finally, according to the current application of video detectors in Shanghai, the North-South expressway was selected as the testbed, which includes 17 OD pairs and 9 AVI detectors. The results show that the calculated average relative error is 12.09% under the constraints that the simulation error is under 15% and the detector error is about 10%. They also show that this method is highly efficient and can fully use partial vehicle trajectories, satisfying the needs of dynamic traffic management applications in practice.

  10. Automatic extraction of faults and fractal analysis from remote sensing data

    Directory of Open Access Journals (Sweden)

    R. Gloaguen

    2007-01-01

    Full Text Available Object-based classification is a promising technique for image classification. Unlike pixel-based methods, which only use the measured radiometric values, the object-based techniques can also use shape and context information of scene textures. These extra degrees of freedom provided by the objects allow the automatic identification of geological structures. In this article, we present an evaluation of object-based classification in the context of extraction of geological faults. Digital elevation models and radar data of an area near Lake Magadi (Kenya) have been processed. We then determine the statistics of the fault populations. The fractal dimensions of the fault populations are similar to fractal dimensions directly measured on remote sensing images of the study area using power spectra (PSD) and variograms. These methods allow unbiased statistics of faults and help us to understand the evolution of the fault systems in extensional domains. Furthermore, the direct analysis of image texture is a good indicator of the fault statistics and allows us to classify the intensity and type of deformation. We propose that extensional fault networks can be modeled by an iterated function system (IFS).
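
    As a compact illustration of estimating a fractal dimension from an extracted fault map, the sketch below applies box counting to a synthetic binary raster; the paper itself works with power spectra and variograms, so this is only a simplified analogue with made-up data.

        # Box-counting estimate of the fractal dimension of a binary fault map.
        import numpy as np

        rng = np.random.default_rng(2)
        fault_map = np.zeros((256, 256), dtype=bool)
        fault_map[128, :] = True                        # toy "fault": a straight trace
        fault_map[rng.integers(0, 256, 200), rng.integers(0, 256, 200)] = True

        sizes = [2, 4, 8, 16, 32]
        counts = []
        for s in sizes:                                 # count boxes containing faults
            view = fault_map.reshape(256 // s, s, 256 // s, s)
            counts.append(view.any(axis=(1, 3)).sum())

        slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
        print("estimated fractal dimension:", round(slope, 2))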

  11. Smooth pursuit detection in binocular eye-tracking data with automatic video-based performance evaluation.

    Science.gov (United States)

    Larsson, Linnéa; Nyström, Marcus; Ardö, Håkan; Åström, Kalle; Stridh, Martin

    2016-12-01

    An increasing number of researchers record binocular eye-tracking signals from participants viewing moving stimuli, but the majority of event-detection algorithms are monocular and do not consider smooth pursuit movements. The purposes of the present study are to develop an algorithm that discriminates between fixations and smooth pursuit movements in binocular eye-tracking signals and to evaluate its performance using an automated video-based strategy. The proposed algorithm uses a clustering approach that takes both spatial and temporal aspects of the binocular eye-tracking signal into account, and is evaluated using a novel video-based evaluation strategy based on automatically detected moving objects in the video stimuli. The binocular algorithm detects 98% of fixations in image stimuli compared to 95% when only one eye is used, while for video stimuli, both the binocular and monocular algorithms detect around 40% of smooth pursuit movements. The present article shows that using binocular information for discrimination of fixations and smooth pursuit movements is advantageous in static stimuli, without impairing the algorithm's ability to detect smooth pursuit movements in video and moving-dot stimuli. With an automated evaluation strategy, time-consuming manual annotations are avoided and a larger amount of data can be used in the evaluation process.

  12. Automatic Descriptor-Based Co-Registration of Frame Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Maria Vakalopoulou

    2014-04-01

    Full Text Available Frame hyperspectral sensors, in contrast to push-broom or line-scanning ones, produce hyperspectral datasets with, in general, better geometry but with unregistered spectral bands. Being acquired at different instances and due to platform motion and movements (UAVs, aircraft, etc.), every spectral band is displaced and acquired with a different geometry. The automatic and accurate registration of hyperspectral datasets from frame sensors remains a challenge. Powerful local feature descriptors, when computed over the spectrum, fail to extract enough correspondences and successfully complete the registration procedure. To this end, we propose a generic and automated framework which decomposes the problem and enables the efficient computation of a sufficient amount of accurate correspondences over the given spectrum, without using any ancillary data (e.g., from GPS/IMU). First, the spectral bands are divided into spectral groups according to their wavelength. The spectral borders of each group are not strict and their formulation allows certain overlaps. The spectral variance and proximity determine the applicability of every spectral band to act as a reference during the registration procedure. The proposed decomposition allows the descriptor and the robust estimation process to deliver numerous inliers. The search space of possible solutions has been effectively narrowed by sorting and selecting the optimal spectral bands which, in an unsupervised manner, can quickly recover the hypercube’s geometry. The developed approach has been qualitatively and quantitatively evaluated with six different datasets obtained by frame sensors onboard aerial platforms and UAVs. Experimental results appear promising.

  13. Is place-value processing in four-digit numbers fully automatic? Yes, but not always.

    Science.gov (United States)

    García-Orza, Javier; Estudillo, Alejandro J; Calleja, Marina; Rodríguez, José Miguel

    2017-01-30

    Knowing the place-value of digits in multi-digit numbers allows us to identify, understand and distinguish between numbers with the same digits (e.g., 1492 vs. 1942). Research using the size congruency task has shown that the place-value in a string of three zeros and a non-zero digit (e.g., 0090) is processed automatically. In the present study, we explored whether place-value is also automatically activated when more complex numbers (e.g., 2795) are presented. Twenty-five participants were exposed to pairs of four-digit numbers that differed regarding the position of some digits and their physical size. Participants had to decide which of the two numbers was presented in a larger font size. In the congruent condition, the number shown in a bigger font size was numerically larger. In the incongruent condition, the number shown in a smaller font size was numerically larger. Two types of numbers were employed: numbers composed of three zeros and one non-zero digit (e.g., 0040-0400) and numbers composed of four non-zero digits (e.g., 2795-2759). Results showed larger congruency effects in more distant pairs in both type of numbers. Interestingly, this effect was considerably stronger in the strings composed of zeros. These results indicate that place-value coding is partially automatic, as it depends on the perceptual and numerical properties of the numbers to be processed.

  14. Automatic differentiation tools in the dynamic simulation of chemical engineering processes

    Directory of Open Access Journals (Sweden)

    Castro M.C.

    2000-01-01

    Full Text Available Automatic Differentiation is a relatively recent technique for differentiating functions that is applied directly to the source code that computes the function, written in standard programming languages. The technique permits the automation of the differentiation step, which is crucial for dynamic simulation and optimization of processes. The values of the derivatives obtained with AD are exact (to roundoff). The theoretical exactness of AD comes from the fact that it uses the same rules of differentiation as differential calculus, but these rules are applied to an algorithmic specification of the function rather than to a formula. The main purpose of this contribution is to discuss the impact of Automatic Differentiation in the field of dynamic simulation of chemical engineering processes. The influence of the differentiation technique on the behavior of the integration code, the performance of the generated code and the incorporation of AD tools into consistent initialization tools are discussed from the viewpoint of dynamic simulation of typical models in chemical engineering.
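
    A minimal illustration of applying differentiation rules to an algorithmic specification rather than to a formula: a forward-mode dual-number class propagates values and derivatives through ordinary code. This is a generic sketch, not one of the AD tools discussed in the contribution.

        # Forward-mode automatic differentiation with dual numbers (illustrative).
        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der

            def __mul__(self, other):                 # product rule
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val * other.val,
                            self.der * other.val + self.val * other.der)

            def __add__(self, other):                 # sum rule
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val + other.val, self.der + other.der)

        def f(x):                                     # any code built from + and *
            return x * x * x + x * 2.0

        x = Dual(3.0, 1.0)                            # seed derivative dx/dx = 1
        y = f(x)
        print(y.val, y.der)                           # 33.0 and f'(3) = 3*9 + 2 = 29.0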

  15. Manifestation of the synergetic mechanism in the implementation of automatic processing of scientific documents

    Science.gov (United States)

    Chizhacovsky, Valentin; Popescu, Anatol N.; Russu, Vladimir

    2005-02-01

    This article is dedicated to the presence, in natural language and in human cognitive-verbal activity, of internal mechanisms of reproduction and self-regulation which can help solve the problem of automatic document processing. We attribute to these mechanisms the property of being synergetic. This idea was confirmed during the implementation of automatic processing of scientific articles published in the German specialized magazine "Wasserwirtschaft-Wassertechnik-WWT", which we assigned to the sub-subject field "Abwasser" (waste water). By dividing the implementation of our task into many consecutive stages and sub-stages, we were able to create the conditions which favored the manifestation of the needed linguistic synergetic activities.

  16. Automatic calculation of tree diameter from stereoscopic image pairs using digital image processing.

    Science.gov (United States)

    Yi, Faliu; Moon, Inkyu

    2012-06-20

    Automatic operations play an important role in societies by saving time and improving efficiency. In this paper, we apply the digital image processing method to the field of lumbering to automatically calculate tree diameters in order to reduce culler work and enable a third party to verify tree diameters. To calculate the cross-sectional diameter of a tree, the image was first segmented by the marker-controlled watershed transform algorithm based on the hue saturation intensity (HSI) color model. Then, the tree diameter was obtained by measuring the area of every isolated region in the segmented image. Finally, the true diameter was calculated by multiplying the diameter computed in the image and the scale, which was derived from the baseline and disparity of correspondence points from stereoscopic image pairs captured by rectified configuration cameras.
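
    A minimal sketch of the final measurement step, assuming a rectified stereo pair and a single segmented cross-section; the function name and parameters are illustrative, not taken from the cited paper (Python with NumPy).

        import numpy as np

        def diameter_cm(region_mask, baseline_cm, disparity_px):
            """Estimate a cross-section diameter from a segmented region and stereo geometry.

            region_mask  -- boolean image, True inside one isolated cross-section
            baseline_cm  -- distance between the two rectified camera centres
            disparity_px -- disparity of a correspondence point on that cross-section
            """
            area_px = region_mask.sum()               # area of the isolated region, in pixels
            d_px = 2.0 * np.sqrt(area_px / np.pi)     # diameter of the equal-area circle
            # For a rectified pair, depth Z = f*B/disparity, and the object-plane scale
            # is Z/f = B/disparity, so the focal length cancels out of the conversion.
            cm_per_px = baseline_cm / disparity_px
            return d_px * cm_per_px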

  17. Fully automatic characterization and data collection from crystals of biological macromolecules

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Olof; Malbet-Monaco, Stéphanie; Popov, Alexander; Nurizzo, Didier, E-mail: nurizzo@esrf.fr [European Synchrotron Radiation Facility, 71 Avenue des Martyrs, CS 40220, 38043 Grenoble (France); Bowler, Matthew W., E-mail: nurizzo@esrf.fr [European Molecular Biology Laboratory, Grenoble Outstation, 71 Avenue des Martyrs, CS 90181, 38042 Grenoble (France); Université Grenoble Alpes–EMBL–CNRS, Grenoble Outstation, 71 Avenue des Martyrs, CS 90181, 38042 Grenoble (France); European Synchrotron Radiation Facility, 71 Avenue des Martyrs, CS 40220, 38043 Grenoble (France)

    2015-07-31

    A fully automatic system has been developed that performs X-ray centring and characterization of, and data collection from, large numbers of cryocooled crystals without human intervention. Considerable effort is dedicated to evaluating macromolecular crystals at synchrotron sources, even for well established and robust systems. Much of this work is repetitive, and the time spent could be better invested in the interpretation of the results. In order to decrease the need for manual intervention in the most repetitive steps of structural biology projects, initial screening and data collection, a fully automatic system has been developed to mount, locate, centre to the optimal diffraction volume, characterize and, if possible, collect data from multiple cryocooled crystals. Using the capabilities of pixel-array detectors, the system is as fast as a human operator, taking an average of 6 min per sample depending on the sample size and the level of characterization required. Using a fast X-ray-based routine, samples are located and centred systematically at the position of highest diffraction signal and important parameters for sample characterization, such as flux, beam size and crystal volume, are automatically taken into account, ensuring the calculation of optimal data-collection strategies. The system is now in operation at the new ESRF beamline MASSIF-1 and has been used by both industrial and academic users for many different sample types, including crystals of less than 20 µm in the smallest dimension. To date, over 8000 samples have been evaluated on MASSIF-1 without any human intervention.

  18. VACTIV: A graphical dialog based program for an automatic processing of line and band spectra

    Science.gov (United States)

    Zlokazov, V. B.

    2013-05-01

    and estimation of parameters of interest. VACTIV can run on any standard modern laptop. Reasons for the new version: At the time of its creation (1999) VACTIV was seemingly the first attempt to apply the newest programming languages and styles to systems of spectrum analysis. Its goal was to both get a convenient and efficient technique for data processing, and to elaborate the formalism of spectrum analysis in terms of classes, their properties, their methods and events of an object-oriented programming language. Summary of revisions: Compared with ACTIV, VACTIV preserves all the mathematical algorithms, but provides the user with all the benefits of an interface, based on a graphical dialog. It allows him to make a quick intervention in the work of the program; in particular, to carry out the on-line control of the fitting process: depending on the intermediate results and using the visual form of data representation, to change the conditions for the fitting and so achieve the optimum performance, selecting the optimum strategy. To find the best conditions for the fitting one can compress the spectrum, delete the blunders from it, smooth it using a high-frequency spline filter and build the background using a low-frequency spline filter; use not only automatic methods for the blunder deletion, the peak search, the peak model forming and the calibration, but also use manual mouse clicking on the spectrum graph. Restrictions: To enhance the reliability and portability of the program the majority of the most important arrays have a static allocation; all the arrays are allocated with a surplus, and the total pool of the program is restricted only by the size of the computer virtual memory. A spectrum has the static size of 32 K real words. The maximum size of the least-square matrix is 314 (the maximum number of fitted parameters per one analyzed spectrum interval, not for the whole spectrum), from which it follows that the maximum number of peaks in one spectrum

  19. Automatic Identification of Critical Data Items in a Database to Mitigate the Effects of Malicious Insiders

    Science.gov (United States)

    White, Jonathan; Panda, Brajendra

    A major concern for computer system security is the threat from malicious insiders who target and abuse critical data items in the system. In this paper, we propose a solution to enable automatic identification of critical data items in a database by way of data dependency relationships. This identification of critical data items is necessary because insider threats often target mission critical data in order to accomplish malicious tasks. Unfortunately, currently available systems fail to address this problem in a comprehensive manner. It is more difficult for non-experts to identify these critical data items because of their lack of familiarity and due to the fact that data systems are constantly changing. By identifying the critical data items automatically, security engineers will be better prepared to protect what is critical to the mission of the organization and also have the ability to focus their security efforts on these critical data items. We have developed an algorithm that scans the database logs and forms a directed graph showing which items influence a large number of other items and at what frequency this influence occurs. This graph is traversed to reveal the data items which have a large influence throughout the database system by using a novel metric based formula. These items are critical to the system because if they are maliciously altered or stolen, the malicious alterations will spread throughout the system, delaying recovery and causing a much more malignant effect. As these items have significant influence, they are deemed to be critical and worthy of extra security measures. Our proposal is not intended to replace existing intrusion detection systems, but rather is intended to complement current and future technologies. Our proposal has never been performed before, and our experimental results have shown that it is very effective in revealing critical data items automatically.
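
    A compact sketch of the graph-based idea, assuming the log can be reduced to (read set, written item) pairs; the metric shown (reachable items weighted by write frequency) is a simplification for illustration, not a reproduction of the paper's metric (Python).

        from collections import defaultdict

        def critical_items(log):
            """Rank data items by how many other items they (transitively) influence,
            weighted by how often that influence is exercised in the log."""
            edges = defaultdict(lambda: defaultdict(int))        # source -> target -> frequency
            for read_items, written_item in log:
                for src in read_items:
                    edges[src][written_item] += 1

            def reachable(src):
                seen, stack = set(), [src]
                while stack:
                    for dst in edges.get(stack.pop(), {}):
                        if dst not in seen and dst != src:
                            seen.add(dst)
                            stack.append(dst)
                return seen

            score = {src: len(reachable(src)) * sum(edges[src].values()) for src in edges}
            return sorted(score.items(), key=lambda kv: kv[1], reverse=True)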

  20. Visual Execution and Data Visualisation in Natural Language Processing

    OpenAIRE

    Rodgers, Peter; Gaizauskas, Robert; Humphreys, Kevin; Cunningham, Hamish

    1997-01-01

    We describe GGI, a visual system that allows the user to execute an automatically generated data flow graph containing code modules that perform natural language processing tasks. These code modules operate on text documents. GGI has a suite of text visualisation tools that gives the user useful views of the annotation data produced by the modules in the executable graph. GGI forms part of the GATE natural language engineering system.

  1. Big Data in Market Research: Why More Data Does Not Automatically Mean Better Information

    Directory of Open Access Journals (Sweden)

    Bosch Volker

    2016-11-01

    Full Text Available Big data will change market research at its core in the long term because consumption of products and media can be logged electronically more and more, making it measurable on a large scale. Unfortunately, big data datasets are rarely representative, even if they are huge. Smart algorithms are needed to achieve high precision and prediction quality for digital and non-representative approaches. Also, big data can only be processed with complex and therefore error-prone software, which leads to measurement errors that need to be corrected. Another challenge is posed by missing but critical variables. The amount of data can indeed be overwhelming, but it often lacks important information. The missing observations can only be filled in by using statistical data imputation. This requires an additional data source with the additional variables, for example a panel. Linear imputation is a statistical procedure that is anything but trivial. It is an instrument to “transport information,” and the higher the observed data correlates with the data to be imputed, the better it works. It makes structures visible even if the depth of the data is limited.

  2. BRICORK: an automatic machine with image processing for the production of corks

    Science.gov (United States)

    Davies, Roger; Correia, Bento A. B.; Carvalho, Fernando D.; Rodrigues, Fernando C.

    1991-06-01

    The production of cork stoppers from raw cork strip is a manual and labour-intensive process in which a punch-operator quickly inspects all sides of the cork strip for defects and decides where to punch out stoppers. He then positions the strip underneath a rotating tubular cutter and punches out the stoppers one at a time. This procedure is somewhat subjective and prone to error, being dependent on the judgement and accuracy of the operator. This paper describes the machine being developed jointly by Mecanova, Laboratorio Nacional de Engenharia e Tecnologia (LNETI) and Empresa de Investigação e Desenvolvimento de Electrónica SA (EID) which automatically processes cork strip introduced by an unskilled operator. The machine uses both image processing and laser inspection techniques to examine the strip. Defects in the cork are detected and categorised in order to determine regions where stoppers may be punched. The precise locations are then automatically optimised for best usage of the raw material (quantity and quality of stoppers). In order to achieve the required speed of production these image processing techniques may be implemented in hardware. The paper presents results obtained using the vision system software under development together with descriptions of both the image processing and mechanical aspects of the proposed machine.

  3. Low-complexity PDE-based approach for automatic microarray image processing.

    Science.gov (United States)

    Belean, Bogdan; Terebes, Romulus; Bot, Adrian

    2015-02-01

    Microarray image processing is known as a valuable tool for gene expression estimation, a crucial step in understanding biological processes within living organisms. Automation and reliability are open subjects in microarray image processing, where grid alignment and spot segmentation are essential processes that can influence the quality of gene expression information. The paper proposes a novel partial differential equation (PDE)-based approach for fully automatic grid alignment in case of microarray images. Our approach can handle image distortions and performs grid alignment using the vertical and horizontal luminance function profiles. These profiles are evolved using a hyperbolic shock filter PDE and then refined using the autocorrelation function. The results are compared with the ones delivered by state-of-the-art approaches for grid alignment in terms of accuracy and computational complexity. Using the same PDE formalism and curve fitting, automatic spot segmentation is achieved and visual results are presented. Considering microarray images with different spots layouts, reliable results in terms of accuracy and reduced computational complexity are achieved, compared with existing software platforms and state-of-the-art methods for microarray image processing.

  4. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.
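
    A minimal sketch of the table-locating step, using only the Python standard library; the class and the example row are illustrative and do not represent the tool's actual algorithms.

        from html.parser import HTMLParser

        class TableExtractor(HTMLParser):
            """Collect the cell text of every <table> in an HTML page as rows of strings."""
            def __init__(self):
                super().__init__()
                self.tables, self._row, self._cell, self._in_cell = [], [], [], False
            def handle_starttag(self, tag, attrs):
                if tag == "table":
                    self.tables.append([])
                elif tag == "tr":
                    self._row = []
                elif tag in ("td", "th"):
                    self._in_cell, self._cell = True, []
            def handle_endtag(self, tag):
                if tag in ("td", "th"):
                    self._row.append("".join(self._cell).strip())
                    self._in_cell = False
                elif tag == "tr" and self.tables:
                    self.tables[-1].append(self._row)
            def handle_data(self, data):
                if self._in_cell:
                    self._cell.append(data)

        parser = TableExtractor()
        parser.feed("<table><tr><td>Carp</td><td>12.5</td></tr></table>")
        print(parser.tables)   # [[['Carp', '12.5']]]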

  5. Using Hybrid Decision Tree-Hough Transform Approach For Automatic Bank Check Processing

    Directory of Open Access Journals (Sweden)

    Heba A. Elnemr

    2012-05-01

    Full Text Available One of the first steps in the realization of an automatic system of bank check processing is the automatic classification of checks and extraction of the handwritten area. This paper presents a new hybrid method which couples together the statistical color histogram features, the entropy, the energy and the Hough transform to achieve the automatic classification of checks as well as the segmentation and recognition of the various information on the check. The proposed method relies on two stages. First, a two-step classification algorithm is implemented. In the first step, a decision classification tree is built using the entropy, the energy, the logo location and histogram features of colored bank checks. These features are used to classify checks into several groups. Each group may contain one or more type of checks. Therefore, in the second step the bank logo or bank name is matched against its stored template to identify the correct prototype. Second, the Hough transform is utilized to detect lines in the classified checks. These lines are used as indicators of the bank check fields. A group of experiments is performed showing that the proposed technique is promising as regards classifying the bank checks and extracting the important fields in that check.
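
    A bare-bones sketch of the line-detection step (the standard rho-theta Hough accumulator); the array sizes and the idea of reading off the strongest cells are illustrative assumptions, not details of the cited method (Python with NumPy).

        import numpy as np

        def hough_lines(edge_img, n_theta=180):
            """Accumulate votes in (rho, theta) space for a binary edge image."""
            h, w = edge_img.shape
            diag = int(np.ceil(np.hypot(h, w)))
            thetas = np.deg2rad(np.arange(n_theta))
            acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
            ys, xs = np.nonzero(edge_img)
            for x, y in zip(xs, ys):
                rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
                acc[rhos + diag, np.arange(n_theta)] += 1
            return acc, thetas, diag

        # The strongest accumulator cells correspond to the long guide lines
        # (date, amount, signature) that delimit the fields on the check.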

  6. Performing the processing required for automatically get a PDF/A version of the CERN Library documentation

    CERN Document Server

    Molina Garcia-Retamero, Antonio

    2015-01-01

    The aim of the project was to perform the processing required to automatically obtain a PDF/A version of the CERN Library documentation. For this, it is necessary to extract as much metadata as possible from the source files and inject the required data into the original source files, creating new ones ready to be compiled with all related dependencies. Besides this, I proposed the creation of an HTML version consistent with the PDF and navigable for easy access; I experimented with Natural Language Processing for extracting metadata; and I proposed the injection of the CERN Library documentation into the HTML version of the long write-ups where it is referenced (for instance, when a CERN Library function is referenced in a sample code). Finally, I designed and implemented a Graphical User Interface in order to simplify the process for the user.

  7. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    Science.gov (United States)

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.

    2016-10-01

    The structure of multi-variant physical and mathematical models of a control system is presented, together with its application to the adjustment of the automatic control system (ACS) of production facilities, using a coal processing plant as an example.

  8. Automatic Generation of Data Types for Classification of Deep Web Sources

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error-prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.

  9. Visa: AN Automatic Aware and Visual Aids Mechanism for Improving the Correct Use of Geospatial Data

    Science.gov (United States)

    Hong, J. H.; Su, Y. T.

    2016-06-01

    With the fast growth of internet-based sharing mechanisms and OpenGIS technology, users nowadays enjoy the luxury of quickly locating and accessing a variety of geospatial data for the tasks at hand. While this sharing innovation tremendously expands the possibilities of application and reduces development cost, users nevertheless have to deal with all kinds of "differences" implicitly hidden behind the acquired georesources. We argue that the next generation of GIS-based environments, whether internet-based or not, must have built-in knowledge to automatically and correctly assess the fitness of data use and present the analyzed results to users in an intuitive and meaningful way. The VISA approach proposed in this paper refers to four different types of visual aids that can be respectively used for addressing analyzed results, namely, virtual layer, informative window, symbol transformation and augmented TOC. The VISA-enabled interface works in an automatic-aware fashion, where the standardized metadata serve as the known facts about the selected geospatial resources, algorithms for analyzing the differences in temporality and quality of the geospatial resources were designed, and the transformation of analyzed results into visual aids was automatically executed. It presents a new way of bridging the communication gaps between systems and users. GIS has long been seen as a powerful integration tool, but its achievements would be highly restricted if it failed to provide a friendly and correct working platform.

  10. Automatic Mexico Gulf Oil Spill Detection from Radarsat-2 SAR Satellite Data Using Genetic Algorithm

    Science.gov (United States)

    Marghany, Maged

    2016-10-01

    In this work, a genetic algorithm is exploited for automatic detection of oil spills of small and large size. The route is achieved using arrays of RADARSAT-2 SAR ScanSAR Narrow single beam data obtained in the Gulf of Mexico. The study shows that genetic algorithm has automatically segmented the dark spot patches related to small and large oil spill pixels. This conclusion is confirmed by the receiver-operating characteristic (ROC) curve and ground data which have been documented. The ROC curve indicates that the existence of oil slick footprints can be identified with the area under the curve between the ROC curve and the no-discrimination line of 90%, which is greater than that of other surrounding environmental features. The small oil spill sizes represented 30% of the discriminated oil spill pixels in ROC curve. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills of either small or large size and the ScanSAR Narrow single beam mode serves as an excellent sensor for oil spill patterns detection and surveying in the Gulf of Mexico.

  11. Automatic Mexico Gulf Oil Spill Detection from Radarsat-2 SAR Satellite Data Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Marghany Maged

    2016-10-01

    Full Text Available In this work, a genetic algorithm is exploited for automatic detection of oil spills of small and large size. The route is achieved using arrays of RADARSAT-2 SAR ScanSAR Narrow single beam data obtained in the Gulf of Mexico. The study shows that genetic algorithm has automatically segmented the dark spot patches related to small and large oil spill pixels. This conclusion is confirmed by the receiver-operating characteristic (ROC) curve and ground data which have been documented. The ROC curve indicates that the existence of oil slick footprints can be identified with the area under the curve between the ROC curve and the no-discrimination line of 90%, which is greater than that of other surrounding environmental features. The small oil spill sizes represented 30% of the discriminated oil spill pixels in ROC curve. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills of either small or large size and the ScanSAR Narrow single beam mode serves as an excellent sensor for oil spill patterns detection and surveying in the Gulf of Mexico.

  12. A framework for automatic segmentation in three dimensions of microstructural tomography data

    DEFF Research Database (Denmark)

    Jørgensen, Peter Stanley; Hansen, Karin Vels; Larsen, Rasmus;

    2010-01-01

    Routine use of quantitative three dimensional analysis of material microstructure by, in particular, focused ion beam (FIB) serial sectioning is generally restricted by the time consuming task of manually delineating structures within each image slice or the quality of manual and automatic segmentation schemes. We present here a framework for performing automatic segmentation of complex microstructures using a level set method. The technique is based on numerical approximations to partial differential equations to evolve a 3D surface to capture the phase boundaries. Vector fields derived from the experimentally acquired data are used as the driving forces. The framework performs the segmentation in 3D rather than on a slice by slice basis. It naturally supplies sub-voxel precision of segmented surfaces and allows constraints on the surface curvature to enforce a smooth surface in the segmentation. Two...

  13. Design of a modern automatic control system for the activated sludge process in wastewater treatment

    Institute of Scientific and Technical Information of China (English)

    Alexandros D. Kotzapetros; Panayotis A. Paraskevas; Athanasios S. Stasinakis

    2015-01-01

    The Activated Sludge Process (ASP) exhibits highly nonlinear properties. The design of an automatic control system that is robust against disturbance of inlet wastewater flow rate and has short process settling times is a challenging matter. The proposed control method is an I-P modified controller automatic control system with state variable feedback and control canonical form simulation diagram for the process. A more stable response is achieved with this type of modern control. Settling times of 0.48 days are achieved for the concentration of microorganisms (reference value step increase of 50 mg·L−1) and 0.01 days for the concentration of oxygen (reference value step increase of 0.1 mg·L−1). Fluctuations of concentrations of oxygen and microorganisms after an inlet disturbance of 5 × 103 m3·d−1 are small. Changes in the reference values of oxygen and microorganisms (increases by 10%, 20% and 30%) show satisfactory response of the system in all cases. Changes in the value of inlet wastewater flow rate disturbance (increases by 10%, 25%, 50% and 100%) are stabilized by the control system in short time. Maximum percent overshoot is also taken into consideration in all cases and the largest value is 25%, which is acceptable. The proposed method with an I-P controller is better for disturbance rejection and process settling times compared to the same method using a PI controller. This method can substitute optimal control systems in ASP.
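
    The distinction between the I-P and PI structures mentioned above can be sketched in a few lines; the gains, sampling time and state layout are illustrative assumptions, not values from the paper (Python).

        def ip_controller_step(setpoint, y_meas, state, kp, ki, dt):
            """One sampling step of an I-P controller: the integral term acts on the
            error, while the proportional term acts on the measured output only,
            which softens the reaction to setpoint steps compared to a PI controller."""
            error = setpoint - y_meas
            state["integral"] += ki * error * dt
            u = state["integral"] - kp * y_meas     # a PI controller would use +kp*error here
            return u, state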

  14. Communicating Processes with Data for Supervisory Coordination

    Directory of Open Access Journals (Sweden)

    Jasen Markovski

    2012-08-01

    Full Text Available We employ supervisory controllers to safely coordinate high-level discrete(-event) behavior of distributed components of complex systems. Supervisory controllers observe discrete-event system behavior, make a decision on allowed activities, and communicate the control signals to the involved parties. Models of the supervisory controllers can be automatically synthesized based on formal models of the system components and a formalization of the safe coordination (control) requirements. Based on the obtained models, code generation can be used to implement the supervisory controllers in software, on a PLC, or an embedded (micro)processor. In this article, we develop a process theory with data that supports a model-based systems engineering framework for supervisory coordination. We employ communication to distinguish between the different flows of information, i.e., observation and supervision, whereas we employ data to specify the coordination requirements more compactly, and to increase the expressivity of the framework. To illustrate the framework, we remodel an industrial case study involving coordination of maintenance procedures of a printing process of a high-tech Océ printer.

  15. Processing Terrain Point Cloud Data

    KAUST Repository

    DeVore, Ronald

    2013-01-10

    Terrain point cloud data are typically acquired through some form of Light Detection And Ranging sensing. They form a rich resource that is important in a variety of applications including navigation, line of sight, and terrain visualization. Processing terrain data has not received the attention of other forms of surface reconstruction or of image processing. The goal of terrain data processing is to convert the point cloud into a succinct representation system that is amenable to the various application demands. The present paper presents a platform for terrain processing built on the following principles: (i) measuring distortion in the Hausdorff metric, which we argue is a good match for the application demands, (ii) a multiscale representation based on tree approximation using local polynomial fitting. The basic elements held in the nodes of the tree can be efficiently encoded, transmitted, visualized, and utilized for the various target applications. Several challenges emerge because of the variable resolution of the data, missing data, occlusions, and noise. Techniques for identifying and handling these challenges are developed. © 2013 Society for Industrial and Applied Mathematics.
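
    Since the platform measures distortion in the Hausdorff metric, a brute-force reference implementation of that metric is sketched below; it is for illustration only (real terrain tiles would need a spatial index), and the array layout is an assumption (Python with NumPy).

        import numpy as np

        def hausdorff(A, B):
            """Symmetric Hausdorff distance between two point clouds given as (n, 3)
            and (m, 3) arrays. Brute force: builds the full pairwise distance matrix."""
            d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
            return max(d.min(axis=1).max(), d.min(axis=0).max())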

  16. Study on Rear-end Real-time Data Quality Control Method of Regional Automatic Weather Station

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The aim was to study the rear-end real-time data quality control method of regional automatic weather stations. [Method] The basic content and steps of rear-end real-time data quality control of regional automatic weather stations were introduced. Each element was treated with a systematic quality control procedure. The status of rear-end real-time data from regional meteorological stations in Guangxi was expounded. Combining with relevant elements and linear changes, improvement based on traditiona...

  17. Leveraging Parallel Data Processing Frameworks with Verified Lifting

    Directory of Open Access Journals (Sweden)

    Maaz Bin Safeer Ahmad

    2016-11-01

    Full Text Available Many parallel data frameworks have been proposed in recent years that let sequential programs access parallel processing. To capitalize on the benefits of such frameworks, existing code must often be rewritten to the domain-specific languages that each framework supports. This rewriting–tedious and error-prone–also requires developers to choose the framework that best optimizes performance given a specific workload. This paper describes Casper, a novel compiler that automatically retargets sequential Java code for execution on Hadoop, a parallel data processing framework that implements the MapReduce paradigm. Given a sequential code fragment, Casper uses verified lifting to infer a high-level summary expressed in our program specification language that is then compiled for execution on Hadoop. We demonstrate that Casper automatically translates Java benchmarks into Hadoop. The translated results execute on average 3.3x faster than the sequential implementations and scale better, as well, to larger datasets.

  18. Fast data processing with Spark

    CERN Document Server

    Sankar, Krishna

    2015-01-01

    Fast Data Processing with Spark - Second Edition is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have had problems that were too big to be dealt with on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of either Java, Scala, or Python.

  19. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    Science.gov (United States)

    Wang, Jie; Shen, Yuzhong

    2011-01-01

    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models were generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating 3D high-fidelity road networks, especially those existing in the real world. This paper presents a novel approach that can automatically produce 3D high-fidelity road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules (e.g., cross slope, superelevation, grade) for road design are then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  20. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children. Pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords the opportunity for clinicians to rapidly and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be automatically created using the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols.

  1. Key issues in automatic classification of defects in post-inspection review process of photomasks

    Science.gov (United States)

    Pereira, Mark; Maji, Manabendra; Pai, Ravi R.; B. V. R., Samir; Seshadri, R.; Patil, Pradeepkumar

    2012-11-01

    very small real defects, registering grey level defect images with the layout database, automatically finding out maximum critical dimension (CD) variation for defective patterns (where patterns could have Manhattan as well as all-angle edges), etc. This paper discusses many such key issues and suggests strategies to address some of them based upon our experience while developing the NxADC and evaluating it on production mask defects.

  2. Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents

    CERN Document Server

    Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L

    2008-01-01

    Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...

  3. FPGA based system for automatic cDNA microarray image processing.

    Science.gov (United States)

    Belean, Bogdan; Borda, Monica; Le Gal, Bertrand; Terebes, Romulus

    2012-07-01

    Automation is an open subject in DNA microarray image processing, aiming at reliable gene expression estimation. The paper presents a novel shock filter based approach for automatic microarray grid alignment. The proposed method brings significantly reduced computational complexity compared to state of the art approaches, while similar results in terms of accuracy are achieved. Based on this approach, we also propose an FPGA based system for microarray image analysis that eliminates the shortcomings of existing software platforms: user intervention, increased computational time and cost. Our system includes application-specific architectures which involve algorithm parallelization, aiming at fast and automated cDNA microarray image processing. The proposed automated image processing chain is implemented both on a general purpose processor and using the developed hardware architectures as co-processors in an FPGA based system. The comparative results included in the last section show that an important gain in terms of computational time is obtained using hardware based implementations.

  4. Materials processing by use of a Ti:Sapphire laser with automatically-adjustable pulse duration

    Science.gov (United States)

    Kamata, M.; Imahoko, T.; Ozono, K.; Obara, M.

    We have developed an automatic pulsewidth-adjustable femtosecond Ti:Sapphire laser system that can generate an output of 50 fs-1 ps in duration, and sub-mJ/pulse at a repetition rate of 1 kpps. The automatic pulse compressor enables one to control the pulsewidth in the range of 50 fs-1 ps by use of a personal computer (PC). The compressor can change the distance in-between and the tilt angle of the grating pairs by use of two stepping motors and two piezo-electric transducer (PZT) driven actuators, respectively. Both are controlled by a PC. Therefore, not only control of the pulsewidth, but also of the optical chirp becomes easy. By use of this femtosecond laser system, we fabricated a waveguide in fused quartz. The numerical aperture is chosen to be 0.007 to loosely focus the femtosecond laser. The fabricated waveguides are well controllable by the incident laser pulsewidth. We also demonstrated the ablation processing of hydroxyapatite (Ca10(PO4)6(OH)2), which is a key component of human tooth and human bone for orthopedics and dentistry. With pulsewidth tunable output from 50 fs through 2 ps at 1 kpps, the chemical content of calcium and phosphorus is kept unchanged before and after 50-fs-2-ps laser ablation. We also demonstrated the precise ablation processing of human tooth enamel with the 2 ps Ti:Sapphire laser.

  5. Automatic Gauge Control in Rolling Process Based on Multiple Smith Predictor Models

    Directory of Open Access Journals (Sweden)

    Jiangyun Li

    2014-01-01

    Full Text Available Automatic rolling is a high-speed process which always requires high-speed control and communication capabilities. Meanwhile, it is also a typical complex electromechanical system; distributed control has become the mainstream of computer control systems for rolling mills. Generally, the control system adopts the 2-level control structure—basic automation (Level 1) and process control (Level 2)—to achieve the automatic gauge control. In Level 1, there is always a certain distance between the roll gap of each stand and the thickness testing point, leading to the time delay of gauge control. The Smith predictor is a method to cope with time-delay systems, but practical feedback control based on the traditional Smith predictor cannot get the ideal control result, because the time delay is hard to measure precisely and in some situations it may vary in a certain range. In this paper, based on the adaptive Smith predictor, we employ multiple models to cover the uncertainties of time delay. The optimal model will be selected by the proposed switch mechanism. Simulations show that the proposed multiple Smith model method exhibits excellent performance in improving the control result even for systems with jumping time delay.
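
    The feedback signal that a Smith predictor reconstructs can be sketched as follows; the first-order model, its coefficients and the switching hint in the comment are assumptions made for illustration, not the structure used in the paper (Python).

        def first_order_model(u_seq, a=0.9, b=0.1):
            """Delay-free model y[k] = a*y[k-1] + b*u[k-1], run from a zero initial state."""
            y = 0.0
            for u in u_seq:
                y = a * y + b * u
            return y

        def smith_feedback(u_hist, y_meas, delay_steps):
            """Smith-predictor feedback: the delay-free model output plus the mismatch
            between the plant measurement and the delayed model output. With several
            candidate delays, a switching rule could pick the delay whose mismatch term
            stays smallest, which is the role of the multiple models described above."""
            y_fast = first_order_model(u_hist)
            y_slow = first_order_model(u_hist[:max(0, len(u_hist) - delay_steps)])
            return y_fast + (y_meas - y_slow)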

  6. Automatic rice crop height measurement using a field server and digital image processing.

    Science.gov (United States)

    Sritarapipat, Tanakorn; Rakwatin, Preesan; Kasetkasem, Teerasit

    2014-01-07

    Rice crop height is an important agronomic trait linked to plant type and yield potential. This research developed an automatic image processing technique to detect rice crop height based on images taken by a digital camera attached to a field server. The camera acquires rice paddy images daily at a consistent time of day. The images include the rice plants and a marker bar used to provide a height reference. The rice crop height can be indirectly measured from the images by measuring the height of the marker bar compared to the height of the initial marker bar. Four digital image processing steps are employed to automatically measure the rice crop height: band selection, filtering, thresholding, and height measurement. Band selection is used to remove redundant features. Filtering extracts significant features of the marker bar. The thresholding method is applied to separate objects and boundaries of the marker bar versus other areas. The marker bar is detected and compared with the initial marker bar to measure the rice crop height. Our experiment used a field server with a digital camera to continuously monitor a rice field located in Suphanburi Province, Thailand. The experimental results show that the proposed method measures rice crop height effectively, with no human intervention required.
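
    The four steps can be sketched end to end as below; the chosen band, filter size, threshold and the SciPy dependency are assumptions for illustration, not the parameters used in the study (Python).

        import numpy as np
        from scipy.ndimage import uniform_filter   # assumed available for the filtering step

        def marker_height_px(rgb, threshold=0.5):
            """Pixel height of the reference marker bar in one paddy image, following the
            four steps: band selection, filtering, thresholding, height measurement."""
            band = rgb[..., 2].astype(float) / 255.0    # band selection (blue channel, assumed)
            smooth = uniform_filter(band, size=5)       # filtering suppresses leaf texture
            mask = smooth > threshold                   # thresholding isolates the bright marker
            rows = np.nonzero(mask.any(axis=1))[0]      # image rows that contain marker pixels
            return 0 if rows.size == 0 else int(rows.max() - rows.min() + 1)

        # Crop height then follows from how much of the bar the canopy hides,
        # compared with the bar height measured in the initial image.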

  7. Abnormalities in Automatic Processing of Illness-Related Stimuli in Self-Rated Alexithymia.

    Directory of Open Access Journals (Sweden)

    Laura Brandt

    Full Text Available To investigate abnormalities in automatic information processing related to self- and observer-rated alexithymia, especially with regard to somatization, controlling for confounding variables such as depression and affect. 89 healthy subjects (60% female, aged 19-71 years, M = 32.1). 58 subjects were additionally rated by an observer. Alexithymia (self-rating: TAS-20, observer rating: OAS); automatic information processing (priming task including verbal [illness-related, negative, positive, neutral] and facial [negative, positive, neutral] stimuli); somatoform symptoms (SOMS-7T); confounders: depression (BDI), affect (PANAS). Higher self-reported alexithymia scores were associated with lower reaction times for negative (r = .19, p < .10) and positive (r = .26, p < .05) verbal primes when the target was illness-related. Self-reported alexithymia was correlated with number (r = .42, p < .01) and intensity of current somatoform symptoms (r = .36, p < .01), but unrelated to observer-rated alexithymia (r = .11, p = .42). Results indicate a faster allocation of attentional resources away from task-irrelevant information towards illness-related stimuli in alexithymia. Considering the close relationship between alexithymia and somatization, these findings are compatible with the theoretical view that alexithymics focus strongly on bodily sensations of emotional arousal. A single observer rating (OAS) does not seem to be an adequate alexithymia measure in community samples.

  8. Automatic Prompt System in the Process of Mapping plWordNet on Princeton WordNet

    Directory of Open Access Journals (Sweden)

    Paweł Kędzia

    2015-06-01

    Full Text Available The paper offers a critical evaluation of the power and usefulness of an automatic prompt system based on the extended Relaxation Labelling algorithm in the process of (manual) mapping plWordNet on Princeton WordNet. To this end the results of manual mapping – that is, inter-lingual relations between plWN and PWN synsets – are juxtaposed with the automatic prompts that were generated for the source language synsets to be mapped. We check the number and type of inter-lingual relations introduced on the basis of automatic prompts and the distance of the respective prompt synsets from the actual target language synsets.

  9. Automatic geocoding of high-value targets using structural image analysis and GIS data

    Science.gov (United States)

    Soergel, Uwe; Thoennessen, Ulrich

    1999-12-01

    Geocoding based merely on navigation data and a sensor model is often not possible or precise enough. In these cases an improvement of the preregistration through image-based approaches is a solution. Due to the large amount of data in remote sensing, automatic geocoding methods are necessary. For geocoding purposes appropriate tie points, which are present in image and map, have to be detected and matched. The tie points are the basis of the transformation function. Assigning the tie points is a combinatorial problem depending on the number of tie points. This number can be reduced using structural tie points like corners or crossings of prominent extended targets (e.g. harbors, airfields). Additionally the reliability of the tie points is improved. Our approach extracts structural tie points independently in the image and in the vector map by a model-based image analysis. The vector map is provided by a GIS using the ATKIS data base. The model parameters are extracted from maps or collateral information of the scenario. The two sets of tie points are automatically matched with a Geometric Hashing algorithm. The algorithm was successfully applied to VIS, IR and SAR data.

  10. Automatic Ethical Filtering using Semantic Vectors Creating Normative Tag Cloud from Big Data

    Directory of Open Access Journals (Sweden)

    Ahsan N. Khan

    2015-03-01

    Full Text Available Ethical filtering has been a painful and controversial issue seen by different angles worldwide. Stalwarts for freedom find newer methods to circumvent banned URLs while generative power of the Internet outpaces velocity of censorship. Hence, keeping online content safe from anti-religious and sexually provocative content is a growing issue in conservative countries in Asia and The Middle East. Solutions for online ethical filters are linearly upper bound given computation and big data growth scales. In this scenario, Semantic Vectors are applied as automatic ethical filters to calculate accuracy and efficiency metrics. The results show a normative tag cloud generated with superior performance to industry solutions.

  11. Automatic sleep classification using a data-driven topic model reveals latent sleep states

    DEFF Research Database (Denmark)

    Koch, Henriette; Christensen, Julie Anja Engelhard; Frandsen, Rune

    2014-01-01

    sleep states, this study developed a general and automatic sleep classifier using a data-driven approach. Spectral EEG and EOG measures and eye correlation in 1 s windows were calculated and each sleep epoch was expressed as a mixture of probabilities of latent sleep states by using the topic model....... The model was optimized using 50 subjects and validated on 76 subjects. Results: The optimized sleep model used six topics, and the topic probabilities changed smoothly during transitions. According to the manual scorings, the model scored an overall subject-specific accuracy of 68.3 +/- 7.44 (% mu +/-sigma...

  12. Raster Data Partitioning for Supporting Distributed GIS Processing

    Science.gov (United States)

    Nguyen Thai, B.; Olasz, A.

    2015-08-01

    In the geospatial sector, the big data concept has already had an impact. Several studies apply techniques originally from computer science to GIS processing of huge amounts of geospatial data. In other research studies, geospatial data is treated as if it had always been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, increasing not only the amount of raw data but also its spectral, spatial and temporal resolution. A significant portion of big data is geospatial data, and the size of such data is growing rapidly at least by 20% every year (Dasgupta, 2013). Out of the increasing volume of raw data, produced in different formats and representations and for different purposes, only the wealth of information derived from these data sets represents valuable results. However, computing capability and processing speed face limitations, even when semi-automatic or automatic procedures are applied to complex geospatial data (Kristóf et al., 2014). Lately, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing approaches. Cloud computing, even more, requires appropriate processing algorithms to be distributed in order to handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capabilities to process non-GIS big data. But it is sometimes inconvenient or inefficient to rewrite existing algorithms for the Map-Reduce programming model, and GIS data cannot be partitioned like text-based data by lines or bytes. Hence, we would like to find an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting or with only minor modifications. This paper gives a technical overview of currently available distributed computing environments, as well as GIS (raster) data partitioning, distribution and distributed processing of GIS algorithms
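
    As a minimal illustration of raster partitioning for distribution (not the partitioning scheme proposed in the paper), a 2D raster can be cut into overlapping tiles so that each worker can run an unmodified window-based algorithm on its own tile (Python with NumPy):

        import numpy as np

        def tile_raster(raster, tile_rows, tile_cols, overlap=0):
            """Split a 2D raster into tiles (with optional overlap for window-based
            operators) so each tile can be processed by a separate worker."""
            tiles = []
            h, w = raster.shape
            for r0 in range(0, h, tile_rows):
                for c0 in range(0, w, tile_cols):
                    r1 = min(h, r0 + tile_rows + overlap)
                    c1 = min(w, c0 + tile_cols + overlap)
                    tiles.append(((r0, c0), raster[r0:r1, c0:c1]))
            return tiles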

  13. Long-term abacus training induces automatic processing of abacus numbers in children.

    Science.gov (United States)

    Du, Fenglei; Yao, Yuan; Zhang, Qiong; Chen, Feiyan

    2014-01-01

    Abacus-based mental calculation (AMC) is a unique strategy for arithmetic that is based on the mental abacus. AMC experts can solve calculation problems with extraordinarily fast speed and high accuracy. Previous studies have demonstrated that abacus experts showed superior performance and special neural correlates during numerical tasks. However, most of those studies focused on the perception and cognition of Arabic numbers. It remains unclear how the abacus numbers were perceived. By applying a similar enumeration Stroop task, in which participants are presented with a visual display containing two abacus numbers and asked to compare the numerosity of beads that consisted of the abacus number, in the present study we investigated the automatic processing of the numerical value of abacus numbers in abacus-trained children. The results demonstrated a significant congruity effect in the numerosity comparison task for abacus-trained children, in both reaction time and error rate analysis. These results suggested that the numerical value of abacus numbers was perceived automatically by the abacus-trained children after long-term training.

  14. An algorithm for automatic unfolding of one-dimensional data distributions

    Energy Technology Data Exchange (ETDEWEB)

    Dembinski, Hans P., E-mail: hans.dembinski@kit.edu; Roth, Markus

    2013-11-21

    We discuss a non-parametric algorithm to unfold detector effects from one-dimensional data distributions. Unfolding is performed by fitting a flexible spline model to the data using an unbinned maximum-likelihood method while employing a smooth regularisation that maximises the relative entropy of the solution with respect to an a priori guess. A regularisation weight is picked automatically such that it minimises the mean integrated squared error of the fit. The algorithm scales to large data sets by employing an adaptive binning scheme in regions of high density. An estimate of the uncertainty of the solution is provided and shown to be accurate by studying the frequentist properties of the algorithm in Monte-Carlo simulations. The simulations show that the regularisation bias decreases as the sample size increases.

  15. An algorithm for automatic unfolding of one-dimensional data distributions

    Science.gov (United States)

    Dembinski, Hans P.; Roth, Markus

    2013-11-01

    We discuss a non-parametric algorithm to unfold detector effects from one-dimensional data distributions. Unfolding is performed by fitting a flexible spline model to the data using an unbinned maximum-likelihood method while employing a smooth regularisation that maximises the relative entropy of the solution with respect to an a priori guess. A regularisation weight is picked automatically such that it minimises the mean integrated squared error of the fit. The algorithm scales to large data sets by employing an adaptive binning scheme in regions of high density. An estimate of the uncertainty of the solution is provided and shown to be accurate by studying the frequentist properties of the algorithm in Monte-Carlo simulations. The simulations show that the regularisation bias decreases as the sample size increases.
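
    Schematically, and only as a reading aid (the exact functional forms are those of the paper and are not reproduced here), the fit can be written as an unbinned log-likelihood with an entropy penalty, with spline model f, detector response R, a priori guess f_0 and regularisation weight w chosen to minimise the mean integrated squared error:

        \hat{f} \;=\; \arg\max_{f}\;\Bigg[\; \sum_{i=1}^{N} \ln\!\int R(x_i \mid t)\, f(t)\, \mathrm{d}t \;-\; w \int f(t)\, \ln\frac{f(t)}{f_0(t)}\, \mathrm{d}t \;\Bigg]

    The second term is the negative of the Kullback-Leibler divergence between f and f_0, so maximising the bracket maximises the relative entropy of the solution with respect to the a priori guess, as described above.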

  16. Smart-card-based automatic meal record system intervention tool for analysis using data mining approach.

    Science.gov (United States)

    Zenitani, Satoko; Nishiuchi, Hiromu; Kiuchi, Takahiro

    2010-04-01

    The Smart-card-based Automatic Meal Record system for company cafeterias (AutoMealRecord system) was recently developed and used to monitor employee eating habits. The system could be a unique nutrition assessment tool for automatically monitoring the meal purchases of all employees, although it only focuses on company cafeterias and has never been validated. Before starting an interventional study, we tested the reliability of the data collected by the system using the data mining approach. The AutoMealRecord data were examined to determine if it could predict current obesity. All data used in this study (n = 899) were collected by a major electric company based in Tokyo, which has been operating the AutoMealRecord system for several years. We analyzed dietary patterns by principal component analysis using data from the system and extracted 5 major dietary patterns: healthy, traditional Japanese, Chinese, Japanese noodles, and pasta. The ability to predict current body mass index (BMI) with dietary preference was assessed with multiple linear regression analyses, and in the current study, BMI was positively correlated with male gender, preference for "Japanese noodles," mean energy intake, protein content, and frequency of body measurement at a body measurement booth in the cafeteria. There was a negative correlation with age, dietary fiber, and lunchtime cafeteria use (R² = 0.22). This regression model predicted "would-be obese" participants (BMI ≥ 23) with 68.8% accuracy by leave-one-out cross validation. This shows that there was sufficient predictability of BMI based on data from the AutoMealRecord System. We conclude that the AutoMealRecord system is valuable for further consideration as a health care intervention tool.

  17. A Paper on Automatic Fabrics Fault Processing Using Image Processing Technique In MATLAB

    Directory of Open Access Journals (Sweden)

    R.Thilepa

    2011-02-01

    Full Text Available The main objective of this paper is to elaborate how defective fabric parts can be processed using Matlab with image processing techniques. In developing countries like India, and especially in Tirupur, Tamilnadu, the knitwear capital of the country, this industry has yielded a major income for the country over three decades. The city also employs, either directly or indirectly, more than 3 lakhs of people and has earned an income of almost 12,000 crores per annum for the country over the past three decades [2]. To upgrade this process, when the fabrics are processed in textiles, the faults present on the fabrics can be identified using Matlab with image processing techniques. This image processing is done using Matlab 7.3; for the taken image, noise filtering, histogram and thresholding techniques are applied and the output is obtained in this paper. This research thus implements a textile defect detector with system vision methodology in image processing.

  18. AUTOMATIC EXTRACTION OF BUILDING ROOF PLANES FROM AIRBORNE LIDAR DATA APPLYING AN EXTENDED 3D RANDOMIZED HOUGH TRANSFORM

    OpenAIRE

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-01-01

    This study aims to extract automatically building roof planes from airborne LIDAR data applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetative areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building is performed applying extension...

  19. Automatic detection of referral patients due to retinal pathologies through data mining.

    Science.gov (United States)

    Quellec, Gwenolé; Lamard, Mathieu; Erginay, Ali; Chabouis, Agnès; Massin, Pascale; Cochener, Béatrice; Cazuguel, Guy

    2016-04-01

    With the increased prevalence of retinal pathologies, automating the detection of these pathologies is becoming more and more relevant. In the past few years, many algorithms have been developed for the automated detection of a specific pathology, typically diabetic retinopathy, using eye fundus photography. No matter how good these algorithms are, we believe many clinicians would not use automatic detection tools focusing on a single pathology and ignoring any other pathology present in the patient's retinas. To solve this issue, an algorithm for characterizing the appearance of abnormal retinas, as well as the appearance of the normal ones, is presented. This algorithm does not focus on individual images: it considers examination records consisting of multiple photographs of each retina, together with contextual information about the patient. Specifically, it relies on data mining in order to learn diagnosis rules from characterizations of fundus examination records. The main novelty is that the content of examination records (images and context) is characterized at multiple levels of spatial and lexical granularity: 1) spatial flexibility is ensured by an adaptive decomposition of composite retinal images into a cascade of regions, 2) lexical granularity is ensured by an adaptive decomposition of the feature space into a cascade of visual words. This multigranular representation allows for great flexibility in automatically characterizing normality and abnormality: it is possible to generate diagnosis rules whose precision and generalization ability can be traded off depending on data availability. A variation on usual data mining algorithms, originally designed to mine static data, is proposed so that contextual and visual data at adaptive granularity levels can be mined. This framework was evaluated in e-ophtha, a dataset of 25,702 examination records from the OPHDIAT screening network, as well as in the publicly-available Messidor dataset. It was successfully

  20. Summer Student Work Project Report: SCADA Bridge Tool Development Automatically Capturing Data from SCADA to the Maintenance System

    CERN Document Server

    Alhambra-Moron, Alfonso

    2015-01-01

    The main purpose of this report is to summarize the work project I have been doing at CERN during the last 3 months as a Summer Student. My name is Alfonso Alhambra Morón and the 8th of June 2015 I joined the EN-HE-LM team as a summer student supervised by Damien Lafarge in order to collaborate in the automation of the transfer of meter readings from SCADA to Infor EAM, the computerized maintenance management system at CERN. The main objective of my project was to enable the automatic updates of meters in Infor EAM fetching data from SCADA so as to automatize a process which was done manually before and consumed resources in terms of having to consult the meter physically, import this information to Infor EAM by hand and detecting and correcting the errors that can occur when doing all of this manually. This problem is shared by several other teams at CERN apart from the Lift Maintenance team and for this reason the main target I had when developing my solution was flexibility and scalability so as to make...

  1. Automatic word processing hypnosis and cognitive therapy for psychosis: a case report.

    Science.gov (United States)

    Rusk, Gregory

    2014-07-01

    Hallucinations are often perceived as auditory and visual experiences emanating from outside the mind and it is this belief by patients that is a powerfully convincing factor in maintaining psychotic symptoms and accompanying distress. One of the main tasks of cognitive therapy for psychosis is to help the person recognize that the hallucinations emerge from within their own mind for some meaningful reason. A change in meaning can change a person's affective and behavioral responses to hallucinatory phenomena. Automatic Word Processing (AWP) hypnosis is a novel way to help a person realize that the hallucinations they perceive as external and distressful are really internally generated phenomena often based upon his or her life experiences. The case presented here illustrates how AWP hypnosis helped a 13-year-old girl access the internal material that shaped the form and content of visual and auditory hallucinations and interfered with her social and academic functioning.

  2. Automatic Estimation of Live Coffee Leaf Infection Based on Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Eric Hitimana

    2014-02-01

    Full Text Available Image segmentation is one of the most challenging issues in computer vision applications, and a major difficulty for crop management in agriculture is the lack of appropriate methods for detecting leaf damage for pest treatment. In this paper we propose an automatic method for leaf damage detection and severity estimation of coffee leaves that avoids defoliation. After enhancing the contrast of the original image using LUT-based gamma correction, the image is processed to remove the background, and the extracted leaf is clustered using fuzzy c-means segmentation in the V channel of the YUV color space to maximize detection of all leaf damage; finally, the severity is estimated as the ratio of damaged to normal leaf pixels. The results of each proposed step were compared with current research, and the accuracy is evident in both background removal and damage detection.
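
    As a rough illustration of the pipeline in this abstract (gamma correction, fuzzy c-means clustering of the V channel of YUV, severity as a pixel ratio), the fragment below implements a minimal one-dimensional fuzzy c-means and applies it to a synthetic image. The function names, parameter values and the assumption that the damage cluster has the higher V centre are illustrative, not taken from the paper.

```python
# Minimal sketch, not the published method: gamma-correct, take the V channel
# of YUV, cluster it with a tiny fuzzy c-means, report the damaged-pixel ratio.
import numpy as np

def fuzzy_cmeans_1d(values, c=2, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                          # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ values) / um.sum(axis=1)
        d = np.abs(values[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)
    return centers, u

def leaf_damage_ratio(rgb, gamma=1.5):
    rgb = (rgb / 255.0) ** gamma                # LUT-style gamma correction
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    v = 0.615 * (rgb[..., 0] - y)               # V channel of YUV
    centers, u = fuzzy_cmeans_1d(v.ravel())
    damaged = u[np.argmax(centers)] > 0.5       # assume damage cluster has higher V
    return damaged.sum() / damaged.size

img = np.random.default_rng(1).integers(0, 256, (64, 64, 3)).astype(float)
print(leaf_damage_ratio(img))
```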

  3. SAUNA—a system for automatic sampling, processing, and analysis of radioactive xenon

    Science.gov (United States)

    Ringbom, A.; Larson, T.; Axelsson, A.; Elmgren, K.; Johansson, C.

    2003-08-01

    A system for automatic sampling, processing, and analysis of atmospheric radioxenon has been developed. From an air sample of about 7 m3 collected during 12 h, 0.5 cm3 of xenon is extracted, and the atmospheric activities from the four xenon isotopes 133Xe, 135Xe, 131mXe, and 133mXe are determined with a beta-gamma coincidence technique. The collection is performed using activated charcoal and molecular sieves at ambient temperature. The sample preparation and quantification are performed using preparative gas chromatography. The system was tested under routine conditions for a 5-month period, with average minimum detectable concentrations below 1 mBq/ m3 for all four isotopes.

  4. Choice architectural nudge interventions to promote vegetable consumption based on automatic processes decision-making

    DEFF Research Database (Denmark)

    Skov, Laurits Rohden; Friis Rasmussen, Rasmus; Møller Andersen, Pernille;

    2014-01-01

    Objective: To test the effectiveness of three types of choice architectural nudges to promote vegetable consumption among Danish people. The experiment aims at providing evidence on the influence of automatic processing system in the food choice situation in an all you can eat buffet serving...... of the salad separately to increase choices compared to a pre-mixed salad. Results: A total of 92 people (dropout rate=21 %) partook in the study (60.2% female) with an average age of 26.5. Nudge 1 (N=27) found a significant decrease in total energy consumption due to high decrease in meat consumption (p..., but promoted health by decreasing total energy intake which suggests that visual variety of fruit and greens prompts a healthy-eater subconscious behaviour....

  5. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Full Text Available Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block of developing such an intelligent tool for medical practitioners. Knowing that the appendix is located at the lower organ area below the bottom fascia line, we conduct a series of image processing techniques to find the fascia line correctly. We then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.

  6. Neural dynamics of morphological processing in spoken word comprehension: Laterality and automaticity

    Directory of Open Access Journals (Sweden)

    Caroline M. Whiting

    2013-11-01

    Full Text Available Rapid and automatic processing of grammatical complexity is argued to take place during speech comprehension, engaging a left-lateralised fronto-temporal language network. Here we address how neural activity in these regions is modulated by the grammatical properties of spoken words. We used combined magneto- and electroencephalography (MEG, EEG) to delineate the spatiotemporal patterns of activity that support the recognition of morphologically complex words in English with inflectional (-s) and derivational (-er) affixes (e.g. bakes, baker). The mismatch negativity (MMN), an index of linguistic memory traces elicited in a passive listening paradigm, was used to examine the neural dynamics elicited by morphologically complex words. Results revealed an initial peak 130-180 ms after the deviation point with a major source in left superior temporal cortex. The localisation of this early activation showed a sensitivity to two grammatical properties of the stimuli: 1) the presence of morphological complexity, with affixed words showing increased left-laterality compared to non-affixed words; and 2) the grammatical category, with affixed verbs showing greater left-lateralisation in inferior frontal gyrus compared to affixed nouns (bakes vs. beaks). This automatic brain response was additionally sensitive to semantic coherence (the meaning of the stem vs. the meaning of the whole form) in fronto-temporal regions. These results demonstrate that the spatiotemporal pattern of neural activity in spoken word processing is modulated by the presence of morphological structure, predominantly engaging the left-hemisphere’s fronto-temporal language network, and does not require focused attention on the linguistic input.

  7. Sla-Oriented Semi-Automatic Management Of Data Storage And Applications In Distributed Environments

    Directory of Open Access Journals (Sweden)

    Dariusz Król

    2010-01-01

    Full Text Available In this paper we describe a semi-automatic programming framework for supporting users with managing the deployment of distributed applications along with storing large amounts of data in order to maintain Quality of Service in highly dynamic and distributed environments, e.g., Grid. The Polish national PL-GRID project aims to provide Polish science with both hardware and software infrastructures which will allow scientists to perform complex simulations and in-silico experiments on a scale greater than ever before. We highlight the issues and challenges related to data storage strategies that arise at the analysis stage of user requirements coming from different areas of science. Next we present a solution to the discussed issues along with a description of sample usage scenarios. At the end we provide remarks on the current status of the implementation work and some results from the tests performed.

  8. Automatic spikes detection in seismogram

    Institute of Scientific and Technical Information of China (English)

    王海军; 靳平; 刘贵忠

    2003-01-01

    Data processing for a seismic network is complex and laborious, because a large amount of data is recorded by the network every day, which makes it impossible to process all of these data by manual work. Therefore, seismic data should be processed automatically to produce initial results on event detection and location. Afterwards, these results are reviewed and modified by an analyst. In automatic processing, data quality checking is important. There are three main kinds of problem data that exist in real seismic records: spikes, repeated data and dropouts. A spike is defined as an isolated large-amplitude point; the other two kinds share the feature that the amplitude of the sample points is uniform over an interval. In data quality checking, the first step is to detect and count the problem data in a data segment; if the percentage of problem data exceeds a threshold, the whole data segment is masked and is not processed in later stages.

  9. Simple automatic strategy for background drift correction in chromatographic data analysis.

    Science.gov (United States)

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-01

    Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use.
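
    A compact numpy sketch of the general idea (local minima as baseline anchors, iterative removal of anchors that belong to peaks, linear interpolation of the remaining anchors) follows. The polynomial trend used to flag outlying anchors and all thresholds are assumptions for illustration and do not reproduce the published algorithm.

```python
# Illustrative baseline-drift estimation: keep local minima, iteratively drop
# anchors that ride on chromatographic peaks, interpolate the survivors.
import numpy as np

def estimate_background(signal, n_iter=30, k=2.0):
    x = np.arange(signal.size)
    interior = np.where((signal[1:-1] <= signal[:-2]) &
                        (signal[1:-1] <= signal[2:]))[0] + 1
    idx = np.unique(np.r_[0, interior, signal.size - 1])  # anchors incl. end points
    for _ in range(n_iter):
        trend = np.polyval(np.polyfit(x[idx], signal[idx], 3), x[idx])
        resid = signal[idx] - trend
        keep = resid < k * resid.std()      # anchors far above the trend sit on peaks
        if keep.all() or keep.sum() < 4:
            break
        idx = idx[keep]
    return np.interp(x, x[idx], signal[idx])  # expand baseline to the full axis

t = np.linspace(0.0, 1.0, 500)
noise = 0.01 * np.random.default_rng(0).normal(size=t.size)
chrom = 0.5 * t + np.exp(-((t - 0.5) / 0.02) ** 2) + noise  # drift plus one peak
corrected = chrom - estimate_background(chrom)
```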

  10. Process-scheme-driven automatic construction of NC machining cell for aircraft structural parts

    Institute of Scientific and Technical Information of China (English)

    Chen Shulin; Zheng Guolei; Zhou Min; Du Baorui; Chu Hongzhen

    2013-01-01

    In order to enhance the NC programming efficiency and quality of aircraft structural parts (ASPs), an intelligent NC programming pattern driven by process schemes is presented. In this pattern, the NC machining cell is the minimal organizational structure in the technological process, consisting of an operation machining volume cell and the type and parameters of the machining operation. After the machining cell construction, the final NC program can be easily obtained in a CAD/CAM system by instantiating the machining operation for each machining cell. Accordingly, how to automatically establish the machining cells is a key issue in intelligent NC programming. On the basis of the NC machining craft of ASPs, the paper aims to conduct in-depth research on this issue. Firstly, some new terms concerning the residual volume and the machinable volume are defined, and then the technological process is modeled with a process scheme. Secondly, the approach to building the machining cells is introduced, in which real-time complement machining is mainly considered to avoid interference and overcutting. Thirdly, the implementing algorithm is designed and applied to the Intelligent NC Programming System of ASP. Finally, the developed algorithm is validated through two case studies.

  11. Protokol Interchangeable Data pada VMeS (Vessel Messaging System dan AIS (Automatic Identification System

    Directory of Open Access Journals (Sweden)

    Farid Andhika

    2012-09-01

    Full Text Available VMeS (Vessel Messaging System) is a radio-based communication system for sending messages between VMeS terminals on ships at sea and a VMeS gateway on shore. Ship monitoring systems at sea generally use AIS (Automatic Identification System), which is already deployed in all ports to monitor ship conditions and prevent collisions between ships. In this research, a data format suited to VMeS is designed so that it can be made interchangeable with AIS and thus read by AIS receivers, targeting ships of less than 30 GT (Gross Tonnage). The VMeS data format is designed in three types, namely position data, ship information data and short message data, which are made interchangeable with AIS message types 1, 4 and 8. Performance testing of the interchangeable system shows that as the message transmission period increases, the total delay increases but the packet loss decreases. When sending messages every 5 seconds at speeds of 0-40 km/h, 96.67% of the data is received correctly. Data experience packet loss when the received power level is below -112 dBm. The farthest distance reachable by the modem under moving conditions is the ITS Informatics building, 530 meters from Laboratory B406, with a received power level of -110 dBm.

  12. ESO Phase 3 automatic data validation: groovy-based tool to assure the compliance of the reduced data with the Science Data Product Standard

    Science.gov (United States)

    Mascetti, L.; Forchì, V.; Arnaboldi, M.; Delmotte, N.; Micol, A.; Retzlaff, J.; Zampieri, S.

    2016-07-01

    The ESO Phase 3 infrastructure provides a channel to submit reduced data products for publication to the astronomical community and long-term data preservation in the ESO Science Archive Facility. To be integrated into Phase 3, data must comply with the ESO Science Data Product Standard regarding format (one unique standard data format is associated with each type of product, such as image, spectrum, IFU cube, etc.) and required metadata. ESO has developed a Groovy-based tool that carries out an automatic validation of the submitted reduced products, triggered when data are uploaded and then submitted. Here we present how the tool is structured and which checks are implemented.
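
    The validator itself is written in Groovy; purely as an illustration of the kind of check it performs, the Python fragment below verifies that a FITS product carries a set of required header keywords. The keyword list is a made-up subset and does not reproduce the actual Phase 3 requirements.

```python
# Schematic illustration only: check a FITS header for required metadata keywords.
from astropy.io import fits

REQUIRED_KEYWORDS = ["ORIGIN", "OBJECT", "RA", "DEC", "PRODCATG"]  # illustrative subset

def validate(hdu):
    missing = [key for key in REQUIRED_KEYWORDS if key not in hdu.header]
    return len(missing) == 0, missing

hdu = fits.PrimaryHDU()
hdu.header["ORIGIN"] = "ESO"
hdu.header["OBJECT"] = "NGC 1234"
ok, missing = validate(hdu)
print("compliant" if ok else f"missing keywords: {missing}")
```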

  13. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    Science.gov (United States)

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for vessel enhancement of tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time-prohibitive, given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes.

  14. Examining the influence of psychopathy, hostility biases, and automatic processing on criminal offenders' Theory of Mind.

    Science.gov (United States)

    Nentjes, Lieke; Bernstein, David; Arntz, Arnoud; van Breukelen, Gerard; Slaats, Mariëtte

    2015-01-01

    Theory of Mind (ToM) is a social perceptual skill that refers to the ability to take someone else's perspective and infer what others think. The current study examined the effect of potential hostility biases, as well as controlled (slow) versus automatic (fast) processing on ToM performance in psychopathy. ToM abilities (as assessed with the Reading the Mind in the Eyes Test; RMET; Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001), was compared between 39 PCL-R diagnosed psychopathic offenders, 37 non-psychopathic offenders, and 26 nonoffender controls. Contrary to our hypothesis, psychopathic individuals presented with intact overall RMET performance when restrictions were imposed on how long task stimuli could be processed. In addition, psychopaths did not over-ascribe hostility to task stimuli (i.e., lack of hostility bias). However, there was a significant three-way interaction between hostility, processing speed, and psychopathy: when there was no time limit on stimulus presentation, psychopathic offenders made fewer errors in identifying more hostile eye stimuli compared to nonoffender controls, who seemed to be less accurate in detecting hostility. Psychopaths' more realistic appraisal of others' malevolent mental states is discussed in the light of theories that stress its potential adaptive function.

  15. The role of automaticity and attention in neural processes underlying empathy for happiness, sadness, and anxiety.

    Science.gov (United States)

    Morelli, Sylvia A; Lieberman, Matthew D

    2013-01-01

    Although many studies have examined the neural basis of empathy, relatively little is known about how empathic processes are affected by different attentional conditions. Thus, we examined whether instructions to empathize might amplify responses in empathy-related regions and whether cognitive load would diminish the involvement of these regions. Thirty-two participants completed a functional magnetic resonance imaging session assessing empathic responses to individuals experiencing happy, sad, and anxious events. Stimuli were presented under three conditions: watching naturally, actively empathizing, and under cognitive load. Across analyses, we found evidence for a core set of neural regions that support empathic processes (dorsomedial prefrontal cortex, DMPFC; medial prefrontal cortex, MPFC; temporoparietal junction, TPJ; amygdala; ventral anterior insula, AI; and septal area, SA). Two key regions-the ventral AI and SA-were consistently active across all attentional conditions, suggesting that they are automatically engaged during empathy. In addition, watching vs. empathizing with targets was not markedly different and instead led to similar subjective and neural responses to others' emotional experiences. In contrast, cognitive load reduced the subjective experience of empathy and diminished neural responses in several regions related to empathy and social cognition (DMPFC, MPFC, TPJ, and amygdala). The results reveal how attention impacts empathic processes and provides insight into how empathy may unfold in everyday interactions.

  16. The role of automaticity and attention in neural processes underlying empathy for happiness, sadness, and anxiety

    Directory of Open Access Journals (Sweden)

    Sylvia A. Morelli

    2013-05-01

    Full Text Available Although many studies have examined the neural basis of experiencing empathy, relatively little is known about how empathic processes are affected by different attentional conditions. Thus, we examined whether instructions to empathize might amplify responses in empathy-related regions and whether cognitive load would diminish the involvement of these regions. 32 participants completed a functional magnetic resonance imaging session assessing empathic responses to individuals experiencing happy, sad, and anxious events. Stimuli were presented under three conditions: watching naturally, while instructed to empathize, and under cognitive load. Across analyses, we found evidence for a core set of neural regions that support empathic processes (dorsomedial prefrontal cortex, DMPFC; medial prefrontal cortex, MPFC; temporoparietal junction, TPJ; amygdala; ventral anterior insula, AI; septal area, SA). Two key regions – the ventral AI and SA – were consistently active across all attentional conditions, suggesting that they are automatically engaged during empathy. In addition, watching versus empathizing with targets was not markedly different and instead led to similar subjective and neural responses to others’ emotional experiences. In contrast, cognitive load reduced the subjective experience of empathy and diminished neural responses in several regions related to empathy (DMPFC, MPFC, TPJ, amygdala) and social cognition. The current results reveal how attention impacts empathic processes and provides insight into how empathy may unfold in everyday interactions.

  17. Automatic screening and classification of diabetic retinopathy and maculopathy using fuzzy image processing.

    Science.gov (United States)

    Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina

    2016-12-01

    Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy diseases is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images by employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on the maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including four retinal structures localisation, feature extraction and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for the macula region localisation in order to detect the maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset and it presents the dataset collection, the expert diagnosis process and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes.

  18. MASSIF-1: a beamline dedicated to the fully automatic characterization and data collection from crystals of biological macromolecules

    Energy Technology Data Exchange (ETDEWEB)

    Bowler, Matthew W., E-mail: mbowler@embl.fr [European Molecular Biology Laboratory, Grenoble Outstation, 71 avenue des Martyrs, F-38042 Grenoble (France); Université Grenoble Alpes-EMBL-CNRS, 71 avenue des Martyrs, F-38042 Grenoble (France); Nurizzo, Didier, E-mail: mbowler@embl.fr; Barrett, Ray; Beteva, Antonia; Bodin, Marjolaine [European Synchrotron Radiation Facility, 71 avenue des Martyrs, F-38043 Grenoble (France)

    2015-10-03

    MASSIF-1 (ID30A-1) is a new beamline dedicated to the completely automatic characterization and data collection from crystals of biological macromolecules. MASSIF-1 (ID30A-1) is an ESRF undulator beamline operating at a fixed wavelength of 0.969 Å (12.8 keV) that is dedicated to the completely automatic characterization of and data collection from crystals of biological macromolecules. The first of the ESRF Upgrade MASSIF beamlines to be commissioned, it has been open since September 2014, providing a unique automated data collection service to academic and industrial users. Here, the beamline characteristics and details of the new service are outlined.

  19. Feature Extraction and Automatic Material Classification of Underground Objects from Ground Penetrating Radar Data

    Directory of Open Access Journals (Sweden)

    Qingqing Lu

    2014-01-01

    Full Text Available Ground penetrating radar (GPR) is a powerful tool for detecting objects buried underground. However, the interpretation of the acquired signals remains a challenging task, since an experienced user is required to manage the entire operation. Particularly difficult is the classification of the material type of underground objects in noisy environments. This paper proposes a new feature extraction method. First, the discrete wavelet transform (DWT) is applied to the A-scan data and the approximation coefficients are extracted. Then, the fractional Fourier transform (FRFT) is used to transform the approximation coefficients into the fractional domain, from which we extract features. The features are supplied to support vector machine (SVM) classifiers to automatically identify the material of underground objects. Experimental results show that the proposed feature-based SVM system performs well in classification accuracy compared with statistical and frequency-domain feature-based SVM systems in noisy environments, and that the classification accuracy of the features proposed in this paper depends little on the choice of SVM model.
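
    The feature pipeline described above (DWT approximation coefficients, further transformed and fed to an SVM) can be sketched as follows. The fractional Fourier step is omitted because no standard library routine is assumed here; simple statistics of the approximation coefficients and randomly generated A-scans stand in for the published features and data.

```python
# Hedged sketch: DWT approximation coefficients per A-scan -> simple statistics -> SVM.
import numpy as np
import pywt
from sklearn.svm import SVC

def ascan_features(ascan, wavelet="db4", level=3):
    approx = pywt.wavedec(ascan, wavelet, level=level)[0]  # approximation coefficients
    return [approx.mean(), approx.std(), np.abs(approx).max(), float((approx ** 2).sum())]

rng = np.random.default_rng(0)
X = np.array([ascan_features(rng.normal(size=512)) for _ in range(40)])
y = rng.integers(0, 2, size=40)            # dummy material labels (illustrative)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```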

  20. Automatic extraction of insulators from 3D LiDAR data of an electrical substation

    Science.gov (United States)

    Arastounia, M.; Lichti, D. D.

    2013-10-01

    A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data, while in this paper the aim is to extract insulators, which are key objects in electrical substations. This paper proposes a segmentation method based on a new approach to finding the principal direction of the points' distribution. This is done by forming and analysing the distribution matrix, whose elements are the range of the points in 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster, since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time-consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as diameter and the number and spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized, while the 3 unrecognized ones were highly occluded. Check point analysis was performed by manually cropping all points on insulators. The results of the check point analysis show that the accuracy, precision and recall of insulator recognition are 98%, 86% and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our developed approach to determining the directional distribution of points is computationally more efficient for segmentation of objects in electrical substations compared to PCA. Finally, our knowledge-based method is promising for recognizing points on electrical objects, as it was successfully applied for
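
    The "distribution matrix" idea (the range of the points along a fixed set of directions, instead of PCA's second-order moments) can be illustrated with the short numpy sketch below; the particular direction set and the synthetic point cloud are assumptions used only for demonstration.

```python
# Zero-order-moment alternative to PCA: extent of the projections per direction.
import numpy as np

DIRECTIONS = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [1, 1, 0], [1, 0, 1], [0, 1, 1],
    [1, -1, 0], [1, 0, -1], [0, 1, -1],
], dtype=float)
DIRECTIONS /= np.linalg.norm(DIRECTIONS, axis=1, keepdims=True)

def principal_direction(points):
    proj = points @ DIRECTIONS.T                   # (n_points, 9) projections
    ranges = proj.max(axis=0) - proj.min(axis=0)   # range of points per direction
    return DIRECTIONS[np.argmax(ranges)], ranges

pts = np.random.default_rng(1).normal(size=(200, 3)) * [5.0, 0.5, 0.5]  # elongated cloud
direction, _ = principal_direction(pts)
print(direction)   # expected to point roughly along the x axis
```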

  1. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data have some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark dataset provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction using LiDAR data.

  2. Lifetime-Based Memory Management for Distributed Data Processing Systems

    DEFF Research Database (Denmark)

    Lu, Lu; Shi, Xuanhua; Zhou, Yongluan;

    2016-01-01

    In-memory caching of intermediate data and eager combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in distributed data processing systems like Spark and Flink. However, it has also been widely reported that these techniques would...... create a large amount of long-living data objects in the heap, which may quickly saturate the garbage collector, especially when handling a large dataset, and hence would limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which......, by automatically analyzing the user-defined functions and data types, obtains the expected lifetime of the data objects, and then allocates and releases memory space accordingly to minimize the garbage collection overhead. In particular, we present Deca, a concrete implementation of our proposal on top of Spark...

  3. Lifetime-Based Memory Management for Distributed Data Processing Systems

    DEFF Research Database (Denmark)

    Lu, Lu; Shi, Xuanhua; Zhou, Yongluan;

    2016-01-01

    , by automatically analyzing the user-defined functions and data types, obtains the expected lifetime of the data objects, and then allocates and releases memory space accordingly to minimize the garbage collection overhead. In particular, we present Deca, a concrete implementation of our proposal on top of Spark......In-memory caching of intermediate data and eager combining of data in shuffle buffers have been shown to be very effective in minimizing the re-computation and I/O cost in distributed data processing systems like Spark and Flink. However, it has also been widely reported that these techniques would...... create a large amount of long-living data objects in the heap, which may quickly saturate the garbage collector, especially when handling a large dataset, and hence would limit the scalability of the system. To eliminate this problem, we propose a lifetime-based memory management framework, which...

  4. Automatic multi-modal intelligent seizure acquisition (MISA) system for detection of motor seizures from electromyographic data and motion data

    DEFF Research Database (Denmark)

    Conradsen, Isa; Beniczky, Sándor; Wolf, Peter

    2012-01-01

    The objective is to develop a non-invasive automatic method for detection of epileptic seizures with motor manifestations. Ten healthy subjects who simulated seizures and one patient participated in the study. Surface electromyography (sEMG) and motion sensor features were extracted as energy...... of the seizure from the patient showed that the simulated seizures were visually similar to the epileptic one. The multi-modal intelligent seizure acquisition (MISA) system showed high sensitivity, short detection latency and low false detection rate. The results showed superiority of the multi- modal detection...... system compared to the uni-modal one. The presented system has a promising potential for seizure detection based on multi-modal data....

  5. Composable Data Processing in Environmental Science - A Process View

    NARCIS (Netherlands)

    Wombacher, A.

    2008-01-01

    Data processing in environmental science is essential for doing science. The heterogeneity of data sources, data processing operations and infrastructures results in a lot of manual data and process integration work done by each scientist individually. This is very inefficient and time consuming. Th

  6. Advancements in Big Data Processing in the ATLAS and CMS Experiments

    CERN Document Server

    Vaniachine, A.V.

    2012-01-01

    The ever-increasing volumes of scientific data present new challenges for distributed computing and Grid technologies. The emerging Big Data revolution drives exploration in scientific fields including nanotechnology, astrophysics, high-energy physics, biology and medicine. New initiatives are transforming data-driven scientific fields enabling massive data analysis in new ways. In petascale data processing scientists deal with datasets, not individual files. As a result, a task (comprised of many jobs) became a unit of petascale data processing on the Grid. Splitting of a large data processing task into jobs enabled fine-granularity checkpointing analogous to the splitting of a large file into smaller TCP/IP packets during data transfers. Transferring large data in small packets achieves reliability through automatic re-sending of the dropped TCP/IP packets. Similarly, transient job failures on the Grid can be recovered by automatic re-tries to achieve reliable six sigma production quality in petascale data ...

  7. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    Science.gov (United States)

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-09-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight.

  8. Automatic cardiac gating of small-animal PET from list-mode data

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Udias, J.M. [Universidad Complutense de Madrid Univ. (Spain). Grupo de Fisica Nuclear; Vaquero, J.J.; Desco, M. [Universidad Carlos III de Madrid (Spain). Dept. de Bioingenieria e Ingenieria Aeroespacial; Cusso, L. [Hospital General Universitario Gregorio Maranon, Madrid (Spain). Unidad de Medicina y Cirugia Experimental

    2011-07-01

    This work presents a method to automatically obtain the cardiac gating signal in a PET study of rats by employing the variation with time of the counts in the cardiac region, which can be extracted from list-mode data. In an initial step, the cardiac region is identified in image space by backward-projecting a small fraction of the acquired data and studying the variation with time of the counts in each voxel inside said region, at frequencies between 2 and 8 Hz. The region obtained corresponds accurately to the left ventricle of the heart of the rat. In a second step, the lines of response (LORs) connected with this region are found by forward-projecting this region. The time variation of the number of counts in these LORs contains the cardiac motion information that we want to extract. This variation of counts with time is band-pass filtered to reduce noise, and the time signal so obtained is used to create the gating signal. The result was compared with a cardiac gating signal obtained from an ECG acquired simultaneously with the PET study. Reconstructed gated images obtained from both gating signals are similar. The proposed method demonstrates that valid cardiac gating signals can be obtained for rats from PET list-mode data. (orig.)
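
    The gating-signal step described above, band-pass filtering the count rate of the heart-region LORs in the 2-8 Hz band, could look roughly like the sketch below; the sampling rate, filter order and the use of peak picking to place the gates are illustrative assumptions rather than details from the paper.

```python
# Illustrative only: band-pass filter a count-rate signal and pick one gate per beat.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                          # count-rate sampling rate [Hz], assumed
t = np.arange(0.0, 30.0, 1.0 / fs)
rng = np.random.default_rng(0)
counts = 1000 + 50 * np.sin(2 * np.pi * 5.0 * t) + rng.poisson(30, t.size)

b, a = butter(4, [2.0 / (fs / 2), 8.0 / (fs / 2)], btype="band")
cardiac = filtfilt(b, a, counts)                    # band-limited cardiac component
gates, _ = find_peaks(cardiac)                      # one trigger per simulated beat
print(len(gates), "gate triggers in", t[-1], "s")
```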

  9. AUTOMATIC MAPPING OF GLACIER BASED ON SAR IMAGERY BY BENEFITS OF FREELY OPTICAL AND THERMAL DATA

    Directory of Open Access Journals (Sweden)

    L. Fang

    2015-03-01

    Full Text Available For many research applications, like water resources evaluation, determination of glacier-specific changes, and calculation of the past and future contribution of glaciers to sea-level change, parameters describing the size and spatial distribution of glaciers are crucial. In this paper, an automatic method for determination of glacier surface area using single-track high-resolution TerraSAR-X imagery with the benefit of low-resolution optical and thermal data is presented. Based on the normalized difference snow index (NDSI) and land surface temperature (LST) map generated from optical and thermal data, combined with surface slope data, a low-resolution binary mask was derived and used for the supervised classification of glaciers in the SAR imagery. Then, a set of suitable features is derived from the SAR intensity image, such as the texture information generated from the gray level co-occurrence matrix (GLCM), and the intensity values. With these features, the glacier surface is discriminated from the background by the Random Forests (RF) method.
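
    A minimal sketch of the low-resolution masking step (NDSI from optical bands combined with land surface temperature and slope into a binary glacier mask) is given below; the thresholds and the synthetic inputs are illustrative assumptions rather than values from the paper.

```python
# Illustrative glacier candidate mask from NDSI, LST and slope rasters.
import numpy as np

def glacier_mask(green, swir, lst_kelvin, slope_deg,
                 ndsi_min=0.4, lst_max=278.0, slope_max=45.0):
    ndsi = (green - swir) / (green + swir + 1e-9)   # normalized difference snow index
    return (ndsi > ndsi_min) & (lst_kelvin < lst_max) & (slope_deg < slope_max)

shape = (100, 100)
rng = np.random.default_rng(2)
mask = glacier_mask(rng.uniform(0, 1, shape), rng.uniform(0, 1, shape),
                    rng.uniform(260, 300, shape), rng.uniform(0, 60, shape))
print(f"{mask.mean():.2%} of pixels flagged as glacier candidates")
```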

  10. Automatic Classification of the Vestibulo-Ocular Reflex Nystagmus: Integration of Data Clustering and System Identification.

    Science.gov (United States)

    Ranjbaran, Mina; Smith, Heather L H; Galiana, Henrietta L

    2016-04-01

    The vestibulo-ocular reflex (VOR) plays an important role in our daily activities by enabling us to fixate on objects during head movements. Modeling and identification of the VOR improves our insight into the system behavior and improves diagnosis of various disorders. However, the switching nature of eye movements (nystagmus), including the VOR, makes dynamic analysis challenging. The first step in such analysis is to segment data into its subsystem responses (here slow and fast segment intervals). Misclassification of segments results in biased analysis of the system of interest. Here, we develop a novel three-step algorithm to classify the VOR data into slow and fast intervals automatically. The proposed algorithm is initialized using a K-means clustering method. The initial classification is then refined using system identification approaches and prediction error statistics. The performance of the algorithm is evaluated on simulated and experimental data. It is shown that the new algorithm performance is much improved over the previous methods, in terms of higher specificity.
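
    The first, clustering-based step of the algorithm described above can be illustrated with a simple K-means split of eye-velocity samples into slow- and fast-phase candidates; the system-identification refinement stage is omitted and the simulated velocities are purely illustrative.

```python
# Illustrative initial classification of nystagmus samples with K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
velocity = np.concatenate([rng.normal(20, 5, 900),      # slow-phase samples [deg/s]
                           rng.normal(200, 40, 100)])   # fast-phase (saccadic) samples
features = np.abs(velocity).reshape(-1, 1)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
fast_cluster = int(np.argmax([features[labels == k].mean() for k in (0, 1)]))
print("fast-phase fraction:", float(np.mean(labels == fast_cluster)))
```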

  11. Automatic Mapping of Glacier Based on SAR Imagery by Benefits of Freely Optical and Thermal Data

    Science.gov (United States)

    Fang, L.; Hoegner, L.; Stilla, U.

    2015-03-01

    For many research applications, like water resources evaluation, determination of glacier-specific changes, and calculation of the past and future contribution of glaciers to sea-level change, parameters describing the size and spatial distribution of glaciers are crucial. In this paper, an automatic method for determination of glacier surface area using single-track high-resolution TerraSAR-X imagery with the benefit of low-resolution optical and thermal data is presented. Based on the normalized difference snow index (NDSI) and land surface temperature (LST) map generated from optical and thermal data, combined with surface slope data, a low-resolution binary mask was derived and used for the supervised classification of glaciers in the SAR imagery. Then, a set of suitable features is derived from the SAR intensity image, such as the texture information generated from the gray level co-occurrence matrix (GLCM), and the intensity values. With these features, the glacier surface is discriminated from the background by the Random Forests (RF) method.

  12. Digital curation: a proposal of a semi-automatic digital object selection-based model for digital curation in Big Data environments

    Directory of Open Access Journals (Sweden)

    Moisés Lima Dutra

    2016-08-01

    Full Text Available Introduction: This work presents a new approach for Digital Curations from a Big Data perspective. Objective: The objective is to propose techniques to digital curations for selecting and evaluating digital objects that take into account volume, velocity, variety, reality, and the value of the data collected from multiple knowledge domains. Methodology: This is an exploratory research of applied nature, which addresses the research problem in a qualitative way. Heuristics allow this semi-automatic process to be done either by human curators or by software agents. Results: As a result, it was proposed a model for searching, processing, evaluating and selecting digital objects to be processed by digital curations. Conclusions: It is possible to use Big Data environments as a source of information resources for Digital Curation; besides, Big Data techniques and tools can support the search and selection process of information resources by Digital Curations.

  13. Development of Web Tools for the automatic Upload of Calibration Data into the CMS Condition Data

    Science.gov (United States)

    di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2010-04-01

    This article explains the recent evolution of Condition Database Application Service. The Condition Database Application Service is part of the condition database system of the CMS experiment, and it is used for handling and monitoring the CMS detector condition data, and the corresponding computing resources like Oracle Databases, storage service and network devices. We deployed a service, the offline Dropbox service, that will be used by Alignment and Calibration Group in order to upload from the offline network (GPN) the calibration constants produced by running offline analysis.

  14. Automatic polishing process of plastic injection molds on a 5-axis milling center

    CERN Document Server

    Pessoles, Xavier; 10.1016/j.jmatprotec.2008.08.034

    2010-01-01

    The plastic injection mold manufacturing process includes polishing operations when surface roughness is critical or mirror effect is required to produce transparent parts. This polishing operation is mainly carried out manually by skilled workers of subcontractor companies. In this paper, we propose an automatic polishing technique on a 5-axis milling center in order to use the same means of production from machining to polishing and reduce the costs. We develop special algorithms to compute 5-axis cutter locations on free-form cavities in order to imitate the skills of the workers. These are based on both filling curves and trochoidal curves. The polishing force is ensured by the compliance of the passive tool itself and set-up by calibration between displacement and force based on a force sensor. The compliance of the tool helps to avoid kinematical error effects on the part during 5-axis tool movements. The effectiveness of the method in terms of the surface roughness quality and the simplicity of impleme...

  15. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    Science.gov (United States)

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.

    2012-12-01

    The Surface Water and Ocean Topography (SWOT) wide swath altimetry mission will provide global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (like river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge over the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is achieved using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, which is considered a computationally cheaper alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is corrupted through multiplication by a spatially uniform factor that is the only factor optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions

  16. Effects of single cortisol administrations on human affect reviewed: Coping with stress through adaptive regulation of automatic cognitive processing.

    Science.gov (United States)

    Putman, Peter; Roelofs, Karin

    2011-05-01

    The human stress hormone cortisol may facilitate effective coping after psychological stress. In apparent agreement, administration of cortisol has been demonstrated to reduce fear in response to stressors. For anxious patients with phobias or posttraumatic stress disorder this has been ascribed to hypothetical inhibition of retrieval of traumatic memories. However, such stress-protective effects may also work via adaptive regulation of early cognitive processing of threatening information from the environment. This paper selectively reviews the available literature on effects of single cortisol administrations on affect and early cognitive processing of affectively significant information. The concluded working hypothesis is that immediate effects of high concentration of cortisol may facilitate stress-coping via inhibition of automatic processing of goal-irrelevant threatening information and through increased automatic approach-avoidance responses in early emotional processing. Limitations in the existing literature and suggestions for future directions are briefly discussed.

  17. Simple Approaches to Improve the Automatic Inventory of ZEBRA Crossing from Mls Data

    Science.gov (United States)

    Arias, P.; Riveiro, B.; Soilán, M.; Díaz-Vilariño, L.; Martínez-Sánchez, J.

    2015-08-01

    City management is increasingly supported by information technologies, leading to paradigms such as smart cities, where decision-makers, companies and citizens are continuously interconnected. 3D modelling becomes highly relevant when the city has to be managed making use of geospatial databases or Geographic Information Systems. On the other hand, laser scanning technology has experienced significant growth in recent years, and particularly terrestrial mobile laser scanning platforms are increasingly used for inventory purposes in both cities and road environments. Consequently, large datasets are available to produce the geometric basis for the city model; however, these data are not directly exploitable by management systems, constraining the implementation of the technology for such applications. This paper presents a new algorithm for the automatic detection of zebra crossings. The algorithm is divided into three main steps: road segmentation (based on a PCA analysis of the points contained in each scan cycle collected by a mobile laser system), rasterization (conversion of the point cloud to a raster image coloured as a function of intensity data), and zebra crossing detection (using the Hough Transform and logical constraints for line classification). After evaluating different datasets collected in three cities located in Northwest Spain (comprising 25 strips with 30 visible zebra crossings), a completeness of 83% was achieved.
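
    The final detection step, applying the Hough transform to the intensity raster and keeping lines that satisfy logical constraints, can be sketched with OpenCV as below; the synthetic raster, the Canny edge step and all thresholds are assumptions used only to illustrate the idea.

```python
# Illustrative stripe-border detection on a synthetic zebra-crossing raster.
import numpy as np
import cv2

raster = np.zeros((200, 200), dtype=np.uint8)
for i in range(5):                                   # five synthetic zebra stripes
    cv2.rectangle(raster, (30, 30 + 30 * i), (170, 45 + 30 * i), 255, -1)

edges = cv2.Canny(raster, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                        minLineLength=80, maxLineGap=10)
print(0 if lines is None else len(lines), "candidate stripe borders")
```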

  18. Automatic extraction of property norm-like data from large text corpora.

    Science.gov (United States)

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.
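
    The reweighting stage, scoring each candidate concept-relation-feature triple with a linear combination of its frequency and other statistics, might be illustrated as follows; the single stand-in metric and the weights are invented for the example and do not correspond to the four metrics used in the paper.

```python
# Toy triple reweighting: linear combination of frequency and one stand-in metric.
from collections import Counter

candidates = [("car", "require", "petrol"), ("car", "be", "fast"),
              ("car", "require", "petrol"), ("car", "cause", "pollution")]
freq = Counter(candidates)

def score(triple, weights=(0.7, 0.3)):
    frequency = freq[triple]
    # stand-in metric: features shared by many triples are less distinctive
    specificity = 1.0 / (1 + sum(1 for t in freq if t[2] == triple[2]))
    return weights[0] * frequency + weights[1] * specificity

for triple in sorted(freq, key=score, reverse=True):
    print(triple, round(score(triple), 3))
```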

  19. Processing multidimensional nuclear physics data

    Energy Technology Data Exchange (ETDEWEB)

    Becker, J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Modern Ge detector arrays for gamma-ray spectroscopy are producing data sets unprecedented in size and event multiplicity. Gammasphere, the DOE-sponsored array, has the following characteristics: (1) high granularity (110 detectors); (2) high efficiency (10%); and (3) precision energy measurements (ΔE/E = 0.2%). Characteristics of the detector line shape, the data set, and the standard practice in the nuclear physics community for extracting nuclear gamma-ray cascades from the 4096 × 4096 × 4096 data cube will be discussed.

  20. The speed of magnitude processing and executive functions in controlled and automatic number comparison in children: an electro-encephalography study

    Directory of Open Access Journals (Sweden)

    Jármi Éva

    2007-04-01

    Full Text Available Abstract Background In the numerical Stroop paradigm (NSP) participants decide whether a digit is numerically or physically larger than another simultaneously presented digit. This paradigm is frequently used to assess the automatic number processing abilities of children. Currently it is unclear whether an equally refined evaluation of numerical magnitude occurs in both controlled (the numerical comparison task of the NSP) and automatic (the physical comparison task of the NSP) numerical comparison in both children and adults. One of our objectives was to answer this question by measuring the speed of controlled and automatic magnitude processing in children and adults in the NSP. Another objective was to determine how the immature executive functions of children affect their cognitive functions relative to adults in numerical comparison. Methods and results The speed of numerical comparison was determined by monitoring the electro-encephalographic (EEG) numerical distance effect: the amplitude of EEG measures is modulated as a function of the numerical distance between the to-be-compared digits. EEG numerical distance effects occurred between 140–320 ms after stimulus presentation in both controlled and automatic numerical comparison in all age groups. Executive functions were assessed by analyzing facilitation and interference effects on the latency of the P3b event-related potential component and the lateralized readiness potential (LRP). Interference effects were more related to response than to stimulus processing in children as compared with adults. The LRP revealed that the difficulty of inhibiting irrelevant response tendencies was a major factor behind interference in the numerical task in children. Conclusion The timing of the EEG distance effect suggests that a refined evaluation of numerical magnitude happened at a similar speed in each age group during both controlled and automatic magnitude processing. The larger response interference in

  1. Using sensor data patterns from an automatic milking system to develop predictive variables for classifying clinical mastitis and abnormal milk

    NARCIS (Netherlands)

    Kamphuis, A.; Pietersma, D.; Tol, van der R.; Wiedermann, M.; Hogeveen, H.

    2008-01-01

    Dairy farmers using automatic milking are able to manage mastitis successfully with the help of mastitis attention lists. These attention lists are generated with mastitis detection models that make use of sensor data obtained throughout each quarter milking. The models tend to be limited to using t

  2. Automatic detection and agronomic characterization of olive groves using high-resolution imagery and LIDAR data

    Science.gov (United States)

    Caruso, T.; Rühl, J.; Sciortino, R.; Marra, F. P.; La Scalia, G.

    2014-10-01

    The Common Agricultural Policy of the European Union grants subsidies for olive production. Areas of intensified olive farming will be of major importance in meeting the increasing demand for oil production over the next decades, and countries with a high ratio of intensively and super-intensively managed olive groves will be more competitive than others, since they are able to reduce production costs. It can be estimated that about 25-40% of Sicilian oliviculture must be classified as "marginal". Modern olive cultivation systems, which permit the mechanization of pruning and harvest operations, are limited. Agronomists, landscape planners, policy decision-makers and other professionals have a growing need for accurate and cost-effective information on land use in general and on agronomic parameters in particular. The availability of high spatial resolution imagery has enabled researchers to propose analysis tools at the agricultural parcel and tree level. In our study, we test the performance of WorldView-2 imagery for the detection of olive groves and the delineation of olive tree crowns, using an object-oriented approach to image classification combined with LIDAR data. We selected two sites, which differ in their environmental conditions and in the agronomic parameters of olive grove cultivation. The main advantages of the proposed methodology are the small amount of input data required and its suitability for automation. However, it should be applied in other study areas to test whether the good accuracy assessment results can be confirmed. Data extracted by the proposed methodology can be used as input data for decision-making support systems for olive grove management.

  3. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, so the analysis of this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face these issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to handle high-dimensional data with good response times. The proposed system is able to find statistically significant biological markers that discriminate between classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
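
    The per-marker statistical testing step described above is embarrassingly parallel; the sketch below distributes it over a worker pool. The data layout (markers by patients), the Welch t-test and the significance cut-off are illustrative assumptions, not the authors' pipeline.

    ```python
    # Hedged sketch of parallel per-marker testing on synthetic genomic data.
    import numpy as np
    from multiprocessing import Pool
    from scipy.stats import ttest_ind

    def test_marker(args):
        values, labels = args                      # one marker across patients
        a, b = values[labels == 0], values[labels == 1]
        return ttest_ind(a, b, equal_var=False).pvalue

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = rng.normal(size=(10_000, 200))      # markers x patients (synthetic)
        labels = rng.integers(0, 2, size=200)      # e.g. responder / non-responder
        with Pool() as pool:
            pvalues = pool.map(test_marker, ((row, labels) for row in data))
        print("markers below p = 1e-4:", sum(p < 1e-4 for p in pvalues))
    ```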

  4. Adult attachment orientation and automatic processing of emotional information on a semantic level: A masked affective priming study.

    Science.gov (United States)

    Donges, Uta-Susan; Zeitschel, Frank; Kersting, Anette; Suslow, Thomas

    2015-09-30

    Early adverse social experiences leading to attachment insecurity could cause heightened sensitivity to emotional information. Automatic processing of emotional stimuli conveys information about positive-negative differentiation and the so-called possessor vs. other-relevance of valence. The aim of the present study was to examine automatic processing of emotional and relevance type information on a semantic level as a function of adult attachment avoidance and anxiety. A masked affective priming task, varying valence and relevance of prime and target adjectives, was presented to a sample of 153 healthy adults. The Experiences in Close Relationships scale was administered to assess attachment orientation. Significant priming effects for valence and relevance were observed. Attachment avoidance, but not attachment anxiety, was significantly related to affective priming independently of trait anxiety and depression. Specifically, attachment avoidance was found to be related to affective priming effects based on other-relevant words. It can be concluded that automatic processing of emotional adjectives used to characterize safe or risky social environments is heightened in avoidant individuals. The avoidantly attached processing style has similarities with repressive coping, which is characterized by an enhanced early response to emotion stimuli followed by avoidant biases at a controlled processing level.

  5. Automatic Censoring CFAR Detector Based on Ordered Data Difference for Low-Flying Helicopter Safety

    Directory of Open Access Journals (Sweden)

    Wen Jiang

    2016-07-01

    Full Text Available Being equipped with a millimeter-wave radar allows a low-flying helicopter to sense the surroundings in real time, which significantly increases its safety. However, nonhomogeneous clutter environments, such as a multiple target situation and a clutter edge environment, can dramatically affect the radar signal detection performance. In order to improve the radar signal detection performance in nonhomogeneous clutter environments, this paper proposes a new automatic censored cell averaging CFAR detector. The proposed CFAR detector does not require any prior information about the background environment and uses the hypothesis test of the first-order difference (FOD result of ordered data to reject the unwanted samples in the reference window. After censoring the unwanted ranked cells, the remaining samples are combined to form an estimate of the background power level, thus getting better radar signal detection performance. The simulation results show that the FOD-CFAR detector provides low loss CFAR performance in a homogeneous environment and also performs robustly in nonhomogeneous environments. Furthermore, the measured results of a low-flying helicopter validate the basic performance of the proposed method.
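
    The censoring idea can be pictured with the rough numpy sketch below: sort the reference cells, look for an unusually large first-order difference in the ordered data, discard the cells above that jump, and threshold the cell under test against the mean of the surviving cells. The jump test and the scaling constants here are simplified assumptions, not the paper's exact hypothesis test.

    ```python
    # Hedged sketch of a cell-averaging CFAR with ordered-data-difference censoring.
    import numpy as np

    def fod_censored_cfar(reference_cells, cell_under_test, scale=3.0, jump_factor=4.0):
        x = np.sort(np.asarray(reference_cells, dtype=float))
        diffs = np.diff(x)
        keep = len(x)
        for i in range(1, len(diffs)):
            # censor the upper tail after the first suspiciously large jump
            if diffs[i] > jump_factor * (np.median(diffs[:i]) + 1e-12):
                keep = i + 1
                break
        noise_level = x[:keep].mean()              # background from kept cells only
        threshold = scale * noise_level
        return cell_under_test > threshold, threshold

    rng = np.random.default_rng(0)
    cells = np.concatenate([rng.exponential(1.0, 28), [15.0, 18.0, 20.0, 25.0]])
    print(fod_censored_cfar(cells, cell_under_test=9.0))   # interferers censored
    ```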

  6. Algorithm for the Automatic Estimation of Agricultural Tree Geometric Parameters Using Airborne Laser Scanning Data

    Science.gov (United States)

    Hadaś, E.; Borkowski, A.; Estornell, J.

    2016-06-01

    The estimation of dendrometric parameters has become an important issue for agricultural planning and management. Since classical field measurements are time consuming and inefficient, Airborne Laser Scanning (ALS) data can be used for this purpose. Point clouds acquired over orchard areas allow orchard structures and geometric parameters of individual trees to be determined. In this research we propose an automatic method for determining geometric parameters of individual olive trees from ALS data. The method is based on the α-shape algorithm applied to normalized point clouds. The algorithm returns polygons representing crown shapes. For the points located inside each polygon, we select the maximum and minimum heights and then estimate the tree height and the crown base height. We use the first two components of a Principal Component Analysis (PCA) as estimators of the crown diameters. The α-shape algorithm requires a radius parameter R to be defined. In this study we investigated how sensitive the results are to the radius size, by comparing the results obtained with various settings of R against reference values of the estimated parameters from field measurements. Our study area was an olive orchard located in the Castellon Province, Spain. We used a set of ALS data with an average density of 4 points m-2. We noticed that there was a narrow range of the R parameter, from 0.48 m to 0.80 m, for which all trees were detected and for which we obtained a high correlation coefficient (> 0.9) between estimated and measured values. We compared our estimates with field measurements. The RMSE of the differences was 0.8 m for the tree height, 0.5 m for the crown base height, and 0.6 m and 0.4 m for the longer and shorter crown diameters, respectively. The accuracy obtained with the method is thus sufficient for agricultural applications.
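
    The PCA step for the crown diameters can be sketched as below: project a crown's horizontal point coordinates onto their two principal axes and take the extent along each axis as a diameter. The input format and the use of the point range as the "diameter" are assumptions for illustration.

    ```python
    # Hedged sketch of PCA-based crown diameter estimation from crown points.
    import numpy as np

    def crown_diameters(xy):
        """xy: (n, 2) array of horizontal coordinates of one crown's points."""
        centered = xy - xy.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        projected = centered @ vt.T                 # coordinates along principal axes
        extents = projected.max(axis=0) - projected.min(axis=0)
        return extents[0], extents[1]               # longer and shorter diameter

    pts = np.random.default_rng(1).normal(scale=(2.0, 1.2), size=(500, 2))
    print(crown_diameters(pts))
    ```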

  7. Extended morphological processing: a practical method for automatic spot detection of biological markers from microscopic images

    Directory of Open Access Journals (Sweden)

    Kimori Yoshitaka

    2010-07-01

    Full Text Available Abstract Background A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. Results A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed the better performance of our method. Spots in real microscope images were also quantified, confirming that the method is applicable in practice. Conclusions Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. These features allow its broad application in biological and biomedical image analysis.
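
    The directional opening can be sketched as below; for simplicity the structuring element is rotated instead of the image (the effect is equivalent), and scipy is used in place of the authors' implementation. The line length, angle step and the max-union of the openings are illustrative assumptions.

    ```python
    # Hedged sketch of a rotational line-segment opening followed by top-hat subtraction.
    import numpy as np
    from scipy import ndimage

    def line_footprint(length, angle_deg):
        """Binary footprint of a straight line segment at a given angle."""
        t = np.linspace(-(length - 1) / 2, (length - 1) / 2, length)
        rows = np.round(t * np.sin(np.radians(angle_deg))).astype(int)
        cols = np.round(t * np.cos(np.radians(angle_deg))).astype(int)
        fp = np.zeros((length, length), dtype=bool)
        fp[rows - rows.min(), cols - cols.min()] = True
        return fp

    def rotational_tophat(image, length=15, angles=range(0, 180, 15)):
        openings = [ndimage.grey_opening(image, footprint=line_footprint(length, a))
                    for a in angles]
        background = np.maximum.reduce(openings)    # union of directional openings
        return image - background                   # small bright spots remain

    spots = rotational_tophat(np.random.rand(128, 128))
    ```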

  8. Differences in semantic category priming in the left and right cerebral hemispheres under automatic and controlled processing conditions.

    Science.gov (United States)

    Collins, M

    1999-08-01

    The contribution of each cerebral hemisphere to the generation of semantic category meanings at automatic and strategic levels of processing was investigated in a priming experiment where prime and target words were independently projected to the left or right visual fields (LVF or RVF). Non-associated category exemplars were employed as related pairs in a lexical decision task and presented in two experimental conditions. The first condition was designed to elicit automatic processing, so related pairs comprised 20% of the positive set, stimulus pairs were temporally separated by a stimulus onset asynchrony (SOA) of 250 ms, and there was no allusion to the presence of related pairs in the instructions to subjects. The second condition, designed to invoke controlled processing, incorporated a relatedness proportion of 50%, stimulus pairs separated by an SOA of 750 ms, and instructions which informed subjects of the presence and use of category exemplar pairs in the stimulus set. In the first condition, a prime directed to either visual field facilitated responses to categorically related targets subsequently projected to the RVF, while in the second condition a prime directed to either visual field facilitated responses to related targets projected to the LVF. The facilitation effects obtained in both conditions appeared to reflect automatic processes, while strategic processes were invoked in the left, but not the right hemisphere in the second condition. The results suggest that both hemispheres have automatic access to semantic category meanings, although the timecourse of activation of semantic category meanings is slower in the right hemisphere than in the left.

  9. Automatic instrument for chemical processing to detect microorganism in biological samples by measuring light reactions

    Science.gov (United States)

    Kelbaugh, B. N.; Picciolo, G. L.; Chappelle, E. W.; Colburn, M. E. (Inventor)

    1973-01-01

    An automated apparatus is reported for sequentially assaying urine samples for the presence of bacterial adenosine triphosphate (ATP) that comprises a rotary table which carries a plurality of sample containing vials and automatically dispenses fluid reagents into the vials preparatory to injecting a light producing luciferase-luciferin mixture into the samples. The device automatically measures the light produced in each urine sample by a bioluminescence reaction of the free bacterial adenosine triphosphate with the luciferase-luciferin mixture. The light measured is proportional to the concentration of bacterial adenosine triphosphate which, in turn, is proportional to the number of bacteria present in the respective urine sample.

  10. Measures of Information Processing in Rapid Automatized Naming (RAN) and Their Relation To Reading.

    Science.gov (United States)

    Neuhaus, Graham; Foorman, Barbara R.; Francis, David J.; Carlson, Coleen D.

    2001-01-01

    Examined articulation and interarticulation (pause) times on Rapid Automatized Naming Tests for first- and second-graders. Found that pause and articulation times for RAN letters and objects were not reliably related, compared to RAN numbers articulation and pause durations. Subtest pause durations were differentially related to reading. RAN…

  11. Automatic classification of the acrosome status of boar spermatozoa using digital image processing and LVQ

    NARCIS (Netherlands)

    Alegre, Enrique; Biehl, Michael; Petkov, Nicolai; Sanchez, Lidia

    2008-01-01

    We consider images of boar spermatozoa obtained with an optical phase-contrast microscope. Our goal is to automatically classify single sperm cells as acrosome-intact (class 1) or acrosome-damaged (class 2). Such classification is important for the estimation of the fertilization potential of a spe

  12. The Obstacles in Big Data Process

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliyev

    2017-04-01

    Full Text Available The increasing amount of data and the need to analyze it in a timely manner for multiple purposes have created serious barriers in the big data analysis process. This article describes the challenges that big data creates at each step of the big data analysis process. These include typical analytical problems as well as less common challenges that are specific to big data. The article breaks down the problems for each step of the big data analysis process and discusses them separately at each stage. It also offers some simple approaches to solving these problems.

  13. Automatic Preprocessing of Tidal Gravity Observation Data

    Institute of Scientific and Technical Information of China (English)

    许闯; 罗志才; 林旭; 周波阳

    2013-01-01

    Preprocessing of tidal gravity observation data is very important for obtaining high-quality tidal harmonic analysis results. The preprocessing methods for tidal gravity observation data are studied systematically: an average filtering method and a wavelet filtering method for downsampling the original observations are given in the paper, as well as linear interpolation and cubic spline interpolation methods for processing interrupted data. Automatic preprocessing software for tidal gravity observation data (APTsoft) is developed, which can automatically calibrate and correct abnormal data such as spikes, steps and interruptions. Finally, the experimental results show that the preprocessing methods and APTsoft are very effective, and that APTsoft can be applied to the automatic preprocessing of tidal gravity observation data.
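
    Two of the preprocessing steps named above can be sketched as follows: block-average downsampling of 1 Hz samples to 1-minute means and cubic-spline interpolation across a short interruption. The sampling rates and the gap handling are illustrative assumptions, not APTsoft's actual defaults.

    ```python
    # Hedged sketch of downsampling by averaging and spline-based gap filling.
    import numpy as np
    from scipy.interpolate import CubicSpline

    def block_average(samples, factor=60):
        n = len(samples) // factor * factor
        return samples[:n].reshape(-1, factor).mean(axis=1)

    def fill_gap(t, y):
        """Interpolate NaN samples with a cubic spline through the valid ones."""
        valid = ~np.isnan(y)
        return CubicSpline(t[valid], y[valid])(t)

    t = np.arange(600.0)                            # 10 minutes of 1 Hz samples
    y = np.sin(2 * np.pi * t / 300) + 0.01 * np.random.randn(600)
    y[250:260] = np.nan                             # simulated short interruption
    minute_means = block_average(fill_gap(t, y))
    print(minute_means.shape)                       # (10,)
    ```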

  14. Automatic and intentional number processing both rely on intact right parietal cortex: A combined fMRI and neuronavigated TMS study.

    Directory of Open Access Journals (Sweden)

    Roi eCohen Kadosh

    2012-02-01

    Full Text Available Practice and training usually lead to a performance increase in a given task. In addition, a shift from intentional towards more automatic processing mechanisms is often observed. It is currently debated whether automatic and intentional processing are subserved by the same or by different mechanism(s), and whether the same or different regions in the brain are recruited. Previous correlational evidence provided by behavioural, neuroimaging, modelling, and neuropsychological studies addressing this question yielded conflicting results. Here we used transcranial magnetic stimulation (TMS) to compare the causal influence of disrupting either the left or the right parietal cortex during automatic and intentional numerical processing, as reflected by the size congruity effect and the numerical distance effect, respectively. We found a functional hemispheric asymmetry within the parietal cortex, with only the TMS-induced right parietal disruption impairing both automatic and intentional numerical processing. In contrast, disrupting the left parietal lobe with TMS, or applying sham stimulation, did not affect performance during automatic or intentional numerical processing. The current results provide causal evidence for the functional relevance of the right, but not the left, parietal cortex for intentional and automatic numerical processing, implying that at least within the parietal cortices, automatic and intentional numerical processing rely on the same underlying hemispheric lateralization.

  15. Simplified Processing Method for Meter Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Kimberly M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Colotelo, Alison H. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Downs, Janelle L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ham, Kenneth D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Henderson, Jordan W. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Montgomery, Sadie A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vernon, Christopher R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Parker, Steven A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-11-01

    A simple, quick metered-data processing method that can be used for Army Metered Data Management System (MDMS) and Logistics Innovation Agency data, but may also be useful for other large data sets. It is intended for large data sets when the analyst has little information about the buildings.

  16. Seismic processing in the inverse data space

    NARCIS (Netherlands)

    Berkhout, A.J.

    2006-01-01

    Until now, seismic processing has been carried out by applying inverse filters in the forward data space. Because the acquired data of a seismic survey is always discrete, seismic measurements in the forward data space can be arranged conveniently in a data matrix (P). Each column in the data matrix

  17. Automatic Estimation of Excavation Volume from Laser Mobile Mapping Data for Mountain Road Widening

    Directory of Open Access Journals (Sweden)

    Massimo Menenti

    2013-09-01

    Full Text Available Roads play an indispensable role as part of the infrastructure of society. In recent years, society has witnessed the rapid development of laser mobile mapping systems (LMMS which, at high measurement rates, acquire dense and accurate point cloud data. This paper presents a way to automatically estimate the required excavation volume when widening a road from point cloud data acquired by an LMMS. Firstly, the input point cloud is down-sampled to a uniform grid and outliers are removed. For each of the resulting grid points, both on and off the road, the local surface normal and 2D slope are estimated. Normals and slopes are consecutively used to separate road from off-road points which enables the estimation of the road centerline and road boundaries. In the final step, the left and right side of the road points are sliced in 1-m slices up to a distance of 4 m, perpendicular to the roadside. Determining and summing each sliced volume enables the estimation of the required excavation for a widening of the road on the left or on the right side. The procedure, including a quality analysis, is demonstrated on a stretch of a mountain road that is approximately 132 m long as sampled by a Lynx LMMS. The results in this particular case show that the required excavation volume on the left side is 8% more than that on the right side. In addition, the error in the results is assessed in two ways. First, by adding up estimated local errors, and second, by comparing results from two different datasets sampling the same piece of road both acquired by the Lynx LMMS. Results of both approaches indicate that the error in the estimated volume is below 4%. The proposed method is relatively easy to implement and runs smoothly on a desktop PC. The whole workflow of the LMMS data acquisition and subsequent volume computation can be completed in one or two days and provides road engineers with much more detail than traditional single-point surveying methods such as

  18. Scheme of the Saik computer system of complex automatic interpretation of well logging data

    Energy Technology Data Exchange (ETDEWEB)

    Frydecki, J.

    1975-01-01

    The basis of the first Polish interpretation system is the method of "autocalibration." The input data for the Saik system are the noncalibrated gamma log, neutron log, resistivity log, and caliper log. The input data are autocalibrated by means of correlation cross-plotting of gamma versus neutron, gamma versus resistivity and resistivity versus neutron logs. The results of processing are tables and curves reflecting clay content, porosity and hydrocarbon saturation.

  19. Sentinel-1 automatic processing chain for volcanic and seismic areas monitoring within the Geohazards Exploitation Platform (GEP)

    Science.gov (United States)

    De Luca, Claudio; Zinno, Ivana; Manunta, Michele; Lanari, Riccardo; Casu, Francesco

    2016-04-01

    these issues, ESA recently funded the development of the Geohazards Exploitation Platform (GEP), a project aimed at putting together data, processing tools and results to make them accessible to the EO scientific community, with particular emphasis on the Geohazard Supersites & Natural Laboratories and the CEOS Seismic Hazards and Volcanoes Pilots. In this work we present the integration of the parallel version of a well-known DInSAR algorithm, referred to as the Small BAseline Subset (P-SBAS), within the GEP platform for processing Sentinel-1 data. The integration allowed us to set up an operational on-demand web tool, open to every user, aimed at automatically processing S1A data for the generation of SBAS displacement time series. The main characteristics, as well as a number of experimental results obtained using the implemented web tool, will also be shown. This work is partially supported by: the RITMARE project of the Italian MIUR, the DPC-CNR agreement and the ESA GEP project.

  20. Pedestrians' intention to jaywalk: Automatic or planned? A study based on a dual-process model in China.

    Science.gov (United States)

    Xu, Yaoshan; Li, Yongjuan; Zhang, Feng

    2013-01-01

    The present study investigates the determining factors of Chinese pedestrians' intention to violate traffic laws using a dual-process model. This model divides the cognitive processes of intention formation into controlled analytical processes and automatic associative processes. Specifically, the process explained by the augmented theory of planned behavior (TPB) is controlled, whereas the process based on past behavior is automatic. The results of a survey conducted on 323 adult pedestrian respondents showed that the two added TPB variables had different effects on the intention to violate, i.e., personal norms were significantly related to traffic violation intention, whereas descriptive norms were non-significant predictors. Past behavior significantly and uniquely predicted the intention to violate: the results of the relative weight analysis indicated that the largest percentage of variance in pedestrians' intention to violate was explained by past behavior (42%). According to the dual-process model, therefore, pedestrians' intention formation relies more on habit than on cognitive TPB components and social norms. The implications of these findings for the development of intervention programs are discussed.

  1. Data near processing support for climate data analysis

    Science.gov (United States)

    Kindermann, Stephan; Ehbrecht, Carsten; Hempelmann, Nils

    2016-04-01

    Climate data repositories grow in size exponentially. Scalable data near processing capabilities are required to meet future data analysis requirements and to replace current "data download and process at home" workflows and approaches. On the one hand, these processing capabilities should be accessible via standardized interfaces (e.g. OGC WPS); on the other hand, a large variety of processing tools, toolboxes and deployment alternatives have to be supported and maintained at the data/processing center. We present a community approach of a modular and flexible system supporting the development, deployment and maintenance of OGC-WPS-based web processing services. This approach is organized in an open source GitHub project (called "bird-house") supporting individual processing services ("birds", e.g. climate index calculations, model data ensemble calculations), which rely on basic common infrastructural components (e.g. installation and deployment recipes, analysis code dependency management). To support easy deployment at data centers as well as home institutes (e.g. for testing and development), the system supports the management of the often very complex package dependency chains of climate data analysis packages, as well as Docker-based packaging and installation. We present a concrete deployment scenario at the German Climate Computing Center (DKRZ). The DKRZ on the one hand hosts a multi-petabyte climate archive which is integrated, e.g., into the European ENES and worldwide ESGF data infrastructures, and on the other hand hosts an HPC center supporting (model) data production and data analysis. The deployment scenario also includes OpenStack-based data cloud services to support data import and data distribution for bird-house-based WPS web processing services. Current challenges for inter-institutional deployments of web processing services supporting the European and international climate modeling community as well as the climate impact community are highlighted.

  2. Development of Automatic Live Linux Rebuilding System with Flexibility in Science and Engineering Education and Applying to Information Processing Education

    Science.gov (United States)

    Sonoda, Jun; Yamaki, Kota

    We develop an automatic Live Linux rebuilding system for science and engineering education, such as information processing education, numerical analysis and so on. Our system can easily and automatically rebuild a customized Live Linux from an ISO image of Ubuntu, one of the Linux distributions. It also makes it easy to install/uninstall packages and to enable/disable init daemons. When we rebuild a Live Linux CD using our system, we show that the number of operations is 8, and that the rebuilding time is about 33 minutes for the CD version and about 50 minutes for the DVD version. Moreover, we have applied the rebuilt Live Linux CD in an information processing education class at our college. From a questionnaire survey of the 43 students who used the Live Linux CD, we find that our Live Linux is useful for about 80 percent of the students. From these results, we conclude that our system is able to easily and automatically rebuild a useful Live Linux in a short time.

  3. Analysis and synthesis of a system for optimal automatic regulation of the process of mechanical cutting by a combine

    Energy Technology Data Exchange (ETDEWEB)

    Pop, E.; Coroescu, T.; Poanta, A.; Pop, M.

    1978-01-01

    An uncontrolled dynamic operating regime of a combine has negative effects. A consequence of the uncontrolled change in productivity and rate during cutting is a decrease in total productivity. The cutters of the cutting mechanism are prematurely worn out. The quality of the coal decreases. Complications with combine control reduce productivity. The motor is exposed to maximum loads, its service life decreases, and electricity is consumed inefficiently. Studies of the optimal automatic regulation of the cutting process were made by the method of model analysis on digital and analog machines. The method uses an electronic automatic device with integrated circuits of domestic production (A-741, A-723). This device controls and regulates the current parameters of the driving motor. The device consists primarily of a sensing element of the Hall TH transducer type; the regulating elements are an electronic relay, an electronic power distributor, etc.

  4. Data processing framework for decision making

    DEFF Research Database (Denmark)

    Larsen, Jan

    The aim of the talk is to provide insight into some of the issues in data processing and detection systems, and to hint at possible solutions using statistical signal processing and machine learning methodologies...

  5. Monitoring the Performance of the Pedestrian Transfer Function of Train Stations Using Automatic Fare Collection Data

    NARCIS (Netherlands)

    Van den Heuvel, J.P.A.; Hoogenraad, J.H.

    2014-01-01

    Over the last years all train stations in The Netherlands have been equipped with automatic fare collection gates and/or validators. All public transport passengers use a smart card to pay their fare. In this paper we present a monitor for the performance of the pedestrian function of train stations

  6. Automatic Cataloguing and Searching for Retrospective Data by Use of OCR Text.

    Science.gov (United States)

    Tseng, Yuen-Hsien

    2001-01-01

    Describes efforts in supporting information retrieval from OCR (optical character recognition) degraded text. Reports on approaches used in an automatic cataloging and searching contest for books in multiple languages, including a vector space retrieval model, an n-gram indexing method, and a weighting scheme; and discusses problems of Asian…

  7. Comparative analysis of automatic approaches to building detection from multi-source aerial data

    NARCIS (Netherlands)

    Frontoni, E.; Khoshelham, K.; Nardinocchi, C.; Nedkov, S.; Zingaretti, P.

    2008-01-01

    Automatic building detection has been a hot topic since the early 1990’s. Early approaches were based on a single aerial image. Detecting buildings is a difficult task so it can be more effective when multiple sources of information are obtained and fused. The objective of this paper is to provide a

  8. Automatic parameter optimization in epsilon-filter for acoustical signal processing utilizing correlation coefficient.

    Science.gov (United States)

    Abe, Tomomi; Hashimoto, Shuji; Matsumoto, Mitsuharu

    2010-02-01

    epsilon-filter can reduce most kinds of noise from a single-channel noisy signal while preserving signals that vary drastically such as speech signals. It can reduce not only stationary noise but also nonstationary noise. However, it has some parameters whose values are set empirically. So far, there have been few studies to evaluate the appropriateness of the parameter settings for epsilon-filter. This paper employs the correlation coefficient of the filter output and the difference between the filter input and output as the evaluation function of the parameter setting. This paper also describes the algorithm to set the optimal parameter value of epsilon-filter automatically. To evaluate the adequateness of the obtained parameter, the mean absolute error is calculated. The experimental results show that the adequate parameter in epsilon-filter can be obtained automatically by using the proposed method.
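
    The idea can be pictured with the sketch below: an epsilon-filter that averages only neighbouring samples within epsilon of the current sample, and a parameter search that evaluates each candidate epsilon by the correlation between the output and the removed component. The candidate grid and the "minimise the absolute correlation" rule are assumptions; the paper's exact optimisation procedure may differ.

    ```python
    # Hedged sketch of an epsilon-filter with correlation-guided parameter selection.
    import numpy as np

    def epsilon_filter(x, epsilon, half_window=5):
        y = np.empty_like(x, dtype=float)
        for i in range(len(x)):
            lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
            window = x[lo:hi]
            close = window[np.abs(window - x[i]) <= epsilon]
            y[i] = close.mean()                     # average only values within epsilon
        return y

    def choose_epsilon(x, candidates):
        scores = []
        for eps in candidates:
            y = epsilon_filter(x, eps)
            c = np.corrcoef(y, x - y)[0, 1]
            scores.append(abs(c) if np.isfinite(c) else np.inf)
        return candidates[int(np.argmin(scores))]

    signal = np.sin(np.linspace(0, 6 * np.pi, 500)) + 0.2 * np.random.randn(500)
    print(choose_epsilon(signal, candidates=np.linspace(0.2, 1.0, 9)))
    ```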

  9. Processing Solutions for Big Data in Astronomy

    Science.gov (United States)

    Fillatre, L.; Lepiller, D.

    2016-09-01

    This paper gives a simple introduction to processing solutions applied to massive amounts of data. It proposes a general presentation of the Big Data paradigm. The Hadoop framework, which is considered as the pioneering processing solution for Big Data, is described together with YARN, the integrated Hadoop tool for resource allocation. This paper also presents the main tools for the management of both the storage (NoSQL solutions) and computing capacities (MapReduce parallel processing schema) of a cluster of machines. Finally, more recent processing solutions like Spark are discussed. Big Data frameworks are now able to run complex applications while keeping the programming simple and greatly improving the computing speed.
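
    The MapReduce-style processing schema mentioned above can be illustrated with the classic word-count example in PySpark; the input path is a placeholder and a local Spark installation is assumed.

    ```python
    # Hedged sketch: map (emit word/1 pairs) and reduce (sum per key) with Spark.
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "WordCountSketch")
    counts = (sc.textFile("input.txt")                   # placeholder input file
                .flatMap(lambda line: line.split())      # map: emit words
                .map(lambda word: (word, 1))             # map: key-value pairs
                .reduceByKey(lambda a, b: a + b))        # reduce: sum per key
    print(counts.take(10))
    sc.stop()
    ```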

  10. Adaptive Filter Used as a Dynamic Compensator in Automatic Gauge Control of Strip Rolling Processes

    Directory of Open Access Journals (Sweden)

    N. ROMAN

    2010-12-01

    Full Text Available The paper deals with a control structure for the strip thickness in a rolling mill of quarto type (AGC – Automatic Gauge Control). It performs two functions: the compensation of errors induced by the non-ideal dynamics of the tracking systems driven by the AGC system, and the adaptation of the control to changes in the dynamic properties of the tracking systems. The compensation of dynamic errors is achieved through inverse models of the tracking system, implemented as adaptive filters.
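
    An adaptive FIR filter trained with the LMS rule is one common way to realise such a compensator; the sketch below identifies a toy plant rather than the paper's AGC-specific inverse model, and the filter order, step size and signals are illustrative assumptions.

    ```python
    # Hedged sketch of an LMS adaptive filter identifying an unknown FIR system.
    import numpy as np

    def lms(x, d, order=8, mu=0.01):
        """Adapt FIR weights so that the filter output approximates d."""
        w = np.zeros(order)
        y = np.zeros(len(x))
        for n in range(order - 1, len(x)):
            window = x[n - order + 1:n + 1][::-1]       # x[n], x[n-1], ...
            y[n] = w @ window
            w += mu * (d[n] - y[n]) * window            # LMS weight update
        return y, w

    x = np.random.randn(2000)                           # reference input
    d = np.convolve(x, [0.5, -0.3, 0.1], mode="full")[:2000]   # toy plant output
    y, w = lms(x, d)
    print("residual error:", np.mean((d[-500:] - y[-500:]) ** 2))
    ```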

  11. Estimating the quality of pasturage in the municipality of Paragominas (PA) by means of automatic analysis of LANDSAT data

    Science.gov (United States)

    Dejesusparada, N. (Principal Investigator); Dossantos, A. P.; Novo, E. M. L. D.; Duarte, V.

    1981-01-01

    The use of LANDSAT data to evaluate pasture quality in the Amazon region is demonstrated. Pasture degradation in deforested areas of a traditional tropical forest cattle-raising region was estimated. Automatic analysis using interactive multispectral analysis (IMAGE-100) shows that 24% of the deforested areas were occupied by natural vegetation regrowth, 24% by exposed soil, 15% by degraded pastures, and 46% was suitable grazing land.

  12. An automatic high precision registration method between large area aerial images and aerial light detection and ranging data

    OpenAIRE

    Du, Q.; Xie, D; Sun, Y.

    2015-01-01

    The integration of digital aerial photogrammetry and Light Detection And Ranging (LiDAR) is an inevitable trend in the Surveying and Mapping field. We calculate the external orientation elements of images in the LiDAR coordinate frame to achieve automatic, high-precision registration between aerial images and LiDAR data. There are two ways to calculate the orientation elements. One is single-image spatial resection using image-matched 3D points that are registered to the LiDAR. The other o

  13. THE METHOD OF DATA PROCESSING OF THE ELECTRICAL SURVEYING AND THE PROGRAM SYSTEM USED ON MICROCOMPUTER

    Institute of Scientific and Technical Information of China (English)

    李志聃; 高绋麟

    1990-01-01

    The ESS software package is prepared for electrical data processing in the fields of coal prospecting and hydrogeological engineering, and can be used in other fields of electrical data processing. It can be operated on any kind of microcomputer with an internal memory of more than 512 kB. The ESS software package leads office operations into a period of automatic data processing and frees field work from tedious, repetitive data treatment and mapping, so that engineers have more time to analyse and interpret field data. Undoubtedly, it is of benefit in improving the reliability of the geological evaluation.

  14. An automatic segmentation method for building facades from vehicle-borne LiDAR point cloud data based on fundamental geographical data

    Science.gov (United States)

    Li, Yongqiang; Mao, Jie; Cai, Lailiang; Zhang, Xitong; Li, Lixue

    2016-03-01

    In this paper, the authors propose a segmentation method based on fundamental geographic data. The algorithm is as follows: firstly, convert the coordinate system of the fundamental geographic data to that of the vehicle-borne LiDAR point cloud through some data preprocessing work, thereby aligning the two coordinate systems; secondly, simplify the features of the fundamental geographic data, extract effective contour information for the buildings, set a suitable buffer threshold value for the building contours, and segment out the point cloud data of the building facades automatically; thirdly, apply a reasonable quality assessment mechanism to check and evaluate the segmentation results and control the quality of the segmentation. Experiments show that the proposed method is simple and effective. The method also has reference value for the automatic segmentation of surface features from other types of point clouds.
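
    The buffering step can be sketched as below: build a band around a simplified building contour and keep only the points that fall inside it. Shapely is used for illustration; the contour, the buffer distance and the interpretation of the buffer as a facade zone are made-up example values, not the paper's settings.

    ```python
    # Hedged sketch: select candidate facade points inside a buffered contour.
    import numpy as np
    from shapely.geometry import Point, Polygon

    building = Polygon([(0, 0), (20, 0), (20, 10), (0, 10)])     # simplified contour
    facade_zone = building.buffer(1.5).difference(building.buffer(-1.5))

    points = np.random.uniform(-5, 25, size=(10_000, 2))         # synthetic (x, y) points
    facade_points = np.array([p for p in points
                              if facade_zone.contains(Point(p[0], p[1]))])
    print(len(facade_points), "candidate facade points")
    ```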

  15. Apache Flink: Distributed Stream Data Processing

    CERN Document Server

    Jacobs, Kevin; CERN. Geneva. IT Department

    2016-01-01

    The amount of data has been growing significantly over the past few years, and therefore the need for distributed data processing frameworks is growing. Currently, there are two well-known data processing frameworks, Apache Flink and Apache Spark, each offering an API for data batches and an API for data streams. Both Apache Spark and Apache Flink improve upon the MapReduce implementation of the Apache Hadoop framework. MapReduce is the first programming model for distributed processing at large scale that is available in Apache Hadoop. This report compares the Stream API and the Batch API of both frameworks.

  16. ACRF Data Collection and Processing Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Macduff, M; Egan, D

    2004-12-01

    We present a description of the data flow from measurement to long-term archive. We also discuss data communications infrastructure. The data handling processes presented include collection, transfer, ingest, quality control, creation of Value-Added Products (VAP), and data archiving.

  17. Hydat-A Hyperspectral Data Processing Tool for Field Spectroradiometer Data

    Science.gov (United States)

    Singh, S.; Dutta, D.; Singh, U.; Sharma, J. R.; Dadhwal, V. K.

    2014-11-01

    A hyperspectral data processing tool, "HyDAT", has been developed in the MATLAB environment for processing field spectroradiometer data for vegetation studies. Several basic functions, e.g. data visualization, pre-processing, noise removal and data transformation, and features like automatic absorption feature recovery and characterization, have been introduced. A new concept of spectral geometry has been included as a separate module; it is conceptualized as the triangle formed in spectral space by joining the vertices of the green reflectance peak, the red well and the inflection point, and it is extremely useful for vegetation health analysis. A large variety of spectral indices, both static and dynamic, have been introduced, which are useful for the remote estimation of foliar biochemicals. Keeping the computational requirements in view, MATLAB was used as the programming environment. It has various built-in functions for statistical and mathematical analysis and signal processing, such as the FFT (Fast Fourier Transform), the CWT (Continuous Wavelet Transform), direct smoothing functions for moving averages, the Savitzky-Golay smoothing technique, etc., which can be used with ease for signal processing and field data analysis. The FSF (Field Spectroscopy Facility) Post-processing Toolbox can also be freely downloaded and used for directly importing and pre-processing spectroradiometer data for detector overlap correction, erroneous water band removal and smoothing. The complete software package has been bundled as a standalone application with shared libraries and additional files for end users. The software supports the creation of spectral libraries and customized report generation. An online help menu guides the user through the different functions. The tool is capable of significantly reducing the time required for processing field-based hyperspectral data and eliminates the need for different software packages to process the raw data and extract spectral features.
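
    One of the smoothing options listed above, Savitzky-Golay filtering, can be illustrated in a few lines; the sketch below uses Python/scipy rather than MATLAB (HyDAT itself is a MATLAB tool), and the window length, polynomial order and synthetic spectrum are illustrative choices.

    ```python
    # Hedged sketch: Savitzky-Golay smoothing of a noisy reflectance spectrum.
    import numpy as np
    from scipy.signal import savgol_filter

    wavelengths = np.arange(400, 1001)                  # nm
    reflectance = (0.4 + 0.1 * np.sin(wavelengths / 40.0)
                   + 0.02 * np.random.randn(wavelengths.size))
    smoothed = savgol_filter(reflectance, window_length=21, polyorder=3)
    print(float(np.abs(reflectance - smoothed).mean()))
    ```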

  18. A swarm-trained k-nearest prototypes adaptive classifier with automatic feature selection for interval data.

    Science.gov (United States)

    Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C

    2016-08-01

    Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets.
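
    A weighted squared Euclidean distance for interval-valued features can be sketched as below, in the spirit of the distance described above; the exact generalisation used in the paper may differ, and the example intervals and weights are made up.

    ```python
    # Hedged sketch: weighted squared Euclidean distance between interval vectors.
    import numpy as np

    def weighted_interval_distance(a, b, w):
        """a, b: (n_features, 2) arrays of [lower, upper] bounds; w: feature weights."""
        lower_diff = (a[:, 0] - b[:, 0]) ** 2
        upper_diff = (a[:, 1] - b[:, 1]) ** 2
        return float(np.sum(w * (lower_diff + upper_diff)))

    x = np.array([[1.0, 2.0], [10.0, 12.0]])        # sample with two interval features
    proto = np.array([[1.5, 2.5], [9.0, 11.0]])     # a prototype
    print(weighted_interval_distance(x, proto, w=np.array([1.0, 0.5])))
    ```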

  19. Using data from automatic planetary stations for solving problems in astronomy and space physics

    Science.gov (United States)

    Stoeva, Penka; Stoev, Alexey; Bojurova, Eva

    The specific nature of Astronomy and Space Physics problems promotes students' interest in the relevant sciences and provokes their creativity. This is illustrated by numerous examples of positive responses from participants in the Astronomy Olympiad to extraordinary moments in problems, especially those related to space flight and to scientific data and photographs from satellites and automatic interplanetary stations (AIS). Jupiter's satellite Io is one of the satellites with the highest volcanic activity in the solar system. So far, the volcanoes of Io have been photographed, for short periods only, by the NASA interplanetary stations Voyager 1, Galileo and New Horizons. By monitoring these frequently erupting volcanoes, however, one can quickly gather detailed information and establish methods for predicting eruptions, including those of Earth's volcanoes. This could push forward research on volcanism in the Solar system. Therefore, this topic was used to create problems in astronomy. The report shows how, through measurements on images of Io taken by AIS, the heights of the jets emitted by volcanoes are determined. Knowing the mass and radius of the satellite, the initial speed of the emitted particles is evaluated. Similarly, the initial discharge rates of Earth's volcanoes and of the ice geysers on Saturn's satellite Enceladus are also evaluated. An attempt is made to explain the rings of ejecta around the volcanoes on Io. The ratio of the diameter of the dispersed material to the height of the jet is studied. In fact, the maximum speed of the particles is evaluated, since the boundaries of the volcanic "fountain" are determined by the fastest-moving particles reaching maximal height. The observed ratio is compared with the theoretical one derived by the students. The results show that although the volcanoes of Io, Earth's volcanoes and even the ice geysers of Enceladus operate under very different conditions and arise from different causes, the initial
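
    The kind of estimate the problems ask for can be worked through in a few lines: treating a volcanic jet ballistically, the initial speed follows from the plume height via v = sqrt(2*g*h) with g = G*M/R^2. Io's mass and radius and the 300 km plume height below are rounded reference values used only for illustration.

    ```python
    # Worked example (illustrative values): initial speed of an Io plume from its height.
    import math

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    M_IO = 8.93e22           # mass of Io, kg (rounded)
    R_IO = 1.822e6           # radius of Io, m (rounded)
    g_io = G * M_IO / R_IO**2

    h = 300e3                # assumed plume height, m
    v0 = math.sqrt(2 * g_io * h)
    print(f"surface gravity ~{g_io:.2f} m/s^2, initial speed ~{v0:.0f} m/s")
    ```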

  20. Simultaneous all-optical phase noise mitigation and automatically locked homodyne reception of an incoming QPSK data signal.

    Science.gov (United States)

    Mohajerin-Ariaei, Amirhossein; Ziyadi, Morteza; Almaiman, Ahmed; Cao, Yinwen; Alishahi, Fatemeh; Chitgarha, Mohammad Reza; Fallahpour, Ahmad; Yang, Jeng-Yuan; Bao, Changjing; Liao, Peicheng; Shamee, Bishara; Akasaka, Youichi; Sekiya, Motoyoshi; Touch, Joseph D; Tur, Moshe; Langrock, Carsten; Fejer, Martin M; Willner, Alan E

    2016-10-15

    Simultaneous phase noise mitigation and automatic phase/frequency-locked homodyne reception is demonstrated for a 20-32 Gbaud QPSK signal. A phase quantization function is realized to squeeze the phase noise of the signal by optical wave mixing of the signal, its third-order harmonic, and their corresponding delayed variant conjugates, converting the noisy input into a noise-mitigated signal. In a simultaneous nonlinear process, the noise-mitigated signal is automatically phase- and frequency-locked with a "local" pump laser, avoiding the need for feedback or phase/frequency tracking for homodyne detection. Open eye-diagrams are obtained for in-phase and quadrature-phase components of the signal and ∼2  dB OSNR gain is achieved at BER 10-3.

  1. A dynamically reconfigurable data stream processing system

    Energy Technology Data Exchange (ETDEWEB)

    Nogiec, J.M.; Trombly-Freytag, K.; /Fermilab

    2004-11-01

    This paper describes a component-based framework for data stream processing that allows for configuration, tailoring, and runtime system reconfiguration. The system's architecture is based on a pipes and filters pattern, where data is passed through routes between components. A network of pipes and filters can be dynamically reconfigured in response to a preplanned sequence of processing steps, operator intervention, or a change in one or more data streams. This framework provides several mechanisms supporting dynamic reconfiguration and can be used to build static data stream processing applications such as monitoring or data acquisition systems, as well as self-adjusting systems that can adapt their processing algorithm, presentation layer, or data persistency layer in response to changes in input data streams.
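
    The pipes-and-filters pattern with runtime reconfiguration can be pictured with the toy sketch below; it illustrates the pattern only and is not the Fermilab framework's API.

    ```python
    # Hedged sketch: a pipeline of filter callables that can be reconfigured at runtime.
    class Pipeline:
        def __init__(self, *filters):
            self.filters = list(filters)

        def insert(self, index, filt):          # dynamic reconfiguration
            self.filters.insert(index, filt)

        def remove(self, filt):
            self.filters.remove(filt)

        def process(self, item):
            for filt in self.filters:
                item = filt(item)
            return item

    def scale(x): return x * 2.0
    def clip(x): return min(x, 10.0)

    pipe = Pipeline(scale, clip)
    print(pipe.process(7.0))                    # 10.0
    pipe.remove(clip)                           # adapt to a change in the stream
    print(pipe.process(7.0))                    # 14.0
    ```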

  2. Efficient Information and Data Management in Synthesis and Design of Processing Netorks

    DEFF Research Database (Denmark)

    Quaglia, Alberto; Sin, Gürkan; Gani, Rafiqul

    ), ii) each of the process alternatives (in term of mass balance, waste emissions, operational and capital cost), iii) the optimality criterion (in terms of objective function coefficients such as prices), as well as iv) engineering, commercial and regulatory insights and context related information...... formulation, integrating automatic data consistency checks and connection to databases of physical properties and process data. Once all data have been specified, the problem is automatically converted into a GAMS readable program, which is executed to solve the optimization problem and identify the optimal...... studies. The case studies are selected from different industrial segments, such as food processing (soybean processing network), water and wastewater management (refinery wastewater treatment and reuse; municipal water treatment) and biorefinery....

  3. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Directory of Open Access Journals (Sweden)

    J. W. Park

    2016-06-01

    Full Text Available Recently, aerial photography with an unmanned aerial vehicle (UAV) system uses a UAV and remote control through a connection to the ground control system using a radio frequency (RF) modem with a bandwidth of about 430 MHz. However, as mentioned earlier, the existing method of using an RF modem has limitations in long-distance communication. The Smart Camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were used to implement a UAV with the developed UAV communication module system, which carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for drones in areas that need image capture and software for loading and managing a smart camera. The system is composed of automatic shooting, using the sensors of the smart camera, and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  4. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Science.gov (United States)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with an unmanned aerial vehicle (UAV) system uses a UAV and remote control through a connection to the ground control system using a radio frequency (RF) modem with a bandwidth of about 430 MHz. However, as mentioned earlier, the existing method of using an RF modem has limitations in long-distance communication. The Smart Camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were used to implement a UAV with the developed UAV communication module system, which carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for drones in areas that need image capture and software for loading and managing a smart camera. The system is composed of automatic shooting, using the sensors of the smart camera, and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  5. Yes! The Business Department Teaches Data Processing

    Science.gov (United States)

    Nord, Daryl; Seymour, Tom

    1978-01-01

    After a brief discussion of the history and current status of business data processing versus computer science, this article focuses on the characteristics of a business data processing curriculum as compared to a computer science curriculum, including distinctions between the FORTRAN and COBOL programming languages. (SH)

  6. Satellite radar altimetry over ice. Volume 1: Processing and corrections of Seasat data over Greenland

    Science.gov (United States)

    Zwally, H. Jay; Brenner, Anita C.; Major, Judith A.; Martin, Thomas V.; Bindschadler, Robert A.

    1990-01-01

    The data-processing methods and ice data products derived from Seasat radar altimeter measurements over the Greenland ice sheet and surrounding sea ice are documented. The corrections derived and applied to the Seasat radar altimeter data over ice are described in detail, including the editing and retracking algorithm to correct for height errors caused by lags in the automatic range tracking circuit. The methods for radial adjustment of the orbits and estimation of the slope-induced errors are given.

  7. Data processing and visualisation in the Rosetta Science Ground Segment

    Science.gov (United States)

    Geiger, Bernhard

    2016-09-01

    Rosetta is the first space mission to rendezvous with a comet. The spacecraft encountered its target 67P/Churyumov-Gerasimenko in 2014 and currently escorts the comet through a complete activity cycle during perihelion passage. The Rosetta Science Ground Segment (RSGS) is in charge of planning and coordinating the observations carried out by the scientific instruments on board the Rosetta spacecraft. We describe the data processing system implemented at the RSGS in order to support data analysis and science operations planning. The system automatically retrieves and processes telemetry data in near real-time. The generated products include spacecraft and instrument housekeeping parameters, scientific data for some instruments, and derived quantities. Based on spacecraft and comet trajectory information a series of geometric variables are calculated in order to assess the conditions for scheduling the observations of the scientific instruments and analyse the respective measurements obtained. Images acquired by the Rosetta Navigation Camera are processed and distributed in near real-time to the instrument team community. A quicklook web-page displaying the images allows the RSGS team to monitor the state of the comet and the correct acquisition and downlink of the images. Consolidated datasets are later delivered to the long-term archive.

  8. Using Probe Vehicle Data for Automatic Extraction of Road Traffic Parameters

    Directory of Open Access Journals (Sweden)

    Roman Popescu Maria Alexandra

    2016-12-01

    Full Text Available Through this paper the author aims to study and find solutions for the automatic detection of traffic light positions and the automatic calculation of waiting times at traffic lights. The first objective mainly serves the road transportation field, because it removes the need for collaboration with local authorities to establish a national network of traffic lights. The second objective is important not only for companies providing navigation solutions, but especially for authorities, institutions and companies operating road traffic management systems. Real-time, dynamic determination of traffic queue lengths and of waiting times at traffic lights allows the creation of dynamic, intelligent and flexible systems, adapted to actual traffic conditions rather than to generic, theoretical models. Thus, cities can approach the Smart City concept by boosting road transport and making it more efficient and greener, as promoted in Europe through the Horizon 2020 Smart Cities and Urban Mobility initiative.

  9. Automatic reconstruction of molecular and genetic networks from discrete time series data.

    Science.gov (United States)

    Durzinsky, Markus; Wagler, Annegret; Weismantel, Robert; Marwan, Wolfgang

    2008-09-01

    We apply a mathematical algorithm which processes discrete time series data to generate a complete list of Petri net structures containing the minimal number of nodes required to reproduce the data set. The completeness of the list, as guaranteed by a mathematical proof, makes it possible to define a minimal set of experiments required to discriminate between alternative network structures. This in principle allows all possible minimal network structures to be proven by disproving all alternative candidate structures. The dynamic behaviour of the networks, in terms of a switching rule for the transitions of the Petri net, is part of the result. In addition to network reconstruction, the algorithm can be used to determine how many yet undetected components must, at the least, be involved in a certain process. The algorithm also reveals all alternative structural modifications of a network that are required to generate a predefined behaviour.

  10. Utilization of a genetic algorithm for the automatic detection of oil spill from RADARSAT-2 SAR satellite data.

    Science.gov (United States)

    Marghany, Maged

    2014-12-15

    In this work, a genetic algorithm is applied for the automatic detection of oil spills. The procedure is implemented using sequences from RADARSAT-2 SAR ScanSAR Narrow single-beam data acquired in the Gulf of Mexico. The study demonstrates that the implementation of crossover allows for the generation of an accurate oil spill pattern. This conclusion is confirmed by the receiver-operating characteristic (ROC) curve. The ROC curve indicates that the existence of oil slick footprints can be identified using the area between the ROC curve and the no-discrimination line of 90%, which is greater than that of other surrounding environmental features. In conclusion, the genetic algorithm can be used as a tool for the automatic detection of oil spills, and the ScanSAR Narrow single-beam mode serves as an excellent sensor for oil spill detection and survey.

  11. Process mining data science in action

    CERN Document Server

    van der Aalst, Wil

    2016-01-01

    The first to cover this missing link between data mining and process modeling, this book provides real-world techniques for monitoring and analyzing processes in real time. It is a powerful new tool destined to play a key role in business process management.

  12. Age effects shrink when motor learning is predominantly supported by nondeclarative, automatic memory processes: evidence from golf putting.

    Science.gov (United States)

    Chauvel, Guillaume; Maquestiaux, François; Hartley, Alan A; Joubert, Sven; Didierjean, André; Masters, Rich S W

    2012-01-01

    Can motor learning be equivalent in younger and older adults? To address this question, 48 younger (M = 23.5 years) and 48 older (M = 65.0 years) participants learned to perform a golf-putting task in two different motor learning situations: one that resulted in infrequent errors or one that resulted in frequent errors. The results demonstrated that infrequent-error learning predominantly relied on nondeclarative, automatic memory processes whereas frequent-error learning predominantly relied on declarative, effortful memory processes: After learning, infrequent-error learners verbalized fewer strategies than frequent-error learners; at transfer, a concurrent, attention-demanding secondary task (tone counting) left motor performance of infrequent-error learners unaffected but impaired that of frequent-error learners. The results showed age-equivalent motor performance in infrequent-error learning but age deficits in frequent-error learning. Motor performance of frequent-error learners required more attention with age, as evidenced by an age deficit on the attention-demanding secondary task. The disappearance of age effects when nondeclarative, automatic memory processes predominated suggests that these processes are preserved with age and are available even early in motor learning.

  13. The Use of Computer Vision Algorithms for Automatic Orientation of Terrestrial Laser Scanning Data

    Science.gov (United States)

    Markiewicz, Jakub Stefan

    2016-06-01

    The paper presents an analysis of the orientation of terrestrial laser scanning (TLS) data. In the proposed data processing methodology, point clouds are treated as panoramic images enriched by a depth map. Computer vision (CV) algorithms are used for the orientation; they are evaluated for the correctness of tie-point detection and for computation time, and the difficulties in their implementation are assessed. The BRISK, FAST, MSER, SIFT, SURF, ASIFT and CenSurE algorithms are used to search for key-points. The source data are point clouds acquired using a Z+F 5006h terrestrial laser scanner on the ruins of Iłża Castle, Poland. Algorithms allowing combination of the photogrammetric and CV approaches are also presented.
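    As a rough illustration of the key-point step described above, the sketch below detects and matches SIFT features between two panoramic images with OpenCV. File names are placeholders, and only SIFT is shown; the paper also tests BRISK, MSER, SURF, ASIFT and CenSurE, which are not reproduced here.
```python
# Minimal SIFT tie-point sketch between two TLS-derived panoramic images (placeholder files).
import cv2

img1 = cv2.imread("scan_station1_panorama.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scan_station2_panorama.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep unambiguous matches (Lowe's ratio test),
# yielding candidate tie points between the two scans.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
tie_points = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(tie_points)} candidate tie points")
```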

  14. MASSIF-1: a beamline dedicated to the fully automatic characterization and data collection from crystals of biological macromolecules

    Science.gov (United States)

    Bowler, Matthew W.; Nurizzo, Didier; Barrett, Ray; Beteva, Antonia; Bodin, Marjolaine; Caserotto, Hugo; Delagenière, Solange; Dobias, Fabian; Flot, David; Giraud, Thierry; Guichard, Nicolas; Guijarro, Mattias; Lentini, Mario; Leonard, Gordon A.; McSweeney, Sean; Oskarsson, Marcus; Schmidt, Werner; Snigirev, Anatoli; von Stetten, David; Surr, John; Svensson, Olof; Theveneau, Pascal; Mueller-Dieckmann, Christoph

    2015-01-01

    MASSIF-1 (ID30A-1) is an ESRF undulator beamline operating at a fixed wavelength of 0.969 Å (12.8 keV) that is dedicated to the completely automatic characterization of and data collection from crystals of biological macromolecules. The first of the ESRF Upgrade MASSIF beamlines to be commissioned, it has been open since September 2014, providing a unique automated data collection service to academic and industrial users. Here, the beamline characteristics and details of the new service are outlined. PMID:26524320

  15. Automatic processes in aesthetic judgment: Insights from the implicit association test

    Directory of Open Access Journals (Sweden)

    Pavlović Maša

    2012-01-01

    Full Text Available This study employed the Implicit Association Test (IAT) with the aim of examining the nature of automatic aesthetic judgment. The main hypothesis was that the basic hedonic tone of an artwork is one of the important factors influencing automatic aesthetic evaluation. We conducted two experiments in which we varied the hedonic valence of paintings of figural art (Experiment 1) and abstract art (Experiment 2), measured participants’ implicit association between these paintings and an evaluative attribute dimension via the IAT, and registered their explicit judgments of the paintings’ hedonic tone. In both experiments we found that participants were significantly faster in those dual-categorization tasks of the IAT where preselected hedonically “positive” paintings were paired with positive attributes and hedonically “negative” ones with negative attributes than the other way around. We additionally found that explicit assessments of hedonic tone were substantially related to the individual IAT effects in the case of abstract art, but not in the case of figural art. Implications of these findings are discussed. [Project of the Ministry of Science of the Republic of Serbia, no. 179018 and no. 179033]

  16. ARP: Automatic rapid processing for the generation of problem dependent SAS2H/ORIGEN-s cross section libraries

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Hermann, O.W.; Bowman, S.M.; Parks, C.V.

    1998-04-01

    In this report, a methodology is described which serves as an alternative to the SAS2H path of the SCALE system to generate cross sections for point-depletion calculations with the ORIGEN-S code. ARP, Automatic Rapid Processing, is an algorithm that allows the generation of cross-section libraries suitable to the ORIGEN-S code by interpolation over pregenerated SAS2H libraries. The interpolations are carried out on the following variables: burnup, enrichment, and water density. The adequacy of the methodology is evaluated by comparing measured and computed spent fuel isotopic compositions for PWR and BWR systems.
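    The core idea, interpolating pregenerated cross sections on a (burnup, enrichment, water density) grid for the case at hand, can be sketched as below. The grid points and library values are invented placeholders, not SAS2H/ORIGEN-S data.
```python
# Hedged sketch of interpolation over a pregenerated 3D cross-section grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

burnup = np.array([0.0, 10.0, 20.0, 40.0])       # GWd/MTU (placeholder grid)
enrichment = np.array([2.0, 3.0, 4.0, 5.0])      # wt% U-235
water_density = np.array([0.4, 0.6, 0.8])        # g/cm^3

# One pregenerated one-group cross section per grid point (dummy values).
xs_library = np.random.default_rng(1).random((4, 4, 3))

interp = RegularGridInterpolator((burnup, enrichment, water_density), xs_library)
xs_case = interp([[15.0, 3.6, 0.72]])            # interpolate for a specific case
print(xs_case)
```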

  17. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    Science.gov (United States)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has increased strongly over the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework with a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined using several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
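    To make the "Fourier descriptors as shape features" idea concrete, the sketch below computes translation- and scale-normalized Fourier descriptors of a closed 2D contour (a synthetic ellipse standing in for a parenchyma-part outline); such a feature vector could then be fed to a support vector machine. This is an illustrative construction, not the paper's exact feature pipeline.
```python
# Hedged sketch: Fourier descriptors of a closed contour as SVM shape features.
import numpy as np

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
contour = 40 * np.cos(t) + 1j * 25 * np.sin(t)    # complex-valued boundary samples (synthetic)

coeffs = np.fft.fft(contour)
# Drop the DC term (absolute position), normalize by the first harmonic (scale),
# and keep a few low-order magnitudes as the shape feature vector.
descriptors = np.abs(coeffs[1:11]) / np.abs(coeffs[1])
print(descriptors)
```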

  18. Automatic classification of written descriptions by healthy adults: An overview of the application of natural language processing and machine learning techniques to clinical discourse analysis

    Directory of Open Access Journals (Sweden)

    Cíntia Matsuda Toledo

    Full Text Available Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. OBJECTIVE: The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. METHODS: The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two Natural Language Processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually), besides description time; type of scene described (simple or complex); presentation order (which type of picture was described first); and age. In this study, the descriptions by 144 of the subjects studied in Toledo18, a study that included 200 healthy Brazilians of both genders, were used. RESULTS AND CONCLUSION: A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods.

  19. Automatic Detection and Tracking of CMEs II: Multiscale Filtering of Coronagraph Data

    CERN Document Server

    Byrne, Jason P; Habbal, Shadia R; Gallagher, Peter T; 10.1088/0004-637X/752/2/145

    2012-01-01

    Studying CMEs in coronagraph data can be challenging due to their diffuse structure and transient nature, and user-specific biases may be introduced through visual inspection of the images. The large amount of data available from the SOHO, STEREO, and future coronagraph missions, also makes manual cataloguing of CMEs tedious, and so a robust method of detection and analysis is required. This has led to the development of automated CME detection and cataloguing packages such as CACTus, SEEDS and ARTEMIS. Here we present the development of a new CORIMP (coronal image processing) CME detection and tracking technique that overcomes many of the drawbacks of current catalogues. It works by first employing the dynamic CME separation technique outlined in a companion paper, and then characterising CME structure via a multiscale edge-detection algorithm. The detections are chained through time to determine the CME kinematics and morphological changes as it propagates across the plane-of-sky. The effectiveness of the...

  20. Automatic first-break picking using the instantaneous traveltime attribute

    KAUST Repository

    Saragiotis, Christos

    2012-01-01

    Picking the first breaks is an important step in seismic processing. The large volume of the seismic data calls for automatic and objective picking. We introduce a new automatic first-break picker, which uses specifically designed time windows and an iterative procedure based on the instantaneous traveltime attribute. The method is fast as it only uses a few FFTs per trace. We demonstrate the effectiveness of this automatic method by applying it to real test data.

  1. Integrated data acquisition, storage, retrieval and processing using the COMPASS DataBase (CDB)

    Energy Technology Data Exchange (ETDEWEB)

    Urban, J., E-mail: urban@ipp.cas.cz [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Pipek, J.; Hron, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Janky, F.; Papřok, R.; Peterka, M. [Institute of Plasma Physics AS CR, v.v.i., Za Slovankou 3, 182 00 Praha 8 (Czech Republic); Department of Surface and Plasma Science, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, 180 00 Praha 8 (Czech Republic); Duarte, A.S. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade Técnica de Lisboa, 1049-001 Lisboa (Portugal)

    2014-05-15

    Highlights: • CDB is used as a new data storage solution for the COMPASS tokamak. • The software is light weight, open, fast and easily extensible and scalable. • CDB seamlessly integrates with any data acquisition system. • Rich metadata are stored for physics signals. • Data can be processed automatically, based on dependence rules. - Abstract: We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor and platform independent and it can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. Database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed as the work is off-loaded to the clients. Both NAS and database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisitions systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is a part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks.
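    The storage split described above, bulk signal data on a file store (NAS) with descriptive metadata in a relational database, can be illustrated roughly as follows. The table layout, file naming and signal are hypothetical placeholders and are not the actual CDB schema or API.
```python
# Rough sketch, under assumed names: array data on disk, metadata in SQLite.
import sqlite3
import numpy as np

signal = np.sin(np.linspace(0, 1, 10000))                # placeholder diagnostic signal
data_path = "shot_10000_density_rev1.npy"                # in production this would sit on the NAS
np.save(data_path, signal)

db = sqlite3.connect("cdb_metadata.sqlite")
db.execute("""CREATE TABLE IF NOT EXISTS signals
              (shot INTEGER, name TEXT, revision INTEGER,
               units TEXT, file_path TEXT)""")
db.execute("INSERT INTO signals VALUES (?, ?, ?, ?, ?)",
           (10000, "density", 1, "m^-3", data_path))
db.commit()

# Retrieval: look up the metadata, then load the array from the file store.
row = db.execute("SELECT file_path FROM signals WHERE shot=? AND name=?",
                 (10000, "density")).fetchone()
retrieved = np.load(row[0])
```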

  2. Automatic effects of processing fluency in semantic coherence judgments and the role of transient and tonic affective states

    Directory of Open Access Journals (Sweden)

    Sweklej Joanna

    2015-03-01

    Full Text Available Recent literature has reported that judgments of semantic coherence are influenced by a positive affective response due to increased fluency of processing. The present paper investigates whether fluency of processing can be modified by affective responses to the coherent stimuli, as well as the automaticity of the processes involved in semantic coherence judgments. The studies employed the dyads-of-triads task, in which participants are shown two word triads and asked to solve the semantically coherent one or indicate which of the two is semantically coherent. Across two studies in a dual-task paradigm we show that (a) attentional resources moderate insight into semantically coherent word triads, whereas (b) judgments of semantic coherence are independent of attentional resources. We discuss the implications of our findings for how people might form intuitive judgments of semantic coherence.

  3. Data curation + process curation=data integration + science.

    Science.gov (United States)

    Goble, Carole; Stevens, Robert; Hull, Duncan; Wolstencroft, Katy; Lopez, Rodrigo

    2008-11-01

    In bioinformatics, we are familiar with the idea of curated data as a prerequisite for data integration. We neglect, often to our cost, the curation and cataloguing of the processes that we use to integrate and analyse our data. Programmatic access to services, for data and processes, means that compositions of services can be made that represent the in silico experiments or processes that bioinformaticians perform. Data integration through workflows depends on being able to know what services exist and where to find those services. The large number of services and the operations they perform, their arbitrary naming and lack of documentation, however, mean that they can be difficult to use. The workflows themselves are composite processes that could be pooled and reused but only if they too can be found and understood. Thus appropriate curation, including semantic mark-up, would enable processes to be found, maintained and consequently used more easily. This broader view on semantic annotation is vital for full data integration that is necessary for the modern scientific analyses in biology. This article will brief the community on the current state of the art and the current challenges for process curation, both within and without the Life Sciences.

  4. US Air Force Data Processing Manuals

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data Processing Reference manual for United States Air Force surface stations, circa 1960s. TDF-13 stands for Tape Deck Format number 13, the format in which the...

  5. Lobster Processing and Sales Trip Report Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This is a federally mandated log which is required to be mailed in to NMFS after a fishing trip. This data set includes lobster processing and sales information...

  6. Automatic quantitative analysis of microstructure of ductile cast iron using digital image processing

    Directory of Open Access Journals (Sweden)

    Abhijit Malage

    2015-09-01

    Full Text Available Ductile cast iron is also referred to as nodular iron or spheroidal graphite iron. It contains graphite in the form of discrete nodules in a matrix of ferrite and pearlite. In order to determine the mechanical properties, one needs to determine the volume of phases in the matrix and the nodularity in the microstructure of the metal sample. The manual methods available for this are time-consuming, and their accuracy depends on expertise. The paper proposes a novel method for automatic quantitative analysis of the microstructure of ferritic-pearlitic ductile iron which calculates the volume of phases and the nodularity of the sample. It gives results within a very short time (approximately 5 s), with 98% accuracy for the volume of matrix phases and 90% accuracy for nodule detection and analysis, which are within the range specified by the standard for SG 500/7 and were validated by a metallurgist.
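    A generic version of the image-processing steps implied above, thresholding a micrograph, finding graphite particles and scoring their roundness to estimate nodularity, is sketched below. The file name, minimum particle size and the 0.65 circularity cut-off are assumptions for illustration, not the paper's values.
```python
# Hedged sketch: nodularity estimate from a binarized micrograph (placeholder image).
import cv2
import numpy as np

gray = cv2.imread("ductile_iron_micrograph.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
nodular_area, graphite_area = 0.0, 0.0
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if area < 10 or perimeter == 0:          # ignore tiny specks
        continue
    circularity = 4 * np.pi * area / perimeter ** 2   # 1.0 for a perfect circle
    graphite_area += area
    if circularity > 0.65:                   # assumed roundness cut-off
        nodular_area += area

nodularity = 100 * nodular_area / graphite_area if graphite_area else 0.0
print(f"Estimated nodularity: {nodularity:.1f} %")
```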

  7. A method of automatically registering point cloud data based on range images

    Institute of Scientific and Technical Information of China (English)

    田慧; 周绍光; 李浩

    2012-01-01

    Point cloud registration plays an essential role in processing the data acquired with a 3D laser scanner. One traditional registration scheme is based on targets, which need to be scanned separately at each station and allow only semi-automatic registration. In this paper, an automatic registration strategy was developed that converts single-station point clouds to range images by the central projection principle, uses digital image processing techniques to extract the targets automatically, fits the coordinates of their center points, and applies photogrammetric knowledge to achieve automatic point cloud registration. Experimental results demonstrate the effectiveness of this method.
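    The conversion of a single-station point cloud to a range image by central projection can be illustrated as below: each point is mapped to (azimuth, elevation) pixel coordinates and its distance becomes the pixel value. The image resolution and the input array are placeholders, not the paper's configuration.
```python
# Hedged sketch: spherical (central) projection of a point cloud to a range image.
import numpy as np

points = np.random.default_rng(2).uniform(-10, 10, size=(100000, 3))   # x, y, z in metres

r = np.linalg.norm(points, axis=1)
azimuth = np.arctan2(points[:, 1], points[:, 0])        # -pi .. pi
elevation = np.arcsin(points[:, 2] / r)                 # -pi/2 .. pi/2

width, height = 2048, 1024
cols = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
rows = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)

range_image = np.full((height, width), np.nan)
range_image[rows, cols] = r    # simple assignment; a nearest-point rule would need sorting
```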

  8. Interactive data-processing system for metallurgy

    Science.gov (United States)

    Rathz, T. J.

    1978-01-01

    Evaluation of the equipment indicates that the system can rapidly and accurately process metallurgical and materials-processing data for a wide range of applications. Advantages include increased contrast between areas of an image, the ability to analyze images via operator-written programs, and space available for storing images.

  9. Innovative Technologies for Science Data Processing

    Science.gov (United States)

    Ramachandran, R.; Conover, H. T.; Graves, S. J.; Keiser, K.; Smith, M. R.

    2001-05-01

    The Information Technology and Systems Center (ITSC) at the University of Alabama has long been active in information technology research applied to Earth science data. This poster will showcase three key technologies being developed by ITSC: the Earth Science Markup Language (ESML), data mining applied to Earth science data, and data set independent subsetting tools. Each of these technologies is designed to ease data handling by Earth scientists, thereby freeing their time for research. ESML uses the eXtensible Markup Language (XML) as the basis for standardizing metadata or information about data formats, thus facilitating development of search, visualization, and analysis tools that are independent of data type or format. A unique feature of ESML is that it not only describes the content and structure of the data, but also provides semantic information, which allows an application to intelligently interpret the data. Thus, ESML provides a means for working with legacy, current, and future data sets in an integrated fashion, by defining a standard for external metadata to describe the content, structure, and semantics of a file. The Algorithm Development and Mining (ADaM) system applies data mining technologies to Earth science remote sensing data and other spatial data sets. The ADaM system consists of a series of interoperable data readers, preprocessing and analysis modules, and data writers, which can be linked together in many ways to create customized mining processes. This system has been applied to several Earth science problems including tropical cyclone detection, cloud classification, lightning detection, and mesoscale convective system identification. Current research is adapting data mining technologies for real-time processing on board satellites. ITSC has also developed several tools for science data subsetting. The HDF-EOS Web-Based Subsetter (HEW) is designed to work on any properly formatted HDF-EOS swath or grid data file. UAH is working with

  10. Advances in Automatic Metaphor Processing

    Institute of Scientific and Technical Information of China (English)

    贾玉祥; 俞士汶; 朱学锋

    2009-01-01

    Metaphor is pervasive in human language and must be handled for natural language understanding. This paper first discusses the nature of metaphor and how metaphorical expressions are manifested in language. Automatic metaphor processing is then divided into three subtasks: metaphor recognition, metaphor understanding and metaphor generation. The paper surveys research on automatic metaphor processing over the last three decades, emphasizing achievements in recent years. Research on metaphor knowledge bases, which are indispensable for metaphor processing, is also introduced. Since the purpose of automatic metaphor processing is to raise the level of intelligence of natural language processing, the applications of metaphor processing to natural language processing tasks are also discussed. Finally, the paper puts forward some suggestions for future research on automatic Chinese metaphor processing.

  11. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    Science.gov (United States)

    D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina

    2016-02-01

    In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to making the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
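    Two of the corrections listed above, background subtraction and time integration of raw signals, are easy to illustrate in isolation. The sketch below is not ELPP itself; the profiles, bin counts and far-range background window are synthetic placeholders.
```python
# Illustrative sketch of lidar pre-processing steps: background subtraction and time integration.
import numpy as np

rng = np.random.default_rng(3)
n_profiles, n_bins = 60, 4000
raw = rng.poisson(5, size=(n_profiles, n_bins)).astype(float)   # synthetic photon counts

# Background estimated from the far range, where no aerosol signal is expected,
# then subtracted bin by bin for each profile.
background = raw[:, -500:].mean(axis=1, keepdims=True)
corrected = raw - background

# Time integration: average many profiles to improve the signal-to-noise ratio.
integrated = corrected.mean(axis=0)
```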

  12. EARLINET Single Calculus Chain – technical – Part 1: Pre-processing of raw lidar data

    Directory of Open Access Journals (Sweden)

    G. D'Amico

    2015-10-01

    Full Text Available In this paper we describe an automatic tool for the pre-processing of lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. The ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, the ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. The ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of the ELPP module, particular attention has been paid to making the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of the ELPP module is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of the ELPP module. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. The ELPP module has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.

  13. Word Automaticity of Tree Automatic Scattered Linear Orderings Is Decidable

    CERN Document Server

    Huschenbett, Martin

    2012-01-01

    A tree automatic structure is a structure whose domain can be encoded by a regular tree language such that each relation is recognisable by a finite automaton processing tuples of trees synchronously. Words can be regarded as specific simple trees and a structure is word automatic if it is encodable using only these trees. The question naturally arises whether a given tree automatic structure is already word automatic. We prove that this problem is decidable for tree automatic scattered linear orderings. Moreover, we show that in case of a positive answer a word automatic presentation is computable from the tree automatic presentation.

  14. Randomized Primitives for Big Data Processing

    DEFF Research Database (Denmark)

    Stöckel, Morten

    of such data intersection computations, such as approximating the set intersection size and multiplying two matrices. The improvements over the current state of the art methods are either in the form of less space required or less time needed to process the data to compute the answer to the query....

  15. Linking DICOM pixel data with radiology reports using automatic semantic annotation

    Science.gov (United States)

    Pathak, Sayan D.; Kim, Woojin; Munasinghe, Indeera; Criminisi, Antonio; White, Steve; Siddiqui, Khan

    2012-02-01

    Improved access to DICOM studies for both physicians and patients is changing the ways medical imaging studies are visualized and interpreted beyond the confines of radiologists' PACS workstations. While radiologists are trained for viewing and image interpretation, a non-radiologist physician relies on the radiologists' reports. Consequently, patients have historically been informed about their imaging findings via oral communication with their physicians, even though clinical studies have shown that patients respond significantly better to a physician's advice when they are shown their own actual data. Our previous work on automated semantic annotation of DICOM Computed Tomography (CT) images allows us to further link a radiology report with the corresponding images, enabling us to bridge the gap between image data and the human-interpreted textual description of the corresponding imaging studies. The mapping of radiology text is facilitated by a natural language processing (NLP) based search application. When combined with our automated semantic annotation of images, it enables navigation in large DICOM studies by clicking hyperlinked text in the radiology reports. An added advantage of using semantic annotation is the ability to render the organs at their default window-level settings, thus eliminating another barrier to image sharing and distribution. We believe such approaches would potentially enable consumers to have access to their imaging data and navigate them in an informed manner.
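    The "default window level" idea can be sketched as below: read a CT slice with pydicom and map Hounsfield units to 8-bit grey levels using an organ-appropriate window centre and width. The file name, the soft-tissue window values, and the presence of rescale attributes are assumptions for illustration, not details from the paper.
```python
# Hedged sketch: apply a window centre/width to a CT slice read with pydicom.
import numpy as np
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")                       # placeholder file
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

center, width = 50.0, 400.0                                # a typical soft-tissue window (assumed)
low, high = center - width / 2, center + width / 2
display = np.clip((hu - low) / (high - low), 0, 1) * 255   # map to 0..255 grey levels
display = display.astype(np.uint8)
```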

  16. Big Data Components for Business Process Optimization

    Directory of Open Access Journals (Sweden)

    Mircea Raducu TRIFU

    2016-01-01

    Full Text Available These days, more and more people talk about Big Data, Hadoop, NoSQL and so on, but very few technical people have the necessary expertise and knowledge to work with those concepts and technologies. The present issue explains one of the concepts that stand behind two of those keywords: MapReduce. The MapReduce model is what makes Big Data and Hadoop so powerful, fast, and diverse for business process optimization. MapReduce is a programming model, with an implementation built to process and generate large data sets. The paper also presents the benefits of integrating Hadoop in the context of Business Intelligence and Data Warehousing applications. The concepts and technologies behind big data let organizations reach a variety of objectives. Like other new information technologies, the main objective of big data technology is to bring dramatic cost reductions.
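    A minimal, single-machine illustration of the MapReduce programming model discussed above is given below; Hadoop would distribute the same map, shuffle and reduce phases across a cluster. The input records are placeholders.
```python
# Minimal MapReduce sketch in plain Python (illustrative only).
from collections import defaultdict

records = ["order: laptop", "order: phone", "order: laptop", "return: phone"]

# Map phase: emit (key, value) pairs from each input record.
mapped = [(line.split(":")[0].strip(), 1) for line in records]

# Shuffle phase: group values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: aggregate each group independently (which is why it parallelizes well).
reduced = {key: sum(values) for key, values in groups.items()}
print(reduced)   # {'order': 3, 'return': 1}
```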

  17. USING A DIGITAL VIDEO CAMERA AS THE SMART SENSOR OF THE SYSTEM FOR AUTOMATIC PROCESS CONTROL OF GRANULAR FODDER MOLDING

    Directory of Open Access Journals (Sweden)

    M. M. Blagoveshchenskaya

    2014-01-01

    Full Text Available Summary. The most important operation in granular mixed fodder production is the molding process. The properties of granular mixed fodder are defined during this process; they determine the production process and final product quality. The possibility of using a digital video camera as an intelligent sensor for the control system of the production process is analyzed in the article. A parametric model of the process of molding bundles from granular fodder mass is presented, and the dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after it leaves the matrix holes was developed. A mathematical model of the automatic control system (ACS) that uses a reference video frame as the set point was developed in the MATLAB software environment. As a parameter of the bundle molding process, it is proposed to use the value of the specific area determined in the mathematical treatment of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the feed mass from video frame images. Digital video of various operating modes of the molding machine was recorded, and after mathematical processing of the video the transfer functions were determined, with the change of the specific area used as the adjustable parameter. Structural and functional diagrams of the system regulating the feed bundle molding process with the use of digital video cameras were built and analyzed. Based on the solution of the equations of fluid dynamics, a mathematical model of bundle motion after leaving the matrix hole was obtained; in addition to viscosity, the creep property characteristic of the feed mass was considered. The mathematical model of the ACS for the bundle molding process, allowing investigation of the transient processes which occur in a control system that uses a digital video camera as the smart sensor, was developed in Simulink.

  18. Towards automatic lithological classification from remote sensing data using support vector machines

    Science.gov (United States)

    Yu, Le; Porwal, Alok; Holden, Eun-Jung; Dentith, Michael

    2010-05-01

    Remote sensing data can be used effectively as a means to build geological knowledge for poorly mapped terrains. Spectral remote sensing data from space- and air-borne sensors have been widely used for geological mapping, especially in areas of high outcrop density in arid regions. However, spectral remote sensing information by itself cannot be used efficiently for a comprehensive lithological classification of an area because (1) the diagnostic spectral response of a rock within an image pixel is conditioned by several factors, including atmospheric effects, the spectral and spatial resolution of the image, sub-pixel heterogeneity in the chemical and mineralogical composition of the rock, and the presence of soil and vegetation cover; and (2) it captures only surface information and is therefore highly sensitive to noise due to weathering, soil cover, and vegetation. Consequently, for efficient lithological classification, spectral remote sensing data need to be supplemented with other remote sensing datasets that provide geomorphological and subsurface geological information, such as a digital topographic model (DEM) and aeromagnetic data. Each of the datasets contains significant information about geology that, in conjunction, can potentially be used for automated lithological classification using supervised machine learning algorithms. In this study, the support vector machine (SVM), a kernel-based supervised learning method, was applied to automated lithological classification of a study area in northwestern India using remote sensing data, namely ASTER, DEM and aeromagnetic data. Several digital image processing techniques were used to produce derivative datasets that contained enhanced information relevant to lithological discrimination. A series of SVMs (trained using k-fold cross-validation with grid search) were tested using various combinations of input datasets selected from among 50 datasets, including the original 14 ASTER bands and 36 derivative datasets (including 14
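    The "SVMs trained using k-fold cross-validation with grid search" step has a standard form, sketched below with synthetic stand-ins for the stacked ASTER/DEM/aeromagnetic pixel features and lithology labels; the parameter grid is an assumption, not the study's values.
```python
# Hedged sketch: RBF-kernel SVM with grid search and k-fold cross-validation.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.random((500, 20))                 # 20 input bands/derivatives per pixel (synthetic)
y = rng.integers(0, 5, size=500)          # 5 lithological classes (synthetic)

param_grid = {"svc__C": [1, 10, 100], "svc__gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                      param_grid, cv=5)   # 5-fold cross-validation
search.fit(X, y)
print(search.best_params_, search.best_score_)
```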

  19. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    Science.gov (United States)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

    Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on the depth and material of the object and the surrounding material. To obtain the parameters, the shape of the hyperbola has to be fitted. In recent years several methods have been developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas, we apply a simple Hough Transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough Transform, the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems are necessary as input for the learning algorithm.
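    The detection stage described above, a trained cascade classifier scanning a radargram for hyperbola-like patterns, can be sketched with OpenCV as below. The cascade file and radargram are placeholders (training a cascade requires labelled hyperbola samples), and the subsequent Hough transform refinement is not shown.
```python
# Hedged sketch: Viola-Jones-style cascade detection of candidate hyperbolas in a radargram.
import cv2

radargram = cv2.imread("gpr_radargram.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
cascade = cv2.CascadeClassifier("hyperbola_cascade.xml")            # placeholder trained cascade

candidates = cascade.detectMultiScale(radargram, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in candidates:
    cv2.rectangle(radargram, (x, y), (x + w, y + h), 255, 2)        # mark potential objects
```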

  20. Automatic spline-smoothing approach applied to denoise Moroccan resistivity data phosphate deposit “disturbances” map

    Directory of Open Access Journals (Sweden)

    Saad Bakkali

    2010-04-01

    Full Text Available This paper presents a method which is able to filter out noise and suppress outliers of sampled real functions under fairly general conditions. The automatic optimal spline-smoothing approach automatically determines how a cubic spline should be adjusted in a least-squares optimal sense from an a priori selection of the number of points defining the adjusting spline, but not their location on that curve. The method is fast and easily allows for selecting several knots, thereby adding desirable flexibility to the procedure. As an illustration, we apply the AOSSA method to a map of Moroccan phosphate deposit resistivity “disturbances”. The AOSSA smoothing method is an efficient tool for interpreting geophysical potential field data and is particularly suitable for denoising, filtering and analysing resistivity data singularities. The AOSSA smoothing and filtering approach was found to be consistently useful when applied to modeling surface phosphate “disturbances”.
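    The general idea of least-squares cubic-spline smoothing of a noisy profile can be sketched as below; note that scipy chooses knots from a smoothing factor, which differs in detail from the AOSSA knot-selection strategy, and the data are synthetic.
```python
# Hedged sketch: cubic smoothing spline applied to a noisy synthetic resistivity profile.
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(0, 100, 200)                          # distance along profile (arbitrary units)
resistivity = 50 + 10 * np.sin(x / 8) + np.random.default_rng(5).normal(0, 3, x.size)

spline = UnivariateSpline(x, resistivity, k=3, s=len(x) * 9)   # cubic spline, smoothing factor s
denoised = spline(x)
```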

  1. An automatic modeling system of the reaction mechanisms for chemical vapor deposition processes using real-coded genetic algorithms.

    Science.gov (United States)

    Takahashi, Takahiro; Nakai, Hiroyuki; Kinpara, Hiroki; Ema, Yoshinori

    2011-09-01

    The identification of appropriate reaction models is very helpful for developing chemical vapor deposition (CVD) processes. In this study, we developed an automatic system that models reaction mechanisms in CVD processes by analyzing experimental results, namely the cross-sectional shapes of films deposited on substrates with micrometer- or nanometer-sized trenches. The inference engine that models the reaction mechanism was designed using real-coded genetic algorithms (RCGAs). We compared the system performance of two methods: one using simple genetic algorithms (SGAs) with the conventional GA operators and the other using RCGAs with the blend crossover operator (BLX-alpha). Although both methods could successfully model the reaction mechanisms, the RCGAs showed better performance with respect to accuracy and the computational cost of identifying the models.
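    The BLX-alpha blend crossover mentioned above has a simple form: each child gene is drawn uniformly from an interval that extends the parents' range by a fraction alpha on both sides. The sketch below shows that operator in isolation; the parent vectors are hypothetical reaction-model parameters, not values from the paper.
```python
# Minimal sketch of the BLX-alpha blend crossover for a real-coded GA.
import numpy as np

def blx_alpha(parent1, parent2, alpha=0.5, rng=np.random.default_rng()):
    lo = np.minimum(parent1, parent2)
    hi = np.maximum(parent1, parent2)
    span = hi - lo
    # Child genes sampled from [lo - alpha*span, hi + alpha*span], element-wise.
    return rng.uniform(lo - alpha * span, hi + alpha * span)

# Example: crossing two candidate parameter vectors of a hypothetical reaction model.
p1 = np.array([0.8, 1.2e-3, 0.05])
p2 = np.array([0.6, 3.0e-3, 0.02])
child = blx_alpha(p1, p2)
print(child)
```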

  2. Automatic chemical design using a data-driven continuous representation of molecules

    CERN Document Server

    Gómez-Bombarelli, Rafael; Hernández-Lobato, José Miguel; Aguilera-Iparraguirre, Jorge; Hirzel, Timothy D; Adams, Ryan P; Aspuru-Guzik, Alán

    2016-01-01

    We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This generative model allows efficient search and optimization through open-ended spaces of chemical compounds. We train deep neural networks on hundreds of thousands of existing chemical structures to construct two coupled functions: an encoder and a decoder. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to the discrete representation from this latent space. Continuous representations allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our metho...

  3. Modeling Earthen Dike Stability: Sensitivity Analysis and Automatic Calibration of Diffusivities Based on Live Sensor Data

    CERN Document Server

    Melnikova, N B; Sloot, P M A

    2012-01-01

    The paper describes concept and implementation details of integrating a finite element module for dike stability analysis Virtual Dike into an early warning system for flood protection. The module operates in real-time mode and includes fluid and structural sub-models for simulation of porous flow through the dike and for dike stability analysis. Real-time measurements obtained from pore pressure sensors are fed into the simulation module, to be compared with simulated pore pressure dynamics. Implementation of the module has been performed for a real-world test case - an earthen levee protecting a sea-port in Groningen, the Netherlands. Sensitivity analysis and calibration of diffusivities have been performed for tidal fluctuations. An algorithm for automatic diffusivities calibration for a heterogeneous dike is proposed and studied. Analytical solutions describing tidal propagation in one-dimensional saturated aquifer are employed in the algorithm to generate initial estimates of diffusivities.

  4. A fully automatic tool to perform accurate flood mapping by merging remote sensing imagery and ancillary data

    Science.gov (United States)

    D'Addabbo, Annarita; Refice, Alberto; Lovergine, Francesco; Pasquariello, Guido

    2016-04-01

    Flooding is one of the most frequent and expansive natural hazards. High-resolution flood mapping is an essential step in the monitoring and prevention of inundation hazard, both to gain insight into the processes involved in the generation of flooding events, and from the practical point of view of the precise assessment of inundated areas. Remote sensing data are recognized to be useful in this respect, thanks to the high resolution and regular revisit schedules of state-of-the-art satellites, which moreover offer a synoptic overview of the extent of flooding. In particular, Synthetic Aperture Radar (SAR) data present several favorable characteristics for flood mapping, such as their relative insensitivity to the meteorological conditions during acquisitions, as well as the possibility of acquiring independently of solar illumination, thanks to the active nature of the radar sensors [1]. However, flood scenarios are typical examples of complex situations in which different factors have to be considered to provide accurate and robust interpretation of the situation on the ground: the presence of many land cover types, each with a particular signature in the presence of flooding, requires modelling the behavior of the different objects in the scene in order to associate them with flood or no-flood conditions [2]. Generally, the fusion of multi-temporal, multi-sensor, multi-resolution and/or multi-platform Earth observation image data, together with other ancillary information, seems to have a key role in the pursuit of a consistent interpretation of complex scenes. In the case of flooding, distance from the river, terrain elevation, hydrologic information or some combination thereof can add useful information to remote sensing data. Suitable methods, able to manage and merge different kinds of data, are therefore particularly needed. In this work, a fully automatic tool, based on Bayesian Networks (BNs) [3] and able to perform data fusion, is presented. It supplies flood maps

  5. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    Science.gov (United States)

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry science applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or differing morphological characteristics, and it is difficult to measure and validate their accuracy through conventional measurement methods. To address these practical problems and to allow tree crown projection area and crown volume to be extracted automatically by a computer program, this paper proposes an automatic, non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon 120), which uses a convex hull algorithm on plane-scattered data points together with a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments were carried out on 22 common tree species of Beijing, China. The results show that the correlation coefficient of the crown projection area between Av, calculated by the new method, and A4, obtained by the conventional method, reaches 0.964. Based on the 3D LIDAR point cloud data of individual trees, tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection area and volume of individual trees were extracted by this automatic non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry.
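    The projection-area step can be illustrated as below: project the crown points onto the horizontal plane and take the area of their convex hull (for 2D input, scipy reports the enclosed area in the `volume` attribute). The point cloud here is a synthetic placeholder, not scanner data.
```python
# Hedged sketch: crown projection area from the convex hull of (x, y) point coordinates.
import numpy as np
from scipy.spatial import ConvexHull

crown_points = np.random.default_rng(6).normal(0, 2.0, size=(5000, 3))   # x, y, z in metres (synthetic)

hull = ConvexHull(crown_points[:, :2])      # plane-scattered (x, y) points
projection_area = hull.volume               # in 2D, .volume is the area; .area would be the perimeter
print(f"Crown projection area: {projection_area:.2f} m^2")
```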

  6. Contributions to a neurophysiology of meaning: the interpretation of written messages could be an automatic stimulus-reaction mechanism before becoming conscious processing of information.

    Science.gov (United States)

    Maffei, Roberto; Convertini, Livia S; Quatraro, Sabrina; Ressa, Stefania; Velasco, Annalisa

    2015-01-01

    Background. Even though the interpretation of natural language messages is generally conceived as the result of a conscious processing of the message content, the influence of unconscious factors is also well known. What is still insufficiently known is the way such factors work. We have tackled interpretation assuming it is a process, whose basic features are the same for the whole humankind, and employing a naturalistic approach (careful observation of phenomena in conditions the closest to "natural" ones, and precise description before and independently of data statistical analysis). Methodology. Our field research involved a random sample of 102 adults. We presented them with a complete real world-like case of written communication using unabridged message texts. We collected data (participants' written reports on their interpretations) in controlled conditions through a specially designed questionnaire (closed and opened answers); then, we treated it through qualitative and quantitative methods. Principal Findings. We gathered some evidence that, in written message interpretation, between reading and the attribution of conscious meaning, an intermediate step could exist (we named it "disassembling") which looks like an automatic reaction to the text words/expressions. Thus, the process of interpretation would be a discontinuous sequence of three steps having different natures: the initial "decoding" step (i.e., reading, which requires technical abilities), disassembling (the automatic reaction, an unconscious passage) and the final conscious attribution of meaning. If this is true, words and expressions would firstly function like physical stimuli, before being taken into account as symbols. Such hypothesis, once confirmed, could help explaining some links between the cultural (human communication) and the biological (stimulus-reaction mechanisms as the basis for meanings) dimension of humankind.

  7. Contributions to a neurophysiology of meaning: the interpretation of written messages could be an automatic stimulus-reaction mechanism before becoming conscious processing of information

    Directory of Open Access Journals (Sweden)

    Roberto Maffei

    2015-10-01

    Full Text Available Background. Even though the interpretation of natural language messages is generally conceived as the result of a conscious processing of the message content, the influence of unconscious factors is also well known. What is still insufficiently known is the way such factors work. We have tackled interpretation assuming it is a process, whose basic features are the same for the whole humankind, and employing a naturalistic approach (careful observation of phenomena in conditions the closest to “natural” ones, and precise description before and independently of data statistical analysis). Methodology. Our field research involved a random sample of 102 adults. We presented them with a complete real world-like case of written communication using unabridged message texts. We collected data (participants’ written reports on their interpretations) in controlled conditions through a specially designed questionnaire (closed and opened answers); then, we treated it through qualitative and quantitative methods. Principal Findings. We gathered some evidence that, in written message interpretation, between reading and the attribution of conscious meaning, an intermediate step could exist (we named it “disassembling”) which looks like an automatic reaction to the text words/expressions. Thus, the process of interpretation would be a discontinuous sequence of three steps having different natures: the initial “decoding” step (i.e., reading, which requires technical abilities), disassembling (the automatic reaction, an unconscious passage) and the final conscious attribution of meaning. If this is true, words and expressions would firstly function like physical stimuli, before being taken into account as symbols. Such a hypothesis, once confirmed, could help explain some links between the cultural (human communication) and the biological (stimulus-reaction mechanisms as the basis for meanings) dimensions of humankind.

  8. Fast and automatic depth control of iterative bone ablation based on optical coherence tomography data

    Science.gov (United States)

    Fuchs, Alexander; Pengel, Steffen; Bergmeier, Jan; Kahrs, Lüder A.; Ortmaier, Tobias

    2015-07-01

    Laser surgery is an established clinical procedure in dental applications, soft tissue ablation, and ophthalmology. The presented experimental set-up for closed-loop control of laser bone ablation addresses a feedback system and enables safe ablation towards anatomical structures that usually would have high risk of damage. This study is based on combined working volumes of optical coherence tomography (OCT) and Er:YAG cutting laser. High level of automation in fast image data processing and tissue treatment enables reproducible results and shortens the time in the operating room. For registration of the two coordinate systems a cross-like incision is ablated with the Er:YAG laser and segmented with OCT in three distances. The resulting Er:YAG coordinate system is reconstructed. A parameter list defines multiple sets of laser parameters including discrete and specific ablation rates as ablation model. The control algorithm uses this model to plan corrective laser paths for each set of laser parameters and dynamically adapts the distance of the laser focus. With this iterative control cycle consisting of image processing, path planning, ablation, and moistening of tissue the target geometry and desired depth are approximated until no further corrective laser paths can be set. The achieved depth stays within the tolerances of the parameter set with the smallest ablation rate. Specimen trials with fresh porcine bone have been conducted to prove the functionality of the developed concept. Flat bottom surfaces and sharp edges of the outline without visual signs of thermal damage verify the feasibility of automated, OCT controlled laser bone ablation with minimal process time.

  9. The MEM in Measuring Data Processing

    Institute of Scientific and Technical Information of China (English)

    L(U) Wen; TONG Ling; CHEN Guang-ju

    2004-01-01

    A new method for processing measurement data, called the maximum entropy method (MEM), is introduced. The probability density function (pdf) is deduced by the MEM under constraints on powers of the data. A set of experimental data is processed using the method, and the result obtained is close to the real distribution. The pdfs obtained under power constraints of different orders and with different sample sizes are discussed. It is concluded that the MEM with power constraints of order higher than three can basically reproduce the distribution of the data, and that power constraints on the data are not suitable when only few samples are available.

  10. NASA Science Data Processing for SNPP

    Science.gov (United States)

    Hall, A.; Behnke, J.; Lowe, D. R.; Ho, E. L.

    2014-12-01

    NASA's ESDIS Project has been operating the Suomi National Polar-Orbiting Partnership (SNPP) Science Data Segment (SDS) since the launch in October 2011. The science data processing system includes a Science Data Depository and Distribution Element (SD3E) and five Product Evaluation and Analysis Tool Elements (PEATEs): Land, Ocean, Atmosphere, Ozone, and Sounder. The SDS has been responsible for assessing Environmental Data Records (EDRs) for climate quality, providing and demonstrating algorithm improvements/enhancements and supporting the calibration/validation activities as well as instrument calibration and sensor table uploads for mission planning. The SNPP also flies two NASA instruments: OMPS Limb and CERES. The SNPP SDS has been responsible for producing, archiving and distributing the standard products for those instruments in close association with their NASA science teams. The PEATEs leveraged existing science data processing techniques developed under the EOSDIS Program. This enabled the PEATEs to do an excellent job in supporting Science Team analysis for SNPP. The SDS acquires data from three sources: NESDIS IDPS (Raw Data Records (RDRs)), GRAVITE (Retained Intermediate Products (RIPs)), and NOAA/CLASS (higher level products). The SD3E component aggregates the RDRs and distributes them to each of the PEATEs for further analysis and processing. It provides a ~32-day rolling storage of data, available for pickup by the PEATEs. The current system used by NASA will be presented along with plans for streamlining the system in support of continuing NASA's EOS measurements.

  11. Automatic Recognition of Isolated And Interacting Manufacturing Features In Milling Process

    Directory of Open Access Journals (Sweden)

    Abdelilah El Mesbahi

    2014-10-01

    Full Text Available Manufacturing features play an important role as a bridge between design information and manufacturing activities. Recently, various efforts have been concentrated on the development of automatic feature recognition systems. However, only a limited number of features could be recognized, and intersecting features were generally not handled. This paper presents a simple system in which manufacturing features are easily detected using a Chain of Faces and Base of Faces (CF-BF) graph. A feature is modeled by a series/parallel association of open Chains of Faces (OCF) or Closed Chains of Faces (CCF) that rest on a Base Face (BF). The feature is considered a Perfect Manufacturing Feature (PMF) if all faces that participate in the constitution of the OCF/CCF are blank faces; otherwise it is an Imperfect Manufacturing Feature (IMF). In order to establish new Virtual Faces that satisfy this necessary condition, a judicious analysis of the orientation of the frontier faces that rest on the BF is performed. The technique was tested on several parts taken from the literature and the results were satisfactory.

  12. NISAR ISRO science data processing and products

    Science.gov (United States)

    Agrawal, Krishna Murari; Mehra, Raghav; Ryali, Usha Sundari

    2016-05-01

    NASA-ISRO Synthetic Aperture Radar (NISAR) is a dual-frequency (L & S band) mission which will operate in SweepSAR mode. Compared to traditional SAR imaging modes, in which swath and resolution trade off against each other, the SweepSAR imaging concept can acquire data over a large swath (240 km) without compromising azimuth resolution (approximately 6 m). The NISAR L-band and S-band sensors will be developed by JPL-NASA and ISRO respectively. NISAR science data will be downloaded at both NASA and ISRO ground stations. SAC-ISRO will develop the SAR processor for both L & S band data to generate products in compliance with science requirements. Moreover, JPL will develop the L-band SAR processor, and all data products will be available to users. A distributed data processing architecture will be used for handling the large volume of data resulting from the moderate resolution and larger swath in SweepSAR mode. Data products will be available in multiple processing levels, such as raw signal products, signal-processed single-look and multi-look products, ground range products and geo-referenced products in HDF5 & GeoTiff formats. Derived geo-referenced polarimetric and interferometric data products will also be available for dissemination to the users. A rigorous calibration exercise will be performed by acquiring data over reference targets such as the Amazon rainforest and corner reflector sites for the generation of calibrated data products. Furthermore, various science data products (for science applications) will also be derived from the basic data products for operational dissemination.

  13. Flash ADC data processing with correlation coefficients

    Energy Technology Data Exchange (ETDEWEB)

    Blyth, D.; Gibson, M.; Mcfarland, D.; Comfort, J.R., E-mail: Joseph.Comfort@asu.edu

    2014-02-21

    The large growth of flash ADC techniques for processing signals, especially in applications of streaming data, raises issues such as data flow through an acquisition system, long-term storage, and greater complexity in data analysis. In addition, experiments that push the limits of sensitivity need to distinguish legitimate signals from noise. The use of correlation coefficients is examined to address these issues. They are found to be quite successful well into the noise region. The methods can also be extended to Field Programmable Gate Array modules for compressing the data flow and greatly enhancing the event rate capabilities.
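
    As a hedged illustration of the correlation-coefficient idea (not the authors' FPGA implementation), the sketch below slides an assumed pulse template across a digitized trace and flags sample offsets whose Pearson correlation with the template exceeds a threshold; the template shape, threshold and array names are placeholders.

    ```python
    # Sketch: distinguish legitimate pulses from noise via correlation with a template.
    import numpy as np

    def correlation_scan(trace, template):
        n = len(template)
        t = template - template.mean()
        t_norm = np.sqrt(np.sum(t ** 2))
        coeffs = np.zeros(len(trace) - n + 1)
        for i in range(len(coeffs)):
            w = trace[i:i + n] - trace[i:i + n].mean()
            denom = np.sqrt(np.sum(w ** 2)) * t_norm
            coeffs[i] = np.sum(w * t) / denom if denom > 0 else 0.0
        return coeffs

    rng = np.random.default_rng(1)
    template = np.exp(-np.arange(32) / 8.0)        # idealized pulse shape (assumption)
    trace = rng.normal(0.0, 1.0, 4096)             # synthetic noisy streaming trace
    trace[1000:1032] += 3.0 * template             # a pulse buried in the noise
    hits = np.flatnonzero(correlation_scan(trace, template) > 0.6)
    ```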

  14. Simultaneous estimation of absolute and relative permeability by automatic history matching of three-phase flow production data

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A.C.; Li, R.; Oliver, D.S. [Tulsa Univ., Tulsa, OK (United States)

    2001-06-01

    A study was conducted in petroleum engineering to determine the feasibility of estimating absolute permeability fields and parameters that define relative permeability functions by automatic history matching of production data obtained under multiphase flow conditions. A prior model is used to assume irreducible water saturation, critical gas saturation and residual oil saturations. The three-phase oil relative permeability curve was calculated from the two sets of two-phase curves using Stone's Model II. The study considered data regarding pressure, gas-oil-ratio or water-oil ratio. It was concluded that when the parameters that characterize the relative permeability functions of a reservoir are known, then it is possible to estimate the relative permeability curves and log-permeability fields by history matching production data derived under three-phase flow conditions. 30 refs., 5 tabs., 14 figs.

  15. Optimizing ISOCAM data processing using spatial redundancy

    CERN Document Server

    Miville-Deschênes, M A; Abergel, A; Bernard, J P

    2000-01-01

    We present new data processing techniques that make it possible to correct the main instrumental effects that degrade the images obtained by ISOCAM, the camera on board the Infrared Space Observatory (ISO). Our techniques take advantage of the fact that a position on the sky has been observed by several pixels at different times. We use this information (1) to correct the long term variation of the detector response, (2) to correct memory effects after glitches and point sources, and (3) to refine the deglitching process. Our new method allows the detection of faint extended emission with contrast smaller than 1% of the zodiacal background. The data reduction corrects instrumental effects to the point where the noise in the final map is dominated by the readout and photon noise. All raster ISOCAM observations can benefit from the data processing described here. These techniques could also be applied to other raster-type observations (e.g. ISOPHOT or IRAC on SIRTF).

  16. Improved SDT Process Data Compression Algorithm

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Process data compression and trending are essential for improving control system performance. The Swing Door Trending (SDT) algorithm is well designed to adapt to the process trend while retaining the merit of simplicity. But it cannot handle outliers or adapt to the fluctuations of actual data. An Improved SDT (ISDT) algorithm is proposed in this paper. The effectiveness and applicability of the ISDT algorithm are demonstrated by computations on both synthetic and real process data. By applying an adaptive recording limit as well as outlier-detecting rules, a higher compression ratio is achieved and outliers are identified and eliminated. The fidelity of the algorithm is also improved. It can be used in both online and batch mode, and integrated into existing software packages without change.
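
    For reference, a minimal sketch of the classic SDT compressor that the ISDT builds on is given below, assuming a fixed compression deviation `dev`; the adaptive recording limit and outlier rules of the ISDT itself are not reproduced here.

    ```python
    # Swing Door Trending sketch: archive a point only when the "doors" close,
    # i.e. when no straight line from the last archived point can keep all
    # intermediate samples within +/- dev.
    def sdt_compress(times, values, dev):
        archived = [(times[0], values[0])]
        t_a, y_a = times[0], values[0]
        up, low = float("-inf"), float("inf")        # current door slopes
        last = (times[0], values[0])
        for t, y in zip(times[1:], values[1:]):
            dt = t - t_a
            up = max(up, (y - (y_a + dev)) / dt)     # upper door swings down
            low = min(low, (y - (y_a - dev)) / dt)   # lower door swings up
            if up > low:                             # doors closed: archive previous point
                archived.append(last)
                t_a, y_a = last
                dt = t - t_a
                up = (y - (y_a + dev)) / dt
                low = (y - (y_a - dev)) / dt
            last = (t, y)
        archived.append(last)                        # always keep the final sample
        return archived
    ```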

  17. Data processing system of GA and PPPL

    Energy Technology Data Exchange (ETDEWEB)

    Oshima, Takayuki [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2001-11-01

    Results of a 1997 research visit to General Atomics (GA) and the Princeton Plasma Physics Laboratory (PPPL) are reported. The author visited the computer systems of the fusion group at GA, joined the tokamak experiment on DIII-D, in particular the demonstration of remote experimentation within the U.S., and investigated the data processing system of DIII-D, the computer network, and related facilities. After the visit to GA, he visited PPPL and exchanged information about the equipment for remote experimentation between JAERI and PPPL under the US-Japan fusion energy research cooperation. He also investigated the data processing system of the TFTR tokamak, the computer network and so on. Results of a second visit to GA in 2000 are also reported, describing the rapid progress of each piece of data processing equipment, driven by advances in computer technology, in just three years. (author)

  18. Data Processing at the Pierre Auger Observatory

    CERN Document Server

    Vicha, J

    2015-01-01

    Cosmic-ray particles with ultra-high energies (above $10^{18}$ eV) are studied through the properties of extensive air showers which they initiate in the atmosphere. The Pierre Auger Observatory detects these showers with unprecedented exposure and precision and the collected data are processed via dedicated software codes. Monte Carlo simulations of extensive air showers are very computationally expensive, especially at the highest energies and calculations are performed on the GRID for this purpose. The processing of measured and simulated data is described, together with a brief list of physics results which have been achieved.

  19. Advanced instrumentation for the collection, retrieval, and processing of urban stormwater data

    Science.gov (United States)

    Robinson, Jerald B.; Bales, Jerad D.; Young, Wendi S.; ,

    1995-01-01

    The U.S. Geological Survey, in cooperation with the City of Charlotte and Mecklenburg County, North Carolina, has developed a data-collection network that uses advanced instrumentation to automatically collect, retrieve, and process urban stormwater data. Precipitation measurement and water-quality networks provide data for (1) planned watershed simulation models, (2) early warning of possible flooding, (3) computation of material export, and (4) characterization of water quality in relation to basin conditions. Advantages of advanced instrumentation include remote access to real-time data, reduced demands on and more efficient use of limited human resources, and direct importation of data into a geographical information system for display and graphic analysis.

  20. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    Science.gov (United States)

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.

    2011-12-01

    The Surface Water and Ocean Topography (SWOT) mission is a swath-mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The number of revisits depends upon latitude and varies from two (low latitudes) to ten (high latitudes) per 22-day orbit repeat period. The high resolution and the global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at the daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous work showed that the use of ENVISAT data enables the reduction of the uncertainty on some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE generated with a "true" set of parameters. Only pixels representing rivers wider than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference

  1. Big Bicycle Data Processing: from Personal Data to Urban Applications

    Science.gov (United States)

    Pettit, C. J.; Lieske, S. N.; Leao, S. Z.

    2016-06-01

    Understanding the flows of people moving through the built environment is a vital source of information for the planners and policy makers who shape our cities. Smart phone applications enable people to trace themselves through the city and these data can potentially be then aggregated and visualised to show hot spots and trajectories of macro urban movement. In this paper our aim is to develop procedures for cleaning, aggregating and visualising human movement data and translating this into policy relevant information. In conducting this research we explore using bicycle data collected from a smart phone application known as RiderLog. We focus on the RiderLog application initially in the context of Sydney, Australia and discuss the procedures and challenges in processing and cleaning this data before any analysis can be made. We then present some preliminary map results using the CartoDB online mapping platform where data are aggregated and visualised to show hot spots and trajectories of macro urban movement. We conclude the paper by highlighting some of the key challenges in working with such data and outline some next steps in processing the data and conducting higher volume and more extensive analysis.

  2. IceMap250—Automatic 250 m Sea Ice Extent Mapping Using MODIS Data

    Directory of Open Access Journals (Sweden)

    Charles Gignac

    2017-01-01

    Full Text Available The sea ice cover in the North evolves at a rapid rate. To adequately monitor this evolution, tools with high temporal and spatial resolution are needed. This paper presents IceMap250, an automatic sea ice extent mapping algorithm using MODIS reflective/emissive bands. Hybrid cloud-masking using both the MOD35 mask and a visibility mask, combined with downscaling of Bands 3–7 to 250 m, is utilized to delineate sea ice extent using a decision tree approach. IceMap250 was tested on scenes from the freeze-up, stable cover, and melt seasons in the Hudson Bay complex, in Northeastern Canada. IceMap250's first product is a daily composite sea ice presence map at 250 m. Validation based on comparisons with photo-interpreted ground-truth shows the ability of the algorithm to achieve high classification accuracy, with kappa values systematically over 90%. IceMap250's second product is a weekly clear-sky map that provides a synthesis of 7 days of daily composite maps. This map, produced using a majority filter, makes the sea ice presence map even more accurate by filtering out the effects of isolated classification errors. The synthesis maps show spatial consistency through time when compared to passive microwave and national ice services maps.
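
    A hedged sketch of the weekly-synthesis step described above: a per-pixel majority vote over the valid daily classifications. The class encoding, array names and tie handling are assumptions, not taken from the paper.

    ```python
    # Assumed encoding for 7 daily maps: 0 = no data/cloud, 1 = open water, 2 = sea ice.
    import numpy as np

    def weekly_majority(daily_maps):                 # shape (7, rows, cols)
        stack = np.asarray(daily_maps)
        out = np.zeros(stack.shape[1:], dtype=np.uint8)
        for cls in (1, 2):                           # clouds/no-data do not vote
            votes_cls = (stack == cls).sum(axis=0)
            votes_other = (stack == (3 - cls)).sum(axis=0)
            out[votes_cls > votes_other] = cls       # ties and all-cloud pixels stay 0
        return out
    ```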

  3. Multi-Objective Differential Evolution for Automatic Clustering with Application to Micro-Array Data Analysis

    Directory of Open Access Journals (Sweden)

    Sang Yong Han

    2009-05-01

    Full Text Available This paper applies the Differential Evolution (DE) algorithm to the task of automatic fuzzy clustering in a Multi-objective Optimization (MO) framework. It compares the performances of two multi-objective variants of DE over the fuzzy clustering problem, where two conflicting fuzzy validity indices are simultaneously optimized. The resultant Pareto optimal set of solutions from each algorithm consists of a number of non-dominated solutions, from which the user can choose the most promising ones according to the problem specifications. A real-coded representation of the search variables, accommodating a variable number of cluster centers, is used for DE. The performances of the multi-objective DE variants have also been contrasted to that of two of the most well-known schemes of MO clustering, namely the Non-Dominated Sorting Genetic Algorithm (NSGA II) and Multi-Objective Clustering with an unknown number of Clusters K (MOCK). Experimental results using six artificial and four real life datasets of a varying range of complexities indicate that DE holds immense promise as a candidate algorithm for devising MO clustering schemes.

  4. Processing Marine Gravity Data Around Korea

    Science.gov (United States)

    Lee, Y.; Choi, K.; Kim, Y.; Ahn, Y.; Chang, M.

    2008-12-01

    In Korea, four research vessels are currently in operation, after the first research vessel equipped with a shipborne gravity meter was introduced in the 1990s. These are the Onnuri (launched 1991) of KORDI (Korea Ocean Research & Development Institute), the Haeyang2000 (launched 1996) and Badaro1 (launched 2002) of NORI (National Oceanographic Research Institute), and the Tamhae2 (launched 1997) of KIGAM (Korea Institute of Geoscience and Mineral Resources). Of these vessels, Haeyang2000 observed marine gravity data at over 150,000 points each year from 1996 to 2003. Haeyang2000, about 2,500 tons, is unable to operate close to shore, so NORI constructed the 600-ton research ship Badaro1, which has observed nearshore marine gravity data since 2002. Haeyang2000 finished observing offshore marine gravity data within Korean territorial waters by 2003; currently Badaro1 is observing nearshore marine gravity data. These shipborne gravity data are very useful and important for geodesy and geophysics research and can contribute to the development of these studies. In this study, NORI's shipborne gravity data from 1996 to 2007 have been processed as fundamental data for computing a precise Korean geoid. The marine gravity processing steps are as follows. 1. Check the time sequence, latitude and longitude position, etc. of the shipborne gravity data. 2. Adjustment of the tide level below the pier and meter drift correction for each cruise. 3. Elimination of turning points. 4. The time lag correction. 5. Computation of the research vessel's velocities and heading angles and the Eötvös correction. 6. Kalman filtering of GPS navigation data using cross-over points. 7. Cross-over correction using least squares adjustment. About 2,058,000 points from NORI's marine gravity data from 1996 to 2007 have been processed in this study. The distribution of free-air anomalies was -41.0 mGal to 136.0 mGal (mean 8.90 mGal) within Korean territorial waters. The free-air anomalies processed with the marine gravity data are
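
    Step 5 above includes the Eötvös correction. As a hedged illustration (the constants are the commonly quoted approximate values and the function name is a placeholder), a standard closed form with speed in knots, heading in degrees from north and latitude in degrees gives the correction in mGal:

    ```python
    # Eötvös correction sketch: E = 7.503 * V * cos(lat) * sin(heading) + 0.004154 * V^2  [mGal]
    import numpy as np

    def eotvos_correction(speed_knots, heading_deg, latitude_deg):
        lat = np.radians(latitude_deg)
        azi = np.radians(heading_deg)
        return 7.503 * speed_knots * np.cos(lat) * np.sin(azi) + 0.004154 * speed_knots ** 2

    # e.g. a ship steaming due east at 10 kn near 35 N gains roughly +62 mGal
    print(eotvos_correction(10.0, 90.0, 35.0))
    ```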

  5. Multivariate Process Control with Autocorrelated Data

    DEFF Research Database (Denmark)

    Kulahci, Murat

    2011-01-01

    As sensor and computer technology continues to improve, it becomes a normal occurrence that we are confronted with high dimensional data sets. As in many areas of industrial statistics, this brings forth various challenges in statistical process control and monitoring. These new high dimensional data often exhibit not only cross-correlation among the quality characteristics of interest but also serial dependence as a consequence of high sampling frequency and system dynamics. In practice, the most common method of monitoring multivariate data is through what is called the Hotelling's T2 statistic...
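
    A minimal sketch of the Hotelling's T2 statistic mentioned above, assuming a Phase I reference sample is used to estimate the in-control mean and covariance; note that this plain form ignores the serial dependence the abstract warns about.

    ```python
    # Hotelling's T^2 for multivariate process monitoring (reference-sample estimate).
    import numpy as np

    def hotelling_t2(reference, new_obs):
        mu = reference.mean(axis=0)
        S_inv = np.linalg.inv(np.cov(reference, rowvar=False))
        diff = np.atleast_2d(new_obs) - mu
        return np.einsum("ij,jk,ik->i", diff, S_inv, diff)   # one T^2 value per observation

    rng = np.random.default_rng(0)
    phase1 = rng.multivariate_normal([0, 0, 0], np.eye(3), size=200)   # in-control data
    new = rng.multivariate_normal([0, 0, 0], np.eye(3), size=20)       # new observations
    t2 = hotelling_t2(phase1, new)                                     # compare to a control limit
    ```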

  6. Internally- and Externally-Driven Network Transitions as a Basis for Automatic and Strategic Processes in Semantic Priming: Theory and Experimental Validation

    Directory of Open Access Journals (Sweden)

    Itamar Lerner

    2014-04-01

    Full Text Available For the last four decades, semantic priming – the facilitation in recognition of a target word when it follows the presentation of a semantically related prime word – has been a central topic in research of human cognitive processing. Studies have drawn a complex picture of findings which demonstrated the sensitivity of this priming effect to a unique combination of variables, including, but not limited to, the type of relatedness between primes and targets, the prime-target SOA, the relatedness proportion in the stimuli list and the specific task subjects are required to perform. Automatic processes depending on the activation patterns of semantic representations in memory and controlled strategies adapted by individuals when attempting to maximize their recognition performance have both been implicated in contributing to the results. Lately, we have published a new model of semantic priming that addresses the majority of these findings within one conceptual framework. In our model, semantic memory is depicted as an attractor neural network in which stochastic transitions from one stored pattern to another are continually taking place due to synaptic depression mechanisms. We have shown how such transitions, in combination with a reinforcement-learning rule that adjusts their pace, resemble the classic automatic and controlled processes involved in semantic priming and account for a great number of the findings in the literature. Here, we review the core findings of our model and present new simulations that show how similar principles of parameter-adjustments could account for additional data not addressed in our previous studies, such as the relation between expectancy and inhibition in priming, target frequency and target degradation effects. Finally, we describe two human experiments that validate several key predictions of the model.

  7. An Algorithm for Automatic Road Asphalt Edge Delineation from Mobile Laser Scanner Data Using the Line Clouds Concept

    Directory of Open Access Journals (Sweden)

    Carlos Cabo

    2016-09-01

    Full Text Available Accurate road asphalt extent delineation is needed for road and street planning, road maintenance, and road safety assessment. In this article, a new approach for automatic roadside delineation is developed based on the line clouds concept. The method relies on line cloud grouping from point cloud laser data. Using geometric criteria, the initial 3D LiDAR point data is structured in lines covering the road surface. These lines are then grouped according to a set of quasi-planar restriction rules. Road asphalt edge limits are extracted from the end points of lines belonging to these groups. Finally a two-stage smoothing procedure is applied to correct for edge occlusions and other anomalies. The method was tested on a 2.1 km stretch of road, and the results were checked using a RTK-GNSS measured dataset as ground truth. Correctness and completeness were 99% and 97%, respectively.

  8. An automatic high precision registration method between large area aerial images and aerial light detection and ranging data

    Science.gov (United States)

    Du, Q.; Xie, D.; Sun, Y.

    2015-06-01

    The integration of digital aerial photogrammetry and Light Detection And Ranging (LiDAR) is an inevitable trend in the surveying and mapping field. We calculate the exterior orientation elements of the images in the LiDAR coordinate frame to realize automatic high-precision registration between aerial images and LiDAR data. There are two ways to calculate the orientation elements. One is single-image spatial resection using image-matched 3D points that are registered to the LiDAR data. The other is Position and Orientation System (POS) data supported aerotriangulation. The high-precision registration points are selected as Ground Control Points (GCPs) instead of measuring GCPs manually during aerotriangulation. The registration experiments indicate that the method of registering aerial images and LiDAR points offers higher automation and precision compared with manual registration.

  9. Automatic Generation of Assembly Sequence for the Planning of Outfitting Processes in Shipbuilding

    NARCIS (Netherlands)

    Wei, Y.

    2012-01-01

    The most important characteristics of the outfitting processes in shipbuilding are: 1. The processes involve many interferences between yard and different subcontractors. In recent years, the use of outsourcing and subcontracting has become a widespread strategy of western shipyards. There exists no

  10. A Web-based Tool for Automatizing the Software Process Improvement Initiatives in Small Software Enterprises

    NARCIS (Netherlands)

    Garcia, I.; Pacheco, C.

    2010-01-01

    Top-down process improvement approaches provide a high-level model of what the process of a software development organization should be. Such models are based on the consensus of a designated working group on how software should be developed or maintained. They are very useful in that they provide g

  11. Near Real Time Processing Chain for Suomi NPP Satellite Data

    Science.gov (United States)

    Monsorno, Roberto; Cuozzo, Giovanni; Costa, Armin; Mateescu, Gabriel; Ventura, Bartolomeo; Zebisch, Marc

    2014-05-01

    Since 2009, the EURAC satellite receiving station, located at Corno del Renon, an obstacle-free site at 2260 m a.s.l., has been acquiring data from the Aqua and Terra NASA satellites equipped with Moderate Resolution Imaging Spectroradiometer (MODIS) sensors. The experience gained with this local ground segment has given the opportunity of adapting and modifying the processing chain for MODIS data to Suomi NPP, the natural successor to the Terra and Aqua satellites. The processing chain, initially implemented by means of a proprietary system supplied by Seaspace and Advanced Computer System, was further developed by the engineers of EURAC's Institute for Applied Remote Sensing. Several algorithms have been developed using MODIS and Visible Infrared Imaging Radiometer Suite (VIIRS) data to produce snow cover, particulate matter estimation and meteo maps. These products are implemented on a common processor structure based on the use of configuration files and a generic processor. Data and products are then automatically delivered to the customers, such as the Civil Protection office of the Autonomous Province of Bolzano. For the processing phase we defined two goals: i) the adaptation and implementation of the products already available for MODIS (and possibly new ones) for VIIRS, one of the sensors on board Suomi NPP; ii) the use of an open source processing chain in order to process NPP data in near real time, exploiting the knowledge we acquired on parallel computing. In order to achieve the second goal, the S-NPP data received and ingested are sent as input to the RT-STPS (Real-time Software Telemetry Processing System) software developed by the NASA Direct Readout Laboratory (DRL), which outputs RDR (Raw Data Record) files for the VIIRS, ATMS (Advanced Technology Microwave Sounder) and CrIS (Cross-track Infrared Sounder) sensors. The RDRs are then transferred to a server equipped with the CSPP (Community Satellite Processing Package) software developed by the University of

  12. The effects of total sleep deprivation on semantic priming: event-related potential evidence for automatic and controlled processing strategies.

    Science.gov (United States)

    López Zunini, Rocío; Muller-Gass, Alexandra; Campbell, Kenneth

    2014-02-01

    There is general consensus that performance on a number of cognitive tasks deteriorates following total sleep deprivation. At times, however, subjects manage to maintain performance. This may be because of an ability to switch cognitive strategies including the exertion of compensatory effort. The present study examines the effects of total sleep deprivation on a semantic word priming task. Word priming is unique because it can be carried out using different strategies involving either automatic, effortless or controlled, effortful processing. Twelve subjects were presented with word pairs, a prime and a target, that were either highly semantically associated (cat…dog), weakly associated (cow…barn) or unassociated (apple…road). In order to increase the probability of the use of controlled processing following normal sleep, the subject's task was to determine if the target word was semantically related to the prime. Furthermore, the time between the offset of the prime and the onset of the target was relatively long, permitting the use of an effortful, expectancy-predictive strategy. Event-related potentials (ERPs) were recorded from 64 electrode sites. After normal sleep, RTs were faster and accuracy higher to highly associated targets; this performance advantage was also maintained following sleep deprivation. A large negative deflection, the N400, was larger to weakly associated and unassociated targets in both sleep-deprived and normal conditions. The overall N400 was however larger in the normal sleep condition. Moreover, a long-lasting negative slow wave developed between the offset of the prime and the onset of the target. These physiological measures are consistent with the use of an effortful, predictive strategy following normal sleep but an automatic, effortless strategy following total sleep deprivation. A picture priming task was also run. This task benefits less from the use of a predictive strategy. Accordingly, in this task, ERPs following the

  13. Brain activation associated with automatic processing of alcohol‐related cues in young heavy drinkers and its modulation by alcohol administration

    NARCIS (Netherlands)

    F. Kreusch; V. Goffaux; N. Siep; K. Houben; E. Quertemont; R.W. Wiers

    2015-01-01

    Background: While the automatic processing of alcohol-related cues by alcohol abusers is well established in experimental psychopathology approaches, the cerebral regions involved in this phenomenon and the influence of alcohol intake on this process remain unknown. The aim of this functional magnet

  14. Automatic terrain modeling using transfinite element analysis

    KAUST Repository

    Collier, Nathaniel O.

    2010-05-31

    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  15. AUTOMATIC EXTRACTION OF BUILDING ROOF PLANES FROM AIRBORNE LIDAR DATA APPLYING AN EXTENDED 3D RANDOMIZED HOUGH TRANSFORM

    Directory of Open Access Journals (Sweden)

    E. Maltezos

    2016-06-01

    Full Text Available This study aims to extract automatically building roof planes from airborne LIDAR data applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetative areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building is performed applying extensions of the RHT associated with additional constraint criteria during the random selection of the 3 points aiming at the optimum adaptation to the building rooftops as well as using a simple design of the accumulator that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of the point and the use of additional information. An indicative experimental comparison to verify the advantages of the extended RHT compared to the 3D Standard Hough Transform (SHT) is implemented as well as the sensitivity of the proposed extensions and accumulator design is examined in the view of quality and computational time compared to the default RHT. Further, a comparison between the extended RHT and the RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.

  16. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    Science.gov (United States)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

    This study aims to extract automatically building roof planes from airborne LIDAR data applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetative areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building is performed applying extensions of the RHT associated with additional constraint criteria during the random selection of the 3 points aiming at the optimum adaptation to the building rooftops as well as using a simple design of the accumulator that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of the point and the use of additional information. An indicative experimental comparison to verify the advantages of the extended RHT compared to the 3D Standard Hough Transform (SHT) is implemented as well as the sensitivity of the proposed extensions and accumulator design is examined in the view of quality and computational time compared to the default RHT. Further, a comparison between the extended RHT and the RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
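
    A simplified sketch of the randomized-Hough idea underlying the two records above: repeatedly sample 3 points, derive a candidate plane, and vote in a coarse accumulator. The additional constraint criteria, accumulator design and refinement stage of the papers are not reproduced, and all parameters are illustrative.

    ```python
    # Basic 3D randomized Hough transform for plane detection (theta, phi, rho parameterisation).
    import numpy as np

    def rht_planes(points, iterations=20000, bins=(36, 18, 50), rng=None):
        rng = rng or np.random.default_rng(0)
        acc = {}
        rho_max = np.linalg.norm(points, axis=1).max()
        for _ in range(iterations):
            p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p2 - p1, p3 - p1)
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                        # degenerate (collinear) sample
            n /= norm
            if n[2] < 0:                        # fix hemisphere so the angles are unique
                n = -n
            rho = np.dot(n, p1)
            theta = np.arctan2(n[1], n[0]) % (2 * np.pi)
            phi = np.arccos(np.clip(n[2], -1.0, 1.0))
            key = (int(theta / (2 * np.pi) * bins[0]),
                   int(phi / (np.pi / 2) * bins[1]),
                   int((rho / rho_max + 1) / 2 * bins[2]))
            acc[key] = acc.get(key, 0) + 1
        return sorted(acc.items(), key=lambda kv: -kv[1])[:10]   # most prominent cells
    ```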

  17. Love thy neighbour: automatic animal behavioural classification of acceleration data using the K-nearest neighbour algorithm.

    Directory of Open Access Journals (Sweden)

    Owen R Bidder

    Full Text Available Researchers hoping to elucidate the behaviour of species that aren't readily observed are able to do so using biotelemetry methods. Accelerometers in particular are proving especially effective and have been used on terrestrial, aquatic and volant species with success. In the past, behavioural modes were detected in accelerometer data through manual inspection, but with developments in technology, modern accelerometers now record at frequencies that make this impractical. In light of this, some researchers have suggested the use of various machine learning approaches as a means to classify accelerometer data automatically. We feel uptake of this approach by the scientific community is inhibited for two reasons: 1) most machine learning algorithms require selection of summary statistics, which obscure the decision mechanisms by which classifications are arrived at, and 2) they are difficult to implement without appreciable computational skill. We present a method which allows researchers to classify accelerometer data into behavioural classes automatically using a primitive machine learning algorithm, k-nearest neighbour (KNN). Raw acceleration data may be used in KNN without selection of summary statistics, and it is easily implemented using the freeware program R. The method is evaluated by detecting 5 behavioural modes in 8 species, with examples of quadrupedal, bipedal and volant species. Accuracy and precision were found to be comparable with other, more complex methods. In order to assist in the application of this method, the script required to run KNN analysis in R is provided. We envisage that the KNN method may be coupled with methods for investigating animal position, such as GPS telemetry or dead-reckoning, in order to implement an integrated approach to movement ecology research.
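
    The paper itself provides an R script; as a hedged, comparable sketch in Python with scikit-learn (synthetic placeholder data, illustrative window size and behaviour labels), raw acceleration windows can be fed to KNN directly, without summary statistics:

    ```python
    # KNN on raw acceleration windows (no hand-crafted summary statistics).
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 90))                 # placeholder: 30 samples x 3 axes per window
    labels = rng.choice(["walk", "rest", "feed", "run", "groom"], size=600)  # synthetic labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    print(classification_report(y_te, knn.predict(X_te)))   # chance-level here: data are random
    ```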

  18. Robust processing of mining subsidence monitoring data

    Energy Technology Data Exchange (ETDEWEB)

    Wang Mingzhong; Huang Guogang [Pingdingshan Mining Bureau (China); Wang Yunjia; Guogangli [China Univ. of Mining and Technology, Xuzhou (China)

    1996-12-31

    Since China began research on mining subsidence in the 1950s, more than one thousand observation lines have been surveyed. Yet monitoring data sometimes contain quite a lot of outliers because of the limitations of observation and of geological and mining conditions. In China nowadays, the method of processing mining subsidence monitoring data is based on the principle of the least squares method. This can produce lower accuracy, less reliability, or even errors. For the reasons given above, the authors, according to the actual situation in China, have done research on the robust processing of mining subsidence monitoring data with respect to how to obtain prediction parameters. The authors have derived the related formulas, designed computational programmes, carried out a great quantity of calculation and simulation, and achieved good results. (orig.)

  19. Square Kilometre Array Science Data Processing

    Science.gov (United States)

    Nikolic, Bojan; SDP Consortium, SKA

    2014-04-01

    The Square Kilometre Array (SKA) is planned to be, by a large factor, the largest and most sensitive radio telescope ever constructed. The first phase of the telescope (SKA1), now in the design phase, will in itself represent a major leap in capabilities compared to current facilities. These advances are to a large extent being made possible by advances in available computer processing power, so that larger numbers of smaller, simpler and cheaper receptors can be used. As a result of greater reliance and demands on computing, ICT is becoming an ever more integral part of the telescope. The Science Data Processor is the part of the SKA system responsible for imaging, calibration, pulsar timing, confirmation of pulsar candidates, derivation of some further data products, archiving and providing the data to the users. It will accept visibilities at data rates of several TB/s and require processing power for imaging in the range of 100 petaFLOPS to ~1 exaFLOPS, putting SKA1 into the regime of exascale radio astronomy. In my talk I will present the overall SKA system requirements and how they drive these high data throughput and processing requirements. Some of the key challenges for the design of the SDP are: - identifying sufficient parallelism to utilise the very large numbers of separate compute cores that will be required to provide exascale computing throughput; - managing efficiently the high internal data flow rates; - a conceptual architecture and software engineering approach that will allow adaptation of the algorithms as we learn about the telescope and the atmosphere during the commissioning and operational phases; - system management that will deal gracefully with (inevitably frequent) failures of individual units of the processing system. In my talk I will present possible initial architectures for the SDP system that attempt to address these and other challenges.

  20. ICESat-2 Data Management Services and Processes

    Science.gov (United States)

    Tanner, S.; Fowler, D. K.; Bond, C.; Stowe, M.; Webster, D.; Steiker, A. E.; Fowler, C.; McAllister, M.

    2015-12-01

    NASA'S Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) will be launching in 2017 and will quickly begin generating an enormous amount of data. Close to a terabyte (TB) of data per day will be associated with the satellite's Advanced Topographic Laser Altimeter System (ATLAS) instrument. These data will be archived and made available for the public through NASA's Distributed Active Archive Center (DAAC) located at the National Snow and Ice Data Center (NSIDC) in Boulder, Colorado. Because of the expected volume of data, NSIDC and its partners are working on new capabilities and preparations that will be required to fully support the user community. These include using new processes and protocols simply to move the data to the NSIDC, as well as new tools for helping users find and download only the data they need. Subsetting, visualization and analysis capabilities across all of the ICESat-2 data products will be critical to dealing with data. This presentation will explore the steps being taken by NSIDC and others to implement and make these capabilities available.

  1. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    Science.gov (United States)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) CVIPtools Graphical User Interface, b) CVIPtools C library and c) CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any other user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis; the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  2. A Prototype Expert System for Automatic Generation of Image Processing Programs

    Institute of Scientific and Technical Information of China (English)

    Song Maoqiang; Felix Grimm; et al.

    1991-01-01

    A prototype expert system for generating image processing programs using the subroutine package SPIDER is described in this paper.Based on an interactive dialog,the system can generate a complete application program using SPIDER routines.

  3. Automatic assist feature placement optimization based on process-variability reduction

    Science.gov (United States)

    Jayaram, Srividya; Yehia, Ayman; Bahnas, Mohamed; Maaty Omar, Hesham A.; Bozkus, Zeki; Sturtevant, John L.

    2007-10-01

    To maximize the process window and CD control of main features, sizing and placement rules for sub-resolution assist features (SRAF) need to be optimized, subject to the constraint that the SRAFs not print through the process window. With continuously shrinking target dimensions, generation of traditional rule-based SRAFs is becoming an expensive process in terms of time, cost and complexity. This has created an interest in other rule optimization methodologies, such as image contrast and other edge- and image-based objective functions. In this paper, we propose using an automated model-based flow to obtain the optimal SRAF insertion rules for a design and reduce the time and effort required to define the best rules. In this automated flow, SRAF placement is optimized by iteratively generating the space-width rules and assessing their performance against process variability metrics. Multiple metrics are used in the flow. Process variability (PV) band thickness is a good indicator of the process window enhancement. Depth of focus (DOF), the total range of focus that can be tolerated, is also a highly descriptive metric for the effectiveness of the sizing and placement rules generated. Finally, scatter bar (SB) printing margin calculations assess the allowed exposure range that prevents scatter bars from printing on the wafer.

  4. BepiColombo Science Data Processing and Archiving System

    Science.gov (United States)

    Martinez, Santa; Ortiz de Landaluce, Inaki

    2015-12-01

    The approach selected for BepiColombo for the processing, analysis and archiving of the science data represents a significant change with respect to previous ESA planetary missions, and the Science Ground Segment (SGS), located at ESAC, will play a key role in these activities. This contribution will summarise the key features of the selected approach, and will describe its implementation, with focus on the following aspects: - The use of state-of-the-art virtualisation technology for automatic build, deployment and execution of the pipelines as independent application containers. This will allow specific software environments, and underlying hardware resources, to be isolated, scaled and accessed in a homogeneous fashion. - A set of core libraries under development at the SGS (e.g. telemetry decoding, PDS product generation/validation, conversion to engineering units, Java to SPICE binding, geometry computations) aimed to be reused for certain processing steps in different pipelines. The implementation follows a quite generic and modular architecture providing a high level of flexibility and adaptability, which will allow its re-usability by future ESA planetary missions.

  5. Bistatic sAR data processing algorithms

    CERN Document Server

    Qiu, Xiaolan; Hu, Donghui

    2013-01-01

    Synthetic Aperture Radar (SAR) is critical for remote sensing. It works day and night, in good weather or bad. Bistatic SAR is a new kind of SAR system, where the transmitter and receiver are placed on two separate platforms. Bistatic SAR is one of the most important trends in SAR development, as the technology renders SAR more flexible and safer when used in military environments. Imaging is one of the most difficult and important aspects of bistatic SAR data processing. Although traditional SAR signal processing is fully developed, bistatic SAR has a more complex system structure, so sign

  6. Telemedicine optoelectronic biomedical data processing system

    Science.gov (United States)

    Prosolovska, Vita V.

    2010-08-01

    The telemedicine optoelectronic biomedical data processing system is created to share medical information for health monitoring and for timely and rapid response to crises. The system includes the main blocks: a bioprocessor, an analog-digital converter for biomedical images, an optoelectronic module for image processing, an optoelectronic module for parallel recording and storage of biomedical images, and a matrix screen display of biomedical images. The rated temporal characteristics of the blocks are determined by the particular triggering optoelectronic couple in the analog-digital converters and by the imaging time for the matrix screen. The element base for the hardware implementation of the developed matrix screen is integrated optoelectronic couples produced by selective epitaxy.

  7. Algorithm of Dynamic Operation Process of Hydraulic Automatically Operated Canals with Constant-Downstream Level Gates

    Institute of Scientific and Technical Information of China (English)

    ZHANG Li-wei; FENG Xiao-bo; WANG Chang-de

    2005-01-01

    On the basis of an analysis of the governing process of the downstream water level gates AVIO and AVIS, a mathematical model for simulating the dynamic operation of hydraulically automated irrigation canals equipped with AVIO and AVIS gates is presented. The main point of this mathematical model is to first apply a set of unsteady flow equations (the St. Venant equations here), treating the gate movement as a dynamic boundary condition, and then to decouple the interaction of the gate movement with the change of canal flow. In this process, it is necessary to give the gates' open-loop transfer function, whose input is the water level deviation and whose output is the gate discharge. The results of this simulation for a practical canal reach show satisfactory accuracy.

  8. Automatic Information Processing and High-Performance Skills: Applications to Training, Transfer, and Retention

    Science.gov (United States)

    1991-04-01

    Ms. Kristie Holahan helped with data collection and report preparation, and Mr. Jeffrey Schmidt assisted in data collection. ... approximately every second. Updates or so-called refreshes to the pattern occurred every 7 seconds. When a refresh occurred, a new dot was added to the ... the response button. Response times were collected only for correct target identifications or so-called "hits." If a subject failed to find the

  9. Arc sensing system for automatic weld seam tracking (II) — Signal processing

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Due to the violent disturbance of the welding arc, signal processing is the key problem in applying the sensor. By means of the new technique, the arc sensor can recognize not only V-grooves but also lap joints and butt joints. The sensor has good recognition ability even for welding processes with very large current disturbances, e.g. pulsed arc and short-circuit welding. The proposed technique is developed on the basis of modern digital filtering theory and the mathematical transformation of the signals from the time domain into the frequency domain.

  10. Information processing of earth resources data

    Science.gov (United States)

    Zobrist, A. L.; Bryant, N. A.

    1982-01-01

    Current trends in the use of remotely sensed data include integration of multiple data sources of various formats and use of complex models. These trends have placed a strain on information processing systems because an enormous number of capabilities are needed to perform a single application. A solution to this problem is to create a general set of capabilities which can perform a wide variety of applications. General capabilities for the Image-Based Information System (IBIS) are outlined in this report. They are then cross-referenced for a set of applications performed at JPL.

  11. DATA PROCESSING FOR GPS PRECISE POINT POSITIONING

    Institute of Scientific and Technical Information of China (English)

    HU Cong-wei; CHEN Wu; GAO Shah; CHEN Yong-qi; DING Xiao-li

    2005-01-01

    In the data processing for precise point positioning (PPP), the undifferenced method is commonly used. However, GPS measurements can also be differenced between satellites or between observation epochs. In theory, these differencing approaches should be mathematically equivalent. The positioning performance of four PPP data models, namely un-difference (UD), satellite difference (SD), time difference (TD) and time-satellite difference (TSD), is examined using 24 h of GPS observations. The positioning accuracy, convergence of the ambiguities, and tropospheric delay estimation with these four models are compared with each other.
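
    As a hedged illustration of how the four observables relate (array name, sizes and the reference satellite are placeholders; the estimation models themselves are not shown), the differenced quantities can be formed directly from an epochs-by-satellites matrix of undifferenced observations:

    ```python
    # Forming SD, TD and TSD observables from undifferenced carrier-phase data.
    import numpy as np

    phase = np.random.default_rng(0).normal(size=(120, 8))   # placeholder: 120 epochs x 8 satellites

    ref_sat = 0
    sd = phase - phase[:, [ref_sat]]       # satellite difference: cancels the receiver clock
    td = np.diff(phase, axis=0)            # time difference: cancels constant ambiguities
    tsd = np.diff(sd, axis=0)              # time-satellite difference: cancels both
    ```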

  12. Reduced capacity in automatic processing of facial expression in restrictive anorexia nervosa and obesity

    NARCIS (Netherlands)

    Cserjesi, Renata; Vermeulen, Nicolas; Lenard, Laszlo; Luminet, Olivier

    2011-01-01

    There is growing evidence that disordered eating is associated with facial expression recognition and emotion processing problems. In this study, we investigated the question of whether anorexia and obesity occur on a continuum of attention bias towards negative facial expressions in comparison with

  13. Experiences with automatic N and P measurements of an activated sludge process in a research environment

    DEFF Research Database (Denmark)

    Isaacs, Steven Howard; Temmink, H.

    1996-01-01

    by Flow Injection Analysis (FIA). Two batch set-ups are described. In the first, one of the two 800 1 nitrifying/denitrifying tanks of a pilot-scale alternating process is employed as batch reactor, which has the advantage of a high measurement frequency and little preparatory and clean-up effort...

  14. Development of the Concise Data Processing Assessment

    Directory of Open Access Journals (Sweden)

    James Day

    2011-06-01

    Full Text Available The Concise Data Processing Assessment (CDPA) was developed to probe student abilities related to the nature of measurement and uncertainty and to handling data. The diagnostic is a ten-question, multiple-choice test that can be used as both a pre-test and post-test. A key component of the development process was interviews with students, which were used both to uncover common modes of student thinking and to validate item wording. To evaluate the reliability and discriminatory power of this diagnostic, we performed statistical tests focusing both on item analysis (item difficulty index, item discrimination index, and point-biserial coefficient) and on the entire test (test reliability and Ferguson's delta). Scores on the CDPA range from chance (for novices) to about 80% (for experts), indicating that it possesses good dynamic range. Overall, the results indicate that the CDPA is a reliable assessment tool for measuring targeted abilities in undergraduate physics students.
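
    For reference, a sketch of the item statistics named above, using standard classical-test-theory definitions on a 0/1 response matrix (rows are students, columns are the ten items); the 27% split used for the discrimination index is a common convention, not necessarily the authors' exact choice.

    ```python
    # Item difficulty, discrimination index and point-biserial coefficient.
    import numpy as np

    def item_statistics(responses):
        responses = np.asarray(responses, dtype=float)
        total = responses.sum(axis=1)
        difficulty = responses.mean(axis=0)                       # item difficulty index P
        point_biserial = np.array([
            np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
            for j in range(responses.shape[1])
        ])                                                        # corrected item-total correlation
        order = np.argsort(total)                                 # D: top 27% minus bottom 27%
        k = max(1, int(0.27 * len(total)))
        discrimination = responses[order[-k:]].mean(axis=0) - responses[order[:k]].mean(axis=0)
        return difficulty, discrimination, point_biserial

    rng = np.random.default_rng(0)
    resp = (rng.random((120, 10)) < 0.6).astype(int)              # synthetic 120 students x 10 items
    P, D, r_pb = item_statistics(resp)
    ```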

  15. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic; they may also look rather complicated, in the sense that it may not be easy to name the rule by which the sequence is generated; however, there exists a rule which generates the sequence. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.
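
    A classic concrete example (not taken from the book's text) is the Thue-Morse sequence, whose n-th term can be produced by a two-state automaton reading the binary digits of n, or equivalently from the parity of the number of 1-bits of n:

    ```python
    # Thue-Morse sequence: a standard example of a 2-automatic sequence.
    def thue_morse(n_terms):
        return [bin(n).count("1") % 2 for n in range(n_terms)]

    print(thue_morse(16))   # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
    ```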

  16. Air conditioning for data processing system areas

    Directory of Open Access Journals (Sweden)

    Hernando Camacho García

    2011-06-01

    Full Text Available The appropriate selection of air conditioners for data processing system areas requires knowledge of the environmental design conditions, the air conditioning systems successfully used in computer rooms, and the cooling loads to be handled. This work contains information about a wide variety of systems designed for computer room applications. A complete example of a calculation to determine the amount of heat to be removed for satisfactory operation is also included.

  17. Using Historical Data to Automatically Identify Air-Traffic Control Behavior

    Science.gov (United States)

    Lauderdale, Todd A.; Wu, Yuefeng; Tretto, Celeste

    2014-01-01

    This project seeks to develop statistical-based machine learning models to characterize the types of errors present when using current systems to predict future aircraft states. These models will be data-driven - based on large quantities of historical data. Once these models are developed, they will be used to infer situations in the historical data where an air-traffic controller intervened on an aircraft's route, even when there is no direct recording of this action.

  18. Archival Automatic Identification System (AIS) Data for Navigation Project Performance Evaluation

    Science.gov (United States)

    2015-08-01

    and available to USACE practitioners via the MOU mentioned above provides several of these parameters at a cost that is significantly lower than...performance information can be screened for a variety of embedded factors in the context of navigation features, such as inbound or outbound vessels. Vessel...collection, yet AIS data provides triple the data volume for this single transit, with no explicit cost incurred. Each historical data request from

  19. SoilJ - An ImageJ plugin for semi-automatized image-processing of 3-D X-ray images of soil columns

    Science.gov (United States)

    Koestel, John

    2016-04-01

    3-D X-ray imaging is a formidable tool for quantifying soil structural properties, which are known to be extremely diverse. This diversity necessitates the collection of large sample sizes to adequately represent the spatial variability of soil structure at a specific sampling site. One important bottleneck of using X-ray imaging is, however, the large amount of time required by a trained specialist to process the image data, which makes it difficult to process larger numbers of samples. The software SoilJ aims at removing this bottleneck by automating most of the image processing steps needed to analyze image data of cylindrical soil columns. SoilJ is a plugin of the free Java-based image-processing software ImageJ. The plugin is designed to automatically process all images located in a designated folder. In a first step, SoilJ recognizes the outlines of the soil column, whereupon the column is rotated to an upright position and placed in the center of the canvas. Excess canvas is removed from the images. Then, SoilJ samples the grey values of the column material as well as the surrounding air in the Z-direction. Assuming that the column material (mostly PVC or aluminium) exhibits a spatially constant density, these grey values serve as a proxy for the image illumination at a specific Z-coordinate. Together with the grey values of the air they are used to correct image illumination fluctuations, which often occur along the axis of rotation during image acquisition. SoilJ also includes an algorithm for beam-hardening artefact removal and extended image segmentation options. Finally, SoilJ integrates the morphology analysis plugins of BoneJ (Doube et al., 2006, BoneJ: Free and extensible bone image analysis in ImageJ. Bone 47: 1076-1079) and provides an ASCII file summarizing these measures for each investigated soil column. In the future it is planned to integrate SoilJ into FIJI, the maintained and updated edition of ImageJ with selected

  20. Automatic control strategy for step feed anoxic/aerobic biological nitrogen removal process

    Institute of Scientific and Technical Information of China (English)

    ZHU Gui-bing; PENG Yong-zhen; WU Shu-yun; WANG Shu-ying

    2005-01-01

    Control of the sludge age and the mixed liquor suspended solids concentration in the activated sludge process is critical for ensuring effective wastewater treatment. A nonlinear dynamic model for a step-feed activated sludge process was developed in this study. The system is based on controlling the sludge age and the mixed liquor suspended solids in the aerator of the last stage by adjusting the sludge recycle and wastage flow rates, respectively. The simulation results showed that the sludge age remained nearly constant at a value of 16 d under variations of the influent characteristics. The mixed liquor suspended solids in the aerator of the last stage were also maintained at the desired value of 2500 g/m3 by adjusting the wastage flow rate.
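
    A hedged sketch of the underlying sludge-age (SRT) control relation: the wastage flow that holds the SRT at its set-point follows from the current solids inventory. Variable names and the example numbers are illustrative, not taken from the paper.

    ```python
    # Recompute the wastage flow so that SRT = (solids inventory) / (solids wasted per day).
    def wastage_flow(srt_setpoint_d, reactor_volumes_m3, mlss_g_m3, waste_solids_g_m3):
        """Q_w = (sum_i V_i * X_i) / (SRT * X_w), returned in m3/d."""
        inventory = sum(v * x for v, x in zip(reactor_volumes_m3, mlss_g_m3))
        return inventory / (srt_setpoint_d * waste_solids_g_m3)

    # e.g. a 16 d SRT target, four 1000 m3 stages near 2500 g/m3, wasting from the last stage
    print(wastage_flow(16.0, [1000, 1000, 1000, 1000], [2400, 2500, 2550, 2500], 2500))
    ```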