WorldWideScience

Sample records for automatically-generated template features

  1. Automatic capture of attention by conceptually generated working memory templates.

    Science.gov (United States)

    Sun, Sol Z; Shen, Jenny; Shaw, Mark; Cant, Jonathan S; Ferber, Susanne

    2015-08-01

    Many theories of attention propose that the contents of working memory (WM) can act as an attentional template, which biases processing in favor of perceptually similar inputs. While support has been found for this claim, it is unclear how attentional templates are generated when searching real-world environments. We hypothesized that in naturalistic settings, attentional templates are commonly generated from conceptual knowledge, an idea consistent with sensorimotor models of knowledge representation. Participants performed a visual search task in the delay period of a WM task, where the item in memory was either a colored disk or a word associated with a color concept (e.g., "Rose," associated with red). During search, we manipulated whether a singleton distractor in the array matched the contents of WM. Overall, we found that search times were impaired in the presence of a memory-matching distractor. Furthermore, the degree of impairment did not differ based on the contents of WM. Put differently, regardless of whether participants were maintaining a perceptually colored disk identical to the singleton distractor, or whether they were simply maintaining a word associated with the color of the distractor, the magnitude of attentional capture was the same. Our results suggest that attentional templates can be generated from conceptual knowledge, in the physical absence of the visual feature.

  2. Methodology for Automatic Ontology Generation Using Database Schema Information

    Directory of Open Access Journals (Sweden)

    JungHyen An

    2018-01-01

    An ontology is a model language that supports functions to integrate conceptually distributed domain knowledge and to infer relationships among concepts. Ontologies are developed based on the target domain knowledge, so methodologies that automatically generate an ontology from metadata characterizing that knowledge are becoming important. However, existing methodologies for automatic ontology generation from metadata require the domain metadata to be supplied in a predetermined template, and it is difficult to manage the data accumulated in the ontology itself as the domain OWL (Web Ontology Language) individuals continuously increase. A database schema captures features of the domain knowledge and provides structural functions to efficiently process knowledge-based data. In this paper, we propose a methodology to automatically generate ontologies and manage OWL individuals through an interaction between the database and the ontology. We describe the automatic ontology generation process with an example schema and demonstrate the effectiveness of the automatically generated ontology by comparing it with existing ontologies using an ontology quality score.
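    As a loose illustration of the schema-to-ontology idea described above, the sketch below maps a toy relational schema to OWL-style declarations: each table becomes a class, each foreign-key column an object property, and each remaining column a data property. The schema, naming rules and Manchester-like output are illustrative assumptions, not the paper's actual methodology.

        # Hedged sketch: table -> Class, foreign key -> ObjectProperty,
        # plain column -> DataProperty. Naming conventions are invented.
        schema = {
            "Patient": {"columns": {"name": "TEXT", "hospital_id": "INTEGER"},
                        "foreign_keys": {"hospital_id": "Hospital"}},
            "Hospital": {"columns": {"name": "TEXT"}, "foreign_keys": {}},
        }

        def schema_to_owl(schema):
            lines = []
            for table, spec in schema.items():
                lines.append(f"Class: {table}")
                for col in spec["columns"]:
                    if col in spec["foreign_keys"]:
                        target = spec["foreign_keys"][col]
                        lines.append(f"ObjectProperty: has_{target.lower()} "
                                     f"Domain: {table} Range: {target}")
                    else:
                        lines.append(f"DataProperty: {table.lower()}_{col} "
                                     f"Domain: {table}")
            return "\n".join(lines)

        print(schema_to_owl(schema))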

  3. Multimodal biometric approach for cancelable face template generation

    Science.gov (United States)

    Paul, Padma Polash; Gavrilova, Marina

    2012-06-01

    Due to the rapid growth of biometric technology, template protection becomes crucial to secure the integrity of the biometric security system and prevent unauthorized access. Cancelable biometrics is emerging as one of the best solutions to secure biometric identification and verification systems. We present a novel, robust cancelable template generation technique that takes advantage of multimodal biometrics using feature-level fusion. Feature-level fusion of different facial features is applied to generate the cancelable template. A proposed algorithm based on multi-fold random projection and a fuzzy communication scheme is used for this purpose. In cancelable template generation, one of the main difficulties is preserving the interclass variance of the features. We have found that interclass variations of the features that are lost during multi-fold random projection can be recovered by fusing different feature subsets and projecting into a new feature domain. Applying the multimodal technique at the feature level enhances interclass variability and hence improves the performance of the system. We tested the system with classifier fusion over different feature subsets and different cancelable template fusions. Experiments have shown that the cancelable template improves the performance of the biometric system compared with the original template.
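    A minimal sketch of the key-seeded random-projection step at the heart of such cancelable schemes; the dimensions, seeding scheme and fused input vector are illustrative assumptions, not the authors' algorithm. Revoking a compromised template only requires issuing a new key.

        import numpy as np

        def cancelable_template(feature_vec, user_key, n_folds=4, out_dim=64):
            """Project a fused feature vector through several key-seeded
            random matrices and concatenate the folds (hypothetical sketch)."""
            rng = np.random.default_rng(user_key)   # key-dependent seed
            folds = []
            for _ in range(n_folds):
                # Gaussian random projection approximately preserves distances
                P = rng.normal(size=(out_dim, feature_vec.size)) / np.sqrt(out_dim)
                folds.append(P @ feature_vec)
            return np.concatenate(folds)

        # two noisy feature vectors from the same subject stay close under the
        # same key, while a re-issued key yields a completely different template
        f1 = np.random.rand(256)
        f2 = f1 + 0.01 * np.random.rand(256)
        t1 = cancelable_template(f1, user_key=42)
        t2 = cancelable_template(f2, user_key=42)
        print(np.linalg.norm(t1 - t2) < np.linalg.norm(t1 - cancelable_template(f1, 7)))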

  4. Applying automatic item generation to create cohesive physics testlets

    Science.gov (United States)

    Mindyarto, B. N.; Nugroho, S. E.; Linuwih, S.

    2018-03-01

    Computer-based testing has created the demand for large numbers of items. This paper discusses the production of cohesive physics testlets using automatic item generation concepts and procedures. The testlets were composed by restructuring physics problems to probe deeper understanding of the underlying physical concepts, inserting a qualitative question and an accompanying scientific-reasoning question. A template-based testlet generator was used to generate the testlet variants. Using this methodology, 1248 testlet variants were effectively generated from 25 testlet templates. Some issues related to the effective application of the generated physics testlets in practical assessments are also discussed.
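    A toy illustration of template-based item generation as described above: one physics item template whose numeric placeholders are filled from small pools, so a handful of templates yields many parallel variants. The template text and value pools are invented for illustration.

        import itertools, random

        template = ("A {mass} kg cart starts from rest and a constant {force} N "
                    "net force acts on it. (a) What is its acceleration? "
                    "(b) Explain, in words, why doubling the force would double "
                    "the acceleration.")

        pools = {"mass": [2, 4, 5, 8], "force": [10, 20, 40]}

        def generate_variants(template, pools):
            # Cartesian product of the pools -> one variant per combination
            keys = list(pools)
            for values in itertools.product(*(pools[k] for k in keys)):
                yield template.format(**dict(zip(keys, values)))

        variants = list(generate_variants(template, pools))
        print(len(variants))            # 12 variants from one template
        print(random.choice(variants))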

  5. Template Authoring Environment for the Automatic Generation of Narrative Content

    Science.gov (United States)

    Caropreso, Maria Fernanda; Inkpen, Diana; Keshtkar, Fazel; Khan, Shahzad

    2012-01-01

    Natural Language Generation (NLG) systems can make data accessible in an easily digestible textual form; but using such systems requires sophisticated linguistic and sometimes even programming knowledge. We have designed and implemented an environment for creating and modifying NLG templates that requires no programming knowledge, and can operate…

  6. Identification of individual features in areal surface topography data by means of template matching and the ring projection transform

    International Nuclear Information System (INIS)

    Senin, Nicola; Moretti, Michele; Blunt, Liam A

    2014-01-01

    Starting from areal surface topography data as provided by current commercial three-dimensional (3D) profilometers and 3D digital microscopes, this work investigates the problem of automatically identifying and extracting functionally relevant, individual features within the acquisition area. Feature identification is achieved by adopting an original template-matching algorithmic procedure, based on applying the ring projection transform in combination with a parametric template. The proposed algorithmic procedure addresses in particular template-matching scenarios where significant variability may be associated with the features to be compared to the reference template. The algorithm is applied to a test case involving the characterization of the surface texture of a superabrasive polishing tool used in hard-disk manufacturing.
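    The ring projection transform reduces a 2D patch to a 1D signature by averaging intensities over concentric rings about the patch centre, which makes the subsequent match rotation-invariant. A minimal numpy sketch of the transform (not the authors' implementation):

        import numpy as np

        def ring_projection(patch):
            """Average intensities over concentric rings about the patch
            centre, giving a 1D rotation-invariant signature."""
            h, w = patch.shape
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            yy, xx = np.mgrid[0:h, 0:w]
            r = np.hypot(yy - cy, xx - cx).astype(int)   # ring index per pixel
            sums = np.bincount(r.ravel(), weights=patch.ravel())
            counts = np.bincount(r.ravel())
            return sums / np.maximum(counts, 1)

        # the signature is unchanged when the patch is rotated
        patch = np.random.rand(33, 33)
        print(np.allclose(ring_projection(patch), ring_projection(np.rot90(patch))))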

  7. Code Generation with Templates

    CERN Document Server

    Arnoldus, Jeroen; Serebrenik, A

    2012-01-01

    Templates are used to generate all kinds of text, including computer code. Over the last decade, the use of templates has gained a great deal of popularity due to the rise of dynamic web applications. Templates are a tool for programmers, and implementations of template engines are most often based on practical experience rather than on a theoretical background. This book reveals the mathematical background of templates and shows interesting findings for improving their practical use. First, a framework to determine the necessary computational power for the template metalanguage is presented…
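    A toy template engine in the spirit of the book's subject, substituting {{name}} placeholders from a context dictionary into a code template; the placeholder syntax and example are illustrative only.

        import re

        def render(template, context):
            """Replace {{name}} placeholders with values from the context dict."""
            return re.sub(r"\{\{(\w+)\}\}",
                          lambda m: str(context[m.group(1)]), template)

        code_template = (
            "public class {{name}}Dao {\n"
            "    // auto-generated accessor for table {{table}}\n"
            "    private static final String TABLE = \"{{table}}\";\n"
            "}\n")

        print(render(code_template, {"name": "Customer", "table": "customers"}))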

  8. Automatic Generation of English-Japanese Translation Pattern Utilizing Genetic Programming Technique

    Science.gov (United States)

    Matsumura, Koki; Tamekuni, Yuji; Kimura, Shuhei

    There are many constructional differences between English and Japanese phrase templates, which often makes translation difficult. Moreover, the phrase templates and sentences to be referred to are vast and varied, and it is not easy to prepare a corpus that covers them all. It is therefore significant, from the viewpoint of the translation success rate and the capacity of the pattern dictionary, to generate translation patterns automatically. For the purpose of realizing such automatic generation, this paper proposes a new method for generating translation patterns using the genetic programming (GP) technique. The technique tries to automatically generate translation patterns for sentences that are not registered in the phrase template dictionary by applying genetic operations to the parsing trees of basic patterns. Each tree consists of a pair of English-Japanese sentences generated as the first-stage population. Analysis tree databases with 50, 100, 150, and 200 pairs were prepared as the first-stage population, and the system was applied to an English input of 1,555 sentences. As a result, the number of analysis trees increased from 200 to 517, and the accuracy rate of the translation patterns improved from 42.57% to 70.10%. Furthermore, 86.71% of the generated translations were successful, with meanings that were acceptable and understandable. The proposed technique thus appears to be a promising way to raise the translation success rate while reducing the size of the analysis tree database.

  9. Template-based automatic extraction of the joint space of foot bones from CT scan

    Science.gov (United States)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying the joint anatomy for measuring the spacing between the bones. However, separation of the coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from the noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is common practice, and segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying graph cut on a Markov random field model to the region of interest (ROI), which is identified by a template of 3D bone structures. The template includes an encoded articular surface that identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to the region including two types of tissue, the object extraction problem was reduced to binary segmentation and solved via graph cut. Based on the shape of a joint space marked by the template, the hard constraints were set by initial seeds automatically generated from thresholding and morphological operations. The performance and the robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).
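    A hedged sketch of the seed-generation step described above, using thresholding plus morphological erosion/dilation to produce disjoint hard constraints for a graph cut; the threshold value and iteration counts are invented for illustration, not the paper's settings.

        import numpy as np
        from scipy import ndimage

        def initial_seeds(roi, bone_thresh=400.0):
            """Erode the thresholded high-intensity voxels into confident
            'bone' seeds and dilate their complement into 'joint space'
            seeds; the uncertain band between them is left to graph cut."""
            bone = roi > bone_thresh                  # e.g. Hounsfield units
            bone_seed = ndimage.binary_erosion(bone, iterations=2)
            space_seed = ~ndimage.binary_dilation(bone, iterations=2)
            return bone_seed, space_seed

        roi = np.random.normal(300, 200, size=(32, 32, 32))  # stand-in CT block
        bone_seed, space_seed = initial_seeds(roi)
        print(bone_seed.sum(), space_seed.sum(), (bone_seed & space_seed).any())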

  10. A SCHEME FOR TEMPLATE SECURITY AT FEATURE FUSION LEVEL IN MULTIMODAL BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Arvind Selwal

    2016-09-01

    Biometrics is the science of human recognition based on biological, chemical or behavioural traits. These systems are used in many real-life applications, ranging from biometric-based attendance systems to security at very sophisticated levels. A biometric system deals with raw data captured using a sensor and feature templates extracted from the raw images. One of the challenges faced by designers of these systems is to secure the template data extracted from the biometric modalities of the user and to protect the raw images. One solution for minimizing spoof attacks on biometric systems by unauthorised users is to use multi-biometric systems. A multi-modal biometric system uses fusion techniques to merge feature templates generated from different modalities of the human. In this work, a new scheme is proposed to secure templates at the feature fusion level. The scheme is based on the union operation of fuzzy relations of the modality templates during the fusion process of multimodal biometric systems. This approach serves the dual purpose of feature fusion and transformation of the templates into a single, secured, non-invertible template. The proposed technique is cancelable and was experimentally tested on a bimodal biometric system comprising fingerprint and hand geometry. The developed scheme removes the problem of an attacker learning the original minutiae positions in the fingerprint and the various measurements of the hand geometry, and it improves system performance by reducing the false accept rate and improving the genuine accept rate.
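    A minimal sketch of fusion by fuzzy union, assuming, as an illustration rather than the paper's exact construction, that each modality's template is first normalised to [0, 1] so values can be read as fuzzy memberships; the elementwise maximum then plays the role of the union operation.

        import numpy as np

        def fuse_templates(t_finger, t_hand):
            """Normalise each template to [0, 1], pad to a common length,
            and take the elementwise maximum (fuzzy union). The fused
            template does not expose either original feature vector."""
            def to_membership(t):
                t = np.asarray(t, dtype=float)
                return (t - t.min()) / (t.max() - t.min() + 1e-12)
            a, b = to_membership(t_finger), to_membership(t_hand)
            n = max(a.size, b.size)
            a = np.pad(a, (0, n - a.size))
            b = np.pad(b, (0, n - b.size))
            return np.maximum(a, b)     # union of fuzzy sets

        fused = fuse_templates(np.random.rand(60), np.random.rand(16))
        print(fused.shape)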

  11. Template-based automatic breast segmentation on MRI by excluding the chest region

    OpenAIRE

    Lin, M; Chen, JH; Wang, X; Chan, S; Chen, S; Su, MY

    2013-01-01

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template...

  12. Template Generation and Selection Algorithms

    NARCIS (Netherlands)

    Guo, Y.; Smit, Gerardus Johannes Maria; Broersma, Haitze J.; Heysters, P.M.; Badaway, W.; Ismail, Y.

    The availability of high-level design entry tooling is crucial for the viability of any reconfigurable SoC architecture. This paper presents a template generation method to extract functional equivalent structures, i.e. templates, from a control data flow graph. By inspecting the graph the algorithm

  13. Template Assembly for Detailed Urban Reconstruction

    KAUST Repository

    Nan, Liangliang

    2015-05-04

    We propose a new framework to reconstruct building details by automatically assembling 3D templates on coarse textured building models. In a preprocessing step, we generate an initial coarse model to approximate a point cloud computed using Structure from Motion and Multi View Stereo, and we model a set of 3D templates of facade details. Next, we optimize the initial coarse model to enforce consistency between geometry and appearance (texture images). Then, building details are reconstructed by assembling templates on the textured faces of the coarse model. The 3D templates are automatically chosen and located by our optimization-based template assembly algorithm that balances image matching and structural regularity. In the results, we demonstrate how our framework can enrich the details of coarse models using various data sets.

  14. Automatic target detection using binary template matching

    Science.gov (United States)

    Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook

    2005-03-01

    This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various light conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
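    A compact sketch of the two ingredients named above: block-wise adaptive binarization followed by binary template matching scored by pixel agreement. The block size and scoring rule are illustrative choices, not the paper's.

        import numpy as np

        def adaptive_binarize(img, block=16):
            """Threshold each block at its local mean, so binarisation
            tracks uneven illumination in daylight CCD imagery."""
            out = np.zeros_like(img, dtype=bool)
            for y in range(0, img.shape[0], block):
                for x in range(0, img.shape[1], block):
                    tile = img[y:y+block, x:x+block]
                    out[y:y+block, x:x+block] = tile > tile.mean()
            return out

        def binary_match_score(window, template):
            # Hamming similarity: fraction of pixels agreeing with the template
            return np.mean(window == template)

        img = np.random.rand(128, 128)
        binary = adaptive_binarize(img)
        tmpl = binary[40:60, 40:60]                 # toy 20x20 binary template
        print(binary_match_score(binary[40:60, 40:60], tmpl))  # 1.0 at true location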

  15. Chemical name extraction based on automatic training data generation and rich feature set.

    Science.gov (United States)

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of getting sizable, good-quality data to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.
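    The closing observation can be illustrated by ranking the frequency of name fragments drawn from an (incomplete) dictionary; with a realistic dictionary, log(frequency) falls roughly linearly in log(rank), as Zipf's law predicts. The tiny dictionary below is invented for illustration.

        from collections import Counter

        dictionary = ["ethanol", "methanol", "ethanal", "propanol", "ethane",
                      "methane", "propane", "butanol", "butane", "ethanoate"]

        # count character trigrams as a crude stand-in for name components
        trigrams = Counter(name[i:i+3]
                           for name in dictionary
                           for i in range(len(name) - 2))

        # rank-frequency table; on a real dictionary this is near-Zipfian
        for rank, (gram, freq) in enumerate(trigrams.most_common(8), start=1):
            print(rank, gram, freq)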

  16. Automatic generation of statistical pose and shape models for articulated joints.

    Science.gov (United States)

    Xin Chen; Graham, Jim; Hutchinson, Charles; Muir, Lindsay

    2014-02-01

    Statistical analysis of motion patterns of body joints is potentially useful for detecting and quantifying pathologies. However, building a statistical motion model across different subjects remains a challenging task, especially for a complex joint like the wrist. We present a novel framework for simultaneous registration and segmentation of multiple 3-D (CT or MR) volumes of different subjects at various articulated positions. The framework starts with a pose model generated from 3-D volumes captured at different articulated positions of a single subject (template). This initial pose model is used to register the template volume to image volumes from new subjects. During this process, the Grow-Cut algorithm is used in an iterative refinement of the segmentation of the bone along with the pose parameters. As each new subject is registered and segmented, the pose model is updated, improving the accuracy of successive registrations. We applied the algorithm to CT images of the wrist from 25 subjects, each at five different wrist positions and demonstrated that it performed robustly and accurately. More importantly, the resulting segmentations allowed a statistical pose model of the carpal bones to be generated automatically without interaction. The evaluation results show that our proposed framework achieved accurate registration with an average mean target registration error of 0.34 ± 0.27 mm. The automatic segmentation results also show high consistency with the ground truth obtained semi-automatically. Furthermore, we demonstrated the capability of the resulting statistical pose and shape models by using them to generate a measurement tool for scaphoid-lunate dissociation diagnosis, which achieved 90% sensitivity and specificity.

  17. Global Distribution Adjustment and Nonlinear Feature Transformation for Automatic Colorization

    Directory of Open Access Journals (Sweden)

    Terumasa Aoki

    2018-01-01

    Automatic colorization methods are generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization, color image(s) are used as reference(s) to reconstruct the original color of a gray target image. The most important task here is to find the best matching pairs for all pixels between the reference and target images in order to transfer color information from reference to target pixels. Many attractive local feature-based image matching methods have been developed over the last two decades. Unfortunately, as far as we know, none are optimal for automatic colorization, because the requirements for pixel matching in automatic colorization are wholly different from those for traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixels with low computational cost and generating a descriptive feature vector are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on solving the second problem (designing a descriptive feature vector); namely, we discuss how to learn a descriptive texture feature using a scaled sparse texture feature combined with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show that our proposed method outperforms state-of-the-art methods in terms of robustness of color reconstruction for automatic colorization applications.

  18. Component-Based Cartoon Face Generation

    Directory of Open Access Journals (Sweden)

    Saman Sepehri Nejad

    2016-11-01

    In this paper, we present a cartoon face generation method based on a component-based facial feature extraction approach. Given a frontal face image as input, our proposed system has the following stages. First, face features are extracted using an extended Active Shape Model. Outlines of the components are locally modified using edge detection, template matching and Hermite interpolation. This modification enhances the diversity of the output and the accuracy of the component matching required for cartoon generation. Second, to bring in cartoon-specific features such as shadows, highlights and, especially, stylish drawing, an array of various face photographs and corresponding hand-drawn cartoon faces are collected. These cartoon templates are automatically decomposed into cartoon components using our proposed method for parameterizing cartoon samples, which is fast and simple. Then, using shape matching methods, the appropriate cartoon component is selected and deformed to fit the input face. Finally, a cartoon face is rendered in a vector format using the rendering rules of the selected template. Experimental results demonstrate the effectiveness of our approach in generating life-like cartoon faces.

  19. Working memory templates are maintained as feature-specific perceptual codes.

    Science.gov (United States)

    Sreenivasan, Kartik K; Sambhara, Deepak; Jha, Amishi P

    2011-07-01

    Working memory (WM) representations serve as templates that guide behavior, but the neural basis of these templates remains elusive. We tested the hypothesis that WM templates are maintained by biasing activity in sensoriperceptual neurons that code for features of items being held in memory. Neural activity was recorded using event-related potentials (ERPs) as participants viewed a series of faces and responded when a face matched a target face held in WM. Our prediction was that if activity in neurons coding for the features of the target is preferentially weighted during maintenance of the target, then ERP activity evoked by a nontarget probe face should be commensurate with the visual similarity between target and probe. Visual similarity was operationalized as the degree of overlap in visual features between target and probe. A face-sensitive ERP response was modulated by target-probe similarity. Amplitude was largest for probes that were similar to the target, and decreased monotonically as a function of decreasing target-probe similarity. These results indicate that neural activity is weighted in favor of visual features that comprise an actively held memory representation. As such, our findings support the notion that WM templates rely on neural populations involved in forming percepts of memory items.

  20. Ontorat: automatic generation of new ontology terms, annotations, and axioms based on ontology design patterns.

    Science.gov (United States)

    Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun

    2015-01-01

    It is time-consuming to build an ontology with many terms and axioms, so it is desirable to automate the process of ontology development. Ontology Design Patterns (ODPs) provide a reusable solution to a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format. Inspired by the ODPs and QTTs, the Ontorat web application was developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on specific ODP(s). The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and (optionally) a target ontology. The axiom expression settings can be saved as a predesigned Ontorat setting-format text file for reuse. The input data file is generated based on a template file created from a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion, and different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms, with both logical axioms and rich annotation axioms, in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by the ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank-specific terms in the Biobank Ontology. A collection of ODPs and templates with examples are provided on the Ontorat website and can be reused to facilitate ontology development. With ever increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating a large number of ontology terms by following…
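    A schematic of the QTT/ODP idea: one design pattern is filled from spreadsheet rows to emit class stubs in a Manchester-syntax-like form. The pattern, IDs and output format below are invented for illustration, not Ontorat's actual settings files.

        import csv, io

        # Hypothetical pattern: every row becomes a class with a label
        # annotation and one 'part of' axiom.
        pattern = ("Class: {id}\n"
                   "    Annotations: rdfs:label \"{label}\"\n"
                   "    SubClassOf: 'part of' some {parent}\n")

        sheet = io.StringIO("id,label,parent\n"
                            "CLO_0000001,RIKEN cell line A,'cell line cell'\n"
                            "CLO_0000002,RIKEN cell line B,'cell line cell'\n")

        for row in csv.DictReader(sheet):
            print(pattern.format(**row))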

  1. Volume Decomposition and Feature Recognition for Hexahedral Mesh Generation

    Energy Technology Data Exchange (ETDEWEB)

    GADH,RAJIT; LU,YONG; TAUTGES,TIMOTHY J.

    1999-09-27

    Considerable progress has been made on automatic hexahedral mesh generation in recent years. Several automatic meshing algorithms have proven to be very reliable on certain classes of geometry. While it is always worth pursuing general algorithms viable on more general geometry, a combination of the well-established algorithms is ready to take on classes of complicated geometry. By partitioning the entire geometry into meshable pieces, each matched with an appropriate meshing algorithm, the original geometry becomes meshable and may achieve better mesh quality. Each meshable portion is recognized as a meshing feature. This paper, which is part of the feature-based meshing methodology, presents the work on shape recognition and volume decomposition to automatically decompose a CAD model into meshable volumes. There are four phases in this approach: (1) Feature Determination to extract decomposition features; (2) Cutting Surfaces Generation to form the "tailored" cutting surfaces; (3) Body Decomposition to get the imprinted volumes; and (4) Meshing Algorithm Assignment to match the decomposed volumes with appropriate meshing algorithms. The feature determination procedure is based on the CLoop feature recognition algorithm, which is extended to be more general. Results are demonstrated over several parts with complicated topology and geometry.

  2. Binary palmprint representation for feature template protection

    NARCIS (Netherlands)

    Mu, Meiru; Ruan, Qiuqi; Shao, X.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2012-01-01

    The major challenge of biometric template protection comes from the intraclass variations of biometric data. The helper data scheme aims to solve this problem by employing the Error Correction Codes (ECC). However, many reported biometric binary features from the same user reach bit error rate (BER)

  3. Template-based automatic breast segmentation on MRI by excluding the chest region

    International Nuclear Information System (INIS)

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong; Su, Min-Ying; Chan, Siwa; Chen, Siping

    2013-01-01

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. …

  4. Template-based automatic breast segmentation on MRI by excluding the chest region

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Muqing [Tu and Yuen Center for Functional Onco-Imaging, Department of Radiological Sciences, University of California, Irvine, California 92697-5020 and National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, 518060 China (China); Chen, Jeon-Hor [Tu and Yuen Center for Functional Onco-Imaging, Department of Radiological Sciences, University of California, Irvine, California 92697-5020 and Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung 82445, Taiwan (China); Wang, Xiaoyong; Su, Min-Ying, E-mail: msu@uci.edu [Tu and Yuen Center for Functional Onco-Imaging, Department of Radiological Sciences, University of California, Irvine, California 92697-5020 (United States); Chan, Siwa [Department of Radiology, Taichung Veterans General Hospital, Taichung 40407, Taiwan (China); Chen, Siping [National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, 518060 China (China)

    2013-12-15

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. …

  5. Binary gabor statistical features for palmprint template protection

    NARCIS (Netherlands)

    Mu, Meiru; Ruan, Qiuqi; Shao, X.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2012-01-01

    The biometric template protection system requires a highquality biometric channel and a well-designed error correction code (ECC). Due to the intra-class variations of biometric data, an efficient fixed-length binary feature extractor is required to provide a high-quality biometric channel so that

  6. Automatic generation of gene finders for eukaryotic species

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Krogh, A.

    2006-01-01

    Background: The number of sequenced eukaryotic genomes is rapidly increasing. This means that over time it will be hard to keep supplying customised gene finders for each genome. This calls for procedures to automatically generate species-specific gene finders and to re-train them as the quantity and quality of reliable gene annotation grows. Results: We present a procedure, Agene, that automatically generates a species-specific gene predictor from a set of reliable mRNA sequences and a genome. We apply a Hidden Markov model (HMM) that implements explicit length distribution modelling for all gene structure blocks using acyclic discrete phase type distributions. The state structure of each HMM is generated dynamically from an array of sub-models to include only gene features represented in the training set. Conclusion: Acyclic discrete phase type distributions are well suited to model sequence…

  7. Grammar-based feature generation for time-series prediction

    CERN Document Server

    De Silva, Anthony Mihirana

    2015-01-01

    This book proposes a novel approach for time-series prediction using machine learning techniques with automatic feature generation. Application of machine learning techniques to predict time series continues to attract considerable attention due to the difficulty of the prediction problems, compounded by the non-linear and non-stationary nature of real-world time series. The performance of machine learning techniques, among other things, depends on suitable engineering of features. This book proposes a systematic way of generating suitable features using a context-free grammar. A number of feature selection criteria are investigated and a hybrid feature generation and selection algorithm using grammatical evolution is proposed. The book contains graphical illustrations to explain the feature generation process. The proposed approaches are demonstrated by predicting the closing price of major stock market indices, peak electricity load and net hourly foreign exchange client trade volume. The proposed method ...
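    A toy version of grammar-based feature generation: a small context-free grammar over time-series operators is expanded randomly to propose candidate features. The grammar symbols and operators are invented for illustration; a real system would evolve and select among such expressions.

        import random

        # candidate features such as lag(diff(price),3) fall out of the grammar
        grammar = {
            "<feat>": [["lag(", "<feat>", ",", "<int>", ")"],
                       ["diff(", "<feat>", ")"],
                       ["ma(", "<feat>", ",", "<int>", ")"],
                       ["price"]],
            "<int>": [["1"], ["3"], ["5"]],
        }

        def expand(symbol, rng, depth=0):
            if symbol not in grammar:
                return symbol           # terminal token
            # bias towards terminals as depth grows to keep expressions finite
            rules = grammar[symbol][-1:] if depth > 3 else grammar[symbol]
            rule = rng.choice(rules)
            return "".join(expand(s, rng, depth + 1) for s in rule)

        rng = random.Random(0)
        for _ in range(3):
            print(expand("<feat>", rng))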

  8. Automatic Generation of Setup for CNC Spring Coiler Based on Case-based Reasoning

    Institute of Scientific and Technical Information of China (English)

    KU Xiangchen; WANG Runxiao; LI Jishun; WANG Dongbo

    2006-01-01

    When producing special-shaped springs on a CNC spring coiler, the setup of the coiler is often done manually using a trial-and-error method. As a result, coiler setup consumes much time and becomes the bottleneck of the spring production process. To cope with this situation, this paper proposes an automatic setup generation system for CNC spring coilers using case-based reasoning (CBR). The core of the study contains: (1) an integrated reasoning model of the CBR system; (2) a feature-based spatial shape description of special-shaped springs; (3) coiling case representation using a shape feature matrix; and (4) a case similarity measure algorithm. The automatic generation system has been implemented with C++ Builder 6.0 and is helpful in improving the automation and efficiency of spring coilers.

  9. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tool microscope (UTM) lens view image as an example, a two-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of the correlation-based template matching method and a concurrent back-propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault tolerance, rotation-invariant features based on Zernike moments are extracted from digit characters, and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental results indicate that the newly developed algorithm offers high recognition precision and working speed; the average reading ratio reaches 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.

  10. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    Science.gov (United States)

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.

    2016-06-01

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
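    Benson group additivity, which the abstract names as RMG's thermochemistry estimator, approximates a molecular property as the sum of per-group contributions. A minimal sketch with approximate, illustrative group values (not RMG's database):

        # kJ/mol; approximate illustrative values, not RMG's database
        group_values = {
            "C-(C)(H)3": -42.2,     # methyl group bonded to one carbon
            "C-(C)2(H)2": -20.6,    # methylene group bonded to two carbons
        }

        def enthalpy_of_formation(groups):
            """Sum group contributions weighted by their counts."""
            return sum(group_values[g] * n for g, n in groups.items())

        # n-butane = 2 terminal CH3 groups + 2 interior CH2 groups
        print(enthalpy_of_formation({"C-(C)(H)3": 2, "C-(C)2(H)2": 2}))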

  11. Towards Automatic Personalized Content Generation for Platform Games

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2010-01-01

    In this paper, we show that personalized levels can be automatically generated for platform games. We build on previous work, where models were derived that predicted player experience based on features of level design and on playing styles. These models are constructed using preference learning...... mechanism using both algorithmic and human players. The results indicate that the adaptation mechanism effectively optimizes level design parameters for particular players....

  12. AutoWIG: automatic generation of python bindings for C++ libraries

    Directory of Open Access Journals (Sweden)

    Pierre Fernique

    2018-04-01

    Most Python and R scientific packages incorporate compiled scientific libraries to speed up the code and reuse legacy libraries. While several semi-automatic solutions exist to wrap these compiled libraries, the process of wrapping a large library is cumbersome and time consuming. In this paper, we introduce AutoWIG, a Python package that automatically wraps compiled libraries into high-level languages using LLVM/Clang technologies and the Mako templating engine. Our approach is automatic, extensible, and applies to complex C++ libraries composed of thousands of classes or incorporating modern meta-programming constructs.

  13. Automatic feature learning using multichannel ROI based on deep structured algorithms for computerized lung cancer diagnosis.

    Science.gov (United States)

    Sun, Wenqing; Zheng, Bin; Qian, Wei

    2017-10-01

    This study aimed to analyze the ability of automatically generated features extracted by deep structured algorithms to diagnose lung nodules in CT images, and to compare their performance with traditional computer-aided diagnosis (CADx) systems using hand-crafted features. All 1018 cases were acquired from the Lung Image Database Consortium (LIDC) public lung cancer database. The nodules were segmented according to four radiologists' markings, and 13,668 samples were generated by rotating every slice of the nodule images. Three multichannel ROI-based deep structured algorithms were designed and implemented in this study: a convolutional neural network (CNN), a deep belief network (DBN), and a stacked denoising autoencoder (SDAE). For comparison, we also implemented a CADx system using hand-crafted features including density, texture and morphological features. The performance of every scheme was evaluated using 10-fold cross-validation and the area under the receiver operating characteristic curve (AUC). The highest observed AUC was 0.899 ± 0.018, achieved by the CNN, which was significantly higher than the traditional CADx system (AUC = 0.848 ± 0.026). The results from the DBN were also slightly higher than CADx, while SDAE was slightly lower. By visualizing the automatically generated features, we found some meaningful detectors, such as curvy stroke detectors, in the deep structured schemes. The study results showed that deep structured algorithms with automatically generated features can achieve desirable performance in lung nodule diagnosis. With well-tuned parameters and a large enough dataset, deep learning algorithms can outperform current popular CADx systems. We believe deep learning algorithms with similar data preprocessing procedures can be used in other medical image analysis areas as well.

  14. Forensic Automatic Speaker Recognition Based on Likelihood Ratio Using Acoustic-phonetic Features Measured Automatically

    Directory of Open Access Journals (Sweden)

    Huapeng Wang

    2015-01-01

    Forensic speaker recognition is experiencing a remarkable paradigm shift in terms of the evaluation framework and the presentation of voice evidence. This paper proposes a new method of forensic automatic speaker recognition using the likelihood ratio framework to quantify the strength of voice evidence. The proposed method uses a reference database to calculate the within- and between-speaker variability. Acoustic-phonetic features are extracted automatically using the software VoiceSauce. The effectiveness of the approach was tested using two Mandarin databases: a mobile telephone database and a landline database. The experimental results indicate that these acoustic-phonetic features do have discriminating potential and are worth pursuing. The automatic acoustic-phonetic features have acceptable discriminative performance and can provide more reliable results in evidence analysis when fused with other kinds of voice features.
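    A minimal sketch of a score-based likelihood ratio for a single acoustic-phonetic feature under Gaussian within- and between-speaker models; the feature, numbers and modelling choices are illustrative assumptions, not the paper's system.

        from scipy.stats import norm

        def likelihood_ratio(x, suspect_mean, within_sd, pop_mean, between_sd):
            """Numerator models within-speaker variation around the suspect's
            mean; denominator models variation across the reference population."""
            p_same = norm.pdf(x, loc=suspect_mean, scale=within_sd)
            p_diff = norm.pdf(x, loc=pop_mean, scale=between_sd)
            return p_same / p_diff

        # toy numbers: observed vowel F1 (Hz) in the questioned recording
        lr = likelihood_ratio(x=512.0, suspect_mean=505.0, within_sd=25.0,
                              pop_mean=560.0, between_sd=60.0)
        print(lr)   # > 1 supports same-speaker, < 1 supports different-speaker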

  15. Automatic feature-based grouping during multiple object tracking.

    Science.gov (United States)

    Erlikhman, Gennady; Keane, Brian P; Mettler, Everett; Horowitz, Todd S; Kellman, Philip J

    2013-12-01

    Contour interpolation automatically binds targets with distractors to impair multiple object tracking (Keane, Mettler, Tsoi, & Kellman, 2011). Is interpolation special in this regard or can other features produce the same effect? To address this question, we examined the influence of eight features on tracking: color, contrast polarity, orientation, size, shape, depth, interpolation, and a combination (shape, color, size). In each case, subjects tracked 4 of 8 objects that began as undifferentiated shapes, changed features as motion began (to enable grouping), and returned to their undifferentiated states before halting. We found that intertarget grouping improved performance for all feature types except orientation and interpolation (Experiment 1 and Experiment 2). Most importantly, target-distractor grouping impaired performance for color, size, shape, combination, and interpolation. The impairments were, at times, large (>15% decrement in accuracy) and occurred relative to a homogeneous condition in which all objects had the same features at each moment of a trial (Experiment 2), and relative to a "diversity" condition in which targets and distractors had different features at each moment (Experiment 3). We conclude that feature-based grouping occurs for a variety of features besides interpolation, even when irrelevant to task instructions and contrary to the task demands, suggesting that interpolation is not unique in promoting automatic grouping in tracking tasks. Our results also imply that various kinds of features are encoded automatically and in parallel during tracking.

  16. Programmable imprint lithography template

    Science.gov (United States)

    Cardinale, Gregory F [Oakland, CA; Talin, Albert A [Livermore, CA

    2006-10-31

    A template for imprint lithography (IL) that significantly reduces template production costs by allowing the same template to be re-used for several technology generations. The template is composed of an array of spaced-apart, moveable and individually addressable rods or plungers. Thus, the template can be configured to provide a desired pattern by programming the array of plungers such that certain of the plungers are in an "up" or actuated configuration. This arrangement of "up" and "down" plungers forms a pattern composed of protruding and recessed features, which can then be impressed onto a polymer-film-coated substrate by applying pressure to the template, impressing the programmed configuration into the polymer film. The pattern impressed into the polymer film is reproduced on the substrate by subsequent processing.

  17. Automatic digital surface model (DSM) generation from aerial imagery data

    Science.gov (United States)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide a great deal of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiometric pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiometric pre-processing is used to reduce the effects of inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and the POS.

  18. Automatic feature design for optical character recognition using an evolutionary search procedure.

    Science.gov (United States)

    Stentiford, F W

    1985-03-01

    An automatic evolutionary search is applied to the problem of feature extraction in an OCR application. A performance measure based on feature independence is used to generate features which do not appear to suffer from peaking effects [17]. Features are extracted from a training set of 30,600 machine-printed, 34-class alphanumeric characters derived from British mail. Classification results on the training set and a test set of 10,200 characters are reported for an increasing number of features. A 1.01 percent forced-decision error rate is obtained on the test data using 316 features. The hardware implementation should be cheap and fast to operate. The performance compares favorably with current low-cost OCR page readers.
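    A toy evolutionary search in the spirit of the paper: candidate features (here, sets of pixel indices) are mutated and selected under a fitness that rewards a stand-in discrimination score while penalising overlap with already-accepted features. The fitness itself is invented, not the paper's independence measure.

        import random

        random.seed(1)
        N_PIXELS, POP, GENS = 64, 24, 40

        def fitness(mask, accepted):
            # stand-in discrimination score (a real system would measure class
            # separability on training characters); penalise Jaccard overlap
            # with accepted features so the feature set stays independent
            discrimination = len(mask) / N_PIXELS
            overlap = max((len(mask & a) / len(mask | a) for a in accepted),
                          default=0.0)
            return discrimination * (1.0 - overlap)

        def mutate(mask):
            child = set(mask)
            p = random.randrange(N_PIXELS)
            child.symmetric_difference_update({p})   # flip one pixel
            return child or {p}

        accepted = [set(random.sample(range(N_PIXELS), 8))]   # a prior feature
        pop = [set(random.sample(range(N_PIXELS), 8)) for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=lambda m: fitness(m, accepted), reverse=True)
            pop = pop[:POP // 2] + [mutate(m) for m in pop[:POP // 2]]
        print(round(fitness(pop[0], accepted), 3))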

  19. A proposal for a drug information database and text templates for generating package inserts

    Directory of Open Access Journals (Sweden)

    Okuya R

    2013-07-01

    Ryo Okuya,1 Masaomi Kimura,2 Michiko Ohkura,2 Fumito Tsuchiya3 1Graduate School of Engineering and Science, 2Faculty of Engineering, Shibaura Institute of Technology, Tokyo, 3School of Pharmacy, International University of Health and Welfare, Tokyo, Japan Abstract: To prevent prescription errors caused by information systems, a database that stores complete and accurate drug information in a user-friendly format is needed. In previous studies, the primary method for obtaining data stored in a database was to extract drug information from package inserts by employing pattern matching or more sophisticated methods such as text mining. However, it is difficult to obtain a complete database because there is no strict rule governing the expressions used to describe drug information in package inserts. The authors' strategy was to first build a database and then automatically generate package inserts by embedding data from the database into templates. Creating this database requires the support of pharmaceutical companies to input accurate data; the system is expected to work because the companies benefit from a decreased effort to create package inserts for newly developed drugs from scratch. This study designed the table schemata for the database and the text templates to generate the package inserts. To handle the variety of drug-specific information in package inserts, this information in drug composition descriptions was replaced with labels, and the resulting descriptions were analyzed using cluster analysis. To improve the storage of frequently repeated ingredient information and/or supplementary information, the method was modified by introducing repeat tags into the templates to indicate repetition and by improving the insertion of data into the database. The validity of the method was confirmed by inputting the drug information described in existing package inserts and checking that the method could…

  20. Analysing and evaluating the task of automatic tweet generation: Knowledge to business

    OpenAIRE

    Lloret, Elena; Palomar, Manuel

    2016-01-01

    In this paper a study concerning the evaluation and analysis of natural language tweets is presented. Based on our experience in text summarisation, we carry out a deep analysis of users' perception through the evaluation of tweets generated manually and automatically from news. Specifically, we consider two key aspects of a tweet: its informativeness and its interestingness. Therefore, we analyse: (1) do users perceive manual and automatic tweets equally?; (2) what linguistic features a good tw...

  1. Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.

    Science.gov (United States)

    Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen

    2015-04-01

    In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multi-objects with occlusion handling based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms.
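    The role of trivial templates can be sketched with an ordinary lasso: the dictionary is the object templates plus identity columns, and occluded pixels show up as nonzero trivial coefficients. The dimensions and data below are synthetic; this illustrates the idea, not the authors' solver.

        import numpy as np
        from sklearn.linear_model import Lasso

        d, n_templates = 100, 10
        T = np.random.rand(d, n_templates)        # stand-in object templates
        D = np.hstack([T, np.eye(d)])             # append trivial templates

        y = T @ np.full(n_templates, 0.1)         # candidate window
        y[:20] += 0.8                             # occlude the first 20 pixels

        coef = Lasso(alpha=0.01, positive=True).fit(D, y).coef_
        trivial = coef[n_templates:]
        # the nonzero trivial coefficients concentrate on the occluded pixels
        print(np.count_nonzero(trivial[:20]), np.count_nonzero(trivial[20:]))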

  2. Template security analysis of multimodal biometric frameworks based on fingerprint and hand geometry

    Directory of Open Access Journals (Sweden)

    Arvind Selwal

    2016-09-01

    Biometric systems are automatic tools used to provide authentication in various applications of modern computing. In this work, three different design frameworks for multimodal biometric systems based on fingerprint and hand geometry modalities are proposed. An analysis is also presented to diagnose various types of template security issues in the proposed systems. The fuzzy analytic hierarchy process (FAHP) is applied with five decision parameters to all the designs, and framework 1 is found to be better in terms of template data security, template fusion and computational efficiency. It is noticed that securing template data before storage in the database is a challenging task. An important observation is that a template may be secured at the feature fusion level, and an indexing technique may be used to improve the size of the secured templates.

  3. Finite element speaker-specific face model generation for the study of speech production.

    Science.gov (United States)

    Bucki, Marek; Nazari, Mohammad Ali; Payan, Yohan

    2010-08-01

    In situations where automatic mesh generation is unsuitable, the finite element (FE) mesh registration technique known as mesh-match-and-repair (MMRep) is an interesting option for quickly creating a subject-specific FE model by fitting a predefined template mesh onto the target organ. The irregular or poor quality elements produced by the elastic deformation are corrected by a 'mesh reparation' procedure ensuring that the desired regularity and quality standards are met. Here, we further extend the MMRep capabilities and demonstrate the possibility of taking into account additional relevant anatomical features. We illustrate this approach with an example of biomechanical model generation of a speaker's face comprising face muscle insertions. While taking advantage of the a priori knowledge about tissues conveyed by the template model, this novel, fast and automatic mesh registration technique makes it possible to achieve greater modelling realism by accurately representing the organ surface as well as inner anatomical or functional structures of interest.

  4. Dynamic Frames Based Generation of 3D Scenes and Applications

    Directory of Open Access Journals (Sweden)

    Danijel Radošević

    2015-05-01

    Full Text Available Modern graphic/programming tools like Unity enable the creation of 3D scenes as well as 3D-scene-based program applications, including a full physical model, motion, sounds, lighting effects, etc. This paper deals with the usage of a dynamic-frames-based generator for the automatic generation of a 3D scene and the related source code. The suggested model makes it possible to specify features of the 3D scene in the form of a textual specification, as well as to export such features from a 3D tool. This approach enables a higher level of code generation flexibility and the reusability of the main code and scene artifacts in the form of textual templates. An example of the generated application is presented and discussed.
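
    The template-expansion idea can be illustrated in a few lines. This is a minimal sketch, not the paper's generator: the scene fields, the emitted pseudo-Unity statements, and the function names inside the template are all invented for illustration.

    ```python
    # Sketch of dynamic-frame-style generation: scene features arrive as a
    # textual specification and are spliced into a reusable code template.
    from string import Template

    scene_spec = {"name": "Cube1", "x": 0.0, "y": 1.5, "z": 2.0, "sound": "thud.wav"}

    object_template = Template(
        "GameObject $name = Create(\"$name\");\n"
        "$name.position = new Vector3($x, $y, $z);\n"
        "$name.audio = Load(\"$sound\");\n"
    )
    print(object_template.substitute(scene_spec))   # generated scene source code
    ```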

  5. Using activity-related behavioural features towards more effective automatic stress detection.

    Directory of Open Access Journals (Sweden)

    Dimitris Giakoumis

    Full Text Available This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim of increasing the effectiveness of automatic stress detection. The proposed features are based on the processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour word test. Video, accelerometer and biosignal (electrocardiogram and galvanic skin response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) that are extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, was proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate with self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing.
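
    The Motion History Image descriptor the study builds on has a simple update rule: fresh motion stamps the current timestamp into the image and older motion decays away. The sketch below is a generic NumPy rendering of that rule; the difference threshold and duration are illustrative, not the paper's settings.

    ```python
    # Sketch of the Motion History Image (MHI) update for one new frame.
    import numpy as np

    def update_mhi(mhi, prev_frame, frame, timestamp, duration=1.0, diff_thresh=30):
        """mhi: (H, W) float array; frames: (H, W) grayscale arrays."""
        motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
        mhi[motion] = timestamp                              # stamp fresh motion
        mhi[~motion & (mhi < timestamp - duration)] = 0      # forget old motion
        return mhi
    ```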

  6. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    Science.gov (United States)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are the application C processes and their mapping to processors in the platform. A processor data model, including the pipelined datapath, memory hierarchy and branch delay model, is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using an MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real time and showed less than 10% timing error compared to board measurements.

  7. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

    Full Text Available Taking architectural plans as the data source, we propose a method which can automatically generate indoor map spatial data. Firstly, referring to the spatial data demands of indoor maps, we analyzed the basic characteristics of architectural plans and introduced the concepts of wall segment, adjoining node and adjoining wall segment, based on which the basic flow of automatic indoor map spatial data generation was established. Then, according to the adjoining relation between wall lines at their intersections with columns, we constructed a repair method for wall connectivity in relation to columns. Using gradual expansion and graphic reasoning to judge the local feature type of wall symbols at both sides of a door or window, and by updating the enclosing rectangle of the door or window, we developed a repair method for wall connectivity in relation to doors and windows, as well as a method to transform doors and windows into indoor map point features. Finally, on the basis of the geometric relation between adjoining wall segment median lines, a wall centre-line extraction algorithm is presented. Taking one exhibition hall's architectural plan as an example, we performed experiments; the results show that the proposed methods have good applicability to various complex situations and effectively realize automatic extraction of indoor map spatial data.

  8. A SCHEME FOR TEMPLATE SECURITY AT FEATURE FUSION LEVEL IN MULTIMODAL BIOMETRIC SYSTEM

    OpenAIRE

    Arvind Selwal; Sunil Kumar Gupta; Surender Kumar

    2016-01-01

    Biometrics is the science of human recognition based upon biological, chemical or behavioural traits. These systems are used in many real-life applications, ranging from biometric-based attendance systems to security at very sophisticated levels. A biometric system deals with raw data captured using a sensor and a feature template extracted from the raw image. One of the challenges being faced by designers of these systems is to secure the template data extracted from the biometric mod...

  9. Automatic generation of randomized trial sequences for priming experiments.

    Science.gov (United States)

    Ihrke, Matthias; Behrendt, Jörg

    2011-01-01

    In most psychological experiments, a randomized presentation of successive displays is crucial for the validity of the results. For some paradigms this is not a trivial issue, because trials are interdependent, e.g., in priming paradigms. We present software that automatically generates optimized trial sequences for (negative) priming experiments. Our implementation is based on an optimization heuristic known as genetic algorithms, which allows for an intuitive interpretation due to its similarity to natural evolution. The program features a graphical user interface that allows the user to generate trial sequences and to interactively improve them. The software is based on freely available software and is released under the GNU General Public License.
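
    The genetic-algorithm idea can be sketched generically: evolve trial orderings that minimize a cost counting unwanted inter-trial dependencies. The cost below only penalizes immediate condition repetitions and is a stand-in for real priming constraints; population size and mutation scheme are illustrative.

    ```python
    # Sketch of a genetic algorithm over trial sequences.
    import random

    def cost(seq):
        return sum(a == b for a, b in zip(seq, seq[1:]))   # adjacent repeats

    def evolve(trials, pop_size=50, generations=200):
        pop = [random.sample(trials, len(trials)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=cost)                              # fittest (lowest cost) first
            survivors = pop[:pop_size // 2]
            children = []
            for parent in survivors:
                child = parent[:]
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]     # mutation: swap two trials
                children.append(child)
            pop = survivors + children
        return min(pop, key=cost)

    best = evolve([t % 4 for t in range(24)])               # 24 trials of 4 conditions
    print(best, cost(best))
    ```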

  10. Cuypers : a semi-automatic hypermedia generation system

    OpenAIRE

    Ossenbruggen, Jacco; Cornelissen, F.J.; Geurts, Joost; Rutledge, Lloyd; Hardman, Lynda

    2000-01-01

    The report describes the architecture of Cuypers, a system supporting second and third generation Web-based multimedia. First generation Web content encodes information in handwritten (HTML) Web pages. Second generation Web content generates HTML pages on demand, e.g. by filling in templates with content retrieved dynamically from a database or by transformation of structured documents using style sheets (e.g. XSLT). Third generation Web pages will make use of rich markup (e.g. ...

  11. Lung metastases detection in CT images using 3D template matching

    International Nuclear Information System (INIS)

    Wang, Peng; DeNunzio, Andrea; Okunieff, Paul; O'Dell, Walter G.

    2007-01-01

    The aim of this study is to demonstrate a novel, fully automatic computer detection method applicable to metastatic tumors to the lung with a diameter of 4-20 mm in high-risk patients using typical computed tomography (CT) scans of the chest. Three-dimensional (3D) spherical tumor appearance models (templates) of various sizes were created to match representative CT imaging parameters and to incorporate partial volume effects. Taking into account the variability in the location of CT sampling planes cut through the spherical models, three offsetting template models were created for each appearance model size. Lung volumes were automatically extracted from computed tomography images and the correlation coefficients between the subregions around each voxel in the lung volume and the set of appearance models were calculated using a fast frequency domain algorithm. To determine optimal parameters for the templates, simulated tumors of varying sizes and eccentricities were generated and superposed onto a representative human chest image dataset. The method was applied to real image sets from 12 patients with known metastatic disease to the lung. A total of 752 slices and 47 identifiable tumors were studied. Spherical templates of three sizes (6, 8, and 10 mm in diameter) were used on the patient image sets; all 47 true tumors were detected with the inclusion of only 21 false positives. This study demonstrates that an automatic and straightforward 3D template-matching method, without any complex training or postprocessing, can be used to detect small lung metastases quickly and reliably in the clinical setting
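
    The frequency-domain matching step lends itself to a compact sketch. The version below is a hedged illustration, not the authors' implementation: it computes plain cross-correlation of a zero-mean spherical template via SciPy's fftconvolve, whereas the paper uses correlation coefficients, which additionally require local normalization; template sizes and the threshold are assumptions.

    ```python
    # Sketch of frequency-domain matching of a 3D spherical template in a CT volume.
    import numpy as np
    from scipy.signal import fftconvolve

    def sphere_template(diameter_vox):
        r = diameter_vox / 2.0
        g = np.indices((int(diameter_vox),) * 3) - (diameter_vox - 1) / 2.0
        return (np.sqrt((g ** 2).sum(axis=0)) <= r).astype(float)

    def match(volume, template):
        t = template - template.mean()                     # zero-mean template
        # Convolving with the flipped template equals cross-correlation.
        return fftconvolve(volume, t[::-1, ::-1, ::-1], mode="same")

    # candidates = np.argwhere(match(ct_volume, sphere_template(8)) > threshold)
    ```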

  12. Generating Protocol Software from CPN Models Annotated with Pragmatics

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars M.; Kindler, Ekkart

    2013-01-01

    ...and verify protocol software, but limited work exists on using CPN models of protocols as a basis for automated code generation. The contribution of this paper is a method for generating protocol software from a class of CPN models annotated with code generation pragmatics. Our code generation method consists of three main steps: automatically adding so-called derived pragmatics to the CPN model, computing an abstract template tree, which associates pragmatics with code templates, and applying the templates to generate code which can then be compiled. We illustrate our method using a unidirectional...

  13. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML-model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating models the software can automatically fit all these models to the data and provides a ranking for model selection, in case data is available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software.
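
    The candidate-generation idea reduces to enumerating sub-models of a master model. The sketch below uses plain sets instead of SBML and invented reaction names; real ModelMage operates on an SBML master model plus user directives.

    ```python
    # Sketch: from a master model, produce all sub-models obtained by
    # dropping user-specified optional reactions.
    from itertools import combinations

    master = {"R1", "R2", "R3", "R4"}
    optional = ["R3", "R4"]            # directives: reactions that may be absent

    candidates = []
    for k in range(len(optional) + 1):
        for dropped in combinations(optional, k):
            candidates.append(master - set(dropped))
    print(candidates)                  # 4 candidate models from one master model
    ```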

  14. Development of automatic navigation measuring system using template-matching software in image guided neurosurgery

    International Nuclear Information System (INIS)

    Watanabe, Yohei; Hayashi, Yuichiro; Fujii, Masazumi; Wakabayashi, Toshihiko; Kimura, Miyuki; Tsuzaka, Masatoshi; Sugiura, Akihiro

    2010-01-01

    An image-guided neurosurgery and neuronavigation system based on magnetic resonance imaging has been used as an indispensable tool for resection of brain tumors. Therefore, accuracy of the neuronavigation system, provided by periodic quality assurance (QA), is essential for image-guided neurosurgery. Two types of accuracy index, fiducial registration error (FRE) and target registration error (TRE), have been used to evaluate navigation accuracy. FRE shows navigation accuracy on points that have been registered. On the other hand, TRE shows navigation accuracy on points such as tumor, skin, and fiducial markers. This study shows that TRE is more reliable than FRE. However, calculation of TRE is a time-consuming, subjective task. Software for QA was developed to compute TRE. This software calculates TRE automatically by an image processing technique, such as automatic template matching. TRE was calculated by the software and compared with the results obtained by manual calculation. Using the software made it possible to achieve a reliable QA system. (author)
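
    The automated TRE idea can be sketched with standard OpenCV template matching: locate a fiducial marker in the image and measure its distance from the position reported by the navigation system. The function and argument names below are hypothetical, and grayscale uint8 images are assumed; this is not the developed QA software itself.

    ```python
    # Sketch: TRE as the distance between a template-matched marker position
    # and the navigated position.
    import cv2
    import numpy as np

    def target_registration_error(image, marker_template, navigated_xy):
        res = cv2.matchTemplate(image, marker_template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)               # best match location
        h, w = marker_template.shape[:2]
        found = np.array([max_loc[0] + w / 2, max_loc[1] + h / 2])  # marker center
        return np.linalg.norm(found - np.asarray(navigated_xy))    # TRE in pixels
    ```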

  15. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    Directory of Open Access Journals (Sweden)

    Jung-ran Park

    2015-09-01

    Full Text Available Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data. Utilization of (semi)automatic metadata generation is critical in addressing these environmental changes and may be unavoidable in the future considering the costly and complex operation of manual metadata creation. To address such needs, this study examines the range of semi-automatic metadata generation tools (n=39) while providing an analysis of their techniques, features, and functions. The study focuses on open-source tools that can be readily utilized in libraries and other memory institutions. The challenges and current barriers to implementation of these tools were identified. The greatest area of difficulty lies in the fact that the piecemeal development of most semi-automatic generation tools only addresses part of the issue of semi-automatic metadata generation, providing solutions for one or a few metadata elements but not the full range of elements. This indicates that significant local efforts will be required to integrate the various tools into a coherent working whole. Suggestions toward such efforts are presented for future developments that may assist information professionals with the incorporation of semi-automatic tools within their daily workflows.

  16. Applying Hierarchical Model Calibration to Automatically Generated Items.

    Science.gov (United States)

    Williamson, David M.; Johnson, Matthew S.; Sinharay, Sandip; Bejar, Isaac I.

    This study explored the application of hierarchical model calibration as a means of reducing, if not eliminating, the need for pretesting of automatically generated items from a common item model prior to operational use. Ultimately the successful development of automatic item generation (AIG) systems capable of producing items with highly similar…

  17. The Role of Item Models in Automatic Item Generation

    Science.gov (United States)

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  18. Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.

    Science.gov (United States)

    Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel

    2017-08-18

    Nowadays, sleep quality is one of the most important measures of healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was average, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods. Among
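
    The Borda aggregation compared in the study can be sketched in a few lines: each ranker awards points inversely to a feature's rank, and features are re-ranked by their total score. Feature names below are illustrative.

    ```python
    # Sketch of Borda-count rank aggregation across several feature rankings.
    def borda(rankings):
        """rankings: list of lists, each an ordering of the same feature names."""
        n = len(rankings[0])
        score = {f: 0 for f in rankings[0]}
        for ranking in rankings:
            for rank, feat in enumerate(ranking):
                score[feat] += n - rank            # best rank earns most points
        return sorted(score, key=score.get, reverse=True)

    print(borda([["f1", "f2", "f3"], ["f2", "f1", "f3"], ["f1", "f3", "f2"]]))
    ```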

  19. Adaptive template generation for amyloid PET using a deep learning approach.

    Science.gov (United States)

    Kang, Seung Kwan; Seo, Seongho; Shin, Seong A; Byun, Min Soo; Lee, Dong Young; Kim, Yu Kyeong; Lee, Dong Soo; Lee, Jae Sung

    2018-05-11

    Accurate spatial normalization (SN) of amyloid positron emission tomography (PET) images for Alzheimer's disease assessment without coregistered anatomical magnetic resonance imaging (MRI) of the same individual is technically challenging. In this study, we applied deep neural networks to generate individually adaptive PET templates for robust and accurate SN of amyloid PET without using matched 3D MR images. Using 681 pairs of simultaneously acquired 11C-PIB PET and T1-weighted 3D MRI scans of AD, MCI, and cognitively normal subjects, we trained and tested two deep neural networks [a convolutional auto-encoder (CAE) and a generative adversarial network (GAN)] that produce the best adaptive PET templates. More specifically, the networks were trained using 685,100 pieces of augmented data generated by rotating 527 randomly selected datasets, and validated using 154 datasets. The input to the supervised neural networks was the 3D PET volume in native space and the label was the spatially normalized 3D PET image obtained using the transformation parameters from MRI-based SN. The proposed deep learning approach significantly enhanced the quantitative accuracy of MRI-less amyloid PET assessment by reducing the SN error observed when an average amyloid PET template is used. Given an input image, the trained deep neural networks rapidly provide individually adaptive 3D PET templates without any discontinuity between the slices (in 0.02 s). As the proposed method does not require 3D MRI for the SN of PET images, it has great potential for use in routine analysis of amyloid PET images in clinical practice and research. © 2018 Wiley Periodicals, Inc.
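
    A toy PyTorch stand-in for the CAE variant is sketched below. The input/label pairing (native-space PET in, MRI-normalized PET out) follows the abstract; the architecture, layer sizes, and class name are invented, and input dimensions divisible by 4 are assumed.

    ```python
    # Minimal sketch of a 3D convolutional auto-encoder mapping native-space
    # PET volumes toward template-space PET volumes (not the authors' network).
    import torch
    import torch.nn as nn

    class TemplateCAE(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            self.dec = nn.Sequential(
                nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1))

        def forward(self, x):             # x: (batch, 1, D, H, W) native-space PET
            return self.dec(self.enc(x))  # predicted template-space PET

    # Training would minimize MSE against the MRI-based spatially normalized PET:
    # loss = nn.functional.mse_loss(model(native_pet), normalized_pet)
    ```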

  20. [Development of a Software for Automatically Generated Contours in Eclipse TPS].

    Science.gov (United States)

    Xie, Zhao; Hu, Jinyou; Zou, Lian; Zhang, Weisha; Zou, Yuxin; Luo, Kelin; Liu, Xiangxiang; Yu, Luxin

    2015-03-01

    The automatic generation of planning targets and auxiliary contours has been achieved in Eclipse TPS 11.0. The scripting language AutoHotkey was used to develop software for automatically generating contours in Eclipse TPS. This software is named Contour Auto Margin (CAM) and is composed of contour operation functions, script-generation visualization and script file operations. Ten cases of different cancers were selected separately; in Eclipse TPS 11.0, scripts generated by the software could not only automatically generate contours but also perform contour post-processing. For the different cancers, there was no difference between the automatically generated contours and the manually created contours. CAM is a user-friendly and powerful piece of software that can quickly generate contours automatically in Eclipse TPS 11.0. With the help of CAM, plan preparation time is greatly reduced and the working efficiency of radiation therapy physicists is improved.

  1. Development of Total Knee Replacement Digital Templating Software

    Science.gov (United States)

    Yusof, Siti Fairuz; Sulaiman, Riza; Thian Seng, Lee; Mohd. Kassim, Abdul Yazid; Abdullah, Suhail; Yusof, Shahril; Omar, Masbah; Abdul Hamid, Hamzaini

    In this study, by taking full advantage of digital X-ray and computer technology, we have developed a semi-automated procedure for templating knee implants using a digital templating method. With this approach, a software system called OrthoKnee™ has been designed and developed. The system is to be utilized in a study in the Department of Orthopaedic and Traumatology in the medical faculty, UKM (FPUKM). The OrthoKnee™ templating process employs a technique similar to that used by many surgeons, who place acetate templates over X-ray films. The template technique makes it easy to template various implants from implant manufacturers that provide a comprehensive database of templates. The templating functionality includes knee templates from several manufacturers (Smith & Nephew and Zimmer). From a patient X-ray image, OrthoKnee™ helps to quickly and easily read off the approximate template size needed. The visual templating features then allow us to quickly review multiple template sizes against the X-ray and thus obtain a nearly precise view of the implant size required. The system can assist by templating on one patient image and will generate reports that can accompany patient notes. The software system was implemented in Visual Basic 6.0 Pro using object-oriented techniques to manage the graphics and objects. The approaches for image scaling will be discussed. Several measurements used in the orthopaedic diagnosis process have been studied and added to this software as measurement-tool features using mathematical theorems and equations. The study compared the results of the semi-automated (digital templating) method to the conventional method to demonstrate the accuracy of the system.

  2. Liquid as template for next generation micro devices

    International Nuclear Information System (INIS)

    Charmet, Jerome; Haquette, Henri; Laux, Edith; Keppner, Herbert; Gorodyska, Ganna; Textor, Marcus; Durante, Guido Spinola; Portuondo-Campa, Erwin; Knapp, Helmut; Bitterli, Roland; Noell, Wilfried

    2009-01-01

    Liquids have fascinated generations of scientists and engineers. Since ancient Greece, the perfect natural shape of liquids has been used to create optical systems. Nowadays, the natural shape of liquid is used in the fabrication of microlens arrays that rely on the melting of glass or photoresist to generate high quality lenses. However, the shrinkage normally associated with the liquid-to-solid phase transition will affect the initial shape and quality of the liquid structure. In this contribution, a novel fabrication technique that enables the encapsulation and replication of liquid templates without affecting their natural shape is presented. The SOLID (SOlid on LIquid Deposition) process allows a transparent solid film to be deposited and grown onto a liquid template (droplet, film, line) in such a way that the liquid shapes the overgrowing solid layer. The resulting configuration of the SOLID devices is chemically and mechanically stable and is the basis of a huge variety of new micro-nano systems in the fields of microfluidics, biomedical devices and micro-optics, among others. The SOLID process enables, in a one-step process, the encapsulation of liquid microlenses, fluidic channels, drug reservoirs or any naturally driven liquid structure. The phenomenon and solid-liquid interface resulting from the SOLID process are new and still unexploited. The solid layer chosen for the SOLID process in this paper is poly-para-xylylene, called Parylene, a transparent biocompatible polymer with excellent mechanical and chemical properties. Moreover, as the solid layer grows over a liquid template, channels with atomically smooth surfaces can be obtained. The polymerization of Parylene does not exert stress and does not change the shape of the liquid; this latter aspect is particularly interesting for manufacturing naturally driven liquid structures. In this paper the authors explore the limits of this new method by testing different designs of SOLID encapsulated structures and

  3. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    Science.gov (United States)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    A subset of the attributes of hydrologic feature data in the national geographic census is not clear; the current solution to this problem is manual filling, which is inefficient and liable to mistakes. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structural robustness and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that our method is highly reasonable and efficient.

  4. Real versus template-based Natural Language Generation: a false opposition?

    NARCIS (Netherlands)

    van Deemter, Kees; Krahmer, Emiel; Theune, Mariet

    2005-01-01

    This paper challenges the received wisdom that template-based approaches to the generation of language are necessarily inferior to other approaches as regards their maintainability, linguistic well-foundedness and quality of output. Some recent NLG systems that call themselves 'template-based' will

  5. Template match using local feature with view invariance

    Science.gov (United States)

    Lu, Cen; Zhou, Gang

    2013-10-01

    Matching a template image in a target image is a fundamental task in the field of computer vision. Aiming at the deficiencies of traditional image matching methods and their inaccurate matching in scene images with rotation, illumination and view changes, a novel matching algorithm using local features is proposed in this paper. The local histograms of the edge pixels (LHoE) are extracted as an invariant feature to resist view and brightness changes. The merit of the LHoE is that edge points are little affected by view changes, and the LHoE can resist not only illumination variance but also contamination by noise. Because matching is executed only on the edge points, the computational burden is greatly reduced. Additionally, our approach is conceptually simple, easy to implement and does not need a training phase. View change can be considered as the combination of rotation, illumination and shear transformations. Experimental results on simulated and real data demonstrate that the proposed approach is superior to NCC (normalized cross-correlation) and histogram-based methods under view changes.
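
    The LHoE idea admits a short sketch: compute a histogram only over edge pixels and compare histograms between template and candidate patch. This is a hedged illustration, not the paper's exact descriptor; the Canny thresholds, bin count, and similarity measure are assumptions, and grayscale uint8 images are expected.

    ```python
    # Sketch of a local histogram over edge pixels and a simple match score.
    import cv2
    import numpy as np

    def lhoe(gray, bins=32):
        edges = cv2.Canny(gray, 100, 200) > 0            # restrict to edge pixels
        hist, _ = np.histogram(gray[edges], bins=bins, range=(0, 255))
        return hist / max(hist.sum(), 1)                 # normalized local histogram

    def match_score(template_gray, patch_gray):
        a, b = lhoe(template_gray), lhoe(patch_gray)
        return 1.0 - 0.5 * np.abs(a - b).sum()           # 1 = identical histograms
    ```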

  6. Using Polarization features of visible light for automatic landmine detection

    NARCIS (Netherlands)

    Jong, W. de; Schavemaker, J.G.M.

    2007-01-01

    This chapter describes the usage of polarization features of visible light for automatic landmine detection. The first section gives an introduction to land-mine detection and the usage of camera systems. In section 2 detection concepts and methods that use polarization features are described.

  7. Automatic quantification of defect size using normal templates: a comparative clinical study of three commercially available algorithms

    International Nuclear Information System (INIS)

    Sutter, J. de; Wiele, C. van de; Bondt, P. de; Dierckx, R.; D'Asseler, Y.; Backer, G. de; Rigo, P.

    2000-01-01

    Infarct size assessed by myocardial single-photon emission tomography (SPET) imaging is an important prognostic parameter after myocardial infarction (MI). We compared three commercially available automatic quantification algorithms that make use of normal templates for the evaluation of infarct extent and severity in a large population of patients with remote MI. We studied 100 consecutive patients (80 men, mean age 63±11 years, mean LVEF 47%±15%) with a remote MI who underwent a resting technetium-99m tetrofosmin gated SPET study for infarct extent and severity quantification. The quantification algorithms used for comparison were a short-axis algorithm (Cedars-Emory quantitative analysis software, CEqual), a vertical long-axis algorithm (VLAX) and a three-dimensional fitting algorithm (Perfit). Semiquantitative visual assessment of infarct extent and severity using a 20-segment model with a 5-point score, and the relation of infarct extent and severity to rest LVEF determined by quantitative gated SPET (QGS), were used as standards to compare the different algorithms. Mean infarct extent was similar for visual analysis (30%±21%) and the VLAX algorithm (25%±17%), but CEqual (15%±11%) and Perfit (5%±6%) mean infarct extents were significantly lower compared with visual analysis and the VLAX algorithm. Moreover, infarct extent determined by Perfit was significantly lower than infarct extent determined by CEqual. Correlations between automatic and visual infarct extent and severity evaluations were moderate (r=0.47), and were lower for inferior infarctions and obese patients (body mass index >30 kg/m², n=32) compared with anterior infarctions and non-obese patients for all three algorithms. In this large series of post-MI patients, results of infarct extent and severity determination by automatic quantification algorithms that make use of normal templates were not interchangeable and correlated only moderately with semiquantitative visual analysis and LVEF. (orig.)

  8. Templated Dry Printing of Conductive Metal Nanoparticles

    Science.gov (United States)

    Rolfe, David Alexander

    Printed electronics can lower the cost and increase the ubiquity of electrical components such as batteries, sensors, and telemetry systems. Unfortunately, the advance of printed electronics has been held back by the limited minimum resolution, aspect ratio, and feature fidelity of present printing techniques such as gravure, screen printing and inkjet printing. Templated dry printing offers a solution to these problems by patterning nanoparticle inks in templates before drying. This dissertation presents advancements in two varieties of templated dry nanoprinting. The first, advective micromolding in vapor-permeable templates (AMPT), is a microfluidic approach that uses evaporation-driven mold filling to create submicron features with a 1:1 aspect ratio. We discuss submicron surface acoustic wave (SAW) resonators made through this process, and the refinements to the template manufacturing process necessary to make these devices. We also present modeling techniques that can be applied to future AMPT templates. We conclude with a modified templated dry printing method that improves throughput and isolated-feature patterning by transferring dry-templated features with laser ablation. This method utilizes surface-energy-defined templates to pattern features via doctor blade coating. Patterned and dried features can be transferred to a polymer substrate with an Nd:YAG MOPA fiber laser, and printed features can be smaller than the laser beam width.

  9. Production of 99mTc generators with automatic elution

    International Nuclear Information System (INIS)

    Mengatti, J.; Yanagawa, S.T.I.; Mazzarro, E.; Gasiglia, H.T.; Rela, P.R.; Silva, C.P.G. da; Pereira, N.P.S. de.

    1983-10-01

    The improvements made to the routine production of 99mTc generators at the Instituto de Pesquisas Energeticas e Nucleares-CNEN/SP are described. The old-model generators (manual elution of 99mTc) were replaced by automatically eluted generators (vacuum system). The alumina column, elution system and accessories were modified, and the elution time was reduced from 60 to 20-30 seconds. The new generators give 80-90% elution yields using six milliliters of 0.9% sodium chloride as the 99mTc eluant, instead of the 10 milliliters necessary to elute the old generators. The radioactive concentrations are therefore now 70% higher. Radioactive, radiochemical, chemical and microbiological criteria were examined for the 99mTc solutions. Like the old generators, the automatic generators were considered safe for medical purposes. (Author) [pt]

  10. Automatic motion inhibit system for a nuclear power generating system

    International Nuclear Information System (INIS)

    Musick, C.R.; Torres, J.M.

    1977-01-01

    Disclosed is an automatic motion inhibit system for a nuclear power generating system for inhibiting automatic motion of the control elements to reduce reactor power in response to a turbine load reduction. The system generates a final reactor power level setpoint signal which is continuously compared with a reactor power signal. The final reactor power level setpoint is a setpoint within the capacity of the bypass valves to bypass steam which in no event is lower in value than the lower limit of automatic control of the reactor. If the final reactor power level setpoint is greater than the reactor power, an inhibit signal is generated to inhibit automatic control of the reactor. 6 claims, 5 figures

  11. AUTO-LAY: automatic layout generation for procedure flow diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Forzano, P; Castagna, P [Ansaldo SpA, Genoa (Italy)

    1996-12-31

    Nuclear power plant procedures can be seen from essentially two viewpoints: the process and the information management. From the first point of view, it is important to supply the knowledge needed to solve problems connected with the control of the process; from the second, the focus of attention is on the knowledge representation, its structure, elicitation and maintenance, and formal quality assurance. These two aspects of procedure representation can be considered and solved separately. In particular, methodological, formal and management issues require long and tedious activities that in most cases constitute a great barrier to procedure development and upgrade. To solve these problems, Ansaldo is developing DIAM, a wide-ranging integrated tool for procedure management that supports procedure writing, updating, usage and documentation. One of the most challenging features of DIAM is AUTO-LAY, a CASE sub-tool that, in a completely automatic way, structures parts of or complete flow diagrams. This is a feature that is partially present in some other CASE products, which, however, do not allow complex graph handling or isomorphism between screen and paper representations. AUTO-LAY has the unique capability to draw graphs of any complexity, to section them into pages, and to automatically compose a document. This has been recognized in the literature as the most important second-generation CASE improvement. (author). 5 refs., 9 figs.

  12. AUTO-LAY: automatic layout generation for procedure flow diagrams

    International Nuclear Information System (INIS)

    Forzano, P.; Castagna, P.

    1995-01-01

    Nuclear power plant procedures can be seen from essentially two viewpoints: the process and the information management. From the first point of view, it is important to supply the knowledge needed to solve problems connected with the control of the process; from the second, the focus of attention is on the knowledge representation, its structure, elicitation and maintenance, and formal quality assurance. These two aspects of procedure representation can be considered and solved separately. In particular, methodological, formal and management issues require long and tedious activities that in most cases constitute a great barrier to procedure development and upgrade. To solve these problems, Ansaldo is developing DIAM, a wide-ranging integrated tool for procedure management that supports procedure writing, updating, usage and documentation. One of the most challenging features of DIAM is AUTO-LAY, a CASE sub-tool that, in a completely automatic way, structures parts of or complete flow diagrams. This is a feature that is partially present in some other CASE products, which, however, do not allow complex graph handling or isomorphism between screen and paper representations. AUTO-LAY has the unique capability to draw graphs of any complexity, to section them into pages, and to automatically compose a document. This has been recognized in the literature as the most important second-generation CASE improvement. (author). 5 refs., 9 figs.

  13. Automatic Grasp Generation and Improvement for Industrial Bin-Picking

    DEFF Research Database (Denmark)

    Kraft, Dirk; Ellekilde, Lars-Peter; Rytz, Jimmy Alison

    2014-01-01

    This paper presents work on automatic grasp generation and grasp learning for reducing the manual setup time and increasing grasp success rates within bin-picking applications. We propose an approach that is able to generate good grasps automatically using a dynamic grasp simulator, a newly developed... ...and achieve comparable results, and that our learning approach can improve system performance significantly. Automatic bin-picking is an important industrial process that can lead to significant savings and potentially keep production in countries with high labour costs rather than outsourcing it. The presented...

  14. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    Science.gov (United States)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure the electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by emotion recognition results.

  15. The Grid[Way] Job Template Manager, a tool for parameter sweeping

    Science.gov (United States)

    Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.

    2011-04-01

    Parameter sweeping is a widely used algorithmic technique in computational science. It is specially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports interesting features like a multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value-skipping and automatic indexation of job templates. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.
    Program summary
    Program title: Grid[Way] Job Template Manager (version 1.0)
    Catalogue identifier: AEIE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Apache license 2.0
    No. of lines in distributed program, including test data, etc.: 3545
    No. of bytes in distributed program, including test data, etc.: 126 879
    Distribution format: tar.gz
    Programming language: Perl 5.8.5 and above
    Computer: Any (tested on PC x86 and x86_64)
    Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
    RAM: 10 MB
    Classification: 6.5
    External routines: The GridWay Metascheduler [1].
    Nature of problem: To parameterize and manage an application running on a grid or cluster.
    Solution method: Generation of job templates as a cross product of
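
    The cross-product generation at the heart of the tool is easy to sketch. The snippet below is a generic illustration in Python (the tool itself is written in Perl); the parameter names, file naming, and template fields are invented stand-ins rather than GridWay's actual job template syntax.

    ```python
    # Sketch: one job template per point of the sweep space.
    from itertools import product

    params = {"alpha": [0.1, 0.2], "beta": range(3), "mode": ["fast", "exact"]}

    for i, values in enumerate(product(*params.values())):
        assignment = dict(zip(params.keys(), values))
        with open(f"job_{i:04d}.jt", "w") as f:          # one template file per job
            f.write("EXECUTABLE = my_app\nARGUMENTS = "
                    + " ".join(f"--{k} {v}" for k, v in assignment.items()) + "\n")
    ```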

  16. High-speed particle tracking in nuclear emulsion by last-generation automatic microscopes

    International Nuclear Information System (INIS)

    Armenise, N.; De Serio, M.; Ieva, M.; Muciaccia, M.T.; Pastore, A.; Simone, S.; Damet, J.; Kreslo, I.; Savvinov, N.; Waelchli, T.; Consiglio, L.; Cozzi, M.; Di Ferdinando, D.; Esposito, L.S.; Giacomelli, G.; Giorgini, M.; Mandrioli, G.; Patrizii, L.; Sioli, M.; Sirri, G.; Arrabito, L.; Laktineh, I.; Royole-Degieux, P.; Buontempo, S.; D'Ambrosio, N.; De Lellis, G.; De Rosa, G.; Di Capua, F.; Coppola, D.; Formisano, F.; Marotta, A.; Migliozzi, P.; Pistillo, C.; Scotto Lavina, L.; Sorrentino, G.; Strolin, P.; Tioukov, V.; Juget, F.; Hauger, M.; Rosa, G.; Barbuto, E.; Bozza, C.; Grella, G.; Romano, G.; Sirignano, C.

    2005-01-01

    The technique of nuclear emulsions for high-energy physics experiments is being revived, thanks to the remarkable progress in measurement automation achieved in the past years. The present paper describes the features and performance of the European Scanning System, a last-generation automatic microscope working at a scanning speed of 20 cm²/h. The system has been developed in the framework of the OPERA experiment, designed to unambiguously detect ν_μ → ν_τ oscillations in nuclear emulsions

  17. Automatic 3d Building Model Generations with Airborne LiDAR Data

    Science.gov (United States)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems are becoming more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed in a simple and quick way for the many studies which include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is aimed at. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification includes hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules have been performed to improve the classification results using different test areas identified in the study area. The proposed approach has been tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The obtained results of this research on the study area verified that automatic 3D

  18. AUTOMATIC 3D BUILDING MODEL GENERATIONS WITH AIRBORNE LiDAR DATA

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2017-11-01

    Full Text Available LiDAR systems are becoming more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, three-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed in a simple and quick way for the many studies which include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is aimed at. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification includes hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules have been performed to improve the classification results using different test areas identified in the study area. The proposed approach has been tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The obtained results of this research on the study area verified

  19. Automatic attraction of visual attention by supraletter features of former target strings

    DEFF Research Database (Denmark)

    Kyllingsbæk, Søren; Lommel, Sven Van; Bundesen, Claus

    2014-01-01

    ...performance (d') degraded on trials in which former targets were present, suggesting that the former targets automatically drew processing resources away from the current targets. Apparently, the two experiments showed automatic attraction of visual attention by supraletter features of former target strings.

  20. The Automatic Test Features of the IDiPS Reactor Protection System

    International Nuclear Information System (INIS)

    Hur, Seop; Kim, Dong-Hoon; Hwang, In-Koo; Lee, Cheol-Kwon; Lee, Dong-Young

    2007-01-01

    The reactor protection system (RPS) is designed to minimize the propagation of abnormal or accident conditions in nuclear power plants. A digital RPS (the Integrated Digital Protection System (IDiPS) RPS) is being developed in the Korea Nuclear Instrumentation and Control System (KNICS) R and D project. To make good use of the advantages of digital technology, it is necessary to improve the reliability and availability of the system through automatic test features including on-line testing, self-diagnostics, auto-calibration, etc. This paper summarizes the system test strategy and the automatic test features of the IDiPS RPS

  1. Automatic Structure-Based Code Generation from Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Westergaard, Michael

    2010-01-01

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs (PP-CPNs), a subclass of CPNs equipped with an explicit separation of process control flow, message passing, and access to shared and local data. We show how PP-CPNs cater for a four-phase structure-based automatic code generation process directed by the control flow of processes. The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF).

  2. Novel Automatic Filter-Class Feature Selection for Machine Learning Regression

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Hallam, John; Jørgensen, Bo Nørregaard

    2017-01-01

    With the increased focus on the application of Big Data in all sectors of society, the performance of machine learning becomes essential. Efficient machine learning depends on efficient feature selection algorithms. Filter feature selection algorithms are model-free and therefore very fast, but require... model in the feature selection process. PCA is often used in the machine learning literature and can be considered the default feature selection method. RDESF outperformed PCA in both experiments in both prediction error and computational speed. RDESF is a new step into filter-based automatic feature
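
    A generic filter-class selector can be sketched as follows. This is a stand-in for the family of methods discussed, not RDESF's algorithm: each feature is scored by a fast, model-free statistic (here absolute Pearson correlation with the target) and the top-k are kept; the function name and parameters are invented.

    ```python
    # Sketch of a model-free filter feature selector.
    import numpy as np

    def filter_select(X, y, k=10):
        """X: (n_samples, n_features); y: (n_samples,). Returns top-k feature indices."""
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
        score = np.abs(Xc.T @ yc) / np.maximum(denom, 1e-12)   # |Pearson r| per feature
        return np.argsort(score)[::-1][:k]                     # most relevant features
    ```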

  3. A Classification-oriented Method of Feature Image Generation for Vehicle-borne Laser Scanning Point Clouds

    Directory of Open Access Journals (Sweden)

    YANG Bisheng

    2016-02-01

    Full Text Available An efficient method of feature image generation from point clouds, to automatically classify dense point clouds into different categories such as terrain points and building points, is proposed. The method first uses planar projection to sort points into different grids, then calculates the weights and feature values of the grids according to the distribution of the laser scanning points, and finally generates the feature image of the point cloud. The proposed method then adopts contour extraction and tracing to extract the boundaries and point clouds of man-made objects (e.g. buildings and trees) in 3D based on the generated image. Experiments show that the proposed method provides a promising solution for classifying and extracting man-made objects from vehicle-borne laser scanning point clouds.
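
    The grid-rasterization step can be sketched generically: project points onto a horizontal grid and record per-cell statistics as image bands. The cell size and the specific statistics (point count and height range) below are illustrative assumptions, not the paper's exact weights and feature values.

    ```python
    # Sketch: rasterize a point cloud into per-cell feature "images".
    import numpy as np

    def feature_image(points, cell=0.5):
        """points: (n, 3) array of x, y, z laser returns."""
        ij = np.floor(points[:, :2] / cell).astype(int)
        ij -= ij.min(axis=0)                      # shift indices to start at 0
        shape = ij.max(axis=0) + 1
        count = np.zeros(shape)
        zmin = np.full(shape, np.inf)
        zmax = np.full(shape, -np.inf)
        for (i, j), z in zip(ij, points[:, 2]):
            count[i, j] += 1
            zmin[i, j] = min(zmin[i, j], z)
            zmax[i, j] = max(zmax[i, j], z)
        zrange = np.where(count > 0, zmax - zmin, 0.0)
        return count, zrange                      # bands fed to contour extraction
    ```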

  4. Template based rodent brain extraction and atlas mapping.

    Science.gov (United States)

    Weimin Huang; Jiaqi Zhang; Zhiping Lin; Su Huang; Yuping Duan; Zhongkang Lu

    2016-08-01

    Accurate rodent brain extraction is a basic step for many translational studies using MR imaging. This paper presents a template-based approach with multi-expert refinement for automatic rodent brain extraction. We first build the brain appearance model based on the learning exemplars. Together with template matching, we encode the rodent brain position into the search space to reliably locate the rodent brain and estimate a rough segmentation. With the initial mask, a level-set segmentation and mask-based template learning are then applied to the brain region. Multi-expert fusion is used to generate a new mask. We finally combine region growing based on learned histogram distributions to delineate the final brain mask. A high-resolution rodent atlas is used to illustrate that the segmented low-resolution anatomic image can be well mapped to the atlas. Tested on a public data set, all brains were located reliably and we achieved a mean Jaccard similarity score of 94.99% for brain segmentation, which is a statistically significant improvement compared with two other rodent brain extraction methods.

  5. Evaluating automatic laughter segmentation in meetings using acoustic and acoustic-phonetic features

    NARCIS (Netherlands)

    Truong, K.P.; Leeuwen, D.A. van

    2007-01-01

    In this study, we investigated automatic laughter segmentation in meetings. We first performed laughter-speech discrimination experiments with traditional spectral features and subsequently used acoustic-phonetic features. In segmentation, we used Gaussian Mixture Models that were trained with

  6. Automatic presentation generation for scholarly hypermedia

    NARCIS (Netherlands)

    S. Bocconi

    2003-01-01

    Automatic hypermedia presentation generation uses an information source semantic network first to select the content and then to compose it in the presentation so that the semantic relations between the information items are conveyed to the user. A hypermedia presentation can be

  7. Automatically Generated Vegetation Density Maps with LiDAR Survey for Orienteering Purpose

    Science.gov (United States)

    Petrovič, Dušan

    2018-05-01

    The focus of our research was to automatically generate the most adequate vegetation density maps for orienteering purposes. The application Karttapullautin, which requires LiDAR data, was used for the automated generation of vegetation density maps. A part of the orienteering map of the Kazlje-Tomaj area was used to compare the graphical display of vegetation density. With different parameter settings in the Karttapullautin application we changed how the vegetation density of the automatically generated map was presented, and tried to match it as closely as possible with the orienteering map of Kazlje-Tomaj. By comparing several generated vegetation density maps, the most suitable parameter settings for automatically generating maps of other areas were also proposed.

  8. Automatic lip reading by using multimodal visual features

    Science.gov (United States)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been researched for a long time, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing-impaired or have difficulties in hearing cannot receive its benefits. To recognize speech automatically, visual information is also important. People understand speech not only from audio information, but also from visual information such as temporal changes in the lip shape. A vision-based speech recognition method could work well in noisy places, and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech by using multimodal visual information, without using any audio information. First, the ASM (Active Shape Model) is used to track and detect the face and lip in a video sequence. Second, the shape, optical flow and spatial frequencies of the lip features are extracted from the lip detected by the ASM. Next, the extracted multimodal features are ordered chronologically so that a Support Vector Machine can be used to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.

  9. Design of Cancelable Palmprint Templates Based on Look Up Table

    Science.gov (United States)

    Qiu, Jian; Li, Hengjian; Dong, Jiwen

    2018-03-01

    A novel cancelable palmprint template generation scheme is proposed in this paper. Firstly, a Gabor filter and a chaotic matrix are used to extract palmprint features. These are then arranged into a row vector and divided into equal-sized blocks. The blocks are converted to corresponding decimals and mapped to look-up tables, forming the final cancelable palmprint features based on the selected check bits. Finally, collaborative representation based classification with regularized least squares is used for classification. Experimental results on the Hong Kong PolyU Palmprint Database verify that the proposed cancelable templates can achieve very high performance and security levels. Meanwhile, the scheme can also satisfy the needs of real-time applications.
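
    The look-up-table transform can be sketched in a few lines. This is a hedged illustration, not the paper's exact scheme: the mean-threshold binarization, block size, and the use of a seed as the user-specific key are all assumptions.

    ```python
    # Sketch of a LUT-based cancelable transform: blocks of a binarized
    # feature vector index a user-specific random table; reissuing the
    # table (new seed) cancels a compromised template.
    import numpy as np

    def cancelable_template(features, block_size=8, seed=42):
        bits = (features > features.mean()).astype(int)        # binarize features
        bits = bits[: len(bits) - len(bits) % block_size]
        blocks = bits.reshape(-1, block_size)
        idx = blocks @ (2 ** np.arange(block_size)[::-1])      # block -> decimal
        rng = np.random.default_rng(seed)                      # seed = user key
        lut = rng.integers(0, 2 ** block_size, size=2 ** block_size)
        return lut[idx]                                        # protected template
    ```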

  10. Learning templates for artistic portrait lighting analysis.

    Science.gov (United States)

    Chen, Xiaowu; Jin, Xin; Wu, Hongyu; Zhao, Qinping

    2015-02-01

    Lighting is a key factor in creating impressive artistic portraits. In this paper, we propose to analyze portrait lighting by learning templates of lighting styles. Inspired by the experience of artists, we first define several novel features that describe the local contrasts in various face regions. The most informative features are then selected with a stepwise feature pursuit algorithm to derive the templates of various lighting styles. After that, matching scores that measure the similarity between a test portrait and those templates are calculated for lighting style classification. Furthermore, we train a regression model on the subjective scores and the feature responses of a template to predict the lighting quality score of a portrait. Based on the templates, a novel face illumination descriptor is defined to measure the difference between two portrait lightings. Experimental results show that the learned templates describe the lighting styles well, and that the proposed approach can assess the lighting quality of artistic portraits as a human being does.

  11. Automatic Generation of Heuristics for Scheduling

    Science.gov (United States)

    Morris, Robert A.; Bresina, John L.; Rodgers, Stuart M.

    1997-01-01

    This paper presents a technique, called GenH, that automatically generates search heuristics for scheduling problems. The impetus for developing this technique is the growing consensus that heuristics encode advice that is, at best, useful in solving most, or typical, problem instances, and, at worst, useful in solving only a narrowly defined set of instances. In either case, heuristic problem solvers, to be broadly applicable, should have a means of automatically adjusting to the idiosyncrasies of each problem instance. GenH generates a search heuristic for a given problem instance by hill-climbing in the space of possible multi-attribute heuristics, where the evaluation of a candidate heuristic is based on the quality of the solution found under its guidance. We present empirical results obtained by applying GenH to the real world problem of telescope observation scheduling. These results demonstrate that GenH is a simple and effective way of improving the performance of a heuristic scheduler.
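
    A toy sketch of the hill-climbing idea (ours, not the published GenH operators), assuming the scheduler exposes a `solve(weights)` routine that returns the quality of the schedule found under a candidate multi-attribute heuristic:

    ```python
    import numpy as np

    def gen_h(n_attributes, solve, n_iters=100, step=0.1, seed=0):
        """Hill-climb in the space of attribute-weight vectors; a candidate
        heuristic is evaluated by the quality of the schedule produced
        under its guidance (higher is better)."""
        rng = np.random.default_rng(seed)
        w = rng.normal(size=n_attributes)
        best = solve(w)
        for _ in range(n_iters):
            cand = w + step * rng.normal(size=n_attributes)
            quality = solve(cand)
            if quality > best:        # keep only improving moves
                w, best = cand, quality
        return w, best
    ```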

  12. Using an ontology to automatically generate questions for the determination of situations

    NARCIS (Netherlands)

    Teitsma, Marten; Sandberg, Jacobijn; Maris, Martinus; Wielinga, Bob; Hameurlain, Abdelkader; Liddle, Stephen W.; Schewe, Klaus-Dieter; Zhou, Xiaofang

    2011-01-01

    We investigate whether the automatic generation of questions from an ontology leads to a trustworthy determination of a situation. With our Situation Awareness Question Generator (SAQG) we automatically generate questions from an ontology. The experiment shows that people with no previous experience

  13. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    Science.gov (United States)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. AutoBayes' schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems

  14. 2D vector-cyclic deformable templates

    DEFF Research Database (Denmark)

    Schultz, Nette; Conradsen, Knut

    1998-01-01

    In this paper the theory of deformable templates as a vector cycle in 2D is described. The deformable template model originated in (Grenander, 1983) and was further investigated in (Grenander et al., 1991). A template vector distribution is induced by a parameter distribution from transformation matrices applied to the vector cycle. An approximation in the parameter distribution is introduced. The main advantage of using the deformable template model is the ability to simulate a wide range of objects trained by e.g. their biological variations, and thereby improve restoration, segmentation and probability measurement. The case study concerns estimation of the meat percentage in pork carcasses. Given two cross-sectional images - one at the front and one near the ham of the carcass - the areas of lean and fat and a muscle in the lean area are measured automatically by the deformable templates.

  15. Elliptical tiling method to generate a 2-dimensional set of templates for gravitational wave search

    International Nuclear Information System (INIS)

    Arnaud, Nicolas; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Porter, Edward K.

    2003-01-01

    Searching for a signal depending on unknown parameters in a noisy background with matched filtering techniques always requires analyzing the data with several templates in parallel in order to ensure a proper match between the filter and the real waveform. The key feature of such an implementation is the design of the filter bank, which must be small to limit the computational cost while keeping the detection efficiency as high as possible. This paper presents a geometrical method that allows one to cover the corresponding physical parameter space by a set of ellipses, each of them being associated with a given template. After the description of the main characteristics of the algorithm, the method is applied in the field of gravitational wave (GW) data analysis, for the search for damped sine signals. Such waveforms are expected to be produced during the deexcitation phase of black holes - the so-called 'ringdown' signals - and are also encountered in some numerically computed supernova signals. First, the number of templates N computed by the method is similar to its analytical estimation, despite the overlaps between neighboring templates and the border effects. Moreover, N is small enough to test for the first time the performance of the set of templates for different choices of the minimal match MM, the parameter used to define the maximal allowed loss of signal-to-noise ratio (SNR) due to the mismatch between real signals and templates. The main result of this analysis is that the fraction of SNR recovered is on average much higher than MM, which dramatically decreases the mean percentage of false dismissals: indeed, it goes well below its estimated value of 1 - MM³ used as input of the algorithm. Thus, as this feature should be common to any tiling algorithm, it seems possible to reduce the constraint on the value of MM - and hence the number of templates and the computing power - without losing as many events as expected on average. This should be of great
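
    For background, the quantities above come from the standard template-bank formalism of matched filtering; the relations below are textbook material, not equations reproduced from this record. Expanding the match between a signal at parameters λ and a template offset by Δλ to second order defines a metric on parameter space, and each template covers the ellipse where the match stays above MM. The 1 - MM³ estimate for the fraction of missed events follows because the detectable event rate scales as the cube of the recovered SNR (detection volume ∝ SNR³).

    ```latex
    % Standard template-bank relations (background, not from the paper):
    M(\lambda,\Delta\lambda) \simeq 1 - g_{ij}\,\Delta\lambda^{i}\,\Delta\lambda^{j},
    \qquad
    \text{coverage condition: } g_{ij}\,\Delta\lambda^{i}\,\Delta\lambda^{j} \le 1 - MM .
    ```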

  16. Automatic Generalizability Method of Urban Drainage Pipe Network Considering Multi-Features

    Science.gov (United States)

    Zhu, S.; Yang, Q.; Shao, J.

    2018-05-01

    Urban drainage systems are an indispensable dataset for storm-flood simulation. Given data availability and current computing power, the structure and complexity of urban drainage systems need to be simplified. To date, however, the simplification procedure has mainly depended on manual operation, which leads to mistakes and low work efficiency. This work references the classification methodology used for road systems and proposes the concept of a pipeline stroke. The length of a pipeline, the angle between two pipelines, the road level to which a pipeline belongs, and the diameter of a pipeline were chosen as the similarity criteria to generate pipeline strokes. Finally, an automatic method was designed to generalize drainage systems with these multiple features in mind. This technique can improve the efficiency and accuracy of the generalization of drainage systems. In addition, it is beneficial to the study of urban storm floods.
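
    To make the stroke idea concrete, here is a sketch with an illustrative weighting and a hypothetical segment representation (dicts with length, bearing, road_level and diameter fields); the paper's actual combination of the four criteria is not reproduced.

    ```python
    def deflection(p, q):
        """Absolute deflection angle in degrees between two segment bearings."""
        d = abs(p["bearing"] - q["bearing"]) % 360
        return min(d, 360 - d)

    def similarity(p, q, w=(0.3, 0.3, 0.2, 0.2)):
        """Combine the four criteria named in the abstract: length, angle,
        road level and diameter. Weights are illustrative assumptions."""
        s_len = 1 - abs(p["length"] - q["length"]) / max(p["length"], q["length"])
        s_ang = 1 - deflection(p, q) / 180.0
        s_road = 1.0 if p["road_level"] == q["road_level"] else 0.0
        s_diam = 1 - abs(p["diameter"] - q["diameter"]) / max(p["diameter"], q["diameter"])
        return w[0] * s_len + w[1] * s_ang + w[2] * s_road + w[3] * s_diam

    def build_stroke(seed, neighbors, threshold=0.8):
        """Grow a pipeline stroke from a seed segment by repeatedly attaching
        the most similar unvisited neighbour; `neighbors` is assumed to map a
        segment to the segments connected to it."""
        stroke, cur = [seed], seed
        while True:
            cands = [n for n in neighbors(cur) if n not in stroke]
            if not cands:
                return stroke
            best = max(cands, key=lambda n: similarity(cur, n))
            if similarity(cur, best) < threshold:
                return stroke
            stroke.append(best)
            cur = best
    ```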

  17. Generating One Biometric Feature from Another: Faces from Fingerprints

    Directory of Open Access Journals (Sweden)

    Seref Sagiroglu

    2010-04-01

    Full Text Available This study presents a new approach based on artificial neural networks for generating one biometric feature (faces) from another (only fingerprints). An automatic and intelligent system was designed and developed to analyze the relationships among fingerprints and faces and also to model and to improve the existence of the relationships. The new proposed system is the first study that generates all parts of the face including eyebrows, eyes, nose, mouth, ears and face border from only fingerprints. It is also unique and different from similar studies recently presented in the literature with some superior features. The parameter settings of the system were achieved with the help of Taguchi experimental design technique. The performance and accuracy of the system have been evaluated with 10-fold cross validation technique using qualitative evaluation metrics in addition to the expanded quantitative evaluation metrics. Consequently, the results were presented on the basis of the combination of these objective and subjective metrics for illustrating the qualitative properties of the proposed methods as well as a quantitative evaluation of their performances. Experimental results have shown that one biometric feature can be determined from another. These results have once more indicated that there is a strong relationship between fingerprints and faces.

  18. Generating One Biometric Feature from Another: Faces from Fingerprints

    Science.gov (United States)

    Ozkaya, Necla; Sagiroglu, Seref

    2010-01-01

    This study presents a new approach based on artificial neural networks for generating one biometric feature (faces) from another (only fingerprints). An automatic and intelligent system was designed and developed to analyze the relationships among fingerprints and faces and also to model and to improve the existence of the relationships. The new proposed system is the first study that generates all parts of the face including eyebrows, eyes, nose, mouth, ears and face border from only fingerprints. It is also unique and different from similar studies recently presented in the literature with some superior features. The parameter settings of the system were achieved with the help of Taguchi experimental design technique. The performance and accuracy of the system have been evaluated with 10-fold cross validation technique using qualitative evaluation metrics in addition to the expanded quantitative evaluation metrics. Consequently, the results were presented on the basis of the combination of these objective and subjective metrics for illustrating the qualitative properties of the proposed methods as well as a quantitative evaluation of their performances. Experimental results have shown that one biometric feature can be determined from another. These results have once more indicated that there is a strong relationship between fingerprints and faces. PMID:22399877

  19. Feature generation and representations for protein-protein interaction classification.

    Science.gov (United States)

    Lan, Man; Tan, Chew Lim; Su, Jian

    2009-10-01

    Automatically detecting protein-protein interaction (PPI)-relevant articles is a crucial step for large-scale biological database curation. Previous work adopted POS tagging, shallow parsing and sentence splitting techniques, but achieved worse performance than the simple bag-of-words representation. In this paper, we generated and investigated multiple types of feature representations in order to further improve the performance of the PPI text classification task. Besides the traditional domain-independent bag-of-words approach and the term weighting methods, we also explored other domain-dependent features, i.e. protein-protein interaction trigger keywords, protein named entities and more advanced ways of incorporating Natural Language Processing (NLP) output. The integration of these multiple features has been evaluated on the BioCreAtIvE II corpus. The experimental results showed that both the advanced use of NLP output and the integration of bag-of-words and NLP output improved the performance of text classification. Specifically, in comparison with the best performance achieved in the BioCreAtIvE II IAS, the feature-level and classifier-level integration of multiple features improved the classification performance by 2.71% and 3.95%, respectively.

  20. Automatic Recognition of Chinese Personal Name Using Conditional Random Fields and Knowledge Base

    Directory of Open Access Journals (Sweden)

    Chuan Gu

    2015-01-01

    Full Text Available According to the features of Chinese personal names, we present an approach for Chinese personal name recognition based on conditional random fields (CRF) and a knowledge base. The method builds multiple features of the CRF model by adopting the Chinese character as the processing unit, selects useful features with a selection algorithm based on the knowledge base and an incremental feature template, and finally implements the automatic recognition of Chinese personal names in Chinese documents. Experimental results on an open real corpus demonstrate the effectiveness of our method, which obtained a high accuracy rate and a high recall rate of recognition.
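
    To picture what a character-level feature template looks like, here is an illustrative sketch; the lexicons are toy stand-ins for the paper's knowledge base, and the actual feature set is not given in the abstract. Dicts of this shape are the kind of input a CRF package such as sklearn-crfsuite consumes.

    ```python
    # Toy stand-ins for the knowledge-base lexicons (assumptions, not the
    # paper's resources).
    SURNAME_LEXICON = {"王", "李", "张"}
    GIVEN_NAME_LEXICON = {"伟", "芳", "娜"}

    def char_features(sentence, i):
        """Per-character feature template: the character itself, a
        one-character context window, and lexicon membership flags."""
        c = sentence[i]
        return {
            "char": c,
            "prev": sentence[i - 1] if i > 0 else "<BOS>",
            "next": sentence[i + 1] if i < len(sentence) - 1 else "<EOS>",
            "is_surname": c in SURNAME_LEXICON,
            "is_given_name_char": c in GIVEN_NAME_LEXICON,
        }

    def sentence_features(sentence):
        return [char_features(sentence, i) for i in range(len(sentence))]
    ```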

  1. Device for positioning and generation of an element of track model and slice of the MELAS automatic equipment

    International Nuclear Information System (INIS)

    Kryutchenko, E.V.; Fedotov, V.S.

    1979-01-01

    The structure and organization of the device for positioning and generation of an element of the track model and slice of the MELAS automatic equipment, which is being developed for measuring films from big bubble chambers, are described. The main features of the device are examined and its characteristics are given as well.

  2. Automatic generation of matter-of-opinion video documentaries

    NARCIS (Netherlands)

    Bocconi, S.; Nack, F.; Hardman, H.L.

    2008-01-01

    In this paper we describe a model for automatically generating video documentaries. This allows viewers to specify the subject and the point of view of the documentary to be generated. The domain is matter-of-opinion documentaries based on interviews. The model combines rhetorical presentation

  3. Automatic generation of matter-of-opinion video documentaries

    NARCIS (Netherlands)

    S. Bocconi; F.-M. Nack (Frank); L. Hardman (Lynda)

    2008-01-01

    In this paper we describe a model for automatically generating video documentaries. This allows viewers to specify the subject and the point of view of the documentary to be generated. The domain is matter-of-opinion documentaries based on interviews. The model combines rhetorical

  4. Open set recognition of aircraft in aerial imagery using synthetic template models

    Science.gov (United States)

    Bapst, Aleksander B.; Tran, Jonathan; Koch, Mark W.; Moya, Mary M.; Swahn, Robert

    2017-05-01

    Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.
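
    A minimal sketch of the ingredients of the second recognizer, assuming fixed-size grayscale chips: a one-class SVM is trained on HOG features of synthetic renderings, and inputs it rejects are labelled unknown. Parameter values are illustrative, and the paper's score calibration between synthetic and real distributions is omitted.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import OneClassSVM

    def hog_features(chips):
        """HOG descriptors for a batch of same-size grayscale chips."""
        return np.stack([
            hog(c, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            for c in chips
        ])

    def train_open_set(synthetic_chips, nu=0.1):
        """Train on synthetic renderings of the target class only."""
        return OneClassSVM(kernel="rbf", nu=nu, gamma="scale").fit(
            hog_features(synthetic_chips))

    def classify(model, chip):
        """Open set behaviour: anything the model rejects is 'unknown'."""
        return "target" if model.predict(hog_features([chip]))[0] == 1 else "unknown"
    ```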

  5. A strategy for automatically generating programs in the lucid programming language

    Science.gov (United States)

    Johnson, Sally C.

    1987-01-01

    A strategy for automatically generating and verifying simple computer programs is described. The programs are specified by a precondition and a postcondition in predicate calculus. The programs generated are in the Lucid programming language, a high-level, data-flow language known for its attractive mathematical properties and ease of program verification. The Lucid programming language is described, and the automatic program generation strategy is described and applied to several example problems.

  6. Automatic generation of combinatorial test data

    CERN Document Server

    Zhang, Jian; Ma, Feifei

    2014-01-01

    This book reviews the state-of-the-art in combinatorial testing, with particular emphasis on the automatic generation of test data. It describes the most commonly used approaches in this area - including algebraic construction, greedy methods, evolutionary computation, constraint solving and optimization - and explains major algorithms with examples. In addition, the book lists a number of test generation tools, as well as benchmarks and applications. Addressing a multidisciplinary topic, it will be of particular interest to researchers and professionals in the areas of software testing, combi
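
    As a flavour of the greedy methods the book surveys, here is a toy AETG-style pairwise generator (our own illustration, not an algorithm taken from the book): each round keeps the random candidate test case that covers the most still-uncovered value pairs.

    ```python
    from itertools import combinations
    import random

    def pairwise_suite(params, n_candidates=50, seed=0):
        """params: dict mapping parameter name -> list of values. Returns
        test cases (one value per parameter) that cover every value pair
        of every two parameters."""
        rng = random.Random(seed)
        names = list(params)
        uncovered = {((i, v), (j, w))
                     for i, j in combinations(range(len(names)), 2)
                     for v in params[names[i]]
                     for w in params[names[j]]}

        def covers(case, pair):
            (i, v), (j, w) = pair
            return case[i] == v and case[j] == w

        suite = []
        while uncovered:
            # Fix one uncovered pair so every round makes progress, then
            # score random completions of the remaining parameters.
            (i, v), (j, w) = next(iter(uncovered))
            best, best_gain = None, -1
            for _ in range(n_candidates):
                case = [rng.choice(params[k]) for k in names]
                case[i], case[j] = v, w
                case = tuple(case)
                gain = sum(1 for p in uncovered if covers(case, p))
                if gain > best_gain:
                    best, best_gain = case, gain
            suite.append(best)
            uncovered = {p for p in uncovered if not covers(best, p)}
        return suite

    # e.g. pairwise_suite({"os": ["linux", "win"], "db": ["pg", "mysql"],
    #                      "ui": ["cli", "web"]}) covers all pairs in a handful of cases.
    ```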

  7. Automatic Tamil lyric generation based on ontological interpretation ...

    Indian Academy of Sciences (India)

    This system proposes an n-gram based approach to automatic Tamil lyric generation, by the ontological semantic interpretation of the input scene. The approach is based on identifying the semantics conveyed in the scenario, thereby making the system understand the situation and generate lyrics accordingly. The heart of ...

  8. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    Science.gov (United States)

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

    Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand environmental changes and protect marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to one specific imaging device and a relatively narrow taxonomic scope; a truly practical system for automatic plankton classification does not yet exist, and this study is partly intended to fill that gap. Inspired by the analysis of the literature and the development of technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular by adding features like the Inner-Distance Shape Context for morphological representation. Second, we divided all the features into different types from multiple views and fed them to multiple classifiers instead of only one, by optimally combining the different kernel matrices computed from the different types of features via multiple kernel learning. Moreover, we also applied a feature selection method to choose optimal feature subsets from redundant features to suit the different datasets from different imaging devices. We implemented our proposed classification system on three different datasets covering more than 20 categories from phytoplankton to zooplankton. The experimental results validated that our system
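
    The kernel-combination step can be pictured with a fixed-weight simplification; true MKL learns the weights jointly with the classifier, and the names and weights below are illustrative.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def combined_kernel(views_a, views_b, weights):
        """Linear combination of one RBF kernel per feature view (e.g.
        geometric vs. texture features); views_a/views_b are lists of
        arrays in matching view order."""
        return sum(w * rbf_kernel(Xa, Xb)
                   for w, Xa, Xb in zip(weights, views_a, views_b))

    def fit_and_predict(train_views, y_train, test_views, weights):
        K_train = combined_kernel(train_views, train_views, weights)
        clf = SVC(kernel="precomputed").fit(K_train, y_train)
        K_test = combined_kernel(test_views, train_views, weights)
        return clf.predict(K_test)
    ```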

  9. A price based automatic generation control using unscheduled ...

    African Journals Online (AJOL)

    In this paper, a model for price based automatic generation control is presented. A modified control scheme is proposed which will prevent unintended unscheduled interchanges among the participants. The proposed scheme is verified by simulating it on a model of an isolated area system having four generators. It has been ...

  10. AUTOMATIC GENERALIZABILITY METHOD OF URBAN DRAINAGE PIPE NETWORK CONSIDERING MULTI-FEATURES

    Directory of Open Access Journals (Sweden)

    S. Zhu

    2018-05-01

    Full Text Available Urban drainage systems are an indispensable dataset for storm-flood simulation. Given data availability and current computing power, the structure and complexity of urban drainage systems need to be simplified. To date, however, the simplification procedure has mainly depended on manual operation, which leads to mistakes and low work efficiency. This work references the classification methodology used for road systems and proposes the concept of a pipeline stroke. The length of a pipeline, the angle between two pipelines, the road level to which a pipeline belongs, and the diameter of a pipeline were chosen as the similarity criteria to generate pipeline strokes. Finally, an automatic method was designed to generalize drainage systems with these multiple features in mind. This technique can improve the efficiency and accuracy of the generalization of drainage systems. In addition, it is beneficial to the study of urban storm floods.

  11. MRI-alone radiation therapy planning for prostate cancer: Automatic fiducial marker detection

    International Nuclear Information System (INIS)

    Ghose, Soumya; Mitra, Jhimli; Rivest-Hénault, David; Fazlollahi, Amir; Fripp, Jurgen; Dowling, Jason A.; Stanwell, Peter; Pichler, Peter; Sun, Jidi; Greer, Peter B.

    2016-01-01

    Purpose: The feasibility of radiation therapy treatment planning using substitute computed tomography (sCT) generated from magnetic resonance images (MRIs) has been demonstrated by a number of research groups. One challenge with an MRI-alone workflow is the accurate identification of intraprostatic gold fiducial markers, which are frequently used for prostate localization prior to each dose delivery fraction. This paper investigates a template-matching approach for the detection of these seeds in MRI. Methods: Two different gradient echo T1 and T2* weighted MRI sequences were acquired from fifteen prostate cancer patients and evaluated for seed detection. For training, seed templates from manual contours were selected in a spectral clustering manifold learning framework. This aids in clustering “similar” gold fiducial markers together. The marker with the minimum distance to a cluster centroid was selected as the representative template of that cluster during training. During testing, Gaussian mixture modeling followed by a Markovian model was used in the automatic detection of probable candidates. The probable candidates were rigidly registered to the templates identified from spectral clustering, and a similarity metric was computed for ranking and detection. Results: A fiducial detection accuracy of 95% was obtained compared to manual observations. Expert radiation therapist observers were able to correctly identify all three implanted seeds on 11 of the 15 scans (the proposed method correctly identified all seeds on 10 of the 15). Conclusions: A novel automatic framework for gold fiducial marker detection in MRI is proposed and evaluated, with detection accuracies comparable to manual detection. When radiation therapists are unable to determine the seed location in MRI, they refer back to the planning CT (only available in the existing clinical framework); similarly, an automatic quality control is built into the automatic software to ensure that all gold

  12. MRI-alone radiation therapy planning for prostate cancer: Automatic fiducial marker detection

    Energy Technology Data Exchange (ETDEWEB)

    Ghose, Soumya, E-mail: soumya.ghose@case.edu; Mitra, Jhimli [Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106 and CSIRO Health and Biosecurity, The Australian e-Health & Research Centre, Herston, QLD 4029 (Australia); Rivest-Hénault, David; Fazlollahi, Amir; Fripp, Jurgen; Dowling, Jason A. [CSIRO Health and Biosecurity, The Australian e-Health & Research Centre, Herston, QLD 4029 (Australia); Stanwell, Peter [School of health sciences, The University of Newcastle, Newcastle, NSW 2308 (Australia); Pichler, Peter [Department of Radiation Oncology, Cavalry Mater Newcastle Hospital, Newcastle, NSW 2298 (Australia); Sun, Jidi; Greer, Peter B. [School of Mathematical and Physical Sciences, The University of Newcastle, Newcastle, NSW 2308, Australia and Department of Radiation Oncology, Cavalry Mater Newcastle Hospital, Newcastle, NSW 2298 (Australia)

    2016-05-15

    Purpose: The feasibility of radiation therapy treatment planning using substitute computed tomography (sCT) generated from magnetic resonance images (MRIs) has been demonstrated by a number of research groups. One challenge with an MRI-alone workflow is the accurate identification of intraprostatic gold fiducial markers, which are frequently used for prostate localization prior to each dose delivery fraction. This paper investigates a template-matching approach for the detection of these seeds in MRI. Methods: Two different gradient echo T1 and T2* weighted MRI sequences were acquired from fifteen prostate cancer patients and evaluated for seed detection. For training, seed templates from manual contours were selected in a spectral clustering manifold learning framework. This aids in clustering “similar” gold fiducial markers together. The marker with the minimum distance to a cluster centroid was selected as the representative template of that cluster during training. During testing, Gaussian mixture modeling followed by a Markovian model was used in the automatic detection of probable candidates. The probable candidates were rigidly registered to the templates identified from spectral clustering, and a similarity metric was computed for ranking and detection. Results: A fiducial detection accuracy of 95% was obtained compared to manual observations. Expert radiation therapist observers were able to correctly identify all three implanted seeds on 11 of the 15 scans (the proposed method correctly identified all seeds on 10 of the 15). Conclusions: A novel automatic framework for gold fiducial marker detection in MRI is proposed and evaluated, with detection accuracies comparable to manual detection. When radiation therapists are unable to determine the seed location in MRI, they refer back to the planning CT (only available in the existing clinical framework); similarly, an automatic quality control is built into the automatic software to ensure that all gold

  13. LipidPioneer: A Comprehensive User-Generated Exact Mass Template for Lipidomics

    Science.gov (United States)

    Ulmer, Candice Z.; Koelmel, Jeremy P.; Ragland, Jared M.; Garrett, Timothy J.; Bowden, John A.

    2017-03-01

    Lipidomics, the comprehensive measurement of lipid species in a biological system, has promising potential in biomarker discovery and disease etiology elucidation. Advances in chromatographic separation, mass spectrometric techniques, and novel substrate applications continue to expand the number of lipid species observed. The total number and type of lipid species detected in a given sample are generally indicative of the sample matrix examined (e.g., serum, plasma, cells, bacteria, tissue, etc.). Current exact mass lipid libraries are static and represent the most commonly analyzed matrices. It is common practice for users to manually curate their own lists of lipid species and adduct masses; however, this process is time-consuming. LipidPioneer, an interactive template, can be used to generate exact masses and molecular formulas of lipid species that may be encountered in the mass spectrometric analysis of lipid profiles. Over 60 lipid classes are present in the LipidPioneer template and include several unique lipid species, such as ether-linked lipids and lipid oxidation products. In the template, users can add any fatty acyl constituents without limitation in the number of carbons or degrees of unsaturation. LipidPioneer accepts naming using the lipid class level (sum composition) and the LIPID MAPS notation for fatty acyl structure level. In addition to lipid identification, user-generated lipid m/z values can be used to develop inclusion lists for targeted fragmentation experiments. Resulting lipid names and m/z values can be imported into software such as MZmine or Compound Discoverer to automate exact mass searching and isotopic pattern matching across experimental data.
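
    At its core such a template is a monoisotopic mass sum; the sketch below shows the arithmetic for a simple protonated adduct (the atomic masses are standard values, while the formula handling is much simplified relative to the actual template).

    ```python
    # Monoisotopic atomic masses (standard values, abbreviated set).
    MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
            "O": 15.9949146221, "P": 30.97376151}
    PROTON = 1.00727646688

    def exact_mass(counts):
        """Neutral monoisotopic mass from element counts."""
        return sum(MONO[el] * n for el, n in counts.items())

    def mz(counts, charge=1, positive=True):
        """m/z for a simple [M+H]+ or [M-H]- adduct; other adducts, as in
        the LipidPioneer template, add their own adduct mass instead."""
        m = exact_mass(counts)
        return (m + PROTON) / charge if positive else (m - PROTON) / charge

    # e.g. PC(34:1) = C42H82NO8P:
    # mz({"C": 42, "H": 82, "N": 1, "O": 8, "P": 1}) -> ~760.585 for [M+H]+
    ```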

  14. Analytical Features: A Knowledge-Based Approach to Audio Feature Generation

    Directory of Open Access Journals (Sweden)

    Pachet François

    2009-01-01

    Full Text Available We present a feature generation system designed to create audio features for supervised classification tasks. The main contribution to feature generation studies is the notion of analytical features (AFs), a construct designed to support the representation of knowledge about audio signal processing. We describe the most important aspects of AFs, in particular their dimensional type system, on which pattern-based random generators, heuristics, and rewriting rules are based. We show how AFs generalize or improve previous approaches used in feature generation. We report on several projects using AFs for difficult audio classification tasks, demonstrating their advantage over standard audio features. More generally, we propose analytical features as a paradigm to bring raw signals into the world of symbolic computation.

  15. A lightweight approach for biometric template protection

    Science.gov (United States)

    Al-Assam, Hisham; Sellahewa, Harin; Jassim, Sabah

    2009-05-01

    Privacy and security are vital concerns for practical biometric systems. The concept of cancelable or revocable biometrics has been proposed as a solution for biometric template security. Revocable biometrics means that biometric templates are no longer fixed over time and can be revoked in the same way as lost or stolen credit cards are. In this paper, we describe a novel and efficient approach to biometric template protection that meets the revocability property. This scheme can be incorporated into any biometric verification scheme while maintaining, if not improving, the accuracy of the original biometric system. We demonstrate the result of applying such transforms on face biometric templates and compare the efficiency of our approach with that of the well-known random projection techniques. We also present the results of experimental work on recognition accuracy before and after applying the proposed transform on feature vectors that are generated by wavelet transforms. These results are based on experiments conducted on a number of well-known face image databases, e.g. the Yale and ORL databases.
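
    For comparison, the random projection style of template protection mentioned above can be sketched as follows (a minimal illustration assuming a flattened wavelet feature vector of dimension at least `out_dim`; the paper's own transform differs).

    ```python
    import numpy as np

    def cancelable_projection(features, user_key, out_dim=64):
        """Project the feature vector with a user-key-seeded orthonormal
        matrix; re-issuing a template amounts to choosing a new key."""
        rng = np.random.default_rng(user_key)       # integer key assumed
        A = rng.normal(size=(features.size, out_dim))
        Q, _ = np.linalg.qr(A)   # orthonormal columns roughly preserve distances
        return features @ Q
    ```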

  16. Face detection and facial feature localization using notch based templates

    International Nuclear Information System (INIS)

    Qayyum, U.

    2007-01-01

    We present real-time detection of faces from video with facial feature localization, as well as an algorithm capable of differentiating between face and non-face patterns. The need for face detection and facial feature localization arises in various applications of computer vision, so a lot of research is dedicated to coming up with a real-time solution. The algorithm should remain simple enough to run in real time, yet it should not compromise on the challenges encountered during the detection and localization phases; that is, it should be invariant to scale, translation, and (+-45 degree) rotation transformations. The proposed system contains two parts: visual guidance and face/non-face classification. The visual guidance phase uses the fusion of motion and color cues to classify skin color. A morphological operation with a union-structure component labeling algorithm extracts contiguous regions. Scale normalization is applied by the nearest neighbor interpolation method to avoid the effect of different scales. Using the aspect ratio of width and height, a Region of Interest (ROI) is obtained and then passed to the face/non-face classifier. Notch (Gaussian) based templates/filters are used to find circular darker regions in the ROI. The classified face region is handed over to the facial feature localization phase, which uses a YCbCr eyes/lips mask for facial feature localization. The empirical results show an accuracy of 90% for five different videos with 1000 face/non-face patterns, and the processing rate of the proposed algorithm is 15 frames/sec. (author)

  17. Generating XML schemas for DICOM structured reporting templates.

    Science.gov (United States)

    Zhao, Luyin; Lee, Kwok Pun; Hu, Jingkun

    2005-01-01

    In this paper, the authors describe a methodology to programmatically transform structured reporting (SR) templates defined by the Digital Imaging and Communications in Medicine (DICOM) standard into an XML schema representation. Such schemas can be used in the creation and validation of XML-encoded SR documents that use templates. Templates are a means to put additional constraints on an SR document to promote common formats for specific reporting applications or domains. As the use of templates becomes more widespread in the production of SR documents, it is important to ensure the validity of such documents. The work described in this paper is an extension of the authors' previous work on XML schema representation for DICOM SR. Therefore, this paper inherits and partially modifies the structure defined in the earlier work.

  18. Using Semantic Web technologies for the generation of domain-specific templates to support clinical study metadata standards.

    Science.gov (United States)

    Jiang, Guoqian; Evans, Julie; Endle, Cory M; Solbrig, Harold R; Chute, Christopher G

    2016-01-01

    The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. We developed a template generation and visualization system based on an open source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service informed by the Clinical Information Modeling Initiative (CIMI) reference model for access to the generated domain-specific templates. A preliminary usability study was performed, and all reviewers (n = 3) gave very positive responses to the evaluation questions in terms of usability and the capability of meeting the system requirements (with an average score of 4.6). Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models in the intersection of health care and clinical research.

  19. Using automatic item generation to create multiple-choice test items.

    Science.gov (United States)

    Gierl, Mark J; Lai, Hollis; Turner, Simon R

    2012-08-01

    Many tests of medical knowledge, from the undergraduate level to the level of certification and licensure, contain multiple-choice items. Although these are efficient in measuring examinees' knowledge and skills across diverse content areas, multiple-choice items are time-consuming and expensive to create. Changes in student assessment brought about by new forms of computer-based testing have created the demand for large numbers of multiple-choice items. Our current approaches to item development cannot meet this demand. We present a methodology for developing multiple-choice items based on automatic item generation (AIG) concepts and procedures. We describe a three-stage approach to AIG and we illustrate this approach by generating multiple-choice items for a medical licensure test in the content area of surgery. To generate multiple-choice items, our method requires a three-stage process. Firstly, a cognitive model is created by content specialists. Secondly, item models are developed using the content from the cognitive model. Thirdly, items are generated from the item models using computer software. Using this methodology, we generated 1248 multiple-choice items from one item model. Automatic item generation is a process that involves using models to generate items using computer technology. With our method, content specialists identify and structure the content for the test items, and computer technology systematically combines the content to generate new test items. By combining these outcomes, items can be generated automatically. © Blackwell Publishing Ltd 2012.

  20. Automatic Item Generation via Frame Semantics: Natural Language Generation of Math Word Problems.

    Science.gov (United States)

    Deane, Paul; Sheehan, Kathleen

    This paper is an exploration of the conceptual issues that have arisen in the course of building a natural language generation (NLG) system for automatic test item generation. While natural language processing techniques are applicable to general verbal items, mathematics word problems are particularly tractable targets for natural language…

  1. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we develop a feature-based automatic color calibration that uses area-based detection and an adaptive nonlinear regression method. Simple chart-less color matching is achieved by exploiting the image areas where the views of the cameras overlap. Accurate detection of common objects is achieved by an area-based detection that combines MSER with SIFT. Adaptive color calibration using the colors of the detected objects is computed by a nonlinear regression method. This method can indicate the contribution of an object's color to the calibration, and automatic selection notification for the user is performed by this function. Experimental results show that the accuracy of the calibration improves gradually. It is clear that this method can endure practical use in multi-camera color calibration if enough samples are obtained.

  2. Formal Specification Based Automatic Test Generation for Embedded Network Systems

    Directory of Open Access Journals (Sweden)

    Eun Hye Choi

    2014-01-01

    Full Text Available Embedded systems have become increasingly connected and communicate with each other, forming large-scale and complicated network systems. To make their design and testing more reliable and robust, this paper proposes a formal specification language called SENS and a SENS-based automatic test generation tool called TGSENS. Our approach is summarized as follows: (1) A user describes requirements of target embedded network systems by logical property-based constraints using SENS. (2) Given SENS specifications, test cases are automatically generated using a SAT-based solver. Filtering mechanisms to select efficient test cases are also available in our tool. (3) In addition, given a testing goal by the user, test sequences are automatically extracted from exhaustive test cases. We've implemented our approach and conducted several experiments on practical case studies. Through the experiments, we confirmed the efficiency of our approach in design and test generation of real embedded air-conditioning network systems.

  3. Generation of 3D templates of active sites of proteins with rigid prosthetic groups.

    Science.gov (United States)

    Nebel, Jean-Christophe

    2006-05-15

    With the increasing availability of protein structures, the generation of biologically meaningful 3D patterns from the simultaneous alignment of several protein structures is an exciting prospect: active sites could be better understood, and protein functions and protein 3D structures could be predicted more accurately. Although patterns can already be generated at the fold and topological levels, no system produces high-resolution 3D patterns including atom and cavity positions. To address this challenge, our research focuses on generating patterns from proteins with rigid prosthetic groups. Since these groups are key elements of protein active sites, the generated 3D patterns are expected to be biologically meaningful. In this paper, we present a new approach which allows the generation of 3D patterns from proteins with rigid prosthetic groups. Using 237 protein chains representing proteins containing porphyrin rings, our method was validated by comparing 3D templates generated from homologues with the 3D structure of the proteins they model. Atom positions were predicted reliably: 93% of them had an accuracy of 1.00 Å or less. Moreover, similar results were obtained regarding chemical group and cavity positions. Results also suggested our system could contribute to the validation of 3D protein models. Finally, a 3D template was generated for the active site of human cytochrome P450 CYP17, the 3D structure of which is unknown. Its analysis showed that it is biologically meaningful: our method detected the main patterns of the cytochrome P450 superfamily and the motifs linked to catalytic reactions. The 3D template also suggested the position of a residue which could be involved in a hydrogen bond with CYP17 substrates, and the shape and location of a cavity. Comparisons with independently generated 3D models supported these hypotheses. Alignment software (Nestor3D) is available at http://www.kingston.ac.uk/~ku33185/Nestor3D.html

  4. Performance of peaky template matching under additive white Gaussian noise and uniform quantization

    Science.gov (United States)

    Horvath, Matthew S.; Rigling, Brian D.

    2015-05-01

    Peaky template matching (PTM) is a special case of a general algorithm known as multinomial pattern matching, originally developed for automatic target recognition of synthetic aperture radar data. The algorithm is a model-based approach that first quantizes pixel values into Nq = 2 discrete values, yielding generative Beta-Bernoulli models as class-conditional templates. Here, we consider the case of classification of target chips in AWGN and develop approximations to image-to-template classification performance as a function of the noise power. We focus specifically on the case of a uniform quantization scheme, where a fixed number of the largest pixels are quantized high, as opposed to using a fixed threshold. This quantization method reduces sensitivity to the scaling of pixel intensities, and quantization in general reduces sensitivity to various nuisance parameters that are difficult to account for a priori. Our performance expressions are verified using forward-looking infrared imagery from the Army Research Laboratory Comanche dataset.

  5. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    Science.gov (United States)

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is described as follows: First, rectangular feature templates are constructed, centered on the Harris corners extracted in the mask image, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the specific affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the time consumption is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
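
    The parameter-estimation step can be illustrated as a least-squares fit of an affine transform to the matched feature points, solved by an SVD-based routine in the spirit of the paper's SVD step (the histogram-energy template matching that produces the correspondences is assumed already done).

    ```python
    import numpy as np

    def estimate_affine(src, dst):
        """Least-squares affine transform mapping src -> dst, where src and
        dst are (N, 2) arrays of matched points with N >= 3. lstsq solves
        the system via SVD internally; returns a (3, 2) parameter matrix."""
        A = np.hstack([src, np.ones((len(src), 1))])    # rows [x, y, 1]
        params, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return params

    def apply_affine(params, pts):
        return np.hstack([pts, np.ones((len(pts), 1))]) @ params
    ```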

  6. CarSim: Automatic 3D Scene Generation of a Car Accident Description

    NARCIS (Netherlands)

    Egges, A.; Nijholt, A.; Nugues, P.

    2001-01-01

    The problem of generating a 3D simulation of a car accident from a written description can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two system parts, we designed a template formalism to represent a written

  7. System for Automatic Generation of Examination Papers in Discrete Mathematics

    Science.gov (United States)

    Fridenfalk, Mikael

    2013-01-01

    A system was developed for automatic generation of problems and solutions for examinations in a university distance course in discrete mathematics and tested in a pilot experiment involving 200 students. Considering the success of such systems in the past, particularly including automatic assessment, it should not take long before such systems are…

  8. An Efficient Metric of Automatic Weight Generation for Properties in Instance Matching Technique

    OpenAIRE

    Seddiqui, Md. Hanif; Nath, Rudra Pratap Deb; Aono, Masaki

    2015-01-01

    The proliferation of heterogeneous data sources of semantic knowledge bases intensifies the need for an automatic instance matching technique. However, the efficiency of instance matching is often influenced by the weight of a property associated with instances. Automatic weight generation is a non-trivial but important task in instance matching techniques. Therefore, identifying an appropriate metric for generating weight for a property automatically is nevertheless a formidab

  9. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    Science.gov (United States)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which utilizes corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images, and each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique which exploits the arrangement of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondences achieves pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
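
    A plain (non-adaptive) version of the corner-detection step can be sketched with off-the-shelf routines; the paper's adaptive thresholding of the Harris response is not reproduced, so the fixed `threshold_rel` below is a stand-in.

    ```python
    from skimage.feature import corner_harris, corner_peaks

    def detect_marking_corners(ortho_patch, rel_threshold=0.01):
        """Harris corners on a road-marking ortho-image patch; returns
        (row, col) coordinates of the detected corner points."""
        response = corner_harris(ortho_patch)
        return corner_peaks(response, min_distance=5,
                            threshold_rel=rel_threshold)
    ```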

  10. CargoCBM – Feature Generation and Classification for a Condition Monitoring System for Freight Wagons

    International Nuclear Information System (INIS)

    Gericke, C; Hecht, M

    2012-01-01

    Despite the fact that rail freight transport is one of the most environmentally friendly modes of transport, its growth has lagged far behind the growth of freight transport in general. Studies have shown that a competitive disadvantage is caused by the low availability of rolling stock, especially freight wagons. Changing from a time-based to a condition-based maintenance strategy is believed to decrease down times by at least one third. To make condition-based maintenance for freight wagons possible, the TU Berlin and five industry partners started the research project CargoCBM. One task in this project is to develop algorithms for the automatic on-board diagnosis of wheel flats. The focus of the work is on the process of feature generation and feature selection, as well as the application of different classifiers to automatically evaluate the data. Based on measured data, features were selected and tested with different classifiers, and advanced classifiers such as neural networks were analysed with respect to their classification accuracy. It can be shown that, with carefully constructed and selected features, comparatively simple classifiers can lead to excellent results.

  11. Automatic generation of Fortran programs for algebraic simulation models

    International Nuclear Information System (INIS)

    Schopf, W.; Rexer, G.; Ruehle, R.

    1978-04-01

    This report documents a generator program by which econometric simulation models formulated in an application-oriented language can be transformed automatically into a Fortran program. Thus the model designer is able to build up, test and modify models without the need for a Fortran programmer. The development of a computer model is therefore simplified and shortened appreciably; in chapters 1-3 of this report all rules are presented for the application of the generator to model design. Algebraic models including exogenous and endogenous time series variables and lead and lag functions can be generated. In addition to these language elements, Fortran sequences can be applied to the formulation of models in the case of complex model interrelations. The generated model is automatically a module of the program system RSYST III and is therefore able to exchange input and output data with the central data bank of the system; in connection with the method library modules, it can be used to handle planning problems. (orig.) [de

  12. Templated sequence insertion polymorphisms in the human genome

    Science.gov (United States)

    Onozawa, Masahiro; Aplan, Peter

    2016-11-01

    Templated Sequence Insertion Polymorphism (TSIP) is a recently described form of polymorphism recognized in the human genome, in which a sequence that is templated from a distant genomic region is inserted into the genome, seemingly at random. TSIPs can be grouped into two classes based on nucleotide sequence features at the insertion junctions; Class 1 TSIPs show features of insertions that are mediated via the LINE-1 ORF2 protein, including 1) target-site duplication (TSD), 2) polyadenylation 10-30 nucleotides downstream of a “cryptic” polyadenylation signal, and 3) preference for insertion at a 5’-TTTT/A-3’ sequence. In contrast, class 2 TSIPs show features consistent with repair of a DNA double-strand break via insertion of a DNA “patch” that is derived from a distant genomic region. Survey of a large number of normal human volunteers demonstrates that most individuals have 25-30 TSIPs, and that these TSIPs track with specific geographic regions. Similar to other forms of human polymorphism, we suspect that these TSIPs may be important for the generation of human diversity and genetic diseases.

  13. Automatic user customization for improving the performance of a self-paced brain interface system.

    Science.gov (United States)

    Fatourechi, Mehrdad; Bashashati, Ali; Birch, Gary E; Ward, Rabab K

    2006-12-01

    Customizing the parameter values of brain interface (BI) systems by a human expert has the advantage of being fast and computationally efficient. However, as the number of users and EEG channels grows, this process becomes increasingly time consuming and exhausting. Manual customization also introduces inaccuracies in the estimation of the parameter values. In this paper, the performance of a self-paced BI system whose design parameter values were automatically user customized using a genetic algorithm (GA) is studied. The GA automatically estimates the shapes of movement-related potentials (MRPs), whose features are then extracted to drive the BI. Offline analysis of the data of eight subjects revealed that automatic user customization improved the true positive (TP) rate of the system by an average of 6.68% over that whose customization was carried out by a human expert, i.e., by visually inspecting the MRP templates. On average, the best improvement in the TP rate (an average of 9.82%) was achieved for four individuals with spinal cord injury. In this case, the visual estimation of the parameter values of the MRP templates was very difficult because of the highly noisy nature of the EEG signals. For four able-bodied subjects, for which the MRP templates were less noisy, the automatic user customization led to an average improvement of 3.58% in the TP rate. The results also show that the inter-subject variability of the TP rate is also reduced compared to the case when user customization is carried out by a human expert. These findings provide some primary evidence that automatic user customization leads to beneficial results in the design of a self-paced BI for individuals with spinal cord injury.

  14. Task relevance modulates the cortical representation of feature conjunctions in the target template.

    Science.gov (United States)

    Reeder, Reshanne R; Hanke, Michael; Pollmann, Stefan

    2017-07-03

    Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task relevant and task irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating and we measured BOLD signal during cue and delay periods before the onset of a search display. RSA of delay period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task relevant and task irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.

  15. Automatic generation of pictorial transcripts of video programs

    Science.gov (United States)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text form a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to produce a printed version of the program.
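
    A minimal sketch of the content-based sampling step, assuming OpenCV (cv2) and a Bhattacharyya-distance scene-change test; the threshold and histogram parameters are illustrative, not those of the described system.

      import cv2

      def key_frames(path, thresh=0.4):
          # One representative frame per detected scene, based on the
          # histogram distance to the previous key frame.
          cap = cv2.VideoCapture(path)
          prev_hist, frames = None, []
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
              hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                                  [0, 180, 0, 256])
              cv2.normalize(hist, hist)
              if prev_hist is None or cv2.compareHist(
                      prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > thresh:
                  frames.append(frame)            # scene boundary found
                  prev_hist = hist
          cap.release()
          return frames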

  16. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael

    2014-05-01

    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  17. Automatic facial animation parameters extraction in MPEG-4 visual communication

    Science.gov (United States)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. This paper addresses an algorithm that automatically extracts all FAPs needed to animate a generic facial model and estimates the 3D motion of the head from feature points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract a subset of the FAPs. A special data structure is proposed to describe the deformable templates in order to reduce the time consumed in computing energy functions. The remaining FAPs, the 3D rigid head motion vectors, are estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of the 3D rigid object motion estimation.

  18. Algorithm for Automatic Generation of Curved and Compound Twills

    Institute of Scientific and Technical Information of China (English)

    WANG Mei-zhen; WANG Fu-mei; WANG Shan-yuan

    2005-01-01

    A new algorithm using matrix left-shift functions for the quick generation of curved and compound twills is introduced in this paper. A matrix model for the generation of regular, curved and compound twill structures is established and its computational realization is elaborated. Examples of applying the algorithm to the simulation and automatic generation of curved and compound twills in fabric CAD are given.
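
    The matrix left-shift idea lends itself to a very short sketch; here is one plausible reading in Python/numpy (the encoding 1 = warp over weft and the shift sequence are illustrative):

      import numpy as np

      def twill(base_row, shift=1):
          # Regular twill: cyclically left-shift a base interlacing row.
          n = len(base_row)
          return np.array([np.roll(base_row, -shift * i) for i in range(n)])

      def curved_twill(base_row, shifts):
          # Curved twill: the shift amount varies from row to row.
          rows, offset = [], 0
          for s in shifts:
              offset += s
              rows.append(np.roll(base_row, -offset))
          return np.array(rows)

      print(twill([1, 0, 0, 0]))                  # 1/3 twill repeat
      print(curved_twill([1, 1, 0, 0, 0, 0], [1, 1, 2, 2, 3, 3]))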

  19. The Automatic Generation of Knowledge Spaces From Problem Solving Strategies

    NARCIS (Netherlands)

    Milovanovic, Ivica; Jeuring, Johan

    2016-01-01

    In this paper, we explore theoretical and practical aspects of the automatic generation of knowledge spaces from problem solving strategies. We show how the generated spaces can be used for adapting strategy-based problem solving learning environments (PSLEs).

  20. NuFTA: A CASE Tool for Automatic Software Fault Tree Analysis

    International Nuclear Information System (INIS)

    Yun, Sang Hyun; Lee, Dong Ah; Yoo, Jun Beom

    2010-01-01

    Software fault tree analysis (SFTA) is widely used for analyzing software requiring high reliability. In SFTA, experts predict failures of a system through HAZOP (Hazard and Operability study) or FMEA (Failure Mode and Effects Analysis) and draw software fault trees for those failures. The quality and cost of a software fault tree therefore depend on the knowledge and experience of the experts. This paper proposes a CASE tool, NuFTA, to assist experts in safety analysis. NuFTA automatically generates software fault trees from NuSCR formal requirements specifications. NuSCR is a formal specification language used for specifying the software requirements of the KNICS RPS (Reactor Protection System) in Korea. We used previously proposed SFTA templates in order to generate the fault trees automatically. NuFTA also generates logical formulae summarizing each failure's cause, and we plan to make use of these formulae through formal verification techniques.

  1. A population MRI brain template and analysis tools for the macaque.

    Science.gov (United States)

    Seidlitz, Jakob; Sponheim, Caleb; Glen, Daniel; Ye, Frank Q; Saleem, Kadharbatcha S; Leopold, David A; Ungerleider, Leslie; Messinger, Adam

    2018-04-15

    The use of standard anatomical templates is common in human neuroimaging, as it facilitates data analysis and comparison across subjects and studies. For non-human primates, previous in vivo templates have lacked sufficient contrast to reliably validate known anatomical brain regions and have not provided tools for automated single-subject processing. Here we present the "National Institute of Mental Health Macaque Template", or NMT for short. The NMT is a high-resolution in vivo MRI template of the average macaque brain generated from 31 subjects, as well as a neuroimaging tool for improved data analysis and visualization. From the NMT volume, we generated maps of tissue segmentation and cortical thickness. Surface reconstructions and transformations to previously published digital brain atlases are also provided. We further provide an analysis pipeline using the NMT that automates and standardizes the time-consuming processes of brain extraction, tissue segmentation, and morphometric feature estimation for anatomical scans of individual subjects. The NMT and associated tools thus provide a common platform for precise single-subject data analysis and for characterizations of neuroimaging results across subjects and studies. Copyright © 2017 Elsevier. All rights reserved.

  2. Automatic detection of solar features in HSOS full-disk solar images using guided filter

    Science.gov (United States)

    Yuan, Fei; Lin, Jiaben; Guo, Jingjing; Wang, Gang; Tong, Liyue; Zhang, Xinwei; Wang, Bingxiang

    2018-02-01

    A procedure is introduced for the automatic detection of solar features in full-disk solar images from Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. A guided filter, introduced here to astronomical target detection for the first time, is adopted to enhance the edges of solar features and restrain solar limb darkening. Specific features are then detected by the Otsu algorithm and further thresholding. Compared with other automatic detection procedures, our procedure has advantages such as real-time operation and reliability, with no need for local thresholds. It also greatly reduces the amount of computation, a benefit of the efficient guided filter algorithm. The procedure has been tested on a one-month sequence (December 2013) of HSOS full-disk solar images, and the results show that the number of features detected by our procedure is consistent with manual detection.
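
    A rough sketch of such a pipeline using OpenCV (the guided filter lives in the ximgproc contrib module); the file name, radius and eps values are illustrative, and the unsharp-style enhancement stands in for the paper's exact procedure.

      import cv2

      img = cv2.imread("hsos_fulldisk.png", cv2.IMREAD_GRAYSCALE)  # placeholder
      den = cv2.medianBlur(img, 5)                # remove noise

      # Edge-preserving smoothing; requires opencv-contrib-python.
      base = cv2.ximgproc.guidedFilter(guide=den, src=den, radius=8, eps=100)
      detail = cv2.subtract(den, base)            # edges / local structure
      enhanced = cv2.add(den, detail)             # boost edges, flatten limb

      # Global Otsu threshold; feature-specific thresholding would follow.
      _, mask = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)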

  3. A Multiple Sensor Machine Vision System for Automatic Hardwood Feature Detection

    Science.gov (United States)

    D. Earl Kline; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman; Robert L. Brisbin

    1993-01-01

    A multiple sensor machine vision prototype is being developed to scan full size hardwood lumber at industrial speeds for automatically detecting features such as knots, holes, wane, stain, splits, checks, and color. The prototype integrates a multiple sensor imaging system, a materials handling system, a computer system, and application software. The prototype provides...

  4. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.
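
    To make the idea concrete, here is a toy sparse autoencoder in plain numpy: tanh units with an L1 penalty on the hidden activations stand in for the paper's exact formulation, and the data matrix is a random placeholder for TJ-II signal windows.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 64))          # placeholder input windows
      n_hidden, lam, lr = 16, 1e-3, 0.05

      W1 = rng.standard_normal((64, n_hidden)) * 0.1
      W2 = rng.standard_normal((n_hidden, 64)) * 0.1

      for _ in range(500):
          H = np.tanh(X @ W1)                     # hidden feature representation
          err = H @ W2 - X                        # reconstruction error
          # Gradients of the reconstruction loss plus an L1 sparsity penalty.
          gW2 = H.T @ err / len(X)
          gH = (err @ W2.T + lam * np.sign(H)) * (1.0 - H**2)
          W1 -= lr * (X.T @ gH / len(X))
          W2 -= lr * gW2

      features = np.tanh(X @ W1)                  # reduced input for a classifier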

  5. Automatic feature extraction in large fusion databases by using deep learning approach

    International Nuclear Information System (INIS)

    Farias, Gonzalo; Dormido-Canto, Sebastián; Vega, Jesús; Rattá, Giuseppe; Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín

    2016-01-01

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.

  6. Automatic control system generation for robot design validation

    Science.gov (United States)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

    The specification and drawings present a new method, system, software product, and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting the robot design into a generic robotic description using a predetermined format, then generating a control system from the generic robotic description, and finally updating the robot design parameters of the robotic system with an analysis tool using both the generic robotic description and the control system.

  7. Automatic processing of unattended object features by functional connectivity

    Directory of Open Access Journals (Sweden)

    Katja Martina Mayer

    2013-05-01

    Observers can selectively attend to object features that are relevant for a task. However, unattended task-irrelevant features may still be processed and possibly integrated with the attended features. This study investigated the neural mechanisms for processing both task-relevant (attended) and task-irrelevant (unattended) object features. The Garner paradigm was adapted for functional magnetic resonance imaging (fMRI) to test whether specific brain areas process the conjunction of features or whether multiple interacting areas are involved in this form of feature integration. Observers attended to the shape, colour, or non-rigid motion of novel objects while unattended features changed from trial to trial (change blocks) or remained constant (no-change blocks) during a given block. This block manipulation allowed us to measure the extent to which unattended features affected neural responses, which would reflect the extent to which multiple object features are automatically processed. We did not find Garner interference at the behavioural level. However, we designed the experiment to equate performance across block types so that any fMRI results could not be due solely to differences in task difficulty between change and no-change blocks. Attention to specific features localised several areas known to be involved in object processing. No area showed larger responses on change blocks compared to no-change blocks. However, psychophysiological interaction analyses revealed that several functionally-localised areas showed significant positive interactions, depending on block type, with areas in occipito-temporal and frontal cortex. Overall, these findings suggest that both regional responses and functional connectivity are crucial for processing multi-featured objects.

  8. Procedure for the automatic mesh generation of innovative gear teeth

    Directory of Open Access Journals (Sweden)

    Radicella Andrea Chiaramonte

    2016-01-01

    After having described gear wheels with teeth having the two sides constituted by different involutes and their importance in engineering applications, we stress the need for an efficient procedure for the automatic mesh generation of innovative gear teeth. First, we describe the procedure for the subdivision of the tooth profile in the various possible cases, then we show the method for creating the subdivision mesh, defined by two series of curves called meridians and parallels. Finally, we describe how the above procedure for automatic mesh generation is able to solve specific cases that may arise when dealing with teeth having the two sides constituted by different involutes.

  9. Fabrication of porous zirconia using filter paper template

    International Nuclear Information System (INIS)

    Deng Yuhua; Wei Pan

    2005-01-01

    In this work, porous zirconia ceramic was synthesized using filter papers as a template. Special attention is paid to whether the structure of the filter paper can be transferred to the zirconia. The microstructure of the synthesized porous zirconia was observed with SEM and the phase was determined by XRD. The surface area and pore structure were investigated with an automatic volumetric sorption analyzer. It has been found that the morphology of the template transfers to the porous zirconia quite well. (orig.)

  10. Automatic delineation of brain regions on MRI and PET images from the pig.

    Science.gov (United States)

    Villadsen, Jonas; Hansen, Hanne D; Jørgensen, Louise M; Keller, Sune H; Andersen, Flemming L; Petersen, Ida N; Knudsen, Gitte M; Svarer, Claus

    2018-01-15

    The increasing use of the pig as a research model in neuroimaging requires standardized processing tools. For example, extraction of regional dynamic time series from brain PET images requires parcellation procedures that benefit from being automated. Manual inter-modality spatial normalization to an MRI atlas is operator-dependent, time-consuming, and can be inaccurate when cortical radiotracer binding or skull uptake is lacking. We present a parcellated PET template that allows for automatic spatial normalization to PET images of any radiotracer. MRI and [11C]Cimbi-36 PET scans obtained in sixteen pigs formed the basis for the atlas. The high-resolution MRI scans allowed for the creation of an accurately averaged MRI template. By aligning the within-subject PET scans to their MRI counterparts, an averaged PET template was created in the same space. We developed an automatic procedure for spatial normalization of the averaged PET template to new PET images and hereby facilitated transfer of the atlas regional parcellation. Evaluation of the automatic spatial normalization procedure found the median voxel displacement to be 0.22±0.08 mm using the MRI template with individual MRI images and 0.92±0.26 mm using the PET template with individual [11C]Cimbi-36 PET images. We tested the automatic procedure by assessing eleven PET radiotracers with different kinetics and spatial distributions, using perfusion-weighted images of early PET time frames. We here present an automatic procedure for accurate and reproducible spatial normalization and parcellation of pig PET images of any radiotracer with reasonable blood-brain barrier penetration. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching.

    Science.gov (United States)

    Machado, Inês; Toews, Matthew; Luo, Jie; Unadkat, Prashin; Essayed, Walid; George, Elizabeth; Teodoro, Pedro; Carvalho, Herculano; Martins, Jorge; Golland, Polina; Pieper, Steve; Frisken, Sarah; Golby, Alexandra; Wells, William

    2018-06-04

    The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.

  12. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Burghouts, G.J.; Eendebak, P.T.; Huis, J.R. van; Dijk, J.; Rest, J.H.C. van

    2014-01-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition.

  13. Image Relaxation Matching Based on Feature Points for DSM Generation

    Institute of Scientific and Technical Information of China (English)

    ZHENG Shunyi; ZHANG Zuxun; ZHANG Jianqing

    2004-01-01

    In photogrammetry and remote sensing, image matching is a basic and crucial process for automatic DEM generation. In this paper we present an image relaxation matching method based on feature points. This method can be considered an extension of regular grid-point-based matching, and it avoids the shortcomings of that approach. For example, with this method we can avoid areas of low or even no texture, where errors frequently appear in cross-correlation matching. Meanwhile, it makes full use of mature techniques such as probability relaxation, image pyramids and the like, which have already been successfully used in the grid-point matching process. Application of the technique to DEM generation in different regions proved that it is more reasonable and reliable.

  14. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and it has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in its early stages. Thus, the objective of this work is to develop an automatic method for the detection of glaucoma in retinal images. The methodology used in the study was: acquisition of an image database, optic disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. We obtained results of 93% accuracy.

  15. A light meson translatable template

    International Nuclear Information System (INIS)

    Allgower, C.E.; Peaslee, D.C.

    2002-01-01

    Recently surveyed (mass)^2 values for I = 0, J^PC = 2^++ light mesons can be assembled into repeating patterns of 4 states, dubbed 'templates'. Within error, both internal and external template spacings approximate simple multiples of Δm^2 ≅ 0.35 GeV^2. Hopefully, this feature will be useful in predicting the positions of higher isoscalar 2^++ states.

  16. DEVELOPMENT OF THE MODEL OF AN AUTOMATIC GENERATION OF TOTAL AMOUNTS OF COMMISSIONS IN INTERNATIONAL INTERBANK PAYMENTS

    Directory of Open Access Journals (Sweden)

    Dmitry N. Bolotov

    2013-01-01

    Full Text Available The article deals with the main form of international payment - bank transfer and features when it is charging by banks correspondent fees for transit funds in their correspondent accounts. In order to optimize the cost of expenses for international money transfers there is a need to develop models and toolkit of automatic generation of the total amount of commissions in international interbank settlements. Accordingly, based on graph theory, approach to the construction of the model was developed.

  17. Feature extraction and descriptor calculation methods for automatic georeferencing of Philippines' first microsatellite imagery

    Science.gov (United States)

    Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.

    2017-10-01

    The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor, and was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Keypoints are matched based on their descriptor vectors: nearest-neighbor matching is employed based on a metric distance between the descriptors, such as the Euclidean or city-block distance, among others. Rough matching outputs not only correct matches but also faulty ones, so random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived; RANSAC identifies whether a point fits the transformation function and returns the inlier matches. A previous work in automatic georeferencing incorporates a geometric restriction; in this work, we applied a simplified version of that learning method. The transformation matrix was solved by Affine, Projective, and Polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of randomly selected interest points between the master image and the transformed slave image.
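
    A compact OpenCV sketch of the FAST-SIFT-RANSAC chain (the file names are placeholders; the ratio-test constant and the affine model are one choice among the transformation models mentioned above):

      import cv2
      import numpy as np

      master = cv2.imread("master.png", cv2.IMREAD_GRAYSCALE)
      slave = cv2.imread("slave.png", cv2.IMREAD_GRAYSCALE)

      fast = cv2.FastFeatureDetector_create(threshold=25)
      sift = cv2.SIFT_create()
      kp_m, des_m = sift.compute(master, fast.detect(master, None))
      kp_s, des_s = sift.compute(slave, fast.detect(slave, None))

      # Nearest-neighbour matching on descriptor distance, with a ratio test.
      matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_s, des_m, k=2)
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]

      src = np.float32([kp_s[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp_m[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

      # RANSAC rejects fall-out matches while estimating the transformation.
      M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)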

  18. Morphological self-organizing feature map neural network with applications to automatic target recognition

    Science.gov (United States)

    Zhang, Shijun; Jing, Zhongliang; Li, Jianxun

    2005-01-01

    The rotation-invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological top-hat transform with the self-organizing feature map neural network, the adaptive topological region is selected. Using the erosion operation, topological region shrinkage is achieved. The steerable-filter-based morphological self-organizing feature map neural network is applied to automatic target recognition of binary standard patterns and real-world infrared sequence images. Compared with the Hamming network and morphological shared-weight networks, respectively, the proposed method achieves a higher recognition correct rate, robust adaptability, quick training, and better generalization.

  19. Template-based education toolkit for mobile platforms

    Science.gov (United States)

    Golagani, Santosh Chandana; Esfahanian, Moosa; Akopian, David

    2012-02-01

    Nowadays mobile phones are the most widely used portable devices, evolving very fast by adding new features and improving user experiences. The latest generation of hand-held devices, called smartphones, is equipped with superior memory, cameras and rich multimedia features, empowering people to use their mobile phones not only as a communication tool but also for entertainment purposes. With many young students showing interest in learning mobile application development, one should introduce novel learning methods that may adapt to fast technology changes and introduce students to application development. Mobile phones have become common devices, and the engineering community incorporates phones in various solutions. Overcoming the limitations of conventional undergraduate electrical engineering (EE) education, this paper explores the concept of template-based education in mobile phone programming. The concept is based on developing small exercise templates that students can manipulate and revise for a quick hands-on introduction to application development and integration. The Android platform is used as a popular open-source environment for application development. The exercises relate to image processing topics typically studied by many students. The goal is to enable conventional course enhancements by incorporating in them short hands-on learning modules.

  20. Multi-template polymerase chain reaction.

    Science.gov (United States)

    Kalle, Elena; Kubista, Mikael; Rensing, Christopher

    2014-12-01

    PCR is a formidable and potent technology that serves as an indispensable tool in a wide range of biological disciplines. However, due to the ease of use and often lack of rigorous standards many PCR applications can lead to highly variable, inaccurate, and ultimately meaningless results. Thus, rigorous method validation must precede its broad adoption to any new application. Multi-template samples possess particular features, which make their PCR analysis prone to artifacts and biases: multiple homologous templates present in copy numbers that vary within several orders of magnitude. Such conditions are a breeding ground for chimeras and heteroduplexes. Differences in template amplification efficiencies and template competition for reaction compounds undermine correct preservation of the original template ratio. In addition, the presence of inhibitors aggravates all of the above-mentioned problems. Inhibitors might also have ambivalent effects on the different templates within the same sample. Yet, no standard approaches exist for monitoring inhibitory effects in multi-template PCR, which is crucial for establishing compatibility between samples.

  1. Feature extraction and classification in automatic weld seam radioscopy

    International Nuclear Information System (INIS)

    Heindoerfer, F.; Pohle, R.

    1994-01-01

    The investigations conducted have shown that automatic feature extraction and classification procedures permit the identification of weld seam flaws. Within this context the favored learning fuzzy classifier represents a very good alternative to conventional classifiers. The results have also made clear that improvements, mainly in the field of image registration, are still possible by increasing the resolution of the radioscopy system. An almost error-free classification is conceivable only if the flaw is segmented correctly, i.e. in its full size, which requires improved detail recognizability and a sufficient contrast difference. (orig./MM)

  2. Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs

    Directory of Open Access Journals (Sweden)

    Khaled Jerbi

    2012-01-01

    In this paper, we introduce the Reconfigurable Video Coding (RVC) standard, based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called CAL Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still image decoders are summarized. We show that the obtained results can largely satisfy the real-time constraints for an embedded design on FPGA, as we obtain a throughput of 73 FPS for the MPEG-4 decoder and 34 FPS for the coding and decoding process of the LAR coder using video at CIF image size. This work resolves the main limitation of hardware generation from CAL designs.

  3. Hardware-efficient robust biometric identification from 0.58 second template and 12 features of limb (Lead I) ECG signal using logistic regression classifier.

    Science.gov (United States)

    Sahadat, Md Nazmus; Jacobs, Eddie L; Morshed, Bashir I

    2014-01-01

    The electrocardiogram (ECG), widely known as a cardiac diagnostic signal, has recently been proposed for biometric identification of individuals; however, reliability and reproducibility are of research interest. In this paper, we propose a template matching technique with 12 features using a logistic regression classifier that achieved high reliability and identification accuracy. Non-invasive ECG signals were captured using our custom-built ambulatory EEG/ECG embedded device (NeuroMonitor). ECG data were collected from 10 healthy subjects, between 25-35 years of age, for 10 seconds per trial, with 10 trials per subject. From each trial, only 0.58 seconds of Lead I ECG data were used as the template. A hardware-efficient fiducial point detection technique was implemented for feature extraction. To obtain repeated random sub-sampling validation, data were randomly separated into training and testing sets at a ratio of 80:20. Test data were used to find the classification accuracy. ECG template data with 12 extracted features provided the best performance in terms of accuracy (up to 100%) and processing complexity (computation time of 1.2 ms). This work shows that a single-limb (Lead I) ECG can robustly identify an individual quickly and reliably with minimal contact and data processing using the proposed algorithm.
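
    The classification stage reduces to a few lines with scikit-learn; the feature matrix below is a random stand-in for the 12 fiducial-point features per 0.58 s template (10 subjects x 10 trials), with the 80:20 split used above.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      X = np.random.randn(100, 12)        # placeholder extracted ECG features
      y = np.repeat(np.arange(10), 10)    # subject labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                stratify=y, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("identification accuracy:", clf.score(X_te, y_te))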

  4. Model-based automatic generation of grasping regions

    Science.gov (United States)

    Bloss, David A.

    1993-01-01

    The problem of automatically generating stable regions for a robotic end effector on a target object, given a model of the end effector and the object is discussed. In order to generate grasping regions, an initial valid grasp transformation from the end effector to the object is obtained based on form closure requirements, and appropriate rotational and translational symmetries are associated with that transformation in order to construct a valid, continuous grasping region. The main result of this algorithm is a list of specific, valid grasp transformations of the end effector to the target object, and the appropriate combinations of translational and rotational symmetries associated with each specific transformation in order to produce a continuous grasp region.

  5. Design dependencies within the automatic generation of hypermedia presentations

    NARCIS (Netherlands)

    O. Rosell Martinez

    2002-01-01

    Many dependencies appear between the different stages of the creation of a hypermedia presentation. These dependencies have to be taken into account while designing a system for their automatic generation. In this work we study two of them and propose some techniques to treat them.

  6. Automatic detection of diabetic retinopathy features in ultra-wide field retinal images

    Science.gov (United States)

    Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur

    2017-03-01

    Diabetic retinopathy (DR) is a major cause of irreversible vision loss. DR screening relies on retinal clinical signs (features). Opportunities for computer-aided DR feature detection have emerged with the development of Ultra-WideField (UWF) digital scanning laser technology. UWF imaging covers 82% greater retinal area (200°), against 45° in conventional cameras, allowing more clinically relevant retinopathy to be detected. UWF images also provide a high resolution of 3078 x 2702 pixels. Currently DR screening uses 7 overlapping conventional fundus images, and the UWF images provide similar results. However, in 40% of cases, more retinopathy was found by UWF outside the 7 ETDRS fields, and in 10% of cases, retinopathy was reclassified as more severe. This is because UWF imaging allows examination of both the central retina and more peripheral regions, with the latter implicated in DR. We have developed an algorithm for automatic recognition of DR features, including bright lesions (cotton wool spots and exudates) and dark lesions (microaneurysms and blot, dot and flame haemorrhages) in UWF images. The algorithm extracts features from grayscale (green "red-free" laser light) and colour-composite UWF images, including intensity, Histogram-of-Gradient and Local Binary Patterns. Pixel-based classification is performed with three different classifiers. The main contribution is the automatic detection of DR features in the peripheral retina. The method is evaluated by leave-one-out cross-validation on 25 UWF retinal images with 167 bright lesions, and 61 other images with 1089 dark lesions. The SVM classifier performs best, with an AUC of 94.4% / 95.31% for bright / dark lesions.

  7. MR-based automatic delineation of volumes of interest in human brain PET images using probability maps

    DEFF Research Database (Denmark)

    Svarer, Claus; Madsen, Karina; Hasselbalch, Steen G.

    2005-01-01

    The purpose of this study was to develop and validate an observer-independent approach for automatic generation of volume-of-interest (VOI) brain templates to be used in emission tomography studies of the brain. The method utilizes a VOI probability map created on the basis of a database of several...... delineation of the VOI set. The approach was also shown to work equally well in individuals with pronounced cerebral atrophy. Probability-map-based automatic delineation of VOIs is a fast, objective, reproducible, and safe way to assess regional brain values from PET or SPECT scans. In addition, the method...

  8. Automatic Model-Based Generation of Parameterized Test Cases Using Data Abstraction

    NARCIS (Netherlands)

    Calamé, Jens R.; Ioustinova, Natalia; Romijn, J.M.T.; Smith, G.; van de Pol, Jan Cornelis

    2007-01-01

    Developing test suites is a costly and error-prone process. Model-based test generation tools facilitate this process by automatically generating test cases from system models. The applicability of these tools, however, depends on the size of the target systems. Here, we propose an approach to

  9. IADE: a system for intelligent automatic design of bioisosteric analogs

    Science.gov (United States)

    Ertl, Peter; Lewis, Richard

    2012-11-01

    IADE, a software system supporting molecular modellers through the automatic design of non-classical bioisosteric analogs, scaffold hopping and fragment growing, is presented. The program combines sophisticated cheminformatics functionalities for constructing novel analogs and filtering them based on their drug-likeness and synthetic accessibility using automatic structure-based design capabilities: the best candidates are selected according to their similarity to the template ligand and to their interactions with the protein binding site. IADE works in an iterative manner, improving the fitness of designed molecules in every generation until structures with optimal properties are identified. The program frees molecular modellers from routine, repetitive tasks, allowing them to focus on analysis and evaluation of the automatically designed analogs, considerably enhancing their work efficiency as well as the area of chemical space that can be covered. The performance of IADE is illustrated through a case study of the design of a nonclassical bioisosteric analog of a farnesyltransferase inhibitor—an analog that has won a recent "Design a Molecule" competition.

  10. Automatic delineation of brain regions on MRI and PET images from the pig

    DEFF Research Database (Denmark)

    Villadsen, Jonas; Hansen, Hanne D; Jørgensen, Louise M

    2018-01-01

    Manual inter-modality spatial normalization to an MRI atlas is operator-dependent, time-consuming, and can be inaccurate with lack of cortical radiotracer binding or skull uptake. NEW METHOD: A parcellated PET template that allows for automatic spatial normalization to PET images of any radiotracer....... RESULTS: MRI and [11C]Cimbi-36 PET scans obtained in sixteen pigs made the basis for the atlas. The high resolution MRI scans allowed for creation of an accurately averaged MRI template. By aligning the within-subject PET scans to their MRI counterparts, an averaged PET template was created in the same...... the MRI template with individual MRI images and 0.92±0.26mm using the PET template with individual [11C]Cimbi-36 PET images. We tested the automatic procedure by assessing eleven PET radiotracers with different kinetics and spatial distributions by using perfusion-weighted images of early PET time frames...

  11. Automatic Generation of Facial Expression Using Triangular Geometric Deformation

    OpenAIRE

    Jia-Shing Sheu; Tsu-Shien Hsieh; Ho-Nien Shou

    2014-01-01

    This paper presents an image deformation algorithm and constructs an automatic facial expression generation system to generate new facial expressions in neutral state. After the users input the face image in a neutral state into the system, the system separates the possible facial areas and the image background by skin color segmentation. It then uses the morphological operation to remove noise and to capture the organs of facial expression, such as the eyes, mouth, eyebrow, and nose. The fea...

  12. CarSim: Automatic 3D Scene Generation of a Car Accident Description

    OpenAIRE

    Egges, A.; Nijholt, A.; Nugues, P.

    2001-01-01

    The problem of generating a 3D simulation of a car accident from a written description can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two system parts, we designed a template formalism to represent a written accident report. The CarSim system processes formal descriptions of accidents and creates corresponding 3D simulations. A planning component models the trajectories and temporal values of every vehicle ...
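
    One plausible shape for such a template formalism, sketched as Python dataclasses (the field names are invented for illustration and are not the CarSim schema):

      from dataclasses import dataclass, field
      from typing import List, Tuple

      @dataclass
      class Vehicle:
          vid: str
          trajectory: List[Tuple[float, float]]       # planned waypoints
          events: List[str] = field(default_factory=list)  # e.g. "brake"

      @dataclass
      class AccidentTemplate:
          # Intermediate representation between the linguistic analysis
          # and the 3D scene generator.
          static_objects: List[str]
          vehicles: List[Vehicle]
          collisions: List[Tuple[str, str]]           # pairs of vehicle ids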

  13. Finger multibiometric cryptosystems: fusion strategy and template security

    Science.gov (United States)

    Peng, Jialiang; Li, Qiong; Abd El-Latif, Ahmed A.; Niu, Xiamu

    2014-03-01

    We address two critical issues in the design of a finger multibiometric system, i.e., fusion strategy and template security. First, three fusion strategies (feature-level, score-level, and decision-level fusions) with the corresponding template protection technique are proposed as the finger multibiometric cryptosystems to protect multiple finger biometric templates of fingerprint, finger vein, finger knuckle print, and finger shape modalities. Second, we theoretically analyze different fusion strategies for finger multibiometric cryptosystems with respect to their impact on security and recognition accuracy. Finally, the performance of finger multibiometric cryptosystems at different fusion levels is investigated on a merged finger multimodal biometric database. The comparative results suggest that the proposed finger multibiometric cryptosystem at feature-level fusion outperforms other approaches in terms of verification performance and template security.

  14. Categorical templates are more useful when features are consistent: Evidence from eye movements during search for societally important vehicles.

    Science.gov (United States)

    Hout, Michael C; Robbins, Arryn; Godwin, Hayward J; Fitzsimmons, Gemma; Scarince, Collin

    2017-08-01

    Unlike in laboratory visual search tasks-wherein participants are typically presented with a pictorial representation of the item they are asked to seek out-in real-world searches, the observer rarely has veridical knowledge of the visual features that define their target. During categorical search, observers look for any instance of a categorically defined target (e.g., helping a family member look for their mobile phone). In these circumstances, people may not have information about noncritical features (e.g., the phone's color), and must instead create a broad mental representation using the features that define (or are typical of) the category of objects they are seeking out (e.g., modern phones are typically rectangular and thin). In the current investigation (Experiment 1), using a categorical visual search task, we add to the body of evidence suggesting that categorical templates are effective enough to conduct efficient visual searches. When color information was available (Experiment 1a), attentional guidance, attention restriction, and object identification were enhanced when participants looked for categories with consistent features (e.g., ambulances) relative to categories with more variable features (e.g., sedans). When color information was removed (Experiment 1b), attention benefits disappeared, but object recognition was still better for feature-consistent target categories. In Experiment 2, we empirically validated the relative homogeneity of our societally important vehicle stimuli. Taken together, our results are in line with a category-consistent view of categorical target templates (Yu, Maxfield, & Zelinsky in, Psychological Science, 2016. doi: 10.1177/0956797616640237 ), and suggest that when features of a category are consistent and predictable, searchers can create mental representations that allow for the efficient guidance and restriction of attention as well as swift object identification.

  15. Automatic inspection of pellets, second generation; Inspeccion automatica de pastillas de segunda generacion

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo-Lancho gonzalez, J. F.

    2010-07-01

    In recent years, Enusa has developed a second-generation robot for the automatic inspection of pellets, incorporating the following advances: more advanced vision systems that improve the quality of the inspection, inspection performed in line with the grinding operation, increased productivity of the inspection process by making the accumulation of pellets in trays unnecessary, and a more rational lay-out of the equipment that allows easier and faster cleaning. This second-generation machine is already part of the automatic inspection equipment developed by Enusa and is an example of Enusa's ongoing commitment to development and innovation in nuclear technology.

  16. Multi-template polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Elena Kalle

    2014-12-01

    PCR is a formidable and potent technology that serves as an indispensable tool in a wide range of biological disciplines. However, due to the ease of use and often lack of rigorous standards many PCR applications can lead to highly variable, inaccurate, and ultimately meaningless results. Thus, rigorous method validation must precede its broad adoption to any new application. Multi-template samples possess particular features, which make their PCR analysis prone to artifacts and biases: multiple homologous templates present in copy numbers that vary within several orders of magnitude. Such conditions are a breeding ground for chimeras and heteroduplexes. Differences in template amplification efficiencies and template competition for reaction compounds undermine correct preservation of the original template ratio. In addition, the presence of inhibitors aggravates all of the above-mentioned problems. Inhibitors might also have ambivalent effects on the different templates within the same sample. Yet, no standard approaches exist for monitoring inhibitory effects in multi-template PCR, which is crucial for establishing compatibility between samples.

  17. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    Science.gov (United States)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop
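
    The reverse-mode chain rule that ADJIFOR mechanizes for whole FORTRAN programs can be shown in miniature; this toy Python tape supports only + and *, and a real tool would traverse the computation graph in topological order rather than recursively.

      class Var:
          def __init__(self, value):
              self.value, self.grad, self._grad_fn = value, 0.0, ()

          def __add__(self, other):
              out = Var(self.value + other.value)
              out._grad_fn = ((self, 1.0), (other, 1.0))
              return out

          def __mul__(self, other):
              out = Var(self.value * other.value)
              out._grad_fn = ((self, other.value), (other, self.value))
              return out

          def backward(self, seed=1.0):
              # Accumulate the adjoint, then push it to the parents.
              self.grad += seed
              for var, local in self._grad_fn:
                  var.backward(seed * local)

      x, y = Var(3.0), Var(2.0)
      f = x * y + x              # f = xy + x
      f.backward()
      print(x.grad, y.grad)      # df/dx = y + 1 = 3.0, df/dy = x = 3.0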

  18. automatic generation of root locus plots for linear time invariant

    African Journals Online (AJOL)

    user

    peak time, its real power is its ability to solve problems with higher order systems. ... implementation of a computer program for the automatic generation of root loci using .... the concepts of complex variables, the angle condition can be ...

  19. ADJUST: An automatic EEG artifact detector based on the joint use of spatial and temporal features.

    Science.gov (United States)

    Mognon, Andrea; Jovicich, Jorge; Bruzzone, Lorenzo; Buiatti, Marco

    2011-02-01

    A successful method for removing artifacts from electroencephalogram (EEG) recordings is Independent Component Analysis (ICA), but its implementation remains largely user-dependent. Here, we propose a completely automatic algorithm (ADJUST) that identifies artifacted independent components by combining stereotyped artifact-specific spatial and temporal features. Features were optimized to capture blinks, eye movements, and generic discontinuities on a feature selection dataset. Validation on a totally different EEG dataset shows that (1) ADJUST's classification of independent components largely matches a manual one by experts (agreement on 95.2% of the data variance), and (2) Removal of the artifacted components detected by ADJUST leads to neat reconstruction of visual and auditory event-related potentials from heavily artifacted data. These results demonstrate that ADJUST provides a fast, efficient, and automatic way to use ICA for artifact removal. Copyright © 2010 Society for Psychophysiological Research.
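
    In the same spirit (though not the published feature set), artifact-specific features of an independent component can be computed in a few lines; the channel split below is a hypothetical frontal/posterior division.

      import numpy as np
      from scipy.stats import kurtosis

      def artifact_features(activations, topography, n_frontal=8):
          # activations: epochs x samples; topography: one weight per channel.
          temporal_kurtosis = kurtosis(activations, axis=1).mean()
          frontal = np.abs(topography[:n_frontal]).mean()
          posterior = np.abs(topography[n_frontal:]).mean()
          return {"temporal_kurtosis": temporal_kurtosis,   # discontinuities
                  "frontal_ratio": frontal / posterior}     # blinks/eye moves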

  20. A Knowledge Base for Automatic Feature Recognition from Point Clouds in an Urban Scene

    Directory of Open Access Journals (Sweden)

    Xu-Feng Xing

    2018-01-01

    LiDAR technology can provide very detailed and highly accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks. This becomes even more complex when the data is incomplete (occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising an ontology and semantic rules aiming at automatic feature recognition from point clouds in support of 3D modeling. First, several modules of the ontology are defined from different perspectives to describe an urban scene. For instance, the spatial relations module allows the formalized representation of possible topological relations extracted from point clouds. Then, a knowledge base is proposed that contains different concepts, their properties and their relations, together with constraints and semantic rules. Instances and their specific relations form an urban scene and are added to the knowledge base as facts. Based on the knowledge and semantic rules, a reasoning process is carried out to extract semantic features of the objects and their components in the urban scene. Finally, several experiments are presented to show the validity of our approach to recognize different semantic features of buildings from LiDAR point clouds.

  1. Automatic Generation of Tests from Domain and Multimedia Ontologies

    Science.gov (United States)

    Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos

    2011-01-01

    The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have been already reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…

  2. Automatic Generation and Ranking of Questions for Critical Review

    Science.gov (United States)

    Liu, Ming; Calvo, Rafael A.; Rus, Vasile

    2014-01-01

    Critical review skill is one important aspect of academic writing. Generic trigger questions have been widely used to support this activity. When students have a concrete topic in mind, trigger questions are less effective if they are too general. This article presents a learning-to-rank based system which automatically generates specific trigger…

  3. A GA-fuzzy automatic generation controller for interconnected power system

    CSIR Research Space (South Africa)

    Boesack, CD

    2011-10-01

    Full Text Available This paper presents a GA-Fuzzy Automatic Generation Controller for large interconnected power systems. The design of Fuzzy Logic Controllers by means of expert knowledge have typically been the traditional design norm, however, this may not yield...

  4. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; van Huis, Jasper R.; Dijk, Judith; van Rest, Jeroen H. C.

    2014-10-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focuses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, because people move freely through areas that cannot be covered by a single camera, because the actual snatch is a subtle action, and because collaboration is complex social behavior. We carried out an experiment with more than 20 validated pickpocket incidents. We used a top-down approach to translate expert knowledge into features and rules, and a bottom-up approach to learn discriminating patterns with a classifier. The classifier was used to separate the pickpockets from normal passers-by who are shopping in the mall. We performed a cross-validation to train and evaluate our system. In this paper, we describe our method, identify the most valuable features, and analyze the results that were obtained in the experiment. We estimate the quality of these features and the performance of automatic detection of (collaborating) pickpockets. The results show that many of the pickpockets can be detected at a low false alarm rate.

  5. Automatic Generation System of Multiple-Choice Cloze Questions and its Evaluation

    Directory of Open Access Journals (Sweden)

    Takuya Goto

    2010-09-01

    Full Text Available Since English expressions vary according to genre, it is important for students to study questions generated from sentences of the target genre. Although various questions are prepared, they are still not enough to cover the various genres students want to learn. On the other hand, producing English questions requires sufficient grammatical knowledge and vocabulary, so it is difficult for non-experts to prepare English questions by themselves. In this paper, we propose an automatic generation system for multiple-choice cloze questions from English texts. Empirical knowledge is necessary to produce appropriate questions, so machine learning is introduced to acquire knowledge from existing questions. To generate the questions from texts automatically, the system (1) extracts appropriate sentences for questions from texts based on Preference Learning, (2) estimates a blank part based on Conditional Random Fields, and (3) generates distracters based on statistical patterns of existing questions. Experimental results show our method is workable for selecting appropriate sentences and blank parts. Moreover, our method generates appropriate distracters, especially for sentences that do not contain proper nouns.
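    As a rough illustration of the three-step pipeline, the Python sketch below substitutes trivial heuristics for the paper's Preference Learning and Conditional Random Field models; the blanking rule, distracter pool, and example vocabulary are all invented for illustration.

    ```python
    import random
    import re

    def make_cloze(sentence, vocabulary, rng=random.Random(0)):
        """Toy cloze generator: blank one word and offer distracters."""
        words = re.findall(r"[A-Za-z]+", sentence)
        # Stand-in for the CRF blank estimator: blank the longest word.
        answer = max(words, key=len)
        stem = sentence.replace(answer, "_____", 1)
        # Stand-in for pattern-based distracter generation: sample other words.
        pool = [w for w in vocabulary if w.lower() != answer.lower()]
        options = rng.sample(pool, k=min(3, len(pool))) + [answer]
        rng.shuffle(options)
        return stem, options, answer

    stem, options, answer = make_cloze(
        "The committee approved the proposal unanimously.",
        ["reluctantly", "deliberately", "consistently", "temporarily"])
    print(stem)
    print(options, "->", answer)
    ```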

  6. MadEvent: automatic event generation with MadGraph

    International Nuclear Information System (INIS)

    Maltoni, Fabio; Stelzer, Tim

    2003-01-01

    We present a new multi-channel integration method and its implementation in the multi-purpose event generator MadEvent, which is based on MadGraph. Given a process, MadGraph automatically identifies all the relevant subprocesses, generates both the amplitudes and the mappings needed for an efficient integration over the phase space, and passes them to MadEvent. As a result, a process-specific, stand-alone code is produced that allows the user to calculate cross sections and produce unweighted events in a standard output format. Several examples are given for processes that are relevant for physics studies at present and forthcoming colliders. (author)

  7. Comparative Analysis of Music Recordings from Western and Non-Western traditions by Automatic Tonal Feature Extraction

    Directory of Open Access Journals (Sweden)

    Emilia Gómez

    2008-09-01

    Full Text Available The automatic analysis of large musical corpora by means of computational models overcomes some limitations of manual analysis, and the unavailability of scores for most existing music makes it necessary to work with audio recordings. Until now, research in this area has focused on music from the Western tradition. Nevertheless, we might ask if the available methods are suitable when analyzing music from other cultures. We present an empirical approach to the comparative analysis of audio recordings, focusing on tonal features and data mining techniques. Tonal features are related to the pitch class distribution, pitch range and employed scale, gamut and tuning system. We provide our initial but promising results obtained when trying to automatically distinguish music from Western and non-Western traditions; we analyze which descriptors are most relevant and study their distribution over 1500 pieces from different traditions and styles. As a result, some feature distributions differ for Western and non-Western music, and the obtained classification accuracy is higher than 80% for different classification algorithms and an independent test set. These results show that automatic description of audio signals together with data mining techniques provide means to characterize huge music collections from different traditions and complement musicological manual analyses.
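    The pitch-class distribution descriptor the authors rely on can be approximated with off-the-shelf tools; the sketch below uses librosa's chromagram as a stand-in for the HPCP-style features in the paper, and the audio file name is a placeholder.

    ```python
    import numpy as np
    import librosa

    # Load a recording (placeholder path) and compute a 12-bin chromagram:
    # per-frame energy for each of the 12 pitch classes.
    y, sr = librosa.load("recording.wav")
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)

    # Average over time and normalize to obtain a pitch-class distribution,
    # one candidate descriptor for separating tonal traditions.
    profile = chroma.mean(axis=1)
    profile /= profile.sum()
    print(np.round(profile, 3))
    ```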

  8. BPFlexTemplate: A Business Process template generation tool based on similarity and flexibility

    Directory of Open Access Journals (Sweden)

    Latifa Ilahi

    2017-01-01

    Full Text Available In large organizations with multiple organizational units, process variants emerge due to many aspects, including local management policies, resources or socio-technical limitations. Organizations then struggle to improve a business process which no longer has a single process model to redesign, implement and adjust. In this paper, we propose an approach to tackle these two challenges: decrease the proliferation of process variants in these organizations, and foresee, at the same time, the need for flexible business processes that allow for a certain degree of adjustment. To validate our approach, we first conducted case studies where we collected six real-world business process variants from two organizational units of the same healthcare organization. We then proposed an algorithm to derive a template process model from all the variants, which includes common and flexible process elements. We implemented our approach in a software tool called BPFlexTemplate, and tested it with the elicited variants.

  9. Automatic WSDL-guided Test Case Generation for PropEr Testing of Web Services

    Directory of Open Access Journals (Sweden)

    Konstantinos Sagonas

    2012-10-01

    Full Text Available With web services already being key ingredients of modern web systems, automatic and easy-to-use but at the same time powerful and expressive testing frameworks for web services are increasingly important. Our work aims at fully automatic testing of web services: ideally the user only specifies properties that the web service is expected to satisfy, in the form of input-output relations, and the system handles all the rest. In this paper we present in detail the component which lies at the heart of this system: how the WSDL specification of a web service is used to automatically create test case generators that can be fed to PropEr, a property-based testing tool, to create structurally valid random test cases for its operations and check its responses. Although the process is fully automatic, our tool optionally allows the user to easily modify its output to either add semantic information to the generators or write properties that test for more involved functionality of the web services.
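    PropEr is an Erlang tool, but the property-based idea transfers directly to other ecosystems. The minimal Python analogue below uses the hypothesis library, with an invented local function standing in for a web-service operation and an invented input-output property.

    ```python
    from hypothesis import given, strategies as st

    def add_item(cart, item):
        """Toy stand-in for a web-service operation under test."""
        return cart + [item]

    # The property plays the role of the user-specified input-output relation;
    # hypothesis generates structurally valid random test cases, much as the
    # WSDL-derived generators do for PropEr.
    @given(st.lists(st.text()), st.text())
    def test_add_item_appends(cart, item):
        new_cart = add_item(cart, item)
        assert new_cart[-1] == item
        assert len(new_cart) == len(cart) + 1

    test_add_item_appends()   # runs the property over many random inputs
    ```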

  10. Design of free patterns of nanocrystals with ad hoc features via templated dewetting

    Energy Technology Data Exchange (ETDEWEB)

    Aouassa, M.; Berbezier, I.; Favre, L.; Ronda, A. [IM2NP, CNRS, AMU, Marseille (France); Bollani, M.; Sordan, R. [LNES, Como (Italy); Delobbe, A.; Sudraud, P. [Orsay Physics, Fuveau (France)

    2012-07-02

    Design of monodisperse ultra-small nanocrystals (NCs) into large-scale patterns with ad hoc features is demonstrated. The process makes use of solid-state dewetting of a thin film templated through alloy liquid metal ion source focused ion beam (LMIS-FIB) nanopatterning. The solid-state dewetting initiated at the edges of the patterns controllably creates the ordering of NCs with ad hoc placement and periodicity. The NC size is tuned by varying the nominal thickness of the film, while their position results from the combination of film retraction from the edges of the layout and a Rayleigh-like instability. The use of ultra-high-resolution LMIS-FIB enables the production of monocrystalline NCs with tunable size, periodicity, and placement. It provides routes for the free design of nanostructures for generic applications in nanoelectronics.

  11. A Neutral-Network-Fusion Architecture for Automatic Extraction of Oceanographic Features from Satellite Remote Sensing Imagery

    National Research Council Canada - National Science Library

    Askari, Farid

    1999-01-01

    This report describes an approach for automatic feature detection from fusion of remote sensing imagery using a combination of neural network architecture and the Dempster-Shafer (DS) theory of evidence...

  12. Automatic registration of Iphone images to LASER point clouds of the urban structures using shape features

    Directory of Open Access Journals (Sweden)

    B. Sirmacek

    2013-10-01

    Full Text Available Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two different data types from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image which has only grayscale intensity levels according to the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we apply a registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.
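    The local-feature step can be prototyped with standard tools; the sketch below uses OpenCV's ORB features and a RANSAC homography in place of the paper's graph-matching formulation, and both file names are placeholders.

    ```python
    import cv2
    import numpy as np

    # Photograph and generated range image (placeholder file names).
    photo = cv2.imread("iphone_photo.jpg", cv2.IMREAD_GRAYSCALE)
    range_img = cv2.imread("range_image.png", cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(photo, None)
    kp2, des2 = orb.detectAndCompute(range_img, None)

    # Cross-checked brute-force matching on binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Robust transform estimate from the matches (needs at least 4).
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    ```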

  13. A non-parametric 2D deformable template classifier

    DEFF Research Database (Denmark)

    Schultz, Nette; Nielsen, Allan Aasbjerg; Conradsen, Knut

    2005-01-01

    feature space the ship-master will be able to interactively define a segmentation map, which is refined and optimized by the deformable template algorithms. The deformable templates are defined as two-dimensional vector-cycles. Local random transformations are applied to the vector-cycles, and stochastic...

  14. Functional Programming with C++ Template Metaprograms

    Science.gov (United States)

    Porkoláb, Zoltán

    Template metaprogramming is an emerging new direction of generative programming. With the clever definitions of templates we can force the C++ compiler to execute algorithms at compilation time. Among the application areas of template metaprograms are the expression templates, static interface checking, code optimization with adaption, language embedding and active libraries. However, as template metaprogramming was not an original design goal, the C++ language is not capable of elegant expression of metaprograms. The complicated syntax leads to the creation of code that is hard to write, understand and maintain. Although template metaprogramming has a strong relationship with functional programming, this is not reflected in the language syntax and existing libraries. In this paper we give a short and incomplete introduction to C++ templates and the basics of template metaprogramming. We highlight the role of template metaprograms and some important and widely used idioms. We give an overview of the possible application areas as well as debugging and profiling techniques. We suggest a pure functional style programming interface for C++ template metaprograms in the form of embedded Haskell code which is transformed to standard-compliant C++ source.

  15. Enhancing the Automatic Generation of Hints with Expert Seeding

    Science.gov (United States)

    Stamper, John; Barnes, Tiffany; Croy, Marvin

    2011-01-01

    The Hint Factory is an implementation of our novel method to automatically generate hints using past student data for a logic tutor. One disadvantage of the Hint Factory is the time needed to gather enough data on new problems in order to provide hints. In this paper we describe the use of expert sample solutions to "seed" the hint generation…

  16. Object-based target templates guide attention during visual search.

    Science.gov (United States)

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Better Metrics to Automatically Predict the Quality of a Text Summary

    Directory of Open Access Journals (Sweden)

    Judith D. Schlesinger

    2012-09-01

    Full Text Available In this paper we demonstrate a family of metrics for estimating the quality of a text summary relative to one or more human-generated summaries. The improved metrics are based on features automatically computed from the summaries to measure content and linguistic quality. The features are combined using one of three methods—robust regression, non-negative least squares, or canonical correlation, an eigenvalue method. The new metrics significantly outperform the previous standard for automatic text summarization evaluation, ROUGE.
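    Of the three combination methods, non-negative least squares is the simplest to sketch. Below, a toy feature matrix is fitted to human quality scores with SciPy; all numbers are invented placeholders, not data from the paper.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Rows: candidate summaries; columns: automatically computed content
    # and linguistic-quality features (placeholder values).
    X = np.array([[0.80, 0.10, 0.30],
                  [0.40, 0.60, 0.20],
                  [0.90, 0.20, 0.50],
                  [0.30, 0.70, 0.10]])
    # Human-assigned quality scores for the same summaries (placeholders).
    y = np.array([0.90, 0.50, 0.80, 0.40])

    # Non-negative least squares: find weights >= 0 minimizing ||Xw - y||.
    weights, residual = nnls(X, y)
    predicted = X @ weights
    print(weights, residual)
    ```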

  18. Automatic extraction of road features in urban environments using dense ALS data

    Science.gov (United States)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means for curb detection. Then, radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to generate an F-score close to 95% for pavement and sidewalk identification with a resolution of 20 cm and better than 80% for road marking detection.

  19. An approach of optimal sensitivity applied in the tertiary loop of the automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Belati, Edmarcio A. [CIMATEC - SENAI, Salvador, BA (Brazil); Alves, Dilson A. [Electrical Engineering Department, FEIS, UNESP - Sao Paulo State University (Brazil); da Costa, Geraldo R.M. [Electrical Engineering Department, EESC, USP - Sao Paulo University (Brazil)

    2008-09-15

    This paper proposes an approach of optimal sensitivity applied in the tertiary loop of the automatic generation control. The approach is based on the theorem of non-linear perturbation. From an optimal operation point obtained by an optimal power flow, a new optimal operation point is directly determined after a perturbation, i.e., without the necessity of an iterative process. This new optimal operation point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the technique of optimal sensitivity, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of the automatic generation control, referred to as the power sensitivity mode. Test results are presented to show the good performance of this approach. (author)

  20. Automatic Shadow Detection and Removal from a Single Image.

    Science.gov (United States)

    Khan, Salman H; Bennamoun, Mohammed; Sohel, Ferdous; Togneri, Roberto

    2016-03-01

    We present a framework to automatically detect and remove shadows in real world scenes from a single image. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The features are learned at the super-pixel level and along the dominant boundaries in the image. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow masks. Using the detected shadow masks, we propose a Bayesian formulation to accurately extract shadow matte and subsequently remove shadows. The Bayesian formulation is based on a novel model which accurately models the shadow generation process in the umbra and penumbra regions. The model parameters are efficiently estimated using an iterative optimization procedure. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.

  1. Study of defect generated visible photoluminescence in zinc oxide nano-particles prepared using PVA templates

    Energy Technology Data Exchange (ETDEWEB)

    Oudhia, A. [Department of Physics, Government V.Y.T. PG. Autonomous College, Durg, 491001 C.G. (India); Choudhary, A., E-mail: aarti.bhilai@gmail.com [Department of Physics, Government V.Y.T. PG. Autonomous College, Durg, 491001 C.G. (India); Sharma, S.; Aggrawal, S. [Department of Physics, Government V.Y.T. PG. Autonomous College, Durg, 491001 C.G. (India); Dhoble, S.J. [RTM University Nagpur, Maharashtra (India)

    2014-10-15

    Intrinsic defect-generated photoluminescence (PL) in zinc oxide nanoparticles (NPs) obtained by a PVA-template-based wet-chemical process has been studied. Good controllability was achieved over the surface defects, structure and morphology of the ZnO NPs through variation of the solvents used in synthesis. The PL emission depended strongly on the defect structure and morphology. SEM, XRD, annealing and PL excitation studies were used to analyze the types of defects involved in the visible emission as well as the defect concentration. A mechanism for the blue, green and yellow emissions is proposed. The spectral content of the visible emission was controlled through the generation or removal of defects by shape transformation or annealing, focusing on defect origins and broad controls. - Highlights: • ZnO nanoparticles were synthesized using a poly-vinyl alcohol template in various solvents. • The structure and morphology of the ZnO nanoparticles depended on the dielectric constant and boiling point of the solvents. • The photoluminescence properties of the ZnO nanoparticles were studied. • Maximum optical absorbance and photoluminescence intensity were found in the ethanolic preparation. • ZnO nanoparticles were annealed at different temperatures for detection of defect emission.

  2. Automatic detection of suspicious behavior of pickpockets with track-based features in a shopping mall

    OpenAIRE

    Bouma, H.; Baan, J.; Burghouts, G.J.; Eendebak, P.T.; Huis, J.R. van; Dijk, J.; Rest, J.H.C. van

    2014-01-01

    Proactive detection of incidents is required to decrease the cost of security incidents. This paper focusses on the automatic early detection of suspicious behavior of pickpockets with track-based features in a crowded shopping mall. Our method consists of several steps: pedestrian tracking, feature computation and pickpocket recognition. This is challenging because the environment is crowded, people move freely through areas which cannot be covered by a single camera, because the actual snat...

  3. Automatic Definition Extraction and Crossword Generation From Spanish News Text

    Directory of Open Access Journals (Sweden)

    Jennifer Esteche

    2017-08-01

    Full Text Available This paper describes the design and implementation of a system that takes Spanish texts and generates crosswords (board and definitions) in a fully automatic way, using definitions extracted from those texts. Our solution divides the problem into two parts: a definition extraction module that applies pattern matching, implemented in Python, and a crossword generation module that uses a greedy strategy, implemented in Prolog. The system achieves 73% precision and builds crosswords similar to those built by humans.
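    A skeletal version of the definition-extraction module can be written around a single pattern; the sketch below works on English rather than Spanish and uses a deliberately naive 'X is Y' pattern, whereas the real system applies a richer pattern set.

    ```python
    import re

    # One naive pattern standing in for the system's pattern set.
    PATTERN = re.compile(r"(?P<term>[A-Z][\w-]+) is (?P<definition>[^.]+)\.")

    def extract_definitions(text):
        """Return {term: definition} pairs found by the pattern."""
        return {m.group("term"): m.group("definition")
                for m in PATTERN.finditer(text)}

    text = ("Mercury is the closest planet to the Sun. "
            "Pluto is a dwarf planet.")
    print(extract_definitions(text))
    # {'Mercury': 'the closest planet to the Sun', 'Pluto': 'a dwarf planet'}
    ```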

  4. Automatic Imitation

    Science.gov (United States)

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  5. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have a hierarchical tree-like topology with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain in convex polygon shape in each level can be extracted in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.

  6. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    Science.gov (United States)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

    The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the 'randn' function in the MATLAB program and was used. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for the given patient might be different. The PDF of the rectal NTCP was obtained automatically for each group except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. We showed that during the prostate IMRT optimization, the patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the
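    The density-estimation step the authors describe maps to a one-liner in SciPy; the sketch below estimates a PDF from hypothetical rectal NTCP values, and the sample numbers are invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Hypothetical rectal NTCP values from previous plans (stand-in data).
    ntcp_samples = np.array([0.02, 0.04, 0.05, 0.07, 0.08, 0.11, 0.13, 0.16])

    # Gaussian-kernel density estimate of the NTCP distribution; the window
    # width (bandwidth) controls the smoothness, as noted in the abstract.
    kde = gaussian_kde(ntcp_samples)
    grid = np.linspace(0.0, 0.3, 200)
    pdf = kde(grid)
    print(grid[np.argmax(pdf)])   # mode of the estimated PDF
    ```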

  7. Automated PET-only quantification of amyloid deposition with adaptive template and empirically pre-defined ROI

    Science.gov (United States)

    Akamatsu, G.; Ikari, Y.; Ohnishi, A.; Nishida, H.; Aita, K.; Sasaki, M.; Yamamoto, Y.; Sasaki, M.; Senda, M.

    2016-08-01

    Amyloid PET is useful for early and/or differential diagnosis of Alzheimer’s disease (AD). Quantification of amyloid deposition using PET has been employed to improve diagnosis and to monitor AD therapy, particularly in research. Although MRI is often used for segmentation of gray matter and for spatial normalization into standard Montreal Neurological Institute (MNI) space where region-of-interest (ROI) template is defined, 3D MRI is not always available in clinical practice. The purpose of this study was to examine the feasibility of PET-only amyloid quantification with an adaptive template and a pre-defined standard ROI template that has been empirically generated from typical cases. A total of 68 subjects who underwent brain 11C-PiB PET were examined. The 11C-PiB images were non-linearly spatially normalized to the standard MNI T1 atlas using the same transformation parameters of MRI-based normalization. The automatic-anatomical-labeling-ROI (AAL-ROI) template was applied to the PET images. All voxel values were normalized by the mean value of cerebellar cortex to generate the SUVR-scaled images. Eleven typical positive images and eight typical negative images were normalized and averaged, respectively, and were used as the positive and negative template. Positive and negative masks which consist of voxels with SUVR  ⩾1.7 were extracted from both templates. Empirical PiB-prone ROI (EPP-ROI) was generated by subtracting the negative mask from the positive mask. The 11C-PiB image of each subject was non-rigidly normalized to the positive and negative template, respectively, and the one with higher cross-correlation was adopted. The EPP-ROI was then inversely transformed to individual PET images. We evaluated differences of SUVR between standard MRI-based method and PET-only method. We additionally evaluated whether the PET-only method would correctly categorize 11C-PiB scans as positive or negative. Significant correlation was observed between the SUVRs

  8. Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse.

    Science.gov (United States)

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy

    2012-11-01

    Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. In an on-ward evaluation, a substantial majority of the summaries was found by outgoing and incoming nurses to be understandable (90%), and a majority was found to be accurate (70%) and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. It is technically possible to automatically generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that was intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. A Method to Measure the Bracelet Based on Feature Energy

    Science.gov (United States)

    Liu, Hongmin; Li, Lu; Wang, Zhiheng; Huo, Zhanqiang

    2017-12-01

    To measure the bracelet automatically, a novel method based on feature energy is proposed. Firstly, the morphological method is utilized to preprocess the image, and the contour consisting of a concentric circle is extracted. Then, a feature energy function, which is relevant to the distances from one pixel to the edge points, is defined taking into account the geometric properties of the concentric circle. The input image is subsequently transformed to the feature energy distribution map (FEDM) by computing the feature energy of each pixel. The center of the concentric circle is thus located by detecting the maximum on the FEDM; meanwhile, the radii of the concentric circle are determined according to the feature energy function of the center pixel. Finally, with the use of a calibration template, the internal diameter and thickness of the bracelet are measured. The experimental results show that the proposed method can measure the true sizes of the bracelet accurately, with simplicity, directness, and robustness compared to existing methods.

  10. Installation and Testing Instructions for the Sandia Automatic Report Generator (ARG).

    Energy Technology Data Exchange (ETDEWEB)

    Clay, Robert L.

    2018-04-01

    In this report, we provide detailed and reproducible installation instructions for the Automatic Report Generator (ARG) on both Linux and macOS target platforms.

  11. Automatic sleep staging using empirical mode decomposition, discrete wavelet transform, time-domain, and nonlinear dynamics features of heart rate variability signals.

    Science.gov (United States)

    Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer

    2013-10-01

    The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter was achieved by using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-test were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. The ANOVA followed by a post hoc Bonferroni was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted feature means were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in separability of linear features between 5- and 0.5-min HRV segments, but separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV
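    The DWT branch of the feature extraction is straightforward to reproduce; the sketch below computes normalized sub-band energies from a synthetic HRV segment with PyWavelets, and the signal, wavelet choice, and decomposition level are illustrative assumptions.

    ```python
    import numpy as np
    import pywt

    # Synthetic 5-min HRV segment: RR intervals resampled at 4 Hz (placeholder).
    hrv = np.random.default_rng(0).normal(0.85, 0.05, 4 * 300)

    # Discrete wavelet decomposition into approximation + detail sub-bands.
    coeffs = pywt.wavedec(hrv, "db4", level=5)

    # Normalized energy per sub-band, usable as sleep-staging features.
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    features = energies / energies.sum()
    print(np.round(features, 3))
    ```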

  12. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
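    The relative-pose step can be prototyped with OpenCV, as sketched below; OpenCV's standard RANSAC is substituted for the Preemptive RANSAC variant, and the intrinsics and matched points are random placeholders rather than real SURF matches.

    ```python
    import cv2
    import numpy as np

    # Placeholder camera intrinsics and matched keypoints between two frames.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    rng = np.random.default_rng(0)
    pts1 = rng.random((50, 2)) * 640.0
    pts2 = pts1 + rng.random((50, 2)) * 2.0   # nearly-consistent motion

    # RANSAC essential-matrix estimate splits matches into inliers/outliers.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)

    # Recover the relative rotation R and translation direction t.
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    ```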

  13. Automatic code generation for distributed robotic systems

    International Nuclear Information System (INIS)

    Jones, J.P.

    1993-01-01

    Hetero Helix is a software environment which supports relatively large robotic system development projects. The environment supports a heterogeneous set of message-passing LAN-connected common-bus multiprocessors, but the programming model seen by software developers is a simple shared memory. The conceptual simplicity of shared memory makes it an extremely attractive programming model, especially in large projects where coordinating a large number of people can itself become a significant source of complexity. We present results from three system development efforts conducted at Oak Ridge National Laboratory over the past several years. Each of these efforts used automatic software generation to create 10 to 20 percent of the system.

  14. AUTOMATIC GLOBAL REGISTRATION BETWEEN AIRBORNE LIDAR DATA AND REMOTE SENSING IMAGE BASED ON STRAIGHT LINE FEATURES

    Directory of Open Access Journals (Sweden)

    Z. Q. Liu

    2018-04-01

    Full Text Available An automatic global registration approach for point clouds and remote sensing images based on straight-line features is proposed, which is insensitive to rotational and scale transformations. First, building ridge lines and contour lines in the point clouds are automatically detected as registration primitives by integrating region growing and topology identification. Second, the collinearity condition equation is selected as the registration transformation function, based on a rotation matrix described by a unit quaternion. The similarity measure is established according to the distance between corresponding straight-line features from the point clouds and the image in the same reference coordinate system. Finally, an iterative Hough transform is adopted to simultaneously estimate the parameters and obtain correspondences between registration primitives. Experimental results show that the proposed method is valid and that the spectral information is useful for the subsequent classification processing.

  15. Animated pose templates for modeling and detecting human actions.

    Science.gov (United States)

    Yao, Benjamin Z; Nie, Bruce X; Liu, Zicheng; Zhu, Song-Chun

    2014-03-01

    This paper presents animated pose templates (APTs) for detecting short-term, long-term, and contextual actions from cluttered scenes in videos. Each pose template consists of two components: 1) a shape template with deformable parts represented in an And-node whose appearances are represented by the Histogram of Oriented Gradient (HOG) features, and 2) a motion template specifying the motion of the parts by the Histogram of Optical-Flows (HOF) features. A shape template may have more than one motion template represented by an Or-node. Therefore, each action is defined as a mixture (Or-node) of pose templates in an And-Or tree structure. While this pose template is suitable for detecting short-term action snippets in two to five frames, we extend it in two ways: 1) For long-term actions, we animate the pose templates by adding temporal constraints in a Hidden Markov Model (HMM), and 2) for contextual actions, we treat contextual objects as additional parts of the pose templates and add constraints that encode spatial correlations between parts. To train the model, we manually annotate part locations on several keyframes of each video and cluster them into pose templates using EM. This leaves the unknown parameters for our learning algorithm in two groups: 1) latent variables for the unannotated frames including pose-IDs and part locations, 2) model parameters shared by all training samples such as weights for HOG and HOF features, canonical part locations of each pose, coefficients penalizing pose-transition and part-deformation. To learn these parameters, we introduce a semi-supervised structural SVM algorithm that iterates between two steps: 1) learning (updating) model parameters using labeled data by solving a structural SVM optimization, and 2) imputing missing variables (i.e., detecting actions on unlabeled frames) with parameters learned from the previous step and progressively accepting high-score frames as newly labeled examples. This algorithm belongs to a
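    The appearance half of a pose template (the HOG part) can be computed with scikit-image, as sketched below on a random placeholder patch; the motion half (HOF) would be built analogously from an optical-flow field.

    ```python
    import numpy as np
    from skimage.feature import hog

    # Placeholder grayscale patch standing in for a detected part window.
    patch = np.random.default_rng(0).random((128, 64))

    # Histogram of Oriented Gradients: the appearance descriptor of the
    # shape template's deformable parts.
    features = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm="L2-Hys")
    print(features.shape)

    # HOF features would come from an optical-flow field instead, e.g.
    # flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
    #                                     0.5, 3, 15, 3, 5, 1.2, 0)
    ```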

  16. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    Science.gov (United States)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation

  17. The ACR-program for automatic finite element model generation for part through cracks

    International Nuclear Information System (INIS)

    Leinonen, M.S.; Mikkola, T.P.J.

    1989-01-01

    The ACR-program (Automatic Finite Element Model Generation for Part Through Cracks) has been developed at the Technical Research Centre of Finland (VTT) for automatic finite element model generation for surface flaws using three dimensional solid elements. Circumferential or axial cracks can be generated on the inner or outer surface of a cylindrical or toroidal geometry. Several crack forms are available including the standard semi-elliptical surface crack. The program can be used in the development of automated systems for fracture mechanical analyses of structures. The tests for the accuracy of the FE-mesh have been started with two-dimensional models. The results indicate that the accuracy of the standard mesh is sufficient for practical analyses. Refinement of the standard mesh is needed in analyses with high load levels well over the limit load of the structure

  18. Automatic Generation of Validated Specific Epitope Sets

    Directory of Open Access Journals (Sweden)

    Sebastian Carrasco Pro

    2015-01-01

    Full Text Available Accurate measurement of B and T cell responses is a valuable tool to study autoimmunity, allergies, immunity to pathogens, and host-pathogen interactions and assist in the design and evaluation of T cell vaccines and immunotherapies. In this context, it is desirable to elucidate a method to select validated reference sets of epitopes to allow detection of T and B cells. However, the ever-growing information contained in the Immune Epitope Database (IEDB) and the differences in quality and subjects studied between epitope assays make this task complicated. In this study, we develop a novel method to automatically select reference epitope sets according to a categorization system employed by the IEDB. From the sets generated, three epitope sets (EBV, mycobacteria and dengue) were experimentally validated by detection of T cell reactivity ex vivo from human donors. Furthermore, a web application that will potentially be implemented in the IEDB was created to allow users the capacity to generate customized epitope sets.

  19. Automatic writer identification using connected-component contours and edge-based features of uppercase Western script.

    Science.gov (United States)

    Schomaker, Lambert; Bulacu, Marius

    2004-06-01

    In this paper, a new technique for offline writer identification is presented, using connected-component contours (COCOCOs or CO3s) in uppercase handwritten samples. In our model, the writer is considered to be characterized by a stochastic pattern generator, producing a family of connected components for the uppercase character set. Using a codebook of CO3s from an independent training set of 100 writers, the probability-density function (PDF) of CO3s was computed for an independent test set containing 150 unseen writers. Results revealed high sensitivity of the CO3 PDF for identifying individual writers on the basis of a single sentence of uppercase characters. The proposed automatic approach bridges the gap between image-statistics approaches on one end and manually measured allograph features of individual characters on the other end. Combining the CO3 PDF with an independent edge-based orientation and curvature PDF yielded very high correct identification rates.

  20. Automatic system for redistributing feedwater in a steam generator of a nuclear power plant

    International Nuclear Information System (INIS)

    Fuoto, J.S.; Crotzer, M.E.; Lang, G.E.

    1980-01-01

    A system is described for automatically redistributing feedwater in a steam generator after a burst in the secondary tubing. This applies to a given steam generator in a system having several steam generators partially sharing a common tube system, and employs a pressure control generating an electrical signal which is compared with given values [fr]

  1. POROUS MEMBRANE TEMPLATED SYNTHESIS OF POLYMER PILLARED LAYER

    Institute of Scientific and Technical Information of China (English)

    Zhong-wei Niu; Dan Li; Zhen-zhong Yang

    2003-01-01

    The anodic porous alumina membranes with a definite pore diameter and aspect ratio were used as templates to synthesize polymer pillared layer structures. The pillared polymer was produced in the template membrane pores, and the layer on the template surfaces. Rigid cured epoxy resin, polystyrene and soft hydrogel were chosen to confirm the methodology. The pillars were in the form of either tubes or fibers, which were controlled by the alumina membrane pore surface wettability. The structural features were confirmed by scanning electron microscopy results.

  2. BgCut: Automatic Ship Detection from UAV Images

    Directory of Open Access Journals (Sweden)

    Chao Xu

    2014-01-01

    foreground objects from sea automatically. First, a sea template library including images under different natural conditions is built to provide an initial template to the model. Then the background trimap is obtained by combining template matching with a region-growing algorithm. The output trimap initializes the GrabCut background without manual intervention, and the segmentation proceeds without iteration. The effectiveness of our proposed model is demonstrated by extensive experiments on a certain area of real UAV aerial images taken by an airborne Canon 5D Mark. The proposed algorithm is not only adaptive but also achieves good segmentation. Furthermore, the model in this paper can be well applied in the automated processing of industrial images for related research.

  3. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the competent technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost effort, a tool should be used which is developed independently from the development of the code generator. For this purpose, ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  4. Automatic gallbladder segmentation using combined 2D and 3D shape features to perform volumetric analysis in native and secretin-enhanced MRCP sequences.

    Science.gov (United States)

    Gloger, Oliver; Bülow, Robin; Tönnies, Klaus; Völzke, Henry

    2017-11-24

    We aimed to develop the first fully automated 3D gallbladder segmentation approach to perform volumetric analysis in volume data of magnetic resonance (MR) cholangiopancreatography (MRCP) sequences. Volumetric gallbladder analysis is performed for non-contrast-enhanced and secretin-enhanced MRCP sequences. Native and secretin-enhanced MRCP volume data were produced with a 1.5-T MR system. Images of coronal maximum intensity projections (MIP) are used to automatically compute 2D characteristic shape features of the gallbladder in the MIP images. A gallbladder shape space is generated to derive 3D gallbladder shape features, which are then combined with 2D gallbladder shape features in a support vector machine approach to detect gallbladder regions in MRCP volume data. A region-based level set approach is used for fine segmentation. Volumetric analysis is performed for both sequences to calculate gallbladder volume differences between both sequences. The approach presented achieves segmentation results with mean Dice coefficients of 0.917 in non-contrast-enhanced sequences and 0.904 in secretin-enhanced sequences. This is the first approach developed to detect and segment gallbladders in MR-based volume data automatically in both sequences. It can be used to perform gallbladder volume determination in epidemiological studies and to detect abnormal gallbladder volumes or shapes. The positive volume differences between both sequences may indicate the quantity of the pancreatobiliary reflux.

  5. Biometric templates selection and update using quality measures

    Science.gov (United States)

    Abboud, Ali J.; Jassim, Sabah A.

    2012-06-01

    To deal with severe variation in recording conditions, most biometric systems acquire multiple biometric samples at the enrolment stage for the same person, and then extract their individual biometric feature vectors and store them in the gallery in the form of biometric template(s) labelled with the person's identity. The number of samples/templates and the choice of the most appropriate templates influence the performance of the system. The desired biometric template selection technique must aim to control the run time and storage requirements while improving the recognition accuracy of the biometric system. This paper is devoted to elaborating on and discussing a new two-stage approach for biometric template selection and update. This approach uses quality-based clustering, followed by a special criterion for the selection of an ultimate set of biometric templates from the various clusters. The approach is developed to adaptively select a specific number of templates for each individual. The number of biometric templates depends mainly on the performance of each individual (i.e. the gallery size should be optimised to meet the needs of each target individual). The experiments were conducted on two face image databases, and their results demonstrate the effectiveness of the proposed quality-guided approach.

  6. Optimizing preventive maintenance with maintenance templates

    International Nuclear Information System (INIS)

    Dozier, I.J.

    1996-01-01

    Rising operating costs have caused maintenance professionals to rethink their strategy for preventive maintenance (PM) programs. Maintenance Templates are pre-engineered PM task recommendations for a component type based on the application of the component. Development of the maintenance template considers the dominant failure cause of the component and the type of preventive maintenance that can predict or prevent the failure from occurring. Maintenance template development also attempts to replace fixed-frequency tasks with condition-monitoring tasks such as vibration analysis or thermography. For those components that have fixed-frequency PM intervals, consideration is given to maintenance drivers such as criticality, environment and usage. This helps to maximize the PM frequency intervals and maximize the component availability. Maintenance Templates have been used at PECO Energy's Limerick Generating Station during the Reliability Centered Maintenance (RCM) process to optimize their PM program. This paper describes the development and uses of the maintenance templates.

  7. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation

    DEFF Research Database (Denmark)

    Mangado Lopez, Nerea; Ceresa, Mario; Duchateau, Nicolas

    2016-01-01

    Recent developments in computational modeling of cochlear implantation are promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns...

  8. Automatic caption generation for news images.

    Science.gov (United States)

    Feng, Yansong; Lapata, Mirella

    2013-04-01

    This paper is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Examples include video and image retrieval as well as the development of tools that aid visually impaired individuals to access pictorial information. Our approach leverages the vast resource of pictures available on the web and the fact that many of them are captioned and colocated with thematically related documents. Our model learns to create captions from a database of news articles, the pictures embedded in them, and their captions, and consists of two stages. Content selection identifies what the image and accompanying article are about, whereas surface realization determines how to verbalize the chosen content. We approximate content selection with a probabilistic image annotation model that suggests keywords for an image. The model postulates that images and their textual descriptions are generated by a shared set of latent variables (topics) and is trained on a weakly labeled dataset (which treats the captions and associated news articles as image labels). Inspired by recent work in summarization, we propose extractive and abstractive surface realization models. Experimental results show that it is viable to generate captions that are pertinent to the specific content of an image and its associated article, while permitting creativity in the description. Indeed, the output of our abstractive model compares favorably to handwritten captions and is often superior to extractive methods.

  9. Fully automatic GBM segmentation in the TCGA-GBM dataset: Prognosis and correlation with VASARI features.

    Science.gov (United States)

    Rios Velazquez, Emmanuel; Meier, Raphael; Dunn, William D; Alexander, Brian; Wiest, Roland; Bauer, Stefan; Gutman, David A; Reyes, Mauricio; Aerts, Hugo J W L

    2015-11-18

    Reproducible definition and quantification of imaging biomarkers is essential. We evaluated a fully automatic MR-based segmentation method by comparing it to sub-volumes manually defined by experienced radiologists in the TCGA-GBM dataset, in terms of sub-volume prognosis and association with VASARI features. MRI sets of 109 GBM patients were downloaded from The Cancer Imaging Archive. GBM sub-compartments were defined manually and automatically using the Brain Tumor Image Analysis (BraTumIA) software. Spearman's correlation was used to evaluate the agreement with VASARI features. Prognostic significance was assessed using the C-index. Auto-segmented sub-volumes showed moderate to high agreement with manually delineated volumes (r = 0.4-0.86). Also, the auto and manual volumes showed similar correlation with VASARI features (auto r = 0.35, 0.43 and 0.36; manual r = 0.17, 0.67, 0.41, for contrast-enhancing, necrosis and edema, respectively). The auto-segmented contrast-enhancing volume and post-contrast abnormal volume showed the highest AUC (0.66, CI: 0.55-0.77 and 0.65, CI: 0.54-0.76), comparable to manually defined volumes (0.64, CI: 0.53-0.75 and 0.63, CI: 0.52-0.74, respectively). BraTumIA and manual tumor sub-compartments showed comparable performance in terms of prognosis and correlation with VASARI features. This method can enable more reproducible definition and quantification of imaging-based biomarkers and has potential in high-throughput medical imaging research.
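
    The two evaluation measures named in this abstract are standard and easy to reproduce; the sketch below (with synthetic stand-in data, not the study's) computes Spearman agreement between automatic and manual volumes, and an AUC for a binary outcome, to which the C-index reduces in the uncensored binary case.

```python
# Illustrative agreement/prognosis check on synthetic volumes (cm^3).
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
manual_vol = rng.gamma(shape=2.0, scale=10.0, size=109)   # synthetic manual volumes
auto_vol = manual_vol * rng.normal(1.0, 0.15, size=109)   # noisy auto-segmentation
outcome = (rng.random(109) > manual_vol / manual_vol.max()).astype(int)  # synthetic

rho, p = spearmanr(auto_vol, manual_vol)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")

# AUC of the auto-segmented volume as a predictor of the binary outcome;
# larger tumor volume is taken to predict the worse outcome here.
print(f"AUC = {roc_auc_score(outcome, -auto_vol):.2f}")
```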

  10. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  11. Extending Supernova Spectral Templates for Next Generation Space Telescope Observations

    Science.gov (United States)

    Roberts-Pierel, Justin; Rodney, Steven A.

    2018-01-01

    Widely used empirical supernova (SN) Spectral Energy Distributions (SEDs) have not historically extended meaningfully into the ultraviolet (UV) or the infrared (IR). However, both are critical for current and future aspects of SN research, including UV spectra as probes of poorly understood SN Ia physical properties, and expanding our view of the universe with high-redshift James Webb Space Telescope (JWST) IR observations. We therefore present a comprehensive set of SN SED templates that have been extended into the UV and IR, as well as an open-source software package written in Python that enables a user to generate their own extrapolated SEDs. We have taken a sampling of core-collapse (CC) and Type Ia SNe to obtain a time-dependent distribution of UV and IR colors (U-B, r'-[JHK]), and the generated color curves are then used to extrapolate SEDs into the UV and IR. The SED extrapolation process is easily duplicated using a user's own data and parameters via our open-source Python package, SNSEDextend. This work develops the tools necessary to explore the JWST's ability to discriminate between CC and Type Ia SNe, as well as provides a repository of SN SEDs that will be invaluable to future JWST and WFIRST SN studies.

  12. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    Science.gov (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images.
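
    A minimal analogue of the RBM-plus-Random-Forest pairing can be assembled from scikit-learn parts; the sketch below is illustrative only, and the back-projection of forest importances through the RBM weights is a simplified stand-in for the paper's feature importance strategy.

```python
# Toy RBM feature learning followed by a Random Forest, with a crude
# "global interpretation" step that maps importances back to input space.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import BernoulliRBM
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0                               # BernoulliRBM expects inputs in [0, 1]

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
H = rbm.fit_transform(X)                   # unsupervised feature learning
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(H, y)

# Weight each RBM hidden unit by its forest importance and project back to
# input space to see which pixels drive the learned representation.
relevance = rf.feature_importances_ @ rbm.components_   # shape: (n_inputs,)
print("most relevant input pixels:", np.argsort(relevance)[-5:])
```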

  13. Software module for geometric product modeling and NC tool path generation

    International Nuclear Information System (INIS)

    Sidorenko, Sofija; Dukovski, Vladimir

    2003-01-01

    An intelligent CAD/CAM system named VIRTUAL MANUFACTURE has been created. It consists of four intelligent software modules: the module for virtual NC machine creation, the module for geometric product modeling and automatic NC path generation, the module for virtual NC machining, and the module for virtual product evaluation. In this paper the second intelligent software module is presented. This module enables feature-based product modeling, carried out via automatic saving of the designed product's geometric features as knowledge data. The knowledge data are afterwards applied for automatic generation of the NC program for machining the designed product. (Author)

  14. Feature Usage Explorer: Usage Monitoring and Visualization Tool in HTML5 Based Applications

    Directory of Open Access Journals (Sweden)

    Sarunas Marciuska

    2013-10-01

    Feature Usage Explorer is a JavaScript library which automatically detects features in HTML5 based applications and monitors their usage. The collected information can be visualized in a Feature Usage Diagram, which is automatically generated from an input JSON file. Currently, users of Feature Usage Explorer have to design their own tool to generate the JSON file from the collected usage information; this keeps the library from constraining the user's choice of preferred data storage. Feature Usage Explorer can be reused in any HTML5 based application where an understanding of how users interact with the system is required (i.e., user experience and usability studies, the human-computer interaction field, or the requirement prioritization area).

  15. Automatic item generation implemented for measuring artistic judgment aptitude.

    Science.gov (United States)

    Bezruczko, Nikolaus

    2014-01-01

    Automatic item generation (AIG) is a broad class of methods that are being developed to address psychometric issues arising from internet and computer-based testing. In general, issues emphasize efficiency, validity, and diagnostic usefulness of large scale mental testing. Rapid prominence of AIG methods and their implicit perspective on mental testing is bringing painful scrutiny to many sacred psychometric assumptions. This report reviews basic AIG ideas, then presents conceptual foundations, image model development, and operational application to artistic judgment aptitude testing.

  16. Regulation and Measurement of the Heat Generated by Automatic Tooth Preparation in a Confined Space.

    Science.gov (United States)

    Yuan, Fusong; Zheng, Jianqiao; Sun, Yuchun; Wang, Yong; Lyu, Peijun

    2017-06-01

    The aim of this study was to assess and regulate heat generation in the dental pulp cavity and the circumambient temperature around a tooth during laser ablation with a femtosecond laser in a confined space. The automatic tooth preparation technique is an innovation over traditional oral clinical technology: a robot controls an ultrashort-pulse laser to automatically complete three-dimensional tooth preparation in a confined space, and temperature control is the main measure for protecting the tooth nerve. Ten tooth specimens were irradiated with a femtosecond laser controlled by a robot in a confined space to generate 10 tooth preparations. During the process, four thermocouple sensors were used to record the pulp cavity and circumambient environment temperatures with or without air cooling. A statistical analysis of the temperatures was performed between the conditions with and without air cooling (p < 0.05); with air cooling, the heat generated in the pulp cavity was lower than the threshold for dental pulp damage. These results indicate that femtosecond laser ablation with air cooling might be an appropriate method for automatic tooth preparation.

  17. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composite (LFRC) with high fiber volume fraction with random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through random...

  18. Synthesis of Porous Carbon Monoliths Using Hard Templates.

    Science.gov (United States)

    Klepel, Olaf; Danneberg, Nina; Dräger, Matti; Erlitz, Marcel; Taubert, Michael

    2016-03-21

    The preparation of porous carbon monoliths with a defined shape via template-assisted routes is reported. Monoliths made from porous concrete and zeolite were each used as the template. The porous concrete-derived carbon monoliths exhibited high gravimetric specific surface areas up to 2000 m²·g⁻¹. The pore system comprised macro-, meso-, and micropores. These pores were hierarchically arranged. The pore system was created by the complex interplay of the actions of both the template and the activating agent as well. On the other hand, zeolite-made template shapes allowed for the preparation of microporous carbon monoliths with a high volumetric specific surface area. This feature could be beneficial if carbon monoliths must be integrated into technical systems under space-limited conditions.

  19. A deformable-model approach to semi-automatic segmentation of CT images demonstrated by application to the spinal canal

    International Nuclear Information System (INIS)

    Burnett, Stuart S.C.; Starkschall, George; Stevens, Craig W.; Liao Zhongxing

    2004-01-01

    Because of the importance of accurately defining the target in radiation treatment planning, we have developed a deformable-template algorithm for the semi-automatic delineation of normal tissue structures on computed tomography (CT) images. We illustrate the method by applying it to the spinal canal. Segmentation is performed in three steps: (a) partial delineation of the anatomic structure is obtained by wavelet-based edge detection; (b) a deformable-model template is fitted to the edge set by chamfer matching; and (c) the template is relaxed away from its original shape into its final position. Appropriately chosen ranges for the model parameters limit the deformations of the template, accounting for interpatient variability. Our approach differs from those used in other deformable models in that it does not inherently require the modeling of forces. Instead, the spinal canal was modeled using Fourier descriptors derived from four sets of manually drawn contours. Segmentation was carried out, without manual intervention, on five CT data sets and the algorithm's performance was judged subjectively by two radiation oncologists. Two assessments were considered: in the first, segmentation on a random selection of 100 axial CT images was compared with the corresponding contours drawn manually by one of six dosimetrists, also chosen randomly; in the second assessment, the segmentation of each image in the five evaluable CT sets (a total of 557 axial images) was rated as either successful, unsuccessful, or requiring further editing. Contours generated by the algorithm were more likely than manually drawn contours to be considered acceptable by the oncologists. The mean proportions of acceptable contours were 93% (automatic) and 69% (manual). Automatic delineation of the spinal canal was deemed to be successful on 91% of the images, unsuccessful on 2% of the images, and requiring further editing on 7% of the images. Our deformable template algorithm thus gives a robust
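
    Step (b), chamfer matching, scores a candidate template pose by the distance of its points to the nearest detected edge; a minimal sketch with synthetic data follows (illustrative, not the authors' implementation).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Synthetic edge map: one horizontal edge fragment at row 20, cols 10..49.
edges = np.zeros((64, 64), dtype=bool)
edges[20, 10:50] = True

# Distance transform of the complement gives, at each pixel, the distance
# to the nearest edge pixel.
dist = distance_transform_edt(~edges)

def chamfer_score(template_pts, offset):
    """Mean edge distance of template points shifted by `offset` (row, col)."""
    pts = np.clip(template_pts + np.asarray(offset), 0, 63)
    return dist[pts[:, 0], pts[:, 1]].mean()

# Template: a horizontal segment of 40 points, slid over candidate positions.
template = np.stack([np.zeros(40, dtype=int), np.arange(40)], axis=1)
candidates = [(row, 10) for row in range(15, 26)]
best = min(candidates, key=lambda o: chamfer_score(template, o))
print("best offset:", best)   # (20, 10): the template lands on the edge
```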

  20. A System for Automatically Generating Scheduling Heuristics

    Science.gov (United States)

    Morris, Robert

    1996-01-01

    The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm for automatically generating heuristics for selecting a schedule. The particular application selected for applying this method solves the problem of scheduling telescope observations, and is called the Associate Principal Astronomer (APA). The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints expressed as an objective function established by an astronomer-user.
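
    The greedy heuristic search described here can be sketched in a few lines; the requests, slots and scoring function below are hypothetical stand-ins for the APA's observation programs, hard constraints and objective function.

```python
# Toy greedy scheduler: for each slot, pick the feasible request that
# maximizes the objective; hard constraints prune, soft constraints score.
def greedy_schedule(requests, slots, score):
    schedule = {}
    for slot in slots:
        feasible = [r for r in requests
                    if r["id"] not in schedule.values()
                    and slot in r["allowed_slots"]]              # hard constraints
        if feasible:
            best = max(feasible, key=lambda r: score(r, slot))   # soft constraints
            schedule[slot] = best["id"]
    return schedule

requests = [
    {"id": "M31",  "allowed_slots": {0, 1}, "priority": 3},
    {"id": "Vega", "allowed_slots": {1, 2}, "priority": 5},
    {"id": "M13",  "allowed_slots": {0},    "priority": 1},
]
print(greedy_schedule(requests, slots=[0, 1, 2],
                      score=lambda r, s: r["priority"]))
```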

  1. Automatic detection and classification of breast tumors in ultrasonic images using texture and morphological features.

    Science.gov (United States)

    Su, Yanni; Wang, Yuanyuan; Jiao, Jing; Guo, Yi

    2011-01-01

    Due to the severe presence of speckle noise, poor image contrast and irregular lesion shape, it is challenging to build a fully automatic detection and classification system for breast ultrasonic images. In this paper, a novel and effective computer-aided method, including generation of a region of interest (ROI), segmentation and classification of breast tumor, is proposed without any manual intervention. By incorporating local features of texture and position, a ROI is firstly detected using a self-organizing map neural network. Then a modified Normalized Cut approach considering the weighted neighborhood gray values is proposed to partition the ROI into clusters and get the initial boundary. In addition, a regional-fitting active contour model is used to adjust the few inaccurate initial boundaries for the final segmentation. Finally, three texture and five morphologic features are extracted from each breast tumor, whereby highly efficient Affinity Propagation clustering is used to perform the benign/malignant classification on an existing database without any training process. The proposed system is validated on 132 cases (67 benign and 65 malignant) with its performance compared to traditional methods such as level set segmentation, artificial neural network classifiers, and so forth. Experimental results show that the proposed system, which needs no training procedure or manual interference, performs best in detection and classification of ultrasonic breast tumors, while having the lowest computational complexity.
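
    The final, training-free classification stage relies on Affinity Propagation; the sketch below (with random stand-in feature vectors, not the paper's texture/morphology features) shows the clustering call on which it rests.

```python
# Cluster tumor feature vectors with Affinity Propagation (no training phase);
# the eight features here are synthetic stand-ins for the paper's feature set.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)
benign = rng.normal(0.0, 1.0, size=(67, 8))
malignant = rng.normal(2.5, 1.0, size=(65, 8))
X = np.vstack([benign, malignant])

labels = AffinityPropagation(random_state=0).fit_predict(X)
print("clusters found:", len(set(labels)))
# In the paper's setting, each cluster would then be mapped to benign or
# malignant using the known cases it contains (no classifier training needed).
```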

  2. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for badly textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performance of the SIFT operator has been compared with that of the feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed in order to improve the performance of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
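
    For reference, baseline SIFT tie-point extraction can be reproduced with OpenCV and Lowe's ratio test, as in this sketch (the image paths are placeholders, and this is not the authors' auto-adaptive A2 SIFT variant):

```python
# Standard SIFT keypoint extraction and ratio-test matching with OpenCV.
import cv2

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # hypothetical image pair
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test rejects ambiguous matches; survivors can serve as
# tie points for image orientation or approximate DSM generation.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} tie-point candidates")
```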

  3. Block Copolymer-Templated Approach to Nanopatterned Metal-Organic Framework Films.

    Science.gov (United States)

    Zhou, Meimei; Wu, Yi-Nan; Wu, Baozhen; Yin, Xianpeng; Gao, Ning; Li, Fengting; Li, Guangtao

    2017-08-17

    The fabrication of patterned metal-organic framework (MOF) films with precisely controlled nanoscale resolution has been a fundamental challenge in nanoscience and nanotechnology. In this study, nanopatterned MOF films were fabricated using a layer-by-layer (LBL) growth method on functional templates (such as a bicontinuous nanoporous membrane or a structure with highly long-range-ordered nanoscopic channels parallel to the underlying substrate) generated by the microphase separation of polystyrene-b-poly(2-vinylpyridine) (PS-b-P2VP) block copolymers. HKUST-1 can be directly deposited on the templates without any chemical modification because the pyridine groups in P2VP interact with metal ions via metal-BCP complexes. As a result, nanopatterned HKUST-1 films with feature sizes below 50 nm and controllable thicknesses can be fabricated by controlling the number of LBL growth cycles. The proposed fabrication method further extends the applications of MOFs in various fields. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Automatic Generation of User Interface Layouts for Alternative Screen Orientations

    OpenAIRE

    Zeidler, Clemens; Weber, Gerald; Stuerzlinger, Wolfgang; Lutteroth, Christof

    2017-01-01

    Creating multiple layout alternatives for graphical user interfaces to accommodate different screen orientations for mobile devices is labor intensive. Here, we investigate how such layout alternatives can be generated automatically from an initial layout. Providing good layout alternatives can inspire developers in their design work and support them to create adaptive layouts. We performed an analysis of layout alternat...

  5. Projector : automatic contig mapping for gap closure purposes

    NARCIS (Netherlands)

    van Hijum, SAFT; Zomer, AL; Kuipers, OP; Kok, J

    2003-01-01

    Projector was designed for automatic positioning of contigs from an unfinished prokaryotic genome onto a template genome of a closely related strain or species. Projector mapped 84 contigs of Lactococcus lactis MG1363 (corresponding to 81% of the assembly nucleotides) against the genome of L.lactis

  6. The ear, the eye, earthquakes and feature selection: listening to automatically generated seismic bulletins for clues as to the differences between true and false events.

    Science.gov (United States)

    Kuzma, H. A.; Arehart, E.; Louie, J. N.; Witzleben, J. L.

    2012-04-01

    Listening to the waveforms generated by earthquakes is not new. The recordings of seismometers have been sped up and played to generations of introductory seismology students, published on educational websites and even included in the occasional symphony. The modern twist on earthquakes as music is an interest in using state-of-the-art computer algorithms for seismic data processing and evaluation. Algorithms such as Hidden Markov Models, Bayesian Network models and Support Vector Machines have been highly developed for applications in speech recognition, and might also be adapted for automatic seismic data analysis. Over the last three years, the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has supported an effort to apply computer learning and data mining algorithms to IDC data processing, particularly to the problem of weeding through automatically generated event bulletins to find events which are non-physical and would otherwise have to be eliminated by the hand of highly trained human analysts. Analysts are able to evaluate events, distinguish between phases, pick new phases and build new events by looking at waveforms displayed on a computer screen. Human ears, however, are much better suited to waveform processing than are the eyes. Our hypothesis is that combining an auditory representation of seismic events with visual waveforms would reduce the time it takes to train an analyst and the time they need to evaluate an event. Since it takes almost two years for a person of extraordinary diligence to become a professional analyst and IDC contracts are limited to seven years by Treaty, faster training would significantly improve IDC operations. Furthermore, once a person learns to distinguish between true and false events by ear, various forms of audio compression can be applied to the data. The compression scheme which yields the smallest data set in which relevant signals can still be heard is likely an...

  7. Patch layout generation by detecting feature networks

    KAUST Repository

    Cao, Yuanhao

    2015-02-01

    The patch layout of 3D surfaces reveals high-level geometric and topological structure. In this paper, we study patch layout computation by detecting and enclosing feature loops on surfaces. We present a hybrid framework which combines several key ingredients, including feature detection, feature filtering, feature curve extension, patch subdivision and boundary smoothing. Our framework is able to compute patch layouts through concave features, as previous approaches do, but is also able to generate good layouts through smooth regions. We demonstrate the effectiveness of our framework by comparing it with state-of-the-art methods.

  8. Automatic Generation of Supervisory Control System Software Using Graph Composition

    Science.gov (United States)

    Nakata, Hideo; Sano, Tatsuro; Kojima, Taizo; Seo, Kazuo; Uchida, Tomoyuki; Nakamura, Yasuaki

    This paper describes the automatic generation of system descriptions for SCADA (Supervisory Control And Data Acquisition) systems. The proposed method produces various types of data and programs for SCADA systems from equipment definitions using conversion rules. First, the method builds directed graphs, which represent connections between the equipment, from the equipment definitions. System descriptions are then generated using the conversion rules, by analyzing these directed graphs and finding the groups of equipment that involve similar operations. The method can make the conversion rules multi-level by using graph composition, which reduces the number of rules, so the developer can define and manage these rules efficiently.

  9. Generating cancelable fingerprint templates.

    Science.gov (United States)

    Ratha, Nalini K; Chikkerur, Sharat; Connell, Jonathan H; Bolle, Ruud M

    2007-04-01

    Biometrics-based authentication systems offer obvious usability advantages over traditional password and token-based authentication schemes. However, biometrics raises several privacy concerns. A biometric is permanently associated with a user and cannot be changed. Hence, if a biometric identifier is compromised, it is lost forever and possibly for every application where the biometric is used. Moreover, if the same biometric is used in multiple applications, a user can potentially be tracked from one application to the next by cross-matching biometric databases. In this paper, we demonstrate several methods to generate multiple cancelable identifiers from fingerprint images to overcome these problems. In essence, a user can be given as many biometric identifiers as needed by issuing a new transformation "key." The identifiers can be cancelled and replaced when compromised. We empirically compare the performance of several algorithms such as Cartesian, polar, and surface folding transformations of the minutiae positions. It is demonstrated through multiple experiments that we can achieve revocability and prevent cross-matching of biometric databases. It is also shown that the transforms are noninvertible by demonstrating that it is computationally as hard to recover the original biometric identifier from a transformed version as by randomly guessing. Based on these empirical results and a theoretical analysis we conclude that feature-level cancelable biometric construction is practicable in large biometric deployments.
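
    The flavor of the polar minutiae transformation can be conveyed with a toy sketch: the polar plane around a registration center is cut into sectors, and a user-specific key permutes them. This is an illustrative simplification of the paper's transforms, with hypothetical parameters.

```python
# Key-driven polar sector remapping of minutiae positions; reissuing the key
# revokes the old template without touching the underlying fingerprint.
import math

def polar_transform(minutiae, key, sectors=8, center=(250.0, 250.0)):
    """minutiae: list of (x, y); key: a permutation of range(sectors)."""
    width = 2 * math.pi / sectors
    out = []
    for x, y in minutiae:
        dx, dy = x - center[0], y - center[1]
        r = math.hypot(dx, dy)
        theta = math.atan2(dy, dx) % (2 * math.pi)
        s = int(theta // width)                      # original sector index
        theta2 = theta + (key[s] - s) * width        # remapped by the key
        out.append((center[0] + r * math.cos(theta2),
                    center[1] + r * math.sin(theta2)))
    return out

key = [3, 0, 6, 1, 7, 2, 5, 4]        # issued per user; replace to revoke
print(polar_transform([(300.0, 260.0), (200.0, 310.0)], key))
```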

  10. Some problems raised by the operation of large nuclear turbo-generator sets. Automatic control system for steam turbo-generator units

    International Nuclear Information System (INIS)

    Cecconi, F.

    1976-01-01

    The design of an appropriate automatic system was found to be useful for improving the control of large turbo-generator units, so as to provide easy and efficient control and monitoring. The experience of the manufacturer of these turbo-generator units allowed a system well suited for this function to be designed [fr]

  11. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    Science.gov (United States)

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates (mental representations of the objects they are attempting to locate) to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  12. A usefulness and evaluation of setting region of interest automatically, using NEURO FLEXER ver. 1.0

    International Nuclear Information System (INIS)

    Mizuno, Takashi; Takahashi, Masaaki

    2010-01-01

    The software NEURO FLEXER Ver. 1.0 (FLEX), Brain ROI (BROI) and 3D Stereotactic ROI Template ver. 3.1 (3DSRT) were compared with the authors' manual method (MM) of ROI setting for estimating cerebral blood flow (CBF) in single photon emission computed tomography (SPECT), in order to identify the most efficient automatic setting. The subjects analyzed were 123 IMP SPECT autoradiography (ARG) images of 52 patients (M/F 24/28, average age 69 y): group I comprised cerebral infarction and ischemia (INF, 26 cases), Alzheimer disease and other dementia (AD, 14), post-operative subarachnoid hemorrhage (10) and 2 other cases; group II comprised 10 cases each of AD with, and INF without, atrophy on MRI. The machine was a Toshiba SPECT GCA9300A equipped with the high-resolution fan beam collimator for low energy and the processor GMS5500/PI. ARG acquisition was conducted in 8 rotations/20 min with 3.44 mm thick slices. ROIs were set by MM with GMS5500/PI, or automatically with FLEX (Fuji Mediphysics), BROI and 3DSRT (both Fuji Film RI Pharma). Anatomical standardization was done with iSSP5 (NEUROSTAT STEREO) and eZIS (SPM2 spatial normalize). Five experts shared the manual ROI setting and one finally checked all results. In group I (all lesions) and group II (according to disease type), MM was compared with each automatic ROI setting for rCBF (mL/min/100 g) by regression. The automatic ROI setting with the FLEX software was suggested to be closest to the authors' MM for estimating CBF in SPECT, as its standardization and volume of interest (VOI) template application were found to be flexible even for reduced blood flow and atrophic lesions. For the automation, however, the user should understand the features of the standardization and know how to handle each situation. (T.T.)

  13. Creation of structured documentation templates using Natural Language Processing techniques.

    Science.gov (United States)

    Kashyap, Vipul; Turchin, Alexander; Morin, Laura; Chang, Frank; Li, Qi; Hongsermeier, Tonya

    2006-01-01

    Structured Clinical Documentation is a fundamental component of the healthcare enterprise, linking both clinical (e.g., electronic health record, clinical decision support) and administrative functions (e.g., evaluation and management coding, billing). One of the challenges in creating good quality documentation templates has been the inability to address specialized clinical disciplines and adapt to local clinical practices. A one-size-fits-all approach leads to poor adoption and inefficiencies in the documentation process. On the other hand, the cost associated with manual generation of documentation templates is significant. Consequently there is a need for at least partial automation of the template generation process. We propose an approach and methodology for the creation of structured documentation templates for diabetes using Natural Language Processing (NLP).

  14. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier

    Science.gov (United States)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research proposes a fully automatic algorithm for distinguishing three-dimensional (3-D) optical coherence tomography (OCT) scans of patients suffering from abnormal macula from those of normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment processes to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets were used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured with a Topcon device. The second, publicly available set consists of 45 subjects, with 15 patients in each of the age-related macular degeneration, DME, and normal classes, from a Heidelberg device. With the application of the algorithm on overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset 1 as a two-class classification problem and 98.67% on dataset 2 as a three-class classification task.

  15. Biometric template revocation

    Science.gov (United States)

    Arndt, Craig M.

    2004-08-01

    Biometrics are a powerful technology for identifying humans both locally and at a distance. In order to perform identification or verification, biometric systems capture an image of some biometric of a user or subject. The image is then converted mathematically to a representation of the person called a template. Since every human in the world is different, each human will have different biometric images (different fingerprints, faces, etc.); this is what makes biometrics useful for identification. However, unlike a credit card number or a password, which can be given to a person and later revoked if it is compromised, a biometric is with the person for life. The problem then is to develop biometric templates which can be easily revoked and reissued, which are unique to the user, and which can be easily used for identification and verification. In this paper we develop and present a method to generate a set of templates which are fully unique to the individual and also revocable. By using basis-set compression algorithms in an n-dimensional orthogonal space, we can represent a given biometric image in an infinite number of equally valued and unique ways. The verification and biometric matching system would be presented with a given template and a revocation code. The code then represents where in the sequence of n-dimensional vectors to start the recognition.
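
    One hedged reading of this idea is a template expressed in a randomly chosen orthonormal basis: the basis (and hence the template) can be reissued at will, while distances, and thus matching, are preserved. The sketch below is an interpretation under that assumption, not the paper's algorithm.

```python
# Revocable template as coefficients of the biometric vector in a random
# orthonormal basis; a new seed yields a new, equally valid template.
import numpy as np

def issue_template(biometric_vec, seed):
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(biometric_vec.size, biometric_vec.size))
    Q, _ = np.linalg.qr(A)                 # random orthonormal basis
    return Q, Q @ biometric_vec            # store (basis, coefficients)

def verify(template, probe_vec, threshold=0.5):
    Q, coeffs = template
    # Orthonormal transforms preserve distances, so matching still works.
    return np.linalg.norm(coeffs - Q @ probe_vec) < threshold

x = np.random.default_rng(7).normal(size=16)   # enrolled biometric vector
tpl = issue_template(x, seed=42)               # revoke by reissuing with a new seed
print(verify(tpl, x + 0.01))                   # slightly noisy genuine probe -> True
```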

  16. Smart Contract Templates: foundations, design landscape and research directions

    OpenAIRE

    Clack, Christopher D.; Bakshi, Vikram A.; Braine, Lee

    2016-01-01

    In this position paper, we consider some foundational topics regarding smart contracts (such as terminology, automation, enforceability, and semantics) and define a smart contract as an automatable and enforceable agreement. We explore a simple semantic framework for smart contracts, covering both operational and non-operational aspects, and describe templates and agreements for legally-enforceable smart contracts, based on legal documents. Building upon the Ricardian Contract, we identify op...

  17. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    Science.gov (United States)

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
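
    The template-based decoder's similarity measure, the length of the longest common subsequence (LCS) of two spike sequences, is a classic dynamic program; a minimal sketch with synthetic spike-label sequences follows.

```python
# Classic O(len(a) * len(b)) dynamic program for the LCS length.
def lcs_length(a, b):
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if x == y
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[-1][-1]

# Spike sequences as lists of neuron ids, in firing order (synthetic data).
template = [3, 7, 1, 4, 9, 2]
probe    = [3, 1, 4, 8, 9, 2]
print(lcs_length(template, probe))   # 5 -> high similarity to this template
```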

  18. Automatic real-time detection of endoscopic procedures using temporal features.

    Science.gov (United States)

    Stanek, Sean R; Tavanapong, Wallapak; Wong, Johnny; Oh, Jung Hwan; de Groen, Piet C

    2012-11-01

    Endoscopy is used for inspection of the inner surface of organs such as the colon. During endoscopic inspection of the colon, or colonoscopy, a tiny video camera generates a video signal, which is displayed on a monitor for interpretation in real-time by physicians. In practice, these images are not typically captured, which may be attributed to the lack of fully automated tools for capturing them, analyzing their important contents, and quickly and easily retrieving those contents. This paper presents the description and evaluation results of our novel software that uses new metrics based on image color and motion over time to automatically record all images of an individual endoscopic procedure into a single digitized video file. The software automatically discards out-patient video frames between different endoscopic procedures. We validated our software system on 2464 h of live video (over 265 million frames) from endoscopy units where colonoscopy and upper endoscopy were performed. Our previous classification method achieved a frame-based sensitivity of 100.00%, but only a specificity of 89.22%. Our new method achieved a frame-based sensitivity and specificity of 99.90% and 99.97%, a significant improvement. Our system is robust for day-to-day use in medical practice.

  19. Synthesis of Inorganic Nanocomposites by Selective Introduction of Metal Complexes into a Self-Assembled Block Copolymer Template

    Directory of Open Access Journals (Sweden)

    Hiroaki Wakayama

    2015-01-01

    Inorganic nanocomposites have characteristic structures that feature expanded interfaces, quantum effects, and resistance to crack propagation. These structures are promising for the improvement of many materials including thermoelectric materials, photocatalysts, and structural materials. Precise control of the inorganic nanocomposites' morphology, size, and chemical composition is very important for these applications. Here, we present a novel fabrication method to control the structures of inorganic nanocomposites by means of a self-assembled block copolymer template. Different metal complexes were selectively introduced into specific polymer blocks of the block copolymer, and subsequent removal of the block copolymer template by oxygen plasma treatment produced hexagonally packed porous structures. In contrast, calcination removal of the block copolymer template yielded nanocomposites consisting of metallic spheres in a matrix of a metal oxide. These results demonstrate that different nanostructures can be created by selective use of processes to remove the block copolymer templates. The simple process of first mixing block copolymers and magnetic nanomaterial precursors and then removing the block copolymer template enables structural control of magnetic nanomaterials, which will facilitate their applicability in patterned media, including next-generation perpendicular magnetic recording media.

  20. Extraction: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Georgel, B.; Zorgati, R.

    1994-01-01

    Improving the speed and quality of eddy current non-destructive testing of steam generator tubes leads us to automate all processes that contribute to diagnosis. This paper describes how we use signal processing, pattern recognition and artificial intelligence to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs

  1. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    International Nuclear Information System (INIS)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang; Liou, Shu-Cheng; Nath, Ravinder; Liu, Wu

    2013-01-01

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching needs to match object shape that changes significantly for different implantation and projection angles. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computation load, because their methods all require an exhaustive search in the region of interest. The authors solve this problem by synergetic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers utilizing discriminant analysis for initialization and use mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve the accuracy, followed by ultrafast sequential tracking after the initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered as the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The ...

  3. Photometric redshifts for the next generation of deep radio continuum surveys - I. Template fitting

    Science.gov (United States)

    Duncan, Kenneth J.; Brown, Michael J. I.; Williams, Wendy L.; Best, Philip N.; Buat, Veronique; Burgarella, Denis; Jarvis, Matt J.; Małek, Katarzyna; Oliver, S. J.; Röttgering, Huub J. A.; Smith, Daniel J. B.

    2018-01-01

    We present a study of photometric redshift performance for galaxies and active galactic nuclei detected in deep radio continuum surveys. Using two multiwavelength data sets, over the NOAO Deep Wide Field Survey Boötes and COSMOS fields, we assess photometric redshift (photo-z) performance for a sample of ∼4500 radio continuum sources with spectroscopic redshifts relative to those of ∼63 000 non-radio-detected sources in the same fields. We investigate the performance of three photometric redshift template sets as a function of redshift, radio luminosity and infrared/X-ray properties. We find that no single template library is able to provide the best performance across all subsets of the radio-detected population, with variation in the optimum template set both between subsets and between fields. Through a hierarchical Bayesian combination of the photo-z estimates from all three template sets, we are able to produce a consensus photo-z estimate that equals or improves upon the performance of any individual template set.
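
    The template-fitting machinery underlying such photo-z estimates can be sketched compactly: redshift a template SED over a grid, synthesize broadband fluxes, and minimize chi-squared. Everything below (template shape, filters, photometry) is a synthetic toy, not the paper's template libraries.

```python
import numpy as np

wave = np.linspace(1000.0, 30000.0, 3000)            # observed-frame Angstroms
template = np.exp(-((wave - 4000.0) / 3000.0) ** 2)  # toy rest-frame SED shape
filters = [(3500, 1000), (4800, 1300), (6200, 1400), (7600, 1500)]  # (center, width)

def synth_flux(z):
    # Flux at observed wavelength L equals the template at rest wavelength L/(1+z).
    obs = np.interp(wave, wave * (1 + z), template, left=0.0, right=0.0)
    return np.array([obs[(wave > c - w / 2) & (wave < c + w / 2)].mean()
                     for c, w in filters])

z_true = 0.8
f_obs = synth_flux(z_true) * (1 + np.random.default_rng(3).normal(0, 0.02, 4))
sigma = 0.05 * f_obs + 1e-6

zgrid = np.linspace(0.0, 3.0, 301)
chi2 = []
for z in zgrid:
    m = synth_flux(z)
    a = np.sum(m * f_obs / sigma**2) / np.sum(m**2 / sigma**2)  # best-fit scale
    chi2.append(np.sum(((f_obs - a * m) / sigma) ** 2))
print(f"photo-z = {zgrid[np.argmin(chi2)]:.2f} (true z = {z_true})")
```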

  4. System and Component Software Specification, Run-time Verification and Automatic Test Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The following background technology is described in Part 5: Run-time Verification (RV), White Box Automatic Test Generation (WBATG). Part 5 also describes how WBATG...

  5. Cross-cultural assessment of automatically generated multimodal referring expressions in a virtual world

    NARCIS (Netherlands)

    van der Sluis, Ielka; Luz, Saturnino; Breitfuss, Werner; Ishizuka, Mitsuru; Prendinger, Helmut

    This paper presents an assessment of automatically generated multimodal referring expressions as produced by embodied conversational agents in a virtual world. The algorithm used for this purpose employs general principles of human motor control and cooperativity in dialogues that can be

  6. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...

  8. Performance Evaluation of Fusing Protected Fingerprint Minutiae Templates on the Decision Level

    Science.gov (United States)

    Yang, Bian; Busch, Christoph; de Groot, Koen; Xu, Haiyun; Veldhuis, Raymond N. J.

    2012-01-01

    In a biometric authentication system using protected templates, a pseudonymous identifier is the part of a protected template that can be directly compared. Each compared pair of pseudonymous identifiers results in a decision testing whether both identifiers are derived from the same biometric characteristic. Compared to an unprotected system, most existing biometric template protection methods cause, to a certain extent, degradation in biometric performance. Fusion is therefore a promising way to enhance the biometric performance in template-protected biometric systems. Compared to feature-level fusion and score-level fusion, decision-level fusion has not only the least fusion complexity, but also the maximum interoperability across different biometric features, template protection and recognition algorithms, template formats, and comparison score rules. However, performance improvement via decision-level fusion is not obvious. It is influenced by both the dependency and the performance gap among the conducted tests for fusion. We investigate in this paper several fusion scenarios (multi-sample, multi-instance, multi-sensor, multi-algorithm, and their combinations) on the binary decision level, and evaluate their biometric performance and fusion efficiency on a multi-sensor fingerprint database with 71,994 samples. PMID:22778583

  9. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    Science.gov (United States)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communication and the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features, and comparable to studies using whole-face information, only slightly lower by ~2.5% compared to the best whole-face system while using only ~1/3 of the facial region.
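
    The SFS refinement step is available off the shelf in scikit-learn (>= 0.24); the sketch below uses synthetic features as stand-ins for the geometric eye/eyebrow measurements, with an SVM as in the paper.

```python
# Sequential forward selection of 8 of 30 synthetic geometric features,
# followed by a 5-class SVM, mirroring the pipeline described above.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

sfs = SequentialFeatureSelector(SVC(kernel="linear"), n_features_to_select=8,
                                direction="forward", cv=3)
sfs.fit(X, y)
svm = SVC(kernel="linear").fit(sfs.transform(X), y)   # final 5-class classifier
print("selected feature indices:", sfs.get_support(indices=True))
```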

  10. MAS Based Distributed Automatic Generation Control for Cyber-Physical Microgrid System

    Institute of Scientific and Technical Information of China (English)

    Zhongwen Li; Chuanzhi Zang; Peng Zeng; Haibin Yu; Hepeng Li

    2016-01-01

    The microgrid is a typical cyber-physical microgrid system (CPMS). The physical unconventional distributed generators (DGs) are intermittent and inverter-interfaced, which makes them very different to control. The cyber components, such as the embedded computers and communication network, are coupled with the DGs to process and transmit the necessary information for the controllers. In order to ensure system-wide observability, controllability and stabilization for the microgrid, the cyber and physical components need to be integrated. For the physical component of the CPMS, the droop-control method is popular as it can be applied in both modes of operation to improve the grid transient performance. Traditional droop control methods have the drawback of an inherent trade-off between power sharing and voltage and frequency regulation. In this paper, global information (such as the average voltage and the output active power of the microgrid) is acquired distributedly based on a multi-agent system (MAS). Based on the global information from the cyber components of the CPMS, automatic generation control (AGC) and automatic voltage control (AVC) are proposed to deal with the drawback of traditional droop control. Simulation studies in PSCAD demonstrate the effectiveness of the proposed control methods.

  12. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    Science.gov (United States)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of power supply. Power grids require each power generation unit to deliver satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method of calculating these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
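
    The abstract leaves the identified model structure unspecified; a common minimal choice for a unit's response to a step in the load command is a first-order lag, whose gain and time constant can be fitted by least squares. The sketch below illustrates this flavor of system identification on synthetic data; all names and numbers are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def first_order_step(t, K, tau):
        # Response of a first-order lag K / (tau*s + 1) to a unit step command.
        return K * (1.0 - np.exp(-t / tau))

    # Synthetic AGC response: true gain 50 MW, time constant 40 s, plus noise.
    t = np.linspace(0.0, 300.0, 121)
    rng = np.random.default_rng(0)
    measured = first_order_step(t, 50.0, 40.0) + rng.normal(0.0, 0.5, t.size)

    (K_hat, tau_hat), _ = curve_fit(first_order_step, t, measured, p0=(1.0, 10.0))
    print(f"estimated gain {K_hat:.1f} MW, time constant {tau_hat:.1f} s")
    # Rate-of-response and settling-time indices follow directly from K_hat, tau_hat.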

  13. PHOTOGRAMMETRIC MODEL BASED METHOD OF AUTOMATIC ORIENTATION OF SPACE CARGO SHIP RELATIVE TO THE INTERNATIONAL SPACE STATION

    Directory of Open Access Journals (Sweden)

    Y. B. Blokhinov

    2012-07-01

    The technical problem of creating the new Russian version of an automatic Space Cargo Ship (SCS) for the International Space Station (ISS) is inseparably connected to the development of a digital video system for automatically measuring the SCS position relative to the ISS during spacecraft docking. This paper presents a method for estimating the orientation elements based on the use of a highly detailed digital model of the ISS. The input data are digital frames from a calibrated video system and initial values of the orientation elements, which can be estimated from navigation devices or by a fast-and-rough viewpoint-dependent algorithm. The orientation elements are then defined precisely by means of algorithmic processing. The main idea is to solve the exterior orientation problem mainly on the basis of contour information from the frame image of the ISS instead of ground control points. A detailed digital model is used for generating raster templates of ISS nodes; the templates are used to detect and locate the nodes in the target image with the required accuracy. The process is performed for every frame, and the resulting parameters are taken as the orientation elements. A Kalman filter is used for statistical support of the estimation process and real-time pose tracking. Finally, the modeling results presented show that the proposed method can be regarded as one means of ensuring the algorithmic support of automatic spacecraft docking.

  14. Automatic Attraction of Visual Attention by Supraletter Features of Former Target Strings

    Directory of Open Access Journals (Sweden)

    Søren Kyllingsbæk

    2014-11-01

    Observers were trained to search for a particular horizontal string of 3 capital letters presented among similar strings consisting of exactly the same letters in different orders. The training was followed by a test in which the observers searched for a new target that was identical to one of the former distractors. The new distractor set consisted of the remaining former distractors plus the former target. On each trial, three letter strings were displayed, which included the target string with a probability of .5. In Experiment 1, the strings were centered at different locations on the circumference of an imaginary circle around the fixation point. The training phase of Experiment 2 was similar, but in the test phase of the experiment, the strings were located in a vertical array centered on fixation, and in target-present arrays, the target always appeared at fixation. In both experiments, performance (d′) degraded on trials in which former targets were present, suggesting that the former targets automatically drew processing resources away from the current targets. Apparently, the two experiments showed automatic attraction of visual attention by supraletter features of former target strings.

  15. Automatic modulation format recognition for the next generation optical communication networks using artificial neural networks

    Science.gov (United States)

    Guesmi, Latifa; Hraghi, Abir; Menif, Mourad

    2015-03-01

    A new technique for Automatic Modulation Format Recognition (AMFR) in next-generation optical communication networks is presented. This technique uses an Artificial Neural Network (ANN) in conjunction with features from Linear Optical Sampling (LOS) of the detected signal at high bit rates, using direct or coherent detection. The use of the LOS method for this purpose is mainly driven by the increase in bit rates, which enables the measurement of eye diagrams. The efficiency of this technique is demonstrated under different transmission impairments such as chromatic dispersion (CD) in the range of -500 to 500 ps/nm, differential group delay (DGD) in the range of 0-15 ps, and optical signal-to-noise ratio (OSNR) in the range of 10-30 dB. The results of numerical simulation for various modulation formats demonstrate successful recognition at known bit rates with high estimation accuracy, exceeding 99.8%.

  16. Feature-Based Analysis of Plasma-Based Particle Acceleration Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Geddes, Cameron G. R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Min [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cormier-Michel, Estelle [Tech-X Corp., Boulder, CO (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-02-01

    Plasma-based particle accelerators can produce and sustain thousands of times stronger acceleration fields than conventional particle accelerators, providing a potential solution to the problem of the growing size and cost of conventional particle accelerators. To facilitate scientific knowledge discovery from the ever-growing collections of accelerator simulation data generated by accelerator physicists to investigate next-generation plasma-based particle accelerator designs, we describe a novel approach for automatic detection and classification of particle beams and beam substructures due to temporal differences in the acceleration process, here called acceleration features. The automatic feature detection, in combination with a novel visualization tool for fast, intuitive, query-based exploration of acceleration features, enables an effective top-down data exploration process, starting from a high-level, feature-based view down to the level of individual particles. We describe the application of our analysis in practice to analyze simulations of single-pulse and dual and triple colliding-pulse accelerator designs, to study the formation and evolution of particle beams, to compare substructures of a beam, and to investigate transverse particle loss.

  17. Direct deposition of inorganic–organic hybrid semiconductors and their template-assisted microstructures

    International Nuclear Information System (INIS)

    Dwivedi, V.K.; Baumberg, J.J.; Prakash, G. Vijaya

    2013-01-01

    A straightforward method is developed to deposit a new class of self-organized inorganic–organic (IO) hybrid, (C12H25NH3)2PbI4. These IO hybrid structures are stacked up as natural multiple-quantum-well structures and exhibit strong room-temperature exciton emission and other multifunctional features. Here it is successfully demonstrated that these materials can be directly carved into 2D photonic structures using inexpensive template-assisted electrochemical deposition followed by solution processing. The applicability of this method to many such varieties of IO hybrids is also explored. By appropriately controlling the deposition conditions and the self-assembly templates, target structures are developed for new-generation low-cost photonic devices. -- Highlights: ► New fabrication methodology for self-organized inorganic–organic hybrids. ► Strongly confined exciton emission and photoconductive properties. ► Simple bottom-up fabrication for device applications.

  18. Automatic Description Generation from Images : A Survey of Models, Datasets, and Evaluation Measures

    NARCIS (Netherlands)

    Bernardi, Raffaella; Cakici, Ruket; Elliott, Desmond; Erdem, Aykut; Erdem, Erkut; Ikizler-Cinbis, Nazli; Keller, Frank; Muscat, Adrian; Plank, Barbara

    2016-01-01

    Automatic description generation from natural images is a challenging problem that has recently received a large amount of interest from the computer vision and natural language processing communities. In this survey, we classify the existing approaches based on how they conceptualize this problem,

  19. DEVELOPMENT AND TESTING OF GEO-PROCESSING MODELS FOR THE AUTOMATIC GENERATION OF REMEDIATION PLAN AND NAVIGATION DATA TO USE IN INDUSTRIAL DISASTER REMEDIATION

    Directory of Open Access Journals (Sweden)

    G. Lucas

    2015-08-01

    This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan whose clean-up parcels are least numerous, considering it the optimal spatial configuration. The second model shifts the clean-up parcels of a work plan both vertically and horizontally, following a grid pattern with a sampling distance of one fifth of a parcel width, and keeps the optimal shifted version, again with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model, we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such significant variation in the resulting feature numbers shows that identifying the optimal orientation can save work, time and money in remediation. The various tests demonstrated that the model gains efficiency when (1) the individual features of the contaminated area have a pronounced orientation in their geometry (features are long), and (2) the size of the pollution extent features becomes closer to the size of the parcels (scale effect). The second model shows only a 1% difference in the variation of feature number; so this last is less interesting for

  20. Pengenalan Angka Pada Sistem Operasi Android Dengan Menggunakan Metode Template Matching

    Directory of Open Access Journals (Sweden)

    Abdi Pandu Kusuma

    2016-07-01

    Early childhood is an effective age for developing a child's various potentials, and such development can be pursued in many ways, including through play; for children, playing is an apt way to learn. Based on this phenomenon, an interactive number-recognition application with educational elements should be built. The application is expected to decide automatically whether what a child has written is correct or incorrect, and also to encourage the child's enthusiasm for learning to recognize number patterns. To enable the application to judge answers as right or wrong, the template matching method is used. Number recognition with template matching is performed by comparing the input image with a template image; the template matching score is computed from the number of points in the input image that match the template image. Templates are provided in a database to give examples of how number patterns are written. The application was tested 40 times with different patterns, and the tests showed a success rate of 75.75%. Keywords: learning, playing, template matching, and patterns.
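
    A minimal version of the comparison the abstract describes, scoring a binary input image by the number of points that agree with a stored template, might look as follows; the array names and the acceptance threshold are illustrative.

    import numpy as np

    def match_score(input_img, template):
        # Fraction of the template's foreground pixels that are also set in
        # the input; both arguments are binary (0/1) arrays of equal shape.
        overlap = np.logical_and(input_img == 1, template == 1).sum()
        return overlap / max(int(template.sum()), 1)

    def recognize(input_img, templates, accept=0.8):
        # templates: dict mapping a digit to its binary template array,
        # e.g. loaded from the application's template database.
        scores = {d: match_score(input_img, t) for d, t in templates.items()}
        best = max(scores, key=scores.get)
        return (best, scores[best]) if scores[best] >= accept else (None, scores[best])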

  1. Software and hardware platform for testing of Automatic Generation Control algorithms

    Directory of Open Access Journals (Sweden)

    Vasiliev Alexey

    2017-01-01

    Development and implementation of new Automatic Generation Control (AGC) algorithms requires testing them on a model that adequately simulates the primary energetic, information and control processes. In this article, an implementation of a test platform based on HRTSim (Hybrid Real Time Simulator) and SCADA CK-2007 (which is widely used by the System Operator of Russia) is proposed. Testing AGC algorithms on a test platform based on the same SCADA system that is used in operation makes it possible to exclude errors associated with the transfer of AGC algorithms and settings from the test platform to a real power system. A power system including relay protection, automatic control systems and emergency control automatics can be accurately simulated on HRTSim. Besides the information commonly used by conventional AGC systems, HRTSim is able to provide a resemblance of Phasor Measurement Unit (PMU) measurements (information about rotor angles, magnitudes and phase angles of currents and voltages, etc.). The additional information significantly expands the number of possible AGC algorithms, so the test platform is useful in modern AGC system development. The obtained test results confirm that the proposed system is applicable for the tasks mentioned above.

  2. An in silico MS/MS library for automatic annotation of novel FAHFA lipids.

    Science.gov (United States)

    Ma, Yan; Kind, Tobias; Vaniya, Arpana; Gennity, Ingrid; Fahrmann, Johannes F; Fiehn, Oliver

    2015-01-01

    A new lipid class named 'fatty acid esters of hydroxyl fatty acids' (FAHFA) was recently discovered in mammalian adipose tissue and in blood plasma, and some FAHFAs were found to be associated with type 2 diabetes. To facilitate the automatic annotation of FAHFAs in biological specimens, a tandem mass spectra (MS/MS) library is needed. Given the limited availability of commercial standard compounds, we proposed building an in silico MS/MS library to extend the coverage of molecules. We developed a computer-generated library with 3267 tandem mass spectra (MS/MS) for 1089 FAHFA species. FAHFA spectra were generated based on authentic standards with negative-mode electrospray ionization and 10, 20, and 40 V collision-induced dissociation at 4 spectra/s, as used in ultra-high performance liquid chromatography-QTOF mass spectrometry studies. However, positional information of the hydroxyl group is only obtained either at lower QTOF spectra acquisition rates of 1 spectrum/s or at the MS(3) level in ion trap instruments. Therefore, an additional set of 4290 fragment-rich MS/MS spectra was created to enable distinguishing positional FAHFA isomers. The library was generated based on ion fragmentations and ion intensities of FAHFA external reference standards, developing a heuristic model for fragmentation rules and extending these rules to large swaths of computer-generated structures of FAHFAs with varying chain lengths, degrees of unsaturation and hydroxyl group positions. Subsequently, we validated the new in silico library by discovering several new FAHFA species in egg yolk, showing that this library enables high-throughput screening of FAHFA lipids in various biological matrices. The developed library and templates are freely available for commercial or noncommercial use at http://fiehnlab.ucdavis.edu/staff/yanma/fahfa-lipid-library. This in silico MS/MS library allows users to annotate FAHFAs from accurate-mass tandem mass spectra in an easy and fast manner.

  3. Automatic programmable air ozonizer

    International Nuclear Information System (INIS)

    Gubarev, S.P.; Klosovsky, A.V.; Opaleva, G.P.; Taran, V.S.; Zolototrubova, M.I.

    2015-01-01

    In this paper we describe a compact, economical, easy-to-manage automatic air ozonizer developed at the Institute of Plasma Physics of the NSC KIPT. It is designed for the sanitation and disinfection of premises and for cleaning the air of foreign odors. A distinctive feature of the developed device is the generation of a given concentration of ozone, approximately 0.7 of the maximum allowable concentration (MAC), and the automatic maintenance of that level. This allows people to be inside the treated premises during operation. A microprocessor controller was developed to control the operation of the ozonizer.
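
    The control law itself is not detailed in the abstract; automatic maintenance of a fixed ozone level with a microprocessor controller is often achieved with simple on/off (hysteresis) control, as in this sketch, where the MAC value and the band width are invented.

    MAC = 0.16                 # illustrative maximum allowable concentration, mg/m3
    SETPOINT = 0.7 * MAC       # target level named in the abstract
    BAND = 0.05 * SETPOINT     # hysteresis band to avoid relay chatter

    def control_step(concentration, generator_on):
        # One on/off control step: switch the generator off above the band,
        # on below it, and otherwise keep the current state.
        if concentration > SETPOINT + BAND:
            return False
        if concentration < SETPOINT - BAND:
            return True
        return generator_on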

  4. Facile template-directed synthesis of carbon-coated SnO2 nanotubes with enhanced Li-storage capabilities

    International Nuclear Information System (INIS)

    Zhu, Xiaoshu; Zhu, Jingyi; Yao, Yinan; Zhou, Yiming; Tang, Yawen; Wu, Ping

    2015-01-01

    Herein, a novel type of carbon-coated SnO2 nanotubes has been designed and synthesized through a facile two-step hydrothermal approach using ZnO nanorods as templates. During the synthetic route, SnO2 nanocrystals and a carbon layer were uniformly deposited on the rod-like templates in sequence, while the ZnO nanorods were dissolved in situ owing to the alkaline and acidic environments generated during hydrothermal coating of the SnO2 nanocrystals and hydrothermal carbonization of glucose, respectively. When utilized as an anode material in lithium-ion batteries, the carbon-coated SnO2 nanotubes manifest markedly enhanced Li-storage capabilities in terms of specific capacity and cycling stability in comparison with bare SnO2 nanocrystals. - Highlights: • C-coated SnO2 nanotubes prepared via a facile ZnO-nanorod-templated hydrothermal route. • Unique morphological and structural features toward lithium storage. • Enhanced Li-storage performance in terms of specific capacity and cycling stability.

  5. Robust Automatic Speech Recognition Features using Complex Wavelet Packet Transform Coefficients

    Directory of Open Access Journals (Sweden)

    TjongWan Sen

    2009-11-01

    To improve the performance of phoneme-based Automatic Speech Recognition (ASR) in noisy environments, we developed a new technique that adds robustness to clean phoneme features. These robust features are obtained from Complex Wavelet Packet Transform (CWPT) coefficients. Since the CWPT coefficients represent all the different frequency bands of the input signal, decomposing the input signal into a complete CWPT tree covers all frequencies involved in the recognition process. For time-overlapping signals with different frequency contents, e.g., a phoneme signal with noise, the CWPT coefficients are the combination of the CWPT coefficients of the phoneme signal and those of the noise. The CWPT coefficients of the phoneme signal change according to the frequency components contained in the noise. Since the number of phonemes in every language is relatively small (limited) and already well known, one can easily derive principal component vectors from a clean training dataset using Principal Component Analysis (PCA). These principal component vectors can then be used to add robustness and minimize noise effects in the testing phase. Simulation results, using Alpha Numeric 4 (AN4) from Carnegie Mellon University and NOISEX-92 examples from Rice University, showed that this new technique can be used as a feature extractor that improves the robustness of phoneme-based ASR systems in various adverse noisy conditions while preserving performance in clean environments.
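
    The core idea, learning principal component vectors from clean features and using them to suppress noise at test time, can be sketched with scikit-learn as follows; the wavelet feature extraction itself is omitted and the file names are placeholders.

    import numpy as np
    from sklearn.decomposition import PCA

    # Rows are feature vectors (e.g., derived from CWPT coefficients) of clean
    # training phonemes; the file names below are placeholders.
    clean_feats = np.load("clean_features.npy")
    noisy_feats = np.load("noisy_features.npy")

    # Keep the subspace explaining 95% of the clean-speech variance.
    pca = PCA(n_components=0.95).fit(clean_feats)

    # Project noisy test features onto the clean subspace and back; components
    # dominated by noise are discarded in the process.
    denoised = pca.inverse_transform(pca.transform(noisy_feats))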

  6. Deliberation versus automaticity in decision making: Which presentation format features facilitate automatic decision making?

    Directory of Open Access Journals (Sweden)

    Anke Soellner

    2013-05-01

    The idea of automatic decision making approximating normatively optimal decisions without necessitating much cognitive effort is intriguing. Whereas recent findings support the notion that such fast, automatic processes explain empirical data well, little is known about the conditions under which such processes are selected rather than more deliberate stepwise strategies. We investigate the role of the format of information presentation, focusing explicitly on the ease of information acquisition and its influence on information integration processes. In a probabilistic inference task, the standard matrix employed in prior research was contrasted with a newly created map presentation format and additional variations of both presentation formats. Across three experiments, a robust presentation format effect emerged: automatic decision making was more prevalent in the matrix (with high information accessibility), whereas sequential decision strategies prevailed when the presentation format demanded more information acquisition effort. Further scrutiny of the effect showed that it is not driven by the presentation format as such, but rather by the extent of information search induced by a format. Thus, if information is accessible with minimal need for information search, information integration is likely to proceed in a perception-like, holistic manner. In turn, a moderate demand for information search decreases the likelihood of behavior consistent with the assumptions of automatic decision making.

  7. Markov random field based automatic image alignment for electron tomography.

    Science.gov (United States)

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo-electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to a poor signal-to-noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high-contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment, and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo-electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  8. Inverse planning for interstitial gynecologic template brachytherapy: truly anatomy-based planning

    International Nuclear Information System (INIS)

    Lessard, Etienne; Hsu, I-Chou; Pouliot, Jean

    2002-01-01

    Purpose: Commercially available optimization schemes generally result in an undesirable dose distribution, because of the particular shapes of tumors extending laterally from the tandem. Dose distribution is therefore manually obtained by adjusting relative dwell time values until an acceptable solution is found. The objective of this work is to present the clinical application of an inverse planning dose optimization tool for the automatic determination of source dwell time values in the treatment of interstitial gynecologic templates. Methods and Materials: In cases where the tumor extends beyond the range of the tandem-ovoid applicator, catheters as well as the tandem are inserted into the paravaginal and parametrial region in an attempt to cover the tumor volume. CT scans of these patients are then used for CT-based dose planning. Dose distribution is obtained manually by varying the relative dwell times until adequate dose coverage is achieved. This manual planning is performed by an experienced physician. In parallel, our in-house inverse planning based on simulated annealing is used to automatically determine which of all possible dwell positions will become active and to calculate the dwell time values needed to fulfill dose constraints applied to the tumor volume and to each organ at risk. To compare the results of these planning methods, dose-volume histograms and isodose distributions were generated for the target and each organ at risk. Results: This procedure has been applied for the dose planning of 12 consecutive interstitial gynecologic templates cases. For all cases, once the anatomy was contoured, the routine of inverse planning based on simulated annealing found the solution to the dose constraints within 1 min of CPU time. In comparison, manual planning took more than 45 min. The inverse planning-generated plans showed improved protection to organs at risk for the same coverage compared to manual planning. Conclusion: This inverse planning tool

  9. Efficient Generation and Selection of Combined Features for Improved Classification

    KAUST Repository

    Shono, Ahmad N.

    2014-05-01

    This study contributes a methodology and associated toolkit developed to allow users to experiment with the use of combined features in classification problems. Methods are provided for efficiently generating combined features from an original feature set, for efficiently selecting the most discriminating of these generated combined features, and for efficiently performing a preliminary comparison of the classification results when using the original features exclusively against the results when using the selected combined features. The potential benefit of considering combined features in classification problems is demonstrated by applying the developed methodology and toolkit to three sample data sets where the discovery of combined features containing new discriminating information led to improved classification results.
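
    The toolkit itself is not shown in the abstract; one plausible realization of the described pipeline, using pairwise products as the combined features and mutual information as the selection criterion, is sketched below on a stock dataset. The dataset, the product construction and the parameter choices are all illustrative.

    import numpy as np
    from itertools import combinations
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    # Generate combined features: the product of every pair of original features.
    pairs = list(combinations(range(X.shape[1]), 2))
    X_comb = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])

    # Select the most discriminating combined features by mutual information.
    mi = mutual_info_classif(X_comb, y, random_state=0)
    top = np.argsort(mi)[-10:]

    # Preliminary comparison: original features alone vs. original + selected.
    clf = LogisticRegression(max_iter=5000)
    base = cross_val_score(clf, X, y, cv=5).mean()
    augmented = cross_val_score(clf, np.hstack([X, X_comb[:, top]]), y, cv=5).mean()
    print(f"original: {base:.3f}  with combined features: {augmented:.3f}")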

  10. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier.

    Science.gov (United States)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research proposes a fully automatic algorithm for distinguishing three-dimensional (3-D) optical coherence tomography (OCT) scans of patients suffering from macular abnormalities from those of normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment processes to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, consisting of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of the 3-D volumes were extracted. In the second stage, the presence of abnormalities in the 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets were used to evaluate the algorithm based on an unbiased fivefold cross-validation (CV) approach. The first set comprises 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured with a Topcon device. The second, publicly available set consists of 45 subjects, with 15 patients in each of the age-related macular degeneration, DME, and normal classes, from a Heidelberg device. Applying the algorithm to the overall OCT volumes with 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset 1 as a two-class classification problem and 98.67% on dataset 2 as a three-class classification task.

  11. Automatic Generation of Minimal Cut Sets

    Directory of Open Access Journals (Sweden)

    Sentot Kromodimoeljo

    2015-06-01

    A cut set is a collection of component failure modes that could lead to a system failure. Cut Set Analysis (CSA) is applied to critical systems to identify and rank system vulnerabilities at design time. Model checking tools have been used to automate the generation of minimal cut sets but are generally based on checking reachability of system failure states. This paper describes a new approach to CSA using a Linear Temporal Logic (LTL) model checker called BT Analyser that supports the generation of multiple counterexamples. The approach enables a broader class of system failures to be analysed, by generalising from failure state formulae to failure behaviours expressed in LTL. The traditional approach to CSA using model checking requires the model or system failure to be modified, usually by hand, to eliminate already-discovered cut sets, and the model checker to be rerun at each step. By contrast, the new approach works incrementally and fully automatically, thereby removing the tedious and error-prone manual process and resulting in significantly reduced computation time. This in turn enables larger models to be checked. Two different strategies for using BT Analyser for CSA are presented. There is generally no single best strategy for model checking: their relative efficiency depends on the model and property being analysed. Comparative results are given for the A320 hydraulics case study in the Behavior Tree modelling language.

  12. Automatic generation of configuration files for a distributed control system

    CERN Document Server

    Cupérus, J

    1995-01-01

    The CERN PS accelerator complex is composed of 9 interlinked accelerators for the production and acceleration of various kinds of particles. The hardware is controlled through CAMAC, VME, G64, and GPIB modules, which in turn are controlled by more than 100 microprocessors in VME crates. Producing startup files for all these microprocessors, with the correct drivers, programs and parameters in each of them, is quite a challenge. The problem is solved by generating the startup files automatically from the description of the control system in a relational database. The generation process detects inconsistencies and incomplete information. Included in the startup files are data which are formally comments, but which can be interpreted for run-time checking of interface modules and program activity.
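
    The CERN tool is not described at code level; as a rough illustration of the general pattern (query a relational description of each crate, render a startup file, and flag incomplete rows), consider this sketch with an invented schema and file layout.

    import sqlite3

    STARTUP_TEMPLATE = """# auto-generated -- do not edit
    crate {name}
    driver {driver}
    program {program} {params}
    """

    def generate_startup_files(db_path):
        # Render one startup file per microprocessor crate described in the
        # database, reporting rows whose description is incomplete.
        con = sqlite3.connect(db_path)
        rows = con.execute(
            "SELECT name, driver, program, params FROM crates").fetchall()
        con.close()
        for name, driver, program, params in rows:
            if not driver or not program:
                print(f"incomplete description for crate {name}; skipped")
                continue
            with open(f"{name}.startup", "w") as f:
                f.write(STARTUP_TEMPLATE.format(
                    name=name, driver=driver, program=program, params=params or ""))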

  13. Using Automated Processes to Generate Test Items And Their Associated Solutions and Rationales to Support Formative Feedback

    Directory of Open Access Journals (Sweden)

    Mark Gierl

    2015-08-01

    Automatic item generation is the process of using item models to produce assessment tasks using computer technology. An item model is similar to a template that highlights the elements in the task that must be manipulated to produce new items. The purpose of our study is to describe an innovative method for generating large numbers of diverse and heterogeneous items, along with their solutions and associated rationales, to support formative feedback. We demonstrate the method by generating items in two diverse content areas, mathematics and nonverbal reasoning.

  14. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    Directory of Open Access Journals (Sweden)

    Luo Hanwu

    2016-01-01

    This paper presents a novel method to solve for the initial lightning breakdown current by combining the ATP and MATLAB simulation software effectively, with the aim of evaluating the lightning protection performance of transmission lines. Firstly, an executable ATP simulation model is generated automatically from the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, through an interface program coded in MATLAB. Then, data are extracted from the LIS files obtained by executing the ATP simulation model, and the occurrence of transmission line breakdown can be determined from the relevant data in the LIS file. The lightning current amplitude is reduced when breakdown occurs, and increased otherwise. Thus the initial lightning breakdown current of a transmission line with given parameters can be determined accurately by iteratively adjusting the lightning current amplitude, realized by a loop algorithm coded in MATLAB, as sketched below. The method proposed in this paper generates the ATP simulation program automatically and facilitates the lightning protection performance assessment of transmission lines.
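
    The MATLAB loop described above amounts to searching for the smallest lightning-current amplitude that causes breakdown. A bisection version of that search is sketched here, with the ATP run and LIS-file parsing stubbed out as a placeholder predicate; the current range and tolerance are invented.

    def find_initial_breakdown_current(breakdown_occurs, low=0.0, high=300.0, tol=0.5):
        # Bisect for the smallest lightning-current amplitude (kA) for which
        # breakdown_occurs(amplitude) is True; the predicate stands in for
        # running the generated ATP model and checking its LIS output.
        assert not breakdown_occurs(low) and breakdown_occurs(high)
        while high - low > tol:
            mid = 0.5 * (low + high)
            if breakdown_occurs(mid):
                high = mid   # breakdown observed: try a smaller amplitude
            else:
                low = mid    # no breakdown: the amplitude must increase
        return high

    # Example with a fake flashover threshold of 87 kA:
    print(find_initial_breakdown_current(lambda i: i >= 87.0))  # ~87.0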

  15. Experience in connecting the power generating units of thermal power plants to automatic secondary frequency regulation within the united power system of Russia

    International Nuclear Information System (INIS)

    Zhukov, A. V.; Komarov, A. N.; Safronov, A. N.; Barsukov, I. V.

    2009-01-01

    The principles of central control of the power generating units of thermal power plants by automatic secondary frequency and active power overcurrent regulation systems, and the algorithms for interactions between automatic power control systems for the power production units in thermal power plants and centralized systems for automatic frequency and power regulation, are discussed. The order of switching the power generating units of thermal power plants over to control by a centralized system for automatic frequency and power regulation and by the Central Coordinating System for automatic frequency and power regulation is presented. The results of full-scale system tests of the control of power generating units of the Kirishskaya, Stavropol, and Perm GRES (State Regional Electric Power Plants) by the Central Coordinating System for automatic frequency and power regulation at the United Power System of Russia on September 23-25, 2008, are reported.

  16. Automatic generation of stop word lists for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
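
    The abstract's criterion translates directly into code: exclude a term when its keyword-adjacency frequency divided by its keyword frequency falls below a cutoff, then truncate the survivors. The sketch below uses naive whitespace tokenization; the cutoff and list size are illustrative.

    from collections import Counter

    def generate_stopwords(documents, keywords, min_ratio=1.0, max_size=300):
        # Count, for every term, how often it occurs adjacent to a keyword
        # (adjacency frequency) and how often it occurs as a keyword word
        # itself (keyword frequency); keep terms whose adjacency/keyword
        # ratio is at least min_ratio, then truncate the list.
        keyword_words = {w for kw in keywords for w in kw.lower().split()}
        adjacency, frequency = Counter(), Counter()
        for doc in documents:
            tokens = doc.lower().split()
            for i, tok in enumerate(tokens):
                if tok in keyword_words:
                    frequency[tok] += 1
                window = tokens[max(i - 1, 0):i] + tokens[i + 1:i + 2]
                if any(w in keyword_words for w in window):
                    adjacency[tok] += 1
        kept = [t for t in adjacency
                if adjacency[t] / max(frequency[t], 1) >= min_ratio]
        kept.sort(key=lambda t: adjacency[t], reverse=True)
        return kept[:max_size]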

  17. The ''controbloc'', a programmable automatic device for the 1,300 MW generation of power stations

    International Nuclear Information System (INIS)

    Pralus, B.; Winzelle, J.C.

    1983-01-01

    Technological progress in the field of microelectronics has led to the development of an automatic control device, the ''controbloc'', for operating and controlling nuclear power plants. The ''controbloc'' will be used in automatic systems with a high degree of safety and versatility and is now being installed in the first of the new-generation 1,300 MW power stations. The main characteristics of the device and the evaluation tests which have been carried out are described. [fr]

  18. Voice Quality Measuring Setup with Automatic Voice over IP Call Generator and Lawful Interception Packet Analyzer

    Directory of Open Access Journals (Sweden)

    PLEVA Matus

    This paper describes a packet-measuring laboratory setup, which could also be used for lawful interception applications, using a professional packet analyzer, a Voice over IP call generator, a free call server (an Asterisk Linux setup) and the appropriate software and hardware described below. This setup was used for measuring the quality of automatically generated VoIP calls under stressed network conditions, when the call manager server was flooded with high-bandwidth traffic near the bandwidth limit of the connected switch. The call generator realizes 30 calls simultaneously, and the packet capturer and analyzer can decode the VoIP traffic, extract RTP session data, automatically analyze the voice quality using standardized MOS (Mean Opinion Score) values, and also identify the source of the voice degradation (jitter, packet loss, codec, delay, etc.).
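
    The abstract does not state how the MOS values are computed from the captured RTP data; a common approach is the ITU-T G.107 E-model, which maps a rating factor R, reduced by delay and loss impairments, to an estimated MOS. A heavily simplified sketch follows; the impairment terms are reduced to illustrative forms.

    def r_to_mos(r):
        # ITU-T G.107 mapping from the E-model rating factor R to estimated MOS.
        if r <= 0:
            return 1.0
        if r >= 100:
            return 4.5
        return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

    def simplified_r(delay_ms, loss_pct):
        # Heavily reduced E-model: default R0 of 93.2 minus a delay impairment
        # and a packet-loss impairment (Ie = 0 and Bpl = 10 assumed here; a
        # real assessment uses the full G.107 terms with codec-specific values).
        i_d = 0.024 * delay_ms + max(0.0, 0.11 * (delay_ms - 177.3))
        ie_eff = 95.0 * loss_pct / (loss_pct + 10.0)
        return 93.2 - i_d - ie_eff

    print(round(r_to_mos(simplified_r(delay_ms=150.0, loss_pct=1.0)), 2))  # ~4.06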

  19. Improvement in the performance of CAD for the Alzheimer-type dementia based on automatic extraction of temporal lobe from coronal MR images

    International Nuclear Information System (INIS)

    Kaeriyama, Tomoharu; Kodama, Naoki; Kaneko, Tomoyuki; Shimada, Tetsuo; Tanaka, Hiroyuki; Takeda, Ai; Fukumoto, Ichiro

    2004-01-01

    In this study, we extracted whole-brain and temporal lobe images from MR images (26 healthy elderly controls and 34 Alzheimer-type dementia patients) by means of binarization, mask processing, template matching, Hough transformation, boundary tracing, etc. We assessed the extraction accuracy by comparing the extracted images to images extracted by a radiological technologist. The concordance rates of the assessment were: whole-brain images, 91.3±4.3%; right temporal lobe, 83.3±6.9%; left temporal lobe, 83.7±7.6%. Furthermore, discriminant analysis using 6 textural features demonstrated a sensitivity and specificity of 100% when the healthy elderly controls were compared to the Alzheimer-type dementia patients. Our research showed the possibility of automatic, objective diagnosis of temporal lobe abnormalities from automatically extracted images of the temporal lobes. (author)

  20. Perl Template Toolkit

    CERN Document Server

    Chamberlain, Darren; Cross, David; Torkington, Nathan; Diaz, tatiana Apandi

    2004-01-01

    Among the many different approaches to "templating" with Perl--such as Embperl, Mason, HTML::Template, and hundreds of other lesser known systems--the Template Toolkit is widely recognized as one of the most versatile. Like other templating systems, the Template Toolkit allows programmers to embed Perl code and custom macros into HTML documents in order to create customized documents on the fly. But unlike the others, the Template Toolkit is as facile at producing HTML as it is at producing XML, PDF, or any other output format. And because it has its own simple templating language, templates

  1. Character feature integration of Chinese calligraphy and font

    Science.gov (United States)

    Shi, Cao; Xiao, Jianguo; Jia, Wenhua; Xu, Canhui

    2013-01-01

    A framework is proposed in this paper to effectively generate a new hybrid character type by integrating the local contour features of Chinese calligraphy with the structural features of a font in a computer system. To explore the traditional artistic manifestation of calligraphy, a multi-directional spatial filter is applied for local contour feature extraction. The contour of the character image is then divided into sub-images. The sub-images in the identical position from various characters are modeled by a Gaussian distribution. According to this probability distribution, dilation and erosion operators are designed to adjust the boundary of the font image. New Chinese character images are then generated that possess both the contour features of artistic calligraphy and the elaborate structural features of the font. Experimental results demonstrate that the new characters are visually acceptable, and that the proposed framework is an effective and efficient strategy for automatically generating new hybrid characters of calligraphy and font.

  2. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    Science.gov (United States)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, as critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation-consistency method. The basic assumption of the method is that the road surface is smooth: points with small elevation differences within their neighborhood are considered ground points. The ground points are then partitioned into a set of profiles according to trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a certain threshold, which varies inversely with laser distance. The separated points are used as seed points for intensity-based region growing, so as to obtain complete road markings. We use a point-cloud template-matching method to refine the road-marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set of about 2 kilometres in a city center, our method provided a promising solution to road-marking extraction from MLS data.
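
    A brute-force sketch of the elevation-consistency ground filter in the first step is given below; the neighborhood radius and elevation threshold are illustrative, and a production MLS pipeline would process points in tiles rather than point by point.

    import numpy as np
    from scipy.spatial import cKDTree

    def filter_ground_points(points, radius=0.5, max_dz=0.1):
        # Keep points whose neighborhood is vertically consistent: the
        # elevation difference to all neighbors within `radius` (in the XY
        # plane) stays below `max_dz`. `points` is an (N, 3) array of x, y, z.
        tree = cKDTree(points[:, :2])
        keep = np.zeros(len(points), dtype=bool)
        for i, p in enumerate(points):
            nbrs = tree.query_ball_point(p[:2], radius)
            keep[i] = np.abs(points[nbrs, 2] - p[2]).max() <= max_dz
        return points[keep]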

  3. TermGenie - a web-application for pattern-based ontology class generation.

    Science.gov (United States)

    Dietze, Heiko; Berardini, Tanya Z; Foulger, Rebecca E; Hill, David P; Lomax, Jane; Osumi-Sutherland, David; Roncaglia, Paola; Mungall, Christopher J

    2014-01-01

    Biological ontologies are continually growing and improving in response to requests for new classes (terms) from biocurators. These ontology requests can frequently create bottlenecks in the biocuration process, as ontology developers struggle to keep up while manually processing the requests and creating classes. TermGenie allows biocurators to generate new classes based on formally specified design patterns or templates. The system is web-based and can be accessed by any authorized curator through a web browser. Automated rules and reasoning engines are used to ensure validity, uniqueness and relationship to pre-existing classes. In the last 4 years the Gene Ontology TermGenie generated 4715 new classes, about 51.4% of all new classes created. The immediate generation of permanent identifiers proved not to be an issue, with only 70 (1.4%) obsoleted classes. TermGenie is a web-based class-generation system that complements traditional ontology development tools. All classes added through pre-defined templates are guaranteed to have OWL equivalence axioms that are used for automatic classification and, in some cases, inter-ontology linkage. At the same time, the system is simple and intuitive and can be used by most biocurators without extensive training.

  4. AUTOMATIC WINDING GENERATION USING MATRIX REPRESENTATION - ANFRACTUS TOOL 1.0

    Directory of Open Access Journals (Sweden)

    Daoud Ouamara

    2018-02-01

    This paper describes an original approach to AC/DC winding design in electrical machines. A research software package called "ANFRACTUS Tool 1.0", allowing automatic generation of all windings in multiphase electrical machines, has been developed using the matrix representation. Unlike existing methods, where the aim is to synthesize a winding with higher performance, the proposed method provides the opportunity to choose among all feasible windings. The specificity of this approach is that it takes only the numbers of slots, phases and layers as input parameters. The number of poles is not required to run the generation process. Winding generation by matrix representation may be applied for any number of slots, phases and layers. The software does not deal with the manner in which the coils are connected, only with the placement of the coils in each slot and their current direction. The waveform and the harmonic spectrum of the total magnetomotive force (MMF) are given as results.

  5. The Design of a Templated C++ Small Vector Class for Numerical Computing

    Science.gov (United States)

    Moran, Patrick J.

    2000-01-01

    We describe the design and implementation of a templated C++ class for vectors. The vector class is templated both for vector length and vector component type; the vector length is fixed at template instantiation time. The vector implementation is such that for a vector of N components of type T, the total number of bytes required by the vector is equal to N * sizeof(T), where sizeof is the built-in C operator. The property of having a size no bigger than that required by the components themselves is key in many numerical computing applications, where one may allocate very large arrays of small, fixed-length vectors. In addition to the design trade-offs motivating our fixed-length vector design choice, we review some of the C++ template features essential to an efficient, succinct implementation. In particular, we highlight some of the standard C++ features, such as partial template specialization, that are not supported by all compilers currently. This report provides an inventory listing the relevant support currently provided by some key compilers, as well as test code one can use to verify compiler capabilities.

  6. Automatic Control Systems (ACS for Generation and Sale of Electric Power Under Conditions of Industry-Sector Liberalization

    Directory of Open Access Journals (Sweden)

    Yu. S. Petrusha

    2013-01-01

    Possible risks pertaining to the transition of the electric-power industry to market relations are considered in the paper. The paper presents an integrated ACS for the generation and sale of electric power as an improvement in the methodology of organizational and technical management. The system is based on the integration of the operating Automatic Dispatch Control System (ADCS) and the developing Automatic Electricity Meter Reading System (AEMRS). The paper proposes to form an inter-branch sector of ACS PLC (Automatic Control System for Prolongation of Life Cycle) users, which is oriented toward supporting the development strategy.

  7. A model for abnormal activity recognition and alert generation system for elderly care by hidden conditional random fields using R-transform and generalized discriminant analysis features.

    Science.gov (United States)

    Khan, Zafar Ali; Sohn, Won

    2012-10-01

    The growing population of elderly people living alone increases the need for automatic healthcare monitoring systems for elderly care. Automatic vision-sensor-based systems have been increasingly used for human activity recognition (HAR) in recent years. This study presents an improved model, tested using actors, of a sensor-based HAR system to recognize daily life activities of elderly people at home and generate an alert in case of abnormal HAR. Datasets consisting of six abnormal activities (falling backward, falling forward, falling rightward, falling leftward, chest pain, and fainting) and four normal activities (walking, rushing, sitting down, and standing up) are generated from different view angles (90°, -90°, 45°, -45°). Feature extraction and dimension reduction are performed by the R-transform followed by generalized discriminant analysis (GDA). The R-transform extracts symmetric, scale- and translation-invariant features from the sequences of activities. GDA increases the discrimination between different classes of highly similar activities. Silhouette sequences are quantized by the Linde-Buzo-Gray algorithm and recognized by hidden conditional random fields. Experimental results provide an average recognition rate of 94.2% for abnormal activities and 92.7% for normal activities. The recognition rate for the highly similar activities from different view angles shows the flexibility and efficacy of the proposed abnormal HAR and alert generation system for elderly care.

  8. Intelligent control schemes applied to Automatic Generation Control

    Directory of Open Access Journals (Sweden)

    Dingguo Chen

    2016-04-01

    Integrating an ever-increasing amount of renewable generating resources into interconnected power systems has created new challenges to the safety and reliability of today's power grids and posed new questions to be answered in power system modeling, analysis and control. Automatic Generation Control (AGC) must be extended to be able to accommodate the control of renewable generating assets. In addition, AGC is mandated to operate in accordance with NERC's Control Performance Standard (CPS) criteria, which represent a greater flexibility in relaxing the control of generating resources while still assuring the stability and reliability of interconnected power systems when each balancing authority operates in full compliance. Enhancements in several aspects of traditional AGC must be made in order to meet the aforementioned challenges. It is the intention of this paper to provide a systematic, mathematical formulation for AGC, as a first attempt, in the context of meeting the NERC CPS requirements and integrating renewable generating assets, which, to the best of the authors' knowledge, has not been reported in the literature. Furthermore, this paper proposes neural-network-based predictive control schemes for AGC. The proposed controller is capable of handling complicated nonlinear dynamics, in comparison with the conventional Proportional Integral (PI) controller, which is typically most effective for linear dynamics. The neural controller is designed in such a way that it can control the system generation in a relaxed manner, so the ACE is controlled to a desired range instead of being driven to zero, which would otherwise increase control effort and cost; most importantly, the resulting system control performance meets the NERC CPS requirements and/or the NERC Balancing Authority's ACE Limit (BAAL) compliance requirements, whichever are applicable.

  9. Automated ventricular systems segmentation in brain CT images by combining low-level segmentation and high-level template matching

    Directory of Open Access Journals (Sweden)

    Ward Kevin R

    2009-11-01

    Background: Accurate analysis of CT brain scans is vital for diagnosis and treatment of Traumatic Brain Injuries (TBI). Automatic processing of these CT brain scans could speed up the decision-making process, lower the cost of healthcare, and reduce the chance of human error. In this paper, we focus on automatic processing of CT brain images to segment and identify the ventricular systems. The segmentation of ventricles provides quantitative measures of the changes of ventricles in the brain that form vital diagnostic information. Methods: First, all CT slices are aligned by detecting the ideal midlines in all images. The initial estimate of the ideal midline of the brain is found based on skull symmetry and is then refined using detected anatomical features. A two-step method is then used for ventricle segmentation. First, a low-level segmentation of each pixel is applied to the CT images; for this step, both Iterated Conditional Modes (ICM) and Maximum A Posteriori Spatial Probability (MASP) are evaluated and compared. The second step applies a template-matching algorithm to identify objects in the initial low-level segmentation as ventricles. Experiments for ventricle segmentation are conducted using a relatively large CT dataset containing mild and severe TBI cases. Results: Experiments show that the acceptance rate of the ideal midline detection is over 95%. Two measurements are defined to evaluate ventricle recognition results: the first is a sensitivity-like measure and the second is a false-positive-like measure. For the first measurement, the rate is 100%, indicating that all ventricles are identified in all slices. The false-positive-like measurement is 8.59%. We also point out the similarities and differences between the ICM and MASP algorithms through both mathematical relationships and segmentation results on CT images. Conclusion: The experiments show the reliability of the proposed algorithms. The

  10. LMD Based Features for the Automatic Seizure Detection of EEG Signals Using SVM.

    Science.gov (United States)

    Zhang, Tao; Chen, Wanzhong

    2017-08-01

    Achieving the goal of detecting seizure activity automatically using electroencephalogram (EEG) signals is of great importance and significance for the treatment of epileptic seizures. To realize this aim, a newly developed time-frequency analytical algorithm, namely local mean decomposition (LMD), is employed in the presented study. LMD is able to decompose an arbitrary signal into a series of product functions (PFs). Primarily, the raw EEG signal is decomposed into several PFs, and then the temporal statistical and non-linear features of the first five PFs are calculated. The features of each PF are fed into five classifiers, including a back propagation neural network (BPNN), K-nearest neighbor (KNN), linear discriminant analysis (LDA), an un-optimized support vector machine (SVM) and an SVM optimized by a genetic algorithm (GA-SVM), for five classification cases, respectively. Confluent features of all PFs and the raw EEG are further passed into the high-performance GA-SVM for the same classification tasks. Experimental results on the international public Bonn epilepsy EEG dataset show that the average classification accuracy of the presented approach is equal to or higher than 98.10% in all five cases, which indicates the effectiveness of the proposed approach for automated seizure detection.

  11. An Approach to Detect and Study DNA Double-Strand Break Repair by Transcript RNA Using a Spliced-Antisense RNA Template.

    Science.gov (United States)

    Keskin, Havva; Storici, Francesca

    2018-01-01

    A double-strand break (DSB) is one of the most dangerous DNA lesions, and its repair is crucial for genome stability. Homologous recombination is considered the safest way to repair a DNA DSB and requires an identical or nearly identical DNA template, such as a sister chromatid or a homologous chromosome, for accurate repair. Can transcript RNA serve as a donor template for DSB repair? Here, we describe an approach that we developed to detect and study DNA repair by transcript RNA. Key features of the method are: (i) use of antisense (noncoding) RNA as the template for DSB repair by RNA, (ii) use of intron splicing to distinguish the sequence of the RNA template from that of the DNA that generates the RNA template, and (iii) use of a trans and cis system to study how RNA repairs a DSB in homologous but distant DNA or in its own DNA, respectively. This chapter provides details on how to use a spliced-antisense RNA template to detect and study DSB repair by RNA in trans or cis in yeast cells. Our approach for the detection of DSB repair by RNA in cells can be applied to cell types other than yeast, such as bacteria, mammalian cells, or other eukaryotic cells.

  12. FUSING SPEECH SIGNAL AND PALMPRINT FEATURES FOR A SECURED AUTHENTICATION SYSTEM

    Directory of Open Access Journals (Sweden)

    P.K. Mahesh

    2011-11-01

    Full Text Available In biometric authentication applications, personal identification is regarded as an effective method for automatically recognizing, with high confidence, a person's identity. Using multimodal biometric systems, we typically get better performance compared to a single biometric modality. This paper proposes a multimodal biometric system for identity verification using two traits, i.e., speech signal and palmprint. Integrating the palmprint and speech information increases the robustness of person authentication. The proposed system is designed for applications where the training data contains a speech signal and palmprint. It is well known that the performance of person authentication using only a speech signal or palmprint deteriorates as features change over time. The final decision is made by fusion at the matching score level, an architecture in which feature vectors are created independently for query measures and are then compared to the enrolment templates, which are stored during database preparation.
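
    A minimal sketch of matching-score-level fusion, assuming each matcher returns a raw similarity score between the query and the enrolment template; the score ranges, fusion weight and decision threshold below are illustrative values, not the paper's.

```python
# Sketch of score-level fusion for a speech + palmprint verification system.
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] using known score bounds."""
    return (score - lo) / (hi - lo)

def fuse_and_decide(speech_score, palm_score,
                    speech_range=(0.0, 100.0), palm_range=(0.0, 1.0),
                    w=0.5, threshold=0.6):
    s = min_max_normalize(speech_score, *speech_range)
    p = min_max_normalize(palm_score, *palm_range)
    fused = w * s + (1.0 - w) * p   # weighted-sum fusion of the two matchers
    return fused >= threshold        # accept (True) or reject (False)

print(fuse_and_decide(speech_score=72.0, palm_score=0.81))
```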

  13. Ordered Nanomaterials Thin Films via Supported Anodized Alumina Templates

    Directory of Open Access Journals (Sweden)

    Mohammed Es-Souni

    2014-10-01

    Full Text Available Supported anodized alumina template films with highly ordered porosity are best suited for fabricating large-area ordered nanostructures with tunable dimensions and aspect ratios. In this paper we first discuss important issues in the generation of such templates, including the required properties of the Al/Ti/Au/Ti thin film heterostructure on a substrate for high quality templates. We then show examples of anisotropic nanostructure films consisting of noble metals made using these templates, briefly discuss their optical properties, and describe their applications to molecular detection using surface-enhanced Raman spectroscopy. Finally, we briefly address the possibility of making nanocomposite films, shown by way of example for a plasmonic-thermochromic nanocomposite of VO2-capped Au-nanorods.

  14. Computational resources to filter gravitational wave data with P-approximant templates

    International Nuclear Information System (INIS)

    Porter, Edward K

    2002-01-01

    The prior knowledge of the gravitational waveform from compact binary systems makes matched filtering an attractive detection strategy. This detection method involves the filtering of the detector output with a set of theoretical waveforms or templates. One of the most important factors in this strategy is knowing how many templates are needed in order to reduce the loss of possible signals. In this study, we calculate the number of templates and computational power needed for a one-step search for gravitational waves from inspiralling binary systems. We build on previous works by first expanding the post-Newtonian waveforms to 2.5-PN order and second, for the first time, calculating the number of templates needed when using P-approximant waveforms. The analysis is carried out for the four main first-generation interferometers, LIGO, GEO600, VIRGO and TAMA. As well as template number, we also calculate the computational cost of generating banks of templates for filtering GW data. We carry out the calculations for two initial conditions. In the first case we assume a minimum individual mass of 1 M⊙ and in the second, we assume a minimum individual mass of 5 M⊙. We find that, in general, we need more P-approximant templates to carry out a search than if we use standard PN templates. This increase varies according to the order of PN-approximation, but can be as high as a factor of 3 and is explained by the smaller span of the P-approximant templates as we go to higher masses. The promising outcome is that for 2-PN templates, the increase is small and is outweighed by the known robustness of the 2-PN P-approximant templates.

  15. Automatic systems for opening and closing reactor vessels, steam generators, and pressurizers

    International Nuclear Information System (INIS)

    Samblat, C.

    1990-01-01

    The need for shorter working assignments, reduced dose rates and less time consumption has caused Electricite de France and Framatome to automate the entire procedure of opening and closing the main components in the primary system, such as the reactor vessel, steam generator, and pressurizer. The experience accumulated by the two companies in more than 300 annual revisions of nuclear generating units worldwide has been used as a basis for automating all bolt opening and closing steps as well as cleaning processes. The machines and automatic systems currently in operation are the result of extensive studies and practical tests. (orig.) [de]

  16. A novel chaotic stream cipher and its application to palmprint template protection

    International Nuclear Information System (INIS)

    Heng-Jian, Li; Jia-Shu, Zhang

    2010-01-01

    Based on a coupled nonlinear dynamic filter (NDF), a novel chaotic stream cipher is presented in this paper and employed to protect palmprint templates. The chaotic pseudorandom bit generator (PRBG) based on a coupled NDF, which is constructed in an inverse flow, can generate multiple bits at one iteration and satisfy the security requirement of cipher design. Then, the stream cipher is employed to generate cancelable competitive code palmprint biometrics for template protection. The proposed cancelable palmprint authentication system depends on two factors: the palmprint biometric and the password/token. Therefore, the system provides high-confidence and also protects the user's privacy. The experimental results of verification on the Hong Kong PolyU Palmprint Database show that the proposed approach has a large template re-issuance ability and the equal error rate can achieve 0.02%. The performance of the palmprint template protection scheme proves the good practicability and security of the proposed stream cipher. (general)
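
    To illustrate how a chaotic PRBG drives a stream cipher, the sketch below uses a simple logistic map in place of the paper's coupled nonlinear dynamic filter; only the encrypt/decrypt flow is faithful to the description above, not the generator itself, and the seed value is an arbitrary example.

```python
# Illustrative chaotic stream cipher; the logistic map is a stand-in PRBG.
def chaotic_keystream(seed, n_bytes, r=3.99):
    x = seed                       # key material: initial condition in (0, 1)
    out = bytearray()
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):         # one keystream bit per map iteration
            x = r * x * (1.0 - x)
            byte = (byte << 1) | (1 if x > 0.5 else 0)
        out.append(byte)
    return bytes(out)

def xor_cipher(data, seed):
    ks = chaotic_keystream(seed, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

template = b"competitive-code-palmprint-template"
protected = xor_cipher(template, seed=0.3141592)
assert xor_cipher(protected, seed=0.3141592) == template  # symmetric cipher
```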

  17. A novel chaotic stream cipher and its application to palmprint template protection

    Science.gov (United States)

    Li, Heng-Jian; Zhang, Jia-Shu

    2010-04-01

    Based on a coupled nonlinear dynamic filter (NDF), a novel chaotic stream cipher is presented in this paper and employed to protect palmprint templates. The chaotic pseudorandom bit generator (PRBG) based on a coupled NDF, which is constructed in an inverse flow, can generate multiple bits at one iteration and satisfy the security requirement of cipher design. Then, the stream cipher is employed to generate cancelable competitive code palmprint biometrics for template protection. The proposed cancelable palmprint authentication system depends on two factors: the palmprint biometric and the password/token. Therefore, the system provides high-confidence and also protects the user's privacy. The experimental results of verification on the Hong Kong PolyU Palmprint Database show that the proposed approach has a large template re-issuance ability and the equal error rate can achieve 0.02%. The performance of the palmprint template protection scheme proves the good practicability and security of the proposed stream cipher.

  18. Spline-based automatic path generation of welding robot

    Institute of Scientific and Technical Information of China (English)

    Niu Xuejuan; Li Liangyu

    2007-01-01

    This paper presents a flexible method for the representation of welded seam based on spline interpolation. In this method, the tool path of welding robot can be generated automatically from a 3D CAD model. This technique has been implemented and demonstrated in the FANUC Arc Welding Robot Workstation. According to the method, a software system is developed using VBA of SolidWorks 2006. It offers an interface between SolidWorks and ROBOGUIDE, the off-line programming software of FANUC robot. It combines the strong modeling function of the former and the simulating function of the latter. It also has the capability of communication with on-line robot. The result data have shown its high accuracy and strong reliability in experiments. This method will improve the intelligence and the flexibility of the welding robot workstation.

  19. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Olivier Aycard

    2004-12-01

    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantages over other methods (such as neural networks are their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first one in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second one in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.
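
    A sketch of the classification scheme using hmmlearn's first-order GaussianHMM as a stand-in for the paper's second-order models: one HMM is trained per feature class, and a new sensor sequence is labeled by the best-scoring model. The state count and the dictionary of classes are assumptions.

```python
# Sketch of HMM-based feature detection from mobile-robot sensor sequences.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_model(sequences, n_states=4):
    """Fit one HMM on all training sequences of a single feature class."""
    X = np.vstack(sequences)              # (total_timesteps, n_sensors)
    lengths = [len(s) for s in sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag")
    model.fit(X, lengths)
    return model

def classify(models, observation_seq):
    """Label = the feature class whose HMM scores the sequence highest."""
    return max(models, key=lambda label: models[label].score(observation_seq))

# Hypothetical usage:
# models = {"open_door": train_model(door_seqs),
#           "t_intersection": train_model(intersection_seqs)}
# print(classify(models, new_sensor_sequence))
```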

  20. The Utility of Template Analysis in Qualitative Psychology Research.

    Science.gov (United States)

    Brooks, Joanna; McCluskey, Serena; Turley, Emma; King, Nigel

    2015-04-03

    Thematic analysis is widely used in qualitative psychology research, and in this article, we present a particular style of thematic analysis known as Template Analysis. We outline the technique and consider its epistemological position, then describe three case studies of research projects which employed Template Analysis to illustrate the diverse ways it can be used. Our first case study illustrates how the technique was employed in data analysis undertaken by a team of researchers in a large-scale qualitative research project. Our second example demonstrates how a qualitative study that set out to build on mainstream theory made use of the a priori themes (themes determined in advance of coding) permitted in Template Analysis. Our final case study shows how Template Analysis can be used from an interpretative phenomenological stance. We highlight the distinctive features of this style of thematic analysis, discuss the kind of research where it may be particularly appropriate, and consider possible limitations of the technique. We conclude that Template Analysis is a flexible form of thematic analysis with real utility in qualitative psychology research.

  1. Multi-template tensor-based morphometry: application to analysis of Alzheimer's disease.

    Science.gov (United States)

    Koikkalainen, Juha; Lötjönen, Jyrki; Thurfjell, Lennart; Rueckert, Daniel; Waldemar, Gunhild; Soininen, Hilkka

    2011-06-01

    In this paper methods for using multiple templates in tensor-based morphometry (TBM) are presented and compared to the conventional single-template approach. TBM analysis requires non-rigid registrations which are often subject to registration errors. When using multiple templates and, therefore, multiple registrations, it can be assumed that the registration errors are averaged and eventually compensated. Four different methods are proposed for multi-template TBM. The methods were evaluated using magnetic resonance (MR) images of healthy controls, patients with stable or progressive mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD) from the ADNI database (N=772). The performance of TBM features in classifying images was evaluated both quantitatively and qualitatively. Classification results show that the multi-template methods are statistically significantly better than the single-template method. The overall classification accuracy was 86.0% for the classification of control and AD subjects, and 72.1% for the classification of stable and progressive MCI subjects. The statistical group-level difference maps produced using multi-template TBM were smoother, formed larger continuous regions, and had larger t-values than the maps obtained with single-template TBM. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. A concatenated coding scheme for biometric template protection

    NARCIS (Netherlands)

    Shao, X.; Xu, H.; Veldhuis, Raymond N.J.; Slump, Cornelis H.

    2012-01-01

    Cryptography may mitigate the privacy problem in biometric recognition systems. However, cryptographic technologies lack error-tolerance and biometric samples cannot be reproduced exactly, raising the robustness problem. The biometric template protection system needs a good feature extraction

  3. Incorporating Learning Characteristics into Automatic Essay Scoring Models: What Individual Differences and Linguistic Features Tell Us about Writing Quality

    Science.gov (United States)

    Crossley, Scott A.; Allen, Laura K.; Snow, Erica L.; McNamara, Danielle S.

    2016-01-01

    This study investigates a novel approach to automatically assessing essay quality that combines natural language processing approaches that assess text features with approaches that assess individual differences in writers such as demographic information, standardized test scores, and survey results. The results demonstrate that combining text…

  4. Automatic program generation: future of software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, J.H.

    1979-01-01

    At this moment software development is still more of an art than an engineering discipline. Each piece of software is lovingly engineered, nurtured, and presented to the world as a tribute to the writer's skill. When will this change? When will the craftsmanship be removed and the programs be turned out like so many automobiles from an assembly line? Sooner or later it will happen: economic necessities will demand it. With the advent of cheap microcomputers and ever more powerful supercomputers doubling in capacity, much more software must be produced. The choices are to double the number of programmers, double the efficiency of each programmer, or find a way to produce the needed software automatically. Producing software automatically is the only logical choice. How will automatic programming come about? Some of the preliminary actions which need to be done, and are being done, are to encourage programmer plagiarism of existing software through public library mechanisms, produce well-understood packages such as compilers automatically, develop languages capable of producing software as output, and learn enough about the whole process of programming to be able to automate it. Clearly, the emphasis must not be on efficiency or size, since ever larger and faster hardware is coming.

  5. Automatic Classification of Normal and Cancer Lung CT Images Using Multiscale AM-FM Features

    Directory of Open Access Journals (Sweden)

    Eman Magdy

    2015-01-01

    Full Text Available Computer-aided diagnostic (CAD) systems provide fast and reliable diagnosis for medical images. In this paper, a CAD system is proposed to analyze CT images, automatically segment the lungs, and classify each lung as normal or cancerous. Using a lung CT dataset of 70 different patients, Wiener filtering is first applied to the original CT images as a preprocessing step. Secondly, we combine histogram analysis with thresholding and morphological operations to segment the lung regions and extract each lung separately. Thirdly, the Amplitude-Modulation Frequency-Modulation (AM-FM) method is used to extract features from the ROIs. Then, the significant AM-FM features are selected using Partial Least Squares Regression (PLSR) for the classification step. Finally, K-nearest neighbour (KNN), support vector machine (SVM), naïve Bayes, and linear classifiers are used with the selected AM-FM features. The performance of each classifier in terms of accuracy, sensitivity, and specificity is evaluated. The results indicate that our proposed CAD system succeeded in differentiating between normal and cancerous lungs and achieved 95% accuracy in the case of the linear classifier.

  6. A template-based approach for responsibility management in executable business processes

    Science.gov (United States)

    Cabanillas, Cristina; Resinas, Manuel; Ruiz-Cortés, Antonio

    2018-05-01

    Process-oriented organisations need to manage the different types of responsibilities their employees may have w.r.t. the activities involved in their business processes. Although several approaches provide support for responsibility modelling, in current Business Process Management Systems (BPMS) the only responsibility considered at runtime is the one related to performing the work required for activity completion. Others, like accountability or consultation, must be implemented by manually adding activities to the executable process model, which is time-consuming and error-prone. In this paper, we address this limitation by enabling current BPMS to execute processes in which people with different responsibilities interact to complete the activities. We introduce a metamodel based on Responsibility Assignment Matrices (RAM) to model the responsibility assignment for each activity, and a flexible template-based mechanism that automatically transforms such information into BPMN elements, which can be interpreted and executed by a BPMS. Thus, our approach does not enforce any specific behaviour for the different responsibilities; rather, new templates can be modelled to specify the interaction that best suits the activity requirements. Furthermore, libraries of templates can be created and reused in different processes. We provide a reference implementation and build a library of templates for a well-known set of responsibilities.

  7. Durable diamond-like carbon templates for UV nanoimprint lithography

    International Nuclear Information System (INIS)

    Tao, L; Ramachandran, S; Nelson, C T; Overzet, L J; Goeckner, M; Lee, G; Hu, W; Lin, M; Willson, C G; Wu, W

    2008-01-01

    The interaction between resist and template during the separation process after nanoimprint lithography (NIL) can cause the formation of defects and damage to the templates and resist patterns. To alleviate these problems, fluorinated self-assembled monolayers (F-SAMs, i.e. tridecafluoro-1,1,2,2,tetrahydrooctyl trichlorosilane or FDTS) have been employed as template release coatings. However, we find that the FDTS coating undergoes irreversible degradation after only 10 cycles of UV nanoimprint processes with SU-8 resist. The degradation includes a 28% reduction in surface F atoms and significant increases in the surface roughness. In this paper, diamond-like carbon (DLC) films were investigated as an alternative material not only for coating but also for direct fabrication of nanoimprint templates. DLC films deposited on quartz templates in a plasma enhanced chemical vapor deposition system are shown to have better chemical and physical stability than FDTS. After the same 10 cycles of UV nanoimprints, the surface composition as well as the roughness of DLC films were found to be unchanged. The adhesion energy between the DLC surface and SU-8 is found to be smaller than that of FDTS despite the slightly higher total surface energy of DLC. DLC templates with 40 nm features were fabricated using e-beam lithography followed by Cr lift-off and reactive ion etching. UV nanoimprinting using the directly patterned DLC templates in SU-8 resist demonstrates good pattern transfer fidelity and easy template-resist separation. These results indicate that DLC is a promising material for fabricating durable templates for UV nanoimprint lithography

  8. Automatic two- and three-dimensional mesh generation based on fuzzy knowledge processing

    Science.gov (United States)

    Yagawa, G.; Yoshimura, S.; Soneda, N.; Nakao, K.

    1992-09-01

    This paper describes the development of a novel automatic FEM mesh generation algorithm based on the fuzzy knowledge processing technique. A number of local nodal patterns are stored in a nodal pattern database of the mesh generation system. These nodal patterns are determined a priori based on certain theories or the past experience of FEM analysis experts. For example, such human experts can determine nodal patterns suitable for stress concentration analyses of cracks, corners, holes and so on. Each nodal pattern possesses a membership function and a procedure for node placement according to this function. In the case of the nodal patterns for stress concentration regions, the membership function utilized in the fuzzy knowledge processing has two meanings, i.e. the "closeness" of a nodal location to each stress concentration field as well as the "nodal density". This is because a denser nodal pattern is required near a stress concentration field. All a user has to do in a practical mesh generation process is choose several local nodal patterns properly and designate the maximum nodal density of each pattern. After these simple operations by the user, the system automatically places the chosen nodal patterns in the analysis domain and on its boundary, and connects them smoothly by the fuzzy knowledge processing technique. Then triangular or tetrahedral elements are generated by means of the advancing front method. The key feature of the present algorithm is the easy control of complex two- or three-dimensional nodal density distributions by means of the fuzzy knowledge processing technique. To demonstrate the fundamental performance of the present algorithm, a prototype system was constructed in an object-oriented language, Smalltalk-80, on a 32-bit microcomputer, the Macintosh II. The mesh generation of several two- and three-dimensional domains with cracks, holes and junctions is presented as an example.
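
    A toy sketch of the membership-function idea: each stored pattern carries a function whose value near its feature is read both as "closeness" and as desired nodal density. The exponential form and the max-blend rule below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: fuzzy nodal-density field blended from stored nodal patterns.
import numpy as np

def crack_membership(p, feature_location, scale=2.0):
    """Membership decays with distance from the stress-concentration feature."""
    d = np.linalg.norm(np.asarray(p) - np.asarray(feature_location))
    return np.exp(-d / scale)

def nodal_density(p, patterns, base_density=0.1):
    """Take the strongest contribution among all chosen nodal patterns."""
    return max([base_density] +
               [max_density * crack_membership(p, loc, scale)
                for (loc, scale, max_density) in patterns])

patterns = [((0.0, 0.0), 1.5, 1.0),    # (feature location, scale, max density)
            ((10.0, 5.0), 3.0, 0.6)]
print(nodal_density((1.0, 0.5), patterns))   # dense near the first crack tip
```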

  9. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification.

    Science.gov (United States)

    Younghak Shin; Balasingham, Ilangko

    2017-07-01

    Colonoscopy is the standard method for screening polyps by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework. We aim to compare two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and the support vector machine (SVM) method is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling layers is used for classification. The proposed framework is evaluated using three public polyp databases. The experimental results show that the CNN-based deep learning framework achieves better classification performance than the hand-crafted feature based methods, reaching over 90% classification accuracy, sensitivity, specificity and precision.
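
    A minimal PyTorch sketch of a "three convolution and pooling" classifier of the kind described above; the channel counts and the 64×64 RGB input size are assumptions, not the paper's configuration.

```python
# Sketch: small CNN with three conv+pool blocks for binary polyp classification.
import torch
import torch.nn as nn

class PolypCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, n_classes)  # 64x64 -> 8x8

    def forward(self, x):                 # x: (N, 3, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = PolypCNN()(torch.randn(4, 3, 64, 64))  # -> shape (4, 2)
```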

  10. Emergence of a code in the polymerization of amino acids along RNA templates.

    Directory of Open Access Journals (Sweden)

    Jean Lehmann

    2009-06-01

    Full Text Available The origin of the genetic code in the context of an RNA world is a major problem in the field of biophysical chemistry. In this paper, we describe how the polymerization of amino acids along RNA templates can be affected by the properties of both molecules. Considering a system without enzymes, in which the tRNAs (the translation adaptors) are not loaded selectively with amino acids, we show that an elementary translation governed by a Michaelis-Menten type of kinetics can follow different polymerization regimes: random polymerization, homopolymerization and coded polymerization. The regime under which the system is running is set by the relative concentrations of the amino acids and the kinetic constants involved. We point out that the coding regime can naturally occur under prebiotic conditions. It generates partially coded proteins through a mechanism which is remarkably robust against non-specific interactions (mismatches) between the adaptors and the RNA template. Features of the genetic code support the existence of this early translation system.
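
    For reference, the standard Michaelis-Menten rate law that this type of kinetics follows is shown below; the paper's polymerization model builds regime-dependent rate expressions of this form, which are not reproduced here.

```latex
% Standard Michaelis-Menten form; v is the reaction rate and [S] the
% substrate concentration. Shown for reference only.
\[
  v = \frac{V_{\max}\,[S]}{K_M + [S]},
  \qquad
  K_M = \frac{k_{-1} + k_{\mathrm{cat}}}{k_{1}}
\]
```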

  11. Personal Identification and the Assessment of the Psychophysiological State While Writing a Signature

    Directory of Open Access Journals (Sweden)

    Pavel Lozhnikov

    2015-08-01

    Full Text Available This article discusses the problem of user identification and psychophysiological state assessment while writing a signature using a graphics tablet. The solution of the problem includes the creation of templates containing handwritten signature features simultaneously with the hidden registration of physiological parameters of the person being tested. A description of heart rate variability at different time points is used as the physiological parameter. As a result, a signature template is automatically generated for the psychophysiological states of an identified person. The problem of user identification and psychophysiological state assessment is then solved depending on the registered value of the physiological parameter.

  12. Subtle Distinctions: How Attentional Templates Influence EEG Parameters of Cognitive Control in a Spatial Cuing Paradigm

    Directory of Open Access Journals (Sweden)

    Christine Mertes

    2018-03-01

    Full Text Available Using event-related potentials (ERPs) of the electroencephalogram, we investigated how cognitive control is altered by the scope of an attentional template currently activated in visual working memory. Participants performed a spatial cuing task in which an irrelevant color singleton cue was presented prior to a target array. Blockwise, the target was either a red circle or a gray square and had to be searched for among homogeneous (gray circles) or heterogeneous non-targets (differently colored circles or various shapes). Thereby we aimed to trigger the adoption of different attentional templates: a broader singleton template or a narrower, more specific feature template. ERP markers of attentional selection and inhibitory control showed that the amount of cognitive control was overall enhanced when participants searched on the basis of a feature-specific template: the analysis revealed reduced selection (N2pc, frontal P2) and pronounced inhibition (negative shift of frontal N2) of the irrelevant color cue when participants searched for a feature target. At the behavioral level, attentional capture was most pronounced in the color condition, with no differentiation between the task-induced scopes of the attentional template.

  13. Deep Learning-Based Data Forgery Detection in Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Fengli [Univ. of Arkansas, Fayetteville, AR (United States); Li, Qinghua [Univ. of Arkansas, Fayetteville, AR (United States)

    2017-10-09

    Automatic Generation Control (AGC) is a key control system in the power grid. It is used to calculate the Area Control Error (ACE) based on frequency and tie-line power flow between balancing areas, and then adjust power generation to maintain the power system frequency in an acceptable range. However, attackers might inject malicious frequency or tie-line power flow measurements to mislead AGC into performing false generation corrections, which would harm power grid operation. Such attacks are hard to detect since they do not violate physical power system models. In this work, we propose algorithms based on neural networks and the Fourier transform to detect data forgery attacks in AGC. Different from the few previous works that rely on accurate load prediction to detect data forgery, our solution only uses the ACE data already available in existing AGC systems. In particular, our solution learns the normal patterns of the ACE time series and detects abnormal patterns caused by artificial attacks. Evaluations on a real ACE dataset show that our methods have high detection accuracy.
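
    A minimal sketch of the Fourier-transform side of such a detector: learn an average spectrum of ACE windows under normal operation, then score new windows by spectral distance. The window length and decision threshold are assumptions, and the neural-network branch is omitted.

```python
# Sketch: FFT-based anomaly score for ACE time series.
import numpy as np

def spectrum(window):
    """Magnitude spectrum of a mean-removed window."""
    return np.abs(np.fft.rfft(window - window.mean()))

def fit_normal_profile(ace_normal, win=256):
    """Average spectrum over non-overlapping windows of normal operation."""
    specs = [spectrum(ace_normal[i:i + win])
             for i in range(0, len(ace_normal) - win, win)]
    return np.mean(specs, axis=0)

def anomaly_score(ace_window, normal_profile):
    """Distance between this window's spectrum and the normal pattern."""
    return np.linalg.norm(spectrum(ace_window) - normal_profile)

# Flag an attack when anomaly_score(window, profile) exceeds a threshold
# chosen from normal data, e.g. mean + 3 * std of training scores.
```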

  14. Automatic extraction of corpus callosum from midsagittal head MR image and examination of Alzheimer-type dementia objective diagnostic system in feature analysis

    International Nuclear Information System (INIS)

    Kaneko, Tomoyuki; Kodama, Naoki; Kaeriyama, Tomoharu; Fukumoto, Ichiro

    2004-01-01

    We studied the objective diagnosis of Alzheimer-type dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 40 Alzheimer-type dementia patients (15 men and 25 women; mean age, 75.4±5.5 years) and 31 healthy elderly persons (10 men and 21 women; mean age, 73.4±7.5 years), 71 subjects altogether. First, the corpus callosum was automatically extracted from midsagittal head MR images. Next, Alzheimer-type dementia was compared with the healthy elderly individuals using the features of shape factor and six features of Co-occurrence Matrix from the corpus callosum. Automatic extraction of the corpus callosum succeeded in 64 of 71 individuals, for an extraction rate of 90.1%. A statistically significant difference was found in 7 of the 9 features between Alzheimer-type dementia patients and the healthy elderly adults. Discriminant analysis using the 7 features demonstrated a sensitivity rate of 82.4%, specificity of 89.3%, and overall accuracy of 85.5%. These results indicated the possibility of an objective diagnostic system for Alzheimer-type dementia using feature analysis based on change in the corpus callosum. (author)

  15. Evaluation of a feature extraction framework for FPGA firmware generation during a beam-test at CERN-SPS for the CBM-TRD experiment

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Chavez, Cruz de Jesus; Munoz Castillo, Carlos Enrique; Kebschull, Udo [Infrastructure and Computer Systems in Data Processing (IRI), Goethe University, Frankfurt am Main (Germany); Collaboration: CBM-Collaboration

    2016-07-01

    A feature extraction framework has been developed to allow easy FPGA firmware generation for specific feature extraction algorithms, in order to find and extract regions of interest within time-based signals. The framework allows the instantiation of multiple well-known feature extraction algorithms such as center of gravity, time over threshold and cluster finding, to mention just a few. A graphical user interface has also been built on top of the framework to provide a user-friendly way to visualize the data-flow architecture across processing stages. The FPGA platform constraints are automatically set up by the framework itself. This feature reduces the need for the low-level hardware configuration knowledge that would normally have to be provided by the user, focusing attention on setting up the processing algorithms for the given task rather than on writing hardware description code. During November 2015, a beam-test was performed at the CERN-SPS hall. The presented framework was used to generate firmware for the SysCore3 FPGA development board used to read out two TRD detectors by means of the SPADIC 1.0 front-end chip. The framework architecture and design methodology, as well as the results achieved during the mentioned beam-test, are presented.

  16. An automatic algorithm for blink-artifact suppression based on iterative template matching: application to single channel recording of cortical auditory evoked potentials

    Science.gov (United States)

    Valderrama, Joaquin T.; de la Torre, Angel; Van Dun, Bram

    2018-02-01

    Objective. Artifact reduction in electroencephalogram (EEG) signals is usually necessary to carry out data analysis appropriately. Despite the large number of denoising techniques available with a multichannel setup, there is a lack of efficient algorithms that remove (not only detect) blink-artifacts from a single channel EEG, which is of interest in many clinical and research applications. This paper describes and evaluates iterative template matching and suppression (ITMS), a new method proposed for detecting and suppressing the artifact associated with blink activity in a single channel EEG. Approach. The approach of ITMS consists of (a) an iterative process in which blink-events are detected and the blink-artifact waveform of the analyzed subject is estimated, (b) generation of a signal modeling the blink-artifact, and (c) suppression of this signal from the raw EEG. The performance of ITMS is compared with the multi-window summation of derivatives within a window (MSDW) technique using both synthesized and real EEG data. Main results. Results suggest that ITMS presents adequate performance in detecting and suppressing blink-artifacts from a single channel EEG. When applied to the analysis of cortical auditory evoked potentials (CAEPs), ITMS provides a significant quality improvement in the resulting responses, i.e. in a cohort of 30 adults, the mean correlation coefficient improved from 0.37 to 0.65 when the blink-artifacts were detected and suppressed by ITMS. Significance. ITMS is an efficient solution to the problem of denoising blink-artifacts in single-channel EEG applications, both in clinical and research fields. The proposed ITMS algorithm is stable; automatic, since it does not require human intervention; low-invasive, because the EEG segments not contaminated by blink-artifacts remain unaltered; and easy to implement, as can be observed in the Matlab script implementing the algorithm, provided as supporting material.
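
    A simplified sketch of the template-estimation and suppression steps on one channel: average detected blink epochs into a subject-specific template, then subtract a least-squares-scaled copy of it at each blink. Detection is assumed done (blink_onsets), whereas ITMS itself refines detection and the template iteratively.

```python
# Sketch: estimate a subject-specific blink template, then subtract a
# scaled copy of it at each detected blink onset.
import numpy as np

def estimate_template(eeg, blink_onsets, half_width=128):
    epochs = [eeg[t - half_width:t + half_width] for t in blink_onsets
              if half_width <= t <= len(eeg) - half_width]
    return np.mean(epochs, axis=0)           # average blink waveform

def suppress_blinks(eeg, blink_onsets, template):
    clean = eeg.astype(float).copy()
    hw = len(template) // 2
    for t in blink_onsets:
        if t - hw < 0 or t + hw > len(clean):
            continue                          # skip blinks at the edges
        seg = slice(t - hw, t + hw)
        a = np.dot(clean[seg], template) / np.dot(template, template)
        clean[seg] -= a * template            # least-squares scaled subtraction
    return clean
```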

  17. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    Science.gov (United States)

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep depending on the stage of sleep to which they belong, is an arduous, time consuming and error prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders the signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments of a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.

  18. Automatic garment template sewing technology based on machine identification

    Institute of Scientific and Technical Information of China (English)

    张华玲; 戴斌辉; 原竞杰

    2016-01-01

    In view of the low efficiency of the traditional garment sewing process, its dependence on manual operation, and other issues, an automatic sewing technology for intelligent garment templates is put forward based on visual technology. A mechanical body with three degrees of freedom of motion in the X/Y/Z directions is designed to complete the cutting and molding of fabric, PVC, leather and other materials. Then, by using teaching-based acquisition with intelligent vision technology, the sample pattern is automatically generated to complete intelligent trajectory planning, and the sequential actions of the mechanical body are driven through an embedded platform. The automation of garment sewing is thus realized, making operations more streamlined, standardized and efficient, and decreasing the dependence of garment factories on skilled workers.

  19. Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3

    Science.gov (United States)

    2015-12-01

    Future work includes supporting more protocols (especially at different layers of the OSI model) and implementing an inference engine to extract inter- and intra-packet dependencies. Report ARL-TR-7543, December 2015, US Army Research Laboratory, Survivability/Lethality Analysis Directorate; by Jaime C Acosta and Felipe Jovel, with Felipe Sotelo and Caesar

  20. Automatic generation of medium-detailed 3D models of buildings based on CAD data

    NARCIS (Netherlands)

    Dominguez-Martin, B.; Van Oosterom, P.; Feito-Higueruela, F.R.; Garcia-Fernandez, A.L.; Ogayar-Anguita, C.J.

    2015-01-01

    We present the preliminary results of a work in progress which aims to obtain a software system able to automatically generate a set of diverse 3D building models with a medium level of detail, that is, more detailed than a mere parallelepiped, but not as detailed as a complete geometric

  1. Automatic Generation of the Planing Tunnel High Speed Craft Hull Form

    Institute of Scientific and Technical Information of China (English)

    Morteza Ghassabzadeh; Hassan Ghassemi

    2012-01-01

    The creation of the geometric model of a ship to determine its hydrostatic and hydrodynamic characteristics, and also for structural design and equipment arrangement, is very important in the ship design process. The planing tunnel high speed craft is one of the crafts for which achieving top speed matters most. These crafts use the aero-hydrodynamic properties of the tunnel to diminish resistance, obtain good sea-keeping behavior, reduce slamming and avoid porpoising. Because of the existence of the tunnel, the hull form generation of these crafts is more complex and difficult. In this paper, we attempt to provide a method, based on geometry creation guidelines and requiring entry of only a few control and hull form adjustment parameters, to automatically generate the hull form of a planing tunnel craft. First, the equations of the mathematical model are described; subsequently, three different models generated by the present method are compared and analyzed. The generated model clearly has most application in the early stages of design.

  2. A Hybrid Approach to Protect Palmprint Templates

    Directory of Open Access Journals (Sweden)

    Hailun Liu

    2014-01-01

    Full Text Available Biometric template protection is indispensable to protect personal privacy in large-scale deployment of biometric systems. Accuracy, changeability, and security are three critical requirements for template protection algorithms. However, existing template protection algorithms cannot satisfy all these requirements well. In this paper, we propose a hybrid approach that combines random projection and fuzzy vault to improve the performances at these three points. Heterogeneous space is designed for combining random projection and fuzzy vault properly in the hybrid scheme. New chaff point generation method is also proposed to enhance the security of the heterogeneous vault. Theoretical analyses of proposed hybrid approach in terms of accuracy, changeability, and security are given in this paper. Palmprint database based experimental results well support the theoretical analyses and demonstrate the effectiveness of proposed hybrid approach.
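
    A sketch of the random-projection half of such a hybrid scheme: a token-derived random matrix projects the palmprint feature vector into a revocable space, so a compromised template can be revoked by issuing a new token. The dimensions and the seeding scheme are illustrative assumptions; the fuzzy-vault half is not shown.

```python
# Sketch: token-seeded random projection for cancelable templates.
import numpy as np

def projection_matrix(token_seed, out_dim, in_dim):
    rng = np.random.default_rng(token_seed)    # token -> reproducible matrix
    R = rng.standard_normal((out_dim, in_dim))
    return R / np.sqrt(out_dim)                # rough norm preservation

def cancelable_template(features, token_seed, out_dim=64):
    R = projection_matrix(token_seed, out_dim, features.shape[0])
    return R @ features                        # revoke by changing the seed

palm_features = np.random.rand(256)            # stand-in palmprint features
protected = cancelable_template(palm_features, token_seed=123456)
```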

  3. Colloidal micro- and nano-particles as templates for polyelectrolyte multilayer capsules.

    Science.gov (United States)

    Parakhonskiy, Bogdan V; Yashchenok, Alexey M; Konrad, Manfred; Skirtach, Andre G

    2014-05-01

    Colloidal particles play an important role in various areas of material and pharmaceutical sciences, biotechnology, and biomedicine. In this overview we describe micro- and nano-particles used for the preparation of polyelectrolyte multilayer capsules and as drug delivery vehicles. An essential feature of polyelectrolyte multilayer capsule preparations is the ability to adsorb polymeric layers onto colloidal particles or templates followed by dissolution of these templates. The choice of the template is determined by various physico-chemical conditions: solvent needed for dissolution, porosity, aggregation tendency, as well as release of materials from capsules. Historically, the first templates were based on melamine formaldehyde, later evolving towards more elaborate materials such as silica and calcium carbonate. Their advantages and disadvantages are discussed here in comparison to non-particulate templates such as red blood cells. Further steps in this area include development of anisotropic particles, which themselves can serve as delivery carriers. We provide insights into application of particles as drug delivery carriers in comparison to microcapsules templated on them. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. A method for real-time implementation of HOG feature extraction

    Science.gov (United States)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are feature descriptors which are widely used in computer vision and image processing for purposes such as biometrics, target tracking, automatic target detection (ATD) and automatic target recognition (ATR). However, the computation of HOG feature extraction is unsuitable for hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on an FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be implemented in one pixel period by these computing units.
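
    For reference, a plain NumPy sketch of the per-cell HOG computation that such a hardware pipeline parallelizes; the 9-bin unsigned-orientation layout is a common choice assumed here. In hardware, the arctangent and square-root steps are typically replaced by lookup tables or CORDIC units, which is the kind of simplification the abstract refers to.

```python
# Sketch: magnitude-weighted orientation histogram of one HOG cell.
import numpy as np

def cell_hog(cell, n_bins=9):
    gy, gx = np.gradient(cell.astype(float))       # image gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bin_width = 180.0 / n_bins
    idx = np.minimum((ang / bin_width).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, idx.ravel(), mag.ravel())      # magnitude-weighted votes
    return hist

print(cell_hog(np.random.rand(8, 8)))              # 9-bin histogram of a cell
```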

  5. A heads-up no-limit Texas Hold'em poker player: Discretized betting models and automatically generated equilibrium-finding programs

    DEFF Research Database (Denmark)

    Gilpin, Andrew G.; Sandholm, Tuomas; Sørensen, Troels Bjerre

    2008-01-01

    choices in the game. Second, we employ potential-aware automated abstraction algorithms for identifying strategically similar situations in order to decrease the size of the game tree. Third, we develop a new technique for automatically generating the source code of an equilibrium-finding algorithm from...... an XML-based description of a game. This automatically generated program is more efficient than what would be possible with a general-purpose equilibrium-finding program. Finally, we present results from the AAAI-07 Computer Poker Competition, in which Tartanian placed second out of ten entries....

  6. Adaptive neuro-fuzzy inference system based automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, S.H.; Etemadi, A.H. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran)

    2008-07-15

    Fixed gain controllers for automatic generation control are designed at nominal operating conditions and fail to provide the best control performance over a wide range of operating conditions. So, to keep system performance near its optimum, it is desirable to track the operating conditions and use updated parameters to compute the control gains. A control scheme based on an adaptive neuro-fuzzy inference system (ANFIS), which is trained by the results of off-line studies obtained using particle swarm optimization, is proposed in this paper to optimize and update control gains in real-time according to load variations. Also, frequency relaxation is implemented using ANFIS. The efficiency of the proposed method is demonstrated via simulations. Compliance of the proposed method with the NERC control performance standard is verified. (author)

  7. Automatic Overset Grid Generation with Heuristic Feedback Control

    Science.gov (United States)

    Robinson, Peter I.

    2001-01-01

    An advancing front grid generation system for structured Overset grids is presented which automatically modifies Overset structured surface grids and control lines until user-specified grid qualities are achieved. The system is demonstrated on two examples: the first refines a space shuttle fuselage control line until global truncation error is achieved; the second advances, from control lines, the space shuttle orbiter fuselage top and fuselage side surface grids until proper overlap is achieved. Surface grids are generated in minutes for complex geometries. The system is implemented as a heuristic feedback control (HFC) expert system which iteratively modifies the input specifications for Overset control line and surface grids. It is developed as an extension of modern control theory, production rules systems and subsumption architectures. The methodology provides benefits over the full knowledge lifecycle of an expert system for knowledge acquisition, knowledge representation, and knowledge execution. The vector/matrix framework of modern control theory systematically acquires and represents expert system knowledge. Missing matrix elements imply missing expert knowledge. The execution of the expert system knowledge is performed through symbolic execution of the matrix algebra equations of modern control theory. The dot product operation of matrix algebra is generalized for heuristic symbolic terms. Constant time execution is guaranteed.

  8. Automatic visual inspection of a missing split pin in the China railway high-speed.

    Science.gov (United States)

    Lu, Shengfang; Liu, Zhen

    2016-10-20

    The split pin (SP) on the caliper brake is a vital component of the brake system of a bogie traveling on the China Railway High-speed (CRH), and the absence of the SP could cause serious train accidents. A new automatic visual inspection method is proposed for the quick and accurate detection of SP faults on the CRH. The proposed approach is based on the histogram of oriented gradients (HOG) combined with the complete local binary pattern (CLBP). First, a fast pyramid template matching technique is presented for localizing the region of interest to reduce the search scope. Under the multiresolution pyramid model for target localization, a coarse-to-fine strategy is employed so that the recognition speed of the SP over the entire image is increased significantly. Second, a hierarchical framework is adopted at the localization and inspection stages of the SP to automatically implement the inspection tasks. To increase robustness to complex outside illumination, the HOG feature for localizing the target and the CLBP feature for examining the state of the SP (i.e., missing or not missing) are extracted in the Sobel gradient domain. The localization and recognition stages are both fulfilled through the use of their respective intersection-kernel support vector machine classifiers and corresponding features. In conclusion, experimental results indicate that the inspection system achieves a high accuracy rate of more than 99.0% at real-time speed, thus proving that the proposed method is effective for fault inspection of the SP and can satisfy the requirements of the CRH's actual application.
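
    A sketch of the coarse-to-fine localization idea with OpenCV template matching: find the best match at a downsampled level, then refine inside a small full-resolution window. The scale factor and search margin are assumptions, and grayscale inputs are assumed.

```python
# Sketch: coarse-to-fine template localization with OpenCV.
import cv2
import numpy as np

def coarse_to_fine_match(image, template, scale=0.25, margin=32):
    # Coarse pass on downsampled copies of both image and template.
    small_img = cv2.resize(image, None, fx=scale, fy=scale)
    small_tpl = cv2.resize(template, None, fx=scale, fy=scale)
    res = cv2.matchTemplate(small_img, small_tpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, loc = cv2.minMaxLoc(res)              # best coarse location
    x, y = int(loc[0] / scale), int(loc[1] / scale)

    # Fine pass in a small full-resolution window around the coarse hit.
    h, w = template.shape[:2]
    x0, y0 = max(0, x - margin), max(0, y - margin)
    roi = image[y0:y0 + h + 2 * margin, x0:x0 + w + 2 * margin]
    res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    return (x0 + loc[0], y0 + loc[1]), score       # position and confidence
```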

  9. Automatic Generation of 3D Building Models with Multiple Roofs

    Institute of Scientific and Technical Information of China (English)

    Kenichi Sugihara; Yoshitugu Hayashi

    2008-01-01

    Based on building footprints (building polygons) on digital maps, we propose a GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygons). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we propose a useful polygon expression for deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.

  10. AUTOMATIC EXTRACTION OF ROAD MARKINGS FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    H. Ma

    2017-09-01

    Full Text Available Road markings, as critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth; points with small elevation differences from their neighborhood are considered ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of points in each profile is generated to find intensity jumps above a threshold that varies inversely with the laser distance. The separated points are used as seed points for intensity-based region growing so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In experiments with an MLS point set covering about 2 kilometres of a city center, our method provides a promising solution to road markings extraction from MLS data.

  11. Application of GA optimization for automatic generation control design in an interconnected power system

    International Nuclear Information System (INIS)

    Golpira, H.; Bevrani, H.; Golpira, H.

    2011-01-01

    Highlights: → A realistic model for automatic generation control (AGC) design is proposed. → The model considers GRC, speed governor dead band, filters and time delay. → The model provides an accurate basis for digital simulations. -- Abstract: This paper addresses a realistic model for automatic generation control (AGC) design in an interconnected power system. The proposed scheme considers the generation rate constraint (GRC), dead band, and time delay imposed on the power system by the governor-turbine, filters, thermodynamic process, and communication channels. The simplicity of structure and acceptable response of the well-known integral controller make it attractive for the power system AGC design problem. The genetic algorithm (GA) is used to compute the decentralized control parameters to achieve an optimum operating point. A 3-control-area power system is considered as a test system, and the closed-loop performance is examined in the presence of various constraint scenarios. It is shown that neglecting the above physical constraints, simultaneously or in part, leads to impractical and invalid results and may affect the system security, reliability and integrity. Taking into account the advantages of GA, besides considering a more complete dynamic model, provides a flexible and more realistic AGC system in comparison with existing conventional schemes.

  12. Application of GA optimization for automatic generation control design in an interconnected power system

    Energy Technology Data Exchange (ETDEWEB)

    Golpira, H., E-mail: hemin.golpira@uok.ac.i [Department of Electrical and Computer Engineering, University of Kurdistan, Sanandaj, PO Box 416, Kurdistan (Iran, Islamic Republic of); Bevrani, H. [Department of Electrical and Computer Engineering, University of Kurdistan, Sanandaj, PO Box 416, Kurdistan (Iran, Islamic Republic of); Golpira, H. [Department of Industrial Engineering, Islamic Azad University, Sanandaj Branch, PO Box 618, Kurdistan (Iran, Islamic Republic of)

    2011-05-15

    Highlights: → A realistic model for automatic generation control (AGC) design is proposed. → The model considers GRC, speed governor dead band, filters and time delay. → The model provides an accurate basis for digital simulations. -- Abstract: This paper addresses a realistic model for automatic generation control (AGC) design in an interconnected power system. The proposed scheme considers the generation rate constraint (GRC), dead band, and time delay imposed on the power system by the governor-turbine, filters, thermodynamic process, and communication channels. The simplicity of structure and acceptable response of the well-known integral controller make it attractive for the power system AGC design problem. The genetic algorithm (GA) is used to compute the decentralized control parameters to achieve an optimum operating point. A 3-control-area power system is considered as a test system, and the closed-loop performance is examined in the presence of various constraint scenarios. It is shown that neglecting the above physical constraints, simultaneously or in part, leads to impractical and invalid results and may affect the system security, reliability and integrity. Taking into account the advantages of GA, besides considering a more complete dynamic model, provides a flexible and more realistic AGC system in comparison with existing conventional schemes.

  13. Next Generation Model 8800 Automatic TLD Reader

    International Nuclear Information System (INIS)

    Velbeck, K.J.; Streetz, K.L.; Rotunda, J.E.

    1999-01-01

    BICRON NE has developed an advanced version of the Model 8800 Automatic TLD Reader. Improvements in the reader include a Windows NT™-based operating system and a Pentium microprocessor for the host controller, a servo-controlled transport, a VGA display, mouse control, and modular assembly. This high capacity reader will automatically read fourteen hundred TLD Cards in one loading. Up to four elements in a card can be heated without mechanical contact, using hot nitrogen gas. Improvements in performance include an increased throughput rate and more precise card positioning. Operation is simplified through easy-to-read Windows-type screens. Glow curves are displayed graphically along with light intensity, temperature, and channel scaling. Maintenance and diagnostic aids are included for easier troubleshooting. A click of a mouse will command actions that are displayed in easy-to-understand English words. Available options include an internal ⁹⁰Sr irradiator, automatic TLD calibration, and two different extremity monitoring modes. Results from testing include reproducibility, reader stability, linearity, detection threshold, residue, primary power supply voltage and frequency, transient voltage, drop testing, and light leakage. (author)

  14. Tailoring silver nanoparticle construction using dendrimer templated silica networks

    International Nuclear Information System (INIS)

    Liu Xiaojun; Kakkar, Ashok

    2008-01-01

    We have examined the role of the internal environment of dendrimer-templated silica networks in tailoring the construction of silver nanoparticle assemblies. Silica networks from which the 3,5-dihydroxybenzyl alcohol based dendrimer templates have been completely removed are slowly wetted by an aqueous solution of silver acetate. The latter then reacts with internal silica silanol groups, leading to chemisorption of silver ions, followed by the growth of silver oxide nanoparticles. The silica network constructed using the generation-4 dendrimer contains residual dendrimer template and mixes easily with the aqueous silver acetate solution. Upon chemisorption, the silver ions are photolytically reduced to silver metal in a stabilizing dendrimer environment, leading to the formation of silver metal nanoparticles.

  15. GPP Webinar: Solar Procurement Templates and Tools for Higher Education

    Science.gov (United States)

    A Green Power Partnership webinar on solar procurement for Higher Education, featuring various tools and templates that schools can use to shape and manage the solar procurement process to a successful outcome.

  16. Automatic ID heat load generation in ANSYS code

    International Nuclear Information System (INIS)

    Wang, Zhibi.

    1992-01-01

    Detailed power density profiles are critical in the execution of a thermal analysis using a finite element (FE) code such as ANSYS. Unfortunately, as yet there is no easy way to directly input precise power profiles into ANSYS. A straightforward way to do this is to hand-calculate the power of each node or element and then type the data into the code. Every time a change is made to the FE model, the data must be recalculated and reentered. One way to solve this problem is to generate a set of discrete data, using another code such as PHOTON2, and curve-fit the data. Using curve-fitted formulae has several disadvantages. It is time consuming because of the need to run a second code to generate the data, curve-fit them, check the results, etc. Additionally, because there is no generality for different beamlines or different parameters, the above work must be repeated for each case. And errors in the power profiles due to curve-fitting result in errors in the analysis. To solve the problem once and for all, and with the capability to apply to any insertion device (ID), a program for ID power profiles was written in the ANSYS Parametric Design Language (APDL). This program is implemented as an ANSYS command with input parameters of peak magnetic field, deflection parameter, length of ID, and distance from the source. Once the command is issued, all the heat load is automatically generated by the code.
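    The record describes an APDL macro; as a companion sketch of the same idea, the Python snippet below writes classic APDL surface-load commands (SF with the HFLUX label) for nodes exported from an FE model. The Gaussian footprint stands in for the real ID power-density formula, and the total power and widths are placeholder assumptions rather than values derived from the field, deflection parameter, length and distance inputs named in the abstract.

        import numpy as np

        def id_power_density(x, y, ptot=1.0e3, sx=2.0, sy=0.5):
            # Assumed Gaussian power-density footprint (W/mm^2); a real ID
            # profile would be computed from K, B0, L and the source distance.
            g = np.exp(-0.5 * ((x / sx) ** 2 + (y / sy) ** 2))
            return ptot * g / (2.0 * np.pi * sx * sy)

        def write_sf_commands(nodes, path):
            # Emit one 'SF,node,HFLUX,value' line per (id, x, y) node so the
            # flux can be read back into ANSYS with /INPUT.
            with open(path, "w") as f:
                for nid, x, y in nodes:
                    f.write(f"SF,{int(nid)},HFLUX,{id_power_density(x, y):.6e}\n")

        nodes = np.array([[1, 0.0, 0.0], [2, 1.0, 0.0], [3, 0.0, 1.0]])
        write_sf_commands(nodes, "id_heatload.inp")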

  17. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation.

    Science.gov (United States)

    Mangado, Nerea; Ceresa, Mario; Duchateau, Nicolas; Kjer, Hans Martin; Vera, Sergio; Dejea Velardo, Hector; Mistrik, Pavel; Paulsen, Rasmus R; Fagertun, Jens; Noailly, Jérôme; Piella, Gemma; González Ballester, Miguel Ángel

    2016-08-01

    Recent developments in computational modeling of cochlear implantation promise to enable in silico study of the implant's performance before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns constitutive parameters to all components of the finite element model. This model can then be used to study in silico the effects of the electrical stimulation of the cochlear implant. Results are shown for a total of 25 patient models. In all cases, a final mesh suitable for finite element simulations was obtained, in an average time of 94 s. The framework has proven to be fast and robust, and is promising for a detailed prognosis of the cochlear implantation surgery.

  18. LINGUISTIC DATABASE FOR AUTOMATIC GENERATION SYSTEM OF ENGLISH ADVERTISING TEXTS

    Directory of Open Access Journals (Sweden)

    N. A. Metlitskaya

    2017-01-01

    Full Text Available The article deals with the linguistic database for a system of automatic generation of English advertising texts on cosmetics and perfumery. The database for such a system includes two main blocks: an automatic dictionary (containing semantic and morphological information for each word) and semantic-syntactical formulas of the texts in a special formal language, SEMSINT. The database is built on the results of the analysis of 30 English advertising texts on cosmetics and perfumery. First, each word was given a unique code. For example, N stands for nouns, A – for adjectives, V – for verbs, etc. Then all the lexicon of the analyzed texts was distributed into different semantic categories. According to this semantic classification each word was given a special semantic code. For example, the record N01 that is attributed to the word «lip» in the dictionary means that this word refers to nouns of the semantic category «part of a human's body». The second block of the database includes the semantic-syntactical formulas of the analyzed advertising texts written in the special formal language SEMSINT. The author gives a brief description of this language, presenting its essence and structure. Also, an example of one formalized advertising text in SEMSINT is provided.
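    A miniature of the two-block design might look as follows in Python; the codes mimic the N01-style classes from the article, but the extra words, codes and formula syntax are invented for illustration, not taken from the actual SEMSINT database.

        # Block 1: automatic dictionary mapping words to (POS, semantic) codes.
        DICTIONARY = {
            "lip":     ("N", "N01"),  # noun, class "part of a human's body"
            "smooth":  ("A", "A03"),  # adjective, hypothetical class code
            "nourish": ("V", "V02"),  # verb, hypothetical class code
        }

        # Block 2: a SEMSINT-like formula as a sequence of semantic codes.
        FORMULA = ["V02", "A03", "N01"]

        def generate(formula, dictionary):
            # Fill each semantic slot with a matching word -- a toy stand-in
            # for the text generator driven by the database.
            by_code = {sem: word for word, (_, sem) in dictionary.items()}
            return " ".join(by_code[code] for code in formula)

        print(generate(FORMULA, DICTIONARY))  # -> "nourish smooth lip"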

  19. Automatic generation of a subject-specific model for accurate markerless motion capture and biomechanical applications.

    Science.gov (United States)

    Corazza, Stefano; Gambaretto, Emiliano; Mündermann, Lars; Andriacchi, Thomas P

    2010-04-01

    A novel approach for the automatic generation of a subject-specific model consisting of morphological and joint-location information is described. The aim is to address the need for efficient and accurate model generation for markerless motion capture (MMC) and biomechanical studies. The algorithm applies and expands on previous work on human shape spaces by embedding location information for ten joint centers in a subject-specific free-form surface. The optimal locations of joint centers in the 3-D mesh were learned through linear regression over a set of nine subjects whose joint centers were known. The model was shown to be sufficiently accurate for both kinematic (joint centers) and morphological (shape of the body) information to allow accurate tracking with MMC systems. The automatic model generation algorithm was applied to 3-D meshes of different quality and resolution, such as laser scans and visual hulls. The complete method was tested using nine subjects of different gender, body mass index (BMI), age, and ethnicity. Experimental training error and cross-validation errors were 19 and 25 mm, respectively, on average over the joints of the subjects analyzed in the study.
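    The joint-center learning step reduces to ordinary least squares from shape-space coefficients to joint coordinates. Below is a minimal numpy sketch on synthetic data (the dimensions and noise level are arbitrary assumptions; the paper's shape space and training set are not reproduced here).

        import numpy as np

        rng = np.random.default_rng(1)
        n_subj, n_coeff, n_joints = 9, 10, 10

        B = rng.normal(size=(n_subj, n_coeff))              # shape coefficients
        X = np.hstack([B, np.ones((n_subj, 1))])            # add bias column
        W_true = rng.normal(size=(n_coeff + 1, 3 * n_joints))
        Y = X @ W_true + rng.normal(scale=1e-3, size=(n_subj, 3 * n_joints))

        # Linear regression from shape space to joint-center locations.
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)

        # Predict all ten joint centers for a new subject's fitted coefficients.
        b_new = rng.normal(size=n_coeff)
        joints = (np.append(b_new, 1.0) @ W).reshape(n_joints, 3)
        print(joints.shape)  # (10, 3)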

  20. Less is more : unparser-completeness of metalanguages for template engines

    NARCIS (Netherlands)

    Arnoldus, B.J.; Brand, van den M.G.J.; Serebrenik, A.

    2011-01-01

    A code generator is a program translating an input model into code. In this paper we focus on template-based code generators in the context of the model view controller architecture (MVC). The language in which the code generator is written is known as a metalanguage in the code generation parlance.

  1. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e., automatically controlling the virtual…

  2. Automatic generation of warehouse mediators using an ontology engine

    Energy Technology Data Exchange (ETDEWEB)

    Critchlow, T., LLNL

    1998-04-01

    Data warehouses created for dynamic scientific environments, such as genetics, face significant challenges to their long-term feasibility. One of the most significant of these is the high frequency of schema evolution resulting from both technological advances and scientific insight. Failure to quickly incorporate these modifications will quickly render the warehouse obsolete, yet each evolution requires significant effort to ensure the changes are correctly propagated. DataFoundry utilizes a mediated warehouse architecture with an ontology infrastructure to reduce the maintenance requirements of a warehouse. Among other things, the ontology is used as an information source for automatically generating mediators, the methods that transfer data between the data sources and the warehouse. The identification, definition and representation of the metadata required to perform this task are a primary contribution of this work.

  3. Efficient Generation and Selection of Combined Features for Improved Classification

    KAUST Repository

    Shono, Ahmad N.

    2014-01-01

    This study contributes a methodology and associated toolkit developed to allow users to experiment with the use of combined features in classification problems. Methods are provided for efficiently generating combined features from an original

  4. Automatic Generation Control Study in Two Area Reheat Thermal Power System

    Science.gov (United States)

    Pritam, Anita; Sahu, Sibakanta; Rout, Sushil Dev; Ganthia, Sibani; Prasad Ganthia, Bibhu

    2017-08-01

    Industrial pollution has degraded our living environment. An electric grid system has many vital pieces of equipment, such as generators, motors, transformers and loads. There is always an imbalance between the sending-end and receiving-end systems, which makes the system unstable. Such errors and faults should be corrected as soon as possible, or they cause further system errors and a loss of efficiency in the whole power system. The main problem arising from these faults is frequency deviation, which destabilizes the power system and may cause permanent damage. The mechanism studied in this paper therefore makes the system stable and balanced by regulating the frequency at both the sending- and receiving-end power systems using automatic generation control with various controllers, taking a two-area reheat thermal power system into account.

  5. Automatic Feature Selection and Weighting for the Formation of Homogeneous Groups for Regional Intensity-Duration-Frequency (IDF) Curve Estimation

    Science.gov (United States)

    Yang, Z.; Burn, D. H.

    2017-12-01

    Extreme rainfall events can have devastating impacts on society. To quantify the associated risk, the IDF curve has been used to provide the essential rainfall-related information for urban planning. However, the recent changes in the rainfall climatology caused by climate change and urbanization have made the estimates provided by the traditional regional IDF approach increasingly inaccurate. This inaccuracy is mainly caused by two problems: 1) The ineffective choice of similarity indicators for the formation of a homogeneous group at different regions; and 2) An inadequate number of stations in the pooling group that does not adequately reflect the optimal balance between group size and group homogeneity or achieve the lowest uncertainty in the rainfall quantiles estimates. For the first issue, to consider the temporal difference among different meteorological and topographic indicators, a three-layer design is proposed based on three stages in the extreme rainfall formation: cloud formation, rainfall generation and change of rainfall intensity above urban surface. During the process, the impacts from climate change and urbanization are considered through the inclusion of potential relevant features at each layer. Then to consider spatial difference of similarity indicators for the homogeneous group formation at various regions, an automatic feature selection and weighting algorithm, specifically the hybrid searching algorithm of Tabu search, Lagrange Multiplier and Fuzzy C-means Clustering, is used to select the optimal combination of features for the potential optimal homogenous groups formation at a specific region. For the second issue, to compare the uncertainty of rainfall quantile estimates among potential groups, the two sample Kolmogorov-Smirnov test-based sample ranking process is used. During the process, linear programming is used to rank these groups based on the confidence intervals of the quantile estimates. The proposed methodology fills the gap

  6. Templating mesoporous zeolites

    DEFF Research Database (Denmark)

    Egeblad, Kresten; Christensen, Christina Hviid; Kustova, Marina

    2008-01-01

    The application of templating methods to produce zeolite materials with hierarchical bi- or trimodal pore size distributions is reviewed with emphasis on mesoporous materials. Hierarchical zeolite materials are categorized into three distinctly different types of materials: hierarchical zeolite crystals, nanosized zeolite crystals, and supported zeolite crystals. For the pure zeolite materials in the first two categories, the additional meso- or macroporosity can be classified as being either intracrystalline or intercrystalline, whereas for supported zeolite materials, the additional porosity originates almost exclusively from the support material. The methods for introducing mesopores into zeolite materials are discussed and categorized. In general, mesopores can be templated in zeolite materials by use of solid templating, supramolecular templating, or indirect templating.

  7. AN AUTOMATIC OPTICAL AND SAR IMAGE REGISTRATION METHOD USING ITERATIVE MULTI-LEVEL AND REFINEMENT MODEL

    Directory of Open Access Journals (Sweden)

    C. Xu

    2016-06-01

    Full Text Available Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level strategy of coarse-to-fine registration is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that, sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level-set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into spectral point matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  8. Automatic generation of anatomic characteristics from cerebral aneurysm surface models.

    Science.gov (United States)

    Neugebauer, M; Lawonn, K; Beuing, O; Preim, B

    2013-03-01

    Computer-aided research on cerebral aneurysms often depends on a polygonal mesh representation of the vessel lumen. To support a differentiated, anatomy-aware analysis, it is necessary to derive anatomic descriptors from the surface model. We present an approach for automatic decomposition of the adjacent vessels into near- and far-vessel regions and computation of the axial plane. We also exemplarily present two applications of the geometric descriptors: automatic computation of a unique vessel order and automatic viewpoint selection. Approximation methods are employed to analyze vessel cross-sections and the vessel area profile along the centerline. The resulting transition zones between near- and far-vessel regions are used as input for an optimization process to compute the axial plane. The unique vessel order is defined via projection into the plane space of the axial plane. The viewing direction for the automatic viewpoint selection is derived from the normal vector of the axial plane. The approach was successfully applied to representative data sets exhibiting a broad variability with respect to the configuration of their adjacent vessels. A robustness analysis showed that the automatic decomposition is stable against noise. A survey with 4 medical experts showed a broad agreement with the automatically defined transition zones. Due to the general nature of the underlying algorithms, this approach is applicable to most of the likely aneurysm configurations in the cerebral vasculature. Additional geometric information obtained during automatic decomposition can support correction in case the automatic approach fails. The resulting descriptors can be used for various applications in the field of visualization, exploration and analysis of cerebral aneurysms.

  9. Features and perspectives of automatized construction crane-manipulators

    Science.gov (United States)

    Stepanov, Mikhail A.; Ilukhin, Peter A.

    2018-03-01

    Modern construction industry still has a high percentage of manual labor, and the greatest prospects for improving the construction process lie in the field of automatization. In this article, automatized construction manipulator-cranes are studied in order to arrive at the most rational design scheme. This is done by formulating a list of general conditions necessary for such cranes and a set of specialized kinematical conditions. A variety of kinematical schemes is evaluated against these conditions, and some are taken forward for dynamic analysis. A comparative dynamic analysis of the selected schemes was carried out and the most rational scheme was identified. This provides a basis for more complex and practical research on manipulator-crane design, so that ways to implement them at a practical level can now be calculated properly. The perspectives for implementing automated control systems and informational networks on construction sites, in order to boost the quality of construction works, labor safety and ecological safety, are also shown.

  10. Features generated for computational splice-site prediction correspond to functional elements

    Directory of Open Access Journals (Sweden)

    Wilbur W John

    2007-10-01

    Full Text Available Abstract Background Accurate selection of splice sites during the splicing of precursors to messenger RNA requires both relatively well-characterized signals at the splice sites and auxiliary signals in the adjacent exons and introns. We previously described a feature generation algorithm (FGA that is capable of achieving high classification accuracy on human 3' splice sites. In this paper, we extend the splice-site prediction to 5' splice sites and explore the generated features for biologically meaningful splicing signals. Results We present examples from the observed features that correspond to known signals, both core signals (including the branch site and pyrimidine tract and auxiliary signals (including GGG triplets and exon splicing enhancers. We present evidence that features identified by FGA include splicing signals not found by other methods. Conclusion Our generated features capture known biological signals in the expected sequence interval flanking splice sites. The method can be easily applied to other species and to similar classification problems, such as tissue-specific regulatory elements, polyadenylation sites, promoters, etc.

  11. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features.

    Science.gov (United States)

    Abbas, Qaisar; Fondon, Irene; Sarmiento, Auxiliadora; Jiménez, Soledad; Alemany, Pedro

    2017-11-01

    Diabetic retinopathy (DR) is a leading cause of blindness among diabetic patients. Recognition of the severity level is required by ophthalmologists to detect and diagnose DR early. However, it is a challenging task for both medical experts and computer-aided diagnosis systems because it requires extensive domain expertise. In this article, a novel automatic recognition system for the five severity levels of diabetic retinopathy (SLDR) is developed, without any pre- or post-processing steps on the retinal fundus images, through learning of deep visual features (DVFs). These DVF features are extracted from each image by using color dense scale-invariant and gradient location-orientation histogram techniques. To learn these DVF features, a semi-supervised multilayer deep-learning algorithm is utilized along with a new compressed layer and fine-tuning steps. The SLDR system was evaluated and compared with state-of-the-art techniques using the measures of sensitivity (SE), specificity (SP) and area under the receiver operating characteristic curve (AUC). On 750 fundus images (150 per category), an SE of 92.18%, SP of 94.50% and AUC of 0.924 were obtained on average. These results demonstrate that the SLDR system is appropriate for early detection of DR and for supporting effective treatment.

  12. Automatic Number Plate Recognition System for IPhone Devices

    Directory of Open Access Journals (Sweden)

    Călin Enăchescu

    2013-06-01

    Full Text Available This paper presents a system for automatic number plate recognition, implemented for devices running the iOS operating system. The methods used for number plate recognition are based on existing methods, but optimized for devices with low hardware resources. To solve the task of automatic number plate recognition we have divided it into the following subtasks: image acquisition, localization of the number plate position on the image and character detection. The first subtask is performed by the camera of an iPhone, the second one is done using image pre-processing methods and template matching. For the character recognition we are using a feed-forward artificial neural network. Each of these methods is presented along with its results.

  13. A Risk Assessment System with Automatic Extraction of Event Types

    Science.gov (United States)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting as early as possible weak signals of emerging risks ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  14. Zero-Shot Learning by Generating Pseudo Feature Representations

    OpenAIRE

    Lu, Jiang; Li, Jin; Yan, Ziang; Zhang, Changshui

    2017-01-01

    Zero-shot learning (ZSL) is a challenging task aiming at recognizing novel classes without any training instances. In this paper we present a simple but high-performance ZSL approach by generating pseudo feature representations (GPFR). Given the dataset of seen classes and side information of unseen classes (e.g. attributes), we synthesize feature-level pseudo representations for novel concepts, which allows us access to the formulation of unseen class predictor. Firstly we design a Joint Att...

  15. Important conventional island design features: generators

    International Nuclear Information System (INIS)

    Fritsch, Th.

    1985-01-01

    Today, maximum reactor capacity is setting a provisional limit to the MW race. The latest nuclear generators in manufacture are rated 1530 MW - 1710 MVA and are doubtless the most powerful in the world. The targets to be aimed at in designing large turbogenerators may be defined by the following points: 1) meeting the rated load conditions without exceeding the maximum admissible temperatures in any part of the machine; 2) keeping losses as small as possible; 3) keeping the overall size small enough to allow rail transportation from the works to the site; 4) choosing well-experienced solutions in order to obtain a highly reliable machine with maximum maintainability. In this report the main features of nuclear generators in the 1000-2000 MVA range are described. (Auth.)

  16. Automatic control of the water level of steam generators from 0% to 100% of the load

    International Nuclear Information System (INIS)

    Hocepied, R.; Debelle, J.; Timmermans, A.; Lams, J.-L.; Baeyens, R.; Eussen, G.; Bassem, G.

    1978-01-01

    The water level of a steam generator is hard to control manually, and it is practically impossible for a human operator to react correctly to every important perturbation. These phenomena are further accentuated during start-up at low load and at low feedwater temperature. The control schemes traditionally provided do not permit satisfactory automatic level control under all operating circumstances. Adaptations of the control system allow all the problems encountered to be solved: automatic control of the level in the steam generators is possible from 0% to 100% of the load, and also when large-scale perturbations occur. This result has been obtained by the use of systematic methods for the analysis of the steam generator's behaviour. These methods have also been used to verify the performance of the control system. The control system installed at the Doel nuclear power station prevents most of the reactor or turbine trip-outs caused by level deviations occurring during start-up and low-load operation. It also minimizes the effects on the unit of incidents such as tripping the unit onto house load, safety tripping, fast run-back to reduced load, etc. The principles used are applicable to the control of steam generators of all pressurized water reactor power stations. (author)

  17. Automatic orientation and 3D modelling from markerless rock art imagery

    Science.gov (United States)

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.

    2013-02-01

    This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and the generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple images. The Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) are assessed based on speed, number of features, matched features, and precision in image and object space, depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated, combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step for a sequence of terrestrial markerless images is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after feature-based matching (FBM) are assessed both in image and object space for the 3D modelling of a complex rock art shelter.

  18. Readability of patient discharge instructions with and without the use of electronically available disease-specific templates.

    Science.gov (United States)

    Mueller, Stephanie K; Giannelli, Kyla; Boxer, Robert; Schnipper, Jeffrey L

    2015-07-01

    Low health literacy is common, leaving patients vulnerable at hospital discharge, when they rely on written health instructions. We aimed to examine the impact of the use of electronic, patient-friendly, templated discharge instructions on the readability of discharge instructions provided to patients. We performed a retrospective cohort study of 233 patients discharged from a large tertiary care hospital to their homes following the implementation of a web-based "discharge module," which included the optional use of diagnosis-specific templated discharge instructions. We compared the readability of discharge instructions, as measured by the Flesch Reading Ease Level test (FREL, on a 0-100 scale, with higher scores indicating greater readability) and the Flesch-Kincaid Grade Level test (FKGL, measured in grade levels), between discharges that used templated instructions (with or without modification) and discharges that used clinician-generated instructions (with or without available templated instructions for the specific discharge diagnosis). Templated discharge instructions were provided to patients in 45% of discharges. Of the 55% of patients who received clinician-generated discharge instructions, the majority (78.1%) had no available templated instruction for their specific discharge diagnosis. Templated discharge instructions showed greater readability than clinician-generated ones: a higher FREL score (71 vs. 57) and a lower FKGL score. The main reason for clinicians to create discharge instructions was the lack of an available template for the patient's specific discharge diagnosis. Use of electronically available templated discharge instructions may be a viable option to improve the readability of written material provided to patients at discharge, although the library of available templates requires expansion. © The Author 2015. Published by Oxford University Press.
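    Both readability measures are simple closed-form statistics, so they are easy to reproduce. The sketch below implements the standard FREL and FKGL formulas with a crude vowel-group syllable counter (real tools use pronunciation dictionaries, so scores will differ slightly).

        import re

        def count_syllables(word):
            # Rough heuristic: count groups of consecutive vowels.
            return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

        def flesch_scores(text):
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z']+", text)
            syllables = sum(count_syllables(w) for w in words)
            wps = len(words) / sentences        # words per sentence
            spw = syllables / len(words)        # syllables per word
            frel = 206.835 - 1.015 * wps - 84.6 * spw  # Flesch Reading Ease
            fkgl = 0.39 * wps + 11.8 * spw - 15.59     # Flesch-Kincaid Grade Level
            return frel, fkgl

        print(flesch_scores("Take one pill each morning. Call us if you feel dizzy."))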

  19. A stochastic approach for automatic generation of urban drainage systems.

    Science.gov (United States)

    Möderl, M; Butler, D; Rauch, W

    2009-01-01

    Typically, performance evaluation of newly developed methodologies is based on one or more case studies. The investigation of multiple real-world case studies is tedious and time consuming. Moreover, extrapolating conclusions from individual investigations to a general basis is arguable and sometimes even wrong. In this article, a stochastic approach is presented to evaluate newly developed methodologies on a broader basis. For this approach the MATLAB tool "Case Study Generator" was developed, which generates a variety of different virtual urban drainage systems automatically, using boundary conditions, e.g. length of the urban drainage system, slope of the catchment surface, etc., as input. The layout of the sewer system is based on an adapted Galton-Watson branching process. The subcatchments are allocated using a digital terrain model. Sewer system components are designed according to standard values. In total, 10,000 different virtual case studies of urban drainage systems are generated and simulated. Simulation results are then evaluated using a performance indicator for surface flooding. Comparison between results of the virtual case studies and two real-world ones indicates the promise of the method. The novelty of the approach is that more general conclusions can be drawn, in contrast to traditional evaluations with few case studies.
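    The core of the layout generator is a Galton-Watson branching process. A minimal sketch follows (the offspring probabilities and depth are illustrative, not the adapted distribution used by the tool):

        import random

        random.seed(42)

        def galton_watson_layout(offspring_probs=(0.3, 0.4, 0.3), max_depth=8):
            # Grow a sewer tree from the outfall: each junction spawns 0, 1
            # or 2 upstream pipes. Returns the layout as (parent, child) edges.
            edges, frontier, next_id = [], [0], 1
            for _ in range(max_depth):
                nxt = []
                for node in frontier:
                    k = random.choices((0, 1, 2), weights=offspring_probs)[0]
                    for _ in range(k):
                        edges.append((node, next_id))
                        nxt.append(next_id)
                        next_id += 1
                frontier = nxt
            return edges

        # One virtual case study; repeating this 10,000 times (plus catchment
        # allocation and hydraulic design) yields the stochastic test set.
        print(len(galton_watson_layout()), "pipes")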

  20. Extraction of airway trees using multiple hypothesis tracking and template matching

    DEFF Research Database (Denmark)

    Raghavendra, Selvan; Petersen, Jens; Pedersen, Jesper Johannes Holst

    2016-01-01

    Knowledge of airway tree morphology has important clinical applications in diagnosis of chronic obstructive pulmonary disease. We present an automatic tree extraction method based on multiple hypothesis tracking and template matching for this purpose and evaluate its performance on chest CT images. Candidate branch segments are used in constructing a multiple hypothesis tree, which is then traversed to reach decisions. The proposed modifications remove the need for local thresholding of hypotheses, as decisions are made entirely based on statistical comparisons involving the hypothesis tree. The results show improvements…

  1. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    Science.gov (United States)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanning images affected by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually require artificial markers. It is a rather challenging task to locate the accurate position of the points and obtain accurate homonymy point sets. In this paper, we propose an automatic non-rigid image registration algorithm which mainly consists of three steps: To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points which are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest neighbor algorithm. Based on the accurate homonymy point sets, the two images are registered with a TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two evaluate the distribution of the point sets and the correct matching rate on synthetic and real data, respectively. The last experiment is designed for non-rigidly deformed remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.
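    For the final TPS step, SciPy's radial-basis interpolator offers a thin-plate-spline kernel, so a minimal warping sketch needs only the matched point sets (the synthetic landmarks below are placeholders for the homonymy points produced by the matching stage):

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)
        src = rng.uniform(0, 100, size=(40, 2))              # sensed-image points
        dst = src + np.column_stack([5 * np.sin(src[:, 1] / 20),
                                     np.zeros(40)])          # reference points

        # Thin-plate-spline mapping fitted on the matched landmark pairs.
        tps = RBFInterpolator(src, dst, kernel="thin_plate_spline", smoothing=0.0)

        # Warp arbitrary pixel coordinates of the sensed image.
        print(tps(np.array([[10.0, 10.0], [50.0, 60.0]])))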

  2. Exemplary design of a DICOM structured report template for CBIR integration into radiological routine

    Science.gov (United States)

    Welter, Petra; Deserno, Thomas M.; Gülpers, Ralph; Wein, Berthold B.; Grouls, Christoph; Günther, Rolf W.

    2010-03-01

    The large and continuously growing amount of medical image data demands access methods based on content rather than simple text-based queries. The potential benefits of content-based image retrieval (CBIR) systems for computer-aided diagnosis (CAD) are evident and have been proven. Still, CBIR is not a well-established part of the daily routine of radiologists. We have already presented a concept of CBIR integration for the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. The retrieval result is composed as a Digital Imaging and Communications in Medicine (DICOM) Structured Reporting (SR) document. The use of DICOM SR provides interchange with the PACS archive and image viewer. It offers the possibility of further data mining and automatic interpretation of CBIR results. However, existing standard templates do not address the domain of CBIR. We present a design of an SR template customized for CBIR. Our approach is based on the DICOM standard templates and makes use of the mammography and chest CAD SR templates. Reuse of approved SR sub-trees promises a reliable design, which is further adapted to the CBIR domain. We analyze the special CBIR requirements and integrate the new concept of similar images into our template. Our approach also includes the new concept of a set of selected images for defining the processed images for CBIR. A commonly accepted pre-defined template for the presentation and exchange of results in a standardized format promotes the widespread application of CBIR in radiological routine.

  3. Development of a Feature and Template-Assisted Assembler and Application to the Analysis of a Foot-and-Mouth Disease Virus Genotyping Microarray.

    Directory of Open Access Journals (Sweden)

    Roger W Barrette

    Full Text Available Several RT-PCR and genome sequencing strategies exist for the resolution of Foot-and-Mouth Disease virus (FMDV). While these approaches are relatively straightforward, they can be vulnerable to failure due to the unpredictable nature of FMDV genome sequence variations. Sequence-independent single primer amplification (SISPA) followed by a genotyping microarray offers an attractive unbiased approach to FMDV characterization. Here we describe a custom FMDV microarray and a companion feature and template-assisted assembler software (FAT-assembler) capable of resolving virus genome sequence using a moderate number of conserved microarray features. The results demonstrate that this approach may be used to rapidly characterize naturally occurring FMDV as well as an engineered chimeric strain of FMDV. The FAT-assembler, while applied to resolving FMDV genomes, represents a new bioinformatics approach that should be broadly applicable to interpreting microarray genotyping data for other viruses or target organisms.

  4. Quantitative Folding Pattern Analysis of Early Primary Sulci in Human Fetuses with Brain Abnormalities.

    Science.gov (United States)

    Im, K; Guimaraes, A; Kim, Y; Cottrill, E; Gagoski, B; Rollins, C; Ortinau, C; Yang, E; Grant, P E

    2017-07-01

    Aberrant gyral folding is a key feature in the diagnosis of many cerebral malformations. In fetal life, however, it is particularly challenging to confidently diagnose aberrant folding because of the rapid spatiotemporal changes of gyral development. Currently, there is no resource to measure how an individual fetal brain compares with normal spatiotemporal variations. In this study, we assessed the potential for automatic analysis of early sulcal patterns to detect individual fetal brains with cerebral abnormalities. Triplane MR images were aligned to create a motion-corrected volume for each individual fetal brain, and cortical plate surfaces were extracted. Sulcal basins were automatically identified on the cortical plate surface and compared with a combined set generated from 9 normal fetal brain templates. Sulcal pattern similarities to the templates were quantified by using multivariate geometric features and intersulcal relationships for 14 normal fetal brains and 5 fetal brains that were proved to be abnormal on postnatal MR imaging. Results were compared with the gyrification index. Significantly reduced sulcal pattern similarities to normal templates were found in all abnormal individual fetuses compared with normal fetuses (mean similarity [normal, abnormal], left: 0.818, 0.752), with the intersulcal relationships among the primary distinguishing features. The gyrification index was not significantly different between the normal and abnormal groups. Automated analysis of the interrelated patterning of early primary sulci could outperform the traditional gyrification index and has the potential to quantitatively detect individual fetuses with emerging abnormal sulcal patterns. © 2017 by American Journal of Neuroradiology.

  5. Automatic Construction of Finite Algebras

    Institute of Scientific and Technical Information of China (English)

    张健

    1995-01-01

    This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for the automatic construction of finite algebras are described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.
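    In miniature, model generation of this kind is an exhaustive search over operation tables. The sketch below (not the authors' tool) enumerates all binary operations on a small carrier set that satisfy two sample equations, idempotence and commutativity:

        from itertools import product

        def find_models(n=2):
            # Return all n-by-n operation tables satisfying x*x = x and
            # x*y = y*x -- brute-force equational model generation.
            elems = range(n)
            models = []
            for table in product(elems, repeat=n * n):
                op = lambda a, b: table[a * n + b]
                if (all(op(x, x) == x for x in elems) and
                        all(op(x, y) == op(y, x) for x in elems for y in elems)):
                    models.append(table)
            return models

        print(len(find_models(2)), "models of size 2")  # 2 of the 16 tables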

  6. Grammatical Templates: Improving Text Difficulty Evaluation for Language Learners

    OpenAIRE

    Wang, Shuhan; Andersen, Erik

    2016-01-01

    Language students are most engaged while reading texts at an appropriate difficulty level. However, existing methods of evaluating text difficulty focus mainly on vocabulary and do not prioritize grammatical features, hence they do not work well for language learners with limited knowledge of grammar. In this paper, we introduce grammatical templates, the expert-identified units of grammar that students learn from class, as an important feature of text difficulty evaluation. Experimental clas...

  7. Software for objective comparison of vocal acoustic features over weeks of audio recording: KLFromRecordingDays

    Science.gov (United States)

    Soderstrom, Ken; Alalawi, Ali

    KLFromRecordingDays allows measurement of Kullback-Leibler (KL) distances between 2D probability distributions of vocal acoustic features. Greater KL distance measures reflect increased phonological divergence across the vocalizations compared. The software has been used to compare *.wav file recordings made by Sound Analysis Recorder 2011 of songbird vocalizations pre- and post-drug and surgical manipulations. Recordings from individual animals in *.wav format are first organized into subdirectories by recording day and then segmented into individual syllables, whose acoustic features are measured, using Sound Analysis Pro 2011 (SAP). KLFromRecordingDays uses syllable acoustic feature data output by SAP to a MySQL table to generate and compare "template" (typically pre-treatment) and "target" (typically post-treatment) probability distributions. These distributions are a series of virtual 2D plots of the duration of each syllable (as the x-axis) against each of 13 other acoustic features measured by SAP for that syllable (as y-axes). Differences between "template" and "target" probability distributions for each acoustic feature are determined by calculating KL distance, a measure of divergence of the target 2D distribution pattern from that of the template. KL distances and the mean KL distance across all acoustic features are calculated for each recording day and output to an Excel spreadsheet. Resulting data for individual subjects may then be pooled across treatment groups and graphically summarized and used for statistical comparisons. Because SAP-generated MySQL files are accessed directly, data limits associated with spreadsheet output are avoided, and the totality of vocal output over weeks may be objectively analyzed all at once. The software has been useful for measuring drug effects on songbird vocalizations and assessing recovery from damage to regions of vocal motor cortex. It may be useful in studies employing other species, and as part of speech
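    The underlying computation is a KL divergence between binned 2D feature distributions. A minimal numpy version on synthetic (duration, feature) points follows; the shared binning and the epsilon smoothing are implementation assumptions, not necessarily those of KLFromRecordingDays.

        import numpy as np

        def kl_2d(template_xy, target_xy, bins=20, eps=1e-9):
            # KL distance between two clouds of (duration, feature) points,
            # histogrammed on a common grid; eps avoids log(0).
            xy = np.vstack([template_xy, target_xy])
            ex = np.linspace(xy[:, 0].min(), xy[:, 0].max(), bins + 1)
            ey = np.linspace(xy[:, 1].min(), xy[:, 1].max(), bins + 1)
            p, _, _ = np.histogram2d(*template_xy.T, bins=(ex, ey))
            q, _, _ = np.histogram2d(*target_xy.T, bins=(ex, ey))
            p = (p + eps) / (p + eps).sum()
            q = (q + eps) / (q + eps).sum()
            return float((p * np.log(p / q)).sum())

        rng = np.random.default_rng(3)
        pre = rng.normal([100, 3000], [20, 400], size=(500, 2))   # template days
        post = rng.normal([120, 2800], [25, 500], size=(500, 2))  # target days
        print(kl_2d(pre, post))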

  8. A swarm-trained k-nearest prototypes adaptive classifier with automatic feature selection for interval data.

    Science.gov (United States)

    Silva Filho, Telmo M; Souza, Renata M C R; Prudêncio, Ricardo B C

    2016-08-01

    Some complex data types are capable of modeling data variability and imprecision. These data types are studied in the symbolic data analysis field. One such data type is interval data, which represents ranges of values and is more versatile than classic point data for many domains. This paper proposes a new prototype-based classifier for interval data, trained by a swarm optimization method. Our work has two main contributions: a swarm method which is capable of performing both automatic selection of features and pruning of unused prototypes and a generalized weighted squared Euclidean distance for interval data. By discarding unnecessary features and prototypes, the proposed algorithm deals with typical limitations of prototype-based methods, such as the problem of prototype initialization. The proposed distance is useful for learning classes in interval datasets with different shapes, sizes and structures. When compared to other prototype-based methods, the proposed method achieves lower error rates in both synthetic and real interval datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
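    A plausible reading of the proposed distance (the paper's exact form may differ) treats each interval feature as a [lower, upper] pair and weights the bound-wise squared differences, with zero weights discarding features. A small sketch with a nearest-prototype rule:

        import numpy as np

        def interval_dist(a, b, w):
            # Weighted squared Euclidean distance between interval vectors;
            # a, b are (n_features, 2) arrays of [lower, upper] bounds.
            return float(np.sum(w[:, None] * (a - b) ** 2))

        def classify(x, prototypes, labels, w):
            # Assign x to the class of its nearest prototype.
            return labels[int(np.argmin([interval_dist(x, p, w) for p in prototypes]))]

        protos = np.array([[[0.0, 1.0], [0.0, 1.0]],   # class 0 prototype
                           [[4.0, 5.0], [4.0, 5.0]]])  # class 1 prototype
        w = np.array([1.0, 0.5])  # feature weights (learned by the swarm in the paper)
        x = np.array([[3.8, 5.2], [4.1, 4.9]])
        print(classify(x, protos, [0, 1], w))  # -> 1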

  9. Template Synthesis of Nanostructured Polymeric Membranes by Inkjet Printing.

    Science.gov (United States)

    Gao, Peng; Hunter, Aaron; Benavides, Sherwood; Summe, Mark J; Gao, Feng; Phillip, William A

    2016-02-10

    The fabrication of functional nanomaterials with complex structures is of great scientific and practical interest, but current fabrication and patterning methods are generally costly and laborious. Here, we introduce a versatile, reliable, and rapid method for fabricating nanostructured polymeric materials. The novel method is based on a combination of inkjet printing and template synthesis, and its utility and advantages in the fabrication of polymeric nanomaterials are demonstrated through three examples: the generation of polymeric nanotubes, nanowires, and thin films. Layer-by-layer-assembled nanotubes can be synthesized in a polycarbonate track-etched (PCTE) membrane by printing poly(allylamine hydrochloride) and poly(styrenesulfonate) sequentially. This sequential deposition of polyelectrolyte ink enables control over the surface charge within the nanotubes. By a simple change of the printing conditions, polymeric nanotubes or nanowires were prepared by printing poly(vinyl alcohol) in a PCTE template. In this case, the high-throughput nature of the method enables functional nanomaterials to be generated in under 3 min. Furthermore, we demonstrate that inkjet printing paired with template synthesis can be used to generate patterns comprised of chemically distinct nanomaterials. Thin polymeric films of layer-by-layer-assembled poly(allylamine hydrochloride) and poly(styrenesulfonate) are printed on a PCTE membrane. Track-etched membranes covered with the deposited thin films reject ions and can potentially be utilized as nanofiltration membranes. This demonstration across different classes of nanostructured materials highlights the advantages of pairing template synthesis with inkjet printing, which include fast and reliable deposition, judicious use of the deposited materials, and the ability to design chemically patterned surfaces.

  10. Development of ANJOYMC Program for Automatic Generation of Monte Carlo Cross Section Libraries

    International Nuclear Information System (INIS)

    Kim, Kang Seog; Lee, Chung Chan

    2007-03-01

    The NJOY code, developed at Los Alamos National Laboratory, generates cross-section libraries in ACE format for Monte Carlo codes such as MCNP and McCARD by processing evaluated nuclear data in ENDF/B format. It takes a long time to prepare all the NJOY input files for hundreds of nuclides at various temperatures, and the input files can contain errors. In order to solve these problems, the ANJOYMC program has been developed. Using a simple user input deck, this program not only generates all the NJOY input files automatically, but also generates a batch file to perform all the NJOY calculations. The ANJOYMC program is written in Fortran90 and can be executed under the Windows and Linux operating systems on a personal computer. Cross-section libraries in ACE format can be generated in a short time and without errors using a simple user input deck.

  11. Report Template

    DEFF Research Database (Denmark)

    Bjørn, Anders; Laurent, Alexis; Owsianiak, Mikołaj

    2018-01-01

    To ensure consistent reporting of life cycle assessment (LCA), we provide a report template. The report includes the elements of an LCA study as recommended by the ILCD Handbook. An illustrative case study reported according to this template is presented in Chap. 39.

  12. Generating description with multi-feature fusion and saliency maps of image

    Science.gov (United States)

    Liu, Lisha; Ding, Yuxuan; Tian, Chunna; Yuan, Bo

    2018-04-01

    Generating a description for an image can be regarded as visual understanding. It spans artificial intelligence, machine learning, natural language processing and many other areas. In this paper, we present a model that generates descriptions for images based on an RNN (recurrent neural network) with object attention and multiple image features. Deep recurrent neural networks have excellent performance in machine translation, so we use them to generate natural-sentence descriptions for images. The proposed method uses a single CNN (convolutional neural network) trained on ImageNet to extract image features. But we think this cannot adequately capture the content of images, as it may focus only on the object areas. So we add scene information to the image features using a CNN trained on Places205. Experiments show that the model with multiple features extracted by two CNNs performs better than the one with a single feature. In addition, we apply saliency weights to the images to emphasize the salient objects. We evaluate our model on MSCOCO using public metrics, and the results show that our model performs better than several state-of-the-art methods.

  13. LHC-GCS a model-driven approach for automatic PLC and SCADA code generation

    CERN Document Server

    Thomas, Geraldine; Barillère, Renaud; Cabaret, Sebastien; Kulman, Nikolay; Pons, Xavier; Rochez, Jacques

    2005-01-01

    The LHC experiments’ Gas Control System (LHC GCS) project [1] aims to provide the four LHC experiments (ALICE, ATLAS, CMS and LHCb) with control for their 23 gas systems. To ease the production and maintenance of 23 control systems, a model-driven approach has been adopted to generate automatically the code for the Programmable Logic Controllers (PLCs) and for the Supervision Control And Data Acquisition (SCADA) systems. The first milestones of the project have been achieved. The LHC GCS framework [4] and the generation tools have been produced. A first control application has actually been generated and is in production, and a second is in preparation. This paper describes the principle and the architecture of the model-driven solution. It will in particular detail how the model-driven solution fits with the LHC GCS framework and with the UNICOS [5] data-driven tools.

  14. On the application of bezier surfaces for GA-Fuzzy controller design for use in automatic generation control

    CSIR Research Space (South Africa)

    Boesack, CD

    2012-03-01

    Full Text Available Automatic Generation Control (AGC) of large interconnected power systems are typically controlled by a PI or PID type control law. Recently intelligent control techniques such as GA-Fuzzy controllers have been widely applied within the power...

  15. MULTIDIRECTIONAL BUILDING DETECTION IN AERIAL IMAGES WITHOUT SHAPE TEMPLATES

    Directory of Open Access Journals (Sweden)

    A. Manno-Kovacs

    2013-05-01

    Full Text Available The aim of this paper is to exploit the orientation information of an urban area for extracting building contours without shape templates. Unlike shape templates, these contours describe more variability and reveal the fine details of the building outlines, resulting in a more accurate detection process, which is beneficial for many tasks, such as map updating and city planning. According to our assumption, the orientation of closely located buildings is coherent and related to the road network; therefore, exploiting this information can lead to more efficient building detection. The introduced method first extracts feature points representing the urban area. Orientation information in the feature-point neighborhoods is analyzed to define the main orientations. Based on this orientation information, the urban area is classified into different directional clusters. The edges of the classified building groups are then emphasized with a shearlet-based edge detection method, which is able to detect edges only in the main directions, resulting in an efficient connectivity map. In the last step, by fusing the feature points and the connectivity map, building contours are detected with a non-parametric active contour method.

  16. Evidence for negative feature guidance in visual search is explained by spatial recoding.

    Science.gov (United States)

    Beck, Valerie M; Hollingworth, Andrew

    2015-10-01

    Theories of attention and visual search explain how attention is guided toward objects with known target features. But can attention be directed away from objects with a feature known to be associated only with distractors? Most studies have found that the demand to maintain the to-be-avoided feature in visual working memory biases attention toward matching objects rather than away from them. In contrast, Arita, Carlisle, and Woodman (2012) claimed that attention can be configured to selectively avoid objects that match a cued distractor color, and they reported evidence that this type of negative cue generates search benefits. However, the colors of the search array items in Arita et al. (2012) were segregated by hemifield (e.g., blue items on the left, red on the right), which allowed for a strategy of translating the feature-cue information into a simple spatial template (e.g., avoid right, or attend left). In the present study, we replicated the negative cue benefit using the Arita et al. (2012) method (albeit within a subset of participants who reliably used the color cues to guide attention). Then, we eliminated the benefit by using search arrays that could not be grouped by hemifield. Our results suggest that feature-guided avoidance is implemented only indirectly, in this case by translating feature-cue information into a spatial template. (c) 2015 APA, all rights reserved.

  17. Characterizing chaotic melodies in automatic music composition

    Science.gov (United States)

    Coca, Andrés E.; Tost, Gerard O.; Zhao, Liang

    2010-09-01

    In this paper, we initially present an algorithm for automatic composition of melodies using chaotic dynamical systems. Afterward, we characterize chaotic music in a comprehensive way as comprising three perspectives: musical discrimination, dynamical influence on musical features, and musical perception. With respect to the first perspective, the coherence between generated chaotic melodies (continuous as well as discrete chaotic melodies) and a set of classical reference melodies is characterized by statistical descriptors and melodic measures. The significant differences among the three types of melodies are determined by discriminant analysis. Regarding the second perspective, the influence of dynamical features of chaotic attractors, e.g., Lyapunov exponent, Hurst coefficient, and correlation dimension, on melodic features is determined by canonical correlation analysis. The last perspective is related to perception of originality, complexity, and degree of melodiousness (Euler's gradus suavitatis) of chaotic and classical melodies by nonparametric statistical tests.
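    As a toy example of the composition step, the sketch below maps iterates of the logistic map (in its chaotic regime, r = 4) onto a C-major scale; the scale, seed and length are arbitrary choices, not the dynamical systems used in the paper.

        SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI notes, C4..C5

        def chaotic_melody(x0=0.123, r=4.0, length=16):
            notes, x = [], x0
            for _ in range(length):
                x = r * x * (1.0 - x)              # logistic-map iterate
                notes.append(SCALE[int(x * len(SCALE)) % len(SCALE)])
            return notes

        print(chaotic_melody())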

  18. Sherlock: A Semi-automatic Framework for Quiz Generation Using a Hybrid Semantic Similarity Measure.

    Science.gov (United States)

    Lin, Chenghua; Liu, Dong; Pang, Wei; Wang, Zhe

    In this paper, we present a semi-automatic system (Sherlock) for quiz generation using linked data and textual descriptions of RDF resources. Sherlock is distinguished from existing quiz generation systems in its generic framework for domain-independent quiz generation as well as in the ability of controlling the difficulty level of the generated quizzes. Difficulty scaling is non-trivial, and it is fundamentally related to cognitive science. We approach the problem with a new angle by perceiving the level of knowledge difficulty as a similarity measure problem and propose a novel hybrid semantic similarity measure using linked data. Extensive experiments show that the proposed semantic similarity measure outperforms four strong baselines with more than 47 % gain in clustering accuracy. In addition, we discovered in the human quiz test that the model accuracy indeed shows a strong correlation with the pairwise quiz similarity.

  19. Automatic tracking of red blood cells in micro channels using OpenCV

    Science.gov (United States)

    Rodrigues, Vânia; Rodrigues, Pedro J.; Pereira, Ana I.; Lima, Rui

    2013-10-01

    The present study aims to develop an automatic method able to track red blood cell (RBC) trajectories flowing through a microchannel using the Open Source Computer Vision library (OpenCV). The developed method is based on optical flow computation assisted by maximization of the template-matching product. The experimental results show good functional performance of this method.
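    In OpenCV the two ingredients are cv2.matchTemplate and pyramidal Lucas-Kanade optical flow. A minimal tracking step over two frames might look as follows (the file names are hypothetical, and the window and pyramid settings are arbitrary defaults):

        import cv2
        import numpy as np

        prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
        curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
        tmpl = cv2.imread("rbc_template.png", cv2.IMREAD_GRAYSCALE)

        # 1) Locate the cell by maximizing the template-matching score.
        scores = cv2.matchTemplate(prev, tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(scores)
        h, w = tmpl.shape
        p0 = np.array([[[x + w / 2.0, y + h / 2.0]]], dtype=np.float32)

        # 2) Propagate the cell center to the next frame with Lucas-Kanade.
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                                 winSize=(21, 21), maxLevel=3)
        if status[0, 0]:
            print("RBC moved from", p0[0, 0], "to", p1[0, 0])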

  20. A Template for Open Inquiry: Using Questions to Encourage and Support Inquiry in Earth and Space Science

    Science.gov (United States)

    Hermann, Ronald S.; Miranda, Rommel J.

    2010-01-01

    This article provides an instructional approach to helping students generate open-inquiry research questions, which the authors call the "open-inquiry question template." This template was created based on their experience teaching high school science and preservice university methods courses. To help teachers implement this template, they…

  1. Preparation of Biomorphic TiO2 Ceramics from Rattan Templates

    Directory of Open Access Journals (Sweden)

    Liangcun Qian

    2015-05-01

    In this work, biomorphic ceramics were produced from various rattan templates, using sol infiltration combined with vacuum/positive pressure technology; the samples were then sintered to form TiO2 ceramics with a rattan microstructure. X-ray diffraction (XRD), thermogravimetric (TG) data, dimensional variation analysis, and scanning electron microscopy (SEM) images of the biomorphic ceramics showed that the number of sol-gel infiltration cycles was reduced by the use of the vacuum/positive pressure technology. To further supply the TiO2 content and fill the pyrolysis gaps in the charcoal/TiO2 composites sintered at 800 °C, it was necessary to repeat the sol-gel process. In the transverse section, ceramics made from rattan templates without the rattan edge achieved more faithful biomorphic features; conversely, deformations occurred along the transverse section of ceramics made from templates with the rattan edge, and fracture took place along the ceramic axial section. The main reason for the deformation and fracture was that the anisotropic structure of the template was stressed during the sintering process. Furthermore, micrometer-sized pores were found in the ceramics along the axial section because of the removal of the charcoal templates.

  2. Welding template

    International Nuclear Information System (INIS)

    Ben Venue, R.J. of.

    1976-01-01

    A welding template is described which is used to weld strip material into a cellular grid structure for the accommodation of fuel elements in a nuclear reactor. On a base plate the template carries a multitude of cylindrical pins, whose upper half is narrower than the bottom half, only one of which is attached to the base plate. The others are held in a hexagonal array by oblong webs clamped together by chuck jaws which can be secured by means of screws. The parts are ground very accurately. The template according to the invention is very easy to make. (UWI)

  3. Development of user interface to support automatic program generation of nuclear power plant analysis by module-based simulation system

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu; Mizutani, Naoki; Nakaya, Ken-ichiro; Wakabayashi, Jiro

    1988-01-01

    Module-based Simulation System (MSS) has been developed to realize a new software work environment enabling versatile dynamic simulation of complex nuclear power systems in a flexible way. The MSS makes full use of modern software technology to replace a large fraction of the human software work in complex, large-scale program development with computer automation. The fundamental methods utilized in MSS and a developmental study on the human interface system SESS-1, which helps users generate integrated simulation programs automatically, are summarized as follows: (1) To enhance usability and 'communality' of program resources, the basic mathematical models in common use for nuclear power plant analysis are programmed as 'modules' and stored in a module library. The information on the usage of individual modules is stored in a module database with easy registration, update and retrieval by the interactive management system. (2) Target simulation programs and the input/output files are automatically generated with simple block-wise languages by a precompiler system for module integration purposes. (3) The working time for program development and analysis in an example study of an LMFBR plant thermal-hydraulic transient analysis was demonstrated to be remarkably shortened with the introduction of the interface system SESS-1, developed as an automatic program generation environment. (author)
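
    The core idea of generating a simulation program from a module library can be conveyed with a deliberately simplified sketch (the module set, metadata, and generated code are invented for illustration; MSS itself is far more elaborate):

        # Sketch: a tiny "module library" plus an integrator that emits a runnable program.
        MODULE_LIBRARY = {
            "pump":   {"inputs": ["w_in"],  "outputs": ["w_out"], "code": "w_out = 1.05 * w_in"},
            "heater": {"inputs": ["w_out"], "outputs": ["T_out"], "code": "T_out = 280.0 + 0.1 * w_out"},
        }

        def generate_program(module_names):
            lines = ["def simulate(w_in):"]
            for name in module_names:
                lines.append(f"    {MODULE_LIBRARY[name]['code']}  # module: {name}")
            lines.append("    return T_out")
            return "\n".join(lines)

        src = generate_program(["pump", "heater"])
        exec(src)              # defines simulate() from the generated source
        print(simulate(100.0))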

  4. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete statement of the question is "can a computer construct an algorithm that will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
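
    The flavor of searching the operator space alongside the solution space can be sketched with a self-adaptive scheme in which each individual carries its own mutation-operator parameter (a toy stand-in for the paper's operator-design machinery; all constants are invented):

        # Sketch: co-evolving a mutation operator (its step size) with the solutions.
        import random

        def sphere(x):
            return sum(v * v for v in x)

        DIM, POP, GENS = 10, 30, 200
        pop = [([random.uniform(-5, 5) for _ in range(DIM)],
                random.uniform(0.01, 1.0)) for _ in range(POP)]   # (solution, step size)

        for _ in range(GENS):
            children = []
            for sol, sigma in pop:
                new_sigma = max(1e-4, sigma * random.choice((0.85, 1.0, 1.18)))  # evolve the operator
                child = [v + random.gauss(0, new_sigma) for v in sol]            # apply the operator
                children.append((child, new_sigma))
            pop = sorted(pop + children, key=lambda ind: sphere(ind[0]))[:POP]   # elitist selection

        print(sphere(pop[0][0]))   # best fitness found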

  5. Three Modeling Applications to Promote Automatic Item Generation for Examinations in Dentistry.

    Science.gov (United States)

    Lai, Hollis; Gierl, Mark J; Byrne, B Ellen; Spielman, Andrew I; Waldschmidt, David M

    2016-03-01

    Test items created for dentistry examinations are often individually written by content experts. This approach to item development is expensive because it requires the time and effort of many content experts but yields relatively few items. The aim of this study was to describe and illustrate how items can be generated using a systematic approach. Automatic item generation (AIG) is an alternative method that allows a small number of content experts to produce large numbers of items by integrating their domain expertise with computer technology. This article describes and illustrates how three modeling approaches to item content (item cloning, cognitive modeling, and image-anchored modeling) can be used to generate large numbers of multiple-choice test items for examinations in dentistry. Test items can be generated by combining the expertise of two content specialists with technology supported by AIG. A total of 5,467 new items were created during this study. From substitution of item content, to modeling appropriate responses based upon a cognitive model of correct responses, to generating items linked to specific graphical findings, AIG has the potential for meeting increasing demands for test items. Further, the methods described in this study can be generalized and applied to many other item types. Future research applications for AIG in dental education are discussed.
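
    Item cloning, the simplest of the three approaches, is easy to sketch: a parent item with substitutable content slots yields a family of sibling items (the stem and slot values below are invented examples, not items from the study):

        # Sketch: generate multiple-choice stems by substituting slot values into a parent item.
        import itertools

        STEM = "A patient presents with {symptom} in the {tooth}. What is the most likely diagnosis?"
        SLOTS = {
            "symptom": ["sharp pain on biting", "lingering pain to cold"],
            "tooth":   ["maxillary first molar", "mandibular second premolar"],
        }

        def clone_items(stem, slots):
            keys = list(slots)
            for combo in itertools.product(*(slots[k] for k in keys)):
                yield stem.format(**dict(zip(keys, combo)))

        for item in clone_items(STEM, SLOTS):
            print(item)   # 4 sibling items from one parent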

  6. Sponge-Templated Macroporous Graphene Network for Piezoelectric ZnO Nanogenerator.

    Science.gov (United States)

    Li, Xinda; Chen, Yi; Kumar, Amit; Mahmoud, Ahmed; Nychka, John A; Chung, Hyun-Joong

    2015-09-23

    We report a simple approach to fabricate zinc oxide (ZnO) nanowire based electricity generators on three-dimensional (3D) graphene networks by utilizing a commercial polyurethane (PU) sponge as a structural template. Here, a 3D network of graphene oxide is deposited from solution on the template and then is chemically reduced. Subsequent steps of ZnO nanowire growth, polydimethylsiloxane (PDMS) backfilling and electrode lamination complete the fabrication process. When compared to conventional generators with 2D planar geometry, the sponge template provides a 3D structure that has a potential to increase power density per unit area. The modified one-pot ZnO synthesis method allows the whole process to be inexpensive and environmentally benign. The nanogenerator yields an open circuit voltage of ∼0.5 V and short circuit current density of ∼2 μA/cm², while the output was found to be consistent after ∼3000 cycles. Finite element analysis of stress distribution showed that external stress is concentrated to deform ZnO nanowires by orders of magnitude compared to surrounding PU and PDMS, in agreement with our experiment. It is shown that the backfilled PDMS plays a crucial role for the stress concentration, which leads to an efficient electricity generation.

  7. A novel String Banana Template Method for Tracks Reconstruction in High Multiplicity Events with significant Multiple Scattering and its Firmware Implementation

    CERN Document Server

    Kulinich, P; Krylov, V

    2004-01-01

    A novel String Banana Template Method (SBTM) for track reconstruction in difficult conditions is proposed and implemented for off-line analysis of relativistic heavy ion collision events. The main idea of the method is to use features of ensembles of tracks selected by 3-fold coincidence. A two-step track model is used: the first step is averaged over the selected ensemble, and the second is per-event dependent, taking into account Multiple Scattering (MS) for the particular track. SBTM relies on stored templates generated by precise Monte Carlo simulation, so it is more time-efficient in the case of a 2D spectrometer. All data required for track reconstruction in such difficult conditions can be prepared in a convenient format for fast use. Its template-based nature and the fact that the SBTM track model is actually very close to the hits imply that it can be implemented in a firmware processor. In this report a block diagram of a firmware-based pre-processor for track reconstruction in a CMS-like Si tracke...

  8. An HMM-Like Dynamic Time Warping Scheme for Automatic Speech Recognition

    Directory of Open Access Journals (Sweden)

    Ing-Jr Ding

    2014-01-01

    In the past, the kernel of automatic speech recognition (ASR) was dynamic time warping (DTW), which is feature-based template matching and belongs to the category of dynamic programming (DP) techniques. Although DTW is an early ASR technique, it has remained popular in many applications and now plays an important role in the well-known Kinect-based gesture recognition application. This paper proposes an intelligent speech recognition system using an improved DTW approach for multimedia and home automation services. The improved DTW presented in this work, called HMM-like DTW, is essentially a hidden Markov model (HMM)-like method in which the concept of the typical HMM statistical model is brought into the design of DTW. The developed HMM-like DTW method, transforming feature-based DTW recognition into model-based DTW recognition, is able to behave like the HMM recognition technique, and therefore the proposed HMM-like DTW with its HMM-like recognition model has the capability to further perform model adaptation (also known as speaker adaptation). A series of experimental results in home automation-based multimedia access service environments demonstrated the superiority and effectiveness of the developed smart speech recognition system based on HMM-like DTW.
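
    For reference, the classic DTW backbone on which the HMM-like variant builds can be written compactly (a textbook implementation, not the paper's code):

        # Sketch: dynamic time warping distance between two feature sequences.
        import numpy as np

        def dtw_distance(a, b):
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
                    D[i, j] = cost + min(D[i - 1, j],            # insertion
                                         D[i, j - 1],            # deletion
                                         D[i - 1, j - 1])        # match
            return D[n, m]

        ref = np.random.rand(40, 13)    # e.g., an MFCC template
        test = np.random.rand(55, 13)   # incoming utterance features
        print(dtw_distance(ref, test))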

  9. Automatic segmentation of rotational x-ray images for anatomic intra-procedural surface generation in atrial fibrillation ablation procedures.

    Science.gov (United States)

    Manzke, Robert; Meyer, Carsten; Ecabert, Olivier; Peters, Jochen; Noordhoek, Niels J; Thiagalingam, Aravinda; Reddy, Vivek Y; Chan, Raymond C; Weese, Jürgen

    2010-02-01

    Since the introduction of 3-D rotational X-ray imaging, protocols for 3-D rotational coronary artery imaging have become widely available in routine clinical practice. Intra-procedural cardiac imaging in a computed tomography (CT)-like fashion has been particularly compelling due to the reduction of clinical overhead and the ability to characterize anatomy at the time of intervention. We previously introduced a clinically feasible approach for imaging the left atrium and pulmonary veins (LAPVs) with short contrast bolus injections and scan times of approximately 4-10 s. The resulting data have sufficient image quality for intra-procedural use during electro-anatomic mapping (EAM) and interventional guidance in atrial fibrillation (AF) ablation procedures. In this paper, we present a novel technique for intra-procedural surface generation which integrates fully-automated segmentation of the LAPVs for guidance in AF ablation interventions. Contrast-enhanced rotational X-ray angiography (3-D RA) acquisitions in combination with filtered-back-projection-based reconstruction allow for volumetric interrogation of LAPV anatomy in near-real-time. An automatic model-based segmentation algorithm allows for fast and accurate LAPV mesh generation despite the challenges posed by image quality; relative to pre-procedural cardiac CT/MR, 3-D RA images suffer from more artifacts and reduced signal-to-noise. We validate our integrated method by comparing 1) automatic and manual segmentations of intra-procedural 3-D RA data, 2) automatic segmentations of intra-procedural 3-D RA and pre-procedural CT/MR data, and 3) intra-procedural EAM point cloud data with automatic segmentations of 3-D RA and CT/MR data. Our validation results for automatically segmented intra-procedural 3-D RA data show average segmentation errors of 1) approximately 1.3 mm compared with manual 3-D RA segmentations, 2) approximately 2.3 mm compared with automatic segmentation of pre-procedural CT/MR data and 3

  10. Automatic Generation of Building Models with Levels of Detail 1-3

    Science.gov (United States)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets (Mayer et al., 2012), compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  11. Development of Filtered Bispectrum for EEG Signal Feature Extraction in Automatic Emotion Recognition Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Prima Dewi Purnamasari

    2017-05-01

    The development of automatic emotion detection systems has recently gained significant attention due to the growing possibility of their implementation in several applications, including affective computing and various fields within biomedical engineering. Use of the electroencephalograph (EEG) signal is preferred over facial expression, as people cannot control the EEG signal generated by their brain; the EEG ensures a stronger reliability in the psychological signal. However, because of its uniqueness between individuals and its vulnerability to noise, use of EEG signals can be rather complicated. In this paper, we propose a methodology to conduct EEG-based emotion recognition by using a filtered bispectrum as the feature extraction subsystem and an artificial neural network (ANN) as the classifier. The bispectrum is theoretically superior to the power spectrum because it can identify phase coupling between the nonlinear process components of the EEG signal. In the feature extraction process, to extract the information contained in the bispectrum matrices, a 3D pyramid filter is used for sampling and quantifying the bispectrum value. Experiment results show that the mean percentage of the bispectrum value from 5 × 5 non-overlapped 3D pyramid filters produces the highest recognition rate. We found that reducing the number of EEG channels down to only eight in the frontal area of the brain does not significantly affect the recognition rate, and the number of data samples used in the training process was then increased to improve the recognition rate of the system. We have also utilized a probabilistic neural network (PNN) as another classifier and compared its recognition rate with that of the back-propagation neural network (BPNN); the results show that the PNN produces a comparable recognition rate at lower computational cost. Our research shows that the extracted bispectrum values of an EEG signal using 3D filtering as a feature extraction
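
    The bispectrum itself can be estimated directly from its definition B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]; the sketch below averages over signal segments (segment length and frequency range are illustrative, and the paper's 3D pyramid filtering is omitted):

        # Sketch: direct bispectrum estimate for one EEG channel.
        import numpy as np

        def bispectrum(signal, seg_len=256):
            segs = [signal[i:i + seg_len]
                    for i in range(0, len(signal) - seg_len + 1, seg_len)]
            nf = seg_len // 4                      # keep f1 + f2 within the FFT range
            B = np.zeros((nf, nf), dtype=complex)
            for s in segs:
                X = np.fft.fft(s - s.mean())
                for f1 in range(nf):
                    for f2 in range(nf):
                        B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
            return np.abs(B) / len(segs)           # averaged magnitude bispectrum

        eeg = np.random.randn(2048)                # stand-in for one EEG channel
        print(bispectrum(eeg).shape)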

  12. An integrated automatic system for the eddy-current testing of the steam generator tubes

    Energy Technology Data Exchange (ETDEWEB)

    Woo, Hee Gon; Choi, Seong Su [Korea Electric Power Corp. (KEPCO), Taejon (Korea, Republic of). Research Center

    1995-12-31

    This research project focused on the automation of steam generator tube inspection for nuclear power plants. The ECT (Eddy Current Testing) inspection process in nuclear power plants is classified into 3 subprocesses: signal acquisition, signal evaluation, and inspection planning and data management. Having been automated individually, these processes were effectively integrated into an automatic inspection system, which was implemented on an HP workstation together with the expert system developed (author). 25 refs., 80 figs.

  14. WiseScaffolder: an algorithm for the semi-automatic scaffolding of Next Generation Sequencing data.

    Science.gov (United States)

    Farrant, Gregory K; Hoebeke, Mark; Partensky, Frédéric; Andres, Gwendoline; Corre, Erwan; Garczarek, Laurence

    2015-09-03

    The sequencing depth provided by high-throughput sequencing technologies has allowed a rise in the number of de novo sequenced genomes that could potentially be closed without further sequencing. However, genome scaffolding and closure require costly human supervision that often results in genomes being published as drafts. A number of automatic scaffolders were recently released, which improved the global quality of genomes published in the last few years. Yet, none of them reaches the efficiency of manual scaffolding. Here, we present an innovative semi-automatic scaffolder that additionally helps with chimera resolution and generates valuable contig maps and outputs for manual improvement of the automatic scaffolding. This software was tested on the newly sequenced marine cyanobacterium Synechococcus sp. WH8103 as well as two reference datasets used in previous studies, Rhodobacter sphaeroides and Homo sapiens chromosome 14 (http://gage.cbcb.umd.edu/). The quality of the resulting scaffolds was compared to that of three other stand-alone scaffolders: SSPACE, SOPRA and SCARPA. For all three model organisms, WiseScaffolder produced better results than the other scaffolders in terms of contiguity statistics (number of genome fragments, N50, LG50, etc.) and, in the case of WH8103, the reliability of the scaffolds was confirmed by whole genome alignment against a closely related reference genome. We also propose an efficient computer-assisted strategy for manual improvement of the scaffolding, using outputs generated by WiseScaffolder, as well as for genome finishing that in our hands led to the circularization of the WH8103 genome. Altogether, WiseScaffolder proved more efficient than three other scaffolders for both prokaryotic and eukaryotic genomes and is thus likely applicable to most genome projects. The scaffolding pipeline described here should be of particular interest to biologists wishing to take advantage of the high added value of complete genomes.

  15. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

    With the purpose of making the verification of parameterized systems more general and easier, in this paper a new and intuitive language, PSL (Parameterized-system Specification Language), is proposed to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely and symbolically represent collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since abstract and symbolic techniques are exploited in the symbolic model, the state-explosion problem of traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on a UNIX platform. Thus, a complete tool for verifying parameterized synchronous systems is obtained and tested on some cases. The experimental results show that the method is satisfactory.

  16. GalaxyGAN: Generative Adversarial Networks for recovery of galaxy features

    Science.gov (United States)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Krishnan Santhanam, Gokula

    2017-02-01

    GalaxyGAN uses Generative Adversarial Networks to reliably recover features in images of galaxies. The package uses machine learning to train on higher quality data and learns to recover detailed features such as galaxy morphology by effectively building priors. This method opens up the possibility of recovering more information from existing and future imaging data.

  17. Effect of template in MCM-41 on the adsorption of aniline from aqueous solution.

    Science.gov (United States)

    Yang, Xinxin; Guan, Qingxin; Li, Wei

    2011-11-01

    The effect of the surfactant template cetyltrimethylammonium bromide (CTAB) in MCM-41 on the adsorption of aniline was investigated. Various MCM-41 samples were prepared by controlling template removal using an extraction method. The samples were then used as adsorbents for the removal of aniline from aqueous solution. The results showed that the MCM-41 samples with the template partially removed (denoted as C-MCM-41) exhibited better adsorption performance than MCM-41 with the template completely removed (denoted as MCM-41). The reason for this difference may be that the C-MCM-41 samples had stronger hydrophobic properties and selectivity for aniline because of the presence of the template. The porosity and cationic sites generated by the template play an important role in the adsorption process. The optimal adsorbent with moderate template was achieved by changing the ratio of extractant; it has the potential for promising applications in the field of water pollution control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. A Novel Approach for Automatic Machining Feature Recognition with Edge Blend Feature

    OpenAIRE

    Keong Chen Wong; Yusof Yusri

    2017-01-01

    This paper presents an algorithm for efficiently recognizing and determining the convexity of an edge blend feature. The algorithm first recognizes all of the edge blend features from the Boundary Representation of a part; then a series of convexity tests is run on the recognized edge blend features. The novelty of the presented algorithm lies in the fact that, instead of suppressing each recognized blend feature as most researchers do, the recognized blend features of this research are gone th...

  19. Generating Impact Maps from Automatically Detected Bomb Craters in Aerial Wartime Images Using Marked Point Processes

    Science.gov (United States)

    Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian

    2018-04-01

    The aftermath of wartime attacks is often felt long after the war has ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps, which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images taken during the Second World War. The object model for the bomb craters is represented by ellipses. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. New object configurations are created by randomly adding objects to and removing them from the current configuration, changing their positions, and modifying the ellipse parameters. Each configuration is evaluated using an energy function: high gradient magnitudes along the border of an ellipse are favored and overlapping ellipses are penalized. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with a predefined model. To generate the impact map, a probability map is created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively. Our results show the general potential of the method for the automatic detection of bomb craters and the automated generation of an impact map from a heterogeneous image stock.
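
    The final impact-map step is straightforward to sketch: kernel density estimation over the detected crater centers, thresholded into contaminated and uncontaminated areas (coordinates, bandwidth, and threshold below are illustrative):

        # Sketch: impact map from detected crater centers via kernel density estimation.
        import numpy as np
        from scipy.stats import gaussian_kde

        craters = np.random.rand(2, 50) * 1000.0      # stand-in detections, (x, y) in metres
        kde = gaussian_kde(craters, bw_method=0.2)    # bandwidth is a free parameter

        xs, ys = np.mgrid[0:1000:200j, 0:1000:200j]   # evaluation grid
        density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

        contaminated = density > density.mean()       # threshold is a free parameter
        print(contaminated.mean(), "fraction of area flagged as contaminated")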

  20. EXTRACSION: a system for automatic Eddy Current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Georgel, B.; Zorgati, R.

    1992-01-01

    Improving the speed and quality of Eddy Current non-destructive testing of steam generator tubes leads to the automation of all processes that contribute to diagnosis. This paper describes how signal processing, pattern recognition and artificial intelligence are used to build a software package that is able to automatically provide an efficient diagnosis. (author)

  1. Securing Iris Templates using Combined User and Soft Biometric based Password Hardened Fuzzy Vault

    OpenAIRE

    Meenakshi, V. S.; Padmavathi, G.

    2010-01-01

    Personal identification and authentication is crucial in the current scenario, and biometrics plays an important role in this area. Biometric based authentication has proved superior to traditional password based authentication. However, a biometric is a permanent feature of a person and cannot be reissued when compromised, as a password can. To overcome this problem, transformed templates can be stored instead of the original biometric templates. Whenever the transformation function ...

  2. Template-Based Estimation of Time-Varying Tempo

    Directory of Open Access Journals (Sweden)

    Peeters Geoffroy

    2007-01-01

    We present a novel approach to automatic estimation of tempo over time. This method aims at detecting tempo at the tactus level for percussive and nonpercussive audio. The front-end of our system is based on a proposed reassigned spectral energy flux for the detection of musical events. The dominant periodicities of this flux are estimated by a proposed combination of discrete Fourier transform and frequency-mapped autocorrelation function. The most likely meter, beat, and tatum over time are then estimated jointly using proposed meter/beat subdivision templates and a Viterbi decoding algorithm. The performance of our system has been evaluated on four different test sets, three of which were used during the ISMIR 2004 tempo induction contest. The results obtained are close to the best results of this contest.
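
    A bare-bones cousin of such a front end, onset-strength flux followed by autocorrelation-based periodicity picking, looks like this (all parameters invented; the reassignment, templates, and Viterbi decoding of the full system are omitted):

        # Sketch: single global tempo estimate from an energy-flux autocorrelation.
        import numpy as np

        def estimate_tempo(x, sr=22050, hop=512):
            frames = x[:len(x) // hop * hop].reshape(-1, hop)
            energy = (frames ** 2).sum(axis=1)
            flux = np.maximum(np.diff(energy), 0.0)    # half-wave rectified energy flux
            ac = np.correlate(flux, flux, mode="full")[len(flux) - 1:]
            frame_rate = sr / hop
            lo = int(frame_rate * 60 / 200)            # restrict to 40-200 BPM
            hi = int(frame_rate * 60 / 40)
            lag = lo + int(np.argmax(ac[lo:hi]))
            return 60.0 * frame_rate / lag

        x = np.random.randn(22050 * 10)                # stand-in for 10 s of audio
        print(estimate_tempo(x), "BPM")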

  3. Metal-organic framework templated electrodeposition of functional gold nanostructures

    International Nuclear Information System (INIS)

    Worrall, Stephen D.; Bissett, Mark A.; Hill, Patrick I.; Rooney, Aidan P.; Haigh, Sarah J.; Attfield, Martin P.; Dryfe, Robert A.W.

    2016-01-01

    Highlights: • Electrodeposition of anisotropic Au nanostructures templated by HKUST-1. • Au nanostructures replicate ∼1.4 nm pore spaces of HKUST-1. • Encapsulated Au nanostructures active as SERS substrate for 4-fluorothiophenol. - Abstract: Utilizing a pair of quick, scalable electrochemical processes, the permanently porous MOF HKUST-1 was electrochemically grown on a copper electrode and this HKUST-1-coated electrode was used to template electrodeposition of a gold nanostructure within the pore network of the MOF. Transmission electron microscopy demonstrates that a proportion of the gold nanostructures exhibit structural features replicating the pore space of this ∼1.4 nm maximum pore diameter MOF, as well as regions that are larger in size. Scanning electron microscopy shows that the electrodeposited gold nanostructure, produced under certain conditions of synthesis and template removal, is sufficiently inter-grown and mechanically robust to retain the octahedral morphology of the HKUST-1 template crystals. The functionality of the gold nanostructure within the crystalline HKUST-1 was demonstrated through the surface enhanced Raman spectroscopic (SERS) detection of 4-fluorothiophenol at concentrations as low as 1 μM. The reported process is confirmed as a viable electrodeposition method for obtaining functional, accessible metal nanostructures encapsulated within MOF crystals.

  4. Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3-D object.

    Science.gov (United States)

    Kim, Hak Gu; Man Ro, Yong

    2017-11-27

    In this paper, we propose a new ultrafast layer based CGH calculation that exploits the sparsity of the hologram fringe pattern in a 3-D object layer. Specifically, we devise a sparse template holographic fringe pattern. The holographic fringe pattern on a depth layer can be rapidly calculated by adding the sparse template holographic fringe patterns at each object point position. Since the size of the sparse template holographic fringe pattern is much smaller than that of the CGH plane, the computational load can be significantly reduced. Experimental results show that the proposed method achieves 10-20 ms for 1024x1024 pixels while providing visually plausible results.
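
    The central trick, accumulating one small precomputed fringe template at each object point instead of propagating every point across the full hologram plane, can be sketched with numpy (hologram size, wavelength, depth, and pixel pitch are assumed values, not the paper's):

        # Sketch: layer CGH by adding shifted copies of a small zone-plate template.
        import numpy as np

        N, T = 1024, 128                                # hologram size, template size
        wavelength, z, pitch = 532e-9, 0.1, 8e-6        # assumed optical parameters

        u = (np.arange(T) - T // 2) * pitch
        xx, yy = np.meshgrid(u, u)
        template = np.exp(1j * np.pi * (xx**2 + yy**2) / (wavelength * z))  # Fresnel zone plate

        hologram = np.zeros((N, N), dtype=complex)
        points = [(200, 300), (512, 512), (700, 800)]   # object points on this depth layer
        for py, px in points:                           # boundary clipping omitted for brevity
            y0, x0 = py - T // 2, px - T // 2
            hologram[y0:y0 + T, x0:x0 + T] += template  # cheap add instead of full propagation

        fringe = np.angle(hologram)                     # phase pattern for display
        print(fringe.shape)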

  5. An automatic granular structure generation and finite element analysis of heterogeneous semi-solid materials

    International Nuclear Information System (INIS)

    Sharifi, Hamid; Larouche, Daniel

    2015-01-01

    The quality of cast metal products depends on the capacity of the semi-solid metal to sustain the stresses generated during the casting. Predicting the evolution of these stresses with accuracy in the solidification interval should be highly helpful to avoid the formation of defects like hot tearing. This task is however very difficult because of the heterogeneous nature of the material. In this paper, we propose to evaluate the mechanical behaviour of a metal during solidification using a mesh generation technique of the heterogeneous semi-solid material for a finite element analysis at the microscopic level. This task is done on a two-dimensional (2D) domain in which the granular structure of the solid phase is generated surrounded by an intergranular and interdendritic liquid phase. Some basic solid grains are first constructed and projected in the 2D domain with random orientations and scale factors. Depending on their orientation, the basic grains are combined to produce larger grains or separated by a liquid film. Different basic grain shapes can produce different granular structures of the mushy zone. As a result, using this automatic grain generation procedure, we can investigate the effect of grain shapes and sizes on the thermo-mechanical behaviour of the semi-solid material. The granular models are automatically converted to finite element meshes. The solid grains and the liquid phase are meshed properly using quadrilateral elements. This method has been used to simulate the microstructure of a binary aluminium–copper alloy (Al–5.8 wt% Cu) when the fraction solid is 0.92. Using the finite element method and the Mie–Grüneisen equation of state for the liquid phase, the transient mechanical behaviour of the mushy zone under tensile loading has been investigated. The stress distribution and the bridges, which are formed during the tensile loading, have been detected. (paper)

  6. Development of automatic extraction of the corpus callosum from magnetic resonance imaging of the head and examination of the early dementia objective diagnostic technique in feature analysis

    International Nuclear Information System (INIS)

    Kodama, Naoki; Kaneko, Tomoyuki

    2005-01-01

    We examined the objective diagnosis of dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 17 early dementia patients (2 men and 15 women; mean age, 77.2±3.3 years) and 18 healthy elderly controls (2 men and 16 women; mean age, 73.8±6.5 years), 35 subjects altogether. First, the corpus callosum was automatically extracted from the MR images. Next, early dementia was compared with the healthy elderly individuals using 5 features of the straight-line methods, 5 features of the Run-Length Matrix, and 6 features of the Co-occurrence Matrix from the corpus callosum. Automatic extraction of the corpus callosum showed an accuracy rate of 84.1±3.7%. A statistically significant difference was found in 6 of the 16 features between early dementia patients and healthy elderly controls. Discriminant analysis using the 6 features demonstrated a sensitivity of 88.2% and specificity of 77.8%, with an overall accuracy of 82.9%. These results indicate that feature analysis based on changes in the corpus callosum can be used as an objective diagnostic technique for early dementia. (author)

  7. Multi-template tensor-based morphometry: application to analysis of Alzheimer's disease

    DEFF Research Database (Denmark)

    Koikkalainen, Juha; Lötjönen, Jyrki; Thurfjell, Lennart

    2011-01-01

    impairment (MCI), and patients with Alzheimer's disease (AD) from the ADNI database (N=772). The performance of TBM features in classifying images was evaluated both quantitatively and qualitatively. Classification results show that the multi-template methods are statistically significantly better than...

  8. Application of ANN-SCE model on the evaluation of automatic generation control performance

    Energy Technology Data Exchange (ETDEWEB)

    Chang-Chien, L.R.; Lo, C.S.; Lee, K.S. [National Cheng Kung Univ., Tainan, Taiwan (China)

    2005-07-01

    An accurate evaluation of load frequency control (LFC) performance is needed to balance minute-to-minute electricity generation and demand. In this study, an artificial neural network-based system control error (ANN-SCE) model was used to assess the performance of automatic generation controls (AGC). The model was used to identify system dynamics for control references in supplementing AGC logic. The artificial neural network control error model was used to track a single area's LFC dynamics in Taiwan. The model was used to gauge the impacts of regulation control. Results of the training, evaluating, and projecting processes showed that the ANN-SCE model could be algebraically decomposed into components corresponding to different impact factors. The SCE information obtained from testing of various AGC gains provided data for the creation of a new control approach. The ANN-SCE model was used in conjunction with load forecasting and scheduled generation data to create an ANN-SCE identifier. The model successfully simulated SCE dynamics. 13 refs., 10 figs.

  9. An introduction to automatic radioactive sample counters

    International Nuclear Information System (INIS)

    1980-01-01

    The subject is covered in chapters entitled: the detection of radiation in sample counters; nucleonic equipment; liquid scintillation counting; basic features of automatic sample counters; statistics of counting; data analysis; and purchase, installation, calibration and maintenance of automatic sample counters. (U.K.)

  10. Design Features of Modern Mechanical Ventilators.

    Science.gov (United States)

    MacIntyre, Neil

    2016-12-01

    A positive-pressure breath ideally should provide a VT that is adequate for gas exchange and appropriate muscle unloading while minimizing any risk for injury or discomfort. The latest generation of ventilators uses sophisticated feedback systems to sculpt positive-pressure breaths according to patient effort and respiratory system mechanics. Currently, however, these new control strategies are not totally closed-loop systems. This is because the automatic input variables remain limited, some clinician settings are still required, and the specific features of the perfect breath design still are not entirely clear. Despite these limitations, there is some rationale for many of these newer feedback features. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Design reflowable digital book template

    Science.gov (United States)

    Prasetya, Didik Dwi; Widiyaningtyas, Triyanna; Arifin, M. Zainal; Wahyu Sakti G., I.

    2017-09-01

    Electronic books (e-books or digital books) are increasingly in demand and continue to evolve as the books of the future. One standard electronic book format with strong potential is EPUB (electronic publication), published by the International Digital Publishing Forum (IDPF). Its major advantage is the ability to provide interactive and reflowable content, which is not found in other book formats such as PDF. Reflowable content allows the book to be accessed through a variety of reader devices, desktop as well as mobile, with a fitted and comfortable view. However, because generating an EPUB digital book is not as easy as producing a PDF, the format is less popular. Therefore, to help overcome the existing problems, this paper develops reflowable digital text book templates to support electronic learning, especially in Indonesia. The templates can be used by anyone to produce a standard digital book quickly and easily, without requiring additional specialized knowledge.

  12. Interactive music composition driven by feature evolution.

    Science.gov (United States)

    Kaliakatsos-Papakostas, Maximos A; Floros, Andreas; Vrahatis, Michael N

    2016-01-01

    Evolutionary music composition is a prominent technique for automatic music generation. The immense adaptation potential of evolutionary algorithms has allowed the realisation of systems that automatically produce music through feature-based and interactive composition approaches. Feature-based composition employs qualitatively descriptive music features as fitness landmarks. Interactive composition systems, on the other hand, derive fitness directly from human ratings and/or selection. The paper at hand introduces a methodological framework that combines the merits of both evolutionary composition methodologies. To this end, a system is presented that is organised in two levels: the higher level of interaction and the lower level of composition. The higher level incorporates the particle swarm optimisation algorithm, along with a proposed variant, and evolves musical features according to user ratings. The lower level realizes feature-based music composition with a genetic algorithm, according to the top level features. The aim of this work is not to validate the efficiency of the currently utilised setup in each level, but to examine the convergence behaviour of such a two-level technique in an objective manner. Therefore, an additional novelty in this work concerns the utilisation of artificial raters that guide the system through the space of musical features, allowing the exploration of its convergence characteristics: does the system converge to optimal melodies, is this convergence fast enough for potential human listeners and is the trajectory to convergence "interesting" and "creative" enough? The experimental results reveal that the proposed methodological framework represents a fruitful and robust, novel approach to interactive music composition.

  13. Cloning nanocrystal morphology with soft templates

    Science.gov (United States)

    Thapa, Dev Kumar; Pandey, Anshu

    2016-08-01

    In most template directed preparative methods, while the template decides the nanostructure morphology, the structure of the template itself is a non-general outcome of its peculiar chemistry. Here we demonstrate a template mediated synthesis that overcomes this deficiency. This synthesis involves overgrowth of a silica template onto a sacrificial nanocrystal. Such templates are used to copy the morphologies of gold nanorods. After template overgrowth, gold is removed and silver is regrown in the template cavity to produce a single crystal silver nanorod. This technique allows for duplicating existing nanocrystals, while also providing a quantifiable breakdown of the structure-shape interdependence.

  14. Automatic Test Pattern Generator for Fuzzing Based on Finite State Machine

    Directory of Open Access Journals (Sweden)

    Ming-Hung Wang

    2017-01-01

    With the rapid development of the Internet, several emerging technologies have been adopted to construct fancy, interactive, and user-friendly websites. Among these technologies, HTML5 is a popular one and is widely used in establishing modern sites. However, the security issues in the new web technologies have also been raised and are worthy of investigation. For vulnerability investigation, many previous studies used fuzzing and focused on generation-based approaches to produce test cases; however, these methods require a significant amount of knowledge and mental effort to develop the test patterns used to generate test cases. To decrease the entry barrier to conducting fuzzing, in this study we propose a test pattern generation algorithm based on the concept of finite state machines. We apply graph analysis techniques to extract paths from finite state machines and use these paths to construct test patterns automatically. With this proposal, fuzzing can be performed by inputting a regular expression corresponding to the test target. To evaluate the performance of our proposal, we conduct an experiment in identifying vulnerabilities of the input attributes in HTML5. According to the results, our approach is not only efficient but also effective for identifying weak validators in HTML5.
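
    The path-extraction idea can be sketched on a toy finite state machine whose accepted strings become fuzzing test patterns (the FSM below, modeling a fragment of an HTML5 tag with a payload slot, is invented for illustration):

        # Sketch: enumerate FSM paths and emit each path's symbols as a test pattern.
        FSM = {
            "S0": [("<video", "S1")],
            "S1": [(" src=FUZZ", "S2"), (" autoplay", "S1")],
            "S2": [(">", "ACCEPT")],
        }

        def test_patterns(state="S0", prefix=""):
            if state == "ACCEPT":
                yield prefix
                return
            for symbol, nxt in FSM[state]:
                if symbol != " autoplay" or " autoplay" not in prefix:  # bound the self-loop
                    yield from test_patterns(nxt, prefix + symbol)

        for pattern in test_patterns():
            print(pattern)   # FUZZ marks where payloads are injected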

  15. Template-based CTA to x-ray angio rigid registration of coronary arteries in frequency domain with automatic x-ray segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Aksoy, Timur; Unal, Gozde [Sabanci University, Tuzla, Istanbul 34956 (Turkey); Demirci, Stefanie; Navab, Nassir [Computer Aided Medical Procedures (CAMP), Technical University of Munich, Garching, 85748 (Germany); Degertekin, Muzaffer [Yeditepe University Hospital, Istanbul 34752 (Turkey)

    2013-10-15

    Purpose: A key challenge for image guided coronary interventions is accurate and absolutely robust image registration bringing together preinterventional information extracted from a three-dimensional (3D) patient scan and live interventional image information. In this paper, the authors present a novel scheme for 3D to two-dimensional (2D) rigid registration of coronary arteries extracted from preoperative image scan (3D) and a single segmented intraoperative x-ray angio frame in frequency and spatial domains for real-time angiography interventions by C-arm fluoroscopy. Methods: Most existing rigid registration approaches require a close initialization due to the abundance of local minima and high complexity of search algorithms. The authors' method eliminates this requirement by transforming the projections into translation-invariant Fourier domain for estimating the 3D pose. For 3D rotation recovery, template Digitally Reconstructed Radiographs (DRR) as candidate poses of 3D vessels of segmented computed tomography angiography are produced by rotating the camera (image intensifier) around the DICOM angle values with a specific range as in C-arm setup. The authors have compared the 3D poses of template DRRs with the segmented x-ray after equalizing the scales in three domains, namely, Fourier magnitude, Fourier phase, and Fourier polar. The best rotation pose candidate was chosen by one of the highest similarity measures returned by the methods in these domains. It has been noted in the literature that frequency domain methods are robust against noise and occlusion, which was also validated by the authors' results. 3D translation of the volume was then recovered by distance-map based BFGS optimization well suited to the convex structure of the authors' objective function without local minima due to distance maps. A novel automatic x-ray vessel segmentation was also performed in this study. Results: Final results were evaluated in 2D projection space for

  17. Pythran: enabling static optimization of scientific Python programs

    Science.gov (United States)

    Guelton, Serge; Brunet, Pierrick; Amini, Mehdi; Merlini, Adrien; Corbillon, Xavier; Raynaud, Alan

    2015-01-01

    Pythran is an open source static compiler that turns modules written in a subset of the Python language into native ones. Assuming that scientific modules do not rely much on the dynamic features of the language, it trades them for powerful, possibly inter-procedural, optimizations. These optimizations include detection of pure functions, temporary allocation removal, constant folding, Numpy ufunc fusion and parallelization, explicit thread-level parallelism through OpenMP annotations, false variable polymorphism pruning, and automatic vector instruction generation such as AVX or SSE. In addition to these compilation steps, Pythran provides a C++ runtime library that leverages the C++ STL to provide generic containers, and the Numeric Template Toolbox for Numpy support. It takes advantage of modern C++11 features such as variadic templates, type inference, move semantics and perfect forwarding, as well as classical idioms such as expression templates. Unlike the Cython approach, Pythran input code remains compatible with the Python interpreter. Output code is generally as efficient as the annotated Cython equivalent, if not more, but without the backward compatibility loss.
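
    A small module in Pythran's style illustrates the workflow (the kernel is an assumed example, not from the paper; the export comment declares the typed interface, and the unmodified file still runs under plain CPython):

        # Sketch: a Pythran-compilable kernel; build with `pythran kernel.py`.
        #pythran export pairwise_dist(float64[:,:])
        import numpy as np

        def pairwise_dist(X):
            # Pythran can fuse these ufunc operations and vectorize/parallelize them.
            diff = X[:, None, :] - X[None, :, :]
            return np.sqrt((diff * diff).sum(axis=-1))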

  18. Synthesis and catalytic activity of polysaccharide templated nanocrystalline sulfated zirconia

    Energy Technology Data Exchange (ETDEWEB)

    Sherly, K. B.; Rakesh, K. [Mahatma Gandhi University Regional Research Center in Chemistry, Department of Chemistry, Mar Athanasius College, Kothamangalam-686666, Kerala (India)

    2014-01-28

    Nanoscaled materials are of great interest due to their unique enhanced optical, electrical and magnetic properties. Sulfate-promoted zirconia has been shown to exhibit superacidic behavior and high activity for acid catalyzed reactions. Nanocrystalline zirconia was prepared in the presence of a polysaccharide template by interaction between ZrOCl2·8H2O and a chitosan template. The interaction was carried out in the aqueous phase, followed by removal of the template by calcination at the optimum temperature and by sulfation. The structural and textural features were characterized by powder XRD, TG, SEM and TEM. The XRD patterns showed that the peaks of the diffractogram were in agreement with the theoretical data of zirconia with the catalytically active tetragonal phase, and the average crystalline size of the particles was found to be 9 nm, which was confirmed by TEM. TPD using ammonia as a probe, FTIR and BET surface area analysis were used for analyzing surface features like acidity and porosity. The BET surface area analysis showed the sample had a moderately high surface area. FTIR was used to find the type of species attached to the surface of the zirconia. UV-DRS showed the band gap of the zirconia to be 2.8 eV. The benzylation of o-xylene was carried out batchwise at atmospheric pressure and 433 K using sulfated zirconia as the catalyst.

  19. Computing layouts with deformable templates

    KAUST Repository

    Peng, Chi-Han; Yang, Yongliang; Wonka, Peter

    2014-07-22

    In this paper, we tackle the problem of tiling a domain with a set of deformable templates. A valid solution to this problem completely covers the domain with templates such that the templates do not overlap. We generalize existing specialized solutions and formulate a general layout problem by modeling important constraints and admissible template deformations. Our main idea is to break the layout algorithm into two steps: a discrete step to lay out the approximate template positions and a continuous step to refine the template shapes. Our approach is suitable for a large class of applications, including floorplans, urban layouts, and arts and design. Copyright © ACM.

  1. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    Science.gov (United States)

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidences from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2 mm. Conclusion Our model has addressed challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization. Significance Our automatic landmark digitization method can be used clinically to reduce the labor cost and also improve digitization consistency. PMID:26625402

  2. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    Science.gov (United States)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.

  4. Atlas-Based Automatic Generation of Subject-Specific Finite Element Tongue Meshes.

    Science.gov (United States)

    Bijar, Ahmad; Rohan, Pierre-Yves; Perrier, Pascal; Payan, Yohan

    2016-01-01

    Generation of subject-specific 3D finite element (FE) models requires the processing of numerous medical images in order to precisely extract geometrical information about subject-specific anatomy. This processing remains extremely challenging. To overcome this difficulty, we present an automatic atlas-based method that generates subject-specific FE meshes via a 3D registration guided by Magnetic Resonance images. The method extracts a 3D transformation by registering the atlas volume image to the subject's, and establishes a one-to-one correspondence between the two volumes. The 3D transformation field deforms the atlas mesh to generate the subject-specific FE mesh. To preserve the quality of the subject-specific mesh, a diffeomorphic non-rigid registration based on B-spline free-form deformations is used, which guarantees a non-folding and one-to-one transformation. Two evaluations of the method are provided. First, a publicly available CT database is used to assess the capability to accurately capture the complexity of each subject's lung geometry. Second, FE tongue meshes are generated for two healthy volunteers and two patients suffering from tongue cancer using MR images. It is shown that the method generates an appropriate representation of the subject-specific geometry while preserving the quality of the FE meshes for subsequent FE analysis. To demonstrate the importance of our method in a clinical context, a subject-specific mesh is used to simulate the tongue's biomechanical response to the activation of an important tongue muscle, before and after cancer surgery.
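
    As a rough sketch of the registration step, a standard B-spline free-form deformation can be configured in SimpleITK as below. This is plain FFD, not the diffeomorphic variant the paper uses to guarantee a non-folding, one-to-one transformation; image inputs and the mesh size are placeholders.

    ```python
    import SimpleITK as sitk

    def bspline_register(fixed, moving, mesh_size=(8, 8, 8)):
        """Register the atlas volume (moving) to the subject volume (fixed).

        Both images should use a floating-point pixel type. The returned
        transform could then be applied to the atlas mesh vertices to
        produce the subject-specific FE mesh.
        """
        tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                                 numberOfIterations=100)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(tx, inPlace=True)
        return reg.Execute(fixed, moving)
    ```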

  5. Differential evolution algorithm based automatic generation control for interconnected power systems with

    Directory of Open Access Journals (Sweden)

    Banaja Mohanty

    2014-09-01

    Full Text Available This paper presents the design and performance analysis of Differential Evolution (DE) algorithm based Proportional–Integral (PI) and Proportional–Integral–Derivative (PID) controllers for Automatic Generation Control (AGC) of an interconnected power system. Initially, a two-area thermal system with governor dead-band nonlinearity is considered for the design and analysis purpose. In the proposed approach, the design problem is formulated as an optimization problem, and DE is employed to search for the optimal controller parameters. Three different objective functions are used for the design purpose. The superiority of the proposed approach is shown by comparing the results with a recently published Craziness based Particle Swarm Optimization (CPSO) technique for the same interconnected power system. It is noticed that the dynamic performance of the DE-optimized PI controller is better than that of the CPSO-optimized PI controller. Additionally, controller parameters are tuned at different loading conditions so that an adaptive gain scheduling control strategy can be employed. The study is further extended to a more realistic network of a two-area six-unit system with different power generating units such as thermal, hydro, wind and diesel generating units, considering boiler dynamics for thermal plants, Generation Rate Constraint (GRC) and Governor Dead Band (GDB) nonlinearity.
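
    The tuning loop can be sketched with SciPy's built-in differential evolution, here minimizing an ITAE objective (one of the objective functions mentioned above) for a toy first-order plant that stands in for the two-area AGC model; all parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def itae(gains, T_plant=0.5, dt=0.01, t_end=10.0):
        """ITAE of a unit-step response for a PI-controlled first-order plant."""
        kp, ki = gains
        y, integ, cost = 0.0, 0.0, 0.0
        for k in range(int(t_end / dt)):
            t = k * dt
            e = 1.0 - y                    # unit step reference
            integ += e * dt
            u = kp * e + ki * integ        # PI control law
            y += dt * (u - y) / T_plant    # first-order plant dynamics (Euler)
            cost += t * abs(e) * dt        # integral of time-weighted |error|
        return cost

    result = differential_evolution(itae, bounds=[(0.0, 10.0)] * 2, seed=1)
    print("optimal (Kp, Ki):", result.x)
    ```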

  6. Automatic recognition of conceptualization zones in scientific articles and two life science applications.

    Science.gov (United States)

    Liakata, Maria; Saha, Shyamasree; Dobnik, Simon; Batchelor, Colin; Rebholz-Schuhmann, Dietrich

    2012-04-01

    Scholarly biomedical publications report on the findings of a research investigation. Scientists use a well-established discourse structure to relate their work to the state of the art, express their own motivation and hypotheses and report on their methods, results and conclusions. In previous work, we have proposed ways to explicitly annotate the structure of scientific investigations in scholarly publications. Here we present the means to facilitate automatic access to the scientific discourse of articles by automating the recognition of 11 categories at the sentence level, which we call Core Scientific Concepts (CoreSCs). These include: Hypothesis, Motivation, Goal, Object, Background, Method, Experiment, Model, Observation, Result and Conclusion. CoreSCs provide the structure and context to all statements and relations within an article and their automatic recognition can greatly facilitate biomedical information extraction by characterizing the different types of facts, hypotheses and evidence available in a scientific publication. We have trained and compared machine learning classifiers (support vector machines and conditional random fields) on a corpus of 265 full articles in biochemistry and chemistry to automatically recognize CoreSCs. We have evaluated our automatic classifications against a manually annotated gold standard, and have achieved promising accuracies with 'Experiment', 'Background' and 'Model' being the categories with the highest F1-scores (76%, 62% and 53%, respectively). We have analysed the task of CoreSC annotation both from a sentence classification as well as a sequence labelling perspective and we present a detailed feature evaluation. The most discriminative features are local sentence features such as unigrams, bigrams and grammatical dependencies, while features encoding the document structure, such as section headings, also play an important role for some of the categories. We discuss the usefulness of automatically generated CoreSCs.
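
    A minimal sentence-classification baseline in the spirit of the unigram/bigram features described above could look as follows (scikit-learn, with toy sentences standing in for the annotated corpus); the authors' actual systems are SVM and CRF models with a much richer feature set.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy training data; a real system would use the 265 annotated articles.
    sentences = [
        "We hypothesise that the receptor mediates uptake.",
        "Samples were incubated for 2 h at 37 degrees.",
        "Table 2 shows a significant increase in yield.",
        "These data suggest a regulatory role.",
    ]
    labels = ["Hypothesis", "Experiment", "Result", "Conclusion"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(sentences, labels)
    print(clf.predict(["Cells were washed and lysed."]))
    ```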

  7. Automatic generation and simulation of urban building energy models based on city datasets for city-scale building retrofit analysis

    International Nuclear Information System (INIS)

    Chen, Yixing; Hong, Tianzhen; Piette, Mary Ann

    2017-01-01

    Highlights: •Developed methods and used data models to integrate city’s public building records. •Shading from neighborhood buildings strongly influences urban building performance. •A case study demonstrated the workflow, simulation and analysis of building retrofits. •CityBES retrofit analysis feature provides actionable information for decision making. •Discussed significance and challenges of urban building energy modeling. -- Abstract: Buildings in cities consume 30–70% of total primary energy, and improving building energy efficiency is one of the key strategies towards sustainable urbanization. Urban building energy models (UBEM) can support city managers to evaluate and prioritize energy conservation measures (ECMs) for investment and the design of incentive and rebate programs. This paper presents the retrofit analysis feature of City Building Energy Saver (CityBES) to automatically generate and simulate UBEM using EnergyPlus based on cities’ building datasets and user-selected ECMs. CityBES is a new open web-based tool to support city-scale building energy efficiency strategic plans and programs. The technical details of using CityBES for UBEM generation and simulation are introduced, including the workflow, key assumptions, and major databases. Also presented is a case study that analyzes the potential retrofit energy use and energy cost savings of five individual ECMs and two measure packages for 940 office and retail buildings in six city districts in northeast San Francisco, United States. The results show that: (1) all five measures together can save 23–38% of site energy per building; (2) replacing lighting with light-emitting diode lamps and adding air economizers to existing heating, ventilation and air-conditioning (HVAC) systems are most cost-effective, with average paybacks of 2.0 and 4.3 years, respectively; and (3) it is not economical to upgrade HVAC systems or replace windows in San Francisco due to the city’s mild climate.

  8. Physical controls on directed virus assembly at nanoscale chemical templates

    International Nuclear Information System (INIS)

    Cheung, C L; Chung, S; Chatterji, A; Lin, T; Johnson, J E; Hok, S; Perkins, J; De Yoreo, J

    2006-01-01

    Viruses are attractive building blocks for nanoscale heterostructures, but little is understood about the physical principles governing their directed assembly. In-situ force microscopy was used to investigate the organization of Cowpea Mosaic Virus engineered to bind specifically and reversibly at nanoscale chemical templates with sub-30 nm features. Morphological evolution and assembly kinetics were measured as virus flux and inter-viral potential were varied. The resulting morphologies were similar to those of atomic-scale epitaxial systems, but the underlying thermodynamics was analogous to that of colloidal systems in confined geometries. The 1D templates biased the location of initial cluster formation, introduced asymmetric sticking probabilities, and drove 1D and 2D condensation at subcritical volume fractions. The growth kinetics followed a t^(1/2) law controlled by the slow diffusion of viruses. The lateral expansion of virus clusters that initially form on the 1D templates following introduction of polyethylene glycol (PEG) into the solution suggests a significant role for weak interactions.

  9. A Deformable Template Model, with Special Reference to Elliptical Templates

    DEFF Research Database (Denmark)

    Hobolth, Asger; Pedersen, Jan; Jensen, Eva Bjørn Vedel

    2002-01-01

    This paper suggests a high-level continuous image model for planar star-shaped objects. Under this model, a planar object is a stochastic deformation of a star-shaped template. The residual process, describing the difference between the radius-vector function of the template and the object...

  10. pEVL: A Linear Plasmid for Generating mRNA IVT Templates With Extended Encoded Poly(A) Sequences

    Directory of Open Access Journals (Sweden)

    Alexandra E Grier

    2016-01-01

    Full Text Available Increasing demand for large-scale synthesis of in vitro transcribed (IVT) mRNA is being driven by the increasing use of mRNA for transient gene expression in cell engineering and therapeutic applications. An important determinant of IVT mRNA potency is the 3′ polyadenosine (poly(A)) tail, the length of which correlates with translational efficiency. However, present methods for generation of IVT mRNA rely on templates derived from circular plasmids or PCR products, in which homopolymeric tracts are unstable, thus limiting encoded poly(A) tail lengths to ≃120 base pairs (bp). Here, we have developed a novel method for generation of extended poly(A) tracts using a previously described linear plasmid system, pJazz. We find that linear plasmids can successfully propagate poly(A) tracts up to ≃500 bp in length for IVT mRNA production. We then modified pJazz by removing extraneous restriction sites, adding a T7 promoter sequence upstream from an extended multiple cloning site, and adding a unique type-IIS restriction site downstream from the encoded poly(A) tract to facilitate generation of IVT mRNA with precisely defined encoded poly(A) tracts and 3′ termini. The resulting plasmid, designated pEVL, can be used to generate IVT mRNA with consistent defined lengths and terminal residue(s).

  11. The guitar chord-generating algorithm based on complex network

    Science.gov (United States)

    Ren, Tao; Wang, Yi-fan; Du, Dan; Liu, Miao-miao; Siddiqi, Awais

    2016-02-01

    This paper aims to generate chords for popular songs automatically based on complex networks. Firstly, according to the characteristics of guitar tablature, six chord networks of popular songs by six pop singers are constructed and the properties of all the networks are summarized. By analyzing the diverse chord networks, the accompaniment rules and features are revealed, with which chords can be generated automatically. Secondly, in view of the characteristics of popular songs, a two-tiered network containing a verse network and a chorus network is constructed. With this network, the verse and chorus can be composed separately with a random walk algorithm. Thirdly, the musical motif is considered when generating chords, so that bad chord progressions can be revised. This makes the accompaniments sound more melodious. Finally, a popular song is chosen for chord generation, and the newly generated accompaniment sounds better than the one done by the composers.
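
    The core generation step, a weighted random walk over a chord-transition network, can be sketched as follows; the network below is a hypothetical toy, whereas the paper builds its networks from guitar tablatures of six pop singers.

    ```python
    import random

    # Hypothetical transition counts: how often one chord follows another.
    chord_net = {
        "C":  {"G": 5, "Am": 3, "F": 2},
        "G":  {"C": 4, "Am": 2, "Em": 1},
        "Am": {"F": 3, "G": 2},
        "F":  {"C": 4, "G": 3},
        "Em": {"Am": 2, "F": 1},
    }

    def random_walk(net, start="C", length=8, seed=None):
        """Generate a chord progression by a weighted random walk."""
        rng = random.Random(seed)
        walk = [start]
        for _ in range(length - 1):
            nbrs = net[walk[-1]]
            walk.append(rng.choices(list(nbrs), weights=list(nbrs.values()))[0])
        return walk

    print(random_walk(chord_net, seed=42))
    ```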

  12. Optimal gravitational search algorithm for automatic generation control of interconnected power systems

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2014-09-01

    Full Text Available An attempt is made for the effective application of the Gravitational Search Algorithm (GSA) to optimize PI/PIDF controller parameters in Automatic Generation Control (AGC) of interconnected power systems. Initially, comparison of several conventional objective functions reveals that ITAE yields better system performance. Then, the parameters of the GSA technique are properly tuned and suitable GSA control parameters are proposed. The superiority of the proposed approach is demonstrated by comparing the results with some recently published techniques such as Differential Evolution (DE), Bacteria Foraging Optimization Algorithm (BFOA) and Genetic Algorithm (GA). Additionally, a sensitivity analysis is carried out that demonstrates the robustness of the optimized controller parameters to wide variations in the operating loading condition and in the time constants of the speed governor, turbine, and tie-line power. Finally, the proposed approach is extended to a more realistic power system model by considering physical constraints such as reheat turbine, Generation Rate Constraint (GRC) and Governor Dead Band nonlinearity.

  13. Shapes and features of the primordial bispectrum

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Jinn-Ouk [Asia Pacific Center for Theoretical Physics, Cheongam-ro 67, Pohang, 37673 (Korea, Republic of); Palma, Gonzalo A.; Sypsas, Spyros, E-mail: jinn-ouk.gong@apctp.org, E-mail: gpalmaquilod@ing.uchile.cl, E-mail: s.sypsas@gmail.com [Departamento de Física, FCFM, Universidad de Chile, Blanco Encalada 2008, Santiago, 837.0415 Chile (Chile)

    2017-05-01

    If time-dependent disruptions from slow-roll occur during inflation, the correlation functions of the primordial curvature perturbation should have scale-dependent features, a case which is marginally supported by the cosmic microwave background (CMB) data. We offer a new approach to analyze the appearance of such features in the primordial bispectrum that yields new consistency relations and justifies the search for oscillating patterns modulated by orthogonal and local templates. Under the assumption of sharp features, we find that the cubic couplings of the curvature perturbation can be expressed in terms of the bispectrum in two specific momentum configurations, for example local and equilateral. This allows us to derive consistency relations among different bispectrum shapes, which in principle could be tested in future CMB surveys. Furthermore, based on the form of the consistency relations, we construct new two-parameter templates for features that include all the known shapes.

  14. 'Clicking' on the nanoscale: 1,3-dipolar cycloaddition of terminal acetylenes on azide functionalized, nanometric surface templates with nanometer resolution

    International Nuclear Information System (INIS)

    Haensch, Claudia; Hoeppener, Stephanie; Schubert, Ulrich S

    2009-01-01

    Electro-oxidative lithography is used as a tool to create chemical nanostructures on an n-octadecyltrichlorosilane (OTS) monolayer self-assembled on silicon. The use of a bromine precursor molecule, which is exclusively assembled on these chemical templates, can be used to further functionalize the nanostructures by the site-selective generation of azide functions and performing the highly effective 1,3-dipolar cycloaddition reaction with acetylene functionalized molecules. The versatility of this reaction scheme provides the potential to integrate a large variety of functional molecules, to tailor the surface properties of the nanostructures or to anchor molecular building blocks or particles in confined, pre-defined surface areas. The results demonstrated in the present study introduce a conceivable route towards the functionalization of chemically active surface templates with high fidelity and reliability. It is demonstrated that surface features with a lateral resolution of 50 nm functionalized with propargyl alcohol can be fabricated.

  15. An Empirical Ultraviolet Iron Spectrum Template Applicable to Active Galaxies

    DEFF Research Database (Denmark)

    Vestergaard, Marianne; Wilkes, B. J.

    2001-01-01

    Iron emission is often a severe contaminant in optical-ultraviolet spectra of active galaxies. Its presence complicates emission line studies. A viable solution, already successfully applied at optical wavelengths, is to use an empirical iron emission template. We have generated FeII and FeIII templates for ultraviolet active galaxy spectra based on HST archival 1100-3100 Å spectra of IZw1. Their application allows fitting and subtraction of the iron emission in active galaxy spectra. This work has shown that in particular CIII] lambda 1909 can be heavily contaminated by other line emission...

  16. Support vector machine for automatic pain recognition

    Science.gov (United States)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion, and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for the recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects faces in stored video frames using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural network based and eigenimage based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of an automatic pain recognition system.

  17. Polymeric Bicontinuous Microemulsions as Templates for Nanostructured Materials

    Science.gov (United States)

    Jones, Brad Howard

    Ternary blends of two homopolymers and a diblock copolymer can self-assemble into interpenetrating, three-dimensionally continuous networks with a characteristic length scale of ~100 nm. In this thesis, it is shown that these liquid phases, known as polymeric bicontinuous microemulsions (BμE), can be designed as versatile precursors to nanoporous materials having pores with uniform sizes of ~100 nm. The model blends from which the porous materials are derived are composed of polyethylene (PE) and a sacrificial polyolefin. The liquid BμE structure is captured by crystallization of the PE, and a three-dimensionally continuous pore network with a narrow pore size distribution is generated by selective extraction of the sacrificial component. The original BμE structure is retained in the resultant nanoporous PE. This monolithic material is then used as a template in the synthesis of other nanoporous materials for which structural control at the nm scale has traditionally been difficult to achieve. These materials, which include a high-temperature ceramic, polymeric thermosets, and a conducting polymer, are produced by a simple nanocasting process, providing an inverse replica of the PE template. On account of the BμE structure of the template, the product materials also possess three-dimensionally continuous pore networks with narrow size distributions centered at ~100 nm. The PE template is further used as a template for the production of hierarchically structured inorganic and polymeric materials by infiltration of mesostructured compounds into its pore network. In the former case, a hierarchically porous SiO2 material is demonstrated, simultaneously possessing two discrete, bicontinuous pore networks with sizes differing by over an order of magnitude. Finally, the templating procedures are extended to thin films supported on substrates and novel conductive polymer films are synthesized. The work described herein represents an unprecedented suite of

  18. Design and Analysis of an Automatic Test Paper Generation System Based on UML

    Institute of Scientific and Technical Information of China (English)

    刘慧梅

    2012-01-01

    This paper applies the object-oriented modeling language UML to the analysis and design of an automatic test paper generation system, and establishes an analysis and design model for the system, laying a solid foundation for its implementation.

  19. Mesoporous MEL, BEA, and FAU zeolite crystals obtained by in situ formation of carbon template over metal nanoparticles

    DEFF Research Database (Denmark)

    Abildstrøm, Jacob Oskar; Ali, Zahra Nasrudin; Mentzel, Uffe Vie

    2016-01-01

    Here, we report the synthesis and characterization of hierarchical zeolite materials with MEL, BEA and FAU structures. The synthesis is based on the carbon templating method with an in situ-generated carbon template. Through the decomposition of methane and deposition of coke over nickel nanoparticles supported on silica, a carbon–silica composite is obtained and exploited as a combined carbon template/silica source for the zeolite synthesis. The mesoporous zeolite materials were all prepared by hydrothermal crystallization in alkaline media followed by removal of the carbon template by combustion, which results in zeolite single crystals with intracrystalline pore volumes of up to 0.44 cm^3 g^-1. The prepared zeolite structures are characterized by XRD, SEM, TEM and N2 physisorption measurements.

  20. Automatic speech recognition for report generation in computed tomography

    International Nuclear Information System (INIS)

    Teichgraeber, U.K.M.; Ehrenstein, T.; Lemke, M.; Liebig, T.; Stobbe, H.; Hosten, N.; Keske, U.; Felix, R.

    1999-01-01

    Purpose: A study was performed to compare the performance of automatic speech recognition (ASR) with conventional transcription. Materials and Methods: 100 CT reports were generated by using ASR and 100 CT reports were dictated and written by medical transcriptionists. The time for dictation and correction of errors by the radiologist was assessed and the types of mistakes were analysed. The text recognition rate was calculated in both groups and the average time between completion of the imaging study by the technologist and generation of the written report was assessed. A commercially available speech recognition technology (ASKA Software, IBM Via Voice) running on a personal computer was used. Results: The time for dictation using digital voice recognition was 9.4±2.3 min compared to 4.5±3.6 min with an ordinary Dictaphone. The text recognition rate was 97% with digital voice recognition and 99% with medical transcriptionists. The average time from imaging completion to written report finalisation was reduced from 47.3 hours with medical transcriptionists to 12.7 hours with ASR. The analysis of misspellings demonstrated (ASR vs. medical transcriptionists): 3 vs. 4 syntax errors, 0 vs. 37 orthographic mistakes, 16 vs. 22 mistakes in substance, and 47 vs. erroneously applied terms. Conclusions: The use of digital voice recognition as a replacement for medical transcription is recommendable when an immediate availability of written reports is necessary. (orig.)

  1. Multistage feature extraction for accurate face alignment

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.

    2004-01-01

    We propose a novel multistage facial feature extraction approach using a combination of 'global' and 'local' techniques. At the first stage, we use template matching, based on an Edge-Orientation-Map for fast feature position estimation. Using this result, a statistical framework applying the Active

  2. BioModels: Content, Features, Functionality, and Use

    Science.gov (United States)

    Juty, N; Ali, R; Glont, M; Keating, S; Rodriguez, N; Swat, MJ; Wimalaratne, SM; Hermjakob, H; Le Novère, N; Laibe, C; Chelliah, V

    2015-01-01

    BioModels is a reference repository hosting mathematical models that describe the dynamic interactions of biological components at various scales. The resource provides access to over 1,200 models described in literature and over 140,000 models automatically generated from pathway resources. Most model components are cross-linked with external resources to facilitate interoperability. A large proportion of models are manually curated to ensure reproducibility of simulation results. This tutorial presents BioModels' content, features, functionality, and usage. PMID:26225232

  3. Synthesis of platinum nanowheels using a bicellar template.

    Science.gov (United States)

    Song, Yujiang; Dorin, Rachel M; Garcia, Robert M; Jiang, Ying-Bing; Wang, Haorong; Li, Peng; Qiu, Yan; van Swol, Frank; Miller, James E; Shelnutt, John A

    2008-09-24

    Disk-like surfactant bicelles provide a unique meso-structured reaction environment for templating the wet-chemical reduction of platinum(II) salt by ascorbic acid to produce platinum nanowheels. The Pt wheels are 496 ± 55 nm in diameter and possess thickened centers and radial dendritic nanosheets (about 2 nm in thickness) culminating in flared dendritic rims. The structural features of the platinum wheels arise from confined growth of platinum within the bilayer that is also limited at the edges of the bicelles. The size of CTAB/FC7 bicelles is observed to evolve with the addition of the Pt(II) complex and ascorbic acid. Synthetic control is demonstrated by varying the reaction parameters, including metal salt concentration, temperature, and total surfactant concentration. This study opens up opportunities for the use of other inhomogeneous soft templates for synthesizing metals, metal alloys, and possibly semiconductors with complex nanostructures.

  4. Development of tools for automatic generation of PLC code

    CERN Document Server

    Koutli, Maria; Rochez, Jacques

    This Master's thesis was performed at CERN, and more specifically in the EN-ICE-PLC section. The thesis describes the integration of two PLC platforms, which are based on the CODESYS development tool, into the CERN-defined industrial framework, UNICOS. CODESYS is a development tool for PLC programming, based on the IEC 61131-3 standard, and is adopted by many PLC manufacturers. The two PLC development environments are SoMachine from Schneider and TwinCAT from Beckhoff. The two CODESYS-compatible PLCs should be controlled by the SCADA system of Siemens, WinCC OA. The framework includes a library of Function Blocks (objects) for the PLC programs and a software tool for automatic generation of the PLC code based on this library, called UAB. The integration aimed to give a solution that is shared by both PLC platforms and was based on the PLCopen XML scheme. The developed tools were demonstrated by creating a control application for both PLC environments and by testing the behavior of the generated library code.

  5. Solution Approach to Automatic Generation Control Problem Using Hybridized Gravitational Search Algorithm Optimized PID and FOPID Controllers

    Directory of Open Access Journals (Sweden)

    DAHIYA, P.

    2015-05-01

    Full Text Available This paper presents the application of hybrid opposition based disruption operator in gravitational search algorithm (DOGSA to solve automatic generation control (AGC problem of four area hydro-thermal-gas interconnected power system. The proposed DOGSA approach combines the advantages of opposition based learning which enhances the speed of convergence and disruption operator which has the ability to further explore and exploit the search space of standard gravitational search algorithm (GSA. The addition of these two concepts to GSA increases its flexibility for solving the complex optimization problems. This paper addresses the design and performance analysis of DOGSA based proportional integral derivative (PID and fractional order proportional integral derivative (FOPID controllers for automatic generation control problem. The proposed approaches are demonstrated by comparing the results with the standard GSA, opposition learning based GSA (OGSA and disruption based GSA (DGSA. The sensitivity analysis is also carried out to study the robustness of DOGSA tuned controllers in order to accommodate variations in operating load conditions, tie-line synchronizing coefficient, time constants of governor and turbine. Further, the approaches are extended to a more realistic power system model by considering the physical constraints such as thermal turbine generation rate constraint, speed governor dead band and time delay.

  6. Examining multi-component DNA-templated nanostructures as imaging agents

    Science.gov (United States)

    Jaganathan, Hamsa

    2011-12-01

    Magnetic resonance imaging (MRI) is the leading non-invasive tool for disease imaging and diagnosis. Although MRI exhibits high spatial resolution for anatomical features, its contrast resolution is low. Imaging agents serve as an aid to distinguish different types of tissues within images. Gadolinium chelates, which are considered first-generation designs, can be toxic to health, while ultra-small, superparamagnetic nanoparticles (NPs) have low tissue-targeting efficiency and rapid bio-distribution, resulting in inadequate detection of the MRI signal and enhancement of image contrast. In order to improve the utility of MRI agents, the challenges in composition and structure need to be addressed. One-dimensional (1D), superparamagnetic nanostructures have been reported to enhance magnetic and in vivo properties and therefore have the potential to improve contrast enhancement in MRI images. In this dissertation, the structure of 1D, multi-component NP chains, scaffolded on DNA, was pre-clinically examined for potential MRI agents. First, research was focused on characterizing and understanding the mechanism of proton relaxation for DNA-templated NP chains using nuclear magnetic resonance (NMR) spectrometry. Proton relaxation and transverse relaxivity were higher in multi-component NP chains compared to disperse NPs, indicating that the arrangement of NPs on a 1D structure improved proton relaxation sensitivity. Second, in vitro evaluation for potential issues in toxicity and contrast efficiency in tissue environments was performed using a 3 Tesla clinical MRI scanner. Cell uptake of DNA-templated NP chains was enhanced after encapsulating the nanostructure with layers of polyelectrolytes and targeting ligands. Compared to dispersed NPs, DNA-templated NP chains improved MRI contrast in both epithelial basement membrane and colon cancer tumor scaffolds. The last part of the project was focused on developing a novel MRI agent that detects changes in DNA methylation.

  7. Fabrication of ordered arrays of micro- and nanoscale features with control over their shape and size via templated solid-state dewetting.

    Science.gov (United States)

    Ye, Jongpil

    2015-05-08

    Templated solid-state dewetting of single-crystal films has been shown to produce regular patterns of various shapes. However, the materials for which this patterning method is applicable, and the size range of the patterns produced, are still limited. Here, it is shown that ordered arrays of micro- and nanoscale features can be produced with control over their shape and size via solid-state dewetting of patches patterned from single-crystal palladium and nickel films of different thicknesses and orientations. The shape and size characteristics of the patterns are found to be widely controllable by varying the shape, width, thickness, and orientation of the initial patches. The morphological evolution of the patches is also dependent on the film material, with different dewetting behaviors observed in palladium and nickel films. The mechanisms underlying the pattern formation are explained in terms of the influence of the patch geometry and the surface energy anisotropy of the film material on Rayleigh-like instability. This mechanistic understanding of pattern formation can be used to design patches for the precise fabrication of micro- and nanoscale structures with the desired shapes and feature sizes.

  8. Motor automaticity in Parkinson’s disease

    Science.gov (United States)

    Wu, Tao; Hallett, Mark; Chan, Piu

    2017-01-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson's disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor deficits in PD associated with impaired automaticity, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of measures of automaticity in the early diagnosis of PD would be valuable. PMID:26102020

  9. Discriminative Chemical Patterns: Automatic and Interactive Design.

    Science.gov (United States)

    Bietz, Stefan; Schomburg, Karen T; Hilbig, Matthias; Rarey, Matthias

    2015-08-24

    The classification of molecules with respect to their inhibiting, activating, or toxicological potential constitutes a central aspect in the field of cheminformatics. Often, a discriminative feature is needed to distinguish two different molecule sets. Besides physicochemical properties, substructures and chemical patterns belong to the descriptors most frequently applied for this purpose. As a commonly used example of this descriptor class, SMARTS strings represent a powerful concept for the representation and processing of abstract chemical patterns. While their usage facilitates a convenient way to apply previously derived classification rules on new molecule sets, the manual generation of useful SMARTS patterns remains a complex and time-consuming process. Here, we introduce SMARTSminer, a new algorithm for the automatic derivation of discriminative SMARTS patterns from preclassified molecule sets. Based on a specially adapted subgraph mining algorithm, SMARTSminer identifies structural features that are frequent in only one of the given molecule classes. In comparison to elemental substructures, it also supports the consideration of general and specific SMARTS features. Furthermore, SMARTSminer is integrated into an interactive pattern editor named SMARTSeditor. This allows for an intuitive visualization on the basis of the SMARTSviewer concept as well as interactive adaption and further improvement of the generated patterns. Additionally, a new molecular matching feature provides an immediate feedback on a pattern's matching behavior across the molecule sets. We demonstrate the utility of the SMARTSminer functionality and its integration into the SMARTSeditor software in several different classification scenarios.
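
    The underlying scoring question, how well a SMARTS pattern separates two molecule sets, can be posed in a few lines with RDKit; SMARTSminer's actual subgraph-mining search is far more elaborate, and the pattern and molecules here are illustrative.

    ```python
    from rdkit import Chem

    def discrimination_score(smarts, actives, inactives):
        """Fraction of actives matched minus fraction of inactives matched;
        a crude stand-in for the frequency criterion a pattern miner optimizes."""
        patt = Chem.MolFromSmarts(smarts)
        def hits(smiles_list):
            return sum(Chem.MolFromSmiles(s).HasSubstructMatch(patt)
                       for s in smiles_list)
        return hits(actives) / len(actives) - hits(inactives) / len(inactives)

    actives = ["c1ccccc1O", "Cc1ccc(O)cc1"]   # phenols
    inactives = ["CCO", "CCCC"]               # no aromatic hydroxyl
    print(discrimination_score("[OX2H]c1ccccc1", actives, inactives))  # 1.0
    ```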

  10. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    OpenAIRE

    Bagci, Ulas; Erzin, Engin

    2009-01-01

    Music genre classification is an essential tool for music information retrieval systems and it has been finding critical applications in various media platforms. Two important problems of the automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the mis-classified feature population....

  11. Automatic Generation of Wide Dynamic Range Image without Pseudo-Edge Using Integration of Multi-Steps Exposure Images

    Science.gov (United States)

    Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi

    Digital cameras have recently been advancing rapidly. However, the captured image can differ markedly from the scene as perceived with the naked eye. Images of scenes with a wide dynamic range suffer from blown-out highlights and crushed blacks, problems that hardly arise in human vision, and these are a major cause of the difference between the captured image and the perceived scene. Blown-out highlights and crushed blacks occur because the dynamic range of the image sensor installed in a digital camera, such as a CCD or CMOS sensor, is narrower than that of the human visual system. To solve this problem, we propose an automatic method that determines an effective exposure range from the superposition of edges, and we integrate multi-step exposure images using this method. In addition, we attempt to remove pseudo-edges using a process that blends exposure values. As a result, a pseudo wide dynamic range image is obtained automatically.
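
    For comparison, a standard multi-exposure integration (Mertens exposure fusion, available in OpenCV) is sketched below; it is not the edge-superposition method proposed here, but it likewise merges a bracketed exposure series without an explicit camera response curve. File names are placeholders.

    ```python
    import cv2

    # Load a bracketed exposure series (placeholder file names).
    images = [cv2.imread(p) for p in ("under.jpg", "mid.jpg", "over.jpg")]

    # Mertens fusion weights each pixel by contrast, saturation and
    # well-exposedness, then blends the exposures into one image.
    fused = cv2.createMergeMertens().process(images)
    cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
    ```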

  12. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    Science.gov (United States)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. The system introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we replace only the image of the speech organs with a synthesized one, generated from a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. Tracking of the face motion in the video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques with the translated voice synthesis technique, an automatic multimodal translation suitable for video mail or automatic dubbing into other languages can be achieved.
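
    The 2D face-localization step can be sketched with OpenCV's normalized cross-correlation template matching; the 3D model-based estimation of head rotation described above goes beyond this sketch, and the file names are placeholders.

    ```python
    import cv2

    def track_face(frame_gray, template_gray):
        """Locate the face template in a frame; returns top-left corner and score."""
        res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        return max_loc, max_val

    # Usage: crop the template once, then track it in each subsequent frame.
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("face_template.png", cv2.IMREAD_GRAYSCALE)
    print(track_face(frame, template))
    ```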

  13. Fast fitting of non-Gaussian state-space models to animal movement data via Template Model Builder

    DEFF Research Database (Denmark)

    Albertsen, Christoffer Moesgaard; Whoriskey, Kim; Yurkowski, David

    2015-01-01

    We recommend using the Laplace approximation combined with automatic differentiation (as implemented in the novel R package Template Model Builder; TMB) for the fast fitting of continuous-time multivariate non-Gaussian SSMs. Through Argos satellite tracking data, we demonstrate the use of continuous-time models ... We are able to estimate additional parameters compared to previous methods, all without requiring a substantial increase in computational time. The model implementation is made available through the R package argosTrack.

  14. Growth of aragonite calcium carbonate nanorods in the biomimetic anodic aluminum oxide template

    Science.gov (United States)

    Lee, Inho; Han, Haksoo; Lee, Sang-Yup

    2010-04-01

    In this study, a biomimetic template was prepared and applied for growing calcium carbonate (CaCO3) nanorods whose shape and polymorphism were controlled. The biomimetic template was prepared by adsorbing catalytic dipeptides into the pores of an anodic aluminum oxide (AAO) membrane. Using this peptide-adsorbed template, mineralization and aggregation of CaCO3 was carried out to form large nanorods in the pores. The nanorods were aragonite and had a structure similar to a nanoneedle assembly. This aragonite nanorod formation was driven by both the AAO template and the catalytic function of the dipeptides. The AAO membrane pores promoted generation of the aragonite polymorph and guided nanorod formation by guiding the nanorod growth. The catalytic dipeptides promoted the aggregation and further dehydration of calcium species to form large nanorods. The functions of the AAO template and catalytic dipeptides were verified through several control experiments. This biomimetic approach makes possible the production of functional inorganic materials with controlled shapes and crystalline structures.

  15. Biometric Template Security

    Directory of Open Access Journals (Sweden)

    Abhishek Nagar

    2008-03-01

    Full Text Available Biometric recognition offers a reliable solution to the problem of user authentication in identity management systems. With the widespread deployment of biometric systems in various applications, there are increasing concerns about the security and privacy of biometric technology. Public acceptance of biometrics technology will depend on the ability of system designers to demonstrate that these systems are robust, have low error rates, and are tamper proof. We present a high-level categorization of the various vulnerabilities of a biometric system and discuss countermeasures that have been proposed to address these vulnerabilities. In particular, we focus on biometric template security which is an important issue because, unlike passwords and tokens, compromised biometric templates cannot be revoked and reissued. Protecting the template is a challenging task due to intrauser variability in the acquired biometric traits. We present an overview of various biometric template protection schemes and discuss their advantages and limitations in terms of security, revocability, and impact on matching accuracy. A template protection scheme with provable security and acceptable recognition performance has thus far remained elusive. Development of such a scheme is crucial as biometric systems are beginning to proliferate into the core physical and information infrastructure of our society.

  16. Secret-key and identification rates for biometric identification systems with protected templates

    NARCIS (Netherlands)

    Ignatenko, T.; Willems, F.M.J.

    2010-01-01

    In this paper we consider secret generation in biometric identification systems with protected templates. This problem is closely related to the study of the biometric identification capacity [Willems et al., 2003] and [O'Sullivan and Schmid, 2002] and the common randomness generation scheme

  17. Effect of BaSi2 template growth duration on the generation of defects and performance of p-BaSi2/n-Si heterojunction solar cells

    Science.gov (United States)

    Yachi, Suguru; Takabe, Ryota; Deng, Tianguo; Toko, Kaoru; Suemasu, Takashi

    2018-04-01

    We investigated the effect of BaSi2 template growth duration (t_RDE = 0-20 min) on the defect generation and performance of p-BaSi2/n-Si heterojunction solar cells. The p-BaSi2 layer grown by molecular beam epitaxy (MBE) was 15 nm thick with a hole concentration of 2 × 10^18 cm^-3. The conversion efficiency η increased for films grown at long t_RDE, owing to improvements of the open-circuit voltage (V_OC) and fill factor (FF), reaching a maximum of η = 8.9% at t_RDE = 7.5 min. However, η decreased at longer and shorter t_RDE owing to lower V_OC and FF. Using deep-level transient spectroscopy, we detected a hole trap level 190 meV above the valence band maximum for the sample grown without the template (t_RDE = 0 min). An electron trap level 106 meV below the conduction band minimum was detected for a sample grown with t_RDE = 20 min. The trap densities for both films were (1-2) × 10^13 cm^-3. The former originated from the diffusion of Ba into the n-Si region; the latter originated from defects in the template layer. The crystalline qualities of the template and MBE-grown layers were discussed. The root-mean-square surface roughness of the template reached a minimum of 0.51 nm at t_RDE = 7.5 min. The a-axis orientation of p-BaSi2 thin films degraded as t_RDE exceeded 10 min. In terms of p-BaSi2 crystalline quality and solar cell performance, the optimum t_RDE was determined to be 7.5 min, corresponding to approximately 4 nm in thickness.

  18. Decentralized automatic generation control of interconnected power systems incorporating asynchronous tie-lines.

    Science.gov (United States)

    Ibraheem; Hasan, Naimul; Hussein, Arkan Ahmed

    2014-01-01

    This paper presents the design of a decentralized automatic generation controller for an interconnected power system using PID, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The designed controllers are tested on identical two-area interconnected power systems consisting of thermal power plants. The interconnections between the two areas are considered as (i) AC tie-line only and (ii) asynchronous tie-line. The dynamic response analysis is carried out for a 1% load perturbation. The performance of the intelligent controllers based on GA and PSO has been compared with the conventional PID controller. The investigation of the system dynamic responses reveals that PSO has the best dynamic response as compared with the PID and GA controllers for both types of area interconnection.

  19. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.
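
    As a concrete illustration, a two-dimensional Thue-Morse-type set can be generated from the parity of binary digit sums; membership is decided by a finite automaton reading the base-2 digits of the coordinates. This sketch is one plausible construction of the two-dimensional Thue-Morse example, not necessarily the paper's exact definition.

    ```python
    def thue_morse_2d(n):
        """Points (i, j) in [0, n)^2 whose combined binary digit sum is even."""
        digit_sum = lambda k: bin(k).count("1")
        return {(i, j) for i in range(n) for j in range(n)
                if (digit_sum(i) + digit_sum(j)) % 2 == 0}

    print(sorted(thue_morse_2d(4)))
    ```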

  20. Open-loop glucose control: Automatic IOB-based super-bolus feature for commercial insulin pumps.

    Science.gov (United States)

    Rosales, Nicolás; De Battista, Hernán; Vehí, Josep; Garelli, Fabricio

    2018-06-01

    Although there has been significant progress towards closed-loop type 1 diabetes mellitus (T1DM) treatments, most diabetic patients still treat this metabolic disorder in an open-loop manner, based on insulin pump therapy (basal and bolus insulin infusion). This paper presents a method for automatic insulin bolus shaping based on insulin-on-board (IOB) as an alternative to conventional bolus dosing. The presented methodology allows the pump to generate the so-called super-bolus (SB) employing a two-compartment IOB dynamic model. The extra amount of insulin to boost the bolus and the basal cutoff time are computed using the duration of insulin action (DIA). In this way, the pump automatically re-establishes basal insulin when the IOB reaches its basal level. Thus, detrimental transients caused by manual or a-priori computations are avoided. The potential of this method is illustrated via in-silico trials over a 30-patient cohort in single-meal and single-day scenarios. In the former, improvements were found (standard treatment vs. automatic SB) both in percentage time in euglycemia (75 g meal: 81.9 ± 15.59 vs. 89.51 ± 11.95, p ≃ 0; 100 g meal: 75.12 ± 18.23 vs. 85.46 ± 14.96, p ≃ 0) and in time in hypoglycemia (75 g meal: 5.92 ± 14.48 vs. 0.97 ± 4.15, p = 0.008; 100 g meal: 9.5 ± 17.02 vs. 1.85 ± 7.05, p = 0.014). In the single-day scenario, considering intra-patient variability, the time in hypoglycemia was reduced (9.57 ± 14.48 vs. 4.21 ± 6.18, p = 0.028) and the time in euglycemia improved (79.46 ± 17.46 vs. 86.29 ± 11.73, p = 0.007). The automatic IOB-based SB thus has the potential to outperform the standard treatment, particularly for high-glycemic-index meals with high carbohydrate content. Both the glucose excursion and the time spent in hypoglycemia were reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
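
    A sketch of the two ingredients, a two-compartment IOB model and the super-bolus quantities, is given below; the rate constant, its link to the DIA, and the cutoff handling are simplifying assumptions, whereas the paper derives the boost and the basal cutoff time from its IOB model and the DIA.

    ```python
    import numpy as np

    def iob_curve(dia_h=4.0, dt_min=1.0):
        """IOB decay of a unit bolus: dx1/dt=-k*x1, dx2/dt=k*(x1-x2), IOB=x1+x2.

        k is set heuristically so the IOB has essentially decayed at the DIA.
        """
        k = 5.0 / (dia_h * 60.0)           # assumed rate constant, 1/min
        x1, x2 = 1.0, 0.0
        iob = []
        for _ in range(int(dia_h * 60.0 / dt_min)):
            iob.append(x1 + x2)
            x1 += dt_min * (-k * x1)
            x2 += dt_min * k * (x1 - x2)
        return np.array(iob)

    def super_bolus(meal_bolus_u, basal_u_per_h, cutoff_h):
        """Boost the meal bolus by the basal insulin withheld after it."""
        return meal_bolus_u + basal_u_per_h * cutoff_h

    print(super_bolus(6.0, 1.2, 2.0))      # 8.4 U, with basal suspended for 2 h
    print(iob_curve()[-1])                 # residual IOB fraction at the DIA
    ```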

  1. Synthesis of mesoporous zeolite catalysts by in situ formation of carbon template over nickel nanoparticles

    DEFF Research Database (Denmark)

    Abildstrøm, Jacob Oskar; Kegnæs, Marina; Hytoft, Glen

    2016-01-01

    A novel synthesis procedure for the preparation of hierarchical zeolite materials with MFI structure, based on the carbon templating method with an in situ generated carbon template, is presented in this study. Through chemical vapour deposition of coke on nickel nanoparticles supported on silica oxide, a carbon-silica composite is obtained and exploited as a combined carbon template/silica source for zeolite synthesis. This approach has several advantages in comparison with conventional carbon templating methods, where relatively complicated preparative strategies involving multistep impregnation procedures and rather expensive chemicals are used. Removal of the carbon template by combustion results in zeolite single crystals with intracrystalline pore volumes between 0.28 and 0.48 cm^3/g. The prepared zeolites are characterized by XRD, SEM, TEM and physisorption analysis. The isomerization...

  2. Automatic face morphing for transferring facial animation

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Bui, T.D.; Poel, Mannes; Heylen, Dirk K.J.; Nijholt, Antinus; Hamza, H.M.

    2003-01-01

    In this paper, we introduce a novel method of automatically finding the training set of RBF networks for morphing a prototype face to represent a new face. This is done by automatically specifying and adjusting corresponding feature points on a target face. The RBF networks are then used to transfer
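
    The morphing step, fitting an RBF mapping on corresponding feature points and applying it to every vertex of the prototype face, can be sketched with SciPy; the coordinates are hypothetical, and the paper's contribution is finding and adjusting such correspondences automatically.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Corresponding 2D feature points on prototype and target (hypothetical).
    proto = np.array([[30., 40.], [70., 40.], [50., 60.], [50., 80.]])
    target = np.array([[28., 43.], [74., 41.], [51., 63.], [49., 85.]])

    # RBF network mapping prototype coordinates to target coordinates;
    # applying it to all mesh vertices morphs the prototype face.
    warp = RBFInterpolator(proto, target, kernel="thin_plate_spline")
    print(warp(np.array([[40., 50.], [60., 70.]])))
    ```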

  3. Within-subject template estimation for unbiased longitudinal image analysis.

    Science.gov (United States)

    Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce

    2012-07-16

    Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images

    International Nuclear Information System (INIS)

    Qiu, J; Yang, D

    2015-01-01

    Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, so as to allow automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole body DRR images was reconstructed from multiple whole body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' worth of clinically qualified portal images was gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. 182 out of a total of 200 kV portal images were correctly detected, a rate of 91%. Conclusion: The proposed method can detect the patient setup and orientation quickly and automatically. It only requires the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce the potential errors in the workflow of radiation therapy and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be useful to guide subsequent image processing procedures, e.g. verification of patient daily setup accuracy. This work was partially supported by research grant from

  5. SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images

    Energy Technology Data Exchange (ETDEWEB)

    Qiu, J [Washington University in St Louis, Taian, Shandong (China); Yang, D [Washington University School of Medicine, St Louis, MO (United States)

    2015-06-15

    Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, allowing automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to serve as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling, and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation, and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. 182 of 200 kV portal images were detected correctly, a success rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically, requiring only the image intensity information in kV portal images. It can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and thus improve patient safety. In addition, the auto-detection results, such as the patient treatment site position and patient orientation, could be used to guide subsequent image processing procedures, e.g. verification of patient daily setup accuracy. This work was partially supported by a research grant from
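    As a rough illustration of the matching step (a generic normalized cross-correlation approach, not necessarily the authors' exact implementation), the sketch below down-samples an 8-bit portal image, enhances its contrast, and scores it against a fused whole-body DRR template, also testing a left-right flipped hypothesis to decide laterality. Function and variable names are illustrative.

      import cv2

      def detect_position_and_laterality(portal, template, scale=0.25):
          """Locate a 2D kV portal image within a whole-body DRR template.
          Assumes 8-bit grayscale inputs and a template larger than the
          (down-sampled) portal field. Returns (score, (x, y), flipped)."""
          img = cv2.resize(portal, None, fx=scale, fy=scale)
          tmpl = cv2.resize(template, None, fx=scale, fy=scale)
          img = cv2.equalizeHist(img)        # simple contrast enhancement
          best = (-1.0, None, False)
          for flipped in (False, True):      # test left/right laterality
              cand = cv2.flip(img, 1) if flipped else img
              scores = cv2.matchTemplate(tmpl, cand, cv2.TM_CCOEFF_NORMED)
              _, score, _, loc = cv2.minMaxLoc(scores)
              if score > best[0]:
                  best = (score, loc, flipped)
          return best

    The peak location identifies the treatment site along the fused whole-body template, and the winning hypothesis gives the orientation.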

  6. Automatic Texture and Orthophoto Generation from Registered Panoramic Views

    DEFF Research Database (Denmark)

    Krispel, Ulrich; Evers, Henrik Leander; Tamke, Martin

    2015-01-01

    … from range data only. In order to detect these elements, we developed a method that utilizes range data and color information from high-resolution panoramic images of indoor scenes, taken at the scanner's position. A proxy geometry is derived from the point clouds; orthographic views of the scene are automatically identified from the geometry and an image per view is created via projection. We combine methods of computer vision to train a classifier to detect the objects of interest from these orthographic views. Furthermore, these views can be used for automatic texturing of the proxy geometry.

  7. Templates, Numbers & Watercolors.

    Science.gov (United States)

    Clemesha, David J.

    1990-01-01

    Describes how a second-grade class used large templates to draw and paint five-digit numbers. The lesson integrated artistic knowledge and vocabulary with their mathematics lesson in place value. Students learned how draftspeople use templates, and they studied number paintings by Charles Demuth and Jasper Johns. (KM)

  8. Interactivity in automatic control: foundations and experiences

    OpenAIRE

    Dormido Bencomo, Sebastián; Guzmán Sánchez, José Luis; Costa Castelló, Ramon; Berenguel, M

    2012-01-01

    The first part of this paper presents the concepts of interactivity and visualization and their essential role in learning the fundamentals and techniques of automatic control. More than 10 years' experience of the authors in the development and design of interactive tools dedicated to the study of automatic control concepts is also presented. The second part of the paper summarizes the main features of the “Automatic Control with Interactive Tools” text that has been recently published by Pea...

  9. Automatic summary generating technology of vegetable traceability for information sharing

    Science.gov (United States)

    Zhenxuan, Zhang; Minjing, Peng

    2017-06-01

    In order to solve the problems of excessive data entry and the consequent high data-collection costs faced by farmers in traceability applications, an automatic summary generating technology of vegetable traceability for information sharing was proposed. The proposed technology is an effective way for farmers to share real-time vegetable planting information on social networking platforms to enhance their brands and attract more customers. In this research, the factors influencing vegetable traceability for customers were analyzed to establish the sub-indicators and target indicators and to propose a computing model based on the collected parameter values of the planted vegetables and standard legal systems on food safety. The proposed standard parameter model involves five steps: accessing the database, establishing target indicators, establishing sub-indicators, establishing a standard reference model, and computing scores of indicators. On the basis of establishing and optimizing the standards of food safety and the traceability system, the proposed technology could be accepted by more and more farmers and customers.
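    The abstract leaves the computing model abstract, so the sketch below is only one plausible reading of the five-step scheme, with entirely hypothetical sub-indicator names, ranges, and weights: each sub-indicator is scored against a standard reference range taken from food-safety regulations, and a weighted sum yields the target-indicator score.

      # Hypothetical standard reference model: legal range per sub-indicator.
      STANDARD = {"pesticide_residue_mg_kg": (0.0, 0.05),
                  "heavy_metal_mg_kg": (0.0, 0.10),
                  "days_since_treatment": (7, 365)}
      WEIGHTS = {"pesticide_residue_mg_kg": 0.5,
                 "heavy_metal_mg_kg": 0.3,
                 "days_since_treatment": 0.2}

      def sub_score(value, lo, hi):
          # Full score inside the legal range, zero outside.
          return 1.0 if lo <= value <= hi else 0.0

      def traceability_score(measured):
          """Weighted aggregate of sub-indicator scores, in [0, 1]."""
          return sum(w * sub_score(measured[k], *STANDARD[k])
                     for k, w in WEIGHTS.items())

      print(traceability_score({"pesticide_residue_mg_kg": 0.02,
                                "heavy_metal_mg_kg": 0.04,
                                "days_since_treatment": 14}))   # 1.0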

  10. Major NSSS design features of the Korean next generation reactor

    International Nuclear Information System (INIS)

    Kim, Insk; Kim, Dong-Su

    1999-01-01

    In order to meet national needs for increasing electric power generation in the Republic of Korea in the 2000s, the Korean nuclear development group (KNDG) is developing a standardized evolutionary advanced light water reactor (ALWR), the Korean Next Generation Reactor (KNGR). It is an advanced version of the successful Korean Standard Nuclear Power Plant (KSNP) design, which meets utility needs for safety enhancement, performance improvement, and ease of operation and maintenance. The KNGR design starts from the proven design concept of the currently operating KSNPs, with uprated power and advanced design features required by the utility. The KNGR design is currently in the final stage of basic design, and the paper describes the major nuclear steam supply system (NSSS) design features of the KNGR together with an introduction to the KNGR development program. (author)

  11. AUTOMATIC EXTRACTION AND TOPOLOGY RECONSTRUCTION OF URBAN VIADUCTS FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2015-08-01

    Full Text Available Urban viaducts are important infrastructures for the transportation system of a city. In this paper, an original method is proposed to automatically extract urban viaducts and reconstruct the topology of the viaduct network using only airborne LiDAR point cloud data, greatly simplifying the labor-intensive procedure of viaduct extraction and reconstruction. In our method, the point cloud is first filtered to divide all the points into ground points and non-ground points. A region-growing algorithm is adopted to find the viaduct points among the non-ground points, using features derived from the general prescriptive design rules for viaducts. The viaduct points are then projected into 2D images to extract the centerline of every viaduct, and cubic functions representing the viaduct passages are generated by least-squares fitting, from which the topology of the viaduct network can be rebuilt by combining the height information. Finally, a topological graph of the viaduct network is produced. The fully automatic method can potentially benefit urban navigation and city model reconstruction applications.
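    The centerline-fitting step reduces to an ordinary least-squares cubic per passage; a minimal sketch (numpy only, assuming the 2D centerline pixels of one viaduct passage have already been extracted):

      import numpy as np

      def fit_passage(centerline_xy):
          """Fit y = a*x^3 + b*x^2 + c*x + d to centerline points by
          least squares; returns coefficients (a, b, c, d)."""
          x, y = np.asarray(centerline_xy, dtype=float).T
          return np.polyfit(x, y, deg=3)

      # Example: noisy samples along a gently curving passage.
      x = np.linspace(0.0, 100.0, 50)
      y = 1e-4 * x**3 - 0.01 * x**2 + x + np.random.normal(0.0, 0.5, x.size)
      coeffs = fit_passage(np.column_stack([x, y]))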

  12. Field Robotics in Sports: Automatic Generation of guidance Lines for Automatic Grass Cutting, Striping and Pitch Marking of Football Playing Fields

    Directory of Open Access Journals (Sweden)

    Ole Green

    2011-03-01

    Full Text Available Progress is constantly being made and new applications are constantly emerging in the area of field robotics. In this paper, a promising application of field robotics to football playing fields is introduced. An algorithmic approach is presented for generating the waypoints required to guide a GPS-based field robot through a football playing field to automatically carry out periodic tasks such as cutting the grass, pitch and line marking, and lawn striping. The manual operation of these tasks requires very skilful personnel able to work long hours with very high concentration for the football pitch to comply with the standards of the Fédération Internationale de Football Association (FIFA). On the other hand, a GPS-guided vehicle or robot with three implements (grass mower, lawn striping roller, and track marking illustrator) is capable of working 24 h a day, in most weather and in harsh soil conditions, without loss of quality. The proposed approach for the automatic operation of football playing fields requires no or very limited human intervention; it therefore saves numerous working hours and frees workers to focus on other tasks. An economic feasibility study showed that the proposed method is economically superior to current manual practices.
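    A minimal sketch of waypoint generation for such parallel passes, assuming a rectangular pitch in local metric coordinates (the names and the simple back-and-forth pattern are illustrative; the paper's algorithm also covers pitch markings):

      def striping_waypoints(length_m, width_m, implement_m, overlap_m=0.1):
          """Back-and-forth (boustrophedon) waypoints covering a rectangular
          pitch with parallel passes of a given implement working width."""
          step = implement_m - overlap_m
          waypoints, x, forward = [], implement_m / 2.0, True
          while x <= width_m - implement_m / 2.0 + 1e-9:
              y0, y1 = (0.0, length_m) if forward else (length_m, 0.0)
              waypoints += [(x, y0), (x, y1)]
              x += step
              forward = not forward
          return waypoints

      # A 105 m x 68 m pitch mowed with a 1.2 m mower and 10 cm overlap:
      wps = striping_waypoints(105.0, 68.0, 1.2)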

  13. The Role of Templating in the Emergence of RNA from the Prebiotic Chemical Mixture

    Directory of Open Access Journals (Sweden)

    Andrew S. Tupper

    2017-10-01

    Full Text Available Biological RNA is a uniform polymer in three senses: it uses nucleotides of a single chirality; it uses only ribose sugars and four nucleobases rather than a mixture of other sugars and bases; and it uses only 3′-5′ bonds rather than a mixture of different bond types. We suppose that prebiotic chemistry would generate a diverse mixture of potential monomers, and that random polymerization would generate non-uniform strands of mixed chirality, monomer composition, and bond type. We ask what factors lead to the emergence of RNA from this mixture. We show that template-directed replication can lead to the emergence of all the uniform properties of RNA by the same mechanism. We study a computational model in which nucleotides react via polymerization, hydrolysis, and template-directed ligation. Uniform strands act as templates for ligation of shorter oligomers of the same type, whereas mixed strands do not act as templates. The three uniform properties emerge naturally when the ligation rate is high. If there is an exact symmetry, as in the case of chirality, the uniform property arises via a symmetry-breaking phase transition. If there is no exact symmetry, as with monomer selection and backbone regioselectivity, the uniform property emerges gradually as the rate of template-directed ligation is increased.
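    The sketch below is a deliberately stripped-down toy in the spirit of such a model, not the authors' simulation: strands are strings over two monomer types, random polymerization produces mixed strands, hydrolysis splits strands, and template-directed ligation joins only strands that match a uniform template, so raising its rate k_template drives the population toward uniformity.

      import random

      def step(pool, k_template=5.0):
          """One reaction event on a pool of strands (strings over 'L'/'D')."""
          r = random.random() * (2.0 + k_template)
          if r < 1.0 and len(pool) >= 2:            # random polymerization
              a, b = random.sample(range(len(pool)), 2)
              pool.append(pool.pop(max(a, b)) + pool.pop(min(a, b)))
          elif r < 2.0:                             # hydrolysis
              s = pool.pop(random.randrange(len(pool)))
              cut = random.randrange(1, len(s)) if len(s) > 1 else 0
              pool += [s[:cut], s[cut:]] if cut else [s]
          else:                                     # template-directed ligation
              uniform = [i for i, s in enumerate(pool) if len(set(s)) == 1]
              if len(uniform) >= 3:
                  t = pool[random.choice(uniform)][0]
                  same = [i for i in uniform if pool[i][0] == t]
                  if len(same) >= 2:                # two same-type strands join
                      a, b = random.sample(same, 2)
                      pool.append(pool.pop(max(a, b)) + pool.pop(min(a, b)))

      pool = [random.choice("LD") for _ in range(500)]
      for _ in range(20000):
          step(pool)
      frac_uniform = sum(len(set(s)) == 1 for s in pool) / len(pool)

    Sweeping k_template and recording frac_uniform reproduces, in caricature, the ligation-rate dependence the paper studies.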

  14. Action video game play facilitates the development of better perceptual templates

    Science.gov (United States)

    Bejjanki, Vikranth R.; Zhang, Ruyuan; Li, Renjie; Pouget, Alexandre; Green, C. Shawn; Lu, Zhong-Lin; Bavelier, Daphne

    2014-01-01

    The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with non-video-game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play. PMID:25385590

  15. Action video game play facilitates the development of better perceptual templates.

    Science.gov (United States)

    Bejjanki, Vikranth R; Zhang, Ruyuan; Li, Renjie; Pouget, Alexandre; Green, C Shawn; Lu, Zhong-Lin; Bavelier, Daphne

    2014-11-25

    The field of perceptual learning has identified changes in perceptual templates as a powerful mechanism mediating the learning of statistical regularities in our environment. By measuring threshold-vs.-contrast curves using an orientation identification task under varying levels of external noise, the perceptual template model (PTM) allows one to disentangle various sources of signal-to-noise changes that can alter performance. We use the PTM approach to elucidate the mechanism that underlies the wide range of improvements noted after action video game play. We show that action video game players make use of improved perceptual templates compared with non-video-game players, and we confirm a causal role for action video game play in inducing such improvements through a 50-h training study. Then, by adapting a recent neural model to this task, we demonstrate how such improved perceptual templates can arise from reweighting the connectivity between visual areas. Finally, we establish that action gamers do not enter the perceptual task with improved perceptual templates. Instead, although performance in action gamers is initially indistinguishable from that of nongamers, action gamers more rapidly learn the proper template as they experience the task. Taken together, our results establish for the first time to our knowledge the development of enhanced perceptual templates following action game play. Because such an improvement can facilitate the inference of the proper generative model for the task at hand, unlike perceptual learning that is quite specific, it thus elucidates a general learning mechanism that can account for the various behavioral benefits noted after action game play.
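    For reference, in the standard perceptual template model (after Lu and Dosher), sensitivity and the resulting threshold-vs.-contrast curve take the form below, where beta is the template gain to the signal, gamma a transducer nonlinearity, N_ext the external noise contrast, and N_mul, N_add the multiplicative and additive internal noise; these are the textbook equations, not parameter values from this study:

      d' = \frac{(\beta c)^{\gamma}}
                {\sqrt{N_{ext}^{2\gamma} + N_{mul}^{2}\left[(\beta c)^{2\gamma} + N_{ext}^{2\gamma}\right] + N_{add}^{2}}}

      c_{\tau} = \frac{1}{\beta}
                 \left[\frac{(1 + N_{mul}^{2})\, N_{ext}^{2\gamma} + N_{add}^{2}}
                            {1/d'^{2} - N_{mul}^{2}}\right]^{1/(2\gamma)}

    A retuned (improved) template manifests as better exclusion of external noise, lowering thresholds mainly in the high-N_ext regime, whereas reduced additive internal noise lowers thresholds mainly when external noise is low.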

  16. Development on quantitative safety analysis method of accident scenario. The automatic scenario generator development for event sequence construction of accident

    International Nuclear Information System (INIS)

    Kojima, Shigeo; Onoue, Akira; Kawai, Katsunori

    1998-01-01

    This study intends to develop a more sophisticated tool that will advance the current event tree method used in all PSA, and to focus on non-catastrophic events, specifically non-core-melt sequence scenarios not included in an ordinary PSA. In a non-catastrophic event PSA, it is necessary to consider various end states and failure combinations for the purpose of multiple scenario construction; the analysis work must therefore be reduced, and an automated method and tool is required. A scenario generator was developed that can automatically handle scenario construction logic and generate the enormous number of sequences logically identified by state-of-the-art methodology. To realize the scenario generator as a technical tool, a simulation model incorporating AI techniques and a graphical interface was introduced. The AI simulation model in this study was verified for its capability to evaluate actual systems. In this feasibility study, a spurious SI signal was selected to test the model's applicability. As a result, the basic capability of the scenario generator was demonstrated and important scenarios were generated. The human interface with a system and its operation, as well as time-dependent factors and their quantification in scenario modeling, was added utilizing the human scenario generator concept. The feasibility of the improved scenario generator was then tested for actual use. Automatic scenario generation with a certain level of credibility was achieved by this study. (author)
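    For flavor, the combinatorial core of any such generator (enumerating every branch combination of an event tree into sequences with probabilities) fits in a few lines; this generic sketch is not the authors' AI-based tool, and the headings and probabilities are invented:

      from itertools import product

      # Each heading: (name, probability of failure); success is 1 - p_fail.
      HEADINGS = [("SI signal valid", 0.01),
                  ("HPI starts", 0.02),
                  ("Operator terminates spurious SI", 0.05)]

      def enumerate_sequences(headings):
          """Yield (sequence labels, probability) for all branchings."""
          for outcome in product((False, True), repeat=len(headings)):
              p, labels = 1.0, []
              for (name, p_fail), failed in zip(headings, outcome):
                  p *= p_fail if failed else 1.0 - p_fail
                  labels.append(f"{name}: {'FAIL' if failed else 'OK'}")
              yield labels, p

      for seq, p in enumerate_sequences(HEADINGS):
          print(f"{p:.2e}  " + " / ".join(seq))

    A real generator additionally prunes branches conditional on earlier outcomes, attaches end states, and, as in this study, must model operator actions and time-dependent factors.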

  17. Clustering-based Feature Learning on Variable Stars

    Science.gov (United States)

    Mackenzie, Cristóbal; Pichara, Karim; Protopapas, Pavlos

    2016-04-01

    The success of automatic classification of variable stars depends strongly on the lightcurve representation. Usually, lightcurves are represented as a vector of many descriptors designed by astronomers called features. These descriptors are expensive in terms of computing, require substantial research effort to develop, and do not guarantee a good classification. Today, lightcurve representation is not entirely automatic; algorithms must be designed and manually tuned up for every survey. The amounts of data that will be generated in the future mean astronomers must develop scalable and automated analysis pipelines. In this work we present a feature learning algorithm designed for variable objects. Our method works by extracting a large number of lightcurve subsequences from a given set, which are then clustered to find common local patterns in the time series. Representatives of these common patterns are then used to transform lightcurves of a labeled set into a new representation that can be used to train a classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias using only labeled data. We test our method on data sets from the Massive Compact Halo Object survey and the Optical Gravitational Lensing Experiment; the results show that our classification performance is as good as and in some cases better than the performance achieved using traditional statistical features, while the computational cost is significantly lower. With these promising results, we believe that our method constitutes a significant step toward the automation of the lightcurve classification pipeline.

  18. CLUSTERING-BASED FEATURE LEARNING ON VARIABLE STARS

    International Nuclear Information System (INIS)

    Mackenzie, Cristóbal; Pichara, Karim; Protopapas, Pavlos

    2016-01-01

    The success of automatic classification of variable stars depends strongly on the lightcurve representation. Usually, lightcurves are represented as a vector of many descriptors designed by astronomers called features. These descriptors are expensive in terms of computing, require substantial research effort to develop, and do not guarantee a good classification. Today, lightcurve representation is not entirely automatic; algorithms must be designed and manually tuned up for every survey. The amounts of data that will be generated in the future mean astronomers must develop scalable and automated analysis pipelines. In this work we present a feature learning algorithm designed for variable objects. Our method works by extracting a large number of lightcurve subsequences from a given set, which are then clustered to find common local patterns in the time series. Representatives of these common patterns are then used to transform lightcurves of a labeled set into a new representation that can be used to train a classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias using only labeled data. We test our method on data sets from the Massive Compact Halo Object survey and the Optical Gravitational Lensing Experiment; the results show that our classification performance is as good as and in some cases better than the performance achieved using traditional statistical features, while the computational cost is significantly lower. With these promising results, we believe that our method constitutes a significant step toward the automation of the lightcurve classification pipeline.

  19. CLUSTERING-BASED FEATURE LEARNING ON VARIABLE STARS

    Energy Technology Data Exchange (ETDEWEB)

    Mackenzie, Cristóbal; Pichara, Karim [Computer Science Department, Pontificia Universidad Católica de Chile, Santiago (Chile); Protopapas, Pavlos [Institute for Applied Computational Science, Harvard University, Cambridge, MA (United States)

    2016-04-01

    The success of automatic classification of variable stars depends strongly on the lightcurve representation. Usually, lightcurves are represented as a vector of many descriptors designed by astronomers called features. These descriptors are expensive in terms of computing, require substantial research effort to develop, and do not guarantee a good classification. Today, lightcurve representation is not entirely automatic; algorithms must be designed and manually tuned up for every survey. The amounts of data that will be generated in the future mean astronomers must develop scalable and automated analysis pipelines. In this work we present a feature learning algorithm designed for variable objects. Our method works by extracting a large number of lightcurve subsequences from a given set, which are then clustered to find common local patterns in the time series. Representatives of these common patterns are then used to transform lightcurves of a labeled set into a new representation that can be used to train a classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias using only labeled data. We test our method on data sets from the Massive Compact Halo Object survey and the Optical Gravitational Lensing Experiment; the results show that our classification performance is as good as and in some cases better than the performance achieved using traditional statistical features, while the computational cost is significantly lower. With these promising results, we believe that our method constitutes a significant step toward the automation of the lightcurve classification pipeline.
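    A compressed sketch of the described pipeline (subsequence extraction, clustering, and re-encoding of lightcurves as pattern histograms), using scikit-learn's KMeans as a stand-in for the paper's clustering method; window length, step, and cluster count are arbitrary:

      import numpy as np
      from sklearn.cluster import KMeans

      def subsequences(mag, w=20, step=5):
          """Z-normalized sliding subsequences of a lightcurve."""
          out = np.array([mag[i:i + w] for i in range(0, len(mag) - w + 1, step)])
          return (out - out.mean(axis=1, keepdims=True)) / \
                 (out.std(axis=1, keepdims=True) + 1e-9)

      def learn_features(lightcurves, k=50):
          # Patterns are learned from labeled AND unlabeled lightcurves alike.
          km = KMeans(n_clusters=k, n_init=10)
          km.fit(np.vstack([subsequences(lc) for lc in lightcurves]))
          # Each lightcurve becomes a histogram over its nearest patterns,
          # a representation any off-the-shelf classifier can consume.
          X = np.array([np.bincount(km.predict(subsequences(lc)), minlength=k)
                        for lc in lightcurves])
          return km, X

      lcs = [np.random.randn(200) for _ in range(30)]   # stand-in lightcurves
      km, X = learn_features(lcs)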

  20. Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences

    Energy Technology Data Exchange (ETDEWEB)

    Dowling, Jason A., E-mail: jason.dowling@csiro.au [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); University of Newcastle, Callaghan, New South Wales (Australia); Sun, Jidi [University of Newcastle, Callaghan, New South Wales (Australia); Pichler, Peter [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Rivest-Hénault, David; Ghose, Soumya [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); Richardson, Haylea [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Wratten, Chris; Martin, Jarad [University of Newcastle, Callaghan, New South Wales (Australia); Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Arm, Jameen [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Best, Leah [Department of Radiology, Hunter New England Health, New Lambton, New South Wales (Australia); Chandra, Shekhar S. [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland (Australia); Fripp, Jurgen [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); Menk, Frederick W. [University of Newcastle, Callaghan, New South Wales (Australia); Greer, Peter B. [University of Newcastle, Callaghan, New South Wales (Australia); Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia)

    2015-12-01

    Purpose: To validate automatic substitute computed tomography (sCT) scans generated from standard T2-weighted (T2w) magnetic resonance (MR) pelvic scans for MR-Sim prostate treatment planning. Patients and Methods: A Siemens Skyra 3T MR imaging (MRI) scanner with laser bridge, flat couch, and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole-pelvis MRI scan (1.6 mm 3-dimensional isotropic T2w SPACE [Sampling Perfection with Application optimized Contrasts using different flip angle Evolution] sequence) was acquired. Three additional small field of view scans were acquired: T2w, T2*w, and T1w flip angle 80° for gold fiducials. Patients received a routine planning CT scan. Manual contouring of the prostate, rectum, bladder, and bones was performed independently on the CT and MR scans. Three experienced observers contoured each organ on MRI, allowing interobserver quantification. To generate a training database, each patient CT scan was coregistered to their whole-pelvis T2w using symmetric rigid registration and structure-guided deformable registration. A new multi-atlas local weighted voting method was used to generate automatic contours and sCT results. Results: The mean error in Hounsfield units between the sCT and corresponding patient CT (within the body contour) was 0.6 ± 14.7 (mean ± 1 SD), with a mean absolute error of 40.5 ± 8.2 Hounsfield units. Automatic contouring results were very close to the expert interobserver level (Dice similarity coefficient): prostate 0.80 ± 0.08, bladder 0.86 ± 0.12, rectum 0.84 ± 0.06, bones 0.91 ± 0.03, and body 1.00 ± 0.003. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same dose prescription was found to be 0.3% ± 0.8%. The 3-dimensional γ pass rate was 1.00 ± 0.00 (2 mm/2%). Conclusions: The MR-Sim setup and automatic s
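    A minimal numpy sketch of the multi-atlas local weighted voting step, assuming every atlas (MR, CT) pair has already been deformably registered to the target MR and the MR intensities are normalized across scans; the Gaussian similarity kernel and its width are illustrative choices, not the paper's exact weighting:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_weighted_voting(target_mr, atlas_mrs, atlas_cts,
                                sigma=50.0, patch=5):
          """Fuse registered atlas CTs into a substitute CT: each voxel's
          HU value is an average of atlas HU values, weighted by local
          MR patch similarity around that voxel."""
          num = np.zeros(target_mr.shape, dtype=float)
          den = np.zeros(target_mr.shape, dtype=float)
          for mr, ct in zip(atlas_mrs, atlas_cts):
              # Mean squared MR difference over a local patch per voxel.
              ssd = uniform_filter((target_mr - mr) ** 2, size=patch)
              w = np.exp(-ssd / (2.0 * sigma ** 2))
              num += w * ct
              den += w
          return num / np.maximum(den, 1e-12)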