WorldWideScience

Sample records for automatically generated anatomically

  1. Algorithms to automatically quantify the geometric similarity of anatomical surfaces

    CERN Document Server

    Boyer, D; Clair, E St; Puente, J; Funkhouser, T; Patel, B; Jernvall, J; Daubechies, I

    2011-01-01

    We describe new approaches for distances between pairs of 2-dimensional surfaces (embedded in 3-dimensional space) that use local structures and global information contained in inter-structure geometric relationships. We present algorithms to automatically determine these distances as well as geometric correspondences. This is motivated by the aspiration of students of natural science to understand the continuity of form that unites the diversity of life. At present, scientists using physical traits to study evolutionary relationships among living and extinct animals analyze data extracted from carefully defined anatomical correspondence points (landmarks). Identifying and recording these landmarks is time consuming and can be done accurately only by trained morphologists. This renders these studies inaccessible to non-morphologists, and causes phenomics to lag behind genomics in elucidating evolutionary patterns. Unlike other algorithms presented for morphological correspondences our approach does not requir...
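
    As a rough illustration of the general idea of landmark-free comparison (not the algorithm of Boyer et al.), the following sketch computes a simple symmetric nearest-neighbour dissimilarity between two point-sampled surfaces after centring and scale normalisation; the synthetic surfaces and the normalisation choices are assumptions made for the example.

```python
# Hypothetical illustration: a crude landmark-free dissimilarity between two
# point-sampled surfaces (NOT the algorithm described in the record above).
import numpy as np
from scipy.spatial import cKDTree

def normalize(points):
    """Center a point cloud and scale it to unit RMS radius."""
    centered = points - points.mean(axis=0)
    return centered / np.sqrt((centered ** 2).sum(axis=1).mean())

def surface_dissimilarity(a, b):
    """Symmetric mean nearest-neighbour distance between two (N, 3) clouds."""
    a, b = normalize(a), normalize(b)
    d_ab = cKDTree(b).query(a)[0].mean()   # a -> b
    d_ba = cKDTree(a).query(b)[0].mean()   # b -> a
    return 0.5 * (d_ab + d_ba)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sphere = rng.normal(size=(2000, 3))
    sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
    ellipsoid = sphere * np.array([1.0, 1.0, 1.6])   # a deformed copy
    print(surface_dissimilarity(sphere, ellipsoid))
```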

  2. Automatic generation of documents

    OpenAIRE

    Rosa Gini; Jacopo Pasquini

    2006-01-01

    This paper describes a natural interaction between Stata and markup languages. Stata’s programming and analysis features, together with the flexibility in output formatting of markup languages, allow generation and/or update of whole documents (reports, presentations on screen or web, etc.). Examples are given for both LaTeX and HTML. Stata’s commands are mainly dedicated to analysis of data on a computer screen and output of analysis stored in a log file available to researchers for later re...

  3. 4D measurement system for automatic location of anatomical structures

    Science.gov (United States)

    Witkowski, Marcin; Sitnik, Robert; Kujawińska, Małgorzata; Rapp, Walter; Kowalski, Marcin; Haex, Bart; Mooshake, Sven

    2006-04-01

    Orthopedics and neurosciences are fields of medicine where the analysis of objective movement parameters is extremely important for clinical diagnosis. Moreover, as there are significant differences between static and dynamic parameters, there is a strong need for analyzing anatomical structures under functional conditions. In clinical gait analysis the benefits of kinematic methods are undoubted. In this paper we present a 4D (3D + time) measurement system capable of automatically locating selected anatomical structures by finding and tracking the structures' position and orientation over time. The presented system is designed to help a general practitioner diagnose selected lower-limb dysfunctions (e.g. knee injuries) and to determine whether a patient should be referred for further examination (e.g. X-ray or MRI). The measurement system consists of hardware and software components. For the hardware part we adapt the laser triangulation method. In this way we can evaluate functional and dynamic movements in a contact-free, non-invasive way, without the use of potentially harmful radiation. Furthermore, in contrast to marker-based video-tracking systems, no preparation time is required. The software part consists of a data acquisition module, an image processing and point cloud (a set of points described by coordinates (x, y, z)) calculation module, a preliminary processing module, a feature-searching module and an external biomechanical module. The paper briefly presents the modules mentioned above, with a focus on the feature-searching module. We also present some measurement and analysis results, including parameter maps, landmark trajectories over time and an animation of a simplified model of the lower limbs.

  4. Automatic Generation of Technical Documentation

    OpenAIRE

    Reiter, Ehud; Mellish, Chris; Levine, John

    1994-01-01

    Natural-language generation (NLG) techniques can be used to automatically produce technical documentation from a domain knowledge base and linguistic and contextual models. We discuss this application of NLG technology from both a technical and a usefulness (costs and benefits) perspective. This discussion is based largely on our experiences with the IDAS documentation-generation project, and the reactions various interested people from industry have had to IDAS. We hope that this summary of ...

  5. Automatic Generation of Technical Documentation

    CERN Document Server

    Reiter, Ehud; Mellish, Chris; Levine, John

    1994-01-01

    Natural-language generation (NLG) techniques can be used to automatically produce technical documentation from a domain knowledge base and linguistic and contextual models. We discuss this application of NLG technology from both a technical and a usefulness (costs and benefits) perspective. This discussion is based largely on our experiences with the IDAS documentation-generation project, and the reactions various interested people from industry have had to IDAS. We hope that this summary of our experiences with IDAS and the lessons we have learned from it will be beneficial for other researchers who wish to build technical-documentation generation systems.

  6. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  7. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael

    2014-05-01

    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  8. Traceability Through Automatic Program Generation

    Science.gov (United States)

    Richardson, Julian; Green, Jeff

    2003-01-01

    Program synthesis is a technique for automatically deriving programs from specifications of their behavior. One of the arguments made in favour of program synthesis is that it allows one to trace from the specification to the program. One way in which traceability information can be derived is to augment the program synthesis system so that manipulations and calculations it carries out during the synthesis process are annotated with information on what the manipulations and calculations were and why they were made. This information is then accumulated throughout the synthesis process, at the end of which, every artifact produced by the synthesis is annotated with a complete history relating it to every other artifact (including the source specification) which influenced its construction. This approach requires modification of the entire synthesis system - which is labor-intensive and hard to do without influencing its behavior. In this paper, we introduce a novel, lightweight technique for deriving traceability from a program specification to the corresponding synthesized code. Once a program has been successfully synthesized from a specification, small changes are systematically made to the specification and the effects on the synthesized program observed. We have partially automated the technique and applied it in an experiment to one of our program synthesis systems, AUTOFILTER, and to the GNU C compiler, GCC. The results are promising: 1. Manual inspection of the results indicates that most of the connections derived from the source (a specification in the case of AUTOFILTER, C source code in the case of GCC) to its generated target (C source code in the case of AUTOFILTER, assembly language code in the case of GCC) are correct. 2. Around half of the lines in the target can be traced to at least one line of the source. 3. Small changes in the source often induce only small changes in the target.
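
    The perturbation idea described above can be mimicked in a few lines: change one specification line at a time, re-run the generator, and record which output lines differ. The sketch below does this for a made-up line-oriented generator (a stand-in, not AUTOFILTER or GCC), using difflib to compare outputs.

```python
# Hypothetical sketch of perturbation-based traceability: mutate one source
# line at a time, re-run the generator, and record which output lines change.
import difflib

def toy_generator(spec_lines):
    """Stand-in 'synthesizer': emits one banner plus two lines per spec line."""
    out = ["// generated"]
    for line in spec_lines:
        out.append(f"int {line.strip()}_value;")
        out.append(f"void set_{line.strip()}(int v);")
    return out

def trace(spec_lines):
    baseline = toy_generator(spec_lines)
    links = {}
    for i, line in enumerate(spec_lines):
        mutated = list(spec_lines)
        mutated[i] = line + "_mut"               # small systematic change
        changed = toy_generator(mutated)
        diff = difflib.SequenceMatcher(None, baseline, changed)
        touched = set()
        for tag, a1, a2, _, _ in diff.get_opcodes():
            if tag != "equal":
                touched.update(range(a1, a2))
        links[i] = sorted(touched)               # spec line -> output lines
    return links

print(trace(["alpha", "beta"]))
```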

  9. Automatic detection of anatomical landmarks in uterine cervix images.

    Science.gov (United States)

    Greenspan, Hayit; Gordon, Shiri; Zimmerman, Gali; Lotenberg, Shelly; Jeronimo, Jose; Antani, Sameer; Long, Rodney

    2009-03-01

    The work focuses on a unique medical repository of digital cervicographic images ("Cervigrams") collected by the National Cancer Institute (NCI) in longitudinal multiyear studies. NCI, together with the National Library of Medicine (NLM), is developing a unique web-accessible database of the digitized cervix images to study the evolution of lesions related to cervical cancer. Tools are needed for automated analysis of the cervigram content to support cancer research. We present a multistage scheme for segmenting and labeling regions of anatomical interest within the cervigrams. In particular, we focus on the extraction of the cervix region and fine detection of the cervix boundary; specular reflection is eliminated as an important preprocessing step; in addition, the entrance to the endocervical canal (the "os"), is detected. Segmentation results are evaluated on three image sets of cervigrams that were manually labeled by NCI experts.

  10. Automatic anatomical structures location based on dynamic shape measurement

    Science.gov (United States)

    Witkowski, Marcin; Rapp, Walter; Sitnik, Robert; Kujawinska, Malgorzata; Vander Sloten, Jos; Haex, Bart; Bogaert, Nico; Heitmann, Kjell

    2005-09-01

    New image processing methods and active photonics apparatus have made possible the development of relatively inexpensive optical systems for complex shape and object measurements. We present a dynamic 360° scanning method for the analysis of human lower-body biomechanics, with an emphasis on the knee joint. An anatomical structure of high medical interest that can be scanned and analyzed in this way is the patella. Tracking patella position and orientation under dynamic conditions may help detect pathological patella movements and support knee joint disease diagnosis. The processed data are obtained from a dynamic laser triangulation surface measurement system able to capture slow to normal movements with a scan frequency between 15 and 30 Hz. These frequency rates are sufficient to capture controlled movements used, e.g., for medical examination purposes. The purpose of the work presented is to develop surface analysis methods that may support the diagnosis of motor abilities of the lower limbs. The paper presents algorithms used to process the acquired lower-limb surface data in order to find the position and orientation of the patella. The algorithms implemented include input data preparation, curvature description methods, knee region discrimination and calculation of the assumed patella position/orientation. Additionally, a method of 4D (3D + time) medical data visualization is proposed, and some exemplary results are presented.
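
    As a hedged illustration of the curvature-description step (not the authors' pipeline), the sketch below computes a local-PCA "surface variation" score over a point cloud, a common curvature proxy that could be used to flag strongly curved regions such as the patella; the synthetic cloud and neighbourhood size are assumptions.

```python
# Hypothetical sketch: a crude curvature proxy ("surface variation") from local
# PCA over a point cloud, which could flag strongly curved regions.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=20):
    """lambda_min / (lambda_1 + lambda_2 + lambda_3) of the local covariance."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    scores = np.empty(len(points))
    for i, nb in enumerate(idx):
        local = points[nb] - points[nb].mean(axis=0)
        eigvals = np.linalg.eigvalsh(local.T @ local)   # ascending order
        scores[i] = eigvals[0] / eigvals.sum()
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # flat plane with a small bump standing in for a curved anatomical feature
    xy = rng.uniform(-1, 1, size=(3000, 2))
    z = 0.3 * np.exp(-((xy ** 2).sum(axis=1)) / 0.05)
    cloud = np.column_stack([xy, z])
    s = surface_variation(cloud)
    print("highest-curvature point:", cloud[np.argmax(s)])
```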

  11. Semi-Automatic Anatomical Tree Matching for Landmark-Based Elastic Registration of Liver Volumes

    Directory of Open Access Journals (Sweden)

    Klaus Drechsler

    2010-01-01

    One promising approach to register liver volume acquisitions is based on the branching points of the vessel trees as anatomical landmarks inherently available in the liver. Automated tree matching algorithms were proposed to automatically find pair-wise correspondences between two vessel trees. However, to the best of our knowledge, none of the existing automatic methods are completely error free. After a review of current literature and methodologies on the topic, we propose an efficient interaction method that can be employed to support tree matching algorithms with important pre-selected correspondences or after an automatic matching to manually correct wrongly matched nodes. We used this method in combination with a promising automatic tree matching algorithm also presented in this work. The proposed method was evaluated by 4 participants and a CT dataset that we used to derive multiple artificial datasets.

  12. Automatic Segmentation Framework of Building Anatomical Mouse Model for Bioluminescence Tomography

    OpenAIRE

    Abdullah Alali

    2013-01-01

    Bioluminescence tomography is known as a highly ill-posed inverse problem. To improve the reconstruction performance by introducing anatomical structures as a priori knowledge, an automatic segmentation framework has been proposed in this paper to extract the mouse whole-body organs and tissues, which makes it possible to build up a heterogeneous mouse model for reconstruction of bioluminescence tomography. Finally, an in vivo mouse experiment has been conducted to evaluate this framework by using an X...

  13. Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature

    OpenAIRE

    Destrieux, Christophe; Fischl, Bruce; Dale, Anders; Halgren, Eric

    2010-01-01

    Precise localization of sulco-gyral structures of the human cerebral cortex is important for the interpretation of morpho-functional data, but requires anatomical expertise and is time consuming because of the brain's geometric complexity. Software developed to automatically identify sulco-gyral structures has improved substantially as a result of techniques providing topologically-correct reconstructions permitting inflated views of the human brain. Here we describe a complete parcellation o...

  14. AUTOMATIC CAPTION GENERATION FOR ELECTRONICS TEXTBOOKS

    OpenAIRE

    Veena Thakur; Trupti Gedam

    2015-01-01

    Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify...

  15. Hierarchical word clustering - automatic thesaurus generation

    OpenAIRE

    Hodge, V.J.; Austin, J.

    2002-01-01

    In this paper, we propose a hierarchical, lexical clustering neural network algorithm that automatically generates a thesaurus (synonym abstraction) using purely stochastic information derived from unstructured text corpora and requiring no prior word classifications. The lexical hierarchy overcomes the Vocabulary Problem by accommodating paraphrasing through using synonym clusters and overcomes Information Overload by focusing search within cohesive clusters. We describe existing word catego...

  16. Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature.

    Science.gov (United States)

    Destrieux, Christophe; Fischl, Bruce; Dale, Anders; Halgren, Eric

    2010-10-15

    Precise localization of sulco-gyral structures of the human cerebral cortex is important for the interpretation of morpho-functional data, but requires anatomical expertise and is time consuming because of the brain's geometric complexity. Software developed to automatically identify sulco-gyral structures has improved substantially as a result of techniques providing topologically correct reconstructions permitting inflated views of the human brain. Here we describe a complete parcellation of the cortical surface using standard internationally accepted nomenclature and criteria. This parcellation is available in the FreeSurfer package. First, a computer-assisted hand parcellation classified each vertex as sulcal or gyral, and these were then subparcellated into 74 labels per hemisphere. Twelve datasets were used to develop rules and algorithms (reported here) that produced labels consistent with anatomical rules as well as automated computational parcellation. The final parcellation was used to build an atlas for automatically labeling the whole cerebral cortex. This atlas was used to label an additional 12 datasets, which were found to have good concordance with manual labels. This paper presents a precisely defined method for automatically labeling the cortical surface in standard terminology. PMID:20547229

  17. First performance evaluation of software for automatic segmentation, labeling and reformation of anatomical aligned axial images of the thoracolumbar spine at CT

    International Nuclear Information System (INIS)

    Highlights: • Automatic segmentation and labeling of the thoracolumbar spine. • Automatically generated double-angulated and aligned axial images of spine segments. • High degree of accuracy for the symmetric depiction of anatomical structures. • Time-saving and may improve workflow in daily practice. -- Abstract: Objectives: To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. Material and methods: 77 patients (28 women, 49 men, mean age 65.3 ± 14.4 years) with known or suspected spinal disorders (degenerative spine disease n = 32; disc herniation n = 36; traumatic vertebral fractures n = 9) underwent 64-slice MDCT with thin-slab reconstruction. The time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images of both reconstruction methods were assessed by two observers regarding the accuracy of the symmetric depiction of anatomical structures. Results: In 33 cases double-angulated axial images were created for 1 vertebra, in 28 cases for 2 vertebrae and in 16 cases for 3 vertebrae. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p < 0.05). Automatic reconstruction was time-saving in cases of 2 or more vertebrae (p < 0.05). Both reconstruction methods revealed good image quality with excellent inter-observer agreement. Conclusion: The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time

  18. First performance evaluation of software for automatic segmentation, labeling and reformation of anatomical aligned axial images of the thoracolumbar spine at CT

    Energy Technology Data Exchange (ETDEWEB)

    Scholtz, Jan-Erik, E-mail: janerikscholtz@gmail.com; Wichmann, Julian L.; Kaup, Moritz; Fischer, Sebastian; Kerl, J. Matthias; Lehnert, Thomas; Vogl, Thomas J.; Bauer, Ralf W.

    2015-03-15

    Highlights: • Automatic segmentation and labeling of the thoracolumbar spine. • Automatically generated double-angulated and aligned axial images of spine segments. • High degree of accuracy for the symmetric depiction of anatomical structures. • Time-saving and may improve workflow in daily practice. -- Abstract: Objectives: To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. Material and methods: 77 patients (28 women, 49 men, mean age 65.3 ± 14.4 years) with known or suspected spinal disorders (degenerative spine disease n = 32; disc herniation n = 36; traumatic vertebral fractures n = 9) underwent 64-slice MDCT with thin-slab reconstruction. The time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images of both reconstruction methods were assessed by two observers regarding the accuracy of the symmetric depiction of anatomical structures. Results: In 33 cases double-angulated axial images were created for 1 vertebra, in 28 cases for 2 vertebrae and in 16 cases for 3 vertebrae. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p < 0.05). Automatic reconstruction was time-saving in cases of 2 or more vertebrae (p < 0.05). Both reconstruction methods revealed good image quality with excellent inter-observer agreement. Conclusion: The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time

  19. Automatic Test Pattern Generation for Digital Circuits

    Directory of Open Access Journals (Sweden)

    S. Hemalatha

    2014-04-01

    The complexity and density of digital circuits are increasing, and at the same time higher quality and reliability are expected. This leads to high test costs and makes validation more complex. The main aim is to develop a complete behavioral fault simulation and automatic test pattern generation (ATPG) system for digital circuits modeled in Verilog and VHDL. An integrated Automatic Test Generation (ATG) and Automatic Test Executing/Equipment (ATE) system for complex boards is developed here. An approach to using memristors (resistors with memory) in programmable analog circuits is also described. The main idea consists in a circuit design in which low voltages are applied to memristors during their operation as analog circuit elements and high voltages are used to program the memristor's states. This way, as demonstrated in recent experiments, the state of the memristors does not essentially change during analog-mode operation. As an example of our approach, we have built several programmable analog circuits demonstrating memristor-based programming of threshold, gain and frequency. In these circuits the role of the memristor is played by a memristor emulator developed by us. A multiplexer is developed to generate a class of minimum transition sequences. The entire hardware is realized as digital logic circuits and the test results are simulated in ModelSim software. The results of this research show that behavioral fault simulation will remain a highly attractive alternative for the future generation of VLSI and systems-on-chip (SoC).

  20. Different Manhattan project: automatic statistical model generation

    Science.gov (United States)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.
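
    A toy version of top-down propagation of statistical parameters is sketched below: a region's mean building height is passed down to its quadrants with added noise until leaf blocks are reached. The parameters and subdivision rule are invented for illustration and are not the authors' Manhattan model.

```python
# Hypothetical toy version of top-down propagation of statistical parameters:
# a region's mean building height is passed down to its quadrants with noise
# until leaf blocks are reached.
import random

def generate_blocks(x, y, size, mean_height, depth, rng):
    """Recursively subdivide a square region, propagating the mean height."""
    if depth == 0:
        return [(x, y, size, max(1.0, rng.gauss(mean_height, 0.2 * mean_height)))]
    blocks = []
    half = size / 2
    for dx in (0, half):
        for dy in (0, half):
            child_mean = max(1.0, rng.gauss(mean_height, 0.1 * mean_height))
            blocks += generate_blocks(x + dx, y + dy, half, child_mean, depth - 1, rng)
    return blocks

rng = random.Random(42)
city = generate_blocks(0, 0, 1024, mean_height=60.0, depth=3, rng=rng)
print(len(city), "blocks; first:", city[0])
```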

  1. XML-Based Automatic Test Data Generation

    OpenAIRE

    Halil Ibrahim Bulbul; Turgut Bakir

    2012-01-01

    Software engineering aims at increasing quality and reliability while decreasing the cost of the software. Testing is one of the most time-consuming phases of the software development lifecycle. Improvement in software testing results in decrease in cost and increase in quality of the software. Automation in software testing is one of the most popular ways of software cost reduction and reliability improvement. In our work we propose a system called XML-based automatic test data generation th...

  2. AUTOMATIC CAPTION GENERATION FOR ELECTRONICS TEXTBOOKS

    Directory of Open Access Journals (Sweden)

    Veena Thakur

    2015-10-01

    Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify the parts of the textbooks which may be helpful for the students. It describes the entities, attributes, roles and their relationships, plus the constraints that govern the problem domain. The caption model is created in order to represent the vocabulary and key concepts of the problem domain. The caption model also identifies the relationships among all the entities within the scope of the problem domain, and commonly identifies their attributes. It defines a vocabulary and is helpful as a communication tool. DOM-Sortze is a framework that enables the semi-automatic generation of the caption module for a technology supported learning system (TSLS) from electronic textbooks. The semi-automatic generation of the caption module entails the identification and elicitation of knowledge from the documents, to which end Natural Language Processing (NLP) techniques are combined with ontologies and heuristic reasoning.

  3. Automatic Caption Generation for Electronics Textbooks

    Directory of Open Access Journals (Sweden)

    Veena Thakur

    2014-12-01

    Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify the parts of the textbooks which may be helpful for the students. It describes the entities, attributes, roles and their relationships, plus the constraints that govern the problem domain. The caption model is created in order to represent the vocabulary and key concepts of the problem domain. The caption model also identifies the relationships among all the entities within the scope of the problem domain, and commonly identifies their attributes. It defines a vocabulary and is helpful as a communication tool. DOM-Sortze is a framework that enables the semi-automatic generation of the caption module for a technology supported learning system (TSLS) from electronic textbooks. The semi-automatic generation of the caption module entails the identification and elicitation of knowledge from the documents, to which end Natural Language Processing (NLP) techniques are combined with ontologies and heuristic reasoning.

  4. Automatic generation of combinatorial test data

    CERN Document Server

    Zhang, Jian; Ma, Feifei

    2014-01-01

    This book reviews the state-of-the-art in combinatorial testing, with particular emphasis on the automatic generation of test data. It describes the most commonly used approaches in this area - including algebraic construction, greedy methods, evolutionary computation, constraint solving and optimization - and explains major algorithms with examples. In addition, the book lists a number of test generation tools, as well as benchmarks and applications. Addressing a multidisciplinary topic, it will be of particular interest to researchers and professionals in the areas of software testing, combi

  5. Automatic Metadata Generation using Associative Networks

    CERN Document Server

    Rodriguez, Marko A; Van de Sompel, Herbert

    2008-01-01

    In spite of its tremendous value, metadata is generally sparse and incomplete, thereby hampering the effectiveness of digital information services. Many of the existing mechanisms for the automated creation of metadata rely primarily on content analysis which can be costly and inefficient. The automatic metadata generation system proposed in this article leverages resource relationships generated from existing metadata as a medium for propagation from metadata-rich to metadata-poor resources. Because of its independence from content analysis, it can be applied to a wide variety of resource media types and is shown to be computationally inexpensive. The proposed method operates through two distinct phases. Occurrence and co-occurrence algorithms first generate an associative network of repository resources leveraging existing repository metadata. Second, using the associative network as a substrate, metadata associated with metadata-rich resources is propagated to metadata-poor resources by means of a discrete...
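
    The propagation idea can be illustrated with a small sketch: resources that co-occur in usage records are linked in an associative network, and keyword metadata flows from metadata-rich resources to their metadata-poor neighbours, weighted by link strength. The data and weighting scheme below are assumptions, not the article's exact occurrence and co-occurrence algorithms.

```python
# Hypothetical sketch of metadata propagation over an associative network:
# co-occurring resources are linked, and keywords flow from metadata-rich
# resources to metadata-poor neighbours.
from collections import Counter, defaultdict
from itertools import combinations

usage_logs = [                      # assumed co-usage sessions
    ["paper_A", "paper_B", "paper_C"],
    ["paper_B", "paper_C"],
    ["paper_C", "paper_D"],
]
metadata = {                        # metadata-rich resources
    "paper_A": {"neural networks", "speech"},
    "paper_D": {"metadata", "repositories"},
}

# 1) build a weighted co-occurrence network
edges = Counter()
for session in usage_logs:
    for a, b in combinations(sorted(set(session)), 2):
        edges[(a, b)] += 1

neighbours = defaultdict(dict)
for (a, b), w in edges.items():
    neighbours[a][b] = w
    neighbours[b][a] = w

# 2) propagate keywords to metadata-poor resources, weighted by edge strength
suggested = defaultdict(Counter)
for rich, keywords in metadata.items():
    for other, weight in neighbours[rich].items():
        if other not in metadata:
            for kw in keywords:
                suggested[other][kw] += weight

for resource, counts in suggested.items():
    print(resource, "->", counts.most_common(3))
```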

  6. Development of an automatic block generation algorithm

    Science.gov (United States)

    Eberhardt, Scott; Kim, Byoungsoo

    1995-01-01

    A method for automatic multiblock grid generation is described. The method combines the modified advancing front method as a predictor with an elliptic scheme as a corrector. It advances a collection of cells by one cell height in the outward direction using modified advancing front method, and then corrects newly-obtained cell positions by solving elliptic equations. This predictor-corrector type scheme is repeatedly applied until the field of interest is filled with hexahedral grid cells. Given the configuration surface grid, the scheme produces block layouts as well as grid cells with overall smoothness as its output. The method saves human-time and reduces the burden on the user in generating grids for general 3D configurations. It is used to generate multiblock grids for wings in their high-lift configuration.

  7. Automatic code generation for distributed robotic systems

    International Nuclear Information System (INIS)

    Hetero Helix is a software environment which supports relatively large robotic system development projects. The environment supports a heterogeneous set of message-passing LAN-connected common-bus multiprocessors, but the programming model seen by software developers is a simple shared memory. The conceptual simplicity of shared memory makes it an extremely attractive programming model, especially in large projects where coordinating a large number of people can itself become a significant source of complexity. We present results from three system development efforts conducted at Oak Ridge National Laboratory over the past several years. Each of these efforts used automatic software generation to create 10 to 20 percent of the system

  8. Automatic tool path generation for finish machining

    Energy Technology Data Exchange (ETDEWEB)

    Kwok, Kwan S.; Loucks, C.S.; Driessen, B.J.

    1997-03-01

    A system for automatic tool path generation was developed at Sandia National Laboratories for finish machining operations. The system consists of a commercially available 5-axis milling machine controlled by Sandia developed software. This system was used to remove overspray on cast turbine blades. A laser-based, structured-light sensor, mounted on a tool holder, is used to collect 3D data points around the surface of the turbine blade. Using the digitized model of the blade, a tool path is generated which will drive a 0.375 inch diameter CBN grinding pin around the tip of the blade. A fuzzified digital filter was developed to properly eliminate false sensor readings caused by burrs, holes and overspray. The digital filter was found to successfully generate the correct tool path for a blade with intentionally scanned holes and defects. The fuzzified filter improved the computation efficiency by a factor of 25. For application to general parts, an adaptive scanning algorithm was developed and presented with simulation results. A right pyramid and an ellipsoid were scanned successfully with the adaptive algorithm.

  9. The Role of Item Models in Automatic Item Generation

    Science.gov (United States)

    Gierl, Mark J.; Lai, Hollis

    2012-01-01

    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  10. Automatic Generation of Validated Specific Epitope Sets

    Directory of Open Access Journals (Sweden)

    Sebastian Carrasco Pro

    2015-01-01

    Accurate measurement of B and T cell responses is a valuable tool to study autoimmunity, allergies, immunity to pathogens, and host-pathogen interactions and assist in the design and evaluation of T cell vaccines and immunotherapies. In this context, it is desirable to elucidate a method to select validated reference sets of epitopes to allow detection of T and B cells. However, the ever-growing information contained in the Immune Epitope Database (IEDB) and the differences in quality and subjects studied between epitope assays make this task complicated. In this study, we develop a novel method to automatically select reference epitope sets according to a categorization system employed by the IEDB. From the sets generated, three epitope sets (EBV, mycobacteria and dengue) were experimentally validated by detection of T cell reactivity ex vivo from human donors. Furthermore, a web application that will potentially be implemented in the IEDB was created to allow users the capacity to generate customized epitope sets.

  11. Generation of anatomically realistic numerical phantoms for optoacoustic breast imaging

    Science.gov (United States)

    Lou, Yang; Mitsuhashi, Kenji; Appleton, Catherine M.; Oraevsky, Alexander; Anastasio, Mark A.

    2016-03-01

    Because optoacoustic tomography (OAT) can provide functional information based on hemoglobin contrast, it is a promising imaging modality for breast cancer diagnosis. Developing an effective OAT breast imaging system requires balancing multiple design constraints, which can be expensive and time-consuming. Therefore, computer-simulation studies are often conducted to facilitate this task. However, most existing computer-simulation studies of OAT breast imaging employ simple phantoms such as spheres or cylinders that over-simplify the complex anatomical structures in breasts, thus limiting the value of these studies in guiding real-world system design. In this work, we propose a method to generate realistic numerical breast phantoms for OAT research based on clinical magnetic resonance imaging (MRI) data. The phantoms include a skin layer that defines breast-air boundary, major vessel branches that affect light absorption in the breast, and fatty tissue and fibroglandular tissue whose acoustical heterogeneity perturbs acoustic wave propagation. By assigning realistic optical and acoustic parameters to different tissue types, we establish both optic and acoustic breast phantoms, which will be exported into standard data formats for cross-platform usage.

  12. Generating IDS Attack Pattern Automatically Based on Attack Tree

    Institute of Scientific and Technical Information of China (English)

    向尕; 曹元大

    2003-01-01

    The automatic generation of attack patterns based on attack trees is studied. An extended definition of the attack tree is proposed, and an algorithm for generating attack trees is presented. A method for automatically generating attack patterns from attack trees is shown and tested on concrete attack instances. The results show that the algorithm is effective and efficient. In this way, the efficiency of generating attack patterns is improved and attack trees can be reused.
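
    As a hedged sketch of the underlying idea (not the paper's algorithm), the code below enumerates attack patterns from a small AND/OR attack tree: each pattern is one combination of leaf attack steps that satisfies the root goal. The example tree is invented.

```python
# Hypothetical sketch: enumerating attack patterns from an AND/OR attack tree.
# Each pattern is one set of leaf attack steps that achieves the root goal.
from itertools import product

tree = ("OR",
        ("AND", "scan_ports", "exploit_service"),
        ("AND", "phish_credentials", ("OR", "vpn_login", "ssh_login")))

def patterns(node):
    if isinstance(node, str):                      # leaf attack step
        return [[node]]
    op, *children = node
    child_patterns = [patterns(c) for c in children]
    if op == "OR":                                 # any single child suffices
        return [p for ps in child_patterns for p in ps]
    result = []                                    # AND: combine one from each child
    for combo in product(*child_patterns):
        result.append([step for part in combo for step in part])
    return result

for p in patterns(tree):
    print(" -> ".join(p))
```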

  13. Automatic generation of a view to geographical database

    OpenAIRE

    Dunkars, Mats

    2001-01-01

    This thesis concerns object-oriented modelling and automatic generalisation of geographic information. The focus, however, is not on traditional paper maps, but on screen maps that are automatically generated from a geographical database. Object-oriented modelling is used to design screen maps equipped with methods that automatically extract information from a geographical database, generalise the information and display it on a screen. The thesis consists of three parts: a theoreti...

  14. Automatic program generation: future of software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, J.H.

    1979-01-01

    At this moment software development is still more of an art than an engineering discipline. Each piece of software is lovingly engineered, nurtured, and presented to the world as a tribute to the writer's skill. When will this change? When will the craftsmanship be removed and the programs be turned out like so many automobiles from an assembly line? Sooner or later it will happen: economic necessities will demand it. With the advent of cheap microcomputers and ever more powerful supercomputers doubling capacity, much more software must be produced. The choices are to double the number of programmers, double the efficiency of each programmer, or find a way to produce the needed software automatically. Producing software automatically is the only logical choice. How will automatic programming come about? Some of the preliminary actions which need to be done, and are being done, are to encourage programmer plagiarism of existing software through public library mechanisms, produce well-understood packages such as compilers automatically, develop languages capable of producing software as output, and learn enough about the whole process of programming to be able to automate it. Clearly, the emphasis must not be on efficiency or size, since ever larger and faster hardware is coming.

  15. System for Automatic Generation of Examination Papers in Discrete Mathematics

    Science.gov (United States)

    Fridenfalk, Mikael

    2013-01-01

    A system was developed for automatic generation of problems and solutions for examinations in a university distance course in discrete mathematics and tested in a pilot experiment involving 200 students. Considering the success of such systems in the past, particularly including automatic assessment, it should not take long before such systems are…

  16. A New Approach to Fully Automatic Mesh Generation

    Institute of Scientific and Technical Information of China (English)

    闵卫东; 张征明; 等

    1995-01-01

    Automatic mesh generation is one of the most important parts of CIMS (Computer Integrated Manufacturing System). A method based on mesh grading propagation which automatically produces a triangular mesh in a multiply connected planar region is presented in this paper. The method decomposes the planar region into convex subregions, using algorithms which run in linear time. For every subregion, an algorithm is used to generate shrinking polygons according to boundary gradings and form a Delaunay triangulation between two adjacent shrinking polygons, both in linear time. It automatically propagates boundary gradings into the interior of the region and produces a satisfactory quasi-uniform mesh.

  17. Automatic extraction analysis of the anatomical functional area for normal brain 18F-FDG PET imaging

    International Nuclear Information System (INIS)

    Using self-designed software for automatic extraction of brain functional areas, the grey-scale distribution of 18F-FDG imaging and the relationship between the 18F-FDG accumulation of brain anatomical functional areas and the injected 18F-FDG dose, the blood glucose level, age, etc., were studied. According to the Talairach coordinate system, after rotation, shift and plastic deformation, the 18F-FDG PET images were registered into the Talairach coordinate atlas, and then the ratio of the average grey value of each individual brain anatomical functional area to that of the whole brain was calculated. Furthermore, the relationship between the 18F-FDG accumulation of every brain anatomical functional area and the injected 18F-FDG dose, the blood glucose level and age was tested using a multiple stepwise regression model. After image registration, smoothing and extraction, the main cerebral cortical areas of the 18F-FDG PET brain images could be successfully localized and extracted, such as the frontal lobe, parietal lobe, occipital lobe, temporal lobe, cerebellum, brain ventricles, thalamus and hippocampus. The average ratios to the internal reference for every brain anatomical functional area were 1.01 ± 0.15. By multiple stepwise regression, with the exception of the thalamus and hippocampus, the grey scale of all brain functional areas was negatively correlated with age, but showed no correlation with blood glucose or dose in any area. For 18F-FDG PET imaging, the brain functional area extraction program could automatically delineate most of the cerebral cortical areas and support brain blood-flow and metabolic studies, but extraction of more detailed areas needs further investigation
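
    The ratio computation described above can be sketched compactly: after registration to the atlas, the mean uptake of each labelled anatomical region is divided by the whole-brain mean. The volume and label map below are synthetic stand-ins, not real PET data.

```python
# Hypothetical sketch: after atlas registration, compute each anatomical
# region's mean tracer uptake relative to the whole-brain mean.
import numpy as np

rng = np.random.default_rng(3)
pet = rng.gamma(shape=4.0, scale=1.0, size=(32, 32, 32))   # fake uptake volume
labels = rng.integers(0, 5, size=(32, 32, 32))             # 0 = background

region_names = {1: "frontal", 2: "parietal", 3: "occipital", 4: "cerebellum"}

brain_mask = labels > 0
whole_brain_mean = pet[brain_mask].mean()

for label, name in region_names.items():
    region_mean = pet[labels == label].mean()
    ratio = region_mean / whole_brain_mean
    print(f"{name:10s} mean ratio to whole brain: {ratio:.3f}")
```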

  18. Automatic finite elements mesh generation from planar contours of the brain: an image driven 'blobby' approach

    CERN Document Server

    Bucki, Marek; Payan, Yohan

    2005-01-01

    In this paper, we address the problem of automatic mesh generation for finite elements modeling of anatomical organs for which a volumetric data set is available. In the first step a set of characteristic outlines of the organ is defined manually or automatically within the volume. The outlines define the "key frames" that will guide the procedure of surface reconstruction. Then, based on this information, and along with organ surface curvature information extracted from the volume data, a 3D scalar field is generated. This field allows a 3D reconstruction of the organ: as an iso-surface model, using a marching cubes algorithm; or as a 3D mesh, using a grid "immersion" technique, the field value being used as the outside/inside test. The final reconstruction respects the various topological changes that occur within the organ, such as holes and branching elements.
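
    A minimal sketch of the iso-surface half of such a pipeline is shown below, assuming a "blobby" scalar field built from a few seed points (standing in for information derived from the key-frame outlines) and extracting a mesh with marching cubes from scikit-image; it is not the authors' full reconstruction method.

```python
# Hypothetical sketch: build a smooth "blobby" scalar field from seed points
# and extract a surface mesh with marching cubes.
import numpy as np
from skimage import measure

grid = np.linspace(-1.0, 1.0, 64)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")

seeds = [(-0.3, 0.0, 0.0), (0.3, 0.1, 0.0)]        # assumed blob centres
field = np.zeros_like(X)
for cx, cy, cz in seeds:
    r2 = (X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2
    field += np.exp(-r2 / 0.05)                     # Gaussian "blob"

verts, faces, normals, values = measure.marching_cubes(field, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
```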

  19. Integrated Coordinated Optimization Control of Automatic Generation Control and Automatic Voltage Control in Regional Power Grids

    OpenAIRE

    Qiu-Yu Lu; Wei Hu; Le Zheng; Yong Min; Miao Li; Xiao-Ping Li; Wei-Chun Ge; Zhi-Ming Wang

    2012-01-01

    Automatic Generation Control (AGC) and Automatic Voltage Control (AVC) are key approaches to frequency and voltage regulation in power systems. However, based on the assumption of decoupling of active and reactive power control, the existing AGC and AVC systems work independently without any coordination. In this paper, a concept and method of hybrid control is introduced to set up an Integrated Coordinated Optimization Control (ICOC) system for AGC and AVC. Concerning the diversity of contro...

  20. Automatic Test Case Generation of C Program Using CFG

    Directory of Open Access Journals (Sweden)

    Sangeeta Tanwer

    2010-07-01

    Software quality assurance in a software company is the only way to gain customer confidence by removing all possible errors. It can be supported by automatic test case generation. Taking the widely used C language as the test object, this paper explores how to create the control flow graph (CFG) of a C program and automatically generate test cases. It examines the feasibility or infeasibility of paths based on the number of iterations. First, the C code is converted to instrumented code. Then test cases are generated using symbolic testing and random testing. The system is developed using C#.NET in Visual Studio 2008. In addition, some future research directions are explored.
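
    A hedged sketch of the overall idea follows: a small instrumented function records which branches each test exercises, and random inputs are generated until all branches are covered. The toy function and branch labels are assumptions standing in for the paper's C programs and CFG instrumentation.

```python
# Hypothetical sketch of random test-case generation against branch coverage
# for a small instrumented function.
import random

covered = set()

def classify(x, y):
    """Toy function under test, instrumented by recording branch IDs."""
    if x > 0:
        covered.add("B1_true")
        if y > x:
            covered.add("B2_true")
            return "y dominates"
        covered.add("B2_false")
        return "x dominates"
    covered.add("B1_false")
    return "non-positive x"

ALL_BRANCHES = {"B1_true", "B1_false", "B2_true", "B2_false"}

rng = random.Random(7)
tests = []
while covered != ALL_BRANCHES and len(tests) < 1000:
    x, y = rng.randint(-10, 10), rng.randint(-10, 10)
    classify(x, y)
    tests.append((x, y))

print(f"{len(tests)} random tests to cover {len(covered)}/{len(ALL_BRANCHES)} branches")
```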

  1. Vox populi: a tool for automatically generating video documentaries

    OpenAIRE

    Bocconi, S.; Nack, Frank; Hardman, Hazel Lynda

    2005-01-01

    Vox Populi is a system that automatically generates video documentaries. Our application domain is video interviews about controversial topics. Via a Web interface the user selects one of the possible topics and a point of view she would like the generated sequence to present, and the engine selects and assembles video material from the repository to satisfy the user request.

  2. Automatic Generation of Video Narratives from Shared UGC

    NARCIS (Netherlands)

    Zsombori, V.; Frantzis, M.; Guimarães, R.L.; Ursu, M.; Cesar Garcia, P.S.; Kegel, I.; Craigie, R.; Bulterman, D.C.A.

    2011-01-01

    This paper introduces an evaluated approach to the automatic generation of video narratives from user generated content gathered in a shared repository. In the context of social events, end-users record video material with their personal cameras and upload the content to a common repository. Video n

  3. Vox populi: a tool for automatically generating video documentaries

    NARCIS (Netherlands)

    Bocconi, S.; Nack, F.-M.; Hardman, L.

    2005-01-01

    Vox Populi is a system that automatically generates video documentaries. Our application domain is video interviews about controversial topics. Via a Web interface the user selects one of the possible topics and a point of view she would like the generated sequence to present, and the engine selects

  4. Automatic generation of a neural network architecture using evolutionary computation

    NARCIS (Netherlands)

    Vonk, E.; Jain, L.C.; Veelenturf, L.P.J.; Johnson, R.

    1995-01-01

    This paper reports the application of evolutionary computation in the automatic generation of a neural network architecture. It is a usual practice to use trial and error to find a suitable neural network architecture. This is not only time consuming but may not generate an optimal solution for a gi

  5. Automatic generation of matter-of-opinion video documentaries

    NARCIS (Netherlands)

    Bocconi, S.; Nack, F.-M.; Hardman, L.

    2008-01-01

    In this paper we describe a model for automatically generating video documentaries. This allows viewers to specify the subject and the point of view of the documentary to be generated. The domain is matter-of-opinion documentaries based on interviews. The model combines rhetorical presentation patte

  6. Automatic iterative segmentation of multiple sclerosis lesions using Student's t mixture models and probabilistic anatomical atlases in FLAIR images.

    Science.gov (United States)

    Freire, Paulo G L; Ferrari, Ricardo J

    2016-06-01

    Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has a good spatial and volumetric agreement with raters' manual delineations. Also, a comparison between our proposal and the state-of-the-art shows that our technique is comparable and, in some cases, better than some approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI.
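
    The iterative "refine toward the most hyperintense class" idea can be sketched on synthetic 1D intensities, using scikit-learn's Gaussian mixture as a stand-in for the Student's t mixture used in the paper; the intensity distributions and number of refinement rounds are assumptions.

```python
# Hypothetical sketch of iterative refinement toward the most hyperintense
# class, with a Gaussian mixture standing in for the Student's t mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
intensities = np.concatenate([
    rng.normal(40, 5, 5000),    # CSF-like voxels
    rng.normal(80, 6, 8000),    # grey/white-matter-like voxels
    rng.normal(130, 8, 300),    # hyperintense lesion-like voxels
]).reshape(-1, 1)

current = intensities
for n_components in (3, 2):                       # progressively refine
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(current)
    labels = gmm.predict(current)
    brightest = int(np.argmax(gmm.means_.ravel()))
    current = current[labels == brightest]        # keep most hyperintense class

print(f"{len(current)} candidate lesion voxels, mean intensity {current.mean():.1f}")
```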

  7. Automatic Building Information Model Query Generation

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao

    2015-12-01

    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges on data integration and software interoperability. Using Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, which can ease building design information management. However, the partial model query mechanism of current BIM data hub collaboration model has several limitations, which prevents designers and engineers to take advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. By demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts to use BIM to drive building design with less labour and lower overhead cost.

  8. Procedure for the automatic mesh generation of innovative gear teeth

    Directory of Open Access Journals (Sweden)

    Radicella Andrea Chiaramonte

    2016-01-01

    After describing gear wheels whose teeth have two sides formed by different involutes, and their importance in engineering applications, we stress the need for an efficient procedure for the automatic mesh generation of innovative gear teeth. First, we describe the procedure for the subdivision of the tooth profile in the various possible cases; then we show the method for creating the subdivision mesh, defined by two series of curves called meridians and parallels. Finally, we describe how the above procedure for automatic mesh generation is able to solve specific cases that may arise when dealing with teeth whose two sides are formed by different involutes.

  9. An efficient method for parallel CRC automatic generation

    Institute of Scientific and Technical Information of China (English)

    陈红胜; 张继承; 王勇; 陈抗生

    2003-01-01

    The State Transition Equation (STE) based method to automatically generate the parallel CRC circuits for any generator polynomial or required amount of parallelism is presented. The parallel CRC circuit so generated is partially optimized before being fed to synthesis tools and works properly in our LAN transceiver. Compared with the cascading method, the proposed method gives better timing results and significantly reduces the synthesis time, in particular.
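
    In the spirit of a state-transition-equation formulation (though not the authors' exact derivation), the sketch below obtains the parallel CRC update matrices over GF(2) by probing a bit-serial reference LFSR with unit vectors and then verifies the parallel update against it; CRC-8 with polynomial 0x07 is an assumed example.

```python
# Hypothetical sketch: derive a parallel CRC update over GF(2) from the serial
# LFSR (CRC-8, polynomial x^8 + x^2 + x + 1 assumed for illustration).
import numpy as np

POLY = 0x07          # CRC-8 generator polynomial (without the leading x^8 term)
WIDTH = 8            # CRC register width
W = 8                # bits processed in parallel per step

def serial_step(crc, bit):
    """Advance the serial LFSR by one message bit (MSB-first)."""
    fb = ((crc >> (WIDTH - 1)) & 1) ^ bit
    crc = (crc << 1) & ((1 << WIDTH) - 1)
    return crc ^ (POLY if fb else 0)

def serial_block(crc, data):
    """Feed one W-bit data word, MSB first, through the serial LFSR."""
    for i in reversed(range(W)):
        crc = serial_step(crc, (data >> i) & 1)
    return crc

def to_bits(x, n):
    return np.array([(x >> i) & 1 for i in range(n)], dtype=np.uint8)

# serial_block is linear over GF(2), so its matrices are obtained by probing
# with unit vectors: F acts on the state, G on the data word.
F = np.column_stack([to_bits(serial_block(1 << j, 0), WIDTH) for j in range(WIDTH)])
G = np.column_stack([to_bits(serial_block(0, 1 << j), WIDTH) for j in range(W)])

def parallel_block(crc, data):
    s = (F @ to_bits(crc, WIDTH) + G @ to_bits(data, W)) % 2
    return int(sum(int(b) << i for i, b in enumerate(s)))

# Sanity check: the parallel update matches the bit-serial reference.
assert all(parallel_block(c, d) == serial_block(c, d)
           for c in (0x00, 0x3C, 0xFF) for d in (0x00, 0x5A, 0xFF))
print("parallel CRC matrices F, G verified against serial LFSR")
```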

  10. Algorithm for Automatic Generation of Curved and Compound Twills

    Institute of Scientific and Technical Information of China (English)

    WANG Mei-zhen; WANG Fu-mei; WANG Shan-yuan

    2005-01-01

    A new algorithm using matrix left-shift functions for the quicker generation of curved and compound twills is introduced in this paper. A matrix model for the generation of regular, curved and compound twill structures is established, and its computational simulation is elaborated. Examples of applying the algorithm to the simulation and automatic generation of curved and compound twills in fabric CAD are given.
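
    A minimal sketch of the matrix left-shift idea is given below: a regular twill is produced by shifting a base repeat row by a constant amount per row, and a "curved" twill by varying the shift along the fabric. The repeat and shift sequence are invented for illustration and are not the paper's exact model.

```python
# Hypothetical sketch of generating twill weave matrices by row shifting.
import numpy as np

def base_repeat(up, down):
    """One row of a twill repeat: `up` warp-up cells followed by `down` downs."""
    return np.array([1] * up + [0] * down, dtype=np.uint8)

def twill(rows, up=2, down=2, shifts=None):
    """Build a weave matrix; shifts[i] is the left-shift applied at row i."""
    row = base_repeat(up, down)
    if shifts is None:
        shifts = [1] * rows                      # constant shift: regular twill
    weave, offset = [], 0
    for i in range(rows):
        offset = (offset + shifts[i]) % len(row)
        weave.append(np.roll(row, -offset))      # matrix left-shift of the row
    return np.array(weave)

regular = twill(8)                                    # plain 2/2 twill
curved = twill(8, shifts=[1, 1, 2, 2, 3, 2, 2, 1])    # varying shift -> curved line
for r in curved:
    print("".join("#" if c else "." for c in r))
```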

  11. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
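
    A miniature of model-driven code generation in the same spirit (not the Memops framework itself) is sketched below: from an abstract attribute specification, Python source for a class with type-checked accessors is generated and then executed; the specification format and class names are assumptions.

```python
# Hypothetical miniature of model-driven code generation: from an abstract
# attribute specification, emit Python source for a class with type-checked
# setters, then execute the generated code.
SPEC = {
    "Molecule": {
        "name": str,
        "residue_count": int,
    }
}

def generate_class(class_name, attrs):
    lines = [f"class {class_name}:"]
    lines.append("    def __init__(self):")
    for attr in attrs:
        lines.append(f"        self._{attr} = None")
    for attr, typ in attrs.items():
        lines += [
            f"    def get_{attr}(self):",
            f"        return self._{attr}",
            f"    def set_{attr}(self, value):",
            f"        if not isinstance(value, {typ.__name__}):",
            f"            raise TypeError('{attr} must be {typ.__name__}')",
            f"        self._{attr} = value",
        ]
    return "\n".join(lines)

source = generate_class("Molecule", SPEC["Molecule"])
namespace = {}
exec(source, namespace)                    # 'compile' the generated API
mol = namespace["Molecule"]()
mol.set_name("lysozyme")
mol.set_residue_count(129)
print(mol.get_name(), mol.get_residue_count())
```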

  12. A quick scan on possibilities for automatic metadata generation

    NARCIS (Netherlands)

    Benneker, Frank

    2006-01-01

    The Quick Scan is a report on research into useable solutions for automatic generation of metadata or parts of metadata. The aim of this study is to explore possibilities for facilitating the process of attaching metadata to learning objects. This document is aimed at developers of digital learning

  13. Automatic Structure-Based Code Generation from Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Westergaard, Michael

    2010-01-01

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs...

  14. Automatic Generation of Tests from Domain and Multimedia Ontologies

    Science.gov (United States)

    Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos

    2011-01-01

    The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have been already reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…

  15. Automatic generation of gene finders for eukaryotic species

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Krogh, A.

    2006-01-01

    Background: The number of sequenced eukaryotic genomes is rapidly increasing. This means that over time it will be hard to keep supplying customised gene finders for each genome. This calls for procedures to automatically generate species-specific gene finders and to re-train them as the quantity...

  16. Mppsocgen: A framework for automatic generation of mppsoc architecture

    CERN Document Server

    Kallel, Emna; Baklouti, Mouna; Abid, Mohamed

    2012-01-01

    Automatic code generation is a standard method in software engineering, since it improves code consistency and reduces overall development time. In this context, this paper presents a design flow for automatic VHDL code generation of mppSoC (massively parallel processing System-on-Chip) configurations. Depending on the application requirements, a framework named MppSoCGEN, built on the Netbeans Platform, was developed in order to accelerate the design process of complex mppSoC systems. Starting from a set of architecture parameters, VHDL code is automatically generated using a parsing method. Configuration rules are proposed to ensure a correct and valid VHDL configuration. Finally, Processor Element and network topology models of the mppSoC architecture are automatically generated for the Stratix II device family. The framework runs on Netbeans 5.5 on a 2 GHz Centrino Duo, with 22 Kbytes and an average runtime of 3 seconds. Experimental results for reduction al...

  17. Algorithm for automatic generating motion trajectories of plant maintenance robot

    International Nuclear Information System (INIS)

    An algorithm for automatically generating the motion trajectories of a robot manipulator is proposed as a new method for operating plant maintenance robots. The algorithm consists of two procedures: generating the motion trajectory of the end effector, and generating the posture of the robot manipulator. Motion trajectories of the end effector are generated using the concept of a repulsive force vector field. The trajectory model, which consists of many virtual springs and mass points, changes its form under the repulsive forces from obstacles. The posture of the robot manipulator is then automatically generated with the same concept. Using this algorithm, an experiment on motion generation with a 7 degrees of freedom (DOF) manipulator was carried out. As a result, it was confirmed that the proposed method realizes obstacle avoidance during task motion. We are planning to apply this system to nuclear power plants, where it can shorten the preparation and operation periods for maintenance work in the nuclear reactor. (author)
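    A minimal 2D sketch of the repulsive-force idea: a straight path of virtual mass points connected by springs is iteratively deformed by repulsive forces from obstacles. The force law, gains and step limit below are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def repulsive_force(point, obstacles, influence=0.5, gain=0.05):
    """Sum of repulsive forces pushing a point away from nearby obstacles."""
    force = np.zeros(2)
    for obs in obstacles:
        diff = point - obs
        dist = np.linalg.norm(diff)
        if 1e-9 < dist < influence:
            force += gain * (1.0 / dist - 1.0 / influence) * diff / dist**3
    return force

def generate_trajectory(start, goal, obstacles, n_points=30, iters=300,
                        k_spring=0.3, max_step=0.02):
    """Deform a straight start-goal path using virtual springs between the
    mass points plus obstacle repulsion; the endpoints stay fixed."""
    path = np.linspace(start, goal, n_points)
    for _ in range(iters):
        new_path = path.copy()
        for i in range(1, n_points - 1):
            spring = k_spring * (path[i - 1] + path[i + 1] - 2.0 * path[i])
            step = spring + repulsive_force(path[i], obstacles)
            new_path[i] = path[i] + np.clip(step, -max_step, max_step)
        path = new_path
    return path

if __name__ == "__main__":
    traj = generate_trajectory(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                               obstacles=[np.array([0.5, 0.05])])
    print(traj.round(3))
```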

  18. AN APPROACH TO GENERATE TEST CASES AUTOMATICALLY USING GENETIC ALGORITHM

    OpenAIRE

    Deepika Sharma*, Dr. Sanjay Tyagi

    2016-01-01

    Software testing is a crucial phase of the software life cycle in software engineering, and it leads to better software quality and reliability. The main issue in software testing is its incompleteness, due to the vast number of possible test cases, which increases the effort and cost of the software. Generating adequate test cases will therefore help to reduce this effort and cost. The purpose of this research paper is to automatically generate test case...
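    A generic sketch of the idea, assuming a branch-distance style fitness: a genetic algorithm evolves inputs toward covering a branch that random testing rarely hits. The toy program, operators and GA settings are illustrative, not the paper's procedure.

```python
import random

def program_under_test(x, y):
    """Toy program: the branch below is hard to hit with random inputs."""
    if x * 2 == y + 100:              # target branch
        return "target"
    return "other"

def fitness(ind):
    """Branch-distance style fitness: smaller means closer to the branch."""
    x, y = ind
    return abs(x * 2 - (y + 100))

def generate_test_case(pop_size=50, generations=200, bound=500):
    pop = [(random.randint(-bound, bound), random.randint(-bound, bound))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        if fitness(pop[0]) == 0:                      # branch covered
            return pop[0]
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                      # simple crossover
            if random.random() < 0.3:                 # mutation
                child = (child[0] + random.randint(-5, 5),
                         child[1] + random.randint(-5, 5))
            children.append(child)
        pop = parents + children
    return None

if __name__ == "__main__":
    case = generate_test_case()
    print("test input:", case, "->",
          program_under_test(*case) if case else "not found")
```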

  19. Anatomical database generation for radiation transport modeling from computed tomography (CT) scan data

    Energy Technology Data Exchange (ETDEWEB)

    Margle, S.M.; Tinnel, E.P.; Till, L.E.; Eckerman, K.F.; Durfee, R.C.

    1989-01-01

    Geometric models of the anatomy are used routinely in calculations of the radiation dose in organs and tissues of the body. Development of such models has been hampered by the lack of detailed anatomical information on children, and the models themselves have been limited to quadratic conic sections. This summary reviews the development of an image processing workstation used to extract anatomical information from routine diagnostic CT procedures. A standard IBM PC/AT microcomputer has been augmented with an automatically loading 9-track magnetic tape drive, an 8-bit 1024 × 1024 pixel graphics adapter/monitor/film recording package, a mouse/trackball assembly, dual 20 MB removable cartridge media, a 72 MB disk drive, and a printer. Software utilized by the workstation includes a Geographic Information System (modified for manipulation of CT images), CAD software, imaging software, and various modules to ease data transfer among the software packages. 5 refs., 3 figs.

  20. Central Pattern Generator for Locomotion: Anatomical, Physiological and Pathophysiological Considerations

    Directory of Open Access Journals (Sweden)

    Pierre A. Guertin

    2013-02-01

    This article provides a perspective on major innovations over the past century in research on the spinal cord and, specifically, on specialized spinal circuits involved in the control of rhythmic locomotor pattern generation and modulation. Pioneers such as Charles Sherrington and Thomas Graham Brown conducted experiments in the early twentieth century that changed our views of the neural control of locomotion. Their seminal work, supported subsequently by several decades of evidence, has led to the conclusion that walking, flying and swimming are largely controlled by a network of spinal neurons generally referred to as the central pattern generator (CPG) for locomotion. It has subsequently been demonstrated across all vertebrate species examined, from lampreys to humans, that this CPG is capable, under some conditions, of self-producing basic rhythmic and coordinated locomotor movements, even in the absence of descending or peripheral inputs. Recent evidence suggests, in turn, that plasticity changes of some CPG elements may contribute to the development of specific pathophysiological conditions associated with impaired locomotion or spontaneous locomotor-like movements. This article constitutes a comprehensive review summarizing key findings on the CPG as well as on its potential role in Restless Leg Syndrome (RLS), Periodic Leg Movement (PLM), and Alternating Leg Muscle Activation (ALMA). Special attention will be paid to the role of the CPG in a recently identified, and uniquely different, neurological disorder called the Uner Tan Syndrome.

  1. Central pattern generator for locomotion: anatomical, physiological, and pathophysiological considerations.

    Science.gov (United States)

    Guertin, Pierre A

    2012-01-01

    This article provides a perspective on major innovations over the past century in research on the spinal cord and, specifically, on specialized spinal circuits involved in the control of rhythmic locomotor pattern generation and modulation. Pioneers such as Charles Sherrington and Thomas Graham Brown conducted experiments in the early twentieth century that changed our views of the neural control of locomotion. Their seminal work, supported subsequently by several decades of evidence, has led to the conclusion that walking, flying, and swimming are largely controlled by a network of spinal neurons generally referred to as the central pattern generator (CPG) for locomotion. It has subsequently been demonstrated across all vertebrate species examined, from lampreys to humans, that this CPG is capable, under some conditions, of self-producing basic rhythmic and coordinated locomotor movements, even in the absence of descending or peripheral inputs. Recent evidence suggests, in turn, that plasticity changes of some CPG elements may contribute to the development of specific pathophysiological conditions associated with impaired locomotion or spontaneous locomotor-like movements. This article constitutes a comprehensive review summarizing key findings on the CPG as well as on its potential role in Restless Leg Syndrome, Periodic Leg Movement, and Alternating Leg Muscle Activation. Special attention will be paid to the role of the CPG in a recently identified, and uniquely different, neurological disorder called the Uner Tan Syndrome.

  2. An Application of Reverse Engineering to Automatic Item Generation: A Proof of Concept Using Automatically Generated Figures

    Science.gov (United States)

    Lorié, William A.

    2013-01-01

    A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Cooperation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…

  3. Automatic control system generation for robot design validation

    Science.gov (United States)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

    The specification and drawings present a new method, system, software product, and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting the robot design into a generic robotic description using a predetermined format, then generating a control system from the generic robotic description, and finally updating the robot design parameters of the robotic system with an analysis tool using both the generic robot description and the control system.

  4. A semi-automatic framework of measuring pulmonary arterial metrics at anatomic airway locations using CT imaging

    Science.gov (United States)

    Jin, Dakai; Guo, Junfeng; Dougherty, Timothy M.; Iyer, Krishna S.; Hoffman, Eric A.; Saha, Punam K.

    2016-03-01

    Pulmonary vascular dysfunction has been implicated in smoking-related susceptibility to emphysema. With the growing interest in characterizing arterial morphology for early evaluation of the vascular role in pulmonary diseases, there is an increasing need for the standardization of a framework for arterial morphological assessment at airway segmental levels. In this paper, we present an effective and robust semi-automatic framework to segment pulmonary arteries at different anatomic airway branches and measure their cross-sectional area (CSA). The method starts with user-specified endpoints of a target arterial segment through a custom-built graphical user interface. It then automatically detects the centerline joining the endpoints, determines the local structure orientation, and computes the CSA along the centerline after filtering out adjacent pulmonary structures, such as veins or airway walls. Several new techniques are presented, including a collision-impact based cost function for centerline detection, radial sample-line based CSA computation, and outlier analysis of radial distances to subtract adjacent neighboring structures in the CSA measurement. The method was applied to repeat-scan pulmonary multirow detector CT (MDCT) images from ten healthy subjects (age: 21-48 yrs, mean: 28.5 yrs; 7 female) at functional residual capacity (FRC). The reproducibility of the computed arterial CSA from four airway segmental regions in the middle and lower lobes was analyzed. The overall repeat-scan intra-class correlation (ICC) of the computed CSA from all four airway regions in ten subjects was 96%, with the maximum ICC found in the LB10 and RB4 regions.

  5. Progressive Concept Evaluation Method for Automatically Generated Concept Variants

    Directory of Open Access Journals (Sweden)

    Woldemichael Dereje Engida

    2014-07-01

    Conceptual design is one of the most critical and important phases of the design process, yet it has the least computer support. The conceptual design support tool (CDST) is a system developed to automatically generate concepts for each subfunction in a functional structure. The automated concept generation process results in a large number of concept variants, which require a thorough evaluation process to select the best design. To address this, a progressive concept evaluation technique consisting of absolute comparison, concept screening and a weighted decision matrix using the analytic hierarchy process (AHP) is proposed to eliminate infeasible concepts at each stage. A software implementation of the proposed method is demonstrated.
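    A minimal numeric sketch of the final weighted-decision-matrix stage, with criterion weights taken from an AHP pairwise-comparison matrix via its principal eigenvector; the criteria, comparison values and concept scores below are illustrative, not from the paper.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from an AHP pairwise-comparison matrix
    (principal eigenvector, normalised to sum to 1)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

# Illustrative criteria: cost, reliability, manufacturability
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 3.0],
                     [1/5, 1/3, 1.0]])
weights = ahp_weights(pairwise)

# Scores of surviving concept variants against each criterion (after screening)
scores = np.array([[7, 6, 8],     # concept A
                   [9, 4, 6],     # concept B
                   [5, 8, 7]])    # concept C

totals = scores @ weights
print("criterion weights:", weights.round(3))
print("weighted totals:", totals.round(2),
      "-> best concept index:", int(np.argmax(totals)))
```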

  6. [Automatic analysis pipeline of next-generation sequencing data].

    Science.gov (United States)

    Wenke, Li; Fengyu, Li; Siyao, Zhang; Bin, Cai; Na, Zheng; Yu, Nie; Dao, Zhou; Qian, Zhao

    2014-06-01

    The development of next-generation sequencing has generated a high demand for data processing and analysis. Although many software tools exist for analyzing next-generation sequencing data, most of them are designed for one specific function (e.g., alignment, variant calling or annotation). Therefore, it is necessary to combine them for data analysis and to generate interpretable results for biologists. This study designed a pipeline to process Illumina sequencing data based on the Perl programming language and the SGE system. The pipeline takes original sequence data (fastq format) as input, calls the standard data processing software (e.g., BWA, Samtools, GATK, and Annovar), and finally outputs a list of annotated variants that researchers can further analyze. The pipeline simplifies manual operation and improves efficiency through automation and parallel computation. Users can easily run the pipeline by editing the configuration file or clicking in the graphical interface.
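    A minimal, sequential stand-in (in Python) for the kind of fastq-to-annotated-variants chain described above; the original pipeline is Perl plus SGE with parallel steps, and the command lines below are only illustrative of typical BWA/Samtools/GATK usage and must be adapted to the installed tool versions. File names are placeholders.

```python
import subprocess
from pathlib import Path

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)          # stop the pipeline on any failure

def pipeline(ref="ref.fa", r1="sample_1.fastq", r2="sample_2.fastq",
             out=Path("results")):
    out.mkdir(exist_ok=True)
    sam, bam, vcf = out / "aln.sam", out / "aln.sorted.bam", out / "calls.vcf"

    with open(sam, "w") as fh:               # alignment
        subprocess.run(["bwa", "mem", ref, r1, r2], stdout=fh, check=True)
    run(["samtools", "sort", "-o", str(bam), str(sam)])
    run(["samtools", "index", str(bam)])
    run(["gatk", "HaplotypeCaller", "-R", ref, "-I", str(bam), "-O", str(vcf)])
    # an annotation step (e.g. Annovar) would follow here, then reporting

if __name__ == "__main__":
    pipeline()
```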

  7. Automatic generation of executable communication specifications from parallel applications

    Energy Technology Data Exchange (ETDEWEB)

    Pakin, Scott [Los Alamos National Laboratory]; Wu, Xing [NCSU]; Mueller, Frank [NCSU]

    2011-01-19

    Portable parallel benchmarks are widely used and highly effective for (a) the evaluation, analysis and procurement of high-performance computing (HPC) systems and (b) quantifying the potential benefits of porting applications to new hardware platforms. Yet, past techniques for synthetically parameterizing hand-coded HPC benchmarks prove insufficient for today's rapidly evolving scientific codes, particularly when these are subject to multi-scale science modeling or utilize domain-specific libraries. To address these problems, this work contributes novel methods to automatically generate highly portable and customizable communication benchmarks from HPC applications. We utilize ScalaTrace, a lossless yet scalable parallel application tracing framework, to collect selected aspects of the run-time behavior of HPC applications, including communication operations and execution time, while abstracting away the details of the computation proper. We subsequently generate benchmarks with identical run-time behavior from the collected traces. A unique feature of our approach is that we generate benchmarks in CONCEPTUAL, a domain-specific language that enables the expression of sophisticated communication patterns using a rich and easily understandable grammar yet compiles to ordinary C + MPI. Experimental results demonstrate that the generated benchmarks are able to preserve the run-time behavior - including both the communication pattern and the execution time - of the original applications. Such automated benchmark generation is particularly valuable for proprietary, export-controlled, or classified application codes: when supplied to a third party, our auto-generated benchmarks ensure performance fidelity without the risks associated with releasing the original code. This ability to automatically generate performance-accurate benchmarks from parallel applications is, to our knowledge, novel and without precedent.

  8. Visual definition of procedures for automatic virtual scene generation

    CERN Document Server

    Lucanin, Drazen

    2012-01-01

    With more and more digital media, especially in the field of virtual reality where detailed and convincing scenes are much in demand, procedural scene generation is a great help for artists. A problem is that defining scene descriptions through these procedures usually requires knowledge of formal language grammars and programming theory, and involves manually editing textual files using a strict syntax, making it less intuitive to use. Fortunately, graphical user interfaces have made many tasks on computers easier to perform, and out of the belief that creating computer programs can also be one of them, visual programming languages (VPLs) have emerged. The goal of VPLs is to shift more work from the programmer to the integrated development environment (IDE), making programming a more user-friendly task. In this thesis, an approach to using a VPL for defining procedures that automatically generate virtual scenes is presented. The methods required to build a VPL are presented, including a novel method of generating read...

  9. Automatic structures and growth functions for finitely generated abelian groups

    CERN Document Server

    Kamei, Satoshi

    2011-01-01

    In this paper, we consider the formal power series whose n-th coefficient is the number of copies of a given finite graph in the ball of radius n centred at the identity element in the Cayley graph of a finitely generated group and call it the growth function. Epstein, Iano-Fletcher and Uri Zwick proved that the growth function is a rational function if the group has a geodesic automatic structure. We compute the growth function in the case where the group is abelian and see that the denominator of the rational function is determined from the rank of the group.
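    For the simplest case, the group Z with generators {±1} and the graph consisting of a single vertex, the ball sizes give a rational growth series, which illustrates the kind of rationality result discussed above (a classic example, not the paper's general computation):

```latex
% Ball sizes in the Cayley graph of \mathbb{Z} with generators \{\pm 1\}:
% |B(n)| = 2n + 1, so the growth series is a rational function.
\[
  \sum_{n \ge 0} |B(n)|\, x^{n}
  \;=\; \sum_{n \ge 0} (2n+1)\, x^{n}
  \;=\; \frac{1+x}{(1-x)^{2}} .
\]
```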

  10. Automatic generation of application specific FPGA multicore accelerators

    DEFF Research Database (Denmark)

    Hindborg, Andreas Erik; Schleuniger, Pascal; Jensen, Nicklas Bo;

    2014-01-01

    High performance computing systems make increasing use of hardware accelerators to improve performance and power properties. For large high-performance FPGAs to be successfully integrated in such computing systems, methods to raise the abstraction level of FPGA programming are required. In this paper we propose a tool flow, which automatically generates highly optimized hardware multicore systems based on parameters. Profiling feedback is used to adjust these parameters to improve performance and lower the power consumption. For an image processing application we show that our tools are able...

  11. Automatic Generation of 3D Building Models with Multiple Roofs

    Institute of Scientific and Technical Information of China (English)

    Kenichi Sugihara; Yoshitugu Hayashi

    2008-01-01

    Based on building footprints (building polygons) on digital maps, we propose a GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygons). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we propose a useful polygon expression for deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.

  12. Integrated Coordinated Optimization Control of Automatic Generation Control and Automatic Voltage Control in Regional Power Grids

    Directory of Open Access Journals (Sweden)

    Qiu-Yu Lu

    2012-09-01

    Automatic Generation Control (AGC) and Automatic Voltage Control (AVC) are key approaches to frequency and voltage regulation in power systems. However, based on the assumption that active and reactive power control are decoupled, the existing AGC and AVC systems work independently without any coordination. In this paper, a hybrid control concept and method is introduced to set up an Integrated Coordinated Optimization Control (ICOC) system for AGC and AVC. Considering the diversity of control devices and the characteristics of discrete controls interacting with a continuously operating power system, the ICOC system is designed in a hierarchical structure and driven by security, quality and economic events, consequently reducing optimization complexity and realizing multi-target quasi-optimization. In addition, an innovative model of Loss Minimization Control (LMC) taking into consideration both active and reactive power regulation is proposed to achieve a substantial reduction in network losses, and a cross-iterative method for AGC and AVC instructions is also presented to decrease negative interference between the control systems. The ICOC system has already been put into practice in some provincial regional power grids in China. Open-loop operation tests have proved the validity of the presented control strategies.

  13. Automatic Tamil lyric generation based on ontological interpretation for semantics

    Indian Academy of Sciences (India)

    Rajeswari Sridhar; D Jalin Gladis; Kameswaran Ganga; G Dhivya Prabha

    2014-02-01

    This system proposes an n-gram based approach to automatic Tamil lyric generation, by the ontological semantic interpretation of the input scene. The approach is based on identifying the semantics conveyed in the scenario, thereby making the system understand the situation and generate lyrics accordingly. The heart of the system includes the ontological interpretation of the scenario, and the selection of the appropriate tri-grams for generating the lyrics. To fulfill this, we have designed a new ontology with weighted edges, where the edges correspond to a set of sentences, which indicate a relationship, and are represented as a tri-gram. Once the appropriate tri-grams are selected, the root words from these tri-grams are sent to the morphological generator, to form words in their packed form. These words are then assembled to form the final lyrics. Parameters of poetry like rhyme, alliteration, simile, vocative words, etc., are also taken care of by the system. Using this approach, we achieved an average accuracy of 77.3% with respect to the exact semantic details being conveyed in the generated lyrics.

  14. Automatic Mesh Generation on a Regular Background Grid

    Institute of Scientific and Technical Information of China (English)

    LO S.H; 刘剑飞

    2002-01-01

    This paper presents an automatic mesh generation procedure on a 2D domain based on a regular background grid. The idea is to devise a robust mesh generation scheme with equal emphasis on quality and efficiency. Instead of using a traditional regular rectangular grid, a mesh of equilateral triangles is employed to ensure that triangular elements of the best quality will be preserved in the interior of the domain. As for the boundary, it is generated by a node/segment insertion process. Nodes are inserted into the background mesh one by one, following the sequence of the domain boundary. The local structure of the mesh is modified based on the Delaunay criterion with the introduction of each node. Those boundary segments which are not produced in the phase of node insertion are recovered through a systematic element swap process. Two theorems are presented and proved to set up the theoretical basis of the boundary recovery part. Examples are presented to demonstrate the robustness and the quality of the mesh generated by the proposed technique.

  15. Towards Automatic Personalized Content Generation for Platform Games

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2010-01-01

    In this paper, we show that personalized levels can be automatically generated for platform games. We build on previous work, where models were derived that predicted player experience based on features of level design and on playing styles. These models are constructed using preference learning, based on questionnaires administered to players after playing different levels. The contributions of the current paper are (1) more accurate models based on a much larger data set; (2) a mechanism for adapting level design parameters to given players and playing style; (3) evaluation of this adaptation mechanism using both algorithmic and human players. The results indicate that the adaptation mechanism effectively optimizes level design parameters for particular players.

  16. Automatic Generation of OWL Ontology from XML Data Source

    CERN Document Server

    Yahia, Nora; Ahmed, AbdelWahab

    2012-01-01

    The eXtensible Markup Language (XML) can be used as a data exchange format in different domains. It allows different parties to exchange data by providing a common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for reasoning. An ontology can provide a semantic representation of domain knowledge which supports efficient reasoning and expressive power. One of the most popular ontology languages is the Web Ontology Language (OWL). It can represent domain knowledge using classes, properties, axioms and instances for use in a distributed environment such as the World Wide Web. This paper presents a new method for the automatic generation of an OWL ontology from XML data sources.
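    A toy sketch of one possible XML-to-OWL mapping, assuming rdflib: element names become OWL classes and parent/child nesting becomes object properties. The mapping rules are illustrative only and are not the rules proposed in the paper.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/onto#")   # illustrative target namespace

def xml_to_owl(xml_text):
    g = Graph()
    g.bind("ex", EX)
    g.bind("owl", OWL)

    def visit(elem, parent_cls=None):
        cls = EX[elem.tag]
        g.add((cls, RDF.type, OWL.Class))                # element name -> class
        if parent_cls is not None:
            prop = EX["has" + elem.tag.capitalize()]     # nesting -> property
            g.add((prop, RDF.type, OWL.ObjectProperty))
            g.add((prop, RDFS.domain, parent_cls))
            g.add((prop, RDFS.range, cls))
        for child in elem:
            visit(child, cls)

    visit(ET.fromstring(xml_text))
    return g

if __name__ == "__main__":
    sample = "<library><book><title>t</title></book></library>"
    print(xml_to_owl(sample).serialize(format="turtle"))
```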

  17. Spline-based automatic path generation of welding robot

    Institute of Scientific and Technical Information of China (English)

    Niu Xuejuan; Li Liangyu

    2007-01-01

    This paper presents a flexible method for the representation of welded seams based on spline interpolation. With this method, the tool path of a welding robot can be generated automatically from a 3D CAD model. The technique has been implemented and demonstrated in the FANUC Arc Welding Robot Workstation. Following the method, a software system was developed using the VBA of SolidWorks 2006. It offers an interface between SolidWorks and ROBOGUIDE, the off-line programming software of the FANUC robot, combining the strong modeling function of the former with the simulation function of the latter. It also has the capability of communicating with an on-line robot. Experimental results have shown high accuracy and strong reliability. This method will improve the intelligence and flexibility of the welding robot workstation.
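    A minimal sketch of the spline-interpolation step, assuming SciPy: a parametric cubic spline is fitted through a few seam points and resampled densely to give tool-path positions. The SolidWorks/ROBOGUIDE integration and torch-orientation handling are outside this sketch, and the seam points are illustrative.

```python
import numpy as np
from scipy.interpolate import splev, splprep

def weld_path_from_seam(seam_points, n_samples=100):
    """Fit a parametric cubic spline through 3D seam points and resample it
    densely as a robot tool path (positions only)."""
    pts = np.asarray(seam_points, dtype=float)
    tck, _ = splprep(pts.T, s=0.0, k=3)          # interpolating cubic spline
    u = np.linspace(0.0, 1.0, n_samples)
    x, y, z = splev(u, tck)
    return np.column_stack([x, y, z])

if __name__ == "__main__":
    # A few seam points, as might be extracted from a 3D CAD model
    seam = [(0, 0, 0), (10, 2, 0), (20, 3, 1), (30, 2, 2), (40, 0, 2)]
    print(weld_path_from_seam(seam, n_samples=8).round(2))
```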

  18. Adaptive neuro-fuzzy inference system based automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, S.H.; Etemadi, A.H. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran)

    2008-07-15

    Fixed-gain controllers for automatic generation control are designed at nominal operating conditions and fail to provide the best control performance over a wide range of operating conditions. So, to keep system performance near its optimum, it is desirable to track the operating conditions and use updated parameters to compute the control gains. A control scheme based on an adaptive neuro-fuzzy inference system (ANFIS), which is trained with the results of off-line studies obtained using particle swarm optimization, is proposed in this paper to optimize and update the control gains in real time according to load variations. Frequency relaxation is also implemented using ANFIS. The efficiency of the proposed method is demonstrated via simulations, and compliance of the proposed method with the NERC control performance standard is verified. (author)

  19. Automatic generation of matrix element derivatives for tight binding models

    Science.gov (United States)

    Elena, Alin M.; Meister, Matthias

    2005-10-01

    Tight binding (TB) models are one approach to the quantum mechanical many-particle problem. An important role in TB models is played by the hopping and overlap matrix elements between the orbitals on two atoms, which of course depend on the relative positions of the atoms involved. This dependence can be expressed with the help of Slater-Koster parameters, which are usually taken from tables. Recently, a way to generate these tables automatically was published. If TB approaches are applied to simulations of the dynamics of a system, derivatives of the matrix elements also appear. In this work we give general expressions for the first and second derivatives of such matrix elements. Implemented in a tight binding computer program such as, for instance, DINAMO, they obviate the need to type all the required derivatives of all occurring matrix elements by hand.
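    A minimal symbolic illustration of what "derivatives of matrix elements" involves: SymPy differentiating a generic distance-dependent hopping element with respect to the Cartesian components of the inter-atomic vector, which is the kind of expression the paper provides in general form. The exponential parametrization below is an assumption for illustration, not taken from the paper.

```python
import sympy as sp

# Components of the inter-atomic vector and a generic distance-dependent
# hopping element t(r); the exponential form is chosen only for illustration.
x, y, z, t0, q, r0 = sp.symbols("x y z t0 q r0", real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
t = t0 * sp.exp(-q * (r - r0))

# First and second derivatives with respect to the coordinates, i.e. the
# quantities needed for forces and for dynamical-matrix terms.
coords = (x, y, z)
grad = [sp.simplify(sp.diff(t, c)) for c in coords]
hess = sp.Matrix(3, 3, lambda i, j: sp.simplify(sp.diff(t, coords[i], coords[j])))

print("dt/dx     :", grad[0])
print("d2t/dxdy  :", hess[0, 1])
```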

  20. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

    With the purpose of making the verification of parameterized systems more general and easier, a new and intuitive language, PSL (Parameterized-system Specification Language), is proposed in this paper to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely and symbolically represent collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved showing that there is a simulation relation between the original system and its symbolic model. Since abstraction and symbolic techniques are exploited in the symbolic model, the state-explosion problem of traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on a UNIX platform. Thus, a complete tool for verifying parameterized synchronous systems is obtained and tested on some cases. The experimental results show that the method is satisfactory.

  1. Applications of automatic mesh generation and adaptive methods in computational medicine

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.

  2. Intelligent control schemes applied to Automatic Generation Control

    Directory of Open Access Journals (Sweden)

    Dingguo Chen

    2016-04-01

    Integrating an ever increasing amount of renewable generating resources into interconnected power systems has created new challenges to the safety and reliability of today's power grids and posed new questions to be answered in power system modeling, analysis and control. Automatic Generation Control (AGC) must be extended to be able to accommodate the control of renewable generating assets. In addition, AGC is mandated to operate in accordance with the NERC Control Performance Standard (CPS) criteria, which allow greater flexibility in relaxing the control of generating resources while still assuring the stability and reliability of interconnected power systems when each balancing authority operates in full compliance. Enhancements in several aspects of traditional AGC must be made in order to meet the aforementioned challenges. It is the intention of this paper to provide a systematic, mathematical formulation for AGC, as a first attempt in the context of meeting the NERC CPS requirements and integrating renewable generating assets, which to the best knowledge of the authors has not been reported in the literature. Furthermore, this paper proposes neural network based predictive control schemes for AGC. The proposed controller is capable of handling complicated nonlinear dynamics, in comparison with the conventional Proportional Integral (PI) controller, which is typically most effective for handling linear dynamics. The neural controller is designed in such a way that it can control the system generation in a relaxed manner, so that the ACE is controlled to a desired range instead of being driven to zero, which would otherwise increase the control effort and cost; most importantly, the resulting control performance meets the NERC CPS requirements and/or the NERC Balancing Authority's ACE Limit (BAAL) compliance requirements, whichever are applicable.

  3. Provenance-Powered Automatic Workflow Generation and Composition

    Science.gov (United States)

    Zhang, J.; Lee, S.; Pan, L.; Lee, T. J.

    2015-12-01

    In recent years, scientists have learned how to codify tools into reusable software modules that can be chained into multi-step executable workflows. Existing scientific workflow tools, created by computer scientists, require domain scientists to meticulously design their multi-step experiments before analyzing data. However, this is oftentimes contradictory to a domain scientist's daily routine of conducting research and exploration. We hope to resolve this dispute. Imagine this: an Earth scientist starts her day applying NASA Jet Propulsion Laboratory (JPL) published climate data processing algorithms over ARGO deep ocean temperature and AMSRE sea surface temperature datasets. Throughout the day, she tunes the algorithm parameters to study various aspects of the data. Suddenly, she notices some interesting results. She then turns to a computer scientist and asks, "can you reproduce my results?" By tracking and reverse engineering her activities, the computer scientist creates a workflow. The Earth scientist can now rerun the workflow to validate her findings, modify the workflow to discover further variations, or publish the workflow to share the knowledge. In this way, we aim to revolutionize computer-supported Earth science. We have developed a prototyping system to realize the aforementioned vision, in the context of service-oriented science. We have studied how Earth scientists conduct service-oriented data analytics research in their daily work, developed a provenance model to record their activities, and developed a technology to automatically generate workflows from user behavior, supporting the adaptability and reuse of these workflows for replicating and improving scientific studies. A data-centric repository infrastructure is established to capture richer provenance and further facilitate collaboration in the science community. We have also established a Petri nets-based verification instrument for provenance-based automatic workflow generation and recommendation.

  4. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    Science.gov (United States)

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.

    2016-06-01

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.

  5. Reinforcement-Based Fuzzy Neural Network Control with Automatic Rule Generation

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A reinforcement-based fuzzy neural network control with automatic rule generation (RBFNNC) is proposed. A set of optimized fuzzy control rules can be automatically generated through reinforcement learning based on the state variables of the object system. RBFNNC was applied to a cart-pole balancing system, and simulation results show significant improvements in rule generation.

  6. Shape design sensitivities using fully automatic 3-D mesh generation

    Science.gov (United States)

    Botkin, M. E.

    1990-01-01

    Previous work in three dimensional shape optimization involved specifying design variables by associating parameters directly with mesh points. More recent work has shown the use of fully-automatic mesh generation based upon a parameterized geometric representation. Design variables have been associated with a mathematical model of the part rather than the discretized representation. The mesh generation procedure uses a nonuniform grid intersection technique to place nodal points directly on the surface geometry. Although there exists an associativity between the mesh and the geometrical/topological entities, there is no mathematical functional relationship. This poses a problem during certain steps in the optimization process in which geometry modification is required. For the large geometrical changes which occur at the beginning of each optimization step, a completely new mesh is created. However, for gradient calculations many small changes must be made and it would be too costly to regenerate the mesh for each design variable perturbation. For that reason, a local remeshing procedure has been implemented which operates only on the specific edges and faces associated with the design variable being perturbed. Two realistic design problems are presented which show the efficiency of this process and test the accuracy of the gradient computations.

  7. Automatic Overset Grid Generation with Heuristic Feedback Control

    Science.gov (United States)

    Robinson, Peter I.

    2001-01-01

    An advancing front grid generation system for structured Overset grids is presented which automatically modifies Overset structured surface grids and control lines until user-specified grid qualities are achieved. The system is demonstrated on two examples: the first refines a space shuttle fuselage control line until global truncation error is achieved; the second advances, from control lines, the space shuttle orbiter fuselage top and fuselage side surface grids until proper overlap is achieved. Surface grids are generated in minutes for complex geometries. The system is implemented as a heuristic feedback control (HFC) expert system which iteratively modifies the input specifications for Overset control line and surface grids. It is developed as an extension of modern control theory, production rules systems and subsumption architectures. The methodology provides benefits over the full knowledge lifecycle of an expert system for knowledge acquisition, knowledge representation, and knowledge execution. The vector/matrix framework of modern control theory systematically acquires and represents expert system knowledge. Missing matrix elements imply missing expert knowledge. The execution of the expert system knowledge is performed through symbolic execution of the matrix algebra equations of modern control theory. The dot product operation of matrix algebra is generalized for heuristic symbolic terms. Constant time execution is guaranteed.

  8. An Algorithm to Automatically Generate the Combinatorial Orbit Counting Equations.

    Science.gov (United States)

    Melckenbeeck, Ine; Audenaert, Pieter; Michoel, Tom; Colle, Didier; Pickavet, Mario

    2016-01-01

    Graphlets are small subgraphs, usually containing up to five vertices, that can be found in a larger graph. Identifying the graphlets that a vertex in an explored graph touches can provide useful information about the local structure of the graph around that vertex. Actually finding all graphlets in a large graph can be time-consuming, however. As the graphlets grow in size, more different graphlets emerge and the time needed to find each graphlet also scales up. If it is not necessary to find each instance of each graphlet, and knowing the number of graphlets touching each node of the graph suffices, the problem is less hard. Previous research shows a way to simplify counting the graphlets: instead of looking for the graphlets needed, smaller graphlets are searched for, together with the numbers of common neighbors of vertices. Solving a system of equations then gives the number of times a vertex is part of each graphlet of the desired size. However, until now, equations only existed to count graphlets with 4 or 5 nodes. In this paper, two new techniques are presented. The first allows the required equations to be generated automatically, which eliminates the tedious work needed to derive them manually each time an extra node is added to the graphlets. The technique is independent of the number of nodes in the graphlets and can thus be used to count larger graphlets than previously possible. The second technique gives all graphlets a unique ordering which is easily extended to name graphlets of any size. Both techniques were used to generate equations to count graphlets with 4, 5 and 6 vertices, which extends all previous results. Code can be found at https://github.com/IneMelckenbeeck/equation-generator and https://github.com/IneMelckenbeeck/graphlet-naming.
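    The relation between counts of smaller structures and larger graphlets can be illustrated on the smallest case: per-node triangle counts obtained from edges plus common-neighbour counts. The sketch below (networkx assumed for convenience) only illustrates that idea; it is not the paper's equation generator.

```python
import networkx as nx

def triangles_from_common_neighbours(g):
    """Per-node triangle counts from edges and common-neighbour counts,
    the same kind of relation the orbit-counting equations generalise."""
    counts = {v: 0 for v in g}
    for u, v in g.edges():
        shared = len(list(nx.common_neighbors(g, u, v)))
        counts[u] += shared
        counts[v] += shared
    # each triangle is counted twice per vertex (once per incident edge pair)
    return {v: c // 2 for v, c in counts.items()}

if __name__ == "__main__":
    g = nx.erdos_renyi_graph(30, 0.2, seed=1)
    ours = triangles_from_common_neighbours(g)
    assert ours == nx.triangles(g)        # cross-check against networkx
    print(ours)
```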

  9. Learning Techniques for Automatic Test Pattern Generation using Boolean Satisfiability

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2013-07-01

    Automatic Test Pattern Generation (ATPG) is one of the core problems in the testing of digital circuits. ATPG algorithms based on Boolean Satisfiability (SAT) have turned out to be very powerful, due to great advances in the performance of satisfiability solvers for propositional logic over the last two decades. SAT-based ATPG clearly outperforms classical approaches, especially for hard-to-detect faults. However, because it cannot directly access structural information and don't-care conditions, it suffers from over-specification of the input patterns. In this paper we present techniques that add a layer exposing structural properties of the circuit and value-justification relations to a generic SAT algorithm. The approach joins binary decision diagrams (BDDs) and SAT techniques to improve the efficiency of ATPG. It performs an inexpensive reconvergent-fanout analysis of the circuit to gather information on local signal correlations through BDD learning, and then uses the learned information to restrict and focus the overall search space of SAT-based ATPG. The learning technique is effective and lightweight. Experimental results show the effectiveness of the approach.
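    As a toy illustration of the SAT formulation such ATPG engines build on, the sketch below encodes a single stuck-at-0 fault on one input of an AND gate as a miter and hands it to a SAT solver; python-sat (pysat) is assumed. The structural/BDD learning layer described above is what would prune the search on real circuits.

```python
from pysat.solvers import Glucose3

# Variables: 1=a, 2=b, 3=z (good circuit z = a AND b),
#            4=zf (faulty circuit with input a stuck-at-0, so zf = 0).
a, b, z, zf = 1, 2, 3, 4
cnf = [
    # Tseitin clauses for z <-> (a AND b)
    [-z, a], [-z, b], [z, -a, -b],
    # faulty circuit: with a stuck-at-0 the output is constant 0
    [-zf],
    # miter: a test pattern must make good and faulty outputs differ
    [z, zf], [-z, -zf],
]

with Glucose3(bootstrap_with=cnf) as solver:
    if solver.solve():
        model = solver.get_model()
        pattern = {name: (var in model) for name, var in [("a", a), ("b", b)]}
        print("test pattern:", pattern)      # expected: a=True, b=True
    else:
        print("fault is untestable (redundant)")
```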

  10. On A Semi-Automatic Method for Generating Composition Tables

    CERN Document Server

    Liu, Weiming

    2011-01-01

    Originating from Allen's Interval Algebra, composition-based reasoning has been widely acknowledged as the most popular reasoning technique in qualitative spatial and temporal reasoning. Given a qualitative calculus (i.e. a relation model), the first thing we should do is to establish its composition table (CT). In the past three decades, such work has usually been done manually. This is undesirable and error-prone, given that the calculus may contain tens or hundreds of basic relations. Computing the correct CT was identified by Tony Cohn as a challenge for computer scientists in 1995. This paper addresses this problem and introduces a semi-automatic method to compute the CT by randomly generating triples of elements. For several important qualitative calculi, our method can establish the correct CT in a reasonably short time. This is illustrated by applications to the Interval Algebra, the Region Connection Calculus RCC-8, the INDU calculus, and the Oriented Point Relation Algebras. Our method can also be us...
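    A small-scale illustration of the random-triple idea for the simplest calculus, the point algebra over {<, =, >}: the relation of each random pair (a, b), (b, c) and (a, c) is recorded and the entries accumulate into the composition table. Larger calculi such as the Interval Algebra or RCC-8 need more targeted sampling than this uniform sketch.

```python
import random

BASIC = ("<", "=", ">")

def relation(a, b):
    return "<" if a < b else ("=" if a == b else ">")

def composition_table(samples=200000, domain=100):
    """Estimate the composition table of the point algebra by randomly
    generating triples (a, b, c) of integers."""
    table = {(r, s): set() for r in BASIC for s in BASIC}
    for _ in range(samples):
        a, b, c = (random.randint(0, domain) for _ in range(3))
        table[(relation(a, b), relation(b, c))].add(relation(a, c))
    return table

if __name__ == "__main__":
    for (r, s), comp in sorted(composition_table().items()):
        print(f"{r} o {s} = {{{', '.join(sorted(comp))}}}")
```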

  11. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    Science.gov (United States)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Framework (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.

  12. Development of tools for automatic generation of PLC code

    CERN Document Server

    Koutli, Maria; Rochez, Jacques

    This Master's thesis was performed at CERN, more specifically in the EN-ICE-PLC section. The thesis describes the integration of two PLC platforms, which are based on the CODESYS development tool, into the CERN-defined industrial framework UNICOS. CODESYS is a development tool for PLC programming, based on the IEC 61131-3 standard, and is adopted by many PLC manufacturers. The two PLC development environments are SoMachine from Schneider and TwinCAT from Beckhoff. The two CODESYS-compatible PLCs should be controlled by the SCADA system of Siemens, WinCC OA. The framework includes a library of Function Blocks (objects) for the PLC programs and a software tool for automatic generation of the PLC code based on this library, called UAB. The integration aimed to provide a solution shared by both PLC platforms and was based on the PLCopen XML scheme. The developed tools were demonstrated by creating a control application for both PLC environments and testing the behavior of the library code.

  13. Audio watermarking technologies for automatic cue sheet generation systems

    Science.gov (United States)

    Caccia, Giuseppe; Lancini, Rosa C.; Pascarella, Annalisa; Tubaro, Stefano; Vicario, Elena

    2001-08-01

    A watermark is usually used as a way of hiding information in digital media. The watermarked information may be used for copyright protection or for user and media identification. In this paper we propose a watermarking scheme for digital audio signals that allows automatic identification of musical pieces transmitted in TV broadcasting programs. In our application the watermark must obviously be imperceptible to the users, should be robust to standard TV and radio editing, and must have a very low complexity. This last requirement is essential to allow a software real-time implementation of watermark insertion and detection using only a minimum amount of the computation power of a modern PC. In the proposed method the input audio sequence is subdivided into frames. For each frame a watermark spread-spectrum sequence is added to the original data. A two-step filtering procedure is used to generate the watermark from a Pseudo-Noise (PN) sequence. The filters approximate, respectively, the threshold and the frequency masking of the Human Auditory System (HAS). In the paper we discuss first the watermark embedding system and then the detection approach. The results of a large set of subjective tests are also presented to demonstrate the quality and robustness of the proposed approach.
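    A bare-bones NumPy sketch of frame-wise spread-spectrum embedding and correlation-based detection; the two-step HAS-based shaping filters of the proposed scheme are deliberately omitted, and all constants, lengths and thresholds are illustrative.

```python
import numpy as np

def embed(audio, key, strength=0.005, frame=1024):
    """Add a key-derived pseudo-noise (PN) sequence to every frame."""
    rng = np.random.default_rng(key)
    marked = np.asarray(audio, dtype=float).copy()
    for start in range(0, len(marked) - frame + 1, frame):
        pn = rng.choice([-1.0, 1.0], size=frame)
        marked[start:start + frame] += strength * pn
    return marked

def detect(audio, key, strength=0.005, frame=1024):
    """Correlate each frame with the regenerated PN sequence; the mean
    correlation is close to `strength` if the watermark is present."""
    rng = np.random.default_rng(key)
    audio = np.asarray(audio, dtype=float)
    corr = [float(np.dot(audio[s:s + frame],
                         rng.choice([-1.0, 1.0], size=frame))) / frame
            for s in range(0, len(audio) - frame + 1, frame)]
    score = float(np.mean(corr))
    return score, score > 0.5 * strength

if __name__ == "__main__":
    host = np.random.default_rng(0).normal(0.0, 0.1, 16384)  # stand-in for audio
    print("marked  :", detect(embed(host, key=42), key=42))
    print("original:", detect(host, key=42))
```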

  14. PUS Services Software Building Block Automatic Generation for Space Missions

    Science.gov (United States)

    Candia, S.; Sgaramella, F.; Mele, G.

    2008-08-01

    The Packet Utilization Standard (PUS) has been specified by the European Cooperation for Space Standardization (ECSS) and issued as ECSS-E-70-41A to define the application-level interface between Ground Segments and Space Segments. ECSS-E-70-41A complements the ECSS-E-50 and the Consultative Committee for Space Data Systems (CCSDS) recommendations for packet telemetry and telecommand. ECSS-E-70-41A characterizes the identified PUS Services from a functional point of view, and the ECSS-E-70-31 standard specifies the rules for their mission-specific tailoring. The current on-board software design for a space mission implies the production of several PUS terminals, each providing a specific tailoring of the PUS services. The associated on-board software building blocks are developed independently, leading to very different design choices and implementations even when the mission tailoring requires very similar services (from the Ground operations perspective). In this scenario, the automatic production of the PUS services building blocks for a mission would be a way to optimize the overall mission economy and to improve the robustness and reliability of the on-board software and of the Ground-Space interactions. This paper presents the Space Software Italia (SSI) activities for the development of an integrated environment to support: the PUS services tailoring activity for a specific mission; the mission-specific PUS services configuration; and the generation of the UML model of the software building block implementing the mission-specific PUS services, together with the related source code, support documentation (software requirements, software architecture, test plans/procedures, operational manuals), and the TM/TC database. The paper deals with: (a) the project objectives, (b) the tailoring, configuration, and generation process, (c) the description of the environments supporting the process phases, (d) the characterization of the meta-model used for the generation, (e) the

  15. Automatic Grasp Generation and Improvement for Industrial Bin-Picking

    DEFF Research Database (Denmark)

    Kraft, Dirk; Ellekilde, Lars-Peter; Rytz, Jimmy Alison

    2014-01-01

    Automatic bin-picking is an important industrial process that can lead to significant savings and potentially keep production in countries with high labour cost rather than outsourcing it. The presented work allows cycle time as well as setup cost, which are essential factors in automatic bin-picking, to be minimized. It therefore leads to a wider applicability of bin-picking in industry. ... and achieve comparable results and that our learning approach can improve system performance significantly.

  16. Historical Author Affiliations Assist Verification of Automatically Generated MEDLINE® Citations

    OpenAIRE

    Sabir, Tehseen F.; Hauser, Susan E.; Thoma, George R.

    2006-01-01

    High OCR error rates encountered in author affiliations increase the manual labor needed to verify MEDLINE citations automatically created from scanned journal articles. This is due to poor OCR recognition of the small text and italics frequently used in printed affiliations. Using author-affiliation relationships found in existing MEDLINE records, the SeekAffiliation (SA) program automatically finds potentially correct and complete affiliations, thereby reducing manual effort and increasing ...

  17. Automatic Generation of Remote Visualization Tools with WATT

    Science.gov (United States)

    Jensen, P. A.; Bollig, E. F.; Yuen, D. A.; Erlebacher, G.; Momsen, A. R.

    2006-12-01

    The ever increasing size and complexity of geophysical and other scientific datasets has forced developers to turn to more powerful alternatives for visualizing the results of computations and experiments. These alternatives need to be faster, scalable, more efficient, and able to run on large machines. At the same time, advances in scripting languages and visualization libraries have significantly decreased the development time of smaller, desktop visualization tools. Ideally, programmers would be able to develop visualization tools in a high-level, local, scripted environment and then automatically convert their programs into compiled, remote visualization tools for integration into larger computation environments. The Web Automation and Translation Toolkit (WATT) [1] converts a Tcl script for the Visualization Toolkit (VTK) [2] into a standards-compliant web service. We will demonstrate the use of WATT for the automated conversion of a desktop visualization application (written in Tcl for VTK) into a remote visualization service of interest to geoscientists. The resulting service will allow real-time access to a large dataset through the Internet, and will be easily integrated into the existing architecture of the Virtual Laboratory for Earth and Planetary Materials (VLab) [3]. [1] Jensen, P.A., Yuen, D.A., Erlebacher, G., Bollig, E.F., Kigelman, D.G., Shukh, E.A., Automated Generation of Web Services for Visualization Toolkits, Eos Trans. AGU, 86(52), Fall Meet. Suppl., Abstract IN42A-06, 2005. [2] The Visualization Toolkit, http://www.vtk.org [3] The Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu

  18. A strategy for automatically generating programs in the lucid programming language

    Science.gov (United States)

    Johnson, Sally C.

    1987-01-01

    A strategy for automatically generating and verifying simple computer programs is described. The programs are specified by a precondition and a postcondition in predicate calculus. The programs generated are in the Lucid programming language, a high-level, data-flow language known for its attractive mathematical properties and ease of program verification. The Lucid programming language is described, and the automatic program generation strategy is described and applied to several example problems.

  19. Extraction: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Improving the speed and quality of eddy current non-destructive testing of steam generator tubes calls for automating all processes that contribute to the diagnosis. This paper describes how we use signal processing, pattern recognition and artificial intelligence to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs

  20. Automatic Texture and Orthophoto Generation from Registered Panoramic Views

    DEFF Research Database (Denmark)

    Krispel, Ulrich; Evers, Henrik Leander; Tamke, Martin;

    2015-01-01

    Recent trends in 3D scanning are aimed at the fusion of range data and color information from images. The combination of these two outputs allows to extract novel semantic information. The workflow presented in this paper allows to detect objects, such as light switches, that are hard to identify...... from range data only. In order to detect these elements, we developed a method that utilizes range data and color information from high-resolution panoramic images of indoor scenes, taken at the scanners position. A proxy geometry is derived from the point clouds; orthographic views of the scene...... are automatically identified from the geometry and an image per view is created via projection. We combine methods of computer vision to train a classifier to detect the objects of interest from these orthographic views. Furthermore, these views can be used for automatic texturing of the proxy geometry....

  1. Automated Theorem Proving for Cryptographic Protocols with Automatic Attack Generation

    OpenAIRE

    Jan Juerjens; Thomas A. Kuhn

    2016-01-01

    Automated theorem proving is both automatic and can be quite efficient. When using theorem proving approaches for security protocol analysis, however, the problem is often that absence of a proof of security of a protocol may give little hint as to where the security weakness lies, to enable the protocol designer to improve the protocol. For our approach to verify cryptographic protocols using automated theorem provers for first-order logic (such as e-SETHEO or SPASS), we demonstrate a method...

  2. Research on Object-oriented Software Testing Cases of Automatic Generation

    Directory of Open Access Journals (Sweden)

    Junli Zhang

    2013-11-01

    Full Text Available In research on the automatic generation of test cases, different test cases drive different execution paths, and the probability of each path being executed also differs. For paths that are easy to execute, many redundant test cases tend to be generated, while only a few test cases are generated for control paths that are hard to execute. A genetic algorithm can be used to guide the automatic generation of test cases: for the former paths it restricts the generation of such test cases, while for the latter it encourages their generation as much as possible. Based on the study of path-oriented automatic test case generation, a genetic algorithm is therefore adopted to construct the generation process. According to the path triggered during the dynamic execution of the program, the generated test cases are separated into different equivalence classes, and the number of test cases is adjusted dynamically by the fitness associated with each path. The method creates a sufficient number of test cases for each execution path while reducing redundant ones, so it is an effective method for the automatic generation of test cases.
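
    To make the idea concrete, here is a minimal sketch of path-dependent fitness steering a population of test cases toward rarely executed paths. The program under test, the input ranges and the mutation scheme are hypothetical placeholders, not the authors' implementation.

        import random

        def program_path(a, b):
            # Hypothetical program under test: returns an identifier of the path it executes.
            if a > 100 and b % 7 == 0:
                return "rare_path"        # branch that random inputs seldom reach
            return "common_path"

        def fitness(case, path_counts):
            # Reward test cases whose path has been covered less often so far.
            return 1.0 / (1 + path_counts.get(program_path(*case), 0))

        def evolve(pop_size=30, generations=50, seed=0):
            random.seed(seed)
            population = [(random.randint(0, 200), random.randint(0, 200)) for _ in range(pop_size)]
            path_counts = {}
            for _ in range(generations):
                ranked = sorted(population, key=lambda c: fitness(c, path_counts), reverse=True)
                for case in ranked:
                    path = program_path(*case)
                    path_counts[path] = path_counts.get(path, 0) + 1
                parents = ranked[: pop_size // 2]
                # Recombine the two integer inputs of two parents and mutate one of them slightly.
                children = [(random.choice(parents)[0],
                             random.choice(parents)[1] + random.randint(-3, 3))
                            for _ in range(pop_size - len(parents))]
                population = parents + children
            return path_counts

        print(evolve())     # shows how often each path ended up being exercised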

  3. System and Component Software Specification, Run-time Verification and Automatic Test Generation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The following background technology is described in Part 5: Run-time Verification (RV), White Box Automatic Test Generation (WBATG). Part 5 also describes how WBATG...

  4. The challenge of Automatic Level Generation for platform videogames based on Stories and Quests

    OpenAIRE

    Mourato, Fausto; Birra, Fernando; Santos, Manuel Próspero dos

    2013-01-01

    In this article we bring the concepts of narrativism and ludology to automatic level generation for platform videogames. The initial motivation is to understand how this genre has been used as a storytelling medium. Based on a narrative theory of games, the differences among several titles have been identified. In addition, we propose a set of abstraction layers to describe the content of a quest-based story in the particular context of videogames. Regarding automatic level generation for pla...

  5. A system for automatically generating documentation for (C)LP programs

    OpenAIRE

    Hermenegildo, Manuel V.

    2000-01-01

    We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in ISO-Prolog, Ciao, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. A fundamental advantage of using lpdoc is that it helps maintaining a true correspondence between t...

  6. A complete discrimination system for polynomials with complex coefficients and its automatic generation

    Institute of Scientific and Technical Information of China (English)

    梁松新; 张景中

    1999-01-01

    By establishing a complete discrimination system for polynomials, the problem of complete root classification for polynomials with complex coefficients is fully solved; furthermore, the resulting algorithm is implemented as a general program in Maple, which enables the complete discrimination system and the complete root classification of a polynomial to be generated automatically by computer, without any human intervention. In addition, by using the automatic generation of root classification, a method to automatically determine the positive definiteness of a polynomial in one or two indeterminates is presented.
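
    As a tiny illustration of discriminant-based root classification (far simpler than the complete discrimination system of the paper, which covers general degrees and complex coefficients), a computer algebra system can be queried directly; the sketch below uses SymPy rather than Maple and only handles real-coefficient quadratics.

        from sympy import Poly, discriminant, symbols

        x = symbols("x")
        for coeffs in ([1, 0, -2], [1, 0, 2], [1, -2, 1]):     # x**2 - 2, x**2 + 2, x**2 - 2*x + 1
            p = Poly(coeffs, x)
            d = discriminant(p, x)
            if d > 0:
                kind = "two distinct real roots"
            elif d == 0:
                kind = "a repeated real root"
            else:
                kind = "a pair of complex conjugate roots"
            print(p.as_expr(), "-> discriminant", d, "->", kind)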

  7. A stochastic approach for automatic registration and fusion of left atrial electroanatomic maps with 3D CT anatomical images

    International Nuclear Information System (INIS)

    The integration of electroanatomic maps with highly resolved computed tomography cardiac images plays an important role in the successful planning of the ablation procedure of arrhythmias. In this paper, we present and validate a fully-automated strategy for the registration and fusion of sparse, atrial endocardial electroanatomic maps (CARTO maps) with detailed left atrial (LA) anatomical reconstructions segmented from a pre-procedural MDCT scan. Registration is accomplished by a parameterized geometric transformation of the CARTO points and by a stochastic search of the best parameter set which minimizes the misalignment between transformed CARTO points and the LA surface. The subsequent fusion of electrophysiological information on the registered CT atrium is obtained through radial basis function interpolation. The algorithm is validated by simulation and by real data from 14 patients referred to CT imaging prior to the ablation procedure. Results are presented, which show the validity of the algorithmic scheme as well as the accuracy and reproducibility of the integration process. The obtained results encourage the application of the integration method in post-intervention ablation assessment and basic AF research and suggest the development for real-time applications in catheter guiding during ablation intervention

  8. Automatic Generation Control Strategy Based on Balance of Daily Electric Energy

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    An automatic generation control strategy based on the balance of daily total electric energy is put forward. It balances the actual total energy generated under automatic generation control against the planned total energy on the basis of the area control error, and drives the actual 24-hour active power load curve toward the planned load curve. The generated energy is corrected by a velocity weighting factor so that the regulation is dynamic and the required speed of response is reached. The corresponding strategy is applied to the real-time data during the operation of automatic generation control. Simulation results are good, and effective energy compensation control can be achieved over the scheduled period.

  9. AROMA: Automatic Generation of Radio Maps for Localization Systems

    CERN Document Server

    Eleryan, Ahmed; Youssef, Moustafa

    2010-01-01

    WLAN localization has become an active research field recently. Due to the wide WLAN deployment, WLAN localization provides ubiquitous coverage and adds to the value of the wireless network by providing the location of its users without using any additional hardware. However, WLAN localization systems usually require constructing a radio map, which is a major barrier of WLAN localization systems' deployment. The radio map stores information about the signal strength from different signal strength streams at selected locations in the site of interest. Typical construction of a radio map involves measurements and calibrations making it a tedious and time-consuming operation. In this paper, we present the AROMA system that automatically constructs accurate active and passive radio maps for both device-based and device-free WLAN localization systems. AROMA has three main goals: high accuracy, low computational requirements, and minimum user overhead. To achieve high accuracy, AROMA uses 3D ray tracing enhanced wi...

  10. A Unified Overset Grid Generation Graphical Interface and New Concepts on Automatic Gridding Around Surface Discontinuities

    Science.gov (United States)

    Chan, William M.; Akien, Edwin (Technical Monitor)

    2002-01-01

    For many years, generation of overset grids for complex configurations has required the use of a number of different independently developed software utilities. Results created by each step were then visualized using a separate visualization tool before moving on to the next. A new software tool called OVERGRID was developed which allows the user to perform all the grid generation steps and visualization under one environment. OVERGRID provides grid diagnostic functions such as surface tangent and normal checks as well as grid manipulation functions such as extraction, extrapolation, concatenation, redistribution, smoothing, and projection. Moreover, it also contains hyperbolic surface and volume grid generation modules that are specifically suited for overset grid generation. It is the first time that such a unified interface existed for the creation of overset grids for complex geometries. New concepts on automatic overset surface grid generation around surface discontinuities will also be briefly presented. Special control curves on the surface such as intersection curves, sharp edges, open boundaries, are called seam curves. The seam curves are first automatically extracted from a multiple panel network description of the surface. Points where three or more seam curves meet are automatically identified and are called seam corners. Seam corner surface grids are automatically generated using a singular axis topology. Hyperbolic surface grids are then grown from the seam curves that are automatically trimmed away from the seam corners.

  11. Towards a Pattern-based Automatic Generation of Logical Specifications for Software Models

    OpenAIRE

    Klimek, Radoslaw

    2014-01-01

    The work relates to the automatic generation of logical specifications, considered as sets of temporal logic formulas, extracted directly from developed software models. The extraction process is based on the assumption that the whole developed model is structured using only predefined workflow patterns. A method of automatic transformation of workflow patterns to logical specifications is proposed. Applying the presented concepts enables bridging the gap between the benefits of deductive rea...

  12. Validating EHR documents: automatic schematron generation using archetypes.

    Science.gov (United States)

    Pfeiffer, Klaus; Duftschmid, Georg; Rinner, Christoph

    2014-01-01

    The goal of this study was to examine whether Schematron schemas can be generated from archetypes. The openEHR Java reference API was used to transform an archetype into an object model, which was then extended with context elements. The model was processed and the constraints were transformed into corresponding Schematron assertions. A prototype of the generator for the reference model HL7 v3 CDA R2 was developed and successfully tested. Preconditions for its reusability with other reference models were set. Our results indicate that an automated generation of Schematron schemas is possible with some limitations. PMID:24825691

  13. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composite (LFRC) with high fiber volume fraction with random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through random...
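
    A common way to produce such a random, non-overlapping fiber arrangement in the cross-section is random sequential placement with an overlap check. The sketch below is only a generic illustration of that idea (arbitrary cell size, radius and target fraction, no periodic boundaries), not the authors' generator; simple sequential placement also saturates well below the high fiber fractions that the paper targets, which typically require additional perturbation steps.

        import math, random

        def place_fibers(cell=1.0, radius=0.04, target_vf=0.45, max_tries=200000, seed=1):
            """Random sequential placement of non-overlapping circular fiber sections in a square cell."""
            random.seed(seed)
            fibers, tries = [], 0
            fiber_area = math.pi * radius ** 2
            while len(fibers) * fiber_area / cell ** 2 < target_vf and tries < max_tries:
                tries += 1
                x = random.uniform(radius, cell - radius)
                y = random.uniform(radius, cell - radius)
                # Accept the candidate only if it does not overlap any previously placed fiber.
                if all((x - fx) ** 2 + (y - fy) ** 2 >= (2 * radius) ** 2 for fx, fy in fibers):
                    fibers.append((x, y))
            return fibers, len(fibers) * fiber_area / cell ** 2

        centers, vf = place_fibers()
        print(len(centers), "fibers placed, fiber volume fraction %.2f" % vf)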

  14. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation

    DEFF Research Database (Denmark)

    Mangado Lopez, Nerea; Ceresa, Mario; Duchateau, Nicolas;

    2015-01-01

    Recent developments in computational modeling of cochlear implantation are promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To addr......Recent developments in computational modeling of cochlear implantation are promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging......'s CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns...... constitutive parameters to all components of the finite element model. This model can then be used to study in silico the effects of the electrical stimulation of the cochlear implant. Results are shown on a total of 25 models of patients. In all cases, a final mesh suitable for finite element simulations...

  15. AUTOMATIC MESH GENERATION OF 3-D GEOMETRIC MODELS

    Institute of Scientific and Technical Information of China (English)

    刘剑飞

    2003-01-01

    In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate mesh for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in 3-D models by the ball-packing method, (2) connect nodes to generate mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after Delaunay triangulation, (4) improve the mesh.
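
    Steps (1)-(3) of the scheme can be prototyped quickly with an off-the-shelf Delaunay implementation; the sketch below substitutes uniformly random interior nodes for the ball-packing step and a unit cube for the geometric model, so it only illustrates the connect-nodes-by-Delaunay idea, not the paper's full method.

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(0)
        # Step 1 (placeholder): nodes inside a unit cube; the paper creates them by ball packing.
        nodes = rng.random((500, 3))
        # Step 2: connect the nodes into tetrahedra by 3-D Delaunay triangulation.
        mesh = Delaunay(nodes)
        print("tetrahedra:", len(mesh.simplices))
        # Step 3 (boundary retrieval) reduces to the convex hull for this convex point set.
        print("boundary triangles:", len(mesh.convex_hull))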

  16. Validation of simple quantification methods for 18F FP CIT PET Using Automatic Delineation of volumes of interest based on statistical probabilistic anatomical mapping and isocontour margin setting

    International Nuclear Information System (INIS)

    18F FP CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, 18F FP CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five 18F FP CIT images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOIs based on SPAM. A quantitative parameter, QSPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in automatically delineated VOIs using isocontour margin setting, and the uptake volume product (QUVP) was calculated for each striatal region. QSPAM and QUVP were calculated for each visual grading and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all cases. Both QSPAM and QUVP were significantly different according to visual grading (0.001). The agreements of QUVP and QSPAM with visual grading were slight to fair for the caudate nucleus (K = 0.421 and 0.291, respectively) and good to perfect for the putamen (K = 0.663 and 0.607, respectively). Also, QSPAM and QUVP had a significant correlation with each other (0.001). Cerebral atrophy made a significant difference in QSPAM and QUVP of the caudate nuclei regions with decreased 18F FP CIT uptake. Simple quantitative measurements of QSPAM and QUVP showed acceptable agreement with visual grading. Although QSPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of 18F FP CIT PET in usual clinical

  17. Impact of automatic threshold capture on pulse generator longevity

    Institute of Scientific and Technical Information of China (English)

    CHEN Ruo-han; CHEN Ke-ping; WANG Fang-zheng; HUA Wei; ZHANG Shu

    2006-01-01

    Background The automatic threshold-tracking pacing algorithm developed by St. Jude Medical verifies ventricular capture beat by beat by recognizing the evoked response following each pacemaker stimulus. This function was assumed to be not only energy saving but also safe. This study estimated the extension in longevity obtained by AutoCapture (AC) compared with pacemakers programmed to manually optimized or nominal output. Methods Thirty-four patients who received the St. Jude Affinity series pacemaker were included in the study. The following measurements were taken: stimulation and sensing threshold, lead impedance, evoked response and polarization signals by the 3501 programmer during follow-up, and battery current and battery impedance under different conditions. For longevity comparison, ventricular output was programmed under three different conditions: (1) AC on; (2) AC off with nominal output; and (3) AC off with pacing output set at twice the pacing threshold with a minimum of 2.0 V. Patients were divided into two groups according to whether the chronic threshold was higher or lower than 1 V. The efficacy of AC was evaluated. Results Current drain with AC on, AC off with optimized programming, and AC off with nominal output was (14.33±2.84) mA, (16.74±2.75) mA and (18.4±2.44) mA, respectively (AC on or AC off with optimized programming vs. nominal output, P < 0.01). Estimated longevity was significantly extended by AC on compared with the nominal setting [(103 ± 27) months vs. (80 ± 24) months, P < 0.01]. Furthermore, compared with optimized programming, AC extends longevity when the pacing threshold is higher than 1 V. Conclusion AC could significantly prolong pacemaker longevity, especially in patients with a high pacing threshold.

  18. AN ALGORITHM FOR AUTOMATICALLY GENERATING BLACK-BOX TEST CASES

    Institute of Scientific and Technical Information of China (English)

    Xu Baowen; Nie Changhai; et al.

    2003-01-01

    Selection of test cases plays a key role in improving testing efficiency. Black-box testing is an important way of testing, and its validity lies on the selection of test cases in some sense. A reasonable and effective method for the selection and generation of test cases is urgently needed. This letter first introduces some usual methods of black-box test case generation, then proposes a new algorithm based on interface parameters and discusses its properties, and finally shows the effectiveness of the algorithm.

  19. AN ALGORITHM FOR AUTOMATICALLY GENERATING BLACK-BOX TEST CASES

    Institute of Scientific and Technical Information of China (English)

    Xu Baowen; Nie Changhai; Shi Qunfeng; Lu Hong

    2003-01-01

    Selection of test cases plays a key role in improving testing efficiency. Black-box testing is an important way of testing, and its validity lies on the selection of test cases in some sense. A reasonable and effective method about the selection and generation of test cases is urgently needed. This letter first introduces some usual methods on black-box test case generation, then proposes a new algorithm based on interface parameters and discusses its properties, finally shows the effectiveness of the algorithm.
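
    As a rough, generic illustration of parameter-based black-box case generation (exhaustive combination of interface parameter values with an optional filter, which is not the letter's algorithm), consider the following sketch; the interface, values and filter are invented for the example.

        from itertools import product

        # Hypothetical interface parameters and representative values for each.
        params = {
            "user_type": ["guest", "member", "admin"],
            "payload_kb": [0, 1, 1024],
            "encrypted": [True, False],
        }

        def generate_cases(params, keep=lambda case: True):
            names = list(params)
            for values in product(*(params[n] for n in names)):
                case = dict(zip(names, values))
                if keep(case):
                    yield case

        # Example filter: in this hypothetical interface, admins always use encryption.
        cases = list(generate_cases(params, keep=lambda c: c["encrypted"] or c["user_type"] != "admin"))
        print(len(cases), "test cases, e.g.", cases[0])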

  20. Automatic generation of min-weighted persistent formations

    Institute of Scientific and Technical Information of China (English)

    Luo Xiao-Yuan; Li Shao-Bao; Guan Xin-Ping

    2009-01-01

    This paper investigates methods for generating min-weighted rigid graphs and min-weighted persistent graphs. Rigidity and persistence are currently used in various studies on the coordination and control of autonomous multi-agent formations. To minimize the communication complexity of formations and reduce energy consumption, this paper introduces the rigidity matrix and presents three algorithms for generating min-weighted rigid and min-weighted persistent graphs. First, the existence of a min-weighted rigid graph is proved by using the rigidity matrix, and algorithm 1 is presented to generate min-weighted rigid graphs. Second, algorithm 2, based on the rigidity matrix, is presented to direct the edges of min-weighted rigid graphs to generate min-weighted persistent graphs. Third, formations with range constraints are considered, and algorithm 3 is presented to determine whether a framework can form a min-weighted persistent formation. Finally, some simulations are given to show the efficiency of the proposed algorithms.

  1. VOX POPULI: Automatic Generation of Biased Video Sequences

    NARCIS (Netherlands)

    Bocconi, S.; Nack, F.-M.

    2004-01-01

    We describe our experimental rhetoric engine Vox Populi that generates biased video-sequences from a repository of video interviews and other related audio-visual web sources. Users are thus able to explore their own opinions on controversial topics covered by the repository. The repository contains

  2. VOX POPULI: automatic generation of biased video sequences

    NARCIS (Netherlands)

    Bocconi, S.; Nack, F.-M.

    2004-01-01

    We describe our experimental rhetoric engine Vox Populi that generates biased video-sequences from a repository of video interviews and other related audio-visual web sources. Users are thus able to explore their own opinions on controversial topics covered by the repository. The repository contains

  3. Automatic Generation of Network Protocol Gateways

    DEFF Research Database (Denmark)

    Bromberg, Yérom-David; Réveillère, Laurent; Lawall, Julia;

    2009-01-01

    , however, requires an intimate knowledge of the relevant protocols and a substantial understanding of low-level network programming, which can be a challenge for many application programmers. This paper presents a generative approach to gateway construction, z2z, based on a domain-specific language...

  4. Automatic generation of computable implementation guides from clinical information models.

    Science.gov (United States)

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. PMID:25910958
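
    As a toy illustration of turning machine-readable constraints into Schematron assertions (the constraint format, element names and XPaths below are invented for the example and are far simpler than archetype-based generation), one might write:

        # Hypothetical constraints: (XPath context, test expression, human-readable message).
        constraints = [
            ("cda:observation", "cda:value/@unit = 'mmHg'", "Blood pressure must be recorded in mmHg."),
            ("cda:patient", "count(cda:birthTime) = 1", "Exactly one birth time is required."),
        ]

        def to_schematron(constraints, ns="urn:hl7-org:v3"):
            rules = "\n".join(
                ('  <sch:rule context="{0}">\n'
                 '    <sch:assert test="{1}">{2}</sch:assert>\n'
                 '  </sch:rule>').format(ctx, test, msg)
                for ctx, test, msg in constraints
            )
            return ('<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">\n'
                    '  <sch:ns prefix="cda" uri="{0}"/>\n'
                    '  <sch:pattern>\n{1}\n  </sch:pattern>\n'
                    '</sch:schema>').format(ns, rules)

        print(to_schematron(constraints))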

  5. Template Authoring Environment for the Automatic Generation of Narrative Content

    Science.gov (United States)

    Caropreso, Maria Fernanda; Inkpen, Diana; Keshtkar, Fazel; Khan, Shahzad

    2012-01-01

    Natural Language Generation (NLG) systems can make data accessible in an easily digestible textual form; but using such systems requires sophisticated linguistic and sometimes even programming knowledge. We have designed and implemented an environment for creating and modifying NLG templates that requires no programming knowledge, and can operate…

  6. Intermediate leak protection/automatic shutdown for B and W helical coil steam generator

    International Nuclear Information System (INIS)

    The report summarizes a follow-on study to the multi-tiered Intermediate Leak/Automatic Shutdown System report. It makes the automatic shutdown system specific to the Babcock and Wilcox (B and W) helical coil steam generator and to the Large Development LMFBR Plant. Threshold leak criteria specific to this steam generator design are developed, and performance predictions are presented for a multi-tier intermediate leak, automatic shutdown system applied to this unit. Preliminary performance predictions for application to the helical coil steam generator were given in the referenced report; for the most part, these predictions have been confirmed. The importance of including a cover gas hydrogen meter in this unit is demonstrated by calculation of a response time one-fifth that of an in-sodium meter at hot standby and refueling conditions

  7. Automatic exposure control in CT: the effect of patient size, anatomical region and prescribed modulation strength on tube current and image quality

    Energy Technology Data Exchange (ETDEWEB)

    Papadakis, Antonios E. [University Hospital of Heraklion, Department of Medical Physics, Stavrakia, P.O. Box 1352, Heraklion, Crete (Greece); Perisinakis, Kostas; Damilakis, John [University of Crete, Faculty of Medicine, Department of Medical Physics, P.O. Box 2208, Heraklion, Crete (Greece)

    2014-10-15

    To study the effect of patient size, body region and modulation strength on tube current and image quality in CT examinations that use automatic tube current modulation (ATCM). Ten physical anthropomorphic phantoms that simulate an individual as a neonate, 1-, 5-, 10-year-old and adult at various body habitus were employed. CT acquisition of head, neck, thorax and abdomen/pelvis was performed with ATCM activated at weak, average and strong modulation strength. The mean modulated mAs (mAs_mod) values were recorded. Image noise was measured at selected anatomical sites. The mAs_mod recorded for the neonate compared to the 10-year-old increased by 30 %, 14 %, 6 % and 53 % for head, neck, thorax and abdomen/pelvis, respectively (P < 0.05). The mAs_mod was lower than the preselected mAs with the exception of the 10-year-old phantom. In paediatric and adult phantoms, the mAs_mod ranged from 44 and 53 for weak to 117 and 93 for strong modulation strength, respectively. At the same exposure parameters image noise increased with body size (P < 0.05). The ATCM system studied here may affect dose differently for different patient habitus. Dose may decrease for overweight adults but increase for children older than 5 years old. Care should be taken when implementing ATCM protocols to ensure that image quality is maintained. • ATCM efficiency is related to the size of the patient's body. (orig.)

  8. Use of design pattern layout for automatic metrology recipe generation

    Science.gov (United States)

    Tabery, Cyrus; Page, Lorena

    2005-05-01

    As critical dimension control requirements become more challenging, due to complex designs, aggressive lithography, and the constant need to shrink, metrology recipe generation and design evaluation have also become very complex. Hundreds of unique sites must be measured and monitored to ensure good device performance and high yield. The use of the design and layout for automated metrology recipe generation will be critical to that challenge. The DesignGauge from Hitachi implements a system enabling arbitrary recipe generation and control of SEM observations performed on the wafer, based only on the design information. This concept for recipe generation can reduce the time to develop a technology node from RET and design rule selection, through OPC model calibration and verification, and all the way to high volume manufacturing. Conventional recipe creation for a large number of measurement targets requires a significant amount of engineering time. Often these recipes are used only once or twice during mask and process verification or OPC calibration data acquisition. This process of manual setup and analysis is also potentially error prone. CD-SEM recipe creation typically requires an actual wafer, so the recipe creation cannot occur until the scanner and reticle are in house. All of these problems with conventional CD SEM lead to increased development time and reduced final process quality. The new model of CD-SEM recipe generation and management utilizes design-to-SEM matching technology. This new technology extracts an idealized shape from the designed pattern, and utilizes the shape information for pattern matching. As a result, the designed pattern is used as the basis for the template instead of the actual SEM image. Recipe creation can be achieved in a matter of seconds once the target site list is finalized. The sequence of steps for creating a recipe is: generate a target site list, pass the design polygons (GDS) and site list to the CD SEM, define references

  9. Semantic annotation of requirements for automatic UML class diagram generation

    CERN Document Server

    Amdouni, Soumaya; Bouabid, Sondes

    2011-01-01

    The increasing complexity of software engineering requires effective methods and tools to support requirements analysts' activities. While much of a company's knowledge can be found in text repositories, current content management systems have limited capabilities for structuring and interpreting documents. In this context, we propose a tool for transforming text documents describing users' requirements into a UML model. The presented tool uses Natural Language Processing (NLP) and semantic rules to generate a UML class diagram. The main contribution of our tool is to provide assistance to designers, facilitating the transition from a textual description of user requirements to their UML diagrams, based on GATE (General Architecture for Text Engineering) by formulating the necessary rules that generate new semantic annotations.
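
    A very rough flavour of the requirements-to-class-diagram step (a naive heuristic, nothing like the GATE pipeline and semantic rules of the tool) is to treat recurring capitalised domain nouns as candidate classes and "has"/"contains" phrases as candidate associations; the requirements text below is invented for the example.

        import re
        from collections import Counter

        requirements = (
            "A Customer places one or more Orders. "
            "Each Order contains several Items. "
            "The Customer has a ShippingAddress."
        )

        # Candidate classes: capitalised words that are not common sentence-initial words.
        words = re.findall(r"\b[A-Z][a-zA-Z]+\b", requirements)
        classes = [w for w, n in Counter(words).items() if w not in {"A", "Each", "The"}]

        # Candidate associations: "<Class> (has|contains|places) ... <Class>" patterns.
        assoc = re.findall(r"([A-Z]\w+)\s+(?:has|contains|places)\b[^.]*?([A-Z]\w+)", requirements)

        print("classes:", classes)
        print("associations:", assoc)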

  10. Facilitate generation connections on Orkney by automatic distribution network management

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    This report summarises the results of a study assessing the capability and limitations of the Orkney Network under a variety of conditions of demand, generation connections, network configuration, and reactive compensation. A conceptual active management scheme (AMS) suitable for the conditions on Orkney is developed and evaluated. Details are given of a proposed framework for the design and evaluation of future active management schemes, logic control sequences for managed generation units, and a proposed evaluation method for the active management scheme. Implications of introducing the proposed AMS are examined, and the commercial aspects of an AMS and system security are considered. The existing Orkney network is described, and an overview of the SHEPDL (Scottish Hydro Electric Power Distribution Ltd.) SCADA system is presented with a discussion of AMS identification, selection, and development.

  11. An automatically generated code for relativistic inhomogeneous cosmologies

    CERN Document Server

    Bentivegna, Eloisa

    2016-01-01

    The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated-code-generation capabilities provided by its component Kranc.

  12. Semantic annotation of requirements for automatic UML class diagram generation

    Directory of Open Access Journals (Sweden)

    Soumaya Amdouni

    2011-05-01

    Full Text Available The increasing complexity of software engineering requires effective methods and tools to support requirements analysts' activities. While much of a company's knowledge can be found in text repositories, current content management systems have limited capabilities for structuring and interpreting documents. In this context, we propose a tool for transforming text documents describing users' requirements into a UML model. The presented tool uses Natural Language Processing (NLP) and semantic rules to generate a UML class diagram. The main contribution of our tool is to provide assistance to designers, facilitating the transition from a textual description of user requirements to their UML diagrams, based on GATE (General Architecture for Text Engineering) by formulating the necessary rules that generate new semantic annotations.

  13. VOX POPULI: Automatic Generation of Biased Video Sequences

    OpenAIRE

    Bocconi, S.; Nack, Frank

    2004-01-01

    We describe our experimental rhetoric engine Vox Populi that generates biased video-sequences from a repository of video interviews and other related audio-visual web sources. Users are thus able to explore their own opinions on controversial topics covered by the repository. The repository contains interviews with United States residents stating their opinion on the events occurring after the terrorist attack on the United States on the 11th of September 2001. We present a model for biased d...

  14. FAsTA: A Folksonomy-Based Automatic Metadata Generator

    OpenAIRE

    Al-Khalifa, Hend S.; Davis, Hugh C.

    2007-01-01

    Folksonomies provide a free source of keywords describing web resources, however, these keywords are free form and unstructured. In this paper, we describe a novel tool that converts folksonomy tags into semantic metadata, and present a case study consisting of a framework for evaluating the usefulness of this metadata within the context of a particular eLearning application. The evaluation shows the number of ways in which the generated semantic metadata adds value to the raw folksonomy tags.

  15. A hybrid approach to automatic generation of NC programs

    OpenAIRE

    G. Payeganeh; M. Tolouei-Rad

    2005-01-01

    Purpose: This paper describes AGNCP, an intelligent system for integrating commercial CAD and CAM systems for 2.5D milling operations at a low cost.Design/methodology/approach: It deals with different machining problems with the aid of two expert systems. It recognizes machining features, determines required machining process plans, cutting tools and parameters necessary for generation of NC programs.Findings: The system deals with different machining problems with the aid of two expert syste...

  16. Semi-automatic simulation model generation of virtual dynamic networks for production flow planning

    Science.gov (United States)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2016-08-01

    Computer modelling, simulation and visualization of production flow allow the efficiency of the production planning process in dynamic manufacturing networks to be increased. The use of a semi-automatic model generation concept based on a parametric approach supporting production planning processes is presented. The presented approach allows the use of simulation and visualization for the verification of production plans and alternative topologies of manufacturing network configurations, as well as the automatic generation of a series of production flow scenarios. Computational examples with the application of the Enterprise Dynamics simulation software, comprising the steps of production planning and control for a manufacturing network, are also presented.

  17. Automatic Geometry Generation from Point Clouds for BIM

    OpenAIRE

    Charles Thomson; Jan Boehm

    2015-01-01

    The need for better 3D documentation of the built environment has come to the fore in recent years, led primarily by city modelling at the large scale and Building Information Modelling (BIM) at the smaller scale. Automation is seen as desirable as it removes the time-consuming and therefore costly amount of human intervention in the process of model generation. BIM is the focus of this paper as not only is there a commercial need, as will be shown by the number of commercial solutions, but a...

  18. Automatic generation of Feynman rules in the Schroedinger functional

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Shinji [Humboldt Universitaet zu Berlin, Newtonstr. 15, 12489 Berlin (Germany)], E-mail: takeda@physik.hu-berlin.de

    2009-04-11

    We provide an algorithm to generate vertices for the Schroedinger functional with an abelian background gauge field. The background field has a non-trivial color structure, therefore we mainly focus on a manipulation of the color matrix part. We propose how to implement the algorithm especially in Python code. By using Python outputs produced by the code, we also show how to write a numerical expression of vertices in the time-momentum as well as the coordinate space into a Feynman diagram calculation code. As examples of the applications of the algorithm, we provide some one-loop results, ratios of the Λ parameters between the plaquette gauge action and the improved gauge actions composed from six-link loops (rectangular, chair and parallelogram), the determination of the O(a) boundary counter term to this order, and the perturbative cutoff effects of the step scaling function of the Schroedinger functional coupling constant.

  19. Contribution of supraspinal systems to generation of automatic postural responses

    Directory of Open Access Journals (Sweden)

    Tatiana G Deliagina

    2014-10-01

    Full Text Available Different species maintain a particular body orientation in space due to activity of the closed-loop postural control system. In this review we discuss the role of neurons of descending pathways in operation of this system as revealed in animal models of differing complexity: a lower vertebrate (the lamprey) and higher vertebrates (the rabbit and cat). In the lamprey and quadruped mammals, the role of spinal and supraspinal mechanisms in the control of posture is different. In the lamprey, the system contains one closed-loop mechanism consisting of supraspino-spinal networks. Reticulospinal (RS) neurons play a key role in generation of postural corrections. Due to vestibular input, any deviation from the stabilized body orientation leads to activation of a specific population of RS neurons. Each of the neurons activates a specific motor synergy. Collectively, these neurons evoke the motor output necessary for the postural correction. In contrast to lampreys, postural corrections in quadrupeds are primarily based not on the vestibular input but on the somatosensory input from limb mechanoreceptors. The system contains two closed-loop mechanisms – spinal and spino-supraspinal networks, which supplement each other. Spinal networks receive somatosensory input from the limb signaling postural perturbations, and generate spinal postural limb reflexes. These reflexes are relatively weak, but in intact animals they are enhanced due to both tonic supraspinal drive and phasic supraspinal commands. Recent studies of these supraspinal influences are considered in this review. A hypothesis suggesting common principles of operation of the postural systems stabilizing body orientation in a particular plane in the lamprey and quadrupeds, that is, the interaction of antagonistic postural reflexes, is discussed.

  20. Automatic Generation of Setup for CNC Spring Coiler Based on Case-based Reasoning

    Institute of Scientific and Technical Information of China (English)

    KU Xiangchen; WANG Runxiao; LI Jishun; WANG Dongbo

    2006-01-01

    When producing special-shape springs on a CNC spring coiler, the setup of the coiler is often manual work using a trial-and-error method. As a result, the coiler setup consumes much time and becomes the bottleneck of the spring production process. In order to cope with this situation, this paper proposes an automatic setup generation system for CNC spring coilers using case-based reasoning (CBR). The core of the study contains: (1) an integrated reasoning model of the CBR system; (2) spatial shape description of special-shape springs based on features; (3) coiling case representation using a shape feature matrix; and (4) a case similarity measure algorithm. The automatic generation system has been implemented with C++ Builder 6.0 and is helpful in improving the automation and efficiency of spring coiling.
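
    The retrieval step (4) can be pictured as nearest-neighbour matching over a feature representation of the spring, in the spirit of step (3). The feature encoding, weights and setups below are invented for illustration and are not the paper's shape feature matrix.

        import math

        # Hypothetical coiling cases: feature vector -> previously validated coiler setup.
        case_base = {
            (3.2, 40.0, 12, 0.0): {"feed_rate": 55, "pitch_tool": 2},   # (wire d, coil d, turns, taper)
            (1.6, 18.0, 20, 0.5): {"feed_rate": 80, "pitch_tool": 1},
            (3.0, 42.0, 10, 0.0): {"feed_rate": 50, "pitch_tool": 2},
        }
        weights = (2.0, 1.0, 0.5, 1.5)   # assumed relative importance of each feature

        def similarity(a, b):
            # Weighted inverse Euclidean distance in feature space.
            d = math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))
            return 1.0 / (1.0 + d)

        def retrieve(query):
            return max(case_base, key=lambda c: similarity(c, query))

        query = (3.1, 41.0, 11, 0.0)
        best = retrieve(query)
        print("most similar case:", best, "-> reuse setup", case_base[best])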

  1. Automatic Generation of Proof Tactics for Finite-Valued Logics

    Directory of Open Access Journals (Sweden)

    João Marcos

    2010-03-01

    Full Text Available A number of flexible tactic-based logical frameworks are nowadays available that can implement a wide range of mathematical theories using a common higher-order metalanguage. Used as proof assistants, one of the advantages of such powerful systems resides in their responsiveness to extensibility of their reasoning capabilities, being designed over rule-based programming languages that allow the user to build her own `programs to construct proofs' - the so-called proof tactics. The present contribution discusses the implementation of an algorithm that generates sound and complete tableau systems for a very inclusive class of sufficiently expressive finite-valued propositional logics, and then illustrates some of the challenges and difficulties related to the algorithmic formation of automated theorem proving tactics for such logics. The procedure on whose implementation we will report is based on a generalized notion of analyticity of proof systems that is intended to guarantee termination of the corresponding automated tactics on what concerns theoremhood in our targeted logics.

  2. Automatic Generation of Printed Catalogs: An Initial Attempt

    Directory of Open Access Journals (Sweden)

    Jared Camins-Esakov

    2010-06-01

    Full Text Available Printed catalogs are useful in a variety of contexts. In special collections, they are often used as reference tools and to commemorate exhibits. They are useful in settings, such as in developing countries, where reliable access to the Internet—or even electricity—is not available. In addition, many private collectors like to have printed catalogs of their collections. All the information needed for creating printed catalogs is readily available in the MARC bibliographic records used by most libraries, but there are no turnkey solutions available for the conversion from MARC to printed catalog. This article describes the development of a system, available on github, that uses XSLT, Perl, and LaTeX to produce press-ready PDFs from MARCXML files. The article particularly focuses on the two XSLT stylesheets which comprise the core of the system, and do the "heavy lifting" of sorting and indexing the entries in the catalog. The author also highlights points where the data stored in MARC bibliographic records requires particular "massaging," and suggests improvements for future attempts at automated printed catalog generation.

  3. A hybrid approach to automatic generation of NC programs

    Directory of Open Access Journals (Sweden)

    G. Payeganeh

    2005-12-01

    Full Text Available Purpose: This paper describes AGNCP, an intelligent system for integrating commercial CAD and CAM systems for 2.5D milling operations at a low cost. Design/methodology/approach: It deals with different machining problems with the aid of two expert systems. It recognizes machining features, and determines the required machining process plans, cutting tools and parameters necessary for the generation of NC programs. Findings: The system deals with different machining problems with the aid of two expert systems. The first communicates with the CAD system for recognizing machining features. It is developed in LISP, as machining features can be properly represented by LISP code, which is ideal for manipulating lists and input data. The second expert system requires extensive communications with several databases for retrieving tooling and machining information, and the VP-Expert shell was found to be the most suitable package to perform this task. Research limitations/implications: 2.5D milling covers a wide range of operations. However, work is in progress to cover 3D milling operations. The system can also be modified to be used for other activities such as turning, flame cutting, electro discharge machining (EDM), punching, etc. Practical implications: Use of AGNCP resulted in improved efficiency, noticeable time savings, and elimination of the need for expert process planners. Originality/value: The paper describes a method for eliminating the need for extensive user intervention for CAD/CAM integration.

  4. The Connect-The-Dots Family of Puzzles: Design and Automatic Generation

    NARCIS (Netherlands)

    Löffler, Maarten; Kaiser, Mira; van Kapel, Tim; Klappe, Gerwin; van Kreveld, Marc; Staals, Frank

    2014-01-01

    In this paper we introduce several innovative variants on the classic Connect-The-Dots puzzle. We study the underlying geometric principles and investigate methods for the automatic generation of high-quality puzzles from line drawings. Specifically, we introduce three new variants of the classic Co

  5. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  6. Automatic data generation scheme for finite-element method /FEDGE/ - Computer program

    Science.gov (United States)

    Akyuz, F.

    1970-01-01

    Algorithm provides for automatic input data preparation for the analysis of continuous domains in the fields of structural analysis, heat transfer, and fluid mechanics. The computer program utilizes the natural coordinate systems concept and the finite element method for data generation.

  7. Students' Feedback Preferences: How Do Students React to Timely and Automatically Generated Assessment Feedback?

    Science.gov (United States)

    Bayerlein, Leopold

    2014-01-01

    This study assesses whether or not undergraduate and postgraduate accounting students at an Australian university differentiate between timely feedback and extremely timely feedback, and whether or not the replacement of manually written formal assessment feedback with automatically generated feedback influences students' perception of…

  8. Revisiting the Steam-Boiler Case Study with LUTESS : Modeling for Automatic Test Generation

    OpenAIRE

    Papailiopoulou, Virginia; Seljimi, Besnik; Parissis, Ioannis

    2009-01-01

    LUTESS is a testing tool for synchronous software that makes it possible to automatically build test data generators. The latter rely on a formal model of the program environment composed of a set of invariant properties, supposed to hold for every software execution. Additional assumptions can be used to guide the test data generation. The environment descriptions together with the assumptions correspond to a test model of the program. In this paper, we apply this modeling...

  9. Accuracy assessment of building point clouds automatically generated from iPhone images

    Science.gov (United States)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such automatically generated point cloud on a TLS point cloud of the same object to discuss accuracy, advantages and limitations of the iPhone generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point to point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. Mean (μ) and standard deviation (σ) of roughness histograms are calculated as (μ1 = 0.44 m., σ1 = 0.071 m.) and (μ2 = 0.025 m., σ2 = 0.037 m.) for the iPhone and TLS point clouds respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, quick and real-time change detection purposes. However, further insights should be obtained first on the circumstances that are needed to guarantee a successful point cloud generation from smartphone images.
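
    The registration-quality figures quoted above (outlier percentage and mean point-to-point distance) can be reproduced for any pair of registered clouds with a nearest-neighbour query. The arrays and the 1 m outlier threshold below are placeholders, not the data or parameters of the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        tls_points = rng.random((5000, 3)) * 10                              # placeholder TLS reference cloud
        iphone_points = tls_points[:2000] + rng.normal(0, 0.1, (2000, 3))    # noisy "iPhone" cloud

        # For each iPhone point, distance to its nearest TLS point.
        tree = cKDTree(tls_points)
        dists, _ = tree.query(iphone_points)

        threshold = 1.0                                   # assumed outlier cut-off in metres
        outlier_ratio = np.mean(dists > threshold) * 100
        print("outliers: %.2f%%, mean point-to-point distance: %.3f m"
              % (outlier_ratio, dists[dists <= threshold].mean()))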

  10. The mesh-matching algorithm: an automatic 3D mesh generator for Finite element structures

    CERN Document Server

    Couteau, B; Lavallee, S; Payan, Yohan; Lavallee, St\\'{e}phane

    2000-01-01

    Several authors have employed Finite Element Analysis (FEA) for stress and strain analysis in orthopaedic biomechanics. Unfortunately, the use of three-dimensional models is time consuming and consequently the number of analysis to be performed is limited. The authors have investigated a new method allowing automatically 3D mesh generation for structures as complex as bone for example. This method called Mesh-Matching (M-M) algorithm generated automatically customized 3D meshes of bones from an already existing model. The M-M algorithm has been used to generate FE models of ten proximal human femora from an initial one which had been experimentally validated. The new meshes seemed to demonstrate satisfying results.

  11. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    Science.gov (United States)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

    The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the `randn' function in the MATLAB program and was used. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for the given patient might be different. The PDF of the rectal NTCP was obtained automatically for each group except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. We showed that during the prostate IMRT optimization, the patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the
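
    The "calculated data" and density-estimation steps follow a standard recipe: normally distributed samples whose means lie on a fitted curve, then a Gaussian-kernel density estimate. The fitted curve below is an invented placeholder, so the sketch shows the mechanics only, not the clinical model of the paper.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(0)

        def fitted_ntcp(v_ratio):
            # Placeholder for the clinical NTCP-vs-V%ratio fit (monotonically increasing, in %).
            return 2.0 + 0.5 * v_ratio

        # Calculated data: random NTCP values whose mean lies on the fitted curve (sd = 1%).
        v_ratio = rng.uniform(0, 30, 500)
        ntcp = fitted_ntcp(v_ratio) + rng.normal(0, 1.0, v_ratio.size)

        # Probability density function of the rectal NTCP via Gaussian kernel density estimation.
        pdf = gaussian_kde(ntcp)
        grid = np.linspace(ntcp.min(), ntcp.max(), 5)
        print(np.round(pdf(grid), 4))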

  12. HIGH QUALITY IMPLEMENTATION FOR AUTOMATIC GENERATION C# CODE BY EVENT-B PATTERN

    Directory of Open Access Journals (Sweden)

    Eman K Elsayed

    2014-01-01

    Full Text Available In this paper we propose a logically correct path to automatically implement any algorithm or model in verified C# code. Our proposal depends on using Event-B as a formal method. It is a suitable solution for users who are inexperienced in programming languages but proficient in mathematical modeling. Our proposal also integrates requirements, code and verification in the system development life cycle. We also suggest using Event-B patterns. Our approach is classified into two cases: the algorithm case and the model case. The benefits of our proposal are reducing the proof effort, improving reusability, increasing the degree of automation and generating high-quality code. In this paper we apply and discuss the three phases of the automatic code generation philosophy on two case studies: the first is a "minimum" algorithm and the second is a model of an ATM.

  13. Automatic Generation of Deep Web Wrappers based on Discovery of Repetition

    OpenAIRE

    Nakatoh, Tetsuya; Yamada, Yasuhiro; Hirokawa, Sachio

    2004-01-01

    A Deep Web wrapper is a program that extracts content from search results. We propose a new automatic wrapper generation algorithm which discovers a repetitive pattern from search results. The repetitive pattern is expressed by token sequences which consist of HTML tags, plain texts and wild-cards. The algorithm applies string matching with mismatches to unify the variation from the template and uses FFT (fast Fourier transform) to attain efficiency. We show an empirical evaluation of ...

  14. Deriving Safety Cases for the Formal Safety Certification of Automatically Generated Code

    OpenAIRE

    Basir, Nurlida; Denney, Ewen; Fischer, Bernd

    2008-01-01

    We present an approach to systematically derive safety cases for automatically generated code from information collected during a formal, Hoare-style safety certification of the code. This safety case makes explicit the formal and informal reasoning principles, and reveals the top-level assumptions and external dependencies that must be taken into account; however, the evidence still comes from the formal safety proofs. It uses a generic goal-based argument that is instantiated with respect t...

  15. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    Science.gov (United States)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies which each suffer from their own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code, and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise this separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. The discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation

  16. Morphological and anatomical structure of generative organs of Salsola kali ssp. ruthenica (Iljin) Soó at the SEM level

    Directory of Open Access Journals (Sweden)

    Krystyna Idzikowska

    2011-04-01

    The morphology and anatomy of the generative organs of Salsola kali ssp. ruthenica were examined in detail using light microscopy (LM) and scanning electron microscopy (SEM). Whole flowers, fruits and their parts (pistil, stamens, sepals, embryo, seed) were observed at different developmental stages. In the first stage (June), flower buds were closed. In the second stage (August), flowers were ready for pollination/fertilization. In the third stage (September), fruits were mature. Additionally, the anatomical and morphological structure of the sepals was observed by means of LM and SEM. Transverse and longitudinal semi-sections through the sepals allowed the first phase of wing formation to be recorded by SEM. The appearance of stomata in the epidermal cells of the sepals above the forming wings was also noteworthy; stomata were likewise observed in mature fruits.

  17. Automatic generation of stop word lists for information retrieval and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
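
    A minimal sketch of the exclusion rule described above, with assumed tokenisation, thresholds and sample data (this is an illustration of the ratio idea, not the actual implemented or patented method):

        from collections import Counter

        def generate_stop_words(documents, keywords, min_ratio=1.0, max_size=50):
            """A term stays on the candidate list only if its keyword-adjacency
            frequency divided by its keyword frequency is at least min_ratio;
            the surviving list is truncated to max_size words."""
            keywords = {k.lower() for k in keywords}
            adjacency_freq = Counter()   # term appears directly before/after a keyword
            keyword_freq = Counter()     # term appears as a keyword itself

            for doc in documents:
                tokens = doc.lower().split()
                for i, tok in enumerate(tokens):
                    if tok in keywords:
                        keyword_freq[tok] += 1
                    neighbours = tokens[max(0, i - 1):i] + tokens[i + 1:i + 2]
                    if any(n in keywords for n in neighbours):
                        adjacency_freq[tok] += 1

            candidates = []
            for term, adj in adjacency_freq.items():
                if adj / max(keyword_freq[term], 1) >= min_ratio:   # below threshold -> excluded
                    candidates.append((adj, term))

            candidates.sort(reverse=True)        # truncate by adjacency frequency
            return [term for _, term in candidates[:max_size]]

        docs = ["the retrieval of keyword lists from the corpus",
                "a corpus of documents and the keyword frequency"]
        print(generate_stop_words(docs, ["keyword", "corpus"]))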

  18. AUTOMATIC GENERATION CONTROL OF MULTI AREA POWER SYSTEMS USING ANN CONTROLLER

    Directory of Open Access Journals (Sweden)

    Bipasha Bhatia

    2012-07-01

    This paper presents the use of one of the methods of artificial intelligence to study the automatic generation control of interconnected power systems. A control scheme is established for an interconnected three-area thermal-thermal-thermal power system using generation rate constraints (GRC) and an Artificial Neural Network (ANN) controller. The behaviour of the controllers is simulated using the MATLAB/SIMULINK package. The outputs of both controllers are compared, and it is established that the ANN-based approach performs better than the conventional approach with GRC for 1% step-load conditions.

  19. Distribution of airway narrowing responses across generations and at branching points, assessed in vitro by anatomical optical coherence tomography

    Directory of Open Access Journals (Sweden)

    Eastwood Peter R

    2010-01-01

    Abstract. Background: Previous histological and imaging studies have shown variability in the degree of bronchoconstriction of airways sampled at different locations in the lung (i.e., heterogeneity). Heterogeneity can occur at different airway generations and at branching points in the bronchial tree. Whilst heterogeneity has been detected by previous experimental approaches, its spatial relationship either within or between airways is unknown. Methods: In this study, the distribution of airway narrowing responses across a portion of the porcine bronchial tree was determined in vitro. The portion comprised contiguous airways spanning bronchial generations #3-11, including the associated side branches. We used a recent optical imaging technique, anatomical optical coherence tomography, to image the bronchial tree in three dimensions. Bronchoconstriction was produced by carbachol administered to either the adventitial or the luminal surface of the airway. Luminal cross-sectional area was measured before and at different time points after constriction to carbachol, and airway narrowing was calculated from the percent decrease in luminal cross-sectional area. Results: When administered to the adventitial surface, the degree of airway narrowing increased progressively from proximal to distal generations (r = 0.80 to 0.98, P ...). Conclusions: Our findings demonstrate that the bronchial tree expresses intrinsic serial heterogeneity, such that narrowing increases from proximal to distal airways, a relationship that is influenced by the route of drug administration but not by structural variations accompanying branching sites.

  20. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates the management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating the models, the software can automatically fit all of them to the data and provide a ranking for model selection when data are available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine, so all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software. PMID:19425122
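
    The core idea (one master model plus directives that knock out components) can be illustrated with a small, purely hypothetical sketch; the model structure and directive format below are invented for illustration and are unrelated to ModelMage's actual SBML handling.

        from itertools import combinations

        # Hypothetical master model: reaction name -> participating species
        master_model = {
            "R1": ["A", "B"],
            "R2": ["B", "C"],
            "R3": ["A", "C"],
        }

        # Directive: reactions that may be left out to form candidate models
        optional_reactions = ["R2", "R3"]

        def generate_candidates(model, optional):
            """Enumerate candidate models obtained by removing any subset of the
            optional reactions from the master model (the empty subset keeps it)."""
            candidates = []
            for k in range(len(optional) + 1):
                for removed in combinations(optional, k):
                    candidate = {r: s for r, s in model.items() if r not in removed}
                    candidates.append((removed, candidate))
            return candidates

        for removed, candidate in generate_candidates(master_model, optional_reactions):
            print("without", removed or "(nothing)", "->", sorted(candidate))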

  1. Design of an Intelligent Interlocking System Based on Automatically Generated Interlocking Table

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Y.S. [Namseoul University, Chonan (Korea)

    2002-03-01

    In this paper, we propose an expert system for electronic interlocking which enhances the safety, efficiency and expandability of the existing system by designing real-time interlocking control based on an interlocking table that is generated automatically using an artificial intelligence approach. The expert system consists of two parts: an interlocking table generation part and a real-time interlocking control part. The former automatically generates the interlocking relationships of all possible routes by dynamically searching the station topology obtained from the station database. The latter controls the status of the station facilities in real time by applying the generated interlocking relationships to signalling equipment such as signal devices, points and track circuits for a given route. The expert system is implemented in the C language, which suits the interlocking table generation part thanks to its dynamic memory allocation. Finally, the effectiveness of the expert system is demonstrated by simulation for a typical station model. (author). 11 refs., 9 figs., 2 tabs.

  2. Automatic geometric modeling, mesh generation and FE analysis for pipelines with idealized defects and arbitrary location

    Energy Technology Data Exchange (ETDEWEB)

    Motta, R.S.; Afonso, S.M.B.; Willmersdorf, R.B.; Lyra, P.R.M. [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Cabral, H.L.D. [TRANSPETRO, Rio de Janeiro, RJ (Brazil); Andrade, E.Q. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Although the Finite Element Method (FEM) has proved to be a powerful tool for predicting the failure pressure of corroded pipes, generating good computational models of pipes with corrosion defects can take several days, which makes computational simulation difficult to apply in practice. The main purpose of this work is to develop a set of computational tools that automatically produce models of pipes with defects, ready to be analyzed with commercial FEM programs, starting from a few parameters that locate and provide the main dimensions of the defect or of a series of defects. These defects can be internal or external and can assume general spatial locations along the pipe; idealized rectangular and elliptic geometries can be generated. The tools are based on the MSC.PATRAN pre- and post-processing programs and were written in PCL (Patran Command Language). The program for the automatic generation of models (PIPEFLAW) has a simplified, customized graphical interface, so that an engineer with basic notions of computational simulation with the FEM can rapidly generate models that result in precise and reliable simulations. Some examples of models of pipes with defects generated by the PIPEFLAW system are shown, and the results of numerical analyses done with the tools presented in this work are compared with empirical results. (author)

  3. Development of ANJOYMC Program for Automatic Generation of Monte Carlo Cross Section Libraries

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Lee, Chung Chan

    2007-03-15

    The NJOY code developed at Los Alamos National Laboratory generates cross section libraries in ACE format for Monte Carlo codes such as MCNP and McCARD by processing evaluated nuclear data in ENDF/B format. It takes a long time to prepare all the NJOY input files for hundreds of nuclides at various temperatures, and the input files can contain errors. In order to solve these problems, the ANJOYMC program has been developed. Starting from a simple user input deck, this program not only generates all the NJOY input files automatically but also generates a batch file to perform all the NJOY calculations. The ANJOYMC program is written in Fortran90 and can be executed under the Windows and Linux operating systems on a personal computer. Cross section libraries in ACE format can thus be generated in a short time and without errors from a simple user input deck.
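
    The gist, expanding a small user deck into one input file per nuclide/temperature pair plus a batch script, can be sketched as follows; the deck format, file names and NJOY command line are placeholders, not the real ANJOYMC or NJOY syntax.

        from pathlib import Path

        # Hypothetical user deck: which nuclides and temperatures to process
        user_deck = {
            "nuclides": ["U235", "U238", "H1"],
            "temperatures_K": [293.6, 600.0, 900.0],
        }

        outdir = Path("njoy_inputs")
        outdir.mkdir(exist_ok=True)
        batch_lines = []

        for nuclide in user_deck["nuclides"]:
            for temp in user_deck["temperatures_K"]:
                name = f"{nuclide}_{int(temp)}K"
                input_file = outdir / f"{name}.nji"
                # Placeholder input body; a real generator would emit the proper
                # NJOY module cards (moder/reconr/broadr/acer ...) here.
                input_file.write_text(
                    f"-- auto-generated input for {nuclide} at {temp} K\n"
                )
                batch_lines.append(f"njoy < {input_file} > {name}.out")

        (outdir / "run_all.sh").write_text("\n".join(batch_lines) + "\n")
        print(f"wrote {len(batch_lines)} input files and a batch script to {outdir}/")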

  4. Automatic Generation of Data Types for Classification of Deep Web Sources

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective metadata-based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error-prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources: it requires its creator to identify all the possible input and output types of a service a priori, and in many domains it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for the automatic generation of the data types of an SCD. We propose two different approaches for learning the data types of a class of Web sources. The Brute-Force Learner is able to generate data types that achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for the automatic generation of data types for citation Web sources and present a quantitative evaluation of the two solutions.

  5. Z Specification Automatic Generator

    Institute of Scientific and Technical Information of China (English)

    赵正旭; 温晋杰

    2016-01-01

    The formal Z language uses rigorous mathematical theory and can effectively improve the reliability and robustness of software, but because of that mathematical content only a small number of people can apply Z fluently to write formal specifications. At present, most research on Z focuses on theory, and there is no corresponding tool that supports the automatic generation of Z specifications. The Z specification automatic generator described in this article helps with the writing of Z specifications and lowers the difficulty and cost of formal development, which is of considerable significance for the wider adoption of the Z language.

  6. An Automatic K-Point Grid Generation Scheme for Enhanced Efficiency and Accuracy in DFT Calculations

    Science.gov (United States)

    Mohr, Jennifer A.-F.; Shepherd, James J.; Alavi, Ali

    2013-03-01

    We seek to create an automatic k-point grid generation scheme for density functional theory (DFT) calculations that improves the efficiency and accuracy of the calculations and is suitable for use in high-throughput computations. Current automated k-point generation schemes often result in calculations with insufficient k-points, which reduces the reliability of the results, or with too many k-points, which can significantly increase the computational cost. By controlling a wider range of k-point grid densities for the Brillouin zone based upon factors such as conductivity and symmetry, a scalable k-point grid generation scheme can lower calculation runtimes and improve the accuracy of energy convergence.
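
    A common way to realise such a scheme is to derive the subdivisions along each reciprocal-lattice vector from a single target spacing; the snippet below is a generic illustration of that idea (the symmetry/conductivity adjustments described above are reduced to one assumed density knob), not the authors' algorithm.

        import numpy as np

        def kpoint_grid(lattice, kspacing=0.3):
            """Return (n1, n2, n3) k-point subdivisions for a target spacing (1/angstrom)
            along each reciprocal-lattice vector.  lattice is a 3x3 array whose rows are
            the real-space lattice vectors in angstrom; kspacing is the assumed density
            knob that a scheme like the one above would tighten for metals or loosen
            for large, insulating cells."""
            lattice = np.asarray(lattice, dtype=float)
            recip = 2.0 * np.pi * np.linalg.inv(lattice).T      # rows = reciprocal vectors
            lengths = np.linalg.norm(recip, axis=1)
            return tuple(int(max(1, np.ceil(l / kspacing))) for l in lengths)

        # Example: a 4-angstrom cubic cell -> |b_i| ~ 1.57 1/angstrom -> a 6x6x6 grid
        print(kpoint_grid(np.eye(3) * 4.0, kspacing=0.3))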

  7. Slow Dynamics Model of Compressed Air Energy Storage and Battery Storage Technologies for Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Krishnan, Venkat; Das, Trishna

    2016-05-01

    Increasing variable generation penetration and the consequent increase in short-term variability makes energy storage technologies look attractive, especially in the ancillary market for providing frequency regulation services. This paper presents slow dynamics model for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess the system frequency response and quantify the benefits from storage technologies in providing regulation service. The paper also represents the slow dynamics model of the power system integrated with storage technologies in a complete state space form. The storage technologies have been integrated to the IEEE 24 bus system with single area, and a comparative study of various solution strategies including transmission enhancement and combustion turbine have been performed in terms of generation cycling and frequency response performance metrics.

  8. Application of GA optimization for automatic generation control design in an interconnected power system

    Energy Technology Data Exchange (ETDEWEB)

    Golpira, H., E-mail: hemin.golpira@uok.ac.i [Department of Electrical and Computer Engineering, University of Kurdistan, Sanandaj, PO Box 416, Kurdistan (Iran, Islamic Republic of); Bevrani, H. [Department of Electrical and Computer Engineering, University of Kurdistan, Sanandaj, PO Box 416, Kurdistan (Iran, Islamic Republic of); Golpira, H. [Department of Industrial Engineering, Islamic Azad University, Sanandaj Branch, PO Box 618, Kurdistan (Iran, Islamic Republic of)

    2011-05-15

    Highlights: (1) A realistic model for automatic generation control (AGC) design is proposed. (2) The model considers GRC, speed-governor dead band, filters and time delay. (3) The model provides an accurate basis for digital simulations. -- Abstract: This paper addresses a realistic model for automatic generation control (AGC) design in an interconnected power system. The proposed scheme considers the generation rate constraint (GRC), dead band, and time delays imposed on the power system by the governor-turbine, filters, thermodynamic processes, and communication channels. The simplicity of structure and acceptable response of the well-known integral controller make it attractive for the power system AGC design problem. A genetic algorithm (GA) is used to compute the decentralized control parameters and achieve an optimum operating point. A 3-control-area power system is considered as a test system, and the closed-loop performance is examined in the presence of various constraint scenarios. It is shown that neglecting the above physical constraints, simultaneously or in part, leads to impractical and invalid results and may affect system security, reliability and integrity. Taking into account the advantages of the GA, besides considering a more complete dynamic model, provides a flexible and more realistic AGC system in comparison with existing conventional schemes.
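
    To make the GA step concrete, here is a deliberately toy sketch: a selection-and-mutation search over three integral gains against a crude, made-up frequency-deviation cost (nothing here models GRC, dead band or delays, and none of the values come from the paper; crossover is omitted for brevity).

        import numpy as np

        rng = np.random.default_rng(1)

        def cost(gains):
            """ITAE-like cost of a very crude, decoupled three-area toy model under a
            step load in area 1.  This is a placeholder for the full AGC simulation."""
            dt, T = 0.01, 20.0
            f = np.zeros(3)          # frequency deviations
            ace = np.zeros(3)        # integral of area control error
            load = np.array([0.01, 0.0, 0.0])
            J = 0.0
            for k in range(int(T / dt)):
                u = -gains * ace
                f += dt * (u - load - 0.05 * f)
                ace += dt * f
                J += dt * (k * dt) * np.abs(f).sum()
            return J

        def ga(pop_size=30, generations=40, lo=0.0, hi=2.0):
            pop = rng.uniform(lo, hi, size=(pop_size, 3))
            for _ in range(generations):
                fitness = np.array([cost(ind) for ind in pop])
                parents = pop[np.argsort(fitness)][: pop_size // 2]            # selection
                picks = parents[rng.integers(0, len(parents), pop_size - len(parents))]
                children = np.clip(picks + rng.normal(0.0, 0.05, picks.shape), lo, hi)  # mutation
                pop = np.vstack([parents, children])
            fitness = np.array([cost(ind) for ind in pop])
            return pop[np.argmin(fitness)]

        print("tuned integral gains (toy model):", np.round(ga(), 3))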

  9. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    Directory of Open Access Journals (Sweden)

    Luo Hanwu

    2016-01-01

    This paper presents a novel method to determine the initial lightning breakdown current by effectively combining the ATP and MATLAB simulation tools, with the aim of evaluating the lightning protection performance of a transmission line. First, an executable ATP simulation model is generated automatically from the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, through an interface program coded in MATLAB. The data are then extracted from the LIS files obtained by executing the ATP simulation model, and the occurrence of a transmission line breakdown can be determined from the relevant data in the LIS file. The lightning current amplitude is reduced when breakdown occurs, and increased otherwise; thus the initial lightning breakdown current of a transmission line with given parameters can be determined accurately by continuously changing the lightning current amplitude, which is realized by a loop computing algorithm coded in MATLAB. The method proposed in this paper generates the ATP simulation program automatically and facilitates the lightning protection performance assessment of transmission lines.
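
    The amplitude-search loop described above can be sketched independently of ATP; here the step that would build and run the ATP model and parse the LIS file is replaced by a hypothetical flashover criterion, so the snippet only illustrates the bisection-style control logic, not the real MATLAB/ATP interface.

        def breakdown_occurs(current_kA):
            """Stand-in for: generate the ATP deck, run it, parse the LIS file and
            decide whether the line flashed over.  The 150 kA threshold is a
            made-up surrogate for that whole simulation chain."""
            return current_kA >= 150.0

        def initial_breakdown_current(lo=1.0, hi=400.0, tol=0.5):
            """Shrink the [no-breakdown, breakdown] bracket until its width is below
            tol (kA); the midpoint is reported as the critical lightning current."""
            if not breakdown_occurs(hi):
                raise ValueError("upper bound never causes breakdown")
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if breakdown_occurs(mid):
                    hi = mid          # breakdown: reduce the amplitude
                else:
                    lo = mid          # no breakdown: increase the amplitude
            return 0.5 * (lo + hi)

        print(f"critical current ~ {initial_breakdown_current():.1f} kA")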

  10. Design and construction of a graphical interface for automatic generation of simulation code GEANT4

    International Nuclear Information System (INIS)

    This work was carried out in the context of an engineering studies final project, accomplished at the centre for nuclear sciences and technologies in Sidi Thabet. The project consists of designing and developing a system based on a graphical user interface which allows automatic code generation for simulations running on the GEANT4 engine. The system aims to make GEANT4 easier to use for scientists who are not necessarily experts in this engine, and to be usable in different areas: research, industry and education. The implementation of this project uses the Root library and several programming languages such as XML and XSL. (Author). 5 refs

  11. Reliable Mining of Automatically Generated Test Cases from Software Requirements Specification (SRS)

    CERN Document Server

    Raamesh, Lilly

    2010-01-01

    Writing requirements is a two-way process. In this paper we classify Functional Requirements (FR) and Non-Functional Requirements (NFR) statements from Software Requirements Specification (SRS) documents. These are systematically transformed into state charts considering all relevant information. The paper outlines how test cases can be generated automatically from these state charts: applying the states yields the different test cases as solutions to a planning problem. The test cases can be used for automated or manual software testing at the system level. The paper also presents a method for reducing the test suite by using mining methods, thereby facilitating mining and knowledge extraction from test cases.

  12. Automatic Generation of Mashups for Personalized Commerce in Digital TV by Semantic Reasoning

    Science.gov (United States)

    Blanco-Fernández, Yolanda; López-Nores, Martín; Pazos-Arias, José J.; Martín-Vicente, Manuela I.

    The evolution of information technologies is consolidating recommender systems as essential tools in e-commerce. To date, these systems have focused on discovering the items that best match the preferences, interests and needs of individual users, to end up listing those items by decreasing relevance in some menus. In this paper, we propose extending the current scope of recommender systems to better support trading activities, by automatically generating interactive applications that provide the users with personalized commercial functionalities related to the selected items. We explore this idea in the context of Digital TV advertising, with a system that brings together semantic reasoning techniques and new architectural solutions for web services and mashups.

  13. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.

  14. LanHEP - a package for automatic generation of Feynman rules in gauge models

    CERN Document Server

    Semenov, A Yu

    1996-01-01

    We consider the general problem of derivation of the Feynman rules for the matrix elements in momentum representation from the given Lagrangian in coordinate space invariant under the transformation of some gauge group. LanHEP package presented in this paper allows to define in a convenient way the gauge model Lagrangian in canonical form and then to generate automatically the Feynman rules that can be used in the following calculation of the physical processes by means of CompHEP package. The detailed description of LanHEP commands is given and several examples of LanHEP applications (QED, QCD, Standard Model in the t'Hooft-Feynman gauge) are presented.

  15. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2014-12-01

    Intelligent seamline selection for image mosaicking is an area of active research in the fields of massive data processing, computer vision, photogrammetry and remote sensing. In mosaicking applications for digital orthophoto maps (DOMs), the visual transition in mosaics is mainly caused by differences in positioning accuracy, image tone and relief displacement of high ground objects between overlapping DOMs. Among these three factors, relief displacement, which prevents the seamless mosaicking of images, is relatively the most difficult to address. To minimize visual discontinuities, many optimization algorithms have been studied for the automatic selection of seamlines that avoid high ground objects. Here, a new automatic seamline selection algorithm using a digital surface model (DSM) is proposed. The main idea of this algorithm is to guide a seamline toward a low area on the basis of the elevation information in a DSM. Given that the elevation of a DSM is not completely synchronous with a DOM, a new model, called the orthoimage elevation synchronous model (OESM), is derived and introduced. OESM can accurately reflect the elevation information for each DOM unit. Through morphological processing of the OESM data in the overlapping area, an initial path network is obtained for seamline selection. Subsequently, a cost function is defined on the basis of several measurements, and Dijkstra's algorithm is adopted to determine the least-cost path from the initial network. Finally, the proposed algorithm is employed for automatic seamline network construction; the effective mosaic polygon of each image is determined, and a seamless mosaic is generated. Experiments with three different datasets indicate that the proposed method meets the requirements for seamline network construction. In comparative trials, the generated seamlines pass through fewer ground objects with low time consumption.
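
    Once a cost has been assigned to every candidate cell in the overlap (here a toy elevation-like surface stands in for the OESM-derived cost), the least-cost seamline step is plain Dijkstra; the grid, costs and endpoints below are invented for illustration.

        import heapq

        def dijkstra_seamline(cost, start, goal):
            """Least-cost 4-connected path across a 2D cost grid (list of lists)."""
            rows, cols = len(cost), len(cost[0])
            dist = {start: cost[start[0]][start[1]]}
            prev = {}
            heap = [(dist[start], start)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if (r, c) == goal:
                    break
                if d > dist.get((r, c), float("inf")):
                    continue
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols:
                        nd = d + cost[nr][nc]
                        if nd < dist.get((nr, nc), float("inf")):
                            dist[(nr, nc)] = nd
                            prev[(nr, nc)] = (r, c)
                            heapq.heappush(heap, (nd, (nr, nc)))
            path, node = [], goal
            while node != start:
                path.append(node)
                node = prev[node]
            return [start] + path[::-1]

        # Toy "elevation" costs: the seamline goes around the high (building-like) block
        cost = [[1, 1, 1, 1, 1],
                [1, 9, 9, 9, 1],
                [1, 9, 9, 9, 1],
                [1, 1, 1, 1, 1]]
        print(dijkstra_seamline(cost, (0, 0), (3, 4)))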

  16. Automatic Generation Control in Multi Area Interconnected Power System by using HVDC Links

    Directory of Open Access Journals (Sweden)

    Yogendra Arya

    2012-01-01

    This paper investigates the effects of an HVDC link in parallel with an HVAC link on the automatic generation control (AGC) problem for a multi-area power system, taking system parameter variations into consideration. A fuzzy logic controller is proposed for a four-area power system interconnected via parallel HVAC/HVDC transmission links, which are also referred to as asynchronous tie-lines. The linear model of the HVAC/HVDC link is developed and the system responses to a sudden load change are studied. The simulation studies are carried out for a four-area interconnected thermal power system. A suitable solution to the automatic generation control problem of the four-area electrical power system is obtained by improving the dynamic performance of the power system under study. The robustness of the controller is also checked by varying parameters. Simulation results indicate that the scheme works well. The dynamic analyses have been performed with and without the HVDC link using a fuzzy logic controller in Matlab-Simulink. A comparison between the two cases is presented, and it is shown that the performance of the proposed scheme is superior in terms of overshoot and settling time.

  17. Semi-Automatic Mapping Generation for the DBpedia Information Extraction Framework

    Directory of Open Access Journals (Sweden)

    Arup Sarkar, Ujjal Marjit, Utpal Biswas

    2013-03-01

    DBpedia is one of the best-known live projects of the Semantic Web. It is like a mirror version of the Wikipedia site in the Semantic Web: it publishes the information collected from Wikipedia, but only the part that is relevant to the Semantic Web. Collecting information for the Semantic Web from Wikipedia is framed as the extraction of structured data. DBpedia normally does this by using a specially designed framework called the DBpedia Information Extraction Framework. This extraction framework does its work through the evaluation of similar properties from the DBpedia Ontology and the Wikipedia templates, a step known as DBpedia mapping. At present most of the mapping work is done completely manually. In this paper a new framework is introduced that addresses the issues related to template-to-ontology mapping. A semi-automatic mapping tool for the DBpedia project is proposed, with the capability of automatically generating suggestions for end users so that users can identify similar ontology and template properties. The proposed framework is useful because, after the selection of similar properties, the code necessary to maintain the mapping between ontology and template is generated automatically.
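
    The "automatic suggestion" part, proposing which ontology property a template attribute probably maps to, can be approximated with simple string similarity; the property lists below are tiny invented samples, and difflib's ratio stands in for whatever matching heuristic the real framework uses.

        from difflib import SequenceMatcher

        ontology_properties = ["birthPlace", "birthDate", "deathDate", "occupation"]
        template_properties = ["place_of_birth", "born", "died", "job"]

        def suggest_mappings(template_props, ontology_props, threshold=0.4):
            """For each template property, suggest the most similar ontology property."""
            suggestions = {}
            for tprop in template_props:
                scored = [(SequenceMatcher(None, tprop.lower().replace("_", ""),
                                           oprop.lower()).ratio(), oprop)
                          for oprop in ontology_props]
                score, best = max(scored)
                suggestions[tprop] = best if score >= threshold else None
            return suggestions

        for template, ontology in suggest_mappings(template_properties,
                                                   ontology_properties).items():
            print(f"{template:15s} -> {ontology}")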

  18. Hybrid Chaotic Particle Swarm Optimization Based Gains For Deregulated Automatic Generation Control

    Directory of Open Access Journals (Sweden)

    Cheshta Jain Dr. H. K. Verma

    2011-12-01

    Generation control is an important objective of power system operation. In modern power systems, traditional automatic generation control (AGC) is modified by incorporating the effect of bilateral contracts. This paper investigates the application of chaotic particle swarm optimization (CPSO) to the optimized operation of a restructured AGC system. To obtain optimum controller gains, the use of an adaptive inertia weight factor and constriction factors is proposed to improve the performance of the particle swarm optimization (PSO) algorithm. It is also observed that chaos mapping using a logistic map sequence increases the convergence rate of the traditional PSO algorithm. The hybrid method presented in this paper gives globally optimal controller gains with a significant improvement in convergence rate over the basic PSO algorithm. The effectiveness and efficiency of the proposed algorithm have been tested on a two-area restructured system.
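
    A bare-bones illustration of the two ingredients named above, a chaotic inertia weight driven by the logistic map and a constriction-style velocity update, is given below on a generic test function; it is not the restructured-AGC objective or the authors' parameter settings.

        import numpy as np

        rng = np.random.default_rng(0)

        def sphere(x):                       # stand-in objective (controller cost surface)
            return float(np.sum(x ** 2))

        def chaotic_pso(dim=4, particles=20, iters=100, c1=2.05, c2=2.05):
            pos = rng.uniform(-5, 5, (particles, dim))
            vel = np.zeros_like(pos)
            pbest = pos.copy()
            pbest_val = np.array([sphere(p) for p in pos])
            gbest = pbest[np.argmin(pbest_val)].copy()
            z = 0.7                                       # logistic-map state
            phi = c1 + c2
            chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))   # constriction factor
            for _ in range(iters):
                z = 4.0 * z * (1.0 - z)                   # logistic map: chaotic value in (0, 1)
                w = 0.4 + 0.5 * z                         # chaotic inertia weight in [0.4, 0.9]
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = chi * (w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos))
                pos = pos + vel
                vals = np.array([sphere(p) for p in pos])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, pbest_val.min()

        best, value = chaotic_pso()
        print("best gains:", np.round(best, 4), "cost:", value)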

  19. GRACE/SUSY Automatic Generation of Tree Amplitudes in the Minimal Supersymmetric Standard Model

    CERN Document Server

    Fujimoto, J

    2003-01-01

    GRACE/SUSY is a program package for generating the tree-level amplitude and evaluating the corresponding cross section of processes of the minimal supersymmetric extension of the standard model (MSSM). The Higgs potential adopted in the system, however, is assumed to have a more general form indicated by the two-Higgs-doublet model. This system is an extension of GRACE for the standard model(SM) of the electroweak and strong interactions. For a given MSSM process the Feynman graphs and amplitudes at tree-level are automatically created. The Monte-Carlo phase space integration by means of BASES gives the total and differential cross sections. When combined with SPRING, an event generator, the program package provides us with the simulation of the SUSY particle productions.

  20. Automatic deodorizing system for waste water from radioisotope facilities using an ozone generator

    International Nuclear Information System (INIS)

    We applied an ozone generator to sterilize and deodorize the waste water from radioisotope facilities. A small tank connected to the generator is placed outside the previously installed drainage facility, so as not to oxidize the other apparatus. The waste water is drained 1 m³ at a time from the tank of the drainage facility, treated with ozone and discharged to the sewer. All steps proceed automatically once the draining operation is started remotely from the office. The waste water was examined after ozone treatment for 0 (original), 0.5, 1.0, 1.5 and 2.0 h. For the original waste water, the sum of coliform groups varied with every repeated examination, probably depending on the colibacilli used in experiments; hydrogen sulfide, biochemical oxygen demand and the offensive odor increased with increasing coliform groups. The ozone treatment remarkably decreased hydrogen sulfide and the offensive odor, and decreased coliform groups when the original water was rich in coliforms. (author)

  1. An automatic granular structure generation and finite element analysis of heterogeneous semi-solid materials

    Science.gov (United States)

    Sharifi, Hamid; Larouche, Daniel

    2015-09-01

    The quality of cast metal products depends on the capacity of the semi-solid metal to sustain the stresses generated during casting. Predicting the evolution of these stresses with accuracy over the solidification interval should be highly helpful for avoiding the formation of defects like hot tearing. This task is however very difficult because of the heterogeneous nature of the material. In this paper, we propose to evaluate the mechanical behaviour of a metal during solidification using a mesh generation technique for the heterogeneous semi-solid material, suitable for finite element analysis at the microscopic level. This is done on a two-dimensional (2D) domain in which the granular structure of the solid phase is generated and surrounded by an intergranular and interdendritic liquid phase. Some basic solid grains are first constructed and projected onto the 2D domain with random orientations and scale factors. Depending on their orientation, the basic grains are combined to produce larger grains or separated by a liquid film. Different basic grain shapes can produce different granular structures of the mushy zone. As a result, using this automatic grain generation procedure, we can investigate the effect of grain shapes and sizes on the thermo-mechanical behaviour of the semi-solid material. The granular models are automatically converted into finite element meshes. The solid grains and the liquid phase are meshed properly using quadrilateral elements. This method has been used to simulate the microstructure of a binary aluminium-copper alloy (Al-5.8 wt% Cu) at a solid fraction of 0.92. Using the finite element method and the Mie-Grüneisen equation of state for the liquid phase, the transient mechanical behaviour of the mushy zone under tensile loading has been investigated. The stress distribution and the bridges formed during tensile loading have been identified.

  2. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling.

    Science.gov (United States)

    Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira

    2015-01-01

    Clinical and experimental studies involving human hearts can have certain limitations, so methods such as computer simulation can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors tailor PDE solutions and their corresponding program code to specific problems, and boundary condition or parameter changes in such customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication, which can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated and undergo dependency analysis. The result of the dependency analysis is then used to generate the program code, which is produced in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data. We conclude that the proposed program code generator can be used to
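
    The replacement idea, swap every partial-derivative term for its discrete stencil before emitting code, is easiest to see on a single equation. The sketch below applies it by hand to a 1D cable equation with FitzHugh-Nagumo (FHN)-style kinetics and a zero-flux boundary handled by mirrored ghost points, which is the kind of boundary treatment the generator automates; the parameter values are illustrative only and this is not the generator's output.

        import numpy as np

        # 1D cable: dV/dt = D * d2V/dx2 + I_ion(V, w).  The PDE term d2V/dx2 is replaced
        # by the stencil (V[i-1] - 2*V[i] + V[i+1]) / dx**2, with ghost points V[-1] = V[1]
        # and V[N] = V[N-2] implementing the zero-flux boundary condition.
        N, dx, dt, D = 200, 0.01, 0.005, 0.001
        a, b, eps = 0.1, 0.5, 0.01                     # illustrative FHN-style parameters

        V = np.zeros(N)
        w = np.zeros(N)
        V[:10] = 1.0                                   # stimulate the left end

        for step in range(4000):
            Vp = np.empty(N + 2)                       # padded array = ghost points
            Vp[1:-1] = V
            Vp[0], Vp[-1] = V[1], V[-2]                # mirror -> zero-flux boundaries
            lap = (Vp[:-2] - 2.0 * Vp[1:-1] + Vp[2:]) / dx**2
            I_ion = V * (V - a) * (1.0 - V) - w        # cubic excitable kinetics
            V = V + dt * (D * lap + I_ion)
            w = w + dt * eps * (V - b * w)

        print("V at cable midpoint after the run:", round(float(V[N // 2]), 3))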

  3. Automatic evaluation and data generation for analytical chemistry instrumental analysis exercises

    Directory of Open Access Journals (Sweden)

    Arsenio Muñoz de la Peña

    2014-01-01

    In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enable students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for the design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and whose results are evaluated immediately. The proposed system provides each student with a different set of experimental data, generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to provide students with a set of practical problems that complement their regular laboratory work, along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS), an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first exercise involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests, a comparison test of the mean and a paired t-test.
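
    A toy version of the random-but-constrained data generation plus automatic grading loop (nothing here reflects the Goodle GMS code itself; the constraint ranges and tolerance are made up) could look like this:

        import numpy as np

        rng = np.random.default_rng(42)

        def generate_exercise():
            """Random calibration data satisfying simple constraints: slope, intercept
            and noise level are drawn from fixed ranges so every student gets a
            solvable, but different, linear-calibration problem."""
            slope = rng.uniform(0.8, 1.5)             # sensitivity (signal per mg/L)
            intercept = rng.uniform(0.0, 0.1)
            noise_sd = rng.uniform(0.005, 0.02)
            conc = np.linspace(0.5, 10.0, 8)          # standard concentrations (mg/L)
            signal = slope * conc + intercept + rng.normal(0.0, noise_sd, conc.size)
            return conc, signal

        def grade(conc, signal, student_slope, student_intercept, tol=0.05):
            """Automatic evaluation: refit the data and compare with the student answer."""
            ref_slope, ref_intercept = np.polyfit(conc, signal, 1)
            ok = (abs(student_slope - ref_slope) <= tol * abs(ref_slope)
                  and abs(student_intercept - ref_intercept) <= tol + tol * abs(ref_intercept))
            return ok, ref_slope, ref_intercept

        conc, signal = generate_exercise()
        ok, m, b = grade(conc, signal, student_slope=1.2, student_intercept=0.05)
        print(f"reference fit: slope={m:.3f}, intercept={b:.3f}, answer accepted: {ok}")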

  4. Automatic generation of 3D motifs for classification of protein binding sites

    Directory of Open Access Journals (Sweden)

    Herzyk Pawel

    2007-08-01

    Abstract. Background: Since many of the new protein structures delivered by high-throughput processes do not have any known function, there is a need for structure-based prediction of protein function. Protein 3D structures can be clustered according to their fold or secondary structures to produce classes of some functional significance. A recent alternative has been to detect specific 3D motifs which are often associated with active sites. Unfortunately, there are very few known 3D motifs, which are usually the result of a manual process, compared to the number of sequential motifs already known. In this paper, we report a method to automatically generate 3D motifs of protein structure binding sites based on consensus atom positions and evaluate it on a set of adenine-based ligands. Results: Our new approach was validated by automatically generating 3D patterns for the main adenine-based ligands, i.e. AMP, ADP and ATP. Out of the 18 detected patterns, only one, the ADP4 pattern, is not associated with well-defined structural patterns. Moreover, most of the patterns could be classified as binding site 3D motifs. Literature research revealed that the ADP4 pattern actually corresponds to structural features which show complex evolutionary links between ligases and transferases. Therefore, all of the generated patterns prove to be meaningful. Each pattern was used to query all PDB proteins which bind either purine-based or guanine-based ligands, in order to evaluate the classification and annotation properties of the pattern. Overall, our 3D patterns matched 31% of proteins with adenine-based ligands and 95.5% of them were classified correctly. Conclusion: A new metric has been introduced allowing the classification of proteins according to the similarity of the atomic environment of binding sites, and a methodology has been developed to automatically produce 3D patterns from that classification. A study of proteins binding adenine-based ligands showed that

  5. An immunochromatographic biosensor combined with a water-swellable polymer for automatic signal generation or amplification.

    Science.gov (United States)

    Kim, Kahee; Joung, Hyou-Arm; Han, Gyeo-Re; Kim, Min-Gon

    2016-11-15

    An immunochromatographic assay (ICA) strip is one of the most widely used platforms in the field of point-of-care biosensors for the detection of various analytes in a simple, fast, and inexpensive manner. Currently, several approaches for sequential reactions in ICA platforms have improved their usability, sensitivity, and versatility. In this study, a new, simple, and low-cost approach using automatic sequential-reaction ICA strip is described. The automatic switching of a reagent pad from separation to attachment to the test membrane was achieved using a water-swellable polymer. The reagent pad was dried with an enzyme substrate for signal generation or with signal-enhancing materials. The strip design and system operation were confirmed by the characterization of the raw materials and flow analysis. We demonstrated the operation of the proposed sensor by using various chemical reaction-based assays, including metal-ion amplification, enzyme-colorimetric reaction, and enzyme-catalyzed chemiluminescence. Furthermore, by employing C-reactive protein as a model, we successfully demonstrated that the new water-swellable polymer-based ICA sensor can be utilized to detect biologically relevant analytes in human serum. PMID:27203463

  6. HELAC-Onia: an automatic matrix element generator for heavy quarkonium physics

    CERN Document Server

    Shao, Hua-Sheng

    2013-01-01

    By the virtues of the Dyson-Schwinger equations, we upgrade the published code HELAC to be capable of calculating the heavy quarkonium helicity amplitudes in the framework of NRQCD factorization, which we dub HELAC-Onia. We rewrote the original HELAC to make the new program able to calculate helicity amplitudes of multi P-wave quarkonium state production at hadron colliders and electron-positron colliders by including new P-wave off-shell currents. Therefore, besides the high efficiencies in computation of multi-leg processes within the Standard Model, HELAC-Onia is also sufficiently numerically stable in dealing with P-wave quarkonia (e.g. $h_{c,b}$, $\chi_{c,b}$) and P-wave color-octet intermediate states. To the best of our knowledge, it is the first general-purpose automatic quarkonium matrix-element generator based on recursion relations on the market.

  7. HELAC-Onia: An automatic matrix element generator for heavy quarkonium physics

    Science.gov (United States)

    Shao, Hua-Sheng

    2013-11-01

    By the virtues of the Dyson-Schwinger equations, we upgrade the published code HELAC to be capable of calculating the heavy quarkonium helicity amplitudes in the framework of NRQCD factorization, which we dub HELAC-Onia. We rewrote the original HELAC to make the new program able to calculate helicity amplitudes of multi P-wave quarkonium state production at hadron colliders and electron-positron colliders by including new P-wave off-shell currents. Therefore, besides the high efficiencies in computation of multi-leg processes within the Standard Model, HELAC-Onia is also sufficiently numerically stable in dealing with P-wave quarkonia (e.g. h_{c,b}, χ_{c,b}) and P-wave color-octet intermediate states. To the best of our knowledge, it is the first general-purpose automatic quarkonium matrix-element generator based on recursion relations on the market.

  8. Automatic Generation of Building Models with Levels of Detail 1-3

    Science.gov (United States)

    Nguatem, W.; Drauschke, M.; Mayer, H.

    2016-06-01

    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  9. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    Science.gov (United States)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying the Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as the limitations (such as low efficiency).

  10. A Development Process for Enterprise Information Systems Based on Automatic Generation of the Components

    Directory of Open Access Journals (Sweden)

    Adrian ALEXANDRESCU

    2008-01-01

    This paper contains some ideas concerning the development of Enterprise Information Systems (EIS). It combines known elements from the software engineering domain with original elements which the author has conceived and experimented with. The author has pursued two major objectives: to use a simple description for the concepts of an EIS, and to achieve a rapid and reliable EIS development process with minimal cost. The first goal was achieved by defining models which describe the conceptual elements of the EIS domain: entities, events, actions, states and attribute-domains. The second goal is based on a predefined architectural model for the EIS, on predefined analysis and design models for the elements of the domain and, finally, on the automatic generation of the system components. The proposed methods do not depend on a particular programming language or database management system; they are general and may be applied to any combination of such technologies.

  11. Utilizing Linked Open Data Sources for Automatic Generation of Semantic Metadata

    Science.gov (United States)

    Nummiaho, Antti; Vainikainen, Sari; Melin, Magnus

    In this paper we present an application that can be used to automatically generate semantic metadata for tags given as simple keywords. The application that we have implemented in Java programming language creates the semantic metadata by linking the tags to concepts in different semantic knowledge bases (CrunchBase, DBpedia, Freebase, KOKO, Opencyc, Umbel and/or WordNet). The steps that our application takes in doing so include detecting possible languages, finding spelling suggestions and finding meanings from amongst the proper nouns and common nouns separately. Currently, our application supports English, Finnish and Swedish words, but other languages could be included easily if the required lexical tools (spellcheckers, etc.) are available. The created semantic metadata can be of great use in, e.g., finding and combining similar contents, creating recommendations and targeting advertisements.

  12. Automatic Generation of Theorems and Proofs on Enumerating Consecutive-Wilf classes

    CERN Document Server

    Baxter, Andrew; Zeilberger, Doron

    2011-01-01

    This article, dedicated to Herbert Saul Wilf on the occasion of his forthcoming 80th birthday, describes two complementary approaches to enumeration, the "positive" and the "negative", each with its advantages and disadvantages. Both approaches are amenable to automation, and when applied to the currently active subarea, initiated in 2003 by Sergi Elizalde and Marc Noy, of enumerating consecutive-Wilf classes (i.e. consecutive pattern avoidance) in permutations, were successfully pursued by DZ's two current PhD students, Andrew Baxter and Brian Nakamura. The Maple packages SERGI and ELIZALDE, implementing the algorithms, enable the computer to "do research" by deriving, "all by itself", functional equations for the generating functions that enable polynomial-time enumeration for any set of patterns. In the case of ELIZALDE (the "negative" approach), these functional equations can sometimes be simplified (automatically!) and imply "explicit" formulas that previously were derived by humans using ad-hoc method...

  13. Automatic Generation of 3D Caricatures Based on Artistic Deformation Styles.

    Science.gov (United States)

    Clarke, Lyndsey; Chen, Min; Mora, Benjamin

    2011-06-01

    Caricatures are a form of humorous visual art, usually created by skilled artists for the intention of amusement and entertainment. In this paper, we present a novel approach for automatic generation of digital caricatures from facial photographs, which capture artistic deformation styles from hand-drawn caricatures. We introduced a pseudo stress-strain model to encode the parameters of an artistic deformation style using "virtual" physical and material properties. We have also developed a software system for performing the caricaturistic deformation in 3D which eliminates the undesirable artifacts in 2D caricaturization. We employed a Multilevel Free-Form Deformation (MFFD) technique to optimize a 3D head model reconstructed from an input facial photograph, and for controlling the caricaturistic deformation. Our results demonstrated the effectiveness and usability of the proposed approach, which allows ordinary users to apply the captured and stored deformation styles to a variety of facial photographs.

  14. A QUANTIFIER-ELIMINATION BASED HEURISTIC FOR AUTOMATICALLY GENERATING INDUCTIVE ASSERTIONS FOR PROGRAMS

    Institute of Scientific and Technical Information of China (English)

    Deepak KAPUR

    2006-01-01

    A method using quantifier-elimination is proposed for automatically generating program invariants/inductive assertions. Given a program, inductive assertions, hypothesized as parameterized formulas in a theory, are associated with program locations. Parameters in inductive assertions are discovered by generating constraints on parameters by ensuring that an inductive assertion is indeed preserved by all execution paths leading to the associated location of the program. The method can be used to discover loop invariants-properties of variables that remain invariant at the entry of a loop. The parameterized formula can be successively refined by considering execution paths one by one; heuristics can be developed for determining the order in which the paths are considered. Initialization of program variables as well as the precondition and postcondition, if available, can also be used to further refine the hypothesized invariant. The method does not depend on the availability of the precondition and postcondition of a program. Constraints on parameters generated in this way are solved for possible values of parameters. If no solution is possible, this means that an invariant of the hypothesized form is not likely to exist for the loop under the assumptions/approximations made to generate the associated verification condition. Otherwise, if the parametric constraints are solvable, then under certain conditions on methods for generating these constraints, the strongest possible invariant of the hypothesized form can be generated from most general solutions of the parametric constraints. The approach is illustrated using the logical languages of conjunction of polynomial equations as well as Presburger arithmetic for expressing assertions.
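
    The flavour of the constraint-generation step can be shown on a tiny example. The sketch below hypothesizes a parameterized invariant x = a*y^2 + b*y + c for the loop "x := x + y; y := y + 1" starting from x = y = 0, collects the constraints that initialization and preservation impose on a, b, c, and solves them; it uses plain coefficient matching in SymPy rather than genuine quantifier elimination, and the loop and invariant shape are invented for illustration.

        import sympy as sp

        x, y, a, b, c = sp.symbols('x y a b c')

        # Hypothesized parameterized invariant for the loop  "x := x + y; y := y + 1"
        inv = sp.Eq(x, a*y**2 + b*y + c)

        # Preservation: applying the loop body must keep the invariant true
        preserved = inv.lhs.subs(x, x + y) - inv.rhs.subs(y, y + 1)
        # Use the invariant itself to eliminate x before comparing coefficients
        preserved = preserved.subs(x, a*y**2 + b*y + c)

        # Initialization: x = 0, y = 0 must satisfy the invariant
        init = (inv.lhs - inv.rhs).subs({x: 0, y: 0})

        # Require the residual polynomial in y to vanish identically
        constraints = sp.Poly(sp.expand(preserved), y).coeffs() + [init]
        solution = sp.solve(constraints, [a, b, c], dict=True)
        print(solution)   # -> [{a: 1/2, b: -1/2, c: 0}], i.e. the invariant x = (y**2 - y)/2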

  15. Automatic generation of endocardial surface meshes with 1-to-1 correspondence from cine-MR images

    Science.gov (United States)

    Su, Yi; Teo, S.-K.; Lim, C. W.; Zhong, L.; Tan, R. S.

    2015-03-01

    In this work, we develop an automatic method to generate a set of 4D 1-to-1 corresponding surface meshes of the left ventricle (LV) endocardial surface which are motion registered over the whole cardiac cycle. These 4D meshes have 1- to-1 point correspondence over the entire set, and is suitable for advanced computational processing, such as shape analysis, motion analysis and finite element modelling. The inputs to the method are the set of 3D LV endocardial surface meshes of the different frames/phases of the cardiac cycle. Each of these meshes is reconstructed independently from border-delineated MR images and they have no correspondence in terms of number of vertices/points and mesh connectivity. To generate point correspondence, the first frame of the LV mesh model is used as a template to be matched to the shape of the meshes in the subsequent phases. There are two stages in the mesh correspondence process: (1) a coarse matching phase, and (2) a fine matching phase. In the coarse matching phase, an initial rough matching between the template and the target is achieved using a radial basis function (RBF) morphing process. The feature points on the template and target meshes are automatically identified using a 16-segment nomenclature of the LV. In the fine matching phase, a progressive mesh projection process is used to conform the rough estimate to fit the exact shape of the target. In addition, an optimization-based smoothing process is used to achieve superior mesh quality and continuous point motion.
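
    The coarse-matching idea, using corresponding landmark points to build a radial-basis-function warp and applying it to every template vertex, can be sketched with SciPy's RBFInterpolator; the landmark coordinates below are invented, and the real pipeline's 16-segment landmark detection and fine projection steps are omitted.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Invented corresponding landmarks (template frame -> target frame), in mm
        template_landmarks = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.],
                                       [0., 0., 10.], [10., 10., 10.]])
        target_landmarks = np.array([[0.5, 0., 0.], [11., 0.5, 0.], [0., 10.5, 0.5],
                                     [0., 0.5, 11.], [11., 11., 11.5]])

        # One RBF interpolant for the displacement field, driven by the landmark pairs
        warp = RBFInterpolator(template_landmarks,
                               target_landmarks - template_landmarks,
                               kernel="thin_plate_spline")

        # Apply the warp to every vertex of the template mesh (a few dummy vertices here)
        template_vertices = np.array([[5., 5., 5.], [2., 8., 1.], [9., 3., 7.]])
        morphed_vertices = template_vertices + warp(template_vertices)
        print(np.round(morphed_vertices, 2))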

  16. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic, and they may also look rather complicated, in the sense that it may not be easy to name the rule by which the sequence is generated; nevertheless, a rule which generates the sequence does exist. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.

  17. Development of an Immersive Environment to Aid in Automatic Mesh Generation LDRD Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pavlakos, Constantine J.

    1998-10-01

    The purpose of this work was to explore the use of immersive technologies, such as those used in synthetic environments (commonly referred to as virtual reality, or VR), in enhancing the mesh-generation process for 3-dimensional (3D) engineering models. This work was motivated by the fact that automatic mesh generation systems are still imperfect: meshing algorithms, particularly in 3D, are sometimes unable to construct a mesh to completion, or they may produce anomalies or undesirable complexities in the resulting mesh. It is important that analysts and meshing code developers be able to study their meshes effectively in order to understand their topology and quality. We have implemented prototype capabilities that enable such exploration of meshes in a highly visual and intuitive manner. Since many applications are making use of increasingly large meshes, we have also investigated approaches to handling large meshes while maintaining interactive response. Ideally, it would also be possible to interact with the meshing process, allowing interactive feedback which corrects problems and/or somehow enables proper completion of the meshing process. We have implemented some functionality towards this end; in doing so, we have explored software architectures that support such an interactive meshing process. This work has incorporated existing technologies developed at Sandia National Laboratories, including the CUBIT mesh generation system and the EIGEN/VR (previously known as MUSE) and FLIGHT systems, which allow applications to make use of immersive technologies and advanced human-computer interfaces.

  18. Tra-la-Lyrics 2.0: Automatic Generation of Song Lyrics on a Semantic Domain

    Science.gov (United States)

    Gonçalo Oliveira, Hugo

    2015-12-01

    Tra-la-Lyrics is a system that generates song lyrics automatically. In its original version, the main focus was to produce text where stresses matched the rhythm of given melodies. There were no concerns on whether the text made sense or if the selected words shared some kind of semantic association. In this article, we describe the development of a new version of Tra-la-Lyrics, where text is generated on a semantic domain, defined by one or more seed words. This effort involved the integration of the original rhythm module of Tra-la-Lyrics in PoeTryMe, a generic platform that generates poetry with semantically coherent sentences. To measure our progress, the rhythm, the rhymes, and the semantic coherence in lyrics produced by the original Tra-la-Lyrics were analysed and compared with lyrics produced by the new instantiation of this system, dubbed Tra-la-Lyrics 2.0. The analysis showed that, in the lyrics by the new system, words have higher semantic association among them and with the given seeds, while the rhythm is still matched and rhymes are present. The previous analysis was complemented with a crowdsourced evaluation, where contributors answered a survey about relevant features of lyrics produced by the previous and the current versions of Tra-la-Lyrics. Though tight, the survey results confirmed the improvements of the lyrics by Tra-la-Lyrics 2.0.

  19. Atlas-Based Automatic Generation of Subject-Specific Finite Element Tongue Meshes.

    Science.gov (United States)

    Bijar, Ahmad; Rohan, Pierre-Yves; Perrier, Pascal; Payan, Yohan

    2016-01-01

    Generation of subject-specific 3D finite element (FE) models requires the processing of numerous medical images in order to precisely extract geometrical information about subject-specific anatomy. This processing remains extremely challenging. To overcome this difficulty, we present an automatic atlas-based method that generates subject-specific FE meshes via a 3D registration guided by Magnetic Resonance images. The method extracts a 3D transformation by registering the atlas' volume image to the subject's one, and establishes a one-to-one correspondence between the two volumes. The 3D transformation field deforms the atlas' mesh to generate the subject-specific FE mesh. To preserve the quality of the subject-specific mesh, a diffeomorphic non-rigid registration based on B-spline free-form deformations is used, which guarantees a non-folding and one-to-one transformation. Two evaluations of the method are provided. First, a publicly available CT database is used to assess the capability to accurately capture the complexity of each subject-specific lung geometry. Second, FE tongue meshes are generated for two healthy volunteers and two patients suffering from tongue cancer using MR images. It is shown that the method generates an appropriate representation of the subject-specific geometry while preserving the quality of the FE meshes for subsequent FE analysis. To demonstrate the importance of our method in a clinical context, a subject-specific mesh is used to simulate the tongue's biomechanical response to the activation of an important tongue muscle, before and after cancer surgery. PMID:26577253

  1. The NetVISA automatic association tool. Next generation software testing and performance under realistic conditions.

    Science.gov (United States)

    Le Bras, Ronan; Arora, Nimar; Kushida, Noriyuki; Tomuta, Elena; Kebede, Fekadu; Feitio, Paulino

    2016-04-01

    The CTBTO's International Data Centre is in the process of developing the next generation software to perform the automatic association step. The NetVISA software uses a Bayesian approach with a forward physical model using probabilistic representations of the propagation, station capabilities, background seismicity, noise detection statistics, and coda phase statistics. The software has been in development for a few years and is now reaching the stage where it is being tested in a realistic operational context. An interactive module has been developed where the NetVISA automatic events that are in addition to the Global Association (GA) results are presented to the analysts. We report on a series of tests where the results are examined and evaluated by seasoned analysts. Consistent with the statistics previously reported (Arora et al., 2013), the first test shows that the software is able to enhance analysis work by providing additional event hypotheses for consideration by analysts. A test on a three-day data set was performed and showed that the system found 42 additional real events out of 116 examined, including 6 that pass the criterion for the Reviewed Event Bulletin of the IDC. The software was functional in a realistic, real-time mode during the occurrence of the fourth nuclear test claimed by the Democratic People's Republic of Korea on January 6th, 2016. Confirming a previous statistical observation, the software found more associated stations (51, including 35 primary stations) than GA (36, including 26 primary stations) for this event. Reference: Arora, N. S., Russell, S., and Sudderth, E., Bulletin of the Seismological Society of America (BSSA), April 2013, vol. 103, no. 2A, pp. 709-729.

  2. A Simulink Library of cryogenic components to automatically generate control schemes for large Cryorefrigerators

    Science.gov (United States)

    Bonne, François; Alamir, Mazen; Hoa, Christine; Bonnay, Patrick; Bon-Mardion, Michel; Monteiro, Lionel

    2015-12-01

    In this article, we present a new Simulink library of cryogenic components (such as valves, phase separators, mixers and heat exchangers) that can be assembled to build model-based control schemes. Every component is described by its algebraic or differential equations and can be assembled with others to build the dynamical model of a complete refrigerator or of a subpart of it. The obtained model can be used to automatically design advanced model-based control schemes. It can also be used to design a model-based PI controller. Advanced control schemes aim to replace classical approaches designed from user experience, which are usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in future fusion reactors such as those in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). The paper gives the example of the generation of the dynamical model of the 400W@1.8K refrigerator and shows how to build a constrained model predictive controller for it. Experimental results based on this scheme will be given. This work is being supported by the French national research agency (ANR) through the ANR-13-SEED-0005 CRYOGREEN program.

  3. A review of metaphase chromosome image selection techniques for automatic karyotype generation.

    Science.gov (United States)

    Arora, Tanvi; Dhir, Renu

    2016-08-01

    The karyotype is analyzed to detect genetic abnormalities. It is generated by arranging the chromosomes after extracting them from metaphase chromosome images. The chromosomes are non-rigid bodies that contain the genetic information of an individual. The metaphase chromosome image spread contains the chromosomes, but these chromosomes are not distinct bodies: they may be individual, touching one another, bent, or even overlapping and thus forming a cluster of chromosomes. The extraction of chromosomes from touching and overlapping spreads is a very tedious process. The segmentation of a randomly chosen metaphase chromosome image may therefore not give correct and accurate results, so before a metaphase chromosome image is taken up for analysis, it must be analyzed for the orientation of the chromosomes it contains. The various reported methods for metaphase chromosome image selection for automatic karyotype generation are compared in this paper. After analysis, it is concluded that each metaphase chromosome image selection method has its advantages and disadvantages.

  4. Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs

    Directory of Open Access Journals (Sweden)

    Khaled Jerbi

    2012-01-01

    Full Text Available In this paper, we introduce the Reconfigurable Video Coding (RVC standard based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called Cal Actor Language (CAL. CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of the CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still image decoders are summarized. We show that the obtained results can largely satisfy the real time constraints for an embedded design on FPGA as we obtain a throughput of 73 FPS for MPEG 4 decoder and 34 FPS for coding and decoding process of the LAR coder using a video of CIF image size. This work resolves the main limitation of hardware generation from CAL designs.

  5. Automatic generation and verification of railway interlocking control tables using FSM and NuSMV

    Directory of Open Access Journals (Sweden)

    Mohammad B. YAZDI

    2009-01-01

    Full Text Available Due to their important role in providing safe conditions for train movements, railway interlocking systems are considered as safety critical systems. The reliability, safety and integrity of these systems, relies on reliability and integrity of all stages in their lifecycle including the design, verification, manufacture, test, operation and maintenance.In this paper, the Automatic generation and verification of interlocking control tables, as one of the most important stages in the interlocking design process has been focused on, by the safety critical research group in the School of Railway Engineering, SRE. Three different subsystems including a graphical signalling layout planner, a Control table generator and a Control table verifier, have been introduced. Using NuSMV model checker, the control table verifier analyses the contents of control table besides the safe train movement conditions and checks for any conflicting settings in the table. This includes settings for conflicting routes, signals, points and also settings for route isolation and single and multiple overlap situations. The latest two settings, as route isolation and multiple overlap situations are from new outcomes of the work comparing to works represented on the subject recently.

  7. Wind power integration into the automatic generation control of power systems with large-scale wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Altin, Müfit;

    2014-01-01

    Transmission system operators have an increased interest in the active participation of wind power plants (WPP) in the power balance control of power systems with large wind power penetration. The emphasis in this study is on the integration of WPPs into the automatic generation control (AGC...

  8. Automatic Generation of Building Mapping Using Digital, Vertical and Aerial High Resolution Photographs and LIDAR Point Clouds

    Science.gov (United States)

    Barragán, W.; Campos, A.; Sanchez, G.

    2016-06-01

    The objective of this research is the automatic generation of buildings in areas of interest. The research was carried out using high resolution vertical aerial photographs and the LIDAR point cloud, through radiometric and geometric digital processing. The research methodology uses known building heights, various segmentation algorithms and spectral band combinations. The overall effectiveness of the algorithm is 97.2% with the test data.

  9. Use of an Automatic Problem Generator to Teach Basic Skills in a First Course in Assembly Language.

    Science.gov (United States)

    Benander, Alan; And Others

    1989-01-01

    Discussion of the use of computer aided instruction (CAI) and instructional software in college level courses highlights an automatic problem generator, AUTOGEN, that was written for computer science students learning assembly language. Design of the software is explained, and student responses are reported. (nine references) (LRW)

  10. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    Science.gov (United States)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems

  11. Iqpc 2015 Track: Evaluation of Automatically Generated 2d Footprints from Urban LIDAR Data

    Science.gov (United States)

    Truong-Hong, L.; Laefer, D.; Bisheng, Y.; Ronggang, H.; Jianping, L.

    2015-08-01

    Over the last decade, several automatic approaches have been proposed to extract and reconstruct 2D building footprints and 2D road profiles from ALS data, satellite images, and/or aerial imagery. Since these methods have to date been applied to various data sets and assessed through a variety of different quality indicators and ground truths, comparing the relative effectiveness of the techniques and identifying their strengths and shortcomings has not been possible in a systematic way. This contest, as part of IQPC15, was designed to determine the pros and cons of submitted approaches in generating the 2D footprint of a city region from ALS data. Specifically, participants were asked to submit 2D footprints (building outlines and road profiles) derived from a highly dense ALS dataset (approximately 225 points/m²) covering 1 km² of Dublin, Ireland's city centre. The proposed evaluation strategies were designed to measure not only the capacity of each method to detect and reconstruct 2D buildings and roads but also the quality of the reconstructed building and road models in terms of shape similarity and positional accuracy.
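
    One simple way to score shape similarity and positional accuracy of a submitted footprint against a reference is an intersection-over-union measure on rasterized footprints; this is a hedged sketch, not necessarily one of the contest's actual quality indicators.

        import numpy as np

        def footprint_iou(mask_a, mask_b):
            # Both masks are boolean rasters over the same extent and resolution.
            inter = np.logical_and(mask_a, mask_b).sum()
            union = np.logical_or(mask_a, mask_b).sum()
            return inter / union if union else 1.0

        ref = np.zeros((100, 100), bool); ref[20:60, 20:60] = True    # reference footprint
        sub = np.zeros((100, 100), bool); sub[25:65, 25:65] = True    # submitted footprint
        print(round(footprint_iou(ref, sub), 2))                      # 0.62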

  12. Automatic generation control application with craziness based particle swarm optimization in a thermal power system

    Energy Technology Data Exchange (ETDEWEB)

    Gozde, Haluk; Taplamacioglu, M. Cengiz [Gazi University, Faculty of Engineering, Department of Electrical and Electronics Engineering, 06750 Maltepe, Ankara (Turkey)

    2011-01-15

    In this study, a novel gain scheduling Proportional-plus-Integral (PI) control strategy is suggested for automatic generation control (AGC) of a two-area thermal power system with governor dead-band nonlinearity. In this strategy, the control problem is formulated as an optimization problem, and two different cost functions with tuned weight coefficients are derived in order to improve convergence to the global optimum. One of the cost functions is derived from the frequency deviations of the control areas and the tie-line power changes. The other one also includes the time-dependent rates of change of these deviations. The weight coefficients of the cost functions are optimized in the same way as the controller gains. The craziness-based particle swarm optimization (CRAZYPSO) algorithm is preferred to optimize the parameters because of its superior convergence. At the end of the study, the performance of the control system is compared, through transient response analysis, with the performance obtained with the classical integral of the squared error (ISE) and integral of time-weighted squared error (ITSE) cost functions. The results show that the obtained optimal PI controller improves the dynamic performance of the power system, as expected from the literature. (author)
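
    The tuning loop can be sketched as a plain particle swarm search over the PI gains against a simulation-based cost. In this hedged example the "craziness" operator and the two-area AGC model are omitted, and agc_cost is an invented placeholder standing in for an ISE/ITSE-style cost.

        import numpy as np

        def agc_cost(gains):
            # Placeholder cost: distance from an assumed "ideal" (Kp, Ki) pair.
            return float(np.sum((gains - np.array([0.5, 0.3])) ** 2))

        def pso(cost, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            rng = np.random.default_rng(0)
            x = rng.uniform(0.0, 1.0, (n_particles, dim))        # particle positions (Kp, Ki)
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_cost = np.array([cost(p) for p in x])
            gbest = pbest[np.argmin(pbest_cost)]
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = x + v
                costs = np.array([cost(p) for p in x])
                better = costs < pbest_cost
                pbest[better], pbest_cost[better] = x[better], costs[better]
                gbest = pbest[np.argmin(pbest_cost)]
            return gbest

        print("tuned PI gains:", pso(agc_cost))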

  13. Matching and Clustering: Two Steps Towards Automatic Model Generation in Computer Vision

    OpenAIRE

    Gros, Patrick

    1993-01-01

    In this paper, we present a general framework for a system of automatic modelling and recognition of 3D polyhedral objects. Such a system has many applications in robotics: recognition, localization, grasping, etc. Here we focus on one main aspect of the system: when many images of one 3D object are taken from different unknown viewpoints, how do we recognize those which represent the same aspect of the object? Briefly, it is possible to determine automatically i...

  14. Development on quantitative safety analysis method of accident scenario. The automatic scenario generator development for event sequence construction of accident

    International Nuclear Information System (INIS)

    This study intends to develop a more sophisticated tool that will advance the current event tree method used in all PSA, and to focus on non-catastrophic events, specifically non-core-melt sequence scenarios not included in an ordinary PSA. In a non-catastrophic event PSA, it is necessary to consider various end states and failure combinations for the purpose of multiple scenario construction. The analysis workload must therefore be reduced, and an automated method and tool are required. A scenario generator was developed that can automatically handle scenario construction logic and generate the enormous number of sequences logically identified by state-of-the-art methodology. To realize the scenario generator as a technical tool, a simulation model combining AI techniques and a graphical interface was introduced. The AI simulation model in this study was verified for the feasibility of its capability to evaluate actual systems. In this feasibility study, a spurious SI signal was selected to test the model's applicability. As a result, the basic capability of the scenario generator could be demonstrated and important scenarios were generated. The human interface with a system and its operation, as well as time-dependent factors and their quantification in scenario modeling, were added using the human scenario generator concept. Then the feasibility of the improved scenario generator was tested for actual use. Automatic scenario generation with a certain level of credibility was achieved by this study. (author)
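
    A toy sketch of the underlying combinatorial task (not the AI-based generator itself; the system names and the end-state logic below are invented for illustration): every success/failure combination of the mitigating systems is expanded into a sequence and mapped to an end state.

        from itertools import product

        systems = ["reactor trip", "safety injection", "aux feedwater"]

        def end_state(status):
            # Illustrative end-state logic only, not an actual plant model.
            if all(status.values()):
                return "safe shutdown"
            if not status["reactor trip"]:
                return "ATWS sequence"
            return "degraded sequence"

        # Enumerate all 2^3 = 8 success/failure combinations as candidate sequences.
        for outcome in product([True, False], repeat=len(systems)):
            status = dict(zip(systems, outcome))
            print(status, "->", end_state(status))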

  15. Embedded Platform for Automatic Testing and Optimizing of FPGA Based Cryptographic True Random Number Generators

    Directory of Open Access Journals (Sweden)

    M. Varchola

    2009-12-01

    This paper deals with an evaluation platform for cryptographic True Random Number Generators (TRNGs) based on the hardware implementation of statistical tests for FPGAs. It was developed in order to provide an automatic tool that helps to speed up the TRNG design process and can provide new insights into TRNG behavior, as will be shown on a particular example in the paper. It enables testing of the statistical properties of various TRNG designs under various working conditions on the fly. Moreover, the tests are suitable to be embedded into cryptographic hardware products in order to recognize TRNG output of weak quality and thus increase robustness and reliability. The tests are fully compatible with the FIPS 140 standard and are implemented in VHDL as an IP core for vendor-independent FPGAs. A recent Flash-based Actel Fusion FPGA was chosen for preliminary experiments. The Actel version of the tests possesses an interface to Actel's CoreMP7 softcore processor, which is fully compatible with the industry-standard ARM7TDMI. Moreover, an identical test suite was implemented for the Xilinx Virtex 2 and 5 in order to compare the performance of the proposed solution with an already published one based on the same FPGAs. Clock frequencies 25% and 65% greater, respectively, were achieved while consuming almost the same Xilinx FPGA resources. On top of that, the proposed FIPS 140 architecture is capable of processing one random bit per clock cycle, which results in a throughput of 311.5 Mbps on the Virtex 5 FPGA.
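
    As a software illustration of the kind of test the platform implements in VHDL, the FIPS 140-2 monobit test checks that a 20,000-bit block contains strictly between 9,725 and 10,275 ones; the sketch below shows the test itself, not the hardware IP core.

        import random

        def fips_monobit(bits):
            # FIPS 140-2 monobit test on a 20,000-bit block.
            assert len(bits) == 20000
            ones = sum(bits)
            return 9725 < ones < 10275

        sample = [random.getrandbits(1) for _ in range(20000)]
        print("monobit test passed:", fips_monobit(sample))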

  16. Field Robotics in Sports: Automatic Generation of guidance Lines for Automatic Grass Cutting, Striping and Pitch Marking of Football Playing Fields

    Directory of Open Access Journals (Sweden)

    Ole Green

    2011-03-01

    Progress is constantly being made and new applications are constantly emerging in the area of field robotics. In this paper, a promising application of field robotics to football playing fields is introduced. An algorithmic approach is presented for generating the way points required to guide a GPS-based field robot through a football playing field to automatically carry out periodic tasks such as cutting the grass, pitch and line marking, and lawn striping. The manual operation of these tasks requires very skilful personnel able to work long hours with very high concentration for the pitch to be compatible with the standards of the Federation Internationale de Football Association (FIFA). On the other hand, a GPS-guided vehicle or robot with three implements (grass mower, lawn striping roller and track marking illustrator) is capable of working 24 h a day, in most weather and in harsh soil conditions, without loss of quality. The proposed approach for the automatic operation of football playing fields requires no or very limited human intervention; it therefore saves numerous working hours and frees workers to focus on other tasks. An economic feasibility study showed that the proposed method is economically superior to the current manual practices.
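
    A simplified sketch of way-point generation for the striping passes (not the paper's algorithm; the rectangular pitch, a pass spacing equal to the implement width, and the omission of headland turns and GPS coordinate transformation are all assumptions).

        def striping_waypoints(length_m, width_m, implement_width_m):
            waypoints = []
            x, direction = implement_width_m / 2.0, 1
            while x <= width_m - implement_width_m / 2.0 + 1e-9:
                start_y, end_y = (0.0, length_m) if direction > 0 else (length_m, 0.0)
                waypoints.append(((x, start_y), (x, end_y)))   # one pass: entry and exit points
                x += implement_width_m
                direction *= -1                                # alternate direction for striping
            return waypoints

        # A 105 m x 68 m pitch striped with a 2 m roller needs 34 passes.
        print(len(striping_waypoints(105.0, 68.0, 2.0)))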

  17. Automatic test pattern generation for stuck-at and delay faults in combinational circuits

    International Nuclear Information System (INIS)

    The present studies propose automatic test pattern generation (ATG) algorithms for combinational circuits. These ATG algorithms are realized in two ATG programs: one for stuck-at faults and the other for delay faults. In order to accelerate the ATG process, the two programs share a common feature (a search method based on the concept of the degree of freedom), whereas only the ATG program for delay faults utilizes a 19-valued logic, a type of composite-valued logic. This difference between the two programs results from the difference in the target fault. Accelerating the ATG process is indispensable for improving the ATG algorithms. This acceleration is mainly achieved by reducing the number of unnecessary backtrackings, enabling earlier detection of conflicts, and shortening the computation time of implication. For this purpose, the developed ATG programs include a new search method based on the concept of the degree of freedom (DF). The DF concept, computed directly and easily from system descriptions such as the types of gates and their interconnections, is the criterion for deciding which of the several alternative logic values required along each path promises to be the most effective in accelerating and improving the ATG process. The DF concept is used to develop and improve both ATG programs, for stuck-at and for delay faults in combinational circuits. In addition to improving the ATG process, reducing the number of test patterns is indispensable for testing delay faults, because the number of delay faults grows rapidly as the circuit size increases. In order to improve the compactness of the test set, a 19-valued logic is derived. Unlike other TG logic systems, the 19-valued logic is used to generate robustly hazard-free test patterns. This is achieved by using the basic 5-valued logic, proposed in this work, where the transition with no hazard is
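
    A toy illustration of stuck-at test generation by exhaustive search on a tiny invented circuit (the degree-of-freedom search and the 19-valued logic described above are far more sophisticated): a pattern detects a fault if the faulty and fault-free outputs differ.

        from itertools import product

        def circuit(a, b, c):
            # Example circuit: y = (a AND b) OR c
            return (a & b) | c

        def circuit_with_fault(a, b, c, stuck_node, stuck_val):
            w = a & b
            if stuck_node == "w":          # internal AND-gate output stuck at stuck_val
                w = stuck_val
            return w | c

        def find_test(stuck_node, stuck_val):
            for a, b, c in product([0, 1], repeat=3):
                if circuit(a, b, c) != circuit_with_fault(a, b, c, stuck_node, stuck_val):
                    return (a, b, c)
            return None                    # fault is undetectable

        print(find_test("w", 0))           # (1, 1, 0) detects w stuck-at-0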

  18. Generating 3D anatomically detailed models of the retina from OCT data sets: implications for computational modelling

    Science.gov (United States)

    Shalbaf, Farzaneh; Dokos, Socrates; Lovell, Nigel H.; Turuwhenua, Jason; Vaghefi, Ehsan

    2015-12-01

    Retinal prostheses have been proposed to restore vision for those suffering from retinal pathologies that mainly affect the photoreceptor layer but keep the inner retina intact. Prior to costly and risky experimental studies, computational modelling of the retina can help to optimize the device parameters and enhance the outcomes. Here, we developed an anatomically detailed computational model of the retina based on OCT data sets. The consecutive OCT images of an individual were segmented to provide a 3D representation of the retina in the form of finite elements. Thereafter, the electrical properties of the retina were modelled by solving partial differential equations on the 3D mesh. Different electrode configurations, that is, bipolar and hexapolar configurations, were implemented and the results were compared with previous computational and experimental studies. Furthermore, possible effects of the curvature of the retinal layers on current steering through the retina were proposed and linked to clinical observations.

  19. Automatic Generation of Analytic Equations for Vibrational and Rovibrational Constants from Fourth-Order Vibrational Perturbation Theory

    Science.gov (United States)

    Matthews, Devin A.; Gong, Justin Z.; Stanton, John F.

    2014-06-01

    The derivation of analytic expressions for vibrational and rovibrational constants, for example the anharmonicity constants χij and the vibration-rotation interaction constants α^B_r, from second-order vibrational perturbation theory (VPT2) can be accomplished with pen and paper and some practice. However, the corresponding quantities from fourth-order perturbation theory (VPT4) are considerably more complex, with the only known derivations by hand extensively using many layers of complicated intermediates and for rotational quantities requiring specialization to orthorhombic cases or the form of Watson's reduced Hamiltonian. We present an automatic computer program for generating these expressions with full generality based on the adaptation of an existing numerical program based on the sum-over-states representation of the energy to a computer algebra context. The measures taken to produce well-simplified and factored expressions in an efficient manner are discussed, as well as the framework for automatically checking the correctness of the generated equations.

  20. Minimal-resource computer program for automatic generation of ocean wave ray or crest diagrams in shoaling waters

    Science.gov (United States)

    Poole, L. R.; Lecroy, S. R.; Morris, W. D.

    1977-01-01

    A computer program for studying linear ocean wave refraction is described. The program features random-access modular bathymetry data storage. Three bottom topography approximation techniques are available in the program which provide varying degrees of bathymetry data smoothing. Refraction diagrams are generated automatically and can be displayed graphically in three forms: Ray patterns with specified uniform deepwater ray density, ray patterns with controlled nearshore ray density, or crest patterns constructed by using a cubic polynomial to approximate crest segments between adjacent rays.
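
    Two ingredients any such ray-tracing code needs are the linear dispersion relation, which gives the phase speed at a given depth, and Snell's law, which relates ray angles between depths. The sketch below illustrates both and is not the NASA program itself; the wave period and depths are example values.

        import math

        def phase_speed(period_s, depth_m, g=9.81):
            # Solve the linear dispersion relation w^2 = g*k*tanh(k*h) for k by fixed-point iteration.
            w = 2.0 * math.pi / period_s
            k = w * w / g                          # deep-water initial guess
            for _ in range(50):
                k = w * w / (g * math.tanh(k * depth_m))
            return w / k

        def refracted_angle(theta1_rad, c1, c2):
            # Snell's law for wave rays: sin(theta2)/c2 = sin(theta1)/c1.
            return math.asin(min(1.0, math.sin(theta1_rad) * c2 / c1))

        c_deep = phase_speed(10.0, 200.0)          # ~15.6 m/s for a 10 s wave in deep water
        c_shallow = phase_speed(10.0, 10.0)        # ~9.2 m/s in 10 m of water
        print(math.degrees(refracted_angle(math.radians(30.0), c_deep, c_shallow)))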

  1. An Automatic Generation System for Examination ID Cards

    Institute of Scientific and Technical Information of China (English)

    杨姝

    2001-01-01

    The paper describes the features of the automatic generation system for examination ID cards, the strategy of random scheduling, and the main algorithm for creating examination ID cards.

  2. Development of user interface to support automatic program generation of nuclear power plant analysis by module-based simulation system

    International Nuclear Information System (INIS)

    The Module-based Simulation System (MSS) has been developed to realize a new software work environment enabling versatile dynamic simulation of complex nuclear power systems in a flexible way. The MSS makes full use of modern software technology to replace a large fraction of the human software work in complex, large-scale program development by computer automation. The fundamental methods utilized in MSS, and a developmental study on the human interface system SESS-1 that helps users generate integrated simulation programs automatically, are summarized as follows: (1) To enhance usability and 'communality' of program resources, the basic mathematical models of common usage in nuclear power plant analysis are programmed as 'modules' and stored in a module library. The information on the usage of individual modules is stored in a module database with easy registration, update and retrieval by the interactive management system. (2) Target simulation programs and the input/output files are automatically generated with simple block-wise languages by a precompiler system for module integration. (3) The working time for program development and analysis in an example study of an LMFBR plant thermal-hydraulic transient analysis was demonstrated to be remarkably shortened with the introduction of the interface system SESS-1, developed as an automatic program generation environment. (author)

  3. AUTOMATIC GENERATION OF SQL TEST CASE SETS

    Institute of Scientific and Technical Information of China (English)

    丁祥武; 张钦; 韩朱忠

    2012-01-01

    Writing SQL statements is an important part of testing a database management system. Automatic generation of SQL statements can effectively reduce the workload of the tester, yet at present there is almost no automated tool that directly generates SQL statements. By simulating the derivation process of grammar productions, SQL statements that conform to the grammar are generated from the SQL grammar and used as test cases. In this paper we study the automated process that goes from BNF files representing the grammar to generated SQL test case sets. The process has several stages: each non-terminal of the SQL grammar is converted to a corresponding parse function, and the set of all these parse functions forms the rules library; the productions of the grammar are traversed to generate SQL test cases automatically; weight arrays combined with random numbers increase the flexibility of test case generation; and a maximum number of calls per non-terminal is used to terminate the generation of SQL test cases. With the tool prototype introduced, SQL test cases conforming to the SQL grammar can be derived.
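
    A small sketch of the approach (the grammar, weights and depth limit below are invented for illustration, not the paper's rule library): each non-terminal becomes a generation rule, weighted random choices pick among its productions, and a maximum recursion depth terminates generation.

        import random

        GRAMMAR = {
            "query": [(["SELECT ", "cols", " FROM t", "where"], 1.0)],
            "cols":  [(["c1"], 0.5), (["c1, c2"], 0.5)],
            "where": [([""], 0.4), ([" WHERE ", "cond"], 0.6)],
            "cond":  [(["c1 > 0"], 0.7), (["c1 > 0 AND ", "cond"], 0.3)],
        }

        def generate(symbol, depth=0, max_depth=6):
            if symbol not in GRAMMAR:
                return symbol                          # terminal: emit as-is
            if depth > max_depth:
                chosen = GRAMMAR[symbol][0][0]         # at the depth limit, fall back to the
                                                       # first (assumed non-recursive) production
            else:
                productions, weights = zip(*GRAMMAR[symbol])
                chosen = random.choices(productions, weights=weights)[0]
            return "".join(generate(s, depth + 1, max_depth) for s in chosen)

        random.seed(1)
        print(generate("query"))                       # e.g. "SELECT c1, c2 FROM t WHERE c1 > 0"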

  4. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Background: Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results: We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and from 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70-90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion: We successfully created a
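
    A hedged sketch of the kind of affiliation-string parsing described (the authors' actual rules are more elaborate, and the example string below is invented): the institution is taken from the first comma-separated field and the country from the last, after stripping e-mail addresses.

        import re

        def parse_affiliation(affiliation):
            affiliation = re.sub(r"\S+@\S+", "", affiliation)            # drop e-mail addresses
            parts = [p.strip(" .") for p in affiliation.split(",") if p.strip(" .")]
            if not parts:
                return None, None
            return parts[0], parts[-1]                                   # (institution, country)

        aff = "Department of Epidemiology, Example University, Atlanta, GA 30333, USA. someone@example.edu"
        print(parse_affiliation(aff))    # ('Department of Epidemiology', 'USA')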

  5. ARP: Automatic rapid processing for the generation of problem dependent SAS2H/ORIGEN-s cross section libraries

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Hermann, O.W.; Bowman, S.M.; Parks, C.V.

    1998-04-01

    In this report, a methodology is described which serves as an alternative to the SAS2H path of the SCALE system for generating cross sections for point-depletion calculations with the ORIGEN-S code. ARP, Automatic Rapid Processing, is an algorithm that allows the generation of cross-section libraries suitable for the ORIGEN-S code by interpolation over pregenerated SAS2H libraries. The interpolations are carried out over the following variables: burnup, enrichment, and water density. The adequacy of the methodology is evaluated by comparing measured and computed spent fuel isotopic compositions for PWR and BWR systems.
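
    A minimal sketch of the kind of interpolation ARP performs (the tabulated numbers are invented, and only one interpolation variable is shown; ARP interpolates over burnup, enrichment and water density):

        import numpy as np

        # Pregenerated one-group cross sections tabulated against burnup (illustrative values only).
        burnup_grid = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # GWd/MTU
        sigma_grid  = np.array([48.0, 45.5, 43.2, 41.1, 39.3])  # barns

        def interpolate_sigma(burnup):
            # Linear interpolation between the two bracketing library points.
            return float(np.interp(burnup, burnup_grid, sigma_grid))

        print(interpolate_sigma(25.0))   # 42.15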

  6. Automatic generation of groundwater model hydrostratigraphy from AEM resistivity and boreholes

    DEFF Research Database (Denmark)

    Marker, Pernille Aabye; Foged, N.; Christiansen, A. V.;

    2014-01-01

    and heterogeneity, which spatially scarce borehole lithology data may overlook, are well resolved in AEM surveys. This study presents a semi-automatic sequential hydrogeophysical inversion method for the integration of AEM and borehole data into regional groundwater models in sedimentary areas, where sand/clay

  7. Model-based automatic 3d building model generation by integrating LiDAR and aerial images

    Science.gov (United States)

    Habib, A.; Kwak, E.; Al-Durgham, M.

    2011-12-01

    Accurate, detailed, and up-to-date 3D building models are important for several applications such as telecommunication network planning, urban planning, and military simulation. Existing building reconstruction approaches can be classified according to the data sources they use (i.e., single versus multi-sensor approaches), the processing strategy (i.e., data-driven, model-driven, or hybrid), or the amount of user interaction (i.e., manual, semi-automatic, or fully automated). While it is obvious that 3D building models are important components for many applications, there is still a lack of economical and automatic techniques for generating them while taking advantage of the available multi-sensory data and combining processing strategies. In this research, an automatic methodology for building modelling by integrating multiple images and LiDAR data is proposed. The objective of this research work is to establish a framework for automatic building generation by integrating data-driven and model-driven approaches while combining the advantages of image and LiDAR datasets.

  8. MAGE (M-file/Mif Automatic GEnerator): A graphical interface tool for automatic generation of Object Oriented Micromagnetic Framework configuration files and Matlab scripts for results analysis

    Science.gov (United States)

    Chęciński, Jakub; Frankowski, Marek

    2016-10-01

    We present a tool for fully automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows for fast, error-proof and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions that complement it with magnetoresistance and spin-transfer-torque calculations, as well as selection of local magnetization data for output. Our software allows for the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized excitation application. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI allowing for automated creation of Matlab scripts suitable for the analysis of such data with Fourier and wavelet transforms as well as user-defined operations.

  9. Visual analytics for automatic quality assessment of user-generated content on the English Wikipedia

    OpenAIRE

    David Strohmaier; Lindstaedt, Stefanie; Veas, Eduardo; Di Sciascio, Cecilia

    2015-01-01

    Related work has shown that it is possible to automatically measure the quality of Wikipedia articles. Yet, despite all these quality measures, it is difficult to identify what would improve an article. Therefore this master thesis is about an interactive graphic tool made for ranking and editing Wikipedia articles with support from quality measures. The contribution of this work is twofold: i) The Quality Analyzer that allows for creating new ...

  10. Presentation: Visual analytics for automatic quality assessment of user-generated content on the English Wikipedia

    OpenAIRE

    David Strohmaier

    2015-01-01

    Related work has shown that it is possible to automatically measure the quality of Wikipedia articles. Yet, despite all these quality measures, it is difficult to identify what would improve an article. Therefore this master thesis is about an interactive graphic tool made for ranking and editing Wikipedia articles with support from quality measures. The contribution of this work is twofold: i) The Quality Analyzer that allows for creating new quality metrics and co...

  11. Algorithm of automatic generation of technology process and process relations of automotive wiring harnesses

    Institute of Scientific and Technical Information of China (English)

    XU Benzhu; ZHU Jiman; LIU Xiaoping

    2012-01-01

    Identifying each process and its constraint relations from complex wiring harness drawings quickly and accurately is the basis for formulating process routes. Based on knowledge of automotive wiring harnesses and the characteristics of wiring harness components, we established a model of the wiring harness graph. We then investigate an algorithm for identifying technology processes automatically, and finally we describe the relationships between processes by introducing a constraint matrix, in order to lay a good foundation for harness process planning and production scheduling.

  12. Automatic Generation of Machine Emulators: Efficient Synthesis of Robust Virtual Machines for Legacy Software Migration

    DEFF Research Database (Denmark)

    Franz, Michael; Gal, Andreas; Probst, Christian

    2006-01-01

    As older mainframe architectures become obsolete, the corresponding legacy software is increasingly executed via platform emulators running on top of more modern commodity hardware. These emulators are virtual machines that often include a combination of interpreters and just-in-time compilers. Implementing interpreters and compilers for each combination of emulated and target platform independently of each other is a redundant and error-prone task. We describe an alternative approach that automatically synthesizes specialized virtual-machine interpreters and just-in-time compilers, which

  13. Automatic Generation of Structural Building Descriptions from 3D Point Cloud Scans

    DEFF Research Database (Denmark)

    Ochmann, Sebastian; Vock, Richard; Wessel, Raoul;

    2013-01-01

    We present a new method for automatic semantic structuring of 3D point clouds representing buildings. In contrast to existing approaches which either target the outside appearance like the facade structure or rather low-level geometric structures, we focus on the building's interior, using indoor scans to derive high-level architectural entities like rooms and doors. Starting with a registered 3D point cloud, we probabilistically model the affiliation of each measured point to a certain room in the building. We solve the resulting clustering problem using an iterative algorithm that relies...

  14. A Prototype Expert System for Automatic Generation of Image Processing Programs

    Institute of Scientific and Technical Information of China (English)

    宋茂强; Felix Grimm; et al.

    1991-01-01

    A prototype expert system for generating image processing programs using the subroutine package SPIDER is described in this paper.Based on an interactive dialog,the system can generate a complete application program using SPIDER routines.

  15. Automatic selection of informative sentences: The sentences that can generate multiple choice questions

    Directory of Open Access Journals (Sweden)

    Mukta Majumder

    2014-12-01

    Traditional education cannot meet the expectations and requirements of a Smart City; it requires more advanced forms such as active learning, ICT education, etc. Multiple choice questions (MCQs) play an important role in educational assessment and active learning, which has a key role in Smart City education. MCQs are effective for assessing the understanding of well-defined concepts. Only a fraction of all the sentences of a text contain well-defined concepts or information that can be asked as an MCQ. These informative sentences must be identified first in order to prepare multiple choice questions manually or automatically. In this paper we propose a technique for the automatic identification of such informative sentences that can act as the basis of MCQs. The technique is based on parse structure similarity. A reference set of parse structures is compiled with the help of existing MCQs. The parse structure of a new sentence is compared with the reference structures, and if similarity is found then the sentence is considered a potential candidate. Next, a rule-based post-processing module works on these potential candidates to select the final set of informative sentences. The proposed approach is tested in the sports domain, where many MCQs are readily available for preparing the reference set of structures. The quality of the sentences selected by the system is evaluated manually. The experimental results show that the proposed technique is quite promising.

  16. A Solar Automatic Tracking System that Generates Power for Lighting Greenhouses

    Directory of Open Access Journals (Sweden)

    Qi-Xun Zhang

    2015-07-01

    In this study we design and test a novel solar tracking generation system. Moreover, we show that this system could be successfully used as an advanced solar power source to generate power in greenhouses. The system was developed after taking into consideration the geography, climate, and other environmental factors of northeast China. The experimental design of this study included the following steps: (i) the novel solar tracking generation system was measured and its performance was analyzed; (ii) the system configuration and operation principles were evaluated; (iii) the performance of this power generation system and the solar irradiance were measured according to local time and conditions; (iv) the main factors affecting system performance were analyzed; and (v) the amount of power generated by the solar tracking system was compared with the power generated by fixed solar panels. The experimental results indicated that, compared to the power generated by fixed solar panels, the solar tracking system generated about 20% to 25% more power. In addition, the performance of this novel power generating system was found to be closely associated with solar irradiance. Therefore, the solar tracking system provides a new approach to power generation in greenhouses.

  17. A Solar Automatic Tracking System that Generates Power for Lighting Greenhouses

    OpenAIRE

    Qi-Xun Zhang; Hai-Ye Yu; Qiu-Yuan Zhang; Zhong-Yuan Zhang; Cheng-Hui Shao; Di Yang

    2015-01-01

    In this study we design and test a novel solar tracking generation system. Moreover, we show that this system could be successfully used as an advanced solar power source to generate power in greenhouses. The system was developed after taking into consideration the geography, climate, and other environmental factors of northeast China. The experimental design of this study included the following steps: (i) the novel solar tracking generation system was measured, and its performance was analyz...

  18. Uav Aerial Survey: Accuracy Estimation for Automatically Generated Dense Digital Surface Model and Orthophoto Plan

    Science.gov (United States)

    Altyntsev, M. A.; Arbuzov, S. A.; Popov, R. A.; Tsoi, G. V.; Gromov, M. O.

    2016-06-01

    A dense digital surface model is one of the products generated from UAV aerial survey data. Today more and more specialized software packages are supplied with modules for generating such models. The procedure for dense digital model generation can be completely or partly automated. Due to the lack of a reliable criterion for accuracy estimation, it is rather complicated to judge the validity of such models. One such criterion can be mobile laser scanning data, used as a source for detailed accuracy estimation of dense digital surface model generation. These data may also be used to estimate the accuracy of digital orthophoto plans created from UAV aerial survey data. The results of accuracy estimation for both kinds of products are presented in the paper.

  19. An Approach to Automatic Generation of Test Cases Based on Use Cases in the Requirements Phase

    Directory of Open Access Journals (Sweden)

    U.Senthil Kumaran

    2011-01-01

    The main aim of this paper is to generate test cases from use cases. In real-time scenarios we face several issues such as inaccuracy, ambiguity, and incompleteness in requirements, because the requirements are not properly updated after various change requests. This reduces the quality of test cases. To overcome these problems we develop a solution which generates test cases in the early stages of the system development life cycle and captures the maximum number of requirements. As requirements are best captured by use cases, our focus lies on generating test cases from use case diagrams.

  20. Automatic Generation of Overlays and Offset Values Based on Visiting Vehicle Telemetry and RWS Visuals

    Science.gov (United States)

    Dunne, Matthew J.

    2011-01-01

    The development of computer software as a tool to generate visual displays has led to an overall expansion of automated computer generated images in the aerospace industry. These visual overlays are generated by combining raw data with pre-existing data on the object or objects being analyzed on the screen. The National Aeronautics and Space Administration (NASA) uses this computer software to generate on-screen overlays when a Visiting Vehicle (VV) is berthing with the International Space Station (ISS). In order for Mission Control Center personnel to be a contributing factor in the VV berthing process, computer software similar to that on the ISS must be readily available on the ground to be used for analysis. In addition, this software must perform engineering calculations and save data for further analysis.

  1. Automatic generation of virtual worlds from architectural and mechanical CAD models

    International Nuclear Information System (INIS)

    Accelerator projects like the XFEL or the planned linear collider TESLA involve extensive architectural and mechanical design work, resulting in a variety of CAD models. The CAD models will show different parts of the project, such as the different accelerator components or parts of the building complexes, and they will be created and stored by different groups in different formats. A complete CAD model of the accelerator and its buildings is thus difficult to obtain and would also be extremely large and difficult to handle. This thesis describes the design and prototype development of a tool which automatically creates virtual worlds from different CAD models. The tool will enable the user to select a required area for visualization on a map, and then create a 3D model of the selected area which can be displayed in a web browser. The thesis first discusses the system requirements and provides some background on data visualization. Then, it introduces the system architecture, the algorithms and the technologies used, and finally demonstrates the capabilities of the system using two case studies. (orig.)

  2. Effective System for Automatic Bundle Block Adjustment and Ortho Image Generation from Multi Sensor Satellite Imagery

    Science.gov (United States)

    Akilan, A.; Nagasubramanian, V.; Chaudhry, A.; Reddy, D. Rajesh; Sudheer Reddy, D.; Usha Devi, R.; Tirupati, T.; Radhadevi, P. V.; Varadan, G.

    2014-11-01

    Block adjustment is a technique for large area mapping using images obtained from different remote sensing satellites. The challenge in this process is to handle a huge number of satellite images from different sources, with different resolutions and accuracies, at the system level. This paper explains a system with various tools and techniques to effectively handle the end-to-end chain in large area mapping and production, with a good level of automation and provisions for intuitive analysis of the final results in 3D and 2D environments. In addition, the interface for using open-source ortho and DEM references, viz. ETM, SRTM etc., and for displaying ESRI shapes for the image footprints is explained. Rigorous theory, mathematical modelling, workflow automation and sophisticated software engineering tools are included to ensure high photogrammetric accuracy and productivity. Major building blocks such as the Georeferencing, Geo-capturing and Geo-Modelling tools included in the block adjustment solution are explained in this paper. To provide an optimal bundle block adjustment solution with high-precision results, the system has been optimized in many stages to fully exploit the hardware resources. The robustness of the system is ensured by handling failures in the automatic procedure and saving the process state at every stage for subsequent restoration from the point of interruption. The results obtained from various stages of the system are presented in the paper.

  3. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    Science.gov (United States)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of the wall distance, the element size on the wall, and the element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and a smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
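
    The abstract describes the element-size field only qualitatively (a function of wall distance, wall element size and lumen-center element size, with wall sizes tied to local flow rate and diameter). The sketch below is a minimal, hypothetical version of such a size function; the quadratic blending law, the boundary-layer estimate and all parameter values are assumptions, not the authors' formulas.

```python
# Hypothetical sketch of a non-uniform element-size field for an airway branch.
# The blending law and the Reynolds-number-based wall sizing are assumptions,
# not the exact expressions used in the paper.
import numpy as np

def element_size(wall_distance, radius, h_wall, h_center):
    """Element size at a point: h_wall on the wall, h_center at the lumen
    center, with a smooth (quadratic) transition across the boundary layer."""
    t = np.clip(wall_distance / radius, 0.0, 1.0)   # normalized wall distance
    return h_wall + (h_center - h_wall) * t**2

def wall_size(flow_rate, radius, nu=1.5e-5, cells_per_bl=5):
    """Crude wall element size from the laminar boundary-layer scale
    delta ~ r / sqrt(Re), resolved with a fixed number of cells."""
    velocity = flow_rate / (np.pi * radius**2)
    reynolds = max(velocity * 2 * radius / nu, 1.0)
    delta = radius / np.sqrt(reynolds)
    return delta / cells_per_bl

# Example: one generation of a model trachea (flow in m^3/s, radius in m)
h_w = wall_size(flow_rate=5e-4, radius=0.009)
print(element_size(np.array([0.0, 0.003, 0.009]), 0.009, h_w, 10 * h_w))
```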

  4. On the Automatic Generation of Plans for Life Cycle Assembly Processes

    Energy Technology Data Exchange (ETDEWEB)

    CALTON,TERRI L.

    2000-01-01

    Designing products for easy assembly and disassembly during their entire life cycles for purposes including product assembly, product upgrade, product servicing and repair, and product disposal is a process that involves many disciplines. In addition, finding the best solution often involves considering the design as a whole and its intended life cycle. Different goals and manufacturing plan selection criteria, as compared to initial assembly, require re-visiting significant fundamental assumptions and methods that underlie current assembly planning techniques. Previous work in this area has been limited to either academic studies of issues in assembly planning or to applied studies of life cycle assembly processes that give no attention to automatic planning. It is believed that merging these two areas will result in a much greater ability to design for, optimize, and analyze life cycle assembly processes. The study of assembly planning is at the very heart of manufacturing research facilities and academic engineering institutions; and, in recent years a number of significant advances in the field of assembly planning have been made. These advances have ranged from the development of automated assembly planning systems, such as Sandia's Automated Assembly Analysis System Archimedes 3.0©, to the startling revolution in microprocessors and computer-controlled production tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), flexible manufacturing systems (FMS), and computer-integrated manufacturing (CIM). These results have kindled considerable interest in the study of algorithms for life cycle related assembly processes and have blossomed into a field of intense interest. The intent of this manuscript is to bring together the fundamental results in this area, so that the unifying principles and underlying concepts of algorithm design may more easily be implemented in practice.

  5. Automatic Proxy Generation And Load-Balancing-Based Dynamic Choice Of Services

    Directory of Open Access Journals (Sweden)

    Jarosław Dabrowski

    2012-01-01

    Full Text Available The paper addresses the issues of invoking services from within workflows, which are becoming an increasingly popular paradigm of distributed programming. The main idea of our research is to develop a facility which enables load balancing between the available services and their instances. The system consists of three main modules: a proxy generator for a specific service according to its interface type, a proxy that redirects requests to a concrete instance of the service, and a load-balancer (LB) that chooses the least loaded virtual machine (VM) hosting a single service instance. The proxy generator was implemented as a bean (in compliance with the EJB standard) which generates a proxy according to the WSDL service interface description using an XSLT engine and then deploys it on a GlassFish application server using the GlassFish API; the proxy is a BPEL module and the load-balancer is a stateful Web Service.
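
    To make the load-balancing step concrete, the following sketch shows the core decision the LB performs: picking the service instance on the least-loaded VM. It is a generic illustration in Python rather than the paper's EJB/BPEL implementation, and all class and field names are hypothetical.

```python
# Minimal sketch of the load-balancing decision described above: pick the
# least-loaded VM hosting an instance of the requested service. Class and
# field names are hypothetical, not taken from the paper's code.
from dataclasses import dataclass

@dataclass
class ServiceInstance:
    endpoint: str   # URL the proxy will forward the request to
    vm_load: float  # current load of the hosting VM (e.g. CPU fraction)

class LoadBalancer:
    def __init__(self):
        self.instances = {}          # service name -> list[ServiceInstance]

    def register(self, service, instance):
        self.instances.setdefault(service, []).append(instance)

    def choose(self, service):
        """Return the endpoint of the instance on the least-loaded VM."""
        candidates = self.instances.get(service, [])
        if not candidates:
            raise LookupError(f"no instance registered for {service}")
        return min(candidates, key=lambda i: i.vm_load).endpoint

lb = LoadBalancer()
lb.register("invoice", ServiceInstance("http://vm1:8080/invoice", 0.72))
lb.register("invoice", ServiceInstance("http://vm2:8080/invoice", 0.31))
print(lb.choose("invoice"))          # -> http://vm2:8080/invoice
```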

  6. Automatic generation of analogy questions for student assessment: an Ontology-based approach

    Directory of Open Access Journals (Sweden)

    Bijan Parsia

    2012-08-01

    Full Text Available Different computational models for generating analogies of the form “A is to B as C is to D” have been proposed over the past 35 years. However, analogy generation is a challenging problem that requires further research. In this article, we present a new approach for generating analogies in Multiple Choice Question (MCQ) format that can be used for student assessment. We propose to use existing high-quality ontologies as a source for mining analogies, avoiding the hand-coding of concepts required by previous methods. We also describe the characteristics of a good analogy question and report on experiments carried out to evaluate the new approach.

  7. AUTOMATIC GENERATION CONTROL OF TWO AREA POWER SYSTEM WITH AND WITHOUT SMES: FROM CONVENTIONAL TO MODERN AND INTELLIGENT CONTROL

    Directory of Open Access Journals (Sweden)

    SATHANS,

    2011-05-01

    Full Text Available This work proposes a Fuzzy Gain Scheduled Proportional-Integral (FGSPI) controller for automatic generation control (AGC) of a two-equal-area interconnected thermal power system including a Superconducting Magnetic Energy Storage (SMES) unit in both areas. The reheat-effect nonlinearity of the steam turbine is also considered in this study. Simulation results show that the proposed control scheme with SMES is very effective in damping the frequency and tie-line power oscillations due to load perturbations in one of the areas. To further improve the performance of the controller, a new formulation of the area control error (ACE) is also adopted. The proposed FGSPI controller is compared against a conventional PI controller and a state-feedback LQR controller using the settling times, overshoots and undershoots of the power and frequency deviations as performance indices, and the performance of the proposed controller is found to be better than the other two. Simulations have been performed using Matlab®.
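
    The abstract mentions a new ACE formulation without stating it; for orientation, the sketch below computes only the conventional tie-line-bias ACE that AGC schemes of this kind build on. The numerical values are illustrative.

```python
# Conventional tie-line-bias area control error (ACE); the modified ACE used
# in the paper is not given in the abstract, so only the standard form is shown.
def area_control_error(delta_p_tie, delta_f, bias):
    """ACE_i = dP_tie,i + B_i * df_i, with dP_tie the tie-line power deviation
    (pu MW), df the frequency deviation (Hz) and B the frequency-bias factor
    (pu MW/Hz)."""
    return delta_p_tie + bias * delta_f

# Example: area exporting 0.01 pu more than scheduled while frequency is 0.02 Hz low
print(area_control_error(delta_p_tie=0.01, delta_f=-0.02, bias=0.425))
```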

  8. Towards the Automatic Generation of Programmed Foreign-Language Instructional Materials.

    Science.gov (United States)

    Van Campen, Joseph A.

    The purpose of this report is to describe a set of programs which either perform certain tasks useful in the generation of programed foreign-language instructional material or facilitate the writing of such task-oriented programs by other researchers. The programs described are these: (1) a PDP-10 assembly language program for the selection from a…

  9. Automatic code generation enables nuclear gradient computations for fully internally contracted multireference theory

    CERN Document Server

    MacLeod, Matthew K

    2015-01-01

    Analytical nuclear gradients for fully internally contracted complete active space second-order perturbation theory (CASPT2) are reported. This implementation has been realized by an automated code generator that can handle spin-free formulas for the CASPT2 energy and its derivatives with respect to variations of molecular orbitals and reference coefficients. The underlying complete active space self-consistent field and the so-called Z-vector equations are solved using density fitting. With full internal contraction the size of first-order wave functions scales polynomially with the number of active orbitals. The CASPT2 gradient program and the code generator are both publicly available. This work enables the CASPT2 geometry optimization of molecules as complex as those investigated by respective single-point calculations.

  10. PGPG: An Automatic Generator of Pipeline Design for Programmable GRAPE Systems

    CERN Document Server

    Hamada, T; Makino, J; Hamada, Tsuyoshi; Fukushige, Toshiyuki; Makino, Junichiro

    2007-01-01

    We have developed PGPG (Pipeline Generator for Programmable GRAPE), a software package which generates the low-level design of the pipeline processor and the communication software for FPGA-based computing engines (FBCEs). An FBCE typically consists of one or multiple FPGA (Field-Programmable Gate Array) chips and local memory. Here, the term "Field-Programmable" means that one can rewrite the logic implemented on the chip after the hardware is completed, and therefore a single FBCE can be used for the calculation of various functions, for example pipeline processors for gravity, SPH interaction, or image processing. The main problem with FBCEs is that the user needs to develop the detailed hardware design for the processor to be implemented on the FPGA chips. In addition, she or he has to write the control logic for the processor, the communication and data conversion library on the host processor, and the application program which uses the developed processor. These require detailed knowledge of hardware design, a hardware description ...

  11. Automatic generation of synthesizable hardware implementation from high level RVC-cal description

    OpenAIRE

    Jerbi, Khaled; Raulet, Mickaël; Deforges, Olivier; Abid, Mohamed

    2012-01-01

    Data processing algorithms are increasing in complexity, especially for image and video coding. Therefore, hardware development directly in hardware description languages (HDL) such as VHDL or Verilog is a difficult task. Current research axes in this context introduce new methodologies to automate the generation of such descriptions. In our work we adopted a high-level and target-independent language called CAL (Caltrop Actor Language). This language is associa...

  12. GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare.

    Science.gov (United States)

    Ali, Rahman; Siddiqi, Muhammad Hameed; Idris, Muhammad; Ali, Taqdir; Hussain, Shujaat; Huh, Eui-Nam; Kang, Byeong Ho; Lee, Sungyoung

    2015-07-02

    A wide array of biomedical data are generated and made available to healthcare experts. However, due to the diverse nature of the data, it is difficult to predict outcomes from it. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and to generate a unified dataset with a "data modeler" tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough set theory based rule-creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that the tool, on average, reduces the time effort of the experts and knowledge engineer by 94.1% while creating unified datasets.
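
    As a rough illustration of the user-centric, priority-based resolution of overlapping attributes, the following sketch merges records from several sources into one unified row, letting higher-priority sources win conflicts. The attribute names, sources and priority order are invented for the example and are not taken from the GUDM tool.

```python
# Illustrative sketch of priority-based resolution of overlapping attributes
# when fusing heterogeneous records into one unified row; attribute names and
# the priority order are made up for the example.
def unify(records_by_source, priority):
    """records_by_source: {source_name: {attribute: value}}.
    priority: list of source names, highest priority first.
    Lower-priority sources only fill attributes that are still missing."""
    unified = {}
    for source in priority:
        for attr, value in records_by_source.get(source, {}).items():
            unified.setdefault(attr, value)
    return unified

row = unify(
    {
        "clinical_trial": {"patient_id": 7, "hba1c": 7.9},
        "wearable":       {"patient_id": 7, "steps_per_day": 5400, "hba1c": None},
        "social_media":   {"mood": "low"},
    },
    priority=["clinical_trial", "wearable", "social_media"],
)
print(row)   # {'patient_id': 7, 'hba1c': 7.9, 'steps_per_day': 5400, 'mood': 'low'}
```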

  13. GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare

    Directory of Open Access Journals (Sweden)

    Rahman Ali

    2015-07-01

    Full Text Available A wide array of biomedical data are generated and made available to healthcare experts. However, due to the diverse nature of the data, it is difficult to predict outcomes from it. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and to generate a unified dataset with a “data modeler” tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough set theory based rule-creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that the tool, on average, reduces the time effort of the experts and knowledge engineer by 94.1% while creating unified datasets.

  14. Automatic Generation of Individual Finite-Element Models for Computational Fluid Dynamics and Computational Structure Mechanics Simulations in the Arteries

    Science.gov (United States)

    Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.

    2009-08-01

    Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases. Cardiovascular diseases result from complex flow patterns and fatigue of the vessel wall and are a prevalent cause of high mortality each year. Computational Fluid Dynamics (CFD), Computational Structure Mechanics (CSM) and Fluid Structure Interaction (FSI) have become efficient tools for modeling individual hemodynamics and biomechanics as well as their interaction in the human arteries. The computations allow non-invasive simulation of patient-specific physical parameters of the blood flow and the vessel wall needed for an efficient minimally invasive treatment. The numerical simulations are based on the Finite Element Method (FEM) and require exact and individual mesh models to be provided. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized based on the integration of mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governed by the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models providing a physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the

  15. Communication: Automatic code generation enables nuclear gradient computations for fully internally contracted multireference theory

    Science.gov (United States)

    MacLeod, Matthew K.; Shiozaki, Toru

    2015-02-01

    Analytical nuclear gradients for fully internally contracted complete active space second-order perturbation theory (CASPT2) are reported. This implementation has been realized by an automated code generator that can handle spin-free formulas for the CASPT2 energy and its derivatives with respect to variations of molecular orbitals and reference coefficients. The underlying complete active space self-consistent field and the so-called Z-vector equations are solved using density fitting. The implementation has been applied to the vertical and adiabatic ionization potentials of the porphin molecule to illustrate its capability.

  16. Communication: automatic code generation enables nuclear gradient computations for fully internally contracted multireference theory.

    Science.gov (United States)

    MacLeod, Matthew K; Shiozaki, Toru

    2015-02-01

    Analytical nuclear gradients for fully internally contracted complete active space second-order perturbation theory (CASPT2) are reported. This implementation has been realized by an automated code generator that can handle spin-free formulas for the CASPT2 energy and its derivatives with respect to variations of molecular orbitals and reference coefficients. The underlying complete active space self-consistent field and the so-called Z-vector equations are solved using density fitting. The implementation has been applied to the vertical and adiabatic ionization potentials of the porphin molecule to illustrate its capability. PMID:25662628

  17. GRACE/SUSY Automatic Generation of Tree Amplitudes in the Minimal Supersymmetric Standard Model

    OpenAIRE

    Fujimoto, J

    2002-01-01

    GRACE/SUSY is a program package for generating the tree-level amplitude and evaluating the corresponding cross section of processes of the minimal supersymmetric extension of the standard model (MSSM). The Higgs potential adopted in the system, however, is assumed to have a more general form indicated by the two-Higgs-doublet model. This system is an extension of GRACE for the standard model (SM) of the electroweak and strong interactions. For a given MSSM process the Feynman graphs and amplit...

  18. Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People

    Directory of Open Access Journals (Sweden)

    Alan F. Smeaton

    2010-02-01

    Full Text Available In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better informed decisions on a single sensor’s output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one’s life with many images including the places they go, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors, by augmenting passive visual lifelogs with “Web 2.0” content collected by millions of other individuals.

  19. Intra-Hour Dispatch and Automatic Generator Control Demonstration with Solar Forecasting - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Coimbra, Carlos F. M. [Univ. of California, San Diego, CA (United States)]

    2016-02-25

    In this project we address multiple resource integration challenges associated with increasing levels of solar penetration that arise from the variability and uncertainty in solar irradiance. We will model the SMUD service region as its own balancing region, and develop an integrated, real-time operational tool that takes solar-load forecast uncertainties into consideration and commits optimal energy resources and reserves for intra-hour and intra-day decisions. The primary objectives of this effort are to reduce power system operation cost by committing appropriate amount of energy resources and reserves, as well as to provide operators a prediction of the generation fleet’s behavior in real time for realistic PV penetration scenarios. The proposed methodology includes the following steps: clustering analysis on the expected solar variability per region for the SMUD system, Day-ahead (DA) and real-time (RT) load forecasts for the entire service areas, 1-year of intra-hour CPR forecasts for cluster centers, 1-year of smart re-forecasting CPR forecasts in real-time for determination of irreducible errors, and uncertainty quantification for integrated solar-load for both distributed and central stations (selected locations within service region) PV generation.

  20. Intra-Hour Dispatch and Automatic Generator Control Demonstration with Solar Forecasting - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Coimbra, Carlos F. M.

    2016-04-05

    In this project we address multiple resource integration challenges associated with increasing levels of solar penetration that arise from the variability and uncertainty in solar irradiance. We will model the SMUD service region as its own balancing region, and develop an integrated, real-time operational tool that takes solar-load forecast uncertainties into consideration and commits optimal energy resources and reserves for intra-hour and intra-day decisions. The primary objectives of this effort are to reduce power system operation cost by committing appropriate amount of energy resources and reserves, as well as to provide operators a prediction of the generation fleet’s behavior in real time for realistic PV penetration scenarios. The proposed methodology includes the following steps: clustering analysis on the expected solar variability per region for the SMUD system, Day-ahead (DA) and real-time (RT) load forecasts for the entire service areas, 1-year of intra-hour CPR forecasts for cluster centers, 1-year of smart re-forecasting CPR forecasts in real-time for determination of irreducible errors, and uncertainty quantification for integrated solar-load for both distributed and central stations (selected locations within service region) PV generation.

  1. Automatic mechanism generation for pyrolysis of di-tert-butyl sulfide.

    Science.gov (United States)

    Class, Caleb A; Liu, Mengjie; Vandeputte, Aäron G; Green, William H

    2016-08-01

    The automated Reaction Mechanism Generator (RMG), using rate parameters derived from ab initio CCSD(T) calculations, is used to build reaction networks for the thermal decomposition of di-tert-butyl sulfide. Simulation results were compared with data from pyrolysis experiments with and without the addition of a cyclohexene inhibitor. Purely free-radical chemistry did not properly explain the reactivity of di-tert-butyl sulfide, as the previous experimental work showed that the sulfide decomposed via first-order kinetics in the presence and absence of the radical inhibitor. The concerted unimolecular decomposition of di-tert-butyl sulfide to form isobutene and tert-butyl thiol was found to be a key reaction in both cases, as it explained the first-order sulfide decomposition. The computer-generated kinetic model predictions quantitatively match most of the experimental data, but the model is apparently missing pathways for radical-induced decomposition of thiols to form elemental sulfur. Cyclohexene has a significant effect on the composition of the radical pool, and this led to dramatic changes in the resulting product distribution. PMID:27431650

  2. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models

    Science.gov (United States)

    Abayowa, Bernard O.; Yilmaz, Alper; Hardie, Russell C.

    2015-08-01

    This paper presents a framework for automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud without prior knowledge of an initial alignment. The framework employs a coarse-to-fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical-imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical-imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize the global error between the registered point clouds. The novelty of the proposed approach is in the computation of salient features from the DSMs, and the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and on aerial datasets acquired in real urban environments. The results demonstrate the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud when initial registration parameters are unavailable.
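
    The fine-registration step is standard point-to-point ICP; the sketch below is a generic textbook implementation (closest-point correspondences followed by a Kabsch rotation fit), not the authors' code, and omits outlier rejection and convergence checks.

```python
# Minimal rigid point-to-point ICP refinement sketch (no scale estimation).
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, iterations=20):
    """Iteratively align Nx3 'source' to Mx3 'target'; returns (R, t, aligned)."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)            # closest-point correspondences
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                  # best rotation (Kabsch)
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

    In the framework described above, `source` would be the photogrammetric point cloud already moved by the coarse DSM-feature transform and `target` the LiDAR cloud.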

  3. Naive Bayes Applied to Automatic Test Case Generation

    Institute of Scientific and Technical Information of China (English)

    李欣; 张聪; 罗宪

    2012-01-01

    A method that uses Naive Bayes as the core algorithm to generate automated test cases is proposed. Aiming at automated testing, the method introduces the idea of classifying the randomly generated test cases with a Naive Bayes classifier. Experimental results show that this is a feasible method for generating test cases.
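
    A minimal sketch of the idea, under the assumption that test cases are represented as numeric feature vectors and that previously labelled cases are available for training: random candidates are generated and a Gaussian Naive Bayes classifier keeps only those predicted to be useful. The features and labels here are synthetic.

```python
# Hedged sketch: filter randomly generated test cases with a Naive Bayes
# classifier trained on earlier, labelled cases. Data are synthetic.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Training data: feature vectors of past test cases and whether they exposed a fault
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + X_train[:, 2] > 0.5).astype(int)

clf = GaussianNB().fit(X_train, y_train)

# Randomly generated candidate test cases; keep only the promising ones
candidates = rng.normal(size=(50, 4))
selected = candidates[clf.predict(candidates) == 1]
print(f"kept {len(selected)} of {len(candidates)} random candidates")
```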

  4. Automatic Multi-GPU Code Generation applied to Simulation of Electrical Machines

    CERN Document Server

    Rodrigues, Antonio Wendell De Oliveira; Dekeyser, Jean-Luc; Menach, Yvonnick Le

    2011-01-01

    Electrical and electronic engineering has used parallel programming to solve its large-scale complex problems for performance reasons. However, as parallel programming requires a non-trivial distribution of tasks and data, developers find it hard to implement their applications effectively. Thus, in order to reduce design complexity, we propose an approach to generate code for hybrid architectures (e.g. CPU + GPU) using OpenCL, an open standard for parallel programming of heterogeneous systems. This approach is based on Model Driven Engineering (MDE) and the MARTE profile, a standard proposed by the Object Management Group (OMG). The aim is to provide resources for non-specialists in parallel programming to implement their applications. Moreover, thanks to the model reuse capability, we can add or change functionalities or the target architecture. Consequently, this approach helps industries to achieve their time-to-market constraints, and experimental tests confirm performance improvements using multi-GPU environmen...

  5. Wind power integration into the automatic generation control of power systems with large-scale wind power

    Directory of Open Access Journals (Sweden)

    Abdul Basit

    2014-10-01

    Full Text Available Transmission system operators have an increased interest in the active participation of wind power plants (WPP) in the power balance control of power systems with large wind power penetration. The emphasis in this study is on the integration of WPPs into the automatic generation control (AGC) of the power system. The present paper proposes a coordinated control strategy for the AGC between combined heat and power plants (CHPs) and WPPs to enhance the security and the reliability of power system operation in the case of a large wind power penetration. The proposed strategy, described and exemplified for the future Danish power system, takes the hour-ahead regulating power plan for generation and power exchange with neighbouring power systems into account. The performance of the proposed strategy for coordinated secondary control is assessed and discussed by means of simulations for different possible future scenarios, when wind power production in the power system is high and conventional production from CHPs is at a minimum level. The investigation results of the proposed control strategy have shown that the WPPs can actively help the AGC and reduce the real-time power imbalance in the power system by down-regulating their production when CHPs are unable to provide the required response.

  6. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272
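
    A toy sketch of the homeostatic rule described above: each neuron grows synaptic elements while its activity is below a target rate and retracts them when above, and free pre- and post-synaptic elements are then paired into synapses. The linear growth rule and all parameters are illustrative assumptions, not the NEST implementation.

```python
# Toy homeostatic structural-plasticity loop; growth law and parameters are
# illustrative only.
import numpy as np

def update_elements(elements, rate, target_rate, growth=0.1):
    """New number of synaptic elements per neuron (growth can be negative)."""
    return elements + growth * (target_rate - rate)

rates = np.array([2.0, 5.0, 9.0])           # current firing rates (Hz)
pre = post = np.zeros(3)
for _ in range(50):                          # relax toward the 5 Hz target
    pre = np.maximum(update_elements(pre, rates, 5.0), 0.0)
    post = np.maximum(update_elements(post, rates, 5.0), 0.0)

free_pre, free_post = int(pre.sum()), int(post.sum())
new_synapses = min(free_pre, free_post)      # pair free elements into synapses
print(pre.round(2), post.round(2), new_synapses)
```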

  7. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    Science.gov (United States)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the

  8. MATHEMATICAL MODEL OF TRANSIENT PROCESSES PERTAINING TO THREE-IMPULSE SYSTEM FOR AUTOMATIC CONTROL OF STEAM GENERATOR WATER SUPPLY ON LOAD RELIEF

    Directory of Open Access Journals (Sweden)

    G. T. Kulakov

    2014-01-01

    Full Text Available The paper analyzes the operation of the standard three-impulse automatic control system (ACS) for steam generator water supply. A mathematical model for checking its operational ability on load relief has been developed, and this model makes it possible to determine the maximum deviations of the water level without carrying out actual tests or making any corrections in the plant settings for the start-up of technological protection systems based on the water level in the drum. The paper reveals the reasons for static regulation errors in the standard automatic control system when handling internal and external disturbances caused by changes in superheated steam flow. The practical significance of modernizing the automatic control system for steam generator water supply is substantiated in the paper.

  9. A heads-up no-limit Texas Hold'em poker player: Discretized betting models and automatically generated equilibrium-finding programs

    DEFF Research Database (Denmark)

    Gilpin, Andrew G.; Sandholm, Tuomas; Sørensen, Troels Bjerre

    2008-01-01

    an XML-based description of a game. This automatically generated program is more efficient than what would be possible with a general-purpose equilibrium-finding program. Finally, we present results from the AAAI-07 Computer Poker Competition, in which Tartanian placed second out of ten entries....

  10. Automatic generation control of multi-area power systems with diverse energy sources using Teaching Learning Based Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2016-03-01

    Full Text Available This paper presents the design and analysis of a Proportional-Integral-Double Derivative (PIDD) controller for Automatic Generation Control (AGC) of multi-area power systems with diverse energy sources using the Teaching Learning Based Optimization (TLBO) algorithm. At first, a two-area reheat thermal power system with an appropriate Generation Rate Constraint (GRC) is considered. The design problem is formulated as an optimization problem and TLBO is employed to optimize the parameters of the PIDD controller. The superiority of the proposed TLBO-based PIDD controller has been demonstrated by comparing the results with recently published optimization techniques such as hybrid Firefly Algorithm and Pattern Search (hFA-PS), Firefly Algorithm (FA), Bacteria Foraging Optimization Algorithm (BFOA), Genetic Algorithm (GA) and conventional Ziegler Nichols (ZN) for the same interconnected power system. Also, the proposed approach has been extended to a two-area power system with diverse sources of generation like thermal, hydro, wind and diesel units. The system model includes boiler dynamics, GRC and Governor Dead Band (GDB) non-linearity. It is observed from the simulation results that the proposed approach provides better dynamic responses than results recently published in the literature. Further, the study is extended to a three unequal-area thermal power system with different controllers in each area, and the results are compared with a published FA-optimized PID controller for the same system under study. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions in the range of ±25% from their nominal values to test robustness.
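
    For reference, the PIDD control law itself is simple; the discrete-time sketch below adds a second derivative term to a standard PID loop. The gains are placeholders, not the TLBO-optimized values reported in the paper.

```python
# Discrete-time PIDD (proportional-integral-double-derivative) controller
# sketch with finite-difference derivatives; gains are placeholders.
class PIDDController:
    def __init__(self, kp, ki, kd, kdd, dt):
        self.kp, self.ki, self.kd, self.kdd, self.dt = kp, ki, kd, kdd, dt
        self.integral = 0.0
        self.prev_e = 0.0
        self.prev_de = 0.0

    def step(self, error):
        self.integral += error * self.dt
        de = (error - self.prev_e) / self.dt          # first derivative
        d2e = (de - self.prev_de) / self.dt           # second derivative
        self.prev_e, self.prev_de = error, de
        return (self.kp * error + self.ki * self.integral
                + self.kd * de + self.kdd * d2e)

ctrl = PIDDController(kp=1.2, ki=0.8, kd=0.4, kdd=0.05, dt=0.01)
for ace in [0.02, 0.018, 0.015]:                      # sample ACE trajectory
    print(ctrl.step(ace))
```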

  11. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available GLOBAL AP ANATOMIC TOTAL SHOULDER SYSTEM METHODIST HOSPITAL PHILADELPHIA, PA April 17, 2008 00:00:10 ANNOUNCER: ... you'll be able to watch a live global AP anatomic total shoulder surgery from Methodist Hospital ...

  12. An Anatomically Validated Brachial Plexus Contouring Method for Intensity Modulated Radiation Therapy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be [Department of Anatomy, Ghent University, Ghent (Belgium); Department of Radiotherapy, Ghent University, Ghent (Belgium); Audenaert, Emmanuel [Department of Physical Medicine and Orthopedic Surgery, Ghent University, Ghent (Belgium); Speleers, Bruno; Vercauteren, Tom; Mulliez, Thomas [Department of Radiotherapy, Ghent University, Ghent (Belgium); Vandemaele, Pieter; Achten, Eric [Department of Radiology, Ghent University, Ghent (Belgium); Kerckaert, Ingrid; D'Herde, Katharina [Department of Anatomy, Ghent University, Ghent (Belgium); De Neve, Wilfried [Department of Radiotherapy, Ghent University, Ghent (Belgium); Van Hoof, Tom [Department of Anatomy, Ghent University, Ghent (Belgium)]

    2013-11-15

    Purpose: To develop contouring guidelines for the brachial plexus (BP) using anatomically validated cadaver datasets. Magnetic resonance imaging (MRI) and computed tomography (CT) were used to obtain detailed visualizations of the BP region, with the goal of achieving maximal inclusion of the actual BP in a small contoured volume while also accommodating for anatomic variations. Methods and Materials: CT and MRI were obtained for 8 cadavers positioned for intensity modulated radiation therapy. 3-dimensional reconstructions of soft tissue (from MRI) and bone (from CT) were combined to create 8 separate enhanced CT project files. Dissection of the corresponding cadavers anatomically validated the reconstructions created. Seven enhanced CT project files were then automatically fitted, separately in different regions, to obtain a single dataset of superimposed BP regions that incorporated anatomic variations. From this dataset, improved BP contouring guidelines were developed. These guidelines were then applied to the 7 original CT project files and also to 1 additional file, left out from the superimposing procedure. The percentage of BP inclusion was compared with the published guidelines. Results: The anatomic validation procedure showed a high level of conformity for the BP regions examined between the 3-dimensional reconstructions generated and the dissected counterparts. Accurate and detailed BP contouring guidelines were developed, which provided corresponding guidance for each level in a clinical dataset. An average margin of 4.7 mm around the anatomically validated BP contour is sufficient to accommodate for anatomic variations. Using the new guidelines, 100% inclusion of the BP was achieved, compared with a mean inclusion of 37.75% when published guidelines were applied. Conclusion: Improved guidelines for BP delineation were developed using combined MRI and CT imaging with validation by anatomic dissection.

  13. Automatic Code Generation Algorithm of Open64 for MPI

    Institute of Scientific and Technical Information of China (English)

    向阳霞; 裴宏; 张惠民; 陈曼青

    2011-01-01

    The automatic generation of MPI code for clusters based on Open64 is studied, addressing the problem that the open-source compiler Open64 cannot automatically parallelize for MPI. First, the location of the MPI automatic code generation module within the Open64 compiler architecture is analyzed; then an Open64-based automatic generation algorithm for MPI code is presented; finally, experiments on the NPB benchmarks are conducted. The experimental results show that the algorithm can effectively reduce the communication overhead of MPI parallel programs and noticeably increase their speedups.

  14. A graphically oriented specification language for automatic code generation. GRASP/Ada: A Graphical Representation of Algorithms, Structure, and Processes for Ada, phase 1

    Science.gov (United States)

    Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.

    1989-01-01

    The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.

  15. An image-based automatic mesh generation and numerical simulation for a population-based analysis of aerosol delivery in the human lungs

    Science.gov (United States)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2013-11-01

    The authors propose a method to automatically generate three-dimensional subject-specific airway geometries and meshes for computational fluid dynamics (CFD) studies of aerosol delivery in the human lungs. The proposed method automatically expands computed tomography (CT)-based airway skeleton to generate the centerline (CL)-based model, and then fits it to the CT-segmented geometry to generate the hybrid CL-CT-based model. To produce a turbulent laryngeal jet known to affect aerosol transport, we developed a physiologically-consistent laryngeal model that can be attached to the trachea of the above models. We used Gmsh to automatically generate the mesh for the above models. To assess the quality of the models, we compared the regional aerosol distributions in a human lung predicted by the hybrid model and the manually generated CT-based model. The aerosol distribution predicted by the hybrid model was consistent with the prediction by the CT-based model. We applied the hybrid model to 8 healthy and 16 severe asthmatic subjects, and average geometric error was 3.8% of the branch radius. The proposed method can be potentially applied to the branch-by-branch analyses of a large population of healthy and diseased lungs. NIH Grants R01-HL-094315 and S10-RR-022421, CT data provided by SARP, and computer time provided by XSEDE.

  16. Development of a new generation of high-resolution anatomical models for medical device evaluation: the Virtual Population 3.0

    Science.gov (United States)

    Gosselin, Marie-Christine; Neufeld, Esra; Moser, Heidi; Huber, Eveline; Farcito, Silvia; Gerber, Livia; Jedensjö, Maria; Hilber, Isabel; Di Gennaro, Fabienne; Lloyd, Bryn; Cherubini, Emilio; Szczerba, Dominik; Kainz, Wolfgang; Kuster, Niels

    2014-09-01

    The Virtual Family computational whole-body anatomical human models were originally developed for electromagnetic (EM) exposure evaluations, in particular to study how absorption of radiofrequency radiation from external sources depends on anatomy. However, the models immediately garnered much broader interest and are now applied by over 300 research groups, many from medical applications research fields. In a first step, the Virtual Family was expanded to the Virtual Population to provide considerably broader population coverage with the inclusion of models of both sexes ranging in age from 5 to 84 years old. Although these models have proven to be invaluable for EM dosimetry, it became evident that significantly enhanced models are needed for reliable effectiveness and safety evaluations of diagnostic and therapeutic applications, including medical implants safety. This paper describes the research and development performed to obtain anatomical models that meet the requirements necessary for medical implant safety assessment applications. These include implementation of quality control procedures, re-segmentation at higher resolution, more-consistent tissue assignments, enhanced surface processing and numerous anatomical refinements. Several tools were developed to enhance the functionality of the models, including discretization tools, posing tools to expand the posture space covered, and multiple morphing tools, e.g., to develop pathological models or variations of existing ones. A comprehensive tissue properties database was compiled to complement the library of models. The results are a set of anatomically independent, accurate, and detailed models with smooth, yet feature-rich and topologically conforming surfaces. The models are therefore suited for the creation of unstructured meshes, and the possible applications of the models are extended to a wider range of solvers and physics. The impact of these improvements is shown for the MRI exposure of an adult

  17. Robot Game Tactical Automatic Generation Mechanism under Dynamic Environment

    Institute of Scientific and Technical Information of China (English)

    李柏依; 刘钊; 胡镓伟

    2015-01-01

    Inspired by the tactic-generation principles of computer Go (the ShoutGo tactic), this paper designs an automatic tactic-generation mechanism for game-playing robots. First, a tactic recommender system is built. Two models for the automatic generation of robot soccer tactics are then proposed, together with a player thinking model that serves as the basic factor for evaluating and designing the automatic generation mechanism. On this basis, a complete automatic tactic-generation model for computer game playing is designed. Using this model, the robots cooperate more closely, the tactics become more variable, and the ability to adapt to the opponent's adjustments is clearly improved.

  18. Android Event Code Automatic Generation Method Based on Object Relevance

    Institute of Scientific and Technical Information of China (English)

    李杨; 胡文

    2012-01-01

    To solve the problem of automatically generating Android event code, object relevance theory is applied to describe the association relationships between widget objects. A definition of these control-object association relationships is given and their construction process is implemented, resulting in a control-object association relationship tree (COARTree). Applying the tree to the Android event-code generation process solves the problem of automatic Android event-code generation and shows good practical value. A simple phone book application is used as an example to validate the method.

  19. Modeling and simulation of the automatic generation control of electric power systems

    Energy Technology Data Exchange (ETDEWEB)

    Caballero Ortiz, Ezequiel

    2002-12-01

    This work is devoted to the analysis of the automatic generation control of electric power systems, based on the information generated by the Load-Frequency Control loop and the Automatic Voltage Regulator loop. Classical control theory and feedback control system concepts are applied, together with concepts from modern control theory. The studies are carried out on a digital computer with MATLAB and the simulation facilities of the SIMULINK tool. In this thesis the theoretical and physical concepts of automatic generation control are established, dividing it into the load frequency control and automatic voltage regulator loops. The mathematical models of the two control loops are established. Later, the models of the elements are interconnected in order to integrate the load frequency control loop, and the digital simulation of the system is carried out. First, the function of the primary control in single-machine, single-area multi-machine and multi-area multi-machine power systems is analyzed. Then, the automatic generation control of single-area and multi-area power systems is studied. The economic dispatch concept is established, the multi-area power system is simulated with this plan, and thereafter the steady-state energy exchange among areas is studied. The mathematical models of the component elements of the automatic voltage regulator control loop are interconnected. Data according to the nature of each component are generated and their behavior is simulated to analyze the system response. The two control loops are interconnected and a simulation is carried out with the previously generated data, examining the performance of the automatic generation control and the interaction between the two control loops. Finally, the pole-placement and optimal control techniques of modern control theory are applied to the automatic generation control of a single area

  20. The Generation of Automatic Mapping for Buildings Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    OpenAIRE

    William Barragán Zaque; Alexander Martínez Rivillas; Pablo Emilio Garzón Carreño

    2015-01-01

    The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high-resolution digital vertical aerial photographs and point clouds obtained using LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The re...

  1. Automatic generation of 2D micromechanical finite element model of silicon–carbide/aluminum metal matrix composites: Effects of the boundary conditions

    DEFF Research Database (Denmark)

    Qing, Hai

    2013-01-01

    for the automatic generation of 2D micromechanical FE-models with randomly distributed SiC particles. In order to simulate the damage process in aluminum alloy matrix and SiC particles, a damage parameter based on the stress triaxial indicator and the maximum principal stress criterion based elastic brittle damage...... are performed to study the influence of boundary condition, particle number and volume fraction of the representative volume element (RVE) on composite stiffness and strength properties....
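
    An illustrative sketch of the two damage checks named in the abstract, under the assumption that the triaxiality-based parameter governs the ductile aluminium matrix and the maximum-principal-stress criterion the brittle SiC particles; the thresholds and stress state are placeholders.

```python
# Illustrative damage checks; criteria assignment and thresholds are assumptions.
import numpy as np

def stress_triaxiality(sigma):
    """Hydrostatic stress over von Mises stress for a 3x3 stress tensor."""
    hydro = np.trace(sigma) / 3.0
    dev = sigma - hydro * np.eye(3)
    von_mises = np.sqrt(1.5 * np.tensordot(dev, dev))
    return hydro / von_mises

def matrix_damaged(sigma, eta_crit=1.5):
    """Ductile matrix flagged when triaxiality exceeds a critical value."""
    return stress_triaxiality(sigma) > eta_crit

def particle_damaged(sigma, sigma_crit=1500.0):
    """Brittle particle flagged when the maximum principal stress (MPa)
    exceeds an assumed particle strength."""
    return np.linalg.eigvalsh(sigma).max() > sigma_crit

sigma = np.diag([900.0, 300.0, 300.0])     # MPa, example stress state
print(stress_triaxiality(sigma), matrix_damaged(sigma), particle_damaged(sigma))
```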

  2. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate
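
    For readers unfamiliar with the OVH feature, the sketch below computes a basic overlap volume histogram from voxel masks: for each distance r, the fraction of OAR voxels lying within r of the target. It is only the geometric input such QA models are built on, not the trained prediction model, and the masks and spacing are synthetic.

```python
# Basic overlap volume histogram from boolean voxel masks; synthetic example.
import numpy as np
from scipy import ndimage

def overlap_volume_histogram(target_mask, oar_mask, spacing, radii):
    """Fraction of OAR volume within each distance in 'radii' (mm) of the target."""
    dist_to_target = ndimage.distance_transform_edt(~target_mask, sampling=spacing)
    oar_dists = dist_to_target[oar_mask]
    return np.array([(oar_dists <= r).mean() for r in radii])

# Synthetic example: a cubic target and a nearby cubic OAR on a 1 mm grid
grid = np.zeros((60, 60, 60), dtype=bool)
target = grid.copy(); target[20:30, 20:30, 20:30] = True
oar = grid.copy();    oar[32:42, 20:30, 20:30] = True
print(overlap_volume_histogram(target, oar, spacing=(1, 1, 1), radii=[0, 5, 10, 20]))
```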

  3. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted - achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly

  4. SYMBOD - A computer program for the automatic generation of symbolic equations of motion for systems of hinge-connected rigid bodies

    Science.gov (United States)

    Macala, G. A.

    1983-01-01

    A computer program is described that can automatically generate symbolic equations of motion for systems of hinge-connected rigid bodies with tree topologies. The dynamical formulation underlying the program is outlined, and examples are given to show how a symbolic language is used to code the formulation. The program is applied to generate the equations of motion for a four-body model of the Galileo spacecraft. The resulting equations are shown to be a factor of three faster in execution time than conventional numerical subroutines.
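
    As a small analogue of what such a symbolic generator produces, the sketch below derives the equation of motion of a single hinged body (a pendulum) with SymPy via the Euler-Lagrange equation. SYMBOD targets whole trees of hinge-connected bodies in its own symbolic language; this one-body SymPy example is only illustrative.

```python
# One-body illustration of symbolic equation-of-motion generation with SymPy.
import sympy as sp

t = sp.symbols("t")
m, L, g = sp.symbols("m L g", positive=True)
theta = sp.Function("theta")(t)

# Lagrangian of a point-mass pendulum of length L
T = sp.Rational(1, 2) * m * (L * theta.diff(t)) ** 2
V = -m * g * L * sp.cos(theta)
Lag = T - V

# Euler-Lagrange: d/dt(dL/d(theta_dot)) - dL/d(theta) = 0
eom = sp.simplify(sp.diff(Lag.diff(theta.diff(t)), t) - Lag.diff(theta))
print(sp.Eq(eom, 0))   # m*L**2*theta'' + m*g*L*sin(theta) = 0
```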

  5. Automatic Generation of VC++ Programs with a Natural Language Interface

    Institute of Scientific and Technical Information of China (English)

    周玉龙; 辛运帏; 谷大勇; 陈有祺

    2001-01-01

    This paper reports research on automatic program generation with a natural language interface, combining the fields of natural language processing and software automation. Using object-oriented technology and methods, the system takes Chinese text describing the desired functionality of a Visual C++ program as input, applies an extended case grammar for syntactic and semantic analysis, and successively performs automatic word segmentation, syntactic processing, semantic analysis and understanding, and target program generation. The final output is an executable program that meets the user's requirements and conforms to Visual C++ syntax.

  6. Dosimetric Evaluation of Automatic Segmentation for Adaptive IMRT for Head-and-Neck Cancer

    International Nuclear Information System (INIS)

    Purpose: Adaptive planning to accommodate anatomic changes during treatment requires repeat segmentation. This study uses dosimetric endpoints to assess automatically deformed contours. Methods and Materials: Sixteen patients with head-and-neck cancer had adaptive plans because of anatomic change during radiotherapy. Contours from the initial planning computed tomography (CT) were deformed to the mid-treatment CT using an intensity-based free-form registration algorithm and then compared with the manually drawn contours for the same CT using the Dice similarity coefficient and an overlap index. The automatic contours were used to create new adaptive plans. The original and automatic adaptive plans were compared based on dosimetric outcomes of the manual contours and on plan conformality. Results: Volumes from the manual and automatic segmentation were similar; only the gross tumor volume (GTV) was significantly different. Automatic plans achieved lower mean coverage for the GTV (V95: 98.6 ± 1.9% vs. 89.9 ± 10.1%, p = 0.004) and the clinical target volume (V95: 98.4 ± 0.8% vs. 89.8 ± 6.2%). Spinal cord dose differed between the plans (39.9 ± 3.7 Gy vs. 42.8 ± 5.4 Gy, p = 0.034), with no difference for the remaining structures. Conclusions: Automatic segmentation is not robust enough to substitute for physician-drawn volumes, particularly for the GTV. However, it generates normal-structure contours of sufficient accuracy when assessed by dosimetric endpoints.
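    The sketch below illustrates the two kinds of comparison reported above, geometric overlap via the Dice similarity coefficient and dosimetric endpoints (Dmean, Vx) evaluated on a dose grid; the dose array and contours are synthetic toy data, not values from the study.

      # A minimal sketch of the two kinds of comparison used above: geometric overlap
      # (Dice similarity coefficient) between manual and automatic contours, and
      # dosimetric endpoints (Dmean, Vx) evaluated on a dose grid. Arrays are toy data.
      import numpy as np

      def dice(a, b):
          """Dice similarity coefficient of two boolean masks."""
          inter = np.logical_and(a, b).sum()
          return 2.0 * inter / (a.sum() + b.sum())

      def dvh_metrics(dose, mask, v_thresholds_gy):
          """Mean dose and V_x (% of structure receiving >= x Gy) inside a mask."""
          d = dose[mask]
          return d.mean(), {x: 100.0 * (d >= x).mean() for x in v_thresholds_gy}

      rng = np.random.default_rng(0)
      dose = rng.uniform(0, 70, size=(40, 40, 40))          # hypothetical dose grid [Gy]
      manual = np.zeros((40, 40, 40), bool)
      manual[10:30, 10:30, 10:30] = True
      auto = np.zeros_like(manual)
      auto[12:32, 10:30, 10:30] = True

      print('Dice:', round(dice(manual, auto), 3))
      print('Dmean, V50/V65 on manual contour:', dvh_metrics(dose, manual, (50, 65)))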

  7. Ontology-based automatic generation of tolerance specifications

    Institute of Scientific and Technical Information of China (English)

    钟艳如; 王冰清; 覃裕初; 高文祥

    2016-01-01

    To reduce the uncertainty caused by tolerance specifications currently being assigned manually, ontology-based automatic generation of tolerance specifications is studied, building on an ontology-based method for the automatic generation of assembly tolerance types. By analysing the domain knowledge of tolerance specifications, the concepts and relationships involved are extracted and used to construct a tolerance specification ontology, which is implemented in the Web Ontology Language (OWL). On the basis of the implemented ontology, generation rules for tolerance specifications are defined with the Semantic Web Rule Language (SWRL), and an algorithm for the automatic generation of tolerance specifications is designed. The algorithm is illustrated with the automatic generation of the tolerance specification for the intermediate transmission shaft of a reducer. The work provides effective ideas and methods for research on the automatic generation of tolerance specifications in CAD systems.

  8. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available GLOBAL AP ANATOMIC TOTAL SHOULDER SYSTEM METHODIST HOSPITAL PHILADELPHIA, PA April 17, 2008 00:00:10 ANNOUNCER: DePuy Orthopedics is continually advancing the standard of orthopedic patient care. In a few ...

  9. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available ... to a patient's unique anatomical makeup. Dr. Gerald R. Williams, Jr., a shoulder specialist from the Rothman ... That might help. Could you raise the O.R. table, please? 00:28:35 WOMAN: Can you ...

  11. Automatic Mapping of Buildings Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    Full Text Available The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high-resolution digital vertical aerial photographs and point clouds obtained using LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processing. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2%, validated through ground-truthing.

  12. MATHEMATICAL MODEL OF TRANSIENT PROCESSES PERTAINING TO THREE-IMPULSE SYSTEM FOR AUTOMATIC CONTROL OF STEAM GENERATOR WATER SUPPLY ON LOAD RELIEF

    OpenAIRE

    G. T. Kulakov; A. T. Kulakov; A. N. Kukharenko

    2014-01-01

    The paper analyzes the operation of the standard three-impulse automatic control system (ACS) for steam generator water supply. A mathematical model for checking its operational ability on load relief has been developed; this model makes it possible to determine maximum deviations of the water level without execution of actual tests and without any corrections in the plants for starting up technological protection systems in accordance with the water level in the drum. The paper reveals rea

  13. Auxiliary anatomical labels for joint segmentation and atlas registration

    Science.gov (United States)

    Gass, Tobias; Szekely, Gabor; Goksel, Orcun

    2014-03-01

    This paper studies improving joint segmentation and registration by introducing auxiliary labels for anatomy that has similar appearance to the target anatomy while not being part of that target. Such auxiliary labels help avoid false positive labelling of non-target anatomy by resolving ambiguity. A known registration of a segmented atlas can help identify where a target segmentation should lie. Conversely, segmentations of anatomy in two images can help them be better registered. Joint segmentation and registration is then a method that can leverage information from both registration and segmentation to help one another. It has received increasing attention recently in the literature. Often, merely a single organ of interest is labelled in the atlas. In the presence of other anatomical structures with similar appearance, this leads to ambiguity in intensity-based segmentation; for example, when segmenting individual bones in CT images where other bones share the same intensity profile. To alleviate this problem, we introduce automatic generation of additional labels in atlas segmentations, by marking similar-appearance non-target anatomy with an auxiliary label. Information from the auxiliary-labeled atlas segmentation is then incorporated by using a novel coherence potential, which penalizes differences between the deformed atlas segmentation and the target segmentation estimate. We validated this on a joint segmentation-registration approach that iteratively alternates between registering an atlas and segmenting the target image to find a final anatomical segmentation. The results show that automatic auxiliary labelling outperforms the same approach using single-label atlases, for both mandibular bone segmentation in 3D-CT and corpus callosum segmentation in 2D-MRI.

  14. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    Science.gov (United States)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently created by the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, built on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, whose major goal is to transform the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software offering an elegant solution to manage and analyse seismic data and to create automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the 1980, November 23, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters, whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each of them aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time streaming of data and then the software performs the phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated, using a probabilistic, non-linear exploration algorithm. Then, the software is able to automatically provide three different magnitude estimates. First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude
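    The abstract above mentions automatic P-wave arrival detection on streaming data; a classical STA/LTA trigger, sketched below on a synthetic trace, is one common way this is done, though not necessarily the detector implemented in the RISS software.

      # Hedged sketch of a classical STA/LTA trigger, a common way to detect P-wave
      # arrivals on streaming data as described above (not necessarily the detector
      # used in the RISS software). Pure numpy on a synthetic trace.
      import numpy as np

      def sta_lta(trace, fs, sta_s=0.5, lta_s=10.0):
          """Ratio of short-term to long-term average of the squared signal."""
          sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
          energy = trace.astype(float) ** 2
          csum = np.concatenate(([0.0], np.cumsum(energy)))
          sta = (csum[sta_n:] - csum[:-sta_n]) / sta_n
          lta = (csum[lta_n:] - csum[:-lta_n]) / lta_n
          n = min(len(sta), len(lta))
          return sta[-n:] / (lta[-n:] + 1e-12)

      fs = 100.0
      rng = np.random.default_rng(1)
      trace = rng.normal(0, 1, 6000)
      trace[3000:3200] += 20 * np.sin(2 * np.pi * 5 * np.arange(200) / fs)  # synthetic P onset
      ratio = sta_lta(trace, fs)
      trigger = np.argmax(ratio > 4.0)
      print('trigger sample (relative to the aligned window):', trigger)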

  15. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    International Nuclear Information System (INIS)

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches. (paper)

  16. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    Science.gov (United States)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
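    As a minimal illustration of the shape-feature idea above, the sketch below computes Fourier descriptors of closed 2D contours and trains a support vector machine on them; the synthetic circle/ellipse contours and the scikit-learn SVC stand in for the authors' parenchyma-part shapes and recognition system.

      # Hedged sketch of the shape-feature idea above: Fourier descriptors of a 2D
      # contour fed to a support vector machine. Synthetic circle/ellipse contours
      # stand in for parenchyma part outlines; this is not the authors' pipeline.
      import numpy as np
      from sklearn.svm import SVC

      def fourier_descriptors(contour_xy, n_coeffs=8):
          """Scale-normalized magnitudes of low-order Fourier coefficients."""
          z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
          coeffs = np.fft.fft(z - z.mean())
          mags = np.abs(coeffs[1:n_coeffs + 1])
          return mags / (mags[0] + 1e-12)        # normalize by the first harmonic

      def contour(a, b, n=128):                   # ellipse with semi-axes a, b
          t = np.linspace(0, 2 * np.pi, n, endpoint=False)
          return np.c_[a * np.cos(t), b * np.sin(t)]

      rng = np.random.default_rng(2)
      X, y = [], []
      for _ in range(40):
          r = rng.uniform(0.8, 1.2)
          X.append(fourier_descriptors(contour(r, r) + rng.normal(0, 0.01, (128, 2))))
          y.append(0)                              # class 0: round shapes
          X.append(fourier_descriptors(contour(r, 2 * r) + rng.normal(0, 0.01, (128, 2))))
          y.append(1)                              # class 1: elongated shapes

      clf = SVC(kernel='rbf').fit(X, y)
      print('training accuracy:', clf.score(X, y))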

  18. Automatic Test Case Generator for Object-Z Specifications

    Institute of Scientific and Technical Information of China (English)

    许庆国; 缪淮扣; 曹晓夏; 胡晓波

    2011-01-01

    Most research on test case generation from Object-Z specifications focuses on theory; there is almost no tool support for generating test cases automatically. Object-Z is a formal specification language based on mathematics and logic. Its heavy use of schema composition and abbreviated forms makes it difficult to extract complete semantics and then generate test cases from a specification automatically. This paper provides a solution for extracting semantics and generating test cases from Object-Z specifications by unfolding the schema definitions and improving the Object-Z grammar. The process has three stages: parsing the Object-Z language, extracting semantics, and generating test cases automatically. With the presented prototype tool, the various semantics in a specification can easily be obtained, and visual abstract test cases can be generated automatically based on selected test criteria.

  19. Femtosecond laser-induced hard X-ray generation in air from a solution flow of Au nano-sphere suspension using an automatic positioning system.

    Science.gov (United States)

    Hsu, Wei-Hung; Masim, Frances Camille P; Porta, Matteo; Nguyen, Mai Thanh; Yonezawa, Tetsu; Balčytis, Armandas; Wang, Xuewen; Rosa, Lorenzo; Juodkazis, Saulius; Hatanaka, Koji

    2016-09-01

    Femtosecond laser-induced hard X-ray generation in air from a 100-µm-thick solution film of distilled water or an Au nano-sphere suspension was carried out using a newly developed automatic positioning system with 1-µm precision. When the solution film was positioned for the highest X-ray intensity, the optimum position shifted upstream as the laser power increased, due to breakdown. Optimized positioning allowed us to control the X-ray intensity with high fidelity. X-ray generation from the Au nano-sphere suspension and from distilled water showed different power scaling. Linear and nonlinear absorption mechanisms are analyzed together with numerical modeling of light delivery. PMID:27607607

  20. MATURE: A Model Driven bAsed Tool to Automatically Generate a langUage That suppoRts CMMI Process Areas spEcification

    Science.gov (United States)

    Musat, David; Castaño, Víctor; Calvo-Manzano, Jose A.; Garbajosa, Juan

    Many companies have achieved higher quality in their processes by using CMMI. Process definition may be efficiently supported by software tools, and a higher automation level makes process improvement and assessment activities easier to adapt to customer needs. At present, automation of CMMI is based on tools that support practice definition in a textual way; these tools are often enhanced spreadsheets. In this paper, following the Model Driven Development (MDD) paradigm, a tool that supports automatic generation of a language that can be used to specify process area practices is presented. The generation is performed from a metamodel that represents CMMI. Unlike other available tools, this one can be customized according to user needs. Guidelines to specify the CMMI metamodel are also provided. The paper also shows how this approach can support other assessment methods.

  1. An anatomically oriented breast model for MRI

    Science.gov (United States)

    Kutra, Dominik; Bergtholdt, Martin; Sabczynski, Jörg; Dössel, Olaf; Buelow, Thomas

    2015-03-01

    Breast cancer is the most common cancer in women in the western world. In the breast cancer care cycle, MRI is employed, e.g., in lesion characterization and therapy assessment. Reading a single three-dimensional image or comparing a multitude of such images in a time series is a time-consuming task. Radiological reporting is done manually by translating the spatial position of a finding in an image to a generic representation in the form of a breast diagram, outlining quadrants or clock positions. Currently, registration algorithms are employed to aid with the reading and interpretation of longitudinal studies by providing positional correspondence. To aid with the reporting of findings, knowledge about the breast anatomy has to be introduced to translate from patient-specific positions to a generic representation. In our approach we fit a geometric primitive, the semi-super-ellipsoid, to patient data. Anatomical knowledge is incorporated by fixing the tip of the super-ellipsoid to the mammilla position and constraining its center point to a reference plane defined by landmarks on the sternum. A coordinate system is then constructed by linearly scaling the fitted super-ellipsoid, assigning a unique set of parameters to each point in the image volume. By fitting such a coordinate system to a different image of the same patient, positional correspondence can be generated. We have validated our method on eight pairs of baseline and follow-up scans (16 breasts) that were acquired for the assessment of neo-adjuvant chemotherapy. On average, the predicted and actual locations of manually set landmarks are within a distance of 5.6 mm. Our proposed method allows for automatic reporting simply by uniformly dividing the super-ellipsoid around its main axis.
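    A minimal sketch of the geometric step underlying this coordinate system is given below: fitting a super-ellipsoid surface to 3D points with a generic implicit inside-outside function and SciPy least squares. The parameterization, bounds and toy ellipsoid data are assumptions, not the authors' exact model or anatomical constraints.

      # Hedged sketch of fitting a super-ellipsoid surface to points, the geometric
      # step underlying the breast coordinate system above. The implicit form and the
      # optimizer are generic choices, not the authors' exact model or constraints.
      import numpy as np
      from scipy.optimize import least_squares

      def superellipsoid_implicit(p, params):
          """Inside-outside function F(x); F = 1 on the surface."""
          a, b, c, e1, e2 = params
          x, y, z = np.abs(p[:, 0] / a), np.abs(p[:, 1] / b), np.abs(p[:, 2] / c)
          return (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)

      def fit_superellipsoid(points, init=(1, 1, 1, 1.0, 1.0)):
          resid = lambda params: superellipsoid_implicit(points, params) - 1.0
          return least_squares(resid, init, bounds=([0.1] * 3 + [0.2, 0.2],
                                                    [10] * 3 + [2.0, 2.0])).x

      # Toy data: points on an ellipsoid (a special super-ellipsoid with e1 = e2 = 1).
      rng = np.random.default_rng(3)
      u, v = rng.uniform(0, np.pi, 500), rng.uniform(0, 2 * np.pi, 500)
      pts = np.c_[2.0 * np.sin(u) * np.cos(v), 1.5 * np.sin(u) * np.sin(v), 3.0 * np.cos(u)]
      print('fitted (a, b, c, e1, e2):', np.round(fit_superellipsoid(pts), 2))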

  2. Segmentation of anatomical structures in chest CT scans

    NARCIS (Netherlands)

    van Rikxoort, E.M.

    2009-01-01

    In this thesis, methods are described for the automatic segmentation of anatomical structures from chest CT scans. First, a method to segment the lungs from chest CT scans is presented. Standard lung segmentation algorithms rely on large attenuation differences between the lungs and the surrounding

  3. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available ... advancing the standard of orthopedic patient care. In a few moments, you'll be able to watch a live global AP anatomic total shoulder surgery from Methodist Hospital in Philadelphia. A revolution in shoulder orthopedics, the Global AP gives ...

  4. Construction of a computational anatomical model of the peripheral cardiac conduction system.

    Science.gov (United States)

    Sebastian, Rafael; Zimmerman, Viviana; Romero, Daniel; Frangi, Alejandro F

    2011-12-01

    A methodology is presented here for automatic construction of a ventricular model of the cardiac conduction system (CCS), which is currently a missing block in many multiscale cardiac electromechanic models. It includes the His bundle, left bundle branches, and the peripheral CCS. The algorithm is fundamentally an enhancement of a rule-based method known as the Lindenmayer systems (L-systems). The generative procedure has been divided into three consecutive independent stages, which subsequently build the CCS from proximal to distal sections. Each stage is governed by a set of user parameters together with anatomical and physiological constrains to direct the generation process and adhere to the structural observations derived from histology studies. Several parameters are defined using statistical distributions to introduce stochastic variability in the models. The CCS built with this approach can generate electrical activation sequences with physiological characteristics. PMID:21896384
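    The sketch below shows the bare string-rewriting mechanism of an L-system, the rule-based formalism the generator above enhances; the axiom, rules and symbols are illustrative only and unrelated to the authors' anatomical rule set.

      # Minimal sketch of the rule-based (L-systems) idea behind the CCS generator
      # described above: iterative string rewriting with a branching rule. The rules
      # and symbols here are illustrative, not the authors' anatomical rule set.
      def expand(axiom, rules, iterations):
          """Apply the rewriting rules in parallel to every symbol, `iterations` times."""
          s = axiom
          for _ in range(iterations):
              s = ''.join(rules.get(ch, ch) for ch in s)
          return s

      # F = grow a segment, [ ] = push/pop a branch point, + / - = turn.
      rules = {'F': 'F[+F]F[-F]'}
      tree = expand('F', rules, 3)
      print(len(tree), tree[:60] + '...')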

  6. Unifying the analyses of anatomical and diffusion tensor images using volume-preserved warping

    DEFF Research Database (Denmark)

    Xu, Dongrong; Hao, Xuejun; Bansal, Ravi;

    2007-01-01

    PURPOSE: To introduce a framework that automatically identifies regions of anatomical abnormality within anatomical MR images and uses those regions in hypothesis-driven selection of seed points for fiber tracking with diffusion tensor (DT) imaging (DTI). MATERIALS AND METHODS: Regions of interest...

  7. Research on Automatic Generation Technology for General Crystal Reports

    Institute of Scientific and Technical Information of China (English)

    丛凤侠; 杨玉强

    2013-01-01

    Crystal report production currently has a long development cycle, is difficult to maintain, and struggles to meet users' individual and changing needs. This paper studies automatic generation technology for such reports. The design idea is to have a large database support the front-end program: the report's appearance, structure and program information are stored in the database, and at run time the report is generated automatically from this information. First the interface is designed, including the report header, page header, details, page footer and report footer sections; next the database is designed, covering conceptual structure design and logical structure design; finally the key programs are designed, including the main program, the field-setting routine and the statistics-setting routine. Using automatic generation technology improves the productivity of software development and changes the traditional mode of software development.

  8. Automatic generation of a metamodel from an existing knowledge base to assist the development of a new analogous knowledge base.

    Science.gov (United States)

    Bouaud, J; Séroussi, B

    2002-01-01

    Knowledge acquisition is a key step in the development of knowledge-based systems, and methods have been proposed to help elicit a domain-specific task model from a generic task model. We explored how an existing validated knowledge base (KB) represented by a decision tree could be automatically processed to infer a higher-level domain-specific task model. Oncodoc is a guideline-based decision support system applied to breast cancer therapy. Assuming task identity and ontological proximity between the breast and lung cancer domains, generalization of the breast cancer KB should allow a metamodel to be built that serves as a guide for the elaboration of a new specific KB on lung cancer. Two types of parametrized generalization methods, based on tree structure simplification and ontological abstraction, were used. We defined a similarity distance and a generalization coefficient to select the best metamodel, identified as the closest to the original decision tree among the most generalized metamodels. PMID:12463788

  9. Automatic Generation System for the Principal Disjunctive Normal Form of Propositional Formulas

    Institute of Scientific and Technical Information of China (English)

    张娟

    2013-01-01

    The development of artificial intelligence has been very rapid and is becoming increasingly present in our lives. This development cannot be separated from mathematical logic, of which propositional logic is a significant part. This paper describes the design and implementation of a system that automatically generates the principal disjunctive normal form and the principal conjunctive normal form of propositional formulas.

  10. Automatic generation of epicenter maps with ArcIMS

    Institute of Scientific and Technical Information of China (English)

    董星宏; 贾宁

    2011-01-01

    Using the map-publishing functions of ArcIMS, we implement automatic generation of static epicenter distribution maps and integrate this function into the portal website. This enriches the content of rapid earthquake information reports and saves the time needed to draw epicenter maps manually during emergency response.

  11. [Use of steam-oxygen tents with a universal steam generator and automatic control system in the treatment of acute stenosing laryngotracheitis in children].

    Science.gov (United States)

    Taĭts, B M

    1993-01-01

    To treat acute stenosing laryngotracheitis in acute respiratory viral infection in children, an original method has been developed and used for 2 years in a special hospital department. The method involves treating children in steam-and-oxygen tents with a universal steam-moistening generator and an automatic control system. A controlled study of 50 children with acute laryngeal stenosis of degree I-III confirmed the high efficacy of this method, permitting improvement of blood oxygenation, gas composition and acid-base status, reduction of acidosis, and prevention of exsiccosis and brain edema. The warm humid atmosphere promoted better discharge of secretions and better functioning of the ciliated epithelium. Combined treatment incorporating the tents in acute laryngeal stenoses reduced lethality in severe cases, the number of intubations and tracheostomies, and complications resulting from parenteral administration of drugs. PMID:8009767

  12. Study on data-driven automatic generation of NC code

    Institute of Scientific and Technical Information of China (English)

    李克天; 何汉武; 王志坚; 郑德涛; 陈统坚

    2001-01-01

    A data-driven approach is presented to replace the conventional interactive processing of the manufacturing model, so that NC code can be generated automatically. The principle of the data-driving file, its expression rules, its mode of operation and the NC code generation process are discussed step by step.

  13. Early fetal anatomical sonography.

    LENUS (Irish Health Repository)

    Donnelly, Jennifer C

    2012-10-01

    Over the past decade, prenatal screening and diagnosis has moved from the second into the first trimester, with aneuploidy screening becoming both feasible and effective. With vast improvements in ultrasound technology, sonologists can now image the fetus in greater detail at all gestational ages. In the hands of experienced sonographers, anatomic surveys between 11 and 14 weeks can be carried out with good visualisation rates of many structures. It is important to be familiar with the normal development of the embryo and fetus, and to be aware of the major anatomical landmarks whose absence or presence may be deemed normal or abnormal depending on the gestational age. Some structural abnormalities will nearly always be detected, some will never be and some are potentially detectable depending on a number of factors.

  14. Development of an expert system for automatic mesh generation for S(N) particle transport method in parallel environment

    Science.gov (United States)

    Patchimpattapong, Apisit

    This dissertation develops an expert system for generating an effective spatial mesh distribution for the discrete ordinates particle transport method in a parallel environment. This expert system consists of two main parts: (1) an algorithm for generating an effective mesh distribution in a serial environment, and (2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. The mesh generation algorithm consists of four steps: creation of a geometric model as partitioned into coarse meshes, determination of an approximate flux shape, selection of appropriate differencing schemes, and generation of an effective fine mesh distribution. A geometric model was created using AutoCAD. A parallel code PENFC (Parallel Environment Neutral-Particle First Collision) has been developed to calculate an uncollided flux in a 3-D Cartesian geometry. The appropriate differencing schemes were selected based on the uncollided flux distribution using a least squares methodology. A menu-driven serial code PENXMSH has been developed to generate an effective spatial mesh distribution that preserves problem geometry and physics. The domain decomposition selection process involves evaluation of the four factors that affect parallel performance, which include number of processors and memory available per processor, load balance, granularity, and degree-of-coupling among processors. These factors are used to derive a parallel-performance-index that provides expected performance of a parallel algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems: the VENUS-3 experimental facility and the BWR core shroud.

  15. Automatic Signature Generation Algorithm for Polymorphic Worms

    Institute of Scientific and Technical Information of China (English)

    陈雪林

    2013-01-01

    Addressing the problem of characterizing and automatically extracting signatures of worms that mutate under polymorphic techniques, this paper proposes a feature description method for polymorphic worms and an automatic signature generation algorithm. The MS-PADS (multiple separated string position-aware distribution signature) approach combines the advantages of PADS and Polygraph and solves the problem of determining the PADS signature width under strong background noise. The method can quickly generate high-quality polymorphic worm signatures even under strong noise, with a low false-positive rate, high detection precision and good generality.
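    As a hedged illustration of a position-aware distribution signature in the spirit of PADS/MS-PADS, the sketch below builds a per-position byte-frequency model from toy worm samples and scores candidate byte windows by log-likelihood; it omits the multiple-separated-string and width-determination machinery of the actual algorithm.

      # Hedged sketch of a position-aware distribution signature in the spirit of
      # PADS/MS-PADS: a per-position byte-frequency model built from worm samples and
      # a log-likelihood score for candidate byte windows. Toy data, not the paper's
      # algorithm or parameters.
      import numpy as np

      def build_pads(samples, width):
          """Position-wise byte frequencies (with pseudocounts) over aligned samples."""
          freq = np.ones((width, 256))                      # Laplace pseudocounts
          for s in samples:
              for i, byte in enumerate(s[:width]):
                  freq[i, byte] += 1
          return freq / freq.sum(axis=1, keepdims=True)

      def score(window, model):
          """Average per-byte log-likelihood of a window under the signature model."""
          return np.mean([np.log(model[i, b]) for i, b in enumerate(window)])

      worm_samples = [b'GET /vuln.cgi?x=\x90\x90\x90\x90' + bytes([i]) for i in range(20)]
      benign = b'GET /index.html HTTP/1.1\r\nHo'
      model = build_pads(worm_samples, width=21)
      print('worm-like score:', round(score(worm_samples[0][:21], model), 2))
      print('benign score   :', round(score(benign[:21], model), 2))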

  16. Automatic Contract Generation Method Based on the TASC Model

    Institute of Scientific and Technical Information of China (English)

    张恒; 李自臣

    2011-01-01

    The trusted autonomous service cooperation (TASC) model can resolve the cooperation trust crisis that results from the difficulty of predicting and controlling the behaviour of autonomous individuals. Under this model, service contracts establish cooperation relations between Agents. This paper proposes an effective service contract generation method that uses a service classification system, a scenario-based service discovery mechanism and a service contract protocol template to generate contracts in a fast, reusable way, thereby improving the degree of automation of service composition. The processes of trusted autonomous service cooperation and automatic service contract generation are also described.

  17. Reference Man anatomical model

    Energy Technology Data Exchange (ETDEWEB)

    Cristy, M.

    1994-10-01

    The 70-kg Standard Man or Reference Man has been used in physiological models since at least the 1920s to represent adult males. It came into use in radiation protection in the late 1940s, was developed extensively during the 1950s, and was used by the International Commission on Radiological Protection (ICRP) in its Publication 2 in 1959. The current Reference Man for Purposes of Radiation Protection is a monumental book published in 1975 by the ICRP as ICRP Publication 23. It has a wealth of information useful for radiation dosimetry, including anatomical and physiological data and the gross and elemental composition of the body and of the organs and tissues of the body. The anatomical data include specified reference values for an adult male and an adult female. Other reference values are primarily for the adult male. The anatomical data include much data on fetuses and children, although reference values are not established. There is an ICRP task group currently working on revising selected parts of the Reference Man document.

  18. Automatic programming and generation of collision-free paths for the Mitsubishi Movemaster RV-M1 robot

    Directory of Open Access Journals (Sweden)

    K. Foit

    2011-07-01

    Full Text Available Purpose of this paper: This paper discusses the possibility of developing and implementing a computer system able to generate a collision-free path and prepare the data for direct use in the robot's program. Design/methodology/approach: Existing methods for planning collision-free paths are mainly limited to 2D problems and implemented for mobile robots. Existing methods for planning trajectories in 3D are often complicated and time-consuming, so most of them are not applied in practice and remain theoretical. In this paper a 2½D method is presented together with a method for smoothing the generated trajectory. Experiments have been carried out in a virtual environment as well as on the real robot. Findings: The developed PLANER application has been adapted for cooperation with the Mitsubishi Movemaster RV-M1 robot. The current tests, together with previous ones carried out on the Fanuc RJ3iB robot, have shown the versatility of the method and the possibility of adapting it for cooperation with any robotic system. Research limitations/implications: The next stage of research will concentrate on combining the trajectory generation and simulation phase with the program execution stage in such a way that the determination of the collision-free path can be realized in real time. Practical implications: This approach clearly simplifies the stage of defining the relevant points of the trajectory in order to avoid collisions with the technological objects located in the environment of the robot's manipulator, thereby significantly reducing the time needed to implement the program in the production cycle. Originality/value: The method of generating collision-free trajectories described in the paper combines some existing tools with a new approach to achieve optimal performance of the algorithm.

  19. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e. automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single-objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot

  20. SU-E-J-141: Comparison of Dose Calculation On Automatically Generated MR-Based ED Maps and Corresponding Patient CT for Clinical Prostate EBRT Plans

    Energy Technology Data Exchange (ETDEWEB)

    Schadewaldt, N; Schulz, H; Helle, M; Renisch, S [Philips Research Laboratories Hamburg, Hamburg (Germany); Frantzen-Steneker, M; Heide, U [The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2014-06-01

    Purpose: To analyze the effect of computing radiation dose on automatically generated MR-based simulated CT images compared to true patient CTs. Methods: Six prostate cancer patients received a regular planning CT for RT planning as well as a conventional 3D fast-field dual-echo scan on a Philips 3.0T Achieva, adding approximately 2 min of scan time to the clinical protocol. Simulated CTs (simCT) were synthesized by assigning known average CT values to the tissue classes air, water, fat, cortical and cancellous bone. For this, Dixon reconstruction of the nearly out-of-phase (echo 1) and in-phase images (echo 2) allowed for water and fat classification. Model-based bone segmentation was performed on a combination of the Dixon images, and a subsequent automatic threshold divides the bone into cortical and cancellous bone. For validation, the simCT was registered to the true CT and clinical treatment plans were re-computed on the simCT in Pinnacle³. To differentiate effects related to the 5 tissue classes from changes in the patient anatomy not compensated by rigid registration, we also calculated the dose on a stratified CT, where HU values are sorted into the same 5 tissue classes as the simCT. Results: Dose and volume parameters of the PTV and risk organs as used for the clinical approval were compared. All deviations are below 1.1%, except the anal sphincter mean dose, which is at most 2.2% but well below the clinical acceptance threshold. Average deviations are below 0.4% for the PTV and risk organs and 1.3% for the anal sphincter. The deviations of the stratified CT are in the same range as for the simCT. All plans would have passed clinical acceptance thresholds on the simulated CT images. Conclusion: This study demonstrated the clinical usability of MR-based dose calculation with the presented Dixon acquisition and subsequent fully automatic image processing. N. Schadewaldt, H. Schulz, M. Helle and S. Renisch are employed by Philips Technologie Innovative Technologies, a
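    The bulk-density assignment step described above reduces to a simple look-up from tissue-class labels to HU values; the sketch below shows that mapping with NumPy, using illustrative class codes and HU values rather than the exact numbers from the study.

      # Minimal sketch of the simulated-CT step above: assigning bulk HU values to a
      # tissue-class label map derived from Dixon water/fat images and a bone
      # segmentation. The label codes and HU values here are illustrative defaults,
      # not the exact values used in the study.
      import numpy as np

      HU_BY_CLASS = {0: -1000,   # air
                     1: 0,       # water / soft tissue
                     2: -100,    # fat
                     3: 1200,    # cortical bone
                     4: 300}     # cancellous bone

      def simulated_ct(label_map):
          """Map an integer tissue-class volume to a bulk-HU pseudo-CT volume."""
          lut = np.full(max(HU_BY_CLASS) + 1, -1000, dtype=np.int16)
          for cls, hu in HU_BY_CLASS.items():
              lut[cls] = hu
          return lut[label_map]

      labels = np.random.default_rng(4).integers(0, 5, size=(8, 8, 8))  # toy label map
      print(np.unique(simulated_ct(labels)))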

  1. Design of a Controller for an All-Weather Automatic Solar Tracking Power Generation System

    Institute of Scientific and Technical Information of China (English)

    郑锋; 王炜灵; 陈健强; 陈泽群; 张晓薇

    2014-01-01

    This paper proposes an all-weather automatic solar tracking system. In the detection system, a photoelectric tracking model is used on the hardware side, while the software implements a program that tracks the apparent trajectory of the sun. The control system uses a dual-axis mechanical transmission: a DC motor drives the solar panels to their optimal position, and a transmission linkage allows a single motor to move a whole row of panels. A series of protective measures is included for rainy and high-wind conditions. The device aims at all-day light collection and power generation with a simple structure, low energy consumption and high efficiency.

  2. The ear, the eye, earthquakes and feature selection: listening to automatically generated seismic bulletins for clues as to the differences between true and false events.

    Science.gov (United States)

    Kuzma, H. A.; Arehart, E.; Louie, J. N.; Witzleben, J. L.

    2012-04-01

    Listening to the waveforms generated by earthquakes is not new. The recordings of seismometers have been sped up and played to generations of introductory seismology students, published on educational websites and even included in the occasional symphony. The modern twist on earthquakes as music is an interest in using state-of-the-art computer algorithms for seismic data processing and evaluation. Algorithms such as Hidden Markov Models, Bayesian Network models and Support Vector Machines have been highly developed for applications in speech recognition, and might also be adapted for automatic seismic data analysis. Over the last three years, the International Data Centre (IDC) of the Comprehensive Test Ban Treaty Organization (CTBTO) has supported an effort to apply computer learning and data mining algorithms to IDC data processing, particularly to the problem of weeding through automatically generated event bulletins to find events which are non-physical and would otherwise have to be eliminated by the hand of highly trained human analysts. Analysts are able to evaluate events, distinguish between phases, pick new phases and build new events by looking at waveforms displayed on a computer screen. Human ears, however, are much better suited to waveform processing than are the eyes. Our hypothesis is that combining an auditory representation of seismic events with visual waveforms would reduce the time it takes to train an analyst and the time they need to evaluate an event. Since it takes almost two years for a person of extraordinary diligence to become a professional analyst and IDC contracts are limited to seven years by Treaty, faster training would significantly improve IDC operations. Furthermore, once a person learns to distinguish between true and false events by ear, various forms of audio compression can be applied to the data. The compression scheme which yields the smallest data set in which relevant signals can still be heard is likely an
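    A minimal sketch of the sonification idea is given below: a waveform is normalized and written as audio replayed fast enough to fall in the audible range. The synthetic trace, speed-up factor and output file name are arbitrary choices, not the processing chain used by the authors.

      # Hedged sketch of the "listening to seismograms" idea above: speed a waveform
      # up into the audible range and write it as a WAV file. The speed-up factor and
      # synthetic trace are arbitrary choices, not the authors' processing chain.
      import numpy as np
      from scipy.io import wavfile

      def sonify(trace, original_rate_hz, speedup, path):
          """Write a seismic trace as audio by replaying it `speedup` times faster."""
          x = np.asarray(trace, dtype=float)
          x = x / (np.max(np.abs(x)) + 1e-12)               # normalize to [-1, 1]
          wavfile.write(path, int(original_rate_hz * speedup), (x * 32767).astype(np.int16))

      fs = 100.0                                            # typical broadband sample rate
      t = np.arange(0, 120, 1 / fs)                         # two minutes of synthetic data
      trace = (np.random.default_rng(5).normal(0, 0.05, t.size)
               + np.exp(-(t - 60) ** 2) * np.sin(2 * np.pi * 3 * t))
      sonify(trace, fs, speedup=200, path='event.wav')      # 100 Hz * 200 = 20 kHz audio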

  3. Improved Model and Algorithm for Automatic Generation of Worm Signatures

    Institute of Scientific and Technical Information of China (English)

    汪颖; 康松林

    2012-01-01

    This paper presents a worm signature automatic generation model based on sequence alignment. To address the single-source suspicious traffic samples and coarse preprocessing of existing worm signature generation systems, it applies unified clustering preprocessing to suspicious traffic at the network boundary and to traffic captured by honeypots, improving the purity of the suspicious traffic samples, and it uses a modified T-Coffee multiple sequence alignment algorithm to generate the worm signatures. In experiments, signatures were extracted for the Apache-Knacker and TSIG worms; the results show that the signatures produced by the proposed model are of higher quality than those of the popular Polygraph and Hamsa techniques.

  4. Anatomical imaging for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Philip M [Joint Physics Department, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey SM2 5PT (United Kingdom)], E-mail: phil.evans@icr.ac.uk

    2008-06-21

    The goal of radiation therapy is to achieve maximal therapeutic benefit expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer-planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state-of-the-art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods, which measure basic physical characteristics of tissue such as density, and biological imaging techniques, which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques. Biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally, scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. In ART a set of

  5. Automatic generation of attack vectors for stored XSS

    Institute of Scientific and Technical Information of China (English)

    陈景峰; 王一丁; 张玉清; 刘奇旭

    2012-01-01

    Stored XSS (cross-site scripting) is generally the most harmful type of XSS vulnerability. Based on the characteristics and trigger mechanisms of stored-XSS vulnerabilities, a tool that automatically generates stored-XSS attack vectors was designed and implemented. Using this tool to test the blog-publishing systems of two large Chinese video-sharing websites, six types of attack vectors that trigger stored-XSS vulnerabilities were found. The experimental results demonstrate the effectiveness of the method and the testing tool, and also show that Chinese video websites still carry considerable security risks.

  6. Automatic generation of boundary conditions using Demons non-rigid image registration for use in 3D modality-independent elastography

    Science.gov (United States)

    Pheiffer, Thomas S.; Ou, Jao J.; Miga, Michael I.

    2010-02-01

    Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm, and are often determined by time-consuming point correspondence methods requiring manual user input. Unfortunately, generation of accurate boundary conditions for the biomechanical model is often difficult due to the challenge of accurately matching points between the source and target surfaces and consequently necessitates the use of large numbers of fiducial markers. This study presents a novel method of automatically generating boundary conditions by non-rigidly registering two image sets with a Demons diffusion-based registration algorithm. The method was successfully tested in silico using magnetic resonance and X-ray computed tomography image data with known boundary conditions. These preliminary results have produced boundary conditions with accuracy of up to 80% compared to the known conditions. Finally, these boundary conditions were utilized within a 3D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method.
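    A minimal sketch of the Demons registration step is shown below using SimpleITK's readily available Demons filter on a synthetic image pair; the filter settings are placeholders and this is a stand-in for, not a reproduction of, the registration used in the study.

      # Hedged sketch of generating dense correspondences with a Demons registration,
      # the step used above to derive boundary conditions. SimpleITK's Demons filter
      # is used as a stand-in implementation; parameters are placeholders.
      import numpy as np
      import SimpleITK as sitk

      # Synthetic 2D "source" and "deformed" images: a bright square shifted by a few pixels.
      fixed_arr = np.zeros((64, 64), np.float32)
      fixed_arr[20:40, 20:40] = 100.0
      moving_arr = np.zeros((64, 64), np.float32)
      moving_arr[24:44, 22:42] = 100.0
      fixed, moving = sitk.GetImageFromArray(fixed_arr), sitk.GetImageFromArray(moving_arr)

      demons = sitk.DemonsRegistrationFilter()
      demons.SetNumberOfIterations(200)
      demons.SetStandardDeviations(1.5)            # Gaussian smoothing of the update field
      field = demons.Execute(fixed, moving)

      # Displacement of a physical point on the object boundary, usable as a boundary
      # condition for the corresponding surface node of a biomechanical mesh.
      tx = sitk.DisplacementFieldTransform(sitk.Cast(field, sitk.sitkVectorFloat64))
      print('point (30, 30) maps to', tx.TransformPoint((30.0, 30.0)))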

  7. Automatable on-line generation of calibration curves and standard additions in solution-cathode glow discharge optical emission spectrometry

    International Nuclear Information System (INIS)

    Two methods are described that enable on-line generation of calibration standards and standard additions in solution-cathode glow discharge optical emission spectrometry (SCGD-OES). The first method employs a gradient high-performance liquid chromatography pump to perform on-line mixing and delivery of a stock standard, sample solution, and diluent to achieve a desired solution composition. The second method makes use of a simpler system of three peristaltic pumps to perform the same function of on-line solution mixing. Both methods can be computer-controlled and automated, and thereby enable both simple and standard-addition calibrations to be rapidly performed on-line. Performance of the on-line approaches is shown to be comparable to that of traditional methods of sample preparation, in terms of calibration curves, signal stability, accuracy, and limits of detection. Potential drawbacks to the on-line procedures include signal lag between changes in solution composition and pump-induced multiplicative noise. Though the new on-line methods were applied here to SCGD-OES to improve sample throughput, they are not limited in application to only SCGD-OES—any instrument that samples from flowing solution streams (flame atomic absorption spectrometry, ICP-OES, ICP-mass spectrometry, etc.) could benefit from them. - Highlights: • Describes rapid, on-line generation of calibration standards and standard additions • These methods enhance the ease of analysis and sample throughput with SCGD-OES. • On-line methods produce results comparable or superior to traditional calibration. • Possible alternative, null-point-based methods of calibration are described. • Methods are applicable to any system that samples from flowing liquid streams
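    The on-line mixing described above ultimately comes down to simple dilution arithmetic: choosing pump flow rates so that the combined stream carries the requested standard concentration at constant total flow. The sketch below illustrates that calculation with hypothetical flows and concentrations, not instrument settings.

      # Minimal sketch of the flow-ratio arithmetic behind on-line standard
      # generation: choose pump rates for stock, sample and diluent so the mixed
      # stream has a requested standard concentration at constant total flow.
      # Variable names and flows are illustrative, not instrument settings.
      def pump_rates(c_target, c_stock, total_flow_ml_min, sample_fraction):
          """Return (stock, sample, diluent) flows giving c_target of the standard."""
          stock = total_flow_ml_min * c_target / c_stock      # simple dilution: C1*Q1 = C2*Qtot
          sample = total_flow_ml_min * sample_fraction
          diluent = total_flow_ml_min - stock - sample
          if diluent < 0:
              raise ValueError('requested concentration too high for this stock')
          return stock, sample, diluent

      # Standard-additions series: constant 50% sample, spikes of 0..4 mg/L from a 100 mg/L stock.
      for spike in (0, 1, 2, 3, 4):
          print(spike, 'mg/L ->', tuple(round(q, 3) for q in pump_rates(spike, 100.0, 2.0, 0.5)))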

  8. Optimization of the Automatic Control Process to Improve CDQ Power Generation

    Institute of Scientific and Technical Information of China (English)

    李金凤; 孔德恩

    2015-01-01

    The automatic control system is an important guarantee of stable operation across the production chain from the coke oven and coke dry quenching (CDQ) to waste-heat power generation, and optimizing and improving the automatic control system plays an important role in improving the economic benefits of CDQ power generation. This paper summarizes the characteristics of the CDQ automatic control system and the optimization measures applied at its important control nodes.

  9. Evaluating the Potential of RTK-UAV for Automatic Point Cloud Generation in 3D Rapid Mapping

    Science.gov (United States)

    Fazeli, H.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    During disaster and emergency situations, 3D geospatial data can provide essential information for decision support systems. The use of geospatial data with digital surface models as a basic reference is mandatory to provide an accurate and quick emergency response in so-called rapid mapping activities. The trade-off between accuracy requirements and time restrictions is critical in these situations. UAVs are attractive alternative platforms for 3D point cloud acquisition because of their flexibility and practicability combined with low-cost implementation. Moreover, the high-resolution data collected from UAV platforms can provide a quick overview of the disaster area. The target of this paper is to experiment with and evaluate a low-cost system for the generation of point clouds using imagery collected from a low-altitude small autonomous UAV equipped with a customized single-frequency RTK module. A customized multi-rotor platform is used in this study, and electronic hardware is used to simplify user interaction with the UAV, providing RTK-GPS/camera synchronization; beside the synchronization, lever-arm calibration is performed. The platform is equipped with a Sony NEX-5N 16.1-megapixel camera as imaging sensor. The lens attached to the camera is a ZEISS prime lens with F1.8 maximum aperture and 24 mm focal length to deliver outstanding images. All necessary calibrations were performed and the flight was carried out over the area of interest at a flight height of 120 m above ground level, resulting in a 2.38 cm GSD. Prior to image acquisition, 12 signalized GCPs and 20 check points were distributed in the study area and measured with dual-frequency GPS via the RTK technique, with horizontal accuracy of σ = 1.5 cm and vertical accuracy of σ = 2.3 cm. The results of direct georeferencing are compared to these points, and the experimental results show that decimetre-level accuracy for the 3D point cloud is achievable with the proposed system, which is suitable
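    The reported 2.38 cm GSD at 120 m follows from the usual ground-sample-distance relation, sketched below; the pixel pitch assumed for the Sony NEX-5N is an approximate published value, not a figure taken from the paper.

      # Minimal sketch of the ground sample distance (GSD) relation used above,
      # checked against the reported 2.38 cm GSD at 120 m flight height. The sensor
      # pixel pitch is an assumed value for the Sony NEX-5N, not taken from the paper.
      def gsd_cm(pixel_pitch_um, focal_length_mm, height_m):
          """GSD = pixel size * height / focal length, returned in centimetres."""
          return (pixel_pitch_um * 1e-6) * height_m / (focal_length_mm * 1e-3) * 100.0

      print(round(gsd_cm(pixel_pitch_um=4.8, focal_length_mm=24.0, height_m=120.0), 2), 'cm')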

  10. DEVELOPMENT AND TESTING OF GEO-PROCESSING MODELS FOR THE AUTOMATIC GENERATION OF REMEDIATION PLAN AND NAVIGATION DATA TO USE IN INDUSTRIAL DISASTER REMEDIATION

    Directory of Open Access Journals (Sweden)

    G. Lucas

    2015-08-01

    Full Text Available This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan in which the clean-up parcels are least numerous, considering it the optimal spatial configuration. The second model shifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern with a sampling distance of a fifth of a parcel width and keeps the most optimal shifted version, here also with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated-area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that identifying the optimal orientation can save work, time and money in remediation. The various tests demonstrated that the model gains efficiency when 1/ the individual features of the contaminated area present a significant orientation in their geometry (features are long), 2/ the size of the pollution extent features becomes closer to the size of the parcels (scale effect). The second model shows only a 1% difference with the variation of feature number, so it is less interesting for

  11. Development and Testing of Geo-Processing Models for the Automatic Generation of Remediation Plan and Navigation Data to Use in Industrial Disaster Remediation

    Science.gov (United States)

    Lucas, G.; Lénárt, C.; Solymosi, J.

    2015-08-01

    This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consists of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan where the clean-up parcels are least numerous, considering it the optimal spatial configuration. The second model drifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern with a sampling distance of a fifth of a parcel width and keeps the most optimal drifted version, here also with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that optimal orientation identification can result in saving work, time and money in remediation. The various tests demonstrated that the model gains efficiency when 1/ the individual features of the contaminated area present a significant orientation in their geometry (features are long), 2/ the size of the pollution extent features becomes closer to the size of the parcels (scale effect). The second model shows only 1% difference with the variation of feature number; so this latter model is less interesting for planning
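
    A minimal sketch of the orientation test in the first model, assuming the shapely package; the polluted polygon, parcel size and candidate angles are illustrative, not the trial data:

```python
from shapely.geometry import Polygon, box
from shapely import affinity

def parcel_count(area, parcel_w, parcel_h, angle_deg):
    """Cover `area` with parcels oriented at `angle_deg` and count those needed."""
    # Rotate the polluted area instead of the grid: equivalent and simpler.
    rotated = affinity.rotate(area, -angle_deg, origin="centroid")
    minx, miny, maxx, maxy = rotated.bounds
    count = 0
    y = miny
    while y < maxy:
        x = minx
        while x < maxx:
            if box(x, y, x + parcel_w, y + parcel_h).intersects(rotated):
                count += 1
            x += parcel_w
        y += parcel_h
    return count

# Hypothetical elongated pollution patch and 10 m x 5 m clean-up parcels.
polluted = Polygon([(0, 0), (80, 30), (90, 45), (10, 15)])
counts = {a: parcel_count(polluted, 10.0, 5.0, a) for a in (0, 90, 45, 135)}
best = min(counts, key=counts.get)   # keep the plan with the fewest parcels
print(counts, "-> best orientation:", best)
```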

  12. A program for assisting automatic generation control of the ELETRONORTE using artificial neural network; Um programa para assistencia ao controle automatico de geracao da Eletronorte usando rede neuronal artificial

    Energy Technology Data Exchange (ETDEWEB)

    Brito Filho, Pedro Rodrigues de; Nascimento Garcez, Jurandyr do [Para Univ., Belem, PA (Brazil). Centro Tecnologico; Charone Junior, Wady [Centrais Eletricas do Nordeste do Brasil S.A. (ELETRONORTE), Belem, PA (Brazil)

    1994-12-31

    This work presents an application of an artificial neural network to support decision making in the automatic generation control (AGC) of ELETRONORTE. It uses software to assist real-time AGC decisions. (author) 2 refs., 6 figs., 1 tab.

  13. 软件测试数据自动生成算法的仿真研究%Simulation Research on an Automatic Software Test Data Generation Algorithm

    Institute of Scientific and Technical Information of China (English)

    黄丽芬

    2012-01-01

    Test data are the most crucial part of software testing, and improving automatic test data generation methods is important for raising the degree of software test automation. Aiming at the defects of the traditional genetic algorithm (local optima and slow convergence), a new software test data generation algorithm based on a hybrid genetic and ant colony algorithm is proposed in this paper. Firstly, the genetic algorithm, which has a global searching ability, is used to find a near-optimal solution; then this solution is converted into the initial pheromone of the ant colony algorithm. Finally, the best test data are found quickly through the positive feedback mechanism of the ant colony algorithm. The experimental results show that the proposed method improves the efficiency of software test data generation and has significant practical value.
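
    An illustrative sketch of the GA-to-ACO hand-off the abstract describes, run on a toy search problem; the fitness function, domains and parameters are hypothetical, not from the paper:

```python
import random

DOMAIN = range(100)                       # each test input variable in [0, 99]
def fitness(x, y):                        # toy branch-distance style objective
    return -abs(x - 37) - abs(y - 64)     # maximised when the target branch is hit

# --- Stage 1: genetic algorithm provides a rough global optimum -------------
pop = [(random.choice(DOMAIN), random.choice(DOMAIN)) for _ in range(30)]
for _ in range(50):
    pop.sort(key=lambda ind: fitness(*ind), reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 20:
        (x1, y1), (x2, y2) = random.sample(parents, 2)
        child = [random.choice((x1, x2)), random.choice((y1, y2))]  # crossover
        if random.random() < 0.1:                                   # mutation
            child[random.randrange(2)] = random.choice(DOMAIN)
        children.append(tuple(child))
    pop = parents + children
ga_best = max(pop, key=lambda ind: fitness(*ind))

# --- Stage 2: GA result seeds the initial pheromone of the ant colony -------
pher = [[1.0] * len(DOMAIN) for _ in range(2)]
for dim, val in enumerate(ga_best):
    pher[dim][val] += 10.0                # bias ants towards the GA optimum

def sample(dim):                          # roulette-wheel choice on pheromone
    return random.choices(list(DOMAIN), weights=pher[dim])[0]

best = ga_best
for _ in range(50):                       # positive-feedback search
    ants = [(sample(0), sample(1)) for _ in range(10)]
    best = max(ants + [best], key=lambda ind: fitness(*ind))
    for dim in range(2):                  # evaporate, then reinforce the best
        pher[dim] = [0.9 * p for p in pher[dim]]
        pher[dim][best[dim]] += 1.0
print("generated test data:", best)
```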

  14. A reusable anatomically segmented digital mannequin for public health communication.

    Science.gov (United States)

    Fujieda, Kaori; Okubo, Kosaku

    2016-01-01

    The ongoing development of World Wide Web technologies has facilitated a change in health communication, which has now become bi-directional and encompasses people with diverse backgrounds. To enable an even greater role for medical illustrations, a data set, BodyParts3D, has been generated that can be used by anyone to create and exchange customised three-dimensional (3D) anatomical images. BP3D comprises more than 3000 3D object files created by segmenting a digital mannequin in accordance with anatomical naming conventions. This paper describes the methodologies and features used to generate an anatomically correct male mannequin.

  15. 一种可通用的参数编辑软件设计及实现%A Solution for Automatically Generating a Software Parameter Editing Interface

    Institute of Scientific and Technical Information of China (English)

    雷丽萍; 吴卫平; 何见坤

    2014-01-01

    This paper presents a flexible software parameter editing solution. Through a configuration file, the definitions of data types, parameters and interface format become easy and flexible. When the software runs, it automatically loads and parses the configuration file and generates the parameter editing interface. This method meets the parameter-editing needs of different embedded software, thereby achieving a general-purpose parameter editing interface. The paper focuses on the key algorithm for the executable configuration file.
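
    A hedged sketch of the idea: a configuration file drives the data types, parameters and layout of the generated editing interface. The JSON schema and the console front-end below are invented stand-ins, not the paper's actual format:

```python
import json

CONFIG = json.loads("""
{
  "parameters": [
    {"name": "baud_rate", "type": "int",   "default": 9600, "min": 1200, "max": 115200},
    {"name": "gain",      "type": "float", "default": 1.5,  "min": 0.0,  "max": 10.0},
    {"name": "device_id", "type": "str",   "default": "A01"}
  ]
}
""")

CASTS = {"int": int, "float": float, "str": str}

def edit_parameters(config, answers=None):
    """Generate one editing pass from the config; `answers` lets us run it non-interactively."""
    values = {}
    for i, p in enumerate(config["parameters"]):
        raw = answers[i] if answers else input(f'{p["name"]} [{p["default"]}]: ')
        value = CASTS[p["type"]](raw) if raw != "" else p["default"]
        if "min" in p and not (p["min"] <= value <= p["max"]):
            raise ValueError(f'{p["name"]} out of range')
        values[p["name"]] = value
    return values

print(edit_parameters(CONFIG, answers=["19200", "", "B07"]))
```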

  16. Automatic voltage regulation of synchronous generator using generalized predictive control; Ippanka yosoku seigyo wo mochiita doki hatsudenki no jido den`atsu chosei

    Energy Technology Data Exchange (ETDEWEB)

    Funabiki, S.; Yamakawa, S. [Okayama University, Okayama (Japan). Faculty of Engineering; Ito, T. [Nishishiba Electric Co. Ltd., Hyogo (Japan)

    1995-02-28

    For the automatic voltage regulator (AVR) of a synchronous generator, various applications of self-tuning digital control (STC) have been tested which successively adjust the PID gains to cope with dynamic characteristics such as disturbances of a plant. As one such application, this paper proposes a stable and highly adaptable control system that uses generalized predictive control as the control law and the sequential least-squares method as the identification method. An experiment was carried out by simulation and on an experimental AVR, and the effectiveness of this control method was confirmed. The following points summarize the characteristics of this AVR. The arithmetic time is short, and a highly accurate identification value is obtainable. Since the forgetting factor is determined by the supremum trace gain method, the adaptability of the parameter identification value is increased. Stable control is obtained even if the plant is a non-minimum phase system. 10 refs., 11 figs., 2 tabs.

  17. Automatic Generation of Instrument Sheet and Index Realization with Office VBA%利用Office VBA自动生成相关仪表设计文件

    Institute of Scientific and Technical Information of China (English)

    郭非; 范琳; 付荣申; 陈松华

    2012-01-01

    Currently, most instrument design files in engineering companies, such as instrument data sheets and instrument indexes, are filled in manually or by copying and pasting, which is slow and error-prone and, to some extent, affects the quality of the design files and the engineering progress. To address this situation, a VBA-based tool for the automatic filling of process parameters in instrument data sheets and the automatic generation of instrument indexes is introduced. Practical engineering applications show that the software effectively reduces the labour intensity of designers and improves the quality of the design deliverables.

  18. Hybrid evolutionary algorithm based fuzzy logic controller for automatic generation control of power systems with governor dead band non-linearity

    Directory of Open Access Journals (Sweden)

    Omveer Singh

    2016-12-01

    Full Text Available A new intelligent Automatic Generation Control (AGC) scheme based on Evolutionary Algorithms (EAs) and the Fuzzy Logic concept is developed for a multi-area power system. EAs, i.e. Genetic Algorithm–Simulated Annealing (GA–SA), are used to optimize the gains of Fuzzy Logic Algorithm (FLA)-based AGC regulators for interconnected power systems. The multi-area power system model has three different types of plants, i.e. reheat, non-reheat and hydro, interconnected via Extra High Voltage Alternate Current transmission links. The dynamic model of the system is developed considering one of the most important non-linearities, the Governor Dead Band (GDB). The designed AGC regulators are implemented in the wake of a 1% load perturbation in one of the control areas and the dynamic response plots are obtained for various system states. The investigations carried out in the study reveal that the system dynamic performance with the hybrid GA–SA-tuned Fuzzy technique (GASATF)-based AGC controller is appreciably superior to that of the integral and FLA-based AGC controllers. It is also observed that the incorporation of the GDB non-linearity in the system dynamic model results in degraded system dynamic performance.

  19. Research on the Method Based on LabVlEW for Automatically Generating Detection Report%基于LabVIEW的检测报告自动生成方法研究

    Institute of Scientific and Technical Information of China (English)

    李磊; 杨峰; 何耀

    2012-01-01

    In order to automatically print out the detection results and judgment conclusions of a composite detector for radar receivers, developed on the LabVIEW platform, in accordance with a designated template format and sequence, global variable techniques and Word-document report generation techniques were studied. Using the Report Generation Toolkit and LabVIEW programming, an automatic report generation program for composite detection reports was designed. The program records the various detection results and judgment conclusions in real time and automatically generates a report based on the designated Word template format.

  20. Validation of an anatomical coordinate system for clinical evaluation of the knee joint in upright and closed MRI.

    Science.gov (United States)

    Olender, Gavin; Hurschler, Christof; Fleischer, Benjamin; Friese, Karl-Ingo; Sukau, Andreas; Gutberlet, Marcel; Becher, Christoph

    2014-05-01

    A computerized method to automatically and spatially align the joint axes of in vivo knee scans was established and compared to a fixed reference system implanted in a cadaver model. These computational methods, which generate geometric models from static MRI images with automatic coordinate system fitting, proved consistent and accurate in reproducing joint motion in multiple scan positions. Two MRI platforms, upright and closed, were used to scan a phantom cadaver knee to create a three-dimensional geometric model. The knee was subsequently scanned in several positions of knee bending in a custom-made fixture. Reference markers fixed to the bone were tracked by an external infrared camera system as well as by direct segmentation from the scanned images. Anatomical coordinate systems were automatically fitted to the segmented bone model and the transformations of joint position were compared to the reference marker coordinate systems. The translation and rotation measurements of the automatic coordinate system were found to have root mean square errors below 0.8 mm and 0.7°. In conclusion, the precision of the translation and rotation tracking is sensitive to the scanning modality, whether upright or closed MRI, but still comparable to previously performed studies. The potential to use segmented bone models for patient joint analysis could vastly improve the clinical evaluation of disorders of the knee, with continued application in future three-dimensional computations.

  1. Wien Automatic System Planning (WASP) Package. A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 1: Chapters 1-11

    International Nuclear Information System (INIS)

    As a continuation of its effort to provide comprehensive and impartial guidance to Member States facing the need for introducing nuclear power, the IAEA has completed a new version of the Wien Automatic System Planning (WASP) Package for carrying out power generation expansion planning studies. WASP was originally developed in 1972 in the USA to meet the IAEA's needs to analyze the economic competitiveness of nuclear power in comparison to other generation expansion alternatives for supplying the future electricity requirements of a country or region. The model was first used by the IAEA to conduct global studies (Market Survey for Nuclear Power Plants in Developing Countries, 1972-1973) and to carry out Nuclear Power Planning Studies for several Member States. The WASP system developed into a very comprehensive planning tool for electric power system expansion analysis. Following these developments, the so-called WASP-III version was produced in 1979. This version introduced important improvements to the system, namely in the treatment of hydroelectric power plants. The WASP-III version has been continually updated and maintained in order to incorporate needed enhancements. In 1981, the Model for Analysis of Energy Demand (MAED) was developed in order to allow the determination of electricity demand, consistent with the overall requirements for final energy, and thus, to provide a more adequate forecast of electricity needs to be considered in the WASP study. MAED and WASP have been used by the Agency for the conduct of Energy and Nuclear Power Planning Studies for interested Member States. More recently, the VALORAGUA model was completed in 1992 as a means for helping in the preparation of the hydro plant characteristics to be input in the WASP study and to verify that the WASP overall optimized expansion plan takes also into account an optimization of the use of water for electricity generation. The combined application of VALORAGUA and WASP permits the

  2. Research on Technologies of Service-oriented Test Program Automatic Generation%面向服务的测试程序自动生成技术研究

    Institute of Scientific and Technical Information of China (English)

    王成; 杨森; 孟晨

    2012-01-01

    Automatic test program generation is one of the key technologies of the new-generation automatic test system (ATS). Starting from a service-oriented perspective, the paper first builds an overall framework for service-oriented automatic test program generation; then the Test Flow Description Language (TFDL) is introduced, the XML test description is translated into TFDL via XSLT templates, and the TFDL is translated into an intermediate C program by a TFDL compiler; finally, the test program is generated automatically by a commercial off-the-shelf compiler.

  3. Automatic Reading

    Institute of Scientific and Technical Information of China (English)

    胡迪

    2007-01-01

    Reading is the key to school success and, like any skill, it takes practice. A child learns to walk by practising until he no longer has to think about how to put one foot in front of the other. The great athlete practises until he can play quickly, accurately and without thinking. Educators call it automaticity.

  4. Shape analysis of simulated breast anatomical structures

    Science.gov (United States)

    Contijoch, Francisco; Lynch, Jennifer M.; Pokrajac, David D.; Maidment, Andrew D. A.; Bakic, Predrag R.

    2012-03-01

    Recent advances in high-resolution 3D breast imaging, namely, digital breast tomosynthesis and dedicated breast CT, have enabled detailed analysis of the shape and distribution of anatomical structures in the breast. Such analysis is critically important, since the projections of breast anatomical structures make up the parenchymal pattern in clinical images which can mask the existing abnormalities or introduce false alarms; the parenchymal pattern is also correlated with the risk of cancer. As a first step towards the shape analysis of anatomical structures in the breast, we have analyzed an anthropomorphic software breast phantom. The phantom generation is based upon the recursive splitting of the phantom volume using octrees, which produces irregularly shaped tissue compartments, qualitatively mimicking the breast anatomy. The shape analysis was performed by fitting ellipsoids to the simulated tissue compartments. The ellipsoidal semi-axes were calculated by matching the moments of inertia of each individual compartment and of an ellipsoid. The distribution of Dice coefficients, measuring volumetric overlap between the compartment and the corresponding ellipsoid, as well as the distribution of aspect ratios, measuring relative orientations of the ellipsoids, were used to characterize various classes of phantoms with qualitatively distinctive appearance. A comparison between input parameters for phantom generation and the properties of fitted ellipsoids indicated the high level of user control in the design of software breast phantoms. The proposed shape analysis could be extended to clinical breast images, and used to inform the selection of simulation parameters for improved realism.
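
    A minimal sketch of the moment-matching ellipsoid fit and Dice overlap described above, run on a synthetic voxel mask with numpy; the compartment shape and sizes are illustrative only:

```python
import numpy as np

def fit_ellipsoid(mask):
    """Return centre, principal axes and semi-axes matching the mask's second moments."""
    pts = np.argwhere(mask).astype(float)
    centre = pts.mean(axis=0)
    cov = np.cov((pts - centre).T)
    evals, evecs = np.linalg.eigh(cov)
    semi_axes = np.sqrt(5.0 * evals)     # variance along an axis of a solid ellipsoid = a^2 / 5
    return centre, evecs, semi_axes

def ellipsoid_mask(shape, centre, axes_dirs, semi):
    grid = np.indices(shape).reshape(3, -1).T.astype(float) - centre
    local = grid @ axes_dirs                       # coordinates in the ellipsoid frame
    inside = ((local / semi) ** 2).sum(axis=1) <= 1.0
    return inside.reshape(shape)

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Hypothetical "compartment": an axis-aligned box, a stand-in for an octree cell.
mask = np.zeros((60, 60, 60), dtype=bool)
mask[10:40, 20:50, 25:35] = True
centre, dirs, semi = fit_ellipsoid(mask)
ell = ellipsoid_mask(mask.shape, centre, dirs, semi)
print("semi-axes:", semi.round(2), "Dice:", round(dice(mask, ell), 3))
```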

  5. Automatic Construction of Finite Algebras

    Institute of Scientific and Technical Information of China (English)

    张健

    1995-01-01

    This paper deals with model generation for equational theories, i.e., automatically generating (finite) models of a given set of (logical) equations. Our method of finite model generation and a tool for the automatic construction of finite algebras are described. Some examples are given to show the applications of our program. We argue that the combination of model generators and theorem provers enables us to get a better understanding of logical theories. A brief comparison between our tool and other similar tools is also presented.
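
    A toy model generator in the spirit of the abstract: exhaustively search all binary operations on a small carrier set for those satisfying a given set of equations (commutativity, idempotence and associativity here, chosen purely for illustration):

```python
from itertools import product

def models(n):
    """Yield Cayley tables on {0..n-1} that satisfy the example equations."""
    elems = range(n)
    for flat in product(elems, repeat=n * n):
        op = [list(flat[i * n:(i + 1) * n]) for i in range(n)]
        if all(op[x][y] == op[y][x] for x in elems for y in elems) and \
           all(op[x][x] == x for x in elems) and \
           all(op[op[x][y]][z] == op[x][op[y][z]]
               for x in elems for y in elems for z in elems):
            yield op

found = list(models(2))
print(f"{len(found)} two-element models found")
for table in found:
    print(table)
```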

  6. Review of the Historical Evolution of Anatomical Terms

    Directory of Open Access Journals (Sweden)

    Algieri, Rubén D.

    2011-12-01

    English, listing which updates and supersedes all previous nomenclatures. In September 2001, the Spanish Anatomical Society translated this International Anatomical Terminology into the Spanish language. The study of the historical background of the worldwide development of anatomical terms gives us valuable data about the origin and foundation of the names. It is necessary to raise awareness about the implementation of a unified, updated and uniform anatomical terminology when conducting scientific communications and publications. As specialists in this discipline, we must study and know the official list of anatomical terms in use worldwide (the International Anatomical Terminology), its equivalence with previous classifications, and keep ourselves updated about its changes in order to teach it to new generations of health professionals.

  7. A Mathematical Framework for Incorporating Anatomical Knowledge in DT-MRI Analysis

    OpenAIRE

    Maddah, Mahnaz; Zöllei, Lilla; Grimson, W. Eric L.; Westin, Carl-Fredrik; Wells, William M.

    2008-01-01

    We propose a Bayesian approach to incorporate anatomical information in the clustering of fiber trajectories. An expectation-maximization (EM) algorithm is used to cluster the trajectories, in which an atlas serves as the prior on the labels. The atlas guides the clustering algorithm and makes the resulting bundles anatomically meaningful. In addition, it provides the seed points for the tractography and initial settings of the EM algorithm. The proposed approach provides a robust and automat...

  8. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume

  9. Automatic learning-based beam angle selection for thoracic IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy; Marshall, Andrea [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, Toronto, Ontario M5G 1P5 (Canada); Levinshtein, Alex [Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4 (Canada); Hope, Andrew J.; Lindsay, Patricia [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Pekar, Vladimir [Philips Healthcare, Markham, Ontario L6C 2S3 (Canada)

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
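
    A hedged sketch of the learning step described above: a random forest maps per-beam anatomical features to a beam score, and the top-scoring candidate angles are kept. All features, targets and sizes below are synthetic stand-ins, not clinical data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_plans, n_candidates, n_features = 120, 36, 8               # 36 candidate gantry angles

X = rng.normal(size=(n_plans * n_candidates, n_features))    # per-beam anatomical features
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=len(X))  # stand-in beam score

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Score the candidate beams of a new "patient" and pick the best few angles.
angles = np.arange(0, 360, 10)
new_patient = rng.normal(size=(n_candidates, n_features))
scores = model.predict(new_patient)
selected = angles[np.argsort(scores)[::-1][:7]]               # e.g. a 7-beam plan
print("selected beam angles:", sorted(selected.tolist()))
```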

  10. A GPS-track-based method for automatically generating road-network vector map%基于GPS轨迹的矢量路网地图自动生成方法

    Institute of Scientific and Technical Information of China (English)

    孔庆杰; 史文欢; 刘允才

    2012-01-01

    A method is proposed for automatically generating large-scale road-network vector maps from the tracks of GPS probe vehicles. The method does not require a base map of the road network; using only the tracks formed as GPS probe vehicles move through the road network, it automatically reflects the real topology of the network onto a digital map. The proposed method consists of three steps: first, the longitude/latitude coordinates of the GPS track data are transformed into urban map coordinates; then, a raster skeleton map of the road network is generated from the transformed GPS track data; finally, the generated skeleton map is vectorized. Experiments on automatically generating a real road network with real-world GPS track data indicate that the proposed method can successfully generate road-network maps, and that the generated vector digital map has high accuracy and can satisfy the requirement of automatic and timely digital map updating in vehicle navigation systems, traffic guidance systems, etc.
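
    An illustrative sketch of the three steps (coordinate transform, raster skeleton map, input to vectorisation) using numpy and scikit-image; the projection, tracks and grid size are made up for the example:

```python
import numpy as np
from skimage.morphology import skeletonize

# Step 1: (placeholder) project lon/lat to local map coordinates in metres.
def project(lon, lat, lon0=121.0, lat0=31.0):
    x = (lon - lon0) * 111320 * np.cos(np.radians(lat0))
    y = (lat - lat0) * 110540
    return x, y

# Synthetic noisy probe tracks along one diagonal "road".
rng = np.random.default_rng(0)
lons = np.linspace(121.000, 121.010, 500) + rng.normal(0, 1e-5, 500)
lats = np.linspace(31.000, 31.005, 500) + rng.normal(0, 1e-5, 500)
x, y = project(lons, lats)

# Step 2: accumulate track points into a raster grid (the skeleton map source).
cell = 5.0                                     # 5 m grid cells
ix = ((x - x.min()) / cell).astype(int)
iy = ((y - y.min()) / cell).astype(int)
grid = np.zeros((iy.max() + 1, ix.max() + 1))
np.add.at(grid, (iy, ix), 1)
road_mask = grid >= 1                          # cells crossed by at least one track

# Step 3: thin the mask to a one-cell-wide skeleton; a vectorisation pass would
# then link the skeleton cells into centre-line polylines.
skeleton = skeletonize(road_mask)
print("road cells:", int(road_mask.sum()), "skeleton cells:", int(skeleton.sum()))
```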

  11. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    Science.gov (United States)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method for the vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments according to the anatomical properties of the bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounding and side boundaries are defined as the initially extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen of the cervical vertebrae, the circle model is extended along the z-axis to a cylinder model to take into account additional vessel information from neighbouring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. Experiments show that the proposed method provides accurate results without bone artifacts or eroded vessels in the cervical vertebrae.

  12. ANATOMICAL PROPERTIES OF PLANTAGO ARENARIA

    OpenAIRE

    Nicoleta IANOVICI; SINITEAN, Adrian; Aurel FAUR

    2011-01-01

    Psammophytes are marked by a number of adaptations that enable them to exist in the hard environmental conditions of sand habitats. In this study, the anatomical characteristics of Plantago arenaria were examined. Studies were conducted to assess the diversity of anatomical adaptations of the vegetative organs in this taxon. Results are presented with original photographs. The analysis of leaf anatomy in P. arenaria showed that the leaves contained xeromorphic traits. Arbuscular my...

  13. Finite element speaker-specific face model generation for the study of speech production.

    Science.gov (United States)

    Bucki, Marek; Nazari, Mohammad Ali; Payan, Yohan

    2010-08-01

    In situations where automatic mesh generation is unsuitable, the finite element (FE) mesh registration technique known as mesh-match-and-repair (MMRep) is an interesting option for quickly creating a subject-specific FE model by fitting a predefined template mesh onto the target organ. The irregular or poor quality elements produced by the elastic deformation are corrected by a 'mesh reparation' procedure ensuring that the desired regularity and quality standards are met. Here, we further extend the MMRep capabilities and demonstrate the possibility of taking into account additional relevant anatomical features. We illustrate this approach with an example of biomechanical model generation of a speaker's face comprising face muscle insertions. While taking advantage of the a priori knowledge about tissues conveyed by the template model, this novel, fast and automatic mesh registration technique makes it possible to achieve greater modelling realism by accurately representing the organ surface as well as inner anatomical or functional structures of interest. PMID:20635262

  14. Quantifying anatomical shape variations in neurological disorders.

    Science.gov (United States)

    Singh, Nikhil; Fletcher, P Thomas; Preston, J Samuel; King, Richard D; Marron, J S; Weiner, Michael W; Joshi, Sarang

    2014-04-01

    We develop a multivariate analysis of brain anatomy to identify the relevant shape deformation patterns and quantify the shape changes that explain corresponding variations in clinical neuropsychological measures. We use kernel Partial Least Squares (PLS) and formulate a regression model in the tangent space of the manifold of diffeomorphisms characterized by deformation momenta. The scalar deformation momenta completely encode the diffeomorphic changes in anatomical shape. In this model, the clinical measures are the response variables, while the anatomical variability is treated as the independent variable. To better understand the "shape-clinical response" relationship, we also control for demographic confounders, such as age, gender, and years of education in our regression model. We evaluate the proposed methodology on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline structural MR imaging data and neuropsychological evaluation test scores. We demonstrate the ability of our model to quantify the anatomical deformations in units of clinical response. Our results also demonstrate that the proposed method is generic and generates reliable shape deformations both in terms of the extracted patterns and the amount of shape changes. We found that while the hippocampus and amygdala emerge as mainly responsible for changes in test scores for global measures of dementia and memory function, they are not a determinant factor for executive function. Another critical finding was the appearance of thalamus and putamen as most important regions that relate to executive function. These resulting anatomical regions were consistent with very high confidence irrespective of the size of the population used in the study. This data-driven global analysis of brain anatomy was able to reach similar conclusions as other studies in Alzheimer's disease based on predefined ROIs, together with the identification of other new patterns of deformation. The
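
    A hedged stand-in for the regression described above, using scikit-learn's linear PLSRegression in place of kernel PLS, with synthetic "deformation momenta" as predictors, a synthetic clinical score as response, and simple confounder control:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_subjects, n_momenta = 200, 500
momenta = rng.normal(size=(n_subjects, n_momenta))            # shape descriptors
confounds = rng.normal(size=(n_subjects, 3))                  # e.g. age, sex, education
score = momenta[:, :5].sum(axis=1) + confounds[:, 0] + rng.normal(size=n_subjects)

# Regress the demographic confounders out of the clinical response first.
residual = score - LinearRegression().fit(confounds, score).predict(confounds)

# PLS extracts the shape-deformation patterns that explain the residual score.
pls = PLSRegression(n_components=2).fit(momenta, residual)
print("pattern weights of the first 5 momenta:",
      np.round(pls.x_weights_[:5, 0], 2))
```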

  15. Generating Capacity Prediction of Automatic Tracking Power Generation System on Inflatable Membrane Greenhouse Attached Photovoltaic%光伏充气膜温室自跟踪发电系统发电量预测

    Institute of Scientific and Technical Information of China (English)

    徐小力; 刘秋爽; 见浪護

    2012-01-01

    A method for forecasting the generating capacity of the automatic tracking power generation system of a photovoltaic inflatable membrane greenhouse is proposed, based on a self-adaptive mutation particle swarm neural network that incorporates weather forecast information. Firstly, by combining historical electricity production data and meteorological data, the main factors affecting the generating capacity of the photovoltaic inflatable membrane greenhouse power generation system are analysed. Then, a neural network forecasting model that incorporates the weather forecast is established. The self-adaptive mutation particle swarm algorithm is introduced to improve training, tackling the slow convergence, tendency to fall into local optima, and convergence difficulties of the traditional gradient-descent BP algorithm used in neural network forecasting models. The neural network is optimized with the adaptable mutation particle swarm optimization (AMPSO) algorithm, in which a mutation step is introduced into particle swarm optimization (PSO). Experimental results show that the overall convergence performance is significantly improved by adopting AMPSO and that the premature convergence problem of PSO can be effectively avoided.
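
    A compact sketch of particle swarm optimisation with a mutation step, the optimiser the abstract pairs with a neural-network forecaster; here it only minimises a toy function, and all hyper-parameters are illustrative:

```python
import random

def f(x):                                   # stand-in for the network training error
    return sum((xi - 3.0) ** 2 for xi in x)

DIM, SWARM, ITERS = 4, 20, 200
pos = [[random.uniform(-10, 10) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=f)

for t in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if random.random() < 0.05:          # mutation step to escape stagnation
            pos[i][random.randrange(DIM)] += random.gauss(0, 1)
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest + [gbest], key=f)
print("best solution:", [round(v, 3) for v in gbest], "error:", round(f(gbest), 6))
```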

  16. Development of an automatic positioning system of photovoltaic panels for electric energy generation; Desenvolvimento de um sistema de posicionamento automatico de placas fotovoltaicas para a geracao de energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Alceu F.; Cagnon, Odivaldo Jose [Universidade Estadual Paulista (DEE/FEB/UNESP), Bauru, SP (Brazil). Fac. de Engenharia. Dept. de Engenharia Eletrica; Seraphin, Odivaldo Jose [Universidade Estadual Paulista (DER/FCA/UNESP), Botucatu, SP (Brazil). Fac. de Ciencias Agronomicas. Dept. de Engenharia Rural

    2008-07-01

    This work presents an automatic positioning system for photovoltaic panels, in order to improve the conversion of solar energy into electric energy. A prototype with automatic movement was developed, and its efficiency in generating electric energy was compared to another one with the same characteristics but fixed in space. Preliminary results point to a significant increase in efficiency, obtained from a simplified movement process in which sensors are not used to determine the apparent position of the sun; instead, the relative Sun-Earth position equations are used. An innovative mechanical movement system is also presented, using two stepper motors to move the panel along two axes with independent movement, thereby contributing to saving energy during positioning. The use of the proposed system in rural areas is suggested. (author)
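
    A rough sketch of sensor-less tracking from Sun-Earth geometry as described above, using the common textbook declination and hour-angle approximations; the site latitude and time are illustrative:

```python
import math

def solar_angles(day_of_year, solar_hour, latitude_deg):
    """Solar elevation and azimuth in degrees (azimuth measured from south)."""
    decl = math.radians(23.45) * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))     # 15 degrees per hour
    lat = math.radians(latitude_deg)
    elevation = math.asin(math.sin(lat) * math.sin(decl)
                          + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    azimuth = math.atan2(math.sin(hour_angle),
                         math.cos(hour_angle) * math.sin(lat)
                         - math.tan(decl) * math.cos(lat))
    return math.degrees(elevation), math.degrees(azimuth)

# Example: target angles for the two stepper axes at 10:00 solar time,
# day 80 (near the March equinox), at an assumed latitude of ~22.3 degrees south.
elev, azim = solar_angles(80, 10.0, -22.3)
print(f"panel elevation target {elev:.1f} deg, azimuth target {azim:.1f} deg")
```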

  17. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  18. 基于拓扑分层的配电网电气接线图自动生成算法%An Automatic Electrical Diagram Generation Method for Distribution Networks Based on Hierarchical Topology Model

    Institute of Scientific and Technical Information of China (English)

    廖凡钦; 刘东; 闫红漫; 于文鹏; 黄玉辉; 万顷波

    2014-01-01

    The automatic generation of the electrical diagram of a distribution network is a complex optimization problem; its essence is to determine reasonable relative positions for the equipment of the distribution network in a 2-D plane. Based on a hierarchical topology model with three levels of simplification built on the original topology, an automatic drawing algorithm is proposed that decomposes the problem into three steps: preliminary layout, framework routing and complete drawing. The preliminary layout is obtained with a gravitation-repulsion model. The power station drawing is completed through equipment classification and comparison of the dip angle of each outlet line. Routing priority is used to ensure that there is no overlapping or crossing of the trunk-line routing. The overall electrical diagram is then generated automatically, in full correspondence with the original integrated topology structure. Finally, a practical case study of a city's distribution network shows the effectiveness of the method and of the automatic layout algorithm.
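
    A sketch of the gravitation-repulsion preliminary layout step, using networkx's force-directed (Fruchterman-Reingold) layout as a stand-in for the paper's own model; the tiny feeder topology below is invented for illustration:

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Station A", "Switch 1"), ("Switch 1", "Load 1"), ("Switch 1", "Switch 2"),
    ("Switch 2", "Load 2"), ("Switch 2", "Station B"), ("Station B", "Load 3"),
])

# Attractive forces along edges and repulsive forces between all node pairs settle
# the equipment into well-separated relative positions on the 2-D plane.
positions = nx.spring_layout(g, k=1.0, iterations=200, seed=42)
for node, (x, y) in positions.items():
    print(f"{node:10s} -> ({x:+.2f}, {y:+.2f})")
```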

  19. GBM heterogeneity characterization by radiomic analysis of phenotype anatomical planes

    Science.gov (United States)

    Chaddad, Ahmad; Desrosiers, Christian; Toews, Matthew

    2016-03-01

    Glioblastoma multiforme (GBM) is the most common malignant primary tumor of the central nervous system, characterized among other traits by rapid metastasis. Three tissue phenotypes closely associated with GBMs, namely necrosis (N), contrast enhancement (CE), and edema/invasion (E), exhibit characteristic patterns of texture heterogeneity in magnetic resonance images (MRI). In this study, we propose a novel model to characterize GBM tissue phenotypes using gray level co-occurrence matrices (GLCM) in three anatomical planes. The GLCM encodes local image patches in terms of informative, orientation-invariant texture descriptors, which are used here to sub-classify GBM tissue phenotypes. Experiments demonstrate the model on MRI data of 41 GBM patients, obtained from The Cancer Genome Atlas (TCGA). Intensity-based automatic image registration is applied to align corresponding pairs of fixed T1-weighted (T1-WI) post-contrast and fluid attenuated inversion recovery (FLAIR) images. GBM tissue regions are then segmented using the 3D Slicer tool. Texture features are computed from 12 quantifier functions operating on GLCM descriptors, which are generated from MRI intensities within the segmented GBM tissue regions. Various classifier models are used to evaluate the effectiveness of the texture features for discriminating between GBM phenotypes. Results based on T1-WI scans showed a phenotype classification accuracy of over 88.14%, a sensitivity of 85.37% and a specificity of 96.1%, using the linear discriminant analysis (LDA) classifier. This model has the potential to provide important characteristics of tumors, which can be used for the sub-classification of GBM phenotypes.
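
    A minimal sketch of the GLCM texture step on a synthetic 2-D patch, assuming scikit-image (0.19+ naming); the offsets, angles and grey-level quantisation are illustrative:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
patch = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)    # stand-in MRI ROI

# One GLCM per in-plane direction; the paper repeats this per anatomical plane.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=32, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()              # average over angles
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```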

  20. ANATOMICAL PROPERTIES OF PLANTAGO ARENARIA

    Directory of Open Access Journals (Sweden)

    Nicoleta IANOVICI

    2011-01-01

    Full Text Available Psammophytes are marked by a number of adaptations that enable them to exist in the hard environmental conditions of sand habitats. In this study, the anatomical characteristics of Plantago arenaria were examined. Studies were conducted to assess the diversity of anatomical adaptations of the vegetative organs in this taxon. Results are presented with original photographs. The analysis of leaf anatomy in P. arenaria showed that the leaves contained xeromorphic traits. Arbuscular mycorrhizal symbiosis seems to be critical for their survival.

  1. 热机专业管道名称自动生成系统的实现方法%Implementation of pipe name automatic generating system for mechanical specialty

    Institute of Scientific and Technical Information of China (English)

    米景平

    2012-01-01

    An automatic pipe name generation system for the mechanical specialty is introduced. Based on the principles of three-dimensional system design, with P&ID system diagram software as the design platform and using KKS coding and Excel VBA programming, the system was successfully developed. The compliance and accuracy of the pipe names are assured.

  2. Towards unifying inheritance and automatic program specialization

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2002-01-01

    Inheritance allows a class to be specialized and its attributes refined, but implementation specialization can only take place by overriding with manually implemented methods. Automatic program specialization can generate a specialized, efficient implementation. However, specialization of programs...... with covariant specialization to control the automatic application of program specialization to class members. Lapis integrates object-oriented concepts, block structure, and techniques from automatic program specialization to provide both a language where object-oriented designs can be efficiently implemented

  3. Using automatic programming for simulating reliability network models

    Science.gov (United States)

    Tseng, Fan T.; Schroer, Bernard J.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    This paper presents the development of an automatic programming system for assisting modelers of reliability networks to define problems and then automatically generate the corresponding code in the target simulation language GPSS/PC.

  4. Employing anatomical knowledge in vertebral column labeling

    Science.gov (United States)

    Yao, Jianhua; Summers, Ronald M.

    2009-02-01

    The spinal column constitutes the central axis of the human torso and is often used by radiologists to reference the location of organs in the chest and abdomen. However, visually identifying and labeling vertebrae is not trivial and can be time-consuming. This paper presents an approach to automatically label vertebrae based on two pieces of anatomical knowledge: one vertebra has at most two attached ribs, and ribs are attached only to thoracic vertebrae. The spinal column is first extracted by a hybrid method using the watershed algorithm, directed acyclic graph search and a four-part vertebra model. Then curved reformations in sagittal and coronal directions are computed and aggregated intensity profiles along the spinal cord are analyzed to partition the spinal column into vertebrae. After that, candidates for rib bones are detected using features such as location, orientation, shape, size and density. Then a correspondence matrix is established to match ribs and vertebrae. The last vertebra (from thoracic to lumbar) with attached ribs is identified and labeled as T12. The rest of the vertebrae are labeled accordingly. The method was tested on 50 CT scans and successfully labeled 48 of them. The two failed cases were mainly due to rudimentary ribs.
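
    A toy sketch of the labelling rule described above: the most caudal rib-bearing vertebra is T12, and the remaining vertebrae are labelled relative to it. The rib-attachment pattern is invented, and the snippet assumes the most cranial detected vertebra is thoracic:

```python
def label_vertebrae(has_ribs):
    """has_ribs: booleans ordered cranial -> caudal for the detected vertebrae."""
    t12_index = max(i for i, ribs in enumerate(has_ribs) if ribs)   # last rib-bearing one
    thoracic = [f"T{12 - (t12_index - i)}" for i in range(t12_index + 1)]
    lumbar = [f"L{i + 1}" for i in range(len(has_ribs) - t12_index - 1)]
    return thoracic + lumbar

# 12 rib-bearing vertebrae followed by 5 without ribs (a typical spine).
print(label_vertebrae([True] * 12 + [False] * 5))
```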

  5. An automatic dose verification system for adaptive radiotherapy for helical tomotherapy

    Science.gov (United States)

    Mo, Xiaohu; Chen, Mingli; Parnell, Donald; Olivera, Gustavo; Galmarini, Daniel; Lu, Weiguo

    2014-03-01

    Purpose: During a typical 5-7 week course of external beam radiotherapy, there are potential differences between the planned and actual patient anatomy and positioning, such as patient weight loss or treatment setup changes. The discrepancies between planned and delivered doses resulting from these differences can be significant, especially in IMRT, where dose distributions tightly conform to target volumes while avoiding organs-at-risk. We developed an automatic system to monitor delivered dose using daily imaging. Methods: For each treatment, a merged image is generated by registering the daily pre-treatment setup image and the planning CT using treatment position information extracted from the Tomotherapy archive. The treatment dose is then computed on this merged image using our in-house convolution-superposition based dose calculator implemented on GPU. The deformation field between the merged and planning CT is computed using the Morphon algorithm. The planning structures and treatment doses are subsequently warped for analysis and dose accumulation. All results are saved in DICOM format with private tags and organized in a database. Due to the overwhelming amount of information generated, a customizable tolerance system is used to flag potential treatment errors or significant anatomical changes. A web-based system and a DICOM-RT viewer were developed for reporting and reviewing the results. Results: More than 30 patients were analysed retrospectively. Our in-house dose calculator passed a 97% gamma test evaluated with 2% dose difference and 2 mm distance-to-agreement compared with the Tomotherapy-calculated dose, which is considered sufficient for adaptive radiotherapy purposes. Evaluation of the deformable registration through visual inspection showed acceptable and consistent results, except for cases with large or unrealistic deformation. Our automatic flagging system was able to catch significant patient setup errors or anatomical changes. Conclusions: We developed an automatic dose
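
    A simplified, brute-force sketch of a 2%/2 mm global gamma comparison like the one used to validate the in-house dose engine above; the dose grids below are synthetic:

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dose_tol=0.02, dist_tol_mm=2.0):
    """Fraction of reference points with gamma <= 1 (global normalisation)."""
    ny, nx = ref.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    norm = ref.max()
    passed = 0
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            dose2 = ((eval_ - ref[iy, ix]) / (dose_tol * norm)) ** 2
            gamma = np.sqrt(dist2 / dist_tol_mm ** 2 + dose2).min()
            passed += gamma <= 1.0
    return passed / (ny * nx)

ref = np.fromfunction(lambda y, x: np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0), (64, 64))
eval_ = ref * 1.01                          # evaluated dose 1% higher everywhere
print(f"gamma pass rate: {gamma_pass_rate(ref, eval_, spacing_mm=1.0):.1%}")
```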

  6. Automatic Generation of Working Principle Animation of Mechanical Equipments%基于单晶炉的机械装备工作原理动画自动生成方法

    Institute of Scientific and Technical Information of China (English)

    袁婧; 吴恩启; 杜宝江; 吴志豪

    2013-01-01

    A method is proposed that can automatically generate working-principle animations based on SolidWorks. Using the SolidWorks application programming interface (API) and a high-level language for secondary development, relative links are established between the static assembly constraints of parts and their regular motion. The method realizes the automatic generation of interactive working-principle animations and greatly improves the efficiency of building virtual training systems.

  7. Anatomical structure of Polystichum Roth ferns rachises

    Directory of Open Access Journals (Sweden)

    Oksana V. Tyshchenko

    2012-03-01

    Full Text Available The morpho-anatomical characteristics of the rachis cross sections of five Polystichum species are presented. The main and auxiliary anatomical features which help to distinguish the investigated species are revealed.

  8. Automatic control of nuclear power plants

    International Nuclear Information System (INIS)

    The fundamental concepts in automatic control are surveyed, and the purpose of the automatic control of pressurized water reactors is given. The response characteristics for the main components are then studied and block diagrams are given for the main control loops (turbine, steam generator, and nuclear reactors)

  9. Automatic Association of News Items.

    Science.gov (United States)

    Carrick, Christina; Watters, Carolyn

    1997-01-01

    Discussion of electronic news delivery systems and the automatic generation of electronic editions focuses on the association of related items of different media type, specifically photos and stories. The goal is to be able to determine to what degree any two news items refer to the same news event. (Author/LRW)

  10. Anatomic study of infrapopliteal vessels.

    Science.gov (United States)

    Lappas, D; Stavropoulos, N A; Noussios, G; Sakellariou, V; Skandalakis, P

    2012-08-01

    The purpose of this project is to study and analyse the anatomical variations of the infrapopliteal vessels concerning their branching pattern. A reliable sample of one hundred formalin-fixed adult cadavers was dissected in the Anatomical Laboratory of Athens University. The variations can be classified in the following way: the normal branching of the popliteal artery was present in 90%. The remainder revealed variant branching patterns: hypoplastic or aplastic posterior tibial artery with the pedal arteries arising from the peroneal (3%); hypoplastic or aplastic anterior tibial artery (1.5%); and the dorsalis pedis formed by two equal branches, arising from the peroneal and the anterior tibial artery (2%). The variations were more frequent in females and in short-height individuals. Knowledge of these variations is rather important for any invasive technique concerning the lower extremities.

  11. Application of Automatic Generation Control in Yixing Pumped Storage Power Station%自动发电控制在宜兴抽水蓄能电站的应用

    Institute of Scientific and Technical Information of China (English)

    李海波; 仇岚

    2012-01-01

    The paper introduces the actual condition of the generating units in the East China Yixing Pumped Storage Power Station. After describing the two control methods, i.e. single-unit and unit-group AGC (automatic generation control), the paper explains the concept of hierarchical control in the power station and presents the application of AGC on the power plant side.

  12. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be [Department of Anatomy, Ghent University, Ghent (Belgium); Department of Radiotherapy, Ghent University, Ghent (Belgium); Wouters, Johan [Department of Anatomy, Ghent University, Ghent (Belgium); Vercauteren, Tom; De Gersem, Werner; Duprez, Fréderic; De Neve, Wilfried [Department of Radiotherapy, Ghent University, Ghent (Belgium); Van Hoof, Tom [Department of Anatomy, Ghent University, Ghent (Belgium)

    2015-07-01

    Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters, on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP with the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.

  13. 光伏发电与公用电网互补自动切换系统的研究与设计%The Research and Design of Automatic Switching System between Photovoltaic Power Generation and Public Power Grid

    Institute of Scientific and Technical Information of China (English)

    李雅丽; 薛同莲

    2012-01-01

    Solar photovoltaic power generation is one of the main forms of exploiting and utilizing new energy, but solar power generation is affected by environmental factors such as sunrise and sunset or sunny and cloudy weather, and is therefore intermittent. The resulting power shortage on cloudy and rainy days brings inconvenience to production and daily life. In order to take full advantage of solar power and make it work effectively in complement with the public power grid, an automatic switching system between household photovoltaic power generation and the public power grid is designed in this article. The system is composed of a signal comparison circuit and a switching control circuit. The signal comparison circuit detects the minimum working voltage of the storage battery: when the detected output voltage is lower than the minimum working voltage, the system automatically switches to the public power grid; when the detected battery output voltage is higher than or equal to the minimum working voltage, the system automatically switches back to the solar power generation system.

  14. Research of the Optimal Automatic Generation Control Strategy of Interconnected Power Grid Based on CPS Standard%基于CPS标准的互联电网最优自动发电控制策略研究

    Institute of Scientific and Technical Information of China (English)

    田启东; 翁毅选

    2015-01-01

    This paper studies the automatic generation control (AGC) of an interconnected power grid considering the Control Performance Standard (CPS), and puts forward an AGC strategy based on the CPS standard and optimal dynamic closed-loop control. It establishes the state-space mathematical model of the automatic generation control of a two-area power grid and introduces a new dynamic performance index that takes CPS into account, using the exterior-point penalty function method to solve the objective function. Compared with the traditional PI control strategy under the A standard, the proposed method considers the contribution of the area control error (ACE) to frequency recovery, clearly improves the CPS assessment index, and reduces both the number of regulation actions of the AGC units and the cost of power generation. It also combines the good internal perception and dynamic adaptability of optimal control. Simulation analysis and comparison with the conventional control strategy show that the proposed control strategy has better dynamic characteristics and regulation performance.
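
    A small sketch of the exterior (outer) point penalty-function method mentioned above, shown on a toy constrained problem rather than the AGC objective itself, assuming scipy is available:

```python
from scipy.optimize import minimize_scalar

def penalized(x, r):
    # Toy problem: minimise (x - 5)^2 subject to x <= 3, with a quadratic
    # exterior penalty that is zero inside the feasible region.
    return (x - 5.0) ** 2 + r * max(0.0, x - 3.0) ** 2

r, x = 1.0, 0.0
for _ in range(8):
    x = minimize_scalar(penalized, args=(r,)).x
    r *= 10.0                               # tighten the penalty each outer iteration
print(f"approximate constrained optimum: x = {x:.4f} (exact answer is 3)")
```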

  15. A Polygon Data Automatic Generation Algorithm Based on Topology Information%一种基于拓扑信息的多边形数据自动生成算法

    Institute of Scientific and Technical Information of China (English)

    卢浩; 钟耳顺; 王天宝; 王少华

    2012-01-01

    The automatic generation of polygon data and the creation and maintenance of polygon topology information are essential to GIS, as many GIS operations are based on them. In this paper, current polygon data automatic generation algorithms and polygon topology information generation algorithms are summarized and analysed, and a more efficient polygon data automatic generation algorithm based on topology information (PG-TI) is proposed. Firstly, the core data structures of the algorithm are presented, and its three core processes are described: arc adjacency determination, polygon search and topology relationship determination. Secondly, the creation of topology information during the polygon search process is described, which accelerates the topology relationship determination. Finally, a time complexity analysis of the algorithm is presented, together with an experimental comparison with the traditional algorithm and the corresponding algorithm in ArcGIS.

  16. An automatic modular procedure to generate high-resolution earthquake catalogues: application to the Alto Tiberina Near Fault Observatory (TABOO), Italy.

    Science.gov (United States)

    Di Stefano, R.; Chiaraluce, L.; Valoroso, L.; Waldhauser, F.; Latorre, D.; Piccinini, D.; Tinti, E.

    2014-12-01

    The Alto Tiberina Near Fault Observatory (TABOO) in the upper Tiber Valley (northern Apennines) is an INGV research infrastructure devoted to the study of preparatory processes and deformation characteristics of the Alto Tiberina Fault (ATF), a 60 km long, low-angle normal fault active since the Quaternary. The TABOO seismic network, covering an area of 120 × 120 km, consists of 60 permanent surface and 250 m-deep borehole stations equipped with three-component velocimeters (0.5 s to 120 s) and strong-motion sensors. Continuous seismic recordings are transmitted in real-time to the INGV, where we set up an automatic procedure that produces high-resolution earthquake catalogues (location, magnitudes, first-motion polarities) in near-real-time. A sensitive event detection engine running on the continuous data stream is followed by advanced phase identification, arrival-time picking, and quality assessment algorithms (MPX). Pick weights are determined from a statistical analysis of a set of predictors designed to correctly apply an a-priori chosen weighting scheme. The MPX results are used to routinely update earthquake catalogues based on a variety of (1D and 3D) velocity models and location techniques. We are also applying the DD-RT procedure which uses cross-correlation and double-difference methods in real-time to relocate events with high precision relative to a high-resolution background catalog. P- and S-onset and location information are used to automatically compute focal mechanisms, VP/VS variations in space and time, and periodically update 3D VP and VP/VS tomographic models. We present results from four years of operation, during which this monitoring system analyzed over 1.2 million detections and recovered ~60,000 earthquakes at a detection threshold of ML 0.5. The high-resolution information is being used to study changes in seismicity patterns and fault and rock properties along the ATF in space and time, and to elaborate ground shaking scenarios adopting

  17. Research on Automatic Code Generation Based on SDL%基于SDL语言代码自动生成技术研究

    Institute of Scientific and Technical Information of China (English)

    吴琦; 熊光泽

    2003-01-01

    As one of the key technologies of CASE tools, automatic code generation has wide application prospects. At present, however, several problems limit its use in practical projects, such as the execution efficiency of the generated code and its integration with the target hardware and software. This paper introduces the main factors in automatic code generation, analyses the main parts of SDL-based code generation and the factors that ultimately affect the performance of the generated code, and presents improved methods targeted at different software and hardware platforms and application performance requirements.

  18. Automatic segmentation of male pelvic anatomy on computed tomography images: a comparison with multiple observers in the context of a multicentre clinical trial

    International Nuclear Information System (INIS)

    This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. CT scans of two prostate cancer patients (‘benchmarking cases’), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 “RADAR” trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets were also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. There was considerable agreement amongst all observers and automatic segmentation of the benchmarking cases for bladder (mean spatial variations < 0.4 cm across the majority of image slices). Although there was some variation in interpretation of the superior-inferior (cranio-caudal) extent of rectum, human-observer contours were typically within a mean 0.6 cm of automatically-defined contours. Prostate structures were more consistent for the HR case than the IR case with all human observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. This study has demonstrated quantitative analysis for comparison of multi-observer segmentation studies. For automatic segmentation
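
    The volume and spatial comparisons reported above can be reproduced with standard overlap metrics on binary masks; a minimal sketch using synthetic spherical masks in place of the trial contours (the Dice coefficient is an assumed choice of metric, not one named in the abstract):

        # Minimal sketch of comparing an observer contour with an automatic one using
        # the Dice coefficient and relative volume difference on binary 3D masks.
        import numpy as np

        def sphere_mask(shape, centre, radius):
            zz, yy, xx = np.indices(shape)
            return ((zz - centre[0]) ** 2 + (yy - centre[1]) ** 2
                    + (xx - centre[2]) ** 2) <= radius ** 2

        observer  = sphere_mask((64, 64, 64), (32, 32, 32), 14)   # manual contour (synthetic)
        automatic = sphere_mask((64, 64, 64), (32, 33, 31), 12)   # auto-segmented contour (synthetic)

        dice = 2.0 * np.logical_and(observer, automatic).sum() / (observer.sum() + automatic.sum())
        vol_diff = (observer.sum() - automatic.sum()) / automatic.sum() * 100.0

        print(f"Dice = {dice:.3f}, observer volume {vol_diff:+.1f}% relative to automatic")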

  19. 一种基于配合特征面的三维装配尺寸链自动生成方法%Method of Automatic Generation Based on 3D Assembly Dimension Chain with Characteristics of Surface

    Institute of Scientific and Technical Information of China (English)

    王光磊; 吴玉光; 勾波

    2015-01-01

    由于目前尺寸链自动生成存在着人机交互过多,实用性不强,并且主要集中在二维的情况。提出了一种以图论为基础,基于UG的三维空间尺寸链自动生成的方法。通过人机交互对装配体进行自由度的约束,使用UG/OPEN API函数对零件模型进行信息提取;利用图论理论,构建尺寸公差、形位公差和装配邻接表,建立了公差图结构。在此基础上,提出了基于特征面要素的搜索算法,该算法减少了不必要的人机交互,真正意义上实现了三维空间尺寸链的自动生成。通过实例验证了该算法,证明有效。%Existing approaches to automatic dimension chain generation suffer from problems such as excessive human interaction and limited practicality, and are mainly restricted to the two-dimensional case. This paper presents a method for automatically generating three-dimensional dimension chains based on graph theory and UG. Human-computer interaction is used to constrain the degrees of freedom of the assembly, and the UG/OPEN API functions are used to extract information from the part models. A tolerance graph structure covering dimensional tolerances, form and position tolerances, and assembly adjacency lists is then built on graph theory. On this basis, a search algorithm based on mating feature surfaces is proposed, which reduces unnecessary man-machine interaction and truly realizes automatic generation of 3D dimension chains. The effectiveness of the algorithm is demonstrated with an illustrative example.
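
    The search for a dimension chain can be viewed as finding a path in a graph whose nodes are mating feature surfaces and whose edges carry dimensions or fits. A minimal sketch using breadth-first search on a made-up assembly (the feature names and graph are illustrative, not the paper's UG model or its exact search algorithm):

        # Minimal sketch of searching a dimension chain as a path between two
        # feature surfaces in a graph of mating features.
        from collections import deque

        # feature surface -> neighbouring feature surfaces linked by a dimension/fit
        graph = {
            "shaft_end": ["shaft_shoulder"],
            "shaft_shoulder": ["shaft_end", "bearing_inner"],
            "bearing_inner": ["shaft_shoulder", "bearing_outer"],
            "bearing_outer": ["bearing_inner", "housing_bore"],
            "housing_bore": ["bearing_outer", "housing_face"],
            "housing_face": ["housing_bore"],
        }

        def dimension_chain(start, goal):
            """Breadth-first search for the shortest chain of linked feature surfaces."""
            queue, seen = deque([[start]]), {start}
            while queue:
                path = queue.popleft()
                if path[-1] == goal:
                    return path
                for nxt in graph[path[-1]]:
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(path + [nxt])
            return None

        print(dimension_chain("shaft_end", "housing_face"))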

  20. Research of Comprehensive Evaluation Automatic Generation System Based on Ontology%基于本体的综合评价文本自动生成系统研究

    Institute of Scientific and Technical Information of China (English)

    殷红梅

    2014-01-01

    With the rapid development of information technology, information processing has become an important research topic, and extracting the necessary and reasonably accurate information from large volumes of data is a widespread problem. Addressing this problem, and based on the analysis of a large number of comment texts, this paper proposes an ontology-based method for automatically generating comprehensive evaluation texts, which can rapidly process large collections of comments and automatically produce the corresponding comprehensive evaluation.%随着信息技术的高速发展,信息处理已经成为目前最重要的研究内容,如何从大量的相关信息中获取我们需要的且相对准确的信息已经成为当前社会的一大难题。本文针对这一问题展开研究,通过对大量评语文本的分析,提出了一种基于本体的综合评价文本自动生成的方法,可以快速处理大量评语文本,从而自动获取相应的综合评价文本。

  1. Automatically Generated Model of Book Acquisitioning Recommendation List Using Text-Mining%基于文本挖掘的图书采访推荐清单自动生成模型

    Institute of Scientific and Technical Information of China (English)

    张成林

    2013-01-01

      由于大多数图书馆的采编馆员人数有限,依靠传统的手工统计表单方式采访图书往往很难令读者满意。运用Web的文本挖掘技术提取读者历史查询关键词,构造一种图书采访推荐清单的自动生成模型。实验数据证明,该模型有着较好的召回率和精准率,能有效地执行清单的自动生成。%For most libraries, there is only limited number of acquisitioning and cataloging librarians. Using traditional manual statistics form methods, the acquisitioning result is often far from satisfying the readers’ needs. Based on web text mining technology, by extracting history query keywords used by readers, the paper gives an automatic model of book-acquisitioning recommendation list. Experiments show that the model has better recall rates and accuracy rates, and the effective book-acquisitioning list can be automatically generated.

  2. Effect of anatomical backgrounds on detectability in volumetric cone beam CT images

    Science.gov (United States)

    Han, Minah; Park, Subok; Baek, Jongduk

    2016-03-01

    As anatomical noise is often a dominating factor affecting signal detection in medical imaging, we investigate the effects of anatomical backgrounds on signal detection in volumetric cone beam CT images. Signal detection performances are compared between transverse and longitudinal planes with either uniform or anatomical backgrounds. Sphere objects with diameters of 1mm, 5mm, 8mm, and 11mm are used as the signals. Three-dimensional (3D) anatomical backgrounds are generated using an anatomical noise power spectrum, 1/fβ, with β=3, equivalent to mammographic background [1]. The mean voxel value of the 3D anatomical backgrounds is used as an attenuation coefficient of the uniform background. Noisy projection data are acquired by the forward projection of the uniform and anatomical 3D backgrounds with/without sphere lesions and by the addition of quantum noise. Then, images are reconstructed by an FDK algorithm [2]. For each signal size, signal detection performances in transverse and longitudinal planes are measured by calculating the task SNR of a channelized Hotelling observer with Laguerre-Gauss channels. In the uniform background case, transverse planes yield higher task SNR values for all sphere diameters but 1mm. In the anatomical background case, longitudinal planes yield higher task SNR values for all signal diameters. The results indicate that it is beneficial to use longitudinal planes to detect spherical signals in anatomical backgrounds.
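
    The anatomical background described above follows a power-law noise power spectrum 1/f^beta with beta = 3. A minimal sketch of generating such a texture by shaping white noise in the frequency domain, shown in 2D for brevity (the study itself uses 3D volumes and a full channelized observer model, which are not reproduced here):

        # Minimal sketch of a power-law (1/f^beta) background texture.
        import numpy as np

        def power_law_background(n=256, beta=3.0, seed=0):
            rng = np.random.default_rng(seed)
            fx = np.fft.fftfreq(n)
            f = np.sqrt(fx[None, :] ** 2 + fx[:, None] ** 2)   # radial spatial frequency
            f[0, 0] = 1.0                                       # placeholder, removed below
            amplitude = f ** (-beta / 2.0)                      # |F| ~ f^(-beta/2)  =>  PS ~ 1/f^beta
            amplitude[0, 0] = 0.0                               # drop the DC term
            phase = np.exp(2j * np.pi * rng.random((n, n)))     # random phases
            bg = np.real(np.fft.ifft2(amplitude * phase))
            return (bg - bg.mean()) / bg.std()                  # zero-mean, unit-variance texture

        background = power_law_background()
        print(background.shape, round(float(background.std()), 3))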

  3. An Automatic Mosaicking Algorithm for the Generation of a Large-Scale Forest Height Map Using Spaceborne Repeat-Pass InSAR Correlation Magnitude

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2015-05-01

    Full Text Available This paper describes an automatic mosaicking algorithm for creating large-scale mosaic maps of forest height. In contrast to existing mosaicking approaches through using SAR backscatter power and/or InSAR phase, this paper utilizes the forest height estimates that are inverted from spaceborne repeat-pass cross-pol InSAR correlation magnitude. By using repeat-pass InSAR correlation measurements that are dominated by temporal decorrelation, it has been shown that a simplified inversion approach can be utilized to create a height-sensitive measure over the whole interferometric scene, where two scene-wide fitting parameters are able to characterize the mean behavior of the random motion and dielectric changes of the volume scatterers within the scene. In order to combine these single-scene results into a mosaic, a matrix formulation is used with nonlinear least squares and observations in adjacent-scene overlap areas to create a self-consistent estimate of forest height over the larger region. This automated mosaicking method has the benefit of suppressing the global fitting error and, thus, mitigating the “wallpapering” problem in the manual mosaicking process. The algorithm is validated over the U.S. state of Maine by using InSAR correlation magnitude data from ALOS/PALSAR and comparing the inverted forest height with Laser Vegetation Imaging Sensor (LVIS height and National Biomass and Carbon Dataset (NBCD basal area weighted (BAW height. This paper serves as a companion work to previously demonstrated results, the combination of which is meant to be an observational prototype for NASA’s DESDynI-R (now called NISAR and JAXA’s ALOS-2 satellite missions.
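
    The mosaicking step above adjusts per-scene estimates so that measurements in scene overlaps become self-consistent. A heavily simplified, linearised sketch of that idea — one additive bias per scene solved by least squares, with scene 0 fixed as reference — rather than the paper's nonlinear formulation with two fitting parameters per scene (the overlap values below are invented):

        # Minimal sketch of mosaic adjustment from overlap observations.
        import numpy as np

        n_scenes = 4
        # (i, j, measured mean height difference h_i - h_j in their overlap) [m]
        overlaps = [(0, 1, 1.8), (1, 2, -0.7), (2, 3, 2.4), (0, 2, 1.0)]

        A = np.zeros((len(overlaps) + 1, n_scenes))
        y = np.zeros(len(overlaps) + 1)
        for row, (i, j, d) in enumerate(overlaps):
            A[row, i], A[row, j], y[row] = 1.0, -1.0, d
        A[-1, 0] = 1.0                                # constraint: bias of scene 0 is zero

        biases, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("per-scene bias corrections [m]:", np.round(biases, 2))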

  4. Generalduty Question Databank and Exam Papers Automatically Generate System Design and Implementation%通用试题库及试卷自动生成系统的设计与实现

    Institute of Scientific and Technical Information of China (English)

    樊继东

    2013-01-01

    This paper builds a general-purpose question bank and automatic exam paper generation system, developed in Delphi with SQL Server 2008. The system can compose papers for different courses either manually or automatically and saves them as Word documents. Test results show that the system is fully functional, reliable, easy to operate, and has good generality.%提出了一种基于Delphi,应用SQL Server2008技术的通用题库及试卷自动生成系统,可针对不同课程进行手或自动组卷,并以Word文档形式保存。经测试,该系统功能完善,性能可靠,操作方便,通用性好。

  5. Digital imaging in anatomic pathology.

    Science.gov (United States)

    O'Brien, M J; Sotnikov, A V

    1996-10-01

    Advances in computer technology continue to bring new innovations to departments of anatomic pathology. This article briefly reviews the present status of digital optical imaging, and explores the directions that this technology may lead over the next several years. Technical requirements for digital microscopic and gross imaging, and the available options for image archival and retrieval are summarized. The advantages of digital images over conventional photography in the conference room, and the usefulness of digital imaging in the frozen section suite and gross room, as an adjunct to surgical signout and as a resource for training and education, are discussed. An approach to the future construction of digital histologic sections and the computer as microscope is described. The digital technologic applications that are now available as components of the surgical pathologist's workstation are enumerated. These include laboratory information systems, computerized voice recognition, and on-line or CD-based literature searching, texts and atlases and, in some departments, on-line image databases. The authors suggest that, in addition to these resources that are already available, tomorrow's surgical pathology workstation will include network-linked digital histologic databases, on-line software for image analysis and 3-D image enhancement, expert systems, and ultimately, advanced pattern recognition capabilities. In conclusion, the authors submit that digital optical imaging is likely to have a significant and positive impact on the future development of anatomic pathology. PMID:8853053

  6. Implementation of an Automatic System for the Monitoring of Start-up and Operating Regimes of the Cooling Water Installations of a Hydro Generator

    Directory of Open Access Journals (Sweden)

    Ioan Pădureanu

    2015-07-01

    Full Text Available The safe operation of a hydro generator depends on its thermal regime, the basic condition being that the temperature of the stator winding stays within the limits of its insulation class. Since the copper losses depend on the square of the stator winding current, the cooling water flow rate must be adapted to these losses so that the winding temperature remains within the range prescribed in the specifications. This paper presents an efficient solution for controlling and monitoring the water cooling installations of two high-power hydro generators.

  7. 基于同步约束解除的零件爆炸图自动生成方法%Method for Automatic Generation of Exploded View Based on Synchronous Constraint Release

    Institute of Scientific and Technical Information of China (English)

    赵鸿飞; 张琦; 王海涛; 赵洋; 方宝山

    2015-01-01

    面向结构教学及维修人员培训,提出了一种基于零件几何约束关系同步解除的爆炸图自动生成方法。在定义零件拆卸轴向的基础上,建立了零件邻接拆卸约束关系矩阵及约束类型矩阵,按照可同步解除几何约束的顺序对零件进行分层,并利用判断规则识别子装配体。结合应用 OBB和 FDH 两种包围盒,提出了一种“由外向内”的等速率分层牵引零件爆炸分离方法,实现了装配体组成零件爆炸图的自动生成。%To support structure teaching and the training of maintenance personnel, a method for automatically generating exploded views is proposed based on the synchronous release of the geometric constraint relations between parts. A part adjacency constraint relation matrix and a constraint type matrix are built after defining the disassembly axis of each part. Parts are stratified according to the order in which their geometric constraints can be released synchronously, and sub-assemblies are identified by judgement rules. A constant-rate, layer-by-layer explosion method that separates parts from the outside inwards is constructed, and OBB (oriented bounding box) and FDH (fixed directions hulls) bounding boxes are used to realize the automatic generation of exploded views of the assembly's component parts.

  8. Design and Implementation of Urban Planning and Mapping Results Data Automatic Generation System%城市规划测绘成果资料自动化生成系统设计与实现

    Institute of Scientific and Technical Information of China (English)

    吴凯华; 程相兵; 黄昀鹏; 谢武强

    2015-01-01

    With the development and popularization of computer technology, informatization of surveying and mapping has become a clear trend. Considering how urban planning and mapping deliverables are compiled, and according to the actual needs of production units, the software was coded in C# on the Visual Studio 2013 platform with SQL Server 2008 as the database management platform, using .NET and Office components for secondary development against several versions of Microsoft Office Word. On this basis an automatic generation system for urban planning surveying and mapping deliverables was designed and implemented. The system can generate these deliverables automatically, and its practical application in many aspects of the engineering survey team's production at our unit has verified the advancement and practicality of the software.%随着计算机技术的发展和普及,信息化测绘已成为当今的一种趋势。针对城市规划测绘成果资料的整理方式,根据生产单位的实际需求,基于SQL Server 2008数据库管理平台和Visual Studio 2013平台的C#语言进行软件编码。利用.NET和office组件对Microsoft Office Word等多系列版本的二次开发,设计和实现了城市规划测绘成果资料自动化生成系统软件。该软件系统能够自动化生成城市规划测绘成果资料,通过本单位测量队工程生产多方面的实际应用,验证了该软件的先进性和实用性。

  9. Making anatomical dynamic film using the principle of linear motion

    Institute of Scientific and Technical Information of China (English)

    Sun Guosheng

    2015-01-01

    Objective: The aim of this study was to develop dynamic teaching aids that help students combine human morphology and function during study, and to understand and memorize important and difficult content using the simulated physiological function of organs and systems. Methods: The design of the aids was based on our own innovation. The apparent linear movement is derived from the number of lines, the thickness of a line, and the distance and angle between lines. According to the effect of the line stripes, the stripes were divided into two types: (1) parallel straight lines meeting the following criteria - 12 stripes per cm, equal stripe thickness, equal distance between adjacent stripes, and printable on a transparent film; (2) straight-line and curved stripes meeting the following criteria - an equal or unequal spacing between the stripes, with the curved stripes drawn from a mathematical equation, digitalized and stored in a computer. Results: (1) Demonstrating a dynamic effect: parallel straight stripes at 12 stripes per centimetre were printed on a transparent film. This film was termed "the moving film" because its effect appears while it is moved. Another, static, film was made with stripes in different directions. After the moving film was overlaid on the static film, slowly moving it produced a wave-like spreading effect. (2) Producing a dynamic film: the quality of a dynamic film is determined by the quality of the "static film". The first step was to design and draw the figures, leaving space for generating the dynamic impression, then to assemble them and check the dynamic effect until it was satisfactory. Since it proved impossible to draw the more difficult curvilinear motion fringes by hand, mathematical equations were entered into a computer connected to an automatic plotter for drawing. The variety of drawn static fringe patterns was stored in a computer as a library to be accessed at any time. Conclusions

  10. SimWorld – Automatic Generation of realistic Landscape models for Real Time Simulation Environments – a Remote Sensing and GIS-Data based Processing Chain

    OpenAIRE

    Sparwasser, Nils; Stöbe, Markus; Friedl, Hartmut; Krauß, Thomas; Meisner, Robert

    2007-01-01

    The interdisciplinary project “SimWorld” - initiated by the German Aerospace Center (DLR) - aims to improve and to facilitate the generation of virtual landscapes for driving simulators. It integrates the expertise of different research institutes working in the field of car simulation and remote sensing technology. SimWorld will provide detailed virtual copies of the real world derived from air- and satellite-borne remote sensing data, using automated geo-scientific analysis techniques for m...

  11. 基于实时安全约束经济调度的自动发电控制模型%Automatic Generation Control Model Based on Real-time Security Constrained Economic Dispatch

    Institute of Scientific and Technical Information of China (English)

    石辉; 袁林山

    2015-01-01

    To improve the intelligence and real-time dispatch performance of automatic generation control (AGC), a security constrained economic dispatch (SCED) model for active AGC coordination is proposed, based on synchronous ultra-short-term prediction of the distributed load of the whole grid and relying on the online data platform of the new D5000 system. Rolling grey prediction of load and frequency deviation at the grid's stations is used, on the one hand, to anticipate whether AGC regulation capacity will be sufficient and so prompt manual intervention, and on the other hand, to pre-allocate the output of the units with the goals of keeping unit generation close to the optimal day-ahead plan and minimizing the regulation cost, thereby realizing a highly intelligent, actively coordinated AGC dispatch function. Simulations on the dispatch simulation system of a provincial power grid in central China show that the model has clear advantages in ensuring grid security, optimizing operational indices, and improving the automation of generation dispatch.%为提高自动发电控制(automatic generation control,AGC)智能化水平及实时调节效果,依托新型 D5000系统在线数据平台,提出基于全网分布负荷同步超短期预测的 AGC 主动协调安全约束经济调度模型。通过全网站点负荷差频滚动灰色预测,一方面预判 AGC 可调以提示人工干预,另一方面以机组发电尽量接近最优日前计划及尽量降低调节代价为目标,预分配各机组出力,从而实现高度智能化的 AGC 主动协调调度功能。在华中某省级电网调度模拟系统进行仿真计算,结果表明该模型在保障电网安全、优化运行指标、提高发电调度自动化程度等方面具有明显的优势。
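
    The rolling grey prediction referred to above is typically the GM(1,1) model: accumulate the series, fit the development and grey-input coefficients by least squares, and forecast from the time-response function. A minimal sketch on a synthetic load series (the data and window length are illustrative, not the paper's):

        # Minimal sketch of GM(1,1) grey prediction for ultra-short-term load forecasting.
        import numpy as np

        def gm11_forecast(x0, steps=1):
            """Fit GM(1,1) to the sequence x0 and forecast `steps` values ahead."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                                   # accumulated series
            z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
            B = np.column_stack((-z1, np.ones_like(z1)))
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # development / grey input coefficients
            n = len(x0)
            k = np.arange(n + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function
            x0_hat = np.diff(x1_hat, prepend=x1_hat[0])          # restore to the original series
            x0_hat[0] = x0[0]
            return x0_hat[n:]

        load = [512, 520, 531, 545, 556, 570]                    # MW, illustrative
        print("next-interval load forecast:", gm11_forecast(load, steps=2))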

  12. Automatic Generation of Attack-based Signature%基于攻击特征签名的自动生成

    Institute of Scientific and Technical Information of China (English)

    王国栋; 陈平; 茅兵; 谢立

    2012-01-01

    签名可以基于攻击特征的相关信息生成.在栈上针对控制流攻击中对函数调用返回值和函数调用指针的攻击以及非控制流中对与判断相关联的数据的攻击,结合动态分析技术生成二进制签名.首先,识别出漏洞相关指令;然后,用虚拟机监控运行上述指令;最后,修改虚拟机以在监控到恶意写行为时报警并生成签名.同时生成的补丁文件记录恶意写指令以便后继执行时跳过.签名可迅速分发给其他主机,在轻量级虚拟机上监测程序运行.实验表明,二进制签名具有准确、精简的优点,可以防御多态攻击,同时具有较低漏报率,结合使用轻量级虚拟机可使签名生成和后继检测都快速高效.%Signatures can be generated based on characteristics of attacks. Using dynamic program analyzing skills we generated binary signatures for control flow attack to return value of function call and function call pointer, and non-control flow attack to decision-related variable. First, we identified instructions related to the vulnerability. Second, we monitored these instructions using a modified virtual machine. At last, we alerted and generated signature after finding any malicious write behaviors. Patch recorded malicious write instructions could be generated meanwhile to ignore these instructions in future execution. Generated signature could be sent to other computers to monitor the same software's execution using lightweight virtual machine. Experiment results show that binary level signature has simplified form and precise functionality and low false negative and is effective to defense polymorphic attack. Besides, lightweight virtual machine makes use of the signature fast.

  13. Automatic schema evolution in Root

    International Nuclear Information System (INIS)

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and objects inspected, also when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case when multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file

  14. Automatic Schema Evolution in Root

    Institute of Scientific and Technical Information of China (English)

    Rene Brun; Fons Rademakers

    2001-01-01

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition, this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and its objects inspected, even when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case in which multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file.

  15. 考虑光伏组件发电性能的自动除尘系统运行时间优化%Optimization of running time of automatic dedusting system considered generating performance of PV mudules

    Institute of Scientific and Technical Information of China (English)

    郭枭; 澈力格尔; 韩雪; 田瑞

    2015-01-01

    Low power generation efficiency is one of the main obstacles to the large-scale application of PV (photovoltaic) modules, so studying the factors that influence it is of great significance. This article describes an independently developed automatic dedusting system for PV modules, which has a simple structure, low installation cost and reliable operation, removes deposited dust continuously and effectively, and requires no water. The system has been applied in three settings: stand-alone PV supplies operating at ambient temperatures of -45℃ to 35℃, experimental tests of different PV module mounting angles, and a large-area PV power system. The dedusting effect of the system was tested on the stand-alone PV supply at temperatures of -10℃ to 5℃. With the automatic dedusting system in place, the dynamic shading that occurs during operation was simulated and its influence on the output parameters of the PV modules was studied; the dedusting effect was analysed for different amounts of deposited dust, the way it changes with the amount of deposition was summarized, and the start time and running period of the system were determined. The experimental PV modules were placed outdoors on open ground at an angle of 45° for 3, 7 and 20 days, giving dust depositions of 0.1274, 0.2933 and 0.8493 g/m2 respectively. The correction coefficient of the PV modules used in the experiments is 0.9943. The results show that, as the system performs its horizontal and vertical cycles, the cleaning brush makes the output parameters of the PV modules, including output power, current and voltage, follow a V-shaped pattern each time it crosses a row of cells. Compared with the downward pass, the output parameters of the PV modules during the upward pass fluctuate

  16. Automatic landmark extraction from image data using modified growing neural gas network.

    Science.gov (United States)

    Fatemizadeh, Emad; Lucas, Caro; Soltanian-Zadeh, Hamid

    2003-06-01

    A new method for automatic landmark extraction from MR brain images is presented. In this method, landmark extraction is accomplished by modifying growing neural gas (GNG), which is a neural-network-based cluster-seeking algorithm. Using modified GNG (MGNG) corresponding dominant points of contours extracted from two corresponding images are found. These contours are borders of segmented anatomical regions from brain images. The presented method is compared to: 1) the node splitting-merging Kohonen model and 2) the Teh-Chin algorithm (a well-known approach for dominant points extraction of ordered curves). It is shown that the proposed algorithm has lower distortion error, ability of extracting landmarks from two corresponding curves simultaneously, and also generates the best match according to five medical experts. PMID:12834162

  17. Automatically-Programed Machine Tools

    Science.gov (United States)

    Purves, L.; Clerman, N.

    1985-01-01

    Software produces cutter location files for numerically-controlled machine tools. APT, acronym for Automatically Programed Tools, is among most widely used software systems for computerized machine tools. APT developed for explicit purpose of providing effective software system for programing NC machine tools. APT system includes specification of APT programing language and language processor, which executes APT statements and generates NC machine-tool motions specified by APT statements.

  18. Automatic Generation Framework of Model-driven Test Cases%模型驱动的测试用例自动生成框架

    Institute of Scientific and Technical Information of China (English)

    刘扬; 李亚芬; 王普

    2011-01-01

    提出一个基于模型驱动架构(MDA)的测试用例生成框架,其中,平台无关的系统模型通过水平转换成平台无关的测试模型,平台无关的测试模型通过竖直转换生成相应的测试用例.利用MDA转换工具ATL和MOFScript制定相应的转换规则作用于元模型,使测试者只须提供源模型和测试数据即可生成相应的测试用例.%This paper proposes a test case generation framework based on Model-Driven Architecture (MDA), in which a Platform-Independent Model (PIM) is converted into a Platform-Independent Test (PIT) model through a horizontal transformation, and the platform-independent test model is then converted into the corresponding test cases through a vertical transformation. The MDA transformation tools ATL and MOFScript are used to define the corresponding transformation rules acting on the meta-models, so that testers only need to provide the source model and test data to generate the corresponding test cases.

  19. ertCPN: The adaptations of the coloured Petri-Net theory for real-time embedded system modeling and automatic code generation

    Directory of Open Access Journals (Sweden)

    Wattanapong Kurdthongmee

    2003-05-01

    Full Text Available A real-time system is a computer system that monitors or controls an external environment. The system must meet various timing and other constraints that are imposed on it by the real-time behaviour of the external world. One of the differences between a real-time and a conventional software is that a real-time program must be both logically and temporally correct. To successfully design and implement a real-time system, some analysis is typically done to assure that requirements or designs are consistent and that they satisfy certain desirable properties that may not be immediately obvious from specification. Executable specifications, prototypes and simulation are particularly useful in real-time systems for debugging specifications. In this paper, we propose the adaptations to the coloured Petri-net theory to ease the modeling, simulation and code generation process of an embedded, microcontroller-based, real-time system. The benefits of the proposed approach are demonstrated by use of our prototype software tool called ENVisAge (an Extended Coloured Petri-Net Based Visual Application Generator Tool.

  20. Wien Automatic System Package (WASP). A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 2: Appendices

    International Nuclear Information System (INIS)

    With several Member States, the IAEA has completed a new version of the WASP program, which has been called WASP-III Plus since it follows quite closely the methodology of the WASP-III model. The major enhancements in WASP-III Plus with respect to the WASP-III version are: increase in the number of thermal fuel types (from 5 to 10); verification of which configurations generated by CONGEN have already been simulated in previous iterations with MERSIM; direct calculation of combined Loading Order of FIXSYS and VARSYS plants; simulation of system operation includes consideration of physical constraints imposed on some fuel types (i.e., fuel availability for electricity generation); extended output of the resimulation of the optimal solution; generation of a file that can be used for graphical representation of the results of the resimulation of the optimal solution and cash flows of the investment costs; calculation of cash flows allows the user to include the capital costs of plants firmly committed or in construction (FIXSYS plants); user control of the distribution of capital cost expenditures during the construction period (if required to be different from the general 'S' curve distribution used as default). This second volume of the document to support use of the WASP-III Plus computer code consists of 5 appendices giving some additional information about the WASP-III Plus program. Appendix A is mainly addressed to the WASP-III Plus system analyst and supplies some information which could help in the implementation of the program on the user computer facilities. This appendix also includes some aspects about WASP-III Plus that could not be treated in detail in Chapters 1 to 11. Appendix B identifies all error and warning messages that may appear in the WASP printouts and advises the user how to overcome the problem. Appendix C presents the flow charts of the programs along with a brief description of the objectives and structure of each module. Appendix D describes the

  1. Reinforcement Based Fuzzy Neural Network Control with Automatic Rule Generation%基于增强型算法并能自动生成规则的模糊神经网络控制器

    Institute of Scientific and Technical Information of China (English)

    吴耿锋; 傅忠谦

    2001-01-01

    A reinforcement based fuzzy neural network controller (RBFNNC) is proposed. A set of optimised fuzzy control rules can be automatically generated through reinforcement learning based on the state variables of object system. RBFNNC was applied to a cart-pole balancing system and shows significant improvements on the rule generation.%给出了一种基于增强型算法并能自动生成控制规则的模糊神经网络控制器RBFNNC(reinforcements based fuzzy neural network controller).该控制器能根据被控对象的状态通过增强型学习自动生成模糊控制规则.RBFNNC用于倒立摆小车平衡系统控制的仿真实验表明了该系统的结构及增强型学习算法是有效和成功的.

  2. Brain Morphometry Using Anatomical Magnetic Resonance Imaging

    Science.gov (United States)

    Bansal, Ravi; Gerber, Andrew J.; Peterson, Bradley S.

    2008-01-01

    The efficacy of anatomical magnetic resonance imaging (MRI) in studying the morphological features of various regions of the brain is described, also providing the steps used in the processing and studying of the images. The ability to correlate these features with several clinical and psychological measures can help in using anatomical MRI to…

  3. Are the Genitalia of Anatomical Dolls Distorted?

    Science.gov (United States)

    Bays, Jan

    1990-01-01

    To determine whether the genitalia of anatomical dolls are disproportionately large and may suggest sexual activity to children who have not been abused, the genitalia and breasts of 17 sets of anatomical dolls were measured. When the measurements were extrapolated to adult human proportions, the sizes were not found to be exaggerated. (Author/JDD)

  4. Anatomical entity recognition with a hierarchical framework augmented by external resources.

    Directory of Open Access Journals (Sweden)

    Yan Xu

    Full Text Available References to anatomical entities in medical records consist not only of explicit references to anatomical locations, but also other diverse types of expressions, such as specific diseases, clinical tests, clinical treatments, which constitute implicit references to anatomical entities. In order to identify these implicit anatomical entities, we propose a hierarchical framework, in which two layers of named entity recognizers (NERs work in a cooperative manner. Each of the NERs is implemented using the Conditional Random Fields (CRF model, which use a range of external resources to generate features. We constructed a dictionary of anatomical entity expressions by exploiting four existing resources, i.e., UMLS, MeSH, RadLex and BodyPart3D, and supplemented information from two external knowledge bases, i.e., Wikipedia and WordNet, to improve inference of anatomical entities from implicit expressions. Experiments conducted on 300 discharge summaries showed a micro-averaged performance of 0.8509 Precision, 0.7796 Recall and 0.8137 F1 for explicit anatomical entity recognition, and 0.8695 Precision, 0.6893 Recall and 0.7690 F1 for implicit anatomical entity recognition. The use of the hierarchical framework, which combines the recognition of named entities of various types (diseases, clinical tests, treatments with information embedded in external knowledge bases, resulted in a 5.08% increment in F1. The resources constructed for this research will be made publicly available.

  5. UMLS-based automatic image indexing.

    Science.gov (United States)

    Sneiderman, Charles Alan; Demner-Fushman, Dina; Fung, Kin Wah; Bray, Bruce

    2008-01-01

    To date, most accurate image retrieval techniques rely on textual descriptions of images. Our goal is to automatically generate indexing terms for an image extracted from a biomedical article by identifying Unified Medical Language System (UMLS) concepts in image caption and its discussion in the text. In a pilot evaluation of the suggested image indexing method by five physicians, a third of the automatically identified index terms were found suitable for indexing.

  6. Automatic generation of time resolved motion vector fields of coronary arteries and 4D surface extraction using rotational x-ray angiography

    Science.gov (United States)

    Jandt, Uwe; Schäfer, Dirk; Grass, Michael; Rasche, Volker

    2009-01-01

    Rotational coronary angiography provides a multitude of x-ray projections of the contrast agent enhanced coronary arteries along a given trajectory with parallel ECG recording. These data can be used to derive motion information of the coronary arteries including vessel displacement and pulsation. In this paper, a fully automated algorithm to generate 4D motion vector fields for coronary arteries from multi-phase 3D centerline data is presented. The algorithm computes similarity measures of centerline segments at different cardiac phases and defines corresponding centerline segments as those with highest similarity. In order to achieve an excellent matching accuracy, an increasing number of bifurcations is included as reference points in an iterative manner. Based on the motion data, time-dependent vessel surface extraction is performed on the projections without the need of prior reconstruction. The algorithm accuracy is evaluated quantitatively on phantom data. The magnitude of longitudinal errors (parallel to the centerline) reaches approx. 0.50 mm and is thus more than twice as large as the transversal 3D extraction errors of the underlying multi-phase 3D centerline data. It is shown that the algorithm can extract asymmetric stenoses accurately. The feasibility on clinical data is demonstrated on five different cases. The ability of the algorithm to extract time-dependent surface data, e.g. for quantification of pulsating stenosis is demonstrated.
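
    The core matching idea — pair each centerline segment at one cardiac phase with the most similar segment at another phase and read the displacement off the match — can be sketched with a simple similarity measure; the mean point-to-point distance used here and the tiny synthetic centerlines are stand-ins for the paper's similarity measures and angiographic data:

        # Minimal sketch of matching centerline segments between two cardiac phases.
        import numpy as np

        def mean_distance(seg_a, seg_b):
            """Mean distance between resampled 3D segments of equal length."""
            return float(np.mean(np.linalg.norm(seg_a - seg_b, axis=1)))

        phase_a = [np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], float),
                   np.array([[0, 5, 0], [1, 5, 0], [2, 5, 0]], float)]
        phase_b = [np.array([[0, 5.3, 0.2], [1, 5.2, 0.1], [2, 5.1, 0.0]], float),
                   np.array([[0.1, 0, 0], [1.1, 0, 0], [2.1, 0, 0]], float)]

        for i, seg in enumerate(phase_a):
            j = min(range(len(phase_b)), key=lambda k: mean_distance(seg, phase_b[k]))
            motion = phase_b[j].mean(axis=0) - seg.mean(axis=0)
            print(f"segment {i} -> phase-B segment {j}, mean displacement {motion}")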

  7. Automatic generation of time resolved motion vector fields of coronary arteries and 4D surface extraction using rotational x-ray angiography

    Energy Technology Data Exchange (ETDEWEB)

    Jandt, Uwe; Schaefer, Dirk; Grass, Michael [Philips Research Europe-Hamburg, Roentgenstr. 24, 22335 Hamburg (Germany); Rasche, Volker [University of Ulm, Department of Internal Medicine II, Robert-Koch-Strasse 8, 89081 Ulm (Germany)], E-mail: ujandt@gmx.de

    2009-01-07

    Rotational coronary angiography provides a multitude of x-ray projections of the contrast agent enhanced coronary arteries along a given trajectory with parallel ECG recording. These data can be used to derive motion information of the coronary arteries including vessel displacement and pulsation. In this paper, a fully automated algorithm to generate 4D motion vector fields for coronary arteries from multi-phase 3D centerline data is presented. The algorithm computes similarity measures of centerline segments at different cardiac phases and defines corresponding centerline segments as those with highest similarity. In order to achieve an excellent matching accuracy, an increasing number of bifurcations is included as reference points in an iterative manner. Based on the motion data, time-dependent vessel surface extraction is performed on the projections without the need of prior reconstruction. The algorithm accuracy is evaluated quantitatively on phantom data. The magnitude of longitudinal errors (parallel to the centerline) reaches approx. 0.50 mm and is thus more than twice as large as the transversal 3D extraction errors of the underlying multi-phase 3D centerline data. It is shown that the algorithm can extract asymmetric stenoses accurately. The feasibility on clinical data is demonstrated on five different cases. The ability of the algorithm to extract time-dependent surface data, e.g. for quantification of pulsating stenosis is demonstrated.

  8. Methodology for the Automatic Generation of Assur Groups from Planar Multi-bar Linkages%平面多杆机构杆组自动生成方法

    Institute of Scientific and Technical Information of China (English)

    韩建友; 袁玉芹; 吕翔宇; 张倩倩; 卢天齐

    2015-01-01

    阿苏尔杆组理论指出,通过对不同运动链选取不同的机架和原动件,能够得到各种可能的阿苏尔杆组与机构构型。基于这一理论提出一种阿苏尔杆组的自动生成方法。针对杆组的结构特点,提出简单易行的杆组同构判别方法,该判别方法也适用于平面多杆机构运动链的同构判别。联合应用运动链的邻接矩阵与关联矩阵,使得自动生成算法与计算机编程相结合,实现了平面多杆机构杆组的自动生成。该自动生成机构杆组的方法理论简单,编程可操作性强,能够实现多杆运动链在构成机构时杆组的准确快速的拆分。该方法将杆组的拆分过程与由杆组搭接形成机构的过程相联系,对拆分得到的所有杆组与机构构型进行同构判别,得到了六杆以内的13种杆组,以及由八杆运动链构成的153种机构。%The theory of Assur groups indicates that all potential Assur groups and linkage types can be obtained by selecting different ground links and driving links of the unique kinematic chains. Based on this theory, a methodology for automatically generating Assur groups is proposed. Aiming at the structural characteristics of Assur groups, an effective and simple method for their isomorphism identification is also proposed, which is equally applicable to the isomorphism identification of planar multi-bar kinematic chains. The methodology combines the automatic generation algorithm with computer programming, using the adjacency matrices and incidence matrices of the kinematic chains, so that the Assur groups can be split from planar multi-bar kinematic chains quickly and accurately when linkages are formed. The automated process obtained all 13 Assur groups containing up to six bars, as well as the 153 unique linkage types formed by eight-bar kinematic chains.
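
    The isomorphism identification mentioned above amounts to asking whether one chain's adjacency matrix can be permuted into the other's. A minimal brute-force sketch, adequate only for the small link counts considered here (the example matrices are illustrative; the paper's own identification method is not reproduced):

        # Minimal sketch of an isomorphism test on link adjacency matrices.
        from itertools import permutations
        import numpy as np

        def isomorphic(A, B):
            A, B = np.asarray(A), np.asarray(B)
            if A.shape != B.shape:
                return False
            n = A.shape[0]
            for perm in permutations(range(n)):
                p = list(perm)
                if np.array_equal(A[np.ix_(p, p)], B):   # relabel links and compare
                    return True
            return False

        # two labellings of the same four-link closed chain
        A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
        B = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
        print(isomorphic(A, B))   # True: same chain, different link numbering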

  9. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces money supply as incomes and spendings rise. Similarly, in recessionary times, payment of unemployment benefits injects more money in the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  10. Automatic input rectification

    OpenAIRE

    Long, Fan; Ganesh, Vijay; Carbin, Michael James; Sidiroglou, Stelios; Rinard, Martin

    2012-01-01

    We present a novel technique, automatic input rectification, and a prototype implementation, SOAP. SOAP learns a set of constraints characterizing typical inputs that an application is highly likely to process correctly. When given an atypical input that does not satisfy these constraints, SOAP automatically rectifies the input (i.e., changes the input so that it satisfies the learned constraints). The goal is to automatically convert potentially dangerous inputs into typical inputs that the ...

  11. Design and use of numerical anatomical atlases for radiotherapy

    International Nuclear Information System (INIS)

    The main objective of this thesis is to provide radio-oncology specialists with automatic tools for delineating organs at risk of a patient undergoing a radiotherapy treatment of cerebral or head and neck tumors. To achieve this goal, we use an anatomical atlas, i.e. a representative anatomy associated to a clinical image representing it. The registration of this atlas allows us to segment automatically the patient structures and to accelerate this process. Contributions in this method are presented on three axes. First, we want to obtain a registration method which is as independent as possible from the setting of its parameters. This setting, done by the clinician, indeed needs to be minimal while guaranteeing a robust result. We therefore propose registration methods allowing a better control of the obtained transformation, using rejection techniques of inadequate matching or locally affine transformations. The second axis is dedicated to the consideration of structures associated with the presence of the tumor. These structures, not present in the atlas, indeed lead to local errors in the atlas-based segmentation. We therefore propose methods to delineate these structures and take them into account in the registration. Finally, we present the construction of an anatomical atlas of the head and neck region and its evaluation on a database of patients. We show in this part the feasibility of the use of an atlas for this region, as well as a simple method to evaluate the registration methods used to build an atlas. All this research work has been implemented in a commercial software (Imago from DOSIsoft), allowing us to validate our results in clinical conditions. (author)

  12. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (comp.)

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equation, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
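
    The chain-rule propagation that distinguishes automatic differentiation from symbolic and numeric differentiation can be shown with forward-mode dual numbers; a minimal sketch supporting only addition and multiplication:

        # Minimal sketch of forward-mode automatic differentiation with dual numbers
        # (value, derivative): derivatives propagate through the chain rule.
        class Dual:
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv

            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)

            __radd__ = __add__

            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value * other.value,
                            self.value * other.deriv + self.deriv * other.value)

            __rmul__ = __mul__

        def f(x):
            return 3 * x * x + 2 * x + 1       # f'(x) = 6x + 2

        x = Dual(2.0, 1.0)                     # seed derivative dx/dx = 1
        y = f(x)
        print(y.value, y.deriv)                # 17.0 14.0 — no symbolic or numeric differencing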

  13. Automatic Generation Algorithm of 3 D Navigation Map Based on Seed Algorithm%基于种子算法的三维导航图自动生成算法

    Institute of Scientific and Technical Information of China (English)

    陈慧平; 刘东峰; 何家峰; 程昱

    2013-01-01

    导航图为人群模拟提供了对应的环境信息,为智能体的移动提供了导航基础。其准确与否对模拟结果的正确性至关重要,是反应智能体自主特征与智能行为的关键技术之一。而目前工作主要针对平坦的地面进行导航图的创建,对实际应用有很大的局限性。文中利用种子填充算法蔓延特性和碰撞检测技术,并根据场景的几何属性自动生成复杂地形的三维导航图,解决了起伏地形、复杂场景导航图自动生成困难的问题。所得结果可以利用到实际人群三维模拟或三维游戏开发中。%A navigation map provides the environmental information needed for crowd simulation and is the basis for agent navigation. Its accuracy is critical to the correctness of the simulation results, and it is one of the key technologies for reflecting the autonomous characteristics and intelligent behaviour of agents. Existing work mainly creates navigation maps for flat ground, which greatly limits practical applications. In this paper, the spreading property of the seed-filling algorithm and collision detection are used, together with the geometric properties of the scene, to automatically generate three-dimensional navigation maps for complex terrain, solving the difficulty of automatically generating navigation maps for undulating terrain and complex scenes. The results can be used in realistic 3D crowd simulation or 3D game development.
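
    The spreading behaviour of the seed-filling algorithm mentioned above can be illustrated on a small walkability grid: starting from a seed cell, the fill visits every reachable walkable neighbour. A minimal 2D sketch (the grid is a toy example; the paper works on 3D terrain with collision detection rather than a precomputed grid):

        # Minimal sketch of seed filling over walkable cells (1 = blocked, 0 = walkable).
        from collections import deque

        grid = [
            [0, 0, 1, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 0],
            [1, 1, 0, 0],
        ]

        def seed_fill(grid, seed):
            rows, cols = len(grid), len(grid[0])
            reachable, queue = {seed}, deque([seed])
            while queue:
                r, c = queue.popleft()
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                            and (nr, nc) not in reachable:
                        reachable.add((nr, nc))
                        queue.append((nr, nc))
            return reachable

        print(sorted(seed_fill(grid, (0, 0))))   # walkable cells forming the navigation map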

  14. 基于K均值PSOABC的测试用例自动生成方法%Automatic Testcase Generation Method Based on PSOABC and K-means Clustering Algorithm

    Institute of Scientific and Technical Information of China (English)

    贾冀婷

    2015-01-01

    Improving the automation of test case generation in software testing is very important for guaranteeing software quality and reducing development cost. This paper proposes an automatic test case generation method based on a hybrid of particle swarm optimization, the artificial bee colony algorithm and K-means clustering, and carries out simulation experiments. The results show that, for automatic test case generation, the improved algorithm is more efficient and has stronger convergence than methods based on basic particle swarm optimization or the genetic algorithm.%软件测试中测试用例自动生成技术对于确保软件质量与降低开发成本都是非常重要的。文中基于K均值聚类算法与粒子群算法和人工蜂群算法相结合的混合算法,提出了一种测试用例自动生成方法,并且对此方法进行了仿真实验。实验结果表明,与基本的粒子群算法、遗传算法的测试用例自动生成方法相比较,基于文中改进算法的测试用例自动生成方法具有测试用例自动生成效率高、收敛能力强等优点。
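
    The particle swarm component of the hybrid method can be sketched on its own: particles search the input space while a branch-distance style fitness rewards inputs that cover a target branch. The toy fitness and all PSO coefficients below are illustrative assumptions, and the K-means and artificial bee colony parts are omitted:

        # Minimal sketch of PSO driving test-data generation.
        import random

        def branch_distance(x):
            a, b = x
            return abs(a - b)              # 0 means the branch "if a == b" is covered

        def pso(fitness, dim=2, swarm=20, iters=100, lo=-100.0, hi=100.0):
            random.seed(1)
            pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
            vel = [[0.0] * dim for _ in range(swarm)]
            pbest = [p[:] for p in pos]
            gbest = min(pbest, key=fitness)[:]
            for _ in range(iters):
                for i in range(swarm):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (0.7 * vel[i][d]
                                     + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                     + 1.5 * r2 * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    if fitness(pos[i]) < fitness(pbest[i]):
                        pbest[i] = pos[i][:]
                        if fitness(pbest[i]) < fitness(gbest):
                            gbest = pbest[i][:]
            return gbest

        best = pso(branch_distance)
        print("generated test input:", best, "fitness:", branch_distance(best))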

  15. Chinese Natural Language Processing for Animation Automatic Generation of Traditional Architecture%面向古建动画自动生成的中文自然语言处理

    Institute of Scientific and Technical Information of China (English)

    孙凯

    2011-01-01

    本文提出了一个面向古代建筑领域的自然语言处理的系统模型,它被用于古建筑动画自动生成系统之中,承担着从简单中文描述到古建筑领域语义结果的计算工作。该模型分为三部分,分别为预处理过程,一般语义计算和面向古建筑领域的语义计算。通过调用Stanford大学的中文分词、语法分析程序完成分词、语法分析任务,使用Prolog语言完成一般语义计算,最终计算出古建筑构件以及它的搭建顺序、尺寸和位置,即所谓的面向古建筑领域的语义计算。%In this paper,a model is proposed for Chinese natural language processing for animation automatic generation of traditional architecture,which is applied in the system of animation generation of traditional architecture and undertakes the computing task of transformation from simple Chinese text to result in traditional architecture domain.This model contains three main parts including preprocessing,general semantics computing,and traditional architecture oriented semantics computing.Stanford segmenter and parser programs are called to accomplish the task of segmenting and parsing,and Prolog is utilized to accomplish the general semantics computing,and finally the components of traditional architecture and their construction sequence,size and position is generated,whose procedure is also called traditional architecture domain oriented semantics computing.

  16. SubClonal Hierarchy Inference from Somatic Mutations: Automatic Reconstruction of Cancer Evolutionary Trees from Multi-region Next Generation Sequencing.

    Directory of Open Access Journals (Sweden)

    Noushin Niknafs

    2015-10-01

    Full Text Available Recent improvements in next-generation sequencing of tumor samples and the ability to identify somatic mutations at low allelic fractions have opened the way for new approaches to model the evolution of individual cancers. The power and utility of these models is increased when tumor samples from multiple sites are sequenced. Temporal ordering of the samples may provide insight into the etiology of both primary and metastatic lesions and rationalizations for tumor recurrence and therapeutic failures. Additional insights may be provided by temporal ordering of evolving subclones--cellular subpopulations with unique mutational profiles. Current methods for subclone hierarchy inference tightly couple the problem of temporal ordering with that of estimating the fraction of cancer cells harboring each mutation. We present a new framework that includes a rigorous statistical hypothesis test and a collection of tools that make it possible to decouple these problems, which we believe will enable substantial progress in the field of subclone hierarchy inference. The methods presented here can be flexibly combined with methods developed by others addressing either of these problems. We provide tools to interpret hypothesis test results, which inform phylogenetic tree construction, and we introduce the first genetic algorithm designed for this purpose. The utility of our framework is systematically demonstrated in simulations. For most tested combinations of tumor purity, sequencing coverage, and tree complexity, good power (≥ 0.8 can be achieved and Type 1 error is well controlled when at least three tumor samples are available from a patient. Using data from three published multi-region tumor sequencing studies of (murine small cell lung cancer, acute myeloid leukemia, and chronic lymphocytic leukemia, in which the authors reconstructed subclonal phylogenetic trees by manual expert curation, we show how different configurations of our tools can

  17. Automatic generation of zonal models to study air movement and temperature distribution in buildings; Generation automatique de modeles zonaux pour l'etude du comportement thermo-aeraulique des batiments

    Energy Technology Data Exchange (ETDEWEB)

    Musy, M.

    1999-07-01

    This study shows that it is possible to automatically build zonal models that allow air movement, temperature distribution and air quality to be predicted throughout a building. Zonal models are based on a rough partitioning of the rooms and are an intermediate approach between one-node models and CFD models. One-node models assume a homogeneous temperature in each room and therefore cannot predict thermal comfort within a room, whereas CFD models require a great amount of simulation time. To achieve this aim, the zonal model was entirely reformulated as the connection of small sets of equations. The equations describe either the state of a sub-zone of the partitioning (such sets of equations are called 'cells') or the mass and energy transfers that occur between two sub-zones (in which case they are called 'interfaces'). There are various 'cells' and 'interfaces' to represent the different air flows that occur in buildings. They have all been translated into SPARK objects that form a model library. Building a simulation consists of choosing the appropriate models to represent the rooms and connecting them; this last stage has been automated, so the only thing the user has to do is to supply the partitioning and choose the models to be implemented. The resulting set of equations is solved iteratively with SPARK. Results of simulations in 3D rooms are presented and compared with experimental data. Examples of zonal models are also given; they are applied to the study of a group of two rooms, a building, and a room with complex geometry. (author)

  18. Biofabrication of multi-material anatomically shaped tissue constructs

    International Nuclear Information System (INIS)

    Additive manufacturing in the field of regenerative medicine aims to fabricate organized tissue-equivalents. However, the control over shape and composition of biofabricated constructs is still a challenge and needs to be improved. The current research aims to improve shape, by converging a number of biocompatible, quality construction materials into a single three-dimensional fiber deposition process. To demonstrate this, several models of complex anatomically shaped constructs were fabricated by combined deposition of poly(vinyl alcohol), poly(ε-caprolactone), gelatin methacrylamide/gellan gum and alginate hydrogel. Sacrificial components were co-deposited as temporary support for overhang geometries and were removed after fabrication by immersion in aqueous solutions. Embedding of chondrocytes in the gelatin methacrylamide/gellan component demonstrated that the fabrication and the sacrificing procedure did not affect cell viability. Further, it was shown that anatomically shaped constructs can be successfully fabricated, yielding advanced porous thermoplastic polymer scaffolds, layered porous hydrogel constructs, as well as reinforced cell-laden hydrogel structures. In conclusion, anatomically shaped tissue constructs of clinically relevant sizes can be generated when employing multiple building and sacrificial materials in a single biofabrication session. The current techniques offer improved control over both internal and external construct architecture underscoring its potential to generate customized implants for human tissue regeneration. (paper)

  19. RESEARCH ON AUTOMATICALLY GENERATING C++ CODE FROM UML CLASS AND SEQUENCE DIAGRAMS

    Institute of Scientific and Technical Information of China (English)

    王晓宇; 钱红兵

    2013-01-01

    UML is a standard modelling language widely used for requirements analysis and detailed design of software systems. Technology that automatically generates C++ code from a detailed UML design can greatly accelerate the development of software products and improve their quality. We propose an approach that integrates UML class and sequence diagrams to produce C++ code containing both the static structure and the dynamic behaviour of the software system, thereby addressing the limitation of current code generation tools, which can only transform static diagrams into C++ code skeletons and cannot handle dynamic behaviour models. The approach consists of meta-models of UML class and sequence diagrams together with the corresponding transformation rules. A code generator implemented with Velocity is used to illustrate the code generation process and its results.
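
    A minimal sketch of the template-driven generation step is given below. It is not the authors' Velocity templates or transformation rules; the class description and the template are hypothetical, and only the static-structure part (one C++ class skeleton) is shown.

```python
# Minimal sketch (not the authors' Velocity templates): render a C++ class
# skeleton from a dictionary describing one UML class. All names are hypothetical.
CLASS_TEMPLATE = """class {name} {{
public:
{methods}
private:
{fields}
}};
"""

def render_cpp_class(uml_class):
    methods = "\n".join(
        f"    {m['returns']} {m['name']}({', '.join(m['params'])});"
        for m in uml_class["methods"])
    fields = "\n".join(
        f"    {f['type']} {f['name']};" for f in uml_class["fields"])
    return CLASS_TEMPLATE.format(name=uml_class["name"],
                                 methods=methods, fields=fields)

account = {
    "name": "Account",
    "fields": [{"type": "double", "name": "balance_"}],
    "methods": [{"returns": "void", "name": "deposit", "params": ["double amount"]}],
}
print(render_cpp_class(account))
```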

  20. Automatic target validation based on neuroscientific literature mining for tractography

    OpenAIRE

    Xavier Vasques; Renaud Richardet; Etienne Pralong; LAURA CIF

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so t...

  1. Anatomic Eponyms in Neuroradiology: Head and Neck.

    Science.gov (United States)

    Bunch, Paul M

    2016-10-01

    In medicine, an eponym is a word, typically referring to an anatomic structure, disease, or syndrome, that is derived from a person's name. Medical eponyms are ubiquitous and numerous. They are also at times controversial. Eponyms reflect medicine's rich and colorful history and can be useful for concisely conveying complex concepts. Familiarity with eponyms facilitates correct usage and accurate communication. In this article, 22 eponyms used to describe anatomic structures of the head and neck are discussed. For each structure, the author first provides a biographical account of the individual for whom the structure is named. An anatomic description and brief discussion of the structure's clinical relevance follow. PMID:27283070

  2. The Simulation Platform Design for Automatic Generation Control System Based on Matlab GUI

    Institute of Scientific and Technical Information of China (English)

    张春慧; 国中琦; 张永

    2014-01-01

    The Graphical User Interface (GUI) of Matlab was used to design a simulation platform for an Automatic Generation Control (AGC) system. The platform is applied to primary frequency regulation of a single area and to secondary frequency regulation of an interconnected power grid, and it includes a control strategy module for analysing classical PID and fuzzy self-adjusting PID control strategies. The simulation platform intuitively displays the frequency trend under AGC regulation, compares the performance of the two control strategies, and allows parameters to be set conveniently.
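
    The kind of experiment such a platform supports can be sketched in a few lines. The following is not the Matlab GUI platform itself: it is a one-area frequency-response toy model with a PI secondary controller, and every parameter (inertia M, damping D, droop R, governor time constant Tg, bias B, PI gains) is an illustrative assumption.

```python
# Minimal sketch, not the authors' platform: one-area frequency response to a
# load step with a PI secondary (AGC) controller, integrated with Euler steps.
def simulate(load_step=0.05, t_end=30.0, dt=0.01,
             M=10.0, D=1.0, R=0.05, Tg=0.5, B=21.0, Kp=0.5, Ki=0.3):
    f, pm, integ = 0.0, 0.0, 0.0            # deviations from nominal (per unit)
    history = []
    for k in range(int(t_end / dt)):
        ace = B * f                          # area control error (single area)
        integ += ace * dt
        pref = -(Kp * ace + Ki * integ)      # secondary (AGC) PI action
        dpm = (pref - f / R - pm) / Tg       # governor + turbine, first order
        df = (pm - load_step - D * f) / M    # swing equation
        pm += dpm * dt
        f += df * dt
        history.append((k * dt, f))
    return history

trace = simulate()
print("final frequency deviation: %.4f pu" % trace[-1][1])
```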

  3. Automatic Generation of Cloud-to-Ground Flash Density Map and Corresponding MIF File

    Institute of Scientific and Technical Information of China (English)

    董兴朋; 李胜乐; 彭愿; 苏融; 刘珠妹; 刘坚

    2012-01-01

    A cloud-to-ground flash density map reflects the spatial distribution and variation characteristics of lightning activity and provides basic data for lightning protection of the power grid. We developed a Visual C++ program that automatically generates a cloud-to-ground flash density map and the corresponding MIF file. The MIF file can be converted into a MapInfo Tab map with the MapInfo software for further analysis. This avoids the specialised MapBasic language: only general Visual C++ is needed to produce maps on top of MapInfo. Finally, the program is packaged as a dynamic link library in order to save system resources and improve run-time efficiency.
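
    A minimal sketch of the MIF/MID output step is shown below (in Python rather than the authors' Visual C++). Each grid cell with a flash-density value is written as a closed Region; the header keywords follow the common MapInfo interchange layout but should be treated as assumptions and checked against the MIF specification.

```python
# Minimal sketch: write one grid cell per flash-density value as a Region in a
# MapInfo interchange (.mif geometry + .mid attributes) pair.
def write_density_mif(path, cells):
    """cells: list of (xmin, ymin, xmax, ymax, density)."""
    with open(path + ".mif", "w") as mif, open(path + ".mid", "w") as mid:
        mif.write('Version 300\nCharset "WindowsLatin1"\nDelimiter ","\n')
        mif.write("Columns 1\n  Density Float\nData\n\n")
        for xmin, ymin, xmax, ymax, density in cells:
            corners = [(xmin, ymin), (xmax, ymin), (xmax, ymax),
                       (xmin, ymax), (xmin, ymin)]        # closed ring
            mif.write("Region 1\n  %d\n" % len(corners))
            for x, y in corners:
                mif.write("%f %f\n" % (x, y))
            mid.write("%f\n" % density)                   # one attribute row per object

write_density_mif("flash_density", [(120.0, 39.0, 120.1, 39.1, 2.4)])
```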

  4. Research and Implementation of Website Automatic Generation and Management System Based on J2EE

    Institute of Scientific and Technical Information of China (English)

    孙巧凯; 杨国林; 马晓波

    2014-01-01

    In this paper, we propose an overall architecture and solution for a website automatic generation and management system based on J2EE. Taking small and medium-sized educational institutions as an example, we researched and implemented such a system that is fully functional, easy to operate and extendable. The system uses an MVC technical framework composed of Spring, Spring MVC and Hibernate, and its implementation combines CSS, DIV and related web technologies. The system also supports unlimited nesting of content columns and has low coupling, good stability and portability.

  5. Automatic Sea-route Generation Based on the Combination of Ant Colony Searching and Genetic Optimization

    Institute of Scientific and Technical Information of China (English)

    李启华; 李晓阳; 吴国华

    2014-01-01

    This article studies the automatic sea-route generation problem. It introduces the grid layout and attribute structure of the navigation area, discusses the computational method and model for grid attributes, and covers the ant colony search strategy, the genetic optimisation method, the computation model for sea-route performance, and the route smoothing method and model. The feasibility of the method and the accuracy of the models are verified with a series of examples.
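
    The ant colony part of such a scheme can be sketched compactly. The code below covers only the grid search (no genetic optimisation, performance model or route smoothing), and the grid, costs and parameters are invented for illustration.

```python
# Minimal sketch of the ant-colony search on a navigation grid (0 = navigable,
# 1 = obstacle). Parameters and the grid are illustrative assumptions.
import random

def ant_colony_route(grid, start, goal, n_ants=20, n_iter=50,
                     evaporation=0.5, alpha=1.0, deposit=1.0):
    rows, cols = len(grid), len(grid[0])
    tau = {}                                   # pheromone per directed move
    best = None
    for _ in range(n_iter):
        for _ in range(n_ants):
            pos, path, visited = start, [start], {start}
            while pos != goal and len(path) < rows * cols:
                moves = [(pos[0] + dr, pos[1] + dc)
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                         if 0 <= pos[0] + dr < rows and 0 <= pos[1] + dc < cols
                         and grid[pos[0] + dr][pos[1] + dc] == 0
                         and (pos[0] + dr, pos[1] + dc) not in visited]
                if not moves:
                    break
                weights = [(1.0 + tau.get((pos, m), 0.0)) ** alpha for m in moves]
                pos = random.choices(moves, weights)[0]
                path.append(pos)
                visited.add(pos)
            if path[-1] == goal:
                if best is None or len(path) < len(best):
                    best = path
                for a, b in zip(path, path[1:]):   # deposit pheromone on the tour
                    tau[(a, b)] = tau.get((a, b), 0.0) + deposit / len(path)
        tau = {k: v * evaporation for k, v in tau.items()}
    return best

sea = [[0, 0, 0, 1], [1, 1, 0, 1], [0, 0, 0, 0], [0, 1, 1, 0]]
print(ant_colony_route(sea, (0, 0), (3, 3)))
```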

  6. Computerised 3-D anatomical modelling using plastinates: an example utilising the human heart.

    Science.gov (United States)

    Tunali, S; Kawamoto, K; Farrell, M L; Labrash, S; Tamura, K; Lozanoff, S

    2011-08-01

    Computerised modelling methods have become highly useful for generating electronic representations of anatomical structures. These methods rely on cross-sectional tissue slices in databases such as the Visible Human Male and Female, the Visible Korean Human, and the Visible Chinese Human. However, these databases are time consuming to generate and require labour-intensive manual digitisation, while the number of specimens is very limited. Plastinated anatomical material could provide a possible alternative for data collection, requiring less time to prepare and enabling the use of virtually any anatomical or pathological structure routinely obtained in a gross anatomy laboratory. The purpose of this study was to establish an approach utilising plastinated anatomical material, specifically human hearts, for the purpose of computerised 3-D modelling. Human hearts were collected following gross anatomical dissection and subjected to routine plastination procedures including dehydration (-25°C), defatting, forced impregnation, and curing at room temperature. A graphics pipeline was established comprising data collection with a hand-held scanner, 3-D modelling, model polishing, file conversion, and final rendering. Representative models were viewed and qualitatively assessed for accuracy and detail. The results showed that the heart model provided detailed surface information necessary for gross anatomical instructional purposes. Rendering tools facilitated optional model manipulation for further structural clarification if selected by the user. The use of plastinated material for generating 3-D computerised models has distinct advantages compared to cross-sectional tissue images. PMID:21866531

  7. The application of an automatic tracking control method based on PLC to photovoltaic generation in Jiuquan

    Institute of Scientific and Technical Information of China (English)

    秦天像

    2014-01-01

    As the position of the sun changes over time, the irradiance received by the solar cell array of a photovoltaic power generation system is unstable, which reduces the efficiency of the photovoltaic cells. Designing an automatic solar tracker is therefore an effective measure to improve the efficiency of a photovoltaic power generation system. Addressing the defects and shortcomings of existing photovoltaic tracking control methods, the author accounts for the change of the solar position angle during the motor actuation time and for the allowed tracking error range, proposes a tracking control method based on a PLC, and verifies its feasibility through theoretical analysis and Matlab/Simulink simulation.
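
    The basic predict-then-correct loop can be sketched as follows. This is not the PLC program: the solar-position formulas are rough textbook approximations, the site latitude and deadband are illustrative, and only the azimuth axis is shown.

```python
# Minimal sketch of the track-when-error-exceeds-deadband idea: a crude solar
# elevation/azimuth model drives a step command only when the pointing error
# leaves the allowed tracking band.
import math

def sun_angles(day_of_year, hour, latitude_deg):
    """Very rough solar elevation/azimuth in degrees (illustrative only)."""
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = 15.0 * (hour - 12.0)
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    elev = math.asin(math.sin(lat) * math.sin(dec)
                     + math.cos(lat) * math.cos(dec) * math.cos(ha))
    az = math.atan2(-math.sin(ha),
                    math.tan(dec) * math.cos(lat) - math.sin(lat) * math.cos(ha))
    return math.degrees(elev), math.degrees(az)

def tracking_step(panel_az, day, hour, latitude=39.7, deadband=2.0):
    """Return a motor command (degrees to move) or 0.0 if within the band."""
    _, sun_az = sun_angles(day, hour, latitude)
    error = sun_az - panel_az
    return error if abs(error) > deadband else 0.0

print(tracking_step(panel_az=10.0, day=172, hour=14.0))
```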

  8. Research on automatically generating curved surface cards in MCNP input files

    Institute of Scientific and Technical Information of China (English)

    黄少华; 杨平利; 袁媛; 林成地

    2013-01-01

    Since the geometry module of a hand-written MCNP input file is prone to errors, three development interfaces provided by Spatial's ACIS (API functions, C++ classes and direct interface functions) are used to obtain the surface equation of every surface in a given CAD model. In particular, when a curved surface is not parallel to a coordinate axis, the surface equation is simplified in an auxiliary coordinate system and the surface card is then generated automatically in MCNP format. Validation on different models shows that the method generates surface cards correctly and improves the efficiency of writing MCNP input files.
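
    The final formatting step, turning a surface equation into an MCNP surface card, can be sketched independently of ACIS. The mnemonics (P, PX/PY/PZ, GQ) and coefficient order below follow the usual MCNP conventions but are written from memory and should be verified against the MCNP manual before real use.

```python
# Minimal sketch: format plane and general-quadric surface cards in MCNP style.
def plane_card(number, a, b, c, d):
    """Plane Ax + By + Cz - D = 0; uses PX/PY/PZ when axis-aligned."""
    if b == 0.0 and c == 0.0:
        return f"{number} PX {d / a:.6g}"
    if a == 0.0 and c == 0.0:
        return f"{number} PY {d / b:.6g}"
    if a == 0.0 and b == 0.0:
        return f"{number} PZ {d / c:.6g}"
    return f"{number} P {a:.6g} {b:.6g} {c:.6g} {d:.6g}"

def quadric_card(number, coeffs):
    """General quadric GQ with 10 coefficients (A..K)."""
    return f"{number} GQ " + " ".join(f"{c:.6g}" for c in coeffs)

print(plane_card(1, 0.0, 0.0, 1.0, 5.0))                  # the plane z = 5
print(quadric_card(2, [1, 1, 0, 0, 0, 0, 0, 0, 0, -4]))   # cylinder x^2 + y^2 = 4
```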

  9. RESEARCH AND IMPLEMENTATION OF DAG-BASED AUTOMATIC BPEL DOCUMENT GENERATION FRAMEWORK

    Institute of Scientific and Technical Information of China (English)

    陈龙; 苏厚勤

    2016-01-01

    To cope with the complexity of composing today's services, we propose a DAG-based framework for generating Business Process Execution Language (BPEL) documents. By analysing the framework model, we present an improved framework and algorithm that can automatically generate the various documents required for a service composition, effectively hiding specialised BPEL knowledge and the tedious composition workflow. Practice shows that the framework is simple and easy to use: it clearly reflects the flow of the service composition, requires no specialised BPEL knowledge, and reduces the effort of building service compositions.
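
    The core mapping from a service DAG to a BPEL skeleton can be sketched as a topological sort followed by template output. This is not the paper's framework; the element and attribute names are the customary BPEL ones and the services are invented.

```python
# Minimal sketch: topologically order a service DAG and emit a skeletal BPEL
# <sequence> of <invoke> activities.
from collections import deque

def topological_order(edges, nodes):
    indeg = {n: 0 for n in nodes}
    for _, dst in edges:
        indeg[dst] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for src, dst in edges:
            if src == n:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    queue.append(dst)
    return order

def dag_to_bpel(name, nodes, edges):
    invokes = "\n".join(f'    <invoke name="{svc}" operation="{svc}"/>'
                        for svc in topological_order(edges, nodes))
    return (f'<process name="{name}">\n  <sequence>\n{invokes}\n'
            '  </sequence>\n</process>')

print(dag_to_bpel("order", ["CheckStock", "Charge", "Ship"],
                  [("CheckStock", "Charge"), ("Charge", "Ship")]))
```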

  10. New Compensating Method for Automatic Generation Control Unit Assessment in Electricity Market Environment

    Institute of Scientific and Technical Information of China (English)

    赵杰; 刘天健; 周建平

    2011-01-01

    Based on the analytic hierarchy process (AHP), an evaluation model for automatic generation control (AGC) units is constructed, providing a reference for evaluating and compensating the performance and price of individual AGC units. By carrying out a comprehensive hierarchical analysis of the performance and price indices of AGC units, units offering high quality at a fair price are encouraged to enter operation. The paper proposes a new two-part settlement scheme for the AGC service charge based on the composite score: the AGC capacity service fee paid to each generation company is determined by the ratio of its units' composite score to the total composite score of all units. With the total service charge unchanged, this motivates generation companies to actively improve AGC unit performance and reduce cost, which benefits the long-term stability and development of the electricity market. The proposed algorithm is simple, practical and easy to operate.
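
    The score-proportional allocation itself is a one-line calculation, sketched below with an invented total fee and invented composite scores: each unit's capacity fee is the fixed total multiplied by its composite AHP score divided by the sum of all composite scores.

```python
# Minimal sketch of the score-proportional allocation (numbers are invented).
def allocate_capacity_fee(total_fee, scores):
    total_score = sum(scores.values())
    return {unit: total_fee * s / total_score for unit, s in scores.items()}

scores = {"unit_A": 0.82, "unit_B": 0.64, "unit_C": 0.54}
for unit, fee in allocate_capacity_fee(100000.0, scores).items():
    print(f"{unit}: {fee:,.0f}")
```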

  11. Complex anatomic variation in the brachial region.

    Science.gov (United States)

    Troupis, Th; Michalinos, A; Protogerou, V; Mazarakis, A; Skandalakis, P

    2015-01-01

    Authors describe a case of a complex anatomic variation discovered during dissection of the humeral region. On the right side, brachial artery followed a superficial course. Musculocutaneous nerve did not pierce coracobrachialis muscle but instead passed below the muscle before continuing in the forearm. On the left side, a communication between musculocutaneous and median nerve was dissected. Those variations are analytically presented with a brief review on their anatomic and clinical implications. Considerations on their embryological origin are attempted.

  12. Automatic segmentation and co-registration of gated CT angiography datasets: measuring abdominal aortic pulsatility

    Science.gov (United States)

    Wentz, Robert; Manduca, Armando; Fletcher, J. G.; Siddiki, Hassan; Shields, Raymond C.; Vrtiska, Terri; Spencer, Garrett; Primak, Andrew N.; Zhang, Jie; Nielson, Theresa; McCollough, Cynthia; Yu, Lifeng

    2007-03-01

    Purpose: To develop robust, novel segmentation and co-registration software to analyze temporally overlapping CT angiography datasets, with an aim to permit automated measurement of regional aortic pulsatility in patients with abdominal aortic aneurysms. Methods: We perform retrospective gated CT angiography in patients with abdominal aortic aneurysms. Multiple, temporally overlapping, time-resolved CT angiography datasets are reconstructed over the cardiac cycle, with aortic segmentation performed using a priori anatomic assumptions for the aorta and heart. Visual quality assessment is performed following automatic segmentation with manual editing. Following subsequent centerline generation, centerlines are cross-registered across phases, with internal validation of co-registration performed by examining registration at the regions of greatest diameter change (i.e. when the second derivative is maximal). Results: We have performed gated CT angiography in 60 patients. Automatic seed placement is successful in 79% of datasets, requiring either no editing (70%) or minimal editing (less than 1 minute; 12%). Causes of error include segmentation into adjacent, high-attenuating, nonvascular tissues; small segmentation errors associated with calcified plaque; and segmentation of non-renal, small paralumbar arteries. Internal validation of cross-registration demonstrates appropriate registration in our patient population. In general, we observed that aortic pulsatility can vary along the course of the abdominal aorta. Pulsation can also vary within an aneurysm as well as between aneurysms, but the clinical significance of these findings remains unknown. Conclusions: Visualization of large vessel pulsatility is possible using ECG-gated CT angiography, partial scan reconstruction, automatic segmentation, centerline generation, and co-registration of temporally resolved datasets.

  13. Automatic Hardware Generation for Reconfigurable Architectures

    NARCIS (Netherlands)

    Nane, R.

    2014-01-01

    Reconfigurable Architectures (RA) have been gaining popularity rapidly in the last decade for two reasons. First, processor clock frequencies reached threshold values past which power dissipation becomes a very difficult problem to solve. As a consequence, alternatives were sought to keep improving

  14. Towards Execution in Automatic Test Suite Generation

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yixin; WU Jianping

    2001-01-01

    Only test suites that are generated automatically and are executable have practical value. The authors devise and implement the "parametrizing and executizing" algorithm in the TUGEN system and discuss its results, limitations and the reasons for them. Based on this work and on a study of transition executability and predicate satisfiability, the authors further present and implement another algorithm, "executable parametrizing", to overcome the deficiencies of the previous one and to further improve the practicality and efficiency of TUGEN. After analysis and comparison, the authors outline the focus of future research.

  15. Automatic metadata generation for learning objects

    OpenAIRE

    Ramšak, Maja

    2011-01-01

    One of the results of modern era is a massive production and usage of manifold electronic resources. Number of digital collections, digital libraries and repositories who offer these resources to users, usually by search mechanisms, are increasing. This is especially evident in scientific research and education area. Above mentioned services for managing electronic resources use metadata and metadata records, respectively. Many authors present metadata as data about data or information a...

  16. Pseudo-Urban automatic pattern generation

    OpenAIRE

    Saleri, Renato

    2005-01-01

    Our goal in this work is to investigate and experiment with methods for the automatic production of urban or architectural morphologies. So far we have implemented and combined devices based on a heuristic that couples a pseudo-random sequence generator with a graftal formalism of the L-System (Lindenmayer System) type. The initial objective is to produce, simply and "at low cost", textured geometric environments...
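
    The coupling of a pseudo-random generator with an L-system can be sketched in a few lines. The rules, axiom and interpretation below are invented, not the author's production system; the point is only how a seeded pseudo-random generator can pick among alternative rewriting rules.

```python
# Minimal sketch of a stochastic L-system: rewrite a start string with
# production rules for n steps, choosing among alternatives with a seeded
# pseudo-random generator so results are reproducible.
import random

def lsystem(axiom, rules, steps, seed=0):
    rng = random.Random(seed)
    s = axiom
    for _ in range(steps):
        out = []
        for ch in s:
            options = rules.get(ch, [ch])    # each rule maps symbol -> alternatives
            out.append(rng.choice(options))
        s = "".join(out)
    return s

# 'F' = build a block, '[' / ']' = branch a side street; the geometric
# interpretation stage is not shown here.
rules = {"F": ["F[+F]F", "FF"]}
print(lsystem("F", rules, steps=3))
```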

  17. Automatic query formulations in information retrieval.

    Science.gov (United States)

    Salton, G; Buckley, C; Fox, E A

    1983-07-01

    Modern information retrieval systems are designed to supply relevant information in response to requests received from the user population. In most retrieval environments the search requests consist of keywords, or index terms, interrelated by appropriate Boolean operators. Since it is difficult for untrained users to generate effective Boolean search requests, trained search intermediaries are normally used to translate original statements of user need into useful Boolean search formulations. Methods are introduced in this study which reduce the role of the search intermediaries by making it possible to generate Boolean search formulations completely automatically from natural language statements provided by the system patrons. Frequency considerations are used automatically to generate appropriate term combinations as well as Boolean connectives relating the terms. Methods are covered to produce automatic query formulations both in a standard Boolean logic system, as well as in an extended Boolean system in which the strict interpretation of the connectives is relaxed. Experimental results are supplied to evaluate the effectiveness of the automatic query formulation process, and methods are described for applying the automatic query formulation process in practice. PMID:10299297
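
    A toy version of the idea, turning a natural-language request into a Boolean formulation using term frequencies, is sketched below. It is not Salton's actual procedure: the stop list, the document-frequency cutoff and the AND/OR grouping rule are all invented for illustration.

```python
# A small sketch in the spirit of the abstract: AND rare (specific) terms and
# OR the remaining, more common ones.
STOP = {"the", "of", "a", "an", "in", "for", "on", "and", "to", "about"}

def boolean_query(request, doc_freq, rare_cutoff=0.05):
    terms = [w for w in request.lower().split() if w not in STOP]
    rare = [t for t in terms if doc_freq.get(t, 0.0) <= rare_cutoff]
    common = [t for t in terms if t not in rare]
    parts = []
    if rare:
        parts.append(" AND ".join(rare))
    if common:
        parts.append("(" + " OR ".join(common) + ")")
    return " AND ".join(parts)

freq = {"retrieval": 0.02, "information": 0.30, "systems": 0.25}
print(boolean_query("information retrieval systems", freq))
# -> retrieval AND (information OR systems)
```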

  18. Detection and analysis of statistical differences in anatomical shape.

    Science.gov (United States)

    Golland, Polina; Grimson, W Eric L; Shenton, Martha E; Kikinis, Ron

    2005-02-01

    We present a computational framework for image-based analysis and interpretation of statistical differences in anatomical shape between populations. Applications of such analysis include understanding developmental and anatomical aspects of disorders when comparing patients versus normal controls, studying morphological changes caused by aging, or even differences in normal anatomy, for example, differences between genders. Once a quantitative description of organ shape is extracted from input images, the problem of identifying differences between the two groups can be reduced to one of the classical questions in machine learning of constructing a classifier function for assigning new examples to one of the two groups while making as few misclassifications as possible. The resulting classifier must be interpreted in terms of shape differences between the two groups back in the image domain. We demonstrate a novel approach to such interpretation that allows us to argue about the identified shape differences in anatomically meaningful terms of organ deformation. Given a classifier function in the feature space, we derive a deformation that corresponds to the differences between the two classes while ignoring shape variability within each class. Based on this approach, we present a system for statistical shape analysis using distance transforms for shape representation and the support vector machines learning algorithm for the optimal classifier estimation and demonstrate it on artificially generated data sets, as well as real medical studies. PMID:15581813

  19. Spinning gland transcriptomics from two main clades of spiders (order: Araneae) - insights on their molecular, anatomical and behavioral evolution.

    Directory of Open Access Journals (Sweden)

    Francisco Prosdocimi

    Full Text Available Characterized by distinctive evolutionary adaptations, spiders provide a comprehensive system for evolutionary and developmental studies of anatomical organs, including silk and venom production. Here we performed cDNA sequencing using massively parallel sequencers (454 GS-FLX Titanium to generate ∼80,000 reads from the spinning gland of Actinopus spp. (infraorder: Mygalomorphae and Gasteracantha cancriformis (infraorder: Araneomorphae, Orbiculariae clade. Actinopus spp. retains primitive characteristics on web usage and presents a single undifferentiated spinning gland while the orbiculariae spiders have seven differentiated spinning glands and complex patterns of web usage. MIRA, Celera Assembler and CAP3 software were used to cluster NGS reads for each spider. CAP3 unigenes passed through a pipeline for automatic annotation, classification by biological function, and comparative transcriptomics. Genes related to spider silks were manually curated and analyzed. Although a single spidroin gene family was found in Actinopus spp., a vast repertoire of specialized spider silk proteins was encountered in orbiculariae. Astacin-like metalloproteases (meprin subfamily were shown to be some of the most sampled unigenes and duplicated gene families in G. cancriformis since its evolutionary split from mygalomorphs. Our results confirm that the evolution of the molecular repertoire of silk proteins was accompanied by the (i anatomical differentiation of spinning glands and (ii behavioral complexification in the web usage. Finally, a phylogenetic tree was constructed to cluster most of the known spidroins in gene clades. This is the first large-scale, multi-organism transcriptome for spider spinning glands and a first step into a broad understanding of spider web systems biology and evolution.

  20. Automatic Payroll Deposit System.

    Science.gov (United States)

    Davidson, D. B.

    1979-01-01

    The Automatic Payroll Deposit System in Yakima, Washington's Public School District No. 7, directly transmits each employee's salary amount for each pay period to a bank or other financial institution. (Author/MLF)

  1. A User Preference Based Automatic Potential Group Generation Method for Social Media Sharing and Recommendation

    Institute of Scientific and Technical Information of China (English)

    贾大文; 曾承; 彭智勇; 成鹏; 阳志敏; 卢舟

    2012-01-01

    Social media applications have become the mainstream of Web applications; being user-centred and having content generated by users are their pivotal characteristics. Data sharing and recommendation approaches play an important role in dealing with information overload in the social media environment. In this paper, we analyse the flaws of the current group-based information sharing mechanism and the common problems of traditional recommendation approaches, and then propose a novel approach that automatically generates groups for social media sharing and recommendation. Intuitively, the essential idea is to shift the user's preference from the media objects themselves to the interest elements those media objects imply, and then gather users with a common preference, i.e. the same degree of interest in a set of interest elements, into a Common Preference Group (CPG). We also propose a new social media data sharing and recommendation system architecture based on CPG and design a CPG automatic mining algorithm. Comparing our CPG mining algorithm with another algorithm of similar functionality shows that our algorithm is applicable to real social media applications with massive numbers of users.

  2. Methodology for Automatic Generation of Models for Large Urban Spaces Based on GIS Data

    Directory of Open Access Journals (Sweden)

    Sergio Arturo Ordóñez Medina

    2012-12-01

    In the planning and evaluation stages of infrastructure projects, it is necessary to manage huge quantities of information. Cities are very complex systems, which need to be modeled when an intervention is required. Such models allow us to measure the impact of infrastructure changes, simulating hypothetical scenarios and evaluating results. This paper describes a methodology for the automatic generation of urban space models from GIS sources. A Voronoi diagram is used to partition large urban regions and subsequently define zones of interest. Finally, some examples of application models are presented, one used for microsimulation of traffic and another for air pollution simulation.
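
    The partitioning step can be sketched with standard tools. The snippet below is not the authors' pipeline: it builds a Voronoi diagram over invented zone seed points with SciPy and assigns sample locations to cells via a nearest-seed query, which is equivalent to Voronoi membership.

```python
# Minimal sketch of a Voronoi partition of an urban region (points invented).
import numpy as np
from scipy.spatial import Voronoi, cKDTree

seeds = np.array([[0.0, 0.0], [4.0, 1.0], [1.0, 5.0], [5.0, 5.0]])  # zone centres
vor = Voronoi(seeds)                 # cell geometry, if polygons are needed
tree = cKDTree(seeds)                # nearest seed == containing Voronoi cell

addresses = np.array([[0.5, 0.8], [4.2, 4.7]])
_, zone = tree.query(addresses)
print("zone of each address:", zone)
print("number of Voronoi ridges:", len(vor.ridge_vertices))
```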

  3. Anatomical pathways involved in generating and sensing rhythmic whisker movements

    NARCIS (Netherlands)

    L.W.J. Bosman (Laurens); A.R. Houweling (Arthur); C.B. Owens (Cullen); N. Tanke (Nouk); O.T. Shevchouk (Olesya); N. Rahmati (Negah); W.H.T. Teunissen (Wouter); C. Ju (Chiheng); W. Gong (Wei); S.K.E. Koekkoek (Bas); C.I. de Zeeuw (Chris)

    2011-01-01

    textabstractThe rodent whisker system is widely used as a model system for investigating sensorimotor integration, neural mechanisms of complex cognitive tasks, neural development, and robotics. The whisker pathways to the barrel cortex have received considerable attention. However, many subcortical

  4. Anatomical pathways involved in generating and sensing rhythmic whisker movements

    Directory of Open Access Journals (Sweden)

    Laurens W.J. Bosman

    2011-10-01

    Full Text Available The rodent whisker system is widely used as a model system for investigating sensorimotor integration, neural mechanisms of complex cognitive tasks, neural development, and robotics. The whisker pathways to the barrel cortex have received considerable attention. However, many subcortical structures are paramount to the whisker system. They contribute to important processes, like filtering out salient features, integration with other senses and adaptation of the whisker system to the general behavioral state of the animal. We present here an overview of the brain regions and their connections involved in the whisker system. We do not only describe the anatomy and functional roles of the cerebral cortex, but also those of subcortical structures like the striatum, superior colliculus, cerebellum, pontomedullary reticular formation, zona incerta and anterior pretectal nucleus as well as those of level setting systems like the cholinergic, histaminergic, serotonergic and noradrenergic pathways. We conclude by discussing how these brain regions may affect each other and how they together may control the precise timing of whisker movements and coordinate whisker perception.

  5. Fault injection system for automatic testing system

    Institute of Scientific and Technical Information of China (English)

    王胜文; 洪炳熔

    2003-01-01

    Considering the lack of means for confirming fault redundancy in research on Automatic Testing Systems (ATS), a fault-injection system is proposed to study the fault redundancy of an automatic testing system through comparison. By means of a fault-embedded environmental simulation, faults are injected at the input level of the software under test. These faults may induce inherent failure modes and thus bring about unexpected outputs, so that the anticipated goal of the test is attained. The fault-injection hardware consists of a specially developed voltage signal generator, current signal generator and rear drive circuit, and the ATS can operate normally by means of software simulation. The experimental results indicate that the fault-injection system can find deficiencies in the automatic testing software and identify the attribution of fault redundancy. Moreover, some software deficiencies never exposed before can be identified by analysing the testing results.

  6. Lateral laryngopharyngeal diverticulum: anatomical and videofluoroscopic study

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Milton Melciades Barbosa [Universidade Federal do Rio de Janeiro ICB/CCS/UFRJ, Laboratorio de Motilidade Digestiva e Imagem, S. F1-008, Departamento de Anatomia, Rio de Janeiro (Brazil); Koch, Hilton Augusto [Universidade Federal do Rio de Janeiro ICB/CCS/UFRJ, Departamento de Radiologia, Rio de Janeiro (Brazil)

    2005-07-01

    The aims were to characterize the anatomical region where the lateral laryngopharyngeal protrusion occurs and to define if this protrusion is a normal or a pathological entity. This protrusion was observed on frontal contrasted radiographs as an addition image on the upper portion of the laryngopharynx. We carried out a plane-by-plane qualitative anatomical study through macroscopic and mesoscopic surgical dissection on 12 pieces and analyzed through a videofluoroscopic method on frontal incidence the pharyngeal phase of the swallowing process of 33 patients who had a lateral laryngopharyngeal protrusion. The anatomical study allowed us to identify the morphological characteristics that configure the high portion of the piriform recess as a weak anatomical point. The videofluoroscopic study allowed us to observe the laryngopharyngeal protrusion and its relation to pharyngeal repletion of the contrast medium. All kinds of the observed protrusions could be classified as ''lateral laryngopharyngeal diverticula.'' The lateral diverticula were more frequent in older people. These lateral protrusions can be found on one or both sides, usually with a small volume, without sex or side prevalence. This formation is probably a sign of a pharyngeal transference difficulty associated with a deficient tissue resistance in the weak anatomical point of the high portion of the piriform recess. (orig.)

  7. Reliability and effectiveness of clickthrough data for automatic image annotation

    NARCIS (Netherlands)

    Tsikrika, T.; Diou, C.; Vries, A.P. de; Delopoulos, A.

    2010-01-01

    Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manua

  8. Reduction of Dutch Sentences for Automatic Subtitling

    NARCIS (Netherlands)

    Tjong Kim Sang, E.F.; Daelemans, W.; Höthker, A.

    2004-01-01

    We compare machine learning approaches for sentence length reduction for automatic generation of subtitles for deaf and hearing-impaired people with a method which relies on hand-crafted deletion rules. We describe building the necessary resources for this task: a parallel corpus of examples of news

  9. Anatomic Breast Coordinate System for Mammogram Analysis

    DEFF Research Database (Denmark)

    Karemore, Gopal Raghunath; Brandt, S; Karssemeijer, N;

    2011-01-01

    inside the breast. Most risk assessment and CAD modules use a breast region in an image-centered Cartesian x,y coordinate system. Nevertheless, anatomical structure follows curvilinear trajectories. We examined an anatomical breast coordinate system that preserves the anatomical correspondence between the mammograms and allows extracting not only the aligned position but also the orientation aligned with the anatomy of the breast tissue structure. Materials and Methods: The coordinate system used the nipple location as the point A and the border of the pectoral muscle as a line BC. The skin-air ... was represented by geodesic distance (s) from the nipple and a parametric angle, as shown in figure 1. The scoring technique called MTR (mammographic texture resemblance marker) used this breast coordinate system to extract Gaussian derivative features. The features extracted using the (x,y) and the curve...

  10. Standardized anatomic space for abdominal fat quantification

    Science.gov (United States)

    Tong, Yubing; Udupa, Jayaram K.; Torigian, Drew A.

    2014-03-01

    The ability to accurately measure subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) from images is important for improved assessment and management of patients with various conditions such as obesity, diabetes mellitus, obstructive sleep apnea, cardiovascular disease, kidney disease, and degenerative disease. Although imaging and analysis methods to measure the volume of these tissue components have been developed [1, 2], in clinical practice, an estimate of the amount of fat is obtained from just one transverse abdominal CT slice typically acquired at the level of the L4-L5 vertebrae for various reasons including decreased radiation exposure and cost [3-5]. It is generally assumed that such an estimate reliably depicts the burden of fat in the body. This paper sets out to answer two questions related to this issue which have not been addressed in the literature. How does one ensure that the slices used for correlation calculation from different subjects are at the same anatomic location? At what anatomic location do the volumes of SAT and VAT correlate maximally with the corresponding single-slice area measures? To answer these questions, we propose two approaches for slice localization: linear mapping and non-linear mapping which is a novel learning based strategy for mapping slice locations to a standardized anatomic space so that same anatomic slice locations are identified in different subjects. We then study the volume-to-area correlations and determine where they become maximal. We demonstrate on 50 abdominal CT data sets that this mapping achieves significantly improved consistency of anatomic localization compared to current practice. Our results also indicate that maximum correlations are achieved at different anatomic locations for SAT and VAT which are both different from the L4-L5 junction commonly utilized.

  11. Automated anatomical description of pleural thickening towards improvement of its computer-assisted diagnosis

    Science.gov (United States)

    Chaisaowong, Kraisorn; Jiang, Mingze; Faltin, Peter; Merhof, Dorit; Eisenhawer, Christian; Gube, Monika; Kraus, Thomas

    2016-03-01

    Pleural thickenings are caused by asbestos exposure and may evolve into malignant pleural mesothelioma. An early diagnosis plays a key role towards an early treatment and an increased survival rate. Today, pleural thickenings are detected by visual inspection of CT data, which is time-consuming and underlies the physician's subjective judgment. A computer-assisted diagnosis system to automatically assess pleural thickenings has been developed, which includes not only a quantitative assessment with respect to size and location, but also enhances this information with an anatomical description, i.e. lung side (left, right), part of pleura (pars costalis, mediastinalis, diaphragmatica, spinalis), as well as vertical (upper, middle, lower) and horizontal (ventral, dorsal) position. For this purpose, a 3D anatomical model of the lung surface has been manually constructed as a 3D atlas. Three registration sub-steps including rigid, affine, and nonrigid registration align the input patient lung to the 3D anatomical atlas model of the lung surface. Finally, each detected pleural thickening is assigned a set of labels describing its anatomical properties. Through this added information, an enhancement to the existing computer-assisted diagnosis system is presented in order to assure a higher precision and reproducible assessment of pleural thickenings, aiming at the diagnosis of the pleural mesothelioma in its early stage.

  12. Automated Analysis of {sup 123}I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-03-15

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-Carbomethoxy-3β-(4-{sup 123}I-iodophenyl)tropane ({sup 123}I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15,mean age=49±12 years), 9 people with Wilson's disease (M: F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional {sup 123}I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease.

  13. Automatic contrast phase estimation in CT volumes.

    Science.gov (United States)

    Sofka, Michal; Wu, Dijia; Sühling, Michael; Liu, David; Tietjen, Christian; Soza, Grzegorz; Zhou, S Kevin

    2011-01-01

    We propose an automatic algorithm for phase labeling that relies on the intensity changes in anatomical regions due to the contrast agent propagation. The regions (specified by aorta, vena cava, liver, and kidneys) are first detected by a robust learning-based discriminative algorithm. The intensities inside each region are then used in multi-class LogitBoost classifiers to independently estimate the contrast phase. Each classifier forms a node in a decision tree which is used to obtain the final phase label. Combining independent classification from multiple regions in a tree has the advantage when one of the region detectors fail or when the phase training example database is imbalanced. We show on a dataset of 1016 volumes that the system correctly classifies native phase in 96.2% of the cases, hepatic dominant phase (92.2%), hepatic venous phase (96.7%), and equilibrium phase (86.4%) in 7 seconds on average. PMID:22003696

  14. Congenital neck masses: embryological and anatomical perspectives

    Directory of Open Access Journals (Sweden)

    Zahida Rasool

    2013-08-01

    Neck masses are a common problem in the paediatric age group. They occur frequently and pose a diagnostic dilemma to ENT surgeons. Although midline and lateral neck masses differ considerably in their texture and presentation, the embryological basis of these masses, along with the fundamental anatomical knowledge, is often not well understood. This article tries to correlate the embryological, anatomical and clinical perspectives of these masses. [Int J Res Med Sci 2013; 1(4): 329-332]

  15. Anatomical basis for Wilms tumor surgery

    Directory of Open Access Journals (Sweden)

    Trobs R

    2009-01-01

    Wilms tumor surgery requires meticulous planning and sophisticated surgical technique. Detailed anatomical knowledge can facilitate the uneventful performance of tumor nephrectomy and cannot be replaced by advanced and sophisticated imaging techniques. We can define two main goals for surgery: (1) exact staging as well as (2) safe and complete resection of the tumor without spillage. This article reviews the anatomical basis for Wilms tumor surgery. It focuses on the surgical anatomy of the retroperitoneal space, aorta, vena cava and their large branches with lymphatics. Types and management of vascular injuries are discussed.

  16. Automatic page composition with combined image crop and layout metrics

    Science.gov (United States)

    Hunter, Andrew; Greig, Darryl

    2012-03-01

    Automatic layout algorithms simplify the composition of image-rich documents, but they still require users to have sufficient artistry to supply well cropped and composed imagery. Combining an automatic cropping technology with a document layout system enables better results to be produced faster by less-skilled users. This paper reviews prior work in automatic image cropping and automatic page layout and presents a case for a combined crop and layout technology. We describe one such technology in a system for interactive publication design by amateur self-publishers and show that providing an automatic cropping system with additional information about the layout context can enable it to generate a more appropriate set of ranked crop options for a given image. Furthermore, we show that providing an automatic layout system with sets of ranked crop options for images can enable it to compose more appropriate page layouts.

  17. Automatic Keyword Extraction from Individual Documents

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.; Cowley, Wendy E.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
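
    A condensed sketch in the spirit of the described method is given below: candidate keywords are the word sequences between stop words, and each candidate is scored by summing its member words' degree-to-frequency ratios. The stop list and the scoring details are simplified assumptions, not the paper's exact configuration.

```python
# Condensed sketch of stopword-delimited keyword extraction with degree/frequency scoring.
import re
from collections import defaultdict

STOP = {"of", "a", "an", "the", "and", "or", "for", "is", "are", "in", "on", "this"}

def extract_keywords(text, top_n=3):
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOP:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)
    freq, degree = defaultdict(int), defaultdict(int)
    for ph in phrases:
        for w in ph:
            freq[w] += 1
            degree[w] += len(ph)          # co-occurrence degree within the phrase
    def score(ph):
        return sum(degree[w] / freq[w] for w in ph)
    ranked = sorted(phrases, key=score, reverse=True)
    return [" ".join(ph) for ph in ranked[:top_n]]

print(extract_keywords("Compatibility of systems of linear constraints "
                       "over the set of natural numbers is considered."))
```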

  18. Developing a derivatives generator

    Directory of Open Access Journals (Sweden)

    Mircea Petic

    2016-01-01

    The article highlights the particularities of derivational morphology mechanisms that help in extending lexical resources. Some computational approaches to derivational morphology are given for several languages, including Romanian. The paper deals with some preprocessing particularities that are needed in the process of automatic generation. Generative mechanisms are then presented in the form of derivational formal rules, separately for prefixation and suffixation. The article ends with several approaches to the automatic validation of newly generated words.

  19. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations for and the different algorithms of automatic document summarization (ADS), together with a recent state of the art. The book presents the main problems of ADS, the difficulties involved and the solutions provided by the community, as well as recent advances, current applications and trends. The approaches covered are statistical, linguistic and symbolic, and several examples are included to clarify the theoretical concepts. The books currently available in the area of Automatic Document Summarization are not recent. Powerful algorithms have been develop

  20. Automatic utilities auditing

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Colin Boughton [Energy Metering Technology (United Kingdom)

    2000-08-01

    At present, energy audits represent only snapshot situations of the flow of energy. The normal pattern of energy audits as seen through the eyes of an experienced energy auditor is described. A brief history of energy auditing is given. It is claimed that the future of energy auditing lies in automatic meter reading with expert data analysis providing continuous automatic auditing thereby reducing the skill element. Ultimately, it will be feasible to carry out auditing at intervals of say 30 minutes rather than five years.

  1. Handbook of anatomical models for radiation dosimetry

    CERN Document Server

    Eckerman, Keith F

    2010-01-01

    Covering the history of human model development, this title presents the major anatomical and physical models that have been developed for human body radiation protection, diagnostic imaging, and nuclear medicine therapy. It explores how these models have evolved and the role that modern technologies have played in this development.

  2. Anatomically Plausible Surface Alignment and Reconstruction

    DEFF Research Database (Denmark)

    Paulsen, Rasmus R.; Larsen, Rasmus

    2010-01-01

    With the increasing clinical use of 3D surface scanners, there is a need for accurate and reliable algorithms that can produce anatomically plausible surfaces. In this paper, a combined method for surface alignment and reconstruction is proposed. It is based on an implicit surface representation...

  3. HPV Vaccine Effective at Multiple Anatomic Sites

    Science.gov (United States)

    A new study from NCI researchers finds that the HPV vaccine protects young women from infection with high-risk HPV types at the three primary anatomic sites where persistent HPV infections can cause cancer. The multi-site protection also was observed at l

  4. Influences on anatomical knowledge: The complete arguments

    NARCIS (Netherlands)

    Bergman, E.M.; Verheijen, I.W.; Scherpbier, A.J.J.A.; Vleuten, C.P.M. van der; Bruin, A.B. De

    2014-01-01

    Eight factors are claimed to have a negative influence on anatomical knowledge of medical students: (1) teaching by nonmedically qualified teachers, (2) the absence of a core anatomy curriculum, (3) decreased use of dissection as a teaching tool, (4) lack of teaching anatomy in context, (5) integrat

  5. Report of a rare anatomic variant

    DEFF Research Database (Denmark)

    De Brucker, Y; Ilsen, B; Muylaert, C;

    2015-01-01

    We report the CT findings in a case of partial anomalous pulmonary venous return (PAPVR) from the left upper lobe in an adult. PAPVR is an anatomic variant in which one to three pulmonary veins drain into the right atrium or its tributaries, rather than into the left atrium. This results in a lef...

  6. Research on an Intelligent Automatic Turning System

    Directory of Open Access Journals (Sweden)

    Lichong Huang

    2012-12-01

    The equipment manufacturing industry is a strategic industry of a country, and its core is the CNC machine tool. Enhancing independent research on CNC machine technology, especially the open CNC system, is therefore of great significance. This paper presents some key techniques of an Intelligent Automatic Turning System and gives a viable solution for system integration. First, the integrated system architecture and the flexible and efficient workflow for performing the intelligent automatic turning process are illustrated. Second, innovative methods for workpiece feature recognition and expression and for process planning of NC machining are put forward. Third, the cutting tool auto-selection and cutting parameter optimisation solution are generated with an integrated inference combining rule-based reasoning and case-based reasoning. Finally, an actual machining case based on the developed intelligent automatic turning system proves that the presented solutions are valid, practical and efficient.

  7. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of input. We describe a system to derive such time bounds automatically using abstract...

  8. Automatic imitation in a rich social context with virtual characters

    Directory of Open Access Journals (Sweden)

    Xueni ePan

    2015-06-01

    It has been well established that people respond faster when they perform an action that is congruent with an observed action than when they respond with an incongruent action. Here we propose a new method of using interactive Virtual Characters (VCs) to test whether social congruency effects can be obtained in a richer social context with sequential hand-arm actions. Two separate experiments were conducted, exploring whether it is feasible to measure spatial congruency (experiment 1) and anatomical congruency (experiment 2) in response to a virtual character, compared to the same action sequence indicated by three virtual balls. In experiment 1, we found a robust spatial congruency effect for both the VC and the virtual balls, modulated by a social facilitation effect for participants who felt the VC was human. In experiment 2, which allowed for anatomical congruency, a form by congruency interaction provided evidence that participants automatically imitate the actions of the VC but do not imitate the balls. Our method and results build a bridge between studies using minimal stimuli in automatic imitation and studies of mimicry in a rich social interaction, and open new avenues for future research in the area of automatic imitation with a more ecologically valid social interaction.

  9. A color hierarchy for automatic target selection.

    Science.gov (United States)

    Tchernikov, Illia; Fallah, Mazyar

    2010-01-01

    Visual processing of color starts at the cones in the retina and continues through ventral stream visual areas, called the parvocellular pathway. Motion processing also starts in the retina but continues through dorsal stream visual areas, called the magnocellular system. Color and motion processing are functionally and anatomically discrete. Previously, motion processing areas MT and MST have been shown to have no color selectivity to a moving stimulus; the neurons were colorblind whenever color was presented along with motion. This occurs when the stimuli are luminance-defined versus the background and is considered achromatic motion processing. Is motion processing independent of color processing? We find that motion processing is intrinsically modulated by color. Color modulated smooth pursuit eye movements produced upon saccading to an aperture containing a surface of coherently moving dots upon a black background. Furthermore, when two surfaces that differed in color were present, one surface was automatically selected based upon a color hierarchy. The strength of that selection depended upon the distance between the two colors in color space. A quantifiable color hierarchy for automatic target selection has wide-ranging implications from sports to advertising to human-computer interfaces. PMID:20195361

  10. A color hierarchy for automatic target selection.

    Directory of Open Access Journals (Sweden)

    Illia Tchernikov

    Full Text Available Visual processing of color starts at the cones in the retina and continues through ventral stream visual areas, called the parvocellular pathway. Motion processing also starts in the retina but continues through dorsal stream visual areas, called the magnocellular system. Color and motion processing are functionally and anatomically discrete. Previously, motion processing areas MT and MST have been shown to have no color selectivity to a moving stimulus; the neurons were colorblind whenever color was presented along with motion. This occurs when the stimuli are luminance-defined versus the background and is considered achromatic motion processing. Is motion processing independent of color processing? We find that motion processing is intrinsically modulated by color. Color modulated smooth pursuit eye movements produced upon saccading to an aperture containing a surface of coherently moving dots upon a black background. Furthermore, when two surfaces that differed in color were present, one surface was automatically selected based upon a color hierarchy. The strength of that selection depended upon the distance between the two colors in color space. A quantifiable color hierarchy for automatic target selection has wide-ranging implications from sports to advertising to human-computer interfaces.

  11. Algorithm for automatically generating a locally convex envelope curve of a point set

    Institute of Scientific and Technical Information of China (English)

    李世森; 李春阳

    2012-01-01

    In coastal engineering mathematical models, the original terrain data are generally a series of planar point coordinates, and during modeling an artificial boundary of the simulated region must be specified from this point set. An envelope curve of the point set can be obtained based on a pre-set number of search points, and different numbers of search points yield different envelope curves. In general, as the number of search points increases, the area enclosed by the envelope curve also increases, until the convex hull of the region is obtained (the convex hull is generally not the boundary being sought). The convexity degree of an envelope curve is defined as the ratio of the area enclosed by the envelope curve to the area enclosed by the convex hull. To reduce the amount of manual work, this paper presents an algorithm that automatically finds an appropriate envelope curve for a given point set. An adjustment algorithm is also given that makes the resulting envelope curve match the initially given situation more closely. Finally, the algorithm was applied to find the boundary of the Bohai Sea region from boundary point data, with good results. The computed boundary agrees reasonably well with the actual
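
    The convexity degree defined above lends itself to a compact check. The sketch below is a rough illustration, not the authors' program: it computes the convex hull with Andrew's monotone chain, measures polygon areas with the shoelace formula, and returns the ratio of envelope area to hull area. The function names are invented for this example.

```python
# Illustrative sketch of the "convexity degree": area(envelope) / area(convex hull).

def polygon_area(poly):
    """Shoelace formula; `poly` is a list of (x, y) vertices in order."""
    area = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convexity_degree(envelope):
    """Ratio of the envelope's area to its convex hull's area (1.0 means the envelope is the hull)."""
    hull = convex_hull(envelope)
    return polygon_area(envelope) / polygon_area(hull)
```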

  12. Modeling of Automatic Generation Control for Power System Transient, Medium-Term and Long-Term Stability Simulations

    Institute of Scientific and Technical Information of China (English)

    宋新立; 王成山; 仲悟之; 汤涌; 卓峻峰; 旸吴国; 苏志达

    2013-01-01

    In order to dynamically simulate secondary frequency control in large power systems, a new automatic generation control (AGC) model, suitable for power system electromechanical transient, medium-term and long-term dynamic simulation, is proposed based on a hybrid-system modeling approach. It consists mainly of three modules: calculation of the area control error (ACE), which belongs to the continuous dynamics, and the control strategy and the calculation of the unit regulation commands, which belong to the discrete-event dynamics. By interfacing with the existing models in the power system unified dynamic simulation program, the model can simulate the control strategies of large grids based on the A and CPS control performance standards, as well as the main control modes, namely flat frequency control (FFC), constant net interchange control (CIC), and tie-line bias frequency control (TBC). Two simulation cases related to the active power control of the UHVAC tie-lines in China show that the model provides an effective simulation tool for practical grid problems such as limiting tie-line power fluctuations in large grids, coordinating multi-area AGC control strategies, and optimizing secondary frequency control.
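
    The ACE module is the continuous part of the model above. As a hedged illustration only, the snippet below evaluates the textbook area control error for the three control modes named in the abstract; the sign convention, the bias unit (MW/Hz) and the example numbers are assumptions, not values from the paper.

```python
# Textbook ACE for the three AGC modes (illustration only, not the paper's model).

def area_control_error(mode, p_tie, p_tie_sched, freq, freq_ref, bias_mw_per_hz):
    """Return ACE (MW) for a control area.

    mode: 'FFC' (flat frequency), 'CIC' (constant net interchange),
          or 'TBC' (tie-line bias frequency control).
    """
    dp_tie = p_tie - p_tie_sched   # tie-line interchange deviation (MW)
    df = freq - freq_ref           # frequency deviation (Hz)
    if mode == 'FFC':
        return bias_mw_per_hz * df
    if mode == 'CIC':
        return dp_tie
    if mode == 'TBC':
        return dp_tie + bias_mw_per_hz * df
    raise ValueError(f"unknown AGC mode: {mode}")

# Example: 50 MW over-export and a 0.02 Hz frequency dip under tie-line bias control.
ace = area_control_error('TBC', p_tie=550.0, p_tie_sched=500.0,
                         freq=49.98, freq_ref=50.0, bias_mw_per_hz=1500.0)
```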

  13. An automatic system for segmentation, matching, anatomical labeling and measurement of airways from CT images

    DEFF Research Database (Denmark)

    Petersen, Jens; Feragen, Aasa; Lo, P.;

    Purpose: Assessing airway dimensions and attenuation from CT images is useful in the study of diseases affecting the airways such as Chronic Obstructive Pulmonary Disease (COPD). Measurements can be compared between patients and over time if specific airway segments can be identified. However...... differences. Results: The segmentation method has been used on 9711 low dose CT images from the Danish Lung Cancer Screening Trial (DLCST). Manual inspection of thumbnail images revealed gross errors in a total of 44 images. 29 were missing branches at the lobar level and only 15 had obvious false positives...... extracted perpendicularly to and in random positions of the centerline in 7 subjects. Results show an average Dice's coefficient of 89%. The COPD gene phantom was scanned with the DLCST protocol and all interior and exterior diameters were estimated within 0.3 mm of their actual values. Limiting...
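
    The 89% overlap figure quoted above is a Dice coefficient. For readers unfamiliar with the measure, a minimal sketch (the standard definition, not the authors' evaluation code) is:

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for two boolean segmentation masks.
import numpy as np

def dice_coefficient(mask_a, mask_b):
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```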

  14. Medical Image Processing for Fully Integrated Subject Specific Whole Brain Mesh Generation

    Directory of Open Access Journals (Sweden)

    Chih-Yang Hsu

    2015-05-01

    Full Text Available Currently, anatomically consistent segmentation of vascular trees acquired with magnetic resonance imaging requires the use of multiple image processing steps, which, in turn, depend on manual intervention. In effect, segmentation of vascular trees from medical images is time consuming and error prone due to the tortuous geometry and weak signal in small blood vessels. To overcome errors and accelerate the image processing time, we introduce an automatic image processing pipeline for constructing subject-specific computational meshes for the entire cerebral vasculature, including segmentation of ancillary structures: the grey and white matter, cerebrospinal fluid space, skull, and scalp. To demonstrate the validity of the new pipeline, we segmented the entire intracranial compartment, with special attention to the angioarchitecture, from magnetic resonance images acquired for two healthy volunteers. The raw images were processed through our pipeline for automatic segmentation and mesh generation. Due to the partial volume effect and finite resolution, the computational meshes intersect with each other at their respective interfaces. To eliminate anatomically inconsistent overlap, we utilized morphological operations to separate the structures with physiologically sound gap spaces. The resulting meshes exhibit anatomically correct spatial extent and relative positions without intersections. For validation, we computed critical biometrics of the angioarchitecture, the cortical surfaces, the ventricular system, and the cerebrospinal fluid (CSF) spaces and compared them against literature values. Volumes and surface areas of the computational meshes were found to be within physiological ranges. In conclusion, we present an automatic image processing pipeline to automate the segmentation of the main intracranial compartments, including subject-specific vascular trees. These computational meshes can be used in 3D immersive visualization for diagnosis, surgery planning with haptics
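
    The overlap-removal step can be pictured with a small morphological sketch. The code below is only a crude stand-in for the pipeline's interface handling: it drops voxels claimed by two binary masks and then erodes both masks to open a gap; the helper name and the uniform erosion are assumptions made for illustration.

```python
# Crude illustration of removing overlap between two label masks and enforcing a gap.
import numpy as np
from scipy.ndimage import binary_erosion

def separate_with_gap(mask_a, mask_b, gap_voxels=1):
    """Drop voxels claimed by both masks, then erode each mask so the surfaces
    no longer touch (a simple way to enforce a small gap between meshes)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    overlap = a & b                      # partial-volume voxels claimed by both structures
    a = binary_erosion(a & ~overlap, iterations=gap_voxels)
    b = binary_erosion(b & ~overlap, iterations=gap_voxels)
    return a, b
```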

  15. Automatic cortical surface reconstruction of high-resolution T1 echo planar imaging data.

    Science.gov (United States)

    Renvall, Ville; Witzel, Thomas; Wald, Lawrence L; Polimeni, Jonathan R

    2016-07-01

    Echo planar imaging (EPI) is the method of choice for the majority of functional magnetic resonance imaging (fMRI), yet EPI is prone to geometric distortions and thus misaligns with conventional anatomical reference data. The poor geometric correspondence between functional and anatomical data can lead to severe misplacements and corruption of detected activation patterns. However, recent advances in imaging technology have provided EPI data with increasing quality and resolution. Here we present a framework for deriving cortical surface reconstructions directly from high-resolution EPI-based reference images that provide anatomical models exactly geometric distortion-matched to the functional data. Anatomical EPI data with 1 mm isotropic voxel size were acquired using a fast multiple inversion recovery time EPI sequence (MI-EPI) at 7T, from which quantitative T1 maps were calculated. Using these T1 maps, volumetric data mimicking the tissue contrast of standard anatomical data were synthesized using the Bloch equations, and these T1-weighted data were automatically processed using FreeSurfer. The spatial alignment between T2*-weighted EPI data and the synthetic T1-weighted anatomical MI-EPI-based images was improved compared to the conventional anatomical reference. In particular, the alignment near the regions vulnerable to distortion due to magnetic susceptibility differences was improved, and sampling of the adjacent tissue classes outside of the cortex was reduced when using cortical surface reconstructions derived directly from the MI-EPI reference. The MI-EPI method therefore produces high-quality anatomical data that can be automatically segmented with standard software, providing cortical surface reconstructions that are geometrically matched to the BOLD fMRI data. PMID:27079529
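
    To illustrate the contrast-synthesis step, the sketch below substitutes the full Bloch-equation simulation with the ideal inversion-recovery signal S = M0·(1 − 2·exp(−TI/T1)); the inversion time and proton-density scaling are placeholder assumptions, not the sequence parameters used in the study.

```python
# Simplified contrast synthesis from a quantitative T1 map (illustration only).
import numpy as np

def synthesize_t1w(t1_map_ms, ti_ms=1100.0, m0=1.0):
    """Return a synthetic T1-weighted volume from a quantitative T1 map (in ms)."""
    t1 = np.clip(np.asarray(t1_map_ms, dtype=float), 1.0, None)  # avoid divide-by-zero
    signal = m0 * (1.0 - 2.0 * np.exp(-ti_ms / t1))
    return np.abs(signal)  # magnitude image, as produced by typical reconstructions
```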

  16. Anatomically guided voxel-based partial volume effect correction in brain PET : Impact of MRI segmentation

    NARCIS (Netherlands)

    Gutierrez, Daniel; Montandon, Marie-Louise; Assal, Frederic; Allaoua, Mohamed; Ratib, Osman; Loevblad, Karl-Olof; Zaidi, Habib

    2012-01-01

    Partial volume effect is still considered one of the main limitations in brain PET imaging given the limited spatial resolution of current generation PET scanners. The accuracy of anatomically guided partial volume effect correction (PVC) algorithms in brain PET is largely dependent on the performan

  17. Automatic Sequencing for Experimental Protocols

    Science.gov (United States)

    Hsieh, Paul F.; Stern, Ivan

    We present a paradigm and implementation of a system for the specification of the experimental protocols to be used for the calibration of AXAF mirrors. For the mirror calibration, several thousand individual measurements need to be defined. For each measurement, over one hundred parameters need to be tabulated for the facility test conductor and several hundred instrument parameters need to be set. We provide a high level protocol language which allows for a tractable representation of the measurement protocol. We present a procedure dispatcher which automatically sequences a protocol more accurately and more rapidly than is possible by an unassisted human operator. We also present back-end tools to generate printed procedure manuals and database tables required for review by the AXAF program. This paradigm has been tested and refined in the calibration of detectors to be used in mirror calibration.

  18. ANATOMIC RESEARCH OF SUPERIOR CLUNIAL NERVE TRAUMA

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    In order to find the mechanism of superior clunial nerve (SCN) trauma, we dissected and exposed the SCN in 12 corpses (24 sides). Combining this with 100 sides of SCN trauma, we examined the course of the SCN, the relation between the SCN and its neighbouring tissues, and the situation of the SCN when subjected to force. We found that special anatomic characteristics and mechanical elements, such as the course of the SCN, its turning angles, the bony fibrous tube at the iliac crest, the posterior layer of the lumbodorsal fascia and the adipose tissue neighbouring the SCN, are the causes of external force inducing SCN trauma. The anatomical findings provide guidance for the treatment of SCN trauma with an edged needle.

  19. Integrating anatomical pathology to the healthcare enterprise.

    Science.gov (United States)

    Daniel-Le Bozec, Christel; Henin, Dominique; Fabiani, Bettina; Bourquard, Karima; Ouagne, David; Degoulet, Patrice; Jaulent, Marie-Christine

    2006-01-01

    For medical decisions, healthcare professionals need all required information to be both correct and easily available. We address the issue of integrating the anatomical pathology department into the healthcare enterprise. The pathology workflow from order to report, including specimen processing and image acquisition, was modeled. The corresponding integration profiles were addressed by extending the IHE (Integrating the Healthcare Enterprise) initiative. Implementation using DICOM Structured Report (SR) and DICOM Slide-Coordinate Microscopy (SM), respectively, was tested. The two main integration profiles--pathology general workflow and pathology image workflow--rely on 13 transactions based on the HL7 or DICOM standards. We propose a model of the case in anatomical pathology and of the other information entities (orders, image folders and reports) and real-world objects (specimens, tissue samples, slides, etc.). Representing cases in XML schemas based on the DICOM specification allows DICOM image files and reports to be produced and stored in a PACS (Picture Archiving and Communication System). PMID:17108550

  20. ANPS - AUTOMATIC NETWORK PROGRAMMING SYSTEM

    Science.gov (United States)

    Schroer, B. J.

    1994-01-01

    Development of some of the space program's large simulation projects -- like the project which involves simulating the countdown sequence prior to spacecraft liftoff -- requires the support of automated tools and techniques. The number of preconditions which must be met for a successful spacecraft launch and the complexity of their interrelationship account for the difficulty of creating an accurate model of the countdown sequence. Researchers developed ANPS for the NASA Marshall Space Flight Center to assist programmers attempting to model the pre-launch countdown sequence. Incorporating the elements of automatic programming as its foundation, ANPS aids the user in defining the problem and then automatically writes the appropriate simulation program in GPSS/PC code. The program's interactive user dialogue interface creates an internal problem specification file from user responses which includes the time line for the countdown sequence, the attributes for the individual activities which are part of a launch, and the dependent relationships between the activities. The program's automatic simulation code generator receives the file as input and selects appropriate macros from the library of software modules to generate the simulation code in the target language GPSS/PC. The user can recall the problem specification file for modification to effect any desired changes in the source code. ANPS is designed to write simulations for problems concerning the pre-launch activities of space vehicles and the operation of ground support equipment and has potential for use in developing network reliability models for hardware systems and subsystems. ANPS was developed in 1988 for use on IBM PC or compatible machines. The program requires at least 640 KB memory and one 360 KB disk drive, PC DOS Version 2.0 or above, and GPSS/PC System Version 2.0 from Minuteman Software. The program is written in Turbo Prolog Version 2.0. GPSS/PC is a trademark of Minuteman Software. Turbo Prolog

  1. ACCESSORY SPLEEN: A CLINICALLY RELEVANT ANATOMIC ANOMALY

    OpenAIRE

    Prachi Saffar; Amit Kumar; Ankur

    2016-01-01

    The purpose of our study is to emphasize the clinical relevance of the presence of an accessory spleen. It is not only a well-documented anatomic anomaly but also holds special significance in the differential diagnosis of intra-abdominal tumours and lymphadenopathy. MATERIALS AND METHODS Thirty male cadavers from a North Indian population above the age of 60 yrs. were dissected in the Anatomy Department of FMHS, SGT University, Gurgaon, over a period of 5 yrs. (Sep 2010-Aug 2015) and presence

  2. Microstructure and Anatomical Characteristics of Daemonorops margaritae

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Daemonorops margaritae is among the most important commercial rattans in South China. Its microstructure and basic anatomical characteristics, as well as their variation, were investigated. Results show that: 1) the variation along the height is small, while the variation along the radial direction is significant; 2) the fibre length, fibre ratio and distribution density of the vascular bundles in the cross section decrease from cortex to core, while the fibre width, vessel element length and width, parenchyma ratio,...

  3. Pure endoscopic endonasal odontoidectomy: anatomical study

    OpenAIRE

    Messina, Andrea; Bruno, Maria Carmela; Decq, Philippe; Coste, Andre; Cavallo, Luigi Maria; de Divittis, Enrico; Cappabianca, Paolo; Tschabitscher, Manfred

    2007-01-01

    Different disorders may produce irreducible atlanto-axial dislocation with compression of the ventral spinal cord. Among the surgical approaches available for such a condition, transoral resection of the odontoid process is the one most often used. The aim of this anatomical study is to demonstrate the possibility of an anterior cervico-medullary decompression through an endoscopic endonasal approach. Three fresh cadaver heads were used. A modified endonasal endoscopic approach was made in al...

  4. An anatomical and functional model of the human tracheobronchial tree.

    Science.gov (United States)

    Florens, M; Sapoval, B; Filoche, M

    2011-03-01

    The human tracheobronchial tree is a complex branched distribution system in charge of renewing the air inside the acini, which are the gas exchange units. We present here a systematic geometrical model of this system described as a self-similar assembly of rigid pipes. It includes the specific geometry of the upper bronchial tree and a self-similar intermediary tree with a systematic branching asymmetry. It ends with the terminal bronchioles, whose generations range from 8 to 22. Unlike classical models, it does not rely on a simple scaling law. With a limited number of parameters, this model reproduces the morphometric data from various sources (Horsfield K, Dart G, Olson DE, Filley GF, Cumming G. J Appl Physiol 31: 207-217, 1971; Weibel ER. Morphometry of the Human Lung. New York: Academic Press, 1963) and the main characteristics of the ventilation. Studying various types of random variations of the airway sizes, we show that strong correlations are needed to reproduce the measured distributions. Moreover, the ventilation performance is observed to be robust against anatomical variability. The same methodology applied to the rat also permits building a geometrical model that reproduces the anatomical and ventilation characteristics of this animal. This simple model can be directly used as a common description of the entire tree in analytical or numerical studies such as the computation of air flow distribution or aerosol transport. PMID:21183626
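
    A self-similar tree with systematic branching asymmetry can be sketched in a few lines. The recursion below is only a caricature of the model: the two daughter scaling factors, the terminal-diameter cutoff and the starting diameter are illustrative guesses, while the generation cap of 22 echoes the range quoted in the abstract.

```python
# Toy self-similar, asymmetrically branching airway tree (not the authors' fitted model).

def grow_airway_tree(diameter_mm, generation=0, h_major=0.88, h_minor=0.69,
                     terminal_diameter_mm=0.5, max_generation=22):
    """Return a list of (generation, diameter_mm) for every branch in the tree."""
    branches = [(generation, diameter_mm)]
    if diameter_mm <= terminal_diameter_mm or generation >= max_generation:
        return branches  # terminal bronchiole reached
    # Asymmetric bifurcation: the two daughters are scaled by different factors.
    branches += grow_airway_tree(diameter_mm * h_major, generation + 1, h_major, h_minor,
                                 terminal_diameter_mm, max_generation)
    branches += grow_airway_tree(diameter_mm * h_minor, generation + 1, h_major, h_minor,
                                 terminal_diameter_mm, max_generation)
    return branches

# Example: start from a trachea-like 18 mm diameter and count the terminal branches.
tree = grow_airway_tree(18.0)
n_terminal = sum(1 for g, d in tree if d <= 0.5 or g >= 22)
```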

  5. DESIGN OF 3D MODEL OF CUSTOMIZED ANATOMICALLY ADJUSTED IMPLANTS

    Directory of Open Access Journals (Sweden)

    Miodrag Manić

    2015-12-01

    Full Text Available Design and manufacturing of customized implants is a field that has been rapidly developing in recent years. This paper presents an originally developed method for designing a 3D model of customized anatomically adjusted implants. The method is based upon a CT scan of a bone fracture. A CT scan is used to generate a 3D bone model and a fracture model. Using these scans, an indicated location for placing the implant is recognized and the design of a 3D model of customized implants is made. With this method it is possible to design volumetric implants used for replacing a part of the bone or a plate type for fixation of a bone part. The sides of the implants, this one lying on the bone, are fully aligned with the anatomical shape of the bone surface which neighbors the fracture. The given model is designed for implants production utilizing any method, and it is ideal for 3D printing of implants.

  6. Exploring brain function from anatomical connectivity

    Directory of Open Access Journals (Sweden)

    Gorka Zamora-López

    2011-06-01

    Full Text Available The intrinsic relationship between the architecture of the brain and the range of sensory and behavioral phenomena it produces is a relevant question in neuroscience. Here, we review recent knowledge gained on the architecture of the anatomical connectivity by means of complex network analysis. It has been found that corticocortical networks display a few prominent characteristics: (i) modular organization, (ii) abundant alternative processing paths and (iii) the presence of highly connected hubs. Additionally, we present a novel classification of cortical areas of the cat according to the role they play in multisensory connectivity. All these properties represent an ideal anatomical substrate supporting rich dynamical behaviors, as well as facilitating the brain's capacity to process sensory information of different modalities separately and to integrate it towards a comprehensive perception of the real world. The results presented here are mainly based on anatomical data from the cat brain, but we show how further observations suggest that, from worms to humans, the nervous systems of all animals might share fundamental principles of organization.
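
    As a toy illustration of the hub property mentioned in point (iii), the snippet below flags cortical areas whose total degree in a binary adjacency matrix exceeds the mean by more than one standard deviation; this particular threshold is a common heuristic chosen here for demonstration, not necessarily the criterion used in the reviewed studies.

```python
# Identify highly connected hubs in a binary corticocortical adjacency matrix (illustration).
import numpy as np

def find_hubs(adjacency, area_names=None):
    """Return the names (or indices) of areas whose degree exceeds mean + 1 SD."""
    A = np.asarray(adjacency, dtype=bool)
    degree = A.sum(axis=0) + A.sum(axis=1)   # in-degree + out-degree for directed data
    threshold = degree.mean() + degree.std()
    hub_idx = np.flatnonzero(degree > threshold)
    if area_names is None:
        return hub_idx.tolist()
    return [area_names[i] for i in hub_idx]
```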

  7. Anatomical MRI with an atomic magnetometer

    CERN Document Server

    Savukov, I

    2012-01-01

    Ultra-low field (ULF) MRI is a promising method for inexpensive medical imaging with various additional advantages over conventional instruments, such as low weight, low power, portability, absence of artifacts from metals, and high contrast. Anatomical ULF MRI has been successfully implemented with SQUIDs, but SQUIDs have the drawback of requiring cryogens. Atomic magnetometers have sensitivity comparable to SQUIDs and can in principle be used for ULF MRI to replace SQUIDs. Unfortunately, some problems exist due to the sensitivity of atomic magnetometers to magnetic fields and gradients. At low frequency, noise is also substantial and a shielded room is needed to improve sensitivity. In this paper, we show that at 85 kHz the atomic magnetometer can be used to obtain anatomical images. This is the first demonstration of any use of atomic magnetometers for anatomical MRI. The demonstrated resolution is 1.1x1.4 mm2 in about six minutes of acquisition with an SNR of 10. Some applications of the method are discuss...

  8. Automatic Program Reports

    OpenAIRE

    Lígia Maria da Silva Ribeiro; Gabriel de Sousa Torcato David

    2007-01-01

    To profit from the data collected by the SIGARRA academic IS, a systematic set of graphs and statistics has been added to it and are available on-line. This analytic information can be automatically included in a flexible yearly report for each program as well as in a synthesis report for the whole school. Some difficulties in the interpretation of some graphs led to the definition of new key indicators and the development of a data warehouse across the university where effective data consolidation...

  9. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments...... and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices....

  10. Automatic Differentiation Variational Inference

    OpenAIRE

    Kucukelbir, Alp; Tran, Dustin; Ranganath, Rajesh; Gelman, Andrew; Blei, David M.

    2016-01-01

    Probabilistic modeling is iterative. A scientist posits a simple model, fits it to her data, refines it according to her analysis, and repeats. However, fitting complex models to large data is a bottleneck in this process. Deriving algorithms for new models can be both mathematically and computationally challenging, which makes it difficult to efficiently cycle through the steps. To this end, we develop automatic differentiation variational inference (ADVI). Using our method, the scientist on...

  11. Zebrafish Expression Ontology of Gene Sets (ZEOGS): a tool to analyze enrichment of zebrafish anatomical terms in large gene sets.

    Science.gov (United States)

    Prykhozhij, Sergey V; Marsico, Annalisa; Meijsing, Sebastiaan H

    2013-09-01

    The zebrafish (Danio rerio) is an established model organism for developmental and biomedical research. It is frequently used for high-throughput functional genomics experiments, such as genome-wide gene expression measurements, to systematically analyze molecular mechanisms. However, the use of whole embryos or larvae in such experiments leads to a loss of the spatial information. To address this problem, we have developed a tool called Zebrafish Expression Ontology of Gene Sets (ZEOGS) to assess the enrichment of anatomical terms in large gene sets. ZEOGS uses gene expression pattern data from several sources: first, in situ hybridization experiments from the Zebrafish Model Organism Database (ZFIN); second, it uses the Zebrafish Anatomical Ontology, a controlled vocabulary that describes connected anatomical structures; and third, the available connections between expression patterns and anatomical terms contained in ZFIN. Upon input of a gene set, ZEOGS determines which anatomical structures are overrepresented in the input gene set. ZEOGS allows one for the first time to look at groups of genes and to describe them in terms of shared anatomical structures. To establish ZEOGS, we first tested it on random gene selections and on two public microarray datasets with known tissue-specific gene expression changes. These tests showed that ZEOGS could reliably identify the tissues affected, whereas few to no enriched terms were found in the random gene sets. Next we applied ZEOGS to microarray datasets of 24 and 72 h postfertilization zebrafish embryos treated with beclomethasone, a potent glucocorticoid. This analysis resulted in the identification of several anatomical terms related to glucocorticoid-responsive tissues, some of which were stage-specific. Our studies highlight the ability of ZEOGS to extract spatial information from datasets derived from whole embryos, indicating that ZEOGS could be a useful tool to automatically analyze gene expression
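
    Term over-representation of this kind is usually scored with a hypergeometric test. The abstract does not state ZEOGS's exact statistic, so the sketch below is only the generic version: it asks how surprising it is to see a given number of term-annotated genes in the input set, with the example counts invented for illustration.

```python
# Generic hypergeometric over-representation test for an anatomical term (illustration).
from scipy.stats import hypergeom

def term_enrichment_pvalue(n_genome, n_term_genome, n_input, n_term_input):
    """P(observing >= n_term_input genes annotated to a term, given an input set of
    n_input genes drawn from a genome of n_genome genes, n_term_genome of which
    carry the term)."""
    return hypergeom.sf(n_term_input - 1, n_genome, n_term_genome, n_input)

# Example (made-up counts): 25 of 400 input genes hit a term annotated to 600 of 26,000 genes.
p = term_enrichment_pvalue(26000, 600, 400, 25)
```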

  12. Introducing International Journal of Anatomical Variations

    Directory of Open Access Journals (Sweden)

    Tunali S

    2008-06-01

    Full Text Available Welcome to the International Journal of Anatomical Variations (IJAV), an annual journal of anatomical variations and clinical anatomy case reports. After a notable eight-year experience with NEUROANATOMY, we are pleased to introduce IJAV. We are finally announcing our new journal after a three-year feasibility and background study period. We hope that IJAV will fill a gap among anatomy journals. IJAV is an annual, open-access journal with an electronic version only. Although no budget is available for publishing IJAV, the evaluation of submissions and access to the full-text articles are totally free of charge. Our vision for IJAV is to constitute an online compendium of anatomical variations in gross, radiological and surgical anatomy, neuroanatomy, and case reports in clinical anatomy. We believe that cases have an important role in clinical anatomy education. In this respect, we aim to serve as an open source of case reports. We hope that IJAV will be cited in most case reports related to clinical anatomy and anatomical variations in the near future. In NEUROANATOMY, we encouraged the submission of case reports in the area of neuroanatomy. In IJAV, besides neuroanatomy, we will consider case reports in any area related to human anatomy. The scope of IJAV will encompass any anatomical variations in gross, radiological and surgical anatomy. Case reports in clinical anatomy are also welcome. All submitted articles will be peer-reviewed. No processing fee will be charged to authors. One of the most important features of IJAV will be speedy review and rapid publication. We strive to publish an accepted manuscript within three weeks of initial submission. Our young and dynamic Scientific Advisory Board will achieve this objective. A few remarks about our logo and page design: Prof. Dr. M. Mustafa ALDUR designed our logo, being inspired by a quadricuspid aortic valve case reported by Francesco FORMICA et al

  13. Using Protege for Automatic Ontology Instantiation

    OpenAIRE

    Alani, Harith; Kim, Sanghee; Millard, David E.; Weal, Mark J.; Hall, Wendy; Lewis, Paul H.; Shadbolt, Nigel

    2004-01-01

    This paper gives an overview of the use of Protégé in the Artequakt system, which integrated Protégé with a set of natural language tools to automatically extract knowledge about artists from web documents and instantiate a given ontology. Protégé was also linked to structured templates that generate documents from the knowledge fragments it maintains.

  14. Automatic summary evaluation based on text grammars

    OpenAIRE

    Branny, Emilia

    2007-01-01

    In this paper, I describe a method for evaluating automatically generated text summaries. The method is inspired by research in text grammars by Teun Van Dijk. It addresses a text as a complex structure, the elements of which are interconnected both on the level of form and meaning, and the well-formedness of which should be described on both of these levels. The method addresses current problems of summary evaluation methods, especially the problem of quantifying informativity, as well as th...

  15. Automatic event detection for tennis broadcasting

    OpenAIRE

    Enebral González, Javier

    2011-01-01

    Within the digital image processing framework, this thesis is situated in the field of automatic content indexing. Specifically, during the project different methods and techniques will be developed in order to achieve event detection for broadcast tennis videos. Audiovisual indexing consists of the generation of descriptive tags based on the existing audiovisual data. All these tags are used to search the desired material in an efficient way. Televisions and other entities are l...

  16. Extraction of the human cerebral ventricular system from MRI: inclusion of anatomical knowledge and clinical perspective

    Science.gov (United States)

    Aziz, Aamer; Hu, Qingmao; Nowinski, Wieslaw L.

    2004-04-01

    The human cerebral ventricular system is a complex structure that is essential for well-being and whose changes reflect disease. It is clinically imperative that the ventricular system be studied in detail, and for this reason computer-assisted algorithms are essential. We have developed a novel (patent pending) and robust anatomical knowledge-driven algorithm for automatic extraction of the cerebral ventricular system from MRI. The algorithm is not only unique in its image processing aspect but also incorporates knowledge of neuroanatomy, radiological properties, and variability of the ventricular system. The ventricular system is divided into six 3D regions based on the anatomy and its variability. Within each ventricular region a 2D region of interest (ROI) is defined and then further subdivided into sub-regions. Various strict conditions that detect and prevent leakage into the extra-ventricular space are specified for each sub-region based on anatomical knowledge. Each ROI is processed to calculate its local statistics and the local intensity ranges of cerebrospinal fluid, grey matter and white matter; to set a seed point within the ROI; to grow the region directionally in 3D; to check anti-leakage conditions and correct the growing if leakage occurs; and to connect all unconnected grown regions by relaxing the growing conditions. The algorithm was tested qualitatively and quantitatively on normal and pathological MRI cases and worked well. In this paper we discuss in more detail the inclusion of anatomical knowledge in the algorithm and the usefulness of our approach from a clinical perspective.
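
    The growing step can be pictured with a generic seeded region-growing routine. The sketch below is a simplification of the algorithm described above: it grows a 6-connected region from a seed within fixed intensity bounds and a rectangular ROI, and omits the directional growing, per-sub-region anti-leakage conditions and growth correction that the paper's method adds.

```python
# Generic seeded 3D region growing within intensity bounds and a bounding ROI (sketch only).
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, lo, hi, roi_slices):
    """Grow a 6-connected region from `seed` over voxels whose intensity lies in
    [lo, hi], restricted to the ROI given by a tuple of slices."""
    vol = np.asarray(volume)
    grown = np.zeros(vol.shape, dtype=bool)
    roi = np.zeros(vol.shape, dtype=bool)
    roi[roi_slices] = True
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < vol.shape[0] and 0 <= ny < vol.shape[1] and 0 <= nx < vol.shape[2]):
                continue
            if grown[nz, ny, nx] or not roi[nz, ny, nx]:
                continue
            if lo <= vol[nz, ny, nx] <= hi:
                grown[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return grown
```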

  17. An "Anatomic approach" to study the Casimir effect

    Science.gov (United States)

    Intravaia, Francesco; Haakh, Harald; Henkel, Carsten

    2010-03-01

    The Casimir effect, in its simplest definition, is a quantum mechanical force between two objects placed in vacuum. In recent years the Casimir force has received exponentially growing attention from both theorists and experimentalists. A new generation of experiments paved the way for new challenges and exposed some discrepancies in the comparison with theory. Here we isolate different contributions to the Casimir interaction and perform a detailed study to shine new light on this phenomenon. As an example, the contributions of Foucault (eddy current) modes are discussed in different configurations. This "anatomic approach" allows special features to be clearly brought into evidence and unusual behaviors to be explained. It brings new physical understanding of the underlying mechanisms and suggests new ways to engineer the Casimir effect.

  18. Probabilistic anatomical labeling of brain structures using statistical probabilistic anatomical maps

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Su; Lee, Dong Soo; Lee, Byung Il; Lee, Jae Sung; Shin, Hee Won; Chung, June Key; Lee, Myung Chul [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2002-12-01

    The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers look up the Talairach atlas to report the localization of the activations detected by SPM, there is a significant disparity between the MNI templates and the Talairach atlas. That disparity between Talairach and MNI coordinates makes the interpretation of SPM results time-consuming, subjective and inaccurate. The purpose of this study was to develop a program to provide objective anatomical information for each x-y-z position in ICBM coordinates. The program was designed to provide the anatomical information for a given x-y-z position in MNI coordinates based on the statistical probabilistic anatomical map (SPAM) images of ICBM. When an x-y-z position was given to the program, the names of the anatomical structures with non-zero probability and the probabilities that the given position belongs to those structures were tabulated. The program was coded in IDL and Java for easy porting to any operating system or platform. The utility of this program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying the program to the analysis of a PET brain activation study of human memory in which the anatomical information on the activated areas was previously known. Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the programs. The validation study showed the relevance of this program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. These programs will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
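
    The core lookup can be sketched as follows. This is not the original IDL/Java code; it assumes the SPAM volumes have been loaded into a dictionary of probability arrays sharing one voxel-to-MNI affine, and simply tabulates the non-zero probabilities at the voxel nearest a given millimetre coordinate.

```python
# Tabulate non-zero SPAM probabilities at an MNI millimetre coordinate (illustration only).
import numpy as np

def probabilities_at(mni_xyz_mm, spam_maps, affine):
    """Return (structure, probability) pairs with non-zero probability at the
    given MNI position, sorted by descending probability. `spam_maps` is an
    assumed dict of {structure name: 3-D probability array}."""
    # Map the MNI mm coordinate back to voxel indices through the inverse affine.
    hom = np.append(np.asarray(mni_xyz_mm, dtype=float), 1.0)
    i, j, k = np.round(np.linalg.inv(affine) @ hom)[:3].astype(int)
    result = {}
    for name, prob_map in spam_maps.items():
        p = float(prob_map[i, j, k])
        if p > 0.0:
            result[name] = p
    return sorted(result.items(), key=lambda kv: -kv[1])
```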

  19. TOPICAL REVIEW: Anatomical imaging for radiotherapy

    Science.gov (United States)

    Evans, Philip M.

    2008-06-01

    The goal of radiation therapy is to achieve maximal therapeutic benefit expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state of the art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow-up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods which measure the basic physical characteristics of tissue such as their density and biological imaging techniques which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques. Biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. In ART a set of

  20. MARZ: Manual and automatic redshifting software

    Science.gov (United States)

    Hinton, S. R.; Davis, Tamara M.; Lidman, C.; Glazebrook, K.; Lewis, G. F.

    2016-04-01

    The Australian Dark Energy Survey (OzDES) is a 100-night spectroscopic survey underway on the Anglo-Australian Telescope using the fibre-fed 2-degree-field (2dF) spectrograph. We have developed a new redshifting application MARZ with greater usability, flexibility, and the capacity to analyse a wider range of object types than the RUNZ software package previously used for redshifting spectra from 2dF. MARZ is an open-source, client-based, Javascript web-application which provides an intuitive interface and powerful automatic matching capabilities on spectra generated from the AAOmega spectrograph to produce high quality spectroscopic redshift measurements. The software can be run interactively or via the command line, and is easily adaptable to other instruments and pipelines if conforming to the current FITS file standard is not possible. Behind the scenes, a modified version of the AUTOZ cross-correlation algorithm is used to match input spectra against a variety of stellar and galaxy templates, and automatic matching performance for OzDES spectra has increased from 54% (RUNZ) to 91% (MARZ). Spectra not matched correctly by the automatic algorithm can be easily redshifted manually by cycling automatic results, manual template comparison, or marking spectral features.
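
    The automatic matching idea can be illustrated with a toy brute-force version of template cross-correlation. The sketch below resamples each redshifted template onto the observed wavelength grid and keeps the (redshift, template) pair with the highest normalized correlation; the real AUTOZ/MARZ implementation works in log-wavelength space with FFT cross-correlation, continuum filtering and quality metrics that are omitted here.

```python
# Brute-force template cross-correlation redshifting (toy sketch, not the MARZ pipeline).
import numpy as np

def best_redshift(obs_wave, obs_flux, templates, z_grid):
    """Return (best_z, template_name, peak_correlation).

    `templates` is a dict of {name: (rest_wave, rest_flux)}; `z_grid` is an
    array of trial redshifts.
    """
    obs = (obs_flux - obs_flux.mean()) / obs_flux.std()
    best = (None, None, -np.inf)
    for name, (rest_wave, rest_flux) in templates.items():
        for z in z_grid:
            # Redshift the template and resample it onto the observed wavelength grid.
            shifted = np.interp(obs_wave, rest_wave * (1.0 + z), rest_flux,
                                left=np.nan, right=np.nan)
            ok = ~np.isnan(shifted)
            if ok.sum() < 100:          # require reasonable wavelength overlap
                continue
            t = shifted[ok]
            t = (t - t.mean()) / t.std()
            corr = float(np.dot(obs[ok], t) / ok.sum())
            if corr > best[2]:
                best = (float(z), name, corr)
    return best
```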