WorldWideScience

Sample records for automatically generated anatomically

  1. Automatic Dance Lesson Generation

    Science.gov (United States)

    Yang, Yang; Leung, H.; Yue, Lihua; Deng, LiQun

    2012-01-01

    In this paper, an automatic lesson generation system is presented which is suitable in a learning-by-mimicking scenario where the learning objects can be represented as multiattribute time series data. The dance is used as an example in this paper to illustrate the idea. Given a dance motion sequence as the input, the proposed lesson generation…

  2. Algorithms to automatically quantify the geometric similarity of anatomical surfaces

    CERN Document Server

    Boyer, D; St Clair, E; Puente, J; Funkhouser, T; Patel, B; Jernvall, J; Daubechies, I

    2011-01-01

    We describe new approaches for distances between pairs of 2-dimensional surfaces (embedded in 3-dimensional space) that use local structures and global information contained in inter-structure geometric relationships. We present algorithms to automatically determine these distances as well as geometric correspondences. This is motivated by the aspiration of students of natural science to understand the continuity of form that unites the diversity of life. At present, scientists using physical traits to study evolutionary relationships among living and extinct animals analyze data extracted from carefully defined anatomical correspondence points (landmarks). Identifying and recording these landmarks is time consuming and can be done accurately only by trained morphologists. This renders these studies inaccessible to non-morphologists, and causes phenomics to lag behind genomics in elucidating evolutionary patterns. Unlike other algorithms presented for morphological correspondences our approach does not requir...

  3. Automatic generation of documents

    OpenAIRE

    Rosa Gini; Jacopo Pasquini

    2006-01-01

    This paper describes a natural interaction between Stata and markup languages. Stata’s programming and analysis features, together with the flexibility in output formatting of markup languages, allow generation and/or update of whole documents (reports, presentations on screen or web, etc.). Examples are given for both LaTeX and HTML. Stata’s commands are mainly dedicated to analysis of data on a computer screen and output of analysis stored in a log file available to researchers for later re...

  4. Automatic Generation of Technical Documentation

    OpenAIRE

    Reiter, Ehud; Mellish, Chris; Levine, John

    1994-01-01

    Natural-language generation (NLG) techniques can be used to automatically produce technical documentation from a domain knowledge base and linguistic and contextual models. We discuss this application of NLG technology from both a technical and a usefulness (costs and benefits) perspective. This discussion is based largely on our experiences with the IDAS documentation-generation project, and the reactions various interested people from industry have had to IDAS. We hope that this summary of ...

  5. Automatic Generation of Technical Documentation

    CERN Document Server

    Reiter, Ehud; Mellish, Chris; Levine, John

    1994-01-01

    Natural-language generation (NLG) techniques can be used to automatically produce technical documentation from a domain knowledge base and linguistic and contextual models. We discuss this application of NLG technology from both a technical and a usefulness (costs and benefits) perspective. This discussion is based largely on our experiences with the IDAS documentation-generation project, and the reactions various interested people from industry have had to IDAS. We hope that this summary of our experiences with IDAS and the lessons we have learned from it will be beneficial for other researchers who wish to build technical-documentation generation systems.

  6. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  7. Automatic anatomical labeling of the complete cerebral vasculature in mouse models.

    Science.gov (United States)

    Ghanavati, Sahar; Lerch, Jason P; Sled, John G

    2014-07-15

    Study of cerebral vascular structure broadens our understanding of underlying variations, such as pathologies that can lead to cerebrovascular disorders. The development of high resolution 3D imaging modalities has provided us with the raw material to study the blood vessels in small animals such as mice. However, the high complexity and 3D nature of the cerebral vasculature make comparison and analysis of the vessels difficult, time-consuming and laborious. Here we present a framework for automated segmentation and recognition of the cerebral vessels in high resolution 3D images that addresses this need. The vasculature is segmented by following vessel center lines starting from automatically generated seeds and the vascular structure is represented as a graph. Each vessel segment is represented as an edge in the graph and has local features such as length, diameter, and direction, and relational features representing the connectivity of the vessel segments. Using these features, each edge in the graph is automatically labeled with its anatomical name using a stochastic relaxation algorithm. We have validated our method on micro-CT images of C57Bl/6J mice. A leave-one-out test performed on the labeled data set demonstrated the recognition rate for all vessels including major named vessels and their minor branches to be >75%. These automatic segmentation and recognition methods facilitate the comparison of blood vessels in large populations of subjects and allow us to study cerebrovascular variations. PMID:24680868
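
    The labeling step described in this record assigns anatomical names to graph edges from their features. A minimal sketch of the feature-matching idea follows; the paper uses a stochastic relaxation algorithm over relational features, whereas this simplified illustration only does nearest-neighbour assignment, and all vessel names and feature values below are invented, not taken from the paper.

```python
# Simplified sketch of anatomical labeling of vessel-graph edges.
# The paper uses stochastic relaxation with relational features; here we
# only illustrate matching local edge features (length, diameter) against
# an atlas of expected values. All numbers and names are hypothetical.

import math

# Atlas: expected (length_mm, diameter_um) per named vessel (made-up values).
ATLAS = {
    "basilar artery": (4.0, 180.0),
    "middle cerebral artery": (2.5, 120.0),
    "anterior cerebral artery": (2.0, 100.0),
}

def label_edges(edges):
    """Assign each edge (id, length, diameter) the closest atlas label."""
    labels = {}
    for edge_id, length, diameter in edges:
        best = min(
            ATLAS,
            key=lambda name: math.hypot(length - ATLAS[name][0],
                                        (diameter - ATLAS[name][1]) / 50.0),
        )
        labels[edge_id] = best
    return labels

segments = [("e1", 3.9, 175.0), ("e2", 2.1, 105.0)]
print(label_edges(segments))
```

    A real implementation would also use the relational (connectivity) features and iterate label probabilities until they are mutually consistent, which is what the relaxation scheme provides.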

  8. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael

    2014-05-01

    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

  9. Traceability Through Automatic Program Generation

    Science.gov (United States)

    Richardson, Julian; Green, Jeff

    2003-01-01

    Program synthesis is a technique for automatically deriving programs from specifications of their behavior. One of the arguments made in favour of program synthesis is that it allows one to trace from the specification to the program. One way in which traceability information can be derived is to augment the program synthesis system so that manipulations and calculations it carries out during the synthesis process are annotated with information on what the manipulations and calculations were and why they were made. This information is then accumulated throughout the synthesis process, at the end of which, every artifact produced by the synthesis is annotated with a complete history relating it to every other artifact (including the source specification) which influenced its construction. This approach requires modification of the entire synthesis system - which is labor-intensive and hard to do without influencing its behavior. In this paper, we introduce a novel, lightweight technique for deriving traceability from a program specification to the corresponding synthesized code. Once a program has been successfully synthesized from a specification, small changes are systematically made to the specification and the effects on the synthesized program observed. We have partially automated the technique and applied it in an experiment to one of our program synthesis systems, AUTOFILTER, and to the GNU C compiler, GCC. The results are promising: 1. Manual inspection of the results indicates that most of the connections derived from the source (a specification in the case of AUTOFILTER, C source code in the case of GCC) to its generated target (C source code in the case of AUTOFILTER, assembly language code in the case of GCC) are correct. 2. Around half of the lines in the target can be traced to at least one line of the source. 3. Small changes in the source often induce only small changes in the target.
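
    The lightweight technique in this record derives traceability by systematically perturbing the specification and observing which parts of the generated code change. A toy sketch under stated assumptions: the "generator" below is a stand-in function invented for illustration, not AUTOFILTER or GCC.

```python
# Sketch of perturbation-based traceability: mutate one source line at a
# time, re-run the generator, and attribute any changed target lines to
# the mutated source line. The toy generator is purely illustrative.

import difflib

def generate(source_lines):
    # Toy generator: each source line yields a comment plus a body line.
    out = []
    for line in source_lines:
        out.append(f"; from: {line}")
        out.append(line.upper())
    return out

def trace(source_lines):
    base = generate(source_lines)
    links = {}
    for i, line in enumerate(source_lines):
        mutated = list(source_lines)
        mutated[i] = line + "_X"          # small systematic change
        changed = generate(mutated)
        diff = difflib.SequenceMatcher(None, base, changed)
        touched = []
        for tag, a1, a2, _, _ in diff.get_opcodes():
            if tag != "equal":
                touched.extend(range(a1, a2))
        links[i] = touched                # source line i -> target lines
    return links

src = ["load x", "filter y"]
print(trace(src))
```

    This mirrors the paper's observation that small source changes often induce only small, localizable target changes, which is what makes the diff attribution meaningful.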

  10. Semi-Automatic Anatomical Tree Matching for Landmark-Based Elastic Registration of Liver Volumes

    Directory of Open Access Journals (Sweden)

    Klaus Drechsler

    2010-01-01

    One promising approach to registering liver volume acquisitions is based on the branching points of the vessel trees as anatomical landmarks inherently available in the liver. Automated tree matching algorithms have been proposed to automatically find pair-wise correspondences between two vessel trees. However, to the best of our knowledge, none of the existing automatic methods is completely error free. After a review of current literature and methodologies on the topic, we propose an efficient interaction method that can be employed to support tree matching algorithms with important pre-selected correspondences, or to manually correct wrongly matched nodes after an automatic matching. We used this method in combination with a promising automatic tree matching algorithm, also presented in this work. The proposed method was evaluated by 4 participants on a CT dataset from which we derived multiple artificial datasets.
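
    The core matching task in this record is pairing branching points across two trees. A simplified sketch of one baseline strategy, greedy nearest-neighbour matching on 3D coordinates, follows; the paper's algorithm also exploits tree topology, and the coordinates below are hypothetical.

```python
# Minimal sketch of pairwise branching-point correspondence between two
# vessel trees: sort all cross-tree point pairs by distance and accept
# the closest unmatched pairs first. Real tree matching also uses the
# parent/child structure of the trees.

import math

def greedy_match(points_a, points_b, max_dist=5.0):
    """Return {index_in_a: index_in_b} pairs, closest first."""
    pairs = sorted(
        (math.dist(pa, pb), i, j)
        for i, pa in enumerate(points_a)
        for j, pb in enumerate(points_b)
    )
    matched_a, matched_b, result = set(), set(), {}
    for d, i, j in pairs:
        if d <= max_dist and i not in matched_a and j not in matched_b:
            result[i] = j
            matched_a.add(i)
            matched_b.add(j)
    return result

tree1 = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
tree2 = [(10.5, 0.0, 0.0), (0.2, 0.1, 0.0)]
print(greedy_match(tree1, tree2))
```

    The interaction method the paper proposes would let a user pin a few correspondences first, which constrains exactly this kind of automatic matching and lets wrong pairs be corrected afterwards.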

  11. Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature

    OpenAIRE

    Destrieux, Christophe; Fischl, Bruce; Dale, Anders; Halgren, Eric

    2010-01-01

    Precise localization of sulco-gyral structures of the human cerebral cortex is important for the interpretation of morpho-functional data, but requires anatomical expertise and is time consuming because of the brain's geometric complexity. Software developed to automatically identify sulco-gyral structures has improved substantially as a result of techniques providing topologically-correct reconstructions permitting inflated views of the human brain. Here we describe a complete parcellation o...

  12. Automatic Segmentation Framework of Building Anatomical Mouse Model for Bioluminescence Tomography

    OpenAIRE

    Abdullah Alali

    2013-01-01

    Bioluminescence tomography is known as a highly ill-posed inverse problem. To improve reconstruction performance by introducing anatomical structures as a priori knowledge, an automatic segmentation framework is proposed in this paper to extract the mouse whole-body organs and tissues, which makes it possible to build a heterogeneous mouse model for reconstruction of bioluminescence tomography. Finally, an in vivo mouse experiment was conducted to evaluate this framework by using an X...

  13. AUTOMATIC CAPTION GENERATION FOR ELECTRONICS TEXTBOOKS

    OpenAIRE

    Veena Thakur; Trupti Gedam

    2015-01-01

    Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify...

  14. Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature.

    Science.gov (United States)

    Destrieux, Christophe; Fischl, Bruce; Dale, Anders; Halgren, Eric

    2010-10-15

    Precise localization of sulco-gyral structures of the human cerebral cortex is important for the interpretation of morpho-functional data, but requires anatomical expertise and is time consuming because of the brain's geometric complexity. Software developed to automatically identify sulco-gyral structures has improved substantially as a result of techniques providing topologically correct reconstructions permitting inflated views of the human brain. Here we describe a complete parcellation of the cortical surface using standard internationally accepted nomenclature and criteria. This parcellation is available in the FreeSurfer package. First, a computer-assisted hand parcellation classified each vertex as sulcal or gyral, and these were then subparcellated into 74 labels per hemisphere. Twelve datasets were used to develop rules and algorithms (reported here) that produced labels consistent with anatomical rules as well as automated computational parcellation. The final parcellation was used to build an atlas for automatically labeling the whole cerebral cortex. This atlas was used to label an additional 12 datasets, which were found to have good concordance with manual labels. This paper presents a precisely defined method for automatically labeling the cortical surface in standard terminology. PMID:20547229

  15. First performance evaluation of software for automatic segmentation, labeling and reformation of anatomical aligned axial images of the thoracolumbar spine at CT

    International Nuclear Information System (INIS)

    Highlights: •Automatic segmentation and labeling of the thoracolumbar spine. •Automatically generated double-angulated and aligned axial images of spine segments. •High degree of accuracy in the symmetric depiction of anatomical structures. •Time-saving and may improve workflow in daily practice. -- Abstract: Objectives: To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. Material and methods: 77 patients (28 women, 49 men, mean age 65.3 ± 14.4 years) with known or suspected spinal disorders (degenerative spine disease n = 32; disc herniation n = 36; traumatic vertebral fractures n = 9) underwent 64-slice MDCT with thin-slab reconstruction. Time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images from both reconstruction methods were assessed by two observers regarding the accuracy of symmetric depiction of anatomical structures. Results: Double-angulated axial images were created for 1 vertebra in 33 cases, for 2 vertebrae in 28 cases and for 3 vertebrae in 16 cases. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p < 0.05). Automatic reconstruction was time-saving in cases of 2 or more vertebrae (p < 0.05). Both reconstruction methods revealed good image quality with excellent inter-observer agreement. Conclusion: The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time

  16. First performance evaluation of software for automatic segmentation, labeling and reformation of anatomical aligned axial images of the thoracolumbar spine at CT

    Energy Technology Data Exchange (ETDEWEB)

    Scholtz, Jan-Erik, E-mail: janerikscholtz@gmail.com; Wichmann, Julian L.; Kaup, Moritz; Fischer, Sebastian; Kerl, J. Matthias; Lehnert, Thomas; Vogl, Thomas J.; Bauer, Ralf W.

    2015-03-15

    Highlights: •Automatic segmentation and labeling of the thoracolumbar spine. •Automatically generated double-angulated and aligned axial images of spine segments. •High degree of accuracy in the symmetric depiction of anatomical structures. •Time-saving and may improve workflow in daily practice. -- Abstract: Objectives: To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. Material and methods: 77 patients (28 women, 49 men, mean age 65.3 ± 14.4 years) with known or suspected spinal disorders (degenerative spine disease n = 32; disc herniation n = 36; traumatic vertebral fractures n = 9) underwent 64-slice MDCT with thin-slab reconstruction. Time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images from both reconstruction methods were assessed by two observers regarding the accuracy of symmetric depiction of anatomical structures. Results: Double-angulated axial images were created for 1 vertebra in 33 cases, for 2 vertebrae in 28 cases and for 3 vertebrae in 16 cases. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p < 0.05). Automatic reconstruction was time-saving in cases of 2 or more vertebrae (p < 0.05). Both reconstruction methods revealed good image quality with excellent inter-observer agreement. Conclusion: The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time

  17. Automatic generation of multilingual sports summaries

    OpenAIRE

    Hasan, Fahim Muhammad

    2011-01-01

    Natural Language Generation is a subfield of Natural Language Processing, which is concerned with automatically creating human readable text from non-linguistic forms of information. A template-based approach to Natural Language Generation utilizes base formats for different types of sentences, which are subsequently transformed to create the final readable forms of the output. In this thesis, we investigate the suitability of a template-based approach to multilingual Natural Language Generat...
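
    The template-based approach this thesis investigates keeps one surface template per language with shared slots. A minimal sketch, with invented templates and match data:

```python
# Sketch of template-based multilingual generation: each language has a
# base format whose slots are filled from the same event record. The
# templates and team names below are illustrative only.

TEMPLATES = {
    "en": "{home} beat {away} {hs}-{as_} at {venue}.",
    "de": "{home} schlug {away} im {venue} mit {hs}:{as_}.",
}

def summarize(event, lang):
    """Render one event in the requested language."""
    return TEMPLATES[lang].format(**event)

match = {"home": "United", "away": "Rovers", "hs": 2, "as_": 1,
         "venue": "City Park"}
for lang in TEMPLATES:
    print(summarize(match, lang))
```

    Adding a language then means adding one template rather than new generation logic, which is the main appeal (and the main limitation, for languages with richer agreement) of the template approach.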

  18. Generating Semi-Markov Models Automatically

    Science.gov (United States)

    Johnson, Sally C.

    1990-01-01

    Abstract Semi-Markov Specification Interface to SURE Tool (ASSIST) program developed to generate semi-Markov model automatically from description in abstract, high-level language. ASSIST reads input file describing failure behavior of system in abstract language and generates Markov models in format needed for input to Semi-Markov Unreliability Range Evaluator (SURE) program (COSMIC program LAR-13789). Facilitates analysis of behavior of fault-tolerant computer. Written in PASCAL.
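
    The pattern ASSIST automates, expanding an abstract failure description into explicit model states and transition rates, can be sketched as follows. This is a toy illustration, not the ASSIST language or the SURE input format; the rates and failure condition are invented.

```python
# Sketch of generating a Markov model from an abstract description of a
# redundant system: n identical processors, each failing at rate lam,
# with the system failed once fewer than min_working remain.

def generate_model(n_working, lam, min_working):
    """States are counts of working processors; emit failure transitions."""
    transitions = []
    for w in range(n_working, min_working, -1):
        # Any of the w working units may fail, so the total rate is w*lam.
        transitions.append((w, w - 1, w * lam))
    return transitions

for src, dst, rate in generate_model(3, 1e-4, 1):
    print(f"state {src} -> state {dst} at rate {rate:.1e}")
```

    The value of such generation is that the analyst writes the two-line description, and the tool enumerates the (potentially large) state space mechanically and consistently.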

  19. Automatic Test Pattern Generation for Digital Circuits

    Directory of Open Access Journals (Sweden)

    S. Hemalatha

    2014-04-01

    The complexity and density of digital circuits are increasing while quality and reliability requirements keep rising; this leads to high test costs and makes validation more complex. The main aim is to develop a complete behavioral fault simulation and automatic test pattern generation (ATPG) system for digital circuits modeled in Verilog and VHDL. An integrated Automatic Test Generation (ATG) and Automatic Test Executing/Equipment (ATE) system for complex boards is developed here. An approach to using memristors (resistors with memory) in programmable analog circuits is also presented. The main idea consists in a circuit design in which low voltages are applied to memristors during their operation as analog circuit elements, and high voltages are used to program the memristors' states. This way, as demonstrated in recent experiments, the state of the memristors does not essentially change during analog-mode operation. As an example of this approach, several programmable analog circuits were built, demonstrating memristor-based programming of threshold, gain and frequency; in these circuits the role of the memristor is played by a memristor emulator developed by the authors. A multiplexer is developed to generate a class of minimum-transition sequences. The entire hardware is realized as digital logic circuits and the test results are simulated in ModelSim software. The results of this research show that behavioral fault simulation will remain a highly attractive alternative for future generations of VLSI and systems-on-chip (SoC).
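
    The core ATPG goal named in this record, finding an input vector whose output differs between the good and the faulty circuit, can be illustrated on a tiny example. This brute-force sketch is not the record's method (practical ATPG uses path-sensitization search such as the D-algorithm); the circuit and fault are invented.

```python
# Exhaustive test-pattern generation for a stuck-at fault in a tiny
# combinational circuit: out = (a AND b) OR c. We model the internal
# AND-gate output stuck at 0 and search for a detecting input vector.

from itertools import product

def good_circuit(a, b, c):
    return (a and b) or c

def faulty_circuit(a, b, c):
    and_out = 0          # AND-gate output stuck-at-0
    return and_out or c

def find_test_vector():
    for a, b, c in product([0, 1], repeat=3):
        if good_circuit(a, b, c) != faulty_circuit(a, b, c):
            return (a, b, c)   # this vector detects the fault
    return None                # fault is undetectable

print(find_test_vector())
```

    A fault is detected exactly when the faulty value propagates to an observable output, which for this circuit requires a=b=1 (to activate the AND) and c=0 (so the OR does not mask it).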

  20. Different Manhattan project: automatic statistical model generation

    Science.gov (United States)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc.) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.
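
    The top-down propagation of statistical parameters described here can be sketched with a toy recursive subdivision: each district inherits a perturbed copy of its parent's parameters and draws its leaf-level details from them. All distributions and numbers are invented for illustration.

```python
# Sketch of top-down statistical model generation: split a city region
# into quadrants recursively, handing each child a perturbed mean
# building height; leaves draw one building from the local distribution.

import random

def generate_city(region, mean_height, depth, rng):
    """Return a list of (x0, y0, x1, y1, height) buildings."""
    x0, y0, x1, y1 = region
    if depth == 0:
        # Leaf: one building with height drawn around the local mean.
        return [(x0, y0, x1, y1, rng.gauss(mean_height, 5.0))]
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    buildings = []
    for quad in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                 (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        # Child districts inherit a perturbed copy of the parent mean.
        child_mean = max(5.0, mean_height + rng.gauss(0, 10.0))
        buildings += generate_city(quad, child_mean, depth - 1, rng)
    return buildings

rng = random.Random(0)
city = generate_city((0, 0, 100, 100), mean_height=50.0, depth=2, rng=rng)
print(len(city), "buildings")
```

    Ground-truth integration, as in the Manhattan model, would replace the random quadrant split with actual map geometry while keeping the same parameter-propagation scheme.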

  1. Linguistics Computation, Automatic Model Generation, and Intensions

    OpenAIRE

    Nourani, Cyrus F.

    1994-01-01

    Techniques are presented for defining models of computational linguistics theories. The methods of generalized diagrams that were developed by this author for modeling artificial intelligence planning and reasoning are shown to be applicable to models of computation of linguistics theories. It is shown that for extensional and intensional interpretations, models can be generated automatically which assign meaning to computations of linguistics theories for natural languages. Keywords: Computa...

  2. XML-Based Automatic Test Data Generation

    OpenAIRE

    Halil Ibrahim Bulbul; Turgut Bakir

    2012-01-01

    Software engineering aims at increasing quality and reliability while decreasing the cost of the software. Testing is one of the most time-consuming phases of the software development lifecycle. Improvement in software testing results in decrease in cost and increase in quality of the software. Automation in software testing is one of the most popular ways of software cost reduction and reliability improvement. In our work we propose a system called XML-based automatic test data generation th...

  3. Automatic Caption Generation for Electronics Textbooks

    Directory of Open Access Journals (Sweden)

    Veena Thakur

    2014-12-01

    Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify the parts of the textbooks which may be helpful for the students. The caption model describes the entities, attributes, roles and their relationships, plus the constraints that govern the problem domain, and is created to represent the vocabulary and key concepts of that domain. The caption model also identifies the relationships among all the entities within the scope of the problem domain, and commonly identifies their attributes. It defines a vocabulary and is helpful as a communication tool. DOM-Sortze is a framework that enables the semi-automatic generation of the caption module for a technology supported learning system (TSLS) from electronic textbooks. The semi-automatic generation of the caption module entails the identification and elicitation of knowledge from the documents, to which end Natural Language Processing (NLP) techniques are combined with ontologies and heuristic reasoning.

  4. AUTOMATIC CAPTION GENERATION FOR ELECTRONICS TEXTBOOKS

    Directory of Open Access Journals (Sweden)

    Veena Thakur

    2015-10-01

    Automatic or semi-automatic approaches for developing Technology Supported Learning Systems (TSLS) are required to lighten their development cost. The main objective of this paper is to automate the generation of a caption module; it aims at reproducing the way teachers prepare their lessons and the learning material they will use throughout the course. Teachers tend to choose one or more textbooks that cover the contents of their subjects, determine the topics to be addressed, and identify the parts of the textbooks which may be helpful for the students. The caption model describes the entities, attributes, roles and their relationships, plus the constraints that govern the problem domain, and is created to represent the vocabulary and key concepts of that domain. The caption model also identifies the relationships among all the entities within the scope of the problem domain, and commonly identifies their attributes. It defines a vocabulary and is helpful as a communication tool. DOM-Sortze is a framework that enables the semi-automatic generation of the caption module for a technology supported learning system (TSLS) from electronic textbooks. The semi-automatic generation of the caption module entails the identification and elicitation of knowledge from the documents, to which end Natural Language Processing (NLP) techniques are combined with ontologies and heuristic reasoning.

  5. Automatic generation of combinatorial test data

    CERN Document Server

    Zhang, Jian; Ma, Feifei

    2014-01-01

    This book reviews the state-of-the-art in combinatorial testing, with particular emphasis on the automatic generation of test data. It describes the most commonly used approaches in this area - including algebraic construction, greedy methods, evolutionary computation, constraint solving and optimization - and explains major algorithms with examples. In addition, the book lists a number of test generation tools, as well as benchmarks and applications. Addressing a multidisciplinary topic, it will be of particular interest to researchers and professionals in the areas of software testing, combi
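
    Among the approaches the book surveys, greedy methods are the simplest to sketch: repeatedly pick the candidate test covering the most not-yet-covered parameter-value pairs. A minimal pairwise (2-way) illustration, with brute-force candidates (feasible only for tiny parameter spaces):

```python
# Greedy generation of a pairwise covering test set: every pair of
# values of every pair of parameters must appear in some test row.

from itertools import combinations, product

def pairwise_tests(domains):
    # All (param_i, value_a, param_j, value_b) pairs still to cover.
    uncovered = set()
    for (i, vi), (j, vj) in combinations(enumerate(domains), 2):
        for a, b in product(vi, vj):
            uncovered.add((i, a, j, b))
    tests = []
    candidates = list(product(*domains))
    while uncovered:
        # Pick the row covering the most uncovered pairs.
        best = max(candidates, key=lambda row: sum(
            (i, row[i], j, row[j]) in uncovered
            for i, j in combinations(range(len(row)), 2)))
        tests.append(best)
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return tests

# 3 boolean parameters: pairwise needs fewer than the 8 exhaustive rows.
suite = pairwise_tests([[0, 1], [0, 1], [0, 1]])
print(len(suite), "tests:", suite)
```

    The algebraic and constraint-solving constructions the book covers exist precisely because this greedy enumeration does not scale; they build covering arrays without enumerating all candidate rows.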

  6. Automatic generation of simple (statistical) exams

    OpenAIRE

    Grün, Bettina; Zeileis, Achim

    2008-01-01

    Package exams provides a framework for automatic generation of simple (statistical) exams. To employ the tools, users just need to supply a pool of exercises and a master file controlling the layout of the final PDF document. The exercises are specified in separate Sweave files (containing R code for data generation and LaTeX code for problem and solution description) and the master file is a LaTeX document with some additional control commands. This paper gives an overview on the main design...

  7. Automatic Metadata Generation using Associative Networks

    CERN Document Server

    Rodriguez, Marko A; Van de Sompel, Herbert

    2008-01-01

    In spite of its tremendous value, metadata is generally sparse and incomplete, thereby hampering the effectiveness of digital information services. Many of the existing mechanisms for the automated creation of metadata rely primarily on content analysis which can be costly and inefficient. The automatic metadata generation system proposed in this article leverages resource relationships generated from existing metadata as a medium for propagation from metadata-rich to metadata-poor resources. Because of its independence from content analysis, it can be applied to a wide variety of resource media types and is shown to be computationally inexpensive. The proposed method operates through two distinct phases. Occurrence and co-occurrence algorithms first generate an associative network of repository resources leveraging existing repository metadata. Second, using the associative network as a substrate, metadata associated with metadata-rich resources is propagated to metadata-poor resources by means of a discrete...
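
    The two-phase idea in this record, build an associative network from existing metadata, then propagate terms from metadata-rich to metadata-poor resources, can be sketched as follows. The resources and terms are invented, and overlap counting stands in for the article's occurrence/co-occurrence algorithms.

```python
# Sketch of metadata propagation: resources sharing terms are linked in
# an associative network; a poor resource collects term "votes" from its
# neighbours, weighted by overlap.

from collections import Counter

records = {
    "r1": {"neuroscience", "mri", "segmentation"},
    "r2": {"mri", "segmentation", "atlas"},
    "r3": {"mri"},                      # metadata-poor resource
}

def propagate(records, target, top_k=2):
    """Suggest up to top_k new terms for `target`."""
    votes = Counter()
    for rid, terms in records.items():
        if rid == target:
            continue
        overlap = len(terms & records[target])
        if overlap:                      # linked in the associative network
            for t in terms - records[target]:
                votes[t] += overlap
    return [t for t, _ in votes.most_common(top_k)]

print(propagate(records, "r3"))
```

    Because the propagation never inspects resource content, the same mechanism applies uniformly to text, images, or data sets, which is the independence-from-content-analysis point the abstract emphasizes.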

  8. Automatic code generation for distributed robotic systems

    International Nuclear Information System (INIS)

    Hetero Helix is a software environment which supports relatively large robotic system development projects. The environment supports a heterogeneous set of message-passing LAN-connected common-bus multiprocessors, but the programming model seen by software developers is a simple shared memory. The conceptual simplicity of shared memory makes it an extremely attractive programming model, especially in large projects where coordinating a large number of people can itself become a significant source of complexity. We present results from three system development efforts conducted at Oak Ridge National Laboratory over the past several years. Each of these efforts used automatic software generation to create 10 to 20 percent of the system

  9. Automatic Quiz Generation for the Elderly.

    Science.gov (United States)

    Chen, Weiqin; Samuelsen, Jeanette

    2015-01-01

    According to the literature, ageing causes declines in sensory, perceptual, motor and cognitive abilities. The combination of reduced vision, hearing, memory and mobility contributes to isolation and depression. We argue that memory games have potential for enhancing the cognitive ability of the elderly and improving their life quality. In our earlier research, we designed tangible tabletop games to help the elderly remember and talk about the past. In this paper, we report on our further research in the automatic generation of quizzes based on Wikipedia and other online resources for entertainment and memory training of the elderly. PMID:26294527
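
    A fill-in-the-blank quiz generated from encyclopedia-style text, in the spirit of this record, can be sketched minimally as below. The sentence, stop-word list, and the crude "longest content word" salience heuristic are all illustrative assumptions, not the paper's method (which draws on Wikipedia and other online resources).

```python
# Sketch of automatic quiz generation: blank out a salient content word
# of a source sentence to form a recall question.

import re

STOP = {"the", "a", "an", "of", "in", "is", "was", "and", "to"}

def make_quiz(sentence):
    words = re.findall(r"[A-Za-z]+", sentence)
    # Pick the longest non-stop word as the answer (a crude salience cue).
    answer = max((w for w in words if w.lower() not in STOP), key=len)
    question = sentence.replace(answer, "_____", 1)
    return question, answer

q, a = make_quiz("Edvard Grieg was a Norwegian composer and pianist.")
print(q)
print("answer:", a)
```

    For memory training with the elderly, the interesting design work is in selecting personally or historically relevant source sentences; the blanking mechanics themselves stay this simple.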

  10. Suction-generated noise in an anatomic silicon ear model.

    Science.gov (United States)

    Luxenberger, Wolfgang; Lahousen, T; Walch, C

    2012-10-01

    The objectives of this study were to evaluate noise levels generated during micro-suction aural toilet using an anatomic silicone ear model. This was an experimental study. In an anatomic ear model made of silicone, the eardrum was replaced by the 1-cm diameter microphone of a calibrated sound-level measuring device. Ear wax was removed using the sucker of a standard ENT treatment unit (Atmos Servant 5(®)). Mean and peak sound levels during the suction procedure were recorded with suckers of various diameters (Fergusson-Frazier 2.7-4 mm as well as Rosen 1.4-2.5 mm). Average noise levels during normal suction at a distance of 1 cm in front of the eardrum ranged between 97 and 103.5 dB(A) (broadband noise). Peak noise levels reached 118 dB(A). During partial obstruction of the sucker by cerumen or dermal flakes, peak noise levels reached 146 dB(A). Peak noise levels observed during the so-called clarinet phenomenon were independent of the diameter or type of sucker used. Although micro-suction aural toilet is regarded as an established, widespread and usually safe method for cleaning the external auditory canal, some caution seems advisable. Long-lasting suction directly in front of the eardrum, without sound-protecting earwax between sucker and eardrum, should be avoided. In particular, when the clarinet phenomenon occurs, the suction procedure should be aborted immediately. In the presence of dermal flakes blocking the auditory canal, cleaning with micro-forceps or other non-suctioning instruments may be a reasonable alternative. PMID:22740154

  11. Automatic Testcase Generation for Flight Software

    Science.gov (United States)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing, which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammar. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to
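The blackbox approach — enumerating every input a grammar allows, up to a size bound — can be illustrated with a toy recursive generator. The grammar below is invented for illustration; JPF's actual bounded-exhaustive machinery is far more sophisticated.

```python
def expand(grammar, symbol, depth):
    """Return all strings derivable from `symbol` within `depth`
    expansion steps (bounded-exhaustive generation). Tokens not in
    the grammar are treated as terminals."""
    if symbol not in grammar:          # terminal token
        return {symbol}
    if depth == 0:                     # size bound reached
        return set()
    results = set()
    for production in grammar[symbol]:
        parts = [""]
        for tok in production:         # concatenate every combination
            subs = expand(grammar, tok, depth - 1)
            parts = [p + s for p in parts for s in subs]
        results.update(parts)
    return results
```

For a tiny command grammar, this yields every legal script up to the depth bound, the same "all inputs up to prespecified limits" guarantee the abstract describes.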

  12. Automatic generation of digital anthropomorphic phantoms from simulated MRI acquisitions

    Science.gov (United States)

    Lindsay, C.; Gennert, M. A.; Könik, A.; Dasari, P. K.; King, M. A.

    2013-03-01

    In SPECT imaging, motion from patient respiration and body motion can introduce image artifacts that may reduce the diagnostic quality of the images. Simulation studies using numerical phantoms with precisely known motion can help to develop and evaluate motion correction algorithms. Previous methods for evaluating motion correction algorithms used either manual or semi-automated segmentation of MRI studies to produce patient models in the form of XCAT Phantoms, from which one calculates the transformation and deformation between MRI study and patient model. Both manual and semi-automated methods of XCAT Phantom generation require expertise in human anatomy, with the semiautomated method requiring up to 30 minutes and the manual method requiring up to eight hours. Although faster than manual segmentation, the semi-automated method still requires a significant amount of time, is not replicable, and is subject to errors due to the difficulty of aligning and deforming anatomical shapes in 3D. We propose a new method for matching patient models to MRI that extends the previous semi-automated method by eliminating the manual non-rigid transformation. Our method requires no user supervision and therefore does not require expert knowledge of human anatomy to align the NURBs to anatomical structures in the MR image. Our contribution is employing the SIMRI MRI simulator to convert the XCAT NURBs to a voxel-based representation that is amenable to automatic non-rigid registration. Then registration is used to transform and deform the NURBs to match the anatomy in the MR image. We show that our automated method generates XCAT Phantoms more robustly and significantly faster than the previous semi-automated method.

  13. Automatic tool path generation for finish machining

    Energy Technology Data Exchange (ETDEWEB)

    Kwok, Kwan S.; Loucks, C.S.; Driessen, B.J.

    1997-03-01

    A system for automatic tool path generation was developed at Sandia National Laboratories for finish machining operations. The system consists of a commercially available 5-axis milling machine controlled by Sandia developed software. This system was used to remove overspray on cast turbine blades. A laser-based, structured-light sensor, mounted on a tool holder, is used to collect 3D data points around the surface of the turbine blade. Using the digitized model of the blade, a tool path is generated which will drive a 0.375 inch diameter CBN grinding pin around the tip of the blade. A fuzzified digital filter was developed to properly eliminate false sensor readings caused by burrs, holes and overspray. The digital filter was found to successfully generate the correct tool path for a blade with intentionally scanned holes and defects. The fuzzified filter improved the computation efficiency by a factor of 25. For application to general parts, an adaptive scanning algorithm was developed and presented with simulation results. A right pyramid and an ellipsoid were scanned successfully with the adaptive algorithm.
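The record does not detail the fuzzified digital filter; one plausible sketch weights each sensor reading by a fuzzy membership that decays with its deviation from the local median, so spikes caused by burrs, holes or overspray contribute almost nothing. The window size and weighting function below are assumptions, not Sandia's implementation.

```python
import statistics

def fuzzy_filter(readings, window=5, scale=2.0):
    """Replace each reading with a weighted local average; a point's
    weight (fuzzy membership) falls off with its deviation from the
    window median, so isolated spikes get near-zero weight."""
    out = []
    half = window // 2
    for i in range(len(readings)):
        lo, hi = max(0, i - half), min(len(readings), i + half + 1)
        win = readings[lo:hi]
        med = statistics.median(win)
        num = den = 0.0
        for v in win:
            w = 1.0 / (1.0 + (abs(v - med) / scale) ** 2)  # fuzzy weight
            num += w * v
            den += w
        out.append(num / den)
    return out
```

A single 50 mm outlier in an otherwise 1 mm depth profile is suppressed to within a few percent of the true surface, which is the behaviour needed to keep a false reading from deflecting the tool path.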

  14. Generating IDS Attack Pattern Automatically Based on Attack Tree

    Institute of Scientific and Technical Information of China (English)

    向尕; 曹元大

    2003-01-01

    This paper studies generating attack patterns automatically based on attack trees. An extended definition of the attack tree is proposed, and an algorithm for generating attack trees is presented. A method for automatically generating attack patterns from attack trees is shown and tested on concrete attack instances. The results show that the algorithm is effective and efficient: the efficiency of generating attack patterns is improved, and the attack trees can be reused.
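An attack tree of AND/OR nodes yields attack patterns by a simple recursive traversal: an OR node contributes any one child's patterns, while an AND node contributes the combinations of all children's patterns. The dictionary encoding below is an illustrative assumption, not the paper's representation.

```python
def attack_patterns(node):
    """Enumerate attack patterns (sequences of leaf actions) from an
    attack tree of nested AND/OR nodes; nodes without a 'type' key
    are leaves."""
    kind = node.get("type", "leaf")
    if kind == "leaf":
        return [[node["name"]]]
    child_patterns = [attack_patterns(c) for c in node["children"]]
    if kind == "OR":                       # any single child suffices
        return [p for pats in child_patterns for p in pats]
    combined = [[]]                        # AND: every child is required
    for pats in child_patterns:
        combined = [prefix + p for prefix in combined for p in pats]
    return combined
```

Because each subtree is enumerated once and reused wherever it appears, the same traversal naturally supports the reuse of attack trees the abstract mentions.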

  15. Automatic generation of a view to geographical database

    OpenAIRE

    Dunkars, Mats

    2001-01-01

    This thesis concerns object oriented modelling and automatic generalisation of geographic information. The focus, however, is not on traditional paper maps but on screen maps that are automatically generated from a geographical database. Object oriented modelling is used to design screen maps equipped with methods that automatically extract information from a geographical database, generalise the information and display it on a screen. The thesis consists of three parts: a theoreti...

  16. Automatic extraction analysis of the anatomical functional area for normal brain 18F-FDG PET imaging

    International Nuclear Information System (INIS)

    Using self-designed software for the automatic extraction of brain functional areas, the grey-scale distribution of 18F-FDG imaging and the relationships between the 18F-FDG accumulation of brain anatomic functional areas and the 18F-FDG injected dose, glucose level, age, etc., were studied. According to the Talairach coordinate system, after rotation, drift and plastic deformation, the 18F-FDG PET imaging was registered to the Talairach coordinate atlas, and the ratio of the average grey value of each brain anatomic functional area to that of the whole brain was calculated. Furthermore, the relationships between the 18F-FDG accumulation of each brain anatomic functional area and the injected dose, glucose level and age were tested using a multiple stepwise regression model. After image registration, smoothing and extraction, the main cerebral cortex regions of the 18F-FDG PET brain imaging were successfully localized and extracted, including the frontal lobe, parietal lobe, occipital lobe, temporal lobe, cerebellum, brain ventricles, thalamus and hippocampus. The average ratio to the inner reference for each brain anatomic functional area was 1.01 ± 0.15. By multiple stepwise regression, the grey scale of all brain functional areas except the thalamus and hippocampus was negatively correlated with age, with no correlation to blood glucose or dose in any area. For 18F-FDG PET imaging, the brain functional area extraction program could automatically delineate most of the cerebral cortical areas and successfully support brain blood-flow and metabolic studies, but extraction of more detailed areas needs further investigation.

  17. Automatic program generation: future of software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, J.H.

    1979-01-01

    At this moment software development is still more of an art than an engineering discipline. Each piece of software is lovingly engineered, nurtured, and presented to the world as a tribute to the writer's skill. When will this change? When will the craftsmanship be removed and the programs be turned out like so many automobiles from an assembly line? Sooner or later it will happen: economic necessities will demand it. With the advent of cheap microcomputers and supercomputers of ever-doubling capacity, much more software must be produced. The choices are to double the number of programmers, double the efficiency of each programmer, or find a way to produce the needed software automatically. Producing software automatically is the only logical choice. How will automatic programming come about? Some of the preliminary actions which need to be done, and are being done, are to encourage programmer plagiarism of existing software through public library mechanisms, produce well-understood packages such as compilers automatically, develop languages capable of producing software as output, and learn enough about the whole process of programming to be able to automate it. Clearly, the emphasis must not be on efficiency or size, since ever larger and faster hardware is coming.

  18. Automatic segmentation of thoracic and pelvic CT images for radiotherapy planning using implicit anatomic knowledge and organ-specific segmentation strategies

    International Nuclear Information System (INIS)

    Automatic segmentation of anatomical structures in medical images is a valuable tool for efficient computer-aided radiotherapy and surgery planning and an enabling technology for dynamic adaptive radiotherapy. This paper presents the design, algorithms and validation of new software for the automatic segmentation of CT images used for radiotherapy treatment planning. A coarse to fine approach is followed that consists of presegmentation, anatomic orientation and structure segmentation. No user input or a priori information about the image content is required. In presegmentation, the body outline, the bones and lung equivalent tissue are detected. Anatomic orientation recognizes the patient's position, orientation and gender and creates an elastic mapping of the slice positions to a reference scale. Structure segmentation is divided into localization, outlining and refinement, performed by procedures with implicit anatomic knowledge using standard image processing operations. The presented version of algorithms automatically segments the body outline and bones in any gender and patient position, the prostate, bladder and femoral heads for male pelvis in supine position, and the spinal canal, lungs, heart and trachea in supine position. The software was developed and tested on a collection of over 600 clinical radiotherapy planning CT stacks. In a qualitative validation on this test collection, anatomic orientation correctly detected gender, patient position and body region in 98% of the cases, a correct mapping was produced for 89% of thorax and 94% of pelvis cases. The average processing time for the entire segmentation of a CT stack was less than 1 min on a standard personal computer. Two independent retrospective studies were carried out for clinical validation. Study I was performed on 66 cases (30 pelvis, 36 thorax) with dosimetrists, study II on 52 cases (39 pelvis, 13 thorax) with radio-oncologists as experts. The experts rated the automatically produced

  19. Automatically Generating Game Tactics through Evolutionary Learning

    OpenAIRE

    Ponsen, Marc; Munoz-Avila, Hector; Spronck, Pieter; Aha, David W.

    2006-01-01

    The decision-making process of computer-controlled opponents in video games is called game AI. Adaptive game AI can improve the entertainment value of games by allowing computer-controlled opponents to fix weaknesses in the game AI automatically and to respond to changes in human-player tactics. Dynamic scripting is a reinforcement learning approach to adaptive game AI that learns, during gameplay, which game tactics an opponent should select to play effectively. In previous work, the tactics ...

  20. Automatic Structure-Based Code Generation from Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Westergaard, Michael

    2010-01-01

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs...... viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF)....

  1. A New Approach to Fully Automatic Mesh Generation

    Institute of Scientific and Technical Information of China (English)

    闵卫东; 张征明; et al.

    1995-01-01

    Automatic mesh generation is one of the most important parts of CIMS (Computer Integrated Manufacturing System). A method based on mesh grading propagation which automatically produces a triangular mesh in a multiply connected planar region is presented in this paper. The method decomposes the planar region into convex subregions, using algorithms which run in linear time. For every subregion, an algorithm is used to generate shrinking polygons according to boundary gradings and form a Delaunay triangulation between two adjacent shrinking polygons, both in linear time. It automatically propagates boundary gradings into the interior of the region and produces a satisfactory quasi-uniform mesh.

  2. Integrated Coordinated Optimization Control of Automatic Generation Control and Automatic Voltage Control in Regional Power Grids

    OpenAIRE

    Qiu-Yu Lu; Wei Hu; Le Zheng; Yong Min; Miao Li; Xiao-Ping Li; Wei-Chun Ge; Zhi-Ming Wang

    2012-01-01

    Automatic Generation Control (AGC) and Automatic Voltage Control (AVC) are key approaches to frequency and voltage regulation in power systems. However, based on the assumption of decoupling of active and reactive power control, the existing AGC and AVC systems work independently without any coordination. In this paper, a concept and method of hybrid control is introduced to set up an Integrated Coordinated Optimization Control (ICOC) system for AGC and AVC. Concerning the diversity of contro...

  3. Automatic Test Case Generation of C Program Using CFG

    Directory of Open Access Journals (Sweden)

    Sangeeta Tanwer

    2010-07-01

    Software quality assurance is the only way for a software company to gain customer confidence, by removing all possible errors; this can be supported by automatic test case generation. Taking C programs as test objects, this paper explores how to create the CFG of a C program and generate test cases automatically. It explores the feasibility or infeasibility of paths based on the number of iterations. First, the C code is converted to instrumented code; test cases are then generated using symbolic testing and random testing. The system was developed using C#.NET in Visual Studio 2008. In addition, some future research directions are explored.
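The path-enumeration step of CFG-based test generation can be sketched as a bounded depth-first search, where a visit limit per node stands in for the paper's iteration-count feasibility check. The adjacency-dictionary encoding of the CFG is an assumption for illustration.

```python
def bounded_paths(cfg, entry, exit_, max_visits=2):
    """Enumerate paths through a CFG from `entry` to `exit_`, allowing
    each node to be visited at most `max_visits` times, which bounds
    the number of loop iterations explored."""
    paths, stack = [], [(entry, [entry])]
    while stack:
        node, path = stack.pop()
        if node == exit_:
            paths.append(path)
            continue
        for succ in cfg.get(node, []):
            if path.count(succ) < max_visits:
                stack.append((succ, path + [succ]))
    return paths
```

Each returned path then becomes a target for symbolic or random input generation: a path is feasible if some concrete input drives execution along it.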

  4. Automatic Generation of Video Narratives from Shared UGC

    NARCIS (Netherlands)

    Zsombori, V.; Frantzis, M.; Guimarães, R.L.; Ursu, M.; Cesar Garcia, P.S.; Kegel, I.; Craigie, R.; Bulterman, D.C.A.

    2011-01-01

    This paper introduces an evaluated approach to the automatic generation of video narratives from user generated content gathered in a shared repository. In the context of social events, end-users record video material with their personal cameras and upload the content to a common repository. Video n

  5. Automatic generation of matter-of-opinion video documentaries

    NARCIS (Netherlands)

    Bocconi, S.; Nack, F.-M.; Hardman, L.

    2008-01-01

    In this paper we describe a model for automatically generating video documentaries. This allows viewers to specify the subject and the point of view of the documentary to be generated. The domain is matter-of-opinion documentaries based on interviews. The model combines rhetorical presentation patte

  6. Automatic generation of a neural network architecture using evolutionary computation

    NARCIS (Netherlands)

    Vonk, E.; Jain, L.C.; Veelenturf, L.P.J.; Johnson, R.

    1995-01-01

    This paper reports the application of evolutionary computation in the automatic generation of a neural network architecture. It is a usual practice to use trial and error to find a suitable neural network architecture. This is not only time consuming but may not generate an optimal solution for a gi

  7. Vox populi: a tool for automatically generating video documentaries

    OpenAIRE

    Bocconi, S.; Nack, Frank; Hardman, Hazel Lynda

    2005-01-01

    Vox Populi is a system that automatically generates video documentaries. Our application domain is video interviews about controversial topics. Via a Web interface the user selects one of the possible topics and a point of view she would like the generated sequence to present, and the engine selects and assembles video material from the repository to satisfy the user request.

  8. Procedure for the automatic mesh generation of innovative gear teeth

    Directory of Open Access Journals (Sweden)

    Radicella Andrea Chiaramonte

    2016-01-01

    After describing gear wheels with teeth whose two sides are constituted by different involutes, and their importance in engineering applications, we stress the need for an efficient procedure for the automatic mesh generation of innovative gear teeth. First, we describe the procedure for subdividing the tooth profile in the various possible cases; then we show the method for creating the subdivision mesh, defined by two series of curves called meridians and parallels. Finally, we describe how the above procedure for automatic mesh generation solves specific cases that may arise when dealing with teeth having the two sides constituted by different involutes.

  9. Anatomical database generation for radiation transport modeling from computed tomography (CT) scan data

    International Nuclear Information System (INIS)

    Geometric models of the anatomy are used routinely in calculations of the radiation dose in organs and tissues of the body. Development of such models has been hampered by lack of detailed anatomical information on children, and models themselves have been limited to quadratic conic sections. This summary reviews the development of an image processing workstation used to extract anatomical information from routine diagnostic CT procedure. A standard IBM PC/AT microcomputer has been augmented with an automatically loading 9-track magnetic tape drive, an 8-bit 1024 x 1024 pixel graphics adapter/monitor/film recording package, a mouse/trackball assembly, dual 20 MB removable cartridge media, a 72 MB disk drive, and a printer. Software utilized by the workstation includes a Geographic Information System (modified for manipulation of CT images), CAD software, imaging software, and various modules to ease data transfer among the software packages. 5 refs., 3 figs

  10. Anatomical database generation for radiation transport modeling from computed tomography (CT) scan data

    Energy Technology Data Exchange (ETDEWEB)

    Margle, S.M.; Tinnel, E.P.; Till, L.E.; Eckerman, K.F.; Durfee, R.C.

    1989-01-01

    Geometric models of the anatomy are used routinely in calculations of the radiation dose in organs and tissues of the body. Development of such models has been hampered by lack of detailed anatomical information on children, and models themselves have been limited to quadratic conic sections. This summary reviews the development of an image processing workstation used to extract anatomical information from routine diagnostic CT procedure. A standard IBM PC/AT microcomputer has been augmented with an automatically loading 9-track magnetic tape drive, an 8-bit 1024 {times} 1024 pixel graphics adapter/monitor/film recording package, a mouse/trackball assembly, dual 20 MB removable cartridge media, a 72 MB disk drive, and a printer. Software utilized by the workstation includes a Geographic Information System (modified for manipulation of CT images), CAD software, imaging software, and various modules to ease data transfer among the software packages. 5 refs., 3 figs.

  11. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-01-01

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology. PMID:20375445
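The core Memops idea — deriving data-access code from an abstract model — can be sketched by generating a class with validated accessors from an attribute specification. The `Resonance` spec below is a made-up example, not the actual CCPN data model, and real Memops generates far richer APIs (validity checking, I/O, multiple languages).

```python
def generate_class(name, attributes):
    """Emit Python source for a class with type-validated accessors,
    generated from an abstract attribute specification (name -> type)."""
    lines = [f"class {name}:", "    def __init__(self):"]
    for attr in attributes:
        lines.append(f"        self._{attr} = None")
    for attr, typ in attributes.items():
        lines += [
            f"    def set_{attr}(self, value):",
            f"        if not isinstance(value, {typ.__name__}):",
            f"            raise TypeError('{attr} must be {typ.__name__}')",
            f"        self._{attr} = value",
            f"    def get_{attr}(self):",
            f"        return self._{attr}",
        ]
    return "\n".join(lines)

# Hypothetical NMR-flavoured spec; only the generator above is the point.
spec = {"shift": float, "peak_id": int}
source = generate_class("Resonance", spec)
namespace = {}
exec(source, namespace)          # compile the generated API
Resonance = namespace["Resonance"]
```

Because the accessors are generated rather than hand-written, a change to the spec regenerates a consistent API, which is the maintenance benefit the abstract describes.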

  12. Automatic Generation of Tests from Domain and Multimedia Ontologies

    Science.gov (United States)

    Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos

    2011-01-01

    The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have been already reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…

  13. Mppsocgen: A framework for automatic generation of mppsoc architecture

    CERN Document Server

    Kallel, Emna; Baklouti, Mouna; Abid, Mohamed

    2012-01-01

    Automatic code generation is a standard method in software engineering, since it improves code consistency and reduces overall development time. In this context, this paper presents a design flow for automatic VHDL code generation of mppSoC (massively parallel processing System-on-Chip) configurations. Depending on the application requirements, a framework named MppSoCGEN, built on the Netbeans Platform, was developed to accelerate the design process of complex mppSoC systems. Starting from the architecture parameters, VHDL code is automatically generated using a parsing method. Configuration rules are proposed to ensure a correct and valid VHDL configuration. Finally, processor element and network topology models of the mppSoC architecture are automatically generated for the Stratix II device family. The framework runs on Netbeans 5.5 on a 2 GHz Centrino Duo Core machine, with a 22-Kbyte footprint and a 3-second average runtime. Experimental results for reduction al...

  14. A quick scan on possibilities for automatic metadata generation

    NARCIS (Netherlands)

    Benneker, Frank

    2006-01-01

    The Quick Scan is a report on research into useable solutions for automatic generation of metadata or parts of metadata. The aim of this study is to explore possibilities for facilitating the process of attaching metadata to learning objects. This document is aimed at developers of digital learning

  15. Algorithm for automatic generating motion trajectories of plant maintenance robot

    International Nuclear Information System (INIS)

    An algorithm for automatically generating motion trajectories of a robot manipulator is proposed as a new method for operating plant maintenance robots. The algorithm consists of two procedures: generating the motion trajectory of the end effector, and generating the posture of the robot manipulator. Motion trajectories of the end effector are generated using the concept of a repulsive force vector field. The trajectory model, which consists of many virtual springs and mass points, changes its shape under the repulsive forces from obstacles. The posture of the robot manipulator is then generated automatically using the same concept. Using this algorithm, an experiment generating motion with a 7-degree-of-freedom (DOF) manipulator was carried out. As a result, it was confirmed that the proposed method realizes obstacle avoidance during task motion. We are planning to apply this system to nuclear power plants, where it can shorten the preparation and operation periods for maintenance work in the nuclear reactor. (author)
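The spring-and-mass trajectory model with repulsive forces can be sketched in 2D: interior points of a polyline relax under spring forces from their neighbours and repulsive forces from nearby obstacles, so the path bends around obstructions while the endpoints stay fixed. The gains, step size, obstacle model and iteration count below are illustrative assumptions, not the paper's parameters.

```python
def deform_path(path, obstacles, k_spring=0.4, k_rep=1.0,
                influence=1.5, step=0.1, iterations=200):
    """Relax a polyline of virtual mass points: springs pull each point
    toward its neighbours, obstacles (x, y, radius) push points away
    within an influence zone (repulsive force vector field)."""
    pts = [list(p) for p in path]
    for _ in range(iterations):
        for i in range(1, len(pts) - 1):          # endpoints stay fixed
            fx = k_spring * (pts[i-1][0] + pts[i+1][0] - 2 * pts[i][0])
            fy = k_spring * (pts[i-1][1] + pts[i+1][1] - 2 * pts[i][1])
            for ox, oy, radius in obstacles:
                dx, dy = pts[i][0] - ox, pts[i][1] - oy
                dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
                if dist < radius + influence:     # inside influence zone
                    push = k_rep * (radius + influence - dist) / dist
                    fx += push * dx
                    fy += push * dy
            pts[i][0] += step * fx
            pts[i][1] += step * fy
    return pts
```

A straight path through an obstacle's influence zone sags away from the obstacle until the repulsion and spring forces balance, which is the obstacle-avoidance behaviour the experiment confirms.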

  16. Video2GIF: Automatic Generation of Animated GIFs from Video

    OpenAIRE

    Gygli, Michael; Song, Yale; Cao, Liangliang

    2016-01-01

    We introduce the novel problem of automatically generating animated GIFs from video. GIFs are short looping videos with no sound, a combination of image and video that readily captures our attention. GIFs tell a story, express emotion, turn events into humorous moments, and are the new wave of photojournalism. We pose the question: can we automate the entirely manual and elaborate process of GIF creation by leveraging the plethora of user-generated GIF content? We propose a Robu...

  17. AN APPROACH TO GENERATE TEST CASES AUTOMATICALLY USING GENETIC ALGORITHM

    OpenAIRE

    Deepika Sharma*, Dr. Sanjay Tyagi

    2016-01-01

    Software testing is a crucial phase of the software life cycle in software engineering, leading to better software quality and reliability. The main issue in software testing is its incompleteness, due to the vast number of possible test cases, which increases the effort and cost of the software. Generating adequate test cases therefore helps to reduce this effort and cost. The purpose of this research paper is to automatically generate test case...
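A minimal genetic algorithm for test-case generation evolves input tuples toward branch coverage of a function under test; here an archive records the first input that reaches each branch. The toy function, fitness measure and genetic operators below are assumptions for illustration, not the paper's method.

```python
import random

def branches_hit(x, y):
    """Toy function under test: returns the set of branch ids taken."""
    hit = set()
    if x > 10:
        hit.add("b1")
    else:
        hit.add("b2")
    if y % 2 == 0:
        hit.add("b3")
    if x > 10 and y % 2 == 0:
        hit.add("b4")
    return hit

def evolve_tests(pop_size=20, generations=40, seed=1):
    """Evolve (x, y) test inputs toward full branch coverage; the
    archive maps each branch id to the first input that hit it."""
    rng = random.Random(seed)
    pop = [(rng.randint(0, 20), rng.randint(0, 20)) for _ in range(pop_size)]
    archive = {}
    for _ in range(generations):
        for t in pop:
            for b in branches_hit(*t):
                archive.setdefault(b, t)
        # select the fitter half (more branches per individual)
        ranked = sorted(pop, key=lambda t: len(branches_hit(*t)), reverse=True)
        parents = ranked[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                    # one-point crossover
            if rng.random() < 0.3:                  # mutation
                child = (child[0] + rng.randint(-3, 3), child[1])
            children.append(child)
        pop = parents + children
    return archive
```

The archive is the generated test suite: one representative input per covered branch, which directly addresses the cost issue of exhaustively enumerating test cases.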

  18. An Application of Reverse Engineering to Automatic Item Generation: A Proof of Concept Using Automatically Generated Figures

    Science.gov (United States)

    Lorié, William A.

    2013-01-01

    A reverse engineering approach to automatic item generation (AIG) was applied to a figure-based publicly released test item from the Organisation for Economic Cooperation and Development (OECD) Programme for International Student Assessment (PISA) mathematical literacy cognitive instrument as part of a proof of concept. The author created an item…

  19. A semi-automatic framework of measuring pulmonary arterial metrics at anatomic airway locations using CT imaging

    Science.gov (United States)

    Jin, Dakai; Guo, Junfeng; Dougherty, Timothy M.; Iyer, Krishna S.; Hoffman, Eric A.; Saha, Punam K.

    2016-03-01

    Pulmonary vascular dysfunction has been implicated in smoking-related susceptibility to emphysema. With the growing interest in characterizing arterial morphology for early evaluation of the vascular role in pulmonary diseases, there is an increasing need for a standardized framework for arterial morphological assessment at airway segmental levels. In this paper, we present an effective and robust semi-automatic framework to segment pulmonary arteries at different anatomic airway branches and measure their cross-sectional area (CSA). The method starts with user-specified endpoints of a target arterial segment, entered through a custom-built graphical user interface. It then automatically detects the centerline joining the endpoints, determines the local structure orientation and computes the CSA along the centerline after filtering out adjacent pulmonary structures, such as veins or airway walls. Several new techniques are presented, including a collision-impact based cost function for centerline detection, radial sample-line based CSA computation, and outlier analysis of radial distances to subtract adjacent neighboring structures from the CSA measurement. The method was applied to repeat-scan pulmonary multirow detector CT (MDCT) images from ten healthy subjects (age: 21-48 yrs, mean: 28.5 yrs; 7 female) at functional residual capacity (FRC). The reproducibility of computed arterial CSA from four airway segmental regions in the middle and lower lobes was analyzed. The overall repeat-scan intra-class correlation (ICC) of the computed CSA from all four airway regions in the ten subjects was 96%, with the maximum ICC found in the LB10 and RB4 regions.

  20. Automatic control system generation for robot design validation

    Science.gov (United States)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

The specification and drawings present a new method, system, software product and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting the robot design into a generic robotic description in a predetermined format, then generating a control system from the generic robotic description, and finally updating the robot design parameters of the robotic system with an analysis tool using both the generic robotic description and the control system.

  1. Automatic generation of neural network architecture using evolutionary computation

    CERN Document Server

    Vonk, E

    1997-01-01

This book describes the application of evolutionary computation in the automatic generation of a neural network architecture. The architecture has a significant influence on the performance of the neural network. It is the usual practice to use trial and error to find a suitable neural network architecture for a given problem. The process of trial and error is not only time-consuming but may not generate an optimal network. The use of evolutionary computation is a step towards automation in neural network architecture generation. An overview of the field of evolutionary computation is presented

  2. An automatic control system for a power-generating unit

    Energy Technology Data Exchange (ETDEWEB)

    Itelman, U.R.; Mankin, M.N.; Mikhailova, I.V.

    1979-02-05

There exists an automatic control system for a power-generating unit, which contains a load regulator for the turbine, connected to the output of the actuator valve servo motor together with the slide valve of the regulator measuring channel, a boiler productivity regulator, and a frequency-compensation unit for controlling the input power; the output from this unit is connected to the inputs of the turbine load regulator and the boiler productivity regulator. In this automatic control system, the compensation unit is manufactured in the form of a frequency deviation sensor connected to the voltage transformer of the generator; it is a complex electronic conversion component. In order to simplify the design of the compensation unit, it is instead manufactured as a motion sensor, which is mechanically connected to the slide valve. This connection is made through the slide box of the valve or through the valve position rod.

  3. Automatic Generation of Thematically Focused Information Portals from Web Data

    OpenAIRE

    Sizov, Sergej

    2005-01-01

    Finding the desired information on the Web is often a hard and time-consuming task. This thesis presents the methodology of automatic generation of thematically focused portals from Web data. The key component of the proposed Web retrieval framework is the thematically focused Web crawler that is interested only in a specific, typically small, set of topics. The focused crawler uses classification methods for filtering of fetched documents and identifying most likely relevant Web source...

  4. MadEvent: automatic event generation with MadGraph

    International Nuclear Information System (INIS)

    We present a new multi-channel integration method and its implementation in the multi-purpose event generator MadEvent, which is based on MadGraph. Given a process, MadGraph automatically identifies all the relevant subprocesses, generates both the amplitudes and the mappings needed for an efficient integration over the phase space, and passes them to MadEvent. As a result, a process-specific, stand-alone code is produced that allows the user to calculate cross sections and produce unweighted events in a standard output format. Several examples are given for processes that are relevant for physics studies at present and forthcoming colliders. (author)
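The multi-channel integration idea above can be illustrated in miniature: draw a sample from channel i with probability alpha_i and weight the integrand by the mixture density g(x) = sum_i alpha_i g_i(x). This is a toy one-dimensional sketch of the principle behind MadEvent's phase-space integration, not its actual mappings; all names are illustrative.

```python
import math
import random

def multichannel_estimate(f, channels, alphas, n=20000, seed=7):
    """Multi-channel importance sampling over [0, 1].  `channels` is a
    list of (sampler, density) pairs; alpha_i are the channel weights."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u, idx, acc = rng.random(), 0, alphas[0]
        while u > acc:                       # pick a channel by weight
            idx += 1
            acc += alphas[idx]
        x = channels[idx][0](rng)
        # weight by the full mixture density, not the chosen channel alone
        g = sum(a * dens(x) for a, (_, dens) in zip(alphas, channels))
        total += f(x) / g
    return total / n

# integral of f(x) = 2x over [0, 1] is exactly 1; the second channel
# (density 2x, sampled as sqrt(u)) matches the peak of the integrand
flat = (lambda rng: rng.random(), lambda x: 1.0)
peaked = (lambda rng: math.sqrt(rng.random()), lambda x: 2.0 * x)
estimate = multichannel_estimate(lambda x: 2.0 * x, [flat, peaked], [0.5, 0.5])
```

Adding a channel shaped like the integrand's peak is exactly what reduces the variance in the multi-dimensional phase-space case.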

  5. Progressive Concept Evaluation Method for Automatically Generated Concept Variants

    Directory of Open Access Journals (Sweden)

    Woldemichael Dereje Engida

    2014-07-01

Conceptual design is one of the most critical and important phases of the design process, yet it has the least computer support. The Conceptual Design Support Tool (CDST) is a system developed to automatically generate concepts for each subfunction in a functional structure. The automated concept generation process results in a large number of concept variants, which require a thorough evaluation process to select the best design. To address this, a progressive concept evaluation technique consisting of absolute comparison, concept screening and a weighted decision matrix using the analytical hierarchy process (AHP) is proposed to eliminate infeasible concepts at each stage. The software implementation of the proposed method is demonstrated.
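The AHP step in the evaluation chain above derives criterion weights from a reciprocal pairwise-comparison matrix; a common way to compute them is the principal-eigenvector method via power iteration. The sketch below shows only this step, under the assumption of a well-formed reciprocal matrix; the paper's full screening pipeline is richer.

```python
def ahp_weights(pairwise, iters=50):
    """Criterion weights from an AHP pairwise-comparison matrix by
    power iteration (principal-eigenvector method)."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        # multiply by the matrix, then renormalize to sum to 1
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w
```

For a perfectly consistent matrix (entry i,j equal to w_i/w_j) the iteration recovers the underlying weights exactly.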

  6. Automatic generation of executable communication specifications from parallel applications

    Energy Technology Data Exchange (ETDEWEB)

    Pakin, Scott [Los Alamos National Laboratory; Wu, Xing [NCSU; Mueller, Frank [NCSU

    2011-01-19

Portable parallel benchmarks are widely used and highly effective for (a) the evaluation, analysis and procurement of high-performance computing (HPC) systems and (b) quantifying the potential benefits of porting applications to new hardware platforms. Yet, past techniques for synthesizing parameterized, hand-coded HPC benchmarks prove insufficient for today's rapidly evolving scientific codes, particularly those involving multi-scale science modeling or domain-specific libraries. To address these problems, this work contributes novel methods to automatically generate highly portable and customizable communication benchmarks from HPC applications. We utilize ScalaTrace, a lossless yet scalable parallel application tracing framework, to collect selected aspects of the run-time behavior of HPC applications, including communication operations and execution time, while abstracting away the details of the computation proper. We subsequently generate benchmarks with identical run-time behavior from the collected traces. A unique feature of our approach is that we generate benchmarks in coNCePTuaL, a domain-specific language that enables the expression of sophisticated communication patterns using a rich and easily understandable grammar, yet compiles to ordinary C + MPI. Experimental results demonstrate that the generated benchmarks preserve the run-time behavior - including both the communication pattern and the execution time - of the original applications. Such automated benchmark generation is particularly valuable for proprietary, export-controlled, or classified application codes: when supplied to a third party, our auto-generated benchmarks ensure performance fidelity without the risks associated with releasing the original code. To our knowledge, this ability to automatically generate performance-accurate benchmarks from parallel applications is without precedent.
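The trace-to-benchmark idea can be sketched as a tiny code generator that replays a recorded event list as a C + MPI loop body. This is purely illustrative of the translation step: ScalaTrace's real pipeline emits coNCePTuaL rather than C, and `buf`, `peer` and `st` are hypothetical names.

```python
def generate_benchmark(trace):
    """Emit C + MPI statements replaying a trace of
    ('compute', seconds) / ('send', bytes) / ('recv', bytes) events."""
    lines = []
    for op, arg in trace:
        if op == "compute":
            # stand in for the elided computation with a sleep of equal length
            lines.append(f"    usleep({int(arg * 1e6)}); /* computation stand-in */")
        elif op == "send":
            lines.append(f"    MPI_Send(buf, {arg}, MPI_BYTE, peer, 0, MPI_COMM_WORLD);")
        elif op == "recv":
            lines.append(f"    MPI_Recv(buf, {arg}, MPI_BYTE, peer, 0, MPI_COMM_WORLD, &st);")
        else:
            raise ValueError(f"unknown trace op: {op}")
    return "\n".join(lines)
```

Replacing computation with timed sleeps while replaying communication verbatim is what lets such a benchmark reproduce run-time behavior without disclosing the source.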

  7. Visual definition of procedures for automatic virtual scene generation

    CERN Document Server

    Lucanin, Drazen

    2012-01-01

With more and more digital media, especially in the field of virtual reality where detailed and convincing scenes are much in demand, procedural scene generation is a great help to artists. A problem is that defining scene descriptions through these procedures usually requires knowledge of formal language grammars and programming theory, and involves manually editing textual files using a strict syntax, making it less intuitive to use. Luckily, graphical user interfaces have made a lot of tasks on computers easier to perform, and out of the belief that creating computer programs can also be one of them, visual programming languages (VPLs) have emerged. The goal of VPLs is to shift more work from the programmer to the integrated development environment (IDE), making programming a more user-friendly task. In this thesis, an approach of using a VPL for defining procedures that automatically generate virtual scenes is presented. The methods required to build a VPL are presented, including a novel method of generating read...

  8. Automatic generation of alignments for 3D QSAR analyses.

    Science.gov (United States)

    Jewell, N E; Turner, D B; Willett, P; Sexton, G J

    2001-01-01

    Many 3D QSAR methods require the alignment of the molecules in a dataset, which can require a fair amount of manual effort in deciding upon a rational basis for the superposition. This paper describes the use of FBSS, a program for field-based similarity searching in chemical databases, for generating such alignments automatically. The CoMFA and CoMSIA experiments with several literature datasets show that the QSAR models resulting from the FBSS alignments are broadly comparable in predictive performance with the models resulting from manual alignments. PMID:11774998

  9. Automatic generation of alignments for 3D QSAR analyses

    OpenAIRE

    Jewell, N.E.; D.B. Turner; Willett, P.; Sexton, G.J.

    2001-01-01

Many 3D QSAR methods require the alignment of the molecules in a dataset, which can require a fair amount of manual effort in deciding upon a rational basis for the superposition. This paper describes the use of FBSS, a program for field-based similarity searching in chemical databases, for generating such alignments automatically. The CoMFA and CoMSIA experiments with several literature datasets show that the QSAR models resulting from the FBSS alignments are broadly comparable in predictive...

  10. Automatic generation of application specific FPGA multicore accelerators

    DEFF Research Database (Denmark)

    Hindborg, Andreas Erik; Schleuniger, Pascal; Jensen, Nicklas Bo;

    2014-01-01

High performance computing systems make increasing use of hardware accelerators to improve performance and power properties. For large high-performance FPGAs to be successfully integrated in such computing systems, methods to raise the abstraction level of FPGA programming are required. In this paper we propose a tool flow which automatically generates highly optimized hardware multicore systems based on parameters. Profiling feedback is used to adjust these parameters to improve performance and lower the power consumption. For an image processing application we show that our tools are able...

  11. Automatic Generation of 3D Building Models with Multiple Roofs

    Institute of Scientific and Technical Information of China (English)

    Kenichi Sugihara; Yoshitugu Hayashi

    2008-01-01

Based on building footprints (building polygons) on digital maps, we propose a GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygons). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we proposed a polygon expression that is useful in deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.

  12. Integrated Coordinated Optimization Control of Automatic Generation Control and Automatic Voltage Control in Regional Power Grids

    Directory of Open Access Journals (Sweden)

    Qiu-Yu Lu

    2012-09-01

Automatic Generation Control (AGC) and Automatic Voltage Control (AVC) are key approaches to frequency and voltage regulation in power systems. However, based on the assumption of decoupled active and reactive power control, the existing AGC and AVC systems work independently without any coordination. In this paper, a concept and method of hybrid control is introduced to set up an Integrated Coordinated Optimization Control (ICOC) system for AGC and AVC. Concerning the diversity of control devices and the characteristics of discrete controls interacting with a continuously operating power system, the ICOC system is designed in a hierarchical structure and driven by security, quality and economic events, consequently reducing optimization complexity and realizing multi-target quasi-optimization. In addition, an innovative model of Loss Minimization Control (LMC) taking into consideration both active and reactive power regulation is proposed to achieve a substantial reduction in network losses, and a cross-iterative method for AGC and AVC instructions is also presented to decrease negative interference between the control systems. The ICOC system has already been put into practice in some provincial regional power grids in China. Open-loop operation tests have proved the validity of the presented control strategies.

  13. A probabilistic approach using deformable organ models for automatic definition of normal anatomical structures for 3D treatment planning

    International Nuclear Information System (INIS)

    Purpose/Objective : Current clinical methods for defining normal anatomical structures on tomographic images are time consuming and subject to intra- and inter-user variability. With the widespread implementation of 3D RTP, conformal radiotherapy, and dose escalation the implications of imprecise object definition have assumed a much higher level of importance. Object definition and volume-weighted metrics for normal anatomy, such as DVHs and NTCPs, play critical roles in aiming, shaping, and weighting beams. Improvements in object definition, including computer automation, are essential to yield reliable volume-weighted metrics and gains in human efficiency. The purpose of this study was to investigate a probabilistic approach using deformable models to automatically recognize and extract normal anatomy in tomographic images. Materials and Methods: Object models were created from normal organs that were segmented by an interactive method which involved placing a cursor near the center of the object on a slice and clicking a mouse button to initiate computation of structures called cores. Cores describe the skeletal and boundary shape of image objects in a manner that, in 2D, associates a location on the skeleton with the width of the object at that location. A significant advantage of cores is stability against image disturbances such as noise and blur. The model was composed of a relatively small set of extracted points on the skeleton and boundary. The points were carefully chosen to summarize the shape information captured by the cores. Neighborhood relationships between points were represented mathematically by energy functions that penalize, due to warping of the model, the ''goodness'' of match between the model and the image data at any stage during the segmentation process. 
The model was matched against the image data using a probabilistic approach based on Bayes theorem, which provides a means for computing a posteriori (posterior) probability from 1) a

  14. Automatic Tamil lyric generation based on ontological interpretation for semantics

    Indian Academy of Sciences (India)

    Rajeswari Sridhar; D Jalin Gladis; Kameswaran Ganga; G Dhivya Prabha

    2014-02-01

This system proposes an n-gram based approach to automatic Tamil lyric generation, by the ontological semantic interpretation of the input scene. The approach is based on identifying the semantics conveyed in the scenario, thereby making the system understand the situation and generate lyrics accordingly. The heart of the system includes the ontological interpretation of the scenario, and the selection of the appropriate tri-grams for generating the lyrics. To fulfill this, we have designed a new ontology with weighted edges, where the edges correspond to a set of sentences, which indicate a relationship, and are represented as tri-grams. Once the appropriate tri-grams are selected, the root words from these tri-grams are sent to the morphological generator, to form words in their packed form. These words are then assembled to form the final lyrics. Parameters of poetry like rhyme, alliteration, simile, vocative words, etc., are also taken care of by the system. Using this approach, we achieved an average accuracy of 77.3% with respect to the exact semantic details being conveyed in the generated lyrics.
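The tri-gram selection step over a weighted ontology can be sketched as: for each relation identified in the scene, pick the highest-weight tri-gram attached to that ontology edge and join the picks into candidate lines. The data layout (edge mapped to a list of (trigram, weight) pairs) and the English placeholder words are hypothetical simplifications, not the paper's actual Tamil ontology.

```python
def select_trigrams(scene_relations, ontology):
    """For each scene relation, choose the best-weighted tri-gram on
    the corresponding ontology edge (illustrative data layout)."""
    lines = []
    for rel in scene_relations:
        candidates = ontology.get(rel, [])
        if candidates:
            # each candidate is ((w1, w2, w3), weight); take the heaviest
            words, _ = max(candidates, key=lambda tw: tw[1])
            lines.append(" ".join(words))
    return lines
```

In the full system the selected root words would then pass through the morphological generator before assembly.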

  15. AUTO-LAY: automatic layout generation for procedure flow diagrams

    International Nuclear Information System (INIS)

Nuclear Power Plant Procedures can be seen from essentially two viewpoints: the process and the information management. From the first point of view, it is important to supply the knowledge needed to solve problems connected with the control of the process; from the second, the focus of attention is on the knowledge representation, its structure, elicitation and maintenance, and formal quality assurance. These two aspects of procedure representation can be considered and solved separately. In particular, methodological, formal and management issues require long and tedious activities that in most cases constitute a great barrier to procedure development and upgrading. To solve these problems, Ansaldo is developing DIAM, a wide integrated tool for procedure management to support procedure writing, updating, usage and documentation. One of the most challenging features of DIAM is AUTO-LAY, a CASE sub-tool that, in a completely automatic way, structures parts of, or complete, flow diagrams. This is a feature that is partially present in some other CASE products, which, however, do not allow complex graph handling or isomorphism between video and paper representation. AUTO-LAY has the unique capability to draw graphs of any complexity, to section them into pages, and to automatically compose a document. This has been recognized in the literature as the most important second-generation CASE improvement. (author). 5 refs., 9 figs

  16. Automatic Mesh Generation on a Regular Background Grid

    Institute of Scientific and Technical Information of China (English)

    LO S.H; 刘剑飞

    2002-01-01

This paper presents an automatic mesh generation procedure on a 2D domain based on a regular background grid. The idea is to devise a robust mesh generation scheme with equal emphasis on quality and efficiency. Instead of using a traditional regular rectangular grid, a mesh of equilateral triangles is employed to ensure that triangular elements of the best quality will be preserved in the interior of the domain. As for the boundary, it is generated by a node/segment insertion process. Nodes are inserted into the background mesh one by one, following the sequence of the domain boundary. The local structure of the mesh is modified based on the Delaunay criterion with the introduction of each node. Those boundary segments which are not produced in the phase of node insertion will be recovered through a systematic element swap process. Two theorems are presented and proved to set up the theoretical basis of the boundary recovery part. Examples are presented to demonstrate the robustness and the quality of the mesh generated by the proposed technique.
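The background grid of equilateral triangles described above can be generated by spacing node rows s*sqrt(3)/2 apart and offsetting every other row by s/2. The sketch below produces only these node positions for a bounding box; boundary node/segment insertion and Delaunay updates are separate steps, and the function name is illustrative.

```python
import math

def equilateral_grid(width, height, s):
    """Node positions of a background grid of equilateral triangles
    with side s covering a width-by-height box."""
    dy = s * math.sqrt(3) / 2.0            # vertical row spacing
    nodes, j, y = [], 0, 0.0
    while y <= height + 1e-9:
        x = s / 2.0 if j % 2 else 0.0      # alternate rows are offset
        while x <= width + 1e-9:
            nodes.append((x, y))
            x += s
        y += dy
        j += 1
    return nodes
```

Every interior node then has six neighbors at distance exactly s, which is what guarantees best-quality triangles away from the boundary.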

  17. Applications of automatic mesh generation and adaptive methods in computational medicine

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.

  18. Towards Automatic Personalized Content Generation for Platform Games

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2010-01-01

In this paper, we show that personalized levels can be automatically generated for platform games. We build on previous work, where models were derived that predicted player experience based on features of level design and on playing styles. These models are constructed using preference learning, based on questionnaires administered to players after playing different levels. The contributions of the current paper are (1) more accurate models based on a much larger data set; (2) a mechanism for adapting level design parameters to given players and playing styles; (3) evaluation of this adaptation mechanism using both algorithmic and human players. The results indicate that the adaptation mechanism effectively optimizes level design parameters for particular players.

  19. Hybrid Generative/Discriminative Learning for Automatic Image Annotation

    CERN Document Server

    Yang, Shuang Hong; Zha, Hongyuan

    2012-01-01

    Automatic image annotation (AIA) raises tremendous challenges to machine learning as it requires modeling of data that are both ambiguous in input and output, e.g., images containing multiple objects and labeled with multiple semantic tags. Even more challenging is that the number of candidate tags is usually huge (as large as the vocabulary size) yet each image is only related to a few of them. This paper presents a hybrid generative-discriminative classifier to simultaneously address the extreme data-ambiguity and overfitting-vulnerability issues in tasks such as AIA. Particularly: (1) an Exponential-Multinomial Mixture (EMM) model is established to capture both the input and output ambiguity and in the meanwhile to encourage prediction sparsity; and (2) the prediction ability of the EMM model is explicitly maximized through discriminative learning that integrates variational inference of graphical models and the pairwise formulation of ordinal regression. Experiments show that our approach achieves both su...

  20. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Institute of Scientific and Technical Information of China (English)

    Wei-Wen Xu

    2004-01-01

With the purpose of making the verification of parameterized systems more general and easier, in this paper, a new and intuitive language PSL (Parameterized-system Specification Language) is proposed to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely and symbolically represent collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since abstraction and symbolic techniques are exploited in the symbolic model, the state-explosion problem of traditional verification methods is efficiently avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on a UNIX platform. Thus, a complete tool for verifying parameterized synchronous systems is obtained and tested on some cases. The experimental results show that the method is satisfactory.
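The counting idea behind the symbolic model can be illustrated with a counter abstraction: a global state of N identical processes is summarized by how many processes occupy each local state, so states differing only by a permutation of process identities collapse into one symbolic state. This is a generic sketch of the abstraction, not PSL or the paper's constraint representation; all names are illustrative.

```python
from collections import Counter

def abstract_state(local_states):
    """Summarize a global state by per-local-state process counts."""
    return frozenset(Counter(local_states).items())

def fire(abs_state, src, dst):
    """Fire a transition moving one process from local state src to
    dst, if at least one process is in src; return None otherwise."""
    counts = dict(abs_state)
    if counts.get(src, 0) == 0:
        return None
    counts[src] -= 1
    if counts[src] == 0:
        del counts[src]
    counts[dst] = counts.get(dst, 0) + 1
    return frozenset(counts.items())
```

Reachability analysis then explores the (much smaller) graph of counter states instead of the exponential concrete state space.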

  1. Adaptive neuro-fuzzy inference system based automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, S.H.; Etemadi, A.H. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran)

    2008-07-15

Fixed gain controllers for automatic generation control are designed at nominal operating conditions and fail to provide the best control performance over a wide range of operating conditions. So, to keep system performance near its optimum, it is desirable to track the operating conditions and use updated parameters to compute control gains. A control scheme based on an adaptive neuro-fuzzy inference system (ANFIS), which is trained by the results of off-line studies obtained using particle swarm optimization, is proposed in this paper to optimize and update control gains in real-time according to load variations. Also, frequency relaxation is implemented using ANFIS. The efficiency of the proposed method is demonstrated via simulations. Compliance of the proposed method with the NERC control performance standard is verified. (author)

  2. Automatic Generation of OWL Ontology from XML Data Source

    CERN Document Server

    Yahia, Nora; Ahmed, AbdelWahab

    2012-01-01

    The eXtensible Markup Language (XML) can be used as data exchange format in different domains. It allows different parties to exchange data by providing common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for reasoning. Ontology can provide a semantic representation of domain knowledge which supports efficient reasoning and expressive power. One of the most popular ontology languages is the Web Ontology Language (OWL). It can represent domain knowledge using classes, properties, axioms and instances for the use in a distributed environment such as the World Wide Web. This paper presents a new method for automatic generation of OWL ontology from XML data sources.
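One common family of XML-to-OWL mappings can be sketched directly: each element tag becomes an owl:Class, each attribute an owl:DatatypeProperty, and each nesting an owl:ObjectProperty. The namespace, the emitted Turtle-like strings, and the mapping rules below are illustrative assumptions, not necessarily the method of the paper summarized above.

```python
import xml.etree.ElementTree as ET

def xml_to_owl(xml_text, ns="http://example.org/onto#"):
    """Sketch of a rule-based XML-to-OWL mapping producing simple
    rdf:type triples as strings."""
    root = ET.fromstring(xml_text)
    triples = set()

    def visit(el, nested):
        triples.add(f"<{ns}{el.tag}> rdf:type owl:Class .")
        for attr in el.attrib:
            triples.add(f"<{ns}{attr}> rdf:type owl:DatatypeProperty .")
        if nested:
            # a nested element suggests an object property on the parent
            triples.add(f"<{ns}has_{el.tag}> rdf:type owl:ObjectProperty .")
        for child in el:
            visit(child, True)

    visit(root, False)
    return sorted(triples)
```

A real generator would also have to decide class hierarchies, cardinalities and instance-versus-schema questions that this sketch ignores.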

  3. Provenance-Powered Automatic Workflow Generation and Composition

    Science.gov (United States)

    Zhang, J.; Lee, S.; Pan, L.; Lee, T. J.

    2015-12-01

In recent years, scientists have learned how to codify tools into reusable software modules that can be chained into multi-step executable workflows. Existing scientific workflow tools, created by computer scientists, require domain scientists to meticulously design their multi-step experiments before analyzing data. However, this is oftentimes contradictory to a domain scientist's daily routine of conducting research and exploration. We hope to resolve this dispute. Imagine this: An Earth scientist starts her day applying NASA Jet Propulsion Laboratory (JPL) published climate data processing algorithms over ARGO deep ocean temperature and AMSRE sea surface temperature datasets. Throughout the day, she tunes the algorithm parameters to study various aspects of the data. Suddenly, she notices some interesting results. She then turns to a computer scientist and asks, "can you reproduce my results?" By tracking and reverse engineering her activities, the computer scientist creates a workflow. The Earth scientist can now rerun the workflow to validate her findings, modify the workflow to discover further variations, or publish the workflow to share the knowledge. In this way, we aim to revolutionize computer-supported Earth science. We have developed a prototyping system to realize the aforementioned vision, in the context of service-oriented science. We have studied how Earth scientists conduct service-oriented data analytics research in their daily work, developed a provenance model to record their activities, and developed a technology to automatically generate workflows from recorded user behavior, together with support for the adaptability and reuse of these workflows in replicating and improving scientific studies. A data-centric repository infrastructure is established to capture richer provenance and further facilitate collaboration in the science community.
We have also established a Petri nets-based verification instrument for provenance-based automatic workflow generation and recommendation.

  4. Intelligent control schemes applied to Automatic Generation Control

    Directory of Open Access Journals (Sweden)

    Dingguo Chen

    2016-04-01

Integrating an ever increasing amount of renewable generating resources into interconnected power systems has created new challenges for the safety and reliability of today's power grids and posed new questions to be answered in power system modeling, analysis and control. Automatic Generation Control (AGC) must be extended to be able to accommodate the control of renewable generating assets. In addition, AGC is mandated to operate in accordance with NERC's Control Performance Standard (CPS) criteria, which represent a greater flexibility in relaxing the control of generating resources while still assuring the stability and reliability of interconnected power systems when each balancing authority operates in full compliance. Enhancements in several aspects of traditional AGC must be made in order to meet the aforementioned challenges. It is the intention of this paper to provide a systematic, mathematical formulation for AGC, as a first attempt in the context of meeting the NERC CPS requirements and integrating renewable generating assets, which has not been reported in the literature to the best knowledge of the authors. Furthermore, this paper proposes neural network based predictive control schemes for AGC. The proposed controller is capable of handling complicated nonlinear dynamics, in comparison with the conventional Proportional Integral (PI) controller, which is typically most effective for linear dynamics. The neural controller is designed in such a way that it has the capability of controlling the system generation in a relaxed manner, so the ACE is controlled to a desired range instead of being driven to zero, which would otherwise increase the control effort and cost; most importantly, the resulting system control performance meets the NERC CPS requirements and/or the NERC Balancing Authority's ACE Limit (BAAL) compliance requirements, whichever are applicable.
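The "relaxed" control of the Area Control Error (ACE) described above, i.e. regulating ACE into a permitted band rather than to zero, can be sketched with a simple dead-band proportional law. The names, the band semantics and the proportional rule are illustrative; the paper itself uses a neural network based predictive controller.

```python
def relaxed_ace_control(ace, band, gain):
    """Relaxed AGC sketch: no control effort while |ACE| is inside the
    band; otherwise push ACE back toward the nearer band edge."""
    if abs(ace) <= band:
        return 0.0                     # inside the band: no action
    excess = ace - band if ace > 0 else ace + band
    return -gain * excess              # proportional push toward the band
```

Spending no effort inside the band is precisely what reduces control cost relative to a controller that always drives ACE to zero.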

  5. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    Science.gov (United States)

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.

    2016-06-01

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
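The graph representation and Benson group additivity mentioned above can be illustrated in miniature: represent a molecule as an atom list plus a bond list, key each heavy-atom group by the central atom and its sorted neighbors, and sum per-group contributions. The group value used below is a made-up illustrative number, not an entry from RMG's database.

```python
def group_additivity(atoms, bonds, group_values):
    """Benson-style group additivity on a molecular graph: sum one
    contribution per heavy-atom group (atom plus sorted neighbors)."""
    neighbors = {i: [] for i in range(len(atoms))}
    for a, b in bonds:                  # undirected adjacency from bond list
        neighbors[a].append(b)
        neighbors[b].append(a)
    total = 0.0
    for i, atom in enumerate(atoms):
        if atom == "H":
            continue                    # hydrogens fold into heavy-atom groups
        key = (atom, tuple(sorted(atoms[j] for j in neighbors[i])))
        total += group_values[key]
    return total

# ethane, CH3-CH3: both carbons are the same C-(C)(H)3 group
atoms = ["C", "C", "H", "H", "H", "H", "H", "H"]
bonds = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 5), (1, 6), (1, 7)]
values = {("C", ("C", "H", "H", "H")): -42.7}   # illustrative kJ/mol value
ethane_estimate = group_additivity(atoms, bonds, values)
```

RMG's trees then come in when a group key is missing: the estimator walks up a hierarchy of increasingly general groups until a value is found.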

  6. Reinforcement-Based Fuzzy Neural Network Control with Automatic Rule Generation

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A reinforcement-based fuzzy neural network control with automatic rule generation (RBFNNC) is proposed. A set of optimized fuzzy control rules can be automatically generated through reinforcement learning based on the state variables of the object system. RBFNNC was applied to a cart-pole balancing system, and simulation results show significant improvements in the rule generation.

  7. Shape design sensitivities using fully automatic 3-D mesh generation

    Science.gov (United States)

    Botkin, M. E.

    1990-01-01

    Previous work in three dimensional shape optimization involved specifying design variables by associating parameters directly with mesh points. More recent work has shown the use of fully-automatic mesh generation based upon a parameterized geometric representation. Design variables have been associated with a mathematical model of the part rather than the discretized representation. The mesh generation procedure uses a nonuniform grid intersection technique to place nodal points directly on the surface geometry. Although there exists an associativity between the mesh and the geometrical/topological entities, there is no mathematical functional relationship. This poses a problem during certain steps in the optimization process in which geometry modification is required. For the large geometrical changes which occur at the beginning of each optimization step, a completely new mesh is created. However, for gradient calculations many small changes must be made and it would be too costly to regenerate the mesh for each design variable perturbation. For that reason, a local remeshing procedure has been implemented which operates only on the specific edges and faces associated with the design variable being perturbed. Two realistic design problems are presented which show the efficiency of this process and test the accuracy of the gradient computations.

  8. Automatic Overset Grid Generation with Heuristic Feedback Control

    Science.gov (United States)

    Robinson, Peter I.

    2001-01-01

    An advancing front grid generation system for structured Overset grids is presented which automatically modifies Overset structured surface grids and control lines until user-specified grid qualities are achieved. The system is demonstrated on two examples: the first refines a space shuttle fuselage control line until global truncation error is achieved; the second advances, from control lines, the space shuttle orbiter fuselage top and fuselage side surface grids until proper overlap is achieved. Surface grids are generated in minutes for complex geometries. The system is implemented as a heuristic feedback control (HFC) expert system which iteratively modifies the input specifications for Overset control line and surface grids. It is developed as an extension of modern control theory, production rules systems and subsumption architectures. The methodology provides benefits over the full knowledge lifecycle of an expert system for knowledge acquisition, knowledge representation, and knowledge execution. The vector/matrix framework of modern control theory systematically acquires and represents expert system knowledge. Missing matrix elements imply missing expert knowledge. The execution of the expert system knowledge is performed through symbolic execution of the matrix algebra equations of modern control theory. The dot product operation of matrix algebra is generalized for heuristic symbolic terms. Constant time execution is guaranteed.
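The feedback loop described above, stripped to its essence, iterates "measure grid quality, apply a heuristic correction" until the user-specified target is met. The sketch below assumes an invented scalar quality model, not the paper's truncation-error metric or its rule-based corrective actions:

```python
# Hedged sketch of heuristic feedback control for grid generation:
# repeatedly evaluate a quality metric and apply a corrective action
# (here, simply refining a spacing parameter) until the target holds.

def refine_until(quality, spec, target, step=0.5, max_iter=50):
    """Shrink `spec` (e.g. a grid spacing) until quality(spec) >= target."""
    for _ in range(max_iter):
        if quality(spec) >= target:
            return spec
        spec *= step  # heuristic corrective action: refine the grid
    raise RuntimeError("target quality not reached within iteration budget")

# Toy model: finer spacing -> lower truncation error -> higher quality
quality = lambda h: 1.0 - h   # hypothetical quality score in [0, 1]
spacing = refine_until(quality, spec=0.8, target=0.9)
```

In the actual system the "corrective action" is chosen by expert-system rules rather than a fixed halving, but the convergence loop has this shape.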

  9. Automatic speech recognition for report generation in computed tomography

    International Nuclear Information System (INIS)

    Purpose: A study was performed to compare the performance of automatic speech recognition (ASR) with conventional transcription. Materials and Methods: 100 CT reports were generated by using ASR and 100 CT reports were dictated and written by medical transcriptionists. The time for dictation and correction of errors by the radiologist was assessed and the types of mistakes were analysed. The text recognition rate was calculated in both groups and the average time between completion of the imaging study by the technologist and generation of the written report was assessed. A commercially available speech recognition technology (ASKA Software, IBM Via Voice) running on a personal computer was used. Results: The time for dictation using digital voice recognition was 9.4±2.3 min compared to 4.5±3.6 min with an ordinary Dictaphone. The text recognition rate was 97% with digital voice recognition and 99% with medical transcriptionists. The average time from imaging completion to written report finalisation was reduced from 47.3 hours with medical transcriptionists to 12.7 hours with ASR. The analysis of misspellings demonstrated (ASR vs. medical transcriptionists): 3 vs. 4 syntax errors, 0 vs. 37 orthographic mistakes, 16 vs. 22 mistakes in substance and 47 vs. erroneously applied terms. Conclusions: The use of digital voice recognition as a replacement for medical transcription is recommendable when immediate availability of written reports is necessary. (orig.)

  10. Automatic ID heat load generation in ANSYS code

    International Nuclear Information System (INIS)

    Detailed power density profiles are critical in the execution of a thermal analysis using a finite element (FE) code such as ANSYS. Unfortunately, as yet there is no easy way to directly input the precise power profiles into ANSYS. A straightforward way to do this is to hand-calculate the power of each node or element and then type the data into the code. Every time a change is made to the FE model, the data must be recalculated and reentered. One way to solve this problem is to generate a set of discrete data, using another code such as PHOTON2, and curve-fit the data. Using curve-fitted formulae has several disadvantages. It is time consuming because of the need to run a second code for generation of the data, curve-fitting, and doing the data check, etc. Additionally, because there is no generality for different beamlines or different parameters, the above work must be repeated for each case. And, errors in the power profiles due to curve-fitting result in errors in the analysis. To solve the problem once and for all and with the capability to apply to any insertion device (ID), a program for the ID power profile was written in ANSYS Parametric Design Language (APDL). This program is implemented as an ANSYS command with input parameters of peak magnetic field, deflection parameter, length of ID, and distance from the source. Once the command is issued, all the heat load will be automatically generated by the code.
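The idea of evaluating a parameterized power profile directly at nodal coordinates, rather than curve-fitting external data, can be sketched as below. The Gaussian profile and every parameter name are illustrative stand-ins, not the actual insertion-device power formula or the paper's APDL implementation:

```python
import math

# Hedged sketch: compute heat loads by evaluating a parameterized
# power-density profile at FE nodal coordinates, the way the APDL macro
# avoids curve-fitting. The Gaussian falloff is purely illustrative.

def power_density(x, y, p0, sigma_x, sigma_y):
    """Peak power p0 (e.g. W/mm^2) with Gaussian falloff from the beam axis."""
    return p0 * math.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))

def nodal_heat_loads(nodes, p0, sigma_x, sigma_y):
    """Map each (node_id, x, y) to a heat flux, as an APDL loop might."""
    return {nid: power_density(x, y, p0, sigma_x, sigma_y)
            for nid, x, y in nodes}

nodes = [(1, 0.0, 0.0), (2, 1.0, 0.0), (3, 0.0, 2.0)]
loads = nodal_heat_loads(nodes, p0=100.0, sigma_x=1.0, sigma_y=2.0)
# loads[1] is the peak value at the axis node
```

Because the profile is a function of the model's own coordinates, remeshing never invalidates the loads: they are simply re-evaluated, which is exactly the advantage over curve-fitted tables.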

  11. Development of tools for automatic generation of PLC code

    CERN Document Server

    Koutli, Maria; Rochez, Jacques

    This Master thesis was performed at CERN and more specifically in the EN-ICE-PLC section. The Thesis describes the integration of two PLC platforms, that are based on CODESYS development tool, to the CERN defined industrial framework, UNICOS. CODESYS is a development tool for PLC programming, based on IEC 61131-3 standard, and is adopted by many PLC manufacturers. The two PLC development environments are, the SoMachine from Schneider and the TwinCAT from Beckhoff. The two CODESYS compatible PLCs, should be controlled by the SCADA system of Siemens, WinCC OA. The framework includes a library of Function Blocks (objects) for the PLC programs and a software for automatic generation of the PLC code based on this library, called UAB. The integration aimed to give a solution that is shared by both PLC platforms and was based on the PLCOpen XML scheme. The developed tools were demonstrated by creating a control application for both PLC environments and testing of the behavior of the code of the library.

  12. Learning Techniques for Automatic Test Pattern Generation using Boolean Satisfiability

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2013-07-01

    Full Text Available Automatic Test Pattern Generation (ATPG) is one of the core problems in the testing of digital circuits. ATPG algorithms based on Boolean Satisfiability (SAT) have turned out to be very powerful, due to great advances in the performance of satisfiability solvers for propositional logic over the last two decades. SAT-based ATPG clearly outperforms classical approaches, especially for hard-to-detect faults. However, because it lacks access to structural information and don't-care conditions, it suffers from over-specification of input patterns. In this paper we present techniques that add a layer on top of a generic SAT algorithm to make use of structural properties of the circuit and value-justification relations. The approach joins binary decision diagrams (BDDs) and SAT techniques to improve the efficiency of ATPG. It performs an inexpensive reconvergent-fanout analysis of the circuit to gather information on local signal correlations through BDD learning, and then uses the learned information to restrict and focus the overall search space of SAT-based ATPG. The learning technique is effective and lightweight. Experimental results show the effectiveness of the approach.
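The core ATPG problem can be illustrated on a toy circuit: find an input pattern on which the fault-free circuit and a faulty copy disagree. A SAT solver searches this space symbolically via a miter-style encoding; the brute-force sketch below, with an invented two-gate circuit, only shows what is being searched for, not how SAT finds it:

```python
from itertools import product

# Hedged sketch: detect a stuck-at-0 fault on the internal net (a AND b)
# of the circuit out = (a AND b) OR c by finding an input pattern whose
# good and faulty outputs differ. The circuit is invented for illustration.

def good_circuit(a, b, c):
    return (a & b) | c

def faulty_circuit(a, b, c):
    return 0 | c  # internal net (a AND b) stuck at 0

def find_test_pattern():
    """Return the first pattern that detects the fault, else None."""
    for a, b, c in product([0, 1], repeat=3):
        if good_circuit(a, b, c) != faulty_circuit(a, b, c):
            return (a, b, c)
    return None  # the fault is undetectable (redundant)

print(find_test_pattern())  # -> (1, 1, 0)
```

Note that c must be 0 in any detecting pattern (otherwise c masks the fault); the structural learning described in the paper captures exactly this kind of implication to prune the SAT search.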

  13. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    Science.gov (United States)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Format (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.

  14. PUS Services Software Building Block Automatic Generation for Space Missions

    Science.gov (United States)

    Candia, S.; Sgaramella, F.; Mele, G.

    2008-08-01

    The Packet Utilization Standard (PUS) has been specified by the European Committee for Space Standardization (ECSS) and issued as ECSS-E-70-41A to define the application-level interface between Ground Segments and Space Segments. The ECSS-E-70-41A complements the ECSS-E-50 and the Consultative Committee for Space Data Systems (CCSDS) recommendations for packet telemetry and telecommand. The ECSS-E-70-41A characterizes the identified PUS Services from a functional point of view and the ECSS-E-70-31 standard specifies the rules for their mission-specific tailoring. The current on-board software design for a space mission implies the production of several PUS terminals, each providing a specific tailoring of the PUS services. The associated on-board software building blocks are developed independently, leading to very different design choices and implementations even when the mission tailoring requires very similar services (from the Ground operative perspective). In this scenario, the automatic production of the PUS services building blocks for a mission would be a way to optimize the overall mission economy and improve the robustness and reliability of the on-board software and of the Ground-Space interactions. This paper presents the Space Software Italia (SSI) activities for the development of an integrated environment to support: the PUS services tailoring activity for a specific mission. the mission-specific PUS services configuration. the generation of the UML model of the software building block implementing the mission-specific PUS services and the related source code, support documentation (software requirements, software architecture, test plans/procedures, operational manuals), and the TM/TC database. The paper deals with: (a) the project objectives, (b) the tailoring, configuration, and generation process, (c) the description of the environments supporting the process phases, (d) the characterization of the meta-model used for the generation, (e) the

  15. Automatic Grasp Generation and Improvement for Industrial Bin-Picking

    DEFF Research Database (Denmark)

    Kraft, Dirk; Ellekilde, Lars-Peter; Rytz, Jimmy Alison

    2014-01-01

    ... and achieve comparable results, and that our learning approach can improve system performance significantly. Automatic bin-picking is an important industrial process that can lead to significant savings and can potentially keep production in countries with high labour cost rather than outsourcing it. The presented ... work allows minimizing cycle time as well as setup cost, which are essential factors in automatic bin-picking. It therefore leads to a wider applicability of bin-picking in industry.

  16. Historical Author Affiliations Assist Verification of Automatically Generated MEDLINE® Citations

    OpenAIRE

    Sabir, Tehseen F.; Hauser, Susan E.; Thoma, George R.

    2006-01-01

    High OCR error rates encountered in author affiliations increase the manual labor needed to verify MEDLINE citations automatically created from scanned journal articles. This is due to poor OCR recognition of the small text and italics frequently used in printed affiliations. Using author-affiliation relationships found in existing MEDLINE records, the SeekAffiliation (SA) program automatically finds potentially correct and complete affiliations, thereby reducing manual effort and increasing ...

  17. Automatic Generation of Remote Visualization Tools with WATT

    Science.gov (United States)

    Jensen, P. A.; Bollig, E. F.; Yuen, D. A.; Erlebacher, G.; Momsen, A. R.

    2006-12-01

    The ever increasing size and complexity of geophysical and other scientific datasets has forced developers to turn to more powerful alternatives for visualizing the results of computations and experiments. These alternatives need to be faster, scalable, more efficient, and able to run on large machines. At the same time, advances in scripting languages and visualization libraries have significantly decreased the development time of smaller, desktop visualization tools. Ideally, programmers would be able to develop visualization tools in a high-level, local, scripted environment and then automatically convert their programs into compiled, remote visualization tools for integration into larger computation environments. The Web Automation and Translation Toolkit (WATT) [1] converts a Tcl script for the Visualization Toolkit (VTK) [2] into a standards-compliant web service. We will demonstrate the use of WATT for the automated conversion of a desktop visualization application (written in Tcl for VTK) into a remote visualization service of interest to geoscientists. The resulting service will allow real-time access to a large dataset through the Internet, and will be easily integrated into the existing architecture of the Virtual Laboratory for Earth and Planetary Materials (VLab) [3]. [1] Jensen, P.A., Yuen, D.A., Erlebacher, G., Bollig, E.F., Kigelman, D.G., Shukh, E.A., Automated Generation of Web Services for Visualization Toolkits, Eos Trans. AGU, 86(52), Fall Meet. Suppl., Abstract IN42A-06, 2005. [2] The Visualization Toolkit, http://www.vtk.org [3] The Virtual Laboratory for Earth and Planetary Materials, http://vlab.msi.umn.edu

  18. Automatic Perceptual Color Map Generation for Realistic Volume Visualization

    OpenAIRE

    Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor

    2008-01-01

    Advances in computed tomography imaging technology and inexpensive high performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical pla...

  19. A strategy for automatically generating programs in the lucid programming language

    Science.gov (United States)

    Johnson, Sally C.

    1987-01-01

    A strategy for automatically generating and verifying simple computer programs is described. The programs are specified by a precondition and a postcondition in predicate calculus. The generated programs are in the Lucid programming language, a high-level, data-flow language known for its attractive mathematical properties and ease of program verification. The Lucid language is described, and the automatic program generation strategy is presented and applied to several example problems.

  20. Extraction: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Improving the speed and quality of eddy-current non-destructive testing of steam generator tubes calls for automating all processes that contribute to diagnosis. This paper describes how we use signal processing, pattern recognition and artificial intelligence to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs

  1. A stochastic approach for automatic registration and fusion of left atrial electroanatomic maps with 3D CT anatomical images

    International Nuclear Information System (INIS)

    The integration of electroanatomic maps with highly resolved computed tomography cardiac images plays an important role in the successful planning of the ablation procedure of arrhythmias. In this paper, we present and validate a fully-automated strategy for the registration and fusion of sparse, atrial endocardial electroanatomic maps (CARTO maps) with detailed left atrial (LA) anatomical reconstructions segmented from a pre-procedural MDCT scan. Registration is accomplished by a parameterized geometric transformation of the CARTO points and by a stochastic search of the best parameter set which minimizes the misalignment between transformed CARTO points and the LA surface. The subsequent fusion of electrophysiological information on the registered CT atrium is obtained through radial basis function interpolation. The algorithm is validated by simulation and by real data from 14 patients referred to CT imaging prior to the ablation procedure. Results are presented, which show the validity of the algorithmic scheme as well as the accuracy and reproducibility of the integration process. The obtained results encourage the application of the integration method in post-intervention ablation assessment and basic AF research and suggest the development for real-time applications in catheter guiding during ablation intervention

  2. Automated Theorem Proving for Cryptographic Protocols with Automatic Attack Generation

    OpenAIRE

    Jan Juerjens; Thomas A. Kuhn

    2016-01-01

    Automated theorem proving is both automatic and can be quite efficient. When using theorem proving approaches for security protocol analysis, however, the problem is often that absence of a proof of security of a protocol may give little hint as to where the security weakness lies, to enable the protocol designer to improve the protocol. For our approach to verify cryptographic protocols using automated theorem provers for first-order logic (such as e-SETHEO or SPASS), we demonstrate a method...

  3. Research on Object-oriented Software Testing Cases of Automatic Generation

    Directory of Open Access Journals (Sweden)

    Junli Zhang

    2013-11-01

    Full Text Available In research on the automatic generation of test cases, different test cases drive different execution paths, and the probability of these paths being executed also differs. For paths that are easy to execute, many redundant test cases tend to be generated, while only few test cases are generated for control paths that are hard to execute. A genetic algorithm can be used to guide the automatic generation of test cases: it restricts the generation of test cases for the former kind of path and encourages the generation of test cases for the latter as much as possible. Therefore, building on the study of path-oriented automatic test case generation, a genetic algorithm is adopted to construct the generation process. According to the path triggered during the dynamic execution of the program, the generated test cases are separated into different equivalence classes, and the number of test cases is adjusted dynamically by the fitness corresponding to the paths. The method creates a certain number of test cases for each execution path to ensure sufficiency, and it also reduces redundant test cases, so it is an effective method for the automatic generation of test cases.
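A minimal sketch of the path-equivalence idea follows, assuming a toy program under test and an invented fitness that inverts path frequency, so test cases triggering rarely executed ("hard") paths score higher and redundant cases for easy paths are suppressed:

```python
import random

# Hedged sketch: genetic search over integer test inputs where fitness
# is the inverse frequency of the execution path an input triggers.
# The program under test and all parameters are invented for illustration.

def path_of(x):
    """Toy program under test: which branch does input x trigger?"""
    return "hard" if 47 <= x <= 48 else "easy"

def evolve(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [rng.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Equivalence classes: inputs grouped by the path they execute
        counts = {}
        for x in pop:
            counts[path_of(x)] = counts.get(path_of(x), 0) + 1
        fitness = lambda x: 1.0 / counts[path_of(x)]  # rare path -> high
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Mutate survivors to refill the population, keeping inputs in range
        pop = survivors + [min(100, max(0, x + rng.randint(-3, 3)))
                           for x in survivors]
    return pop
```

Because fitness is computed per equivalence class rather than per individual, a path already covered by many test cases automatically devalues further cases for itself, which is the redundancy-reduction mechanism the abstract describes.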

  4. System and Component Software Specification, Run-time Verification and Automatic Test Generation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The following background technology is described in Part 5: Run-time Verification (RV), White Box Automatic Test Generation (WBATG). Part 5 also describes how WBATG...

  5. The challenge of Automatic Level Generation for platform videogames based on Stories and Quests

    OpenAIRE

    Mourato, Fausto; Birra, Fernando; Santos, Manuel Próspero dos

    2013-01-01

    In this article we bring the concepts of narrativism and ludology to automatic level generation for platform videogames. The initial motivation is to understand how this genre has been used as a storytelling medium. Based on a narrative theory of games, the differences among several titles have been identified. In addition, we propose a set of abstraction layers to describe the content of a quest-based story in the particular context of videogames. Regarding automatic level generation for pla...

  6. A system for automatically generating documentation for (C)LP programs

    OpenAIRE

    Hermenegildo, Manuel V.

    2000-01-01

    We describe lpdoc, a tool which generates documentation manuals automatically from one or more logic program source files, written in ISO-Prolog, Ciao, and other (C)LP languages. It is particularly useful for documenting library modules, for which it automatically generates a rich description of the module interface. However, it can also be used quite successfully to document full applications. A fundamental advantage of using lpdoc is that it helps maintaining a true correspondence between t...

  7. A complete discrimination system for polynomials with complex coefficients and its automatic generation

    Institute of Scientific and Technical Information of China (English)

    梁松新; 张景中

    1999-01-01

    By establishing a complete discrimination system for polynomials, the problem of complete root classification for polynomials with complex coefficients is completely solved; furthermore, the resulting algorithm is implemented as a general program in Maple, which enables the complete discrimination system and complete root classification of a polynomial to be generated automatically by computer, without any human intervention. In addition, by using the automatic generation of root classification, a method to determine the positive definiteness of a polynomial in one or two indeterminates is derived automatically.
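For the simplest case, a real monic quadratic, root classification by a discrimination system reduces to the sign of a single discriminant. The sketch below illustrates only this base case; the paper's general system extends the idea to arbitrary degree and complex coefficients:

```python
# Hedged sketch: classify the roots of x^2 + p*x + q (real p, q) by the
# sign of its discriminant p^2 - 4q. This is the degree-2 instance of a
# complete discrimination system, not the paper's general algorithm.

def classify_quadratic(p, q):
    disc = p * p - 4 * q
    if disc > 0:
        return "two distinct real roots"
    if disc == 0:
        return "one real double root"
    return "a pair of complex conjugate roots"

print(classify_quadratic(0, -1))  # x^2 - 1 -> two distinct real roots
```

For higher degrees a single discriminant no longer suffices: the complete system is a sequence of such sign conditions, which is why automatic generation by computer algebra becomes valuable.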

  8. Development of automatic inspection and maintenance technology for steam generator in nuclear power plants

    International Nuclear Information System (INIS)

    In this paper, we propose a new approach to the development of an automatic vision system to examine and repair steam generator tubes at a remote distance. In nuclear power plants, workers are reluctant to work inside the steam generator because of the high-radiation environment and limited working space. It is strongly recommended that examination and maintenance work be done by an automatic system to protect the operator from radiation exposure. Digital signal processors are used to implement real-time recognition and examination of steam generator tubes in the proposed vision system. The performance of the proposed digital vision system is illustrated by simulation and by experiment on a similar steam generator model.

  9. Validation of simple quantification methods for 18F FP CIT PET using automatic delineation of volumes of interest based on statistical probabilistic anatomical mapping and isocontour margin setting

    International Nuclear Information System (INIS)

    18F FP CIT positron emission tomography (PET) is an effective imaging method for dopamine transporters. In usual clinical practice, 18F FP CIT PET is analyzed visually or quantified using manual delineation of a volume of interest (VOI) for the striatum. In this study, we suggested and validated two simple quantitative methods based on automatic VOI delineation using statistical probabilistic anatomical mapping (SPAM) and isocontour margin setting. Seventy-five 18F FP CIT images acquired in routine clinical practice were used for this study. A study-specific image template was made and the subject images were normalized to the template. Afterwards, uptakes in the striatal regions and cerebellum were quantified using probabilistic VOIs based on SPAM. A quantitative parameter, Q_SPAM, was calculated to simulate binding potential. Additionally, the functional volume of each striatal region and its uptake were measured in a VOI delineated automatically using isocontour margin setting. The uptake-volume product (Q_UVP) was calculated for each striatal region. Q_SPAM and Q_UVP were calculated for each visual grade, and the influence of cerebral atrophy on the measurements was tested. Image analyses were successful in all cases. Both Q_SPAM and Q_UVP differed significantly according to visual grading (0.001). The agreements of Q_UVP and Q_SPAM with visual grading were slight to fair for the caudate nucleus (K = 0.421 and 0.291, respectively) and good to perfect for the putamen (K = 0.663 and 0.607, respectively). Also, Q_SPAM and Q_UVP had a significant correlation with each other (0.001). Cerebral atrophy made a significant difference in Q_SPAM and Q_UVP of the caudate nucleus regions with decreased 18F FP CIT uptake. The simple quantitative measurements Q_SPAM and Q_UVP showed acceptable agreement with visual grading. Although Q_SPAM in some groups may be influenced by cerebral atrophy, these simple methods are expected to be effective in the quantitative analysis of 18F FP CIT PET in usual clinical practice.
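The isocontour-margin quantification can be sketched as follows. The 40% threshold, the voxel size, and the toy image are illustrative assumptions, since the abstract does not state the actual margin setting used in the study:

```python
import numpy as np

# Hedged sketch of isocontour VOI delineation: the VOI is every voxel
# above a fixed fraction of the regional peak; the functional volume is
# the voxel count times the voxel volume; Q_UVP is volume x mean uptake.

def isocontour_voi(image, threshold_fraction=0.4, voxel_volume_ml=0.008):
    mask = image >= threshold_fraction * image.max()
    volume = mask.sum() * voxel_volume_ml    # functional volume (ml)
    mean_uptake = image[mask].mean()         # mean uptake inside the VOI
    return volume, volume * mean_uptake      # (volume, Q_UVP)

# Toy "striatum": a bright 2x2x2 core inside low background
img = np.full((4, 4, 4), 0.1)
img[1:3, 1:3, 1:3] = 1.0
vol, q_uvp = isocontour_voi(img)
```

Because the threshold scales with the regional maximum, the delineation adapts to each subject's uptake level without any manual VOI drawing, which is the practical appeal of the method.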

  10. Automatic Generation Control Strategy Based on Balance of Daily Electric Energy

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    An automatic generation control strategy based on the balance of daily total electric energy is put forward. It balances the actual total energy generated under automatic generation control against the planned total energy on the basis of the area control error, and makes the actual 24-hour active power load curve approach the planned load curve. The generated energy is corrected by a velocity weighting factor so that the regulation is dynamic and the desired speed of response is achieved. The corresponding strategy is applied according to the real-time data during the operation of automatic generation control. Simulation results are good, and power energy compensation control with the desired effect can be achieved over the given duration.
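A minimal sketch of the energy-balance correction follows, with an invented gain standing in for the velocity weighting factor; the function name and units are illustrative, not the paper's formulation:

```python
# Hedged sketch: track the gap between planned and actual cumulative
# energy and fold it back into the generation setpoint, scaled by a
# "velocity weighting" gain. All numbers are illustrative.

def corrected_setpoint(planned_mw, planned_mwh, actual_mwh, k_velocity=0.5):
    """Raise/lower output to work off the accumulated energy deviation."""
    energy_gap_mwh = planned_mwh - actual_mwh   # > 0 means behind plan
    return planned_mw + k_velocity * energy_gap_mwh  # gain in MW per MWh

# Behind plan by 10 MWh -> setpoint nudged above the planned 500 MW
print(corrected_setpoint(500.0, 1200.0, 1190.0))  # -> 505.0
```

The gain trades speed of response against control effort: a larger k_velocity works off the daily energy deviation faster but moves the units more aggressively, which is the tuning role the abstract assigns to the velocity weighting factor.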

  11. AROMA: Automatic Generation of Radio Maps for Localization Systems

    CERN Document Server

    Eleryan, Ahmed; Youssef, Moustafa

    2010-01-01

    WLAN localization has become an active research field recently. Due to the wide WLAN deployment, WLAN localization provides ubiquitous coverage and adds to the value of the wireless network by providing the location of its users without using any additional hardware. However, WLAN localization systems usually require constructing a radio map, which is a major barrier of WLAN localization systems' deployment. The radio map stores information about the signal strength from different signal strength streams at selected locations in the site of interest. Typical construction of a radio map involves measurements and calibrations making it a tedious and time-consuming operation. In this paper, we present the AROMA system that automatically constructs accurate active and passive radio maps for both device-based and device-free WLAN localization systems. AROMA has three main goals: high accuracy, low computational requirements, and minimum user overhead. To achieve high accuracy, AROMA uses 3D ray tracing enhanced wi...

  12. Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies

    CERN Document Server

    McMahon, J; Mahon, John Mc

    1995-01-01

    An automatic word classification system has been designed which processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. Resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique $n$-bit numbers the most significant bit-patterns of which incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class based interpolated language models have been constructed to exploit the extra information supplied by the classifications and some experiments have sho...
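The structural-tag idea can be sketched directly: with a tiny invented vocabulary and invented bit assignments, any classification level is recovered from a tag by a single right shift, since the most significant bits encode the path through the binary cluster tree:

```python
# Hedged sketch of "structural tags": each word gets an n-bit number
# whose most significant bits record its path through the binary class
# hierarchy, so coarser classes are just shorter bit prefixes.
# The vocabulary and bit patterns below are invented for illustration.

N_BITS = 4

tags = {           # msb-first path through the binary cluster tree
    "the": 0b0000,
    "a":   0b0001,
    "cat": 0b1000,
    "dog": 0b1001,
    "ran": 0b1100,
}

def class_at_level(word, level):
    """Class id after `level` binary splits (0 <= level <= N_BITS)."""
    return tags[word] >> (N_BITS - level)

# "cat" and "dog" share a class down to level 3; "ran" splits off at level 2
print(class_at_level("cat", 3) == class_at_level("dog", 3))  # True
print(class_at_level("cat", 2) == class_at_level("ran", 2))  # False
```

This is why the abstract says access to a tag "immediately provides access to all classification levels": no table lookup beyond the tag itself is needed, only a shift.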

  13. Automatic Test case Generation from UML Activity Diagrams

    OpenAIRE

    V. Mary Sumalatha; G. S. V. P. Raju

    2014-01-01

    Test Case Generation is an important phase in software development. Nowadays much of the research is done on UML diagrams for generating test cases. Activity diagrams are different from flow diagrams in the fact that activity diagrams express parallel behavior which flow diagrams cannot express. This paper concentrates on UML 2.0 Activity Diagram for generating test cases. Fork and join pair in activity diagram are used to represent concurrent activities. A novel method is pro...

  14. AUTOMATIC BIOMASS BOILER WITH AN EXTERNAL THERMOELECTRIC GENERATOR

    OpenAIRE

    Marian Brázdil; Ladislav Šnajdárek; Petr Kracík; Jirí Pospíšil

    2014-01-01

    This paper presents the design and test results of an external thermoelectric generator that utilizes the waste heat from a small-scale domestic biomass boiler with nominal rated heat output of 25 kW. The low-temperature Bi2Te3 generator based on thermoelectric modules has the potential to recover waste heat from gas combustion products as effective energy. The small-scale generator is constructed from independent segments. Measurements have shown that up to 11 W of electricity can be generat...

  15. A Unified Overset Grid Generation Graphical Interface and New Concepts on Automatic Gridding Around Surface Discontinuities

    Science.gov (United States)

    Chan, William M.; Akien, Edwin (Technical Monitor)

    2002-01-01

    For many years, generation of overset grids for complex configurations has required the use of a number of different independently developed software utilities. Results created by each step were then visualized using a separate visualization tool before moving on to the next. A new software tool called OVERGRID was developed which allows the user to perform all the grid generation steps and visualization under one environment. OVERGRID provides grid diagnostic functions such as surface tangent and normal checks as well as grid manipulation functions such as extraction, extrapolation, concatenation, redistribution, smoothing, and projection. Moreover, it also contains hyperbolic surface and volume grid generation modules that are specifically suited for overset grid generation. This is the first time that such a unified interface has existed for the creation of overset grids for complex geometries. New concepts on automatic overset surface grid generation around surface discontinuities will also be briefly presented. Special control curves on the surface, such as intersection curves, sharp edges, and open boundaries, are called seam curves. The seam curves are first automatically extracted from a multiple panel network description of the surface. Points where three or more seam curves meet are automatically identified and are called seam corners. Seam corner surface grids are automatically generated using a singular axis topology. Hyperbolic surface grids are then grown from the seam curves that are automatically trimmed away from the seam corners.
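The seam-corner idea lends itself to a small sketch: collect curve endpoints and flag points where three or more seam curves coincide. This is an assumption-laden toy (real seam extraction works on panel networks), with hypothetical names:

```python
from collections import defaultdict

def seam_corners(seam_curves, tol=1e-9):
    """Seam corners: points where >= 3 seam curve endpoints coincide
    (toy version; curves are plain lists of 3D points)."""
    def key(p):
        # Quantize coordinates so nearly coincident endpoints share a key.
        return tuple(round(c / tol) for c in p)
    meet = defaultdict(set)
    for i, curve in enumerate(seam_curves):
        for p in (curve[0], curve[-1]):
            meet[key(p)].add(i)
    return [tuple(c * tol for c in k)
            for k, ids in meet.items() if len(ids) >= 3]
```

Three seam curves radiating from one point produce one corner; two curves sharing an endpoint (an ordinary junction) do not, matching the definition in the abstract.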

  16. Automatic exposure control in CT: the effect of patient size, anatomical region and prescribed modulation strength on tube current and image quality

    Energy Technology Data Exchange (ETDEWEB)

    Papadakis, Antonios E. [University Hospital of Heraklion, Department of Medical Physics, Stavrakia, P.O. Box 1352, Heraklion, Crete (Greece); Perisinakis, Kostas; Damilakis, John [University of Crete, Faculty of Medicine, Department of Medical Physics, P.O. Box 2208, Heraklion, Crete (Greece)

    2014-10-15

    To study the effect of patient size, body region and modulation strength on tube current and image quality in CT examinations that use automatic tube current modulation (ATCM). Ten physical anthropomorphic phantoms that simulate an individual as a neonate, a 1-, 5- or 10-year-old child and an adult of various body habitus were employed. CT acquisition of head, neck, thorax and abdomen/pelvis was performed with ATCM activated at weak, average and strong modulation strength. The mean modulated mAs (mAs{sub mod}) values were recorded. Image noise was measured at selected anatomical sites. The mAs{sub mod} recorded for the neonate compared to the 10-year-old increased by 30 %, 14 %, 6 % and 53 % for head, neck, thorax and abdomen/pelvis, respectively (P < 0.05). The mAs{sub mod} was lower than the preselected mAs with the exception of the 10-year-old phantom. In paediatric and adult phantoms, the mAs{sub mod} ranged from 44 and 53 for weak to 117 and 93 for strong modulation strength, respectively. At the same exposure parameters image noise increased with body size (P < 0.05). The ATCM system studied here may affect dose differently for different patient habitus. Dose may decrease for overweight adults but increase for children older than 5 years old. Care should be taken when implementing ATCM protocols to ensure that image quality is maintained. ATCM efficiency is related to the size of the patient's body. (orig.)

  17. Towards a Pattern-based Automatic Generation of Logical Specifications for Software Models

    OpenAIRE

    Klimek, Radoslaw

    2014-01-01

    The work relates to the automatic generation of logical specifications, considered as sets of temporal logic formulas, extracted directly from developed software models. The extraction process is based on the assumption that the whole developed model is structured using only predefined workflow patterns. A method of automatic transformation of workflow patterns to logical specifications is proposed. Applying the presented concepts enables bridging the gap between the benefits of deductive rea...
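One way to picture the pattern-to-formula transformation (a hedged sketch; the paper's own pattern set and logic templates are richer) is a lookup from workflow pattern names to LTL templates:

```python
def ltl_for_pattern(pattern, *activities):
    """Map a workflow pattern to an LTL template (G = always, F = eventually).
    The pattern names and templates here are illustrative assumptions."""
    a = activities
    if pattern == "Sequence":   # entering a[0] eventually leads to a[1]
        return f"G({a[0]} -> F({a[1]}))"
    if pattern == "Parallel":   # both concurrent branches eventually complete
        return f"F({a[0]}) & F({a[1]})"
    if pattern == "Choice":     # exactly one branch is taken
        return f"F({a[0]}) ^ F({a[1]})"
    raise ValueError(f"unsupported pattern: {pattern}")

def specification(patterns):
    """A logical specification as a set of formulas for a structured model."""
    return {ltl_for_pattern(p, *acts) for p, *acts in patterns}
```

Because the model is built only from predefined patterns, the whole specification is the union of the per-pattern formulas, which is what makes the extraction automatic.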

  18. Validating EHR documents: automatic schematron generation using archetypes.

    Science.gov (United States)

    Pfeiffer, Klaus; Duftschmid, Georg; Rinner, Christoph

    2014-01-01

    The goal of this study was to examine whether Schematron schemas can be generated from archetypes. The openEHR Java reference API was used to transform an archetype into an object model, which was then extended with context elements. The model was processed and the constraints were transformed into corresponding Schematron assertions. A prototype of the generator for the reference model HL7 v3 CDA R2 was developed and successfully tested. Preconditions for its reusability with other reference models were set. Our results indicate that an automated generation of Schematron schemas is possible with some limitations. PMID:24825691
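A toy illustration of the generation step (not the openEHR/Java implementation described above): archetype-style occurrence constraints can be turned into Schematron assert elements mechanically. Element names are hypothetical:

```python
def schematron_rule(context, assertions):
    """Emit one Schematron <rule> from (xpath-test, message) pairs."""
    lines = [f'<rule context="{context}">']
    for test, msg in assertions:
        lines.append(f'  <assert test="{test}">{msg}</assert>')
    lines.append('</rule>')
    return "\n".join(lines)

def occurrence_assertion(element, lo, hi):
    """Turn an archetype-style occurrence constraint into an assertion
    (the element name and constraint shape are illustrative)."""
    test = f"count({element}) >= {lo} and count({element}) <= {hi}"
    return (test, f"{element} must occur between {lo} and {hi} times")
```

For example, a mandatory single value element yields `count(cda:value) >= 1 and count(cda:value) <= 1`, which a Schematron validator can then check against CDA instances.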

  19. Impact of automatic threshold capture on pulse generator longevity

    Institute of Scientific and Technical Information of China (English)

    CHEN Ruo-han; CHEN Ke-ping; WANG Fang-zheng; HUA Wei; ZHANG Shu

    2006-01-01

    Background: The automatic threshold-tracking pacing algorithm developed by St. Jude Medical verifies ventricular capture beat by beat by recognizing the evoked response following each pacemaker stimulus. This function was assumed to be not only energy saving but safe. This study estimated the extension in longevity obtained by AutoCapture (AC) compared with pacemakers programmed to manually optimized or nominal output. Methods: Thirty-four patients who received the St. Jude Affinity series pacemaker were included in the study. The following measurements were taken: stimulation and sensing threshold, impedance of leads, evoked response and polarization signals by the 3501 programmer during follow-up, and battery current and battery impedance under different conditions. For longevity comparison, ventricular output was programmed under three different conditions: (1) AC on; (2) AC off with nominal output; and (3) AC off with pacing output set at twice the pacing threshold with a minimum of 2.0 V. Patients were divided into two groups according to whether the chronic threshold was higher or lower than 1 V. The efficacy of AC was evaluated. Results: Current drain in the AC on group, AC off with optimized programming, or AC off with nominal output was (14.33±2.84) mA, (16.74±2.75) mA and (18.4±2.44) mA, respectively (AC on or AC off with optimized programming vs. nominal output, P < 0.01). Estimated longevity was significantly extended by AC on when compared with the nominal setting [(103 ± 27) months vs. (80 ± 24) months, P < 0.01]. Furthermore, compared with optimized programming, AC extends longevity when the pacing threshold is higher than 1 V. Conclusion: AC could significantly prolong pacemaker longevity, especially in patients with a high pacing threshold.
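The longevity figures above are consistent with an idealized constant-drain battery model, longevity proportional to capacity / current drain. A hedged back-of-envelope check (the capacity value is hypothetical, and the drain units are taken at face value from the abstract):

```python
def estimated_longevity_months(capacity_mAh, drain_uA):
    """Idealized longevity: battery capacity divided by a constant current
    drain (real devices derate; the capacity figure used is hypothetical)."""
    hours = capacity_mAh * 1000.0 / drain_uA   # mAh / uA = 1000 h per unit
    return hours / (24 * 30)                   # 30-day months

def longevity_ratio(drain_a, drain_b):
    """With fixed capacity, longevity scales as the inverse drain ratio."""
    return drain_b / drain_a
```

With the reported drains (14.33 with AC on vs. 18.4 at nominal output), the inverse-drain ratio is about 1.28, close to the reported longevity ratio 103/80, so the two reported results are mutually consistent.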

  20. AN ALGORITHM FOR AUTOMATICALLY GENERATING BLACK-BOX TEST CASES

    Institute of Scientific and Technical Information of China (English)

    Xu Baowen; Nie Changhai; Shi Qunfeng; Lu Hong

    2003-01-01

    Selection of test cases plays a key role in improving testing efficiency. Black-box testing is an important way of testing, and its validity depends on the selection of test cases in some sense. A reasonable and effective method for the selection and generation of test cases is urgently needed. This letter first introduces some usual methods of black-box test case generation, then proposes a new algorithm based on interface parameters and discusses its properties, and finally shows the effectiveness of the algorithm.
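As a generic illustration of interface-parameter-based selection (the letter's own algorithm is not reproduced here), "each-choice" coverage keeps every parameter value in at least one test case while avoiding the full cartesian product:

```python
def each_choice_tests(parameters):
    """Each-choice coverage: every value of every interface parameter
    appears in at least one test case. A common black-box reduction of
    the cartesian product; NOT the letter's own algorithm."""
    names = list(parameters)
    columns = [parameters[n] for n in names]
    width = max(len(c) for c in columns)
    # Cycle shorter value lists so all columns reach the same length.
    filled = [(c * (width // len(c) + 1))[:width] for c in columns]
    return [dict(zip(names, combo)) for combo in zip(*filled)]
```

For parameters with 3 and 2 values this yields 3 tests instead of the 6 exhaustive combinations; the saving grows quickly with more parameters.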

  2. Intermediate leak protection/automatic shutdown for B and W helical coil steam generator

    International Nuclear Information System (INIS)

    The report summarizes a follow-on study to the multi-tiered Intermediate Leak/Automatic Shutdown System report. It makes the automatic shutdown system specific to the Babcock and Wilcox (B and W) helical coil steam generator and to the Large Development LMFBR Plant. Threshold leak criteria specific to this steam generator design are developed, and performance predictions are presented for a multi-tier intermediate leak, automatic shutdown system applied to this unit. Preliminary performance predictions for application to the helical coil steam generator were given in the referenced report; for the most part, these predictions have been confirmed. The importance of including a cover gas hydrogen meter in this unit is demonstrated by calculation of a response time one-fifth that of an in-sodium meter at hot standby and refueling conditions

  3. Automatic Generation of Audio Content for Open Learning Resources

    Science.gov (United States)

    Brasher, Andrew; McAndrew, Patrick

    2009-01-01

    This paper describes how digital talking books (DTBs) with embedded functionality for learners can be generated from content structured according to the OU OpenLearn schema. It includes examples showing how a software transformation developed from open source components can be used to remix OpenLearn content, and discusses issues concerning the…

  4. Automatic generation of computable implementation guides from clinical information models.

    Science.gov (United States)

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes. PMID:25910958

  5. Automatic generation of min-weighted persistent formations

    Institute of Scientific and Technical Information of China (English)

    Luo Xiao-Yuan; Li Shao-Bao; Guan Xin-Ping

    2009-01-01

    This paper investigates methods for generating min-weighted rigid graphs and min-weighted persistent graphs. Rigidity and persistence are currently used in various studies on coordination and control of autonomous multi-agent formations. To minimize the communication complexity of formations and reduce energy consumption, this paper introduces the rigidity matrix and presents three algorithms for generating min-weighted rigid and min-weighted persistent graphs. First, the existence of a min-weighted rigid graph is proved by using the rigidity matrix, and algorithm 1 is presented to generate the min-weighted rigid graphs. Second, algorithm 2, based on the rigidity matrix, is presented to direct the edges of min-weighted rigid graphs to generate min-weighted persistent graphs. Third, formations with range constraints are considered, and algorithm 3 is presented to determine whether a framework can form a min-weighted persistent formation. Finally, some simulations are given to show the efficiency of our research.
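The rigidity matrix at the heart of these algorithms can be sketched for the 2D case: a generic framework on n vertices is rigid exactly when the matrix has rank 2n - 3. A self-contained toy (pure-Python rank computation, hypothetical names):

```python
def rigidity_matrix(points, edges):
    """2D rigidity matrix: one row per edge, two columns per vertex."""
    rows = []
    for i, j in edges:
        row = [0.0] * (2 * len(points))
        dx = points[i][0] - points[j][0]
        dy = points[i][1] - points[j][1]
        row[2 * i], row[2 * i + 1] = dx, dy
        row[2 * j], row[2 * j + 1] = -dx, -dy
        rows.append(row)
    return rows

def rank(m, eps=1e-9):
    """Matrix rank by Gaussian elimination (toy; no numpy needed)."""
    m = [row[:] for row in m]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if abs(m[i][col]) > eps), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > eps:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_rigid_2d(points, edges):
    """A generic 2D framework is infinitesimally rigid iff rank = 2n - 3."""
    return rank(rigidity_matrix(points, edges)) == 2 * len(points) - 3
```

A triangle passes this test while a three-vertex path does not, which is the basic distinction the paper's algorithm 1 builds on when selecting min-weighted edge sets.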

  6. Use of design pattern layout for automatic metrology recipe generation

    Science.gov (United States)

    Tabery, Cyrus; Page, Lorena

    2005-05-01

    As critical dimension control requirements become more challenging, due to complex designs, aggressive lithography, and the constant need to shrink, metrology recipe generation and design evaluation have also become very complex. Hundreds of unique sites must be measured and monitored to ensure good device performance and high yield. The use of the design and layout for automated metrology recipe generation will be critical to that challenge. The DesignGauge from Hitachi implements a system enabling arbitrary recipe generation and control of SEM observations performed on the wafer, based only on the design information. This concept for recipe generation can reduce the time to develop a technology node from RET and design rule selection, through OPC model calibration and verification, and all the way to high volume manufacturing. Conventional recipe creation for a large number of measurement targets requires a significant amount of engineering time. Often these recipes are used only once or twice during mask and process verification or OPC calibration data acquisition. This process of manual setup and analysis is also potentially error prone. CD-SEM recipe creation typically requires an actual wafer, so the recipe creation cannot occur until the scanner and reticle are in house. All of these problems with conventional CD-SEM lead to increased development time and reduced final process quality. The new model of CD-SEM recipe generation and management utilizes design-to-SEM matching technology. This new technology extracts an idealized shape from the designed pattern, and utilizes the shape information for pattern matching. As a result, the designed pattern is used as the basis for the template instead of the actual SEM image. Recipe creation can be achieved in a matter of seconds once the target site list is finalized.
    The sequence of steps for creating a recipe is: generate a target site list, pass the design polygons (GDS) and site list to the CD-SEM, define references

  7. Automatic generation of indoor navigation instructions for blind users using a user-centric graph.

    Science.gov (United States)

    Dong, Hao; Ganz, Aura

    2014-01-01

    The complexity and diversity of indoor environments bring significant challenges to automatic generation of navigation instructions for blind and visually impaired users. Unlike generation of navigation instructions for robots, we need to take into account the blind user's wayfinding ability. In this paper we introduce a user-centric, graph-based solution for cane users that takes into account the blind user's cognitive ability as well as the user's mobility patterns. We introduce the principles of generating the graph and the algorithm used to automatically generate the navigation instructions using this graph. We successfully tested the efficiency of the instruction generation algorithm, the correctness of the generated paths, and the quality of the navigation instructions. Blindfolded sighted users were successful in navigating through a three-story building. PMID:25570105
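A minimal sketch of instruction generation over a user-centric graph (the assumed structure, edges annotated with landmark-based instructions, is an illustration rather than the paper's actual graph model):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over {node: [(neighbor, cost, instruction), ...]}; returns
    the instruction sequence along the cheapest path, or None if unreachable."""
    pq = [(0, start, [])]
    seen = set()
    while pq:
        cost, node, instrs = heapq.heappop(pq)
        if node == goal:
            return instrs
        if node in seen:
            continue
        seen.add(node)
        for nxt, w, instr in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, instrs + [instr]))
    return None
```

The user-centric aspect can enter through the edge costs (penalizing segments that are hard to traverse with a cane) and through the wording of the per-edge instructions.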

  8. Semantic annotation of requirements for automatic UML class diagram generation

    Directory of Open Access Journals (Sweden)

    Soumaya Amdouni

    2011-05-01

    Full Text Available The increasing complexity of software engineering requires effective methods and tools to support requirements analysts' activities. While much of a company's knowledge can be found in text repositories, current content management systems have limited capabilities for structuring and interpreting documents. In this context, we propose a tool for transforming text documents describing users' requirements into a UML model. The presented tool uses Natural Language Processing (NLP) and semantic rules to generate a UML class diagram. The main contribution of our tool is to provide assistance to designers, facilitating the transition from a textual description of user requirements to their UML diagrams, based on GATE (General Architecture for Text Engineering) by formulating the rules necessary to generate new semantic annotations.

  9. VOX POPULI: automatic generation of biased video sequences

    OpenAIRE

    Bocconi, S; Nack, Frank

    2004-01-01

    We describe our experimental rhetoric engine Vox Populi that generates biased video-sequences from a repository of video interviews and other related audio-visual web sources. Users are thus able to explore their own opinions on controversial topics covered by the repository. The repository contains interviews with United States residents stating their opinion on the events occurring after the terrorist attack on the United States on the 11th of September 2001. We present a model for biased d...

  10. Automatic Generation of Correlation Rules to Detect Complex Attack Scenarios

    OpenAIRE

    Godefroy, Erwan; Totel, Eric; Hurfin, Michel; Majorczyk, Frédéric

    2014-01-01

    In large distributed information systems, alert correlation systems are necessary to handle the huge amount of elementary security alerts and to identify complex multi-step attacks within the flow of low level events and alerts. In this paper, we show that, once a human expert has provided an action tree derived from an attack tree, a fully automated transformation process can generate exhaustive correlation rules that would be tedious and error prone to enumerate by hand. The transformation ...

  11. FAsTA: A Folksonomy-Based Automatic Metadata Generator

    OpenAIRE

    Al-Khalifa, Hend S.; Davis, Hugh C.

    2007-01-01

    Folksonomies provide a free source of keywords describing web resources, however, these keywords are free form and unstructured. In this paper, we describe a novel tool that converts folksonomy tags into semantic metadata, and present a case study consisting of a framework for evaluating the usefulness of this metadata within the context of a particular eLearning application. The evaluation shows the number of ways in which the generated semantic metadata adds value to the raw folksonomy tags.

  13. Using DSL for Automatic Generation of Software Connectors

    Czech Academy of Sciences Publication Activity Database

    Bureš, Tomáš; Malohlava, M.; Hnětynka, P.

    Los Alamitos: IEEE Computer Society, 2008, s. 138-147. ISBN 978-0-7695-3091-8. [ICCBSS 2008. International Conference on Composition-Based Software Systems /7./. Madrid (ES), 25.02.2008-29.02.2008,] R&D Projects: GA AV ČR 1ET400300504 Institutional research plan: CEZ:AV0Z10300504 Keywords : component based systems * software connectors * code generation * domain-specific languages Subject RIV: JC - Computer Hardware ; Software

  15. Research and implementation of report automatic generation measure based on perl CGI

    International Nuclear Information System (INIS)

    A large-scale real-time data processing system produces a large amount of data about operation states while running, and these data should be managed by operation engineers through performance reports. A solution for automatic performance report generation is presented. It builds a performance report generation system by extracting messages from the database and the UNIX file system and deploying it to an application system. The system has been applied at the CTBT NDC. (authors)

  17. Automatic Generation of Network Protocol Gateways

    DEFF Research Database (Denmark)

    Bromberg, Yérom-David; Réveillère, Laurent; Lawall, Julia;

    2009-01-01

    The emergence of networked devices in the home has made it possible to develop applications that control a variety of household functions. However, current devices communicate via a multitude of incompatible protocols, and thus gateways are needed to translate between them. Gateway construction, however, requires an intimate knowledge of the relevant protocols and a substantial understanding of low-level network programming, which can be a challenge for many application programmers. This paper presents a generative approach to gateway construction, z2z, based on a domain-specific language ...

  18. Automatic Geometry Generation from Point Clouds for BIM

    Directory of Open Access Journals (Sweden)

    Charles Thomson

    2015-09-01

    Full Text Available The need for better 3D documentation of the built environment has come to the fore in recent years, led primarily by city modelling at the large scale and Building Information Modelling (BIM) at the smaller scale. Automation is seen as desirable as it removes the time-consuming and therefore costly amount of human intervention in the process of model generation. BIM is the focus of this paper as not only is there a commercial need, as will be shown by the number of commercial solutions, but also wide research interest due to the aspiration of automated 3D models from both Geomatics and Computer Science communities. The aim is to go beyond the current labour-intensive tracing of the point cloud to an automated process that produces geometry that is both open and more verifiable. This work investigates what can be achieved today with automation through both literature review and by proposing a novel point cloud processing process. We present an automated workflow for the generation of BIM data from 3D point clouds. We also present quality indicators for reconstructed geometry elements and a framework in which to assess the quality of the reconstructed geometry against a reference.

  19. Automatic generation of synchronization instructions for parallel processors

    Energy Technology Data Exchange (ETDEWEB)

    Midkiff, S.P.

    1986-05-01

    The development of high speed parallel multi-processors, capable of parallel execution of doacross and forall loops, has stimulated the development of compilers to transform serial FORTRAN programs to parallel forms. One of the duties of such a compiler must be to place synchronization instructions in the parallel version of the program to insure the legal execution order of doacross and forall loops. This thesis gives strategies usable by a compiler to generate these synchronization instructions. It presents algorithms for reducing the parallelism in FORTRAN programs to match a target architecture, recovering some of the parallelism so discarded, and reducing the number of synchronization instructions that must be added to a FORTRAN program, as well as basic strategies for placing synchronization instructions. These algorithms are developed for two synchronization instruction sets. 20 refs., 56 figs.
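Two of the compiler duties described above can be sketched generically (these are illustrative routines, not the thesis's algorithms): placing wait/post pairs for each dependence distance of a doacross loop, and dropping distances already enforced transitively by others:

```python
def sync_instructions(n_iterations, dependence_distances):
    """Place (wait, post) pairs for a doacross loop: iteration i waits on
    iteration i - d for each dependence distance d (illustrative only)."""
    instrs = []
    for i in range(n_iterations):
        for d in dependence_distances:
            if i - d >= 0:
                instrs.append(("wait", i, i - d))
        instrs.append(("body", i))
        instrs.append(("post", i))
    return instrs

def eliminate_redundant(distances):
    """Drop a distance already enforced by nonnegative combinations of the
    kept smaller distances (a simple subsumption test, an assumption here)."""
    kept = []
    for d in sorted(set(distances)):
        reachable = {0}
        for _ in range(d):
            reachable |= {r + k for r in reachable for k in kept if r + k <= d}
        if d not in reachable:
            kept.append(d)
    return kept
```

Dropping distance 2 when distance 1 is already synchronized is the kind of reduction in synchronization instruction count that the thesis's algorithms target.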

  20. Automatic generation of Feynman rules in the Schroedinger functional

    Energy Technology Data Exchange (ETDEWEB)

    Takeda, Shinji [Humboldt Universitaet zu Berlin, Newtonstr. 15, 12489 Berlin (Germany)], E-mail: takeda@physik.hu-berlin.de

    2009-04-11

    We provide an algorithm to generate vertices for the Schroedinger functional with an abelian background gauge field. The background field has a non-trivial color structure, therefore we mainly focus on a manipulation of the color matrix part. We propose how to implement the algorithm especially in python code. By using python outputs produced by the code, we also show how to write a numerical expression of vertices in the time-momentum as well as the coordinate space into a Feynman diagram calculation code. As examples of the applications of the algorithm, we provide some one-loop results: ratios of the λ parameters between the plaquette gauge action and the improved gauge actions composed from six-link loops (rectangular, chair and parallelogram), the determination of the O(a) boundary counter term to this order, and the perturbative cutoff effects of the step scaling function of the Schroedinger functional coupling constant.

  1. Modular code supervisor. Automatic generation of command language

    International Nuclear Information System (INIS)

    It is shown how, starting from a problem formulated by the user, the adequate calculation procedure is generated in the command language, and how the data necessary for the calculation are acquired and their validity verified. Modular codes are used because of their flexibility and wide utilisation. Modules are written in Fortran, and calculations are done in batches according to an algorithm written in the GIBIANE command language. The action plans are based on the STRIPS and WARPLAN families. Elementary representation of a module and special instructions are illustrated. Dynamic construction of macro-actions and acquisition of the specification (which allows users to express the goal of a program without indicating which algorithm is used to reach the goal) are illustrated. The final phase consists in translating the algorithm into the command language.

  2. Contribution of supraspinal systems to generation of automatic postural responses

    Directory of Open Access Journals (Sweden)

    Tatiana G Deliagina

    2014-10-01

    Full Text Available Different species maintain a particular body orientation in space due to activity of the closed-loop postural control system. In this review we discuss the role of neurons of descending pathways in the operation of this system as revealed in animal models of differing complexity: a lower vertebrate (lamprey) and higher vertebrates (rabbit and cat). In the lamprey and quadruped mammals, the role of spinal and supraspinal mechanisms in the control of posture is different. In the lamprey, the system contains one closed-loop mechanism consisting of supraspino-spinal networks. Reticulospinal (RS) neurons play a key role in generation of postural corrections. Due to vestibular input, any deviation from the stabilized body orientation leads to activation of a specific population of RS neurons. Each of the neurons activates a specific motor synergy. Collectively, these neurons evoke the motor output necessary for the postural correction. In contrast to lampreys, postural corrections in quadrupeds are primarily based not on the vestibular input but on the somatosensory input from limb mechanoreceptors. The system contains two closed-loop mechanisms – spinal and spino-supraspinal networks – which supplement each other. Spinal networks receive somatosensory input from the limb signaling postural perturbations, and generate spinal postural limb reflexes. These reflexes are relatively weak, but in intact animals they are enhanced due to both tonic supraspinal drive and phasic supraspinal commands. Recent studies of these supraspinal influences are considered in this review. A hypothesis suggesting common principles of operation of the postural systems stabilizing body orientation in a particular plane in the lamprey and quadrupeds, namely the interaction of antagonistic postural reflexes, is discussed.

  3. Automatic Generation of Printed Catalogs: An Initial Attempt

    Directory of Open Access Journals (Sweden)

    Jared Camins-Esakov

    2010-06-01

    Full Text Available Printed catalogs are useful in a variety of contexts. In special collections, they are often used as reference tools and to commemorate exhibits. They are useful in settings, such as in developing countries, where reliable access to the Internet—or even electricity—is not available. In addition, many private collectors like to have printed catalogs of their collections. All the information needed for creating printed catalogs is readily available in the MARC bibliographic records used by most libraries, but there are no turnkey solutions available for the conversion from MARC to printed catalog. This article describes the development of a system, available on github, that uses XSLT, Perl, and LaTeX to produce press-ready PDFs from MARCXML files. The article particularly focuses on the two XSLT stylesheets which comprise the core of the system, and do the "heavy lifting" of sorting and indexing the entries in the catalog. The author also highlights points where the data stored in MARC bibliographic records requires particular "massaging," and suggests improvements for future attempts at automated printed catalog generation.
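The record-to-catalog rendering step can be illustrated in miniature (the actual system uses XSLT, Perl, and LaTeX; this Python sketch with simplified, non-MARC field names only shows the escape-and-render idea):

```python
def latex_escape(s):
    """Escape common LaTeX special characters (backslashes in the input
    are not handled in this sketch)."""
    for ch, rep in [("&", r"\&"), ("%", r"\%"), ("$", r"\$"),
                    ("#", r"\#"), ("_", r"\_")]:
        s = s.replace(ch, rep)
    return s

def catalog_entry(record):
    """Render one bibliographic record (simplified dict fields, not real
    MARC tags) as a LaTeX item for a printed catalog."""
    author = latex_escape(record.get("author", "Anon."))
    title = latex_escape(record["title"])
    year = record.get("year", "n.d.")
    return f"\\item \\textbf{{{author}}} \\emph{{{title}}}. {year}."
```

Escaping is exactly the kind of "massaging" of MARC data the author mentions: titles routinely contain characters (ampersands, dollar signs) that are special to LaTeX.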

  4. A hybrid approach to automatic generation of NC programs

    Directory of Open Access Journals (Sweden)

    G. Payeganeh

    2005-12-01

    Full Text Available Purpose: This paper describes AGNCP, an intelligent system for integrating commercial CAD and CAM systems for 2.5D milling operations at a low cost. Design/methodology/approach: It deals with different machining problems with the aid of two expert systems. It recognizes machining features, and determines the required machining process plans, cutting tools and parameters necessary for generation of NC programs. Findings: The system deals with different machining problems with the aid of two expert systems. The first communicates with the CAD system for recognizing machining features. It is developed in LISP, as machining features can be properly represented by LISP code, which is ideal for manipulating lists and input data. The second expert system requires extensive communications with several databases for retrieving tooling and machining information, and the VP-Expert shell was found to be the most suitable package to perform this task. Research limitations/implications: 2.5D milling covers a wide range of operations. However, work is in progress to cover 3D milling operations. The system can also be modified to be used for other activities such as turning, flame cutting, electro discharge machining (EDM), punching, etc. Practical implications: Use of AGNCP resulted in improved efficiency, noticeable time savings, and elimination of the need for expert process planners. Originality/value: The paper describes a method for eliminating the need for extensive user intervention for CAD/CAM integration.

  5. EXTRACSION: a system for automatic Eddy Current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Improving the speed and quality of Eddy Current non-destructive testing of steam generator tubes leads to automation of all processes that contribute to diagnosis. This paper describes how signal processing, pattern recognition and artificial intelligence are used to build a software package that is able to automatically provide an efficient diagnosis. (author)

  6. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  7. Students' Feedback Preferences: How Do Students React to Timely and Automatically Generated Assessment Feedback?

    Science.gov (United States)

    Bayerlein, Leopold

    2014-01-01

    This study assesses whether or not undergraduate and postgraduate accounting students at an Australian university differentiate between timely feedback and extremely timely feedback, and whether or not the replacement of manually written formal assessment feedback with automatically generated feedback influences students' perception of…

  8. SIMULATION STUDIES ON AUTOMATIC GENERATION CONTROL IN DEREGULATED ENVIRONMENT WITHOUT CONSIDERING GRC

    Directory of Open Access Journals (Sweden)

    A. Suresh Babu

    2012-03-01

    Full Text Available In this paper, analysis of automatic generation control (AGC) using an integral controller is carried out in the deregulated environment. The traditional AGC of a two-area system is modified and implemented in a deregulated environment to account for the effect of contracted and un-contracted power demands on system dynamics. The concept of a DISCO participation matrix (DPM) to simulate bilateral contracts is proposed. The gain setting of the integral controller is optimized without considering the Generation Rate Constraint (GRC), using the Integral Squared Error (ISE) technique.
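The ISE tuning step can be sketched as follows. This is a hedged, single-area stand-in for the paper's two-area deregulated system: the lumped governor/turbine model, all parameter values, and the gain grid are illustrative assumptions, not the authors' test system.

```python
# Hedged sketch: tuning an AGC integral gain by minimizing the Integral
# Squared Error (ISE) of the frequency deviation after a step load change.
# Single-area lumped model; all parameters are invented for illustration.

def ise_for_gain(Ki, dt=0.01, t_end=30.0):
    """Euler-integrate a lumped governor/turbine LFC model; return ISE of df."""
    H, D, R, Tg = 5.0, 1.0, 0.05, 0.5   # inertia, damping, droop, lag (assumed)
    dPL = 0.01                           # 1% step load disturbance
    df = dPm = integ = 0.0               # freq deviation, mech power, error integral
    ise, t = 0.0, 0.0
    while t < t_end:
        integ += df * dt                 # integral of frequency error
        dPc = -Ki * integ                # supplementary (AGC) control signal
        ddf = (dPm - dPL - D * df) / (2 * H)
        ddPm = (-dPm - df / R + dPc) / Tg
        df += ddf * dt
        dPm += ddPm * dt
        ise += df * df * dt
        t += dt
    return ise

# Grid search over candidate gains: the ISE technique in its simplest form.
gains = [0.1 * k for k in range(1, 51)]
best = min(gains, key=ise_for_gain)
print(f"best Ki ~ {best:.1f}, ISE = {ise_for_gain(best):.6f}")
```

A real study would optimize over the full multi-area model with DPM terms; the grid search here only illustrates how the ISE criterion selects a gain.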

  9. Revisiting the Steam-Boiler Case Study with LUTESS : Modeling for Automatic Test Generation

    OpenAIRE

    Papailiopoulou, Virginia; Seljimi, Besnik; Parissis, Ioannis

    2009-01-01

    LUTESS is a testing tool for synchronous software that makes it possible to automatically build test data generators. The latter rely on a formal model of the program environment, composed of a set of invariant properties assumed to hold for every software execution. Additional assumptions can be used to guide the test data generation. The environment description together with the assumptions corresponds to a test model of the program. In this paper, we apply this modeling...

  10. Accuracy assessment of building point clouds automatically generated from iphone images

    Science.gov (United States)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, and for quick and real-time change detection. However, further insight is needed into the circumstances required to guarantee successful point cloud generation from smartphone images.
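The two headline metrics (outlier percentage and mean point-to-point distance to the reference cloud) can be reproduced on toy data as follows. This is a hedged illustration: the brute-force nearest-neighbour search and the 0.5 m outlier threshold are assumptions, not the authors' registration pipeline.

```python
# Hedged sketch of the two accuracy metrics in the abstract, computed on a
# tiny synthetic point cloud. Real pipelines would use a k-d tree instead of
# the O(n*m) brute-force nearest-neighbour loop used here for clarity.
import math

def nearest_dist(p, cloud):
    """Distance from point p to its nearest neighbour in the reference cloud."""
    return min(math.dist(p, q) for q in cloud)

def compare_clouds(test_cloud, ref_cloud, outlier_thresh=0.5):
    dists = [nearest_dist(p, ref_cloud) for p in test_cloud]
    inliers = [d for d in dists if d <= outlier_thresh]
    outlier_pct = 100.0 * (len(dists) - len(inliers)) / len(dists)
    mean_dist = sum(inliers) / len(inliers)     # mean over inliers only
    return outlier_pct, mean_dist

# Toy example: a noisy copy of a 3-point reference cloud plus one gross outlier.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
test = [(0.05, 0.0, 0.0), (1.0, 0.1, 0.0), (0.0, 1.0, 0.05), (5.0, 5.0, 5.0)]
pct, mean_d = compare_clouds(test, ref)
print(f"outliers: {pct:.1f}%, mean distance: {mean_d:.3f} m")
# → outliers: 25.0%, mean distance: 0.067 m
```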

  11. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    OpenAIRE

    Park, Jung-ran; Brenza, Andrew

    2015-01-01

    Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data. Utilization of (semi)automatic metadata generation is critical in addressing these environmental changes and may be unavoidable in the future considering the costly and complex operation of manual metadata creation. To address such needs, this study examines the range of semi-automatic metadata...

  12. The ACR-program for automatic finite element model generation for part through cracks

    International Nuclear Information System (INIS)

    The ACR-program (Automatic Finite Element Model Generation for Part Through Cracks) has been developed at the Technical Research Centre of Finland (VTT) for automatic finite element model generation for surface flaws using three dimensional solid elements. Circumferential or axial cracks can be generated on the inner or outer surface of a cylindrical or toroidal geometry. Several crack forms are available including the standard semi-elliptical surface crack. The program can be used in the development of automated systems for fracture mechanical analyses of structures. The tests for the accuracy of the FE-mesh have been started with two-dimensional models. The results indicate that the accuracy of the standard mesh is sufficient for practical analyses. Refinement of the standard mesh is needed in analyses with high load levels well over the limit load of the structure

  13. The mesh-matching algorithm: an automatic 3D mesh generator for Finite element structures

    CERN Document Server

    Couteau, B.; Payan, Yohan; Lavallée, Stéphane

    2000-01-01

    Several authors have employed Finite Element Analysis (FEA) for stress and strain analysis in orthopaedic biomechanics. Unfortunately, the use of three-dimensional models is time-consuming and consequently the number of analyses that can be performed is limited. The authors have investigated a new method allowing automatic 3D mesh generation for structures as complex as bone. This method, called the Mesh-Matching (M-M) algorithm, automatically generates customized 3D meshes of bones from an already existing model. The M-M algorithm has been used to generate FE models of ten proximal human femora from an initial one which had been experimentally validated. The new meshes seemed to demonstrate satisfying results.

  14. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    Science.gov (United States)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

    The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%-ratio in our study were divided into three groups, where V%-ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the `randn' function in the MATLAB program. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP increased in proportion to V%-ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization might differ from patient to patient. The PDF of the rectal NTCP was obtained automatically for each group; the smoothness of the probability distribution increased with an increasing number of data points and with increasing window width. We showed that during prostate IMRT optimization, patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the
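The density-estimation step can be sketched directly: a Gaussian-kernel density estimate turns a sample of NTCP values into a PDF, with the window width h playing the smoothing role noted in the abstract. The sample values and bandwidth below are invented, not clinical data.

```python
# Hedged sketch: Gaussian-kernel density estimation, the method the abstract
# uses to build a PDF from sampled rectal NTCP values. Toy sample, toy bandwidth.
import math

def gaussian_kde(samples, h):
    """Return a PDF estimate built from one Gaussian kernel per sample."""
    n = len(samples)
    norm = 1.0 / (n * h * math.sqrt(2 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)
    return pdf

ntcp_samples = [0.02, 0.03, 0.035, 0.05, 0.06, 0.08]   # hypothetical NTCP values
pdf = gaussian_kde(ntcp_samples, h=0.01)                # larger h -> smoother PDF

# Sanity check: the estimate should integrate to ~1 (trapezoid rule).
xs = [i * 0.001 for i in range(-100, 300)]
area = sum(0.001 * 0.5 * (pdf(a) + pdf(b)) for a, b in zip(xs, xs[1:]))
print(f"integral of estimated PDF ~ {area:.3f}")
# → integral of estimated PDF ~ 1.000
```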

  15. Morphological and anatomical structure of generative organs of Salsola kali ssp. ruthenica (Iljin) Soó at the SEM level

    Directory of Open Access Journals (Sweden)

    Krystyna Idzikowska

    2011-04-01

    Full Text Available The morphology and anatomy of generative organs of Salsola kali ssp. ruthenica were examined in detail using light microscopy (LM) and scanning electron microscopy (SEM). Whole flowers, fruits and their parts (pistil, stamens, sepals, embryo, seed) were observed in different developmental stages. In the first stage (June), flower buds were closed. In the second stage (August), flowers were ready for pollination/fertilization. In the third stage (September), fruits were mature. Additionally, the anatomical and morphological structure of the sepals was observed by means of LM and SEM. Thanks to transverse and longitudinal semi-sections through the sepals, the first phase of wing formation was recorded by SEM. The appearance of stomata in the epidermal cells of the sepals above the forming wings was also of particular interest. The stomata were likewise observed in mature fruits.

  16. Automatic WSDL-guided Test Case Generation for PropEr Testing of Web Services

    Directory of Open Access Journals (Sweden)

    Konstantinos Sagonas

    2012-10-01

    Full Text Available With web services already being key ingredients of modern web systems, automatic and easy-to-use but at the same time powerful and expressive testing frameworks for web services are increasingly important. Our work aims at fully automatic testing of web services: ideally the user only specifies properties that the web service is expected to satisfy, in the form of input-output relations, and the system handles all the rest. In this paper we present in detail the component which lies at the heart of this system: how the WSDL specification of a web service is used to automatically create test case generators that can be fed to PropEr, a property-based testing tool, to create structurally valid random test cases for its operations and check its responses. Although the process is fully automatic, our tool optionally allows the user to easily modify its output to either add semantic information to the generators or write properties that test for more involved functionality of the web services.

  17. Automatic feature template generation for maximum entropy based intonational phrase break prediction

    Science.gov (United States)

    Zhou, You

    2013-03-01

    The prediction of intonational phrase (IP) breaks is important for both the naturalness and the intelligibility of Text-to-Speech (TTS) systems. In this paper, we propose a maximum entropy (ME) model to predict IP breaks from unrestricted text, and evaluate various keyword selection approaches in different domains. Furthermore, we design a hierarchical clustering algorithm for the automatic generation of feature templates, which minimizes the need for human supervision during ME model training. Results of comparative experiments show that, for the task of IP break prediction, the ME model clearly outperforms the classification and regression tree (CART); that log-likelihood ratio is the best scoring measure for keyword selection; and that, compared with manual templates, the templates automatically generated by our approach greatly improve the F-score of ME-based IP break prediction while significantly reducing the size of the ME model.

  18. HIGH QUALITY IMPLEMENTATION FOR AUTOMATIC GENERATION C# CODE BY EVENT-B PATTERN

    Directory of Open Access Journals (Sweden)

    Eman K Elsayed

    2014-01-01

    Full Text Available In this paper we propose a logically correct path for automatically implementing any algorithm or model in verified C# code. Our proposal depends on using Event-B as a formal method, and it is a suitable solution for users who are inexperienced in programming but proficient in mathematical modeling. Our proposal also integrates requirements, code and verification into the system development life cycle, and we further suggest using Event-B patterns. The approach is classified into two cases: the algorithm case and the model case. The benefits of our proposal are reducing the proof effort, reusability, increasing the degree of automation and generating high-quality code. In this paper we applied and discussed the three phases of the automatic code generation philosophy on two case studies: the first is a “minimum algorithm” and the second is a model of an ATM.

  19. Automatic Generation of Deep Web Wrappers based on Discovery of Repetition

    OpenAIRE

    Nakatoh, Tetsuya; Yamada, Yasuhiro; Hirokawa, Sachio

    2004-01-01

    A Deep Web wrapper is a program that extracts contents from search results. We propose a new automatic wrapper generation algorithm which discovers a repetitive pattern in search results. The repetitive pattern is expressed by token sequences consisting of HTML tags, plain text and wild-cards. The algorithm applies string matching with mismatches to unify variations from the template and uses the FFT (fast Fourier transform) for efficiency. We show an empirical evaluation of ...
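The core of the repetition discovery can be sketched in a few lines. This hedged illustration counts per-lag token matches with a plain O(n²) loop where the paper uses an FFT for efficiency; the toy token sequence is invented.

```python
# Hedged sketch: find the repetition period of a token sequence by counting,
# for each candidate lag, how many positions match the position `lag` steps
# later ("string matching with mismatches"). The paper computes these counts
# with an FFT; a direct loop is used here for clarity.

def best_period(tokens, min_lag=2):
    n = len(tokens)
    scores = {}
    for lag in range(min_lag, n // 2 + 1):
        matches = sum(tokens[i] == tokens[i + lag] for i in range(n - lag))
        scores[lag] = matches / (n - lag)       # normalise by overlap length
    return max(scores, key=scores.get)

# A toy "search result" page: a record template repeating every 4 tokens,
# with one varying (wildcard-like) field per record.
page = ["<li>", "<b>", "title1", "</li>",
        "<li>", "<b>", "title2", "</li>",
        "<li>", "<b>", "title3", "</li>",
        "<li>", "<b>", "title4", "</li>"]
print(best_period(page))   # → 4
```

The varying `titleN` fields are exactly the positions a wrapper would replace with wild-cards once the period is known.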

  20. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    OpenAIRE

    Luo Hanwu; Li Mengke; Xu Xinyao; Cui Shigang; Han Yin; Yan Kai; Wang Jing; Le Jian

    2016-01-01

    This paper presents a novel method to solve for the initial lightning breakdown current by effectively combining the ATP and MATLAB simulation software, with the aim of evaluating the lightning protection performance of transmission lines. Firstly, the executable ATP simulation model is generated automatically according to the required information such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, etc. through an interface p...

  1. Automatic Generation of Predictive Dynamic Models Reveals Nuclear Phosphorylation as the Key Msn2 Control Mechanism

    OpenAIRE

    Sunnåker, Mikael; Zamora-Sillero, Elias; Dechant, Reinhard; Ludwig, Christina; Busetto, Alberto Giovanni; Wagner, Andreas; Stelling, Joerg

    2013-01-01

    Predictive dynamical models are critical for the analysis of complex biological systems. However, methods to systematically develop and discriminate among systems biology models are still lacking. Here, we describe a computational method that incorporates all hypothetical mechanisms about the architecture of a biological system into a single model, and automatically generates a set of simpler models compatible with observational data. As a proof-of-principle, we analyzed the dynamic control o...

  2. Deriving Safety Cases for the Formal Safety Certification of Automatically Generated Code

    OpenAIRE

    Basir, Nurlida; Denney, Ewen; Fischer, Bernd

    2008-01-01

    We present an approach to systematically derive safety cases for automatically generated code from information collected during a formal, Hoare-style safety certification of the code. This safety case makes explicit the formal and informal reasoning principles, and reveals the top-level assumptions and external dependencies that must be taken into account; however, the evidence still comes from the formal safety proofs. It uses a generic goal-based argument that is instantiated with respect t...

  3. An integrated automatic system for the eddy-current testing of the steam generator tubes

    Energy Technology Data Exchange (ETDEWEB)

    Woo, Hee Gon; Choi, Seong Su [Korea Electric Power Corp. (KEPCO), Taejon (Korea, Republic of). Research Center

    1995-12-31

    This research project was focused on automation of steam generator tube inspection for nuclear power plants. The ECT (Eddy Current Testing) inspection process in nuclear power plants is classified into three subprocesses: signal acquisition, signal evaluation, and inspection planning and data management. Having first been automated individually, these processes were then effectively integrated into an automatic inspection system, which was implemented on an HP workstation with the expert system developed (author). 25 refs., 80 figs.

  4. Fully automatic adjoints: a robust and efficient mechanism for generating adjoint ocean models

    Science.gov (United States)

    Ham, D. A.; Farrell, P. E.; Funke, S. W.; Rognes, M. E.

    2012-04-01

    The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. There are two current technologies, each of which suffers from its own limitations. Algorithmic differentiation, also called automatic differentiation, is employed by models such as the MITGCM [2] and the Alfred Wegener Institute model FESOM [3]. This technique is very difficult to apply to existing code and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise it separately. This approach, known as the continuous adjoint and employed in ROMS [4], has the disadvantage that two different model code bases must be maintained and manually kept synchronised as the model develops. Moreover, the discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error. The alternative presented here is to formulate the flow model in the high-level language UFL (Unified Form Language) and to automatically generate the model using the software of the FEniCS project. In this approach it is the high-level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint [5]. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by libadjoint [1]. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation

  5. A semi-automatic method of generating subject-specific pediatric head finite element models for impact dynamic responses to head injury.

    Science.gov (United States)

    Li, Zhigang; Han, Xiaoqiang; Ge, Hao; Ma, Chunsheng

    2016-07-01

    To account for the effects of realistic variation in head morphology on the dynamic responses to head injury, it is necessary to develop multiple subject-specific pediatric head finite element (FE) models based on computed tomography (CT) or magnetic resonance imaging (MRI) scans. However, traditional manual model development is very time-consuming. In this study, a new automatic method was developed to extract anatomical points from pediatric head CT scans to represent pediatric head morphological features (head size/shape, skull thickness, and suture/fontanel width). Subsequently, a geometry-adaptive mesh-morphing method based on radial basis functions was developed that can automatically morph a baseline pediatric head FE model into target FE models with geometries corresponding to the extracted head morphological features. Finally, five subject-specific head FE models of approximately 6-month-old (6MO) subjects were automatically generated using the developed method. These validated models were employed to investigate differences in head dynamic responses among subjects with different head morphologies. The results show that variations in head morphological features have a relatively large effect on the pediatric head dynamic response. They indicate that pediatric head morphological variation should be taken into account when using computational models to reconstruct pediatric head injuries due to traffic or fall accidents or child abuse, as well as when predicting head injury risk for children with obvious differences in head size and morphology. PMID:27058003

  6. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composites (LFRC) with high fiber volume fraction and random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through random movements of fibers from their initial regular hexagonal arrangement. Damageable layers are introduced into the fibers to take into account the random distribution of fiber strengths. A series of computational experiments on a glass-fiber-reinforced polymer epoxy matrix composite is performed to...

  7. Automatic generation of stop word lists for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
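The ratio test summarised above can be sketched in a few lines. This is a hedged reading of the patent summary, not its actual implementation; the corpus, the keyword list, and both thresholds are toy values.

```python
# Hedged sketch: terms that appear adjacent to known keywords more often than
# inside them are candidate stop words. Multi-word keyword phrases are matched
# literally; a term's ratio of adjacency count to in-keyword count decides
# whether it is kept, and the list is truncated by corpus frequency.
import re
from collections import Counter

def stop_word_list(documents, keywords, min_ratio=1.0, max_size=10):
    kw_seqs = [kw.lower().split() for kw in keywords]
    adjacency, in_keyword, corpus_freq = Counter(), Counter(), Counter()
    for doc in documents:
        words = re.findall(r"[a-z']+", doc.lower())
        corpus_freq.update(words)
        inside = [False] * len(words)
        for seq in kw_seqs:
            for i in range(len(words) - len(seq) + 1):
                if words[i:i + len(seq)] == seq:          # keyword occurrence
                    for k in range(i, i + len(seq)):
                        inside[k] = True
                    for j in (i - 1, i + len(seq)):       # flanking positions
                        if 0 <= j < len(words):
                            adjacency[words[j]] += 1
        for w, flag in zip(words, inside):
            if flag:
                in_keyword[w] += 1
    kept = [w for w in adjacency
            if adjacency[w] / max(in_keyword[w], 1) >= min_ratio]
    kept.sort(key=lambda w: -corpus_freq[w])              # truncation criterion
    return kept[:max_size]

docs = ["compression of natural gas is studied",
        "storage of natural gas in porous media",
        "natural gas flows through the pipeline"]
print(stop_word_list(docs, keywords=["natural gas"]))
# → ['of', 'is', 'in', 'flows']
```

On this tiny corpus the content word "flows" sneaks in, which is exactly why the patent truncates the list against predetermined criteria over a large corpus.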

  8. AUTOMATIC GENERATION CONTROL OF MULTI AREA POWER SYSTEMS USING ANN CONTROLLER

    Directory of Open Access Journals (Sweden)

    Bipasha Bhatia

    2012-07-01

    Full Text Available This paper presents the use of one of the methods of artificial intelligence to study the automatic generation control of interconnected power systems. A control scheme is established for an interconnected three-area thermal-thermal-thermal power system using generation rate constraints (GRC) and an Artificial Neural Network (ANN) controller. The working of the controllers is simulated using the MATLAB/SIMULINK package. The outputs of both controllers are compared, and it is established that the ANN-based approach is better than GRC for 1% step load conditions.

  9. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML-model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating models the software can automatically fit all these models to the data and provides a ranking for model selection, in case data is available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software. PMID:19425122
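The master-model idea above can be sketched as follows. This is a hedged illustration: a model is represented as a plain dict of named reactions rather than SBML, the listed mechanisms are invented, and fitting/ranking with COPASI is out of scope.

```python
# Hedged sketch of ModelMage's core idea: enumerate candidate models by
# leaving out user-designated optional components of a single master model.
from itertools import combinations

def candidate_models(master, optional):
    """Yield every sub-model obtained by removing a subset of optional parts."""
    for r in range(len(optional) + 1):
        for removed in combinations(optional, r):
            yield {name: rxn for name, rxn in master.items()
                   if name not in removed}

master = {
    "production":  "-> A",
    "conversion":  "A -> B",
    "feedback":    "B -| production",   # hypothesised mechanism
    "degradation": "B ->",              # hypothesised mechanism
}
models = list(candidate_models(master, optional=["feedback", "degradation"]))
print(len(models))   # → 4 candidate models from one master model
```

With k optional components the user gets exactly 2^k precisely defined candidates, in contrast to tools that enumerate all structurally possible models.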

  10. Automatic geometric modeling, mesh generation and FE analysis for pipelines with idealized defects and arbitrary location

    Energy Technology Data Exchange (ETDEWEB)

    Motta, R.S.; Afonso, S.M.B.; Willmersdorf, R.B.; Lyra, P.R.M. [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Cabral, H.L.D. [TRANSPETRO, Rio de Janeiro, RJ (Brazil); Andrade, E.Q. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Although the Finite Element Method (FEM) has proved to be a powerful tool for predicting the failure pressure of corroded pipes, the generation of good computational models of pipes with corrosion defects can take several days, which makes the computational simulation procedure difficult to apply in practice. The main purpose of this work is to develop a set of computational tools to automatically produce models of pipes with defects, ready to be analyzed with commercial FEM programs, starting from a few parameters that locate and provide the main dimensions of the defect or a series of defects. These defects can be internal or external and can assume general spatial locations along the pipe. Idealized rectangular and elliptic geometries can be generated. The tools are based on the MSC.PATRAN pre- and post-processing programs and were written in PCL (Patran Command Language). The program for the automatic generation of models (PIPEFLAW) has a simplified and customized graphical interface, so that an engineer with basic notions of computational simulation with the FEM can rapidly generate models that result in precise and reliable simulations. Some examples of models of pipes with defects generated by the PIPEFLAW system are shown, and the results of numerical analyses, done with the tools presented in this work, are compared with empirical results. (author)

  11. Automatic Generation of Data Types for Classification of Deep Web Sources

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of the two solutions.

  12. Automatic Generation Control Using PI Controller with Bacterial Foraging for both Thermal and Hydro Plants

    Directory of Open Access Journals (Sweden)

    Preeti Hooda

    2014-06-01

    Full Text Available Load-frequency control (LFC) is used to restore the balance between load and generation in each control area by means of speed control. In a power system, the main goal of load frequency control (LFC) or automatic generation control (AGC) is to maintain the frequency of each area and the tie-line power flow within specified tolerances by adjusting the MW outputs of the LFC generators so as to accommodate fluctuating load demands. In this paper, an attempt is made to devise a scheme for automatic generation control in a restructured environment, considering the effects of contracts between DISCOs and GENCOs to keep the power system network in the normal state, where the GENCOs used are hydro as well as thermal plants. The bacterial foraging optimization technique is developed and applied to AGC in an interconnected four-area system. The performance of the system is obtained with the MATLAB Simulink tool, and the results are shown as frequency and power responses for the four-area AGC system. Both thermal and hydro systems are used on the GENCO side, with a reheat system transfer function.

  13. An Automatic K-Point Grid Generation Scheme for Enhanced Efficiency and Accuracy in DFT Calculations

    Science.gov (United States)

    Mohr, Jennifer A.-F.; Shepherd, James J.; Alavi, Ali

    2013-03-01

    We seek to create an automatic k-point grid generation scheme for density functional theory (DFT) calculations that improves the efficiency and accuracy of the calculations and is suitable for use in high-throughput computations. Current automated k-point generation schemes often result in calculations with insufficient k-points, which reduces the reliability of the results, or too many k-points, which can significantly increase computational cost. By controlling a wider range of k-point grid densities for the Brillouin zone based upon factors of conductivity and symmetry, a scalable k-point grid generation scheme can lower calculation runtimes and improve the accuracy of energy convergence. Johns Hopkins University
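One simple density-based rule of the kind argued for above can be sketched as follows. This is a hedged illustration, not the authors' scheme: the reciprocal-lattice lengths, target spacings, and the metal/insulator distinction drawn in the example are invented.

```python
# Hedged sketch: choose k-point grid divisions along each reciprocal-lattice
# vector from a single target linear spacing, so that denser grids follow
# automatically for systems (e.g. conductors) that demand them.
import math

def kgrid(recip_lengths, spacing):
    """Divisions per axis so k-points are at most `spacing` apart (1/Angstrom)."""
    return tuple(max(1, math.ceil(b / spacing)) for b in recip_lengths)

b = (1.2, 1.2, 0.4)                 # |b1|, |b2|, |b3| of a layered cell (assumed)
print(kgrid(b, spacing=0.16))       # tighter spacing, e.g. for a metal → (8, 8, 3)
print(kgrid(b, spacing=0.35))       # coarser grid, e.g. for an insulator → (4, 4, 2)
```

Scaling divisions to each axis length keeps the k-point density isotropic, so elongated cells are not over-sampled along their short reciprocal axes.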

  14. Slow Dynamics Model of Compressed Air Energy Storage and Battery Storage Technologies for Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Krishnan, Venkat; Das, Trishna

    2016-05-01

    Increasing variable generation penetration and the consequent increase in short-term variability make energy storage technologies attractive, especially in the ancillary market for providing frequency regulation services. This paper presents slow-dynamics models for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess the system frequency response and quantify the benefits from storage technologies in providing regulation service. The paper also presents the slow-dynamics model of the power system integrated with storage technologies in a complete state-space form. The storage technologies have been integrated into the IEEE 24-bus system treated as a single area, and a comparative study of various solution strategies, including transmission enhancement and a combustion turbine, has been performed in terms of generation cycling and frequency response performance metrics.

  15. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    Directory of Open Access Journals (Sweden)

    Luo Hanwu

    2016-01-01

    Full Text Available This paper presents a novel method to solve for the initial lightning breakdown current by effectively combining the ATP and MATLAB simulation tools, with the aim of evaluating the lightning protection performance of transmission lines. First, an executable ATP simulation model is generated automatically from the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, through an interface program coded in MATLAB. Then, data are extracted from the LIS files produced by executing the ATP simulation model, and the occurrence of transmission line breakdown is determined from the relevant data in the LIS file. The lightning current amplitude is reduced when breakdown occurs, and increased otherwise. Thus the initial lightning breakdown current of a transmission line with given parameters can be determined accurately by continuously adjusting the lightning current amplitude, realized by a loop algorithm coded in MATLAB. The method proposed in this paper generates the ATP simulation program automatically and facilitates the lightning protection performance assessment of transmission lines.
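The amplitude-adjustment loop described above can be sketched as a bisection over a breakdown predicate. Here a stand-in function replaces the actual "run ATP, parse the LIS file" step, and the threshold value is hypothetical.

```python
def breakdown(i_amp, critical=28.7):
    """Stand-in for 'generate the ATP model, run it, and inspect the
    LIS file for a flashover' (threshold in kA, hypothetical)."""
    return i_amp >= critical

def find_critical_current(lo=1.0, hi=200.0, tol=0.1):
    """Bracket the initial breakdown current by bisection."""
    assert breakdown(hi) and not breakdown(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if breakdown(mid):
            hi = mid   # flashover: lower the amplitude
        else:
            lo = mid   # withstand: raise the amplitude
    return 0.5 * (lo + hi)

print(round(find_critical_current(), 2))
```

Bisection needs far fewer ATP runs than stepping the amplitude linearly, since each run halves the bracketing interval.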

  16. Application of GA optimization for automatic generation control design in an interconnected power system

    International Nuclear Information System (INIS)

    Highlights: → A realistic model for automatic generation control (AGC) design is proposed. → The model considers GRC, speed governor dead band, filters and time delay. → The model provides an accurate basis for digital simulations. -- Abstract: This paper addresses a realistic model for automatic generation control (AGC) design in an interconnected power system. The proposed scheme considers the generation rate constraint (GRC), dead band, and time delay imposed on the power system by the governor-turbine, filters, thermodynamic process, and communication channels. The simplicity of structure and acceptable response of the well-known integral controller make it attractive for the power system AGC design problem. A genetic algorithm (GA) is used to compute the decentralized control parameters to achieve an optimum operating point. A 3-control-area power system is considered as a test system, and the closed-loop performance is examined in the presence of various constraint scenarios. It is shown that neglecting the above physical constraints, simultaneously or in part, leads to impractical and invalid results and may affect system security, reliability and integrity. Taking into account the advantages of GA, together with a more complete dynamic model, provides a flexible and more realistic AGC system in comparison with existing conventional schemes.

  17. Application of GA optimization for automatic generation control design in an interconnected power system

    Energy Technology Data Exchange (ETDEWEB)

    Golpira, H., E-mail: hemin.golpira@uok.ac.i [Department of Electrical and Computer Engineering, University of Kurdistan, Sanandaj, PO Box 416, Kurdistan (Iran, Islamic Republic of); Bevrani, H. [Department of Electrical and Computer Engineering, University of Kurdistan, Sanandaj, PO Box 416, Kurdistan (Iran, Islamic Republic of); Golpira, H. [Department of Industrial Engineering, Islamic Azad University, Sanandaj Branch, PO Box 618, Kurdistan (Iran, Islamic Republic of)

    2011-05-15

    Highlights: → A realistic model for automatic generation control (AGC) design is proposed. → The model considers GRC, speed governor dead band, filters and time delay. → The model provides an accurate basis for digital simulations. -- Abstract: This paper addresses a realistic model for automatic generation control (AGC) design in an interconnected power system. The proposed scheme considers the generation rate constraint (GRC), dead band, and time delay imposed on the power system by the governor-turbine, filters, thermodynamic process, and communication channels. The simplicity of structure and acceptable response of the well-known integral controller make it attractive for the power system AGC design problem. A genetic algorithm (GA) is used to compute the decentralized control parameters to achieve an optimum operating point. A 3-control-area power system is considered as a test system, and the closed-loop performance is examined in the presence of various constraint scenarios. It is shown that neglecting the above physical constraints, simultaneously or in part, leads to impractical and invalid results and may affect system security, reliability and integrity. Taking into account the advantages of GA, together with a more complete dynamic model, provides a flexible and more realistic AGC system in comparison with existing conventional schemes.
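As a rough illustration of the GA-based gain tuning both records describe, a micro genetic algorithm can search for an integral gain that minimizes the squared frequency error of a first-order surrogate model. The model, bounds and GA operators here are deliberate simplifications, not the paper's.

```python
import random

def cost(Ki, D=1.0, M=10.0, dPl=0.1, dt=0.01, T=30.0):
    """Integral of squared frequency error for a toy surrogate:
    M * df/dt = -D*f - Ki*integral(f) - dPl."""
    f = e = J = 0.0
    for _ in range(int(T / dt)):
        e += f * dt
        f += (-D * f - Ki * e - dPl) / M * dt
        J += f * f * dt
    return J

def ga_minimize(fn, lo, hi, pop=20, gens=30, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        xs.sort(key=fn)                            # rank by fitness
        elite = xs[:pop // 2]                      # truncation selection
        kids = []
        while len(elite) + len(kids) < pop:
            a, b = rng.sample(elite, 2)
            c = 0.5 * (a + b)                      # arithmetic crossover
            c += rng.gauss(0, 0.1 * (hi - lo))     # Gaussian mutation
            kids.append(min(hi, max(lo, c)))
        xs = elite + kids
    return min(xs, key=fn)

best = ga_minimize(cost, 0.1, 5.0)
print(round(best, 3), round(cost(best), 5))
```

The paper's fitness function would instead penalize frequency and tie-line deviations of the full constrained 3-area model.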

  18. Reliable Mining of Automatically Generated Test Cases from Software Requirements Specification (SRS)

    CERN Document Server

    Raamesh, Lilly

    2010-01-01

    Writing requirements is a two-way process. In this paper we classify Functional Requirements (FR) and Non-Functional Requirements (NFR) statements from Software Requirements Specification (SRS) documents. These are systematically transformed into state charts considering all relevant information. The paper outlines how test cases can be automatically generated from these state charts: applying the states yields the different test cases as solutions to a planning problem. The test cases can be used for automated or manual software testing at the system level. The paper also presents a method for test suite reduction using mining methods, thereby facilitating mining and knowledge extraction from test cases.

  19. Automatic Generation of Mashups for Personalized Commerce in Digital TV by Semantic Reasoning

    Science.gov (United States)

    Blanco-Fernández, Yolanda; López-Nores, Martín; Pazos-Arias, José J.; Martín-Vicente, Manuela I.

    The evolution of information technologies is consolidating recommender systems as essential tools in e-commerce. To date, these systems have focused on discovering the items that best match the preferences, interests and needs of individual users, and ultimately listing those items by decreasing relevance in menus. In this paper, we propose extending the current scope of recommender systems to better support trading activities, by automatically generating interactive applications that provide users with personalized commercial functionalities related to the selected items. We explore this idea in the context of Digital TV advertising, with a system that brings together semantic reasoning techniques and new architectural solutions for web services and mashups.

  20. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.
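A minimal version of the locate-and-extract step for tabular HTML data can be written with the standard library alone (the paper's algorithms are considerably richer; the sample page below is invented):

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collects the text of <td>/<th> cells row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row is not None:
            self._row.append(data.strip())

page = """<table>
<tr><th>Product</th><th>Price</th></tr>
<tr><td>Shrimp</td><td>12.5</td></tr>
</table>"""
p = TableExtractor()
p.feed(page)
print(p.rows)  # -> [['Product', 'Price'], ['Shrimp', '12.5']]
```

Filtering (e.g. keeping only rows whose cells parse as product/price pairs) would then operate on `p.rows`.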

  1. Design and construction of a graphical interface for automatic generation of simulation code GEANT4

    International Nuclear Information System (INIS)

    This work was carried out as an engineering studies final project at the center of nuclear sciences and technologies in Sidi Thabet. The project involves designing and developing a system based on a graphical user interface that allows automatic code generation for simulation under the GEANT4 engine. The system aims to facilitate the use of GEANT4 by scientists who are not necessarily experts in this engine, and to be usable in different areas: research, industry and education. The implementation of this project uses the Root library and several languages such as XML and XSL. (Author). 5 refs

  2. Automatic Generation of Human-like Route Descriptions: A Corpus-driven Approach

    Directory of Open Access Journals (Sweden)

    Rafael Teles

    2013-11-01

    Full Text Available Most Web applications combine different services, features and contents to enable the creation of new features and services. Such systems are called mashups. One of the most popular kinds of mashup is the location mashup, which uses geographic data to provide functionalities to users. RotaCerta is a location system that uses Google Maps and performs Natural Language Generation to provide textual descriptions of routes between two locations. The great advantage of RotaCerta is its use of points of interest (POIs) to describe routes; POIs help the user understand and assimilate the route. However, RotaCerta suffers from a severe limitation: the need to manually update its POI dataset. Such work is exhausting, costly and greatly limits its use. Another shortcoming is the poor linguistic variability of the texts it provides. In this work, we propose a mechanism to enable automatic feeding of POIs and a corpus-driven approach to enhance the linguistic variability of location mashups such as RotaCerta. We adopt both manual and automatic generation of new textual templates. To assess the quality of the route descriptions, we use TF-IDF and cosine distance to calculate the similarity between descriptions of routes created by human volunteers and descriptions generated by the proposed approach. Route generation examples have been produced for three different Brazilian cities. We also show that the text generated from the new template base is more similar to the texts used by people when describing routes than those of Google Maps.
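The TF-IDF/cosine evaluation metric mentioned above can be sketched in a few lines; the route descriptions below are invented examples, not corpus data, and the smoothed IDF variant is an assumption.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Smoothed TF-IDF vectors, one dict per document."""
    toks = [d.lower().split() for d in docs]
    df = Counter(t for doc in toks for t in set(doc))
    n = len(docs)
    vecs = []
    for doc in toks:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * math.log((1 + n) / (1 + df[t]))
                     for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

human = "turn right at the bakery then follow the avenue"
generated = "turn right at the bakery and continue along the avenue"
other = "take the subway two stops and exit near the museum"
v = tfidf_vectors([human, generated, other])
print(cosine(v[0], v[1]), cosine(v[0], v[2]))
```

A generated description that reuses the human description's landmarks and phrasing scores a noticeably higher cosine than an unrelated route.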

  3. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2014-12-01

    Full Text Available Intelligent seamline selection for image mosaicking is an area of active research in the fields of massive data processing, computer vision, photogrammetry and remote sensing. In mosaicking applications for digital orthophoto maps (DOMs, the visual transition in mosaics is mainly caused by differences in positioning accuracy, image tone and relief displacement of high ground objects between overlapping DOMs. Among these three factors, relief displacement, which prevents the seamless mosaicking of images, is relatively more difficult to address. To minimize visual discontinuities, many optimization algorithms have been studied for the automatic selection of seamlines to avoid high ground objects. Thus, a new automatic seamline selection algorithm using a digital surface model (DSM is proposed. The main idea of this algorithm is to guide a seamline toward a low area on the basis of the elevation information in a DSM. Given that the elevation of a DSM is not completely synchronous with a DOM, a new model, called the orthoimage elevation synchronous model (OESM, is derived and introduced. OESM can accurately reflect the elevation information for each DOM unit. Through the morphological processing of the OESM data in the overlapping area, an initial path network is obtained for seamline selection. Subsequently, a cost function is defined on the basis of several measurements, and Dijkstra’s algorithm is adopted to determine the least-cost path from the initial network. Finally, the proposed algorithm is employed for automatic seamline network construction; the effective mosaic polygon of each image is determined, and a seamless mosaic is generated. The experiments with three different datasets indicate that the proposed method meets the requirements for seamline network construction. In comparative trials, the generated seamlines pass through fewer ground objects with low time consumption.
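The least-cost-path step can be illustrated with Dijkstra's algorithm on a toy "elevation" grid, where the cost of entering a cell is its height, so the path is steered toward low areas much as the OESM guides the seamline (grid values and neighborhood are illustrative):

```python
import heapq

def dijkstra_path(grid, start, goal):
    """Least-cost 4-connected path; entering a cell costs its height."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                  # walk predecessors back
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = [[0, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
path = dijkstra_path(grid, (0, 0), (0, 2))
print(path)  # detours around the high (9) cells
```

The seamline application replaces grid heights with the paper's multi-term cost function and restricts candidate moves to the initial path network.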

  4. Semi-Automatic Mapping Generation for the DBpedia Information Extraction Framework

    Directory of Open Access Journals (Sweden)

    Arup Sarkar, Ujjal Marjit, Utpal Biswas

    2013-03-01

    Full Text Available DBpedia is one of the best-known live projects from the Semantic Web. It is like a mirror version of the Wikipedia site in the Semantic Web. It publishes the information collected from Wikipedia, but only the part that is relevant to the Semantic Web. Collecting information for the Semantic Web from Wikipedia amounts to the extraction of structured data. DBpedia normally does this using a specially designed framework called the DBpedia Information Extraction Framework. This extraction framework does its work through the evaluation of similar properties from the DBpedia Ontology and the Wikipedia templates, a step known as DBpedia mapping. At present, most of the mapping jobs are done completely manually. In this paper a new framework is introduced that addresses the issues related to template-to-ontology mapping. A semi-automatic mapping tool for the DBpedia project is proposed, with the capability of automatic suggestion generation so that end users can identify similar ontology and template properties. The proposed framework is useful since, after selection of similar properties, the code necessary to maintain the mapping between ontology and template is generated automatically.

  5. Automatic Generation Control in Multi Area Interconnected Power System by using HVDC Links

    Directory of Open Access Journals (Sweden)

    Yogendra Arya

    2012-01-01

    Full Text Available This paper investigates the effects of an HVDC link in parallel with an HVAC link on the automatic generation control (AGC) problem for a multi-area power system, taking into consideration system parameter variations. A fuzzy logic controller is proposed for a four-area power system interconnected via a parallel HVAC/HVDC transmission link, also referred to as an asynchronous tie-line. The linear model of the HVAC/HVDC link is developed and the system responses to a sudden load change are studied. The simulation studies are carried out for a four-area interconnected thermal power system. A suitable solution for the automatic generation control problem of the four-area electrical power system is obtained by improving the dynamic performance of the power system under study. Robustness of the controller is also checked by varying parameters. Simulation results indicate that the scheme works well. The dynamic analyses have been done with and without the HVDC link using a fuzzy logic controller in Matlab-Simulink. Further, a comparison between the two is presented, and it is shown that the performance of the proposed scheme is superior in terms of overshoot and settling time.

  6. GRACE/SUSY Automatic Generation of Tree Amplitudes in the Minimal Supersymmetric Standard Model

    CERN Document Server

    Fujimoto, J

    2003-01-01

    GRACE/SUSY is a program package for generating the tree-level amplitude and evaluating the corresponding cross section of processes of the minimal supersymmetric extension of the Standard Model (MSSM). The Higgs potential adopted in the system, however, is assumed to have the more general form indicated by the two-Higgs-doublet model. This system is an extension of GRACE for the Standard Model (SM) of the electroweak and strong interactions. For a given MSSM process, the Feynman graphs and amplitudes at tree level are automatically created. Monte Carlo phase-space integration by means of BASES gives the total and differential cross sections. When combined with SPRING, an event generator, the program package provides a simulation of SUSY particle production.

  7. Hybrid Chaotic Particle Swarm Optimization Based Gains For Deregulated Automatic Generation Control

    Directory of Open Access Journals (Sweden)

    Cheshta Jain Dr. H. K. Verma

    2011-12-01

    Full Text Available Generation control is an important objective of power system operation. In modern power systems, traditional automatic generation control (AGC) is modified by incorporating the effect of bilateral contracts. This paper investigates the application of chaotic particle swarm optimization (CPSO) for optimized operation of a restructured AGC system. To obtain optimum controller gains, the application of an adaptive inertia weight factor and constriction factors is proposed to improve the performance of the particle swarm optimization (PSO) algorithm. It is also observed that chaos mapping using a logistic map sequence increases the convergence rate of the traditional PSO algorithm. The hybrid method presented in this paper gives globally optimal controller gains with a significant improvement in convergence rate over the basic PSO algorithm. The effectiveness and efficiency of the proposed algorithm have been tested on a two-area restructured system.

  8. Differential Evolution for Optimization of PID Gains in Automatic Generation Control

    Directory of Open Access Journals (Sweden)

    Dr. L.D. Arya,

    2011-05-01

    Full Text Available Automatic generation control (AGC) of a multi-area power system provides power demand signals for AGC power generators to control frequency and tie-line power flow under large load changes or other disturbances. The occurrence of a large megawatt imbalance causes large frequency deviations from the nominal value, which may be a threat to secure operation of the power system. To avoid such situations, emergency control to maintain the system frequency using a differential evolution (DE) based proportional-integral-derivative (PID) controller is proposed in this paper. DE is one of the most powerful stochastic real-parameter optimization methods in current use. DE-based optimum gains give a better transient response of frequency and tie-line power changes compared to particle swarm optimization based gains.
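A compact DE/rand/1/bin sketch shows the optimization pattern. This is a simplified scheme under stated assumptions: the plant is a toy first-order system rather than the paper's power-system model, and the derivative gain is omitted.

```python
import random

def ise(gains, tau=2.0, dt=0.01, T=20.0):
    """Integral of squared error: first-order plant dy/dt = (u - y)/tau
    tracking a unit step under PI control (derivative term omitted)."""
    Kp, Ki = gains
    y = i = J = 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y
        i += e * dt
        u = Kp * e + Ki * i
        y += (u - y) / tau * dt
        J += e * e * dt
    return J

def de_minimize(fn, bounds, np_=15, F=0.7, CR=0.9, gens=40, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [max(lo, min(hi, a[k] + F * (b[k] - c[k])))
                     if rng.random() < CR else pop[i][k]
                     for k, (lo, hi) in enumerate(bounds)]
            if fn(trial) <= fn(pop[i]):    # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=fn)

best = de_minimize(ise, [(0.0, 10.0), (0.0, 5.0)])
print([round(g, 2) for g in best], round(ise(best), 4))
```

DE's mutation scale adapts to the population spread, which is one reason it often converges faster than basic PSO on gain-tuning problems like this.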

  9. Automatic deodorizing system for waste water from radioisotope facilities using an ozone generator

    International Nuclear Information System (INIS)

    We applied an ozone generator to sterilize and deodorize the waste water from radioisotope facilities. A small tank connected to the generator is placed outside the previously established drainage facility, so as not to oxidize the other apparatus. The waste water is drained 1 m3 at a time from the tank of the drainage facility, treated with ozone and discharged to the sewer. All steps proceed automatically once the draining work is started remotely from the office. The waste water was examined after ozone treatment for 0 (original), 0.5, 1.0, 1.5 and 2.0 h. In the original waste water, the count of coliform groups varied with every repeated examination, probably depending on the colibacilli used in experiments; hydrogen sulfide, biochemical oxygen demand and the offensive odor increased with increasing coliform groups. The ozone treatment markedly decreased hydrogen sulfide and the offensive odor, and decreased coliform groups when the original water was rich in coliforms. (author)

  10. An automatic granular structure generation and finite element analysis of heterogeneous semi-solid materials

    Science.gov (United States)

    Sharifi, Hamid; Larouche, Daniel

    2015-09-01

    The quality of cast metal products depends on the capacity of the semi-solid metal to sustain the stresses generated during casting. Predicting the evolution of these stresses with accuracy over the solidification interval should be highly helpful to avoid the formation of defects like hot tearing. This task is, however, very difficult because of the heterogeneous nature of the material. In this paper, we propose to evaluate the mechanical behaviour of a metal during solidification using a mesh generation technique for the heterogeneous semi-solid material suitable for finite element analysis at the microscopic level. This is done on a two-dimensional (2D) domain in which the granular structure of the solid phase is generated surrounded by an intergranular and interdendritic liquid phase. Some basic solid grains are first constructed and projected onto the 2D domain with random orientations and scale factors. Depending on their orientation, the basic grains are combined to produce larger grains or separated by a liquid film. Different basic grain shapes can produce different granular structures of the mushy zone. As a result, using this automatic grain generation procedure, we can investigate the effect of grain shapes and sizes on the thermo-mechanical behaviour of the semi-solid material. The granular models are automatically converted to finite element meshes. The solid grains and the liquid phase are meshed properly using quadrilateral elements. This method has been used to simulate the microstructure of a binary aluminium-copper alloy (Al-5.8 wt% Cu) at a solid fraction of 0.92. Using the finite element method and the Mie-Grüneisen equation of state for the liquid phase, the transient mechanical behaviour of the mushy zone under tensile loading has been investigated. The stress distribution and the bridges formed during tensile loading have been detected.

  11. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling.

    Science.gov (United States)

    Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira

    2015-01-01

    Clinical and experimental studies involving human hearts have certain limitations, so methods such as computer simulation can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors tailor PDE solutions and their corresponding program code to specific problems, and boundary condition or parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication, which can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis, and the result of the dependency analysis is then used to generate the program code, in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data. We conclude that the proposed program code generator can be used to
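The "replacement scheme" idea, substituting a partial differential term with its numerical solution equation and handling a no-flux boundary by mirroring, can be sketched for a 1D diffusion term. This is an assumption-laden toy, not the generator's output.

```python
def step(V, D=1.0, dx=1.0, dt=0.1):
    """One explicit Euler step of dV/dt = D * d2V/dx2, where the second
    derivative has been replaced by its finite-difference form and the
    no-flux (Neumann) boundary is handled by mirroring the neighbor."""
    n = len(V)
    out = []
    for i in range(n):
        left = V[i - 1] if i > 0 else V[i + 1]       # mirror at x = 0
        right = V[i + 1] if i < n - 1 else V[i - 1]  # mirror at x = L
        out.append(V[i] + dt * D * (left - 2 * V[i] + right) / dx ** 2)
    return out

V = [1.0] + [0.0] * 9          # initial pulse at the left boundary
for _ in range(200):
    V = step(V)
print([round(x, 3) for x in V])  # the pulse has diffused rightward
```

A generator in the spirit of the paper would emit exactly this kind of substituted update expression, once per grid node, after its dependency analysis.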

  12. Automatic evaluation and data generation for analytical chemistry instrumental analysis exercises

    Directory of Open Access Journals (Sweden)

    Arsenio Muñoz de la Peña

    2014-01-01

    Full Text Available In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for the design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and evaluated immediately. The proposed system provides each student with a different set of experimental data, generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to provide students with a set of practical problems to complement their regular laboratory work, along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS), an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first exercise involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests: a comparison test of the mean and a paired t-test.
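The first exercise's pattern, per-student randomized data plus automatic recomputation of the regression quality parameters, can be sketched as follows (all ranges, names and the seeding convention are hypothetical):

```python
import random

def make_calibration(seed, slope_range=(0.8, 1.2), noise=0.02, n=6):
    """One randomized calibration dataset per student seed, constrained
    to a plausible slope, intercept and noise level."""
    rng = random.Random(seed)
    m = rng.uniform(*slope_range)
    b = rng.uniform(0.0, 0.05)
    xs = [i * 0.2 for i in range(1, n + 1)]          # concentrations
    ys = [m * x + b + rng.gauss(0, noise) for x in xs]
    return xs, ys

def linear_fit(xs, ys):
    """Least-squares slope, intercept and R^2 -- the quality parameters
    the grader recomputes to mark the student's answer."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    b = my - m * mx
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return m, b, 1 - ss_res / ss_tot

xs, ys = make_calibration(seed=42)
m, b, r2 = linear_fit(xs, ys)
print(round(m, 3), round(b, 3), round(r2, 4))
```

Because each student's seed yields a different dataset, the grader can compare a submitted slope/intercept against the values it recomputes itself.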

  13. Automatic generation of 3D motifs for classification of protein binding sites

    Directory of Open Access Journals (Sweden)

    Herzyk Pawel

    2007-08-01

    Full Text Available Abstract Background Since many of the new protein structures delivered by high-throughput processes do not have any known function, there is a need for structure-based prediction of protein function. Protein 3D structures can be clustered according to their fold or secondary structures to produce classes of some functional significance. A recent alternative has been to detect specific 3D motifs, which are often associated with active sites. Unfortunately, there are very few known 3D motifs, usually the result of a manual process, compared to the number of sequential motifs already known. In this paper, we report a method to automatically generate 3D motifs of protein structure binding sites based on consensus atom positions and evaluate it on a set of adenine-based ligands. Results Our new approach was validated by automatically generating 3D patterns for the main adenine-based ligands, i.e. AMP, ADP and ATP. Out of the 18 detected patterns, only one, the ADP4 pattern, is not associated with well-defined structural patterns. Moreover, most of the patterns could be classified as binding site 3D motifs. Literature research revealed that the ADP4 pattern actually corresponds to structural features which show complex evolutionary links between ligases and transferases. Therefore, all of the generated patterns prove to be meaningful. Each pattern was used to query all PDB proteins which bind either purine-based or guanine-based ligands, in order to evaluate the classification and annotation properties of the pattern. Overall, our 3D patterns matched 31% of proteins with adenine-based ligands and 95.5% of them were classified correctly. Conclusion A new metric has been introduced allowing the classification of proteins according to the similarity of the atomic environment of binding sites, and a methodology has been developed to automatically produce 3D patterns from that classification. A study of proteins binding adenine-based ligands showed that

  14. An immunochromatographic biosensor combined with a water-swellable polymer for automatic signal generation or amplification.

    Science.gov (United States)

    Kim, Kahee; Joung, Hyou-Arm; Han, Gyeo-Re; Kim, Min-Gon

    2016-11-15

    An immunochromatographic assay (ICA) strip is one of the most widely used platforms in the field of point-of-care biosensors for the detection of various analytes in a simple, fast, and inexpensive manner. Several recent approaches to sequential reactions in ICA platforms have improved their usability, sensitivity, and versatility. In this study, a new, simple, and low-cost approach using an automatic sequential-reaction ICA strip is described. The automatic switching of a reagent pad from separation to attachment to the test membrane was achieved using a water-swellable polymer. The reagent pad was dried with an enzyme substrate for signal generation or with signal-enhancing materials. The strip design and system operation were confirmed by characterization of the raw materials and flow analysis. We demonstrated the operation of the proposed sensor using various chemical reaction-based assays, including metal-ion amplification, enzyme-colorimetric reaction, and enzyme-catalyzed chemiluminescence. Furthermore, by employing C-reactive protein as a model, we successfully demonstrated that the new water-swellable polymer-based ICA sensor can be utilized to detect biologically relevant analytes in human serum. PMID:27203463

  15. A Development Process for Enterprise Information Systems Based on Automatic Generation of the Components

    Directory of Open Access Journals (Sweden)

    Adrian ALEXANDRESCU

    2008-01-01

    Full Text Available This paper contains some ideas concerning Enterprise Information Systems (EIS) development. It combines known elements from the software engineering domain with original elements that the author has conceived and experimented with. The author has pursued two major objectives: to use a simple description for the concepts of an EIS, and to achieve a rapid and reliable EIS development process with minimal cost. The first goal was achieved by defining models that describe the conceptual elements of the EIS domain: entities, events, actions, states and attribute-domains. The second goal is based on a predefined architectural model for the EIS, on predefined analysis and design models for the elements of the domain and, finally, on the automatic generation of the system components. The proposed methods do not depend on a particular programming language or database management system. They are general and may be applied to any combination of such technologies.

  16. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    Science.gov (United States)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, the construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as their limitations (such as low efficiency).

  17. HELAC-Onia: an automatic matrix element generator for heavy quarkonium physics

    CERN Document Server

    Shao, Hua-Sheng

    2013-01-01

    By virtue of the Dyson-Schwinger equations, we upgrade the published code HELAC to be capable of calculating heavy quarkonium helicity amplitudes in the framework of NRQCD factorization; we dub the new code HELAC-Onia. We rewrote the original HELAC so that the new program can calculate helicity amplitudes for the production of multiple P-wave quarkonium states at hadron colliders and electron-positron colliders, by including new P-wave off-shell currents. Therefore, besides its high efficiency in the computation of multi-leg processes within the Standard Model, HELAC-Onia is also numerically stable in dealing with P-wave quarkonia (e.g. $h_{c,b},\chi_{c,b}$) and P-wave color-octet intermediate states. To the best of our knowledge, it is the first general-purpose automatic quarkonium matrix-element generator based on recursion relations on the market.

  18. HELAC-Onia: An automatic matrix element generator for heavy quarkonium physics

    Science.gov (United States)

    Shao, Hua-Sheng

    2013-11-01

    By virtue of the Dyson-Schwinger equations, we upgrade the published code HELAC to be capable of calculating heavy quarkonium helicity amplitudes in the framework of NRQCD factorization; we dub the new code HELAC-Onia. We rewrote the original HELAC so that the new program can calculate helicity amplitudes for the production of multiple P-wave quarkonium states at hadron colliders and electron-positron colliders, by including new P-wave off-shell currents. Therefore, besides its high efficiency in the computation of multi-leg processes within the Standard Model, HELAC-Onia is also numerically stable in dealing with P-wave quarkonia (e.g. h_{c,b}, χ_{c,b}) and P-wave color-octet intermediate states. To the best of our knowledge, it is the first general-purpose automatic quarkonium matrix-element generator based on recursion relations on the market.

  19. Grey wolf optimizer based regulator design for automatic generation control of interconnected power system

    Directory of Open Access Journals (Sweden)

    Esha Gupta

    2016-12-01

    Full Text Available This paper presents an application of the grey wolf optimizer (GWO) to find the parameters of the primary governor loop for successful Automatic Generation Control of a two-area interconnected power system. Two standard objective functions, Integral Square Error (ISE) and Integral Time Absolute Error (ITAE), have been employed to carry out this parameter estimation process. Eigenvalue and dynamic response analyses reveal that the ITAE criterion yields better performance. The regulator performance obtained from GWO is compared with Genetic Algorithm (GA), Particle Swarm Optimization, and Gravitational Search Algorithm based designs. Different types of perturbations and load changes are incorporated in order to establish the efficacy of the obtained design. It is observed that GWO outperforms all three optimization methods. The optimization performance of GWO is also compared with the other algorithms on the basis of standard deviations in the values of parameters and objective functions.
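    The GWO update rule itself is simple enough to sketch. The following minimal Python sketch uses a quadratic stand-in for an ISE-style cost rather than the paper's AGC model, and all parameter values (pack size, iteration count, bounds) are illustrative; it shows the alpha/beta/delta encircling mechanism with the linearly decaying coefficient a:

    ```python
    import random

    def gwo(objective, dim, bounds, n_wolves=12, iters=200, seed=1):
        """Minimal grey wolf optimizer sketch (elitist variant: the three
        leaders are kept unchanged within each generation)."""
        rng = random.Random(seed)
        lo, hi = bounds
        wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
                  for _ in range(n_wolves)]
        for t in range(iters):
            wolves.sort(key=objective)
            alpha, beta, delta = wolves[0], wolves[1], wolves[2]
            a = 2.0 * (1 - t / iters)          # exploration -> exploitation
            for i in range(3, n_wolves):       # update the non-leaders
                new = []
                for d in range(dim):
                    x = 0.0
                    for leader in (alpha, beta, delta):
                        A = a * (2 * rng.random() - 1)
                        C = 2 * rng.random()
                        x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                    new.append(min(hi, max(lo, x / 3.0)))  # clamp to bounds
                wolves[i] = new
        wolves.sort(key=objective)
        return wolves[0]

    # Quadratic stand-in for an ISE-style cost surface, optimum at (1.5, 1.5, 1.5)
    best = gwo(lambda p: sum((v - 1.5) ** 2 for v in p), dim=3, bounds=(-5, 5))
    ```

    In the paper's setting, the objective would instead simulate the two-area AGC model and integrate the squared (or time-weighted absolute) error of the frequency and tie-line power deviations.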

  20. Utilizing Linked Open Data Sources for Automatic Generation of Semantic Metadata

    Science.gov (United States)

    Nummiaho, Antti; Vainikainen, Sari; Melin, Magnus

    In this paper we present an application that can be used to automatically generate semantic metadata for tags given as simple keywords. The application that we have implemented in Java programming language creates the semantic metadata by linking the tags to concepts in different semantic knowledge bases (CrunchBase, DBpedia, Freebase, KOKO, Opencyc, Umbel and/or WordNet). The steps that our application takes in doing so include detecting possible languages, finding spelling suggestions and finding meanings from amongst the proper nouns and common nouns separately. Currently, our application supports English, Finnish and Swedish words, but other languages could be included easily if the required lexical tools (spellcheckers, etc.) are available. The created semantic metadata can be of great use in, e.g., finding and combining similar contents, creating recommendations and targeting advertisements.

  1. Automatic fuzzy rule generation and its application to the navigation control for mobile robot

    International Nuclear Information System (INIS)

    This paper presents an approach to building multi-input, single-output fuzzy models. Such a model is composed of fuzzy implications, and its output is inferred by simplified reasoning. The implications are automatically generated by structure and parameter identification. In the structure identification, the optimal or near-optimal number of fuzzy implications is determined in view of a valid partition of the data set. The parameters defining the implications are identified by a gradient method to minimize mean square errors. Numerical examples are provided to evaluate the feasibility of the proposed approach, which yields a smaller number of fuzzy implications than those achieved previously by other methods. The proposed approach has also been applied to construct a fuzzy model for the navigation control of a mobile robot. The validity of the resultant model is demonstrated by experimentation. (author)
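    The simplified reasoning mentioned in the abstract can be sketched as a membership-weighted average of rule consequents. This is a generic Takagi-Sugeno-style illustration with invented rules and parameters, not the authors' identified model:

    ```python
    import math

    def fuzzy_infer(x, rules):
        """Simplified fuzzy reasoning sketch: each implication pairs a
        Gaussian membership over the inputs with a crisp consequent; the
        output is the membership-weighted average of the consequents."""
        weights = [math.exp(-sum((xi - c) ** 2 for xi, c in zip(x, centers))
                            / (2 * sigma ** 2))
                   for centers, sigma, _ in rules]
        return (sum(w * y for w, (_, _, y) in zip(weights, rules))
                / sum(weights))

    # Two hypothetical implications for a one-input model:
    # "if x is near -1 then y = 0" and "if x is near +1 then y = 1"
    rules = [((-1.0,), 0.5, 0.0), ((1.0,), 0.5, 1.0)]
    mid = fuzzy_infer((0.0,), rules)   # equidistant from both rules -> 0.5
    ```

    The gradient step of the parameter identification would then adjust the centers, widths, and consequents to minimize the mean square error over training data.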

  2. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences produced by a finite automaton. Although they are not random, they may look random. They are complicated, in the sense of not being ultimately periodic, and it may not be easy to name the rule by which a sequence is generated; nevertheless, such a rule always exists. The concept of automatic sequences has applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.
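    As a concrete example (a standard one, not taken from the book's text), the Thue-Morse sequence is 2-automatic: a two-state automaton that reads the binary digits of n outputs term n of the sequence.

    ```python
    # The Thue-Morse sequence is 2-automatic: a two-state automaton reading
    # the base-2 digits of n computes term n. The state tracks the parity
    # of the 1-bits seen so far; the output map is the identity.
    def thue_morse(n):
        state = 0
        for bit in bin(n)[2:]:   # feed binary digits to the automaton
            state ^= int(bit)    # transition: flip state on digit 1
        return state

    prefix = [thue_morse(n) for n in range(8)]
    # -> [0, 1, 1, 0, 1, 0, 0, 1]
    ```

    The sequence is not ultimately periodic, yet the generating rule is as simple as an automaton can be, which is exactly the tension the book describes.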

  3. Automatic generation of endocardial surface meshes with 1-to-1 correspondence from cine-MR images

    Science.gov (United States)

    Su, Yi; Teo, S.-K.; Lim, C. W.; Zhong, L.; Tan, R. S.

    2015-03-01

    In this work, we develop an automatic method to generate a set of 4D 1-to-1 corresponding surface meshes of the left ventricle (LV) endocardial surface which are motion registered over the whole cardiac cycle. These 4D meshes have 1-to-1 point correspondence over the entire set and are suitable for advanced computational processing, such as shape analysis, motion analysis and finite element modelling. The inputs to the method are the 3D LV endocardial surface meshes of the different frames/phases of the cardiac cycle. Each of these meshes is reconstructed independently from border-delineated MR images, and they have no correspondence in terms of number of vertices/points or mesh connectivity. To generate point correspondence, the LV mesh of the first frame is used as a template to be matched to the shape of the meshes in the subsequent phases. There are two stages in the mesh correspondence process: (1) a coarse matching stage and (2) a fine matching stage. In the coarse matching stage, an initial rough matching between the template and the target is achieved using a radial basis function (RBF) morphing process. The feature points on the template and target meshes are automatically identified using a 16-segment nomenclature of the LV. In the fine matching stage, a progressive mesh projection process is used to conform the rough estimate to fit the exact shape of the target. In addition, an optimization-based smoothing process is used to achieve superior mesh quality and continuous point motion.
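    The coarse RBF morphing step can be sketched as scattered-data interpolation of landmark displacements. This is a hypothetical minimal version with a Gaussian kernel and toy 2D landmarks, not the authors' implementation:

    ```python
    import numpy as np

    def rbf_morph(src_landmarks, dst_landmarks, points, sigma=1.0):
        """RBF morphing sketch: a Gaussian radial basis function maps
        template landmarks onto target landmarks and carries every other
        mesh vertex along with the interpolated deformation field."""
        # Kernel matrix between source landmarks (small ridge for stability)
        d2 = ((src_landmarks[:, None, :] - src_landmarks[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * sigma ** 2))
        W = np.linalg.solve(K + 1e-9 * np.eye(len(K)),
                            dst_landmarks - src_landmarks)
        # Evaluate the deformation at arbitrary points
        d2p = ((points[:, None, :] - src_landmarks[None, :, :]) ** 2).sum(-1)
        return points + np.exp(-d2p / (2 * sigma ** 2)) @ W

    # Toy 2D landmarks standing in for the 16-segment feature points
    src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
    moved = rbf_morph(src, dst, src)   # landmarks map (almost) exactly onto dst
    ```

    Any non-landmark vertex passed as `points` is deformed smoothly in between, which is what gives the coarse stage its rough shape match before the fine projection stage takes over.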

  4. A QUANTIFIER-ELIMINATION BASED HEURISTIC FOR AUTOMATICALLY GENERATING INDUCTIVE ASSERTIONS FOR PROGRAMS

    Institute of Scientific and Technical Information of China (English)

    Deepak KAPUR

    2006-01-01

    A method using quantifier elimination is proposed for automatically generating program invariants/inductive assertions. Given a program, inductive assertions, hypothesized as parameterized formulas in a theory, are associated with program locations. Parameters in inductive assertions are discovered by generating constraints on the parameters, by ensuring that an inductive assertion is indeed preserved by all execution paths leading to the associated location of the program. The method can be used to discover loop invariants: properties of variables that remain invariant at the entry of a loop. The parameterized formula can be successively refined by considering execution paths one by one; heuristics can be developed for determining the order in which the paths are considered. Initialization of program variables, as well as the precondition and postcondition, if available, can also be used to further refine the hypothesized invariant; the method does not, however, depend on the availability of the precondition and postcondition of a program. Constraints on parameters generated in this way are solved for possible values of the parameters. If no solution is possible, an invariant of the hypothesized form is not likely to exist for the loop under the assumptions/approximations made to generate the associated verification condition. Otherwise, if the parametric constraints are solvable, then under certain conditions on the methods for generating these constraints, the strongest possible invariant of the hypothesized form can be generated from the most general solutions of the parametric constraints. The approach is illustrated using the logical languages of conjunctions of polynomial equations as well as Presburger arithmetic for expressing assertions.
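    As a toy illustration of the parameter-constraint idea (not the paper's procedure), consider the loop i = 0; x = 0; while i < n: i += 1; x += 2 with the hypothesized invariant x = a*i + b. Preservation along the body and initialization yield linear constraints on (a, b) that can be solved directly:

    ```python
    import numpy as np

    # Template invariant: x == a*i + b, unknown parameters (a, b).
    # Preservation along the body (i' = i+1, x' = x+2) requires, for all i,
    #     a*(i+1) + b == (a*i + b) + 2,
    # which, matching coefficients of i and constants, reduces to a == 2.
    # Initialization (i = 0, x = 0) requires b == 0.
    A = np.array([[1.0, 0.0],    # row encoding the constraint a == 2
                  [0.0, 1.0]])   # row encoding the constraint b == 0
    rhs = np.array([2.0, 0.0])
    a, b = np.linalg.solve(A, rhs)

    # Sanity check: the discovered invariant x == 2*i holds on a concrete run.
    i, x = 0, 0
    for _ in range(10):
        assert x == a * i + b
        i, x = i + 1, x + 2
    ```

    In the paper's setting, the constraints come from verification conditions and are discharged by quantifier elimination rather than by a hand-derived linear solve, but the shape of the problem, parameters constrained so the template is inductive, is the same.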

  5. Tra-la-Lyrics 2.0: Automatic Generation of Song Lyrics on a Semantic Domain

    Science.gov (United States)

    Gonçalo Oliveira, Hugo

    2015-12-01

    Tra-la-Lyrics is a system that generates song lyrics automatically. In its original version, the main focus was to produce text where stresses matched the rhythm of given melodies. There were no concerns on whether the text made sense or if the selected words shared some kind of semantic association. In this article, we describe the development of a new version of Tra-la-Lyrics, where text is generated on a semantic domain, defined by one or more seed words. This effort involved the integration of the original rhythm module of Tra-la-Lyrics in PoeTryMe, a generic platform that generates poetry with semantically coherent sentences. To measure our progress, the rhythm, the rhymes, and the semantic coherence in lyrics produced by the original Tra-la-Lyrics were analysed and compared with lyrics produced by the new instantiation of this system, dubbed Tra-la-Lyrics 2.0. The analysis showed that, in the lyrics by the new system, words have higher semantic association among them and with the given seeds, while the rhythm is still matched and rhymes are present. The previous analysis was complemented with a crowdsourced evaluation, where contributors answered a survey about relevant features of lyrics produced by the previous and the current versions of Tra-la-Lyrics. Though tight, the survey results confirmed the improvements of the lyrics by Tra-la-Lyrics 2.0.

  6. Atlas-Based Automatic Generation of Subject-Specific Finite Element Tongue Meshes.

    Science.gov (United States)

    Bijar, Ahmad; Rohan, Pierre-Yves; Perrier, Pascal; Payan, Yohan

    2016-01-01

    Generation of subject-specific 3D finite element (FE) models requires the processing of numerous medical images in order to precisely extract geometrical information about subject-specific anatomy. This processing remains extremely challenging. To overcome this difficulty, we present an automatic atlas-based method that generates subject-specific FE meshes via a 3D registration guided by Magnetic Resonance images. The method extracts a 3D transformation by registering the atlas' volume image to the subject's, establishing a one-to-one correspondence between the two volumes. The 3D transformation field deforms the atlas' mesh to generate the subject-specific FE mesh. To preserve the quality of the subject-specific mesh, a diffeomorphic non-rigid registration based on B-spline free-form deformations is used, which guarantees a non-folding and one-to-one transformation. Two evaluations of the method are provided. First, a publicly available CT database is used to assess the capability to accurately capture the complexity of each subject's lung geometry. Second, FE tongue meshes are generated for two healthy volunteers and two patients suffering from tongue cancer using MR images. It is shown that the method generates an appropriate representation of the subject-specific geometry while preserving the quality of the FE meshes for subsequent FE analysis. To demonstrate the importance of our method in a clinical context, a subject-specific mesh is used to simulate the tongue's biomechanical response to the activation of an important tongue muscle, before and after cancer surgery. PMID:26577253

  7. Comparison of Intensity-Modulated Radiotherapy Planning Based on Manual and Automatically Generated Contours Using Deformable Image Registration in Four-Dimensional Computed Tomography of Lung Cancer Patients

    International Nuclear Information System (INIS)

    Purpose: To evaluate the implications of differences between contours drawn manually and contours generated automatically by deformable image registration for four-dimensional (4D) treatment planning. Methods and Materials: In 12 lung cancer patients intensity-modulated radiotherapy (IMRT) planning was performed for both manual contours and automatically generated ('auto') contours in mid and peak expiration of 4D computed tomography scans, with the manual contours in peak inspiration serving as the reference for the displacement vector fields. Manual and auto plans were analyzed with respect to their coverage of the manual contours, which were assumed to represent the anatomically correct volumes. Results: Auto contours were on average larger than manual contours by up to 9%. Objective scores, D2% and D98% of the planning target volume, homogeneity and conformity indices, and coverage of normal tissue structures (lungs, heart, esophagus, spinal cord) at defined dose levels were not significantly different between plans (p = 0.22-0.94). Differences were statistically insignificant for the generalized equivalent uniform dose of the planning target volume (p = 0.19-0.94) and normal tissue complication probabilities for lung and esophagus (p = 0.13-0.47). Dosimetric differences >2% or >1 Gy were more frequent in patients with auto/manual volume differences ≥10% (p = 0.04). Conclusions: The applied deformable image registration algorithm produces clinically plausible auto contours in the majority of structures. At this stage clinical supervision of the auto contouring process is required, and manual interventions may become necessary. Before routine use, further investigations are required, particularly to reduce imaging artifacts

  8. Automatic generation and verification of railway interlocking control tables using FSM and NuSMV

    Directory of Open Access Journals (Sweden)

    Mohammad B. YAZDI

    2009-01-01

    Full Text Available Due to their important role in providing safe conditions for train movements, railway interlocking systems are considered safety-critical systems. The reliability, safety and integrity of these systems rely on the reliability and integrity of all stages in their lifecycle, including design, verification, manufacture, test, operation and maintenance. In this paper, the automatic generation and verification of interlocking control tables, one of the most important stages in the interlocking design process, is addressed by the safety-critical research group in the School of Railway Engineering (SRE). Three subsystems are introduced: a graphical signalling layout planner, a control table generator and a control table verifier. Using the NuSMV model checker, the control table verifier analyses the contents of the control table against the safe train-movement conditions and checks for any conflicting settings in the table. This includes settings for conflicting routes, signals and points, and also settings for route isolation and single and multiple overlap situations. The last two, route isolation and multiple overlap situations, are new outcomes of this work compared with recently published work on the subject.

  9. Performance Evaluation of Antlion Optimizer Based Regulator in Automatic Generation Control of Interconnected Power System

    Directory of Open Access Journals (Sweden)

    Esha Gupta

    2016-01-01

    Full Text Available This paper presents an application of the recently introduced Antlion Optimizer (ALO) to find the parameters of the primary governor loop of thermal generators for successful Automatic Generation Control (AGC) of a two-area interconnected power system. Two standard objective functions, Integral Square Error (ISE) and Integral Time Absolute Error (ITAE), have been employed to carry out this parameter estimation process. The problem is transformed into an optimization problem to obtain the integral gains, speed regulation, and frequency sensitivity coefficient for both areas. The regulator performance obtained from ALO is compared with Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Gravitational Search Algorithm (GSA) based regulators. Different types of perturbations and load changes are incorporated to establish the efficacy of the obtained design. It is observed that ALO outperforms all three optimization methods for this real problem. The optimization performance of ALO is also compared with the other algorithms on the basis of standard deviations in the values of parameters and objective functions.

  10. Automatic controller for steam generator water level during low power operation

    International Nuclear Information System (INIS)

    This research proposes a new controller that ensures satisfactory automatic control of the steam generator water level from low power to full power. It is premised that the current analog control loop is replaced with digital computer control, thus expanding the range of possible solutions. The proposed approach is to compensate the level measurement for thermal shrink and swell effects, which complicate level control during low-power operation. A non-linear digital predictor is part of the controller and is used to estimate the shrink and swell effects. The predictor is found to be stable and applicable on-line with microprocessors. The controller is evaluated by calculations in which it controls an existing non-linear digital computer model of a steam generator. For a multi-ramp power increase from low power to full power, the proposed controller performs well over the entire range. The water level settles within 3 min after a single ramp increase (a 5% power increase in one minute) without any stability problem. Even at very low power, the maximum overshoot is judged to be acceptable. (orig.)

  11. A Simulink Library of cryogenic components to automatically generate control schemes for large Cryorefrigerators

    Science.gov (United States)

    Bonne, François; Alamir, Mazen; Hoa, Christine; Bonnay, Patrick; Bon-Mardion, Michel; Monteiro, Lionel

    2015-12-01

    In this article, we present a new Simulink library of cryogenic components (valves, phase separators, mixers, heat exchangers, etc.) that can be assembled to generate model-based control schemes. Every component is described by its algebraic or differential equations and can be combined with others to build the dynamical model of a complete refrigerator or of a subpart of it. The obtained model can be used to automatically design advanced model-based control schemes; it can also be used to design a model-based PI controller. Advanced control schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful where cryoplants are submitted to large pulsed thermal loads, as expected in future fusion reactors such as the cryogenic cooling systems of the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). The paper gives the example of the generation of the dynamical model of the 400W@1.8K refrigerator and shows how to build a constrained model predictive controller for it. Experimental results based on this scheme will be given. This work is supported by the French national research agency (ANR) through the ANR-13-SEED-0005 CRYOGREEN program.

  12. Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs

    Directory of Open Access Journals (Sweden)

    Khaled Jerbi

    2012-01-01

    Full Text Available In this paper, we introduce the Reconfigurable Video Coding (RVC) standard, based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called CAL Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while preserving the behavior. Finally, the implementation results of the transformation on video and still-image decoders are summarized. We show that the obtained results can largely satisfy real-time constraints for an embedded design on FPGA, as we obtain a throughput of 73 FPS for the MPEG-4 decoder and 34 FPS for the coding and decoding process of the LAR coder on CIF-size video. This work resolves the main limitation of hardware generation from CAL designs.

  13. A review of metaphase chromosome image selection techniques for automatic karyotype generation.

    Science.gov (United States)

    Arora, Tanvi; Dhir, Renu

    2016-08-01

    The karyotype is analyzed to detect genetic abnormalities. It is generated by arranging the chromosomes after extracting them from metaphase chromosome images. Chromosomes are non-rigid bodies that carry the genetic information of an individual. A metaphase spread contains the chromosomes, but they are not always distinct bodies: chromosomes may be individual, touching one another, bent, or even overlapping, forming clusters. Extracting chromosomes from touching and overlapping clusters is a very tedious process, and segmenting a randomly chosen metaphase image may not give correct and accurate results. Therefore, before a metaphase chromosome image is taken up for analysis, it must be assessed for the orientation of the chromosomes it contains. This paper compares the reported methods for selecting metaphase chromosome images for automatic karyotype generation. The analysis concludes that each selection method has its own advantages and disadvantages. PMID:26676686

  14. An automatic MRI/SPECT registration algorithm using image intensity and anatomical feature as matching characters: application on the evaluation of Parkinson's disease

    International Nuclear Information System (INIS)

    Single-photon emission computed tomography (SPECT) of dopamine transporters with 99mTc-TRODAT-1 has recently been proposed to offer valuable information in assessing the functionality of dopaminergic systems. Magnetic resonance imaging (MRI) and SPECT imaging are important in the noninvasive examination of dopamine concentration in vivo. Therefore, this investigation presents an automated MRI/SPECT image registration algorithm based on a new similarity metric. The metric combines anatomical features, characterized by the specific binding (the mean count per voxel) in the putamens and caudate nuclei, with the distribution of image intensity, characterized by normalized mutual information (NMI). A preprocessing step, a novel two-cluster SPECT normalization algorithm, is also presented for MRI/SPECT registration. Clinical MRI/SPECT data from 18 healthy subjects and 13 Parkinson's disease (PD) patients are used to validate the performance of the proposed algorithms. An appropriate color map, such as 'rainbow,' for image display enables the two-cluster SPECT normalization algorithm to provide clinically meaningful visual contrast. The proposed registration scheme reduces the target registration error from >7 mm, for a conventional registration algorithm based on NMI alone, to approximately 4 mm. The error in the specific/nonspecific 99mTc-TRODAT-1 binding ratio, which is employed as a quantitative measure of TRODAT receptor binding, is also reduced from 0.45±0.22 to 0.08±0.06 among healthy subjects and from 0.28±0.18 to 0.12±0.09 among PD patients.
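    The NMI ingredient of the similarity metric can be sketched from a joint intensity histogram. This is the generic formulation NMI(A,B) = (H(A) + H(B)) / H(A,B), not the authors' exact implementation, and the bin count below is an arbitrary choice:

    ```python
    import numpy as np

    def normalized_mutual_information(a, b, bins=32):
        """NMI sketch for intensity-based registration: build a joint
        intensity histogram of the two images and compare the marginal
        entropies with the joint entropy."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)

        def entropy(p):
            p = p[p > 0]                      # ignore empty bins
            return -(p * np.log(p)).sum()

        return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    nmi_self = normalized_mutual_information(img, img)
    nmi_rand = normalized_mutual_information(img, rng.random((64, 64)))
    ```

    For identical images NMI equals 2, and it approaches 1 as the images become statistically independent, which is why a registration scheme moves one volume to maximize it.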

  15. Automatic Generation of Building Mapping Using Digital, Vertical and Aerial High Resolution Photographs and LIDAR Point Clouds

    Science.gov (United States)

    Barragán, W.; Campos, A.; Sanchez, G.

    2016-06-01

    The objective of this research is the automatic generation of building maps in areas of interest. The work was carried out using high-resolution vertical aerial photographs and the LIDAR point cloud, through radiometric and geometric digital processing. The methodology uses known building heights, various segmentation algorithms, and spectral band combinations. The overall effectiveness of the algorithm is 97.2% on the test data.

  16. Wind power integration into the automatic generation control of power systems with large-scale wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Altin, Müfit;

    2014-01-01

    Transmission system operators have an increased interest in the active participation of wind power plants (WPP) in the power balance control of power systems with large wind power penetration. The emphasis in this study is on the integration of WPPs into the automatic generation control (AGC) of...

  17. Automatic Code Generation for Recurring Code Patterns in Web Based Applications and Increasing Efficiency of Data Access Code

    OpenAIRE

    Senthil, J; Arumugam, S.; S Margret Anouncia; Abhinav Kapoor

    2012-01-01

    Today, many web applications and web sites are data driven, with all their static and dynamic data stored in relational databases. The aim of this thesis is to automatically generate data-access code for relational databases in minimum time.

  18. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    Science.gov (United States)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems.

  19. Automatic Stress Testing of Multi-tier Systems by Dynamic Bottleneck Switch Generation

    Science.gov (United States)

    Casale, Giuliano; Kalbasi, Amir; Krishnamurthy, Diwakar; Rolia, Jerry

    The performance of multi-tier systems is known to be significantly degraded by workloads that place bursty service demands on system resources. Burstiness can cause queueing delays, oversubscribe limited threading resources, and even cause dynamic bottleneck switches between resources. Thus, there is a need for a methodology to create benchmarks with controlled burstiness and bottleneck switches to evaluate their impact on system performance. We tackle this problem using a model-based technique for the automatic and controlled generation of bursty benchmarks. Markov models are constructed in an automated manner to model the distribution of service demands placed by sessions of a given system on various system resources. The models are then used to derive session submission policies that result in user-specified levels of service-demand burstiness for resources at the different tiers in a system. Our approach can also predict under what conditions these policies create dynamic bottleneck switching among resources. A case study using a three-tier TPC-W testbed shows that our method is able to control and predict burstiness for session service demands. Further, results from the study demonstrate that our approach was able to inject controlled bottleneck switches. Experiments show that these bottleneck switches cause dramatic latency and throughput degradations that are not observed for the same session mix under non-bursty conditions.
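    A minimal flavor of the idea: a two-state Markov model whose states issue small or large service demands produces controllable burstiness. The parameters below (state means and transition probabilities) are invented for illustration; in the paper, the models are constructed automatically from measured demand distributions:

    ```python
    import random

    def bursty_demands(n, seed=7):
        """Two-state Markov-modulated demand generator sketch: a 'calm'
        state issues small exponential service demands, a rarer 'burst'
        state issues large ones, yielding bursty demand traces."""
        rng = random.Random(seed)
        state = "calm"
        out = []
        for _ in range(n):
            if state == "calm":
                out.append(rng.expovariate(1 / 10.0))    # mean 10 (e.g. ms)
                if rng.random() < 0.05:                  # calm -> burst
                    state = "burst"
            else:
                out.append(rng.expovariate(1 / 200.0))   # mean 200
                if rng.random() < 0.20:                  # burst -> calm
                    state = "calm"
        return out

    demands = bursty_demands(10_000)
    ```

    Tuning the transition probabilities changes how long bursts last and how often they occur, which is the knob such a benchmark generator exposes to hit a user-specified burstiness level.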

  20. Automatic generation of predictive dynamic models reveals nuclear phosphorylation as the key Msn2 control mechanism.

    Science.gov (United States)

    Sunnåker, Mikael; Zamora-Sillero, Elias; Dechant, Reinhard; Ludwig, Christina; Busetto, Alberto Giovanni; Wagner, Andreas; Stelling, Joerg

    2013-05-28

    Predictive dynamical models are critical for the analysis of complex biological systems. However, methods to systematically develop and discriminate among systems biology models are still lacking. We describe a computational method that incorporates all hypothetical mechanisms about the architecture of a biological system into a single model and automatically generates a set of simpler models compatible with observational data. As a proof of principle, we analyzed the dynamic control of the transcription factor Msn2 in Saccharomyces cerevisiae, specifically the short-term mechanisms mediating the cells' recovery after release from starvation stress. Our method determined that 12 of 192 possible models were compatible with available Msn2 localization data. Iterations between model predictions and rationally designed phosphoproteomics and imaging experiments identified a single-circuit topology with a relative probability of 99% among the 192 models. Model analysis revealed that the coupling of dynamic phenomena in Msn2 phosphorylation and transport could lead to efficient stress response signaling by establishing a rate-of-change sensor. Similar principles could apply to mammalian stress response pathways. Systematic construction of dynamic models may yield detailed insight into nonobvious molecular mechanisms. PMID:23716718

  1. IQPC 2015 Track: Evaluation of Automatically Generated 2D Footprints from Urban LIDAR Data

    Science.gov (United States)

    Truong-Hong, L.; Laefer, D.; Bisheng, Y.; Ronggang, H.; Jianping, L.

    2015-08-01

    Over the last decade, several automatic approaches have been proposed to extract and reconstruct 2D building footprints and 2D road profiles from ALS data, satellite images, and/or aerial imagery. Since these methods have to date been applied to various data sets and assessed through a variety of different quality indicators and ground truths, comparing the relative effectiveness of the techniques and identifying their strengths and shortcomings has not been possible in a systematic way. This contest, as part of IQPC15, was designed to determine the pros and cons of submitted approaches in generating 2D footprints of a city region from ALS data. Specifically, participants were asked to submit 2D footprints (building outlines and road profiles) derived from a highly dense ALS dataset (approximately 225 points/m²) covering 1 km² of Dublin, Ireland's city centre. The proposed evaluation strategies were designed to measure not only the capacity of each method to detect and reconstruct 2D buildings and roads but also the quality of the reconstructed building and road models in terms of shape similarity and positional accuracy.
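
Evaluation strategies for contests of this kind usually reduce to cell-wise completeness, correctness, and quality (intersection-over-union). A sketch under the simplifying assumption that footprints have already been rasterized to sets of grid cells:

```python
def footprint_metrics(extracted, reference):
    """Compare an extracted 2D footprint against ground truth.

    Both arguments are sets of (x, y) grid cells, a stand-in for
    rasterized building/road polygons.
    Returns (completeness, correctness, quality).
    """
    tp = len(extracted & reference)   # cells correctly detected
    fp = len(extracted - reference)   # cells wrongly detected
    fn = len(reference - extracted)   # cells missed
    completeness = tp / (tp + fn)     # detection rate
    correctness = tp / (tp + fp)      # precision
    quality = tp / (tp + fp + fn)     # intersection over union
    return completeness, correctness, quality
```

Shape similarity and positional accuracy need polygon-level measures on top of this, but the cell-based scores are the usual starting point.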

  2. Performing Label-Fusion-Based Segmentation Using Multiple Automatically Generated Templates

    Science.gov (United States)

    Chakravarty, M. Mallar; Steadman, Patrick; van Eede, Matthijs C.; Calcott, Rebecca D.; Gu, Victoria; Shaw, Philip; Raznahan, Armin; Collins, D. Louis; Lerch, Jason P.

    2016-01-01

    Classically, model-based segmentation procedures match magnetic resonance imaging (MRI) volumes to an expertly labeled atlas using nonlinear registration. The accuracy of these techniques is limited due to atlas biases, misregistration, and resampling error. Multi-atlas-based approaches are used as a remedy and involve matching each subject to a number of manually labeled templates. This approach yields numerous independent segmentations that are fused using a voxel-by-voxel label-voting procedure. In this article, we demonstrate how the multi-atlas approach can be extended to work with input atlases that are unique and extremely time consuming to construct by generating a library of multiple automatically generated templates of different brains (MAGeT Brain). We demonstrate the efficacy of our method for the mouse and human using two different nonlinear registration algorithms (ANIMAL and ANTs). The input atlases consist of a high-resolution mouse brain atlas and an atlas of the human basal ganglia and thalamus derived from serial histological data. MAGeT Brain segmentation improves the identification of the mouse anterior commissure (mean Dice kappa κ = 0.801), but may be encountering a ceiling effect for hippocampal segmentations. Applying MAGeT Brain to human subcortical structures improves segmentation accuracy for all structures compared to regular model-based techniques (κ = 0.845, 0.752, and 0.861 for the striatum, globus pallidus, and thalamus, respectively). Experiments performed with three manually derived input templates suggest that MAGeT Brain can approach or exceed the accuracy of multi-atlas label-fusion segmentation (κ = 0.894, 0.815, and 0.895 for the striatum, globus pallidus, and thalamus, respectively). PMID:22611030
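
The voxel-by-voxel label voting at the heart of label fusion, and the Dice kappa used above to score it, can be sketched as follows (flat label lists stand in for MRI volumes; the nonlinear registration steps of MAGeT Brain are omitted):

```python
from collections import Counter

def fuse_labels(segmentations):
    """Majority vote across candidate segmentations, voxel by voxel.

    Each segmentation is a flat list with one label per voxel; all
    lists must have the same length.
    """
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*segmentations)]

def dice_kappa(seg_a, seg_b, label):
    """Dice overlap for one label: 2|A∩B| / (|A| + |B|)."""
    in_a = sum(1 for v in seg_a if v == label)
    in_b = sum(1 for v in seg_b if v == label)
    both = sum(1 for x, y in zip(seg_a, seg_b) if x == y == label)
    return 2 * both / (in_a + in_b)
```

Production implementations vote on full 3D arrays and may weight votes by registration quality, but the fusion rule itself is this simple.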

  3. Matching and Clustering: Two Steps Towards Automatic Model Generation in Computer Vision

    OpenAIRE

    Gros, Patrick

    1993-01-01

    In this paper, we present a general framework for a system of automatic modelling and recognition of 3D polyhedral objects. Such a system has many applications in robotics: recognition, localization, grasping, ... Here we focus on one main aspect of the system: when many images of one 3D object are taken from different unknown viewpoints, how do we recognize those that represent the same aspect of the object? Briefly, it is possible to determine automatically i...

  4. Field Robotics in Sports: Automatic Generation of Guidance Lines for Automatic Grass Cutting, Striping and Pitch Marking of Football Playing Fields

    Directory of Open Access Journals (Sweden)

    Ole Green

    2011-03-01

    Full Text Available Progress is constantly being made and new applications are constantly emerging in the area of field robotics. In this paper, a promising application of field robotics to football playing fields is introduced. An algorithmic approach is presented for generating the waypoints required to guide a GPS-based field robot through a football playing field to automatically carry out periodic tasks such as cutting the grass, pitch and line marking, and lawn striping. The manual operation of these tasks requires very skilful personnel able to work long hours with very high concentration for the football pitch to comply with the standards of the Fédération Internationale de Football Association (FIFA). On the other hand, a GPS-guided vehicle or robot with three implements (grass mower, lawn striping roller, and track marking illustrator) is capable of working 24 h a day, in most weather and in harsh soil conditions, without loss of quality. The proposed approach to the automatic operation of football playing fields requires no or very limited human intervention, and therefore it saves numerous working hours and frees workers to focus on other tasks. An economic feasibility study showed that the proposed method is economically superior to the current manual practices.
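
The guidance-line generation described above is essentially a boustrophedon (back-and-forth) sweep over the pitch; a minimal sketch follows, with made-up dimensions rather than FIFA-standard ones and local coordinates instead of GPS fixes.

```python
def mowing_waypoints(pitch_width, pitch_length, implement_width):
    """Generate guidance waypoints for parallel passes over a rectangle.

    The robot drives up one stripe and down the next, with stripe
    centres spaced one implement width apart. Coordinates are local
    metres; converting them to GPS fixes is out of scope here.
    """
    waypoints = []
    x = implement_width / 2       # centre of the first stripe
    heading_up = True
    while x <= pitch_width:
        if heading_up:
            waypoints += [(x, 0.0), (x, pitch_length)]
        else:
            waypoints += [(x, pitch_length), (x, 0.0)]
        heading_up = not heading_up
        x += implement_width
    return waypoints
```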

  5. Development on quantitative safety analysis method of accident scenario. The automatic scenario generator development for event sequence construction of accident

    International Nuclear Information System (INIS)

    This study intends to develop a more sophisticated tool that will advance the current event tree method used in all PSA, and to focus on non-catastrophic events, specifically non-core-melt sequence scenarios not included in an ordinary PSA. In a non-catastrophic event PSA, it is necessary to consider various end states and failure combinations for the purpose of multiple scenario construction. The analysis workload should therefore be reduced, and an automated method and tool are required. A scenario generator was developed that can automatically handle scenario construction logic and generate the enormous number of sequences logically identified by state-of-the-art methodology. To realize scenario generation as a technical tool, a simulation model combining AI techniques with a graphical interface was introduced. The AI simulation model in this study was verified for the feasibility of its capability to evaluate actual systems. In this feasibility study, a spurious SI signal was selected to test the model's applicability. As a result, the basic capability of the scenario generator was demonstrated and important scenarios were generated. The human interface with a system and its operation, as well as time-dependent factors and their quantification in scenario modeling, were added utilizing the human scenario generator concept. The feasibility of the improved scenario generator was then tested for actual use. Automatic scenario generation with a certain level of credibility was achieved by this study. (author)

  6. Embedded Platform for Automatic Testing and Optimizing of FPGA Based Cryptographic True Random Number Generators

    Directory of Open Access Journals (Sweden)

    M. Varchola

    2009-12-01

    Full Text Available This paper deals with an evaluation platform for cryptographic True Random Number Generators (TRNGs) based on the hardware implementation of statistical tests for FPGAs. It was developed in order to provide an automatic tool that helps to speed up the TRNG design process and can provide new insights into TRNG behavior, as will be shown on a particular example in the paper. It enables testing of the statistical properties of various TRNG designs under various working conditions on the fly. Moreover, the tests are suitable for embedding into cryptographic hardware products in order to recognize TRNG output of weak quality and thus increase robustness and reliability. The tests are fully compatible with the FIPS 140 standard and are implemented in VHDL as an IP core for vendor-independent FPGAs. A recent Flash-based Actel Fusion FPGA was chosen for preliminary experiments. The Actel version of the tests possesses an interface to Actel's CoreMP7 softcore processor, which is fully compatible with the industry-standard ARM7TDMI. Moreover, an identical test suite was implemented on the Xilinx Virtex 2 and 5 in order to compare the performance of the proposed solution with that of an already published one based on the same FPGAs. Clock frequencies 25% and 65% greater, respectively, were achieved while consuming almost equal resources on the Xilinx FPGAs. On top of that, the proposed FIPS 140 architecture is capable of processing one random bit per clock cycle, which results in a throughput of 311.5 Mbps on the Virtex 5 FPGA.
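
Among the FIPS 140 statistical tests that such a platform implements in hardware, the simplest is the monobit test; a software reference version with the FIPS 140-2 acceptance interval is sketched below.

```python
def fips_monobit(bits):
    """FIPS 140-2 monobit test on a 20,000-bit sample.

    The sample passes if the number of ones lies strictly between
    9725 and 10275; a strong bias in either direction fails.
    """
    if len(bits) != 20000:
        raise ValueError("monobit test expects exactly 20,000 bits")
    ones = sum(bits)
    return 9725 < ones < 10275
```

A hardware version needs only a counter and a comparator, which is why tests like this fit comfortably in an FPGA alongside the TRNG itself.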

  7. Automatic test pattern generation for stuck-at and delay faults in combinational circuits

    International Nuclear Information System (INIS)

    The present studies propose automatic test pattern generation (ATG) algorithms for combinational circuits. These algorithms are realized in two ATG programs: one for stuck-at faults and the other for delay faults. In order to accelerate the ATG process, the two programs share a common feature (a search method based on the concept of the degree of freedom), whereas only the ATG program for delay faults utilizes 19-valued logic, a type of composite-valued logic. This difference between the two programs results from the difference in target faults. Accelerating the ATG process is indispensable for improving the ATG algorithms. This acceleration is mainly achieved by reducing the number of unnecessary backtrackings, detecting conflicts earlier, and shortening the computation time of implication. To this end, the developed ATG programs include a new search method based on the concept of the degree of freedom (DF). The DF, computed directly and easily from system descriptions such as gate types and their interconnections, is the criterion for deciding which, among the several alternative logic values required along each path, promises to be the most effective for accelerating and improving the ATG process. This DF concept is utilized to develop and improve both ATG programs for stuck-at and delay faults in combinational circuits. In addition to improving the ATG process, reducing the number of test patterns is indispensable for testing delay faults, because the number of delay faults grows rapidly with circuit size. In order to improve the compactness of the test set, 19-valued logic is derived. Unlike other test-generation logic systems, 19-valued logic is utilized to generate robustly hazard-free test patterns. This is achieved by using the basic 5-valued logic, proposed in this work, where the transition with no hazard is

  8. Automatic Generation of Analytic Equations for Vibrational and Rovibrational Constants from Fourth-Order Vibrational Perturbation Theory

    Science.gov (United States)

    Matthews, Devin A.; Gong, Justin Z.; Stanton, John F.

    2014-06-01

    The derivation of analytic expressions for vibrational and rovibrational constants, for example the anharmonicity constants χ_ij and the vibration-rotation interaction constants α_r^B, from second-order vibrational perturbation theory (VPT2) can be accomplished with pen and paper and some practice. However, the corresponding quantities from fourth-order perturbation theory (VPT4) are considerably more complex: the only known hand derivations make extensive use of many layers of complicated intermediates, and for rotational quantities they require specialization to orthorhombic cases or to the form of Watson's reduced Hamiltonian. We present an automatic computer program for generating these expressions with full generality, based on adapting an existing numerical program built on the sum-over-states representation of the energy to a computer algebra context. The measures taken to produce well-simplified and factored expressions in an efficient manner are discussed, as well as the framework for automatically checking the correctness of the generated equations.

  9. Development of user interface to support automatic program generation of nuclear power plant analysis by module-based simulation system

    International Nuclear Information System (INIS)

    Module-based Simulation System (MSS) has been developed to realize a new software work environment that enables versatile dynamic simulation of complex nuclear power systems flexibly. The MSS makes full use of modern software technology to replace a large fraction of human software work in complex, large-scale program development with computer automation. The fundamental methods utilized in MSS and a developmental study on the human interface system SESS-1, which helps users generate integrated simulation programs automatically, are summarized as follows: (1) To enhance usability and 'communality' of program resources, the basic mathematical models of common usage in nuclear power plant analysis are programmed as 'modules' and stored in a module library. Information on the usage of individual modules is stored in a module database with easy registration, update, and retrieval through the interactive management system. (2) Target simulation programs and the input/output files are automatically generated with simple block-wise languages by a precompiler system for module integration purposes. (3) Working time for program development and analysis in an example study of an LMFBR plant thermal-hydraulic transient analysis was demonstrated to be remarkably shortened with the introduction of the interface system SESS-1, developed as an automatic program generation environment. (author)

  10. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Full Text Available Abstract Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a
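
The affiliation-string parsing can be illustrated with a simple heuristic: take the institution from the leading comma-separated field and the country from the trailing one, after stripping e-mail residue. The country list and field positions here are toy assumptions; the paper's parser is considerably more thorough.

```python
import re

# Toy country gazetteer; a real parser would use a complete list.
COUNTRIES = {"USA", "United States", "Ireland", "Japan", "Denmark"}

def parse_affiliation(affiliation):
    """Extract (institution, country) from a PubMed affiliation string.

    Heuristics: institution = first comma-separated field; country =
    last field with any e-mail address removed. Returns None for the
    country when it is not recognized.
    """
    fields = [f.strip() for f in affiliation.rstrip(". ").split(",")]
    institution = fields[0]
    tail = re.sub(r"\S+@\S+", "", fields[-1]).strip(" .")
    country = tail if tail in COUNTRIES else None
    return institution, country
```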

  11. ARP: Automatic rapid processing for the generation of problem dependent SAS2H/ORIGEN-s cross section libraries

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.; Hermann, O.W.; Bowman, S.M.; Parks, C.V.

    1998-04-01

    In this report, a methodology is described which serves as an alternative to the SAS2H path of the SCALE system to generate cross sections for point-depletion calculations with the ORIGEN-S code. ARP, Automatic Rapid Processing, is an algorithm that allows the generation of cross-section libraries suitable for the ORIGEN-S code by interpolation over pregenerated SAS2H libraries. The interpolations are carried out on the following variables: burnup, enrichment, and water density. The adequacy of the methodology is evaluated by comparing measured and computed spent fuel isotopic compositions for PWR and BWR systems.
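
The interpolation step can be sketched in two of the three variables (burnup and enrichment; ARP also interpolates over water density) as a bilinear lookup over pregenerated library points. The grid values below are invented single numbers standing in for full cross-section libraries.

```python
def interpolate_xs(grid, burnup, enrichment):
    """Bilinear interpolation over a rectangular (burnup, enrichment) grid.

    `grid` maps (burnup, enrichment) node pairs to a cross-section value;
    the query point must lie inside the grid's bounding box.
    """
    bs = sorted({b for b, _ in grid})
    es = sorted({e for _, e in grid})
    b0 = max(b for b in bs if b <= burnup)
    b1 = min(b for b in bs if b >= burnup)
    e0 = max(e for e in es if e <= enrichment)
    e1 = min(e for e in es if e >= enrichment)
    tb = 0.0 if b0 == b1 else (burnup - b0) / (b1 - b0)
    te = 0.0 if e0 == e1 else (enrichment - e0) / (e1 - e0)
    low = grid[(b0, e0)] * (1 - te) + grid[(b0, e1)] * te    # at burnup b0
    high = grid[(b1, e0)] * (1 - te) + grid[(b1, e1)] * te   # at burnup b1
    return low * (1 - tb) + high * tb
```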

  12. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    International Nuclear Information System (INIS)

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests

  13. Automatic rapid process for the generation of problem-dependent SAS2H/ORIGEN-S cross-section libraries

    International Nuclear Information System (INIS)

    A methodology is described that serves as an alternative to the SAS2H path of the SCALE system to generate cross sections for point-depletion calculations with the ORIGEN-S code. Automatic Rapid Processing (ARP) is an algorithm that allows the generation of cross-section libraries suitable for the ORIGEN-S code by interpolation over pregenerated SAS2H libraries. The interpolations are carried out on the following variables: burnup, enrichment, and water density. The adequacy of the methodology is evaluated by comparing measured and computed spent-fuel isotopic compositions for pressurized water reactor and boiling water reactor systems

  14. Development of the automatic test pattern generation for NPP digital electronic circuits using the degree of freedom concept

    International Nuclear Information System (INIS)

    In this paper, an improved algorithm for automatic test pattern generation (ATG) for nuclear power plant digital electronic circuits (the combinational type of logic circuits) is presented. For accelerating and improving the ATG process for combinational circuits, the presented ATG algorithm employs a new concept, the degree of freedom (DF). The DF, directly computed from system descriptions such as gate types and their interconnections, is the criterion for deciding which, among the several alternative logic values required along each path, promises to be the most effective for accelerating and improving the ATG process. Based on the DF, the proposed ATG algorithm is implemented in the automatic fault diagnosis system (AFDS), which incorporates advanced artificial intelligence fault diagnosis techniques. It is shown that the AFDS using the ATG algorithm makes Universal Card (UV Card) testing much faster than present testing practice or exhaustive test sets

  15. Plagiarism meets paraphrasing: insights for the new generation in automatic plagiarism detection

    OpenAIRE

    Barrón-Cedeño, Alberto; Vila Rigat, Marta; Martí Antonin, M. Antònia; Rosso, Paolo

    2013-01-01

    Although paraphrasing is the linguistic mechanism underlying many plagiarism cases, little attention has been paid to its analysis in the framework of automatic plagiarism detection. Therefore, state-of-the-art plagiarism detectors find it difficult to detect cases of paraphrase plagiarism. In this article, we analyse the relationship between paraphrasing and plagiarism, paying special attention to which paraphrase phenomena underlie acts of plagiarism and which of them are detected by plagia...

  16. Plagiarism meets paraphrasing: insights for the next generation in automatic plagiarism detection

    OpenAIRE

    Barrón-Cedeño, Alberto; Vila, Marta; Martí, Maria Antonia; Rosso, Paolo

    2013-01-01

    Although paraphrasing is the linguistic mechanism underlying many plagiarism cases, little attention has been paid to its analysis in the framework of automatic plagiarism detection. Therefore, state-of-the-art plagiarism detectors find it difficult to detect cases of paraphrase plagiarism. In this article, we analyze the relationship between paraphrasing and plagiarism, paying special attention to which paraphrase phenomena underlie acts of plagiarism and which of them are detected by plagia...

  17. Automatic generation of groundwater model hydrostratigraphy from AEM resistivity and boreholes

    DEFF Research Database (Denmark)

    Marker, Pernille Aabye; Foged, N.; Christiansen, A. V.;

    2014-01-01

    Regional hydrological models are important tools in water resources management. Model prediction uncertainty is primarily due to structural (geological) non-uniqueness, which makes sampling of the structural model space necessary to estimate prediction uncertainties. Geological structures and heterogeneity, which spatially scarce borehole lithology data may overlook, are well resolved in AEM surveys. This study presents a semi-automatic sequential hydrogeophysical inversion method for the integration of AEM and borehole data into regional groundwater models in sedimentary areas, where sand/clay...

  18. Visual analytics for automatic quality assessment of user-generated content on the English Wikipedia

    OpenAIRE

    David Strohmaier; Lindstaedt, Stefanie; Veas, Eduardo; Di Sciascio, Cecilia

    2015-01-01

        Related work has shown that it is possible to automatically measure the quality of Wikipedia articles. Yet, despite all these quality measures, it is difficult to identify what would improve an article. Therefore this master thesis is about an interactive graphic tool made for ranking and editing Wikipedia articles with support from quality measures. The contribution of this work is twofold: i) The Quality Analyzer that allows for creating new ...

  19. Presentation: Visual analytics for automatic quality assessment of user-generated content on the English Wikipedia

    OpenAIRE

    David Strohmaier

    2015-01-01

    Related work has shown that it is possible to automatically measure the quality of Wikipedia articles. Yet, despite all these quality measures, it is difficult to identify what would improve an article. Therefore this master thesis is about an interactive graphic tool made for ranking and editing Wikipedia articles with support from quality measures. The contribution of this work is twofold: i) The Quality Analyzer that allows for creating new quality metrics and co...

  20. Solution Approach to Automatic Generation Control Problem Using Hybridized Gravitational Search Algorithm Optimized PID and FOPID Controllers

    Directory of Open Access Journals (Sweden)

    DAHIYA, P.

    2015-05-01

    Full Text Available This paper presents the application of hybrid opposition based disruption operator in gravitational search algorithm (DOGSA to solve automatic generation control (AGC problem of four area hydro-thermal-gas interconnected power system. The proposed DOGSA approach combines the advantages of opposition based learning which enhances the speed of convergence and disruption operator which has the ability to further explore and exploit the search space of standard gravitational search algorithm (GSA. The addition of these two concepts to GSA increases its flexibility for solving the complex optimization problems. This paper addresses the design and performance analysis of DOGSA based proportional integral derivative (PID and fractional order proportional integral derivative (FOPID controllers for automatic generation control problem. The proposed approaches are demonstrated by comparing the results with the standard GSA, opposition learning based GSA (OGSA and disruption based GSA (DGSA. The sensitivity analysis is also carried out to study the robustness of DOGSA tuned controllers in order to accommodate variations in operating load conditions, tie-line synchronizing coefficient, time constants of governor and turbine. Further, the approaches are extended to a more realistic power system model by considering the physical constraints such as thermal turbine generation rate constraint, speed governor dead band and time delay.

  1. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    Full Text Available During the development of software, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes due to delays in coding. This paper presents a metamodel for software implementation, which will give rise to a development tool for the automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.

  2. Automatic Generation of Machine Emulators: Efficient Synthesis of Robust Virtual Machines for Legacy Software Migration

    DEFF Research Database (Denmark)

    Franz, Michael; Gal, Andreas; Probst, Christian

    As older mainframe architectures become obsolete, the corresponding legacy software is increasingly executed via platform emulators running on top of more modern commodity hardware. These emulators are virtual machines that often include a combination of interpreters and just-in-time compilers. Implementing interpreters and compilers for each combination of emulated and target platform independently of each other is a redundant and error-prone task. We describe an alternative approach that automatically synthesizes specialized virtual-machine interpreters and just-in-time compilers, which then execute on top of an existing software portability platform such as Java. The result is a considerably reduced implementation effort.

  3. Algorithm of automatic generation of technology process and process relations of automotive wiring harnesses

    Institute of Scientific and Technical Information of China (English)

    XU Benzhu; ZHU Jiman; LIU Xiaoping

    2012-01-01

    Identifying each process and the constraint relations among them from complex wiring harness drawings quickly and accurately is the basis for formulating process routes. Drawing on knowledge of automotive wiring harnesses and the characteristics of wiring harness components, we established a wiring harness graph model. We then present an algorithm for identifying technology processes automatically, and finally we describe the relationships between processes by introducing a constraint matrix, in order to lay a good foundation for harness process planning and production scheduling.

  4. SUNMAP: A Tool for Automatic Topology Selection and Generation for NoCs

    OpenAIRE

    Murali, Srinivasan; Micheli, Giovanni De

    2004-01-01

    Increasing communication demands of processor and memory cores in Systems on Chips (SoCs) necessitate the use of Networks on Chip (NoC) to interconnect the cores. An important phase in the design of NoCs is the mapping of cores onto the most suitable topology for a given application. In this paper, we present SUNMAP, a tool for automatically selecting the best topology for a given application and producing a mapping of cores onto that topology. SUNMAP explores various design objectives such as ...

  5. Automatic selection of informative sentences: The sentences that can generate multiple choice questions

    Directory of Open Access Journals (Sweden)

    Mukta Majumder

    2014-12-01

    Full Text Available Traditional education cannot meet the expectations and requirements of a Smart City; it requires more advanced forms such as active learning and ICT-based education. Multiple choice questions (MCQs) play an important role in educational assessment and active learning, which has a key role in Smart City education. MCQs are effective for assessing the understanding of well-defined concepts. A fraction of all the sentences of a text contain well-defined concepts or information that can be asked as an MCQ. These informative sentences must be identified first in order to prepare multiple choice questions manually or automatically. In this paper we propose a technique for the automatic identification of such informative sentences that can act as the basis of MCQs. The technique is based on parse structure similarity. A reference set of parse structures is compiled with the help of existing MCQs. The parse structure of a new sentence is compared with the reference structures, and if similarity is found the sentence is considered a potential candidate. Next, a rule-based post-processing module works on these potential candidates to select the final set of informative sentences. The proposed approach is tested in the sports domain, where many MCQs are readily available for preparing the reference set of structures. The quality of the system-selected sentences is evaluated manually. The experimental results show that the proposed technique is quite promising.
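
The parse-structure comparison can be approximated, for illustration, by similarity over part-of-speech tag sequences rather than full parse trees; the tag sequences and threshold below are invented for the example.

```python
from difflib import SequenceMatcher

def structure_similarity(tags_a, tags_b):
    """Similarity of two sentences' structures, approximated by the
    matching-subsequence ratio of their POS-tag sequences."""
    return SequenceMatcher(None, tags_a, tags_b).ratio()

def select_informative(candidates, reference_structures, threshold=0.8):
    """Keep candidate sentences whose structure is close to at least one
    structure already known (from existing MCQs) to yield a question."""
    return [tags for tags in candidates
            if any(structure_similarity(tags, ref) >= threshold
                   for ref in reference_structures)]
```

The paper's pipeline would follow this with the rule-based post-processing step; this sketch covers only the similarity filter.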

  6. a New Approach for the Semi-Automatic Texture Generation of the Buildings Facades, from Terrestrial Laser Scanner Data

    Science.gov (United States)

    Oniga, E.

    2012-07-01

    The result of terrestrial laser scanning is an impressive number of spatial points, each of them characterized by its position (the X, Y and Z co-ordinates), by the value of the laser reflectance, and by its real color, expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from the georeferenced digital images, taken with a high-resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for semi-automatic texture generation, using the color information, i.e. the RGB values of every point captured by terrestrial laser scanning technology, and the 3D surfaces defining the building facades, generated with the Leica Cyclone software. In the first step, the operator defines the limiting value, i.e. the minimum distance between a point and the closest surface. The second step consists in calculating the distances, i.e. the perpendiculars drawn from each point to the closest surface. In the third step we associate the points, whose 3D coordinates are known, with the corresponding surface, depending on the limiting value. The fourth step consists in computing the Voronoi diagram for the points that belong to a surface. In the final step, the RGB value of the color code is automatically associated with the corresponding polygon of the Voronoi diagram. The advantage of using this algorithm is that we can obtain, in a semi-automatic manner, a photorealistic 3D model of the building.
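
The distance and association steps, i.e. computing each point's perpendicular distance to the facade surfaces and assigning it to the closest one when the distance is below the limiting value, can be sketched as follows for planar surfaces given in Hessian normal form (an assumption of this sketch, not of the paper):

```python
def point_plane_distance(point, plane):
    """Perpendicular distance from point (x, y, z) to the plane
    a*x + b*y + c*z + d = 0, with (a, b, c) a unit normal."""
    a, b, c, d = plane
    x, y, z = point
    return abs(a * x + b * y + c * z + d)

def assign_points(points, planes, limit):
    """Map each point index to the index of its nearest plane, keeping
    only points closer than the operator-chosen limiting value."""
    assignment = {}
    for i, p in enumerate(points):
        dists = [point_plane_distance(p, pl) for pl in planes]
        nearest = min(range(len(planes)), key=dists.__getitem__)
        if dists[nearest] <= limit:
            assignment[i] = nearest
    return assignment
```

The Voronoi and texturing steps would then operate per plane on the assigned point sets.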

  7. A Solar Automatic Tracking System that Generates Power for Lighting Greenhouses

    OpenAIRE

    Qi-Xun Zhang; Hai-Ye Yu; Qiu-Yuan Zhang; Zhong-Yuan Zhang; Cheng-Hui Shao; Di Yang

    2015-01-01

    In this study we design and test a novel solar tracking generation system. Moreover, we show that this system could be successfully used as an advanced solar power source to generate power in greenhouses. The system was developed after taking into consideration the geography, climate, and other environmental factors of northeast China. The experimental design of this study included the following steps: (i) the novel solar tracking generation system was measured, and its performance was analyz...

  8. UAV Aerial Survey: Accuracy Estimation for Automatically Generated Dense Digital Surface Model and Orthophoto Plan

    Science.gov (United States)

    Altyntsev, M. A.; Arbuzov, S. A.; Popov, R. A.; Tsoi, G. V.; Gromov, M. O.

    2016-06-01

    A dense digital surface model is one of the products generated from UAV aerial survey data. Today more and more specialized software packages are supplied with modules for generating such models. The procedure for dense digital surface model generation can be completely or partly automated. Due to the lack of a reliable criterion for accuracy estimation, it is rather difficult to judge the validity of such models. One such criterion is mobile laser scanning data, used as a source for the detailed accuracy estimation of dense digital surface model generation. These data may also be used to estimate the accuracy of digital orthophoto plans created from UAV aerial survey data. The results of the accuracy estimation for both kinds of products are presented in the paper.
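The accuracy criterion can be illustrated with a minimal sketch: laser-scanning points act as reference heights, the DSM is sampled at the same planimetric positions, and the RMSE of the height differences is reported. The mock DSM function and point values are invented for illustration:

```python
# Sketch of DSM accuracy estimation against laser-scanning reference
# points (x, y, z_ref). The DSM is mocked as a callable surface.
import math

def rmse(dsm_height, reference_points):
    """RMSE of DSM heights against (x, y, z_ref) reference points."""
    errs = [dsm_height(x, y) - z for x, y, z in reference_points]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

dsm = lambda x, y: 100.0 + 0.01 * x          # mock DSM surface
refs = [(0, 0, 100.0), (10, 0, 100.2), (20, 0, 100.1)]
print(round(rmse(dsm, refs), 3))  # 0.082
```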

  9. An Approach to Automatic Generation of Test Cases Based on Use Cases in the Requirements Phase

    Directory of Open Access Journals (Sweden)

    U.Senthil Kumaran

    2011-01-01

    Full Text Available The main aim of this paper is to generate test cases from use cases. In real-time scenarios we face several issues such as inaccuracy, ambiguity, and incompleteness in requirements, because the requirements are not properly updated after various change requests. This reduces the quality of test cases. To overcome these problems we develop a solution which generates test cases in the early stages of the system development life cycle, capturing the maximum number of requirements. As requirements are best captured by use cases, our focus lies on generating test cases from use case diagrams.

  10. Automatic generation of virtual worlds from architectural and mechanical CAD models

    International Nuclear Information System (INIS)

    Accelerator projects like the XFEL or the planned linear collider TESLA involve extensive architectural and mechanical design work, resulting in a variety of CAD models. The CAD models show different parts of the project, e.g. the individual accelerator components or parts of the building complexes, and they are created and stored by different groups in different formats. A complete CAD model of the accelerator and its buildings is thus difficult to obtain; it would also be extremely large and difficult to handle. This thesis describes the design and prototype development of a tool which automatically creates virtual worlds from different CAD models. The tool enables the user to select an area for visualization on a map and then creates a 3D model of the selected area which can be displayed in a web browser. The thesis first discusses the system requirements and provides some background on data visualization. It then introduces the system architecture, the algorithms and the technologies used, and finally demonstrates the capabilities of the system in two case studies. (orig.)

  11. Effective System for Automatic Bundle Block Adjustment and Ortho Image Generation from Multi Sensor Satellite Imagery

    Science.gov (United States)

    Akilan, A.; Nagasubramanian, V.; Chaudhry, A.; Reddy, D. Rajesh; Sudheer Reddy, D.; Usha Devi, R.; Tirupati, T.; Radhadevi, P. V.; Varadan, G.

    2014-11-01

    Block adjustment is a technique for large-area mapping from images obtained by different remote sensing satellites. The challenge in this process is to handle, at the system level, a huge number of satellite images from different sources with different resolutions and accuracies. This paper explains a system with various tools and techniques to effectively handle the end-to-end chain of large-area mapping and production with a good level of automation, and with provisions for intuitive analysis of the final results in 3D and 2D environments. In addition, the interface for using open-source ortho and DEM references, viz. ETM, SRTM, etc., and for displaying ESRI shapes for the image footprints is explained. Rigorous theory, mathematical modelling, workflow automation and sophisticated software engineering tools are included to ensure high photogrammetric accuracy and productivity. Major building blocks of the block adjustment solution, such as the georeferencing, geo-capturing and geo-modelling tools, are explained in this paper. To provide an optimal bundle block adjustment solution with high-precision results, the system has been optimized in many stages to fully utilize the hardware resources. The robustness of the system is ensured by handling failures in the automatic procedure and by saving the process state at every stage for subsequent restoration from the point of interruption. The results obtained from various stages of the system are presented in the paper.

  12. Automatic Generation of Overlays and Offset Values Based on Visiting Vehicle Telemetry and RWS Visuals

    Science.gov (United States)

    Dunne, Matthew J.

    2011-01-01

    The development of computer software as a tool to generate visual displays has led to an overall expansion of automated computer generated images in the aerospace industry. These visual overlays are generated by combining raw data with pre-existing data on the object or objects being analyzed on the screen. The National Aeronautics and Space Administration (NASA) uses this computer software to generate on-screen overlays when a Visiting Vehicle (VV) is berthing with the International Space Station (ISS). In order for Mission Control Center personnel to be a contributing factor in the VV berthing process, computer software similar to that on the ISS must be readily available on the ground to be used for analysis. In addition, this software must perform engineering calculations and save data for further analysis.

  13. PGPG: An Automatic Generator of Pipeline Design for Programmable GRAPE Systems

    OpenAIRE

    Hamada, Tsuyoshi; Fukushige, Toshiyuki; Makino, Junichiro

    2007-01-01

    We have developed PGPG (Pipeline Generator for Programmable GRAPE), a software package which generates the low-level design of the pipeline processor and the communication software for FPGA-based computing engines (FBCEs). An FBCE typically consists of one or multiple FPGA (Field-Programmable Gate Array) chips and local memory. Here, the term "Field-Programmable" means that one can rewrite the logic implemented on the chip after the hardware is completed, and therefore a single FBCE can be used for calcu...

  14. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    Science.gov (United States)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of the generated tetrahedral elements vary in both the radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for the surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of the wall distance, the element size on the wall, and the element size at the center of the airway lumen. The element sizes on the wall are computed based on the local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and a smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rates. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
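The background-mesh size field described above can be sketched as a function of wall distance; the linear blend between the wall size and the center size is an assumption for illustration, not necessarily the paper's exact formula:

```python
# Sketch of a size field for a non-uniform airway mesh: target
# element size grows from a fine value on the wall to a coarser
# value at the lumen center.
def element_size(wall_distance, radius, size_wall, size_center):
    """Target tetrahedral edge length at `wall_distance` from the
    wall, for an airway of lumen `radius` (linear blend assumed)."""
    t = min(max(wall_distance / radius, 0.0), 1.0)  # 0 at wall, 1 at center
    return size_wall + t * (size_center - size_wall)

# Fine boundary-layer elements near the wall, coarser in the core.
print(element_size(0.0, radius=2.0, size_wall=0.05, size_center=0.4))           # 0.05
print(round(element_size(2.0, radius=2.0, size_wall=0.05, size_center=0.4), 6))  # 0.4
```

In Gmsh or TetGen this size value would be written at the nodes of a background mesh rather than evaluated on the fly.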

  15. On the Automatic Generation of Plans for Life Cycle Assembly Processes

    Energy Technology Data Exchange (ETDEWEB)

    CALTON,TERRI L.

    2000-01-01

    Designing products for easy assembly and disassembly during their entire life cycles, for purposes including product assembly, upgrade, servicing and repair, and disposal, is a process that involves many disciplines. In addition, finding the best solution often requires considering the design as a whole and its intended life cycle. Different goals and manufacturing plan selection criteria, as compared to initial assembly, require revisiting significant fundamental assumptions and methods that underlie current assembly planning techniques. Previous work in this area has been limited either to academic studies of issues in assembly planning or to applied studies of life cycle assembly processes that give no attention to automatic planning. It is believed that merging these two areas will result in a much greater ability to design for, optimize, and analyze life cycle assembly processes. The study of assembly planning is at the very heart of manufacturing research facilities and academic engineering institutions, and in recent years a number of significant advances in the field of assembly planning have been made. These advances have ranged from the development of automated assembly planning systems, such as Sandia's Automated Assembly Analysis System Archimedes 3.0©, to the startling revolution in microprocessors and computer-controlled production tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), flexible manufacturing systems (FMS), and computer-integrated manufacturing (CIM). These results have kindled considerable interest in the study of algorithms for life cycle related assembly processes, which has blossomed into a field of intense interest. The intent of this manuscript is to bring together the fundamental results in this area, so that the unifying principles and underlying concepts of algorithm design may more easily be implemented in practice.

  16. AUTOMATIC GENERATION CONTROL OF TWO AREA POWER SYSTEM WITH AND WITHOUT SMES: FROM CONVENTIONAL TO MODERN AND INTELLIGENT CONTROL

    Directory of Open Access Journals (Sweden)

    SATHANS,

    2011-05-01

    Full Text Available This work proposes a Fuzzy Gain Scheduled Proportional-Integral (FGSPI) controller for automatic generation control (AGC) of a two-equal-area interconnected thermal power system including a Superconducting Magnetic Energy Storage (SMES) unit in both areas. The reheat nonlinearity of the steam turbine is also considered in this study. Simulation results show that the proposed control scheme with SMES is very effective in damping the frequency and tie-line power oscillations caused by load perturbations in one of the areas. To further improve the performance of the controller, a new formulation of the area control error (ACE) is also adopted. The proposed FGSPI controller is compared against a conventional PI controller and a state-feedback LQR controller using the settling times, overshoots and undershoots of the power and frequency deviations as performance indices, and the performance of the proposed controller is found to be better than that of the other two. Simulations have been performed using Matlab®.
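A gain-scheduled PI law for AGC can be sketched as follows; the two-level schedule on |ACE| is a crude stand-in for the paper's fuzzy gain scheduling, and all gains, breakpoints, and the discrete-time form are invented for illustration:

```python
# Sketch of a gain-scheduled PI supplementary controller for AGC:
# the PI gains are selected from the magnitude of the area control
# error (ACE), mimicking what a fuzzy scheduler infers from its
# rule base.
def scheduled_gains(ace):
    # Larger error -> stronger proportional and integral action.
    return (0.8, 0.3) if abs(ace) > 0.5 else (0.4, 0.1)

def pi_step(ace, integral, dt=0.1):
    """One discrete-time PI update; returns (control signal, integral)."""
    kp, ki = scheduled_gains(ace)
    integral += ace * dt
    u = -(kp * ace + ki * integral)   # supplementary control signal
    return u, integral

u, integ = pi_step(ace=1.0, integral=0.0)
print(round(u, 3))  # -0.83
```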

  17. Automatic generation of analogy questions for student assessment: an Ontology-based approach

    Directory of Open Access Journals (Sweden)

    Bijan Parsia

    2012-08-01

    Full Text Available Different computational models for generating analogies of the form “A is to B as C is to D” have been proposed over the past 35 years. However, analogy generation is a challenging problem that requires further research. In this article, we present a new approach for generating analogies in Multiple Choice Question (MCQ format that can be used for students’ assessment. We propose to use existing high-quality ontologies as a source for mining analogies to avoid the classic problem of hand-coding concepts in previous methods. We also describe the characteristics of a good analogy question and report on experiments carried out to evaluate the new approach.

  18. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available GLOBAL AP ANATOMIC TOTAL SHOULDER SYSTEM METHODIST HOSPITAL PHILADELPHIA, PA April 17, 2008 00:00:10 ANNOUNCER: ... you'll be able to watch a live global AP anatomic total shoulder surgery from Methodist Hospital ...

  19. Anatomical integration of newly generated dentate granule neurons following traumatic brain injury in adult rats and its association to cognitive recovery.

    Science.gov (United States)

    Sun, Dong; McGinn, Melissa J; Zhou, Zhengwen; Harvey, H Ben; Bullock, M Ross; Colello, Raymond J

    2007-03-01

    The hippocampus is particularly vulnerable to traumatic brain injury (TBI), the consequences of which are manifested as learning and memory deficits. Following injury, substantive spontaneous cognitive recovery occurs, suggesting that innate repair mechanisms exist in the brain; however, the underlying mechanism is largely unknown. The existence of neural stem cells in the adult hippocampal dentate gyrus (DG) and their proliferative response following injury led us to speculate that neurogenesis may contribute to cognitive recovery following TBI. To test this, we first examined the time course of cognitive recovery following lateral fluid percussion injury in rats. Cognitive deficits were tested at 11-15, 26-30 or 56-60 days post-injury using the Morris Water Maze. At 11-15 and 26-30 days post-injury, animals displayed significant cognitive deficits, which were no longer apparent at 56-60 days post-TBI, suggesting innate cognitive recovery by 56-60 days. We next examined the proliferative response, maturational fate and integration of newly generated cells in the DG following injury. Specifically, rats received BrdU at 2-5 days post-injury followed by Fluorogold (FG) injection into the CA3 region at 56 days post-TBI. We found that the majority of BrdU+ cells that survived for 10 weeks became dentate granule neurons, as assessed by NeuN and calbindin labeling, with approximately 30% labeled with FG, demonstrating their integration into the hippocampus. Additionally, some BrdU+ cells were synaptophysin-positive, suggesting they received synaptic input. Collectively, our data demonstrate the extensive anatomical integration of newborn dentate granule neurons at the time when innate cognitive recovery is observed. PMID:17198703

  20. Automatic code generation enables nuclear gradient computations for fully internally contracted multireference theory

    CERN Document Server

    MacLeod, Matthew K

    2015-01-01

    Analytical nuclear gradients for fully internally contracted complete active space second-order perturbation theory (CASPT2) are reported. This implementation has been realized by an automated code generator that can handle spin-free formulas for the CASPT2 energy and its derivatives with respect to variations of molecular orbitals and reference coefficients. The underlying complete active space self-consistent field and the so-called Z-vector equations are solved using density fitting. With full internal contraction the size of first-order wave functions scales polynomially with the number of active orbitals. The CASPT2 gradient program and the code generator are both publicly available. This work enables the CASPT2 geometry optimization of molecules as complex as those investigated by respective single-point calculations.

  1. A rule-based expert system for automatic control rod pattern generation for boiling water reactors

    International Nuclear Information System (INIS)

    This paper reports on an expert system that has been developed for generating control rod patterns. The knowledge is encoded as IF-THEN rules. The inference engine uses the Rete pattern matching algorithm to match facts against rule premises, and conflict resolution strategies to make the system function intelligently. A forward-chaining mechanism is adopted in the inference engine. The system is implemented in the Common Lisp programming language. A three-dimensional core simulation model performs the core status and burnup calculations. The system is successfully demonstrated by generating control rod programming for the 2894-MW (thermal) Kuosheng nuclear power plant in Taiwan. The computing time is tremendously reduced compared to programs using mathematical methods.
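The forward-chaining cycle can be illustrated with a naive fixed-point loop; a real Rete network indexes premises so it avoids re-matching every rule on each cycle, and the rod-pattern facts and rules below are invented for illustration:

```python
# Minimal sketch of forward chaining over IF-THEN rules: a rule
# fires when all its premises are in the fact base, adding its
# conclusion, until no rule can fire.
rules = [
    ({"axial_peak_high", "rod_bank_A_deep"}, "withdraw_bank_A_one_notch"),
    ({"withdraw_bank_A_one_notch"}, "recompute_core_power"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"axial_peak_high", "rod_bank_A_deep"}, rules)
print("recompute_core_power" in result)  # True
```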

  2. PGPG: An Automatic Generator of Pipeline Design for Programmable GRAPE Systems

    CERN Document Server

    Hamada, T; Makino, J; Hamada, Tsuyoshi; Fukushige, Toshiyuki; Makino, Junichiro

    2007-01-01

    We have developed PGPG (Pipeline Generator for Programmable GRAPE), a software which generates the low-level design of the pipeline processor and communication software for FPGA-based computing engines (FBCEs). An FBCE typically consists of one or multiple FPGA (Field-Programmable Gate Array) chips and local memory. Here, the term "Field-Programmable" means that one can rewrite the logic implemented to the chip after the hardware is completed, and therefore a single FBCE can be used for calculation of various functions, for example pipeline processors for gravity, SPH interaction, or image processing. The main problem with FBCEs is that the user need to develop the detailed hardware design for the processor to be implemented to FPGA chips. In addition, she or he has to write the control logic for the processor, communication and data conversion library on the host processor, and application program which uses the developed processor. These require detailed knowledge of hardware design, a hardware description ...

  3. An expert system for automatic mesh generation for Sn particle transport simulation in parallel environment

    International Nuclear Information System (INIS)

    An expert system for generating an effective mesh distribution for SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inferring an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering the problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index predicts the performance of an algorithm depending on the computing environment and resources: a large index indicates a high-granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)
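The parallel-performance-index follows directly from its definition; the candidate decompositions and their numbers below are illustrative, not from the paper:

```python
# Sketch of the parallel-performance-index (PPI): the ratio of
# granularity (local work per processor) to degree-of-coupling
# (communication between processors). Higher is better.
def ppi(granularity, degree_of_coupling):
    return granularity / degree_of_coupling

# Compare two hypothetical domain decompositions of an SN sweep.
candidates = {
    "angular_decomposition": ppi(granularity=8.0e6, degree_of_coupling=2.0e3),
    "spatial_decomposition": ppi(granularity=1.0e6, degree_of_coupling=4.0e3),
}
best = max(candidates, key=candidates.get)
print(best)  # angular_decomposition
```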

  4. Automatic generation of synthesizable hardware implementation from high level RVC-cal description

    OpenAIRE

    Jerbi, Khaled; Raulet, Mickaël; Deforges, Olivier; Abid, Mohamed

    2012-01-01

    International audience Data processing algorithms are increasing in complexity, especially for image and video coding. Therefore, hardware development directly in hardware description languages (HDLs) such as VHDL or Verilog is a difficult task. Current research in this context is introducing new methodologies to automate the generation of such descriptions. In our work we adopted a high-level, target-independent language called CAL (Caltrop Actor Language). This language is associa...

  5. A New Model for Automatic Generation of Plan Libraries for Plan Recognition

    OpenAIRE

    Marchetta, Martín G.; Raymundo Q. Forradellas

    2010-01-01

    In the context of Computer Aided Process Planning (CAPP), feature recognition as well as the generation of manufacturing process plans are very difficult problems. The selection of the best manufacturing process plan usually involves not only measurable factors, but also idiosyncrasies, preferences and the know-how of both the company and the manufacturing engineer. In this scenario, mixed-initiative techniques such as plan recognition, where both human users and intelligent agent...

  6. On-line multiobjective automatic control system generation by evolutionary algorithms

    OpenAIRE

    Stewart, Paul; Stone, D. A.; Fleming, P.A.

    2006-01-01

    Evolutionary algorithms are applied to the on-line generation of servo-motor control systems. In this paper, the evolving population of controllers is evaluated at run-time via hardware in the loop, rather than on a simulated model. Disturbances are also introduced at run-time in order to produce robust performance. Multiobjective optimisation of both PI and Fuzzy Logic controllers is considered. Finally an on-line implementation of Genetic Programming is presented based around the Simulin...

  7. Automatic Generation of Individual Finite-Element Models for Computational Fluid Dynamics and Computational Structure Mechanics Simulations in the Arteries

    Science.gov (United States)

    Hazer, D.; Schmidt, E.; Unterhinninghofen, R.; Richter, G. M.; Dillmann, R.

    2009-08-01

    Abnormal hemodynamics and biomechanics of blood flow and vessel wall conditions in the arteries may result in severe cardiovascular diseases, which arise from complex flow patterns and fatigue of the vessel wall and are a prevalent cause of high mortality each year. Computational Fluid Dynamics (CFD), Computational Structure Mechanics (CSM) and Fluid-Structure Interaction (FSI) have become efficient tools for modeling individual hemodynamics and biomechanics, as well as their interaction, in the human arteries. The computations allow non-invasive simulation of the patient-specific physical parameters of the blood flow and the vessel wall needed for efficient minimally invasive treatment. The numerical simulations are based on the Finite Element Method (FEM) and require exact, individual mesh models. In the present study, we developed a numerical tool to automatically generate complex patient-specific Finite Element (FE) mesh models from image-based geometries of healthy and diseased vessels. The mesh generation is optimized by integrating mesh control functions for curvature, boundary layers and mesh distribution inside the computational domain. The needed mesh parameters are acquired from a computational grid analysis which ensures mesh-independent and stable simulations. Further, the generated models include the appropriate FE sets necessary for the definition of individual boundary conditions, required to solve the system of nonlinear partial differential equations governing the fluid and solid domains. Based on the results, we have performed computational blood flow and vessel wall simulations in patient-specific aortic models, providing physical insight into the pathological vessel parameters. Automatic mesh generation with individual awareness in terms of geometry and conditions is a prerequisite for performing fast, accurate and realistic FEM-based computations of hemodynamics and biomechanics in the

  8. GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare

    Directory of Open Access Journals (Sweden)

    Rahman Ali

    2015-07-01

    Full Text Available A wide array of biomedical data is generated and made available to healthcare experts. However, due to the diverse nature of the data, it is difficult to predict outcomes from it. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources, and generates a unified dataset with a "data modeler" tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources for generating the unified diabetes mellitus dataset include clinical trial information, a social media interaction dataset and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough set theory based rule creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that the tool, on average, reduces the time effort of the experts and knowledge engineer by 94.1% while creating unified datasets.
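The priority-based resolution of overlapping attributes might be sketched as follows; the source names, attributes, and priority ordering are illustrative, not the GUDM schema:

```python
# Sketch: when several sources define the same attribute for a
# record, the value from the highest-priority source wins.
def unify(records_by_source, priority):
    """records_by_source: {source: {attr: value}};
    priority: sources ordered from highest to lowest."""
    unified = {}
    for source in reversed(priority):                  # low priority first,
        unified.update(records_by_source.get(source, {}))  # high overwrites
    return unified

sources = {
    "clinical_trial": {"hba1c": 7.1, "weight_kg": 82},
    "sensor":         {"steps_per_day": 4300, "weight_kg": 80},
    "social_media":   {"mood": "low"},
}
unified = unify(sources, priority=["clinical_trial", "sensor", "social_media"])
print(unified)  # {'mood': 'low', 'steps_per_day': 4300, 'weight_kg': 82, 'hba1c': 7.1}
```

Here the clinical-trial weight overrides the sensor weight because the clinical source has higher priority.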

  9. An Anatomically Validated Brachial Plexus Contouring Method for Intensity Modulated Radiation Therapy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be [Department of Anatomy, Ghent University, Ghent (Belgium); Department of Radiotherapy, Ghent University, Ghent (Belgium); Audenaert, Emmanuel [Department of Physical Medicine and Orthopedic Surgery, Ghent University, Ghent (Belgium); Speleers, Bruno; Vercauteren, Tom; Mulliez, Thomas [Department of Radiotherapy, Ghent University, Ghent (Belgium); Vandemaele, Pieter; Achten, Eric [Department of Radiology, Ghent University, Ghent (Belgium); Kerckaert, Ingrid; D' Herde, Katharina [Department of Anatomy, Ghent University, Ghent (Belgium); De Neve, Wilfried [Department of Radiotherapy, Ghent University, Ghent (Belgium); Van Hoof, Tom [Department of Anatomy, Ghent University, Ghent (Belgium)

    2013-11-15

    Purpose: To develop contouring guidelines for the brachial plexus (BP) using anatomically validated cadaver datasets. Magnetic resonance imaging (MRI) and computed tomography (CT) were used to obtain detailed visualizations of the BP region, with the goal of achieving maximal inclusion of the actual BP in a small contoured volume while also accommodating for anatomic variations. Methods and Materials: CT and MRI were obtained for 8 cadavers positioned for intensity modulated radiation therapy. 3-dimensional reconstructions of soft tissue (from MRI) and bone (from CT) were combined to create 8 separate enhanced CT project files. Dissection of the corresponding cadavers anatomically validated the reconstructions created. Seven enhanced CT project files were then automatically fitted, separately in different regions, to obtain a single dataset of superimposed BP regions that incorporated anatomic variations. From this dataset, improved BP contouring guidelines were developed. These guidelines were then applied to the 7 original CT project files and also to 1 additional file, left out from the superimposing procedure. The percentage of BP inclusion was compared with the published guidelines. Results: The anatomic validation procedure showed a high level of conformity for the BP regions examined between the 3-dimensional reconstructions generated and the dissected counterparts. Accurate and detailed BP contouring guidelines were developed, which provided corresponding guidance for each level in a clinical dataset. An average margin of 4.7 mm around the anatomically validated BP contour is sufficient to accommodate for anatomic variations. Using the new guidelines, 100% inclusion of the BP was achieved, compared with a mean inclusion of 37.75% when published guidelines were applied. Conclusion: Improved guidelines for BP delineation were developed using combined MRI and CT imaging with validation by anatomic dissection.

  10. An Anatomically Validated Brachial Plexus Contouring Method for Intensity Modulated Radiation Therapy Planning

    International Nuclear Information System (INIS)

    Purpose: To develop contouring guidelines for the brachial plexus (BP) using anatomically validated cadaver datasets. Magnetic resonance imaging (MRI) and computed tomography (CT) were used to obtain detailed visualizations of the BP region, with the goal of achieving maximal inclusion of the actual BP in a small contoured volume while also accommodating for anatomic variations. Methods and Materials: CT and MRI were obtained for 8 cadavers positioned for intensity modulated radiation therapy. 3-dimensional reconstructions of soft tissue (from MRI) and bone (from CT) were combined to create 8 separate enhanced CT project files. Dissection of the corresponding cadavers anatomically validated the reconstructions created. Seven enhanced CT project files were then automatically fitted, separately in different regions, to obtain a single dataset of superimposed BP regions that incorporated anatomic variations. From this dataset, improved BP contouring guidelines were developed. These guidelines were then applied to the 7 original CT project files and also to 1 additional file, left out from the superimposing procedure. The percentage of BP inclusion was compared with the published guidelines. Results: The anatomic validation procedure showed a high level of conformity for the BP regions examined between the 3-dimensional reconstructions generated and the dissected counterparts. Accurate and detailed BP contouring guidelines were developed, which provided corresponding guidance for each level in a clinical dataset. An average margin of 4.7 mm around the anatomically validated BP contour is sufficient to accommodate for anatomic variations. Using the new guidelines, 100% inclusion of the BP was achieved, compared with a mean inclusion of 37.75% when published guidelines were applied. Conclusion: Improved guidelines for BP delineation were developed using combined MRI and CT imaging with validation by anatomic dissection

  11. Communication: Automatic code generation enables nuclear gradient computations for fully internally contracted multireference theory

    International Nuclear Information System (INIS)

    Analytical nuclear gradients for fully internally contracted complete active space second-order perturbation theory (CASPT2) are reported. This implementation has been realized by an automated code generator that can handle spin-free formulas for the CASPT2 energy and its derivatives with respect to variations of molecular orbitals and reference coefficients. The underlying complete active space self-consistent field and the so-called Z-vector equations are solved using density fitting. The implementation has been applied to the vertical and adiabatic ionization potentials of the porphin molecule to illustrate its capability

  12. Communication: Automatic code generation enables nuclear gradient computations for fully internally contracted multireference theory

    Science.gov (United States)

    MacLeod, Matthew K.; Shiozaki, Toru

    2015-02-01

    Analytical nuclear gradients for fully internally contracted complete active space second-order perturbation theory (CASPT2) are reported. This implementation has been realized by an automated code generator that can handle spin-free formulas for the CASPT2 energy and its derivatives with respect to variations of molecular orbitals and reference coefficients. The underlying complete active space self-consistent field and the so-called Z-vector equations are solved using density fitting. The implementation has been applied to the vertical and adiabatic ionization potentials of the porphin molecule to illustrate its capability.

  14. GRACE/SUSY Automatic Generation of Tree Amplitudes in the Minimal Supersymmetric Standard Model

    OpenAIRE

    Fujimoto, J

    2002-01-01

    GRACE/SUSY is a program package for generating the tree-level amplitude and evaluating the corresponding cross section of processes of the minimal supersymmetric extension of the standard model (MSSM). The Higgs potential adopted in the system, however, is assumed to have a more general form indicated by the two-Higgs-doublet model. This system is an extension of GRACE for the standard model (SM) of the electroweak and strong interactions. For a given MSSM process the Feynman graphs and amplit...

  15. Intra-Hour Dispatch and Automatic Generator Control Demonstration with Solar Forecasting - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Coimbra, Carlos F. M. [Univ. of California, San Diego, CA (United States)]

    2016-02-25

    In this project we address multiple resource integration challenges associated with increasing levels of solar penetration that arise from the variability and uncertainty in solar irradiance. We model the SMUD service region as its own balancing region, and develop an integrated, real-time operational tool that takes solar-load forecast uncertainties into consideration and commits optimal energy resources and reserves for intra-hour and intra-day decisions. The primary objectives of this effort are to reduce power system operation cost by committing the appropriate amount of energy resources and reserves, and to provide operators with a prediction of the generation fleet's behavior in real time for realistic PV penetration scenarios. The proposed methodology includes the following steps: clustering analysis on the expected solar variability per region for the SMUD system, day-ahead (DA) and real-time (RT) load forecasts for the entire service area, 1 year of intra-hour CPR forecasts for cluster centers, 1 year of smart re-forecasting CPR forecasts in real time for determination of irreducible errors, and uncertainty quantification of integrated solar-load for both distributed and central-station (selected locations within the service region) PV generation.

  16. Automatic mechanism generation for pyrolysis of di-tert-butyl sulfide.

    Science.gov (United States)

    Class, Caleb A; Liu, Mengjie; Vandeputte, Aäron G; Green, William H

    2016-08-01

    The automated Reaction Mechanism Generator (RMG), using rate parameters derived from ab initio CCSD(T) calculations, is used to build reaction networks for the thermal decomposition of di-tert-butyl sulfide. Simulation results were compared with data from pyrolysis experiments with and without the addition of a cyclohexene inhibitor. Purely free-radical chemistry did not properly explain the reactivity of di-tert-butyl sulfide, as the previous experimental work showed that the sulfide decomposed via first-order kinetics in the presence and absence of the radical inhibitor. The concerted unimolecular decomposition of di-tert-butyl sulfide to form isobutene and tert-butyl thiol was found to be a key reaction in both cases, as it explained the first-order sulfide decomposition. The computer-generated kinetic model predictions quantitatively match most of the experimental data, but the model is apparently missing pathways for radical-induced decomposition of thiols to form elemental sulfur. Cyclohexene has a significant effect on the composition of the radical pool, and this led to dramatic changes in the resulting product distribution. PMID:27431650
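
The first-order kinetics invoked above has a simple quantitative signature: the sulfide concentration decays exponentially, so ln[S] versus t is a straight line with slope -k. A small numerical illustration of that check (the rate constant and time scale are hypothetical, not values from the study):

```python
import numpy as np

# First-order decomposition: d[S]/dt = -k[S]  =>  [S](t) = [S]0 * exp(-k t).
k = 0.05          # 1/s, illustrative rate constant
S0 = 1.0          # initial concentration (arbitrary units)
t = np.linspace(0.0, 60.0, 601)
S = S0 * np.exp(-k * t)

# A log-linear fit recovers k -- the standard diagnostic that a species
# decomposes via first-order kinetics, as di-tert-butyl sulfide does here.
k_fit = -np.polyfit(t, np.log(S), 1)[0]
print(round(k_fit, 6))
```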

  17. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models

    Science.gov (United States)

    Abayowa, Bernard O.; Yilmaz, Alper; Hardie, Russell C.

    2015-08-01

    This paper presents a framework for automatic registration of both the optical and 3D structural information extracted from oblique aerial imagery to a Light Detection and Ranging (LiDAR) point cloud without prior knowledge of an initial alignment. The framework employs a coarse to fine strategy in the estimation of the registration parameters. First, a dense 3D point cloud and the associated relative camera parameters are extracted from the optical aerial imagery using a state-of-the-art 3D reconstruction algorithm. Next, a digital surface model (DSM) is generated from both the LiDAR and the optical imagery-derived point clouds. Coarse registration parameters are then computed from salient features extracted from the LiDAR and optical imagery-derived DSMs. The registration parameters are further refined using the iterative closest point (ICP) algorithm to minimize global error between the registered point clouds. The novelty of the proposed approach is in the computation of salient features from the DSMs, and the selection of matching salient features using geometric invariants coupled with Normalized Cross Correlation (NCC) match validation. The feature extraction and matching process enables the automatic estimation of the coarse registration parameters required for initializing the fine registration process. The registration framework is tested on a simulated scene and aerial datasets acquired in real urban environments. Results demonstrates the robustness of the framework for registering optical and 3D structural information extracted from aerial imagery to a LiDAR point cloud, when co-existing initial registration parameters are unavailable.
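
The fine-registration step described above, ICP, alternates a nearest-neighbour correspondence search with a closed-form rigid-transform estimate. A minimal point-to-point sketch (no outlier rejection or convergence test, unlike a production implementation; the toy grid and transform below are invented for the check):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: rigidly align `source` to `target`."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        # 1. Correspondences: nearest target point for each source point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Best rigid transform via the Kabsch/SVD solution.
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T            # reflection-safe rotation
        t = mu_t - R @ mu_s
        # 3. Apply and iterate.
        src = src @ R.T + t
    return src

# Toy check: recover a known small rotation + translation of a point grid.
g = np.arange(5.0)
target = np.stack(np.meshgrid(g, g, g), -1).reshape(-1, 3)
theta = 0.02
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.05, -0.03, 0.02])
aligned = icp(source, target)
print(np.abs(aligned - target).max() < 1e-8)
```

Because ICP only converges locally, the coarse salient-feature stage described in the abstract is what supplies the initial alignment that makes this refinement reliable.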

  18. Naive Bayesian Applied in Automatic Test Case Generation

    Institute of Scientific and Technical Information of China (English)

    李欣; 张聪; 罗宪

    2012-01-01

    A method that uses Naive Bayes as the core algorithm to generate test cases for automated testing is proposed. Aiming at automated testing, the method introduces the idea of using Naive Bayes to classify the randomly generated test cases. Experimental results show that this is a feasible way to generate test cases.
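
The idea summarized above, classifying randomly generated test cases with Naive Bayes, can be sketched with a tiny Gaussian Naive Bayes classifier. The features and labels below are invented for illustration; the paper does not specify its feature set here:

```python
import numpy as np

class GaussianNB:
    """Tiny Gaussian Naive Bayes: each feature is modelled as an
    independent normal distribution per class."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log P(c|x) is proportional to log P(c) + sum_i log N(x_i; mu, var)
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None] - self.mu[:, None, :]) ** 2
                     / self.var[:, None, :])
        return self.classes[np.argmax(ll.sum(-1)
                                      + np.log(self.prior)[:, None], 0)]

# Hypothetical scenario: random test cases described by two features
# (input length, nesting depth); class 1 = "likely fault-revealing".
rng = np.random.default_rng(1)
X0 = rng.normal([2.0, 1.0], 0.5, size=(100, 2))   # mundane cases
X1 = rng.normal([8.0, 4.0], 0.5, size=(100, 2))   # interesting cases
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
clf = GaussianNB().fit(X, y)
print(clf.predict(np.array([[7.5, 3.8], [2.2, 0.9]])))
```

A generator can then keep only the cases the classifier flags as interesting, which is the filtering role the abstract assigns to Naive Bayes.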

  19. Wind power integration into the automatic generation control of power systems with large-scale wind power

    Directory of Open Access Journals (Sweden)

    Abdul Basit

    2014-10-01

    Full Text Available Transmission system operators have an increased interest in the active participation of wind power plants (WPPs) in the power balance control of power systems with large wind power penetration. The emphasis in this study is on the integration of WPPs into the automatic generation control (AGC) of the power system. The present paper proposes a coordinated control strategy for the AGC between combined heat and power plants (CHPs) and WPPs to enhance the security and the reliability of power system operation in the case of large wind power penetration. The proposed strategy, described and exemplified for the future Danish power system, takes the hour-ahead regulating power plan for generation and power exchange with neighbouring power systems into account. The performance of the proposed strategy for coordinated secondary control is assessed and discussed by means of simulations for different possible future scenarios, when wind power production in the power system is high and conventional production from CHPs is at a minimum level. The investigation results of the proposed control strategy have shown that the WPPs can actively help the AGC and reduce the real-time power imbalance in the power system by down-regulating their production when CHPs are unable to provide the required response.

  20. Performance of automatic generation control mechanisms with large-scale wind power

    International Nuclear Information System (INIS)

    The unpredictability and variability of wind power increasingly challenges real-time balancing of supply and demand in electric power systems. In liberalised markets, balancing is a responsibility jointly held by the TSO (real-time power balancing) and PRPs (energy programs). In this paper, a procedure is developed for the simulation of power system balancing and the assessment of AGC performance in the presence of large-scale wind power, using the Dutch control zone as a case study. The simulation results show that the performance of existing AGC-mechanisms is adequate for keeping ACE within acceptable bounds. At higher wind power penetrations, however, the capabilities of the generation mix are increasingly challenged and additional reserves are required at the same level. (au)
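
In AGC terms, the area control error combines the tie-line flow deviation with a frequency-bias term, ACE_i = dP_tie,i + B_i * df_i, and the secondary controller integrates it to zero. A single-area discrete-time sketch (no tie-line term, so ACE reduces to B * df); all per-unit parameters are illustrative, not taken from the Dutch case study:

```python
# Single-area load-frequency model: primary droop plus secondary
# (integral) control driving the area control error ACE = B * df to zero.
M, D = 10.0, 1.0      # inertia and load-damping constants (p.u.)
R = 0.05              # governor droop
B = 1.0 / R + D       # frequency bias factor
Ki = 0.5              # integral (secondary) gain
Tg = 0.2              # combined governor/turbine time constant (s)

dt, steps = 0.01, 20000   # 200 s of simulated time
df, pm, pc = 0.0, 0.0, 0.0
dPL = 0.1                 # step load increase at t = 0 (p.u.)

for _ in range(steps):
    ace = B * df
    pc -= Ki * ace * dt                    # secondary control integrates -ACE
    pm += dt / Tg * ((pc - df / R) - pm)   # governor + turbine lag
    df += dt / M * (pm - dPL - D * df)     # swing equation

# With integral action the frequency deviation returns to zero and the
# mechanical power settles at the new load level.
print(abs(df) < 1e-4, abs(pm - dPL) < 1e-3)
```

Without the integral term, primary droop alone would leave a steady-state frequency offset of roughly dPL / (1/R + D), which is exactly the residual that AGC exists to remove.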

  1. Automatic Multi-GPU Code Generation applied to Simulation of Electrical Machines

    CERN Document Server

    Rodrigues, Antonio Wendell De Oliveira; Dekeyser, Jean-Luc; Menach, Yvonnick Le

    2011-01-01

    Electrical and electronic engineering has used parallel programming to solve its large-scale complex problems for performance reasons. However, as parallel programming requires a non-trivial distribution of tasks and data, developers find it hard to implement their applications effectively. Thus, in order to reduce design complexity, we propose an approach to generate code for hybrid architectures (e.g. CPU + GPU) using OpenCL, an open standard for parallel programming of heterogeneous systems. This approach is based on Model Driven Engineering (MDE) and the MARTE profile, a standard proposed by the Object Management Group (OMG). The aim is to provide resources to non-specialists in parallel programming to implement their applications. Moreover, thanks to model reuse capacity, we can add/change functionalities or the target architecture. Consequently, this approach helps industries to achieve their time-to-market constraints, and experimental tests confirm performance improvements in multi-GPU environmen...

  2. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    Science.gov (United States)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. 
Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the
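
The operator-overloading approach whose overhead is discussed above can be shown in a few lines: a dual number carries a value and a derivative through every arithmetic operation, which is exactly why it works on any code that accepts the overloaded type, and also why it pays a per-operation cost that source transformation avoids. A minimal forward-mode sketch:

```python
import math

class Dual:
    """Forward-mode AD by operator overloading: every operation
    propagates (value, derivative) pairs."""

    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def sin(x):
    # Chain rule for an elementary function.
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# d/dx [x*sin(x) + 3x] at x = 2, with no symbolic or finite differences.
x = Dual(2.0, 1.0)            # seed derivative dx/dx = 1
y = x * sin(x) + 3 * x
print(y.val, y.der)
```

A source-transformation tool such as TAF would instead emit a new derivative routine at compile time, trading this runtime bookkeeping for restrictions on the language features it can parse.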

  3. MATHEMATICAL MODEL OF TRANSIENT PROCESSES PERTAINING TO THREE-IMPULSE SYSTEM FOR AUTOMATIC CONTROL OF STEAM GENERATOR WATER SUPPLY ON LOAD RELIEF

    Directory of Open Access Journals (Sweden)

    G. T. Kulakov

    2014-01-01

    Full Text Available The paper analyzes the operation of the standard three-impulse automatic control system (ACS) for steam generator water supply. A mathematical model for checking its operational ability on load relief has been developed, which makes it possible to determine maximum deviations of the water level without execution of actual tests and any corrections in the plants for starting-up of technological protection systems in accordance with the water level in the drum. The paper reveals the reasons for static regulation errors in the standard automatic control system when handling internal and external disturbances caused by changes in superheated steam flow. The practical significance of modernizing the automatic control system for steam generator water supply has been substantiated.

  4. Production optimization of 99Mo/99mTc zirconium molybate gel generators at semi-automatic device: DISIGEG

    International Nuclear Information System (INIS)

    DISIGEG is a synthesis installation of zirconium 99Mo-molybdate gels for 99Mo/99mTc generator production, which has been designed, built and installed at the ININ. The device consists of a synthesis reactor and five systems controlled via keyboard: (1) raw material access, (2) chemical air stirring, (3) gel drying by air and infrared heating, (4) moisture removal and (5) gel extraction. DISIGEG operation is described, and the effects of the drying conditions of zirconium 99Mo-molybdate gels on 99Mo/99mTc generator performance were evaluated, as well as some physical-chemical properties of these gels. The results reveal that the temperature, time and air flow applied during the drying process directly affect zirconium 99Mo-molybdate gel generator performance. All gels prepared have a similar chemical structure, probably constituted by a three-dimensional network based on zirconium pentagonal bipyramids and molybdenum octahedra. Basic structural variations cause a change in gel porosity and permeability, favouring or inhibiting 99mTcO4− diffusion into the matrix. The 99mTcO4− eluates produced by 99Mo/99mTc zirconium 99Mo-molybdate gel generators prepared in DISIGEG, air dried at 80 °C for 5 h using an air flow of 90 mm, satisfied all the Pharmacopoeia regulations: 99mTc yield between 70–75%, 99Mo breakthrough less than 3×10−3%, radiochemical purity of about 97%, and sterile, pyrogen-free eluates with a pH of 6. - Highlights: ► 99Mo/99mTc generators based on 99Mo-molybdate gels were synthesized at a semi-automatic device. ► Generator performance depends on the synthesis conditions of the zirconium 99Mo-molybdate gel. ► 99mTcO4− diffusion and yield into the generator depend on gel porosity and permeability. ► 99mTcO4− eluates satisfy Pharmacopoeia regulations and can be applied for clinical use.

  5. Automatic generation control of multi-area power systems with diverse energy sources using Teaching Learning Based Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2016-03-01

    Full Text Available This paper presents the design and analysis of a Proportional-Integral-Double Derivative (PIDD) controller for Automatic Generation Control (AGC) of multi-area power systems with diverse energy sources using the Teaching Learning Based Optimization (TLBO) algorithm. At first, a two-area reheat thermal power system with an appropriate Generation Rate Constraint (GRC) is considered. The design problem is formulated as an optimization problem and TLBO is employed to optimize the parameters of the PIDD controller. The superiority of the proposed TLBO-based PIDD controller has been demonstrated by comparing the results with recently published optimization techniques such as hybrid Firefly Algorithm and Pattern Search (hFA-PS), Firefly Algorithm (FA), Bacteria Foraging Optimization Algorithm (BFOA), Genetic Algorithm (GA) and conventional Ziegler-Nichols (ZN) for the same interconnected power system. Also, the proposed approach has been extended to a two-area power system with diverse sources of generation like thermal, hydro, wind and diesel units. The system model includes boiler dynamics, GRC and Governor Dead Band (GDB) non-linearity. It is observed from simulation results that the proposed approach provides better dynamic responses than results recently published in the literature. Further, the study is extended to a three unequal-area thermal power system with different controllers in each area, and the results are compared with a published FA-optimized PID controller for the same system under study. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions in the range of ±25% from their nominal values to test the robustness.
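
TLBO, used above to tune the PIDD gains, needs no algorithm-specific parameters beyond population size and iteration count: a teacher phase pulls the class toward the best solution, and a learner phase exchanges information pairwise. A minimal sketch on a stand-in objective; in the paper the objective would be a time-domain cost of the AGC response, so everything below is illustrative:

```python
import numpy as np

def tlbo(f, bounds, pop=20, iters=150, seed=0):
    """Minimal Teaching-Learning-Based Optimization over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    F = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        # Teacher phase: move everyone toward the best learner.
        teacher = X[F.argmin()]
        Tf = rng.integers(1, 3)                    # teaching factor: 1 or 2
        Xn = np.clip(X + rng.random(X.shape) * (teacher - Tf * X.mean(0)),
                     lo, hi)
        Fn = np.apply_along_axis(f, 1, Xn)
        better = Fn < F                            # greedy acceptance
        X[better], F[better] = Xn[better], Fn[better]
        # Learner phase: learn from a randomly chosen peer.
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            step = (X[i] - X[j]) if F[i] < F[j] else (X[j] - X[i])
            xi = np.clip(X[i] + rng.random(len(lo)) * step, lo, hi)
            fi = f(xi)
            if fi < F[i]:
                X[i], F[i] = xi, fi
    return X[F.argmin()], F.min()

# Toy objective standing in for the controller-tuning cost function.
sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = tlbo(sphere, (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(best_f < 1e-3)
```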

  6. Expert system for the automatic analysis of the Eddy current signals from the monitoring of vapor generators of a PWR, type reactor

    International Nuclear Information System (INIS)

    The automatization of the monitoring of the steam generator tubes required some developments in the field of data processing. The monitoring is performed by means of Eddy current tests. Improvements in signal processing and in pattern recognition associated to the artificial intelligence techniques induced EDF (French Electricity Company) to develop an automatic signal processing system. The system, named EXTRACSION (French acronym for Expert System for the Processing and classification of Signals of Nuclear Nature), insures the coherence between the different fields of knowledge (metallurgy, measurement, signals) during data processing by applying an object oriented representation

  7. Performance analysis of automatic generation control of interconnected power systems with delayed mode operation of area control error

    Directory of Open Access Journals (Sweden)

    Janardan Nanda

    2015-05-01

    Full Text Available This study presents automatic generation control (AGC) of interconnected power systems comprising two thermal and one hydro area having integral controllers. Emphasis is given to a delay in the area control error for the actuation of the supplementary controller, and to examining its impact on the dynamic response against no delay, which is usually the practice. The analysis is based on a 50% loading condition in all the areas. The system performance is examined considering a 1% step load perturbation. Results reveal that delayed mode operation provides better system dynamic performance than that obtained without delay and has several distinct merits for the governor. The delay is linked with a reduction in wear and tear of the secondary controller and hence increases the life of the governor. The controller gains are optimised by particle swarm optimisation. The performance of delayed mode operation of AGC at other loading conditions is also analysed. An attempt has also been made to find the impact of weights for different components in a cost function used to optimise the controller gains. A modified cost function having different weights for different components, when used for controller gain optimisation, improves the system performance.

  8. Automatic Code Generation Algorithm of Open64 for MPI

    Institute of Scientific and Technical Information of China (English)

    向阳霞; 裴宏; 张惠民; 陈曼青

    2011-01-01

    Addressing the problem that the open-source compiler Open64 cannot automatically parallelize for MPI, automatic MPI code generation for clusters based on Open64 is studied. First, the location of the MPI automatic code generation module within the Open64 architecture is analyzed; then an Open64-based automatic generation algorithm for MPI code is presented; finally, experiments on the NPB benchmarks are conducted. The experimental results show that the algorithm can effectively reduce the communication overhead of MPI parallel programs and noticeably increase their speedups.
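
The code-generation step described above can be caricatured in a few lines: once a compiler pass knows a loop's induction variable, bounds, and body, emitting the MPI boilerplate for a block distribution of iterations is template instantiation. The template below is an illustrative sketch, not Open64's actual output:

```python
# Illustrative template for block-distributing a 1-D loop over MPI ranks.
TEMPLATE = """\
#include <mpi.h>

void parallel_loop(int n) {{
    int rank, size;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int chunk = (n + size - 1) / size;  /* block distribution */
    int lo = rank * chunk;
    int hi = lo + chunk < n ? lo + chunk : n;
    for (int {var} = lo; {var} < hi; ++{var}) {{
        {body}
    }}
    MPI_Finalize();
}}
"""

def generate_mpi_loop(var, body):
    """Emit an MPI C skeleton for a loop with the given induction
    variable and body (both supplied by the hypothetical compiler pass)."""
    return TEMPLATE.format(var=var, body=body)

code = generate_mpi_loop("i", "work(i);")
print("MPI_Comm_rank" in code and "for (int i = lo; i < hi; ++i)" in code)
```

A real compiler pass additionally performs dependence analysis to prove the loop is parallelizable and inserts communication for shared data, which is precisely where the reported overhead reductions come from.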

  9. Development of a new generation of high-resolution anatomical models for medical device evaluation: the Virtual Population 3.0

    Science.gov (United States)

    Gosselin, Marie-Christine; Neufeld, Esra; Moser, Heidi; Huber, Eveline; Farcito, Silvia; Gerber, Livia; Jedensjö, Maria; Hilber, Isabel; Di Gennaro, Fabienne; Lloyd, Bryn; Cherubini, Emilio; Szczerba, Dominik; Kainz, Wolfgang; Kuster, Niels

    2014-09-01

    The Virtual Family computational whole-body anatomical human models were originally developed for electromagnetic (EM) exposure evaluations, in particular to study how absorption of radiofrequency radiation from external sources depends on anatomy. However, the models immediately garnered much broader interest and are now applied by over 300 research groups, many from medical applications research fields. In a first step, the Virtual Family was expanded to the Virtual Population to provide considerably broader population coverage with the inclusion of models of both sexes ranging in age from 5 to 84 years old. Although these models have proven to be invaluable for EM dosimetry, it became evident that significantly enhanced models are needed for reliable effectiveness and safety evaluations of diagnostic and therapeutic applications, including medical implants safety. This paper describes the research and development performed to obtain anatomical models that meet the requirements necessary for medical implant safety assessment applications. These include implementation of quality control procedures, re-segmentation at higher resolution, more-consistent tissue assignments, enhanced surface processing and numerous anatomical refinements. Several tools were developed to enhance the functionality of the models, including discretization tools, posing tools to expand the posture space covered, and multiple morphing tools, e.g., to develop pathological models or variations of existing ones. A comprehensive tissue properties database was compiled to complement the library of models. The results are a set of anatomically independent, accurate, and detailed models with smooth, yet feature-rich and topologically conforming surfaces. The models are therefore suited for the creation of unstructured meshes, and the possible applications of the models are extended to a wider range of solvers and physics. 
The impact of these improvements is shown for the MRI exposure of an adult

  10. An image-based automatic mesh generation and numerical simulation for a population-based analysis of aerosol delivery in the human lungs

    Science.gov (United States)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2013-11-01

    The authors propose a method to automatically generate three-dimensional subject-specific airway geometries and meshes for computational fluid dynamics (CFD) studies of aerosol delivery in the human lungs. The proposed method automatically expands computed tomography (CT)-based airway skeleton to generate the centerline (CL)-based model, and then fits it to the CT-segmented geometry to generate the hybrid CL-CT-based model. To produce a turbulent laryngeal jet known to affect aerosol transport, we developed a physiologically-consistent laryngeal model that can be attached to the trachea of the above models. We used Gmsh to automatically generate the mesh for the above models. To assess the quality of the models, we compared the regional aerosol distributions in a human lung predicted by the hybrid model and the manually generated CT-based model. The aerosol distribution predicted by the hybrid model was consistent with the prediction by the CT-based model. We applied the hybrid model to 8 healthy and 16 severe asthmatic subjects, and average geometric error was 3.8% of the branch radius. The proposed method can be potentially applied to the branch-by-branch analyses of a large population of healthy and diseased lungs. NIH Grants R01-HL-094315 and S10-RR-022421, CT data provided by SARP, and computer time provided by XSEDE.

  11. Intelligent automatic generation control

    CERN Document Server

    Bevrani, Hassan

    2011-01-01

    ""I enjoyed reading the book and found it informative. It is certainly a book I would recommend to postgraduate students and researchers in the area of intelligent control systems and their application to power system control. My congratulations to the authors.""-Pouyan Pourbeik, IEEE Power and Energy Magazine

  12. The "automatic mode switch" function in successive generations of minute ventilation sensing dual chamber rate responsive pacemakers.

    Science.gov (United States)

    Provenier, F; Jordaens, L; Verstraeten, T; Clement, D L

    1994-11-01

    Automatic mode switch (AMS) from DDDR to VVIR pacing is a new algorithm responding to paroxysmal atrial tachyarrhythmias. With the 5603 Programmer, the AMS in the Meta DDDR 1250 and 1250H (Telectronics Pacings Systems, Inc.) operates when VA is shorter than the adaptable PVARP. With the 9600 Programmer, an atrial protection interval can be defined after the PVARP. The latest generation, Meta DDDR 1254, initiates AMS when 5 or 11 heart cycles are > 150, 175, or 200 beats/min. From 1990 to 1993, 61 patients, mean age 61 years, received a Meta DDDR: in 24 a 1250, in 12 a 1250H and in the remaining 25 a 1254 model. Indication for pacing was heart block in 39, sick sinus syndrome in 15, the combination in 6, and hypertrophic obstructive cardiomyopathy in 1. Paroxysmal atrial tachyarrhythmias were present in 43. All patients had routine pacemaker surveillance, including 52 Holter recordings. In 32 patients, periods of atrial tachyarrhythmias were observed, with proper AMS to VVIR, except during short periods of 2:1 block for atrial flutter in 4. In two others, undersensing of the atrial arrhythmia disturbed correct AMS. With the 1250 and 1250H model, AMS was observed on several occasions during sinus rate accelerations in ten patients. This was never seen with the 1254 devices. Final programming was VVIR in 2 (chronic atrial fibrillation), AAI in 1 (fracture of the ventricular lead), VDDR in 1 (atrial pacing during atrial fibrillation), DDD in 5, and DDDR in 53, 48 of whom had AMS programmed on. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:7845791

  13. Modeling and simulation of the generation automatic control of electric power systems; Modelado y simulacion del control automatico de generacion de sistemas electricos de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Caballero Ortiz, Ezequiel

    2002-12-01

    This work is devoted to the analysis of the automatic generation control of electric power systems, based on the information generated by the load-frequency control loop and the automatic voltage regulator loop. To accomplish the analysis, classical control theory and feedback control system concepts are applied, and modern control theory concepts are employed as well. The studies are carried out on a digital computer with the MATLAB program and the simulation techniques available in the SIMULINK tool. In this thesis the theoretical and physical concepts of automatic generation control are established, dividing it into the load frequency control and automatic voltage regulator loops. The mathematical models of the two control loops are established. Later, the models of the elements are interconnected in order to integrate the load frequency control loop, and a digital simulation of the system is carried out. In the first instance, the function of the primary control in one-area one-machine, one-area multi-machine and multi-area multi-machine power systems is analyzed. Then, the automatic generation control of one-area and multi-area power systems is studied. The economic dispatch concept is established, and with this plan the multi-area power system is simulated; thereafter the energy exchange among areas in the stationary state is studied. The mathematical models of the component elements of the automatic voltage regulator control loop are interconnected. Data according to the nature of each component are generated and their behavior is simulated to analyze the system response. The two control loops are interconnected and a simulation is carried out with the data generated previously, examining the performance of the automatic generation control and the interaction between the two control loops. Finally, the pole placement and optimal control techniques of modern control theory are applied to the automatic generation control of an area.

  14. The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    OpenAIRE

    William Barragán Zaque; Alexander Martínez Rivillas; Pablo Emilio Garzón Carreño

    2015-01-01

    The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high-resolution digital vertical aerial photographs and point clouds obtained using LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The re...

  15. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available GLOBAL AP ANATOMIC TOTAL SHOULDER SYSTEM METHODIST HOSPITAL PHILADELPHIA, PA April 17, 2008 00:00:10 ANNOUNCER: DePuy Orthopedics is continually advancing the standard of orthopedic patient care. In a few ...

  16. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available ... Orthopedics is continually advancing the standard of orthopedic patient care. In a few moments, you'll be ... and version variability which allows adaptability to a patient's unique anatomical makeup. Dr. Gerald R. Williams, Jr., ...

  17. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available ... to a patient's unique anatomical makeup. Dr. Gerald R. Williams, Jr., a shoulder specialist from the Rothman ... That might help. Could you raise the O.R. table, please? 00:28:35 WOMAN: Can you ...

  19. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F.

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65 and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted − achieved) were only −0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, −1.0 ± 1.6% for V65, and −0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction errors were 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate.

  20. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-01

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65 and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted − achieved) were only −0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, −1.0 ± 1.6% for V65, and −0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction errors were 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate.
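
    The error convention used in this abstract (predicted minus achieved, reported as mean ± 1 SD) is straightforward to reproduce. The sketch below uses hypothetical Dmean values, not the study's data.

    ```python
    import statistics

    def prediction_error_stats(predicted, achieved):
        """Mean and 1 SD of (predicted - achieved), the convention used above."""
        errors = [p - a for p, a in zip(predicted, achieved)]
        return statistics.mean(errors), statistics.stdev(errors)

    # Hypothetical rectum Dmean values (Gy) for five validation patients
    pred = [62.1, 58.4, 60.0, 61.3, 59.7]
    ach = [62.3, 58.2, 60.5, 61.2, 60.1]
    mean_err, sd_err = prediction_error_stats(pred, ach)
    print(round(mean_err, 2), round(sd_err, 2))
    ```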

  1. Dosimetric Evaluation of Automatic Segmentation for Adaptive IMRT for Head-and-Neck Cancer

    International Nuclear Information System (INIS)

    Purpose: Adaptive planning to accommodate anatomic changes during treatment requires repeat segmentation. This study uses dosimetric endpoints to assess automatically deformed contours. Methods and Materials: Sixteen patients with head-and-neck cancer had adaptive plans because of anatomic change during radiotherapy. Contours from the initial planning computed tomography (CT) were deformed to the mid-treatment CT using an intensity-based free-form registration algorithm and then compared with the manually drawn contours for the same CT using the Dice similarity coefficient and an overlap index. The automatic contours were used to create new adaptive plans. The original and automatic adaptive plans were compared based on dosimetric outcomes of the manual contours and on plan conformality. Results: Volumes from the manual and automatic segmentation were similar; only the gross tumor volume (GTV) was significantly different. Automatic plans achieved lower mean coverage for the GTV (V95: 98.6 ± 1.9% vs. 89.9 ± 10.1%, p = 0.004) and the clinical target volume (V95: 98.4 ± 0.8% vs. 89.8 ± 6.2%), and a higher spinal cord dose (39.9 ± 3.7 Gy vs. 42.8 ± 5.4 Gy, p = 0.034), but there was no difference for the remaining structures. Conclusions: Automatic segmentation is not robust enough to substitute for physician-drawn volumes, particularly for the GTV. However, it generates normal-structure contours of sufficient accuracy when assessed by dosimetric endpoints.
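
    The Dice similarity coefficient used to compare contours above has a one-line definition; a minimal sketch on toy voxel sets (not the study's code):

    ```python
    def dice_coefficient(a, b):
        """Dice similarity coefficient between two binary masks given as sets of voxel indices."""
        a, b = set(a), set(b)
        if not a and not b:
            return 1.0  # two empty contours agree perfectly by convention
        return 2.0 * len(a & b) / (len(a) + len(b))

    # Toy example: a manual and an automatically deformed contour on a coarse grid
    manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
    auto = {(0, 1), (1, 1), (1, 2)}
    print(dice_coefficient(manual, auto))  # 2*2 / (4+3) = 0.5714...
    ```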

  2. A computer program to automatically generate state equations and macro-models. [for network analysis and design

    Science.gov (United States)

    Garrett, S. J.; Bowers, J. C.; Oreilly, J. E., Jr.

    1978-01-01

    A computer program, PROSE, that produces nonlinear state equations from a simple topological description of an electrical or mechanical network is described. Unnecessary states are also automatically eliminated, so that a simplified terminal circuit model is obtained. The program also prints out the eigenvalues of a linearized system and the sensitivities of the eigenvalue of largest magnitude.

  3. Ontology-based tolerance specification generated automatically%基于本体的公差规范的自动生成

    Institute of Scientific and Technical Information of China (English)

    钟艳如; 王冰清; 覃裕初; 高文祥

    2016-01-01

    针对目前公差规范依靠人工指定带来不确定性的问题,在基于本体的公差类型自动生成方法的基础上,研究基于本体的公差规范的自动生成.通过分析公差规范领域知识,提取其中涉及的概念和关系,以此构建公差规范本体,并采用Web本体语言(Web Ontology Language,OWL)编码实现该本体.在所实现本体的基础上,采用语义Web规则语言(Semantic Web Rule Language,SWRL)定义公差规范的生成规则,进而设计公差规范的自动生成算法.应用所设计算法,说明减速器中间传动轴的公差规范自动生成的过程.将为CAD系统中公差规范自动生成的研究提供有效的思路和方法.%To reduce the uncertainty in the current tolerance specification relying on artificial, the ontology-based toler-ance specification generated automatically is studied based on automatic generation methodology of assembly tolerance types on ontology. In order to implement this ontology tolerance specification, the related concepts and relationships are analysed and the OWL(Web Ontology Language)is used to code. On the base of the ontology which is implemented, the automatic generation algorithm of tolerance specification is designed, using the SWRL(Semantic Web Rule Language)to define the generating rules. Using this algorithm, the procedure is illustrated by intermediate office propeller shaft of the reducer. The effective ideas and methods will be provided for the study of tolerance specification generated automatically for the CAD system.

  4. Effective Generation and Update of a Building Map Database Through Automatic Building Change Detection from LiDAR Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2015-10-01

    Full Text Available Periodic building change detection is important for many applications, including disaster management. Building map databases need to be updated based on detected changes so as to ensure their currency and usefulness. This paper first presents a graphical user interface (GUI developed to support the creation of a building database from building footprints automatically extracted from LiDAR (light detection and ranging point cloud data. An automatic building change detection technique by which buildings are automatically extracted from newly-available LiDAR point cloud data and compared to those within an existing building database is then presented. Buildings identified as totally new or demolished are directly added to the change detection output. However, for part-building demolition or extension, a connected component analysis algorithm is applied, and for each connected building component, the area, width and height are estimated in order to ascertain if it can be considered as a demolished or new building-part. Using the developed GUI, a user can quickly examine each suggested change and indicate his/her decision to update the database, with a minimum number of mouse clicks. In experimental tests, the proposed change detection technique was found to produce almost no omission errors, and when compared to the number of reference building corners, it reduced the human interaction to 14% for initial building map generation and to 3% for map updating. Thus, the proposed approach can be exploited for enhanced automated building information updating within a topographic database.
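
    The part-building analysis described above rests on connected-component labeling of the change mask followed by size filtering. A dependency-free sketch (the real pipeline works on LiDAR-derived masks and also checks width and height; the area threshold below is hypothetical):

    ```python
    from collections import deque

    def connected_components(mask):
        """4-connected components of a binary grid (list of rows of 0/1)."""
        rows, cols = len(mask), len(mask[0])
        seen = [[False] * cols for _ in range(rows)]
        components = []
        for r in range(rows):
            for c in range(cols):
                if mask[r][c] and not seen[r][c]:
                    comp, queue = [], deque([(r, c)])
                    seen[r][c] = True
                    while queue:
                        y, x = queue.popleft()
                        comp.append((y, x))
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                    components.append(comp)
        return components

    MIN_AREA = 3  # hypothetical threshold in cells; the paper uses area, width and height
    change_mask = [
        [1, 1, 0, 0, 1],
        [1, 1, 0, 0, 0],
        [0, 0, 0, 1, 1],
    ]
    parts = [c for c in connected_components(change_mask) if len(c) >= MIN_AREA]
    print(len(parts))  # only the 4-cell component survives the area filter: 1
    ```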

  5. An anatomically oriented breast model for MRI

    Science.gov (United States)

    Kutra, Dominik; Bergtholdt, Martin; Sabczynski, Jörg; Dössel, Olaf; Buelow, Thomas

    2015-03-01

    Breast cancer is the most common cancer in women in the western world. In the breast cancer care-cycle, MRI is employed, e.g., in lesion characterization and therapy assessment. Reading a single three-dimensional image or comparing a multitude of such images in a time series is a time-consuming task. Radiological reporting is done manually by translating the spatial position of a finding in an image to a generic representation in the form of a breast diagram, outlining quadrants or clock positions. Currently, registration algorithms are employed to aid the reading and interpretation of longitudinal studies by providing positional correspondence. To aid the reporting of findings, knowledge about breast anatomy has to be introduced to translate from patient-specific positions to a generic representation. In our approach we fit a geometric primitive, the semi-super-ellipsoid, to patient data. Anatomical knowledge is incorporated by fixing the tip of the super-ellipsoid to the mammilla position and constraining its center point to a reference plane defined by landmarks on the sternum. A coordinate system is then constructed by linearly scaling the fitted super-ellipsoid, assigning a unique set of parameters to each point in the image volume. By fitting such a coordinate system to a different image of the same patient, positional correspondence can be generated. We have validated our method on eight pairs of baseline and follow-up scans (16 breasts) acquired for the assessment of neo-adjuvant chemotherapy. On average, the predicted and actual locations of manually set landmarks are within a distance of 5.6 mm. Our proposed method allows for automatic reporting simply by uniformly dividing the super-ellipsoid around its main axis.
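
    The geometric primitive fitted above is a super-ellipsoid, whose implicit equation can be evaluated directly. A minimal sketch with illustrative (not clinically fitted) axis lengths:

    ```python
    def superellipsoid_value(x, y, z, a, b, c, e1, e2):
        """Implicit function of a super-ellipsoid with semi-axes a, b, c and
        shape exponents e1, e2: < 1 inside, 1 on the surface, > 1 outside.
        For e1 = e2 = 1 it reduces to an ordinary ellipsoid."""
        fxy = (abs(x / a) ** (2.0 / e2) + abs(y / b) ** (2.0 / e2)) ** (e2 / e1)
        return fxy + abs(z / c) ** (2.0 / e1)

    # A point on the tip of the x semi-axis lies exactly on the surface
    print(superellipsoid_value(5.0, 0.0, 0.0, 5.0, 4.0, 6.0, 1.0, 1.0))  # 1.0
    ```

    In the paper's setting, the normalized value of this function along scaled copies of the fitted surface is what supplies a patient-independent coordinate for each image point.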

  6. LiDAR The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    Full Text Available The aim of this paper is to generate photogrammetrie products and to automatically map buildings in the area of interest in vector format. The research was conducted Bogotá using high resolution digital vertical aerial photographs and point clouds obtained using LIDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2 % validated through ground-truthing.

  7. MATHEMATICAL MODEL OF TRANSIENT PROCESSES PERTAINING TO THREE-IMPULSE SYSTEM FOR AUTOMATIC CONTROL OF STEAM GENERATOR WATER SUPPLY ON LOAD RELIEF

    OpenAIRE

    G. T. Kulakov; A. T. Kulakov; A. N. Kukharenko

    2014-01-01

    The paper analyzes the operation of the standard three-impulse automatic control system (ACS) for steam generator water supply. A mathematical model for checking its operability on load relief has been developed; this model makes it possible to determine the maximum deviations of the water level without actual tests and without any corrections to the settings that start up the technological protection systems according to the water level in the drum. The paper reveals rea...

  8. Automatic evaluation of uterine cervix segmentations

    Science.gov (United States)

    Lotenberg, Shelly; Gordon, Shiri; Long, Rodney; Antani, Sameer; Jeronimo, Jose; Greenspan, Hayit

    2007-03-01

    In this work we focus on the generation of reliable ground truth data for a large medical repository of digital cervicographic images (cervigrams) collected by the National Cancer Institute (NCI). This work is part of an ongoing effort conducted by NCI together with the National Library of Medicine (NLM) at the National Institutes of Health (NIH) to develop a web-based database of the digitized cervix images in order to study the evolution of lesions related to cervical cancer. As part of this effort, NCI has gathered twenty experts to manually segment a set of 933 cervigrams into regions of medical and anatomical interest. This process yields a set of images with multi-expert segmentations. The objectives of the current work are: 1) to generate multi-expert ground truth and assess the difficulty of segmenting an image, 2) to analyze observer variability in the multi-expert data, and 3) to utilize the multi-expert ground truth to evaluate automatic segmentation algorithms. The work is based on STAPLE (Simultaneous Truth and Performance Level Estimation), a well-known method to generate ground truth segmentation maps from multiple experts' observations. We have analyzed both intra- and inter-expert variability within the segmentation data. We propose novel measures of "segmentation complexity" by which we can automatically identify cervigrams that the experts found difficult to segment, based on their inter-observer variability. Finally, the results are used to assess our own automated algorithm for cervix boundary detection.
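
    As a rough illustration of consensus building and inter-observer variability, the sketch below uses majority voting plus mean pairwise Dice as a simplified stand-in for STAPLE's EM estimation; the masks are toy sets of pixel indices, not cervigram data.

    ```python
    from itertools import combinations

    def dice(a, b):
        a, b = set(a), set(b)
        return 2.0 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

    def consensus_and_complexity(expert_masks):
        """Majority-vote consensus and mean pairwise Dice across experts.

        A low mean pairwise Dice signals high inter-observer variability,
        i.e. an image the experts found hard to segment."""
        votes = {}
        for mask in expert_masks:
            for px in mask:
                votes[px] = votes.get(px, 0) + 1
        consensus = {px for px, v in votes.items() if v > len(expert_masks) / 2}
        agreement = [dice(a, b) for a, b in combinations(expert_masks, 2)]
        return consensus, sum(agreement) / len(agreement)

    experts = [{1, 2, 3, 4}, {2, 3, 4, 5}, {2, 3, 4}]
    consensus, mean_dice = consensus_and_complexity(experts)
    print(sorted(consensus), round(mean_dice, 3))  # [2, 3, 4] 0.821
    ```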

  9. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available ... AP ANATOMIC TOTAL SHOULDER SYSTEM METHODIST HOSPITAL PHILADELPHIA, PA April 17, 2008 00:00:10 ANNOUNCER: DePuy ... you don't make a bunch of small passes at the lesser tuberosity and make it a ...

  10. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available ... Anatomic Total Shoulder surgery, which featured the latest innovation in shoulder surgery from DePuy Orthopedics. OR-Live makes it easy for you to learn more. Just click on the "Request Information" button on your webcast screen and open the door to informed medical care. 01:21: ...

  11. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available ... advancing the standard of orthopedic patient care. In a few moments, you'll be able to watch a live global AP anatomic total shoulder surgery from Methodist Hospital in Philadelphia. A revolution in shoulder orthopedics, the Global AP gives ...

  12. Anatomic Total Shoulder System

    Medline Plus

    Full Text Available ... by almost ten years, is shoulders. So by definition, the average shoulder-replacement patient is almost ten ... Anatomic Total Shoulder surgery, which featured the latest innovation in shoulder surgery from DePuy Orthopedics. OR-Live ...

  13. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    Science.gov (United States)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently founded by the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, built on its members' decade-long experience in earthquake monitoring systems and seismic data analysis, whose major goal is to transform the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software, an elegant solution for managing and analysing seismic data and creating automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the 1980, November 23, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters, whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of modules, each aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, and then the software performs phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear exploration algorithm. The software is then able to automatically provide three different magnitude estimates.
First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude
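
    The truncated sentence above refers to the standard local-magnitude calculation from a peak amplitude and a distance correction. A minimal sketch using the Hutton and Boore (1987) calibration, which is one common choice; the abstract does not state which calibration this software applies.

    ```python
    import math

    def local_magnitude(amplitude_mm, hypocentral_distance_km):
        """Local magnitude Ml from a synthesized Wood-Anderson peak amplitude (mm),
        with the Hutton & Boore (1987) distance correction (an assumed calibration)."""
        r = hypocentral_distance_km
        return (math.log10(amplitude_mm)
                + 1.110 * math.log10(r / 100.0)
                + 0.00189 * (r - 100.0)
                + 3.0)

    # By construction of the scale, a 1 mm amplitude at 100 km gives Ml = 3.0
    print(local_magnitude(1.0, 100.0))  # 3.0
    ```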

  14. MO-G-BRE-04: Automatic Verification of Daily Treatment Deliveries and Generation of Daily Treatment Reports for a MR Image-Guided Treatment Machine

    Energy Technology Data Exchange (ETDEWEB)

    Yang, D; Li, X; Li, H; Wooten, H; Green, O; Rodriguez, V; Mutic, S [Washington University School of Medicine, St. Louis, MO (United States)

    2014-06-15

    Purpose: Two aims of this work were to develop a method to automatically verify treatment delivery accuracy immediately after patient treatment and to develop a comprehensive daily treatment report to provide all required information for daily MR-IGRT review. Methods: After systematically analyzing the requirements for treatment delivery verification and understanding the available information from a novel MR-IGRT treatment machine, we designed a method to use 1) treatment plan files, 2) delivery log files, and 3) dosimetric calibration information to verify the accuracy and completeness of daily treatment deliveries. The method verifies the correctness of delivered treatment plans and beams, beam segments, and, for each segment, the beam-on time and MLC leaf positions. Composite primary fluence maps are calculated from the MLC leaf positions and the beam-on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. We also designed the daily treatment delivery report to include all required information for MR-IGRT and physics weekly review - the plan and treatment fraction information, dose verification information, daily patient setup screen captures, and the treatment delivery verification results. Results: The parameters in the log files (e.g. MLC positions) were independently verified and deemed accurate and trustworthy. A computer program was developed to implement the automatic delivery verification and daily report generation. The program was tested and clinically commissioned with sufficient IMRT and 3D treatment delivery data. The final version has been integrated into a commercial MR-IGRT treatment delivery system. Conclusion: A method was developed to automatically verify MR-IGRT treatment deliveries and generate daily treatment reports. Already in clinical use since December 2013, the system is able to facilitate delivery error detection, and expedite physician daily IGRT review and physicist weekly chart
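
    The fluence check described above can be sketched in a few lines: accumulate beam-on time over the columns each segment leaves open, then compare the planned and delivered maps. The one-leaf-pair geometry and unit dose rate below are simplifying assumptions, not the commercial system's model.

    ```python
    def composite_fluence(segments, width):
        """Sum per-segment fluence along one MLC leaf-pair track.

        Each segment is (beam_on_time, left_leaf, right_leaf), where the leaf
        positions are fluence-grid columns left open (hypothetical simplified
        geometry: one leaf pair, unit dose rate)."""
        fluence = [0.0] * width
        for beam_on, left, right in segments:
            for col in range(left, right):
                fluence[col] += beam_on
        return fluence

    plan = [(2.0, 1, 4), (1.0, 2, 5)]
    log = [(2.0, 1, 4), (1.0, 3, 5)]   # delivered: one leaf settled one column off
    planned = composite_fluence(plan, 6)
    delivered = composite_fluence(log, 6)
    diff = [d - p for p, d in zip(planned, delivered)]
    max_err = max(abs(e) for e in diff)  # a simple error statistic on the difference map
    print(planned, delivered, max_err)
    ```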

  15. Construction of a computational anatomical model of the peripheral cardiac conduction system.

    Science.gov (United States)

    Sebastian, Rafael; Zimmerman, Viviana; Romero, Daniel; Frangi, Alejandro F

    2011-12-01

    A methodology is presented here for automatic construction of a ventricular model of the cardiac conduction system (CCS), which is currently a missing block in many multiscale cardiac electromechanical models. It includes the His bundle, left bundle branches, and the peripheral CCS. The algorithm is fundamentally an enhancement of a rule-based method known as the Lindenmayer systems (L-systems). The generative procedure has been divided into three consecutive independent stages, which subsequently build the CCS from proximal to distal sections. Each stage is governed by a set of user parameters together with anatomical and physiological constraints to direct the generation process and adhere to the structural observations derived from histology studies. Several parameters are defined using statistical distributions to introduce stochastic variability in the models. The CCS built with this approach can generate electrical activation sequences with physiological characteristics. PMID:21896384
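
    The rewriting core of a Lindenmayer system is compact. A minimal sketch with a textbook binary-branching rule; the paper's generative procedure adds anatomical and physiological constraints on top of this.

    ```python
    def lsystem(axiom, rules, generations):
        """Iteratively rewrite a string with production rules (a Lindenmayer system).

        Symbols without a rule (here the branch delimiters '[' and ']') are copied
        unchanged; 'F' denotes a conduction-tree segment in this toy alphabet."""
        s = axiom
        for _ in range(generations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    # Textbook binary-branching rule: each segment sprouts a side branch
    rules = {"F": "F[F]F"}
    print(lsystem("F", rules, 2))  # F[F]F[F[F]F]F[F]F
    ```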

  16. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    International Nuclear Information System (INIS)

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches. (paper)

  17. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    Science.gov (United States)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice the amount of produced medical image data strongly increased in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
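
    The Fourier descriptors used as shape features above can be sketched with a naive DFT of the complex boundary coordinates, normalized for translation and scale; the paper's exact normalization may differ.

    ```python
    import cmath
    import math

    def fourier_descriptors(contour, n_coeffs):
        """Translation- and scale-invariant Fourier descriptors of a closed contour.

        The contour is a list of (x, y) boundary points. A naive O(n^2) DFT is
        used here to stay dependency-free; an FFT would be used in practice."""
        z = [complex(x, y) for x, y in contour]
        n = len(z)
        coeffs = [sum(z[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                      for m in range(n)) / n
                  for k in range(n_coeffs + 1)]
        scale = abs(coeffs[1])                       # normalize by the first harmonic
        return [abs(c) / scale for c in coeffs[2:]]  # drop DC (translation) and c1 (scale)

    # Eight points on a unit circle: all higher descriptors vanish,
    # since a circle's shape lives entirely in the first harmonic
    circle8 = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
               for i in range(8)]
    fds = fourier_descriptors(circle8, 3)
    print(fds)  # values are numerically ~0
    ```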


  19. Automatic Test Case Generator for Object-Z Specification%Object-Z规格说明测试用例的自动生成器

    Institute of Scientific and Technical Information of China (English)

    许庆国; 缪淮扣; 曹晓夏; 胡晓波

    2011-01-01

    Most research on test case generation from Object-Z specifications focuses on theory; there is almost no tool support for generating test cases automatically. Object-Z is a mathematics- and logic-based formal specification language. Its heavy use of schema composition and abbreviated forms makes it difficult for a computer to extract complete semantics and thus to generate test cases from a specification automatically. This paper provides a solution for extracting semantics and generating test cases from Object-Z specifications by unfolding the schema definitions and improving the Object-Z grammar. The process has three stages: parsing the Object-Z language, extracting semantics, and generating test cases automatically. With the presented prototype tool, the various semantics of a specification are easily obtained, and visual abstract test cases can be generated automatically based on selected test criteria.

  20. Femtosecond laser-induced hard X-ray generation in air from a solution flow of Au nano-sphere suspension using an automatic positioning system.

    Science.gov (United States)

    Hsu, Wei-Hung; Masim, Frances Camille P; Porta, Matteo; Nguyen, Mai Thanh; Yonezawa, Tetsu; Balčytis, Armandas; Wang, Xuewen; Rosa, Lorenzo; Juodkazis, Saulius; Hatanaka, Koji

    2016-09-01

    Femtosecond laser-induced hard X-ray generation in air from a 100-µm-thick solution film of distilled water or Au nano-sphere suspension was carried out using a newly developed automatic positioning system with 1-µm precision. When the solution film was positioned for the highest X-ray intensity, the optimum position shifted upstream as the laser power increased, due to breakdown. Optimized positioning allowed us to control X-ray intensity with high fidelity. X-ray generation from the Au nano-sphere suspension and from distilled water showed different power scaling. Linear and nonlinear absorption mechanisms are analyzed together with numerical modeling of light delivery. PMID:27607607

  1. Mathematical modeling of Automatic Control System (ACS) and synchronous generator in high reliability power supply systems in Kozloduy NPP - set up optimization of ACS

    International Nuclear Information System (INIS)

    The article presents models of the Automatic Control System (ACS) and the synchronous generator of the reversible generator-engine groups of the first-category power supply section in Kozloduy NPP units 1 to 4. The control parameter is the synchronous machine voltage. The research aims are optimal ACS settings and control-quality guarantees in accordance with the technical requirements. The synchronous machine model used is included in the Matlab 5.x library. For optimization, the tools of the Optimization Toolbox (the NCD Outport block and plant actuator) and newly created basic models of a variable discrete PID regulator and a PWM system are utilized. The results are applied to the setup of the real ACS. The precision of the created models makes it possible to develop a real summary model and to apply the achieved models in cases of fluctuations of the AC/DC reversible electromechanical supply.
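
    A discrete PID regulator of the kind mentioned above can be sketched as follows; the gains and sample time are placeholders, not the tuning obtained in the study.

    ```python
    def make_discrete_pid(kp, ki, kd, dt):
        """Discrete PID: integral by the rectangle rule, derivative by a
        backward difference (a generic sketch, not the plant-specific design)."""
        state = {"integral": 0.0, "prev_error": 0.0}

        def step(setpoint, measurement):
            error = setpoint - measurement
            state["integral"] += error * dt
            derivative = (error - state["prev_error"]) / dt
            state["prev_error"] = error
            return kp * error + ki * state["integral"] + kd * derivative

        return step

    pid = make_discrete_pid(kp=2.0, ki=0.5, kd=0.0, dt=0.1)
    u1 = pid(1.0, 0.0)   # first step against a unit setpoint
    print(round(u1, 3))  # 2.0*1 + 0.5*(1*0.1) = 2.05
    ```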

  2. Some Behavioral Considerations on the GPS4GEF Cloud-Based Generator of Evaluation Forms with Automatic Feedback and References to Interactive Support Content

    Directory of Open Access Journals (Sweden)

    Daniel HOMOCIANU

    2015-01-01

    Full Text Available The paper introduces some considerations on a previously defined general-purpose system used to dynamically generate online evaluation forms with automatic feedback immediately after submitting responses. The system works with a simple and well-known data-source format able to store questions, answers and links to additional support materials, in order to increase the productivity of evaluation and assessment. Beyond presenting a short description of the prototype's components and underlining the advantages and limitations of using it for any user involved in assessment and evaluation processes, this paper promotes the use of such a system together with a simple technique of generating and referencing interactive support content cited within this paper and defined together with the LIVES4IT approach. This type of content means scenarios having ad-hoc documentation and interactive simulation components, useful when emulating concrete examples of working with real-world objects, operating devices or using software applications from any activity field.

  3. Early fetal anatomical sonography.

    LENUS (Irish Health Repository)

    Donnelly, Jennifer C

    2012-10-01

    Over the past decade, prenatal screening and diagnosis has moved from the second into the first trimester, with aneuploidy screening becoming both feasible and effective. With vast improvements in ultrasound technology, sonologists can now image the fetus in greater detail at all gestational ages. In the hands of experienced sonographers, anatomic surveys between 11 and 14 weeks can be carried out with good visualisation rates of many structures. It is important to be familiar with the normal development of the embryo and fetus, and to be aware of the major anatomical landmarks whose absence or presence may be deemed normal or abnormal depending on the gestational age. Some structural abnormalities will nearly always be detected, some will never be and some are potentially detectable depending on a number of factors.

  4. Reference Man anatomical model

    Energy Technology Data Exchange (ETDEWEB)

    Cristy, M.

    1994-10-01

    The 70-kg Standard Man or Reference Man has been used in physiological models since at least the 1920s to represent adult males. It came into use in radiation protection in the late 1940s, was developed extensively during the 1950s, and was used by the International Commission on Radiological Protection (ICRP) in its Publication 2 in 1959. The current Reference Man for Purposes of Radiation Protection is a monumental book published in 1975 by the ICRP as ICRP Publication 23. It has a wealth of information useful for radiation dosimetry, including anatomical and physiological data and the gross and elemental composition of the body and of its organs and tissues. The anatomical data include specified reference values for an adult male and an adult female; other reference values are primarily for the adult male. The anatomical data also cover fetuses and children extensively, although reference values are not established for them. An ICRP task group is currently working on revising selected parts of the Reference Man document.

  5. Anatomical imaging for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Philip M [Joint Physics Department, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey SM2 5PT (United Kingdom)], E-mail: phil.evans@icr.ac.uk

    2008-06-21

    The goal of radiation therapy is to achieve maximal therapeutic benefit expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state of the art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow-up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods which measure the basic physical characteristics of tissue such as their density and biological imaging techniques which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques. Biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. In ART a set of

  6. Anatomical imaging for radiotherapy

    International Nuclear Information System (INIS)

    The goal of radiation therapy is to achieve maximal therapeutic benefit expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state of the art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow-up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods which measure the basic physical characteristics of tissue such as their density and biological imaging techniques which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques. Biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. In ART a set of

  7. [Use of steam-oxygen tents with a universal steam generator and automatic control system in the treatment of acute stenosing laryngotracheitis in children].

    Science.gov (United States)

    Taĭts, B M

    1993-01-01

    To treat acute stenosing laryngotracheitis in acute respiratory viral infection in children, an original method has been developed and used for 2 years in a special hospital department. The method implies treatment of children in steam-and-oxygen tents with a universal steam-moistening generator and automatic control system. A controlled study of 50 children with acute laryngeal stenosis of degree I-III confirmed the high efficacy of this method, permitting improvement of blood oxygenation, gas composition and acid-base status, reduction of acidosis, and prevention of exsiccosis and brain edema. The warm humid atmosphere promoted better discharge of the secretion and better functioning of the ciliated epithelium. Combined treatment incorporating the tents in acute laryngeal stenoses reduced lethality in severe cases, the number of intubations and tracheostomies, and the complications resulting from parenteral administration of drugs. PMID:8009767

  8. Development of an expert system for automatic mesh generation for S(N) particle transport method in parallel environment

    Science.gov (United States)

    Patchimpattapong, Apisit

    This dissertation develops an expert system for generating an effective spatial mesh distribution for the discrete ordinates particle transport method in a parallel environment. This expert system consists of two main parts: (1) an algorithm for generating an effective mesh distribution in a serial environment, and (2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. The mesh generation algorithm consists of four steps: creation of a geometric model as partitioned into coarse meshes, determination of an approximate flux shape, selection of appropriate differencing schemes, and generation of an effective fine mesh distribution. A geometric model was created using AutoCAD. A parallel code PENFC (Parallel Environment Neutral-Particle First Collision) has been developed to calculate an uncollided flux in a 3-D Cartesian geometry. The appropriate differencing schemes were selected based on the uncollided flux distribution using a least squares methodology. A menu-driven serial code PENXMSH has been developed to generate an effective spatial mesh distribution that preserves problem geometry and physics. The domain decomposition selection process involves evaluation of the four factors that affect parallel performance, which include number of processors and memory available per processor, load balance, granularity, and degree-of-coupling among processors. These factors are used to derive a parallel-performance-index that provides expected performance of a parallel algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems: the VENUS-3 experimental facility and the BWR core shroud.

  9. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual...... camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...
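The multi-objective framing described above rests on Pareto dominance between candidate camera configurations rather than a single weighted score. A minimal dominance test (the objective names below are invented for illustration, not taken from the paper):

```python
# Pareto dominance for minimised objectives -- the core comparison a
# multi-objective evolutionary algorithm uses instead of one weighted sum.

def dominates(a, b):
    """True if a is at least as good as b on every objective and
    strictly better on at least one (all objectives minimised)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Hypothetical (visibility error, framing error) scores for three cameras:
cam1, cam2, cam3 = (0.1, 0.2), (0.3, 0.4), (0.05, 0.5)
print(dominates(cam1, cam2), dominates(cam1, cam3))
```

cam1 dominates cam2 outright, but neither cam1 nor cam3 dominates the other: both remain on the Pareto front, which is exactly the extra expressiveness the paper exploits.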

  10. Automatic programming and generation of collision-free paths for the Mitsubishi Movemaster RV-M1 robot

    Directory of Open Access Journals (Sweden)

    K. Foit

    2011-07-01

    Full Text Available Purpose of this paper: This paper discusses the possibility of developing and implementing a computer system able to generate a collision-free path and prepare the data for direct use in the robot's program. Design/methodology/approach: The existing methods of planning collision-free paths are mainly limited to the 2D case and implemented for mobile robots. The existing methods for planning trajectories in 3D are often complicated and time-consuming, so most of them are not put into practice, remaining theory only. In the paper a 2½D method is presented, together with a method of smoothing the generated trajectory. Experiments have been carried out in a virtual environment as well as on the real robot. Findings: The developed PLANER application has been adapted for cooperation with the Mitsubishi Movemaster RV-M1 robot. The current tests, together with previous ones carried out on the Fanuc RJ3iB robot, have shown the versatility of the method and the possibility of adapting it for cooperation with any robotic system. Research limitations/implications: The further stage of research will concentrate on consolidating the trajectory-generation and simulation phase with the program-execution stage in such a way that the determination of the collision-free path can be realized in real time. Practical implications: This approach clearly simplifies the stage of defining the relevant points of the trajectory in order to avoid collisions with the technological objects located in the environment of the robot's manipulator, and thereby significantly reduces the time needed to bring the program into the production cycle. Originality/value: The method of generating collision-free trajectories described in the paper combines some existing tools with a new approach to achieve optimal performance of the algorithm.
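As a generic illustration of the collision-free path-finding idea (not the PLANER 2½D algorithm itself), a breadth-first search over a 2D occupancy grid finds a shortest obstacle-free cell path:

```python
from collections import deque

# Breadth-first search on a 2-D occupancy grid: an illustrative
# stand-in for collision-free path planning, NOT the PLANER method.
# grid: list of rows, 0 = free cell, 1 = obstacle.

def bfs_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + predecessor map
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking predecessors back.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no collision-free path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 0))
print(path)
```

BFS guarantees a shortest path in cell count; real planners like the one above add trajectory smoothing on top of such raw paths.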

  11. A Framework for automatic generation of answers to conceptual questions in Frequently Asked Question (FAQ) based Question Answering System

    OpenAIRE

    Saurabh Pal; Sudipta Bhattacharya; Indrani Datta; Arindam Chakravorty

    2012-01-01

    A Question Answering System (QAS) generates answers to the various questions posed by users. The QAS uses documents or a knowledge base for extracting the answers to factoid and conceptual questions. Use of a Frequently Asked Question (FAQ) base gives satisfying results for a QAS, but the limitation of an FAQ-based system lies in the preparation of the question-and-answer set, as most questions are not predetermined. A QAS using an FAQ base fails if no semantically related questions are found in the base...

  12. Transoral Surgery: An Anatomic Study

    OpenAIRE

    Rock, Jack P.; Tomecek, Frank J.; Ross, Lawrence

    1993-01-01

    The transoral approaches have become commonplace in modern neurosurgical practice for treatment of ventral midline lesions of the clivus and upper cervical spine. Although the standard technique of transoral surgery is conceptually simple, anatomic relationships are not so readily appreciated. The present study was undertaken in an effort to define more clearly the midline anatomic relationships as they pertain to the standard transoral and transpalatine operations. The anatomic relationships...

  13. SU-E-J-141: Comparison of Dose Calculation on Automatically Generated MR-Based ED Maps and Corresponding Patient CT for Clinical Prostate EBRT Plans

    International Nuclear Information System (INIS)

    Purpose: To analyze the effect of computing radiation dose on automatically generated MR-based simulated CT images compared to true patient CTs. Methods: Six prostate cancer patients received a regular planning CT for RT planning as well as a conventional 3D fast-field dual-echo scan on a Philips 3.0T Achieva, adding approximately 2 min of scan time to the clinical protocol. Simulated CTs (simCT) were synthesized by assigning known average CT values to the tissue classes air, water, fat, cortical and cancellous bone. For this, Dixon reconstruction of the nearly out-of-phase (echo 1) and in-phase images (echo 2) allowed for water and fat classification. Model-based bone segmentation was performed on a combination of the Dixon images. A subsequent automatic threshold divides bone into cortical and cancellous bone. For validation, the simCT was registered to the true CT and clinical treatment plans were re-computed on the simCT in Pinnacle3. To differentiate effects related to the 5 tissue classes from changes in the patient anatomy not compensated by rigid registration, we also calculate the dose on a stratified CT, where HU values are sorted into the same 5 tissue classes as the simCT. Results: Dose and volume parameters on PTV and risk organs as used for the clinical approval were compared. All deviations are below 1.1%, except the anal sphincter mean dose, which is at most 2.2% but well below the clinical acceptance threshold. Average deviations are below 0.4% for PTV and risk organs and 1.3% for the anal sphincter. The deviations of the stratified CT are in the same range as for the simCT. All plans would have passed clinical acceptance thresholds on the simulated CT images. Conclusion: This study demonstrated the clinical usability of MR-based dose calculation with the presented Dixon acquisition and subsequent fully automatic image processing. N. Schadewaldt, H. Schulz, M. Helle and S. Renisch are employed by Philips Technologie Innovative Technologies, a subsidiary of
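The simCT synthesis step described above amounts to mapping a tissue-class label image to fixed average CT numbers. A minimal sketch with typical literature HU values (the actual values used in the study are not given in the abstract):

```python
import numpy as np

# Sketch of the simCT synthesis step: map a tissue-class label image
# to average CT numbers. The HU values below are typical literature
# values, NOT the ones used in the study.

HU_BY_CLASS = {
    0: -1000,  # air
    1:     0,  # water
    2:  -100,  # fat
    3:   300,  # cancellous bone
    4:  1200,  # cortical bone
}

def labels_to_simct(labels):
    simct = np.empty(labels.shape, dtype=np.int16)
    for cls, hu in HU_BY_CLASS.items():
        simct[labels == cls] = hu
    return simct

labels = np.array([[0, 1, 2],
                   [3, 4, 1]])
print(labels_to_simct(labels))
```

The resulting image can be fed to a dose engine exactly like a measured CT, which is what the validation against the true CT tests.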

  14. Design of Controller for Automatic Tracking Solar Power Generation (Research on an All-Weather Automatic Solar Tracking System)

    Institute of Scientific and Technical Information of China (English)

    郑锋; 王炜灵; 陈健强; 陈泽群; 张晓薇

    2014-01-01

    The paper proposes the principle and structure of an all-weather automatic solar tracker. In the detection system, a photoelectric tracking model is used as hardware to follow the light, while the software implements a solar-trajectory tracking program. In the control system, a two-axis mechanical transmission is adopted: the controller drives a DC motor to adjust the solar panel to its optimal position, and a transmission device lets a single motor drive a whole row of solar panels in linkage. The control system takes a series of preventive measures against rainy and strong-wind weather, and an intelligent energy-saving design is included. The simply designed, low-consumption device aims at all-day light collection and is expected to generate power with high efficiency.
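Open-loop "solar trajectory" tracking of the kind mentioned above is usually built on standard astronomical approximations. A generic sketch (not the program running on the device), using Cooper's declination formula:

```python
import math

# Standard astronomical approximations for open-loop solar-trajectory
# tracking -- a generic sketch, not the program used on the device
# described in the paper.

def declination_deg(day_of_year):
    # Cooper's approximation for solar declination (degrees).
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def noon_elevation_deg(latitude_deg, day_of_year):
    # Solar elevation at local solar noon (degrees).
    return 90.0 - abs(latitude_deg - declination_deg(day_of_year))

# Summer solstice (~day 172) at latitude 23.45 N: sun near the zenith.
print(round(noon_elevation_deg(23.45, 172), 1))
```

A tracker combines such computed sun angles (for cloudy conditions) with photoelectric feedback (for clear skies), which matches the hardware/software split described in the abstract.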

  15. The ear, the eye, earthquakes and feature selection: listening to automatically generated seismic bulletins for clues as to the differences between true and false events.

    Science.gov (United States)

    Kuzma, H. A.; Arehart, E.; Louie, J. N.; Witzleben, J. L.

    2012-04-01

    Listening to the waveforms generated by earthquakes is not new. The recordings of seismometers have been sped up and played to generations of introductory seismology students, published on educational websites and even included in the occasional symphony. The modern twist on earthquakes as music is an interest in using state-of-the-art computer algorithms for seismic data processing and evaluation. Algorithms such as Hidden Markov Models, Bayesian Network models and Support Vector Machines have been highly developed for applications in speech recognition, and might also be adapted for automatic seismic data analysis. Over the last three years, the International Data Centre (IDC) of the Comprehensive Test Ban Treaty Organization (CTBTO) has supported an effort to apply computer learning and data mining algorithms to IDC data processing, particularly to the problem of weeding through automatically generated event bulletins to find events which are non-physical and would otherwise have to be eliminated by the hand of highly trained human analysts. Analysts are able to evaluate events, distinguish between phases, pick new phases and build new events by looking at waveforms displayed on a computer screen. Human ears, however, are much better suited to waveform processing than are the eyes. Our hypothesis is that combining an auditory representation of seismic events with visual waveforms would reduce the time it takes to train an analyst and the time they need to evaluate an event. Since it takes almost two years for a person of extraordinary diligence to become a professional analyst and IDC contracts are limited to seven years by Treaty, faster training would significantly improve IDC operations. Furthermore, once a person learns to distinguish between true and false events by ear, various forms of audio compression can be applied to the data. The compression scheme which yields the smallest data set in which relevant signals can still be heard is likely an

  16. Automatic generation of boundary conditions using Demons non-rigid image registration for use in 3D modality-independent elastography

    Science.gov (United States)

    Pheiffer, Thomas S.; Ou, Jao J.; Miga, Michael I.

    2010-02-01

    Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm and are often determined by time-consuming point-correspondence methods requiring manual user input. Unfortunately, generation of accurate boundary conditions for the biomechanical model is often difficult due to the challenge of accurately matching points between the source and target surfaces, and consequently necessitates the use of large numbers of fiducial markers. This study presents a novel method of automatically generating boundary conditions by non-rigidly registering two image sets with a Demons diffusion-based registration algorithm. The method was successfully demonstrated in silico using magnetic resonance and X-ray computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with accuracy of up to 80% compared to the known conditions. Finally, these boundary conditions were utilized within a 3D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method.
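The Demons algorithm referenced above iterates a simple force update (with smoothing between iterations) to deform the source image onto the target. One Thirion demons update step in 1D, as an illustrative sketch:

```python
import numpy as np

# One Thirion "demons" force update in 1-D: the classic displacement
# formula u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2) that
# diffusion-based Demons registration iterates. Illustrative only.

def demons_update(fixed, moving):
    diff = moving - fixed
    grad = np.gradient(fixed)
    denom = grad**2 + diff**2
    # Guard against division by zero where both terms vanish.
    return np.where(denom > 1e-12, diff * grad / denom, 0.0)

fixed = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # target edge at index 2
moving = np.array([0.0, 0.0, 0.0, 1.0, 1.0])  # same edge, shifted
u = demons_update(fixed, moving)
print(u)
```

Only the voxel where the intensities disagree receives a non-zero displacement; in full MIE boundary-condition generation this field, accumulated over iterations, maps surface points from the source to the target image.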

  17. Automatable on-line generation of calibration curves and standard additions in solution-cathode glow discharge optical emission spectrometry

    International Nuclear Information System (INIS)

    Two methods are described that enable on-line generation of calibration standards and standard additions in solution-cathode glow discharge optical emission spectrometry (SCGD-OES). The first method employs a gradient high-performance liquid chromatography pump to perform on-line mixing and delivery of a stock standard, sample solution, and diluent to achieve a desired solution composition. The second method makes use of a simpler system of three peristaltic pumps to perform the same function of on-line solution mixing. Both methods can be computer-controlled and automated, and thereby enable both simple and standard-addition calibrations to be rapidly performed on-line. Performance of the on-line approaches is shown to be comparable to that of traditional methods of sample preparation, in terms of calibration curves, signal stability, accuracy, and limits of detection. Potential drawbacks of the on-line procedures include signal lag following changes in solution composition, and pump-induced multiplicative noise. Though the new on-line methods were applied here to SCGD-OES to improve sample throughput, they are not limited in application to SCGD-OES; any instrument that samples from flowing solution streams (flame atomic absorption spectrometry, ICP-OES, ICP-mass spectrometry, etc.) could benefit from them. - Highlights: • Describes rapid, on-line generation of calibration standards and standard additions • These methods enhance the ease of analysis and sample throughput with SCGD-OES. • On-line methods produce results comparable or superior to traditional calibration. • Possible alternative, null-point-based methods of calibration are described. • Methods are applicable to any system that samples from flowing liquid streams
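The on-line mixing in both methods reduces to flow-ratio arithmetic: at constant total flow, the mixed concentration is the flow-weighted mean of the component concentrations. A sketch of that calculation for on-line standard additions (the pump scheme, rates and concentrations below are illustrative assumptions, not figures from the paper):

```python
# Flow-ratio arithmetic behind on-line standard additions: the sample
# is pumped at a fixed rate, the stock flow sets the added
# concentration, and the diluent balances the total flow.
# All numbers are illustrative, not taken from the paper.

def flow_rates(c_added, c_stock, q_total, q_sample):
    """Return (stock flow, diluent flow) so the stock contributes an
    added concentration c_added at total flow q_total, with the sample
    pumped at the fixed rate q_sample."""
    q_stock = c_added * q_total / c_stock
    q_diluent = q_total - q_sample - q_stock
    if q_diluent < 0:
        raise ValueError("requested addition too high for this stock")
    return q_stock, q_diluent

# 10 mg/L stock, 1.0 mL/min total flow, 0.5 mL/min sample uptake:
for added in (0.0, 1.0, 2.0):          # added concentration, mg/L
    q_stock, q_dil = flow_rates(added, 10.0, 1.0, 0.5)
    print(added, round(q_stock, 3), round(q_dil, 3))
```

Stepping the stock flow through a programmed sequence like this is what lets a computer-controlled pump system sweep out a standard-addition curve without manual solution preparation.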

  18. Production optimization of {sup 99}Mo/{sup 99m}Tc zirconium molybdate gel generators at a semi-automatic device: DISIGEG

    Energy Technology Data Exchange (ETDEWEB)

    Monroy-Guzman, F., E-mail: fabiola.monroy@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Rivero Gutierrez, T., E-mail: tonatiuh.rivero@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Lopez Malpica, I.Z.; Hernandez Cortes, S.; Rojas Nava, P.; Vazquez Maldonado, J.C. [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Vazquez, A. [Instituto Mexicano del Petroleo, Eje Central Norte Lazaro Cardenas 152, Col. San Bartolo Atepehuacan, 07730, Mexico D.F. (Mexico)

    2012-01-15

    DISIGEG is a synthesis installation for zirconium {sup 99}Mo-molybdate gels for {sup 99}Mo/{sup 99m}Tc generator production, which has been designed, built and installed at the ININ. The device consists of a synthesis reactor and five systems controlled via keyboard: (1) raw-material access, (2) chemical air stirring, (3) gel drying by air and infrared heating, (4) moisture removal and (5) gel extraction. DISIGEG operation is described, and the effects of the drying conditions of the zirconium {sup 99}Mo-molybdate gels on {sup 99}Mo/{sup 99m}Tc generator performance were evaluated, as well as some physical-chemical properties of these gels. The results reveal that the temperature, time and air flow applied during the drying process directly affect zirconium {sup 99}Mo-molybdate gel generator performance. All gels prepared have a similar chemical structure, probably constituted by a three-dimensional network based on zirconium pentagonal bipyramids and molybdenum octahedra. Basic structural variations cause a change in gel porosity and permeability, favouring or inhibiting {sup 99m}TcO{sub 4}{sup -} diffusion into the matrix. The {sup 99m}TcO{sub 4}{sup -} eluates produced by {sup 99}Mo/{sup 99m}Tc zirconium {sup 99}Mo-molybdate gel generators prepared in DISIGEG, air dried at 80 °C for 5 h and using an air flow of 90 mm, satisfied all the Pharmacopoeia regulations: {sup 99m}Tc yield between 70-75%, {sup 99}Mo breakthrough less than 3 × 10{sup -3}%, radiochemical purities about 97%, and sterile and pyrogen-free eluates with a pH of 6. - Highlights: • {sup 99}Mo/{sup 99m}Tc generators based on {sup 99}Mo-molybdate gels were synthesized in a semi-automatic device. • Generator performance depends on the synthesis conditions of the zirconium {sup 99}Mo-molybdate gel. • {sup 99m}TcO{sub 4}{sup -} diffusion and yield in the generator depend on gel porosity and permeability.
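The elution yields quoted above are governed by the textbook parent-daughter decay relation for a {sup 99}Mo/{sup 99m}Tc generator. A sketch using standard half-lives (the activities are illustrative, not the gel-generator measurements):

```python
import math

# Bateman growth of 99mTc activity in a 99Mo generator -- textbook
# decay relation shown only to illustrate the physics behind elution
# timing; the numbers are not measurements from this work.

T_HALF_MO = 65.94   # h, 99Mo half-life
T_HALF_TC = 6.01    # h, 99mTc half-life
BRANCH = 0.875      # fraction of 99Mo decays feeding 99mTc

def tc99m_activity(a_mo0, t_hours):
    """99mTc activity at t_hours after an elution, given the parent
    99Mo activity a_mo0 at that elution (daughter fully stripped)."""
    lam_mo = math.log(2) / T_HALF_MO
    lam_tc = math.log(2) / T_HALF_TC
    return (BRANCH * a_mo0 * lam_tc / (lam_tc - lam_mo)
            * (math.exp(-lam_mo * t_hours) - math.exp(-lam_tc * t_hours)))

# ~23 h after the previous elution the daughter is near equilibrium:
print(round(tc99m_activity(100.0, 23.0), 1))
```

This in-growth curve sets the theoretical ceiling against which measured eluate yields (70-75% here) are judged; losses below it reflect {sup 99m}TcO{sub 4}{sup -} retention in the gel matrix.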

  19. Evaluating the Potential of RTK-UAV for Automatic Point Cloud Generation in 3D Rapid Mapping

    Science.gov (United States)

    Fazeli, H.; Samadzadegan, F.; Dadrasjavan, F.

    2016-06-01

    During disaster and emergency situations, 3D geospatial data can provide essential information for decision support systems. The utilization of geospatial data using digital surface models as a basic reference is mandatory to provide an accurate, quick emergency response in so-called rapid mapping activities. The trade-off between accuracy requirements and time restrictions is considered critical in these situations. UAVs as alternative platforms for 3D point cloud acquisition offer potential because of their flexibility and practicability combined with low-cost implementation. Moreover, the high-resolution data collected from UAV platforms can provide a quick overview of the disaster area. The target of this paper is to test and evaluate a low-cost system for the generation of point clouds using imagery collected from a low-altitude small autonomous UAV equipped with a customized single-frequency RTK module. A customized multi-rotor platform is used in this study. Electronic hardware is used to simplify user interaction with the UAV and to handle RTK-GPS/camera synchronization; besides the synchronization, lever-arm calibration is performed. The platform is equipped with a Sony NEX-5N 16.1-megapixel camera as the imaging sensor. The lens attached to the camera is a ZEISS 24 mm prime with an F1.8 maximum aperture. All necessary calibrations were performed and the flight was carried out over the area of interest at a height of 120 m above ground level, resulting in a 2.38 cm GSD. Prior to image acquisition, 12 signalized GCPs and 20 check points were distributed in the study area and measured with dual-frequency GPS via the RTK technique, with horizontal accuracy of σ = 1.5 cm and vertical accuracy of σ = 2.3 cm. Results of direct georeferencing are compared to these points, and experiments show that decimetre-level accuracy for the 3D point cloud is achievable with the proposed system, which is suitable
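The reported 2.38 cm GSD can be checked from the flight parameters with the standard relation GSD = pixel pitch × height / focal length. The NEX-5N pixel pitch below is our assumption (23.4 mm sensor width over 4912 pixels); the flight height and focal length come from the abstract:

```python
# Ground-sample-distance check for the reported flight.
# Pixel pitch is an assumption from the NEX-5N sensor specs; the
# 120 m height and 24 mm focal length are quoted in the abstract.

SENSOR_WIDTH_M = 23.4e-3    # APS-C sensor width (assumed)
IMAGE_WIDTH_PX = 4912       # NEX-5N image width in pixels (assumed)
FOCAL_LENGTH_M = 24e-3
HEIGHT_M = 120.0

pixel_pitch = SENSOR_WIDTH_M / IMAGE_WIDTH_PX          # ~4.76 um
gsd_cm = pixel_pitch * HEIGHT_M / FOCAL_LENGTH_M * 100
print(round(gsd_cm, 2))   # close to the 2.38 cm GSD reported
```

Under these sensor assumptions the computed value reproduces the paper's 2.38 cm figure, which is a useful sanity check when planning flight height for a required GSD.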

  20. A program for assisting automatic generation control of the ELETRONORTE using artificial neural network; Um programa para assistencia ao controle automatico de geracao da Eletronorte usando rede neuronal artificial

    Energy Technology Data Exchange (ETDEWEB)

    Brito Filho, Pedro Rodrigues de; Nascimento Garcez, Jurandyr do [Para Univ., Belem, PA (Brazil). Centro Tecnologico; Charone Junior, Wady [Centrais Eletricas do Nordeste do Brasil S.A. (ELETRONORTE), Belem, PA (Brazil)

    1994-12-31

    This work presents an application of an artificial neural network as a support to decision making in the automatic generation control (AGC) of ELETRONORTE. A software tool is used to assist the real-time decisions of the AGC. (author) 2 refs., 6 figs., 1 tab.

  1. Design and development of a prototypical software for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small- and medium-sized enterprises (SME)

    Science.gov (United States)

    Möller, Thomas; Bellin, Knut; Creutzburg, Reiner

    2015-03-01

    The aim of this paper is to show the recent progress in the design and prototypical development of a software suite Copra Breeder* for semi-automatic generation of test methodologies and security checklists for IT vulnerability assessment in small and medium-sized enterprises.

  2. Development and Testing of Geo-Processing Models for the Automatic Generation of Remediation Plan and Navigation Data to Use in Industrial Disaster Remediation

    Science.gov (United States)

    Lucas, G.; Lénárt, C.; Solymosi, J.

    2015-08-01

    This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consists of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan where clean-up parcels are least numerous, considering it the optimal spatial configuration. The second model drifts the clean-up parcels of a work plan both vertically and horizontally, following a grid pattern with a sampling distance of a fifth of a parcel width, and keeps the most optimal drifted version, again with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated-area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that optimal orientation identification can result in saving work, time and money in remediation. The various tests demonstrated that the model gains efficiency when 1/ the individual features of the contaminated area present a significant orientation in their geometry (features are long), 2/ the size of the pollution extent features becomes closer to the size of the parcels (scale effect). The second model shows only a 1% difference with the variation of feature number, so it is less interesting for planning
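The first model's orientation search can be caricatured with pure bounding-box counting: rotate the footprint, cover its bounding box with fixed-size parcels, and keep the orientation needing the fewest. A toy sketch, much simpler than the GIS model described above (the test polygon is invented):

```python
import math

# Toy version of the orientation search: rotate the polluted-area
# footprint, cover its axis-aligned bounding box with fixed-size
# clean-up parcels, keep the orientation needing the fewest parcels.
# A bounding-box simplification of the shapefile-based model.

def parcel_count(points, angle_deg, pw, ph):
    a = math.radians(angle_deg)
    xs = [x * math.cos(a) - y * math.sin(a) for x, y in points]
    ys = [x * math.sin(a) + y * math.cos(a) for x, y in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return math.ceil(w / pw) * math.ceil(h / ph)

def best_orientation(points, pw, ph):
    return min((0, 90, 45, 135),
               key=lambda ang: parcel_count(points, ang, pw, ph))

# A long strip slanted at 45 degrees: rotating by 45 aligns it with
# the parcel grid and sharply cuts the parcel count.
strip = [(0, 0), (10, 10), (11, 9), (1, -1)]
print(best_orientation(strip, 2.0, 2.0))
```

The elongated test strip shows the paper's observation directly: the gain from orientation testing is largest when the contaminated features have a pronounced direction.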

  3. DEVELOPMENT AND TESTING OF GEO-PROCESSING MODELS FOR THE AUTOMATIC GENERATION OF REMEDIATION PLAN AND NAVIGATION DATA TO USE IN INDUSTRIAL DISASTER REMEDIATION

    Directory of Open Access Journals (Sweden)

    G. Lucas

    2015-08-01

    Full Text Available This paper introduces research on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution-extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model draws a parcel clean-up plan: it tests four parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan with the fewest clean-up parcels, considering that the optimal spatial configuration. The second model shifts the clean-up parcels of a work plan both vertically and horizontally on a grid pattern, with a sampling distance of one fifth of a parcel width, and keeps the optimal shifted version, again with the aim of reducing the final number of parcel features. The last model draws a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic, optimized plan generation (parcels and navigation lines). Applying the first model, we demonstrated that depending on the size and geometry of the features of the contaminated-area layer, the number of clean-up parcels generated by the model varies by 4% to 38% from plan to plan. Such significant variation in the resulting feature numbers shows that identifying the optimal orientation can save work, time and money in remediation. The tests demonstrated that the model gains efficiency when (1) the individual features of the contaminated area have a pronounced orientation (the features are long), and (2) the size of the pollution-extent features approaches the size of the parcels (scale effect). The second model shows only a 1% difference in feature number, so it is less interesting for planning

  4. Simulation Research on an Algorithm for Automatic Generation of Software Test Data

    Institute of Scientific and Technical Information of China (English)

    黄丽芬

    2012-01-01

    Test data are the most crucial element in software testing, and improving the method of automatic test data generation is important for the degree of software test automation. The traditional genetic algorithm suffers from local optima and slow convergence, which makes automatic test data generation inefficient. Aiming at the defects of the genetic algorithm and the ant colony algorithm, this paper proposes a new software test data generation algorithm that combines the two. First, the genetic algorithm, with its global search ability, is used to find a near-optimal solution; this solution is then converted into the initial pheromone of the ant colony algorithm. Finally, the best test data are found quickly through the positive feedback mechanism of the ant colony algorithm. By exploiting the global search of the genetic algorithm and the local search of the ant colony algorithm, the method improves the capability of test data generation. The experimental results show that the proposed genetic-ant colony algorithm improves the efficiency of software test data generation and has significant practical value.
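
    A toy sketch of the genetic-then-ant-colony idea (the fitness function, 5-bit encoding, and all parameters are illustrative, not the paper's): a GA phase finds a good solution, which then seeds the pheromone trails of an ACO phase.

```python
import random

def fitness(x):
    # Hypothetical coverage objective with a single peak at x == 21.
    return -(x - 21) ** 2

def ga_phase(pop, generations=30, mut=0.2, rng=None):
    """Tournament-selection GA over 5-bit integers; returns the best-ever solution."""
    rng = rng or random.Random(0)
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        for _ in range(len(pop)):
            a, b = rng.sample(pop, 2)              # binary tournament
            child = a if fitness(a) >= fitness(b) else b
            for bit in range(5):                   # bit-flip mutation
                if rng.random() < mut:
                    child ^= 1 << bit
            nxt.append(child)
        pop = nxt
        best = max([best] + pop, key=fitness)      # keep the best-ever solution
    return best

def aco_phase(seed_solution, iters=30, ants=15, rho=0.1, rng=None):
    """ACO over the same bit-string space, pheromone seeded by the GA result."""
    rng = rng or random.Random(1)
    tau = [[1.0, 1.0] for _ in range(5)]           # pheromone per (bit, value)
    for bit in range(5):
        tau[bit][(seed_solution >> bit) & 1] += 2.0
    best = seed_solution
    for _ in range(iters):
        for _ in range(ants):
            x = 0
            for bit in range(5):
                p1 = tau[bit][1] / (tau[bit][0] + tau[bit][1])
                if rng.random() < p1:
                    x |= 1 << bit
            if fitness(x) > fitness(best):
                best = x
        for bit in range(5):                       # evaporate, reinforce the best
            tau[bit][0] *= 1 - rho
            tau[bit][1] *= 1 - rho
            tau[bit][(best >> bit) & 1] += 1.0
    return best

population = list(range(0, 32, 2))                 # deterministic start, optimum excluded
best = aco_phase(ga_phase(population))
```

    Both phases only ever replace the incumbent with a better solution, so the combined result is never worse than the best member of the initial population.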

  5. Tracking in anatomic pathology.

    Science.gov (United States)

    Pantanowitz, Liron; Mackinnon, Alexander C; Sinard, John H

    2013-12-01

    Bar code-based tracking solutions, long present in clinical pathology laboratories, have recently made an appearance in anatomic pathology (AP) laboratories. Tracking of AP "assets" (specimens, blocks, slides) can enhance laboratory efficiency, promote patient safety, and improve patient care. Routing of excess clinical material into research laboratories and biorepositories is another avenue that can benefit from tracking of AP assets. Implementing tracking is not as simple as installing software and turning it on, and not all tracking solutions are alike. Careful analysis of laboratory workflow is needed before implementing tracking to ensure that the solution will meet the needs of the laboratory. Such analysis will likely uncover practices that may need to be modified before a tracking system can be deployed. Costs beyond that of purchasing software will be incurred and need to be considered in the budgeting process. Finally, people, not technology, are the key to assuring quality. Tracking will require significant changes in workflow and an overall change in the culture of the laboratory. Preparation, training, buy-in, and accountability of the people involved are crucial to the success of this process. This article reviews the benefits, available technology, underlying principles, and implementation of tracking solutions for the AP and research laboratory. PMID:23634908

  6. Hybrid evolutionary algorithm based fuzzy logic controller for automatic generation control of power systems with governor dead band non-linearity

    Directory of Open Access Journals (Sweden)

    Omveer Singh

    2016-12-01

    Full Text Available A new intelligent Automatic Generation Control (AGC) scheme based on Evolutionary Algorithms (EAs) and the Fuzzy Logic concept is developed for a multi-area power system. EAs, i.e. Genetic Algorithm-Simulated Annealing (GA-SA), are used to optimize the gains of the Fuzzy Logic Algorithm (FLA)-based AGC regulators for interconnected power systems. The multi-area power system model has three different types of plants, i.e. reheat, non-reheat and hydro, interconnected via Extra High Voltage Alternating Current transmission links. The dynamic model of the system is developed considering one of the most important non-linearities, the Governor Dead Band (GDB). The designed AGC regulators are implemented in the wake of a 1% load perturbation in one of the control areas, and dynamic response plots are obtained for various system states. The investigations carried out in the study reveal that the system dynamic performance with the hybrid GA-SA-tuned Fuzzy technique (GASATF)-based AGC controller is appreciably superior to that of the integral and FLA-based AGC controllers. It is also observed that incorporating the GDB non-linearity in the system dynamic model degrades the system dynamic performance.
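
    The simulated-annealing half of a GA-SA gain-tuning loop can be sketched as follows (the first-order plant, the integral-of-squared-error cost, and the gain bounds are stand-ins for the multi-area AGC model, not the paper's system):

```python
import math
import random

def cost(ki, steps=200, dt=0.05):
    """Integral of squared error for an integral controller on the toy plant
    dx/dt = -x + u, tracking a unit step (a stand-in for the AGC loop)."""
    x, integ, ise = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - x
        integ += err * dt
        x += (-x + ki * integ) * dt
        ise += err * err * dt
    return ise

def anneal(ki0=0.1, t0=1.0, alpha=0.95, iters=200, rng=None):
    """Simulated annealing over the single gain ki, bounded to [0.01, 10]."""
    rng = rng or random.Random(42)
    ki, best, t = ki0, ki0, t0
    for _ in range(iters):
        cand = min(10.0, max(0.01, ki + rng.gauss(0.0, 0.3)))
        delta = cost(cand) - cost(ki)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            ki = cand                      # accept better (or worse, with prob.)
        if cost(ki) < cost(best):
            best = ki                      # track the best-ever gain
        t *= alpha                         # cool the temperature
    return best

best_ki = anneal()
```

    Because the best-ever gain is tracked separately from the annealing walk, the returned gain is never worse than the starting point.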

  7. Automatic voltage regulation of synchronous generator using generalized predictive control; Ippanka yosoku seigyo wo mochiita doki hatsudenki no jido den`atsu chosei

    Energy Technology Data Exchange (ETDEWEB)

    Funabiki, S.; Yamakawa, S. [Okayama University, Okayama (Japan). Faculty of Engineering; Ito, T. [Nishishiba Electric Co. Ltd., Hyogo (Japan)

    1995-02-28

    For the automatic voltage regulator (AVR) of a synchronous generator, various applications of self-tuning digital control (STC), which successively adjusts the PID gains to cope with plant dynamics such as disturbances, have been tested. Along these lines, this paper proposes a stable and highly adaptable control system that uses generalized predictive control as the control law and the sequential least-squares method for identification. Experiments were carried out by simulation and on an experimental AVR, and the effectiveness of this control method was confirmed. The characteristics of this AVR can be summarized as follows: the computation time is short and a highly accurate identification value is obtainable; since the forgetting coefficient is determined by the supremum trace gain method, the adaptability of the parameter identification is increased; and stable control is obtained even if the plant is a non-minimum-phase system. 10 refs., 11 figs., 2 tabs.
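
    The sequential least-squares identification step can be sketched as a recursive least-squares estimator with a forgetting factor on a toy first-order model (the model parameters, input signal, and forgetting factor are illustrative assumptions, not the paper's plant):

```python
import math

def simulate_and_identify(n=200, lam=0.98):
    """Recursive least squares with forgetting factor lam, identifying
    y[k] = a*y[k-1] + b*u[k-1] (true a=0.9, b=0.5) from noise-free data."""
    a_true, b_true = 0.9, 0.5
    theta = [0.0, 0.0]                        # parameter estimates [a, b]
    P = [[1000.0, 0.0], [0.0, 1000.0]]        # covariance matrix
    y_prev, u_prev = 0.0, 1.0
    for k in range(n):
        y = a_true * y_prev + b_true * u_prev
        phi = [y_prev, u_prev]                # regressor vector
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]          # gain vector
        err = y - (phi[0] * theta[0] + phi[1] * theta[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        # Covariance update: P = (P - K * phi' * P) / lam
        P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
             [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]
        y_prev, u_prev = y, math.sin(0.7 * k) + 0.5     # persistently exciting input
    return theta

a_hat, b_hat = simulate_and_identify()
```

    With noise-free data and a persistently exciting input, the estimates converge to the true parameters; the forgetting factor keeps the estimator responsive to slow parameter drift.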

  8. Automatic Generation of Instrument Data Sheets and Instrument Index with Office VBA

    Institute of Scientific and Technical Information of China (English)

    郭非; 范琳; 付荣申; 陈松华

    2012-01-01

    Currently, most instrument design files in engineering companies, such as instrument data sheets and instrument indexes, are filled in manually or by copy and paste, which is slow and inaccurate and, to some extent, affects the quality of the design documents and the engineering schedule. To address this, software developed with the VBA development tools is introduced that automatically fills in the process parameters of instrument data sheets and automatically generates the instrument index. Engineering applications show that the software effectively reduces the designers' workload and improves the quality of the design deliverables.

  9. Review of the Historical Evolution of Anatomical Terms

    Directory of Open Access Journals (Sweden)

    Algieri, Rubén D.

    2011-12-01

    English, listing which updates and supersedes all previous nomenclatures. In September 2001, the Spanish Anatomical Society translated this International Anatomical Terminology into the Spanish language. The study of the historical background of the worldwide development of anatomical terms gives us valuable data about the origin and foundation of the names. It is necessary to raise awareness about the implementation of a unified, updated and uniform anatomical terminology when conducting scientific communications and publications. As specialists in this discipline, we must study and know the official list of anatomical terms in use worldwide (the International Anatomical Terminology) and its equivalence with previous classifications, keeping ourselves updated about its changes so as to teach it to new generations of health professionals.

  10. Automatic Generation of Neural Networks

    OpenAIRE

    A. Fiszelew; P. Britos; G. Perichisky; R. García-Martínez

    2003-01-01

    This work deals with methods for finding optimal neural network architectures to learn particular problems. A genetic algorithm is used to discover suitable domain specific architectures; this evolutionary algorithm applies direct codification and uses the error from the trained network as a performance measure to guide the evolution. The network training is accomplished by the back-propagation algorithm; techniques such as training repetition, early stopping and complex regulation are employ...

  11. A Mathematical Framework for Incorporating Anatomical Knowledge in DT-MRI Analysis

    OpenAIRE

    Maddah, Mahnaz; Zöllei, Lilla; Grimson, W. Eric L.; Westin, Carl-Fredrik; Wells, William M.

    2008-01-01

    We propose a Bayesian approach to incorporate anatomical information in the clustering of fiber trajectories. An expectation-maximization (EM) algorithm is used to cluster the trajectories, in which an atlas serves as the prior on the labels. The atlas guides the clustering algorithm and makes the resulting bundles anatomically meaningful. In addition, it provides the seed points for the tractography and initial settings of the EM algorithm. The proposed approach provides a robust and automat...

  12. ANATOMICAL PROPERTIES OF PLANTAGO ARENARIA

    OpenAIRE

    Nicoleta IANOVICI; SINITEAN, Adrian; Aurel FAUR

    2011-01-01

    Psammophytes are marked by a number of adaptations that enable them to exist in the harsh environmental conditions of sand habitats. In this study, the anatomical characteristics of Plantago arenaria were examined. Studies were conducted to assess the diversity of anatomical adaptations of the vegetative organs in this taxon. Results are presented with original photographs. The analysis of leaf anatomy in P. arenaria showed that the leaves exhibited xeromorphic traits. Arbuscular my...

  13. Wien Automatic System Planning (WASP) Package. A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 1: Chapters 1-11

    International Nuclear Information System (INIS)

    As a continuation of its effort to provide comprehensive and impartial guidance to Member States facing the need for introducing nuclear power, the IAEA has completed a new version of the Wien Automatic System Planning (WASP) Package for carrying out power generation expansion planning studies. WASP was originally developed in 1972 in the USA to meet the IAEA's needs to analyze the economic competitiveness of nuclear power in comparison to other generation expansion alternatives for supplying the future electricity requirements of a country or region. The model was first used by the IAEA to conduct global studies (Market Survey for Nuclear Power Plants in Developing Countries, 1972-1973) and to carry out Nuclear Power Planning Studies for several Member States. The WASP system developed into a very comprehensive planning tool for electric power system expansion analysis. Following these developments, the so-called WASP-III version was produced in 1979. This version introduced important improvements to the system, namely in the treatment of hydroelectric power plants. The WASP-III version has been continually updated and maintained in order to incorporate needed enhancements. In 1981, the Model for Analysis of Energy Demand (MAED) was developed in order to allow the determination of electricity demand, consistent with the overall requirements for final energy, and thus, to provide a more adequate forecast of electricity needs to be considered in the WASP study. MAED and WASP have been used by the Agency for the conduct of Energy and Nuclear Power Planning Studies for interested Member States. More recently, the VALORAGUA model was completed in 1992 as a means for helping in the preparation of the hydro plant characteristics to be input in the WASP study and to verify that the WASP overall optimized expansion plan takes also into account an optimization of the use of water for electricity generation. The combined application of VALORAGUA and WASP permits the

  14. Research on Technologies of Service-oriented Test Program Automatic Generation%面向服务的测试程序自动生成技术研究

    Institute of Scientific and Technical Information of China (English)

    王成; 杨森; 孟晨

    2012-01-01

    Automatic test program generation is one of the key technologies of the new generation of automatic test systems (ATS). Starting from a service-oriented perspective, this paper first builds an overall framework for service-oriented automatic test program generation. It then introduces the Test Flow Description Language (TFDL): the XML test description is translated into TFDL through an XSLT template, and the TFDL compiler translates the TFDL into an intermediate C program. Finally, the test program is generated automatically by a commercial compiler.
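
    A toy illustration of compiling an XML test description into a C program (the XML schema, the `run_step` call, and `ats_runtime.h` are invented for this sketch; the actual system goes through TFDL and XSLT):

```python
import xml.etree.ElementTree as ET

TEST_FLOW = """
<testflow>
  <step instrument="dmm" action="measure_voltage" limit="5.0"/>
  <step instrument="psu" action="set_output" value="3.3"/>
</testflow>
"""

def to_c(xml_text):
    """Translate each <step> element into a call in a generated C test
    program (a toy stand-in for the TFDL-to-C middle-program stage)."""
    root = ET.fromstring(xml_text)
    lines = ['#include "ats_runtime.h"', "", "int main(void) {"]
    for step in root.iter("step"):
        args = ", ".join('"%s"' % step.get(a) for a in sorted(step.attrib))
        lines.append("    run_step(%s);" % args)
    lines.append("    return 0;")
    lines.append("}")
    return "\n".join(lines)

c_source = to_c(TEST_FLOW)
```

    In the real system this intermediate C source would then be handed to an off-the-shelf compiler to produce the executable test program.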

  15. Automatic learning-based beam angle selection for thoracic IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy; Marshall, Andrea [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, Toronto, Ontario M5G 1P5 (Canada); Levinshtein, Alex [Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4 (Canada); Hope, Andrew J.; Lindsay, Patricia [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Pekar, Vladimir [Philips Healthcare, Markham, Ontario L6C 2S3 (Canada)

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
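
    The selection step on top of the learned per-beam scores might look like the following greedy sketch (the scores, the minimum angular separation constraint, and the sample angles are illustrative; the paper's optimization scheme also adjusts the angles):

```python
def select_beams(scores, k=3, min_sep=40):
    """Greedy selection: repeatedly take the highest-scoring angle that keeps
    at least min_sep degrees (on the circle) from already chosen beams."""
    def ang_dist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)
    chosen = []
    for angle, _ in sorted(scores.items(), key=lambda kv: -kv[1]):
        if all(ang_dist(angle, c) >= min_sep for c in chosen):
            chosen.append(angle)
        if len(chosen) == k:
            break
    return sorted(chosen)

# Hypothetical per-angle scores, standing in for the random-forest output.
scores = {0: 0.9, 20: 0.85, 60: 0.7, 120: 0.5, 180: 0.6, 300: 0.8}
beams = select_beams(scores, k=3)
```

    Here the 20-degree beam is skipped despite its high score because it lies too close to the already chosen 0-degree beam, which is a crude stand-in for the learned interbeam dependencies.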

  16. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume

  17. Quantifying anatomical shape variations in neurological disorders.

    Science.gov (United States)

    Singh, Nikhil; Fletcher, P Thomas; Preston, J Samuel; King, Richard D; Marron, J S; Weiner, Michael W; Joshi, Sarang

    2014-04-01

    We develop a multivariate analysis of brain anatomy to identify the relevant shape deformation patterns and quantify the shape changes that explain corresponding variations in clinical neuropsychological measures. We use kernel Partial Least Squares (PLS) and formulate a regression model in the tangent space of the manifold of diffeomorphisms characterized by deformation momenta. The scalar deformation momenta completely encode the diffeomorphic changes in anatomical shape. In this model, the clinical measures are the response variables, while the anatomical variability is treated as the independent variable. To better understand the "shape-clinical response" relationship, we also control for demographic confounders, such as age, gender, and years of education in our regression model. We evaluate the proposed methodology on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline structural MR imaging data and neuropsychological evaluation test scores. We demonstrate the ability of our model to quantify the anatomical deformations in units of clinical response. Our results also demonstrate that the proposed method is generic and generates reliable shape deformations both in terms of the extracted patterns and the amount of shape changes. We found that while the hippocampus and amygdala emerge as mainly responsible for changes in test scores for global measures of dementia and memory function, they are not a determinant factor for executive function. Another critical finding was the appearance of thalamus and putamen as most important regions that relate to executive function. These resulting anatomical regions were consistent with very high confidence irrespective of the size of the population used in the study. This data-driven global analysis of brain anatomy was able to reach similar conclusions as other studies in Alzheimer's disease based on predefined ROIs, together with the identification of other new patterns of deformation. The
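
    The "control for demographic confounders" step can be sketched as simple OLS residualization with one confounder (the paper handles several confounders inside a kernel PLS regression; the age/score data below are invented):

```python
def residualize(y, c):
    """Return OLS residuals of y after removing the linear effect of the
    confounder c (e.g. a clinical score adjusted for age)."""
    n = len(y)
    my, mc = sum(y) / n, sum(c) / n
    beta = (sum((ci - mc) * (yi - my) for yi, ci in zip(y, c))
            / sum((ci - mc) ** 2 for ci in c))
    return [yi - my - beta * (ci - mc) for yi, ci in zip(y, c)]

ages = [60.0, 70.0, 80.0]
scores = [2.0 * a for a in ages]      # a score driven entirely by age
adjusted = residualize(scores, ages)
```

    When the score is fully explained by the confounder, as in this toy example, the adjusted values are all zero, so no spurious "shape-clinical" relationship remains to be attributed to anatomy.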

  18. Wien Automatic System Planning (WASP) Package. A computer code for power generating system expansion planning. Version WASP-IV. User's manual

    International Nuclear Information System (INIS)

    As a continuation of its efforts to provide methodologies and tools to Member States to carry out comparative assessment and analyse priority environmental issues related to the development of the electric power sector, the IAEA has completed a new version of the Wien Automatic System Planning (WASP) Package WASP-IV for carrying out power generation expansion planning taking into consideration fuel availability and environmental constraints. This manual constitutes a part of this work and aims to provide users with a guide to use effectively the new version of the model WASP-IV. WASP was originally developed in 1972 by the Tennessee Valley Authority and the Oak Ridge National Laboratory in the USA to meet the IAEA needs to analyse the economic competitiveness of nuclear power in comparison to other generation expansion alternatives for supplying the future electricity requirements of a country or region. Previous versions of the model were used by Member States in many national and regional studies to analyse the electric power system expansion planning and the role of nuclear energy in particular. Experience gained from its application allowed development of WASP into a very comprehensive planning tool for electric power system expansion analysis. New, improved versions were developed, which took into consideration the needs expressed by the users of the programme in order to address important emerging issues being faced by the electric system planners. In 1979, WASP-III was released and soon after became an indispensable tool in many Member States for generation expansion planning. The WASP-III version was continually upgraded and the development of version WASP-III Plus commenced in 1992. By 1995, WASP-III Plus was completed, which followed closely the methodology of the WASP-III but incorporated new features. In order to meet the needs of electricity planners and following the recommendations of the Helsinki symposium, development of a new version of WASP was

  19. Methodology for Automatic Generation of Models for Large Urban Spaces Based on GIS Data/Metodología para la generación automática de modelos de grandes espacios urbanos desde información SIG/

    OpenAIRE

    Sergio Arturo Ordóñez Medina; Jhon Alejandro Triana Trujillo; Andrés Felipe Padilla Ramos; José Tiberio Hernández Peñaloza

    2012-01-01

    In the planning and evaluation stages of infrastructure projects, it is necessary to manage huge quantities of information. Cities are very complex systems, which need to be modeled when an intervention is required. Such models allow us to measure the impact of infrastructure changes, simulating hypothetical scenarios and evaluating results. This paper describes a methodology for the automatic generation of urban space models from GIS sources. A Voronoi diagram is used to partition large urban r...

  20. A GPS-track-based method for automatically generating road-network vector maps

    Institute of Scientific and Technical Information of China (English)

    孔庆杰; 史文欢; 刘允才

    2012-01-01

    A method is proposed for automatically generating large-scale road-network vector maps from the tracks of GPS probe vehicles. The method does not need base maps of the road network: using only the tracks formed as GPS probe vehicles move through the road network, it automatically reflects the real topology of the road network on the digital map. The proposed method consists of three steps. First, transform the earth longitude and latitude coordinates of the GPS-track data into urban map coordinates; then, generate a raster skeleton map of the road network from the transformed GPS-track data; finally, vectorize the generated skeleton map. Experiments on automatically generating a real road network from real-world GPS-track data indicate that the proposed method successfully generates road-network maps, and that the generated vector digital map has high accuracy and can satisfy the requirements of automatic and timely digital-map updating in vehicle navigation systems, traffic guidance systems, etc.
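
    The rasterization step (track points to skeleton cells) can be sketched as a simple grid-binning pass (the cell size, hit threshold, and sample coordinates are illustrative; the real method also performs the coordinate transformation and the final vectorization):

```python
from collections import Counter

def rasterize_tracks(points, cell=1.0, min_hits=2):
    """Bin GPS points into a square grid; cells hit by at least min_hits
    points are kept as the raster skeleton of the road network."""
    hits = Counter((int(x // cell), int(y // cell)) for x, y in points)
    return {c for c, n in hits.items() if n >= min_hits}

# Two clusters of track points along a road, plus one stray GPS fix.
tracks = [(0.1, 0.2), (0.4, 0.3), (1.2, 0.1), (1.5, 0.4), (5.0, 5.0)]
cells = rasterize_tracks(tracks)
```

    Requiring a minimum number of hits per cell filters out isolated GPS noise, so only cells repeatedly visited by probe vehicles survive into the skeleton.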

  1. Automatic radioactive waste recycling

    International Nuclear Information System (INIS)

    The production of a plutonium ingot by calcium reduction process at CEA/Valduc generates a residue called 'slag'. This article introduces the recycling unit which is dedicated to the treatment of slags. The aim is to separate and to recycle the plutonium trapped in this bulk on the one hand, and to generate a disposable waste from the slag on the other hand. After a general introduction of the facilities, some elements will be enlightened, particularly the dissolution step, the filtration and the drying equipment. Reflections upon technological constraints will be proposed, and the benefits of a fully automatic recycling unit of nuclear waste will also be stressed. (authors)

  2. ANATOMICAL PROPERTIES OF PLANTAGO ARENARIA

    Directory of Open Access Journals (Sweden)

    Nicoleta IANOVICI

    2011-01-01

    Full Text Available Psammophytes are marked by a number of adaptations that enable them to exist in the harsh environmental conditions of sand habitats. In this study, the anatomical characteristics of Plantago arenaria were examined. Studies were conducted to assess the diversity of anatomical adaptations of the vegetative organs in this taxon. Results are presented with original photographs. The analysis of leaf anatomy in P. arenaria showed that the leaves exhibited xeromorphic traits. Arbuscular mycorrhizal symbiosis seems to be critical for their survival.

  3. Pattern recognition of anatomical shapes in CT scans

    International Nuclear Information System (INIS)

    In medical image processing pattern recognition has become of major value in anatomical analysis and in computer aided information processing. Specifically, pattern recognition techniques simplify software development by means of which clinicians can manipulate anatomical relationships. As part of an overall CT pattern recognition system, a sequential edge tracking routine was devised together with a normalized Fourier descriptor analysis of identified shapes. A collection of shapes were extracted from CT scans of two patients and entered into an anatomic shape dictionary. This dictionary was employed in pattern matching experiments and in three-dimensional anatomical reconstruction. A sequential-edge tracking algorithm of high reliability, consistency, and image invariance, capable of utilizing heuristic and statistical rules, was demonstrated. Tests of pattern matching algorithms based on Fourier descriptors provided rapid and accurate body organ recognition of shapes extracted from de novo images using the shape dictionary. Results indicate that automated contour extraction and object recognition from cross-sectional images of human anatomy can be performed effectively, reliably, and rapidly. This abstract discusses an image processing environment that circumvents manual and subjective shape extraction, by substituting automatic and quantitative shape extraction, pattern matching and object recognition
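
    The normalized Fourier descriptor idea can be sketched directly with a discrete Fourier transform of the complex-valued contour (the contour sampling and the number of retained harmonics are illustrative):

```python
import cmath

def fourier_descriptors(contour, keep=4):
    """Normalized Fourier descriptors of a closed contour of (x, y) points.
    Dropping the DC term gives translation invariance, dividing by the first
    harmonic's magnitude gives scale invariance, and taking magnitudes
    discards phase (rotation / start-point invariance)."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    coeff = [sum(z[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)) / n
             for k in range(n)]
    scale = abs(coeff[1])
    return [abs(c) / scale for c in coeff[2:2 + keep]]

square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
bigger = [(10 + 3 * x, 5 + 3 * y) for x, y in square]   # scaled + translated copy
d1, d2 = fourier_descriptors(square), fourier_descriptors(bigger)
```

    The scaled and translated copy of the square yields the same descriptor vector, which is exactly the property that lets shapes extracted from new scans be matched against a shape dictionary regardless of position and size.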

  4. Automatic generation of 3D fine mesh geometries for the analysis of the venus-3 shielding benchmark experiment with the Tort code

    Energy Technology Data Exchange (ETDEWEB)

    Pescarini, M.; Orsi, R.; Martinelli, T. [ENEA, Ente per le Nuove Tecnologie, l' Energia e l' Ambiente, Centro Ricerche Ezio Clementel Bologna (Italy)

    2003-07-01

    In many practical radiation transport applications today the cost for solving refined, large size and complex multi-dimensional problems is not so much computing but is linked to the cumbersome effort required by an expert to prepare a detailed geometrical model, verify and validate that it is correct and represents, to a specified tolerance, the real design or facility. This situation is, in particular, relevant and frequent in reactor core criticality and shielding calculations, with three-dimensional (3D) general purpose radiation transport codes, requiring a very large number of meshes and high performance computers. The need has clearly emerged for developing tools that make the task easier for the physicist or engineer, by reducing the time required, by facilitating through effective graphical display the verification of correctness and, finally, by helping the interpretation of the results obtained. The paper shows the results of efforts in this field through detailed simulations of a complex shielding benchmark experiment. In the context of the activities proposed by the OECD/NEA Nuclear Science Committee (NSC) Task Force on Computing Radiation Dose and Modelling of Radiation-Induced Degradation of Reactor Components (TFRDD), the ENEA-Bologna Nuclear Data Centre contributed with an analysis of the VENUS-3 low-flux neutron shielding benchmark experiment (SCK/CEN-Mol, Belgium). One of the targets of the work was to test the BOT3P system, originally developed at the Nuclear Data Centre in ENEA-Bologna and now released to the OECD/NEA Data Bank for free distribution. BOT3P, an ancillary system of the DORT (2D) and TORT (3D) SN codes, permits a flexible automatic generation of spatial mesh grids in Cartesian or cylindrical geometry, through combinatorial geometry algorithms, following a simplified user-friendly approach. This system demonstrated its validity also in core criticality analyses, as for example the Lewis MOX fuel benchmark, permitting to easily
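
    The combinatorial idea of building a mesh from region boundaries and per-region subdivision counts can be sketched in one dimension (the boundaries and counts below are illustrative; BOT3P works with 2-D/3-D Cartesian or cylindrical geometry):

```python
def mesh_from_regions(boundaries, per_region):
    """Build a 1-D Cartesian mesh: subdivide each region between consecutive
    boundaries into the given number of equal intervals and merge the results."""
    coords = [boundaries[0]]
    for lo, hi, n in zip(boundaries, boundaries[1:], per_region):
        step = (hi - lo) / n
        coords.extend(lo + step * (i + 1) for i in range(n))
    return coords

# Two material regions: [0, 2] split into 2 intervals, [2, 5] into 3.
mesh = mesh_from_regions([0.0, 2.0, 5.0], [2, 3])
```

    A full 3-D Cartesian grid would be the product of three such coordinate lists, one per axis, which is why specifying boundaries and subdivision counts is so much more compact than enumerating every mesh plane by hand.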

  5. Automatic generation of 3D fine mesh geometries for the analysis of the venus-3 shielding benchmark experiment with the Tort code

    International Nuclear Information System (INIS)

    In many practical radiation transport applications today, the cost of solving refined, large, and complex multi-dimensional problems lies not so much in computing as in the cumbersome effort required of an expert to prepare a detailed geometrical model and to verify and validate that it is correct and represents, to a specified tolerance, the real design or facility. This situation is particularly relevant and frequent in reactor core criticality and shielding calculations with three-dimensional (3D) general-purpose radiation transport codes, which require a very large number of meshes and high-performance computers. A clear need has emerged for tools that ease the task of the physicist or engineer by reducing the time required, by facilitating the verification of correctness through effective graphical display, and by helping the interpretation of the results obtained. The paper shows the results of efforts in this field through detailed simulations of a complex shielding benchmark experiment. In the context of the activities proposed by the OECD/NEA Nuclear Science Committee (NSC) Task Force on Computing Radiation Dose and Modelling of Radiation-Induced Degradation of Reactor Components (TFRDD), the ENEA-Bologna Nuclear Data Centre contributed an analysis of the VENUS-3 low-flux neutron shielding benchmark experiment (SCK/CEN-Mol, Belgium). One of the targets of the work was to test the BOT3P system, originally developed at the Nuclear Data Centre in ENEA-Bologna and currently released to the OECD/NEA Data Bank for free distribution. BOT3P, an ancillary system of the DORT (2D) and TORT (3D) SN codes, permits flexible automatic generation of spatial mesh grids in Cartesian or cylindrical geometry, through combinatorial geometry algorithms, following a simplified user-friendly approach. This system also demonstrated its validity in core criticality analyses, as for example the Lewis MOX fuel benchmark, permitting to easily

  6. GBM heterogeneity characterization by radiomic analysis of phenotype anatomical planes

    Science.gov (United States)

    Chaddad, Ahmad; Desrosiers, Christian; Toews, Matthew

    2016-03-01

    Glioblastoma multiforme (GBM) is the most common malignant primary tumor of the central nervous system, characterized among other traits by rapid metastasis. Three tissue phenotypes closely associated with GBMs, namely necrosis (N), contrast enhancement (CE), and edema/invasion (E), exhibit characteristic patterns of texture heterogeneity in magnetic resonance images (MRI). In this study, we propose a novel model to characterize GBM tissue phenotypes using gray-level co-occurrence matrices (GLCM) in three anatomical planes. The GLCM encodes local image patches in terms of informative, orientation-invariant texture descriptors, which are used here to sub-classify GBM tissue phenotypes. Experiments demonstrate the model on MRI data of 41 GBM patients obtained from The Cancer Genome Atlas (TCGA). Intensity-based automatic image registration is applied to align corresponding pairs of fixed T1-weighted (T1-WI) post-contrast and fluid-attenuated inversion recovery (FLAIR) images. GBM tissue regions are then segmented using the 3D Slicer tool. Texture features are computed from 12 quantifier functions operating on GLCM descriptors generated from MRI intensities within segmented GBM tissue regions. Various classifier models are used to evaluate the effectiveness of the texture features for discriminating between GBM phenotypes. Results based on T1-WI scans showed a phenotype classification accuracy of over 88.14%, a sensitivity of 85.37%, and a specificity of 96.1% using the linear discriminant analysis (LDA) classifier. This model has the potential to provide important characteristics of tumors, which can be used for the sub-classification of GBM phenotypes.
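A minimal NumPy sketch of the GLCM texture descriptors this kind of model builds on, for a single pixel offset (the toy image, the single offset, and the three quantifiers are illustrative; the study uses 12 quantifier functions over three anatomical planes):

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    dr, dc = offset
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
    P = P + P.T                      # symmetric pairing of gray levels
    return P / P.sum()               # normalize to a joint probability

def texture_features(P):
    """Three classic GLCM quantifiers (a subset of the 12 used in the study)."""
    i, j = np.indices(P.shape)
    return {
        "contrast":    float(np.sum(P * (i - j) ** 2)),
        "energy":      float(np.sum(P ** 2)),
        "homogeneity": float(np.sum(P / (1.0 + np.abs(i - j)))),
    }

# toy 4-level "image patch", stand-in for an MRI tissue region
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
feats = texture_features(glcm(img, levels=4))
```

Feature vectors of this kind, one per tissue region and anatomical plane, would then be fed to a classifier such as LDA.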

  7. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  18. Development of an automatic positioning system of photovoltaic panels for electric energy generation; Desenvolvimento de um sistema de posicionamento automatico de placas fotovoltaicas para a geracao de energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Alceu F.; Cagnon, Odivaldo Jose [Universidade Estadual Paulista (DEE/FEB/UNESP), Bauru, SP (Brazil). Fac. de Engenharia. Dept. de Engenharia Eletrica; Seraphin, Odivaldo Jose [Universidade Estadual Paulista (DER/FCA/UNESP), Botucatu, SP (Brazil). Fac. de Ciencias Agronomicas. Dept. de Engenharia Rural

    2008-07-01

    This work presents an automatic positioning system for photovoltaic panels, intended to improve the conversion of solar energy to electric energy. A prototype with automatic movement was developed, and its efficiency in generating electric energy was compared to that of another panel with the same characteristics but fixed in space. Preliminary results point to a significant increase in efficiency, obtained from a simplified positioning process in which sensors are not used to determine the Sun's apparent position; instead, the relative Sun-Earth position equations are used. An innovative mechanical movement system is also presented, using two stepper motors to move the panel along two axes with independent movement, thereby saving energy during positioning. The use of the proposed system in rural areas is suggested. (author)
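The sensorless approach rests on computing the Sun's apparent position from geometry alone. A sketch using the Cooper approximation for solar declination and the standard hour-angle formula (the formula choice is ours; the prototype's exact equations are not given in the abstract):

```python
import math

def solar_elevation(day_of_year, hour_local_solar, latitude_deg):
    """Approximate solar elevation (degrees) from Sun-Earth geometry alone.
    Uses the Cooper declination formula; accuracy of roughly one degree,
    which is sufficient for pointing a panel without sensors."""
    # solar declination (radians), Cooper's approximation
    decl = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    # hour angle: 15 degrees per hour away from local solar noon
    hour_angle = math.radians(15.0 * (hour_local_solar - 12.0))
    lat = math.radians(latitude_deg)
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_elev))

# sanity check: solar noon at the equator on an equinox puts the Sun near zenith
elev_noon = solar_elevation(day_of_year=81, hour_local_solar=12.0, latitude_deg=0.0)
```

A controller would evaluate such equations at each positioning time and drive the two stepper motors to the computed elevation and azimuth, with no light sensors required.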

  9. Geodesic atlas-based labeling of anatomical trees

    DEFF Research Database (Denmark)

    Feragen, Aasa; Petersen, Jens; Owen, Megan; Lo, Pechin; Thomsen, Laura Hohwu; Wille, Mathilde Marie Winkler; Dirksen, Asger; de Bruijne, Marleen

    2015-01-01

    We present a fast and robust atlas-based algorithm for labeling airway trees, using geodesic distances in a geometric tree-space. Possible branch label configurations for an unlabeled airway tree are evaluated using distances to a training set of labeled airway trees. In tree-space, airway tree topology and geometry change continuously, giving a natural automatic handling of anatomical differences and noise. A hierarchical approach makes the algorithm efficient, assigning labels from the trachea and downwards. Only the airway centerline tree is used, which is relatively unaffected by pathology.

  10. An Automatic Electrical Diagram Generation Method for Distribution Networks Based on Hierarchical Topology Model

    Institute of Scientific and Technical Information of China (English)

    廖凡钦; 刘东; 闫红漫; 于文鹏; 黄玉辉; 万顷波

    2014-01-01

    The automatic generation of the electrical diagram for a distribution network is a complex optimization problem whose essence is to determine the relative positions of the network's equipment in a 2-D plane. Based on a simplified hierarchical topology model, an automatic drawing algorithm is proposed that decomposes the problem into three steps: preliminary layout, skeleton routing, and complete drawing. The preliminary layout is obtained with a gravitation-repulsion model. The power station drawing is completed through equipment classification and comparison of the dip angles of the outlet lines. Routing priorities are used to ensure that trunk lines are drawn without overlapping or crossing. The complete electrical diagram is then generated in full correspondence with the original topology structure. Finally, a practical case study of a city's distribution network demonstrates the effectiveness of the proposed method and its automatic layout algorithm.
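The gravitation-repulsion preliminary layout can be sketched as a force-directed iteration in which connected equipment attracts and all equipment pairs repel; the force laws, parameters, and the tiny example network below are hypothetical stand-ins for the paper's model:

```python
import numpy as np

def force_layout(pos, edges, k=1.0, dt=0.05, max_step=0.1, n_iter=500):
    """Gravitation-repulsion layout: every node pair repels (~k^2/d^2),
    every edge attracts (~d^2/k); per-iteration displacement is capped
    for stability, in the style of Fruchterman-Reingold."""
    pos = pos.astype(float).copy()
    for _ in range(n_iter):
        force = np.zeros_like(pos)
        for i in range(len(pos)):
            diff = pos[i] - pos                      # vectors toward node i
            dist = np.linalg.norm(diff, axis=1)
            dist[i] = np.inf                         # no self-interaction
            force[i] += ((k * k / dist ** 3)[:, None] * diff).sum(axis=0)
        for i, j in edges:
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-9
            force[i] += (dist / k) * d               # spring pull along edge
            force[j] -= (dist / k) * d
        step = dt * force
        norm = np.linalg.norm(step, axis=1, keepdims=True)
        step = np.where(norm > max_step, step * (max_step / norm), step)
        pos += step
    return pos

rng = np.random.default_rng(1)
edges = [(0, 1), (1, 2), (2, 3), (1, 4)]             # a small feeder-like tree
pos = force_layout(rng.random((5, 2)), edges)
```

The resulting coordinates would then feed the later skeleton-routing and complete-drawing stages, which enforce the no-overlap and no-crossing constraints.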

  11. Derivation of high-resolution MRI atlases of the human cerebellum at 3T and segmentation using multiple automatically generated templates.

    Science.gov (United States)

    Park, Min Tae M; Pipitone, Jon; Baer, Lawrence H; Winterburn, Julie L; Shah, Yashvi; Chavez, Sofia; Schira, Mark M; Lobaugh, Nancy J; Lerch, Jason P; Voineskos, Aristotle N; Chakravarty, M Mallar

    2014-07-15

    The cerebellum has classically been linked to motor learning and coordination. However, there is renewed interest in the role of the cerebellum in non-motor functions such as cognition and in the context of different neuropsychiatric disorders. The contribution of neuroimaging studies to advancing understanding of cerebellar structure and function has been limited, partly due to the cerebellum being understudied as a result of contrast and resolution limitations of standard structural magnetic resonance images (MRI). These limitations inhibit proper visualization of the highly compact and detailed cerebellar foliations. In addition, there is a lack of robust algorithms that automatically and reliably identify the cerebellum and its subregions, further complicating the design of large-scale studies of the cerebellum. As such, automated segmentation of the cerebellar lobules would allow detailed population studies of the cerebellum and its subregions. In this manuscript, we describe a novel set of high-resolution in vivo atlases of the cerebellum developed by pairing MR imaging with a carefully validated manual segmentation protocol. Using these cerebellar atlases as inputs, we validate a novel automated segmentation algorithm that takes advantage of the neuroanatomical variability that exists in a given population under study in order to automatically identify the cerebellum and its lobules. Our automatic segmentation results demonstrate good accuracy in the identification of all lobules (mean Kappa [κ]=0.731; range 0.40-0.89), and the entire cerebellum (mean κ=0.925; range 0.90-0.94) when compared to "gold-standard" manual segmentations. These results compare favorably to other publicly available methods for automatic segmentation of the cerebellum. The completed cerebellar atlases are available freely online (http://imaging-genetics.camh.ca/cerebellum) and can be customized to the unique neuroanatomy of different subjects using the proposed
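The multi-template idea can be sketched as per-voxel majority voting over candidate segmentations, with a Dice-style overlap used for validation against a gold standard. The toy 1-D "volume" below is invented; the template registration step that produces the candidates is not shown:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse candidate segmentations (one per template) by per-voxel majority vote."""
    stack = np.stack(label_maps)                 # (n_templates, *volume_shape)
    n_labels = int(stack.max()) + 1
    counts = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return counts.argmax(axis=0)                 # most frequent label per voxel

def dice(a, b, label=1):
    """Dice overlap of one label between two segmentations."""
    a, b = (a == label), (b == label)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# three toy candidate segmentations of a 1-D "volume" (0 = background, 1 = structure)
c1 = np.array([0, 1, 1, 1, 0])
c2 = np.array([0, 1, 1, 0, 0])
c3 = np.array([0, 0, 1, 1, 0])
fused = majority_vote([c1, c2, c3])
```

Voting over many automatically generated templates damps the errors of any single registration, which is the intuition behind the favorable κ values reported above.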

  12. Unifying the analyses of anatomical and diffusion tensor images using volume-preserved warping

    DEFF Research Database (Denmark)

    Xu, Dongrong; Hao, Xuejun; Bansal, Ravi;

    2007-01-01

    PURPOSE: To introduce a framework that automatically identifies regions of anatomical abnormality within anatomical MR images and uses those regions in hypothesis-driven selection of seed points for fiber tracking with diffusion tensor (DT) imaging (DTI). MATERIALS AND METHODS: Regions of interest (ROIs) are first extracted from MR images using an automated algorithm for volume-preserved warping (VPW) that identifies localized volumetric differences across groups. ROIs then serve as seed points for fiber tracking in coregistered DT images. Another algorithm automatically clusters and compares morphologies of detected fiber bundles. We tested our framework using datasets from a group of patients with Tourette's syndrome (TS) and normal controls. RESULTS: Our framework automatically identified regions of localized volumetric differences across groups and then used those regions as seed points for...

  13. Digital photography in anatomical pathology

    OpenAIRE

    Leong F; Leong A

    2004-01-01

    Digital imaging has made major inroads into the routine practice of anatomical pathology, replacing photographic prints and Kodachromes for reporting and conference purposes. More advanced systems coupled to computers allow greater versatility and faster turnaround, as well as lower costs of incorporating macroscopic and microscopic pictures into pathology reports and publications. Digital images can be transmitted to remote sites via the Internet for consultation, quality assurance and education...

  14. Anatomic Optical Coherence Tomography of Upper Airways

    Science.gov (United States)

    Chin Loy, Anthony; Jing, Joseph; Zhang, Jun; Wang, Yong; Elghobashi, Said; Chen, Zhongping; Wong, Brian J. F.

    The upper airway is a complex and intricate system responsible for respiration, phonation, and deglutition. Obstruction of the upper airways afflicts an estimated 12-18 million Americans. Pharyngeal size and shape are important factors in the pathogenesis of airway obstructions. In addition, nocturnal loss in pharyngeal muscular tone combined with high pharyngeal resistance can lead to collapse of the airway and periodic partial or complete upper airway obstruction. Anatomical optical coherence tomography (OCT) has the potential to provide high-speed three-dimensional tomographic images of the airway lumen without the use of ionizing radiation. In this chapter we describe the methods behind endoscopic OCT imaging and processing to generate full three-dimensional anatomical models of the human airway, which can be used in conjunction with numerical simulation methods to assess areas of airway obstruction. Combining this structural information with flow dynamic simulations, we can better estimate the site and causes of airway obstruction and better select and design surgery for patients with obstructive sleep apnea.

  15. Partial representation of a multi area power system for didactic simulation of automatic generation control; Representacao parcial de sistemas de potencia multi-area para simulacao didatica do controle automatico de geracao

    Energy Technology Data Exchange (ETDEWEB)

    Camacho, Jose Roberto

    1987-08-01

    The dynamics of automatic generation control (AGC) through partial representation of a multi-area power system was studied. A computer model has been developed to analyze the generation control of a power system, taking into account several aspects inherent to the system, such as the dead band of speed governors, upper and lower generation limits for hydro and thermal units, and sampling and zero-order hold of the area control errors. Several control strategies have been studied, such as integral control and proportional-integral control, as well as variable structure control as a new approach to hydro-thermal system control. The performance of the proposed AGC model has been assessed with a sample four-area hydro-thermal system containing several typical parameters of Brazilian utilities. (author). 46 refs., 94 figs., 3 tabs
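As a rough illustration of the AGC loop studied in such work, a single-area sketch with integral control of the area control error, simulated by Euler integration (all per-unit parameters below are illustrative textbook values, not taken from the thesis):

```python
import numpy as np

def simulate_agc(Ki=0.3, steps=6000, dt=0.01):
    """Single-area AGC: primary droop plus integral (secondary) control of the
    area control error (ACE), driving frequency deviation back to zero after a
    load step. Parameters are illustrative per-unit values."""
    H, D, R, Tg, B = 5.0, 1.0, 0.05, 0.5, 21.0   # inertia, damping, droop, lag, bias
    dPL = 0.1                                    # 10% load step at t = 0
    f = pm = ace_int = 0.0                       # freq dev, mech power, ACE integral
    hist = []
    for _ in range(steps):
        ace = B * f                              # area control error (single area)
        ace_int += ace * dt
        pref = -Ki * ace_int                     # integral secondary control action
        pm += dt / Tg * (pref - f / R - pm)      # governor-turbine first-order lag
        f += dt / (2 * H) * (pm - dPL - D * f)   # swing equation (frequency deviation)
        hist.append(f)
    return np.array(hist)

f = simulate_agc()   # frequency dips after the load step, then returns to zero
```

Droop alone would leave a steady-state frequency offset; the integral term is what restores the deviation to zero, which is the defining property of AGC.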

  16. Automatic inference of specifications using matching logic

    OpenAIRE

    Alpuente Frasnedo, María; Feliú Gabaldón, Marco Antonio; Villanueva García, Alicia

    2013-01-01

    Formal specifications can be used for various software engineering activities, ranging from finding errors to documenting software and automatic test-case generation. Automatically discovering specifications for heap-manipulating programs is a challenging task. In this paper, we propose a technique for automatically inferring formal specifications from C code which is based on the symbolic execution and automated reasoning tandem "Matching Logic/K framework". We implemented our technique for ...

  17. Using automatic programming for simulating reliability network models

    Science.gov (United States)

    Tseng, Fan T.; Schroer, Bernard J.; Zhang, S. X.; Wolfsberger, John W.

    1988-01-01

    This paper presents the development of an automatic programming system for assisting modelers of reliability networks to define problems and then automatically generate the corresponding code in the target simulation language GPSS/PC.

  18. An interoperable standard system for the automatic generation and publication of the fire risk maps based on Fire Weather Index (FWI)

    Science.gov (United States)

    Julià Selvas, Núria; Ninyerola Casals, Miquel

    2015-04-01

    An automatic system has been implemented to predict fire risk in the Principality of Andorra, a small country located in the eastern Pyrenees mountain range, bordered by Catalonia and France; owing to its location, its landscape is a set of rugged mountains with an average elevation of around 2000 meters. The system is based on the Fire Weather Index (FWI), which consists of several components, each measuring a different aspect of fire danger, calculated from the values of the weather variables at midday. CENMA (Centre d'Estudis de la Neu i de la Muntanya d'Andorra) operates a network of around 10 automatic meteorological stations, located in different places, peaks and valleys, that measure weather data such as relative humidity, wind direction and speed, surface temperature, rainfall and snow cover every ten minutes. These data are sent daily and automatically to the implemented system, where they are processed to filter incorrect measurements and to homogenize measurement units. The data are then used to calculate all components of the FWI at midday at the level of each station, creating a database with the homogenized measurements and the FWI components for each weather station. In order to extend and model these data over the whole Andorran territory and obtain a continuous map, an interpolation method based on multiple regression with spline interpolation of the residuals has been implemented. The interpolation considers the FWI data as well as other relevant predictors such as latitude, altitude, global solar radiation and distance to the sea. The obtained values (maps) are validated using a leave-one-out cross-validation method. The discrete and continuous maps are rendered as tiled raster maps and published in a web portal conforming to the Web Map Service (WMS) standard of the Open Geospatial Consortium (OGC). Metadata and other reference maps (fuel maps, topographic maps, etc.) are also available from this geoportal.
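The interpolation scheme pairs a multiple regression on the predictors with an interpolation of the station residuals. A sketch with inverse-distance weighting standing in for the spline residual interpolation (stations, altitudes, and FWI values below are invented, and altitude is used as the single predictor):

```python
import numpy as np

def regress_and_interpolate(xy, values, predictors, grid_xy, grid_predictors,
                            power=2.0):
    """Fit value ~ predictors by least squares at the stations, then spread the
    station residuals over the grid with inverse-distance weighting (a simple
    stand-in for spline residual interpolation)."""
    X = np.column_stack([np.ones(len(values)), predictors])
    coef, *_ = np.linalg.lstsq(X, values, rcond=None)     # regression trend
    residuals = values - X @ coef                          # what the trend misses
    Xg = np.column_stack([np.ones(len(grid_xy)), grid_predictors])
    trend = Xg @ coef
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power                 # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    return trend + w @ residuals                           # trend + residual field

# toy example: 5 stations, altitude (m) as the single predictor (hypothetical data)
xy  = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
alt = np.array([[2000.0], [1500.0], [1800.0], [1200.0], [1600.0]])
fwi = np.array([10.0, 14.0, 12.0, 18.0, 13.0])
grid = regress_and_interpolate(xy, fwi, alt, xy, alt)
```

Because the residual field is interpolated exactly at the stations, the combined surface reproduces the station FWI values there, which is the property a leave-one-out cross-validation then probes at held-out stations.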

  19. An automatic dose verification system for adaptive radiotherapy for helical tomotherapy

    Science.gov (United States)

    Mo, Xiaohu; Chen, Mingli; Parnell, Donald; Olivera, Gustavo; Galmarini, Daniel; Lu, Weiguo

    2014-03-01

    Purpose: During a typical 5-7 week course of external beam radiotherapy, there are potential differences between the patient's planned and actual anatomy and positioning, such as patient weight loss or treatment setup variations. The discrepancies between planned and delivered doses resulting from these differences can be significant, especially in IMRT, where dose distributions conform tightly to target volumes while avoiding organs-at-risk. We developed an automatic system to monitor delivered dose using daily imaging. Methods: For each treatment, a merged image is generated by registering the daily pre-treatment setup image and the planning CT using treatment position information extracted from the Tomotherapy archive. The treatment dose is then computed on this merged image using our in-house convolution-superposition based dose calculator implemented on GPU. The deformation field between the merged and planning CT is computed using the Morphon algorithm. The planning structures and treatment doses are subsequently warped for analysis and dose accumulation. All results are saved in DICOM format with private tags and organized in a database. Due to the overwhelming amount of information generated, a customizable tolerance system is used to flag potential treatment errors or significant anatomical changes. A web-based system and a DICOM-RT viewer were developed for reporting and reviewing the results. Results: More than 30 patients were analysed retrospectively. Our in-house dose calculator passed the gamma test at a 97% rate, evaluated with 2% dose difference and 2 mm distance-to-agreement compared with the Tomotherapy-calculated dose, which is considered sufficient for adaptive radiotherapy purposes. Evaluation of the deformable registration through visual inspection showed acceptable and consistent results, except for cases with large or unrealistic deformation. Our automatic flagging system was able to catch significant patient setup errors or anatomical changes. Conclusions: We developed an automatic dose
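The 2%/2 mm gamma criterion used to validate the dose calculator can be illustrated with a minimal 1-D sketch (synthetic dose profiles, not data from the study; real implementations work in 2-D/3-D with interpolation):

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, spacing=1.0, dd=0.02, dta=2.0):
    """1-D global gamma: for each reference point, the minimum over evaluated
    points of sqrt((dose diff / (dd * Dmax))^2 + (distance / dta)^2).
    A point passes the test when its gamma is <= 1."""
    x = np.arange(len(dose_ref)) * spacing       # positions in mm
    dmax = dose_ref.max()
    gammas = np.empty(len(dose_ref))
    for i in range(len(dose_ref)):
        dose_term = (dose_eval - dose_ref[i]) / (dd * dmax)
        dist_term = (x - x[i]) / dta
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gammas

dose_ref = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)  # toy dose profile
dose_eval = np.roll(dose_ref, 1)                               # 1 mm spatial shift
passing = float((gamma_index(dose_ref, dose_eval) <= 1.0).mean())
```

A 1 mm shift sits comfortably inside the 2 mm distance-to-agreement, so every point passes; a 5 mm shift would fail in the steep-gradient region, which is what makes the gamma passing rate a meaningful acceptance statistic.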

  20. An automatic dose verification system for adaptive radiotherapy for helical tomotherapy

    International Nuclear Information System (INIS)

    Purpose: During a typical 5-7 week course of external beam radiotherapy, there are potential differences between the patient's planned and actual anatomy and positioning, such as patient weight loss or treatment setup variations. The discrepancies between planned and delivered doses resulting from these differences can be significant, especially in IMRT, where dose distributions conform tightly to target volumes while avoiding organs-at-risk. We developed an automatic system to monitor delivered dose using daily imaging. Methods: For each treatment, a merged image is generated by registering the daily pre-treatment setup image and the planning CT using treatment position information extracted from the Tomotherapy archive. The treatment dose is then computed on this merged image using our in-house convolution-superposition based dose calculator implemented on GPU. The deformation field between the merged and planning CT is computed using the Morphon algorithm. The planning structures and treatment doses are subsequently warped for analysis and dose accumulation. All results are saved in DICOM format with private tags and organized in a database. Due to the overwhelming amount of information generated, a customizable tolerance system is used to flag potential treatment errors or significant anatomical changes. A web-based system and a DICOM-RT viewer were developed for reporting and reviewing the results. Results: More than 30 patients were analysed retrospectively. Our in-house dose calculator passed the gamma test at a 97% rate, evaluated with 2% dose difference and 2 mm distance-to-agreement compared with the Tomotherapy-calculated dose, which is considered sufficient for adaptive radiotherapy purposes. Evaluation of the deformable registration through visual inspection showed acceptable and consistent results, except for cases with large or unrealistic deformation. Our automatic flagging system was able to catch significant patient setup errors or anatomical changes. Conclusions: We developed an automatic

  1. Evaluation of the accuracy of a method for automatic portal image registration

    International Nuclear Information System (INIS)

    test the accuracy and precision of our registration method. For example, a series of 20 simulated portal images with random field positioning errors of ±5 mm in translation and ±5 deg. in rotation were used to evaluate the accuracy of core-based portal image registration for a patient undergoing treatment for prostate cancer. In a one-time preprocessing step, fiducial cores were generated for user-selected anatomical structures (bones) in a reference portal image and were automatically generated for each of the images containing known positioning errors via a core-based object recognition approach that takes into account both position and width information given by the core. Results: For the pelvic study, in all cases the reported translation was within 1 mm of the actual translation, with mean absolute errors in translation of 0.3 mm and standard deviations of 0.3 mm. In all cases the reported rotation was within 0.6 deg. of the actual rotation, with a mean absolute error of 0.18 deg. and a standard deviation of 0.23 deg. While this study does not directly extrapolate to the clinical setting, since no non-rigid patient motion was allowed, the accuracy and precision serve as a benchmark to measure registration performance and to compare different means (both interactive and automatic) for verifying treatment setup. Conclusions: Both the accuracy and precision of our automatic portal registration technique have been measured for a number of cases, including a prostate AP field, using clinically realistic simulated portal radiographs with exactly known positioning errors. Automatic analysis of other treatment sites using simulated portal images generated from the NLM Visible Human datasets has demonstrated comparable results, indicating acceptable performance of our methods for routine clinical use
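The evaluation protocol (apply a known rigid transform, re-estimate it from matched landmarks, and measure the residual) can be sketched with the 2-D Kabsch/Procrustes solution; the random points below are stand-ins for the core-based landmarks, whose detection is not reproduced here:

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping src
    onto dst from matched landmark points, via the 2-D Kabsch solution."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)                # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

rng = np.random.default_rng(0)
pts = rng.random((20, 2)) * 100                  # landmark positions (arbitrary units)
theta = np.radians(3.0)                          # known 3-degree field rotation
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([4.0, -2.5])                   # known translation (mm)
moved = pts @ R_true.T + t_true                  # simulated positioning error
R_est, t_est = fit_rigid_2d(pts, moved)
angle_err = np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])) - 3.0
```

Repeating this over many simulated error draws, with noisy rather than exact landmarks, yields mean-absolute-error and standard-deviation figures of the kind reported above.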

  2. Automatic computer procedure for generating exact and analytical kinetic energy operators based on the polyspherical approach: General formulation and removal of singularities

    Energy Technology Data Exchange (ETDEWEB)

    Ndong, Mamadou; Lauvergnat, David [CNRS, Laboratoire de Chimie Physique (UMR 8000), Université Paris-Sud, F-91405 Orsay (France); Nauts, André [Institut de la Matière Condensée et des Nanosciences, Université Catholique de Louvain, Chemin du Cyclotron 2, 1348-Louvain-la-Neuve, Belgium and Laboratoire de Chimie Physique (UMR 8000), Université Paris-Sud, F-91405 Orsay (France); Joubert-Doriol, Loïc; Gatti, Fabien [CTMM, Institut Charles Gerhardt (UMR 5232-CNRS), CC 1501, Université de Montpellier II, F-34095 Montpellier cedex 05 (France); Meyer, Hans-Dieter [Theoretische Chemie, Universität Heidelberg, Im Neuenheimer Feld 229, D-69120 Heidelberg (Germany)

    2013-11-28

    We present new techniques for an automatic computation of the kinetic energy operator in analytical form. These techniques are based on the use of the polyspherical approach and are extended to take into account Cartesian coordinates as well. An automatic procedure is developed where analytical expressions are obtained by symbolic calculations. This procedure is a full generalization of the one presented in Ndong et al. [J. Chem. Phys. 136, 034107 (2012)]. The correctness of the new implementation is analyzed by comparison with results obtained from the TNUM program. We give several illustrations that could be useful for users of the code. In particular, we discuss some cyclic compounds which are important in photochemistry. Among others, we show that choosing a well-adapted parameterization and decomposition into subsystems can allow one to avoid singularities in the kinetic energy operator. We also discuss a relation between polyspherical and Z-matrix coordinates: this comparison could be helpful for building an interface between the new code and a quantum chemistry package.

  3. Automatic computer procedure for generating exact and analytical kinetic energy operators based on the polyspherical approach: general formulation and removal of singularities.

    Science.gov (United States)

    Ndong, Mamadou; Nauts, André; Joubert-Doriol, Loïc; Meyer, Hans-Dieter; Gatti, Fabien; Lauvergnat, David

    2013-11-28

    We present new techniques for an automatic computation of the kinetic energy operator in analytical form. These techniques are based on the use of the polyspherical approach and are extended to take into account Cartesian coordinates as well. An automatic procedure is developed where analytical expressions are obtained by symbolic calculations. This procedure is a full generalization of the one presented in Ndong et al. [J. Chem. Phys. 136, 034107 (2012)]. The correctness of the new implementation is analyzed by comparison with results obtained from the TNUM program. We give several illustrations that could be useful for users of the code. In particular, we discuss some cyclic compounds which are important in photochemistry. Among others, we show that choosing a well-adapted parameterization and decomposition into subsystems can allow one to avoid singularities in the kinetic energy operator. We also discuss a relation between polyspherical and Z-matrix coordinates: this comparison could be helpful for building an interface between the new code and a quantum chemistry package. PMID:24289344

  4. Automatic control of nuclear power plants

    International Nuclear Information System (INIS)

    The fundamental concepts in automatic control are surveyed, and the purpose of the automatic control of pressurized water reactors is given. The response characteristics for the main components are then studied and block diagrams are given for the main control loops (turbine, steam generator, and nuclear reactors)

  5. Automatic Association of News Items.

    Science.gov (United States)

    Carrick, Christina; Watters, Carolyn

    1997-01-01

    Discussion of electronic news delivery systems and the automatic generation of electronic editions focuses on the association of related items of different media type, specifically photos and stories. The goal is to be able to determine to what degree any two news items refer to the same news event. (Author/LRW)
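The degree to which two news items refer to the same event can be approximated, at the text level, by cosine similarity of bag-of-words term-frequency vectors; this is a crude stand-in for the paper's association method, and the sample headlines are invented:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the bag-of-words term-frequency vectors of two texts:
    1.0 for identical vocabulary distributions, 0.0 for disjoint vocabulary."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

story     = "flood waters rise as storm hits the coast"
caption   = "storm floods the coast"          # photo caption, same event
unrelated = "parliament debates the new budget"
```

A threshold on this score (after stemming and stop-word removal, which are omitted here) gives a simple decision rule for pairing a photo caption with its matching story.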

  6. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    International Nuclear Information System (INIS)

    Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters, on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP with the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result
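The reported statistic, a Pearson correlation between the per-pair difference in a morphometric parameter and the resulting segmentation overlap, is straightforward to compute; the data below are invented for illustration and are not the study's measurements:

```python
import numpy as np

# hypothetical per-registration data: difference in protraction-retraction
# distance (mm) between atlas and patient, and the resulting Dice index
delta_pr = np.array([2.0, 5.0, 8.0, 11.0, 15.0, 20.0, 24.0, 30.0])
dice     = np.array([0.62, 0.60, 0.55, 0.52, 0.48, 0.44, 0.40, 0.33])

# Pearson correlation coefficient between the two variables
r = np.corrcoef(delta_pr, dice)[0, 1]
```

A strongly negative r of this kind is what motivates the study's conclusion: picking an atlas whose shoulder position matches the patient's should improve autosegmentation accuracy.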

  7. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be [Department of Anatomy, Ghent University, Ghent (Belgium); Department of Radiotherapy, Ghent University, Ghent (Belgium); Wouters, Johan [Department of Anatomy, Ghent University, Ghent (Belgium); Vercauteren, Tom; De Gersem, Werner; Duprez, Fréderic; De Neve, Wilfried [Department of Radiotherapy, Ghent University, Ghent (Belgium); Van Hoof, Tom [Department of Anatomy, Ghent University, Ghent (Belgium)

    2015-07-01

    Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters, on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP with the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.
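    The three overlap measures used in this study can be computed directly from two voxel sets; a minimal sketch (the voxel coordinates below are invented for illustration, not taken from the study):

```python
def dice(a, b):
    """Dice index: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

def jaccard(a, b):
    """Jaccard index: |A∩B| / |A∪B|."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def inclusion(a, b):
    """Fraction of the registered structure A that lies inside the gold standard B."""
    return len(a & b) / len(a) if a else 1.0

# Toy voxel sets standing in for a registered (auto) and a gold standard BP structure
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
gold = {(0, 1), (1, 0), (1, 1), (2, 1)}
```

    All three indices range over [0, 1], with 1 meaning perfect agreement; Jaccard is always the strictest of the three.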

  8. Application of Automatic Generation Control in Yixing Pumped Storage Power Station

    Institute of Scientific and Technical Information of China (English)

    李海波; 仇岚

    2012-01-01

    The paper introduces the actual conditions of the generating units at the East China Yixing Pumped Storage Power Station. After describing the two control modes, i.e. single-unit and unit-group AGC (automatic generation control), the paper explains the concept of hierarchical control in the power station and presents the application of AGC on the power plant side.

  9. A Detection Method of FAQ Matching Inquiry E-mails by Automatic Generation of Characteristic Word Groups from Past Inquiry E-mails

    Science.gov (United States)

    Sakumichi, Yuki; Akiyoshi, Masanori; Samejima, Masaki; Oka, Hironori

    This paper discusses how to detect inquiry e-mails that correspond to pre-defined FAQs (Frequently Asked Questions). Web-based interactions, such as order and registration forms on a Web page, are usually accompanied by FAQ pages to help users. However, most users submit their inquiry e-mails without checking such pages, so a help desk operator must process many e-mails even when their contents match FAQs. Automatic detection of such e-mails is proposed, based on an SVM (Support Vector Machine) and a specific Jaccard coefficient computed from positive and negative already-received inquiry e-mails. Experimental results show the method's effectiveness, and future work to improve it is also discussed.
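    The paper's detector combines an SVM with a specially adapted Jaccard coefficient; as a simplified illustration of the word-overlap idea only (not the authors' modified coefficient), a plain Jaccard score between an e-mail and an FAQ entry might look like this:

```python
import re

def tokenize(text):
    """Lower-case word set of a message."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard_score(email_text, faq_text):
    """Plain Jaccard overlap between an inquiry e-mail and an FAQ entry."""
    e, f = tokenize(email_text), tokenize(faq_text)
    return len(e & f) / len(e | f) if (e | f) else 0.0

faq = "how do I reset my password"
score = jaccard_score("I forgot my password, how do I reset it?", faq)
```

    An e-mail matching a FAQ scores high, while an unrelated inquiry scores near zero, so a simple threshold already separates the two cases in this toy setting.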

  10. Fully automatic and robust 3D registration of serial-section microscopic images

    OpenAIRE

    Ching-Wei Wang; Eric Budiman Gosno; Yen-Sheng Li

    2015-01-01

    Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstruction to gain new structural insights. However, robust and fully automatic 3D image registration for biological data is difficult due to complex deformations, unbalanced staining and variations on data appearance. This study presents a fully automatic and robu...

  11. Digital imaging in anatomic pathology.

    Science.gov (United States)

    O'Brien, M J; Sotnikov, A V

    1996-10-01

    Advances in computer technology continue to bring new innovations to departments of anatomic pathology. This article briefly reviews the present status of digital optical imaging, and explores the directions that this technology may lead over the next several years. Technical requirements for digital microscopic and gross imaging, and the available options for image archival and retrieval are summarized. The advantages of digital images over conventional photography in the conference room, and the usefulness of digital imaging in the frozen section suite and gross room, as an adjunct to surgical signout and as a resource for training and education, are discussed. An approach to the future construction of digital histologic sections and the computer as microscope is described. The digital technologic applications that are now available as components of the surgical pathologist's workstation are enumerated. These include laboratory information systems, computerized voice recognition, and on-line or CD-based literature searching, texts and atlases and, in some departments, on-line image databases. The authors suggest that, in addition to these resources that are already available, tomorrow's surgical pathology workstation will include network-linked digital histologic databases, on-line software for image analysis and 3-D image enhancement, expert systems, and ultimately, advanced pattern recognition capabilities. In conclusion, the authors submit that digital optical imaging is likely to have a significant and positive impact on the future development of anatomic pathology. PMID:8853053

  12. Research on the Optimal Automatic Generation Control Strategy for Interconnected Power Grids Based on the CPS Standard

    Institute of Scientific and Technical Information of China (English)

    田启东; 翁毅选

    2015-01-01

    This paper studies automatic generation control of an interconnected power grid considering the CPS, and puts forward an automatic generation control strategy based on the CPS standard and optimal dynamic closed-loop control. It establishes a state-space mathematical model of automatic generation control for a two-area interconnected power grid, introduces a new dynamic performance index that takes the CPS into account, and uses the exterior-point penalty function method to solve the objective function. Compared with the traditional PI control strategy under the A standard, the proposed method considers the contribution of the area control error (ACE) to frequency recovery, clearly improves the CPS assessment index, and reduces both the number of regulation actions of the AGC units and the cost of power generation. It also combines the good internal perception and dynamic adaptability of optimal control. Simulation analysis and comparison with the conventional control strategy show that the proposed control strategy has better dynamic characteristics and regulation performance.
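    As background for the PI baseline the paper compares against, a conventional AGC loop computes the area control error (ACE) from the tie-line power deviation and a frequency bias, and feeds it to a PI controller. A minimal sketch with illustrative gains and deviations (none of the values come from the paper):

```python
def ace(delta_ptie, delta_f, bias):
    """Area control error: tie-line power deviation plus frequency bias term."""
    return delta_ptie + bias * delta_f

class PIController:
    """Discrete PI regulator acting on the ACE signal."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return -(self.kp * error + self.ki * self.integral)

ctrl = PIController(kp=0.5, ki=0.1, dt=1.0)
e = ace(delta_ptie=0.02, delta_f=-0.01, bias=1.5)   # 0.02 - 0.015 = 0.005 p.u.
u = ctrl.step(e)                                    # regulation command to the units
```

    A CPS-oriented strategy, by contrast, would shape this command using the CPS1/CPS2 compliance indices rather than driving the raw ACE to zero.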

  13. Research and Design of an Automatic Switching System between Photovoltaic Power Generation and the Public Power Grid

    Institute of Scientific and Technical Information of China (English)

    李雅丽; 薛同莲

    2012-01-01

    Solar photovoltaic power generation is one of the main forms of exploiting new energy sources, but it is affected by environmental factors such as sunrise and sunset or sunny and cloudy weather, and is therefore intermittent; the resulting power shortage on cloudy and rainy days is inconvenient for production and daily life. To make full use of solar power and let it complement the public power grid effectively, this article designs a household automatic switching system between photovoltaic power generation and the public power grid. The system consists of a signal comparison circuit and a switch control circuit. The signal comparison circuit monitors the minimum working voltage of the storage battery: when the detected battery output voltage is lower than the minimum working voltage, the system automatically switches to the public power grid; when it is higher than or equal to the minimum working voltage, the system automatically switches back to the solar power generation system.

  14. A Polygon Data Automatic Generation Algorithm Based on Topology Information

    Institute of Scientific and Technical Information of China (English)

    卢浩; 钟耳顺; 王天宝; 王少华

    2012-01-01

    Automatic generation of polygon data and the creation and maintenance of polygon topology information are essential to GIS, as many GIS operations are based on them. This paper summarizes and analyzes existing polygon data generation algorithms and polygon topology generation algorithms, and proposes a more efficient polygon data automatic generation algorithm based on topology information (PG-TI). First, the core data structures of the algorithm are presented, together with its three core processes: arc adjacency determination, polygon search, and topology relationship determination. Second, it is described how the topology information created during the polygon search is used to accelerate the topology relationship determination. Finally, the time complexity of the algorithm is analyzed and verified experimentally, in comparison with traditional algorithms and the corresponding algorithm in ArcGIS.
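    The arc-based topology model underlying such algorithms can be illustrated briefly: if each arc record carries its left and right polygon, polygon adjacency falls out without any geometric computation. The arc records below are invented for illustration:

```python
# Each arc record: (arc_id, left_polygon, right_polygon); None marks the map border
arcs = [
    (1, "A", "B"),
    (2, "A", None),
    (3, "B", None),
    (4, "B", "C"),
]

def adjacency(arcs):
    """Polygon adjacency derived directly from arc topology."""
    adj = {}
    for _, left, right in arcs:
        if left is not None and right is not None:
            adj.setdefault(left, set()).add(right)
            adj.setdefault(right, set()).add(left)
    return adj
```

    Because the relation is read straight from the arc records, no point-in-polygon or intersection tests are needed, which is exactly the kind of speedup topology information buys during polygon generation.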

  15. Automatic segmentation of male pelvic anatomy on computed tomography images: a comparison with multiple observers in the context of a multicentre clinical trial

    International Nuclear Information System (INIS)

    This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. CT scans of two prostate cancer patients (‘benchmarking cases’), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 “RADAR” trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets were also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. There was considerable agreement amongst all observers and automatic segmentation of the benchmarking cases for bladder (mean spatial variations < 0.4 cm across the majority of image slices). Although there was some variation in interpretation of the superior-inferior (cranio-caudal) extent of rectum, human-observer contours were typically within a mean 0.6 cm of automatically-defined contours. Prostate structures were more consistent for the HR case than the IR case with all human observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. This study has demonstrated quantitative analysis for comparison of multi-observer segmentation studies. For automatic segmentation

  16. Making anatomical dynamic film using the principle of linear motion

    Institute of Scientific and Technical Information of China (English)

    Sun Guosheng

    2015-01-01

    Objective: The aim of this study was to develop dynamic teaching aids that help students combine human morphology and function during study, and understand and memorize important and difficult content by simulating the physiological function of organs and systems. Methods: The design of the aids was based on our own innovation. The illusion of linear movement is produced by the number of lines, the thickness of a line, and the distance and angle between lines. According to their effect, the line stripes were divided into two types: (1) parallel straight lines meeting the following criteria: 12 stripes per cm, equal stripe thickness, equal distance between adjacent stripes, and printable on a transparent film; (2) straight and curved stripes meeting the following criteria: equal or unequal spacing between the stripes, with the curved stripes drawn from a mathematical equation, digitalized, and stored in a computer. Results: (1) Demonstrating a dynamic effect: parallel straight stripes, 12 per centimeter, were printed on a transparent film, termed "the moving film" because its effect appears while the film is moved. Another, static, film was made, showing different directions. After the moving film was overlaid on the static film, slowly moving it produced a wave-like spreading effect. (2) Producing a dynamic film: the quality of a dynamic film is determined by the quality of the static film. The first step was to design and draw the drawings, leaving space for generating the sense of motion, and to test the dynamic effect until it was satisfactory. Since it proved impossible to draw the difficult curvilinear motion fringes by hand, we input mathematical equations into the computer and connected an automatic plotter to draw them. A library of the drawn static fringe patterns was stored in a computer for access at any time. Conclusions

  17. Research on Automatic Code Generation Based on SDL

    Institute of Scientific and Technical Information of China (English)

    吴琦; 熊光泽

    2003-01-01

    As one of the key technologies of CASE tools, automatic code generation has broad application prospects. At present, however, several problems limit its use in practical projects, such as the execution efficiency of the generated code and its integration with the target hardware and software. This paper introduces the main factors of automatic code generation in detail, analyzes the main components of SDL-based code generation and the main factors that affect the performance of the resulting code, and presents improvements targeted at different software and hardware platforms and application performance requirements.

  18. Anatomic consideration for preventive implantation.

    Science.gov (United States)

    Denissen, H W; Kalk, W; Veldhuis, H A; van Waas, M A

    1993-01-01

    The aim of preventive implant therapy is to prevent or delay loss of alveolar ridge bone mass. For use in an anatomic study of 60 mandibles, resorption of the alveolar ridge was classified into four preventive stages: (1) after extraction of teeth; (2) after initial resorption; (3) when the ridge has atrophied to a knife-edge shape; and (4) when only basal bone remains. Implantation in stage 3 necessitates removal of the knife-edge ridge to create space for cylindrical implants. Therefore, implantation in stage 2 is advocated to prevent the development of stage 3. The aim of implantation in stage 4 is to prevent total loss of function of the atrophic mandible. PMID:8359876

  19. Automatically Generated Model of a Book Acquisition Recommendation List Using Text Mining

    Institute of Scientific and Technical Information of China (English)

    张成林

    2013-01-01

    For most libraries, the number of acquisition and cataloging librarians is limited, and with traditional manual statistical forms the acquisition results often fall far short of readers' needs. Based on Web text-mining technology, this paper extracts the keywords from readers' historical queries and constructs a model that automatically generates a book acquisition recommendation list. Experiments show that the model achieves good recall and precision rates and can generate the recommendation list effectively.
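    The two ingredients of such a model, extracting frequent keywords from readers' historical queries and scoring the generated list by recall and precision, can be sketched as follows (the queries and book identifiers are invented):

```python
from collections import Counter

def top_keywords(queries, k):
    """Most frequent words across readers' historical queries."""
    counts = Counter(w for q in queries for w in q.lower().split())
    return [w for w, _ in counts.most_common(k)]

def recall_precision(recommended, relevant):
    """Recall and precision of a recommendation list against actual demand."""
    rec, rel = set(recommended), set(relevant)
    hits = len(rec & rel)
    return (hits / len(rel) if rel else 0.0,
            hits / len(rec) if rec else 0.0)

keywords = top_keywords(["python data mining", "python web", "data mining text"], 3)
r, p = recall_precision(["b1", "b2", "b3", "b4"], ["b1", "b3", "b5"])
```

    In a real deployment the keyword step would of course use proper tokenization and stop-word filtering rather than whitespace splitting.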

  20. An Automatic Mosaicking Algorithm for the Generation of a Large-Scale Forest Height Map Using Spaceborne Repeat-Pass InSAR Correlation Magnitude

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2015-05-01

    This paper describes an automatic mosaicking algorithm for creating large-scale mosaic maps of forest height. In contrast to existing mosaicking approaches that use SAR backscatter power and/or InSAR phase, this paper utilizes forest height estimates inverted from spaceborne repeat-pass cross-pol InSAR correlation magnitude. By using repeat-pass InSAR correlation measurements that are dominated by temporal decorrelation, it has been shown that a simplified inversion approach can be utilized to create a height-sensitive measure over the whole interferometric scene, where two scene-wide fitting parameters are able to characterize the mean behavior of the random motion and dielectric changes of the volume scatterers within the scene. In order to combine these single-scene results into a mosaic, a matrix formulation is used with nonlinear least squares and observations in adjacent-scene overlap areas to create a self-consistent estimate of forest height over the larger region. This automated mosaicking method has the benefit of suppressing the global fitting error and, thus, mitigating the “wallpapering” problem of the manual mosaicking process. The algorithm is validated over the U.S. state of Maine by using InSAR correlation magnitude data from ALOS/PALSAR and comparing the inverted forest height with Laser Vegetation Imaging Sensor (LVIS) height and National Biomass and Carbon Dataset (NBCD) basal area weighted (BAW) height. This paper serves as a companion work to previously demonstrated results, the combination of which is meant to be an observational prototype for NASA’s DESDynI-R (now called NISAR) and JAXA’s ALOS-2 satellite missions.
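    The joint adjustment over adjacent-scene overlap areas can be illustrated with a simplified linear analogue of the paper's nonlinear least squares: estimate one offset per scene from pairwise overlap differences, holding one scene fixed as reference. Scene indices and difference values below are invented:

```python
def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def adjust_offsets(n_scenes, overlaps, ref=0):
    """Least-squares per-scene offsets from overlap differences d = o_j - o_i,
    with scene `ref` held fixed at zero (normal equations)."""
    idx = [s for s in range(n_scenes) if s != ref]
    pos = {s: k for k, s in enumerate(idx)}
    ata = [[0.0] * len(idx) for _ in idx]
    atb = [0.0] * len(idx)
    for i, j, d in overlaps:
        row = {}
        if i != ref:
            row[pos[i]] = -1.0
        if j != ref:
            row[pos[j]] = 1.0
        for p, v in row.items():
            atb[p] += v * d
            for q, w in row.items():
                ata[p][q] += v * w
    x = solve(ata, atb)
    out = [0.0] * n_scenes
    for s, k in pos.items():
        out[s] = x[k]
    return out

# Three scenes; measured offset differences in the overlaps are inconsistent,
# so the solver spreads the misfit instead of "wallpapering" it scene by scene.
offsets = adjust_offsets(3, [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.5)])
```

    Sequentially chaining scenes would force the last overlap to absorb the whole inconsistency; the joint solve distributes it, which is the point the paper makes about suppressing global fitting error.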

  1. Implementation of an Automatic System for the Monitoring of Start-up and Operating Regimes of the Cooling Water Installations of a Hydro Generator

    Directory of Open Access Journals (Sweden)

    Ioan Pădureanu

    2015-07-01

    The safe operation of a hydro generator depends on its thermal regime, the basic condition being that the temperature in the stator winding stays within the limits of its insulation class. As the copper losses depend on the square of the current in the stator winding, the cooling water flow must be adapted to these losses so that the winding temperature remains within the range prescribed in the specifications. This paper presents an efficient solution for commanding and monitoring the water cooling installations of two high-power hydro generators.
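    The sizing logic described, copper losses growing with the square of the stator current and the cooling water flow adapted to them, follows the standard heat balance m = P/(cp·ΔT). A minimal sketch with illustrative numbers (not taken from the paper):

```python
def copper_losses_w(current_a, resistance_ohm):
    """Stator copper losses: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

def required_flow_kg_s(losses_w, delta_t_k, cp_j_per_kg_k=4186.0):
    """Water mass flow that removes `losses_w` with a `delta_t_k` temperature rise."""
    return losses_w / (cp_j_per_kg_k * delta_t_k)

p = copper_losses_w(2000.0, 0.01)              # 40 kW of copper losses
flow = required_flow_kg_s(p, delta_t_k=10.0)   # roughly 1 kg/s of cooling water
```

    Because losses scale with I², halving the load current cuts the required coolant flow by a factor of four at the same allowed temperature rise, which is why the flow is regulated rather than fixed.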

  2. Method for Automatic Generation of Exploded Views Based on Synchronous Constraint Release

    Institute of Scientific and Technical Information of China (English)

    赵鸿飞; 张琦; 王海涛; 赵洋; 方宝山

    2015-01-01

    To support structure teaching and the training of maintenance personnel, a method for automatically generating exploded views is proposed, based on the synchronous release of geometric constraint relations between parts. On the basis of the defined disassembly axis of each part, a part adjacency disassembly constraint matrix and a constraint type matrix are built. Parts are stratified into layers according to the order in which their geometric constraints can be synchronously released, and sub-assemblies are identified by judgment rules. Combining OBB (oriented bounding box) and FDH (fixed-direction hulls) bounding boxes, an outside-to-inside, constant-rate layered explosion separation method is constructed, realizing the automatic generation of exploded views of the component parts of an assembly.
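    The layering step, releasing geometric constraints synchronously so that all currently unblocked parts explode together, behaves like a level-by-level topological sort. A minimal sketch with an invented three-part assembly:

```python
def explosion_layers(parts, blocked_by):
    """Group parts into explosion layers: each layer contains every part whose
    blocking constraints reference only parts already removed."""
    remaining = set(parts)
    layers = []
    while remaining:
        layer = {p for p in remaining if not (blocked_by.get(p, set()) & remaining)}
        if not layer:
            raise ValueError("cyclic disassembly constraints")
        layers.append(sorted(layer))
        remaining -= layer
    return layers

# cover is free; gear is blocked by cover; shaft is blocked by gear and cover
layers = explosion_layers(
    ["cover", "gear", "shaft"],
    {"gear": {"cover"}, "shaft": {"gear", "cover"}},
)
```

    Each returned layer can then be translated outward along its disassembly axis at a constant rate, which is the outside-to-inside separation the paper describes.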

  3. Brain Morphometry Using Anatomical Magnetic Resonance Imaging

    Science.gov (United States)

    Bansal, Ravi; Gerber, Andrew J.; Peterson, Bradley S.

    2008-01-01

    The efficacy of anatomical magnetic resonance imaging (MRI) in studying the morphological features of various regions of the brain is described, also providing the steps used in the processing and studying of the images. The ability to correlate these features with several clinical and psychological measures can help in using anatomical MRI to…

  4. Automatic landmark extraction from image data using modified growing neural gas network.

    Science.gov (United States)

    Fatemizadeh, Emad; Lucas, Caro; Soltanian-Zadeh, Hamid

    2003-06-01

    A new method for automatic landmark extraction from MR brain images is presented. In this method, landmark extraction is accomplished by modifying the growing neural gas (GNG) algorithm, a neural-network-based cluster-seeking method. Using the modified GNG (MGNG), corresponding dominant points are found on contours extracted from two corresponding images; these contours are the borders of segmented anatomical regions of the brain. The presented method is compared to: 1) the node splitting-merging Kohonen model and 2) the Teh-Chin algorithm (a well-known approach for extracting dominant points from ordered curves). It is shown that the proposed algorithm has a lower distortion error, can extract landmarks from two corresponding curves simultaneously, and generates the best match according to five medical experts. PMID:12834162

  5. Automatic Schema Evolution in Root

    Institute of Scientific and Technical Information of China (English)

    Rene Brun; Fons Rademakers

    2001-01-01

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition, this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and objects inspected, also when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case when multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file.

  6. Automatic schema evolution in Root

    International Nuclear Information System (INIS)

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition this version also produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and objects inspected, also when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case when multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file
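    The self-describing idea can be sketched in a few lines: write the class description (schema) into the file next to the data, and on read map old fields onto the current class layout, defaulting anything missing. This toy sketch uses JSON instead of ROOT's binary format, so all names and defaults here are illustrative:

```python
import json

# Current in-memory layout ("version 2") of a toy persistent class
CURRENT_FIELDS = {"x": 0.0, "y": 0.0, "label": ""}

def write_record(fields):
    """Self-describing output: the field list (schema) travels with the data."""
    return json.dumps({"schema": sorted(fields), "data": fields})

def read_record(blob):
    """Schema evolution on read: known fields are filled, missing ones defaulted."""
    rec = json.loads(blob)
    obj = dict(CURRENT_FIELDS)
    for name in rec["schema"]:
        if name in obj:                # silently ignore fields dropped from the class
            obj[name] = rec["data"][name]
    return obj

old_blob = write_record({"x": 1.5, "y": 2.5})   # written before "label" existed
obj = read_record(old_blob)
```

    Because the schema is stored per file, data written by any old class version stays readable without the original compiled code, which is the property the ROOT mechanism provides.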

  7. An Investigation into Automatically Captured Autobiographical Metadata, and the Support for Autobiographical Narrative Generation. Mini-Thesis: PhD upgrade report

    OpenAIRE

    Tuffield, Mischa M; Millard, David E.; Shadbolt, Nigel R.

    2006-01-01

    Personal information and the act of publishing multimedia artifacts to the World Wide Web is becoming more and more observable. This report presents an infrastructure for the capturing and exploitation of personal metadata to drive research into context aware systems. I aim to expose ongoing research in the areas of capture of personal experiences, context aware systems, multimedia annotation systems, narrative generation, and that of Semantic Web enabling technologies. This report details th...

  8. SimWorld – Automatic Generation of realistic Landscape models for Real Time Simulation Environments – a Remote Sensing and GIS-Data based Processing Chain

    OpenAIRE

    Sparwasser, Nils; Stöbe, Markus; Friedl, Hartmut; Krauß, Thomas; Meisner, Robert

    2007-01-01

    The interdisciplinary project “SimWorld” - initiated by the German Aerospace Center (DLR) - aims to improve and to facilitate the generation of virtual landscapes for driving simulators. It integrates the expertise of different research institutes working in the field of car simulation and remote sensing technology. SimWorld will provide detailed virtual copies of the real world derived from air- and satellite-borne remote sensing data, using automated geo-scientific analysis techniques for m...

  9. Design and use of numerical anatomical atlases for radiotherapy

    International Nuclear Information System (INIS)

    The main objective of this thesis is to provide radio-oncology specialists with automatic tools for delineating organs at risk of a patient undergoing a radiotherapy treatment of cerebral or head and neck tumors. To achieve this goal, we use an anatomical atlas, i.e. a representative anatomy associated to a clinical image representing it. The registration of this atlas allows us to segment automatically the patient structures and to accelerate this process. Contributions in this method are presented on three axes. First, we want to obtain a registration method which is as independent as possible from the setting of its parameters. This setting, done by the clinician, indeed needs to be minimal while guaranteeing a robust result. We therefore propose registration methods allowing a better control of the obtained transformation, using rejection techniques of inadequate matching or locally affine transformations. The second axis is dedicated to the consideration of structures associated with the presence of the tumor. These structures, not present in the atlas, indeed lead to local errors in the atlas-based segmentation. We therefore propose methods to delineate these structures and take them into account in the registration. Finally, we present the construction of an anatomical atlas of the head and neck region and its evaluation on a database of patients. We show in this part the feasibility of the use of an atlas for this region, as well as a simple method to evaluate the registration methods used to build an atlas. All this research work has been implemented in a commercial software (Imago from DOSIsoft), allowing us to validate our results in clinical conditions. (author)

  10. Approaches to Automatic Text Structuring

    OpenAIRE

    Erbs, Nicolai

    2015-01-01

    Structured text helps readers to better understand the content of documents. In classic newspaper texts or books, some structure already exists. In the Web 2.0, the amount of textual data, especially user-generated data, has increased dramatically. As a result, there exists a large amount of textual data which lacks structure, thus making it more difficult to understand. In this thesis, we will explore techniques for automatic text structuring to help readers to fulfill their information need...

  11. Automatically-Programed Machine Tools

    Science.gov (United States)

    Purves, L.; Clerman, N.

    1985-01-01

    Software produces cutter location files for numerically-controlled machine tools. APT, an acronym for Automatically Programed Tools, is among the most widely used software systems for computerized machine tools. APT was developed for the explicit purpose of providing an effective software system for programing NC machine tools. The APT system includes the specification of the APT programing language and its language processor, which executes APT statements and generates the NC machine-tool motions they specify.

  12. Automatic translation among spoken languages

    Science.gov (United States)

    Walter, Sharon M.; Costigan, Kelly

    1994-02-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  13. Optimization of the Running Time of an Automatic Dedusting System Considering the Generating Performance of PV Modules

    Institute of Scientific and Technical Information of China (English)

    郭枭; 澈力格尔; 韩雪; 田瑞

    2015-01-01

    Low power generation efficiency is one of the main obstacles to applying PV (photovoltaic) modules on a large scale, so studying its influencing factors is of great significance. This article presents an independently developed automatic dedusting system for PV modules, which has the advantages of a simple structure, low installation cost, reliable operation, no water use during dust removal, and continuous, effective dedusting. The system has been applied in 3 settings: stand-alone PV power supplies operating at temperatures of -45℃ to 35℃, test benches for various PV module mounting angles, and a large-area PV power system. The dedusting effect of the automatic dedusting system was tested at temperatures of -10℃ to 5℃ in the stand-alone PV power supply application. With the automatic dedusting system, the dynamic occlusion during operation was simulated and its influence on the output parameters of the PV modules was studied; the dedusting effect was analyzed under different amounts of dust deposition; the variation of the dedusting effect with the amount of deposition was summarized, and the opening time and running period of the system were determined. The experimental PV modules were placed outdoors on open ground at an angle of 45° for 3, 7 and 20 days, giving dust deposition of 0.1274, 0.2933 and 0.8493 g/m2, respectively. The correction coefficient of the PV modules used in the experiments was 0.9943. The results show that, when the system runs a horizontal or vertical cycle, the cleaning brush makes the output parameters of the PV modules, including the output power, current and voltage, follow a V-shaped pattern as it crosses a row of cells. Compared with the downward pass, the output parameters of the PV modules during the upward pass fluctuate

  14. Biofabrication of multi-material anatomically shaped tissue constructs

    International Nuclear Information System (INIS)

    Additive manufacturing in the field of regenerative medicine aims to fabricate organized tissue-equivalents. However, control over the shape and composition of biofabricated constructs is still a challenge and needs to be improved. The current research aims to improve shape by combining a number of biocompatible, quality construction materials in a single three-dimensional fiber deposition process. To demonstrate this, several models of complex anatomically shaped constructs were fabricated by combined deposition of poly(vinyl alcohol), poly(ε-caprolactone), gelatin methacrylamide/gellan gum and alginate hydrogel. Sacrificial components were co-deposited as temporary support for overhang geometries and were removed after fabrication by immersion in aqueous solutions. Embedding of chondrocytes in the gelatin methacrylamide/gellan component demonstrated that the fabrication and sacrificing procedures did not affect cell viability. Further, it was shown that anatomically shaped constructs can be successfully fabricated, yielding advanced porous thermoplastic polymer scaffolds, layered porous hydrogel constructs, as well as reinforced cell-laden hydrogel structures. In conclusion, anatomically shaped tissue constructs of clinically relevant sizes can be generated when employing multiple building and sacrificial materials in a single biofabrication session. The current techniques offer improved control over both internal and external construct architecture, underscoring their potential to generate customized implants for human tissue regeneration. (paper)

  15. Wien Automatic System Package (WASP). A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 2: Appendices

    International Nuclear Information System (INIS)

    With several Member States, the IAEA has completed a new version of the WASP program, called WASP-III Plus since it follows quite closely the methodology of the WASP-III model. The major enhancements in WASP-III Plus with respect to WASP-III are: an increase in the number of thermal fuel types (from 5 to 10); verification of which configurations generated by CONGEN have already been simulated in previous iterations with MERSIM; direct calculation of the combined loading order of FIXSYS and VARSYS plants; simulation of system operation that takes into account physical constraints imposed on some fuel types (i.e., fuel availability for electricity generation); extended output of the resimulation of the optimal solution; generation of a file that can be used for graphical representation of the results of the resimulation of the optimal solution and of the cash flows of the investment costs; calculation of cash flows that allows inclusion of the capital costs of plants firmly committed or under construction (FIXSYS plants); and user control of the distribution of capital cost expenditures during the construction period (if required to differ from the general 'S' curve distribution used as default). This second volume of the document supporting use of the WASP-III Plus computer code consists of 5 appendices giving additional information about the WASP-III Plus program. Appendix A is mainly addressed to the WASP-III Plus system analyst and supplies information which could help in the implementation of the program on the user's computer facilities. This appendix also covers some aspects of WASP-III Plus that could not be treated in detail in Chapters 1 to 11. Appendix B identifies all error and warning messages that may appear in the WASP printouts and advises the user how to overcome the problem. Appendix C presents the flow charts of the programs along with a brief description of the objectives and structure of each module. Appendix D describes the

  16. Reinforcement Based Fuzzy Neural Network Control with Automatic Rule Generation

    Institute of Scientific and Technical Information of China (English)

    吴耿锋; 傅忠谦

    2001-01-01

    A reinforcement-based fuzzy neural network controller (RBFNNC) is proposed. A set of optimised fuzzy control rules can be automatically generated through reinforcement learning based on the state variables of the object system. RBFNNC was applied to a cart-pole balancing system and shows significant improvement in rule generation.

  17. Towards fully automatic object detection and segmentation

    Science.gov (United States)

    Schramm, Hauke; Ecabert, Olivier; Peters, Jochen; Philomin, Vasanth; Weese, Juergen

    2006-03-01

    An automatic procedure for detecting and segmenting anatomical objects in 3-D images is necessary for achieving a high level of automation in many medical applications. Since today's segmentation techniques typically rely on user input for initialization, they do not allow for a fully automatic workflow. In this work, the generalized Hough transform is used for detecting anatomical objects with well-defined shape in 3-D medical images. This well-known technique has frequently been used for object detection in 2-D images and is known to be robust and reliable. However, its computational and memory requirements are generally huge, especially when considering 3-D images and several free transformation parameters. Our approach limits the complexity of the generalized Hough transform to a reasonable amount by (1) using object prior knowledge during preprocessing in order to suppress unlikely regions in the image, (2) restricting the flexibility of the applied transformation to scaling and translation only, and (3) using a simple shape model which does not cover any inter-individual shape variability. Despite these limitations, the approach is demonstrated to allow for a coarse 3-D delineation of the femur, vertebra and heart in a number of experiments. Additionally, it is shown that the quality of the object localization is in nearly all cases sufficient to initialize a successful segmentation using shape-constrained deformable models.
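
A translation-only sketch of the voting idea behind the generalized Hough transform: each image point casts votes for candidate reference-point positions through a table of template offsets, and the accumulator peak marks the detected location. This is a toy 2-D illustration, not the authors' 3-D implementation; the point sets, image size and helper name are invented.

```python
import numpy as np

# Translation-only generalized Hough transform: image points vote for the
# position of a template reference point via precomputed offsets.
def ght_detect(template_pts, image_pts, shape):
    ref = template_pts[0]                  # reference point on the template
    offsets = ref - template_pts           # "R-table" for pure translation
    acc = np.zeros(shape, dtype=int)       # accumulator over positions
    for p in image_pts:
        for off in offsets:
            y, x = np.round(p + off).astype(int)
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                acc[y, x] += 1
    return np.unravel_index(acc.argmax(), acc.shape)

# A 10x10 square outline as the template, shifted by (+20, +30) inside a
# 100x100 "image"; every matched point votes for the shifted reference.
tmpl = np.array([(y, x) for y in range(10, 20) for x in range(10, 20)
                 if y in (10, 19) or x in (10, 19)], dtype=float)
img = tmpl + np.array([20.0, 30.0])
peak = ght_detect(tmpl, img, (100, 100))   # -> (30, 40)
```

All correctly matched votes coincide at the shifted reference point (30, 40), while mismatched votes scatter, which is why the method tolerates clutter and partial occlusion.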

  18. Automatic input rectification

    OpenAIRE

    Long, Fan; Ganesh, Vijay; Carbin, Michael James; Sidiroglou, Stelios; Rinard, Martin

    2012-01-01

    We present a novel technique, automatic input rectification, and a prototype implementation, SOAP. SOAP learns a set of constraints characterizing typical inputs that an application is highly likely to process correctly. When given an atypical input that does not satisfy these constraints, SOAP automatically rectifies the input (i.e., changes the input so that it satisfies the learned constraints). The goal is to automatically convert potentially dangerous inputs into typical inputs that the ...

  19. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  20. Computerised 3-D anatomical modelling using plastinates: an example utilising the human heart.

    Science.gov (United States)

    Tunali, S; Kawamoto, K; Farrell, M L; Labrash, S; Tamura, K; Lozanoff, S

    2011-08-01

    Computerised modelling methods have become highly useful for generating electronic representations of anatomical structures. These methods rely on cross-sectional tissue slices in databases such as the Visible Human Male and Female, the Visible Korean Human, and the Visible Chinese Human. However, these databases are time-consuming to generate and require labour-intensive manual digitisation, while the number of specimens is very limited. Plastinated anatomical material could provide a possible alternative for data collection, requiring less time to prepare and enabling the use of virtually any anatomical or pathological structure routinely obtained in a gross anatomy laboratory. The purpose of this study was to establish an approach utilising plastinated anatomical material, specifically human hearts, for the purpose of computerised 3-D modelling. Human hearts were collected following gross anatomical dissection and subjected to routine plastination procedures including dehydration (-25°C), defatting, forced impregnation, and curing at room temperature. A graphics pipeline was established comprising data collection with a hand-held scanner, 3-D modelling, model polishing, file conversion, and final rendering. Representative models were viewed and qualitatively assessed for accuracy and detail. The results showed that the heart model provided the detailed surface information necessary for gross anatomical instructional purposes. Rendering tools facilitated optional model manipulation for further structural clarification if selected by the user. The use of plastinated material for generating 3-D computerised models has distinct advantages compared to cross-sectional tissue images. PMID:21866531

  1. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (comp.)

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
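
The chain-rule propagation described above can be illustrated with forward-mode dual numbers, where every value carries its derivative and arithmetic propagates both in a single pass. This is a minimal sketch; the `Dual` class and `derivative` helper are illustrative, not part of any package cited in the bibliography.

```python
# Forward-mode automatic differentiation with dual numbers: each value
# carries (val, dot) and arithmetic propagates both via the chain rule.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f(x) and df/dx in a single forward pass."""
    y = f(Dual(x, 1.0))
    return y.val, y.dot

# f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2; at x = 3 this gives (33.0, 29.0)
val, der = derivative(lambda x: x * x * x + 2 * x, 3.0)
```

Unlike finite differences, the derivative is exact up to floating-point rounding, which is what the bibliography means by "neither symbolic nor numeric".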

  2. Automatic generation of time resolved motion vector fields of coronary arteries and 4D surface extraction using rotational x-ray angiography

    International Nuclear Information System (INIS)

    Rotational coronary angiography provides a multitude of x-ray projections of the contrast agent enhanced coronary arteries along a given trajectory with parallel ECG recording. These data can be used to derive motion information of the coronary arteries including vessel displacement and pulsation. In this paper, a fully automated algorithm to generate 4D motion vector fields for coronary arteries from multi-phase 3D centerline data is presented. The algorithm computes similarity measures of centerline segments at different cardiac phases and defines corresponding centerline segments as those with highest similarity. In order to achieve an excellent matching accuracy, an increasing number of bifurcations is included as reference points in an iterative manner. Based on the motion data, time-dependent vessel surface extraction is performed on the projections without the need of prior reconstruction. The algorithm accuracy is evaluated quantitatively on phantom data. The magnitude of longitudinal errors (parallel to the centerline) reaches approx. 0.50 mm and is thus more than twice as large as the transversal 3D extraction errors of the underlying multi-phase 3D centerline data. It is shown that the algorithm can extract asymmetric stenoses accurately. The feasibility on clinical data is demonstrated on five different cases. The ability of the algorithm to extract time-dependent surface data, e.g. for quantification of pulsating stenosis is demonstrated.

  3. Ultrasound Anatomical Visualization of the rabbit liver

    OpenAIRE

    Kamelia Dimcheva Stamatova-Yovcheva; Rosen Dimitrov; David Yovchev; Krassimira Uzunova; Rumen Binev

    2014-01-01

    The aim was to investigate the anatomical features of the rabbit liver by two- and three-dimensional ultrasonography. Eighteen clinically healthy, sexually mature New Zealand rabbits aged eight months were studied. The two-dimensional ultrasonographic anatomical image of the rabbit liver presented it in the cranial abdominal region as a relatively hypoechoic finding. Its contours were regular and in close contact with the hyperechoic diaphragm. The liver parenchyma was heterogeneous. The gall bladde...

  4. [Establishment of anatomical terminology in Japan].

    Science.gov (United States)

    Shimada, Kazuyuki

    2008-12-01

    The history of anatomical terminology in Japan began with the publication of Waran Naikei Ihan-teimŏ in 1805 and Chŏtei Kaitai Shinsho in 1826. Although the establishment of Japanese anatomical terminology became necessary during the Meiji era, when many western anatomy books imported into Japan were translated, such terminology was not unified during this period and varied among translators. In 1871, Tsukumo Ono's Kaibŏgaku Gosen was published by the Ministry of Education. Although this book is considered to be the first glossary of anatomical terms in Japan, its contents were incomplete. Overseas, the German Anatomical Society established a unified anatomical terminology in 1895 called the Basle Nomina Anatomica (B.N.A.). Following this development, Kaibŏgaku Meishŭ, which follows the B.N.A., by Buntarŏ Suzuki was published in 1905. With the subsequent establishment in 1935 of the Jena Nomina Anatomica (J.N.A.), the unification of anatomical terminology was also accelerated in Japan, leading to the further development of terminology. PMID:19108488

  5. Automatic Testcase Generation Method Based on PSOABC and K-means Clustering Algorithm

    Institute of Scientific and Technical Information of China (English)

    贾冀婷

    2015-01-01

    Improving the automation of testcase generation in software testing is very important for guaranteeing software quality and reducing development cost. In this paper, we propose an automatic testcase generation method based on particle swarm optimization, the artificial bee colony algorithm and the K-means clustering algorithm, and carry out simulation experiments. The results show that, for automatic testcase generation, the improved algorithm is more efficient and converges more strongly than other algorithms such as plain particle swarm optimization and the genetic algorithm.
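
The particle swarm component of such a hybrid can be sketched in a bare-bones form that minimizes a stand-in fitness function; the target point, coefficients and swarm size below are arbitrary illustrative choices, not the paper's tuned PSOABC/K-means hybrid.

```python
import random

# Bare-bones particle swarm optimisation minimising a stand-in
# "branch distance" style fitness with its optimum at (7, -3).
def pso(fitness, dim=2, swarm=20, iters=100, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                     # personal bests
    gbest = min(pbest, key=fitness)[:]              # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal and global bests
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pbest[i]) < fitness(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(lambda p: abs(p[0] - 7) + abs(p[1] + 3))
```

In testcase generation the fitness would instead score how close an input comes to covering an uncovered branch; the search loop itself is unchanged.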

  6. Automatic target validation based on neuroscientific literature mining for tractography

    OpenAIRE

    Xavier Vasques; Renaud Richardet; Etienne Pralong; LAURA CIF

    2015-01-01

    Target identification for tractography studies requires solid anatomical knowledge validated by an extensive literature review across species for each seed structure to be studied. Manual literature review to identify targets for a given seed region is tedious and potentially subjective. Therefore, complementary approaches would be useful. We propose to use text-mining models to automatically suggest potential targets from the neuroscientific literature, full-text articles and abstracts, so t...

  7. An automatic segmentation method for fast imaging in PET

    International Nuclear Information System (INIS)

    A new segmentation method has been developed for fast PET imaging. The technique automatically segments the transmission images into different anatomical regions, efficiently reducing the PET transmission scan time. The results show that this method requires a scan time of only 3 min, which is sufficient for attenuation correction of the PET images, instead of the original 15-30 min scan time. This approach has been successfully tested both on phantom and clinical data

  8. SubClonal Hierarchy Inference from Somatic Mutations: Automatic Reconstruction of Cancer Evolutionary Trees from Multi-region Next Generation Sequencing.

    Directory of Open Access Journals (Sweden)

    Noushin Niknafs

    2015-10-01

    Full Text Available Recent improvements in next-generation sequencing of tumor samples and the ability to identify somatic mutations at low allelic fractions have opened the way for new approaches to model the evolution of individual cancers. The power and utility of these models is increased when tumor samples from multiple sites are sequenced. Temporal ordering of the samples may provide insight into the etiology of both primary and metastatic lesions and rationalizations for tumor recurrence and therapeutic failures. Additional insights may be provided by temporal ordering of evolving subclones--cellular subpopulations with unique mutational profiles. Current methods for subclone hierarchy inference tightly couple the problem of temporal ordering with that of estimating the fraction of cancer cells harboring each mutation. We present a new framework that includes a rigorous statistical hypothesis test and a collection of tools that make it possible to decouple these problems, which we believe will enable substantial progress in the field of subclone hierarchy inference. The methods presented here can be flexibly combined with methods developed by others addressing either of these problems. We provide tools to interpret hypothesis test results, which inform phylogenetic tree construction, and we introduce the first genetic algorithm designed for this purpose. The utility of our framework is systematically demonstrated in simulations. For most tested combinations of tumor purity, sequencing coverage, and tree complexity, good power (≥ 0.8) can be achieved and Type 1 error is well controlled when at least three tumor samples are available from a patient. Using data from three published multi-region tumor sequencing studies of (murine) small cell lung cancer, acute myeloid leukemia, and chronic lymphocytic leukemia, in which the authors reconstructed subclonal phylogenetic trees by manual expert curation, we show how different configurations of our tools can

  9. Detection and analysis of statistical differences in anatomical shape.

    Science.gov (United States)

    Golland, Polina; Grimson, W Eric L; Shenton, Martha E; Kikinis, Ron

    2005-02-01

    We present a computational framework for image-based analysis and interpretation of statistical differences in anatomical shape between populations. Applications of such analysis include understanding developmental and anatomical aspects of disorders when comparing patients versus normal controls, studying morphological changes caused by aging, or even differences in normal anatomy, for example, differences between genders. Once a quantitative description of organ shape is extracted from input images, the problem of identifying differences between the two groups can be reduced to one of the classical questions in machine learning of constructing a classifier function for assigning new examples to one of the two groups while making as few misclassifications as possible. The resulting classifier must be interpreted in terms of shape differences between the two groups back in the image domain. We demonstrate a novel approach to such interpretation that allows us to argue about the identified shape differences in anatomically meaningful terms of organ deformation. Given a classifier function in the feature space, we derive a deformation that corresponds to the differences between the two classes while ignoring shape variability within each class. Based on this approach, we present a system for statistical shape analysis using distance transforms for shape representation and the support vector machines learning algorithm for the optimal classifier estimation and demonstrate it on artificially generated data sets, as well as real medical studies. PMID:15581813
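
The classification step can be mimicked on synthetic data: fit a linear classifier between two groups of shape features and read the weight vector as the discriminative direction, which is the quantity the authors then map back to an organ deformation. The toy features and plain logistic regression below stand in for the paper's distance-transform representation and support vector machines.

```python
import numpy as np

# Two synthetic "shape feature" groups that differ only in feature 2,
# a linear classifier fitted by logistic-loss gradient descent, and the
# weight vector interpreted as the discriminative direction.
rng = np.random.default_rng(0)
n, d = 100, 5
controls = rng.normal(0.0, 1.0, (n, d))
patients = rng.normal(0.0, 1.0, (n, d))
patients[:, 2] += 2.0                 # the only systematic group difference

X = np.vstack([controls, patients])
y = np.hstack([-np.ones(n), np.ones(n)])

w = np.zeros(d)
for _ in range(500):                  # gradient descent on logistic loss
    margins = y * (X @ w)
    grad = -(y / (1.0 + np.exp(margins))) @ X / len(y)
    w -= 0.5 * grad

direction = np.abs(w) / np.linalg.norm(w)   # discriminative direction
```

Because the groups were constructed to differ only in feature 2, the learned direction concentrates on that coordinate, mirroring how the paper's classifier-derived deformation ignores within-group variability.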

  10. Automatic Generation of Cloud-to-Ground Flash Density Map and Corresponding MIF File

    Institute of Scientific and Technical Information of China (English)

    董兴朋; 李胜乐; 彭愿; 苏融; 刘珠妹; 刘坚

    2012-01-01

    Cloud-to-ground flash density maps reflect the distribution of lightning activity and its variation characteristics, and provide basic data for lightning protection of the power grid. We developed a Visual C++ program that automatically generates a cloud-to-ground flash density map and the corresponding MIF file. The MIF file can be converted to a MapInfo Tab map with the MapInfo software for further analysis. This avoids the specialized MapBasic language and uses only general Visual C++ in the MapInfo-based map-making process. Finally, we packaged the program as a dynamic link library to save system resources and improve operating efficiency.

  11. Automatic Sea-route Generation Based on the Combination of Ant Colony Searching and Genetic Optimization

    Institute of Scientific and Technical Information of China (English)

    李启华; 李晓阳; 吴国华

    2014-01-01

    This article studies the problem of automatic sea-route generation. It introduces the grid settings and attribute structure of the navigation area, discusses the computational method and model for grid attributes, and covers the ant colony search strategy, the genetic optimization method, the computation model for sea-route performance, and the route smoothing method and model. The feasibility of the method and the accuracy of the model are verified by a series of examples.

  12. Simulation Platform Design for an Automatic Generation Control System Based on Matlab GUI

    Institute of Scientific and Technical Information of China (English)

    张春慧; 国中琦; 张永

    2014-01-01

    A simulation platform for an Automatic Generation Control (AGC) system was designed using the Graphical User Interface (GUI) of Matlab. The platform is applied to single-area primary frequency regulation and to secondary frequency regulation of an interconnected power grid, and a control strategy module is included for the analysis of classical PID and fuzzy self-adjusting PID control strategies. The simulation results show the effectiveness of the platform, which directly reflects the frequency trend under AGC regulation, compares the performance of the two control strategies, and allows parameters to be set conveniently.
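
The classical PID side of such a comparison can be sketched as a discrete controller driving a toy first-order frequency-deviation model; the plant, gains and step sizes are invented for illustration and do not reproduce the platform's AGC models.

```python
# Discrete PID controller regulating a toy first-order frequency-deviation
# model: d(freq_dev)/dt = -0.5 * freq_dev + u, with setpoint 0.
def simulate_pid(kp, ki, kd, steps=200, dt=0.1):
    freq_dev = 1.0                    # initial frequency deviation (p.u.)
    integral = 0.0
    prev_err = 0.0
    for _ in range(steps):
        err = 0.0 - freq_dev          # error relative to setpoint
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control action
        prev_err = err
        freq_dev += dt * (-0.5 * freq_dev + u)      # Euler step of the plant
    return abs(freq_dev)

residual = simulate_pid(kp=2.0, ki=0.5, kd=0.1)     # final |deviation|
```

With these gains the integral term removes the steady-state offset, so the deviation decays toward zero over the simulated 20 s; a fuzzy PID variant would adjust kp, ki and kd online instead of keeping them fixed.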

  13. Spinning gland transcriptomics from two main clades of spiders (order: Araneae)--insights on their molecular, anatomical and behavioral evolution.

    Directory of Open Access Journals (Sweden)

    Francisco Prosdocimi

    Full Text Available Characterized by distinctive evolutionary adaptations, spiders provide a comprehensive system for evolutionary and developmental studies of anatomical organs, including silk and venom production. Here we performed cDNA sequencing using massively parallel sequencers (454 GS-FLX Titanium) to generate ∼80,000 reads from the spinning glands of Actinopus spp. (infraorder: Mygalomorphae) and Gasteracantha cancriformis (infraorder: Araneomorphae, Orbiculariae clade). Actinopus spp. retains primitive characteristics of web usage and presents a single undifferentiated spinning gland, while the orbiculariae spiders have seven differentiated spinning glands and complex patterns of web usage. MIRA, Celera Assembler and CAP3 software were used to cluster NGS reads for each spider. CAP3 unigenes passed through a pipeline for automatic annotation, classification by biological function, and comparative transcriptomics. Genes related to spider silks were manually curated and analyzed. Although a single spidroin gene family was found in Actinopus spp., a vast repertoire of specialized spider silk proteins was encountered in orbiculariae. Astacin-like metalloproteases (meprin subfamily) were shown to be some of the most sampled unigenes and duplicated gene families in G. cancriformis since its evolutionary split from mygalomorphs. Our results confirm that the evolution of the molecular repertoire of silk proteins was accompanied by (i) the anatomical differentiation of spinning glands and (ii) behavioral complexification in web usage. Finally, a phylogenetic tree was constructed to cluster most of the known spidroins in gene clades. This is the first large-scale, multi-organism transcriptome for spider spinning glands and a first step into a broad understanding of spider web systems biology and evolution.

  14. Automatic segmentation and co-registration of gated CT angiography datasets: measuring abdominal aortic pulsatility

    Science.gov (United States)

    Wentz, Robert; Manduca, Armando; Fletcher, J. G.; Siddiki, Hassan; Shields, Raymond C.; Vrtiska, Terri; Spencer, Garrett; Primak, Andrew N.; Zhang, Jie; Nielson, Theresa; McCollough, Cynthia; Yu, Lifeng

    2007-03-01

    Purpose: To develop robust, novel segmentation and co-registration software to analyze temporally overlapping CT angiography datasets, with an aim to permit automated measurement of regional aortic pulsatility in patients with abdominal aortic aneurysms. Methods: We perform retrospective gated CT angiography in patients with abdominal aortic aneurysms. Multiple, temporally overlapping, time-resolved CT angiography datasets are reconstructed over the cardiac cycle, with aortic segmentation performed using a priori anatomic assumptions for the aorta and heart. Visual quality assessment is performed following automatic segmentation with manual editing. Following subsequent centerline generation, centerlines are cross-registered across phases, with internal validation of co-registration performed by examining registration at the regions of greatest diameter change (i.e. when the second derivative is maximal). Results: We have performed gated CT angiography in 60 patients. Automatic seed placement is successful in 79% of datasets, requiring either no editing (70%) or minimal editing (less than 1 minute; 12%). Causes of error include segmentation into adjacent, high-attenuating, nonvascular tissues; small segmentation errors associated with calcified plaque; and segmentation of non-renal, small paralumbar arteries. Internal validation of cross-registration demonstrates appropriate registration in our patient population. In general, we observed that aortic pulsatility can vary along the course of the abdominal aorta. Pulsation can also vary within an aneurysm as well as between aneurysms, but the clinical significance of these findings remain unknown. Conclusions: Visualization of large vessel pulsatility is possible using ECG-gated CT angiography, partial scan reconstruction, automatic segmentation, centerline generation, and coregistration of temporally resolved datasets.

  15. Application of a PLC-based automatic tracking control method to photovoltaic generation in Jiuquan

    Institute of Scientific and Technical Information of China (English)

    秦天像

    2014-01-01

    As the position of the sun changes with time, the light intensity on the solar cell array of a photovoltaic power generation system is not stable, which reduces the efficiency of the photovoltaic cells. Designing an automatic solar tracker is therefore an effective measure to improve the efficiency of a photovoltaic power generation system. Addressing the defects and shortcomings of existing photovoltaic tracking control methods, and taking into account the prediction and control of the variation of the solar position angle and the tracking error range during motor rotation, the author proposes a tracking control method using a PLC and verifies its feasibility by theoretical analysis and Matlab/Simulink simulation.

  16. Lateral laryngopharyngeal diverticulum: anatomical and videofluoroscopic study

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Milton Melciades Barbosa [Universidade Federal do Rio de Janeiro ICB/CCS/UFRJ, Laboratorio de Motilidade Digestiva e Imagem, S. F1-008, Departamento de Anatomia, Rio de Janeiro (Brazil); Koch, Hilton Augusto [Universidade Federal do Rio de Janeiro ICB/CCS/UFRJ, Departamento de Radiologia, Rio de Janeiro (Brazil)

    2005-07-01

    The aims were to characterize the anatomical region where the lateral laryngopharyngeal protrusion occurs and to define if this protrusion is a normal or a pathological entity. This protrusion was observed on frontal contrasted radiographs as an addition image on the upper portion of the laryngopharynx. We carried out a plane-by-plane qualitative anatomical study through macroscopic and mesoscopic surgical dissection on 12 pieces and analyzed through a videofluoroscopic method on frontal incidence the pharyngeal phase of the swallowing process of 33 patients who had a lateral laryngopharyngeal protrusion. The anatomical study allowed us to identify the morphological characteristics that configure the high portion of the piriform recess as a weak anatomical point. The videofluoroscopic study allowed us to observe the laryngopharyngeal protrusion and its relation to pharyngeal repletion of the contrast medium. All kinds of the observed protrusions could be classified as ''lateral laryngopharyngeal diverticula.'' The lateral diverticula were more frequent in older people. These lateral protrusions can be found on one or both sides, usually with a small volume, without sex or side prevalence. This formation is probably a sign of a pharyngeal transference difficulty associated with a deficient tissue resistance in the weak anatomical point of the high portion of the piriform recess. (orig.)

  17. BODY DONATION: AN ANATOMICAL GIFT TO HELP FUTURE GENERATIONS

    Directory of Open Access Journals (Sweden)

    Shiksha

    2015-05-01

    Full Text Available Dear Editor, “Body donation is the act of giving away one's body after death without any conditions or rewards for the sake of education and research in medicine.”1 A sound knowledge of anatomy is essential from the beginning of a medical education, and knowledge obtained through dissection of the human body is an indispensable part of the education of health care professionals. It is the first step to becoming a doctor. The pool of sources for cadavers used to be unclaimed bodies and a few donated bodies. The Anatomy Act2 provides for the supply of unclaimed bodies to teaching institutions and hospitals for the purpose of dissection and research work. With the mushrooming of medical institutions in the country, and the great need for human tissue for medical research and science, a scarcity of bodies is felt all over the world.3,4 The situation is equally felt in India.2 To fulfill the requirement there arose the ideology of voluntary body donation.

  18. BODY DONATION: AN ANATOMICAL GIFT TO HELP FUTURE GENERATIONS

    OpenAIRE

    Shiksha

    2015-01-01

    Dear Editor, “Body donation is the act of giving away one's body after death without any conditions or rewards for the sake of education and research in medicine.”1 A sound knowledge of anatomy is essential from the beginning of a medical education, and knowledge obtained through dissection of the human body is an indispensable part of the education of health care professionals. It is the first step to becoming a doctor. The pool of sources for cadavers used to be ...

  19. Anatomical pathways involved in generating and sensing rhythmic whisker movements

    Directory of Open Access Journals (Sweden)

    Laurens W.J. Bosman

    2011-10-01

    Full Text Available The rodent whisker system is widely used as a model system for investigating sensorimotor integration, neural mechanisms of complex cognitive tasks, neural development, and robotics. The whisker pathways to the barrel cortex have received considerable attention. However, many subcortical structures are paramount to the whisker system. They contribute to important processes, like filtering out salient features, integration with other senses and adaptation of the whisker system to the general behavioral state of the animal. We present here an overview of the brain regions and their connections involved in the whisker system. We describe not only the anatomy and functional roles of the cerebral cortex, but also those of subcortical structures like the striatum, superior colliculus, cerebellum, pontomedullary reticular formation, zona incerta and anterior pretectal nucleus, as well as those of level-setting systems like the cholinergic, histaminergic, serotonergic and noradrenergic pathways. We conclude by discussing how these brain regions may affect each other and how they together may control the precise timing of whisker movements and coordinate whisker perception.

  20. Automatic metadata generation for learning objects

    OpenAIRE

    Ramšak, Maja

    2011-01-01

    One of the results of modern era is a massive production and usage of manifold electronic resources. Number of digital collections, digital libraries and repositories who offer these resources to users, usually by search mechanisms, are increasing. This is especially evident in scientific research and education area. Above mentioned services for managing electronic resources use metadata and metadata records, respectively. Many authors present metadata as data about data or information a...

  1. Automatic Hardware Generation for Reconfigurable Architectures

    NARCIS (Netherlands)

    Nane, R.

    2014-01-01

    Reconfigurable Architectures (RA) have been gaining popularity rapidly in the last decade for two reasons. First, processor clock frequencies reached threshold values past which power dissipation becomes a very difficult problem to solve. As a consequence, alternatives were sought to keep improving

  2. Automatic generation of hardware/software interfaces

    OpenAIRE

    King, Myron Decker; Dave, Nirav H.; Mithal, Arvind

    2012-01-01

    Enabling new applications for mobile devices often requires the use of specialized hardware to reduce power consumption. Because of time-to-market pressure, current design methodologies for embedded applications require an early partitioning of the design, allowing the hardware and software to be developed simultaneously, each adhering to a rigid interface contract. This approach is problematic for two reasons: (1) a detailed hardware-software interface is difficult to specify until one is de...

  3. Pseudo-Urban automatic pattern generation

    OpenAIRE

    Saleri, Renato

    2005-01-01

    Our goal in this work is to investigate and experiment with methods for the automatic production of urban or architectural morphologies. So far, we have implemented and brought to convergence devices based on a heuristic that couples a pseudo-random sequence generation engine with a graphtal formalism of the L-System (Lindenmayer System) type. The objective, as a first step, is to produce simply and "at low cost" textured geometric environments...

  4. Automatic query formulations in information retrieval.

    Science.gov (United States)

    Salton, G; Buckley, C; Fox, E A

    1983-07-01

    Modern information retrieval systems are designed to supply relevant information in response to requests received from the user population. In most retrieval environments the search requests consist of keywords, or index terms, interrelated by appropriate Boolean operators. Since it is difficult for untrained users to generate effective Boolean search requests, trained search intermediaries are normally used to translate original statements of user need into useful Boolean search formulations. Methods are introduced in this study which reduce the role of the search intermediaries by making it possible to generate Boolean search formulations completely automatically from natural language statements provided by the system patrons. Frequency considerations are used automatically to generate appropriate term combinations as well as Boolean connectives relating the terms. Methods are covered to produce automatic query formulations both in a standard Boolean logic system, as well as in an extended Boolean system in which the strict interpretation of the connectives is relaxed. Experimental results are supplied to evaluate the effectiveness of the automatic query formulation process, and methods are described for applying the automatic query formulation process in practice. PMID:10299297
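The frequency-driven query construction described in this abstract can be sketched roughly as follows; the thresholds, stop-word list and function name are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch of frequency-based Boolean query formulation.
# Thresholds, stop-word list and function name are illustrative, not
# taken from the paper.
STOPWORDS = {"the", "a", "an", "of", "in", "for", "to", "and", "is"}

def formulate_boolean_query(request, doc_freq, n_docs, broad=0.3, narrow=0.05):
    """Build a Boolean query string from a natural-language request.

    Terms of moderate document frequency are discriminating and are
    AND-ed; rare terms are OR-ed together to broaden recall.
    """
    terms = [t for t in request.lower().split() if t not in STOPWORDS]
    and_terms, or_terms = [], []
    for term in dict.fromkeys(terms):  # dedupe, preserve order
        df = doc_freq.get(term, 0) / n_docs
        if narrow <= df <= broad:
            and_terms.append(term)
        elif 0 < df < narrow:
            or_terms.append(term)
    clauses = list(and_terms)
    if or_terms:
        clauses.append("(" + " OR ".join(or_terms) + ")")
    return " AND ".join(clauses)
```

For example, a mid-frequency term such as "retrieval" would be AND-ed while two rare terms would be grouped into an OR clause.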

  5. Standardized anatomic space for abdominal fat quantification

    Science.gov (United States)

    Tong, Yubing; Udupa, Jayaram K.; Torigian, Drew A.

    2014-03-01

    The ability to accurately measure subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) from images is important for improved assessment and management of patients with various conditions such as obesity, diabetes mellitus, obstructive sleep apnea, cardiovascular disease, kidney disease, and degenerative disease. Although imaging and analysis methods to measure the volume of these tissue components have been developed [1, 2], in clinical practice, an estimate of the amount of fat is obtained from just one transverse abdominal CT slice typically acquired at the level of the L4-L5 vertebrae for various reasons including decreased radiation exposure and cost [3-5]. It is generally assumed that such an estimate reliably depicts the burden of fat in the body. This paper sets out to answer two questions related to this issue which have not been addressed in the literature. How does one ensure that the slices used for correlation calculation from different subjects are at the same anatomic location? At what anatomic location do the volumes of SAT and VAT correlate maximally with the corresponding single-slice area measures? To answer these questions, we propose two approaches for slice localization: linear mapping and non-linear mapping which is a novel learning based strategy for mapping slice locations to a standardized anatomic space so that same anatomic slice locations are identified in different subjects. We then study the volume-to-area correlations and determine where they become maximal. We demonstrate on 50 abdominal CT data sets that this mapping achieves significantly improved consistency of anatomic localization compared to current practice. Our results also indicate that maximum correlations are achieved at different anatomic locations for SAT and VAT which are both different from the L4-L5 junction commonly utilized.
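The linear-mapping approach to slice localization can be sketched as a two-landmark rescaling; the landmark-based formulation and all names below are assumptions for illustration (the paper's non-linear, learning-based variant is not reproduced):

```python
def map_to_standard_space(z, z_top, z_bottom, std_top=0.0, std_bottom=100.0):
    """Linearly map a subject's axial slice location z into a
    standardized anatomic space.

    z_top and z_bottom are the locations of two anatomical landmarks in
    the subject's scan; in the standardized space those landmarks sit at
    fixed coordinates (std_top, std_bottom), so the same standardized
    value indexes the same anatomic level across subjects.
    """
    scale = (std_bottom - std_top) / (z_bottom - z_top)
    return std_top + (z - z_top) * scale
```

With this mapping, single-slice area measures taken at the same standardized coordinate in different subjects correspond to (approximately) the same anatomic location.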

  6. Congenital neck masses: embryological and anatomical perspectives

    Directory of Open Access Journals (Sweden)

    Zahida Rasool

    2013-08-01

    Full Text Available Neck masses are a common problem in the paediatric age group. They occur frequently and pose a diagnostic dilemma to ENT surgeons. Although midline and lateral neck masses differ considerably in their texture and presentation, the embryological perspective of these masses, along with the fundamental anatomical knowledge, is mostly not understood. The article tries to correlate the embryological, anatomical and clinical perspectives for the same. [Int J Res Med Sci 2013; 1(4): 329-332]

  7. Automatic Implantable Cardiac Defibrillator

    Medline Plus

    Full Text Available Automatic Implantable Cardiac Defibrillator February 19, 2009 Halifax Health Medical Center, Daytona Beach, FL Welcome to Halifax Health Daytona Beach, Florida. Over the next hour you' ...

  8. Automatic Payroll Deposit System.

    Science.gov (United States)

    Davidson, D. B.

    1979-01-01

    The Automatic Payroll Deposit System in Yakima, Washington's Public School District No. 7, directly transmits each employee's salary amount for each pay period to a bank or other financial institution. (Author/MLF)

  9. Automatic Arabic Text Classification

    OpenAIRE

    Al-harbi, S; Almuhareb, A.; Al-Thubaity , A; Khorsheed, M. S.; Al-Rajeh, A.

    2008-01-01

    Automated document classification is an important text mining task especially with the rapid growth of the number of online documents present in Arabic language. Text classification aims to automatically assign the text to a predefined category based on linguistic features. Such a process has different useful applications including, but not restricted to, e-mail spam detection, web page content filtering, and automatic message routing. This paper presents the results of experiments on documen...

  10. Automated Analysis of {sup 123}I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-03-15

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-Carbomethoxy-3β-(4-{sup 123}I-iodophenyl)tropane ({sup 123}I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15, mean age=49±12 years), 9 people with Wilson's disease (M:F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional {sup 123}I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease.

  11. Automated Analysis of 123I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    International Nuclear Information System (INIS)

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis for dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-Carbomethoxy-3β-(4-123I-iodophenyl)tropane (123I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15, mean age=49±12 years), 9 people with Wilson's disease (M:F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalizations of the SPECT images were successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully evaluated the regional 123I-beta-CIT distribution using the SPECT template and probabilistic map data automatically. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease.
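The probability-weighted count and equilibrium binding-potential computation described above can be sketched as follows; the function names and the use of a single reference (nonspecific) map are illustrative assumptions, not the authors' code:

```python
import numpy as np

def probability_weighted_count(image, prob_map):
    """Mean activity over a structure, weighting each voxel by its
    probability of belonging to the structure (from the anatomical map)."""
    return float(np.sum(image * prob_map) / np.sum(prob_map))

def binding_potential(image, striatum_prob, reference_prob):
    """BP = (specific - nonspecific) / nonspecific at equilibrium,
    i.e. the ratio of specific to nonspecific binding activity."""
    specific = probability_weighted_count(image, striatum_prob)
    nonspecific = probability_weighted_count(image, reference_prob)
    return (specific - nonspecific) / nonspecific
```

For instance, striatal counts three times the reference counts yield a binding potential of 2.0.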

  12. Fault injection system for automatic testing system

    Institute of Scientific and Technical Information of China (English)

    王胜文; 洪炳熔

    2003-01-01

    Considering the deficiency of means for confirming the attribution of fault redundancy in research on Automatic Testing Systems (ATS), a fault-injection system has been proposed to study the fault redundancy of an automatic testing system through comparison. By means of a fault-embedded environmental simulation, faults are injected at the input level of the software under test. These faults may induce inherent failure modes, thus bringing about unexpected output, and the anticipated goal of the test is attained. The fault injector consists of a specially developed voltage signal generator, current signal generator and rear drive circuit, and the ATS can work regularly by means of software simulation. The experimental results indicate that the fault-injection system can find deficiencies in the automatic testing software and identify the preference of fault redundancy. Moreover, some software deficiencies never exposed before can be identified by analyzing the testing results.

  13. TH-E-17A-01: Internal Respiratory Surrogate for 4D CT Using Fourier Transform and Anatomical Features

    Energy Technology Data Exchange (ETDEWEB)

    Hui, C; Suh, Y; Robertson, D; Pan, T; Das, P; Crane, C; Beddar, S [MD Anderson Cancer Center, Houston, TX (United States)

    2014-06-15

    Purpose: To develop a novel algorithm to generate internal respiratory signals for sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm extracted multiple time resolved features as potential respiratory signals. These features were taken from the 4D CT images and its Fourier transformed space. Several low-frequency locations in the Fourier space and selected anatomical features from the images were used as potential respiratory signals. A clustering algorithm was then used to search for the group of appropriate potential respiratory signals. The chosen signals were then normalized and averaged to form the final internal respiratory signal. Performance of the algorithm was tested in 50 4D CT data sets and results were compared with external signals from the real-time position management (RPM) system. Results: In almost all cases, the proposed algorithm generated internal respiratory signals that visibly matched the external respiratory signals from the RPM system. On average, the end inspiration times calculated by the proposed algorithm were within 0.1 s of those given by the RPM system. Less than 3% of the calculated end inspiration times were more than one time frame away from those given by the RPM system. In 3 out of the 50 cases, the proposed algorithm generated internal respiratory signals that were significantly smoother than the RPM signals. In these cases, images sorted using the internal respiratory signals showed fewer artifacts in locations corresponding to the discrepancy in the internal and external respiratory signals. Conclusion: We developed a robust algorithm that generates internal respiratory signals from 4D CT images. In some cases, it even showed the potential to outperform the RPM system. The proposed algorithm is completely automatic and generally takes less than 2 min to process. It can be easily implemented into the clinic and can potentially replace the use of external surrogates.
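The final step, normalizing and averaging the selected candidate signals, can be sketched as follows; the greedy correlation-based selection against the first candidate is a simplified assumption standing in for the paper's clustering step, and all names are illustrative:

```python
import numpy as np

def internal_respiratory_signal(candidates, corr_threshold=0.8):
    """Combine candidate time series into one surrogate signal.

    candidates: array of shape (n_signals, n_frames), e.g. low-frequency
    Fourier magnitudes and anatomical features tracked over time.
    Each signal is z-normalized; signals that correlate well (in absolute
    value) with the first candidate are kept, anti-correlated ones are
    flipped, and the survivors are averaged.
    """
    c = np.asarray(candidates, dtype=float)
    c = (c - c.mean(axis=1, keepdims=True)) / c.std(axis=1, keepdims=True)
    ref = c[0]
    selected = []
    for s in c:
        r = float(np.corrcoef(ref, s)[0, 1])
        if abs(r) >= corr_threshold:
            selected.append(s if r > 0 else -s)
    return np.mean(selected, axis=0)
```

Averaging several consistent features is what gives the internal signal its smoothness relative to any single feature.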

  14. A User Preference Based Automatic Potential Group Generation Method for Social Media Sharing and Recommendation

    Institute of Scientific and Technical Information of China (English)

    贾大文; 曾承; 彭智勇; 成鹏; 阳志敏; 卢舟

    2012-01-01

    Social media applications have become the mainstream of Web applications. User orientation and user-generated content are pivotal characteristics of social media sites. Data sharing and recommendation approaches play an important role in dealing with the problem of information overload in the social media environment. In this paper, we analyze the flaws of the current group-based information sharing mechanism and the common problems of traditional recommender approaches, and then propose a novel approach of automatic group generation for social media sharing and recommendation. Intuitively, the essential idea of our approach is to switch a user's preference from concrete media objects to the interest elements those media objects imply. We then gather users who have a common preference, namely the same interestingness over a set of interest elements, into a Common Preference Group (CPG). We also propose a new social media data sharing and recommendation system architecture based on CPG and design a CPG automatic mining algorithm. Comparing our CPG mining algorithm with another algorithm of similar functionality shows that our algorithm is applicable to real social media applications with massive numbers of users.

  15. Methodology for Automatic Generation of Models for Large Urban Spaces Based on GIS Data

    Directory of Open Access Journals (Sweden)

    Sergio Arturo Ordóñez Medina

    2012-12-01

    Full Text Available In the planning and evaluation stages of infrastructure projects, it is necessary to manage huge quantities of information. Cities are very complex systems, which need to be modeled when an intervention is required. Such models allow us to measure the impact of infrastructure changes, simulating hypothetical scenarios and evaluating results. This paper describes a methodology for the automatic generation of urban space models from GIS sources. A Voronoi diagram is used to partition large urban regions and subsequently define zones of interest. Finally, some examples of application models are presented, one used for microsimulation of traffic and another for air pollution simulation.

  16. Report of a rare anatomic variant

    DEFF Research Database (Denmark)

    De Brucker, Y; Ilsen, B; Muylaert, C;

    2015-01-01

    We report the CT findings in a case of partial anomalous pulmonary venous return (PAPVR) from the left upper lobe in an adult. PAPVR is an anatomic variant in which one to three pulmonary veins drain into the right atrium or its tributaries, rather than into the left atrium. This results in a lef...

  17. Evolution of the Anatomical Theatre in Padova

    Science.gov (United States)

    Macchi, Veronica; Porzionato, Andrea; Stecco, Carla; Caro, Raffaele

    2014-01-01

    The anatomical theatre played a pivotal role in the evolution of medical education, allowing students to directly observe and participate in the process of dissection. Due to the increase of training programs in clinical anatomy, the Institute of Human Anatomy at the University of Padova has renovated its dissecting room. The main guidelines in…

  18. Magnetic resonance angiography: infrequent anatomic variants

    International Nuclear Information System (INIS)

    We studied, through MR angiography (3D TOF) with high-magnetic-field equipment (1.5 T), various infrequent intracerebral vascular anatomic variants. For their detection we emphasise the value of post-processed images obtained after conventional angiographic sequences. These post-processed images should be included in routine protocols for the evaluation of intracerebral vascular structures. (author)

  19. HPV Vaccine Effective at Multiple Anatomic Sites

    Science.gov (United States)

    A new study from NCI researchers finds that the HPV vaccine protects young women from infection with high-risk HPV types at the three primary anatomic sites where persistent HPV infections can cause cancer. The multi-site protection also was observed at l

  20. Handbook of anatomical models for radiation dosimetry

    CERN Document Server

    Eckerman, Keith F

    2010-01-01

    Covering the history of human model development, this title presents the major anatomical and physical models that have been developed for human body radiation protection, diagnostic imaging, and nuclear medicine therapy. It explores how these models have evolved and the role that modern technologies have played in this development.

  1. Automatic programming of grinding robot restoration of contours

    Directory of Open Access Journals (Sweden)

    Are Willersrud

    1995-07-01

    Full Text Available A new programming method has been developed for grinding robots. Instead of using the conventional jog-and-teach method, the workpiece contour is automatically tracked by the robot. During the tracking, the robot position is stored in the robot control system every 8th millisecond. After filtering and reducing this contour data, a robot program is automatically generated.
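The "filtering and reducing" of the sampled contour data can be illustrated with a standard polyline-simplification routine; the paper does not specify its reduction algorithm, so the Douglas-Peucker method below is an assumed stand-in:

```python
import math

def _point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1))
    return num / math.dist(a, b)

def reduce_contour(points, tol=0.5):
    """Douglas-Peucker reduction: keep the endpoints; recurse on the
    farthest interior point if it deviates more than tol from the chord,
    otherwise drop all interior points."""
    if len(points) < 3:
        return list(points)
    dists = [_point_line_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]
    left = reduce_contour(points[:i + 1], tol)
    right = reduce_contour(points[i:], tol)
    return left[:-1] + right
```

Applied to positions sampled every 8 ms, this keeps only the waypoints needed to reproduce the contour within the tolerance, shrinking the generated robot program.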

  2. Automatic video surveillance of outdoor scenes using track before detect

    DEFF Research Database (Denmark)

    Hansen, Morten; Sørensen, Helge Bjarup Dissing; Birkemark, Christian M.; Stage, Bjarne

    This paper concerns automatic video surveillance of outdoor scenes using a single camera. The first step in automatic interpretation of the video stream is activity detection based on background subtraction. Usually, this process will generate a large number of false alarms in outdoor scenes due to...

  3. Reliability and effectiveness of clickthrough data for automatic image annotation

    NARCIS (Netherlands)

    Tsikrika, T.; Diou, C.; De Vries, A.P.; Delopoulos, A.

    2010-01-01

    Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive manua

  4. Automatic contrast phase estimation in CT volumes.

    Science.gov (United States)

    Sofka, Michal; Wu, Dijia; Sühling, Michael; Liu, David; Tietjen, Christian; Soza, Grzegorz; Zhou, S Kevin

    2011-01-01

    We propose an automatic algorithm for phase labeling that relies on the intensity changes in anatomical regions due to the contrast agent propagation. The regions (specified by aorta, vena cava, liver, and kidneys) are first detected by a robust learning-based discriminative algorithm. The intensities inside each region are then used in multi-class LogitBoost classifiers to independently estimate the contrast phase. Each classifier forms a node in a decision tree which is used to obtain the final phase label. Combining independent classification from multiple regions in a tree has the advantage when one of the region detectors fail or when the phase training example database is imbalanced. We show on a dataset of 1016 volumes that the system correctly classifies native phase in 96.2% of the cases, hepatic dominant phase (92.2%), hepatic venous phase (96.7%), and equilibrium phase (86.4%) in 7 seconds on average. PMID:22003696
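Combining independent per-region classifications into one phase label can be sketched as confidence-weighted voting; the actual system arranges multi-class LogitBoost classifiers as nodes of a decision tree, so this flat vote is a simplified assumption with illustrative names:

```python
def estimate_phase(region_predictions):
    """Combine per-region (label, confidence) phase predictions into a
    final contrast-phase label by confidence-weighted voting, so a
    failed region detector cannot single-handedly flip the result."""
    scores = {}
    for label, confidence in region_predictions:
        scores[label] = scores.get(label, 0.0) + confidence
    return max(scores, key=scores.get)
```

For example, two confident "native" votes outweigh one weak "hepatic venous" vote.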

  5. Reduction of Dutch Sentences for Automatic Subtitling

    NARCIS (Netherlands)

    Tjong Kim Sang, E.F.; Daelemans, W.; Höthker, A.

    2004-01-01

    We compare machine learning approaches for sentence length reduction for automatic generation of subtitles for deaf and hearing-impaired people with a method which relies on hand-crafted deletion rules. We describe building the necessary resources for this task: a parallel corpus of examples of news

  6. Historical evolution of anatomical terminology from ancient to modern.

    Science.gov (United States)

    Sakai, Tatsuo

    2007-06-01

    The historical development of anatomical terminology from the ancient to the modern can be divided into five stages. The initial stage is represented by the oldest extant anatomical treatises by Galen of Pergamon in the Roman Empire. The anatomical descriptions by Galen utilized only a limited number of anatomical terms, which were essentially colloquial words in the Greek of this period. In the second stage, Vesalius in the early 16th century described the anatomical structures in his Fabrica with the help of detailed magnificent illustrations. He coined essentially no new anatomical terms, but devised a system that distinguished anatomical structures with ordinal numbers. The third stage of development, in the late 16th century, was marked by the innovation of a large number of specific anatomical terms, especially for the muscles, vessels and nerves. The main figures at this stage were Sylvius in Paris and Bauhin in Basel. In the fourth stage, between Bauhin and the international anatomical terminology, many anatomical textbooks were written, mainly in Latin in the 17th century and in modern languages in the 18th and 19th centuries. Anatomical terms for the same structure were expressed differently by different authors. The last stage began at the end of the 19th century, when the first international anatomical terminology in Latin was published as Nomina anatomica. The anatomical terminology was revised repeatedly up to the current Terminologia anatomica, in both Latin and English. PMID:17585563

  7. ANALYSIS METHOD OF AUTOMATIC PLANETARY TRANSMISSION KINEMATICS

    OpenAIRE

    Józef DREWNIAK; Stanisław ZAWIŚLAK; Wieczorek, Andrzej

    2014-01-01

    In the present paper, an automatic planetary transmission is modeled by means of contour graphs. The goals of the modeling are versatile: ratio calculation via algorithmic equation generation, and analysis of velocities and accelerations. Exemplary gear runs are analyzed; several drives/gears are consecutively taken into account, discussing functional schemes, assigned contour graphs, and the generated systems of equations and their solutions. The advantages of the method are: an algorithmic approach, ...

  8. Automatic page composition with combined image crop and layout metrics

    Science.gov (United States)

    Hunter, Andrew; Greig, Darryl

    2012-03-01

    Automatic layout algorithms simplify the composition of image-rich documents, but they still require users to have sufficient artistry to supply well cropped and composed imagery. Combining an automatic cropping technology with a document layout system enables better results to be produced faster by less-skilled users. This paper reviews prior work in automatic image cropping and automatic page layout and presents a case for a combined crop and layout technology. We describe one such technology in a system for interactive publication design by amateur self-publishers and show that providing an automatic cropping system with additional information about the layout context can enable it to generate a more appropriate set of ranked crop options for a given image. Furthermore, we show that providing an automatic layout system with sets of ranked crop options for images can enable it to compose more appropriate page layouts.

  9. Automatic detection system for multiple region of interest registration to account for posture changes in head and neck radiotherapy

    Science.gov (United States)

    Mencarelli, A.; van Beek, S.; Zijp, L. J.; Rasch, C.; van Herk, M.; Sonke, J.-J.

    2014-04-01

    Despite immobilization of head and neck (H and N) cancer patients, considerable posture changes occur over the course of radiotherapy (RT). To account for the posture changes, we previously implemented a multiple regions of interest (mROIs) registration system tailored to the H and N region for image-guided RT correction strategies. This paper is focused on the automatic segmentation of the ROIs in the H and N region. We developed a fast and robust automatic detection system suitable for an online image-guided application and quantified its performance. The system was developed to segment nine high contrast structures from the planning CT including cervical vertebrae, mandible, hyoid, manubrium of sternum, larynx and occipital bone. It generates nine 3D rectangular-shaped ROIs and informs the user in case of ambiguities. Two observers evaluated the robustness of the segmentation on 188 H and N cancer patients. Bland-Altman analysis was applied to a sub-group of 50 patients to compare the registration results using only the automatically generated ROIs and those manually set by two independent experts. Finally the time performance and workload were evaluated. Automatic detection of individual anatomical ROIs had a success rate of 97%/53% with/without user notifications respectively. Following the notifications, for 38% of the patients one or more structures were manually adjusted. The processing time was on average 5 s. The limits of agreement between the local registrations of manually and automatically set ROIs were within ±1.4 mm, except for the manubrium of sternum (-1.71 mm and 1.67 mm), and were similar to the limits of agreement between the two experts. The workload to place the nine ROIs was reduced from 141 s (±20 s) by the manual procedure to 59 s (±17 s) using the automatic method. An efficient detection system to segment multiple ROIs was developed for Cone-Beam CT image-guided applications in the H and N region and is clinically implemented in

  10. Automatic detection system for multiple region of interest registration to account for posture changes in head and neck radiotherapy

    International Nuclear Information System (INIS)

    Despite immobilization of head and neck (H and N) cancer patients, considerable posture changes occur over the course of radiotherapy (RT). To account for the posture changes, we previously implemented a multiple regions of interest (mROIs) registration system tailored to the H and N region for image-guided RT correction strategies. This paper is focused on the automatic segmentation of the ROIs in the H and N region. We developed a fast and robust automatic detection system suitable for an online image-guided application and quantified its performance. The system was developed to segment nine high contrast structures from the planning CT including cervical vertebrae, mandible, hyoid, manubrium of sternum, larynx and occipital bone. It generates nine 3D rectangular-shaped ROIs and informs the user in case of ambiguities. Two observers evaluated the robustness of the segmentation on 188 H and N cancer patients. Bland–Altman analysis was applied to a sub-group of 50 patients to compare the registration results using only the automatically generated ROIs and those manually set by two independent experts. Finally the time performance and workload were evaluated. Automatic detection of individual anatomical ROIs had a success rate of 97%/53% with/without user notifications respectively. Following the notifications, for 38% of the patients one or more structures were manually adjusted. The processing time was on average 5 s. The limits of agreement between the local registrations of manually and automatically set ROIs were within ±1.4 mm, except for the manubrium of sternum (−1.71 mm and 1.67 mm), and were similar to the limits of agreement between the two experts. The workload to place the nine ROIs was reduced from 141 s (±20 s) by the manual procedure to 59 s (±17 s) using the automatic method. An efficient detection system to segment multiple ROIs was developed for Cone-Beam CT image-guided applications in the H and N region and is clinically

  11. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of automatic program derivation. The book also includes some papers by members of the IFIP Working Group 2.1, of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers a...

  12. Automatic Keyword Extraction from Individual Documents

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.; Cowley, Wendy E.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
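A minimal sketch of the stopword-delimited candidate extraction and degree/frequency scoring described above (the stop list and example text are illustrative, and the scoring is a simplification of the method's configurable parameters):

```python
import re

# Illustrative stop list; a real configuration would use a full corpus-derived list.
STOP_WORDS = {"a", "and", "for", "from", "in", "of", "on", "the", "this", "to", "we"}

def extract_keywords(text):
    """Split text into candidate phrases at stop words, then score each
    phrase by the summed degree/frequency ratio of its words (RAKE-style)."""
    words = re.findall(r"[a-zA-Z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOP_WORDS:
            if current:
                phrases.append(tuple(current))
                current = []
        else:
            current.append(w)
    if current:
        phrases.append(tuple(current))
    # Word scores: degree (co-occurrence weight within phrases) / frequency.
    freq, degree = {}, {}
    for p in phrases:
        for w in p:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(p)
    score = lambda p: sum(degree[w] / freq[w] for w in p)
    return sorted(set(phrases), key=score, reverse=True)

top = extract_keywords("Automatic keyword extraction from individual documents "
                       "supports automatic extraction of keywords")
```

Longer multi-word phrases naturally score higher, which is why the method favors keywords that are sequences of one or more words.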

  13. Automatic utilities auditing

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Colin Boughton [Energy Metering Technology (United Kingdom)

    2000-08-01

    At present, energy audits represent only snapshot situations of the flow of energy. The normal pattern of energy audits, as seen through the eyes of an experienced energy auditor, is described, and a brief history of energy auditing is given. It is claimed that the future of energy auditing lies in automatic meter reading with expert data analysis providing continuous automatic auditing, thereby reducing the skill element. Ultimately, it will be feasible to carry out auditing at intervals of, say, 30 minutes rather than five years.

  14. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS), surveying the recent state of the art. The book shows the main problems of ADS, the difficulties involved, and the solutions provided by the community. It presents recent advances in ADS as well as current applications and trends. The approaches covered are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of automatic document summarization are not recent. Powerful algorithms have been develop

  15. An automatic system for segmentation, matching, anatomical labeling and measurement of airways from CT images

    DEFF Research Database (Denmark)

    Petersen, Jens; Feragen, Aasa; Lo, P.; Owen, Megan; Wille, M.M.W.; Thomsen, Laura; Dirksen, Asger; de Bruijne, Marleen

    Purpose: Assessing airway dimensions and attenuation from CT images is useful in the study of diseases affecting the airways such as Chronic Obstructive Pulmonary Disease (COPD). Measurements can be compared between patients and over time if specific airway segments can be identified. However, ma...

  16. Integrating anatomical pathology to the healthcare enterprise.

    Science.gov (United States)

    Daniel-Le Bozec, Christel; Henin, Dominique; Fabiani, Bettina; Bourquard, Karima; Ouagne, David; Degoulet, Patrice; Jaulent, Marie-Christine

    2006-01-01

    For medical decisions, healthcare professionals need all required information to be both correct and easily available. We address the issue of integrating the anatomical pathology department into the healthcare enterprise. The pathology workflow from order to report, including specimen processing and image acquisition, was modeled. The corresponding integration profiles were addressed by expansion of the IHE (Integrating the Healthcare Enterprise) initiative. Implementation using DICOM Structured Report (SR) and DICOM Slide-Coordinate Microscopy (SM), respectively, was tested. The two main integration profiles--pathology general workflow and pathology image workflow--rely on 13 transactions based on the HL7 or DICOM standards. We propose a model of the case in anatomical pathology and of other information entities (orders, image folders and reports) and real-world objects (specimens, tissue samples, slides, etc). Representing cases in XML schemas based on the DICOM specification allows producing DICOM image files and reports to be stored in a PACS (Picture Archiving and Communication System). PMID:17108550

  17. ANATOMIC RESEARCH OF SUPERIOR CLUNIAL NERVE TRAUMA

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    In order to find the mechanism of superior clunial nerve (SCN) trauma, we dissected and exposed the SCN in 12 corpses (24 sides). Combining these findings with 100 sides of SCN trauma, we inspected the course of the SCN and the relation between the SCN and its neighbouring tissues, together with the behaviour of the SCN when subjected to force. We found that special anatomical characteristics and mechanical elements, such as the course of the SCN, its turning angles, the bony fibrous tube at the iliac crest, the posterior layer of the lumbodorsal fascia and the adipose tissue neighbouring the SCN, are the causes of external force inducing SCN trauma. This anatomical exposure guides the treatment of SCN trauma with an edged needle.

  18. A multi-institution evaluation of deformable image registration algorithms for automatic organ delineation in adaptive head and neck radiotherapy

    International Nuclear Information System (INIS)

    Adaptive Radiotherapy aims to identify anatomical deviations during a radiotherapy course and modify the treatment plan to maintain treatment objectives. This requires regions of interest (ROIs) to be defined using the most recent imaging data. This study investigates the clinical utility of using deformable image registration (DIR) to automatically propagate ROIs. Target (GTV) and organ-at-risk (OAR) ROIs were non-rigidly propagated from a planning CT scan to a per-treatment CT scan for 22 patients. Propagated ROIs were quantitatively compared with expert physician-drawn ROIs on the per-treatment scan using Dice scores and mean slicewise Hausdorff distances, and center of mass distances for GTVs. The propagated ROIs were qualitatively examined by experts and scored based on their clinical utility. Good agreement between the DIR-propagated ROIs and expert-drawn ROIs was observed based on the metrics used. 94% of all ROIs generated using DIR were scored as being clinically useful, requiring minimal or no edits. However, 27% (12/44) of the GTVs required major edits. DIR was successfully used on 22 patients to propagate target and OAR structures for ART with good anatomical agreement for OARs. It is recommended that propagated target structures be thoroughly reviewed by the treating physician
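The Dice score used to compare propagated and physician-drawn ROIs measures overlap between two contours A and B as 2|A∩B| / (|A| + |B|). A minimal sketch over hypothetical voxel sets, not the study's contours:

```python
def dice_score(a, b):
    """Dice similarity coefficient between two sets of voxel coordinates."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical 2D "ROIs" as sets of voxel coordinates (not study data):
# two 10x10 squares shifted by 2 voxels along x.
roi_expert     = {(x, y) for x in range(10) for y in range(10)}
roi_propagated = {(x, y) for x in range(2, 12) for y in range(10)}
overlap = dice_score(roi_expert, roi_propagated)  # 80 shared voxels, 100 each
```

A score of 1.0 means perfect overlap and 0.0 means disjoint contours, so values around 0.8-0.9 are typically read as good anatomical agreement.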

  19. A multi-institution evaluation of deformable image registration algorithms for automatic organ delineation in adaptive head and neck radiotherapy

    Directory of Open Access Journals (Sweden)

    Hardcastle Nicholas

    2012-06-01

    Full Text Available Abstract Background Adaptive Radiotherapy aims to identify anatomical deviations during a radiotherapy course and modify the treatment plan to maintain treatment objectives. This requires regions of interest (ROIs) to be defined using the most recent imaging data. This study investigates the clinical utility of using deformable image registration (DIR) to automatically propagate ROIs. Methods Target (GTV) and organ-at-risk (OAR) ROIs were non-rigidly propagated from a planning CT scan to a per-treatment CT scan for 22 patients. Propagated ROIs were quantitatively compared with expert physician-drawn ROIs on the per-treatment scan using Dice scores and mean slicewise Hausdorff distances, and center of mass distances for GTVs. The propagated ROIs were qualitatively examined by experts and scored based on their clinical utility. Results Good agreement between the DIR-propagated ROIs and expert-drawn ROIs was observed based on the metrics used. 94% of all ROIs generated using DIR were scored as being clinically useful, requiring minimal or no edits. However, 27% (12/44) of the GTVs required major edits. Conclusion DIR was successfully used on 22 patients to propagate target and OAR structures for ART with good anatomical agreement for OARs. It is recommended that propagated target structures be thoroughly reviewed by the treating physician.

  20. Anatomical basis for impotence following haemorrhoid sclerotherapy.

    OpenAIRE

    Pilkington, S. A.; Bateman, A C; Wombwell, S.; Miller, R

    2000-01-01

    Impotence has been reported as a rare but important complication of sclerotherapy for haemorrhoids. The relationship between the anterior wall of the rectum and the periprostatic parasympathetic nerves responsible for penile erection was studied to investigate a potential anatomical explanation for this therapeutic complication. A tissue block containing the anal canal, rectum and prostate was removed from each of six male cadaveric subjects. The dimensions of the components of the rectal wal...

  1. Identification of anatomical terminology in medical text.

    OpenAIRE

    Sneiderman, C. A.; Rindflesch, T. C.; Bean, C. A.

    1998-01-01

    We report on an experiment to use the natural language processing tools being developed in the SPECIALIST system to accurately identify terminology associated with the coronary arteries as expressed in coronary catheterization reports. The ultimate goal is to map from any anatomically-oriented medical text to online images, using the UMLS as an intermediate knowledge source. We describe some of the problems encountered when processing coronary artery terminology and report on the results of a...

  2. Anatomically Plausible Surface Alignment and Reconstruction

    DEFF Research Database (Denmark)

    Paulsen, Rasmus R.; Larsen, Rasmus

    2010-01-01

    With the increasing clinical use of 3D surface scanners, there is a need for accurate and reliable algorithms that can produce anatomically plausible surfaces. In this paper, a combined method for surface alignment and reconstruction is proposed. It is based on an implicit surface representation... energy that has earlier proved to be particularly well suited for human surface scans. The method has been tested on full cranial scans of ten test subjects and on several scans of the outer human ear.

  3. Anatomic Landmarks for the First Dorsal Compartment

    OpenAIRE

    Hazani, Ron; Engineer, Nitin J.; Cooney, Damon; Wilhelmi, Bradon J.

    2009-01-01

    Objective: Knowledge of anatomic landmarks for the first dorsal compartment can assist clinicians with management of de Quervain's disease. The radial styloid, the scaphoid tubercle, and Lister's tubercle can be used as superficial landmarks for the first dorsal compartment. Methods: Thirty-two cadaveric wrists were dissected, and measurements were taken from the predetermined landmarks to the extensor retinaculum. The compartments were also inspected for variability of the abductor pollicis ...

  4. Microstructure and Anatomical Characteristics of Daemonorops margaritae

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Daemonorops margaritae is among the most important commercial rattans in South China. Its microstructure and basic anatomical characteristics, as well as their variation, were investigated. Results show that: 1) the variation along the height is small, while the variation along the radial direction is significant; 2) the fibre length, fibre ratio and distribution density of the vascular bundles in the cross section decrease from cortex to core, while the fibre width, vessel element length and width, parenchyma ratio,...

  5. Pure endoscopic endonasal odontoidectomy: anatomical study

    OpenAIRE

    Messina, Andrea; Bruno, Maria Carmela; Decq, Philippe; Coste, Andre; Cavallo, Luigi Maria; de Divittis, Enrico; Cappabianca, Paolo; Tschabitscher, Manfred

    2007-01-01

    Different disorders may produce irreducible atlanto-axial dislocation with compression of the ventral spinal cord. Among the surgical approaches available for such a condition, transoral resection of the odontoid process is the most often used. The aim of this anatomical study is to demonstrate the possibility of an anterior cervico-medullary decompression through an endoscopic endonasal approach. Three fresh cadaver heads were used. A modified endonasal endoscopic approach was made in al...

  6. ACCESSORY SPLEEN: A CLINICALLY RELEVANT ANATOMIC ANOMALY

    OpenAIRE

    Prachi Saffar; Amit Kumar; Ankur

    2016-01-01

    The purpose of our study is to emphasize the clinical relevance of the presence of an accessory spleen. It is not only a well-documented anatomic anomaly; it also holds special significance in the differential diagnosis of intra-abdominal tumours and lymphadenopathy. MATERIALS AND METHODS Thirty male cadavers from a North Indian population above the age of 60 yrs. were dissected in the Anatomy Department of FMHS, SGT University, Gurgaon, over a period of 5 yrs. (Sep 2010-Aug 2015) and presence...

  7. Automatic cortical surface reconstruction of high-resolution T1 echo planar imaging data.

    Science.gov (United States)

    Renvall, Ville; Witzel, Thomas; Wald, Lawrence L; Polimeni, Jonathan R

    2016-07-01

    Echo planar imaging (EPI) is the method of choice for the majority of functional magnetic resonance imaging (fMRI), yet EPI is prone to geometric distortions and thus misaligns with conventional anatomical reference data. The poor geometric correspondence between functional and anatomical data can lead to severe misplacements and corruption of detected activation patterns. However, recent advances in imaging technology have provided EPI data with increasing quality and resolution. Here we present a framework for deriving cortical surface reconstructions directly from high-resolution EPI-based reference images, which provide anatomical models whose geometric distortions exactly match the functional data. Anatomical EPI data with 1 mm isotropic voxel size were acquired using a fast multiple inversion recovery time EPI sequence (MI-EPI) at 7 T, from which quantitative T1 maps were calculated. Using these T1 maps, volumetric data mimicking the tissue contrast of standard anatomical data were synthesized using the Bloch equations, and these T1-weighted data were automatically processed using FreeSurfer. The spatial alignment between T2*-weighted EPI data and the synthetic T1-weighted anatomical MI-EPI-based images was improved compared to the conventional anatomical reference. In particular, the alignment near regions vulnerable to distortion due to magnetic susceptibility differences was improved, and sampling of the adjacent tissue classes outside of the cortex was reduced when using cortical surface reconstructions derived directly from the MI-EPI reference. The MI-EPI method therefore produces high-quality anatomical data that can be automatically segmented with standard software, providing cortical surface reconstructions that are geometrically matched to the BOLD fMRI data. PMID:27079529

  8. Research on an Intelligent Automatic Turning System

    Directory of Open Access Journals (Sweden)

    Lichong Huang

    2012-12-01

    Full Text Available The equipment manufacturing industry is a strategic industry for a country, and its core is the CNC machine tool. Therefore, enhancing independent research on the relevant technology of CNC machines, especially open CNC systems, is of great significance. This paper presented some key techniques of an Intelligent Automatic Turning System and gave a viable solution for system integration. First of all, the integrated system architecture and the flexible and efficient workflow for performing the intelligent automatic turning process are illustrated. Secondly, innovative methods for workpiece feature recognition and expression and for process planning of the NC machining are put forward. Thirdly, the cutting tool auto-selection and the cutting parameter optimization solution are generated with an integrated inference combining rule-based reasoning and case-based reasoning. Finally, an actual machining case based on the developed intelligent automatic turning system proved that the presented solutions are valid, practical and efficient.

  9. Exploring brain function from anatomical connectivity

    Directory of Open Access Journals (Sweden)

    Gorka Zamora-López

    2011-06-01

    Full Text Available The intrinsic relationship between the architecture of the brain and the range of sensory and behavioral phenomena it produces is a relevant question in neuroscience. Here, we review recent knowledge gained on the architecture of anatomical connectivity by means of complex network analysis. It has been found that corticocortical networks display a few prominent characteristics: (i) modular organization, (ii) abundant alternative processing paths and (iii) the presence of highly connected hubs. Additionally, we present a novel classification of cortical areas of the cat according to the role they play in multisensory connectivity. All these properties represent an ideal anatomical substrate supporting rich dynamical behaviors, as well as facilitating the capacity of the brain to process sensory information of different modalities separately and to integrate it toward a comprehensive perception of the real world. The results presented here are based mainly on anatomical data from the cat brain, but further observations suggest that, from worms to humans, the nervous systems of all animals might share fundamental principles of organization.

  10. Anatomical MRI with an atomic magnetometer

    CERN Document Server

    Savukov, I

    2012-01-01

    Ultra-low field (ULF) MRI is a promising method for inexpensive medical imaging with various additional advantages over conventional instruments, such as low weight, low power, portability, absence of artifacts from metals, and high contrast. Anatomical ULF MRI has been successfully implemented with SQUIDs, but SQUIDs have the drawback of requiring cryogens. Atomic magnetometers have sensitivity comparable to SQUIDs and can in principle be used for ULF MRI to replace SQUIDs. Unfortunately, some problems arise from the sensitivity of atomic magnetometers to magnetic fields and gradients. At low frequency, noise is also substantial, and a shielded room is needed to improve sensitivity. In this paper, we show that at 85 kHz the atomic magnetometer can be used to obtain anatomical images. This is the first demonstration of any use of atomic magnetometers for anatomical MRI. The demonstrated resolution is 1.1 × 1.4 mm² in about six minutes of acquisition with an SNR of 10. Some applications of the method are discuss...

  11. Anatomic variation of cranial parasympathetic ganglia

    Directory of Open Access Journals (Sweden)

    Selma Siéssere

    2008-06-01

    Full Text Available Having broad knowledge of anatomy is essential for practicing dentistry. Certain anatomical structures call for detailed studies due to their anatomical and functional importance. Nevertheless, some structures are difficult to visualize and identify due to their small volume and complicated access. Such is the case of the parasympathetic ganglia located in the cranial part of the autonomic nervous system, which include: the ciliary ganglion (located deep in the orbit, lateral to the optic nerve), the pterygopalatine ganglion (located in the pterygopalatine fossa), the submandibular ganglion (located lateral to the hyoglossus muscle, below the lingual nerve), and the otic ganglion (located medial to the mandibular nerve, directly beneath the oval foramen). The aim of this study was to present these structures in dissected anatomic specimens and perform a comparative analysis regarding location and morphology. The proximity of the ganglia and associated nerves was also analyzed, as well as the number and volume of fibers connected to them. Human heads were dissected by planes, partially removing the adjacent structures until we could reach the parasympathetic ganglia. With this study, we concluded that there was no significant variation regarding the location of the studied ganglia. Morphologically, our observations concur with previous classical descriptions of the parasympathetic ganglia, but we observed variations regarding the proximity of the otic ganglion to the mandibular nerve. We also observed variations regarding the number and volume of fiber bundles connected to the submandibular, otic, and pterygopalatine ganglia.

  12. An anatomical and functional model of the human tracheobronchial tree.

    Science.gov (United States)

    Florens, M; Sapoval, B; Filoche, M

    2011-03-01

    The human tracheobronchial tree is a complex branched distribution system in charge of renewing the air inside the acini, which are the gas exchange units. We present here a systematic geometrical model of this system, described as a self-similar assembly of rigid pipes. It includes the specific geometry of the upper bronchial tree and a self-similar intermediary tree with a systematic branching asymmetry. It ends at the terminal bronchioles, whose generations range from 8 to 22. Unlike classical models, it does not rely on a simple scaling law. With a limited number of parameters, this model reproduces the morphometric data from various sources (Horsfield K, Dart G, Olson DE, Filley GF, Cumming G. J Appl Physiol 31: 207-217, 1971; Weibel ER. Morphometry of the Human Lung. New York: Academic Press, 1963) and the main characteristics of the ventilation. Studying various types of random variations of the airway sizes, we show that strong correlations are needed to reproduce the measured distributions. Moreover, the ventilation performance is observed to be robust against anatomical variability. The same methodology applied to the rat also permits building a geometrical model that reproduces the anatomical and ventilation characteristics of this animal. This simple model can be directly used as a common description of the entire tree in analytical or numerical studies such as the computation of air flow distribution or aerosol transport. PMID:21183626
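A self-similar tree with systematic branching asymmetry can be sketched as a recursion in which each airway spawns a major and a minor daughter with fixed diameter reduction ratios. The ratios, trachea diameter, and cut-offs below are illustrative assumptions, not the model's fitted parameters:

```python
def generate_tree(diameter, generation, max_gen, h_major=0.9, h_minor=0.75):
    """Recursively generate airway diameters for a self-similar, asymmetric
    bronchial tree; returns a list of (generation, diameter) pairs."""
    airways = [(generation, diameter)]
    if generation < max_gen and diameter > 0.05:  # stop near terminal bronchioles
        airways += generate_tree(diameter * h_major, generation + 1, max_gen)
        airways += generate_tree(diameter * h_minor, generation + 1, max_gen)
    return airways

# Illustrative run: trachea diameter ~1.8 cm, eight generations of branching.
tree = generate_tree(diameter=1.8, generation=0, max_gen=8)
```

Because the two daughter ratios differ, the minor pathway reaches small diameters in fewer generations than the major pathway, which is how a fixed rule set can produce terminal bronchioles spread over a wide range of generations.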

  13. Developing a derivatives generator

    Directory of Open Access Journals (Sweden)

    Mircea Petic

    2016-01-01

    Full Text Available The article highlights the particularities of the derivational morphology mechanisms that help in extending lexical resources. Some computational approaches to derivational morphology are given for several languages, including Romanian. This paper deals with some preprocessing particularities that are needed in the process of automatic generation. Then, generative mechanisms are presented in the form of derivational formal rules, separately for prefixation and suffixation. The article ends with several approaches to the automatic validation of newly generated words.
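As an illustration of such derivational formal rules, here is a hedged sketch of rule-based prefixation and suffixation with a simple final-letter alternation. The rules and the example word are invented for illustration and do not reflect the generator's actual rule base:

```python
# Illustrative derivational rules: (suffix, final-letter alternation map).
# These invented rules only gesture at the shape of real generation rules.
SUFFIX_RULES = [("ist", {"a": ""}), ("ire", {"i": ""})]
PREFIXES = ["re", "ne"]

def derive(word):
    """Generate candidate derivatives by prefixation and suffixation;
    the candidates would then go through a separate validation step."""
    candidates = set()
    for prefix in PREFIXES:
        candidates.add(prefix + word)
    for suffix, alternations in SUFFIX_RULES:
        stem = word
        if word[-1] in alternations:  # apply final-letter alternation
            stem = word[:-1] + alternations[word[-1]]
        candidates.add(stem + suffix)
    return sorted(candidates)

candidates = derive("forma")  # hypothetical base word
```

Overgeneration is expected by design: the rules produce all formally possible derivatives, and the validation approaches mentioned in the article filter out the non-words.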

  14. Anatomic feature-based registration for patient set-up in head and neck cancer radiotherapy

    International Nuclear Information System (INIS)

    Modern radiotherapy equipment is capable of delivering high-precision conformal dose distributions relative to the isocentre. One of the barriers to precise treatment is accurate patient re-positioning before each fraction of treatment. At Massachusetts General Hospital, we perform daily patient alignment using radiographs, which are captured by flat-panel imaging devices and sent to an analysis program. A trained therapist manually selects anatomically significant features in the skeleton, and couch movement is computed based on the image coordinates of the features. The current procedure takes about 5 to 10 min and significantly affects the efficiency requirements of a busy clinic. This work presents our effort to develop an improved, semi-automatic procedure that uses the manually selected features from the first treatment fraction to automatically locate the same features on the second and subsequent fractions. An implementation of this semi-automatic procedure is currently in clinical use for head and neck tumour sites. Radiographs collected from 510 patient set-ups were used to test this algorithm. A mean difference of 1.5 mm between manual and automatic localization of individual features and a mean difference of 0.8 mm for overall set-up were seen.

  15. Piriformis Fossa – An Anatomical and Orthopedics Consideration

    OpenAIRE

    Lakhwani, O. P.; Mittal, P.S.; D. C. Naik

    2014-01-01

    Introduction: The piriformis fossa is an important anatomical landmark with significant clinical value in orthopedic surgery, but its location and anatomical relationship with surrounding structures are not clearly defined. Hence it is necessary to describe it clearly with respect to both anatomical and orthopedic aspects.

  16. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of the input. We describe a system to derive such time bounds automatically using abstract...
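The notion of a time bound function can be illustrated by pairing a program with a step-counting version whose count is bounded by a function of the input size. This is a toy illustration of the idea, not the derivation system the abstract describes:

```python
def sum_list(xs):
    """The analysed program: sums a list."""
    total = 0
    for x in xs:
        total += x
    return total

def sum_list_steps(xs):
    """Step-counting version: returns (result, loop steps executed)."""
    total, steps = 0, 0
    for x in xs:
        total += x
        steps += 1
    return total, steps

# A worst-case time bound as a function of input size n:
# the loop body executes exactly n times.
time_bound = lambda n: n
```

An automatic analysis aims to produce `time_bound` directly from the program text, without running it, so that `time_bound(len(xs))` always dominates the actual step count.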

  17. Medical Image Processing for Fully Integrated Subject Specific Whole Brain Mesh Generation

    Directory of Open Access Journals (Sweden)

    Chih-Yang Hsu

    2015-05-01

    Full Text Available Currently, anatomically consistent segmentation of vascular trees acquired with magnetic resonance imaging requires the use of multiple image processing steps, which, in turn, depend on manual intervention. In effect, segmentation of vascular trees from medical images is time consuming and error prone due to the tortuous geometry and weak signal in small blood vessels. To overcome errors and accelerate the image processing time, we introduce an automatic image processing pipeline for constructing subject-specific computational meshes for the entire cerebral vasculature, including segmentation of ancillary structures: the grey and white matter, cerebrospinal fluid space, skull, and scalp. To demonstrate the validity of the new pipeline, we segmented the entire intracranial compartment, with special attention to the angioarchitecture, from magnetic resonance images acquired for two healthy volunteers. The raw images were processed through our pipeline for automatic segmentation and mesh generation. Due to the partial volume effect and finite resolution, the computational meshes intersect with each other at the respective interfaces. To eliminate anatomically inconsistent overlap, we utilized morphological operations to separate the structures with physiologically sound gap spaces. The resulting meshes exhibit anatomically correct spatial extent and relative positions without intersections. For validation, we computed critical biometrics of the angioarchitecture, the cortical surfaces, the ventricular system, and the cerebrospinal fluid (CSF) spaces and compared them against literature values. Volumes and surface areas of the computational mesh were found to be in physiological ranges. In conclusion, we present an automatic image processing pipeline to automate the segmentation of the main intracranial compartments, including subject-specific vascular trees. These computational meshes can be used in 3D immersive visualization for diagnosis, surgery planning with haptics

  18. Anatomical versus Non-Anatomical Single Bundle Anterior Cruciate Ligament Reconstruction: A Cadaveric Study of Comparison of Knee Stability

    OpenAIRE

    Lim, Hong-Chul; Yoon, Yong-Cheol; Wang, Joon-Ho; Bae, Ji-Hoon

    2012-01-01

    Background The purpose of this study was to compare the initial stability of anatomical and non-anatomical single bundle anterior cruciate ligament (ACL) reconstruction and to determine which would better restore intact knee kinematics. Our hypothesis was that the initial stability of anatomical single bundle ACL reconstruction would be superior to that of non-anatomical single bundle ACL reconstruction. Methods Anterior tibial translation (ATT) and internal rotation of the tibia were measure...

  19. Modeling of Automatic Generation Control for Power System Transient, Medium-Term and Long-Term Stability Simulations

    Institute of Scientific and Technical Information of China (English)

    宋新立; 王成山; 仲悟之; 汤涌; 卓峻峰; 旸吴国; 苏志达

    2013-01-01

    In order to dynamically simulate secondary frequency control in large power systems, a new automatic generation control (AGC) model, which can be applied to power system electromechanical transient, medium-term and long-term dynamics simulation, is proposed based on the modeling method of hybrid systems. It mainly consists of three parts: calculation of the area control error (ACE), simulation of the control strategy, and calculation of generating power regulation. The first module is modeled by the method of continuous dynamic systems, and the last two modules are modeled by the method of discrete event dynamic systems. By interfacing to the existing models in the power system unified dynamic simulation program, it is capable of simulating not only the three main control modes of AGC for large power systems, i.e., flat frequency control (FFC), constant net interchange control (CIC), and tie-line bias frequency control (TBC), but also the widely used control strategies based on the CPS and A standards. Two simulation cases, which are related to the active power control for the tie-line in China UHVAC interconnected

  20. Automatic Segmentation and Online virtualCT in Head-and-Neck Adaptive Radiation Therapy

    International Nuclear Information System (INIS)

    Purpose: The purpose of this work was to develop and validate an efficient and automatic strategy to generate online virtual computed tomography (CT) scans for adaptive radiation therapy (ART) in head-and-neck (HN) cancer treatment. Methods: We retrospectively analyzed 20 patients treated with intensity modulated radiation therapy (IMRT) for an HN malignancy. Different anatomical structures were considered: the mandible, parotid glands, and nodal gross tumor volume (nGTV). We generated 28 virtualCT scans by means of nonrigid registration of the simulation computed tomography (CTsim) and cone beam CT images (CBCTs) acquired for patient setup. We validated our approach by considering the real replanning CT (CTrepl) as ground truth. We computed the Dice coefficient (DSC), center of mass (COM) distance, and root mean square error (RMSE) between corresponding points located on the automatically segmented structures on CBCT and virtualCT. Results: Residual deformation between CTrepl and CBCT was below one voxel. Median DSC was around 0.8 for the mandible and parotid glands, but only 0.55 for nGTV because of the fairly homogeneous surrounding soft tissues and its small volume. Median COM distance and RMSE were comparable with the image resolution. No significant correlation between RMSE and initial or final deformation was found. Conclusion: The analysis provides evidence that deformable image registration may contribute significantly to reducing the need for full CT-based replanning in HN radiation therapy by supporting swift and objective decision-making in clinical practice. Further work is needed to strengthen the algorithm's potential in nGTV localization.
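The RMSE between corresponding points used above is simply the root of the mean squared point-to-point distance. A minimal sketch on hypothetical corresponding contour points, not the study's data:

```python
from math import dist, sqrt

def rmse(points_a, points_b):
    """Root mean square error between corresponding 3D points."""
    squared = [dist(p, q) ** 2 for p, q in zip(points_a, points_b)]
    return sqrt(sum(squared) / len(squared))

cbct_points      = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]  # hypothetical CBCT contour
virtualct_points = [(0, 0, 1), (1, 0, 1), (0, 2, 1)]  # matching virtualCT contour
error = rmse(cbct_points, virtualct_points)  # each pair offset by exactly 1 unit
```

An RMSE at or below the voxel size, as reported above, indicates that residual surface error is within the image resolution.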