WorldWideScience

Sample records for automatically generated anatomically

  1. Automatic anatomical segmentation of the liver by separation planes

    OpenAIRE

    Boltcheva, Dobrina; Passat, Nicolas; Agnus, Vincent; Jacob-Da Col, Marie-Andrée; Ronse, Christian; Soler, Luc

    2006-01-01

    Surgical planning in oncological liver surgery is based on the location of the 8 anatomical segments according to Couinaud’s definition and of the tumors inside these structures. The detection of the boundaries between the segments is then the first step of the preoperative planning. The proposed method, devoted to binary images of livers segmented from CT-scans, has been designed to delineate these segments. It automatically detects a set of landmarks using a priori anatomic...

  2. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael

    2014-05-01

    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
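
    A minimal, hypothetical sketch of the energy-minimization idea above (not the authors' implementation): lens positions are scored by their distance to the corresponding map regions plus penalties for the lens count and for out-of-order numbering, then improved by random hill climbing. All names, weights and the optimizer choice are illustrative assumptions.

```python
# Hedged sketch: a multi-goal layout energy in the spirit of the brochure paper.
import random
from math import hypot

def energy(lenses, regions, w_dist=1.0, w_count=0.5, w_order=0.2):
    """lenses: list of (x, y) positions; regions: matching (x, y) map anchors."""
    dist = sum(hypot(lx - rx, ly - ry)
               for (lx, ly), (rx, ry) in zip(lenses, regions))
    # Penalize out-of-order numbering along the page (left-to-right reading).
    order = sum(1 for a, b in zip(lenses, lenses[1:]) if a[0] > b[0])
    # The count term is constant here; in the real system lenses can merge,
    # making the number of lenses a genuine optimization variable.
    return w_dist * dist + w_count * len(lenses) + w_order * order

def hill_climb(lenses, regions, steps=2000, step_size=5.0):
    best, best_e = list(lenses), energy(lenses, regions)
    for _ in range(steps):
        cand = list(best)
        i = random.randrange(len(cand))
        x, y = cand[i]
        cand[i] = (x + random.uniform(-step_size, step_size),
                   y + random.uniform(-step_size, step_size))
        e = energy(cand, regions)
        if e < best_e:          # keep only improving moves
            best, best_e = cand, e
    return best, best_e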

  3. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation

    DEFF Research Database (Denmark)

    Mangado Lopez, Nerea; Ceresa, Mario; Duchateau, Nicolas

    2016-01-01

    To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns…

  4. Deformable meshes for medical image segmentation accurate automatic segmentation of anatomical structures

    CERN Document Server

    Kainmueller, Dagmar

    2014-01-01

    Segmentation of anatomical structures in medical image data is an essential task in clinical practice. Dagmar Kainmueller introduces methods for accurate fully automatic segmentation of anatomical structures in 3D medical image data. The author's core methodological contribution is a novel deformation model that overcomes limitations of state-of-the-art Deformable Surface approaches, hence allowing for accurate segmentation of tip- and ridge-shaped features of anatomical structures. As for practical contributions, she proposes application-specific segmentation pipelines for a range of anatom

  5. Automatic anatomical structures location based on dynamic shape measurement

    Science.gov (United States)

    Witkowski, Marcin; Rapp, Walter; Sitnik, Robert; Kujawinska, Malgorzata; Vander Sloten, Jos; Haex, Bart; Bogaert, Nico; Heitmann, Kjell

    2005-09-01

    New image processing methods and active photonics apparatus have made possible the development of relatively inexpensive optical systems for complex shape and object measurements. We present a dynamic 360° scanning method for the analysis of human lower body biomechanics, with an emphasis on the knee joint. The anatomical structure of high medical interest that can be scanned and analyzed is the patella. Tracking patella position and orientation under dynamic conditions may help detect pathological patella movements and support knee joint disease diagnosis. The processed data are obtained from a dynamic laser triangulation surface measurement system able to capture slow to normal movements with a scan frequency between 15 and 30 Hz. These frequency rates are sufficient to capture controlled movements used, e.g., for medical examination purposes. The purpose of the work presented is to develop surface analysis methods that may support the diagnosis of motor abilities of the lower limbs. The paper presents algorithms used to process acquired lower limb surface data in order to find the position and orientation of the patella. The algorithms implemented include input data preparation, curvature description methods, knee region discrimination and calculation of the assumed patella position/orientation. Additionally, a method of 4D (3D + time) medical data visualization is proposed, and some exemplary results are presented.

  6. Semi-Automatic Anatomical Tree Matching for Landmark-Based Elastic Registration of Liver Volumes

    Directory of Open Access Journals (Sweden)

    Klaus Drechsler

    2010-01-01

    One promising approach to registering liver volume acquisitions is based on the branching points of the vessel trees as anatomical landmarks inherently available in the liver. Automated tree matching algorithms have been proposed to automatically find pair-wise correspondences between two vessel trees. However, to the best of our knowledge, none of the existing automatic methods is completely error free. After a review of current literature and methodologies on the topic, we propose an efficient interaction method that can be employed to support tree matching algorithms with important pre-selected correspondences, or after an automatic matching to manually correct wrongly matched nodes. We used this method in combination with a promising automatic tree matching algorithm, also presented in this work. The proposed method was evaluated by 4 participants on a CT dataset from which we derived multiple artificial datasets.
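
    For intuition, branch-point correspondence can be cast as an assignment problem. The sketch below is a generic stand-in for the matching algorithms discussed (not the authors' method): bifurcations are paired by geometric feature distance via the Hungarian algorithm, and the choice of features is an assumption.

```python
# Hedged sketch: vessel-tree branch-point matching as linear assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_branch_points(feats_a, feats_b):
    """feats_*: (n, d) arrays, e.g. [x, y, z, branch radius, depth in tree]."""
    cost = cdist(feats_a, feats_b)            # pairwise feature distances
    rows, cols = linear_sum_assignment(cost)  # globally optimal 1-to-1 match
    return list(zip(rows, cols)), cost[rows, cols].sum()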

  7. The automatic electromagnetic field generating system

    Science.gov (United States)

    Audone, B.; Gerbi, G.

    1982-07-01

    The technical study and the design approaches adopted for the definition of the automatic electromagnetic field generating system (AEFGS) dedicated to EMC susceptibility testing are presented. The AEFGS covers the frequency range 10 kHz to 40 GHz and operates successfully in the two EMC shielded chambers at ESTEC. The performance of the generator/amplifier subsystems, antenna selection, and the field amplitude, susceptibility feedback and monitoring systems is described. System control modes which guarantee full AEFGS operability under different test conditions are discussed. Advantages of automating susceptibility testing include increased measurement accuracy and reduced testing cost.

  8. Automatic generation of combinatorial test data

    CERN Document Server

    Zhang, Jian; Ma, Feifei

    2014-01-01

    This book reviews the state-of-the-art in combinatorial testing, with particular emphasis on the automatic generation of test data. It describes the most commonly used approaches in this area - including algebraic construction, greedy methods, evolutionary computation, constraint solving and optimization - and explains major algorithms with examples. In addition, the book lists a number of test generation tools, as well as benchmarks and applications. Addressing a multidisciplinary topic, it will be of particular interest to researchers and professionals in the areas of software testing, combi
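
    A classic greedy construction from this literature is easy to show concretely. The sketch below builds a pairwise (2-way) covering test set by repeatedly picking the assignment that covers the most still-uncovered value pairs; it enumerates all assignments per step, so it is a small-scale illustration rather than an optimized tool.

```python
# Hedged sketch: greedy pairwise (2-way) combinatorial test generation.
from itertools import combinations, product

def pairwise_tests(domains):
    """domains: list of value lists, one per parameter."""
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(len(domains)), 2)
                 for a in domains[i] for b in domains[j]}
    tests = []
    while uncovered:
        # Pick the full assignment covering the most still-uncovered pairs.
        best = max(product(*domains),
                   key=lambda t: sum(((i, t[i]), (j, t[j])) in uncovered
                                     for i, j in combinations(range(len(t)), 2)))
        tests.append(best)
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(len(best)), 2)}
    return tests

print(pairwise_tests([[0, 1], ['x', 'y'], [True, False]]))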

  9. Automatic code generation for distributed robotic systems

    International Nuclear Information System (INIS)

    Jones, J.P.

    1993-01-01

    Hetero Helix is a software environment which supports relatively large robotic system development projects. The environment supports a heterogeneous set of message-passing LAN-connected common-bus multiprocessors, but the programming model seen by software developers is a simple shared memory. The conceptual simplicity of shared memory makes it an extremely attractive programming model, especially in large projects where coordinating a large number of people can itself become a significant source of complexity. We present results from three system development efforts conducted at Oak Ridge National Laboratory over the past several years. Each of these efforts used automatic software generation to create 10 to 20 percent of the system

  10. First performance evaluation of software for automatic segmentation, labeling and reformation of anatomical aligned axial images of the thoracolumbar spine at CT

    Energy Technology Data Exchange (ETDEWEB)

    Scholtz, Jan-Erik, E-mail: janerikscholtz@gmail.com; Wichmann, Julian L.; Kaup, Moritz; Fischer, Sebastian; Kerl, J. Matthias; Lehnert, Thomas; Vogl, Thomas J.; Bauer, Ralf W.

    2015-03-15

    Highlights: •Automatic segmentation and labeling of the thoracolumbar spine. •Automatically generated double-angulated and aligned axial images of spine segments. •High degree of accuracy in the symmetric depiction of anatomical structures. •Time-saving and may improve workflow in daily practice. -- Abstract: Objectives: To evaluate software for automatic segmentation, labeling and reformation of anatomically aligned axial images of the thoracolumbar spine on CT in terms of accuracy, potential for time savings and workflow improvement. Material and methods: 77 patients (28 women, 49 men, mean age 65.3 ± 14.4 years) with known or suspected spinal disorders (degenerative spine disease n = 32; disc herniation n = 36; traumatic vertebral fractures n = 9) underwent 64-slice MDCT with thin-slab reconstruction. Time for automatic labeling of the thoracolumbar spine and reconstruction of double-angulated axial images of the pathological vertebrae was compared with manually performed reconstruction of anatomically aligned axial images. Reformatted images from both reconstruction methods were assessed by two observers regarding the accuracy of symmetric depiction of anatomical structures. Results: In 33 cases double-angulated axial images were created for 1 vertebra, in 28 cases for 2 vertebrae and in 16 cases for 3 vertebrae. Correct automatic labeling was achieved in 72 of 77 patients (93.5%). Errors could be manually corrected in 4 cases. Automatic labeling required 1 min on average. In cases where anatomically aligned axial images of 1 vertebra were created, reconstructions made by hand were significantly faster (p < 0.05). Automatic reconstruction was time-saving in cases of 2 and more vertebrae (p < 0.05). Both reconstruction methods revealed good image quality with excellent inter-observer agreement. Conclusion: The evaluated software for automatic labeling and anatomically aligned, double-angulated axial image reconstruction of the thoracolumbar spine on CT is time-saving and may improve workflow in daily practice.

  11. An automatic system for segmentation, matching, anatomical labeling and measurement of airways from CT images

    DEFF Research Database (Denmark)

    Petersen, Jens; Feragen, Aasa; Owen, Megan

    Purpose: Assessing airway dimensions and attenuation from CT images is useful in the study of diseases affecting the airways such as Chronic Obstructive Pulmonary Disease (COPD). Measurements can be compared between patients and over time if specific airway segments can be identified. However, manually finding these segments and performing such measurements is very time consuming. The purpose of the developed and validated system is to enable such measurements using automatic segmentations of the airway interior and exterior wall surfaces in three dimensions, anatomical branch labeling of all segmental branches, and longitudinal matching of airway branches in repeated scans of the same subject. Deformable image registration is used to match specific airway segments in multiple images of the same subject. The anatomical names of all segmental branches are assigned based on distances to a training set of expert labeled trees. Distances are measured in a geometric tree-space, incorporating both topology and centerline shape…

  12. Automatic Testcase Generation for Flight Software

    Science.gov (United States)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to
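
    The blackbox idea, generating every legal input from a grammar up to a bound, can be illustrated in a few lines. Since the SCL grammar is not reproduced here, the sketch below uses an invented command grammar; the exhaustive expansion strategy itself is generic.

```python
# Hedged sketch: exhaustively expand a toy grammar up to a size bound,
# enumerating all legal inputs (a hypothetical grammar, not SCL's).
GRAMMAR = {
    "<script>": [["<cmd>"], ["<cmd>", ";", "<script>"]],
    "<cmd>":    [["set", "<var>", "<val>"], ["get", "<var>"]],
    "<var>":    [["mode"], ["rate"]],
    "<val>":    [["0"], ["1"]],
}

def expand(symbols, max_len):
    """Yield all terminal strings derivable from `symbols`, bounded in length."""
    if len(symbols) > max_len:
        return
    for i, s in enumerate(symbols):
        if s in GRAMMAR:  # first nonterminal: expand it every possible way
            for rule in GRAMMAR[s]:
                yield from expand(symbols[:i] + rule + symbols[i+1:], max_len)
            return
    yield " ".join(symbols)  # all terminals: a complete legal input

for script in expand(["<script>"], max_len=7):
    print(script)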

  13. A System for Automatically Generating Scheduling Heuristics

    Science.gov (United States)

    Morris, Robert

    1996-01-01

    The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm that automatically generates heuristics for selecting a schedule. The particular application selected for applying this method is the problem of scheduling telescope observations, handled by a system called the Associate Principal Astronomer (APA). The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints expressed as an objective function established by an astronomer-user.
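
    As a rough sketch of this greedy scheme (the APA's actual heuristics are not reproduced here), hard constraints act as a feasibility filter and soft constraints as a scoring function; `feasible` and `score` below are assumed, caller-supplied callables.

```python
# Hedged sketch: greedy scheduling with hard-constraint filtering and a
# soft-constraint objective, in the spirit of the description above.
def greedy_schedule(requests, slots, feasible, score):
    """requests: observation requests; slots: available times.
    feasible(req, slot, schedule) -> bool enforces hard constraints;
    score(req, slot) -> float expresses soft preferences."""
    schedule = {}
    for req in requests:
        candidates = [s for s in slots
                      if s not in schedule.values()      # one request per slot
                      and feasible(req, s, schedule)]
        if candidates:
            schedule[req] = max(candidates, key=lambda s: score(req, s))
    return schedule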

  14. Automatic Generation of Validated Specific Epitope Sets

    Directory of Open Access Journals (Sweden)

    Sebastian Carrasco Pro

    2015-01-01

    Accurate measurement of B and T cell responses is a valuable tool to study autoimmunity, allergies, immunity to pathogens, and host-pathogen interactions, and to assist in the design and evaluation of T cell vaccines and immunotherapies. In this context, it is desirable to elucidate a method to select validated reference sets of epitopes to allow detection of T and B cells. However, the ever-growing information contained in the Immune Epitope Database (IEDB) and the differences in quality and subjects studied between epitope assays make this task complicated. In this study, we develop a novel method to automatically select reference epitope sets according to a categorization system employed by the IEDB. From the sets generated, three epitope sets (EBV, mycobacteria and dengue) were experimentally validated by detection of T cell reactivity ex vivo in human donors. Furthermore, a web application that will potentially be implemented in the IEDB was created to allow users to generate customized epitope sets.

  15. Automatic Generation of Minimal Cut Sets

    Directory of Open Access Journals (Sweden)

    Sentot Kromodimoeljo

    2015-06-01

    A cut set is a collection of component failure modes that could lead to a system failure. Cut Set Analysis (CSA) is applied to critical systems to identify and rank system vulnerabilities at design time. Model checking tools have been used to automate the generation of minimal cut sets but are generally based on checking reachability of system failure states. This paper describes a new approach to CSA using a Linear Temporal Logic (LTL) model checker called BT Analyser that supports the generation of multiple counterexamples. The approach enables a broader class of system failures to be analysed, by generalising from failure state formulae to failure behaviours expressed in LTL. The traditional approach to CSA using model checking requires the model or system failure specification to be modified, usually by hand, to eliminate already-discovered cut sets, and the model checker to be rerun at each step. By contrast, the new approach works incrementally and fully automatically, thereby removing the tedious and error-prone manual process and resulting in significantly reduced computation time. This in turn enables larger models to be checked. Two different strategies for using BT Analyser for CSA are presented. There is generally no single best strategy for model checking: their relative efficiency depends on the model and property being analysed. Comparative results are given for the A320 hydraulics case study in the Behavior Tree modelling language.
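
    As a point of reference for what CSA computes, the brute-force definition of minimal cut sets can be made executable for small systems; the paper's contribution is avoiding exactly this enumeration via incremental LTL model checking. The pump/valve example is invented.

```python
# Reference (brute-force) minimal cut set enumeration for small systems.
from itertools import combinations

def minimal_cut_sets(components, system_fails):
    """system_fails(frozenset of failed components) -> bool, assumed monotone."""
    minimal = []
    for k in range(1, len(components) + 1):
        for cut in combinations(components, k):
            cut = frozenset(cut)
            # Keep only cuts that fail the system and contain no smaller cut.
            if system_fails(cut) and not any(m <= cut for m in minimal):
                minimal.append(cut)
    return minimal

# Example: system fails if pump P fails, or both valves V1 and V2 fail.
fails = lambda s: "P" in s or {"V1", "V2"} <= s
print(minimal_cut_sets(["P", "V1", "V2"], fails))
# -> [frozenset({'P'}), frozenset({'V1', 'V2'})]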

  16. Automatic generation of digital anthropomorphic phantoms from simulated MRI acquisitions

    Science.gov (United States)

    Lindsay, C.; Gennert, M. A.; Könik, A.; Dasari, P. K.; King, M. A.

    2013-03-01

    In SPECT imaging, motion from patient respiration and body motion can introduce image artifacts that may reduce the diagnostic quality of the images. Simulation studies using numerical phantoms with precisely known motion can help to develop and evaluate motion correction algorithms. Previous methods for evaluating motion correction algorithms used either manual or semi-automated segmentation of MRI studies to produce patient models in the form of XCAT Phantoms, from which one calculates the transformation and deformation between MRI study and patient model. Both manual and semi-automated methods of XCAT Phantom generation require expertise in human anatomy, with the semiautomated method requiring up to 30 minutes and the manual method requiring up to eight hours. Although faster than manual segmentation, the semi-automated method still requires a significant amount of time, is not replicable, and is subject to errors due to the difficulty of aligning and deforming anatomical shapes in 3D. We propose a new method for matching patient models to MRI that extends the previous semi-automated method by eliminating the manual non-rigid transformation. Our method requires no user supervision and therefore does not require expert knowledge of human anatomy to align the NURBs to anatomical structures in the MR image. Our contribution is employing the SIMRI MRI simulator to convert the XCAT NURBs to a voxel-based representation that is amenable to automatic non-rigid registration. Then registration is used to transform and deform the NURBs to match the anatomy in the MR image. We show that our automated method generates XCAT Phantoms more robustly and significantly faster than the previous semi-automated method.

  17. Automatic program generation: future of software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, J.H.

    1979-01-01

    At this moment software development is still more of an art than an engineering discipline. Each piece of software is lovingly engineered, nurtured, and presented to the world as a tribute to the writer's skill. When will this change? When will the craftsmanship be removed and the programs be turned out like so many automobiles from an assembly line? Sooner or later it will happen: economic necessities will demand it. With the advent of cheap microcomputers and ever more powerful supercomputers doubling capacity, much more software must be produced. The choices are to double the number of programmers, double the efficiency of each programmer, or find a way to produce the needed software automatically. Producing software automatically is the only logical choice. How will automatic programming come about? Some of the preliminary actions which need to be done, and are being done, are to encourage programmer plagiarism of existing software through public library mechanisms, produce well understood packages such as compilers automatically, develop languages capable of producing software as output, and learn enough about the whole process of programming to be able to automate it. Clearly, the emphasis must not be on efficiency or size, since ever larger and faster hardware is coming.

  18. Automatic generation control of interconnected power system with ...

    African Journals Online (AJOL)

    In this paper, automatic generation control (AGC) of two area interconnected power system having diverse sources of power generation is studied. A two area power system comprises power generations from hydro, thermal and gas sources in area-1 and power generations from hydro and thermal sources in area-2. All the ...

  19. System for Automatic Generation of Examination Papers in Discrete Mathematics

    Science.gov (United States)

    Fridenfalk, Mikael

    2013-01-01

    A system was developed for automatic generation of problems and solutions for examinations in a university distance course in discrete mathematics and tested in a pilot experiment involving 200 students. Considering the success of such systems in the past, particularly including automatic assessment, it should not take long before such systems are…

  20. Next Generation Model 8800 Automatic TLD Reader

    International Nuclear Information System (INIS)

    Velbeck, K.J.; Streetz, K.L.; Rotunda, J.E.

    1999-01-01

    BICRON NE has developed an advanced version of the Model 8800 Automatic TLD Reader. Improvements in the reader include a Windows NT™-based operating system and a Pentium microprocessor for the host controller, a servo-controlled transport, a VGA display, mouse control, and modular assembly. This high capacity reader will automatically read fourteen hundred TLD Cards in one loading. Up to four elements in a card can be heated without mechanical contact, using hot nitrogen gas. Improvements in performance include an increased throughput rate and more precise card positioning. Operation is simplified through easy-to-read Windows-type screens. Glow curves are displayed graphically along with light intensity, temperature, and channel scaling. Maintenance and diagnostic aids are included for easier troubleshooting. A click of a mouse will command actions that are displayed in easy-to-understand English words. Available options include an internal 90Sr irradiator, automatic TLD calibration, and two different extremity monitoring modes. Results from testing include reproducibility, reader stability, linearity, detection threshold, residue, primary power supply voltage and frequency, transient voltage, drop testing, and light leakage. (author)

  1. Automatic Structure-Based Code Generation from Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Westergaard, Michael

    2010-01-01

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs… The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF).

  2. Automatic recognition of surface landmarks of anatomical structures of back and posture

    Science.gov (United States)

    Michoński, Jakub; Glinkowski, Wojciech; Witkowski, Marcin; Sitnik, Robert

    2012-05-01

    Faulty postures, scoliosis and sagittal plane deformities should be detected as early as possible to apply preventive and treatment measures against major clinical consequences. To support documentation of the severity of deformity and diminish x-ray exposures, several solutions utilizing analysis of back surface topography data were introduced. A novel approach to automatic recognition and localization of anatomical landmarks of the human back is presented that may provide more repeatable results and speed up the whole procedure. The algorithm was designed as a two-step process involving a statistical model built upon expert knowledge and analysis of three-dimensional back surface shape data. Voronoi diagram is used to connect mean geometric relations, which provide a first approximation of the positions, with surface curvature distribution, which further guides the recognition process and gives final locations of landmarks. Positions obtained using the developed algorithms are validated with respect to accuracy of manual landmark indication by experts. Preliminary validation proved that the landmarks were localized correctly, with accuracy depending mostly on the characteristics of a given structure. It was concluded that recognition should mainly take into account the shape of the back surface, putting as little emphasis on the statistical approximation as possible.

  3. Generation of anatomically realistic numerical phantoms for optoacoustic breast imaging

    Science.gov (United States)

    Lou, Yang; Mitsuhashi, Kenji; Appleton, Catherine M.; Oraevsky, Alexander; Anastasio, Mark A.

    2016-03-01

    Because optoacoustic tomography (OAT) can provide functional information based on hemoglobin contrast, it is a promising imaging modality for breast cancer diagnosis. Developing an effective OAT breast imaging system requires balancing multiple design constraints, which can be expensive and time-consuming. Therefore, computer-simulation studies are often conducted to facilitate this task. However, most existing computer-simulation studies of OAT breast imaging employ simple phantoms such as spheres or cylinders that over-simplify the complex anatomical structures in breasts, thus limiting the value of these studies in guiding real-world system design. In this work, we propose a method to generate realistic numerical breast phantoms for OAT research based on clinical magnetic resonance imaging (MRI) data. The phantoms include a skin layer that defines the breast-air boundary, major vessel branches that affect light absorption in the breast, and fatty tissue and fibroglandular tissue whose acoustic heterogeneity perturbs acoustic wave propagation. By assigning realistic optical and acoustic parameters to different tissue types, we establish both optical and acoustic breast phantoms, which will be exported into standard data formats for cross-platform usage.

  4. Automatic Tamil lyric generation based on ontological interpretation ...

    Indian Academy of Sciences (India)

    This system proposes an N-gram based approach to automatic Tamil lyric generation, by the ontological semantic interpretation of the input scene. The approach is based on identifying the semantics conveyed in the scenario, thereby making the system understand the situation and generate lyrics accordingly. The heart of ...
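
    A plain n-gram generator, without the ontological scene interpretation that is the paper's actual contribution, looks like this minimal bigram sketch; the training lines are invented.

```python
# Hedged sketch: bigram (n=2) text generation only; the paper additionally
# conditions generation on an ontological interpretation of the input scene.
import random
from collections import defaultdict

def train_bigrams(corpus_lines):
    model = defaultdict(list)
    for line in corpus_lines:
        words = ["<s>"] + line.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            model[a].append(b)  # duplicates preserve successor frequency
    return model

def generate(model, max_words=12):
    word, out = "<s>", []
    while len(out) < max_words:
        word = random.choice(model[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

model = train_bigrams(["the moon sails over the sea",
                       "the sea sings to the moon"])
print(generate(model))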

  5. Automatic Tamil lyric generation based on ontological interpretation ...

    Indian Academy of Sciences (India)

    This system proposes an N-gram based approach to automatic Tamil lyric generation, by the ontological semantic interpretation of the input scene. The approach is based on identifying the semantics conveyed in the scenario, thereby making the system understand the situation and generate lyrics accordingly.

  6. Automatic generation of matter-of-opinion video documentaries

    NARCIS (Netherlands)

    S. Bocconi; F.-M. Nack (Frank); L. Hardman (Lynda)

    2008-01-01

    In this paper we describe a model for automatically generating video documentaries. This allows viewers to specify the subject and the point of view of the documentary to be generated. The domain is matter-of-opinion documentaries based on interviews. The model combines rhetorical

  7. Formal Specification Based Automatic Test Generation for Embedded Network Systems

    Directory of Open Access Journals (Sweden)

    Eun Hye Choi

    2014-01-01

    Embedded systems have become increasingly connected and communicate with each other, forming large-scale and complicated network systems. To make their design and testing more reliable and robust, this paper proposes a formal specification language called SENS and a SENS-based automatic test generation tool called TGSENS. Our approach is summarized as follows: (1) A user describes requirements of target embedded network systems by logical property-based constraints using SENS. (2) Given SENS specifications, test cases are automatically generated using a SAT-based solver. Filtering mechanisms to select efficient test cases are also available in our tool. (3) In addition, given a testing goal by the user, test sequences are automatically extracted from exhaustive test cases. We've implemented our approach and conducted several experiments on practical case studies. Through the experiments, we confirmed the efficiency of our approach in the design and test generation of real embedded air-conditioning network systems.
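
    TGSENS relies on a SAT solver; as a self-contained stand-in, the sketch below brute-forces all assignments satisfying some Boolean constraints, each satisfying assignment becoming one test case. The air-conditioning-style constraints are invented for illustration.

```python
# Hedged sketch: brute-force stand-in for SAT-based test case generation.
from itertools import product

def generate_tests(variables, constraints):
    """constraints: callables mapping an assignment dict to bool."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            yield assignment  # each satisfying assignment is one test case

# Hypothetical requirement: heater and cooler never run together,
# and the fan must run whenever either of them does.
tests = generate_tests(
    ["heat", "cool", "fan"],
    [lambda a: not (a["heat"] and a["cool"]),
     lambda a: a["fan"] or not (a["heat"] or a["cool"])])
for t in tests:
    print(t)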

  8. Procedure for the automatic mesh generation of innovative gear teeth

    Directory of Open Access Journals (Sweden)

    Radicella Andrea Chiaramonte

    2016-01-01

    After having described gear wheels with teeth having the two sides constituted by different involutes and their importance in engineering applications, we stress the need for an efficient procedure for the automatic mesh generation of innovative gear teeth. First, we describe the procedure for the subdivision of the tooth profile in the various possible cases, then we show the method for creating the subdivision mesh, defined by two series of curves called meridians and parallels. Finally, we describe how the above procedure for automatic mesh generation is able to solve specific cases that may arise when dealing with teeth having the two sides constituted by different involutes.

  9. Automatic Generation of Network Protocol Gateways

    DEFF Research Database (Denmark)

    Bromberg, Yérom-David; Réveillère, Laurent; Lawall, Julia

    2009-01-01

    The emergence of networked devices in the home has made it possible to develop applications that control a variety of household functions. However, current devices communicate via a multitude of incompatible protocols, and thus gateways are needed to translate between them. Gateway construction, however, requires an intimate knowledge of the relevant protocols and a substantial understanding of low-level network programming, which can be a challenge for many application programmers. This paper presents a generative approach to gateway construction, z2z, based on a domain-specific language for describing protocol behaviors, message structures, and the gateway logic. Z2z includes a compiler that checks essential correctness properties and produces efficient code. We have used z2z to develop a number of gateways, including SIP to RTSP, SLP to UPnP, and SMTP to SMTP via HTTP, involving a range…

  10. Preliminary study of automatic detection method for anatomical landmarks in body trunk CT images

    International Nuclear Information System (INIS)

    Nemoto, Mitsutaka; Nomura, Yukihiro; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni; Hanaoka, Shouhei

    2010-01-01

    In the research field of medical image processing and analysis, it is important to develop medical image understanding methods which are robust to individual and case differences, since these often interfere with accurate medical image processing and analysis. Locating anatomical landmarks, which are localized regions with anatomical reference to the human body, allows for robust medical image understanding since the relative positions of anatomical landmarks are basically the same among cases. This is a preliminary study for detecting anatomical point landmarks by using a technique of local area model matching. The model for the matching process, which is called an appearance model, captures the spatial appearance of voxel values at the detection target landmark and its surrounding region, and Principal Component Analysis (PCA) is used to train the appearance models. In this study, we experimentally investigate the optimal appearance model for landmark detection and analyze the detection accuracy of anatomical point landmarks. (author)
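
    The appearance-model idea can be sketched with plain PCA: learn a subspace from training patches around the landmark, then score candidate locations by reconstruction error. This is a generic rendering of the technique under assumed inputs, not the authors' exact model.

```python
# Hedged sketch: PCA appearance model for landmark candidate scoring.
import numpy as np

def train_appearance_model(patches, n_components=5):
    """patches: list of equally-shaped arrays sampled around the landmark."""
    X = np.asarray([p.ravel() for p in patches], dtype=float)
    mean = X.mean(axis=0)
    # Principal axes from SVD of the centered training patches.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def match_score(patch, mean, components):
    x = patch.ravel().astype(float) - mean
    recon = components.T @ (components @ x)   # project onto PCA subspace
    return np.linalg.norm(x - recon)          # low error = good candidate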

  11. Towards Automatic Personalized Content Generation for Platform Games

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2010-01-01

    In this paper, we show that personalized levels can be automatically generated for platform games. We build on previous work, where models were derived that predicted player experience based on features of level design and on playing styles. These models are constructed using preference learning,...

  12. Automatic Generation of Map-Based Interface for VRML Contents

    Science.gov (United States)

    Araya, Shinji; Suzaki, Kenichi; Miyake, Yoshihiro

    The paper proposes a Web page that can automatically generate a map-based interface for any VRML content on the Web. This new approach reduces map development costs and provides a common interface to the users. 3D content reconstruction is distributed among the client computers to guarantee Web service efficiency.

  13. Design dependencies within the automatic generation of hypermedia presentations

    NARCIS (Netherlands)

    O. Rosell Martinez

    2002-01-01

    Many dependencies appear between the different stages of the creation of a hypermedia presentation. These dependencies have to be taken into account while designing a system for their automatic generation. In this work we study two of them and propose some techniques to treat them.

  14. A quick scan on possibilities for automatic metadata generation

    NARCIS (Netherlands)

    Benneker, Frank

    2006-01-01

    The Quick Scan is a report on research into useable solutions for automatic generation of metadata or parts of metadata. The aim of this study is to explore possibilities for facilitating the process of attaching metadata to learning objects. This document is aimed at developers of digital learning

  15. Automatic Definition Extraction and Crossword Generation From Spanish News Text

    Directory of Open Access Journals (Sweden)

    Jennifer Esteche

    2017-08-01

    This paper describes the design and implementation of a system that takes Spanish texts and generates crosswords (board and definitions) in a fully automatic way, using definitions extracted from those texts. Our solution divides the problem in two parts: a definition extraction module that applies pattern matching, implemented in Python, and a crossword generation module that uses a greedy strategy, implemented in Prolog. The system achieves 73% precision and builds crosswords similar to those built by humans.
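
    A minimal version of the pattern-matching stage might look like the following; the regular expression is a toy English pattern, whereas the actual system targets Spanish news text.

```python
# Hedged sketch: definition extraction by pattern matching (toy pattern).
import re

DEF_PATTERN = re.compile(
    r"\b(?P<term>[A-Z][a-z]+) is (?:a|an|the) (?P<definition>[^.]+)\.")

def extract_definitions(text):
    return {m.group("term"): m.group("definition")
            for m in DEF_PATTERN.finditer(text)}

text = "Montevideo is the capital of Uruguay. Mate is a traditional drink."
print(extract_definitions(text))
# -> {'Montevideo': 'capital of Uruguay', 'Mate': 'traditional drink'}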

  16. Automatic control system generation for robot design validation

    Science.gov (United States)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

    The specification and drawings present a new method, system, software product, and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting the robot design into a generic robotic description using a predetermined format, then generating a control system from the generic robotic description, and finally updating robot design parameters of the robotic system with an analysis tool using both the generic robot description and the control system.

  17. Automatic Performance Model Generation for Java Enterprise Edition (EE) Applications

    OpenAIRE

    Brunnert, Andreas; Vögele, Christian; Krcmar, Helmut

    2015-01-01

    The effort required to create performance models for enterprise applications is often out of proportion compared to their benefits. This work aims to reduce this effort by introducing an approach to automatically generate component-based performance models for running Java EE applications. The approach is applicable for all Java EE server products as it relies on standardized component types and interfaces to gather the required data for modeling an application. The feasibility of the approac...

  18. Developing an Automatic Generation Tool for Cryptographic Pairing Functions

    OpenAIRE

    Dominguez Perez, Luis Julian

    2011-01-01

    Pairing-Based Cryptography is receiving steadily more attention from industry, mainly because of the increasing interest in Identity-Based protocols. Although there are plenty of applications, efficiently implementing the pairing functions is often difficult as it requires more knowledge than previous cryptographic primitives. The author presents a tool for automatically generating optimized code for the pairing functions which can be used in the construction of such cryptograp...

  19. Automatic iterative segmentation of multiple sclerosis lesions using Student's t mixture models and probabilistic anatomical atlases in FLAIR images.

    Science.gov (United States)

    Freire, Paulo G L; Ferrari, Ricardo J

    2016-06-01

    Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has a good spatial and volumetric agreement with raters' manual delineations. Also, a comparison between our proposal and the state-of-the-art shows that our technique is comparable and, in some cases, better than some approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI. Copyright © 2016 Elsevier Ltd. All rights reserved.
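
    The reported metrics have standard definitions for binary masks; a small reference implementation follows (note that exact FPR conventions vary between papers, and the form below is one common choice, not necessarily the authors').

```python
# Reference implementations of segmentation agreement metrics for binary masks.
import numpy as np

def dice(seg, gt):
    """Dice Similarity Coefficient between segmentation and ground truth."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def tpr(seg, gt):
    """Fraction of true lesion voxels recovered by the segmentation."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return np.logical_and(seg, gt).sum() / gt.sum()

def fpr(seg, gt):
    """Fraction of segmented voxels that are not lesion (one common convention)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return np.logical_and(seg, ~gt).sum() / seg.sum()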

  20. Medial structure generation for registration of anatomical structures

    DEFF Research Database (Denmark)

    Vera, Sergio; Gil, Debora; Kjer, Hans Martin

    2017-01-01

    Medial structures (skeletons and medial manifolds) have shown capacity to describe shape in a compact way. In the field of medical imaging, they have been employed to enrich the description of organ anatomy, to improve segmentation, or to describe the organ position in relation to surrounding structures. Methods for generation of medial structures, however, are prone to the generation of medial artifacts (spurious branches) that traditionally need to be pruned before the medial structure can be used for further computations. The act of pruning can affect main sections of the medial surface, hindering its performance as a shape descriptor. In this work, we present a method for the computation of medial structures that generates smooth medial surfaces that do not need to be explicitly pruned. Additionally, we present a validation framework for medial surface evaluation. Finally, we apply…

  1. Automatic generation of pictorial transcripts of video programs

    Science.gov (United States)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.
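
    Content-based sampling can be reduced to its simplest form for illustration: keep a frame whenever the gray-level histogram changes sharply relative to the last key frame. The threshold and bin count below are arbitrary choices, not the authors' parameters.

```python
# Hedged sketch: key-frame selection by histogram change between frames.
import numpy as np

def key_frame_indices(frames, bins=32, threshold=0.25):
    """frames: iterable of 2-D grayscale arrays (8-bit assumed); returns indices."""
    keys, prev = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
        hist = hist / hist.sum()
        # Compare against the last *kept* frame, so gradual drift accumulates.
        if prev is None or 0.5 * np.abs(hist - prev).sum() > threshold:
            keys.append(i)      # scene change: sample this frame
            prev = hist
    return keys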

  2. Automatic generation of Fortran programs for algebraic simulation models

    International Nuclear Information System (INIS)

    Schopf, W.; Rexer, G.; Ruehle, R.

    1978-04-01

    This report documents a generator program by which econometric simulation models formulated in an application-orientated language can be transformed automatically into a Fortran program. Thus the model designer is able to build up, test and modify models without the need for a Fortran programmer. The development of a computer model is therefore simplified and shortened appreciably; in chapters 1-3 of this report all rules are presented for the application of the generator to model design. Algebraic models including exogenous and endogenous time series variables and lead and lag functions can be generated. In addition to these language elements, Fortran sequences can be applied to the formulation of models in the case of complex model interrelations. The generated model is automatically a module of the program system RSYST III and is therefore able to exchange input and output data with the central data bank of the system, and in connection with the method library modules it can be used to handle planning problems. (orig.)

  3. Automatic generation of executable communication specifications from parallel applications

    Energy Technology Data Exchange (ETDEWEB)

    Pakin, Scott [Los Alamos National Laboratory]; Wu, Xing [NCSU]; Mueller, Frank [NCSU]

    2011-01-19

    Portable parallel benchmarks are widely used and highly effective for (a) the evaluation, analysis and procurement of high-performance computing (HPC) systems and (b) quantifying the potential benefits of porting applications to new hardware platforms. Yet, past techniques based on synthetically parameterized, hand-coded HPC benchmarks prove insufficient for today's rapidly-evolving scientific codes, particularly when subject to multi-scale science modeling or when utilizing domain-specific libraries. To address these problems, this work contributes novel methods to automatically generate highly portable and customizable communication benchmarks from HPC applications. We utilize ScalaTrace, a lossless, yet scalable, parallel application tracing framework to collect selected aspects of the run-time behavior of HPC applications, including communication operations and execution time, while abstracting away the details of the computation proper. We subsequently generate benchmarks with identical run-time behavior from the collected traces. A unique feature of our approach is that we generate benchmarks in CONCEPTUAL, a domain-specific language that enables the expression of sophisticated communication patterns using a rich and easily understandable grammar yet compiles to ordinary C + MPI. Experimental results demonstrate that the generated benchmarks are able to preserve the run-time behavior - including both the communication pattern and the execution time - of the original applications. Such automated benchmark generation is particularly valuable for proprietary, export-controlled, or classified application codes: when supplied to a third party, our auto-generated benchmarks ensure performance fidelity but without the risks associated with releasing the original code. This ability to automatically generate performance-accurate benchmarks from parallel applications is novel and without any precedence, to our knowledge.

  4. Central Pattern Generator for Locomotion: Anatomical, Physiological and Pathophysiological Considerations

    Directory of Open Access Journals (Sweden)

    Pierre A. Guertin

    2013-02-01

    This article provides a perspective on major innovations over the past century in research on the spinal cord and, specifically, on specialized spinal circuits involved in the control of rhythmic locomotor pattern generation and modulation. Pioneers such as Charles Sherrington and Thomas Graham Brown conducted experiments in the early twentieth century that changed our views of the neural control of locomotion. Their seminal work, supported subsequently by several decades of evidence, has led to the conclusion that walking, flying and swimming are largely controlled by a network of spinal neurons generally referred to as the central pattern generator (CPG) for locomotion. It has been subsequently demonstrated across all vertebrate species examined, from lampreys to humans, that this CPG is capable, under some conditions, of self-producing, even in the absence of descending or peripheral inputs, basic rhythmic and coordinated locomotor movements. Recent evidence suggests, in turn, that plasticity changes of some CPG elements may contribute to the development of specific pathophysiological conditions associated with impaired locomotion or spontaneous locomotor-like movements. This article constitutes a comprehensive review summarizing key findings on the CPG as well as on its potential role in Restless Leg Syndrome (RLS), Periodic Leg Movement (PLM), and Alternating Leg Muscle Activation (ALMA). Special attention will be paid to the role of the CPG in a recently identified, and uniquely different neurological disorder, called the Uner Tan Syndrome.

  5. MODULEWRITER: a program for automatic generation of database interfaces.

    Science.gov (United States)

    Zheng, Christina L; Fana, Fariba; Udupi, Poornaprajna V; Gribskov, Michael

    2003-05-01

    MODULEWRITER is a Perl object relational mapping (ORM) tool that automatically generates database-specific application programming interfaces (APIs) for SQL databases. The APIs consist of a package of modules providing access to each table row and column. Methods for retrieving, updating and saving entries are provided, as well as other generally useful methods (such as retrieval of the highest numbered entry in a table). MODULEWRITER provides for the inclusion of user-written code, which can be preserved across multiple runs of the MODULEWRITER program.
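
    MODULEWRITER itself is Perl; the core idea, emitting an accessor class per table from a schema description, carries over directly. Below is a hypothetical Python rendering, with an invented table and columns, to make the code-generation step concrete.

```python
# Hedged sketch: generate per-table accessor classes from a schema description.
SCHEMA = {"gene": ["id", "symbol", "organism"]}   # hypothetical table

TEMPLATE = '''\
class {cls}:
    def __init__(self, {args}):
{assigns}

    def save(self, cursor):
        cursor.execute("INSERT INTO {table} ({cols}) VALUES ({qs})",
                       ({vals},))
'''

def generate_api(schema):
    out = []
    for table, cols in schema.items():
        out.append(TEMPLATE.format(
            cls=table.capitalize(), table=table,
            args=", ".join(cols),
            assigns="\n".join(f"        self.{c} = {c}" for c in cols),
            cols=", ".join(cols),
            qs=", ".join("?" for _ in cols),
            vals=", ".join(f"self.{c}" for c in cols)))
    return "\n".join(out)

print(generate_api(SCHEMA))   # emitted source can be written to a module file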

  6. Automatic generation of gene finders for eukaryotic species

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Krogh, A.

    2006-01-01

    Background: The number of sequenced eukaryotic genomes is rapidly increasing. This means that over time it will be hard to keep supplying customised gene finders for each genome. This calls for procedures to automatically generate species-specific gene finders and to re-train them as the quantity… length distributions. The performance of each individual gene predictor on each individual genome is comparable to the best of the manually optimised species-specific gene finders. It is shown that species-specific gene finders are superior to gene finders trained on other species.

  7. Anatomical database generation for radiation transport modeling from computed tomography (CT) scan data

    International Nuclear Information System (INIS)

    Margle, S.M.; Tinnel, E.P.; Till, L.E.; Eckerman, K.F.; Durfee, R.C.

    1989-01-01

    Geometric models of the anatomy are used routinely in calculations of the radiation dose in organs and tissues of the body. Development of such models has been hampered by lack of detailed anatomical information on children, and the models themselves have been limited to quadratic conic sections. This summary reviews the development of an image processing workstation used to extract anatomical information from routine diagnostic CT procedures. A standard IBM PC/AT microcomputer has been augmented with an automatically loading 9-track magnetic tape drive, an 8-bit 1024 x 1024 pixel graphics adapter/monitor/film recording package, a mouse/trackball assembly, dual 20 MB removable cartridge media, a 72 MB disk drive, and a printer. Software utilized by the workstation includes a Geographic Information System (modified for manipulation of CT images), CAD software, imaging software, and various modules to ease data transfer among the software packages. 5 refs., 3 figs

  8. Automatic anatomical calibration for IMU-based elbow angle measurement in disturbed magnetic fields

    Directory of Open Access Journals (Sweden)

    Daniel Laidig

    2017-09-01

    Inertial Measurement Units (IMUs) are increasingly used for human motion analysis. However, two major challenges remain: First, one must know precisely in which orientation the sensor is attached to the respective body segment. This is commonly achieved by accurate manual placement of the sensors or by letting the subject perform tedious calibration movements. Second, standard methods for inertial motion analysis rely on a homogeneous magnetic field, which is rarely found in indoor environments. To address both challenges, we introduce an automatic calibration method for joints with two degrees of freedom such as the combined radioulnar and elbow joint. While the user performs arbitrary movements, the method automatically identifies the sensor-to-segment orientations by exploiting the kinematic constraints of the joint. Simultaneously, the method identifies and compensates the influence of magnetic disturbances on the sensor orientation quaternions and the joint angles. In experimental trials, we obtain angles that agree well with reference values from optical motion capture. We conclude that the proposed method overcomes mounting and calibration restrictions and improves measurement accuracy in indoor environments. It therefore improves the practical usability of IMUs for many medical applications.
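
    The flavor of "calibration from arbitrary movements" can be sketched for the simpler one-DoF hinge case: recover a joint axis in each sensor frame from raw gyroscope rates using the kinematic constraint |ω₁ × j₁| = |ω₂ × j₂| (a Seel-style hinge constraint; the paper's 2-DoF, magnetometer-robust method is more involved). Parameterization and solver choice here are assumptions.

```python
# Hedged sketch: hinge-axis identification from gyroscope data via the
# kinematic constraint that both segments see the same hinge rotation rate.
import numpy as np
from scipy.optimize import least_squares

def axis(phi, theta):
    """Unit vector from spherical coordinates."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def identify_hinge_axes(gyr1, gyr2):
    """gyr1, gyr2: (n, 3) angular rates of the two segment sensors."""
    def residuals(p):
        j1, j2 = axis(*p[:2]), axis(*p[2:])
        return (np.linalg.norm(np.cross(gyr1, j1), axis=1)
                - np.linalg.norm(np.cross(gyr2, j2), axis=1))
    sol = least_squares(residuals, x0=[0.3, 0.3, 0.3, 0.3])
    return axis(*sol.x[:2]), axis(*sol.x[2:])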

  9. An automatic system for segmentation, matching, anatomical labeling and measurement of airways from CT images

    DEFF Research Database (Denmark)

    Petersen, Jens; Feragen, Aasa; Owen, Megan

    Purpose: Assessing airway dimensions and attenuation from CT images is useful in the study of diseases affecting the airways such as Chronic Obstructive Pulmonary Disease (COPD). Measurements can be compared between patients and over time if specific airway segments can be identified. However, manually finding these segments and performing such measurements is very time consuming. The purpose of the developed and validated system is to enable such measurements using automatic segmentations of the airway interior and exterior wall surfaces in three dimensions, anatomical branch labeling of all segmental branches, and longitudinal matching of airway branches in repeated scans of the same subject. Methods and Materials: The segmentation process begins from an automatically detected seed point in the trachea. The airway centerline tree is then constructed by iteratively adding locally optimal paths that most resemble the airway centerlines based on a statistical model derived from a training set. A full segmentation of the wall surfaces is then extracted around the centerline, using a graph based approach, which simultaneously detects both surfaces using image gradients. Deformable image registration…

  10. A novel 3D shape descriptor for automatic retrieval of anatomical structures from medical images

    Science.gov (United States)

    Nunes, Fátima L. S.; Bergamasco, Leila C. C.; Delmondes, Pedro H.; Valverde, Miguel A. G.; Jackowski, Marcel P.

    2017-03-01

    Content-based image retrieval (CBIR) aims at retrieving from a database objects that are similar to an object provided by a query, by taking into consideration a set of extracted features. While CBIR has been widely applied in the two-dimensional image domain, the retrieval of 3D objects from medical image datasets using CBIR remains to be explored. In this context, the development of descriptors that can capture information specific to organs or structures is desirable. In this work, we focus on the retrieval of two anatomical structures commonly imaged by Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) techniques, the left ventricle of the heart and blood vessels. Towards this aim, we developed the Area-Distance Local Descriptor (ADLD), a novel 3D local shape descriptor that employs mesh geometry information, namely facet area and distance from centroid to surface, to identify shape changes. Because ADLD only considers surface meshes extracted from volumetric medical images, it substantially diminishes the amount of data to be analyzed. A 90% precision rate was obtained when retrieving both convex (left ventricle) and non-convex structures (blood vessels), allowing for detection of abnormalities associated with changes in shape. Thus, ADLD has the potential to aid in the diagnosis of a wide range of vascular and cardiac diseases.
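
    The descriptor's two ingredients, facet area and distance from the centroid to the surface, are simple mesh quantities. Below is a numpy sketch computing an ADLD-like histogram signature; this is a simplification for intuition, not the full descriptor.

```python
# Hedged sketch: area-weighted centroid-distance signature for a triangle mesh.
import numpy as np

def facet_areas_and_distances(vertices, faces):
    """vertices: (n, 3) floats; faces: (m, 3) vertex indices."""
    tri = vertices[faces]                      # (m, 3, 3) triangle corners
    a = tri[:, 1] - tri[:, 0]
    b = tri[:, 2] - tri[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(a, b), axis=1)
    centroid = vertices.mean(axis=0)
    # Distance from the mesh centroid to each facet's own center.
    dists = np.linalg.norm(tri.mean(axis=1) - centroid, axis=1)
    return areas, dists

def adld_like_signature(vertices, faces, bins=16):
    areas, dists = facet_areas_and_distances(vertices, faces)
    h, _ = np.histogram(dists, bins=bins, weights=areas, density=True)
    return h   # compact, comparable shape signature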

  11. LINGUISTIC DATABASE FOR AUTOMATIC GENERATION SYSTEM OF ENGLISH ADVERTISING TEXTS

    Directory of Open Access Journals (Sweden)

    N. A. Metlitskaya

    2017-01-01

    The article deals with the linguistic database for a system of automatic generation of English advertising texts on cosmetics and perfumery. The database for such a system includes two main blocks: an automatic dictionary (that contains semantic and morphological information for each word), and semantic-syntactical formulas of the texts in a special formal language, SEMSINT. The database is built on the results of the analysis of 30 English advertising texts on cosmetics and perfumery. First, each word was given a unique code. For example, N stands for nouns, A – for adjectives, V – for verbs, etc. Then all the lexicon of the analyzed texts was distributed into different semantic categories. According to this semantic classification each word was given a special semantic code. For example, the record N01 that is attributed to the word «lip» in the dictionary means that this word refers to nouns of the semantic category «part of a human's body». The second block of the database includes the semantic-syntactical formulas of the analyzed advertising texts written in the special formal language SEMSINT. The author gives a brief description of this language, presenting its essence and structure. Also, an example of one formalized advertising text in SEMSINT is provided.

  12. AUTO-LAY: automatic layout generation for procedure flow diagrams

    International Nuclear Information System (INIS)

    Forzano, P.; Castagna, P.

    1995-01-01

Nuclear Power Plant procedures can be seen from essentially two viewpoints: the process and the information management. From the first point of view, it is important to supply the knowledge apt to solve problems connected with the control of the process; from the second one, the focus of attention is on the knowledge representation, its structure, elicitation and maintenance, and formal quality assurance. These two aspects of procedure representation can be considered and solved separately. In particular, methodological, formal and management issues require long and tedious activities that in most cases constitute a great barrier to procedure development and upgrade. To solve these problems, Ansaldo is developing DIAM, a wide integrated tool for procedure management to support procedure writing, updating, usage and documentation. One of the most challenging features of DIAM is AUTO-LAY, a CASE sub-tool that, in a completely automatic way, lays out parts of or complete flow diagrams. This is a feature that is partially present in some other CASE products, which, however, do not allow complex graph handling and isomorphism between screen and paper representation. AUTO-LAY has the unique capability to draw graphs of any complexity, to section them into pages, and to automatically compose a document. This has been recognized in the literature as the most important second-generation CASE improvement. (author). 5 refs., 9 figs

  13. Automatic generation of warehouse mediators using an ontology engine

    Energy Technology Data Exchange (ETDEWEB)

    Critchlow, T., LLNL

    1998-04-01

Data warehouses created for dynamic scientific environments, such as genetics, face significant challenges to their long-term feasibility. One of the most significant of these is the high frequency of schema evolution resulting from both technological advances and scientific insight. Failure to quickly incorporate these modifications will quickly render the warehouse obsolete, yet each evolution requires significant effort to ensure the changes are correctly propagated. DataFoundry utilizes a mediated warehouse architecture with an ontology infrastructure to reduce the maintenance requirements of a warehouse. Among other things, the ontology is used as an information source for automatically generating mediators, the methods that transfer data between the data sources and the warehouse. The identification, definition and representation of the metadata required to perform this task are a primary contribution of this work.

  14. Reaction Mechanism Generator: Automatic construction of chemical kinetic mechanisms

    Science.gov (United States)

    Gao, Connie W.; Allen, Joshua W.; Green, William H.; West, Richard H.

    2016-06-01

    Reaction Mechanism Generator (RMG) constructs kinetic models composed of elementary chemical reaction steps using a general understanding of how molecules react. Species thermochemistry is estimated through Benson group additivity and reaction rate coefficients are estimated using a database of known rate rules and reaction templates. At its core, RMG relies on two fundamental data structures: graphs and trees. Graphs are used to represent chemical structures, and trees are used to represent thermodynamic and kinetic data. Models are generated using a rate-based algorithm which excludes species from the model based on reaction fluxes. RMG can generate reaction mechanisms for species involving carbon, hydrogen, oxygen, sulfur, and nitrogen. It also has capabilities for estimating transport and solvation properties, and it automatically computes pressure-dependent rate coefficients and identifies chemically-activated reaction paths. RMG is an object-oriented program written in Python, which provides a stable, robust programming architecture for developing an extensible and modular code base with a large suite of unit tests. Computationally intensive functions are cythonized for speed improvements.
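The Benson group additivity idea mentioned above is easy to illustrate: a molecular property is estimated as the sum of the contributions of the groups the molecule contains. The sketch below uses invented group names and contribution values, not RMG's actual database or API.

```python
# Benson-style group additivity in miniature: the enthalpy of formation is
# the sum of contributions from each group present in the molecule. The group
# names and values below are placeholders, not RMG's real data.
GROUP_VALUES_KJ_MOL = {
    "C-(C)(H)3": -42.7,   # hypothetical methyl-group contribution
    "C-(C)2(H)2": -20.6,  # hypothetical methylene-group contribution
}

def estimate_hf(groups):
    """groups: mapping of group name -> count of that group in the molecule."""
    return sum(GROUP_VALUES_KJ_MOL[g] * n for g, n in groups.items())

# Propane decomposed as two methyl groups and one methylene group.
print(estimate_hf({"C-(C)(H)3": 2, "C-(C)2(H)2": 1}))
```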

  15. Applications of automatic mesh generation and adaptive methods in computational medicine

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.

  16. Automatic speech recognition for report generation in computed tomography

    International Nuclear Information System (INIS)

    Teichgraeber, U.K.M.; Ehrenstein, T.; Lemke, M.; Liebig, T.; Stobbe, H.; Hosten, N.; Keske, U.; Felix, R.

    1999-01-01

Purpose: A study was performed to compare the performance of automatic speech recognition (ASR) with conventional transcription. Materials and Methods: 100 CT reports were generated by using ASR and 100 CT reports were dictated and written by medical transcriptionists. The time for dictation and correction of errors by the radiologist was assessed and the types of mistakes were analysed. The text recognition rate was calculated in both groups and the average time between completion of the imaging study by the technologist and generation of the written report was assessed. A commercially available speech recognition technology (ASKA Software, IBM Via Voice) running on a personal computer was used. Results: The time for dictation using digital voice recognition was 9.4±2.3 min compared to 4.5±3.6 min with an ordinary Dictaphone. The text recognition rate was 97% with digital voice recognition and 99% with medical transcriptionists. The average time from imaging completion to written report finalisation was reduced from 47.3 hours with medical transcriptionists to 12.7 hours with ASR. The analysis of misspellings demonstrated (ASR vs. medical transcriptionists): 3 vs. 4 syntax errors, 0 vs. 37 orthographic mistakes, 16 vs. 22 mistakes in substance and 47 vs. erroneously applied terms. Conclusions: The use of digital voice recognition as a replacement for medical transcription is recommendable when immediate availability of written reports is necessary. (orig.) [de

  17. Generating anatomically accurate finite element meshes for electrical impedance tomography of the human head

    Science.gov (United States)

    Yang, Bin; Xu, Canhua; Dai, Meng; Fu, Feng; Dong, Xiuzhen

    2013-07-01

For electrical impedance tomography (EIT) of the brain, the use of an anatomically accurate and patient-specific finite element (FE) mesh has been shown to confer significant improvements in the quality of image reconstruction. But, given the lack of a rapid method to obtain the accurate anatomic geometry of the head, the generation of a patient-specific mesh is time-consuming. In this paper, a modified fuzzy c-means algorithm based on the non-local means method is used to segment the different layers of the head in CT images. This algorithm performed well, in particular recognizing the ventricles accurately and dealing suitably with noise. The FE mesh established according to the segmentation results was validated in computational simulation. A rapid, practicable method is thus provided for the generation of patient-specific FE meshes of the human head suitable for brain EIT.
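For orientation, the sketch below implements plain fuzzy c-means on 1-D intensities; the paper's modification, which adds a non-local means term for noise robustness, is not reproduced here.

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means on 1-D intensities x of shape (N,).

    Returns (cluster centers, membership matrix). The non-local means
    regularization described in the paper is omitted from this sketch.
    """
    rng = np.random.default_rng(seed)
    u = rng.dirichlet(np.ones(c), size=len(x))   # (N, c) fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)      # membership-weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Standard update: u_ik proportional to d_ik^(-2/(m-1)), row-normalized.
        u = 1.0 / d ** (2 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Three synthetic intensity layers with additive noise.
x = np.repeat([10.0, 60.0, 120.0], 50) + np.random.default_rng(1).normal(0, 2, 150)
centers, _ = fuzzy_cmeans(x, c=3)
print(np.sort(centers))   # close to [10, 60, 120]
```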

  18. An Algorithm to Automatically Generate the Combinatorial Orbit Counting Equations

    Science.gov (United States)

    Melckenbeeck, Ine; Audenaert, Pieter; Michoel, Tom; Colle, Didier; Pickavet, Mario

    2016-01-01

Graphlets are small subgraphs, usually containing up to five vertices, that can be found in a larger graph. Identification of the graphlets that a vertex in an explored graph touches can provide useful information about the local structure of the graph around that vertex. Actually finding all graphlets in a large graph can be time-consuming, however. As the graphlets grow in size, more different graphlets emerge and the time needed to find each graphlet also scales up. If it is not necessary to find each instance of each graphlet, and knowing the number of graphlets touching each node of the graph suffices, the problem is less hard. Previous research shows a way to simplify counting the graphlets: instead of looking for the graphlets needed, smaller graphlets are searched for, as well as the number of common neighbors of vertices. Solving a system of equations then gives the number of times a vertex is part of each graphlet of the desired size. However, until now, equations only existed to count graphlets with 4 or 5 nodes. In this paper, two new techniques are presented. The first generates the needed equations automatically. This eliminates the tedious work previously required each time an extra node is added to the graphlets. The technique is independent of the number of nodes in the graphlets and can thus be used to count larger graphlets than previously possible. The second technique gives all graphlets a unique ordering that is easily extended to name graphlets of any size. Both techniques were used to generate equations to count graphlets with 4, 5 and 6 vertices, which extends all previous results. Code can be found at https://github.com/IneMelckenbeeck/equation-generator and https://github.com/IneMelckenbeeck/graphlet-naming. PMID:26797021
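The common-neighbor bookkeeping that such equations build on can be shown in its smallest instance: the triangle orbit of a vertex v equals the number of neighbor pairs of v that are themselves adjacent. A minimal sketch on a toy graph (not the paper's code):

```python
# A vertex v is in one triangle for every adjacent pair (a, b) of its
# neighbors, i.e. for every neighbor pair having v as a common neighbor.
from itertools import combinations

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # toy graph
adj = {}
for u, w in edges:
    adj.setdefault(u, set()).add(w)
    adj.setdefault(w, set()).add(u)

triangles = {v: sum(1 for a, b in combinations(adj[v], 2) if b in adj[a])
             for v in adj}
print(triangles)   # {0: 1, 1: 1, 2: 1, 3: 0}
```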

  19. [Development of a Software for Automatically Generated Contours in Eclipse TPS].

    Science.gov (United States)

    Xie, Zhao; Hu, Jinyou; Zou, Lian; Zhang, Weisha; Zou, Yuxin; Luo, Kelin; Liu, Xiangxiang; Yu, Luxin

    2015-03-01

The automatic generation of planning target and auxiliary contours has been achieved in Eclipse TPS 11.0. The scripting language AutoHotkey was used to develop a software tool for automatically generated contours in Eclipse TPS. This software is named Contour Auto Margin (CAM) and is composed of operational functions for contours, script-generation visualization and script file operations. Ten cases of different cancers were selected separately; in Eclipse TPS 11.0, scripts generated by the software could not only automatically generate contours but also perform contour post-processing. For the different cancers, there was no difference between automatically generated contours and manually created contours. CAM is a user-friendly and powerful software tool that can generate contours automatically and quickly in Eclipse TPS 11.0. With the help of CAM, plan preparation time is greatly reduced and the working efficiency of radiation therapy physicists is improved.

  20. Automatic run-time provenance capture for scientific dataset generation

    Science.gov (United States)

    Frew, J.; Slaughter, P.

    2008-12-01

Provenance---the directed graph of a dataset's processing history---is difficult to capture effectively. Human-generated provenance, as narrative metadata, is labor-intensive and thus often incorrect, incomplete, or simply not recorded. Workflow systems capture some provenance implicitly in workflow specifications, but these systems are not ubiquitous or standardized, and a workflow specification may not capture all of the factors involved in a dataset's production. System audit trails capture potentially all processing activities, but not the relationships between them. We describe a system that transparently (i.e., without any modification to science codes) and automatically (i.e., without any human intervention) captures the low-level interactions (files read/written, parameters accessed, etc.) between scientific processes, and then synthesizes these relationships into a provenance graph. This system---the Earth System Science Server (ES3)---is sufficiently general that it can accommodate any combination of stand-alone programs, interpreted codes (e.g. IDL), and command-language scripts. Provenance in ES3 can be published in well-defined XML formats (including formats suitable for graphical visualization), and queried to determine the ancestors or descendants of any specific data file or process invocation. We demonstrate how ES3 can be used to capture the provenance of a large operational ocean color dataset.
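The ancestor query over a provenance graph reduces to reverse reachability in a DAG. A minimal sketch, with hypothetical file and process names standing in for the interactions ES3 records:

```python
# A provenance graph is a DAG whose edges point from inputs to the artifacts
# derived from them; the ancestor query is a reverse reachability search.
from collections import defaultdict

parents = defaultdict(set)   # artifact -> set of direct inputs

def record(process_inputs, outputs):
    for out in outputs:
        parents[out].update(process_inputs)

def ancestors(artifact):
    seen, stack = set(), [artifact]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

record(["raw_l0.dat", "calib.cfg"], ["l1.dat"])
record(["l1.dat"], ["chlorophyll_map.png"])
print(ancestors("chlorophyll_map.png"))  # {'l1.dat', 'raw_l0.dat', 'calib.cfg'}
```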

  1. Development of tools for automatic generation of PLC code

    CERN Document Server

    Koutli, Maria; Rochez, Jacques

This Master thesis was performed at CERN, more specifically in the EN-ICE-PLC section. The thesis describes the integration of two PLC platforms, based on the CODESYS development tool, into the CERN-defined industrial framework, UNICOS. CODESYS is a development tool for PLC programming, based on the IEC 61131-3 standard, and is adopted by many PLC manufacturers. The two PLC development environments are SoMachine from Schneider and TwinCAT from Beckhoff. The two CODESYS-compatible PLCs should be controlled by the SCADA system of Siemens, WinCC OA. The framework includes a library of Function Blocks (objects) for the PLC programs and a software tool for automatic generation of the PLC code based on this library, called UAB. The integration aimed to give a solution shared by both PLC platforms and was based on the PLCopen XML scheme. The developed tools were demonstrated by creating a control application for both PLC environments and testing the behavior of the library code.

  2. Automatic summary generating technology of vegetable traceability for information sharing

    Science.gov (United States)

    Zhenxuan, Zhang; Minjing, Peng

    2017-06-01

In order to solve the problems of excessive data entry and the consequent high costs of data collection that farmers face in vegetable traceability applications, an automatic summary generation technology for vegetable traceability information sharing is proposed. The proposed technology is an effective way for farmers to share real-time vegetable planting information on social networking platforms, to enhance their brands and obtain more customers. In this research, the factors influencing vegetable traceability for customers were analyzed to establish the sub-indicators and target indicators and to propose a computing model based on the collected parameter values of the planted vegetables and legal standards on food safety. The proposed standard parameter model involves five steps: accessing the database, establishing target indicators, establishing sub-indicators, establishing a standard reference model and computing scores of indicators. On the basis of establishing and optimizing the standards of the food safety and traceability system, the proposed technology could be accepted by more and more farmers and customers.

  3. Automatic generation of investigator bibliographies for institutional research networking systems.

    Science.gov (United States)

    Johnson, Stephen B; Bales, Michael E; Dine, Daniel; Bakken, Suzanne; Albert, Paul J; Weng, Chunhua

    2014-10-01

Publications are a key data source for investigator profiles and research networking systems. We developed ReCiter, an algorithm that automatically extracts bibliographies from PubMed using institutional information about the target investigators. ReCiter executes a broad query against PubMed, groups the results into clusters that appear to constitute distinct author identities and selects the cluster that best matches the target investigator. Using information about investigators from one of our institutions, we compared ReCiter results to queries based on author name and institution and to citations extracted manually from the Scopus database. Five judges created a gold standard using citations of a random sample of 200 investigators. About half of the 10,471 potential investigators had no matching citations in PubMed, and about 45% had fewer than 70 citations. Interrater agreement (Fleiss' kappa) for the gold standard was 0.81. Scopus achieved the best recall (sensitivity) of 0.81, while name-based queries had 0.78 and ReCiter had 0.69. ReCiter attained the best precision (positive predictive value) of 0.93 while Scopus had 0.85 and name-based queries had 0.31. ReCiter accesses the most current citation data, uses limited computational resources and minimizes manual entry by investigators. Generation of bibliographies using name-based queries will not yield high accuracy. Proprietary databases can perform well but require manual effort. Automated generation with higher recall is possible but requires additional knowledge about investigators. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Semi-Automatic Story Generation for a Geographic Server

    Directory of Open Access Journals (Sweden)

    Rizwan Mehmood

    2017-06-01

Most existing servers providing geographic data tend to offer various numeric data. We started to work on a new type of geographic server, motivated by four major issues: (i) how to handle figures when different databases present different values; (ii) how to build up sizeable collections of pictures with detailed descriptions; (iii) how to update rapidly changing information, such as personnel holding important functions; and (iv) how to describe countries not just by using trivial facts, but by stories typical of the country involved. We have discussed and partially resolved issues (i) and (ii) in previous papers; we have decided to deal with (iii), regional updates, by tying in an international consortium whose members would either help themselves or find individuals to do so. It is issue (iv), how to generate non-trivial stories typical of a country, that we decided to tackle both manually (the consortium has by now generated around 200 stories) and by developing techniques for semi-automatic story generation, which is the topic of this paper. The basic idea was first to define sets of reasonably reliable servers that may differ from region to region, to extract “interesting facts” from the servers, and to combine them into a raw version of a report that would require some manual cleaning-up (hence: semi-automatic). It may sound difficult to extract “interesting facts” from Web pages, but it is quite possible to define heuristics to do so, never exceeding the few lines allowed for quotation purposes. One very simple rule we adopted was this: ‘Look for sentences with superlatives!’ If a sentence contains words like “biggest”, “highest”, “most impressive” etc., it is likely to contain an interesting fact. With a little imagination, we have been able to establish a set of such rules. We will show that the stories can be completely different. For some countries, historical facts may dominate; for others, the beauty of landscapes; for
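The quoted superlative rule lends itself to a simple sentence filter. A rough sketch, with a deliberately permissive pattern ('-est' words and 'most + adjective' phrases) that would need curation in practice:

```python
import re

# Hedged approximation of the "look for superlatives!" heuristic: keep any
# sentence containing an "-est" superlative or a "most <word>" phrase.
SUPERLATIVE = re.compile(r"\b(?:\w+est|most\s+\w+)\b", re.IGNORECASE)

def interesting_sentences(text):
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if SUPERLATIVE.search(s)]

sample = ("The lake is scenic. It is the deepest lake in the region. "
          "Its shoreline hosts the most impressive cliffs in the country.")
print(interesting_sentences(sample))
```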

  5. User evaluation of a communication system that automatically generates captions to improve telephone communication

    NARCIS (Netherlands)

    Zekveld, A.A.; Kramer, S.E.; Kessens, J.M.; Vlaming, M.S.M.G.; Houtgast, T.

    2009-01-01

    This study examined the subjective benefit obtained from automatically generated captions during telephone-speech comprehension in the presence of babble noise. Short stories were presented by telephone either with or without captions that were generated offline by an automatic speech recognition

  6. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    Science.gov (United States)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

Automatic generation control (AGC) is a key technology to maintain the real-time balance between power generation and load, and to ensure the quality of power supply. Power grids require each power generation unit to have a satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method to calculate these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between performance indices and load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.

  7. Automatic segmentation of thoracic and pelvic CT images for radiotherapy planning using implicit anatomic knowledge and organ-specific segmentation strategies.

    Science.gov (United States)

    Haas, B; Coradi, T; Scholz, M; Kunz, P; Huber, M; Oppitz, U; André, L; Lengkeek, V; Huyskens, D; van Esch, A; Reddick, R

    2008-03-21

Automatic segmentation of anatomical structures in medical images is a valuable tool for efficient computer-aided radiotherapy and surgery planning and an enabling technology for dynamic adaptive radiotherapy. This paper presents the design, algorithms and validation of new software for the automatic segmentation of CT images used for radiotherapy treatment planning. A coarse-to-fine approach is followed that consists of presegmentation, anatomic orientation and structure segmentation. No user input or a priori information about the image content is required. In presegmentation, the body outline, the bones and lung-equivalent tissue are detected. Anatomic orientation recognizes the patient's position, orientation and gender and creates an elastic mapping of the slice positions to a reference scale. Structure segmentation is divided into localization, outlining and refinement, performed by procedures with implicit anatomic knowledge using standard image processing operations. The presented version of the algorithms automatically segments the body outline and bones in any gender and patient position; the prostate, bladder and femoral heads for the male pelvis in supine position; and the spinal canal, lungs, heart and trachea in supine position. The software was developed and tested on a collection of over 600 clinical radiotherapy planning CT stacks. In a qualitative validation on this test collection, anatomic orientation correctly detected gender, patient position and body region in 98% of the cases; a correct mapping was produced for 89% of thorax and 94% of pelvis cases. The average processing time for the entire segmentation of a CT stack was less than 1 min on a standard personal computer. Two independent retrospective studies were carried out for clinical validation. Study I was performed on 66 cases (30 pelvis, 36 thorax) with dosimetrists, study II on 52 cases (39 pelvis, 13 thorax) with radio-oncologists as experts. The experts rated the automatically produced

  8. Automatic Grasp Generation and Improvement for Industrial Bin-Picking

    DEFF Research Database (Denmark)

    Kraft, Dirk; Ellekilde, Lars-Peter; Rytz, Jimmy Alison

    2014-01-01

and achieve comparable results and that our learning approach can improve system performance significantly. Automatic bin-picking is an important industrial process that can lead to significant savings and potentially keep production in countries with high labour costs rather than outsourcing it. The presented...... work allows minimizing cycle time as well as setup cost, which are essential factors in automatic bin-picking. It therefore leads to a wider applicability of bin-picking in industry....

  9. A strategy for automatically generating programs in the lucid programming language

    Science.gov (United States)

    Johnson, Sally C.

    1987-01-01

A strategy for automatically generating and verifying simple computer programs is described. The programs are specified by a precondition and a postcondition in predicate calculus. The programs generated are in the Lucid programming language, a high-level, data-flow language known for its attractive mathematical properties and ease of program verification. The Lucid programming language is described, and the automatic program generation strategy is described and applied to several example problems.

  10. Modeling the pharyngeal anatomical effects on breathing resistance and aerodynamically generated sound.

    Science.gov (United States)

    Xi, Jinxiang; Si, Xiuhua; Kim, JongWon; Su, Guoguang; Dong, Haibo

    2014-07-01

    The objective of this study was to systematically assess the effects of pharyngeal anatomical details on breathing resistance and acoustic characteristics by means of computational modeling. A physiologically realistic nose-throat airway was reconstructed from medical images. Individual airway anatomy such as the uvula, pharynx, and larynx was then isolated for examination by gradually simplifying this image-based model geometry. Large eddy simulations with the FW-H acoustics model were used to simulate airflows and acoustic sound generation with constant flow inhalations in rigid-walled airway geometries. Results showed that pharyngeal anatomical details exerted a significant impact on breathing resistance and energy distribution of acoustic sound. The uvula constriction induced considerably increased levels of pressure drop and acoustic power in the pharynx, which could start and worsen snoring symptoms. Each source anatomy was observed to generate a unique spectrum with signature peak frequencies and energy distribution. Moreover, severe pharyngeal airway narrowing led to an upward shift of sound energy in the high-frequency range. Results indicated that computational aeroacoustic modeling appeared to be a practical tool to study breathing-related disorders. Specifically, high-frequency acoustic signals might disclose additional clues to the mechanism of apneic snoring and should be included in future acoustic studies.

  11. Extraction: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Georgel, B.; Zorgati, R.

    1994-01-01

Improving the speed and quality of eddy-current non-destructive testing of steam generator tubes calls for automating all processes that contribute to diagnosis. This paper describes how we use signal processing, pattern recognition and artificial intelligence to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs

  12. Automatic Texture and Orthophoto Generation from Registered Panoramic Views

    DEFF Research Database (Denmark)

    Krispel, Ulrich; Evers, Henrik Leander; Tamke, Martin

    2015-01-01

from range data only. In order to detect these elements, we developed a method that utilizes range data and color information from high-resolution panoramic images of indoor scenes, taken at the scanner's position. A proxy geometry is derived from the point clouds; orthographic views of the scene......Recent trends in 3D scanning are aimed at the fusion of range data and color information from images. The combination of these two outputs allows novel semantic information to be extracted. The workflow presented in this paper allows detecting objects, such as light switches, that are hard to identify...... are automatically identified from the geometry and an image per view is created via projection. We combine methods of computer vision to train a classifier to detect the objects of interest from these orthographic views. Furthermore, these views can be used for automatic texturing of the proxy geometry....

  13. Computer program for automatic generation of BWR control rod patterns

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Hsia, M.Y.

    1990-01-01

    A computer program named OCTOPUS has been developed to automatically determine a control rod pattern that approximates some desired target power distribution as closely as possible without violating any thermal safety or reactor criticality constraints. The program OCTOPUS performs a semi-optimization task based on the method of approximation programming (MAP) to develop control rod patterns. The SIMULATE-E code is used to determine the nucleonic characteristics of the reactor core state

  14. System and Component Software Specification, Run-time Verification and Automatic Test Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The following background technology is described in Part 5: Run-time Verification (RV), White Box Automatic Test Generation (WBATG). Part 5 also describes how WBATG...

  15. Cuypers : a semi-automatic hypermedia generation system

    NARCIS (Netherlands)

    J.R. van Ossenbruggen (Jacco); F.J. Cornelissen; J.P.T.M. Geurts (Joost); L. Rutledge (Lloyd); L. Hardman (Lynda)

    2000-01-01

The report describes the architecture of Cuypers, a system supporting second- and third-generation Web-based multimedia. First-generation Web content encodes information in handwritten (HTML) Web pages. Second-generation Web content generates HTML pages on demand, e.g. by filling in

  16. Automatic test pattern generation for iterative logic arrays | Boateng ...

    African Journals Online (AJOL)

    test are first formulated. Next, the repetition property of the test patterns is exploited to develop a method for generating C-tests for ILAs under the cell fault model. Based on the results of test generation, the method identifies points of insertion of ...

  17. Validating EHR documents: automatic schematron generation using archetypes.

    Science.gov (United States)

    Pfeiffer, Klaus; Duftschmid, Georg; Rinner, Christoph

    2014-01-01

    The goal of this study was to examine whether Schematron schemas can be generated from archetypes. The openEHR Java reference API was used to transform an archetype into an object model, which was then extended with context elements. The model was processed and the constraints were transformed into corresponding Schematron assertions. A prototype of the generator for the reference model HL7 v3 CDA R2 was developed and successfully tested. Preconditions for its reusability with other reference models were set. Our results indicate that an automated generation of Schematron schemas is possible with some limitations.
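The constraint-to-assertion step can be pictured as follows: an occurrence constraint on an element path becomes a Schematron assert that counts matching nodes. The constraint representation and function below are invented for illustration, not the prototype's actual interface.

```python
# Sketch of the constraint-to-assertion step: an occurrence constraint on an
# element path becomes a Schematron <assert> that counts matching nodes.
# The constraint format and function name are invented for illustration.
def to_schematron_assert(path, min_occurs, max_occurs):
    # '<' must be escaped as '&lt;' inside an XML attribute value.
    test = f"count({path}) >= {min_occurs} and count({path}) &lt;= {max_occurs}"
    return (f'<assert test="{test}">{path} must occur between '
            f'{min_occurs} and {max_occurs} times.</assert>')

print(to_schematron_assert("observation/value", 1, 1))
```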

  18. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

Following the relevant technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost, a tool should be used that is developed independently from the development of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  19. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composites (LFRC) with high fiber volume fraction and random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through random

  20. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    Science.gov (United States)

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  1. Composite structured mesh generation with automatic domain decomposition in complex geometries

    Science.gov (United States)

    This paper presents a novel automatic domain decomposition method to generate quality composite structured meshes in complex domains with arbitrary shapes, in which quality structured mesh generation still remains a challenge. The proposed decomposition algorithm is based on the analysis of an initi...

  2. Automatic 3d Building Model Generations with Airborne LiDAR Data

    Science.gov (United States)

    Yastikli, N.; Cetin, Z.

    2017-11-01

LiDAR systems are becoming more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed, in a simple and quick way, for the many studies which include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is the aim. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification applies hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The obtained results of this research on the study area verified that automatic 3D

  3. AUTOMATIC 3D BUILDING MODEL GENERATIONS WITH AIRBORNE LiDAR DATA

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2017-11-01

LiDAR systems are becoming more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed, in a simple and quick way, for the many studies which include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is the aim. An approach is proposed for automatic 3D building model generation that includes automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification applies hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The obtained results of this research on the study area verified

  4. Automatic Tamil lyric generation based on ontological interpretation ...

    Indian Academy of Sciences (India)

    Once the appropriate tri-grams are selected, the root words from these tri-grams are sent to the morphological generator, to form words in their packed form. These words are then assembled to form the final lyrics. Parameters of poetry like rhyme, alliteration, simile, vocative words, etc., are also taken care of by the system.

  5. Designing a story database for use in automatic story generation

    NARCIS (Netherlands)

    Oinonen, Katri; Theune, Mariët; Nijholt, Anton; Uijlings, Jasper; Harper, Richard; Rauterberg, Matthias; Combetto, Marco

    In this paper we propose a model for the representation of stories in a story database. The use of such a database will enable computational story generation systems to learn from previous stories and associated user feedback, in order to create believable stories with dramatic plots that invoke an

  6. Training IBM Watson using Automatically Generated Question-Answer Pairs

    OpenAIRE

    Lee, Jangho; Kim, Gyuwan; Yoo, Jaeyoon; Jung, Changwoo; Kim, Minseok; Yoon, Sungroh

    2016-01-01

    IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of well-prepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and...

  7. Automatic capture of attention by conceptually generated working memory templates.

    Science.gov (United States)

    Sun, Sol Z; Shen, Jenny; Shaw, Mark; Cant, Jonathan S; Ferber, Susanne

    2015-08-01

    Many theories of attention propose that the contents of working memory (WM) can act as an attentional template, which biases processing in favor of perceptually similar inputs. While support has been found for this claim, it is unclear how attentional templates are generated when searching real-world environments. We hypothesized that in naturalistic settings, attentional templates are commonly generated from conceptual knowledge, an idea consistent with sensorimotor models of knowledge representation. Participants performed a visual search task in the delay period of a WM task, where the item in memory was either a colored disk or a word associated with a color concept (e.g., "Rose," associated with red). During search, we manipulated whether a singleton distractor in the array matched the contents of WM. Overall, we found that search times were impaired in the presence of a memory-matching distractor. Furthermore, the degree of impairment did not differ based on the contents of WM. Put differently, regardless of whether participants were maintaining a perceptually colored disk identical to the singleton distractor, or whether they were simply maintaining a word associated with the color of the distractor, the magnitude of attentional capture was the same. Our results suggest that attentional templates can be generated from conceptual knowledge, in the physical absence of the visual feature.

  8. A Hybrid Intelligent Search Algorithm for Automatic Test Data Generation

    Directory of Open Access Journals (Sweden)

    Ying Xing

    2015-01-01

The increasing complexity of large-scale real-world programs necessitates the automation of software testing. As a basic problem in software testing, the automation of path-wise test data generation is especially important; it is in essence a constraint optimization problem solved by search strategies. Therefore, the constraint-processing efficiency of the selected search algorithm is a key factor. To increase search efficiency, a hybrid intelligent algorithm is proposed that efficiently searches the solution space of potential test data by making full use of both global and local search methods. Branch and bound is adopted for global search, which gives definite results at relatively low cost. In the search procedure for each variable, hill climbing is adopted for local search, enhanced with initial values selected heuristically based on a monotonicity analysis of the branching conditions. The two are tightly integrated by an efficient ordering method and a backtracking operation. In order to facilitate the search methods, the solution space is represented as a state space. Experimental results show that the proposed method outperformed some other methods used in test data generation. The heuristic initial value selection strategy improves search efficiency greatly and makes the search essentially backtrack-free. The results also demonstrate that the proposed method is applicable in engineering.
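The local-search half of the hybrid can be sketched as hill climbing on a branch-distance function, where the distance is zero exactly when the targeted branch would be taken. The condition and step scheme below are illustrative stand-ins, not the paper's formulation:

```python
import random

def branch_distance(x):
    # Hypothetical branch condition under test: (x * x - 7 * x + 10 == 0).
    # The distance is zero exactly when the branch would be taken.
    return abs(x * x - 7 * x + 10)

def hill_climb(start, steps=1000, seed=0):
    rng = random.Random(seed)
    best = start
    for _ in range(steps):
        candidate = best + rng.choice([-1, 1])
        # Accept any move that does not increase the branch distance.
        if branch_distance(candidate) <= branch_distance(best):
            best = candidate
        if branch_distance(best) == 0:
            break
    return best

print(hill_climb(start=100))   # reaches a root of the branch condition (here 5)
```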

  9. Automatic generation of computer programs servicing TFTR console displays

    International Nuclear Information System (INIS)

    Eisenberg, H.

    1983-01-01

A number of alternatives were considered in providing programs to support the several hundred displays required for control and monitoring of TFTR equipment. Since similar functions were performed, an automated method of creating programs was suggested. The complexity of a single program servicing as many as thirty consoles militated against that approach. Similarly, creation of a syntactic language, while elegant, was deemed to be too time consuming and had the disadvantage of requiring a working knowledge of the language at a programming level. It was decided to pursue a method of generating an individual program to service a particular display. A feasibility study was conducted and the Control and Monitor Display Generator system (CMDG) was developed. A Control and Monitor Display Service Program (CMDS) provides a means of performing monitor and control functions for devices associated with TFTR subsystems, as well as other user functions, via TFTR Control Consoles. This paper discusses the specific capabilities provided by CMDS in a usage context, as well as the mechanics of implementation

  10. Automatic navigation path generation based on two-phase adaptive region-growing algorithm for virtual angioscopy.

    Science.gov (United States)

    Kim, Do-Yeon; Chung, Sung-Mo; Park, Jong-Won

    2006-05-01

In this paper, we propose a fast and automated navigation path generation algorithm to visualize the inside of the carotid artery using MR angiography images. The carotid artery is one of the body regions not accessible by a real optical probe but can be visualized with virtual endoscopy. By applying a two-phase adaptive region-growing algorithm, the carotid artery segmentation is started at an initial seed located on the initially thresholded binary image. This segmentation algorithm automatically detects branch positions using a stack. Combined with a priori knowledge of the anatomic structure of the carotid artery, the detected branch position is used to separate the carotid artery into the internal carotid artery and the external carotid artery. A fly-through path is determined to automatically move the virtual camera, based on the intersecting coordinates of the two bisectors of the quadrangle circumscribing the segmented carotid artery. In consideration of interactive rendering speed and the usability of standard graphics hardware, the endoscopic view of the carotid artery is generated using a surface rendering algorithm with perspective projection. In addition, the endoscopic view is provided with a ray casting algorithm for off-line navigation of the carotid artery. Experiments have been conducted on both a mathematical phantom and clinical data sets. This algorithm is more effective than key-framing and topological thinning methods in terms of automation and computing time. The algorithm is also applicable to generating the centerlines of the renal artery, coronary artery and airway tree, which have tree-like cylindrical organ structures in medical imagery.
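A minimal intensity-based region grower conveys the core of the approach; the two-phase adaptive thresholding and stack-based branch detection of the paper are beyond this sketch:

```python
import numpy as np

def region_grow(image, seed, tol=10):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol`."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros_like(image, dtype=bool)
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        if abs(float(image[y, x]) - seed_val) > tol:
            continue
        mask[y, x] = True
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return mask

img = np.array([[100, 102, 50], [101, 99, 52], [48, 51, 49]], dtype=np.uint8)
print(region_grow(img, (0, 0)).astype(int))  # marks the bright 2x2 block
```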

  11. Automatic Valve Plane Localization in Myocardial Perfusion SPECT/CT by Machine Learning: Anatomic and Clinical Validation.

    Science.gov (United States)

    Betancur, Julian; Rubeaux, Mathieu; Fuchs, Tobias A; Otaki, Yuka; Arnson, Yoav; Slipczuk, Leandro; Benz, Dominik C; Germano, Guido; Dey, Damini; Lin, Chih-Jen; Berman, Daniel S; Kaufmann, Philipp A; Slomka, Piotr J

    2017-06-01

Precise definition of the mitral valve plane (VP) during segmentation of the left ventricle for SPECT myocardial perfusion imaging (MPI) quantification often requires manual adjustment, which affects the quantification of perfusion. We developed a machine learning approach using support vector machines (SVM) for automatic VP placement. Methods: A total of 392 consecutive patients undergoing 99mTc-tetrofosmin stress (5 min; mean ± SD, 350 ± 54 MBq) and rest (5 min; 1,024 ± 153 MBq) fast SPECT MPI attenuation corrected (AC) by CT and same-day coronary CT angiography were studied; included in the 392 patients were 48 patients who underwent invasive coronary angiography and had no known coronary artery disease. The left ventricle was segmented with standard clinical software (quantitative perfusion SPECT) by 2 experts, adjusting the VP if needed. Two-class SVM models were computed from the expert placements with 10-fold cross validation to separate the patients used for training and those used for validation. SVM probability estimates were used to compute the best VP position. Automatic VP localizations on AC and non-AC images were compared with expert placement on coronary CT angiography. Stress and rest total perfusion deficits and detection of per-vessel obstructive stenosis by invasive coronary angiography were also compared. Results: Bland-Altman 95% confidence intervals (CIs) for VP localization by SVM and experts for AC stress images (bias, 1; 95% CI, -5 to 7 mm) and AC rest images (bias, 1; 95% CI, -7 to 10 mm) were narrower than interexpert 95% CIs for AC stress images (bias, 0; 95% CI, -8 to 8 mm) and AC rest images (bias, 0; 95% CI, -10 to 10 mm). Machine learning with SVM allows automatic and accurate VP localization, decreasing user dependence in SPECT MPI quantification. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
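The use of SVM probability estimates to pick the best plane position can be sketched with scikit-learn's SVC: train a two-class model, then score candidate positions and keep the most probable one. The features and labels below are synthetic stand-ins for the contour-derived features used in the study.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for the VP task: 1-D "profile" features along the long axis,
# labeled 1 near the expert plane and 0 elsewhere.
rng = np.random.default_rng(0)
positions = np.linspace(-20, 20, 200)
X = positions.reshape(-1, 1) + rng.normal(0, 0.5, (200, 1))
y = (np.abs(positions) < 4).astype(int)

# probability=True enables Platt-scaled class probability estimates.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

candidates = np.linspace(-20, 20, 81).reshape(-1, 1)
probs = clf.predict_proba(candidates)[:, 1]        # P(valve plane | position)
print("chosen plane offset:", candidates[np.argmax(probs)][0], "mm")
```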

  12. Automatic Generation of Proof Tactics for Finite-Valued Logics

    Directory of Open Access Journals (Sweden)

    João Marcos

    2010-03-01

A number of flexible tactic-based logical frameworks are nowadays available that can implement a wide range of mathematical theories using a common higher-order metalanguage. Used as proof assistants, one of the advantages of such powerful systems resides in their responsiveness to extensibility of their reasoning capabilities, being designed over rule-based programming languages that allow the user to build her own 'programs to construct proofs' - the so-called proof tactics. The present contribution discusses the implementation of an algorithm that generates sound and complete tableau systems for a very inclusive class of sufficiently expressive finite-valued propositional logics, and then illustrates some of the challenges and difficulties related to the algorithmic formation of automated theorem proving tactics for such logics. The procedure whose implementation we report on is based on a generalized notion of analyticity of proof systems that is intended to guarantee termination of the corresponding automated tactics with respect to theoremhood in our targeted logics.

  13. Automatic generation of indoor navigable space using a point cloud and its scanner trajectory

    NARCIS (Netherlands)

    Staats, B. R.; Diakite, A.A.; Voûte, R.; Zlatanova, S.

    2017-01-01

Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may

  14. Automatic generation of medium-detailed 3D models of buildings based on CAD data

    NARCIS (Netherlands)

    Dominguez-Martin, B.; Van Oosterom, P.; Feito-Higueruela, F.R.; Garcia-Fernandez, A.L.; Ogayar-Anguita, C.J.

    2015-01-01

We present the preliminary results of a work in progress which aims to obtain a software system able to automatically generate a set of diverse 3D building models with a medium level of detail, that is, more detailed than a mere parallelepiped, but not as detailed as a complete geometric

  15. Validation study of automatically generated codes in colonoscopy using the endoscopic report system Endobase

    NARCIS (Netherlands)

    Groenen, Marcel J. M.; van Buuren, Henk R.; van Berge Henegouwen, Gerard P.; Fockens, Paul; van der Lei, Johan; Stuifbergen, Wouter N. H. M.; van der Schaar, Peter J.; Kuipers, Ernst J.; Ouwendijk, Rob J. Th

    2010-01-01

OBJECTIVE: Gastrointestinal endoscopy databases are important for surveillance, epidemiology, quality control and research. Good quality of automatically generated databases is of key importance to enable drawing justified conclusions from the data. The aim of this study is to validate the

  16. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  17. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  18. A GA-fuzzy automatic generation controller for interconnected power system

    CSIR Research Space (South Africa)

    Boesack, CD

    2011-10-01

This paper presents a GA-Fuzzy Automatic Generation Controller for large interconnected power systems. The design of Fuzzy Logic Controllers by means of expert knowledge has typically been the traditional design norm; however, this may not yield...

  19. Cross-cultural assessment of automatically generated multimodal referring expressions in a virtual world

    NARCIS (Netherlands)

    van der Sluis, Ielka; Luz, Saturnino; Breitfuss, Werner; Ishizuka, Mitsuru; Prendinger, Helmut

    This paper presents an assessment of automatically generated multimodal referring expressions as produced by embodied conversational agents in a virtual world. The algorithm used for this purpose employs general principles of human motor control and cooperativity in dialogues that can be

  20. Accuracy assessment of building point clouds automatically generated from iphone images

    Science.gov (United States)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

Low-cost sensor-generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms were calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick, near-real-time change detection purposes. However, further insight should first be obtained on the circumstances needed to guarantee successful point cloud generation from smartphone images.
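A point-to-point comparison like the 0.11 m figure above is typically computed with a nearest-neighbour query after registration. A sketch with SciPy's KD-tree on synthetic clouds (the registration step itself is assumed already done):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_cloud_distance(source, reference):
    """Mean nearest-neighbour distance from each source point to the
    reference cloud; a simple stand-in for the article's comparison once
    the clouds share a common coordinate frame."""
    tree = cKDTree(reference)
    dists, _ = tree.query(source, k=1)
    return dists.mean()

rng = np.random.default_rng(1)
tls = rng.uniform(0, 10, (5000, 3))                  # stand-in TLS cloud
iphone = tls[:2000] + rng.normal(0, 0.1, (2000, 3))  # noisy subset
print(round(mean_cloud_distance(iphone, tls), 3))
```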

  1. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    Science.gov (United States)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the 'randn' function in MATLAB and was used. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for a given patient might differ. The PDF of the rectal NTCP was obtained automatically for each group, except that the smoothness of the probability distribution increased with increasing number of data points and with increasing window width. We showed that during prostate IMRT optimization, patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the
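Gaussian-kernel density estimation, used above to obtain the PDF of the rectal NTCP, is compact to write out; the samples below are invented, and the bandwidth parameter plays the role of the window width whose effect on smoothness the authors note.

```python
import numpy as np

def gaussian_kde_pdf(samples, grid, bandwidth):
    """Kernel density estimate with a Gaussian kernel:
    pdf(x) = mean over samples of N(x; sample, bandwidth^2)."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / bandwidth

# Hypothetical rectal-NTCP samples (fractions), not the study's data.
samples = np.array([0.05, 0.07, 0.08, 0.12, 0.15, 0.22])
grid = np.linspace(0, 0.4, 9)
print(np.round(gaussian_kde_pdf(samples, grid, bandwidth=0.03), 2))
```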

  2. Aspects of an automatic system of implants of radioactive seeds and anatomic object simulator for tests in prostate brachytherapy

    International Nuclear Information System (INIS)

    Silva, Leonardo S.M.; Braga, Viviane V.B.; Campos, Tarcísio P. R. de

    2017-01-01

    This work presents the state of the art of the research and development of an automatic radioactive seed implantation system, the Prostate Seed Implant System (PSIS). The PSIS may assist in the procedure of testing permanent implants in the prostate. These tests will be important for measurements of absorbed doses in the pelvic structures, involving the organs and tissues at risk, in order to improve planning, seed positioning and dosimetry. The automated PSIS has been designed to meet operational needs: it offers freedom of positioning of the brachytherapy needle within the treatment area and ensures repeatability and fidelity to the planned treatment. Both the ultrasound probe and the seed implant needle are driven by step motors controlled through an Atmega microcontroller and a GUI (Graphical User Interface), supported by bearings and aluminum shafts. Movement of both the probe and the needle holder is performed by a spindle fixed on a threaded rod attached to the step motors by a coupling. The step motors are of the type used in CNC (Computer Numeric Control) machines, chosen for the precision of movement that they can achieve. The ultrasound probe helps to monitor the application of the seeds through the images acquired during the longitudinal movement. The parts that make up the system infrastructure were made of aluminum, translucent acrylic and cylindrical aluminum bars of different diameters. All these pieces were fixed and adjusted through screws, washers, nuts and metal adhesive, composing the final prototype of the PSIS. The project was developed and the PSIS prototype was assembled. The prototype presented acceptable operating characteristics for prostate implants. The advantage of this system is the automation of the application, which provides accurate positioning and movement of both the probe and the seed applicator. In addition to this study, seed implantation tests will be performed, and

  3. Aspects of an automatic system of implants of radioactive seeds and anatomic object simulator for tests in prostate brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Leonardo S.M.; Braga, Viviane V.B.; Campos, Tarcísio P. R. de, E-mail: leonardosantiago.lsms@gmail.com, E-mail: vitoriabraga06@gmail.com, E-mail: tprcampos@yahoo.com.br [Universidade Federal de Minas Gerais (PCTN/UFMG), Belo Horizonte (Brazil). Pós-Graduação em Ciências e Técnicas Nucleares. Departamento de Engenharia Nuclear

    2017-07-01

    This work presents the state of the art of the research and development of an automatic radioactive seed implantation system, the Prostate Seed Implant System (PSIS). The PSIS may assist in the procedure of testing permanent implants in the prostate. These tests will be important for measurements of absorbed doses in the pelvic structures, involving the organs and tissues at risk, in order to improve planning, seed positioning and dosimetry. The automated PSIS has been designed to meet operational needs: it offers freedom of positioning of the brachytherapy needle within the treatment area and ensures repeatability and fidelity to the planned treatment. Both the ultrasound probe and the seed implant needle are driven by step motors controlled through an Atmega microcontroller and a GUI (Graphical User Interface), supported by bearings and aluminum shafts. Movement of both the probe and the needle holder is performed by a spindle fixed on a threaded rod attached to the step motors by a coupling. The step motors are of the type used in CNC (Computer Numeric Control) machines, chosen for the precision of movement that they can achieve. The ultrasound probe helps to monitor the application of the seeds through the images acquired during the longitudinal movement. The parts that make up the system infrastructure were made of aluminum, translucent acrylic and cylindrical aluminum bars of different diameters. All these pieces were fixed and adjusted through screws, washers, nuts and metal adhesive, composing the final prototype of the PSIS. The project was developed and the PSIS prototype was assembled. The prototype presented acceptable operating characteristics for prostate implants. The advantage of this system is the automation of the application, which provides accurate positioning and movement of both the probe and the seed applicator. In addition to this study, seed implantation tests will be performed, and

  4. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries that have a hierarchical tree-like topology with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, a convex polygon, can be extracted at each level in an advancing scheme. Several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, as well as the implementation of the method.

  5. Automatic exposure control in CT: the effect of patient size, anatomical region and prescribed modulation strength on tube current and image quality

    Energy Technology Data Exchange (ETDEWEB)

    Papadakis, Antonios E. [University Hospital of Heraklion, Department of Medical Physics, Stavrakia, P.O. Box 1352, Heraklion, Crete (Greece); Perisinakis, Kostas; Damilakis, John [University of Crete, Faculty of Medicine, Department of Medical Physics, P.O. Box 2208, Heraklion, Crete (Greece)

    2014-10-15

    To study the effect of patient size, body region and modulation strength on tube current and image quality on CT examinations that use automatic tube current modulation (ATCM). Ten physical anthropomorphic phantoms that simulate an individual as neonate, 1-, 5-, 10-year-old and adult at various body habitus were employed. CT acquisition of head, neck, thorax and abdomen/pelvis was performed with ATCM activated at weak, average and strong modulation strength. The mean modulated mAs (mAs_mod) values were recorded. Image noise was measured at selected anatomical sites. The mAs_mod recorded for neonate compared to 10-year-old increased by 30 %, 14 %, 6 % and 53 % for head, neck, thorax and abdomen/pelvis, respectively (P < 0.05). The mAs_mod was lower than the preselected mAs with the exception of the 10-year-old phantom. In paediatric and adult phantoms, the mAs_mod ranged from 44 and 53 for weak to 117 and 93 for strong modulation strength, respectively. At the same exposure parameters image noise increased with body size (P < 0.05). The ATCM system studied here may affect dose differently for different patient habitus. Dose may decrease for overweight adults but increase for children older than 5 years old. Care should be taken when implementing ATCM protocols to ensure that image quality is maintained. • ATCM efficiency is related to the size of the patient's body. (orig.)

  6. Automatic exposure control in CT: the effect of patient size, anatomical region and prescribed modulation strength on tube current and image quality

    International Nuclear Information System (INIS)

    Papadakis, Antonios E.; Perisinakis, Kostas; Damilakis, John

    2014-01-01

    To study the effect of patient size, body region and modulation strength on tube current and image quality on CT examinations that use automatic tube current modulation (ATCM). Ten physical anthropomorphic phantoms that simulate an individual as neonate, 1-, 5-, 10-year-old and adult at various body habitus were employed. CT acquisition of head, neck, thorax and abdomen/pelvis was performed with ATCM activated at weak, average and strong modulation strength. The mean modulated mAs (mAs_mod) values were recorded. Image noise was measured at selected anatomical sites. The mAs_mod recorded for neonate compared to 10-year-old increased by 30 %, 14 %, 6 % and 53 % for head, neck, thorax and abdomen/pelvis, respectively (P < 0.05). The mAs_mod was lower than the preselected mAs with the exception of the 10-year-old phantom. In paediatric and adult phantoms, the mAs_mod ranged from 44 and 53 for weak to 117 and 93 for strong modulation strength, respectively. At the same exposure parameters image noise increased with body size (P < 0.05). The ATCM system studied here may affect dose differently for different patient habitus. Dose may decrease for overweight adults but increase for children older than 5 years old. Care should be taken when implementing ATCM protocols to ensure that image quality is maintained. • ATCM efficiency is related to the size of the patient's body. (orig.)

  7. High-speed particle tracking in nuclear emulsion by last-generation automatic microscopes

    International Nuclear Information System (INIS)

    Armenise, N.; De Serio, M.; Ieva, M.; Muciaccia, M.T.; Pastore, A.; Simone, S.; Damet, J.; Kreslo, I.; Savvinov, N.; Waelchli, T.; Consiglio, L.; Cozzi, M.; Di Ferdinando, D.; Esposito, L.S.; Giacomelli, G.; Giorgini, M.; Mandrioli, G.; Patrizii, L.; Sioli, M.; Sirri, G.; Arrabito, L.; Laktineh, I.; Royole-Degieux, P.; Buontempo, S.; D'Ambrosio, N.; De Lellis, G.; De Rosa, G.; Di Capua, F.; Coppola, D.; Formisano, F.; Marotta, A.; Migliozzi, P.; Pistillo, C.; Scotto Lavina, L.; Sorrentino, G.; Strolin, P.; Tioukov, V.; Juget, F.; Hauger, M.; Rosa, G.; Barbuto, E.; Bozza, C.; Grella, G.; Romano, G.; Sirignano, C.

    2005-01-01

    The technique of nuclear emulsions for high-energy physics experiments is being revived, thanks to the remarkable progress in measurement automation achieved in the past years. The present paper describes the features and performance of the European Scanning System, a last-generation automatic microscope working at a scanning speed of 20 cm²/h. The system has been developed in the framework of the OPERA experiment, designed to unambiguously detect ν_μ → ν_τ oscillations in nuclear emulsions

  8. Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3

    Science.gov (United States)

    2015-12-01

    ARL-TR-7543 (December 2015), US Army Research Laboratory: Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3, by Jaime C Acosta and Felipe Jovel (Survivability/Lethality Analysis Directorate, ARL), with Felipe Sotelo and Caesar [name truncated]. Only a fragment of the abstract survives; it lists future work on supporting more protocols (especially at different layers of the OSI model) and implementing an inference engine to extract inter- and intra-packet dependencies.

  9. Accuracy assessment of building point clouds automatically generated from iphone images

    Directory of Open Access Journals (Sweden)

    B. Sirmacek

    2014-06-01

    Full Text Available Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register the automatically generated point cloud onto a terrestrial laser scanning (TLS) point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, and quick and real-time change detection. However, further insights should first be obtained on the circumstances needed to guarantee successful point cloud generation from smartphone images.

  10. Automatic generation of stop word lists for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
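
    The selection rule above is easy to prototype. Below is a minimal sketch under the assumption that "keyword adjacency frequency" counts a term's occurrences adjacent to a known keyword and "keyword frequency" counts the term's own occurrences as a keyword; the corpus, keywords, threshold and truncation size are all illustrative.

    ```python
    from collections import Counter

    def generate_stoplist(documents, keywords, min_ratio=1.0, max_words=50):
        adjacency, frequency = Counter(), Counter()
        keyset = {k.lower() for k in keywords}
        for doc in documents:
            tokens = doc.lower().split()
            for i, tok in enumerate(tokens):
                if tok in keyset:
                    frequency[tok] += 1            # term occurring as a keyword
                neighbours = tokens[max(0, i - 1):i] + tokens[i + 1:i + 2]
                if any(n in keyset for n in neighbours):
                    adjacency[tok] += 1            # term adjacent to a keyword
        # keep terms that sit next to keywords more often than they are keywords
        stops = [t for t in adjacency
                 if adjacency[t] / max(frequency[t], 1) >= min_ratio and t not in keyset]
        return sorted(stops, key=lambda t: -adjacency[t])[:max_words]

    docs = ["the reactor core was inspected", "the turbine and the reactor share cooling"]
    print(generate_stoplist(docs, keywords=["reactor", "turbine"]))
    ```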

  11. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    Science.gov (United States)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.

  12. Automatic geometric modeling, mesh generation and FE analysis for pipelines with idealized defects and arbitrary location

    Energy Technology Data Exchange (ETDEWEB)

    Motta, R.S.; Afonso, S.M.B.; Willmersdorf, R.B.; Lyra, P.R.M. [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Cabral, H.L.D. [TRANSPETRO, Rio de Janeiro, RJ (Brazil); Andrade, E.Q. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Although the Finite Element Method (FEM) has proved to be a powerful tool for predicting the failure pressure of corroded pipes, the generation of good computational models of pipes with corrosion defects can take several days, which makes computational simulation difficult to apply in practice. The main purpose of this work is to develop a set of computational tools to automatically produce models of pipes with defects, ready to be analyzed with commercial FEM programs, starting from a few parameters that locate the defect (or a series of defects) and provide its main dimensions. The defects can be internal or external and can assume general spatial locations along the pipe. Idealized rectangular and elliptic geometries can be generated. The tools are based on the MSC.PATRAN pre- and post-processing programs and were written in PCL (Patran Command Language). The program for the automatic generation of models (PIPEFLAW) has a simplified and customized graphical interface, so that an engineer with basic notions of computational simulation with the FEM can rapidly generate models that result in precise and reliable simulations. Some examples of models of pipes with defects generated by the PIPEFLAW system are shown, and the results of numerical analyses, done with the tools presented in this work, are compared with empirical results. (author)

  13. Automatic Generation System of Multiple-Choice Cloze Questions and its Evaluation

    Directory of Open Access Journals (Sweden)

    Takuya Goto

    2010-09-01

    Full Text Available Since English expressions vary according to genre, it is important for students to study questions generated from sentences of the target genre. Although various questions have been prepared, they are still not enough to cover the various genres students want to learn. On the other hand, producing English questions requires sufficient grammatical knowledge and vocabulary, so it is difficult for non-experts to prepare English questions by themselves. In this paper, we propose an automatic generation system for multiple-choice cloze questions from English texts. Empirical knowledge is necessary to produce appropriate questions, so machine learning is introduced to acquire knowledge from existing questions. To generate the questions from texts automatically, the system (1) extracts appropriate sentences for questions from texts based on Preference Learning, (2) estimates a blank part based on a Conditional Random Field, and (3) generates distracters based on statistical patterns of existing questions. Experimental results show our method is workable for selecting appropriate sentences and blank parts. Moreover, our method is appropriate for generating the available distracters, especially for sentences that do not contain proper nouns.

  14. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

    Full Text Available Taking architectural plans as the data source, we propose a method that can automatically generate indoor map spatial data. Firstly, referring to the spatial data demands of indoor maps, we analyze the basic characteristics of architectural plans and introduce the concepts of wall segment, adjoining node and adjoining wall segment, on which the basic flow of automatic indoor map spatial data generation is established. Then, according to the adjoining relation between wall lines at intersections with columns, we construct a repair method for wall connectivity in relation to columns. Using gradual expansion and graphic reasoning to judge the local wall-symbol feature type at both sides of a door or window, and by updating the enclosing rectangle of the door or window, we develop a repair method for wall connectivity in relation to doors and windows, and a method to transform doors and windows into indoor map point features. Finally, on the basis of the geometric relation between adjoining wall segment median lines, a wall center-line extraction algorithm is presented (see the sketch below). Taking one exhibition hall's architectural plan as an example, we performed experiments; the results show that the proposed methods deal well with various complex situations and realize automatic indoor map spatial data extraction effectively.
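
    The final geometric step, taking the centre-line of a wall as the average of the two wall-side median lines, can be illustrated in isolation (a toy sketch; real plans require the adjacency analysis described above):

    ```python
    import numpy as np

    def wall_centerline(side_a, side_b):
        """side_a, side_b: (2, 2) arrays of segment endpoints in plan coordinates."""
        a, b = np.asarray(side_a, float), np.asarray(side_b, float)
        # orient the second segment the same way as the first before averaging
        if np.linalg.norm(a[0] - b[0]) > np.linalg.norm(a[0] - b[1]):
            b = b[::-1]
        return (a + b) / 2.0

    print(wall_centerline([(0, 0), (10, 0)], [(10, 0.3), (0, 0.3)]))
    # -> [[ 0.    0.15] [10.    0.15]] : the wall's centre-line endpoints
    ```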

  15. Development of ANJOYMC Program for Automatic Generation of Monte Carlo Cross Section Libraries

    International Nuclear Information System (INIS)

    Kim, Kang Seog; Lee, Chung Chan

    2007-03-01

    The NJOY code, developed at Los Alamos National Laboratory, generates cross section libraries in ACE format for Monte Carlo codes such as MCNP and McCARD by processing evaluated nuclear data in ENDF/B format. It takes a long time to prepare all the NJOY input files for hundreds of nuclides at various temperatures, and the input files can contain errors. In order to solve these problems, the ANJOYMC program has been developed. From a simple user input deck, this program not only generates all the NJOY input files automatically, but also generates a batch file to perform all the NJOY calculations. ANJOYMC is written in Fortran90 and can be executed under the Windows and Linux operating systems on a personal computer. Cross section libraries in ACE format can be generated in a short time and without errors by using a simple user input deck
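
    To make the batch idea concrete, here is a hypothetical sketch in Python (ANJOYMC itself is Fortran90, and the real NJOY input syntax is not reproduced here): a simple deck of nuclides and temperatures is expanded into one input file per case plus a shell script that runs them all.

    ```python
    from pathlib import Path

    deck = {"U235": [293.6, 600.0], "H1": [293.6]}   # nuclide -> temperatures (K)

    batch_lines = []
    for nuclide, temps in deck.items():
        for temp in temps:
            case = f"{nuclide}_{int(temp)}K"
            # placeholder content standing in for a real NJOY input sequence
            Path(f"{case}.inp").write_text(f"* auto-generated case {case}\n")
            batch_lines.append(f"njoy < {case}.inp > {case}.out")
    Path("run_all.sh").write_text("\n".join(batch_lines) + "\n")
    print(f"generated {len(batch_lines)} cases")
    ```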

  16. Evaluating the Potential of Imaging Rover for Automatic Point Cloud Generation

    Science.gov (United States)

    Cera, V.; Campi, M.

    2017-02-01

    The paper presents a phase of an on-going interdisciplinary research project concerning the medieval site of Casertavecchia (Italy). The project aims to develop a multi-technique approach for semantically enriched 3D modeling, starting from the automatic acquisition of several data sources. In particular, the paper reports the results of the first stage, concerning the Cathedral square of the medieval village. The work focuses on evaluating the potential of an imaging rover for automatic point cloud generation. Each survey technique has its own advantages and disadvantages, so the ideal approach is an integrated methodology that maximizes the performance of each instrument. The experimentation was conducted on the Cathedral square of the ancient site of Casertavecchia, in Campania, Italy.

  17. On the application of bezier surfaces for GA-Fuzzy controller design for use in automatic generation control

    CSIR Research Space (South Africa)

    Boesack, CD

    2012-03-01

    Full Text Available Automatic Generation Control (AGC) of large interconnected power systems is typically performed by a PI or PID type control law. Recently, intelligent control techniques such as GA-Fuzzy controllers have been widely applied within the power...

  18. Automatic mesh generation for structural analysis of pressure vessels using fuzzy knowledge processing

    International Nuclear Information System (INIS)

    Kado, Kenichiro; Sato, Takuya; Yoshimura, Shinobu; Yagawa, Genki.

    1994-01-01

    This paper describes an automatic mesh generation system for 2D axisymmetric and 3D shell structures based on fuzzy knowledge processing. In this system, an analysis model, i.e. a geometric model, is first defined using a conventional method for 2D structures and a commercial CAD system, AutoCAD, for 3D shell structures. Nodes are then generated based on the fuzzy knowledge processing technique, which controls the node density distribution over the whole analysis domain. Triangular elements are generated using the Delaunay triangulation technique and then converted to quadrilateral elements. The fundamental performance of the system is demonstrated through its application to typical components of a pressure vessel. (author)
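
    A toy sketch of the node-placement and triangulation steps, with the paper's fuzzy node-density control replaced by a simple radial density function (our assumption, purely for illustration); the quadrilateral conversion step is omitted.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(2)

    def density(p):
        # finer mesh near the origin, coarser far away (stand-in for fuzzy rules)
        return 1.0 / (1.0 + 4.0 * np.linalg.norm(p))

    candidates = rng.uniform(-1, 1, size=(4000, 2))
    keep = rng.uniform(size=len(candidates)) < np.apply_along_axis(density, 1, candidates)
    nodes = candidates[keep]                  # rejection sampling gives graded nodes
    tri = Delaunay(nodes)                     # triangular elements on those nodes
    print(f"{len(nodes)} nodes, {len(tri.simplices)} triangles")
    ```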

  19. LHC-GCS a model-driven approach for automatic PLC and SCADA code generation

    CERN Document Server

    Thomas, Geraldine; Barillère, Renaud; Cabaret, Sebastien; Kulman, Nikolay; Pons, Xavier; Rochez, Jacques

    2005-01-01

    The LHC experiments’ Gas Control System (LHC GCS) project [1] aims to provide the four LHC experiments (ALICE, ATLAS, CMS and LHCb) with control for their 23 gas systems. To ease the production and maintenance of 23 control systems, a model-driven approach has been adopted to generate automatically the code for the Programmable Logic Controllers (PLCs) and for the Supervision Control And Data Acquisition (SCADA) systems. The first milestones of the project have been achieved. The LHC GCS framework [4] and the generation tools have been produced. A first control application has actually been generated and is in production, and a second is in preparation. This paper describes the principle and the architecture of the model-driven solution. It will in particular detail how the model-driven solution fits with the LHC GCS framework and with the UNICOS [5] data-driven tools.

  20. Development of automatic intercomparison system for generation of time scale ensembling several atomic clocks

    Directory of Open Access Journals (Sweden)

    Thorat P.P.

    2015-01-01

    Full Text Available The National Physical Laboratory India (NPLI) has five commercial cesium atomic clocks. Until recently, one of these clocks had been used to maintain the coordinated universal time (UTC) of NPLI. To utilize all these clocks in an ensemble to generate a smoother time scale, it is essential to inter-compare them very precisely. This has been achieved with an automatic measurement system with well-conceived software. Though a few laboratories have developed such automatic measurement systems themselves, based on their respective requirements, these systems are not reported. So, keeping in mind the specific requirements of time scale generation, a new system has been developed by NPLI. The design takes into account the associated infrastructure that exists and would be used. The performance of the new system has also been studied and found to be quite satisfactory for the purpose. The system is being utilized for the generation of the NPLI time scale.

  1. Application of GA optimization for automatic generation control design in an interconnected power system

    Energy Technology Data Exchange (ETDEWEB)

    Golpira, H., E-mail: hemin.golpira@uok.ac.i [Department of Electrical and Computer Engineering, University of Kurdistan, Sanandaj, PO Box 416, Kurdistan (Iran, Islamic Republic of); Bevrani, H. [Department of Electrical and Computer Engineering, University of Kurdistan, Sanandaj, PO Box 416, Kurdistan (Iran, Islamic Republic of); Golpira, H. [Department of Industrial Engineering, Islamic Azad University, Sanandaj Branch, PO Box 618, Kurdistan (Iran, Islamic Republic of)

    2011-05-15

    Highlights: → A realistic model for automatic generation control (AGC) design is proposed. → The model considers GRC, speed governor dead band, filters and time delay. → The model provides an accurate model for digital simulations. -- Abstract: This paper addresses a realistic model for automatic generation control (AGC) design in an interconnected power system. The proposed scheme considers generation rate constraint (GRC), dead band, and time delay imposed on the power system by the governor-turbine, filters, thermodynamic process, and communication channels. The simplicity of structure and acceptable response of the well-known integral controller make it attractive for the power system AGC design problem. The Genetic Algorithm (GA) is used to compute the decentralized control parameters to achieve an optimum operating point. A 3-control-area power system is considered as a test system, and the closed-loop performance is examined in the presence of various constraint scenarios. It is shown that neglecting the above physical constraints, simultaneously or in part, leads to impractical and invalid results and may affect the system security, reliability and integrity. Taking into account the advantages of GA, besides considering a more complete dynamic model, provides a flexible and more realistic AGC system in comparison with existing conventional schemes.

  2. Application of GA optimization for automatic generation control design in an interconnected power system

    International Nuclear Information System (INIS)

    Golpira, H.; Bevrani, H.; Golpira, H.

    2011-01-01

    Highlights: → A realistic model for automatic generation control (AGC) design is proposed. → The model considers GRC, speed governor dead band, filters and time delay. → The model provides an accurate model for digital simulations. -- Abstract: This paper addresses a realistic model for automatic generation control (AGC) design in an interconnected power system. The proposed scheme considers generation rate constraint (GRC), dead band, and time delay imposed on the power system by the governor-turbine, filters, thermodynamic process, and communication channels. The simplicity of structure and acceptable response of the well-known integral controller make it attractive for the power system AGC design problem. The Genetic Algorithm (GA) is used to compute the decentralized control parameters to achieve an optimum operating point. A 3-control-area power system is considered as a test system, and the closed-loop performance is examined in the presence of various constraint scenarios. It is shown that neglecting the above physical constraints, simultaneously or in part, leads to impractical and invalid results and may affect the system security, reliability and integrity. Taking into account the advantages of GA, besides considering a more complete dynamic model, provides a flexible and more realistic AGC system in comparison with existing conventional schemes.
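
    A drastically simplified, self-contained illustration of GA-based AGC tuning (single area, integral control only, and none of the paper's GRC, dead-band or delay constraints; all plant parameters are generic textbook values, not the authors'):

    ```python
    import numpy as np

    def ise(ki, dt=0.01, t_end=20.0):
        """Integral of squared frequency deviation for a toy one-area AGC loop."""
        m, d, r, tg = 10.0, 1.0, 0.05, 0.5   # inertia, damping, droop, governor lag
        f = pm = integ = cost = 0.0          # freq deviation, mech power, integrator
        for _ in range(int(t_end / dt)):
            integ += f * dt
            u = -ki * integ                    # integral AGC control law
            pm += dt * (-pm - f / r + u) / tg  # governor-turbine (first order)
            f += dt * (pm - 0.1 - d * f) / m   # swing equation, 0.1 pu load step
            cost += f * f * dt
        return cost

    rng = np.random.default_rng(3)
    pop = rng.uniform(0.05, 2.0, 20)         # initial population of Ki candidates
    for _ in range(30):                      # generations: keep best half, mutate
        pop = pop[np.argsort([ise(k) for k in pop])][:10]
        children = np.clip(pop + rng.normal(0.0, 0.1, 10), 0.01, 5.0)
        pop = np.concatenate([pop, children])
    best = pop[np.argsort([ise(k) for k in pop])][0]
    print(f"best integral gain Ki = {best:.3f}")
    ```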

  3. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    Directory of Open Access Journals (Sweden)

    Luo Hanwu

    2016-01-01

    Full Text Available This paper presents a novel method to find the initial lightning breakdown current by effectively combining the ATP and MATLAB simulation software, with the aim of evaluating the lightning protection performance of transmission lines. Firstly, an executable ATP simulation model is generated automatically from the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, through an interface program coded in MATLAB. Then data are extracted from the LIS files obtained by executing the ATP simulation model, and the occurrence of transmission line breakdown can be determined from the relevant data in the LIS file. The lightning current amplitude is reduced when breakdown occurs, and increased otherwise. Thus the initial lightning breakdown current of a transmission line with given parameters can be determined accurately by continuously changing the lightning current amplitude, realized by a loop computing algorithm coded in MATLAB. The method proposed in this paper generates the ATP simulation program automatically and facilitates the lightning protection performance assessment of transmission lines.
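
    The amplitude-search loop lends itself to bisection; the sketch below shows that refinement with the ATP/MATLAB co-simulation replaced by a stub predicate (the 28.7 kA threshold is invented purely for illustration):

    ```python
    def breakdown_occurs(amplitude_ka):
        # stand-in for: generate the ATP model, run it, parse the LIS file
        return amplitude_ka >= 28.7               # hypothetical critical current

    def initial_breakdown_current(lo=1.0, hi=200.0, tol=0.1):
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if breakdown_occurs(mid):
                hi = mid                          # breakdown: try a smaller amplitude
            else:
                lo = mid                          # no breakdown: increase amplitude
        return hi

    print(f"initial breakdown current ≈ {initial_breakdown_current():.1f} kA")
    ```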

  4. Automatic cloud-free image generation from high-resolution multitemporal imagery

    Science.gov (United States)

    Han, Youkyung; Bovolo, Francesca; Lee, Won Hee

    2017-04-01

    The aim of this paper is to document the automatic reconstruction of clouds and their cast shadows for the generation of spontaneous cloud-free images from high-resolution multitemporal images. To apply the proposed technique, a cloud-free reference image, which has the same position as a target image acquired at a different time, is required. First, the cloud region in the target image is detected based on integration of thick and peripheral cloud candidate regions. Next, the detected cloud region is restored using the pixel values of the target image by considering their location relative to the reference images. Finally, the pixel values of the restored image are separately normalized to the values of the reference image to generate a natural-looking cloud-free image. Multitemporal KOMPSAT-2 high-resolution images are used to construct study sites for evaluation of the proposed method in diverse cloud-cover cases. The experimental results show that the proposed method can automatically generate cloud-free images from high-resolution multitemporal images with reasonable qualitative and quantitative performance.
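
    As a stand-in for the final normalization step (the abstract does not specify the exact scheme), the sketch below matches the mean and standard deviation of the restored region to the reference image so the filled patch blends in; detection and restoration themselves are not shown.

    ```python
    import numpy as np

    def normalize_patch(restored, reference, mask):
        """mask: boolean array marking the restored cloud region."""
        out = restored.astype(float).copy()
        r, t = reference[mask].astype(float), out[mask]
        out[mask] = (t - t.mean()) / (t.std() + 1e-9) * r.std() + r.mean()
        return out

    rng = np.random.default_rng(4)
    ref = rng.normal(120, 20, (64, 64))                  # cloud-free reference
    tgt = ref.copy()
    mask = np.zeros_like(ref, bool); mask[20:40, 20:40] = True
    tgt[mask] = rng.normal(180, 35, mask.sum())          # patch with mismatched stats
    blended = normalize_patch(tgt, ref, mask)
    print(blended[mask].mean().round(1), blended[mask].std().round(1))
    ```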

  5. Ontorat: automatic generation of new ontology terms, annotations, and axioms based on ontology design patterns.

    Science.gov (United States)

    Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun

    2015-01-01

    It is time-consuming to build an ontology with many terms and axioms. Thus it is desired to automate the process of ontology development. Ontology Design Patterns (ODPs) provide a reusable solution to solve a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format. Inspired by the ODPs and QTTs, the Ontorat web application is developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on a specific ODP(s). The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and a target ontology (optional). The axiom expression settings can be saved as a predesigned Ontorat setting format text file for reuse. The input data file is generated based on a template file created by a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion. Different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms with both logical axioms and rich annotation axioms in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank specific terms in the Biobank Ontology. A collection of ODPs and templates with examples are provided on the Ontorat website and can be reused to facilitate ontology development. With ever increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating a large number of ontology terms by following

  6. Automatic Generation of Mashups for Personalized Commerce in Digital TV by Semantic Reasoning

    Science.gov (United States)

    Blanco-Fernández, Yolanda; López-Nores, Martín; Pazos-Arias, José J.; Martín-Vicente, Manuela I.

    The evolution of information technologies is consolidating recommender systems as essential tools in e-commerce. To date, these systems have focused on discovering the items that best match the preferences, interests and needs of individual users, to end up listing those items by decreasing relevance in some menus. In this paper, we propose extending the current scope of recommender systems to better support trading activities, by automatically generating interactive applications that provide the users with personalized commercial functionalities related to the selected items. We explore this idea in the context of Digital TV advertising, with a system that brings together semantic reasoning techniques and new architectural solutions for web services and mashups.

  7. Design and construction of a graphical interface for automatic generation of simulation code GEANT4

    International Nuclear Information System (INIS)

    Driss, Mozher; Bouzaine Ismail

    2007-01-01

    This work was carried out in the context of an engineering studies final project, accomplished at the Center of Nuclear Sciences and Technologies in Sidi Thabet. The project consists of designing and developing a system based on a graphical user interface that allows automatic generation of simulation code for the GEANT4 engine. This system aims to facilitate the use of GEANT4 by scientists who are not necessarily experts in this engine, and to be used in different areas: research, industry and education. The implementation of this project uses the ROOT library and several languages such as XML and XSL. (Author). 5 refs

  8. Spreadsheet Activities with Conditional Progression and Automatically Generated Feedback and Grades

    Directory of Open Access Journals (Sweden)

    Thomas C Juster

    2013-02-01

    Full Text Available Spreadsheet activities following the Spreadsheets Across the Curriculum (SSAC) model have been modified using VBA programming to automatically generate feedback, calculate grades, and ensure that students complete them in a linear fashion. Feedback is based not only on the value of cells, but also on the formulas used to compute the values. These changes greatly ease the burden of grading on instructors, and help students more quickly master tasks and concepts by providing immediate and directed feedback to their answers. Students performed significantly better on the new spreadsheet activities compared to traditional SSAC versions, with 87% achieving perfect scores of 100%.

  9. Evaluation of user-guided semi-automatic decomposition tool for hexahedral mesh generation

    Directory of Open Access Journals (Sweden)

    Jean Hsiang-Chun Lu

    2017-10-01

    Full Text Available Volumetric decomposition is essential for all-hexahedral mesh generation. Because fully automatic decomposition methods that can generate high-quality hexahedral meshes for arbitrary volumes have yet to be realized, manual decomposition is still frequently required. Manual decomposition is a laborious process and requires a high level of user expertise; therefore, a user-guided semi-automatic tool to reduce the human effort and lower the required expertise is necessary. To date, only a few such approaches have been proposed, and a lack of user evaluation makes it difficult to improve upon them. Based on our previous work, we present a user evaluation of a user-guided semi-automatic tool that provides visual guidance to assist users in determining decomposition solutions, accepts sketch-based inputs to create decomposition surfaces, and simplifies the decomposition commands. This user evaluation investigated (1) the usability of the visual guidance, (2) the types of visual guidance essential for decomposition, (3) the effectiveness of the sketch-based decomposition, and (4) the performance differences between beginner and experienced users of the sketch-based decomposition. The results and user feedback indicate that the tool enables users who have limited prior experience or familiarity with computer-aided engineering software to perform volumetric decomposition more efficiently. The visual guidance increases the success rate of the user's decomposition solution by 28%. The sketch-based decomposition significantly reduces the user's time spent creating decomposition surfaces and setting up decomposition commands, by 46%.

  10. Automatic Generation Control Study in Two Area Reheat Thermal Power System

    Science.gov (United States)

    Pritam, Anita; Sahu, Sibakanta; Rout, Sushil Dev; Ganthia, Sibani; Prasad Ganthia, Bibhu

    2017-08-01

    Industrial pollution is destroying our living environment. An electric grid system has many vital pieces of equipment such as generators, motors, transformers and loads. There is always an imbalance between the sending-end and receiving-end systems, which makes the system unstable, so such errors and faults should be corrected as soon as possible, before they create further system errors and a fall in the efficiency of the whole power system. The main problem arising from such faults is frequency deviation, which causes instability in the power system and may cause permanent damage to the system. The mechanism studied in this paper therefore makes the system stable and balanced by regulating the frequency at both the sending- and receiving-end power systems, using automatic generation control with various controllers and taking a two-area reheat thermal power system into account.

  11. Development and Testing of Automatically Generated ACS Flight Software for the MAP Spacecraft

    Science.gov (United States)

    ODonnell, James R., Jr.; McComas, David C.; Andrews, Stephen F.

    1998-01-01

    By integrating the attitude determination and control system (ACS) analysis and design, flight software development, and flight software testing processes, it is possible to improve the overall spacecraft development cycle, as well as allow for more thorough software testing. One of the ways to achieve this integration is to use code-generation tools to automatically generate components of the ACS flight software directly from a high-fidelity (HiFi) simulation. In the development of the Microwave Anisotropy Probe (MAP) spacecraft, currently underway at the NASA Goddard Space Flight Center, approximately 1/3 of the ACS flight software was automatically generated. In this paper, we will examine each phase of the ACS subsystem and flight software design life cycle: analysis, design, and testing. In the analysis phase, we scoped how much software we would automatically generate and created the initial interface. The design phase included parallel development of the HiFi simulation and the hand-coded flight software components. Everything came together in the test phase, in which the flight software was tested, using results from the HiFi simulation as one of the bases of comparison for testing. Because parts of the spacecraft HiFi simulation were converted into flight software, more care needed to be put into its development and configuration control to support both the HiFi simulation and flight software. The components of the HiFi simulation from which code was generated needed to be designed based on the fact that they would become flight software. This process involved such considerations as protecting against mathematical exceptions, using acceptable module and parameter naming conventions, and using an input/output interface compatible with the rest of the flight software. Maintaining good configuration control was an issue for the HiFi simulation and the flight software, and a way to track the two systems was devised. Finally, an integrated test approach was

  12. Automatic generation control with thyristor controlled series compensator including superconducting magnetic energy storage units

    Directory of Open Access Journals (Sweden)

    Saroj Padhan

    2014-09-01

    Full Text Available In the present work, an attempt has been made to understand the dynamic performance of Automatic Generation Control (AGC) of a multi-area multi-unit thermal–thermal power system with the consideration of a reheat turbine, Generation Rate Constraint (GRC) and time delay. Initially, the gains of the fuzzy PID controller are optimized using the Differential Evolution (DE) algorithm. The superiority of DE is demonstrated by comparing the results with the Genetic Algorithm (GA). After that, the performance of the Thyristor Controlled Series Compensator (TCSC) is investigated. Further, a TCSC is placed in the tie-line and Superconducting Magnetic Energy Storage (SMES) units are considered in both areas. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions from their nominal values. It is observed that the optimum gains of the proposed controller need not be reset even if the system is subjected to wide variations in loading condition and system parameters.

  13. A reusable automatically generated software system for the control of the Large Millimeter Telescope

    Science.gov (United States)

    Souccar, Kamal; Wallace, Gary; Malin, Daniella

    2002-12-01

    A telescope system is composed of a set of real-world objects that are mapped onto software objects whose properties are described in XML configuration files. These XML files are processed to automatically generate user interfaces, underlying communication mechanisms, and extendible source code. Developers need not write user interfaces or communication methods but can focus on the production of scientific results. Any modifications or additions of objects can be easily achieved by editing or generating corresponding XML files and compiling them into the system. This framework can be utilized to implement servo controllers, device drivers, observing algorithms and instrument controllers; and is applicable to any problem domain that requires a user-based interaction with the inputs and outputs of a particular resource or program. This includes telescope systems, instruments, data reduction methods, and database interfaces. The system is implemented using Java, C++, and CORBA.

  14. Automatic deodorizing system for waste water from radioisotope facilities using an ozone generator

    International Nuclear Information System (INIS)

    Kawamura, Hiroko; Hirata, Yasuki

    2002-01-01

    We applied an ozone generator to sterilize and deodorize the waste water from radioisotope facilities. A small tank connected to the generator is placed outside the previously established drainage facility, so as not to oxidize the other apparatus. The waste water is drained 1 m³ at a time from the tank of the drainage facility, treated with ozone and discharged to the sewer. All steps proceed automatically once the draining work is started remotely from the office. The waste water was examined after ozone treatment for 0 (original), 0.5, 1.0, 1.5 and 2.0 h. In the original waste water, the number of coliform groups varied with every examination, probably depending on the colibacilli used in experiments; hydrogen sulfide, biochemical oxygen demand and the offensive odor increased with increasing coliform groups. The ozone treatment remarkably decreased hydrogen sulfide and the offensive odor, and decreased coliform groups when the original water was rich in coliforms. (author)

  15. Optimal gravitational search algorithm for automatic generation control of interconnected power systems

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2014-09-01

    Full Text Available An attempt is made at the effective application of the Gravitational Search Algorithm (GSA) to optimize PI/PIDF controller parameters in Automatic Generation Control (AGC) of interconnected power systems. Initially, a comparison of several conventional objective functions reveals that ITAE yields better system performance. Then, the parameters of the GSA technique are properly tuned and the GSA control parameters are proposed. The superiority of the proposed approach is demonstrated by comparing the results with some recently published techniques such as Differential Evolution (DE), the Bacteria Foraging Optimization Algorithm (BFOA) and the Genetic Algorithm (GA). Additionally, sensitivity analysis is carried out to demonstrate the robustness of the optimized controller parameters to wide variations in operating loading conditions and in the time constants of the speed governor, turbine, and tie-line power. Finally, the proposed approach is extended to a more realistic power system model by considering physical constraints such as a reheat turbine, Generation Rate Constraint (GRC) and Governor Dead Band nonlinearity.
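
    For reference, the ITAE criterion singled out above time-weights the absolute error, penalising slow settling more than ISE does; a quick sketch on a synthetic error trace:

    ```python
    import numpy as np

    def itae(t, e):
        return np.trapz(t * np.abs(e), t)         # integral of time * |error|

    def ise(t, e):
        return np.trapz(e**2, t)                  # integral of squared error

    t = np.linspace(0, 10, 1001)
    e = np.exp(-t) * np.cos(3 * t)                # a decaying frequency-error trace
    print(f"ITAE = {itae(t, e):.3f}, ISE = {ise(t, e):.3f}")
    ```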

  16. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling.

    Directory of Open Access Journals (Sweden)

    Florencio Rusty Punzalan

    Full Text Available Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems, and boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data. We conclude that the proposed program code
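
    To illustrate the replacement idea (our example, not necessarily the generator's exact scheme), here is the diffusion term of a 1-D cable/monodomain equation and an explicit finite-difference substitute of the kind such a scheme would emit:

    ```latex
    % Replacement scheme illustration: PDE term -> numerical solution equation.
    \[
    \frac{\partial V}{\partial t}
      = D\,\frac{\partial^2 V}{\partial x^2} - \frac{I_{\mathrm{ion}}(V)}{C_m}
    \quad\Longrightarrow\quad
    V_i^{n+1} = V_i^{n}
      + \Delta t\left( D\,\frac{V_{i+1}^{n} - 2V_i^{n} + V_{i-1}^{n}}{\Delta x^{2}}
      - \frac{I_{\mathrm{ion}}(V_i^{n})}{C_m} \right)
    \]
    ```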

  17. Automatic verification of SSD and generation of respiratory signal with lasers in radiotherapy: a preliminary study.

    Science.gov (United States)

    Prabhakar, Ramachandran

    2012-01-01

    Source to surface distance (SSD) plays a very important role in external beam radiotherapy treatment verification. In this study, a simple technique has been developed to verify the SSD automatically with lasers; the study also suggests a methodology for determining the respiratory signal with lasers. Two lasers, red and green, are mounted on the collimator head of a Clinac 2300 C/D linac along with a camera to determine the SSD. Software (SSDLas) was developed to estimate the SSD automatically from the images captured by a 12-megapixel camera. To determine the SSD to a patient surface, the external body contour of the central-axis transverse computed tomography (CT) cut is imported into the software. Another important aspect in radiotherapy is the generation of the respiratory signal: the changes in the laser separation as the patient breathes are converted to produce a respiratory signal. Multiple frames of laser images were acquired from the camera mounted on the collimator head, and each frame was analyzed with SSDLas to generate the respiratory signal. The SSD as observed with the ODI on the machine and the SSD measured by the SSDLas software were found to agree within the tolerance limit. The methodology described for generating the respiratory signals will be useful for the treatment of mobile tumors such as those of the lung, liver, breast and pancreas. The technique described for determining the SSD and generating respiratory signals using lasers is cost-effective and simple to implement. Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. Automatic evaluation and data generation for analytical chemistry instrumental analysis exercises

    Directory of Open Access Journals (Sweden)

    Arsenio Muñoz de la Peña

    2014-01-01

    Full Text Available In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for the design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and whose results are evaluated immediately. The proposed system provides each student with a different set of experimental data, generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to provide students with a set of practical problems to complement their regular laboratory work, along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS), an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic MATLAB knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first exercise involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests: a comparison test of means and a paired t-test.

  19. Automatic generation of 3D motifs for classification of protein binding sites

    Directory of Open Access Journals (Sweden)

    Herzyk Pawel

    2007-08-01

    Full Text Available Background: Since many of the new protein structures delivered by high-throughput processes do not have any known function, there is a need for structure-based prediction of protein function. Protein 3D structures can be clustered according to their fold or secondary structures to produce classes of some functional significance. A recent alternative has been to detect specific 3D motifs which are often associated with active sites. Unfortunately, there are very few known 3D motifs, which are usually the result of a manual process, compared to the number of sequential motifs already known. In this paper, we report a method to automatically generate 3D motifs of protein structure binding sites based on consensus atom positions, and evaluate it on a set of adenine-based ligands. Results: Our new approach was validated by automatically generating 3D patterns for the main adenine-based ligands, i.e. AMP, ADP and ATP. Out of the 18 detected patterns, only one, the ADP4 pattern, is not associated with well-defined structural patterns. Moreover, most of the patterns could be classified as binding site 3D motifs. Literature research revealed that the ADP4 pattern actually corresponds to structural features which show complex evolutionary links between ligases and transferases. Therefore, all of the generated patterns prove to be meaningful. Each pattern was used to query all PDB proteins which bind either purine-based or guanine-based ligands, in order to evaluate the classification and annotation properties of the pattern. Overall, our 3D patterns matched 31% of proteins with adenine-based ligands and 95.5% of them were classified correctly. Conclusion: A new metric has been introduced allowing the classification of proteins according to the similarity of the atomic environment of binding sites, and a methodology has been developed to automatically produce 3D patterns from that classification. A study of proteins binding adenine-based ligands showed that

  20. Perfusion CT in acute stroke: effectiveness of automatically-generated colour maps.

    Science.gov (United States)

    Ukmar, Maja; Degrassi, Ferruccio; Pozzi Mucelli, Roberta Antea; Neri, Francesca; Mucelli, Fabio Pozzi; Cova, Maria Assunta

    2017-04-01

    To evaluate the accuracy of perfusion CT (pCT) in the definition of the infarcted core and the penumbra, comparing the data obtained from the evaluation of parametric maps [cerebral blood volume (CBV), cerebral blood flow (CBF) and mean transit time (MTT)] with software-generated colour maps. A retrospective analysis was performed to identify patients with suspected acute ischaemic stroke who had undergone unenhanced CT and pCT within 4.5 h of the onset of symptoms. A qualitative evaluation of the CBV, CBF and MTT maps was performed, followed by an analysis of the colour maps automatically generated by the software. 26 patients were identified, but direct CT follow-up was performed on only 19 patients after 24-48 h. In the qualitative analysis, 14 patients showed perfusion abnormalities. Specifically, 29 perfusion deficit areas were detected, of which 15 suggested penumbra and the remaining 14 suggested infarct. As for the automatically software-generated maps, 12 patients showed perfusion abnormalities; 25 perfusion deficit areas were identified, 15 of which suggested penumbra and the other 10 infarct. McNemar's test showed no statistically significant difference between the two methods of evaluation in highlighting infarcted areas later proved at CT follow-up. We demonstrated that pCT provides good diagnostic accuracy in the identification of acute ischaemic lesions. The limits of identification of the lesions lie mainly at the pons level and in the basal ganglia area. Qualitative analysis proved more efficient at identifying perfusion lesions than software-generated maps; however, software-generated maps proved very useful in the emergency setting. Advances in knowledge: The use of CT perfusion is requested in increasingly more patients in order to optimize treatment, thanks also to the technological evolution of CT, which now allows a whole

  1. OConGraX - Automatically Generating Data-Flow Test Cases for Fault-Tolerant Systems

    Science.gov (United States)

    Nunes, Paulo R. F.; Hanazumi, Simone; de Melo, Ana C. V.

    As systems become more complex to develop and manage, software design faults increase, making fault-tolerant systems essential. To ensure their quality, both normal and exceptional behaviors must be tested and/or verified. Software testing is still a difficult and costly software development task, and considerable effort has been devoted to developing techniques for testing programs' normal behaviors. For the exceptional behavior, however, effective testing techniques and tools are lacking. To help in testing and analyzing fault-tolerant systems, we present in this paper a tool that provides automatic generation of data-flow test cases for objects and exception-handling mechanisms of Java programs, together with data/control-flow graphs for program analysis.

  2. Automatic modulation format recognition for the next generation optical communication networks using artificial neural networks

    Science.gov (United States)

    Guesmi, Latifa; Hraghi, Abir; Menif, Mourad

    2015-03-01

    A new technique for Automatic Modulation Format Recognition (AMFR) in next generation optical communication networks is presented. This technique uses an Artificial Neural Network (ANN) in conjunction with features obtained by Linear Optical Sampling (LOS) of the detected signal at high bit rates, using direct or coherent detection. The use of the LOS method for this purpose is mainly driven by the increase in bit rates, as LOS enables the measurement of eye diagrams. The efficiency of this technique is demonstrated under different transmission impairments such as chromatic dispersion (CD) in the range of -500 to 500 ps/nm, differential group delay (DGD) in the range of 0-15 ps and the optical signal-to-noise ratio (OSNR) in the range of 10-30 dB. The results of numerical simulation for various modulation formats demonstrate successful recognition at known bit rates with a high estimation accuracy, exceeding 99.8%.

  3. HELAC-Onia: an automatic matrix element generator for heavy quarkonium physics

    CERN Document Server

    Shao, Hua-Sheng

    2013-01-01

    By virtue of the Dyson-Schwinger equations, we upgrade the published code HELAC to calculate heavy quarkonium helicity amplitudes in the framework of NRQCD factorization; we dub the new code HELAC-Onia. We rewrote the original HELAC so that the new program can calculate helicity amplitudes for the production of multiple P-wave quarkonium states at hadron colliders and electron-positron colliders, by including new P-wave off-shell currents. Therefore, besides its high efficiency in the computation of multi-leg processes within the Standard Model, HELAC-Onia is also numerically stable in dealing with P-wave quarkonia (e.g. $h_{c,b}$, $\chi_{c,b}$) and P-wave color-octet intermediate states. To the best of our knowledge, it is the first general-purpose automatic quarkonium matrix-element generator based on recursion relations on the market.

  4. AUTOMATIC GENERATION OF BUILDING MODELS WITH LEVELS OF DETAIL 1-3

    Directory of Open Access Journals (Sweden)

    W. Nguatem

    2016-06-01

    Full Text Available We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start with orienting unsorted image sets employing (Mayer et al., 2012), we compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  5. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot's positioning in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled through its components (product, process and resource), and by automatically configuring a sampling-based motion-planning problem and the transition-based rapidly-exploring random tree (T-RRT) algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, demonstrating its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  6. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    Science.gov (United States)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitization of music, music collections have grown enormously, and there is a need to create playlists that filter a collection according to user preferences; this gives rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm used must be able to search the lists thoroughly, taking into account the quality of the playlist under a given set of user constraints. In this paper we run an evolutionary metaheuristic optimization algorithm, Differential Evolution (DE), with different combinations of parameter values and select the best-performing set on four standard test functions. The performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations show that the DE approach with optimized parameter values gives better results.
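
    For reference, the classic DE/rand/1/bin scheme to which the tuned parameters F (mutation factor) and CR (crossover rate) belong can be sketched in a few lines of Python; this is a generic illustration, not the authors' implementation:

```python
import numpy as np

def differential_evolution(f, bounds, pop=30, F=0.8, CR=0.9, iters=200, seed=0):
    """Minimal DE/rand/1/bin minimizer; F and CR are the parameters tuned."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # at least one gene crosses over
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fit[i]:                   # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

# e.g. best, val = differential_evolution(lambda x: (x**2).sum(), [(-5, 5)]*4)
```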

  7. Automatic Test Pattern Generator for Fuzzing Based on Finite State Machine

    Directory of Open Access Journals (Sweden)

    Ming-Hung Wang

    2017-01-01

    Full Text Available With the rapid development of the Internet, several emerging technologies have been adopted to construct fancy, interactive, and user-friendly websites. Among these technologies, HTML5 is a popular one and is widely used in establishing modern sites. However, security issues in these new web technologies also arise and are worthy of investigation. For vulnerability investigation, many previous studies used fuzzing and focused on generation-based approaches to produce test cases; however, these methods require significant knowledge and mental effort to develop the test patterns from which test cases are generated. To lower the entry barrier to fuzzing, in this study we propose a test pattern generation algorithm based on the concept of finite state machines. We apply graph analysis techniques to extract paths from finite state machines and use these paths to construct test patterns automatically. With our approach, fuzzing can be carried out simply by supplying a regular expression corresponding to the test target. To evaluate the performance of our proposal, we conduct an experiment on identifying vulnerabilities of the input attributes in HTML5. According to the results, our approach is not only efficient but also effective for identifying weak validators in HTML5.
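
    The path-extraction idea can be illustrated with a minimal Python sketch, assuming a toy FSM encoded as an adjacency map; all names and the toy grammar are illustrative:

```python
# Hedged sketch: enumerate bounded paths through a finite state machine and
# emit each path's symbol sequence as a fuzzing test pattern.
def extract_patterns(fsm, start, finals, max_len=6):
    """fsm: {state: [(symbol, next_state), ...]}. Returns symbol sequences."""
    patterns, stack = [], [(start, [])]
    while stack:
        state, path = stack.pop()
        if state in finals and path:
            patterns.append(path)
        if len(path) < max_len:              # bound the depth of the search
            for symbol, nxt in fsm.get(state, []):
                stack.append((nxt, path + [symbol]))
    return patterns

# Toy FSM for an HTML5-like length attribute: digits, then an optional unit.
fsm = {'q0': [('1', 'q1')], 'q1': [('2', 'q1'), ('px', 'q2')], 'q2': []}
for p in extract_patterns(fsm, 'q0', {'q1', 'q2'}):
    print(''.join(p))   # each line is one generated test pattern
```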

  8. Deep Learning-Based Data Forgery Detection in Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Fengli [Univ. of Arkansas, Fayetteville, AR (United States); Li, Qinghua [Univ. of Arkansas, Fayetteville, AR (United States)

    2017-10-09

    Automatic Generation Control (AGC) is a key control system in the power grid. It is used to calculate the Area Control Error (ACE) based on frequency and tie-line power flow between balancing areas, and then adjust power generation to maintain the power system frequency in an acceptable range. However, attackers might inject malicious frequency or tie-line power flow measurements to mislead AGC into false generation corrections which would harm power grid operation. Such attacks are hard to detect since they do not violate physical power system models. In this work, we propose algorithms based on Neural Networks and the Fourier Transform to detect data forgery attacks in AGC. Different from the few previous works that rely on accurate load prediction to detect data forgery, our solution only uses the ACE data already available in existing AGC systems. In particular, our solution learns the normal patterns of ACE time series and detects abnormal patterns caused by artificial attacks. Evaluations on a real ACE dataset show that our methods have high detection accuracy.
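
    The paper's detector learns normal ACE patterns with a neural network and the Fourier transform; as a much simpler stand-in that conveys the same flow, a spectral-outlier check can be sketched as follows (window size, threshold k and all names are assumptions):

```python
import numpy as np

def fft_features(window):
    """Normalized magnitude spectrum of one ACE window."""
    spec = np.abs(np.fft.rfft(window - window.mean()))
    return spec / (spec.sum() + 1e-12)

def fit_detector(normal_ace, win=64, k=3.0):
    """Learn normal spectral statistics; flag windows whose spectrum is a
    z-score outlier. Purely illustrative, not the paper's trained model."""
    feats = np.array([fft_features(normal_ace[i:i + win])
                      for i in range(0, len(normal_ace) - win, win)])
    mu, sd = feats.mean(0), feats.std(0) + 1e-12
    def is_forged(window):
        z = np.abs((fft_features(window) - mu) / sd)
        return z.max() > k          # spectral outlier => raise an alarm
    return is_forged
```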

  9. AUTOMATIC GENERATION OF INDOOR NAVIGABLE SPACE USING A POINT CLOUD AND ITS SCANNER TRAJECTORY

    Directory of Open Access Journals (Sweden)

    B. R. Staats

    2017-09-01

    Full Text Available Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may be repositioned to the user's preferences. Therefore, new approaches for the quick recording of indoor environments should be investigated. This paper concentrates on laser scanning with a Mobile Laser Scanner (MLS) device. The MLS device stores a point cloud and its trajectory. If the MLS device is operated by a human, the trajectory contains information which can be used to distinguish different surfaces. In this paper a method is presented for the identification of walkable surfaces based on the analysis of the point cloud and the trajectory of the MLS scanner. This method consists of several steps. First, the point cloud is voxelized. Second, the trajectory is analysed and projected to acquire seed voxels. Third, these seed voxels are grown into floor regions by a region-growing process. By identifying dynamic objects, doors and furniture, these floor regions can be modified so that each region represents a specific navigable space inside a building as a free navigable voxel space. By combining the point cloud and its corresponding trajectory, the walkable space can be identified for any type of building, even if the interior is scanned during business hours.
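
    Two of the steps — voxelizing the point cloud and growing floor regions from seed voxels — can be sketched as below. In the paper the seeds come from projecting the scanner trajectory; here they are simply passed in, and the voxel size is an arbitrary assumption:

```python
import numpy as np
from collections import deque

def voxelize(points, size=0.1):
    """Map (n, 3) points to a set of integer voxel indices (size in metres)."""
    return set(map(tuple, np.floor(points / size).astype(int)))

def grow_floor(voxels, seeds):
    """Breadth-first region growing over adjacent occupied voxels,
    starting from seed voxels derived from the scanner trajectory."""
    region, queue = set(), deque(s for s in seeds if s in voxels)
    while queue:
        v = queue.popleft()
        if v in region:
            continue
        region.add(v)
        x, y, z = v
        for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)]:
            n = (x + dx, y + dy, z + dz)
            if n in voxels and n not in region:
                queue.append(n)
    return region   # one connected, walkable floor region
```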

  10. Performance Evaluation of Antlion Optimizer Based Regulator in Automatic Generation Control of Interconnected Power System

    Directory of Open Access Journals (Sweden)

    Esha Gupta

    2016-01-01

    Full Text Available This paper presents an application of the recently introduced Antlion Optimizer (ALO) to find the parameters of the primary governor loop of thermal generators for successful Automatic Generation Control (AGC) of a two-area interconnected power system. Two standard objective functions, Integral Square Error (ISE) and Integral Time Absolute Error (ITAE), have been employed to carry out this parameter estimation process. The problem is transformed into an optimization problem to obtain the integral gains, speed regulation, and frequency sensitivity coefficient for both areas. The regulator performance obtained from ALO is compared with that of Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Gravitational Search Algorithm (GSA) based regulators. Different types of perturbations and load changes are incorporated to establish the efficacy of the obtained design. It is observed that ALO outperforms all three optimization methods on this real problem. The optimization performance of ALO is compared with the other algorithms on the basis of the standard deviations in the values of parameters and objective functions.
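
    The two objective functions are standard: ISE integrates the squared error, while ITAE weights the absolute error by time and thus penalizes slowly decaying oscillations. A sketch of how they would be computed from a simulated response (signal names assumed):

```python
import numpy as np

def ise(t, e):
    return np.trapz(e**2, t)            # Integral Square Error

def itae(t, e):
    return np.trapz(t * np.abs(e), t)   # Integral Time Absolute Error

# For a two-area AGC study, e(t) might aggregate the deviations, e.g.
# e = abs(df1) + abs(df2) + abs(dPtie), all sampled on the same time grid t.
```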

  11. Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs

    Directory of Open Access Journals (Sweden)

    Khaled Jerbi

    2012-01-01

    Full Text Available In this paper, we introduce the Reconfigurable Video Coding (RVC) standard, based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called Cal Actor Language (CAL). CAL is associated with a set of tools for designing dataflow applications and generating hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still-image decoders are summarized. We show that the obtained results can largely satisfy the real-time constraints for an embedded design on FPGA, as we obtain a throughput of 73 FPS for the MPEG-4 decoder and 34 FPS for the coding and decoding process of the LAR coder on CIF-size video. This work resolves the main limitation of hardware generation from CAL designs.

  12. Solution to automatic generation control problem using firefly algorithm optimized I(λ)D(µ) controller.

    Science.gov (United States)

    Debbarma, Sanjoy; Saikia, Lalit Chandra; Sinha, Nidul

    2014-03-01

    The present work focuses on automatic generation control (AGC) of three unequal-area thermal systems considering reheat turbines and appropriate generation rate constraints (GRC). A fractional-order (FO) controller, named the I(λ)D(µ) controller and based on the CRONE approximation, is proposed for the first time as an appropriate technique to solve the multi-area AGC problem in power systems. A recently developed metaheuristic algorithm known as the firefly algorithm (FA) is used for the simultaneous optimization of the gains and other parameters, such as the order of the integrator (λ) and differentiator (µ) of the I(λ)D(µ) controller and the governor speed regulation parameters (R). The dynamic responses corresponding to the optimized I(λ)D(µ) controller gains, λ, µ, and R are compared with those of classical integer-order (IO) controllers such as I, PI and PID controllers. Simulation results show that the proposed I(λ)D(µ) controller provides more improved dynamic responses and outperforms the IO-based classical controllers. Further, sensitivity analysis confirms the robustness of the optimized I(λ)D(µ) controller to wide changes in system loading conditions and in the size and position of the step load perturbation (SLP). The proposed controller is also found to perform well when SLPs take place simultaneously in any two areas or in all areas. Robustness of the proposed I(λ)D(µ) controller is also tested against system parameter variations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
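
    The paper realizes the fractional-order controller via the CRONE approximation; as a simpler generic stand-in, the Grünwald-Letnikov definition yields a direct numerical approximation of a fractional-order derivative of a sampled signal:

```python
# Hedged stand-in: Grünwald-Letnikov fractional derivative (the paper itself
# uses a CRONE approximation; this is a different, simpler discretization).
import numpy as np

def gl_fractional_diff(f, alpha, h):
    """Order-alpha derivative of sampled signal f with step h; 0 < alpha < 1
    differentiates fractionally, negative alpha integrates fractionally."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        # Recursive binomial weights: w_k = w_{k-1} * (1 - (alpha + 1) / k)
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.array([np.dot(w[:i + 1], f[i::-1]) for i in range(n)])
    return out / h**alpha
```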

  13. Automatic generation and verification of railway interlocking control tables using FSM and NuSMV

    Directory of Open Access Journals (Sweden)

    Mohammad B. YAZDI

    2009-01-01

    Full Text Available Due to their important role in providing safe conditions for train movements, railway interlocking systems are considered safety-critical systems. Their reliability, safety and integrity rely on the reliability and integrity of all stages in their lifecycle, including design, verification, manufacture, testing, operation and maintenance. In this paper, the automatic generation and verification of interlocking control tables, one of the most important stages in the interlocking design process, is addressed by the safety-critical research group in the School of Railway Engineering (SRE). Three subsystems are introduced: a graphical signalling layout planner, a control table generator and a control table verifier. Using the NuSMV model checker, the control table verifier analyses the contents of the control table against safe train movement conditions and checks for any conflicting settings in the table. This includes settings for conflicting routes, signals and points, as well as settings for route isolation and for single and multiple overlap situations. The last two, route isolation and multiple overlap situations, are new outcomes of this work compared with recent publications on the subject.

  14. Integration of Variable Speed Pumped Hydro Storage in Automatic Generation Control Systems

    Science.gov (United States)

    Fulgêncio, N.; Moreira, C.; Silva, B.

    2017-04-01

    Pumped storage power (PSP) plants are expected to be important players in modern electrical power systems when dealing with increasing shares of new renewable energies (NRE) such as solar or wind power. The massive penetration of NRE and the consequent replacement of conventional synchronous units will significantly affect the controllability of the system. In order to evaluate the capability of variable-speed PSP plants to participate in frequency restoration reserve (FRR) provision, taking into account their expected performance in terms of improved ramp response capability, a comparison with conventional hydro units is presented. To address this issue, a three-area test network was considered, together with the corresponding automatic generation control (AGC) systems, which are responsible for re-dispatching the generation units to re-establish the power interchange between areas as well as the nominal system frequency. The main issue under analysis in this paper is the benefit of the fast response of variable-speed PSP with respect to its capability of providing fast power balancing in a control area.

  15. Wolf pack hunting strategy for automatic generation control of an islanding smart distribution network

    International Nuclear Information System (INIS)

    Xi, Lei; Zhang, Zeyu; Yang, Bo; Huang, Linni; Yu, Tao

    2016-01-01

    Highlights: • A mixed homogeneous and heterogeneous multi-agent based wolf pack hunting (WPH) method is proposed. • WPH can effectively handle the ever-increasing penetration of renewable energy in the smart grid. • AGC power dispatch, coordinated control, and electric power autonomy of an ISDN are achieved. - Abstract: As conventional centralized automatic generation control (AGC) is inadequate to handle the ever-increasing penetration of renewable energy and the plug-and-play requirement of the smart grid, this paper proposes a mixed homogeneous and heterogeneous multi-agent based wolf pack hunting (WPH) strategy to achieve fast AGC power dispatch, optimal coordinated control, and electric power autonomy of an islanding smart distribution network (ISDN). A virtual consensus variable is employed to deal with the topology variation resulting from the excess of power limits and to achieve plug-and-play of AGC units. Then an integrated objective of frequency deviation and short-term economic dispatch is developed, such that all units can maintain optimal operation in the presence of load disturbances. Four case studies are undertaken on an ISDN with various distributed generations and microgrids. Simulation results demonstrate that WPH has greater robustness and faster dynamic optimization than conventional approaches, and that it can increase the utilization rate of renewable energy and effectively resolve the coordination and electric power autonomy of the ISDN.

  16. Chemical name extraction based on automatic training data generation and rich feature set.

    Science.gov (United States)

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier to this task is the difficulty of obtaining a sizable, good-quality dataset with which to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.

  17. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may appear random. They are complicated, in the sense of not being ultimately periodic, and they may look complicated, in the sense that it may not be easy to name the rule by which a sequence is generated; however, such a rule always exists. The concept of automatic sequences has applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.
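
    A concrete example: the Thue-Morse sequence is 2-automatic, i.e. a two-state automaton reading the binary digits of n outputs its n-th term. A minimal Python sketch:

```python
def thue_morse(n):
    """Output of a two-state automaton reading the binary digits of n:
    the state tracks the parity of 1-bits, which is exactly the n-th
    term of the Thue-Morse sequence."""
    state = 0
    for bit in bin(n)[2:]:
        if bit == '1':
            state ^= 1
    return state

print([thue_morse(n) for n in range(16)])
# [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```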

  18. Automatic treatment planning facilitates fast generation of high-quality treatment plans for esophageal cancer.

    Science.gov (United States)

    Hansen, Christian Rønn; Nielsen, Morten; Bertelsen, Anders Smedegaard; Hazell, Irene; Holtved, Eva; Zukauskaite, Ruta; Bjerregaard, Jon Kroll; Brink, Carsten; Bernchou, Uffe

    2017-11-01

    The quality of radiotherapy planning has improved substantially in the last decade with the introduction of intensity-modulated radiotherapy. The purpose of this study was to analyze the plan quality and efficacy of automatically (AU) generated VMAT plans for inoperable esophageal cancer patients. Thirty-two consecutive inoperable patients with esophageal cancer originally treated with manually (MA) generated volumetric modulated arc therapy (VMAT) plans were retrospectively replanned using an auto-planning engine. All plans were optimized with one full 6 MV VMAT arc giving 60 Gy to the primary target and 50 Gy to the elective target. The planning techniques were blinded before clinical evaluation by three specialized oncologists. To supplement the clinical evaluation, the optimization time for the AU plan was recorded along with DVH parameters for all plans. Upon clinical evaluation, the AU plan was preferred for 31/32 patients, and for one patient there was no difference between the plans. In terms of DVH parameters, similar target coverage was obtained with the two planning methods. The mean dose to the spinal cord increased by 1.8 Gy using AU (p = .002), whereas the mean lung dose decreased by 1.9 Gy. AU plans were more modulated, as seen by the 12% increase in mean MUs (p = .001). The median optimization time for AU plans was 117 min. The AU plans were in general preferred and showed a lower mean dose to the lungs. The automation of the planning process generated esophageal cancer treatment plans quickly and with high quality.

  19. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    Science.gov (United States)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems.

  20. ScholarLens: extracting competences from research publications for the automatic generation of semantic user profiles

    Directory of Open Access Journals (Sweden)

    Bahar Sateli

    2017-07-01

    Full Text Available Motivation Scientists increasingly rely on intelligent information systems to help them in their daily tasks, in particular for managing research objects, like publications or datasets. The relatively young research field of Semantic Publishing has been addressing the question how scientific applications can be improved through semantically rich representations of research objects, in order to facilitate their discovery and re-use. To complement the efforts in this area, we propose an automatic workflow to construct semantic user profiles of scholars, so that scholarly applications, like digital libraries or data repositories, can better understand their users' interests, tasks, and competences, by incorporating these user profiles in their design. To make the user profiles sharable across applications, we propose to build them based on standard semantic web technologies, in particular the Resource Description Framework (RDF) for representing user profiles and Linked Open Data (LOD) sources for representing competence topics. To avoid the cold start problem, we suggest to automatically populate these profiles by analyzing the publications (co-)authored by users, which we hypothesize reflect their research competences. Results We developed a novel approach, ScholarLens, which can automatically generate semantic user profiles for authors of scholarly literature. For modeling the competences of scholarly users and groups, we surveyed a number of existing linked open data vocabularies. In accordance with the LOD best practices, we propose an RDF Schema (RDFS)-based model for competence records that reuses existing vocabularies where appropriate. To automate the creation of semantic user profiles, we developed a complete, automated workflow that can generate semantic user profiles by analyzing full-text research articles through various natural language processing (NLP) techniques. In our method, we start by processing a set of research articles for a

  1. Comparison of Intensity-Modulated Radiotherapy Planning Based on Manual and Automatically Generated Contours Using Deformable Image Registration in Four-Dimensional Computed Tomography of Lung Cancer Patients

    International Nuclear Information System (INIS)

    Weiss, Elisabeth; Wijesooriya, Krishni; Ramakrishnan, Viswanathan; Keall, Paul J.

    2008-01-01

    Purpose: To evaluate the implications of differences between contours drawn manually and contours generated automatically by deformable image registration for four-dimensional (4D) treatment planning. Methods and Materials: In 12 lung cancer patients, intensity-modulated radiotherapy (IMRT) planning was performed for both manual contours and automatically generated ('auto') contours in mid and peak expiration of 4D computed tomography scans, with the manual contours in peak inspiration serving as the reference for the displacement vector fields. Manual and auto plans were analyzed with respect to their coverage of the manual contours, which were assumed to represent the anatomically correct volumes. Results: Auto contours were on average larger than manual contours by up to 9%. Objective scores, D2% and D98% of the planning target volume, homogeneity and conformity indices, and coverage of normal tissue structures (lungs, heart, esophagus, spinal cord) at defined dose levels were not significantly different between plans (p = 0.22-0.94). Differences were statistically insignificant for the generalized equivalent uniform dose of the planning target volume (p = 0.19-0.94) and normal tissue complication probabilities for lung and esophagus (p = 0.13-0.47). Dosimetric differences >2% or >1 Gy were more frequent in patients with auto/manual volume differences ≥10% (p = 0.04). Conclusions: The applied deformable image registration algorithm produces clinically plausible auto contours in the majority of structures. At this stage, clinical supervision of the auto contouring process is required, and manual interventions may become necessary. Before routine use, further investigations are required, particularly to reduce imaging artifacts.
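
    For equal-volume voxels, D2% and D98% reduce to percentiles of the voxel dose distribution. A sketch (the homogeneity index shown is one common definition and is an assumption here, not necessarily the study's):

```python
import numpy as np

def dvh_metrics(ptv_doses):
    """ptv_doses: 1-D array of PTV voxel doses in Gy, equal voxel volumes."""
    d = np.asarray(ptv_doses)
    d2 = np.percentile(d, 98)    # D2%: near-maximum dose (hottest 2% of volume)
    d98 = np.percentile(d, 2)    # D98%: near-minimum dose (covers 98% of volume)
    hi = (d2 - d98) / np.percentile(d, 50)   # one common homogeneity index
    return d2, d98, hi
```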

  2. Automatic bearing fault diagnosis of permanent magnet synchronous generators in wind turbines subjected to noise interference

    Science.gov (United States)

    Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo

    2018-02-01

    An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subject to low rotating speeds, speed fluctuations, and electrical device noise interference. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronously sampled vibration signal of the faulty bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Experiments on two types of faulty bearings with different fault sizes in a PMSG test rig demonstrate the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.

  3. Multi-stage fuzzy PID power system automatic generation controller in deregulated environments

    International Nuclear Information System (INIS)

    Shayeghi, H.; Shayanfar, H.A.; Jalili, A.

    2006-01-01

    In this paper, a multi-stage fuzzy proportional-integral-derivative (PID) type controller is proposed to solve the automatic generation control (AGC) problem in a power system that operates under deregulation based on the bilateral policy scheme. In each control area, the effects of the possible contracts are treated as a set of new input signals in a modified traditional dynamical model. The multi-stage controller uses a fuzzy switch to blend a proportional-derivative (PD) fuzzy logic controller with an integral fuzzy logic input. The proposed controller operates on fuzzy values, passing the consequence of a prior stage on to the next stage as fact. The salient advantage of this strategy is its high insensitivity to large load changes and disturbances in the presence of plant parameter variations and system nonlinearities. This newly developed strategy leads to a flexible controller with a simple structure that is easy to implement, and therefore it can be useful for real-world power systems. The proposed method is tested on a three-area power system with different contracted scenarios under various operating conditions. The results of the proposed controller are compared with those of the classical fuzzy PID type controller and the classical PID controller through several performance indices to illustrate its robust performance.

  4. Automatic generation of bioinformatics tools for predicting protein-ligand binding sites.

    Science.gov (United States)

    Komiyama, Yusuke; Banno, Masaki; Ueki, Kokoro; Saad, Gul; Shimizu, Kentaro

    2016-03-15

    Predictive tools that model protein-ligand binding on demand are needed to promote ligand research in an innovative drug-design environment. However, it takes considerable time and effort to develop predictive tools that can be applied to individual ligands. An automated production pipeline that can rapidly and efficiently develop user-friendly protein-ligand binding predictive tools would be useful. We developed a system for automatically generating protein-ligand binding predictions. Implementation of this system in a pipeline of Semantic Web technique-based web tools will allow users to specify a ligand and receive the tool within 0.5-1 day. We demonstrated high prediction accuracy for three machine learning algorithms and eight ligands. The source code and web application are freely available for download at http://utprot.net. They are implemented in Python and supported on Linux. Contact: shimizu@bi.a.u-tokyo.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
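
    The on-demand tool generation can be pictured as training one binding-site classifier per requested ligand. A hedged sketch — the feature representation, labels and model choice are assumptions, not the system's actual pipeline:

```python
from sklearn.ensemble import RandomForestClassifier

def build_ligand_tool(residue_features, is_binding_site):
    """Train one predictor for one user-specified ligand. Inputs are
    illustrative: per-residue descriptors for proteins known to bind the
    ligand, and binary labels marking binding-site residues."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(residue_features, is_binding_site)
    return model   # model.predict_proba(new_residues) ranks candidate sites
```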

  5. Automatic generation of bioinformatics tools for predicting protein–ligand binding sites

    Science.gov (United States)

    Banno, Masaki; Ueki, Kokoro; Saad, Gul; Shimizu, Kentaro

    2016-01-01

    Motivation: Predictive tools that model protein–ligand binding on demand are needed to promote ligand research in an innovative drug-design environment. However, it takes considerable time and effort to develop predictive tools that can be applied to individual ligands. An automated production pipeline that can rapidly and efficiently develop user-friendly protein–ligand binding predictive tools would be useful. Results: We developed a system for automatically generating protein–ligand binding predictions. Implementation of this system in a pipeline of Semantic Web technique-based web tools will allow users to specify a ligand and receive the tool within 0.5–1 day. We demonstrated high prediction accuracy for three machine learning algorithms and eight ligands. Availability and implementation: The source code and web application are freely available for download at http://utprot.net. They are implemented in Python and supported on Linux. Contact: shimizu@bi.a.u-tokyo.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26545824

  6. AutoWIG: automatic generation of python bindings for C++ libraries

    Directory of Open Access Journals (Sweden)

    Pierre Fernique

    2018-04-01

    Full Text Available Most Python and R scientific packages incorporate compiled scientific libraries to speed up the code and reuse legacy libraries. While several semi-automatic solutions exist to wrap these compiled libraries, the process of wrapping a large library is cumbersome and time-consuming. In this paper, we introduce AutoWIG, a Python package that automatically wraps compiled libraries into high-level languages using LLVM/Clang technologies and the Mako templating engine. Our approach is automatic, extensible, and applies to complex C++ libraries, composed of thousands of classes or incorporating modern meta-programming constructs.

  7. Experience in connecting the power generating units of thermal power plants to automatic secondary frequency regulation within the united power system of Russia

    International Nuclear Information System (INIS)

    Zhukov, A. V.; Komarov, A. N.; Safronov, A. N.; Barsukov, I. V.

    2009-01-01

    The principles of central control of the power generating units of thermal power plants by automatic secondary frequency and active power overcurrent regulation systems, and the algorithms for interactions between automatic power control systems for the power production units in thermal power plants and centralized systems for automatic frequency and power regulation, are discussed. The order of switching the power generating units of thermal power plants over to control by a centralized system for automatic frequency and power regulation and by the Central Coordinating System for automatic frequency and power regulation is presented. The results of full-scale system tests of the control of power generating units of the Kirishskaya, Stavropol, and Perm GRES (State Regional Electric Power Plants) by the Central Coordinating System for automatic frequency and power regulation at the United Power System of Russia on September 23-25, 2008, are reported.

  8. Automatic Generation and Validation of an ITER Neutronics Model from CAD Data

    International Nuclear Information System (INIS)

    Tsige-Tamirat, H.; Fischer, U.; Serikov, A.; Stickel, S.

    2006-01-01

    Quality assurance rules require consistency between the geometry model used in neutronics Monte Carlo calculations and the underlying engineering CAD model. This can be ensured by automatically converting the CAD geometry data into the representation used by Monte Carlo codes such as MCNP. Suitable conversion algorithms have been previously developed at FZK and were implemented into an interface program. This paper describes the application of the interface program to a CAD model of a 40 degree ITER torus sector for the generation of a neutronics geometry model for MCNP. A CAD model provided by ITER, consisting of all significant components, was analyzed, pre-processed, and converted into MCNP geometry representation. The analysis and pre-processing steps include checking the adequacy of the CAD model for neutronics calculations, in terms of geometric representation and complexity, and applying the corresponding corrections. This step is followed by the conversion of the CAD model into MCNP geometry, including error detection and correction as well as the completion of the model by voids. The conversion process does not introduce any approximations, so the resulting MCNP geometry is fully equivalent to the original CAD geometry. However, there is a moderate increase in complexity, measured in terms of the number of cells and surfaces. The validity of the converted geometry model was shown by comparing the results of stochastic MCNP volume calculations with the volumes provided by the CAD kernel of the interface program. Furthermore, successful MCNP test calculations have been performed to verify the converted ITER model in application calculations. (author)

  9. Embedded Platform for Automatic Testing and Optimizing of FPGA Based Cryptographic True Random Number Generators

    Directory of Open Access Journals (Sweden)

    M. Varchola

    2009-12-01

    Full Text Available This paper deals with an evaluation platform for cryptographic True Random Number Generators (TRNGs) based on the hardware implementation of statistical tests for FPGAs. It was developed to provide an automatic tool that helps to speed up the TRNG design process and can provide new insights into TRNG behavior, as will be shown on a particular example in the paper. It enables on-the-fly testing of the statistical properties of various TRNG designs under various working conditions. Moreover, the tests are suitable for embedding into cryptographic hardware products in order to recognize TRNG output of weak quality and thus increase robustness and reliability. The tests are fully compatible with the FIPS 140 standard and are implemented in VHDL as an IP core for vendor-independent FPGAs. A recent Flash-based Actel Fusion FPGA was chosen for preliminary experiments. The Actel version of the tests possesses an interface to Actel's CoreMP7 soft-core processor, which is fully compatible with the industry-standard ARM7TDMI. Moreover, an identical test suite was implemented on Xilinx Virtex 2 and 5 FPGAs in order to compare the performance of the proposed solution with that of an already published one based on the same FPGAs. A 25% and 65% greater clock frequency, respectively, was achieved while consuming almost equal resources on the Xilinx FPGAs. On top of that, the proposed FIPS 140 architecture is capable of processing one random bit per clock cycle, which results in a throughput of 311.5 Mbps on the Virtex 5 FPGA.
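
    As an example of the embedded statistics, the FIPS 140-2 monobit test counts the ones in a 20,000-bit sample and passes if and only if the count lies strictly between 9725 and 10275:

```python
# FIPS 140-2 monobit test, sketched in Python (the platform implements it
# in VHDL; this only illustrates the statistic being computed).
import random

def fips_monobit(bits):
    assert len(bits) == 20000
    ones = sum(bits)
    return 9725 < ones < 10275   # pass iff strictly inside the FIPS bounds

sample = [random.getrandbits(1) for _ in range(20000)]
print(fips_monobit(sample))      # a healthy generator passes almost always
```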

  10. An eFTD-VP framework for efficiently generating patient-specific anatomically detailed facial soft tissue FE mesh for craniomaxillofacial surgery simulation.

    Science.gov (United States)

    Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime; Liebschner, Michael A K; Xia, James J

    2018-04-01

    Accurate surgical planning and prediction of craniomaxillofacial surgery outcomes require simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art in model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for a patient depends on that of the template model and cannot be adjusted to conduct mesh-density sensitivity analyses. In this study, we propose a new framework for patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy of anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parameterization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy. They can be directly and effectively used for clinical

  11. Field Robotics in Sports: Automatic Generation of guidance Lines for Automatic Grass Cutting, Striping and Pitch Marking of Football Playing Fields

    Directory of Open Access Journals (Sweden)

    Ole Green

    2011-03-01

    Full Text Available Progress is constantly being made and new applications are constantly coming out in the area of field robotics. In this paper, a promising application of field robotics in football playing fields is introduced. An algorithmic approach is presented for generating the way points required to guide a GPS-based field robot through a football playing field to automatically carry out periodic tasks such as grass cutting, pitch and line marking, and lawn striping. Manual operation of these tasks requires very skilful personnel able to work long hours with very high concentration for the football field to be compatible with the standards of the Fédération Internationale de Football Association (FIFA). On the other hand, a GPS-guided vehicle or robot with three implements (grass mower, lawn-striping roller and track-marking illustrator) is capable of working 24 h a day, in most weather and in harsh soil conditions, without loss of quality. The proposed approach to the automatic operation of football playing fields requires no or very limited human intervention, and therefore saves numerous working hours and frees workers to focus on other tasks. An economic feasibility study showed that the proposed method is economically superior to current manual practices.

  12. Field Robotics in Sports: Automatic Generation of Guidance Lines for Automatic Grass Cutting, Striping and Pitch Marking of Football Playing Fields

    Directory of Open Access Journals (Sweden)

    Ibrahim A. Hameed

    2011-03-01

    Full Text Available Progress is constantly being made and new applications are constantly coming out in the area of field robotics. In this paper, a promising application of field robotics in football playing fields is introduced. An algorithmic approach is presented for generating the way points required to guide a GPS-based field robot through a football playing field to automatically carry out periodic tasks such as grass cutting, pitch and line marking, and lawn striping. Manual operation of these tasks requires very skilful personnel able to work long hours with very high concentration for the football field to be compatible with the standards of the Fédération Internationale de Football Association (FIFA). On the other hand, a GPS-guided vehicle or robot with three implements (grass mower, lawn-striping roller and track-marking illustrator) is capable of working 24 h a day, in most weather and in harsh soil conditions, without loss of quality. The proposed approach to the automatic operation of football playing fields requires no or very limited human intervention, and therefore saves numerous working hours and frees workers to focus on other tasks. An economic feasibility study showed that the proposed method is economically superior to current manual practices.
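
    In its simplest form, the way-point generation for parallel mowing or striping passes reduces to a boustrophedon sweep over the pitch rectangle. A toy sketch that ignores headland turns and GPS coordinate conversion:

```python
# Hedged sketch: back-and-forth (boustrophedon) way points for parallel
# passes over a rectangular pitch; all dimensions are illustrative.
def striping_waypoints(width, length, swath):
    """Way points (x, y) in metres; passes run along the pitch length."""
    waypoints, x, forward = [], swath / 2.0, True
    while x < width:
        ys = (0.0, length) if forward else (length, 0.0)
        waypoints += [(x, ys[0]), (x, ys[1])]
        x += swath                 # shift sideways by one implement width
        forward = not forward      # reverse direction for the next pass
    return waypoints

print(striping_waypoints(68.0, 105.0, 4.0)[:6])   # FIFA-size pitch, 4 m mower
```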

  13. DEVELOPMENT OF THE MODEL OF AN AUTOMATIC GENERATION OF TOTAL AMOUNTS OF COMMISSIONS IN INTERNATIONAL INTERBANK PAYMENTS

    Directory of Open Access Journals (Sweden)

    Dmitry N. Bolotov

    2013-01-01

    Full Text Available The article deals with the main form of international payment, the bank transfer, and the correspondent fees that banks charge for funds transiting their correspondent accounts. In order to optimize the cost of international money transfers, there is a need to develop models and a toolkit for the automatic generation of the total amount of commissions in international interbank settlements. Accordingly, an approach to the construction of such a model was developed based on graph theory.
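
    In graph-theoretic terms the task is a shortest-path problem: banks are nodes, correspondent relationships are edges weighted by commissions, and the cheapest transfer route minimizes the summed fees. A generic Dijkstra sketch with an illustrative fee table:

```python
# Generic Dijkstra over a correspondent-bank fee graph; the fee table,
# bank names and amounts are illustrative assumptions.
import heapq

def cheapest_route(fees, src, dst):
    """fees: {bank: {next_bank: commission}}; returns (total_fee, chain)."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, bank, path = heapq.heappop(pq)
        if bank == dst:
            return cost, path
        if bank in seen:
            continue
        seen.add(bank)
        for nxt, fee in fees.get(bank, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + fee, nxt, path + [nxt]))
    return float('inf'), []

fees = {'A': {'B': 15.0, 'C': 25.0}, 'B': {'D': 20.0}, 'C': {'D': 5.0}}
print(cheapest_route(fees, 'A', 'D'))   # (30.0, ['A', 'C', 'D'])
```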

  14. Development of user interface to support automatic program generation of nuclear power plant analysis by module-based simulation system

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu; Mizutani, Naoki; Nakaya, Ken-ichiro; Wakabayashi, Jiro

    1988-01-01

    The Module-based Simulation System (MSS) has been developed to realize a new software work environment enabling versatile dynamic simulation of complex nuclear power systems in a flexible manner. MSS makes full use of modern software technology to replace a large fraction of the human software work in complex, large-scale program development with computer automation. The fundamental methods utilized in MSS, and a developmental study on the human interface system SESS-1 that helps users generate integrated simulation programs automatically, are summarized as follows: (1) To enhance the usability and 'communality' of program resources, the basic mathematical models in common use in nuclear power plant analysis are programmed as 'modules' and stored in a module library. The information on the usage of individual modules is stored in a module database with easy registration, update and retrieval through an interactive management system. (2) Target simulation programs and the input/output files are automatically generated with simple block-wise languages by a precompiler system for module integration. (3) The working time for program development and analysis in an example study of an LMFBR plant thermal-hydraulic transient analysis was demonstrated to be remarkably shortened with the introduction of the interface system SESS-1, developed as an automatic program generation environment. (author)

  15. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Full Text Available Abstract Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a
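
    The affiliation-parsing strategy can be pictured as a heuristic split of the affiliation string into institution, country and e-mail. A toy sketch — the country list is a tiny stand-in, and the first comma-separated field is naively taken as the institution:

```python
import re

COUNTRIES = {'USA', 'United States', 'Canada', 'Japan', 'Germany'}  # stub list

def parse_affiliation(affil):
    """Heuristic split of a PubMed affiliation string, in the spirit of the
    paper's parsing strategy; not the authors' actual rules."""
    m = re.search(r'[\w.+-]+@[\w.-]+', affil)
    email = m.group(0) if m else None
    if email:
        affil = affil.replace(email, '')
    parts = [p.strip(' .') for p in affil.split(',') if p.strip(' .')]
    country = next((p for p in reversed(parts) if p in COUNTRIES), None)
    # Naive choice: treat the first comma-field as the institution.
    return {'institution': parts[0] if parts else None,
            'country': country, 'email': email}

print(parse_affiliation(
    'Department of Epidemiology, Emory University, Atlanta, USA. jdoe@emory.edu'))
```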

  16. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    International Nuclear Information System (INIS)

    Gerhard, M.A.; Sommer, S.C.

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  17. AUTOCASK (AUTOmatic Generation of 3-D CASK models). A microcomputer based system for shipping cask design review analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gerhard, M.A.; Sommer, S.C. [Lawrence Livermore National Lab., CA (United States)

    1995-04-01

    AUTOCASK (AUTOmatic Generation of 3-D CASK models) is a microcomputer-based system of computer programs and databases developed at the Lawrence Livermore National Laboratory (LLNL) for the structural analysis of shipping casks for radioactive material. Model specification is performed on the microcomputer, and the analyses are performed on an engineering workstation or mainframe computer. AUTOCASK is based on 80386/80486 compatible microcomputers. The system is composed of a series of menus, input programs, display programs, a mesh generation program, and archive programs. All data is entered through fill-in-the-blank input screens that contain descriptive data requests.

  18. Automatic Generation and Evaluation of Sentence Graphs out of Word Graphs

    NARCIS (Netherlands)

    Reidsma, Dennis; Priss, U.; Corbett, D.; Angelova, G.

    This paper reports on the development of a system that automatically constructs representations of the meaning of sentences using rules of grammar and a dictionary of word meanings. The meanings of words and sentences are expressed using an extension of knowledge graphs, a semantic network.

  19. Reduction to spark coordinates of data generated by automatic measurement of spark chamber film

    International Nuclear Information System (INIS)

    Maybury, R.; Hart, J.C.

    1976-09-01

    The initial stage in the data reduction for film from two spark chamber experiments is described. The film was automatically measured at the Rutherford Laboratory. The data from these measurements were reduced to a series of spark coordinates for each gap of the spark chambers. Quality control checks are discussed. (author)

  20. Regulatory analysis for the resolution of Generic Issue 125.II.7 ''Reevaluate Provision to Automatically Isolate Feedwater from Steam Generator During a Line Break''

    International Nuclear Information System (INIS)

    Basdekas, D.L.

    1988-09-01

    Generic Issue 125.II.7 addresses the concern related to the automatic isolation of auxiliary feedwater (AFW) to a steam generator with a broken steam or feedwater line. This regulatory analysis provides a quantitative assessment of the costs and benefits associated with the removal of the AFW automatic isolation and concludes that no new regulatory requirements are warranted. 21 refs., 7 tabs

  1. Design and implementation of a control automatic module for the volume extraction of a 99mTc generator

    International Nuclear Information System (INIS)

    Lopez, Yon; Urquizo, Rafael; Gago, Javier; Mendoza, Pablo

    2014-01-01

    A module for the automatic extraction of volumes from 0.05 mL to 1 mL has been developed using a 3D printer, with acrylonitrile butadiene styrene (ABS) as the base material. The design allows automation of the 99mTc eluate input and ejection processes in the 99Mo/99mTc generator prototype; use in other systems is feasible owing to the design's high degree of versatility, depending on the selection of the main components: a precision syringe and a multi-way solenoid valve. An accuracy equivalent to that of commercial equipment has been obtained, but at lower cost. This article describes the mechanical design, the design calculations of the movement mechanism, the electronics and the automatic syringe dispenser control. (authors).

  2. MAGE (M-file/Mif Automatic GEnerator): A graphical interface tool for automatic generation of Object Oriented Micromagnetic Framework configuration files and Matlab scripts for results analysis

    Science.gov (United States)

    Chęciński, Jakub; Frankowski, Marek

    2016-10-01

    We present a tool for the fully-automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows for fast, error-proof and easy creation of Mifs, without the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions complementing it with magnetoresistance and spin-transfer-torque calculations, as well as local magnetization data selection for output. Our software allows for the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronous excitation application. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI allowing automated creation of Matlab scripts suitable for analysis of such data with Fourier and wavelet transforms as well as user-defined operations.
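
    The core Mif-generation idea can be sketched as a template-plus-sweep loop. The snippet below is not MAGE itself, and the template lines are a simplified placeholder rather than a complete OOMMF input.

      # Minimal sketch of the parameter-sweep idea: emit one configuration
      # file per point of a Cartesian sweep over simulation parameters.
      from itertools import product
      from pathlib import Path

      TEMPLATE = """# MIF 2.1
      # auto-generated: field={field_mT} mT, damping={alpha}
      Parameter field_mT {field_mT}
      Parameter alpha {alpha}
      """

      sweep = {
          "field_mT": [10, 20, 50],
          "alpha": [0.01, 0.02],
      }

      outdir = Path("mifs")
      outdir.mkdir(exist_ok=True)
      keys = list(sweep)
      for values in product(*(sweep[k] for k in keys)):
          params = dict(zip(keys, values))
          name = "_".join(f"{k}{v}" for k, v in params.items()) + ".mif"
          (outdir / name).write_text(TEMPLATE.format(**params))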

  3. Can virtual reality improve anatomy education? A randomised controlled study of a computer-generated three-dimensional anatomical ear model.

    Science.gov (United States)

    Nicholson, Daren T; Chalk, Colin; Funnell, W Robert J; Daniel, Sam J

    2006-11-01

    The use of computer-generated 3-dimensional (3-D) anatomical models to teach anatomy has proliferated. However, there is little evidence that these models are educationally effective. The purpose of this study was to test the educational effectiveness of a computer-generated 3-D model of the middle and inner ear. We reconstructed a fully interactive model of the middle and inner ear from a magnetic resonance imaging scan of a human cadaver ear. To test the model's educational usefulness, we conducted a randomised controlled study in which 28 medical students completed a Web-based tutorial on ear anatomy that included the interactive model, while a control group of 29 students took the tutorial without exposure to the model. At the end of the tutorials, both groups were asked a series of 15 quiz questions to evaluate their knowledge of 3-D relationships within the ear. The intervention group's mean score on the quiz was 83%, while that of the control group was 65%. This difference in means was highly significant (P < 0.001). Our findings stand in contrast to the handful of previous randomised controlled trials that evaluated the effects of computer-generated 3-D anatomical models on learning. The equivocal and negative results of these previous studies may be due to the limitations of these studies (such as small sample size) as well as the limitations of the models that were studied (such as a lack of full interactivity). Given our positive results, we believe that further research is warranted concerning the educational effectiveness of computer-generated anatomical models.

  4. Cloud Detection from Satellite Imagery: A Comparison of Expert-Generated and Automatically-Generated Decision Trees

    Science.gov (United States)

    Shiffman, Smadar

    2004-01-01

    Automated cloud detection and tracking is an important step in assessing global climate change via remote sensing. Cloud masks, which indicate whether individual pixels depict clouds, are included in many of the data products that are based on data acquired on board earth satellites. Many cloud-mask algorithms have the form of decision trees, which employ sequential tests that scientists designed based on empirical astrophysics studies and astrophysics simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In this study we explored the potential benefits of automatically-learned decision trees for detecting clouds from images acquired using the Advanced Very High Resolution Radiometer (AVHRR) instrument on board the NOAA-14 weather satellite of the National Oceanic and Atmospheric Administration. We constructed three decision trees for a sample of 8 km daily AVHRR data from 2000 using a decision-tree learning procedure provided within MATLAB(R), and compared the accuracy of the decision trees to the accuracy of the cloud mask. We used ground observations collected by the National Aeronautics and Space Administration's Clouds and the Earth's Radiant Energy System S'COOL project as the gold standard. For the sample data, the accuracy of the automatically learned decision trees was greater than the accuracy of the cloud masks included in the AVHRR data product.
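
    A minimal stand-in for the learning step, using scikit-learn on synthetic per-pixel features; the channel names, thresholds and labels below are invented for illustration, not AVHRR values.

      # Learn a decision tree that maps radiometric features to cloud/clear.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)
      n = 5000
      refl_ch1 = rng.uniform(0, 1, n)            # hypothetical reflectance
      refl_ch2 = rng.uniform(0, 1, n)
      bt_ch4 = rng.uniform(200, 300, n)          # brightness temperature, K
      X = np.column_stack([refl_ch1, refl_ch2, bt_ch4])
      # Toy ground truth: bright and cold pixels count as "cloud" (label 1).
      y = ((refl_ch1 > 0.4) & (bt_ch4 < 265)).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
      print("accuracy vs. toy gold standard:",
            accuracy_score(y_te, tree.predict(X_te)))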

  5. An automatic MRI/SPECT registration algorithm using image intensity and anatomical feature as matching characters: application on the evaluation of Parkinson's disease

    International Nuclear Information System (INIS)

    Lee, J.-D.; Huang, C.-H.; Weng, Y.-H.; Lin, K.-J.; Chen, C.-T.

    2007-01-01

    Single-photon emission computed tomography (SPECT) of dopamine transporters with 99mTc-TRODAT-1 has recently been proposed to offer valuable information in assessing the functionality of dopaminergic systems. Magnetic resonance imaging (MRI) and SPECT imaging are important in the noninvasive examination of dopamine concentration in vivo. Therefore, this investigation presents an automated MRI/SPECT image registration algorithm based on a new similarity metric. This similarity metric combines anatomical features that are characterized by specific binding, the mean count per voxel in the putamens and caudate nuclei, and the distribution of image intensity that is characterized by normalized mutual information (NMI). A preprocess, a novel two-cluster SPECT normalization algorithm, is also presented for MRI/SPECT registration. Clinical MRI/SPECT data from 18 healthy subjects and 13 Parkinson's disease (PD) patients are used to validate the performance of the proposed algorithms. An appropriate color map, such as 'rainbow', for image display enables the two-cluster SPECT normalization algorithm to provide clinically meaningful visual contrast. The proposed registration scheme reduces the target registration error from >7 mm for a conventional registration algorithm based on NMI to approximately 4 mm. The error in the specific/nonspecific 99mTc-TRODAT-1 binding ratio, which is employed as a quantitative measure of TRODAT receptor binding, is also reduced from 0.45±0.22 to 0.08±0.06 among healthy subjects and from 0.28±0.18 to 0.12±0.09 among PD patients.
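
    The NMI term of such a similarity metric can be estimated from a joint intensity histogram, as in the minimal sketch below (toy arrays, not MRI/SPECT data); NMI = (H(A) + H(B)) / H(A,B), so identical images score the maximum of 2.

      import numpy as np

      def entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      def nmi(img_a, img_b, bins=32):
          """Normalized mutual information from a joint histogram."""
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

      rng = np.random.default_rng(0)
      a = rng.normal(size=(64, 64))
      print(nmi(a, a), nmi(a, rng.normal(size=(64, 64))))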

  6. Solution Approach to Automatic Generation Control Problem Using Hybridized Gravitational Search Algorithm Optimized PID and FOPID Controllers

    Directory of Open Access Journals (Sweden)

    DAHIYA, P.

    2015-05-01

    This paper presents the application of a hybrid opposition-based disruption operator in the gravitational search algorithm (DOGSA) to solve the automatic generation control (AGC) problem of a four-area hydro-thermal-gas interconnected power system. The proposed DOGSA approach combines the advantages of opposition-based learning, which enhances the speed of convergence, and the disruption operator, which has the ability to further explore and exploit the search space of the standard gravitational search algorithm (GSA). The addition of these two concepts to GSA increases its flexibility for solving complex optimization problems. This paper addresses the design and performance analysis of DOGSA-based proportional integral derivative (PID) and fractional order proportional integral derivative (FOPID) controllers for the automatic generation control problem. The proposed approaches are demonstrated by comparing the results with the standard GSA, the opposition learning based GSA (OGSA) and the disruption based GSA (DGSA). A sensitivity analysis is also carried out to study the robustness of the DOGSA-tuned controllers in accommodating variations in operating load conditions, the tie-line synchronizing coefficient, and the time constants of the governor and turbine. Further, the approaches are extended to a more realistic power system model by considering physical constraints such as the thermal turbine generation rate constraint, speed governor dead band and time delay.

  7. Automatic Generation of Structural Building Descriptions from 3D Point Cloud Scans

    DEFF Research Database (Denmark)

    Ochmann, Sebastian; Vock, Richard; Wessel, Raoul

    2013-01-01

    We present a new method for automatic semantic structuring of 3D point clouds representing buildings. In contrast to existing approaches which either target the outside appearance like the facade structure or rather low-level geometric structures, we focus on the building’s interior using indoor...... scans to derive high-level architectural entities like rooms and doors. Starting with a registered 3D point cloud, we probabilistically model the affiliation of each measured point to a certain room in the building. We solve the resulting clustering problem using an iterative algorithm that relies...

  8. Automatic Generation of Machine Emulators: Efficient Synthesis of Robust Virtual Machines for Legacy Software Migration

    DEFF Research Database (Denmark)

    Franz, Michael; Gal, Andreas; Probst, Christian

    2006-01-01

    As older mainframe architectures become obsolete, the corresponding legacy software is increasingly executed via platform emulators running on top of more modern commodity hardware. These emulators are virtual machines that often include a combination of interpreters and just-in-time compilers.... Implementing interpreters and compilers for each combination of emulated and target platform independently of each other is a redundant and error-prone task. We describe an alternative approach that automatically synthesizes specialized virtual-machine interpreters and just-in-time compilers, which

  9. Automatic selection of informative sentences: The sentences that can generate multiple choice questions

    Directory of Open Access Journals (Sweden)

    Mukta Majumder

    2014-12-01

    Traditional education cannot meet the expectations and requirements of a Smart City; it requires more advanced forms such as active learning and ICT-based education. Multiple choice questions (MCQs) play an important role in educational assessment and active learning, which has a key role in Smart City education. MCQs are effective for assessing the understanding of well-defined concepts. Only a fraction of all the sentences of a text contain well-defined concepts or information that can be asked as an MCQ. These informative sentences must be identified first in order to prepare multiple choice questions manually or automatically. In this paper we propose a technique for the automatic identification of such informative sentences that can act as the basis of MCQs. The technique is based on parse structure similarity. A reference set of parse structures is compiled with the help of existing MCQs. The parse structure of a new sentence is compared with the reference structures, and if similarity is found the sentence is considered a potential candidate. Next, a rule-based post-processing module works on these potential candidates to select the final set of informative sentences. The proposed approach is tested in the sports domain, where many MCQs are readily available for preparing the reference set of structures. The quality of the sentences selected by the system is evaluated manually. The experimental results show that the proposed technique is quite promising.
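
    A simplified stand-in for the parse-structure matching step: here part-of-speech tag sequences are compared with a generic sequence matcher instead of full parse trees, and the tag sequences are invented for illustration.

      # Flag a sentence as MCQ-worthy if its POS sequence is close to any
      # reference sequence compiled from sentences underlying existing MCQs.
      from difflib import SequenceMatcher

      reference_structures = [
          ("NNP", "VBD", "CD", "NNS", "IN", "DT", "NN"),  # "X scored 3 goals in the match"
          ("NNP", "VBD", "DT", "NN", "IN", "CD"),
      ]

      def is_informative(pos_tags, threshold=0.8):
          return any(
              SequenceMatcher(None, pos_tags, ref).ratio() >= threshold
              for ref in reference_structures
          )

      candidate = ("NNP", "VBD", "CD", "NNS", "IN", "DT", "NN")
      print(is_informative(candidate))   # True: matches the first reference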

  10. A NEW APPROACH FOR THE SEMI-AUTOMATIC TEXTURE GENERATION OF THE BUILDINGS FACADES, FROM TERRESTRIAL LASER SCANNER DATA

    Directory of Open Access Journals (Sweden)

    E. Oniga

    2012-07-01

    The result of terrestrial laser scanning is an impressive number of spatial points, each characterized by its position (X, Y and Z co-ordinates), by the value of the laser reflectance and by its real color, expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from the georeferenced digital images, acquired with a high-resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for semi-automatic texture generation, using the color information (the RGB values of every point acquired by terrestrial laser scanning) and the 3D surfaces defining the building facades, generated with the Leica Cyclone software. In the first step, the operator defines the limiting value, i.e. the minimum distance between a point and the closest surface. The second step consists in calculating the distances, i.e. the perpendiculars drawn from each point to the closest surface. In the third step, the points whose 3D coordinates are known are associated with the closest surface, depending on the limiting value. The fourth step consists in computing the Voronoi diagram for the points that belong to a surface. The final step automatically associates the RGB color value with the corresponding polygon of the Voronoi diagram. The advantage of this algorithm is that a photorealistic 3D model of the building can be obtained in a semi-automatic manner.
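
    Steps one to three of the algorithm amount to thresholded point-to-plane assignment; a toy sketch with invented planes and points:

      import numpy as np

      points = np.array([[0.0, 0.1, 1.0], [2.0, 3.0, 0.02], [5.0, 0.0, 0.0]])
      # Each plane as (unit normal n, offset d), so that n.x + d = 0.
      planes = [
          (np.array([0.0, 0.0, 1.0]), 0.0),    # ground plane z = 0
          (np.array([1.0, 0.0, 0.0]), -5.0),   # facade plane x = 5
      ]

      LIMIT = 0.05  # metres; the operator-defined limiting value

      def associate(points, planes, limit):
          """Assign each point to its nearest plane if within the limit."""
          assignment = {}
          for i, p in enumerate(points):
              dists = [abs(n @ p + d) for n, d in planes]
              j = int(np.argmin(dists))
              if dists[j] <= limit:
                  assignment.setdefault(j, []).append(i)
          return assignment

      print(associate(points, planes, LIMIT))  # {0: [1], 1: [2]}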

  11. Wind power integration into the automatic generation control of power systems with large-scale wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Altin, Müfit

    2014-01-01

    Transmission system operators have an increased interest in the active participation of wind power plants (WPP) in the power balance control of power systems with large wind power penetration. The emphasis in this study is on the integration of WPPs into the automatic generation control (AGC......) of the power system. The present paper proposes a coordinated control strategy for the AGC between combined heat and power plants (CHPs) and WPPs to enhance the security and the reliability of a power system operation in the case of a large wind power penetration. The proposed strategy, described...... and exemplified for the future Danish power system, takes the hour-ahead regulating power plan for generation and power exchange with neighbouring power systems into account. The performance of the proposed strategy for coordinated secondary control is assessed and discussed by means of simulations for different...

  12. Automatic generation of groundwater model hydrostratigraphy from AEM resistivity and boreholes

    DEFF Research Database (Denmark)

    Marker, Pernille Aabye; Foged, N.; Christiansen, A. V.

    2014-01-01

    distribution govern groundwater flow. The coupling between hydrological and geophysical parameters is managed using a translator function with spatially variable parameters followed by a 3D zonation. The translator function translates geophysical resistivities into clay fractions and is calibrated...... with observed lithological data. Principal components are computed for the translated clay fractions and geophysical resistivities. Zonation is carried out by k-means clustering on the principal components. The hydraulic parameters of the zones are determined in a hydrological model calibration using head...... and discharge observations. The method was applied to field data collected at a Danish field site. Our results show that a competitive hydrological model can be constructed from the AEM dataset using the automatic procedure outlined above....
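
    The zonation step (principal components followed by k-means) can be sketched with scikit-learn on synthetic clay-fraction/resistivity pairs; the data below are invented, not the Danish field data.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      n_cells = 1000
      clay_fraction = np.clip(rng.normal(0.4, 0.2, n_cells), 0, 1)
      # Toy physics: resistivity decreases with clay content, plus noise.
      log_resistivity = 2.0 - 1.5 * clay_fraction + rng.normal(0, 0.1, n_cells)

      features = np.column_stack([clay_fraction, log_resistivity])
      scores = PCA(n_components=2).fit_transform(features)
      zones = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
      print(np.bincount(zones))   # cell count per hydrostratigraphic zone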

  13. Automatic generation of groundwater model hydrostratigraphy from AEM resistivity and boreholes

    DEFF Research Database (Denmark)

    Marker, Pernille Aabye; Foged, N.; Christiansen, A. V.

    2014-01-01

    and heterogeneity, which spatially scarce borehole lithology data may overlook, are well resolved in AEM surveys. This study presents a semi-automatic sequential hydrogeophysical inversion method for the integration of AEM and borehole data into regional groundwater models in sedimentary areas, where sand/clay... distribution govern groundwater flow. The coupling between hydrological and geophysical parameters is managed using a translator function with spatially variable parameters followed by a 3D zonation. The translator function translates geophysical resistivities into clay fractions and is calibrated... with observed lithological data. Principal components are computed for the translated clay fractions and geophysical resistivities. Zonation is carried out by k-means clustering on the principal components. The hydraulic parameters of the zones are determined in a hydrological model calibration using head...

  14. Automatic Generation of Structural Building Descriptions from 3D Point Cloud Scans

    DEFF Research Database (Denmark)

    Ochmann, Sebastian; Vock, Richard; Wessel, Raoul

    2013-01-01

    We present a new method for automatic semantic structuring of 3D point clouds representing buildings. In contrast to existing approaches which either target the outside appearance like the facade structure or rather low-level geometric structures, we focus on the building’s interior using indoor...... scans to derive high-level architectural entities like rooms and doors. Starting with a registered 3D point cloud, we probabilistically model the affiliation of each measured point to a certain room in the building. We solve the resulting clustering problem using an iterative algorithm that relies...... on the estimated visibilities between any two locations within the point cloud. With the segmentation into rooms at hand, we subsequently determine the locations and extents of doors between adjacent rooms. In our experiments, we demonstrate the feasibility of our method by applying it to synthetic as well...

  15. CarSim: Automatic 3D Scene Generation of a Car Accident Description

    NARCIS (Netherlands)

    Egges, A.; Nijholt, A.; Nugues, P.

    2001-01-01

    The problem of generating a 3D simulation of a car accident from a written description can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two system parts, we designed a template formalism to represent a written accident report.

  16. Automatically Generating Questions to Support the Acquisition of Particle Verbs: Evaluating via Crowdsourcing

    Science.gov (United States)

    Chinkina, Maria; Ruiz, Simón; Meurers, Detmar

    2017-01-01

    We integrate insights from research in Second Language Acquisition (SLA) and Computational Linguistics (CL) to generate text-based questions. We discuss the generation of wh- questions as functionally-driven input enhancement facilitating the acquisition of particle verbs and report the results of two crowdsourcing studies. The first study shows…

  17. Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems

    Science.gov (United States)

    Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.

    2008-08-01

    This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners active in the real-time embedded systems domain. The Gene-Auto code generator will significantly improve the current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.

  18. Automatic Generation of Overlays and Offset Values Based on Visiting Vehicle Telemetry and RWS Visuals

    Science.gov (United States)

    Dunne, Matthew J.

    2011-01-01

    The development of computer software as a tool to generate visual displays has led to an overall expansion of automated computer generated images in the aerospace industry. These visual overlays are generated by combining raw data with pre-existing data on the object or objects being analyzed on the screen. The National Aeronautics and Space Administration (NASA) uses this computer software to generate on-screen overlays when a Visiting Vehicle (VV) is berthing with the International Space Station (ISS). In order for Mission Control Center personnel to be a contributing factor in the VV berthing process, computer software similar to that on the ISS must be readily available on the ground to be used for analysis. In addition, this software must perform engineering calculations and save data for further analysis.

  19. CarSim: Automatic 3D Scene Generation of a Car Accident Description

    OpenAIRE

    Egges, A.; Nijholt, A.; Nugues, P.

    2001-01-01

    The problem of generating a 3D simulation of a car accident from a written description can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two system parts, we designed a template formalism to represent a written accident report. The CarSim system processes formal descriptions of accidents and creates corresponding 3D simulations. A planning component models the trajectories and temporal values of every vehicle ...

  20. Effective System for Automatic Bundle Block Adjustment and Ortho Image Generation from Multi Sensor Satellite Imagery

    Science.gov (United States)

    Akilan, A.; Nagasubramanian, V.; Chaudhry, A.; Reddy, D. Rajesh; Sudheer Reddy, D.; Usha Devi, R.; Tirupati, T.; Radhadevi, P. V.; Varadan, G.

    2014-11-01

    Block adjustment is a technique for large-area mapping with images obtained from different remote sensing satellites. The challenge in this process is to handle a huge number of satellite images from different sources, with different resolutions and accuracies, at the system level. This paper explains a system with various tools and techniques to effectively handle the end-to-end chain in large-area mapping and production with a good level of automation, and the provisions for intuitive analysis of the final results in 3D and 2D environments. In addition, the interface for using open-source ortho and DEM references, viz. ETM, SRTM etc., and for displaying ESRI shapes for the image footprints is explained. Rigorous theory, mathematical modelling, workflow automation and sophisticated software engineering tools are included to ensure high photogrammetric accuracy and productivity. Major building blocks like the Georeferencing, Geo-capturing and Geo-Modelling tools included in the block adjustment solution are explained in this paper. To provide an optimal bundle block adjustment solution with high-precision results, the system has been optimized in many stages to exploit the full utilization of hardware resources. The robustness of the system is ensured by handling failures in the automatic procedure and saving the process state at every stage for subsequent restoration from the point of interruption. The results obtained from various stages of the system are presented in the paper.

  1. Automatic generation of virtual worlds from architectural and mechanical CAD models

    International Nuclear Information System (INIS)

    Szepielak, D.

    2003-12-01

    Accelerator projects like the XFEL or the planned linear collider TESLA involve extensive architectural and mechanical design work, resulting in a variety of CAD models. The CAD models show different parts of the project, e.g. the different accelerator components or parts of the building complexes, and they are created and stored by different groups in different formats. A complete CAD model of the accelerator and its buildings is thus difficult to obtain; it would also be extremely large and difficult to handle. This thesis describes the design and prototype development of a tool which automatically creates virtual worlds from different CAD models. The tool enables the user to select a required area for visualization on a map, and then creates a 3D model of the selected area which can be displayed in a web browser. The thesis first discusses the system requirements and provides some background on data visualization. Then, it introduces the system architecture, the algorithms and the technologies used, and finally demonstrates the capabilities of the system using two case studies. (orig.)

  2. An extensible six-step methodology to automatically generate fuzzy DSSs for diagnostic applications

    Science.gov (United States)

    2013-01-01

    Background: The diagnosis of many diseases can often be formulated as a decision problem; uncertainty affects these problems, so that many computerized Diagnostic Decision Support Systems (in the following, DDSSs) have been developed to aid the physician in interpreting clinical data and thus to improve the quality of the whole process. Fuzzy logic, a well-established attempt at the formalization and mechanization of human capabilities in reasoning and deciding with noisy information, can be profitably used. Recently, we informally proposed a general methodology to automatically build DDSSs on top of fuzzy knowledge extracted from data. Methods: We carefully refine and formalize our methodology, which includes six stages, where the first three stages work with crisp rules, whereas the last three are employed on fuzzy models. Its strength relies on its generality and modularity, since it supports the integration of alternative techniques in each of its stages. Results: The methodology is designed and implemented in the form of a modular and portable software architecture according to a component-based approach. The architecture is described in depth, and a summary inspection of the main components in terms of UML diagrams is outlined as well. A first implementation of the architecture has then been realized in Java following the object-oriented paradigm and used to instantiate a DDSS example aimed at accurately diagnosing breast masses as a proof of concept. Conclusions: The results prove the feasibility of the whole methodology implemented in terms of the architecture proposed. PMID:23368970

  3. An extensible six-step methodology to automatically generate fuzzy DSSs for diagnostic applications.

    Science.gov (United States)

    d'Acierno, Antonio; Esposito, Massimo; De Pietro, Giuseppe

    2013-01-01

    The diagnosis of many diseases can often be formulated as a decision problem; uncertainty affects these problems, so that many computerized Diagnostic Decision Support Systems (in the following, DDSSs) have been developed to aid the physician in interpreting clinical data and thus to improve the quality of the whole process. Fuzzy logic, a well-established attempt at the formalization and mechanization of human capabilities in reasoning and deciding with noisy information, can be profitably used. Recently, we informally proposed a general methodology to automatically build DDSSs on top of fuzzy knowledge extracted from data. We carefully refine and formalize our methodology, which includes six stages, where the first three stages work with crisp rules, whereas the last three are employed on fuzzy models. Its strength relies on its generality and modularity, since it supports the integration of alternative techniques in each of its stages. The methodology is designed and implemented in the form of a modular and portable software architecture according to a component-based approach. The architecture is described in depth, and a summary inspection of the main components in terms of UML diagrams is outlined as well. A first implementation of the architecture has then been realized in Java following the object-oriented paradigm and used to instantiate a DDSS example aimed at accurately diagnosing breast masses as a proof of concept. The results prove the feasibility of the whole methodology implemented in terms of the architecture proposed.

  4. Automatic generation of aesthetic patterns on fractal tilings by means of dynamical systems

    International Nuclear Information System (INIS)

    Chung, K.W.; Ma, H.M.

    2005-01-01

    A fractal tiling or f-tiling is a tiling which possesses self-similarity and the boundary of which is a fractal. In this paper, we investigate the classification of fractal tilings with kite-shaped and dart-shaped prototiles from which three new f-tilings are found. Invariant mappings are constructed for the creation of aesthetic patterns on such tilings. A modified convergence time scheme is described, which reflects the rate of convergence of various orbits and at the same time, enhances the artistic appeal of a generated image. A scheme based on the frequency of visit at a pixel is used to generate chaotic attractors
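
    A convergence-time colouring scheme in this spirit can be sketched with Newton's method for z^3 = 1 standing in for the invariant mappings on the f-tilings: each pixel is coloured by how fast its orbit converges.

      import numpy as np

      def convergence_time(grid, max_iter=60, tol=1e-6):
          """Iterations until successive orbit points differ by < tol."""
          z = grid.copy()
          time = np.full(z.shape, max_iter, dtype=int)
          alive = np.ones(z.shape, dtype=bool)
          for k in range(max_iter):
              # Newton step for z**3 - 1; epsilon guards the origin.
              z_new = np.where(alive, z - (z**3 - 1) / (3 * z**2 + 1e-30), z)
              converged = alive & (np.abs(z_new - z) < tol)
              time[converged] = k
              alive &= ~converged
              z = z_new
          return time

      ys, xs = np.mgrid[-1.5:1.5:400j, -1.5:1.5:400j]
      t = convergence_time(xs + 1j * ys)
      print(t.min(), t.max())   # map t through a palette to render an image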

  5. Accuracy of automatic tube compensation in new-generation mechanical ventilators.

    Science.gov (United States)

    Elsasser, Serge; Guttmann, Josef; Stocker, Reto; Mols, Georg; Priebe, Hans-Joachim; Haberthür, Christoph

    2003-11-01

    To compare the performance of flow-adapted compensation of endotracheal tube resistance (automatic tube compensation, ATC) between the original ATC system and ATC systems incorporated in commercially available ventilators. Bench study. University research laboratory. The original ATC system, Dräger Evita 2 prototype, Dräger Evita 4, Puritan-Bennett 840. The four ventilators under investigation were alternately connected via different sized endotracheal tubes and an artificial trachea to an active lung model. Test conditions consisted of two ventilatory modes (ATC vs. continuous positive airway pressure), three different sized endotracheal tubes (inner diameter 7.0, 8.0, and 9.0 mm), two ventilatory rates (15/min and 30/min), and four levels of positive end-expiratory pressure (0, 5, 10, and 15 cm H2O). Performance of tube compensation was assessed by the amount of tube-related (additional) work of breathing (WOBadd), which was calculated on the basis of the pressure gradient across the endotracheal tube. Compared with continuous positive airway pressure, ATC reduced inspiratory WOBadd by 58%, 68%, 50%, and 97% when using the Evita 4, the Evita 2 prototype, the Puritan-Bennett 840, and the original ATC system, respectively. Depending on endotracheal tube diameter and ventilatory pattern, inspiratory WOBadd was 0.12-5.2 J/L with the original ATC system, 1.5-28.9 J/L with the Puritan-Bennett 840, 10.4-21.0 J/L with the Evita 2 prototype, and 10.1-36.1 J/L with the Evita 4 (difference between ventilators at identical test situations, p < .025). Flow-adapted tube compensation by the original ATC system significantly reduced tube-related inspiratory and expiratory work of breathing. The commercially available ATC modes investigated here may be adequate for inspiratory but probably not for expiratory tube compensation.

  6. On the Automatic Generation of Plans for Life Cycle Assembly Processes

    Energy Technology Data Exchange (ETDEWEB)

    CALTON, TERRI L.

    2000-01-01

    Designing products for easy assembly and disassembly during their entire life cycles for purposes including product assembly, product upgrade, product servicing and repair, and product disposal is a process that involves many disciplines. In addition, finding the best solution often involves considering the design as a whole and by considering its intended life cycle. Different goals and manufacturing plan selection criteria, as compared to initial assembly, require re-visiting significant fundamental assumptions and methods that underlie current assembly planning techniques. Previous work in this area has been limited to either academic studies of issues in assembly planning or to applied studies of life cycle assembly processes that give no attention to automatic planning. It is believed that merging these two areas will result in a much greater ability to design for, optimize, and analyze life cycle assembly processes. The study of assembly planning is at the very heart of manufacturing research facilities and academic engineering institutions; and, in recent years a number of significant advances in the field of assembly planning have been made. These advances have ranged from the development of automated assembly planning systems, such as Sandia's Automated Assembly Analysis System Archimedes 3.0©, to the startling revolution in microprocessors and computer-controlled production tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), flexible manufacturing systems (FMS), and computer-integrated manufacturing (CIM). These results have kindled considerable interest in the study of algorithms for life cycle related assembly processes and have blossomed into a field of intense interest. The intent of this manuscript is to bring together the fundamental results in this area, so that the unifying principles and underlying concepts of algorithm design may more easily be implemented in practice.

  7. Design of an optimal SMES for automatic generation control of two-area thermal power system using Cuckoo search algorithm

    Directory of Open Access Journals (Sweden)

    Sabita Chaine

    2015-05-01

    This work presents a methodology adopted to tune the controller parameters of a superconducting magnetic energy storage (SMES) system in the automatic generation control (AGC) of a two-area thermal power system. The gains of the integral controllers of the AGC loop, the proportional controller of the SMES loop and the gains of the current feedback loop of the inductor in the SMES are optimized simultaneously in order to achieve the desired performance. The recently proposed intelligent-technique-based Cuckoo search algorithm (CSA) is applied for the optimization. The sensitivity and robustness of the tuned gains, tested at different operating conditions, prove the effectiveness of fast-acting energy storage devices like SMES in damping out oscillations in a power system when their controllers are properly tuned.

  8. MISMATCH: A basis for semi-automatic functional mixed-signal test-pattern generation

    NARCIS (Netherlands)

    Kerkhoff, Hans G.; Tangelder, R.J.W.T.; Speek, Han; Engin, N.

    1996-01-01

    This paper describes a tool which assists the designer in the rapid generation of functional tests for mixed-signal circuits down to the actual test-signals for the tester. The tool is based on manipulating design data, making use of macro-based test libraries and tester resources provided by the

  9. Extending a User Interface Prototyping Tool with Automatic MISRA C Code Generation

    Directory of Open Access Journals (Sweden)

    Gioacchino Mauro

    2017-01-01

    We are concerned with systems, particularly safety-critical systems, that involve interaction between users and devices, such as the user interface of medical devices. We therefore developed a MISRA C code generator for formal models expressed in the PVSio-web prototyping toolkit. PVSio-web allows developers to rapidly generate realistic interactive prototypes for verifying usability and safety requirements in human-machine interfaces. The visual appearance of the prototypes is based on a picture of a physical device, and the behaviour of the prototype is defined by an executable formal model. Our approach transforms the PVSio-web prototyping tool into a model-based engineering toolkit that, starting from a formally verified user interface design model, will produce MISRA C code that can be compiled and linked into a final product. An initial validation of our tool is presented for the data entry system of an actual medical device.

  10. An expert system for automatic mesh generation for Sn particle transport simulation in parallel environment

    International Nuclear Information System (INIS)

    Apisit, Patchimpattapong; Alireza, Haghighat; Shedlock, D.

    2003-01-01

    An expert system for generating an effective mesh distribution for the SN particle transport simulation has been developed. This expert system consists of two main parts: 1) an algorithm for generating an effective mesh distribution in a serial environment, and 2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. For the first part, the algorithm prepares an effective mesh distribution considering problem physics and the spatial differencing scheme. For the second part, the algorithm determines a parallel-performance-index (PPI), which is defined as the ratio of the granularity to the degree-of-coupling. The parallel-performance-index provides expected performance of an algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems. (authors)

  11. Automatic generation of design structure matrices through the evolution of product models

    DEFF Research Database (Denmark)

    Gopsill, James A.; Snider, Chris; McMahon, Chris

    2016-01-01

    ..., and lengthy redesigns. Thus, the management and monitoring of these dependencies remains a crucial activity in engineering projects and is becoming ever more challenging with the increase in the number of components, component interactions, and component dependencies, in both a structural and a functional sense. For these reasons, tools and methods to support the identification and monitoring of component interactions and dependencies continue to be an active area of research. In particular, design structure matrices (DSMs) have been extensively applied to identify and visualize product... update the DSM structure as a product develops. It follows that the proposition of this paper is to investigate whether an automated and continuously evolving DSM can be generated by monitoring the changes in the digital models that represent the product. This includes models that are generated from...
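
    The proposition can be illustrated by deriving a DSM from co-change counts over model-change events; the event log below is invented for illustration.

      # Models that repeatedly change together are marked as dependent.
      import numpy as np

      models = ["wing_cad", "spar_fem", "loads_sheet", "manual_doc"]
      change_events = [
          {"wing_cad", "spar_fem"},
          {"wing_cad", "spar_fem", "loads_sheet"},
          {"loads_sheet"},
          {"wing_cad", "spar_fem"},
      ]

      idx = {m: i for i, m in enumerate(models)}
      dsm = np.zeros((len(models), len(models)), dtype=int)
      for event in change_events:
          touched = sorted(idx[m] for m in event)
          for i in touched:
              for j in touched:
                  if i != j:
                      dsm[i, j] += 1   # co-change count as dependency strength

      print(dsm)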

  12. An information retrieval system using weighted descriptors generated by automatic frequency counting

    International Nuclear Information System (INIS)

    Komatsubara, Yasutoshi

    1979-01-01

    An information retrieval system with improved relevance is described, in which a weighted descriptor file, generated by feedback of the requester's relevance judgements on pretest results, is used. This method does not require modification of search formulas; it works by only setting weight thresholds and can alleviate the searcher's workload, as examples show. Index word weighting and retrieval word weighting are compared, and some problems to be encountered when retrieval word weighting is combined with operational systems are pointed out. (author)
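
    A toy version of descriptor weighting by frequency counting with relevance feedback (documents and terms invented): descriptors occurring more often in documents judged relevant than in non-relevant ones receive higher weights, and retrieval keeps documents whose weight sum clears a threshold.

      from collections import Counter

      relevant = [{"reactor", "fuel", "cladding"}, {"reactor", "coolant"}]
      nonrelevant = [{"reactor", "turbine"}, {"grid", "turbine"}]

      rel_counts, nonrel_counts = Counter(), Counter()
      for doc in relevant:
          rel_counts.update(doc)
      for doc in nonrelevant:
          nonrel_counts.update(doc)

      def weight(term):
          """Relative frequency in relevant minus non-relevant documents."""
          return (rel_counts[term] / len(relevant)
                  - nonrel_counts[term] / len(nonrelevant))

      def score(doc):
          return sum(weight(t) for t in doc)

      print(score({"reactor", "fuel"}))  # retrieve if above a chosen threshold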

  13. A tool for automatic generation of RTL-level VHDL description of RNS FIR filters

    DEFF Research Database (Denmark)

    Re, Andrea Del; Nannarelli, Alberto; Re, Marco

    2004-01-01

    Although digital filters based on the Residue Number System (RNS) show high performance and low power dissipation, RNS filters are not widely used in DSP systems, because of the complexity of the algorithms involved. We present a tool to design RNS FIR filters which hides the RNS algorithms...... to the designer, and generates a synthesizable VHDL description of the filter taking into account several design constraints such as: delay, area and energy....
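
    The RNS arithmetic such filters build on can be sketched in a few lines: FIR accumulation carried out independently modulo each modulus, with the result recovered by the Chinese remainder theorem (moduli and data chosen for illustration).

      from math import prod

      MODULI = (13, 15, 16)          # pairwise coprime; dynamic range 3120

      def crt(residues):
          """Chinese-remainder reconstruction of an integer from residues."""
          M = prod(MODULI)
          x = 0
          for r, m in zip(residues, MODULI):
              Mi = M // m
              x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
          return x % M

      coeffs = [3, -1, 2]
      samples = [5, 7, 11]           # most recent first

      # Accumulate the dot product channel-wise, each channel mod its modulus.
      acc = [0] * len(MODULI)
      for c, s in zip(coeffs, samples):
          for k, m in enumerate(MODULI):
              acc[k] = (acc[k] + (c % m) * (s % m)) % m

      print(crt(acc), sum(c * s for c, s in zip(coeffs, samples)))  # both 30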

  14. Analysis of Wind Speed Forecasting Error Effects on Automatic Generation Control Performance

    Directory of Open Access Journals (Sweden)

    H. Rajabi Mashhadi

    2014-09-01

    The main goal of this paper is to study statistical indices and to evaluate AGC indices in a power system with large penetration of wind turbine generators (WTGs). The increasing penetration of wind turbine generation calls for further study of its impact on power system frequency control. Frequency deviates when real-time generation and load are unbalanced, and wind turbine generation fluctuates strongly, making the system more unbalanced. The AGC loop then helps to adjust the system frequency and the scheduled tie-line powers. The quality of the AGC loop is measured by certain indices; a good index properly measures AGC performance as the power system operates. One well-known measure in the literature, introduced by NERC, is the Control Performance Standard (CPS). It has previously been claimed that a key factor in the CPS index is related to the standard deviation of the generation error, the installed power and the frequency response. This paper focuses on the impact of a several-hours-ahead wind speed forecast error on this factor. Furthermore, the performance of conventional control in power systems with large-scale wind turbine penetration is evaluated. The effects of the wind speed standard deviation and of the degree of wind farm penetration are analyzed, and the importance of the aforementioned factor is discussed. In addition, the influence of the mean wind speed forecast error on this factor is investigated. The study system is a two-area system with a significant wind farm in one area. The results show that the mean wind speed forecast error has a considerable effect on AGC performance, while the mentioned key factor is insensitive to this mean error.

  15. Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People

    Directory of Open Access Journals (Sweden)

    Alan F. Smeaton

    2010-02-01

    In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better informed decisions on a single sensor’s output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one’s life with many images including the places they go, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors, by augmenting passive visual lifelogs with “Web 2.0” content collected by millions of other individuals.

  16. Automatic Aircraft Structural Topology Generation for Multidisciplinary Optimization and Weight Estimation

    Science.gov (United States)

    Sensmeier, Mark D.; Samareh, Jamshid A.

    2005-01-01

    An approach is proposed for the application of rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process. This should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. A demonstration of this process is shown for two sample aircraft wing designs.

  17. AUTOMATIC GENERATION OF ROAD INFRASTRUCTURE IN 3D FOR VEHICLE SIMULATORS

    Directory of Open Access Journals (Sweden)

    Adam Orlický

    2017-12-01

    One of the modern methods of testing new systems and interfaces in vehicles is testing in a vehicle simulator. Providing quality models of virtual scenes is one of the tasks in simulating the driver-car interaction interface. Nowadays, many programs exist for creating 3D models of road infrastructure, but most of these programs are very expensive or cannot export models for subsequent use. Therefore, a plug-in has been developed at the Faculty of Transportation Sciences in Prague. It can generate road infrastructure according to the Czech standard for designing roads (CSN 73 6101). The uniqueness of this plug-in is that it is the first tool for generating road infrastructure in NURBS representation. This type of representation yields more exact models and allows the transfer to be optimized for creating quality models for vehicle simulators. The scenes created by this plug-in were tested on vehicle simulators. The results have shown that with the newly created scenes drivers had a much better impression in comparison to the previous scenes.

  18. Automatic mesh generation for finite element calculations in the case of thermal loads

    International Nuclear Information System (INIS)

    Cords, H.; Zimmermann, R.

    1975-01-01

    The presentation describes a method to generate finite element nodal point networks on the basis of isothermals and flux lines. Such a mesh provides a relatively fine partitioning in regions where pronounced temperature variations exist. In the case of purely thermal loads a net of this kind is advantageous, since the refinement is provided at exactly those locations where high stress levels are expected. In the present contribution the method was employed to analyze the structural behavior of a nuclear fuel element under operating conditions. The graphite block fuel elements for high temperature reactors are of prismatic shape with a large number of parallel bores in the axial direction. Some of these bores are open at both ends and cooling is effected by helium flowing through. Blind holes contain the fuel as compacts or cartridges. The basic temperature distribution in a horizontal section of the block was obtained by the boundary point least squares method, which yields analytical expressions for both temperature and thermal flux. The corresponding computer code was presented at an earlier SMiRT conference. The method is particularly useful for regular arrays of heat sources and sinks, as encountered in heat exchanger problems. The generated mesh matches the requirements of a subsequent structural analysis with finite elements, provided there are no loads other than thermal ones

  19. Intra-Hour Dispatch and Automatic Generator Control Demonstration with Solar Forecasting - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Coimbra, Carlos F. M. [Univ. of California, San Diego, CA (United States)]

    2016-02-25

    In this project we address multiple resource integration challenges associated with increasing levels of solar penetration that arise from the variability and uncertainty in solar irradiance. We will model the SMUD service region as its own balancing region, and develop an integrated, real-time operational tool that takes solar-load forecast uncertainties into consideration and commits optimal energy resources and reserves for intra-hour and intra-day decisions. The primary objectives of this effort are to reduce power system operation cost by committing appropriate amount of energy resources and reserves, as well as to provide operators a prediction of the generation fleet’s behavior in real time for realistic PV penetration scenarios. The proposed methodology includes the following steps: clustering analysis on the expected solar variability per region for the SMUD system, Day-ahead (DA) and real-time (RT) load forecasts for the entire service areas, 1-year of intra-hour CPR forecasts for cluster centers, 1-year of smart re-forecasting CPR forecasts in real-time for determination of irreducible errors, and uncertainty quantification for integrated solar-load for both distributed and central stations (selected locations within service region) PV generation.

  20. From sequencer to supercomputer: an automatic pipeline for managing and processing next generation sequencing data.

    Science.gov (United States)

    Camerlengo, Terry; Ozer, Hatice Gulcin; Onti-Srinivasan, Raghuram; Yan, Pearlly; Huang, Tim; Parvin, Jeffrey; Huang, Kun

    2012-01-01

    Next Generation Sequencing is highly resource intensive. NGS tasks related to data processing, management and analysis require high-end computing servers or even clusters. Additionally, processing NGS experiments requires suitable storage space and significant manual interaction. At The Ohio State University's Biomedical Informatics Shared Resource, we designed and implemented a scalable architecture to address the challenges associated with the resource-intensive nature of NGS secondary analysis, built around Illumina Genome Analyzer II sequencers and Illumina's Gerald data processing pipeline. The software infrastructure includes a distributed computing platform consisting of a LIMS called QUEST (http://bisr.osumc.edu), an Automation Server, a computer cluster for processing NGS pipelines, and a network attached storage device expandable up to 40TB. The system has been architected to scale to multiple sequencers without requiring additional computing or labor resources. This platform demonstrates how to manage and automate NGS experiments in an institutional or core facility setting.

  1. Radon transform based automatic metal artefacts generation for 3D threat image projection

    Science.gov (United States)

    Megherbi, Najla; Breckon, Toby P.; Flitton, Greg T.; Mouton, Andre

    2013-10-01

    Threat Image Projection (TIP) plays an important role in aviation security. In order to evaluate human security screeners in determining threats, TIP systems project images of realistic threat items into the images of the passenger baggage being scanned. In this proof of concept paper, we propose a 3D TIP method which can be integrated within new 3D Computed Tomography (CT) screening systems. In order to make the threat items appear as if they were genuinely located in the scanned bag, appropriate CT metal artefacts are generated in the resulting TIP images according to the scan orientation, the passenger bag content and the material of the inserted threat items. This process is performed in the projection domain using a novel methodology based on the Radon Transform. The obtained results using challenging 3D CT baggage images are very promising in terms of plausibility and realism.
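
    The projection-domain insertion idea can be sketched with scikit-image's radon/iradon pair; the saturation step below is a crude surrogate for the paper's artefact modelling, not the authors' method, and the phantom stands in for a baggage slice.

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon

      bag = shepp_logan_phantom()                 # stand-in for a bag slice
      threat = np.zeros_like(bag)
      threat[180:200, 180:200] = 5.0              # small dense "metal" block

      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      sinogram = radon(bag, theta=theta) + radon(threat, theta=theta)

      # Crude beam-hardening surrogate: saturate high-attenuation ray sums
      # so the reconstruction develops metal-like streaks.
      sinogram = np.minimum(sinogram, np.percentile(sinogram, 99.5))

      tip_image = iradon(sinogram, theta=theta, filter_name="ramp")
      print(tip_image.shape)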

  2. LigParGen web server: an automatic OPLS-AA parameter generator for organic ligands

    Science.gov (United States)

    Dodda, Leela S.

    2017-01-01

    The accurate calculation of protein/nucleic acid–ligand interactions or condensed phase properties by force field-based methods requires a precise description of the energetics of intermolecular interactions. Despite the progress made in force fields, small molecule parameterization remains an open problem due to the magnitude of the chemical space; the most critical issue is the estimation of a balanced set of atomic charges with the ability to reproduce experimental properties. The LigParGen web server provides an intuitive interface for generating OPLS-AA/1.14*CM1A(-LBCC) force field parameters for organic ligands, in the formats of commonly used molecular dynamics and Monte Carlo simulation packages. This server has high value for researchers interested in studying any phenomena based on intermolecular interactions with ligands via molecular mechanics simulations. It is free and open to all at jorgensenresearch.com/ligpargen, and has no login requirements. PMID:28444340

  3. Automatic generation of active coordinates for quantum dynamics calculations: Application to the dynamics of benzene photochemistry

    International Nuclear Information System (INIS)

    Lasorne, Benjamin; Sicilia, Fabrizio; Bearpark, Michael J.; Robb, Michael A.; Worth, Graham A.; Blancafort, Lluis

    2008-01-01

    A new practical method to generate a subspace of active coordinates for quantum dynamics calculations is presented. These reduced coordinates are obtained as the normal modes of an analytical quadratic representation of the energy difference between excited and ground states within the complete active space self-consistent field method. At the Franck-Condon point, the largest negative eigenvalues of this Hessian correspond to the photoactive modes: those that reduce the energy difference and lead to the conical intersection; eigenvalues close to 0 correspond to bath modes, while modes with large positive eigenvalues are photoinactive vibrations, which increase the energy difference. The efficacy of quantum dynamics run in the subspace of the photoactive modes is illustrated with the photochemistry of benzene, where theoretical simulations are designed to assist optimal control experiments
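
    Numerically, the mode selection reduces to an eigendecomposition of the energy-difference Hessian; a toy illustration with a random symmetric matrix and arbitrary classification thresholds:

      import numpy as np

      rng = np.random.default_rng(2)
      A = rng.normal(size=(6, 6))
      hessian = (A + A.T) / 2     # symmetric toy energy-difference Hessian

      eigvals, eigvecs = np.linalg.eigh(hessian)
      for w in eigvals:
          # Thresholds are arbitrary here; the paper ranks by magnitude.
          if w < -0.5:
              label = "photoactive"      # drives toward the intersection
          elif w > 0.5:
              label = "photoinactive"    # increases the energy gap
          else:
              label = "bath"
          print(f"{w:+.2f}  {label}")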

  4. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    Science.gov (United States)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the
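
    The differentiation idea behind the compared tools can be made concrete with a minimal forward-mode dual-number AD in pure Python; real AD tools transform whole Fortran codes rather than individual functions like this.

      from dataclasses import dataclass
      import math

      @dataclass
      class Dual:
          val: float   # function value
          dot: float   # derivative carried alongside

          def __add__(self, other):
              other = wrap(other)
              return Dual(self.val + other.val, self.dot + other.dot)

          __radd__ = __add__

          def __mul__(self, other):
              other = wrap(other)
              return Dual(self.val * other.val,
                          self.val * other.dot + self.dot * other.val)

          __rmul__ = __mul__

      def wrap(x):
          return x if isinstance(x, Dual) else Dual(float(x), 0.0)

      def sin(x):
          return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

      def f(x):
          return x * x + 3.0 * sin(x)

      # Seed dot = 1 to obtain df/dx at x = 2 alongside f(2).
      print(f(Dual(2.0, 1.0)))   # dot = f'(2) = 2*2 + 3*cos(2)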

  5. Automatic string generation for estimating in vivo length changes of the medial patellofemoral ligament during knee flexion.

    Science.gov (United States)

    Graf, Matthias; Diether, Salomon; Vlachopoulos, Lazaros; Fucentese, Sandro; Fürnstahl, Philipp

    2014-06-01

    Modeling ligaments as three-dimensional strings is a popular method for in vivo estimation of ligament length. The purpose of this study was to develop an algorithm for automated generation of non-penetrating strings between insertion points and to evaluate its feasibility for estimating length changes of the medial patellofemoral ligament during normal knee flexion. Three-dimensional knee models were generated from computed tomography (CT) scans of 10 healthy subjects. The knee joint under weight-bearing was acquired in four flexion positions (0°-120°). The path between insertion points was computed in each position to quantify string length and isometry. The average string length was maximal in 0° of flexion (64.5 ± 3.9 mm between femoral and proximal patellar point; 62.8 ± 4.0 mm between femoral and distal patellar point). It was minimal in 30° (60.0 ± 2.6 mm) for the proximal patellar string and in 120° (58.7 ± 4.3 mm) for the distal patellar string. The insertion points were considered to be isometric in 4 of the 10 subjects. The proposed algorithm appears to be feasible for estimating string lengths between insertion points in an automatic fashion. The length measurements based on CT images acquired under physiological loading conditions may give further insights into knee kinematics.
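
    A back-of-envelope version of the length measurement, reporting straight-line insertion-point distances per flexion pose; the coordinates and the 5 mm isometry tolerance are assumptions, and the paper's strings additionally wrap around bone to stay non-penetrating.

      import numpy as np

      # femoral point, proximal patellar point per flexion angle (mm, toy data)
      poses = {
          0:   (np.array([0.0, 0.0, 0.0]), np.array([60.0, 20.0, 10.0])),
          30:  (np.array([0.0, 0.0, 0.0]), np.array([55.0, 18.0, 12.0])),
          60:  (np.array([0.0, 0.0, 0.0]), np.array([52.0, 15.0, 15.0])),
          120: (np.array([0.0, 0.0, 0.0]), np.array([50.0, 10.0, 20.0])),
      }

      lengths = {ang: float(np.linalg.norm(b - a)) for ang, (a, b) in poses.items()}
      print(lengths)
      # Assumed isometry criterion: length change below 5 mm across flexion.
      spread = max(lengths.values()) - min(lengths.values())
      print("isometric" if spread < 5.0 else "anisometric")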

  6. Wind power integration into the automatic generation control of power systems with large-scale wind power

    Directory of Open Access Journals (Sweden)

    Abdul Basit

    2014-10-01

    Full Text Available Transmission system operators have an increased interest in the active participation of wind power plants (WPPs) in the power balance control of power systems with large wind power penetration. The emphasis in this study is on the integration of WPPs into the automatic generation control (AGC) of the power system. The present paper proposes a coordinated control strategy for the AGC between combined heat and power plants (CHPs) and WPPs to enhance the security and reliability of power system operation in the case of large wind power penetration. The proposed strategy, described and exemplified for the future Danish power system, takes into account the hour-ahead regulating power plan for generation and for power exchange with neighbouring power systems. The performance of the proposed strategy for coordinated secondary control is assessed and discussed by means of simulations of different possible future scenarios in which wind power production is high and conventional production from CHPs is at a minimum level. The results show that the WPPs can actively support the AGC and reduce the real-time power imbalance in the power system by down-regulating their production when the CHPs are unable to provide the required response.
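
    A toy sketch of the coordination logic (all parameters and sign conventions are illustrative assumptions, not the paper's model): the secondary controller computes the area control error (ACE) and dispatches it first to the CHP units; whatever exceeds their available response is covered by down-regulating (curtailing) wind production.

```python
# Minimal sketch of the coordination idea; numbers are illustrative.
def agc_step(delta_f, delta_p_tie, bias, chp_headroom, wind_output):
    """Split the area control error between CHP response and wind curtailment."""
    ace = delta_p_tie + bias * delta_f            # MW; >0 means over-generation here
    chp_share = max(min(ace, chp_headroom), -chp_headroom)
    residual = ace - chp_share
    # WPPs regulate downward only: curtailment between 0 and available wind.
    wind_share = min(max(residual, 0.0), wind_output)
    return ace, chp_share, wind_share

# Example: ACE = 50 + (-200)*(-0.05) = 60 MW, but CHPs can move only 25 MW now.
ace, chp, wind = agc_step(delta_f=-0.05, delta_p_tie=50.0, bias=-200.0,
                          chp_headroom=25.0, wind_output=300.0)
print(f"ACE = {ace:.0f} MW -> CHP {chp:.0f} MW, wind curtailment {wind:.0f} MW")
```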

  7. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    Science.gov (United States)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  8. Automatic generation control of multi-area power systems with diverse energy sources using Teaching Learning Based Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2016-03-01

    Full Text Available This paper presents the design and analysis of a Proportional-Integral-Double Derivative (PIDD) controller for Automatic Generation Control (AGC) of multi-area power systems with diverse energy sources, using the Teaching Learning Based Optimization (TLBO) algorithm. At first, a two-area reheat thermal power system with an appropriate Generation Rate Constraint (GRC) is considered. The design problem is formulated as an optimization problem and TLBO is employed to optimize the parameters of the PIDD controller. The superiority of the proposed TLBO-based PIDD controller is demonstrated by comparing its results with those of recently published optimization techniques such as hybrid Firefly Algorithm and Pattern Search (hFA-PS), Firefly Algorithm (FA), Bacteria Foraging Optimization Algorithm (BFOA), Genetic Algorithm (GA) and conventional Ziegler-Nichols (ZN) tuning for the same interconnected power system. The proposed approach is also extended to a two-area power system with diverse sources of generation such as thermal, hydro, wind and diesel units. The system model includes boiler dynamics, GRC and Governor Dead Band (GDB) non-linearity. Simulation results show that the proposed approach provides better dynamic responses than results recently published in the literature. Further, the study is extended to a three unequal-area thermal power system with different controllers in each area, and the results are compared with a published FA-optimized PID controller for the same system. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions in the range of ±25% from their nominal values to test the robustness.
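
    The TLBO core is simple enough to sketch. Below, the teacher and learner phases are shown on a placeholder objective; in the paper the objective would instead be a time-domain performance index (e.g., ITAE of frequency and tie-line power deviations) obtained by simulating the power system with candidate PIDD gains. Population size, bounds and iteration counts are illustrative.

```python
import numpy as np

def objective(x):
    return np.sum((x - 0.5) ** 2)          # placeholder cost, not the AGC index

def tlbo(n_pop=20, n_dim=4, iters=100, lo=-2.0, hi=2.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (n_pop, n_dim))
    cost = np.apply_along_axis(objective, 1, pop)
    for _ in range(iters):
        # Teacher phase: move the class toward the best solution ("teacher").
        teacher = pop[np.argmin(cost)]
        tf = rng.integers(1, 3)            # teaching factor, 1 or 2
        new = pop + rng.random((n_pop, n_dim)) * (teacher - tf * pop.mean(0))
        new = np.clip(new, lo, hi)
        new_cost = np.apply_along_axis(objective, 1, new)
        better = new_cost < cost
        pop[better], cost[better] = new[better], new_cost[better]
        # Learner phase: each learner moves relative to a random peer.
        for i in range(n_pop):
            j = int(rng.integers(n_pop))
            if j == i:
                continue
            step = pop[j] - pop[i] if cost[j] < cost[i] else pop[i] - pop[j]
            cand = np.clip(pop[i] + rng.random(n_dim) * step, lo, hi)
            c = objective(cand)
            if c < cost[i]:
                pop[i], cost[i] = cand, c
    best = int(np.argmin(cost))
    return pop[best], cost[best]

gains, best_cost = tlbo()
print("best gains on the placeholder problem:", gains, best_cost)
```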

  9. Computer-assisted anatomical placement of a double-bundle ACL through 3D-fitting of a statistically generated femoral template into individual knee geometry

    NARCIS (Netherlands)

    Luites, J. W. H.; Wymenga, A. B.; Sati, M.; Bourquin, Y.; Blankevoort, L.; van der Venne, R.; Kooloos, J. G. M.; Staubli, H. U.

    2000-01-01

    Femoral graft placement is an important factor in the success of ACL reconstruction. Besides improving the accuracy of femoral tunnel placement, Computer Assisted Surgery (CAS) can be used to determine the anatomical location. This requires a 3D femoral template with the position of the anatomical

  10. Hand-eye coordination of a robot for the automatic inspection of steam-generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, D.H.; Song, Y.C.; Kim, J.H.; Kim, J.G.

    2004-01-01

    The inspection of steam-generator tubes in nuclear power plants requires collecting test signals in a highly radiated region that is not accessible to humans. In general, a robot equipped with a camera and a test probe is used to handle such a dangerous environment. The robot moves the probe to a position right below the tube to be inspected, and the probe is then inserted into the tube. The inspection signals are acquired while the probe is being withdrawn. Currently, an operator in a control room controls the whole process remotely. To build a fully automatic inspection system, a control mechanism is first needed to position the probe at the proper location; this is the so-called hand-eye coordination problem. In this paper, a hand-eye coordination method for such a robot is presented. The proposed method consists of two consecutive control modes: rough positioning and fine-tuning. The rough-positioning controller positions the probe near the target using kinematics information and the known environment, and the fine-tuning controller then adjusts the probe to the target using images acquired by the camera attached to the robot. The usefulness of the proposed method has been tested and verified through experiments. (orig.)

  11. Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation.

    Science.gov (United States)

    Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel

    2017-06-15

    Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.
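
    For readers unfamiliar with the machinery, a tiny sketch of two of the ingredients named above (with made-up item parameters, not the test's): the two-parameter logistic (2PL) item response model, and the usual computerised-adaptive rule of administering the item with maximal Fisher information at the current ability estimate.

```python
import numpy as np

# 2PL IRT model: probability of a correct response given ability theta,
# item discrimination a and item difficulty b.
def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

items = np.array([   # columns: discrimination a, difficulty b (illustrative)
    [1.2, -1.0], [0.8, 0.0], [1.5, 0.4], [2.0, 1.1], [1.0, 2.0]])

theta_hat = 0.3      # current ability estimate of the test-taker
info = fisher_information(theta_hat, items[:, 0], items[:, 1])
print("item informations:", np.round(info, 3))
print("next item to administer (max information):", int(np.argmax(info)))
```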

  12. Automatic CT-based finite element model generation for temperature-based death time estimation: feasibility study and sensitivity analysis.

    Science.gov (United States)

    Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Erdmann, Bodo; Weiser, Martin; Zachow, Stefan; Heinrich, Andreas; Güttler, Felix Victor; Teichgräber, Ulf; Mall, Gita

    2017-05-01

    Temperature-based death time estimation is based either on simple phenomenological models of corpse cooling or on detailed physical heat transfer models. The latter are much more complex but allow a higher accuracy of death time estimation, as in principle all relevant cooling mechanisms can be taken into account. Here, a complete workflow for finite element-based cooling simulation is presented and demonstrated on a CT phantom: (1) computed tomography (CT) scan; (2) segmentation of the CT images for thermodynamically relevant features of individual geometries and compilation in a geometric computer-aided design (CAD) model; (3) conversion of the segmentation result into a finite element (FE) simulation model; (4) computation of the model cooling curve (MOD); (5) calculation of the cooling time (CTE). For the first time in FE-based cooling time estimation, the steps from the CT image over segmentation to FE model generation are performed semi-automatically. The calculated cooling times are compared to cooling measurements performed on the phantoms under controlled conditions; in this context, the method is validated using a CT phantom. Some of the phantoms' thermodynamic material parameters had to be determined via independent experiments. Moreover, the impact of geometry and material parameter uncertainties on the estimated cooling time is investigated by a sensitivity analysis.
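
    The inversion step can be illustrated in miniature. The paper inverts a full FE cooling simulation; the sketch below (with illustrative parameters, not the paper's) does the same with a simple Newtonian single-exponential cooling law, solving model(t) = T_measured for the cooling time and perturbing the ambient temperature as a crude stand-in for the sensitivity analysis.

```python
import numpy as np
from scipy.optimize import brentq

T0, T_amb, k = 37.2, 18.0, 0.07        # degC, degC, 1/h (illustrative values)

def model_temperature(t_hours):
    """Newtonian cooling curve, a stand-in for the FE model cooling curve."""
    return T_amb + (T0 - T_amb) * np.exp(-k * t_hours)

T_measured = 27.5
# Cooling time estimate: root of the monotone function model(t) - T_measured.
t_est = brentq(lambda t: model_temperature(t) - T_measured, 0.0, 48.0)
print(f"estimated cooling time: {t_est:.1f} h")

# Crude sensitivity check: perturb the ambient temperature by +/- 1 degC.
for dT in (-1.0, 1.0):
    T_amb_p = T_amb + dT
    f = lambda t: T_amb_p + (T0 - T_amb_p) * np.exp(-k * t) - T_measured
    print(f"T_amb = {T_amb_p:.1f} degC -> t = {brentq(f, 0.0, 48.0):.1f} h")
```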

  13. Expert system for the automatic analysis of eddy current signals from the monitoring of steam generators of a PWR-type reactor

    International Nuclear Information System (INIS)

    Lefevre, F.; Baumaire, A.; Comby, R.; Benas, J.C.

    1990-01-01

    The automation of the monitoring of steam generator tubes required some developments in the field of data processing. The monitoring is performed by means of eddy current tests. Improvements in signal processing and in pattern recognition, associated with artificial intelligence techniques, led EDF (the French electricity utility) to develop an automatic signal processing system. The system, named EXTRACSION (French acronym for Expert System for the Processing and Classification of Signals of Nuclear Nature), ensures coherence between the different fields of knowledge (metallurgy, measurement, signals) during data processing by applying an object-oriented representation.

  14. Automatic Test Data Generation Using Data Flow Information = Veri Akışı Bilgisi Kullanılarak Otomatik Test Verisi Üretimi

    Directory of Open Access Journals (Sweden)

    Rana ABDELAZIZ

    2000-06-01

    Full Text Available This paper presents a tool for automatically generating test data for Pascal programs that satisfy data flow criteria. Unlike existing tools, our tool is not limited to Pascal programs whose program flow graph contains read statements in only one node; it handles read statements appearing in any node of the program flow graph. Moreover, our tool handles loops and arrays, two features that are traditionally difficult for test data generation systems. This allows us to generate tests for larger programs than those previously reported in the literature.
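
    As a concrete illustration of the underlying data flow information (a textbook construction, not the tool itself): def-use pairs can be computed from a reaching-definitions analysis over the program flow graph, as in the Python sketch below for a toy five-node program.

```python
# Node format: (defined_vars, used_vars, successors).
cfg = {
    1: ({"x"}, set(),      [2]),     # read(x)
    2: ({"y"}, {"x"},      [3, 4]),  # y := x + 1
    3: ({"x"}, {"x", "y"}, [5]),     # x := x * y
    4: (set(), {"y"},      [5]),     # write(y)
    5: (set(), {"x"},      []),      # write(x)
}
preds = {n: [m for m in cfg if n in cfg[m][2]] for n in cfg}

# Reaching definitions: facts are (variable, defining_node); a new definition
# of a variable kills all earlier definitions of that variable.
IN, OUT = {n: set() for n in cfg}, {n: set() for n in cfg}
changed = True
while changed:
    changed = False
    for n, (defs, _, _) in cfg.items():
        new_in = set().union(*(OUT[p] for p in preds[n])) if preds[n] else set()
        new_out = {(v, d) for v, d in new_in if v not in defs} | {(v, n) for v in defs}
        if new_in != IN[n] or new_out != OUT[n]:
            IN[n], OUT[n], changed = new_in, new_out, True

# A def-use pair exists when a definition reaches a node that uses the variable.
pairs = sorted((v, d, n) for n, (_, uses, _) in cfg.items()
               for v, d in IN[n] if v in uses)
print("def-use pairs (var, def node, use node):", pairs)
```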

  15. A neural network model for the automatic detection and forecast of convective cells based on meteosat second generation data

    Science.gov (United States)

    Puca, S.; de Leonibus, L.; Zauli, F.; Rosci, P.; Musmanno, L.

    Mesoscale convective systems (MCSs) are often associated with heavy rainfall, thunderstorms and hail showers, frequently causing significant damage. The most intense weather activity occurs during the maturing stage of the development, which in the case of a multi-cell storm is found in the centre of the convective complex. These convective systems may occur in several different unstable air masses: in a cold air mass behind a polar cold front, in the frontal zone of a polar front, and in warm air ahead of a polar warm front. To understand the meteorological situation and apply the best conceptual model, knowledge of the convective cluster alone is often not enough; in many cases forecasters need to know the distribution of the convective cells within the cloudy cluster. A model for the automatic detection and forecasting of convective cells, running in operational mode at the Italian Air Force Meteorological Service (UGM/CNMCA), is proposed here. The application relies on the Meteosat Second Generation infrared (IR) window channels (10.8 μm, 7.3 μm) and the two water vapour (WV) channels (6.2 μm and 7.3 μm), giving as output the detected convective cells and their evolution for the next 15 and 30 minutes. The product output is the latest IR (10.8 μm) image, on which the detected cells, their development and their tracks are represented. This multispectral method, based on a variable threshold method during the detection phase and a neural network algorithm during the forecast phase, allowed us to define a model able to detect the convective cells present in a convective cluster, plot their distribution, and forecast their evolution for the next 15 and 30 minutes with good efficiency. For analysing the performance of the model with Meteosat Second Generation data, different error functions have been evaluated for various meteorological cloud contexts (i.e. high-layer and cirrus clouds). Some methods for
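
    The detection phase lends itself to a small illustration. The sketch below uses synthetic data and an invented locally varying threshold (in the spirit of a variable-threshold method, not the operational algorithm), flagging convective cells as clusters of anomalously cold brightness temperatures; the neural-network tracking and forecast step is omitted.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)
bt = 280.0 + rng.normal(0.0, 2.0, (100, 100))   # 10.8 um brightness temp (K)
bt[40:50, 60:72] -= 60.0                        # embedded deep convective cell
bt[10:14, 10:13] -= 45.0                        # smaller cell

background = ndimage.uniform_filter(bt, size=25)   # local mean brightness temp
mask = bt < background - 30.0                      # locally varying threshold
labels, n_cells = ndimage.label(mask)
centers = ndimage.center_of_mass(mask, labels, range(1, n_cells + 1))
print(f"{n_cells} candidate cells, centroids:",
      [tuple(np.round(c, 1)) for c in centers])
```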

  16. Modeling and simulation of the generation automatic control of electric power systems; Modelado y simulacion del control automatico de generacion de sistemas electricos de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Caballero Ortiz, Ezequiel

    2002-12-01

    This work is devoted to the analysis of the automatic generation control of electric power systems, based on the information generated by the load-frequency control (LFC) loop and the automatic voltage regulator (AVR) loop. The analysis applies classical control theory and feedback control system concepts, as well as concepts from modern control theory. The studies are carried out on a digital computer using MATLAB and the simulation facilities of the SIMULINK tool. This thesis establishes the theoretical and physical concepts of automatic generation control, dividing it into the load-frequency control and automatic voltage regulator loops. The mathematical models of the two control loops are established. The models of the elements are then interconnected in order to integrate the load-frequency control loop, and the digital simulation of the system is carried out. First, the function of primary control in single-area single-machine, single-area multi-machine and multi-area multi-machine power systems is analyzed. Then, the automatic generation control of single-area and multi-area power systems is studied. The economic dispatch concept is established and, with this scheme, a multi-area power system is simulated, after which the energy exchange among areas in steady state is studied. The mathematical models of the component elements of the automatic voltage regulator loop are interconnected. Data according to the nature of each component are generated and their behavior is simulated in order to analyze the system response. The two control loops are interconnected and a simulation is carried out with previously generated data, examining the performance of the automatic generation control and the interaction between the two control loops. Finally, the pole placement and optimal control techniques of modern control theory are applied to the automatic control of an area generation
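
    For orientation, a single-area load-frequency control loop of the kind modeled in such studies can be simulated in a few lines. The sketch below uses textbook-style illustrative parameters (the thesis itself works in MATLAB/Simulink): governor droop provides primary control, and an integral controller provides the secondary (AGC) action that drives the frequency deviation back to zero after a step load change.

```python
# Forward-Euler simulation of a textbook one-area LFC loop (per-unit values).
Tg, Tt, M, D = 0.2, 0.5, 10.0, 1.0   # governor/turbine time consts, inertia, damping
R, Ki = 0.05, 0.3                    # droop, secondary integral gain
dt, steps = 0.01, 6000
dP_load = 0.02                       # 2% step load increase (pu)

df = dPg = dPv = xi = 0.0            # freq dev, mech power, valve, integrator
for _ in range(steps):
    xi += -Ki * df * dt                          # secondary (AGC) action
    dPv += (xi - df / R - dPv) / Tg * dt         # governor dynamics
    dPg += (dPv - dPg) / Tt * dt                 # turbine dynamics
    df += ((dPg - dP_load) - D * df) / M * dt    # swing equation
print(f"frequency deviation after {steps * dt:.0f} s: {df:+.5f} pu")
```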

  17. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-07

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created with the conventional trial-and-error treatment planning process, so it is challenging to quantitatively assess and compare the accuracy of different treatment planning QA models. Therefore, we created a gold standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). The 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients and then applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the gold standard plans for the rectum Dmean, V65 and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted minus achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction errors were 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients led to only minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing, treatment planning QA models, and has been made publicly available. The OVH model was highly accurate
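
    A miniature of the OVH idea (synthetic voxel coordinates; the real construction uses the signed distance to the target surface, whereas this sketch uses the unsigned distance to target voxels): OVH(t) is the fraction of the organ-at-risk volume lying within distance t of the target, which is what the QA model relates to achievable DVH values.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
target = rng.uniform(0, 20, (500, 3))       # target voxel coordinates (mm)
oar = rng.uniform(10, 40, (800, 3))         # organ-at-risk voxel coordinates (mm)

dist, _ = cKDTree(target).query(oar)        # distance of each OAR voxel to target
thresholds = np.linspace(0, 40, 41)
ovh = [(dist <= t).mean() for t in thresholds]
for t, frac in list(zip(thresholds, ovh))[::10]:
    print(f"OVH({t:4.1f} mm) = {frac:.2f}")
```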

  18. An algorithm for generating data accessibility recommendations for flight deck Automatic Dependent Surveillance-Broadcast (ADS-B) applications

    Science.gov (United States)

    2014-09-09

    Automatic Dependent Surveillance-Broadcast (ADS-B) In technology supports the display of traffic data on Cockpit Displays of Traffic Information (CDTIs). The data are used by flight crews to perform defined self-separation procedures, such as the in-t...

  19. Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences

    International Nuclear Information System (INIS)

    Dowling, Jason A.; Sun, Jidi; Pichler, Peter; Rivest-Hénault, David; Ghose, Soumya; Richardson, Haylea; Wratten, Chris; Martin, Jarad; Arm, Jameen; Best, Leah; Chandra, Shekhar S.; Fripp, Jurgen; Menk, Frederick W.; Greer, Peter B.

    2015-01-01

    Purpose: To validate automatic substitute computed tomography (sCT) scans generated from standard T2-weighted (T2w) magnetic resonance (MR) pelvic scans for MR-Sim prostate treatment planning. Patients and Methods: A Siemens Skyra 3T MR imaging (MRI) scanner with laser bridge, flat couch, and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole-pelvis MRI scan (1.6 mm 3-dimensional isotropic T2w SPACE [Sampling Perfection with Application optimized Contrasts using different flip angle Evolution] sequence) was acquired. Three additional small field of view scans were acquired: T2w, T2*w, and T1w flip angle 80° for gold fiducials. Patients received a routine planning CT scan. Manual contouring of the prostate, rectum, bladder, and bones was performed independently on the CT and MR scans. Three experienced observers contoured each organ on MRI, allowing interobserver quantification. To generate a training database, each patient CT scan was coregistered to their whole-pelvis T2w using symmetric rigid registration and structure-guided deformable registration. A new multi-atlas local weighted voting method was used to generate automatic contours and sCT results. Results: The mean error in Hounsfield units between the sCT and corresponding patient CT (within the body contour) was 0.6 ± 14.7 (mean ± 1 SD), with a mean absolute error of 40.5 ± 8.2 Hounsfield units. Automatic contouring results were very close to the expert interobserver level (Dice similarity coefficient): prostate 0.80 ± 0.08, bladder 0.86 ± 0.12, rectum 0.84 ± 0.06, bones 0.91 ± 0.03, and body 1.00 ± 0.003. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same dose prescription was found to be 0.3% ± 0.8%. The 3-dimensional γ pass rate was 1.00 ± 0.00 (2 mm/2%). Conclusions: The MR-Sim setup and automatic s
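
    A toy, 2D version of local weighted voting (synthetic images and an invented MR-to-CT relation, purely to show the mechanics): every registered atlas contributes its CT value at a voxel, weighted by how well its MR patch matches the patient's MR patch there.

```python
import numpy as np

def local_weighted_voting(patient_mr, atlas_mrs, atlas_cts, sigma=10.0, half=2):
    """patient_mr: (H,W); atlas_mrs/atlas_cts: (N,H,W). Returns the sCT (H,W)."""
    n, h, w = atlas_mrs.shape
    a_mr = np.pad(atlas_mrs, [(0, 0), (half, half), (half, half)], mode="edge")
    p_mr = np.pad(patient_mr, half, mode="edge")
    sct = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            pp = p_mr[y:y + 2 * half + 1, x:x + 2 * half + 1]
            ap = a_mr[:, y:y + 2 * half + 1, x:x + 2 * half + 1]
            ssd = ((ap - pp) ** 2).sum(axis=(1, 2))     # patch dissimilarity
            wgt = np.exp(-ssd / (2 * sigma ** 2))       # Gaussian patch weights
            sct[y, x] = (wgt * atlas_cts[:, y, x]).sum() / wgt.sum()
    return sct

# Tiny synthetic demo: three noisy "atlases" of the same 2D anatomy.
rng = np.random.default_rng(0)
truth_mr = rng.uniform(0, 1, (16, 16))
atlas_mrs = truth_mr[None] + rng.normal(0, 0.05, (3, 16, 16))
atlas_cts = 1000 * atlas_mrs - 200          # invented MR -> "HU" relation
sct = local_weighted_voting(truth_mr, atlas_mrs, atlas_cts)
print("mean absolute 'HU' error:", np.abs(sct - (1000 * truth_mr - 200)).mean())
```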

  20. Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences

    Energy Technology Data Exchange (ETDEWEB)

    Dowling, Jason A., E-mail: jason.dowling@csiro.au [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); University of Newcastle, Callaghan, New South Wales (Australia); Sun, Jidi [University of Newcastle, Callaghan, New South Wales (Australia); Pichler, Peter [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Rivest-Hénault, David; Ghose, Soumya [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); Richardson, Haylea [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Wratten, Chris; Martin, Jarad [University of Newcastle, Callaghan, New South Wales (Australia); Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Arm, Jameen [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Best, Leah [Department of Radiology, Hunter New England Health, New Lambton, New South Wales (Australia); Chandra, Shekhar S. [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland (Australia); Fripp, Jurgen [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); Menk, Frederick W. [University of Newcastle, Callaghan, New South Wales (Australia); Greer, Peter B. [University of Newcastle, Callaghan, New South Wales (Australia); Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia)

    2015-12-01

    Purpose: To validate automatic substitute computed tomography (sCT) scans generated from standard T2-weighted (T2w) magnetic resonance (MR) pelvic scans for MR-Sim prostate treatment planning. Patients and Methods: A Siemens Skyra 3T MR imaging (MRI) scanner with laser bridge, flat couch, and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole-pelvis MRI scan (1.6 mm 3-dimensional isotropic T2w SPACE [Sampling Perfection with Application optimized Contrasts using different flip angle Evolution] sequence) was acquired. Three additional small field of view scans were acquired: T2w, T2*w, and T1w flip angle 80° for gold fiducials. Patients received a routine planning CT scan. Manual contouring of the prostate, rectum, bladder, and bones was performed independently on the CT and MR scans. Three experienced observers contoured each organ on MRI, allowing interobserver quantification. To generate a training database, each patient CT scan was coregistered to their whole-pelvis T2w using symmetric rigid registration and structure-guided deformable registration. A new multi-atlas local weighted voting method was used to generate automatic contours and sCT results. Results: The mean error in Hounsfield units between the sCT and corresponding patient CT (within the body contour) was 0.6 ± 14.7 (mean ± 1 SD), with a mean absolute error of 40.5 ± 8.2 Hounsfield units. Automatic contouring results were very close to the expert interobserver level (Dice similarity coefficient): prostate 0.80 ± 0.08, bladder 0.86 ± 0.12, rectum 0.84 ± 0.06, bones 0.91 ± 0.03, and body 1.00 ± 0.003. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same dose prescription was found to be 0.3% ± 0.8%. The 3-dimensional γ pass rate was 1.00 ± 0.00 (2 mm/2%). Conclusions: The MR-Sim setup and automatic s

  1. EXTRACSION: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants; Extracsion: un systeme de controle automatique par courants de Foucault des tubes de generateurs de vapeur de centrales nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Georgel, B.; Zorgati, R.

    1994-12-31

    Improving the speed and quality of eddy current non-destructive testing of steam generator tubes calls for automating all the processes that contribute to the diagnosis. This paper describes how signal processing, pattern recognition and artificial intelligence are used to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs.

  2. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e., automatically controlling the virtual

  3. Effective Generation and Update of a Building Map Database Through Automatic Building Change Detection from LiDAR Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2015-10-01

    Full Text Available Periodic building change detection is important for many applications, including disaster management. Building map databases need to be updated based on detected changes so as to ensure their currency and usefulness. This paper first presents a graphical user interface (GUI) developed to support the creation of a building database from building footprints automatically extracted from LiDAR (light detection and ranging) point cloud data. An automatic building change detection technique is then presented, by which buildings are automatically extracted from newly available LiDAR point cloud data and compared to those within an existing building database. Buildings identified as totally new or demolished are directly added to the change detection output. However, for part-building demolition or extension, a connected component analysis algorithm is applied and, for each connected building component, the area, width and height are estimated in order to ascertain whether it can be considered a demolished or new building part (see the sketch below). Using the developed GUI, a user can quickly examine each suggested change and indicate his/her decision to update the database with a minimum number of mouse clicks. In experimental tests, the proposed change detection technique was found to produce almost no omission errors, and when compared to the number of reference building corners, it reduced the human interaction to 14% for initial building map generation and to 3% for map updating. Thus, the proposed approach can be exploited for enhanced automated building information updating within a topographic database.
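
    A minimal sketch of the connected-component test for part-building changes (grid resolution and the minimum-size rules are hypothetical, not the paper's values):

```python
import numpy as np
from scipy import ndimage

# Binary mask of changed footprint cells (synthetic example).
changed = np.zeros((60, 60), bool)
changed[5:25, 5:12] = True          # plausible demolished building part
changed[40:42, 40:42] = True        # too small: likely noise

labels, n = ndimage.label(changed)
min_area_m2, min_width_m, cell = 10.0, 2.0, 0.5   # 0.5 m grid (assumed)
for i, sl in enumerate(ndimage.find_objects(labels), start=1):
    comp = labels[sl] == i
    area = comp.sum() * cell ** 2
    width = min(comp.shape) * cell
    ok = area >= min_area_m2 and width >= min_width_m
    print(f"component {i}: area={area:.1f} m2, width={width:.1f} m ->",
          "building part" if ok else "rejected")
```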

  4. Development of a new generation of high-resolution anatomical models for medical device evaluation: the Virtual Population 3.0

    Science.gov (United States)

    Gosselin, Marie-Christine; Neufeld, Esra; Moser, Heidi; Huber, Eveline; Farcito, Silvia; Gerber, Livia; Jedensjö, Maria; Hilber, Isabel; Di Gennaro, Fabienne; Lloyd, Bryn; Cherubini, Emilio; Szczerba, Dominik; Kainz, Wolfgang; Kuster, Niels

    2014-09-01

    The Virtual Family computational whole-body anatomical human models were originally developed for electromagnetic (EM) exposure evaluations, in particular to study how absorption of radiofrequency radiation from external sources depends on anatomy. However, the models immediately garnered much broader interest and are now applied by over 300 research groups, many from medical applications research fields. In a first step, the Virtual Family was expanded to the Virtual Population to provide considerably broader population coverage with the inclusion of models of both sexes ranging in age from 5 to 84 years old. Although these models have proven to be invaluable for EM dosimetry, it became evident that significantly enhanced models are needed for reliable effectiveness and safety evaluations of diagnostic and therapeutic applications, including medical implants safety. This paper describes the research and development performed to obtain anatomical models that meet the requirements necessary for medical implant safety assessment applications. These include implementation of quality control procedures, re-segmentation at higher resolution, more-consistent tissue assignments, enhanced surface processing and numerous anatomical refinements. Several tools were developed to enhance the functionality of the models, including discretization tools, posing tools to expand the posture space covered, and multiple morphing tools, e.g., to develop pathological models or variations of existing ones. A comprehensive tissue properties database was compiled to complement the library of models. The results are a set of anatomically independent, accurate, and detailed models with smooth, yet feature-rich and topologically conforming surfaces. The models are therefore suited for the creation of unstructured meshes, and the possible applications of the models are extended to a wider range of solvers and physics. The impact of these improvements is shown for the MRI exposure of an adult

  5. An Anatomically Validated Brachial Plexus Contouring Method for Intensity Modulated Radiation Therapy Planning

    Energy Technology Data Exchange (ETDEWEB)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be [Department of Anatomy, Ghent University, Ghent (Belgium); Department of Radiotherapy, Ghent University, Ghent (Belgium); Audenaert, Emmanuel [Department of Physical Medicine and Orthopedic Surgery, Ghent University, Ghent (Belgium); Speleers, Bruno; Vercauteren, Tom; Mulliez, Thomas [Department of Radiotherapy, Ghent University, Ghent (Belgium); Vandemaele, Pieter; Achten, Eric [Department of Radiology, Ghent University, Ghent (Belgium); Kerckaert, Ingrid; D' Herde, Katharina [Department of Anatomy, Ghent University, Ghent (Belgium); De Neve, Wilfried [Department of Radiotherapy, Ghent University, Ghent (Belgium); Van Hoof, Tom [Department of Anatomy, Ghent University, Ghent (Belgium)

    2013-11-15

    Purpose: To develop contouring guidelines for the brachial plexus (BP) using anatomically validated cadaver datasets. Magnetic resonance imaging (MRI) and computed tomography (CT) were used to obtain detailed visualizations of the BP region, with the goal of achieving maximal inclusion of the actual BP in a small contoured volume while also accommodating anatomic variations. Methods and Materials: CT and MRI were obtained for 8 cadavers positioned for intensity modulated radiation therapy. 3-dimensional reconstructions of soft tissue (from MRI) and bone (from CT) were combined to create 8 separate enhanced CT project files. Dissection of the corresponding cadavers anatomically validated the reconstructions. Seven enhanced CT project files were then automatically fitted, separately in different regions, to obtain a single dataset of superimposed BP regions that incorporated anatomic variations. From this dataset, improved BP contouring guidelines were developed. These guidelines were then applied to the 7 original CT project files and also to 1 additional file left out of the superimposing procedure. The percentage of BP inclusion was compared with the published guidelines. Results: The anatomic validation procedure showed a high level of conformity between the generated 3-dimensional reconstructions and their dissected counterparts for the BP regions examined. Accurate and detailed BP contouring guidelines were developed, providing corresponding guidance for each level in a clinical dataset. An average margin of 4.7 mm around the anatomically validated BP contour is sufficient to accommodate anatomic variations. Using the new guidelines, 100% inclusion of the BP was achieved, compared with a mean inclusion of 37.75% when published guidelines were applied. Conclusion: Improved guidelines for BP delineation were developed using combined MRI and CT imaging, with validation by anatomic dissection.

  6. Expert system for the automatic analysis of eddy current signals from the monitoring of steam generators of a PWR-type reactor

    International Nuclear Information System (INIS)

    Benoist, P.; David, B.; Pigeon, M.

    1990-01-01

    An expert system for the automatic analysis of signals from eddy currents is presented. The system was developed in order to detect and analyse the defects which may exist in steam generators. The extraction of a signal from high-level background noise is possible. The organization of the work during the system's development, the results of the technique for the extraction of the signal from the background noise, and an example concerning the interpretation of the signal from a defect are presented

  7. The Generation of Automatic Mapping for Buildings Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    Full Text Available The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high-resolution digital vertical aerial photographs and point clouds obtained using LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2%, validated through ground-truthing.

  8. Anatomical entity mention recognition at literature scale.

    Science.gov (United States)

    Pyysalo, Sampo; Ananiadou, Sophia

    2014-03-15

    Anatomical entities ranging from subcellular structures to organ systems are central to biomedical science, and mentions of these entities are essential to understanding the scientific literature. Despite extensive efforts to automatically analyze various aspects of biomedical text, there have been only a few studies focusing on anatomical entities, and no dedicated methods for learning to automatically recognize anatomical entity mentions in free-form text have been introduced. We present AnatomyTagger, a machine learning-based system for anatomical entity mention recognition. The system incorporates a broad array of approaches proposed to benefit tagging, including the use of Unified Medical Language System (UMLS)- and Open Biomedical Ontologies (OBO)-based lexical resources, word representations induced from unlabeled text, statistical truecasing and non-local features. We train and evaluate the system on a newly introduced corpus that substantially extends previously available resources, and apply the resulting tagger to automatically annotate the entire open access scientific domain literature. The resulting analyses have been applied to extend services provided by the Europe PubMed Central literature database. All tools and resources introduced in this work are available from http://nactem.ac.uk/anatomytagger. sophia.ananiadou@manchester.ac.uk Supplementary data are available at Bioinformatics online.

  9. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    Science.gov (United States)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience of its members in earthquake monitoring systems and seismic data analysis, whose major goal is to transform the most recent innovations of scientific research into technological products and prototypes. With this aim, RISS has recently started the development of new software offering an elegant solution for managing and analysing seismic data and for creating automatic earthquake bulletins. The software was initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), a network of seismic stations deployed in the Southern Apennines along the active fault system responsible for the 1980, November 23, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration, and is able to provide reliable estimates of earthquake source parameters whatever the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of modules, each aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time data stream, and the software then performs phase association and event binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated using a probabilistic, non-linear exploration algorithm. The software is then able to automatically provide three different magnitude estimates. First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude
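
    To illustrate the first magnitude in that chain, here is a generic Richter-style local magnitude from peak amplitude and hypocentral distance. The attenuation coefficients are Hutton-Boore-style values shown for illustration; operational networks, including the one described, calibrate their own.

```python
import numpy as np

def local_magnitude(peak_amp_mm, hypo_dist_km):
    """Ml from Wood-Anderson-equivalent peak amplitude (mm) and distance (km)."""
    return (np.log10(peak_amp_mm)
            + 1.110 * np.log10(hypo_dist_km / 100.0)
            + 0.00189 * (hypo_dist_km - 100.0) + 3.0)

# Station amplitudes and distances (synthetic); the median over stations is a
# common robust choice for an automatic bulletin.
amps = np.array([0.012, 0.020, 0.007])
dists = np.array([15.0, 32.0, 51.0])
station_ml = local_magnitude(amps, dists)
print("station Ml:", np.round(station_ml, 2),
      "-> event Ml:", round(float(np.median(station_ml)), 2))
```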

  10. MO-G-BRE-04: Automatic Verification of Daily Treatment Deliveries and Generation of Daily Treatment Reports for a MR Image-Guided Treatment Machine

    International Nuclear Information System (INIS)

    Yang, D; Li, X; Li, H; Wooten, H; Green, O; Rodriguez, V; Mutic, S

    2014-01-01

    Purpose: Two aims of this work were to develop a method to automatically verify treatment delivery accuracy immediately after patient treatment and to develop a comprehensive daily treatment report providing all information required for daily MR-IGRT review. Methods: After systematically analyzing the requirements for treatment delivery verification and the information available from a novel MR-IGRT treatment machine, we designed a method that uses 1) treatment plan files, 2) delivery log files, and 3) dosimetric calibration information to verify the accuracy and completeness of daily treatment deliveries. The method verifies the correctness of the delivered treatment plans and beams, the beam segments, and, for each segment, the beam-on time and MLC leaf positions. Composite primary fluence maps are calculated from the MLC leaf positions and the beam-on time. Error statistics are calculated on the fluence difference maps between the plan and the delivery. We also designed the daily treatment delivery report to include all information required for MR-IGRT and physics weekly review - the plan and treatment fraction information, dose verification information, daily patient setup screen captures, and the treatment delivery verification results. Results: The parameters in the log files (e.g. MLC positions) were independently verified and deemed accurate and trustworthy. A computer program was developed to implement the automatic delivery verification and daily report generation. The program was tested and clinically commissioned with sufficient IMRT and 3D treatment delivery data. The final version has been integrated into a commercial MR-IGRT treatment delivery system. Conclusion: A method was developed to automatically verify MR-IGRT treatment deliveries and generate daily treatment reports. Already in clinical use since December 2013, the system is able to facilitate delivery error detection, and expedite physician daily IGRT review and physicist weekly chart
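
    A toy version of the fluence comparison step (leaf width, grid, and tolerance are invented for illustration, not the commissioned system's values): accumulate time-weighted apertures from the planned and logged MLC positions, then compare the two maps.

```python
import numpy as np

def fluence_map(segments, ny=40, nx=40, px_per_cm=4):
    """segments: list of (beam_on_time, left_mm[10 leaves], right_mm[10 leaves])."""
    fmap = np.zeros((ny, nx))
    for t, left, right in segments:
        for leaf in range(ny // 4):                 # 4 pixel rows per 1 cm leaf
            lo = int(round(left[leaf] / 10 * px_per_cm)) + nx // 2
            hi = int(round(right[leaf] / 10 * px_per_cm)) + nx // 2
            fmap[leaf * 4:(leaf + 1) * 4, max(lo, 0):min(hi, nx)] += t
    return fmap

plan = [(2.0, [-20] * 10, [20] * 10), (1.5, [-10] * 10, [25] * 10)]
log_ = [(2.0, [-20] * 10, [20] * 10), (1.3, [-10] * 10, [25] * 10)]  # short segment

f_plan, f_log = fluence_map(plan), fluence_map(log_)
diff = np.abs(f_plan - f_log)
flagged = (diff > 0.03 * f_plan.max()).mean()
print(f"max fluence difference: {diff.max():.2f} time-weighted units")
print(f"pixels beyond the 3% tolerance: {flagged:.1%}")
```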

  11. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    International Nuclear Information System (INIS)

    Gloger, Oliver; Völzke, Henry; Tönnies, Klaus; Mensel, Birger

    2015-01-01

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has increased strongly over the last decade. In this context, organ segmentation in MR volume data has gained increasing attention for medical applications. Especially in large-scale population-based studies, organ volumetry is highly relevant and requires exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automated methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR intensity differences complicate the application of supervised learning strategies. Thus, we propose a modular framework implementing a two-step probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are subsequently refined by several extended segmentation strategies. We present a three-class support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high-quality subject-specific parenchyma probability maps. Several refinement strategies, including a final shape-based 3D level set segmentation technique, are used in subsequent processing modules to segment the renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from the parenchymal volume, which is important for analyzing renal function. Volume errors and Dice coefficients show that the presented framework outperforms existing approaches. (paper)
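
    The Fourier descriptor features can be sketched in a few lines (synthetic contours, not renal data): a closed 2D contour is treated as a complex signal, and normalized magnitudes of its Fourier coefficients give position-, scale- and starting-point-invariant shape features that can be fed to a classifier such as an SVM.

```python
import numpy as np

def fourier_descriptors(contour_xy, n_coeffs=8):
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # contour as complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                # drop DC -> translation invariance
    mags = np.abs(coeffs)
    mags /= mags[1]                                # normalize -> scale invariance
    return mags[1:n_coeffs + 1]                    # magnitudes only -> rotation/
                                                   # starting-point invariance

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
kidney_ish = np.c_[np.cos(theta) * (1 + 0.3 * np.cos(2 * theta)),
                   0.7 * np.sin(theta)]
print("circle FD:    ", np.round(fourier_descriptors(circle), 3))
print("kidney-ish FD:", np.round(fourier_descriptors(kidney_ish), 3))
```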

  12. Some Behavioral Considerations on the GPS4GEF Cloud-Based Generator of Evaluation Forms with Automatic Feedback and References to Interactive Support Content

    Directory of Open Access Journals (Sweden)

    Daniel HOMOCIANU

    2015-01-01

    Full Text Available The paper presents some considerations on a previously defined general-purpose system used to dynamically generate online evaluation forms that provide automatic feedback immediately after responses are submitted. The system works with a simple and well-known data source format able to store questions, answers and links to additional support materials, in order to increase the productivity of evaluation and assessment. Beyond a short description of the prototype's components and of the advantages and limitations for any user involved in assessment and evaluation processes, the paper promotes the use of such a system together with a simple technique for generating and referencing interactive support content, cited within this paper and defined together with the LIVES4IT approach. This type of content consists of scenarios with ad hoc documentation and interactive simulation components, useful when emulating concrete examples of working with real-world objects, operating devices or using software applications from any field of activity.

  13. Dosimetric Evaluation of Automatic Segmentation for Adaptive IMRT for Head-and-Neck Cancer

    International Nuclear Information System (INIS)

    Tsuji, Stuart Y.; Hwang, Andrew; Weinberg, Vivian; Yom, Sue S.; Quivey, Jeanne M.; Xia Ping

    2010-01-01

    Purpose: Adaptive planning to accommodate anatomic changes during treatment requires repeat segmentation. This study uses dosimetric endpoints to assess automatically deformed contours. Methods and Materials: Sixteen patients with head-and-neck cancer had adaptive plans because of anatomic change during radiotherapy. Contours from the initial planning computed tomography (CT) were deformed to the mid-treatment CT using an intensity-based free-form registration algorithm and then compared with the manually drawn contours for the same CT using the Dice similarity coefficient and an overlap index. The automatic contours were used to create new adaptive plans. The original and automatic adaptive plans were compared based on dosimetric outcomes of the manual contours and on plan conformality. Results: Volumes from the manual and automatic segmentation were similar; only the gross tumor volume (GTV) was significantly different. Automatic plans achieved lower mean coverage for the GTV (V95: 98.6 ± 1.9% vs. 89.9 ± 10.1%; p = 0.004) and the clinical target volume (V95: 98.4 ± 0.8% vs. 89.8 ± 6.2%), and differed in the dose to 0.1 cm3 of the spinal cord (39.9 ± 3.7 Gy vs. 42.8 ± 5.4 Gy; p = 0.034), but showed no difference for the remaining structures. Conclusions: Automatic segmentation is not robust enough to substitute for physician-drawn volumes, particularly for the GTV. However, it generates normal-structure contours of sufficient accuracy when assessed by dosimetric endpoints.
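
    The evaluation metric is easy to make concrete. A minimal sketch of the Dice similarity coefficient on binary masks (synthetic rectangles in place of real contours): DSC = 2|A ∩ B| / (|A| + |B|).

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

manual = np.zeros((50, 50), bool); manual[10:30, 10:30] = True
auto   = np.zeros((50, 50), bool); auto[12:32, 11:31]   = True
print(f"DSC = {dice(manual, auto):.3f}")   # 1.0 would be perfect overlap
```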

  14. Stacouf: A new system for automatic processing of eddy current signal from steam generator testing of PWR power plants

    International Nuclear Information System (INIS)

    Ducreux, J.; Eyrolles, P.; Meylogan, T.

    1990-01-01

    A new system called STACOUF will soon be industrialized. The aim is to improve on-site signal processing for eddy current testing of steam generators. Testing time, quality and productivity will be improved

  15. Comparison of a semi-automatic annotation tool and a natural language processing application for the generation of clinical statement entries.

    Science.gov (United States)

    Lin, Ching-Heng; Wu, Nai-Yuan; Lai, Wei-Shao; Liou, Der-Ming

    2015-01-01

    Electronic medical records with encoded entries should enhance the semantic interoperability of document exchange. However, it remains a challenge to encode narrative concepts and to transform the coded concepts into a standard entry-level document. This study aimed to use a novel approach for the generation of entry-level interoperable clinical documents. Using the HL7 clinical document architecture (CDA) as the example, we developed three pipelines to generate entry-level CDA documents. The first approach was a semi-automatic annotation pipeline (SAAP), the second was a natural language processing (NLP) pipeline, and the third merged the two. We randomly selected 50 test documents from the i2b2 corpora to evaluate the performance of the three pipelines. The 50 randomly selected test documents contained 9365 words, including 588 Observation terms and 123 Procedure terms. For the Observation terms, the merged pipeline had a significantly higher F-measure than the NLP pipeline (0.89 vs 0.80, p<0.0001), but a similar F-measure to that of the SAAP (0.89 vs 0.87). For the Procedure terms, the F-measure was not significantly different among the three pipelines. The combination of a semi-automatic annotation approach and the NLP application seems to be a solution for generating entry-level interoperable clinical documents. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  16. COMICS: Cartoon Visualization of Omics Data in Spatial Context Using Anatomical Ontologies.

    Science.gov (United States)

    Travin, Dmitrii; Popov, Iaroslav; Guler, Arzu Tugce; Medvedev, Dmitry; van der Plas-Duivesteijn, Suzanne; Varela, Monica; Kolder, Iris C R M; Meijer, Annemarie H; Spaink, Herman P; Palmblad, Magnus

    2018-01-05

    COMICS is an interactive and open-access web platform for the integration and visualization of molecular expression data in anatomograms of zebrafish, carp, and mouse model systems. Anatomical ontologies are used to map omics data across experiments and between an experiment and a particular visualization in a data-dependent manner. COMICS is built on top of several existing resources. Zebrafish and mouse anatomical ontologies, with their controlled vocabulary (CV) and defined hierarchy, are used with the ontoCAT R package to aggregate data for comparison and visualization. Libraries from the QGIS geographical information system are used with the R packages "maps" and "maptools" to visualize and interact with molecular expression data in anatomical drawings of the model systems. COMICS allows users to upload their own data from omics experiments, using any gene or protein nomenclature they wish, as long as CV terms are used to define anatomical regions or developmental stages. Common nomenclatures such as ZFIN gene names and UniProt accessions receive additional support. COMICS can be used to generate publication-quality visualizations of gene and protein expression across experiments. Unlike previous tools that have used anatomical ontologies to interpret imaging data in several animal models, including zebrafish, COMICS is designed to take spatially resolved data generated by dissection or fractionation and display it in visually clear anatomical representations rather than large data tables. COMICS is optimized for ease of use, with a minimalistic web interface and automatic selection of the appropriate visual representation depending on the input data.

  17. The Generator of the Event Structure Lexicon (GESL): Automatic Annotation of Event Structure for Textual Inference Tasks

    Science.gov (United States)

    Im, Seohyun

    2013-01-01

    This dissertation aims to develop the Generator of the Event Structure Lexicon (GESL), a tool that automates the annotation of the event structure of verbs in text in order to support textual inference tasks related to lexically entailed subevents. The output of the GESL is the Event Structure Lexicon (ESL), a lexicon of verbs in text which includes…

  18. Generation of alloreactivity-reduced donor lymphocyte products retaining memory function by fully automatic depletion of CD45RA-positive cells.

    Science.gov (United States)

    Müller, Nina; Landwehr, Katharina; Langeveld, Kirsten; Stenzel, Joanna; Pouwels, Walter; van der Hoorn, Menno A W G; Seifried, Erhard; Bonig, Halvard

    2018-02-28

    For patients needing allogeneic stem cell transplantation but lacking a major histocompatibility complex (MHC)-matched donor, haplo-identical (family) donors may be an alternative. The stringent T-cell depletion required in these cases to avoid lethal graft-versus-host disease (GVHD) can delay immune reconstitution, thus impairing defense against virus reactivation and attenuating graft-versus-leukemia (GVL) activity. Several groups reported that GVHD is caused by cells residing within the naive (CD45RA+) T-cell compartment and proposed the use of CD45RA-depleted donor lymphocyte infusion (DLI) to accelerate immune reconstitution. We developed and tested the performance of a CD45RA depletion module for the automatic cell-processing device CliniMACS Prodigy and investigated quality attributes of the generated products. Unstimulated apheresis products from random volunteer donors were depleted of CD45RA+ cells on CliniMACS Prodigy, using Good Manufacturing Practice (GMP)-compliant reagents and methods throughout. Using phenotypic and functional in vitro assays, we assessed the cellular constitution of CD45RA-depleted products, including T-cell subset analyses, immunological memory function and allo-reactivity. Selections were technically uneventful and proceeded automatically with minimal hands-on time beyond tubing set installation. Products were near-quantitatively depleted of CD45RA+ cells, that is, largely devoid of CD45RA+ T cells but also of almost all B and natural killer cells. Naive and effector as well as γ/δ T cells were greatly reduced. The CD4:CD8 ratio was fivefold increased. Mixed lymphocyte reaction assays of the product against third-party leukocytes revealed reduced allo-reactivity compared to the starting material. Anti-pathogen responses were retained. The novel, closed, fully GMP-compatible process on Prodigy generates highly CD45RA-depleted cellular products predicted to be clinically meaningfully depleted of GvH reactivity. Copyright © 2018 International

  19. Reenganche automático en circuitos de distribución con generación distribuida; Automatic reclosing in distribution circuits with distributed generation

    Directory of Open Access Journals (Sweden)

    Marta Bravo de las Casas

    2015-04-01

    Full Text Available Distribution networks have traditionally been designed so that power flows in one direction only. The introduction of distributed generation units means this assumption no longer holds, which brings new challenges for the operation and design of these networks. One of the areas affected is electrical protection, above all anti-islanding (separation) protection, especially when automatic reclosing is used, as is typical in medium-voltage networks. This article studies automatic reclosing in a typical Cuban substation with distributed fuel-oil and diesel generation. A brief review of the literature is given first, and the results are presented by means of simulations in the Matlab-Simulink (version 7.4) software. The simulation confirms the existence of the problem, and possible solutions are proposed.

  20. Automatic Commercial Permit Sets

    Energy Technology Data Exchange (ETDEWEB)

    Grana, Paul [Folsom Labs, Inc., San Francisco, CA (United States)

    2017-12-21

    Final report for Folsom Labs’ Solar Permit Generator project, which was successfully completed, resulting in the development and commercialization of a software toolkit within the cloud-based HelioScope software environment that enables solar engineers to automatically generate and manage draft documents for permit submission.

  1. AVID: Automatic Visualization Interface Designer

    National Research Council Canada - National Science Library

    Chuah, Mei

    2000-01-01

    .... Automatic generation offers great flexibility in performing data and information analysis tasks, because new designs are generated on a case-by-case basis to suit current and changing future needs...

  2. The ear, the eye, earthquakes and feature selection: listening to automatically generated seismic bulletins for clues as to the differences between true and false events.

    Science.gov (United States)

    Kuzma, H. A.; Arehart, E.; Louie, J. N.; Witzleben, J. L.

    2012-04-01

    Listening to the waveforms generated by earthquakes is not new. The recordings of seismometers have been sped up and played to generations of introductory seismology students, published on educational websites and even included in the occasional symphony. The modern twist on earthquakes as music is an interest in using state-of-the-art computer algorithms for seismic data processing and evaluation. Algorithms such as Hidden Markov Models, Bayesian network models and Support Vector Machines have been highly developed for applications in speech recognition, and might also be adapted for automatic seismic data analysis. Over the last three years, the International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has supported an effort to apply computer learning and data mining algorithms to IDC data processing, particularly to the problem of weeding through automatically generated event bulletins to find events which are non-physical and would otherwise have to be eliminated by the hand of highly trained human analysts. Analysts are able to evaluate events, distinguish between phases, pick new phases and build new events by looking at waveforms displayed on a computer screen. Human ears, however, are much better suited to waveform processing than the eyes. Our hypothesis is that combining an auditory representation of seismic events with visual waveforms would reduce the time it takes to train an analyst and the time they need to evaluate an event. Since it takes almost two years for a person of extraordinary diligence to become a professional analyst and IDC contracts are limited to seven years by Treaty, faster training would significantly improve IDC operations. Furthermore, once a person learns to distinguish between true and false events by ear, various forms of audio compression can be applied to the data. The compression scheme which yields the smallest data set in which relevant signals can still be heard is likely an
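
    Audification, the "sped-up seismogram" mentioned above, is nearly a one-liner once the trace is treated as audio samples. A synthetic sketch (all signal parameters invented): playing a 100 Hz-sampled record at 200x speed shifts a 2 Hz seismic arrival to a clearly audible 400 Hz tone.

```python
import numpy as np
from scipy.io import wavfile

fs_seis = 100                                    # seismometer sampling rate (Hz)
t = np.arange(0, 600, 1.0 / fs_seis)             # 10 minutes of record
rng = np.random.default_rng(3)
trace = 0.02 * rng.normal(size=t.size)           # background noise
trace += np.exp(-((t - 300) / 20.0) ** 2) * np.sin(2 * np.pi * 2.0 * t)  # 2 Hz "event"

speedup = 200                                    # 10 min of data -> 3 s of audio
audio = trace / np.abs(trace).max()              # normalize to [-1, 1]
wavfile.write("quake.wav", fs_seis * speedup, np.int16(audio * 32767))
print(f"wrote quake.wav: {t.size} samples at {fs_seis * speedup} Hz "
      f"({t.size / (fs_seis * speedup):.1f} s of audio; 2 Hz event -> 400 Hz tone)")
```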

  3. Applying fractional order PID to design TCSC-based damping controller in coordination with automatic generation control of interconnected multi-source power system

    Directory of Open Access Journals (Sweden)

    Javad Morsali

    2017-02-01

    Full Text Available In this paper, fractional order proportional-integral-differential (FOPID controller is employed in the design of thyristor controlled series capacitor (TCSC-based damping controller in coordination with the secondary integral controller as automatic generation control (AGC loop. In doing so, the contribution of the TCSC in tie-line power exchange is extracted mathematically for small load disturbance. Adjustable parameters of the proposed FOPID-based TCSC damping controller and the AGC loop are optimized concurrently via an improved particle swarm optimization (IPSO algorithm which is reinforced by chaotic parameter and crossover operator to obtain a globally optimal solution. The powerful FOMCON toolbox is used along with MATLAB for handling fractional order modeling and control. An interconnected multi-source power system is simulated regarding the physical constraints of generation rate constraint (GRC nonlinearity and governor dead band (GDB effect. Simulation results using FOMCON toolbox demonstrate that the proposed FOPID-based TCSC damping controller achieves the greatest dynamic performance under different load perturbation patterns in comparison with phase lead-lag and classical PID-based TCSC damping controllers, all in coordination with the integral AGC. Moreover, sensitivity analyses are performed to show the robustness of the proposed controller under various uncertainty scenarios.
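    The FOPID control law generalizes the classical PID with fractional integration and differentiation orders λ and μ: u(t) = Kp·e(t) + Ki·D^(−λ)e(t) + Kd·D^(μ)e(t). Below is a minimal sketch of evaluating such a controller numerically with the Grünwald–Letnikov approximation; the gains and orders are illustrative placeholders, not values from the paper, and the full FOMCON/IPSO tuning machinery is not reproduced.

```python
import numpy as np

def gl_frac_deriv(e, alpha, h):
    """Grunwald-Letnikov fractional derivative of order alpha of a sampled
    signal e with time step h (alpha < 0 gives fractional integration)."""
    n = len(e)
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):                      # recursive binomial weights
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.empty(n)
    for k in range(n):
        d[k] = h ** (-alpha) * np.dot(c[:k + 1], e[k::-1])
    return d

def fopid(e, h, Kp=1.0, Ki=0.5, Kd=0.2, lam=0.9, mu=0.8):
    """FOPID output u = Kp*e + Ki*D^(-lam) e + Kd*D^(mu) e (illustrative gains)."""
    return Kp * e + Ki * gl_frac_deriv(e, -lam, h) + Kd * gl_frac_deriv(e, mu, h)

t = np.arange(0.0, 5.0, 0.01)
error = np.exp(-t) * np.sin(2 * np.pi * t)     # synthetic frequency-deviation error
u = fopid(error, h=0.01)
```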

  4. Magnetic Resonance–Based Automatic Air Segmentation for Generation of Synthetic Computed Tomography Scans in the Head Region

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Weili; Kim, Joshua P. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan (United States); Kadbi, Mo [Philips Healthcare, Cleveland, Ohio (United States); Movsas, Benjamin; Chetty, Indrin J. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan (United States); Glide-Hurst, Carri K., E-mail: churst2@hfhs.org [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan (United States)

    2015-11-01

    Purpose: To incorporate a novel imaging sequence for robust air and tissue segmentation using ultrashort echo time (UTE) phase images and to implement an innovative synthetic CT (synCT) solution as a first step toward MR-only radiation therapy treatment planning for brain cancer. Methods and Materials: Ten brain cancer patients were scanned with a UTE/Dixon sequence and other clinical sequences on a 1.0 T open magnet with simulation capabilities. Bone-enhanced images were generated from a weighted combination of water/fat maps derived from Dixon images and inverted UTE images. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating segmentation errors (true-positive rate, false-positive rate, and Dice similarity indices) using CT simulation (CT-SIM) as ground truth. The synCTs were generated using a voxel-based, weighted summation method incorporating T2, fluid attenuated inversion recovery (FLAIR), UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized Hounsfield unit (HU) differences between synCT and CT-SIM. A dosimetry study was conducted, and differences were quantified using γ-analysis and dose-volume histogram analysis. Results: On average, true-positive rate and false-positive rate for the CT and MR-derived air masks were 80.8% ± 5.5% and 25.7% ± 6.9%, respectively. Dice similarity index values were 0.78 ± 0.04 (range, 0.70-0.83). Full field of view MAE between synCT and CT-SIM was 147.5 ± 8.3 HU (range, 138.3-166.2 HU), with the largest errors occurring at bone–air interfaces (MAE 422.5 ± 33.4 HU for bone and 294.53 ± 90.56 HU for air). Gamma analysis revealed pass rates of 99.4% ± 0.04%, with acceptable treatment plan quality for the cohort. Conclusions: A hybrid MRI phase/magnitude UTE image processing technique was introduced that significantly improved bone and air contrast in MRI. Segmented air masks and bone-enhanced images were integrated
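    The evaluation metrics used here (Dice similarity and HU mean absolute error) are straightforward to compute; a minimal sketch with hypothetical binary air masks and HU arrays follows.

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mae(synct, ct, mask=None):
    """Mean absolute error in HU, optionally restricted to a tissue mask."""
    diff = np.abs(synct - ct)
    return diff[mask].mean() if mask is not None else diff.mean()

# hypothetical example arrays
mr_air = np.random.rand(64, 64, 32) > 0.9
ct_air = mr_air.copy()
print(dice(mr_air, ct_air))  # 1.0 for identical masks
```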

  5. A segmentation framework towards automatic generation of boost subvolumes for FDG-PET tumors: A digital phantom study

    International Nuclear Information System (INIS)

    Yang, Fei; Grigsby, Perry W.

    2012-01-01

    Potential benefits of administering nonuniform radiation dose to heterogeneous tumors imaged with FDG-PET have been widely demonstrated, whereas the number of discrete dose levels to be utilized and the corresponding prescription locations inside tumors vary significantly among existing methods. In this paper, an automated and unsupervised segmentation framework, consisting mainly of an image restoration mechanism based on variational decomposition and a voxel clustering scheme based on spectral clustering, is presented for partitioning FDG-PET imaged tumors into subvolumes such that the total intra-subvolume activity similarity and the total inter-subvolume activity dissimilarity are simultaneously maximized. Experiments to evaluate the proposed system were carried out using FDG-PET data generated from a digital phantom that employed SimSET (Simulation System for Emission Tomography) to simulate PET acquisition of tumors. The obtained results show the feasibility of the proposed system in dividing FDG-PET imaged tumor volumes into subvolumes with intratumoral heterogeneity being properly characterized, irrespective of variation in tumor morphology as well as diversity in intratumoral heterogeneity pattern.
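    As a rough illustration of the voxel-clustering step, the sketch below applies spectral clustering to PET voxel features (activity plus spatial coordinates); it is not the authors' full pipeline, which also includes the variational image-restoration stage, and the activity data are synthetic.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# hypothetical FDG-PET tumor sub-image: activity values on a small grid
rng = np.random.default_rng(0)
activity = rng.gamma(2.0, 2.0, size=(16, 16))

# one feature row per voxel: (x, y, activity); coordinates keep clusters spatially coherent
xx, yy = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
features = np.column_stack([xx.ravel(), yy.ravel(), 5.0 * activity.ravel()])

labels = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(features)
subvolumes = labels.reshape(16, 16)   # each label is a candidate boost subvolume
```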

  6. DEVELOPMENT AND TESTING OF GEO-PROCESSING MODELS FOR THE AUTOMATIC GENERATION OF REMEDIATION PLAN AND NAVIGATION DATA TO USE IN INDUSTRIAL DISASTER REMEDIATION

    Directory of Open Access Journals (Sweden)

    G. Lucas

    2015-08-01

    Full Text Available This paper introduces research done on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consist of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan with the fewest clean-up parcels, considering it the optimal spatial configuration (see the sketch after this abstract). The second model shifts the clean-up parcels of a work plan both vertically and horizontally following a grid pattern with a sampling distance of one fifth of the parcel width and keeps the optimal shifted version, again with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that identifying the optimal orientation can save work, time and money in remediation. The various tests demonstrated that the model gains efficiency when (1) the individual features of the contaminated area have a pronounced orientation in their geometry (features are long), and (2) the size of the pollution extent features becomes closer to the size of the parcels (scale effect). The second model shows only 1% difference with the variation of feature number; so this last is less interesting for
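    A minimal sketch of the first model's idea, testing the four orientations and keeping the one yielding the fewest parcels, is given below using shapely; the parcel size and the polluted polygon are hypothetical, and the original scripts are not reproduced.

```python
from shapely.geometry import Polygon, box
from shapely import affinity

def count_parcels(poly, width, height):
    """Number of axis-aligned width x height parcels needed to cover poly."""
    minx, miny, maxx, maxy = poly.bounds
    n = 0
    y = miny
    while y < maxy:
        x = minx
        while x < maxx:
            if poly.intersects(box(x, y, x + width, y + height)):
                n += 1
            x += width
        y += height
    return n

polluted = Polygon([(0, 0), (120, 10), (130, 40), (5, 30)])  # hypothetical extent
best = min(((a, count_parcels(affinity.rotate(polluted, a, origin="centroid"), 20, 10))
            for a in (0, 90, 45, 135)), key=lambda t: t[1])
print("best orientation %d deg -> %d parcels" % best)
```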

  7. A program for assisting automatic generation control of the ELETRONORTE using artificial neural network; Um programa para assistencia ao controle automatico de geracao da Eletronorte usando rede neuronal artificial

    Energy Technology Data Exchange (ETDEWEB)

    Brito Filho, Pedro Rodrigues de; Nascimento Garcez, Jurandyr do [Para Univ., Belem, PA (Brazil). Centro Tecnologico; Charone Junior, Wady [Centrais Eletricas do Nordeste do Brasil S.A. (ELETRONORTE), Belem, PA (Brazil)

    1994-12-31

    This work presents an application of an artificial neural network as a support to decision making in the automatic generation control (AGC) of ELETRONORTE. A software tool is used to assist real-time AGC decisions. (author) 2 refs., 6 figs., 1 tab.

  8. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples
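    For a concrete flavor of such automaton-generated sets, here is a sketch generating the Thue–Morse sequence and a two-dimensional analogue (parity of the combined binary digit sums of the coordinates); this is an illustration of the general idea, not the paper's construction.

```python
import numpy as np

def thue_morse(n):
    """First n terms: t(k) = parity of the number of 1-bits of k."""
    return np.array([bin(k).count("1") % 2 for k in range(n)])

def thue_morse_2d(n):
    """2D analogue on {0..n-1}^2: parity of the combined binary digit sums."""
    t = thue_morse(n)
    return t[:, None] ^ t[None, :]    # XOR of the two parities

print(thue_morse(16))    # 0 1 1 0 1 0 0 1 ...
D = thue_morse_2d(8)     # indicator array of a two-dimensional automatic set
```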

  9. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-spe...

  10. A hierarchical scheme for geodesic anatomical labeling of airway trees

    DEFF Research Database (Denmark)

    Feragen, Aasa; Petersen, Jens; Owen, Megan

    2012-01-01

    . In tree-space, the airway tree topology and geometry change continuously, giving a natural way to automatically handle anatomical differences and noise. The algorithm is made efficient using a hierarchical approach, in which labels are assigned from the top down. We only use features of the airway...

  11. Anatomical terminology in Ophthalmology

    OpenAIRE

    Abib, Fernando César; Oréfice, Fernando

    2005-01-01

    The purpose of this article is to inform the ophthalmological community of the existence of the Portuguese-language edition of the International Anatomical Terminology, edited by the Federative Committee on Anatomical Terminology (FCAT). In Brazil, the International Anatomical Terminology is translated by the Anatomical Terminology Commission (CTA) of the Brazilian Society of Anatomy (SBA).

  12. Early fetal anatomical sonography.

    LENUS (Irish Health Repository)

    Donnelly, Jennifer C

    2012-10-01

    Over the past decade, prenatal screening and diagnosis has moved from the second into the first trimester, with aneuploidy screening becoming both feasible and effective. With vast improvements in ultrasound technology, sonologists can now image the fetus in greater detail at all gestational ages. In the hands of experienced sonographers, anatomic surveys between 11 and 14 weeks can be carried out with good visualisation rates of many structures. It is important to be familiar with the normal development of the embryo and fetus, and to be aware of the major anatomical landmarks whose absence or presence may be deemed normal or abnormal depending on the gestational age. Some structural abnormalities will nearly always be detected, some will never be and some are potentially detectable depending on a number of factors.

  13. Reference Man anatomical model

    Energy Technology Data Exchange (ETDEWEB)

    Cristy, M.

    1994-10-01

    The 70-kg Standard Man or Reference Man has been used in physiological models since at least the 1920s to represent adult males. It came into use in radiation protection in the late 1940s, was developed extensively during the 1950s, and was used by the International Commission on Radiological Protection (ICRP) in its Publication 2 in 1959. The current Reference Man for Purposes of Radiation Protection is a monumental book published in 1975 by the ICRP as ICRP Publication 23. It has a wealth of information useful for radiation dosimetry, including anatomical and physiological data and the gross and elemental composition of the body and of its organs and tissues. The anatomical data include specified reference values for an adult male and an adult female. Other reference values are primarily for the adult male. The anatomical data also include much data on fetuses and children, although reference values are not established. There is an ICRP task group currently working on revising selected parts of the Reference Man document.

  14. Automatic generation of 3D fine mesh geometries for the analysis of the venus-3 shielding benchmark experiment with the Tort code

    International Nuclear Information System (INIS)

    Pescarini, M.; Orsi, R.; Martinelli, T.

    2003-01-01

    In many practical radiation transport applications today the cost of solving refined, large size and complex multi-dimensional problems is not so much the computing itself but the cumbersome effort required by an expert to prepare a detailed geometrical model and to verify and validate that it is correct and represents, to a specified tolerance, the real design or facility. This situation is, in particular, relevant and frequent in reactor core criticality and shielding calculations with three-dimensional (3D) general purpose radiation transport codes, requiring a very large number of meshes and high performance computers. The need has clearly emerged for tools that make the task easier for the physicist or engineer by reducing the time required, by facilitating verification of correctness through effective graphical display and, finally, by helping the interpretation of the results obtained. The paper shows the results of efforts in this field through detailed simulations of a complex shielding benchmark experiment. In the context of the activities proposed by the OECD/NEA Nuclear Science Committee (NSC) Task Force on Computing Radiation Dose and Modelling of Radiation-Induced Degradation of Reactor Components (TFRDD), the ENEA-Bologna Nuclear Data Centre contributed with an analysis of the VENUS-3 low-flux neutron shielding benchmark experiment (SCK/CEN-Mol, Belgium). One of the targets of the work was to test the BOT3P system, originally developed at the Nuclear Data Centre in ENEA-Bologna and released to the OECD/NEA Data Bank for free distribution. BOT3P, an ancillary system of the DORT (2D) and TORT (3D) SN codes, permits a flexible automatic generation of spatial mesh grids in Cartesian or cylindrical geometry, through combinatorial geometry algorithms, following a simplified user-friendly approach. This system demonstrated its validity also in core criticality analyses, for example the Lewis MOX fuel benchmark, permitting to easily
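    The mesh-generation idea can be illustrated very simply: define non-uniform coordinate planes per region and take their Cartesian product. The sketch below is a generic illustration under that assumption, not the BOT3P combinatorial-geometry algorithm; all dimensions are hypothetical.

```python
import numpy as np

def graded_axis(breaks, cells_per_zone):
    """Concatenate per-zone uniform subdivisions into one axis of mesh planes."""
    planes = [np.linspace(breaks[i], breaks[i + 1], cells_per_zone[i] + 1)[:-1]
              for i in range(len(cells_per_zone))]
    return np.concatenate(planes + [[breaks[-1]]])

# hypothetical shield model: finer meshing near the source region (0-10 cm)
x = graded_axis([0.0, 10.0, 50.0], [20, 8])
y = graded_axis([0.0, 30.0], [15])
z = graded_axis([0.0, 5.0, 25.0, 30.0], [10, 10, 5])
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")   # 3D Cartesian grid of mesh planes
print(len(x) - 1, "x-cells,", len(y) - 1, "y-cells,", len(z) - 1, "z-cells")
```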

  15. Wien Automatic System Planning (WASP) Package. A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 1: Chapters 1-11

    International Nuclear Information System (INIS)

    1995-01-01

    As a continuation of its effort to provide comprehensive and impartial guidance to Member States facing the need for introducing nuclear power, the IAEA has completed a new version of the Wien Automatic System Planning (WASP) Package for carrying out power generation expansion planning studies. WASP was originally developed in 1972 in the USA to meet the IAEA's needs to analyze the economic competitiveness of nuclear power in comparison to other generation expansion alternatives for supplying the future electricity requirements of a country or region. The model was first used by the IAEA to conduct global studies (Market Survey for Nuclear Power Plants in Developing Countries, 1972-1973) and to carry out Nuclear Power Planning Studies for several Member States. The WASP system developed into a very comprehensive planning tool for electric power system expansion analysis. Following these developments, the so-called WASP-III version was produced in 1979. This version introduced important improvements to the system, namely in the treatment of hydroelectric power plants. The WASP-III version has been continually updated and maintained in order to incorporate needed enhancements. In 1981, the Model for Analysis of Energy Demand (MAED) was developed in order to allow the determination of electricity demand, consistent with the overall requirements for final energy, and thus, to provide a more adequate forecast of electricity needs to be considered in the WASP study. MAED and WASP have been used by the Agency for the conduct of Energy and Nuclear Power Planning Studies for interested Member States. More recently, the VALORAGUA model was completed in 1992 as a means for helping in the preparation of the hydro plant characteristics to be input in the WASP study and to verify that the WASP overall optimized expansion plan takes also into account an optimization of the use of water for electricity generation. The combined application of VALORAGUA and WASP permits the

  16. Anatomical imaging for radiotherapy

    International Nuclear Information System (INIS)

    Evans, Philip M

    2008-01-01

    The goal of radiation therapy is to achieve maximal therapeutic benefit expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state of the art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow-up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods which measure the basic physical characteristics of tissue such as their density and biological imaging techniques which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques. Biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. In ART a set of

  17. Multi-template analysis of human perirhinal cortex in brain MRI: Explicitly accounting for anatomical variability

    Science.gov (United States)

    Xie, Long; Pluta, John B.; Das, Sandhitsu R.; Wisse, Laura E.M.; Wang, Hongzhi; Mancuso, Lauren; Kliot, Dasha; Avants, Brian B.; Ding, Song-Lin; Manjón, José V.; Wolk, David A.; Yushkevich, Paul A.

    2016-01-01

    Rationale: The human perirhinal cortex (PRC) plays critical roles in episodic and semantic memory and visual perception. The PRC consists of Brodmann areas 35 and 36 (BA35, BA36). In Alzheimer's disease (AD), BA35 is the first cortical site affected by neurofibrillary tangle pathology, which is closely linked to neural injury in AD. Large anatomical variability, manifested in the form of different cortical folding and branching patterns, makes it difficult to segment the PRC in MRI scans. Pathology studies have found that in ~97% of specimens, the PRC falls into one of three discrete anatomical variants. However, current methods for PRC segmentation and morphometry in MRI are based on single-template approaches, which may not be able to accurately model these discrete variants. Methods: A multi-template analysis pipeline that explicitly accounts for anatomical variability is used to automatically label the PRC and measure its thickness in T2-weighted MRI scans. The pipeline uses multi-atlas segmentation to automatically label medial temporal lobe cortices including the entorhinal cortex, PRC and the parahippocampal cortex. Pairwise registration between label maps and clustering based on residual dissimilarity after registration are used to construct separate templates for the anatomical variants of the PRC. An optimal path of deformations linking these templates is used to establish correspondences between all the subjects. Experimental evaluation focuses on the ability of single-template and multi-template analyses to detect differences in the thickness of medial temporal lobe cortices between patients with amnestic mild cognitive impairment (aMCI, n=41) and age-matched controls (n=44). Results: The proposed technique is able to generate templates that recover the three dominant discrete variants of PRC and establish more meaningful correspondences between subjects than a single-template approach. The largest reduction in thickness associated with aMCI, in absolute terms

  18. Development of a automatic positioning system of photovoltaic panels for electric energy generation; Desenvolvimento de um sistema de posicionamento automatico de placas fotovoltaicas para a geracao de energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Alceu F.; Cagnon, Odivaldo Jose [Universidade Estadual Paulista (DEE/FEB/UNESP), Bauru, SP (Brazil). Fac. de Engenharia. Dept. de Engenharia Eletrica; Seraphin, Odivaldo Jose [Universidade Estadual Paulista (DER/FCA/UNESP), Botucatu, SP (Brazil). Fac. de Ciencias Agronomicas. Dept. de Engenharia Rural

    2008-07-01

    This work presents an automatic positioning system for photovoltaic panels, intended to improve the conversion of solar energy to electric energy. A prototype with automatic movement was developed, and its efficiency in generating electric energy was compared to that of another prototype with the same characteristics but fixed in space. Preliminary results point to a significant increase in efficiency, obtained from a simplified movement process in which sensors are not used to determine the sun's apparent position; instead, the relative Sun-Earth position equations are used. An innovative mechanical movement system is also presented, using two stepper motors to move the panel along two axes with independent movement, thereby saving energy during positioning. The use of the proposed system in rural areas is suggested. (author)
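    The sensorless strategy relies on standard Sun-position astronomy. A sketch of the usual approximate formulas (solar declination and hour angle to elevation and azimuth) follows; the site latitude is a placeholder, and the exact equations used by the authors may differ.

```python
import numpy as np

def sun_elevation_azimuth(day_of_year, solar_hour, lat_deg):
    """Approximate solar elevation/azimuth (degrees) from standard equations."""
    decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + day_of_year) / 365.0)
    hour_angle = np.radians(15.0 * (solar_hour - 12.0))   # 15 deg per hour
    lat = np.radians(lat_deg)
    sin_el = (np.sin(lat) * np.sin(decl) +
              np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
    el = np.arcsin(sin_el)
    cos_az = (np.sin(decl) - np.sin(lat) * sin_el) / (np.cos(lat) * np.cos(el))
    az = np.degrees(np.arccos(np.clip(cos_az, -1.0, 1.0)))  # from north
    if solar_hour > 12.0:                                   # afternoon: mirror west
        az = 360.0 - az
    return np.degrees(el), az

# hypothetical site near Bauru, Brazil (about 22.3 deg S)
print(sun_elevation_azimuth(day_of_year=172, solar_hour=10.0, lat_deg=-22.3))
```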

  19. Understanding anatomical terms.

    Science.gov (United States)

    Mehta, L A; Natrajan, M; Kothari, M L

    1996-01-01

    Words are our masters and words are our slaves, all depending on how we use them. The whole of medical science owes its origin to Greco-Roman culture and is replete with terms whose high sound is not necessarily accompanied by sound meaning. This is even more the case in the initial, pre-clinical years. Anatomical terminology seems bewildering to the initiate; and maybe that is a reason why love of anatomy as a subject does not always spill over through later years. Employing certain classifications of the origin of the anatomical terms, we have prepared an anthology that we hope will ease the student's task and also heighten the student's appreciation of the new terms. This centers on revealing the Kiplingian "how, why, when, where, what, and who" of a given term. This presentation should empower students to independently formulate a wide network of correlations once they understand a particular term. The article thus hopes to stimulate students' analytic and synthetic faculties as well. A small effort can reap large rewards in terms of enjoyment of the study of anatomy and the related subjects of histology, embryology, and genetics. It is helpful to teachers and students alike. This exercise in semantics and etymology does not demand of the student or his teacher any background in linguistics, grammar, Greek, Latin, Sanskrit, anatomy, or medicine.

  20. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    Science.gov (United States)

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g. reduced documentation time or increased data quality). Prerequisites for data reuse are data quality, availability and identical meaning. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible as well as semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.
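    One simple form of the similarity analysis mentioned here is comparing data element labels across systems. A toy sketch using token-set Jaccard similarity is given below; the labels are invented and the paper's actual similarity measure may differ.

```python
def jaccard(a, b):
    """Jaccard similarity of two token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# hypothetical data element labels from two hospital EHR systems
elements_a = ["systolic blood pressure", "body weight", "tumor stage"]
elements_b = ["blood pressure systolic", "patient body weight", "icd diagnosis"]

for ea in elements_a:
    best = max(elements_b, key=lambda eb: jaccard(ea, eb))
    print(f"{ea!r} ~ {best!r} (J={jaccard(ea, best):.2f})")
```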

  1. Automatic learning-based beam angle selection for thoracic IMRT.

    Science.gov (United States)

    Amit, Guy; Purdie, Thomas G; Levinshtein, Alex; Hope, Andrew J; Lindsay, Patricia; Marshall, Andrea; Jaffray, David A; Pekar, Vladimir

    2015-04-01

    The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose-volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume coverage and organ at risk
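    The core learning step, regressing an individual beam-angle score from anatomical features, can be sketched with scikit-learn as below; the feature construction and interbeam optimization scheme of the paper are not reproduced, and all arrays are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# synthetic training set: one row of anatomical features per candidate beam angle
# (e.g. target-to-beam distances, overlap of the beam with organs at risk, ...)
X_train = rng.random((2000, 12))
y_train = rng.random(2000)            # per-beam score derived from approved plans

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# score every candidate gantry angle for a new patient, pick the best few
X_new = rng.random((72, 12))          # features for 72 candidate angles (5 deg steps)
scores = model.predict(X_new)
best_angles = 5 * np.argsort(scores)[::-1][:7]
print("candidate beam angles (deg):", sorted(best_angles))
```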

  2. Automatic trend estimation

    CERN Document Server

    Vamoș, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  3. Contours, 2 foot contours automatically generated from 2008 LIDAR for the purpose of supporting FEMA floodplain mapping. Limited manual editing, breaklines for waterbodies greater than 5 acres created and use.10' index contours labeled., Published in 2008, 1:1200 (1in=100ft) scale, City of Portage Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Contours dataset current as of 2008. 2 foot contours automatically generated from 2008 LIDAR for the purpose of supporting FEMA floodplain mapping. Limited manual...

  4. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    Science.gov (United States)

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook

    2014-03-01

    We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments by anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounding and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen in the cervical vertebra, the circular model is extended along the z-axis to a cylindrical model to consider additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. From the experiments, the proposed method provides accurate results without bone artifacts and eroded vessels in the cervical vertebra.
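    The circular model-fitting step can be illustrated with an algebraic (Kåsa-style) least-squares circle fit to candidate boundary points; a minimal sketch on synthetic points follows, under the assumption that a simple algebraic fit suffices for illustration.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit: x^2 + y^2 = a*x + b*y + c."""
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# synthetic vessel cross-section boundary with noise
t = np.linspace(0, 2 * np.pi, 50)
x = 3.0 + 1.2 * np.cos(t) + 0.05 * np.random.randn(50)
y = -1.0 + 1.2 * np.sin(t) + 0.05 * np.random.randn(50)
print(fit_circle(x, y))   # approx center (3, -1), radius 1.2
```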

  5. An interoperable standard system for the automatic generation and publication of the fire risk maps based on Fire Weather Index (FWI)

    Science.gov (United States)

    Julià Selvas, Núria; Ninyerola Casals, Miquel

    2015-04-01

    An automatic system has been implemented to predict fire risk in the Principality of Andorra, a small country located in the eastern Pyrenees mountain range, bordered by Catalonia and France; owing to its location, its landscape is a set of rugged mountains with an average elevation of around 2000 meters. The system is based on the Fire Weather Index (FWI), which consists of several components, each measuring a different aspect of fire danger, calculated from the values of the weather variables at midday. CENMA (Centre d'Estudis de la Neu i de la Muntanya d'Andorra) has a network of around 10 automatic meteorological stations, located in different places, peaks and valleys, that measure weather data such as relative humidity, wind direction and speed, surface temperature, rainfall and snow cover every ten minutes; these data are sent daily and automatically to the implemented system, where they are processed to filter out incorrect measurements and to homogenize measurement units. The data are then used to calculate all components of the FWI at midday at the level of each station, creating a database with the homogeneous measurements and the FWI components for each weather station. In order to extend and model these data over the whole Andorran territory and to obtain a continuous map, an interpolation method based on multiple regression with spline interpolation of the residuals has been implemented. This interpolation considers the FWI data as well as other relevant predictors such as latitude, altitude, global solar radiation and sea distance. The obtained values (maps) are validated using leave-one-out cross-validation. The discrete and continuous maps are rendered as tiled raster maps and published in a web portal conforming to the Open Geospatial Consortium (OGC) Web Map Service (WMS) standard. Metadata and other reference maps (fuel maps, topographic maps, etc) are also available from this geoportal.
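    A compact sketch of the interpolation scheme, multiple regression on the predictors followed by spline-type interpolation of the residuals, is given below. The station data are synthetic, and the paper's exact spline method may differ from the thin-plate-spline RBF used here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# synthetic stations: coordinates plus predictors (altitude, radiation, sea distance)
xy = rng.random((10, 2)) * 30.0            # km
predictors = rng.random((10, 3))
fwi = 5.0 + predictors @ np.array([3.0, -2.0, 1.0]) + rng.normal(0, 0.3, 10)

reg = LinearRegression().fit(predictors, fwi)
residuals = fwi - reg.predict(predictors)

# thin-plate-spline interpolation of the regression residuals over space
rbf = RBFInterpolator(xy, residuals, kernel="thin_plate_spline")

# prediction at a new grid cell = regression trend + interpolated residual
new_xy = np.array([[12.0, 7.5]])
new_predictors = rng.random((1, 3))        # would come from DEM/radiation maps
fwi_map_value = reg.predict(new_predictors) + rbf(new_xy)
print(fwi_map_value)
```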

  6. Automatic construction of subject-specific human airway geometry including trifurcations based on a CT-segmented airway skeleton and surface

    Science.gov (United States)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Wenzel, Sally E.; Lin, Ching-Long

    2016-01-01

    We propose a method to construct three-dimensional airway geometric models based on airway skeletons, or centerlines (CLs). Given a CT-segmented airway skeleton and surface, the proposed CL-based method automatically constructs subject-specific models that contain anatomical information regarding branches, include bifurcations and trifurcations, and extend from the trachea to terminal bronchioles. The resulting model can be anatomically realistic with the assistance of an image-based surface; alternatively a model with an idealized skeleton and/or branch diameters is also possible. This method systematically identifies and classifies trifurcations to successfully construct the models, which also provides the number and type of trifurcations for the analysis of the airways from an anatomical point of view. We applied this method to 16 normal and 16 severe asthmatic subjects using their computed tomography images. The average distance between the surface of the model and the image-based surface was 11% of the average voxel size of the image. The four most frequent locations of trifurcations were the left upper division bronchus, left lower lobar bronchus, right upper lobar bronchus, and right intermediate bronchus. The proposed method automatically constructed accurate subject-specific three-dimensional airway geometric models that contain anatomical information regarding branches using airway skeleton, diameters, and image-based surface geometry. The proposed method can construct (i) geometry automatically for population-based studies, (ii) trifurcations to retain the original airway topology, (iii) geometry that can be used for automatic generation of computational fluid dynamics meshes, and (iv) geometry based only on a skeleton and diameters for idealized branches. PMID:27704229

  7. Anatomical adaptations of aquatic mammals.

    Science.gov (United States)

    Reidenberg, Joy S

    2007-06-01

    This special issue of the Anatomical Record explores many of the anatomical adaptations exhibited by aquatic mammals that enable life in the water. Anatomical observations on a range of fossil and living marine and freshwater mammals are presented, including sirenians (manatees and dugongs), cetaceans (both baleen whales and toothed whales, including dolphins and porpoises), pinnipeds (seals, sea lions, and walruses), the sea otter, and the pygmy hippopotamus. A range of anatomical systems are covered in this issue, including the external form (integument, tail shape), nervous system (eye, ear, brain), musculoskeletal systems (cranium, mandible, hyoid, vertebral column, flipper/forelimb), digestive tract (teeth/tusks/baleen, tongue, stomach), and respiratory tract (larynx). Emphasis is placed on exploring anatomical function in the context of aquatic life. The following topics are addressed: evolution, sound production, sound reception, feeding, locomotion, buoyancy control, thermoregulation, cognition, and behavior. A variety of approaches and techniques are used to examine and characterize these adaptations, ranging from dissection, to histology, to electron microscopy, to two-dimensional (2D) and 3D computerized tomography, to experimental field tests of function. The articles in this issue are a blend of literature review and new, hypothesis-driven anatomical research, which highlight the special nature of anatomical form and function in aquatic mammals that enables their exquisite adaptation for life in such a challenging environment. 2007 Wiley-Liss, Inc.

  8. Automatic NC-Data generation method for 5-axis cutting of turbine-blades by finding Safe heel-angles and adaptive path-intervals

    International Nuclear Information System (INIS)

    Piao, Cheng Dao; Lee, Cheol Soo; Cho, Kyu Zong; Park, Gwang Ryeol

    2004-01-01

    In this paper, an efficient method for generating 5-axis cutting data for a turbine blade is presented. Interference elimination in 5-axis cutting is currently very complicated and takes up a lot of time. The proposed method can generate an interference-free tool path within an allowance range. Generating cutting data point by point along the cutting process and using it to obtain NC data by calculating the feed rate allows the proper feed rate of the 5-axis machine to be maintained. This paper includes the algorithms for: (1) CL data generation by detecting an interference-free heel angle, (2) finding the optimal tool path interval considering the cusp height, (3) finding the adaptive feed rate values for each cutter path, and (4) the inverse kinematics, depending on the structure of the 5-axis machine, for generating the NC data
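    For a ball-end cutter on a locally flat patch, the cusp height h, tool radius R and path interval s are related by h = R − sqrt(R² − (s/2)²), so the interval respecting a cusp tolerance is s = 2·sqrt(h(2R − h)). The sketch below applies this flat-surface approximation; the paper's adaptive interval on curved blade surfaces refines this, and the numbers are placeholders.

```python
import math

def path_interval(tool_radius, cusp_height):
    """Path interval for a ball-end cutter on a flat patch given a cusp tolerance."""
    return 2.0 * math.sqrt(cusp_height * (2.0 * tool_radius - cusp_height))

# hypothetical 10 mm ball-end cutter, 0.01 mm allowed cusp height
print(path_interval(tool_radius=5.0, cusp_height=0.01))   # ~0.63 mm stepover
```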

  9. Feature-based morphometry: discovering group-related anatomical patterns.

    Science.gov (United States)

    Toews, Matthew; Wells, William; Collins, D Louis; Arbel, Tal

    2010-02-01

    This paper presents feature-based morphometry (FBM), a new fully data-driven technique for discovering patterns of group-related anatomical structure in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between subjects, FBM explicitly aims to identify distinctive anatomical patterns that may only be present in subsets of subjects, due to disease or anatomical variability. The image is modeled as a collage of generic, localized image features that need not be present in all subjects. Scale-space theory is applied to analyze image features at the characteristic scale of underlying anatomical structures, instead of at arbitrary scales such as global or voxel-level. A probabilistic model describes features in terms of their appearance, geometry, and relationship to subject groups, and is automatically learned from a set of subject images and group labels. Features resulting from learning correspond to group-related anatomical structures that can potentially be used as image biomarkers of disease or as a basis for computer-aided diagnosis. The relationship between features and groups is quantified by the likelihood of feature occurrence within a specific group vs. the rest of the population, and feature significance is quantified in terms of the false discovery rate. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer's (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and an equal error classification rate of 0.80 is achieved for subjects aged 60-80 years exhibiting mild AD (CDR=1). Copyright (c) 2009 Elsevier Inc. All rights reserved.
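    Feature significance via the false discovery rate is typically controlled with the Benjamini–Hochberg procedure; a minimal sketch follows (the p-values are invented, and the paper's exact FDR computation is not reproduced).

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of discoveries at FDR level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest rank passing the test
        keep[order[:k + 1]] = True
    return keep

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.74]
print(benjamini_hochberg(pvals, q=0.05))     # first two survive at q = 0.05
```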

  10. On the new anatomical nomenclature.

    Science.gov (United States)

    Marecková, E; Simon, F; Cervený, L

    2001-05-01

    The present paper is concerned with the linguistic aspect of the new anatomical nomenclature (Terminologia Anatomica 1998). Orthographic, morphological, syntactic, lexical, and terminological comments are presented. In the authors' opinion, shortcomings might have been effectively avoided by cooperation with linguists.

  11. Anatomic Preformed Post: Case Report

    OpenAIRE

    Lamas Lara, César; Cirujano Dentista, Docente del Área de Operatoria Dental y Endodoncia de la Facultad de OdontoIogía de la UNMSM.; Alvarado Menacho, Sergio; Cirujano Dentista, Especialista en Rehabilitación Oral, Profesor Asociado del Área de Prótesis y Oclusión de la Facultad de Odontología de la UNMSM.; Pari Espinoza, Rosa; Alumna del 5to año de Odontología de la UNMSM.

    2014-01-01

    Nowadays, preformed posts are used frequently, but they do not follow the root canal anatomy. Obtaining a form better adapted to the root canal and reducing the cement space would help reduce the possibility of post dislodgement. This article details the process of making an anatomical preformed post and the advantages that its clinical use would represent.

  12. Implementation of an Automatic System for the Monitoring of Start-up and Operating Regimes of the Cooling Water Installations of a Hydro Generator

    Directory of Open Access Journals (Sweden)

    Ioan Pădureanu

    2015-07-01

    Full Text Available The safe operation of a hydro generator depends on its thermal regime, the basic condition being that the temperature in the stator winding falls within the limits of the insulation class. As the copper losses depend on the square of the current in the stator winding, it is necessary that the cooling water flow rate be adapted to the values of these losses, so that the winding temperature falls within the range of values prescribed in the specifications. This paper presents an efficient solution for controlling and monitoring the water cooling installations of two high-power hydro generators.
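    The coupling described, copper losses growing with the square of the stator current and the cooling water flow adapted to remove them, can be captured with the basic heat-balance relation Q = P / (ρ·c·ΔT). A toy sketch with placeholder machine constants, not values from the installation:

```python
def required_flow(stator_current, r_winding=0.02, dT=10.0,
                  rho=1000.0, c=4186.0):
    """Cooling water flow (m^3/s) removing I^2*R copper losses at a dT rise.
    Machine constants here are illustrative placeholders."""
    p_loss = 3.0 * stator_current**2 * r_winding      # three-phase copper losses, W
    return p_loss / (rho * c * dT)

for i_stator in (500.0, 1000.0, 2000.0):              # amps
    print(i_stator, "A ->", required_flow(i_stator) * 1000.0, "l/s")
```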

  13. An Automatic Mosaicking Algorithm for the Generation of a Large-Scale Forest Height Map Using Spaceborne Repeat-Pass InSAR Correlation Magnitude

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2015-05-01

    Full Text Available This paper describes an automatic mosaicking algorithm for creating large-scale mosaic maps of forest height. In contrast to existing mosaicking approaches using SAR backscatter power and/or InSAR phase, this paper utilizes the forest height estimates that are inverted from spaceborne repeat-pass cross-pol InSAR correlation magnitude. By using repeat-pass InSAR correlation measurements that are dominated by temporal decorrelation, it has been shown that a simplified inversion approach can be utilized to create a height-sensitive measure over the whole interferometric scene, where two scene-wide fitting parameters are able to characterize the mean behavior of the random motion and dielectric changes of the volume scatterers within the scene. In order to combine these single-scene results into a mosaic, a matrix formulation is used with nonlinear least squares and observations in adjacent-scene overlap areas to create a self-consistent estimate of forest height over the larger region. This automated mosaicking method has the benefit of suppressing the global fitting error and, thus, mitigating the “wallpapering” problem of the manual mosaicking process. The algorithm is validated over the U.S. state of Maine by using InSAR correlation magnitude data from ALOS/PALSAR and comparing the inverted forest height with Laser Vegetation Imaging Sensor (LVIS) height and National Biomass and Carbon Dataset (NBCD) basal area weighted (BAW) height. This paper serves as a companion work to previously demonstrated results, the combination of which is meant to be an observational prototype for NASA’s DESDynI-R (now called NISAR) and JAXA’s ALOS-2 satellite missions.
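    The matrix-based mosaicking step can be illustrated with a toy adjustment: per-scene correction parameters are estimated jointly by minimizing forest-height disagreement in scene overlaps, with one scene fixed as reference. This is a deliberate simplification of the paper's nonlinear two-parameter-per-scene formulation, with invented overlap data.

```python
import numpy as np
from scipy.optimize import least_squares

# synthetic overlap observations: (scene_i, scene_j, mean height mismatch h_i - h_j)
overlaps = [(0, 1, -1.8), (1, 2, 0.7), (0, 2, -1.2), (2, 3, 2.1)]
n_scenes = 4

def residuals(offsets_free):
    # scene 0 is the reference; its offset is pinned to zero
    o = np.concatenate([[0.0], offsets_free])
    # corrected heights h_i + o_i should agree in every overlap
    return [dh + o[i] - o[j] for i, j, dh in overlaps]

sol = least_squares(residuals, x0=np.zeros(n_scenes - 1))
print("per-scene corrections:", np.concatenate([[0.0], sol.x]))
```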

  14. A Reinforcement Learning Model Equipped with Sensors for Generating Perception Patterns: Implementation of a Simulated Air Navigation System Using ADS-B (Automatic Dependent Surveillance-Broadcast Technology

    Directory of Open Access Journals (Sweden)

    Santiago Álvarez de Toledo

    2017-01-01

    Full Text Available Over the last few decades, a number of reinforcement learning techniques have emerged, and different reinforcement learning-based applications have proliferated. However, such techniques tend to specialize in a particular field. This is an obstacle to their generalization and extrapolation to other areas. Besides, neither the reward-punishment (r-p) learning process nor the convergence of results is fast and efficient enough. To address these obstacles, this research proposes a general reinforcement learning model. This model is independent of input and output types and based on general bioinspired principles that help to speed up the learning process. The model is composed of a perception module based on sensors whose specific perceptions are mapped as perception patterns. In this manner, similar perceptions (even if perceived at different positions in the environment) are accounted for by the same perception pattern. Additionally, the model includes a procedure that statistically associates perception-action pattern pairs depending on the positive or negative results output by executing the respective action in response to a particular perception during the learning process. To do this, the model is fitted with a mechanism that reacts positively or negatively to particular sensory stimuli in order to rate results. The model is supplemented by an action module that can be configured depending on the maneuverability of each specific agent. The model has been applied in the air navigation domain, a field with strong safety restrictions, which led us to implement a simulated system equipped with the proposed model. Accordingly, the perception sensors were based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology, which is described in this paper. The results were quite satisfactory, and it outperformed traditional methods existing in the literature with respect to learning reliability and efficiency.

  15. A Reinforcement Learning Model Equipped with Sensors for Generating Perception Patterns: Implementation of a Simulated Air Navigation System Using ADS-B (Automatic Dependent Surveillance-Broadcast) Technology.

    Science.gov (United States)

    Álvarez de Toledo, Santiago; Anguera, Aurea; Barreiro, José M; Lara, Juan A; Lizcano, David

    2017-01-19

    Over the last few decades, a number of reinforcement learning techniques have emerged, and different reinforcement learning-based applications have proliferated. However, such techniques tend to specialize in a particular field. This is an obstacle to their generalization and extrapolation to other areas. Besides, neither the reward-punishment (r-p) learning process nor the convergence of results is fast and efficient enough. To address these obstacles, this research proposes a general reinforcement learning model. This model is independent of input and output types and based on general bioinspired principles that help to speed up the learning process. The model is composed of a perception module based on sensors whose specific perceptions are mapped as perception patterns. In this manner, similar perceptions (even if perceived at different positions in the environment) are accounted for by the same perception pattern. Additionally, the model includes a procedure that statistically associates perception-action pattern pairs depending on the positive or negative results output by executing the respective action in response to a particular perception during the learning process. To do this, the model is fitted with a mechanism that reacts positively or negatively to particular sensory stimuli in order to rate results. The model is supplemented by an action module that can be configured depending on the maneuverability of each specific agent. The model has been applied in the air navigation domain, a field with strong safety restrictions, which led us to implement a simulated system equipped with the proposed model. Accordingly, the perception sensors were based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology, which is described in this paper. The results were quite satisfactory, and it outperformed traditional methods existing in the literature with respect to learning reliability and efficiency.
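    The perception-pattern idea, mapping similar raw sensor readings onto one discrete pattern and statistically associating patterns with actions through rewards and punishments, can be sketched as a coarse-grained learning loop. Everything below (sensor model, actions, rewards) is invented for illustration and is not the authors' system.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
ACTIONS = ["climb", "descend", "hold"]

def perception_pattern(sensor_vec, bin_size=10.0):
    """Map continuous sensor readings (e.g. relative altitude, closing speed)
    to a discrete pattern shared by all similar perceptions."""
    return tuple((np.asarray(sensor_vec) // bin_size).astype(int))

q = defaultdict(float)                    # (pattern, action) -> learned value
alpha, eps = 0.1, 0.2

for episode in range(5000):
    sensors = rng.uniform(-50, 50, size=2)        # toy conflict geometry
    pattern = perception_pattern(sensors)
    if rng.random() < eps:                        # explore
        action = rng.choice(ACTIONS)
    else:                                         # exploit learned association
        action = max(ACTIONS, key=lambda a: q[(pattern, a)])
    # toy reward: climbing is right when the intruder is below, etc.
    correct = "climb" if sensors[0] < 0 else "descend"
    reward = 1.0 if action == correct else -1.0   # r-p style rating
    q[(pattern, action)] += alpha * (reward - q[(pattern, action)])
```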

  16. Anatomical influences on internally coupled ears in reptiles.

    Science.gov (United States)

    Young, Bruce A

    2016-10-01

    Many reptiles, and other vertebrates, have internally coupled ears in which a patent anatomical connection allows pressure waves generated by the displacement of one tympanic membrane to propagate (internally) through the head and, ultimately, influence the displacement of the contralateral tympanic membrane. The pattern of tympanic displacement caused by this internal coupling can give rise to novel sensory cues. The auditory mechanics of reptiles exhibit more anatomical variation than in any other vertebrate group. This variation includes structural features such as diverticula and septa, as well as coverings of the tympanic membrane. Many of these anatomical features would likely influence the functional significance of the internal coupling between the tympanic membranes. Several of the anatomical components of the reptilian internally coupled ear are under active motor control, suggesting that in some reptiles the auditory system may be more dynamic than previously recognized.

  17. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be [Department of Anatomy, Ghent University, Ghent (Belgium); Department of Radiotherapy, Ghent University, Ghent (Belgium); Wouters, Johan [Department of Anatomy, Ghent University, Ghent (Belgium); Vercauteren, Tom; De Gersem, Werner; Duprez, Fréderic; De Neve, Wilfried [Department of Radiotherapy, Ghent University, Ghent (Belgium); Van Hoof, Tom [Department of Anatomy, Ghent University, Ghent (Belgium)

    2015-07-01

    Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters, on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP with the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.

  18. Anatomical pathology is dead? Long live anatomical pathology.

    Science.gov (United States)

    Nicholls, John M; Francis, Glenn D

    2011-10-01

    The standard diagnostic instrument used for over 150 years by anatomical pathologists has been the optical microscope and glass slide. The advent of immunohistochemistry in the routine laboratory in the 1980s, followed by in situ hybridisation in the 1990s, increased the armamentarium available to the diagnostic pathologist, and this technology has led to changed patient management in a limited number of neoplastic diseases. The first decade of the 21st century has seen an increasing number of publications using proteomic technologies that promise to change disease diagnosis and management, the traditional role of the anatomical pathologist. Despite the plethora of publications on proteomics and pathology, to date there are limited data showing that proteomic technologies are of greater diagnostic value than the standard histological slide. Though proteomic techniques will become more prevalent in the future, the expertise of an anatomical pathologist will be needed to dissect out and validate this added information.

  19. A hierarchical scheme for geodesic anatomical labeling of airway trees.

    Science.gov (United States)

    Feragen, Aasa; Petersen, Jens; Owen, Megan; Lo, Pechin; Thomsen, Laura H; Wille, Mathilde M W; Dirksen, Asger; de Bruijne, Marleen

    2012-01-01

    We present a fast and robust supervised algorithm for labeling anatomical airway trees, based on geodesic distances in a geometric tree-space. Possible branch label configurations for a given tree are evaluated based on distances to a training set of labeled trees. In tree-space, the tree topology and geometry change continuously, giving a natural way to automatically handle anatomical differences and noise. The algorithm is made efficient using a hierarchical approach, in which labels are assigned from the top down. We only use features of the airway centerline tree, which are relatively unaffected by pathology. A thorough leave-one-patient-out evaluation of the algorithm is made on 40 segmented airway trees from 20 subjects labeled by 2 medical experts. We evaluate accuracy, reproducibility and robustness in patients with chronic obstructive pulmonary disease (COPD). Performance is statistically similar to the inter- and intra-expert agreement, and we found no significant correlation between COPD stage and labeling accuracy.

  20. Automatic fluid dispenser

    Science.gov (United States)

    Sakellaris, P. C. (Inventor)

    1977-01-01

    Fluid automatically flows to individual dispensing units at predetermined times from a fluid supply and is available only for a predetermined interval of time after which an automatic control causes the fluid to drain from the individual dispensing units. Fluid deprivation continues until the beginning of a new cycle when the fluid is once again automatically made available at the individual dispensing units.

  1. Automatic segmentation of male pelvic anatomy on computed tomography images: a comparison with multiple observers in the context of a multicentre clinical trial

    International Nuclear Information System (INIS)

    Geraghty, John P; Grogan, Garry; Ebert, Martin A

    2013-01-01

    This study investigates the variation in segmentation of several pelvic anatomical structures on computed tomography (CT) between multiple observers and a commercial automatic segmentation method, in the context of quality assurance and evaluation during a multicentre clinical trial. CT scans of two prostate cancer patients (‘benchmarking cases’), one high risk (HR) and one intermediate risk (IR), were sent to multiple radiotherapy centres for segmentation of prostate, rectum and bladder structures according to the TROG 03.04 “RADAR” trial protocol definitions. The same structures were automatically segmented using iPlan software for the same two patients, allowing structures defined by automatic segmentation to be quantitatively compared with those defined by multiple observers. A sample of twenty trial patient datasets were also used to automatically generate anatomical structures for quantitative comparison with structures defined by individual observers for the same datasets. There was considerable agreement amongst all observers and automatic segmentation of the benchmarking cases for bladder (mean spatial variations < 0.4 cm across the majority of image slices). Although there was some variation in interpretation of the superior-inferior (cranio-caudal) extent of rectum, human-observer contours were typically within a mean 0.6 cm of automatically-defined contours. Prostate structures were more consistent for the HR case than the IR case with all human observers segmenting a prostate with considerably more volume (mean +113.3%) than that automatically segmented. Similar results were seen across the twenty sample datasets, with disagreement between iPlan and observers dominant at the prostatic apex and superior part of the rectum, which is consistent with observations made during quality assurance reviews during the trial. This study has demonstrated quantitative analysis for comparison of multi-observer segmentation studies. For automatic segmentation

  2. Automatic schema evolution in Root

    International Nuclear Information System (INIS)

    Brun, R.; Rademakers, F.

    2001-01-01

    ROOT version 3 (spring 2001) supports automatic class schema evolution. In addition, this version produces files that are self-describing. This is achieved by storing in each file a record with the description of all the persistent classes in the file. Being self-describing guarantees that a file can always be read later, its structure browsed and its objects inspected, even when the library with the compiled code of these classes is missing. The schema evolution mechanism supports the frequent case in which multiple data sets generated with many different class versions must be analyzed in the same session. ROOT supports the automatic generation of C++ code describing the data objects in a file
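
    A minimal PyROOT sketch of the self-describing mechanism (the file name is hypothetical; TFile::MakeProject generates C++ code for the persistent classes stored in the file):

        import ROOT

        f = ROOT.TFile.Open("data.root")   # self-describing file from elsewhere
        f.ShowStreamerInfo()               # dump the stored class descriptions
        # generate (and compile) C++ headers/sources for the file's classes,
        # so the data can be browsed without the original libraries
        f.MakeProject("generated_classes", "*", "recreate++")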

  3. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    Energy Technology Data Exchange (ETDEWEB)

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J. (Elemental Technologies, American Fort, UT)

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.
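
    The 2D-rather-than-3D idea can be illustrated with a toy Python sketch (ours, not SummitView's C++ classes): each process step is a Boolean operation between a designer mask and the running 2D footprint of a layer, with thickness tracked separately:

        from shapely.geometry import box

        wafer = box(0, 0, 100, 100)            # substrate footprint
        mask = box(20, 20, 60, 60)             # hypothetical designer mask
        layers = []                            # (footprint polygon, thickness)

        def planar_deposition(thickness):
            layers.append((wafer, thickness))  # deposit over the whole wafer

        def planar_etch(mask_2d):
            footprint, t = layers.pop()        # etch where the mask is open,
            layers.append((footprint.difference(mask_2d), t))  # 2D Boolean op

        planar_deposition(2.0)
        planar_etch(mask)
        print(layers[0][0].area)               # remaining footprint: 8400.0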

  4. Automatically-Programed Machine Tools

    Science.gov (United States)

    Purves, L.; Clerman, N.

    1985-01-01

    Software produces cutter location files for numerically-controlled machine tools. APT, an acronym for Automatically Programed Tools, is among the most widely used software systems for computerized machine tools. APT was developed for the explicit purpose of providing an effective software system for programming NC machine tools. The APT system includes the specification of the APT programming language and a language processor, which executes APT statements and generates the NC machine-tool motions specified by those statements.

  5. Unifying the analyses of anatomical and diffusion tensor images using volume-preserved warping

    DEFF Research Database (Denmark)

    Xu, Dongrong; Hao, Xuejun; Bansal, Ravi

    2007-01-01

    PURPOSE: To introduce a framework that automatically identifies regions of anatomical abnormality within anatomical MR images and uses those regions in hypothesis-driven selection of seed points for fiber tracking with diffusion tensor (DT) imaging (DTI). MATERIALS AND METHODS: Regions of interest (ROIs) are first extracted from MR images using an automated algorithm for volume-preserved warping (VPW) that identifies localized volumetric differences across groups. ROIs then serve as seed points for fiber tracking in coregistered DT images. Another algorithm automatically clusters and compares morphologies of detected fiber bundles. We tested our framework using datasets from a group of patients with Tourette's syndrome (TS) and normal controls. RESULTS: Our framework automatically identified regions of localized volumetric differences across groups and then used those regions as seed points …

  6. Employing anatomical knowledge in vertebral column labeling

    Science.gov (United States)

    Yao, Jianhua; Summers, Ronald M.

    2009-02-01

    The spinal column constitutes the central axis of the human torso and is often used by radiologists to reference the location of organs in the chest and abdomen. However, visually identifying and labeling vertebrae is not trivial and can be time-consuming. This paper presents an approach to automatically label vertebrae based on two pieces of anatomical knowledge: one vertebra has at most two attached ribs, and ribs are attached only to thoracic vertebrae. The spinal column is first extracted by a hybrid method using the watershed algorithm, directed acyclic graph search and a four-part vertebra model. Then curved reformations in sagittal and coronal directions are computed, and aggregated intensity profiles along the spinal cord are analyzed to partition the spinal column into vertebrae. After that, candidates for rib bones are detected using features such as location, orientation, shape, size and density. Then a correspondence matrix is established to match ribs and vertebrae. The last vertebra (from thoracic to lumbar) with attached ribs is identified and labeled as T12. The rest of the vertebrae are labeled accordingly. The method was tested on 50 CT scans and successfully labeled 48 of them. The two failed cases were mainly due to rudimentary ribs.
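
    The rib-counting rule reduces to a very small routine once vertebrae and ribs have been matched; in this sketch the detected column is assumed to start at T1 (a simplification: real scans may also include cervical vertebrae):

        def label_vertebrae(rib_counts):
            """rib_counts: number of attached ribs per vertebra, top to bottom.
            The lowest vertebra with ribs is T12; lower ones are lumbar."""
            last_rib = max(i for i, n in enumerate(rib_counts) if n > 0)
            return [f"T{12 - (last_rib - i)}" if i <= last_rib
                    else f"L{i - last_rib}" for i in range(len(rib_counts))]

        # 14 detected vertebrae, the last two without ribs -> ..., T12, L1, L2
        print(label_vertebrae([2] * 12 + [0, 0]))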

  7. Automatic generation of zonal models to study air movement and temperature distribution in buildings; Generation automatique de modeles zonaux pour l'etude du comportement thermo-aeraulique des batiments

    Energy Technology Data Exchange (ETDEWEB)

    Musy, M.

    1999-07-01

    This study shows that it is possible to automatically build zonal models that predict air movement, temperature distribution and air quality throughout a building. Zonal models are based on a rough partitioning of the rooms; they are an intermediate approach between one-node models and CFD models. One-node models assume a homogeneous temperature in each room and for that reason cannot predict thermal comfort within a room, whereas CFD models require a great amount of simulation time. To achieve this aim, the zonal model was entirely reformulated as the connection of small sets of equations. The equations describe either the state of a sub-zone of the partitioning (such sets of equations are called 'cells') or the mass and energy transfers that occur between two sub-zones (in which case they are called 'interfaces'). Various 'cells' and 'interfaces' represent the different air flows that occur in buildings. They have all been translated into SPARK objects that form a model library. Building a simulation consists of choosing the appropriate models to represent the rooms and connecting them; this last stage has been automated, so the only thing the user has to do is supply the partitioning and choose the models to be implemented. The resulting set of equations is solved iteratively with SPARK. Results of simulations in 3D rooms are presented and compared with experimental data. Examples of zonal models are also given; they are applied to the study of a group of two rooms, a building, and a room with complex geometry. (author)
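
    As a toy illustration of the 'cell'/'interface' idea (far simpler than the SPARK model library described above; all coefficients are hypothetical), two connected zones can be solved by fixed-point iteration of their heat balances:

        h_if = 15.0    # W/K, inter-zone conductance (the 'interface')
        h_out = 5.0    # W/K, losses of each zone to the outside
        t_out = 5.0    # deg C, outdoor temperature
        q = [400.0, 0.0]   # W, internal gains per zone (heater in zone 0)

        t = [20.0, 20.0]
        for _ in range(200):   # Gauss-Seidel style fixed-point iteration
            t[0] = (q[0] + h_if * t[1] + h_out * t_out) / (h_if + h_out)
            t[1] = (q[1] + h_if * t[0] + h_out * t_out) / (h_if + h_out)
        print([round(x, 1) for x in t])   # steady-state zone temperatures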

  8. SubClonal Hierarchy Inference from Somatic Mutations: Automatic Reconstruction of Cancer Evolutionary Trees from Multi-region Next Generation Sequencing.

    Directory of Open Access Journals (Sweden)

    Noushin Niknafs

    2015-10-01

    Recent improvements in next-generation sequencing of tumor samples and the ability to identify somatic mutations at low allelic fractions have opened the way for new approaches to model the evolution of individual cancers. The power and utility of these models is increased when tumor samples from multiple sites are sequenced. Temporal ordering of the samples may provide insight into the etiology of both primary and metastatic lesions and rationalizations for tumor recurrence and therapeutic failures. Additional insights may be provided by temporal ordering of evolving subclones: cellular subpopulations with unique mutational profiles. Current methods for subclone hierarchy inference tightly couple the problem of temporal ordering with that of estimating the fraction of cancer cells harboring each mutation. We present a new framework that includes a rigorous statistical hypothesis test and a collection of tools that make it possible to decouple these problems, which we believe will enable substantial progress in the field of subclone hierarchy inference. The methods presented here can be flexibly combined with methods developed by others addressing either of these problems. We provide tools to interpret hypothesis test results, which inform phylogenetic tree construction, and we introduce the first genetic algorithm designed for this purpose. The utility of our framework is systematically demonstrated in simulations. For most tested combinations of tumor purity, sequencing coverage, and tree complexity, good power (≥ 0.8) can be achieved and Type 1 error is well controlled when at least three tumor samples are available from a patient. Using data from three published multi-region tumor sequencing studies of (murine) small cell lung cancer, acute myeloid leukemia, and chronic lymphocytic leukemia, in which the authors reconstructed subclonal phylogenetic trees by manual expert curation, we show how different configurations of our tools can …

  9. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  10. Anatomical structure of Polystichum Roth ferns rachises

    Directory of Open Access Journals (Sweden)

    Oksana V. Tyshchenko

    2012-03-01

    The morpho-anatomical characteristics of rachis cross sections of five Polystichum species are presented. The main and auxiliary anatomical features which help to distinguish the investigated species are revealed.

  11. Unification of Sinonasal Anatomical Terminology

    Directory of Open Access Journals (Sweden)

    Voegels, Richard Louis

    2015-07-01

    The advent of endoscopy and computed tomography at the beginning of the 1980s brought to rhinology a revival of the study of anatomy and physiology. In 1994, the International Conference of Sinus Disease was conceived because the official “Terminologia Anatomica”[1] had little information on detailed sinonasal anatomy; in addition, there was a lack of uniformity of terminology and definitions. After 20 years, a new conference has been held. The need to use the same terminology led the European Society of Rhinology to publish the “European Position Paper on the Anatomical Terminology of the Internal Nose and Paranasal Sinuses,” which can be accessed freely at www.rhinologyjournal.com. Professor Valerie Lund et al[2] wrote this document reviewing the anatomical terms and comparing them with the official “Terminologia Anatomica” in order to define the structures without eponyms, while respecting embryological development and, especially, universalizing and simplifying the terms. A must-read! The text's purpose lies beyond the review of anatomical terminology: it is to universalize the language used to refer to structures of the nasal and paranasal cavities. Information about the anatomy, based on an extensive review of the current literature, is arranged in just over 50 pages, which are direct and to the point. The publication should be pleasant reading for learners and teachers of rhinology. This text can be a starting point for seeking a universal terminology to be used in Brazil, converging with this new European proposal for a nomenclature that helps us communicate with our peers in Brazil and the rest of the world. The original text of the European Society of Rhinology provides English terms that avoid the use of Latin, and thus sidestep the many idiosyncratic national translations. It would be admirable if we created our own cross-cultural adaptation of this new suggested anatomical terminology.

  12. [Cellular subcutaneous tissue. Anatomic observations].

    Science.gov (United States)

    Marquart-Elbaz, C; Varnaison, E; Sick, H; Grosshans, E; Cribier, B

    2001-11-01

    We showed in a companion paper that the definition of the French "subcutaneous cellular tissue" varied considerably from the 18th to the end of the 20th centuries and has not yet reached a consensus. To address the anatomic reality of this "subcutaneous cellular tissue", we investigated the anatomic structures underlying the fat tissue in normal human skin. Sixty specimens were excised from the surface down to the deep structures (bone, muscle, cartilage) at different body sites of 3 cadavers from the Institut d'Anatomie Normale de Strasbourg. Samples were paraffin-embedded, stained and analysed with a binocular microscope taking ×1 photographs. Specimens were also excised and fixed after subcutaneous injection of India ink, after mechanical tissue splitting, and after performing artificial skin folds. The aspects of the deep parts of the skin varied greatly according to their anatomic localisation. Below the adipose tissue, we often found a lamellar fibrous layer which extended from the interlobular septa and contained horizontally distributed fat cells. No specific tissue below the hypodermis was observed. Artificial skin folds either concerned exclusively the dermis, when they were superficial, or included the hypodermis, but no specific structure was apparent in the center of the fold. India ink diffused to the adipose tissue, mainly along the septa, but did not localise in a specific subcutaneous compartment. This study shows that the histologic aspects of the deep part of the skin depend mainly on the anatomic localisation. Skin is composed of epidermis, dermis and hypodermis, and thus the hypodermis cannot be considered "subcutaneous". A fibrous lamellar structure in continuity with the interlobular septa, difficult to individualise, is often found under the fat lobules. This structure is a cleavage line, as is always the case with loose connective tissues, but belongs to the hypodermis (i.e. fat tissue). No specific tissue nor any virtual space was …

  13. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (comp.)

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.

  14. Automatic computation of 2D cardiac measurements from B-mode echocardiography

    Science.gov (United States)

    Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin

    2012-03-01

    We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which learn expert knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the measurement landmark points by utilizing the structure of the left ventricle, including the mitral valve and aortic valve. It refines the measurement landmark points using a pseudo anatomic M-mode image, generated by accumulating line images from the 2D parasternal long-axis view over time. Experimental results on a large volume of data show that the algorithm runs fast and is robust, with performance comparable to that of experts.
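
    The pseudo M-mode construction is easy to sketch: sample the same image line in every frame of the cine loop and stack the samples along time (synthetic frames and a hypothetical line position below):

        import numpy as np

        def pseudo_m_mode(frames, rows, cols):
            # one spatial line per frame, stacked into a (line, time) image
            return np.stack([f[rows, cols] for f in frames], axis=1)

        frames = [np.random.rand(240, 320) for _ in range(50)]  # cine loop
        rows = np.arange(40, 200)                # a vertical line of pixels...
        cols = np.full_like(rows, 160)           # ...at image column 160
        print(pseudo_m_mode(frames, rows, cols).shape)   # (160, 50)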

  15. Automatic control systems engineering

    International Nuclear Information System (INIS)

    Shin, Yun Gi

    2004-01-01

    This book describes automatic control for electrical and electronic systems, covering the history of automatic control, the Laplace transform, block diagrams and signal flow diagrams, electrometers, linearization of systems, the state space and state-space analysis of electric systems, sensors, hydraulic control systems, stability, the time response of linear dynamic systems, the concept of the root locus, procedures for drawing root loci, frequency response, and the design of control systems.

  16. Neural Bases of Automaticity

    Science.gov (United States)

    Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F.; Logan, Gordon D.

    2018-01-01

    Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that across practice processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this…

  17. Focusing Automatic Code Inspections

    NARCIS (Netherlands)

    Boogerd, C.J.

    2010-01-01

    Automatic Code Inspection tools help developers in early detection of defects in software. A well-known drawback of many automatic inspection approaches is that they yield too many warnings and require a clearer focus. In this thesis, we provide such focus by proposing two methods to prioritize

  18. Automatic differentiation of functions

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1990-06-01

    Automatic differentiation is a method of computing derivatives of functions to any order in any number of variables. The functions must be expressible as combinations of elementary functions. When evaluated at specific numerical points, the derivatives have no truncation error and are automatically found. The method is illustrated by simple examples. Source code in FORTRAN is provided
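
    A minimal forward-mode sketch with dual numbers (our Python illustration, rather than the report's FORTRAN) shows how derivative values propagate exactly through the chain rule:

        import math

        class Dual:
            """Carry (value, derivative) through elementary operations."""
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val,
                            self.der * o.val + self.val * o.der)  # product rule
            __rmul__ = __mul__

        def sin(x):   # elementary function with its chain-rule derivative
            return Dual(math.sin(x.val), math.cos(x.val) * x.der)

        x = Dual(2.0, 1.0)            # seed dx/dx = 1
        y = x * sin(x) + 3 * x
        print(y.val, y.der)           # f(2) and f'(2) = sin(2) + 2*cos(2) + 3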

  19. AUTOMATIC INTRAVENOUS DRIP CONTROLLER*

    African Journals Online (AJOL)

    Both the nursing staff shortage and the need for precise control in the administration of dangerous drugs intra- venously have led to the development of various devices to achieve an automatic system. The continuous automatic control of the drip rate eliminates errors due to any physical effect such as movement of the ...

  20. Utilization management in anatomic pathology.

    Science.gov (United States)

    Lewandrowski, Kent; Black-Schaffer, Steven

    2014-01-01

    There is relatively little published literature concerning utilization management in anatomic pathology. Nonetheless there are many utilization management opportunities that currently exist and are well recognized. Some of these impact only the cost structure within the pathology department itself whereas others reduce charges for third party payers. Utilization management may result in medical legal liabilities for breaching the standard of care. For this reason it will be important for pathology professional societies to develop national utilization guidelines to assist individual practices in implementing a medically sound approach to utilization management.

  1. Methodology for Automatic Generation of Models for Large Urban Spaces Based on GIS Data/Metodología para la generación automática de modelos de grandes espacios urbanos desde información SIG/

    Directory of Open Access Journals (Sweden)

    Sergio Arturo Ordóñez Medina

    2012-12-01

    In the planning and evaluation stages of infrastructure projects, it is necessary to manage huge quantities of information. Cities are very complex systems, which need to be modeled when an intervention is required. Such models allow us to measure the impact of infrastructure changes, simulating hypothetical scenarios and evaluating results. This paper describes a methodology for the automatic generation of urban space models from GIS sources. A Voronoi diagram is used to partition large urban regions and subsequently define zones of interest. Finally, some examples of application models are presented, one used for microsimulation of traffic and another for air pollution simulation.
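
    The Voronoi partitioning step can be sketched with scipy (random seed points stand in for real GIS features):

        import numpy as np
        from scipy.spatial import Voronoi

        seeds = np.random.rand(30, 2) * [5000.0, 5000.0]  # seed points, metres
        vor = Voronoi(seeds)            # one Voronoi region per seed point

        def zone_of(xy):                # assign a location to its nearest seed
            return int(np.argmin(np.linalg.norm(seeds - xy, axis=1)))

        print(len(vor.regions), zone_of(np.array([1200.0, 3400.0])))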

  2. Image matching as a data source for forest inventory - Comparison of Semi-Global Matching and Next-Generation Automatic Terrain Extraction algorithms in a typical managed boreal forest environment

    Science.gov (United States)

    Kukkonen, M.; Maltamo, M.; Packalen, P.

    2017-08-01

    Image matching is emerging as a compelling alternative to airborne laser scanning (ALS) as a data source for forest inventory and management. There is currently an open discussion in the forest inventory community about whether, and to what extent, the new method can be applied to practical inventory campaigns. This paper aims to contribute to this discussion by comparing two different image matching algorithms (Semi-Global Matching [SGM] and Next-Generation Automatic Terrain Extraction [NGATE]) and ALS in a typical managed boreal forest environment in southern Finland. Spectral features from unrectified aerial images were included in the modeling and the potential of image matching in areas without a high resolution digital terrain model (DTM) was also explored. Plot level predictions for total volume, stem number, basal area, height of basal area median tree and diameter of basal area median tree were modeled using an area-based approach. Plot level dominant tree species were predicted using a random forest algorithm, also using an area-based approach. The statistical difference between the error rates from different datasets was evaluated using a bootstrap method. Results showed that ALS outperformed image matching with every forest attribute, even when a high resolution DTM was used for height normalization and spectral information from images was included. Dominant tree species classification with image matching achieved accuracy levels similar to ALS regardless of the resolution of the DTM when spectral metrics were used. Neither of the image matching algorithms consistently outperformed the other, but there were noticeably different error rates depending on the parameter configuration, spectral band, resolution of DTM, or response variable. This study showed that image matching provides reasonable point cloud data for forest inventory purposes, especially when a high resolution DTM is available and information from the understory is redundant.
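
    A paired bootstrap of the kind used to compare plot-level error rates can be sketched as follows (synthetic errors; not the authors' exact procedure):

        import numpy as np

        rng = np.random.default_rng(0)
        err_a = rng.normal(18.0, 5.0, 200)   # synthetic plot errors, ALS
        err_b = rng.normal(21.0, 6.0, 200)   # synthetic errors, image matching

        obs_diff = err_b.mean() - err_a.mean()
        boot = []
        for _ in range(10000):
            idx = rng.integers(0, len(err_a), len(err_a))   # resample plots
            boot.append(err_b[idx].mean() - err_a[idx].mean())
        low, high = np.percentile(boot, [2.5, 97.5])
        print(f"diff = {obs_diff:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")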

  3. Automatic creation of simulation configuration

    International Nuclear Information System (INIS)

    Oudot, G.; Poizat, F.

    1993-01-01

    SIPA, which stands for 'Simulator for Post Accident', includes: 1) a sophisticated software-oriented workshop, SWORD (which stands for 'Software Workshop Oriented towards Research and Development'), designed in the Ada language and including an integrated CAD system and software tools for the automatic generation of simulation software and of the man-machine interface used to operate run-time simulations; 2) a 'simulator structure' based on hardware equipment and software for supervision and communications; 3) simulation configurations generated by SWORD, operated under the control of the 'simulator structure' and run on a target computer. SWORD has already been used to generate two simulation configurations (French 900 MW and 1300 MW nuclear power plants), which are now fully operational on the SIPA training simulator. (Z.S.) 1 ref

  4. Automatic Test Systems Acquisition

    National Research Council Canada - National Science Library

    1994-01-01

    We are providing this final memorandum report for your information and use. This report discusses the efforts to achieve commonality in standards among the Military Departments as part of the DoD policy for automatic test systems (ATS...

  5. Automatic requirements traceability

    OpenAIRE

    Andžiulytė, Justė

    2017-01-01

    This paper focuses on automatic requirements traceability and algorithms that automatically find recommendation links for requirements. The main objective of this paper is the evaluation of these algorithms and the preparation of a method defining which algorithms to use in different cases. This paper presents and examines probabilistic, vector space and latent semantic indexing models of information retrieval and association rule mining, using the author's own implementations of these algorithms and o…

  6. Position automatic determination technology

    International Nuclear Information System (INIS)

    1985-10-01

    This book covers methods of position determination and their characteristics; control methods for position determination and design considerations; sensor selection for position detectors; position determination in digital control systems; the application of clutches and brakes in high-frequency position determination; automation techniques for position determination; position determination by electromagnetic clutch and brake, air cylinder, cam and solenoid; and stop-position control of automated guided vehicles, stacker cranes and automatic transfer control.

  7. ANATOMIC STRUCTURE OF CAMPANULA ROTUNDIFOLIA L. GRASS

    OpenAIRE

    V. N. Bubenchikova; E. A. Nikitin

    2017-01-01

    The article presents the results of a study of the anatomic structure of Campanula rotundifolia grass, from the Campanulaceae family. Despite its wide distribution and application in folk medicine, there are no data about its anatomic structure; therefore, to estimate the indices of authenticity and quality of the raw material, it is necessary first to develop microdiagnostic features, which could help introduce this plant into medical practice. The purpose of this work is to study the anatomical s…

  8. MR urography: Anatomical and quantitative information on ...

    African Journals Online (AJOL)

    MR urography: Anatomical and quantitative information on congenital malformations in children. Maria Karaveli, Dimitrios Katsanidis, Ioannis Kalaitzoglou, Afroditi Haritanti, Anastasios Sioundas, Athanasios Dimitriadis, Kyriakos Psarrakos ...

  9. Reduction of Dutch Sentences for Automatic Subtitling

    NARCIS (Netherlands)

    Tjong Kim Sang, E.F.; Daelemans, W.; Höthker, A.

    2004-01-01

    We compare machine learning approaches for sentence length reduction for automatic generation of subtitles for deaf and hearing-impaired people with a method which relies on hand-crafted deletion rules. We describe building the necessary resources for this task: a parallel corpus of examples of news

  10. Reliability and effectiveness of clickthrough data for automatic image annotation

    NARCIS (Netherlands)

    Tsikrika, T.; Diou, C.; De Vries, A.P.; Delopoulos, A.

    2010-01-01

    Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the expensive

  11. Reliability and effectiveness of clickthrough data for automatic image annotation

    NARCIS (Netherlands)

    T. Tsikrika (Theodora); C. Diou; A.P. de Vries (Arjen); A. Delopoulos

    2010-01-01

    Automatic image annotation using supervised learning is performed by concept classifiers trained on labelled example images. This work proposes the use of clickthrough data collected from search logs as a source for the automatic generation of concept training data, thus avoiding the

  12. Geodesic atlas-based labeling of anatomical trees

    DEFF Research Database (Denmark)

    Feragen, Aasa; Petersen, Jens; Owen, Megan

    2015-01-01

    In tree-space, tree topology and geometry change continuously, giving a natural automatic handling of anatomical differences and noise. A hierarchical approach makes the algorithm efficient, assigning labels from the trachea and downwards. Only the airway centerline tree is used, which is relatively unaffected by pathology. The algorithm is evaluated on 80 segmented airway trees from 40 subjects at two time points, labeled by 3 medical experts each, testing accuracy, reproducibility and robustness in patients with Chronic Obstructive Pulmonary Disease (COPD). The accuracy of the algorithm is statistically similar to that of the experts and not significantly correlated with COPD severity. The reproducibility of the algorithm is significantly better than that of the experts, and negatively correlated with COPD severity. Evaluation of the algorithm on a longitudinal set of 8724 trees from a lung cancer screening trial shows …

  13. Statistical, Morphometric, Anatomical Shape Model (Atlas) of Calcaneus

    Science.gov (United States)

    Melinska, Aleksandra U.; Romaszkiewicz, Patryk; Wagel, Justyna; Sasiadek, Marek; Iskander, D. Robert

    2015-01-01

    The aim was to develop a morphometric and anatomically accurate atlas (statistical shape model) of the calcaneus. The model is based on 18 left foot and 18 right foot computed tomography studies of 28 male individuals aged from 17 to 62 years, with no known foot pathology. The procedure for automatic atlas generation included extraction and identification of common features, averaging of feature positions, derivation of the mean geometry, mathematical shape description and variability analysis. Expert manual assistance was included for the model to fulfil the accuracy sought by medical professionals. The proposed statistical shape model of the calcaneus, the first of its kind, could be of value in many orthopaedic applications, including providing support in diagnosing pathological lesions, pre-operative planning, classification and treatment of calcaneus fractures, as well as the development of future implant procedures. PMID:26270812
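
    The core of such a model is a principal component analysis over corresponding landmarks; a minimal sketch with stand-in data (real models first require careful landmark correspondence and alignment):

        import numpy as np

        rng = np.random.default_rng(1)
        shapes = rng.normal(size=(36, 500, 3))   # stand-in for aligned calcanei

        X = shapes.reshape(len(shapes), -1)      # one row per subject
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        var = s**2 / (len(X) - 1)                # variance per mode

        # synthesize a plausible new shape: mean + 2 std deviations of mode 1
        new_shape = (mean + 2.0 * np.sqrt(var[0]) * Vt[0]).reshape(500, 3)
        print(new_shape.shape, f"mode 1 explains {var[0] / var.sum():.1%}")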

  14. Design and use of numerical anatomical atlases for radiotherapy

    International Nuclear Information System (INIS)

    Commowick, O.

    2007-02-01

    The main objective of this thesis is to provide radio-oncology specialists with automatic tools for delineating the organs at risk of a patient undergoing radiotherapy treatment of cerebral or head and neck tumors. To achieve this goal, we use an anatomical atlas, i.e. a representative anatomy associated with a clinical image representing it. Registering this atlas onto a patient's image allows us to segment the patient's structures automatically and to accelerate the process. Our contributions to this method are presented along three axes. First, we seek a registration method that is as independent as possible of its parameter settings; this tuning, done by the clinician, needs to be minimal while guaranteeing a robust result. We therefore propose registration methods allowing better control of the obtained transformation, using techniques that reject inadequate matches, or locally affine transformations. The second axis is dedicated to accounting for structures associated with the presence of the tumor. These structures, absent from the atlas, lead to local errors in the atlas-based segmentation. We therefore propose methods to delineate these structures and take them into account in the registration. Finally, we present the construction of an anatomical atlas of the head and neck region and its evaluation on a database of patients. We show in this part the feasibility of using an atlas for this region, as well as a simple method to evaluate the registration methods used to build an atlas. All this research work has been implemented in a commercial software package (Imago from DOSIsoft), allowing us to validate our results in clinical conditions. (author)
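
    Reduced to its simplest form, the atlas idea is to register the atlas image to the patient image and propagate the atlas labels; a hedged SimpleITK sketch (rigid transform with a basic intensity metric, whereas the thesis uses more robust locally affine schemes; file names are hypothetical):

        import SimpleITK as sitk

        fixed = sitk.ReadImage("patient_ct.nii.gz", sitk.sitkFloat32)
        moving = sitk.ReadImage("atlas_ct.nii.gz", sitk.sitkFloat32)
        labels = sitk.ReadImage("atlas_labels.nii.gz")  # organ-at-risk labels

        init = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0,
                                          numberOfIterations=200)
        reg.SetInitialTransform(init)
        reg.SetInterpolator(sitk.sitkLinear)
        tx = reg.Execute(fixed, moving)

        # propagate atlas labels onto the patient grid (nearest neighbour)
        seg = sitk.Resample(labels, fixed, tx, sitk.sitkNearestNeighbor, 0,
                            labels.GetPixelID())
        sitk.WriteImage(seg, "patient_oar_labels.nii.gz")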

  15. Anatomical and palynological characteristics of Salvia willeana ...

    African Journals Online (AJOL)

    In this study, anatomical and palynological features of the roots, stems, petioles and leaves of Salvia willeana (Holmboe) Hedge and Salvia veneris Hedge, Salvia species endemic to Cyprus, were investigated. Among the anatomical characteristics of the stem structures, it was found that the chlorenchyma is composed of 6 or 7 rows of …

  16. Morphological and anatomical response of Acacia ehrenbergiana ...

    African Journals Online (AJOL)

    2012-02-20

    The response of Acacia ehrenbergiana Hayne and Acacia tortilis (Forssk) Haynes subspp. raddiana seedlings to 100, 50 and 25% field capacity (FC) watering regimes was studied to determine their morphological and anatomical behaviour. Both species responded morphologically as well as anatomically …

  17. An automatic virtual patient reconstruction from CT-scans for hepatic surgical planning.

    Science.gov (United States)

    Soler, L; Delingette, H; Malandain, G; Ayache, N; Koehl, C; Clément, J M; Dourthe, O; Marescaux, J

    2000-01-01

    PROBLEM/BACKGROUND: To assist hepatic surgical planning, we developed automatic 3D reconstruction of patients from conventional CT scans, together with interactive visualization and virtual resection tools. From a conventional abdominal CT scan, we have developed several methods allowing the automatic 3D reconstruction of skin, bones, kidneys, lung, liver, hepatic lesions, and vessels. These methods are based on deformable modeling or thresholding algorithms, followed by the application of mathematical morphology operators. From these anatomical and pathological models, we have developed a new framework for translating anatomical knowledge into geometrical and topological constraints. More precisely, our approach automatically delineates the hepatic and portal veins, labels the portal vein, and finally builds an anatomical segmentation of the liver based on the Couinaud definition, which is currently used by surgeons all over the world. Finally, we have developed a user-friendly interface for the 3D visualization of anatomical and pathological structures, the accurate evaluation of volumes and distances, and virtual hepatic resection along a user-defined cutting plane. A validation study on a database of 30 patients gives a precision of 2 mm for liver delineation and of less than 1 mm for the delineation of all other anatomical and pathological structures. An in vivo validation performed during surgery also showed that the anatomical segmentation is more precise than the delineation performed by a surgeon based on external landmarks. This surgical planning system has been routinely used by our medical partner, resulting in improved planning and performance of hepatic surgery procedures. We have developed new tools for hepatic surgical planning allowing better surgery through automatic delineation and visualization of anatomical and pathological structures. These tools represent a first step towards the development of an augmented …
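
    A minimal sketch of the threshold-then-morphology idea (not the authors' exact pipeline): binarize the volume, clean it with morphological opening and closing, and keep the largest connected component:

        import numpy as np
        from scipy import ndimage

        ct = np.random.normal(0, 40, size=(64, 128, 128))  # stand-in CT (HU)
        ct[20:40, 30:90, 30:90] += 120                     # bright 'organ'

        mask = ct > 80                                     # threshold
        mask = ndimage.binary_opening(mask, iterations=2)  # remove specks
        mask = ndimage.binary_closing(mask, iterations=2)  # fill small holes
        lab, n = ndimage.label(mask)                       # components
        sizes = ndimage.sum(mask, lab, range(1, n + 1))
        organ = lab == (1 + int(np.argmax(sizes)))         # largest component
        print(organ.sum(), "voxels retained")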

  18. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS), surveying the recent state of the art. The book presents the main problems of ADS, the difficulties involved, and the solutions provided by the community, along with recent advances in ADS as well as current applications and trends. The approaches covered are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of Automatic Document Summarization are not recent. Powerful algorithms have been developed …

  19. Automatic Ultrasound Scanning

    DEFF Research Database (Denmark)

    Moshavegh, Ramin

    … on the user adjustments on the scanner interface to optimize the scan settings. This explains the huge interest in the subject of this PhD project, entitled “AUTOMATIC ULTRASOUND SCANNING”. The key goals of the project have been to develop automated techniques to minimize the unnecessary settings on the scanners, and to improve the computer-aided diagnosis (CAD) in ultrasound by introducing new quantitative measures. Thus, four major issues concerning automation of the medical ultrasound are addressed in this PhD project. They touch upon gain adjustments in ultrasound, automatic synthetic aperture image …

  20. Towards automatic proofs of lock-free algorithms

    OpenAIRE

    Fejoz , Loïc; Merz , Stephan

    2008-01-01

    International audience; The verification of lock-free data structures has traditionally been considered as difficult. We propose a formal model for describing such algorithms. The verification conditions generated from this model can often be handled by automatic theorem provers.

  1. Neural network for automatic analysis of motility data

    DEFF Research Database (Denmark)

    Jakobsen, Erik; Kruse-Andersen, S; Kolberg, Jens Godsk

    1994-01-01

    … comparable. However, the neural network recognized pressure peaks clearly generated by muscular activity that had escaped detection by the conventional program. In conclusion, we believe that neurocomputing has potential advantages for automatic analysis of gastrointestinal motility data.

  2. Anatomical pathways involved in generating and sensing rhythmic whisker movements

    Directory of Open Access Journals (Sweden)

    Laurens W.J. Bosman

    2011-10-01

    The rodent whisker system is widely used as a model system for investigating sensorimotor integration, neural mechanisms of complex cognitive tasks, neural development, and robotics. The whisker pathways to the barrel cortex have received considerable attention. However, many subcortical structures are paramount to the whisker system. They contribute to important processes, like filtering out salient features, integration with other senses and adaptation of the whisker system to the general behavioral state of the animal. We present here an overview of the brain regions and their connections involved in the whisker system. We describe not only the anatomy and functional roles of the cerebral cortex, but also those of subcortical structures like the striatum, superior colliculus, cerebellum, pontomedullary reticular formation, zona incerta and anterior pretectal nucleus, as well as those of level-setting systems like the cholinergic, histaminergic, serotonergic and noradrenergic pathways. We conclude by discussing how these brain regions may affect each other and how they together may control the precise timing of whisker movements and coordinate whisker perception.

  3. Fast left ventricle tracking using localized anatomical affine optical flow.

    Science.gov (United States)

    Queirós, Sandro; Vilaça, João L; Morais, Pedro; Fonseca, Jaime C; D'hooge, Jan; Barbosa, Daniel

    2017-11-01

    In daily clinical cardiology practice, left ventricle (LV) global and regional function assessment is crucial for disease diagnosis, therapy selection, and patient follow-up. Currently, this is still a time-consuming task, consuming valuable human resources. In this work, a novel fast methodology for automatic LV tracking is proposed based on localized anatomically constrained affine optical flow. This novel method can be combined with previously proposed segmentation frameworks or with manually delineated surfaces at an initial frame to obtain fully delineated datasets and, thus, assess both global and regional myocardial function. Its feasibility and accuracy were investigated in 3 distinct public databases, namely in realistically simulated 3D ultrasound, clinical 3D echocardiography, and clinical cine cardiac magnetic resonance images. The method showed accurate tracking results in all databases, proving its applicability and accuracy for myocardial function assessment. Moreover, when combined with previous state-of-the-art segmentation frameworks, it outperformed previous tracking strategies in both 3D ultrasound and cardiac magnetic resonance data, automatically computing relevant cardiac indices with smaller biases and narrower limits of agreement compared to reference indices. Simultaneously, the proposed localized tracking method was shown to be suitable for online processing, even for 3D motion assessment. Importantly, although evaluated here for LV tracking only, this novel methodology is applicable to the tracking of other target structures with minimal adaptations.
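
    A hedged sketch of the general ingredient, tracking contour points between frames with pyramidal Lucas-Kanade optical flow and fitting an affine transform (OpenCV; the paper's method is localized and anatomically constrained, which this toy version is not):

        import cv2
        import numpy as np

        prev = np.random.randint(0, 255, (256, 256), np.uint8)  # frame t
        curr = np.roll(prev, 3, axis=1)                         # frame t+1

        pts = np.array([[[60.0, 120.0]], [[120.0, 80.0]], [[180.0, 130.0]]],
                       dtype=np.float32)                        # contour samples
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev, curr, pts, None, winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        if ok.sum() >= 3:                                       # affine needs 3+
            A, _ = cv2.estimateAffine2D(pts[ok], new_pts[ok])
            print(A)                    # 2x3 matrix, approx. a 3-pixel shift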

  4. Doses to organs at cerebral risks: optimization by robotized stereotaxic radiotherapy and automatic segmentation atlas versus three dimensional conformal radiotherapy; Doses aux organes a risque cerebraux: optimisation par radiotherapie stereotaxique robotisee et atlas de segmentation automatique versus radiotherapie conformationnelle tridimensionnelle

    Energy Technology Data Exchange (ETDEWEB)

    Bondiau, P.Y.; Thariat, J.; Benezery, K.; Herault, J.; Dalmasso, C.; Marcie, S. [Centre Antoine-Lacassagne, 06 - Nice (France); Malandain, G. [Institut National de Recherche en Informatique et en Automatique (INRIA), Sophia-Antipolis, 06 - Nice (France)

    2007-11-15

    Robotized stereotaxic radiotherapy with the fourth-generation CyberKnife allows dosimetric optimization with a high conformity index on the tumor while limiting radiation doses to organs at risk. An automatic anatomical segmentation atlas of cerebral organs at risk is used routinely in three dimensions. This study evaluated the superiority of stereotaxic radiotherapy over three-dimensional conformal radiotherapy in sparing organs at risk for a given dose delivered to the tumor, justifying accelerated hypofractionation and dose escalation. This automatic segmentation atlas should make it possible to establish correlations between anatomy and cerebral dosimetry, and it highlights the dosimetric optimization achieved by robotized stereotaxic radiotherapy for organs at risk. (N.C.)

  5. From medical imaging data to 3D printed anatomical models.

    Science.gov (United States)

    Bücking, Thore M; Hill, Emma R; Robertson, James L; Maneas, Efthymios; Plumb, Andrew A; Nikitichev, Daniil I

    2017-01-01

    Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and the increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computed Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.
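
    The segmentation-to-mesh step can be sketched with marching cubes and a hand-written ASCII STL export (a synthetic spherical 'organ' stands in for a real segmentation):

        import numpy as np
        from skimage import measure

        vol = np.zeros((60, 60, 60), dtype=np.float32)
        x, y, z = np.ogrid[:60, :60, :60]
        vol[(x - 30)**2 + (y - 30)**2 + (z - 30)**2 < 20**2] = 1.0

        verts, faces, normals, _ = measure.marching_cubes(vol, level=0.5)
        face_normals = normals[faces].mean(axis=1)   # average vertex normals
        with open("organ.stl", "w") as f:
            f.write("solid organ\n")
            for tri, nrm in zip(faces, face_normals):
                f.write(f"facet normal {nrm[0]} {nrm[1]} {nrm[2]}\n outer loop\n")
                for v in verts[tri]:
                    f.write(f"  vertex {v[0]} {v[1]} {v[2]}\n")
                f.write(" endloop\nendfacet\n")
            f.write("endsolid organ\n")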

  6. Reactor component automatic grapple

    International Nuclear Information System (INIS)

    Greenaway, P.R.

    1982-01-01

    A grapple for handling nuclear reactor components in a medium such as liquid sodium which, upon proper seating and alignment of the grapple with the component as sensed by a mechanical logic integral to the grapple, automatically seizes the component. The mechanical logic system also precludes seizure in the absence of proper seating and alignment. (author)

  7. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of input. We describe a system to derive such time bounds automatically using abstract interpretation.

  8. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Science.gov (United States)

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and by ≤0.005 for the brainstem compared to the use of a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by […]. Corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.

  9. Automatic welding machine for piping

    International Nuclear Information System (INIS)

    Yoshida, Kazuhiro; Koyama, Takaichi; Iizuka, Tomio; Ito, Yoshitoshi; Takami, Katsumi.

    1978-01-01

    A remotely controlled automatic special welding machine for piping was developed. This machine can be used effectively for long-distance pipelines, chemical plants, thermal power plants and nuclear power plants, from the viewpoint of quality control, labor reduction and controllability. The functions of this welding machine are to inspect the shape and dimensions of the edge preparation by tactile sensing before welding; to detect the temperature of the melt pool, inspect the bead form by tactile sensing and check the welding state by ITV during welding; and to grind the bead surface and inspect the weld metal by ultrasonic testing automatically after welding. The construction of this welding system, the main specifications of the apparatus, the welding procedure in detail, the electrical source of this welding machine, the cooling system, the structure and handling of the guide ring, the central control system and the operating characteristics are explained. The working procedure, the benefits of using this welding machine, and applications to nuclear power plants and other industrial fields are outlined. The HIDIC 08 is used as the controlling computer. This welding machine is useful for welding SUS piping as well as carbon steel piping. (Nakai, Y.)

  10. Automatic detection system for multiple region of interest registration to account for posture changes in head and neck radiotherapy

    Science.gov (United States)

    Mencarelli, A.; van Beek, S.; Zijp, L. J.; Rasch, C.; van Herk, M.; Sonke, J.-J.

    2014-04-01

    Despite immobilization of head and neck (H and N) cancer patients, considerable posture changes occur over the course of radiotherapy (RT). To account for the posture changes, we previously implemented a multiple regions of interest (mROIs) registration system tailored to the H and N region for image-guided RT correction strategies. This paper is focused on the automatic segmentation of the ROIs in the H and N region. We developed a fast and robust automatic detection system suitable for an online image-guided application and quantified its performance. The system was developed to segment nine high-contrast structures from the planning CT, including the cervical vertebrae, mandible, hyoid, manubrium of sternum, larynx and occipital bone. It generates nine 3D rectangular-shaped ROIs and informs the user in case of ambiguities. Two observers evaluated the robustness of the segmentation on 188 H and N cancer patients. Bland-Altman analysis was applied to a sub-group of 50 patients to compare the registration results using the automatically generated ROIs with those manually set by two independent experts. Finally, the time performance and workload were evaluated. Automatic detection of individual anatomical ROIs had a success rate of 97%/53% with/without user notifications, respectively. Following the notifications, for 38% of the patients one or more structures were manually adjusted. The processing time was on average 5 s. The limits of agreement between local registrations with manually and automatically set ROIs lay within ±1.4 mm, except for the manubrium of sternum (-1.71 mm and 1.67 mm), and were similar to the limits of agreement between the two experts. The workload to place the nine ROIs was reduced from 141 s (±20 s) for the manual procedure to 59 s (±17 s) using the automatic method. An efficient detection system to segment multiple ROIs was developed for Cone-Beam CT image-guided applications in the H and N region and is clinically implemented in …

  11. Automatic Differentiation and Deep Learning

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Statistical learning has been getting more and more interest from the particle-physics community in recent times, with neural networks and gradient-based optimization being a focus. In this talk we shall discuss three things: (i) automatic differentiation tools, i.e. tools to quickly build DAGs of computation that are fully differentiable; we shall focus on one such tool, "PyTorch"; (ii) easy deployment of trained neural networks into large systems with many constraints, for example deploying a model at the reconstruction phase, where the neural network has to be integrated into CERN's bulk data-processing C++-only environment; and (iii) some recent models in deep learning for segmentation and generation that might be useful for particle physics problems.
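
    A minimal PyTorch example of the first point: the computation DAG is recorded on the fly and reverse-mode automatic differentiation pulls gradients back through it (values are arbitrary):

        import torch

        x = torch.tensor([1.5, -0.7], requires_grad=True)
        w = torch.tensor([[0.2, -1.0], [0.5, 0.3]], requires_grad=True)
        y = torch.tanh(w @ x).sum()   # graph is built as the ops execute
        y.backward()                  # reverse-mode automatic differentiation
        print(x.grad)                 # dy/dx
        print(w.grad)                 # dy/dw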

  12. Spinning gland transcriptomics from two main clades of spiders (order: Araneae--insights on their molecular, anatomical and behavioral evolution.

    Directory of Open Access Journals (Sweden)

    Francisco Prosdocimi

    Characterized by distinctive evolutionary adaptations, spiders provide a comprehensive system for evolutionary and developmental studies of anatomical organs, including silk and venom production. Here we performed cDNA sequencing using massively parallel sequencers (454 GS-FLX Titanium) to generate ∼80,000 reads from the spinning glands of Actinopus spp. (infraorder: Mygalomorphae) and Gasteracantha cancriformis (infraorder: Araneomorphae, Orbiculariae clade). Actinopus spp. retains primitive characteristics in web usage and presents a single undifferentiated spinning gland, while the orbiculariae spiders have seven differentiated spinning glands and complex patterns of web usage. MIRA, Celera Assembler and CAP3 software were used to cluster NGS reads for each spider. CAP3 unigenes passed through a pipeline for automatic annotation, classification by biological function, and comparative transcriptomics. Genes related to spider silks were manually curated and analyzed. Although a single spidroin gene family was found in Actinopus spp., a vast repertoire of specialized spider silk proteins was encountered in orbiculariae. Astacin-like metalloproteases (meprin subfamily) were shown to be some of the most sampled unigenes and duplicated gene families in G. cancriformis since its evolutionary split from mygalomorphs. Our results confirm that the evolution of the molecular repertoire of silk proteins was accompanied by (i) the anatomical differentiation of spinning glands and (ii) behavioral complexification in web usage. Finally, a phylogenetic tree was constructed to cluster most of the known spidroins in gene clades. This is the first large-scale, multi-organism transcriptome for spider spinning glands and a first step into a broad understanding of spider web systems biology and evolution.

  13. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of automatic program derivation. The book also includes some papers … a renewed stimulus for continuing and deepening Bob's research visions. A familiar touch is given to the book by some pictures kindly provided to us by his wife Nieba, the personal recollections of his brother Gary and some of his colleagues and friends.

  14. Automaticity or active control

    DEFF Research Database (Denmark)

    Tudoran, Ana Alina; Olsen, Svein Ottar

    This study addresses the quasi-moderating role of habit strength in explaining action loyalty. A model of loyalty behaviour is proposed that extends the traditional satisfaction–intention–action loyalty network. Habit strength is conceptualised as a cognitive construct referring to the psychological aspects of the construct, such as routine, inertia, automaticity, or very little conscious deliberation. The data consist of 2962 consumers participating in a large European survey. The results show that habit strength significantly moderates the association between satisfaction and action loyalty and, respectively, between intended loyalty and action loyalty. At high levels of habit strength, consumers are more likely to free up cognitive resources and incline the balance from controlled to routine and automatic-like responses.

  15. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments...... and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices....

  16. Automatic Language Identification

    Science.gov (United States)

    2000-08-01

    [The abstract of this record is garbled in the source (interleaved two-column OCR and figure residue). The recoverable fragments indicate it concerns automatic language identification: training one model per language (e.g., French, German, Spanish) from sets of training utterances, automatically locating vowel-like segments in each speech utterance, and extracting feature vectors normalized to be insensitive to overall amplitude and pitch.]

  17. Medical Image Processing for Fully Integrated Subject Specific Whole Brain Mesh Generation

    Directory of Open Access Journals (Sweden)

    Chih-Yang Hsu

    2015-05-01

    Full Text Available Currently, anatomically consistent segmentation of vascular trees acquired with magnetic resonance imaging requires the use of multiple image processing steps, which, in turn, depend on manual intervention. In effect, segmentation of vascular trees from medical images is time-consuming and error-prone due to the tortuous geometry and weak signal in small blood vessels. To overcome errors and accelerate the image processing time, we introduce an automatic image processing pipeline for constructing subject-specific computational meshes for the entire cerebral vasculature, including segmentation of ancillary structures: the grey and white matter, cerebrospinal fluid space, skull, and scalp. To demonstrate the validity of the new pipeline, we segmented the entire intracranial compartment, with special attention to the angioarchitecture, from magnetic resonance images acquired for two healthy volunteers. The raw images were processed through our pipeline for automatic segmentation and mesh generation. Due to the partial volume effect and finite resolution, the computational meshes intersect with each other at their respective interfaces. To eliminate anatomically inconsistent overlap, we utilized morphological operations to separate the structures with physiologically sound gap spaces. The resulting meshes exhibit anatomically correct spatial extent and relative positions without intersections. For validation, we computed critical biometrics of the angioarchitecture, the cortical surfaces, the ventricular system, and the cerebrospinal fluid (CSF) spaces and compared them against literature values. Volumes and surface areas of the computational meshes were found to be in physiological ranges. In conclusion, we present an automatic image processing pipeline to automate the segmentation of the main intracranial compartments, including subject-specific vascular trees. These computational meshes can be used in 3D immersive visualization for diagnosis, surgery planning with haptics
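
    The overlap-removal step can be pictured with simple binary morphology. The sketch below is a minimal reading of that idea, assuming boolean voxel masks and a one-voxel gap; both are illustrative assumptions, not details from the record:

        import numpy as np
        from scipy import ndimage

        def separate(mask_a, mask_b, gap_voxels=1):
            """Remove the intersection of two masks and leave a small gap between them."""
            overlap = mask_a & mask_b
            grown = ndimage.binary_dilation(overlap, iterations=gap_voxels)
            return mask_a & ~grown, mask_b & ~grown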

  18. Learning-based stochastic object models for characterizing anatomical variations

    Science.gov (United States)

    Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua

    2018-03-01

    It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
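
    The record describes sampling learned shape distributions within the learned principal variations. A common concrete reading of that idea is PCA-style sampling of aligned shape vectors; the sketch below is that generic reading, not the authors' GAD formulation:

        import numpy as np

        def learn_shape_model(shapes):
            """shapes: (n_subjects, n_coords) array of aligned organ shape vectors."""
            mean = shapes.mean(axis=0)
            _, s, modes = np.linalg.svd(shapes - mean, full_matrices=False)
            stddev = s / np.sqrt(len(shapes) - 1)   # per-mode standard deviation
            return mean, modes, stddev

        def sample_shape(mean, modes, stddev, n_modes=5, clip=3.0, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            # clip coefficients to stay within the learned principal variations
            b = np.clip(rng.standard_normal(n_modes), -clip, clip) * stddev[:n_modes]
            return mean + b @ modes[:n_modes]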

  19. A multi-institution evaluation of deformable image registration algorithms for automatic organ delineation in adaptive head and neck radiotherapy

    Directory of Open Access Journals (Sweden)

    Hardcastle Nicholas

    2012-06-01

    Full Text Available Abstract Background Adaptive Radiotherapy aims to identify anatomical deviations during a radiotherapy course and modify the treatment plan to maintain treatment objectives. This requires regions of interest (ROIs) to be defined using the most recent imaging data. This study investigates the clinical utility of using deformable image registration (DIR) to automatically propagate ROIs. Methods Target (GTV) and organ-at-risk (OAR) ROIs were non-rigidly propagated from a planning CT scan to a per-treatment CT scan for 22 patients. Propagated ROIs were quantitatively compared with expert physician-drawn ROIs on the per-treatment scan using Dice scores and mean slicewise Hausdorff distances, and center of mass distances for GTVs. The propagated ROIs were qualitatively examined by experts and scored based on their clinical utility. Results Good agreement between the DIR-propagated ROIs and expert-drawn ROIs was observed based on the metrics used. 94% of all ROIs generated using DIR were scored as being clinically useful, requiring minimal or no edits. However, 27% (12/44) of the GTVs required major edits. Conclusion DIR was successfully used on 22 patients to propagate target and OAR structures for ART with good anatomical agreement for OARs. It is recommended that propagated target structures be thoroughly reviewed by the treating physician.
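
    For reference, the two overlap metrics named above are straightforward to compute from binary masks; a minimal sketch (mask layout and voxel spacing conventions are assumptions):

        import numpy as np

        def dice_score(a, b):
            """Dice overlap between two binary ROI masks."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * (a & b).sum() / denom if denom else 1.0

        def com_distance(a, b, spacing=(1.0, 1.0, 1.0)):
            """Distance between mask centroids, in physical units."""
            ca = np.array(np.nonzero(a)).mean(axis=1) * np.asarray(spacing)
            cb = np.array(np.nonzero(b)).mean(axis=1) * np.asarray(spacing)
            return float(np.linalg.norm(ca - cb))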

  20. A multi-institution evaluation of deformable image registration algorithms for automatic organ delineation in adaptive head and neck radiotherapy

    International Nuclear Information System (INIS)

    Hardcastle, Nicholas; Kumar, Prashant; Oechsner, Markus; Richter, Anne; Song, Shiyu; Myers, Michael; Polat, Bülent; Bzdusek, Karl; Tomé, Wolfgang A; Cannon, Donald M; Brouwer, Charlotte L; Wittendorp, Paul WH; Dogan, Nesrin; Guckenberger, Matthias; Allaire, Stéphane; Mallya, Yogish

    2012-01-01

    Adaptive Radiotherapy aims to identify anatomical deviations during a radiotherapy course and modify the treatment plan to maintain treatment objectives. This requires regions of interest (ROIs) to be defined using the most recent imaging data. This study investigates the clinical utility of using deformable image registration (DIR) to automatically propagate ROIs. Target (GTV) and organ-at-risk (OAR) ROIs were non-rigidly propagated from a planning CT scan to a per-treatment CT scan for 22 patients. Propagated ROIs were quantitatively compared with expert physician-drawn ROIs on the per-treatment scan using Dice scores and mean slicewise Hausdorff distances, and center of mass distances for GTVs. The propagated ROIs were qualitatively examined by experts and scored based on their clinical utility. Good agreement between the DIR-propagated ROIs and expert-drawn ROIs was observed based on the metrics used. 94% of all ROIs generated using DIR were scored as being clinically useful, requiring minimal or no edits. However, 27% (12/44) of the GTVs required major edits. DIR was successfully used on 22 patients to propagate target and OAR structures for ART with good anatomical agreement for OARs. It is recommended that propagated target structures be thoroughly reviewed by the treating physician.

  1. A high-resolution anatomical framework of the neonatal mouse brain for managing gene expression data

    Directory of Open Access Journals (Sweden)

    Jyl Boline

    2007-11-01

    Full Text Available This study aims to provide a high-resolution atlas and use it as an anatomical framework to localize gene expression data for the mouse brain on postnatal day 0 (P0). A color Nissl-stained volume with a resolution of 13.3×50×13.3 µm³ was constructed and co-registered to a standard anatomical space defined by an averaged geometry of C57BL/6J P0 mouse brains. A total of 145 anatomical structures were delineated based on the histological images. Anatomical relationships of the delineated structures were established based on the hierarchical relations defined in the atlas of the adult mouse brain (MacKenzie-Graham et al., 2004), so the P0 atlas can be related to the database associated with the adult atlas. The co-registered multimodal atlas, as well as the original anatomical delineations, is available for download at http://www.loni.ucla.edu/Atlases/. The region-specific anatomical framework based on the neonatal atlas allows for the analysis of gene activity within a high-resolution anatomical space at an early developmental stage. We demonstrated the potential application of this framework by incorporating gene expression data generated using in situ hybridization into the atlas space. By normalizing the gene expression patterns revealed by different images, experimental results from separate studies can be compared and summarized in an anatomical context. Co-displaying multiple registered datasets in the atlas space allows for 3D reconstruction of the co-expression patterns of the different genes in the atlas space, hence providing better insight into the relationship between the differentiated distribution pattern of gene products and specific anatomical systems.

  2. Automatic Segmentation and Online virtualCT in Head-and-Neck Adaptive Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Peroni, Marta, E-mail: marta.peroni@mail.polimi.it [Department of Bioengineering, Politecnico di Milano, Milano (Italy); Ciardo, Delia [Advanced Radiotherapy Center, European Institute of Oncology, Milano (Italy); Spadea, Maria Francesca [Department of Experimental and Clinical Medicine, Universita degli Studi Magna Graecia, Catanzaro (Italy); Riboldi, Marco [Department of Bioengineering, Politecnico di Milano, Milano (Italy); Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy); Comi, Stefania; Alterio, Daniela [Advanced Radiotherapy Center, European Institute of Oncology, Milano (Italy); Baroni, Guido [Department of Bioengineering, Politecnico di Milano, Milano (Italy); Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy); Orecchia, Roberto [Advanced Radiotherapy Center, European Institute of Oncology, Milano (Italy); Universita degli Studi di Milano, Milano (Italy); Medical Department, Centro Nazionale di Adroterapia Oncologica, Pavia (Italy)

    2012-11-01

    Purpose: The purpose of this work was to develop and validate an efficient and automatic strategy to generate online virtual computed tomography (CT) scans for adaptive radiation therapy (ART) in head-and-neck (HN) cancer treatment. Methods: We retrospectively analyzed 20 patients treated with intensity modulated radiation therapy (IMRT) for an HN malignancy. Different anatomical structures were considered: mandible, parotid glands, and nodal gross tumor volume (nGTV). We generated 28 virtualCT scans by means of nonrigid registration of simulation computed tomography (CTsim) and cone beam CT images (CBCTs) acquired for patient setup. We validated our approach by considering the real replanning CT (CTrepl) as ground truth. We computed the Dice coefficient (DSC), center of mass (COM) distance, and root mean square error (RMSE) between corresponding points located on the automatically segmented structures on CBCT and virtualCT. Results: Residual deformation between CTrepl and CBCT was below one voxel. Median DSC was around 0.8 for the mandible and parotid glands, but only 0.55 for nGTV, because of the fairly homogeneous surrounding soft tissues and its small volume. Median COM distance and RMSE were comparable with the image resolution. No significant correlation between RMSE and initial or final deformation was found. Conclusion: The analysis provides evidence that deformable image registration may contribute significantly to reducing the need for full CT-based replanning in HN radiation therapy by supporting swift and objective decision-making in clinical practice. Further work is needed to strengthen the algorithm's potential in nGTV localization.

  3. MIDA: A Multimodal Imaging-Based Detailed Anatomical Model of the Human Head and Neck.

    Directory of Open Access Journals (Sweden)

    Maria Ida Iacono

    Full Text Available Computational modeling and simulations are increasingly being used to complement experimental testing for the analysis of the safety and efficacy of medical devices. Multiple voxel- and surface-based whole- and partial-body models have been proposed in the literature, typically with a spatial resolution in the range of 1-2 mm and with 10-50 different tissue types resolved. We have developed a multimodal imaging-based detailed anatomical model of the human head and neck, named "MIDA". The model was obtained by integrating three different magnetic resonance imaging (MRI) modalities, the parameters of which were tailored to enhance the signals of specific tissues: (i) structural T1- and T2-weighted MRIs, including a specific heavily T2-weighted MRI slab with high nerve contrast optimized to enhance the structures of the ear and eye; (ii) magnetic resonance angiography (MRA) data to image the vasculature; and (iii) diffusion tensor imaging (DTI) to obtain information on anisotropy and fiber orientation. The unique multimodal high-resolution approach allowed resolving 153 structures, including several distinct muscles, bones and skull layers, arteries and veins, and nerves, as well as salivary glands. The model also offers a detailed characterization of the eyes, ears, and deep brain structures. A special automatic atlas-based segmentation procedure was adopted to include a detailed map of the nuclei of the thalamus and midbrain in the head model. The suitability of the model for simulations involving different numerical methods, discretization approaches, and DTI-based tensorial electrical conductivity was examined in a case study, in which the electric field was generated by transcranial alternating current stimulation. The voxel- and surface-based versions of the model are freely available to the scientific community.

  4. Automated Analysis of 123I-beta-CIT SPECT Images with Statistical Probabilistic Anatomical Mapping

    International Nuclear Information System (INIS)

    Eo, Jae Seon; Lee, Hoyoung; Lee, Jae Sung; Kim, Yu Kyung; Jeon, Bumseok; Lee, Dong Soo

    2014-01-01

    Population-based statistical probabilistic anatomical maps have been used to generate probabilistic volumes of interest for analyzing perfusion and metabolic brain imaging. We investigated the feasibility of automated analysis of dopamine transporter images using this technique and evaluated striatal binding potentials in Parkinson's disease and Wilson's disease. We analyzed 2β-carbomethoxy-3β-(4-123I-iodophenyl)tropane (123I-beta-CIT) SPECT images acquired from 26 people with Parkinson's disease (M:F=11:15, mean age=49±12 years), 9 people with Wilson's disease (M:F=6:3, mean age=26±11 years) and 17 normal controls (M:F=5:12, mean age=39±16 years). A SPECT template was created using striatal statistical probabilistic map images. All images were spatially normalized onto the template, and probability-weighted regional counts in striatal structures were estimated. The binding potential was calculated using the ratio of specific and nonspecific binding activities at equilibrium. Voxel-based comparisons between groups were also performed using statistical parametric mapping. Qualitative assessment showed that spatial normalization of the SPECT images was successful for all images. The striatal binding potentials of participants with Parkinson's disease and Wilson's disease were significantly lower than those of normal controls. Statistical parametric mapping analysis found statistically significant differences only in striatal regions in both disease groups compared to controls. We successfully and automatically evaluated the regional 123I-beta-CIT distribution using the SPECT template and probabilistic map data. This procedure allows an objective and quantitative comparison of the binding potential, which in this case showed a significantly decreased binding potential in the striata of patients with Parkinson's disease or Wilson's disease.
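
    As a rough illustration of the quantities involved (a sketch, not the study's implementation), the probability-weighted regional counts and the equilibrium binding potential could be computed as follows; the choice of an occipital reference region for nonspecific binding is an assumption, not a detail from the record:

        import numpy as np

        def weighted_mean_counts(image, prob_map):
            """Probability-weighted mean counts inside a probabilistic VOI."""
            return float((image * prob_map).sum() / prob_map.sum())

        def binding_potential(image, striatum_prob, occipital_prob):
            # BP = (specific) / (nonspecific) = (striatal - reference) / reference
            striatal = weighted_mean_counts(image, striatum_prob)
            nonspecific = weighted_mean_counts(image, occipital_prob)  # assumed reference
            return (striatal - nonspecific) / nonspecific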

  5. Large-scale subject-specific cerebral arterial tree modeling using automated parametric mesh generation for blood flow simulation.

    Science.gov (United States)

    Ghaffari, Mahsa; Tangen, Kevin; Alaraj, Ali; Du, Xinjian; Charbel, Fady T; Linninger, Andreas A

    2017-12-01

    In this paper, we present a novel technique for automatic parametric mesh generation of subject-specific cerebral arterial trees. This technique generates high-quality and anatomically accurate computational meshes for fast blood flow simulations, extending the scope of 3D vascular modeling to a large portion of the cerebral arterial tree. For this purpose, a parametric meshing procedure was developed to automatically decompose the vascular skeleton, extract geometric features and generate hexahedral meshes using a body-fitted coordinate system that optimally follows the vascular network topology. To validate the anatomical accuracy of the reconstructed vasculature, we performed statistical analysis to quantify the alignment between parametric meshes and raw vascular images using the receiver operating characteristic curve. Geometric accuracy evaluation showed agreement between the constructed mesh and the raw MRA data sets, with an area under the curve of 0.87. Parametric meshing yielded, on average, 36.6% and 21.7% improvements in orthogonal and equiangular skew quality over unstructured tetrahedral meshes. The parametric meshing and processing pipeline constitutes an automated technique to reconstruct and simulate blood flow throughout a large portion of the cerebral arterial tree, down to the level of pial vessels. This study is the first step towards fast large-scale subject-specific hemodynamic analysis for clinical applications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation

    Energy Technology Data Exchange (ETDEWEB)

    Maitree, R; Guzman, G; Chundury, A; Roach, M; Yang, D [Washington University School of Medicine, St Louis, MO (United States)

    2016-06-15

    Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are used to reduce noise, such as increasing the image acquisition time and applying post-processing noise reduction algorithms. However, these techniques either increase imaging time and cost or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main goals of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying anatomical details, the proposed approach automatically estimated the local image noise strength levels and detected the anatomical structures, i.e. tissue boundaries. This information was used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. The performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Measurements on the repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving structures and tissue boundaries. In comparison, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is adequate for some clinical usages. Conclusion: According to both clinical evaluation (human expert ranking) and
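
    The record does not give the filter's exact form. As a minimal sketch of the idea, assuming a Gaussian smoother whose strength is attenuated near detected tissue boundaries (the 90th-percentile edge cutoff is an invented heuristic):

        import numpy as np
        from scipy import ndimage

        def edge_preserving_denoise(img, sigma_max=2.0, edge_scale=None):
            """Smooth flat regions strongly, tissue boundaries weakly (illustrative only)."""
            img = img.astype(float)
            grad = ndimage.gaussian_gradient_magnitude(img, sigma=1.0)
            if edge_scale is None:
                edge_scale = np.percentile(grad, 90)       # assumed heuristic cutoff
            weight = np.exp(-grad / (edge_scale + 1e-9))   # ~1 in flat areas, ~0 at edges
            smoothed = ndimage.gaussian_filter(img, sigma=sigma_max)
            return weight * smoothed + (1.0 - weight) * img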

  7. Determining customer satisfaction in anatomic pathology.

    Science.gov (United States)

    Zarbo, Richard J

    2006-05-01

    Measurement of physicians' and patients' satisfaction with laboratory services has become a standard practice in the United States, prompted by national accreditation requirements. Unlike other surveys of hospital-, outpatient care-, or physician-related activities, no ongoing, comprehensive customer satisfaction survey of anatomic pathology services is available for subscription that would allow continual benchmarking against peer laboratories. Pathologists, therefore, must often design their own local assessment tools to determine physician satisfaction in anatomic pathology. To describe satisfaction survey design that would elicit specific information from physician customers about key elements of anatomic pathology services. The author shares his experience in biannually assessing customer satisfaction in anatomic pathology with survey tools designed at the Henry Ford Hospital, Detroit, Mich. Benchmarks for physician satisfaction, opportunities for improvement, and characteristics that correlated with a high level of physician satisfaction were identified nationally from a standardized survey tool used by 94 laboratories in the 2001 College of American Pathologists Q-Probes quality improvement program. In general, physicians are most satisfied with professional diagnostic services and least satisfied with pathology services related to poor communication. A well-designed and conducted customer satisfaction survey is an opportunity for pathologists to periodically educate physician customers about services offered, manage unrealistic expectations, and understand the evolving needs of the physician customer. Armed with current information from physician customers, the pathologist is better able to strategically plan for resources that facilitate performance improvements in anatomic pathology laboratory services that align with evolving clinical needs in health care delivery.

  8. Automatic Evaluation Of Interferograms

    Science.gov (United States)

    Becker, Friedhelm; Meier, Gerd E. A.; Wegner, Horst

    1983-03-01

    A system for the automatic evaluation of interference patterns has been developed. After digitizing the interferograms from classical and holographic interferometers with a television digitizer and performing different picture enhancement operations, the fringe loci are extracted by use of a floating-threshold method. The fringes are numbered using a special scheme after the removal of any fringe disconnections which might appear if there was insufficient contrast in the interferograms. The reconstruction of the object function from the numbered fringe field is achieved by a local polynomial least-squares approximation. Applications are given, demonstrating the evaluation of interferograms of supersonic flow fields and the analysis of holographic interferograms of car tyres.
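
    The floating-threshold step can be pictured as thresholding each pixel against its local neighbourhood mean rather than a global value; a minimal sketch of that interpretation (the window size is an assumption):

        import numpy as np
        from scipy import ndimage

        def extract_fringes(interferogram, window=15):
            """Mark pixels darker than their local mean as fringe loci (sketch)."""
            img = interferogram.astype(float)
            local_mean = ndimage.uniform_filter(img, size=window)  # the "floating" threshold
            return img < local_mean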

  9. Automatic quantitative renal scintigraphy

    International Nuclear Information System (INIS)

    Valeyre, J.; Deltour, G.; Delisle, M.J.; Bouchard, A.

    1976-01-01

    Renal scintigraphy data may be analyzed automatically by the use of a processing system coupled to an Anger camera (TRIDAC-MULTI 8 or CINE 200). The computing sequence is as follows: normalization of the images; background noise subtraction on both images; evaluation of mercury-197 uptake by the liver and spleen; calculation of the activity fractions of each kidney with respect to the injected dose, taking into account the kidney depth, with the results referred to normal values; and output of the results. Automation minimizes the scatter in the parameters and, by its simplicity, is a great asset in routine work [fr]

  10. Automated anatomical description of pleural thickening towards improvement of its computer-assisted diagnosis

    Science.gov (United States)

    Chaisaowong, Kraisorn; Jiang, Mingze; Faltin, Peter; Merhof, Dorit; Eisenhawer, Christian; Gube, Monika; Kraus, Thomas

    2016-03-01

    Pleural thickenings are caused by asbestos exposure and may evolve into malignant pleural mesothelioma. An early diagnosis plays a key role towards early treatment and an increased survival rate. Today, pleural thickenings are detected by visual inspection of CT data, which is time-consuming and subject to the physician's individual judgment. A computer-assisted diagnosis system to automatically assess pleural thickenings has been developed, which provides not only a quantitative assessment with respect to size and location, but also enhances this information with an anatomical description, i.e. lung side (left, right), part of the pleura (pars costalis, mediastinalis, diaphragmatica, spinalis), as well as vertical (upper, middle, lower) and horizontal (ventral, dorsal) position. For this purpose, a 3D anatomical model of the lung surface was manually constructed as a 3D atlas. Three registration sub-steps, comprising rigid, affine, and nonrigid registration, align the input patient lung to the 3D anatomical atlas model of the lung surface. Finally, each detected pleural thickening is assigned a set of labels describing its anatomical properties. Through this added information, an enhancement to the existing computer-assisted diagnosis system is presented in order to ensure a more precise and reproducible assessment of pleural thickenings, aiming at the diagnosis of pleural mesothelioma in its early stage.
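
    A minimal sketch of such a rigid, affine, nonrigid cascade, written with SimpleITK; the metric, optimizer, and Demons refinement below are assumptions for illustration, not the authors' implementation:

        import SimpleITK as sitk

        def register_stage(fixed, moving, transform):
            reg = sitk.ImageRegistrationMethod()
            reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
            reg.SetOptimizerAsRegularStepGradientDescent(
                learningRate=1.0, minStep=1e-4, numberOfIterations=200)
            reg.SetInitialTransform(transform, inPlace=False)
            reg.SetInterpolator(sitk.sitkLinear)
            return reg.Execute(fixed, moving)

        def align_to_atlas(atlas, patient):
            atlas = sitk.Cast(atlas, sitk.sitkFloat32)
            patient = sitk.Cast(patient, sitk.sitkFloat32)
            # 1) rigid, 2) affine, each resampling onto the atlas grid
            rigid = register_stage(atlas, patient, sitk.CenteredTransformInitializer(
                atlas, patient, sitk.Euler3DTransform()))
            patient = sitk.Resample(patient, atlas, rigid)
            affine = register_stage(atlas, patient, sitk.AffineTransform(3))
            patient = sitk.Resample(patient, atlas, affine)
            # 3) nonrigid refinement (Demons assumes comparable intensities)
            demons = sitk.DemonsRegistrationFilter()
            demons.SetNumberOfIterations(50)
            field = demons.Execute(atlas, patient)
            return sitk.Resample(patient, atlas, sitk.DisplacementFieldTransform(field))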

  11. Anatomic breast coordinate system for mammogram analysis

    DEFF Research Database (Denmark)

    Karemore, Gopal; Brandt, S.; Karssemeijer, N.

    2011-01-01

    inside the breast. Most of the risk assessment and CAD modules use a breast region in an image-centered Cartesian x,y coordinate system. Nevertheless, anatomical structures follow curvilinear trajectories. We examined an anatomical breast coordinate system that preserves the anatomical correspondence between the mammograms and allows extracting not only the aligned position but also the orientation aligned with the anatomy of the breast tissue structure. Materials and Methods: The coordinate system used the nipple location as the point A and the border of the pectoral muscle as a line BC. The skin-air interface was identified as a curve passing through A and intersecting the pectoral muscle line. The nipple was defined as the origin of the coordinate system. A family of second-order curves was defined through the nipple and intersecting the pectoral line (AD). Every pixel location in the mammogram

  12. Anatomic Eponyms in Neuroradiology: Head and Neck.

    Science.gov (United States)

    Bunch, Paul M

    2016-10-01

    In medicine, an eponym is a word, typically referring to an anatomic structure, disease, or syndrome, that is derived from a person's name. Medical eponyms are ubiquitous and numerous. They are also at times controversial. Eponyms reflect medicine's rich and colorful history and can be useful for concisely conveying complex concepts. Familiarity with eponyms facilitates correct usage and accurate communication. In this article, 22 eponyms used to describe anatomic structures of the head and neck are discussed. For each structure, the author first provides a biographical account of the individual for whom the structure is named. An anatomic description and brief discussion of the structure's clinical relevance follow. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  13. Automatic measurement of the radioactive mercury uptake by the kidney

    International Nuclear Information System (INIS)

    Zurowski, S.; Raynaud, C.; CEA, 91 - Orsay

    1976-01-01

    An entirely automatic method to measure the Hg uptake by the kidney is proposed. The following operations are carried out in succession: measurement of extrarenal activity, demarcation of uptake areas, anatomical identification of uptake areas, separation of overlapping organ images and measurement of kidney depth. The first results thus calculated on 30 patients are very close to those obtained with a standard manual method and are highly encouraging. Two important points should be stressed: a broad demarcation of the uptake areas is necessary and an original method, that of standard errors, is useful for the background noise determination and uptake area demarcation. This automatic measurement technique is so designed that it can be applied to other special cases [fr

  14. Automatic readout micrometer

    Science.gov (United States)

    Lauritzen, T.

    A measuring system is described for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems, which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibility of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment, without having the fine adjustment outrun the coarse adjustment, until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  15. Automatic personnel contamination monitor

    International Nuclear Information System (INIS)

    Lattin, Kenneth R.

    1978-01-01

    United Nuclear Industries, Inc. (UNI) has developed an automatic personnel contamination monitor (APCM), which uniquely combines the design features of both portal and hand-and-shoe monitors. In addition, this prototype system also has a number of new features, including: microcomputer control and readout, nineteen large-area gas flow detectors, real-time background compensation, self-checking for system failures, and card reader identification and control. UNI's experience in operating the Hanford N Reactor, located in Richland, Washington, has shown the necessity of automatically monitoring plant personnel for contamination after they have passed through the procedurally controlled radiation zones. This final check ensures that each radiation zone worker has been properly checked before leaving company controlled boundaries. Investigation of the commercially available portal and hand-and-shoe monitors indicated that they did not have the sensitivity or sophistication required for UNI's application; therefore, a development program was initiated, resulting in the subject monitor. Field testing shows good sensitivity to personnel contamination, with the majority of alarms showing contaminants on clothing, face and head areas. In general, the APCM has sensitivity comparable to portal survey instrumentation. The inherent stand-in, walk-on feature of the APCM not only makes it easy to use, but makes it difficult to bypass. (author)

  16. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
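
    Based on the library's documented usage, a minimal example of the source-transformation workflow looks like this (the verbose flag, per the project README, prints the generated derivative source):

        import tangent

        def f(x):
            return x * x + 3.0 * x

        # tangent.grad rewrites f's source into a new Python function for df/dx
        df = tangent.grad(f, verbose=1)   # verbose=1 prints the generated code
        print(df(2.0))                    # 2*x + 3 at x=2 -> 7.0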

  17. Lacrimal Gland Pathologies from an Anatomical Perspective

    Directory of Open Access Journals (Sweden)

    Mahmut Sinan Abit

    2015-06-01

    Full Text Available Most of the patients in our daily practice have one or more ocular surface disorders, including conjunctivitis, keratitis, dry eye disease, meibomian gland dysfunction, contact lens related symptoms, refractive errors, and computer vision syndrome. The lacrimal gland plays an important role in all the above-mentioned pathologies through its major secretory product. Anatomical and physiological knowledge of the lacrimal gland is essential for understanding basic and common ophthalmological cases. In this paper, we aim to explain lacrimal gland diseases from an anatomical perspective.

  18. Reachability Games on Automatic Graphs

    Science.gov (United States)

    Neider, Daniel

    In this work we study two-person reachability games on finite and infinite automatic graphs. For the finite case we empirically show that automatic game encodings are competitive with well-known symbolic techniques such as BDDs, SAT and QBF formulas. For the infinite case we present a novel algorithm utilizing algorithmic learning techniques, which makes it possible to solve large classes of automatic reachability games.

  19. Automatic reactor protection system tester

    International Nuclear Information System (INIS)

    Deliant, J.D.; Jahnke, S.; Raimondo, E.

    1988-01-01

    The object of this paper is to present the automatic tester of reactor protection systems designed and developed by EDF and Framatome. The following points are discussed in order: the necessity for reactor protection system testing; the drawbacks of manual testing; the description and use of the Framatome automatic tester; on-site installation of this system; and the positive results obtained using the Framatome automatic tester in France.

  20. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, the production was launched. The key points of this work have been: managing a big data environment of more than 160,000 LiDAR data files, with the infrastructure to store (up to 40 Tb between results and intermediate files) and process them using local virtualization and the Amazon Web Service (AWS), which allowed this automatic production to be completed within 6 months; the stability of the software (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri); and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  1. AUTOMATIC RIVER NETWORK EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    E. N. Maderal

    2016-06-01

    Full Text Available National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, the production was launched. The key points of this work have been: managing a big data environment of more than 160,000 LiDAR data files, with the infrastructure to store (up to 40 Tb between results and intermediate files) and process them using local virtualization and the Amazon Web Service (AWS), which allowed this automatic production to be completed within 6 months; the stability of the software (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri); and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
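
    As a toy illustration of the hydrological criterion (flow accumulation), not the production pipeline itself, a D8-style accumulation over a small elevation grid might look like this:

        import numpy as np

        NEIGHBOURS = [(-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1)]

        def flow_accumulation(dem):
            """D8 accumulation: each cell drains to its steepest downslope neighbour."""
            rows, cols = dem.shape
            acc = np.ones(dem.shape)                      # every cell contributes itself
            for idx in np.argsort(dem, axis=None)[::-1]:  # highest elevation first
                r, c = divmod(int(idx), cols)
                best_drop, target = 0.0, None
                for dr, dc in NEIGHBOURS:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                        if drop > best_drop:
                            best_drop, target = drop, (rr, cc)
                if target is not None:
                    acc[target] += acc[r, c]              # pass accumulated flow downslope
            return acc

    Cells whose accumulation exceeds a chosen threshold form a candidate drainage network, which a production system would then reconcile with the topographic criteria.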

  2. Automatic assessment of ultrasound image usability

    Science.gov (United States)

    Valente, Luca; Funka-Lea, Gareth; Stoll, Jeffrey

    2011-03-01

    We present a novel and efficient approach for evaluating the quality of ultrasound images. Image acquisition is sensitive to skin contact and transducer orientation and requires both time and technical skill to be done properly. Images commonly suffer degradation due to acoustic shadows and signal attenuation, which present as regions of low signal intensity masking anatomical details and making the images partly or totally unusable. As ultrasound image acquisition and analysis becomes increasingly automated, it is beneficial to also automate the estimation of image quality. Towards this end, we present an algorithm that classifies regions of an image as usable or unusable. Example applications of this algorithm include improved compounding of free-hand 3D ultrasound volumes by eliminating unusable data and improved automatic feature detection by limiting detection to usable areas only. The algorithm operates in two steps. First, it classifies the image into bright areas, likely to have image content, and dark areas, likely to have no content. Second, it classifies the dark areas into unusable (i.e. due to shadowing and/or signal loss) and usable (i.e. anatomically accurate dark regions, such as within a blood vessel) sub-areas. The classification considers several factors, including statistical information, gradient intensity and geometric properties such as shape and relative position. Relative weighting of factors was obtained through the training of a Support Vector Machine. Classification results for both human and phantom images are presented and compared to manual classifications. This method achieves 91% sensitivity and 91% specificity for usable regions of human scans.
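
    A minimal sketch of the second step under stated assumptions: the feature set below (region statistics, gradient, position, size) is illustrative, and the SVM kernel is not taken from the paper.

        import numpy as np
        from sklearn.svm import SVC

        def region_features(img, mask):
            """Simple statistics of one dark region (illustrative feature choices)."""
            vals = img[mask].astype(float)
            gy, gx = np.gradient(img.astype(float))
            grad = np.hypot(gx, gy)[mask]
            rows, _ = np.nonzero(mask)
            return [vals.mean(), vals.std(), grad.mean(),
                    rows.mean() / img.shape[0],     # relative vertical position
                    mask.sum() / mask.size]         # relative region size

        clf = SVC(kernel="rbf")                      # assumed kernel choice
        # clf.fit([region_features(im, m) for im, m in labeled_regions], labels)
        # usable = clf.predict([region_features(img, dark_region_mask)])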

  3. Automatic quantitative metallography

    International Nuclear Information System (INIS)

    Barcelos, E.J.B.V.; Ambrozio Filho, F.; Cunha, R.C.

    1976-01-01

    The quantitative determination of metallographic parameters is analysed through the description of the Micro-Videomat automatic image analysis system, applied to the volumetric percentage of pearlite in nodular cast irons, porosity and average grain size in high-density sintered pellets of UO2, and grain size of ferritic steel. The techniques adopted are described and the results obtained are compared with the corresponding ones from the direct counting process: counting of systematic points (grid) to measure volume, and the intersections method, utilizing a circumference of known radius, for the average grain size. The technique adopted for nodular cast iron resulted from the small difference in optical reflectivity between graphite and pearlite. Porosity evaluation of sintered UO2 pellets is also analyzed [pt]

  4. Semi-automatic fluoroscope

    International Nuclear Information System (INIS)

    Tarpley, M.W.

    1976-10-01

    Extruded aluminum-clad uranium-aluminum alloy fuel tubes must pass many quality control tests before irradiation in Savannah River Plant nuclear reactors. Nondestructive test equipment has been built to automatically detect high and low density areas in the fuel tubes using x-ray absorption techniques with a video analysis system. The equipment detects areas as small as 0.060-in. dia with 2 percent penetrameter sensitivity. These areas are graded as to size and density by an operator using electronic gages. Video image enhancement techniques permit inspection of ribbed cylindrical tubes and make possible the testing of areas under the ribs. Operation of the testing machine, the special low light level television camera, and analysis and enhancement techniques are discussed

  5. AUTOMATIC ARCHITECTURAL STYLE RECOGNITION

    Directory of Open Access Journals (Sweden)

    M. Mathias

    2012-09-01

    Full Text Available Procedural modeling has proven to be a very valuable tool in the field of architecture. In the last few years, research has surged toward automatically creating procedural models from images. However, current algorithms for this process of inverse procedural modeling rely on the assumption that the building style is known. So far, the determination of the building style has remained a manual task. In this paper, we propose an algorithm which automates this process through classification of architectural styles from facade images. Our classifier first identifies the images containing buildings, then separates individual facades within an image and determines the building style. This information could then be used to initialize the building reconstruction process. We have trained our classifier to distinguish between several distinct architectural styles, namely Flemish Renaissance, Haussmannian and Neoclassical. Finally, we demonstrate our approach on various street-side images.

  6. Automatic surveying techniques

    International Nuclear Information System (INIS)

    Sah, R.

    1976-01-01

    In order to investigate the feasibility of automatic surveying methods in a more systematic manner, the PEP organization signed a contract in late 1975 for TRW Systems Group to undertake a feasibility study. The completion of this study resulted in TRW Report 6452.10-75-101, dated December 29, 1975, which was largely devoted to an analysis of a survey system based on an Inertial Navigation System. This PEP note is a review and, in some instances, an extension of that TRW report. A second survey system, which employed an "Image Processing System", was also considered by TRW, and it will be reviewed in the last section of this note. 5 refs., 5 figs., 3 tabs

  7. Anatomical and palynological characteristics of Salvia willeana ...

    African Journals Online (AJOL)

    USER

    2010-04-05

    Apr 5, 2010 ... investigated in this study and the findings obtained were compared with other studies conducted on Salvia genus. Metcalfe and Chalk (1950) found the data on the anatomical characteristics of S. species. These researchers revealed that the species belonging to Labiatae family usually have rectangle or ...

  8. Descriptions of anatomical differences between skulls and ...

    African Journals Online (AJOL)

    The external anatomical differences between the skulls and mandibles of 10 mountain zebras Equus zebra and 10 plains zebras E. burchelli of both sexes were studied. The nomenclature used conforms to Nomina Anatomica Veterinaria (1983). Eleven structural differences are described for the first time and illustrated, viz., ...

  9. HPV Vaccine Effective at Multiple Anatomic Sites

    Science.gov (United States)

    A new study from NCI researchers finds that the HPV vaccine protects young women from infection with high-risk HPV types at the three primary anatomic sites where persistent HPV infections can cause cancer. The multi-site protection also was observed at l

  10. Report of a rare anatomic variant

    DEFF Research Database (Denmark)

    De Brucker, Y; Ilsen, B; Muylaert, C

    2015-01-01

    We report the CT findings in a case of partial anomalous pulmonary venous return (PAPVR) from the left upper lobe in an adult. PAPVR is an anatomic variant in which one to three pulmonary veins drain into the right atrium or its tributaries, rather than into the left atrium. This results in a left-to-right shunt.

  11. Morphological and anatomical response of Acacia ehrenbergiana ...

    African Journals Online (AJOL)

    Both species responded morphologically as well as anatomically to water stress. Water stress caused significant (P=0.05) decrease in relative water content, leaf number and area and leaf water potential, chlorophyll content, and stem height and diameter. Seedlings of both species responded to water stress by the ...

  12. Anatomical characteristics of southern pine stemwood

    Science.gov (United States)

    Elaine T. Howard; Floyd G. Manwiller

    1968-01-01

    To obtain a definitive description of the wood and anatomy of all 10 species of southern pine, juvenile, intermediate, and mature wood was sampled at three heights in one tree of each species and examined under a light microscope. Photographs and three-dimensional drawings were made to illustrate the morphology. No significant anatomical differences were found...

  13. TIBIAL LANDMARKS IN ACL ANATOMIC REPAIR

    Directory of Open Access Journals (Sweden)

    M. V. Demesсhenko

    2016-01-01

    Full Text Available Purpose: to identify anatomical landmarks on the tibial articular surface to serve as references in preparing the tibial canal with respect to the center of the ACL footprint during single-bundle arthroscopic repair. Materials and methods. Twelve frozen knee joint specimens and 68 unpaired macerated human tibiae were studied using anatomical, morphometric and statistical methods, as well as graphic simulation. Results. The center of the tibial ACL footprint was located 13.1±1.7 mm anteriorly from the posterior border of the intercondylar eminence, at 1/3 of the distance along the line connecting the apexes of the internal and external tubercles, and 6.1±0.5 mm anteriorly along the perpendicular raised to this point. Conclusion. The internal and external tubercles, as well as the posterior border of the intercondylar eminence, can be considered anatomical references to determine the center of the tibial ACL footprint and to prepare bone canals for anatomic ligament repair.

  14. Handbook of anatomical models for radiation dosimetry

    CERN Document Server

    Eckerman, Keith F

    2010-01-01

    Covering the history of human model development, this title presents the major anatomical and physical models that have been developed for human body radiation protection, diagnostic imaging, and nuclear medicine therapy. It explores how these models have evolved and the role that modern technologies have played in this development.

  15. Influences on anatomical knowledge: The complete arguments

    NARCIS (Netherlands)

    Bergman, E.M.; Verheijen, I.W.; Scherpbier, A.J.J.A.; Vleuten, C.P.M. van der; Bruin, A.B. De

    2014-01-01

    Eight factors are claimed to have a negative influence on anatomical knowledge of medical students: (1) teaching by nonmedically qualified teachers, (2) the absence of a core anatomy curriculum, (3) decreased use of dissection as a teaching tool, (4) lack of teaching anatomy in context, (5)

  16. Anatomically Plausible Surface Alignment and Reconstruction

    DEFF Research Database (Denmark)

    Paulsen, Rasmus R.; Larsen, Rasmus

    2010-01-01

    With the increasing clinical use of 3D surface scanners, there is a need for accurate and reliable algorithms that can produce anatomically plausible surfaces. In this paper, a combined method for surface alignment and reconstruction is proposed. It is based on an implicit surface representation ...

  17. Automatic transformations in the inference process

    Energy Technology Data Exchange (ETDEWEB)

    Veroff, R. L.

    1980-07-01

    A technique for incorporating automatic transformations into processes such as the application of inference rules, subsumption, and demodulation provides a mechanism for improving search strategies for theorem proving problems arising from the field of program verification. The incorporation of automatic transformations into the inference process can alter the search space for a given problem, and is particularly useful for problems having broad rather than deep proofs. The technique can also be used to permit the generation of inferences that might otherwise be blocked and to build some commutativity or associativity into the unification process. Appropriate choice of transformations, and new literal clashing and unification algorithms for applying them, showed significant improvement on several real problems according to several distinct criteria. 22 references, 1 figure.

  18. Automatic alkaloid removal system.

    Science.gov (United States)

    Yahaya, Muhammad Rizuwan; Hj Razali, Mohd Hudzari; Abu Bakar, Che Abdullah; Ismail, Wan Ishak Wan; Muda, Wan Musa Wan; Mat, Nashriyah; Zakaria, Abd

    2014-01-01

    This automated alkaloid removal machine was developed at the Instrumentation Laboratory, Universiti Sultan Zainal Abidin, Malaysia, purposely for removing alkaloid toxicity from the Dioscorea hispida (DH) tuber. DH is a poisonous plant; scientific study has shown that its tubers contain a toxic alkaloid constituent, dioscorine. The tubers can only be consumed after the poison is removed. In this experiment, the tubers need to be blended into powder form before being inserted into the machine basket. The user pushes the START button on the machine controller to switch the water pump ON, thereby creating a turbulent wave of water in the machine tank. The water flow stops automatically by triggering the outlet solenoid valve. The tuber powder is washed for 10 minutes while 1 liter of water contaminated with the toxin mixture flows out. At this point, the controller automatically triggers the inlet solenoid valve, and fresh water flows into the machine tank until it reaches the desired level, as determined by an ultrasonic sensor. This process is repeated for 7 h; a positive result was achieved and shown to be significant according to several parameters of biological character: pH, temperature, dissolved oxygen, turbidity, conductivity, and fish survival rate or time. From these parameters, the results are close to or the same as the control water, and the assumption was made that the toxin is fully removed when the pH of the DH powder wash water is near that of the control water. For the control water, the pH is about 5.3, while the water from this experimental process is 6.0; before running the machine, the pH of the contaminated water is about 3.8, which is too acidic. This automated machine can save time in removing toxicity from DH compared with the traditional method, while requiring less observation by the user.
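
    The wash cycle described above maps naturally onto a simple control loop. The sketch below is hypothetical: the controller object and its methods (pump_on, open_valve, ultrasonic_level_cm, ...) are illustrative stand-ins for the actual hardware I/O, and the level set-point is invented.

        import time

        def run_wash(controller, hours=7, wash_minutes=10, target_level_cm=30):
            """Repeat the wash/drain/refill cycle for the full treatment period."""
            t_end = time.time() + hours * 3600
            while time.time() < t_end:
                controller.pump_on()                  # create turbulence in the tank
                time.sleep(wash_minutes * 60)         # wash the tuber powder
                controller.pump_off()
                controller.open_valve("outlet")       # drain ~1 L of contaminated water
                time.sleep(60)
                controller.close_valve("outlet")
                controller.open_valve("inlet")        # refill with fresh water
                while controller.ultrasonic_level_cm() < target_level_cm:
                    time.sleep(1)                     # wait for the desired level
                controller.close_valve("inlet")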

  19. Using 3D Modeling Techniques to Enhance Teaching of Difficult Anatomical Concepts.

    Science.gov (United States)

    Pujol, Sonia; Baldwin, Michael; Nassiri, Joshua; Kikinis, Ron; Shaffer, Kitt

    2016-04-01

    Anatomy is an essential component of medical education as it is critical for accurate diagnosis involving organs and human systems. The mental representation of the shape and organization of different anatomical structures is a crucial step in the learning process. The purpose of this pilot study is to demonstrate the feasibility and benefits of developing innovative teaching modules for anatomy education of first-year medical students based on three-dimensional (3D) reconstructions from actual patient data. A total of 196 models of anatomical structures from 16 anonymized computed tomography datasets were generated using the 3D Slicer open-source software platform. The models focused on three anatomical areas: the mediastinum, the upper abdomen, and the pelvis. Optional online quizzes were offered to first-year medical students to assess their comprehension in the areas of interest. Specific tasks were designed for students to complete using the 3D models. Scores of the quizzes confirmed a lack of understanding of 3D spatial relationships of anatomical structures despite standard instruction including dissection. Written task material and qualitative review by students suggested that interaction with 3D models led to a better understanding of the shape and spatial relationships among structures, and helped illustrate anatomical variations from one body to another. The study demonstrates the feasibility of one possible approach to the generation of 3D models of the anatomy from actual patient data. The educational materials developed have the potential to supplement the teaching of complex anatomical regions and help demonstrate the anatomical variation among patients. Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  20. ANATOMIC STRUCTURE OF CAMPANULA ROTUNDIFOLIA L. GRASS

    Directory of Open Access Journals (Sweden)

    V. N. Bubenchikova

    2017-01-01

    Full Text Available The article presents the results of a study of the anatomical structure of Campanula rotundifolia grass from the Campanulaceae family. Despite its wide distribution and application in folk medicine, there are no data about its anatomical structure; therefore, to establish indices of the authenticity and quality of the raw material, it is necessary first to identify microdiagnostic features, which could help introduce this plant into medical practice. The purpose of this work is to study the anatomical structure of Campanula rotundifolia grass to determine its diagnostic features. Methods. The study of the anatomical structure was carried out in accordance with the requirements of the State Pharmacopoeia, edition XIII. A Micromed laboratory microscope with a digital attachment was used to create microphotographs, and Photoshop CC was used for their processing. Results. We established that the stalk epidermis is prosenchymal and slightly winding, with straight or splayed end cells. After studying the epidermis cells, we established that the upper epidermis cells have straight to slightly winding walls. The cells of the lower epidermis have more winding walls with a longitudinally wrinkled cuticle. Simple, unicellular, thin-walled, roughly papillose hairs are present on the leaf and stalk epidermis. Cells of the epidermis in the fauces of the corolla are prosenchymal with winding walls; those in the cup have straight or winding walls. Papillary excrescences can be found along the cup edges. The stomatal apparatus is anomocytic. Conclusion. As the result of the study, we investigated the anatomical structure of Campanula rotundifolia grass and determined microdiagnostic features for establishing the authenticity of the raw material, which include the presence of simple, unicellular, thin-walled, roughly papillose hairs on both epidermises of the leaf, along the veins and leaf edge, and on the stalk epidermis, as well as the presence of epidermis cells with papillary excrescences along the edges of the leaves and cups. Intercellular canals are situated along the

  1. Automatic standard plane adjustment on mobile C-Arm CT images of the calcaneus using atlas-based feature registration

    Science.gov (United States)

    Brehler, Michael; Görres, Joseph; Wolf, Ivo; Franke, Jochen; von Recum, Jan; Grützner, Paul A.; Meinzer, Hans-Peter; Nabers, Diana

    2014-03-01

    Intraarticular fractures of the calcaneus are routinely treated by open reduction and internal fixation, followed by intraoperative imaging to validate the repositioning of the bone fragments. C-Arm CT offers surgeons the possibility to directly verify the alignment of the fracture parts in 3D. Although the device provides more mobility, it gives no sufficient information about the device-to-patient orientation for standard plane reconstruction. Hence, physicians have to manually align the image planes into a position that intersects the articular surfaces. This can be a time-consuming step, and imprecise adjustments lead to diagnostic errors. We address this issue by introducing novel semi-automatic and automatic methods for adjusting the standard planes on mobile C-Arm CT images. With the semi-automatic method, physicians can quickly adjust the planes by setting six points based on anatomical landmarks. The automatic method reconstructs the standard planes in two steps: first, SURF keypoints (2D and newly introduced pseudo-3D) are generated for each image slice; second, these features are registered to an atlas point set and the parameters of the image planes are transformed accordingly. The accuracy of our method was evaluated on 51 mobile C-Arm CT images from clinical routine, with standard planes manually adjusted by three physicians of different expertise. The average time of the experts (46 s) deviated from that of the intermediate user (55 s) by 9 seconds. Using 2D SURF keypoints, 88% of the articular surfaces were intersected correctly by the transformed standard planes, with a calculation time of 10 seconds. The pseudo-3D features performed even better, at 91% and 8 seconds.
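
    A minimal sketch of the registration step described above, assuming the 2D/pseudo-3D features have already been matched to corresponding atlas points: a least-squares rigid (Kabsch) alignment is estimated and then used to carry atlas-defined standard planes into the patient image. The function names and the plane representation (point plus unit normal) are illustrative, not taken from the paper.

        import numpy as np

        def rigid_align(src, dst):
            """Least-squares rigid (Kabsch) transform mapping src points onto dst.
            src, dst: (N, 3) arrays of matched keypoint coordinates."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = c_dst - R @ c_src
            return R, t

        def transform_plane(point, normal, R, t):
            """Carry a plane given as (point, unit normal) through the transform."""
            return R @ point + t, R @ normal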

  2. Classifying visemes for automatic lipreading

    NARCIS (Netherlands)

    Visser, Michiel; Poel, Mannes; Nijholt, Antinus; Matousek, Vaclav; Mautner, Pavel; Ocelikovi, Jana; Sojka, Petr

    1999-01-01

    Automatic lipreading is automatic speech recognition that uses only visual information. The relevant data in a video signal is isolated and features are extracted from it. From a sequence of feature vectors, where every vector represents one video image, a sequence of higher level semantic elements

  3. Automatic Synthesis of Panoramic Radiographs from Dental Cone Beam Computed Tomography Data.

    Directory of Open Access Journals (Sweden)

    Ting Luo

    Full Text Available In this paper, we propose an automatic method for synthesizing panoramic radiographs from dental cone beam computed tomography (CBCT) data, for directly observing the whole dentition without the superimposition of other structures. The method consists of three major steps. First, the dental arch curve is generated from the maximum intensity projection (MIP) of the 3D CBCT data. Then, based on this curve, the long-axis curves of the upper and lower teeth are extracted to create a 3D panoramic curved surface describing the whole dentition. Finally, the panoramic radiograph is synthesized by developing (unfolding) this 3D surface. Both open-bite and closed-bite dental CBCT datasets were used in this study, and the resulting images were analyzed to evaluate the effectiveness of the method. With the proposed method, a single-slice panoramic radiograph can clearly and completely show the whole dentition without blur or superimposition of other dental structures. Moreover, thickened panoramic radiographs can also be synthesized with increased slice thickness to show additional features, such as the mandibular nerve canal. One feature of the proposed method is that it is performed automatically, without human intervention. Another is that it needs thinner panoramic slices to show the whole dentition than existing methods require, which contributes to the clarity of the anatomical structures, including the enamel, dentine and pulp. In addition, the method can rapidly process common dental CBCT data. Its speed and image quality make it an attractive option for observing the whole dentition in a clinical setting.
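
    The core unfolding step can be pictured with a short sketch: given an arch curve extracted from the MIP, the volume is resampled along the curve, and along its normals to emulate slice thickness. This is a simplified stand-in for the paper's method, assuming a (Z, Y, X) volume and an arch curve already expressed in voxel coordinates; the per-tooth long-axis correction is omitted.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def synthesize_panorama(volume, arch_xy, thickness=1.0, n_normal=5):
            """Unfold a CBCT volume along a dental arch curve.
            volume: (Z, Y, X) array; arch_xy: (N, 2) curve samples in (y, x).
            Averages n_normal samples along the curve normal for thickness."""
            z_dim = volume.shape[0]
            tangents = np.gradient(arch_xy, axis=0)
            tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
            normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)

            offsets = np.linspace(-thickness / 2, thickness / 2, n_normal)
            panorama = np.zeros((z_dim, len(arch_xy)))
            for off in offsets:
                pts = arch_xy + off * normals              # shifted copy of the curve
                zz = np.repeat(np.arange(z_dim), len(pts))
                yy = np.tile(pts[:, 0], z_dim)
                xx = np.tile(pts[:, 1], z_dim)
                samples = map_coordinates(volume, [zz, yy, xx], order=1)
                panorama += samples.reshape(z_dim, len(pts))
            return panorama / n_normal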

  4. Gap-free segmentation of vascular networks with automatic image processing pipeline.

    Science.gov (United States)

    Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas

    2017-03-01

    Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder the segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity, without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for enhancing tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. Such automatic analysis would enable rigorous statistical comparison of biometrics across subject-specific vascular trees. Robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time-prohibitive given that vascular trees have thousands of segments and bifurcations, so interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
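
    As a rough illustration of what such a pipeline does, the following sketch combines a multi-scale Frangi vesselness filter with morphological closing to bridge small gaps. It is not the authors' validated pipeline: the scales and threshold shown are placeholders that the paper would set automatically, and a scikit-image version whose frangi filter accepts 3-D volumes is assumed.

        import numpy as np
        from skimage.filters import frangi
        from skimage.morphology import ball, binary_closing, remove_small_objects

        def enhance_vessels(volume, sigmas=(1, 2, 3), threshold=0.05):
            """Gap-aware vessel enhancement sketch: multi-scale vesselness,
            thresholding, then closing to bridge small discontinuities."""
            vesselness = frangi(volume.astype(float), sigmas=sigmas,
                                black_ridges=False)
            mask = vesselness > threshold
            mask = binary_closing(mask, ball(2))            # bridge small gaps
            mask = remove_small_objects(mask, min_size=50)  # drop dangling bits
            return vesselness, mask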

  5. Automatic R.O.I. calculation for bone densitometry

    Energy Technology Data Exchange (ETDEWEB)

    Darboux, M.; Gonon, G.; Dinten, J.M

    2005-07-01

    We propose in this paper an automatic method for detecting anatomical zones (lumbar vertebrae, hip) in the two kinds of images, hip and lumbar spine. The robustness of this method has been evaluated on series of clinical images. (N.C.)

  6. Historical evolution of anatomical terminology from ancient to modern.

    Science.gov (United States)

    Sakai, Tatsuo

    2007-06-01

    The historical development of anatomical terminology from ancient to modern times can be divided into five stages. The initial stage is represented by the oldest extant anatomical treatises, by Galen of Pergamon in the Roman Empire. Galen's anatomical descriptions utilized only a limited number of anatomical terms, which were essentially colloquial words in the Greek of the period. In the second stage, Vesalius in the early 16th century described the anatomical structures in his Fabrica with the help of detailed, magnificent illustrations. He coined essentially no new anatomical terms, but devised a system that distinguished anatomical structures by ordinal numbers. The third stage of development, in the late 16th century, was marked by the introduction of a large number of specific anatomical terms, especially for the muscles, vessels and nerves. The main figures at this stage were Sylvius in Paris and Bauhin in Basel. In the fourth stage, between Bauhin and the international anatomical terminology, many anatomical textbooks were written, mainly in Latin in the 17th century and in modern languages in the 18th and 19th centuries. Anatomical terms for the same structure were expressed differently by different authors. The last stage began at the end of the 19th century, when the first international anatomical terminology in Latin was published as the Nomina anatomica. The anatomical terminology has been revised repeatedly up to the current Terminologia anatomica, in both Latin and English.

  7. A Unification of Inheritance and Automatic Program Specialization

    DEFF Research Database (Denmark)

    Schultz, Ulrik Pagh

    2004-01-01

    The object-oriented style of programming facilitates program adaptation and enhances program genericness, but at the expense of efficiency. Automatic program specialization can be used to generate specialized, efficient implementations for specific scenarios, but requires the program to be structured appropriately for specialization and is yet another new concept for the programmer to understand and apply. We have unified automatic program specialization and inheritance into a single concept, and implemented this approach in a modified version of Java named JUST. When programming in JUST, inheritance is used to control the automatic application of program specialization to class members during compilation to obtain an efficient implementation. This paper presents the language JUST, which integrates object-oriented concepts, block structure, and techniques from automatic program specialization.

  8. ATIPS: Automatic Travel Itinerary Planning System for Domestic Areas.

    Science.gov (United States)

    Chang, Hsien-Tsung; Chang, Yi-Ming; Tsai, Meng-Tze

    2016-01-01

    Leisure travel has become a topic of great interest to Taiwanese residents in recent years. Most residents expect to be able to relax on vacation during the holidays; however, the complicated procedure of travel itinerary planning is often discouraging and leads them to abandon the idea of traveling. In this paper, we present an automatic travel itinerary planning system for domestic areas (ATIPS), which uses an algorithm to automatically plan a domestic travel itinerary based on user intentions, minimizing the effort of trip planning. Simply by entering the travel time, the departure point, and the destination, the user can have the system automatically generate a travel itinerary. According to the results of the experiments, 70% of users were satisfied with the results of our system, and 82% of users were satisfied with the automatic user-preference learning mechanism of ATIPS. Our algorithm also provides a framework for substituting modules or weights and offers a new method for travel planning.
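
    The abstract does not spell out the planning algorithm, but the flavor of itinerary generation under a time budget can be shown with a simple greedy sketch; the POI scores stand in for the learned user preferences, and travel_time is an assumed helper returning hours between two points.

        from dataclasses import dataclass

        @dataclass
        class POI:
            name: str
            visit_time: float   # hours spent at the point of interest
            score: float        # stand-in for a learned preference weight

        def plan_itinerary(pois, travel_time, time_budget):
            """Greedy sketch: repeatedly add the reachable POI with the best
            score-per-hour until the time budget is exhausted."""
            itinerary, current, remaining = [], None, time_budget
            candidates = list(pois)
            while candidates:
                def cost(p):
                    return p.visit_time + (travel_time(current, p) if current else 0)
                feasible = [p for p in candidates if cost(p) <= remaining]
                if not feasible:
                    break
                best = max(feasible, key=lambda p: p.score / cost(p))
                remaining -= cost(best)
                itinerary.append(best)
                candidates.remove(best)
                current = best
            return itinerary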

  9. The Guide-based Automatic Creation of Verified Test Scenarios

    Directory of Open Access Journals (Sweden)

    P. D. Drobintsev

    2013-01-01

    Full Text Available This paper presents an overview of a technology for the automated generation of test scenarios based on guides. The use of this technology can significantly improve the quality of the developed software products. To motivate the creation of the technology, the main problems that occur during the development and testing of large industrial systems are described, as well as methodologies for verifying software conformity to product requirements. The capabilities of tools for automatic and semi-automatic generation of a test suite from a formal model in UCM notation are demonstrated, as well as tools for verification and test automation.

  10. Automatic exposure for xeromammography

    International Nuclear Information System (INIS)

    Aichinger, H.

    1977-01-01

    During mammography without intensifying screens, exposure measurements are carried out behind the film. It is, however, difficult to construct an absolutely shadow-free ionization chamber of adequate sensitivity working in the necessary range of 25 to 50 kV. Repeated attempts have been made to utilize the advantages of automatic exposure for xeromammography. In this case also, the ionization chamber was placed behind the Xerox plate. Depending on tube filtration, object thickness and tube voltage, more than 80%, sometimes even 90%, of the radiation is absorbed by the Xerox plate. In particular, the characteristic Mo radiation at 17.4 keV and 19.6 keV is almost totally absorbed by the plate and therefore cannot be registered by the ionization chamber. This results in a considerable dependence of the exposure on kV and object thickness. The dependence on tube voltage and object thickness has been examined dosimetrically and spectroscopically with a Ge(Li) spectrometer. Finally, the successful use of a shadow-free chamber is described; this has been particularly adapted for xeromammography and is placed in front of the plate. (orig.)

  11. Historical Review and Perspective on Automatic Journalizing

    OpenAIRE

    Kato, Masaki

    2017-01-01

    Contents: Introduction; 1. EDP Accounting and Automatic Journalizing; 2. Learning System of Automatic Journalizing; 3. Automatic Journalizing by Artificial Intelligence; 4. Direction of the Progress of the Accounting Information System

  12. AnatomicalTerms.info: heading for an online solution to the anatomical synonym problem. Hurdles in data-reuse from the Terminologia Anatomica and the Foundational Model of Anatomy and potentials for future development.

    Science.gov (United States)

    Gobée, O Paul; Jansma, Daniël; DeRuiter, Marco C

    2011-10-01

    The many synonyms for anatomical structures confuse medical students and complicate medical communication. Easily accessible translations would alleviate this problem. None of the presently available resources (the Terminologia Anatomica (TA), digital terminologies such as the Foundational Model of Anatomy (FMA), and websites) is fully satisfactory for this aim. Internet technologies offer new possibilities to solve the problem, and several authors have called for an online TA. An online translation resource should be easily accessible, user-friendly, comprehensive, and expandable, and its quality should be determinable. As a first step towards this goal, we built a translation website, www.AnatomicalTerms.info, based on the database of the FMA. It translates between English, Latin, eponyms, and to a lesser extent other languages, and presently contains over 31,000 terms for 7,250 structures, covering 95% of the TA. In addition, it automatically presents searches for images, documents and anatomical variations regarding the sought structure. Several terminological and conceptual issues were encountered in transferring data from the TA and FMA into AnatomicalTerms.info, resulting from these resources' different set-ups (paper versus digital) and targets (machine versus human user). To the best of our knowledge, AnatomicalTerms.info is unique in its combination of user-friendliness and comprehensiveness. As a next step, wiki-like expandability will be added to enable open contribution of clinical synonyms and terms in different languages. Specific quality measures will be taken to strike a balance between open contribution and quality assurance. AnatomicalTerms.info's mechanism that "translates" terms to structures may furthermore enhance targeted searching by linking images, descriptions, and other anatomical resources to the structures. Copyright © 2011 Wiley-Liss, Inc.
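
    The underlying translation mechanism (mapping any of a structure's synonyms to the structure, then listing all names recorded for it) can be illustrated with a toy dictionary; the entries below are invented and tiny compared with the site's 31,000 terms.

        # Hypothetical miniature of a term-to-structure translation table.
        SYNONYMS = {
            "kneecap": "patella",
            "patella": "patella",
            "shoulder blade": "scapula",
            "scapula": "scapula",
        }

        STRUCTURES = {
            "patella": {"la": "patella", "en": "kneecap"},
            "scapula": {"la": "scapula", "en": "shoulder blade"},
        }

        def translate(term):
            """Map any synonym to its structure, then list its known names."""
            structure = SYNONYMS.get(term.strip().lower())
            return STRUCTURES.get(structure, {}) if structure else {}

        print(translate("Kneecap"))   # -> {'la': 'patella', 'en': 'kneecap'}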

  13. An anatomically comprehensive atlas of the adult human brain transcriptome

    Science.gov (United States)

    Guillozet-Bongaarts, Angela L.; Shen, Elaine H.; Ng, Lydia; Miller, Jeremy A.; van de Lagemaat, Louie N.; Smith, Kimberly A.; Ebbert, Amanda; Riley, Zackery L.; Abajian, Chris; Beckmann, Christian F.; Bernard, Amy; Bertagnolli, Darren; Boe, Andrew F.; Cartagena, Preston M.; Chakravarty, M. Mallar; Chapin, Mike; Chong, Jimmy; Dalley, Rachel A.; David Daly, Barry; Dang, Chinh; Datta, Suvro; Dee, Nick; Dolbeare, Tim A.; Faber, Vance; Feng, David; Fowler, David R.; Goldy, Jeff; Gregor, Benjamin W.; Haradon, Zeb; Haynor, David R.; Hohmann, John G.; Horvath, Steve; Howard, Robert E.; Jeromin, Andreas; Jochim, Jayson M.; Kinnunen, Marty; Lau, Christopher; Lazarz, Evan T.; Lee, Changkyu; Lemon, Tracy A.; Li, Ling; Li, Yang; Morris, John A.; Overly, Caroline C.; Parker, Patrick D.; Parry, Sheana E.; Reding, Melissa; Royall, Joshua J.; Schulkin, Jay; Sequeira, Pedro Adolfo; Slaughterbeck, Clifford R.; Smith, Simon C.; Sodt, Andy J.; Sunkin, Susan M.; Swanson, Beryl E.; Vawter, Marquis P.; Williams, Derric; Wohnoutka, Paul; Zielke, H. Ronald; Geschwind, Daniel H.; Hof, Patrick R.; Smith, Stephen M.; Koch, Christof; Grant, Seth G. N.; Jones, Allan R.

    2014-01-01

    Neuroanatomically precise, genome-wide maps of transcript distributions are critical resources to complement genomic sequence data and to correlate functional and genetic brain architecture. Here we describe the generation and analysis of a transcriptional atlas of the adult human brain, comprising extensive histological analysis and comprehensive microarray profiling of ~900 neuroanatomically precise subdivisions in two individuals. Transcriptional regulation varies enormously by anatomical location, with different regions and their constituent cell types displaying robust molecular signatures that are highly conserved between individuals. Analysis of differential gene expression and gene co-expression relationships demonstrates that brain-wide variation strongly reflects the distributions of major cell classes such as neurons, oligodendrocytes, astrocytes and microglia. Local neighbourhood relationships between fine anatomical subdivisions are associated with discrete neuronal subtypes and genes involved in synaptic transmission. The neocortex displays a relatively homogeneous transcriptional pattern, but with distinct features associated selectively with primary sensorimotor cortices and with enriched frontal lobe expression. Notably, the spatial topography of the neocortex is strongly reflected in its molecular topography: the closer two cortical regions are, the more similar their transcriptomes. This freely accessible online data resource forms a high-resolution transcriptional baseline for neurogenetic studies of normal and abnormal human brain function. PMID:22996553

  14. From medical imaging data to 3D printed anatomical models.

    Directory of Open Access Journals (Sweden)

    Thore M Bücking

    Full Text Available Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and the increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computed Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.
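
    The mesh-generation step of such a workflow can be sketched in a few lines: marching cubes turns a binary segmentation mask into a triangle mesh, which is then written as ASCII STL for printing. A recent scikit-image is assumed, and the mesh refinement (smoothing, decimation) that the authors describe is left out.

        import numpy as np
        from skimage import measure

        def mask_to_stl(mask, path, spacing=(1.0, 1.0, 1.0)):
            """Convert a binary segmentation mask to an ASCII STL surface mesh."""
            verts, faces, _, _ = measure.marching_cubes(mask.astype(float),
                                                        level=0.5, spacing=spacing)
            with open(path, "w") as f:
                f.write("solid model\n")
                for tri in faces:
                    # Per-facet normal from the triangle's edge vectors.
                    n = np.cross(verts[tri[1]] - verts[tri[0]],
                                 verts[tri[2]] - verts[tri[0]])
                    n = n / (np.linalg.norm(n) or 1.0)
                    f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
                    for v in verts[tri]:
                        f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
                    f.write("    endloop\n  endfacet\n")
                f.write("endsolid model\n")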

  15. Integration of anatomical and external response mappings explains crossing effects in tactile localization: A probabilistic modeling approach.

    Science.gov (United States)

    Badde, Stephanie; Heed, Tobias; Röder, Brigitte

    2016-04-01

    To act upon a tactile stimulus, its original skin-based, anatomical spatial code has to be transformed into an external, posture-dependent reference frame, a process known as tactile remapping. When the limbs are crossed, anatomical and external location codes are in conflict, leading to a decline in tactile localization accuracy. It is unknown whether this impairment originates from the integration of the resulting external localization response with the original, anatomical one, or from a failure of tactile remapping in crossed postures. We fitted probabilistic models based on these diverging accounts to the data from three tactile localization experiments. Hand crossing disturbed tactile left-right location choices in all experiments. Furthermore, the size of these crossing effects was modulated by stimulus configuration and task instructions. The best model accounted for these results by integrating the external response mapping with the original, anatomical one, while applying identical integration weights for uncrossed and crossed postures. Thus, the model explained the data without assuming failures of remapping. Moreover, performance differences across tasks were accounted for by non-individual parameter adjustments, indicating that individual participants' task adaptation results from one common functional mechanism. These results suggest that remapping is an automatic and accurate process, and that the observed localization impairments in touch result from a cognitively controlled integration process that combines anatomically and externally coded responses.

  16. Oriental eyelids. Anatomic difference and surgical consideration.

    Science.gov (United States)

    Liu, D; Hsu, W M

    1986-01-01

    Fashions change with time, and beauty standards differ across cultures. In recent years, there has been an increase in the number of immigrants to the United States from the Orient. The creation of an upper eyelid crease has been, for the past several decades, the most popular cosmetic procedure in many Asian countries. In order to perform this procedure to the satisfaction of an Oriental patient, the surgeon must know what the patient perceives as beautiful and the anatomic differences between the Oriental and the Occidental eyelids. In this paper, with data collected from over 3,600 patients, we present statistics that enable the surgeon to better understand the Oriental patient's expectations and facilitate communication. The anatomic differences in the upper eyelid are also discussed.

  17. Accessory mental foramen: a rare anatomical finding

    Science.gov (United States)

    Thakur, Gagan; Thomas, Shaji; Thayil, Sumeeth Cyriac; Nair, Preeti P

    2011-01-01

    Accessory mental foramen (AMF) is a rare anatomical variation, with a prevalence ranging from 1.4% to 10%. Even so, in order to avoid neurovascular complications, particular attention should be paid to the possible occurrence of one or more AMF during surgical procedures involving the mandible. Careful surgical dissection should be performed in the region so that the presence of an AMF can be detected and a neurosensory disturbance or haemorrhage avoided. Although this anatomical variation is rare, it should be kept in mind that an AMF may exist. In the case reported here, trigeminal neuralgia was diagnosed, and on the basis of diagnostic test results, peripheral neurectomy of the mental nerve was planned. Failure to include in the neurectomy the mental nerve branch emerging from the AMF would have resulted in recurrence of pain and, eventually, failure of the procedure. PMID:22707601

  18. Automatic Segmentation of Ultrasound Tomography Image

    Directory of Open Access Journals (Sweden)

    Shibin Wu

    2017-01-01

    Full Text Available Ultrasound tomography (UST) image segmentation is fundamental to breast density estimation, treatment response analysis, and anatomical change quantification. Existing methods are time-consuming and require massive manual interaction. To address these issues, an automatic algorithm based on GrabCut (AUGC) is proposed in this paper. The presented method designs an automated GrabCut initialization for incomplete labeling and is sped up with multicore parallel programming. To verify performance, AUGC was applied to segment thirty-two in vivo UST volumetric images. The performance of AUGC was validated with breast overlap metrics (Dice coefficient (D), Jaccard (J), and False positive (FP)) and time cost (TC). Furthermore, AUGC was compared to other methods, including Confidence Connected Region Growing (CCRG), watershed, and Active Contour based Curve Delineation (ACCD). Experimental results indicate that AUGC achieves the highest accuracy (D = 0.9275, J = 0.8660, FP = 0.0077) and takes on average about 4 seconds to process a volumetric image. These results suggest that AUGC can benefit large-scale studies that use UST images for breast cancer screening and pathological quantification.
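
    A minimal sketch of the GrabCut core that AUGC builds on, using OpenCV. The initialization rectangle is passed in as a parameter here, whereas AUGC derives it automatically, and the multicore speed-up is omitted; an 8-bit 3-channel image is assumed, as cv2.grabCut requires.

        import cv2
        import numpy as np

        def grabcut_slice(image, rect, iterations=5):
            """Segment one slice with GrabCut.
            image: 8-bit BGR image; rect: (x, y, w, h) initialization box."""
            mask = np.zeros(image.shape[:2], np.uint8)
            bgd_model = np.zeros((1, 65), np.float64)
            fgd_model = np.zeros((1, 65), np.float64)
            cv2.grabCut(image, mask, rect, bgd_model, fgd_model,
                        iterations, cv2.GC_INIT_WITH_RECT)
            # Definite or probable foreground pixels form the segmentation.
            fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
            return fg.astype(np.uint8)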

  19. Hamstring tendons insertion - an anatomical study

    OpenAIRE

    Cristiano Antonio Grassi; Vagner Messias Fruheling; Joao Caetano Abdo; Marcio Fernando Aparecido de Moura; Mario Namba; Joao Luiz Vieira da Silva; Luiz Antonio Munhoz da Cunha; Ana Paula Gebert de Oliveira Franco; Isabel Ziesemer Costa; Edmar Stieven Filho

    2013-01-01

    OBJECTIVE: To study the anatomy of the hamstring tendon insertions and their anatomical relationships. METHODS: Ten cadaver knees with intact medial and anterior structures were selected. The dissection was performed through an anteromedial approach to expose the insertion of the flexor tendons (FT), the tibial plateau (TP) and the tibial tuberosity (TT). A 40 × 12 needle and a caliper were used to measure the distance from the tibial plateau to the knee flexor tendon insertion, at 15 mm from the ...

  20. Anatomically corrected transposition of great vessels

    International Nuclear Information System (INIS)

    Ivanitskij, A.V.; Sarkisova, T.N.

    1989-01-01

    The paper describes a rare congenital heart disease: anatomically corrected malposition of the great vessels in a girl aged 9 months and 24 days. The diagnosis was established on the basis of angiocardiography results, and the concomitant congenital heart defects were described. This abnormality is characterized by concordant atrioventricular and ventriculoarterial connections with an inverted position of the great vessels, and it is always accompanied by congenital heart defects. Surgical intervention is aimed at the elimination of the concomitant heart defects.

  1. New semi-automatic ROI setting system for brain PET images based on elastic model

    Energy Technology Data Exchange (ETDEWEB)

    Tanizaki, Naoaki; Okamura, Tetsuya (Sumitomo Heavy Industries Ltd., Kanagawa (Japan). Research and Development Center); Senda, Michio; Toyama, Hinako; Ishii, Kenji

    1994-10-01

    We have developed a semi-automatic ROI setting system for brain PET images. It is based on an elastic network model that fits a standard ROI atlas to the individual brain image. The standard ROI atlas is a set of segments, each representing an anatomical region. For the transformation, the operator needs to set only three kinds of distinct anatomical features: a manually determined midsagittal line; a brain contour line determined semi-automatically with the SNAKES algorithm; and a few manually determined specific ROIs used for exact transformation. Improvements in operation time and inter-operator variance were demonstrated in an experiment comparing the system with conventional manual ROI setting. The operation time was reduced to 50% in almost all cases, and the inter-operator variance was reduced by up to a factor of seven. (author).

  2. Exploring brain function from anatomical connectivity

    Directory of Open Access Journals (Sweden)

    Gorka eZamora-López

    2011-06-01

    Full Text Available The intrinsic relationship between the architecture of the brain and the range of sensory and behavioral phenomena it produces is a relevant question in neuroscience. Here, we review recent knowledge gained about the architecture of anatomical connectivity by means of complex network analysis. It has been found that corticocortical networks display a few prominent characteristics: (i) modular organization, (ii) abundant alternative processing paths, and (iii) the presence of highly connected hubs. Additionally, we present a novel classification of cortical areas of the cat according to the role they play in multisensory connectivity. All these properties represent an ideal anatomical substrate supporting rich dynamical behaviors, as well as facilitating the brain's capacity to process sensory information of different modalities in a segregated manner and to integrate it towards a comprehensive perception of the real world. The results presented here are mainly based on anatomical data from the cat brain, but further observations suggest that, from worms to humans, the nervous systems of all animals might share fundamental principles of organization.
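
    The network measures mentioned above (highly connected hubs, modules) are standard graph quantities; a toy sketch with networkx, run on a stand-in graph rather than real corticocortical data:

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        G = nx.les_miserables_graph()   # stand-in for a corticocortical network
        hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:5]
        modules = greedy_modularity_communities(G)
        print("top-degree hubs:", hubs)
        print("number of modules:", len(modules))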

  3. Effectiveness of an Automatic Tracking Software in Underwater Motion Analysis

    Directory of Open Access Journals (Sweden)

    Fabrício A. Magalhaes

    2013-12-01

    Full Text Available Tracking markers placed on anatomical landmarks is a common practice in sports science for performing the kinematic analyses that interest both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of software developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2,940 marker positions) were manually tracked to determine the markers' center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software package (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and correct the position of the cursor whenever the distance between the calculated marker coordinate and the reference one exceeded 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% fewer manual interventions for DVP than for COM. In conclusion, based on these results, the automatic tracking software presented here can be used as a valid and useful tool for underwater motion analysis.
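
    Since DVP is based on the Kanade-Lucas-Tomasi tracker, its frame-to-frame step can be approximated with OpenCV's pyramidal Lucas-Kanade implementation. The 4-pixel correction threshold from the study is kept; everything else (window size, reference handling) is an assumption for illustration.

        import cv2
        import numpy as np

        LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                         criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                                   30, 0.01))

        def track_markers(prev_gray, next_gray, points, reference=None, tol=4.0):
            """Track marker centers between frames; flag points that fail or
            drift more than tol pixels from a reference for manual correction."""
            pts = points.astype(np.float32).reshape(-1, 1, 2)
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                          pts, None, **LK_PARAMS)
            new_pts = new_pts.reshape(-1, 2)
            needs_fix = status.ravel() == 0
            if reference is not None:   # e.g. manually verified coordinates
                needs_fix |= np.linalg.norm(new_pts - reference, axis=1) > tol
            return new_pts, needs_fix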

  4. Electronic amplifiers for automatic compensators

    CERN Document Server

    Polonnikov, D Ye

    1965-01-01

    Electronic Amplifiers for Automatic Compensators presents the design and operation of electronic amplifiers for use in automatic control and measuring systems. This book is composed of eight chapters that consider the problems of constructing input and output circuits of amplifiers, suppression of interference and ensuring high sensitivity.This work begins with a survey of the operating principles of electronic amplifiers in automatic compensator systems. The succeeding chapters deal with circuit selection and the calculation and determination of the principal characteristics of amplifiers, as

  5. Zebrafish Expression Ontology of Gene Sets (ZEOGS): a tool to analyze enrichment of zebrafish anatomical terms in large gene sets.

    Science.gov (United States)

    Prykhozhij, Sergey V; Marsico, Annalisa; Meijsing, Sebastiaan H

    2013-09-01

    The zebrafish (Danio rerio) is an established model organism for developmental and biomedical research. It is frequently used for high-throughput functional genomics experiments, such as genome-wide gene expression measurements, to systematically analyze molecular mechanisms. However, the use of whole embryos or larvae in such experiments leads to a loss of spatial information. To address this problem, we have developed a tool called Zebrafish Expression Ontology of Gene Sets (ZEOGS) to assess the enrichment of anatomical terms in large gene sets. ZEOGS uses gene expression pattern data from several sources: first, in situ hybridization experiments from the Zebrafish Model Organism Database (ZFIN); second, the Zebrafish Anatomical Ontology, a controlled vocabulary that describes connected anatomical structures; and third, the available connections between expression patterns and anatomical terms contained in ZFIN. Upon input of a gene set, ZEOGS determines which anatomical structures are overrepresented in the input gene set. ZEOGS thus allows one, for the first time, to look at groups of genes and to describe them in terms of shared anatomical structures. To establish ZEOGS, we first tested it on random gene selections and on two public microarray datasets with known tissue-specific gene expression changes. These tests showed that ZEOGS could reliably identify the affected tissues, whereas few to no enriched terms were found in the random gene sets. Next we applied ZEOGS to microarray datasets of 24 and 72 h post-fertilization zebrafish embryos treated with beclomethasone, a potent glucocorticoid. This analysis resulted in the identification of several anatomical terms related to glucocorticoid-responsive tissues, some of which were stage-specific. Our studies highlight the ability of ZEOGS to extract spatial information from datasets derived from whole embryos, indicating that ZEOGS could be a useful tool to automatically analyze gene expression patterns.
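
    Term-enrichment tools of this kind typically rest on a one-sided hypergeometric test; ZEOGS's exact statistics are not given in the abstract, so the following is a generic sketch of how an anatomical term's enrichment p-value would be computed from gene sets.

        from scipy.stats import hypergeom

        def term_enrichment(term_genes, input_genes, all_genes):
            """One-sided hypergeometric enrichment p-value for one anatomical
            term. All arguments are Python sets of gene identifiers."""
            N = len(all_genes)                   # annotated background size
            K = len(term_genes & all_genes)      # genes carrying the term
            n = len(input_genes & all_genes)     # input set size
            k = len(term_genes & input_genes)    # overlap
            return hypergeom.sf(k - 1, N, K, n)  # P(X >= k)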

  7. Automatic Visualization of Software Requirements: Reactive Systems

    International Nuclear Information System (INIS)

    Castello, R.; Mili, R.; Tollis, I.G.; Winter, V.

    1999-01-01

    In this paper we present an approach that facilitates the validation of high-consequence system requirements. The approach consists of automatically generating a graphical representation from an informal document; our chosen graphical notation is statecharts. We proceed in two steps: we first extract a hierarchical decomposition tree from a textual description, then we draw a graph that models the statechart in a hierarchical fashion. The resulting drawing is an effective requirements assessment tool that allows the end user to easily pinpoint inconsistencies and incompleteness.

  8. Automatic dose-rate controlling equipment

    International Nuclear Information System (INIS)

    Szasz, T.; Nagy Czirok, Cs.; Batki, L.; Antal, S.

    1977-01-01

    The patent for dose-rate controlling equipment that can be attached to X-ray image amplifiers is presented. In the new equipment, the current of the photocathode of the image amplifier is fed into the regulating unit, which controls the X-ray generator automatically. The advantages of the equipment are the following: it can be simply attached to any type of X-ray image amplifier; it accomplishes fast and sensitive regulation; it makes possible the control of both the mA and the kV values; and it is attached to the most reliable point of the image-transmission chain. (L.E.)

  9. Automatic Planning of External Search Engine Optimization

    Directory of Open Access Journals (Sweden)

    Vita Jasevičiūtė

    2015-07-01

    Full Text Available This paper describes an investigation of an external search engine optimization (SEO) action-planning tool, designed to automatically extract a small set of the most important keywords for each month of a whole-year period. The keywords in the set are extracted according to externally measured parameters, such as the average number of searches during the year and for every month individually. Additionally, the position of the optimized web site for each keyword is taken into account. The generated optimization plan is similar to optimization plans prepared manually by SEO professionals and can be successfully used as a support tool for web site search engine optimization.
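
    A toy version of the monthly planning idea: score each keyword by its searches for the month, discounted by the site's current ranking position, and keep the top few. The scoring formula is invented for illustration and does not reproduce the paper's actual weighting.

        def monthly_keyword_plan(keywords, k=5):
            """keywords: list of dicts with 'term', 'position' (current rank)
            and 'searches' (12 monthly counts). Returns a month -> terms plan."""
            plan = {}
            for month in range(1, 13):
                scored = sorted(
                    keywords,
                    key=lambda kw: kw["searches"][month - 1] / (1 + kw["position"]),
                    reverse=True)
                plan[month] = [kw["term"] for kw in scored[:k]]
            return plan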

  10. Upgradation in N2 gas station for TLD personnel monitoring using gas based semi-automatic badge readers- installation and integration of N2 gas generator plant to the system and its performance appraisal

    International Nuclear Information System (INIS)

    Presently, personnel monitoring against external radiation is carried out using a CaSO4:Dy phosphor-based TLD badge and a hot-N2-gas-based semi-automatic TLD badge reader (TLDBR-7B). This system requires a supply of high-purity N2 gas for the operation of the reader. This gas is normally obtained from N2 gas cylinders, the use of which is cumbersome, tedious and cost-intensive, and involves safety hazards. To minimize the dependence on conventional gas cylinders, a medium-sized plant for generating N2 gas directly from air has been installed and coupled to the gas station and distribution system. The paper describes a comparative study of the performance of the TLD reading system using gas from the generator with that using gas cylinders; the two were found to be quite comparable. This upgradation has resulted in a drastic reduction in cost and labour and in improved safety in the TLD laboratory. (author)

  11. Automatic identification of cell files in light microscopic images of conifer wood

    OpenAIRE

    Kennel, Pol; Subsol, Gérard; Guéroult, Michaël; Borianne, Philippe

    2010-01-01

    International audience; In this paper, we present an automatic method for recognizing cell files in light microscopy images of conifer wood. This original method is decomposed into three steps: a segmentation step, which extracts anatomical structures from the image; a classification step, which identifies the cells of interest among these structures; and a cell-file recognition step. Preliminary results obtained on several species of conifers are presented and analyzed.

  12. AUTOMATIC EXTRACTION OF BUILDING OUTLINE FROM HIGH RESOLUTION AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2016-06-01

    Full Text Available In this paper, a new approach for the automated extraction of building boundaries from high-resolution imagery is proposed. The proposed approach uses both the geometric and the spectral properties of a building to detect and locate buildings accurately. It consists of the automatic generation of a high-quality point cloud from the imagery, building detection from the point cloud, classification of building roofs, and generation of building outlines. The point cloud is generated from the imagery automatically using semi-global image matching technology. Buildings are detected from the differential surface generated from the point cloud. Further classification of building roofs is performed in order to generate accurate building outlines. Finally, the classified building roofs are converted into vector format. Numerous tests have been performed on images of different locations, and the results are presented in the paper.

  13. Automatic Construction of Hypotheses for Linear Objects in Digital and Laser Scanning Images

    Directory of Open Access Journals (Sweden)

    Quintino Dalmolin

    2004-12-01

    Full Text Available This paper presents an approach for the automatic generation of road hypotheses from digital images and laser scanning images, combining various digital image processing techniques. The semantic objects in this work are linear features such as roads and streets. The aim of this paper is to automatically extract road hypotheses in image space and object space, for use in an automatic absolute orientation process. The results show that the methodology is efficient and that the road hypotheses are generated and validated.

  14. A multivariate pattern analysis study of the HIV-related white matter anatomical structural connections alterations

    Science.gov (United States)

    Tang, Zhenchao; Liu, Zhenyu; Li, Ruili; Cui, Xinwei; Li, Hongjun; Dong, Enqing; Tian, Jie

    2017-03-01

    It is widely known that HIV infection causes white matter integrity impairments. Nevertheless, it is still unclear how white matter anatomical structural connections are affected by HIV infection. In the current study, we employed a multivariate pattern analysis to explore HIV-related alterations of white matter connections. Forty antiretroviral-therapy-naïve HIV patients and thirty healthy controls were enrolled. First, an Automated Anatomical Labeling (AAL) atlas-based white matter structural network, a 90 × 90 FA-weighted matrix, was constructed for each subject. Then, the white matter connections derived from the structural network were entered into a lasso-logistic regression model to perform HIV-control group classification. Using leave-one-out cross validation, the classification model obtained a classification accuracy (ACC) of 90% (P = 0.002) and an area under the receiver operating characteristic curve (AUC) of 0.96. This result indicated that the white matter anatomical structural connections contributed greatly to HIV-control group classification, providing solid evidence that white matter connections are affected by HIV infection. Specifically, 11 white matter connections were selected in the classification model, mainly involving regions of the frontal lobe, cingulum, hippocampus and thalamus, which have been reported to be damaged in previous HIV studies. This might suggest that white matter connections adjacent to HIV-related impaired regions are prone to damage.
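
    The classification pipeline described (L1-penalized logistic regression with leave-one-out cross validation over FA-weighted connections) maps directly onto scikit-learn. Below is a sketch on synthetic data of the same shape, not the study's data; the regularization strength is a placeholder.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        # X: (n_subjects, n_connections) FA-weighted connection strengths taken
        # from the upper triangle of each subject's 90 x 90 structural network;
        # y: 1 = HIV, 0 = control. Values here are synthetic placeholders.
        rng = np.random.default_rng(0)
        X = rng.random((70, 4005))          # 90 * 89 / 2 = 4005 connections
        y = np.r_[np.ones(40), np.zeros(30)]

        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        print(f"leave-one-out accuracy: {acc:.2f}")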

  15. Automatic analysis of multiparty meetings

    Indian Academy of Sciences (India)

    The paper covers the Augmented Multi-party Interaction (AMI) meeting corpus, the development of a meeting speech recognition system, and systems for the automatic segmentation, summarization and social processing of meetings, together with some example applications based on these systems.

  16. Clothes Dryer Automatic Termination Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  17. An ``Anatomic approach" to study the Casimir effect

    Science.gov (United States)

    Intravaia, Francesco; Haakh, Harald; Henkel, Carsten

    2010-03-01

    The Casimir effect, in its simplest definition, is a quantum mechanical force between two objects placed in vacuum. In recent years the Casimir force has been the object of exponentially growing attention from both theorists and experimentalists. A new generation of experiments paved the way for new challenges and revealed some discrepancies between experiment and theory. Here we isolate different contributions to the Casimir interaction and perform a detailed study to shine new light on this phenomenon. As an example, the contributions of Foucault (eddy current) modes are discussed in different configurations. This ``anatomic approach'' allows us to clearly put special features into evidence and to explain unusual behaviors. This brings new physical understanding of the underlying physical mechanisms and suggests new ways to engineer the Casimir effect.

  18. An automatic image recognition approach

    Directory of Open Access Journals (Sweden)

    Tudor Barbu

    2007-07-01

    Full Text Available Our paper focuses on the graphical analysis domain. We propose an automatic image recognition technique consisting of two main pattern recognition steps. First, it performs an image feature extraction operation on an input image set, using statistical dispersion features. Then, an unsupervised classification process is performed on the previously obtained graphical feature vectors. An automatic region-growing-based clustering procedure is proposed and utilized in the classification stage.

  19. Automatic lung segmentation in the presence of alveolar collapse

    Directory of Open Access Journals (Sweden)

    Noshadi Areg

    2017-09-01

    Full Text Available Lung ventilation and perfusion analyses using chest imaging methods require a correct segmentation of the lung to provide anatomical landmarks for the physiological data. An automatic segmentation approach simplifies and accelerates the analysis. However, segmentation of the lungs has been shown to be difficult if collapsed areas are present, since these tend to share similar gray values with the surrounding non-pulmonary tissue. Our goal was to develop an automatic segmentation algorithm that is able to approximate dorsal lung boundaries even if alveolar collapse is present in the dependent lung areas adjacent to the pleura. Computed tomography data acquired in five supine pigs with injured lungs were used for this purpose. First, healthy lung tissue was segmented using a standard 3D region growing algorithm. Then, the bones in the chest wall surrounding the lungs were segmented to find the contact points of the ribs and pleura. Artificial boundaries of the dorsal lung were set by spline interpolation through these contact points. Segmentation masks of the entire lung, including the collapsed regions, were created by combining the splines with the segmentation masks of the healthy lung tissue through multiple morphological operations. The automatically segmented images were then evaluated by comparing them to manual segmentations, using the Dice similarity coefficient (DSC) as the similarity measure. The developed method was able to accurately segment the lungs, including the collapsed regions (DSCs over 0.96).
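
    The key trick, interpolating an artificial dorsal boundary through the rib-pleura contact points, can be sketched with a cubic spline per axial slice; combining the resulting curve with the region-grown mask via morphological operations is left out, and the input ordering assumption is illustrative.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def dorsal_boundary(contact_points, n_samples=200):
            """Interpolate an artificial dorsal lung boundary through rib-pleura
            contact points of one axial slice.
            contact_points: (N, 2) array of (x, y) pixel coordinates."""
            pts = contact_points[np.argsort(contact_points[:, 0])]  # left to right
            spline = CubicSpline(pts[:, 0], pts[:, 1])
            xs = np.linspace(pts[0, 0], pts[-1, 0], n_samples)
            return np.column_stack([xs, spline(xs)])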

  20. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    Directory of Open Access Journals (Sweden)

    Jun Yi Wang

    Full Text Available Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreements in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High-resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or a 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using FreeSurfer. Subsequently, FreeSurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation, measured by the Dice coefficient, improved significantly from 0.956 (for the FreeSurfer segmentation) to 0.978 (for the SegAdapter-corrected segmentation) for the cerebellum, and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by no more than 0.002 for the cerebellum and 0.005 for the brainstem, compared to a training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and in the amount of brain atrophy, which reduced spatial overlap by less than 0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large
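
    For reference, the Dice similarity coefficient used to score these segmentations is simple to compute from two binary masks:

        import numpy as np

        def dice(a, b):
            """Dice similarity coefficient between two binary masks:
            2 * |A intersect B| / (|A| + |B|)."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0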

  1. Poster - Thurs Eve-17: Stand alone software for deforming delivered dose distributions to account for daily anatomical variations in prostate patients treated on the TomoTherapy Hi-Art II system.

    Science.gov (United States)

    Rivest, R; Riauka, T; Murtha, A; Fallone, G

    2008-07-01

    The acquisition of daily megavoltage (MV)-CT images provides an invaluable tool in the delivery of adaptive radiotherapy (ART) on the TomoTherapy Hi-Art II system. Using TomoTherapy's Planned Adaptive software, delivery sinograms can be applied to pre-treatment MVCT images to generate daily delivered dose distributions, allowing for the potential comparison of planned and delivered doses. However, daily patient anatomical variations complicate the task, and accurate comparison requires that daily doses be evaluated in the same reference frame as the planned dose. Each anatomical point in the daily MVCT images must be mapped to its corresponding point in the patient planning CT, and that deformation map must be applied to the daily dose distribution. Stand-alone software has been developed for the comparison of planned and delivered doses for TomoTherapy prostate patients. Software inputs are the planning CT, planning structure data, planned dose distribution, daily MVCT and delivered dose distribution. The software uses an in-house developed automatic voxel-based deformable registration algorithm, designed and optimized specifically for the registration of prostate CT images, to achieve anatomical correspondence between MVCT and planning images. The resultant deformation map is applied to the daily dose distribution, and the software outputs the deformed daily dose distribution in the planning CT's reference frame, as well as a delivered DVH for each of the planning CT's ROIs. The software enables a number of potential research opportunities, in particular the calculation of the cumulative dose delivered over the course of treatment for prostate patients treated on the Hi-Art II system. © 2008 American Association of Physicists in Medicine.
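
    The dose-warping step can be pictured as resampling the daily dose through the deformation map. A minimal sketch, assuming the map is stored as dense voxel coordinates into the daily image for every planning-CT voxel (a representation chosen here for illustration):

        from scipy.ndimage import map_coordinates

        def warp_dose(daily_dose, deformation):
            """Resample a daily dose grid into the planning CT frame.
            daily_dose: (Z, Y, X) array; deformation: (3, Z, Y, X) array giving,
            for each planning-CT voxel, the matching (z, y, x) daily voxel."""
            return map_coordinates(daily_dose, deformation, order=1,
                                   mode="nearest")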

  2. Global Distribution Adjustment and Nonlinear Feature Transformation for Automatic Colorization

    Directory of Open Access Journals (Sweden)

    Terumasa Aoki

    2018-01-01

    Full Text Available Automatic colorization is generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization methods, color image(s) are used as reference(s) to reconstruct the original colors of a gray target image. The most important task here is to find the best matching pairs for all pixels between the reference and target images, in order to transfer color information from reference to target pixels. Many attractive local feature-based image matching methods have been developed over the last two decades. Unfortunately, as far as we know, there are no optimal matching methods for automatic colorization, because the requirements for pixel matching in automatic colorization are wholly different from those for traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixels with low computational cost and generating descriptive feature vectors are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on solving the second problem (designing a descriptive feature vector); namely, we discuss how to learn a descriptive texture feature using a scaled sparse texture feature combined with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show that our proposed method outperforms the state-of-the-art methods in terms of robustness of color reconstruction for automatic colorization applications.
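
    The matching step at the heart of reference-based colorization reduces to a nearest-neighbour search in feature space. This sketch transfers reference chrominance to target pixels given precomputed feature vectors, standing in for the paper's learned descriptor and matching scheme.

        from scipy.spatial import cKDTree

        def transfer_color(target_feats, ref_feats, ref_chroma):
            """For every target pixel, copy the chrominance of the reference
            pixel with the nearest texture feature.
            target_feats: (Nt, d); ref_feats: (Nr, d);
            ref_chroma: (Nr, 2), e.g. the ab channels of Lab color."""
            _, idx = cKDTree(ref_feats).query(target_feats, k=1)
            return ref_chroma[idx]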

  3. Probabilistic anatomical labeling of brain structures using statistical probabilistic anatomical maps

    International Nuclear Information System (INIS)

    Kim, Jin Su; Lee, Dong Soo; Lee, Byung Il; Lee, Jae Sung; Shin, Hee Won; Chung, June Key; Lee, Myung Chul

    2002-01-01

    The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers consult the Talairach atlas to report the localization of activations detected with SPM, there is significant disparity between the MNI templates and the Talairach atlas. That disparity between Talairach and MNI coordinates makes the interpretation of SPM results time-consuming, subjective and inaccurate. The purpose of this study was to develop a program providing objective anatomical information for any x-y-z position in ICBM coordinates. The program was designed to provide the anatomical information for a given x-y-z position in MNI coordinates based on the statistical probabilistic anatomical map (SPAM) images of the ICBM. When an x-y-z position is given to the program, the names of the anatomical structures with non-zero probability and the probabilities that the given position belongs to those structures are tabulated. The program was coded in IDL and JAVA for easy transplantation to any operating system or platform. The utility of the program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying the program to the analysis of a PET brain activation study of human memory, in which the anatomical information on the activated areas was previously known. Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the programs. The validation study showed the relevance of the program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. These programs will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
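
    Functionally, the program amounts to a probability look-up at a voxel. A sketch, assuming the SPAM images are stacked in one array and an inverse affine maps MNI millimetres to voxel indices (both assumptions, not details from the abstract):

        import numpy as np

        def probabilistic_labels(spam, names, xyz_mni, affine_inv):
            """Return {structure: probability} for one MNI-space coordinate.
            spam: (n_structures, I, J, K) probability maps; names: labels;
            affine_inv: 4x4 matrix mapping MNI mm to voxel indices."""
            i, j, k = np.rint(affine_inv @ np.append(xyz_mni, 1.0))[:3].astype(int)
            probs = spam[:, i, j, k]
            return {names[n]: float(p) for n, p in enumerate(probs) if p > 0}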

  4. TOPICAL REVIEW: Anatomical imaging for radiotherapy

    Science.gov (United States)

    Evans, Philip M.

    2008-06-01

    The goal of radiation therapy is to achieve maximal therapeutic benefit, expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer-planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state-of-the-art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods, which measure basic physical characteristics of tissue such as density, and biological imaging techniques, which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques; biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally, scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT, raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. In ART a set of

  5. Automatic generation of the entity-relationship diagram and its representation in SQL from a controlled language (UN-Lencep)

    Directory of Open Access Journals (Sweden)

    Carlos Mario Zapata Jaramillo

    2011-01-01

    The entity-relationship diagram (ERD) is one of the diagrams used to model the information of a domain. Several proposals have emerged for speeding up and improving the software development process by obtaining the ERD either automatically or semi-automatically. Some of these proposals use natural language or controlled languages as a starting point, while others use intermediate representations. The stakeholders (people with some concern in the application development), when untrained, cannot understand several of these representations, which restricts their active participation in all stages of development. Trying to solve these problems, this paper proposes a set of heuristic rules for automatically obtaining the ERD and its SQL representation. The starting point is UN-Lencep, a controlled language already used for generating other artefacts in software development.
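
    A toy illustration of the kind of heuristic rule involved (not UN-Lencep's actual grammar): a controlled sentence of the form "X has Y" is mapped to a column of an entity table, and each entity becomes a CREATE TABLE statement.

      import re

      def sentences_to_sql(sentences):
          """Toy heuristic: 'X has Y' adds column Y to entity X."""
          tables = {}
          for s in sentences:
              m = re.match(r"(\w+) has (\w+)", s.strip(), re.IGNORECASE)
              if m:
                  entity, attr = m.group(1).lower(), m.group(2).lower()
                  tables.setdefault(entity, []).append(attr)
          return [f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, "
                  + ", ".join(f"{c} TEXT" for c in cols) + ");"
                  for t, cols in tables.items()]

      print(sentences_to_sql(["Customer has name", "Customer has email"]))
      # ['CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT);']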

  6. A conceptual approach to automatic generation of code

    Directory of Open Access Journals (Sweden)

    Carlos Mario Zapata

    2010-07-01

    Automated code generation is fostered by several software development methods. This generation is often supplied by well-known CASE (Computer-Aided Software Engineering) tools. However, full automation is still far off, and some CASE tools are complemented by non-standard modeling projects. In this paper, we conceptualize projects related to automated code generation, starting from discourse representations in either controlled or natural language, or in conceptual schemas. In this way, we present a graphical summary of crucial concepts related to this issue, by means of a state-of-the-art review. We conclude that automated code generation usually begins from solution-based representations of the problem instead of domain-based representations. Also, these starting points are hard for the client to understand, which leads to poor validation in early stages of the software development lifecycle.

  7. Chronic ankle instability: Arthroscopic anatomical repair.

    Science.gov (United States)

    Arroyo-Hernández, M; Mellado-Romero, M; Páramo-Díaz, P; García-Lamas, L; Vilà-Rico, J

    Ankle sprains are one of the most common injuries. Despite appropriate conservative treatment, approximately 20-40% of patients continue to have chronic ankle instability and pain. In 75-80% of cases there is an isolated rupture of the anterior talofibular ligament. A retrospective observational study was conducted on 21 patients surgically treated for chronic ankle instability by means of an arthroscopic anatomical repair, between May 2012 and January 2013. There were 15 men and 6 women, with a mean age of 30.43 years (range 18-48). The mean follow-up was 29 months (range 25-33). All patients were treated by arthroscopic anatomical repair of the anterior talofibular ligament. Four (19%) patients were found to have varus hindfoot deformity. Associated injuries were present in 13 (62%) patients: 6 cases of osteochondral lesions, 3 cases of posterior ankle impingement syndrome, and 6 cases of peroneal pathology. All these injuries were surgically treated during the same operation. A clinical-functional study was performed using the American Orthopaedic Foot and Ankle Society (AOFAS) score. The mean score before surgery was 66.12 (range 60-71), and after surgery it increased to a mean of 96.95 (range 90-100). All patients were able to return to their previous sport activity within a mean of 21.5 weeks (range 17-28). Complications were found in 3 (14%) patients. The arthroscopic anatomical ligament repair technique has excellent clinical-functional results with a low percentage of complications, and enables patients to return to their previous sport activity within a short period of time. Copyright © 2016 SECOT. Published by Elsevier España, S.L.U. All rights reserved.

  8. [Anatomical studying of the tear trough area].

    Science.gov (United States)

    Yang, Ningze; Qiu, Wei; Wang, Zhijun; Su, Xiaowei; Jia, Huafeng; Shi, Heng

    2014-01-01

    To explore the mechanism of the aging deformity of the tear trough through an anatomic study of the tear trough region. 13 adult cadaveric heads (26 sides), including 9 male heads (18 sides) and 4 female heads (8 sides), aged 22-78 years, were used. The anatomic study was performed around the orbit, especially the tear trough region, with microsurgical instruments under a microscope (x10). The lower orbicularis retaining ligament was dissected and exposed, and its anatomic location was recorded and photographed. (1) The anatomic layers of the tear trough region contain skin, subcutaneous tissue, orbicularis oculi muscle and periosteum. There is no subcutaneous fat above the tear trough, while below it there is the malar fat pad. (2) There is a natural boundary between the septal and orbital portions of the orbicularis oculi muscle of the lower eyelid at the surface of the orbital bone. This natural boundary, projected onto the body surface, corresponds to the tear trough. The width of the boundary is (2.06 +/- 0.15) mm on the vertical line through the inner canthus and (3.25 +/- 0.12) mm on the vertical line through the lateral margin of the ala. The septal and orbital portions of the orbicularis oculi muscle begin to merge (16.56 +/- 0.51) mm from the inner canthus. (3) There is ligament attachment in the medial, upper and lower orbit, and none in the lateral orbit. The orbicularis retaining ligament of the lower eyelid is divided into two layers. (4) The upper layer of the orbicularis retaining ligament of the lower eyelid originates medially from the orbital margin and laterally from the preorbital wall, (16.10 +/- 0.43) mm medial to the lateral orbital margin, passes through the orbicularis oculi muscle and ends at the skin. The lower layer originates from the preorbital wall, passes through the orbicularis oculi muscle and its superficial fat, then ends at the skin. The length of the tear trough is (16.56 +/- 0.51) mm

  9. A Technique: Generating Alternative Thoughts

    Directory of Open Access Journals (Sweden)

    Serkan AKKOYUNLU

    2013-03-01

    Introduction: One of the basic techniques of cognitive therapy is the examination of automatic thoughts and the reduction of belief in them. By employing this, we can overcome the cognitive bias apparent in mental disorders. According to another cognitive perspective, in a given situation there are distinct cognitive representations competing for retrieval from memory, just like positive and negative schemas. In this sense, generating or strengthening alternative explanations or balanced thoughts that explain the situation better than negative automatic thoughts is one of the important process goals of cognitive therapy. Objective: The aim of this review is to describe methods used to generate alternative/balanced thoughts, which are employed in examining automatic thoughts and form part of automatic thought records. Alternative/balanced thoughts are the summary and end point of work on automatic thoughts. Different approaches are discussed, including listing alternative thoughts, examining the evidence to generate balanced thoughts, decatastrophizing in anxiety, and a meta-cognitive method named 'two explanations'. Different ways to assign this technique as homework are also reviewed. Remarkable aspects of generating alternative explanations and realistic/balanced thoughts are reviewed and exemplified using therapy transcripts. Conclusion: Generating alternative explanations and balanced thoughts is the end point and an important part of therapy work on automatic thoughts. When applied properly and rehearsed as homework between sessions, these methods may lead to improvement in many mental disorders.

  10. Automatic Color Sorting of Hardwood Edge-Glued Panel Parts

    Science.gov (United States)

    D. Earl Kline; Richard Conners; Qiang Lu; Philip A. Araman

    1997-01-01

    This paper describes an automatic color sorting system for red oak edge-glued panel parts. The color sorting system simultaneously examines both faces of a panel part and then determines which face has the "best" color, and sorts the part into one of a number of color classes at plant production speeds. Initial test results show that the system generated over...

  11. Sign language perception research for improving automatic sign language recognition

    NARCIS (Netherlands)

    Ten Holt, G.A.; Arendsen, J.; De Ridder, H.; Van Doorn, A.J.; Reinders, M.J.T.; Hendriks, E.A.

    2009-01-01

    Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about the recognition of sign language. Using such knowledge can improve ASLR because it can give an indication which elements or phases of a sign are important for its meaning. Also, the current generation of

  12. Integrated Engineering Policy in the Branch of Computer Science and Automatization

    Science.gov (United States)

    A short description is given of the position of Czechoslovakia in the field of automation and computer production. Automation and computer science problems in Czechoslovakia are outlined, and data on the future development of third-generation computers in socialist countries are described. (Author)

  13. Localization of anatomical point landmarks in 3D medical images by fitting 3D parametric intensity models.

    Science.gov (United States)

    Wörz, Stefan; Rohr, Karl

    2006-02-01

    We introduce a new approach for the localization of 3D anatomical point landmarks. This approach is based on 3D parametric intensity models which are directly fitted to 3D images. To efficiently model tip-like, saddle-like, and sphere-like anatomical structures we introduce analytic intensity models based on the Gaussian error function in conjunction with 3D rigid transformations as well as deformations. To select a suitable size of the region-of-interest (ROI) where model fitting is performed, we also propose a new scheme for automatic selection of an optimal 3D ROI size based on the dominant gradient direction. In addition, to achieve a higher level of automation we present an algorithm for automatic initialization of the model parameters. Our approach has been successfully applied to accurately localize anatomical landmarks in 3D synthetic data as well as 3D MR and 3D CT image data. We have also compared the experimental results with the results of a previously proposed 3D differential approach. It turns out that the new approach significantly improves the localization accuracy.
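
    A reduced, one-dimensional sketch of the model-fitting idea, assuming a smoothed step edge described by the Gaussian error function; the 3D models in the paper add rigid transformations and deformations on top of this:

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      def edge_model(x, a0, a1, x0, sigma):
          """Intensity profile of a Gaussian-smoothed step edge."""
          return a0 + 0.5 * a1 * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

      x = np.arange(64, dtype=float)
      data = edge_model(x, 10.0, 100.0, 31.7, 2.0) + np.random.normal(0.0, 2.0, x.size)
      popt, _ = curve_fit(edge_model, x, data, p0=[0.0, 80.0, 32.0, 1.0])
      print(popt[2])   # sub-voxel estimate of the edge (landmark) position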

  14. Global localization of 3D anatomical structures by pre-filtered Hough forests and discrete optimization.

    Science.gov (United States)

    Donner, René; Menze, Bjoern H; Bischof, Horst; Langs, Georg

    2013-12-01

    The accurate localization of anatomical landmarks is a challenging task, often solved by domain specific approaches. We propose a method for the automatic localization of landmarks in complex, repetitive anatomical structures. The key idea is to combine three steps: (1) a classifier for pre-filtering anatomical landmark positions that (2) are refined through a Hough regression model, together with (3) a parts-based model of the global landmark topology to select the final landmark positions. During training landmarks are annotated in a set of example volumes. A classifier learns local landmark appearance, and Hough regressors are trained to aggregate neighborhood information to a precise landmark coordinate position. A non-parametric geometric model encodes the spatial relationships between the landmarks and derives a topology which connects mutually predictive landmarks. During the global search we classify all voxels in the query volume, and perform regression-based agglomeration of landmark probabilities to highly accurate and specific candidate points at potential landmark locations. We encode the candidates' weights together with the conformity of the connecting edges to the learnt geometric model in a Markov Random Field (MRF). By solving the corresponding discrete optimization problem, the most probable location for each model landmark is found in the query volume. We show that this approach is able to consistently localize the model landmarks despite the complex and repetitive character of the anatomical structures on three challenging data sets (hand radiographs, hand CTs, and whole body CTs), with a median localization error of 0.80 mm, 1.19 mm and 2.71 mm, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
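
    A sketch of the final selection step for just two mutually connected landmarks, assuming candidate positions and weights from the regression stage; real instances couple many landmarks and are solved with a discrete MRF optimizer rather than exhaustive search:

      import numpy as np

      def select_pair(cand_a, w_a, cand_b, w_b, expected_dist, lam=1.0):
          """Pick one candidate per landmark minimizing unary + pairwise energy.
          cand_*: (n, 3) candidate coordinates; w_*: (n,) candidate weights."""
          best, best_e = None, np.inf
          for i, pa in enumerate(cand_a):
              for j, pb in enumerate(cand_b):
                  geom = (np.linalg.norm(pa - pb) - expected_dist) ** 2
                  e = -np.log(w_a[i]) - np.log(w_b[j]) + lam * geom
                  if e < best_e:
                      best, best_e = (i, j), e
          return best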

  15. Fast correspondences search in anatomical trees

    Science.gov (United States)

    dos Santos, Thiago R.; Gergel, Ingmar; Meinzer, Hans-Peter; Maier-Hein, Lena

    2010-03-01

    Registration of multiple medical images commonly comprises the steps of feature extraction, correspondence search and transformation computation. In this paper, we present a new method for fast and pose-independent search of correspondences using as features anatomical trees, such as the bronchial system in the lungs or the vessel system in the liver. Our approach scores the similarities between the trees' nodes (bifurcations), taking into account both topological properties extracted from their graph representations and anatomical properties extracted from the trees themselves. The node assignment maximizes the global similarity (the sum of the scores of each pair of assigned nodes), assuring that the matches are distributed throughout the trees. Furthermore, the proposed method is able to deal with distortions in the data, such as noise, motion and artifacts, and with problems associated with the extraction method, such as missing or false branches. According to an evaluation on swine lung data sets, the method requires less than one second on average to compute the matching and yields a high rate of correct matches compared to state-of-the-art methods.
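
    The node assignment that maximizes the summed similarity can be phrased as a linear assignment problem; a minimal sketch, assuming the topological-plus-anatomical similarity matrix has already been computed:

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def match_nodes(similarity, min_score=0.1):
          """similarity: (n_ref, n_target) bifurcation similarity scores.
          Returns matched (ref, target) pairs; weak matches are dropped so
          missing or false branches do not force bad correspondences."""
          rows, cols = linear_sum_assignment(-similarity)   # maximize total score
          return [(r, c) for r, c in zip(rows, cols)
                  if similarity[r, c] >= min_score]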

  16. Anatomical modeling of the bronchial tree

    Science.gov (United States)

    Hentschel, Gerrit; Klinder, Tobias; Blaffert, Thomas; Bülow, Thomas; Wiemker, Rafael; Lorenz, Cristian

    2010-02-01

    The bronchial tree is of direct clinical importance in the context of respective diseases, such as chronic obstructive pulmonary disease (COPD). It furthermore constitutes a reference structure for object localization in the lungs, and it finally provides access to lung tissue in, e.g., bronchoscope-based procedures for diagnosis and therapy. This paper presents a comprehensive anatomical model for the bronchial tree, including statistics of position, relative and absolute orientation, length, and radius of 34 bronchial segments, going beyond previously published results. The model has been built from 16 manually annotated CT scans, covering several branching variants. The model is represented as a centerline/tree structure but can also be converted into a surface representation. Possible model applications are either to anatomically label extracted bronchial trees or to improve the tree extraction itself by identifying missing segments or sub-trees, e.g., if located beyond a bronchial stenosis. Bronchial tree labeling is achieved using a naïve Bayesian classifier based on the segment properties contained in the model, in combination with tree matching. The tree matching step makes use of the branching variations covered by the model. An evaluation of the model has been performed in a leave-one-out manner. In total, 87% of the branches resulting from the preceding airway tree segmentation could be correctly labeled. The individualized model enables the detection of missing branches, allowing a targeted search, e.g., a local rerun of the tree segmentation.
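
    A minimal sketch of the labeling idea with a Gaussian naive Bayes classifier over simple segment properties; the features and labels below are illustrative stand-ins for the statistics stored in the model:

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      # Each row: [length_mm, radius_mm, orient_x, orient_y, orient_z]
      X_train = np.array([[15.0, 2.1,  0.1, 0.2, 0.97],
                          [30.0, 4.0,  0.7, 0.1, 0.70],
                          [12.0, 1.8, -0.3, 0.9, 0.30]])
      y_train = ["RB1", "RMB", "LB6"]            # segment labels (illustrative)

      clf = GaussianNB().fit(X_train, y_train)
      print(clf.predict([[14.0, 2.0, 0.05, 0.25, 0.95]]))   # -> ['RB1']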

  17. Sleep Disturbance and Anatomic Shoulder Arthroplasty.

    Science.gov (United States)

    Morris, Brent J; Sciascia, Aaron D; Jacobs, Cale A; Edwards, T Bradley

    2017-05-01

    Sleep disturbance is commonly encountered in patients with glenohumeral joint arthritis and can be a factor that drives patients to consider surgery. The prevalence of sleep disturbance before or after anatomic total shoulder arthroplasty has not been reported. The authors identified 232 eligible patients in a prospective shoulder arthroplasty registry following total shoulder arthroplasty for primary glenohumeral joint arthritis with 2- to 5-year follow-up. Sleep disturbance secondary to the affected shoulder was characterized preoperatively and postoperatively as no sleep disturbance, frequent sleep disturbance, or nightly sleep disturbance. A total of 211 patients (91%) reported sleep disturbance prior to surgery. Patients with nightly sleep disturbance had significantly worse preoperative scores. Sleep improved after surgery, with 186 patients (80%) reporting no sleep disturbance. The no sleep disturbance group had significantly greater patient-reported outcome scores and range of motion following surgery compared with the other sleep disturbance groups for nearly all outcome measures (P≤.01). Patients have significant improvements in sleep after anatomic shoulder arthroplasty. There was a high prevalence of sleep disturbance preoperatively (211 patients, 91%) compared with postoperatively (46 patients, 20%). [Orthopedics. 2017; 40(3):e450-e454.]. Copyright 2017, SLACK Incorporated.

  18. Anatomical and roentgenographic features of atlantooccipital instability.

    Science.gov (United States)

    Harris, M B; Duval, M J; Davis, J A; Bernini, P M

    1993-02-01

    An anatomical study using six fresh, human cadaveric cervical spine specimens was performed. After the dissection of all soft tissue, flexion-extension radiographs were obtained to verify initial stability. A sagittal plane bone cut was then made, centered on the odontoid and sparing the alar ligaments, the tectorial membrane, and the atlantooccipital (AO) ligaments. Repeat flexion-extension radiographs and photographs were taken to document maintenance of stability of these hemisections. The occipital-atlantoaxial ligaments were then individually and sequentially incised, maintaining all other structures each time. After the sectioning of each ligament, flexion-extension radiographs and photographs were obtained to identify subsequent motion patterns. Both gross anatomical and roentgenographic examinations demonstrated the important stabilizing role of the tectorial membrane in flexion. Additionally, contact between the posterior arch of C1 and the occiput limited hyperextension as a secondary restraint once the tectorial membrane was sectioned. Furthermore, the AO ligaments proved to play an insignificant role in the preservation of AO stability through a flexion-extension arc of motion. Under normal circumstances, the AO articulation is not excessively stressed. However, acute AO injury, as well as the insidious failure of these ligaments, has been documented in several cases involving various pathologies. This study demonstrates a mechanism of instability and highlights the essential role of the tectorial membrane in maintaining upper cervical spine stability.

  19. Digital automatic gain amplifier

    Science.gov (United States)

    Holley, L. D.; Ward, J. O. (Inventor)

    1978-01-01

    A circuit is described for adjusting the amplitude of a reference signal to a predetermined level so as to permit subsequent data signals to be interpreted correctly. The circuit includes an operational amplifier having a feedback circuit connected between an output terminal and an input terminal; a bank of relays operably connected to a plurality of resistors; and a comparator comparing an output voltage of the amplifier with a reference voltage and generating a compared signal responsive thereto. Means are provided for selectively energizing the relays according to the compared signal from the comparator until the output signal from the amplifier equals the reference signal. A second comparator is provided for comparing the output of the amplifier with a second voltage source so as to illuminate a lamp when the output signal from the amplifier exceeds the second voltage.
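
    A behavioral sketch of the described feedback loop, with the relay-selected feedback resistors reduced to a set of discrete gains (a simulation, not the patented circuit):

      def settle_gain(ref_in, target, gains):
          """Choose the relay-selected gain whose output best matches the
          target level, mimicking the comparator-driven relay switching."""
          return min(gains, key=lambda g: abs(g * ref_in - target))

      gain = settle_gain(ref_in=0.12, target=1.0, gains=[1, 2, 4, 8, 16])
      print(gain, gain * 0.12)   # 8, 0.96 -- closest achievable output level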

  20. Automatic sample changers maintenance manual

    International Nuclear Information System (INIS)

    Myers, T.A.

    1978-10-01

    This manual describes, and provides trouble-shooting aids for, the Automatic Sample Changer electronics on the automatic beta counting system developed by Los Alamos Scientific Laboratory Group CNC-11. The output of a gas detector is shaped by a preamplifier and then coupled to an amplifier. The amplifier output is discriminated and forms the input to a scaler. An identification number is associated with each sample. At a predetermined count length, the identification number, scaler data and other information are punched onto a data card, and the next sample to be counted is automatically selected. The beta counter uses the same electronics for each count, the only difference being the sample identification number and the sample itself. This manual is intended as a step-by-step aid in trouble-shooting the electronics associated with positioning the sample, counting the sample, and punching the needed data onto an 80-column data card.

  1. Performance of automatic scanning microscope for nuclear emulsion experiments

    Science.gov (United States)

    Güler, A. Murat; Altınok, Özgür

    2015-12-01

    The impressive improvements in scanning technology and methods have allowed nuclear emulsion to be used as a target in recent large experiments. We report the performance of an automatic scanning microscope for nuclear emulsion experiments. After successful calibration and alignment of the system, we reached 99% tracking efficiency for minimum-ionizing tracks penetrating the emulsion films. The automatic scanning system has been successfully used for the scanning of emulsion films in the OPERA experiment and is planned for use in the next generation of nuclear emulsion experiments.

  2. The anatomical tibial axis: reliable rotational orientation in knee replacement.

    Science.gov (United States)

    Cobb, J P; Dixon, H; Dandachli, W; Iranpour, F

    2008-08-01

    The rotational alignment of the tibia is an unresolved issue in knee replacement. A poor functional outcome may be due to malrotation of the tibial component. Our aim was to find a reliable method for positioning the tibial component in knee replacement. CT scans of 19 knees were reconstructed in three dimensions and orientated vertically. An axial plane was identified 20 mm below the tibial spines. The centre of each tibial condyle was calculated from ten points taken round the condylar cortex. The tibial tubercle centre was also generated, as the centre of the circle which best fitted eight points on the outside of the tubercle in an axial plane at the level of its most prominent point. The derived points were identified by three observers with errors of 0.6 mm to 1 mm. The medial and lateral tibial centres were constant features (radius 24 mm (SD 3) and 22 mm (SD 3), respectively). An anatomical axis was created perpendicular to the line joining these two points. The tubercle centre was found to be 20 mm (SD 7) lateral to the centre of the medial tibial condyle. Compared with this axis, an axis perpendicular to the posterior condylar axis was internally rotated by 6 degrees (SD 3). An axis based on the tibial tubercle and the tibial spines was also internally rotated, by 5 degrees (SD 10). Alignment of the knee based on this anatomical axis was more reliable than either the posterior surfaces or any axis involving the tubercle, which was the least reliable landmark in the region.
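
    One common way to implement the described geometry (a sketch, not the authors' code): fit least-squares circles to the condylar cortical points, then take the axis perpendicular to the line joining the two centres.

      import numpy as np

      def fit_circle(pts):
          """Algebraic least-squares circle fit. pts: (n, 2) -> (cx, cy, r)."""
          x, y = pts[:, 0], pts[:, 1]
          A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
          (cx, cy, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
          return cx, cy, np.sqrt(c + cx**2 + cy**2)

      th = np.linspace(0, 2 * np.pi, 10, endpoint=False)      # synthetic points
      medial_pts = np.column_stack([24 * np.cos(th), 24 * np.sin(th)])
      lateral_pts = np.column_stack([46 + 22 * np.cos(th), 22 * np.sin(th)])

      med = np.array(fit_circle(medial_pts)[:2])
      lat = np.array(fit_circle(lateral_pts)[:2])
      d = lat - med
      axis = np.array([-d[1], d[0]]) / np.linalg.norm(d)      # anatomical axis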

  3. Comparison of methodologies for automatic generation of limits and drainage networks for hydrographic basins

    Directory of Open Access Journals (Sweden)

    Samantha A. Alcaraz

    2009-08-01

    The objective of this work was to compare methodologies for the automatic generation of limits and drainage networks, using a geographical information system, for basins of low relief variation such as the Dourados catchment area. Various data/process combinations were assessed, especially the ArcHydro and AVSWAT interfaces used to process 50 m resolution DTMs formed from the interpolation of digitized contour lines using the ArcInfo, ArcView and Spring GIS packages, and a 90 m resolution SRTM DTM acquired by radar interferometry. Their accuracy was estimated based upon the pre-processing of small basic sub-basin units of different relief variations, before applying the best combinations to the entire Dourados basin. The accuracy of the automatic stream network generation and watershed delineation depends essentially on the quality of the raw digital terrain model. The selection of the most suitable combination then depends entirely on the aims of the user and on the working scale.

  4. Development of an automatic scaler

    International Nuclear Information System (INIS)

    He Yuehong

    2009-04-01

    A self-designed automatic scaler is introduced. A microcontroller, the LPC936, is used as the master chip in the scaler, and a counter integrated in the microcontroller is configured to operate as an external pulse counter. The software employed in the scaler is based on an embedded real-time operating system kernel named Small RTOS. Data storage, calculation and some other functions are also provided. The scaler is designed as a low-cost, low-power-consumption solution. By now, the automatic scaler has been applied in a surface contamination instrument. (authors)

  5. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming focuses on the techniques of automatic programming used with digital computers. Topics covered range from the design of machine-independent programming languages to the use of recursive procedures in ALGOL 60. A multi-pass translation scheme for ALGOL 60 is described, along with some commercial source languages. The structure and use of the syntax-directed compiler is also considered.Comprised of 12 chapters, this volume begins with a discussion on the basic ideas involved in the description of a computing process as a program for a computer, expressed in

  6. Traduction automatique et terminologie automatique (Automatic Translation and Automatic Terminology

    Science.gov (United States)

    Dansereau, Jules

    1978-01-01

    An exposition of reasons why a system of automatic translation could not use a terminology bank except as a source of information. The fundamental difference between the two tools is explained and examples of translation and mistranslation are given as evidence of the limits and possibilities of each process. (Text is in French.) (AMH)

  7. Anatomical study of middle cluneal nerve entrapment

    Directory of Open Access Journals (Sweden)

    Konno T

    2017-06-01

    Object: Entrapment of the middle cluneal nerve (MCN) under the long posterior sacroiliac ligament (LPSL) is a possible, and underdiagnosed, cause of low-back and/or leg symptoms. To date, detailed anatomical studies of MCN entrapment are few. The purpose of this study was to ascertain, using cadavers, the relationship between the MCN and LPSL and to investigate MCN entrapment. Methods: A total of 30 hemipelves from 20 cadaveric donors (15 female, 5 male), designated for education or research, were studied by gross anatomical dissection. The age range of the donors at death was 71–101 years, with a mean of 88 years. Branches of the MCN were identified under or over the gluteus maximus fascia caudal to the posterior superior iliac spine (PSIS) and traced laterally as far as their finest ramification. Special attention was paid to the relationship between the MCN and LPSL. The distance from the branch of the MCN to the PSIS and to the midline, and the diameter of the MCN, were measured. Results: A total of 64 MCN branches were identified in the 30 hemipelves. Of the 64 branches, 10 (16%) penetrated the LPSL. The average cephalocaudal distance from the PSIS to where the MCN penetrated the LPSL was 28.5±11.2 mm (9.1–53.7 mm). The distance from the midline was 36.0±6.4 mm (23.5–45.2 mm). The diameter of the MCN branch traversing the LPSL averaged 1.6±0.5 mm (0.5–3.1 mm). Four of the 10 branches penetrating the LPSL had obvious constriction under the ligament. Conclusion: This is the first anatomical study illustrating MCN entrapment. It is likely that MCN entrapment is not a rare clinical entity. Keywords: middle cluneal nerve, sacroiliac joint

  8. Mistakes in the usage of anatomical terminology in clinical practice.

    Science.gov (United States)

    Kachlik, David; Bozdechova, Ivana; Cech, Pavel; Musil, Vladimir; Baca, Vaclav

    2009-06-01

    Anatomical terminology serves as a basic communication tool in all medical fields. Therefore the Latin anatomical nomenclature has been repeatedly issued and revised from 1895 (Basiliensia Nomina Anatomica) until 1998, when the last version was approved and published as the Terminologia Anatomica (International Anatomical Terminology) by the Federative Committee on Anatomical Terminology. A brief history of the development of the terminology and nomenclature is given, along with the concept and contributions of the Terminologia Anatomica, including the abbreviations employed. Examples of obsolete anatomical terms and their current synonyms are listed. Clinicians entered the process of nomenclature revision, and this aspect is demonstrated with several examples of terms used in clinical fields only, some already incorporated in the Terminologia Anatomica and a few obsolete terms still alive in non-theoretical communication. Frequent mistakes in grammar and orthography are noted as well. The authors strongly recommend the use of the recent revision of the Latin anatomical nomenclature in both theoretical and clinical medicine.

  9. Contribution to the anatomical nomenclature concerning lower limb anatomy.

    Science.gov (United States)

    Kachlik, David; Musil, Vladimir; Baca, Vaclav

    2017-09-18

    The aim of this article is to extend and revise the sections of the Terminologia Anatomica (TA) dealing with lower limb structures, and to justify the use of newly proposed anatomical terms in clinical medicine, education and research. Anatomical terms were gathered during our educational experience from anatomical textbooks and journals and compared with the four previous editions of the official Latin anatomical nomenclature. The authors summarise 270 terms with their definitions and explanations, covering both constant and variable morphological structures (bones, joints, muscles, vessels, nerves and superficial structures) of the hip, thigh, knee, leg, ankle and foot, complemented with several grammatical remarks and some general anatomical terms. The proposed terms should be discussed in the wider anatomical community and potentially added to the next edition of the TA.

  10. Determination of uranium and plutonium in PFBR MOX fuel using automatic potentiometric titrator

    International Nuclear Information System (INIS)

    Kelkar, Anoop; Meena, D.L.; Singh, Mamta; Kapoor, Y.S.; Pabale, Sagar; Fulzele, Ajit; Das, D.K.; Behere, P.G.; Afzal, Mohd

    2014-01-01

    The present paper describes an automatic potentiometric method for the determination of uranium and plutonium in the less complexing H2SO4 medium, scaling down the reagent volumes to 15-20 ml in order to minimize waste generation.

  11. Early Results of Anatomic Double Bundle Anterior Cruciate Ligament Reconstruction

    OpenAIRE

    Demet Pepele

    2014-01-01

    Aim: The goal in anterior cruciate ligament reconstruction (ACLR) is to restore the normal anatomic structure and function of the knee. In a significant proportion of patients, complaints of instability continue after traditional single-bundle ACLR. Anatomic double-bundle ACLR may provide knee kinematics much closer to the natural anatomy. The aim of this study is to clinically assess the early outcomes of our anatomical double-bundle ACLR. Material and Method: In our ...

  12. Total variation with automatic hyper-parameter estimation.

    Science.gov (United States)

    Nascimento, Jacinto; Sanches, João

    2008-01-01

    Medical diagnosis is often hampered by the quality of the images, across a wide range of image modalities. Image noise reduction is a crucial step, yet difficult to accomplish. Bayesian algorithms have been commonly used with success, namely with the additive white Gaussian noise (AWGN) model. In fact, the noise corrupting some of the most used medical imaging modalities is neither additive nor Gaussian, but multiplicative, described by Poisson or Rayleigh distributions. This paper proposes a unified framework with automatic hyper-parameter estimation. The proposed framework deals with AWGN but also with both Poisson and Rayleigh distributions. The algorithm proposed herein is based on a maximum a posteriori (MAP) criterion with an edge-preserving prior based on total variation (TV), which avoids the distortion of relevant anatomical details. The denoising is performed via a single parametric iterative scheme, parameterized for each noise model considered. Tests with real data from several medical imaging modalities attest to the performance of the algorithm.
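
    A reduced sketch of the AWGN case, minimizing 0.5||x - y||^2 + lam * TV(x) by gradient descent on a smoothed total variation; boundary handling is simplified and this is not the paper's exact iterative scheme:

      import numpy as np

      def tv_denoise(y, lam=0.1, step=0.2, iters=200, eps=1e-6):
          x = y.astype(float).copy()
          for _ in range(iters):
              gx = np.diff(x, axis=1, append=x[:, -1:])    # forward differences
              gy = np.diff(x, axis=0, append=x[-1:, :])
              mag = np.sqrt(gx**2 + gy**2 + eps)           # smoothed |grad x|
              px, py = gx / mag, gy / mag
              div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
              x -= step * ((x - y) - lam * div)            # MAP gradient step
          return x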

  13. Internuclear ophthalmoplegia: MR imaging and anatomic correlation

    International Nuclear Information System (INIS)

    Atlas, S.W.; Grossman, R.I.; Savino, P.J.

    1986-01-01

    Internuclear ophthalmoplegia is a gaze disorder characterized by impaired adduction on the side of a lesion in the medial longitudinal fasciculus (MLF), with dissociated nystagmus of the abducting eye. Eleven patients with internuclear ophthalmoplegia (nine with multiple sclerosis, two with infarction) were examined with spin-echo MR imaging performed at 1.5 T. Nine of the 11 patients also underwent CT. MR imaging was highly sensitive (10 of 11 cases) and CT was of no value (0 of 9 cases) in detecting clinically suspected MLF lesions. These lesions must be distinguished from "pseudo-MLF hyperintensity," which appears as a thin, strictly midline, linear hyperintensity just interior to the fourth ventricle and aqueduct in healthy subjects. True MLF lesions are nodular, more prominent, and slightly off the midline, corresponding to the paramedian anatomic site of the MLF.

  14. Anatomic Twist to a Straightforward Ablation

    Directory of Open Access Journals (Sweden)

    Mandeep Singh Randhawa, MD

    2013-03-01

    Atrioventricular (AV) junction ablation for the treatment of refractory atrial fibrillation is a well-defined, standardized procedure and the simplest of the commonly performed radiofrequency ablations in the field of cardiac electrophysiology. We report successful AV junction ablation using an inferior approach in a case of inferior vena cava interruption. Inability during the procedure to initially pass the ablation catheter into the right ventricle, combined with low-amplitude electrograms, led to suspicion of an anatomic abnormality. This was determined to be a heterotaxy syndrome with inferior vena cava interruption and azygos continuation, draining in turn into the superior vena cava. Advancing a Schwartz right 0 (SR0) sheath through the venous abnormality into the right atrium allowed adequate catheter stability to successfully induce complete AV block with radiofrequency energy.

  15. [Antique anatomical collections for contemporary museums].

    Science.gov (United States)

    Nesi, Gabriella; Santi, Raffaella

    2013-01-01

    Anatomy and pathology museum collections hold great biological value and offer unique samples for research purposes. Pathological specimens may be investigated by means of modern radiological and molecular biology techniques in order to provide the etiological background of disease, with relevance to present-day knowledge. Meanwhile, historical resources provide epidemiologic data regarding the socio-economic conditions of the resident populations, the more frequently encountered illnesses, and dietary habits. These multidisciplinary approaches lead to more accurate diagnoses, also allowing new strategies in the cataloguing and musealization of anatomical specimens. Further, once these data are gathered, they may constitute the basis of re-edited museum catalogues that can be digitized and displayed via the Web.

  16. Training shortest-path tractography: Automatic learning of spatial priors

    DEFF Research Database (Denmark)

    Kasenburg, Niklas; Liptrot, Matthew George; Reislev, Nina Linde

    2016-01-01

    Tractography is the standard tool for automatic delineation of white matter tracts from diffusion weighted images. However, the output of tractography often requires post-processing to remove false positives and ensure a robust delineation of the studied tract, and this demands expert prior knowledge. Here we demonstrate how such prior knowledge, or indeed any prior spatial information, can be automatically incorporated into a shortest-path tractography approach to produce more robust results. We describe how such a prior can be automatically generated (learned) from a population, and we demonstrate that our framework also retains support for conventional interactive constraints such as waypoint regions. We apply our approach to the open access, high quality Human Connectome Project data, as well as a dataset acquired on a typical clinical scanner. Our results show that the use of a learned...
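
    A generic illustration of how a learned spatial prior can enter a shortest-path formulation (not the paper's pipeline): each edge cost combines the negative log of the local connectivity strength with the negative log of the learned prior at the entered voxel.

      import math
      import networkx as nx

      def prior_weighted_path(G, source, target, prior, eps=1e-6):
          """G: voxel graph with edge attribute 'conn' in (0, 1];
          prior[node]: learned probability that the tract visits the node."""
          def cost(u, v, data):
              return -math.log(data["conn"]) - math.log(max(eps, prior.get(v, eps)))
          return nx.dijkstra_path(G, source, target, weight=cost)

      G = nx.Graph()
      G.add_edge("a", "b", conn=0.9)
      G.add_edge("b", "c", conn=0.8)
      G.add_edge("a", "c", conn=0.5)
      print(prior_weighted_path(G, "a", "c", {"b": 0.9, "c": 0.9}))  # ['a', 'b', 'c']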

  17. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential

  18. Automatically Preparing Safe SQL Queries

    Science.gov (United States)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
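
    What the transformation amounts to at the code level, shown here with Python's sqlite3 for illustration (the paper's approach rewrites legacy web-application source):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
      conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

      name = "alice' OR '1'='1"     # attacker-controlled input

      # Unsafe: string concatenation lets the input rewrite the query:
      #   "SELECT * FROM users WHERE name = '" + name + "'"

      # Safe: a prepared/parameterized statement treats the input as data only.
      rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
      print(rows)   # [] -- the injection string matches no user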

  19. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpablo

    2003-01-01

    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we...

  20. The Automatic Measurement of Targets

    DEFF Research Database (Denmark)

    Höhle, Joachim

    1997-01-01

    The automatic measurement of targets is demonstrated by means of a theoretical example and by an interactive measuring program for real imagery from a réseau camera. The strategy used is a combination of two methods: the maximum correlation coefficient and correlation in the subpixel range... The interactive software is also part of a computer-assisted learning program on digital photogrammetry.

  1. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
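
    A minimal interval type showing how worst-case bounds propagate through a formula without any derivative bookkeeping (illustrative; INTLAB provides far more):

      class Interval:
          """Closed interval [lo, hi] with worst-case propagation."""
          def __init__(self, lo, hi):
              self.lo, self.hi = lo, hi
          def __add__(self, o):
              return Interval(self.lo + o.lo, self.hi + o.hi)
          def __mul__(self, o):
              p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
              return Interval(min(p), max(p))
          def __repr__(self):
              return f"[{self.lo:.4g}, {self.hi:.4g}]"

      R = Interval(99.0, 101.0)     # resistance: 100 ohm +/- 1%
      I = Interval(1.95, 2.05)      # current: 2 A +/- 0.05 A
      print(I * I * R)              # bounds on power P = I^2 * R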

  2. FaNexer: Persian Keyphrase Automatic Indexer

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Falahati Qadimi Fumani

    2014-06-01

    The main objective of this paper was to design a model of automatic keyphrase indexing for Persian. The trained model, consisting of six features – "TF", "TF × IDF", "RE", "RE × IDF", "Node Degree" and "First Occurrence" – is elaborated on. Each of these six features is briefly defined, and for each feature the discretization ranges applied, as well as the Yes/No probability scores of being an index term, are reported. Finally, the way the model and each of its components perform is demonstrated step by step by running the software on a sample full-text article. The final assessment of the software on 75 test articles revealed that it performed very well on full texts (F-measure = 27.3%, precision = 31.68%, and recall = 25.45%) and abstracts (F-measure = 28%, precision = 32.19%, and recall = 26.27%) when the default was set at 7. The software also proved successful as regards the generation of keyphrases rather than single-word index terms at default 7. In all, 58.1% of the index terms generated by the software for full-text documents, and 58.67% of those generated for abstracts, were phrases. Finally, 78.86% and 74.48% of the keyterms generated for full texts and abstracts, respectively, were judged as relevant by an LIS expert.
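
    A sketch of the scoring scheme such features feed into, with hypothetical numbers: each candidate phrase gets a TF×IDF value, the value falls into a discretization bin, and the per-bin Yes/No probabilities are combined into one score.

      import math

      def tf_idf(term_count, doc_len, docs_with_term, n_docs):
          return (term_count / doc_len) * math.log((n_docs + 1) / (docs_with_term + 1))

      def keyphrase_score(feature_yes_probs):
          """Combine per-feature P(index term | bin) values into one odds score."""
          score = 1.0
          for p in feature_yes_probs:
              score *= p / max(1e-9, 1.0 - p)
          return score

      # e.g. bins hit by one candidate: TF*IDF, First Occurrence, Node Degree
      print(keyphrase_score([0.6, 0.7, 0.55]))   # larger -> more likely an index term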

  3. Automatic design of digital synthetic gene circuits.

    Directory of Open Access Journals (Sweden)

    Mario A Marchisio

    2011-02-01

    De novo computational design of synthetic gene circuits that achieve well-defined target functions is a hard task. Existing, brute-force approaches run optimization algorithms on the structure and on the kinetic parameter values of the network. However, more direct rational methods for automatic circuit design are lacking. Focusing on digital synthetic gene circuits, we developed a methodology and a corresponding tool for in silico automatic design. For a given truth table that specifies a circuit's input-output relations, our algorithm generates and ranks several possible circuit schemes without the need for any optimization. Logic behavior is reproduced by the action of regulatory factors and chemicals on the promoters and on the ribosome binding sites of biological Boolean gates. Simulations of circuits with up to four inputs show a faithful and unequivocal truth table representation, even under parametric perturbations and stochastic noise. A comparison with already implemented circuits, in addition, reveals the potential for simpler designs with the same function. Therefore, we expect the method to help both in devising new circuits and in simplifying existing solutions.
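
    The logic front end of such a flow, reduced to a toy: derive a sum-of-products Boolean expression from the truth table; each product term would then be realized by regulatory factors acting on promoters and ribosome binding sites.

      def truth_table_to_sop(table, names):
          """table: {(in1, in2, ...): out}; returns a sum-of-products string."""
          terms = []
          for inputs, out in table.items():
              if out:
                  lits = [n if v else f"NOT {n}" for n, v in zip(names, inputs)]
                  terms.append("(" + " AND ".join(lits) + ")")
          return " OR ".join(terms) if terms else "FALSE"

      xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
      print(truth_table_to_sop(xor, ["A", "B"]))
      # (NOT A AND B) OR (A AND NOT B)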

  4. The new vestibular stimuli: sound and vibration-anatomical, physiological and clinical evidence.

    Science.gov (United States)

    Curthoys, Ian S

    2017-04-01

    The classical view of the otoliths (flat plates of fairly uniform receptors activated by linear acceleration dragging on otoconia and so deflecting the receptor hair bundles) has been replaced by new anatomical and physiological evidence which shows that the maculae are much more complex. There is anatomical spatial differentiation across the macula in terms of receptor types, hair bundle heights, stiffness and attachment to the overlying otolithic membrane. This anatomical spatial differentiation corresponds to the neural spatial differentiation of response dynamics from the receptors and afferents of different regions of the otolithic maculae. Specifically, receptors in a specialized band of cells, the striola, are predominantly type I receptors, with short, stiff hair bundles and looser attachment to the overlying otoconial membrane than extrastriolar receptors. At the striola the hair bundles project into holes in the otolithic membrane, allowing fluid displacement to deflect the hair bundles and activate the cells. This review presents the anatomical and physiological evidence supporting the hypothesis that fluid displacement, generated by sound or vibration, deflects the short stiff hair bundles of type I receptors at the striola, resulting in neural activation of the irregular afferents innervating them. These afferents are thus activated by sound or vibration and show phase-locking to individual cycles of the sound or vibration stimulus up to frequencies above 2000 Hz, underpinning the use of sound and vibration in clinical tests of vestibular function.

  5. Anatomically asymmetrical runners move more asymmetrically at the same metabolic cost.

    Directory of Open Access Journals (Sweden)

    Elena Seminati

    We hypothesized that, as occurs in cars, body structural asymmetries could generate asymmetry in the kinematics/dynamics of locomotion, resulting in a higher metabolic cost of transport, i.e. more 'fuel' needed to travel a given distance. Previous studies found that asymmetries in horses' bodies correlate negatively with galloping performance. In this investigation, we analyzed anatomical differences between the left and right lower limbs as a whole by performing 3D cross-correlation of magnetic resonance images of 19 male runners, clustered as Untrained Runners, Occasional Runners and Skilled Runners. The running kinematics of their body centre of mass were obtained from the body segment coordinates measured by a 3D motion capture system at incremental running velocities on a treadmill. A recent mathematical procedure quantified the asymmetry of the body centre of mass trajectory between the left and right steps. During the same sessions, the runners' metabolic consumption was measured and the cost of transport was calculated. No correlations were found between anatomical/kinematic variables and the metabolic cost of transport, regardless of training experience. However, anatomical symmetry correlated significantly with kinematic symmetry, and the most trained subjects showed the highest level of kinematic symmetry during running. The results suggest that despite the significant effects of anatomical asymmetry on kinematics, either those changes are too small to affect economy, or some plastic compensation in the locomotor system mitigates the hypothesized change in the energy expenditure of running.

  6. Parametric Anatomical Modeling: A method for modeling the anatomical layout of neurons and their projections

    Directory of Open Access Journals (Sweden)

    Martin Pyka

    2014-09-01

    Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated in the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections is heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differs qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional, but yet biologically plausible, parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort.

  7. Parametric Anatomical Modeling: a method for modeling the anatomical layout of neurons and their projections.

    Science.gov (United States)

    Pyka, Martin; Klatt, Sebastian; Cheng, Sen

    2014-01-01

    Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated in the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections is heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differ qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional, but yet biologically plausible, parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort.
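
    A generic sketch, in the spirit of PAM but not its actual API, of deriving connectivity and conduction delays from layer geometry alone:

      import numpy as np

      rng = np.random.default_rng(0)
      pre = rng.uniform(0.0, 1.0, size=(200, 3))    # pre-synaptic somata (mm)
      post = rng.uniform(0.0, 1.0, size=(150, 3))   # post-synaptic somata (mm)

      # Distance-dependent connection probability and distance-derived delays
      d = np.linalg.norm(pre[:, None, :] - post[None, :, :], axis=2)  # (200, 150)
      p_connect = np.exp(-d / 0.2)          # assumed axonal spread constant
      connected = rng.random(d.shape) < p_connect
      velocity = 0.5                        # assumed conduction velocity (m/s)
      delays_ms = np.where(connected, d / velocity, np.nan)   # mm/(m/s) -> ms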

  8. [Historical development of modern anatomical education in Japan].

    Science.gov (United States)

    Sakai, Tatsuo

    2008-12-01

    The medical schools at the beginning of the Meiji era were diverse both in their founders and in their ways of education, frequently employing foreign teachers of various nationalities. In 1871, German teachers were appointed to organize medical education at the medical school of the university of Tokyo. The anatomical education in the school was conducted by German teachers, i.e. Miller (1871-1873), Dönitz (1873-1877), Gierke (1877-1880) and Disse (1880-1885), followed by Koganei, who returned from study in Germany. In 1882 (Meiji 15th), the general rule for medical schools was enforced, so that the medical schools were practically obliged to employ graduates of the university of Tokyo. In 1887 (Meiji 20th), the educational system was reformed so that many of the medical schools were closed, and the remainder were integrated into one university, five national senior high schools and three prefectural ones, in addition to four private ones. After that, most anatomical teachers were either graduates of the university of Tokyo or those who had studied in the anatomical department of the university. Before 1877 (Meiji 10th), anatomical books were mainly translated from English books, and foreign teachers of various nationalities were employed in many medical schools in Japan. After 1877 (Meiji 10th), anatomical books based on the lectures by German teachers at the university of Tokyo were published. Anatomical books after 1887 (Meiji 20th) were written based on German books, and German anatomical terms were used. After 1905 (Meiji 38th), original Japanese anatomical books appeared, employing international anatomical terms. At the first meeting of the Japanese Association of Anatomists in 1893 (Meiji 26th), Japanese anatomical teachers met together, and most of them were graduates of the university of Tokyo or fellows of its anatomical department.

  9. Recent advances in standards for collaborative Digital Anatomic Pathology

    Science.gov (United States)

    2011-01-01

    Context Collaborative Digital Anatomic Pathology refers to the use of information technology that supports the creation and sharing or exchange of information, including data and images, during the complex workflow performed in an Anatomic Pathology department from specimen reception to report transmission and exploitation. Collaborative Digital Anatomic Pathology can only be fully achieved using medical informatics standards. The goal of the international Integrating the Healthcare Enterprise (IHE) initiative is precisely to specify how medical informatics standards should be implemented to meet specific health care needs and to make systems integration more efficient and less expensive. Objective To define the best use of medical informatics standards in order to share and exchange machine-readable structured reports and their evidence (including whole slide images) within hospitals and across healthcare facilities. Methods Specific working groups dedicated to Anatomic Pathology within multiple standards organizations defined standard-based data structures for Anatomic Pathology reports and images, as well as informatics transactions, in order to integrate Anatomic Pathology information into the electronic healthcare enterprise. Results DICOM supplements 122 and 145 provide flexible object information definitions dedicated respectively to specimen description and to Whole Slide Image acquisition, storage and display. The content profile "Anatomic Pathology Structured Report" (APSR) provides standard templates for structured reports in which textual observations may be bound to digital images or regions of interest. Anatomic Pathology observations are encoded using an international controlled vocabulary defined by the IHE Anatomic Pathology domain that is currently being mapped to SNOMED CT concepts. Conclusion Recent advances in standards for Collaborative Digital Anatomic Pathology are a unique opportunity to share or exchange Anatomic Pathology structured
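
    As a concrete illustration of the objects these supplements define, the following sketch reads a hypothetical DICOM file with pydicom and inspects the Supplement 122 specimen attributes and the Supplement 145 whole-slide-image pixel-matrix attributes. The file name is an assumption, and which attributes are present depends on the producing system.

```python
import pydicom

# Minimal sketch: inspect Supplement 122 (specimen) and Supplement 145
# (whole slide image) attributes. "slide.dcm" is a hypothetical local file.
ds = pydicom.dcmread("slide.dcm")

# Supplement 145: VL Whole Slide Microscopy Image Storage SOP class.
is_wsi = ds.SOPClassUID == "1.2.840.10008.5.1.4.1.1.77.1.6"

# Supplement 122: specimen identification and description.
container_id = ds.get("ContainerIdentifier", "<absent>")
for item in ds.get("SpecimenDescriptionSequence", []):
    print("specimen:", item.get("SpecimenIdentifier", "<absent>"))

if is_wsi:
    print("pixel matrix:", ds.TotalPixelMatrixRows, "x", ds.TotalPixelMatrixColumns)
print("container:", container_id)
```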

  10. The anatomical diaspora: evidence of early American anatomical traditions in North Dakota.

    Science.gov (United States)

    Stubblefield, Phoebe R

    2011-09-01

    The current focus in forensic anthropology on increasing scientific certainty in ancestry determination reinforces the need to examine the ancestry of skeletal remains used for osteology instruction. Human skeletal remains were discovered on the University of North Dakota campus in 2007. After recovery, the osteological examination resulted in a profile for a 33- to 46-year-old woman of African descent with stature ranging from 56.3 to 61.0 in. The pattern of postmortem damage indicated that the remains had been prepared for use as an anatomical teaching specimen. Review of the American history of anatomical teaching revealed a preference for Black subjects, which apparently extended to states like North Dakota despite extremely low resident populations of people of African descent. This study emphasizes the need to examine the ancestry of older teaching specimens that lack provenience, rather than assuming they are derived from typical (i.e., Indian) sources of anatomical material. © 2011 American Academy of Forensic Sciences.

  11. An analysis of tools for automatic software development and automatic code generation

    Directory of Open Access Journals (Sweden)

    Viviana Yarel Rosales-Morales

    2015-01-01

    Full Text Available Software development is an important area of software engineering, and for this reason techniques, approaches and methods have emerged that allow its automation. This paper presents an analysis of tools for automatic software development and automatic source code generation, with the aim of evaluating them and determining whether or not they fulfill a set of characteristics and functionalities in terms of quality. These characteristics include efficacy, productivity, safety and satisfaction, all assessed through a qualitative and quantitative evaluation. The tools are (1) CASE tools, (2) frameworks and (3) integrated development environments (IDEs). The evaluation was carried out in order to measure not only usability but also the support they provide for automatic software development and automatic source code generation. The aim of this work is to provide a methodology and a brief review of the most important works in this area, in order to identify their main characteristics and to present a comparative evaluation in qualitative and quantitative terms, thereby providing the information a software developer needs when deciding which tools may be useful.
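
    The paper does not publish its scoring formula, so the following sketch is purely illustrative: a weighted sum over the four quality characteristics named in the abstract (efficacy, productivity, safety, satisfaction) shows one way such a quantitative comparison of the three tool categories could be tallied. All weights and scores below are invented.

```python
# Illustrative only: invented weights and scores for a weighted-sum
# comparison of the three tool categories the paper evaluates.
WEIGHTS = {"efficacy": 0.3, "productivity": 0.3, "safety": 0.2, "satisfaction": 0.2}

tools = {
    "CASE tool": {"efficacy": 4, "productivity": 3, "safety": 4, "satisfaction": 3},
    "framework": {"efficacy": 3, "productivity": 4, "safety": 3, "satisfaction": 4},
    "IDE":       {"efficacy": 4, "productivity": 4, "safety": 3, "satisfaction": 4},
}

for name, scores in tools.items():
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    print(f"{name}: {total:.2f} / 4.00")
```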

  12. Human-competitive automatic topic indexing

    CERN Document Server

    Medelyan, Olena

    2009-01-01

    Topic indexing is the task of identifying the main topics covered by a document. These are useful for many purposes: as subject headings in libraries, as keywords in academic publications and as tags on the web. Knowing a document’s topics helps people judge its relevance quickly. However, assigning topics manually is labor intensive. This thesis shows how to generate them automatically in a way that competes with human performance. Three kinds of indexing are investigated: term assignment, a task commonly performed by librarians, who select topics from a controlled vocabulary; tagging, a popular activity of web users, who choose topics freely; and a new method of keyphrase extraction, where topics are equated to Wikipedia article names. A general two-stage algorithm is introduced that first selects candidate topics and then ranks them by significance based on their properties. These properties draw on statistical, semantic, domain-specific and encyclopedic knowledge. They are combined using a machine learn...
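
    The two-stage shape of the algorithm, candidate selection followed by property-based ranking, can be sketched as a toy. The thesis combines much richer statistical, semantic, domain-specific and encyclopedic features with machine learning; the features and weights below (term frequency plus first-occurrence position) are invented stand-ins.

```python
from collections import Counter
import re

# Toy two-stage topic indexer: (1) select candidate terms, (2) rank them
# by simple properties. Stopword list and scoring are invented.

def candidates(text: str) -> list[str]:
    words = re.findall(r"[a-z]+", text.lower())
    stop = {"the", "of", "a", "and", "to", "in", "is", "for", "as"}
    return [w for w in words if w not in stop and len(w) > 3]

def rank(text: str, top_n: int = 3) -> list[tuple[str, float]]:
    words = candidates(text)
    freq = Counter(words)
    scores = {}
    for w in freq:
        first_pos = words.index(w) / max(len(words), 1)  # earlier = better
        scores[w] = freq[w] + (1.0 - first_pos)          # frequency + position
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

doc = ("Topic indexing identifies the main topics of a document. "
       "Topics serve as subject headings, keywords, and tags.")
print(rank(doc))
```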

  13. Automatisms: bridging clinical neurology with criminal law.

    Science.gov (United States)

    Rolnick, Joshua; Parvizi, Josef

    2011-03-01

    The law, like neurology, grapples with the relationship between disease states and behavior. Sometimes, the two disciplines share the same terminology, such as automatism. In law, the "automatism defense" is a claim that action was involuntary or performed while unconscious. Someone charged with a serious crime can acknowledge committing the act and yet may go free if, relying on the expert testimony of clinicians, the court determines that the act of crime was committed in a state of automatism. In this review, we explore the relationship between the use of automatism in the legal and clinical literature. We close by addressing several issues raised by the automatism defense: semantic ambiguity surrounding the term automatism, the presence or absence of consciousness during automatisms, and the methodological obstacles that have hindered the study of cognition during automatisms. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Towards the Automatic Generation of Virtual Presenter Agents

    NARCIS (Netherlands)

    Nijholt, Antinus; Cohen, E.

    There are many ways to present information to visitors and users of 2D and 3D interface environments. In these virtual environments we can provide visitors with simulations of real environments, including simulations of presenters in such environments (a lecturer, a sales agent, a receptionist, a

  15. automatic generation of root locus plots for linear time invariant

    African Journals Online (AJOL)

    Design and analysis of control systems often become difficult due to the complexity of the system model and the design ... theory, it has equally been applied to classical .... open loop poles. Available algorithms for sketching this root locus can be categorized as follows: i. Direct Methods: These are the algorithms in which.
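
    A minimal sketch of the direct numerical approach the snippet alludes to: for each value of the gain K, the closed-loop poles are the roots of den(s) + K*num(s) = 0, which can be computed with numpy. The example open-loop transfer function G(s) = 1 / (s(s+2)(s+4)) is invented.

```python
import numpy as np

# Root locus by direct computation: sweep the gain K and solve
# den(s) + K*num(s) = 0 for the closed-loop poles.
num = np.array([1.0])                 # numerator of G(s)
den = np.array([1.0, 6.0, 8.0, 0.0])  # s^3 + 6s^2 + 8s  (poles at 0, -2, -4)

for K in np.linspace(0.0, 60.0, 7):
    # Pad the numerator so both polynomials have equal length before adding.
    padded_num = np.concatenate([np.zeros(len(den) - len(num)), num])
    poles = np.roots(den + K * padded_num)
    print(f"K={K:5.1f}  poles={np.round(poles, 3)}")
```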

  16. Automatic generation of application specific FPGA multicore accelerators

    DEFF Research Database (Denmark)

    Hindborg, Andreas Erik; Schleuniger, Pascal; Jensen, Nicklas Bo

    2014-01-01

    High performance computing systems make increasing use of hardware accelerators to improve performance and power properties. For large high-performance FPGAs to be successfully integrated in such computing systems, methods to raise the abstraction level of FPGA programming are required... to identify optimal performance-energy trade-off points for a multicore-based FPGA accelerator.

  17. Automatic Parcellation of Brain Images Using Parametric Generative Models

    OpenAIRE

    Puonti, Oula

    2012-01-01

    Abstract only. The paper copy of the whole thesis is available for reading room use at the Helsinki University Library. Search the HELKA online catalog (http://www.helsinki.fi/helka/index.htm).

  18. Model-Driven Engineering: Automatic Code Generation and Beyond

    Science.gov (United States)

    2015-03-01

    early in the design process, as discussed by Bergey and Jones [Bergey 2013]; however, the evaluation scope and criteria may need to be expanded to... [Bergey 2013] Bergey, John & Jones, Larry. "Architecture

  19. Techniques for Automatically Generating Biographical Summaries from News Articles

    Science.gov (United States)

    2007-09-01

  20. Automatically generated acceptance test: A software reliability experiment

    Science.gov (United States)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.
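
    The idea of an acceptance test based solely on empirical internal-state behavior can be sketched as follows. This is not the paper's implementation: a toy program exposes its internal state, per-step min/max bounds are learned from trusted reference runs, and a new run is accepted only if its state stays inside that envelope.

```python
# Sketch of the idea only: learn empirical bounds on a program's internal
# state from reference runs, then flag executions that leave those bounds.

def instrumented_run(x: float) -> list[float]:
    """Hypothetical program that exposes its internal state after each step."""
    state = [x]
    state.append(state[-1] * 2.0)      # step 1
    state.append(state[-1] - x / 2.0)  # step 2
    return state

# Phase 1: record per-step min/max over trusted reference inputs.
reference_inputs = [0.5 * i for i in range(1, 21)]
traces = [instrumented_run(x) for x in reference_inputs]
bounds = [(min(col), max(col)) for col in zip(*traces)]

# Phase 2: the acceptance test checks a new run against the learned envelope.
def acceptance_test(x: float) -> bool:
    return all(lo <= v <= hi for v, (lo, hi) in zip(instrumented_run(x), bounds))

print(acceptance_test(5.0))    # inside the reference envelope -> True
print(acceptance_test(100.0))  # far outside -> False (flagged as an error)
```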