WorldWideScience

Sample records for assisting automatic generation

  1. Generating Customized Verifiers for Automatically Generated Code

    Science.gov (United States)

    Denney, Ewen; Fischer, Bernd

    2008-01-01

    Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.

  2. Automatic generation of tourist brochures

    KAUST Repository

    Birsak, Michael

    2014-05-01

    We present a novel framework for the automatic generation of tourist brochures that include routing instructions and additional information presented in the form of so-called detail lenses. The first contribution of this paper is the automatic creation of layouts for the brochures. Our approach is based on the minimization of an energy function that combines multiple goals: positioning of the lenses as close as possible to the corresponding region shown in an overview map, keeping the number of lenses low, and an efficient numbering of the lenses. The second contribution is a route-aware simplification of the graph of streets used for traveling between the points of interest (POIs). This is done by reducing the graph consisting of all shortest paths through the minimization of an energy function. The output is a subset of street segments that enable traveling between all the POIs without considerable detours, while at the same time guaranteeing a clutter-free visualization. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
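
    A minimal sketch can make the energy idea above concrete. The code below is not the authors' implementation: it assumes a toy representation in which each lens and its map region are single 2D points, and the weights are invented. It only shows how a distance-to-region term and a lens-count penalty might be combined into one objective and greedily reduced.

        import math

        def layout_energy(lens_positions, poi_positions, w_dist=1.0, w_count=5.0):
            """Combine distance-to-POI and lens-count penalties into one score."""
            dist_term = sum(math.dist(l, p) for l, p in zip(lens_positions, poi_positions))
            return w_dist * dist_term + w_count * len(lens_positions)

        def improve(lens_positions, poi_positions, step=0.1, iters=100):
            """Greedily nudge each lens toward its region while the energy decreases."""
            lenses = [list(l) for l in lens_positions]
            for _ in range(iters):
                for l, p in zip(lenses, poi_positions):
                    trial = [l[0] + step * (p[0] - l[0]), l[1] + step * (p[1] - l[1])]
                    if layout_energy([trial], [p]) < layout_energy([l], [p]):
                        l[:] = trial
            return lenses

        print(improve([(0.0, 0.0)], [(10.0, 5.0)])[0])   # lens drifts toward its POI region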

  3. Traceability Through Automatic Program Generation

    Science.gov (United States)

    Richardson, Julian; Green, Jeff

    2003-01-01

    Program synthesis is a technique for automatically deriving programs from specifications of their behavior. One of the arguments made in favour of program synthesis is that it allows one to trace from the specification to the program. One way in which traceability information can be derived is to augment the program synthesis system so that manipulations and calculations it carries out during the synthesis process are annotated with information on what the manipulations and calculations were and why they were made. This information is then accumulated throughout the synthesis process, at the end of which, every artifact produced by the synthesis is annotated with a complete history relating it to every other artifact (including the source specification) which influenced its construction. This approach requires modification of the entire synthesis system - which is labor-intensive and hard to do without influencing its behavior. In this paper, we introduce a novel, lightweight technique for deriving traceability from a program specification to the corresponding synthesized code. Once a program has been successfully synthesized from a specification, small changes are systematically made to the specification and the effects on the synthesized program observed. We have partially automated the technique and applied it in an experiment to one of our program synthesis systems, AUTOFILTER, and to the GNU C compiler, GCC. The results are promising: 1. Manual inspection of the results indicates that most of the connections derived from the source (a specification in the case of AUTOFILTER, C source code in the case of GCC) to its generated target (C source code in the case of AUTOFILTER, assembly language code in the case of GCC) are correct. 2. Around half of the lines in the target can be traced to at least one line of the source. 3. Small changes in the source often induce only small changes in the target.
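
    The perturbation technique lends itself to a compact illustration. In the sketch below, synthesize() is a hypothetical stand-in for a generator such as AUTOFILTER or GCC, and the perturbation is a trivial text change; the function maps each source line to the target lines that differ when that source line is perturbed.

        def trace_links(spec_lines, synthesize, perturb=lambda s: s + "  /* perturbed */"):
            """Map each source line index to the target line indices that change under perturbation."""
            baseline = synthesize(spec_lines)
            links = {}
            for i, line in enumerate(spec_lines):
                mutated = spec_lines[:i] + [perturb(line)] + spec_lines[i + 1:]
                target = synthesize(mutated)
                links[i] = [j for j, (a, b) in enumerate(zip(baseline, target)) if a != b]
            return links

        # Dummy "generator" emitting one target line per source line, for illustration only.
        print(trace_links(["a = 1", "b = a + 2"], lambda src: ["LOAD " + s for s in src]))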

  4. Automatic Chinese Factual Question Generation

    Science.gov (United States)

    Liu, Ming; Rus, Vasile; Liu, Li

    2017-01-01

    Question generation is an emerging research area of artificial intelligence in education. Question authoring tools are important in educational technologies, e.g., intelligent tutoring systems, as well as in dialogue systems. Approaches to generate factual questions, i.e., questions that have concrete answers, mainly make use of the syntactical…

  5. Automatic generation of multilingual sports summaries

    OpenAIRE

    Hasan, Fahim Muhammad

    2011-01-01

    Natural Language Generation is a subfield of Natural Language Processing, which is concerned with automatically creating human readable text from non-linguistic forms of information. A template-based approach to Natural Language Generation utilizes base formats for different types of sentences, which are subsequently transformed to create the final readable forms of the output. In this thesis, we investigate the suitability of a template-based approach to multilingual Natural Language Generat...
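
    As a flavor of the template-based approach (not taken from the thesis itself), a per-language sentence template can be filled from structured match data; the templates, the languages chosen, and the team names below are invented for illustration.

        TEMPLATES = {
            "en": "{winner} beat {loser} {ws}-{ls} in front of {crowd} spectators.",
            "no": "{winner} slo {loser} {ws}-{ls} foran {crowd} tilskuere.",
        }

        def summarize(match, lang="en"):
            """Fill the language-specific template with fields from the match record."""
            return TEMPLATES[lang].format(**match)

        match = {"winner": "Rosenborg", "loser": "Brann", "ws": 3, "ls": 1, "crowd": 18000}
        print(summarize(match, "en"))
        print(summarize(match, "no"))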

  6. Automatic Thesaurus Generation for Chinese Documents.

    Science.gov (United States)

    Tseng, Yuen-Hsien

    2002-01-01

    Reports an approach to automatic thesaurus construction for Chinese documents. Presents an effective Chinese keyword extraction algorithm. Compared to previous studies, this method speeds up the thesaurus generation process drastically. It also achieves a similar percentage level of term relatedness. Includes three tables and four figures.…

  7. Automatic program generation from specifications using PROLOG

    Science.gov (United States)

    Pelin, Alex; Morrow, Paul

    1988-01-01

    An automatic program generator which creates PROLOG programs from input/output specifications is described. The generator takes as input descriptions of the input and output data types, a set of transformations and the input/output relation. Abstract data types are used as models for data. They are defined as sets of terms satisfying a system of equations. The tests, the transformations and the input/output relation are also specified by equations.

  8. Automatic code generator for higher order integrators

    Science.gov (United States)

    Mushtaq, Asif; Olaussen, Kåre

    2014-05-01

    Some explicit algorithms for higher order symplectic integration of a large class of Hamilton's equations have recently been discussed by Mushtaq et al. Here we present a Python program for automatic numerical implementation of these algorithms for a given Hamiltonian, both for double precision and multiprecision computations. We provide examples of how to use this program, and illustrate behavior of both the code generator and the generated solver module(s).
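
    The sketch below only conveys the flavor of emitting a solver as text from user-supplied Hamiltonian gradients: it generates a first-order symplectic-Euler stepper, not the authors' higher-order schemes, and all names are invented.

        SOLVER_TEMPLATE = '''\
        def step(q, p, dt):
            """One symplectic-Euler step with dH/dq = {dHdq} and dH/dp = {dHdp}."""
            p = p - dt * ({dHdq})
            q = q + dt * ({dHdp})
            return q, p
        '''

        def generate_solver(dHdq, dHdp):
            """Return Python source text for a stepper specialised to the given gradients."""
            return SOLVER_TEMPLATE.format(dHdq=dHdq, dHdp=dHdp)

        # Harmonic oscillator H = (p**2 + q**2) / 2, so dH/dq = q and dH/dp = p.
        source = generate_solver("q", "p")
        namespace = {}
        exec(source, namespace)              # load the generated stepper
        print(namespace["step"](1.0, 0.0, 0.01))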

  9. The automatic electromagnetic field generating system

    Science.gov (United States)

    Audone, B.; Gerbi, G.

    1982-07-01

    The technical study and the design approaches adopted for the definition of the automatic electromagnetic field generating system (AEFGS) dedicated to EMC susceptibility testing are presented. The AEFGS covers the frequency range 10 kHz to 40 GHz and operates successfully in the two EMC shielded chambers at ESTEC. The performance of the generators/amplifiers subsystems, antennas selection, field amplitude and susceptibility feedback and monitoring systems is described. System control modes which guarantee the AEFGS full operability under different test conditions are discussed. Advantages of automation of susceptibility testing include increased measurement accuracy and testing cost reduction.

  10. Automatic generation of combinatorial test data

    CERN Document Server

    Zhang, Jian; Ma, Feifei

    2014-01-01

    This book reviews the state-of-the-art in combinatorial testing, with particular emphasis on the automatic generation of test data. It describes the most commonly used approaches in this area - including algebraic construction, greedy methods, evolutionary computation, constraint solving and optimization - and explains major algorithms with examples. In addition, the book lists a number of test generation tools, as well as benchmarks and applications. Addressing a multidisciplinary topic, it will be of particular interest to researchers and professionals in the areas of software testing, combi
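
    A minimal sketch of the greedy flavor of 2-way (pairwise) generation discussed in the book; the parameters and values are invented, and a real tool would construct candidate rows far more efficiently than the brute-force search used here.

        from itertools import combinations, product

        def pairwise_tests(params):
            """params: dict name -> list of values. Return rows covering all value pairs."""
            names = list(params)
            uncovered = {((a, va), (b, vb))
                         for a, b in combinations(names, 2)
                         for va, vb in product(params[a], params[b])}
            tests = []
            while uncovered:
                best_row, best_gain = None, -1
                for candidate in product(*(params[n] for n in names)):
                    row = dict(zip(names, candidate))
                    gain = sum(1 for (a, va), (b, vb) in uncovered
                               if row[a] == va and row[b] == vb)
                    if gain > best_gain:
                        best_row, best_gain = row, gain
                tests.append(best_row)
                uncovered = {pair for pair in uncovered
                             if not (best_row[pair[0][0]] == pair[0][1]
                                     and best_row[pair[1][0]] == pair[1][1])}
            return tests

        print(pairwise_tests({"os": ["linux", "win"], "db": ["pg", "mysql"], "ui": ["cli", "web"]}))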

  11. Multiblock grid generation with automatic zoning

    Science.gov (United States)

    Eiseman, Peter R.

    1995-01-01

    An overview will be given for multiblock grid generation with automatic zoning. We shall explore the many advantages and benefits of this exciting technology and will also see how to apply it to a number of interesting cases. The technology is available in the form of a commercial code, GridPro(registered trademark)/az3000. This code takes surface geometry definitions and patterns of points as its primary input and produces high quality grids as its output. Before we embark upon our exploration, we shall first give a brief background of the environment in which this technology fits.

  12. Automatic Quiz Generation for the Elderly.

    Science.gov (United States)

    Chen, Weiqin; Samuelsen, Jeanette

    2015-01-01

    According to the literature, ageing causes declines in sensory, perceptual, motor and cognitive abilities. The combination of reduced vision, hearing, memory and mobility contributes to isolation and depression. We argue that memory games have potential for enhancing the cognitive ability of the elderly and improving their life quality. In our earlier research, we designed tangible tabletop games to help the elderly remember and talk about the past. In this paper, we report on our further research in the automatic generation of quizzes based on Wikipedia and other online resources for entertainment and memory training of the elderly.
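
    A toy illustration of the quiz idea: blank out an entity in a factual sentence and offer distractor options. The sentence stands in for content that would be retrieved from Wikipedia; no real lookup is performed, and the distractor names are invented.

        def make_quiz_item(sentence, answer, distractors):
            """Turn a factual sentence into a fill-in-the-blank question with options."""
            question = sentence.replace(answer, "_____")
            options = sorted(distractors + [answer])
            return {"question": question, "options": options, "answer": answer}

        fact = "Edvard Grieg composed the Peer Gynt suites."
        print(make_quiz_item(fact, "Edvard Grieg", ["Jean Sibelius", "Carl Nielsen"]))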

  13. Automatic Metadata Generation Through Analysis of Narration Within Instructional Videos.

    Science.gov (United States)

    Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming

    2015-09-01

    Current activity recognition based assistive living solutions have adopted relatively rigid models of inhabitant activities. These solutions have some deficiencies associated with the use of these models. To address this, a goal-oriented solution has been proposed. In a goal-oriented solution, goal models offer a method of flexibly modelling inhabitant activity. The flexibility of these goal models can dynamically produce a large number of varying action plans that may be used to guide inhabitants. In order to provide illustrative, video-based instruction for these numerous action plans, a number of video clips would need to be associated with each variation. To address this, rich metadata may be used to automatically match appropriate video clips from a video repository to each specific, dynamically generated, activity plan. This study introduces a mechanism for automatically generating suitable rich metadata representing actions depicted within video clips to facilitate such video matching. The performance of this mechanism was evaluated using eighteen video files; during this evaluation, metadata was automatically generated with a high level of accuracy.

  14. Automatic Code Generation for Instrument Flight Software

    Science.gov (United States)

    Wagstaff, Kiri L.; Benowitz, Edward; Byrne, D. J.; Peters, Ken; Watney, Garth

    2008-01-01

    Automatic code generation can be used to convert software state diagrams into executable code, enabling a model- based approach to software design and development. The primary benefits of this process are reduced development time and continuous consistency between the system design (statechart) and its implementation. We used model-based design and code generation to produce software for the Electra UHF radios that is functionally equivalent to software that will be used by the Mars Reconnaissance Orbiter (MRO) and the Mars Science Laboratory to communicate with each other. The resulting software passed all of the relevant MRO flight software tests, and the project provides a useful case study for future work in model-based software development for flight software systems.
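
    The statechart-to-code idea can be pictured as a transition table compiled into a dispatcher. The radio-like states and events below are invented; the actual flight software was produced by model-based tooling, not by this sketch.

        TRANSITIONS = {
            ("IDLE", "power_on"): "READY",
            ("READY", "start_tx"): "TRANSMITTING",
            ("TRANSMITTING", "stop_tx"): "READY",
            ("READY", "power_off"): "IDLE",
        }

        def make_machine(transitions, initial):
            """Return an event handler closed over the current state."""
            state = {"current": initial}
            def handle(event):
                key = (state["current"], event)
                if key in transitions:
                    state["current"] = transitions[key]
                return state["current"]
            return handle

        radio = make_machine(TRANSITIONS, "IDLE")
        print(radio("power_on"), radio("start_tx"), radio("stop_tx"))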

  15. Automatic Testcase Generation for Flight Software

    Science.gov (United States)

    Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.

    2008-01-01

    The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to
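
    The blackbox idea can be illustrated by exhaustively expanding a tiny grammar up to a bound. The grammar below is invented and far simpler than SCL, and the enumeration is done directly rather than with JPF.

        GRAMMAR = {
            "SCRIPT": [["CMD"], ["CMD", "SCRIPT"]],
            "CMD": [["set", "VAR", "VAL"], ["get", "VAR"]],
            "VAR": [["mode"], ["rate"]],
            "VAL": [["0"], ["1"]],
        }

        def expand(symbol, depth):
            """Return all token sequences derivable from symbol within a depth bound."""
            if symbol not in GRAMMAR:
                return [[symbol]]
            if depth == 0:
                return []
            results = []
            for production in GRAMMAR[symbol]:
                partials = [[]]
                for part in production:
                    partials = [p + s for p in partials for s in expand(part, depth - 1)]
                results.extend(partials)
            return results

        scripts = [" ".join(tokens) for tokens in expand("SCRIPT", 4)]
        print(len(scripts), scripts[:3])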

  16. Evaluation of automatic vacuum- assisted compaction solutions

    Directory of Open Access Journals (Sweden)

    M. Brzeziński

    2011-01-01

    Currently, companies such as DiSA, KUENKEL WAGNER, HAFLINGER, HEINRICH WAGNER SINTO, HUNTER, SAVELLI and TECHNICAL play a significant role on the mould-making machine market. These companies manufacture various machines and installations applied in foundry engineering. Automatic foundry machines for the compaction of green sand play the major role in the mechanisation and automation of mould making. The operation of automatic machines is based on static and dynamic methods of compacting the green sand. A method that is gaining importance is compaction using the energy of air pressure; it serves as the initial stage or as a supporting process for compacting the green sand. However, in automatic moulding machines using this method it is essential to apply additional compaction of the sand in order to achieve the final parameters of the mould. In the constructional solutions of the machines there is an additional division concerning the method of putting the sand into the mould box: it distinguishes transport of the sand with simultaneous compaction from placing the sand without pre-compaction. As the solutions of the major manufacturers are often applied in various foundries, the authors present their own evaluation process, confirmed by their own research and an independent analysis of the producers' solutions.

  17. A System for Automatically Generating Scheduling Heuristics

    Science.gov (United States)

    Morris, Robert

    1996-01-01

    The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm that automatically generates heuristics for selecting a schedule. The particular application selected for applying this method solves the problem of scheduling telescope observations, and is called the Associate Principal Astronomer. The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints expressed as an objective function established by an astronomer-user.
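
    The greedy heuristic search described above reduces to: at each time slot, pick the feasible request that scores best against the soft-constraint objective. The request fields, feasibility test and scoring function in this sketch are invented placeholders, not the APA's actual constraints.

        def greedy_schedule(requests, slots, feasible, score):
            """requests: list of dicts; feasible(req, slot) -> bool; score(req, slot) -> float."""
            schedule, remaining = {}, list(requests)
            for slot in slots:
                candidates = [r for r in remaining if feasible(r, slot)]
                if not candidates:
                    continue
                best = max(candidates, key=lambda r: score(r, slot))
                schedule[slot] = best["name"]
                remaining.remove(best)
            return schedule

        requests = [{"name": "M31", "earliest": 0}, {"name": "Vega", "earliest": 2}]
        print(greedy_schedule(requests, slots=range(4),
                              feasible=lambda r, s: s >= r["earliest"],
                              score=lambda r, s: -abs(s - r["earliest"])))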

  18. Automatic term list generation for entity tagging.

    Science.gov (United States)

    Sandler, Ted; Schein, Andrew I; Ungar, Lyle H

    2006-03-15

    Many entity taggers and information extraction systems make use of lists of terms of entities such as people, places, genes or chemicals. These lists have traditionally been constructed manually. We show that distributional clustering methods which group words based on the contexts that they appear in, including neighboring words and syntactic relations extracted using a shallow parser, can be used to aid in the construction of term lists. Experiments on learning lists of terms and using them as part of a gene tagger on a corpus of abstracts from the scientific literature show that our automatically generated term lists significantly boost the precision of a state-of-the-art CRF-based gene tagger to a degree that is competitive with using hand curated lists and boosts recall to a degree that surpasses that of the hand-curated lists. Our results also show that these distributional clustering methods do not generate lists as helpful as those generated by supervised techniques, but that they can be used to complement supervised techniques so as to obtain better performance. The code used in this paper is available from http://www.cis.upenn.edu/datamining/software_dist/autoterm/

  19. Automatic Generation of Minimal Cut Sets

    Directory of Open Access Journals (Sweden)

    Sentot Kromodimoeljo

    2015-06-01

    A cut set is a collection of component failure modes that could lead to a system failure. Cut Set Analysis (CSA) is applied to critical systems to identify and rank system vulnerabilities at design time. Model checking tools have been used to automate the generation of minimal cut sets but are generally based on checking reachability of system failure states. This paper describes a new approach to CSA using a Linear Temporal Logic (LTL) model checker called BT Analyser that supports the generation of multiple counterexamples. The approach enables a broader class of system failures to be analysed, by generalising from failure state formulae to failure behaviours expressed in LTL. The traditional approach to CSA using model checking requires the model or system failure to be modified, usually by hand, to eliminate already-discovered cut sets, and the model checker to be rerun, at each step. By contrast, the new approach works incrementally and fully automatically, thereby removing the tedious and error-prone manual process and resulting in significantly reduced computation time. This in turn enables larger models to be checked. Two different strategies for using BT Analyser for CSA are presented. There is generally no single best strategy for model checking: their relative efficiency depends on the model and property being analysed. Comparative results are given for the A320 hydraulics case study in the Behavior Tree modelling language.

  20. Computer Assisted Parallel Program Generation

    CERN Document Server

    Kawata, Shigeo

    2015-01-01

    Parallel computation is widely employed in scientific research, engineering activities and product development. Parallel program writing itself is not always a simple task, depending on the problem solved. Large-scale scientific computing, huge data analyses and precise visualizations, for example, would require parallel computations, and parallel computing needs parallelization techniques. In this chapter a parallel program generation support is discussed, and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer assisted problem solving is one of the key methods to promote innovations in science and engineering, and contributes to enriching our society and our life toward a programming-free environment in computing science. Problem solving environments (PSE) research activities started to enhance the programming power in the 1970's. The P-NCAS is one of the PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

  1. A program for assisting automatic generation control of the ELETRONORTE using artificial neural network; Um programa para assistencia ao controle automatico de geracao da Eletronorte usando rede neuronal artificial

    Energy Technology Data Exchange (ETDEWEB)

    Brito Filho, Pedro Rodrigues de; Nascimento Garcez, Jurandyr do [Para Univ., Belem, PA (Brazil). Centro Tecnologico; Charone Junior, Wady [Centrais Eletricas do Nordeste do Brasil S.A. (ELETRONORTE), Belem, PA (Brazil)

    1994-12-31

    This work presents an application of artificial neural networks as a support to decision making in the automatic generation control (AGC) of ELETRONORTE. It uses software to assist real-time decision making in the AGC. (author) 2 refs., 6 figs., 1 tab.

  2. Automatic program generation: future of software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, J.H.

    1979-01-01

    At this moment software development is still more of an art than an engineering discipline. Each piece of software is lovingly engineered, nurtured, and presented to the world as a tribute to the writer's skill. When will this change? When will the craftsmanship be removed and the programs be turned out like so many automobiles from an assembly line? Sooner or later it will happen: economic necessities will demand it. With the advent of cheap microcomputers and ever more powerful supercomputers doubling capacity, much more software must be produced. The choices are to double the number of programmers, double the efficiency of each programmer, or find a way to produce the needed software automatically. Producing software automatically is the only logical choice. How will automatic programming come about? Some of the preliminary actions which need to be done and are being done are to encourage programmer plagiarism of existing software through public library mechanisms, produce well understood packages such as compilers automatically, develop languages capable of producing software as output, and learn enough about the whole process of programming to be able to automate it. Clearly, the emphasis must not be on efficiency or size, since ever larger and faster hardware is coming.

  3. Automatic generation control of interconnected power system with ...

    African Journals Online (AJOL)

    In this paper, automatic generation control (AGC) of two area interconnected power system having diverse sources of power generation is studied. A two area power system comprises power generations from hydro, thermal and gas sources in area-1 and power generations from hydro and thermal sources in area-2. All the ...

  4. Automatic generation control of interconnected power system with ...

    African Journals Online (AJOL)

    Abstract. In this paper, automatic generation control (AGC) of two area interconnected power system having diverse sources of power generation is studied. A two area power system comprises power generations from hydro, thermal and gas sources in area-1 and power generations from hydro and thermal sources in ...

  5. System for Automatic Generation of Examination Papers in Discrete Mathematics

    Science.gov (United States)

    Fridenfalk, Mikael

    2013-01-01

    A system was developed for automatic generation of problems and solutions for examinations in a university distance course in discrete mathematics and tested in a pilot experiment involving 200 students. Considering the success of such systems in the past, particularly including automatic assessment, it should not take long before such systems are…

  6. Design of automatic thruster assisted mooring systems for ships

    Directory of Open Access Journals (Sweden)

    Jan P. Strand

    1998-04-01

    This paper addresses the mathematical modelling and controller design of an automatic thruster assisted position mooring system. Such control systems are applied to anchored floating production, offloading and storage vessels and semi-subs. The controller is designed using model based control with an LQG feedback controller in conjunction with a Kalman filter. In addition to the environmental loads, the controller design accounts for the mooring forces acting on the vessel. This is reflected in the model structure and in the inclusion of new functionality.

  7. Automatic Grasp Generation and Improvement for Industrial Bin-Picking

    DEFF Research Database (Denmark)

    Kraft, Dirk; Ellekilde, Lars-Peter; Rytz, Jimmy Alison

    2014-01-01

    This paper presents work on automatic grasp generation and grasp learning for reducing the manual setup time and increase grasp success rates within bin-picking applications. We propose an approach that is able to generate good grasps automatically using a dynamic grasp simulator, a newly developed...... and achieve comparable results and that our learning approach can improve system performance significantly. Automatic bin-picking is an important industrial process that can lead to significant savings and potentially keep production in countries with high labour cost rather than outsourcing it. The presented...... work allows to minimize cycle time as well as setup cost, which are essential factors in automatic bin-picking. It therefore leads to a wider applicability of bin-picking in industry....

  8. Polygraph: Automatically Generating Signatures for Polymorphic Worms

    OpenAIRE

    Newsome, J.; Karp, B.; Song, D.

    2005-01-01

    It is widely believed that content-signature-based intrusion detection systems (IDSes) are easily evaded by polymorphic worms, which vary their payload on every infection attempt. In this paper, we present Polygraph, a signature generation system that successfully produces signatures that match polymorphic worms. Polygraph generates signatures that consist of multiple disjoint content substrings. In doing so, Polygraph leverages our insight that for a real-world exploit to function properly, ...
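
    A rough sketch of the "multiple disjoint content substrings" idea, greatly simplified relative to Polygraph: take substrings common to all captured payloads as a conjunction signature and match new traffic against it. The payloads below are invented examples.

        def common_tokens(payloads, min_len=4):
            """Return substrings of the first payload that occur in every payload."""
            first = payloads[0]
            tokens = set()
            for i in range(len(first) - min_len + 1):
                sub = first[i:i + min_len]
                if all(sub in p for p in payloads[1:]):
                    tokens.add(sub)
            return tokens

        def matches(payload, signature):
            return all(tok in payload for tok in signature)

        samples = [b"GET /vuln.cgi?id=AAAA HTTP/1.1", b"GET /vuln.cgi?id=BBBB HTTP/1.1"]
        sig = common_tokens(samples)
        print(matches(b"GET /vuln.cgi?id=CCCC HTTP/1.1", sig))   # True: shared structure matches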

  9. Automatic generation of matter-of-opinion video documentaries

    NARCIS (Netherlands)

    S. Bocconi; F.-M. Nack (Frank); L. Hardman (Lynda)

    2008-01-01

    In this paper we describe a model for automatically generating video documentaries. This allows viewers to specify the subject and the point of view of the documentary to be generated. The domain is matter-of-opinion documentaries based on interviews. The model combines rhetorical

  10. Automatic Tamil lyric generation based on ontological interpretation ...

    Indian Academy of Sciences (India)

    This system proposes an n-gram based approach to automatic Tamil lyric generation, by the ontological semantic interpretation of the input scene. The approach is based on identifying the semantics conveyed in the scenario, thereby making the system understand the situation and generate lyrics accordingly. The heart of ...

  11. Automatic Control of Veno-Venous Extracorporeal Lung Assist.

    Science.gov (United States)

    Kopp, Ruedger; Bensberg, Ralf; Stollenwerk, Andre; Arens, Jutta; Grottke, Oliver; Walter, Marian; Rossaint, Rolf

    2016-10-01

    Veno-venous extracorporeal lung assist (ECLA) can provide sufficient gas exchange even in the most severe cases of acute respiratory distress syndrome. Commercially available systems are manually controlled, although an automatically controlled ECLA could allow individualized and continuous adaptation to clinical requirements. Therefore, we developed a demonstrator with an integrated control algorithm to keep continuously measured peripheral oxygen saturation and partial pressure of carbon dioxide constant by automatically adjusting extracorporeal blood and gas flow. The "SmartECLA" system was tested in six animal experiments with increasing pulmonary hypoventilation and a hypoxic inspiratory gas mixture to simulate progressive acute respiratory failure. During a cumulative evaluation time of 32 h for all experiments, automatic ECLA control resulted in a peripheral oxygen saturation ≥90% for 98% of the time, with the lowest value of 82% for 15 s. Partial pressure of venous carbon dioxide was between 40 and 49 mm Hg for 97% of the time. With decreasing inspiratory oxygen concentration, extracorporeal oxygen uptake increased from 68 ± 25 to 154 ± 34 mL/min, and decreasing respiratory rate resulted in increasing extracorporeal carbon dioxide elimination from 71 ± 37 to 92 ± 37 mL/min. The concept could be demonstrated for this novel automatically controlled veno-venous ECLA circuit. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  12. Automatic Structure-Based Code Generation from Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Westergaard, Michael

    2010-01-01

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs...... (PP-CPNs) which is a subclass of CPNs equipped with an explicit separation of process control flow, message passing, and access to shared and local data. We show how PP-CPNs caters for a four phase structure-based automatic code generation process directed by the control flow of processes....... The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF)....

  13. Automatically generated code for relativistic inhomogeneous cosmologies

    Science.gov (United States)

    Bentivegna, Eloisa

    2017-02-01

    The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated code generation capabilities provided by its component Kranc.

  14. Automatic tetrahedral mesh generation for impact computations

    Science.gov (United States)

    Kraus, E. I.; Shabalin, I. I.; Shabalin, T. I.

    2017-10-01

    Explicit time integration schemes for the simulation of dynamic processes in solids put strong demands on the meshes. A technique for mesh generation in complicated three-dimensional bodies is developed. It consists of immersion of the body in a domain with a high quality mesh. Then the knots lying outside of the body are removed and the knot distribution near the boundary is regularized by the bubble packing method. Local transformations of tetrahedra groups allow the mesh quality to be improved. Parallel execution of the algorithm is discussed.

  15. Automatic Generation of Network Protocol Gateways

    DEFF Research Database (Denmark)

    Bromberg, Yérom-David; Réveillère, Laurent; Lawall, Julia

    2009-01-01

    The emergence of networked devices in the home has made it possible to develop applications that control a variety of household functions. However, current devices communicate via a multitude of incompatible protocols, and thus gateways are needed to translate between them.  Gateway construction......, however, requires an intimate knowledge of the relevant protocols and a substantial understanding of low-level network programming, which can be a challenge for many application programmers. This paper presents a generative approach to gateway construction, z2z, based on a domain-specific language...... for describing protocol behaviors, message structures, and the gateway logic.  Z2z includes a compiler that checks essential correctness properties and produces efficient code. We have used z2z to develop a number of gateways, including SIP to RTSP, SLP to UPnP, and SMTP to SMTP via HTTP, involving a range...

  16. Automatic generation of fuzzy inference systems via unsupervised learning.

    Science.gov (United States)

    Er, Meng Joo; Zhou, Yi

    2008-12-01

    In this paper, a novel approach termed Enhanced Dynamic Self-Generated Fuzzy Q-Learning (EDSGFQL) for automatically generating Fuzzy Inference Systems (FISs) is presented. In the EDSGFQL approach, structure identification and parameter estimations of FISs are achieved via Unsupervised Learning (UL) (including Reinforcement Learning (RL)). Instead of using Supervised Learning (SL), UL clustering methods are adopted for input space clustering when generating FISs. At the same time, structure and preconditioning parts of a FIS are generated in a RL manner in that fuzzy rules are adjusted and deleted according to reinforcement signals. The proposed EDSGFQL methodologies can automatically create, delete and adjust fuzzy rules dynamically. Simulation studies on wall-following and obstacle avoidance tasks by a mobile robot show that the proposed approach is superior in generating efficient FISs.

  17. Automatic Generation of Partitioned Matrix Expressions for Matrix Operations

    Science.gov (United States)

    Fabregat-Traver, Diego; Bientinesi, Paolo

    2010-09-01

    We target the automatic generation of formally correct algorithms and routines for linear algebra operations. Given the broad variety of architectures and configurations with which scientists deal, there does not exist one algorithmic variant that is suitable for all scenarios. Therefore, we aim to generate a family of algorithmic variants to attain high-performance for a broad set of scenarios. One of the authors has previously demonstrated that automatic derivation of a family of algorithms is possible when the Partitioned Matrix Expression (PME) of the target operation is available. The PME is a recursive definition that states the relations between submatrices in the input and the output operands. In this paper we describe all the steps involved in the automatic derivation of PMEs, thus making progress towards a fully automated system.
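
    As a concrete flavor of what a PME looks like (a standard example, not reproduced from this paper), partitioning the Cholesky factorization A = L L^T into 2x2 quadrants yields recursive expressions for each output block:

        % PME of the Cholesky factorization A = L L^T (illustrative example).
        \[
        \begin{pmatrix} A_{TL} & \star \\ A_{BL} & A_{BR} \end{pmatrix}
        =
        \begin{pmatrix} L_{TL} & 0 \\ L_{BL} & L_{BR} \end{pmatrix}
        \begin{pmatrix} L_{TL}^{T} & L_{BL}^{T} \\ 0 & L_{BR}^{T} \end{pmatrix}
        \quad\Longrightarrow\quad
        \begin{cases}
        L_{TL} = \operatorname{Chol}(A_{TL}) \\
        L_{BL} = A_{BL}\, L_{TL}^{-T} \\
        L_{BR} = \operatorname{Chol}\!\left(A_{BR} - L_{BL} L_{BL}^{T}\right)
        \end{cases}
        \]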

  18. Towards Automatic Personalized Content Generation for Platform Games

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2010-01-01

    In this paper, we show that personalized levels can be automatically generated for platform games. We build on previous work, where models were derived that predicted player experience based on features of level design and on playing styles. These models are constructed using preference learning...

  19. A quick scan on possibilities for automatic metadata generation

    NARCIS (Netherlands)

    Benneker, Frank

    2006-01-01

    The Quick Scan is a report on research into useable solutions for automatic generation of metadata or parts of metadata. The aim of this study is to explore possibilities for facilitating the process of attaching metadata to learning objects. This document is aimed at developers of digital learning

  20. Automatic Definition Extraction and Crossword Generation From Spanish News Text

    Directory of Open Access Journals (Sweden)

    Jennifer Esteche

    2017-08-01

    This paper describes the design and implementation of a system that takes Spanish texts and generates crosswords (board and definitions) in a fully automatic way using definitions extracted from those texts. Our solution divides the problem in two parts: a definition extraction module that applies pattern matching implemented in Python, and a crossword generation module that uses a greedy strategy implemented in Prolog. The system achieves 73% precision and builds crosswords similar to those built by humans.

  1. Automatic Dance Generation System Considering Sign Language Information

    OpenAIRE

    Asahina, Wakana; Iwamoto, Naoya; Shum, Hubert P. H.; Morishima, Shigeo

    2016-01-01

    In recent years, thanks to the development of 3DCG animation editing tools (e.g. MikuMikuDance), a lot of 3D character dance animation movies are created by amateur users. However, it is very difficult to create choreography from scratch without any technical knowledge. Shiratori et al. [2006] produced an automatic dance generation system considering the rhythm and intensity of dance motions. However, each segment is selected randomly from a database, so the generated dance motion has no linguistic ...

  2. MeSH indexing based on automatically generated summaries.

    Science.gov (United States)

    Jimeno-Yepes, Antonio J; Plaza, Laura; Mork, James G; Aronson, Alan R; Díaz, Alberto

    2013-06-26

    MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text for the input to MTI to use in the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most

  3. Automatic control system generation for robot design validation

    Science.gov (United States)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

    The specification and drawings present a new method, system, software product and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting a robot design into a generic robotic description using a predetermined format, then generating a control system from the generic robotic description and finally updating robot design parameters of the robotic system with an analysis tool using both the generic robot description and the control system.

  4. Using automatic generation of Labanotation to protect folk dance

    Science.gov (United States)

    Wang, Jiaji; Miao, Zhenjiang; Guo, Hao; Zhou, Ziming; Wu, Hao

    2017-01-01

    Labanotation uses symbols to describe human motion and is an effective means of protecting folk dance. We use motion capture data to automatically generate Labanotation. First, we convert the motion capture data of the biovision hierarchy file into three-dimensional coordinate data. Second, we divide human motion into element movements. Finally, we analyze each movement and find the corresponding notation. Our work has been supervised by an expert in Labanotation to ensure the correctness of the results. At present, the work deals with a subset of symbols in Labanotation that correspond to several basic movements. Labanotation contains many symbols and several new symbols may be introduced for improvement in the future. We will refine our work to handle more symbols. The automatic generation of Labanotation can greatly improve the work efficiency of documenting movements. Thus, our work will significantly contribute to the protection of folk dance and other action arts.

  5. Automatic Generation of Matrix Element Derivatives for Tight Binding Models

    OpenAIRE

    Alin M. Elena; Meister, Matthias

    2005-01-01

    Tight binding (TB) models are one approach to the quantum mechanical many particle problem. An important role in TB models is played by hopping and overlap matrix elements between the orbitals on two atoms, which of course depend on the relative positions of the atoms involved. This dependence can be expressed with the help of Slater-Koster parameters, which are usually taken from tables. Recently, a way to generate these tables automatically was published. If TB approaches are applied to sim...

  6. Automatic path-oriented test data generation by boundary hypercuboids

    Directory of Open Access Journals (Sweden)

    Shahram Moadab

    2016-01-01

    Designing test cases and generating data are very important phases in software engineering these days. In order to generate test data, generators such as random test data generators, data specification generators and path-oriented (Path-Wise) test data generators are employed. One of the most important problems with path-oriented test data generators is the lack of attention given to discovering faults by the test data. In this paper an approach is proposed to generate test data automatically so that we can realize the goal of discovering more faults in less time. The number of faults near the boundaries of the input domain is greater than at the center; according to the Pareto 80–20 principle, the test data of this approach will be generated in the 20% of the allowable area near the boundary. To do this, we extracted the boundary hypercuboids, and the test data are then generated by exploiting these hypercuboids. The experimental results show that the fault detection probability and the fault detection speed are improved significantly compared with previous approaches. By generating data in this way, more faults are discovered in a short period of time, which makes it more likely that products can be delivered on time.
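
    A minimal sketch of boundary-biased generation in the spirit of the Pareto argument above: each variable is sampled from the outer 20% of its range (10% at each end). The variable ranges and the even split between the two ends are illustrative assumptions, not the paper's hypercuboid construction.

        import random

        def boundary_sample(lo, hi, band=0.20):
            """Sample from the outer `band` fraction of [lo, hi], half at each end."""
            width = (hi - lo) * band / 2
            if random.random() < 0.5:
                return random.uniform(lo, lo + width)
            return random.uniform(hi - width, hi)

        def generate_test_data(domains, n=10):
            """domains: dict var -> (lo, hi). Return n boundary-biased test vectors."""
            return [{v: boundary_sample(lo, hi) for v, (lo, hi) in domains.items()}
                    for _ in range(n)]

        print(generate_test_data({"x": (0.0, 100.0), "y": (-1.0, 1.0)}, n=3))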

  7. Automatic generation of executable communication specifications from parallel applications

    Energy Technology Data Exchange (ETDEWEB)

    Pakin, Scott [Los Alamos National Laboratory]; Wu, Xing [NCSU]; Mueller, Frank [NCSU]

    2011-01-19

    Portable parallel benchmarks are widely used and highly effective for (a) the evaluation, analysis and procurement of high-performance computing (HPC) systems and (b) quantifying the potential benefits of porting applications to new hardware platforms. Yet, past techniques to synthetically parameterize hand-coded HPC benchmarks prove insufficient for today's rapidly-evolving scientific codes, particularly when subject to multi-scale science modeling or when utilizing domain-specific libraries. To address these problems, this work contributes novel methods to automatically generate highly portable and customizable communication benchmarks from HPC applications. We utilize ScalaTrace, a lossless, yet scalable, parallel application tracing framework, to collect selected aspects of the run-time behavior of HPC applications, including communication operations and execution time, while abstracting away the details of the computation proper. We subsequently generate benchmarks with identical run-time behavior from the collected traces. A unique feature of our approach is that we generate benchmarks in CONCEPTUAL, a domain-specific language that enables the expression of sophisticated communication patterns using a rich and easily understandable grammar, yet compiles to ordinary C + MPI. Experimental results demonstrate that the generated benchmarks are able to preserve the run-time behavior - including both the communication pattern and the execution time - of the original applications. Such automated benchmark generation is particularly valuable for proprietary, export-controlled, or classified application codes: when supplied to a third party, our auto-generated benchmarks ensure performance fidelity without the risks associated with releasing the original code. This ability to automatically generate performance-accurate benchmarks from parallel applications is novel and without any precedence, to our knowledge.

  8. MODULEWRITER: a program for automatic generation of database interfaces.

    Science.gov (United States)

    Zheng, Christina L; Fana, Fariba; Udupi, Poornaprajna V; Gribskov, Michael

    2003-05-01

    MODULEWRITER is a PERL object relational mapping (ORM) tool that automatically generates database specific application programming interfaces (APIs) for SQL databases. The APIs consist of a package of modules providing access to each table row and column. Methods for retrieving, updating and saving entries are provided, as well as other generally useful methods (such as retrieval of the highest numbered entry in a table). MODULEWRITER provides for the inclusion of user-written code, which can be preserved across multiple runs of the MODULEWRITER program.
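
    The idea of generating an API per table can be sketched in a few lines, shown here in Python rather than Perl and not resembling MODULEWRITER's actual output; the table and column names are invented.

        CLASS_TEMPLATE = '''\
        class {cls}:
            COLUMNS = {cols!r}
            def __init__(self, **row):
                for col in self.COLUMNS:
                    setattr(self, col, row.get(col))
            def save_sql(self):
                placeholders = ", ".join(["%s"] * len(self.COLUMNS))
                return "INSERT INTO {table} ({collist}) VALUES (" + placeholders + ")"
        '''

        def generate_api(schema):
            """schema: dict table -> list of columns. Return generated Python source."""
            parts = []
            for table, cols in schema.items():
                parts.append(CLASS_TEMPLATE.format(cls=table.capitalize(), table=table,
                                                   cols=cols, collist=", ".join(cols)))
            return "\n".join(parts)

        print(generate_api({"gene": ["id", "symbol", "organism"]}))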

  9. Automatic generation of gene finders for eukaryotic species

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Krogh, A.

    2006-01-01

    Background The number of sequenced eukaryotic genomes is rapidly increasing. This means that over time it will be hard to keep supplying customised gene finders for each genome. This calls for procedures to automatically generate species-specific gene finders and to re-train them as the quantity...... length distributions. The performance of each individual gene predictor on each individual genome is comparable to the best of the manually optimised species-specific gene finders. It is shown that species-specific gene finders are superior to gene finders trained on other species....

  10. Automatic Generation of Caricatures with Multiple Expressions Using Transformative Approach

    Science.gov (United States)

    Liao, Wen-Hung; Lai, Chien-An

    The proliferation of digital cameras has changed the way we create and share photos. Novel forms of photo composition and reproduction have surfaced in recent years. In this paper, we present an automatic caricature generation system using transformative approaches. By combining facial feature detection, image segmentation and image warping/morphing techniques, the system is able to generate a stylized caricature using only one reference image. When more than one reference sample is available, the system can either choose the best fit based on shape matching, or synthesize a composite style using the polymorph technique. The system can also produce multiple expressions by controlling a subset of MPEG-4 facial animation parameters (FAP). Finally, to enable flexible manipulation of the synthetic caricature, we also investigate issues such as color quantization and raster-to-vector conversion. A major strength of our method is that the synthesized caricature bears a higher degree of resemblance to the real person than traditional component-based approaches.

  11. Automatic Generation of Facial Expression Using Triangular Geometric Deformation

    Directory of Open Access Journals (Sweden)

    Jia-Shing Sheu

    2014-12-01

    This paper presents an image deformation algorithm and constructs an automatic facial expression generation system to generate new facial expressions in neutral state. After the users input the face image in a neutral state into the system, the system separates the possible facial areas and the image background by skin color segmentation. It then uses the morphological operation to remove noise and to capture the organs of facial expression, such as the eyes, mouth, eyebrow, and nose. The feature control points are labeled according to the feature points (FPs) defined by MPEG-4. After the designation of the deformation expression, the system also increases the image correction points based on the obtained FP coordinates. The FPs are utilized as image deformation units by triangular segmentation. The triangle is split into two vectors. The triangle points are regarded as linear combinations of two vectors, and the coefficients of the linear combinations correspond to the triangular vectors of the original image. Next, the corresponding coordinates are obtained to complete the image correction by image interpolation technology to generate the new expression. As for the proposed deformation algorithm, 10 additional correction points are generated in the positions corresponding to the FPs obtained according to MPEG-4. Obtaining the correction points within a very short operation time is easy. Using a particular triangulation for deformation can extend the material area without narrowing the unwanted material area, thus saving the filling material operation in some areas.
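
    The vector-coefficient mapping described above amounts to expressing a point as P = A + u(B-A) + v(C-A) in the source triangle and reusing the coefficients (u, v) in the deformed triangle. The sketch below uses arbitrary example coordinates and illustrates only that step, not the full expression-generation system.

        def warp_point(p, src, dst):
            """Map point p from triangle src to triangle dst via its (u, v) coefficients."""
            (ax, ay), (bx, by), (cx, cy) = src
            d1 = (bx - ax, by - ay)               # vector B - A
            d2 = (cx - ax, cy - ay)               # vector C - A
            det = d1[0] * d2[1] - d1[1] * d2[0]
            px, py = p[0] - ax, p[1] - ay
            u = (px * d2[1] - py * d2[0]) / det
            v = (d1[0] * py - d1[1] * px) / det
            (ax2, ay2), (bx2, by2), (cx2, cy2) = dst
            return (ax2 + u * (bx2 - ax2) + v * (cx2 - ax2),
                    ay2 + u * (by2 - ay2) + v * (cy2 - ay2))

        src = [(0, 0), (10, 0), (0, 10)]
        dst = [(0, 0), (12, 1), (-1, 9)]
        print(warp_point((5, 5), src, dst))   # midpoint of BC maps to midpoint of B'C'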

  12. LINGUISTIC DATABASE FOR AUTOMATIC GENERATION SYSTEM OF ENGLISH ADVERTISING TEXTS

    Directory of Open Access Journals (Sweden)

    N. A. Metlitskaya

    2017-01-01

    The article deals with the linguistic database for a system of automatic generation of English advertising texts on cosmetics and perfumery. The database for such a system includes two main blocks: an automatic dictionary (which contains semantic and morphological information for each word) and semantic-syntactical formulas of the texts in a special formal language, SEMSINT. The database is built on the result of the analysis of 30 English advertising texts on cosmetics and perfumery. First, each word was given a unique code. For example, N stands for nouns, A – for adjectives, V – for verbs, etc. Then all the lexicon of the analyzed texts was distributed into different semantic categories. According to this semantic classification, each word was given a special semantic code. For example, the record N01 that is attributed to the word «lip» in the dictionary means that this word refers to nouns of the semantic category «part of a human's body». The second block of the database includes the semantic-syntactical formulas of the analyzed advertising texts written in the special formal language SEMSINT. The author gives a brief description of this language, presenting its essence and structure. Also, an example of one formalized advertising text in SEMSINT is provided.

  13. Automatic generation of reports at the TELECOM SCC

    Science.gov (United States)

    Beltan, Thierry; Jalbaud, Myriam; Fronton, Jean Francois

    In-orbit satellite follow-up produces a number of reports on a regular basis (daily, weekly, quarterly, annually). Most of these documents use the information of former issues with the increments of the last period of time. They are made up of text, tables, graphs or pictures. The system presented here is the SGMT (Systeme de Gestion de la Memoire Technique), which means Technical Memory Management System. It provides the system operators with tools to generate the greatest part of these reports as automatically as possible. It gives easy access to the reports, and the large amount of available memory enables the user to consult data covering the complete lifetime of a satellite family.

  14. Automatic generation of warehouse mediators using an ontology engine

    Energy Technology Data Exchange (ETDEWEB)

    Critchlow, T., LLNL

    1998-04-01

    Data warehouses created for dynamic scientific environments, such as genetics, face significant challenges to their long-term feasibility. One of the most significant of these is the high frequency of schema evolution resulting from both technological advances and scientific insight. Failure to quickly incorporate these modifications will quickly render the warehouse obsolete, yet each evolution requires significant effort to ensure the changes are correctly propagated. DataFoundry utilizes a mediated warehouse architecture with an ontology infrastructure to reduce the maintenance requirements of a warehouse. Among other things, the ontology is used as an information source for automatically generating mediators, the methods that transfer data between the data sources and the warehouse. The identification, definition and representation of the metadata required to perform this task is a primary contribution of this work.

  15. Automatic Generation of OWL Ontology from XML Data Source

    CERN Document Server

    Yahia, Nora; Ahmed, AbdelWahab

    2012-01-01

    The eXtensible Markup Language (XML) can be used as data exchange format in different domains. It allows different parties to exchange data by providing common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for reasoning. Ontology can provide a semantic representation of domain knowledge which supports efficient reasoning and expressive power. One of the most popular ontology languages is the Web Ontology Language (OWL). It can represent domain knowledge using classes, properties, axioms and instances for the use in a distributed environment such as the World Wide Web. This paper presents a new method for automatic generation of OWL ontology from XML data sources.

  16. Adaptive neuro-fuzzy inference system based automatic generation control

    Energy Technology Data Exchange (ETDEWEB)

    Hosseini, S.H.; Etemadi, A.H. [Department of Electrical Engineering, Sharif University of Technology, Tehran (Iran)

    2008-07-15

    Fixed gain controllers for automatic generation control are designed at nominal operating conditions and fail to provide the best control performance over a wide range of operating conditions. So, to keep system performance near its optimum, it is desirable to track the operating conditions and use updated parameters to compute control gains. A control scheme based on an adaptive neuro-fuzzy inference system (ANFIS), which is trained by the results of off-line studies obtained using particle swarm optimization, is proposed in this paper to optimize and update control gains in real time according to load variations. Also, frequency relaxation is implemented using ANFIS. The efficiency of the proposed method is demonstrated via simulations. Compliance of the proposed method with the NERC control performance standard is verified. (author)

  17. Intelligent control schemes applied to Automatic Generation Control

    Directory of Open Access Journals (Sweden)

    Dingguo Chen

    2016-04-01

    Integrating an ever increasing amount of renewable generating resources into interconnected power systems has created new challenges to the safety and reliability of today's power grids and posed new questions to be answered in power system modeling, analysis and control. Automatic Generation Control (AGC) must be extended to be able to accommodate the control of renewable generating assets. In addition, AGC is mandated to operate in accordance with NERC's Control Performance Standard (CPS) criteria, which represent a greater flexibility in relaxing the control of generating resources while still assuring the stability and reliability of interconnected power systems when each balancing authority operates in full compliance. Enhancements in several aspects of the traditional AGC must be made in order to meet the aforementioned challenges. It is the intention of this paper to provide a systematic, mathematical formulation for AGC as a first attempt in the context of meeting the NERC CPS requirements and integrating renewable generating assets, which, to the best knowledge of the authors, has not been reported in the literature. Furthermore, this paper proposes neural network based predictive control schemes for AGC. The proposed controller is capable of handling complicated nonlinear dynamics, in comparison with the conventional Proportional Integral (PI) controller, which is typically most effective for linear dynamics. The neural controller is designed in such a way that it has the capability of controlling the system generation in a relaxed manner, so the ACE is controlled to a desired range instead of being driven to zero, which would otherwise increase the control effort and cost; most importantly, the resulting system control performance meets the NERC CPS requirements and/or the NERC Balancing Authority's ACE Limit (BAAL) compliance requirements, whichever are applicable.
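
    The "relaxed" control idea, controlling ACE to a desired range rather than driving it to zero, can be pictured with a deliberately simple rule. The sketch below is not the paper's neural predictive controller; the band width and gain are placeholder values.

```python
# Minimal sketch of relaxed ACE control: no action is taken while the Area
# Control Error stays inside a tolerance band, and a proportional correction is
# applied only when it drifts outside.  Band width and gain are illustrative.

def relaxed_agc_command(ace_mw: float, band_mw: float = 50.0, gain: float = 0.2) -> float:
    """Return a generation adjustment (MW) for the current ACE value."""
    if abs(ace_mw) <= band_mw:
        return 0.0                      # inside the allowed range: relax, no control effort
    # outside the band: push ACE back toward the band edge, not all the way to zero
    excess = ace_mw - band_mw if ace_mw > 0 else ace_mw + band_mw
    return -gain * excess

if __name__ == "__main__":
    for ace in (-120.0, -30.0, 0.0, 45.0, 200.0):
        print(ace, "->", round(relaxed_agc_command(ace), 1))
```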

  18. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    Directory of Open Access Journals (Sweden)

    Jung-ran Park

    2015-09-01

    Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data. Utilization of (semi-)automatic metadata generation is critical in addressing these environmental changes and may be unavoidable in the future considering the costly and complex operation of manual metadata creation. To address such needs, this study examines the range of semi-automatic metadata generation tools (n=39) while providing an analysis of their techniques, features, and functions. The study focuses on open-source tools that can be readily utilized in libraries and other memory institutions. The challenges and current barriers to implementation of these tools were identified. The greatest area of difficulty lies in the fact that the piecemeal development of most semi-automatic generation tools only addresses part of the issue of semi-automatic metadata generation, providing solutions to one or a few metadata elements but not the full range of elements. This indicates that significant local efforts will be required to integrate the various tools into a coherent working whole. Suggestions toward such efforts are presented for future developments that may assist information professionals with incorporation of semi-automatic tools within their daily workflows.

  19. Approaches to the automatic generation and control of finite element meshes

    Science.gov (United States)

    Shephard, Mark S.

    1987-01-01

    The algorithmic approaches being taken to the development of finite element mesh generators capable of automatically discretizing general domains without the need for user intervention are discussed. It is demonstrated that because of the modeling demands placed on an automatic mesh generator, all the approaches taken to date produce unstructured meshes. Consideration is also given to both a priori and a posteriori mesh control devices for automatic mesh generators as well as their integration with geometric modeling and adaptive analysis procedures.

  20. Reversible anonymization of DICOM images using automatically generated policies.

    Science.gov (United States)

    Onken, Michael; Riesmeier, Jörg; Engel, Marcel; Yabanci, Adem; Zabel, Bernhard; Després, Stefan

    2009-01-01

    Many real-world applications in the area of medical imaging, like case study databases, require separation of identifying (IDATA) and non-identifying (MDATA) data, specifically those offering Internet-based data access. These kinds of projects also must provide a role-based access system, controlling how patient data must be organized and how it can be accessed. On the DICOM image level, different image types support different kinds of information, intermixing IDATA and MDATA in a single object. To separate them, it is possible to reversibly anonymize DICOM objects by substituting IDATA with a unique anonymous token. If an authenticated user later needs full access to an image, this token can be used for re-linking the formerly separated IDATA and MDATA, thus resulting in a dynamically generated, exact copy of the original image. The approach described in this paper is based on the automatic generation of anonymization policies from the DICOM standard text, providing specific support for all kinds of DICOM images. The policies are executed by a newly developed framework based on the DICOM toolkit DCMTK and offer a reliable approach to reversible anonymization. The implementation is evaluated in a German BMBF-supported expert network in the area of skeletal dysplasias, SKELNET, but may generally be applicable to related projects, enormously improving the quality and integrity of diagnostics in a field focused on images. It performs effectively and efficiently on real-world test images from the project and other kinds of DICOM images.
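
    As a rough illustration of the reversible-anonymization idea (not the policy-driven framework described above), the following Python sketch uses pydicom to replace a few identifying attributes with a unique token and keep the originals in a separate store for later re-linking; the tag list and token scheme are assumptions for illustration.

```python
# Simplified sketch of reversible anonymization: identifying attributes are
# replaced by a unique token, and the original values are kept in a separate
# store so an authorized user can re-link them later.  The tag list and token
# scheme are illustrative; the cited framework derives its policies
# automatically from the DICOM standard text.
import uuid
import pydicom

ID_TAGS = ["PatientName", "PatientID", "PatientBirthDate"]   # small illustrative subset of IDATA

def anonymize(path_in: str, path_out: str, id_store: dict) -> str:
    """Strip IDATA from a DICOM file, save the result, and return the token."""
    ds = pydicom.dcmread(path_in)
    token = str(uuid.uuid4())
    id_store[token] = {tag: str(getattr(ds, tag, "")) for tag in ID_TAGS}   # IDATA kept separately
    for tag in ID_TAGS:
        if tag in ds:
            setattr(ds, tag, "")              # remove identifying content from the object
    ds.PatientID = token                      # the token re-links IDATA and MDATA on demand
    ds.save_as(path_out)
    return token
```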

  1. [Development of a Software for Automatically Generated Contours in Eclipse TPS].

    Science.gov (United States)

    Xie, Zhao; Hu, Jinyou; Zou, Lian; Zhang, Weisha; Zou, Yuxin; Luo, Kelin; Liu, Xiangxiang; Yu, Luxin

    2015-03-01

    The automatic generation of planning targets and auxiliary contours has been achieved in Eclipse TPS 11.0. The scripting language AutoHotkey was used to develop a software tool for automatically generated contours in Eclipse TPS. This software is named Contour Auto Margin (CAM) and is composed of contour operation functions, visualization of the generated scripts, and script file operations. Ten cases of different cancers were selected separately; in Eclipse TPS 11.0, the scripts generated by the software could not only automatically generate contours but also perform contour post-processing. For the different cancers, there was no difference between the automatically generated contours and manually created contours. CAM is a user-friendly and powerful software tool that can generate contours automatically and quickly in Eclipse TPS 11.0. With the help of CAM, plan preparation time is greatly reduced and the working efficiency of radiation therapy physicists is improved.

  2. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    Science.gov (United States)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Format (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.

  3. Development of tools for automatic generation of PLC code

    CERN Document Server

    Koutli, Maria; Rochez, Jacques

    This Master's thesis was performed at CERN, more specifically in the EN-ICE-PLC section. The thesis describes the integration of two PLC platforms, based on the CODESYS development tool, into the CERN-defined industrial framework, UNICOS. CODESYS is a development tool for PLC programming, based on the IEC 61131-3 standard, and is adopted by many PLC manufacturers. The two PLC development environments are SoMachine from Schneider and TwinCAT from Beckhoff. The two CODESYS-compatible PLCs should be controlled by the SCADA system of Siemens, WinCC OA. The framework includes a library of Function Blocks (objects) for the PLC programs and a software tool for automatic generation of the PLC code based on this library, called UAB. The integration aimed to provide a solution shared by both PLC platforms and was based on the PLCopen XML scheme. The developed tools were demonstrated by creating a control application for both PLC environments and testing the behavior of the library code.

  4. Automatic summary generating technology of vegetable traceability for information sharing

    Science.gov (United States)

    Zhenxuan, Zhang; Minjing, Peng

    2017-06-01

    In order to solve the problems of excessive data entries and the consequent high costs of data collection faced by farmers in vegetable traceability applications, an automatic summary generating technology of vegetable traceability for information sharing is proposed. The proposed technology is an effective way for farmers to share real-time vegetable planting information on social networking platforms to enhance their brands and obtain more customers. In this research, the factors influencing vegetable traceability for customers were analyzed to establish the sub-indicators and target indicators and to propose a computing model based on the collected parameter values of the planted vegetables and standard legal systems on food safety. The proposed standard parameter model involves five steps: accessing the database, establishing target indicators, establishing sub-indicators, establishing the standard reference model and computing scores of indicators. On the basis of establishing and optimizing the standards of food safety and the traceability system, the proposed technology could be accepted by more and more farmers and customers.

  5. Automatically Adapting Home Lighting to Assist Visually Impaired Children

    OpenAIRE

    Freeman, Euan; Wilson, Graham; Brewster, Stephen

    2016-01-01

    For visually impaired children, activities like finding everyday items, locating favourite toys and moving around the home can be challenging. Assisting them during these activities is important because it promotes independence and encourages them to use and develop their remaining visual function. We describe our work towards a system that adapts the lighting conditions at home to help visually impaired children with everyday tasks. We discuss scenarios that show how they may benefit from ad...

  6. Automatic generation of investigator bibliographies for institutional research networking systems.

    Science.gov (United States)

    Johnson, Stephen B; Bales, Michael E; Dine, Daniel; Bakken, Suzanne; Albert, Paul J; Weng, Chunhua

    2014-10-01

    Publications are a key data source for investigator profiles and research networking systems. We developed ReCiter, an algorithm that automatically extracts bibliographies from PubMed using institutional information about the target investigators. ReCiter executes a broad query against PubMed, groups the results into clusters that appear to constitute distinct author identities and selects the cluster that best matches the target investigator. Using information about investigators from one of our institutions, we compared ReCiter results to queries based on author name and institution and to citations extracted manually from the Scopus database. Five judges created a gold standard using citations of a random sample of 200 investigators. About half of the 10,471 potential investigators had no matching citations in PubMed, and about 45% had fewer than 70 citations. Interrater agreement (Fleiss' kappa) for the gold standard was 0.81. Scopus achieved the best recall (sensitivity) of 0.81, while name-based queries had 0.78 and ReCiter had 0.69. ReCiter attained the best precision (positive predictive value) of 0.93 while Scopus had 0.85 and name-based queries had 0.31. ReCiter accesses the most current citation data, uses limited computational resources and minimizes manual entry by investigators. Generation of bibliographies using name-based queries will not yield high accuracy. Proprietary databases can perform well but require manual effort. Automated generation with higher recall is possible but requires additional knowledge about investigators. Copyright © 2014 Elsevier Inc. All rights reserved.
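
    The cluster-selection step can be pictured with the schematic Python sketch below: candidate citations are grouped into putative author identities and the cluster whose metadata best matches the target investigator's institutional profile is kept. The scoring terms and weights are illustrative assumptions, not ReCiter's actual rules.

```python
# Schematic sketch of selecting the author-identity cluster that best matches
# a target investigator.  Scoring terms, weights and data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    pmids: list
    affiliations: set = field(default_factory=set)
    coauthors: set = field(default_factory=set)

def score(cluster: Cluster, target_affils: set, known_coauthors: set) -> float:
    affil_hits = len(cluster.affiliations & target_affils)
    coauthor_hits = len(cluster.coauthors & known_coauthors)
    return 2.0 * affil_hits + 1.0 * coauthor_hits   # assumed weights

def select_bibliography(clusters, target_affils, known_coauthors):
    best = max(clusters, key=lambda c: score(c, target_affils, known_coauthors))
    return best.pmids

if __name__ == "__main__":
    clusters = [
        Cluster(["111", "222"], {"example university"}, {"coauthor a"}),
        Cluster(["333"], {"unrelated institute"}, set()),
    ]
    print(select_bibliography(clusters, {"example university"}, {"coauthor a"}))  # ['111', '222']
```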

  7. Automatic generation of digital anthropomorphic phantoms from simulated MRI acquisitions

    Science.gov (United States)

    Lindsay, C.; Gennert, M. A.; KÓ§nik, A.; Dasari, P. K.; King, M. A.

    2013-03-01

    In SPECT imaging, motion from patient respiration and body motion can introduce image artifacts that may reduce the diagnostic quality of the images. Simulation studies using numerical phantoms with precisely known motion can help to develop and evaluate motion correction algorithms. Previous methods for evaluating motion correction algorithms used either manual or semi-automated segmentation of MRI studies to produce patient models in the form of XCAT Phantoms, from which one calculates the transformation and deformation between MRI study and patient model. Both manual and semi-automated methods of XCAT Phantom generation require expertise in human anatomy, with the semiautomated method requiring up to 30 minutes and the manual method requiring up to eight hours. Although faster than manual segmentation, the semi-automated method still requires a significant amount of time, is not replicable, and is subject to errors due to the difficulty of aligning and deforming anatomical shapes in 3D. We propose a new method for matching patient models to MRI that extends the previous semi-automated method by eliminating the manual non-rigid transformation. Our method requires no user supervision and therefore does not require expert knowledge of human anatomy to align the NURBs to anatomical structures in the MR image. Our contribution is employing the SIMRI MRI simulator to convert the XCAT NURBs to a voxel-based representation that is amenable to automatic non-rigid registration. Then registration is used to transform and deform the NURBs to match the anatomy in the MR image. We show that our automated method generates XCAT Phantoms more robustly and significantly faster than the previous semi-automated method.

  8. Semi-Automatic Story Generation for a Geographic Server

    Directory of Open Access Journals (Sweden)

    Rizwan Mehmood

    2017-06-01

    Most existing servers providing geographic data tend to offer various numeric data. We started to work on a new type of geographic server, motivated by four major issues: (i) how to handle figures when different databases present different values; (ii) how to build up sizeable collections of pictures with detailed descriptions; (iii) how to update rapidly changing information, such as personnel holding important functions; and (iv) how to describe countries not just by using trivial facts, but stories typical of the country involved. We have discussed and partially resolved issues (i) and (ii) in previous papers; we have decided to deal with (iii), regional updates, by tying in an international consortium whose members would either help themselves or find individuals to do so. It is issue (iv), how to generate non-trivial stories typical of a country, that we decided to tackle both manually (the consortium has by now generated around 200 stories) and by developing techniques for semi-automatic story generation, which is the topic of this paper. The basic idea was first to define sets of reasonably reliable servers that may differ from region to region, to extract “interesting facts” from the servers, and to combine them into a raw version of a report that would require some manual cleaning-up (hence: semi-automatic). It may sound difficult to extract “interesting facts” from Web pages, but it is quite possible to define heuristics to do so, never exceeding the few lines allowed for quotation purposes. One very simple rule we adopted was this: ‘Look for sentences with superlatives!’ If a sentence contains words like “biggest”, “highest”, “most impressive” etc., it is likely to contain an interesting fact. With a little imagination, we have been able to establish a set of such rules. We will show that the stories can be completely different. For some countries, historical facts may dominate; for others, the beauty of landscapes; for
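
    The quoted superlative rule lends itself to a very small illustration. The Python sketch below is only a simplified reading of that heuristic, with an assumed word list and naive sentence splitting.

```python
# Minimal sketch of the "look for sentences with superlatives" heuristic quoted
# above.  The word list and sentence splitting are simplifications; the actual
# system combines several such rules.
import re

SUPERLATIVES = re.compile(
    r"\b(biggest|largest|highest|oldest|longest|deepest|most\s+\w+)\b", re.IGNORECASE
)

def interesting_sentences(text: str, max_quote_len: int = 300) -> list:
    """Return short sentences that contain a superlative."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in sentences
            if SUPERLATIVES.search(s) and len(s) <= max_quote_len]

if __name__ == "__main__":
    demo = ("The capital has two million inhabitants. "
            "Its clock tower is the oldest in the region and the most visited site.")
    print(interesting_sentences(demo))
```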

  9. User evaluation of a communication system that automatically generates captions to improve telephone communication

    NARCIS (Netherlands)

    Zekveld, A.A.; Kramer, S.E.; Kessens, J.M.; Vlaming, M.S.M.G.; Houtgast, T.

    2009-01-01

    This study examined the subjective benefit obtained from automatically generated captions during telephone-speech comprehension in the presence of babble noise. Short stories were presented by telephone either with or without captions that were generated offline by an automatic speech recognition

  10. ASSIsT: An Automatic SNP ScorIng Tool for in- and outbreeding species

    NARCIS (Netherlands)

    Guardo, Di M.; Micheletti, D.; Bianco, L.; Koehorst-van Putten, H.J.J.; Longhi, S.; Costa, F.; Aranzana, M.J.; Velasco, R.; Arus, P.; Troggio, M.; Weg, van de W.E.

    2015-01-01

    ASSIsT (Automatic SNP ScorIng Tool) is a user-friendly customized pipeline for efficient calling and filtering of SNPs from Illumina Infinium arrays, specifically devised for custom genotyping arrays. Illumina has developed an integrated software for SNP data visualization and inspection called

  11. Is Mobile-Assisted Language Learning Really Useful? An Examination of Recall Automatization and Learner Autonomy

    Science.gov (United States)

    Sato, Takeshi; Murase, Fumiko; Burden, Tyler

    2015-01-01

    The aim of this study is to examine the advantages of Mobile-Assisted Language Learning (MALL), especially vocabulary learning of English as a foreign or second language (L2) in terms of the two strands: automatization and learner autonomy. Previous studies articulate that technology-enhanced L2 learning could bring about some positive effects.…

  12. Automatization of Mathematics Skills via Computer-Assisted Instruction among Students with Mild Mental Handicaps.

    Science.gov (United States)

    Podell, David M.; And Others

    1992-01-01

    This evaluation study with 94 elementary students (50 with mild mental handicap) compared computer-assisted instruction (CAI) and paper-and-pencil practices in promoting automatization of basic addition and subtraction skills. Findings suggested CAI was more effective but that the students with mental handicap required more practice than…

  13. A blood-assisted optical biosensor for automatic glucose determination.

    Science.gov (United States)

    Sanz, Vanesa; de Marcos, Susana; Galbán, Javier

    2009-05-15

    A new approach for glucose determination in blood based on the spectroscopic properties of blood hemoglobin (Hb) is presented. The biosensor consists of a glucose oxidase (GOx) entrapped polyacrylamide (PAA) film placed in a flow cell. Blood is simply diluted with bidistilled water (150:1, v:v) and injected into the carrier solution. When reaching the PAA film, the blood glucose reacts with the GOx and the resulting H2O2 reacts with the blood Hb. This produces an absorbance change in this compound. The GOx-PAA film can be used at least 100 times. Lateral reactions of H2O2 with other blood constituents are easily blocked (by azide addition). The linear response range can be fitted between 20 and 1200 mg dL-1 glucose (R.S.D. 4%, 77 mg dL-1). In addition to the use of untreated blood, two important analytical aspects of the method are: (1) the analyte concentration can be obtained by an absolute calibration method; and (2) the signal is not dependent on the oxygen concentration. A mathematical model relating the Hb absorbance variation during the reaction with the glucose concentration has been developed to provide theoretical support and to predict its application to other compounds after replacing the GOx with another enzyme. The method has been applied to direct glucose determination in 10 blood samples, and a correlation coefficient higher than 0.98 was obtained after comparing the results with those determined by an automatic analyzer. As well as sharing some of the advantages of disposable amperometric biosensors, the most significant feature of this approach is its reversibility.

  14. A strategy for automatically generating programs in the lucid programming language

    Science.gov (United States)

    Johnson, Sally C.

    1987-01-01

    A strategy for automatically generating and verifying simple computer programs is described. The programs are specified by a precondition and a postcondition in predicate calculus. The programs generated are in the Lucid programming language, a high-level, data-flow language known for its attractive mathematical properties and ease of program verification. The Lucid programming language is described, and the automatic program generation strategy is described and applied to several example problems.

  15. System and Component Software Specification, Run-time Verification and Automatic Test Generation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The following background technology is described in Part 5: Run-time Verification (RV), White Box Automatic Test Generation (WBATG). Part 5 also describes how WBATG...

  16. A GA-fuzzy automatic generation controller for interconnected power system

    CSIR Research Space (South Africa)

    Boesack, CD

    2011-10-01

    Fourth International Workshop on Advanced Computational Intelligence, Wuhan, Hubei, China; October 19-21, 2011. A GA-Fuzzy Automatic Generation Controller for Interconnected Power Systems. Craig D. Boesack, Tshilidzi Marwala, and Fulufhelo V. Nelwamondo...

  17. Automatic generation of natural language nursing shift summaries in neonatal intensive care : BT-Nurse

    OpenAIRE

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy

    2012-01-01

    Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). Methods: A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. Results: In an on-ward evaluation, a substantial majority of the summaries was found by outgoing and incoming nurses to be understandable (90%), and a majority was found to be accurate (70%) and helpful (59%).

  18. A Model-Based Method for Content Validation of Automatically Generated Test Items

    Science.gov (United States)

    Zhang, Xinxin; Gierl, Mark

    2016-01-01

    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  19. Next-generation automatic test equipment for military support

    Science.gov (United States)

    Wasserman, M.

    The underlying philosophy and design of automatic testing equipment (ATE) for military systems have undergone modification in view of the increasingly important requirement of forward deployment. ATE stations must accordingly become smaller and lighter for the sake of transportability, as well as hardier and easily reconfigurable. Ease of operation and maintenance also become critical. Among the technologies identified as essential for the implementation of these stringent ATE design requirements are the IEEE-488, MIL-STD-1553, VME, VXI, and SCSI data buses, 'instruments on a card' technology, optical disk drives, touch-screen technology, and expert system-related software.

  20. Automatic Tamil lyric generation based on ontological interpretation ...

    Indian Academy of Sciences (India)

    2. Literature survey. One of the existing works on lyric generation for the Tamil language concentrates on generating lyrics from a given melody (Sobha & Ananth Ramakrishnan 2010). This is a tune-based lyric generating system wherein the input to the system is a tune in the KNM representation, where 'K' refers to 'Kuril', ...

  1. An automatic stimulus generation system for electroretinogram capture and processing.

    Science.gov (United States)

    Vennat, J C; Doly, M; Sanzelle, S; Ghiazza, D; Bonhomme, B; Gaillard, G

    1986-01-01

    An automated system is presented for on-line capture and processing of the analog signal obtained in response to light or X-ray stimulation of isolated rat retina maintained in survival by perfusion. The most important part of the system is a microcomputer Apple II (48 K Europlus) equipped with interface boards. Basic and assembler programs automatically deliver light or X-ray stimulation every 5 min. Data capture and data processing are carried out following each retinal response. Calculated parameters of the ERG, and 200 values obtained after sampling of an ERG are placed in a data file on a floppy disc. One hundred ERGs can be stored in this way.

  2. Automatic test pattern generation for iterative logic arrays | Boateng ...

    African Journals Online (AJOL)

    test are first formulated. Next, the repetition property of the test patterns is exploited to develop a method for generating C-tests for ILAs under the cell fault model. Based on the results of test generation, the method identifies points of insertion of ...

  3. Research of Automatic Progress Report Generation for Railway Construction Projects in China

    Directory of Open Access Journals (Sweden)

    Qing Li

    2015-01-01

    The rapid construction of railways in China has posed tremendous challenges for managing railway construction projects, especially their progress. Frequently changing schedules, complicated contents, and error-prone data have hampered efforts to automatically generate progress reports on railway construction projects within the country. In this paper, we set out to explore the linkages among current data from Chinese railway construction units on inspection lots, construction drawings, and construction schedules, which are used to establish an engineering quantity computation model and an automatic project progress report model. An automatic progress report generation system for railway construction projects was developed to generate a wide range of standard progress reports. Practical applications showed that the proposed system offers an alternative to hardware-based methods of progress report generation and can significantly improve the accuracy of data and the quality of management regarding project progress.

  4. New Algorithm of Automatic Complex Password Generator Employing Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sura Jasim Mohammed

    2018-01-01

    Due to the increase in information sharing, internet popularization, e-commerce transactions, and data transfer, security and authenticity have become important and necessary subjects. In this paper an automated scheme is proposed to generate a strong and complex password based on entering initial data such as text (meaningful and simple information or not), encoding it, and then employing a genetic algorithm, using its crossover and mutation operations, to generate data different from the entered input. The generated password is non-guessable and can be used in many different applications and internet services such as social networks, secured systems, distributed systems, and online services. The proposed password generator achieves diffusion, randomness, and confusion, which are necessary and targeted properties of the resulting password; in addition, the length of the generated password differs from the length of the initial data, and any simple change or modification in the initial data produces a clear modification in the generated password. The proposed work was implemented using the Visual Basic programming language.
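
    In the spirit of the description above, the following Python sketch shows how an input phrase could be encoded and then repeatedly recombined by crossover and mutation until the result no longer resembles the input. The fitness measure, operators and parameters are assumptions; the paper's actual Visual Basic implementation is not reproduced here.

```python
# Hedged sketch of a GA-style password generator: encode the input, then apply
# crossover and mutation, preferring candidates that differ from the encoding.
import random
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def encode(seed_text: str, length: int = 16) -> list:
    random.seed(seed_text)                        # derive the initial genome from the input text
    return [random.choice(ALPHABET) for _ in range(length)]

def crossover(a: list, b: list) -> list:
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome: list, rate: float = 0.2) -> list:
    return [random.choice(ALPHABET) if random.random() < rate else ch for ch in genome]

def generate_password(seed_text: str, generations: int = 50) -> str:
    baseline = encode(seed_text)                  # genome derived from the input text
    random.seed()                                 # re-seed from system entropy for the evolution
    population = [mutate(baseline) for _ in range(8)]
    for _ in range(generations):
        a, b = random.sample(population, 2)
        child = mutate(crossover(a, b))
        # replace the individual most similar to the baseline (a crude "non-guessability" fitness)
        population.sort(key=lambda g: sum(x == y for x, y in zip(g, baseline)), reverse=True)
        population[0] = child
    population.sort(key=lambda g: sum(x == y for x, y in zip(g, baseline)))
    return "".join(population[0])

if __name__ == "__main__":
    print(generate_password("my simple phrase"))
```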

  5. Group excitation control of generators in state regional electric power plant transformer station automatic control systems

    Energy Technology Data Exchange (ETDEWEB)

    Gumin, M.I.; Rosman, L.V.; Tarnavskii, V.M.

    1983-01-01

    Group excitation control of electric generators according to standard methods is essential for the management of power plant conditions according to voltage and reactive power. A system is described that provides coordinated changes in the automatic excitation controller set point for generators that operate on common buses. The advantages of the excitation control system are discussed.

  6. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composite (LFRC) with high fiber volume fraction with random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through rando...

  7. AUTOMATIC 3D BUILDING MODEL GENERATIONS WITH AIRBORNE LiDAR DATA

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2017-11-01

    LiDAR systems are becoming more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed in a simple and quick way for the many studies which include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is aimed at. An approach is proposed for automatic 3D building model generation that includes the automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules have been performed to improve classification results using different test areas identified in the study area. The proposed approach has been tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The obtained results of this research on the study area verified

  8. Automatic 3d Building Model Generations with Airborne LiDAR Data

    Science.gov (United States)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems are becoming more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications. Thus, an approach for automatic 3D building model generation is needed in a simple and quick way for the many studies which include building modelling. In this study, automatic generation of 3D building models from airborne LiDAR data is aimed at. An approach is proposed for automatic 3D building model generation that includes the automatic point-based classification of the raw LiDAR point cloud. The proposed point-based classification uses hierarchical rules for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules have been performed to improve classification results using different test areas identified in the study area. The proposed approach has been tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building model was generated automatically using the results of the automatic point-based classification. The obtained results of this research on the study area verified that automatic 3D

  9. Automatic generation of textual summaries from neonatal intensive care data

    OpenAIRE

    Portet, François; Reiter, Ehud; Gatt, Albert; Hunter, Jim; Sripada, Somayajulu; Freer, Yvonne; Sykes, Cindy

    2009-01-01

    Effective presentation of data for decision support is a major issue when large volumes of data are generated as happens in the Intensive Care Unit (ICU). Although the most common approach is to present the data graphically, it has been shown that textual summarisation can lead to improved decision making. As part of the BabyTalk project, we present a prototype, called BT-45, which generates textual summaries of about 45 minutes of cont...

  10. : Developing an automatic text cartographer tool for assisting dyslexic learners with text comprehension

    OpenAIRE

    Laurent, Mario; Chanier, Thierry

    2013-01-01

    People with language impairments, like dyslexia, have serious difficulty reading and writing. This difficulty persists when they use a computer environment such as a word processor, so they need adapted tools. For this purpose, we are developing LICI, a tool that automatically generates a conceptual or heuristic map from a text. This will facilitate text comprehension in a short time and will greatly help dyslexic learners during school activities or learning tasks.

  11. An approach to automatic generation and verification of tonal harmony

    NARCIS (Netherlands)

    Pauws, S.C.; Pisters, R.K.P.; Van Geenen , J.L.

    2008-01-01

    This report concerns the formal specification of an algorithm for both checking completed exercises in tonal four-part harmony, as used for didactic purposes by the Fontys Conservatory of Tilburg, and generating plausible solutions to such exercises. The intended application of the theory presented

  12. Automatic Keyframe Summarization of User-Generated Video

    Science.gov (United States)

    2014-06-01

    published to social media websites, available for public consumption or for a small circle of friends. As user-generated video grows, it is ... over longer periods of space and time. Additionally, the storyline may be less crafted or coherent when compared to professional cinema. As such, shot

  13. Designing a story database for use in automatic story generation

    NARCIS (Netherlands)

    Oinonen, K.M.; Theune, Mariet; Nijholt, Antinus; Uijlings, J.R.R.; Harper, R.; Rauterberg, M; Combetto, M.

    In this paper we propose a model for the representation of stories in a story database. The use of such a database will enable computational story generation systems to learn from previous stories and associated user feedback, in order to create believable stories with dramatic plots that invoke an

  14. An automatically generated code for relativistic inhomogeneous cosmologies

    CERN Document Server

    Bentivegna, Eloisa

    2016-01-01

    The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated-code-generation capabilities provided by its component Kranc.

  15. A study on the development of a robot-assisted automatic laser hair removal system.

    Science.gov (United States)

    Lim, Hyoung-Woo; Park, Sungwoo; Noh, Seungwoo; Lee, Dong-Hun; Yoon, Chiyul; Koh, Wooseok; Kim, Youdan; Chung, Jin Ho; Kim, Hee Chan; Kim, Sungwan

    2014-11-01

    Background and Objective: The robot-assisted automatic laser hair removal (LHR) system is developed to automatically detect any arbitrary shape of the desired LHR treatment area and to provide uniform laser irradiation to the designated skin area. For uniform delivery of laser energy, a unit of a commercial LHR device, a laser distance sensor, and a high-resolution webcam are attached at the six-axis industrial robot's end-effector, which can be easily controlled using a graphical user interface (GUI). During the treatment, the system automatically provides real-time treatment progress as well as the total number of "pick and place" operations. During the test, it was demonstrated that the arbitrary shapes were detected and that the laser was delivered uniformly. The localization error test and the area-per-spot test produced satisfactory outcome averages of 1.04 mm error and 38.22 mm2/spot, respectively. The results showed that the system successfully demonstrated accuracy and effectiveness. The proposed system is expected to become a promising device in LHR treatment.

  16. A study on ship automatic berthing with assistance of auxiliary devices

    Directory of Open Access Journals (Sweden)

    Van Luong Tran

    2012-09-01

    Recent research on the automatic berthing control problem has used various kinds of tools as control methods, such as expert systems, fuzzy logic controllers and artificial neural networks (ANN). Among them, the ANN has proved to be one of the most effective and attractive options. In a marine context, the berthing maneuver is a complicated procedure in which both human experience and intensive control operations are involved. Nowadays, in most berthing operations, auxiliary devices are used to make the schedule safer and faster, but none of the above studies has taken them into account. In this study, an ANN is applied to design the controllers for automatic ship berthing using assistant devices such as a bow thruster and a tug. Using the back-propagation algorithm, we trained the ANN with a set of teaching data to minimize the error between the output values and the desired values of the four control outputs: rudder, propeller revolution, bow thruster and tug. Then, computer simulations of automatic berthing were carried out to verify the effectiveness of the system. The results of the simulations showed good performance for the proposed berthing control system.
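
    The controller structure described above can be sketched as a small feed-forward network that maps the ship state to the four commands and is fitted to teaching data by back-propagation. In the NumPy sketch below, the state variables, layer sizes and training data are illustrative assumptions, not the paper's configuration.

```python
# Schematic sketch of an ANN berthing controller: a one-hidden-layer network
# maps the ship state to four commands (rudder, propeller revolution, bow
# thruster, tug) and is trained with plain back-propagation on an MSE loss.
import numpy as np

rng = np.random.default_rng(0)
N_STATE, N_HIDDEN, N_OUT = 6, 10, 4      # e.g. x, y, heading, surge, sway, yaw rate -> 4 commands

W1 = rng.normal(0, 0.1, (N_STATE, N_HIDDEN))
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_OUT))

def forward(states):
    hidden = np.tanh(states @ W1)
    return hidden, np.tanh(hidden @ W2)   # commands normalised to [-1, 1]

def train_step(states, targets, lr=0.05):
    """One batch of back-propagation on a mean-squared-error loss."""
    global W1, W2
    hidden, out = forward(states)
    err = out - targets
    d_out = err * (1.0 - out ** 2)                        # gradient through the output tanh
    d_hidden = (d_out @ W2.T) * (1.0 - hidden ** 2)       # gradient through the hidden tanh
    W2 -= lr * hidden.T @ d_out / len(states)
    W1 -= lr * states.T @ d_hidden / len(states)
    return float((err ** 2).mean())

if __name__ == "__main__":
    # hypothetical teaching data: recorded states and the commands a pilot issued
    states = rng.normal(size=(64, N_STATE))
    targets = np.tanh(rng.normal(size=(64, N_OUT)))
    for _ in range(200):
        loss = train_step(states, targets)
    print("final training MSE:", round(loss, 4))
```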

  17. Training IBM Watson using Automatically Generated Question-Answer Pairs

    OpenAIRE

    Lee, Jangho; Kim, Gyuwan; Yoo, Jaeyoon; Jung, Changwoo; Kim, Minseok; Yoon, Sungroh

    2016-01-01

    IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of well-prepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and...

  18. Automatic capture of attention by conceptually generated working memory templates.

    Science.gov (United States)

    Sun, Sol Z; Shen, Jenny; Shaw, Mark; Cant, Jonathan S; Ferber, Susanne

    2015-08-01

    Many theories of attention propose that the contents of working memory (WM) can act as an attentional template, which biases processing in favor of perceptually similar inputs. While support has been found for this claim, it is unclear how attentional templates are generated when searching real-world environments. We hypothesized that in naturalistic settings, attentional templates are commonly generated from conceptual knowledge, an idea consistent with sensorimotor models of knowledge representation. Participants performed a visual search task in the delay period of a WM task, where the item in memory was either a colored disk or a word associated with a color concept (e.g., "Rose," associated with red). During search, we manipulated whether a singleton distractor in the array matched the contents of WM. Overall, we found that search times were impaired in the presence of a memory-matching distractor. Furthermore, the degree of impairment did not differ based on the contents of WM. Put differently, regardless of whether participants were maintaining a perceptually colored disk identical to the singleton distractor, or whether they were simply maintaining a word associated with the color of the distractor, the magnitude of attentional capture was the same. Our results suggest that attentional templates can be generated from conceptual knowledge, in the physical absence of the visual feature.

  19. Carbon assisted water electrolysis for hydrogen generation

    Science.gov (United States)

    Sabareeswaran, S.; Balaji, R.; Ramya, K.; Rajalakshmi, N.; Dhathathereyan, K. S.

    2013-06-01

    Carbon Assisted Water Electrolysis (CAWE) is an energy efficient process in that H2 can be produced at a lower applied voltage (~1.0 V) compared to the nearly 2.0 V needed for ordinary water electrolysis at the same H2 evolution rate. In this process, carbon is oxidized to oxides of carbon at the anode of an electrochemical cell and hydrogen is produced at the cathode. These gases are produced in a relatively pure state and would be collected in separate chambers. In this paper, we present the results of the influence of various operating parameters on the efficiency of the CAWE process. The results showed that H2 can be produced at applied voltages Eo as low as 1.0 V (vs. SHE) and that its production rate is strongly dependent on the type of carbon used and its concentration in the electrolyte. It has also been found that the performance of the CAWE process is higher in acidic electrolyte than in alkaline electrolyte.

  20. Contribution of supraspinal systems to generation of automatic postural responses

    Directory of Open Access Journals (Sweden)

    Tatiana G Deliagina

    2014-10-01

    Different species maintain a particular body orientation in space due to the activity of the closed-loop postural control system. In this review we discuss the role of neurons of descending pathways in the operation of this system, as revealed in animal models of differing complexity: a lower vertebrate (lamprey) and higher vertebrates (rabbit and cat). In the lamprey and quadruped mammals, the role of spinal and supraspinal mechanisms in the control of posture is different. In the lamprey, the system contains one closed-loop mechanism consisting of supraspino-spinal networks. Reticulospinal (RS) neurons play a key role in the generation of postural corrections. Due to vestibular input, any deviation from the stabilized body orientation leads to activation of a specific population of RS neurons. Each of the neurons activates a specific motor synergy. Collectively, these neurons evoke the motor output necessary for the postural correction. In contrast to lampreys, postural corrections in quadrupeds are primarily based not on vestibular input but on somatosensory input from limb mechanoreceptors. The system contains two closed-loop mechanisms – spinal and spino-supraspinal networks – which supplement each other. Spinal networks receive somatosensory input from the limb signaling postural perturbations, and generate spinal postural limb reflexes. These reflexes are relatively weak, but in intact animals they are enhanced due to both tonic supraspinal drive and phasic supraspinal commands. Recent studies of these supraspinal influences are considered in this review. A hypothesis suggesting common principles of operation of the postural systems stabilizing body orientation in a particular plane in the lamprey and quadrupeds, namely the interaction of antagonistic postural reflexes, is discussed.

  1. A novel excitation assistance switched reluctance wind power generator

    DEFF Research Database (Denmark)

    Liu, Xiao; Park, Kiwoo; Chen, Zhe

    2014-01-01

    The high inductance of a general switched reluctance generator (SRG) may prevent the excitation of the magnetic field from being established quickly enough, which may further limit the output power of the SRG. A novel excitation assistance SRG (EASRG) for wind power generation is proposed...

  2. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  3. Cross-cultural assessment of automatically generated multimodal referring expressions in a virtual world

    NARCIS (Netherlands)

    van der Sluis, Ielka; Luz, Saturnino; Breitfuss, Werner; Ishizuka, Mitsuru; Prendinger, Helmut

    This paper presents an assessment of automatically generated multimodal referring expressions as produced by embodied conversational agents in a virtual world. The algorithm used for this purpose employs general principles of human motor control and cooperativity in dialogues that can be

  4. Automatic Description Generation from Images : A Survey of Models, Datasets, and Evaluation Measures

    NARCIS (Netherlands)

    Bernardi, Raffaella; Cakici, Ruket; Elliott, Desmond; Erdem, Aykut; Erdem, Erkut; Ikizler-Cinbis, Nazli; Keller, Frank; Muscat, Adrian; Plank, Barbara

    2016-01-01

    Automatic description generation from natural images is a challenging problem that has recently received a large amount of interest from the computer vision and natural language processing communities. In this survey, we classify the existing approaches based on how they conceptualize this problem,

  5. Automatic generation of indoor navigable space using a point cloud and its scanner trajectory

    NARCIS (Netherlands)

    Staats, B. R.; Diakite, A.A.; Voûte, R.; Zlatanova, S.

    2017-01-01

    Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may

  6. Students' Feedback Preferences: How Do Students React to Timely and Automatically Generated Assessment Feedback?

    Science.gov (United States)

    Bayerlein, Leopold

    2014-01-01

    This study assesses whether or not undergraduate and postgraduate accounting students at an Australian university differentiate between timely feedback and extremely timely feedback, and whether or not the replacement of manually written formal assessment feedback with automatically generated feedback influences students' perception of feedback…

  7. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    Science.gov (United States)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

    The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the 'randn' function in the MATLAB program and was used. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for the given patient might be different. The PDF of the rectal NTCP was obtained automatically for each group except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. We showed that during the prostate IMRT optimization, the patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the
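
    The density-estimation step can be illustrated with the short Python sketch below: synthetic NTCP values, centred on an assumed fitted curve with a 1% standard deviation (mimicking the "calculated data" group), are turned into a probability density function with a Gaussian kernel. The fitted curve and window width are placeholders, not the study's values.

```python
# Illustrative sketch of Gaussian-kernel density estimation of rectal NTCP
# values.  The fitted curve, sample size and bandwidth are assumptions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def fitted_ntcp(v_ratio_percent: float) -> float:
    """Hypothetical stand-in for the NTCP-vs-V%ratio curve fitted to clinical data."""
    return 0.02 + 0.004 * v_ratio_percent

# "calculated data": random NTCPs with mean on the fitted curve and 1% standard deviation
v_ratio = 30.0
ntcp_samples = rng.normal(loc=fitted_ntcp(v_ratio), scale=0.01, size=200)

# Gaussian-kernel density estimate; bw_method plays the role of the window width
pdf = gaussian_kde(ntcp_samples, bw_method=0.3)

grid = np.linspace(ntcp_samples.min(), ntcp_samples.max(), 5)
for x, d in zip(grid, pdf(grid)):
    print(f"NTCP={x:.3f}  density={d:.2f}")
```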

  8. Automatic WSDL-guided Test Case Generation for PropEr Testing of Web Services

    Directory of Open Access Journals (Sweden)

    Konstantinos Sagonas

    2012-10-01

    With web services already being key ingredients of modern web systems, automatic and easy-to-use but at the same time powerful and expressive testing frameworks for web services are increasingly important. Our work aims at fully automatic testing of web services: ideally the user only specifies properties that the web service is expected to satisfy, in the form of input-output relations, and the system handles all the rest. In this paper we present in detail the component which lies at the heart of this system: how the WSDL specification of a web service is used to automatically create test case generators that can be fed to PropEr, a property-based testing tool, to create structurally valid random test cases for its operations and check its responses. Although the process is fully automatic, our tool optionally allows the user to easily modify its output to either add semantic information to the generators or write properties that test for more involved functionality of the web services.
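
    As an analogous sketch only (the actual system emits Erlang generators for PropEr), the Python fragment below reads element declarations from a small WSDL/XSD-style schema and derives random generators for structurally valid test values; the type mapping and the schema fragment are illustrative assumptions.

```python
# Analogous sketch: map declared element types from an XSD fragment to random
# generators for test values.  The real tool targets PropEr, not Python.
import random
import string
import xml.etree.ElementTree as ET

XSD = "{http://www.w3.org/2001/XMLSchema}"

GENERATORS = {
    "xsd:string":  lambda: "".join(random.choices(string.ascii_letters, k=8)),
    "xsd:int":     lambda: random.randint(-2**31, 2**31 - 1),
    "xsd:boolean": lambda: random.choice([True, False]),
}

def generators_from_schema(schema_xml: str) -> dict:
    """Map each declared element name to a random-value generator."""
    root = ET.fromstring(schema_xml)
    gens = {}
    for elem in root.iter(XSD + "element"):
        name, xsd_type = elem.get("name"), elem.get("type")
        if name and xsd_type in GENERATORS:
            gens[name] = GENERATORS[xsd_type]
    return gens

if __name__ == "__main__":
    schema = """
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <xsd:element name="customerName" type="xsd:string"/>
      <xsd:element name="quantity" type="xsd:int"/>
    </xsd:schema>"""
    gens = generators_from_schema(schema)
    print({field: gen() for field, gen in gens.items()})   # one random test case
```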

  9. Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse.

    Science.gov (United States)

    Hunter, James; Freer, Yvonne; Gatt, Albert; Reiter, Ehud; Sripada, Somayajulu; Sykes, Cindy

    2012-11-01

    Our objective was to determine whether and how a computer system could automatically generate helpful natural language nursing shift summaries solely from an electronic patient record system, in a neonatal intensive care unit (NICU). A system was developed which automatically generates partial NICU shift summaries (for the respiratory and cardiovascular systems), using data-to-text technology. It was evaluated for 2 months in the NICU at the Royal Infirmary of Edinburgh, under supervision. In an on-ward evaluation, a substantial majority of the summaries was found by outgoing and incoming nurses to be understandable (90%), and a majority was found to be accurate (70%), and helpful (59%). The evaluation also served to identify some outstanding issues, especially with regard to extra content the nurses wanted to see in the computer-generated summaries. It is technically possible automatically to generate limited natural language NICU shift summaries from an electronic patient record. However, it proved difficult to handle electronic data that was intended primarily for display to the medical staff, and considerable engineering effort would be required to create a deployable system from our proof-of-concept software. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have a hierarchical tree-like topography with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, in convex polygon shape, can be extracted at each level in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, as well as the implementation of the method.

  11. Automatic Generation of Deep Web Wrappers based on Discovery of Repetition

    OpenAIRE

    Nakatoh, Tetsuya; Yamada, Yasuhiro; Hirokawa, Sachio

    2004-01-01

    A Deep Web wrapper is a program that extracts content from search results. We propose a new automatic wrapper generation algorithm which discovers a repetitive pattern in search results. The repetitive pattern is expressed by token sequences which consist of HTML tags, plain text and wild-cards. The algorithm applies string matching with mismatches to unify variations from the template and uses the FFT (fast Fourier transform) to attain efficiency. We show an empirical evaluation of ...
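
    The FFT part of the algorithm above can be illustrated with a small sketch that estimates the repetition period of a token sequence from its autocorrelation, computed via the FFT. The tokenisation, wild-cards and mismatch handling of the actual wrapper generator are not reproduced here; this is a toy example only.

      import numpy as np

      def repetition_period(tokens):
          # Map tokens to integer codes and look for the dominant repetition
          # period of the sequence via FFT-based autocorrelation.
          codes = {t: i for i, t in enumerate(sorted(set(tokens)))}
          x = np.array([codes[t] for t in tokens], dtype=float)
          x -= x.mean()
          n = len(x)
          spectrum = np.fft.rfft(x, 2 * n)          # zero-padded FFT
          acf = np.fft.irfft(spectrum * np.conj(spectrum))[:n]
          return 1 + int(np.argmax(acf[1:]))        # skip the zero-lag peak

      html_tokens = ["<li>", "text", "</li>"] * 5 + ["<hr>"]
      print(repetition_period(html_tokens))  # expected period: 3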

  12. Accuracy assessment of building point clouds automatically generated from iphone images

    Directory of Open Access Journals (Sweden)

    B. Sirmacek

    2014-06-01

    Full Text Available Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as an input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we have classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are calculated as (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, and quick and real-time change detection purposes. However, further insights should first be obtained on the circumstances that are needed to guarantee successful point cloud generation from smartphone images.
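
    The headline accuracy figures above (outlier percentage and mean point-to-point distance to the TLS cloud) come down to a nearest-neighbour query between two registered clouds. A rough Python sketch with synthetic arrays is shown below; the 0.5 m outlier threshold and the random test data are assumptions for illustration, not the authors' processing chain.

      import numpy as np
      from scipy.spatial import cKDTree

      def cloud_to_cloud_stats(test_cloud, reference_cloud, outlier_thresh=0.5):
          # Distance from every test point to its nearest reference point.
          tree = cKDTree(reference_cloud)
          dists, _ = tree.query(test_cloud)
          outlier_ratio = 100.0 * np.mean(dists > outlier_thresh)
          inliers = dists[dists <= outlier_thresh]
          return outlier_ratio, inliers.mean(), inliers.std()

      # Synthetic stand-in for an iPhone cloud registered onto a TLS cloud.
      rng = np.random.default_rng(0)
      tls = rng.uniform(0, 10, size=(5000, 3))
      iphone = tls[:3000] + rng.normal(0, 0.1, size=(3000, 3))

      ratio, mean_d, std_d = cloud_to_cloud_stats(iphone, tls)
      print(f"outliers: {ratio:.2f}%  mean: {mean_d:.3f} m  std: {std_d:.3f} m")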

  13. A toolchain for the automatic generation of computer codes for correlated wavefunction calculations.

    Science.gov (United States)

    Krupička, Martin; Sivalingam, Kantharuban; Huntington, Lee; Auer, Alexander A; Neese, Frank

    2017-06-05

    In this work, the automated generator environment for ORCA (ORCA-AGE) is described. It is a powerful toolchain for the automatic implementation of wavefunction-based quantum chemical methods. ORCA-AGE consists of three main modules: (1) generation of "raw" equations from a second quantized Ansatz for the wavefunction, (2) factorization and optimization of equations, and (3) generation of actual computer code. We generate code for the ORCA package, making use of the powerful functionality for wavefunction-based correlation calculations that is already present in the code. The equation generation makes use of the most elementary commutation relations and hence is extremely general. Consequently, code can be generated for single reference as well as multireference approaches and spin-independent as well as spin-dependent operators. The performance of the generated code is demonstrated through comparison with efficient hand-optimized code for some well-understood standard configuration interaction and coupled cluster methods. In general, the speed of the generated code is no more than 30% slower than the hand-optimized code, thus allowing for routine application of canonical ab initio methods to molecules with about 500-1000 basis functions. Using the toolchain, complicated methods, especially those surpassing human ability for handling complexity, can be efficiently and reliably implemented in very short times. This enables the developer to shift the attention from debugging code to the physical content of the chosen wavefunction Ansatz. Automatic code generation also has the desirable property that any improvement in the toolchain immediately applies to all generated code. © 2017 Wiley Periodicals, Inc.

  14. Automatic generation of stop word lists for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J

    2013-01-08

    Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
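
    The selection rule described above is concrete enough to paraphrase in code: count how often each term occurs adjacent to a keyword occurrence versus inside one, keep the terms whose adjacency-to-frequency ratio is not below a threshold, and truncate the list. The Python sketch below is a loose, hypothetical reading of that rule; the tokenisation, thresholds and truncation criterion are assumptions rather than the patented procedure.

      from collections import Counter
      import re

      def generate_stoplist(documents, keywords, min_ratio=1.0, max_size=100):
          # Loose paraphrase: terms that occur next to keyword occurrences more
          # often than inside them are treated as stop-word candidates.
          kw_patterns = [kw.lower().split() for kw in keywords]
          adjacency = Counter()   # term occurs immediately before/after a keyword
          within = Counter()      # term occurs inside a keyword occurrence
          for doc in documents:
              tokens = re.findall(r"[a-z0-9']+", doc.lower())
              for pat in kw_patterns:
                  k = len(pat)
                  for i in range(len(tokens) - k + 1):
                      if tokens[i:i + k] != pat:
                          continue
                      for t in pat:
                          within[t] += 1
                      if i > 0:
                          adjacency[tokens[i - 1]] += 1
                      if i + k < len(tokens):
                          adjacency[tokens[i + k]] += 1
          candidates = [t for t in adjacency
                        if adjacency[t] / max(within[t], 1) >= min_ratio]
          candidates.sort(key=lambda t: adjacency[t], reverse=True)
          return candidates[:max_size]

      docs = ["automatic generation of stop word lists for information retrieval",
              "the stop words support analysis of information retrieval results"]
      print(generate_stoplist(docs, keywords=["information retrieval", "stop words"]))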

  15. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    Science.gov (United States)

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.

  16. An Efficient Method for Automatic Generation of Linearly Independent Paths in White-box Testing

    Directory of Open Access Journals (Sweden)

    Xinyang Wang

    2015-04-01

    Full Text Available Testing is one of the most significant quality assurance measures for software. It has been shown that software testing is one of the most critical and important phases in the software engineering life cycle. In general, software testing takes around 40-60% of the effort, time and cost. Structure-oriented test methods define test cases on the basis of internal program structures and are widely used. Path-based testing is one of the important structure-oriented test methods used during software development. However, there is still a lack of automatic and highly efficient tools for generating basic paths in white-box testing. In view of this, an automatic and efficient method for generating basic paths is proposed in this paper. The method first transforms the source code into the corresponding control flow graph (CFG). By modifying the original CFG into a strongly connected graph, a new algorithm (ABPC) is designed to automatically construct all basic paths. The ABPC algorithm has computational complexity linear in the total number of edges and nodes in the CFG. Through performance evaluation on many examples, it is shown that the proposed method is correct and scalable to very large test cases. The proposed method can easily be applied to basis path testing.
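
    The core of the basis-path idea above (derive a control flow graph, then construct a set of linearly independent paths whose count equals the cyclomatic complexity) can be illustrated with a small sketch. The depth-first enumeration and rank test below are a stand-in for the published ABPC algorithm, shown here for a simple CFG with two successive if/else branches.

      import numpy as np

      def simple_paths(succ, node, exit_node, visited):
          # Enumerate simple (node-disjoint) paths from node to exit_node.
          if node == exit_node:
              yield [node]
              return
          for nxt in succ.get(node, []):
              if nxt not in visited:
                  for rest in simple_paths(succ, nxt, exit_node, visited | {nxt}):
                      yield [node] + rest

      def basis_paths(edges, entry, exit_node):
          # Cyclomatic complexity V(G) = E - N + 2 gives the number of
          # linearly independent paths to collect.
          nodes = {n for e in edges for n in e}
          target = len(edges) - len(nodes) + 2
          index = {e: i for i, e in enumerate(edges)}
          succ = {}
          for a, b in edges:
              succ.setdefault(a, []).append(b)

          chosen, vectors = [], []
          for path in simple_paths(succ, entry, exit_node, {entry}):
              vec = np.zeros(len(edges))
              for e in zip(path, path[1:]):
                  vec[index[e]] = 1
              # Keep the path only if its edge-incidence vector adds rank.
              if np.linalg.matrix_rank(np.array(vectors + [vec])) > len(chosen):
                  chosen.append(path)
                  vectors.append(vec)
              if len(chosen) == target:
                  break
          return chosen

      # CFG of two successive if/else branches: V(G) = 8 - 7 + 2 = 3 basis paths.
      cfg = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (4, 6), (5, 7), (6, 7)]
      for p in basis_paths(cfg, entry=1, exit_node=7):
          print(p)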

  17. Infrared Cephalic-Vein to Assist Blood Extraction Tasks: Automatic Projection and Recognition

    Science.gov (United States)

    Lagüela, S.; Gesto, M.; Riveiro, B.; González-Aguilera, D.

    2017-05-01

    The thermal infrared band is not commonly used in photogrammetric and computer vision algorithms, mainly due to the low spatial resolution of this type of imagery. However, this band captures sub-superficial information, increasing the capabilities of visible bands regarding applications. This fact is especially important in biomedicine and biometrics, allowing the geometric characterization of interior organs and pathologies with photogrammetric principles, as well as automatic identification and labelling using computer vision algorithms. This paper presents advances of close-range photogrammetry and computer vision applied to thermal infrared imagery, with the final application of Augmented Reality in order to widen its use in the biomedical field. In this case, the thermal infrared image of the arm is acquired and simultaneously projected on the arm, together with the identification label of the cephalic-vein. This way, blood analysts are assisted in finding the vein for blood extraction, especially in those cases where identification by the human eye is a complex task. Vein recognition is performed based on the Gaussian temperature distribution in the area of the vein, while the calibration between projector and thermographic camera is developed through feature extraction and pattern recognition. The method is validated through its application to a set of volunteers, with different ages and genders, in such a way that different conditions of body temperature and vein depth are covered for the applicability and reproducibility of the method.

  18. Sherlock: A Semi-automatic Framework for Quiz Generation Using a Hybrid Semantic Similarity Measure.

    Science.gov (United States)

    Lin, Chenghua; Liu, Dong; Pang, Wei; Wang, Zhe

    In this paper, we present a semi-automatic system (Sherlock) for quiz generation using linked data and textual descriptions of RDF resources. Sherlock is distinguished from existing quiz generation systems in its generic framework for domain-independent quiz generation as well as in the ability of controlling the difficulty level of the generated quizzes. Difficulty scaling is non-trivial, and it is fundamentally related to cognitive science. We approach the problem with a new angle by perceiving the level of knowledge difficulty as a similarity measure problem and propose a novel hybrid semantic similarity measure using linked data. Extensive experiments show that the proposed semantic similarity measure outperforms four strong baselines with more than 47 % gain in clustering accuracy. In addition, we discovered in the human quiz test that the model accuracy indeed shows a strong correlation with the pairwise quiz similarity.

  19. Development of ANJOYMC Program for Automatic Generation of Monte Carlo Cross Section Libraries

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Lee, Chung Chan

    2007-03-15

    The NJOY code developed at Los Alamos National Laboratory generates cross section libraries in ACE format for Monte Carlo codes such as MCNP and McCARD by processing evaluated nuclear data in ENDF/B format. It takes a long time to prepare all the NJOY input files for hundreds of nuclides at various temperatures, and the input files can contain errors. In order to solve these problems, the ANJOYMC program has been developed. Using a simple user input deck, this program not only generates all the NJOY input files automatically, but also generates a batch file to perform all the NJOY calculations. The ANJOYMC program is written in Fortran90 and can be executed under the Windows and Linux operating systems on a personal computer. Cross section libraries in ACE format can thus be generated quickly and without errors from a simple user input deck.
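
    In spirit, ANJOYMC is an input-deck and batch-file generator. The Python miniature below shows that workflow shape only; the NJOY input template is drastically simplified and the card contents are placeholders, so it is not the actual ANJOYMC deck format (the real tool is written in Fortran90).

      from pathlib import Path

      # Hypothetical, heavily simplified NJOY input skeleton; a real deck would
      # chain moder/reconr/broadr/.../acer modules with proper card contents.
      TEMPLATE = """\
      moder
      20 -21
      reconr
      -21 -22
      'pendf for {nuclide}'/
      {mat} 1/
      0.001/
      0/
      stop
      """

      def write_njoy_inputs(nuclides, temperatures, workdir="njoy_runs"):
          work = Path(workdir)
          work.mkdir(exist_ok=True)
          batch_lines = []
          for name, mat in nuclides.items():
              for temp in temperatures:
                  case = work / f"{name}_{int(temp)}K.inp"
                  case.write_text(TEMPLATE.format(nuclide=name, mat=mat))
                  batch_lines.append(f"njoy < {case.name} > {case.stem}.out")
          # One batch script that runs every generated case in sequence.
          (work / "run_all.sh").write_text("\n".join(batch_lines) + "\n")

      write_njoy_inputs({"U235": 9228, "U238": 9237}, temperatures=[293.6, 600.0])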

  20. Evaluating the Potential of Imaging Rover for Automatic Point Cloud Generation

    Science.gov (United States)

    Cera, V.; Campi, M.

    2017-02-01

    The paper presents a phase of an on-going interdisciplinary research project concerning the medieval site of Casertavecchia (Italy). The project aims to develop a multi-technique approach for semantically enriched 3D modeling, starting from the automatic acquisition of several types of data. In particular, the paper reports the results of the first stage, concerning the Cathedral square of the medieval village. The work is focused on evaluating the potential of an imaging rover for automatic point cloud generation. Each survey technique has its own advantages and disadvantages, so the ideal approach is an integrated methodology in order to maximize the performance of each instrument. The experimentation was conducted on the Cathedral square of the ancient site of Casertavecchia, in Campania, Italy.

  1. Design of a Computer-Assisted System to Automatically Detect Cell Types Using ANA IIF Images for the Diagnosis of Autoimmune Diseases.

    Science.gov (United States)

    Cheng, Chung-Chuan; Lu, Chun-Feng; Hsieh, Tsu-Yi; Lin, Yaw-Jen; Taur, Jin-Shiuh; Chen, Yung-Fu

    2015-10-01

    Indirect immunofluorescence technique applied on HEp-2 cell substrates provides the major screening method to detect ANA patterns in the diagnosis of autoimmune diseases. Currently, the ANA patterns are mostly inspected by experienced physicians to identify abnormal cell patterns. The objective of this study is to design a computer-assisted system to automatically detect cell patterns of IIF images for the diagnosis of autoimmune diseases in the clinical setting. The system simulates the functions of modern flow cytometer and provides the diagnostic reports generated by the system to the technicians and physicians through the radar graphs, box-plots, and tables. The experimental results show that, among the IIF images collected from 17 patients, 6 were classified as coarse-speckled, 3 as diffused, 2 as discrete-speckled, 1 as fine-speckled, 2 as nucleolar, and 3 as peripheral patterns, which were consistent with the patterns determined by the physicians. In addition to recognition of cell patterns, the system also provides the function to automatically generate the report for each patient. The time needed for the whole procedure is less than 30 min, which is more efficient than the manual operation of the physician after inspecting the ANA IIF images. Besides, the system can be easily deployed on many desktop and laptop computers. In conclusion, the designed system, containing functions for automatic detection of ANA cell pattern and generation of diagnostic report, is effective and efficient to assist physicians to diagnose patients with autoimmune diseases. The limitations of the current developed system include (1) only a unique cell pattern was considered for the IIF images collected from a patient, and (2) the cells during the process of mitosis were not adopted for cell classification.

  2. Automatic Generation of Data Types for Classification of Deep Web Sources

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A H; Buttler, D J; Critchlow, T J

    2005-02-14

    A Service Class Description (SCD) is an effective meta-data based approach for discovering Deep Web sources whose data exhibit some regular patterns. However, it is tedious and error prone to create an SCD description manually. Moreover, a manually created SCD is not adaptive to the frequent changes of Web sources. It requires its creator to identify all the possible input and output types of a service a priori. In many domains, it is impossible to exhaustively list all the possible input and output data types of a source in advance. In this paper, we describe machine learning approaches for automatic generation of the data types of an SCD. We propose two different approaches for learning data types of a class of Web sources. The Brute-Force Learner is able to generate data types that can achieve high recall, but with low precision. The Clustering-based Learner generates data types that have a high precision rate, but with a lower recall rate. We demonstrate the feasibility of these two learning-based solutions for automatic generation of data types for citation Web sources and present a quantitative evaluation of these two solutions.

  3. The PM-Assisted Reluctance Synchronous Starter/Generator (PM-RSM): Generator Experimental Characterization

    DEFF Research Database (Denmark)

    Pitic, Cristian Ilie; Tutelea, Lucian; Boldea, Ion

    2004-01-01

    Permanent Magnet-assisted Reluctance Synchronous Machines (PM-RSM) are well known for their lower initial costs and losses in a very wide constant power-speed characteristic. Therefore they are very suitable for hybrid or electrical vehicles. In this application a very good torque control is needed...... and the parameters of the machine have to be known precisely. The present paper introduces a series of the tests for parameters and efficiency determination of a PM-assisted RSM in the generator mode. The testing methods consist of standstill tests (dc decay), generator no load testing with capacitor and on load ac...

  4. Evaluation of Semi-Automatic Metadata Generation Tools: A Survey of the Current State of the Art

    National Research Council Canada - National Science Library

    Jung-ran Park; Andrew Brenza

    2015-01-01

      Assessment of the current landscape of semi-automatic metadata generation tools is particularly important considering the rapid development of digital repositories and the recent explosion of big data...

  5. On the application of bezier surfaces for GA-Fuzzy controller design for use in automatic generation control

    CSIR Research Space (South Africa)

    Boesack, CD

    2012-03-01

    Full Text Available Automatic Generation Control (AGC) of large interconnected power systems is typically performed by a PI or PID type control law. Recently, intelligent control techniques such as GA-Fuzzy controllers have been widely applied within the power...

  6. An exploration of the potential of Automatic Speech Recognition to assist and enable receptive communication in higher education

    Directory of Open Access Journals (Sweden)

    Mike Wald

    2006-12-01

    Full Text Available The potential use of Automatic Speech Recognition to assist receptive communication is explored. The opportunities and challenges that this technology presents to students and staff are discussed and evaluated: providing captioning of speech online or in classrooms for deaf or hard-of-hearing students, and assisting blind, visually impaired or dyslexic learners to read and search learning material more readily by augmenting synthetic speech with natural recorded real speech. The automatic provision of online lecture notes, synchronised with speech, enables staff and students to focus on learning and teaching issues, while also benefiting learners unable to attend the lecture or who find it difficult or impossible to take notes at the same time as listening, watching and thinking.

  7. LHC-GCS a model-driven approach for automatic PLC and SCADA code generation

    CERN Document Server

    Thomas, Geraldine; Barillère, Renaud; Cabaret, Sebastien; Kulman, Nikolay; Pons, Xavier; Rochez, Jacques

    2005-01-01

    The LHC experiments’ Gas Control System (LHC GCS) project [1] aims to provide the four LHC experiments (ALICE, ATLAS, CMS and LHCb) with control for their 23 gas systems. To ease the production and maintenance of 23 control systems, a model-driven approach has been adopted to generate automatically the code for the Programmable Logic Controllers (PLCs) and for the Supervision Control And Data Acquisition (SCADA) systems. The first milestones of the project have been achieved. The LHC GCS framework [4] and the generation tools have been produced. A first control application has actually been generated and is in production, and a second is in preparation. This paper describes the principle and the architecture of the model-driven solution. It will in particular detail how the model-driven solution fits with the LHC GCS framework and with the UNICOS [5] data-driven tools.

  8. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    Directory of Open Access Journals (Sweden)

    Luo Hanwu

    2016-01-01

    Full Text Available This paper presents a novel method to solve for the initial lightning breakdown current by effectively combining the ATP and MATLAB simulation software, with the aim of evaluating the lightning protection performance of transmission lines. Firstly, the executable ATP simulation model is generated automatically from the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, through an interface program coded in MATLAB. Then, data are extracted from the LIS files obtained by executing the ATP simulation model, and the occurrence of transmission line breakdown is determined from the relevant data in the LIS file. The lightning current amplitude is reduced when breakdown occurs, and increased otherwise. Thus the initial lightning breakdown current of a transmission line with given parameters can be determined accurately by continuously changing the lightning current amplitude, which is realized by a loop computing algorithm coded in MATLAB. The method proposed in this paper can generate the ATP simulation program automatically, and facilitates the lightning protection performance assessment of transmission lines.
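
    The amplitude-adjusting loop described above is essentially a bisection search wrapped around an external simulation. A hedged Python sketch is given below; run_atp_case is a placeholder for executing the generated ATP model and parsing its LIS output, which the paper does through MATLAB.

      def find_initial_breakdown_current(run_atp_case, low_ka=1.0, high_ka=400.0,
                                         tol_ka=0.5):
          """Bisection on the lightning current amplitude.

          run_atp_case(amplitude_ka) -> True if the LIS output of the generated
          ATP model shows insulator flashover (breakdown), False otherwise.
          Assumes no breakdown at low_ka and breakdown at high_ka.
          """
          while high_ka - low_ka > tol_ka:
              mid = 0.5 * (low_ka + high_ka)
              if run_atp_case(mid):
                  high_ka = mid      # breakdown occurred: reduce the amplitude
              else:
                  low_ka = mid       # no breakdown: increase the amplitude
          return high_ka             # smallest amplitude that caused breakdown

      # Placeholder stand-in for the ATP/MATLAB co-simulation: pretend the line
      # flashes over above 187 kA.
      print(find_initial_breakdown_current(lambda i_ka: i_ka >= 187.0))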

  9. Ontorat: automatic generation of new ontology terms, annotations, and axioms based on ontology design patterns.

    Science.gov (United States)

    Xiang, Zuoshuang; Zheng, Jie; Lin, Yu; He, Yongqun

    2015-01-01

    It is time-consuming to build an ontology with many terms and axioms. Thus it is desired to automate the process of ontology development. Ontology Design Patterns (ODPs) provide a reusable solution to solve a recurrent modeling problem in the context of ontology engineering. Because ontology terms often follow specific ODPs, the Ontology for Biomedical Investigations (OBI) developers proposed a Quick Term Templates (QTTs) process targeted at generating new ontology classes following the same pattern, using term templates in a spreadsheet format. Inspired by the ODPs and QTTs, the Ontorat web application is developed to automatically generate new ontology terms, annotations of terms, and logical axioms based on a specific ODP(s). The inputs of an Ontorat execution include axiom expression settings, an input data file, ID generation settings, and a target ontology (optional). The axiom expression settings can be saved as a predesigned Ontorat setting format text file for reuse. The input data file is generated based on a template file created by a specific ODP (text or Excel format). Ontorat is an efficient tool for ontology expansion. Different use cases are described. For example, Ontorat was applied to automatically generate over 1,000 Japan RIKEN cell line cell terms with both logical axioms and rich annotation axioms in the Cell Line Ontology (CLO). Approximately 800 licensed animal vaccines were represented and annotated in the Vaccine Ontology (VO) by Ontorat. The OBI team used Ontorat to add assay and device terms required by ENCODE project. Ontorat was also used to add missing annotations to all existing Biobank specific terms in the Biobank Ontology. A collection of ODPs and templates with examples are provided on the Ontorat website and can be reused to facilitate ontology development. With ever increasing ontology development and applications, Ontorat provides a timely platform for generating and annotating a large number of ontology terms by following

  10. Machine-assisted editing of user-generated content

    Science.gov (United States)

    Cremer, Markus; Cook, Randall

    2009-02-01

    Over recent years user-generated content has become ubiquitously available and an attractive entertainment source for millions of end-users. Particularly for larger events, where many people use their devices to capture the action, a great number of short video clips are made available through appropriate web services. The objective of this presentation is to describe a way to combine these clips by analyzing them, and automatically reconstruct the time line in which the individual video clips were captured. This will enable people to easily create a compelling multimedia experience by leveraging multiple clips taken by different users from different angles, and across different time spans. The user will be able to shift into the role of a movie director mastering a multi-camera recording of the event. To achieve this goal, the audio portion of the video clips is analyzed, and waveform characteristics are computed with high temporal granularity in order to facilitate precise time alignment and overlap computation of the user-generated clips. Special care has to be given not only to the robustness of the selected audio features against ambient noise and various distortions, but also to the matching algorithm used to align the user-generated clips properly.
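
    The time-line reconstruction above depends on aligning clips by their audio. One elementary way to estimate the offset between two clips is to cross-correlate coarse audio envelopes, as in the toy Python sketch below; the real system uses more robust waveform features, so this is an illustration of the alignment step only.

      import numpy as np

      def estimate_offset(env_a, env_b):
          # Offset (in envelope frames) that best aligns clip B onto clip A,
          # found as the argmax of their full cross-correlation.
          a = (env_a - env_a.mean()) / (env_a.std() + 1e-9)
          b = (env_b - env_b.mean()) / (env_b.std() + 1e-9)
          corr = np.correlate(a, b, mode="full")
          return int(np.argmax(corr)) - (len(b) - 1)

      # Toy envelopes: clip B starts 40 frames after clip A.
      rng = np.random.default_rng(1)
      scene = rng.random(500)
      clip_a = scene[:300]
      clip_b = scene[40:340] + rng.normal(0, 0.05, 300)

      print(estimate_offset(clip_a, clip_b))   # expected: 40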

  11. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2014-12-01

    Full Text Available Intelligent seamline selection for image mosaicking is an area of active research in the fields of massive data processing, computer vision, photogrammetry and remote sensing. In mosaicking applications for digital orthophoto maps (DOMs), the visual transition in mosaics is mainly caused by differences in positioning accuracy, image tone and relief displacement of high ground objects between overlapping DOMs. Among these three factors, relief displacement, which prevents the seamless mosaicking of images, is relatively more difficult to address. To minimize visual discontinuities, many optimization algorithms have been studied for the automatic selection of seamlines to avoid high ground objects. Thus, a new automatic seamline selection algorithm using a digital surface model (DSM) is proposed. The main idea of this algorithm is to guide a seamline toward a low area on the basis of the elevation information in a DSM. Given that the elevation of a DSM is not completely synchronous with a DOM, a new model, called the orthoimage elevation synchronous model (OESM), is derived and introduced. OESM can accurately reflect the elevation information for each DOM unit. Through the morphological processing of the OESM data in the overlapping area, an initial path network is obtained for seamline selection. Subsequently, a cost function is defined on the basis of several measurements, and Dijkstra’s algorithm is adopted to determine the least-cost path from the initial network. Finally, the proposed algorithm is employed for automatic seamline network construction; the effective mosaic polygon of each image is determined, and a seamless mosaic is generated. The experiments with three different datasets indicate that the proposed method meets the requirements for seamline network construction. In comparative trials, the generated seamlines pass through fewer ground objects with low time consumption.
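
    The seamline step of the method above reduces to a least-cost path search (Dijkstra's algorithm) over a cost field derived from the elevation data, so that the line is pulled toward low ground. The small grid-based Python illustration below uses plain elevation as the cost, which is a stand-in for the paper's multi-measurement cost function.

      import heapq
      import numpy as np

      def least_cost_seamline(elevation, start, goal):
          # Dijkstra over a 4-connected grid; stepping onto a cell costs its
          # elevation, so the path prefers low ground between start and goal.
          rows, cols = elevation.shape
          dist = np.full((rows, cols), np.inf)
          prev = {}
          dist[start] = 0.0
          heap = [(0.0, start)]
          while heap:
              d, (r, c) = heapq.heappop(heap)
              if (r, c) == goal:
                  break
              if d > dist[r, c]:
                  continue
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < rows and 0 <= nc < cols:
                      nd = d + elevation[nr, nc]
                      if nd < dist[nr, nc]:
                          dist[nr, nc] = nd
                          prev[(nr, nc)] = (r, c)
                          heapq.heappush(heap, (nd, (nr, nc)))
          path, node = [], goal
          while node != start:
              path.append(node)
              node = prev[node]
          return [start] + path[::-1]

      # Toy DSM: a tall "building" in the middle that the seamline should avoid.
      dsm = np.ones((5, 5))
      dsm[1:4, 2] = 50.0
      print(least_cost_seamline(dsm, start=(2, 0), goal=(2, 4)))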

  12. Automatic generation of forward and inverse kinematics for a reconfigurable modular manipulator system

    Science.gov (United States)

    Kelmar, Laura; Khosla, Pradeep K.

    1990-01-01

    An algorithm is proposed for automatically generating both the forward and inverse kinematics of a serial-link N-degree-of-freedom reconfigurable manipulator (RM). Generation of the kinematic equations that govern a modular manipulator starts with geometric descriptions of the units, or modules, as well as their sequence in the manipulator. This geometric information is used to obtain the Denavit-Hartenberg (DH) parameters of an RM. The DH kinematic parameters are then used to obtain the forward kinematic transformation of the system. The problem of obtaining the inverse kinematics of RMs is addressed, and the idea of scaling an RM to automate the inverse kinematics and make the procedure as general as possible is proposed.
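
    Once the DH parameters of the assembled modules are known, the forward kinematics is a product of one homogeneous transform per link. The compact sketch below uses the standard DH convention; the two-link parameters at the end are arbitrary example values, not a particular module configuration from the paper.

      import numpy as np

      def dh_transform(theta, d, a, alpha):
          # Homogeneous transform of one link in the standard DH convention.
          ct, st = np.cos(theta), np.sin(theta)
          ca, sa = np.cos(alpha), np.sin(alpha)
          return np.array([
              [ct, -st * ca,  st * sa, a * ct],
              [st,  ct * ca, -ct * sa, a * st],
              [0.0,      sa,       ca,      d],
              [0.0,     0.0,      0.0,    1.0],
          ])

      def forward_kinematics(dh_rows):
          # Chain the per-link transforms from base to end-effector.
          T = np.eye(4)
          for theta, d, a, alpha in dh_rows:
              T = T @ dh_transform(theta, d, a, alpha)
          return T

      # Example: planar 2R arm with 0.5 m links, both joints at 30 degrees.
      q1 = q2 = np.radians(30.0)
      dh = [(q1, 0.0, 0.5, 0.0), (q2, 0.0, 0.5, 0.0)]
      print(forward_kinematics(dh)[:3, 3])   # end-effector position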

  13. Optimal gravitational search algorithm for automatic generation control of interconnected power systems

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2014-09-01

    Full Text Available An attempt is made for the effective application of Gravitational Search Algorithm (GSA) to optimize PI/PIDF controller parameters in Automatic Generation Control (AGC) of interconnected power systems. Initially, comparison of several conventional objective functions reveals that ITAE yields better system performance. Then, the parameters of GSA technique are properly tuned and the GSA control parameters are proposed. The superiority of the proposed approach is demonstrated by comparing the results of some recently published techniques such as Differential Evolution (DE), Bacteria Foraging Optimization Algorithm (BFOA) and Genetic Algorithm (GA). Additionally, sensitivity analysis is carried out that demonstrates the robustness of the optimized controller parameters to wide variations in operating loading condition and time constants of speed governor, turbine, tie-line power. Finally, the proposed approach is extended to a more realistic power system model by considering the physical constraints such as reheat turbine, Generation Rate Constraint (GRC) and Governor Dead Band nonlinearity.

  14. Automatic generation control with thyristor controlled series compensator including superconducting magnetic energy storage units

    Directory of Open Access Journals (Sweden)

    Saroj Padhan

    2014-09-01

    Full Text Available In the present work, an attempt has been made to understand the dynamic performance of Automatic Generation Control (AGC) of a multi-area multi-unit thermal–thermal power system with the consideration of reheat turbine, Generation Rate Constraint (GRC) and time delay. Initially, the gains of the fuzzy PID controller are optimized using the Differential Evolution (DE) algorithm. The superiority of DE is demonstrated by comparing the results with a Genetic Algorithm (GA). After that, the performance of a Thyristor Controlled Series Compensator (TCSC) has been investigated. Further, a TCSC is placed in the tie-line and Superconducting Magnetic Energy Storage (SMES) units are considered in both areas. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions from their nominal values. It is observed that the optimum gains of the proposed controller need not be reset even if the system is subjected to wide variation in loading condition and system parameters.

  15. Automatic Generation Control Study in Two Area Reheat Thermal Power System

    Science.gov (United States)

    Pritam, Anita; Sahu, Sibakanta; Rout, Sushil Dev; Ganthia, Sibani; Prasad Ganthia, Bibhu

    2017-08-01

    Industrial pollution is destroying our living environment. An electric grid system contains many vital pieces of equipment such as generators, motors, transformers and loads. There is always an imbalance between the sending-end and receiving-end systems, which makes the system unstable. Such errors and faults should be solved and corrected as soon as possible, because otherwise they create further faults and system errors and reduce the efficiency of the whole power system. The main problem arising from such faults is frequency deviation, which causes instability in the power system and may cause permanent damage to the system. The mechanism studied in this paper therefore makes the system stable and balanced by regulating the frequency at both the sending and receiving ends using automatic generation control with various controllers, taking a two-area reheat thermal power system into account.

  16. Automatic verification of SSD and generation of respiratory signal with lasers in radiotherapy: a preliminary study.

    Science.gov (United States)

    Prabhakar, Ramachandran

    2012-01-01

    Source to surface distance (SSD) plays a very important role in external beam radiotherapy treatment verification. In this study, a simple technique has been developed to verify the SSD automatically with lasers. The study also suggests a methodology for determining the respiratory signal with lasers. Two lasers, red and green, are mounted on the collimator head of a Clinac 2300 C/D linac along with a camera to determine the SSD. Software (SSDLas) was developed to estimate the SSD automatically from the images captured by a 12-megapixel camera. To determine the SSD to a patient surface, the external body contour of the central axis transverse computed tomography (CT) cut is imported into the software. Another important aspect in radiotherapy is the generation of the respiratory signal. The changes in the lasers' separation as the patient breathes are converted to produce a respiratory signal. Multiple frames of laser images were acquired from the camera mounted on the collimator head and each frame was analyzed with SSDLas to generate the respiratory signal. The SSD as observed with the ODI on the machine and the SSD measured by the SSDLas software were found to agree within the tolerance limit. The methodology described for generating the respiratory signals will be useful for the treatment of mobile tumors such as those of the lung, liver, breast, pancreas, etc. The technique described for determining the SSD and generating respiratory signals using lasers is cost effective and simple to implement. Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  17. Field Robotics in Sports: Automatic Generation of guidance Lines for Automatic Grass Cutting, Striping and Pitch Marking of Football Playing Fields

    OpenAIRE

    Hameed, Ibrahim A.; Sorrenson, Claus G.; Bochtis, Dionysis; Green, Ole

    2011-01-01

    Progress is constantly being made and new applications are constantly coming out in the area of field robotics. In this paper, a promising application of field robotics to football playing fields is introduced. An algorithmic approach for generating the waypoints required for the guidance of a GPS-based field robot through a football playing field to automatically carry out periodic tasks such as cutting the grass, pitch and line marking illustrations and lawn striping is represent...

  18. Automatic generation of 3D motifs for classification of protein binding sites

    Directory of Open Access Journals (Sweden)

    Herzyk Pawel

    2007-08-01

    Full Text Available Abstract Background Since many of the new protein structures delivered by high-throughput processes do not have any known function, there is a need for structure-based prediction of protein function. Protein 3D structures can be clustered according to their fold or secondary structures to produce classes of some functional significance. A recent alternative has been to detect specific 3D motifs which are often associated to active sites. Unfortunately, there are very few known 3D motifs, which are usually the result of a manual process, compared to the number of sequential motifs already known. In this paper, we report a method to automatically generate 3D motifs of protein structure binding sites based on consensus atom positions and evaluate it on a set of adenine based ligands. Results Our new approach was validated by generating automatically 3D patterns for the main adenine based ligands, i.e. AMP, ADP and ATP. Out of the 18 detected patterns, only one, the ADP4 pattern, is not associated with well defined structural patterns. Moreover, most of the patterns could be classified as binding site 3D motifs. Literature research revealed that the ADP4 pattern actually corresponds to structural features which show complex evolutionary links between ligases and transferases. Therefore, all of the generated patterns prove to be meaningful. Each pattern was used to query all PDB proteins which bind either purine based or guanine based ligands, in order to evaluate the classification and annotation properties of the pattern. Overall, our 3D patterns matched 31% of proteins with adenine based ligands and 95.5% of them were classified correctly. Conclusion A new metric has been introduced allowing the classification of proteins according to the similarity of atomic environment of binding sites, and a methodology has been developed to automatically produce 3D patterns from that classification. A study of proteins binding adenine based ligands showed that

  19. Differential evolution algorithm based automatic generation control for interconnected power systems with

    Directory of Open Access Journals (Sweden)

    Banaja Mohanty

    2014-09-01

    Full Text Available This paper presents the design and performance analysis of Differential Evolution (DE) algorithm based Proportional–Integral (PI) and Proportional–Integral–Derivative (PID) controllers for Automatic Generation Control (AGC) of an interconnected power system. Initially, a two-area thermal system with governor dead-band nonlinearity is considered for the design and analysis purpose. In the proposed approach, the design problem is formulated as an optimization problem and DE is employed to search for optimal controller parameters. Three different objective functions are used for the design purpose. The superiority of the proposed approach has been shown by comparing the results with a recently published Craziness based Particle Swarm Optimization (CPSO) technique for the same interconnected power system. It is noticed that the dynamic performance of the DE optimized PI controller is better than that of the CPSO optimized PI controller. Additionally, controller parameters are tuned at different loading conditions so that an adaptive gain scheduling control strategy can be employed. The study is further extended to a more realistic two-area six-unit network with different power generating units such as thermal, hydro, wind and diesel units, considering boiler dynamics for thermal plants, Generation Rate Constraint (GRC) and Governor Dead Band (GDB) nonlinearity.

  20. Perfusion CT in acute stroke: effectiveness of automatically-generated colour maps.

    Science.gov (United States)

    Ukmar, Maja; Degrassi, Ferruccio; Pozzi Mucelli, Roberta Antea; Neri, Francesca; Mucelli, Fabio Pozzi; Cova, Maria Assunta

    2017-04-01

    To evaluate the accuracy of perfusion CT (pCT) in the definition of the infarcted core and the penumbra, comparing the data obtained from the evaluation of parametric maps [cerebral blood volume (CBV), cerebral blood flow (CBF) and mean transit time (MTT)] with software-generated colour maps. A retrospective analysis was performed to identify patients with suspected acute ischaemic strokes and who had undergone unenhanced CT and pCT carried out within 4.5 h from the onset of the symptoms. A qualitative evaluation of the CBV, CBF and MTT maps was performed, followed by an analysis of the colour maps automatically generated by the software. 26 patients were identified, but a direct CT follow-up was performed only on 19 patients after 24-48 h. In the qualitative analysis, 14 patients showed perfusion abnormalities. Specifically, 29 perfusion deficit areas were detected, of which 15 areas suggested the penumbra and the remaining 14 areas suggested the infarct. As for automatically software-generated maps, 12 patients showed perfusion abnormalities. 25 perfusion deficit areas were identified, 15 areas of which suggested the penumbra and the other 10 areas the infarct. The McNemar's test showed no statistically significant difference between the two methods of evaluation in highlighting infarcted areas proved later at CT follow-up. We demonstrated how pCT provides good diagnostic accuracy in the identification of acute ischaemic lesions. The limits of identification of the lesions mainly lie at the pons level and in the basal ganglia area. Qualitative analysis has proven to be more efficient in identification of perfusion lesions in comparison with software-generated maps. However, software-generated maps have proven to be very useful in the emergency setting. Advances in knowledge: The use of CT perfusion is requested in increasingly more patients in order to optimize the treatment, thanks also to the technological evolution of CT, which now allows a whole

  1. Generating rate equations for complex enzyme systems by a computer-assisted systematic method

    Directory of Open Access Journals (Sweden)

    Beard Daniel A

    2009-08-01

    Full Text Available Abstract Background While the theory of enzyme kinetics is fundamental to analyzing and simulating biochemical systems, the derivation of rate equations for complex mechanisms of enzyme-catalyzed reactions is cumbersome and error prone. Therefore, a number of algorithms and related computer programs have been developed to assist in such derivations. Yet although a number of algorithms, programs, and software packages are reported in the literature, one or more significant limitations are associated with each of these tools. Furthermore, none is freely available for download and use by the community. Results We have implemented an algorithm based on the schematic method of King and Altman (KA) that employs the topological theory of linear graphs for systematic generation of valid reaction patterns in a GUI-based stand-alone computer program called KAPattern. The underlying algorithm allows for the assumption of steady state, rapid equilibrium binding, and/or irreversibility for individual steps in catalytic mechanisms. The program can automatically generate MathML and MATLAB output files that users can easily incorporate into simulation programs. Conclusion A computer program, called KAPattern, for generating rate equations for complex enzyme systems is freely available and can be accessed at http://www.biocoda.org.

  2. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    Science.gov (United States)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying the Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as the limitations (such as low efficiency).

  3. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which is also dependent on the robot positioning in the Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process modeled on its components: product, process and resource; and by automatically configuring a sample-based motion problem and the transition-based rapid-random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  4. Parameter optimization of differential evolution algorithm for automatic playlist generation problem

    Science.gov (United States)

    Alamag, Kaye Melina Natividad B.; Addawe, Joel M.

    2017-11-01

    With the digitalization of music, music collections have grown substantially, and there is a need to create lists of music that filter a collection according to user preferences, giving rise to the Automatic Playlist Generation Problem (APGP). Previous attempts to solve this problem include the use of search and optimization algorithms. If a music database is very large, the algorithm to be used must be able to search the lists thoroughly, taking into account the quality of the playlist given a set of user constraints. In this paper we run an evolutionary meta-heuristic optimization algorithm, Differential Evolution (DE), with different combinations of parameter values and select the best-performing set when used to solve four standard test functions. Performance of the proposed algorithm is then compared with a standard Genetic Algorithm (GA) and a hybrid GA with Tabu Search. Numerical simulations are carried out to show that the Differential Evolution approach with the optimized parameter values gives better results.
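
    The study above repeats DE runs with different control-parameter settings on standard test functions. The sketch below shows a single such run of the classic DE/rand/1/bin scheme on the sphere function with one assumed (F, CR) setting; it does not reproduce the paper's parameter sweep or the playlist objective.

      import numpy as np

      def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                                 generations=200, seed=0):
          # Classic DE/rand/1/bin: mutate with three random distinct vectors,
          # binomial crossover, greedy selection.
          rng = np.random.default_rng(seed)
          dim = len(bounds)
          lo, hi = np.array(bounds).T
          pop = rng.uniform(lo, hi, size=(pop_size, dim))
          fit = np.array([f(x) for x in pop])
          for _ in range(generations):
              for i in range(pop_size):
                  a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                           size=3, replace=False)]
                  mutant = np.clip(a + F * (b - c), lo, hi)
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True        # keep at least one gene
                  trial = np.where(cross, mutant, pop[i])
                  if (ft := f(trial)) < fit[i]:
                      pop[i], fit[i] = trial, ft
          best = int(np.argmin(fit))
          return pop[best], fit[best]

      sphere = lambda x: float(np.sum(x ** 2))
      x_best, f_best = differential_evolution(sphere, bounds=[(-5, 5)] * 5)
      print(f_best)   # should be close to 0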

  5. Grey wolf optimizer based regulator design for automatic generation control of interconnected power system

    Directory of Open Access Journals (Sweden)

    Esha Gupta

    2016-12-01

    Full Text Available This paper presents an application of the grey wolf optimizer (GWO) to find the parameters of the primary governor loop for successful Automatic Generation Control (AGC) of a two-area interconnected power system. Two standard objective functions, Integral Square Error and Integral Time Absolute Error (ITAE), have been employed to carry out this parameter estimation process. Eigenvalue and dynamic response analyses reveal that the ITAE criterion yields better performance. The regulator performance obtained from GWO is compared with Genetic Algorithm (GA), Particle Swarm Optimization, and Gravitational Search Algorithm. Different types of perturbations and load changes are incorporated in order to establish the efficacy of the obtained design. It is observed that GWO outperforms all three optimization methods. The optimization performance of GWO is compared with the other algorithms on the basis of standard deviations in the values of parameters and objective functions.

  6. Automatic Multiple-Needle Surgical Planning of Robotic-Assisted Microwave Coagulation in Large Liver Tumor Therapy.

    Directory of Open Access Journals (Sweden)

    Shaoli Liu

    Full Text Available The "robotic-assisted liver tumor coagulation therapy" (RALTCT) system is a promising candidate for large liver tumor treatment in terms of accuracy and speed. A prerequisite for effective therapy is accurate surgical planning. However, it is difficult for the surgeon to perform surgical planning manually due to the difficulties associated with robot-assisted large liver tumor therapy. These main difficulties include the following aspects: (1) multiple needles are needed to destroy the entire tumor, (2) the insertion trajectories of the needles should avoid the ribs, blood vessels, and other tissues and organs in the abdominal cavity, (3) the placement of multiple needles should avoid interference with each other, (4) an inserted needle will cause some deformation of liver, which will result in changes in subsequently inserted needles' operating environment, and (5) the multiple needle-insertion trajectories should be consistent with the needle-driven robot's movement characteristics. Thus, an effective multiple-needle surgical planning procedure is needed. To overcome these problems, we present an automatic multiple-needle surgical planning of optimal insertion trajectories to the targets, based on a mathematical description of all relevant structure surfaces. The method determines the analytical expression of boundaries of every needle "collision-free reachable workspace" (CFRW), which are the feasible insertion zones based on several constraints. Then, the optimal needle insertion trajectory within the optimization criteria will be chosen in the needle CFRW automatically. Also, the results can be visualized with our navigation system. In the simulation experiment, three needle-insertion trajectories were obtained successfully. In the in vitro experiment, the robot successfully achieved insertion of multiple needles. The proposed automatic multiple-needle surgical planning can improve the efficiency and safety of robot-assisted large liver tumor

  7. A generative statistical approach to automatic 3D building roof reconstruction from laser scanning data

    Science.gov (United States)

    Huang, Hai; Brenner, Claus; Sester, Monika

    2013-05-01

    This paper presents a generative statistical approach to automatic 3D building roof reconstruction from airborne laser scanning point clouds. In previous works, bottom-up methods, e.g., point clustering, plane detection, and contour extraction, have been widely used. Due to the data artefacts caused by tree clutter, reflection from windows, water features, etc., bottom-up reconstruction in urban areas may suffer from a number of incomplete or irregular roof parts. Manually given geometric constraints are usually needed to ensure plausible results. In this work we propose an automatic process with emphasis on top-down approaches. The input point cloud is first pre-segmented into subzones containing a limited number of buildings to reduce the computational complexity for large urban scenes. For the building extraction and reconstruction in the subzones we propose a purely top-down statistical scheme, in which bottom-up efforts or additional data like building footprints are no longer required. Based on a predefined primitive library we conduct generative modeling to reconstruct roof models that fit the data. Primitives are assembled into an entire roof with given rules of combination and merging. Overlaps of primitives are allowed in the assembly. The selection of roof primitives, as well as the sampling of their parameters, is driven by a variant of the Markov Chain Monte Carlo technique with a specified jump mechanism. Experiments are performed on datasets of different building types (from simple houses and high-rise buildings to combined building groups) and resolutions. The results show robustness despite the data artefacts mentioned above and plausibility in reconstruction.

  8. A PUT-Based Approach to Automatically Extracting Quantities and Generating Final Answers for Numerical Attributes

    Directory of Open Access Journals (Sweden)

    Yaqing Liu

    2016-06-01

    Full Text Available Automatically extracting quantities and generating final answers for numerical attributes is very useful on many occasions, including question answering, image processing, human-computer interaction, etc. A common approach is to learn linguistic templates or wrappers and employ some algorithm or model to generate a final answer. However, building linguistic templates or wrappers is a tough task for builders. In addition, linguistic templates or wrappers are domain-dependent. To free the builder from building linguistic templates or wrappers, we propose a new approach to final answer generation based on a Predicates-Units Table (PUT), a mini domain-independent knowledge base. It is worth pointing out that, in the following cases, quantities are not represented well: quantities may lack units; quantities may be wrong for a given question; and even if all of them are represented well, their units may be inconsistent. These cases have a strong impact on final answer solving. One thousand nine hundred twenty-six real queries are employed to test the proposed method, and the experimental results show that the average correctness ratio of our approach is 87.1%.

  9. Tra-la-Lyrics 2.0: Automatic Generation of Song Lyrics on a Semantic Domain

    Science.gov (United States)

    Gonçalo Oliveira, Hugo

    2015-12-01

    Tra-la-Lyrics is a system that generates song lyrics automatically. In its original version, the main focus was to produce text where stresses matched the rhythm of given melodies. There were no concerns on whether the text made sense or if the selected words shared some kind of semantic association. In this article, we describe the development of a new version of Tra-la-Lyrics, where text is generated on a semantic domain, defined by one or more seed words. This effort involved the integration of the original rhythm module of Tra-la-Lyrics in PoeTryMe, a generic platform that generates poetry with semantically coherent sentences. To measure our progress, the rhythm, the rhymes, and the semantic coherence in lyrics produced by the original Tra-la-Lyrics were analysed and compared with lyrics produced by the new instantiation of this system, dubbed Tra-la-Lyrics 2.0. The analysis showed that, in the lyrics by the new system, words have higher semantic association among them and with the given seeds, while the rhythm is still matched and rhymes are present. The previous analysis was complemented with a crowdsourced evaluation, where contributors answered a survey about relevant features of lyrics produced by the previous and the current versions of Tra-la-Lyrics. Though tight, the survey results confirmed the improvements of the lyrics by Tra-la-Lyrics 2.0.

  10. [Central Pattern Generators: Mechanisms of the Activity and Their Role in the Control of "Automatic" Movements].

    Science.gov (United States)

    Arshavsky, I; Deliagina, T G; Orlovsky, G N

    2015-01-01

    Central pattern generators (CPGs) are a set of interconnected neurons capable of generating a basic pattern of motor output underlying "automatic" movements (breathing, locomotion, chewing, swallowing, and so on) in the absence of afferent signals from the executive motor apparatus. They can be divided into the constitutive CPGs active throughout the entire lifetime (respiratory CPGs) and conditional CPGs controlling episodic movements (locomotion, chewing, swallowing, and others). Since a motor output of CPGs is determined by their internal organization, the activities of the conditional CPGs are initiated by simple commands coming from higher centers. We describe the structural and functional organization of the locomotor CPGs in the marine mollusk Clione limacina, lamprey, frog embryo, and laboratory mammals (cat, mouse, and rat), CPGs controlling the respiratory and swallowing movements in mammals, and CPGs controlling discharges of the electric organ in the gymnotiform fish. It is shown that in all these cases, the generation of rhythmic motor output is based both on the endogenous (pacemaker) activity of specific groups of interneurons and on interneural interactions. These two interrelated mechanisms complement each other, ensuring the high reliability of CPG functionality. We discuss how the experience obtained in studying CPGs can be used to understand mechanisms of more complex functions of the brain, including its cognitive functions.

  11. Automatic Test Pattern Generator for Fuzzing Based on Finite State Machine

    Directory of Open Access Journals (Sweden)

    Ming-Hung Wang

    2017-01-01

    Full Text Available With the rapid development of the Internet, several emerging technologies are adopted to construct fancy, interactive, and user-friendly websites. Among these technologies, HTML5 is a popular one and is widely used in establishing modern sites. However, the security issues in the new web technologies are also raised and are worthy of investigation. For vulnerability investigation, many previous studies used fuzzing and focused on generation-based approaches to produce test cases for fuzzing; however, these methods require a significant amount of knowledge and mental efforts to develop test patterns for generating test cases. To decrease the entry barrier of conducting fuzzing, in this study, we propose a test pattern generation algorithm based on the concept of finite state machines. We apply graph analysis techniques to extract paths from finite state machines and use these paths to construct test patterns automatically. According to the proposal, fuzzing can be completed through inputting a regular expression corresponding to the test target. To evaluate the performance of our proposal, we conduct an experiment in identifying vulnerabilities of the input attributes in HTML5. According to the results, our approach is not only efficient but also effective for identifying weak validators in HTML5.
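
    The path-extraction step above can be approximated with a small graph walk: enumerate bounded label sequences of a finite state machine and use them as test patterns. The Python sketch below assumes the FSM is already given as a transition table; compiling a regular expression into that table, and the fuzzing harness itself, are out of scope here and the example machine is invented.

      def fsm_test_patterns(transitions, start, accepting, max_len=4):
          # Depth-first enumeration of label sequences (test patterns) that lead
          # from the start state to an accepting state, up to a length bound.
          patterns = []

          def walk(state, labels):
              if state in accepting and labels:
                  patterns.append("".join(labels))
              if len(labels) == max_len:
                  return
              for label, nxt in transitions.get(state, []):
                  walk(nxt, labels + [label])

          walk(start, [])
          return patterns

      # Invented FSM roughly matching the regular expression a(b|c)*d .
      fsm = {
          "q0": [("a", "q1")],
          "q1": [("b", "q1"), ("c", "q1"), ("d", "q2")],
          "q2": [],
      }
      print(fsm_test_patterns(fsm, start="q0", accepting={"q2"}))
      # prints patterns such as 'ad', 'abd', 'abbd', 'accd', ...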

  12. Deep Learning-Based Data Forgery Detection in Automatic Generation Control

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Fengli [Univ. of Arkansas, Fayetteville, AR (United States); Li, Qinghua [Univ. of Arkansas, Fayetteville, AR (United States)

    2017-10-09

    Automatic Generation Control (AGC) is a key control system in the power grid. It is used to calculate the Area Control Error (ACE) based on frequency and tie-line power flow between balancing areas, and then to adjust power generation to maintain the power system frequency in an acceptable range. However, attackers might inject malicious frequency or tie-line power flow measurements to mislead AGC into performing false generation corrections that harm power grid operation. Such attacks are hard to detect since they do not violate physical power system models. In this work, we propose algorithms based on neural networks and the Fourier transform to detect data forgery attacks in AGC. Unlike the few previous works that rely on accurate load prediction to detect data forgery, our solution only uses the ACE data already available in existing AGC systems. In particular, our solution learns the normal patterns of ACE time series and detects abnormal patterns caused by artificial attacks. Evaluations on a real ACE dataset show that our methods have high detection accuracy.
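
    As a rough, hypothetical illustration of the general idea (learning the normal spectral pattern of the ACE signal and flagging deviations; the paper's actual neural network model is not reproduced here), a NumPy-only sketch might look like this:

import numpy as np

# Hypothetical sketch of spectral anomaly scoring on an ACE time series;
# the paper's trained model and thresholds are not reproduced.
def spectral_features(window):
    return np.abs(np.fft.rfft(window - window.mean()))

def fit_normal_profile(normal_windows):
    feats = np.array([spectral_features(w) for w in normal_windows])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-9

def anomaly_score(window, mean, std):
    z = (spectral_features(window) - mean) / std
    return float(np.max(np.abs(z)))   # large score -> suspected forged data

# Toy usage with synthetic ACE-like data (random noise around zero).
rng = np.random.default_rng(0)
normal = [rng.normal(0, 1, 256) for _ in range(50)]
mean, std = fit_normal_profile(normal)
attacked = rng.normal(0, 1, 256) + 3 * np.sin(np.linspace(0, 20 * np.pi, 256))
print(anomaly_score(rng.normal(0, 1, 256), mean, std), anomaly_score(attacked, mean, std))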

  13. Applications of automatic mesh generation and adaptive methods in computational medicine

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)

    1995-12-31

    Important problems in computational medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state-of-the-art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications, we present a general-purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
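
    As a toy 2D illustration of the Delaunay-plus-iterative-point-insertion idea (the authors' 3D scheme and finite element error estimators are not reproduced; the area criterion and centroid insertion below are stand-ins), one could sketch:

import numpy as np
from scipy.spatial import Delaunay

# Hypothetical 2D sketch: triangles whose area exceeds a tolerance receive
# a new point at their centroid, and the mesh is re-triangulated.
def refine(points, max_area=0.01, iterations=5):
    for _ in range(iterations):
        tri = Delaunay(points)
        new_pts = []
        for simplex in tri.simplices:
            a, b, c = points[simplex]
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
            if area > max_area:
                new_pts.append((a + b + c) / 3.0)   # centroid insertion
        if not new_pts:
            break
        points = np.vstack([points, new_pts])
    return points, Delaunay(points)

pts = np.random.rand(20, 2)
pts, mesh = refine(pts)
print(len(pts), "points,", len(mesh.simplices), "triangles")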

  14. Atlas-Based Automatic Generation of Subject-Specific Finite Element Tongue Meshes.

    Science.gov (United States)

    Bijar, Ahmad; Rohan, Pierre-Yves; Perrier, Pascal; Payan, Yohan

    2016-01-01

    Generation of subject-specific 3D finite element (FE) models requires the processing of numerous medical images in order to precisely extract geometrical information about subject-specific anatomy. This processing remains extremely challenging. To overcome this difficulty, we present an automatic atlas-based method that generates subject-specific FE meshes via a 3D registration guided by Magnetic Resonance images. The method extracts a 3D transformation by registering the atlas' volume image to the subject's volume image, and establishes a one-to-one correspondence between the two volumes. The 3D transformation field deforms the atlas' mesh to generate the subject-specific FE mesh. To preserve the quality of the subject-specific mesh, a diffeomorphic non-rigid registration based on B-spline free-form deformations is used, which guarantees a non-folding and one-to-one transformation. Two evaluations of the method are provided. First, a publicly available CT database is used to assess the capability to accurately capture the complexity of each subject-specific lung geometry. Second, FE tongue meshes are generated for two healthy volunteers and two patients suffering from tongue cancer using MR images. It is shown that the method generates an appropriate representation of the subject-specific geometry while preserving the quality of the FE meshes for subsequent FE analysis. To demonstrate the importance of our method in a clinical context, a subject-specific mesh is used to simulate the tongue's biomechanical response to the activation of an important tongue muscle, before and after cancer surgery.

  15. Methodological Aspects of Semantic Relationship Extraction for Automatic Thesaurus Generation

    Directory of Open Access Journals (Sweden)

    N. S. Lagutina

    2016-01-01

    Full Text Available The paper is devoted to the analysis of methods for automatic generation of a specialized thesaurus. The main generation algorithm consists of three stages: selection and preprocessing of a text corpus, recognition of thesaurus terms, and extraction of relations among terms. Our work is focused on exploring methods for semantic relation extraction. We developed a test bench that allows testing well-known algorithms for the extraction of synonyms and hypernyms. These algorithms are based on different relation extraction techniques: lexico-syntactic patterns, morpho-syntactic rules, measurement of term information quantity, the general-purpose thesaurus WordNet, and Levenshtein distance. For analysis of the resulting thesaurus we proposed a complex assessment that includes the following metrics: precision of extracted terms, precision and recall of hierarchical and synonym relations, and characteristics of the thesaurus graph (the number of extracted terms and semantic relationships of different types, the number of connected components, and the number of vertices in the largest component). The proposed set of metrics allows evaluating the quality of the thesaurus as a whole, revealing some drawbacks of standard relation extraction methods, and creating more efficient hybrid methods that can generate thesauri with better characteristics than thesauri generated by the separate methods. In order to illustrate this fact, one such hybrid method is considered in the paper. It combines the best standard algorithms for hypernym and synonym extraction and generates a specialized medical thesaurus. The hybrid method keeps the thesaurus quality at the same level while finding more relations between terms than the well-known algorithms.
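
    Two of the relation-extraction signals listed above can be sketched compactly (a hypothetical illustration, assuming NLTK with the WordNet corpus installed; the paper's actual algorithms and their hybrid combination are not reproduced):

from nltk.corpus import wordnet as wn   # assumes the WordNet corpus has been downloaded

# Two simple relation-extraction signals mentioned in the abstract:
# Levenshtein distance for spelling variants and WordNet-based hypernym lookup.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def wordnet_hypernyms(term):
    hypers = set()
    for syn in wn.synsets(term):
        for h in syn.hypernyms():
            hypers.update(l.name() for l in h.lemmas())
    return hypers

print(levenshtein("tumour", "tumor"))        # small distance -> candidate variant pair
print(sorted(wordnet_hypernyms("thrombosis"))[:5])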

  16. Computer Assisted Automatization of Multiplication Facts Reduces Mathematics Anxiety in Elementary School Children.

    Science.gov (United States)

    Wittman, Timothy K.; Marcinkiewicz, Henryk R.; Hamodey-Douglas, Stacie

    Fourth grade elementary school children exhibiting high and low mathematics anxiety were trained on multiplication facts using the Math Builder Program, a computer program designed to bring their performance to the automaticity level. Mathematics anxiety, measured by the Mathematics Anxiety Rating Scale--Elementary version (MARS-E), was assessed…

  17. Using an Automatic Retrieval System in the Web To Assist Co-operative Learning.

    Science.gov (United States)

    Badue, Claudine; Vaz, Wesley; Albuquerque, Eduardo

    This paper presents an information agent and latent semantic-based indexing architecture to retrieve documents on the Internet. The system optimizes the search for documents on the Internet by automatically retrieving relevant links. The information used for the search can be obtained, for instance, from Internet browser caches and from grades of…

  18. The NetVISA automatic association tool. Next generation software testing and performance under realistic conditions.

    Science.gov (United States)

    Le Bras, Ronan; Arora, Nimar; Kushida, Noriyuki; Tomuta, Elena; Kebede, Fekadu; Feitio, Paulino

    2016-04-01

    The CTBTO's International Data Centre is in the process of developing the next-generation software to perform the automatic association step. The NetVISA software uses a Bayesian approach with a forward physical model using probabilistic representations of the propagation, station capabilities, background seismicity, noise detection statistics, and coda phase statistics. The software has been in development for a few years and is now reaching the stage where it is being tested in a realistic operational context. An interactive module has been developed where the NetVISA automatic events that are in addition to the Global Association (GA) results are presented to the analysts. We report on a series of tests where the results are examined and evaluated by seasoned analysts. Consistent with the statistics previously reported (Arora et al., 2013), the first test shows that the software is able to enhance analysis work by providing additional event hypotheses for consideration by analysts. A test on a three-day data set was performed and showed that the system found 42 additional real events out of 116 examined, including 6 that pass the criterion for the Reviewed Event Bulletin of the IDC. The software was functional in a realistic, real-time mode during the occurrence of the fourth nuclear test claimed by the Democratic People's Republic of Korea on January 6th, 2016. Confirming a previous statistical observation, the software found more associated stations (51, including 35 primary stations) than GA (36, including 26 primary stations) for this event. Reference: Arora, N. S., Russell, S., and Sudderth, E., Bulletin of the Seismological Society of America (BSSA), vol. 103, no. 2A, pp. 709-729, April 2013.

  19. Performance Evaluation of Antlion Optimizer Based Regulator in Automatic Generation Control of Interconnected Power System

    Directory of Open Access Journals (Sweden)

    Esha Gupta

    2016-01-01

    Full Text Available This paper presents an application of the recently introduced Antlion Optimizer (ALO) to find the parameters of the primary governor loop of thermal generators for successful Automatic Generation Control (AGC) of a two-area interconnected power system. Two standard objective functions, Integral Square Error (ISE) and Integral Time Absolute Error (ITAE), have been employed to carry out this parameter estimation process. The problem is transformed into an optimization problem to obtain the integral gains, speed regulation, and frequency sensitivity coefficient for both areas. The comparison of the regulator performance obtained from ALO is carried out with Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Gravitational Search Algorithm (GSA) based regulators. Different types of perturbations and load changes are incorporated to establish the efficacy of the obtained design. It is observed that ALO outperforms all three optimization methods for this real problem. The optimization performance of ALO is compared with the other algorithms on the basis of the standard deviations in the values of parameters and objective functions.
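
    The two criteria named above have standard definitions, ISE = ∫ e(t)² dt and ITAE = ∫ t·|e(t)| dt. A minimal numerical sketch evaluating both on a hypothetical decaying frequency-deviation error signal (not the paper's system model) is shown below.

import numpy as np

# Standard integral performance indices evaluated by trapezoidal integration.
def integrate(y, t):
    dt = np.diff(t)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * dt))

def ise(t, e):
    return integrate(e ** 2, t)

def itae(t, e):
    return integrate(t * np.abs(e), t)

# Hypothetical decaying frequency-deviation error after a load perturbation.
t = np.linspace(0, 20, 2001)
e = 0.05 * np.exp(-0.4 * t) * np.cos(2 * np.pi * 0.2 * t)
print("ISE  =", ise(t, e))
print("ITAE =", itae(t, e))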

  20. Integration of Variable Speed Pumped Hydro Storage in Automatic Generation Control Systems

    Science.gov (United States)

    Fulgêncio, N.; Moreira, C.; Silva, B.

    2017-04-01

    Pumped storage power (PSP) plants are expected to be an important player in modern electrical power systems when dealing with increasing shares of new renewable energies (NRE) such as solar or wind power. The massive penetration of NRE and the consequent replacement of conventional synchronous units will significantly affect the controllability of the system. In order to evaluate the capability of variable speed PSP plants to participate in the frequency restoration reserve (FRR) provision, taking into account the expected performance in terms of improved ramp response capability, a comparison with conventional hydro units is presented. In order to address this issue, a three-area test network was considered, as well as the corresponding automatic generation control (AGC) systems, which are responsible for re-dispatching the generation units to re-establish the power interchange between areas as well as the system nominal frequency. The main issue under analysis in this paper is the benefit of the fast response of variable speed PSP with respect to its capability of providing fast power balancing in a control area.

  1. Automatic Generation of Template Images for Detecting Vehicles in Parking Lots

    Science.gov (United States)

    Iwasa, Kazumasa; Tanaka, Toshimitsu; Sagawa, Yuji; Sugie, Noboru

    The number of parking lots grows very slowly, while the number of cars increases considerably every year. Thus, efficient management of parking lots is needed. If information about vacant parking spaces is transmitted to cars waiting at the gates of parking lots, congestion caused by cars searching for a parking space will be reduced. Therefore, several methods for detecting parked cars have been developed. In particular, the method that detects cars by their occlusion of the white lines drawn on the parking lot is highly reliable. The method needs a template image for each camera. Since these images were created manually in previous research, considerable cost and time were needed. In this paper, we present a method for automatically generating the template images. First, our method synthesizes an image of the parking lot containing no cars from several images. Then, the method detects white line segments from this image. The line segments are corrected in consideration of the rule that white lines on the parking lot are parallel and their length is constant. Finally, parking divisions are determined from the line segments and stored in the template image. In an experiment using the template generated by our method, the accuracy of detecting cars was about 96%. The template is comparable in accuracy to the manually created template.
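
    The white-line-segment detection step could be sketched with OpenCV's probabilistic Hough transform (a hypothetical illustration; the input file name, thresholds, and Hough parameters are placeholders and do not reproduce the paper's pipeline):

import cv2
import numpy as np

# Illustrative white-line segment detection on an empty-lot background image;
# thresholds and Hough parameters below are placeholders.
img = cv2.imread("empty_lot.png")            # hypothetical synthesized background image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, white = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)   # keep bright paint only
edges = cv2.Canny(white, 50, 150)
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=10)
for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
    cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
cv2.imwrite("template_lines.png", img)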

  2. Solution to automatic generation control problem using firefly algorithm optimized I(λ)D(µ) controller.

    Science.gov (United States)

    Debbarma, Sanjoy; Saikia, Lalit Chandra; Sinha, Nidul

    2014-03-01

    The present work focuses on automatic generation control (AGC) of a three-unequal-area thermal system considering reheat turbines and appropriate generation rate constraints (GRC). A fractional order (FO) controller, named the I(λ)D(µ) controller and based on the CRONE approximation, is proposed for the first time as an appropriate technique to solve the multi-area AGC problem in power systems. A recently developed metaheuristic algorithm known as the firefly algorithm (FA) is used for the simultaneous optimization of the gains and other parameters such as the order of the integrator (λ) and differentiator (μ) of the I(λ)D(µ) controller and the governor speed regulation parameters (R). The dynamic responses corresponding to the optimized I(λ)D(µ) controller gains, λ, μ, and R are compared with those of classical integer order (IO) controllers such as I, PI and PID controllers. Simulation results show that the proposed I(λ)D(µ) controller provides more improved dynamic responses and outperforms the IO-based classical controllers. Further, sensitivity analysis confirms the robustness of the optimized I(λ)D(µ) controller to wide changes in system loading conditions and in the size and position of the SLP. The proposed controller is also found to perform well compared with IO-based controllers when the SLP takes place simultaneously in any two areas or in all the areas. Robustness of the proposed I(λ)D(µ) controller is also tested against system parameter variations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
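
    For reference, the controller structure referred to above can be written in the generic fractional-order form below, where K_I and K_D are the gains and λ and μ the fractional orders tuned here by the firefly algorithm (a standard textbook form; the exact parameterization used in the paper may differ):

    C(s) = \frac{K_I}{s^{\lambda}} + K_D \, s^{\mu}

    In practice, the fractional operators s^{-λ} and s^{μ} are realized through band-limited rational approximations, such as the CRONE/Oustaloup-type filters mentioned in the abstract.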

  3. Automatic Generation of Indoor Navigable Space Using a Point Cloud and its Scanner Trajectory

    Science.gov (United States)

    Staats, B. R.; Diakité, A. A.; Voûte, R. L.; Zlatanova, S.

    2017-09-01

    Automatic generation of indoor navigable models is mostly based on 2D floor plans. However, in many cases the floor plans are out of date. Buildings are not always built according to their blueprints, interiors might change after a few years because of modified walls and doors, and furniture may be repositioned to the user's preferences. Therefore, new approaches for the quick recording of indoor environments should be investigated. This paper concentrates on laser scanning with a Mobile Laser Scanner (MLS) device. The MLS device stores a point cloud and its trajectory. If the MLS device is operated by a human, the trajectory contains information which can be used to distinguish different surfaces. In this paper a method is presented for the identification of walkable surfaces based on the analysis of the point cloud and the trajectory of the MLS scanner. This method consists of several steps. First, the point cloud is voxelized. Second, the trajectory is analysed and projected to acquire seed voxels. Third, these seed voxels are grown into floor regions by means of a region-growing process. By identifying dynamic objects, doors and furniture, these floor regions can be modified so that each region represents a specific navigable space inside a building as a free navigable voxel space. By combining the point cloud and its corresponding trajectory, the walkable space can be identified for any type of building, even if the interior is scanned during business hours.
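
    The voxelization, seeding and region-growing steps could be sketched as follows (a toy, hypothetical illustration on synthetic points; the paper's actual criteria for step height, dynamic objects and doors are not reproduced):

import numpy as np
from collections import deque

# Hypothetical sketch of the voxel seeding and region-growing step.
def voxelize(points, size=0.1):
    return {tuple(v) for v in np.floor(points / size).astype(int)}

def grow_region(occupied, seeds, max_step_up=1):
    region, queue = set(), deque(s for s in seeds if s in occupied)
    region.update(queue)
    while queue:
        x, y, z = queue.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in range(-max_step_up, max_step_up + 1):
                    nb = (x + dx, y + dy, z + dz)
                    if nb in occupied and nb not in region:
                        region.add(nb)
                        queue.append(nb)
    return region

floor = np.column_stack([np.random.rand(2000) * 5,
                         np.random.rand(2000) * 5,
                         np.zeros(2000)])          # toy flat floor
occ = voxelize(floor)
seeds = voxelize(np.array([[2.5, 2.5, 0.0]]))      # voxel below the scanner trajectory
print(len(grow_region(occ, seeds)), "walkable voxels of", len(occ))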

  4. Automatic generation and verification of railway interlocking control tables using FSM and NuSMV

    Directory of Open Access Journals (Sweden)

    Mohammad B. YAZDI

    2009-01-01

    Full Text Available Due to their important role in providing safe conditions for train movements, railway interlocking systems are considered safety-critical systems. The reliability, safety and integrity of these systems rely on the reliability and integrity of all stages in their lifecycle, including design, verification, manufacture, test, operation and maintenance. In this paper, the automatic generation and verification of interlocking control tables, one of the most important stages in the interlocking design process, is the focus of the work carried out by the safety-critical research group in the School of Railway Engineering (SRE). Three subsystems are introduced: a graphical signalling layout planner, a control table generator and a control table verifier. Using the NuSMV model checker, the control table verifier analyses the contents of the control table against the conditions for safe train movement and checks for any conflicting settings in the table. This includes settings for conflicting routes, signals and points, and also settings for route isolation and single and multiple overlap situations. The last two settings, route isolation and multiple overlap situations, are new outcomes of this work compared with recently published work on the subject.

  5. Automatic Generation of Optimized and Synthesizable Hardware Implementation from High-Level Dataflow Programs

    Directory of Open Access Journals (Sweden)

    Khaled Jerbi

    2012-01-01

    Full Text Available In this paper, we introduce the Reconfigurable Video Coding (RVC) standard, based on the idea that video processing algorithms can be defined as a library of components that can be updated and standardized separately. The MPEG RVC framework aims at providing a unified high-level specification of current MPEG coding technologies using a dataflow language called Cal Actor Language (CAL). CAL is associated with a set of tools to design dataflow applications and to generate hardware and software implementations. Before this work, the existing CAL hardware compilers did not support high-level features of CAL. After presenting the main notions of the RVC standard, this paper introduces an automatic transformation process that analyses the non-compliant features and makes the required changes in the intermediate representation of the compiler while keeping the same behavior. Finally, the implementation results of the transformation on video and still image decoders are summarized. We show that the obtained results can largely satisfy the real-time constraints for an embedded design on FPGA, as we obtain a throughput of 73 FPS for the MPEG-4 decoder and 34 FPS for the coding and decoding process of the LAR coder using a video of CIF image size. This work resolves the main limitation of hardware generation from CAL designs.

  6. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences produced by a finite automaton. Although they are not random, they may look random. They are complicated in the sense of not being ultimately periodic, and they may look rather complicated in the sense that it may not be easy to name the rule by which a sequence is generated; nevertheless, such a rule exists. The concept of automatic sequences has applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.

  7. AUTOMATIC 3D BUILDING MODEL GENERATION FROM LIDAR AND IMAGE DATA USING SEQUENTIAL MINIMUM BOUNDING RECTANGLE

    Directory of Open Access Journals (Sweden)

    E. Kwak

    2012-07-01

    Full Text Available A Digital Building Model is an important component in many applications such as city modelling, natural disaster planning, and aftermath evaluation. The importance of accurate and up-to-date building models has been discussed by many researchers, and many different approaches for efficient building model generation have been proposed. They can be categorised according to the data source used, the data processing strategy, and the amount of human interaction. In terms of data source, due to the limitations of using single-source data, integration of multi-sensor data is desired since it preserves the advantages of the involved datasets. Aerial imagery and LiDAR data are among the commonly combined sources to obtain 3D building models with good vertical accuracy from laser scanning and good planimetric accuracy from aerial images. The most used data processing strategies are data-driven and model-driven ones. Theoretically, one can model buildings of any shape using data-driven approaches, but in practice the question remains of how to impose constraints and set the rules during the generation process. Due to the complexity of implementing data-driven approaches, model-based approaches have drawn the attention of researchers. However, the major drawback of model-based approaches is that the establishment of representative models involves a manual process that requires human intervention. Therefore, the objective of this research work is to automatically generate building models using the Minimum Bounding Rectangle algorithm and sequentially adjusting them to combine the advantages of image and LiDAR datasets.
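
    A generic minimum-area bounding rectangle of a 2D point set can be computed by rotating the convex hull edge-by-edge and keeping the orientation that yields the smallest axis-aligned box; the sketch below illustrates only this generic step, not the paper's sequential adjustment against the image and LiDAR data.

import numpy as np
from scipy.spatial import ConvexHull

# Generic minimum-area bounding rectangle via the rotating-hull idea.
def min_bounding_rect(points):
    hull = points[ConvexHull(points).vertices]
    best = (np.inf, None)
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        angle = np.arctan2(edge[1], edge[0])
        c, s = np.cos(-angle), np.sin(-angle)
        rot = hull @ np.array([[c, -s], [s, c]]).T   # rotate so the edge is horizontal
        lo, hi = rot.min(axis=0), rot.max(axis=0)
        area = np.prod(hi - lo)
        if area < best[0]:
            corners = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                                [hi[0], hi[1]], [lo[0], hi[1]]])
            best = (area, corners @ np.array([[c, -s], [s, c]]))  # rotate corners back
    return best[1]

pts = np.random.rand(100, 2)
print(min_bounding_rect(pts))   # 4 corner coordinates of the minimum-area rectangle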

  8. Use of an Automatic Problem Generator to Teach Basic Skills in a First Course in Assembly Language.

    Science.gov (United States)

    Benander, Alan; And Others

    1989-01-01

    Discussion of the use of computer aided instruction (CAI) and instructional software in college level courses highlights an automatic problem generator, AUTOGEN, that was written for computer science students learning assembly language. Design of the software is explained, and student responses are reported. (nine references) (LRW)

  9. ScholarLens: extracting competences from research publications for the automatic generation of semantic user profiles

    Directory of Open Access Journals (Sweden)

    Bahar Sateli

    2017-07-01

    Full Text Available Motivation Scientists increasingly rely on intelligent information systems to help them in their daily tasks, in particular for managing research objects, like publications or datasets. The relatively young research field of Semantic Publishing has been addressing the question of how scientific applications can be improved through semantically rich representations of research objects, in order to facilitate their discovery and re-use. To complement the efforts in this area, we propose an automatic workflow to construct semantic user profiles of scholars, so that scholarly applications, like digital libraries or data repositories, can better understand their users' interests, tasks, and competences, by incorporating these user profiles in their design. To make the user profiles sharable across applications, we propose to build them based on standard semantic web technologies, in particular the Resource Description Framework (RDF) for representing user profiles and Linked Open Data (LOD) sources for representing competence topics. To avoid the cold start problem, we suggest to automatically populate these profiles by analyzing the publications (co-)authored by users, which we hypothesize reflect their research competences. Results We developed a novel approach, ScholarLens, which can automatically generate semantic user profiles for authors of scholarly literature. For modeling the competences of scholarly users and groups, we surveyed a number of existing linked open data vocabularies. In accordance with the LOD best practices, we propose an RDF Schema (RDFS) based model for competence records that reuses existing vocabularies where appropriate. To automate the creation of semantic user profiles, we developed a complete, automated workflow that can generate semantic user profiles by analyzing full-text research articles through various natural language processing (NLP) techniques. In our method, we start by processing a set of research articles for a

  10. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    Science.gov (United States)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. AutoBayes' schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems

  11. SU-F-BRB-16: A Spreadsheet Based Automatic Trajectory GEnerator (SAGE): An Open Source Tool for Automatic Creation of TrueBeam Developer Mode Robotic Trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Etmektzoglou, A; Mishra, P; Svatos, M [Varian Medical Systems, Palo Alto, CA (United States)

    2015-06-15

    Purpose: To automate creation and delivery of robotic linac trajectories with TrueBeam Developer Mode, an open source spreadsheet-based trajectory generation tool has been developed, tested and made freely available. The computing power inherent in a spreadsheet environment plus additional functions programmed into the tool insulate users from the underlying schema tedium and allow easy calculation, parameterization, graphical visualization, validation and finally automatic generation of Developer Mode XML scripts which are directly loadable on a TrueBeam linac. Methods: The robotic control system platform that allows total coordination of potentially all linac moving axes with beam (continuous, step-and-shoot, or combination thereof) becomes available in TrueBeam Developer Mode. Many complex trajectories are either geometric or can be described in analytical form, making the computational power, graphing and programmability available in a spreadsheet environment an easy and ideal vehicle for automatic trajectory generation. The spreadsheet environment allows also for parameterization of trajectories thus enabling the creation of entire families of trajectories using only a few variables. Standard spreadsheet functionality has been extended for powerful movie-like dynamic graphic visualization of the gantry, table, MLC, room, lasers, 3D observer placement and beam centerline all as a function of MU or time, for analysis of the motions before requiring actual linac time. Results: We used the tool to generate and deliver extended SAD “virtual isocenter” trajectories of various shapes such as parameterized circles and ellipses. We also demonstrated use of the tool in generating linac couch motions that simulate respiratory motion using analytical parameterized functions. Conclusion: The SAGE tool is a valuable resource to experiment with families of complex geometric trajectories for a TrueBeam Linac. It makes Developer Mode more accessible as a vehicle to quickly

  12. Automatic bearing fault diagnosis of permanent magnet synchronous generators in wind turbines subjected to noise interference

    Science.gov (United States)

    Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo

    2018-02-01

    An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subject to low rotating speeds, speed fluctuations, and electrical device noise interference. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronously sampled vibration signal of the faulty bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of faulty bearings with different fault sizes in a PMSG test rig are used in experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
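
    The angular-domain (order-tracking) resampling step can be sketched with plain interpolation, as below; the phase here comes from a synthetic speed profile rather than from the PMSG phase current, and the adaptive stochastic resonance filter is not reproduced.

import numpy as np

# Sketch of angular-domain resampling of a vibration signal under speed fluctuation.
def angular_resample(t, vib, angle, samples_per_rev=256):
    total_revs = angle[-1] / (2 * np.pi)
    uniform_angle = np.arange(0, total_revs * 2 * np.pi,
                              2 * np.pi / samples_per_rev)
    t_of_angle = np.interp(uniform_angle, angle, t)   # invert angle(t)
    return np.interp(t_of_angle, t, vib)

t = np.linspace(0, 2.0, 8000)
speed = 10 + 2 * np.sin(2 * np.pi * 0.5 * t)             # fluctuating speed (rev/s)
angle = 2 * np.pi * np.cumsum(speed) * (t[1] - t[0])     # integrated rotation angle
vib = np.sin(7 * angle) + 0.3 * np.random.randn(t.size)  # hypothetical order-7 fault tone
resampled = angular_resample(t, vib, angle)
print(resampled.shape)   # now uniformly sampled per revolution, not per second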

  13. Automatic postural responses are generated according to feet orientation and perturbation magnitude.

    Science.gov (United States)

    Azzi, Nametala Maia; Coelho, Daniel Boari; Teixeira, Luis Augusto

    2017-09-01

    This investigation aimed to assess the effect of feet orientation angle in upright stance on automatic postural responses (APRs) to mechanical perturbations of different magnitudes. Perturbation was produced by releasing suddenly a load attached to the participant's trunk, leading to forward body sway. We evaluated APRs to loads corresponding to 5% (low) and 10% (high) of the participant's body weight, comparing the following feet orientations: parallel, preferred (M=10.46°), 15° and 30° for each foot regarding the body midline. Results showed that APRs were sensitive to perturbation magnitude, with the high load leading to increased amplitudes of center of pressure displacement and joints rotation, in addition to stronger and earlier muscular responses. Feet orientation at 30° led to a greater amplitude of center of pressure displacement than the other feet orientations. The low perturbation magnitude led to similar responses both at the hip and ankle across feet orientations, whereas the high load induced increased rotation amplitudes in both joints for feet orientation at 30°. Our results suggest that APRs are generated by the nervous system taking into consideration the biomechanical constraints in the response production. Relevant for standardization of feet placement in evaluations of balance recovery, our results indicated that a moderate range of outward feet orientation angles in stance lead to comparable APRs, while increased outward feet orientation angles lead to distinct postural responses. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Semi-automatic ground truth generation for license plate recognition system

    Science.gov (United States)

    Wang, Shen-Zheng; Zhao, San-Lung; Chen, Yi-Yuan; Lan, Kung-Ming

    2011-09-01

    A license plate recognition (LPR) system helps alert relevant personnel to any passing vehicle in the surveillance area. In order to test algorithms for license plate recognition, it is necessary to have input frames in which the ground truth is determined. The purpose of ground truth data here is to provide an absolute reference for performance evaluation or training purposes. However, annotating ground truth data for real-life inputs is a burdensome task because of the time-consuming manual work involved. In this paper, we propose a method of semi-automatic ground truth generation for license plate recognition in video sequences. The method starts with region-of-interest detection to rapidly extract character lines, followed by a license plate recognition system to verify the license plate regions and recognize the numbers. On top of the LPR system, we incorporate a tracking-validation mechanism to detect the time interval of passing vehicles in the input sequences. The tracking mechanism is initialized by a single license plate region in one frame. Moreover, in order to tolerate the variation of license plate appearances in the input sequences, the validator is updated by capturing positive and negative samples during tracking. Experimental results show that the proposed method can achieve promising results.

  15. Isolating automatic photism generation from strategic photism use in grapheme-colour synaesthesia.

    Science.gov (United States)

    Levy, Arielle M; Dixon, Mike J; Soliman, Sherif

    2017-11-01

    Grapheme-colour synaesthesia is a phenomenon in which ordinary black numbers and letters (graphemes) trigger the experience of highly specific colours (photisms). The Synaesthetic Stroop task has been used to demonstrate that graphemes trigger photisms automatically. In the standard Stroop task, congruent trial probability (CTP) has been manipulated to isolate effects of automaticity from higher-order strategic effects, with larger Stroop effects at high CTP attributed to participants strategically attending to the stimulus word to facilitate responding, and smaller Stroop effects at low CTP reflecting automatic word processing. Here we apply this logic for the first time to the Synaesthetic Stroop task. At high CTP we showed larger Stroop effects due to synaesthetes using their synaesthetic colours strategically. At low CTP Stroop effects were reduced but were still significant. We directly isolate automatic processing of graphemes from strategic effects and conclusively show that, in synaesthesia, viewing black graphemes automatically triggers colour experiences. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Generation of gene edited birds in one generation using sperm transfection assisted gene editing (STAGE).

    Science.gov (United States)

    Cooper, Caitlin A; Challagulla, Arjun; Jenkins, Kristie A; Wise, Terry G; O'Neil, Terri E; Morris, Kirsten R; Tizard, Mark L; Doran, Timothy J

    2017-06-01

    Generating transgenic and gene edited mammals involves in vitro manipulation of oocytes or single cell embryos. Due to the comparative inaccessibility of avian oocytes and single cell embryos, novel protocols have been developed to produce transgenic and gene edited birds. While these protocols are relatively efficient, they involve two generation intervals before reaching complete somatic and germline expressing transgenic or gene edited birds. Most of this work has been done with chickens, and many protocols require in vitro culturing of primordial germ cells (PGCs). However, for many other bird species no methodology for long term culture of PGCs exists. Developing methodologies to produce germline transgenic or gene edited birds in the first generation would save significant amounts of time and resource. Furthermore, developing protocols that can be readily adapted to a wide variety of avian species would open up new research opportunities. Here we report a method using sperm as a delivery mechanism for gene editing vectors which we call sperm transfection assisted gene editing (STAGE). We have successfully used this method to generate GFP knockout embryos and chickens, as well as generate embryos with mutations in the doublesex and mab-3 related transcription factor 1 (DMRT1) gene using the CRISPR/Cas9 system. The efficiency of the method varies from as low as 0% to as high as 26% with multiple factors such as CRISPR guide efficiency and mRNA stability likely impacting the outcome. This straightforward methodology could simplify gene editing in many bird species including those for which no methodology currently exists.

  17. Automatic segmentation of lesions for the computer-assisted detection in fluorescence urology

    Science.gov (United States)

    Kage, Andreas; Legal, Wolfgang; Kelm, Peter; Simon, Jörg; Bergen, Tobias; Münzenmayer, Christian; Benz, Michaela

    2012-03-01

    Bladder cancer is one of the most common cancers in the western world. The diagnosis in Germany is based on the visual inspection of the bladder. This inspection, performed with a cystoscope, is a challenging task, as some kinds of abnormal tissue do not differ much in their appearance from the surrounding healthy tissue. Fluorescence cystoscopy has the potential to increase the detection rate. A liquid marker introduced into the bladder in advance of the inspection is concentrated in areas with high metabolism. Thus, these areas appear to glow brightly. Unfortunately, apart from the glowing of the suspicious lesions, the fluorescence image contains no further visual information, such as the appearance of the blood vessels. A visual judgment of the lesion as well as a precise treatment has to be done using white light illumination. Thereby, the spatial information about the lesion provided by the fluorescence image has to be guessed by the clinical expert. This leads to a time-consuming procedure due to many switches between the modalities and increases the risk of mistreatment. We introduce an automatic approach which detects and segments any suspicious lesion in the fluorescence image automatically once the image has been classified as a fluorescence image. The area of the contour of the detected lesion is transferred to the corresponding white light image and provides the clinical expert with the spatial information about the lesion. The advantage of this approach is that the clinical expert gets the spatial and the visual information about the lesion together in one image. This can save time and decrease the risk of an incomplete removal of a malign lesion.

  18. Embedded Platform for Automatic Testing and Optimizing of FPGA Based Cryptographic True Random Number Generators

    Directory of Open Access Journals (Sweden)

    M. Varchola

    2009-12-01

    Full Text Available This paper deals with an evaluation platform for cryptographic True Random Number Generators (TRNGs) based on the hardware implementation of statistical tests for FPGAs. It was developed in order to provide an automatic tool that helps to speed up the TRNG design process and can provide new insights into TRNG behavior, as will be shown on a particular example in the paper. It enables testing the statistical properties of various TRNG designs under various working conditions on the fly. Moreover, the tests are suitable for embedding into cryptographic hardware products in order to recognize TRNG output of weak quality and thus increase robustness and reliability. The tests are fully compatible with the FIPS 140 standard and are implemented in VHDL as an IP core for vendor-independent FPGAs. A recent Flash-based Actel Fusion FPGA was chosen for preliminary experiments. The Actel version of the tests possesses an interface to Actel's CoreMP7 softcore processor, which is fully compatible with the industry-standard ARM7TDMI. Moreover, an identical test suite was implemented on the Xilinx Virtex 2 and 5 in order to compare the performance of the proposed solution with that of an already published one based on the same FPGAs. Clock frequencies 25% and 65% greater, respectively, were achieved while consuming almost equal resources of the Xilinx FPGAs. On top of that, the proposed FIPS 140 architecture is capable of processing one random bit per clock cycle, which results in a throughput of 311.5 Mbps for the Virtex 5 FPGA.
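
    For a feel of what such tests check, here is a software illustration of the FIPS 140-2 monobit test on a 20,000-bit block; the acceptance bounds below are the commonly cited FIPS 140-2 values and should be verified against the standard, and the paper's VHDL implementation is of course very different.

import os

# Software illustration of the FIPS 140-2 monobit test on a 20,000-bit block.
# Bounds are the commonly cited FIPS 140-2 values; verify against the standard
# before relying on them.
def monobit_test(block_2500_bytes):
    assert len(block_2500_bytes) == 2500          # 20,000 bits
    ones = sum(bin(byte).count("1") for byte in block_2500_bytes)
    return 9725 < ones < 10275, ones

passed, ones = monobit_test(os.urandom(2500))
print("monobit pass:", passed, "ones:", ones)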

  19. FORMATION OF THE SYNTHESIS ALGORITHMS OF THE COORDINATING CONTROL SYSTEMS BY MEANS OF THE AUTOMATIC GENERATION OF PETRI NETS

    Directory of Open Access Journals (Sweden)

    A. A. Gurskiy

    2016-09-01

    Full Text Available A coordinating control system for the drives of a robot manipulator is presented in this article. The purpose of this scientific work is the development and research of new algorithms for the parametric synthesis of coordinating control systems. To achieve this aim, it is necessary to develop a system that generates the required parametric synthesis algorithms and performs the necessary procedures according to the generated algorithm. This work deals with the synthesis of Petri nets, in this specific case with the automatic generation of Petri nets.

  20. Field Robotics in Sports: Automatic Generation of Guidance Lines for Automatic Grass Cutting, Striping and Pitch Marking of Football Playing Fields

    Directory of Open Access Journals (Sweden)

    Ibrahim A. Hameed

    2011-03-01

    Full Text Available Progress is constantly being made and new applications are constantly emerging in the area of field robotics. In this paper, a promising application of field robotics to football playing fields is introduced. An algorithmic approach is presented for generating the waypoints required to guide a GPS-based field robot through a football playing field in order to automatically carry out periodic tasks such as cutting the grass, pitch and line marking illustrations, and lawn striping. The manual execution of these tasks requires very skilful personnel able to work long hours with very high concentration for the football pitch to comply with the standards of the Federation Internationale de Football Association (FIFA). On the other hand, a GPS-guided vehicle or robot with three implements (grass mower, lawn striping roller and track marking illustrator) is capable of working 24 h a day, in most weather and in harsh soil conditions, without loss of quality. The proposed approach for the automatic operation of football playing fields requires no or very limited human intervention and therefore saves numerous working hours and frees workers to focus on other tasks. An economic feasibility study showed that the proposed method is economically superior to the current manual practices.

  1. Field Robotics in Sports: Automatic Generation of guidance Lines for Automatic Grass Cutting, Striping and Pitch Marking of Football Playing Fields

    Directory of Open Access Journals (Sweden)

    Ole Green

    2011-03-01

    Full Text Available Progress is constantly being made and new applications are constantly emerging in the area of field robotics. In this paper, a promising application of field robotics to football playing fields is introduced. An algorithmic approach is presented for generating the waypoints required to guide a GPS-based field robot through a football playing field in order to automatically carry out periodic tasks such as cutting the grass, pitch and line marking illustrations, and lawn striping. The manual execution of these tasks requires very skilful personnel able to work long hours with very high concentration for the football pitch to comply with the standards of the Federation Internationale de Football Association (FIFA). On the other hand, a GPS-guided vehicle or robot with three implements (grass mower, lawn striping roller and track marking illustrator) is capable of working 24 h a day, in most weather and in harsh soil conditions, without loss of quality. The proposed approach for the automatic operation of football playing fields requires no or very limited human intervention and therefore saves numerous working hours and frees workers to focus on other tasks. An economic feasibility study showed that the proposed method is economically superior to the current manual practices.

  2. Towards automatic skill evaluation: detection and segmentation of robot-assisted surgical motions.

    Science.gov (United States)

    Lin, Henry C; Shafran, Izhak; Yuh, David; Hager, Gregory D

    2006-09-01

    This paper reports our progress in developing techniques for "parsing" raw motion data from a simple surgical task into a labeled sequence of surgical gestures. The ability to automatically detect and segment surgical motion can be useful in evaluating surgical skill, providing surgical training feedback, or documenting essential aspects of a procedure. If processed online, the information can be used to provide context-specific information or motion enhancements to the surgeon. However, in every case, the key step is to relate recorded motion data to a model of the procedure being performed. Robotic surgical systems such as the da Vinci system from Intuitive Surgical provide a rich source of motion and video data from surgical procedures. The application programming interface (API) of the da Vinci outputs 192 kinematics values at 10 Hz. Through a series of feature-processing steps, tailored to this task, the highly redundant features are projected to a compact and discriminative space. The resulting classifier is simple and effective. Cross-validation experiments show that the proposed approach can achieve accuracies higher than 90% when segmenting gestures in a 4-throw suturing task, for both expert and intermediate surgeons. These preliminary results suggest that gesture-specific features can be extracted to provide highly accurate surgical skill evaluation.

  3. Computer-assisted counting of retinal cells by automatic segmentation after TV denoising

    Science.gov (United States)

    2013-01-01

    Background Quantitative evaluation of mosaics of photoreceptors and neurons is essential in studies on development, aging and degeneration of the retina. Manual counting of samples is a time-consuming procedure, while attempts at automation are subject to various restrictions arising from biological and preparation variability, leading to both over- and underestimation of cell numbers. Here we present an adaptive algorithm to overcome many of these problems. Digital micrographs were obtained from cone photoreceptor mosaics visualized by anti-opsin immunocytochemistry in retinal wholemounts from a variety of mammalian species including primates. Segmentation of photoreceptors (from background, debris, blood vessels, other cell types) was performed by a procedure based on Rudin-Osher-Fatemi total variation (TV) denoising. Once 3 parameters are manually adjusted based on a sample, similarly structured images can be batch processed. The module is implemented in MATLAB and fully documented online. Results The object recognition procedure was tested on samples with a typical range of signal and background variations. We obtained results with error ratios of less than 10% in 16 of 18 samples and a mean error of less than 6% compared to manual counts. Conclusions The presented method provides a traceable module for automated acquisition of retinal cell density data. Remaining errors, including the addition of background items and the splitting or merging of objects, might be further reduced by the introduction of additional parameters. The module may be integrated into extended environments with features such as 3D acquisition and recognition. PMID:24138794
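
    An analogous denoise-threshold-count pipeline can be sketched in Python with scikit-image; the paper's module is MATLAB-based, so this is only an illustrative stand-in with placeholder parameters and a stock test image.

import numpy as np
from skimage import data, measure
from skimage.filters import threshold_otsu
from skimage.restoration import denoise_tv_chambolle

# Illustrative pipeline: TV denoising -> global threshold -> connected-component counting.
img = data.coins().astype(float) / 255.0      # stand-in for a retinal micrograph
smooth = denoise_tv_chambolle(img, weight=0.1)
mask = smooth > threshold_otsu(smooth)
labels = measure.label(mask)
sizes = [r.area for r in measure.regionprops(labels)]
count = sum(1 for a in sizes if a > 50)       # drop tiny debris objects (placeholder size)
print("objects counted:", count)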

  4. Automatic virtual transducer locating system to assist in interpreting ultrasound imaging.

    Science.gov (United States)

    Taniguchi, Nobuyuki; Kuwata, Tomoyuki; Ono, Tomoko; Itoh, Kouichi; Omoto, Kiyoka; Fujii, Yasutomo; Ootake, Akifumi

    2003-12-01

    Bodymarkers are used to label the location and orientation of the transducer during ultrasound examination. We evaluated the usefulness of a new system that indicates the transducer location, compared with the conventional bodymarker. The proposed system uses an electromagnetic tracking device to track the three-dimensional (3-D) position and orientation of a small electromagnetic receiver attached to the ultrasound transducer relative to a transmitter placed under the bed. The new bodymarker is displayed as a 3-D graphic model. The physique of the examinee is calibrated by marking five locations on the body on the original bodymarker. To evaluate the accuracy of the system visually, we compared the transducer position indicated on the new bodymarker and the actual transducer position in four abdominal sections. The actual and displayed position and orientation closely agreed in all cases, and the transducer position indicator in the bodymarker display moved smoothly. The automatic transducer locator on the virtual 3-D bodymarker accurately indicated transducer position and orientation. This system is useful and convenient in clinical examinations.

  5. Automatic segmentation of rotational x-ray images for anatomic intra-procedural surface generation in atrial fibrillation ablation procedures.

    Science.gov (United States)

    Manzke, Robert; Meyer, Carsten; Ecabert, Olivier; Peters, Jochen; Noordhoek, Niels J; Thiagalingam, Aravinda; Reddy, Vivek Y; Chan, Raymond C; Weese, Jürgen

    2010-02-01

    Since the introduction of 3-D rotational X-ray imaging, protocols for 3-D rotational coronary artery imaging have become widely available in routine clinical practice. Intra-procedural cardiac imaging in a computed tomography (CT)-like fashion has been particularly compelling due to the reduction of clinical overhead and the ability to characterize anatomy at the time of intervention. We previously introduced a clinically feasible approach for imaging the left atrium and pulmonary veins (LAPVs) with short contrast bolus injections and scan times of approximately 4-10 s. The resulting data have sufficient image quality for intra-procedural use during electro-anatomic mapping (EAM) and interventional guidance in atrial fibrillation (AF) ablation procedures. In this paper, we present a novel technique for intra-procedural surface generation which integrates fully automated segmentation of the LAPVs for guidance in AF ablation interventions. Contrast-enhanced rotational X-ray angiography (3-D RA) acquisitions in combination with filtered-back-projection-based reconstruction allow for volumetric interrogation of LAPV anatomy in near-real-time. An automatic model-based segmentation algorithm allows for fast and accurate LAPV mesh generation despite the challenges posed by image quality; relative to pre-procedural cardiac CT/MR, 3-D RA images suffer from more artifacts and reduced signal-to-noise. We validate our integrated method by comparing 1) automatic and manual segmentations of intra-procedural 3-D RA data, 2) automatic segmentations of intra-procedural 3-D RA and pre-procedural CT/MR data, and 3) intra-procedural EAM point cloud data with automatic segmentations of 3-D RA and CT/MR data. Our validation results for automatically segmented intra-procedural 3-D RA data show average segmentation errors of 1) approximately 1.3 mm compared with manual 3-D RA segmentations, 2) approximately 2.3 mm compared with automatic segmentation of pre-procedural CT/MR data, and 3

  6. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Full Text Available Abstract Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a
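
    A naive version of the affiliation-parsing idea can be sketched as below (a hypothetical illustration with a made-up affiliation string; the paper's parsing rules, and their validation against PubMed/HuGE samples, are far more elaborate):

import re

# Naive affiliation-string parsing: first comma-separated token as a
# department/institution guess, last token (with any e-mail removed) as the country.
def parse_affiliation(affiliation):
    parts = [p.strip() for p in affiliation.split(",")]
    country = re.sub(r"\.?\s*\S+@\S+$", "", parts[-1]).rstrip(". ")
    institution = parts[0]
    return institution, country

aff = ("Department of Genetics, Example University, Springfield, IL 62704, "
       "USA. jdoe@example.edu")          # made-up example string
print(parse_affiliation(aff))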

  7. Automatic Supervision of Temperature, Humidity, and Luminance with an Assistant Personal Robot

    Directory of Open Access Journals (Sweden)

    Jordi Palacín

    2017-01-01

    Full Text Available Smart environments and Ambient Intelligence (AmI) technologies are defining the future society, where energy optimization and intelligent management are essential for a sustainable advance. Mobile robotics is also making an important contribution to this advance with the integration of sensors and intelligent processing algorithms. This paper presents the application of an Assistant Personal Robot (APR) as an autonomous agent for temperature, humidity, and luminance supervision in human-frequented areas. The robot's multiagent capabilities allow it to gather sensor information while exploring or performing specific tasks and then to verify human comfort levels. The proposed methodology creates information maps with the distribution of temperature, humidity, and luminance, interprets such information in terms of comfort, and warns about corrective actions if required.

  8. Automatic SAR/optical cross-matching for GCP monograph generation

    Science.gov (United States)

    Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa

    2016-10-01

    Ground Control Points (GCP), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for an automatic orthorectification of optical imagery if they can be robustly located in the basic optical images. The present study outlines a SAR/Optical cross-matching procedure that allows a robust alignment of radar and optical images and, consequently, the automatic derivation of the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two subsequent steps in order to progressively refine the precision. The first step is based on the maximization of the Mutual Information (MI) between optical and SAR chips, while the last one uses the Normalized Cross-Correlation as similarity metric. This work outlines the designed algorithmic solution and discusses the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images in order to evaluate the performance of the algorithm with respect to the optical spatial resolution. An assessment of the performance of the algorithm has been carried out, and errors are computed by measuring the distance between the GCP pixel/line position in the optical image, automatically estimated by the tool, and the "true" position of the GCP, visually identified by an expert user in the optical images.
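    A compact Python sketch of the two-stage similarity search described above (coarse mutual-information scan, then normalized cross-correlation refinement) is given below; the toy images, window size and search grids are assumptions, not the tool's actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mutual_information(a, b, bins=16):
    # Histogram-based mutual information between two equally sized patches.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def ncc(a, b):
    # Normalized cross-correlation of two equally sized patches.
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_position(image, chip, metric, candidates):
    # Return the candidate (row, col) that maximizes the similarity metric.
    h, w = chip.shape
    return max(candidates, key=lambda rc: metric(image[rc[0]:rc[0] + h, rc[1]:rc[1] + w], chip))

# Toy data: a smooth "optical" image and a noisy "SAR chip" cropped at (40, 50).
rng = np.random.default_rng(0)
optical = gaussian_filter(rng.random((128, 128)), sigma=3)
chip = optical[40:72, 50:82] + 0.01 * rng.random((32, 32))

# Stage 1: coarse search with mutual information on a sparse grid of positions.
coarse = [(r, c) for r in range(0, 97, 4) for c in range(0, 97, 4)]
r0, c0 = best_position(optical, chip, mutual_information, coarse)
# Stage 2: normalized cross-correlation refinement around the coarse match.
fine = [(r, c) for r in range(max(r0 - 4, 0), min(r0 + 5, 97))
               for c in range(max(c0 - 4, 0), min(c0 + 5, 97))]
print(best_position(optical, chip, ncc, fine))   # should recover the crop position (40, 50)
```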

  9. Automatic Generation of English/Chinese Thesaurus Based on a Parallel Corpus in Laws.

    Science.gov (United States)

    Yang, Christopher C.; Luk, Johnny

    2003-01-01

    Discusses the growth in the availability of information on the Web in languages other than English and focuses on cross-lingual semantic interoperability. Describes the development of an automatic English/Chinese thesaurus and reports results of an experiment with legal information from the Hong Kong government, including precision and recall.…

  10. Automatic speech recognition for report generation in computed tomography; Digitale Spracherkennung bei der Erfassung computertomographischer Befundtexte

    Energy Technology Data Exchange (ETDEWEB)

    Teichgraeber, U.K.M.; Ehrenstein, T. [Humboldt-Universitaet, Berlin (Germany). Strahlenklinik und Poliklinik; Lemke, M.; Liebig, T.; Stobbe, H.; Hosten, N.; Keske, U.; Felix, R.

    1999-11-01

    Purpose: A study was performed to compare the performance of automatic speech recognition (ASR) with conventional transcription. Materials and Methods: 100 CT reports were generated by using ASR and 100 CT reports were dictated and written by medical transcriptionists. The time for dictation and correction of errors by the radiologist was assessed and the type of mistakes was analysed. The text recognition rate was calculated in both groups and the average time between completion of the imaging study by the technologist and generation of the written report was assessed. A commercially available speech recognition technology (ASKA Software, IBM Via Voice) running on a personal computer was used. Results: The time for the dictation using digital voice recognition was 9.4±2.3 min compared to 4.5±3.6 min with an ordinary Dictaphone. The text recognition rate was 97% with digital voice recognition and 99% with medical transcriptionists. The average time from imaging completion to written report finalisation was reduced from 47.3 hours with medical transcriptionists to 12.7 hours with ASR. The analysis of misspellings demonstrated (ASR vs. medical transcriptionists): 3 vs. 4 syntax errors, 0 vs. 37 orthographic mistakes, 16 vs. 22 mistakes in substance and 47 vs. erroneously applied terms. Conclusions: The use of digital voice recognition as a replacement for medical transcription is recommendable when an immediate availability of written reports is necessary. (orig.) [German] Purpose: In a comparative study, the use of a continuous digital speech recognition system (DSS) was compared with conventional report generation by medical transcriptionists. Methods: 100 CT reports each were generated consecutively with the DSS and, in the conventional manner, dictated onto a sound carrier and typed up by the transcriptionists. The time required by the radiologist for dictation and correction was measured and the type of errors analysed. For both

  11. MAGE (M-file/Mif Automatic GEnerator): A graphical interface tool for automatic generation of Object Oriented Micromagnetic Framework configuration files and Matlab scripts for results analysis

    Science.gov (United States)

    Chęciński, Jakub; Frankowski, Marek

    2016-10-01

    We present a tool for fully automated generation of both simulation configuration files (Mif) and Matlab scripts for automated data analysis, dedicated to the Object Oriented Micromagnetic Framework (OOMMF). We introduce an extended graphical user interface (GUI) that allows for fast, error-proof and easy creation of Mifs, without any of the programming skills usually required for manual Mif writing. With MAGE we provide OOMMF extensions complementing it with magnetoresistance and spin-transfer-torque calculations, as well as local magnetization data selection for output. Our software allows for the creation of advanced simulation conditions such as simultaneous parameter sweeps and synchronized excitation application. Furthermore, since the output of such simulations can be long and complicated, we provide another GUI allowing for automated creation of Matlab scripts suitable for analysis of such data with Fourier and wavelet transforms as well as user-defined operations.

  12. Dynamic Price Vector Formation Model-Based Automatic Demand Response Strategy for PV-Assisted EV Charging Stations

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias; Zhang, Jianhua; Li, Zhigang; Shafie-Khah, Miadreza; Catalao, Joao P. S.

    2017-11-01

    A real-time price (RTP)-based automatic demand response (ADR) strategy for a PV-assisted electric vehicle (EV) Charging Station (PVCS) without vehicle-to-grid is proposed. The charging process is modeled as a dynamic linear program, instead of the normal day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model is proposed based on a clustering algorithm to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and upper bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which makes the DFEDR model suitable for global optimization. Finally, comparative cases articulate the advantages of the developed methods, and the validity of the proposed strategy in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption is verified through simulation scenarios.
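    A minimal sketch of the clustering idea behind the dynamic price vector formation is given below, assuming k-means over historical daily price profiles and synthetic prices; the cluster count, horizon and matching rule are illustrative, not the paper's model.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy history: 200 days of 24 hourly prices (synthetic stand-in for real RTP data).
rng = np.random.default_rng(1)
history = 30 + 10 * np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 3, (200, 24))

# Cluster whole-day profiles; each centroid is a candidate price vector for "a particular day".
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(history)

def price_vector_for_today(observed_morning):
    # Match the hours observed so far (e.g. 0-7 h) against centroid prefixes and
    # return the best-matching centroid as the assumed RTP vector for the day.
    k = len(observed_morning)
    d = np.linalg.norm(kmeans.cluster_centers_[:, :k] - observed_morning, axis=1)
    return kmeans.cluster_centers_[int(np.argmin(d))]

today_first_8h = history[0, :8] + rng.normal(0, 1, 8)
rtp_vector = price_vector_for_today(today_first_8h)
print(rtp_vector.round(1))
```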

  13. An automatic, vigorous-injection assisted dispersive liquid-liquid microextraction technique for stopped-flow spectrophotometric detection of boron.

    Science.gov (United States)

    Alexovič, Michal; Wieczorek, Marcin; Kozak, Joanna; Kościelniak, Paweł; Balogh, Ioseph S; Andruch, Vasil

    2015-02-01

    A novel automatic vigorous-injection assisted dispersive liquid-liquid microextraction procedure based on the use of a modified single-valve sequential injection manifold (SV-SIA) was developed and applied to the determination of boron in water samples. The major novelties of the procedure are the achievement of efficient dispersive liquid-liquid microextraction by means of a single vigorous injection (250 µL, 900 µL s⁻¹) of the extraction solvent (n-amylacetate) into the aqueous phase, which results in effective dispersive mixing without the use of a dispersive solvent, and, after self-separation of the phases, the forwarding of the extraction phase directly to a Z-flow cell (10 mm), without the use of a holding coil, for stopped-flow spectrophotometric detection. The calibration working range was linear up to 2.43 mg L⁻¹ of boron at a wavelength of 426 nm. The limit of detection, calculated as 3s of a blank test (n=10), was found to be 0.003 mg L⁻¹, and the relative standard deviation, measured for ten replicate determinations at 0.41 mg L⁻¹ of boron, was 5.6%. The validation of the method was tested using certified reference material. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Cloud Detection from Satellite Imagery: A Comparison of Expert-Generated and Automatically-Generated Decision Trees

    Science.gov (United States)

    Shiffman, Smadar

    2004-01-01

    Automated cloud detection and tracking is an important step in assessing global climate change via remote sensing. Cloud masks, which indicate whether individual pixels depict clouds, are included in many of the data products that are based on data acquired on board earth satellites. Many cloud-mask algorithms have the form of decision trees, which employ sequential tests that scientists designed based on empirical astrophysics studies and astrophysics simulations. Limitations of existing cloud masks restrict our ability to accurately track changes in cloud patterns over time. In this study we explored the potential benefits of automatically-learned decision trees for detecting clouds from images acquired using the Advanced Very High Resolution Radiometer (AVHRR) instrument on board the NOAA-14 weather satellite of the National Oceanic and Atmospheric Administration. We constructed three decision trees for a sample of 8 km daily AVHRR data from 2000 using a decision-tree learning procedure provided within MATLAB®, and compared the accuracy of the decision trees to the accuracy of the cloud mask. We used ground observations collected by the National Aeronautics and Space Administration's Clouds and the Earth's Radiant Energy System S'COOL project as the gold standard. For the sample data, the accuracy of automatically learned decision trees was greater than the accuracy of the cloud masks included in the AVHRR data product.
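    The comparison described above can be sketched as follows, with scikit-learn standing in for the MATLAB decision-tree learner; the synthetic pixel features, labels and the hand-crafted "expert" threshold are placeholders rather than real AVHRR channels or the operational cloud mask.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-pixel channel values and ground-truth cloud labels.
rng = np.random.default_rng(0)
X = rng.random((5000, 5))                          # 5 "channels" per pixel
y = (X[:, 0] + 0.5 * X[:, 3] > 0.9).astype(int)    # 1 = cloudy, 0 = clear (toy rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)

# Hand-crafted threshold test playing the role of an expert-designed cloud mask.
expert_mask = (X_test[:, 0] > 0.6).astype(int)
print("learned tree accuracy:", accuracy_score(y_test, tree.predict(X_test)))
print("expert-style mask accuracy:", accuracy_score(y_test, expert_mask))
```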

  15. Solution Approach to Automatic Generation Control Problem Using Hybridized Gravitational Search Algorithm Optimized PID and FOPID Controllers

    Directory of Open Access Journals (Sweden)

    DAHIYA, P.

    2015-05-01

    Full Text Available This paper presents the application of a hybrid opposition based disruption operator in gravitational search algorithm (DOGSA) to solve the automatic generation control (AGC) problem of a four area hydro-thermal-gas interconnected power system. The proposed DOGSA approach combines the advantages of opposition based learning, which enhances the speed of convergence, and the disruption operator, which has the ability to further explore and exploit the search space of the standard gravitational search algorithm (GSA). The addition of these two concepts to GSA increases its flexibility for solving complex optimization problems. This paper addresses the design and performance analysis of DOGSA based proportional integral derivative (PID) and fractional order proportional integral derivative (FOPID) controllers for the automatic generation control problem. The proposed approaches are demonstrated by comparing the results with the standard GSA, opposition learning based GSA (OGSA) and disruption based GSA (DGSA). The sensitivity analysis is also carried out to study the robustness of DOGSA tuned controllers in order to accommodate variations in operating load conditions, tie-line synchronizing coefficient, and time constants of governor and turbine. Further, the approaches are extended to a more realistic power system model by considering physical constraints such as thermal turbine generation rate constraint, speed governor dead band and time delay.

  16. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    Full Text Available During the development of software, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes due to delays in coding. This paper presents a metamodel for software implementation, which will give rise to a development tool for automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent in coding the artifacts that make up the software.

  17. Automatic Generation of Machine Emulators: Efficient Synthesis of Robust Virtual Machines for Legacy Software Migration

    DEFF Research Database (Denmark)

    Franz, Michael; Gal, Andreas; Probst, Christian

    2006-01-01

    As older mainframe architectures become obsolete, the corresponding legacy software is increasingly executed via platform emulators running on top of more modern commodity hardware. These emulators are virtual machines that often include a combination of interpreters and just-in-time compilers. Implementing interpreters and compilers for each combination of emulated and target platform independently of each other is a redundant and error-prone task. We describe an alternative approach that automatically synthesizes specialized virtual-machine interpreters and just-in-time compilers, which then execute on top of an existing software portability platform such as Java. The result is a considerably reduced implementation effort.

  18. Automatic selection of informative sentences: The sentences that can generate multiple choice questions

    Directory of Open Access Journals (Sweden)

    Mukta Majumder

    2014-12-01

    Full Text Available Traditional education cannot meet the expectations and requirements of a Smart City; it requires more advanced forms such as active learning and ICT-based education. Multiple choice questions (MCQs) play an important role in educational assessment and active learning, which has a key role in Smart City education. MCQs are effective for assessing the understanding of well-defined concepts. Only a fraction of all the sentences of a text contain well-defined concepts or information that can be asked as an MCQ. These informative sentences need to be identified first in order to prepare multiple choice questions manually or automatically. In this paper we propose a technique for automatic identification of such informative sentences that can act as the basis of MCQs. The technique is based on parse structure similarity. A reference set of parse structures is compiled with the help of existing MCQs. The parse structure of a new sentence is compared with the reference structures, and if similarity is found then the sentence is considered a potential candidate. Next, a rule-based post-processing module works on these potential candidates to select the final set of informative sentences. The proposed approach is tested in the sports domain, where many MCQs are easily available for preparing the reference set of structures. The quality of the system-selected sentences is evaluated manually. The experimental results show that the proposed technique is quite promising.
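    A minimal sketch of the parse-structure-similarity idea follows, using part-of-speech bigrams as a crude stand-in for full parse structures; the tags, reference sentence and threshold are illustrative assumptions, not the paper's representation.

```python
# Represent each sentence's "parse structure" by a coarse proxy (bigrams over POS tags
# assumed to come from an external tagger) and keep sentences whose structure is close
# to a reference set built from sentences that already underlie existing MCQs.

def structure(tagged_sentence):
    tags = [t for _, t in tagged_sentence]
    return set(zip(tags, tags[1:]))              # POS-tag bigrams as a structure proxy

def similarity(s1, s2):
    return len(s1 & s2) / max(len(s1 | s2), 1)   # Jaccard similarity

reference = [
    [("Brazil", "NNP"), ("won", "VBD"), ("the", "DT"), ("cup", "NN"), ("in", "IN"), ("1994", "CD")],
]
reference_structures = [structure(s) for s in reference]

def is_informative(tagged_sentence, threshold=0.5):
    s = structure(tagged_sentence)
    return any(similarity(s, r) >= threshold for r in reference_structures)

candidate = [("Italy", "NNP"), ("hosted", "VBD"), ("the", "DT"), ("event", "NN"), ("in", "IN"), ("1990", "CD")]
print(is_informative(candidate))   # True: same coarse structure as the reference sentence
```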

  19. A human-assisted computer generated LA-grammar for simple ...

    African Journals Online (AJOL)

    A set of computer programs to generate Left Associative Grammars (LAGs) for natural languages is described. The generation proceeds from examples of correct sentences and needs human assistance to correctly categorise word surfaces. An example LAG for simple Southern Sotho sentences is shown. Hausser's LAGs ...

  20. Characteristics Analysis of an Excitation Assistance Switched Reluctance Wind Power Generator

    DEFF Research Database (Denmark)

    Liu, Xiao; Wang, Chao; Chen, Zhe

    2015-01-01

    In order to fully analyze the characteristics of an excitation assistance switched reluctance generator (EASRG) applied in wind power generation, a static model and a dynamic model are proposed. The static model is based on the 3-D finite-element method (FEM), which can be used to obtain the static...

  1. A NEW APPROACH FOR THE SEMI-AUTOMATIC TEXTURE GENERATION OF THE BUILDINGS FACADES, FROM TERRESTRIAL LASER SCANNER DATA

    Directory of Open Access Journals (Sweden)

    E. Oniga

    2012-07-01

    Full Text Available The result of the terrestrial laser scanning is an impressive number of spatial points, each of them being characterized as position by the X, Y and Z co-ordinates, by the value of the laser reflectance and their real color, expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from the georeferenced digital images, taken with a high resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for the semiautomatic texture generation, using the color information, the RGB values of every point that has been taken by terrestrial laser scanning technology and the 3D surfaces defining the buildings facades, generated with the Leica Cyclone software. The first step is when the operator defines the limiting value, i.e. the minimum distance between a point and the closest surface. The second step consists in calculating the distances, or the perpendiculars drawn from each point to the closest surface. In the third step we associate the points whose 3D coordinates are known, to every surface, depending on the limiting value. The fourth step consists in computing the Voronoi diagram for the points that belong to a surface. The final step brings automatic association between the RGB value of the color code and the corresponding polygon of the Voronoi diagram. The advantage of using this algorithm is that we can obtain, in a semi-automatic manner, a photorealistic 3D model of the building.

  2. a New Approach for the Semi-Automatic Texture Generation of the Buildings Facades, from Terrestrial Laser Scanner Data

    Science.gov (United States)

    Oniga, E.

    2012-07-01

    The result of the terrestrial laser scanning is an impressive number of spatial points, each of them being characterized as position by the X, Y and Z co-ordinates, by the value of the laser reflectance and their real color, expressed as RGB (Red, Green, Blue) values. The color code for each LIDAR point is taken from the georeferenced digital images, taken with a high resolution panoramic camera incorporated in the scanner system. In this article I propose a new algorithm for the semiautomatic texture generation, using the color information, the RGB values of every point that has been taken by terrestrial laser scanning technology and the 3D surfaces defining the buildings facades, generated with the Leica Cyclone software. The first step is when the operator defines the limiting value, i.e. the minimum distance between a point and the closest surface. The second step consists in calculating the distances, or the perpendiculars drawn from each point to the closest surface. In the third step we associate the points whose 3D coordinates are known, to every surface, depending on the limiting value. The fourth step consists in computing the Voronoi diagram for the points that belong to a surface. The final step brings automatic association between the RGB value of the color code and the corresponding polygon of the Voronoi diagram. The advantage of using this algorithm is that we can obtain, in a semi-automatic manner, a photorealistic 3D model of the building.
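    The five steps above can be sketched as follows in Python (NumPy/SciPy); the synthetic facade plane, point cloud, colours and limiting value are assumptions used purely to illustrate the per-step logic.

```python
import numpy as np
from scipy.spatial import Voronoi

# Toy facade: a vertical plane x = 0 spanning y, z in [0, 10], plus scanned points with RGB colours.
rng = np.random.default_rng(0)
points = np.column_stack([rng.normal(0, 0.03, 200),      # x (small noise off the plane)
                          rng.uniform(0, 10, 200),       # y
                          rng.uniform(0, 10, 200)])      # z
colors = rng.integers(0, 256, (200, 3))                  # RGB per LIDAR point

plane_normal, plane_point = np.array([1.0, 0.0, 0.0]), np.zeros(3)
limit = 0.1                                              # step 1: operator-chosen limiting value

# Steps 2-3: distance of every point to the facade plane; keep points within the limit.
dist = np.abs((points - plane_point) @ plane_normal)
on_facade = dist <= limit
facade_pts, facade_rgb = points[on_facade], colors[on_facade]

# Step 4: Voronoi diagram of the retained points in the facade's own 2-D (y, z) frame.
vor = Voronoi(facade_pts[:, 1:3])

# Step 5: each Voronoi cell inherits the RGB value of its generating point,
# which is what produces the photorealistic texture patchwork.
cell_color = {i: facade_rgb[i] for i in range(len(facade_pts))}
print(len(facade_pts), "points textured;", len(vor.regions), "Voronoi cells")
```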

  3. A Solar Automatic Tracking System that Generates Power for Lighting Greenhouses

    Directory of Open Access Journals (Sweden)

    Qi-Xun Zhang

    2015-07-01

    Full Text Available In this study we design and test a novel solar tracking generation system. Moreover, we show that this system could be successfully used as an advanced solar power source to generate power in greenhouses. The system was developed after taking into consideration the geography, climate, and other environmental factors of northeast China. The experimental design of this study included the following steps: (i) the novel solar tracking generation system was measured, and its performance was analyzed; (ii) the system configuration and operation principles were evaluated; (iii) the performance of this power generation system and the solar irradiance were measured according to local time and conditions; (iv) the main factors affecting system performance were analyzed; and (v) the amount of power generated by the solar tracking system was compared with the power generated by fixed solar panels. The experimental results indicated that compared to the power generated by fixed solar panels, the solar tracking system generated about 20% to 25% more power. In addition, the performance of this novel power generating system was found to be closely associated with solar irradiance. Therefore, the solar tracking system provides a new approach to power generation in greenhouses.

  4. Wind power integration into the automatic generation control of power systems with large-scale wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Hansen, Anca Daniela; Altin, Müfit

    2014-01-01

    Transmission system operators have an increased interest in the active participation of wind power plants (WPP) in the power balance control of power systems with large wind power penetration. The emphasis in this study is on the integration of WPPs into the automatic generation control (AGC) of the power system. The present paper proposes a coordinated control strategy for the AGC between combined heat and power plants (CHPs) and WPPs to enhance the security and the reliability of a power system operation in the case of a large wind power penetration. The proposed strategy, described and exemplified for the future Danish power system, takes the hour-ahead regulating power plan for generation and power exchange with neighbouring power systems into account. The performance of the proposed strategy for coordinated secondary control is assessed and discussed by means of simulations for different

  5. A Solar Automatic Tracking System that Generates Power for Lighting Greenhouses

    OpenAIRE

    Qi-Xun Zhang; Hai-Ye Yu; Qiu-Yuan Zhang; Zhong-Yuan Zhang; Cheng-Hui Shao; Di Yang

    2015-01-01

    In this study we design and test a novel solar tracking generation system. Moreover, we show that this system could be successfully used as an advanced solar power source to generate power in greenhouses. The system was developed after taking into consideration the geography, climate, and other environmental factors of northeast China. The experimental design of this study included the following steps: (i) the novel solar tracking generation system was measured, and its performance was analyz...

  6. Computer-generated ovaries to assist follicle counting experiments.

    Directory of Open Access Journals (Sweden)

    Angelos Skodras

    Full Text Available Precise estimation of the number of follicles in ovaries is of key importance in the field of reproductive biology, both from a developmental point of view, where follicle numbers are determined at specific time points, as well as from a therapeutic perspective, determining the adverse effects of environmental toxins and cancer chemotherapeutics on the reproductive system. The two main factors affecting follicle number estimates are the sampling method and the variation in follicle numbers within animals of the same strain, due to biological variability. This study aims at assessing the effect of these two factors, when estimating ovarian follicle numbers of neonatal mice. We developed computer algorithms, which generate models of neonatal mouse ovaries (simulated ovaries, with characteristics derived from experimental measurements already available in the published literature. The simulated ovaries are used to reproduce in-silico counting experiments based on unbiased stereological techniques; the proposed approach provides the necessary number of ovaries and sampling frequency to be used in the experiments given a specific biological variability and a desirable degree of accuracy. The simulated ovary is a novel, versatile tool which can be used in the planning phase of experiments to estimate the expected number of animals and workload, ensuring appropriate statistical power of the resulting measurements. Moreover, the idea of the simulated ovary can be applied to other organs made up of large numbers of individual functional units.

  7. Automatic treatment planning facilitates fast generation of high-quality treatment plans for esophageal cancer

    DEFF Research Database (Denmark)

    Hansen, Christian Rønn; Nielsen, Morten; Bertelsen, Anders Smedegaard

    2017-01-01

    cancer patients. MATERIAL AND METHODS: Thirty-two consecutive inoperable patients with esophageal cancer originally treated with manually (MA) generated volumetric modulated arc therapy (VMAT) plans were retrospectively replanned using an auto-planning engine. All plans were optimized with one full 6MV...... to the lungs. The automation of the planning process generated esophageal cancer treatment plans quickly and with high quality....

  8. CarSim: Automatic 3D Scene Generation of a Car Accident Description

    NARCIS (Netherlands)

    Egges, A.; Nijholt, A.; Nugues, P.

    2001-01-01

    The problem of generating a 3D simulation of a car accident from a written description can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two system parts, we designed a template formalism to represent a written

  9. Automatically Generating Questions to Support the Acquisition of Particle Verbs: Evaluating via Crowdsourcing

    Science.gov (United States)

    Chinkina, Maria; Ruiz, Simón; Meurers, Detmar

    2017-01-01

    We integrate insights from research in Second Language Acquisition (SLA) and Computational Linguistics (CL) to generate text-based questions. We discuss the generation of wh- questions as functionally-driven input enhancement facilitating the acquisition of particle verbs and report the results of two crowdsourcing studies. The first study shows…

  10. Automatic Mesh Generation of Hybrid Mesh on Valves in Multiple Positions in Feedline Systems

    Science.gov (United States)

    Ross, Douglass H.; Ito, Yasushi; Dorothy, Fredric W.; Shih, Alan M.; Peugeot, John

    2010-01-01

    Fluid flow simulations through a valve often require evaluation of the valve in multiple opening positions. A mesh has to be generated for the valve for each position, and compounding the problem is the fact that the valve is typically part of a larger feedline system. In this paper, we propose to develop a system to create meshes for feedline systems with parametrically controlled valve openings. Herein we outline two approaches to generate the meshes for a valve in a feedline system at multiple positions. There are two issues that must be addressed. The first is the creation of the mesh on the valve for multiple positions. The second is the generation of the mesh for the total feedline system including the valve. For generation of the mesh on the valve, we will describe the use of topology matching and mesh generation parameter transfer. For generation of the total feedline system, we will describe two solutions that we have implemented. In both cases the valve is treated as a component in the feedline system. In the first method the geometry of the valve in the feedline system is replaced with a valve at a different opening position. Geometry is created to connect the valve to the feedline system. Then topology for the valve is created, and the portion of the topology for the valve is topology-matched to the standard valve in a different position. The mesh generation parameters are transferred and then the volume mesh for the whole feedline system is generated. The second method enables the user to generate the volume mesh on the valve in multiple open positions external to the feedline system, to insert it into the volume mesh of the feedline system, and to reduce the amount of computer time required for mesh generation because only two small volume meshes connecting the valve to the feedline mesh need to be updated.

  11. Automatic Generation of Overlays and Offset Values Based on Visiting Vehicle Telemetry and RWS Visuals

    Science.gov (United States)

    Dunne, Matthew J.

    2011-01-01

    The development of computer software as a tool to generate visual displays has led to an overall expansion of automated computer generated images in the aerospace industry. These visual overlays are generated by combining raw data with pre-existing data on the object or objects being analyzed on the screen. The National Aeronautics and Space Administration (NASA) uses this computer software to generate on-screen overlays when a Visiting Vehicle (VV) is berthing with the International Space Station (ISS). In order for Mission Control Center personnel to be a contributing factor in the VV berthing process, computer software similar to that on the ISS must be readily available on the ground to be used for analysis. In addition, this software must perform engineering calculations and save data for further analysis.

  12. An extensible six-step methodology to automatically generate fuzzy DSSs for diagnostic applications

    Science.gov (United States)

    2013-01-01

    Background The diagnosis of many diseases can often be formulated as a decision problem; uncertainty affects these problems, so that many computerized Diagnostic Decision Support Systems (in the following, DDSSs) have been developed to aid the physician in interpreting clinical data and thus to improve the quality of the whole process. Fuzzy logic, a well established attempt at the formalization and mechanization of human capabilities in reasoning and deciding with noisy information, can be profitably used. Recently, we informally proposed a general methodology to automatically build DDSSs on top of fuzzy knowledge extracted from data. Methods We carefully refine and formalize our methodology, which includes six stages, where the first three stages work with crisp rules, whereas the last three are employed on fuzzy models. Its strength relies on its generality and modularity, since it supports the integration of alternative techniques in each of its stages. Results The methodology is designed and implemented in the form of a modular and portable software architecture according to a component-based approach. The architecture is described in depth, and a summary inspection of the main components in terms of UML diagrams is outlined as well. A first implementation of the architecture has been realized in Java following the object-oriented paradigm and used to instantiate a DDSS example aimed at accurately diagnosing breast masses as a proof of concept. Conclusions The results prove the feasibility of the whole methodology implemented in terms of the architecture proposed. PMID:23368970

  13. Automatic landmark generation for deformable image registration evaluation for 4D CT images of lung

    Science.gov (United States)

    Vickress, J.; Battista, J.; Barnett, R.; Morgan, J.; Yartsev, S.

    2016-10-01

    Deformable image registration (DIR) has become a common tool in medical imaging across both diagnostic and treatment specialties, but the methods used offer varying levels of accuracy. Evaluation of DIR is commonly performed using manually selected landmarks, which is subjective, tedious and time consuming. We propose a semi-automated method that saves time and provides accuracy comparable to manual selection. Three landmarking methods, including manual (with two independent observers), scale invariant feature transform (SIFT), and SIFT with manual editing (SIFT-M), were tested on 10 thoracic 4DCT image studies corresponding to the 0% and 50% phases of respiration. Results of each method were evaluated against a gold standard (GS) landmark set by comparing both mean and proximal landmark displacements. The proximal method compares the local deformation magnitude between a test landmark pair and the closest GS pair. Statistical analysis was done using an intraclass correlation (ICC) between test and GS displacement values. The creation time per landmark pair was 22, 34, 2.3, and 4.3 s for observer 1, observer 2, SIFT, and SIFT-M, respectively. Across 20 lungs from the 10 CT studies, the ICC values between the GS and observer 1, observer 2, SIFT, and SIFT-M methods were 0.85, 0.85, 0.84, and 0.82 for mean lung deformation, and 0.97, 0.98, 0.91, and 0.96 for proximal landmark deformation, respectively. SIFT and SIFT-M methods have an accuracy that is comparable to manual methods when tested against a GS landmark set while saving 90% of the time. The number and distribution of landmarks significantly affected the analysis, as manifested by the different results for the mean deformation and proximal landmark deformation methods. Automatic landmark methods offer a promising alternative to manual landmarking, if the quantity, quality and distribution of landmarks can be optimized for the intended application.
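    A small Python sketch of the "proximal" evaluation described above follows: each test landmark's deformation magnitude is compared with that of the nearest gold-standard landmark. The synthetic landmarks are placeholders, and a Pearson correlation is used as a simple stand-in for the intraclass correlation reported in the study.

```python
import numpy as np

# Landmark pairs as (point_in_phase0, point_in_phase50); deformation = displacement norm.
rng = np.random.default_rng(0)
gs_p0 = rng.uniform(0, 100, (50, 3))
gs_disp = rng.normal(0, 5, (50, 3))
gs_pairs = (gs_p0, gs_p0 + gs_disp)                         # gold-standard landmark set

test_p0 = gs_p0 + rng.normal(0, 1.0, (50, 3))               # e.g. SIFT-detected landmarks
test_pairs = (test_p0, test_p0 + gs_disp + rng.normal(0, 0.5, (50, 3)))

def deformation(pairs):
    return np.linalg.norm(pairs[1] - pairs[0], axis=1)

def proximal_comparison(test_pairs, gs_pairs):
    # For each test landmark, compare its deformation magnitude with that of the
    # closest gold-standard landmark (the "proximal" evaluation described above).
    d_test, d_gs = deformation(test_pairs), deformation(gs_pairs)
    out = []
    for i, p in enumerate(test_pairs[0]):
        j = int(np.argmin(np.linalg.norm(gs_pairs[0] - p, axis=1)))
        out.append((d_test[i], d_gs[j]))
    return np.array(out)

pairs = proximal_comparison(test_pairs, gs_pairs)
# Pearson correlation as a simple stand-in for the intraclass correlation in the study.
print("agreement:", np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1])
```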

  14. Automatic Generation of Web Applications from Visual High-Level Functional Web Components

    Directory of Open Access Journals (Sweden)

    Quan Liang Chen

    2009-01-01

    Full Text Available This paper presents high-level functional Web components such as frames, framesets, and pivot tables, which conventional development environments for Web applications have not yet supported. Frameset Web components provide several editing facilities such as adding, deleting, changing, and nesting of framesets to make it easier to develop Web applications that use frame facilities. Pivot table Web components sum up various kinds of data in two dimensions. They reduce the amount of code to be written by developers greatly. The paper also describes the system that implements these high-level functional components as visual Web components. This system assists designers in the development of Web applications based on the page-transition framework that models a Web application as a set of Web page transitions, and by using visual Web components, makes it easier to write processes to be executed when a Web page transfers to another.

  15. Towards the Automatic Generation of Programmed Foreign-Language Instructional Materials.

    Science.gov (United States)

    Van Campen, Joseph A.

    The purpose of this report is to describe a set of programs which either perform certain tasks useful in the generation of programed foreign-language instructional material or facilitate the writing of such task-oriented programs by other researchers. The programs described are these: (1) a PDP-10 assembly language program for the selection from a…

  16. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    Science.gov (United States)

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly. The randomness of the initialization therefore leads to different ICA decomposition results, so a single decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although utilizing RDICA has achieved satisfying results in validating the performance of ICA decomposition, RDICA costs much computing time. To mitigate this problem, in this paper, we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with the automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to indicate the effectiveness of the new method and made a performance comparison of the traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves much computing time compared to RDICA. Furthermore, the ROC (Receiver Operating Characteristic) power analysis also denoted the better signal reconstruction performance of ATGP-ICA compared to RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
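    One possible reading of the initialization idea is sketched below: an ATGP-style orthogonal-projection selection picks deterministic rows for FastICA's initial unmixing matrix instead of a random start. The toy signals, the whitening step and the use of scikit-learn's FastICA are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def atgp(X, n_targets):
    # Automatic target generation process: repeatedly pick the sample with the
    # largest norm in the subspace orthogonal to the targets already selected.
    targets = [X[np.argmax(np.linalg.norm(X, axis=1))]]
    while len(targets) < n_targets:
        U = np.stack(targets, axis=1)                       # columns = targets found so far
        P = np.eye(X.shape[1]) - U @ np.linalg.pinv(U.T @ U) @ U.T
        residual = X @ P
        targets.append(X[np.argmax(np.linalg.norm(residual, axis=1))])
    return np.stack(targets)

# Toy mixed signals standing in for fMRI time courses (samples x observed mixtures).
rng = np.random.default_rng(0)
S = np.column_stack([np.sin(np.linspace(0, 20, 500)),
                     np.sign(np.sin(np.linspace(0, 30, 500)))])
X = S @ rng.random((2, 4))

n = 2
# Whiten to n components, then let ATGP choose deterministic rows for w_init
# (scikit-learn expects w_init of shape (n_components, n_components)).
Z = PCA(n_components=n, whiten=True, random_state=0).fit_transform(X)
w_init = atgp(Z, n)

ica = FastICA(n_components=n, w_init=w_init, random_state=0)
sources = ica.fit_transform(X)                              # deterministic given w_init
```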

  17. Extending a User Interface Prototyping Tool with Automatic MISRA C Code Generation

    Directory of Open Access Journals (Sweden)

    Gioacchino Mauro

    2017-01-01

    Full Text Available We are concerned with systems, particularly safety-critical systems, that involve interaction between users and devices, such as the user interface of medical devices. We therefore developed a MISRA C code generator for formal models expressed in the PVSio-web prototyping toolkit. PVSio-web allows developers to rapidly generate realistic interactive prototypes for verifying usability and safety requirements in human-machine interfaces. The visual appearance of the prototypes is based on a picture of a physical device, and the behaviour of the prototype is defined by an executable formal model. Our approach transforms the PVSio-web prototyping tool into a model-based engineering toolkit that, starting from a formally verified user interface design model, will produce MISRA C code that can be compiled and linked into a final product. An initial validation of our tool is presented for the data entry system of an actual medical device.

  18. Generative Computer-Assisted Instruction and Artificial Intelligence. Report No. 5.

    Science.gov (United States)

    Sinnott, Loraine T.

    This paper reviews the state-of-the-art in generative computer-assisted instruction and artificial intelligence. It divides relevant research into three areas of instructional modeling: models of the subject matter; models of the learner's state of knowledge; and models of teaching strategies. Within these areas, work sponsored by Advanced…

  19. Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3

    Science.gov (United States)

    2015-12-01

    …Topology Generator [BRITE]), writing models and scenarios remains a manual, complex, and time-consuming task. Models are used to represent nodes… protocols. We identified limitations and implemented a system that could utilize some of these tools to extract the vocabulary and grammar. We collected 3… sniffer or by specifying an existing capture, network flow, or other accepted formats. • Protocol inference modules: The vocabulary and grammar inference

  20. GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare

    Directory of Open Access Journals (Sweden)

    Rahman Ali

    2015-07-01

    Full Text Available A wide array of biomedical data are generated and made available to healthcare experts. However, due to the diverse nature of the data, it is difficult to predict outcomes from them. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and to generate a unified dataset with a “data modeler” tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset and physical activity data collected using different sensors. To realize the significance of the unified dataset, we adopted a well-known rough set theory based rules creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that the tool, on average, reduces the time effort of the experts and knowledge engineer by 94.1% while creating unified datasets.
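    The priority-based resolution of overlapping attributes can be sketched with pandas as below; the three toy sources, the key column and the priority order are illustrative assumptions, not the GUDM schema itself.

```python
import pandas as pd

# Three toy sources with an overlapping "glucose" attribute; real GUDM sources would be
# clinical trial records, social media interactions, and sensor-based activity data.
clinical = pd.DataFrame({"patient": [1, 2], "glucose": [7.8, 6.1], "hba1c": [6.5, 5.9]})
sensors  = pd.DataFrame({"patient": [1, 2], "glucose": [7.5, None], "steps": [4200, 8800]})
social   = pd.DataFrame({"patient": [1, 2], "mood": ["low", "ok"]})

# User-centric priority: for overlapping attributes, earlier sources win.
priority = [clinical, sensors, social]

def unify(frames, key="patient"):
    unified = frames[0].set_index(key)
    for f in frames[1:]:
        # combine_first keeps existing (higher-priority) values and fills gaps from f.
        unified = unified.combine_first(f.set_index(key))
    return unified.reset_index()

print(unify(priority))
```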

  1. Automatic mesh-point clustering near a boundary in grid generation with elliptic partial differential equations

    Science.gov (United States)

    Steger, J. L.; Sorenson, R. L.

    1979-01-01

    Elliptic partial differential equations are used to generate a smooth grid that permits a one-to-one mapping in such a way that mesh lines of the same family do not cross. Problems that arise due to a lack of clustering at crucial points, or to intersections of mesh lines at highly acute angles, are examined, and various forcing or source terms are used to correct these problems; the terms are either compatible with the maximum principle or so locally controlled that mesh lines do not intersect. Attention is given to various schematics of unclustered grids and to grid detail about (highly cambered) airfoils.
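    A minimal Python sketch of homogeneous elliptic (Winslow-type) grid generation is given below; the forcing/source terms used for clustering control in the paper are omitted here, and the quarter-annulus test domain and iteration count are arbitrary.

```python
import numpy as np

def winslow_grid(x, y, iterations=200):
    # Iteratively relax the homogeneous elliptic grid equations
    #   alpha*x_xixi - 2*beta*x_xieta + gamma*x_etaeta = 0  (and likewise for y).
    # Forcing terms P, Q (used for boundary clustering) would be added to the RHS below.
    for _ in range(iterations):
        x_e, x_w = x[2:, 1:-1], x[:-2, 1:-1]
        x_n, x_s = x[1:-1, 2:], x[1:-1, :-2]
        y_e, y_w = y[2:, 1:-1], y[:-2, 1:-1]
        y_n, y_s = y[1:-1, 2:], y[1:-1, :-2]
        x_xi, x_eta = (x_e - x_w) / 2, (x_n - x_s) / 2
        y_xi, y_eta = (y_e - y_w) / 2, (y_n - y_s) / 2
        alpha = x_eta**2 + y_eta**2
        beta = x_xi * x_eta + y_xi * y_eta
        gamma = x_xi**2 + y_xi**2
        x_cross = (x[2:, 2:] - x[2:, :-2] - x[:-2, 2:] + x[:-2, :-2]) / 4
        y_cross = (y[2:, 2:] - y[2:, :-2] - y[:-2, 2:] + y[:-2, :-2]) / 4
        denom = 2 * (alpha + gamma) + 1e-12
        x[1:-1, 1:-1] = (alpha * (x_e + x_w) + gamma * (x_n + x_s) - 2 * beta * x_cross) / denom
        y[1:-1, 1:-1] = (alpha * (y_e + y_w) + gamma * (y_n + y_s) - 2 * beta * y_cross) / denom
    return x, y

# Test domain: algebraic grid on a quarter annulus, then elliptic smoothing (boundaries fixed).
n = 21
r = np.linspace(1.0, 2.0, n)[:, None] * np.ones((1, n))
t = np.ones((n, 1)) * np.linspace(0, np.pi / 2, n)[None, :]
x, y = winslow_grid((r * np.cos(t)).copy(), (r * np.sin(t)).copy())
```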

  2. Automatic Aircraft Structural Topology Generation for Multidisciplinary Optimization and Weight Estimation

    Science.gov (United States)

    Sensmeier, Mark D.; Samareh, Jamshid A.

    2005-01-01

    An approach is proposed for the application of rapid generation of moderate-fidelity structural finite element models of air vehicle structures to allow more accurate weight estimation earlier in the vehicle design process. This should help to rapidly assess many structural layouts before the start of the preliminary design phase and eliminate weight penalties imposed when actual structure weights exceed those estimated during conceptual design. By defining the structural topology in a fully parametric manner, the structure can be mapped to arbitrary vehicle configurations being considered during conceptual design optimization. A demonstration of this process is shown for two sample aircraft wing designs.

  3. Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People

    Directory of Open Access Journals (Sweden)

    Alan F. Smeaton

    2010-02-01

    Full Text Available In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better informed decisions on a single sensor’s output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one’s life with many images including the places they go, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors, by augmenting passive visual lifelogs with “Web 2.0” content collected by millions of other individuals.

  4. Automatically augmenting lifelog events using pervasively generated content from millions of people.

    Science.gov (United States)

    Doherty, Aiden R; Smeaton, Alan F

    2010-01-01

    In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better informed decisions on a single sensor's output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one's life with many images including the places they go, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors, by augmenting passive visual lifelogs with "Web 2.0" content collected by millions of other individuals.

  5. Intra-Hour Dispatch and Automatic Generator Control Demonstration with Solar Forecasting - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Coimbra, Carlos F. M. [Univ. of California, San Diego, CA (United States)]

    2016-02-25

    In this project we address multiple resource integration challenges associated with increasing levels of solar penetration that arise from the variability and uncertainty in solar irradiance. We will model the SMUD service region as its own balancing region, and develop an integrated, real-time operational tool that takes solar-load forecast uncertainties into consideration and commits optimal energy resources and reserves for intra-hour and intra-day decisions. The primary objectives of this effort are to reduce power system operation cost by committing appropriate amount of energy resources and reserves, as well as to provide operators a prediction of the generation fleet’s behavior in real time for realistic PV penetration scenarios. The proposed methodology includes the following steps: clustering analysis on the expected solar variability per region for the SMUD system, Day-ahead (DA) and real-time (RT) load forecasts for the entire service areas, 1-year of intra-hour CPR forecasts for cluster centers, 1-year of smart re-forecasting CPR forecasts in real-time for determination of irreducible errors, and uncertainty quantification for integrated solar-load for both distributed and central stations (selected locations within service region) PV generation.

  6. Automatic generation of design structure matrices through the evolution of product models

    DEFF Research Database (Denmark)

    Gopsill, James A.; Snider, Chris; McMahon, Chris

    2016-01-01

    Dealing with component interactions and dependencies remains a core and fundamental aspect of engineering, where conflicts and constraints are solved on an almost daily basis. Failure to consider these interactions and dependencies can lead to costly overruns, failure to meet requirements, and le… sense. For these reasons, tools and methods to support the identification and monitoring of component interactions and dependencies continue to be an active area of research. In particular, design structure matrices (DSMs) have been extensively applied to identify and visualize product and organizational architectures across a number of engineering disciplines. However, the process of generating these DSMs has primarily used surveys, structured interviews, and/or meetings with engineers. As a consequence, there is a high cost associated with engineers' time, alongside the requirement to continually… computer-aided design, finite element analysis, and computational fluid dynamics systems. The paper shows that a DSM generated from the changes in the product models corroborates with the product architecture as defined by the engineers and with results from previous DSM studies. In addition, further levels…

  7. BIOSMILE: a semantic role labeling system for biomedical verbs using a maximum-entropy model with automatically generated template features.

    Science.gov (United States)

    Tsai, Richard Tzong-Han; Chou, Wen-Chi; Su, Ying-Shan; Lin, Yu-Chun; Sung, Cheng-Lung; Dai, Hong-Jie; Yeh, Irene Tzu-Hsuan; Ku, Wei; Sung, Ting-Yi; Hsu, Wen-Lian

    2007-09-01

    Bioinformatics tools for automatic processing of biomedical literature are invaluable for both the design and interpretation of large-scale experiments. Many information extraction (IE) systems that incorporate natural language processing (NLP) techniques have thus been developed for use in the biomedical field. A key IE task in this field is the extraction of biomedical relations, such as protein-protein and gene-disease interactions. However, most biomedical relation extraction systems usually ignore adverbial and prepositional phrases and words identifying location, manner, timing, and condition, which are essential for describing biomedical relations. Semantic role labeling (SRL) is a natural language processing technique that identifies the semantic roles of these words or phrases in sentences and expresses them as predicate-argument structures. We construct a biomedical SRL system called BIOSMILE that uses a maximum entropy (ME) machine-learning model to extract biomedical relations. BIOSMILE is trained on BioProp, our semi-automatic, annotated biomedical proposition bank. Currently, we are focusing on 30 biomedical verbs that are frequently used or considered important for describing molecular events. To evaluate the performance of BIOSMILE, we conducted two experiments to (1) compare the performance of SRL systems trained on newswire and biomedical corpora; and (2) examine the effects of using biomedical-specific features. The experimental results show that using BioProp improves the F-score of the SRL system by 21.45% over an SRL system that uses a newswire corpus. It is noteworthy that adding automatically generated template features improves the overall F-score by a further 0.52%. Specifically, ArgM-LOC, ArgM-MNR, and Arg2 achieve statistically significant performance improvements of 3.33%, 2.27%, and 1.44%, respectively. We demonstrate the necessity of using a biomedical proposition bank for training SRL systems in the biomedical domain. Besides the

  8. BIOSMILE: A semantic role labeling system for biomedical verbs using a maximum-entropy model with automatically generated template features

    Directory of Open Access Journals (Sweden)

    Tsai Richard

    2007-09-01

    Full Text Available Abstract Background Bioinformatics tools for automatic processing of biomedical literature are invaluable for both the design and interpretation of large-scale experiments. Many information extraction (IE) systems that incorporate natural language processing (NLP) techniques have thus been developed for use in the biomedical field. A key IE task in this field is the extraction of biomedical relations, such as protein-protein and gene-disease interactions. However, most biomedical relation extraction systems usually ignore adverbial and prepositional phrases and words identifying location, manner, timing, and condition, which are essential for describing biomedical relations. Semantic role labeling (SRL) is a natural language processing technique that identifies the semantic roles of these words or phrases in sentences and expresses them as predicate-argument structures. We construct a biomedical SRL system called BIOSMILE that uses a maximum entropy (ME) machine-learning model to extract biomedical relations. BIOSMILE is trained on BioProp, our semi-automatic, annotated biomedical proposition bank. Currently, we are focusing on 30 biomedical verbs that are frequently used or considered important for describing molecular events. Results To evaluate the performance of BIOSMILE, we conducted two experiments to (1) compare the performance of SRL systems trained on newswire and biomedical corpora; and (2) examine the effects of using biomedical-specific features. The experimental results show that using BioProp improves the F-score of the SRL system by 21.45% over an SRL system that uses a newswire corpus. It is noteworthy that adding automatically generated template features improves the overall F-score by a further 0.52%. Specifically, ArgM-LOC, ArgM-MNR, and Arg2 achieve statistically significant performance improvements of 3.33%, 2.27%, and 1.44%, respectively. Conclusion We demonstrate the necessity of using a biomedical proposition bank for training
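    The following sketch shows, in miniature, what a maximum-entropy argument classifier with a combined "template" feature might look like, using scikit-learn's logistic regression as the ME model; the toy features, labels and sentences are invented for illustration and bear no relation to BioProp or BIOSMILE's actual feature set.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy constituent-level examples: each candidate argument of a biomedical verb is described
# by a few hand-written features plus a combined "template" feature, mimicking the idea of
# automatically generated template features discussed above.
def featurize(head, phrase_type, position, verb):
    feats = {"head": head, "pt": phrase_type, "pos": position, "verb": verb}
    feats["tmpl_pt+pos+verb"] = f"{phrase_type}|{position}|{verb}"   # template feature
    return feats

train = [
    (featurize("IL-2", "NP", "before", "activates"), "Arg0"),
    (featurize("NF-kB", "NP", "after", "activates"), "Arg1"),
    (featurize("in T cells", "PP", "after", "activates"), "ArgM-LOC"),
    (featurize("rapidly", "ADVP", "after", "activates"), "ArgM-MNR"),
]
X, y = zip(*train)

# Multinomial logistic regression is the maximum entropy model.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(list(X), list(y))

test = featurize("in macrophages", "PP", "after", "activates")
print(model.predict([test])[0])   # likely ArgM-LOC, driven by the PP/template features
```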

  9. GENPRO: automatic generation of Prolog clause files for knowledge-based systems in the biomedical sciences.

    Science.gov (United States)

    Saldanha, J; Eccles, J R

    1989-03-01

    With the increasing interest in using knowledge-based approaches for protein structure prediction and modelling, there is a requirement for general techniques to convert molecular biological data into structures that can be interpreted by artificial intelligence programming languages (e.g. Prolog). We describe here an interactive program that generates files in Prolog clausal form from the most commonly distributed protein structural data collections. The program is flexible and enables a variety of clause structures to be defined by the user through a general schema definition system. Our method can be extended to include other types of molecular biological database or those containing non-structural information, thus providing a uniform framework for handling the increasing volume of data available to knowledge-based systems in biomedicine.

  10. Wind power integration into the automatic generation control of power systems with large-scale wind power

    Directory of Open Access Journals (Sweden)

    Abdul Basit

    2014-10-01

    Full Text Available Transmission system operators have an increased interest in the active participation of wind power plants (WPP) in the power balance control of power systems with large wind power penetration. The emphasis in this study is on the integration of WPPs into the automatic generation control (AGC) of the power system. The present paper proposes a coordinated control strategy for the AGC between combined heat and power plants (CHPs) and WPPs to enhance the security and the reliability of a power system operation in the case of a large wind power penetration. The proposed strategy, described and exemplified for the future Danish power system, takes the hour-ahead regulating power plan for generation and power exchange with neighbouring power systems into account. The performance of the proposed strategy for coordinated secondary control is assessed and discussed by means of simulations for different possible future scenarios, when wind power production in the power system is high and conventional production from CHPs is at a minimum level. The investigation results of the proposed control strategy have shown that the WPPs can actively help the AGC, and reduce the real-time power imbalance in the power system, by down regulating their production when CHPs are unable to provide the required response.

  11. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    Science.gov (United States)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  12. FigSum: Automatically Generating Structured Text Summaries for Figures in Biomedical Literature

    Science.gov (United States)

    Agarwal, Shashank; Yu, Hong

    2009-01-01

    Figures are frequently used in biomedical articles to support research findings; however, they are often difficult to comprehend based on their legends alone and information from the full-text articles is required to fully understand them. Previously, we found that the information associated with a single figure is distributed throughout the full-text article the figure appears in. Here, we develop and evaluate a figure summarization system – FigSum, which aggregates this scattered information to improve figure comprehension. For each figure in an article, FigSum generates a structured text summary comprising one sentence from each of the four rhetorical categories – Introduction, Methods, Results and Discussion (IMRaD). The IMRaD category of sentences is predicted by an automated machine learning classifier. Our evaluation shows that FigSum captures 53% of the sentences in the gold standard summaries annotated by biomedical scientists and achieves an average ROUGE-1 score of 0.70, which is higher than a baseline system. PMID:20351812

  13. Automatic generation control of multi-area power systems with diverse energy sources using Teaching Learning Based Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Rabindra Kumar Sahu

    2016-03-01

    Full Text Available This paper presents the design and analysis of Proportional-Integral-Double Derivative (PIDD) controller for Automatic Generation Control (AGC) of multi-area power systems with diverse energy sources using Teaching Learning Based Optimization (TLBO) algorithm. At first, a two-area reheat thermal power system with appropriate Generation Rate Constraint (GRC) is considered. The design problem is formulated as an optimization problem and TLBO is employed to optimize the parameters of the PIDD controller. The superiority of the proposed TLBO based PIDD controller has been demonstrated by comparing the results with recently published optimization techniques such as hybrid Firefly Algorithm and Pattern Search (hFA-PS), Firefly Algorithm (FA), Bacteria Foraging Optimization Algorithm (BFOA), Genetic Algorithm (GA) and conventional Ziegler Nichols (ZN) for the same interconnected power system. Also, the proposed approach has been extended to a two-area power system with diverse sources of generation like thermal, hydro, wind and diesel units. The system model includes boiler dynamics, GRC and Governor Dead Band (GDB) non-linearity. It is observed from simulation results that the proposed approach provides better dynamic responses compared with those recently published in the literature. Further, the study is extended to a three unequal-area thermal power system with different controllers in each area and the results are compared with a published FA optimized PID controller for the same system under study. Finally, sensitivity analysis is performed by varying the system parameters and operating load conditions in the range of ±25% from their nominal values to test the robustness.
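
    For readers unfamiliar with the optimizer named above, the following is a minimal, generic sketch of Teaching Learning Based Optimization applied to a placeholder cost function. The four-gain objective stands in for an ITAE-type cost of the PIDD controller and is an assumption for illustration only; it is not the AGC model used in the paper.

```python
import numpy as np

def tlbo(objective, bounds, pop_size=20, iterations=100, seed=0):
    """Generic Teaching-Learning-Based Optimization (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    for _ in range(iterations):
        # Teacher phase: move learners towards the best solution found so far.
        teacher = pop[fit.argmin()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)                      # teaching factor, 1 or 2
        for i in range(pop_size):
            cand = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), lo, hi)
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
        # Learner phase: pairwise interaction between randomly chosen learners.
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + rng.random(dim) * step, lo, hi)
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
    best = fit.argmin()
    return pop[best], fit[best]

# Placeholder objective standing in for the simulated cost of a PIDD controller
# with gains (Kp, Ki, Kd1, Kd2); a real cost would come from the AGC simulation.
cost = lambda k: float(np.sum((k - np.array([1.5, 0.8, 0.4, 0.1]))**2))
best_gains, best_cost = tlbo(cost, bounds=[(0, 3)] * 4)
print(best_gains, best_cost)
```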

  14. The SBAS Sentinel-1 Surveillance service for automatic and systematic generation of Earth surface displacement within the GEP platform.

    Science.gov (United States)

    Casu, Francesco; De Luca, Claudio; Lanari, Riccardo; Manunta, Michele; Zinno, Ivana

    2017-04-01

    The Geohazards Exploitation Platform (GEP) is an ESA activity of the Earth Observation (EO) ground segment to demonstrate the benefit of new technologies for large scale processing of EO data. GEP aims at providing both on-demand processing services for scientific users of the geohazards community and an integration platform for new EO data analysis processors dedicated to scientists and other expert users. In the Remote Sensing scenario, a crucial role is played by the recently launched Sentinel-1 (S1) constellation that, with its global acquisition policy, has literally flooded the scientific community with a huge amount of data acquired over a large part of the Earth on a regular basis (down to 6 days with both Sentinel-1A and 1B passes). Moreover, the S1 data, as part of the European Copernicus program, are openly and freely accessible, thus fostering their use for the development of tools for Earth surface monitoring. In particular, due to their specific SAR Interferometry (InSAR) design, Sentinel-1 satellites can be exploited to build up operational services for the generation of advanced interferometric products that can be very useful within risk management and natural hazard monitoring scenarios. Accordingly, in this work we present the activities carried out for the development, integration, and deployment of the SBAS Sentinel-1 Surveillance service of CNR-IREA within the GEP platform. This service is based on a parallel implementation of the SBAS approach, referred to as P-SBAS, able to effectively run in large distributed computing infrastructures (grid and cloud) and to allow for an efficient computation of large SAR data sequences with advanced DInSAR approaches. In particular, the Surveillance service developed on the GEP platform consists of the systematic and automatic processing of Sentinel-1 data on selected Areas of Interest (AoI) to generate updated surface displacement time series via the SBAS-InSAR algorithm. We built up a system that is

  15. A computer-assisted verification of hyperchaos in the Saito hysteresis chaos generator

    Energy Technology Data Exchange (ETDEWEB)

    Li Qingdu [Institute for Nonlinear Systems, Chongqing University of Posts and Telecomm., Chongqing 400065 (China); Yang Xiaosong [Department of Mathematics, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2006-07-21

    This paper presents a computer-assisted verification of hyperchaos in the well-known Saito hysteresis chaos generator (SHCG) by virtue of topological horseshoe theory. By means of interval analysis we find two disjoint compact subsets in a carefully chosen 3D cross section that can guarantee the existence of a topological horseshoe for the corresponding third-return Poincare map. Numerical studies show that the Poincare map expands in two directions. It justifiably indicates that there exists hyperchaos in the SHCG.

  16. CAV_KO: a Simple 1-D Lagrangian Hydrocode for MS EXCEL™ with Automatic Generation of X-T Diagrams

    Science.gov (United States)

    Tsembelis, K.; Ramsden, B.; Proud, W. G.; Borg, J.

    2007-12-01

    Hydrocodes are widely used to predict or simulate highly dynamic and transient events such as blast and impact. Codes such as GRIM, CTH or AUTODYN are well developed and involve complex numerical methods and in many cases require a large computing infrastructure. In this paper we present a simple 1-D Lagrangian hydrocode developed at the University of Cambridge, called CAV_KO, written in Visual Basic. The motivation was to produce a code which, while relatively simple, is useful for both experimental planning and teaching. The code has been adapted from the original KO code written in FORTRAN by J. Borg, which, in turn, is based on the algorithm developed by Wilkins [1]. The developed GUI within MS Excel™ and the automatic generation of x-t diagrams allow CAV_KO to be a useful tool for quick calculations of plate impact events and teaching purposes. The VB code is licensed under the GNU General Public License and a MS Excel™ spreadsheet containing the code can be downloaded from www.shockphysics.com together with a copy of the user guide.

  17. Automatic generation of smart earthquake-resistant building system: Hybrid system of base-isolation and building-connection.

    Science.gov (United States)

    Kasagi, M; Fujita, K; Tsuji, M; Takewaki, I

    2016-02-01

    A base-isolated building may sometimes exhibit an undesirable large response to a long-duration, long-period earthquake ground motion and a connected building system without base-isolation may show a large response to a near-fault (rather high-frequency) earthquake ground motion. To overcome both deficiencies, a new hybrid control system of base-isolation and building-connection is proposed and investigated. In this new hybrid building system, a base-isolated building is connected to a stiffer free wall with oil dampers. It has been demonstrated in a preliminary research that the proposed hybrid system is effective both for near-fault (rather high-frequency) and long-duration, long-period earthquake ground motions and has sufficient redundancy and robustness for a broad range of earthquake ground motions. An automatic generation algorithm of this kind of smart structures of base-isolation and building-connection hybrid systems is presented in this paper. It is shown that, while the proposed algorithm does not work well in a building without the connecting-damper system, it works well in the proposed smart hybrid system with the connecting damper system.

  18. Automatic generation of smart earthquake-resistant building system: Hybrid system of base-isolation and building-connection

    Directory of Open Access Journals (Sweden)

    M. Kasagi

    2016-02-01

    Full Text Available A base-isolated building may sometimes exhibit an undesirable large response to a long-duration, long-period earthquake ground motion and a connected building system without base-isolation may show a large response to a near-fault (rather high-frequency) earthquake ground motion. To overcome both deficiencies, a new hybrid control system of base-isolation and building-connection is proposed and investigated. In this new hybrid building system, a base-isolated building is connected to a stiffer free wall with oil dampers. It has been demonstrated in a preliminary research that the proposed hybrid system is effective both for near-fault (rather high-frequency) and long-duration, long-period earthquake ground motions and has sufficient redundancy and robustness for a broad range of earthquake ground motions. An automatic generation algorithm of this kind of smart structures of base-isolation and building-connection hybrid systems is presented in this paper. It is shown that, while the proposed algorithm does not work well in a building without the connecting-damper system, it works well in the proposed smart hybrid system with the connecting damper system.

  19. Long-term fiscal implications of funding assisted reproduction: a generational accounting model for Spain

    Directory of Open Access Journals (Sweden)

    R. Matorras

    2015-12-01

    Full Text Available The aim of this study was to assess the lifetime economic benefits of assisted reproduction in Spain by calculating the return on this investment. We developed a generational accounting model that simulates the flow of taxes paid by the individual, minus direct government transfers received over the individual’s lifetime. The difference between discounted transfers and taxes minus the cost of either IVF or artificial insemination (AI) equals the net fiscal contribution (NFC) of a child conceived through assisted reproduction. We conducted sensitivity analysis to test the robustness of our results under various macroeconomic scenarios. A child conceived through assisted reproduction would contribute €370,482 in net taxes to the Spanish Treasury and would receive €275,972 in transfers over their lifetime. Taking into account that only 75% of assisted reproduction pregnancies are successful, the NFC was estimated at €66,709 for IVF-conceived children and €67,253 for AI-conceived children. The return on investment for each euro invested was €15.98 for IVF and €18.53 for AI. The long-term NFC of a child conceived through assisted reproduction could range from €466,379 to €-9,529 (IVF) and from €466,923 to €-8,985 (AI). The return on investment would vary between €-2.28 and €111.75 (IVF), and €-2.48 and €128.66 (AI) for each euro invested. The break-even point at which the financial position would begin to favour the Spanish Treasury ranges between 29 and 41 years of age. Investment in assisted reproductive techniques may lead to positive discounted future fiscal revenue, notwithstanding its beneficial psychological effect for infertile couples in Spain.
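
    The fiscal arithmetic behind the quoted figures can be reproduced directly from the numbers in the abstract. In the sketch below the per-treatment costs are back-calculated from the reported NFC values and are therefore assumptions, not figures taken from the study.

```python
# Reproducing the headline figures of the generational accounting model from
# the numbers quoted in the abstract. The treatment costs are not stated
# explicitly and are back-calculated here, so treat them as assumptions.
taxes, transfers = 370_482.0, 275_972.0     # lifetime discounted euros
p_success = 0.75                            # share of successful ART pregnancies

def net_fiscal_contribution(cost):
    return p_success * (taxes - transfers) - cost

for label, cost in [("IVF", 4_173.5), ("AI", 3_629.5)]:   # assumed costs (EUR)
    nfc = net_fiscal_contribution(cost)
    print(f"{label}: NFC = {nfc:,.0f} EUR, return per euro = {nfc / cost:.2f}")
# -> roughly 66,709 EUR / 15.98 for IVF and 67,253 EUR / 18.53 for AI
```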

  20. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning

    DEFF Research Database (Denmark)

    Olesen, Alexander Neergaard; Christensen, Julie Anja Engelhard; Sørensen, Helge Bjarup Dissing

    2016-01-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen’s kappa of 0.74 indicating substantial agreement between…
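
    The reported agreement statistics (82% accuracy, Cohen's kappa of 0.74) can be illustrated with a small, self-contained computation of kappa from a confusion matrix; the matrix below is a toy example, not data from the study.

```python
import numpy as np

def cohens_kappa(confusion: np.ndarray) -> float:
    """Cohen's kappa from a confusion matrix (rows: reference, cols: predicted)."""
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Toy 3-stage confusion matrix (wake / NREM / REM), purely illustrative.
cm = np.array([[90,  8,  2],
               [10, 70, 20],
               [ 3, 12, 85]])
print(f"accuracy = {np.trace(cm) / cm.sum():.2f}, kappa = {cohens_kappa(cm):.2f}")
```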

  1. Hydrogen Assisted Cracking in Pearlitic Steel Rods: The Role of Residual Stresses Generated by Fatigue Precracking

    Directory of Open Access Journals (Sweden)

    Jesús Toribio

    2017-05-01

    Full Text Available Stress corrosion cracking (SCC) of metals is an issue of major concern in engineering since this phenomenon causes many catastrophic failures of structural components in aggressive environments. SCC is even more harmful under cathodic conditions promoting the phenomenon known as hydrogen assisted cracking (HAC), hydrogen assisted fracture (HAF) or hydrogen embrittlement (HE). A common way to assess the susceptibility of a given material to HAC, HAF or HE is to subject a cracked rod to a constant extension rate tension (CERT) test until it fractures in this harsh environment. This paper analyzes the influence of a residual stress field generated by fatigue precracking on the sample’s posterior susceptibility to HAC. To achieve this goal, numerical simulations were carried out of hydrogen diffusion assisted by the stress field. Firstly, a mechanical simulation of the fatigue precracking was developed for revealing the residual stress field after diverse cyclic loading scenarios and posterior stress field evolution during CERT loading. Afterwards, a simulation of hydrogen diffusion assisted by stress was carried out considering the residual stresses after fatigue and the superposed rising stresses caused by CERT loading. Results reveal the key role of the residual stress field after fatigue precracking in the HAC phenomena in cracked steel rods as well as the beneficial effect of compressive residual stress.

  2. Hydrogen Assisted Cracking in Pearlitic Steel Rods: The Role of Residual Stresses Generated by Fatigue Precracking.

    Science.gov (United States)

    Toribio, Jesús; Aguado, Leticia; Lorenzo, Miguel; Kharin, Viktor

    2017-05-02

    Stress corrosion cracking (SCC) of metals is an issue of major concern in engineering since this phenomenon causes many catastrophic failures of structural components in aggressive environments. SCC is even more harmful under cathodic conditions promoting the phenomenon known as hydrogen assisted cracking (HAC), hydrogen assisted fracture (HAF) or hydrogen embrittlement (HE). A common way to assess the susceptibility of a given material to HAC, HAF or HE is to subject a cracked rod to a constant extension rate tension (CERT) test until it fractures in this harsh environment. This paper analyzes the influence of a residual stress field generated by fatigue precracking on the sample's posterior susceptibility to HAC. To achieve this goal, numerical simulations were carried out of hydrogen diffusion assisted by the stress field. Firstly, a mechanical simulation of the fatigue precracking was developed for revealing the residual stress field after diverse cyclic loading scenarios and posterior stress field evolution during CERT loading. Afterwards, a simulation of hydrogen diffusion assisted by stress was carried out considering the residual stresses after fatigue and the superposed rising stresses caused by CERT loading. Results reveal the key role of the residual stress field after fatigue precracking in the HAC phenomena in cracked steel rods as well as the beneficial effect of compressive residual stress.

  3. Modeling and simulation of the automatic generation control of electric power systems; Modelado y simulacion del control automatico de generacion de sistemas electricos de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Caballero Ortiz, Ezequiel

    2002-12-01

    This work is devoted to the analysis of the automatic generation control of electric power systems, based on the information generated by the load-frequency control loop and the automatic voltage regulator loop. To accomplish the analysis, classical control theory and feedback control system concepts are applied; modern control theory concepts are also employed. The studies are carried out on a digital computer using MATLAB and the simulation capabilities available in the SIMULINK tool. In this thesis the theoretical and physical concepts of automatic generation control are established, dividing it into the load-frequency control and automatic voltage regulator loops. The mathematical models of the two control loops are established. Later, the models of the elements are interconnected in order to integrate the load-frequency control loop, and the digital simulation of the system is carried out. First, the function of primary control is analyzed in area-machine, area-multimachine and multiarea-multimachine power systems. Then, the automatic generation control of single-area and multi-area power systems is studied. The economic dispatch concept is established and, with this plan, the multi-area power system is simulated; thereafter the energy exchange among areas in steady state is studied. The mathematical models of the component elements of the automatic voltage regulator control loop are interconnected. Data according to the nature of each component are generated and their behavior is simulated to analyze the system response. The two control loops are interconnected and a simulation is carried out with the previously generated data, examining the performance of the automatic generation control and the interaction between the two control loops. Finally, the pole placement and optimal control techniques of modern control theory are applied to the automatic generation control of an area.
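
    As a minimal illustration of the load-frequency control loop analyzed in this thesis, the sketch below simulates a single-area system (governor, turbine, rotating mass and load damping) with an integral secondary controller. The model structure is the standard textbook one and all parameter values are illustrative, not those used in the original work.

```python
import numpy as np

# Minimal single-area load-frequency control model with an integral (AGC)
# secondary controller. All parameter values are illustrative, in per unit.
Tg, Tt = 0.2, 0.5          # governor and turbine time constants (s)
R, D, M = 0.05, 0.8, 10.0  # droop, load damping, 2H (inertia)
Ki = 0.3                   # secondary (integral) controller gain
dPL = 0.1                  # step load increase at t = 0 (pu)

dt, T_end = 0.01, 60.0
df = pg = pm = pref = 0.0  # frequency deviation and loop states
history = []
for _ in np.arange(0.0, T_end, dt):
    dpg = (pref - df / R - pg) / Tg        # governor
    dpm = (pg - pm) / Tt                   # turbine
    ddf = (pm - dPL - D * df) / M          # swing / load-frequency dynamics
    dpref = -Ki * df                       # secondary control drives df to zero
    pg, pm, df, pref = pg + dt * dpg, pm + dt * dpm, df + dt * ddf, pref + dt * dpref
    history.append(df)

print(f"peak frequency deviation: {min(history):.4f} pu, final: {history[-1]:.5f} pu")
```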

  4. Acoustic emission-based in-process monitoring of surface generation in robot-assisted polishing

    DEFF Research Database (Denmark)

    Pilny, Lukas; Bissacco, Giuliano; De Chiffre, Leonardo

    2016-01-01

    The applicability of acoustic emission (AE) measurements for in-process monitoring of surface generation in the robot-assisted polishing (RAP) process was investigated. Surface roughness measurements require interruption of the process, proper surface cleaning and measurements that sometimes necessitate removal of the part from the machine tool. In this study, stabilisation of surface roughness during polishing of rotational symmetric surfaces by the RAP process was monitored by AE measurements. An AE sensor was placed on a polishing arm in direct contact with a bonded abrasive polishing tool…

  5. NET-VISA, a Bayesian method next-generation automatic association software. Latest developments and operational assessment.

    Science.gov (United States)

    Le Bras, Ronan; Kushida, Noriyuki; Mialle, Pierrick; Tomuta, Elena; Arora, Nimar

    2017-04-01

    The Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) has been developing a Bayesian method and software to perform the key step of automatic association of seismological, hydroacoustic, and infrasound (SHI) parametric data. In our preliminary testing in the CTBTO, NET-VISA shows much better performance than its currently operating automatic association module, with a rate for automatic events matching the analyst-reviewed events increased by 10%, signifying that the percentage of missed events is lowered by 40%. Initial tests involving analysts also showed that the new software will complete the automatic bulletins of the CTBTO by adding previously missed events. Because products by the CTBTO are also widely distributed to its member States as well as throughout the seismological community, the introduction of a new technology must be carried out carefully, and the first step of operational integration is to use NET-VISA results within the interactive analysts' software so that the analysts can check the robustness of the Bayesian approach. We report on the latest results both on the progress for automatic processing and for the initial introduction of NET-VISA results in the analyst review process.

  6. Evaluation of plan quality assurance models for prostate cancer patients based on fully automatically generated Pareto-optimal treatment plans.

    Science.gov (United States)

    Wang, Yibing; Breedveld, Sebastiaan; Heijmen, Ben; Petit, Steven F

    2016-06-07

    IMRT planning with commercial Treatment Planning Systems (TPSs) is a trial-and-error process. Consequently, the quality of treatment plans may not be consistent among patients, planners and institutions. Recently, different plan quality assurance (QA) models have been proposed, that could flag and guide improvement of suboptimal treatment plans. However, the performance of these models was validated using plans that were created using the conventional trial-and-error treatment planning process. Consequently, it is challenging to assess and compare quantitatively the accuracy of different treatment planning QA models. Therefore, we created a golden standard dataset of consistently planned Pareto-optimal IMRT plans for 115 prostate patients. Next, the dataset was used to assess the performance of a treatment planning QA model that uses the overlap volume histogram (OVH). 115 prostate IMRT plans were fully automatically planned using our in-house developed TPS Erasmus-iCycle. An existing OVH model was trained on the plans of 58 of the patients. Next it was applied to predict DVHs of the rectum, bladder and anus of the remaining 57 patients. The predictions were compared with the achieved values of the golden standard plans for the rectum Dmean, V65, and V75, and the Dmean of the anus and the bladder. For the rectum, the prediction errors (predicted-achieved) were only -0.2 ± 0.9 Gy (mean ± 1 SD) for Dmean, -1.0 ± 1.6% for V65, and -0.4 ± 1.1% for V75. For the Dmean of the anus and the bladder, the prediction error was 0.1 ± 1.6 Gy and 4.8 ± 4.1 Gy, respectively. Increasing the training cohort to 114 patients only led to minor improvements. A dataset of consistently planned Pareto-optimal prostate IMRT plans was generated. This dataset can be used to train new, and validate and compare existing treatment planning QA models, and has been made publicly available. The OVH model was highly accurate
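
    The overlap volume histogram used by the QA model is, at its core, a geometric quantity: the fraction of an organ-at-risk volume lying within a given distance of the target. A possible sketch of that computation on voxel masks is shown below; it is a generic construction with toy data, not the published model.

```python
import numpy as np
from scipy import ndimage

def overlap_volume_histogram(target, oar, spacing, distances):
    """Fraction of the OAR volume lying within each distance of the target surface.

    target, oar : boolean 3D masks on the same grid
    spacing     : voxel spacing (mm), e.g. (2.0, 1.0, 1.0)
    distances   : expansion distances (mm) at which the OVH is evaluated
    """
    # Signed distance to the target: negative inside, positive outside.
    dist_out = ndimage.distance_transform_edt(~target, sampling=spacing)
    dist_in = ndimage.distance_transform_edt(target, sampling=spacing)
    signed = np.where(target, -dist_in, dist_out)
    oar_dist = signed[oar]
    return [(oar_dist <= d).mean() for d in distances]

# Toy example: a spherical "target" and a displaced spherical "OAR".
z, y, x = np.ogrid[:60, :60, :60]
target = (z - 30)**2 + (y - 30)**2 + (x - 25)**2 <= 10**2
oar = (z - 30)**2 + (y - 30)**2 + (x - 45)**2 <= 8**2
print(overlap_volume_histogram(target, oar, spacing=(1, 1, 1),
                               distances=[0, 5, 10, 15, 20]))
```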

  7. An algorithm for generating data accessibility recommendations for flight deck Automatic Dependent Surveillance-Broadcast (ADS-B) applications

    Science.gov (United States)

    2014-09-09

    Automatic Dependent Surveillance-Broadcast (ADS-B) In technology supports the display of traffic data on Cockpit Displays of Traffic Information (CDTIs). The data are used by flightcrews to perform defined self-separation procedures, such as the in-t...

  8. Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences

    Energy Technology Data Exchange (ETDEWEB)

    Dowling, Jason A., E-mail: jason.dowling@csiro.au [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); University of Newcastle, Callaghan, New South Wales (Australia); Sun, Jidi [University of Newcastle, Callaghan, New South Wales (Australia); Pichler, Peter [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Rivest-Hénault, David; Ghose, Soumya [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); Richardson, Haylea [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Wratten, Chris; Martin, Jarad [University of Newcastle, Callaghan, New South Wales (Australia); Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Arm, Jameen [Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia); Best, Leah [Department of Radiology, Hunter New England Health, New Lambton, New South Wales (Australia); Chandra, Shekhar S. [School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland (Australia); Fripp, Jurgen [CSIRO Australian e-Health Research Centre, Herston, Queensland (Australia); Menk, Frederick W. [University of Newcastle, Callaghan, New South Wales (Australia); Greer, Peter B. [University of Newcastle, Callaghan, New South Wales (Australia); Calvary Mater Newcastle Hospital, Waratah, New South Wales (Australia)

    2015-12-01

    Purpose: To validate automatic substitute computed tomography (sCT) scans generated from standard T2-weighted (T2w) magnetic resonance (MR) pelvic scans for MR-Sim prostate treatment planning. Patients and Methods: A Siemens Skyra 3T MR imaging (MRI) scanner with laser bridge, flat couch, and pelvic coil mounts was used to scan 39 patients scheduled for external beam radiation therapy for localized prostate cancer. For sCT generation a whole-pelvis MRI scan (1.6 mm 3-dimensional isotropic T2w SPACE [Sampling Perfection with Application optimized Contrasts using different flip angle Evolution] sequence) was acquired. Three additional small field of view scans were acquired: T2w, T2*w, and T1w flip angle 80° for gold fiducials. Patients received a routine planning CT scan. Manual contouring of the prostate, rectum, bladder, and bones was performed independently on the CT and MR scans. Three experienced observers contoured each organ on MRI, allowing interobserver quantification. To generate a training database, each patient CT scan was coregistered to their whole-pelvis T2w using symmetric rigid registration and structure-guided deformable registration. A new multi-atlas local weighted voting method was used to generate automatic contours and sCT results. Results: The mean error in Hounsfield units between the sCT and corresponding patient CT (within the body contour) was 0.6 ± 14.7 (mean ± 1 SD), with a mean absolute error of 40.5 ± 8.2 Hounsfield units. Automatic contouring results were very close to the expert interobserver level (Dice similarity coefficient): prostate 0.80 ± 0.08, bladder 0.86 ± 0.12, rectum 0.84 ± 0.06, bones 0.91 ± 0.03, and body 1.00 ± 0.003. The change in monitor units between the sCT-based plans relative to the gold standard CT plan for the same dose prescription was found to be 0.3% ± 0.8%. The 3-dimensional γ pass rate was 1.00 ± 0.00 (2 mm/2%). Conclusions: The MR-Sim setup and automatic s
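
    The two headline metrics above, the Dice similarity coefficient for the automatic contours and the mean absolute Hounsfield-unit error for the sCT, are straightforward to compute; a minimal sketch with toy arrays follows (the masks and values are illustrative only).

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_absolute_hu_error(sct, ct, body_mask):
    """Mean absolute Hounsfield-unit error inside the body contour."""
    return np.abs(sct[body_mask] - ct[body_mask]).mean()

# Toy data standing in for an automatic and a manual prostate contour.
auto = np.zeros((40, 40), bool); auto[10:30, 12:30] = True
manual = np.zeros((40, 40), bool); manual[11:31, 10:28] = True
print(f"Dice = {dice(auto, manual):.2f}")
```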

  9. Extraction: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants; Extracsion: un systeme de controle automatique par courants de Foucault des tubes de generateurs de vapeur de centrales nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Georgel, B.; Zorgati, R.

    1994-12-31

    Improving the speed and quality of eddy current non-destructive testing of steam generator tubes requires automating all processes that contribute to the diagnosis. This paper describes how we use signal processing, pattern recognition and artificial intelligence to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs.

  10. Automotive Radar and Lidar Systems for Next Generation Driver Assistance Functions

    Directory of Open Access Journals (Sweden)

    R. H. Rasshofer

    2005-01-01

    Full Text Available Automotive radar and lidar sensors represent key components for next generation driver assistance functions (Jones, 2001). Today, their use is limited to comfort applications in premium segment vehicles although an evolution process towards more safety-oriented functions is taking place. Radar sensors available on the market today suffer from low angular resolution and poor target detection in medium ranges (30 to 60 m) over azimuth angles larger than ±30°. In contrast, Lidar sensors show large sensitivity towards environmental influences (e.g. snow, fog, dirt). Both sensor technologies today have a rather high cost level, forbidding their wide-spread usage on mass markets. A common approach to overcome individual sensor drawbacks is the employment of data fusion techniques (Bar-Shalom, 2001). Raw data fusion requires a common, standardized data interface to easily integrate a variety of asynchronous sensor data into a fusion network. Moreover, next generation sensors should be able to dynamically adapt to new situations and should have the ability to work in cooperative sensor environments. As vehicular function development today is being shifted more and more towards virtual prototyping, mathematical sensor models should be available. These models should take into account the sensor's functional principle as well as all typical measurement errors generated by the sensor.

  11. A predictive thermal dynamic model for parameter generation in the laser assisted direct write process

    Science.gov (United States)

    Shang, Shuo; Fearon, Eamonn; Wellburn, Dan; Sato, Taku; Edwardson, Stuart; Dearden, G.; Watkins, K. G.

    2011-11-01

    The laser assisted direct write (LADW) method can be used to generate electrical circuitry on a substrate by depositing metallic ink and curing the ink thermally by a laser. Laser curing has emerged over recent years as a novel yet efficient alternative to oven curing. This method can be used in situ, over complicated 3D contours of large parts (e.g. aircraft wings) and selectively cure over heat sensitive substrates, with little or no thermal damage. In previous studies, empirical methods have been used to generate processing windows for this technique, relating to the several interdependent processing parameters on which the curing quality and efficiency strongly depend. Incorrect parameters can result in a track that is cured in some areas and uncured in others, or in damaged substrates. This paper addresses the strong need for a quantitative model which can systematically output the processing conditions for a given combination of ink, substrate and laser source; transforming the LADW technique from a purely empirical approach, to a simple, repeatable, mathematically sound, efficient and predictable process. The method comprises a novel and generic finite element model (FEM) that for the first time predicts the evolution of the thermal profile of the ink track during laser curing and thus generates a parametric map which indicates the most suitable combination of parameters for process optimization. Experimental data are compared with simulation results to verify the accuracy of the model.
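
    The paper's thermal model is a full finite element simulation; as a much simpler illustration of the underlying idea, the sketch below solves a 1-D explicit finite-difference heat equation for an ink layer heated by a constant absorbed laser flux at the surface. Material properties, layer thickness and flux are illustrative placeholders, not values from the paper.

```python
import numpy as np

# 1-D explicit finite-difference sketch of laser heating of a thin ink/substrate
# column. Material properties and laser flux are illustrative placeholders.
L, nx = 200e-6, 101                # 200 micron column, grid points
dx = L / (nx - 1)
alpha = 1.2e-7                     # thermal diffusivity (m^2/s)
k = 0.3                            # thermal conductivity (W/m K)
q_laser = 2.0e5                    # absorbed laser flux at the surface (W/m^2)
dt = 0.4 * dx**2 / alpha           # stable explicit time step
T = np.full(nx, 293.0)             # initial temperature (K)

for _ in range(int(0.05 / dt)):    # simulate 50 ms of curing
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    T[0] = T[1] + q_laser * dx / k # laser flux boundary at the ink surface
    T[-1] = 293.0                  # substrate back face held at ambient

print(f"peak temperature after 50 ms: {T.max():.1f} K")
```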

  12. Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System.

    Science.gov (United States)

    Angibaud, Laurent D; Dai, Yifei; Liebelt, Ralph A; Gao, Bo; Gulbransen, Scott W; Silver, Xeve S

    2015-06-01

    Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has been generally overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. The pooled mean unsigned errors generated by the CAOS system were equal or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (system investigated. This study presented a set of methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence of extra-articular deformity in the knee.

  13. Effective Generation and Update of a Building Map Database Through Automatic Building Change Detection from LiDAR Point Cloud Data

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2015-10-01

    Full Text Available Periodic building change detection is important for many applications, including disaster management. Building map databases need to be updated based on detected changes so as to ensure their currency and usefulness. This paper first presents a graphical user interface (GUI) developed to support the creation of a building database from building footprints automatically extracted from LiDAR (light detection and ranging) point cloud data. An automatic building change detection technique by which buildings are automatically extracted from newly-available LiDAR point cloud data and compared to those within an existing building database is then presented. Buildings identified as totally new or demolished are directly added to the change detection output. However, for part-building demolition or extension, a connected component analysis algorithm is applied, and for each connected building component, the area, width and height are estimated in order to ascertain if it can be considered as a demolished or new building-part. Using the developed GUI, a user can quickly examine each suggested change and indicate his/her decision to update the database, with a minimum number of mouse clicks. In experimental tests, the proposed change detection technique was found to produce almost no omission errors, and when compared to the number of reference building corners, it reduced the human interaction to 14% for initial building map generation and to 3% for map updating. Thus, the proposed approach can be exploited for enhanced automated building information updating within a topographic database.
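
    The connected component analysis step described above can be sketched with standard tools: label the connected changed regions, then keep only those whose area, width and height pass thresholds plausibly corresponding to a building part. The thresholds and the scipy-based implementation below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def filter_building_changes(change_mask, height_grid, pixel_size=0.5,
                            min_area_m2=10.0, min_width_m=1.5, min_height_m=2.5):
    """Keep connected changed regions whose area, width and height suggest a
    genuine new or demolished building part (thresholds are illustrative)."""
    labels, n = ndimage.label(change_mask)
    keep = np.zeros_like(change_mask, dtype=bool)
    for lab, box in enumerate(ndimage.find_objects(labels), start=1):
        region = labels[box] == lab
        area = region.sum() * pixel_size**2
        width = min(region.shape) * pixel_size           # coarse width proxy
        height = height_grid[box][region].mean()         # mean height above ground
        if area >= min_area_m2 and width >= min_width_m and height >= min_height_m:
            keep[box][region] = True
    return keep

# Toy example: two changed blobs, only the larger/taller one survives filtering.
mask = np.zeros((50, 50), bool)
mask[5:20, 5:25] = True      # large candidate
mask[40:42, 40:42] = True    # small noise blob
heights = np.full((50, 50), 4.0)
print(filter_building_changes(mask, heights).sum(), "pixels kept")
```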

  14. The Generation of Automatic Mapping for Buildings, Using High Spatial Resolution Digital Vertical Aerial Photography and LiDAR Point Clouds

    Directory of Open Access Journals (Sweden)

    William Barragán Zaque

    2015-06-01

    Full Text Available The aim of this paper is to generate photogrammetric products and to automatically map buildings in the area of interest in vector format. The research was conducted in Bogotá using high resolution digital vertical aerial photographs and point clouds obtained using LiDAR technology. Image segmentation was also used, alongside radiometric and geometric digital processes. The process took into account aspects including building height, segmentation algorithms, and spectral band combination. The results had an effectiveness of 97.2%, validated through ground-truthing.

  15. Refraction-Assisted Solar Thermoelectric Generator based on Phase-Change Lens.

    Science.gov (United States)

    Kim, Myoung-Soo; Kim, Min-Ki; Jo, Sung-Eun; Joo, Chulmin; Kim, Yong-Jun

    2016-06-10

    Solar thermoelectric generators (STEGs), which are used for various applications, (particularly small size electronic devices), have optical concentration systems for high energy conversion efficiency. In this study, a refraction-assisted STEG (R-STEG) is designed based on phase-change materials. As the phase-change material (PCM) changes phase from solid to liquid, its refractive index and transmittance also change, resulting in changes in the refraction of the sunlight transmitted through it, and concentration of solar energy in the phase-change lens. This innovative design facilitates double focusing the solar energy through the optical lens and a phase-change lens. This mechanism resulted in the peak energy conversion efficiencies of the R-STEG being 60% and 86% higher than those of the typical STEG at solar intensities of 1 kW m⁻² and 1.5 kW m⁻², respectively. In addition, the energy stored in PCM can help to generate steady electrical energy when the solar energy was removed. This work presents significant progress regarding the optical characteristic of PCM and optical concentration systems of STEGs.

  16. Refraction-Assisted Solar Thermoelectric Generator based on Phase-Change Lens

    Science.gov (United States)

    Kim, Myoung-Soo; Kim, Min-Ki; Jo, Sung-Eun; Joo, Chulmin; Kim, Yong-Jun

    2016-01-01

    Solar thermoelectric generators (STEGs), which are used for various applications, (particularly small size electronic devices), have optical concentration systems for high energy conversion efficiency. In this study, a refraction-assisted STEG (R-STEG) is designed based on phase-change materials. As the phase-change material (PCM) changes phase from solid to liquid, its refractive index and transmittance also change, resulting in changes in the refraction of the sunlight transmitted through it, and concentration of solar energy in the phase-change lens. This innovative design facilitates double focusing the solar energy through the optical lens and a phase-change lens. This mechanism resulted in the peak energy conversion efficiencies of the R-STEG being 60% and 86% higher than those of the typical STEG at solar intensities of 1 kW m⁻² and 1.5 kW m⁻², respectively. In addition, the energy stored in PCM can help to generate steady electrical energy when the solar energy was removed. This work presents significant progress regarding the optical characteristic of PCM and optical concentration systems of STEGs. PMID:27283350

  17. Study on heat pipe assisted thermoelectric power generation system from exhaust gas

    Science.gov (United States)

    Chi, Ri-Guang; Park, Jong-Chan; Rhi, Seok-Ho; Lee, Kye-Bock

    2017-11-01

    Currently, most fuel consumed by vehicles is released to the environment as thermal energy through the exhaust pipe. Environmentally friendly vehicle technology needs new methods to increase the recycling efficiency of waste exhaust thermal energy. The present study investigated how to improve the maximum power output of a TEG (Thermoelectric generator) system assisted with a heat pipe. Conventionally, the driving energy efficiency of an internal combustion engine is approximately less than 35%. TEG with Seebeck elements is a new idea for recycling waste exhaust heat energy. The TEG system can efficiently utilize low temperature waste heat, such as industrial waste heat and solar energy. In addition, the heat pipe can transfer heat from the automobile's exhaust gas to a TEG. To improve the efficiency of the thermal power generation system with a heat pipe, effects of various parameters, such as inclination angle, charged amount of the heat pipe, condenser temperature, and size of the TEM (thermoelectric element), were investigated. Experimental studies, CFD simulation, and the theoretical approach to thermoelectric modules were carried out, and the TEG system with heat pipe (15-20% charged, 20°-30° inclined configuration) showed the best performance.

  18. Study on heat pipe assisted thermoelectric power generation system from exhaust gas

    Science.gov (United States)

    Chi, Ri-Guang; Park, Jong-Chan; Rhi, Seok-Ho; Lee, Kye-Bock

    2017-04-01

    Currently, most fuel consumed by vehicles is released to the environment as thermal energy through the exhaust pipe. Environmentally friendly vehicle technology needs new methods to increase the recycling efficiency of waste exhaust thermal energy. The present study investigated how to improve the maximum power output of a TEG (Thermoelectric generator) system assisted with a heat pipe. Conventionally, the driving energy efficiency of an internal combustion engine is approximately less than 35%. TEG with Seebeck elements is a new idea for recycling waste exhaust heat energy. The TEG system can efficiently utilize low temperature waste heat, such as industrial waste heat and solar energy. In addition, the heat pipe can transfer heat from the automobile's exhaust gas to a TEG. To improve the efficiency of the thermal power generation system with a heat pipe, effects of various parameters, such as inclination angle, charged amount of the heat pipe, condenser temperature, and size of the TEM (thermoelectric element), were investigated. Experimental studies, CFD simulation, and the theoretical approach to thermoelectric modules were carried out, and the TEG system with heat pipe (15-20% charged, 20°-30° inclined configuration) showed the best performance.
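
    A back-of-the-envelope estimate of the module output that such a heat-pipe-assisted TEG targets follows from the open-circuit voltage S·ΔT and the matched-load power (S·ΔT)²/(4R). The module Seebeck coefficient and internal resistance below are illustrative values for a commercial module, not measurements from the study.

```python
# Back-of-the-envelope output of a single thermoelectric module fed by a heat
# pipe. Seebeck coefficient and internal resistance are illustrative values for
# a commercial Bi2Te3 module, not measurements from the study.
seebeck = 0.05      # module Seebeck coefficient (V/K)
r_internal = 1.6    # module internal resistance (ohm)

def matched_load_power(delta_T):
    v_oc = seebeck * delta_T                 # open-circuit voltage
    return v_oc**2 / (4.0 * r_internal)      # max power at R_load = R_internal

for dT in (50, 100, 150):                    # hot-to-cold side difference (K)
    print(f"dT = {dT:3d} K -> V_oc = {seebeck*dT:4.1f} V, "
          f"P_max = {matched_load_power(dT):5.2f} W")
```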

  19. Development of next-generation mapping populations: Multi-parent Advanced Generation Inter-Cross (MAGIC) and Marker-Assisted Recurrent Selection (MARS) populations in peanut

    Science.gov (United States)

    Multi-parent Advanced Generation Inter-Cross (MAGIC) and Marker-Assisted Recurrent Selection (MARS) have been proposed and used in many crops to dissect complex traits or QTL. MAGIC allows for dissecting genomic structure, and for improving breeding populations by integrating multiple alleles from different parents. MAR

  20. A new tool for rapid and automatic estimation of earthquake source parameters and generation of seismic bulletins

    Science.gov (United States)

    Zollo, Aldo

    2016-04-01

    RISS S.r.l. is a Spin-off company recently born from the initiative of the research group constituting the Seismology Laboratory of the Department of Physics of the University of Naples Federico II. RISS is an innovative start-up, based on the decade-long experience in earthquake monitoring systems and seismic data analysis of its members and has the major goal to transform the most recent innovations of the scientific research into technological products and prototypes. With this aim, RISS has recently started the development of a new software, which is an elegant solution to manage and analyse seismic data and to create automatic earthquake bulletins. The software has been initially developed to manage data recorded at the ISNet network (Irpinia Seismic Network), which is a network of seismic stations deployed in Southern Apennines along the active fault system responsible for the 1980, November 23, MS 6.9 Irpinia earthquake. The software, however, is fully exportable and can be used to manage data from different networks, with any kind of station geometry or network configuration and is able to provide reliable estimates of earthquake source parameters, whichever is the background seismicity level of the area of interest. Here we present the real-time automated procedures and the analyses performed by the software package, which is essentially a chain of different modules, each of them aimed at the automatic computation of a specific source parameter. The P-wave arrival times are first detected on the real-time streaming of data and then the software performs the phase association and earthquake binding. As soon as an event is automatically detected by the binder, the earthquake location coordinates and the origin time are rapidly estimated, using a probabilistic, non-linear, exploration algorithm. Then, the software is able to automatically provide three different magnitude estimates. First, the local magnitude (Ml) is computed, using the peak-to-peak amplitude

  1. Fully automatized renal parenchyma volumetry using a support vector machine based recognition system for subject-specific probability map generation in native MR volume data

    Science.gov (United States)

    Gloger, Oliver; Tönnies, Klaus; Mensel, Birger; Völzke, Henry

    2015-11-01

    In epidemiological studies as well as in clinical practice, the amount of medical image data produced has increased strongly in the last decade. In this context organ segmentation in MR volume data gained increasing attention for medical applications. Especially in large-scale population-based studies organ volumetry is highly relevant requiring exact organ segmentation. Since manual segmentation is time-consuming and prone to reader variability, large-scale studies need automatized methods to perform organ segmentation. Fully automatic organ segmentation in native MR image data has proven to be a very challenging task. Imaging artifacts as well as inter- and intrasubject MR-intensity differences complicate the application of supervised learning strategies. Thus, we propose a modularized framework of a two-stepped probabilistic approach that generates subject-specific probability maps for renal parenchyma tissue, which are refined subsequently by using several, extended segmentation strategies. We present a three class-based support vector machine recognition system that incorporates Fourier descriptors as shape features to recognize and segment characteristic parenchyma parts. Probabilistic methods use the segmented characteristic parenchyma parts to generate high quality subject-specific parenchyma probability maps. Several refinement strategies including a final shape-based 3D level set segmentation technique are used in subsequent processing modules to segment renal parenchyma. Furthermore, our framework recognizes and excludes renal cysts from parenchymal volume, which is important to analyze renal functions. Volume errors and Dice coefficients show that our presented framework outperforms existing approaches.
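
    The shape feature mentioned above, Fourier descriptors of a closed contour, can be sketched generically: take the FFT of the complex boundary coordinates and normalize away translation, scale, rotation and starting point. The construction below is the common textbook variant, not necessarily the exact feature set used in the paper.

```python
import numpy as np

def fourier_descriptors(contour: np.ndarray, n_coeffs: int = 16) -> np.ndarray:
    """Invariant Fourier descriptors of a closed 2-D contour (N x 2 array)."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex representation of the boundary
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                          # drop DC term -> translation invariance
    mags = np.abs(coeffs)                    # drop phase -> rotation/start-point invariance
    mags = mags / mags[1]                    # normalise by first harmonic -> scale invariance
    return mags[2:2 + n_coeffs]

# Toy contour: an ellipse sampled at 128 boundary points (higher-order
# descriptors are close to zero for a pure ellipse).
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
ellipse = np.c_[30 * np.cos(t), 18 * np.sin(t)]
print(fourier_descriptors(ellipse)[:5])
```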

  2. Some Behavioral Considerations on the GPS4GEF Cloud-Based Generator of Evaluation Forms with Automatic Feedback and References to Interactive Support Content

    Directory of Open Access Journals (Sweden)

    Daniel HOMOCIANU

    2015-01-01

    Full Text Available The paper presents some considerations on a previously defined general-purpose system used to dynamically generate online evaluation forms that provide automatic feedback immediately after responses are submitted. The system works with a simple and well-known data source format able to store questions, answers and links to additional support materials, in order to increase the productivity of evaluation and assessment. Beyond giving a short description of the prototype's components and outlining the advantages and limitations of using it for any user involved in assessment and evaluation processes, the paper promotes the use of such a system together with a simple technique for generating and referencing interactive support content, defined together with the LIVES4IT approach. This type of content consists of scenarios with ad hoc documentation and interactive simulation components, useful when emulating concrete examples of working with real-world objects, operating devices or using software applications from any activity field.

  3. MATURE: A Model Driven bAsed Tool to Automatically Generate a langUage That suppoRts CMMI Process Areas spEcification

    Science.gov (United States)

    Musat, David; Castaño, Víctor; Calvo-Manzano, Jose A.; Garbajosa, Juan

    Many companies have achieved a higher quality in their processes by using CMMI. Process definition may be efficiently supported by software tools. A higher automation level will make process improvement and assessment activities easier to be adapted to customer needs. At present, automation of CMMI is based on tools that support practice definition in a textual way. These tools are often enhanced spreadsheets. In this paper, following the Model Driven Development paradigm (MDD), a tool that supports automatic generation of a language that can be used to specify process areas practices is presented. The generation is performed from a metamodel that represents CMMI. This tool, differently from others available, can be customized according to user needs. Guidelines to specify the CMMI metamodel are also provided. The paper also shows how this approach can support other assessment methods.

  4. Automatic Generation of Object Models for Process Planning and Control Purposes using an International standard for Information Exchange

    Directory of Open Access Journals (Sweden)

    Petter Falkman

    2003-10-01

    Full Text Available In this paper a formal mapping between static information models and dynamic models is presented. The static information models are given according to an international standard for product, process and resource information exchange (ISO 10303-214). The dynamic models are described as Discrete Event Systems. The product, process and resource information is automatically converted into product routes and used for simulation, controller synthesis and verification. A high level language, combining Petri nets and process algebra, is presented and used for specification of desired routes. A main implication of the presented method is that it enables the reuse of process information when creating dynamic models for process control. This method also enables simulation and verification to be conducted early in the development chain.

  5. Assisted reproductive technology in South Africa: first results generated from the South African Register of Assisted Reproductive Techniques.

    Science.gov (United States)

    Dyer, Silke Juliane; Kruger, Thinus Frans

    2012-02-23

    We present the first report from the South African Register of Assisted Reproductive Techniques. All assisted reproductive technology (ART) centres in South Africa were invited to join the register. Participant centres voluntarily submitted information from 2009 on the number of ART cycles, embryo transfers, clinical pregnancies, age of female partners or egg donors, and use of fertilisation techniques. Data were anonymised, pooled and analysed. The 12 participating units conducted a total of 4 512 oocyte aspirations and 3 872 embryo transfers in 2009, resulting in 1 303 clinical pregnancies. The clinical pregnancy rate (CPR) per aspiration and per embryo transfer was 28.9% and 33.6%, respectively. Fertilisation was achieved by intracytoplasmic sperm injection in two-thirds of cycles. In most cycles, 1 - 2 embryos or blastocysts were transferred. Female age was inversely related to pregnancy rate. The register achieved a high rate of participation. The reported number of ART cycles covers approximately 6% of the estimated ART demand in South Africa. The achieved CPRs compare favourably with those reported for other countries.

  6. Transposon assisted gene insertion technology (TAGIT): a tool for generating fluorescent fusion proteins.

    Directory of Open Access Journals (Sweden)

    James A Gregory

    2010-01-01

    Full Text Available We constructed a transposon (transposon assisted gene insertion technology, or TAGIT) that allows the random insertion of gfp (or other genes) into chromosomal loci without disrupting operon structure or regulation. TAGIT is a modified Tn5 transposon that uses Kan(R) to select for insertions on the chromosome or plasmid, beta-galactosidase to identify in-frame gene fusions, and Cre recombinase to excise the kan and lacZ genes in vivo. The resulting gfp insertions maintain target gene reading frame (to the 5' and 3' of gfp) and are integrated at the native chromosomal locus, thereby maintaining native expression signals. Libraries can be screened to identify GFP insertions that maintain target protein function at native expression levels, allowing more trustworthy localization studies. We here use TAGIT to generate a library of GFP insertions in the Escherichia coli lactose repressor (LacI). We identified fully functional GFP insertions and partially functional insertions that bind DNA but fail to repress the lacZ operon. Several of these latter GFP insertions localize to lacO arrays integrated in the E. coli chromosome without producing the elongated cells frequently observed when functional LacI-GFP fusions are used in chromosome tagging experiments. TAGIT thereby facilitates the isolation of fully functional insertions of fluorescent proteins into target proteins expressed from the native chromosomal locus as well as potentially useful partially functional proteins.

  7. Protein preconcentration using nanofractures generated by nanoparticle-assisted electric breakdown at junction gaps.

    Directory of Open Access Journals (Sweden)

    Chun-Ping Jen

    Full Text Available Sample preconcentration is an important step that increases the accuracy of subsequent detection, especially for samples with extremely low concentrations. Due to the overlapping of electrical double layers in the nanofluidic channel, the concentration polarization effect can be generated by applying an electric field. Therefore, a nonlinear electrokinetic flow is induced, which results in the fast accumulation of proteins in front of the induced ionic depletion zone, the so-called exclusion-enrichment effect. Nanofractures were created in this work to preconcentrate proteins via the exclusion-enrichment effect. The protein sample was driven by electroosmotic flow and accumulated at a specific location. The preconcentration chip for proteins was fabricated using simple standard soft lithography with a polydimethylsiloxane replica. Nanofractures were formed by utilizing nanoparticle-assisted electric breakdown. The proposed method for nanofracture formation that utilizes nanoparticle deposition at the junction gap between microchannels greatly decreases the required electric breakdown voltage. The experimental results indicate that a protein sample with an extremely low concentration of 1 nM was concentrated to 1.5×10⁴-fold in 60 min using the proposed chip.

  8. Application of genomics-assisted breeding for generation of climate resilient crops: Progress and prospects

    Directory of Open Access Journals (Sweden)

    Chittaranjan eKole

    2015-08-01

    Full Text Available Climate change affects agricultural productivity worldwide. Increased prices of food commodities are the initial indication of drastic edible yield loss, which is expected to surge further due to global warming. This situation has compelled plant scientists to develop climate change-resilient crops, which can withstand broad-spectrum stresses such as drought, heat, cold, salinity, flood and submergence, and pests along with increased productivity. Genomics appears to be a promising tool for deciphering the stress responsiveness of crop species with adaptation traits or in wild relatives towards identifying underlying genes, alleles or quantitative trait loci. Molecular breeding approaches have been proven helpful in enhancing the stress adaptation of crop plants, and recent advancement in next-generation sequencing along with high-throughput sequencing and phenotyping platforms have transformed molecular breeding to genomics-assisted breeding (GAB). In view of this, the present review elaborates the progress and prospects of GAB in improving climate change resilience in crop plants towards circumventing global food insecurity.

  9. Bioelectrochemically-assisted anaerobic composting process enhancing compost maturity of dewatered sludge with synchronous electricity generation.

    Science.gov (United States)

    Yu, Hang; Jiang, Junqiu; Zhao, Qingliang; Wang, Kun; Zhang, Yunshu; Zheng, Zhen; Hao, Xiaodi

    2015-10-01

    Bioelectrochemically-assisted anaerobic composting process (AnCBE) with dewatered sludge as the anode fuel was constructed to accelerate composting of dewatered sludge, which could increase the quality of the compost and harvest electric energy in comparison with the traditional anaerobic composting (AnC). Results revealed that the AnCBE yielded a voltage of 0.60 ± 0.02 V, and total COD (TCOD) removal reached 19.8 ± 0.2% at the end of 35 d. The maximum power density was 5.6 W/m³. At the end of composting, organic matter content (OM) reduction rate increased to 19.5 ± 0.2% in AnCBE and to 12.9 ± 0.1% in AnC. The fuzzy comprehensive assessment (FCA) result indicated that the membership degree of class I of AnCBE compost (0.64) was higher than that of AnC compost (0.44). It was demonstrated that electrogenesis in the AnCBE could improve the sludge stabilization degree, accelerate anaerobic composting process and enhance composting maturity with bioelectricity generation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A generation/recombination model assisted with two trap centers in wide band-gap semiconductors

    Science.gov (United States)

    Yamaguchi, Ken; Kuwabara, Takuhito; Uda, Tsuyoshi

    2013-03-01

    A generation/recombination (GR) model assisted with two trap centers has been proposed for studying reverse current in pn junctions in wide band-gap semiconductors. One level (Et1) has been assumed to be located near the bottom of the conduction band and the other (Et2) near the top of the valence band. The GR model has been developed by assuming (1) a high electric field F, (2) a short distance d between trap centers, (3) a reduction in the energy difference, Δeff = |Et1 − Et2| − eFd, and (4) hopping or tunneling conduction between trap centers with the same energy level (Δeff ≈ 0). The GR rate has been modeled by trap levels, capture cross-sections, trap densities, and the transition rate between trap centers. A GR rate about 10¹⁰ times greater than that estimated from the single-level model has been predicted for pn junctions in a material with a band gap of 3.1 eV. Device simulations using the proposed GR model have been demonstrated for SiC diodes with and without a guard ring. A reasonable range for the reverse current at room temperature has been simulated and stable convergence has been obtained in a numerical scheme for analyzing diodes with an electrically floating region.
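
    The field-assisted alignment of the two trap levels can be checked numerically from the quoted relation Δeff = |Et1 − Et2| − eFd. The trap levels and spacing in the sketch below are illustrative assumptions, not values from the paper.

```python
# Field-assisted alignment of the two trap levels: the effective energy gap
# Delta_eff = |Et1 - Et2| - e*F*d collapses towards zero when the potential drop
# over the trap spacing matches the level separation. Values below are
# illustrative, not taken from the paper.
def delta_eff_eV(et1_eV, et2_eV, field_V_per_m, spacing_m):
    # with energies in eV, e*F*d expressed in eV is simply F*d in volts
    return abs(et1_eV - et2_eV) - field_V_per_m * spacing_m

et1, et2 = 2.9, 0.2          # trap levels measured from the valence band (eV)
d = 10e-9                    # assumed trap spacing (m)
for field in (1e8, 2e8, 2.7e8, 3e8):     # V/m
    print(f"F = {field:.1e} V/m -> Delta_eff = {delta_eff_eV(et1, et2, field, d):+.2f} eV")
# Delta_eff ~ 0 (resonant hopping between the traps) near F = 2.7e8 V/m here.
```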

  11. Computer-assisted mesh generation based on hydrological response units for distributed hydrological modeling

    Science.gov (United States)

    Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.; Gironás, J.

    2013-08-01

    Distributed hydrological models rely on a spatial discretization composed of homogeneous units representing different areas within the catchment. Hydrological Response Units (HRUs) typically form the basis of such a discretization. HRUs are generally obtained by intersecting raster or vector layers of land use, soil type, geology and sub-catchments. Polyline maps representing ditches and river drainage networks can also be used. However, this overlay may result in a mesh with numerical and topological problems that is not highly representative of the terrain. Thus, pre-processing is needed to improve the mesh in order to avoid negative effects on the performance of the hydrological model. This paper proposes computer-assisted mesh generation tools to obtain a more regular and physically meaningful mesh of HRUs suitable for hydrologic modeling. We combined existing tools with newly developed scripts implemented in GRASS GIS. The developed scripts address the following problems: (1) high heterogeneity of Digital Elevation Model derived properties within the HRUs, (2) correction of concave polygons or polygons with holes inside, (3) segmentation of very large polygons, and (4) poor estimation of the units' perimeters and of the distances among them. The improvement process was applied and tested using two small catchments in France. The improvement of the spatial discretization was further assessed by comparing the representation and arrangement of overland flow paths in the original and improved meshes. Overall, a more realistic physical representation was obtained with the improved meshes, which should enhance the computation of surface and sub-surface flows in a hydrologic model.
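
    The pre-processing problems listed above lend themselves to simple geometric diagnostics. The sketch below, which assumes the shapely library and illustrative thresholds rather than the paper's GRASS GIS scripts, flags HRU polygons with holes, strong concavity or excessive size.

```python
# Sketch of HRU diagnostics similar in spirit to the pre-processing described
# above (the actual tools are GRASS GIS scripts); thresholds are assumptions.
from shapely.geometry import Polygon

def flag_problem_hrus(polygons, max_area=8e3, min_convexity=0.8):
    """Return a list of (index, issues) for polygons needing treatment."""
    flagged = []
    for i, poly in enumerate(polygons):
        issues = []
        if poly.interiors:                       # (2) polygons with holes
            issues.append("has holes")
        convexity = poly.area / poly.convex_hull.area
        if convexity < min_convexity:            # (2) strongly concave shape
            issues.append(f"concave (convexity={convexity:.2f})")
        if poly.area > max_area:                 # (3) very large polygon
            issues.append("too large, needs segmentation")
        if issues:
            flagged.append((i, issues))
    return flagged

# toy usage: an L-shaped (concave) unit and a small rectangular unit
units = [
    Polygon([(0, 0), (100, 0), (100, 40), (40, 40), (40, 100), (0, 100)]),
    Polygon([(200, 0), (260, 0), (260, 60), (200, 60)]),
]
for idx, issues in flag_problem_hrus(units):
    print(f"HRU {idx}: {', '.join(issues)}")
```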

  12. Automatic Imitation

    Science.gov (United States)

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  13. Costs and reimbursement gaps after implementation of third-generation left ventricular assist devices.

    Science.gov (United States)

    Mishra, Vinod; Geiran, Odd; Fiane, Arnt E; Sørensen, Gro; Andresen, Sølvi; Olsen, Ellen K; Khushi, Ishtiaq; Hagen, Terje P

    2010-01-01

    The purpose of this study was to compare and contrast total hospital costs and subsequent reimbursement of implementing a new program using a third-generation left ventricular assist device (LVAD) in Norway. Between July 2005 and March 2008, the total costs of treatment for 9 patients were examined. Costs were calculated for three periods-the pre-implantation LVAD phase, the LVAD implantation phase and the post-implantation LVAD phase-as well as for total hospital care. Patient-specific costs were obtained prospectively from patient records and included personnel resources, medication, blood products, blood chemistry and microbiology, imaging, and procedure costs including operating room costs. Overhead costs were registered retrospectively and allocated to the specific patient by pre-defined allocation keys. Finally, patient-specific costs and overhead costs were aggregated into total patient costs. The average total patient cost in 2007 U.S. dollars was $735,342 and the median was $613,087 (range $342,581 to $1,256,026). The mean length of stay was 77 days (range 40 to 127 days). For the LVAD implantation phase, the mean cost was $457,795 and median cost was $458,611 (range $246,239 to $677,680). The mean length of stay for the LVAD implantation phase was 55 days (range 25 to 125 days). The diagnosis-related group (DRG) reimbursement (2007) was $143,192. There is significant discrepancy between actual hospital costs and the current Norwegian DRG reimbursement for the LVAD procedure. This discrepancy can be partly explained by excessive costs related to the introduction of a new program with new technology. Costly innovations should be considered in price setting of reimbursement for novel technology. Copyright (c) 2010 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.

  14. Potential of human twin embryos generated by embryo splitting in assisted reproduction and research.

    Science.gov (United States)

    Noli, Laila; Ogilvie, Caroline; Khalaf, Yacoub; Ilic, Dusko

    2017-03-01

    Embryo splitting or twinning has been widely used in veterinary medicine for over 20 years to generate monozygotic twins with desirable genetic characteristics. The first human embryo splitting, reported in 1993, triggered fierce ethical debate on human embryo cloning. Since Dolly the sheep was born in 1997, the international community has acknowledged the complexity of the moral arguments related to this research and has expressed concerns about the potential for reproductive cloning in humans. A number of countries have formulated bans either through laws, decrees or official statements. However, in general, these laws specifically define cloning as an embryo that is generated via nuclear transfer (NT) and do not mention embryo splitting. Only the UK includes both embryo splitting and NT under cloning in the same legislation. By contrast, the Ethics Committee of the American Society for Reproductive Medicine does not have a major ethical objection to transferring two or more artificially created embryos with the same genome with the aim of producing a single pregnancy, stating that 'since embryo splitting has the potential to improve the efficacy of IVF treatments for infertility, research to investigate the technique is ethically acceptable'. Embryo splitting was introduced successfully into veterinary medicine several decades ago and is today part of standard practice. We present here an overview of embryo splitting experiments in humans and non-human primates and discuss the potential of this technology in assisted reproduction and research. A comprehensive literature search was carried out using PUBMED and Google Scholar databases to identify studies on embryo splitting in humans and non-human primates. 'Embryo splitting' and 'embryo twinning' were used as the keywords, alone or in combination with other search phrases relevant to the topics of biology of preimplantation embryos. A very limited number of studies have been conducted in humans and non

  15. A comparative study between xerographic, computer-assisted overlay generation and animated-superimposition methods in bite mark analyses.

    Science.gov (United States)

    Tai, Meng Wei; Chong, Zhen Feng; Asif, Muhammad Khan; Rahmat, Rabiah A; Nambiar, Phrabhakaran

    2016-09-01

    This study aimed to compare the suitability and precision of xerographic and computer-assisted methods for bite mark investigations. Eleven subjects were asked to bite on their forearm and the bite marks were photographically recorded. Alginate impressions of the subjects' dentition were taken and their casts were made using dental stone. The overlays generated by the xerographic method were obtained by photocopying the subjects' casts, and the incisal edge outlines were then transferred onto a transparent sheet. The bite mark images were imported into Adobe Photoshop® software and printed to life-size. The bite mark analyses using xerographically generated overlays were done by manually comparing an overlay to the corresponding printed bite mark images. In the computer-assisted method, the subjects' casts were scanned into Adobe Photoshop®. The bite mark analyses using computer-assisted overlay generation were done by matching an overlay and the corresponding bite mark images digitally using Adobe Photoshop®. Another comparison method was superimposing the cast images with the corresponding bite mark images employing Adobe Photoshop® CS6 and GIF-Animator©. A score with a range of 0-3 was given during analysis to each precision-determining criterion, with the score increasing for better matching. The Kruskal Wallis H test showed a significant difference between the three sets of data (H=18.761, p<0.05). The bite mark analysis using the computer-assisted animated-superimposition method was the most accurate, followed by the computer-assisted overlay generation and lastly the xerographic method. The superior precision contributed by the digital methods is discernible despite the human skin being a poor recording medium for bite marks. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Automatic generation of 2D micromechanical finite element model of silicon–carbide/aluminum metal matrix composites: Effects of the boundary conditions

    DEFF Research Database (Denmark)

    Qing, Hai

    2013-01-01

    Two-dimensional finite element (FE) simulations of the deformation and damage evolution of Silicon–Carbide (SiC) particle reinforced aluminum alloy composite including interphase are carried out for different microstructures and particle volume fractions of the composites. A program is developed...... for the automatic generation of 2D micromechanical FE-models with randomly distributed SiC particles. In order to simulate the damage process in aluminum alloy matrix and SiC particles, a damage parameter based on the stress triaxial indicator and the maximum principal stress criterion based elastic brittle damage...... are performed to study the influence of boundary condition, particle number and volume fraction of the representative volume element (RVE) on composite stiffness and strength properties....
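
    As a rough illustration of the kind of microstructure generation described above, the sketch below places non-overlapping circular particles in a square RVE by random sequential addition until a target area fraction is reached. It is only a sketch under assumed dimensions; the paper's program additionally builds the interphase and the FE mesh.

```python
import random
import math

def generate_rve(target_fraction=0.2, rve_size=100.0, radius=4.0,
                 min_gap=0.5, max_tries=200000, seed=0):
    """Random sequential addition of non-overlapping circular 'particles'
    in a square RVE until the target area fraction is reached (a sketch,
    not the paper's generator)."""
    random.seed(seed)
    particles, filled = [], 0.0
    target_area = target_fraction * rve_size**2
    for _ in range(max_tries):
        if filled >= target_area:
            break
        x = random.uniform(radius, rve_size - radius)
        y = random.uniform(radius, rve_size - radius)
        # accept only if the new particle keeps a minimum gap to all others
        if all(math.hypot(x - px, y - py) >= 2 * radius + min_gap
               for px, py in particles):
            particles.append((x, y))
            filled += math.pi * radius**2
    return particles, filled / rve_size**2

centers, fraction = generate_rve()
print(f"placed {len(centers)} particles, area fraction {fraction:.3f}")
```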

  17. SCALS: a fourth-generation study of assisted living technologies in their organisational, social, political and policy context.

    Science.gov (United States)

    Greenhalgh, Trisha; Shaw, Sara; Wherton, Joe; Hughes, Gemma; Lynch, Jenni; A'Court, Christine; Hinder, Sue; Fahy, Nick; Byrne, Emma; Finlayson, Alexander; Sorell, Tom; Procter, Rob; Stones, Rob

    2016-02-15

    Research to date into assisted living technologies broadly consists of 3 generations: technical design, experimental trials and qualitative studies of the patient experience. We describe a fourth-generation paradigm: studies of assisted living technologies in their organisational, social, political and policy context. Fourth-generation studies are necessarily organic and emergent; they view technology as part of a dynamic, networked and potentially unstable system. They use co-design methods to generate and stabilise local solutions, taking account of context. SCALS (Studies in Co-creating Assisted Living Solutions) consists (currently) of 5 organisational case studies, each an English health or social care organisation striving to introduce technology-supported services to support independent living in people with health and/or social care needs. Treating these cases as complex systems, we seek to explore interdependencies, emergence and conflict. We employ a co-design approach informed by the principles of action research to help participating organisations establish, refine and evaluate their service. To that end, we are conducting in-depth ethnographic studies of people's experience of assisted living technologies (micro level), embedded in evolving organisational case studies that use interviews, ethnography and document analysis (meso level), and exploring the wider national and international context for assisted living technologies and policy (macro level). Data will be analysed using a sociotechnical framework developed from structuration theory. Research ethics approval for the first 4 case studies has been granted. An important outcome will be lessons learned from individual co-design case studies. We will document the studies' credibility and rigour, and assess the transferability of findings to other settings while also recognising unique aspects of the contexts in which they were generated. Academic outputs will include a cross-case analysis and

  18. Development of an expert system for automatic mesh generation for S(N) particle transport method in parallel environment

    Science.gov (United States)

    Patchimpattapong, Apisit

    This dissertation develops an expert system for generating an effective spatial mesh distribution for the discrete ordinates particle transport method in a parallel environment. This expert system consists of two main parts: (1) an algorithm for generating an effective mesh distribution in a serial environment, and (2) an algorithm for inference of an effective domain decomposition strategy for parallel computing. The mesh generation algorithm consists of four steps: creation of a geometric model as partitioned into coarse meshes, determination of an approximate flux shape, selection of appropriate differencing schemes, and generation of an effective fine mesh distribution. A geometric model was created using AutoCAD. A parallel code PENFC (Parallel Environment Neutral-Particle First Collision) has been developed to calculate an uncollided flux in a 3-D Cartesian geometry. The appropriate differencing schemes were selected based on the uncollided flux distribution using a least squares methodology. A menu-driven serial code PENXMSH has been developed to generate an effective spatial mesh distribution that preserves problem geometry and physics. The domain decomposition selection process involves evaluation of the four factors that affect parallel performance, which include number of processors and memory available per processor, load balance, granularity, and degree-of-coupling among processors. These factors are used to derive a parallel-performance-index that provides expected performance of a parallel algorithm depending on computing environment and resources. A large index indicates a high granularity algorithm with relatively low coupling among processors. This expert system has been successfully tested within the PENTRAN (Parallel Environment Neutral-Particle Transport) code system for simulating real-life shielding problems: the VENUS-3 experimental facility and the BWR core shroud.
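
    The abstract names the four factors entering the parallel-performance-index but not its formula, so the snippet below is a purely hypothetical scoring function, included only to illustrate the stated idea that a large index corresponds to high granularity and low coupling among processors; all numbers are assumptions.

```python
# Hypothetical illustration only: the abstract names the factors but not the
# actual formula, so this scoring function is an assumption for illustration.
def parallel_performance_index(n_proc, mem_per_proc_gb, mem_needed_gb,
                               load_balance, granularity, coupling):
    """
    load_balance: 0..1 (1 = perfectly balanced work across processors)
    granularity:  computation-to-communication ratio (larger is better)
    coupling:     0..1 degree of coupling among processors (smaller is better)
    """
    if mem_per_proc_gb * n_proc < mem_needed_gb:
        return 0.0                      # decomposition does not fit in memory
    return load_balance * granularity * (1.0 - coupling)

candidates = {
    "1D slabs, 8 procs":   dict(n_proc=8,  mem_per_proc_gb=4, mem_needed_gb=20,
                                load_balance=0.95, granularity=12.0, coupling=0.2),
    "3D blocks, 64 procs": dict(n_proc=64, mem_per_proc_gb=4, mem_needed_gb=20,
                                load_balance=0.80, granularity=3.0, coupling=0.6),
}
for name, c in candidates.items():
    print(f"{name:22s} index = {parallel_performance_index(**c):.2f}")
```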

  19. Reenganche automático en circuitos de distribución con generación distribuida; Automatic reclosing in distribution circuits with distributed generation

    Directory of Open Access Journals (Sweden)

    Marta Bravo de las Casas

    2015-04-01

    Full Text Available Distribution networks have traditionally been designed so that power flows in only one direction. The introduction of distributed generation units means that this assumption no longer holds, which brings new challenges for the operation and design of these networks. One of the areas affected is electrical protection, above all the anti-islanding (separation) protection, especially when automatic reclosing is used, as is typical in medium-voltage networks. This article presents a study of automatic reclosing in a typical Cuban substation with fuel-oil and diesel distributed generation. A brief review of the literature is given first, and the results are presented by means of simulations in the Matlab-Simulink (version 7.4) software. The simulations confirm the existence of the problem, and possible solutions are proposed.

  20. Automatic Commercial Permit Sets

    Energy Technology Data Exchange (ETDEWEB)

    Grana, Paul [Folsom Labs, Inc., San Francisco, CA (United States)

    2017-12-21

    Final report for Folsom Labs’ Solar Permit Generator project, which was successfully completed, resulting in the development and commercialization of a software toolkit within the cloud-based HelioScope software environment that enables solar engineers to automatically generate and manage draft documents for permit submission.

  1. Innovative Method for Automatic Shape Generation and 3D Printing of Reduced-Scale Models of Ultra-Thin Concrete Shells

    Directory of Open Access Journals (Sweden)

    Ana Tomé

    2018-02-01

    Full Text Available A research and development project has been conducted aiming to design and produce ultra-thin concrete shells. In this paper, the first part of the project is described, consisting of an innovative method for shape generation and the consequent production of reduced-scale models of the selected geometries. First, the shape generation is explained, consisting of a geometrically nonlinear analysis based on the Finite Element Method (FEM) to define the antifunicular of the shell’s deadweight. Next, the scale model production is described, consisting of 3D printing, specifically developed to evaluate the aesthetics and visual impact, as well as to study the aerodynamic behaviour of the concrete shells in a wind tunnel. The goals and constraints of the method are identified and step-by-step guidelines are presented, intended to be used as a reference in future studies. The printed geometry is validated by a high-resolution assessment achieved by photogrammetry. The results are compared with the geometry computed through geometric nonlinear finite-element-based analysis, and no significant differences are recorded. The method is revealed to be an important tool for automatic shape generation and for building scale models of shells. The latter enables wind tunnel tests to be performed to obtain pressure coefficients, which are essential for the structural analysis of this type of structure.
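
    The geometric validation step mentioned above (comparing the photogrammetric reconstruction of the printed model with the FEM-generated surface) can be illustrated with a nearest-neighbour deviation check between two point clouds. The sketch below uses synthetic data and scipy's KD-tree as an implementation choice; it is not the authors' workflow.

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic stand-ins: FEM antifunicular surface points vs. "photogrammetry"
# points of the printed model with small noise. Real data would be loaded
# from the FEM results and the photogrammetric reconstruction.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(-1, 1, 60), np.linspace(-1, 1, 60))
z = 0.3 * (1 - x**2) * (1 - y**2)                        # toy shell shape
fem_pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
scan_pts = fem_pts + rng.normal(0, 0.002, fem_pts.shape)  # assumed scan noise

# Nearest-neighbour deviation of each scanned point from the FEM point set
dists, _ = cKDTree(fem_pts).query(scan_pts)
print(f"mean deviation {dists.mean():.4f} model units, "
      f"95th percentile {np.percentile(dists, 95):.4f}")
```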

  2. Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard

    Science.gov (United States)

    Wang, Weimin; Sakurada, Ken; Kawaguchi, Nobuo

    2017-08-01

    This paper presents a novel method for fully automatic and convenient extrinsic calibration of a 3D LiDAR and a panoramic camera with a normally printed chessboard. The proposed method is based on the 3D corner estimation of the chessboard from the sparse point cloud generated by one frame scan of the LiDAR. To estimate the corners, we formulate a full-scale model of the chessboard and fit it to the segmented 3D points of the chessboard. The model is fitted by optimizing the cost function under constraints of correlation between the reflectance intensity of laser and the color of the chessboard's patterns. Powell's method is introduced for resolving the discontinuity problem in optimization. The corners of the fitted model are considered as the 3D corners of the chessboard. Once the corners of the chessboard in the 3D point cloud are estimated, the extrinsic calibration of the two sensors is converted to a 3D-2D matching problem. The corresponding 3D-2D points are used to calculate the absolute pose of the two sensors with Unified Perspective-n-Point (UPnP). Further, the calculated parameters are regarded as initial values and are refined using the Levenberg-Marquardt method. The performance of the proposed corner detection method from the 3D point cloud is evaluated using simulations. The results of experiments, conducted on a Velodyne HDL-32e LiDAR and a Ladybug3 camera under the proposed re-projection error metric, qualitatively and quantitatively demonstrate the accuracy and stability of the final extrinsic calibration parameters.
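
    The final 3D-2D step described above can be illustrated with a pose solver. The sketch below substitutes OpenCV's iterative PnP (which already performs Levenberg-Marquardt refinement) under a pinhole approximation for the paper's UPnP solver and panoramic camera model; all point coordinates and intrinsics are placeholder assumptions.

```python
# Illustrative 3D-2D extrinsic step only (not the paper's UPnP + panoramic
# model). Inputs are placeholders roughly consistent with the assumed camera.
import cv2
import numpy as np

# 3D chessboard corners estimated from the LiDAR point cloud (sensor frame)
pts_3d = np.array([[0.0, 0.0, 3.0], [0.1, 0.0, 3.0], [0.0, 0.1, 3.0],
                   [0.1, 0.1, 3.0], [0.2, 0.2, 3.1], [0.3, 0.1, 3.1]],
                  dtype=np.float64)

# Corresponding 2D corners detected in the (pinhole-approximated) image
pts_2d = np.array([[640.0, 360.0], [666.7, 360.0], [640.0, 386.7],
                   [666.7, 386.7], [691.6, 411.6], [717.4, 385.8]],
                  dtype=np.float64)

K = np.array([[800.0, 0.0, 640.0],       # assumed intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                        # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)

# Re-projection error, analogous in spirit to the paper's error metric
proj, _ = cv2.projectPoints(pts_3d, rvec, tvec, K, dist)
err = np.linalg.norm(proj.reshape(-1, 2) - pts_2d, axis=1).mean()
print("rotation vector:", rvec.ravel(), "translation:", tvec.ravel())
print(f"mean re-projection error: {err:.2f} px")
```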

  3. Assisted reproductive technology in Europe, 2006: results generated from European registers by ESHRE

    DEFF Research Database (Denmark)

    de Mouzon, J; Goossens, V; Bhattacharya, S

    2010-01-01

    In this 10th European IVF-monitoring (EIM) report, the results of assisted reproductive techniques from treatments initiated in Europe during 2006 are presented. Data were mainly collected from existing national registers....

  4. Assisted reproductive technology in Europe, 2006: results generated from European registers by ESHRE

    DEFF Research Database (Denmark)

    de Mouzon, J; Goossens, V; Bhattacharya, S

    2010-01-01

    In this 10th European IVF-monitoring (EIM) report, the results of assisted reproductive techniques from treatments initiated in Europe during 2006 are presented. Data were mainly collected from existing national registers.

  5. AVID: Automatic Visualization Interface Designer

    National Research Council Canada - National Science Library

    Chuah, Mei

    2000-01-01

    .... Automatic generation offers great flexibility in performing data and information analysis tasks, because new designs are generated on a case by case basis to suit current and changing future needs...

  6. LanHEP—a package for the automatic generation of Feynman rules in field theory. Version 3.0

    Science.gov (United States)

    Semenov, A. V.

    2009-03-01

    The LanHEP program version 3.0 for Feynman rules generation from the Lagrangian is described. It reads the Lagrangian written in a compact form, close to the one used in publications, meaning that Lagrangian terms can be written with summation over indices of broken symmetries and using special symbols for complicated expressions, such as the covariant derivative and strength tensor for gauge fields. Supersymmetric theories can be described using the superpotential formalism and the 2-component fermion notation. The output is Feynman rules in terms of physical fields and independent parameters in the form of CompHEP model files, which allows one to start calculations of processes in the new physical model. Alternatively, Feynman rules can be generated in FeynArts format or as a LaTeX table. One-loop counterterms can be generated in FeynArts format.
    Program summary:
    Program title: LanHEP
    Catalogue identifier: ADZV_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECH_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 83 041
    No. of bytes in distributed program, including test data, etc.: 1 090 931
    Distribution format: tar.gz
    Programming language: C
    Computer: PC
    Operating system: Linux
    RAM: 2 MB (SM), 12 MB (MSSM), 120 MB (MSSM with counterterms)
    Classification: 4.4
    Nature of problem: Deriving Feynman rules from the Lagrangian
    Solution method: The program reads the Lagrangian written in a compact form, close to the one used in publications. Lagrangian terms can be written with summation over indices of broken symmetries and using special symbols for complicated expressions, such as the covariant derivative and strength tensor for gauge fields. Tools for checking the correctness of the model and for simplifying the output expressions are provided. The output is

  7. The ear, the eye, earthquakes and feature selection: listening to automatically generated seismic bulletins for clues as to the differences between true and false events.

    Science.gov (United States)

    Kuzma, H. A.; Arehart, E.; Louie, J. N.; Witzleben, J. L.

    2012-04-01

    Listening to the waveforms generated by earthquakes is not new. The recordings of seismometers have been sped up and played to generations of introductory seismology students, published on educational websites and even included in the occasional symphony. The modern twist on earthquakes as music is an interest in using state-of-the-art computer algorithms for seismic data processing and evaluation. Algorithms such as Hidden Markov Models, Bayesian Network models and Support Vector Machines have been highly developed for applications in speech recognition, and might also be adapted for automatic seismic data analysis. Over the last three years, the International Data Centre (IDC) of the Comprehensive Test Ban Treaty Organization (CTBTO) has supported an effort to apply computer learning and data mining algorithms to IDC data processing, particularly to the problem of weeding through automatically generated event bulletins to find events which are non-physical and would otherwise have to be eliminated by the hand of highly trained human analysts. Analysts are able to evaluate events, distinguish between phases, pick new phases and build new events by looking at waveforms displayed on a computer screen. Human ears, however, are much better suited to waveform processing than are the eyes. Our hypothesis is that combining an auditory representation of seismic events with visual waveforms would reduce the time it takes to train an analyst and the time they need to evaluate an event. Since it takes almost two years for a person of extraordinary diligence to become a professional analyst and IDC contracts are limited to seven years by Treaty, faster training would significantly improve IDC operations. Furthermore, once a person learns to distinguish between true and false events by ear, various forms of audio compression can be applied to the data. The compression scheme which yields the smallest data set in which relevant signals can still be heard is likely an

  8. Applying fractional order PID to design TCSC-based damping controller in coordination with automatic generation control of interconnected multi-source power system

    Directory of Open Access Journals (Sweden)

    Javad Morsali

    2017-02-01

    Full Text Available In this paper, a fractional order proportional-integral-differential (FOPID) controller is employed in the design of a thyristor controlled series capacitor (TCSC)-based damping controller, in coordination with the secondary integral controller serving as the automatic generation control (AGC) loop. In doing so, the contribution of the TCSC to tie-line power exchange is extracted mathematically for a small load disturbance. Adjustable parameters of the proposed FOPID-based TCSC damping controller and the AGC loop are optimized concurrently via an improved particle swarm optimization (IPSO) algorithm, which is reinforced by a chaotic parameter and a crossover operator to obtain a globally optimal solution. The powerful FOMCON toolbox is used along with MATLAB for handling fractional order modeling and control. An interconnected multi-source power system is simulated with regard to the physical constraints of generation rate constraint (GRC) nonlinearity and the governor dead band (GDB) effect. Simulation results using the FOMCON toolbox demonstrate that the proposed FOPID-based TCSC damping controller achieves the best dynamic performance under different load perturbation patterns in comparison with phase lead-lag and classical PID-based TCSC damping controllers, all in coordination with the integral AGC. Moreover, sensitivity analyses are performed to show the robustness of the proposed controller under various uncertainty scenarios.
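
    A minimal sketch of how a FOPID control signal u = Kp·e + Ki·D^(-λ)e + Kd·D^(μ)e can be evaluated numerically with the Grünwald-Letnikov approximation is shown below. It stands in for the FOMCON/MATLAB tooling mentioned above, and the gains, fractional orders and error trace are illustrative assumptions rather than the tuned values from the paper.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), computed recursively."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def fractional_derivative(e, alpha, h):
    """Approximate D^alpha e(t) on a uniform grid with step h (alpha < 0 integrates)."""
    w = gl_weights(alpha, len(e))
    d = np.array([np.dot(w[:k + 1], e[k::-1]) for k in range(len(e))])
    return d / h**alpha

# Illustrative FOPID evaluation on a toy frequency-deviation-like error signal
h = 0.01
t = np.arange(0.0, 5.0, h)
e = 0.02 * np.exp(-0.8 * t) * np.cos(2.0 * t)   # assumed error trace

Kp, Ki, Kd = 1.2, 0.8, 0.3                      # assumed gains
lam, mu = 0.9, 0.8                              # assumed fractional orders

u = (Kp * e
     + Ki * fractional_derivative(e, -lam, h)   # fractional integral term
     + Kd * fractional_derivative(e, mu, h))    # fractional derivative term
print("first control samples:", np.round(u[:5], 4))
```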

  9. Production optimization of {sup 99}Mo/{sup 99m}Tc zirconium molybate gel generators at semi-automatic device: DISIGEG

    Energy Technology Data Exchange (ETDEWEB)

    Monroy-Guzman, F., E-mail: fabiola.monroy@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Rivero Gutierrez, T., E-mail: tonatiuh.rivero@inin.gob.mx [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Lopez Malpica, I.Z.; Hernandez Cortes, S.; Rojas Nava, P.; Vazquez Maldonado, J.C. [Instituto Nacional de Investigaciones Nucleares, Carretera Mexico-Toluca S/N, La Marquesa, Ocoyoacac, 52750, Estado de Mexico (Mexico); Vazquez, A. [Instituto Mexicano del Petroleo, Eje Central Norte Lazaro Cardenas 152, Col. San Bartolo Atepehuacan, 07730, Mexico D.F. (Mexico)

    2012-01-15

    DISIGEG is a synthesis installation of zirconium {sup 99}Mo-molybdate gels for {sup 99}Mo/{sup 99m}Tc generator production, which has been designed, built and installed at the ININ. The device consists of a synthesis reactor and five systems controlled via keyboard: (1) raw material access, (2) chemical air stirring, (3) gel drying by air and infrared heating, (4) moisture removal and (5) gel extraction. DISIGEG operation is described, and the effects of the drying conditions of zirconium {sup 99}Mo-molybdate gels on {sup 99}Mo/{sup 99m}Tc generator performance were evaluated, as well as some physical-chemical properties of these gels. The results reveal that the temperature, time and air flow applied during the drying process directly affect zirconium {sup 99}Mo-molybdate gel generator performance. All gels prepared have a similar chemical structure, probably constituted by a three-dimensional network based on zirconium pentagonal bipyramids and molybdenum octahedra. Basic structural variations cause a change in gel porosity and permeability, favouring or inhibiting {sup 99m}TcO{sub 4}{sup -} diffusion into the matrix. The {sup 99m}TcO{sub 4}{sup -} eluates produced by {sup 99}Mo/{sup 99m}Tc zirconium {sup 99}Mo-molybdate gel generators prepared in DISIGEG, air dried at 80 °C for 5 h and using an air flow of 90 mm, satisfied all the Pharmacopoeia regulations: {sup 99m}Tc yield between 70-75%, {sup 99}Mo breakthrough less than 3 × 10{sup -3}%, radiochemical purities of about 97%, and sterile and pyrogen-free eluates with a pH of 6. - Highlights: • {sup 99}Mo/{sup 99m}Tc generators based on {sup 99}Mo-molybdate gels were synthesized in a semi-automatic device. • Generator performance depends on the synthesis conditions of the zirconium {sup 99}Mo-molybdate gel. • {sup 99m}TcO{sub 4}{sup -} diffusion and yield in the generator depend on gel porosity and permeability.

  10. Magnetic Resonance–Based Automatic Air Segmentation for Generation of Synthetic Computed Tomography Scans in the Head Region

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Weili; Kim, Joshua P. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan (United States); Kadbi, Mo [Philips Healthcare, Cleveland, Ohio (United States); Movsas, Benjamin; Chetty, Indrin J. [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan (United States); Glide-Hurst, Carri K., E-mail: churst2@hfhs.org [Department of Radiation Oncology, Henry Ford Health Systems, Detroit, Michigan (United States)

    2015-11-01

    Purpose: To incorporate a novel imaging sequence for robust air and tissue segmentation using ultrashort echo time (UTE) phase images and to implement an innovative synthetic CT (synCT) solution as a first step toward MR-only radiation therapy treatment planning for brain cancer. Methods and Materials: Ten brain cancer patients were scanned with a UTE/Dixon sequence and other clinical sequences on a 1.0 T open magnet with simulation capabilities. Bone-enhanced images were generated from a weighted combination of water/fat maps derived from Dixon images and inverted UTE images. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating segmentation errors (true-positive rate, false-positive rate, and Dice similarity indices) using CT simulation (CT-SIM) as ground truth. The synCTs were generated using a voxel-based, weighted summation method incorporating T2, fluid attenuated inversion recovery (FLAIR), UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized Hounsfield unit (HU) differences between synCT and CT-SIM. A dosimetry study was conducted, and differences were quantified using γ-analysis and dose-volume histogram analysis. Results: On average, the true-positive rate and false-positive rate for the CT- and MR-derived air masks were 80.8% ± 5.5% and 25.7% ± 6.9%, respectively. Dice similarity index values were 0.78 ± 0.04 (range, 0.70-0.83). Full field of view MAE between synCT and CT-SIM was 147.5 ± 8.3 HU (range, 138.3-166.2 HU), with the largest errors occurring at bone–air interfaces (MAE 422.5 ± 33.4 HU for bone and 294.53 ± 90.56 HU for air). Gamma analysis revealed pass rates of 99.4% ± 0.04%, with acceptable treatment plan quality for the cohort. Conclusions: A hybrid MRI phase/magnitude UTE image processing technique was introduced that significantly improved bone and air contrast in MRI. Segmented air masks and bone-enhanced images were integrated
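
    The evaluation metrics reported above (true-positive rate, false-positive rate, Dice similarity index and the HU mean absolute error) can be computed with a few lines of numpy; the sketch below uses synthetic arrays standing in for the MR-derived and CT-SIM data and is not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """TPR, FPR and Dice for boolean masks (truth plays the role of the CT-SIM air mask)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return tpr, fpr, dice

def mean_absolute_error(synct_hu, ct_hu, mask=None):
    """MAE in HU between synthetic CT and CT-SIM, optionally within a mask."""
    diff = np.abs(synct_hu - ct_hu)
    return diff[mask].mean() if mask is not None else diff.mean()

# toy volumes (8x8x8) standing in for real data
rng = np.random.default_rng(0)
truth = rng.random((8, 8, 8)) > 0.7            # "CT-SIM" air mask
pred = truth ^ (rng.random((8, 8, 8)) > 0.9)   # noisy "MR UTE-phase" air mask
ct = rng.normal(0, 300, (8, 8, 8))
synct = ct + rng.normal(0, 120, (8, 8, 8))     # synCT with HU error

print("TPR/FPR/Dice:", [round(x, 3) for x in segmentation_metrics(pred, truth)])
print("full-FOV MAE [HU]:", round(mean_absolute_error(synct, ct), 1))
```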

  11. Assisted reproductive technology in Europe, 2004: results generated from European registers by ESHRE

    DEFF Research Database (Denmark)

    Goossens, V.; Ferraretti, A.P.; Bhattacharya, S.

    2008-01-01

    BACKGROUND: European results of assisted reproductive techniques from treatments initiated during 2004 are presented in this eighth report. METHODS: Data were mainly collected from existing national registers. From 29 countries, 785 clinics reported 367,066 treatment cycles including: IVF (114...

  12. Characterization methods of nano-patterned surfaces generated by induction heating assisted injection molding

    DEFF Research Database (Denmark)

    Tang, Peter Torben; Ravn, Christian; Menotti, Stefano

    2015-01-01

    An induction heating-assisted injection molding (IHAIM) process developed by the authors is used to replicate surfaces containing random nano-patterns. The injection molding setup is developed so that an induction heating system rapidly heats the cavity wall at rates of up to 10 °C/s. In order

  13. Oncology providers' evaluation of the use of an automatically generated cancer survivorship care plan: longitudinal results from the ROGY Care trial.

    Science.gov (United States)

    Nicolaije, Kim A H; Ezendam, Nicole P M; Vos, M Caroline; Pijnenborg, Johanna M A; van de Poll-Franse, Lonneke V; Kruitwagen, Roy F P M

    2014-06-01

    Previous studies have merely investigated oncology providers' a priori attitudes toward SCPs. The purpose of the current study was to longitudinally evaluate oncology providers' expectations and actual experiences with the use of an automatically generated Survivorship Care Plan (SCP) in daily clinical practice. Between April 2011 and October 2012, the participating oncology providers (i.e., gynecologists, gynecologic oncologists, oncology nurses) provided usual care or SCP care to 222 endometrial and 85 ovarian cancer patients included in the Registrationsystem Oncological GYnecology (ROGY) Care trial. All (n = 43) oncology providers in both arms were requested to complete a questionnaire before and after patient inclusion regarding their expectations and evaluation of SCP care. Before patient inclusion, 38 (88%; 21 SCP, 17 usual care), and after patient inclusion, 35 (83%; 20 SCP, 15 usual care) oncology providers returned the questionnaire. After patient inclusion, oncology providers were generally satisfied with the SCP (M = 7.1, SD = 1.3, with 1 = not at all-10 = very much) and motivated to keep using the SCP (M = 7.9, SD = 1.5). Most providers (64%) encountered barriers. Twenty-five percent felt they used more time for consultations (M = 7.3 min, SD = 4.6). However, self-reported consultation time did not differ between before (M = 21.8 min, SD = 11.6) and after patient inclusion (M = 18.7, SD = 10.6; p = 0.22) or between SCP care (M = 18.5, SD = 10.3) and usual care (M = 22.0, SD = 12.2; p = 0.21). Oncology providers using the SCP were generally satisfied and motivated to keep using the SCP. However, the findings of the current study suggest that even when the SCP can be generated automatically, oncology providers still have difficulties with finding the time to discuss the SCP with their patients. If SCP care is indeed effective, overcoming the perceived barriers is needed before

  14. DEVELOPMENT AND TESTING OF GEO-PROCESSING MODELS FOR THE AUTOMATIC GENERATION OF REMEDIATION PLAN AND NAVIGATION DATA TO USE IN INDUSTRIAL DISASTER REMEDIATION

    Directory of Open Access Journals (Sweden)

    G. Lucas

    2015-08-01

    Full Text Available This paper introduces research on the automatic preparation of remediation plans and navigation data for the precise guidance of heavy machinery in clean-up work after an industrial disaster. The input test data consists of a pollution extent shapefile derived from the processing of hyperspectral aerial survey data from the Kolontár red mud disaster. Three algorithms were developed and the respective scripts were written in Python. The first model aims at drawing a parcel clean-up plan. The model tests four different parcel orientations (0, 90, 45 and 135 degrees) and keeps the plan in which the clean-up parcels are least numerous, considering it an optimal spatial configuration. The second model drifts the clean-up parcels of a work plan both vertically and horizontally, following a grid pattern with a sampling distance of a fifth of a parcel width, and keeps the most optimal drifted version, here also with the aim of reducing the final number of parcel features. The last model aims at drawing a navigation line in the middle of each clean-up parcel. The models work efficiently and achieve automatic optimized plan generation (parcels and navigation lines). Applying the first model we demonstrated that, depending on the size and geometry of the features of the contaminated area layer, the number of clean-up parcels generated by the model varies in a range of 4% to 38% from plan to plan. Such a significant variation in the resulting feature numbers shows that identifying the optimal orientation can save work, time and money in remediation. The various tests demonstrated that the model gains efficiency when 1/ the individual features of the contaminated area present a significant orientation in their geometry (features are long), 2/ the size of the pollution extent features becomes closer to the size of the parcels (scale effect). The second model shows only a 1% difference in the variation of feature number; so this last is less interesting for
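
    The orientation test of the first model can be condensed into a small script: rotate the pollution extent, cover it with fixed-size parcels, count the parcels that intersect it and keep the orientation with the fewest. The sketch below uses shapely with an assumed toy polygon and parcel size, rather than the paper's Python/GRASS GIS implementation.

```python
# Sketch of the orientation test in model 1: rotate the pollution extent,
# cover it with fixed-size parcels, count intersecting parcels, keep the
# orientation with the fewest parcels. Geometry and parcel size are assumed.
import math
from shapely.geometry import Polygon, box
from shapely.affinity import rotate

def count_parcels(extent, parcel_w, parcel_h):
    minx, miny, maxx, maxy = extent.bounds
    nx = math.ceil((maxx - minx) / parcel_w)
    ny = math.ceil((maxy - miny) / parcel_h)
    count = 0
    for i in range(nx):
        for j in range(ny):
            cell = box(minx + i * parcel_w, miny + j * parcel_h,
                       minx + (i + 1) * parcel_w, miny + (j + 1) * parcel_h)
            if cell.intersects(extent):
                count += 1
    return count

# elongated toy pollution extent (the real input is the red-mud shapefile)
pollution = Polygon([(0, 0), (200, 60), (210, 90), (10, 30)])

best = None
for angle in (0, 45, 90, 135):
    n = count_parcels(rotate(pollution, angle, origin='centroid'), 30, 20)
    print(f"orientation {angle:3d} deg -> {n} clean-up parcels")
    best = (angle, n) if best is None or n < best[1] else best
print("kept orientation:", best[0])
```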

  15. Native American Technical Assistance and Training for Renewable Energy Resource Development and Electrical Generation Facilities Management

    Energy Technology Data Exchange (ETDEWEB)

    A. David Lester

    2008-10-17

    The Council of Energy Resource Tribes (CERT) will facilitate technical expertise and training of Native Americans in renewable energy resource development for electrical generation facilities, and distributed generation options contributing to feasibility studies, strategic planning and visioning. CERT will also provide information to Tribes on energy efficiency and energy management techniques. This project will provide facilitation and coordination of expertise from government agencies and private industries to interact with Native Americans in ways that will result in renewable energy resource development, energy efficiency program development, and electrical generation facilities management by Tribal entities. The intent of this cooperative agreement is to help build capacity within the Tribes to manage these important resources.

  16. Automatic control of plants of direct steam generation with cylinder-parabolic solar collectors; Control automatico de plantas de generacion directa de vapor con colectores solares cilindro-parabolicos

    Energy Technology Data Exchange (ETDEWEB)

    Valenzuela Gutierrez, L.

    2008-07-01

    The main objective of this dissertation has been to contribute to the operation in automatic mode of a new generation of direct steam generation solar plants with parabolic-trough collectors. The dissertation starts by introducing the parabolic-trough collector solar thermal technology for the generation of process steam, or of steam for a Rankine cycle in the case of power generation, which is currently the most developed and commercialized technology. At present, the parabolic-trough collector technology is based on the configuration known as the heat-exchanger system, which relies on a heat transfer fluid in the solar field that is heated while recirculating through the absorber tubes of the solar collectors and then transfers that thermal energy to a heat exchanger for steam generation. Direct steam generation in the absorber tubes has always been seen as an ideal pathway to reduce generation cost by 15% and increase conversion efficiency by 20% (DISS, 1999). (Author)

  17. Modelling of Diesel Generator Sets That Assist Off-Grid Renewable Energy Micro-grids

    Directory of Open Access Journals (Sweden)

    Johanna Salazar

    2015-08-01

    Full Text Available This paper focuses on modelling diesel generators for off-grid installations based on renewable energies. Variations in environmental variables (for example, solar radiation and wind speed) make it necessary to include these auxiliary systems in off-grid renewable energy installations, in order to ensure minimal services when the renewable energy produced is not sufficient to fulfill the demand. This paper concentrates on modelling the dynamical behaviour of the diesel generator, in order to use the models and simulations for developing and testing advanced controllers for the overall off-grid system. The diesel generator is assumed to consist of a diesel motor connected to a synchronous generator through an electromagnetic clutch, with a flywheel to damp variations. Each of the components is modelled using physical models, with the corresponding control systems also modelled: these control systems include the speed and the voltage regulation (in cascade regulation).
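
    In the spirit of the modelling described above, the sketch below couples a droop-type speed reference, a first-order engine lag and a swing-equation rotor to show how a diesel genset responds to a load step. All parameters are illustrative assumptions; the paper's full model (clutch, flywheel, voltage regulation) is not reproduced.

```python
import numpy as np

# Toy diesel-genset dynamics: droop speed control -> first-order engine lag
# -> swing equation. All parameters are illustrative assumptions.
H, D = 1.5, 0.02          # inertia constant [s], damping [pu]
T_engine = 0.6            # engine/actuator time constant [s]
droop = 0.05              # 5 % speed droop
dt, T_end = 0.01, 20.0

omega, p_mech = 1.0, 0.5  # per-unit speed and mechanical power
p_load = 0.5

log = []
for k in range(int(T_end / dt)):
    t = k * dt
    if t >= 5.0:                                 # load step when renewables drop
        p_load = 0.7
    p_ref = 0.5 + (1.0 - omega) / droop          # droop governor set point
    p_ref = min(max(p_ref, 0.0), 1.0)            # engine power limits
    p_mech += dt / T_engine * (p_ref - p_mech)   # first-order engine lag
    omega += dt / (2 * H) * (p_mech - p_load - D * (omega - 1.0))
    log.append((t, omega, p_mech))

t, w, pm = np.array(log).T
print(f"min speed {w.min():.4f} pu, final speed {w[-1]:.4f} pu, "
      f"final engine power {pm[-1]:.3f} pu")
```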

  18. W-Band Millimeter-Wave Vector Signal Generation Based on Precoding-Assisted Random Photonic Frequency Tripling Scheme Enabled by Phase Modulator

    National Research Council Canada - National Science Library

    Li, Xinying; Xu, Yuming; Xiao, Jiangnan; Yu, Jianjun

    2016-01-01

    We propose W-band photonic millimeter-wave (mm-wave) vector signal generation employing a precoding-assisted random frequency tripling scheme enabled by a single phase modulator cascaded with a wavelength selective switch (WSS...

  19. Assisted reproductive technology in Europe, 2012: results generated from European registers by ESHRE

    DEFF Research Database (Denmark)

    Calhaz-Jorge, C.; De Geyter, C.; Kupka, M. S.

    2016-01-01

    The 16th European IVF-monitoring (EIM) report presents the data of the treatments involving assisted reproductive technology (ART) and intrauterine insemination (IUI) initiated in Europe during 2012: are there any changes compared with previous years? Despite some fluctuations in the number...... 1997, ART data in Europe have been collected and reported in 15 manuscripts, published in Human Reproduction. Retrospective data collection of European ART data by the EIM Consortium for the European Society of Human Reproduction and Embryology (ESHRE). Data for cycles between 1 January and 31

  20. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-spe...

  1. Automatic generation of tourist maps

    OpenAIRE

    Grabler, Floraine; Agrawala, Maneesh; Sumner, Robert W.; Pauly, Mark

    2008-01-01

    Tourist maps are essential resources for visitors to an unfamiliar city because they visually highlight landmarks and other points of interest. Yet, hand-designed maps are static representations that cannot adapt to the needs and tastes of the individual tourist. In this paper we present an automated system for designing tourist maps that selects and highlights the information that is most important to tourists. Our system determines the salience of map elements using bottom-up vision-based i...

  2. Disrupting the world of Disability: The Next Generation of Assistive Technologies and Rehabilitation Practices.

    Science.gov (United States)

    Holloway, Catherine; Dawes, Helen

    2016-12-01

    Designing, developing and deploying assistive technologies at a scale and cost which makes them accessible to people is challenging. Traditional models of manufacturing appear insufficient for helping the world's 1 billion disabled people access the technologies they require. In addition, many who receive assistive technologies simply abandon them as they do not meet their needs. In this study the authors explore the changing world of design for disability: a landscape that includes the rise of the maker movement, the role of ubiquitous sensing and the changing role of the 'user' to one of designer and maker. The authors argue they are on the cusp of a revolution in healthcare provision, where the population will soon have the ability to manage their own care with systems in place for diagnosis, monitoring, individualised prescription and action/reaction. This will change the role of the clinician from that of diagnostician, gatekeeper and resource manager/deliverer to that of consultant informatics manager and overseer; perhaps only intervening to promote healthy behaviour, prevent crisis and react at flash moments.

  3. Generation of CsI cluster ions for mass calibration in matrix-assisted laser desorption/ionization mass spectrometry.

    Science.gov (United States)

    Lou, Xianwen; van Dongen, Joost L J; Meijer, E W

    2010-07-01

    A simple method was developed for the generation of cesium iodide (CsI) cluster ions up to m/z over 20,000 in matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS). Calibration ions in both positive and negative ion modes can readily be generated from a single MALDI spot of CsI(3) with 2-[(2E)-3-(4-tert-butylphenyl)-2-methylprop-2-enylidene] malononitrile (DCTB) matrix. The major cluster ion series observed in the positive ion mode is [(CsI)(n)Cs](+), and in the negative ion mode is [(CsI)(n)I](-). In both cluster series, ions spread evenly every 259.81 units. The easy method described here for the production of CsI cluster ions should be useful for MALDI MS calibrations. Copyright 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
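
    A tiny worked example of the cluster arithmetic above: with monoisotopic masses of roughly 132.905 u for Cs and 126.904 u for I, the [(CsI)nCs]+ and [(CsI)nI]- series are spaced by about 259.81 u, which is the spacing quoted in the abstract (electron mass neglected).

```python
# Calibration-point arithmetic for the CsI cluster series described above
# (electron mass neglected; monoisotopic masses in u).
M_CS, M_I = 132.905, 126.904
M_CSI = M_CS + M_I                      # ~259.81 u spacing between clusters

for n in range(1, 6):
    pos = n * M_CSI + M_CS              # [(CsI)n Cs]+  (charge +1)
    neg = n * M_CSI + M_I               # [(CsI)n I]-   (charge -1)
    print(f"n={n}: [(CsI)nCs]+ m/z = {pos:9.3f}   [(CsI)nI]- m/z = {neg:9.3f}")
```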

  4. Wind turbine generators having wind assisted cooling systems and cooling methods

    Science.gov (United States)

    Bagepalli, Bharat [Niskayuna, NY; Barnes, Gary R [Delanson, NY; Gadre, Aniruddha D [Rexford, NY; Jansen, Patrick L [Scotia, NY; Bouchard, Jr., Charles G.; Jarczynski, Emil D [Scotia, NY; Garg, Jivtesh [Cambridge, MA

    2008-09-23

    A wind generator includes: a nacelle; a hub carried by the nacelle and including at least a pair of wind turbine blades; and an electricity producing generator including a stator and a rotor carried by the nacelle. The rotor is connected to the hub and rotatable in response to wind acting on the blades to rotate the rotor relative to the stator to generate electricity. A cooling system is carried by the nacelle and includes at least one ambient air inlet port opening through a surface of the nacelle downstream of the hub and blades, and a duct for flowing air from the inlet port in a generally upstream direction toward the hub and in cooling relation to the stator.

  5. Coherent hard x rays from attosecond pulse train-assisted harmonic generation.

    Science.gov (United States)

    Klaiber, Michael; Hatsagortsyan, Karen Z; Müller, Carsten; Keitel, Christoph H

    2008-02-15

    High-order harmonic generation from atomic systems is considered in the crossed fields of a relativistically strong infrared laser and a weak attosecond pulse train of soft x rays. Due to one-photon ionization by the x-ray pulse, the ionized electron obtains a starting momentum that compensates the relativistic drift, which is induced by the laser magnetic field, and allows the electron to efficiently emit harmonic radiation upon recombination with the atomic core in the relativistic regime. This way, short pulses of coherent hard x rays of up to 40 keV energy can be generated.

  6. An ultrasensitive, non-enzymatic glucose assay via gold nanorod-assisted generation of silver nanoparticles

    Science.gov (United States)

    Xianyu, Yunlei; Sun, Jiashu; Li, Yixuan; Tian, Yue; Wang, Zhuo; Jiang, Xingyu

    2013-06-01

    This report demonstrates a colorimetric, non-enzymatic glucose assay with a low detection limit of 0.07 μM based on negatively charged gold nanorod-enhanced redox reaction. This glucose assay could generate silver nanoparticles as the readout that can be visualized by the naked eye, and only 4 femtomoles of nanorods are needed for glucose determination in one human plasma sample.

  7. Simulation of Solar Assisted Absorption Cooling and Electricity Generation along with Thermal Storage

    OpenAIRE

    Faezeh Mosallat; Eric L. Bibeau; Tarek El Mekkawy

    2015-01-01

    Parabolic solar trough systems have seen limited deployments in cold northern climates as they are more suitable for electricity production in southern latitudes. A numerical dynamic model is developed to simulate troughs installed in cold climates and validated using a parabolic solar trough facility in Winnipeg. The model is developed in Simulink and will be utilized to simulate a trigeneration system for heating, cooling and electricity generation in remote northern co...

  8. Meganuclease-assisted generation of stable transgenics in the sea anemone Nematostella vectensis.

    Science.gov (United States)

    Renfer, Eduard; Technau, Ulrich

    2017-09-01

    The sea anemone Nematostella vectensis is a model system used by a rapidly growing research community for comparative genomics, developmental biology and ecology. Here, we describe a microinjection procedure for creating stable transgenic lines in Nematostella based on meganuclease (I-SceI)-assisted integration of a transgenic cassette into the genome. The procedure describes the preparation of the reagents, microinjection of the transgenesis vector and the husbandry of transgenic animals. The microinjection setup differs from those of previously published protocols by the use of a holding capillary mounted on an inverted fluorescence microscope. In one session of injections, a single researcher can microinject up to 1,300 zygotes with a reporter construct digested with the meganuclease I-SceI. Under optimal conditions, fully transgenic heterozygous F1 animals can be obtained within 4-5 months of the injections, with a germ-line transmission efficiency of ∼3%. The method is versatile and, after a short training phase, can be carried out by any researcher with basic training in molecular biology. Flexibility of construct design enables this method to be used for numerous applications, including the functional dissection of cis-regulatory elements, subcellular localization of proteins, detection of protein-binding partners, ectopic expression of genes of interest, lineage tracing and cell-type-specific reporter gene expression.

  9. Application of genomics-assisted breeding for generation of climate resilient crops: progress and prospects.

    Science.gov (United States)

    Kole, Chittaranjan; Muthamilarasan, Mehanathan; Henry, Robert; Edwards, David; Sharma, Rishu; Abberton, Michael; Batley, Jacqueline; Bentley, Alison; Blakeney, Michael; Bryant, John; Cai, Hongwei; Cakir, Mehmet; Cseke, Leland J; Cockram, James; de Oliveira, Antonio Costa; De Pace, Ciro; Dempewolf, Hannes; Ellison, Shelby; Gepts, Paul; Greenland, Andy; Hall, Anthony; Hori, Kiyosumi; Hughes, Stephen; Humphreys, Mike W; Iorizzo, Massimo; Ismail, Abdelbagi M; Marshall, Athole; Mayes, Sean; Nguyen, Henry T; Ogbonnaya, Francis C; Ortiz, Rodomiro; Paterson, Andrew H; Simon, Philipp W; Tohme, Joe; Tuberosa, Roberto; Valliyodan, Babu; Varshney, Rajeev K; Wullschleger, Stan D; Yano, Masahiro; Prasad, Manoj

    2015-01-01

    Climate change affects agricultural productivity worldwide. Increased prices of food commodities are the initial indication of drastic edible yield loss, which is expected to increase further due to global warming. This situation has compelled plant scientists to develop climate change-resilient crops, which can withstand broad-spectrum stresses such as drought, heat, cold, salinity, flood, submergence and pests, thus helping to deliver increased productivity. Genomics appears to be a promising tool for deciphering the stress responsiveness of crop species with adaptation traits or in wild relatives toward identifying underlying genes, alleles or quantitative trait loci. Molecular breeding approaches have proven helpful in enhancing the stress adaptation of crop plants, and recent advances in high-throughput sequencing and phenotyping platforms have transformed molecular breeding to genomics-assisted breeding (GAB). In view of this, the present review elaborates the progress and prospects of GAB for improving climate change resilience in crops, which is likely to play an ever increasing role in the effort to ensure global food security.

  10. Application of genomics-assisted breeding for generation of climate resilient crops: progress and prospects

    Science.gov (United States)

    Kole, Chittaranjan; Muthamilarasan, Mehanathan; Henry, Robert; Edwards, David; Sharma, Rishu; Abberton, Michael; Batley, Jacqueline; Bentley, Alison; Blakeney, Michael; Bryant, John; Cai, Hongwei; Cakir, Mehmet; Cseke, Leland J.; Cockram, James; de Oliveira, Antonio Costa; De Pace, Ciro; Dempewolf, Hannes; Ellison, Shelby; Gepts, Paul; Greenland, Andy; Hall, Anthony; Hori, Kiyosumi; Hughes, Stephen; Humphreys, Mike W.; Iorizzo, Massimo; Ismail, Abdelbagi M.; Marshall, Athole; Mayes, Sean; Nguyen, Henry T.; Ogbonnaya, Francis C.; Ortiz, Rodomiro; Paterson, Andrew H.; Simon, Philipp W.; Tohme, Joe; Tuberosa, Roberto; Valliyodan, Babu; Varshney, Rajeev K.; Wullschleger, Stan D.; Yano, Masahiro; Prasad, Manoj

    2015-01-01

    Climate change affects agricultural productivity worldwide. Increased prices of food commodities are the initial indication of drastic edible yield loss, which is expected to increase further due to global warming. This situation has compelled plant scientists to develop climate change-resilient crops, which can withstand broad-spectrum stresses such as drought, heat, cold, salinity, flood, submergence and pests, thus helping to deliver increased productivity. Genomics appears to be a promising tool for deciphering the stress responsiveness of crop species with adaptation traits or in wild relatives toward identifying underlying genes, alleles or quantitative trait loci. Molecular breeding approaches have proven helpful in enhancing the stress adaptation of crop plants, and recent advances in high-throughput sequencing and phenotyping platforms have transformed molecular breeding to genomics-assisted breeding (GAB). In view of this, the present review elaborates the progress and prospects of GAB for improving climate change resilience in crops, which is likely to play an ever increasing role in the effort to ensure global food security. PMID:26322050

  11. Coherent hard x-rays from attosecond pulse train-assisted harmonic generation

    OpenAIRE

    Klaiber, Michael; Hatsagortsyan, Karen Z.; Müller, Carsten; Keitel, Christoph H.

    2007-01-01

    High-order harmonic generation from atomic systems is considered in the crossed fields of a relativistically strong infrared laser and a weak attosecond-pulse train of soft x-rays. Due to one-photon ionization by the x-ray pulse, the ionized electron obtains a starting momentum that compensates the relativistic drift which is induced by the laser magnetic field, and allows the electron to efficiently emit harmonic radiation upon recombination with the atomic core in the relativistic regime. I...

  12. Generation of Aptamers from A Primer-Free Randomized ssDNA Library Using Magnetic-Assisted Rapid Aptamer Selection

    Science.gov (United States)

    Tsao, Shih-Ming; Lai, Ji-Ching; Horng, Horng-Er; Liu, Tu-Chen; Hong, Chin-Yih

    2017-04-01

    Aptamers are oligonucleotides that can bind to specific target molecules. Most aptamers are generated using random libraries in the standard systematic evolution of ligands by exponential enrichment (SELEX). Each random library contains oligonucleotides with a randomized central region and two fixed primer regions at both ends. The fixed primer regions are necessary for amplifying target-bound sequences by PCR. However, these extra sequences may cause non-specific binding, which can interfere with the binding of the random sequences. Magnetic-Assisted Rapid Aptamer Selection (MARAS) is a newly developed protocol for generating single-stranded DNA aptamers; no repeated selection cycles are required in the protocol. This study proposes and demonstrates a method to isolate aptamers for C-reactive protein (CRP) from a randomized ssDNA library containing no fixed sequences at the 5′ and 3′ termini using the MARAS platform. Furthermore, the isolated primer-free aptamer was sequenced and its binding affinity for CRP was analyzed. The specificity of the obtained aptamer was validated using blind serum samples. The result was consistent with monoclonal antibody-based nephelometry analysis, which indicated that a primer-free aptamer has high specificity toward targets. MARAS is a feasible platform for efficiently generating primer-free aptamers for clinical diagnoses.

  13. Generation of octave-spanning supercontinuum by Raman-assisted four-wave mixing in single-crystal diamond.

    Science.gov (United States)

    Lu, Chih-Hsuan; Yang, Li-Fan; Zhi, Miaochan; Sokolov, Alexei V; Yang, Shang-Da; Hsu, Chia-Chen; Kung, A H

    2014-02-24

    An octave-spanning coherent supercontinuum is generated by non-collinear Raman-assisted four-wave mixing in single-crystal diamond using 7.7 fs laser pulses that have been chirped to about 420 fs in duration. The use of ultrabroad bandwidth pulses as input results in substantial overlap of the generated spectrum of the anti-Stokes sidebands, creating a phase-locked supercontinuum when all the sidebands are combined to overlap in time and space. The overall bandwidth of the generated supercontinuum is sufficient to support its compression to isolated few-to-single cycle attosecond transients. The significant spectral overlap of adjacent anti-Stokes sidebands allows the utilization of straight-forward spectral interferometry to test the relative phase coherence of the anti-Stokes outputs and is demonstrated here for two adjacent pairs of sidebands. The method can subsequently be employed to set the relative phase of the sidebands for pulse compression and for the synthesis of arbitrary field transients.

  14. jMHC: software assistant for multilocus genotyping of gene families using next-generation amplicon sequencing.

    Science.gov (United States)

    Stuglik, Michał T; Radwan, Jacek; Babik, Wiesław

    2011-07-01

    Genotyping of multilocus gene families, such as the major histocompatibility complex (MHC), may be challenging because of problems with assigning alleles to loci and copy number variation among individuals. Simultaneous amplification and genotyping of multiple loci may be necessary, and in such cases, next-generation deep amplicon sequencing offers great promise as a genotyping method of choice. Here, we describe jMHC, a computer program developed for analysing and assisting in the visualization of deep amplicon sequencing data. The software operates on FASTA files; therefore, output from any sequencing technology may be used. jMHC was designed specifically for MHC studies but it may be useful for analysing amplicons derived from other multigene families or for genotyping other polymorphic systems. The program is written in Java with a user-friendly graphical user interface (GUI) and can be run on Microsoft Windows, Linux OS and Mac OS. © 2011 Blackwell Publishing Ltd.
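
    To make the genotyping idea above concrete, the following Python sketch shows the kind of per-individual variant tallying such a tool performs on FASTA input; it is not the jMHC algorithm, and the header convention, the parse_fasta helper and the min_reads filter are illustrative assumptions.

        # Minimal sketch (not the jMHC algorithm): tally identical amplicon
        # sequences per individual from a FASTA file whose headers are assumed
        # to carry an individual tag, e.g. ">IND07_read123".
        from collections import Counter, defaultdict

        def parse_fasta(path):
            """Yield (header, sequence) pairs from a FASTA file."""
            header, seq = None, []
            with open(path) as handle:
                for line in handle:
                    line = line.strip()
                    if line.startswith(">"):
                        if header is not None:
                            yield header, "".join(seq)
                        header, seq = line[1:], []
                    elif line:
                        seq.append(line.upper())
                if header is not None:
                    yield header, "".join(seq)

        def variants_per_individual(path, min_reads=3):
            """Count distinct sequence variants per individual, ignoring
            variants supported by fewer than min_reads reads (likely errors)."""
            counts = defaultdict(Counter)
            for header, seq in parse_fasta(path):
                individual = header.split("_")[0]   # assumed tag convention
                counts[individual][seq] += 1
            return {ind: {s: n for s, n in c.items() if n >= min_reads}
                    for ind, c in counts.items()}

        if __name__ == "__main__":
            for ind, variants in variants_per_individual("amplicons.fasta").items():
                print(ind, "->", len(variants), "putative alleles")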

  15. Pupillary automatism

    Directory of Open Access Journals (Sweden)

    Menon V

    1989-01-01

    Full Text Available An unusual case of cyclic pupillary movements in an otherwise complete oculomotor nerve palsy in a five-year-old girl is reported. This is considered to be due to destruction of the somatic and visceral nuclei of the oculomotor nerve following injury to its fascicular part. Pupillary automatism has been explained on the basis of the presence of aberrant autonomic cells in the ciliary ganglion which discharge in a regular rhythm independent of higher control.

  16. An Automated Microwave-Assisted Synthesis Purification System for Rapid Generation of Compound Libraries.

    Science.gov (United States)

    Tu, Noah P; Searle, Philip A; Sarris, Kathy

    2016-06-01

    A novel methodology for the synthesis and purification of drug-like compound libraries has been developed through the use of a microwave reactor with an integrated high-performance liquid chromatography-mass spectrometry (HPLC-MS) system. The strategy uses a fully automated synthesizer with a microwave as energy source and robotic components for weighing and dispensing of solid reagents, handling liquid reagents, capper/crimper of microwave reaction tube assemblies, and transportation. Crude reaction products were filtered through solid-phase extraction cartridges and injected directly onto a reverse-phase chromatography column via an injection valve. For multistep synthesis, crude products were passed through scavenger resins and reintroduced for subsequent reactions. All synthetic and purification steps were conducted under full automation with no handling or isolation of intermediates, to afford the desired purified products. This approach opens the way to highly efficient generation of drug-like compounds as part of a lead discovery strategy or within a lead optimization program. © 2015 Society for Laboratory Automation and Screening.

  17. Outer-selective pressure-retarded osmosis hollow fiber membranes from vacuum-assisted interfacial polymerization for osmotic power generation

    KAUST Repository

    Sun, Shipeng

    2013-11-19

    In this paper, we report the technical breakthroughs to synthesize outer-selective thin-film composite (TFC) hollow fiber membranes, which are urgently needed for osmotic power generation with the pressure-retarded osmosis (PRO) process. In the first step, a defect-free thin-film composite membrane module is achieved by vacuum-assisted interfacial polymerization. The PRO performance is further enhanced by optimizing the support in terms of pore size and mechanical strength and the TFC layer with polydopamine coating and molecular engineering of the interfacial polymerization solution. The newly developed membranes can withstand over 20 bar with a peak power density of 7.63 W/m2, which is equivalent to 13.72 W/m2 of its inner-selective hollow fiber counterpart with the same module size, packing density, and fiber dimensions. The study may provide insightful guidelines for optimizing the interfacial polymerization procedures and scaling up of the outer-selective TFC hollow fiber membrane modules for PRO power generation. © 2013 American Chemical Society.

  18. Outer-selective pressure-retarded osmosis hollow fiber membranes from vacuum-assisted interfacial polymerization for osmotic power generation.

    Science.gov (United States)

    Sun, Shi-Peng; Chung, Tai-Shung

    2013-11-19

    In this paper, we report the technical breakthroughs to synthesize outer-selective thin-film composite (TFC) hollow fiber membranes, which are urgently needed for osmotic power generation with the pressure-retarded osmosis (PRO) process. In the first step, a defect-free thin-film composite membrane module is achieved by vacuum-assisted interfacial polymerization. The PRO performance is further enhanced by optimizing the support in terms of pore size and mechanical strength and the TFC layer with polydopamine coating and molecular engineering of the interfacial polymerization solution. The newly developed membranes can withstand over 20 bar with a peak power density of 7.63 W/m(2), which is equivalent to 13.72 W/m(2) of its inner-selective hollow fiber counterpart with the same module size, packing density, and fiber dimensions. The study may provide insightful guidelines for optimizing the interfacial polymerization procedures and scaling up of the outer-selective TFC hollow fiber membrane modules for PRO power generation.

  19. Development of an automatic positioning system of photovoltaic panels for electric energy generation; Desenvolvimento de um sistema de posicionamento automatico de placas fotovoltaicas para a geracao de energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Alceu F.; Cagnon, Odivaldo Jose [Universidade Estadual Paulista (DEE/FEB/UNESP), Bauru, SP (Brazil). Fac. de Engenharia. Dept. de Engenharia Eletrica; Seraphin, Odivaldo Jose [Universidade Estadual Paulista (DER/FCA/UNESP), Botucatu, SP (Brazil). Fac. de Ciencias Agronomicas. Dept. de Engenharia Rural

    2008-07-01

    This work presents an automatic positioning system for photovoltaic panels, designed to improve the conversion of solar energy into electric energy. A prototype with automatic movement was developed, and its efficiency in generating electric energy was compared to that of an otherwise identical panel fixed in space. Preliminary results point to a significant increase in efficiency, obtained with a simplified positioning process in which no sensors are used to determine the Sun's apparent position; instead, the equations for the relative Sun-Earth position are used. An innovative mechanical movement system is also presented, in which two stepper motors move the panel along two axes independently, helping to save energy during positioning. The use of the proposed system in rural areas is suggested. (author)
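
    As a rough illustration of the sensorless strategy described above, the Python sketch below computes the Sun's elevation and azimuth from standard declination and hour-angle equations and converts them into step counts for two independent stepper motors; the equations are the common textbook approximations (equation of time and longitude corrections omitted), and all constants, function names and motor parameters are assumptions rather than the authors' implementation.

        # Illustrative sketch of a sensorless Sun-tracking calculation:
        # declination, hour angle, elevation and azimuth from date/time and
        # latitude, then conversion into stepper-motor step counts.
        import math

        def solar_angles(day_of_year, solar_hour, latitude_deg):
            """Return (elevation, azimuth) of the Sun in degrees (approximate)."""
            decl = math.radians(23.45) * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
            lat = math.radians(latitude_deg)
            hour_angle = math.radians(15.0 * (solar_hour - 12.0))
            sin_elev = (math.sin(lat) * math.sin(decl)
                        + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
            elev = math.asin(sin_elev)
            cos_az = (math.sin(decl) - math.sin(elev) * math.sin(lat)) / (math.cos(elev) * math.cos(lat))
            az = math.acos(max(-1.0, min(1.0, cos_az)))   # azimuth from north
            if hour_angle > 0:
                az = 2.0 * math.pi - az                   # afternoon: mirror the azimuth
            return math.degrees(elev), math.degrees(az)

        def steps_for_axes(elev_deg, az_deg, deg_per_step=1.8):
            """Convert target angles into step counts for a hypothetical two-axis drive."""
            return round(elev_deg / deg_per_step), round(az_deg / deg_per_step)

        if __name__ == "__main__":
            elevation, azimuth = solar_angles(day_of_year=172, solar_hour=10.5, latitude_deg=-22.3)
            print("elevation %.1f deg, azimuth %.1f deg, steps:" % (elevation, azimuth),
                  steps_for_axes(elevation, azimuth))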

  20. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    Science.gov (United States)

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g., reduced documentation times or increased data quality). Prerequisites for data reuse are data quality, availability and identical meaning. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible as well as semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.
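
    As a hedged illustration of the similarity analyses such a metadata warehouse enables, the short Python sketch below compares the sets of data-element names of two hypothetical EHR forms with the Jaccard index; the form contents and the choice of metric are assumptions, not part of the CMDW described above.

        # Hedged illustration (not the CMDW implementation): compare the
        # data-element sets of two forms with the Jaccard index.
        def jaccard(a, b):
            """Jaccard similarity of two sets (1.0 = identical, 0.0 = disjoint)."""
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 1.0

        # Hypothetical metadata extracted from two hospitals' EHR forms.
        form_a = {"patient_id", "admission_date", "diagnosis_icd10", "systolic_bp"}
        form_b = {"patient_id", "admission_date", "main_diagnosis", "heart_rate"}

        print("element overlap: %.2f" % jaccard(form_a, form_b))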

  1. BH-ShaDe: A Software Tool That Assists Architecture Students in the Ill-Structured Task of Housing Design

    Science.gov (United States)

    Millan, Eva; Belmonte, Maria-Victoria; Ruiz-Montiel, Manuela; Gavilanes, Juan; Perez-de-la-Cruz, Jose-Luis

    2016-01-01

    In this paper, we present BH-ShaDe, a new software tool to assist architecture students learning the ill-structured domain/task of housing design. The software tool provides students with automatic or interactively generated floor plan schemas for basic houses. The students can then use the generated schemas as initial seeds to develop complete…

  2. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains a clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.
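
    The following Python sketch illustrates the numerical-experiment idea in miniature, not the book's algorithms: artificial series with a known trend plus AR(1) noise are generated, the trend is estimated with a simple centered moving average, and the estimation error is averaged over Monte Carlo repetitions; the trend shape, noise parameters and window length are arbitrary choices for illustration.

        # Minimal Monte Carlo evaluation of a trend estimator on artificial series.
        import numpy as np

        def ar1_noise(n, phi=0.6, sigma=1.0, rng=None):
            rng = rng or np.random.default_rng(0)
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = phi * x[t - 1] + rng.normal(scale=sigma)
            return x

        def moving_average(y, window=51):
            kernel = np.ones(window) / window
            return np.convolve(y, kernel, mode="same")

        def monte_carlo_rmse(trials=200, n=1000):
            rng = np.random.default_rng(42)
            t = np.linspace(0.0, 1.0, n)
            trend = 2.0 * t + np.sin(2.0 * np.pi * t)     # known artificial trend
            errors = []
            for _ in range(trials):
                series = trend + ar1_noise(n, rng=rng)
                estimate = moving_average(series)
                core = slice(50, n - 50)                  # ignore edge effects
                errors.append(np.sqrt(np.mean((estimate[core] - trend[core]) ** 2)))
            return float(np.mean(errors))

        if __name__ == "__main__":
            print("mean RMSE of the moving-average trend estimate:", round(monte_carlo_rmse(), 3))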

  3. Intelligent E-Learning Systems: Automatic Construction of Ontologies

    Science.gov (United States)

    Peso, Jesús del; de Arriaga, Fernando

    2008-05-01

    During the last years, a new generation of Intelligent E-Learning Systems (ILS) has emerged with enhanced functionality due mainly to influences from Distributed Artificial Intelligence, the use of cognitive modelling, the extensive use of the Internet, and new educational ideas such as student-centered education and Knowledge Management. The automatic construction of ontologies provides a means of automatically updating the knowledge bases of the respective ILS and of increasing interoperability and communication among systems sharing the same ontology. The paper presents a new approach, able to produce ontologies from a small number of documents such as those obtained from the Internet, without the assistance of large corpora, by using simple syntactic rules and some semantic information. The method is independent of the natural language used. The use of a multi-agent system increases the flexibility and capability of the method. Although the method can easily be improved, the results obtained so far are promising.
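
    A minimal, hedged sketch of ontology bootstrapping with simple surface-syntactic rules is given below: a Hearst-style "X such as Y" pattern harvests candidate is-a relations from raw text. This illustrates the general idea only; the regular expression, the example sentences and the restriction to the first listed hyponym are simplifications, not the approach of the paper above.

        # Harvest candidate is-a relations with a simple "X such as Y" pattern.
        import re
        from collections import defaultdict

        PATTERN = re.compile(
            r"\b([A-Za-z][A-Za-z\- ]{2,40}?)\s+such as\s+([A-Za-z][A-Za-z\- ]{2,40}?)(?:[,.;]|\band\b)",
            re.IGNORECASE)

        def extract_is_a(texts):
            """Return {hypernym: set(hyponyms)} harvested from raw text.
            Only the first listed hyponym of each enumeration is captured."""
            ontology = defaultdict(set)
            for text in texts:
                for hypernym, hyponym in PATTERN.findall(text):
                    ontology[hypernym.strip().lower()].add(hyponym.strip().lower())
            return ontology

        docs = [
            "Learning resources such as quizzes, simulations and videos can be annotated.",
            "Software agents such as brokers and mediators coordinate the learning tasks.",
        ]
        for concept, children in extract_is_a(docs).items():
            print(concept, "<-", sorted(children))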

  4. Gel-aided sample preparation (GASP): a simplified method for gel-assisted proteomic sample generation from protein extracts and intact cells

    OpenAIRE

    Fischer, Roman; Benedikt M Kessler

    2015-01-01

    We describe a "gel-assisted" proteomic sample preparation method for MS analysis. Solubilized protein extracts or intact cells are copolymerized with acrylamide, facilitating denaturation, reduction, quantitative cysteine alkylation, and matrix formation. Gel-aided sample preparation has been optimized to be highly flexible, scalable, and to allow reproducible sample generation from 50 cells to milligrams of protein extracts. This methodology is fast, sensitive, easy-to-use on a wide range of...

  5. Assisted reproductive technology in the United States: 2001 results generated from the American Society for Reproductive Medicine/Society for Assisted Reproductive Technology registry.

    Science.gov (United States)

    2007-06-01

    To summarize the procedures and outcomes of assisted reproductive technologies (ART) that were initiated in the United States in 2001. Data were collected electronically using the Society for Assisted Reproductive Technology (SART) Clinic Outcome Reporting System software and submitted to the American Society for Reproductive Medicine/SART Registry. Three hundred eighty-five clinics submitted data on procedures performed in 2001. Data were collated after November 2002 [corrected] so that the outcomes of all pregnancies would be known. Incidence of clinical pregnancy, ectopic pregnancy, abortion, stillbirth, and delivery. Programs reported initiating 108,130 cycles of ART treatment. Of these, 79,042 cycles involved IVF (with and without micromanipulation), with a delivery rate per retrieval of 31.6%; 340 were cycles of gamete intrafallopian transfer, with a delivery rate per retrieval of 21.9%; 661 were cycles of zygote intrafallopian transfer, with a delivery rate per retrieval of 31.0%. The following additional ART procedures were also initiated: 8,147 fresh donor oocyte cycles, with a delivery rate per transfer of 47.3%; 14,509 frozen ET procedures, with a delivery rate per transfer of 23.5%; 3,187 frozen ETs employing donated oocytes or embryos, with a delivery rate per transfer of 27.4%; and 1,366 cycles using a host uterus, with a delivery rate per transfer of 38.7%. In addition, 112 cycles were reported as combinations of more than one treatment type, 8 cycles as research, and 85 as embryo banking. As a result of all procedures, 29,585 deliveries were reported, resulting in 41,168 neonates. In 2001, there were more programs reporting ART treatment and a significant increase in reported cycles compared with 2000.

  6. Automatic Planning Research Applied To Orbital Construction

    Science.gov (United States)

    Park, William T.

    1987-02-01

    Artificial intelligence research on automatic planning could result in a new class of management aids to reduce the cost of constructing the Space Station, and would have economically important spinoffs to terrestrial industry as well. Automatic planning programs could be used to plan and schedule launch activities, material deliveries to orbit, construction procedures, and the use of machinery and tools. Numerous automatic planning programs have been written since the 1950s. We describe PARPLAN, a recently developed experimental automatic planning program written in the AI language Prolog, which can generate plans with parallel activities.

  7. Gravitational disturbances generated by the Sun, Phobos and Deimos in orbital maneuvers around Mars with automatic correction of the semi-major axis

    Science.gov (United States)

    Rocco, E. M.

    2015-10-01

    The objective of this work is to analyze orbital maneuvers of a spacecraft orbiting Mars, considering disturbance effects due to the gravitational attraction of the Sun, Phobos and Deimos, in addition to the disturbances due to the gravitational potential of Mars. To simulate the trajectory, constructive aspects of the propulsion system were considered. Initially, ideal thrusters capable of applying a thrust of infinite magnitude were used. Thus, impulsive optimal maneuvers were obtained by scanning the solutions of Lambert's problem in order to select the maneuver of minimum fuel consumption. Since a true impulse cannot be applied, the orbital maneuver must be distributed over a propulsive arc around the position of the impulse given by the solution of Lambert's problem. However, the effect of the propulsive arc is not exactly equivalent to the application of an impulse, due to errors in the magnitude and direction of the applied thrust. Therefore, the influence of the thrusters' capacity on the trajectory was evaluated for a more realistic model instead of the ideal case represented by the impulsive approach. In addition to evaluating the deviation in the orbital path, an automatic correction of the semi-major axis was considered, using continuous low thrust controlled in closed loop to minimize the trajectory error after the application of the main thrust.
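
    The closed-loop semi-major-axis correction mentioned above can be illustrated with a simple sketch: for a near-circular orbit and tangential acceleration f_t, Gauss's variational equation gives da/dt = 2 a^2 v f_t / mu, so a saturated proportional controller on the semi-major-axis error can command a bounded low-thrust acceleration. The Python sketch below integrates only this single equation; the gain, thrust limit, tolerance and orbit values are illustrative assumptions, and the model ignores the perturbations discussed in the abstract.

        # Hedged sketch: closed-loop semi-major-axis correction with low thrust.
        import math

        MU_MARS = 4.282837e13          # m^3/s^2

        def correct_semi_major_axis(a0, a_target, f_max=1e-4, gain=5e-7, dt=10.0, t_max=2e5):
            """Integrate da/dt under a saturated proportional tangential-thrust law."""
            a, t = a0, 0.0
            while t < t_max and abs(a_target - a) > 100.0:      # 100 m tolerance
                error = a_target - a
                f_t = max(-f_max, min(f_max, gain * error))      # saturated command (m/s^2)
                v = math.sqrt(MU_MARS / a)                       # near-circular orbital speed
                a += (2.0 * a * a * v / MU_MARS) * f_t * dt      # Gauss equation, tangential term
                t += dt
            return a, t

        if __name__ == "__main__":
            a_final, elapsed = correct_semi_major_axis(a0=3_796_000.0, a_target=3_800_000.0)
            print("final a = %.1f km after %.1f h" % (a_final / 1e3, elapsed / 3600.0))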

  8. Automatic Summarization of Online Debates

    OpenAIRE

    Sanchan, Nattapong; Aker, Ahmet; Bontcheva, Kalina

    2017-01-01

    Debate summarization is one of the novel and challenging research areas in automatic text summarization which has been largely unexplored. In this paper, we develop a debate summarization pipeline to summarize key topics which are discussed or argued in the two opposing sides of online debates. We view that the generation of debate summaries can be achieved by clustering, cluster labeling, and visualization. In our work, we investigate two different clustering approaches for the generation of...

  9. Automatic systems for assistance in improving pronunciations

    CSIR Research Space (South Africa)

    Badenhorst, JAC

    2006-11-01

    Full Text Available A straightforward approach to this task would be to use a general-purpose speech-recognition system with a large vocabulary and unrestricted grammar to recognize the spoken utterance. However, general speech recognition is currently not sufficiently accurate...

  10. Implementation of an Automatic System for the Monitoring of Start-up and Operating Regimes of the Cooling Water Installations of a Hydro Generator

    Directory of Open Access Journals (Sweden)

    Ioan Pădureanu

    2015-07-01

    Full Text Available The safe operation of a hydro generator depends on its thermal regime, the basic condition being that the temperature in the stator winding falls within the limits of the insulation class. As the copper losses depend on the square of the stator winding current, the cooling water flow rate must be adapted to these losses so that the winding temperature remains within the values prescribed in the specifications. This paper presents an efficient solution for controlling and monitoring the water cooling installations of two high-power hydro generators.
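
    The control idea above (cooling water flow adapted to copper losses that scale with the square of the stator current) can be sketched as a simple heat balance: the flow is chosen so that Q * rho * c * dT matches 3 * I^2 * R. The resistance, design temperature rise and pump limits in the Python sketch below are assumed values for illustration, not the installation's actual parameters.

        # Size the cooling-water flow so the heat carried away matches copper losses.
        RHO = 1000.0          # kg/m^3, water density
        C_P = 4186.0          # J/(kg K), water specific heat

        def required_flow(stator_current, phase_resistance=0.005, delta_t_design=8.0,
                          q_min=0.002, q_max=0.060):
            """Return water flow in m^3/s, clamped to the pump's working range."""
            copper_losses = 3.0 * stator_current ** 2 * phase_resistance   # W, three phases
            flow = copper_losses / (RHO * C_P * delta_t_design)            # heat balance
            return max(q_min, min(q_max, flow))

        if __name__ == "__main__":
            for current in (2000.0, 5000.0, 10000.0):
                print("I = %6.0f A -> Q = %.4f m^3/s" % (current, required_flow(current)))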

  11. A Reinforcement Learning Model Equipped with Sensors for Generating Perception Patterns: Implementation of a Simulated Air Navigation System Using ADS-B (Automatic Dependent Surveillance-Broadcast) Technology.

    Science.gov (United States)

    Álvarez de Toledo, Santiago; Anguera, Aurea; Barreiro, José M; Lara, Juan A; Lizcano, David

    2017-01-19

    Over the last few decades, a number of reinforcement learning techniques have emerged, and different reinforcement learning-based applications have proliferated. However, such techniques tend to specialize in a particular field. This is an obstacle to their generalization and extrapolation to other areas. Besides, neither the reward-punishment (r-p) learning process nor the convergence of results is fast and efficient enough. To address these obstacles, this research proposes a general reinforcement learning model. This model is independent of input and output types and based on general bioinspired principles that help to speed up the learning process. The model is composed of a perception module based on sensors whose specific perceptions are mapped as perception patterns. In this manner, similar perceptions (even if perceived at different positions in the environment) are accounted for by the same perception pattern. Additionally, the model includes a procedure that statistically associates perception-action pattern pairs depending on the positive or negative results output by executing the respective action in response to a particular perception during the learning process. To do this, the model is fitted with a mechanism that reacts positively or negatively to particular sensory stimuli in order to rate results. The model is supplemented by an action module that can be configured depending on the maneuverability of each specific agent. The model has been applied in the air navigation domain, a field with strong safety restrictions, which led us to implement a simulated system equipped with the proposed model. Accordingly, the perception sensors were based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology, which is described in this paper. The results were quite satisfactory, and it outperformed traditional methods existing in the literature with respect to learning reliability and efficiency.
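
    A hedged, minimal sketch of the idea described above is given below: raw sensor readings are discretized into coarse perception patterns, and each (pattern, action) pair accumulates positive/negative outcome counts that drive action selection with occasional exploration. The class, bin size, action names and reward handling are illustrative assumptions, not the authors' implementation.

        # Minimal perception-pattern reinforcement learner (illustrative only).
        import random
        from collections import defaultdict

        class PatternRL:
            def __init__(self, actions, bin_size=10.0, epsilon=0.1):
                self.actions = list(actions)
                self.bin_size = bin_size
                self.epsilon = epsilon
                self.stats = defaultdict(lambda: [0, 0])     # (pattern, action) -> [pos, neg]

            def pattern(self, readings):
                """Map a tuple of sensor readings to a coarse perception pattern."""
                return tuple(int(r // self.bin_size) for r in readings)

            def act(self, readings):
                p = self.pattern(readings)
                if random.random() < self.epsilon:
                    return random.choice(self.actions)       # occasional exploration
                def score(a):
                    pos, neg = self.stats[(p, a)]
                    return (pos + 1.0) / (pos + neg + 2.0)   # smoothed success rate
                return max(self.actions, key=score)

            def learn(self, readings, action, reward):
                pos_neg = self.stats[(self.pattern(readings), action)]
                pos_neg[0 if reward > 0 else 1] += 1

        agent = PatternRL(actions=["climb", "descend", "hold"])
        readings = (812.0, 47.5)                              # e.g. altitude, closing speed
        chosen = agent.act(readings)
        agent.learn(readings, chosen, reward=+1)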

  12. An Automatic Mosaicking Algorithm for the Generation of a Large-Scale Forest Height Map Using Spaceborne Repeat-Pass InSAR Correlation Magnitude

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2015-05-01

    Full Text Available This paper describes an automatic mosaicking algorithm for creating large-scale mosaic maps of forest height. In contrast to existing mosaicking approaches that use SAR backscatter power and/or InSAR phase, this paper utilizes the forest height estimates that are inverted from spaceborne repeat-pass cross-pol InSAR correlation magnitude. By using repeat-pass InSAR correlation measurements that are dominated by temporal decorrelation, it has been shown that a simplified inversion approach can be utilized to create a height-sensitive measure over the whole interferometric scene, where two scene-wide fitting parameters are able to characterize the mean behavior of the random motion and dielectric changes of the volume scatterers within the scene. In order to combine these single-scene results into a mosaic, a matrix formulation is used with nonlinear least squares and observations in adjacent-scene overlap areas to create a self-consistent estimate of forest height over the larger region. This automated mosaicking method has the benefit of suppressing the global fitting error and, thus, mitigating the “wallpapering” problem in the manual mosaicking process. The algorithm is validated over the U.S. state of Maine by using InSAR correlation magnitude data from ALOS/PALSAR and comparing the inverted forest height with Laser Vegetation Imaging Sensor (LVIS) height and National Biomass and Carbon Dataset (NBCD) basal area weighted (BAW) height. This paper serves as a companion work to previously demonstrated results, the combination of which is meant to be an observational prototype for NASA’s DESDynI-R (now called NISAR) and JAXA’s ALOS-2 satellite missions.
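
    To illustrate the adjustment step in miniature, the Python sketch below gives each scene a single additive calibration offset, uses mean height differences in overlap areas as observations, and solves a least-squares system with one scene held fixed as reference; the real method fits two physical parameters per scene and a nonlinear model, so this linear toy problem only conveys the structure of the matrix formulation.

        # Toy mosaic adjustment: per-scene offsets from overlap-area differences.
        import numpy as np

        def adjust_offsets(n_scenes, overlaps):
            """overlaps: list of (i, j, mean_height_i_minus_j) in overlap areas.
            Returns per-scene offsets with scene 0 fixed to zero."""
            rows, rhs = [], []
            for i, j, diff in overlaps:
                row = np.zeros(n_scenes)
                row[i], row[j] = 1.0, -1.0            # (h_i + o_i) - (h_j + o_j) ~ 0
                rows.append(row)
                rhs.append(-diff)
            rows.append(np.eye(n_scenes)[0])          # gauge constraint: o_0 = 0
            rhs.append(0.0)
            offsets, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
            return offsets

        overlaps = [(0, 1, 1.8), (1, 2, -0.7), (0, 2, 1.0)]   # metres, illustrative
        print(np.round(adjust_offsets(3, overlaps), 2))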

  13. A Reinforcement Learning Model Equipped with Sensors for Generating Perception Patterns: Implementation of a Simulated Air Navigation System Using ADS-B (Automatic Dependent Surveillance-Broadcast Technology

    Directory of Open Access Journals (Sweden)

    Santiago Álvarez de Toledo

    2017-01-01

    Full Text Available Over the last few decades, a number of reinforcement learning techniques have emerged, and different reinforcement learning-based applications have proliferated. However, such techniques tend to specialize in a particular field. This is an obstacle to their generalization and extrapolation to other areas. Besides, neither the reward-punishment (r-p) learning process nor the convergence of results is fast and efficient enough. To address these obstacles, this research proposes a general reinforcement learning model. This model is independent of input and output types and based on general bioinspired principles that help to speed up the learning process. The model is composed of a perception module based on sensors whose specific perceptions are mapped as perception patterns. In this manner, similar perceptions (even if perceived at different positions in the environment) are accounted for by the same perception pattern. Additionally, the model includes a procedure that statistically associates perception-action pattern pairs depending on the positive or negative results output by executing the respective action in response to a particular perception during the learning process. To do this, the model is fitted with a mechanism that reacts positively or negatively to particular sensory stimuli in order to rate results. The model is supplemented by an action module that can be configured depending on the maneuverability of each specific agent. The model has been applied in the air navigation domain, a field with strong safety restrictions, which led us to implement a simulated system equipped with the proposed model. Accordingly, the perception sensors were based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology, which is described in this paper. The results were quite satisfactory, and it outperformed traditional methods existing in the literature with respect to learning reliability and efficiency.

  14. Automatic Evolution of Molecular Nanotechnology Designs

    Science.gov (United States)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper describes strategies for automatically generating designs for analog circuits at the molecular level. Software maps out the edges and vertices of potential nanotechnology systems on graphs, then selects appropriate ones through evolutionary or genetic paradigms.

  15. Improved depth perception with three-dimensional auxiliary display and computer generated three-dimensional panoramic overviews in robot-assisted laparoscopy

    Science.gov (United States)

    Wieringa, Fokko P.; Bouma, Henri; Eendebak, Pieter T.; van Basten, Jean-Paul A.; Beerlage, Harrie P.; Smits, Geert A. H. J.; Bos, Jelte E.

    2014-01-01

    In comparison to open surgery, endoscopic surgery offers impaired depth perception and a narrower field-of-view. To improve depth perception, the Da Vinci robot offers three-dimensional (3-D) video on the console for the surgeon but not for assistants, although both must collaborate. We improved the shared perception of the whole surgical team by connecting live 3-D monitors to all three available Da Vinci generations, probed user experience after two years by questionnaire, and compared time measurements of a predefined complex interaction task performed with a 3-D monitor versus a two-dimensional one. Additionally, we investigated whether the complex mental task of reconstructing a 3-D overview from an endoscopic video can be performed by a computer and shared among users. During the study, 925 robot-assisted laparoscopic procedures were performed in three hospitals, including prostatectomies, cystectomies, and nephrectomies. Thirty-one users participated in our questionnaire. Eighty-four percent preferred 3-D monitors and 100% reported spatial-perception improvement. All participating urologists indicated quicker performance of tasks requiring delicate collaboration (e.g., clip placement) when assistants used 3-D monitors. Eighteen users participated in a timing experiment during a delicate cooperation task in vitro. Teamwork was significantly (40%) faster with the 3-D monitor. Computer-generated 3-D reconstructions from recordings offered very wide interactive panoramas with educational value, although the present embodiment is vulnerable to movement artifacts. PMID:26158026

  16. Generation of high-power terahertz radiation by nonlinear photon-assisted tunneling transport in plasmonic metamaterials

    Science.gov (United States)

    Chen, Pai-Yen; Salas, Rodolfo; Farhat, Mohamed

    2017-12-01

    We propose an optoelectronic terahertz oscillator based on the quantum tunneling effect in a plasmonic metamaterial, utilizing a nanostructured metal-insulator-metal (MIM) tunneling junction. The collective resonant response of meta-atoms can achieve >90% optical absorption and strongly localized optical fields within the MIM plasmonic nanojunction. By properly tailoring the radiation aperture, the nonlinear quantum conductance induced by the metamaterial-enhanced, photon-assisted tunneling may produce milliwatt-level terahertz radiation through the optical beating (or heterodyne down conversion) of two lasers with a slight frequency offset. We envisage that the interplay between photon-assisted tunneling and plasmon coupling within the MIM metamaterial/diode may substantially enhance the modulated terahertz photocurrent, and may therefore realize a practical high-power, room-temperature source in applications of terahertz electronics.

  17. Automatic Atlas Based Electron Density and Structure Contouring for MRI-based Prostate Radiation Therapy on the Cloud

    Science.gov (United States)

    Dowling, J. A.; Burdett, N.; Greer, P. B.; Sun, J.; Parker, J.; Pichler, P.; Stanwell, P.; Chandra, S.; Rivest-Hénault, D.; Ghose, S.; Salvado, O.; Fripp, J.

    2014-03-01

    Our group have been developing methods for MRI-alone prostate cancer radiation therapy treatment planning. To assist with clinical validation of the workflow we are investigating a cloud platform solution for research purposes. Benefits of cloud computing can include increased scalability, performance and extensibility while reducing total cost of ownership. In this paper we demonstrate the generation of DICOM-RT directories containing an automatic average atlas based electron density image and fast pelvic organ contouring from whole pelvis MR scans.

  18. Mechanism of transformation in Mycobacteria using a novel shockwave assisted technique driven by in-situ generated oxyhydrogen

    OpenAIRE

    Datey, Akshay; Subburaj, Janardhanraj; Gopalan, Jagadeesh; Chakravortty, Dipshikha

    2017-01-01

    We present a novel method for shockwave-assisted bacterial transformation using a miniature oxyhydrogen detonation-driven shock tube. We have obtained transformation efficiencies of about 1.28 × 10(6), 1.7 × 10(6), 5 × 10(6), 1 × 10(5), 1 × 10(5) and 2 × 10(5) transformants/µg of DNA for Escherichia coli, Salmonella Typhimurium, Pseudomonas aeruginosa, Mycobacterium smegmatis, Mycobacterium tuberculosis (Mtb) and Helicobacter pylori respectively using this method, which are significantly higher than those ...

  19. Automatic Parallelization Using OpenMP Based on STL Semantics

    Energy Technology Data Exchange (ETDEWEB)

    Liao, C; Quinlan, D J; Willcock, J J; Panas, T

    2008-06-03

    Automatic parallelization of sequential applications using OpenMP as a target has been attracting significant attention recently because of the popularity of multicore processors and the simplicity of using OpenMP to express parallelism for shared-memory systems. However, most previous research has only focused on C and Fortran applications operating on primitive data types. C++ applications using high level abstractions such as STL containers are largely ignored due to the lack of research compilers that are readily able to recognize high level object-oriented abstractions of STL. In this paper, we use ROSE, a multiple-language source-to-source compiler infrastructure, to build a parallelizer that can recognize such high level semantics and parallelize C++ applications using certain STL containers. The idea of our work is to automatically insert OpenMP constructs using extended conventional dependence analysis and the known domain-specific semantics of high-level abstractions with optional assistance from source code annotations. In addition, the parallelizer is followed by an OpenMP translator to translate the generated OpenMP programs into multi-threaded code targeted to a popular OpenMP runtime library. Our work extends the applicability of automatic parallelization and provides another way to take advantage of multicore processors.

  20. Data-Driven Hint Generation in Vast Solution Spaces: A Self-Improving Python Programming Tutor

    Science.gov (United States)

    Rivers, Kelly; Koedinger, Kenneth R.

    2017-01-01

    To provide personalized help to students who are working on code-writing problems, we introduce a data-driven tutoring system, ITAP (Intelligent Teaching Assistant for Programming). ITAP uses state abstraction, path construction, and state reification to automatically generate personalized hints for students, even when given states that have not…

  1. [Wearable Automatic External Defibrillators].

    Science.gov (United States)

    Luo, Huajie; Luo, Zhangyuan; Jin, Xun; Zhang, Leilei; Wang, Changjin; Zhang, Wenzan; Tu, Quan

    2015-11-01

    Defibrillation is the most effective method of treating ventricular fibrillation (VF). This paper introduces a wearable automatic external defibrillator based on an embedded system, which includes ECG measurement, bioelectrical impedance measurement and a discharge defibrillation module, and which can automatically identify a VF signal and deliver a biphasic exponential waveform defibrillation discharge. After verification by animal tests, the device was shown to perform ECG acquisition and automatic identification. After identifying the ventricular fibrillation signal, it can automatically defibrillate to abort ventricular fibrillation and achieve cardioversion.

  2. Automatic fluid dispenser

    Science.gov (United States)

    Sakellaris, P. C. (Inventor)

    1977-01-01

    Fluid automatically flows to individual dispensing units at predetermined times from a fluid supply and is available only for a predetermined interval of time after which an automatic control causes the fluid to drain from the individual dispensing units. Fluid deprivation continues until the beginning of a new cycle when the fluid is once again automatically made available at the individual dispensing units.

  3. Automatically predicting mood from expressed emotions

    NARCIS (Netherlands)

    Katsimerou, C.

    2016-01-01

    Affect-adaptive systems have the potential to assist users that experience systematically negative moods. This thesis aims at building a platform for predicting automatically a person’s mood from his/her visual expressions. The key word is mood, namely a relatively long-term, stable and diffused

  4. Experiences in automatic keywording of particle physics literature

    CERN Document Server

    Montejo Ráez, Arturo

    2001-01-01

    Attributing keywords can assist in the classification and retrieval of documents in the particle physics literature. As information services face a future with less available manpower and more and more documents being written, the possibility of keyword attribution being assisted by automatic classification software is explored. A project being carried out at CERN (the European Laboratory for Particle Physics) for the development and integration of automatic keywording is described.

  5. Synthesis, optical properties and residual strain effect of GaN nanowires generated via metal-assisted photochemical electroless etching

    KAUST Repository

    Najar, Adel

    2017-04-18

    Herein, we report on studies of GaN nanowires (GaN NWs) prepared via a metal-assisted photochemical electroless etching method with Pt as the catalyst. It has been found that etching time greatly influences the growth of GaN NWs. The density and the length of the nanowires increased with longer etching time, and excellent substrate coverage was observed. The average nanowire width and length are around 35 nm and 10 μm, respectively. Transmission electron microscopy (TEM) shows a single-crystalline wurtzite structure, which is confirmed by X-ray measurements. The synthesis mechanism of GaN NWs using the metal-assisted photochemical electroless etching method is presented. Photoluminescence (PL) measurements of GaN NWs show red-shifted PL peaks compared to the as-grown sample, associated with the relaxation of compressive stress. Furthermore, a shift of the E2 peak to lower frequency in the Raman spectra for the samples etched for a longer time confirms such a stress relaxation. Based on Raman measurements, the compressive stress σxx and the residual strain εxx were evaluated to be 0.23 GPa and 2.6 × 10−4, respectively. GaN NW synthesis using this low-cost method might be used for the fabrication of power optoelectronic devices and gas sensors.

  6. Effects of assisted magnetic field to an atmospheric-pressure plasma jet on radical generation at the plasma-surface interface and bactericidal function

    Science.gov (United States)

    Liu, Chih-Tung; Kumakura, Takumi; Ishikawa, Kenji; Hashizume, Hiroshi; Takeda, Keigo; Ito, Masafumi; Hori, Masaru; Wu, Jong-Shinn

    2016-12-01

    A magnetic-assisted-plasma (MAP) configuration for a helium-based atmospheric-pressure plasma jet (APPJ) with an axial magnetic field of 0.587 T is proposed, which provides a good ability to kill the bacterium Escherichia coli on an agar surface. Optically, we confirmed that the MAP increased the electron density, estimated from the Stark broadening of the H β line emission, by approximately 2.4 times, and enhanced the atomic oxygen concentration, measured by vacuum ultraviolet absorption spectroscopy (VUVAS), by approximately 1.5 times. Moreover, the generation of hydroxyl radicals in water increased 1.5 times, as confirmed by the spin-trapping electron spin-resonance technique. In addition, bactericidal experiments demonstrated a 2.4 times higher killing rate for E. coli with the MAP treatment. The MAP configuration is proposed to be highly useful for future bio-medical applications by enhancing radical generation at the plasma/substrate interface region.

  7. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    Energy Technology Data Exchange (ETDEWEB)

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J. (Elemental Technologies, American Fort, UT)

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.

  8. Automatic translation among spoken languages

    Science.gov (United States)

    Walter, Sharon M.; Costigan, Kelly

    1994-01-01

    The Machine Aided Voice Translation (MAVT) system was developed in response to the shortage of experienced military field interrogators with both foreign language proficiency and interrogation skills. Combining speech recognition, machine translation, and speech generation technologies, the MAVT accepts an interrogator's spoken English question and translates it into spoken Spanish. The spoken Spanish response of the potential informant can then be translated into spoken English. Potential military and civilian applications for automatic spoken language translation technology are discussed in this paper.

  9. LEARNING STRATEGY IMPLEMENTATION OF GENERATIVE LEARNING ASSISTED SCIENTIST’S CARD TO IMPROVE SELF EFFICACY OF JUNIOR HIGH SCHOOL STUDENT IN CLASS VIII

    Directory of Open Access Journals (Sweden)

    R. Yuliarti

    2016-01-01

    Full Text Available In general, students' self-efficacy is still low. This study aims to determine whether the implementation of a generative learning strategy assisted by a scientist's card can improve students' self-efficacy and cognitive learning outcomes. The study used a One Group Pretest-Posttest Design. The improvement in self-efficacy was determined from the change in questionnaire scores before and after the learning and from observations during the learning process. Cognitive learning outcomes were determined from pretest and posttest scores. To determine the improvement, the data were analyzed using the gain test. The results showed that the N-gain of self-efficacy is 0.13 (low) and the N-gain of cognitive learning is 0.60 (medium). Based on the observations, students' self-efficacy increased at each meeting. Cognitive learning outcomes also reached a mastery level of 72.88%. It can be concluded that the generative learning strategy assisted by a scientist's card can improve students' self-efficacy and cognitive learning outcomes.

  10. DRACO-STEM: An Automatic Tool to Generate High-Quality 3D Meshes of Shoot Apical Meristem Tissue at Cell Resolution.

    Science.gov (United States)

    Cerutti, Guillaume; Ali, Olivier; Godin, Christophe

    2017-01-01

    Context: The shoot apical meristem (SAM), origin of all aerial organs of the plant, is a restricted niche of stem cells whose growth is regulated by a complex network of genetic, hormonal and mechanical interactions. Studying the development of this area at cell level using 3D microscopy time-lapse imaging is a newly emerging key to understand the processes controlling plant morphogenesis. Computational models have been proposed to simulate those mechanisms, however their validation on real-life data is an essential step that requires an adequate representation of the growing tissue to be carried out. Achievements: The tool we introduce is a two-stage computational pipeline that generates a complete 3D triangular mesh of the tissue volume based on a segmented tissue image stack. DRACO (Dual Reconstruction by Adjacency Complex Optimization) is designed to retrieve the underlying 3D topological structure of the tissue and compute its dual geometry, while STEM (SAM Tissue Enhanced Mesh) returns a faithful triangular mesh optimized along several quality criteria (intrinsic quality, tissue reconstruction, visual adequacy). Quantitative evaluation tools measuring the performance of the method along those different dimensions are also provided. The resulting meshes can be used as input and validation for biomechanical simulations. Availability: DRACO-STEM is supplied as a package of the open-source multi-platform plant modeling library OpenAlea (http://openalea.github.io/) implemented in Python, and is freely distributed on GitHub (https://github.com/VirtualPlants/draco-stem) along with guidelines for installation and use.

  11. Automatic generation of zonal models to study air movement and temperature distribution in buildings; Generation automatique de modeles zonaux pour l'etude du comportement thermo-aeraulique des batiments

    Energy Technology Data Exchange (ETDEWEB)

    Musy, M.

    1999-07-01

    This study aims to show that it is possible to automatically build zonal models that predict air movement, temperature distribution and air quality in a whole building. Zonal models are based on a rough partitioning of the rooms; they are an intermediate approach between one-node models and CFD models. One-node models assume a homogeneous temperature in each room and therefore cannot predict thermal comfort within a room, whereas CFD models require a great amount of simulation time. To achieve this aim, the zonal model was entirely reformulated as the connection of small sets of equations. The equations describe either the state of a sub-zone of the partitioning (such sets of equations are called 'cells'), or the mass and energy transfers that occur between two sub-zones (in which case they are called 'interfaces'). There are various 'cells' and 'interfaces' to represent the different air flows that occur in buildings. They have all been translated into SPARK objects that form a model library. Building a simulation consists in choosing the appropriate models to represent the rooms and connecting them; this last stage has been automated, so the only thing the user has to do is provide the partitioning and choose the models to be implemented. The resulting set of equations is solved iteratively with SPARK. Results of simulations in 3D rooms are presented and compared with experimental data. Examples of zonal models are also given; they are applied to the study of a group of two rooms, a building, and a room with complex geometry. (author)
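
    A much-simplified sketch of the cell/interface idea is given below: each sub-zone ("cell") carries a steady-state heat balance, each connection ("interface") carries a flow proportional to the temperature difference, and the coupled equations are solved iteratively (here with Gauss-Seidel). The conductances, layout and heat source are illustrative assumptions, and the sketch omits the air-flow (mass transfer) terms of a real zonal model.

        # Toy zonal thermal network: cells with heat balances, interfaces with
        # conductances, solved iteratively (Gauss-Seidel).

        def solve_zonal(cells, interfaces, sources, fixed, n_iter=500):
            """cells: list of ids; interfaces: {(a, b): conductance W/K};
            sources: {cell: W}; fixed: {cell: temperature degC}."""
            T = {c: 20.0 for c in cells}
            T.update(fixed)
            neighbours = {c: [] for c in cells}
            for (a, b), g in interfaces.items():
                neighbours[a].append((b, g))
                neighbours[b].append((a, g))
            for _ in range(n_iter):
                for c in cells:
                    if c in fixed:
                        continue
                    g_sum = sum(g for _, g in neighbours[c])
                    flow_in = sum(g * T[other] for other, g in neighbours[c])
                    T[c] = (flow_in + sources.get(c, 0.0)) / g_sum
            return T

        cells = ["floor", "mid", "ceiling", "wall"]
        interfaces = {("floor", "mid"): 12.0, ("mid", "ceiling"): 12.0,
                      ("floor", "wall"): 4.0, ("ceiling", "wall"): 4.0}
        temps = solve_zonal(cells, interfaces, sources={"floor": 150.0}, fixed={"wall": 14.0})
        print({c: round(t, 1) for c, t in temps.items()})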

  12. SubClonal Hierarchy Inference from Somatic Mutations: Automatic Reconstruction of Cancer Evolutionary Trees from Multi-region Next Generation Sequencing.

    Directory of Open Access Journals (Sweden)

    Noushin Niknafs

    2015-10-01

    Full Text Available Recent improvements in next-generation sequencing of tumor samples and the ability to identify somatic mutations at low allelic fractions have opened the way for new approaches to model the evolution of individual cancers. The power and utility of these models is increased when tumor samples from multiple sites are sequenced. Temporal ordering of the samples may provide insight into the etiology of both primary and metastatic lesions and rationalizations for tumor recurrence and therapeutic failures. Additional insights may be provided by temporal ordering of evolving subclones--cellular subpopulations with unique mutational profiles. Current methods for subclone hierarchy inference tightly couple the problem of temporal ordering with that of estimating the fraction of cancer cells harboring each mutation. We present a new framework that includes a rigorous statistical hypothesis test and a collection of tools that make it possible to decouple these problems, which we believe will enable substantial progress in the field of subclone hierarchy inference. The methods presented here can be flexibly combined with methods developed by others addressing either of these problems. We provide tools to interpret hypothesis test results, which inform phylogenetic tree construction, and we introduce the first genetic algorithm designed for this purpose. The utility of our framework is systematically demonstrated in simulations. For most tested combinations of tumor purity, sequencing coverage, and tree complexity, good power (≥ 0.8) can be achieved and Type 1 error is well controlled when at least three tumor samples are available from a patient. Using data from three published multi-region tumor sequencing studies of (murine) small cell lung cancer, acute myeloid leukemia, and chronic lymphocytic leukemia, in which the authors reconstructed subclonal phylogenetic trees by manual expert curation, we show how different configurations of our tools can

  13. Standard Setting for Next Generation TOEFL Academic Speaking Test (TAST): Reflections on the ETS Panel of International Teaching Assistant Developers

    Science.gov (United States)

    Papajohn, Dean

    2006-01-01

    While many institutions have utilized TOEFL scores for international admissions for many years, a speaking section was never a required part of TOEFL until the development of the iBT/Next Generation TOEFL. So institutions will need to determine how to set standards for the speaking section of TOEFL, also known as TOEFL Academic…

  14. A potential high risk for fatty liver disease was found in mice generated after assisted reproductive techniques.

    Science.gov (United States)

    Gu, Leilei; Zhang, Jingjing; Zheng, Meimei; Dong, Guoying; Xu, Jingyi; Zhang, Wuyue; Wu, Yibo; Yang, Yang; Zhu, Hui

    2018-02-01

    Abnormal gametogenesis and embryonic development may lead to poor health status of the offspring. The operations involved in assisted reproductive technologies (ARTs) occur during the key stages of gametogenesis and early embryonic development. To assess the potential risk of abnormal lipid metabolism in the liver of adult ART offspring, two ART mouse models were constructed, derived from preimplantation genetic diagnosis (PGD group) and from in vitro cultured embryos without biopsy (IVEM group); control mice were conceived naturally in vivo (Normal group). The results showed that ART offspring had increased body weight and body fat content compared to the normal group. An increased volume and number of lipid droplets, as well as lipid droplet fusion, were found in the hepatocytes of ART mice, and a significantly increased liver TG content was also observed, due to increased TG synthesis and decreased TG transport in the liver. All the results indicated that the manipulations involved in ARTs might play an important role in the lipid accumulation of adult offspring. By analyzing the DNA methylation profiles of 7.5 dpc embryos, we propose that methylation deregulation of genes related to liver development in ART embryos might contribute to the abnormal phenotype in the offspring. The study demonstrates that ART procedures have an adverse effect on liver development, resulting in abnormal lipid metabolism and an increased potential risk of fatty liver in adulthood. © 2017 Wiley Periodicals, Inc.

  15. Mobile-Cloud Assisted Video Summarization Framework for Efficient Management of Remote Sensing Data Generated by Wireless Capsule Sensors

    Directory of Open Access Journals (Sweden)

    Irfan Mehmood

    2014-09-01

    Full Text Available Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.
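
    The redundancy-elimination step based on Jeffrey divergence can be sketched as follows: consecutive frames whose colour histograms differ by less than a threshold are dropped, so only visually distinct frames survive as keyframe candidates. The histogram binning, the divergence formulation (symmetrized KL against the average histogram, as commonly used for histogram comparison) and the threshold are illustrative assumptions; the framework above additionally uses Boolean-series correlation and texture-based classification, which are not shown.

        # Drop frames whose colour histograms barely differ from the last keyframe.
        import numpy as np

        def colour_histogram(frame, bins=16):
            """Normalized per-channel histogram of an HxWx3 uint8 frame, concatenated."""
            hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
            h = np.concatenate(hists).astype(float)
            return h / h.sum()

        def jeffrey_divergence(p, q, eps=1e-12):
            m = (p + q) / 2.0
            return float(np.sum(p * np.log((p + eps) / (m + eps)) + q * np.log((q + eps) / (m + eps))))

        def keyframe_candidates(frames, threshold=0.15):
            kept, last_hist = [], None
            for idx, frame in enumerate(frames):
                h = colour_histogram(frame)
                if last_hist is None or jeffrey_divergence(h, last_hist) > threshold:
                    kept.append(idx)
                    last_hist = h
            return kept

        # Synthetic frames: near-duplicates should be skipped, distinct ones kept.
        frames = [np.full((64, 64, 3), v, dtype=np.uint8) for v in (10, 12, 200, 202, 90)]
        print("kept frames:", keyframe_candidates(frames))   # -> [0, 2, 4]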

  16. Sample preconcentration utilizing nanofractures generated by junction gap breakdown assisted by self-assembled monolayer of gold nanoparticles.

    Directory of Open Access Journals (Sweden)

    Chun-Ping Jen

    Full Text Available The preconcentration of proteins with low concentrations can be used to increase the sensitivity and accuracy of detection. A nonlinear electrokinetic flow is induced in a nanofluidic channel due to the overlap of electrical double layers, resulting in the fast accumulation of proteins, referred to as the exclusion-enrichment effect. The proposed chip for protein preconcentration was fabricated using simple standard soft lithography with a polydimethylsiloxane replica. This study extends our previous paper, in which gold nanoparticles were manually deposited onto the surface of a protein preconcentrator. In the present work, nanofractures were formed by utilizing the self-assembly of gold-nanoparticle-assisted electric breakdown. This reliable method for nanofracture formation, involving self-assembled monolayers of nanoparticles at the junction gap between microchannels, also decreases the required electric breakdown voltage. The experimental results reveal that a high concentration factor of 1.5×10(4) for a protein sample with an extremely low concentration of 1 nM was achieved in 30 min by using the proposed chip, which is faster than our previously proposed chip at the same conditions. Moreover, an immunoassay of bovine serum albumin (BSA) and anti-BSA was carried out to demonstrate the applicability of the proposed chip.

  17. Mechanism of transformation in Mycobacteria using a novel shockwave assisted technique driven by in-situ generated oxyhydrogen.

    Science.gov (United States)

    Datey, Akshay; Subburaj, Janardhanraj; Gopalan, Jagadeesh; Chakravortty, Dipshikha

    2017-08-17

    We present a novel method for shockwave-assisted bacterial transformation using a miniature oxyhydrogen detonation-driven shock tube. We have obtained transformation efficiencies of about 1.28 × 10⁶, 1.7 × 10⁶, 5 × 10⁶, 1 × 10⁵, 1 × 10⁵ and 2 × 10⁵ transformants/µg of DNA for Escherichia coli, Salmonella Typhimurium, Pseudomonas aeruginosa, Mycobacterium smegmatis, Mycobacterium tuberculosis (Mtb) and Helicobacter pylori, respectively, using this method, which are significantly higher than those obtained using conventional methods. Mtb is among the most difficult bacteria to transform, and its genetic modification is hampered by its poor transformation efficiency. Experimental results show that a longer steady-time duration of the shockwave results in higher transformation efficiencies. Measurements of the Young's modulus and rigidity of the cell wall give a good understanding of the transformation mechanism, and these results have been validated computationally. We describe the development of a novel shockwave device for efficient bacterial transformation in complex bacteria, along with experimental evidence for understanding the transformation mechanism.

  18. Natural language processing techniques for automatic test ...

    African Journals Online (AJOL)

    ... user and allows him/her to submit essay answers back into the application system. Evaluation results with the system show that the generated questions achieved average accuracies of 87.5% and 88.1% by two human experts. Keywords: Discourse Connectives, Machine Learning, Automatic Test Generation E-Learning.

  19. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Full Text Available Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, the payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  20. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [comp.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
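
    For readers unfamiliar with the technique itself, the following minimal forward-mode sketch shows how derivative values are propagated alongside ordinary values via the chain rule; it is a generic illustration, not code from any tool in the bibliography.

      import math
      from dataclasses import dataclass

      @dataclass
      class Dual:
          # A value together with its derivative (forward-mode automatic differentiation).
          val: float
          dot: float = 0.0

          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(float(other))
              return Dual(self.val + other.val, self.dot + other.dot)
          __radd__ = __add__

          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(float(other))
              # Product rule: (uv)' = u'v + uv'
              return Dual(self.val * other.val,
                          self.dot * other.val + self.val * other.dot)
          __rmul__ = __mul__

      def sin(x):
          # Chain rule: (sin u)' = cos(u) * u'
          return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

      x = Dual(2.0, 1.0)          # seed dx/dx = 1
      y = x * sin(x) + 3 * x      # d/dx [x*sin(x) + 3x]
      print(y.val, y.dot)         # numeric value and exact derivative, no symbolic algebra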

  1. Automated Generation of Models of Activities of Daily Living

    OpenAIRE

    Saives, Jérémie; Faraut, Gregory

    2014-01-01

    International audience; In order to increase the safety of autonomous elderly people in their homes, Ambient Assisted Living technologies are currently emerging. In particular, the recognition of their activities might be a way to detect potential health problems, and it can be performed in a smart home equipped with binary sensors. Hence, this communication aims at providing means to automatically generate a formal model of the Activities of Daily Living. A data mining approach in order to discover frequ...

  2. Simulating the parameters of multi-modal groundwater residence-time distributions using parsimonious and automatically generated numerical models that incorporate spatial variation of drainage density, aquifer thickness, recharge, and aquifer properties

    Science.gov (United States)

    Starn, J. J.; Belitz, K.; Kauffman, L. J.; Yager, R. M.

    2016-12-01

    Groundwater residence-time distributions (RTDs) are important for assessing groundwater vulnerability. Two methods for calculating RTDs are common: lumped parameter models (LPMs) and distributed numerical models (DNMs). Model parameters in either approach can be determined by fitting to tracer data. LPMs represent simple flow systems and have few parameters; however, they may not be appropriate where internal aquifer processes are important, in transient flow systems, or in spatially complex flow systems. DNMs are not constrained by assumptions of uniform aquifer thickness, recharge, or aquifer properties, but are time-consuming and expensive to construct. This approach seeks to bridge the gap between LPMs and DNMs. Generalized DNMs (GDNMs) are created having a small number of parameters compared to typical DNMs. GDNMs are created automatically at the HUC8 scale based on nationally available data sets and incorporate spatial variation not present in LPMs such as drainage density, aquifer thickness, recharge, and aquifer properties. Typical creation time from a model domain shapefile to a "calibrated" model is about 10 minutes. GDNMs were created in various hydrogeologic settings in the glaciated U.S. where age-tracer data exist. GDNMs are "calibrated" by automatically adjusting hydraulic conductivity in a small number of zones to minimize the fraction of cells where the simulated water table is above land surface and below perennial streams. RTDs are calculated for groundwater basins, public-supply wells, and stream reaches using particle tracking. RTDs, which often are multi-modal, are summarized by fitting statistical distributions (additive combinations of Weibull, gamma, or inverse Gaussian) to particle RTDs. GDNMs are tested by comparing simulated and observed heads and by applying convolution to generate breakthrough curves for tracers that can be compared to observed concentrations. GDNMs also are compared to highly distributed DNMs from other sources. The
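
    The final summarization step, fitting an additive combination of parametric distributions to the particle travel times, might look roughly like the sketch below. The two-component Weibull mixture, the starting values and the synthetic ages are assumptions made only for illustration, not the GDNM workflow itself.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      def neg_log_likelihood(params, ages):
          # Two-component Weibull mixture; params = (weight, c1, scale1, c2, scale2).
          w, c1, s1, c2, s2 = params
          pdf = (w * weibull_min.pdf(ages, c1, scale=s1)
                 + (1.0 - w) * weibull_min.pdf(ages, c2, scale=s2))
          return -np.sum(np.log(pdf + 1e-300))

      # 'ages' would come from the particle-tracking step; synthetic values stand in here.
      rng = np.random.default_rng(0)
      ages = np.concatenate([rng.weibull(1.5, 500) * 10.0, rng.weibull(2.5, 500) * 80.0])

      result = minimize(neg_log_likelihood, x0=[0.5, 1.5, 10.0, 2.5, 80.0], args=(ages,),
                        method="L-BFGS-B",
                        bounds=[(0.01, 0.99), (0.1, 10), (0.1, 500), (0.1, 10), (0.1, 500)])
      weight, c1, scale1, c2, scale2 = result.x   # compact summary of the multi-modal RTD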

  3. The automatic generation of information security profiles

    OpenAIRE

    2014-01-01

    D.Phil. (Computer Science) Security needs have changed considerably in the past decade as the economics of computer usage necessitates increased business reliance on computers. As more individuals need computers to perform their jobs, more detailed security controls are needed to offset the risk inherent in granting more people access to computer systems. Traditionally, computer security administrators have been tasked with configuring security systems by setting controls on the actions...

  4. Automatic generation of summaries for the Web

    Science.gov (United States)

    Kopf, Stephan; Haenselmann, Thomas; Farin, Dirk; Effelsberg, Wolfgang

    2003-12-01

    Many TV broadcasters and film archives are planning to make their collections available on the Web. However, a major problem with large film archives is the fact that it is difficult to search the content visually. A video summary is a sequence of video clips extracted from a longer video. Much shorter than the original, the summary preserves its essential messages. Hence, video summaries may speed up the search significantly. Videos that have full horizontal and vertical resolution will usually not be accepted on the Web, since the bandwidth required to transfer the video is generally very high. If the resolution of a video is reduced in an intelligent way, its content can still be understood. We introduce a new algorithm that reduces the resolution while preserving as much of the semantics as possible. In the MoCA (movie content analysis) project at the University of Mannheim we developed the video summarization component and tested it on a large collection of films. In this paper we discuss the particular challenges which the reduction of the video length poses, and report empirical results from the use of our summarization tool.

  5. Automatic Generation of Conditional Diagnostic Guidelines.

    Science.gov (United States)

    Baldwin, Tyler; Guo, Yufan; Syeda-Mahmood, Tanveer

    2016-01-01

    The diagnostic workup for many diseases can be extraordinarily nuanced, and as such reference material text often contains extensive information regarding when it is appropriate to have a patient undergo a given procedure. In this work we employ a three-task pipeline for the extraction of statements indicating the conditions under which a procedure should be performed, given a suspected diagnosis. First, we identify each instance in the text where a procedure is being recommended. Next, we examine the context around these recommendations to extract conditional statements that dictate the conditions under which the recommendation holds. Finally, coreferring recommendations across the document are linked to produce a full recommendation summary. Results indicate that each underlying task can be performed with above-baseline performance, and the output can be used to produce concise recommendation summaries.

  6. Automatic Generation of Conditional Diagnostic Guidelines

    Science.gov (United States)

    Baldwin, Tyler; Guo, Yufan; Syeda-Mahmood, Tanveer

    2016-01-01

    The diagnostic workup for many diseases can be extraordinarily nuanced, and as such reference material text often contains extensive information regarding when it is appropriate to have a patient undergo a given procedure. In this work we employ a three-task pipeline for the extraction of statements indicating the conditions under which a procedure should be performed, given a suspected diagnosis. First, we identify each instance in the text where a procedure is being recommended. Next, we examine the context around these recommendations to extract conditional statements that dictate the conditions under which the recommendation holds. Finally, coreferring recommendations across the document are linked to produce a full recommendation summary. Results indicate that each underlying task can be performed with above-baseline performance, and the output can be used to produce concise recommendation summaries. PMID:28269823

  7. The Optimization of Automatically Generated Compilers.

    Science.gov (United States)

    1987-01-01

    We propose a new type of attribute evaluator which, like the LR-attributed method, works simultaneously with an LR parser. For introductions to the subject, the reader is referred to [WaG83], [AhU77], and [HoU79].

  8. Mediation and Automatization.

    Science.gov (United States)

    Hutchins, Edwin

    This paper discusses the relationship between the mediation of task performance by some structure that is not inherent in the task domain itself and the phenomenon of automatization, in which skilled performance becomes effortless or phenomenologically "automatic" after extensive practice. The use of a common simple explicit mediating…

  9. Digital automatic gain control

    Science.gov (United States)

    Uzdy, Z.

    1980-01-01

    Performance analysis, used to evaluate the fitness of several circuits for digital automatic gain control (AGC), indicates that a digital integrator employing a coherent amplitude detector (CAD) is the device best suited for the application. The circuit reduces gain error to half that of a conventional analog AGC while making it possible to automatically modify the receiver's response to match incoming signal conditions.

  10. AUTOMATIC INTRAVENOUS DRIP CONTROLLER*

    African Journals Online (AJOL)

    Both the nursing staff shortage and the need for precise control in the administration of dangerous drugs intravenously have led to the development of various devices to achieve an automatic system. The continuous automatic control of the drip rate eliminates errors due to any physical effect such as movement of the ...

  11. Computational investigation of the flow field contribution to improve electricity generation in granular activated carbon-assisted microbial fuel cells

    Science.gov (United States)

    Zhao, Lei; Li, Jian; Battaglia, Francine; He, Zhen

    2016-11-01

    Microbial fuel cells (MFCs) offer an alternative approach to treat wastewater with less energy input and direct electricity generation. To optimize MFC anodic performance, adding granular activated carbon (GAC) has proved to be an effective approach, most likely due to the enlarged electrode surface for biomass attachment and improved mixing of the flow field. The impact of a flow field on the current enhancement within a porous anode medium (e.g., GAC) has not been well understood before, and thus is investigated in this study by using mathematical modeling of the multi-order Butler-Volmer equation with computational fluid dynamics (CFD) techniques. By comparing three different CFD cases (without GAC, with GAC as a nonreactive porous medium, and with GAC as a reactive porous medium), it is demonstrated that adding GAC contributes to a uniform flow field and a total current enhancement of 17%, a factor that cannot be neglected in MFC design. However, in an actual MFC operation, this percentage could be even higher because of the microbial competition and energy loss issues within a porous medium. The results of the present study are expected to help with formulating strategies to optimize MFCs with a better flow pattern design.
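
    For reference, the standard single-step Butler-Volmer relation on which such bioanode models build is shown below; the multi-order form used in the study adds concentration-dependent terms, so this is only the underlying expression, not the authors' full model.

      \[
        j \;=\; j_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right)
                          - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right]
      \]

    where j is the current density, j_0 the exchange current density, \eta the overpotential, \alpha_a and \alpha_c the anodic and cathodic transfer coefficients, F the Faraday constant, R the gas constant, and T the temperature.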

  12. Direct determination of mercury in white vinegar by matrix assisted photochemical vapor generation atomic fluorescence spectrometry detection

    Energy Technology Data Exchange (ETDEWEB)

    Liu Qingyang, E-mail: liuqingyang0807@yahoo.com.c [Beijing Center for Physical and Chemical Analysis, Beijing 100089 (China)

    2010-07-15

    This paper proposes the use of photochemical vapor generation with acetic acid as sample introduction for the direct determination of ultra-trace mercury in white vinegars by atomic fluorescence spectrometry. Under ultraviolet irradiation, the sample matrix (acetic acid) can reduce mercury ion to atomic mercury Hg⁰, which is swept by argon gas into an atomic fluorescence spectrometer for subsequent analytical measurements. The effects of several factors such as the concentration of acetic acid, irradiation time, the flow rate of the carrier gas and matrix effects were discussed and optimized to give detection limits of 0.08 ng mL⁻¹ for mercury. Using the experimental conditions established during the optimization (3% v/v acetic acid, 30 s irradiation time and 20 W mercury lamp), the precision levels, expressed as relative standard deviation, were 4.6% (one day) and 7.8% (inter-day) for mercury (n = 9). Addition/recovery tests for evaluation of the accuracy were in the range of 92-98% for mercury. The method was also validated by analysis of vinegar samples without detectable amount of Hg spiked with aqueous standard reference materials (GBW(E) 080392 and GBW(E) 080393). The results were also compared with those obtained by acid digestion procedure and determination of mercury by ICP-MS. There was no significant difference between the results obtained by the two methods based on a t-test (at 95% confidence level).

  13. Automatic lexical classification: bridging research and practice.

    Science.gov (United States)

    Korhonen, Anna

    2010-08-13

    Natural language processing (NLP)--the automatic analysis, understanding and generation of human language by computers--is vitally dependent on accurate knowledge about words. Because words change their behaviour between text types, domains and sub-languages, a fully accurate static lexical resource (e.g. a dictionary, word classification) is unattainable. Researchers are now developing techniques that could be used to automatically acquire or update lexical resources from textual data. If successful, the automatic approach could considerably enhance the accuracy and portability of language technologies, such as machine translation, text mining and summarization. This paper reviews the recent and on-going research in automatic lexical acquisition. Focusing on lexical classification, it discusses the many challenges that still need to be met before the approach can benefit NLP on a large scale.

  14. Automatic inference of indexing rules for MEDLINE

    Directory of Open Access Journals (Sweden)

    Shooshan Sonya E

    2008-11-01

    Full Text Available Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.

  15. Automatic inference of indexing rules for MEDLINE.

    Science.gov (United States)

    Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent

    2008-11-19

    Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.

  16. Methodology for Automatic Generation of Models for Large Urban Spaces Based on GIS Data/Metodología para la generación automática de modelos de grandes espacios urbanos desde información SIG/

    Directory of Open Access Journals (Sweden)

    Sergio Arturo Ordóñez Medina

    2012-12-01

    Full Text Available In the planning and evaluation stages of infrastructure projects, it is necessary to manage huge quantities of information. Cities are very complex systems, which need to be modeled when an intervention is required. Such models allow us to measure the impact of infrastructure changes, simulating hypothetical scenarios and evaluating results. This paper describes a methodology for the automatic generation of urban space models from GIS sources. A Voronoi diagram is used to partition large urban regions and subsequently define zones of interest. Finally, some examples of application models are presented, one used for microsimulation of traffic and another for air pollution simulation.
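
    A minimal sketch of the partitioning idea, assuming the seed points have already been extracted from the GIS layer (the coordinates below are synthetic):

      import numpy as np
      from scipy.spatial import Voronoi

      # Illustrative only: 'seeds' would come from GIS point features (e.g., centroids).
      rng = np.random.default_rng(42)
      seeds = rng.uniform(low=[0.0, 0.0], high=[10.0, 10.0], size=(50, 2))

      vor = Voronoi(seeds)

      # Each bounded region is a candidate zone of interest for the urban model.
      zones = []
      for region_index in vor.point_region:
          region = vor.regions[region_index]
          if region and -1 not in region:          # keep only bounded cells
              zones.append(vor.vertices[region])   # polygon vertices of the zone

      print(f"{len(zones)} bounded zones generated from {len(seeds)} seed points")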

  17. Image matching as a data source for forest inventory - Comparison of Semi-Global Matching and Next-Generation Automatic Terrain Extraction algorithms in a typical managed boreal forest environment

    Science.gov (United States)

    Kukkonen, M.; Maltamo, M.; Packalen, P.

    2017-08-01

    Image matching is emerging as a compelling alternative to airborne laser scanning (ALS) as a data source for forest inventory and management. There is currently an open discussion in the forest inventory community about whether, and to what extent, the new method can be applied to practical inventory campaigns. This paper aims to contribute to this discussion by comparing two different image matching algorithms (Semi-Global Matching [SGM] and Next-Generation Automatic Terrain Extraction [NGATE]) and ALS in a typical managed boreal forest environment in southern Finland. Spectral features from unrectified aerial images were included in the modeling and the potential of image matching in areas without a high resolution digital terrain model (DTM) was also explored. Plot level predictions for total volume, stem number, basal area, height of basal area median tree and diameter of basal area median tree were modeled using an area-based approach. Plot level dominant tree species were predicted using a random forest algorithm, also using an area-based approach. The statistical difference between the error rates from different datasets was evaluated using a bootstrap method. Results showed that ALS outperformed image matching with every forest attribute, even when a high resolution DTM was used for height normalization and spectral information from images was included. Dominant tree species classification with image matching achieved accuracy levels similar to ALS regardless of the resolution of the DTM when spectral metrics were used. Neither of the image matching algorithms consistently outperformed the other, but there were noticeably different error rates depending on the parameter configuration, spectral band, resolution of DTM, or response variable. This study showed that image matching provides reasonable point cloud data for forest inventory purposes, especially when a high resolution DTM is available and information from the understory is redundant.

  18. Fluid Dynamics and Biofilm Removal Generated by Syringe-delivered and 2 Ultrasonic-assisted Irrigation Methods: A Novel Experimental Approach.

    Science.gov (United States)

    Layton, Gillian; Wu, Wen-I; Selvaganapathy, Ponnambalam Ravi; Friedman, Shimon; Kishen, Anil

    2015-06-01

    Thorough understanding of fluid dynamics in root canal irrigation and corresponding antibiofilm capacity will support improved disinfection strategies. This study aimed to develop a standardized, simulated root canal model that allows real-time analysis of fluid/irrigation dynamics and its correlation with biofilm elimination. A maxillary incisor with an instrumented root canal was imaged with micro-computed tomography. The canal volume was reconstructed in 3 dimensions and replicated in soft lithography-based models microfabricated from polyethylene glycol-modified polydimethylsiloxane. Canals were irrigated by using a syringe (SI) and 2 ultrasonic-assisted methods, intermittent (IUAI) and continuous (CUAI). Real-time fluid movement within the apical 3 mm of canals was imaged by using microparticle image velocimetry. In similar models, canals were inoculated with Enterococcus faecalis to grow 3-week-old biofilms. Biofilm reduction by irrigation with SI, CUAI, and IUAI was assessed by using a crystal violet assay and compared with an untreated control. SI generated higher velocity and shear stress in the apical 1-2 mm than 0-1 and 2-3 mm. IUAI generated consistently low shear stress in the apical 3 mm. CUAI generated consistently high levels of velocity and shear stress; it was the highest of the groups in the apical 0-1 and 2-3 mm. Biofilm was significantly reduced compared with the control only by CUAI (two-sample permutation test, P = .005). CUAI exhibited the highest mechanical effects of fluid flow in the apical 3 mm, which correlated with significant biofilm reduction. The soft lithography-based models provided a novel model/method for study of correlations between fluid dynamics and the antibiofilm capacity of root canal irrigation methods. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  19. Automatic wire twister.

    Science.gov (United States)

    Smith, J F; Rodeheaver, G T; Thacker, J G; Morgan, R F; Chang, D E; Fariss, B L; Edlich, R F

    1988-06-01

    This automatic wire twister used in surgery consists of a 6-inch needle holder attached to a twisting mechanism. The major advantage of this device is that it twists wires significantly more rapidly than the conventional manual techniques. Testing has found that the ultimate force required to disrupt the wires twisted by either the automatic wire twister or manual techniques did not differ significantly and was directly related to the number of twists. The automatic wire twister reduces the time needed for wire twisting without altering the security of the twisted wire.

  20. Determination of methylmercury and inorganic mercury by coupling short-column ion chromatographic separation, on-line photocatalyst-assisted vapor generation, and inductively coupled plasma mass spectrometry.

    Science.gov (United States)

    Chen, Kuan-ju; Hsu, I-hsiang; Sun, Yuh-chang

    2009-12-18

    We have combined short-column ion chromatographic separation and on-line photocatalyst-assisted vapor generation (VG) techniques with inductively coupled plasma mass spectrometry to develop a simple and sensitive hyphenated method for the determination of aqueous Hg²⁺ and MeHg⁺ species. The separation of Hg²⁺ and MeHg⁺ was accomplished on a cation-exchange guard column using a glutathione (GSH)-containing eluent. To achieve optimal chromatographic separation and signal intensities, we investigated the influence of several of the operating parameters of the chromatographic and photocatalyst-assisted VG systems. Under the optimized VG conditions, the shortcomings of conventional SnCl₂-based VG techniques for the vaporization of MeHg⁺ were overcome; compared to a concentric nebulizer-ICP-MS system, the analytical sensitivity of ICP-MS toward the detection of Hg²⁺ and MeHg⁺ was also improved 25- and 7-fold, respectively. With the use of our established HPLC-UV/nano-TiO₂-ICP-MS system, the precision for each analyte, based on three replicate injections of 2 ng/mL samples of each species, was better than 15% RSD. This hyphenated method also provided excellent detection limits--0.1 and 0.03 ng/mL for Hg²⁺ and MeHg⁺, respectively. A series of validation experiments--analysis of the NIST 2672a Standard Urine Reference Material and other urine samples--confirmed further that our proposed method can be applied satisfactorily to the determination of inorganic Hg²⁺ and MeHg⁺ species in real samples.

  1. Automatic learning-based beam angle selection for thoracic IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Amit, Guy; Marshall, Andrea [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca; Jaffray, David A. [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Techna Institute, University Health Network, Toronto, Ontario M5G 1P5 (Canada); Levinshtein, Alex [Department of Computer Science, University of Toronto, Toronto, Ontario M5S 3G4 (Canada); Hope, Andrew J.; Lindsay, Patricia [Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada and Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5S 3E2 (Canada); Pekar, Vladimir [Philips Healthcare, Markham, Ontario L6C 2S3 (Canada)

    2015-04-15

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume
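
    A rough sketch of the learning component is given below; the feature dimensionality, the synthetic training data, the 5-degree angular grid and the fixed number of selected beams are all illustrative assumptions, and the published method additionally adjusts the selected angles for the learned inter-beam dependencies.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      # Illustrative stand-in data: rows = candidate beam angles from past plans,
      # columns = anatomical features (e.g., target-to-OAR distances, overlap volumes),
      # y = a score derived from where clinicians placed beams in approved plans.
      rng = np.random.default_rng(7)
      X_train = rng.normal(size=(2000, 12))
      y_train = rng.uniform(size=2000)

      model = RandomForestRegressor(n_estimators=200, random_state=0)
      model.fit(X_train, y_train)

      # For a new patient, score every candidate gantry angle and keep the top-ranked ones.
      candidate_features = rng.normal(size=(72, 12))      # one row per 5-degree increment
      beam_scores = model.predict(candidate_features)
      selected_angles = np.argsort(beam_scores)[::-1][:6] * 5   # top 6 angles, in degrees
      print(selected_angles)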

  2. Generator. Generator

    Energy Technology Data Exchange (ETDEWEB)

    Knoedler, R.; Bossmann, H.P.

    1992-03-12

    The invention refers to a thermo-electric generator, whose main part is a sodium concentration cell. In conventional thermo-electric generators of this kind, the sodium moving from a hot space to a colder space must be transported back to the hot space via a circulation pipe and a pump. The purpose of the invention is to avoid the disadvantages of this return transport. According to the invention, the thermo-electric generator is supported so that it can rotate, so that the position of each space relative to its propinquity to the heat source can be changed at any time.

  3. Automatic transmission vehicle injuries.

    Science.gov (United States)

    Fidler, M

    1973-04-07

    Four drivers sustained severe injuries when run down by their own automatic cars while adjusting the carburettor or throttle linkages. The transmission had been left in the "Drive" position and the engine was idling. This accident is easily avoidable.

  4. Automatic Payroll Deposit System.

    Science.gov (United States)

    Davidson, D. B.

    1979-01-01

    The Automatic Payroll Deposit System in Yakima, Washington's Public School District No. 7, directly transmits each employee's salary amount for each pay period to a bank or other financial institution. (Author/MLF)

  5. Automaticity or active control

    DEFF Research Database (Denmark)

    Tudoran, Ana Alina; Olsen, Svein Ottar

    aspects of the construct, such as routine, inertia, automaticity, or very little conscious deliberation. The data consist of 2962 consumers participating in a large European survey. The results show that habit strength significantly moderates the association between satisfaction and action loyalty and, respectively, between intended loyalty and action loyalty. At high levels of habit strength, consumers are more likely to free up cognitive resources and incline the balance from controlled to routine and automatic-like responses.

  6. Neural Bases of Automaticity.

    Science.gov (United States)

    Servant, Mathieu; Cassey, Peter; Woodman, Geoffrey F; Logan, Gordon D

    2017-09-21

    Automaticity allows us to perform tasks in a fast, efficient, and effortless manner after sufficient practice. Theories of automaticity propose that, across practice, processing transitions from being controlled by working memory to being controlled by long-term memory retrieval. Recent event-related potential (ERP) studies have sought to test this prediction; however, these experiments did not use the canonical paradigms used to study automaticity. Specifically, automaticity is typically studied using practice regimes with consistent mapping between targets and distractors and spaced practice with individual targets, features that these previous studies lacked. The aim of the present work was to examine whether the practice-induced shift from working memory to long-term memory inferred from subjects' ERPs is observed under the conditions in which automaticity is traditionally studied. We found that to be the case in 3 experiments, firmly supporting the predictions of these theories. In addition, we found that the temporal distribution of practice (massed vs. spaced) modulates the shape of learning curves. The ERP data revealed that the switch to long-term memory is slower for spaced than for massed practice, suggesting that memory systems are used in a strategic manner. This finding provides new constraints for theories of learning and automaticity. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Hydride generation coupled to microfunnel-assisted headspace liquid-phase microextraction for the determination of arsenic with UV-Vis spectrophotometry.

    Science.gov (United States)

    Hashemniaye-Torshizi, Reihaneh; Ashraf, Narges; Arbab-Zavar, Mohammad Hossein

    2014-12-01

    In this research, a microfunnel-assisted headspace liquid-phase microextraction technique has been used in combination with hydride generation to determine arsenic (As) by UV-Vis spectrophotometry. The method is based on the reduction of As to arsine (AsH₃) in acidic media by sodium tetrahydroborate (NaBH₄), followed by its subsequent reaction with silver diethyldithiocarbamate (AgDDC) to give a complex absorbing at 510 nm. The complexing reagent (AgDDC) was dissolved in a 1:1 (by volume) chloroform/chlorobenzene microdroplet and exposed to the generated gaseous arsine via a reversed microfunnel in the headspace of the sample solution. Several operating parameters affecting the performance of the method have been examined and optimized. Acetonitrile was added to the working samples as a sensitivity-enhancement agent. Under the optimized operating conditions, the detection limit was measured to be 0.2 ng mL⁻¹ (based on the 3s_b/m criterion, n_b = 8), and the calibration curve was linear in the range of 0.5-12 ng mL⁻¹. The relative standard deviation for eight replicate measurements was 1.9%. Also, the effects of several potential interferences have been studied. The accuracy of the method was validated through the analysis of the JR-1 geological standard reference material. The method has been successfully applied for the determination of arsenic in raw and spiked soft drink and water samples, with recoveries ranging from 91 to 106%.

  8. Computer Vision Assisted Virtual Reality Calibration

    Science.gov (United States)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  9. Emotional characters for automatic plot creation

    NARCIS (Netherlands)

    Theune, Mariet; Rensen, S.; op den Akker, Hendrikus J.A.; Heylen, Dirk K.J.; Nijholt, Antinus; Göbel, S.; Spierling, U.; Hoffmann, A.; Iurgel, I.; Schneider, O.; Dechau, J.; Feix, A.

    The Virtual Storyteller is a multi-agent framework for automatic story generation. In this paper we describe how plots emerge from the actions of semi-autonomous character agents, focusing on the influence of the characters’ emotions on plot development.

  10. Open-Source Assisted Laboratory Automation through Graphical User Interfaces and 3D Printers: Application to Equipment Hyphenation for Higher-Order Data Generation.

    Science.gov (United States)

    Siano, Gabriel G; Montemurro, Milagros; Alcaráz, Mirta R; Goicoechea, Héctor C

    2017-10-17

    Higher-order data generation implies some automation challenges, which are mainly related to the hidden programming languages and electronic details of the equipment. When techniques and/or equipment hyphenation are the key to obtaining higher-order data, the required simultaneous control of them demands funds for new hardware, software, and licenses, in addition to very skilled operators. In this work, we present Design of Inputs-Outputs with Sikuli (DIOS), a free and open-source code program that provides a general framework for the design of automated experimental procedures without prior knowledge of programming or electronics. Basically, instruments and devices are considered as nodes in a network, and every node is associated both with physical and virtual inputs and outputs. Virtual components, such as graphical user interfaces (GUIs) of equipment, are handled by means of image recognition tools provided by Sikuli scripting language, while handling of their physical counterparts is achieved using an adapted open-source three-dimensional (3D) printer. Two previously reported experiments of our research group, related to fluorescence matrices derived from kinetics and high-performance liquid chromatography, were adapted to be carried out in a more automated fashion. Satisfactory results, in terms of analytical performance, were obtained. Similarly, advantages derived from open-source tools assistance could be appreciated, mainly in terms of lesser intervention of operators and cost savings.
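
    By way of illustration, a SikuliX-style (Jython) fragment of the kind DIOS builds on is sketched below; the screenshot file names, timeouts and export step are hypothetical and do not correspond to the actual DIOS scripts.

      # Hypothetical SikuliX (Jython) fragment: image names and timings are assumptions.
      wait("acquire_button.png", 30)        # wait until the instrument GUI is ready
      click("acquire_button.png")           # start one acquisition
      wait("run_finished.png", 600)         # recognize the "finished" state on screen
      click("export_menu.png")
      type("kinetics_run_01" + Key.ENTER)   # name the exported data file

    The appeal of this approach, as the abstract notes, is that the equipment's GUI is driven purely through image recognition, so no access to the vendor's programming interface or electronics is required.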

  11. Shape-Controlled Generation of Gold Nanoparticles Assisted by Dual-Molecules: The Development of Hydrogen Peroxide and Oxidase-Based Biosensors

    Directory of Open Access Journals (Sweden)

    Chifang Peng

    2014-01-01

    Full Text Available With the assistance of two molecules, 2-(N-morpholino)ethanesulfonic acid (MES) and sodium citrate, gold nanoparticles (GNPs) with different shapes can be generated in the H₂O₂-mediated reduction of chloroauric acid. This one-pot reaction can be employed to sensitively detect H₂O₂, to probe substrates or enzymes in oxidase-based reactions, and to prepare branched GNPs controllably. By the naked eye, 20 μM H₂O₂, 0.1 μM glucose, and 0.26 U/mL catalase could be differentiated, respectively. By spectrophotometer, the detection limits of H₂O₂, glucose, and catalase were 1.0 μM, 0.01 μM, and 0.03 U/mL, respectively, and the linear detection ranges were 5.0-400 μM, 0.01-0.3 mM, and 0.03-0.78 U/mL, respectively. The proposed "dual-molecule assist" strategy probably paves a new way for the fabrication of nanosensors based on the growth of anisotropic metal nanoparticles, and the developed catalase sensor could probably be utilized to fabricate ultrasensitive ELISA methods for various analytes.

  12. Generator. Generator

    Energy Technology Data Exchange (ETDEWEB)

    Bossmann, H.P.; Knoedler, R.

    1992-03-12

    The invention refers to a thermo-electric generator, which contains sodium as the means of heat transport. The sodium moves from the space of higher temperature through a space into the space of lower temperature. One can do without a pump for transporting the sodium back from the space of lower temperature to the space of higher temperature, as the thermo-electric generator can rotate around an axis. It is therefore possible to interchange the position of the two spaces relative to the heat source.

  13. Automatic Keyword Extraction from Individual Documents

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.; Cowley, Wendy E.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
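
    A much-simplified sketch of the idea follows: candidate keywords are maximal runs of non-stopwords, scored by summing each member word's degree-to-frequency ratio. The stop-word list, tokenizer and scoring details below are illustrative and not the exact configuration described in the paper.

      import re
      from collections import defaultdict

      STOPWORDS = {"a", "an", "and", "are", "as", "at", "by", "for", "from", "in",
                   "is", "of", "on", "or", "the", "this", "to", "with", "we"}

      def extract_keywords(text, top_k=5):
          words = re.findall(r"[a-zA-Z][a-zA-Z-]*", text.lower())
          # Candidate phrases are maximal runs of non-stopwords.
          phrases, current = [], []
          for w in words:
              if w in STOPWORDS:
                  if current:
                      phrases.append(current)
                      current = []
              else:
                  current.append(w)
          if current:
              phrases.append(current)

          freq, degree = defaultdict(int), defaultdict(int)
          for phrase in phrases:
              for w in phrase:
                  freq[w] += 1
                  degree[w] += len(phrase)   # word co-occurs with its whole phrase

          def score(phrase):
              return sum(degree[w] / freq[w] for w in phrase)

          ranked = sorted(phrases, key=score, reverse=True)
          return [" ".join(p) for p in ranked[:top_k]]

      print(extract_keywords("Automatic keyword extraction selects sequences of words "
                             "from individual documents without corpus-level statistics."))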

  14. Accessibility assessment of assistive technology for the hearing impaired.

    Science.gov (United States)

    Áfio, Aline Cruz Esmeraldo; Carvalho, Aline Tomaz de; Caravalho, Luciana Vieira de; Silva, Andréa Soares Rocha da; Pagliuca, Lorita Marlena Freitag

    2016-01-01

    To assess the automatic accessibility of assistive technology in online courses for the hearing impaired, an evaluation study was conducted, guided by the Assessment and Maintenance step proposed in the Model of Development of Digital Educational Material. The software Assessor and Simulator for the Accessibility of Sites (ASES) was used to analyze the online course "Education on Sexual and Reproductive Health: the use of condoms" according to the accessibility standards of national and international websites. An error report generated by the program identified, in each didactic module, one error and two warnings related to two international principles, and six warnings related to six national recommendations. The warnings relevant to hearing-impaired people were corrected, and the course was considered accessible by automatic assessment. We concluded that the pages of the course were considered, by the software used, appropriate to the standards of web accessibility.

  15. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his honor in the Higher-Order and Symbolic Computation Journal in the years 2003 and 2005. Among them there are two papers by Bob: (i) a retrospective view of his research lines, and (ii) a proposal for future studies in the area of automatic program derivation. The book also includes some papers by members of the IFIP Working Group 2.1 of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers...

  16. Automatic Program Development

    DEFF Research Database (Denmark)

    Automatic Program Development is a tribute to Robert Paige (1947-1999), our accomplished and respected colleague, and moreover our good friend, whose untimely passing was a loss to our academic and research community. We have collected the revised, updated versions of the papers published in his honor. The book also includes some papers by members of the IFIP Working Group 2.1 of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers...

  17. Automatic text summarization

    CERN Document Server

    Torres Moreno, Juan Manuel

    2014-01-01

    This new textbook examines the motivations and the different algorithms for automatic document summarization (ADS). We present a recent state of the art. The book shows the main problems of ADS, the difficulties, and the solutions provided by the community. It presents recent advances in ADS, as well as current applications and trends. The approaches are statistical, linguistic and symbolic. Several examples are included in order to clarify the theoretical concepts. The books currently available in the area of automatic document summarization are not recent. Powerful algorithms have been developed...

  18. Characterization of on-target generated tryptic peptides from Giberella zeae conidia spore proteins by means of matrix-assisted laser desorption/ionization mass spectrometry.

    Science.gov (United States)

    Dong, Hongjuan; Marchetti-Deschmann, Martina; Allmaier, Günter

    2014-01-01

    Traditionally, characterization of microbial proteins is performed by a complex sequence of steps, with the final step being either Edman sequencing or mass spectrometry, which generally takes several weeks or months to complete. In this work, we propose a strategy for the characterization of tryptic peptides derived from Giberella zeae (anamorph: Fusarium graminearum) proteins in parallel to intact cell mass spectrometry (ICMS), in which no complicated and time-consuming steps are needed. Experimentally, after a simple washing treatment of the spores, aliquots of the intact G. zeae macroconidia spore solution were deposited twice onto one MALDI (matrix-assisted laser desorption ionization) mass spectrometry (MS) target (two spots). One spot was used for ICMS and the second spot was subjected to a brief on-target digestion with bead-immobilized or non-immobilized trypsin. Subsequently, one spot was analyzed immediately by MALDI MS in the linear mode (ICMS), whereas the second spot containing the digested material was investigated by MALDI MS in the reflectron mode ("peptide mass fingerprint") followed by protonated peptide selection for MS/MS (post-source decay (PSD) fragment ion) analysis. Based on the fragment ions of selected tryptic peptides, a complete or partial amino acid sequence was generated by manual de novo sequencing. These sequence data were used for homology searches for protein identification. Finally, four different peptides of varying abundances were identified successfully, allowing verification that our desorbed/ionized surface compounds were indeed derived from proteins. The presence of three different proteins could be established unambiguously. Interestingly, one of these proteins belongs to the ribosomal superfamily, which indicates that not only surface-associated proteins were digested. This strategy minimized the amount of time and labor required for obtaining deeper information on spore preparations within the

  19. Prevention of vascular dysfunction and arterial hypertension in mice generated by assisted reproductive technologies by addition of melatonin to culture media.

    Science.gov (United States)

    Rexhaj, Emrush; Pireva, Agim; Paoloni-Giacobino, Ariane; Allemann, Yves; Cerny, David; Dessen, Pierre; Sartori, Claudio; Scherrer, Urs; Rimoldi, Stefano F

    2015-10-01

    Assisted reproductive technologies (ART) induce vascular dysfunction in humans and mice. In mice, ART-induced vascular dysfunction is related to epigenetic alteration of the endothelial nitric oxide synthase (eNOS) gene, resulting in decreased vascular eNOS expression and nitrite/nitrate synthesis. Melatonin is involved in epigenetic regulation, and its administration to sterile women improves the success rate of ART. We hypothesized that addition of melatonin to culture media may prevent ART-induced epigenetic and cardiovascular alterations in mice. We, therefore, assessed mesenteric-artery responses to acetylcholine and arterial blood pressure, together with DNA methylation of the eNOS gene promoter in vascular tissue and nitric oxide plasma concentration, in 12-wk-old ART mice generated with and without addition of melatonin to culture media and in control mice. As expected, acetylcholine-induced mesenteric-artery dilation was impaired (P = 0.008 vs. control) and mean arterial blood pressure increased (109.5 ± 3.8 vs. 104.0 ± 4.7 mmHg, P = 0.002, ART vs. control) in ART compared with control mice. These alterations were associated with altered DNA methylation of the eNOS gene promoter. Addition of melatonin to the culture media prevented eNOS dysmethylation (P = 0.005 vs. ART + vehicle) and normalized nitric oxide plasma concentration (23.1 ± 14.6 μM, P = 0.002 vs. ART + vehicle) and mesenteric-artery responsiveness to acetylcholine. In conclusion, addition of melatonin to culture media prevents ART-induced vascular dysfunction. We speculate that this approach will also allow preventing ART-induced premature atherosclerosis in humans. Copyright © 2015 the American Physiological Society.

  20. Analysis of the Air Flow Generated by an Air-Assisted Sprayer Equipped with Two Axial Fans Using a 3D Sonic Anemometer

    Directory of Open Access Journals (Sweden)

    Javier Aguirre

    2012-06-01

    Full Text Available The flow of air generated by a new design of air-assisted sprayer equipped with two axial fans of reversed rotation was analyzed. For this goal, a 3D sonic anemometer was used (accuracy: 1.5%; measurement range: 0 to 45 m/s). The study was divided into a static test and a dynamic test. During the static test, the air velocity in the working vicinity of the sprayer was measured considering the following machine configurations: (1) one activated fan regulated at three air flows (machine working as a traditional sprayer); (2) two activated fans regulated at three air flows for each fan. In the static test 72 measurement points were considered. The location of the measurement points was as follows: left and right sides of the sprayer; three sections of measurement (A, B and C); three measurement distances from the shaft of the machine (1.5 m, 2.5 m and 3.5 m); and four measurement heights (1 m, 2 m, 3 m and 4 m). The static test results showed significant differences in the module and the vertical angle of the air velocity vector as a function of the regulation of the sprayer. In the dynamic test, the air velocity was measured at 2.5 m from the axis of the sprayer considering four measurement heights (1 m, 2 m, 3 m and 4 m). In this test, the sprayer settings were: one or two activated fans; one air flow for each fan; forward speed of 2.8 km/h. The use of one fan (back) or two fans (back and front) produced significant differences in the duration of the presence of wind at the measurement point and in the direction of the air velocity vector. The module of the air velocity vector was not affected by the number of activated fans.

  1. Ultrasound- and microwave-assisted extractions followed by hydride generation inductively coupled plasma optical emission spectrometry for lead determination in geological samples.

    Science.gov (United States)

    Welna, Maja; Borkowska-Burnecka, Jolanta; Popko, Malgorzata

    2015-11-01

    Following the current trend of simplified sample pretreatment before analysis, we evaluated a procedure for the determination of Pb in calcium-rich materials such as dolomites after ultrasound- or microwave-assisted extraction with diluted acids, using hydride generation inductively coupled plasma optical emission spectrometry (HG-ICP-OES). The corresponding Pb hydride was generated in the reaction of an acidified sample solution with NaBH₄ after pre-oxidation of Pb(II) to Pb(IV) by K₃[Fe(CN)₆]. Several chemical (acidic media: HCl, HNO₃ or CH₃COOH; concentration of the reductant; type and concentration of the oxidizing agent) and physical (reagent flow rates, reaction coil length) parameters affecting the efficiency of plumbane formation were optimized in order to improve the detectability of Pb by HG-ICP-OES. A limitation of the method arising from matrix effects was pointed out. Employing Pb separation by the HG technique allows a significant reduction of the interferences caused by sample matrix constituents (mainly Ca and Mg); nevertheless, they could not be completely overcome, hence calibration based on the standard addition method was recommended for Pb quantification in dolomites. Under the selected conditions, i.e. 0.3 mol L⁻¹ HCl, HNO₃ or CH₃COOH, 1.5% NaBH₄ and 3.0% K₃[Fe(CN)₆], limits of detection (LODs) between 2.3-5.6 μg L⁻¹ (3.4-6.8 μg L⁻¹ considering matrix effects) and a precision below 5% were achieved. The accuracy of the procedure was verified by analysis of certified reference materials (NCS DC70308 (Carbonate Rock) and NIST 14000 (Bone Ash)) and by a recovery test, with satisfactory Pb recoveries ranging between 94-108% (CRM analysis) and 92-114% (standard addition method). The applicability of the proposed method was demonstrated by the determination of Pb in dolomites used by different fertiliser factories. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Automatic temperature controlled retinal photocoagulation

    Science.gov (United States)

    Schlott, Kerstin; Koinzer, Stefan; Ptaszynski, Lars; Bever, Marco; Baade, Alex; Roider, Johann; Birngruber, Reginald; Brinkmann, Ralf

    2012-06-01

    Laser coagulation is a treatment method for many retinal diseases. Due to variations in fundus pigmentation and light scattering inside the eye globe, different lesion strengths are often achieved. The aim of this work is to realize an automatic feedback algorithm to generate the desired lesion strengths by controlling the retinal temperature increase with the irradiation time. Optoacoustics afford non-invasive retinal temperature monitoring during laser treatment. A 75 ns/523 nm Q-switched Nd:YLF laser was used to excite the temperature-dependent pressure amplitudes, which were detected at the cornea by an ultrasonic transducer embedded in a contact lens. A 532 nm continuous-wave Nd:YAG laser served for photocoagulation. The ED50 temperatures, for which the probability of ophthalmoscopically visible lesions after one hour in vivo in rabbits was 50%, varied from 63 °C for 20 ms to 49 °C for 400 ms. Arrhenius parameters were extracted as ΔE = 273 kJ mol⁻¹ and A = 3·10⁴⁴ s⁻¹. Control algorithms for mild and strong lesions were developed, which led to average lesion diameters of 162 ± 34 μm and 189 ± 34 μm, respectively. It could be demonstrated that the sizes of the automatically controlled lesions were largely independent of the treatment laser power and the retinal pigmentation.
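
    Threshold lesion formation in this kind of study is conventionally described by an Arrhenius damage integral; a standard formulation with the activation energy ΔE and frequency factor A quoted above is sketched below (whether the authors apply exactly the unit-threshold criterion is an assumption here):

      \[
        \Omega(\tau) \;=\; A \int_0^{\tau} \exp\!\left( -\frac{\Delta E}{R\,T(t)} \right) dt ,
        \qquad \Omega(\tau) \ge 1 \ \text{taken as the threshold for a visible lesion,}
      \]

    where T(t) is the absolute retinal temperature during the irradiation time \tau and R is the gas constant.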

  3. Automatic Complexity Analysis

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    1989-01-01

    One way to analyse programs is to to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of input. We describe a system to derive such time bounds automatically using abstract...
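
    As a small worked example of such a time bound (an assumed illustration, not one taken from the cited work): for naive list reversal defined by rev([]) = [] and rev(x:xs) = append(rev(xs), [x]), counting the cons operations performed by append gives the recurrence and bound

      \[
        T(0) = 0, \qquad T(n) = T(n-1) + (n-1)
        \;\;\Longrightarrow\;\; T(n) = \frac{n(n-1)}{2} \in O(n^2),
      \]

    an upper bound on the computation cost as a function of the input size n.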

  4. Exploring Automatization Processes.

    Science.gov (United States)

    DeKeyser, Robert M.

    1996-01-01

    Presents the rationale for and the results of a pilot study attempting to document in detail how automatization takes place as the result of different kinds of intensive practice. Results show that reaction times and error rates gradually decline with practice, and the practice effect is skill-specific. (36 references) (CK)

  5. Automaticity and Reading: Perspectives from the Instance Theory of Automatization.

    Science.gov (United States)

    Logan, Gordon D.

    1997-01-01

    Reviews recent literature on automaticity, defining the criteria that distinguish automatic processing from non-automatic processing, and describing modern theories of the underlying mechanisms. Focuses on evidence from studies of reading and draws implications from theory and data for practical issues in teaching reading. Suggests that…

  6. The influence of age, hearing, and working memory on the speech comprehension benefit derived from an automatic speech recognition system.

    Science.gov (United States)

    Zekveld, Adriana A; Kramer, Sophia E; Kessens, Judith M; Vlaming, Marcel S M G; Houtgast, Tammo

    2009-04-01

    The aim of the current study was to examine whether partly incorrect subtitles that are automatically generated by an Automatic Speech Recognition (ASR) system, improve speech comprehension by listeners with hearing impairment. In an earlier study (Zekveld et al. 2008), we showed that speech comprehension in noise by young listeners with normal hearing improves when presenting partly incorrect, automatically generated subtitles. The current study focused on the effects of age, hearing loss, visual working memory capacity, and linguistic skills on the benefit obtained from automatically generated subtitles during listening to speech in noise. In order to investigate the effects of age and hearing loss, three groups of participants were included: 22 young persons with normal hearing (YNH, mean age = 21 years), 22 middle-aged adults with normal hearing (MA-NH, mean age = 55 years) and 30 middle-aged adults with hearing impairment (MA-HI, mean age = 57 years). The benefit from automatic subtitling was measured by Speech Reception Threshold (SRT) tests (Plomp & Mimpen, 1979). Both unimodal auditory and bimodal audiovisual SRT tests were performed. In the audiovisual tests, the subtitles were presented simultaneously with the speech, whereas in the auditory test, only speech was presented. The difference between the auditory and audiovisual SRT was defined as the audiovisual benefit. Participants additionally rated the listening effort. We examined the influences of ASR accuracy level and text delay on the audiovisual benefit and the listening effort using a repeated measures General Linear Model analysis. In a correlation analysis, we evaluated the relationships between age, auditory SRT, visual working memory capacity and the audiovisual benefit and listening effort. The automatically generated subtitles improved speech comprehension in noise for all ASR accuracies and delays covered by the current study. Higher ASR accuracy levels resulted in more benefit obtained

  7. Solar Generator

    Science.gov (United States)

    1985-01-01

    The Vanguard I dish-Stirling module program, initiated in 1982, produced the Vanguard I module, a commercial prototype erected by the Advanco Corporation. The module, which automatically tracks the sun, combines JPL mirrored-concentrator technology, an advanced Stirling Solar II engine/generator, and a low-cost, microprocessor-controlled parabolic dish. Vanguard I has a 28% sunlight-to-electricity conversion efficiency. If tests continue to prove the system effective, Advanco will construct a generating plant to sell electricity to local utilities. An agreement has also been signed with McDonnell Douglas to manufacture a similar module.

  8. Automatic Differentiation and Deep Learning

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Statistical learning has been getting more and more interest from the particle-physics community in recent times, with neural networks and gradient-based optimization being a focus. In this talk we shall discuss three things: automatic differentiation tools, i.e. tools to quickly build DAGs of computation that are fully differentiable, focusing on one such tool, "PyTorch"; easy deployment of trained neural networks into large systems with many constraints, for example deploying a model at the reconstruction phase, where the neural network has to be integrated into CERN's bulk data-processing, C++-only environment; and some recent models in deep learning for segmentation and generation that might be useful for particle-physics problems.
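
    As a minimal illustration of the first topic (not code from the talk), the PyTorch snippet below builds a small differentiable computation DAG and obtains gradients by reverse-mode automatic differentiation.

```python
import torch

# Two leaf tensors; requires_grad=True tells autograd to track them in the DAG.
w = torch.tensor([0.5, -1.2], requires_grad=True)
x = torch.tensor([2.0, 3.0])

# Each operation adds a node to the computation graph.
y = torch.tanh(w @ x)          # forward pass
loss = (y - 1.0) ** 2

loss.backward()                # reverse-mode automatic differentiation
print(w.grad)                  # d(loss)/dw, accumulated on the leaf tensor
```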

  9. Automatic Epileptic Seizure Onset Detection Using Matching Pursuit

    DEFF Research Database (Denmark)

    Sorensen, Thomas Lynggaard; Olsen, Ulrich L.; Conradsen, Isa

    2010-01-01

    An automatic alarm system for detecting epileptic seizure onsets could be of great assistance to patients and medical staff. A novel approach is proposed using the Matching Pursuit algorithm as a feature extractor combined with the Support Vector Machine (SVM) as a classifier for this purpose. Th...
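
    The abstract is truncated, so the following is only a generic sketch of the two ingredients it names: a greedy matching-pursuit decomposition over a fixed dictionary, whose coefficients could then be fed to an SVM. The dictionary, epoch length and iteration count are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy MP: repeatedly pick the atom most correlated with the residual.

    dictionary: (n_atoms, n_samples) array of unit-norm atoms.
    Returns the coefficient vector, usable as a feature vector for a classifier.
    """
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        corr = dictionary @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[k]
    return coeffs

# Toy usage with a random (hypothetical) dictionary and a 1 s EEG epoch at 256 Hz.
rng = np.random.default_rng(0)
atoms = rng.standard_normal((64, 256))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
epoch = rng.standard_normal(256)
features = matching_pursuit(epoch, atoms)
```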

  10. Automatic Dialogue Scoring for a Second Language Learning System

    Science.gov (United States)

    Huang, Jin-Xia; Lee, Kyung-Soon; Kwon, Oh-Woog; Kim, Young-Kil

    2016-01-01

    This paper presents an automatic dialogue scoring approach for a Dialogue-Based Computer-Assisted Language Learning (DB-CALL) system, which helps users learn language via interactive conversations. The system produces overall feedback according to dialogue scoring to help the learner know which parts should be more focused on. The scoring measures…

  11. Comparing different approaches for automatic pronunciation error detection

    NARCIS (Netherlands)

    Strik, Helmer; Truong, Khiet Phuong; de Wet, Febe; Cucchiarini, Catia

    2009-01-01

    One of the biggest challenges in designing computer assisted language learning (CALL) applications that provide automatic feedback on pronunciation errors consists in reliably detecting the pronunciation errors at such a detailed level that the information provided can be useful to learners. In our

  12. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, i.e. floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and runs in real time. The analysis system provides high accuracy and has been evaluated on a public website that, on average, serves more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.

  13. Automatic Ultrasound Scanning

    DEFF Research Database (Denmark)

    Moshavegh, Ramin

    Medical ultrasound has been a widely used imaging modality in healthcare platforms for examination, diagnostic purposes, and for real-time guidance during surgery. However, despite the recent advances, medical ultrasound remains the most operator-dependent imaging modality, as it heavily relies...... on the user adjustments on the scanner interface to optimize the scan settings. This explains the huge interest in the subject of this PhD project entitled “AUTOMATIC ULTRASOUND SCANNING”. The key goals of the project have been to develop automated techniques to minimize the unnecessary settings...... on the scanners, and to improve the computer-aided diagnosis (CAD) in ultrasound by introducing new quantitative measures. Thus, four major issues concerning automation of the medical ultrasound are addressed in this PhD project. They touch upon gain adjustments in ultrasound, automatic synthetic aperture image...

  14. Automatic speech recognition

    Science.gov (United States)

    Espy-Wilson, Carol

    2005-04-01

    Great strides have been made in the development of automatic speech recognition (ASR) technology over the past thirty years. Most of this effort has been centered around the extension and improvement of Hidden Markov Model (HMM) approaches to ASR. Current commercially-available and industry systems based on HMMs can perform well for certain situational tasks that restrict variability such as phone dialing or limited voice commands. However, the holy grail of ASR systems is performance comparable to humans: in other words, the ability to automatically transcribe unrestricted conversational speech spoken by an infinite number of speakers under varying acoustic environments. This goal is far from being reached. Key to the success of ASR is effective modeling of variability in the speech signal. This tutorial will review the basics of ASR and the various ways in which our current knowledge of speech production, speech perception and prosody can be exploited to improve robustness at every level of the system.

  15. Automatic Language Identification

    Science.gov (United States)

    2000-08-01

    [Abstract garbled in the source record. Recoverable fragments indicate that the report discusses how automatic systems distinguish one language from another when hundreds of input languages must be supported, using a training algorithm that builds one model per language (e.g. French, German, Spanish) from training speech utterances; vowel-like segments of each utterance are located automatically, and feature vectors are normalized to be insensitive to overall amplitude and pitch.]

  16. Towards Automatic Threat Recognition

    Science.gov (United States)

    2006-12-01

    Presentation by Dr. Ulrich Schade, Joachim Biermann and Miłosław Frey, FGAN – FKIE (Forschungsinstitut für Kommunikation, Informationsverarbeitung und Ergonomie), Germany. Only front matter and outline headings ("... as Processing Principle", "Back to the Example", "Conclusion and Outlook") are recoverable from the garbled record; no abstract text survives.

  17. Automatic food decisions

    DEFF Research Database (Denmark)

    Mueller Loose, Simone

    Consumers' food decisions are to a large extent shaped by automatic processes, which are either internally directed through learned habits and routines or externally influenced by context factors and visual information triggers. Innovative research methods such as eye tracking, choice experiments...... and food diaries allow us to better understand the impact of unconscious processes on consumers' food choices. Simone Mueller Loose will provide an overview of recent research insights into the effects of habit and context on consumers' food choices....

  18. Automatically predicting mood from expressed emotions

    OpenAIRE

    Katsimerou, C.

    2016-01-01

    Affect-adaptive systems have the potential to assist users that experience systematically negative moods. This thesis aims at building a platform for predicting automatically a person’s mood from his/her visual expressions. The key word is mood, namely a relatively long-term, stable and diffused affective state, as opposed to the short-term, volatile and intense emotion. This is emphasized, because mood and emotion often tend to be used as synonyms. However, since their differences are well e...

  19. Automatization of lexicographic work

    Directory of Open Access Journals (Sweden)

    Iztok Kosem

    2013-12-01

    Full Text Available A new approach to lexicographic work, in which the lexicographer is seen more as a validator of the choices made by the computer, was recently envisaged by Rundell and Kilgarriff (2011). In this paper, we describe an experiment using such an approach during the creation of the Slovene Lexical Database (Gantar, Krek, 2011). The corpus data, i.e. grammatical relations, collocations, examples, and grammatical labels, were automatically extracted from the 1.18-billion-word Gigafida corpus of Slovene. The evaluation of the extracted data consisted of comparing the time spent writing a manual entry with that spent on a (semi-)automatic entry, and identifying potential improvements in the extraction algorithm and in the presentation of data. An important finding was that the automatic approach was far more effective than the manual approach, without any significant loss of information. Based on our experience, we propose a slightly revised version of the approach envisaged by Rundell and Kilgarriff, in which the validation of data is left to lower-level linguists or crowd-sourcing, whereas high-level tasks such as meaning description remain the domain of lexicographers. Such an approach indeed reduces the scope of the lexicographer's work; however, it also makes it possible to bring the content to the users more quickly.

  20. Teste de DNA para verificação de parentesco em cães: avaliação do método não automatizado com o auxílio do primer CMR S DNA test for parentage verification in dogs: evaluation of the non-automatized method with assistance of the primer CMR S

    Directory of Open Access Journals (Sweden)

    P.F. Oliveira

    2002-10-01

    Full Text Available To evaluate the precision of DNA tests using the non-automatized technique for individual identification and parentage testing, 105 Rottweiler dogs were studied using the primer CMR S. The sample was composed of 39 animals belonging to 11 complete families and their progenies, and 66 unrelated individuals up to the second generation, derived from kennels located in the states of Minas Gerais and São Paulo. The CMR S primer was used for the Polymerase Chain Reaction (PCR). The results showed the inefficiency of the technique, even when analyzed with an automated gel analysis system, and the impossibility of its commercial use, since it does not permit the storage of data for subsequent use.

  1. Automatic Discovery and Inferencing of Complex Bioinformatics Web Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A; Rocco, D; Critchlow, T; Buttler, D

    2003-12-22

    The World Wide Web provides a vast resource to genomics researchers in the form of web-based access to distributed data sources--e.g. BLAST sequence homology search interfaces. However, the process for seeking the desired scientific information is still very tedious and frustrating. While there are several known servers on genomic data (e.g., GenBank, EMBL, NCBI) that are shared and accessed frequently, new data sources are created each day in laboratories all over the world. The sharing of these newly discovered genomics results is hindered by the lack of a common interface or data exchange mechanism. Moreover, the number of autonomous genomics sources and their rate of change out-pace the speed at which they can be manually identified, meaning that the available data is not being utilized to its full potential. An automated system that can find, classify, describe and wrap new sources without tedious and low-level coding of source-specific wrappers is needed to assist scientists in accessing hundreds of dynamically changing bioinformatics web data sources through a single interface. A correct classification of any kind of Web data source must address both the capability of the source and the conversation/interaction semantics which is inherent in the design of the Web data source. In this paper, we propose an automatic approach to classify Web data sources that takes into account both the capability and the conversational semantics of the source. The ability to discover the interaction pattern of a Web source leads to increased accuracy in the classification process. At the same time, it facilitates the extraction of process semantics, which is necessary for the automatic generation of wrappers that can interact correctly with the sources.

  2. Identification of fracture zones and its application in automatic bone fracture reduction.

    Science.gov (United States)

    Paulano-Godino, Félix; Jiménez-Delgado, Juan J

    2017-04-01

    The preoperative planning of bone fractures using information from CT scans increases the probability of obtaining satisfactory results, since specialists are provided with additional information before surgery. The reduction of complex bone fractures requires solving a 3D puzzle in order to place each fragment into its correct position. Computer-assisted solutions may aid in this process by identifying the number of fragments and their location, by calculating the fracture zones or even by computing the correct position of each fragment. The main goal of this paper is the development of an automatic method to calculate contact zones between fragments and thus to ease the computation of bone fracture reduction. In this paper, an automatic method to calculate the contact zone between two bone fragments is presented. In a previous step, bone fragments are segmented and labelled from CT images and a point cloud is generated for each bone fragment. The calculated contact zones enable the automatic reduction of complex fractures. To that end, an automatic method to match bone fragments in complex fractures is also presented. The proposed method has been successfully applied in the calculation of the contact zone of 4 different bones from the ankle area. The calculated fracture zones enabled the reduction of all the tested cases using the presented matching algorithm. The performed tests show that the reduction of these fractures using the proposed methods led to a small overlap between fragments. The presented method makes the application of puzzle-solving strategies easier, since it does not obtain the entire fracture zone but the contact area between each pair of fragments. Therefore, it is not necessary to find correspondences between fracture zones and fragments may be aligned two by two. The developed algorithms have been successfully applied in different fracture cases in the ankle area. The small overlapping error obtained in the performed tests

  3. MARZ: Manual and automatic redshifting software

    Science.gov (United States)

    Hinton, S. R.; Davis, Tamara M.; Lidman, C.; Glazebrook, K.; Lewis, G. F.

    2016-04-01

    The Australian Dark Energy Survey (OzDES) is a 100-night spectroscopic survey underway on the Anglo-Australian Telescope using the fibre-fed 2-degree-field (2dF) spectrograph. We have developed a new redshifting application, MARZ, with greater usability, flexibility, and the capacity to analyse a wider range of object types than the RUNZ software package previously used for redshifting spectra from 2dF. MARZ is an open-source, client-based, Javascript web application which provides an intuitive interface and powerful automatic matching capabilities on spectra generated from the AAOmega spectrograph to produce high quality spectroscopic redshift measurements. The software can be run interactively or via the command line, and is easily adaptable to other instruments and pipelines that conform to the current FITS file standard. Behind the scenes, a modified version of the AUTOZ cross-correlation algorithm is used to match input spectra against a variety of stellar and galaxy templates, and automatic matching performance for OzDES spectra has increased from 54% (RUNZ) to 91% (MARZ). Spectra not matched correctly by the automatic algorithm can be easily redshifted manually by cycling automatic results, manual template comparison, or marking spectral features.
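
    As a rough illustration of the template cross-correlation idea (not the modified AUTOZ algorithm itself), the sketch below scans a grid of trial redshifts, resamples a rest-frame template onto the observed wavelength grid, and keeps the redshift with the highest correlation.

```python
import numpy as np

def estimate_redshift(obs_wave, obs_flux, tmpl_wave, tmpl_flux,
                      z_grid=np.linspace(0.0, 1.5, 1501)):
    """Brute-force template cross-correlation over a grid of trial redshifts."""
    obs = (obs_flux - obs_flux.mean()) / obs_flux.std()
    best_z, best_score = 0.0, -np.inf
    for z in z_grid:
        # Redshift the template and resample it onto the observed wavelength grid.
        shifted = np.interp(obs_wave, tmpl_wave * (1.0 + z), tmpl_flux,
                            left=0.0, right=0.0)
        if shifted.std() == 0:
            continue
        score = np.dot(obs, (shifted - shifted.mean()) / shifted.std())
        if score > best_score:
            best_z, best_score = z, score
    return best_z
```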

  4. An Automatic Indirect Immunofluorescence Cell Segmentation System

    Directory of Open Access Journals (Sweden)

    Yung-Kuan Chan

    2014-01-01

    Full Text Available Indirect immunofluorescence (IIF) with HEp-2 cells has been used for the detection of antinuclear autoantibodies (ANA) in systemic autoimmune diseases. ANA testing allows us to scan a broad range of autoantibody entities and to describe them by distinct fluorescence patterns. Automatic inspection of the fluorescence patterns in an IIF image can assist physicians without relevant experience in making a correct diagnosis. How to segment the cells from an IIF image is essential in developing an automatic inspection system for ANA testing. This paper focuses on cell detection and segmentation; an efficient method is proposed for automatically detecting the cells with fluorescence patterns in an IIF image. Cell culture is a process in which cells grow under controlled conditions. Cell counting technology plays an important role in measuring the cell density in a culture tank. Moreover, assessing medium suitability, determining population doubling times, and monitoring cell growth in cultures all require a means of quantifying the cell population. The proposed method can also be used to count the cells in an image taken under a fluorescence microscope.

  5. Improved depth perception with three-dimensional auxiliary display and computer generated three-dimensional panoramic overviews in robot-assisted laparoscopy

    NARCIS (Netherlands)

    Wieringa, F.P.; Bouma, H.; Eendebak, P.T.; Basten, J.P.A. van; Beerlage, H.P.; Smits, G.A.H.J.; Bos, J.E.

    2014-01-01

    In comparison to open surgery, endoscopic surgery offers impaired depth perception and narrower field-of-view. To improve depth perception, the Da Vinci robot offers three-dimensional (3-D) video on the console for the surgeon but not for assistants, although both must collaborate. We improved the

  6. Automatic Seismic Signal Processing

    Science.gov (United States)

    1982-02-04

    [Abstract garbled in the source record. Recoverable front matter: "Automatic Seismic Signal Processing", final technical report SAS-FR-81-04, 4 February 1982, contract F08606-80-C-0021, prepared by Ilkka Noponen, Robert Sax and Steven (surname truncated). A surviving fragment notes that, because the distribution of x was slightly skewed (as also observed by Swindell and Snell, 1977), the median of x was used instead of the average.]

  7. Automatic Program Development

    DEFF Research Database (Denmark)

    by members of the IFIP Working Group 2.1 of which Bob was an active member. All papers are related to some of the research interests of Bob and, in particular, to the transformational development of programs and their algorithmic derivation from formal specifications. Automatic Program Development offers...... a renewed stimulus for continuing and deepening Bob's research visions. A familiar touch is given to the book by some pictures kindly provided to us by his wife Nieba, the personal recollections of his brother Gary and some of his colleagues and friends....

  8. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    Science.gov (United States)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images, due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing methods are extremely time consuming and have a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to counterbalance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach to improve the boundary-searching ability of the live-wire method, reducing the necessary user interaction while maintaining segmentation performance. Based on the results of segmenting 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e. < 80%, when auto-enhancement is applied for live-wire segmentation.

  9. Automatic readout micrometer

    Science.gov (United States)

    Lauritzen, T.

    A measuring system is described for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems, which are susceptible to operator reading errors, and celestial-navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibility of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment, without having the fine adjustment outrun the coarse adjustment, until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.

  10. Preliminary Design Through Graphs: A Tool for Automatic Layout Distribution

    Directory of Open Access Journals (Sweden)

    Carlo Biagini

    2015-02-01

    Full Text Available Diagrams are essential in the preliminary stages of design for understanding distributive aspects and assisting the decision-making process. By drawing a schematic graph, designers can visualize in a synthetic way the relationships between many aspects: functions and spaces, distribution of layouts, space adjacency, influence of traffic flows within a facility layout, and so on. This process can be automated through the use of modern Information and Communication Technology (ICT) tools that allow designers to manage a large quantity of information. The work that we present is part of an on-going research project into how modern parametric software influences decision-making on the basis of automatic and optimized layout distribution. The method involves two phases: the first aims to define the ontological relations between spaces, with particular reference to a specific building typology (rules of aggregation of spaces); the second entails the implementation of these rules through the use of specialist software. The generation of ontological relations begins with the collection of data from historical manuals and analyses of case studies. These analyses aim to generate a "relationship matrix" based on preferences of space adjacency. The phase of implementing the previously defined rules is based on the use of Grasshopper to analyse and visualize different layout configurations. The layout is generated by simulating a process involving the collision of spheres, which represent specific functions of the design program; a sketch of this idea is given below. The spheres are attracted or repelled as a function of the relationship matrix defined above. The layout thus obtained remains in a sort of abstract state, independent of information about the exterior form, but still provides a useful tool for the decision-making process. In addition, preliminary results gathered through the analysis of case studies are presented. These results provide a good variety
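
    The sphere-collision layout idea can be sketched outside Grasshopper as a simple attraction/repulsion relaxation driven by the relationship matrix; the matrix, radii and gain factors below are hypothetical, not the authors' implementation.

```python
import numpy as np

# Hypothetical adjacency-preference matrix for four spaces
# (+1 = should be adjacent, -1 = should be apart, 0 = indifferent).
relation = np.array([[0,  1,  1, -1],
                     [1,  0,  0,  0],
                     [1,  0,  0,  1],
                     [-1, 0,  1,  0]], dtype=float)
radius = np.array([4.0, 2.0, 3.0, 2.5])          # sphere radii ~ room sizes

rng = np.random.default_rng(1)
pos = rng.uniform(0, 20, size=(4, 2))            # random initial layout

for _ in range(2000):                            # simple relaxation loop
    force = np.zeros_like(pos)
    for i in range(4):
        for j in range(4):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d) + 1e-9
            # attraction/repulsion prescribed by the relationship matrix
            force[i] += 0.01 * relation[i, j] * d
            # hard repulsion when spheres overlap (collision)
            overlap = radius[i] + radius[j] - dist
            if overlap > 0:
                force[i] -= 0.1 * overlap * d / dist
    pos += force
print(pos)       # relative room positions for the preliminary layout diagram
```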

  11. Using process mining for automatic support of clinical pathways design.

    Science.gov (United States)

    Fernandez-Llatas, Carlos; Valdivieso, Bernardo; Traver, Vicente; Benedi, Jose Miguel

    2015-01-01

    The creation of tools supporting the automation of the standardization and continuous control of healthcare processes can become a significant aid for clinical experts and healthcare systems willing to reduce variability in clinical practice. Reducing the complexity of designing and deploying standard Clinical Pathways can enhance the possibilities for effective usage of computer-assisted guidance systems for professionals and assure the quality of the provided care. Several technologies have been used in the past to support these activities, but they have not been able to generate the disruptive change required to foster the general adoption of standardization in this domain, due to the high volume of work, resources, and knowledge required to create practical protocols that can be used in practice. This chapter proposes the use of the PALIA algorithm, based on activity-based process mining techniques, as a new technology to infer the actual processes from real execution logs, to be used in the design and quality control of healthcare processes.

  12. AUTOMATIC EXTRACTION OF ROAD MARKINGS FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    H. Ma

    2017-09-01

    Full Text Available Road markings, as critical features in high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth; points with small elevation differences between neighbors are considered to be ground points. The ground points are then partitioned into a set of profiles according to trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with laser distance. The separated points are used as seed points for intensity-based region growing, so as to obtain complete road markings. We use a point-cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres in a city center, our method provides a promising solution to road markings extraction from MLS data.

  13. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    Science.gov (United States)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, as critical features in high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth; points with small elevation differences between neighbors are considered to be ground points. The ground points are then partitioned into a set of profiles according to trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with laser distance. The separated points are used as seed points for intensity-based region growing, so as to obtain complete road markings. We use a point-cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres in a city center, our method provides a promising solution to road markings extraction from MLS data.
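
    A toy version of the seeding and intensity-based region growing described above might look as follows; the range-dependent threshold, search radius and intensity tolerance are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_markings(points, intensity, laser_range, base_thresh=0.6):
    """Toy sketch of seeding plus intensity region growing for road markings.

    points: (N, 3) ground points; intensity in [0, 1]; laser_range in metres.
    """
    # Threshold decreases with range, mimicking the range-dependent intensity jump.
    thresh = base_thresh - 0.002 * laser_range
    seeds = set(np.flatnonzero(intensity > thresh))

    tree = cKDTree(points[:, :2])
    marking, frontier = set(seeds), list(seeds)
    while frontier:
        i = frontier.pop()
        # Grow to nearby points with similar intensity.
        for j in tree.query_ball_point(points[i, :2], r=0.15):
            if j not in marking and abs(intensity[j] - intensity[i]) < 0.1:
                marking.add(j)
                frontier.append(j)
    return np.array(sorted(marking))
```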

  14. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage for the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow-accumulation river network); finally, production was launched. The key points of this work have been managing a big-data environment of more than 160,000 LiDAR data files, the infrastructure to store (up to 40 TB between results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months, as well as software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement of the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
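
    The flow-accumulation criterion mentioned above can be illustrated with a small D8 sketch on a toy elevation grid: each cell drains to its steepest downslope neighbour and candidate channel cells are those whose upstream count exceeds a threshold. This is only a didactic stand-in for the TerraScan/ArcGIS production workflow.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Very small D8 flow-accumulation sketch for a 2D elevation grid."""
    rows, cols = dem.shape
    order = np.dstack(np.unravel_index(np.argsort(dem, axis=None)[::-1], dem.shape))[0]
    acc = np.ones_like(dem, dtype=float)          # each cell contributes itself
    for r, c in order:                            # process from high to low
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target:
            acc[target] += acc[r, c]               # route upstream area downslope
    return acc

dem = np.array([[5, 4, 3],
                [4, 3, 2],
                [3, 2, 1]], dtype=float)
channels = d8_flow_accumulation(dem) > 3          # threshold picks river cells
```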

  15. AUTOMATIC RIVER NETWORK EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    E. N. Maderal

    2016-06-01

    Full Text Available The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage for the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow-accumulation river network); finally, production was launched. The key points of this work have been managing a big-data environment of more than 160,000 LiDAR data files, the infrastructure to store (up to 40 TB between results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months, as well as software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement of the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  16. Automatic photointerpretation via texture and morphology analysis

    Science.gov (United States)

    Tou, J. T.

    1982-01-01

    Computer-based techniques for automatic photointerpretation based upon information derived from texture and morphology analysis of images are discussed. By automatic photointerpretation is meant the determination of semantic descriptions of the content of the images by computer. To perform semantic analysis of morphology, a hierarchical structure of knowledge representation was developed. The simplest elements in a morphology are strokes, which are used to form alphabets. The alphabets are the elements for generating words, which are used to describe the function or property of an object or a region. The words are the elements for constructing sentences, which are used for semantic description of the content of the image. Photointerpretation based upon morphology is then augmented by textural information. Textural analysis is performed using a pixel-vector approach.

  17. Automatic-Control System for Safer Brazing

    Science.gov (United States)

    Stein, J. A.; Vanasse, M. A.

    1986-01-01

    Automatic-control system for radio-frequency (RF) induction brazing of metal tubing reduces probability of operator errors, increases safety, and ensures high-quality brazed joints. Unit combines functions of gas control and electric-power control. Minimizes unnecessary flow of argon gas into work area and prevents electrical shocks from RF terminals. Controller will not allow power to flow from RF generator to brazing head unless work has been firmly attached to head and has actuated micro-switch. Potential shock hazard eliminated. Flow of argon for purging and cooling must be turned on and adjusted before brazing power applied. Provision ensures power not applied prematurely, causing damaged work or poor-quality joints. Controller automatically turns off argon flow at conclusion of brazing so potentially suffocating gas does not accumulate in confined areas.

  18. AUTOMATIC ARCHITECTURAL STYLE RECOGNITION

    Directory of Open Access Journals (Sweden)

    M. Mathias

    2012-09-01

    Full Text Available Procedural modeling has proven to be a very valuable tool in the field of architecture. In the last few years, research has soared to automatically create procedural models from images. However, current algorithms for this process of inverse procedural modeling rely on the assumption that the building style is known. So far, the determination of the building style has remained a manual task. In this paper, we propose an algorithm which automates this process through classification of architectural styles from facade images. Our classifier first identifies the images containing buildings, then separates individual facades within an image and determines the building style. This information could then be used to initialize the building reconstruction process. We have trained our classifier to distinguish between several distinct architectural styles, namely Flemish Renaissance, Haussmannian and Neoclassical. Finally, we demonstrate our approach on various street-side images.

  19. Automatic speech recognition systems

    Science.gov (United States)

    Catariov, Alexandru

    2005-02-01

    This paper presents an analysis of automatic speech recognition (ASR) to find out what the state of the art is in this direction and, eventually, to serve as a starting point for the implementation of a real ASR system. The second chapter of this work describes the structure of a typical speech recognition system and the methods used for each step of the recognition process; in particular, two kinds of speech recognition algorithms are described, namely Dynamic Time Warping (DTW) and Hidden Markov Models (HMM). The work continues with some ASR results, in order to draw conclusions about what needs to be improved and what is most suitable for implementing an ASR system.
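
    As an illustration of the first of the two algorithms mentioned (DTW), the sketch below computes the classic dynamic-programming alignment cost between two sequences of feature vectors, e.g. MFCC frames; the toy templates are random placeholders, not data from the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: compare an utterance against stored word templates.
rng = np.random.default_rng(0)
utterance = rng.standard_normal((40, 13))              # 40 frames of 13 MFCCs
templates = {"yes": rng.standard_normal((35, 13)),
             "no": rng.standard_normal((30, 13))}
best = min(templates, key=lambda w: dtw_distance(utterance, templates[w]))
```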

  20. Dental Assistants

    Science.gov (United States)

    Explore resources for employment and wages by state and area for dental assistants. Compare the job duties, education, job growth, and pay of dental assistants with ...

  1. Assistive Technology

    Science.gov (United States)

    Assistive technology (AT) is any service or tool that helps ... be difficult or impossible. For older adults, such technology may be a walker to improve mobility or ...

  2. Comparison of the effect of medical assistants versus certified athletic trainers on patient volumes and revenue generation in a sports medicine practice.

    Science.gov (United States)

    Pecha, Forrest Q; Xerogeanes, John W; Karas, Spero G; Himes, Megan E; Mines, Brandon A

    2013-07-01

    Research has shown increases in efficiency and productivity by using physician extenders (PEs) in medical practices. Certified athletic trainers (ATCs) that work as PEs in primary care sports medicine and orthopaedic practices improve clinic efficiency. When compared with a medical assistant (MA), the use of an ATC as a PE in a primary care sports medicine practice will result in an increase in patient volume, charges, and collections. Cross-sectional study. For 12 months, patient encounters, charges, and collections were obtained for the practices of 2 primary care sports medicine physicians. Each physician was assisted by an ATC for 6 months and by an MA for 6 months. Eighty full clinic days were examined for each physician. Statistically significant increases were found in all measured parameters for the ATC compared with the MA. Patient encounters increased 18% to 22% per day, and collections increased by 10% to 60% per day. ATCs can optimize orthopaedic sports medicine practice by increasing patient encounters, charges, and collections. Orthopaedic practices can be more efficient by using ATCs or MAs as PEs.

  3. Unbiased estimation of cell number using the automatic optical fractionator.

    Science.gov (United States)

    Mouton, Peter R; Phoulady, Hady Ahmady; Goldgof, Dmitry; Hall, Lawrence O; Gordon, Marcia; Morgan, David

    2017-03-01

    A novel stereology approach, the automatic optical fractionator, is presented for obtaining unbiased and efficient estimates of the number of cells in tissue sections. Used in combination with existing segmentation algorithms and ordinary immunostaining methods, automatic estimates of cell number are obtainable from extended depth of field images built from three-dimensional volumes of tissue (disector stacks). The automatic optical fractionator is more accurate, 100% objective and 8-10 times faster than the manual optical fractionator. An example of the automatic fractionator is provided for counts of immunostained neurons in neocortex of a genetically modified mouse model of neurodegeneration. Evidence is presented for the often overlooked prerequisite that accurate counting by the optical fractionator requires a thin focal plane generated by a high optical resolution lens. Copyright © 2016 Elsevier B.V. All rights reserved.
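
    For context, the optical fractionator combines the raw count with the reciprocals of the sampling fractions; a minimal sketch with hypothetical sampling parameters (not those of the cited study) is shown below.

```python
# Standard optical-fractionator estimate (hypothetical sampling parameters,
# not the values used in the cited study).
counted_cells = 412              # sum of cells counted in all disector stacks (sum Q-)
ssf = 1 / 6                      # section sampling fraction (every 6th section)
asf = (50 * 50) / (300 * 300)    # counting-frame area / x-y sampling-grid area
tsf = 10 / 25                    # disector height / mean mounted section thickness

total_cells = counted_cells * (1 / ssf) * (1 / asf) * (1 / tsf)
print(f"estimated total number of neurons: {total_cells:,.0f}")
```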

  4. Generating automated meeting summaries

    OpenAIRE

    Kleinbauer, Thomas

    2011-01-01

    The thesis at hand introduces a novel approach for the generation of abstractive summaries of meetings. While the automatic generation of document summaries has been studied for some decades now, the novelty of this thesis is mainly the application to the meeting domain (instead of text documents) as well as the use of a lexicalized representation formalism on the basis of Frame Semantics. This allows us to generate summaries abstractively (instead of extractively).

  5. Assistive Technologies

    Science.gov (United States)

    Auat Cheein, Fernando A., Ed.

    2012-01-01

    This book offers the reader new achievements within the Assistive Technology field made by worldwide experts, covering aspects such as assistive technology focused on teaching and education, mobility, communication and social interactivity, among others. Each chapter included in this book covers one particular aspect of Assistive Technology that…

  6. Analysis of a Communication-Assisted Overcurrent Protection Scheme for the 20 kV Medium-Voltage Radial-Tie Switch Distribution System with Distributed Generation (DG) at PT. PLN Nusa Penida Bali

    Directory of Open Access Journals (Sweden)

    Putri Trisna Idha Ayuningtias

    2017-01-01

    Full Text Available The distribution system of PT. PLN Nusa Penida Bali has a large number of long sections, so coordinating its protection equipment requires precise relay settings. With a conventional protection scheme, the relay time settings on the upstream side become progressively larger and can exceed 1 second. This is undesirable because the relay operating time and the Circuit Breaker (CB) clearing time become longer. The effect is a prolonged undervoltage in the system, which is likely to cause generation failure: the main generator will shut down if a short-circuit fault lasts too long and the protection system cannot isolate the fault quickly. This final project therefore analyses a communication-assisted overcurrent protection scheme for the 20 kV medium-voltage radial-tie switch distribution system with Distributed Generation (DG) at PT. PLN Nusa Penida Bali. In the communication-assisted overcurrent scheme, the overcurrent relay time setting is 0.1 second. The overcurrent relays are coordinated via a communication medium that transmits logic information in the form of a blocking signal. When a relay senses a fault in its primary zone, it sends a blocking signal to the other relays; the blocking signal also tells the other relays to trip according to their time-delay settings if the CB in the primary zone fails to clear the fault. This makes the protection system more reliable and faster.
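
    A highly simplified sketch of the blocking logic described above: a relay that picks up fault current trips fast unless a downstream relay reports the fault in its own zone, and falls back to its time-delay setting if the downstream breaker fails to clear. The delay values and function interface are illustrative assumptions, not the project's actual settings.

```python
def relay_decision(pickup, blocked_by_downstream, primary_cb_cleared,
                   fast_delay=0.1, backup_delay=0.5):
    """Simplified communication-assisted overcurrent logic (illustrative values).

    pickup: this relay sees fault current.
    blocked_by_downstream: a downstream relay reported the fault in its own zone.
    primary_cb_cleared: the downstream CB opened within its fast delay.
    Returns the trip time in seconds, or None if the relay stays closed.
    """
    if not pickup:
        return None
    if not blocked_by_downstream:
        return fast_delay                 # fault is in this relay's primary zone
    if not primary_cb_cleared:
        return backup_delay               # breaker-failure backup per time-delay setting
    return None                           # downstream relay cleared the fault

# Upstream relay during a feeder fault that the downstream CB clears normally:
print(relay_decision(pickup=True, blocked_by_downstream=True, primary_cb_cleared=True))
```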

  7. Hue-assisted automatic registration of color point clouds

    Directory of Open Access Journals (Sweden)

    Hao Men

    2014-10-01

    Full Text Available This paper describes a variant of the extended-Gaussian-image-based registration algorithm for point clouds with surface color information. The method correlates the distributions of surface normals for rotational alignment and grid occupancy for translational alignment, with hue filters applied during the construction of the surface normal histograms and occupancy grids. In this method, the size of the point cloud is reduced with a hue-based down-sampling that is independent of the point sample density or local geometry. Experimental results show that use of the hue filters increases the registration speed and improves the registration accuracy. Coarse rigid transformations determined in this step enable fine alignment with dense, unfiltered point clouds or using Iterative Closest Point (ICP) alignment techniques.

  8. Automatic detection of retinal anatomy to assist diabetic retinopathy screening

    Energy Technology Data Exchange (ETDEWEB)

    Fleming, Alan D [Biomedical Physics, University of Aberdeen, Aberdeen, AB25 2ZD (United Kingdom); Goatman, Keith A [Biomedical Physics, University of Aberdeen, Aberdeen, AB25 2ZD (United Kingdom); Philip, Sam [Grampian Diabetes Retinal Screening Programme, Woolmanhill Hospital, Aberdeen, AB25 1LD (United Kingdom); Olson, John A [Grampian Diabetes Retinal Screening Programme, Woolmanhill Hospital, Aberdeen, AB25 1LD (United Kingdom); Sharp, Peter F [Biomedical Physics, University of Aberdeen, Aberdeen, AB25 2ZD (United Kingdom)

    2007-01-21

    Screening programmes for diabetic retinopathy are being introduced in the United Kingdom and elsewhere. These require large numbers of retinal images to be manually graded for the presence of disease. Automation of image grading would have a number of benefits. However, an important prerequisite for automation is the accurate location of the main anatomical features in the image, notably the optic disc and the fovea. The locations of these features are necessary so that lesion significance, image field of view and image clarity can be assessed. This paper describes methods for the robust location of the optic disc and fovea. The elliptical form of the major retinal blood vessels is used to obtain approximate locations, which are refined based on the circular edge of the optic disc and the local darkening at the fovea. The methods have been tested on 1056 sequential images from a retinal screening programme. Positional accuracy was better than 0.5 of a disc diameter in 98.4% of cases for optic disc location, and in 96.5% of cases for fovea location. The methods are sufficiently accurate to form an important and effective component of an automated image grading system for diabetic retinopathy screening.

  9. Music playlist generation by adapted simulated annealing

    NARCIS (Netherlands)

    Pauws, S.C.; Verhaegh, W.F.J.; Vossen, M.P.H.

    2008-01-01

    We present the design of an algorithm for use in an interactive music system that automatically generates music playlists that fit the music preferences of a user. To this end, we introduce a formal model, define the problem of automatic playlist generation (APG), and prove its NP-hardness. We use a
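
    Since the adapted algorithm itself is not given in the excerpt, the sketch below shows only a generic simulated-annealing loop over fixed-length playlists; the penalty function, move operator and cooling schedule are placeholders rather than the authors' formal APG model.

```python
import math
import random

def playlist_sa(songs, penalty, length=20, steps=5000, t0=1.0, cooling=0.999):
    """Generic simulated annealing over fixed-length playlists.

    songs: list of candidate track ids; penalty: callable scoring how badly a
    playlist violates the user's preferences (lower is better).
    """
    current = random.sample(songs, length)
    cost = penalty(current)
    best, best_cost, temp = list(current), cost, t0
    for _ in range(steps):
        candidate = list(current)
        candidate[random.randrange(length)] = random.choice(songs)  # local move
        c_cost = penalty(candidate)
        # Accept improvements, and worsening moves with temperature-dependent probability.
        if c_cost < cost or random.random() < math.exp((cost - c_cost) / temp):
            current, cost = candidate, c_cost
            if cost < best_cost:
                best, best_cost = list(current), cost
        temp *= cooling                      # geometric cooling schedule
    return best
```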

  10. Resegregation of Public Schools: The Third Generation. A Report on the Condition of Desegregation in America's Public Schools by the Network of Regional Desegregation Assistance Centers.

    Science.gov (United States)

    Northwest Regional Educational Lab., Portland, OR. Center for National Origin, Race and Sex Equity.

    A third generation of school segregation has evolved, with the following problems: (1) renewed physical segregation; (2) limited teacher expectations for minority students; (3) culturally biased instructional methods; (4) persistence of sex stereotyping and bias; and (5) ability grouping that isolates students on the basis of race, national…

  11. Time-resolved processes in a pulsed electrical discharge in water generated with shock wave assistance in a plate-to-plate configuration

    Science.gov (United States)

    Stelmashuk, V.

    2014-12-01

    Plate-to-plate geometry is not usually used for discharge generation in water because the electric field is too low for electrical breakdown between the electrodes. In the present research, a new method for generating an electrical discharge in water using plate electrodes is proposed. A high-voltage pulse is applied to a pair of disc electrodes at the moment a shock wave is passing between them. This method allows a higher electrical energy to be deposited than in the case of pin-to-pin (or pin-to-plate) electrodes, without destroying them. Discharge initiation occurs in the numerous cavitation bubbles generated by the shock wave. The discharge evolution was studied using a high-speed framing camera. Two interesting effects have been observed. Firstly, multiple streamers are incepted on the cathode, which is not typical for a symmetrical electrode configuration. Secondly, the plasma in the spark channel turns out not to be homogeneous. The dynamics of the vapour bubble generated by this spark were studied by a shadowgraph method. The bubble's growth, collapse and rebound are discussed.

  12. Housing Assistance

    Directory of Open Access Journals (Sweden)

    Emma Baker

    2013-07-01

    Full Text Available In Australia, an increasing number of households face problems of access to suitable housing in the private market. In response, the Federal and State Governments share responsibility for providing housing assistance to these, mainly low-income, households. A broad range of policy instruments are used to provide and maintain housing assistance across all housing tenures, for example, assisting entry into homeownership, providing affordability assistance in the private rental market, and the provision of socially owned and managed housing options. Underlying each of these interventions is the premise that secure, affordable, and appropriate housing provides not only shelter but also a number of nonshelter benefits to individuals and their households. Although the nonshelter outcomes of housing are well acknowledged in Australia, the understanding of the nonshelter outcomes of housing assistance is less clear. This paper explores nonshelter outcomes of three of the major forms of housing assistance provided by Australian governments—low-income mortgage assistance, social housing, and private rent assistance. It is based upon analysis of a survey of 1,353 low-income recipients of housing assistance, and specifically measures the formulation of health and well-being, financial stress, and housing satisfaction outcomes across these three assistance types. We find clear evidence that health, finance, and housing satisfaction outcomes are associated with quite different factors for individuals in these three major housing assistance types.

  13. The Perugia University Automatic Observatory

    Science.gov (United States)

    Tosti, Gino; Pascolini, Sergio; Fiorucci, Massimo

    1996-08-01

    In this paper we describe the hardware and software architecture of the Automatic Imaging Telescope (AIT), recently developed at the Perugia University Observatory. It is based on an existing 0.4 m telescope which was transformed into an automatic device. During the night, all the observatory functions are controlled by two PCs in an unattended mode. The system is equipped with an autoguider and the software was designed to allow the automatic reduction of the data at the end of the night. Since October 1994 the AIT has been collecting a large amount of BVR_cI_c data for about 30 blazars. (SECTION: Astronomical Instrumentation)

  14. Electronic amplifiers for automatic compensators

    CERN Document Server

    Polonnikov, D Ye

    1965-01-01

    Electronic Amplifiers for Automatic Compensators presents the design and operation of electronic amplifiers for use in automatic control and measuring systems. This book is composed of eight chapters that consider the problems of constructing input and output circuits of amplifiers, suppression of interference and ensuring high sensitivity.This work begins with a survey of the operating principles of electronic amplifiers in automatic compensator systems. The succeeding chapters deal with circuit selection and the calculation and determination of the principal characteristics of amplifiers, as

  15. Automatic Planning of External Search Engine Optimization

    Directory of Open Access Journals (Sweden)

    Vita Jasevičiūtė

    2015-07-01

    Full Text Available This paper describes an investigation of an external search engine optimization (SEO) action planning tool, dedicated to automatically extracting a small set of the most important keywords for each month over a whole-year period. The keywords in the set are extracted according to externally measured parameters, such as the average number of searches during the year and for every month individually. Additionally, the position of the optimized web site for each keyword is taken into account. The generated optimization plan is similar to the optimization plans prepared manually by SEO professionals and can be successfully used as a support tool for web site search engine optimization.

  16. Development of automatic steel coil recognition system for automated crane

    OpenAIRE

    Nishibe, Kunihiko; Fujiwara, Naofumi

    1999-01-01

    An automatic steel coil recognition system with two types of laser-assisted range sensor has been developed for fully automated crane operation in the steel coil yard. Performance tests of recognizing full-scale model coils were carried out by mounting the recognition system on a full-size crane. As a result, the recognition accuracy of the coil center position, coil diameter and width was confirmed to be ±20 mm, which is sufficient for practical applications. This recognition system was delivered to com...

  17. Automatic multicommutated flow system for diffusion studies of pharmaceuticals through artificial enteric membrane.

    Science.gov (United States)

    Sales, M G; Reis, B F; Montenegro, M C

    2001-08-01

    An automatic flow procedure with spectrophotometric detection was developed for the study of pharmaceutical diffusion through an artificial enteric membrane. The manifold comprised two independent flow pathways joined by a diffusion unit with two compartments and an enteric lipophilic membrane. The pathways were automatically filled with solutions simulating digestive and plasmatic conditions by means of four solenoid valves. The diffusion of pharmaceuticals from the enteric to the plasmatic compartment was performed in closed-loop pathways and was continuously monitored by a flow cell coupled to the acceptor solution pathway. The volumes of the digestive and plasmatic solutions were 6.0 and 3.6 ml, respectively, comprising the unit compartment, pumping tubing and connecting flow lines. Pumping flow rates of the donor and acceptor solutions were maintained at 6.0 and 2.5 ml min(-1), respectively. The proposed system was employed in diffusion studies of caffeine and aminophylline, and in the evaluation of the influence of tensioactive agents on the diffusion process. After continuous circulation of the solutions for 60 min, the caffeine concentration in the acceptor stream was ca. 18% of its initial concentration in the digestive compartment. The system could be programmed to perform several replicates, stopping them at different degrees of diffusion without operator assistance. The data generated by the spectrophotometer were read by the microcomputer as a function of time and stored for further mathematical treatment.

  18. Automatic Construction of Hypotheses for Linear Objects in Digital and Laser Scanning Images

    Directory of Open Access Journals (Sweden)

    Quintino Dalmolin

    2004-12-01

    Full Text Available This paper presents an automatic road-hypothesis approach using digital images and laser scanning images, combining various digital image processing techniques. The semantic objects in this work are linear features such as roads and streets. The aim of this paper is to automatically extract road hypotheses in image space and object space so that the information can be used in the automatic absolute orientation process. The results show that the methodology is efficient and that the road hypotheses are generated and validated.
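
    The abstract does not spell out the individual image processing steps; as a generic stand-in, the sketch below generates straight-line hypotheses from a grayscale image with Canny edge detection followed by a probabilistic Hough transform (scikit-image), which is one common way to obtain linear-feature candidates.

        # Minimal line-hypothesis generator: Canny edges + probabilistic Hough transform.
        # Only illustrates the general idea of extracting linear features.
        import numpy as np
        from skimage.feature import canny
        from skimage.transform import probabilistic_hough_line

        def line_hypotheses(gray_image, sigma=2.0, min_length=50):
            """Return a list of ((x0, y0), (x1, y1)) candidate line segments."""
            edges = canny(gray_image, sigma=sigma)
            return probabilistic_hough_line(edges, threshold=10,
                                            line_length=min_length, line_gap=5)

        if __name__ == "__main__":
            # Synthetic test image containing one bright diagonal "road".
            img = np.zeros((200, 200))
            for i in range(200):
                img[i, max(0, i - 2):min(200, i + 3)] = 1.0
            for segment in line_hypotheses(img):
                print("candidate segment:", segment)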

  19. Automatic interpretation and writing report of the adult waking electroencephalogram.

    Science.gov (United States)

    Shibasaki, Hiroshi; Nakamura, Masatoshi; Sugi, Takenao; Nishida, Shigeto; Nagamine, Takashi; Ikeda, Akio

    2014-06-01

    Automatic interpretation of the EEG has so far been faced with significant difficulties because of a large amount of spatial as well as temporal information contained in the EEG, continuous fluctuation of the background activity depending on changes in the subject's vigilance and attention level, the occurrence of paroxysmal activities such as spikes and spike-and-slow-waves, contamination of the EEG with a variety of artefacts and the use of different recording electrodes and montages. Therefore, previous attempts of automatic EEG interpretation have been focussed only on a specific EEG feature such as paroxysmal abnormalities, delta waves, sleep stages and artefact detection. As a result of a long-standing cooperation between clinical neurophysiologists and system engineers, we report for the first time on a comprehensive, computer-assisted, automatic interpretation of the adult waking EEG. This system analyses the background activity, intermittent abnormalities, artefacts and the level of vigilance and attention of the subject, and automatically presents its report in written form. Besides, it also detects paroxysmal abnormalities and evaluates the effects of intermittent photic stimulation and hyperventilation on the EEG. This system of automatic EEG interpretation was formed by adopting the strategy that the qualified EEGers employ for the systematic visual inspection. This system can be used as a supplementary tool for the EEGer's visual inspection, and for educating EEG trainees and EEG technicians. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
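
    A full interpretation system of this kind is far beyond a code snippet, but one of its ingredients, characterising the background activity by frequency-band power, can be sketched as follows (synthetic signals, Welch spectra via SciPy; the band limits are the usual clinical conventions rather than values taken from the paper).

        # Sketch: per-channel band power of an EEG segment using Welch's method.
        import numpy as np
        from scipy.signal import welch

        FS = 200.0  # sampling rate in Hz (assumed)
        BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

        def band_powers(eeg, fs=FS):
            """eeg: array (n_channels, n_samples); returns dict band -> power per channel."""
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
            df = freqs[1] - freqs[0]
            return {name: psd[:, (freqs >= lo) & (freqs < hi)].sum(axis=-1) * df
                    for name, (lo, hi) in BANDS.items()}

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            t = np.arange(0, 10, 1 / FS)
            # Two synthetic channels: a 10 Hz (alpha) rhythm plus noise, and pure noise.
            eeg = np.vstack([np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size),
                             rng.standard_normal(t.size)])
            for band, power in band_powers(eeg).items():
                print(band, np.round(power, 3))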

  20. Comparison of intraoperative outcomes using the new and old generation da Vinci® robot for robot-assisted laparoscopic prostatectomy.

    Science.gov (United States)

    Shah, Ketul; Abaza, Ronney

    2011-11-01

    To review and compare intraoperative outcomes for robotic prostatectomy procedures performed on two generations of the da Vinci robotic surgery platform. We reviewed 100 consecutive robotic prostatectomy cases and compared intraoperative outcomes for procedures randomly performed on either the da Vinci S robot or the first-generation standard robot. Baseline demographic data and intra-operative variables potentially impacting outcomes were reviewed and compared between the two groups. Mean total operative time was 191 min using the standard da Vinci robot (range 132-266) versus 169 min with the S robot (range 98-230), representing a mean difference of 22 min (P = 0.002). This difference was statistically significant despite no difference in mean patient BMI of 30.6 (range 19-51) for standard versus 29.3 (range 21-37) for S (P = 0.31), no difference in mean prostate size of 54.6 g (range 26-101) for standard versus 57.3 g (range 32-151) for S (P = 0.55), and no difference in the frequency of nerve-sparing (P = 0.99). There was also no difference in the portions of procedures performed by residents, which in some cases was none and in others the entire procedure, but the standard robot was more often used for the surgeon's first case of the day (P = 0.006). There was no difference in blood loss (P = 0.08), positive margins (P = 0.87), or mean number of lymph nodes removed (10.7 vs 10.6). Both generations of da Vinci robotic technology are equally effective for robot-assisted laparoscopic prostatectomy, but the S robot appears to allow shorter procedure times. Further such evaluations are necessary to guide institutions and public policy decision-makers on investments in newer generations of robotic technology as incremental advances continue. © 2011 THE AUTHORS. BJU INTERNATIONAL © 2011 BJU INTERNATIONAL.

  1. Improved in-gel approaches to generate peptide maps of integral membrane proteins with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    Science.gov (United States)

    van Montfort, Bart A; Canas, Benito; Duurkens, Ria; Godovac-Zimmermann, Jasminka; Robillard, George T

    2002-03-01

    This paper reports studies of in-gel digestion procedures to generate MALDI-MS peptide maps of integral membrane proteins. The methods were developed for the membrane domain of the mannitol permease of E. coli. In-gel digestion of this domain with trypsin, followed by extraction of the peptides from the gel, yields only 44% sequence coverage. Since lysines and arginines are seldom found in the membrane-spanning regions, complete tryptic cleavage will generate large hydrophobic fragments, many of which are poorly soluble and most likely contribute to the low sequence coverage. Addition of the detergent octyl-beta-glucopyranoside (OBG), at 0.1% concentration, to the extraction solvent increases the total number of peptides detected to at least 85% of the total protein sequence. OBG facilitates the recovery of hydrophobic peptides when they are SpeedVac dried during the extraction procedure. Many of the newly recovered peptides are partial cleavage products. This seems to be advantageous since it generates hydrophobic fragments with a hydrophilic solubilizing part. In-gel CNBr cleavage resulted in 5-10-fold more intense spectra, 83% sequence coverage, fully cleaved fragments and no effect of OBG. In contrast to tryptic cleavage sites, the CNBr cleavage sites are found in transmembrane segments; cleavage at these sites generates smaller hydrophobic fragments, which are more soluble and do not need OBG. With the results of both cleavages, complete sequence coverage of the membrane domain of the mannitol permease of E. coli is obtained without the necessity of using HPLC separation. The protocols were applied to two other integral membrane proteins, which confirmed the general applicability of CNBr cleavage and the observed effects of OBG in peptide recovery after tryptic digestion. Copyright 2002 John Wiley & Sons, Ltd.

  2. Trace element analysis of humus-rich natural water samples: method development for UV-LED assisted photocatalytic sample preparation and hydride generation ICP-MS analysis

    OpenAIRE

    Havia, J. (Johanna)

    2017-01-01

    Abstract Humus-rich natural water samples, containing high concentrations of dissolved organic carbon (DOC), are challenging for certain analytical methods used in trace element analysis, including hydride generation methods and electrochemical methods. In order to obtain reliable results, the samples must be pretreated to release analytes from humic acid complexes prior to the determination. In this study, methods for both the pretreatment and analysis steps were developed. Arsenic is ...

  3. Enhancement Approach of Object Constraint Language Generation

    Science.gov (United States)

    Salemi, Samin; Selamat, Ali

    2018-01-01

    OCL is the most prevalent language for documenting system constraints that are annotated in UML. Writing OCL specifications is not an easy task due to the complexity of the OCL syntax; therefore, an approach to help and assist developers in writing OCL specifications is needed. There are two existing approaches: first, creating OCL specifications with a tool called COPACABANA; second, an MDA-based approach that helps developers write OCL specifications with another tool, called NL2OCLviaSBVR, which generates OCL specifications automatically. This study presents another MDA-based approach, called En2OCL, whose objective is twofold: (1) to improve the precision of the existing works, and (2) to present a benchmark of these approaches. The benchmark shows that the accuracies of COPACABANA, NL2OCLviaSBVR, and En2OCL are 69.23, 84.64, and 88.40, respectively.

  4. Clothes Dryer Automatic Termination Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    TeGrotenhuis, Ward E.

    2014-10-01

    Volume 2: Improved Sensor and Control Designs. Many residential clothes dryers on the market today provide automatic cycles that are intended to stop when the clothes are dry, as determined by the final remaining moisture content (RMC). However, testing of automatic termination cycles has shown that many dryers are susceptible to over-drying of loads, leading to excess energy consumption. In particular, tests performed using the DOE Test Procedure in Appendix D2 of 10 CFR 430 subpart B have shown that as much as 62% of the energy used in a cycle may be from over-drying. Volume 1 of this report shows an average of 20% excess energy from over-drying when running automatic cycles with various load compositions and dryer settings. Consequently, improving automatic termination sensors and algorithms has the potential for substantial energy savings in the U.S.

  5. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  6. AUTOMATIC FUSION OF PARTIAL RECONSTRUCTIONS

    Directory of Open Access Journals (Sweden)

    A. Wendel

    2012-07-01

    Full Text Available Novel image acquisition tools such as micro aerial vehicles (MAVs) in the form of quad- or octo-rotor helicopters support the creation of 3D reconstructions with ground sampling distances below 1 cm. The limitation of aerial photogrammetry to nadir and oblique views at heights of several hundred meters is bypassed, allowing close-up photos of facades and ground features. However, the new acquisition modality also introduces challenges: First, flight space might be restricted in urban areas, which leads to missing views for accurate 3D reconstruction and causes fracturing of large models. This could also happen due to vegetation or simply a change of illumination during image acquisition. Second, accurate geo-referencing of reconstructions is difficult because of shadowed GPS signals in urban areas, so alignment based on GPS information is often not possible. In this paper, we address the automatic fusion of such partial reconstructions. Our approach is largely based on the work of Wendel et al. (2011a), but does not require an overhead digital surface model for fusion. Instead, we exploit the fact that patch-based semi-dense reconstruction of the fractured model typically results in several point clouds covering overlapping areas, even if sparse feature correspondences cannot be established. We approximate orthographic depth maps for the individual parts and iteratively align them in a global coordinate system. As a result, we are able to generate point clouds which are visually more appealing and serve as an ideal basis for further processing. Mismatches between parts of the fused models depend only on the individual point density, which allows us to achieve a fusion accuracy in the range of ±1 cm on our evaluation dataset.
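
    The abstract describes iteratively aligning overlapping partial point clouds into one global frame. A bare-bones version of such an alignment step, a point-to-point ICP iteration with a Kabsch rigid fit in NumPy/SciPy, is sketched below; it only illustrates the alignment idea, not the authors' depth-map-based method.

        # Minimal point-to-point ICP: align a source point cloud to a target cloud.
        import numpy as np
        from scipy.spatial import cKDTree

        def kabsch(P, Q):
            """Rigid (R, t) that maps paired points P onto Q in a least-squares sense."""
            Pc, Qc = P - P.mean(0), Q - Q.mean(0)
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = Q.mean(0) - R @ P.mean(0)
            return R, t

        def icp(source, target, iterations=20):
            tree = cKDTree(target)
            src = source.copy()
            for _ in range(iterations):
                _, idx = tree.query(src)          # nearest neighbours in the target cloud
                R, t = kabsch(src, target[idx])
                src = src @ R.T + t
            return src

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            target = rng.random((500, 3))
            a = 0.2                               # small rotation about z plus a translation
            Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                           [np.sin(a),  np.cos(a), 0.0],
                           [0.0, 0.0, 1.0]])
            source = target @ Rz.T + np.array([0.10, -0.05, 0.02])
            aligned = icp(source, target)
            print("mean residual:", np.linalg.norm(aligned - target, axis=1).mean())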

  7. An automatic image recognition approach

    Directory of Open Access Journals (Sweden)

    Tudor Barbu

    2007-07-01

    Full Text Available Our paper focuses on the graphical analysis domain. We propose an automatic image recognition technique. This approach consists of two main pattern recognition steps. First, it performs an image feature extraction operation on an input image set, using statistical dispersion features. Then, an unsupervised classification process is performed on the previously obtained graphical feature vectors. An automatic region-growing based clustering procedure is proposed and utilized in the classification stage.
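
    As a schematic of the two stages mentioned, dispersion-based feature extraction followed by an unsupervised region-growing style clustering of the feature vectors, the toy sketch below uses standard dispersion statistics and a simple distance-threshold growing rule; it is not the authors' implementation.

        # Toy version of the two stages: dispersion features per image, then a
        # region-growing style clustering of the feature vectors.
        import numpy as np

        def dispersion_features(image):
            """Simple statistical dispersion descriptors of a grayscale image."""
            flat = image.ravel()
            q75, q25 = np.percentile(flat, [75, 25])
            return np.array([flat.std(), flat.var(), q75 - q25, flat.max() - flat.min()])

        def region_grow_cluster(features, threshold):
            """Greedy clustering: a vector joins a cluster if it is close to its seed."""
            labels = -np.ones(len(features), dtype=int)
            current = 0
            for i, seed in enumerate(features):
                if labels[i] != -1:
                    continue
                labels[i] = current
                for j in range(i + 1, len(features)):
                    if labels[j] == -1 and np.linalg.norm(features[j] - seed) < threshold:
                        labels[j] = current
                current += 1
            return labels

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            smooth = [rng.normal(0.5, 0.02, (32, 32)) for _ in range(5)]   # low dispersion
            textured = [rng.uniform(0, 1, (32, 32)) for _ in range(5)]     # high dispersion
            feats = np.array([dispersion_features(im) for im in smooth + textured])
            print(region_grow_cluster(feats, threshold=0.2))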

  8. Prospects for de-automatization.

    Science.gov (United States)

    Kihlstrom, John F

    2011-06-01

    Research by Raz and his associates has repeatedly found that suggestions for hypnotic agnosia, administered to highly hypnotizable subjects, reduce or even eliminate Stroop interference. The present paper sought unsuccessfully to extend these findings to negative priming in the Stroop task. Nevertheless, the reduction of Stroop interference has broad theoretical implications, both for our understanding of automaticity and for the prospect of de-automatizing cognition in meditation and other altered states of consciousness. Copyright © 2010 Elsevier Inc. All rights reserved.

  9. The automatization of journalistic narrative

    Directory of Open Access Journals (Sweden)

    Naara Normande

    2013-06-01

    Full Text Available This paper proposes an initial discussion about the production of automatized journalistic narratives. Despite being a topic discussed on specialized sites and at international conferences in the communication area, the concepts are still underdeveloped in academic research. For this article, we studied the concepts of narrative, databases, and algorithms, indicating a theoretical trend that explains these automatized journalistic narratives. As characterization, we use the cases of the Los Angeles Times, Narrative Science, and Automated Insights.

  10. Automatic Collision Avoidance Technology (ACAT)

    Science.gov (United States)

    Swihart, Donald E.; Skoog, Mark A.

    2007-01-01

    This document presents two views of the Automatic Collision Avoidance Technology (ACAT). One viewgraph presentation reviews the development and system design of ACAT. Two types of ACAT exist: Automatic Ground Collision Avoidance (AGCAS) and Automatic Air Collision Avoidance (AACAS). The AGCAS uses Digital Terrain Elevation Data (DTED) for mapping functions and uses navigation data to place the aircraft on the map. It then scans the DTED in front of and around the aircraft and uses the future aircraft trajectory (5g) to provide an automatic fly-up maneuver when required. The AACAS uses a data link to determine position and closing rate. It contains several canned maneuvers to avoid collision. Automatic maneuvers can occur at the last instant, and both aircraft maneuver when using the data link. The system can use a sensor in place of the data link. The second viewgraph presentation reviews the development of a flight test and an evaluation of the test. A review of the operation of the AGCAS and a comparison with a pilot's performance are given, and the same review is given for the AACAS.
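
    The AGCAS logic outlined above (project the trajectory over terrain and command a fly-up when predicted clearance disappears) can be caricatured as follows; the fragment is purely hypothetical and ignores recovery-maneuver dynamics, sensor error, and everything else a real system must model.

        # Grossly simplified ground-collision check: predict altitude along the flight
        # path and trigger a fly-up if the predicted clearance over terrain drops below
        # a safety margin. All numbers are illustrative, not from the ACAT system.
        def predict_altitudes(altitude_m, vertical_speed_ms, horizon_s, dt=1.0):
            t, out = 0.0, []
            while t <= horizon_s:
                out.append(altitude_m + vertical_speed_ms * t)
                t += dt
            return out

        def needs_flyup(altitude_m, vertical_speed_ms, terrain_profile_m, margin_m=150.0):
            """terrain_profile_m: terrain elevation (m) sampled along the predicted track."""
            predicted = predict_altitudes(altitude_m, vertical_speed_ms,
                                          horizon_s=len(terrain_profile_m) - 1)
            return any(alt - terrain < margin_m
                       for alt, terrain in zip(predicted, terrain_profile_m))

        if __name__ == "__main__":
            terrain = [500, 520, 560, 650, 800, 950, 1000]     # rising ridge ahead
            print("fly-up required:", needs_flyup(1200.0, -40.0, terrain))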

  11. A Technique: Generating Alternative Thoughts

    Directory of Open Access Journals (Sweden)

    Serkan AKKOYUNLU

    2013-03-01

    Full Text Available Introduction: One of the basic techniques of cognitive therapy is the examination of automatic thoughts and reducing the belief in them. By employing this, we can overcome the cognitive bias apparent in mental disorders. However, according to another cognitive perspective, in a given situation there are distinct cognitive representations, such as positive and negative schemas, competing for retrieval from memory. In this sense, generating or strengthening alternative explanations or balanced thoughts that explain the situation better than negative automatic thoughts is one of the important process goals of cognitive therapy. Objective: The aim of this review is to describe methods used to generate alternative/balanced thoughts, which are used in examining automatic thoughts and are also a part of automatic thought records. Alternative/balanced thoughts are the summary and end point of automatic thought work. In this text, different approaches, including listing alternative thoughts, examining the evidence to generate balanced thoughts, decatastrophizing in anxiety, and a meta-cognitive method named "two explanations", are discussed. Different ways to use this technique as a homework assignment are also reviewed. Remarkable aspects of generating alternative explanations and realistic/balanced thoughts are also reviewed and exemplified using therapy transcripts. Conclusion: Generating alternative explanations and balanced thoughts are the end point and an important part of therapy work on automatic thoughts. When applied properly and rehearsed as homework between sessions, these methods may lead to improvement in many mental disorders.

  12. Global Distribution Adjustment and Nonlinear Feature Transformation for Automatic Colorization

    Directory of Open Access Journals (Sweden)

    Terumasa Aoki

    2018-01-01

    Full Text Available Automatic colorization is generally classified into two groups: propagation-based methods and reference-based methods. In reference-based automatic colorization methods, color image(s) are used as reference(s) to reconstruct the original color of a gray target image. The most important task here is to find the best matching pairs for all pixels between the reference and target images in order to transfer color information from reference to target pixels. A lot of attractive local feature-based image matching methods have been developed over the last two decades. Unfortunately, as far as we know, there are no optimal matching methods for automatic colorization because the requirements for pixel matching in automatic colorization are wholly different from those for traditional image matching. To design an efficient matching algorithm for automatic colorization, clustering pixels with low computational cost and generating descriptive feature vectors are the most important challenges to be solved. In this paper, we present a novel method to address these two problems. In particular, our work concentrates on solving the second problem (designing a descriptive feature vector); namely, we discuss how to learn a descriptive texture feature using a scaled sparse texture feature combined with a nonlinear transformation to construct an optimal feature descriptor. Our experimental results show that our proposed method outperforms the state-of-the-art methods in terms of robustness of color reconstruction for automatic colorization applications.
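
    A deliberately crude stand-in for the matching-and-transfer step discussed above is sketched below: each gray target pixel is matched to a reference pixel by a tiny (luminance, local contrast) descriptor and simply inherits that pixel's colour. The real contribution of the paper is a far richer learned descriptor.

        # Naive reference-based colorization: match each gray target pixel to a reference
        # pixel by (luminance, local contrast) and copy the reference colour.
        import numpy as np
        from scipy.ndimage import uniform_filter
        from scipy.spatial import cKDTree

        def features(gray, window=7):
            mean = uniform_filter(gray, window)
            sq_mean = uniform_filter(gray * gray, window)
            local_std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
            return np.stack([gray.ravel(), local_std.ravel()], axis=1)

        def colorize(target_gray, reference_rgb):
            ref_gray = reference_rgb.mean(axis=2)                 # crude luminance
            tree = cKDTree(features(ref_gray))
            _, idx = tree.query(features(target_gray))
            ref_pixels = reference_rgb.reshape(-1, 3)
            return ref_pixels[idx].reshape(target_gray.shape + (3,))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            reference = rng.random((64, 64, 3))
            target = rng.random((64, 64))
            print(colorize(target, reference).shape)   # (64, 64, 3)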

  13. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    Science.gov (United States)

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess a constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation in specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic design by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.
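
    The far-field pattern of such a coding aperture is commonly approximated by a Fourier transform of the aperture phase distribution; the sketch below evaluates a 1-bit coding matrix this way with NumPy. It mimics only the pattern-evaluation step inside an optimisation loop, with illustrative sizes, not the commercial full-wave solver used by the authors.

        # Evaluate the (scalar, normal-incidence) far-field pattern of a 1-bit coding
        # metasurface as the 2-D FFT of its aperture reflection phases -- the usual
        # array-factor approximation.
        import numpy as np

        def far_field(coding_matrix, pad=4):
            """coding_matrix: 0/1 array of macro coding units (0 -> 0 rad, 1 -> pi rad)."""
            aperture = np.exp(1j * np.pi * coding_matrix)
            n = pad * max(coding_matrix.shape)
            pattern = np.fft.fftshift(np.fft.fft2(aperture, s=(n, n)))
            return np.abs(pattern) ** 2

        if __name__ == "__main__":
            # A chessboard-like 1-bit coding sequence is known to split the reflected
            # beam into several symmetric lobes.
            coding = np.indices((16, 16)).sum(axis=0) % 2
            intensity = far_field(coding)
            peak = np.unravel_index(np.argmax(intensity), intensity.shape)
            print("pattern size:", intensity.shape, "strongest lobe at index:", peak)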

  14. Environmental Factors Influencing the Structural Dynamics of Soil Microbial Communities During Assisted Phytostabilization of Acid-Generating Mine Tailings: a Mesocosm Experiment

    Science.gov (United States)

    Valentín-Vargas, Alexis; Root, Robert A.; Neilson, Julia W; Chorover, Jon; Maier, Raina M.

    2014-01-01

    Compost-assisted phytostabilization has recently emerged as a robust alternative for reclamation of metalliferous mine tailings. Previous studies suggest that root-associated microbes may be important for facilitating plant establishment on the tailings, yet little is known about the long-term dynamics of microbial communities during reclamation. A mechanistic understanding of microbial community dynamics in tailings ecosystems undergoing remediation is critical because these dynamics profoundly influence both the biogeochemical weathering of tailings and the sustainability of a plant cover. Here we monitor the dynamics of soil microbial communities (i.e. bacteria, fungi, archaea) during a 12-month mesocosm study that included 4 treatments: 2 unplanted controls (unamended and compost-amended tailings) and 2 compost-amended seeded tailings treatments. Bacterial, fungal and archaeal communities responded distinctively to the revegetation process and concurrent changes in environmental conditions and pore water chemistry. Compost addition significantly increased microbial diversity and had an immediate and relatively long-lasting buffering-effect on pH, allowing plants to germinate and thrive during the early stages of the experiment. However, the compost buffering capacity diminished after six months and acidification took over as the major factor affecting plant survival and microbial community structure. Immediate changes in bacterial communities were observed following plant establishment, whereas fungal communities showed a delayed response that apparently correlated with the pH decline. Fluctuations in cobalt pore water concentrations, in particular, had a significant effect on the structure of all three microbial groups, which may be linked to the role of cobalt in metal detoxification pathways. The present study represents, to our knowledge, the first documentation of the dynamics of the three major microbial groups during revegetation of compost

  15. Analysis of heat generation of lithium ion rechargeable batteries used in implantable battery systems for driving undulation pump ventricular assist device.

    Science.gov (United States)

    Okamoto, Eiji; Nakamura, Masatoshi; Akasaka, Yuhta; Inoue, Yusuke; Abe, Yusuke; Chinzei, Tsuneo; Saito, Itsuro; Isoyama, Takashi; Mochizuki, Shuichi; Imachi, Kou; Mitamura, Yoshinori

    2007-07-01

    We have developed internal battery systems for driving an undulation pump ventricular assist device using two kinds of lithium ion rechargeable batteries. Lithium ion rechargeable batteries have high energy density, long life, and no memory effect; however, the rise in temperature of the lithium ion rechargeable battery is a critical issue. Evaluation of the temperature rise by means of numerical estimation is required to develop an internal battery system. The temperature of the lithium ion rechargeable batteries is determined by ohmic loss due to internal resistance, chemical loss due to the chemical reaction, and heat release. The measured internal resistance (R(cell)) at an ambient temperature of 37 °C was 0.1 Ω for the lithium ion (Li-ion) battery and 0.03 Ω for the lithium polymer (Li-po) battery. The entropy change (ΔS) of each battery, which leads to the chemical loss, was -1.6 to -61.1 J/(mol K) in the Li-ion battery and -9.6 to -67.5 J/(mol K) in the Li-po battery, depending on the state of charge (SOC). The temperature of each lithium ion rechargeable battery under a discharge current of 1 A was estimated by finite element method heat transfer analysis at an ambient temperature of 37 °C, configured with the measured R(cell) and the measured ΔS at each SOC. The estimated time course of the surface temperature of each battery coincided with the measured results, and the success of the estimation will greatly contribute to the development of an internal battery system using lithium ion rechargeable batteries.
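
    Using the quantities quoted above, the relative size of the ohmic and entropic contributions can be checked with the standard lumped expression Q = I^2 R - I T ΔS/(nF), taking heat release as positive during discharge and assuming n = 1 and F = 96485 C/mol; this is only a back-of-the-envelope companion to the finite element analysis described in the paper, not a reproduction of it.

        # Back-of-the-envelope heat generation of one cell at 1 A discharge:
        # ohmic term I^2 * R plus reversible (entropic) term -I * T * dS / (n * F).
        F = 96485.0          # Faraday constant, C/mol
        N_ELECTRONS = 1.0    # electrons transferred per reaction (assumed)
        T = 310.15           # ambient temperature used in the study, 37 C in kelvin
        I = 1.0              # discharge current, A

        def heat_terms(r_cell_ohm, delta_s):
            """delta_s in J/(mol K); with this sign convention a negative delta_s
            releases heat during discharge."""
            q_ohmic = I ** 2 * r_cell_ohm
            q_entropic = -I * T * delta_s / (N_ELECTRONS * F)
            return q_ohmic, q_entropic

        for label, r_cell, delta_s in [("Li-ion, dS = -61.1", 0.10, -61.1),
                                       ("Li-po,  dS = -67.5", 0.03, -67.5)]:
            q_ohm, q_ent = heat_terms(r_cell, delta_s)
            print(f"{label}: ohmic {q_ohm * 1e3:.0f} mW, entropic {q_ent * 1e3:.0f} mW,"
                  f" total {(q_ohm + q_ent) * 1e3:.0f} mW")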

  16. Determination of mercury compounds in fish by microwave-assisted extraction and liquid chromatography-vapor generation-inductively coupled plasma mass spectrometry

    Science.gov (United States)

    Chiou, Chwei-Sheng; Jiang, Shiuh-Jen; Kumar Danadurai, K. Suresh

    2001-07-01

    A method employing a vapor generation system and LC combined with inductively coupled plasma mass spectrometry (LC-ICP-MS) is presented for the determination of mercury in biological tissues. An open-vessel microwave digestion system was used to extract the mercury compounds from the sample matrix. The efficiency of the mobile phase, a mixture of L-cysteine and 2-mercaptoethanol, was evaluated for LC separation of inorganic mercury [Hg(II)], methylmercury (methyl-Hg) and ethylmercury (ethyl-Hg). The sensitivity, detection limits and repeatability of the LC-ICP-MS system with a vapor generator were comparable to, or better than, those of an LC-ICP-MS system with conventional pneumatic nebulization or other sample introduction techniques. The experimental detection limits for the various mercury species were in the range of 0.05-0.09 ng ml(-1) Hg, based on peak height. The proposed method was successfully applied to the determination of mercury compounds in a swordfish sample purchased from the local market. The accuracy of the method was evaluated by analyzing a marine biological certified reference material (DORM-2, NRCC).

  17. Assistive Technologies for Reading

    Science.gov (United States)

    Ruffin, Tiece M.

    2012-01-01

    Twenty-first century teachers working with diverse readers are often faced with the question of how to integrate technology in reading instruction that meets the needs of the techno-generation. Are today's teachers equipped with the knowledge of how to effectively use Assistive Technologies (AT) for reading? This position paper discusses AT for…

  18. Automatic detection of clinical mastitis is improved by in-line monitoring of somatic cell count

    NARCIS (Netherlands)

    Kamphuis, C.; Sherlock, R.; Jago, J.; Mein, G.; Hogeveen, H.

    2008-01-01

    This study explored the potential value of in-line composite somatic cell count (ISCC) sensing as a sole criterion or in combination with quarter-based electrical conductivity (EC) of milk, for automatic detection of clinical mastitis (CM) during automatic milking. Data generated from a New Zealand

  19. Integrated Engineering Policy in the Branch of Computer Science and Automatization,

    Science.gov (United States)

    A short description is given of the position of Czechoslovakia in the field of automatization and computer production. Automatization and computer ... science problems in Czechoslovakia are given, and data on the future development of third-generation computers in socialist countries are described. (Author)

  20. Automatic annotation suggestions for audiovisual archives: Evaluation aspects

    NARCIS (Netherlands)

    Gazendam, L.J.B.; Malaisé, V.; de Jong, A.; Wartena, C.; Brugman, H.; Schreiber, A.Th.

    2009-01-01

    In the context of large and ever growing archives, generating annotation suggestions automatically from textual resources related to the documents to be archived is an interesting option in theory. It could save a lot of work in the time consuming and expensive task of manual annotation and it could

  1. Automatic Color Sorting of Hardwood Edge-Glued Panel Parts

    Science.gov (United States)

    D. Earl Kline; Richard Conners; Qiang Lu; Philip A. Araman

    1997-01-01

    This paper describes an automatic color sorting system for red oak edge-glued panel parts. The color sorting system simultaneously examines both faces of a panel part and then determines which face has the "best" color, and sorts the part into one of a number of color classes at plant production speeds. Initial test results show that the system generated over...

  2. Sign language perception research for improving automatic sign language recognition

    NARCIS (Netherlands)

    Ten Holt, G.A.; Arendsen, J.; De Ridder, H.; Van Doorn, A.J.; Reinders, M.J.T.; Hendriks, E.A.

    2009-01-01

    Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about the recognition of sign language. Using such knowledge can improve ASLR because it can give an indication which elements or phases of a sign are important for its meaning. Also, the current generation of

  3. Automatic attention does not equal automatic fear: preferential attention without implicit valence.

    Science.gov (United States)

    Purkis, Helena M; Lipp, Ottmar V

    2007-05-01

    Theories of nonassociative fear acquisition hold that humans have an innate predisposition for some fears, such as fear of snakes and spiders. This predisposition may be mediated by an evolved fear module (Ohman & Mineka, 2001) that responds to basic perceptual features of threat stimuli by directing attention preferentially and generating an automatic fear response. Visual search and affective priming tasks were used to examine attentional processing and implicit evaluation of snake and spider pictures in participants with different explicit attitudes; controls (n = 25) and snake and spider experts (n = 23). Attentional processing and explicit evaluation were found to diverge; snakes and spiders were preferentially attended to by all participants; however, they were negative only for controls. Implicit evaluations of dangerous and nondangerous snakes and spiders, which have similar perceptual features, differed for expert participants, but not for controls. The authors suggest that although snakes and spiders are preferentially attended to, negative evaluations are not automatically elicited during this processing.

  4. AMPS/PC - AUTOMATIC MANUFACTURING PROGRAMMING SYSTEM

    Science.gov (United States)

    Schroer, B. J.

    1994-01-01

    The AMPS/PC system is a simulation tool designed to aid the user in defining the specifications of a manufacturing environment and then automatically writing code for the target simulation language, GPSS/PC. The domain of problems that AMPS/PC can simulate is manufacturing assembly lines with subassembly lines and manufacturing cells. The user defines the problem domain by responding to questions from the interface program. Based on the responses, the interface program creates an internal problem specification file. This file includes the manufacturing process network flow and the attributes for all stations, cells, and stock points. AMPS then uses the problem specification file as input for the automatic code generator program to produce a simulation program in the target language, GPSS. The output of the generator program is the source code of the corresponding GPSS/PC simulation program. The system runs entirely on an IBM PC running PC DOS Version 2.0 or higher and is written in Turbo Pascal Version 4, requiring 640K memory and one 360K disk drive. To execute the GPSS program, the PC must have the GPSS/PC System Version 2.0 from Minuteman Software resident. The AMPS/PC program was developed in 1988.
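
    To give a flavour of this kind of spec-to-code generation (not the actual AMPS/PC generator, whose specification file format is more elaborate), the toy function below turns a minimal station list into a GPSS-style block listing for a single serial line; the block names follow classic GPSS usage.

        # Toy generator: emit a GPSS-style block listing for a serial line of stations.
        def generate_gpss(interarrival, stations, n_parts):
            lines = [f"          GENERATE  {interarrival}    ; parts arrive"]
            for name, service_time in stations:
                lines += [
                    f"          QUEUE     {name}Q",
                    f"          SEIZE     {name}",
                    f"          DEPART    {name}Q",
                    f"          ADVANCE   {service_time}    ; processing at {name}",
                    f"          RELEASE   {name}",
                ]
            lines += ["          TERMINATE 1",
                      f"          START     {n_parts}"]
            return "\n".join(lines)

        if __name__ == "__main__":
            spec = [("DRILL", 7), ("ASSY", 12), ("TEST", 5)]
            print(generate_gpss(interarrival=10, stations=spec, n_parts=100))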

  5. FEMA Housing Assistance Owners - API

    Data.gov (United States)

    Department of Homeland Security — This dataset is an aggregated, non-PII dataset of the FEMA Housing Assistance Program for house owners. The data was generated by FEMA's ECIM (Enterprise Coordination...

  6. FEMA Housing Assistance Renters - API

    Data.gov (United States)

    Department of Homeland Security — This dataset is an aggregated, non-PII dataset of the FEMA Housing Assistance Program for house renters. The data was generated by FEMA's ECIM (Enterprise Coordination...

  7. AMMOS: A Software Platform to Assist in silico Screening

    Directory of Open Access Journals (Sweden)

    Lagorce D.

    2009-12-01

    Full Text Available Three software packages based on the common platform AMMOS (Automated Molecular Mechanics Optimization tool for in silico Screening), for assisting virtual ligand screening, have recently been developed. DG-AMMOS allows the generation of 3D conformations of small molecules using distance geometry and molecular mechanics optimization. AMMOS_SmallMol is a package for the structural refinement of compound collections that can be used prior to docking experiments. AMMOS_ProtLig is a package for the energy minimization of protein-ligand complexes. It performs an automatic procedure of molecular mechanics minimization at different levels of flexibility - from rigid to fully flexible structures of both the ligand and the receptor. The packages have been tested on small molecules with high structural diversity and on protein binding sites of completely different geometries and physicochemical properties. The platform is developed as open source software and can be used in a broad range of in silico drug design studies.

  8. Slurry sampling-microwave assisted leaching prior to hydride generation-pervaporation-atomic fluorescence detection for the determination of extractable arsenic in soil.

    Science.gov (United States)

    Caballo-López, A; Luque De Castro, M D

    2003-05-01

    A flow injection-pervaporation method, where the sample was introduced as slurry, has been developed for the continuous derivatization and determination of arsenic in soil by hydride generation-atomic fluorescence spectrometry. The removal of arsenic is achieved with the help of a microwave digestor, which facilitates an on-line leaching in the flow injection manifold. Slurries, prepared by mixing the soil (particle size <65 microm) with 6 mol L(-1) HCl, were magnetically stirred for 3 min, and while stirring, the pump aspirated the aliquot and filled the loop (500 microL) of the injection valve. An industrial soil and five types of soil (sandy, clayey, slimy, limy, organic) were selected for the optimization of the leaching and determination steps of arsenic, respectively. The results obtained from three certified reference materials [stream sediment GBW 07311 (188 microg/mL As), river sediment CRM 320 (76.7 microg/mL As), and soil GBW 07405 (412 microg/mL As)] using direct calibration against aqueous standards demonstrate the reliability of the method. The relative standard deviation for within-laboratory reproducibility was 4.5%.

  9. Automatic control study of the icing research tunnel refrigeration system

    Science.gov (United States)

    Kieffer, Arthur W.; Soeder, Ronald H.

    1991-02-01

    The Icing Research Tunnel (IRT) at the NASA Lewis Research Center is a subsonic, closed-return atmospheric tunnel. The tunnel includes a heat exchanger and a refrigeration plant to achieve the desired air temperature and a spray system to generate the type of icing conditions that would be encountered by aircraft. At the present time, the tunnel air temperature is controlled by manual adjustment of freon refrigerant flow control valves. An upgrade of this facility calls for these control valves to be adjusted by an automatic controller. The digital computer simulation of the IRT refrigeration plant and the automatic controller that was used in the simulation are discussed.

  11. Microbial fuel cell assisted band gap narrowed TiO2 for visible light-induced photocatalytic activities and power generation.

    Science.gov (United States)

    Khan, Mohammad Ehtisham; Khan, Mohammad Mansoob; Min, Bong-Ki; Cho, Moo Hwan

    2018-01-29

    This paper reports a simple, biogenic and green approach to obtain narrow band gap and visible light-active TiO2 nanoparticles. Commercial white TiO2 (w-TiO2) was treated in the cathode chamber of a microbial fuel cell (MFC), which produced modified light gray TiO2 (g-TiO2) nanoparticles. DRS, PL, XRD, EPR, HR-TEM, and XPS measurements were performed to understand the band gap decline of g-TiO2. The optical study revealed a significant decrease in the band gap of the g-TiO2 (Eg = 2.80 eV) compared to the w-TiO2 (Eg = 3.10 eV). XPS revealed variations in the surface states, composition, Ti4+ to Ti3+ ratio, and oxygen vacancies in the g-TiO2. The Ti3+ and oxygen vacancy-induced enhanced visible light photocatalytic activity of g-TiO2 was confirmed by degrading different model dyes. The enhanced photoelectrochemical response under visible light irradiation further supported the improved performance of the g-TiO2, owing to a decrease in the electron transfer resistance and an increase in the charge transfer rate. During the TiO2 treatment process, electricity generation in the MFC was also observed, which was ~0.3979 V, corresponding to a power density of 70.39 mW/m2. This study confirms that narrow band gap TiO2 can be easily obtained and used effectively as a photocatalyst and photoelectrode material.

  12. Annual review in automatic programming

    CERN Document Server

    Goodman, Richard

    2014-01-01

    Annual Review in Automatic Programming focuses on the techniques of automatic programming used with digital computers. Topics covered range from the design of machine-independent programming languages to the use of recursive procedures in ALGOL 60. A multi-pass translation scheme for ALGOL 60 is described, along with some commercial source languages. The structure and use of the syntax-directed compiler is also considered.Comprised of 12 chapters, this volume begins with a discussion on the basic ideas involved in the description of a computing process as a program for a computer, expressed in

  13. Algorithms for skiascopy measurement automatization

    Science.gov (United States)

    Fomins, Sergejs; Trukša, Renārs; Krūmiņa, Gunta

    2014-10-01

    An automatic dynamic infrared retinoscope was developed, which allows the procedure to be run at a much higher rate. Our system uses a USB image sensor with up to 180 Hz refresh rate, equipped with a long-focus objective and an 850 nm infrared light-emitting diode as the light source. Two servo motors driven by a microprocessor control the rotation of the semitransparent mirror and the motion of the retinoscope chassis. The image of the eye pupil reflex is captured via software and analyzed along the horizontal plane. An algorithm for automatic accommodative state analysis is developed based on the intensity changes of the fundus reflex.
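
    The abstract states that the captured pupil reflex is analysed along the horizontal plane via intensity changes. One plausible reduction of that idea, fitting the horizontal brightness slope across the pupil in each frame, is sketched below; it is purely illustrative and not the authors' algorithm.

        # Illustrative analysis of a retinoscopic reflex: fit the horizontal intensity
        # slope across the pupil region of each frame. The sign/steepness of the slope
        # is one simple proxy for how the reflex fills the pupil.
        import numpy as np

        def horizontal_reflex_slope(frame, row=None):
            """frame: 2-D grayscale image of the pupil; returns the slope of a linear
            fit to the intensity profile along the chosen row (default: middle row)."""
            if row is None:
                row = frame.shape[0] // 2
            profile = frame[row].astype(float)
            x = np.arange(profile.size)
            slope, _ = np.polyfit(x, profile, 1)
            return slope

        if __name__ == "__main__":
            x = np.linspace(0, 1, 120)
            bright_left = np.tile(1.0 - x, (120, 1))   # reflex brighter on the left
            bright_right = np.tile(x, (120, 1))        # reflex brighter on the right
            print("left-bright slope :", round(horizontal_reflex_slope(bright_left), 4))
            print("right-bright slope:", round(horizontal_reflex_slope(bright_right), 4))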

  14. Traduction automatique et terminologie automatique (Automatic Translation and Automatic Terminology

    Science.gov (United States)

    Dansereau, Jules

    1978-01-01

    An exposition of reasons why a system of automatic translation could not use a terminology bank except as a source of information. The fundamental difference between the two tools is explained and examples of translation and mistranslation are given as evidence of the limits and possibilities of each process. (Text is in French.) (AMH)

  15. 77 FR 53914 - Horton Automatics, Inc., a Subsidiary of Overhead Door Corporation Including On-Site Leased...

    Science.gov (United States)

    2012-09-04

    ... day of August 2012. Elliott S. Kushner, Certifying Officer, Office of Trade Adjustment Assistance... Regarding Eligibility To Apply for Worker Adjustment Assistance In accordance with Section 223 of the Trade... of automatic sliding, swinging, and revolving doors. The notice was published in the Federal Register...

  16. Automatic design of decision-tree algorithms with evolutionary algorithms.

    Science.gov (United States)

    Barros, Rodrigo C; Basgalupp, Márcio P; de Carvalho, André C P L F; Freitas, Alex A

    2013-01-01

    This study reports the empirical analysis of a hyper-heuristic evolutionary algorithm that is capable of automatically designing top-down decision-tree induction algorithms. Top-down decision-tree algorithms are of great importance, considering their ability to provide an intuitive and accurate knowledge representation for classification problems. The automatic design of these algorithms seems timely, given the large literature accumulated over more than 40 years of research in the manual design of decision-tree induction algorithms. The proposed hyper-heuristic evolutionary algorithm, HEAD-DT, is extensively tested using 20 public UCI datasets and 10 microarray gene expression datasets. The algorithms automatically designed by HEAD-DT are compared with traditional decision-tree induction algorithms, such as C4.5 and CART. Experimental results show that HEAD-DT is capable of generating algorithms which are significantly more accurate than C4.5 and CART.
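
    To convey the flavour of hyper-heuristic design of decision-tree learners (a drastically reduced illustration, not HEAD-DT itself), the sketch below evolves a small configuration of scikit-learn's DecisionTreeClassifier by random mutation and keeps whichever design cross-validates best on a toy dataset.

        # Drastically simplified "design a decision-tree algorithm" loop: evolve a few
        # hyper-parameters of a decision-tree learner by random mutation + selection.
        import random
        from sklearn.datasets import load_iris
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        SEARCH_SPACE = {
            "criterion": ["gini", "entropy"],
            "max_depth": [2, 3, 4, 5, None],
            "min_samples_split": [2, 5, 10, 20],
        }

        def fitness(design, X, y):
            clf = DecisionTreeClassifier(random_state=0, **design)
            return cross_val_score(clf, X, y, cv=5).mean()

        def mutate(design):
            child = dict(design)
            key = random.choice(list(SEARCH_SPACE))
            child[key] = random.choice(SEARCH_SPACE[key])
            return child

        if __name__ == "__main__":
            random.seed(0)
            X, y = load_iris(return_X_y=True)
            best = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
            best_fit = fitness(best, X, y)
            for _ in range(30):                       # tiny evolutionary loop
                candidate = mutate(best)
                candidate_fit = fitness(candidate, X, y)
                if candidate_fit >= best_fit:
                    best, best_fit = candidate, candidate_fit
            print("best design:", best, "cv accuracy:", round(best_fit, 3))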

  17. Automatic recognizing of vocal fold disorders from glottis images.

    Science.gov (United States)

    Huang, Chang-Chiun; Leu, Yi-Shing; Kuo, Chung-Feng Jeffrey; Chu, Wen-Lin; Chu, Yueng-Hsiang; Wu, Han-Cheng

    2014-09-01

    The laryngeal video stroboscope is an important instrument for examining glottal diseases and for reading vocal fold images and voice quality in clinical diagnosis. This study aimed to develop a medical system with automatic intelligent recognition of dynamic images. The static images of the glottis opening to the largest extent and closing to the smallest extent were screened automatically using color space transformation and image preprocessing, and the glottal area was quantized. Because tongue base movements affect the position of the laryngoscope and saliva can result in unclear images, this study used a gray-scale adaptive entropy value to set a threshold and establish an elimination system. The proposed system improves the automatic capture of glottis images and achieves an accuracy rate of 96%. In addition, the glottal area and the area segmentation threshold were calculated effectively, the glottal area segmentation was corrected, and the glottal area waveform was drawn automatically to assist in vocal fold diagnosis. In developing the intelligent recognition system for vocal fold disorders, this study analyzed the characteristic values of four vocal fold patterns, namely normal vocal fold, vocal fold paralysis, vocal fold polyp, and vocal fold cyst. It also used a support vector machine classifier to identify vocal fold disorders and achieved an identification accuracy rate of 98.75%. The results can serve as a very valuable reference for diagnosis. © IMechE 2014.
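
    The final classification stage, an SVM over per-examination feature vectors, can be illustrated schematically as below; the features here are synthetic stand-ins generated with scikit-learn, since the paper's actual glottal-area descriptors are not reproduced in the abstract.

        # Schematic of the classification stage: an SVM over per-examination feature
        # vectors. Synthetic features stand in for the glottal-area descriptors.
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Pretend each sample is one examination described by a handful of glottal
        # features, labelled as one of four classes (normal, paralysis, polyp, cyst).
        X, y = make_classification(n_samples=400, n_features=8, n_informative=6,
                                   n_classes=4, n_clusters_per_class=1, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        model.fit(X_train, y_train)
        print("held-out accuracy:", round(model.score(X_test, y_test), 3))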

  18. Automatic Recognition of Object Names in Literature

    Science.gov (United States)

    Bonnin, C.; Lesteven, S.; Derriere, S.; Oberto, A.

    2008-08-01

    SIMBAD is a database of astronomical objects that provides (among other things) their bibliographic references in a large number of journals. Currently, these references have to be entered manually by librarians who read each paper. To cope with the increasing number of papers, CDS is developing a tool to assist the librarians in their work, taking advantage of the Dictionary of Nomenclature of Celestial Objects, which keeps track of object acronyms and of their origin. The program searches for object names directly in PDF documents by comparing the words with all the formats stored in the Dictionary of Nomenclature. It also searches for variable star names based on constellation names and for a large list of usual names such as Aldebaran or the Crab. Object names found in the documents often correspond to several astronomical objects. The system retrieves all possible matches, displays them with their object type given by SIMBAD, and lets the librarian make the final choice. The bibliographic reference can then be automatically added to the object identifiers in the database. Besides, the systematic usage of the Dictionary of Nomenclature, which is updated manually, made it possible to check it automatically and to detect errors and inconsistencies. Last but not least, the program collects some additional information such as the position of the object names in the document (in the title, subtitle, abstract, table, figure caption...) and their number of occurrences. In the future, this will make it possible to calculate the 'weight' of an object in a reference and to provide SIMBAD users with important new information, which will help them to find the most relevant papers in the object reference list.
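
    A miniature version of the matching step, scanning text for identifiers that follow acronym-plus-number formats and recording where and how often they occur, is sketched below; the three patterns are illustrative examples only, not the actual contents of the Dictionary of Nomenclature.

        # Miniature object-name scanner: find identifiers matching a few illustrative
        # acronym formats and report their positions and occurrence counts.
        import re
        from collections import Counter

        # Hypothetical formats; the real Dictionary of Nomenclature holds many more.
        FORMATS = {
            "NGC":     r"\bNGC\s?\d{1,4}\b",
            "Messier": r"\bM\s?\d{1,3}\b",
            "HD":      r"\bHD\s?\d{1,6}\b",
        }

        def scan(text):
            hits, counts = [], Counter()
            for fmt, pattern in FORMATS.items():
                for match in re.finditer(pattern, text):
                    hits.append((match.group(), fmt, match.start()))
                    counts[match.group()] += 1
            return hits, counts

        if __name__ == "__main__":
            sample = ("We observed NGC 7027 and HD 189733 twice; "
                      "NGC 7027 was also compared with M 31.")
            hits, counts = scan(sample)
            for ident, fmt, pos in hits:
                print(f"{ident!r} (format {fmt}) at character {pos}")
            print("occurrences:", dict(counts))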

  19. Automatic Synthesis of Anthropomorphic Pulmonary CT Phantoms

    Science.gov (United States)

    Jimenez-Carretero, Daniel; San Jose Estepar, Raul; Diaz Cacio, Mario; Ledesma-Carbayo, Maria J.

    2016-01-01

    The great density and structural complexity of pulmonary vessels and airways impose limitations on the generation of accurate reference standards, which are critical in training and in the validation of image processing methods for features such as pulmonary vessel segmentation or artery–vein (AV) separations. The design of synthetic computed tomography (CT) images of the lung could overcome these difficulties by providing a database of pseudorealistic cases in a constrained and controlled scenario where each part of the image is differentiated unequivocally. This work demonstrates a complete framework to generate computational anthropomorphic CT phantoms of the human lung automatically. Starting from biological and image-based knowledge about the topology and relationships between structures, the system is able to generate synthetic pulmonary arteries, veins, and airways using iterative growth methods that can be merged into a final simulated lung with realistic features. A dataset of 24 labeled anthropomorphic pulmonary CT phantoms were synthesized with the proposed system. Visual examination and quantitative measurements of intensity distributions, dispersion of structures and relationships between pulmonary air and blood flow systems show good correspondence between real and synthetic lungs (p > 0.05 with low Cohen’s d effect size and AUC values), supporting the potentiality of the tool and the usefulness of the generated phantoms in the biomedical image processing field. PMID:26731653
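
    The iterative growth idea can be conveyed with a toy recursive generator of a branching 'vessel' tree in 3D (below); it captures only the general spirit of growing tubular structures, not the anatomical and image-based constraints used to build the actual phantoms.

        # Toy recursive generator of a branching tree of 3-D segments, in the spirit of
        # iterative vessel/airway growth (no anatomical constraints are modelled).
        import numpy as np

        def grow(start, direction, length, depth, rng, spread=0.6, segments=None):
            if segments is None:
                segments = []
            if depth == 0 or length < 1.0:
                return segments
            end = start + direction * length
            segments.append((start, end))
            for _ in range(2):                                   # bifurcate
                new_dir = direction + rng.normal(scale=spread, size=3)
                new_dir /= np.linalg.norm(new_dir)
                grow(end, new_dir, length * 0.7, depth - 1, rng, spread, segments)
            return segments

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            tree = grow(start=np.zeros(3), direction=np.array([0.0, 0.0, 1.0]),
                        length=30.0, depth=6, rng=rng)
            print("generated", len(tree), "segments")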

  20. Automatic Synthesis of Anthropomorphic Pulmonary CT Phantoms.

    Directory of Open Access Journals (Sweden)

    Daniel Jimenez-Carretero

    Full Text Available The great density and structural complexity of pulmonary vessels and airways impose limitations on the generation of accurate reference standards, which are critical in training and in the validation of image processing methods for features such as pulmonary vessel segmentation or artery-vein (AV) separations. The design of synthetic computed tomography (CT) images of the lung could overcome these difficulties by providing a database of pseudorealistic cases in a constrained and controlled scenario where each part of the image is differentiated unequivocally. This work demonstrates a complete framework to generate computational anthropomorphic CT phantoms of the human lung automatically. Starting from biological and image-based knowledge about the topology and relationships between structures, the system is able to generate synthetic pulmonary arteries, veins, and airways using iterative growth methods that can be merged into a final simulated lung with realistic features. A dataset of 24 labeled anthropomorphic pulmonary CT phantoms were synthesized with the proposed system. Visual examination and quantitative measurements of intensity distributions, dispersion of structures and relationships between pulmonary air and blood flow systems show good correspondence between real and synthetic lungs (p > 0.05, with low Cohen's d effect size and AUC values), supporting the potentiality of the tool and the usefulness of the generated phantoms in the biomedical image processing field.