WorldWideScience

Sample records for automatic coding method

  1. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. The program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary consisted of 11 files, one for the organ codes and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself from among the upper- and lower-level codes of the selected one, which were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. The proper pathology code was obtained in a similar fashion to the organ code selection. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data processing programs was possible. The program has the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, it can be used for automation of routine work in the department of radiology.
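
    The two-stage selection described above amounts to a pair of dictionary lookups keyed on the organ code and its leading digit. The following is a minimal Python sketch with hypothetical dictionary entries; the actual program used FoxBASE dictionary files and interactive on-screen selection.

    # Hypothetical sketch of the two-stage ACR lookup described above; the
    # dictionary contents are invented for illustration only.
    ORGAN_CODES = {
        "13": "gastrointestinal tract",        # hypothetical entry
        "131": "stomach",                      # hypothetical entry
    }
    # One pathology dictionary per leading organ digit ("1" here).
    PATHOLOGY_CODES = {
        "1": {"3661": "ulcer, benign"},        # hypothetical entry
    }

    def build_acr_code(organ_code: str, pathology_code: str) -> str:
        """Combine an organ and a pathology code, e.g. '131' + '3661' -> '131.3661'."""
        if organ_code not in ORGAN_CODES:
            raise KeyError(f"unknown organ code {organ_code}")
        # The first digit of the organ code selects the pathology dictionary.
        pathology_dict = PATHOLOGY_CODES[organ_code[0]]
        if pathology_code not in pathology_dict:
            raise KeyError(f"unknown pathology code {pathology_code}")
        return f"{organ_code}.{pathology_code}"

    print(build_acr_code("131", "3661"))       # -> 131.3661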

  2. Focusing Automatic Code Inspections

    NARCIS (Netherlands)

    Boogerd, C.J.

    2010-01-01

    Automatic Code Inspection tools help developers in early detection of defects in software. A well-known drawback of many automatic inspection approaches is that they yield too many warnings and require a clearer focus. In this thesis, we provide such focus by proposing two methods to prioritize

  3. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code into the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. Translation errors may thus go undetected, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It also removes human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications
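
    As a rough illustration of the idea, the sketch below translates a toy key-value input deck field by field and writes the kind of verification log described above. The field names, mapping table and file format are hypothetical and far simpler than real VSOP input decks.

    # Minimal sketch of a field-by-field input-model translator with a
    # verification log; mapping table and formats are hypothetical.
    FIELD_MAP = {"FUELTEMP": "T_FUEL", "MODTEMP": "T_MOD"}   # old name -> new name
    MEANINGS = {"T_FUEL": "fuel temperature [K]", "T_MOD": "moderator temperature [K]"}

    def translate(old_lines, log_path="translation.log"):
        new_lines, log = [], []
        for line in old_lines:
            name, value = [tok.strip() for tok in line.split("=", 1)]
            new_name = FIELD_MAP[name]
            new_lines.append(f"{new_name} = {value}")
            # Permanent record of every translated variable for later verification.
            log.append(f"{name} -> {new_name} = {value} ({MEANINGS[new_name]})")
        with open(log_path, "w") as f:
            f.write("\n".join(log) + "\n")
        return new_lines

    print(translate(["FUELTEMP = 1200.0", "MODTEMP = 600.0"]))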

  4. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which could automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  5. Stiffness and the automatic selection of ODE codes

    International Nuclear Information System (INIS)

    Shampine, L.F.

    1984-01-01

    The author describes the basic ideas behind the most popular methods for the numerical solution of ordinary differential equations (ODEs). He takes up the qualitative behavior of solutions of ODEs and its relation to the propagation of numerical error. Codes for ODEs are intended either for stiff problems or for non-stiff problems. The difference is explained. Users of codes do not have the information needed to recognize stiffness. A code, DEASY, which automatically recognizes stiffness and selects a suitable method is described.
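
    A modern way to see the same idea in action is SciPy's LSODA integrator, which switches automatically between non-stiff (Adams) and stiff (BDF) methods at run time; the sketch below applies it to a stiff van der Pol problem. This is only an analogue of DEASY's behaviour, not the DEASY code itself.

    # Automatic stiff/non-stiff method selection via LSODA (SciPy sketch).
    from scipy.integrate import solve_ivp

    def van_der_pol(t, y, mu=1000.0):        # strongly stiff for large mu
        return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

    sol = solve_ivp(van_der_pol, (0.0, 3000.0), [2.0, 0.0], method="LSODA", rtol=1e-6)
    print(sol.status, sol.t.size)            # LSODA handles the stiffness switching automatically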

  6. Automatic code generation for distributed robotic systems

    International Nuclear Information System (INIS)

    Jones, J.P.

    1993-01-01

    Hetero Helix is a software environment which supports relatively large robotic system development projects. The environment supports a heterogeneous set of message-passing LAN-connected common-bus multiprocessors, but the programming model seen by software developers is a simple shared memory. The conceptual simplicity of shared memory makes it an extremely attractive programming model, especially in large projects where coordinating a large number of people can itself become a significant source of complexity. We present results from three system development efforts conducted at Oak Ridge National Laboratory over the past several years. Each of these efforts used automatic software generation to create 10 to 20 percent of the system

  7. Automatic Structure-Based Code Generation from Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Westergaard, Michael

    2010-01-01

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs... The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF)...

  8. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
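
    A minimal usage sketch, assuming the tangent package's grad() entry point as described by the authors; the returned derivative is itself ordinary Python source produced by source code transformation.

    # Usage sketch of source-code-transformation AD with the tangent package
    # (assumed API: tangent.grad returns a new derivative function).
    import tangent

    def f(x):
        return x * x * x            # simple cubic, df/dx = 3 x^2

    df = tangent.grad(f)            # generates a new, readable Python function
    print(df(2.0))                  # expected 12.0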

  9. Production ready feature recognition based automatic group technology part coding

    Energy Technology Data Exchange (ETDEWEB)

    Ames, A.L.

    1990-01-01

    During the past four years, a feature recognition based expert system for automatically performing group technology part coding from solid model data has been under development. The system has become a production quality tool, capable of quickly generating the geometry-based portions of a part code with no human intervention. It has been tested on over 200 solid models, half of which are models of production Sandia designs. Its performance rivals that of humans performing the same task, often surpassing them in speed and uniformity. The feature recognition capability developed for part coding is being extended to support other applications, such as manufacturability analysis, automatic decomposition (for finite element meshing and machining), and assembly planning. Initial surveys of these applications indicate that the current capability will provide a strong basis for other applications and that extensions toward more global geometric reasoning and tighter coupling with solid modeler functionality will be necessary.

  10. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizer which reads sequential assembler code and at the output provides parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing. After that, static single assignment conversion is done. Based on the data flow graph, the parallelization algorithm distributes instructions to different cores. Once the sequential code is parallelized by the parallelization algorithm, registers are allocated with the linear allocation algorithm, and the result at the end of the process is distributed assembler code for each of the cores. In the paper we evaluate the speedup of a matrix multiplication example, which was processed by the assembly code parallelizer. The result is an almost linear speedup of code execution, which increases with the number of cores. The speedup on two cores is 1.99, while on 16 cores the speedup is 13.88.

  11. AUTOET code (a code for automatically constructing event trees and displaying subsystem interdependencies)

    International Nuclear Information System (INIS)

    Wilson, J.R.; Burdick, G.R.

    1977-06-01

    This is a user's manual for AUTOET I and II. AUTOET I is a computer code for automatic event tree construction. It is designed to incorporate and display subsystem interdependencies and common or key component dependencies in the event tree format. The code is written in FORTRAN IV for the CDC Cyber 76 using the Integrated Graphics System (IGS). AUTOET II incorporates consequence and risk calculations, in addition to some other refinements. 5 figures

  12. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have empowered parallelism broadly. A couple of compilers have been updated to address the developing challenges of synchronization and threading. Appropriate program and algorithm classification will, to a great extent, help software engineers find opportunities for effective parallelization. In the present work we investigated current species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms was chosen that matches the structure with different issues and performs the given task. We tested these algorithms utilizing existing automatic species extraction tools along with the Bones compiler. We added functionalities to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented the new theories in the tool, enabling automatic characterization of program code.

  13. The UPSF code: a metaprogramming-based high-performance automatically parallelized plasma simulation framework

    Science.gov (United States)

    Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao

    2017-10-01

    UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility by using cutting-edge techniques supported by the C++17 standard. Through the use of metaprogramming techniques, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, like Vlasov, particle-in-cell (PIC), fluid, Fokker-Planck, and their variants and hybrid methods. Through C++ metaprogramming, a single code can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structure and accelerate matrix and tensor operations with BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability, for the electrostatic and electromagnetic situations respectively, are presented to show the validation and performance of the UPSF code.

  14. Development of tools for automatic generation of PLC code

    CERN Document Server

    Koutli, Maria; Rochez, Jacques

    This Master's thesis was performed at CERN, specifically in the EN-ICE-PLC section. The thesis describes the integration of two PLC platforms, based on the CODESYS development tool, into the CERN-defined industrial framework, UNICOS. CODESYS is a development tool for PLC programming, based on the IEC 61131-3 standard, and is adopted by many PLC manufacturers. The two PLC development environments are SoMachine from Schneider and TwinCAT from Beckhoff. The two CODESYS-compatible PLCs should be controlled by the Siemens SCADA system, WinCC OA. The framework includes a library of function blocks (objects) for the PLC programs and a software tool for automatic generation of the PLC code based on this library, called UAB. The integration aimed to give a solution shared by both PLC platforms and was based on the PLCopen XML scheme. The developed tools were demonstrated by creating a control application for both PLC environments and testing the behavior of the library code.

  15. Three Methods for Occupation Coding Based on Statistical Learning

    Directory of Open Access Journals (Sweden)

    Gweon Hyukjun

    2017-03-01

    Occupation coding, an important task in official statistics, refers to coding a respondent's text answer into one of many hundreds of occupation codes. To date, occupation coding is still at least partially conducted manually, at great expense. We propose three methods for automatic coding: combining separate models for the detailed occupation codes and for aggregate occupation codes, a hybrid method that combines a duplicate-based approach with a statistical learning algorithm, and a modified nearest neighbor approach. Using data from the German General Social Survey (ALLBUS), we show that the proposed methods improve on both the coding accuracy of the underlying statistical learning algorithm and the coding accuracy of duplicates where duplicates exist. Further, we find that defining duplicates based on n-gram variables (a concept from text mining) is preferable to a definition based on exact string matches.
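
    The hybrid idea can be sketched as a duplicate lookup with a statistical-learning fallback. The toy example below uses scikit-learn with made-up job titles and hypothetical occupation codes; it is not the authors' exact configuration.

    # Hybrid occupation-coding sketch: exact duplicates reuse the known code,
    # everything else falls back to a learned classifier (illustrative data only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = ["nurse", "truck driver", "school teacher"]   # toy answers
    train_codes = ["2231", "8332", "2314"]                      # hypothetical codes

    duplicates = dict(zip(train_texts, train_codes))            # duplicate-based step
    model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                          LogisticRegression(max_iter=1000))
    model.fit(train_texts, train_codes)                         # statistical-learning step

    def code_answer(text: str) -> str:
        key = text.strip().lower()
        if key in duplicates:
            return duplicates[key]                              # reuse the duplicate's code
        return model.predict([text])[0]                         # fall back to the classifier

    print(code_answer("nurse"), code_answer("lorry driver"))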

  16. ANALYSIS OF EXISTING AND PROSPECTIVE TECHNICAL CONTROL SYSTEMS OF NUMERIC CODES AUTOMATIC BLOCKING

    Directory of Open Access Journals (Sweden)

    A. M. Beznarytnyy

    2013-09-01

    Purpose. To identify the characteristic features of the technical control systems for numeric-code automatic blocking, identify their advantages and disadvantages, analyze the possibility of their use for diagnosing the status of automatic block devices, and set targets for the development of new diagnostic systems. Methodology. To achieve these targets, a theoretical-analytical method and the method of functional analysis have been used. Findings. The analysis of existing and future facilities for remote control and diagnostics of automatic block devices showed that the existing diagnostic systems are not sufficiently informative and are designed primarily to monitor discrete parameters, which in turn does not allow a decision-support subsystem to be constructed. For the development of new technical diagnostic systems it was proposed to use the principle of centralized, distributed processing of diagnostic data and to include a decision-support subsystem in the diagnostics system; this will reduce the amount of work needed to maintain the blocking devices and reduce the recovery time after a failure occurs. Originality. The currently existing technical control facilities for automatic blocking cannot provide a full assessment of the state of the block-section signalling and interlocking devices. Criteria for the development of new technical diagnostic systems with increased amounts of diagnostic information and its automatic analysis were proposed. Practical value. These analysis results can be used in practice to select technical controls for automatic block devices, as well as for the further development of automatic block diagnostic systems, allowing a gradual transition from a planned preventive maintenance model to service based on the actual state of the monitored devices.

  17. Bug Forecast: A Method for Automatic Bug Prediction

    Science.gov (United States)

    Ferenc, Rudolf

    In this paper we present an approach and a toolset for automatic bug prediction during software development and maintenance. The toolset extends the Columbus source code quality framework, which is able to integrate into the regular builds, analyze the source code, calculate different quality attributes like product metrics and bad code smells; and monitor the changes of these attributes. The new bug forecast toolset connects to the bug tracking and version control systems and assigns the reported and fixed bugs to the source code classes from the past. It then applies machine learning methods to learn which values of which quality attributes typically characterized buggy classes. Based on this information it is able to predict bugs in current and future versions of the classes.
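
    The learning step can be pictured as fitting a classifier on past class-level quality metrics labelled with bug-fix information from the trackers. The sketch below uses scikit-learn with invented metric values; it does not reproduce the Columbus toolset or the authors' feature set.

    # Bug-prediction sketch: learn from past metrics + bug labels, score current classes.
    from sklearn.ensemble import RandomForestClassifier

    # columns: lines of code, cyclomatic complexity, number of code smells (hypothetical)
    past_metrics = [[120, 4, 0], [900, 35, 7], [300, 12, 2], [1500, 60, 11]]
    had_bug      = [0, 1, 0, 1]                        # from bug tracking / version control

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(past_metrics, had_bug)

    current_metrics = [[800, 30, 5]]                   # a class in the current version
    print(model.predict_proba(current_metrics)[0][1])  # predicted bug probability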

  18. Design of Wireless Automatic Synchronization for the Low-Frequency Coded Ground Penetrating Radar

    Directory of Open Access Journals (Sweden)

    Zhenghuan Xia

    2015-01-01

    Low-frequency coded ground penetrating radar (GPR) with a pair of wire dipole antennas has some advantages for deep detection. Due to the large distance between the two antennas, the synchronization design is a major challenge in implementing the GPR system. This paper proposes a simple and stable wireless automatic synchronization method based on our developed GPR system, which does not need any synchronization chips or modules and reduces the cost of the hardware system. The transmitter emits the synchronization preamble and pseudorandom binary sequence (PRBS) at an appropriate time interval, while the receiver automatically estimates the synchronization time and receives the returned signal from the underground targets. All the processes are performed in a single FPGA. The performance of the proposed synchronization method is validated with experiments.

  19. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling.

    Directory of Open Access Journals (Sweden)

    Florencio Rusty Punzalan

    Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment and the simulation results are compared with the experimental data. We conclude that the proposed program code
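
    The replacement scheme can be illustrated on a toy reaction-diffusion equation, where the spatial second-derivative term is substituted by its central-difference expression before the update code is written out. The model and constants below are invented for illustration and are not one of the cited cell models.

    # Toy replacement-scheme illustration: d2V/dx2 is replaced by a central
    # difference in an explicit 1-D reaction-diffusion update (generic stand-in).
    import numpy as np

    nx, dx, dt, D = 50, 0.1, 0.01, 0.1
    V = np.zeros(nx)
    V[0] = 1.0                                   # stimulate one end

    def step(V):
        Vn = V.copy()
        for i in range(1, nx - 1):
            d2V = (V[i + 1] - 2 * V[i] + V[i - 1]) / dx**2    # replaced PDE term
            Vn[i] = V[i] + dt * (D * d2V - 0.1 * V[i])        # diffusion + simple decay
        return Vn                                 # boundary cells kept fixed (Dirichlet)

    for _ in range(100):
        V = step(V)
    print(V[:5])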

  20. Occupational self-coding and automatic recording (OSCAR): a novel web-based tool to collect and code lifetime job histories in large population-based studies.

    Science.gov (United States)

    De Matteis, Sara; Jarvis, Deborah; Young, Heather; Young, Alan; Allen, Naomi; Potts, James; Darnton, Andrew; Rushton, Lesley; Cullinan, Paul

    2017-03-01

    Objectives The standard approach to the assessment of occupational exposures is through the manual collection and coding of job histories. This method is time-consuming and costly and makes it potentially unfeasible to perform high quality analyses on occupational exposures in large population-based studies. Our aim was to develop a novel, efficient web-based tool to collect and code lifetime job histories in the UK Biobank, a population-based cohort of over 500 000 participants. Methods We developed OSCAR (occupations self-coding automatic recording) based on the hierarchical structure of the UK Standard Occupational Classification (SOC) 2000, which allows individuals to collect and automatically code their lifetime job histories via a simple decision-tree model. Participants were asked to find each of their jobs by selecting appropriate job categories until they identified their job title, which was linked to a hidden 4-digit SOC code. For each occupation a job title in free text was also collected to estimate Cohen's kappa (κ) inter-rater agreement between SOC codes assigned by OSCAR and an expert manual coder. Results OSCAR was administered to 324 653 UK Biobank participants with an existing email address between June and September 2015. Complete 4-digit SOC-coded lifetime job histories were collected for 108 784 participants (response rate: 34%). Agreement between the 4-digit SOC codes assigned by OSCAR and the manual coder for a random sample of 400 job titles was moderately good [κ=0.45, 95% confidence interval (95% CI) 0.42-0.49], and improved when broader job categories were considered (κ=0.64, 95% CI 0.61-0.69 at a 1-digit SOC-code level). Conclusions OSCAR is a novel, efficient, and reasonably reliable web-based tool for collecting and automatically coding lifetime job histories in large population-based studies. Further application in other research projects for external validation purposes is warranted.

  1. FAST PALMPRINT AUTHENTICATION BY SOBEL CODE METHOD

    Directory of Open Access Journals (Sweden)

    Jyoti Malik

    2011-05-01

    The ideal real-time personal authentication system should be fast and accurate in automatically identifying a person's identity. In this paper, we have proposed a palmprint-based biometric authentication method with improvements in time and accuracy, so as to make it a real-time palmprint authentication system. Several edge detection methods, wavelet transforms, phase congruency etc. are available to extract line features from the palmprint. In this paper, multi-scale Sobel Code operators of different orientations (0°, 45°, 90°, and 135°) are applied to the palmprint to extract Sobel-Palmprint features in different directions. The extracted Sobel-Palmprint features are stored in a Sobel-Palmprint feature vector and matched using a sliding window with the Hamming distance similarity measurement method. The sliding window method is accurate but time-consuming. In this paper, we have improved the sliding window method so that the matching time is reduced. It is observed that there is a 39.36% improvement in matching time. In addition, a Min Max Threshold Range (MMTR) method is proposed that helps in increasing overall system accuracy by reducing the False Acceptance Rate (FAR). Experimental results indicate that the MMTR method improves the False Acceptance Rate drastically and the improvement in the sliding window method reduces the comparison time. The accuracy improvement and matching time improvement lead to the proposed real-time authentication system.
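
    A rough NumPy sketch of the matching pipeline: Sobel-style kernels at four orientations produce a binary code, and two codes are compared with a normalised Hamming distance. The kernels, thresholding and toy input are illustrative, not the exact operators or sliding-window scheme of the paper.

    # Orientation-coded Sobel responses plus Hamming-distance matching (sketch).
    import numpy as np
    from scipy.signal import convolve2d

    sobel_0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])       # 0 degrees
    kernels = [sobel_0, np.rot90(sobel_0)]                         # 0 and 90 degrees
    kernels += [np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),    # 45 degrees
                np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]])]    # 135 degrees

    def sobel_code(img):
        responses = np.stack([convolve2d(img, k, mode="same") for k in kernels])
        return (responses > 0).astype(np.uint8)       # 1 bit per orientation and pixel

    def hamming(code_a, code_b):
        return np.count_nonzero(code_a != code_b) / code_a.size

    rng = np.random.default_rng(0)
    a = rng.random((32, 32))                           # toy palmprint patch
    print(hamming(sobel_code(a), sobel_code(a + 0.01 * rng.random((32, 32)))))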

  2. CURRENT STATE ANALYSIS OF AUTOMATIC BLOCK SYSTEM DEVICES, METHODS OF ITS SERVICE AND MONITORING

    Directory of Open Access Journals (Sweden)

    A. M. Beznarytnyy

    2014-01-01

    Purpose. Development of a formalized description of the numeric-code automatic block system based on the analysis of characteristic failures of the automatic block system and the procedure of its maintenance. Methodology. Theoretical and analytical methods have been used for this research. Findings. Typical failures of automatic block systems were analyzed and the main causes of failures were identified. It was determined that the majority of failures occur due to defects in the maintenance system. Advantages and disadvantages of the current service technology for the automatic block system were analyzed. Work that can be automated by means of technical diagnostics was identified. A formal description of the numeric-code automatic block system as a graph in the state space of the system was carried out. Originality. A state graph of the numeric-code automatic block system that takes into account the gradual transition from the serviceable condition to the loss of efficiency was offered. This allows diagnostic information to be selected according to attributes and increases the effectiveness of recovery operations in the case of a malfunction. Practical value. The obtained analysis results and the proposed state graph can be used as the basis for the development of new diagnostic facilities for automatic block system devices, which in turn will improve the efficiency and service of automatic block devices in general.

  3. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the applicable technical standards (e.g. IEC 880) it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost and effort, a tool should be used which is developed independently from the development of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  4. A bar-code reader for an alpha-beta automatic counting system - FAG

    International Nuclear Information System (INIS)

    Levinson, S.; Shemesh, Y.; Ankry, N.; Assido, H.; German, U.; Peled, O.

    1996-01-01

    A bar-code laser system for sample number reading was integrated into the FAG Alpha-Beta automatic counting system. The sample identification by means of an attached bar-code label enables unmistakable and reliable attribution of results to the counted sample. Installation of the bar-code reader system required several modifications: Mechanical changes in the automatic sample changer, design and production of new sample holders, modification of the sample planchettes, changes in the electronic system, update of the operating software of the system (authors)

  5. Automatic Annotation Method on Learners' Opinions in Case Method Discussion

    Science.gov (United States)

    Samejima, Masaki; Hisakane, Daichi; Komoda, Norihisa

    2015-01-01

    Purpose: The purpose of this paper is to automatically annotate learners' opinions with an attribute of problem, solution, or no annotation, in order to support the learners' discussion without a facilitator. The case method aims at discussing problems and solutions in a target case. However, the learners miss discussing some of the problems and solutions.…

  6. APPLICATION OF FOURIER TRANSFORM AND WAVELET DECOMPOSITION FOR DECODING THE CONTINUOUS AUTOMATIC LOCOMOTIVE SIGNALING CODE

    Directory of Open Access Journals (Sweden)

    O. O. Hololobova

    2017-02-01

    Purpose. The existing system of automatic locomotive signaling (ALS) was developed at the end of the last century. This system uses the principle of a numerical code, implemented with relay technology, and is therefore exposed to various types of interference. Over the years the system has been upgraded several times, but the causes of faults and failures in its operation are still a subject of research. It is known that frequency and phase modulation of a signal have higher interference immunity than amplitude modulation. Therefore, the purpose of the article is to study the possibility of using frequency methods, such as Fourier series expansion and wavelet decomposition, to extract the informational component of the received code from ALS signals under the action of various types of interference. Methodology. Information unavailable in the time representation of the signal can be extracted by studying the signal in the frequency domain. The wavelet decomposition has been used for this purpose. This makes it possible to represent the local characteristics of the signal and to provide a time-frequency decomposition in two spaces at the same time. Due to the high accuracy of the signal representation it is possible to analyze the time localization of spectral components and eliminate interference components even when the interference frequency coincides with the signal carrier frequency. Findings. To compare the informativeness of the Fourier expansion and the wavelet decomposition, the reference and noisy signals of the green-light code were studied using the software package MATLAB. Detailed analysis of the obtained spectral characteristics showed that the wavelet decomposition provides a more correct decoding of the signal. Originality. Replacing the electromagnetic relays in the ALS system by microprocessor hardware involves the use of some mathematical tool for decoding, in order to
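
    The two analyses can be sketched on a synthetic keyed signal: a Fourier spectrum of the noisy code signal versus a multi-level wavelet decomposition of the same samples (here with PyWavelets instead of MATLAB). Frequencies, code timing and the wavelet choice are illustrative only.

    # Fourier vs. wavelet view of a noisy, amplitude-keyed code signal (sketch).
    import numpy as np
    import pywt

    fs = 1000                                             # 1 kHz sampling
    t = np.arange(0, 2.0, 1 / fs)                         # 2 s window
    keying = (np.sin(2 * np.pi * 1.5 * t) > 0)            # toy on/off code pattern
    signal = keying * np.sin(2 * np.pi * 25 * t)          # 25 Hz carrier, amplitude keyed
    noisy = signal + 0.5 * np.random.default_rng(1).normal(size=t.size)

    spectrum = np.abs(np.fft.rfft(noisy))                 # frequency-domain view
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    print(freqs[np.argmax(spectrum[1:]) + 1])             # strongest non-DC component

    coeffs = pywt.wavedec(noisy, "db4", level=5)          # time-frequency view
    print([len(c) for c in coeffs])                       # approximation + detail bands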

  7. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    Science.gov (United States)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including distributed software systems, sensor networks, robot operation, complex scripts for spacecraft integration and testing, and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  8. Code Commentary and Automatic Refactorings using Feedback from Multiple Compilers

    DEFF Research Database (Denmark)

    Jensen, Nicklas Bo; Probst, Christian W.; Karlsson, Sven

    2014-01-01

    Optimizing compilers are essential to the performance of parallel programs on multi-core systems. It is attractive to expose parallelism to the compiler, letting it do the heavy lifting. Unfortunately, it is hard to write code that compilers are able to optimize aggressively and therefore tools exist that can guide programmers with refactorings allowing the compilers to optimize more aggressively. We target the problem with many false positives that these tools often generate, where the amount of feedback can be overwhelming for the programmer. Our approach is to use a filtering scheme based on feedback from multiple compilers and show how we are able to filter out 87.6% of the comments by only showing the most promising comments...

  9. AFTC Code for Automatic Fault Tree Construction: Users Manual

    International Nuclear Information System (INIS)

    Gopika Vinod; Saraf, R.K.; Babar, A.K.

    1999-04-01

    Fault trees play a predominant role in the reliability and safety analysis of systems. Manual construction of a fault tree is a very time-consuming task and moreover does not give a formalized result, since it relies highly on the analyst's experience and heuristics. This necessitates computerized fault tree construction, which is still attracting the interest of reliability analysts. AFTC is a user-friendly software model for constructing fault trees based on decision tables. The software is equipped with libraries of decision tables for components commonly used in various Nuclear Power Plant (NPP) systems. The user is expected to make a nodal diagram of the system for which the fault tree is to be constructed, from the flow sheets available. This nodal diagram, entered as text, is the sole input defining the system flow chart. AFTC is a rule-based expert system which draws the fault tree from the system flow chart and the component decision tables. AFTC gives the fault tree in both text and graphic format. Help is provided on how to enter the system flow chart and the component decision tables. The software is developed in the 'C' language. The software was verified with a simplified version of the fire water system of an Indian PHWR. Code conversion will be undertaken to create a Windows-based version. (author)

  10. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  11. Automatic temperature control method of shipping can

    International Nuclear Information System (INIS)

    Nishikawa, Kaoru.

    1992-01-01

    The present invention provides a method for rapidly and accurately controlling the temperature of a shipping can, which is used in the shipping inspection of a nuclear fuel assembly. A measured temperature value of the shipping can is converted to a gas pressure setting value for the jacket of the shipping can by conducting a predetermined logic calculation using fuzzy logic. A gas pressure control section compares the pressure setting value from the fuzzy estimation section with the measured value of the gas pressure in the jacket of the shipping can, and supplies or exhausts the jacket gas so as to bring the measured value to the setting value. The fuzzy estimation section and the gas pressure control section control the gas pressure in the jacket of the shipping can and thereby the water level in the jacket. As a result, the temperature of the shipping can is controlled. With this procedure, since the water level in the jacket can be controlled directly and finely, the temperature of the shipping can is controlled automatically, more rapidly and accurately than in the conventional case. (I.S.)

  12. Multigrid method for integral equations and automatic programs

    Science.gov (United States)

    Lee, Hosae

    1993-01-01

    Several iterative algorithms based on multigrid methods are introduced for solving linear Fredholm integral equations of the second kind. Automatic programs based on these algorithms are introduced using Simpson's rule and the piecewise Gaussian rule for numerical integration.

  13. New Channel Coding Methods for Satellite Communication

    Directory of Open Access Journals (Sweden)

    J. Sebesta

    2010-04-01

    This paper deals with new progressive channel coding methods for short message transmission via a satellite transponder using a predetermined frame length. The key contributions are the modification and implementation of a new turbo code and the utilization of its unique features, with the application of methods for bit error rate estimation and an algorithm for output message reconstruction. The mentioned methods allow error-free communication at a very low Eb/N0 ratio and have been adopted for satellite communication; however, they can be applied to other systems working with a very low Eb/N0 ratio.

  14. Automatic Classification of Marine Mammals with Speaker Classification Methods.

    Science.gov (United States)

    Kreimeyer, Roman; Ludwig, Stefan

    2016-01-01

    We present an automatic acoustic classifier for marine mammals based on human speaker classification methods as an element of a passive acoustic monitoring (PAM) tool. This work is part of the Protection of Marine Mammals (PoMM) project under the framework of the European Defense Agency (EDA) and joined by the Research Department for Underwater Acoustics and Geophysics (FWG), Bundeswehr Technical Centre (WTD 71) and Kiel University. The automatic classification should support sonar operators in the risk mitigation process before and during sonar exercises with a reliable automatic classification result.

  15. Improving Utility of GPU in Accelerating Industrial Applications with User-centred Automatic Code Translation

    DEFF Research Database (Denmark)

    Yang, Po; Dong, Feng; Codreanu, Valeriu

    2018-01-01

    to the lack of specialist GPU (graphics processing unit) programming skills, the explosion of GPU power has not been fully utilized in general SME applications by inexperienced users. Also, existing automatic CPU-to-GPU code translators are mainly designed for research purposes, with poor user interface design, and are hard to use. Little attention has been paid to the applicability, usability and learnability of these tools for normal users. In this paper, we present an online automated CPU-to-GPU source translation system (GPSME) for inexperienced users to utilize GPU capability in accelerating general...

  16. Design and construction of a graphical interface for automatic generation of simulation code GEANT4

    International Nuclear Information System (INIS)

    Driss, Mozher; Bouzaine Ismail

    2007-01-01

    This work was carried out in the context of an engineering studies final project; it was accomplished at the center of nuclear sciences and technologies at Sidi Thabet. The project concerns the design and development of a system based on a graphical user interface which allows automatic code generation for simulations under the GEANT4 engine. This system aims to facilitate the use of GEANT4 by scientists who are not necessarily experts in this engine and to be used in different areas: research, industry and education. The implementation of this project uses the ROOT library and several programming languages such as XML and XSL. (Author). 5 refs

  17. AUTO_DERIV: Tool for automatic differentiation of a Fortran code

    Science.gov (United States)

    Stamatiadis, S.; Farantos, S. C.

    2010-10-01

    Solution method: The mathematical rules for differentiation of sums, products, quotients, and elementary functions, in conjunction with the chain rule for compound functions, are applied. The function should be expressed as one or more Fortran 77/90/95 procedures. A new type of variable is defined and the overloading mechanism of functions and operators provided by the Fortran 95 language is extensively used to implement the differentiation rules. Reasons for new version: The new version supports Fortran 95, handles the floating-point exceptions properly, and is faster due to internal reorganization. All discovered bugs are fixed. Summary of revisions: The code was rewritten extensively to benefit from features introduced in Fortran 95. Additionally, there was a major internal reorganization of the code, resulting in faster execution. The user interface described in the original paper was not changed. The values that the user must or should specify before compilation (essentially, the number of independent variables) were moved into the ad_types module. There were many minor bug fixes. One important bug was found and fixed: the code did not handle correctly the overloading of ** in a**λ when a = 0. The case of division by zero and the discontinuity of the function at the requested point are indicated by standard IEEE exceptions (IEEE_DIVIDE_BY_ZERO and IEEE_INVALID respectively). If the compiler does not support IEEE exceptions, a module with the appropriate name is provided, imitating the behavior of the 'standard' module in the sense that it raises the corresponding exceptions. It is up to the compiler (through certain flags, probably) to detect them. Restrictions: None imposed by the program. There are certain limitations that may appear mostly due to the specific implementation chosen in the user code. They can always be overcome by recoding parts of the routines developed by the user or by modifying AUTO_DERIV according to specific instructions given in [1]. The common

  18. A comparison of accurate automatic hippocampal segmentation methods.

    Science.gov (United States)

    Zandifar, Azar; Fonov, Vladimir; Coupé, Pierrick; Pruessner, Jens; Collins, D Louis

    2017-07-15

    The hippocampus is one of the first brain structures affected by Alzheimer's disease (AD). While many automatic methods for hippocampal segmentation exist, few studies have compared them on the same data. In this study, we compare four fully automated hippocampal segmentation methods in terms of their conformity with manual segmentation and their ability to be used as an AD biomarker in clinical settings. We also apply error correction to the four automatic segmentation methods, and complete a comprehensive validation to investigate differences between the methods. The effect size and classification performance is measured for AD versus normal control (NC) groups and for stable mild cognitive impairment (sMCI) versus progressive mild cognitive impairment (pMCI) groups. Our study shows that the nonlinear patch-based segmentation method with error correction is the most accurate automatic segmentation method and yields the most conformity with manual segmentation (κ=0.894). The largest effect size between AD versus NC and sMCI versus pMCI is produced by FreeSurfer with error correction. We further show that, using only hippocampal volume, age, and sex as features, the area under the receiver operating characteristic curve reaches up to 0.8813 for AD versus NC and 0.6451 for sMCI versus pMCI. However, the automatic segmentation methods are not significantly different in their performance. Copyright © 2017. Published by Elsevier Inc.

  19. Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems

    Science.gov (United States)

    Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.

    2008-08-01

    This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners, active in the real-time embedded systems domains. The Gene-Auto code generator will significantly improve the current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.

  20. An Automatic High Efficient Method for Dish Concentrator Alignment

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2014-01-01

    for the alignment of faceted solar dish concentrator. The isosceles triangle configuration of facet’s footholds determines a fixed relation between light spot displacements and foothold movements, which allows an automatic determination of the amount of adjustments. Tests on a 25 kW Stirling Energy System dish concentrator verify the feasibility, accuracy, and efficiency of our method.

  1. METHOD FOR AUTOMATIC RAISING AND LEVELING OF SUPPORT PLATFORM

    OpenAIRE

    A. G. Stryzhniou

    2017-01-01

    The paper presents a method for automatic raising and leveling of a support platform that differs from others in simplicity and versatility. The method includes four phases of raising and leveling in which the performance capabilities of the system are defined and the soil condition is tested. In addition, the current condition of the system is controlled and corrected with the issuance of control parameters to the control panel. The method can be used not only for static, but also for dynamic leveling...

  2. LHC-GCS a model-driven approach for automatic PLC and SCADA code generation

    CERN Document Server

    Thomas, Geraldine; Barillère, Renaud; Cabaret, Sebastien; Kulman, Nikolay; Pons, Xavier; Rochez, Jacques

    2005-01-01

    The LHC experiments’ Gas Control System (LHC GCS) project [1] aims to provide the four LHC experiments (ALICE, ATLAS, CMS and LHCb) with control for their 23 gas systems. To ease the production and maintenance of 23 control systems, a model-driven approach has been adopted to generate automatically the code for the Programmable Logic Controllers (PLCs) and for the Supervision Control And Data Acquisition (SCADA) systems. The first milestones of the project have been achieved. The LHC GCS framework [4] and the generation tools have been produced. A first control application has actually been generated and is in production, and a second is in preparation. This paper describes the principle and the architecture of the model-driven solution. It will in particular detail how the model-driven solution fits with the LHC GCS framework and with the UNICOS [5] data-driven tools.

  3. Automatic seamless image mosaic method based on SIFT features

    Science.gov (United States)

    Liu, Meiying; Wen, Desheng

    2017-02-01

    An automatic seamless image mosaic method based on SIFT features is proposed. First, the scale-invariant feature extraction algorithm SIFT is used for feature extraction and matching, which achieves sub-pixel precision for feature extraction. Then, the transformation matrix H is computed with an improved PROSAC algorithm; compared with the RANSAC algorithm, the computational efficiency is higher and more inliers are obtained. The transformation matrix H is then refined with the LM algorithm. Finally, the image mosaic is completed with a smoothing algorithm. The method runs automatically and avoids the disadvantages of traditional image mosaic methods under different scale and illumination conditions. Experimental results show that the mosaic effect is very good and the algorithm is very stable. It is highly valuable in practice.

  4. A Semantic Analysis Method for Scientific and Engineering Code

    Science.gov (United States)

    Stewart, Mark E. M.

    1998-01-01

    This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.

  5. An Automatic Shadow Detection Method for VHR Remote Sensing Orthoimagery

    Directory of Open Access Journals (Sweden)

    Qiongjie Wang

    2017-05-01

    The application potential of very high resolution (VHR) remote sensing imagery has been boosted by recent developments in the data acquisition and processing ability of aerial photogrammetry. However, shadows in images contribute to problems such as incomplete spectral information, lower intensity brightness, and fuzzy boundaries, which seriously affect the efficiency of the image interpretation. In this paper, to address these issues, a simple and automatic method of shadow detection is presented. The proposed method combines the advantages of the property-based and geometric-based methods to automatically detect the shadowed areas in VHR imagery. A geometric model of the scene and the solar position are used to delineate the shadowed and non-shadowed areas in the VHR image. A matting method is then applied to the image to refine the shadow mask. Different types of shadowed aerial orthoimages were used to verify the effectiveness of the proposed shadow detection method, and the results were compared with the results obtained by two state-of-the-art methods. The overall accuracy of the proposed method on the three tests was around 90%, confirming the effectiveness and robustness of the new method for detecting fine shadows, without any human input. The proposed method also performs better in detecting shadows in areas with water than the other two methods.

  6. CERPI and CEREL, two computer codes for the automatic identification and determination of gamma emitters in thermal-neutron-activated samples

    International Nuclear Information System (INIS)

    Giannini, M.; Oliva, P.R.; Ramorino, M.C.

    1979-01-01

    A computer code that automatically analyzes gamma-ray spectra obtained with Ge(Li) detectors is described. The program contains such features as automatic peak location and fitting, determination of peak energies and intensities, nuclide identification, and calculation of masses and errors. Finally, the results obtained with this computer code for a lunar sample are reported and briefly discussed

  7. Automatic differentiation of functions

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1990-06-01

    Automatic differentiation is a method of computing derivatives of functions to any order in any number of variables. The functions must be expressible as combinations of elementary functions. When evaluated at specific numerical points, the derivatives have no truncation error and are automatically found. The method is illustrated by simple examples. Source code in FORTRAN is provided
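
    The mechanics can be illustrated with forward-mode differentiation through operator overloading on dual numbers: each elementary operation propagates an exact derivative via the chain rule, so there is no truncation error. The cited report provides FORTRAN source; the Python sketch below is only an analogue of the idea, not that code.

    # Forward-mode automatic differentiation with a minimal dual-number class.
    import math

    class Dual:
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv
        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)
        __radd__ = __add__
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # product rule: (uv)' = u'v + uv'
            return Dual(self.value * other.value,
                        self.value * other.deriv + self.deriv * other.value)
        __rmul__ = __mul__

    def sin(x):                                  # elementary function with chain rule
        return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

    x = Dual(1.2, 1.0)                           # seed dx/dx = 1
    y = sin(x * x) + 3 * x                       # f(x) = sin(x^2) + 3x
    print(y.value, y.deriv)                      # derivative 2x*cos(x^2) + 3, exact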

  8. The Automatic Parallelisation of Scientific Application Codes Using a Computer Aided Parallelisation Toolkit

    Science.gov (United States)

    Ierotheou, C.; Johnson, S.; Leggett, P.; Cross, M.; Evans, E.; Jin, Hao-Qiang; Frumkin, M.; Yan, J.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. Historically, the lack of a programming standard for using directives and the rather limited performance due to scalability have affected the take-up of this programming model approach. Significant progress has been made in hardware and software technologies, as a result the performance of parallel programs with compiler directives has also made improvements. The introduction of an industrial standard for shared-memory programming with directives, OpenMP, has also addressed the issue of portability. In this study, we have extended the computer aided parallelization toolkit (developed at the University of Greenwich), to automatically generate OpenMP based parallel programs with nominal user assistance. We outline the way in which loop types are categorized and how efficient OpenMP directives can be defined and placed using the in-depth interprocedural analysis that is carried out by the toolkit. We also discuss the application of the toolkit on the NAS Parallel Benchmarks and a number of real-world application codes. This work not only demonstrates the great potential of using the toolkit to quickly parallelize serial programs but also the good performance achievable on up to 300 processors for hybrid message passing and directive-based parallelizations.

  9. A Simple and Automatic Method for Locating Surgical Guide Hole

    Science.gov (United States)

    Li, Xun; Chen, Ming; Tang, Kai

    2017-12-01

    Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method of automatically locating the surgical guide hole, which can reduce the reliance on the operator's experience and improve the design efficiency and quality of the surgical guide. Few publications can be found on this topic and this paper proposes a novel and simple method to solve the problem. In this paper, a local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system well represents the dental anatomical features, and the center axis of the objective tooth (coinciding with the corresponding guide hole axis) can be quickly evaluated in this coordinate system, completing the location of the guide hole. The proposed method has been verified by comparison with two types of benchmarks: manual operation by one skilled doctor with over 15 years of experience (used in most hospitals) and the automatic approach using the popular commercial package Simplant (used in few hospitals). Both the benchmarks and the proposed method are analyzed in terms of their stress distribution when chewing and biting. The stress distribution is visually shown and plotted as a graph. The results show that the proposed method gives a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.

  10. A Method for Improving the Progressive Image Coding Algorithms

    Directory of Open Access Journals (Sweden)

    Ovidiu COSMA

    2014-12-01

    Full Text Available This article presents a method for increasing the performance of progressive coding algorithms for image subbands, by representing the coefficients with a code that reduces the truncation error.

  11. A novel method of generating and remembering international morse codes

    Digital Repository Service at National Institute of Oceanography (India)

    Charyulu, R.J.K.

    untethered communications have been advanced; nevertheless, the S.O.S. international Morse code will remain a rescue tool in emergencies, when all other modes fail. The details of the method and the actual codes have been enumerated....

  12. METHOD FOR AUTOMATIC RAISING AND LEVELING OF SUPPORT PLATFORM

    Directory of Open Access Journals (Sweden)

    A. G. Stryzhniou

    2017-01-01

    Full Text Available The paper presents a method for the automatic raising and leveling of a support platform that differs from others in its simplicity and versatility. The method includes four phases of raising and leveling, during which the performance capabilities of the system are defined and the soil condition is tested. In addition, the current condition of the system is controlled and corrected, with the control parameters issued to the control panel. The method can be used not only for static but also for dynamic leveling systems, such as active suspension. The method assumes identification and dynamic testing of the reference units. Synchronization of the motion of the reference units was implemented to avoid dangerous skewing of the support platform. Recommendations for system implementation and experimental model identification of the support platform are presented.

  13. An Automatic Unpacking Method for Computer Virus Effective in the Virus Filter Based on Paul Graham's Bayesian Theorem

    Science.gov (United States)

    Zhang, Dengfeng; Nakaya, Naoshi; Koui, Yuuji; Yoshida, Hitoaki

    Recently, the appearance frequency of computer virus variants has increased. Updates to virus information using the normal pattern-matching method are increasingly unable to keep up with the speed at which viruses appear, since it takes time to extract the characteristic patterns for each virus. Therefore, a rapid, automatic virus detection algorithm using static code analysis is necessary. However, recent computer viruses are almost always compressed and obfuscated, and it is difficult to determine the characteristics of the binary code of obfuscated computer viruses. Therefore, this paper proposes a method that unpacks compressed computer viruses automatically, independent of the compression format. The proposed method unpacks the common compression formats accurately 80% of the time, while unknown compression formats can also be unpacked. The proposed method is effective against unknown viruses when combined with an existing known-virus detection system such as Paul Graham's Bayesian virus filter.

  14. Semi-Automatic Rating Method for Neutrophil Alkaline Phosphatase Activity.

    Science.gov (United States)

    Sugano, Kanae; Hashi, Kotomi; Goto, Misaki; Nishi, Kiyotaka; Maeda, Rie; Kono, Keigo; Yamamoto, Mai; Okada, Kazunori; Kaga, Sanae; Miwa, Keiko; Mikami, Taisei; Masauzi, Nobuo

    2017-01-01

    The neutrophil alkaline phosphatase (NAP) score is a valuable test for the diagnosis of myeloproliferative neoplasms, but it is still rated manually. Therefore, we developed a semi-automatic rating method using Photoshop® and ImageJ, called NAP-PS-IJ. Neutrophil alkaline phosphatase staining was performed with Tomonaga's method on films of peripheral blood taken from three healthy volunteers. At least 30 neutrophils with NAP scores from 0 to 5+ were observed and their images captured, from which the area outside the neutrophil was removed with ImageJ. These images were binarized with two different procedures (P1 and P2) using Photoshop®. The NAP-positive area (NAP-PA) and the NAP-positive granule count (NAP-PGC) were measured and counted with ImageJ. The NAP-PA in images binarized with P1 differed significantly (P < 0.05) between images with NAP scores from 0 to 3+ (group 1) and those from 4+ to 5+ (group 2). The original images in group 1 were binarized with P2; their NAP-PGC differed significantly (P < 0.05) among all four NAP score groups. The mean NAP-PGC obtained with NAP-PS-IJ correlated well (r = 0.92, P < 0.001) with the results of human examiners. The sensitivity and specificity of NAP-PS-IJ were 60% and 92%, so it might be considered a prototypic method for fully automatic rating of the NAP score. © 2016 Wiley Periodicals, Inc.

  15. Automatic heart positioning method in computed tomography scout images.

    Science.gov (United States)

    Li, Hong; Liu, Kaihua; Sun, Hang; Bao, Nan; Wang, Xu; Tian, Shi; Qi, Shouliang; Kang, Yan

    2014-01-01

    Computed tomography (CT) radiation dose can be reduced significantly by region-of-interest (ROI) CT scanning. Automatically positioning the heart in CT scout images is an essential step towards realizing ROI CT scans of the heart. This paper proposes a fully automatic heart positioning method for CT scout images, including the anteroposterior (A-P) scout image and the lateral scout image. The key steps are to determine the feature points of the heart and obtain part of the heart boundary on the A-P scout image, then transform that part of the boundary into a polar coordinate system and obtain the whole boundary of the heart by fitting a slant elliptic equation. For heart positioning on the lateral image, the top and bottom boundaries obtained from the A-P image can be inherited. The proposed method was tested on a clinical routine dataset of 30 cases (30 A-P scout images and 30 lateral scout images). Experimental results show that 26 cases of the dataset achieved very good positioning of the heart in both the A-P and the lateral scout image. The method may be helpful for ROI CT scanning of the heart.

  16. A method for scientific code coupling in a distributed environment

    International Nuclear Information System (INIS)

    Caremoli, C.; Beaucourt, D.; Chen, O.; Nicolas, G.; Peniguel, C.; Rascle, P.; Richard, N.; Thai Van, D.; Yessayan, A.

    1994-12-01

    This guide book deals with the coupling of big scientific codes. First, the context is introduced: big scientific codes devoted to a specific discipline are coming to maturity, and there are more and more needs in terms of multi-disciplinary studies. We then describe different kinds of code coupling and an example: the 3D thermal-hydraulic code THYC coupled with the 3D neutronics code COCCINELLE. With this example we identify the problems to be solved to realize a coupling. We present the different numerical methods usable for the resolution of the coupling terms. This leads to two kinds of coupling: with weak coupling explicit methods can be used, whereas strong coupling requires implicit methods. In both cases, we analyze the link with the way the codes are parallelized. For the translation of data from one code to another, we define the notion of a Standard Coupling Interface based on a general data structure. This general structure constitutes an intermediary between the codes, allowing a relative independence of the codes from a specific coupling. The proposed implementation method leads to a simultaneous run of the different codes while they exchange data. Two kinds of data communication with message exchange are proposed: direct communication between the codes using the PVM product (Parallel Virtual Machine), and indirect communication through a coupling tool. The second way, with a general code-coupling tool, is based on a coupling method and is the one we strongly recommend. This method rests on two principles: re-usability, meaning few modifications to existing codes, and the definition of a code usable for coupling, which separates the design of a code usable for coupling from the realization of a specific coupling. This coupling tool, available from the beginning of 1994, is described in general terms. (authors). figs., tabs
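
    As a rough illustration of the coupling loop and of the role of a shared data structure, the following Python sketch runs two stand-in solvers that exchange data every step; the field names and solver stubs are hypothetical and are not taken from THYC, COCCINELLE or the coupling tool described above.

```python
# Two "codes" coupled through a single generic exchange structure, in the spirit
# of a Standard Coupling Interface. The physics here is a dummy placeholder.

def thermal_hydraulics_step(fields, dt):
    # Stand-in: a real T/H code would advance its state using the received power.
    fields["wall_temperature"] = 550.0 + 0.01 * fields["power_density"]
    return fields


def neutronics_step(fields, dt):
    # Stand-in: a real neutronics code would update power from temperature feedback.
    fields["power_density"] = 100.0 - 0.05 * (fields["wall_temperature"] - 550.0)
    return fields


def couple(n_steps, dt=0.1):
    # The shared dictionary is the only thing the two codes know about each other.
    fields = {"wall_temperature": 550.0, "power_density": 100.0}
    for _ in range(n_steps):
        # Weak (explicit) coupling: each code reads the latest exchanged values;
        # a strong coupling would iterate both codes to convergence within a step.
        fields = thermal_hydraulics_step(fields, dt)
        fields = neutronics_step(fields, dt)
    return fields


print(couple(10))
```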

  17. Developing Automatic Multi-Objective Optimization Methods for Complex Actuators

    Directory of Open Access Journals (Sweden)

    CHIS, R.

    2017-11-01

    Full Text Available This paper presents the analysis and multi-objective optimization of a magnetic actuator. By varying just 8 parameters of the magnetic actuator's model, the design space grows to more than 6 million configurations. Moreover, the 8 objectives that must be optimized are conflicting and generate a huge objective space, too. To cope with this complexity, we use advanced heuristic methods for Automatic Design Space Exploration. The FADSE tool is an Automatic Design Space Exploration framework that includes different state-of-the-art multi-objective meta-heuristics for solving NP-hard problems, which we used for the analysis and optimization of the COMSOL and MATLAB model of the magnetic actuator. We show that by using a state-of-the-art genetic multi-objective algorithm, response surface modelling methods and some machine learning techniques, the timing complexity of the design space exploration can be reduced while still taking objective constraints into consideration, so that various Pareto-optimal configurations can be found. Using our approach, we were able to decrease the simulation time by at least a factor of 10, compared to a run that does all the simulations, while keeping prediction errors around 1%.

  18. Automatic numerical integration methods for Feynman integrals through 3-loop

    International Nuclear Information System (INIS)

    De Doncker, E; Olagbemi, O; Yuasa, F; Ishikawa, T; Kato, K

    2015-01-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)

  19. Vortex flows in the solar chromosphere. I. Automatic detection method

    Science.gov (United States)

    Kato, Y.; Wedemeyer, S.

    2017-05-01

    Solar "magnetic tornadoes" are produced by rotating magnetic field structures that extend from the upper convection zone and the photosphere to the corona of the Sun. Recent studies show that these kinds of rotating features are an integral part of atmospheric dynamics and occur on a large range of spatial scales. A systematic statistical study of magnetic tornadoes is a necessary next step towards understanding their formation and their role in mass and energy transport in the solar atmosphere. For this purpose, we develop a new automatic detection method for chromospheric swirls, meaning the observable signature of solar tornadoes or, more generally, chromospheric vortex flows and rotating motions. Unlike existing studies that rely on visual inspections, our new method combines a line integral convolution (LIC) imaging technique and a scalar quantity that represents a vortex flow on a two-dimensional plane. We have tested two detection algorithms, based on the enhanced vorticity and vorticity strength quantities, by applying them to three-dimensional numerical simulations of the solar atmosphere with CO5BOLD. We conclude that the vorticity strength method is superior compared to the enhanced vorticity method in all aspects. Applying the method to a numerical simulation of the solar atmosphere reveals very abundant small-scale, short-lived chromospheric vortex flows that have not been found previously by visual inspection.
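
    A hedged sketch of the quantity the detection builds on: vorticity and swirling strength (the imaginary part of the velocity-gradient eigenvalues) computed on a synthetic 2D velocity field with NumPy. This only illustrates the criterion; it is not the CO5BOLD analysis pipeline.

```python
import numpy as np

# Synthetic velocity field with one vortex centred at the origin.
y, x = np.mgrid[-1:1:200j, -1:1:200j]
r2 = x**2 + y**2
u = -y * np.exp(-r2 / 0.1)
v = x * np.exp(-r2 / 0.1)

dx = x[0, 1] - x[0, 0]
du_dy, du_dx = np.gradient(u, dx)   # axis 0 is y, axis 1 is x
dv_dy, dv_dx = np.gradient(v, dx)

# Plain vorticity (z-component of the curl).
vorticity = dv_dx - du_dy

# Swirling strength: imaginary part of the eigenvalues of
# [[du/dx, du/dy], [dv/dx, dv/dy]]; complex when the discriminant is negative.
trace = du_dx + dv_dy
det = du_dx * dv_dy - du_dy * dv_dx
disc = trace**2 - 4.0 * det
swirl = np.where(disc < 0.0, 0.5 * np.sqrt(-np.minimum(disc, 0.0)), 0.0)

# Candidate vortex pixels are those with significant swirling strength.
print("max vorticity:", vorticity.max(), "max swirling strength:", swirl.max())
```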

  20. Methods of RECORD, an LWR fuel assembly burnup code

    International Nuclear Information System (INIS)

    Skardhamar, T.; Naess, H.K.

    1982-06-01

    The RECORD computer code is a detailed reactor physics code for performing efficient LWR fuel assembly calculations, taking into account most of the features found in BWR and PWR fuel designs. The code calculates the neutron spectrum, reaction rates and reactivity as a function of fuel burnup, and it generates the few-group data required for full-scale core simulation and fuel management calculations. The report describes the methods of the RECORD computer code and the basis for the fundamental models selected, and gives a review of code qualification against measured data. (Auth. /RF)

  1. A model based method for automatic facial expression recognition

    NARCIS (Netherlands)

    Kuilenburg, H. van; Wiering, M.A.; Uyl, M. den

    2006-01-01

    Automatic facial expression recognition is a research topic with interesting applications in the field of human-computer interaction, psychology and product marketing. The classification accuracy for an automatic system which uses static images as input is however largely limited by the image

  2. Research of x-ray automatic image mosaic method

    Science.gov (United States)

    Liu, Bin; Chen, Shunan; Guo, Lianpeng; Xu, Wanpeng

    2013-10-01

    Image mosaicking has wide application value in the field of medical image analysis. It is a technology that spatially matches a series of mutually overlapping images and finally builds a seamless, high-quality image with high resolution and a large field of view. In this paper, grayscale-slicing pseudo-color enhancement was first used to map the gray levels to pseudo-color and to extract SIFT features from the images. Then, using NCC (normalized cross-correlation) as the similarity measure, the RANSAC (Random Sample Consensus) method was used to reject false feature points in order to complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were completed using wavelet multi-scale decomposition. The experiments show that the method can effectively improve the precision and automation of medical image mosaicking, and provides an effective technical approach for automatic medical image mosaicking.
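
    The feature-matching stage described above can be sketched with OpenCV as follows. The file names are placeholders, a Lowe ratio test stands in for the NCC similarity measure, and the code is an assumption-laden illustration rather than the authors' implementation (it needs opencv-python >= 4.4 for SIFT).

```python
import cv2
import numpy as np

img1 = cv2.imread("xray_left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("xray_right.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors in both overlapping images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbour matching with a ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC discards the remaining false correspondences and estimates the transform.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp one image into the other's frame; blending (e.g. wavelet fusion) would follow.
h, w = img2.shape
mosaic = cv2.warpPerspective(img1, H, (w, h))
cv2.imwrite("mosaic_part.png", mosaic)
```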

  3. Image Segmentation Method Using Thresholds Automatically Determined from Picture Contents

    Directory of Open Access Journals (Sweden)

    Yuan Been Chen

    2009-01-01

    Full Text Available Image segmentation has become an indispensable task in many image and video applications. This work develops an image segmentation method based on the modified edge-following scheme where different thresholds are automatically determined according to areas with varied contents in a picture, thus yielding suitable segmentation results in different areas. First, the iterative threshold selection technique is modified to calculate the initial-point threshold of the whole image or a particular block. Second, the quad-tree decomposition that starts from the whole image employs gray-level gradient characteristics of the currently-processed block to decide further decomposition or not. After the quad-tree decomposition, the initial-point threshold in each decomposed block is adopted to determine initial points. Additionally, the contour threshold is determined based on the histogram of gradients in each decomposed block. Particularly, contour thresholds could eliminate inappropriate contours to increase the accuracy of the search and minimize the required searching time. Finally, the edge-following method is modified and then conducted based on initial points and contour thresholds to find contours precisely and rapidly. By using the Berkeley segmentation data set with realistic images, the proposed method is demonstrated to take the least computational time for achieving fairly good segmentation performance in various image types.
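
    A small Python sketch of the iterative (initial-point) threshold selection step, in the spirit of the Ridler-Calvard scheme that the method modifies; the details are assumptions rather than the paper's exact formulation.

```python
import numpy as np


def iterative_threshold(block, tol=0.5, max_iter=100):
    """Return a threshold separating a block's pixels into two intensity classes."""
    t = block.mean()
    for _ in range(max_iter):
        low = block[block <= t]
        high = block[block > t]
        if low.size == 0 or high.size == 0:
            break
        # New threshold: midpoint of the two class means.
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t


# Example on a synthetic block with two grey-level populations.
rng = np.random.default_rng(0)
block = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 15, 500)])
print(iterative_threshold(block))   # ends up roughly midway between the two modes
```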

  4. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.

  5. Automatic Code Checking Applied to Fire Fighting and Panic Projects in a BIM Environment - BIMSCIP

    Directory of Open Access Journals (Sweden)

    Marcelo Franco Porto

    2017-06-01

    Full Text Available This work presents a computational implementation of an automatic conformity verification of building projects using a 3D modeling platform for BIM. This program was developed in C# language and based itself on the 9th Technical Instruction from Military Fire Brigade of the State of Minas Gerais which covers regulations of fire load in buildings and hazardous areas.

  6. Validation study of automatically generated codes in colonoscopy using the endoscopic report system Endobase

    NARCIS (Netherlands)

    Groenen, Marcel J. M.; van Buuren, Henk R.; van Berge Henegouwen, Gerard P.; Fockens, Paul; van der Lei, Johan; Stuifbergen, Wouter N. H. M.; van der Schaar, Peter J.; Kuipers, Ernst J.; Ouwendijk, Rob J. Th

    2010-01-01

    OBJECTIVE: Gastrointestinal endoscopy databases are important for surveillance, epidemiology, quality control and research. A good quality of automatically generated databases to enable drawing justified conclusions based on the data is of key importance. The aim of this study is to validate the

  7. An Overview of the Monte Carlo Methods, Codes, & Applications Group

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-30

    This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.

  8. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions and computation methods [fr

  9. Statistical methods for accurately determining criticality code bias

    International Nuclear Information System (INIS)

    Trumble, E.F.; Kimball, K.D.

    1997-01-01

    A system of statistically treating validation calculations for the purpose of determining computer code bias is provided in this paper. The following statistical treatments are described: weighted regression analysis, lower tolerance limit, lower tolerance band, and lower confidence band. These methods meet the criticality code validation requirements of ANS 8.1. 8 refs., 5 figs., 4 tabs

  10. A simple method for automatic measurement of excitation functions

    International Nuclear Information System (INIS)

    Ogawa, M.; Adachi, M.; Arai, E.

    1975-01-01

    An apparatus has been constructed to perform the sequence control of a beam-analysing magnet for automatic excitation function measurements. This device is also applied to the feedback control of the magnet to lock the beam energy. (Auth.)

  11. Computer codes for automatic tuning of the beam transport at the UNILAC

    International Nuclear Information System (INIS)

    Dahl, L.; Ehrich, A.

    1984-01-01

    For application in routine operation fully automatic computer controlled algorithms are developed for tuning of beam transport elements at the Unilac. Computations, based on emittance measurements, simulate the beam behaviour and evaluate quadrupole settings, in order to produce defined beam properties at specified positions along the accelerator. The interactive program is controlled using a graphic display on which the beam emittances and envelopes are plotted. To align the beam onto the ion-optical axis of the accelerator two automatic computer controlled procedures have been developed. The misalignment of the beam is determined by variation of quadrupole or steering magnet settings with simultaneous measurement of the beam distribution on profile grids. According to the result a pair of steering magnet settings are adjusted to bend the beam on the axis. The effects of computer controlled tuning on beam quality and operation are reported

  12. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art, realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 code with the multi-dimensional COBRA-TF code. The integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the equation of state (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides complete overall information on the code structure and major functions of MARS, including the code architecture, hydrodynamic model, heat structures, trip/control system and point reactor kinetics model. Therefore, this report should be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such the layout is very similar to that of RELAP. This similarity to the RELAP5 input is intentional, as this input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  13. The variational cellular method - the code implementation

    International Nuclear Information System (INIS)

    Rosato, A.; Lima, M.A.P.

    1980-12-01

    The process to determine the potential energy curve for diatomic molecules by the Variational Cellular Method is discussed. An analysis of the determination of the electronic eigenenergies and the electrostatic energy of these molecules is made. An explanation of the input data and their meaning is also presented. (Author) [pt

  14. A code for obtaining temperature distribution by finite element method

    International Nuclear Information System (INIS)

    Bloch, M.

    1984-01-01

    The ELEFIB computer code, written in Fortran and using the finite element method to calculate the temperature distribution in linear and two-dimensional problems, in the steady state or in the transient phase of heat transfer, is presented. The formulation of the equations uses the Galerkin method. Some examples are shown and the results are compared with other papers. The comparative evaluation shows that the code gives good values. (M.C.K.) [pt
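
    For illustration, a minimal Galerkin finite element solver for steady 1D heat conduction is sketched below in Python; it shows the assemble-and-solve pattern such a code follows, but it is not the ELEFIB code itself and all names are assumptions.

```python
# Galerkin FEM with linear elements for -k u'' = q on [0, L], u(0) = u(L) = 0.
import numpy as np


def solve_heat_1d(n_el=10, length=1.0, k=1.0, q=1.0):
    n_nodes = n_el + 1
    h = length / n_el
    K = np.zeros((n_nodes, n_nodes))   # global conductivity (stiffness) matrix
    f = np.zeros(n_nodes)              # global load vector

    # Element matrices for linear shape functions.
    ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    fe = q * h / 2.0 * np.ones(2)
    for e in range(n_el):
        dofs = [e, e + 1]
        K[np.ix_(dofs, dofs)] += ke
        f[dofs] += fe

    # Dirichlet boundary conditions: solve only on the interior nodes.
    interior = np.arange(1, n_nodes - 1)
    u = np.zeros(n_nodes)
    u[interior] = np.linalg.solve(K[np.ix_(interior, interior)], f[interior])
    return u


u = solve_heat_1d()
# Exact solution q*x*(L-x)/(2k) peaks at 0.125 for the defaults; nodal values match it.
print(u)
```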

  15. A robust recognition and accurate locating method for circular coded diagonal target

    Science.gov (United States)

    Bao, Yunna; Shang, Yang; Sun, Xiaoliang; Zhou, Jiexin

    2017-10-01

    As a category of special control points which can be automatically identified, artificial coded targets have been widely developed in the field of computer vision, photogrammetry, augmented reality, etc. In this paper, a new circular coded target designed by RockeTech technology Corp. Ltd is analyzed and studied, which is called circular coded diagonal target (CCDT). A novel detection and recognition method with good robustness is proposed in the paper, and implemented on Visual Studio. In this algorithm, firstly, the ellipse features of the center circle are used for rough positioning. Then, according to the characteristics of the center diagonal target, a circular frequency filter is designed to choose the correct center circle and eliminates non-target noise. The precise positioning of the coded target is done by the correlation coefficient fitting extreme value method. Finally, the coded target recognition is achieved by decoding the binary sequence in the outer ring of the extracted target. To test the proposed algorithm, this paper has carried out simulation experiments and real experiments. The results show that the CCDT recognition and accurate locating method proposed in this paper can robustly recognize and accurately locate the targets in complex and noisy background.

  16. Automatic intra-modality brain image registration method

    International Nuclear Information System (INIS)

    Whitaker, J.M.; Ardekani, B.A.; Braun, M.

    1996-01-01

    Full text: Registration of 3D brain images of the same or different subjects has potential importance in clinical diagnosis, treatment planning and neurological research. The broad aim of our work is to produce an automatic and robust intra-modality brain image registration algorithm for intra-subject and inter-subject studies. Our algorithm is composed of two stages. Initial alignment is achieved by finding the values of nine transformation parameters (representing translation, rotation and scale) that minimise the non-overlapping regions of the head. This is achieved by minimisation of the sum of the exclusive OR of two binary head images, produced using the head extraction procedure described by Ardekani et al. (J Comput Assist Tomogr, 19:613-623, 1995). The initial alignment successfully determines the scale parameters and the gross translation and rotation parameters. Fine alignment uses an objective function described for inter-modality registration in Ardekani et al. (ibid.). The algorithm segments one of the images to be aligned into a set of connected components using K-means clustering. Registration is achieved by minimising the K-means variance of the segmentation induced in the other image. The similarity of images of the same modality makes the method attractive for intra-modality registration. A 3D MR image with voxel dimensions of 2x2x6 mm was deliberately misaligned. The registered image shows visually accurate registration; the average displacement of a pixel from its correct location was measured to be 3.3 mm. The algorithm was tested on intra-subject MR images and was found to produce good qualitative results. Using the data available, the algorithm produced promising qualitative results in intra-subject registration. Further work is necessary for its application to inter-subject registration, due to the large variability in brain structure between subjects. Clinical evaluation of the algorithm for selected applications is required

  17. Methods and computer codes for probabilistic sensitivity and uncertainty analysis

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1985-01-01

    This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of most important input variables of a code that has many (tens, hundreds) input variables with uncertainties, and do this without relying on judgment or exhaustive sensitivity studies. Purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other, e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has first been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables

  18. Parallelization methods study of thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Gaudart, Catherine

    2000-01-01

    The variety of parallelization methods and machines leads to a wide selection for programmers. In this study we suggest, in an industrial context, some solutions based on the experience acquired with different parallelization methods. The study covers several scientific codes that simulate a large variety of thermal-hydraulic phenomena. A bibliography on parallelization methods and a first analysis of the codes showed the difficulty of applying our process to all of the applications under study. It was therefore necessary to identify and extract a representative part of these applications and parallelization methods. The linear solver part of the codes was the natural choice, since several parallelization methods had already been used on this particular part. From these developments one can estimate the work necessary for a non-specialist programmer to parallelize his application, and the impact of the development constraints. The different parallelization methods tested are the numerical library PETSc, the parallelizer PAF, the language HPF, the formalism PEI and the communication libraries MPI and PVM. In order to test several methods on different applications and to respect the constraint of minimizing the modifications to the codes, a tool called SPS (Server of Parallel Solvers) was developed. We describe the different constraints on code optimization in an industrial context, present the solutions provided by the SPS tool, show the development of the linear solver part with the tested parallelization methods, and finally compare the results against the imposed criteria. (author) [fr

  19. Extending a User Interface Prototyping Tool with Automatic MISRA C Code Generation

    Directory of Open Access Journals (Sweden)

    Gioacchino Mauro

    2017-01-01

    Full Text Available We are concerned with systems, particularly safety-critical systems, that involve interaction between users and devices, such as the user interface of medical devices. We therefore developed a MISRA C code generator for formal models expressed in the PVSio-web prototyping toolkit. PVSio-web allows developers to rapidly generate realistic interactive prototypes for verifying usability and safety requirements in human-machine interfaces. The visual appearance of the prototypes is based on a picture of a physical device, and the behaviour of the prototype is defined by an executable formal model. Our approach transforms the PVSio-web prototyping tool into a model-based engineering toolkit that, starting from a formally verified user interface design model, will produce MISRA C code that can be compiled and linked into a final product. An initial validation of our tool is presented for the data entry system of an actual medical device.

  20. Automatic detecting method of LED signal lamps on fascia based on color image

    Science.gov (United States)

    Peng, Xiaoling; Hou, Wenguang; Ding, Mingyue

    2009-10-01

    The instrument display panel is one of the most important parts of an automobile, and automatic detection of its LED signal lamps is critical to ensure the reliability of automobile systems. In this paper, an automatic detection method was developed that covers three aspects: the shape of the LED lamps, the color of the LED lamps, and defect spots inside the lamps. Several hundred fascias were inspected with the automatic detection algorithm. The algorithm is fast enough to satisfy the real-time requirements of the system, and the detection results were demonstrated to be stable and accurate.

  1. Image Processing Method for Automatic Discrimination of Hoverfly Species

    Directory of Open Access Journals (Sweden)

    Vladimir Crnojević

    2014-01-01

    Full Text Available An approach to automatic hoverfly species discrimination based on detection and extraction of vein junctions in wing venation patterns of insects is presented in the paper. The dataset used in our experiments consists of high resolution microscopic wing images of several hoverfly species collected over a relatively long period of time at different geographic locations. Junctions are detected using the combination of the well known HOG (histograms of oriented gradients and the robust version of recently proposed CLBP (complete local binary pattern. These features are used to train an SVM classifier to detect junctions in wing images. Once the junctions are identified they are used to extract statistics characterizing the constellations of these points. Such simple features can be used to automatically discriminate four selected hoverfly species with polynomial kernel SVM and achieve high classification accuracy.

  2. Automaticity of multiplication facts with cognitive behavioral method

    OpenAIRE

    Ferlin, Sara

    2017-01-01

    Slovenian students are achieving good results in math, yet the attitude on this subject remains negative. The automaticity of multiplication facts is one of the main learning objectives in 4th grade math. If the student does not automate multiplication, he or she may solve assignments at a slower rate and make mistakes during the process. Failure may contribute to a change in their attitude toward multiplication and, later on, math. This can be avoided by effectively addressing the issue. On ...

  3. The efficiency of geophysical adjoint codes generated by automatic differentiation tools

    Science.gov (United States)

    Vlasenko, A. V.; Köhl, A.; Stammer, D.

    2016-02-01

    The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the

  4. A Method for the Construction of Minimum-Redundancy Codes*

    Indian Academy of Sciences (India)

    Summary - An optimum method of coding an ensemble of messages consisting of a finite number of ... One important method of transmitting messages is to transmit in their place sequences of symbols. If there are more .... These messages are then combined to form a single composite message with probability unity, and ...
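
    The construction referred to here (Huffman's procedure) can be sketched as follows; this Python version, with its own helper names, illustrates the merge-the-two-least-probable idea rather than reproducing the paper.

```python
# Build a minimum-redundancy (Huffman) code by repeatedly merging the two least
# probable messages until a single composite message of probability one remains.
import heapq


def huffman_code(probabilities):
    """probabilities: dict mapping message -> probability. Returns message -> code."""
    # Each heap entry: (probability, tie-breaker, {message: partial code}).
    heap = [(p, i, {msg: ""}) for i, (msg, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # the two least probable groups
        p2, _, c2 = heapq.heappop(heap)
        # Prepend a distinguishing digit to every message in each merged group.
        merged = {m: "0" + code for m, code in c1.items()}
        merged.update({m: "1" + code for m, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]


codes = huffman_code({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1})
print(codes)   # shorter codes go to the more probable messages
```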

  5. Calibration of three rainfall simulators with automatic measurement methods

    Science.gov (United States)

    Roldan, Margarita

    2010-05-01

    M. Roldán, I. Martín, F. Martín, S. de Alba, M. Alcázar and F.I. Cermeño (Universidad Politécnica de Madrid; Universidad Complutense de Madrid). Rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and thus to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the need to study the relation between fall height and fall velocity for the different drop sizes generated in a rainfall simulator (Epema G.F. and Riezebos H.Th., 1983). Rainfall simulators are one of the most widely used tools in erosion studies and are used to determine fall velocity and drop size; they allow repeated and multiple measurements. The main reason for using rainfall simulation as a research tool is to reproduce in a controlled way the behaviour expected in the natural environment. But on many occasions when simulated rain is compared with natural rain, there is a lack of correspondence between the two, which can cast doubt on the validity of the data, because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley D., 2008). Rainfall simulations often have high rain rates that do not resemble natural rain events, and such measurements are not comparable. And besides, the intensity is related to the kinetic energy which

  6. Model of automatic fuel management for the Atucha II nuclear central with the PUMA IV code

    International Nuclear Information System (INIS)

    Marconi G, J.F.; Tarazaga, A.E.; Romero, L.D.

    2007-01-01

    The Atucha II plant is a heavy-water, natural-uranium power station. For this reason, and because of the small excess reactivity of this type of reactor, it is necessary to carry out continuous on-power fuel management (for Atucha II, approximately every 0.7 days). To keep these plants in operation and achieve good fuel economy, different fuel management schemes are tested, defined by the zones and paths along which fuel elements are moved inside the core; most of these schemes must be tested over long periods in order to confirm the behaviour of the plant and the discharge burnup of the fuel elements. For this work it is of great help to have a program that implements the criteria to be followed at each refuelling, using the paths and zones of each management scheme under test, and thereby obtains as results the compliance with the regulations over time and the average discharge burnup of the fuel elements, the latter being essential data for the company operating the plant. Otherwise, a physicist experienced in fuel management has to test each of the possible schemes, even those that can quickly be discarded because they do not comply with the regulatory standards or because their average discharge burnup is too low. It is therefore very helpful to test the different schemes with an automatic model, so that the physicist only analyzes the most important cases. The model in question not only allows different types of refuelling paths and zones to be programmed, it also provides the possibility of disabling some of the criteria. (Author)

  7. Deep Learning Methods for Improved Decoding of Linear Codes

    Science.gov (United States)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly less parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
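
    As context, a plain (unweighted) min-sum belief propagation decoder — the baseline that the neural decoders improve by training weights on the messages — can be sketched as follows; the parity-check matrix and LLRs below are toy assumptions.

```python
import numpy as np


def min_sum_decode(H, llr, n_iter=20):
    """H: (m, n) parity-check matrix over GF(2); llr: channel log-likelihood ratios."""
    m, n = H.shape
    c2v = np.zeros((m, n))   # check-to-variable messages, meaningful where H == 1
    for _ in range(n_iter):
        # Variable-to-check: channel LLR plus incoming messages from the other checks.
        total = llr + c2v.sum(axis=0)
        v2c = np.where(H == 1, total - c2v, 0.0)
        # Check-to-variable (min-sum): sign product and minimum magnitude of the others.
        for c in range(m):
            idx = np.flatnonzero(H[c])
            msgs = v2c[c, idx]
            signs = np.sign(msgs)
            signs[signs == 0] = 1.0
            prod_sign = signs.prod()
            mags = np.abs(msgs)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]
            for k, v in enumerate(idx):
                other_min = min2 if k == order[0] else min1
                c2v[c, v] = (prod_sign * signs[k]) * other_min
        hard = ((llr + c2v.sum(axis=0)) < 0).astype(int)
        if not np.any(H @ hard % 2):   # all parity checks satisfied
            break
    return hard


# Toy example: (7,4) Hamming code, all-zero codeword with one unreliable bit.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.0, 1.5, -0.4, 2.2, 1.8, 1.1, 0.9])
print(min_sum_decode(H, llr))   # decodes back to the all-zero codeword
```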

  8. Parallelization of the AliRoot event reconstruction by performing a semi- automatic source-code transformation

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    side bus or processor interconnections. Parallelism can only result in a performance gain if memory usage is optimized, memory locality is improved and the communication between threads is minimized. But the domain of concurrent programming has become a field for highly skilled experts, as the implementation of multithreading is difficult, error-prone and labor-intensive. A full re-implementation of existing offline frameworks, like AliRoot in ALICE, for parallel execution is thus unaffordable. An alternative method is to use a semi-automatic source-to-source transformation to obtain a simple parallel design with almost no interference between threads. This reduces the need of rewriting the develop...

  9. Automatic speech recognition (zero crossing method). Automatic recognition of isolated vowels

    International Nuclear Information System (INIS)

    Dupeyrat, Benoit

    1975-01-01

    This note describes a method for recognizing isolated vowels, using preprocessing of the vocal signal. The processing extracts the extrema of the vocal signal and the time intervals separating them (the zero-crossing distances of the first derivative of the signal). The recognition of vowels uses normalized histograms of the values of these intervals. The program computes a distance between the histogram of the sound to be recognized and model histograms built during a learning phase. The results, processed in real time by a minicomputer, are relatively independent of the speaker, provided the fundamental frequency does not vary too much (i.e. speakers of the same sex). (author) [fr
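
    A rough Python sketch of the described preprocessing — extrema extraction, interval histograms, and a histogram distance — applied to synthetic "vowels"; the parameter values are assumptions.

```python
import numpy as np


def interval_histogram(signal, sample_rate, bins=30, max_interval=0.02):
    deriv = np.diff(signal)
    # Indices where the first derivative changes sign = extrema of the signal.
    extrema = np.flatnonzero(np.diff(np.sign(deriv)) != 0)
    intervals = np.diff(extrema) / sample_rate
    hist, _ = np.histogram(intervals, bins=bins, range=(0.0, max_interval))
    return hist / max(hist.sum(), 1)   # normalized histogram


def histogram_distance(h1, h2):
    return np.abs(h1 - h2).sum()


# Toy "vowels": two periodic signals with different spectral content.
fs = 8000
t = np.arange(0, 0.2, 1.0 / fs)
vowel_a = np.sin(2 * np.pi * 120 * t) + 0.6 * np.sin(2 * np.pi * 700 * t)
vowel_i = np.sin(2 * np.pi * 120 * t) + 0.6 * np.sin(2 * np.pi * 2300 * t)

h_a, h_i = (interval_histogram(v, fs) for v in (vowel_a, vowel_i))
print(histogram_distance(h_a, h_i))   # larger distance => easier to tell apart
```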

  10. An Automatic Cloud Detection Method for ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    CHEN Zhenwei

    2015-03-01

    Full Text Available Automatic cloud detection for optical satellite remote sensing images is a significant step in the production system of satellite products. For the browse images catalogued by the ZY-3 satellite, a tree discriminant structure is adopted to carry out cloud detection. The image is divided into sub-images and their features are extracted to classify them as cloud or ground. However, owing to the high complexity of clouds and surfaces and the low resolution of the browse images, traditional classification algorithms based on image features have great limitations. In view of this problem, an enhancement step applied to the original sub-images prior to classification is put forward in this paper to widen the texture difference between clouds and surfaces. Afterwards, using the second moment and first difference of the images, the feature vectors are extended in multi-scale space, and the cloud proportion in the image is estimated through comprehensive analysis. The presented cloud detection algorithm has already been applied in the ZY-3 application system project, and practical experimental results indicate that it significantly improves the accuracy of cloud detection.

  11. Optical Methods For Automatic Rating Of Engine Test Components

    Science.gov (United States)

    Pritchard, James R.; Moss, Brian C.

    1989-03-01

    In recent years, increasing commercial and legislative pressure on automotive engine manufacturers, including increased oil drain intervals, cleaner exhaust emissions and high specific power outputs, has led to increasing demands on lubricating oil performance. Lubricant performance is defined by bench engine tests run under closely controlled conditions. After the test, engines are dismantled and the parts rated for wear and accumulation of deposit. This rating must be carried out consistently in laboratories throughout the world in order to ensure that lubricant quality meets the specified standards. To this end, rating technicians evaluate components following closely defined procedures. This process is time-consuming, inaccurate and subject to drift, requiring regular recalibration of raters by means of international rating workshops. This paper describes two instruments for automatic rating of engine parts. The first uses a laser to determine the degree of polishing of the engine cylinder bore caused by the reciprocating action of the piston. This instrument has been developed to the prototype stage by the NDT Centre at Harwell under contract to Exxon Chemical, and is planned for production within the next twelve months. The second instrument uses red and green filtered light to determine the type, quality and position of deposits formed on the piston surfaces. The latter device has undergone a feasibility study, but no prototype exists.

  12. Purging Musical Instrument Sample Databases Using Automatic Musical Instrument Recognition Methods

    OpenAIRE

    Livshin , Arie; Rodet , Xavier

    2009-01-01

    Compilation of musical instrument sample databases requires careful elimination of badly recorded samples and validation of sample classification into correct categories. This paper introduces algorithms for automatic removal of bad instrument samples using Automatic Musical Instrument Recognition and Outlier Detection techniques. Best evaluation results on a methodically contaminated sound database are achieved using the introdu...

  13. Convergence acceleration of the Proteus computer code with multigrid methods

    Science.gov (United States)

    Demuren, A. O.; Ibraheem, S. O.

    1995-01-01

    This report presents the results of a study to implement convergence acceleration techniques based on the multigrid concept in the two-dimensional and three-dimensional versions of the Proteus computer code. The first section presents a review of the relevant literature on the implementation of the multigrid methods in computer codes for compressible flow analysis. The next two sections present detailed stability analysis of numerical schemes for solving the Euler and Navier-Stokes equations, based on conventional von Neumann analysis and the bi-grid analysis, respectively. The next section presents details of the computational method used in the Proteus computer code. Finally, the multigrid implementation and applications to several two-dimensional and three-dimensional test problems are presented. The results of the present study show that the multigrid method always leads to a reduction in the number of iterations (or time steps) required for convergence. However, there is an overhead associated with the use of multigrid acceleration. The overhead is higher in 2-D problems than in 3-D problems, thus overall multigrid savings in CPU time are in general better in the latter. Savings of about 40-50 percent are typical in 3-D problems, but they are about 20-30 percent in large 2-D problems. The present multigrid method is applicable to steady-state problems and is therefore ineffective in problems with inherently unstable solutions.
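
    To illustrate the multigrid concept itself (not the Proteus implementation), a single two-grid correction cycle for a 1D Poisson model problem can be sketched as follows; the smoother, transfer operators and grid sizes are simplifying assumptions.

```python
import numpy as np


def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    # Weighted Jacobi smoothing for -u'' = f with homogeneous Dirichlet boundaries.
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u


def two_grid_cycle(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                      # pre-smoothing damps high frequencies
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2   # fine-grid residual
    rc = r[::2].copy()                                 # restrict residual to the coarse grid
    hc = 2 * h
    nc = rc.size
    # Solve the coarse error equation directly (tridiagonal Poisson matrix).
    A = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong to fine grid
    u += e                                             # coarse-grid correction
    return jacobi(u, f, h, sweeps=3)                   # post-smoothing


n = 65                                                 # 2**6 + 1 points so the grids nest
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.sin(np.pi * x)                                  # -u'' = f, u(0) = u(1) = 0
u = np.zeros(n)
for _ in range(5):
    u = two_grid_cycle(u, f, h)
print(np.abs(u - np.sin(np.pi * x) / np.pi**2).max())  # small error vs. exact solution
```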

  14. Automatic Morphological Sieving: Comparison between Different Methods, Application to DNA Ploidy Measurements

    Directory of Open Access Journals (Sweden)

    Christophe Boudry

    1999-01-01

    Full Text Available The aim of the present study is to propose alternative automatic methods to the time-consuming interactive sorting of elements for DNA ploidy measurements. One archival brain tumour and two archival breast carcinomas were studied, corresponding to 7120 elements (3764 nuclei, 3356 debris and aggregates). Three automatic classification methods were tested to eliminate debris and aggregates from the DNA ploidy measurements: mathematical morphology (MM), multiparametric analysis (MA) and neural network (NN). Performances were evaluated by reference to interactive sorting. The percentages of debris and aggregates automatically removed reach 63, 75 and 85% for the MM, MA and NN methods, respectively, with false positive rates of 6, 21 and 25%. Information about DNA ploidy abnormalities was globally preserved after automatic elimination of debris and aggregates by the MM and MA methods, as opposed to the NN method, showing that automatic classification methods can offer alternatives to the tedious interactive elimination of debris and aggregates for DNA ploidy measurements of archival tumours.

  15. CNC LATHE MACHINE PRODUCING NC CODE BY USING DIALOG METHOD

    Directory of Open Access Journals (Sweden)

    Yakup TURGUT

    2004-03-01

    Full Text Available In this study, an NC code generation program utilising the dialog method was developed for turning centres. Initially, CNC lathe turning methods and tool-path development techniques were reviewed briefly. Using geometric definition methods, the tool path was generated and a CNC part program was developed for a FANUC control unit. The developed program makes the CNC part program generation process easy. The program was written in the BASIC 6.0 programming language, while the material and cutting tool databases were built and maintained with ACCESS 7.0.

  16. Method for automatic control rod operation using rule-based control

    International Nuclear Information System (INIS)

    Kinoshita, Mitsuo; Yamada, Naoyuki; Kiguchi, Takashi

    1988-01-01

    An automatic control rod operation method using rule-based control is proposed. Its features are as follows: (1) a production system to recognize plant events, determine control actions and realize fast inference (fast selection of a suitable production rule); (2) use of the fuzzy control technique to determine quantitative control variables. The method's performance was evaluated by simulation tests of automatic control rod operation at a BWR plant start-up. The results were as follows: (1) the performance, in terms of stabilization of the controlled variables and the time required for reactor start-up, was superior to that of other methods such as PID control and program control; (2) the processing time to select and interpret a suitable production rule, which is the same as that required for event recognition or determination of a control action, was short enough (below 1 s) for real-time control. The results show that the method is effective for automatic control rod operation. (author)

  17. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    Science.gov (United States)

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  18. Development of an automatic evaluation method for patient positioning error.

    Science.gov (United States)

    Kubota, Yoshiki; Tashiro, Mutsumi; Shinohara, Ayaka; Abe, Satoshi; Souda, Saki; Okada, Ryosuke; Ishii, Takayoshi; Kanai, Tatsuaki; Ohno, Tatsuya; Nakano, Takashi

    2015-07-08

    Highly accurate radiotherapy requires highly accurate patient positioning. At our facility, patient positioning is performed manually by radiology technicians. After the positioning, the positioning error is measured by manually comparing positions on a digital radiography image (DR) with the corresponding positions on a digitally reconstructed radiography image (DRR). This method is prone to error and can be time-consuming because of its manual nature. Therefore, we propose an automated method for measuring the positioning error, to improve patient throughput and achieve higher reliability. The error between a position on the DR and a position on the DRR was calculated to determine the best-matched position using the block-matching method. The zero-mean normalized cross correlation was used as our evaluation function, and a Gaussian weight function was used to increase the importance of pixel positions approaching the isocenter. The accuracy of the calculation method was evaluated using pelvic phantom images, and the method's effectiveness was evaluated on images of prostate cancer patients before positioning, comparing the results with the radiology technicians' measurements. The root mean square error (RMSE) of the calculation method for the pelvic phantom was 0.23 ± 0.05 mm. The correlation coefficients between the calculation method and the technicians' measurements were 0.989 for the phantom images and 0.980 for the patient images. The RMSE of the total evaluation results of positioning for prostate cancer patients using the calculation method was 0.32 ± 0.18 mm. Using the proposed method, we successfully measured residual positioning errors. The accuracy and effectiveness of the method were evaluated for pelvic phantom images and images of prostate cancer patients. In the future, positioning for cancer patients at other sites will be evaluated using the calculation method. Consequently, we expect an improvement in treatment throughput for these other sites.
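
    A hedged sketch of the matching criterion described above — zero-mean normalized cross correlation over candidate shifts, with a Gaussian weight favouring positions near the isocenter. The array sizes, the search window and the per-block weighting scheme are simplifying assumptions, not the published implementation.

```python
import numpy as np


def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0


def best_shift(drr_block, dr_image, top_left, search=10, sigma=50.0, isocenter=(256, 256)):
    """Integer shift of a DRR block that maximizes Gaussian-weighted ZNCC in the DR."""
    h, w = drr_block.shape
    y0, x0 = top_left
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = dr_image[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w]
            if cand.shape != drr_block.shape:
                continue
            # Weight: blocks nearer the isocenter matter more.
            cy, cx = y0 + dy + h / 2, x0 + dx + w / 2
            weight = np.exp(-((cy - isocenter[0]) ** 2 + (cx - isocenter[1]) ** 2)
                            / (2 * sigma ** 2))
            score = weight * zncc(drr_block, cand)
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best


# Toy check: the DR is the DRR shifted by (3, -2) pixels plus noise.
rng = np.random.default_rng(1)
drr = rng.normal(size=(512, 512))
dr = np.roll(np.roll(drr, 3, axis=0), -2, axis=1) + 0.1 * rng.normal(size=(512, 512))
print(best_shift(drr[240:272, 240:272], dr, (240, 240)))   # expect shift ~ (3, -2)
```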

  19. Providing Automatic Support for Heuristic Rules of Methods

    NARCIS (Netherlands)

    Tekinerdogan, B.; Aksit, Mehmet; Demeyer, Serge; Bosch, H.G.P.; Bosch, Jan

    In method-based software development, software engineers create artifacts based on the heuristic rules of the adopted method. Most CASE tools, however, do not actively assist software engineers in applying the heuristic rules. To provide an active support, the rules must be formalized, implemented

  20. An asynchronous writing method for restart files in the gysela code in prevision of exascale systems*

    Directory of Open Access Journals (Sweden)

    Thomine O.

    2013-12-01

    Full Text Available The present work deals with an optimization procedure developed in the full-f global GYrokinetic SEmi-LAgrangian code (GYSELA). Optimizing the writing of the restart files is necessary to reduce the computing impact of crashes. These files require a very large memory space, particularly so for very large mesh sizes. The limited bandwidth of the data pipe between the computing nodes and the storage system induces a non-scalable part in the GYSELA code, which increases with the mesh size. Indeed, the time to transfer data from RAM to storage depends linearly on the file size. A non-synchronized file-writing procedure is therefore crucial. A new GYSELA module has been developed. This asynchronous procedure allows the frequent writing of the restart files, whilst preventing a severe slowing down due to the limited writing bandwidth. The method has been extended to generate a checksum control of the restart files and to automatically rerun the code after a crash of any cause.
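
    Illustrative sketch (not the GYSELA implementation, which is an HPC code): the underlying idea of writing a restart file in the background while recording a checksum for later integrity checking, expressed in Python with assumed file naming.

    import hashlib
    import threading

    def write_restart_async(path, data_bytes):
        """Write a restart file in a background thread; return the thread and its checksum."""
        checksum = hashlib.sha256(data_bytes).hexdigest()

        def _writer():
            with open(path, "wb") as f:
                f.write(data_bytes)
            with open(path + ".sha256", "w") as f:
                f.write(checksum)

        t = threading.Thread(target=_writer)
        t.start()  # the main computation can continue while the file is being written
        return t, checksum

    def verify_restart(path):
        """Check the stored checksum before restarting from the file."""
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        with open(path + ".sha256") as f:
            return actual == f.read().strip()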

  1. Automatic diagnostic methods of nuclear reactor collected signals

    International Nuclear Information System (INIS)

    Lavison, P.

    1978-03-01

    This work is the first phase of an overall study of diagnosis limited to problems of monitoring the operating state; this makes it possible to show what pattern recognition methods contribute at the processing level. The present problem is the detection of control operations. The analysis of the state of the reactor gives a decision which is compared with the history of the control operations; if there is no correspondence, the state under analysis is declared 'abnormal'. The system under analysis is described and the problem to be solved is defined. The Gaussian parametric approach and the methods to evaluate the error probability are then treated, followed by non-parametric methods, and an on-line detection scheme has been tested experimentally. Finally, a non-linear transformation has been studied to further reduce the error probability. All the methods presented have been tested and compared with a single quality index: the error probability [fr

  2. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluids Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  3. An automatic method to quantify the vibration properties of human vocal folds via videokymography

    NARCIS (Netherlands)

    Qiu, QJ; Schutte, HK; Gu, L; Yu, QL

    2003-01-01

    The study offers an automatic quantitative method to obtain vibration properties of human vocal folds via videokymography. The presented method is based on image processing and combines an active contour model with a genetic algorithm to improve detection precision and processing speed, can

  4. Support subspaces method for synthetic aperture radar automatic target recognition

    Directory of Open Access Journals (Sweden)

    Vladimir Fursov

    2016-09-01

    Full Text Available This article offers a new object recognition approach that achieves high quality on synthetic aperture radar images. The approach includes image preprocessing, clustering and recognition stages. At the image preprocessing stage, we compute the mass centre of object images for better image matching. A conjugation index of a recognition vector is used as a distance function at the clustering and recognition stages. We suggest a construction of so-called support subspaces, which provide high recognition quality with a significant dimension reduction. The results of the experiments demonstrate that the proposed method provides higher recognition quality (97.8%) than such methods as the support vector machine (95.9%), deep learning based on a multilayer auto-encoder (96.6%) and adaptive boosting (96.1%). The proposed method is robust to objects observed from different angles.

  5. Statistical and neural net methods for automatic glaucoma diagnosis determination

    Czech Academy of Sciences Publication Activity Database

    Pluháček, F.; Pospíšil, Jaroslav

    2004-01-01

    Roč. 1, č. 2 (2004), s. 12-24 ISSN 1644-3608 Institutional research plan: CEZ:AV0Z1010921 Keywords : glaucoma * diagnostic methods * pallor * image analysis * statistical evaluation Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.375, year: 2004

  6. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method.

    Science.gov (United States)

    Veta, Mitko; van Diest, Paul J; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P W

    2016-01-01

    Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. The development of automatic mitosis detection methods has received large interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an "external" dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects from smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial

  7. Method of automatic detection of tumors in mammogram

    Science.gov (United States)

    Xie, Mei; Ma, Zheng

    2001-09-01

    Prevention and early diagnosis of tumors in mammograms are of foremost importance. Unfortunately, these images are often corrupted by noise due to film grain and the background texture of the images, which prevents isolation of the target information from the background and often results in the suspicious area being analyzed inaccurately. In order to achieve more accurate detection and segmentation of tumors, the quality of the images needs to be improved, including suppressing noise and enhancing the contrast of the image. This paper presents a new adaptive histogram threshold approach for segmentation of suspicious mass regions in digitized images. The method uses multi-scale wavelet decomposition and a threshold selection criterion based on the transformed image's histogram. This separation helps eliminate background noise and discriminates between objects of different size and shape. The tumors are then extracted using an adaptive Bayesian classifier. We demonstrate that the proposed method can greatly improve the accuracy of tumor detection.

  8. A METHOD OF AUTOMATIC DETERMINATION OF THE NUMBER OF THE ELECTRICAL MOTORS SIMULTANEOUSLY WORKING IN GROUP

    Directory of Open Access Journals (Sweden)

    A. V. Voloshko

    2016-11-01

    Full Text Available Purpose. To propose a method for automatically determining the number of operating high-voltage electric motors in a group of the same type, based on the analysis of power consumption data obtained from the electric power meters installed at the motor connections. Results. An algorithm for the automatic determination of the number of motors working in the same group was developed; it is based on determining the minimum motor power value above which a motor is considered to be on. Originality. For the first time, a method for automatically determining the number of working high-voltage motors of the same type in a group is proposed. Practical value. The results obtained may be used to introduce automated accounting of the running time of each motor and to calculate the parameters of the equivalent induction or synchronous motor.
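
    Illustrative sketch of the decision rule described in the record (a motor is counted as running when its metered power exceeds a minimum value); the readings and the threshold below are hypothetical.

    def count_running_motors(power_readings_kw, p_min_kw):
        """Count motors whose metered active power exceeds the assumed 'on' threshold."""
        return sum(1 for p in power_readings_kw if p >= p_min_kw)

    # Hypothetical meter readings (kW) for one group of identical motors
    readings = [315.2, 0.4, 298.7, 310.0, 0.0, 305.5]
    print(count_running_motors(readings, p_min_kw=50.0))  # -> 4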

  9. A new method for the automatic calculation of prosody

    International Nuclear Information System (INIS)

    GUIDINI, Annie

    1981-01-01

    An algorithm is presented for the calculation of the prosodic parameters for speech synthesis. It uses the melodic patterns, composed of rising and falling slopes, suggested by G. CAELEN, and rests on: (1) an analysis into units of meaning to determine a melodic pattern; (2) the calculation of the numeric values for the prosodic variations of each syllable; and (3) the use of a table of vocalic values for the three parameters for each vowel according to the consonantal environment, and of a table of standard durations for consonants. This method was applied in the 'SARA' synthesis program with satisfactory results. (author) [fr

  10. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    Science.gov (United States)

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.

  11. Accuracy of structure-based sequence alignment of automatic methods

    Directory of Open Access Journals (Sweden)

    Lee Byungkook

    2007-09-01

    Full Text Available Abstract Background Accurate sequence alignments are essential for homology searches and for building three-dimensional structural models of proteins. Since structure is better conserved than sequence, structure alignments have been used to guide sequence alignments and are commonly used as the gold standard for sequence alignment evaluation. Nonetheless, as far as we know, there is no report of a systematic evaluation of pairwise structure alignment programs in terms of the sequence alignment accuracy. Results In this study, we evaluate CE, DaliLite, FAST, LOCK2, MATRAS, SHEBA and VAST in terms of the accuracy of the sequence alignments they produce, using sequence alignments from NCBI's human-curated Conserved Domain Database (CDD as the standard of truth. We find that 4 to 9% of the residues on average are either not aligned or aligned with more than 8 residues of shift error and that an additional 6 to 14% of residues on average are misaligned by 1–8 residues, depending on the program and the data set used. The fraction of correctly aligned residues generally decreases as the sequence similarity decreases or as the RMSD between the Cα positions of the two structures increases. It varies significantly across CDD superfamilies whether shift error is allowed or not. Also, alignments with different shift errors occur between proteins within the same CDD superfamily, leading to inconsistent alignments between superfamily members. In general, residue pairs that are more than 3.0 Å apart in the reference alignment are heavily (>= 25% on average misaligned in the test alignments. In addition, each method shows a different pattern of relative weaknesses for different SCOP classes. CE gives relatively poor results for β-sheet-containing structures (all-β, α/β, and α+β classes, DaliLite for "others" class where all but the major four classes are combined, and LOCK2 and VAST for all-β and "others" classes. Conclusion When the sequence

  12. Automatic ultrasonic image analysis method for defect detection

    International Nuclear Information System (INIS)

    Magnin, I.; Perdrix, M.; Corneloup, G.; Cornu, B.

    1987-01-01

    Ultrasonic examination of austenitic steel weld seams raises well-known problems of interpreting signals perturbed by this type of material. The JUKEBOX ultrasonic imaging system developed at the Cadarache Nuclear Research Center provides a major improvement in the general area of defect localization and characterization, based on processing overall images obtained by (X, Y) scanning. (X, time) images are formed by juxtaposing input signals. A series of parallel images shifted along the Y-axis is also available. The authors present a novel defect detection method based on analysing the time positions of the maxima and minima recorded on (X, time) images. This position is statistically stable when a defect is encountered, and is random enough under spurious noise conditions to constitute a discriminating parameter. The investigation involves calculating the variance of this trace: this parameter is then used for detection purposes. Correlation with parallel images enhances detection reliability. A significant increase in the signal-to-noise ratio during tests on artificial defects is shown.
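
    Illustrative sketch (not the JUKEBOX implementation) of the discriminating parameter described: the variance of the time positions of the per-column extrema in an (X, time) image, with a low variance indicating a defect. The threshold is an assumption.

    import numpy as np

    def trace_variance(x_time_image):
        """Variance of the time positions of the per-column maxima and minima."""
        t_max = np.argmax(x_time_image, axis=0)  # time index of the maximum for each X
        t_min = np.argmin(x_time_image, axis=0)  # time index of the minimum for each X
        return np.var(t_max), np.var(t_min)

    def defect_present(x_time_image, var_threshold):
        """Flag a defect when the extrema trace is stable, i.e. its variance is low."""
        v_max, v_min = trace_variance(x_time_image)
        return min(v_max, v_min) < var_threshold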

  13. Method and apparatus for automatic control of a humanoid robot

    Science.gov (United States)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Reiland, Matthew J (Inventor); Sanders, Adam M (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object level, end-effector level, and/or joint space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object level, end-effector level, and/or joint space-level control of the robot, and allows for functional-based GUI to simplify implementation of a myriad of operating modes.

  14. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  15. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We proposed a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to geographical locations of the reference and mosaic images and feature points on both the reference and mosaic images are extracted by a scale-invariant feature transform (SIFT) algorithm only from the overlapped region. Then, the RANSAC method is used to match feature points of both images. Finally, the two images are fused into a seamlessly panoramic image by the simple linear weighted fusion method or other method. The proposed method is implemented in C++ language based on OpenCV and GDAL, and tested by Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
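
    Illustrative sketch of the pipeline the record outlines (SIFT features on the overlap region, RANSAC-based matching, then warping for fusion), using OpenCV (version 4.4 or later for SIFT_create). It assumes single-band 8-bit inputs and omits the geolocation-based overlap computation and the final weighted blending.

    import cv2
    import numpy as np

    def register_pair(ref_overlap, mov_overlap, mov_full, out_height, out_width):
        """Estimate the ref<-mov transform from SIFT matches and warp the moving image."""
        sift = cv2.SIFT_create()
        kp_ref, des_ref = sift.detectAndCompute(ref_overlap, None)
        kp_mov, des_mov = sift.detectAndCompute(mov_overlap, None)

        # Nearest-neighbour matching with Lowe's ratio test
        matcher = cv2.BFMatcher()
        raw = matcher.knnMatch(des_mov, des_ref, k=2)
        good = [m for m, n in raw if m.distance < 0.75 * n.distance]

        src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to false matches

        return cv2.warpPerspective(mov_full, H, (out_width, out_height))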

  16. The development of an automatic scanning method for CR-39 neutron dosimeter

    International Nuclear Information System (INIS)

    Tawara, Hiroko; Miyajima, Mitsuhiro; Sasaki, Shin-ichi; Hozumi, Ken-ichi

    1989-01-01

    A method of measuring low-level neutron dose has been developed with CR-39 track detectors using an automatic scanning system. The system is composed of an optical microscope with a video camera, an image processor and a personal computer. The focus of the microscope and the X-Y stage are controlled from the computer. From the results of automatic measurements, the minimum detectable neutron dose is estimated at 4.6 mrem in a uniform neutron field with an energy spectrum equivalent to that of an Am-Be source. (author)

  17. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic...... for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured...

  18. Method and apparatus for mounting or dismounting a semi-automatic twist-lock

    NARCIS (Netherlands)

    Klein Breteler, A.J.; Tekeli, G.

    2001-01-01

    The invention relates to a method for mounting or dismounting a semi-automatic twistlock at a corner of a deck container, wherein the twistlock is mounted or dismounted on a quayside where a ship may be docked for loading or unloading, in a loading or unloading terminal installed on the quayside,

  19. Assessment of automatic segmentation of teeth using a watershed-based method.

    Science.gov (United States)

    Galibourg, Antoine; Dumoncel, Jean; Telmon, Norbert; Calvet, Adèle; Michetti, Jérôme; Maret, Delphine

    2018-01-01

    Tooth 3D automatic segmentation (AS) is being actively developed in research and clinical fields. Here, we assess the effect of automatic segmentation using a watershed-based method on the accuracy and reproducibility of 3D reconstructions in volumetric measurements by comparing it with a semi-automatic segmentation(SAS) method that has already been validated. The study sample comprised 52 teeth, scanned with micro-CT (41 µm voxel size) and CBCT (76; 200 and 300 µm voxel size). Each tooth was segmented by AS based on a watershed method and by SAS. For all surface reconstructions, volumetric measurements were obtained and analysed statistically. Surfaces were then aligned using the SAS surfaces as the reference. The topography of the geometric discrepancies was displayed by using a colour map allowing the maximum differences to be located. AS reconstructions showed similar tooth volumes when compared with SAS for the 41 µm voxel size. A difference in volumes was observed, and increased with the voxel size for CBCT data. The maximum differences were mainly found at the cervical margins and incisal edges but the general form was preserved. Micro-CT, a modality used in dental research, provides data that can be segmented automatically, which is timesaving. AS with CBCT data enables the general form of the region of interest to be displayed. However, our AS method can still be used for metrically reliable measurements in the field of clinical dentistry if some manual refinements are applied.

  20. A procedure for automatic updating of total cross section libraries of the Mercure IV code for nuclear safeguard applications

    International Nuclear Information System (INIS)

    Vicini, C.; Amici, S.

    1991-01-01

    The utilization of Monte Carlo codes for the simulation of the measurement techniques used in the field of Nuclear Safeguards, and the high performance required (error < 1%), call for the implementation of libraries with updated nuclear data. MERCURE IV is a computer code especially developed for the simulation of non-destructive measurement techniques. In addition to an analysis of the MERCURE IV code features, this work presents an algorithm developed for generating the library of total gamma cross sections used by the code

  1. A Classification Method of Inquiry E-mails for Describing FAQ with Automatic Setting Mechanism of Judgment Thresholds

    Science.gov (United States)

    Tsuda, Yuki; Akiyoshi, Masanori; Samejima, Masaki; Oka, Hironori

    In this paper the authors propose a classification method for inquiry e-mails used in describing FAQ (Frequently Asked Questions), together with an automatic setting mechanism for judgment thresholds. In this method, a dictionary used for the classification of inquiries is generated and updated automatically from statistical information on characteristic words in clusters, and inquiries are classified into the proper cluster by using the dictionary. Threshold values are automatically set by using statistical information.

  2. Coupling of partitioned physics codes with quasi-Newton methods

    CSIR Research Space (South Africa)

    Haelterman, R

    2017-03-01

    Full Text Available Many physics problems can only be studied by coupling various numerical codes, each modeling a subaspect of the physics problem that is addressed. Often, each of these codes needs to be considered as a black box, either because the codes were...

  3. An automatic rat brain extraction method based on a deformable surface model.

    Science.gov (United States)

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic imaging processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Commissioning (Method Development) of the Panasonic UD-794 Automatic Thermoluminescent Dosemeter (TLD) Irradiator

    International Nuclear Information System (INIS)

    McKittrick Leo

    2005-08-01

    This study is presented in two parts. Part 1, the Literature Survey, examines the history, theory and application of TL dosimetry. A general overview of the thermoluminescent dosemeter is presented together with a complete in-depth look at the Panasonic UD-716 TLD Reader. The irradiation and calibration of TL dosemeters is also examined, together with an overview of past papers and research carried out on related topics. Part 2 documents the commissioning of the Panasonic UD-794 Automatic TLD Irradiator; this part includes: the methods and procedures used with several dosimetry instruments and materials; method development with the irradiator; results and findings from the irradiator, using the methods developed, compared to a certified method of irradiation; investigations of irradiator parameters and features; and the conclusions drawn from the commissioning study.

  5. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

    The present status of Monte Carlo code development is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)

  6. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    Science.gov (United States)

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, an image segmentation by selecting threshold values is required, which can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. To compare between visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface (p=0.79). Although visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
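
    Illustrative sketch of automatic threshold-based volume measurement; Otsu's method is used here as a generic stand-in for the manufacturer's Automatic Threshold Tool, and the assumption that canal voxels are darker than dentine is ours, not the authors'.

    import numpy as np
    from skimage.filters import threshold_otsu

    def canal_volume(volume, voxel_volume_mm3):
        """Automatic global threshold (Otsu) and the resulting canal volume estimate."""
        t = threshold_otsu(volume)        # threshold derived from the image histogram
        mask = volume < t                 # canal voxels assumed darker than dentine
        return mask, mask.sum() * voxel_volume_mm3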

  7. The Base 32 Method: An Improved Method for Coding Sibling Constellations.

    Science.gov (United States)

    Perfetti, Lawrence J. Carpenter

    1990-01-01

    Offers new sibling constellation coding method (Base 32) for genograms using binary and base 32 numbers that saves considerable microcomputer memory. Points out that new method will result in greater ability to store and analyze larger amounts of family data. (Author/CM)

  8. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

    Full Text Available Taking architectural plans as the data source, we propose a method which can automatically generate indoor map spatial data. Firstly, referring to the spatial data demands of indoor maps, we analyze the basic characteristics of architectural plans and introduce the concepts of wall segment, adjoining node and adjoining wall segment, from which the basic flow of automatic indoor map spatial data generation is established. Then, according to the adjoining relation between wall lines at their intersection with a column, we construct a repair method for wall connectivity in relation to the column. Using gradual expansion and graphic reasoning to judge the local feature type of the wall symbol on both sides of a door or window, and by updating the enclosing rectangle of the door or window, we develop a repair method for wall connectivity in relation to the door or window and a method for transforming a door or window into an indoor map point feature. Finally, on the basis of the geometric relation between the median lines of adjoining wall segments, a wall center-line extraction algorithm is presented. Taking the architectural plan of an exhibition hall as an example, we performed experiments; the results show that the proposed methods handle various complex situations well and effectively automate the extraction of indoor map spatial data.

  9. Development of an automatic validation system for simulation codes of the fusion research; Entwicklung eines automatischen Validierungssystems fuer Simulationscodes der Fusionsforschung

    Energy Technology Data Exchange (ETDEWEB)

    Galonska, Andreas

    2010-03-15

    In the present master thesis the development of an automatic validation system for the simulation code ERO is documented. This 3D Monte Carlo code models the transport of impurities as well as plasma-wall interaction processes and is of great importance for fusion research. The validation system is based on JuBE (Julich Benchmarking Environment), whose flexibility allows the system to be extended with little effort to other codes, for instance those operated in the framework of the EU Task Force ITM (Integrated Tokamak Modelling). The chosen solution - JuBE together with a special program for the 'intelligent' comparison of actual and reference output data of ERO - is described and justified. The use of this program and the configuration of JuBE are described in detail. Simulations of different plasma experiments, which serve as reference cases for the automatic validation, are explained. The working of the system is illustrated by the description of a test case, which deals with localizing a fault in, and improving, the parallelization of an important ERO module (tracking of physically eroded particles). It is demonstrated how the system reacts to a failed validation and how the subsequently performed error correction leads to a positive result. Finally, a speed-up curve of the parallelization is established by means of the output data of JuBE.

  10. Automatic counting method for complex overlapping erythrocytes based on seed prediction in microscopic imaging

    Directory of Open Access Journals (Sweden)

    Xudong Wei

    2016-09-01

    Full Text Available Blood cell counting is an important medical test that helps medical staff diagnose various symptoms and diseases. An automatic segmentation of complex overlapping erythrocytes based on seed prediction in microscopic imaging is proposed. The four main innovations of this research are as follows: (1) regions of erythrocytes are extracted rapidly and accurately based on the G component; (2) the K-means algorithm is applied to edge detection of overlapping erythrocytes; (3) traces of the erythrocytes' biconcave shape are utilized to predict each erythrocyte's position in overlapping clusters; (4) a new automatic counting method aimed at complex overlapping erythrocytes is presented. The experimental results show that the proposed method is efficient and accurate with very little running time. The average accuracy of the proposed method reaches 97.0%.
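
    Illustrative sketch of the first two steps named in the record (region extraction from the G component and K-means applied to the edge pixels of an overlapping clump); the seed prediction from the biconcave shape and the final counting are not reproduced, and the use of Otsu thresholding is an assumption.

    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.segmentation import find_boundaries
    from sklearn.cluster import KMeans

    def erythrocyte_regions(rgb_image):
        """Step 1: extract cell regions from the G component (cells darker than plasma)."""
        g = rgb_image[:, :, 1].astype(float)
        return g < threshold_otsu(g)

    def cluster_edge_pixels(mask, n_clusters):
        """Step 2 (simplified): K-means on the boundary pixels of an overlapping clump."""
        ys, xs = np.nonzero(find_boundaries(mask))
        coords = np.column_stack([ys, xs])
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(coords)
        return coords, labels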

  11. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluids Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like topography with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsula or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain in convex polygon shape in each level can be extracted in an advancing scheme. In this paper, several examples were used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method.

  12. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Directory of Open Access Journals (Sweden)

    Dorothée Coppieters ’t Wallant

    2016-01-01

    Full Text Available Sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interest, which hampers direct comparisons and meta-analyses. In this review, sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.

  13. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods.

    Science.gov (United States)

    Coppieters 't Wallant, Dorothée; Maquet, Pierre; Phillips, Christophe

    2016-01-01

    Sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interest, which hampers direct comparisons and meta-analyses. In this review, sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.
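
    The two spindle records above are reviews rather than a single algorithm. As an illustration of a processing principle shared by many of the detectors they survey (sigma-band filtering followed by amplitude thresholding of the envelope), a minimal SciPy sketch follows; the band limits, threshold rule and minimum duration are assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def detect_spindles(eeg, fs, band=(11.0, 16.0), thresh_sd=3.0, min_dur=0.5):
        """Return (start, end) sample indices of candidate spindles."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        sigma = filtfilt(b, a, eeg)            # sigma-band activity
        env = np.abs(hilbert(sigma))           # instantaneous amplitude envelope
        above = env > env.mean() + thresh_sd * env.std()

        events, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if (i - start) / fs >= min_dur:  # keep only events of sufficient duration
                    events.append((start, i))
                start = None
        return events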

  14. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data

    Science.gov (United States)

    Li, Lin; Li, Dalin; Zhu, Haihong; Li, You

    2016-10-01

    Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. The two components of each individual tree - a trunk and a crown - can be extracted by the dual growing method. This method consists of a coarse classification, through which most artifacts are removed; the automatic selection of appropriate seeds for individual trees, by which the usual manual initial setting is avoided; a dual growing process that separates one tree from others by circumscribing a trunk within an adaptive growing radius and segmenting a crown in constrained growing regions; and a refining process that extracts a single trunk from other interlaced objects. The method is verified on two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.

  15. A method for automatically constructing the initial contour of the common carotid artery

    Directory of Open Access Journals (Sweden)

    Yara Omran

    2013-10-01

    Full Text Available In this article we propose a novel method to automatically set the initial contour that is used by the active contours algorithm. The proposed method exploits accumulative intensity profiles to locate points on the arterial wall. The intensity profiles of sections that intersect the artery show distinguishable characteristics that make it possible to separate them from the profiles of sections that do not intersect the artery walls. The proposed method is applied to ultrasound images of the transverse section of the common carotid artery, but it can be extended to images of the longitudinal section. The intensity profiles are classified using the support vector machine algorithm, and the results of different kernels are compared. The features used for the classification are basically statistical features of the intensity profiles. The echogenicity of the arterial lumen gives the profiles that intersect the artery a special shape that helps distinguish these profiles from other, general profiles. The outlining of the arterial walls may seem a classic task in image processing; however, most of the methods used to outline the artery start from a manual or semi-automatic initial contour. The proposed method is therefore valuable in automating the entire process of artery detection and segmentation.

  16. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
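
    Illustrative sketch of the second (supervised) approach in the record: the four named classifiers evaluated with leave-one-out cross validation; the feature matrix X and labels y are assumed to be precomputed from the segmentations.

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.svm import SVC

    def evaluate_failure_detectors(X, y):
        """Leave-one-out accuracy of the four classifiers named in the record."""
        classifiers = {
            "LDA": LinearDiscriminantAnalysis(),
            "LR": LogisticRegression(max_iter=1000),
            "SVM": SVC(kernel="rbf"),
            "RFC": RandomForestClassifier(n_estimators=200, random_state=0),
        }
        loo = LeaveOneOut()
        return {name: cross_val_score(clf, X, y, cv=loo).mean()
                for name, clf in classifiers.items()}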

  17. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish and two representing non-coding ITS barcodes (rust fungi and brown algae. Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ and Maximum likelihood (ML methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40% for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37% for 1094 brown algae queries, both using ITS barcodes.

  18. Development of automatic extraction method of left ventricular contours on long axis view MR cine images

    International Nuclear Information System (INIS)

    Utsunomiya, Shinichi; Iijima, Naoto; Yamasaki, Kazunari; Fujita, Akinori

    1995-01-01

    In MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from long axis view MR cine images. The ROI extraction has had to be done manually, because automation of the extraction is difficult. A long axis view left ventricular contour consists of a cardiac wall part and an aortic valve part. The above-mentioned difficulty is due to the low contrast on the cardiac wall part and the disappearance of edges on the aortic valve part. In this paper, we report a new automatic extraction method for long axis view MR cine images, which needs only three manually indicated points on the first image to extract all the contours from the whole sequence of images. First, candidate points of a contour are detected by edge detection. Then, by selecting the best-matched combination of candidate points with dynamic programming, the cardiac wall part is automatically extracted. The aortic valve part is manually extracted for the first image by indicating both end points, and is automatically extracted for the rest of the images by utilizing the aortic valve motion characteristics throughout a cardiac cycle. (author)

  19. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before a calculation with Geant4, the calculation model needs to be established; it can be described using the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most current modeling programs; in particular, some of them are not accurate or are adapted only to a specific CAD format. To convert CAD models into GDML models accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is the handling of the CAD model, represented with boundary representation (B-REP), and the GDML model, represented with constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM), and tested with several models including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  20. An automatic glioma grading method based on multi-feature extraction and fusion.

    Science.gov (United States)

    Zhan, Tianming; Feng, Piaopiao; Hong, Xunning; Lu, Zhenyu; Xiao, Liang; Zhang, Yudong

    2017-07-20

    An accurate assessment of tumor malignancy grade in the preoperative situation is important for clinical management. However, the manual grading of gliomas from MRIs is both a tiresome and time consuming task for radiologists. Thus, it is a priority to design an automatic and effective computer-aided diagnosis (CAD) tool to assist radiologists in grading gliomas. To design an automatic computer-aided diagnosis for grading gliomas using multi-sequence magnetic resonance imaging. The proposed method consists of two steps: (1) the features of high and low grade gliomas are extracted from multi-sequence magnetic resonance images, and (2) then, a KNN classifier is trained to grade the gliomas. In the feature extraction step, the intensity, volume, and local binary patterns (LBP) of the gliomas are extracted, and PCA is used to reduce the data dimension. The proposed "Intensity-Volume-LBP-PCA-KNN" method is validated on the MICCAI 2015 BraTS challenge dataset, and an average grade accuracy of 87.59% is obtained. The proposed method is an effective method for automatically grading gliomas and can be applied to real situations.
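
    Illustrative sketch of the two-stage pipeline described (intensity, volume and LBP features reduced with PCA, then a KNN classifier); the multi-sequence feature extraction is reduced here to a single masked volume per case, and all parameter values are assumptions.

    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def case_features(volume, tumor_mask, voxel_volume_mm3):
        """Intensity statistics, tumor volume, and an LBP histogram from the central slice."""
        vox = volume[tumor_mask]
        central = volume[:, :, volume.shape[2] // 2]
        lbp = local_binary_pattern(central, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, density=True)
        return np.concatenate([[vox.mean(), vox.std(),
                                tumor_mask.sum() * voxel_volume_mm3], lbp_hist])

    def train_grader(features, grades, n_components=5, k=5):
        """Dimension reduction with PCA followed by a KNN grade classifier."""
        model = make_pipeline(StandardScaler(),
                              PCA(n_components=n_components),
                              KNeighborsClassifier(n_neighbors=k))
        return model.fit(features, grades)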

  1. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method which converts bidirectionally between CAD models and primitive solids. • The method was improved from a conversion method between CAD models and half spaces. • The method was tested with the ITER model, which validated its correctness and efficiency. • The method was integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate and prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed which can convert in both directions between a CAD model and a Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are then generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of the method were demonstrated.

  2. Systems and methods for automatically identifying and linking names in digital resources

    Science.gov (United States)

    Parker, Charles T.; Lyons, Catherine M.; Roston, Gerald P.; Garrity, George M.

    2017-06-06

    The present invention provides systems and methods for automatically identifying name-like-strings in digital resources, matching these name-like-strings against a set of names held in an expertly curated database, and, for those name-like-strings found in said database, enhancing the content by associating additional matter with the name, wherein said matter includes information about the names that is held within said database and pointers to other digital resources which include the same name and its synonyms.

  3. Technical characterization by image analysis: an automatic method of mineralogical studies

    International Nuclear Information System (INIS)

    Oliveira, J.F. de

    1988-01-01

    The application of a fully automated modern image analysis method to the study of grain size distribution, modal assays, degree of liberation and mineralogical associations is discussed. The image analyser is interfaced with a scanning electron microscope and an energy-dispersive X-ray analyser. The image generated by backscattered electrons is analysed automatically, and the system has been used in assessment studies of applied mineralogy as well as in process control in the mining industry. (author) [pt

  4. Automatic planning for robots: review of methods and some ideas about structure and learning

    Energy Technology Data Exchange (ETDEWEB)

    Cuena, J.; Salmeron, C.

    1983-01-01

    After a brief review of the problems involved in the design of an automatic planner system, attention is focused on the particular problems that appear when the planner is used to control the actions of a robot. In conclusion, the introduction of learning techniques to improve the efficiency of a planner is suggested, and a method for this, currently under development, is presented. 14 references.

  5. Method of automatic image registration of three-dimensional range of archaeological restoration

    International Nuclear Information System (INIS)

    Garcia, O.; Perez, M.; Morales, N.

    2012-01-01

    We propose an automatic registration system for the reconstruction of various positions of a large object based on a static structured light pattern. The system combines stereo vision technology, a structured light pattern, the positioning system of the vision sensor and an algorithm that simplifies the process of finding correspondences for the modeling of large objects. A new structured light pattern based on a Kautz sequence is proposed; using this pattern as a static reference, a new registration method is implemented. (Author)

  6. Recent advances in neutral particle transport methods and codes

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1996-01-01

    An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include multitasking on Cray platforms running the UNICOS operating system, the Adjacent-cell Preconditioning acceleration scheme, and graphics codes for displaying computed quantities such as the flux. Further developments for TORT and its companion codes to enhance its present capabilities, as well as to expand its range of applications, are discussed. Speculations on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, are also mentioned.

  7. Sparse deconvolution method for ultrasound images based on automatic estimation of reference signals.

    Science.gov (United States)

    Jin, Haoran; Yang, Keji; Wu, Shiwei; Wu, Haiteng; Chen, Jian

    2016-04-01

    Sparse deconvolution is widely used in the field of non-destructive testing (NDT) for improving the temporal resolution. Generally, the reference signals involved in sparse deconvolution are measured from the reflection echoes of standard plane block, which cannot accurately describe the acoustic properties at different spatial positions. Therefore, the performance of sparse deconvolution will deteriorate, due to the deviations in reference signals. Meanwhile, it is inconvenient for automatic ultrasonic NDT using manual measurement of reference signals. To overcome these disadvantages, a modified sparse deconvolution based on automatic estimation of reference signals is proposed in this paper. By estimating the reference signals, the deviations would be alleviated and the accuracy of sparse deconvolution is therefore improved. Based on the automatic estimation of reference signals, regional sparse deconvolution is achievable by decomposing the whole B-scan image into small regions of interest (ROI), and the image dimensionality is significantly reduced. Since the computation time of proposed method has a power dependence on the signal length, the computation efficiency is therefore improved significantly with this strategy. The performance of proposed method is demonstrated using immersion measurement of scattering targets and steel block with side-drilled holes. The results verify that the proposed method is able to maintain the vertical resolution enhancement and noise-suppression capabilities in different scenarios. Copyright © 2016 Elsevier B.V. All rights reserved.
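
    The record's contribution is the automatic estimation of the reference signals; the sparse deconvolution itself is commonly posed as an L1-regularized least-squares problem. A generic ISTA sketch of that inner step follows (not the authors' solver), assuming the estimated reference wavelet h and a measured A-scan y are given.

    import numpy as np

    def ista_sparse_deconvolution(y, h, lam=0.05, n_iter=200):
        """Minimize 0.5*||H x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
        n = len(y)
        # Convolution matrix built from the (estimated) reference signal h
        H = np.column_stack([np.convolve(np.eye(n)[k], h)[:n] for k in range(n)])
        L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(n)
        for _ in range(n_iter):
            z = x - H.T @ (H @ x - y) / L      # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x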

  8. Development of automatic control method for cryopump system for JT-60 neutral beam injector

    International Nuclear Information System (INIS)

    Shibanuma, Kiyoshi; Akino, Noboru; Dairaku, Masayuki; Ohuchi, Yutaka; Shibata, Takemasa

    1991-10-01

    A cryopump system for the JT-60 neutral beam injector (NBI) is composed of 14 cryopumps with the largest total pumping speed of 20,000 m³/s in the world, which are cooled by liquid helium through a long-distance liquid helium transferline of about 500 m from a helium refrigerator with the largest capacity of 3000 W at 3.6 K in Japan. An automatic control method for the cryopump system has been developed and tested. Features of the automatic control method are as follows. 1) Suppression control of the thermal imbalance in cooling-down of the 14 cryopumps. 2) Stable cooling control of the cryopump due to liquid helium supply to six cryopanels by natural circulation in steady-state mode. 3) Stable liquid helium supply control for the cryopumps from the liquid helium dewar in all operation modes of the cryopumps, considering the helium quantities held in respective components of the closed helium loop. 4) Stable control of the helium refrigerator against fluctuations in the thermal load from the cryopumps and changes of the operation mode of the cryopumps. In the automatic operation of the cryopump system by the newly developed control method, the cryopump system including the refrigerator was stably operated in all operation modes of the cryopumps, so that the cool-down of the 14 cryopumps was completed in 16 hours from the start of cool-down of the system and the cryopumps were stably cooled by natural circulation cooling in steady-state mode. (author)

  9. [A wavelet-transform-based method for the automatic detection of late-type stars].

    Science.gov (United States)

    Liu, Zhong-tian; Zhao, Rrui-zhen; Zhao, Yong-heng; Wu, Fu-chao

    2005-07-01

    The LAMOST project, the world's largest sky survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for automatic late-type star detection have been reported in the literature up to now. The present study is intended to explore possible ways to deal with this issue. Here, by "late-type stars" we mean those stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a wavelet transform with 5 scales on the late-type star spectra, the frequency spectrum of the transformed coefficients at the 5th scale consistently manifests a unimodal distribution, and the energy of the frequency spectrum is largely concentrated in a small neighborhood centered around the unique peak. However, for the spectra of other celestial bodies, the corresponding frequency spectrum is multimodal and the energy of the frequency spectrum is dispersed. Based on such a finding, the authors presented a wavelet-transform-based automatic late-type star detection method. The proposed method is shown by extensive experiments to be practical and of good robustness.
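
    A minimal sketch of the kind of concentration index this finding suggests is given below; the choice of the db4 wavelet, the PyWavelets package, the window width and the function name are assumptions made here for illustration, and the decision threshold would have to be tuned on labelled spectra.

        import numpy as np
        import pywt

        def late_type_score(flux, wavelet="db4", level=5, window=5):
            # Fraction of spectral energy concentrated around the peak of the
            # frequency spectrum of the 5th-scale wavelet detail coefficients;
            # a value close to 1 points towards a late-type star candidate.
            coeffs = pywt.wavedec(flux, wavelet, level=level)
            c5 = coeffs[1]                      # detail coefficients at the coarsest (5th) scale
            spec = np.abs(np.fft.rfft(c5))
            peak = int(np.argmax(spec))
            lo, hi = max(0, peak - window), min(len(spec), peak + window + 1)
            return spec[lo:hi].sum() / spec.sum()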

  10. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    International Nuclear Information System (INIS)

    1995-08-01

    The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients, such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes

  11. A method for unsupervised change detection and automatic radiometric normalization in multispectral data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton John

    2011-01-01

    Based on canonical correlation analysis the iteratively re-weighted multivariate alteration detection (MAD) method is used to successfully perform unsupervised change detection in bi-temporal Landsat ETM+ images covering an area with villages, woods, agricultural fields and open pit mines in North Rhine-Westphalia, Germany. A link to an example with ASTER data to detect change with the same method after the 2005 Kashmir earthquake is given. The method is also used to automatically normalize multitemporal, multispectral Landsat ETM+ data radiometrically. IDL/ENVI, Python and Matlab software...
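
    A minimal, non-iterated sketch of the MAD transformation is given below for orientation; the scikit-learn CCA routine, the chi-square-style change statistic and the function name are assumptions made here, and the iterative re-weighting and radiometric normalization steps of the published method are omitted.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def mad_variates(img1, img2):
            # Single (non-iterated) MAD transformation of two co-registered
            # multispectral images of shape (rows, cols, bands).
            b = img1.shape[-1]
            X = img1.reshape(-1, b).astype(float)
            Y = img2.reshape(-1, b).astype(float)
            U, V = CCA(n_components=b).fit_transform(X, Y)   # canonical variates
            mads = U - V                                     # MAD variates highlight change
            z = (mads / mads.std(axis=0)) ** 2               # standardized, squared
            chi2 = z.sum(axis=1).reshape(img1.shape[:2])     # change statistic per pixel
            return mads.reshape(img1.shape), chi2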

  12. A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data

    Science.gov (United States)

    XU, R.; Jia, G.

    2012-12-01

    Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as chief causes of, and solutions to, climate, biogeochemistry, and hydrology processes at local, regional, and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in its urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential in eastern cities because of their smaller patches, greater fragmentation, and lower fraction of urban land cover compared with the West. Segmentation of urban land from other land-cover types using remote sensing imagery can be done by standard classification processes as well as by a logic rule calculation based on spectral indices and their derivations. Efforts to establish such a logic rule with no threshold for automatic mapping are highly worthwhile. Existing automatic methods are reviewed, and then a proposed approach is introduced, including the calculation of the new index and the improved logic rule. Following this, the existing automatic methods as well as the proposed approach are compared in a common context. Afterwards, the proposed approach is tested separately in cities of large, medium, and small scale in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in the more complex eastern cities. Key words: Urban extraction; Automatic method; Logic rule; LANDSAT images; East Asia. Figure: The proposed approach applied to the extraction of urban built-up areas in Guangzhou, China.
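
    As a purely illustrative example of a threshold-free logic rule of this kind (not the specific index proposed in the abstract), built-up pixels can be flagged wherever a built-up index exceeds both a vegetation and a water index computed from Landsat bands; the band arguments and the function name below are assumptions made here.

        import numpy as np

        def urban_mask(green, red, nir, swir):
            # Pixels are marked urban when the built-up index (NDBI) exceeds both
            # the vegetation index (NDVI) and the water index (MNDWI); no absolute
            # threshold on any single index is required.
            eps = 1e-6
            ndvi  = (nir - red) / (nir + red + eps)        # vegetation
            ndbi  = (swir - nir) / (swir + nir + eps)      # built-up
            mndwi = (green - swir) / (green + swir + eps)  # water
            return (ndbi > ndvi) & (ndbi > mndwi)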

  13. A method for the automatic separation of the images of galaxies and stars from measurements made with the COSMOS machine

    International Nuclear Information System (INIS)

    MacGillivray, H.T.; Martin, R.; Pratt, N.M.; Reddish, V.C.; Seddon, H.; Alexander, L.W.G.; Walker, G.S.; Williams, P.R.

    1976-01-01

    A method has been developed which allows the computer to distinguish automatically between the images of galaxies and those of stars from measurements made with the COSMOS automatic plate-measuring machine at the Royal Observatory, Edinburgh. Results have indicated that a 90 to 95 per cent separation between galaxies and stars is possible. (author)

  14. Improved Intra-coding Methods for H.264/AVC

    Directory of Open Access Journals (Sweden)

    Li Song

    2009-01-01

    The H.264/AVC design adopts a multidirectional spatial prediction model to reduce spatial redundancy, where neighboring pixels are used as a prediction for the samples in a data block to be encoded. In this paper, a recursive prediction scheme and an enhanced block-matching algorithm (BMA) prediction scheme are designed and integrated into the state-of-the-art H.264/AVC framework to provide a new intra coding model. Extensive experiments demonstrate that the coding efficiency can on average be increased by 0.27 dB in comparison with the performance of the conventional H.264 coding model.

  15. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    Science.gov (United States)

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    to predict TAG level in the liver. Receiver-operating-characteristics (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (used TAG threshold: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.

  16. MERCURE: a 3D industrial code for gamma rays transport by straight line attenuation method. Shielding applications

    International Nuclear Information System (INIS)

    Suteau, C.; Chiron, M.; Luneville, L.; Berger, L.; Huver, M.

    2003-01-01

    The MERCURE calculation code (version 6.3) simulates photon transport from 15 keV to 10 MeV in three-dimensional geometries between volume sources and calculation points. It is based on straight-line point-kernel attenuation integration with build-up factors. The build-up factors take into account the following physical phenomena: the photoelectric effect, coherent scattering, incoherent scattering, pair production, and secondary radiation sources coming from Bremsstrahlung and fluorescence. The code determines the build-up factor of a succession of several screens with an innovative iterative method. MERCURE-6.3 integrates the point kernels by a Monte Carlo method for which it automatically determines the importance distributions. The results of this code are compared with those of the Sn code TWODANT in two one-dimensional configurations. One includes five screens composed of four different materials and the other three screens. In the configuration with three screens, the second screen is of infinitesimal thickness. (N.C.)

  17. Brazil nut sorting for aflatoxin prevention: a comparison between automatic and manual shelling methods

    Directory of Open Access Journals (Sweden)

    Ariane Mendonça Pacheco

    2013-06-01

    The impact of automatic and manual shelling methods during manual/visual sorting of different batches of Brazil nuts from the 2010 and 2011 harvests was evaluated in order to investigate aflatoxin prevention. The samples were tested as follows: in-shell, shell, shelled, and pieces, in order to evaluate the moisture content (mc), water activity (Aw), and total aflatoxin (LOD = 0.3 µg/kg and LOQ = 0.85 µg/kg) at the Brazil nut processing plant. The aflatoxin results obtained for the manually shelled nut samples ranged from 3.0 to 60.3 µg/g and from 2.0 to 31.0 µg/g for the automatically shelled samples. All samples showed levels of mc below the limit of 15%; on the other hand, shelled samples from both harvests showed levels of Aw above the limit. There were no significant differences between the manual and automatic shelling results during the sorting stages. On the other hand, the visual sorting was effective in decreasing the aflatoxin contamination in both methods.

  18. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    Science.gov (United States)

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly is theoretically possible from facial photographs, which can lessen the prevalence and increase the cure probability. In this study, several popular machine learning algorithms were used to train on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding rectangle box, and then cropped and resized it to the same pixel dimensions. From the detected faces, locations of facial landmarks, which were the potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal facing views to improve the performance. Several popular machine learning methods including LM, KNN, SVM, RT, CNN, and EM were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, of which half were diagnosed as acromegaly by a growth hormone suppression test. The best result of our proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can thus automatically detect acromegaly early, with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
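
    The face detection, cropping and resizing step described above could look roughly like the sketch below; the Haar cascade file, the 224-pixel output size and the function name are assumptions made here, and the landmark extraction, frontalization and classifiers are not shown.

        import cv2

        def crop_face(image_path, size=(224, 224)):
            # Detect the largest face with OpenCV's bundled Haar cascade, then
            # crop it and resize it to fixed pixel dimensions.
            img = cv2.imread(image_path)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                return None
            x, y, w, h = max(faces, key=lambda r: r[2] * r[3])   # largest detection
            return cv2.resize(img[y:y + h, x:x + w], size)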

  19. A novel method for automatic genotyping of microsatellite markers based on parametric pattern recognition.

    Science.gov (United States)

    Johansson, Asa; Karlsson, Patrik; Gyllensten, Ulf

    2003-09-01

    Genetic mapping of loci affecting complex phenotypes in humans and other organisms is presently being conducted on a very large scale, using either microsatellite or single nucleotide polymorphism (SNP) markers and partly automated methods. A critical step in this process is the conversion of the instrument output into genotypes, which is both a time-consuming and error-prone procedure. Errors made during this calling of genotypes will dramatically reduce the ability to map the location of loci underlying a phenotype. Accurate methods for automatic genotype calling are therefore important. Here, we describe novel algorithms for automatic calling of microsatellite genotypes using parametric pattern recognition. The analysis of microsatellite data is complicated both by the occurrence of stutter bands, which arise from Taq polymerase misreading the number of repeats, and by additional bands derived from the non-template-dependent addition of a nucleotide to the 3' end of the PCR products. These problems, together with the fact that the lengths of two alleles in a heterozygous individual may differ by only two nucleotides, complicate the development of an automated process. The novel algorithms markedly reduce the need for manual editing and the frequency of miscalls, and compare very favourably with commercially available software for automatic microsatellite genotyping.

  20. Semi-automatic watershed medical image segmentation methods for customized cancer radiation treatment planning simulation

    International Nuclear Information System (INIS)

    Kum Oyeon; Kim Hye Kyung; Max, N.

    2007-01-01

    A cancer radiation treatment planning simulation requires image segmentation to define the gross tumor volume, clinical target volume, and planning target volume. Manual segmentation, which is usual in clinical settings, depends on the operator's experience and may, in addition, change from trial to trial for the same operator. To overcome this difficulty, we developed semi-automatic watershed medical image segmentation tools using both the top-down watershed algorithm in the Insight Segmentation and Registration Toolkit (ITK) and Vincent-Soille's bottom-up watershed algorithm with region merging. We applied our algorithms to segment two- and three-dimensional head phantom CT data and to find the pixel (or voxel) numbers for each segmented area, which are needed for radiation treatment optimization. A semi-automatic method is useful to avoid errors incurred by both human and machine sources, and provides clear and visible information for pedagogical purposes. (orig.)
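
    A minimal marker-based watershed sketch in the same spirit is shown below; it uses scikit-image rather than ITK, and the two intensity thresholds, which stand in for the operator's interaction, are assumptions made here.

        import numpy as np
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def semi_auto_watershed(ct_slice, low, high):
            # Top-down style watershed on a 2D CT slice: the gradient image guides
            # the flooding, user-chosen thresholds seed background and target, and
            # the per-label pixel counts needed for planning follow from bincount.
            gradient = sobel(ct_slice.astype(float))
            markers = np.zeros_like(ct_slice, dtype=int)
            markers[ct_slice < low] = 1        # background seed
            markers[ct_slice > high] = 2       # target-structure seed
            labels = watershed(gradient, markers)
            counts = np.bincount(labels.ravel())
            return labels, counts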

  1. An Automatic Detection Method of Nanocomposite Film Element Based on GLCM and Adaboost M1

    Directory of Open Access Journals (Sweden)

    Hai Guo

    2015-01-01

    An automatic detection model adopting pattern recognition technology is proposed in this paper; it can measure the elemental composition of nanocomposite films. Features of the gray level co-occurrence matrix (GLCM) are extracted from different types of surface morphology images of the film; after that, dimensionality reduction is handled by principal component analysis (PCA). The film element can then be identified with an AdaBoost M1 strong classifier built from ten decision tree classifiers. The experimental results show that this model is superior to those of SVM (support vector machine), NN and BayesNet. The proposed method can be widely applied to the automatic detection not only of nanocomposite film elements but also of other nanocomposite material elements.
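
    A compact sketch of this GLCM-PCA-AdaBoost pipeline is given below; the particular Haralick properties, the number of principal components, the function names and the recent scikit-image naming are assumptions made here, while the ensemble of ten decision trees follows the abstract.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.decomposition import PCA
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.tree import DecisionTreeClassifier

        def glcm_features(image):
            # Texture features from the grey level co-occurrence matrix of a
            # uint8 surface-morphology image.
            glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        def train_detector(images, labels):
            # images: list of uint8 film images; labels: element classes.
            X = np.array([glcm_features(im) for im in images])
            clf = make_pipeline(
                PCA(n_components=min(5, X.shape[1])),               # dimension reduction
                AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                   n_estimators=10))                # ten weak trees
            return clf.fit(X, labels)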

  2. Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data

    Science.gov (United States)

    Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies do not perform well on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of the different features in remote sensing images, and also considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Level Set Chan-Vese (C-V) model with a new initial curve that results from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm, and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of the fire burn scar effectively and accurately. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563

  3. Automatic measuring method of catenary geometric parameters based on laser scanning and imaging

    Science.gov (United States)

    Fu, Luhua; Chang, Songhong; Liu, Changjie

    2018-01-01

    The catenary geometric parameters are important factors that affect the safe operation of the railway. Among them, the height of the conductor and the stagger value are two key parameters. At present, the two parameters are mainly measured by a laser distance sensor and an angle measuring device with a manual aiming method, with low measuring speed and poor efficiency. In order to improve the speed and accuracy of catenary geometric parameter detection, a new automatic measuring method for the contact wire's parameters based on laser scanning and imaging is proposed. The DLT method is used to calibrate the parameters of the linear array CCD camera. The direction of the scanning laser beam and the spatial coordinates of the starting point of the beam are calculated by a geometric method. Finally, an equation is established using the calibrated parameters and the image coordinates of the imaging point, to solve the spatial coordinates of the measured point on the contact wire, and thereby to calculate the height of the conductor and the stagger value. Unlike the traditional hand-held laser phase measuring method, the new method can measure the catenary geometric parameters automatically without manual aiming. Measurement results show that an accuracy of 2 mm can be reached.

  4. ATHENA code manual. Volume 1. Code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Carlson, K.E.; Roth, P.A.; Ransom, V.H.

    1986-09-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation

  5. Feature extraction and descriptor calculation methods for automatic georeferencing of Philippines' first microsatellite imagery

    Science.gov (United States)

    Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.

    2017-10-01

    The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor. The method was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image. The keypoints served as the ground control points. Random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived. Keypoints are matched based on their descriptor vectors. Nearest-neighbor matching is employed based on a metric distance between the descriptors. The metrics include Euclidean and city block, among others. Rough matching outputs not only correct matches but also faulty matches. A previous work in automatic georeferencing incorporates a geometric restriction. In this work, we applied a simplified version of the learning method. RANSAC was used to eliminate fall-out matches and ensure the accuracy of the feature points. This method identifies whether a point fits the transformation function and returns inlier matches. The transformation matrix was solved by Affine, Projective, and Polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of interest points, selected randomly, between the master image and the transformed slave image.
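
    A condensed sketch of this keypoint-matching workflow is shown below for orientation; it assumes grayscale arrays, an affine model and OpenCV defaults, the function name and RANSAC threshold are assumptions made here, and the projective and polynomial models and the RMSE bookkeeping are omitted.

        import cv2
        import numpy as np

        def georef_affine(master, slave):
            # FAST keypoints + SIFT descriptors, cross-checked nearest-neighbour
            # matching, then a RANSAC-estimated affine transform that maps the
            # slave image onto the master image.
            fast = cv2.FastFeatureDetector_create()
            sift = cv2.SIFT_create()
            kp1, des1 = sift.compute(master, fast.detect(master, None))
            kp2, des2 = sift.compute(slave, fast.detect(slave, None))
            matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
            src = np.float32([kp2[m.trainIdx].pt for m in matches])   # slave points
            dst = np.float32([kp1[m.queryIdx].pt for m in matches])   # master points
            M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                              ransacReprojThreshold=3.0)
            warped = cv2.warpAffine(slave, M, master.shape[::-1])
            return M, warped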

  6. A comparison of coronal mass ejections identified by manual and automatic methods

    Directory of Open Access Journals (Sweden)

    S. Yashiro

    2008-10-01

    Coronal mass ejections (CMEs) are related to many phenomena (e.g. flares, solar energetic particles, geomagnetic storms), thus the compilation of event catalogs is important for a global understanding of these phenomena. CMEs have been identified manually for a long time, but in the SOHO era, automatic identification methods are being developed. In order to clarify the advantages and disadvantages of the manual and automatic CME catalogs, we examined the distributions of CME properties listed in the CDAW (manual) and CACTus (automatic) catalogs. Both catalogs show good agreement on the wide CMEs (width > 120°) in their properties, while there is a significant discrepancy for the narrow CMEs (width ≤ 30°): CACTus has a larger number of narrow CMEs than CDAW. We carried out an event-by-event examination of a sample of events and found that the CDAW catalog has missed many narrow CMEs during the solar maximum. Another significant discrepancy was found for the fast CMEs (speed > 1000 km/s): the majority of the fast CDAW CMEs are wide and originate from low latitudes, while the fast CACTus CMEs are narrow and originate from all latitudes. Event-by-event examination of a sample of events suggests that CACTus has a problem with the detection of fast CMEs.

  7. Research on large spatial coordinate automatic measuring system based on multilateral method

    Science.gov (United States)

    Miao, Dongjing; Li, Jianshuan; Li, Lianfu; Jiang, Yuanlin; Kang, Yao; He, Mingzhao; Deng, Xiangrui

    2015-10-01

    To measure spatial coordinates accurately and efficiently over a large size range, an automatic manipulator measurement system based on the multilateral method has been developed. The system is divided into two parts: the coordinate measurement subsystem consists of four laser tracers, and the trajectory generation subsystem is composed of a manipulator and a rail. To ensure that there is no laser beam break during the measurement process, an optimization function is constructed using the vectors between the laser tracers' measuring centers and the cat's eye reflector's measuring center, and an algorithm that automatically adjusts the orientation of the reflector is proposed; with this algorithm, the laser tracers are always able to track the reflector during the entire measurement process. Finally, the proposed algorithm is validated by taking the calibration of a laser tracker as an example: the experiment is conducted in a 5 m × 3 m × 3.2 m range, and the algorithm is used to automatically plan the orientations of the reflector corresponding to the given 24 points. After improving the orientations of a minority of points with adverse angles, the final results are used to control the manipulator's motion. During the actual movement, no beam breaks occurred. The result shows that the proposed algorithm helps the developed system measure spatial coordinates over a large range with efficiency.

  8. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    In order to improve hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start-stop function can be realized by means of the electric oil pump; thus, the fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss is converted to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is a best combination of the size of the electric oil pump and the size of the mechanical oil pump with respect to optimal energy conservation. The two-pump system can also satisfy the requirement of the start-stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start-stop function.

  9. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    Full Text Available This paper deals with a tool that enables import of the coded data in a singletext file to more than one vector layers (including attribute tables, together withautomatic drawing of line and polygon objects and with optional conversion toCAD. Python script v.in.survey is available as an add-on for open-source softwareGRASS GIS (GRASS Development Team. The paper describes a case study basedon surveying at the archaeological mission at Tell-el Retaba (Egypt. Advantagesof the tool (e.g. significant optimization of surveying work and its limits (demandson keeping conventions for the points’ names coding are discussed here as well.Possibilities of future development are suggested (e.g. generalization of points’names coding or more complex attribute table creation.

  10. Proposal of the Measurement Method of the Transmission Line Constants by Automatic Oscillograph Utilization

    Science.gov (United States)

    Ooura, Yoshifumi

    The author has devised a new, high-precision method for measuring transmission line constants using an automatic oscillograph; this paper proposes that measurement method. The method exploits the fact that the inherent eigenvector matrices of a transmission line are equal to the eigenvector matrices of its four-terminal constants. The four-terminal constants of the transmission line are calculated from the voltage-current data recorded by the automatic oscillograph for six cases of transmission line system faults, and a method for measuring the line constants from analysis of those four-terminal constants is then devised. Furthermore, the author verified this new method with system fault simulations of an EMTP transmission line system model, and the results show that it is a measurement method of high accuracy. In future work, the author will advance the measurement of transmission line constants from actual system fault data of the transmission line and its periphery, in cooperation with power system companies.

  11. Fluid dynamics and heat transfer methods for the TRAC code

    International Nuclear Information System (INIS)

    Reed, W.H.; Kirchner, W.L.

    1977-01-01

    A computer code called TRAC is being developed for analysis of loss-of-coolant accidents and other transients in light water reactors. This code involves a detailed, multidimensional description of two-phase flow coupled implicitly through appropriate heat transfer coefficients with a simulation of the temperature field in fuel and structural material. Because TRAC utilizes about 1000 fluid mesh cells to describe an LWR system, whereas existing lumped parameter codes typically involve fewer than 100 fluid cells, we have developed new highly implicit difference techniques that yield acceptable computing times on modern computers. Several test problems for which experimental data are available, including blowdown of single pipe and loop configurations with and without heated walls, have been computed with TRAC. Excellent agreement with experimental results has been obtained. (author)

  12. A new method for ecoacoustics? Toward the extraction and evaluation of ecologically-meaningful soundscape components using sparse coding methods.

    Science.gov (United States)

    Eldridge, Alice; Casey, Michael; Moscoso, Paola; Peck, Mika

    2016-01-01

    Passive acoustic monitoring is emerging as a promising non-invasive proxy for ecological complexity, with potential as a tool for remote assessment and monitoring (Sueur & Farina, 2015). Rather than attempting to recognise species-specific calls, either manually or automatically, there is a growing interest in evaluating the global acoustic environment. Positioned within the conceptual framework of ecoacoustics, a growing number of indices have been proposed which aim to capture community-level dynamics (e.g., Pieretti, Farina & Morri, 2011; Farina, 2014; Sueur et al., 2008b) by providing statistical summaries of the frequency or time domain signal. Although promising, the ecological relevance and efficacy as a monitoring tool of these indices is still unclear. In this paper we suggest that, by virtue of operating in the time or frequency domain, existing indices are limited in their ability to access key structural information in the spectro-temporal domain. Alternative methods in which time-frequency dynamics are preserved are considered. Sparse-coding and source separation algorithms (specifically, shift-invariant probabilistic latent component analysis in 2D) are proposed as a means to access and summarise time-frequency dynamics which may be more ecologically meaningful.

  13. An automatic multigrid method for the solution of sparse linear systems

    Science.gov (United States)

    Shapira, Yair; Israeli, Moshe; Sidi, Avram

    1993-01-01

    An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDE's is presented. This version is based on the structure of the algebraic system solely, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found better than known strategies.
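
    The following sketch shows a self-contained V-cycle for the one-dimensional Poisson model problem; it is an illustrative textbook-style cycle (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation) under the assumption that the number of grid intervals is a power of two, and it is not the algebraic, operator-free construction of the cited paper.

        import numpy as np

        def v_cycle(u, f, h, pre=3, post=3):
            # One V-cycle for -u'' = f with zero Dirichlet boundaries.
            u = np.asarray(u, dtype=float).copy()

            def smooth(v, rhs, sweeps, omega=2.0 / 3.0):
                for _ in range(sweeps):        # weighted Jacobi
                    v[1:-1] = ((1 - omega) * v[1:-1]
                               + omega * 0.5 * (v[:-2] + v[2:] + h * h * rhs[1:-1]))
                return v

            u = smooth(u, f, pre)
            if len(u) <= 3:                    # coarsest grid reached
                return u
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # residual
            rc = np.zeros((len(u) - 1) // 2 + 1)
            rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])        # restriction
            ec = v_cycle(np.zeros_like(rc), rc, 2 * h, pre, post)          # coarse solve
            e = np.zeros_like(u)
            e[::2] = ec
            e[1::2] = 0.5 * (ec[:-1] + ec[1:])                             # interpolation
            return smooth(u + e, f, post)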

  14. Evaluation of an automatic brain segmentation method developed for neonates on adult MR brain images

    Science.gov (United States)

    Moeskops, Pim; Viergever, Max A.; Benders, Manon J. N. L.; Išgum, Ivana

    2015-03-01

    Automatic brain tissue segmentation is of clinical relevance in images acquired at all ages. The literature presents a clear distinction between methods developed for MR images of infants, and methods developed for images of adults. The aim of this work is to evaluate a method developed for neonatal images in the segmentation of adult images. The evaluated method employs supervised voxel classification in subsequent stages, exploiting spatial and intensity information. Evaluation was performed using images available within the MRBrainS13 challenge. The obtained average Dice coefficients were 85.77% for grey matter, 88.66% for white matter, 81.08% for cerebrospinal fluid, 95.65% for cerebrum, and 96.92% for intracranial cavity, currently resulting in the best overall ranking. The possibility of applying the same method to neonatal as well as adult images can be of great value in cross-sectional studies that include a wide age range.

  15. Automatic segmentation of corpus callosum using Gaussian mixture modeling and Fuzzy C means methods.

    Science.gov (United States)

    İçer, Semra

    2013-10-01

    This paper presents a comparative study of the success and performance of the Gaussian mixture modeling and Fuzzy C-means methods in determining the volume and cross-sectional areas of the corpus callosum (CC) using simulated and real MR brain images. The Gaussian mixture model (GMM) utilizes a weighted sum of Gaussian distributions, applying statistical decision procedures to define image classes. In Fuzzy C-means (FCM), the image classes are represented by membership functions based on fuzziness information expressing the distance from the cluster centers. In this study, automatic segmentation of the midsagittal section of the CC was achieved from simulated and real brain images. The volume of the CC was obtained using the sagittal section areas. To compare the success of the methods, segmentation accuracy, Jaccard similarity and segmentation time were calculated. The results show that the GMM method yielded slightly more accurate segmentation (midsagittal section segmentation accuracy of 98.3% for GMM versus 97.01% for FCM); however, the FCM method segmented faster than GMM. With this study, an accurate and automatic segmentation system was developed that gives doctors a quantitative basis for comparison in the planning of treatment and the diagnosis of diseases affecting the size of the CC. This approach can be adapted to perform segmentation of other regions of the brain and can thus be put to practical use in the clinic. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
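
    As a rough illustration of the GMM side of this comparison, the voxel intensities of a midsagittal slice can be clustered with scikit-learn as sketched below; the number of classes, the choice of which class corresponds to the CC, and the function name are assumptions made here, and a Fuzzy C-means counterpart would replace the hard class assignments with membership degrees.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def gmm_segment(midsagittal_slice, n_classes=3):
            # Cluster voxel intensities with a Gaussian mixture and return the
            # label image plus the pixel count (area) of each class; the class
            # containing the corpus callosum would then be selected, e.g. from a
            # seed point or its mean intensity (not shown).
            x = midsagittal_slice.reshape(-1, 1).astype(float)
            gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(x)
            labels = gmm.predict(x).reshape(midsagittal_slice.shape)
            areas = np.bincount(labels.ravel(), minlength=n_classes)
            return labels, areas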

  16. Mapping Saldana's Coding Methods onto the Literature Review Process

    Science.gov (United States)

    Onwuegbuzie, Anthony J.; Frels, Rebecca K.; Hwang, Eunjin

    2016-01-01

    Onwuegbuzie and Frels (2014) provided a step-by-step guide illustrating how discourse analysis can be used to analyze literature. However, more works of this type are needed to address the way that counselor researchers conduct literature reviews. Therefore, we present a typology for coding and analyzing information extracted for literature…

  17. Method for quantitative assessment of nuclear safety computer codes

    International Nuclear Information System (INIS)

    Dearien, J.A.; Davis, C.B.; Matthews, L.J.

    1979-01-01

    A procedure has been developed for the quantitative assessment of nuclear safety computer codes and tested by comparison of RELAP4/MOD6 predictions with results from two Semiscale tests. This paper describes the developed procedure, the application of the procedure to the Semiscale tests, and the results obtained from the comparison

  18. The role of social cues in the deployment of spatial attention: Head-body relationships automatically activate directional spatial codes in a Simon task

    Directory of Open Access Journals (Sweden)

    Iwona ePomianowska

    2012-02-01

    The role of body orientation in the orienting and allocation of social attention was examined using an adapted Simon paradigm. Participants categorized the facial expression of forward-facing, computer-generated human figures by pressing one of two response keys, each located left or right of the observers' body midline, while the orientation of the stimulus figure's body (trunk, arms, and legs), which was the task-irrelevant feature of interest, was manipulated (oriented towards the left or right visual hemifield) with respect to the spatial location of the required response. We found that when the orientation of the body was compatible with the required response location, responses were slower relative to when body orientation was incompatible with the response location. This reverse compatibility effect suggests that body orientation is automatically processed into a directional spatial code, but that this code is based on an integration of head and body orientation within an allocentric frame of reference. Moreover, we argue that this code may be derived from the motion information implied in the image of a figure when head and body orientation are incongruent. Our results have implications for understanding the nature of the information that affects the allocation of attention for social orienting.

  19. On Young’s modulus profile across anisotropic nonhomogeneous polymeric fibre using automatic transverse interferometric method

    Science.gov (United States)

    Sokkar, T. Z. N.; Shams El-Din, M. A.; El-Tawargy, A. S.

    2012-09-01

    This paper provides the Young's modulus profile across an anisotropic nonhomogeneous polymeric fibre using an accurate transverse interferometric method. A mathematical model based on optical and tensile concepts is presented to calculate the mechanical parameter profiles of fibres. The proposed model, with the aid of a Mach-Zehnder interferometer combined with an automated drawing device, is used to determine the Young's modulus profiles for three drawn polypropylene (PP) fibres (virgin, recycled and virgin/recycled 50/50). The obtained microinterferograms are analyzed automatically using a fringe processor programme to determine the phase distribution.

  20. Adaptive and automatic red blood cell counting method based on microscopic hyperspectral imaging technology

    Science.gov (United States)

    Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting

    2017-12-01

    Red blood cell counting, as a routine examination, plays an important role in medical diagnosis. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are first preprocessed, and then a quadratic blind linear unmixing algorithm is used to obtain endmember abundance images. Based on mathematical morphological operations and an adaptive Otsu's method, a binarization process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method performs well and has potential for clinical applications.
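
    The thresholding and counting stage described above could be sketched with scikit-image as below; the morphological clean-up, the minimum-area parameter (standing in for the magnification-based setting) and the function name are assumptions made here, and the hyperspectral unmixing that produces the abundance image is not shown.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.measure import label
        from skimage.morphology import binary_opening, disk, remove_small_objects

        def count_rbc(abundance_image, min_area=50):
            # Binarize the cytoplasm abundance image with Otsu's threshold, clean
            # it up morphologically, and count the connected components.
            binary = abundance_image > threshold_otsu(abundance_image)
            binary = binary_opening(binary, disk(2))              # break thin bridges
            binary = remove_small_objects(binary, min_size=min_area)
            labeled, n_cells = label(binary, return_num=True)
            return labeled, n_cells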

  1. A FUZZY AUTOMATIC CAR DETECTION METHOD BASED ON HIGH RESOLUTION SATELLITE IMAGERY AND GEODESIC MORPHOLOGY

    Directory of Open Access Journals (Sweden)

    N. Zarrinpanjeh

    2017-09-01

    Automatic car detection and recognition from aerial and satellite images is mostly practiced for the purpose of easy and fast traffic monitoring in cities and rural areas, where direct approaches have proved to be costly and inefficient. Towards the goal of automatic car detection, and in parallel with many other published solutions, in this paper morphological operators and specifically Geodesic dilation are studied and applied to GeoEye-1 images to extract car items in accordance with available vector maps. The results of Geodesic dilation are then segmented and labeled to generate primitive car items, which are introduced to a fuzzy decision-making system for verification. The verification is performed by inspecting the major and minor axes of each region and the orientations of the cars with respect to the road direction. The proposed method is implemented and tested using GeoEye-1 pansharpened imagery. The results show that the proposed method is successful, with an overall accuracy of 83%. It is also concluded that the results are sensitive to the quality of the available vector map, and to overcome the shortcomings of this method it is recommended to consider spectral information in the process of hypothesis verification.

  2. a Fuzzy Automatic CAR Detection Method Based on High Resolution Satellite Imagery and Geodesic Morphology

    Science.gov (United States)

    Zarrinpanjeh, N.; Dadrassjavan, F.

    2017-09-01

    Automatic car detection and recognition from aerial and satellite images is mostly practiced for the purpose of easy and fast traffic monitoring in cities and rural areas, where direct approaches have proved to be costly and inefficient. Towards the goal of automatic car detection, and in parallel with many other published solutions, in this paper morphological operators and specifically Geodesic dilation are studied and applied to GeoEye-1 images to extract car items in accordance with available vector maps. The results of Geodesic dilation are then segmented and labeled to generate primitive car items, which are introduced to a fuzzy decision-making system for verification. The verification is performed by inspecting the major and minor axes of each region and the orientations of the cars with respect to the road direction. The proposed method is implemented and tested using GeoEye-1 pansharpened imagery. The results show that the proposed method is successful, with an overall accuracy of 83%. It is also concluded that the results are sensitive to the quality of the available vector map, and to overcome the shortcomings of this method it is recommended to consider spectral information in the process of hypothesis verification.

  3. A semi-automatic semantic method for mapping SNOMED CT concepts to VCM Icons.

    Science.gov (United States)

    Lamy, Jean-Baptiste; Tsopra, Rosy; Venot, Alain; Duclos, Catherine

    2013-01-01

    VCM (Visualization of Concept in Medicine) is an iconic language for representing key medical concepts by icons. However, the use of this language with reference terminologies, such as SNOMED CT, will require the mapping of its icons to the terms of these terminologies. Here, we present and evaluate a semi-automatic semantic method for the mapping of SNOMED CT concepts to VCM icons. Both SNOMED CT and VCM are compositional in nature; SNOMED CT is expressed in description logic and VCM semantics are formalized in an OWL ontology. The proposed method involves the manual mapping of a limited number of underlying concepts from the VCM ontology, followed by automatic generation of the rest of the mapping. We applied this method to the clinical findings of the SNOMED CT CORE subset, and 100 randomly-selected mappings were evaluated by three experts. The results obtained were promising, with 82 of the SNOMED CT concepts correctly linked to VCM icons according to the experts. Most of the errors were easy to fix.

  4. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    Science.gov (United States)

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered as a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then rank them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain the degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with a degree of fractality higher than a threshold value are assumed to be the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction, making a comparison between our proposed method and two other well-known methods of automatic keyword extraction.
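
    A toy sketch of such a degree-of-fractality score is given below; the box-counting estimator, the number of scales and the function names are assumptions made here and are not necessarily the estimator used in the cited paper.

        import random
        import numpy as np

        def box_dimension(positions, n_tokens, n_scales=8):
            # Box-counting estimate of the fractal dimension of a word's
            # occurrence pattern along the one-dimensional token axis.
            occupied = np.zeros(n_tokens, dtype=bool)
            occupied[list(positions)] = True
            sizes, counts = [], []
            for k in range(n_scales):
                box = max(1, n_tokens // 2 ** k)
                n_boxes = sum(occupied[i:i + box].any()
                              for i in range(0, n_tokens, box))
                sizes.append(box)
                counts.append(max(n_boxes, 1))
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return -slope

        def degree_of_fractality(tokens, word):
            # Difference between the dimension in the original text and in a
            # shuffled (meaningless) copy; larger values flag keyword candidates.
            pos = [i for i, t in enumerate(tokens) if t == word]
            shuffled = list(tokens)
            random.shuffle(shuffled)
            pos_sh = [i for i, t in enumerate(shuffled) if t == word]
            return abs(box_dimension(pos, len(tokens))
                       - box_dimension(pos_sh, len(tokens)))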

  5. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    International Nuclear Information System (INIS)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl

    2008-10-01

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA events and LOCA (loss of coolant accident). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of a reactor since the code solves the same governing equations for both the primary and secondary system. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the processes of steady state and transient calculations. In addition, basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained

  6. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA events and LOCA (loss of coolant accident). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of a reactor since the code solves the same governing equations for both the primary and secondary system. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the processes of steady state and transient calculations. In addition, basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  7. Sparse Coding-Based Method Comparison For Land-Use Classification

    Directory of Open Access Journals (Sweden)

    Dewa Made Sri Arsa

    2017-06-01

    Land-use classification utilizes high-resolution remote sensing images. Such imagery improves the classification, but at the same time it makes the problem more challenging because the images are complex, so the image has to be represented appropriately. One common method for dealing with this is the Bag of Visual Words (BOVW). This method needs a coding process to obtain the final data representation. There are many coding methods, such as Hard Quantization Coding (HQ), Sparse Coding (SC), and Locality-constrained Linear Coding (LCC). However, these coding methods rely on different assumptions; therefore, we have to compare the results of each coding method. The coding method affects the classification accuracy, and the best coding method will produce the best classification result. The UC Merced dataset, which consists of 21 classes, is used in this research. The experimental results show that LCC achieved better performance/accuracy than SC and HQ, with an accuracy of 86.48%. Furthermore, LCC also achieved the best performance across various numbers of training samples per class.

  8. Note related to the elaboration of a coding by key sentences for the programming of a document automatic selection system

    International Nuclear Information System (INIS)

    Leroy, A.; Braffort, P.

    1959-01-01

    This note deals with providing CEA documentalists with a tool for coding studies. The authors first discuss issues related to code selection criteria (author classification, topic classification, and so on), and propose an overview and a discussion of linguistic models. They also describe how diagrams illustrating relationships between words are built up, and propose an example diagram representation which includes different concepts such as conditions, properties, objects, tools or processes (for example hardness for a steel, batch processing for a condition, or sintering for a process), as well as the introduction of negation. Then the authors address how basic concepts can be highlighted, describe how key sentences can be built up, and propose an example analysis of a published article dealing with nuclear reactors (in this case, the study of a liquid-metal neutron absorber for the control of a gas-cooled power reactor). Perspectives for evolution are finally discussed.

  9. Automatic trend estimation

    CERN Document Server

    Vamoş, Călin

    2013-01-01

    Our book introduces a method to evaluate the accuracy of trend estimation algorithms under conditions similar to those encountered in real time series processing. This method is based on Monte Carlo experiments with artificial time series numerically generated by an original algorithm. The second part of the book contains several automatic algorithms for trend estimation and time series partitioning. The source codes of the computer programs implementing these original automatic algorithms are given in the appendix and will be freely available on the web. The book contains clear statement of the conditions and the approximations under which the algorithms work, as well as the proper interpretation of their results. We illustrate the functioning of the analyzed algorithms by processing time series from astrophysics, finance, biophysics, and paleoclimatology. The numerical experiment method extensively used in our book is already in common use in computational and statistical physics.

  10. Experience with the Incomplete Cholesky Conjugate Gradient method in a diffusion code

    International Nuclear Information System (INIS)

    Hoebel, W.

    1985-01-01

    For the numerical solution of sparse systems of linear equations arising from finite difference approximation of the multidimensional neutron diffusion equation, fast methods are needed. Effective algorithms for scalar computers may not be equally suitable on vector computers. In the improved version DIXY2 of the Karlsruhe two-dimensional neutron diffusion code for rectangular geometries, an Incomplete Cholesky Conjugate Gradient (ICCG) algorithm has been combined with the originally implemented Cyclically Reduced 4-Lines SOR (CR4LSOR) inner iteration method. The combined procedure is automatically activated for slowly converging applications, thus leading to a drastic reduction of iterations as well as CPU times on a scalar computer. In a follow-up benchmark study, the modifications of ICCG and CR4LSOR necessary for their use on a vector computer were investigated. It was found that a modified preconditioning for the ICCG algorithm, restricted to the block diagonal matrix, is an effective method both on scalar and vector computers. With a splitting of the 9-band matrix into two triangular Cholesky matrices, the necessary inversions are performed on a scalar machine by recursive forward and backward substitutions. On vector computers, an additional factorization of the triangular matrices into four bidiagonal matrices enables Buneman reduction, and the recursive inversion is restricted to a small system. A similar strategy can be realized with CR4LSOR if the unvectorizable Gauss-Seidel iteration is replaced by the Double Jacobi and Buneman techniques for a vector computer. Compared to single-line blocking over the original mesh, the cyclical 4-lines reduction of the DIXY inner iteration scheme reduces the numbers of iterations and CPU times considerably.
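
    For orientation only, the sketch below shows a preconditioned conjugate gradient solve for a model diffusion matrix in SciPy; since SciPy ships no incomplete Cholesky factorization, an incomplete LU with little fill-in is used as a stand-in preconditioner in the same spirit as ICCG, and the function name and drop tolerance are assumptions made here.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import LinearOperator, cg, spilu

        def iccg_solve(A, b):
            # Conjugate gradients on a sparse SPD matrix, preconditioned with an
            # incomplete LU factorization (ICCG-like behaviour).
            ilu = spilu(sp.csc_matrix(A), fill_factor=1.0, drop_tol=1e-4)
            M = LinearOperator(A.shape, ilu.solve)
            return cg(A, b, M=M)

        # Example: five-point Laplacian on an n-by-n grid, a model diffusion problem.
        n = 50
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))
        x, info = iccg_solve(A.tocsr(), np.ones(n * n))   # info == 0 means converged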

  11. Methods for the development of large computer codes under LTSS

    International Nuclear Information System (INIS)

    Sicilian, J.M.

    1977-06-01

    TRAC is a large computer code being developed by Group Q-6 for the analysis of the transient thermal hydraulic behavior of light-water nuclear reactors. A system designed to assist the development of TRAC is described. The system consists of a central HYDRA dataset, R6LIB, containing files used in the development of TRAC, and a file maintenance program, HORSE, which facilitates the use of this dataset

  12. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.

  13. Rational automatic search method for stable docking models of protein and ligand.

    Science.gov (United States)

    Mizutani, M Y; Tomioka, N; Itai, A

    1994-10-21

    An efficient automatic method has been developed for docking a ligand molecule to a protein molecule. The method can construct energetically favorable docking models, considering specific interactions between the two molecules and conformational flexibility in the ligand. In the first stage of docking, likely binding modes are searched and estimated effectively in terms of hydrogen bonds, together with conformations in part of the ligand structure that includes hydrogen bonding groups. After that part is placed in the protein cavity and is optimized, conformations in the remaining part are also examined systematically. Finally, several stable docking models are obtained after optimization of the position, orientation and conformation of the whole ligand molecule. In all the screening processes, the total potential energy including intra- and intermolecular interaction energy, consisting of van der Waals, electrostatic and hydrogen bonding energies, is used as the index. The characteristics of our docking method are high accuracy of the results, fully automatic generation of models and short computational time. The efficiency of the method was confirmed by four docking trials using two enzyme systems. In two attempts to dock methotrexate to dihydrofolate reductase and 2'-GMP to ribonuclease T1, the exact structures of complexes in crystals were reproduced as the most stable docking models, without any assumptions concerning the binding modes and ligand conformations. The most stable docking models of dihydrofolate and trimethoprim, respectively, to dihydrofolate reductase were also in good agreement with those suggested by experiment. In all test cases, it was shown that our method can accurately predict the correct docking structures, discriminating the correct model from incorrect ones. The efficiency of our method was further tested from the viewpoint of ability to predict the relative stability of the docking structures of two triazine derivatives to
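
    The screening index above is a total potential energy combining van der Waals, electrostatic and hydrogen-bonding terms. The sketch below evaluates only a generic Lennard-Jones plus Coulomb intermolecular energy for two tiny point sets; the coordinates, charges and parameters are illustrative assumptions, the hydrogen-bond term is omitted, and this is not the authors' force field.

        # Pairwise intermolecular energy: Lennard-Jones (van der Waals) + Coulomb terms.
        import numpy as np

        def interaction_energy(xyz_lig, q_lig, xyz_prot, q_prot,
                               epsilon=0.1, sigma=3.4, dielectric=4.0):
            d = np.linalg.norm(xyz_lig[:, None, :] - xyz_prot[None, :, :], axis=-1)
            d = np.clip(d, 1e-6, None)                      # avoid division by zero
            lj = 4 * epsilon * ((sigma / d) ** 12 - (sigma / d) ** 6)
            coulomb = 332.0 * np.outer(q_lig, q_prot) / (dielectric * d)
            return lj.sum() + coulomb.sum()

        lig = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])
        prot = np.array([[4.0, 0.0, 0.0], [4.0, 1.0, 0.0], [5.0, 0.0, 1.0], [4.5, 2.0, 0.0]])
        print(interaction_energy(lig, np.array([0.1, -0.2, 0.1]),
                                 prot, np.array([-0.1, 0.2, -0.1, 0.0])))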

  14. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    Science.gov (United States)

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.
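
    One way to see why utterance-level reliability can differ from session-tally reliability is to compute chance-corrected agreement on individual utterances. The sketch below computes Cohen's kappa for two hypothetical raters on toy behavior codes and then prints the session tallies, which can look similar even when individual utterances disagree; it is not the estimation procedure used in the paper.

        # Utterance-level Cohen's kappa vs. session-level tallies (toy data).
        import numpy as np
        from collections import Counter

        def cohens_kappa(a, b):
            cats = sorted(set(a) | set(b))
            idx = {c: i for i, c in enumerate(cats)}
            m = np.zeros((len(cats), len(cats)))
            for x, y in zip(a, b):
                m[idx[x], idx[y]] += 1
            n = m.sum()
            po = np.trace(m) / n                    # observed agreement
            pe = (m.sum(0) @ m.sum(1)) / n ** 2     # chance agreement
            return (po - pe) / (1 - pe)

        rater1 = ["question", "reflection", "question", "advise", "reflection", "question"]
        rater2 = ["question", "reflection", "advise",   "advise", "reflection", "question"]
        print("utterance-level kappa:", round(cohens_kappa(rater1, rater2), 2))
        print("session tallies:", Counter(rater1), Counter(rater2))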

  15. Automatic method of analysis and measurement of additional parameters of corneal deformation in the Corvis tonometer.

    Science.gov (United States)

    Koprowski, Robert

    2014-11-19

    The method for measuring intraocular pressure using the Corvis tonometer provides a sequence of images of corneal deformation. Deformations of the cornea are recorded using an ultra-high-speed Scheimpflug camera. This paper presents a new and reproducible method of analysis of corneal deformation images that allows for automatic measurements of new features, namely three new parameters unavailable in the original software. The images subjected to processing had a resolution of 200 × 576 × 140 pixels. They were acquired from the Corvis tonometer and simulation. In total 14,000 2D images were analysed. The image analysis method proposed by the author automatically detects the edge of the cornea and sclera fragments. For this purpose, new methods of image analysis and processing proposed by the author as well as well-known ones, such as the Canny filter, binarization, median filtering, etc., have been used. The presented algorithms were implemented in Matlab (version 7.11.0.584-R2010b) with the Image Processing toolbox (version 7.1-R2010b) using both known algorithms for image analysis and processing and those proposed by the author. Owing to the proposed algorithm it is possible to determine three parameters: (1) the degree of the corneal reaction relative to the static position; (2) the corneal length changes; (3) the ratio of amplitude changes to the corneal deformation length. The corneal reaction is smaller by about 30.40% compared to its static position. The change in the corneal length during deformation is very small, approximately 1% of its original length. Parameter (3) enables the applanation points to be determined with a correlation of 92% compared to the conventional method for calculating corneal flattening areas. The proposed algorithm provides reproducible results fully automatically within a few seconds per patient using a Core i7 processor. Using the proposed algorithm, it is possible to measure new, additional parameters of corneal deformation, which
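
    To make the edge-detection step concrete, the sketch below denoises a synthetic deformation frame with a median filter and extracts an edge map with the Canny detector, then reads off the first edge row per column as a crude anterior-surface profile. The frame, filters and parameters are assumptions using SciPy/scikit-image, not the author's Matlab implementation or Corvis data.

        # Median filtering + Canny edge detection on a synthetic corneal-deformation frame.
        import numpy as np
        from scipy.ndimage import median_filter
        from skimage.feature import canny

        rows, cols = 200, 576                                   # frame size quoted above
        y = np.arange(rows)[:, None]
        x = np.arange(cols)[None, :]
        surface = 60 + 25 * np.sin(np.pi * x / cols)            # toy corneal profile
        frame = (y > surface).astype(float)
        frame += np.random.default_rng(0).normal(0, 0.2, (rows, cols))

        denoised = median_filter(frame, size=3)                 # median filtering
        edges = canny(denoised, sigma=2.0)                      # Canny edge detector
        edge_row = edges.argmax(axis=0)                         # first edge pixel per column
        print("detected edge rows span:", int(edge_row.min()), "-", int(edge_row.max()))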

  16. The Impact of the Implementation of Edge Detection Methods on the Accuracy of Automatic Voltage Reading

    Science.gov (United States)

    Sidor, Kamil; Szlachta, Anna

    2017-04-01

    The article presents the impact of the edge detection method in the image analysis on the reading accuracy of the measured value. In order to ensure the automatic reading of the measured value by an analog meter, a standard webcam and the LabVIEW programme were applied. NI Vision Development tools were used. The Hough transform was used to detect the indicator. The programme output was compared during the application of several methods of edge detection. Those included: the Prewitt operator, the Roberts cross, the Sobel operator and the Canny edge detector. The image analysis was made for an analog meter indicator with the above-mentioned methods, and the results of that analysis were compared with each other and presented.
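
    For illustration only, the sketch below applies the four edge detectors named above to a synthetic "dial" image with scikit-image; it is not the LabVIEW/NI Vision pipeline used in the article, and the image and noise level are assumptions.

        # Compare Roberts, Prewitt, Sobel and Canny edge responses on a synthetic dial image.
        import numpy as np
        from skimage.filters import roberts, prewitt, sobel
        from skimage.feature import canny
        from skimage.draw import line

        img = np.zeros((200, 200))
        rr, cc = line(100, 100, 30, 160)            # a bright "needle" on the dial
        img[rr, cc] = 1.0
        img += np.random.default_rng(1).normal(0, 0.05, img.shape)

        edge_maps = {
            "Roberts": roberts(img),
            "Prewitt": prewitt(img),
            "Sobel":   sobel(img),
            "Canny":   canny(img, sigma=1.5).astype(float),
        }
        for name, e in edge_maps.items():
            print(f"{name:8s} mean edge response: {e.mean():.4f}")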

  17. An automatic method for the determination of saturation curve and metastable zone width of lysine monohydrochloride

    Science.gov (United States)

    Rabesiaka, Mihasina; Porte, Catherine; Bonnin-Paris, Johanne; Havet, Jean-Louis

    2011-10-01

    An essential tool in the study of crystallization is the saturation curve and metastable zone width, since the shape of the solubility curve defines the crystallization mode and the supersaturation conditions, which are the driving force of crystallization. The purpose of this work was to determine saturation and supersaturation curves of lysine monohydrochloride by an automatic method based on the turbidity of the crystallization medium. As lysine solution is colored, the interest of turbidimetry is demonstrated. An automated installation and the procedure to determine several points on the saturation curve and metastable zone width were set up in the laboratory. On-line follow-up of the solution turbidity and temperature enabled the dissolution and nucleation temperatures of the crystals to be determined by measuring attenuation of the light beam by suspended particles. The thermal regulation system was programmed so that the heating rate took into account the system inertia, i.e. duration related to the dissolution rate of the compound. Using this automatic method, the saturation curve and the metastable zone width of lysine monohydrochloride were plotted.

  18. A chest-shape target automatic detection method based on Deformable Part Models

    Science.gov (United States)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    The automatic weapon platform is an important research direction both domestically and abroad; it must quickly search for the object to be shot against a complex background. Fast detection of a given target is therefore the foundation of any further task. Considering that the chest-shape target is a common target in shooting practice, this paper takes the chest-shape target as the object of interest and studies an automatic target detection method based on Deformable Part Models. The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM). In this model the target image is divided into several parts, yielding a root filter and part filters; finally, the algorithm detects the target in the HOG feature pyramid using a sliding-window approach. The running time of extracting the HOG pyramid can be shortened by 36% using a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% on many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) with C++, the detection time for images with a resolution of 640 × 480 is 2.093 s. Based on TI's runtime library for image pyramids and convolution on the DM642 and other hardware, the detection algorithm is expected to be implementable on a hardware platform and has application prospects in actual systems.
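
    A hedged sketch of the feature-extraction and sliding-window stages only: it computes HOG features with scikit-image and scans windows over a random image, scoring each window with a placeholder norm instead of trained root/part filters. It is not the DPM/latent-SVM detector described above.

        # HOG features plus a toy sliding-window scan (no trained model).
        import numpy as np
        from skimage.feature import hog

        rng = np.random.default_rng(0)
        image = rng.random((480, 640))              # stand-in for a 640 x 480 frame

        win_h, win_w, step = 128, 96, 32
        scores = []
        for top in range(0, image.shape[0] - win_h + 1, step):
            for left in range(0, image.shape[1] - win_w + 1, step):
                patch = image[top:top + win_h, left:left + win_w]
                feat = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                           cells_per_block=(2, 2))
                # A real detector would score `feat` with the learned root/part filters;
                # the feature norm below is only a placeholder score.
                scores.append((np.linalg.norm(feat), top, left))

        print("best (placeholder) window at (top, left):", max(scores)[1:])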

  19. A novel automatic method for monitoring Tourette motor tics through a wearable device.

    Science.gov (United States)

    Bernabei, Michel; Preatoni, Ezio; Mendez, Martin; Piccini, Luca; Porta, Mauro; Andreoni, Giuseppe

    2010-09-15

    The aim of this study was to propose a novel automatic method for quantifying motor tics caused by Tourette Syndrome (TS). In this preliminary report, the feasibility of the monitoring process was tested over a series of standard clinical trials in a population of 12 subjects affected by TS. A wearable instrument with an embedded three-axial accelerometer was used to detect and classify motor tics during standing and walking activities. An algorithm was devised to analyze acceleration data by: eliminating noise; detecting peaks connected to pathological events; and classifying intensity and frequency of motor tics into quantitative scores. These indexes were compared with the video-based ones provided by expert clinicians, which were taken as the gold standard. Sensitivity, specificity, and accuracy of tic detection were estimated, and an agreement analysis was performed through least squares regression and the Bland-Altman test. The tic recognition algorithm showed sensitivity = 80.8% ± 8.5% (mean ± SD), specificity = 75.8% ± 17.3%, and accuracy = 80.5% ± 12.2%. The agreement study showed that automatic detection tended to overestimate the number of tics that occurred, although this appeared to be a systematic error due to the different recognition principles of the wearable and video-based systems. Furthermore, there was substantial agreement with the gold standard in estimating the severity indexes. The proposed methodology gave promising performance in terms of automatic motor-tic detection and classification in a standard clinical context. The system may provide physicians with a quantitative aid for TS assessment. Further developments will focus on the extension of its application to everyday long-term monitoring out of clinical environments. © 2010 Movement Disorder Society.
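
    The detection chain described above (noise removal, then peak detection on the acceleration signal) can be sketched as follows with SciPy on a simulated tri-axial signal; the sampling rate, band, thresholds and synthetic tics are assumptions, not the study's validated detector.

        # Band-pass the acceleration magnitude and detect candidate tic events as peaks.
        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        fs = 100.0                                    # sampling rate in Hz (assumed)
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(2)
        acc = rng.normal(0, 0.05, (t.size, 3))        # baseline tri-axial noise
        for tic_time in (5, 17, 31, 44):              # four simulated tics
            i = int(tic_time * fs)
            acc[i:i + 20, :] += 1.5 * np.hanning(20)[:, None]

        magnitude = np.linalg.norm(acc, axis=1)
        b, a = butter(4, [0.5, 10], btype="band", fs=fs)   # keep tic-like frequencies
        filtered = filtfilt(b, a, magnitude)

        peaks, _ = find_peaks(filtered, height=0.5, distance=fs)   # >= 1 s apart
        print("detected tic times (s):", np.round(peaks / fs, 1))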

  20. Development of Continuous-Energy Eigenvalue Sensitivity Coefficient Calculation Methods in the Shift Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, Christopher M [ORNL; Martin, William R [University of Michigan; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL

    2012-01-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.

  1. Determination of problematic ICD-9-CM subcategories for further study of coding performance: Delphi method.

    Science.gov (United States)

    Zeng, Xiaoming; Bell, Paul D

    2011-01-01

    In this study, we report on a qualitative method known as the Delphi method, used in the first part of a research study for improving the accuracy and reliability of ICD-9-CM coding. A panel of independent coding experts interacted methodically to determine that the three criteria to identify a problematic ICD-9-CM subcategory for further study were cost, volume, and level of coding confusion caused. The Medicare Provider Analysis and Review (MEDPAR) 2007 fiscal year data set as well as suggestions from the experts were used to identify coding subcategories based on cost and volume data. Next, the panelists performed two rounds of independent ranking before identifying Excisional Debridement as the subcategory that causes the most confusion among coders. As a result, they recommended it for further study aimed at improving coding accuracy and variation. This framework can be adopted at different levels for similar studies in need of a schema for determining problematic subcategories of code sets.

  2. Automatic methods for alveolar bone loss degree measurement in periodontitis periapical radiographs.

    Science.gov (United States)

    Lin, P L; Huang, P Y; Huang, P W

    2017-09-01

    Periodontitis involves progressive loss of alveolar bone around the teeth. Hence, automatic alveolar bone loss measurement in periapical radiographs can assist dentists in diagnosing such disease. In this paper, we propose an automatic length-based alveolar bone loss measurement system with emphasis on a cementoenamel junction (CEJ) localization method: CEJ_LG. The bone loss measurement system first adopts the methods TSLS and ABLifBm, which we presented previously, to extract teeth contours and bone loss areas from periodontitis radiograph images. It then applies the proposed methods to locate the positions of CEJ, alveolar crest (ALC), and apex of tooth root (APEX), respectively. Finally the system computes the ratio of the distance between the positions of CEJ and ALC to the distance between the positions of CEJ and APEX as the degree of bone loss for that tooth. The method CEJ_LG first obtains the gradient of the tooth image then detects the border between the lower enamel and dentin (EDB) from the gradient image. Finally, the method identifies a point on the tooth contour that is horizontally closest to the EDB. Experimental results on 18 tooth images segmented from 12 periodontitis periapical radiographs, including 8 views of upper-jaw teeth and 10 views of lower-jaw teeth, show that 53% of the localized CEJs are within 3 pixels deviation (∼ 0.15 mm) from the positions marked by dentists and 90% have deviation less than 9 pixels (∼ 0.44 mm). For degree of alveolar bone loss, more than half of the measurements using our system have deviation less than 10% from the ground truth, and all measurements using our system are within 25% deviation from the ground truth. Our results suggest that the proposed automatic system can effectively estimate degree of horizontal alveolar bone loss in periodontitis radiograph images. We believe that our proposed system, if implemented in routine clinical practice, can serve as a valuable tool for early and accurate
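
    The degree of bone loss defined above is a simple ratio of landmark distances. A minimal sketch, assuming example pixel coordinates for the CEJ, ALC and APEX landmarks rather than positions produced by the authors' system:

        # Degree of alveolar bone loss as |CEJ-ALC| / |CEJ-APEX| from landmark pixels.
        import numpy as np

        def bone_loss_ratio(cej, alc, apex):
            cej, alc, apex = map(np.asarray, (cej, alc, apex))
            return np.linalg.norm(alc - cej) / np.linalg.norm(apex - cej)

        cej = (120, 85)     # cementoenamel junction (row, col), example values
        alc = (150, 88)     # alveolar crest
        apex = (260, 95)    # root apex
        print(f"bone loss: {100 * bone_loss_ratio(cej, alc, apex):.1f}%")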

  3. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. The method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, for which prior works present two problems. First, recommender systems based on these works cannot learn from the collaboration of programmers; second, assessments carried out on these systems show low precision and recall measures, and in some systems these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method that addresses these problems.

  4. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Full Text Available Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block of such an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly. We then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.

  5. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    International Nuclear Information System (INIS)

    Péron, Stéphanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for the flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large scale discrepancies in terms of resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first one generates Adaptive Mesh Refinement (AMR) type grid systems, and the second one generates abutting or minimally overlapping Cartesian grid set. We also introduce an algorithm to control the number of points at each adaptation, that automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture accurately the flow features.

  6. Automatic plume episode identification and cloud shine reconstruction method for ambient gamma dose rates during nuclear accidents.

    Science.gov (United States)

    Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Haller, Christoph; Yuan, Hongyong

    2017-11-01

    Ambient gamma dose rate (GDR) is the primary observation quantity for nuclear emergency management due to its high acquisition frequency and dense spatial deployment. However, ambient GDR is the sum of both cloud and ground shine, which hinders its effective utilization. In this study, an automatic method is proposed to identify the radioactive plume passage and to separate the cloud and ground shine in the total GDR. The new method is evaluated against a synthetic GDR dataset generated by JRODOS (Real Time On-line Decision Support) System and compared with another method (Hirayama, H. et al., 2014. Estimation of I-131 concentration using time history of pulse height distribution at monitoring post and detector response for radionuclide in plume. Transactions of the Atomic Energy Society of Japan 13:119-126, in Japanese (with English abstract)). The reconstructed cloud shine agrees well with the actual values for the whole synthetic dataset (1451 data points), with a very small absolute fractional bias (FB = 0.02) and normalized mean square error (NMSE = 2.04) as well as a large correlation coefficient (r = 0.95). The new method significantly outperforms the existing one (more than 95% reduction of FB and NMSE, and 61% improvement of the correlation coefficient), mainly due to the modification for high deposition events. The code of the proposed methodology and all the test data are available for academic and non-commercial use. The new approach with the detailed interpretation of the in-situ environment data will help improving the ability of off-site source term inverse estimation for nuclear accidents. Copyright © 2017 Elsevier Ltd. All rights reserved.
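
    The three evaluation scores quoted above (fractional bias, normalised mean square error and the correlation coefficient) follow standard definitions and can be reproduced on toy data as below; the synthetic "cloud shine" values are assumptions, not the JRODOS dataset.

        # FB, NMSE and correlation coefficient on toy reconstructed-vs-true values.
        import numpy as np

        def fb(obs, mod):
            return 2 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())

        def nmse(obs, mod):
            return np.mean((obs - mod) ** 2) / (obs.mean() * mod.mean())

        rng = np.random.default_rng(3)
        true_cloud_shine = rng.lognormal(mean=0.0, sigma=1.0, size=500)
        reconstructed = true_cloud_shine * rng.lognormal(0.0, 0.3, size=500)

        print(f"FB   = {fb(true_cloud_shine, reconstructed):.2f}")
        print(f"NMSE = {nmse(true_cloud_shine, reconstructed):.2f}")
        print(f"r    = {np.corrcoef(true_cloud_shine, reconstructed)[0, 1]:.2f}")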

  7. Automatic Method for Controlling the Iodine Adsorption Number in Carbon Black Oil Furnaces

    Directory of Open Access Journals (Sweden)

    Zečević, N.

    2008-12-01

    Full Text Available There are numerous different inlet process factors in carbon black oil furnaces which must be continuously and automatically adjusted to ensure stable quality of the final product. The six most important inlet process factors in carbon black oil-furnaces are:
    1. volume flow of process air for combustion
    2. temperature of process air for combustion
    3. volume flow of natural gas to ensure the heat necessary for the thermal conversion of the hydrocarbon oil feedstock into oil-furnace carbon black
    4. mass flow rate of hydrocarbon oil feedstock
    5. type and quantity of additive for adjusting the structure of oil-furnace carbon black
    6. quantity and position of the quench water for cooling the oil-furnace carbon black reaction.
    The adsorption capacity of oil-furnace carbon black is controlled through the mass flow rate of the hydrocarbon feedstock, which is the most important inlet process factor. In the industrial process, the adsorption capacity of oil-furnace carbon black is determined by laboratory analysis of the iodine adsorption number. A continuous and automatic method for controlling the iodine adsorption number in carbon black oil-furnaces is presented, aiming at the most efficient possible control of the adsorption capacity. The proposed method exploits the correlation between the qualitative-quantitative composition of the process tail gases in the production of oil-furnace carbon black and the ratio between combustion air and hydrocarbon feedstock. It is shown that the ratio between combustion air and hydrocarbon oil feedstock depends on the adsorption capacity, summarized by the iodine adsorption number, with respect to the BMCI index of the hydrocarbon oil feedstock. The mentioned correlation can be seen in Figures 1 to 4. Of the whole composition of the process tail gases, the volume fraction of methane shows the best correlation for continuous and automatic control of the iodine adsorption number. The volume fraction of methane in the

  8. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences are commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduces WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches based on intensities, this method relies on contrast: nonlinear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH are then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion load. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods, and k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, performed at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN and 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN and 0.67-0.72 for SVM), and did not need any training set.

  9. A semi-automatic computer-aided method for surgical template design.

    Science.gov (United States)

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-04

    This paper presents a generalized integrated framework for semi-automatic surgical template design. Several algorithms were implemented, including mesh segmentation, offset surface generation, collision detection and ruled surface generation, and a dedicated software tool named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with a signed scalar per vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained by contouring the distance field of the inner surface, and segmented to generate the outer surface. A ruled surface is employed to connect the inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. The framework has been applied to template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.

  10. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    Science.gov (United States)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and no single sensor can handle complex inspection tasks in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, some complex-shape objects, such as blades, have thin-wall features for which the ICP registration method becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.
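
    A common way to express the extrinsic calibration result is a rigid transform between corresponding point sets measured by the two sensors. The sketch below estimates such a transform with a generic SVD (Kabsch-style) solver on synthetic correspondences; it is not the Generalized Gauss-Markoff estimation or the artifact-based registration described above.

        # Estimate a rigid transform (R, t) between corresponding 3-D point sets via SVD.
        import numpy as np

        def rigid_transform(src, dst):
            src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
            U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
            D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
            R = Vt.T @ D @ U.T
            t = dst.mean(0) - R @ src.mean(0)
            return R, t

        rng = np.random.default_rng(4)
        pts_fps = rng.random((30, 3))                       # toy points from one sensor
        true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(true_R) < 0:
            true_R[:, 0] *= -1                              # ensure a proper rotation
        pts_chs = pts_fps @ true_R.T + np.array([0.1, -0.2, 0.05])

        R, t = rigid_transform(pts_fps, pts_chs)
        print("max residual:", np.abs(pts_fps @ R.T + t - pts_chs).max())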

  11. Automatic Detection of Microaneurysms in Color Fundus Images using a Local Radon Transform Method

    Directory of Open Access Journals (Sweden)

    Hamid Reza Pourreza

    2009-03-01

    Full Text Available Introduction: Diabetic retinopathy (DR) is one of the most serious and most frequent eye diseases in the world and the most common cause of blindness in adults between 20 and 60 years of age. Following 15 years of diabetes, about 2% of diabetic patients are blind and 10% suffer from vision impairment due to DR complications. This paper addresses the automatic detection of microaneurysms (MA) in color fundus images, which plays a key role in computer-assisted early diagnosis of diabetic retinopathy. Materials and Methods: The algorithm can be divided into three main steps. The purpose of the first step, or pre-processing, is background normalization and contrast enhancement of the images. The second step aims to detect candidates, i.e., all patterns possibly corresponding to MA, which is achieved using a local Radon transform. Then, features are extracted, which are used in the last step to automatically classify the candidates as real MA or other objects using the SVM method. A database of 100 annotated images was used to test the algorithm. The algorithm was compared to manually obtained gradings of these images. Results: The sensitivity of diagnosis for DR was 100%, with a specificity of 90%, and the sensitivity of precise MA localization was 97%, at an average of 5 false positives per image. Discussion and Conclusion: The sensitivity and specificity of this algorithm make it one of the best methods in this field. Using the local Radon transform in this algorithm reduces the sensitivity to noise in MA detection in retinal image analysis.

  12. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    Directory of Open Access Journals (Sweden)

    Enrique Valero

    2012-11-01

    Full Text Available In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled.

  13. DEVELOPMENT OF AUTOMATIC EXTRACTION METHOD FOR ROAD UPDATE INFORMATION BASED ON PUBLIC WORK ORDER OUTLOOK

    Science.gov (United States)

    Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi

    Recently, the disclosure of statistical data representing the financial effects or burden of public works, through the web sites of national and local governments, has enabled discussion of macroscopic financial trends. However, it is still difficult to grasp, nationwide, how each spot was changed by public works. The purpose of this research is to collect, in a reasonable way, the road update information that various road managers provide, in order to realize efficient updating of various maps such as car navigation maps. In particular, we develop a system that extracts the public works concerned and automatically registers a summary, including position information, to a database from the public work order outlooks released by each local government, combining several web mining technologies. Finally, we collect and register several tens of thousands of records from web sites all over Japan, and confirm the feasibility of our method.

  14. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    Science.gov (United States)

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  15. Research on automatic current sharing control methods for control power supply

    Directory of Open Access Journals (Sweden)

    Dai Xian Bin

    2016-01-01

    Full Text Available High-power switching devices in a control power supply have different saturated forward voltage drops and inconsistent turn-on/turn-off times, which lead to inconsistent external characteristics of the inverter modules in parallel operation. Modules with better external characteristics carry more current and become overloaded, while modules with worse external characteristics remain lightly loaded; this increases the thermal stress of the modules carrying more current and shortens the service life of the high-power switching devices. Based on a small-signal simulation analysis of the automatic current sharing method for the control power supply, the characteristics of the current-sharing control loop can be identified, namely the slow response speed of the current-sharing loop, which is beneficial for improving the stability of the entire control power supply system.

  16. A novel method for automatically locating the pylorus in the wireless capsule endoscopy.

    Science.gov (United States)

    Zhou, Shangbo; Yang, Han; Siddique, Muhammad Abubakar; Xu, Jie; Zhou, Ping

    2017-02-01

    Wireless capsule endoscopy (WCE) is a non-invasive technique used to examine the interiors of digestive tracts. Generally, the digestive tract can be divided into four segments: the entrance; stomach; small intestine; and large intestine. The stomach and the small intestine have a higher risk of infections than the other segments. In order to locate the diseased organ, an appropriate classification of the WCE images is necessary. In this article, a novel method is proposed for automatically locating the pylorus in WCE. The location of the pylorus is determined on two levels: rough-level and refined-level. In the rough-level, a short-term color change at the boundary between stomach and intestine can help us to find approximately 70-150 positions. In the refined-level, an improved Weber local descriptor (WLD) feature extraction method is designed for gray-scale images. Compared to the original WLD calculation method, the method for calculating the differential excitation is improved to give a higher level of robustness. A K-nearest neighbor (KNN) classifier is incorporated to segment these images around the approximate position into different regions. The proposed algorithm locates three most probable positions of the pylorus that were marked by the clinician. The experimental results indicate that the proposed method is effective.

  17. Cluster based statistical feature extraction method for automatic bleeding detection in wireless capsule endoscopy video.

    Science.gov (United States)

    Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A; Zhu, Wei-Ping; Ahmad, M Omair

    2018-03-01

    Wireless capsule endoscopy (WCE) is capable of demonstrating the entire gastrointestinal tract at an expense of exhaustive reviewing process for detecting bleeding disorders. The main objective is to develop an automatic method for identifying the bleeding frames and zones from WCE video. Different statistical features are extracted from the overlapping spatial blocks of the preprocessed WCE image in a transformed color plane containing green to red pixel ratio. The unique idea of the proposed method is to first perform unsupervised clustering of different blocks for obtaining two clusters and then extract cluster based features (CBFs). Finally, a global feature consisting of the CBFs and differential CBF is used to detect bleeding frame via supervised classification. In order to handle continuous WCE video, a post-processing scheme is introduced utilizing the feature trends in neighboring frames. The CBF along with some morphological operations is employed to identify bleeding zones. Based on extensive experimentation on several WCE videos, it is found that the proposed method offers significantly better performance in comparison to some existing methods in terms of bleeding detection accuracy, sensitivity, specificity and precision in bleeding zone detection. It is found that the bleeding detection performance obtained by using the proposed CBF based global feature is better than the feature extracted from the non-clustered image. The proposed method can reduce the burden of physicians in investigating WCE video to detect bleeding frame and zone with a high level of accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Comparison of reconstruction methods for computed tomography with industrial robots using automatic object position recognition

    International Nuclear Information System (INIS)

    Klein, Philipp; Herold, Frank

    2016-01-01

    Computed Tomography (CT) is one of the main imaging techniques in the field of non-destructive testing. Recently, industrial robots have been used to manipulate the object during the whole CT scan, instead of just placing the object on a standard turntable as was previously usual for industrial CT. Using industrial robots for object manipulation in CT systems provides an increase in spatial freedom and therefore more flexibility for various applications. For example, complete CT trajectories in the sense of the Tuy-Smith theorem are applied more easily than with conventional manipulators. These advantages are accompanied by a loss of positioning precision, caused by mechanical limitations of the robotic systems. In this article we present a comparison of established reconstruction methods for CT with industrial robots using so-called Automatic Object Position Recognition (AOPR). AOPR is a new automatic method which improves the position accuracy online by using a priori information about fixed markers in space. The markers are used to reconstruct the position of the object during each image acquisition. These more precise positions lead to a higher quality of the reconstructed volume after image reconstruction. We study the image quality of several different reconstruction techniques. For example, we reconstruct real robot-CT datasets by filtered back-projection (FBP), the simultaneous algebraic reconstruction technique (SART) or Siemens's theoretically exact reconstruction (TXR). Each time, we evaluate the datasets with and without AOPR and present the resulting image quality. Moreover, we measure the computation time of AOPR to prove that the real-time conditions are still fulfilled.

  19. Fast image restoration method based on coded exposure and vibration detection

    Science.gov (United States)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-10-01

    Fast image restoration method is proposed for vibration image deblurring based on coded exposure and vibration detection. The criterion of the code sequence selection is discussed in detail, and several factors are considered to search for the optimal coded exposure sequence. The blurred vibration image is obtained by the coded exposure technique. Meanwhile, the vibration track information of the camera is detected by a fiber-optic gyroscope. The point spread function (PSF) is estimated using a statistical method with the selected code sequence and vibration track information. Finally, the blurred image is quickly restored with the estimated PSF through a direct inverse filtering method. Simulation experiments are conducted to test the performance of the approach with different vibration forms. A real imaging system is constructed to verify the effectiveness of the proposed algorithm. Experimental results show that the presented algorithm could yield better subjective experiences and superior objective evaluation values.

  20. WKB: an interactive code for solving differential equations using phase integral methods

    International Nuclear Information System (INIS)

    White, R.B.

    1978-01-01

    A small code for the analysis of ordinary differential equations interactively through the use of Phase Integral Methods (WKB) has been written for use on the DEC 10. This note is a descriptive manual for those interested in using the code

  1. Verification of Euler/Navier-Stokes codes using the method of manufactured solutions

    Science.gov (United States)

    Roy, C. J.; Nelson, C. C.; Smith, T. M.; Ober, C. C.

    2004-02-01

    The method of manufactured solutions is used to verify the order of accuracy of two finite-volume Euler and Navier-Stokes codes. The Premo code employs a node-centred approach using unstructured meshes, while the Wind code employs a similar scheme on structured meshes. Both codes use Roe's upwind method with MUSCL extrapolation for the convective terms and central differences for the diffusion terms, thus yielding a numerical scheme that is formally second-order accurate. The method of manufactured solutions is employed to generate exact solutions to the governing Euler and Navier-Stokes equations in two dimensions along with additional source terms. These exact solutions are then used to accurately evaluate the discretization error in the numerical solutions. Through global discretization error analyses, the spatial order of accuracy is observed to be second order for both codes, thus giving a high degree of confidence that the two codes are free from coding mistakes in the options exercised. Examples of coding mistakes discovered using the method are also given.
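
    The essence of the procedure above is that the manufactured solution fixes the source term analytically, so the discretization error can be measured exactly and the observed order of accuracy checked against the formal order. A minimal sketch on a 1-D model diffusion problem (not the Euler/Navier-Stokes equations or the Premo/Wind codes), using SymPy to derive the source term:

        # Method of manufactured solutions for u'' = f on (0, 1):
        # pick u_exact, derive f symbolically, solve with second-order central differences,
        # and confirm that the observed order of accuracy is ~2.
        import numpy as np
        import sympy as sp

        x = sp.symbols("x")
        u_exact = sp.sin(sp.pi * x) + x**2           # manufactured solution (chosen freely)
        f_expr = sp.diff(u_exact, x, 2)              # source term that makes it exact
        u_fun, f_fun = sp.lambdify(x, u_exact), sp.lambdify(x, f_expr)

        def max_error(n):
            xs = np.linspace(0.0, 1.0, n + 1)
            h = xs[1] - xs[0]
            A = (np.diag(-2.0 * np.ones(n - 1)) + np.diag(np.ones(n - 2), 1)
                 + np.diag(np.ones(n - 2), -1))
            rhs = f_fun(xs[1:-1]) * h**2
            rhs[0] -= u_fun(0.0)                     # Dirichlet data from u_exact
            rhs[-1] -= u_fun(1.0)
            u = np.linalg.solve(A, rhs)
            return np.max(np.abs(u - u_fun(xs[1:-1])))

        e1, e2 = max_error(32), max_error(64)
        print("observed order of accuracy:", np.log2(e1 / e2))   # ~2 for a correct scheme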

  2. Development of three-dimensional transport code by the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1985-01-01

    Development of a three-dimensional neutron transport code by the double finite element method is described. Both the Galerkin and variational methods are adopted to solve the problem, and their characteristics are then compared. Computational results of the collocation method, developed as a technique for the variational one, are illustrated in comparison with those of an Ssub(n) code. (author)

  3. A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points.

    Science.gov (United States)

    Yang, Xiaopeng; Yu, Hee Chul; Choi, Younggeun; Lee, Wonsup; Wang, Baojian; Yang, Jaedo; Hwang, Hongpil; Kim, Ji Hyun; Song, Jisoo; Cho, Baik Hwan; You, Heecheon

    2014-01-01

    The present study developed a hybrid semi-automatic method to extract the liver from abdominal computerized tomography (CT) images. The proposed hybrid method consists of a customized fast-marching level-set method for detection of an optimal initial liver region from multiple seed points selected by the user and a threshold-based level-set method for extraction of the actual liver region based on the initial liver region. The performance of the hybrid method was compared with that of the 2D region growing method implemented in OsiriX using abdominal CT datasets of 15 patients. The hybrid method showed a significantly higher accuracy in liver extraction (similarity index, SI=97.6 ± 0.5%; false positive error, FPE = 2.2 ± 0.7%; false negative error, FNE=2.5 ± 0.8%; average symmetric surface distance, ASD=1.4 ± 0.5mm) than the 2D (SI=94.0 ± 1.9%; FPE = 5.3 ± 1.1%; FNE=6.5 ± 3.7%; ASD=6.7 ± 3.8mm) region growing method. The total liver extraction time per CT dataset of the hybrid method (77 ± 10 s) is significantly less than that of the 2D region growing method (575 ± 136 s). The interaction time per CT dataset between the user and a computer of the hybrid method (28 ± 4 s) is significantly shorter than that of the 2D region growing method (484 ± 126 s). The proposed hybrid method was found preferable for liver segmentation in preoperative virtual liver surgery planning. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. A novel fractal coding method based on M-J sets.

    Directory of Open Access Journals (Sweden)

    Yuanyuan Sun

    Full Text Available In this paper, we present a novel fractal coding method with a block classification scheme based on a shared domain block pool. In our method, the domain block pool is called the dictionary and is constructed from fractal Julia sets. The image is encoded by searching for the best matching domain block with the same BTC (Block Truncation Coding) value in the dictionary. The experimental results show that the scheme is competent both in encoding speed and in reconstruction quality. Particularly for large images, the proposed method avoids excessive growth of the computational complexity compared with the traditional fractal coding algorithm.

  5. Automatic segmentation of MRI head images by 3-D region growing method which utilizes edge information

    International Nuclear Information System (INIS)

    Jiang, Hao; Suzuki, Hidetomo; Toriwaki, Jun-ichiro

    1991-01-01

    This paper presents a 3-D segmentation method that automatically extracts soft tissue from multi-sliced MRI head images. MRI produces a sequence of two-dimensional (2-D) images which contains three-dimensional (3-D) information of organs. To utilize such information we need effective algorithms to treat 3-D digital images and to extract organs and tissues of interest. We developed a method to extract the brain from MRI images which uses a region growing procedure and integrates information of uniformity of gray levels and information of the presence of edge segments in the local area around the pixel of interest. First we generate a kernel region which is a part of brain tissue by simple thresholding. Then we grow the region by means of a region growing algorithm under the control of 3-D edge existence to obtain the region of the brain. Our method is rather simple because it uses basic 3-D image processing techniques like spatial difference. It is robust for variation of gray levels inside a tissue since it also refers to the edge information in the process of region growing. Therefore, the method is flexible enough to be applicable to the segmentation of other images including soft tissues which have complicated shapes and fluctuation in gray levels. (author)
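
    A hedged, 2-D only illustration of the basic region-growing step described above, using a plain gray-level tolerance and omitting the 3-D edge control and kernel-region generation used by the authors:

        # Minimal 2-D region growing from a seed pixel with a gray-level tolerance.
        import numpy as np
        from collections import deque

        def region_grow(img, seed, tol=0.1):
            grown = np.zeros(img.shape, dtype=bool)
            queue, ref = deque([seed]), img[seed]
            while queue:
                r, c = queue.popleft()
                if grown[r, c] or abs(img[r, c] - ref) > tol:
                    continue
                grown[r, c] = True
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] and not grown[rr, cc]:
                        queue.append((rr, cc))
            return grown

        img = np.full((100, 100), 0.2)
        img[30:70, 30:70] = 0.8                       # a bright "tissue" block
        mask = region_grow(img, seed=(50, 50), tol=0.15)
        print("grown pixels:", int(mask.sum()))       # 1600 = the 40 x 40 block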

  6. An automatic segmentation method of a parameter-adaptive PCNN for medical images.

    Science.gov (United States)

    Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide

    2017-09-01

    Since the pre-processing and initial segmentation steps in medical images directly affect the final segmentation results of the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate these two segmentation steps into one. The method has a low computational complexity for different kinds of medical images and a high segmentation precision. It comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. In experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and right breast, the new method achieves a performance comparable to state-of-the-art algorithms, with an overall metric UM of 0.9845, CM of 0.8142 and TM of 0.0726. The algorithm has great potential for performing the pre-processing and initial segmentation steps in various medical images. This is a premise for assisting physicians to detect and diagnose clinical cases.

  7. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

    Full Text Available Road extraction in urban areas is a difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark testing data set provided by ISPRS for the “Urban Classification and 3D Building Reconstruction” project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.

  8. Standard test methods for determining average grain size using semiautomatic and automatic image analysis

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2015-01-01

    1.1 These test methods are used to determine grain size from measurements of grain intercept lengths, intercept counts, intersection counts, grain boundary length, and grain areas. 1.2 These measurements are made with a semiautomatic digitizing tablet or by automatic image analysis using an image of the grain structure produced by a microscope. 1.3 These test methods are applicable to any type of grain structure or grain size distribution as long as the grain boundaries can be clearly delineated by etching and subsequent image processing, if necessary. 1.4 These test methods are applicable to measurement of other grain-like microstructures, such as cell structures. 1.5 This standard deals only with the recommended test methods and nothing in it should be construed as defining or establishing limits of acceptability or fitness for purpose of the materials tested. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user ...

  9. A Method for Automatic Extracting Intracranial Region in MR Brain Image

    Science.gov (United States)

    Kurokawa, Keiji; Miura, Shin; Nishida, Makoto; Kageyama, Yoichi; Namura, Ikuro

    It is well known that the temporal lobe in MR brain images is used for estimating the grade of Alzheimer-type dementia. However, it is difficult to use the temporal lobe region alone for this purpose. From the standpoint of supporting medical specialists, this paper proposes a data processing approach for the automatic extraction of the intracranial region from MR brain images. The method eliminates the cranium region with the Laplacian histogram method and the brainstem with feature points that are related to the observations given by a medical specialist. To examine the usefulness of the proposed approach, the percentage of the temporal lobe in the intracranial region was calculated. As a result, the percentage of the temporal lobe in the intracranial region over the course of the disease grade was in agreement with the visual standards of temporal lobe atrophy given by the medical specialist. It became clear that the intracranial region extracted by the proposed method is suitable for estimating the grade of Alzheimer-type dementia.

  10. A decoding method of an n length binary BCH code through (n + 1)n length binary cyclic code

    Directory of Open Access Journals (Sweden)

    TARIQ SHAH

    2013-09-01

    Full Text Available For a given binary BCH code Cn of length n = 2^s - 1 generated by a polynomial of degree r, there is no binary BCH code of length (n + 1)n generated by a generalized polynomial of degree 2r. However, there does exist a binary cyclic code C(n+1)n of length (n + 1)n such that the binary BCH code Cn is embedded in C(n+1)n. Accordingly, a high code rate is attained through the binary cyclic code C(n+1)n for a binary BCH code Cn. Furthermore, the proposed algorithm facilitates decoding of a binary BCH code Cn through the decoding of the binary cyclic code C(n+1)n, while the codes Cn and C(n+1)n have the same minimum Hamming distance.

  11. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    Science.gov (United States)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization is a system that can help someone grasp the core information of a long text instantly by summarizing the text automatically. Many summarization systems have already been developed, but they still have many problems. This work proposes a summarization method using a document index graph. The method adapts the PageRank and HITS formulas, originally used to assess web pages, to score the words in the sentences of a text document. The expected outcome is a system that can summarize a single document by utilizing a document index graph with TextRank and HITS to automatically improve the quality of the summary.
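
    As a hedged sketch of the graph-ranking idea (a generic TextRank-style PageRank over a word-overlap sentence graph, not the paper's document index graph or its HITS variant), with toy sentences:

        # Rank sentences with a PageRank-style power iteration over a word-overlap graph.
        import numpy as np

        sentences = [
            "automatic summarization extracts the core information of a text",
            "the system scores sentences and selects the most central ones",
            "graph based methods rank sentences by their similarity to each other",
            "the weather today is sunny",
        ]
        tokens = [set(s.split()) for s in sentences]
        n = len(sentences)

        W = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i, j] = len(tokens[i] & tokens[j]) / len(tokens[i] | tokens[j])

        col_sum = W.sum(axis=0)
        col_sum[col_sum == 0] = 1.0
        P = W / col_sum                              # column-stochastic transition matrix
        d, rank = 0.85, np.full(n, 1.0 / n)
        for _ in range(50):                          # power iteration of the PageRank recurrence
            rank = (1 - d) / n + d * P @ rank

        print("top-ranked sentence:", sentences[int(np.argmax(rank))])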

  12. A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images.

    Science.gov (United States)

    Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P

    2017-01-01

    Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.

  13. A new modelling method and unified code with MCRT for concentrating solar collectors and its applications

    International Nuclear Information System (INIS)

    Cheng, Z.D.; He, Y.L.; Cui, F.Q.

    2013-01-01

    Highlights: ► A general-purpose method or design/simulation tool needs to be developed for CSCs. ► A new modelling method and homemade unified code with MCRT are presented. ► The photo-thermal conversion processes in three typical CSCs were analyzed. ► The results show that the proposed method and model are feasible and reliable. -- Abstract: The main objective of the present work is to develop a general-purpose numerical method for improving design/simulation tools for the concentrating solar collectors (CSCs) of concentrated solar power (CSP) systems. A new modelling method and a homemade unified code based on the Monte Carlo Ray-Trace (MCRT) method for CSCs are presented first. The details of the new design method and of the unified code for numerical investigations of the solar concentrating and collecting characteristics of CSCs are introduced. Three coordinate systems are used in the MCRT program and can be kept fully independent of each other. Solar radiation in participating and/or non-participating media can be taken into account simultaneously or separately in the simulation. The criteria for data processing and method/code checking are also set out in detail. Finally, the proposed method and code are applied to simulate and analyze the involuted photo-thermal conversion processes in three typical CSCs. The results show that the proposed method and model are reliable for simulating various types of CSCs.

  14. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the cross-correlation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure applicable to any seismic array. With this we are able to process thousands of phases within several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the array elements and elevation differences amongst the array stations made station corrections necessary. 328 core phases (including PcP, PKiKP, PKP, PKKP) were used to determine the static corrections. To demonstrate this application and method, we have analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between the years 1995-2000. The arrivals were picked by the PIDC for events (mb>4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events. We observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array, versus other local structure, as the cause of the residuals.
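
    The cross-correlation idea underlying the calibration can be sketched as follows: the relative arrival-time delay between two array elements is taken from the peak of their cross-correlation. The synthetic wavelet and sampling interval are assumptions made for illustration; the authors' procedure is more involved.

```python
# Hedged sketch of cross-correlation time-delay estimation between two traces.
import numpy as np

def time_delay(trace_a, trace_b, dt):
    """Delay (s) of trace_b relative to trace_a via full cross-correlation."""
    a = trace_a - trace_a.mean()
    b = trace_b - trace_b.mean()
    cc = np.correlate(b, a, mode="full")
    lag = cc.argmax() - (len(a) - 1)       # samples; positive => b arrives later
    return lag * dt

# Synthetic example: identical wavelet shifted by 0.05 s at 100 Hz sampling.
dt = 0.01
t = np.arange(0, 10, dt)
wavelet = np.exp(-((t - 5.00) ** 2) / 0.05)
shifted = np.exp(-((t - 5.05) ** 2) / 0.05)
print(time_delay(wavelet, shifted, dt))    # ~ +0.05 s
```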

  15. A Review on Energy-Saving Optimization Methods for Robotic and Automatic Systems

    Directory of Open Access Journals (Sweden)

    Giovanni Carabin

    2017-12-01

    Full Text Available In the last decades, increasing energy prices and growing environmental awareness have driven engineers and scientists to find new solutions for reducing energy consumption in manufacturing. Although many high-energy-consumption processes (e.g., chemical, heating, etc.) are considered to have reached high levels of efficiency, this is not the case for many other industrial manufacturing activities. Indeed, this is the case for robotic and automatic systems, for which, in the past, the minimization of energy demand was not considered a design objective. The proper design and operation of industrial robots and automation systems represent a great opportunity for reducing energy consumption in industry, for example by substituting more efficient systems and by optimizing the energy use of operations. This review paper classifies and analyses several methodologies and technologies that have been developed, with the aim of providing a reference of existing methods, techniques and technologies for enhancing the energy performance of industrial robotic and mechatronic systems. Hardware and software methods, including several subcategories, are considered and compared, and emerging ideas and possible future perspectives are discussed.

  16. Applications of automatic mesh generation and adaptive methods in computational medicine

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
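
    A minimal sketch of the Delaunay-based workflow described above is given below: triangles flagged by a hypothetical error indicator receive a new vertex at their centroid and the point set is re-triangulated. This only illustrates the mesh-generation/refinement loop, not the iterative point generator or the finite element error estimates of the paper.

```python
# Minimal Delaunay meshing with a crude adaptive refinement step (illustrative).
import numpy as np
from scipy.spatial import Delaunay

def refine(points, error_indicator, tol=0.1, rounds=3):
    for _ in range(rounds):
        tri = Delaunay(points)
        centroids = points[tri.simplices].mean(axis=1)
        bad = error_indicator(centroids) > tol          # mark triangles to refine
        if not bad.any():
            break
        points = np.vstack([points, centroids[bad]])    # insert new vertices there
    return points, Delaunay(points)

# Example: refine where a made-up field varies rapidly near the origin.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(50, 2))
err = lambda c: np.exp(-4 * np.linalg.norm(c, axis=1))  # proxy error indicator
pts, mesh = refine(pts, err, tol=0.3)
print(len(pts), "vertices,", len(mesh.simplices), "triangles")
```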

  17. Photoelectric scanning-based method for positioning omnidirectional automatic guided vehicle

    Science.gov (United States)

    Huang, Zhe; Yang, Linghui; Zhang, Yunzhi; Guo, Yin; Ren, Yongjie; Lin, Jiarui; Zhu, Jigui

    2016-03-01

    Automatic guided vehicles (AGVs), a kind of mobile robot, have been widely used in many applications. To adapt better to complex working environments, more and more AGVs are designed to be omnidirectional by being equipped with Mecanum wheels, which increases their flexibility and maneuverability. However, because an AGV with this kind of wheel suffers from position errors, mainly due to frequent wheel slip, measuring its position accurately in real time is an extremely important issue. Among the available approaches, photoelectric scanning based on angle measurement is efficient. Hence, we propose a feasible method to improve the positioning process, which mainly integrates four photoelectric receivers and one laser transmitter. To verify its practicality and accuracy, actual experiments and computer simulations were conducted. In the simulation, the theoretical positioning error is less than 0.28 mm in a 10 m×10 m space. In the actual experiment, the stability, accuracy, and dynamic capability of the method were examined. The results demonstrate that the system works well and that the position measurement performance is sufficient for mainstream tasks.
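
    As a generic illustration of positioning from angle measurements, the sketch below solves a planar resection problem: given known reference points and the bearings measured from the vehicle, the vehicle position is recovered by nonlinear least squares. The beacon layout and noise level are assumptions, and the geometry is simplified relative to the paper's transmitter/receiver arrangement.

```python
# Generic angle-measurement positioning (planar resection), illustrative only.
import numpy as np
from scipy.optimize import least_squares

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])

def residuals(pos, bearings):
    dx = beacons[:, 0] - pos[0]
    dy = beacons[:, 1] - pos[1]
    predicted = np.arctan2(dy, dx)
    # wrap the angle differences into (-pi, pi]
    return np.angle(np.exp(1j * (predicted - bearings)))

true_pos = np.array([3.2, 7.5])
meas = np.arctan2(beacons[:, 1] - true_pos[1], beacons[:, 0] - true_pos[0])
meas += np.random.normal(0, 1e-3, meas.size)          # small angular noise

sol = least_squares(residuals, x0=[5.0, 5.0], args=(meas,))
print(sol.x)                                          # ~ [3.2, 7.5]
```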

  18. A method for automatic feature points extraction of human vertebrae three-dimensional model

    Science.gov (United States)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  19. Evaluation of an automatic dry eye test using MCDM methods and rank correlation.

    Science.gov (United States)

    Peteiro-Barral, Diego; Remeseiro, Beatriz; Méndez, Rebeca; Penedo, Manuel G

    2017-04-01

    Dry eye is an increasingly common disease in modern society which affects a wide range of the population and has a negative impact on daily activities, such as working with computers or driving. It can be diagnosed through an automatic clinical test for tear film lipid layer classification based on color and texture analysis. Up to now, researchers have mainly focused on improving the image analysis step. However, there is still large room for improvement on the machine learning side. This paper presents a methodology to optimize this problem by means of class binarization, feature selection, and classification. The methodology can be used as a baseline in other classification problems, providing several solutions and evaluating their performance using a set of representative metrics and decision-making methods. When several decision-making methods are used, they may produce disagreeing rankings, which are resolved by conflict handling, in which the rankings are merged into a single one. The experimental results prove the effectiveness of the proposed methodology in this domain. Also, its general-purpose nature allows it to be adapted to other classification problems in different fields such as medicine and biology.
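
    One simple way to merge disagreeing rankings from several decision-making methods is a Borda count, sketched below. This particular conflict-handling rule is an illustrative assumption, not necessarily the scheme used in the paper.

```python
# Borda-count merging of several rankings into one consensus ranking (sketch).
from collections import defaultdict

def borda_merge(rankings):
    """rankings: list of lists, each ordered best-to-worst over the same items."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position          # the best item gets n points
    return sorted(scores, key=scores.get, reverse=True)

# Three decision-making methods rank four candidate classifiers differently:
r1 = ["SVM", "RF", "kNN", "NB"]
r2 = ["RF", "SVM", "NB", "kNN"]
r3 = ["SVM", "kNN", "RF", "NB"]
print(borda_merge([r1, r2, r3]))                  # single merged ranking
```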

  20. An overview of failure assessment methods in codes and standards

    International Nuclear Information System (INIS)

    Zerbst, U.; Ainsworth, R.A.

    2003-01-01

    This volume provides comprehensive up-to-date information on the assessment of the integrity of engineering structures containing crack-like flaws, in the absence of effects of creep at elevated temperatures (see volume 5) and of environment (see volume 6). Key methods are extensively reviewed and background information as well as validation is given. However, it should be kept in mind that for actual detailed assessments the relevant documents have to be consulted. In classical engineering design, an applied stress is compared with the appropriate material resistance expressed in terms of a limit stress, such as the yield strength or fatigue endurance limit. As long as the material resistance exceeds the applied stress, integrity of the component is assured. It is implicitly assumed that the component is defect-free but design margins provide some protection against defects. Modern design and operation philosophies, however, take explicit account of the possible presence of defects in engineering components. Such defects may arise from fabrication, e.g., during casting, welding, or forming processes, or may develop during operation. They may extend during operation and eventually lead to failure, which in the ideal case occurs beyond the design life of the component. Failure assessment methods are based upon the behavior of sharp cracks in structures, and for this reason all flaws or defects found in structures have to be treated as if they are sharp planar cracks. Hence the terms flaw or defect should be regarded as being interchangeable with the term crack throughout this volume. (orig.)

  1. Development of continuous-energy eigenvalue sensitivity coefficient calculation methods in the shift Monte Carlo Code

    International Nuclear Information System (INIS)

    Perfetti, C.; Martin, W.; Rearden, B.; Williams, M.

    2012-01-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)

  2. Development of continuous-energy eigenvalue sensitivity coefficient calculation methods in the shift Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)

    2012-07-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)

  3. Automatic generation of 3D fine mesh geometries for the analysis of the venus-3 shielding benchmark experiment with the Tort code

    International Nuclear Information System (INIS)

    Pescarini, M.; Orsi, R.; Martinelli, T.

    2003-01-01

    In many practical radiation transport applications today, the cost of solving refined, large and complex multi-dimensional problems lies not so much in computing as in the cumbersome effort required by an expert to prepare a detailed geometrical model and to verify and validate that it is correct and represents, to a specified tolerance, the real design or facility. This situation is particularly relevant and frequent in reactor core criticality and shielding calculations with three-dimensional (3D) general-purpose radiation transport codes, which require a very large number of meshes and high-performance computers. The need has clearly emerged for tools that ease this task for the physicist or engineer by reducing the time required, by facilitating the verification of correctness through effective graphical display and, finally, by helping the interpretation of the results obtained. The paper shows the results of efforts in this field through detailed simulations of a complex shielding benchmark experiment. In the context of the activities proposed by the OECD/NEA Nuclear Science Committee (NSC) Task Force on Computing Radiation Dose and Modelling of Radiation-Induced Degradation of Reactor Components (TFRDD), the ENEA-Bologna Nuclear Data Centre contributed an analysis of the VENUS-3 low-flux neutron shielding benchmark experiment (SCK/CEN-Mol, Belgium). One of the targets of the work was to test the BOT3P system, originally developed at the Nuclear Data Centre in ENEA-Bologna and now released to the OECD/NEA Data Bank for free distribution. BOT3P, an ancillary system for the DORT (2D) and TORT (3D) SN codes, permits flexible automatic generation of spatial mesh grids in Cartesian or cylindrical geometry, through combinatorial geometry algorithms, following a simplified user-friendly approach. This system has also demonstrated its validity in core criticality analyses, for example the Lewis MOX fuel benchmark, permitting to easily

  4. Status of SFR Codes and Methods QA Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, Acacia J. [Argonne National Lab. (ANL), Argonne, IL (United States); Briggs, Laural L. [Argonne National Lab. (ANL), Argonne, IL (United States); Fanning, Thomas H. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-31

    This report details development of the SAS4A/SASSYS-1 SQA Program and describes the initial stages of Program implementation planning. The provisional Program structure, which is largely focused on the establishment of compliant SQA documentation, is outlined in detail, and Program compliance with the appropriate SQA requirements is highlighted. Additional program activities, such as improvements to testing methods and Program surveillance, are also described in this report. Given that the programmatic resources currently granted to development of the SAS4A/SASSYS-1 SQA Program framework are not sufficient to adequately address all SQA requirements (e.g. NQA-1, NUREG/BR-0167, etc.), this report also provides an overview of the gaps that remain in the SQA program and highlights recommendations on a path forward to resolution of these issues. One key finding of this effort is the identification of the need for an SQA program sustainable over multiple years within DOE annual R&D funding constraints.

  5. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    …to have the various views available simultaneously. However, in multiview DVC (M-DVC), the decoder can still exploit the redundancy between views, avoiding the need for inter-camera communication. The key element of every DVC decoder is the side information (SI), which can be generated by leveraging intra-view or inter-view redundancy for multiview video data. In this paper, a novel learning-based fusion technique is proposed, which is able to robustly fuse an inter-view SI and an intra-view (temporal) SI. An inter-view SI generation method capable of identifying occluded areas is proposed and is coupled … values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single-SI DVC decoder, chosen as the better of an inter-view and a temporal SI-based decoder…

  6. Computational and Experimental Methods to Decipher the Epigenetic Code

    Directory of Open Access Journals (Sweden)

    Stefano de Pretis

    2014-09-01

    Full Text Available A multi-layered set of epigenetic marks, including post-translational modifications of histones and methylation of DNA, is finely tuned to define the epigenetic state of chromatin in any given cell type under specific conditions. Recently, the knowledge about the combinations of epigenetic marks occurring in the genome of different cell types under various conditions is rapidly increasing. Computational methods were developed for the identification of these states, unraveling the combinatorial nature of epigenetic marks and their association to genomic functional elements and transcriptional states. Nevertheless, the precise rules defining the interplay between all these marks remain poorly characterized. In this perspective we review the current state of this research field, illustrating the power and the limitations of current approaches. Finally, we sketch future avenues of research illustrating how the adoption of specific experimental designs coupled with available experimental approaches could be critical for a significant progress in this area.

  7. Source Code Plagiarism Detection Method Using Protégé Built Ontologies

    Directory of Open Access Journals (Sweden)

    Ion SMEUREANU

    2013-01-01

    Full Text Available Software plagiarism is a growing and serious problem that affects computer science universities in particular and the quality of education in general. More and more students tend to copy the software for their theses from older theses or internet databases. Checking source code manually to detect whether programs are similar or identical is a laborious and time-consuming job, and may even be impossible given the existence of large digital repositories. An ontology is a way of describing a document's semantics, so it can easily be used for source code files too. The OWL Web Ontology Language can be applied to describe both the vocabulary and the taxonomy of programming language source code. SPARQL is an SQL-like query language that extracts stored or inferred information from ontologies. Our paper proposes a source code plagiarism detection method, based on ontologies created using the Protégé editor, which can be applied to scanning the software source code of students' theses.

  8. A method for automatically extracting infectious disease-related primers and probes from the literature

    Directory of Open Access Journals (Sweden)

    Pérez-Rey David

    2010-08-01

    Full Text Available Abstract Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this important information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe treatments for different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch.
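
    A toy recognizer in the spirit of phase (2) is sketched below: candidate primer/probe strings are flagged as runs of IUPAC nucleotide codes of plausible length. The regular expression and length bounds are assumptions made for illustration; the actual system uses finite-state-machine recognizers and rule-based refinement.

```python
# Toy candidate-sequence recognizer: runs of IUPAC nucleotide codes of
# primer-like length (illustrative stand-in for the paper's recognizers).
import re

IUPAC = "ACGTURYSWKMBDHVN"
CANDIDATE = re.compile(rf"\b[{IUPAC}]{{15,35}}\b")   # typical primer lengths

def find_candidates(text):
    return CANDIDATE.findall(text.upper().replace("'", ""))

snippet = ("The forward primer 5'-AGGTCGGTGTGAACGGATTTG-3' and the probe "
           "FAM-CCGCCTGGAGAAACCTGCC were used for qPCR.")
print(find_candidates(snippet))
```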

  9. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning.

    Science.gov (United States)

    Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J

    2016-08-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74 indicating substantial agreement between automatic and manual scoring.
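
    The pipeline described above can be sketched roughly as follows: each EOG epoch is decomposed with CEEMDAN, simple per-IMF features are computed, and a random forest assigns the sleep stage. The sketch assumes the third-party EMD-signal package (imported as PyEMD) and uses made-up features and labels; it is not the authors' feature set.

```python
# Hedged sketch: CEEMDAN decomposition + simple IMF features + random forest.
import numpy as np
from PyEMD import CEEMDAN                         # assumes the EMD-signal package
from sklearn.ensemble import RandomForestClassifier

def epoch_features(signal, max_imfs=5):
    imfs = CEEMDAN(trials=20)(signal)             # complete ensemble EMD with adaptive noise
    feats = []
    for imf in imfs[:max_imfs]:
        feats += [imf.std(), np.abs(imf).mean()]  # crude energy/amplitude features
    feats += [0.0] * (2 * max_imfs - len(feats))  # pad if fewer IMFs were found
    return feats

# Toy data: random "epochs" with made-up stage labels, just to show the flow.
rng = np.random.default_rng(1)
X = np.array([epoch_features(rng.normal(size=1000)) for _ in range(10)])
y = rng.integers(0, 5, size=10)                   # 5 sleep stages (hypothetical labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```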

  10. Automatic lung segmentation method for MRI-based lung perfusion studies of patients with chronic obstructive pulmonary disease.

    Science.gov (United States)

    Kohlmann, Peter; Strehlow, Jan; Jobst, Betram; Krass, Stefan; Kuhnigk, Jan-Martin; Anjorin, Angela; Sedlaczek, Oliver; Ley, Sebastian; Kauczor, Hans-Ulrich; Wielpütz, Mark Oliver

    2015-04-01

    A novel fully automatic lung segmentation method for magnetic resonance (MR) images of patients with chronic obstructive pulmonary disease (COPD) is presented. The main goal of this work was to ease the tedious and time-consuming task of manual lung segmentation, which is required for region-based volumetric analysis of four-dimensional MR perfusion studies which goes beyond the analysis of small regions of interest. The first step in the automatic algorithm is the segmentation of the lungs in morphological MR images with higher spatial resolution than corresponding perfusion MR images. Subsequently, the segmentation mask of the lungs is transferred to the perfusion images via nonlinear registration. Finally, the masks for left and right lungs are subdivided into a user-defined number of partitions. Fourteen patients with two time points resulting in 28 perfusion data sets were available for the preliminary evaluation of the developed methods. Resulting lung segmentation masks are compared with reference segmentations from experienced chest radiologists, as well as with total lung capacity (TLC) acquired by full-body plethysmography. TLC results were available for thirteen patients. The relevance of the presented method is indicated by an evaluation, which shows high correlation between automatically generated lung masks with corresponding ground-truth estimates. The evaluation of the developed methods indicates good accuracy and shows that automatically generated lung masks differ from expert segmentations about as much as segmentations from different experts.

  11. Refuelling design and core calculations at NPP Paks: codes and methods

    International Nuclear Information System (INIS)

    Pos, I.; Nemes, I.; Javor, E.; Korpas, L.; Szecsenyi, Z.; Patai-Szabo, S.

    2001-01-01

    This article gives a brief review of the computer codes used in fuel management practice at NPP Paks. The code package consists of the HELIOS neutron and gamma transport code for the preparation of the few-group cross-section library, the CERBER code to determine optimal core loading patterns, and the C-PORCA code for detailed reactor-physics analysis of different reactor states. The last two programs have been developed at NPP Paks. HELIOS provides a sturdy basis for our neutron-physics calculations, and the CERBER and C-PORCA programs have been enhanced to a great extent in recent years. Methods and models have become more detailed and accurate as regards the calculated parameters and spatial resolution. With the introduction of a more advanced data-handling algorithm, arbitrary moves of fuel assemblies can be followed either in the reactor core or in the storage pool. The new interactive WINDOWS applications allow easier and more reliable use of the codes. All these computer code developments have made it possible to handle and calculate new kinds of fuel, such as profiled Russian and BNFL fuel with burnable poison, and to support the reliable reuse of fuel assemblies stored in the storage pool. To extend the thermo-hydraulic capability, the COBRA code will also be coupled to the system with KFKI's contribution. (Authors)

  12. Automatic crack detection method for loaded coal in vibration failure process.

    Directory of Open Access Journals (Sweden)

    Chengwu Li

    Full Text Available In the coal mining process, the destabilization of a loaded coal mass is a prerequisite for coal and rock dynamic disasters, and surface cracks of the coal and rock mass are important indicators reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal during a vibration failure process is proposed, based on the characteristics of the surface cracks of coal and a support vector machine (SVM). A large number of crack images were obtained using a vibration-induced failure test system and an industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the cracks; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by feeding the labelled sample images to the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than the conventional algorithm and can effectively and automatically identify cracks on the surface of the coal and rock mass.
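
    A hedged sketch of the processing chain follows: histogram equalization, hysteresis thresholding of an edge map, a few hand-crafted region features, and an SVM classifier. The specific features and thresholds below are stand-ins, not the paper's eight crack features.

```python
# Illustrative crack pre-processing + classification sketch (not the paper's code).
import numpy as np
import cv2
from skimage.filters import apply_hysteresis_threshold
from skimage.measure import label, regionprops
from sklearn.svm import SVC

def region_features(gray):
    """gray: 8-bit grayscale image; returns simple features per candidate region."""
    eq = cv2.equalizeHist(gray)                          # contrast enhancement
    edges = cv2.Sobel(eq, cv2.CV_64F, 1, 1)
    mask = apply_hysteresis_threshold(np.abs(edges), 30, 80)
    feats = []
    for r in regionprops(label(mask)):
        if r.area < 20:
            continue
        elong = r.major_axis_length / (r.minor_axis_length + 1e-6)
        feats.append([r.area, elong, r.eccentricity, r.extent])
    return feats

# Training would use features from manually labelled crack / non-crack regions;
# dummy arrays are used here only to show the shapes involved.
X = np.random.rand(40, 4)
y = np.random.randint(0, 2, 40)
clf = SVC(kernel="rbf", C=10).fit(X, y)
print(clf.predict(X[:5]))
```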

  13. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [Los Alamos National Laboratory; Soille, Pierre [EC - JRC

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum-cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.

  14. Using automatic calibration method for optimizing the performance of Pedotransfer functions of saturated hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    Ahmed M. Abdelbaki

    2016-06-01

    Full Text Available Pedotransfer functions (PTFs) are an easy way to predict saturated hydraulic conductivity (Ksat) without measurements. This study aims to auto-calibrate 22 PTFs. The PTFs were divided into three groups according to their input requirements, and the shuffled complex evolution algorithm was used for calibration. The results showed great modification in the performance of the functions compared to the originally published versions. For group 1 PTFs, the geometric mean error ratio (GMER) and the geometric standard deviation of the error ratio (GSDER) values were modified from the ranges (1.27–6.09) and (5.2–7.01) to (0.91–1.15) and (4.88–5.85), respectively. For group 2 PTFs, the GMER and GSDER values were modified from (0.3–1.55) and (5.9–12.38) to (1.00–1.03) and (5.5–5.9), respectively. For group 3 PTFs, the GMER and GSDER values were modified from (0.11–2.06) and (5.55–16.42) to (0.82–1.01) and (5.1–6.17), respectively. The results show that automatic calibration is an efficient and accurate way to enhance the performance of the PTFs.
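
    The auto-calibration idea can be sketched by tuning the coefficients of a hypothetical log-linear Ksat pedotransfer function so that the logarithmic error ratios shrink. SciPy's differential evolution is used here as a stand-in for the shuffled complex evolution algorithm, and the PTF form and data are invented for illustration.

```python
# Sketch of PTF auto-calibration; differential evolution replaces SCE here.
import numpy as np
from scipy.optimize import differential_evolution

def ptf(params, sand, clay):
    a, b, c = params
    return np.exp(a + b * sand + c * clay)            # hypothetical PTF form

def objective(params, sand, clay, ksat_measured):
    log_ratio = np.log(ptf(params, sand, clay) / ksat_measured)
    # minimising the mean squared log-error drives GMER toward 1 and shrinks GSDER
    return np.mean(log_ratio ** 2)

# Synthetic "measured" data for illustration only.
rng = np.random.default_rng(3)
sand, clay = rng.uniform(0.1, 0.9, 50), rng.uniform(0.05, 0.5, 50)
ksat = np.exp(1.0 + 2.0 * sand - 3.0 * clay) * rng.lognormal(0, 0.2, 50)

result = differential_evolution(objective, bounds=[(-5, 5), (-5, 5), (-5, 5)],
                                args=(sand, clay, ksat), seed=0)
print(result.x)                                       # calibrated coefficients
```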

  15. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE-WANG, J.; LUCCIO, A.U.; D'IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are reviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed

  16. An improved method for storing and retrieving tabulated data in a scalar Monte Carlo code

    International Nuclear Information System (INIS)

    Hollenbach, D.F.; Reynolds, K.H.; Dodds, H.L.; Landers, N.F.; Petrie, L.M.

    1990-01-01

    The KENO-Va code is a production-level criticality safety code used to calculate the k-eff of a system. The code is stochastic in nature, using a Monte Carlo algorithm to track individual particles one at a time through the system. The advent of computers with vector processors has generated interest in improving KENO-Va to take advantage of the potential speed-up associated with these new processors. Unfortunately, the original Monte Carlo algorithm and method of storing and retrieving cross-section data are not adaptable to vector processing. This paper discusses an alternate method for storing and retrieving data that not only is readily vectorizable but also improves the efficiency of the current scalar code

  17. A New Image Encryption Technique Combining Hill Cipher Method, Morse Code and Least Significant Bit Algorithm

    Science.gov (United States)

    Nofriansyah, Dicky; Defit, Sarjon; Nurcahyo, Gunadi W.; Ganefri, G.; Ridwan, R.; Saleh Ahmar, Ansari; Rahim, Robbi

    2018-01-01

    Cybercrime is one of the most serious threats. One effort made to reduce cybercrime is to find new techniques for securing data, such as combinations of cryptography, steganography and watermarking. Cryptography and steganography are growing data-security sciences, and combining them is one way to improve data integrity. New techniques are created by combining several algorithms; one such combination incorporates the Hill cipher method and Morse code. Morse code is one of the communication codes used in the Scouting field and consists of dots and dashes. This is a combination of modern and classic concepts for maintaining data integrity. The result of combining these three methods is expected to yield new algorithms that improve the security of data, especially images.
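
    The Hill cipher ingredient of the combined scheme is easy to illustrate: plaintext letters are grouped into vectors and multiplied by a key matrix modulo 26. The sketch below uses a textbook 2x2 key; the Morse-code conversion and LSB embedding steps of the proposed scheme are not shown.

```python
# Minimal Hill-cipher encryption sketch (mod-26 alphabet, 2x2 key).
import numpy as np

KEY = np.array([[3, 3],
                [2, 5]])          # classic invertible 2x2 key (det = 9, gcd(9, 26) = 1)

def hill_encrypt(plaintext, key=KEY):
    nums = [ord(c) - ord('A') for c in plaintext.upper() if c.isalpha()]
    if len(nums) % 2:
        nums.append(ord('X') - ord('A'))              # pad to an even length
    out = []
    for i in range(0, len(nums), 2):
        block = np.array(nums[i:i + 2])
        out.extend((key @ block) % 26)                # matrix multiply, then mod 26
    return ''.join(chr(int(n) + ord('A')) for n in out)

print(hill_encrypt("HELP"))       # -> HIAT (standard textbook example for this key)
```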

  18. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific…

  19. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua

    2016-02-15

    When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with a single RELAP5 in a large-scale simulation. To improve the speed and at the same time ensure the precision of the simulation, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. The synchronization frequency was chosen as a compromise to improve the precision of the simulation while guaranteeing real-time performance. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreement was obtained between the splitting–coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting–coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between the RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example a simulator employing ATHLET instead of RELAP5, or another logic code instead of SIMULINK. It is believed that the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.

  20. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    International Nuclear Information System (INIS)

    Li, Yankai; Lin, Meng; Yang, Yanhua

    2016-01-01

    When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with a single RELAP5 in a large-scale simulation. To improve the speed and at the same time ensure the precision of the simulation, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. The synchronization frequency was chosen as a compromise to improve the precision of the simulation while guaranteeing real-time performance. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreement was obtained between the splitting–coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting–coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between the RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example a simulator employing ATHLET instead of RELAP5, or another logic code instead of SIMULINK. It is believed that the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.

  1. Method for calculating internal radiation and ventilation with the ADINAT heat-flow code

    International Nuclear Information System (INIS)

    Butkovich, T.R.; Montan, D.N.

    1980-01-01

    One objective of the spent fuel test in Climax Stock granite (SFTC) is to correctly model the thermal transport and the changes in the stress field and accompanying displacements caused by the application of the thermal loads. We have chosen the ADINA and ADINAT finite element codes for these calculations. ADINAT is a heat transfer code compatible with the ADINA displacement and stress analysis code. The heat flow problem encountered at SFTC requires a code with conduction, radiation, and ventilation capabilities, which the present version of ADINAT does not have. We have devised a method for calculating internal radiation and ventilation with the ADINAT code. This method effectively reproduces the results from the TRUMP multi-dimensional finite difference code, which correctly models radiative heat transport between drift surfaces, conductive and convective thermal transport to and through air in the drifts, and mass flow of air in the drifts. The temperature histories for each node in the finite element mesh calculated with ADINAT using this method can be used directly in the ADINA thermal-mechanical calculation

  2. Evaluation of advanced automatic PET segmentation methods using nonspherical thin-wall inserts

    International Nuclear Information System (INIS)

    Berthon, B.; Marshall, C.; Evans, M.; Spezi, E.

    2014-01-01

    Purpose: The use of positron emission tomography (PET) within radiotherapy treatment planning requires the availability of reliable and accurate segmentation tools. PET automatic segmentation (PET-AS) methods have been recommended for the delineation of tumors, but there is still a lack of thorough validation and cross-comparison of such methods using clinically relevant data. In particular, studies validating PET segmentation tools mainly use phantoms with thick plastic wall inserts of simple spherical geometry and have not specifically investigated the effect of the target object geometry on the delineation accuracy. Our work therefore aimed at generating clinically realistic data using nonspherical thin-wall plastic inserts, for the evaluation and comparison of a set of eight promising PET-AS approaches. Methods: Sixteen nonspherical inserts were manufactured with a plastic wall of 0.18 mm and scanned within a custom plastic phantom. These included ellipsoids and toroids of different volumes, as well as tubes, pear- and drop-shaped inserts with different aspect ratios. A set of six spheres of volumes ranging from 0.5 to 102 ml was used for a baseline study. A selection of eight PET-AS methods, written in house, was applied to the images obtained. The methods represented promising segmentation approaches such as adaptive iterative thresholding, region-growing, clustering and gradient-based schemes. The delineation accuracy was measured in terms of overlap with the computed tomography reference contour, using the dice similarity coefficient (DSC), and error in dimensions. Results: The delineation accuracy was lower for nonspherical inserts than for spheres of the same volume in 88% of cases. Slice-by-slice gradient-based methods showed particularly lower DSC for tori (DSC 0.76 except for tori) but showed the largest errors in the recovery of pear and drop dimensions (higher than 10% and 30% of the true length, respectively). Large errors were visible…
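
    For reference, the Dice similarity coefficient used as the accuracy measure above is straightforward to compute for two binary masks; a short sketch (not the study's code) follows.

```python
# Dice similarity coefficient of two binary segmentation masks.
import numpy as np

def dice(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0   # both empty -> perfect overlap

auto = np.zeros((64, 64), dtype=bool); auto[20:40, 20:40] = True
ref  = np.zeros((64, 64), dtype=bool); ref[25:45, 22:42] = True
print(round(dice(auto, ref), 3))
```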

  3. An automatic gain matching method for {gamma}-ray spectra obtained with a multi-detector array

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S. E-mail: ssg@alpha.iuc.res.in

    2004-07-01

    The increasing size of data sets from large multi-detector arrays makes the traditional approach to the pre-evaluation of the data difficult and time consuming. The pre-sorting involves detection and correction of the observed on-line drifts followed by calibration of the raw data. A new method for automatic detection and correction of these instrumental drifts is presented. An application of this method to the data acquired using a multi-Clover array is discussed.
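
    The gain-matching idea can be sketched as follows: the centroid of a reference photopeak is located in each run and a multiplicative gain factor is derived so that drifted spectra line up with the reference. The Gaussian peaks and window below are synthetic; the method described in the paper is more elaborate.

```python
# Hedged sketch of linear gain matching from a reference photopeak centroid.
import numpy as np

def peak_centroid(spectrum, lo, hi):
    ch = np.arange(lo, hi)
    counts = spectrum[lo:hi]
    return (ch * counts).sum() / counts.sum()

def gain_factor(reference, drifted, window):
    """Multiplicative gain correction mapping the drifted peak onto the reference."""
    return peak_centroid(reference, *window) / peak_centroid(drifted, *window)

# Synthetic spectra: a Gaussian peak at channel 500, drifted to 510 in a later run.
ch = np.arange(1024)
ref = 1000 * np.exp(-((ch - 500) ** 2) / (2 * 4.0 ** 2))
drift = 1000 * np.exp(-((ch - 510) ** 2) / (2 * 4.0 ** 2))
g = gain_factor(ref, drift, window=(480, 540))
print(round(g, 4))               # ~0.98: rescale drifted channels by this factor
```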

  4. An automatic gain matching method for γ-ray spectra obtained with a multi-detector array

    International Nuclear Information System (INIS)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S.

    2004-01-01

    The increasing size of data sets from large multi-detector arrays makes the traditional approach to the pre-evaluation of the data difficult and time consuming. The pre-sorting involves detection and correction of the observed on-line drifts followed by calibration of the raw data. A new method for automatic detection and correction of these instrumental drifts is presented. An application of this method to the data acquired using a multi-Clover array is discussed

  5. Gray-Matter Volume Estimate Score: A Novel Semi-Automatic Method Measuring Early Ischemic Change on CT

    OpenAIRE

    Song, Dongbeom; Lee, Kijeong; Kim, Eun Hye; Kim, Young Dae; Lee, Hye Sun; Kim, Jinkwon; Song, Tae-Jin; Ahn, Sung Soo; Nam, Hyo Suk; Heo, Ji Hoe

    2015-01-01

    Background and Purpose We developed a novel method named Gray-matter Volume Estimate Score (GRAVES), measuring early ischemic changes on Computed Tomography (CT) semi-automatically by computer software. This study aimed to compare GRAVES and Alberta Stroke Program Early CT Score (ASPECTS) with regards to outcome prediction and inter-rater agreement. Methods This was a retrospective cohort study. Among consecutive patients with ischemic stroke in the anterior circulation who received intra-art...

  6. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  7. Verification & Validation Toolkit to Assess Codes: Is it Theory Limitation, Numerical Method Inadequacy, Bug in the Code or a Serious Flaw?

    Science.gov (United States)

    Bombardelli, F. A.; Zamani, K.

    2014-12-01

    We introduce and discuss an open-source, user-friendly, numerical post-processing piece of software for assessing the reliability of the modeling results of environmental fluid mechanics codes. Verification and Validation, Uncertainty Quantification (VAVUQ) is a toolkit developed in Matlab© for general V&V purposes. In this work, the VAVUQ implementation of V&V techniques and its user interfaces are discussed. VAVUQ is able to read Excel, Matlab, ASCII, and binary files, and it produces a log of the results in txt format. Next, each capability of the code is discussed through an example: the first example is the code verification of a sediment transport code, developed with the Finite Volume Method, via MES. The second example is a solution verification of a code for groundwater flow, developed with the Boundary Element Method, via MES. The third example is a solution verification of a mixed-order, Compact Difference Method code for heat transfer via MMS. The fourth example is a solution verification of a 2-D, Finite Difference Method code for floodplain analysis via Complete Richardson Extrapolation. In turn, the application of VAVUQ in quantitative model skill assessment studies (validation) of environmental codes is given through two examples: validation of a two-phase-flow computational model of air entrainment in a free-surface flow against laboratory measurements, and heat transfer modeling in the earth's surface against field measurements. At the end, we discuss practical considerations and common pitfalls in the interpretation of V&V results.
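
    One verification check that this kind of toolkit relies on is the observed order of accuracy obtained from solutions on three systematically refined grids, together with Richardson extrapolation; a minimal sketch (assuming a constant refinement ratio) follows. The numbers are invented for illustration.

```python
# Observed order of accuracy and Richardson extrapolation from three grids.
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from three grid solutions, refinement ratio r."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Estimate of the grid-converged solution from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (r**p - 1)

# Example: a quantity converging at 2nd order toward 1.0 on grids refined by r = 2.
f1, f2, f3 = 1.0400, 1.0100, 1.0025          # coarse, medium, fine
p = observed_order(f1, f2, f3, r=2)
print(round(p, 2), round(richardson_extrapolate(f2, f3, 2, p), 4))   # 2.0, 1.0
```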

  8. The OpenMOC method of characteristics neutral particle transport code

    International Nuclear Information System (INIS)

    Boyd, William; Shaner, Samuel; Li, Lulu; Forget, Benoit; Smith, Kord

    2014-01-01

    Highlights: • An open source method of characteristics neutron transport code has been developed. • OpenMOC shows nearly perfect scaling on CPUs and 30× speedup on GPUs. • Nonlinear acceleration techniques demonstrate a 40× reduction in source iterations. • OpenMOC uses modern software design principles within a C++ and Python framework. • Validation with respect to the C5G7 and LRA benchmarks is presented. - Abstract: The method of characteristics (MOC) is a numerical integration technique for partial differential equations, and has seen widespread use for reactor physics lattice calculations. The exponential growth in computing power has finally brought the possibility for high-fidelity full core MOC calculations within reach. The OpenMOC code is being developed at the Massachusetts Institute of Technology to investigate algorithmic acceleration techniques and parallel algorithms for MOC. OpenMOC is a free, open source code written using modern software languages such as C/C++ and CUDA with an emphasis on extensible design principles for code developers and an easy to use Python interface for code users. The present work describes the OpenMOC code and illustrates its ability to model large problems accurately and efficiently

  9. Validation of the analytical methods in the LWR code BOXER for gadolinium-loaded fuel pins

    International Nuclear Information System (INIS)

    Paratte, J.M.; Arkuszewski, J.J.; Kamboj, B.K.; Kallfelz, J.M.; Abdel-Khalik, S.I.

    1990-01-01

    Due to the very high absorption occurring in gadolinium-loaded fuel pins, calculations of lattices with such pins present are a demanding test of the analysis methods in light water reactor (LWR) cell and assembly codes. Considerable effort has, therefore, been devoted to the validation of code methods for gadolinia fuel. The goal of the work reported in this paper is to check the analysis methods in the LWR cell/assembly code BOXER and its associated cross-section processing code ETOBOX, by comparison of BOXER results with those from a very accurate Monte Carlo calculation for a gadolinium benchmark problem. Initial results of such a comparison have been previously reported. However, the Monte Carlo calculations, done with the MCNP code, were performed at Los Alamos National Laboratory using ENDF/B-V data, while the BOXER calculations were performed at the Paul Scherrer Institute using JEF-1 nuclear data. This difference in the basic nuclear data used for the two calculations, caused by the restricted nature of these evaluated data files, led to associated uncertainties in a comparison of the results for methods validation. In the joint investigations at the Georgia Institute of Technology and PSI, such uncertainty in this comparison was eliminated by using ENDF/B-V data for BOXER calculations at Georgia Tech

  10. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes aspects other than the similarity between programs into account. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of the two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) in the field of information retrieval is applied: each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
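
    A much-simplified sketch of AST-based similarity is given below: both programs are parsed (here with Python's ast module rather than a Java parser), TF-style vectors over AST node types are built, and cosine similarity is computed. This replaces the weighted tree kernel of WASTK with plain node-type counts purely for illustration.

```python
# Simplified AST-based similarity: node-type count vectors + cosine similarity.
import ast
import math
from collections import Counter

def node_type_vector(source):
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

def cosine(c1, c2):
    common = set(c1) & set(c2)
    num = sum(c1[k] * c2[k] for k in common)
    den = math.sqrt(sum(v * v for v in c1.values())) * math.sqrt(sum(v * v for v in c2.values()))
    return num / den if den else 0.0

prog_a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
prog_b = "def add_all(values):\n    acc = 0\n    for v in values:\n        acc += v\n    return acc\n"
print(round(cosine(node_type_vector(prog_a), node_type_vector(prog_b)), 3))  # ~1.0: same structure
```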

  11. Methods for Coding Tobacco-Related Twitter Data: A Systematic Review.

    Science.gov (United States)

    Lienemann, Brianna A; Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai

    2017-03-31

    As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter's Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Standards for data collection and coding should be developed to be able to more easily compare and replicate tobacco-related Twitter results. Additional recommendations include the following: sample Twitter's databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations.

  12. A new robust markerless method for automatic image-to-patient registration in image-guided neurosurgery system.

    Science.gov (United States)

    Liu, Yinlong; Song, Zhijian; Wang, Manning

    2017-12-01

    Compared with traditional point-based registration in image-guided neurosurgery systems, surface-based registration is preferable because it does not use fiducial markers before image scanning and does not require image acquisition dedicated to navigation purposes. However, most existing surface-based registration methods must include a manual step for coarse registration, which increases the registration time and introduces some inconvenience and uncertainty. A new automatic surface-based registration method is proposed, which applies a 3D surface feature description and matching algorithm to obtain point correspondences for coarse registration and uses the iterative closest point (ICP) algorithm in the last step to obtain an image-to-patient registration. Both phantom and clinical data were used to execute automatic registrations, and the target registration error (TRE) was calculated to verify the practicality and robustness of the proposed method. In the phantom experiments, the registration accuracy was stable across different downsampling resolutions (18-26 mm) and different support radii (2-6 mm). In the clinical experiments, the mean TREs for two patients, obtained by registering full head surfaces, were 1.30 mm and 1.85 mm. This study introduced a new robust automatic surface-based registration method based on 3D feature matching. The method achieved sufficient registration accuracy with different real-world surface regions in phantom and clinical experiments.
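
    The ICP refinement step mentioned above can be sketched with a minimal rigid ICP: nearest-neighbour matching with a k-d tree alternates with an SVD-based rigid transform estimate. Coarse alignment from feature matching is assumed to have been done already; this is not the paper's implementation.

```python
# Minimal rigid ICP sketch: k-d tree matching + SVD (Kabsch) transform estimate.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                 # closest target point for each source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

rng = np.random.default_rng(0)
target = rng.uniform(-1, 1, (200, 3))
angle = np.deg2rad(5)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.02, -0.03, 0.01])
aligned = icp(source, target)
print(np.abs(aligned - target).mean())           # small residual after alignment
```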

  13. Automatic limit switch system for scintillation device and method of operation

    International Nuclear Information System (INIS)

    Brunnett, C.J.; Ioannou, B.N.

    1976-01-01

    A scintillation scanner is described having an automatic limit switch system for setting the limits of travel of the radiation detection device, which is carried by a scanning boom. The automatic limit switch system incorporates position-responsive circuitry for developing a signal representative of the position of the boom, reference signal circuitry for developing a signal representative of a selected limit of travel of the boom, and comparator circuitry for comparing these signals in order to control the operation of a boom drive and indexing mechanism. (author)

  14. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    Science.gov (United States)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview-plus-depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and is able to improve the subjective rendering quality.

  15. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    Science.gov (United States)

    Yang, Zili

    2017-07-01

    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most of the existing methods for full heart segmentation treat the heart as a whole and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to test the method, and accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.

  16. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    NARCIS (Netherlands)

    Weijers, G.; Starke, A.; Haudum, A.; Thijssen, J.M.; Rehage, J.; Korte, C.L. de

    2010-01-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty

  17. Research of an Automatic Control Method of NO Removal System by Silent Discharge

    Science.gov (United States)

    Kimura, Kouhei; Hayashi, Kenji; Yoshioka, Yoshio

    An automatic NOx control device was developed for a silent-discharge NOx removal system targeting a diesel engine generator. A new algorithm for controlling the exit NO concentration at specified values was developed. The control system was actually built in our laboratory, and it was confirmed that the exit NO concentration could be controlled at the specified value.
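
    The record does not describe the control algorithm itself, so the sketch below only illustrates the general idea of a feedback loop that drives the exit NO concentration to a set-point by adjusting the discharge power; the PI control law and all numerical values are assumptions, not the authors' algorithm.

```python
# Illustrative feedback loop for holding exit NO at a set-point by adjusting
# the silent-discharge power. The PI law and parameter values are assumptions.
def pi_controller(setpoint_ppm, kp=0.5, ki=0.05, dt=1.0,
                  power_min=0.0, power_max=100.0):
    integral = 0.0
    def step(measured_ppm):
        nonlocal integral
        error = measured_ppm - setpoint_ppm          # positive error -> need more removal
        integral += error * dt
        # Clamp the commanded discharge power to its physical range.
        return min(power_max, max(power_min, kp * error + ki * integral))
    return step

# Usage sketch:
# controller = pi_controller(setpoint_ppm=50.0)
# new_power = controller(measured_no_ppm)   # called once per sampling interval
```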

  18. Evaluation of an automatic MR-based gold fiducial marker localisation method for MR-only prostate radiotherapy

    Science.gov (United States)

    Maspero, Matteo; van den Berg, Cornelis A. T.; Zijlstra, Frank; Sikkes, Gonda G.; de Boer, Hans C. J.; Meijer, Gert J.; Kerkmeijer, Linda G. W.; Viergever, Max A.; Lagendijk, Jan J. W.; Seevinck, Peter R.

    2017-10-01

    An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is clinically performed manually. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on being able to detect signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. This method is based on template matching using a library of simulated complex-valued templates, and exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (STD) were 0.9 mm and 0.5 mm respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm3 and comparable to MR-based manual localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. This will result in a
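
    A hedged sketch of the core template-matching ingredient: scoring a complex-valued template against patches of a complex MR image with normalized correlation. The 2D setting, the brute-force search and the scoring function are simplifications of the paper's library-based 3D approach.

```python
# Sketch of localising a fiducial marker by matching a simulated complex-valued
# template against a complex MR image patch (illustrative simplification).
import numpy as np

def normalized_complex_correlation(image, template):
    """Map of |normalized correlation| between image patches and the template."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    H, W = image.shape
    score = np.zeros((H - th + 1, W - tw + 1))
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            if denom > 0:
                # np.vdot conjugates the first argument, i.e. complex correlation.
                score[i, j] = abs(np.vdot(t, p)) / denom
    return score

# The best-scoring template and position give a candidate marker location:
# i, j = np.unravel_index(np.argmax(score), score.shape)
```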

  19. Analytical laboratories method No. 4001 - automatic determination of U-235 wt% in a uranium matrix by gamma spectrometry

    International Nuclear Information System (INIS)

    1987-01-01

    This method is designed to automatically measure the U-235 concentration of various uranium-containing matrices (e.g., UO3, UF4, U3O8, sump samples, UNH, residues, etc.). Analyses are performed using a computer-controlled sample changer. The technique is applicable to samples ranging from 0.20 to 20.0 wt% U-235. A complete gamma spectrometric U-235 analysis can be performed in two hours or less.

  20. An automatic method to analyze the Capacity-Voltage and Current-Voltage curves of a sensor

    CERN Document Server

    AUTHOR|(CDS)2261553

    2017-01-01

    An automatic method to perform capacitance-versus-voltage analysis for all kinds of silicon sensors is provided. It successfully calculates the depletion voltage for unirradiated and irradiated sensors, including measurements with outliers or measurements reaching breakdown. It is built in C++ using ROOT trees, with a skeleton analogous to TRICS, in which the data as well as the results of the fits are saved for further analysis.
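
    A sketch of one standard way to extract the depletion voltage from a C-V curve: fit two straight lines to 1/C² versus bias voltage (rising region and plateau) and take their intersection. The split-point search and the absence of outlier handling are simplifications, and this is not necessarily the fit used by the tool described above.

```python
# Depletion-voltage extraction sketch from a C-V measurement (illustrative).
import numpy as np

def depletion_voltage(v, c):
    """v: bias voltages (V), c: measured capacitances; returns V_dep estimate."""
    y = 1.0 / c**2
    best = None
    for k in range(3, len(v) - 3):              # candidate split between regimes
        p1 = np.polyfit(v[:k], y[:k], 1)        # rising part of 1/C^2
        p2 = np.polyfit(v[k:], y[k:], 1)        # plateau after full depletion
        resid = (np.sum((np.polyval(p1, v[:k]) - y[:k])**2)
                 + np.sum((np.polyval(p2, v[k:]) - y[k:])**2))
        if best is None or resid < best[0]:
            best = (resid, p1, p2)
    _, p1, p2 = best
    return (p2[1] - p1[1]) / (p1[0] - p2[0])    # intersection of the two lines
```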

  1. A parallel code base on discontinuous Galerkin method on three dimensional unstructured meshes for MHD equations

    Science.gov (United States)

    Li, Xujing; Zheng, Weiying

    2016-10-01

    A new parallel code based on the discontinuous Galerkin (DG) method for hyperbolic conservation laws on three-dimensional unstructured meshes has recently been developed. This code can be used for simulations of the MHD equations, which are very important in magnetically confined plasma research. The main challenges in MHD simulations for fusion include the complex geometry of the configurations, such as plasma in tokamaks, the possibly discontinuous solutions, and large-scale computing. Our newly developed code is based on three-dimensional unstructured meshes, i.e. tetrahedra, which makes the code flexible for arbitrary geometries. Second-order polynomials are used on each element and an HWENO-type limiter is applied. The accuracy tests show that our scheme reaches the expected third-order accuracy, and the nonlinear shock test demonstrates that our code can capture sharp shock transitions. Moreover, one of the advantages of DG compared with classical finite element methods is that the matrices to be solved are localized on each element, which makes parallelization easy. Several simulations, including kink instabilities in toroidal geometry, will be presented here. Chinese National Magnetic Confinement Fusion Science Program 2015GB110003.

  2. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Full Text Available Background: Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results: We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion: We successfully created a
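
    A toy illustration of the affiliation-parsing idea in Python: pulling a crude institution field and a country out of a PubMed affiliation string with simple heuristics. The regular expression, the country list and the example string are invented for the sketch and are much simpler than the published parsing strategy.

```python
# Heuristic affiliation parsing sketch (illustrative placeholders throughout).
import re

COUNTRIES = {"USA", "United States", "United Kingdom", "France", "Germany",
             "Japan", "China", "Canada", "Spain", "Italy"}   # truncated example list

def parse_affiliation(affiliation):
    # Strip a trailing e-mail address, which often follows the country.
    affiliation = re.sub(r"\s*\S+@\S+\s*$", "", affiliation).rstrip(". ")
    parts = [p.strip() for p in affiliation.split(",")]
    institution = parts[0] if parts else ""     # crude: first comma-separated field
    country = next((c for c in COUNTRIES
                    if parts and parts[-1].startswith(c)), "unknown")
    return institution, country

# Example (hypothetical string):
# parse_affiliation("Department of Epidemiology, Emory University, Atlanta, GA, USA.")
# -> ("Department of Epidemiology", "USA")
```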

  3. Methods for CT automatic exposure control protocol translation between scanner platforms.

    Science.gov (United States)

    McKenney, Sarah E; Seibert, J Anthony; Lamba, Ramit; Boone, John M

    2014-03-01

    An imaging facility with a diverse fleet of CT scanners faces considerable challenges when propagating CT protocols with consistent image quality and patient dose across scanner makes and models. Although some protocol parameters can comfortably remain constant among scanners (eg, tube voltage, gantry rotation time), the automatic exposure control (AEC) parameter, which selects the overall mA level during tube current modulation, is difficult to match among scanners, especially from different CT manufacturers. Objective methods for converting tube current modulation protocols among CT scanners were developed. Three CT scanners were investigated, a GE LightSpeed 16 scanner, a GE VCT scanner, and a Siemens Definition AS+ scanner. Translation of the AEC parameters such as noise index and quality reference mAs across CT scanners was specifically investigated. A variable-diameter poly(methyl methacrylate) phantom was imaged on the 3 scanners using a range of AEC parameters for each scanner. The phantom consisted of 5 cylindrical sections with diameters of 13, 16, 20, 25, and 32 cm. The protocol translation scheme was based on matching either the volumetric CT dose index or image noise (in Hounsfield units) between two different CT scanners. A series of analytic fit functions, corresponding to different patient sizes (phantom diameters), were developed from the measured CT data. These functions relate the AEC metric of the reference scanner, the GE LightSpeed 16 in this case, to the AEC metric of a secondary scanner. When translating protocols between different models of CT scanners (from the GE LightSpeed 16 reference scanner to the GE VCT system), the translation functions were linear. However, a power-law function was necessary to convert the AEC functions of the GE LightSpeed 16 reference scanner to the Siemens Definition AS+ secondary scanner, because of differences in the AEC functionality designed by these two companies. Protocol translation on the basis of
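
    A sketch of fitting the translation functions described above, per phantom diameter: a linear fit for the same-vendor pair and a power-law fit for the cross-vendor pair. The data arrays in the usage comment are hypothetical values, not the measured ones.

```python
# Fit a translation function between the AEC metrics of two scanners
# (linear or power-law, as in the study); the numbers below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def fit_linear(ref_metric, sec_metric):
    a, b = np.polyfit(ref_metric, sec_metric, 1)
    return lambda x: a * x + b

def fit_power_law(ref_metric, sec_metric):
    model = lambda x, a, b: a * np.power(x, b)
    (a, b), _ = curve_fit(model, ref_metric, sec_metric, p0=(1.0, 1.0))
    return lambda x: a * np.power(x, b)

# Hypothetical example for one phantom diameter: reference-scanner noise index
# vs. secondary-scanner quality reference mAs.
# ref = np.array([10, 15, 20, 30, 40]); sec = np.array([400, 260, 190, 120, 90])
# translate = fit_power_law(ref, sec); translate(25.0)
```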

  4. Automatic Residential/Commercial Classification of Parcels with Solar Panel Detections

    Energy Technology Data Exchange (ETDEWEB)

    2018-03-25

    A computational method to automatically detect solar panels on rooftops to aid policy and financial assessment of solar distributed generation. The code automatically classifies parcels containing solar panels in the U.S. as residential or commercial. The code allows the user to specify an input dataset containing parcels and detected solar panels, and then uses information about the parcels and solar panels to automatically classify the rooftops as residential or commercial using machine learning techniques. The zip file containing the code includes sample input and output datasets for the Boston and DC areas.

  5. Automatic differentiation for gradient-based optimization of radiatively heated microelectronics manufacturing equipment

    Energy Technology Data Exchange (ETDEWEB)

    Moen, C.D.; Spence, P.A.; Meza, J.C.; Plantenga, T.D.

    1996-12-31

    Automatic differentiation is applied to the optimal design of microelectronic manufacturing equipment. The performance of nonlinear, least-squares optimization methods is compared between numerical and analytical gradient approaches. The optimization calculations are performed by running large finite-element codes in an object-oriented optimization environment. The Adifor automatic differentiation tool is used to generate analytic derivatives for the finite-element codes. The performance results support previous observations that automatic differentiation becomes beneficial as the number of optimization parameters increases. The increase in speed, relative to numerical differences, has a limited value and results are reported for two different analysis codes.
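
    To make the analytic-gradient idea concrete, here is a toy forward-mode automatic differentiation sketch using dual numbers in Python. Adifor itself works by source transformation of Fortran codes at much larger scale; none of that is reproduced here.

```python
# Toy forward-mode automatic differentiation with dual numbers (illustrative).
import math

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def d_sin(x):            # sin extended to dual numbers
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

# d/dx of f(x) = x*sin(x) + 3x at x = 2.0, without finite differences:
x = Dual(2.0, 1.0)                      # seed the derivative of the input
f = x * d_sin(x) + 3 * x
# f.value -> 2*sin(2) + 6
# f.deriv -> sin(2) + 2*cos(2) + 3   (the analytic derivative)
```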

  6. Stochastic methods for uncertainty treatment of functional variables in computer codes: application to safety studies

    International Nuclear Information System (INIS)

    Nanty, Simon

    2015-01-01

    This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second is that the probability distribution of the functional variables is known only through a sample of their realizations. The third feature, relevant to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, a methodology is proposed to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between variables and to model their link to another variable, called a covariate, which could be, for instance, the output of the considered code. Then, an adaptation of a visualization tool for functional data has been developed, which enables simultaneous visualization of the uncertainties and features of dependent functional variables. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists of building a surrogate model, or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the metamodel. Finally, a new approximation approach for expensive codes with functional outputs has been

  7. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment

    Science.gov (United States)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2017-12-01

    The paper focuses on two methods for evaluating the success of enhancement of speech signals recorded in the open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The experiments performed have confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening-test method.

  8. A lossless compression method for medical image sequences using JPEG-LS and interframe coding.

    Science.gov (United States)

    Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching

    2009-09-01

    Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines the JPEG-LS and an interframe coding with motion vectors to enhance the compression performance of using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video image sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.
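
    A sketch of the adaptive switching logic described above: interframe coding is used only when two adjacent frames are sufficiently correlated, otherwise plain intraframe JPEG-LS is used. The correlation threshold is an assumed value, the codec is passed in as a callable rather than implemented, and the residual here is a simple frame difference rather than the paper's motion-compensated prediction.

```python
# Adaptive intra/inter coding switch sketch (simplified; no motion vectors).
import numpy as np

CORRELATION_THRESHOLD = 0.9   # assumed cut-off for "high enough" correlation

def encode_sequence(frames, jpegls_encode):
    """frames: iterable of 2D arrays; jpegls_encode: user-supplied JPEG-LS codec."""
    streams, prev = [], None
    for frame in frames:
        use_inter = False
        if prev is not None:
            corr = np.corrcoef(prev.ravel(), frame.ravel())[0, 1]
            use_inter = corr >= CORRELATION_THRESHOLD
        if use_inter:
            residual = frame.astype(np.int32) - prev.astype(np.int32)
            streams.append(("inter", jpegls_encode(residual)))
        else:
            streams.append(("intra", jpegls_encode(frame)))
        prev = frame
    return streams
```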

  9. Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers

    Energy Technology Data Exchange (ETDEWEB)

    Cole, P. C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Halverson, M. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-09-01

    This guidance document was prepared using the input from the meeting summarized in the draft CSI Roadmap to provide Building America research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America innovations arising in and/or stemming from codes, standards, and rating methods.

  10. SQA of finite element method (FEM) codes used for analyses of pit storage/transport packages

    Energy Technology Data Exchange (ETDEWEB)

    Russel, E. [Lawrence Livermore National Lab., CA (United States)

    1997-11-01

    This report contains viewgraphs on the software quality assurance of finite element method codes used for analyses of pit storage and transport projects. The methodology utilizes ISO 9000-3 (guidelines for the application of ISO 9001 to the development, supply, and maintenance of software) to establish well-defined software engineering processes that consistently maintain high-quality management approaches.

  11. A multiparametric automatic method to monitor long-term reproducibility in digital mammography: results from a regional screening programme.

    Science.gov (United States)

    Gennaro, G; Ballaminut, A; Contento, G

    2017-09-01

    This study aims to illustrate a multiparametric automatic method for monitoring long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. Variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for reproducibility tests (multi-detail phantoms, a cloud-based automatic software tool measuring multiple image quality indices, and statistical process control) was proven to be effective and applicable on a large scale and to any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed comparing current index value with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.
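
    A small sketch of the reproducibility statistics implied above: for each image quality index (IQI), the coefficient of variation over the weekly phantom measurements and the relative shift from the baseline value. Array shapes and field names are illustrative, not those of the actual software tool.

```python
# Reproducibility summary sketch: COV per IQI and shift from baseline values.
import numpy as np

def reproducibility_report(weekly_iqi, baseline_iqi, names):
    """weekly_iqi: (n_weeks, n_indices) array; baseline_iqi: (n_indices,) array."""
    cov = weekly_iqi.std(axis=0, ddof=1) / weekly_iqi.mean(axis=0) * 100.0
    rel_shift = (weekly_iqi.mean(axis=0) - baseline_iqi) / baseline_iqi * 100.0
    return {name: {"COV_%": c, "shift_from_baseline_%": s}
            for name, c, s in zip(names, cov, rel_shift)}
```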

  12. A Review of Automatic Methods Based on Image Processing Techniques for Tuberculosis Detection from Microscopic Sputum Smear Images.

    Science.gov (United States)

    Panicker, Rani Oomman; Soman, Biju; Saini, Gagan; Rajan, Jeny

    2016-01-01

    Tuberculosis (TB) is an infectious disease caused by the bacteria Mycobacterium tuberculosis. It primarily affects the lungs, but it can also affect other parts of the body. TB remains one of the leading causes of death in developing countries, and its recent resurgences in both developed and developing countries warrant global attention. The number of deaths due to TB is very high (as per the WHO report, 1.5 million died in 2013), although most are preventable if diagnosed early and treated. There are many tools for TB detection, but the most widely used one is sputum smear microscopy. It is done manually and is often time consuming; a laboratory technician is expected to spend at least 15 min per slide, limiting the number of slides that can be screened. Many countries, including India, have a dearth of properly trained technicians, and they often fail to detect TB cases due to the stress of a heavy workload. Automatic methods are generally considered as a solution to this problem. Attempts have been made to develop automatic approaches to identify TB bacteria from microscopic sputum smear images. In this paper, we provide a review of automatic methods based on image processing techniques published between 1998 and 2014. The review shows that the accuracy of algorithms for the automatic detection of TB increased significantly over the years and gladly acknowledges that commercial products based on published works also started appearing in the market. This review could be useful to researchers and practitioners working in the field of TB automation, providing a comprehensive and accessible overview of methods of this field of research.

  13. Control method and device for automatic drift stabilization in radiation detection

    International Nuclear Information System (INIS)

    Berthold, F.; Kubisiak, H.

    1979-01-01

    In the automatic control circuit individual electron peaks in the detectors, e.g. NaI crystals or proportional counters, are used. These peaks exhibit no drift dependence; they may be produced in the detectors in different ways. The control circuit may be applied in nuclear radiation measurement techniques, photometry, gamma cameras and for measuring the X-ray fine structure with proportional counters. (DG) [de

  14. Datasets of Odontocete Sounds Annotated for Developing Automatic Detection Methods, FY09-10

    Science.gov (United States)

    2012-09-01

    automatic call detection and classification; make them publicly available in an archive on the Internet; continue developing and publishing detection and ... out of 85 glider dives. Manual analysis revealed that 7 of these detections were actual beaked whale encounters. During the other 3 glider dives ... 28 Sept.-1 Oct. 2011. Spatially explicit capture-recapture minke whale density estimation. Proc. XIX Congresso Anual da Sociedade Portuguesa de

  15. High-accuracy automatic classification of Parkinsonian tremor severity using machine learning method.

    Science.gov (United States)

    Jeon, Hyoseon; Lee, Woongwoo; Park, Hyeyoung; Lee, Hong Ji; Kim, Sang Kyong; Kim, Han Byul; Jeon, Beomseok; Park, Kwang Suk

    2017-10-31

    Although clinical aspirations for new technology to accurately measure and diagnose Parkinsonian tremors exist, automatic scoring of tremor severity using machine learning approaches has not yet been employed. This study aims to maximize the scientific validity of automatic tremor-severity classification using machine learning algorithms to score Parkinsonian tremor severity in the same manner as the unified Parkinson's disease rating scale (UPDRS) used to rate scores in real clinical practice. Eighty-five PD patients perform four tasks for severity assessment of their resting, resting with mental stress, postural, and intention tremors. The tremor signals are measured using a wristwatch-type wearable device with an accelerometer and gyroscope. Displacement and angle signals are obtained by integrating the acceleration and angular-velocity signals. Nineteen features are extracted from each of the four tremor signals. The optimal feature configuration is decided using the wrapper feature selection algorithm or principal component analysis, and decision tree, support vector machine, discriminant analysis, and k-nearest neighbour algorithms are considered to develop an automatic scoring system for UPDRS prediction. The results are compared to UPDRS ratings assigned by two neurologists. The highest accuracies are 92.3%, 86.2%, 92.1%, and 89.2% for resting, resting with mental stress, postural, and intention tremors, respectively. The weighted Cohen's kappa values are 0.745, 0.635 and 0.633 for resting, resting with mental stress, and postural tremors (almost perfect agreement), and 0.570 for intention tremors (moderate). These results indicate the feasibility of the proposed system as a clinical decision tool for Parkinsonian tremor-severity automatic scoring.
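
    A compact sketch of the classification stage in Python/scikit-learn: standardize the extracted features, train a classifier, and compare predicted severity scores against the neurologists' UPDRS ratings with accuracy and weighted Cohen's kappa. The SVM choice, the cross-validation setup and the feature arrays are placeholders for the fuller model search described in the abstract.

```python
# Tremor-severity scoring sketch (one classifier stands in for the full search).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score

def score_tremor(features, updrs_labels):
    """features: (n_patients, n_features) array; updrs_labels: integer severity scores."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    predicted = cross_val_predict(model, features, updrs_labels, cv=5)
    return (accuracy_score(updrs_labels, predicted),
            cohen_kappa_score(updrs_labels, predicted, weights="quadratic"))
```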

  16. Photogrammetric Model Based Method of Automatic Orientation of Space Cargo Ship Relative to the International Space Station

    Science.gov (United States)

    Blokhinov, Y. B.; Chernyavskiy, A. S.; Zheltov, S. Y.

    2012-07-01

    The technical problem of creating the new Russian version of an automatic Space Cargo Ship (SCS) for the International Space Station (ISS) is inseparably connected to the development of a digital video system for automatically measuring the SCS position relative to ISS in the process of spacecraft docking. This paper presents a method for estimating the orientation elements based on the use of a highly detailed digital model of the ISS. The input data are digital frames from a calibrated video system and the initial values of orientation elements, these can be estimated from navigation devices or by fast-and-rough viewpoint-dependent algorithm. Then orientation elements should be defined precisely by means of algorithmic processing. The main idea is to solve the exterior orientation problem mainly on the basis of contour information of the frame image of ISS instead of ground control points. A detailed digital model is used for generating raster templates of ISS nodes; the templates are used to detect and locate the nodes on the target image with the required accuracy. The process is performed for every frame, the resulting parameters are considered to be the orientation elements. The Kalman filter is used for statistical support of the estimation process and real time pose tracking. Finally, the modeling results presented show that the proposed method can be regarded as one means to ensure the algorithmic support of automatic space ships docking.

  17. PHOTOGRAMMETRIC MODEL BASED METHOD OF AUTOMATIC ORIENTATION OF SPACE CARGO SHIP RELATIVE TO THE INTERNATIONAL SPACE STATION

    Directory of Open Access Journals (Sweden)

    Y. B. Blokhinov

    2012-07-01

    Full Text Available The technical problem of creating the new Russian version of an automatic Space Cargo Ship (SCS) for the International Space Station (ISS) is inseparably connected to the development of a digital video system for automatically measuring the SCS position relative to ISS in the process of spacecraft docking. This paper presents a method for estimating the orientation elements based on the use of a highly detailed digital model of the ISS. The input data are digital frames from a calibrated video system and the initial values of orientation elements, these can be estimated from navigation devices or by fast-and-rough viewpoint-dependent algorithm. Then orientation elements should be defined precisely by means of algorithmic processing. The main idea is to solve the exterior orientation problem mainly on the basis of contour information of the frame image of ISS instead of ground control points. A detailed digital model is used for generating raster templates of ISS nodes; the templates are used to detect and locate the nodes on the target image with the required accuracy. The process is performed for every frame, the resulting parameters are considered to be the orientation elements. The Kalman filter is used for statistical support of the estimation process and real time pose tracking. Finally, the modeling results presented show that the proposed method can be regarded as one means to ensure the algorithmic support of automatic space ships docking.

  18. Automatic mesh refinement and local multigrid methods for contact problems: application to the Pellet-Cladding mechanical Interaction

    International Nuclear Information System (INIS)

    Liu, Hao

    2016-01-01

    This Ph.D. work takes place within the framework of studies on Pellet-Cladding mechanical Interaction (PCI), which occurs in the fuel rods of pressurized water reactors. This manuscript focuses on automatic mesh refinement to simulate this phenomenon more accurately while maintaining acceptable computational time and memory space for industrial calculations. An automatic mesh refinement strategy based on the combination of the Local Defect Correction multigrid method (LDC) with the Zienkiewicz and Zhu a posteriori error estimator is proposed. The estimated error is used to detect the zones to be refined, where the local sub-grids of the LDC method are generated. Several stopping criteria are studied to end the refinement process when the solution is accurate enough or when the refinement no longer improves the global solution accuracy. Numerical results for elastic 2D test cases with pressure discontinuity show the efficiency of the proposed strategy. The automatic mesh refinement in the case of unilateral contact problems is then considered. The strategy previously introduced can be easily adapted to multi-body refinement by estimating the solution error on each body separately. Post-processing is often necessary to ensure the conformity of the refined areas with respect to the contact boundaries. A variety of numerical experiments with elastic contact (with or without friction, with or without an initial gap) confirms the efficiency and adaptability of the proposed strategy. (author) [fr

  19. TMCC: a transient three-dimensional neutron transport code by the direct simulation method - 222

    International Nuclear Information System (INIS)

    Shen, H.; Li, Z.; Wang, K.; Yu, G.

    2010-01-01

    A direct simulation method (DSM) is applied to solve transient three-dimensional neutron transport problems. DSM is based on the Monte Carlo method, and can be considered an application of the Monte Carlo method to this specific type of problem. In this work, the transient neutronics problem is solved by simulating the dynamic behavior of neutrons and of delayed-neutron precursors during the transient process. DSM dispenses with various approximations that are necessary in other methods, so it is precise and flexible with respect to geometric configurations, material compositions and energy spectra. In this paper, the theory of DSM is introduced first, and the numerical results obtained with the new transient analysis code, named TMCC (Transient Monte Carlo Code), are presented. (authors)

  20. Probability-neighbor method of accelerating geometry treatment in reactor Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Li, Zeguang; Xu, Qi; Wang, Kan; Yu, Ganglin

    2011-01-01

    The probability neighbor method (PNM) is proposed in this paper to accelerate the geometry treatment of Monte Carlo (MC) simulation, and is validated in the self-developed reactor Monte Carlo code RMC. During MC simulation by either the ray-tracking or the delta-tracking method, a large amount of time is spent finding which cell a particle is located in. The traditional way is to search the cells one by one in a previously defined sequence. However, this procedure becomes very time-consuming when the system contains a large number of cells. Considering that particles have different probabilities of entering different cells, the PNM optimizes the search sequence, i.e., the cells with larger probability are searched preferentially. The PNM is implemented in the RMC code, and the numerical results show that considerable geometry-treatment time is saved in MC calculations of complicated systems; the method is especially effective in delta-tracking simulation. (author)
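
    A minimal sketch of the probability-neighbour idea: order each cell's candidate neighbours by how often particles have been observed to enter them, and try the most likely cells first when locating a particle. The data structures and the point-in-cell test are placeholders, not RMC's internal geometry kernel.

```python
# Probability-ordered cell search sketch (illustrative data structures).
def build_search_order(transition_counts):
    """transition_counts[cell] -> {neighbour_cell: times entered from `cell`}."""
    return {cell: sorted(counts, key=counts.get, reverse=True)
            for cell, counts in transition_counts.items()}

def locate(point, current_cell, search_order, contains):
    """Return the cell containing `point`, trying the likeliest neighbours first."""
    for cell in search_order.get(current_cell, []):
        if contains(cell, point):
            return cell
    return None   # the real code would fall back to an exhaustive search here
```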

  1. GRS Method for Uncertainty and Sensitivity Evaluation of Code Results and Applications

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    In recent years, there has been increasing interest in computational reactor safety analysis in replacing conservative evaluation model calculations with best-estimate calculations supplemented by uncertainty analysis of the code results. The evaluation of the margin to acceptance criteria, for example, the maximum fuel rod clad temperature, should be based on the upper limit of the calculated uncertainty range. Uncertainty analysis is needed if useful conclusions are to be obtained from best-estimate thermal-hydraulic code calculations; otherwise single values of unknown accuracy would be presented for comparison with regulatory acceptance limits. Methods have been developed and presented to quantify the uncertainty of computer code results. The basic techniques proposed by GRS are presented together with applications to a large break loss of coolant accident on a reference reactor as well as on an experiment simulating containment behaviour.

  2. Artificial viscosity method for the design of supercritical airfoils. [Analysis code H

    Energy Technology Data Exchange (ETDEWEB)

    McFadden, G.B.

    1979-07-01

    The need for increased efficiency in the use of our energy resources has stimulated applied research in many areas. Recently progress has been made in the field of aerodynamics, where the development of the supercritical wing promises significant savings in the fuel consumption of aircraft operating near the speed of sound. Computational transonic aerodynamics has proved to be a useful tool in the design and evaluation of these wings. A numerical technique for the design of two-dimensional supercritical wing sections with low wave drag is presented. The method is actually a design mode of the analysis code H developed by Bauer, Garabedian, and Korn. This analysis code gives excellent agreement with experimental results and is used widely by the aircraft industry. The addition of a conceptually simple design version should make this code even more useful to the engineering public.

  3. Building America Guidance for Identifying and Overcoming Code, Standard, and Rating Method Barriers

    Energy Technology Data Exchange (ETDEWEB)

    Cole, Pamala C.; Halverson, Mark A.

    2013-09-01

    The U.S. Department of Energy’s (DOE) Building America program implemented a new Codes and Standards Innovation (CSI) Team in 2013. The Team’s mission is to assist Building America (BA) research teams and partners in identifying and resolving conflicts between Building America innovations and the various codes and standards that govern the construction of residences. A CSI Roadmap was completed in September, 2013. This guidance document was prepared using the information in the CSI Roadmap to provide BA research teams and partners with specific information and approaches to identifying and overcoming potential barriers to Building America (BA) innovations arising in and/or stemming from codes, standards, and rating methods. For more information on the BA CSI team, please email: CSITeam@pnnl.gov

  4. A two-level space-time color-coding method for 3D measurements using structured light

    International Nuclear Information System (INIS)

    Xue, Qi; Wang, Zhao; Huang, Junhui; Gao, Jianmin; Qi, Zhaoshuai

    2015-01-01

    Color-coding methods have significantly improved the measurement efficiency of structured light systems. However, some problems, such as color crosstalk and chromatic aberration, decrease the measurement accuracy of the system. A two-level space-time color-coding method is thus proposed in this paper. The method, which includes a space-code level and a time-code level, is shown to be reliable and efficient. The influence of chromatic aberration is completely mitigated when using this method. Additionally, a self-adaptive windowed Fourier transform is used to eliminate all color crosstalk components. Theoretical analyses and experiments have shown that the proposed coding method solves the problems of color crosstalk and chromatic aberration effectively. Additionally, the method guarantees high measurement accuracy which is very close to the measurement accuracy using monochromatic coded patterns. (paper)

  5. Prediction of Protein Coding Regions Using a Wide-Range Wavelet Window Method.

    Science.gov (United States)

    Marhon, Sajid A; Kremer, Stefan C

    2016-01-01

    Prediction of protein coding regions is an important topic in the field of genomic sequence analysis. Several spectrum-based techniques for the prediction of protein coding regions have been proposed. However, the outstanding issue in most of the proposed techniques is that these techniques depend on an experimentally-selected, predefined value of the window length. In this paper, we propose a new Wide-Range Wavelet Window (WRWW) method for the prediction of protein coding regions. The analysis of the proposed wavelet window shows that its frequency response can adapt its width to accommodate the change in the window length so that it can allow or prevent frequencies other than the basic frequency in the analysis of DNA sequences. This feature makes the proposed window capable of analyzing DNA sequences with a wide range of the window lengths without degradation in the performance. The experimental analysis of applying the WRWW method and other spectrum-based methods to five benchmark datasets has shown that the proposed method outperforms other methods along a wide range of the window lengths. In addition, the experimental analysis has shown that the proposed method is dominant in the prediction of both short and long exons.
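
    For context, the sketch below computes the classical fixed-window period-3 spectral measure that window-based coding-region predictors build on: indicator sequences for the four bases and the Fourier power at frequency 1/3 inside a sliding window. It is not the WRWW filter itself, and the window length and step are illustrative values only.

```python
# Classical period-3 spectral measure for DNA coding-region prediction (sketch).
import numpy as np

def period3_power(seq, window=351, step=3):
    """Return (start, power) pairs; high power suggests a protein-coding region."""
    seq = seq.upper()
    indicators = {b: np.array([1.0 if s == b else 0.0 for s in seq])
                  for b in "ACGT"}
    scores = []
    k = window // 3                                # DFT bin at frequency 1/3
    for start in range(0, len(seq) - window + 1, step):
        power = sum(abs(np.fft.fft(u[start:start + window])[k]) ** 2
                    for u in indicators.values())
        scores.append((start, power))
    return scores
```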

  6. Coding Methods for the NMF Approach to Speech Recognition and Vocabulary Acquisition

    Directory of Open Access Journals (Sweden)

    Meng Sun

    2012-12-01

    Full Text Available This paper aims at improving the accuracy of the non-negative matrix factorization approach to word learning and recognition of spoken utterances. We propose and compare three coding methods to alleviate quantization errors involved in the vector quantization (VQ) of speech spectra: multi-codebooks, soft VQ and adaptive VQ. We evaluate on the task of spotting a vocabulary of 50 keywords in continuous speech. The error rates of multi-codebooks decreased with increasing number of codebooks, but the accuracy leveled off around 5 to 10 codebooks. Soft VQ and adaptive VQ made a better trade-off between the required memory and the accuracy. The best of the proposed methods reduce the error rate to 1.2% from the 1.9% obtained with a single codebook. The coding methods and the model framework may also prove useful for applications such as topic discovery/detection and mining of sequential patterns.
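
    A sketch of the soft VQ idea in Python: rather than activating only the nearest codeword, each frame spreads normalized weights over its few closest codewords, reducing the quantization error fed into the NMF model. The Gaussian kernel width and the top-k value are assumptions, not the paper's settings.

```python
# Soft vector quantization sketch: weighted activations over nearby codewords.
import numpy as np

def soft_vq(frames, codebook, top_k=5, sigma=1.0):
    """frames: (T, d) spectra; codebook: (K, d); returns (T, K) soft activations."""
    acts = np.zeros((frames.shape[0], codebook.shape[0]))
    for t, x in enumerate(frames):
        d2 = np.sum((codebook - x) ** 2, axis=1)      # squared distances to codewords
        nearest = np.argsort(d2)[:top_k]
        w = np.exp(-d2[nearest] / (2.0 * sigma ** 2))
        acts[t, nearest] = w / w.sum()                # normalized soft assignment
    return acts
```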

  7. Introduction into scientific work methods-a necessity when performance-based codes are introduced

    DEFF Research Database (Denmark)

    Dederichs, Anne; Sørensen, Lars Schiøtt

    The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in industry and the public sector. This abstract presents an attempt to reduce problems with handling and analysing the mathematical methods and CFD models used when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the Department of Civil Engineering at the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new educational moment is introduced as a result of this investigation. The course is positioned in the program prior to the work on the final project. In the course a mini project is worked out, which provides the students with extra training in academic methods.

  8. An adaptive and fully automatic method for estimating the 3D position of bendable instruments using endoscopic images.

    Science.gov (United States)

    Cabras, Paolo; Nageotte, Florent; Zanne, Philippe; Doignon, Christophe

    2017-12-01

    Flexible bendable instruments are key tools for performing surgical endoscopy. Being able to measure the 3D position of such instruments can be useful for various tasks, such as controlling automatically robotized instruments and analyzing motions. An automatic method is proposed to infer the 3D pose of a single bending section instrument, using only the images provided by a monocular camera embedded at the tip of the endoscope. The proposed method relies on colored markers attached onto the bending section. The image of the instrument is segmented using a graph-based method and the corners of the markers are extracted by detecting the color transitions along Bézier curves fitted on edge points. These features are accurately located and then used to estimate the 3D pose of the instrument using an adaptive model that takes into account the mechanical play between the instrument and its housing channel. The feature extraction method provides good localization of marker corners with images of the in vivo environment despite sensor saturation due to strong lighting. The RMS error on estimation of the tip position of the instrument for laboratory experiments was 2.1, 1.96, and 3.18 mm in the x, y and z directions, respectively. Qualitative analysis in the case of in vivo images shows the ability to correctly estimate the 3D position of the instrument tip during real motions. The proposed method provides an automatic and accurate estimation of the 3D position of the tip of a bendable instrument in realistic conditions, where standard approaches fail. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Application of software quality assurance methods in validation and maintenance of reactor analysis computer codes

    International Nuclear Information System (INIS)

    Reznik, L.

    1994-01-01

    Various computer codes employed at Israel Electricity Company for preliminary reactor design analysis and fuel cycle scoping calculations have often been subject to program source modifications. Although most changes were due to computer or operating system compatibility problems, a number of significant modifications were due to model improvement and enhancements of algorithm efficiency and accuracy. With growing acceptance of software quality assurance requirements and methods, a program of extensive testing of modified software has been adopted within the regular maintenance activities. In this work, a survey has been performed of various software quality assurance methods for software testing, which belong mainly to the two major categories of implementation-based ('white box') and specification-based ('black box') testing. The results of this survey exhibit a clear preference for specification-based testing. In particular, the equivalence class partitioning method and the boundary value method have been selected as especially suitable functional methods for testing reactor analysis codes. A separate study of software quality assurance methods and techniques has been performed in this work with the objective of establishing appropriate pre-test software specification methods. Two methods of software analysis and specification have been selected as the most suitable for this purpose: the method of data flow diagrams has been shown to be particularly valuable for performing the functional/procedural software specification, while entity-relationship diagrams have proved to be efficient for specifying the software data/information domain. The feasibility of these two methods has been analyzed, in particular for software uncertainty analysis and overall code accuracy estimation. (author). 14 refs
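
    A toy illustration of the two selected functional testing methods, equivalence class partitioning and boundary value analysis, applied to a hypothetical input check; the routine under test and its limits are invented for the example.

```python
# Equivalence classes and boundary values for a hypothetical input validator.
def validate_enrichment(wt_percent):
    """Invented check: enrichment must lie in [0.0, 5.0] wt%."""
    return 0.0 <= wt_percent <= 5.0

# Equivalence class partitioning: one representative per class
# (below range, inside range, above range).
assert validate_enrichment(-1.0) is False
assert validate_enrichment(2.5) is True
assert validate_enrichment(7.0) is False

# Boundary value analysis: exercise the edges of each class.
for value, expected in [(-0.001, False), (0.0, True), (5.0, True), (5.001, False)]:
    assert validate_enrichment(value) is expected
```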

  10. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    Gasco, C.; Anton, M. P.; Ampudia, J.

    2003-01-01

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology Group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the data base. (Author) 4 refs
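
    As an illustration of the kind of model such macros automate, the sketch below implements the constant rate of supply (CRS) 210Pb dating calculation; the record does not state which models the spreadsheet includes, so the choice of model and the input conventions are assumptions for this example.

```python
# CRS (constant rate of supply) 210Pb dating sketch (illustrative conventions).
import numpy as np

PB210_HALF_LIFE_YR = 22.3
DECAY_CONSTANT = np.log(2.0) / PB210_HALF_LIFE_YR        # 1/yr

def crs_ages(cum_dry_mass, unsupported_activity):
    """cum_dry_mass: cumulative dry mass at the bottom of each core section (g/cm^2);
    unsupported_activity: unsupported 210Pb of each section (Bq/g).
    Returns the CRS age (years before coring) at the bottom of each section."""
    mass_per_section = np.diff(np.concatenate(([0.0], cum_dry_mass)))
    inventory_per_section = unsupported_activity * mass_per_section   # Bq/cm^2
    total = inventory_per_section.sum()
    below = total - np.cumsum(inventory_per_section)   # inventory beneath each section
    with np.errstate(divide="ignore"):
        return np.log(total / below) / DECAY_CONSTANT   # t = ln(A(0)/A(z)) / lambda
```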

  11. Automatic Detection of Microaneurysms in Color Fundus Images using a Local Radon Transform Method

    OpenAIRE

    Hamid Reza Pourreza; Mohammad Hossein Bahreyni Toossi; Alireza Mehdizadeh; Reza Pourreza; Meysam Tavakoli

    2009-01-01

    Introduction: Diabetic retinopathy (DR) is one of the most serious and most frequent eye diseases in the world and the most common cause of blindness in adults between 20 and 60 years of age. Following 15 years of diabetes, about 2% of the diabetic patients are blind and 10% suffer from vision impairment due to DR complications. This paper addresses the automatic detection of microaneurysms (MA) in color fundus images, which plays a key role in computer-assisted early diagnosis of diabetic re...

  12. Study on the Automatic Detection Method and System of Multifunctional Hydrocephalus Shunt

    Science.gov (United States)

    Sun, Xuan; Wang, Guangzhen; Dong, Quancheng; Li, Yuzhong

    2017-07-01

    Addressing the difficulties of micro-pressure detection and micro-flow control in the testing of hydrocephalus shunts, the principle of shunt performance detection was analyzed. In this study, the authors analyzed the principles of several shunt performance tests, and used an advanced micro-pressure sensor and a micro-flow peristaltic pump to overcome the difficulties of micro-pressure detection and micro-flow control. At the same time, the study integrated many common experimental items and successfully developed an automatic detection system for shunt performance, achieving testing with high precision, high efficiency and automation.

  13. Alignment-based and alignment-free methods converge with experimental data on amino acids coded by stop codons at split between nuclear and mitochondrial genetic codes.

    Science.gov (United States)

    Seligmann, Hervé

    2018-04-03

    Genetic codes mainly evolve by reassigning punctuation codons, starts and stops. Previous analyses assuming that undefined amino acids translate stops showed greater divergence between nuclear and mitochondrial genetic codes. Here, three independent methods converge on which amino acids translated stops at split between nuclear and mitochondrial genetic codes: (a) alignment-free genetic code comparisons inserting different amino acids at stops; (b) alignment-based blast analyses of hypothetical peptides translated from non-coding mitochondrial sequences, inserting different amino acids at stops; (c) biases in amino acid insertions at stops in proteomic data. Hence short-term protein evolution models reconstruct long-term genetic code evolution. Mitochondria reassign stops to amino acids otherwise inserted at stops by codon-anticodon mismatches (near-cognate tRNAs). Hence dual function (translation termination and translation by codon-anticodon mismatch) precedes mitochondrial reassignments of stops to amino acids. Stop ambiguity increases coded information, compensates endocellular mitogenome reduction. Mitochondrial codon reassignments might prevent viral infections. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Automatic conversion of CAD model into neutronics model

    International Nuclear Information System (INIS)

    Hu Haimin; Wu Yican; Chen Mingliang; Zheng Shanliang; Zeng Qin; Ding Aiping; Li Ying

    2007-01-01

    It is a time-consuming and error-prone task to prepare a neutronics model for the discrete ordinates transport codes (SN codes) manually. A more efficient solution is presented in this paper: geometric modeling is shifted to a computer-aided design (CAD) system, and an interface program for SN codes converts the CAD model into a neutronics model and then generates the input file of the SN code automatically. The detailed conversion method is described and some kernel algorithms are implemented in SNAM, an interface program between the CAD system and SN codes. The method has been used to convert the ITER benchmark model into an SN code input file successfully. It is shown that the conversion method is a correct, efficient and promising solution for SN code modelling. (author)

  15. The sensitivity analysis by adjoint method for the uncertainty evaluation of the CATHARE-2 code

    Energy Technology Data Exchange (ETDEWEB)

    Barre, F.; de Crecy, A.; Perret, C. [French Atomic Energy Commission (CEA), Grenoble (France)

    1995-09-01

    This paper presents the application of the DASM (Discrete Adjoint Sensitivity Method) to the CATHARE 2 thermal-hydraulics code. In the first part, the basis of this method is presented. The mathematical model of the CATHARE 2 code is based on the two-fluid six-equation model. It is discretized using implicit time discretization, and it is relatively easy to implement this method in the code. The DASM is the ASM applied directly to the algebraic system of the discretized code equations, which has been demonstrated to be the only solution of the mathematical model. The ASM is an integral part of the new version 1.4 of CATHARE. It acts as a post-processing module. It has been qualified by comparison with the 'brute force' technique. In the second part, an application of the DASM in CATHARE 2 is presented. It deals with the determination of the uncertainties of the constitutive relationships, which is a compulsory step for calculating the final uncertainty of a given response. First, the general principles of the method are explained: the constitutive relationships are represented by several parameters and the aim is to calculate the variance-covariance matrix of these parameters. The experimental results of the separate-effect tests used to establish the correlations are considered. The variance of the corresponding results calculated by CATHARE is estimated by comparing experiment and calculation. A DASM calculation is carried out to provide the derivatives of the responses. The final covariance matrix is obtained by combining the variance of the responses and those derivatives. Then, the application of this method to a simple case, the blowdown Canon experiment, is presented. This application has been successfully performed.

  16. Benchmarking of epithermal methods in the lattice-physics code EPRI-CELL

    International Nuclear Information System (INIS)

    Williams, M.L.; Wright, R.Q.; Barhen, J.; Rothenstein, W.; Toney, B.

    1982-01-01

    The epithermal cross section shielding methods used in the lattice physics code EPRI-CELL (E-C) have been extensively studied to determine their major approximations and to examine the sensitivity of computed results to these approximations. The study has resulted in several improvements in the original methodology. These include: treatment of the external moderator source with intermediate resonance (IR) theory, development of a new Dancoff factor expression to account for clad interactions, development of a new method for treating resonance interference, and application of a generalized least squares method to compute best-estimate values for the Bell factor and group-dependent IR parameters. The modified E-C code with its new ENDF/B-V cross section library is tested for several numerical benchmark problems. Integral parameters computed by E-C are compared with those obtained with point-cross section Monte Carlo calculations, and E-C fine-group cross sections are benchmarked against point-cross section discrete ordinates calculations. It is found that the code modifications improve agreement between E-C and the more sophisticated methods. E-C shows excellent agreement on the integral parameters and usually agrees within a few percent on fine-group, shielded cross sections.

  17. A novel quantum LSB-based steganography method using the Gray code for colored quantum images

    Science.gov (United States)

    Heidari, Shahrokh; Farzadnia, Ehsan

    2017-10-01

    As one of the prevalent data-hiding techniques, steganography is defined as the act of concealing secret information imperceptibly in a cover multimedia encompassing text, image, video and audio, in order to perform interaction between the sender and the receiver in which nobody except the receiver can figure out the secret data. In this approach a quantum LSB-based steganography method utilizing the Gray code for quantum RGB images is investigated. This method uses the Gray code to accommodate two secret qubits in the 3 LSBs of each pixel simultaneously according to reference tables. Experimental results, which are analyzed in the MATLAB environment, show that the present scheme performs well and is more secure and applicable than the previous one currently found in the literature.
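
    A classical (non-quantum) sketch of the Gray-code ingredient: converting a 2-bit secret value to its Gray code and hiding it in the three least-significant bits of an RGB pixel. The exact reference tables and the quantum circuits of the proposed scheme are not reproduced; this only illustrates the binary/Gray conversion that drives them, with an invented parity-bit placement.

```python
# Gray-code LSB embedding sketch (classical toy version of the idea).
def binary_to_gray(n):
    return n ^ (n >> 1)

def gray_to_binary(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def embed_pair(r, g, b, secret_two_bits):
    """Hide a Gray-coded 2-bit value in the LSBs of one RGB pixel (toy example)."""
    code = binary_to_gray(secret_two_bits & 0b11)      # 2-bit Gray value
    r = (r & ~1) | (code >> 1)                         # high Gray bit -> R LSB
    g = (g & ~1) | (code & 1)                          # low Gray bit  -> G LSB
    b = (b & ~1) | ((code >> 1) ^ (code & 1))          # parity bit    -> B LSB
    return r, g, b
```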

  18. Analytical Validation of a New Enzymatic and Automatable Method for d-Xylose Measurement in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Israel Sánchez-Moreno

    2017-01-01

    Full Text Available Hypolactasia, or intestinal lactase deficiency, affects more than half of the world population. Currently, xylose quantification in urine after gaxilose oral administration for the noninvasive diagnosis of hypolactasia is performed with the hand-operated nonautomatable phloroglucinol reaction. This work demonstrates that a new enzymatic xylose quantification method, based on the activity of xylose dehydrogenase from Caulobacter crescentus, represents an excellent alternative to the manual phloroglucinol reaction. The new method is automatable and facilitates the use of the gaxilose test for hypolactasia diagnosis in the clinical practice. The analytical validation of the new technique was performed in three different autoanalyzers, using buffer or urine samples spiked with different xylose concentrations. For the comparison between the phloroglucinol and the enzymatic assays, 224 urine samples of patients to whom the gaxilose test had been prescribed were assayed by both methods. A mean bias of −16.08 mg of xylose was observed when comparing the results obtained by both techniques. After adjusting the cut-off of the enzymatic method to 19.18 mg of xylose, the Kappa coefficient was found to be 0.9531, indicating an excellent level of agreement between both analytical procedures. This new assay represents the first automatable enzymatic technique validated for xylose quantification in urine.

  19. Development and application of methods and computer codes of fuel management and nuclear design of reload cycles in PWR

    International Nuclear Information System (INIS)

    Ahnert, C.; Aragones, J.M.; Corella, M.R.; Esteban, A.; Martinez-Val, J.M.; Minguez, E.; Perlado, J.M.; Pena, J.; Matias, E. de; Llorente, A.; Navascues, J.; Serrano, J.

    1976-01-01

    Description of methods and computer codes for Fuel Management and Nuclear Design of Reload Cycles in PWR, developed at JEN by adaptation of previous codes (LEOPARD, NUTRIX, CITATION, FUELCOST) and implementation of original codes (TEMP, SOTHIS, CICLON, NUDO, MELON, ROLLO, LIBRA, PENELOPE) and their application to the project of Management and Design of Reload Cycles of a 510 MWt PWR, including comparison with results of experimental operation and other calculations for validation of methods. (author) [es

  20. Assessment of shielding analysis methods, codes, and data for spent fuel transport/storage applications

    International Nuclear Information System (INIS)

    Parks, C.V.; Broadhead, B.L.; Hermann, O.W.; Tang, J.S.; Cramer, S.N.; Gauthey, J.C.; Kirk, B.L.; Roussin, R.W.

    1988-07-01

    This report provides a preliminary assessment of the computational tools and existing methods used to obtain radiation dose rates from shielded spent nuclear fuel and high-level radioactive waste (HLW). Particular emphasis is placed on analysis tools and techniques applicable to facilities/equipment designed for the transport or storage of spent nuclear fuel or HLW. Applications to cask transport, storage, and facility handling are considered. The report reviews the analytic techniques for generating appropriate radiation sources, evaluating the radiation transport through the shield, and calculating the dose at a desired point or surface exterior to the shield. Discrete ordinates, Monte Carlo, and point kernel methods for evaluating radiation transport are reviewed, along with existing codes and data that utilize these methods. A literature survey was employed to select a cadre of codes and data libraries to be reviewed. The selection process was based on specific criteria presented in the report. Separate summaries were written for several codes (or family of codes) that provided information on the method of solution, limitations and advantages, availability, data access, ease of use, and known accuracy. For each data library, the summary covers the source of the data, applicability of these data, and known verification efforts. Finally, the report discusses the overall status of spent fuel shielding analysis techniques and attempts to illustrate areas where inaccuracy and/or uncertainty exist. The report notes the advantages and limitations of several analysis procedures and illustrates the importance of using adequate cross-section data sets. Additional work is recommended to enable final selection/validation of analysis tools that will best meet the US Department of Energy's requirements for use in developing a viable HLW management system. 188 refs., 16 figs., 27 tabs

  1. Methods and codes for assessing the off-site Consequences of nuclear accidents. Volume 2

    International Nuclear Information System (INIS)

    Kelly, G.N.; Luykx, F.

    1991-01-01

    The Commission of the European Communities, within the framework of its 1980-84 radiation protection research programme, initiated a two-year project in 1983 entitled methods for assessing the radiological impact of accidents (Maria). This project was continued in a substantially enlarged form within the 1985-89 research programme. The main objectives of the project were, firstly, to develop a new probabilistic accident consequence code that was modular, incorporated the best features of those codes already in use, could be readily modified to take account of new data and model developments and would be broadly applicable within the EC; secondly, to acquire a better understanding of the limitations of current models and to develop more rigorous approaches where necessary; and, thirdly, to quantify the uncertainties associated with the model predictions. This research led to the development of the accident consequence code Cosyma (COde System from MAria), which will be made generally available later in 1990. The numerous and diverse studies that have been undertaken in support of this development are summarized in this paper, together with indications of where further effort might be most profitably directed. Consideration is also given to related research directed towards the development of real-time decision support systems for use in off-site emergency management

  2. Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalizing neural network.

    Science.gov (United States)

    Bonmati, Ester; Hu, Yipeng; Sindhwani, Nikhil; Dietz, Hans Peter; D'hooge, Jan; Barratt, Dean; Deprest, Jan; Vercauteren, Tom

    2018-04-01

    Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, which is applied here for the first time in medical imaging with CNNs. SELU has important advantages such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semiautomatic approach.
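
    As a reference, the SELU activation mentioned in the abstract has a simple closed form; the sketch below uses the published fixed-point constants from Klambauer et al. (2017) and is independent of the paper's network architecture.

```python
import numpy as np

# Scaled exponential linear unit (SELU), the self-normalizing activation
# referenced in the abstract. Alpha/lambda are the published fixed-point values.
SELU_ALPHA = 1.6732632423543772
SELU_LAMBDA = 1.0507009873554805

def selu(x: np.ndarray) -> np.ndarray:
    """SELU(x) = lambda * x for x > 0, and lambda * alpha * (e^x - 1) otherwise."""
    return SELU_LAMBDA * np.where(x > 0, x, SELU_ALPHA * (np.exp(x) - 1.0))

if __name__ == "__main__":
    # Activations of zero-mean, unit-variance inputs stay close to zero mean and
    # unit variance, which is the self-normalizing property the network exploits.
    x = np.random.randn(1_000_000)
    y = selu(x)
    print(f"mean={y.mean():+.3f}  var={y.var():.3f}")
```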

  3. Curvelet based automatic segmentation of supraspinatus tendon from ultrasound image: a focused assistive diagnostic method.

    Science.gov (United States)

    Gupta, Rishu; Elamvazuthi, Irraivan; Dass, Sarat Chandra; Faye, Ibrahima; Vasant, Pandian; George, John; Izza, Faizatul

    2014-12-04

    Disorders of the rotator cuff tendons result in acute pain limiting the normal range of motion of the shoulder. Of all the tendons in the rotator cuff, the supraspinatus (SSP) tendon is the first to be affected by pathological changes. Diagnosis of the SSP tendon using ultrasound is considered operator dependent, with its accuracy related to the operator's level of experience. Automatic segmentation of SSP tendon ultrasound images was performed to provide a focused and more accurate diagnosis. Image processing techniques were employed for automatic segmentation of the SSP tendon, combining the curvelet transform with logical and morphological operators and area filtering. The segmentation assessment was performed using the true positive rate, the false positive rate and the segmentation accuracy. The specificity and sensitivity of the algorithm were tested for the diagnosis of partial thickness tears (PTTs) and full thickness tears (FTTs). The ultrasound images of the SSP tendon were obtained from a medical center with the help of experienced radiologists. The algorithm was tested on 116 images taken from 51 different patients. The accuracy of segmentation of the SSP tendon was 95.61% with respect to the segmentation performed by radiologists, with a true positive rate of 91.37% and a false positive rate of 8.62%. The specificity and sensitivity were found to be 93.6% and 94% for partial thickness tears, and 95% and 95.6% for full thickness tears, respectively. The proposed methodology was successfully tested over a database of more than 116 US images, for which radiologist assessment and validation were performed. The segmentation of the SSP tendon from ultrasound images helps in a focused, accurate and more reliable diagnosis, which has been verified with the help of two experienced radiologists. The specificity and sensitivity for accurate detection of partial and full thickness tears have been considerably increased after segmentation when
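
    A minimal sketch of how the reported segmentation-assessment figures (true positive rate, false positive rate, accuracy) can be computed pixel-wise from an algorithm mask and a radiologist reference mask; the masks below are hypothetical toy data, not the study's images.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
    """Pixel-wise TPR, FPR and accuracy of a binary segmentation `pred`
    against a reference (radiologist) mask `ref`. Both are boolean arrays."""
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    return {
        "tpr": tp / (tp + fn),                      # sensitivity
        "fpr": fp / (fp + tn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

if __name__ == "__main__":
    # Hypothetical 8x8 masks just to exercise the function.
    ref = np.zeros((8, 8), dtype=bool); ref[2:6, 2:6] = True
    pred = np.zeros((8, 8), dtype=bool); pred[2:6, 3:7] = True
    print(segmentation_metrics(pred, ref))
```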

  4. Automatic diagnosis of melanoma using machine learning methods on a spectroscopic system.

    Science.gov (United States)

    Li, Lin; Zhang, Qizhi; Ding, Yihua; Jiang, Huabei; Thiers, Bruce H; Wang, James Z

    2014-10-13

    Early and accurate diagnosis of melanoma, the deadliest type of skin cancer, has the potential to reduce morbidity and mortality rates. However, early diagnosis of melanoma is not trivial even for experienced dermatologists, as it needs sampling and laboratory tests which can be extremely complex and subjective. The accuracy of clinical diagnosis of melanoma is also an issue, especially in distinguishing between melanoma and mole. To solve these problems, this paper presents an approach that makes non-subjective judgements based on quantitative measures for automatic diagnosis of melanoma. Our approach involves image acquisition, image processing, feature extraction, and classification. 187 images (19 malignant melanoma and 168 benign lesions) were collected in a clinic by a spectroscopic device that combines single-scattered, polarized light spectroscopy with multiple-scattered, un-polarized light spectroscopy. After noise reduction and image normalization, features were extracted based on statistical measurements (i.e. mean, standard deviation, mean absolute deviation, L1 norm, and L2 norm) of image pixel intensities to characterize the pattern of melanoma. Finally, these features were fed into certain classifiers to train learning models for classification. We adopted three classifiers - artificial neural network, naïve Bayes, and k-nearest neighbour - to evaluate our approach separately. The naïve Bayes classifier achieved the best performance - 89% accuracy, 89% sensitivity and 89% specificity - and was integrated with our approach in a desktop application running on the spectroscopic system for diagnosis of melanoma. Our work has two strengths. (1) We have used single scattered polarized light spectroscopy and multiple scattered unpolarized light spectroscopy to decipher the multilayered characteristics of human skin. (2) Our approach does not need image segmentation, as we directly probe tiny spots in the lesion skin and the image scans do not involve
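
    The statistical intensity features named in the abstract (mean, standard deviation, mean absolute deviation, L1 norm, L2 norm) are straightforward to compute; the sketch below shows them on a hypothetical image array and does not reproduce the paper's preprocessing or classifier setup.

```python
import numpy as np

def intensity_features(image: np.ndarray) -> np.ndarray:
    """Statistical features of pixel intensities, as listed in the abstract:
    mean, standard deviation, mean absolute deviation, L1 norm, L2 norm."""
    x = image.astype(float).ravel()
    return np.array([
        x.mean(),
        x.std(),
        np.abs(x - x.mean()).mean(),   # mean absolute deviation
        np.abs(x).sum(),               # L1 norm
        np.sqrt((x ** 2).sum()),       # L2 norm
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lesion_scan = rng.integers(0, 256, size=(64, 64))   # hypothetical spectroscopic image
    print(intensity_features(lesion_scan))
```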

  5. Finite element analysis of osteosynthesis screw fixation in the bone stock: an appropriate method for automatic screw modelling.

    Directory of Open Access Journals (Sweden)

    Jan Wieding

    Full Text Available The use of finite element analysis (FEA) has grown into an increasingly important method in the field of biomedical engineering and biomechanics. Although increased computational performance allows new ways to generate more complex biomechanical models, in the area of orthopaedic surgery solid modelling of screws and drill holes represents a limitation for individual cases and increases computational costs. To cope with these requirements, different methods for numerical screw modelling have therefore been investigated to improve their application diversity. Exemplarily, fixation was performed for stabilization of a large segmental femoral bone defect by an osteosynthesis plate. Three different numerical modelling techniques for implant fixation were used in this study, i.e. without screw modelling, screws as solid elements, and screws as structural elements. The latter offers the possibility to implement automatically generated screws with variable geometry on arbitrary FE models. Structural screws were parametrically generated by a Python script for the automatic generation in the FE-software Abaqus/CAE on both a tetrahedral and a hexahedral meshed femur. Accuracy of the FE models was confirmed by experimental testing using a composite femur with a segmental defect and an identical osteosynthesis plate for primary stabilisation with titanium screws. Both deflection of the femoral head and the gap alteration were measured with an optical measuring system with an accuracy of approximately 3 µm. For both screw modelling techniques a sufficient correlation of approximately 95% between numerical and experimental analysis was found. Furthermore, using structural elements for screw modelling the computational time could be reduced by 85% using hexahedral elements instead of tetrahedral elements for femur meshing. The automatically generated screw modelling offers a realistic simulation of the osteosynthesis fixation with

  6. Finite element analysis of osteosynthesis screw fixation in the bone stock: an appropriate method for automatic screw modelling.

    Science.gov (United States)

    Wieding, Jan; Souffrant, Robert; Fritsche, Andreas; Mittelmeier, Wolfram; Bader, Rainer

    2012-01-01

    The use of finite element analysis (FEA) has grown into an increasingly important method in the field of biomedical engineering and biomechanics. Although increased computational performance allows new ways to generate more complex biomechanical models, in the area of orthopaedic surgery solid modelling of screws and drill holes represents a limitation for individual cases and increases computational costs. To cope with these requirements, different methods for numerical screw modelling have therefore been investigated to improve their application diversity. Exemplarily, fixation was performed for stabilization of a large segmental femoral bone defect by an osteosynthesis plate. Three different numerical modelling techniques for implant fixation were used in this study, i.e. without screw modelling, screws as solid elements, and screws as structural elements. The latter offers the possibility to implement automatically generated screws with variable geometry on arbitrary FE models. Structural screws were parametrically generated by a Python script for the automatic generation in the FE-software Abaqus/CAE on both a tetrahedral and a hexahedral meshed femur. Accuracy of the FE models was confirmed by experimental testing using a composite femur with a segmental defect and an identical osteosynthesis plate for primary stabilisation with titanium screws. Both deflection of the femoral head and the gap alteration were measured with an optical measuring system with an accuracy of approximately 3 µm. For both screw modelling techniques a sufficient correlation of approximately 95% between numerical and experimental analysis was found. Furthermore, using structural elements for screw modelling the computational time could be reduced by 85% using hexahedral elements instead of tetrahedral elements for femur meshing. The automatically generated screw modelling offers a realistic simulation of the osteosynthesis fixation with screws in the adjacent

  7. Transcoding method from H.264/AVC to high efficiency video coding based on similarity of intraprediction, interprediction, and motion vector

    Science.gov (United States)

    Liu, Mei-Feng; Zhong, Guo-Yun; He, Xiao-Hai; Qing, Lin-Bo

    2016-09-01

    Currently, most video resources online are encoded in the H.264/AVC format. More fluent video transmission can be obtained if these resources are encoded in the newest international video coding standard: high efficiency video coding (HEVC). In order to improve online video transmission and storage, a transcoding method from H.264/AVC to HEVC is proposed. In this transcoding algorithm, the coding information of intraprediction, interprediction, and motion vector (MV) in the H.264/AVC video stream is used to accelerate the coding in HEVC. It is found through experiments that the region of interprediction in HEVC overlaps that in H.264/AVC. Therefore, the intraprediction for the region in HEVC which is interpredicted in H.264/AVC can be skipped to reduce coding complexity. Several macroblocks in H.264/AVC are combined into one prediction unit (PU) in HEVC when the MV difference between two of the macroblocks in H.264/AVC is lower than a threshold. This method selects only one coding unit depth and one PU mode to reduce the coding complexity. An MV interpolation method of the combined PU in HEVC is proposed according to the areas and distances between the center of one macroblock in H.264/AVC and that of the PU in HEVC. The predicted MV accelerates the motion estimation for HEVC coding. The simulation results show that our proposed algorithm achieves significant coding time reduction with a small loss in rate-distortion performance, compared to the existing transcoding algorithms and normal HEVC coding.
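
    A minimal sketch of an area-and-distance-weighted MV interpolation for a merged HEVC PU built from several H.264/AVC macroblock MVs. The abstract only states that areas and distances are used, so the specific weight formula below (area divided by one plus distance) is an assumption for illustration, not the paper's exact expression.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class MacroblockMV:
    cx: float      # macroblock centre x (pixels)
    cy: float      # macroblock centre y
    area: float    # overlap area with the merged PU (pixels^2)
    mvx: float     # H.264/AVC motion vector components
    mvy: float

def interpolate_pu_mv(pu_cx: float, pu_cy: float, mbs: list) -> tuple:
    """Predict the merged PU motion vector as a weighted average of the
    H.264/AVC macroblock MVs; weights grow with overlap area and shrink with
    distance to the PU centre (illustrative weighting)."""
    num_x = num_y = den = 0.0
    for mb in mbs:
        dist = hypot(mb.cx - pu_cx, mb.cy - pu_cy)
        w = mb.area / (1.0 + dist)
        num_x += w * mb.mvx
        num_y += w * mb.mvy
        den += w
    return (num_x / den, num_y / den)

if __name__ == "__main__":
    mbs = [MacroblockMV(8, 8, 256, 2.0, -1.0), MacroblockMV(24, 8, 256, 2.5, -0.5)]
    print(interpolate_pu_mv(16, 8, mbs))   # predicted MV used to seed motion estimation
```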

  8. Methods for Using Small Non-Coding RNAs to Improve Recombinant Protein Expression in Mammalian Cells

    Directory of Open Access Journals (Sweden)

    Sarah Inwood

    2018-01-01

    Full Text Available The ability to produce recombinant proteins by utilizing different “cell factories” revolutionized the biotherapeutic and pharmaceutical industry. Chinese hamster ovary (CHO) cells are the dominant industrial producer, especially for antibodies. Human embryonic kidney cells (HEK), while not being as widely used as CHO cells, are used where CHO cells are unable to meet the needs for expression, such as growth factors. Therefore, improving recombinant protein expression from mammalian cells is a priority, and continuing effort is being devoted to this topic. Non-coding RNAs are RNA segments that are not translated into a protein and often have a regulatory role. Since their discovery, major progress has been made towards understanding their functions. Non-coding RNA has been investigated extensively in relation to disease, especially cancer, and recently they have also been used as a method for engineering cells to improve their protein expression capability. In this review, we provide information about methods used to identify non-coding RNAs with the potential of improving recombinant protein expression in mammalian cell lines.

  9. Application Of WIMS Code To Calculation Kartini Reactor Parameters By Pin-Cell And Cluster Method

    International Nuclear Information System (INIS)

    Sumarsono, Bambang; Tjiptono, T.W.

    1996-01-01

    Analysis of UZrH fuel element parameters in the Kartini Reactor with the WIMS code has been carried out. The analysis is done by the pin cell and cluster methods: the pin cell method as a function of burnup percentage with an 8 group, 3 region analysis, and the cluster method with an 8 group, 12 region analysis. The calculations give k∞ = 1.3687 by the pin cell method and k∞ = 1.3162 by the cluster method, a deviation of 3.83%. The pin cell analysis as a function of burnup percentage shows that at burnups greater than 59.50%, the multiplication factor is less than one (k∞ < 1), which means that the fuel element reactivity is negative

  10. Results of a survey on accident and safety analysis codes, benchmarks, verification and validation methods

    International Nuclear Information System (INIS)

    Lee, A.G.; Wilkin, G.B.

    1996-03-01

    During the 'Workshop on R and D needs' at the 3rd Meeting of the International Group on Research Reactors (IGORR-III), the participants agreed that it would be useful to compile a survey of the computer codes and nuclear data libraries used in accident and safety analyses for research reactors and the methods various organizations use to verify and validate their codes and libraries. Five organizations, Atomic Energy of Canada Limited (AECL, Canada), China Institute of Atomic Energy (CIAE, People's Republic of China), Japan Atomic Energy Research Institute (JAERI, Japan), Oak Ridge National Laboratories (ORNL, USA), and Siemens (Germany) responded to the survey. The results of the survey are compiled in this report. (author) 36 refs., 3 tabs

  11. A Framework for the Development of Automatic DFA Method to Minimize the Number of Components and Assembly Reorientations

    Science.gov (United States)

    Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa

    2018-03-01

    Assembly is a part of manufacturing processes that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate product design in order to make it simpler, easier and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid product designer to extract data, evaluate assembly process, and provide recommendation for the product design improvement. These three things are desirable to be performed without interactive process or user intervention, so product design evaluation process could be done automatically. Input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestion for design improvement.
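
    A toy sketch of the last step described in the abstract: selecting the assembly sequence with the minimum number of reorientations. The components, their assembly directions, and the exhaustive search are hypothetical illustrations; the framework's data extraction from 3D solid drawings and its feasibility constraints are not reproduced.

```python
from itertools import permutations

# Toy data: each component is assembled along a direction (+z, -z, +x, ...).
# A "reorientation" is counted whenever consecutive components require a
# different assembly direction. Components and directions are hypothetical.
ASSEMBLY_DIRECTION = {"base": "+z", "shaft": "+z", "cover": "-z", "clip": "+x"}

def count_reorientations(sequence: tuple) -> int:
    dirs = [ASSEMBLY_DIRECTION[c] for c in sequence]
    return sum(1 for a, b in zip(dirs, dirs[1:]) if a != b)

def best_sequence(components: list) -> tuple:
    """Exhaustively pick the sequence with the fewest reorientations
    (geometric feasibility constraints are omitted in this sketch)."""
    return min(((seq, count_reorientations(seq)) for seq in permutations(components)),
               key=lambda item: item[1])

if __name__ == "__main__":
    seq, n = best_sequence(list(ASSEMBLY_DIRECTION))
    print(f"best sequence: {seq} with {n} reorientation(s)")
```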

  12. A Mixed Methods Approach to Code Stakeholder Beliefs in Urban Water Governance

    Science.gov (United States)

    Bell, E. V.; Henry, A.; Pivo, G.

    2017-12-01

    What is a reliable way to code policies to represent belief systems? The Advocacy Coalition Framework posits that public policy may be viewed as manifestations of belief systems. Belief systems include both ontological beliefs about cause-and-effect relationships and policy effectiveness, as well as normative beliefs about appropriate policy instruments and the relative value of different outcomes. The idea that belief systems are embodied in public policy is important for urban water governance because it trains our focus on belief conflict; this can help us understand why many water-scarce cities do not adopt innovative technology despite available scientific information. To date, there has been very little research on systematic, rigorous methods to measure the belief system content of public policies. We address this by testing the relationship between beliefs and policy participation to develop an innovative coding framework. With a focus on urban water governance in Tucson, Arizona, we analyze grey literature on local water management. Mentioned policies are coded into a typology of common approaches identified in urban water governance literature, which include regulation, education, price and non-price incentives, green infrastructure and other types of technology. We then survey local water stakeholders about their perceptions of these policies. Urban water governance requires coordination of organizations from multiple sectors, and we cannot assume that belief development and policy participation occur in a vacuum. Thus, we use a generalized exponential random graph model to test the relationship between perceptions and policy participation in the Tucson water governance network. We measure policy perceptions for organizations by averaging across their respective, affiliated respondents and generating a belief distance matrix of coordinating network participants. Similarly, we generate a distance matrix of these actors based on the frequency of their

  13. Pressure vessels design methods using the codes, fracture mechanics and multiaxial fatigue

    Directory of Open Access Journals (Sweden)

    Fatima Majid

    2016-10-01

    Full Text Available This paper gives an overview of pressure vessel (PV) design methods to introduce new engineers and new researchers to the basics and to summarize the know-how of PV design. This understanding will contribute to enhancing their knowledge in the selection of the appropriate method. There are several types of tanks, distinguished by the operating pressure, the temperature and the safety system to be provided. The selection of one or the other of these tanks depends on environmental regulations, the geographic location and the materials used. The design theory of PVs is detailed in various codes and standards, such as ASME, CODAP and API, as well as in the standards for material selection such as EN 10025 or EN 10028. While designing a PV, we must assess the fatigue of its material through the different methods and theories found in the literature and in specific codes. In this work, the fatigue lifetime calculation through fracture mechanics theory and the different methods found in ASME VIII DIV 2, API 579-1 and EN 13445-3, Annex B, is detailed, with a comparison between these methods. Uniaxial fatigue has been covered in detail in many articles in the literature, whereas the multiaxial effect has not been considered as thoroughly as it should be. In this paper we discuss the biaxial fatigue due to cyclic pressure in thick-walled PVs, together with an overview of multiaxial fatigue in PVs.

  14. Analysis of hydrogen and methane in seawater by "Headspace" method: Determination at trace level with an automatic headspace sampler.

    Science.gov (United States)

    Donval, J P; Guyader, V

    2017-01-01

    "Headspace" technique is one of the methods for the onboard measurement of hydrogen (H 2 ) and methane (CH 4 ) in deep seawater. Based on the principle of an automatic headspace commercial sampler, a specific device has been developed to automatically inject gas samples from 300ml syringes (gas phase in equilibrium with seawater). As valves, micro pump, oven and detector are independent, a gas chromatograph is not necessary allowing a reduction of the weight and dimensions of the analytical system. The different steps from seawater sampling to gas injection are described. Accuracy of the method is checked by a comparison with the "purge and trap" technique. The detection limit is estimated to 0.3nM for hydrogen and 0.1nM for methane which is close to the background value in deep seawater. It is also shown that this system can be used to analyze other gases such as Nitrogen (N 2 ), carbon monoxide (CO), carbon dioxide (CO 2 ) and light hydrocarbons. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Validation of experts versus atlas-based and automatic registration methods for subthalamic nucleus targeting on MRI

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Castro, F.J.; Cuisenaire, O.; Thiran, J.P. [Ecole Polytechnique Federale de Lausanne (EPFL) (Switzerland). Signal Processing Inst.; Pollo, C. [Ecole Polytechnique Federale de Lausanne (EPFL) (Switzerland). Signal Processing Inst.; Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne (Switzerland). Dept. of Neurosurgery; Villemure, J.G. [Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne (Switzerland). Dept. of Neurosurgery

    2006-03-15

    Objectives: In functional stereotactic neurosurgery, one of the cornerstones upon which the success and the operating time depend is accurate targeting. The subthalamic nucleus (STN) is the usual target involved when applying deep brain stimulation for Parkinson's disease (PD). Unfortunately, the STN is usually not clearly visible in common medical imaging modalities, which justifies the use of atlas-based segmentation techniques to infer the STN location. Materials and methods: Eight bilaterally implanted PD patients were included in this study. A three-dimensional T1-weighted sequence and inversion recovery T2-weighted coronal slices were acquired pre-operatively. We propose a methodology for the construction of a ground truth of the STN location and a scheme that allows both to perform a comparison between different non-rigid registration algorithms and to evaluate their usability to locate the STN automatically. Results: The intra-expert variability in identifying the STN location is 1.06 ± 0.61 mm while the best non-rigid registration method gives an error of 1.80 ± 0.62 mm. On the other hand, statistical tests show that an affine registration with only 12 degrees of freedom is not enough for this application. Conclusions: Using our validation-evaluation scheme, we demonstrate that automatic STN localization is possible and accurate with non-rigid registration algorithms. (orig.)

  16. Wien Automatic System Planning (WASP) Package. A computer code for power generating system expansion planning. Version WASP-III Plus. User's manual. Volume 1: Chapters 1-11

    International Nuclear Information System (INIS)

    1995-01-01

    As a continuation of its effort to provide comprehensive and impartial guidance to Member States facing the need for introducing nuclear power, the IAEA has completed a new version of the Wien Automatic System Planning (WASP) Package for carrying out power generation expansion planning studies. WASP was originally developed in 1972 in the USA to meet the IAEA's needs to analyze the economic competitiveness of nuclear power in comparison to other generation expansion alternatives for supplying the future electricity requirements of a country or region. The model was first used by the IAEA to conduct global studies (Market Survey for Nuclear Power Plants in Developing Countries, 1972-1973) and to carry out Nuclear Power Planning Studies for several Member States. The WASP system developed into a very comprehensive planning tool for electric power system expansion analysis. Following these developments, the so-called WASP-III version was produced in 1979. This version introduced important improvements to the system, namely in the treatment of hydroelectric power plants. The WASP-III version has been continually updated and maintained in order to incorporate needed enhancements. In 1981, the Model for Analysis of Energy Demand (MAED) was developed in order to allow the determination of electricity demand, consistent with the overall requirements for final energy, and thus, to provide a more adequate forecast of electricity needs to be considered in the WASP study. MAED and WASP have been used by the Agency for the conduct of Energy and Nuclear Power Planning Studies for interested Member States. More recently, the VALORAGUA model was completed in 1992 as a means for helping in the preparation of the hydro plant characteristics to be input in the WASP study and to verify that the WASP overall optimized expansion plan takes also into account an optimization of the use of water for electricity generation. The combined application of VALORAGUA and WASP permits the

  17. Reactor power automatically controlling method and device for BWR type reactor

    International Nuclear Information System (INIS)

    Murata, Akira; Miyamoto, Yoshiyuki; Tanigawa, Naoshi.

    1997-01-01

    For automatic control of the reactor power, when the deviation exceeds a predetermined value, the aimed value is held constant, and when the deviation decreases below the predetermined value, the aimed value is increased again. Alternatively, when the reactor power variation coefficient decreases below a predetermined value, the aimed value is held constant, and when the variation coefficient exceeds the predetermined value, the aimed value is increased. When the reactor power variation coefficient exceeds a first determined value, the aimed value is increased at a predetermined variation coefficient, and when the variation coefficient decreases below the first determined value, or when the deviation between the aimed value and the actual reactor power exceeds a second determined value, the aimed value is held constant. When the deviation increases or when the reactor power variation coefficient decreases, the aimed value is held at the predetermined value rather than being increased, so the deviation does not grow excessively, thereby avoiding excessive overshoot. (N.H.)
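
    A minimal sketch of the hold-or-ramp setpoint logic described above, reading the "variation coefficient" as a measured rate of power change. All thresholds, the ramp step, and the toy plant response are hypothetical illustrations, not values from the patent-style description.

```python
def next_power_target(target: float, actual: float, variation_rate: float,
                      deviation_limit: float = 0.02, rate_limit: float = 0.001,
                      step: float = 0.005) -> float:
    """One step of the setpoint logic: hold the aimed value when the deviation is
    large (or the measured power variation rate is low), otherwise keep raising it.
    All numbers are illustrative fractions of rated power."""
    deviation = target - actual
    if deviation > deviation_limit or variation_rate < rate_limit:
        return target              # hold the aimed value to avoid overshoot
    return target + step           # continue raising the aimed value

if __name__ == "__main__":
    target, actual = 0.50, 0.50
    for _ in range(5):
        rate = 0.002               # hypothetical measured power variation rate
        target = next_power_target(target, actual, rate)
        actual += 0.004            # toy plant response lagging the target
        print(f"target={target:.3f} actual={actual:.3f}")
```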

  18. A method for automatic situation recognition in collaborative multiplayer Serious Games

    Directory of Open Access Journals (Sweden)

    Viktor Wendel

    2015-07-01

    Full Text Available One major Serious Games challenge is the adaptation of game-based learning environments towards the needs of players with heterogeneous player and learner traits. For both an instructor and an algorithmic adaptation mechanism it is vital to have knowledge about the course of the game in order to be able to recognize player intentions, potential problems, or misunderstandings - regarding both the game(play) and the learning content. The main contribution of this paper is a mechanism to recognize high-level situations in a multiplayer Serious Game. The approach presented uses criteria and situations based on the game state, player actions and events, and calculates how likely it is that players are in a certain situation. The gathered information can be used to feed an adaptation algorithm or be presented to the instructor to improve instructor decision making. In a first evaluation, the situation recognition was able to correctly recognize all of the situations in a set of game sessions. Thus, the contribution of this paper is a novel approach to automatically capture complex multiplayer game states influenced by unpredictable player behavior, and to interpret that information to calculate the probabilities that relevant game situations are present, from which player intentions can be derived.

  19. Using Nanoinformatics Methods for Automatically Identifying Relevant Nanotoxicology Entities from the Literature

    Science.gov (United States)

    García-Remesal, Miguel; García-Ruiz, Alejandro; Pérez-Rey, David; de la Iglesia, Diana; Maojo, Víctor

    2013-01-01

    Nanoinformatics is an emerging research field that uses informatics techniques to collect, process, store, and retrieve data, information, and knowledge on nanoparticles, nanomaterials, and nanodevices and their potential applications in health care. In this paper, we have focused on the solutions that nanoinformatics can provide to facilitate nanotoxicology research. For this, we have taken a computational approach to automatically recognize and extract nanotoxicology-related entities from the scientific literature. The desired entities belong to four different categories: nanoparticles, routes of exposure, toxic effects, and targets. The entity recognizer was trained using a corpus that we specifically created for this purpose and was validated by two nanomedicine/nanotoxicology experts. We evaluated the performance of our entity recognizer using 10-fold cross-validation. The precisions range from 87.6% (targets) to 93.0% (routes of exposure), while recall values range from 82.6% (routes of exposure) to 87.4% (toxic effects). These results prove the feasibility of using computational approaches to reliably perform different named entity recognition (NER)-dependent tasks, such as for instance augmented reading or semantic searches. This research is a “proof of concept” that can be expanded to stimulate further developments that could assist researchers in managing data, information, and knowledge at the nanolevel, thus accelerating research in nanomedicine. PMID:23509721

  20. A Coding Method for Efficient Subgraph Querying on Vertex- and Edge-Labeled Graphs

    Science.gov (United States)

    Zhu, Lei; Song, Qinbao; Guo, Yuchen; Du, Lei; Zhu, Xiaoyan; Wang, Guangtao

    2014-01-01

    Labeled graphs are widely used to model complex data in many domains, so subgraph querying has been attracting more and more attention from researchers around the world. Unfortunately, subgraph querying is very time consuming since it involves subgraph isomorphism testing that is known to be an NP-complete problem. In this paper, we propose a novel coding method for subgraph querying that is based on Laplacian spectrum and the number of walks. Our method follows the filtering-and-verification framework and works well on graph databases with frequent updates. We also propose novel two-step filtering conditions that can filter out most false positives and prove that the two-step filtering conditions satisfy the no-false-negative requirement (no dismissal in answers). Extensive experiments on both real and synthetic graphs show that, compared with six existing counterpart methods, our method can effectively improve the efficiency of subgraph querying. PMID:24853266
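
    A sketch of how a graph "code" combining the Laplacian spectrum and walk counts could be computed, together with a simple illustrative filter. The filter shown (a database graph must have at least as many walks of each length as the query) is only a stand-in necessary-style condition to show the filtering-and-verification idea; the paper's actual two-step filtering conditions and its handling of vertex/edge labels are not reproduced.

```python
import numpy as np
import networkx as nx

def graph_code(g: nx.Graph, walk_len: int = 3) -> dict:
    """Toy 'code' for a graph: the Laplacian spectrum plus the total number of
    walks of length 1..walk_len (labels are ignored in this sketch)."""
    a = nx.to_numpy_array(g)
    lap_spectrum = np.sort(np.linalg.eigvalsh(np.diag(a.sum(axis=1)) - a))
    walks = [float(np.linalg.matrix_power(a, k).sum()) for k in range(1, walk_len + 1)]
    return {"spectrum": lap_spectrum, "walks": walks}

def may_contain(db_code: dict, query_code: dict) -> bool:
    """Illustrative filter: a database graph can only contain the query as a
    subgraph if it has at least as many walks of every length. Candidates that
    pass still need exact subgraph isomorphism verification."""
    return all(dw >= qw for dw, qw in zip(db_code["walks"], query_code["walks"]))

if __name__ == "__main__":
    query = nx.path_graph(3)
    db_graph = nx.cycle_graph(5)
    print(may_contain(graph_code(db_graph), graph_code(query)))   # True -> verify by isomorphism test
```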

  1. Coding Model and Mapping Method of Spherical Diamond Discrete Grids Based on Icosahedron

    Directory of Open Access Journals (Sweden)

    LIN Bingxian

    2016-12-01

    Full Text Available The Discrete Global Grid (DGG) provides a fundamental environment for the organization and management of global-scale spatial data. The DGG's encoding scheme, which avoids coordinate transformation between different coordinate reference frames and reduces the complexity of spatial analysis, contributes greatly to the multi-scale expression and unified modeling of spatial data. Compared with other kinds of DGGs, the Diamond Discrete Global Grid (DDGG) based on the icosahedron is beneficial to the integration and expression of spherical spatial data because of its better geometric properties. However, its structure is more complicated than that of the DDGG on the octahedron, because the edges of its initial diamonds cannot be aligned with meridians and parallels. New challenges are posed when it comes to the construction of a hierarchical encoding system and a mapping relationship with geographic coordinates. On this issue, this paper presents a DDGG coding system based on the Hilbert curve and designs conversion methods between codes and geographical coordinates. The study results indicate that this Hilbert-curve-based encoding system can express scale and location information implicitly, putting the similarity between the DDGG and a planar grid into practice, and that it balances the efficiency and accuracy of conversion between codes and geographical coordinates in order to support the modeling, integrated management and spatial analysis of global massive spatial data.

  2. Development of simulation code for MOX dissolution using silver-mediated electrochemical method (Contract research)

    Energy Technology Data Exchange (ETDEWEB)

    Kida, Takashi; Umeda, Miki; Sugikawa, Susumu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    MOX dissolution using silver-mediated electrochemical method will be employed for the preparation of plutonium nitrate solution in the criticality safety experiments in the Nuclear Fuel Cycle Safety Engineering Research Facility (NUCEF). A simulation code for the MOX dissolution has been developed for the operating support. The present report describes the outline of the simulation code, a comparison with the experimental data and a parameter study on the MOX dissolution. The principle of this code is based on the Zundelevich's model for PuO2 dissolution using Ag(II). The influence of nitrous acid on the material balance of Ag(II) is taken into consideration and the surface area of MOX powder is evaluated by particle size distribution in this model. The comparison with experimental data was carried out to confirm the validity of this model. It was confirmed that the behavior of MOX dissolution could adequately be simulated using an appropriate MOX dissolution rate constant. It was found from the result of parameter studies that MOX particle size was major governing factor on the dissolution rate. (author)

  3. Structural design codes: Strain-life method and fatigue damage estimation for ITER

    International Nuclear Information System (INIS)

    Karditsas, P.J.

    1996-01-01

    A preferred route is suggested for implementing the design rules and requirements of the design codes for the International Thermonuclear Experimental Reactor (ITER), such as ASME and RCC-MR, and for preliminarily assessing which of the in-service loading conditions inflicts the greatest damage on the structure. Some of the relevant design code rules and constraints are presented, and lifetime and fatigue damage, with some data on fatigue life for Type 316 stainless steel, are predicted. A design curve for strain range versus the number of cycles to failure is presented, including the effect of neutron damage on the material. An example calculation is performed on a first-wall section, and preliminary estimation of the fatigue usage factor is presented. One must observe caution when assessing the results because of the assumptions made in performing the calculations. The results, however, indicate that parts of the component are in the low-cycle fatigue region of operation, which thus supports the use of strain-life methods. The load-controlled stress limit approach of the existing codes leads to difficulties with in-service loading and component categorization, whereas the strain-deformation limit approach may lead to difficulties in calculations. The conclusion is that the load-controlled approach shifts the emphasis to the regulator and the licensing body, whereas the strain-deformation approach shifts the emphasis to the designer and the structural analyst. 11 refs., 7 figs., 2 tabs
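
    A minimal sketch of how a fatigue usage factor can be accumulated from a strain-range versus cycles-to-failure curve via Miner's rule, as implied by the abstract. The curve below is a generic Basquin plus Coffin-Manson form with placeholder constants; it is not the ITER design curve nor the Type 316 stainless steel data referred to in the report.

```python
from scipy.optimize import brentq

def allowable_cycles(strain_range: float,
                     b: float = -0.12, c: float = -0.6,
                     sigma_f_over_E: float = 0.006, eps_f: float = 0.5) -> float:
    """Cycles to failure N for a total strain range, from a generic
    Basquin + Coffin-Manson curve:  d_eps/2 = (sigma'_f/E)(2N)^b + eps'_f (2N)^c.
    All constants are placeholders, NOT code or ITER design values."""
    amp = strain_range / 2.0
    f = lambda n: sigma_f_over_E * (2 * n) ** b + eps_f * (2 * n) ** c - amp
    return brentq(f, 1.0, 1e12)   # solve for N on a wide bracket

def usage_factor(load_blocks: list) -> float:
    """Miner's rule: sum n_i / N_i over the in-service loading blocks."""
    return sum(n_i / allowable_cycles(d_eps) for d_eps, n_i in load_blocks)

if __name__ == "__main__":
    # Hypothetical duty cycle: (total strain range, applied cycles)
    blocks = [(0.004, 20_000), (0.008, 1_000)]
    print(f"cumulative fatigue usage factor = {usage_factor(blocks):.3f}")
```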

  4. A method for the automatic quantification of the completeness of pulmonary fissures: evaluation in a database of subjects with severe emphysema

    International Nuclear Information System (INIS)

    Rikxoort, Eva M. van; Goldin, Jonathan G.; Galperin-Aizenberg, Maya; Abtin, Fereidoun; Kim, Hyun J.; Lu, Peiyun; Shaw, Greg; Brown, Matthew S.; Ginneken, Bram van

    2012-01-01

    To propose and evaluate a technique for automatic quantification of fissural completeness from chest computed tomography (CT) in a database of subjects with severe emphysema. Ninety-six CT studies of patients with severe emphysema were included. The lungs, fissures and lobes were automatically segmented. The completeness of the fissures was calculated as the percentage of the lobar border defined by a fissure. The completeness score of the automatic method was compared with a visual consensus read by three radiologists using boxplots, rank sum tests and ROC analysis. The consensus read found 49% (47/96), 15% (14/96) and 67% (64/96) of the right major, right minor and left major fissures to be complete. For all fissures visually assessed as being complete the automatic method resulted in significantly higher completeness scores (mean 92.78%) than for those assessed as being partial or absent (mean 77.16%; all p values <0.001). The areas under the curves for the automatic fissural completeness were 0.88, 0.91 and 0.83 for the right major, right minor and left major fissures respectively. An automatic method is able to quantify fissural completeness in a cohort of subjects with severe emphysema consistent with a visual consensus read of three radiologists. (orig.)

  5. Advanced method for automatic processing of seismic and infra-sound data; Methodes avancees de traitement automatique de donnees sismiques et infrasoniques

    Energy Technology Data Exchange (ETDEWEB)

    Cansi, Y.; Crusem, R. [CEA Centre d`Etudes de Limeil, 94 - Villeneuve-Saint-Georges (France)

    1997-11-01

    Governmental organizations have manifested their need for rapid and precise information in the two main fields covered by operational seismology, i.e. major earthquake alerts and the detection of nuclear explosions. To satisfy both of these constraints, it is necessary to implement increasingly elaborate automation methods for processing the data. Automatic processing methods are mainly based on the following elementary steps: detection of a seismic signal on a recording; identification of the type of wave associated with the signal; linking of the different detected arrivals to the same seismic event; and localization of the source, which also determines the characteristics of the event. Otherwise, two main categories of processing may be distinguished: methods suitable for large-aperture networks, which are characterized by single-channel treatment for detection and identification, and antenna-type methods, which are based on searching for consistent signals on the scale of the network. Within the two main fields of research mentioned here, our effort has focused on regional-scale seismic waves in relation to large-aperture networks as well as on detection techniques using a mini-network (antenna). We have taken advantage of an extensive set of examples in order to implement an automatic procedure for identifying regional seismic waves on single-channel recordings. With the mini-networks, we have developed a novel, universally applicable method successfully applied to various types of recording (e.g. seismic, micro-barometric) and to networks adapted to different wavelength bands. (authors) 7 refs.

  6. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    Science.gov (United States)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic messages (magnitudes, phases, distributions et al.) is proposed for injection control. The primary task for the ATLS is the detection of seismic events in a long-term sustained time series record. Considering that the time-varying Signal to Noise Ratio (SNR) of a long-term record and the uneven energy distributions of seismic event waveforms increase the difficulty of automatic seismic detection, in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), can identify the effective seismic events in the time series through Change Point Detection (CPD) on the seismic record. In this method, an anomalous signal (seismic event) is treated as a change point in the time series (seismic record): the statistical model of the signal in the neighborhood of the event point changes because of the seismic event occurrence. This means the SDAR aims to find the statistical irregularities of the record through CPD. There are 3 advantages of SDAR. 1. Anti-noise ability: the SDAR does not use waveform attributes (such as amplitude, energy, polarization) for signal detection, so it is an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models can be automatically updated by SDAR for on-line processing. 3. Discounting property: the SDAR introduces a discounting parameter to decrease the influence of present statistics on future data, which makes SDAR a robust algorithm for non-stationary signal processing. With these 3 advantages, the SDAR method can handle the non-stationary time-varying long
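
    A minimal sketch of a sequentially discounting AR(1) learner producing an online change/outlier score, to illustrate the SDAR idea described above. It is a deliberate simplification (first order, scalar statistics, crude covariance updates); the SDAR used in the study would have its own model order and parameter settings.

```python
import numpy as np

class SDAR1:
    """Minimal sequentially-discounting AR(1) learner. The discounting
    parameter r down-weights old samples; the score returned by update() is the
    negative log-likelihood of the newest sample under the current Gaussian AR model."""

    def __init__(self, r: float = 0.02):
        self.r = r
        self.mu = 0.0      # running mean
        self.c0 = 1.0      # lag-0 covariance estimate
        self.c1 = 0.0      # lag-1 covariance estimate
        self.var = 1.0     # residual variance
        self.prev = 0.0

    def update(self, x: float) -> float:
        r = self.r
        self.mu = (1 - r) * self.mu + r * x
        self.c0 = (1 - r) * self.c0 + r * (x - self.mu) ** 2
        self.c1 = (1 - r) * self.c1 + r * (x - self.mu) * (self.prev - self.mu)
        a = self.c1 / self.c0 if self.c0 > 0 else 0.0          # AR(1) coefficient
        pred = self.mu + a * (self.prev - self.mu)
        err = x - pred
        self.var = (1 - r) * self.var + r * err ** 2
        self.prev = x
        return 0.5 * (np.log(2 * np.pi * self.var) + err ** 2 / self.var)   # outlier score

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic record: quiet noise, then a step change in level at sample 500.
    record = np.concatenate([rng.normal(0.0, 0.1, 500), rng.normal(1.0, 0.1, 500)])
    model = SDAR1()
    scores = np.array([model.update(x) for x in record])
    print("peak score index:", int(scores.argmax()))   # expected near sample 500
```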

  7. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    International Nuclear Information System (INIS)

    Schoot, A. J. A. J. van de; Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A.; Hoogeman, M. S.; Chai, X.

    2014-01-01

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contour, that can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for following segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation

  8. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    Energy Technology Data Exchange (ETDEWEB)

    Schoot, A. J. A. J. van de, E-mail: a.j.schootvande@amc.uva.nl; Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A. [Department of Radiation Oncology, Academic Medical Center, University of Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam (Netherlands); Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands); Chai, X. [Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Palo Alto, California 94305 (United States)

    2014-03-15

    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contour, that can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for following segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation

  9. A semi-automatic computerized method to measure baroreflex-mediated heart rate responses that reduces interobserver variability

    Directory of Open Access Journals (Sweden)

    P.P.S. Soares

    2005-06-01

    Full Text Available Arterial baroreflex sensitivity estimated by pharmacological impulse stimuli depends on intrinsic signal variability and usually on a subjective choice of blood pressure (BP) and heart rate (HR) values. We propose a semi-automatic method to estimate cardiovascular reflex sensitivity to bolus infusions of phenylephrine and nitroprusside. Beat-to-beat BP and HR time series for male Wistar rats (N = 13) were obtained from the digitized signal (sample frequency = 2 kHz) and analyzed by the proposed method (PRM) developed in Matlab language. In the PRM, time series were low-pass filtered with zero-phase distortion (3rd order Butterworth filter applied in the forward and reverse directions) and presented graphically, and parameters were selected interactively. Differences between basal mean values and peak BP (deltaBP) and HR (deltaHR) values after drug infusions were used to calculate baroreflex sensitivity indexes, defined as the deltaHR/deltaBP ratio. The PRM was compared to the traditionally employed method (TDM) used by seven independent observers on files for reflex bradycardia (N = 43) and tachycardia (N = 61). Agreement was assessed by Bland and Altman plots. Dispersion among users, measured as the standard deviation, was higher for the TDM for reflex bradycardia (0.60 ± 0.46 vs 0.21 ± 0.26 bpm/mmHg for the PRM, P < 0.001) and tachycardia (0.83 ± 0.62 vs 0.28 ± 0.28 bpm/mmHg for the PRM, P < 0.001). The advantage of the present method is related to its objectivity, since the routine automatically calculates the desired parameters according to previous software instructions. This is an objective, robust and easy-to-use tool for cardiovascular reflex studies.
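
    A minimal sketch of the zero-phase 3rd-order Butterworth low-pass filtering step described above, using scipy's forward-backward filtering; the cut-off frequency and the synthetic signal are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def zero_phase_lowpass(x: np.ndarray, fs: float, cutoff_hz: float, order: int = 3) -> np.ndarray:
    """3rd-order Butterworth low-pass applied forward and backward (filtfilt),
    i.e. with zero phase distortion, as in the abstract."""
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)

if __name__ == "__main__":
    fs = 2000.0                                  # the study samples BP/HR at 2 kHz
    t = np.arange(0, 5, 1 / fs)
    bp = 100 + 10 * np.sin(2 * np.pi * 1.5 * t) + np.random.randn(t.size)  # synthetic BP trace
    smooth = zero_phase_lowpass(bp, fs, cutoff_hz=10.0)                    # hypothetical 10 Hz cut-off
    print(smooth[:5])
```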

  10. Detection of viable myocardium by dobutamine stress tagging magnetic resonance imaging with three-dimensional analysis by automatic trace method

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Isao [Yatsu Hoken Hospital, Narashino, Chiba (Japan); Watanabe, Shigeru; Masuda, Yoshiaki

    2000-07-01

    The present study attempted to detect the viability of myocardium by quantitative automatic 3-dimensional analysis of the improvement of regional wall motion using a magnetic resonance imaging (MRI) tagging method. Twenty-two subjects with ischemic heart disease who had abnormal wall motion on echocardiography at rest were enrolled. All patients underwent dobutamine stress echocardiography (DSE), coronary arteriography and left ventriculography. The results were compared with those of 7 normal volunteers. MRI studies were done with myocardial tagging using the spatial modulation of magnetization technique. Automatic tracing with an original program was performed, and wall motion was compared before and during dobutamine infusion. The evaluation of myocardial viability with MRI and echocardiography had similar results in 19 (86.4%) of the 22 patients; 20 were studied by positron emission tomography or thallium-201 single photon emission computed tomography for myocardial viability, or studied for improvement of wall motion following coronary intervention. The sensitivity of dobutamine stress MRI (DSMRI) with tagging was 75.9% whereas that of DSE was 65.5%. The specificity of DSMRI was 85.7% (6/7) and that of DSE was 100% (7/7). The accuracy of DSMRI was 77.8% (28/36) and that of DSE 72.2% (26/36). DSMRI was shown to be superior to DSE in terms of evaluation of myocardial viability. (author)

  11. Web-based UMLS concept retrieval by automatic text scanning: a comparison of two methods.

    Science.gov (United States)

    Brandt, C; Nadkarni, P

    2001-01-01

    The Web is increasingly the medium of choice for multi-user application program delivery. Yet selection of an appropriate programming environment for rapid prototyping, code portability, and maintainability remains an issue. We summarize our experience with the conversion of a LISP Web application, Search/SR, to a new, functionally identical application, Search/SR-ASP, using a relational database and active server pages (ASP) technology. Our results indicate that provision of easy access to database engines and external objects is almost essential for a development environment to be considered viable for rapid and robust application delivery. While LISP itself is a robust language, its use in Web applications may be hard to justify given that current vendor implementations do not provide such functionality. Alternative, currently available scripting environments for Web development appear to have most of LISP's advantages and few of its disadvantages.

  12. A software defined RTU multi-protocol automatic adaptation data transmission method

    Science.gov (United States)

    Jin, Huiying; Xu, Xingwu; Wang, Zhanfeng; Ma, Weijun; Li, Sheng; Su, Yong; Pan, Yunpeng

    2018-02-01

    The remote terminal unit (RTU) is the core device of monitoring systems in hydrology and water resources. Different devices often use different communication protocols in the application layer, which makes information analysis and communication networking difficult. Therefore, we introduced the idea of software-defined hardware, abstracted the common features of mainstream RTU application-layer communication protocols, and proposed a unified common protocol model. Various application-layer communication protocol algorithms are then modularized according to the model. The executable code of these algorithms is labeled by virtual functions and stored in the flash chips of the embedded CPU to form the protocol stack. According to the configuration commands that initialize the RTU communication system, the RTU can dynamically assemble and load various application-layer communication protocols and efficiently transport sensor data from the RTU to the central station, while the data acquisition protocol of the sensors and the various external communication terminals remain unchanged.
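
    A compact Python sketch (not the authors' firmware) of the software-defined idea: application-layer protocol handlers are registered as interchangeable modules in a table and selected by a configuration command, so the framing of sensor data can change without touching the acquisition logic. The protocol names and frame layouts below are invented for illustration.

```python
# Illustrative protocol registry: interchangeable application-layer encoders
# selected by name, in the spirit of a software-defined protocol stack.
import struct
from typing import Callable, Dict

ProtocolEncoder = Callable[[dict], bytes]
_registry: Dict[str, ProtocolEncoder] = {}

def register(name: str):
    def wrap(fn: ProtocolEncoder) -> ProtocolEncoder:
        _registry[name] = fn
        return fn
    return wrap

@register("hydro-a")
def encode_hydro_a(reading: dict) -> bytes:
    # hypothetical ASCII frame: "STATION,LEVEL,FLOW\n"
    return f"{reading['station']},{reading['level']:.2f},{reading['flow']:.2f}\n".encode()

@register("hydro-b")
def encode_hydro_b(reading: dict) -> bytes:
    # hypothetical fixed-width binary frame
    return struct.pack("<Hff", reading["station"], reading["level"], reading["flow"])

def transmit(reading: dict, protocol: str) -> bytes:
    """Select the handler named in the configuration command and encode."""
    return _registry[protocol](reading)

frame = transmit({"station": 42, "level": 3.17, "flow": 12.5}, "hydro-a")
```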

  13. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    Science.gov (United States)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.

  14. CHF predictor derived from a 3D thermal-hydraulic code and an advanced statistical method

    International Nuclear Information System (INIS)

    Banner, D.; Aubry, S.

    2004-01-01

    A rod bundle CHF predictor has been determined by using a 3D code (THYC) to compute local thermal-hydraulic conditions at the boiling crisis location. These local parameters have been correlated to the critical heat flux by using an advanced statistical method based on spline functions. The main characteristics of the predictor are presented in conjunction with a detailed analysis of predictions (P/M ratio) in order to prove that the usual safety methodology can be applied with such a predictor. A thermal-hydraulic design criterion is obtained (1.13) and the predictor is compared with the WRB-1 correlation. (author)

  15. Development and Testing of a Decision Making Based Method to Adjust Automatically the Harrowing Intensity

    Directory of Open Access Journals (Sweden)

    Roland Gerhards

    2013-05-01

    Full Text Available Harrowing is often used to reduce weed competition, generally using a constant intensity across a whole field. The efficacy of weed harrowing in wheat and barley can be optimized, if site-specific conditions of soil, weed infestation and crop growth stage are taken into account. This study aimed to develop and test an algorithm to automatically adjust the harrowing intensity by varying the tine angle and number of passes. The field variability of crop leaf cover, weed density and soil density was acquired with geo-referenced sensors to investigate the harrowing selectivity and crop recovery. Crop leaf cover and weed density were assessed using bispectral cameras through differential image analysis. The draught force of the soil opposite to the direction of travel was measured with an electronic load cell sensor connected to a rigid tine mounted in front of the harrow. Optimal harrowing intensity levels were derived in previously implemented experiments, based on the weed control efficacy and yield gain. The assessments of crop leaf cover, weed density and soil density were combined via rules with the aforementioned optimal intensities, in a linguistic fuzzy inference system (LFIS). The system was evaluated in two field experiments that compared constant intensities with variable intensities inferred by the system. A higher weed density reduction could be achieved when the harrowing intensity was not kept constant along the cultivated plot. Varying the intensity tended to reduce the crop leaf cover, though slightly improving crop yield. A real-time intensity adjustment with this system is achievable, if the cameras are attached in the front and at the rear or sides of the harrow.

  16. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.

    Science.gov (United States)

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-07-29

    The multipath effect is one of the main error sources in the Global Satellite Navigation Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters the false lock problem in code tracking when applied to the binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design scheme is proposed, utilizing the truncated singular value decomposition method. This algorithm was performed for the BPSK signal and the BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals. The approximation results of the CCRWs are presented. Furthermore, the performance of the approximation results is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is performance degradation in the tracking jitter compared to the CCRW discriminator. However, the performance improvements in the multipath error envelope for the BOC(1,1) and BPSK signals make the discriminator attractive, and it can be applied to high-order BOC signals.
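
    The numerical tool named in the abstract, least-squares fitting via a truncated singular value decomposition, can be sketched in a few lines of Python (NumPy assumed); the design matrix, target response and truncation rank below are placeholders rather than an actual CCRW design.

```python
# Generic least-squares fit via truncated SVD: keep only the k largest
# singular values when solving min ||A x - b||.
import numpy as np

def truncated_svd_lstsq(A: np.ndarray, b: np.ndarray, k: int) -> np.ndarray:
    """Solve min ||A x - b|| using a rank-k truncated SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :]
    return Vk.T @ ((Uk.T @ b) / sk)    # x = V_k diag(1/s_k) U_k^T b

# toy usage: fit a 5-parameter waveform model to a noisy target response
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
b = A @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + 0.01 * rng.standard_normal(200)
x = truncated_svd_lstsq(A, b, k=4)
```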

  17. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. (comp.)

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
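
    Forward-mode automatic differentiation in its simplest form can be shown with dual numbers: each value carries its derivative, and every arithmetic operation applies the chain rule exactly, which is neither symbolic nor numeric differentiation. A minimal Python sketch, not tied to any particular tool in the bibliography:

```python
# Forward-mode AD with dual numbers: (value, derivative) pairs propagated
# through arithmetic by the chain rule.
import math
from dataclasses import dataclass

@dataclass
class Dual:
    val: float   # f(x)
    der: float   # f'(x)

    def __add__(self, o):  return Dual(self.val + o.val, self.der + o.der)
    def __sub__(self, o):  return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):  return Dual(self.val * o.val,
                                       self.der * o.val + self.val * o.der)
    def __truediv__(self, o):
        return Dual(self.val / o.val,
                    (self.der * o.val - self.val * o.der) / (o.val * o.val))

def sin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# derivative of f(x) = x * sin(x) at x = 1.2, exact to machine precision
x = Dual(1.2, 1.0)                 # seed derivative dx/dx = 1
f = x * sin(x)
print(f.val, f.der)                # f(1.2) and f'(1.2) = sin(1.2) + 1.2*cos(1.2)
```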

  18. Automatic PCM guard-band selector and calibrator

    Science.gov (United States)

    Noda, T. T.

    1974-01-01

    Automatic method for selection of proper guard band eliminates human error and speeds up calibration process. There is also an option which allows a single channel to be calibrated, independently of other channels. Entire system is designed on 3- by 4-inch printed-circuit cards and may be used with any pulse code modulation system.

  19. An imaging method of wavefront coding system based on phase plate rotation

    Science.gov (United States)

    Yi, Rigui; Chen, Xi; Dong, Liquan; Liu, Ming; Zhao, Yuejin; Liu, Xiaohua

    2018-01-01

    Wave-front coding has great prospects for extending the depth of field of an optical imaging system and reducing optical aberrations, but image quality and noise performance are inevitably reduced. Based on a theoretical analysis of the wave-front coding system and the phase function expression of the cubic phase plate, this paper exploits the fact that the phase function expression remains invariant in the new coordinate system when the phase plate is rotated by different angles around the z-axis, and proposes a method based on rotation of the phase plate and image fusion. First, when the phase plate is rotated by a certain angle around the z-axis, the shape and distribution of the PSF obtained on the image plane remain unchanged; the rotation angle and direction of the PSF are consistent with those of the phase plate. Then, the intermediate blurred image is filtered with the point spread function corresponding to the rotation adjustment. Finally, the reconstructed images are fused by the Laplacian pyramid image fusion method and the Fourier transform spectrum fusion method, and the results are evaluated subjectively and objectively. In this paper, we used Matlab to simulate the images. With the Laplacian pyramid image fusion method, the signal-to-noise ratio of the image is increased by 19%-27%, the clarity by 11%-15%, and the average gradient by 4%-9%. With the Fourier transform spectrum fusion method, the signal-to-noise ratio is increased by 14%-23%, the clarity by 6%-11%, and the average gradient by 2%-6%. The experimental results show that image processing by the above method can improve the quality and clarity of the restored image and can effectively preserve the image information.

  20. Non-coding RNA detection methods combined to improve usability, reproducibility and precision

    Directory of Open Access Journals (Sweden)

    Kreikemeyer Bernd

    2010-09-01

    Full Text Available Abstract Background Non-coding RNAs gain more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, combination and comparison of existing tools are difficult. Also, for genomic scans they often need to be incorporated in detection workflows using custom scripts, which decreases transparency and reproducibility. Results We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods--integrated by our framework--we determined four highly probable candidates. We verified all four candidates experimentally using RT-PCR. Conclusions We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL, version 3) at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.

  1. Evaluation of Monte Carlo Codes Regarding the Calculated Detector Response Function in NDP Method

    Energy Technology Data Exchange (ETDEWEB)

    Tuan, Hoang Sy Minh; Sun, Gwang Min; Park, Byung Gun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    The basis of NDP is the irradiation of a sample with a thermal or cold neutron beam and the subsequent release of charged particles due to neutron-induced exoergic charged particle reactions. Neutrons interact with the nuclei of elements and release mono-energetic charged particles, e.g. alpha particles or protons, and recoil atoms. The depth profile of the analyzed element can be obtained by making a linear transformation of the measured energy spectrum using the stopping power of the sample material. A few micrometers of the material can be analyzed nondestructively, and a depth resolution on the order of 10 nm can be obtained with the NDP method, depending on the material type. In the NDP method, one of the first steps of the analytical process is a channel-energy calibration. This calibration is normally made with the experimental measurement of a NIST Standard Reference Material sample (SRM-93a). In this study, several Monte Carlo (MC) codes were used to calculate the Si detector response function when this detector counted the energetic charged particles emitted from an analytical sample. In addition, these MC codes were also used to calculate the depth distributions of some light elements (¹⁰B, ³He, ⁶Li, etc.) in SRM-93a and SRM-2137 samples. These calculated profiles were compared with the experimental profiles and SIMS profiles. In this study, some popular MC neutron transport codes are tried and tested to calculate the detector response function in the NDP method. The simulations were modeled on the real CN-NDP system, which is a part of the Cold Neutron Activation Station (CONAS) at HANARO (KAERI). The MC simulations are very successful at predicting the alpha peaks in the measured energy spectrum; the net area difference between the measured and predicted alpha peaks is less than 1%. Remaining discrepancies may be explained by the use of a poor cross-section data set in the MC codes for the transport of low-energy lithium atoms inside the silicon substrate.

  2. Automatic sequences

    CERN Document Server

    Haeseler, Friedrich

    2003-01-01

    Automatic sequences are sequences which are produced by a finite automaton. Although they are not random, they may appear random. They are complicated in the sense of not being ultimately periodic, and they may look rather complicated in the sense that it may not be easy to name the rule by which the sequence is generated; however, there exists a rule which generates the sequence. The concept of automatic sequences has special applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.

  3. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2014-01-01

    Full Text Available As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, the applications of EO in continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to power-law probability distribution, generation of new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions with the dimension N=30 have shown that IRPEO is competitive or even better than the recently reported various genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO to other evolutionary algorithms such as original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.
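
    The key operations listed above can be condensed into a small Python sketch of a population-based EO loop; the separable sphere test function, bounds, power-law exponent and population size are assumptions of this illustration, not the exact IRPEO settings.

```python
# Sketch of population-based extremal optimization: per-variable local
# fitness, power-law selection of a bad component by rank, uniform random
# mutation, unconditional acceptance, and tracking of the best solution seen.
import numpy as np

def sphere_components(x):              # local (per-variable) fitness contributions
    return x ** 2

def eo_like(dim=30, pop=20, iters=2000, tau=1.5, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(pop, dim))   # real-coded random population
    ranks = np.arange(1, dim + 1)
    p = ranks ** (-tau)                        # power-law over rank
    p /= p.sum()
    best, best_f = None, np.inf
    for _ in range(iters):
        for i in range(pop):
            contrib = sphere_components(X[i])
            order = np.argsort(-contrib)       # worst component first
            j = order[rng.choice(dim, p=p)]    # power-law pick of a bad gene
            X[i, j] = rng.uniform(lo, hi)      # uniform mutation, accepted unconditionally
        f = (X ** 2).sum(axis=1)
        k = int(np.argmin(f))
        if f[k] < best_f:
            best, best_f = X[k].copy(), float(f[k])
    return best, best_f

x_star, f_star = eo_like()
```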

  4. An Effective Transform Unit Size Decision Method for High Efficiency Video Coding

    Directory of Open Access Journals (Sweden)

    Chou-Chen Wang

    2014-01-01

    Full Text Available High efficiency video coding (HEVC) is the latest video coding standard. HEVC can achieve higher compression performance than previous standards, such as MPEG-4, H.263, and H.264/AVC. However, HEVC requires enormous computational complexity in the encoding process due to its quadtree structure. In order to reduce the computational burden of the HEVC encoder, an early transform unit (TU) decision algorithm (ETDA) is adopted to prune the residual quadtree (RQT) at an early stage based on the number of nonzero DCT coefficients (called NNZ-ETDA) to accelerate the encoding process. However, the NNZ-ETDA cannot effectively reduce the computational load for sequences with active motion or rich texture. Therefore, in order to further improve the performance of NNZ-ETDA, we propose an adaptive RQT-depth decision for NNZ-ETDA (called ARD-NNZ-ETDA) by exploiting the high temporal-spatial correlation that exists in natural video sequences. Simulation results show that the proposed method can achieve a time improving ratio (TIR) of about 61.26%~81.48% when compared to the HEVC test model 8.1 (HM 8.1) with insignificant loss of image quality. Compared with the NNZ-ETDA, the proposed method can further achieve an average TIR of about 8.29%~17.92%.

  5. An automatic high precision registration method between large area aerial images and aerial light detection and ranging data

    Science.gov (United States)

    Du, Q.; Xie, D.; Sun, Y.

    2015-06-01

    The integration of digital aerial photogrammetry and Light Detection And Ranging (LiDAR) is an inevitable trend in the Surveying and Mapping field. We calculate the external orientation elements of images in the LiDAR coordinate system to realize automatic high precision registration between aerial images and LiDAR data. There are two ways to calculate the orientation elements. One is single image spatial resection using image matching 3D points that are registered to LiDAR. The other one is Position and Orientation System (POS) data supported aerotriangulation. The high precision registration points are selected as Ground Control Points (GCPs) instead of measuring GCPs manually during aerotriangulation. The registration experiments indicate that the method of registering aerial images and LiDAR points offers higher automation and precision compared with manual registration.

  6. Latent variable method for automatic adaptation to background states in motor imagery BCI

    Science.gov (United States)

    Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei

    2018-02-01

    Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variabilities in background states of a user. Usually, no detailed information on these states is available even during the training stage. Thus there is a need for a method which is capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method that is based on a probabilistic model with a discrete latent variable. In order to estimate the model’s parameters, we suggest using the expectation maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of asynchronous motor imagery paradigm, we applied this method to the real data from twelve able-bodied subjects with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of background states recognition (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by making decisions on target states weighted by posterior probabilities of background states at the prediction stage.

  7. Automatic MRI Quantifying Methods in Behavioral-Variant Frontotemporal Dementia Diagnosis

    DEFF Research Database (Denmark)

    Cajanus, Antti; Hall, Anette; Koikkalainen, Juha

    2018-01-01

    We also examined the role of the C9ORF72-related genetic status in the differentiation sensitivity. Methods: The MRI scans of 50 patients with bvFTD (17 C9ORF72 expansion carriers) were analyzed using 6 quantification methods as follows: voxel-based morphometry (VBM), tensor-based morphometry, volumetry (VOL), manifold learning, grading, and white-matter hyperintensities.

  8. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods

    Directory of Open Access Journals (Sweden)

    Kim HyungTae

    2015-01-01

    Full Text Available Automatic lighting (auto-lighting) is a function that maximizes the image quality of a vision inspection system by adjusting the light intensity and color. In most inspection systems, a single color light source is used, and an equal step search is employed to determine the maximum image quality. However, when a mixed light source is used, the number of iterations becomes large, and therefore, a rapid search method must be applied to reduce their number. Derivative optimum search methods follow the tangential direction of a function and are usually faster than other methods. In this study, multi-dimensional forms of derivative optimum search methods are applied to obtain the maximum image quality considering a mixed-light source. The auto-lighting algorithms were derived from the steepest descent and conjugate gradient methods, which have N driving-voltage inputs and one output, the image quality. Experiments in which the proposed algorithm was applied to semiconductor patterns showed that a reduced number of iterations is required to determine the locally maximized image quality.
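
    A minimal Python sketch of the derivative optimum search idea, using steepest ascent with a finite-difference gradient; the image-quality function, step size and voltage range are placeholders, and in a real system the quality function would grab a camera frame at the given voltages and compute a sharpness or contrast metric.

```python
# Steepest-ascent auto-lighting: adjust N driving voltages along the
# estimated gradient of a scalar image-quality score.
import numpy as np

def image_quality(v: np.ndarray) -> float:
    # hypothetical stand-in: a smooth unimodal response peaking at v_opt
    v_opt = np.array([2.1, 3.4, 1.7])
    return -float(np.sum((v - v_opt) ** 2))

def steepest_ascent(v0, f, step=0.2, eps=1e-2, max_iter=100):
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        grad = np.array([                      # forward-difference gradient,
            (f(v + eps * e) - f(v)) / eps      # one extra image per channel
            for e in np.eye(len(v))
        ])
        if np.linalg.norm(grad) < 1e-3:
            break
        v = np.clip(v + step * grad, 0.0, 5.0)  # stay inside the voltage range
    return v

v_final = steepest_ascent([0.5, 0.5, 0.5], image_quality)
```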

  9. Algorithmic complexity for psychology: a user-friendly implementation of the coding theorem method.

    Science.gov (United States)

    Gauvrit, Nicolas; Singmann, Henrik; Soler-Toscano, Fernando; Zenil, Hector

    2016-03-01

    Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g. of length 5-50). However, with the newly developed coding theorem method the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R-package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings, but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.

  10. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

    International Nuclear Information System (INIS)

    Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

    1993-01-01

    The SWAAM-LT Code, developed for analysis of long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Also, some typical results of the code predictions for available large scale tests are presented. Test data for the steam generator design with the cover-gas feature and without the cover-gas feature are available and analyzed. The capabilities and limitations of the code are then discussed in light of the comparison between the code prediction and the test data

  11. Wavelet-Based Bayesian Methods for Image Analysis and Automatic Target Recognition

    National Research Council Canada - National Science Library

    Nowak, Robert

    2001-01-01

    .... We have developed two new techniques. First, we have developed a wavelet-based approach to image restoration and deconvolution problems using Bayesian image models and an alternating-maximization method...

  12. An Automatic Parameter Identification Method for a PMSM Drive with LC-Filter

    DEFF Research Database (Denmark)

    Bech, Michael Møller; Christensen, Jeppe Haals; Weber, Magnus L.

    2016-01-01

    This paper presents a method for stand-still identification of parameters in a permanent magnet synchronous motor (PMSM) fed from an inverter equipped with a three-phase LC-type output filter. Using a special random modulation strategy, the method uses the inverter for broad-band excitation...... of the PMSM fed through an LC-filter. Based on the measured current response, model parameters for both the filter (L, R, C) and the PMSM (L and R) are estimated: First, the frequency response of the system is estimated using the Welch Modified Periodogram method and then an optimization algorithm is used to find...... the parameters in an analytical reference model that minimize the model error. To demonstrate the practical feasibility of the method, a fully functional drive including an embedded real-time controller has been built. In addition to modulation, data acquisition and control the whole parameter identification...
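
    A much-simplified Python sketch of the identification chain (SciPy assumed): the frequency response is estimated from broad-band excitation data with the Welch method, and the parameters of an analytical model are then fitted by minimizing the model error. For brevity the model here is a single series R-L branch rather than the full LC-filter plus PMSM model, and the synthetic data and signal names are assumptions of this sketch.

```python
# Welch-based frequency-response estimate followed by a least-squares fit
# of an analytical R-L admittance model 1/(R + jwL).
import numpy as np
from scipy.signal import welch, csd
from scipy.optimize import least_squares

fs = 10_000.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
u = rng.standard_normal(t.size)                    # broad-band voltage excitation

R_true, L_true = 0.8, 2e-3                         # synthetic "measured" current:
f_grid = np.fft.rfftfreq(t.size, 1.0 / fs)         # filter u through 1/(R + jwL)
H_true = 1.0 / (R_true + 1j * 2 * np.pi * f_grid * L_true)
i_meas = np.fft.irfft(np.fft.rfft(u) * H_true, n=t.size)
i_meas += 1e-3 * rng.standard_normal(t.size)

f, Puu = welch(u, fs=fs, nperseg=2048)             # Welch auto-spectrum
_, Pui = csd(u, i_meas, fs=fs, nperseg=2048)       # cross-spectrum
H_est = Pui / Puu                                  # H1 frequency-response estimate

def residual(p):
    R, L = p
    H_model = 1.0 / (R + 1j * 2 * np.pi * f * L)
    return np.abs(H_est - H_model)

R_fit, L_fit = least_squares(residual, x0=[1.0, 1e-3]).x
```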

  13. Signal Compression in Automatic Ultrasonic testing of Rails

    Directory of Open Access Journals (Sweden)

    Tomasz Ciszewski

    2007-01-01

    Full Text Available Full recording of the most important information carried by the ultrasonic signals makes it possible to perform statistical analysis of the measurement data. Statistical analysis of the results gathered during automatic ultrasonic tests yields data which, together with the features of the measuring method, differential lossy coding and traditional lossless data compression (Huffman coding, dictionary coding), lead to a comprehensive, efficient data compression algorithm. The subject of the article is to present the algorithm and the benefits gained by using it in comparison to alternative compression methods. Storing a large amount of data makes it possible to create an electronic catalogue of ultrasonic defects. Once it is created, training of the future qualification system on new solutions of the automatic rail-testing equipment will be possible.

  14. Phase-Phase and Phase-Code Methods Modification for Precise Detecting and Predicting the GPS Cycle Slip Error

    OpenAIRE

    Elashiry Ahmed A.; Youssef Mohamed A.; Abdel Hamid Mohamed A.

    2015-01-01

    There are three well-established detection methods for the cycle slip error, which are: the Doppler measurement method, the Phase-Code differencing method, and the Phase-Phase differencing method. The first method depends on the comparison between observables and the fact that Doppler measurements are immune to the cycle slip error. This method is considered the most precise method for cycle slip detection, because it succeeds in detecting and predicting the smallest cycle slip size (1 cycle) in case the local...

  15. A method for blind automatic evaluation of noise variance in images based on bootstrap and myriad operations

    Science.gov (United States)

    Lukin, Vladimir V.; Abramov, Sergey K.; Vozel, Benoit; Chehdi, Kacem

    2005-10-01

    Multichannel (multispectral) remote sensing (MRS) is widely used for various applications nowadays. However, original images are commonly corrupted by noise and other distortions. This prevents reliable retrieval of useful information from remote sensing data. Because of this, image pre-filtering and/or reconstruction are typical stages of multichannel image processing. The majority of modern efficient methods for image pre-processing require a priori information concerning the noise type and its statistical characteristics. Thus, there is a great need for automatic blind methods for determination of the noise type and its characteristics. However, almost all such methods fail to perform appropriately well if the image under consideration contains a large percentage of texture regions, details and edges. In this paper we demonstrate that by applying the bootstrap it is possible to obtain rather accurate estimates of the noise variance that can be used either as final or preliminary ones. Different quantiles (order statistics) are used as initial estimates of the mode location for the distribution of local noise variance estimates, and then the bootstrap is applied for their joint analysis. To further improve the accuracy of the noise variance estimates, it is proposed, under certain conditions, to apply a myriad operation with tunable parameter k set in accordance with the preliminary estimate obtained by the bootstrap. Numerical simulation results confirm the applicability of the proposed approach and provide data for evaluating the method's accuracy.
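
    The spirit of the approach can be sketched in Python (NumPy assumed): local variances are computed in small blocks, and the bootstrap is used to stabilize a quantile-based estimate of their distribution; lower quantiles guard against texture-inflated blocks. Block size, quantile and number of resamples are assumptions here, and the myriad refinement step is not reproduced.

```python
# Blind noise-variance estimate: bootstrap-averaged quantile of local
# block variances of the image.
import numpy as np

def blind_noise_variance(img, block=8, q=0.5, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    h, w = img.shape
    locals_ = np.array([
        img[i:i + block, j:j + block].var(ddof=1)
        for i in range(0, h - block + 1, block)
        for j in range(0, w - block + 1, block)
    ])
    boot = [np.quantile(rng.choice(locals_, size=locals_.size, replace=True), q)
            for _ in range(n_boot)]
    return float(np.mean(boot))

# toy check: smooth ramp image plus Gaussian noise with true variance 25
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
rng = np.random.default_rng(4)
noisy = 50 * (x + y) + rng.normal(0.0, 5.0, size=x.shape)
print(blind_noise_variance(noisy))
```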

  16. A New Method Of Gene Coding For A Genetic Algorithm Designed For Parametric Optimization

    Directory of Open Access Journals (Sweden)

    Radu BELEA

    2003-12-01

    Full Text Available In a parametric optimization problem the genes code the real parameters of the fitness function. There are two coding techniques known under the names of binary coded genes and real coded genes. The comparison between these two has been a controversial subject since the first papers about parametric optimization appeared. An objective analysis of the advantages and disadvantages of the two coding techniques is difficult to perform while information in different formats is being compared. The present paper suggests a gene coding technique that uses the same format for both binary coded genes and real coded genes. After unifying the representation of the real parameters, the following criterion is applied: the differences between the two techniques are measured statistically by the effect of the genetic operators on a set of randomly generated individuals.

  17. SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs.

    Energy Technology Data Exchange (ETDEWEB)

    McBride, Cory L. (Elemental Technologies, American Fort, UT); Yarberry, Victor R.; Schmidt, Rodney Cannon; Meyers, Ray J. (Elemental Technologies, American Fort, UT)

    2006-11-01

    This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.

  18. TreePM: A Code for Cosmological N-Body Simulations

    Indian Academy of Sciences (India)

    We describe the TreePM method for carrying out large N-Body simulations to study formation and evolution of the large scale structure in the Universe. This method is a combination of Barnes and Hut tree code and Particle-Mesh code. It combines the automatic inclusion of periodic boundary conditions of PM simulations ...

  19. Adaptive Morse code recognition using variable degree variable step size LMS for persons with disabilities.

    Science.gov (United States)

    Yang, C H

    1998-01-01

    In this paper, we applied a variable degree, variable step size LMS algorithm to adaptive Morse code recognition for persons with impaired hand coordination and dexterity. The automatic recognition of Morse code by the disabled is difficult because they cannot maintain a stable typing rate. Therefore, a suitable adaptive automatic recognition method is needed. In this adaptive Morse code recognition method, three processes are involved: character separation, character recognition, and adaptive processing. Statistical analyses demonstrated that the proposed method resulted in a better recognition rate compared to alternative methods from the literature.
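
    A generic variable step size LMS filter, sketched in Python to show the adaptation principle the paper builds on (the published "variable degree variable step size" rule is more elaborate): the step size itself is adapted from the instantaneous error, so the filter reacts quickly when the typing rate drifts and settles when it is stable. All constants are illustrative.

```python
# Variable step size LMS: the adaptation gain mu grows with the squared
# error and decays otherwise, within fixed bounds.
import numpy as np

def vss_lms(x, d, n_taps=4, mu0=0.05, alpha=0.97, gamma=0.01,
            mu_min=1e-4, mu_max=0.5):
    """Track desired signal d from input x; returns predictions and errors."""
    w = np.zeros(n_taps)
    mu = mu0
    y_hat = np.zeros(len(x))
    err = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]          # most recent samples first
        y_hat[n] = w @ u
        err[n] = d[n] - y_hat[n]
        mu = np.clip(alpha * mu + gamma * err[n] ** 2, mu_min, mu_max)
        w += mu * err[n] * u               # LMS weight update
    return y_hat, err

# toy usage: durations of successive key presses with a slowly drifting rate
rng = np.random.default_rng(2)
dur = 0.2 + 0.05 * np.sin(np.linspace(0, 6, 300)) + 0.01 * rng.standard_normal(300)
pred, e = vss_lms(dur, dur)   # one-step-ahead prediction of the next duration
```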

  20. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin

    2014-01-01

    This paper describes improvement and comparison of analytical methods for simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron...... hydroxide and manganese dioxide co-precipitation and evaporation were compared and the applicability of different techniques was discussed in order to evaluate and establish the optimal method for in vivo radioassay program. The analytical results indicate that the various sample pre......-precipitation step, yet, the occurrence of sulfur compounds in the processed sample deteriorated the analytical performance of the ensuing extraction chromatographic separation with chemical yields of...

  1. ANALYSIS OF THE DISTANCES COVERED BY FIRST DIVISION BRAZILIAN SOCCER PLAYERS OBTAINED WITH AN AUTOMATIC TRACKING METHOD

    Directory of Open Access Journals (Sweden)

    Ricardo M. L. Barros

    2007-06-01

    Full Text Available Methods based on visual estimation are still the most widely used analysis of the distances covered by soccer players during matches, and most descriptions available in the literature were obtained using such an approach. Recently, systems based on computer vision techniques have appeared and the very first results are available for comparison. The aim of the present study was to analyse the distances covered by Brazilian soccer players and compare the results to those of European players, both measured by automatic tracking systems. Four regular Brazilian First Division Championship matches between different teams were filmed. Applying a previously developed automatic tracking system (DVideo, Campinas, Brazil), the results of 55 outfield players who participated in the whole game (n = 55) are presented. The mean distance covered, standard deviation (s) and coefficient of variation (cv) after 90 minutes were 10,012 m, s = 1,024 m and cv = 10.2%, respectively. The results of a three-way ANOVA according to playing positions showed that the distances covered by external defenders (10,642 ± 663 m), central midfielders (10,476 ± 702 m) and external midfielders (10,598 ± 890 m) were greater than those of forwards (9,612 ± 772 m), and forwards covered greater distances than central defenders (9,029 ± 860 m). The greatest distances were covered standing, walking, or jogging, 5,537 ± 263 m, followed by moderate-speed running, 1,731 ± 399 m; low-speed running, 1,615 ± 351 m; high-speed running, 691 ± 190 m; and sprinting, 437 ± 171 m. The mean distance covered in the first half was 5,173 m (s = 394 m, cv = 7.6%), significantly greater (p < 0.001) than the mean value of 4,808 m (s = 375 m, cv = 7.8%) in the second half. A minute-by-minute analysis revealed that after eight minutes of the second half, player performance had already decreased and this reduction was maintained throughout the second half.

  2. Automatic MRI Quantifying Methods in Behavioral-Variant Frontotemporal Dementia Diagnosis

    Directory of Open Access Journals (Sweden)

    Antti Cajanus

    2018-02-01

    Full Text Available Aims: We assessed the value of automated MRI quantification methods in the differential diagnosis of behavioral-variant frontotemporal dementia (bvFTD) from Alzheimer disease (AD), Lewy body dementia (LBD), and subjective memory complaints (SMC). We also examined the role of the C9ORF72-related genetic status in the differentiation sensitivity. Methods: The MRI scans of 50 patients with bvFTD (17 C9ORF72 expansion carriers) were analyzed using 6 quantification methods as follows: voxel-based morphometry (VBM), tensor-based morphometry, volumetry (VOL), manifold learning, grading, and white-matter hyperintensities. Each patient was then individually compared to an independent reference group in order to attain diagnostic suggestions. Results: Only VBM and VOL showed utility in correctly identifying bvFTD from our set of data. The overall classification of bvFTD with VOL + VBM achieved a total sensitivity of 60%. Using VOL + VBM, 32% were misclassified as having LBD. There was a trend of higher values for classification sensitivity of the C9ORF72 expansion carriers than noncarriers. Conclusion: VOL, VBM, and their combination are effective in differential diagnostics between bvFTD and AD or SMC. However, MRI atrophy profiles for bvFTD and LBD are too similar for a reliable differentiation with the quantification methods tested in this study.

  3. Automatic and efficient methods applied to the binarization of a subway map

    Science.gov (United States)

    Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan

    2015-12-01

    The purpose of this paper is to study efficient methods for image binarization, the objective of the work being the binarization of metro maps. The goal is to binarize while preventing noise from disturbing the reading of the subway stations. Different methods have been tested. In this respect, the method given by Otsu gives particularly interesting results. The difficulty of the binarization is the choice of the threshold, in order to reconstruct an image as close as possible to reality. Vectorization is a step subsequent to binarization. It consists of retrieving the coordinates of the points containing information and storing them in two matrices X and Y. Subsequently, these matrices can be exported to a 'CSV' (Comma Separated Value) file format, enabling us to process them in a variety of software, including Excel. The algorithm requires considerable computation time in Matlab because it is composed of two nested "for" loops. "for" loops are handled inefficiently by Matlab, especially when nested. This penalizes the computation time, but it seems to be the only method to do this.
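
    Otsu's threshold selection, the method that gave the most interesting results here, can be implemented compactly in Python (NumPy assumed): the threshold maximizing the between-class variance of the grayscale histogram is chosen automatically.

```python
# Otsu thresholding: pick the gray level that maximizes the between-class
# variance of the histogram, then binarize.
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """gray: 2-D array of uint8 values; returns the Otsu threshold (0-255)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[np.isnan(sigma_b2)] = 0.0
    return int(np.argmax(sigma_b2))

def binarize(gray: np.ndarray) -> np.ndarray:
    # station labels and lines become foreground, the background is dropped
    return (gray > otsu_threshold(gray)).astype(np.uint8)
```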

  4. 26 CFR 1.467-9 - Effective dates and automatic method changes for certain agreements.

    Science.gov (United States)

    2010-04-01

    ... (e), the scope limitations in section 4.02 of Rev. Proc. 98-60 are not applicable. A method change in... applies only to rental agreements described in § 1.467-8. (c) Application of regulation project IA-292-84... before May 18, 1999, a taxpayer may choose to apply the provisions of regulation project IA-292-84 (1996...

  5. Manual and Fast C Code Optimization

    Directory of Open Access Journals (Sweden)

    Mohammed Fadle Abdulla

    2010-01-01

    Full Text Available Developing a high-performance application through code optimization places a greater responsibility on the programmer. While most existing compilers attempt to automatically optimize the program code, manual techniques remain the predominant method for performing optimization. Deciding where to try to optimize code is difficult, especially for large complex applications. For manual optimization, programmers can use their experience in writing code, and then use a software profiler in order to collect and analyze performance data from the code. In this work, we have gathered the main experiences that can be applied to improve the style of writing programs in the C language, and we present an implementation of manual code optimization using the Intel VTune profiler. The paper includes two case studies to illustrate our optimization on the Heap Sort and Factorial functions.

  6. 10 CFR 431.134 - Uniform test methods for the measurement of energy consumption and water consumption of automatic...

    Science.gov (United States)

    2010-01-01

    ... consumption and water consumption of automatic commercial ice makers. 431.134 Section 431.134 Energy... of energy consumption and water consumption of automatic commercial ice makers. (a) Scope. This... consumption, but instead calculate the energy use rate (kWh/100 lbs Ice) by dividing the energy consumed...

  7. A method for automatic segmentation and splitting of hyperspectral images of raspberry plants collected in field conditions

    Directory of Open Access Journals (Sweden)

    Dominic Williams

    2017-11-01

    Full Text Available Abstract Hyperspectral imaging is a technology that can be used to monitor plant responses to stress. Hyperspectral images have a full spectrum for each pixel in the image, 400–2500 nm in this case, giving detailed information about the spectral reflectance of the plant. Although this technology has been used in laboratory-based controlled lighting conditions for early detection of plant disease, the transfer of such technology to imaging plants in field conditions presents a number of challenges. These include problems caused by varying light levels and difficulties of separating the target plant from its background. Here we present an automated method that has been developed to segment raspberry plants from the background using a selected spectral ratio combined with edge detection. Graph theory was used to minimise a cost function to detect the continuous boundary between uninteresting plants and the area of interest. The method includes automatic detection of a known reflectance tile which was kept constantly within the field of view for all image scans. A method to split images containing rows of multiple raspberry plants into individual plants was also developed. Validation was carried out by comparison of plant height and density measurements with manually scored values. A reasonable correlation was found between these manual scores and measurements taken from the images (r2 = 0.75 for plant height). These preliminary steps are an essential requirement before detailed spectral analysis of the plants can be achieved.

  8. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    Full Text Available Classification rule set is important for Land Cover classification, which refers to features and decision rules. The selection of features and decision rules is based on an iterative trial-and-error approach that is often utilized in GEOBIA; however, it is time-consuming and has poor versatility. This study has put forward a rule set building method for Land Cover classification based on human knowledge and machine learning. The use of machine learning is to build rule sets effectively, which overcomes the iterative trial-and-error approach. The use of human knowledge is to address the shortcomings of existing machine learning methods, namely insufficient usage of prior knowledge, and to improve the versatility of rule sets. A two-step workflow has been introduced: firstly, an initial rule set is built based on Random Forest and a CART decision tree; secondly, the initial rule set is analyzed and validated based on human knowledge, where a statistical confidence interval is used to determine its threshold. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for Land Cover classification semi-automatically, and that there are static features for different land cover classes.

  9. a Semi-Automatic Rule Set Building Method for Urban Land Cover Classification Based on Machine Learning and Human Knowledge

    Science.gov (United States)

    Gu, H. Y.; Li, H. T.; Liu, Z. Y.; Shao, C. Y.

    2017-09-01

    Classification rule set is important for Land Cover classification, which refers to features and decision rules. The selection of features and decision rules is based on an iterative trial-and-error approach that is often utilized in GEOBIA; however, it is time-consuming and has poor versatility. This study has put forward a rule set building method for Land Cover classification based on human knowledge and machine learning. The use of machine learning is to build rule sets effectively, which overcomes the iterative trial-and-error approach. The use of human knowledge is to address the shortcomings of existing machine learning methods, namely insufficient usage of prior knowledge, and to improve the versatility of rule sets. A two-step workflow has been introduced: firstly, an initial rule set is built based on Random Forest and a CART decision tree; secondly, the initial rule set is analyzed and validated based on human knowledge, where a statistical confidence interval is used to determine its threshold. The test site is located in Potsdam City. We utilised the TOP, DSM and ground truth data. The results show that the method can determine a rule set for Land Cover classification semi-automatically, and that there are static features for different land cover classes.

  10. An automatic contour propagation method to follow parotid gland deformation during head-and-neck cancer tomotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Faggiano, E; Scalco, E; Rizzo, G [Istituto di Bioimmagini e Fisiologia Molecolare (IBFM), CNR, Milan (Italy); Fiorino, C; Broggi, S; Cattaneo, M; Maggiulli, E; Calandrino, R [Department of Medical Physics, San Raffaele Scientific Institute, Milan (Italy); Dell' Oca, I; Di Muzio, N, E-mail: fiorino.claudio@hsr.it [Department of Radiotherapy, San Raffaele Scientific Institute, Milan (Italy)

    2011-02-07

    We developed an efficient technique to auto-propagate parotid gland contours from planning kVCT to daily MVCT images of head-and-neck cancer patients treated with helical tomotherapy. The method deformed a 3D surface mesh constructed from manual kVCT contours by B-spline free-form deformation to generate optimal and smooth contours. Deformation was calculated by elastic image registration between kVCT and MVCT images. Data from ten head-and-neck cancer patients were considered and manual contours by three observers were included in both kVCT and MVCT images. A preliminary inter-observer variability analysis demonstrated the importance of contour propagation in tomotherapy application: a high variability was reported in MVCT parotid volume estimation (p = 0.0176, ANOVA test) and a larger uncertainty of MVCT contouring compared with kVCT was demonstrated by DICE and volume variability indices (Wilcoxon signed rank test, p < 10⁻⁴ for both indices). The performance analysis of our method showed no significant differences between automatic and manual contours in terms of volumes (p > 0.05, in a multiple comparison Tukey test), center-of-mass distances (p = 0.3043, ANOVA test), DICE values (p = 0.1672, Wilcoxon signed rank test) and average and maximum symmetric distances (p = 0.2043, p = 0.8228 Wilcoxon signed rank tests). Results suggested that our contour propagation method could successfully substitute human contouring on MVCT images.

  11. Improved differential pulse code modulation-block truncation coding method adopting two-level mean squared error near-optimal quantizers

    Science.gov (United States)

    Choi, Kang-Sun; Ko, Sung-Jea

    2011-04-01

    The conventional hybrid method of block truncation coding (BTC) and differential pulse code modulation (DPCM), namely the DPCM-BTC method, offers better rate-distortion performance than the standard BTC. However, the quantization error in the hybrid method is easily increased for large block sizes due to the use of two representative levels in BTC. In this paper, we first derive a bivariate quadratic function representing the mean squared error (MSE) between the original block and the block reconstructed in the DPCM framework. The near-optimal representatives obtained by quantizing the minimum of the derived function can prevent the rapid increase of the quantization error. Experimental results show that the proposed method improves peak signal-to-noise ratio performance by up to 2 dB at 1.5 bit/pixel (bpp) and by 1.2 dB even at a low bit rate of 1.1 bpp as compared with the DPCM-BTC method without optimization. Even with the additional computation for the quantizer optimization, the computational complexity of the proposed method is still much lower than those of transform-based compression techniques.
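
    For reference, standard BTC of a single block in Python shows the role of the two representative levels; the paper's contribution replaces these moment-preserving levels with MSE near-optimal ones inside a DPCM loop, which is not reproduced in this sketch.

```python
# Standard (moment-preserving) block truncation coding of one block:
# a 1-bit-per-pixel bitmap plus two representative levels a and b.
import numpy as np

def btc_block(block: np.ndarray):
    """Encode a block as (bitmap, low level a, high level b), then decode."""
    m, s = block.mean(), block.std()
    bitmap = block >= m                          # 1 bit per pixel
    q = int(bitmap.sum())                        # pixels above the mean
    n = block.size
    if q in (0, n):                              # flat block: single level
        a = b = m
    else:
        a = m - s * np.sqrt(q / (n - q))         # moment-preserving levels
        b = m + s * np.sqrt((n - q) / q)
    recon = np.where(bitmap, b, a)
    return bitmap, a, b, recon

block = np.array([[121, 114,  56,  47],
                  [ 37, 200, 247, 255],
                  [ 16,   0,  12, 169],
                  [ 43,   5,   7, 251]], dtype=float)
bitmap, a, b, recon = btc_block(block)
```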

  12. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
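
    A minimal Python tau-leap simulation of a single-species birth-death process illustrates the discretization whose weak error is expanded above: over each step of length tau, the number of firings of every reaction is approximated by a Poisson variate with frozen propensities. Rates, tau and the initial state are illustrative.

```python
# Tau-leap discretization of a birth-death process: Poisson increments with
# propensities held constant over each leap.
import numpy as np

def tau_leap_birth_death(x0=100, birth=1.0, death=0.02, tau=0.1,
                         t_end=50.0, seed=0):
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    path = [(t, x)]
    while t < t_end:
        a_birth = birth            # propensities frozen over the leap
        a_death = death * x
        x += rng.poisson(a_birth * tau) - rng.poisson(a_death * tau)
        x = max(x, 0)              # crude guard against negative populations
        t += tau
        path.append((t, x))
    return np.array(path)

path = tau_leap_birth_death()      # mean should settle near birth/death = 50
```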

  13. An automatic image-based modelling method applied to forensic infography.

    Directory of Open Access Journals (Sweden)

    Sandra Zancajo-Blazquez

    Full Text Available This paper presents a new method based on 3D reconstruction from images that demonstrates the utility and integration of close-range photogrammetry and computer vision as an efficient alternative to modelling complex objects and scenarios of forensic infography. The results obtained confirm the validity of the method compared to other existing alternatives as it guarantees the following: (i flexibility, permitting work with any type of camera (calibrated and non-calibrated, smartphone or tablet and image (visible, infrared, thermal, etc.; (ii automation, allowing the reconstruction of three-dimensional scenarios in the absence of manual intervention, and (iii high quality results, sometimes providing higher resolution than modern laser scanning systems. As a result, each ocular inspection of a crime scene with any camera performed by the scientific police can be transformed into a scaled 3d model.

  14. An automatic formulation of inverse free second moment method for algebraic systems

    International Nuclear Information System (INIS)

    Shakshuki, Elhadi; Ponnambalam, Kumaraswamy

    2002-01-01

    In systems with probabilistic uncertainties, an estimation of reliability requires at least the first two moments. In this paper, we focus on probabilistic analysis of linear systems. The important tasks in this analysis are the formulation and the automation of the moment equations. The main objective of the formulation is to provide at least means and variances of the output variables with at least a second-order accuracy. The objective of the automation is to reduce the storage and computational complexities required for implementing (automating) those formulations. This paper extends the recent work done to calculate the first two moments of a set of random algebraic linear equations by developing a stamping procedure to facilitate its automation. The new method has an additional advantage of being able to solve problems when the mean matrix of a system is singular. Lastly, from the point of view of storage, computational complexity and accuracy, a comparison between the new method and another recently developed first order second moment method is made with numerical examples
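
    As a small numerical illustration of what a second moment method must reproduce, the first two output moments of a linear relation y = A x + b with random inputs follow directly from the input mean and covariance (mean_y = A mean_x + b, cov_y = A cov_x A^T). The Python sketch below checks this against Monte Carlo; it does not show the paper's stamping procedure or the treatment of a random or singular system matrix.

```python
# Propagation of the first two moments through a deterministic linear map,
# verified by Monte Carlo sampling.
import numpy as np

A = np.array([[2.0, -1.0],
              [0.5,  3.0]])
b = np.array([1.0, 0.0])

mean_x = np.array([4.0, 2.0])
cov_x = np.array([[0.10, 0.02],
                  [0.02, 0.05]])

mean_y = A @ mean_x + b
cov_y = A @ cov_x @ A.T            # exact for a deterministic A

rng = np.random.default_rng(3)
xs = rng.multivariate_normal(mean_x, cov_x, size=200_000)
ys = xs @ A.T + b
assert np.allclose(ys.mean(axis=0), mean_y, atol=1e-2)
assert np.allclose(np.cov(ys.T), cov_y, atol=1e-2)
```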

  15. [Automatic classification method of star spectrum data based on classification pattern tree].

    Science.gov (United States)

    Zhao, Xu-Jun; Cai, Jiang-Hui; Zhang, Ji-Fu; Yang, Hai-Feng; Ma, Yang

    2013-10-01

    Frequent patterns, which appear frequently in a data set, play an important role in data mining. For stellar spectrum classification tasks, a classification rule mining method based on a classification pattern tree is presented on the basis of frequent patterns. The procedure is as follows. Firstly, a new tree structure, i.e., the classification pattern tree, is introduced based on the different frequencies of stellar spectral attributes in the database and their different importance for classification. The related concepts and the construction method of the classification pattern tree are also described in this paper. Then, the characteristics of the stellar spectrum are mapped to the classification pattern tree. Two modes, top-down and bottom-up, are used to traverse the classification pattern tree and extract the classification rules. Meanwhile, the concept of pattern capability is introduced to adjust the number of classification rules and improve the construction efficiency of the classification pattern tree. Finally, the SDSS (Sloan Digital Sky Survey) stellar spectral data provided by the National Astronomical Observatory are used to verify the accuracy of the method. The results show that a higher classification accuracy has been achieved.

  16. System and method of self-properties for an autonomous and automatic computer environment

    Science.gov (United States)

    Hinchey, Michael G. (Inventor); Sterritt, Roy (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments self health/urgency data and environment health/urgency data may be transmitted externally from an autonomic element. Other embodiments may include transmitting the self health/urgency data and environment health/urgency data together on a regular basis similar to the lub-dub of a heartbeat. Yet other embodiments may include a method for managing a system based on the functioning state and operating status of the system, wherein the method may include processing received signals from the system indicative of the functioning state and the operating status to obtain an analysis of the condition of the system, generating one or more stay alive signals based on the functioning status and the operating state of the system, transmitting the stay-alive signal, transmitting self health/urgency data, and transmitting environment health/urgency data. Still other embodiments may include an autonomic element that includes a self monitor, a self adjuster, an environment monitor, and an autonomic manager.

  17. A robust automatic leukocyte recognition method based on island-clustering texture

    Directory of Open Access Journals (Sweden)

    Xiaoshun Li

    2016-01-01

    Full Text Available A leukocyte recognition method for human peripheral blood smears based on island-clustering texture (ICT) is proposed. By analyzing the features of the five typical classes of leukocyte images, a new ICT model is established. Firstly, some feature points are extracted in a gray leukocyte image by mean-shift clustering to be the centers of islands. Secondly, region growing is employed to create the island regions, in which the seeds are just these feature points. The distribution of these islands describes a new texture. Finally, a distinctive parameter vector is created by combining the ICT features of these islands with the geometric features of the leukocyte. The five typical classes of leukocytes can then be recognized successfully at a correct recognition rate of more than 92.3% with a total sample of 1310 leukocytes. Experimental results show the feasibility of the proposed method. Further analysis reveals that the method is robust and the results can provide important information for disease diagnosis.

  18. Set up an Arc Welding Code with Enthalpy Method in Upwind Scheme

    Science.gov (United States)

    Ho, Je-Ee.

    2010-05-01

    In this study, a numerical code based on the enthalpy method with an upwind scheme is proposed to estimate the distribution of thermal stress in the molten pool, which is primarily determined by the type of input power and the travel speed of the heating source. To predict crack defects inside the workpiece, a simulation program satisfying diagonal dominance and the Scarborough criterion provides a stable iteration. Meanwhile, an experiment using the robot arm "DR-400" to provide steady and continuous arc welding was also conducted to verify the simulated results. By checking the consistency between the molten pool boundary identified by contrast shading and the simulated melting contour on the surface of the workpiece, the validity of the model proposed for predicting thermal cracks was successfully confirmed.
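
    The abstract does not give the discretisation details, but the enthalpy method with an upwind scheme can be sketched in one dimension: the enthalpy field is advanced explicitly, convective transport relative to the moving heat source is upwinded, and temperature is recovered from enthalpy through a latent-heat plateau. All material properties, the Gaussian source, and the grid parameters below are arbitrary illustrative values, not the paper's data.

```python
import numpy as np

# --- illustrative material data and grid (not the paper's values) ---
rho, cp, k = 7800.0, 500.0, 30.0      # density, heat capacity, conductivity
L, T_melt = 2.7e5, 1800.0             # latent heat, melting temperature
nx, dx, dt = 200, 1e-3, 1e-3          # grid spacing and explicit time step
u = 5e-3                              # travel speed of the heat source [m/s]
q0, sigma = 2e9, 3e-3                 # Gaussian source amplitude and width

x = np.arange(nx) * dx
H = np.full(nx, rho * cp * 300.0)     # enthalpy field, initially at 300 K

def temperature(H):
    """Invert the enthalpy-temperature relation with an isothermal melting plateau."""
    T = H / (rho * cp)
    solidus = rho * cp * T_melt
    liquidus = solidus + rho * L
    T = np.where(H > liquidus, T_melt + (H - liquidus) / (rho * cp), T)
    T = np.where((H >= solidus) & (H <= liquidus), T_melt, T)
    return T

for step in range(500):
    T = temperature(H)
    src = q0 * np.exp(-((x - u * step * dt - 0.05) / sigma) ** 2)
    # diffusion (central difference) + advection (first-order upwind for u > 0) + source
    lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2       # periodic BC for brevity
    dTdx_up = (T - np.roll(T, 1)) / dx
    H += dt * (k * lap - rho * cp * u * dTdx_up + src)

print("peak temperature [K]:", temperature(H).max())
```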

  19. Solution of the neutronics code dynamic benchmark by finite element method

    Science.gov (United States)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.

  20. The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy

    CERN Document Server

    Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F

    2010-01-01

    Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC applications are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH) Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron Emission Tomography / Computed Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+ activity measurement in order to infer indirect infor...

  1. Quantum image pseudocolor coding based on the density-stratified method

    Science.gov (United States)

    Jiang, Nan; Wu, Wenya; Wang, Luo; Zhao, Na

    2015-05-01

    Pseudocolor processing is a branch of image enhancement. It dyes grayscale images into color images to make them more visually appealing or to highlight certain parts of the image. This paper proposes a quantum image pseudocolor coding scheme based on the density-stratified method, which defines a colormap and changes density (gray) values to colors in parallel according to that colormap. Firstly, two data structures, the quantum image representation GQIR and the quantum colormap QCR, are reviewed or proposed. Then, the quantum density-stratified algorithm is presented. Based on them, the quantum realization in the form of circuits is given. The main advantages of the quantum version of pseudocolor processing over the classical approach are that it needs less memory and can speed up the computation. Two kinds of examples help to describe the scheme further. Finally, future work is discussed.
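
    The quantum circuits (GQIR, QCR) cannot be reproduced here, but the classical operation they parallelise, density stratification, i.e., mapping each gray level to a colour according to the intensity bin it falls into in a predefined colormap, can be sketched as follows. The five-colour map and bin layout are arbitrary examples, not the paper's definitions.

```python
import numpy as np

def density_stratified_pseudocolor(gray, colormap):
    """Map each gray value to an RGB colour according to the density stratum
    (intensity bin) it falls into; classical analogue of the quantum
    density-stratified colouring described in the abstract."""
    gray = np.asarray(gray, dtype=np.uint8)
    n_strata = len(colormap)
    edges = np.linspace(0, 256, n_strata + 1)        # equal-width strata over 0..255
    strata = np.digitize(gray, edges[1:-1])           # stratum index per pixel
    return np.asarray(colormap, dtype=np.uint8)[strata]

# example colormap: dark blue -> cyan -> green -> yellow -> red
colormap = [(0, 0, 128), (0, 255, 255), (0, 200, 0), (255, 255, 0), (255, 0, 0)]
gray_image = np.arange(64 * 64).reshape(64, 64) % 256
rgb_image = density_stratified_pseudocolor(gray_image, colormap)
print(rgb_image.shape)   # (64, 64, 3)
```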

  2. Comparison between Different Intensity Normalization Methods in 123I-Ioflupane Imaging for the Automatic Detection of Parkinsonism.

    Directory of Open Access Journals (Sweden)

    A Brahim

    Full Text Available Intensity normalization is an important pre-processing step in the study and analysis of DaTSCAN SPECT imaging. As most automatic supervised image segmentation and classification methods base their assumptions regarding the intensity distributions on a standardized intensity range, intensity normalization plays a very significant role. In this work, a comparison between different novel intensity normalization methods is presented. These proposed methodologies are based on Gaussian Mixture Model (GMM) image filtering and mean-squared error (MSE) optimization. The GMM-based image filtering method uses a probability threshold that removes the clusters whose likelihoods are negligible in the non-specific regions. The MSE optimization method consists of a linear transformation that is obtained by minimizing the MSE in the non-specific region between the intensity-normalized image and the template. The proposed intensity normalization methods are compared to: (i) a standard approach based on the specific-to-non-specific binding ratio that is widely used, and (ii) a linear approach based on the α-stable distribution. This comparison is performed on a DaTSCAN image database, comprising analysis and classification stages, for the development of a computer-aided diagnosis (CAD) system for Parkinsonian syndrome (PS) detection. In addition, the proposed methods correct spatially varying artifacts that modulate the intensity of the images. Finally, using the leave-one-out cross-validation technique over these two approaches, the system achieves results of up to 92.91% accuracy, 94.64% sensitivity and 92.65% specificity, outperforming previous approaches based on a standard and a linear approach, which are used as a reference. The use of advanced intensity normalization techniques, such as GMM-based image filtering and MSE optimization, improves the diagnosis of PS.
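
    As a rough illustration of the MSE-optimisation idea, the linear transformation a·I + b that minimises the mean-squared error to a template inside a non-specific region reduces to an ordinary least-squares fit. The mask definition and the synthetic data below are placeholders, not the actual DaTSCAN processing pipeline.

```python
import numpy as np

def mse_linear_normalization(image, template, nonspecific_mask):
    """Find a, b minimising || a*image + b - template ||^2 over the
    non-specific region and apply them to the whole image."""
    x = image[nonspecific_mask].ravel()
    y = template[nonspecific_mask].ravel()
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a * image + b

# synthetic example: template and a mis-scaled "acquired" image
rng = np.random.default_rng(0)
template = rng.normal(100.0, 10.0, size=(64, 64, 32))
image = 0.7 * template + 20.0 + rng.normal(0.0, 1.0, size=template.shape)
mask = np.ones(template.shape, dtype=bool)           # stand-in for the non-specific region
normalized = mse_linear_normalization(image, template, mask)
print(np.abs(normalized - template).mean())          # should be small
```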

  3. Method for automatic recontouring in adaptive radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Rodriguez Vila, B.; Garcia Vicente, F.; Aguilera, E. J.

    2011-01-01

    Outlining the rectal wall quickly and accurately is important in Image-Guided Radiotherapy (IGRT), as the rectal wall is the organ with the greatest influence on dose limitation in the planning of radiation therapy for prostate cancer. Deformable registration methods based on image intensity cannot create a correct spatial transformation if there is no correspondence between the planning image and the treatment-session image. The variation in rectal content creates a non-correspondence in image intensity that becomes a major obstacle for deformable registration based on image intensity.

  4. Quick Correct: A Method to Automatically Evaluate Student Work in MS Excel Spreadsheets

    Directory of Open Access Journals (Sweden)

    Laura R Wetzel

    2007-11-01

    Full Text Available The quick correct method allows instructors to easily assess Excel spreadsheet assignments and notifies students immediately if their answers are acceptable. The instructor creates a spreadsheet template for students to complete. To evaluate student answers within the template, the instructor places logic functions (e.g., IF, AND, OR) into a column adjacent to student responses. These “quick correct” formulae are then password protected and hidden from view. If a student enters an incorrect answer while completing the spreadsheet template, the logic function returns an appropriate warning that encourages corrections.

  5. Method for Increasing the Efficiency of Automatic Fire Extinguish System at Objects Of Power

    Directory of Open Access Journals (Sweden)

    Dmitrienko Margarita

    2015-01-01

    Full Text Available The operation of energy facilities requires compliance with all safety standards, especially fire safety. Emergency situations that arise during the operation of power equipment damage not only the objects of the technosphere but also the environment. In recent years, a trend of quite intensive development of the technological bases of water mist fire extinguishing can be noted. Using the optical panoramic imaging methods PIV and IPI and the method of high-speed video recording, experimental studies were performed on the evaporation characteristics of large single water droplets as they pass through the flames of oil and oil products, with varying process parameters (initial droplet size of 2–6 mm, velocity of 2–4 m/s, water droplet temperature of 290–300 K, and combustion product temperature of 185–2073 K). A decisive influence of droplet size, the velocity at which droplets enter the gaseous medium, and the initial water temperature on the heating rate and evaporation of droplets in a stream of high-temperature combustion products was established.

  6. A five-colour colour-coded mapping method for DCE-MRI analysis of head and neck tumours

    International Nuclear Information System (INIS)

    Yuan, J.; Chow, S.K.K.; Yeung, D.K.W.; King, A.D.

    2012-01-01

    Aim: To devise a method to convert the time–intensity curves (TICs) of head and neck dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data into a pixel-by-pixel colour-coded map for identifying normal tissues and tumours. Materials and methods: Twenty-three patients with head and neck squamous cell carcinoma (HNSCC) underwent DCE-MRI. TIC patterns of primary tumours, metastatic nodes, and normal tissues were assessed and a program was devised to convert the patterns into a classified colour-coded map. The enhancement patterns of tumours and normal tissue structures were evaluated and categorized into nine grades (0–8) based on the predominance of coloured pixels on maps. Results: Five identified TIC patterns were converted into a colour-coded map consisting of red (maximum enhancement), brown (continuous slow rise-up), yellow (rapid wash-in and wash-out), green (rapid wash-in and plateau), and blue (rapid wash-in and rise-up). The colour-coded map distinguished all 21 primary tumours and 15 metastatic nodes from normal structures. Primary tumours and metastatic nodes were colour coded as predominantly yellow (grades 1–2) in 17/21 and 6/15, green (grades 3–5) in 3/21 and 5/15, and blue (grades 6–7) in 1/21 and 4/15, respectively. Vessels were coded red in 46/46 (grade 0) and muscles were coded brown in 23/23 (grade 8). Salivary glands, thyroid glands, and palatine tonsils were coded into predominantly yellow (grade 1) in 46/46 and 10/10 and 18/22, respectively. Conclusion: DCE-MRI derived five-colour-coded mapping provides an objective easy-to-interpret method to assess the dynamic enhancement pattern of head and neck cancers.
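
    The exact pattern definitions belong to the cited program, but a rule-based classification of time–intensity curves (TICs) into the five colour classes described (maximum enhancement, slow rise, wash-out, plateau, continued rise) can be sketched as follows; the thresholds and the example curve are arbitrary assumptions, not the published criteria.

```python
import numpy as np

def classify_tic(tic, washin_frames=10, slope_tol=0.01):
    """Assign one of the five colour classes to a time-intensity curve (TIC).
    The rules and thresholds below are illustrative assumptions."""
    tic = np.asarray(tic, dtype=float)
    tic = (tic - tic.min()) / (np.ptp(tic) + 1e-9)       # normalise to [0, 1]
    early_rise = tic[:washin_frames].max()                # how strong the wash-in is
    late = tic[washin_frames:]
    late_slope = np.polyfit(np.arange(late.size), late, 1)[0]
    if tic.mean() > 0.9:                                   # persistently near maximum
        return "red (maximum enhancement)"
    if early_rise < 0.5:                                   # no rapid wash-in
        return "brown (continuous slow rise-up)"
    if late_slope < -slope_tol:
        return "yellow (rapid wash-in and wash-out)"
    if late_slope > slope_tol:
        return "blue (rapid wash-in and rise-up)"
    return "green (rapid wash-in and plateau)"

t = np.arange(60)
curve = np.minimum(t / 8.0, 1.0) * np.exp(-0.02 * np.maximum(t - 10, 0))
print(classify_tic(curve))   # -> yellow (rapid wash-in and wash-out)
```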

  7. A five-colour colour-coded mapping method for DCE-MRI analysis of head and neck tumours

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, J., E-mail: jyuan@cuhk.edu.hk [Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, NT (Hong Kong); Chow, S.K.K.; Yeung, D.K.W.; King, A.D. [Department of Imaging and Interventional Radiology, Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, NT (Hong Kong)

    2012-03-15

    Aim: To devise a method to convert the time-intensity curves (TICs) of head and neck dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data into a pixel-by-pixel colour-coded map for identifying normal tissues and tumours. Materials and methods: Twenty-three patients with head and neck squamous cell carcinoma (HNSCC) underwent DCE-MRI. TIC patterns of primary tumours, metastatic nodes, and normal tissues were assessed and a program was devised to convert the patterns into a classified colour-coded map. The enhancement patterns of tumours and normal tissue structures were evaluated and categorized into nine grades (0-8) based on the predominance of coloured pixels on maps. Results: Five identified TIC patterns were converted into a colour-coded map consisting of red (maximum enhancement), brown (continuous slow rise-up), yellow (rapid wash-in and wash-out), green (rapid wash-in and plateau), and blue (rapid wash-in and rise-up). The colour-coded map distinguished all 21 primary tumours and 15 metastatic nodes from normal structures. Primary tumours and metastatic nodes were colour coded as predominantly yellow (grades 1-2) in 17/21 and 6/15, green (grades 3-5) in 3/21 and 5/15, and blue (grades 6-7) in 1/21 and 4/15, respectively. Vessels were coded red in 46/46 (grade 0) and muscles were coded brown in 23/23 (grade 8). Salivary glands, thyroid glands, and palatine tonsils were coded into predominantly yellow (grade 1) in 46/46 and 10/10 and 18/22, respectively. Conclusion: DCE-MRI derived five-colour-coded mapping provides an objective easy-to-interpret method to assess the dynamic enhancement pattern of head and neck cancers.

  8. A novel method and software for automatically classifying Alzheimer's disease patients by magnetic resonance imaging analysis.

    Science.gov (United States)

    Previtali, F; Bertolazzi, P; Felici, G; Weitschek, E

    2017-05-01

    The cause of Alzheimer's disease is poorly understood and to date no treatment to stop or reverse its progression has been discovered. In developed countries, Alzheimer's disease is one of the most financially costly diseases due to the requirement of continuous treatment as well as the need for assistance or supervision with the most cognitively demanding activities as time goes by. The objective of this work is to present an automated approach for classifying Alzheimer's disease from magnetic resonance imaging (MRI) patient brain scans. The method is fast and reliable, suitable for straightforward deployment in clinical applications to help diagnose the disease and improve the efficacy of medical treatments by recognising the disease state of the patient. Many features can be extracted from magnetic resonance images, but most are not suitable for the classification task. Therefore, we propose a new feature extraction technique from patients' MRI brain scans that is based on a recent computer vision method called Oriented FAST and Rotated BRIEF (ORB). The extracted features are processed with the definition and combination of two new metrics, i.e., their spatial position and their distribution around the patient's brain, and given as input to a function-based classifier (i.e., Support Vector Machines). We report the comparison with recent state-of-the-art approaches on two established medical data sets (ADNI and OASIS). In the case of binary classification (case vs control), our proposed approach outperforms most state-of-the-art techniques, while having comparable results with the others. Specifically, we obtain 100% (97%) accuracy, 100% (97%) sensitivity and 99% (93%) specificity for the ADNI (OASIS) data set. When dealing with three or four classes (i.e., classification of all subjects) our method is the only one that reaches remarkable performance in terms of classification accuracy, sensitivity and specificity, outperforming the state
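
    A minimal sketch of the feature-extraction idea, ORB keypoints pooled into a fixed-length vector and fed to an SVM, is given below using OpenCV and scikit-learn. The pooling (mean descriptor plus keypoint position statistics) is only a crude stand-in for the two spatial metrics defined by the authors, and the random textures are placeholders for real MRI slices.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def orb_feature_vector(image, n_features=200):
    """Detect ORB keypoints and pool them into a fixed-length vector:
    mean descriptor + mean/std of keypoint positions (rough stand-in for the
    spatial-position and spatial-distribution metrics in the paper)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:                                  # no keypoints found
        return np.zeros(32 + 4)
    pts = np.array([kp.pt for kp in keypoints])
    return np.concatenate([descriptors.mean(axis=0),
                           pts.mean(axis=0), pts.std(axis=0)])

# placeholder "slices": random textures standing in for AD vs control MRI data
rng = np.random.default_rng(1)
X = [orb_feature_vector(rng.integers(0, 256, (128, 128)).astype(np.uint8))
     for _ in range(40)]
y = [0] * 20 + [1] * 20
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```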

  9. Adaptive Morse code communication system for severely disabled individuals.

    Science.gov (United States)

    Yang, C H

    2000-01-01

    Morse code with an easy-to-operate, single-switch input system has been shown to be an excellent adaptive communication device. Because maintaining a stable typing rate is not easy for the disabled, automatic recognition of Morse code is difficult. Therefore, a suitable adaptive automatic recognition method is needed. This paper presents the application of a least-mean-square (LMS) algorithm to adaptive Morse code recognition for persons with impaired hand coordination and dexterity. Four processes are involved in this adaptive Morse code recognition method: space recognition, tone recognition, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method results in a better recognition rate for the participants tested in comparison to other methods from the literature.
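
    The four-stage pipeline is not specified in detail in the abstract, but the core idea, using an LMS-style update to track a user's drifting dot/dash duration boundary, can be sketched as follows. The update rule, step size, and sample data are illustrative assumptions, not the published algorithm.

```python
def lms_morse_elements(durations, mu=0.2, threshold0=0.2):
    """Classify key-down durations as dots or dashes while adapting the
    dot/dash decision threshold with a least-mean-square (LMS) style update.
    The target threshold is taken as twice the running dot-length estimate."""
    threshold = threshold0
    dot_est = threshold0 / 2.0
    symbols = []
    for d in durations:
        symbols.append('.' if d < threshold else '-')
        if d < threshold:                       # update the dot-length estimate
            dot_est += mu * (d - dot_est)
        # LMS-like correction of the threshold towards 2 x dot length
        threshold += mu * (2.0 * dot_est - threshold)
    return ''.join(symbols)

# a user whose typing slows down over time (key-down durations in seconds)
durations = [0.10, 0.32, 0.11, 0.12, 0.35, 0.15, 0.45, 0.18, 0.20, 0.60]
print(lms_morse_elements(durations))   # -> '.-..-.-..-'
```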

  10. Phase-Phase and Phase-Code Methods Modification for Precise Detecting and Predicting the GPS Cycle Slip Error

    Directory of Open Access Journals (Sweden)

    Elashiry Ahmed A.

    2015-12-01

    Full Text Available There are three well-established detection methods for the cycle slip error: the Doppler measurement method, the phase-code differencing method, and the phase-phase differencing method. The first method depends on the comparison between observables and the fact that Doppler measurements are immune to the cycle slip error. This method is considered the most precise method for cycle slip detection, because it succeeds in detecting and predicting the smallest cycle slip size (1 cycle) in case the local oscillator has low bias. The second method depends on the comparison between observables (phase and code), the code measurements being immune to the cycle slip error. However, this method cannot detect or predict cycle slips smaller than 10 cycles, because the code measurements have high noise. The third method depends on the comparison between observables (phase 1 and phase 2), the phase measurements having low noise. However, this method cannot detect or predict cycle slips smaller than 5 cycles, because the ionospheric delay might vary strongly.
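
    A minimal sketch of the phase-code differencing idea is shown below: the carrier-phase observable (converted to metres) is differenced with the code observable, and epoch-to-epoch jumps in that difference larger than a threshold flag a cycle slip. The wavelength constant is the nominal GPS L1 value; the threshold and the synthetic data are illustrative, and real processing must also handle ionospheric divergence and the code noise that limits this method to slips of roughly ten cycles or more.

```python
import numpy as np

L1_WAVELENGTH = 0.1903   # metres per cycle (GPS L1), approximate

def detect_cycle_slips(phase_cycles, code_metres, threshold_cycles=10.0):
    """Return epoch indices where the phase-minus-code difference jumps by
    more than `threshold_cycles` between consecutive epochs."""
    diff = phase_cycles * L1_WAVELENGTH - code_metres          # metres
    jumps = np.abs(np.diff(diff)) / L1_WAVELENGTH              # cycles
    return np.where(jumps > threshold_cycles)[0] + 1

# synthetic data: smooth geometry plus a 15-cycle slip introduced at epoch 60
epochs = np.arange(100)
geometry = 2.0e7 + 3.0 * epochs                                # range in metres
code = geometry + np.random.default_rng(2).normal(0, 0.3, 100)  # noisy code observable
phase = geometry / L1_WAVELENGTH                               # phase in cycles
phase[60:] += 15                                               # the cycle slip
print(detect_cycle_slips(phase, code))                         # expect a flag at epoch 60
```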

  11. Preliminary study of automatic detection method for anatomical landmarks in body trunk CT images

    International Nuclear Information System (INIS)

    Nemoto, Mitsutaka; Nomura, Yukihiro; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni; Hanaoka, Shouhei

    2010-01-01

    In the research field of medical image processing and analysis, it is important to develop medical image understanding methods that are robust to individual and case differences, since such differences often interfere with accurate medical image processing and analysis. Locating anatomical landmarks, which are localized regions with anatomical reference to the human body, allows for robust medical image understanding, since the relative positions of anatomical landmarks are basically the same among cases. This is a preliminary study for detecting anatomical point landmarks by using a technique of local area model matching. The model used in the matching process, called an appearance model, represents the spatial appearance of voxel values at the detection target landmark and its surrounding region; principal component analysis (PCA) is used to train the appearance models. In this study, we experimentally investigate the optimal appearance model for landmark detection and analyze the detection accuracy of anatomical point landmarks. (author)

  12. A Generalized Method for Automatic Downhand and Wirefeed Control of a Welding Robot and Positioner

    Science.gov (United States)

    Fernandez, Ken; Cook, George E.

    1988-01-01

    A generalized method for controlling a six degree-of-freedom (DOF) robot and a two-DOF positioner used for arc welding operations is described. The welding path is defined in the part reference frame, and the robot/positioner joint angles of the equivalent eight-DOF serial linkage are determined via an iterative solution. Three algorithms are presented: the first solution controls motion of the eight-DOF mechanism such that proper torch motion is achieved while minimizing the sum of squares of joint displacements; the second algorithm adds two constraint equations to achieve torch control while maintaining part orientation so that welding occurs in the downhand position; and the third algorithm adds the ability to control the proper orientation of a wire feed mechanism used in gas tungsten arc (GTA) welding operations. A verification of these algorithms is given using ROBOSIM, a NASA-developed computer graphics simulation software package designed for robot systems development.
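
    The first algorithm, moving the redundant chain so that the torch follows the path while the sum of squares of joint displacements is minimised, corresponds to the minimum-norm (pseudo-inverse) solution of the differential kinematics. A toy planar three-joint example of that iterative solution is sketched below; the actual eight-DOF robot/positioner kinematics and ROBOSIM are not reproduced, and the link lengths, start pose and path are made up.

```python
import numpy as np

def fk(q, link_lengths):
    """Planar forward kinematics: end-effector position of a serial chain."""
    angles = np.cumsum(q)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def jacobian(q, link_lengths):
    """2 x n Jacobian of the planar chain."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(link_lengths[i:] * np.cos(angles[i:]))
    return J

def track_path(q, path, link_lengths, iters=50):
    """Follow a sequence of Cartesian targets; at each step the pseudo-inverse
    gives the joint update with minimum sum-of-squares displacement."""
    for target in path:
        for _ in range(iters):
            err = target - fk(q, link_lengths)
            if np.linalg.norm(err) < 1e-6:
                break
            q = q + np.linalg.pinv(jacobian(q, link_lengths)) @ err
        yield q.copy()

links = np.array([0.5, 0.4, 0.3])
q0 = np.array([0.3, 0.4, 0.2])
path = [np.array([0.8, 0.5]), np.array([0.7, 0.6]), np.array([0.6, 0.7])]
for q in track_path(q0, path, links):
    print(np.round(q, 3), np.round(fk(q, links), 3))
```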

  13. Method and apparatus for automatically detecting patterns in digital point-ordered signals

    Science.gov (United States)

    Brudnoy, D.M.

    1998-10-20

    The present invention is a method and system for detecting a physical feature of a test piece by detecting a pattern in a signal representing data from inspection of the test piece. The pattern is detected by automated additive decomposition of a digital point-ordered signal which represents the data. The present invention can properly handle a non-periodic signal. A physical parameter of the test piece is measured. A digital point-ordered signal representative of the measured physical parameter is generated. The digital point-ordered signal is decomposed into a baseline signal, a background noise signal, and a peaks/troughs signal. The peaks/troughs from the peaks/troughs signal are located and peaks/troughs information indicating the physical feature of the test piece is output. 14 figs.

  14. Modelling of the automatic stabilization system of the aircraft course by a fuzzy logic method

    Science.gov (United States)

    Mamonova, T.; Syryamkin, V.; Vasilyeva, T.

    2016-04-01

    The problem addressed in the present paper concerns the development of a fuzzy model of an aircraft course stabilization system. In this work, modelling of the aircraft course stabilization system with the application of fuzzy logic is specified. The authors have used data for an ordinary passenger plane. As a result of the study, the stabilization system models were realised in the Matlab Simulink environment on the basis of a PID regulator and fuzzy logic. The authors show that the use of this artificial intelligence method allows the regulation time to be reduced to 1, which is 50 times faster than with standard techniques of control theory. This fact demonstrates a positive effect of using fuzzy regulation.

  15. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM

    Science.gov (United States)

    Sosa, Germán. D.; Cruz-Roa, Angel; González, Fabio A.

    2015-01-01

    This work addresses the problem of lung sound classification, in particular, the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step in associating lung sounds with an abnormal state of the respiratory system, usually related to tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that works better for unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content from the original signals. The evaluated methods were: the Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform (WPT) bank of filters, and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used, which, in each fold, selects as the validation set a pair of cases, one including normal sounds and the other including wheezing sounds. Experimental results are reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best results using the suggested approach, a C-weighted SVM with MFCC, achieve 82.1% balanced accuracy, the best result obtained for this problem to date. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem, even when using the same feature extraction methods.
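
    A compact sketch of that pipeline, MFCC features per sound segment plus an SVM with class weighting to compensate for the unbalanced wheeze/normal classes, is shown below with librosa and scikit-learn; the synthetic tonal and noise signals merely stand in for recorded lung sounds, and scikit-learn's "balanced" weighting is used as a simple analogue of the paper's C-weighting.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

SR = 8000   # sampling rate used for the synthetic examples

def mfcc_features(signal, sr=SR, n_mfcc=13):
    """Mean MFCC vector of a lung-sound segment."""
    mfcc = librosa.feature.mfcc(y=signal.astype(float), sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

rng = np.random.default_rng(3)
t = np.arange(SR) / SR
normal = [rng.normal(0, 0.3, SR) for _ in range(30)]                 # broadband noise
wheeze = [np.sin(2 * np.pi * 400 * t) + rng.normal(0, 0.3, SR)       # tonal component
          for _ in range(6)]                                         # unbalanced classes
X = np.array([mfcc_features(s) for s in normal + wheeze])
y = np.array([0] * len(normal) + [1] * len(wheeze))

# 'balanced' reweights errors inversely to class frequency (the class-weighting idea)
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
print(clf.score(X, y))
```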

  16. ANISOMAT+: An automatic tool to retrieve seismic anisotropy from local earthquakes

    Science.gov (United States)

    Piccinini, Davide; Pastori, Marina; Margheriti, Lucia

    2013-07-01

    An automatic analysis code called ANISOMAT+ has been developed and improved to automatically retrieve the crustal anisotropic parameters, fast polarization direction (ϕ) and delay time (δt), related to the shear wave splitting phenomena affecting seismic S-waves. The code is composed of a set of MatLab scripts and functions able to evaluate the anisotropic parameters from three-component seismic recordings of local earthquakes using the cross-correlation method. Because the aim of the code is a fully automatic evaluation of the anisotropic parameters, during its development we focused on devising several automatic checks intended to guarantee the quality and the stability of the results obtained. The basic idea behind the development of this automatic code is to build a tool able to work on a huge amount of data in a short time, obtaining stable results and minimizing the errors due to subjectivity. These capabilities, coupled with a three-component digital seismic network and a monitoring system that performs automatic picking and location, are required to develop real-time monitoring of the anisotropic parameters.
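
    The cross-correlation method can be sketched as a grid search over trial fast directions and delay times, maximising the cross-correlation between the rotated horizontal components; the search ranges, synthetic two-component record and sampling interval below are illustrative, not ANISOMAT+ itself (which is written in MatLab and adds quality checks).

```python
import numpy as np

def splitting_parameters(north, east, dt_sample, max_delay=0.2, n_angles=60):
    """Grid-search the fast polarization direction (phi, degrees) and delay time
    (dt, seconds) that maximise the cross-correlation of the rotated components."""
    max_shift = int(max_delay / dt_sample)
    best = (-1.0, 0.0, 0.0)                               # (|cc|, phi, delay)
    for phi in np.linspace(0, 180, n_angles, endpoint=False):
        a = np.radians(phi)
        fast = np.cos(a) * north + np.sin(a) * east
        slow = -np.sin(a) * north + np.cos(a) * east
        for shift in range(1, max_shift + 1):
            cc = np.corrcoef(fast[:-shift], slow[shift:])[0, 1]   # advance the slow trace
            if abs(cc) > best[0]:
                best = (abs(cc), phi, shift * dt_sample)
    return best[1], best[2], best[0]

# synthetic split shear wave: fast axis at 30 degrees, slow wave delayed by 0.08 s
dt_sample = 0.01
t = np.arange(0, 4, dt_sample)
wavelet = np.exp(-((t - 1.5) / 0.2) ** 2) * np.sin(2 * np.pi * 2 * t)
phi_true = np.radians(30.0)
fast = wavelet
slow = np.roll(wavelet, int(0.08 / dt_sample)) * 0.8
north = np.cos(phi_true) * fast - np.sin(phi_true) * slow
east = np.sin(phi_true) * fast + np.cos(phi_true) * slow
print(splitting_parameters(north, east, dt_sample))   # ~ (30.0, 0.08, ~1.0)
```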

  17. An innovative method for automatic determination of time of arrival for Lamb waves excited by impact events

    Science.gov (United States)

    Zhu, Junxiao; Parvasi, Seyed Mohammad; Ho, Siu Chun Michael; Patil, Devendra; Ge, Maochen; Li, Hongnan; Song, Gangbing

    2017-05-01

    Lamb waves have great potential as a diagnostic tool in structural health monitoring. The propagation properties of Lamb waves are affected by the state of the structure that the waves travel through, so Lamb waves can carry information about the structure as they travel across it. However, the dispersive, multimodal and attenuation characteristics of Lamb waves make it difficult to determine their time of arrival. To deal with these characteristics, an innovative method to automatically determine the time of arrival of impact-induced Lamb waves without human intervention is proposed in this paper. Lead zirconate titanate sensors mounted on the surface of an aluminum plate were used to measure the Lamb waves excited by an impact. The time of arrival was determined based on wavelet decomposition, the Hilbert transform and statistics (Grubbs' test and maximum likelihood estimation). Both numerical analysis and physical measurements have verified the accuracy of this method for impacts on an aluminum plate.
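
    A reduced sketch of the arrival-picking idea, taking the signal envelope via the Hilbert transform and picking the first sample whose envelope stands out from the pre-event noise statistics (an outlier criterion in the spirit of Grubbs' test), is given below. The wavelet decomposition stage and the maximum-likelihood refinement of the published method are omitted, and the threshold, sampling rate and synthetic trace are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def arrival_time(signal, fs, noise_window=0.0005, n_sigma=6.0):
    """Estimate the time of arrival as the first envelope sample exceeding
    mean + n_sigma * std of the pre-event noise envelope."""
    envelope = np.abs(hilbert(signal))
    n_noise = int(noise_window * fs)
    mu, sigma = envelope[:n_noise].mean(), envelope[:n_noise].std()
    above = np.nonzero(envelope > mu + n_sigma * sigma)[0]
    return above[0] / fs if above.size else None

# synthetic impact-induced wave packet arriving near 1.0 ms on a noisy trace
fs = 1_000_000                                  # 1 MHz sampling
t = np.arange(0, 0.003, 1 / fs)
rng = np.random.default_rng(4)
signal = rng.normal(0, 0.01, t.size)
arrival = 0.001
packet = np.exp(-((t - arrival - 0.0002) / 0.0001) ** 2) * np.sin(2 * np.pi * 100e3 * t)
signal += np.where(t >= arrival, packet, 0.0)
print(f"picked arrival: {arrival_time(signal, fs) * 1e3:.3f} ms")
```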

  18. Position automatic determination technology

    International Nuclear Information System (INIS)

    1985-10-01

    This book covers methods and characteristics of position determination, control methods for position determination and design considerations, selection criteria for position detection sensors, position determination in digital control systems, application of clutches and brakes in high-frequency position determination, automation techniques for position determination, position determination using electromagnetic clutches and brakes, air cylinders, cams and solenoids, stop-position control of automatic guided vehicles, stacker cranes, and automatic transfer control.

  19. Utilisation of best estimate system codes and best estimate methods in safety analyses of VVER reactors in the Czech Republic

    International Nuclear Information System (INIS)

    Macek, Jiri; Kral, Pavel

    2010-01-01

    The content of the presentation was as follows: Conservative versus best estimate approach, Brief description and selection of methodology, Description of uncertainty methods, Examples of the BE methodology. It is concluded that where BE computer codes are used, uncertainty and sensitivity analyses should be included; if best estimate codes + uncertainty are used, the safety margins increase; and BE + BSA is the next step in licensing analyses. (P.A.)

  20. An automatic patient-specific seizure onset detection method in intracranial EEG based on incremental nonlinear dimensionality reduction.

    Science.gov (United States)

    Zhang, Yizhuo; Xu, Guanghua; Wang, Jing; Liang, Lin

    2010-01-01

    Epileptic seizure features always include the morphology and spatial distribution of nonlinear waveforms in the electroencephalographic (EEG) signals. In this study, we propose a novel incremental learning scheme based on nonlinear dimensionality reduction for automatic patient-specific seizure onset detection. The method allows for identification of seizure onset times in long-term EEG signals acquired from epileptic patients. Firstly, a nonlinear dimensionality reduction (NDR) method called local tangent space alignment (LTSA) is used to reduce the dimensionality of the initial feature sets extracted with the continuous wavelet transform (CWT). A one-dimensional manifold which reflects the intrinsic dynamics of seizure onset is obtained. For each patient, an IEEG recording containing one seizure onset is sufficient to train the initial one-dimensional manifold. Secondly, an unsupervised incremental learning scheme is proposed to update the initial manifold as unlabelled EEG segments flow in sequentially. The incremental learning scheme can cluster the newly arriving samples into the trained patterns (containing or not containing seizure onsets). Intracranial EEG recordings from 21 patients, with a total duration of 193.8 h and 82 seizures, are used for the evaluation of the method. An average sensitivity of 98.8%, an average uninteresting false positive rate of 0.24/h, an average interesting false positive rate of 0.25/h, and an average detection delay of 10.8 s are obtained. Our method offers simple, accurate training with less human intervention and is well suited to off-line seizure detection. The unsupervised incremental learning scheme has the potential to identify novel IEEG classes (different onset patterns) within the data. Copyright © 2010 Elsevier Ltd. All rights reserved.
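
    The incremental update scheme is specific to the paper, but the batch LTSA embedding that initialises it is available off the shelf; a minimal sketch with scikit-learn on stand-in features is shown below. The random feature matrix merely imitates CWT-derived features with a shifted "onset-like" cluster, and the neighbourhood size is an arbitrary choice.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# stand-in feature matrix: rows = EEG segments, columns = CWT-derived features
rng = np.random.default_rng(5)
background = rng.normal(0.0, 1.0, size=(200, 20))
seizure = rng.normal(1.0, 1.0, size=(20, 20))       # shifted cluster as a crude "onset"
X = np.vstack([background, seizure])

# local tangent space alignment (LTSA): nonlinear reduction to a 1-D manifold
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=1, method="ltsa")
embedding = ltsa.fit_transform(X)

# the seizure-like segments should tend toward one end of the 1-D coordinate
print(embedding[:200].mean(), embedding[200:].mean())
```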

  1. Monte-Carlo method - codes for the study of criticality problems (on IBM 7094); Methode de Monte- Carlo - codes pour l'etude des problemes de criticite (IBM 7094)

    Energy Technology Data Exchange (ETDEWEB)

    Moreau, J.; Rabot, H.; Robin, C. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1965-07-01

    The two codes presented here allow the determination of the multiplication constant of media containing fissile materials in very varied and divided forms; they are based on the Monte Carlo method. The first code applies to x, y, z geometries. The volume to be studied must be decomposable into parallelepipeds, the media within each parallelepiped being limited by non-intersecting surfaces. The second code is intended for r, θ, z geometries. The results include an analysis of collisions in each medium. Applications and examples with information on computing time and accuracy are given. (authors)

  2. Automatic contact in DYNA3D for vehicle crashworthiness

    International Nuclear Information System (INIS)

    Whirley, R.G.; Engelmann, B.E.

    1994-01-01

    This paper presents a new formulation for the automatic definition and treatment of mechanical contact in explicit, nonlinear, finite element analysis. Automatic contact offers the benefits of significantly reduced model construction time and fewer opportunities for user error, but faces significant challenges in reliability and computational costs. The authors have used a new four-step automatic contact algorithm. Key aspects of the proposed method include (1) automatic identification of adjacent and opposite surfaces in the global search phase, and (2) the use of a smoothly varying surface normal that allows a consistent treatment of shell intersection and corner contact conditions without ad hoc rules. Three examples are given to illustrate the performance of the newly proposed algorithm in the public DYNA3D code

  3. Fetal Intelligent Navigation Echocardiography (FINE): a novel method for rapid, simple, and automatic examination of the fetal heart.

    Science.gov (United States)

    Yeo, Lami; Romero, Roberto

    2013-09-01

    To describe a novel method (Fetal Intelligent Navigation Echocardiography (FINE)) for visualization of standard fetal echocardiography views from volume datasets obtained with spatiotemporal image correlation (STIC) and application of 'intelligent navigation' technology. We developed a method to: 1) demonstrate nine cardiac diagnostic planes; and 2) spontaneously navigate the anatomy surrounding each of the nine cardiac diagnostic planes (Virtual Intelligent Sonographer Assistance (VIS-Assistance®)). The method consists of marking seven anatomical structures of the fetal heart. The following echocardiography views are then automatically generated: 1) four chamber; 2) five chamber; 3) left ventricular outflow tract; 4) short-axis view of great vessels/right ventricular outflow tract; 5) three vessels and trachea; 6) abdomen/stomach; 7) ductal arch; 8) aortic arch; and 9) superior and inferior vena cava. The FINE method was tested in a separate set of 50 STIC volumes of normal hearts (18.6-37.2 weeks of gestation), and visualization rates for fetal echocardiography views using diagnostic planes and/or VIS-Assistance® were calculated. To examine the feasibility of identifying abnormal cardiac anatomy, we tested the method in four cases with proven congenital heart defects (coarctation of aorta, tetralogy of Fallot, transposition of great vessels and pulmonary atresia with intact ventricular septum). In normal cases, the FINE method was able to generate nine fetal echocardiography views using: 1) diagnostic planes in 78-100% of cases; 2) VIS-Assistance® in 98-100% of cases; and 3) a combination of diagnostic planes and/or VIS-Assistance® in 98-100% of cases. In all four abnormal cases, the FINE method demonstrated evidence of abnormal fetal cardiac anatomy. The FINE method can be used to visualize nine standard fetal echocardiography views in normal hearts by applying 'intelligent navigation' technology to STIC volume datasets. This method can simplify

  4. Application of computational fluid dynamics methods to improve thermal hydraulic code analysis

    Science.gov (United States)

    Sentell, Dennis Shannon, Jr.

    A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.

  5. Decoy state method for quantum cryptography based on phase coding into faint laser pulses

    Science.gov (United States)

    Kulik, S. P.; Molotkov, S. N.

    2017-12-01

    We discuss the photon number splitting attack (PNS) in systems of quantum cryptography with phase coding. It is shown that this attack, as well as the structural equations for the PNS attack for phase encoding, differs physically from the analogous attack applied to the polarization coding. As far as we know, in practice, in all works to date processing of experimental data has been done for phase coding, but using formulas for polarization coding. This can lead to inadequate results for the length of the secret key. These calculations are important for the correct interpretation of the results, especially if it concerns the criterion of secrecy in quantum cryptography.

  6. Application of automatic change of interval to de Vogelaere's method of the solution of the differential equation y'' = f (x, y)

    International Nuclear Information System (INIS)

    Rogers, M.H.

    1960-11-01

    The paper gives an extension to de Vogelaere's method for the solution of systems of second order differential equations from which first derivatives are absent. The extension is a description of the way in which automatic change in step-length can be made to give a prescribed accuracy at each step. (author)

  7. Performance of human observers and an automatic 3-dimensional computer-vision-based locomotion scoring method to detect lameness and hoof lesions in dairy cows

    NARCIS (Netherlands)

    Schlageter-Tello, Andrés; Hertem, Van Tom; Bokkers, Eddie A.M.; Viazzi, Stefano; Bahr, Claudia; Lokhorst, Kees

    2018-01-01

    The objective of this study was to determine if a 3-dimensional computer vision automatic locomotion scoring (3D-ALS) method was able to outperform human observers for classifying cows as lame or nonlame and for detecting cows affected and nonaffected by specific type(s) of hoof lesion. Data

  8. Biodosimetry estimation using the ratio of the longest:shortest length in the premature chromosome condensation (PCC) method applying autocapture and automatic image analysis.

    Science.gov (United States)

    González, Jorge E; Romero, Ivonne; Gregoire, Eric; Martin, Cécile; Lamadrid, Ana I; Voisin, Philippe; Barquinero, Joan-Francesc; García, Omar

    2014-09-01

    The combination of automatic image acquisition and automatic image analysis of premature chromosome condensation (PCC) spreads was tested as a rapid biodosimetry protocol. Human peripheral lymphocytes were irradiated with (60)Co gamma rays at single doses of between 1 and 20 Gy, stimulated with phytohaemagglutinin and incubated for 48 h; division was blocked with Colcemid, and PCC was induced by Calyculin A. Images of chromosome spreads were captured and analysed automatically by combining the Metafer 4 and CellProfiler platforms. Automatic measurement of chromosome lengths allows calculation of the length ratio (LR) of the longest and the shortest piece, which can be used for dose estimation since this ratio is correlated with ionizing radiation dose. The LR of the longest and the shortest chromosome pieces showed the best goodness-of-fit to a linear model in the dose interval tested. The application of automatic analysis increases the potential use of the PCC method for triage in the event of mass radiation casualties. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
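
    The dose-estimation step reduces to fitting a linear calibration between the measured length ratio (LR) and dose, then inverting it for a new sample; a minimal sketch with synthetic calibration points is shown below (the numbers are illustrative, not the published calibration).

```python
import numpy as np

# synthetic calibration data: dose [Gy] vs mean longest:shortest length ratio (LR)
doses = np.array([1, 2, 5, 10, 15, 20], dtype=float)
mean_lr = np.array([2.1, 2.9, 5.2, 9.8, 14.5, 19.0])      # illustrative values only

# linear calibration LR = a * dose + b, fitted by least squares
a, b = np.polyfit(doses, mean_lr, 1)

def estimate_dose(lr_measured):
    """Invert the linear calibration to estimate the absorbed dose."""
    return (lr_measured - b) / a

print(f"LR = {a:.2f} * D + {b:.2f}; LR of 7.0 -> {estimate_dose(7.0):.1f} Gy")
```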

  9. Novel methods in the Particle-In-Cell accelerator Code-Framework Warp

    Energy Technology Data Exchange (ETDEWEB)

    Vay, J-L [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Grote, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Cohen, R. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Friedman, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-26

    The Particle-In-Cell (PIC) Code-Framework Warp is being developed by the Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) to guide the development of accelerators that can deliver beams suitable for high-energy density experiments and implosion of inertial fusion capsules. It is also applied in various areas outside the Heavy Ion Fusion program to the study and design of existing and next-generation high-energy accelerators, including the study of electron cloud effects and laser wakefield acceleration for example. This study presents an overview of Warp's capabilities, summarizing recent original numerical methods that were developed by the HIFS-VNL (including PIC with adaptive mesh refinement, a large-timestep 'drift-Lorentz' mover for arbitrarily magnetized species, a relativistic Lorentz invariant leapfrog particle pusher, simulations in Lorentz-boosted frames, an electromagnetic solver with tunable numerical dispersion and efficient stride-based digital filtering), with special emphasis on the description of the mesh refinement capability. In addition, selected examples of the applications of the methods to the abovementioned fields are given.

  10. Evaluation of automatic cloud removal method for high elevation areas in Landsat 8 OLI images to improve environmental indexes computation

    Science.gov (United States)

    Alvarez, César I.; Teodoro, Ana; Tierra, Alfonso

    2017-10-01

    Thin clouds are frequent in optical remote sensing data and in most cases prevent obtaining pure surface data for computing indexes such as the Normalized Difference Vegetation Index (NDVI). This paper aims to evaluate the Automatic Cloud Removal Method (ACRM) algorithm over a high-elevation city like Quito (Ecuador), at an altitude of 2800 meters above sea level, where clouds are present throughout the year. The ACRM algorithm performs a linear regression between each Landsat 8 OLI band and the cirrus band and uses the slope obtained from that regression to remove the cloud contribution. The algorithm was employed without any reference image or mask. The application of the ACRM algorithm over Quito did not show good performance. Therefore, an improvement of the algorithm using a different slope value was considered (ACRM Improved). The NDVI computed afterwards was compared with a reference NDVI MODIS product (MOD13Q1). The improved ACRM algorithm gave successful results when compared with the original ACRM algorithm. In the future, the improved ACRM algorithm needs to be tested in different regions of the world with different conditions to evaluate whether it works successfully in all of them.
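
    The core of the ACRM-style correction, regressing each optical band against the cirrus band over the scene and subtracting slope × cirrus from the band, can be sketched as follows; the synthetic arrays stand in for Landsat 8 OLI bands, and the per-scene slope is exactly the quantity the authors modified in their improved version.

```python
import numpy as np

def cirrus_regression_correction(band, cirrus):
    """Remove the thin-cloud contribution from an optical band by regressing it
    against the cirrus band and subtracting slope * cirrus (ACRM-style)."""
    slope, intercept = np.polyfit(cirrus.ravel(), band.ravel(), 1)
    return band - slope * cirrus, slope

# synthetic scene: true surface reflectance plus a thin-cloud signal
rng = np.random.default_rng(6)
surface = rng.uniform(0.05, 0.35, size=(100, 100))           # "true" band reflectance
cirrus = rng.uniform(0.0, 0.02, size=(100, 100))              # cirrus band signal
observed = surface + 4.0 * cirrus                             # cloud-contaminated band

corrected, slope = cirrus_regression_correction(observed, cirrus)
print(slope, np.abs(corrected - surface).mean())              # slope ~ 4, small residual
```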

  11. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    Directory of Open Access Journals (Sweden)

    Baraka D. Sija

    2018-01-01

    Full Text Available A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without accessing its specifications. Sufficient knowledge of undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools for Protocol Reverse Engineering (PRE) and classifies them into four divisions: approaches that reverse engineer protocol finite state machines, protocol formats, or both protocol finite state machines and protocol formats, and approaches that focus directly on neither reverse engineering protocol formats nor protocol finite state machines. The efficiency of all approaches' outputs based on their selected inputs is analyzed in general, along with the appropriate reverse engineering input formats. Additionally, we present a discussion and an extended classification in terms of automated versus manual approaches, known and novel categories of reverse engineered protocols, and a literature review of reverse engineered protocols in relation to the seven-layer OSI (Open Systems Interconnection) model.

  12. On the Selection of Non-Invasive Methods Based on Speech Analysis Oriented to Automatic Alzheimer Disease Diagnosis

    Directory of Open Access Journals (Sweden)

    Unai Martinez de Lizardui

    2013-05-01

    Full Text Available The work presented here is part of a larger study to identify novel technologies and biomarkers for early Alzheimer disease (AD) detection, and it focuses on evaluating the suitability of a new approach for early AD diagnosis by non-invasive methods. The purpose is to examine, in a pilot study, the potential of applying intelligent algorithms to speech features obtained from suspected patients in order to contribute to the improvement of the diagnosis of AD and its degree of severity. In this sense, Artificial Neural Networks (ANN) have been used for the automatic classification of the two classes (AD and control subjects). Two aspects have been analyzed for feature selection: Spontaneous Speech and Emotional Response. Not only linear features but also non-linear ones, such as Fractal Dimension, have been explored. The approach is non-invasive, low cost and without any side effects. The experimental results obtained were very satisfactory and promising for early diagnosis and classification of AD patients.

  13. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in source term estimations by a large computer code, such as MELCOR and MAAP, is an essential process in current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as a principal tool for overall uncertainty analysis in source term quantification, while the LHS is used in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance obtained from cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytical distributions, while in the third the distribution is unknown. The first case is given by symmetric analytical distributions; the second consists of two asymmetric distributions whose skewness is nonzero
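
    As a small illustration of the sampling side of the procedure, Latin hypercube sampling of the uncertain input parameters can be generated with SciPy's quasi-Monte Carlo module; the parameter names, ranges, and sample size below are placeholders, not the study's actual inputs.

```python
from scipy.stats import qmc

# Latin hypercube sample of three hypothetical uncertain inputs
sampler = qmc.LatinHypercube(d=3, seed=7)
unit_sample = sampler.random(n=100)                  # 100 samples in [0, 1)^3

# scale to illustrative physical ranges (placeholder parameter bounds)
lower = [300.0, 0.1, 1.0e5]                           # e.g. temperature, fraction, pressure
upper = [600.0, 0.9, 5.0e5]
inputs = qmc.scale(unit_sample, lower, upper)
print(inputs.shape)                                   # (100, 3)
```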

  14. Imaging different components of a tectonic tremor sequence in southwestern Japan using an automatic statistical detection and location method

    Science.gov (United States)

    Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige

    2018-02-01

    In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity by analysing a 9-day energetic tectonic tremor sequence occurring at the down-dip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multi-scale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. The use of different characteristic functions in the signal processing step of the method allows the extraction and location of the sources of short-duration impulsive signal transients associated with low-frequency earthquakes and of longer-duration energy transients during the tectonic tremor sequence. Frequency-dependent characteristic functions, based on higher-order statistical properties of the seismic signals, are used for the detection and location of low-frequency earthquakes. This allows the extraction of a more complete (~6.5 times more events) and better time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue allows the space-time evolution of the low-frequency earthquake activity to be resolved in great detail, unravelling spatial and temporal clustering, modulation in response to tides, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence is performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of

  15. A simple method for simulation of coherent synchrotron radiation in a tracking code

    International Nuclear Information System (INIS)

    Borland, M.

    2000-01-01

    Coherent synchrotron radiation (CSR) is of great interest to those designing accelerators as drivers for free-electron lasers (FELs). Although experimental evidence is incomplete, CSR is predicted to have potentially severe effects on the emittance of high-brightness electron beams. The performance of an FEL depends critically on the emittance, current, and energy spread of the beam. Attempts to increase the current through magnetic bunch compression can lead to increased emittance and energy spread due to CSR in the dipoles of such a compressor. The code elegant was used for design and simulation of the bunch compressor for the Low-Energy Undulator Test Line (LEUTL) FEL at the Advanced Photon Source (APS). In order to facilitate this design, a fast algorithm was developed based on the 1-D formalism of Saldin and coworkers. In addition, a plausible method of including CSR effects in drift spaces following the chicane magnets was developed and implemented. The algorithm is fast enough to permit running hundreds of tolerance simulations including CSR for 50 thousand particles. This article describes the details of the implementation and shows results for the APS bunch compressor

  16. REVA Advanced Fuel Design and Codes and Methods - Increasing Reliability, Operating Margin and Efficiency in Operation

    Energy Technology Data Exchange (ETDEWEB)

    Frichet, A.; Mollard, P.; Gentet, G.; Lippert, H. J.; Curva-Tivig, F.; Cole, S.; Garner, N.

    2014-07-01

    For three decades, AREVA has been incrementally implementing upgrades to its BWR and PWR fuel designs and codes and methods, leading to ever greater fuel efficiency and easier licensing. For PWRs, AREVA is implementing upgraded versions of its HTP™ and AFA 3G technologies called HTP™-I and AFA3G-I. These fuel assemblies feature improved robustness and dimensional stability through the ultimate optimization of their hold-down system, the use of Q12, the AREVA advanced quaternary alloy for guide tubes, the increase in guide tube wall thickness and the stiffening of the spacer-to-guide-tube connection. An even bigger step forward has been achieved as AREVA has successfully developed and introduced to the market the GAIA product, which maintains the resistance to grid-to-rod fretting (GTRF) of the HTP™ product while providing additional thermal-hydraulic margin and high resistance to fuel assembly bow. (Author)

  17. A massively parallel method of characteristic neutral particle transport code for GPUs

    International Nuclear Information System (INIS)

    Boyd, W. R.; Smith, K.; Forget, B.

    2013-01-01

    Over the past 20 years, parallel computing has enabled computers to grow ever larger and more powerful while scientific applications have advanced in sophistication and resolution. This trend is being challenged, however, as the power consumption of conventional parallel computing architectures has risen to unsustainable levels and memory limitations have come to dominate compute performance. Heterogeneous computing platforms, such as Graphics Processing Units (GPUs), are an increasingly popular paradigm for addressing these issues. This paper explores the applicability of GPUs for deterministic neutron transport. A 2D method of characteristics (MOC) code - OpenMOC - has been developed with solvers for both shared-memory multi-core platforms and GPUs. The multi-threading and memory locality methodologies for the GPU solver are presented. Performance results for the 2D C5G7 benchmark demonstrate a 25-35× speedup for MOC on the GPU. The lessons learned from this case study will provide the basis for further exploration of MOC on GPUs as well as design decisions for hardware vendors exploring technologies for the next generation of machines for scientific computing. (authors)

  18. Development of a code in three-dimensional cylindrical geometry based on analytic function expansion nodal (AFEN) method

    International Nuclear Information System (INIS)

    Lee, Joo Hee

    2006-02-01

    There is growing interest in developing pebble bed reactors (PBRs) as candidates for very high temperature gas-cooled reactors (VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactor have been based on old finite-difference solvers or on statistical methods. For realistic analysis of PBRs, however, there is a strong desire to make available high-fidelity nodal codes in three-dimensional (r,θ,z) cylindrical geometry. Recently, the Analytic Function Expansion Nodal (AFEN) method, developed quite extensively in Cartesian (x,y,z) geometry and in hexagonal-z geometry, was extended to two-group (r,z) cylindrical geometry and gave very accurate results. In this thesis, we develop a method for the full three-dimensional cylindrical (r,θ,z) geometry and implement the method in a code named TOPS. The AFEN methodology in this geometry, as in hexagonal geometry, is robust (e.g., no occurrence of singularity), due to the unique feature of the AFEN method that it does not use transverse integration. The transverse integration used in the usual nodal methods, however, leads to an impasse, that is, failure of the azimuthal term to be transverse-integrated over the r-z surface. We use 13 nodal unknowns in an outer node and 7 nodal unknowns in an innermost node. The general solution of the node can be expressed in terms of these nodal unknowns and can be updated using the nodal balance equation and the current continuity condition. For more realistic analysis of PBRs, we implemented the Marshak boundary condition to treat the incoming-current-zero boundary condition and the partial current translation (PCT) method to treat voids in the core. The TOPS code was verified in various numerical tests derived from the Dodds problem and the PBMR-400 benchmark problem. The results of the TOPS code show higher accuracy and faster computing time than the VENTURE code, which is based on the finite difference method (FDM)

  19. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    A scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth of the current level. The coding scheme paves the way for true electronic mail, in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously - between buildings or between continents. The scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. The image quality of the resulting delivered messages is improved over messages transmitted by conventional coding. The coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme is automatically translated to word-processor form.

  20. Automatic requirements traceability

    OpenAIRE

    Andžiulytė, Justė

    2017-01-01

    This paper focuses on automatic requirements traceability and algorithms that automatically find recommendation links for requirements. The main objective of this paper is the evaluation of these algorithms and the preparation of a method defining which algorithms should be used in different cases. This paper presents and examines probabilistic, vector space and latent semantic indexing models of information retrieval and association rule mining, using the authors' own implementations of these algorithms and o...

  1. Pathway Detection from Protein Interaction Networks and Gene Expression Data Using Color-Coding Methods and A* Search Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2012-01-01

    Full Text Available With the wide availability of protein interaction networks and microarray data, identifying linear paths of biological significance in the search for potential pathways is a challenging issue. We proposed a color-coding method based on the characteristics of biological network topology and applied heuristic search to speed up the color-coding method. In the experiments, we tested our methods by applying them to two datasets: yeast and human prostate cancer networks with gene expression data. The comparisons of our method with other existing methods on known yeast MAPK pathways in terms of precision and recall show that we can find the maximum number of proteins and perform comparably well. On the other hand, our method is more efficient than previous ones and detects paths of length 10 within 40 seconds using an Intel 1.73 GHz CPU and 1 GB main memory running under the Windows operating system.
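
    As a hedged illustration of the underlying technique (the generic randomized color-coding idea; the authors' heuristic/A* acceleration and biological scoring are not reproduced), a single color-coding trial for a path on k vertices could look like the sketch below; the graph representation and names are assumptions.

        import random

        def colorful_path_exists(adj, k, seed=None):
            """One color-coding trial: does some simple path on k vertices receive
            k distinct colors?  Repeating the trial (success probability roughly
            k!/k**k per trial if such a path exists) boosts the detection rate."""
            rng = random.Random(seed)
            color = {v: rng.randrange(k) for v in adj}     # random k-coloring
            # dp[v] = set of color subsets realizable by a colorful path ending at v
            dp = {v: {frozenset([color[v]])} for v in adj}
            for _ in range(k - 1):                         # grow paths by one vertex
                new_dp = {v: set() for v in adj}
                for u in adj:
                    for v in adj[u]:
                        for used in dp[u]:
                            if color[v] not in used:
                                new_dp[v].add(used | {color[v]})
                dp = new_dp
            return any(len(s) == k for subsets in dp.values() for s in subsets)

        # toy usage: adjacency dict of an undirected path a-b-c-d
        g = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
        print(any(colorful_path_exists(g, 4, seed=s) for s in range(50)))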

  2. Automatic method of analysis of OCT images in assessing the severity degree of glaucoma and the visual field loss.

    Science.gov (United States)

    Koprowski, Robert; Rzendkowski, Marek; Wróbel, Zygmunt

    2014-02-14

    In many practical aspects of ophthalmology, it is necessary to assess the severity degree of glaucoma in cases where, for various reasons, it is impossible to perform a visual field test - static perimetry. These are cases in which the visual field test result is not reliable, e.g. advanced AMD (Age-related Macular Degeneration). In these cases, there is a need to determine the severity of glaucoma, mainly on the basis of optic nerve head (ONH) and retinal nerve fibre layer (RNFL) structure. OCT is one of the diagnostic methods capable of analysing changes in both, ONH and RNFL in glaucoma. OCT images of the eye fundus of 55 patients (110 eyes) were obtained from the SOCT Copernicus (Optopol Tech. SA, Zawiercie, Poland). The authors proposed a new method for automatic determination of the RNFL (retinal nerve fibre layer) and other parameters using: mathematical morphology and profiled segmentation based on morphometric information of the eye fundus. A quantitative ratio of the quality of the optic disk and RNFL - BGA (biomorphological glaucoma advancement) was also proposed. The obtained results were compared with the results obtained from a static perimeter. Correlations between the known parameters of the optic disk as well as those suggested by the authors and the results obtained from static perimetry were calculated. The result of correlation with the static perimetry was 0.78 for the existing methods of image analysis and 0.86 for the proposed method. Practical usefulness of the proposed ratio BGA and the impact of the three most important features on the result were assessed. The following results of correlation for the three proposed classes were obtained: cup/disk diameter 0.84, disk diameter 0.97 and the RNFL 1.0. Thus, analysis of the supposed visual field result in the case of glaucoma is possible based only on OCT images of the eye fundus. The calculations and analyses performed with the proposed algorithm and BGA ratio confirm that it is possible to

  3. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Science.gov (United States)

    2010-01-01

    ... dried again for 10 minute periods until the final weight change of the load is 1 percent or less. 1... and Colorists (AATCC) Test Method 118—1997, Oil Repellency: Hydrocarbon Resistance Test (reaffirmed.... 552(a) and 1 CFR Part 51. Any subsequent amendment to a standard by the standard-setting organization...

  4. A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes

    Energy Technology Data Exchange (ETDEWEB)

    Bravenec, Ronald [Fourth State Research, Austin, TX (United States)

    2017-11-14

    My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.

  5. Coherent Synchrotron Radiation A Simulation Code Based on the Non-Linear Extension of the Operator Splitting Method

    CERN Document Server

    Dattoli, Giuseppe

    2005-01-01

    The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high intensity electron accelerators. A code devoted to the analysis of this type of problem should be fast and reliable: conditions that are usually hardly achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of non-linear contributions due to wake field effects. The proposed solution method exploits an algebraic technique, using exponential operators implemented numerically in C++. We show that the integration procedure is capable of reproducing the onset of an instability and effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, parametric studies a...

  6. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    Energy Technology Data Exchange (ETDEWEB)

    Delchini, Marc-Olivier [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division

    2016-04-29

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.

  7. Development of multi dimensional analysis code for containment safety and performance based on staggered semi-implicit finite volume method

    International Nuclear Information System (INIS)

    Hong, Soon Joon; Hwang, Su Hyun; Han, Tae Young; Lee, Byung Chul; Byun, Choong Sup

    2009-01-01

    A solver for a 3-dimensional thermal hydraulic analysis code for a large building having multiple rooms, such as a reactor containment, was developed based on 2-phase and 3-field conservation equations. The three fields are gas, continuous liquid, and dispersed drops. The gas field includes steam, air and hydrogen. The gas motion equation and state equation were also considered; homogeneous and equilibrium conditions were assumed for the gas motion equation. Source terms related to phase change were explicitly expressed for the implicit scheme. As a result, a total of 17 independent equations was set up, and 17 primitive unknowns were identified. The numerical scheme followed the FVM (Finite Volume Method) based on a staggered orthogonal structured grid and a semi-implicit method. The staggered grid system produces staggered numerical cells: a scalar cell and a vector cell. The porosity method was adopted for easy handling of the complex structures inside a computational cell; it is known to be very effective in reducing mesh numbers while still acquiring accurate results. In the actual programming, the C++ language with OOP (Object Oriented Programming) was used. The code developed with OOP has features such as information hiding, encapsulation, modularity and inheritance, which offer code developers a more explicit and clearer development method. Classes were designed: Cell and Face, and Volume and Component, are the bases of the largest class, System. The class Solver was designed to run the solver. Sample runs showed physically reasonable results. The foundation of the code was set up through a series of numerical developments. (author)
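
    Purely as an illustration of the class layering described above (the real code is C++; all members below are hypothetical), the skeleton might look like:

        class Cell:                      # scalar control volume of the staggered grid
            def __init__(self, porosity=1.0):
                self.porosity = porosity # open volume fraction used by the porosity method

        class Face:                      # vector cell between two scalar cells (velocities)
            def __init__(self, left_cell, right_cell):
                self.left, self.right = left_cell, right_cell

        class Volume:                    # a room/compartment assembled from Cells and Faces
            pass

        class Component:                 # junctions, walls and other engineered structures
            pass

        class System:                    # the largest class: aggregates Volumes and Components
            def __init__(self, volumes, components):
                self.volumes, self.components = volumes, components

        class Solver:                    # advances the semi-implicit FVM solution of a System
            def __init__(self, system):
                self.system = system

            def step(self, dt):
                # assemble the 17 equations for the 17 primitive unknowns and solve
                raise NotImplementedError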

  8. PCR-free quantitative detection of genetically modified organism from raw materials. An electrochemiluminescence-based bio bar code method.

    Science.gov (United States)

    Zhu, Debin; Tang, Yabing; Xing, Da; Chen, Wei R

    2008-05-15

    A bio bar code assay based on oligonucleotide-modified gold nanoparticles (Au-NPs) provides a PCR-free method for quantitative detection of nucleic acid targets. However, the current bio bar code assay requires lengthy experimental procedures including the preparation and release of bar code DNA probes from the target-nanoparticle complex and immobilization and hybridization of the probes for quantification. Herein, we report a novel PCR-free electrochemiluminescence (ECL)-based bio bar code assay for the quantitative detection of genetically modified organism (GMO) from raw materials. It consists of tris-(2,2'-bipyridyl) ruthenium (TBR)-labeled bar code DNA, nucleic acid hybridization using Au-NPs and biotin-labeled probes, and selective capture of the hybridization complex by streptavidin-coated paramagnetic beads. The detection of target DNA is realized by direct measurement of ECL emission of TBR. It can quantitatively detect target nucleic acids with high speed and sensitivity. This method can be used to quantitatively detect GMO fragments from real GMO products.

  9. Calculation of extended shields in the Monte Carlo method using importance function (BRAND and DD code systems)

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.; Kagalenko, I.Eh.; Mironovich, Yu.N.

    1992-01-01

    Consideration is given to a technique and algorithms for constructing neutron trajectories in the Monte Carlo method, taking into account data on the adjoint transport equation solution. When simulating the transport part of the transfer kernel, use is made of a piecewise-linear approximation of the free path length density along the particle motion direction. The approach has been implemented in programs within the framework of the BRAND code system. The importance is calculated in the multigroup P1-approximation within the framework of the DD-30 code system. The efficiency of the developed computation technique is demonstrated by means of the solution of two model problems. 4 refs.; 2 tabs

  10. The response matrix method for the representation of the border conditions in the three-dimensional difussion codes

    International Nuclear Information System (INIS)

    Grant, C.R.

    1981-01-01

    Representing a reactor in a simulation by means of a diffusion code can take a considerable amount of memory and processing time when zones in which nuclear and geometrical properties are invariant, such as the reflector or water columns, are modelled explicitly. To avoid an explicit representation of these zones, a method employing a response matrix was developed, consisting of expressing the net currents of each group as a function of the total flux. Estimates are made for different geometries of materials and introduced into the PUMA diffusion code. Several tests proved the results obtained in 2 and 5 groups to be very reliable. (author) [es
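
    Schematically (this is only an illustration of the kind of relation such a method uses, not the exact PUMA formulation), for an invariant zone and G energy groups the net current of group g on the zone boundary is written as a linear combination of the total fluxes there,

        J_g^{\mathrm{net}} = \sum_{g'=1}^{G} R_{g g'} \, \phi_{g'}, \qquad g = 1, \dots, G,

    so the zone itself never has to be meshed explicitly: the matrix R, computed once from the zone's nuclear and geometrical properties, replaces it in the diffusion calculation.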

  11. U.S. Sodium Fast Reactor Codes and Methods: Current Capabilities and Path Forward

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, A. J.; Fanning, T. H.

    2017-06-26

    The United States has extensive experience with the design, construction, and operation of sodium cooled fast reactors (SFRs) over the last six decades. Despite the closure of various facilities, the U.S. continues to dedicate research and development (R&D) efforts to the design of innovative experimental, prototype, and commercial facilities. Accordingly, in support of the rich operating history and ongoing design efforts, the U.S. has been developing and maintaining a series of tools with capabilities that envelope all facets of SFR design and safety analyses. This paper provides an overview of the current U.S. SFR analysis toolset, including codes such as SAS4A/SASSYS-1, MC2-3, SE2-ANL, PERSENT, NUBOW-3D, and LIFE-METAL, as well as the higher-fidelity tools (e.g. PROTEUS) being integrated into the toolset. Current capabilities of the codes are described and key ongoing development efforts are highlighted for some codes.

  12. Implementation of an implicit method into heat conduction calculation of TRAC-PF1/MOD2 code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Abe, Yutaka; Ohnuki, Akira; Murao, Yoshio

    1990-08-01

    A two-dimensional unsteady heat conduction equation is solved in the TRAC-PF1/MOD2 code to calculate temperature transients in a fuel rod. A large CPU time is often required to get a stable solution of temperature transients in the TRAC calculation with a small axial node size (less than 1.0 mm), because the heat conduction equation is discretized explicitly. To eliminate the restriction of the maximum time step size by the heat conduction calculation, an implicit method for solving the heat conduction equation was developed and implemented into the TRAC code. Several assessment calculations were performed with the original and modified TRAC codes. Through comparison with theoretical solutions and assessment calculation results, it is confirmed that the implicit method is reliable and has been successfully implemented into the TRAC code. It is demonstrated that the implicit method makes the heat conduction calculation practical even for the analyses of temperature transients with an axial node size of less than 0.1 mm. (author)
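
    As a minimal illustration of why an implicit discretization removes the time-step restriction (this is a generic backward-Euler 1-D conduction step, not the TRAC implementation; names and boundary treatment are assumptions), one step reduces to a linear solve:

        import numpy as np

        def implicit_conduction_step(T, dt, dx, alpha):
            """One backward-Euler step of dT/dt = alpha * d2T/dx2 with fixed end temperatures."""
            n = len(T)
            r = alpha * dt / dx**2               # may be >> 0.5: no explicit stability limit
            A = np.zeros((n, n))
            np.fill_diagonal(A, 1.0 + 2.0 * r)
            A[np.arange(n - 1), np.arange(1, n)] = -r     # upper diagonal
            A[np.arange(1, n), np.arange(n - 1)] = -r     # lower diagonal
            A[0, :] = 0.0
            A[0, 0] = 1.0                                 # Dirichlet boundary rows
            A[-1, :] = 0.0
            A[-1, -1] = 1.0
            return np.linalg.solve(A, T)                  # new temperature profile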

  13. Generation of Java code from Alvis model

    Science.gov (United States)

    Matyasik, Piotr; Szpyrka, Marcin; Wypych, Michał

    2015-12-01

    Alvis is a formal language that combines graphical modelling of interconnections between system entities (called agents) with a high-level programming language describing the behaviour of any individual agent. An Alvis model can be verified formally with model checking techniques applied to the model LTS graph that represents the model state space. This paper presents the transformation of an Alvis model into executable Java code. Thus, the approach provides a method of automatic generation of a Java application from a formally verified Alvis model.

  14. 'Statistical methods for automatic crack detection based on vibrothermography sequence-of-images data' by M. Li,S. D. Holland and W. Q. Meeker: Discussion 1

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2010-01-01

    Roč. 26, č. 5 (2010), s. 496-501 ISSN 1524-1904 Institutional research plan: CEZ:AV0Z10750506 Keywords: image analysis * statistical characteristics * material tests Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.829, year: 2010 http://library.utia.cas.cz/separaty/2011/SI/volf-statistical methods for automatic crack detection based on vibrothermography sequence-of-images data.pdf

  15. A comparative method for finding and folding RNA secondary structures within protein-coding regions

    DEFF Research Database (Denmark)

    Pedersen, Jakob Skou; Meyer, Irmtraud Margret; Forsberg, Roald

    2004-01-01

    that RNA-DECODER's parameters can be automatically trained to successfully fold known secondary structures within the HCV genome. We scan the genomes of HCV and polio virus for conserved secondary-structure elements, and analyze performance as a function of available evolutionary information. On known...... secondary structures, RNA-DECODER shows a sensitivity similar to the programs MFOLD, PFOLD and RNAALIFOLD. When scanning the entire genomes of HCV and polio virus for structure elements, RNA-DECODER's results indicate a markedly higher specificity than MFOLD, PFOLD and RNAALIFOLD....

  16. Development of a computer code for neutronic calculations of a hexagonal lattice of nuclear reactor using the flux expansion nodal method

    Directory of Open Access Journals (Sweden)

    Mohammadnia Meysam

    2013-01-01

    Full Text Available The flux expansion nodal method is a suitable method for considering nodalization effects in node corners. In this paper we used this method to solve the intra-nodal flux analytically. Then, a computer code, named MA.CODE, was developed using the C# programming language. The code is capable of reactor core calculations for hexagonal geometries in two energy groups and three dimensions. MA.CODE imports two-group constants from the WIMS code and calculates the effective multiplication factor, thermal and fast neutron flux in three dimensions, power density, reactivity, and the power peaking factor of each fuel assembly. Some of the code's merits are low calculation time and a user-friendly interface. MA.CODE results showed good agreement with IAEA benchmarks, i.e., AER-FCM-101 and AER-FCM-001.

  17. Four Methods to Determine RIASEC Codes for College Majors and a Comparison of Hit Rates.

    Science.gov (United States)

    Harrington, Thomas F.; And Others

    1993-01-01

    Compared Realistic, Investigative, Artistic, Social, Enterprising, and Conventional (RIASEC) college major codes derived from surveying students enrolled in 28 majors, judgments of subject matter and counseling experts, and workers employed in jobs related to majors. Highest degree of agreement was 96% between student Career Decision-Making codes…

  18. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.

  19. Systematic analysis of coding and noncoding DNA sequences using methods of statistical linguistics

    Science.gov (United States)

    Mantegna, R. N.; Buldyrev, S. V.; Goldberger, A. L.; Havlin, S.; Peng, C. K.; Simons, M.; Stanley, H. E.

    1995-01-01

    We compare the statistical properties of coding and noncoding regions in eukaryotic and viral DNA sequences by adapting two tests developed for the analysis of natural languages and symbolic sequences. The data set comprises all 30 sequences of length above 50 000 base pairs in GenBank Release No. 81.0, as well as the recently published sequences of C. elegans chromosome III (2.2 Mbp) and yeast chromosome XI (661 Kbp). We find that for the three chromosomes we studied the statistical properties of noncoding regions appear to be closer to those observed in natural languages than those of coding regions. In particular, (i) an n-tuple Zipf analysis of noncoding regions reveals a regime close to power-law behavior while the coding regions show logarithmic behavior over a wide interval, and (ii) an n-gram entropy measurement shows that the noncoding regions have a lower n-gram entropy (and hence a larger "n-gram redundancy") than the coding regions. In contrast to the three chromosomes, we find that for vertebrates such as primates and rodents and for viral DNA, the difference between the statistical properties of coding and noncoding regions is not pronounced and therefore the results of the analyses of the investigated sequences are less conclusive. After noting the intrinsic limitations of the n-gram redundancy analysis, we also briefly discuss the failure of the zeroth- and first-order Markovian models or simple nucleotide repeats to account fully for these "linguistic" features of DNA. Finally, we emphasize that our results by no means prove the existence of a "language" in noncoding DNA.
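
    As a toy illustration of the two statistics mentioned (an n-tuple rank-frequency table for a Zipf plot and the n-gram Shannon entropy), not the authors' full analysis pipeline, the quantities can be computed as follows; the example sequence is arbitrary:

        from collections import Counter
        from math import log2

        def ngram_counts(seq, n):
            return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

        def zipf_ranks(seq, n):
            """(rank, frequency) pairs for an n-tuple Zipf plot."""
            counts = sorted(ngram_counts(seq, n).values(), reverse=True)
            return list(enumerate(counts, start=1))

        def ngram_entropy(seq, n):
            """Shannon entropy (bits) of the n-gram distribution."""
            counts = ngram_counts(seq, n)
            total = sum(counts.values())
            return -sum((c / total) * log2(c / total) for c in counts.values())

        seq = "ATGCGATACGCTTAGGCTAATCGATCG"
        print(ngram_entropy(seq, 3), zipf_ranks(seq, 3)[:5])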

  20. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  1. Development of a three-dimensional neutron transport code DFEM based on the double finite element method

    International Nuclear Information System (INIS)

    Fujimura, Toichiro

    1996-01-01

    A three-dimensional neutron transport code DFEM has been developed by the double finite element method to analyze reactor cores with complex geometry, such as large fast reactors. The solution algorithm is based on the double finite element method, in which space and angle finite elements are employed. A reactor core system can be divided into triangular and/or quadrangular prism elements, and the spatial distribution of neutron flux in each element is approximated with linear basis functions. As for the angular variables, various basis functions are applied, and their characteristics were clarified by comparison. In order to enhance the accuracy, a general method is derived to remedy the truncation errors at reflective boundaries, which are inherent in the conventional FEM. An adaptive acceleration method and the source extrapolation method were applied to accelerate the convergence of the iterations. The code structure is outlined and explanations are given on how to prepare input data. A sample input list is shown for reference. The eigenvalues and flux distributions for real-scale fast reactors and the NEA benchmark problems are presented and discussed in comparison with the results of other transport codes. (author)

  2. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by Codon Structure Factor (CSF) and by a method that we called Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in 1st and 2nd position of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
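
    As a heavily simplified sketch of two of the ingredients listed above (in-frame stop-codon frequency and purine bias by codon position), with an illustrative decision rule whose thresholds are not the published ones:

        STOPS = {"TAA", "TAG", "TGA"}
        PURINES = set("AG")

        def codon_stats(orf):
            """Stop-codon frequency and per-position purine frequency of an in-frame ORF
            (assumes len(orf) >= 3)."""
            codons = [orf[i:i + 3] for i in range(0, len(orf) - 2, 3)]
            stop_freq = sum(c in STOPS for c in codons) / len(codons)
            purine_by_pos = [sum(c[p] in PURINES for c in codons) / len(codons)
                             for p in range(3)]
            return stop_freq, purine_by_pos

        def looks_coding(orf, purine1_min=0.55, stop_max=0.01):
            # illustrative rule: strong purine bias in codon position 1 and
            # essentially no in-frame stop codons
            stop_freq, purine_by_pos = codon_stats(orf)
            return purine_by_pos[0] >= purine1_min and stop_freq <= stop_max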

  3. Management of natural resources through automatic cartographic inventory

    Science.gov (United States)

    Rey, P. A.; Gourinard, Y.; Cambou, F. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Significant correspondence codes relating ERTS imagery to ground truth from vegetation and geology maps have been established. The use of color equidensity and color composite methods for selecting zones of equal densitometric value on ERTS imagery was perfected. The primary interest of the temporal color composite is stressed. A chain of transfer operations from ERTS imagery to the automatic mapping of natural resources was developed.

  4. Computer program for automatic generation of BWR control rod patterns

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Hsia, M.Y.

    1990-01-01

    A computer program named OCTOPUS has been developed to automatically determine a control rod pattern that approximates some desired target power distribution as closely as possible without violating any thermal safety or reactor criticality constraints. The program OCTOPUS performs a semi-optimization task based on the method of approximation programming (MAP) to develop control rod patterns. The SIMULATE-E code is used to determine the nucleonic characteristics of the reactor core state

  5. Quantum dots-based double imaging combined with organic dye imaging to establish an automatic computerized method for cancer Ki67 measurement

    Science.gov (United States)

    Wang, Lin-Wei; Qu, Ai-Ping; Liu, Wen-Lou; Chen, Jia-Mei; Yuan, Jing-Ping; Wu, Han; Li, Yan; Liu, Juan

    2016-02-01

    As a widely used proliferative marker, Ki67 has important impacts on cancer prognosis, especially for breast cancer (BC). However, variations in analytical practice make it difficult for pathologists to manually measure Ki67 index. This study aims to establish quantum dots (QDs)-based double imaging of nuclear Ki67 as red signal by QDs-655, cytoplasmic cytokeratin (CK) as yellow signal by QDs-585, and organic dye imaging of cell nucleus as blue signal by 4',6-diamidino-2-phenylindole (DAPI), and to develop a computer-aided automatic method for Ki67 index measurement. The newly developed automatic computerized Ki67 measurement could efficiently recognize and count Ki67-positive cancer cell nuclei with red signals and cancer cell nuclei with blue signals within cancer cell cytoplasm with yellow signals. Comparisons of computerized Ki67 index, visual Ki67 index, and marked Ki67 index for 30 patients (90 images) with Ki67 ≤ 10% (low grade), 10% < Ki67 < 50% (moderate grade), and Ki67 ≥ 50% (high grade) showed computerized Ki67 counting is better than visual Ki67 counting, especially for Ki67 low and moderate grades. Based on QDs-based double imaging and organic dye imaging on BC tissues, this study successfully developed an automatic computerized Ki67 counting method to measure Ki67 index.

  6. Automatically Preparing Safe SQL Queries

    Science.gov (United States)

    Bisht, Prithvi; Sistla, A. Prasad; Venkatakrishnan, V. N.

    We present the first sound program source transformation approach for automatically transforming the code of a legacy web application to employ PREPARE statements in place of unsafe SQL queries. Our approach therefore opens the way for eradicating the SQL injection threat vector from legacy web applications.
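
    The transformation the approach performs can be illustrated with a small example; the snippet below uses Python's sqlite3 module purely as a stand-in for the Java/PHP PREPARE APIs of a legacy web application, and the table and variable names are hypothetical:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
        user, pw = "alice", "x' OR '1'='1"   # hostile, attacker-controlled input

        # Unsafe: user-controlled text is spliced directly into the SQL string
        unsafe = f"SELECT * FROM users WHERE name = '{user}' AND pw = '{pw}'"

        # Safe equivalent: the query structure is fixed, values are bound as parameters
        safe = "SELECT * FROM users WHERE name = ? AND pw = ?"
        rows = conn.execute(safe, (user, pw)).fetchall()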

  7. Analysis of piping systems by finite element method using code SAP-IV

    International Nuclear Information System (INIS)

    Cizelj, L.; Ogrizek, D.

    1987-01-01

    Due to the extensive and multiple use of the computer code SAP-IV, we decided to install it on a VAX 11/750 machine. Installation required a large amount of programming due to great discrepancies between the CDC (the original program version) and the VAX. Testing was performed basically in the field of pipe elements, based on a comparison between results obtained with the codes PSAFE2, DOCIJEV, PIPESD and SAP-V. Besides, a model of a reactor pressure vessel with 3-D thick shell elements was built. The capabilities show good agreement with the results of the other programs mentioned above. Along with the package installation, graphical postprocessors are being developed for mesh plotting. (author)

  8. Review of solution approach, methods, and recent results of the RELAP5 system code

    International Nuclear Information System (INIS)

    Trapp, J.A.; Ransom, V.H.

    1983-01-01

    The present RELAP5 code is based on a semi-implicit numerical scheme for the hydrodynamic model. The basic guidelines employed in the development of the semi-implicit numerical scheme are discussed and the numerical features of the scheme are illustrated by analysis for a simple, but analogous, single-equation model. The basic numerical scheme is recorded and results from several simulations are presented. The experimental results and code simulations are used in a complementary fashion to develop insights into nuclear-plant response that would not be obtained if either tool were used alone. Further analysis using the simple single-equation model is carried out to yield insights that are presently being used to implement a more-implicit multi-step scheme in the experimental version of RELAP5. The multi-step implicit scheme is also described

  9. Some questions of using coding theory and analytical calculation methods on computers

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1987-01-01

    Main results of investigations devoted to the application of the theory and practice of error-correcting codes are presented. These results are used to create very fast units for the selection of events registered in multichannel detectors of nuclear particles. Using this theory and analytical computer calculations, essentially new combinational devices, for example parallel encoders, have been developed. Questions concerning the creation of a new algorithm for the calculation of digital functions by computers and problems of devising universal, dynamically reprogrammable logic modules are discussed

  10. Method for computing self-consistent solution in a gun code

    Science.gov (United States)

    Nelson, Eric M

    2014-09-23

    Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.

  11. The codes WAV3BDY and WAV4BDY and the variational Monte Carlo method

    International Nuclear Information System (INIS)

    Schiavilla, R.

    1987-01-01

    A description of the codes WAV3BDY and WAV4BDY, which generate the variational ground state wave functions of the A=3 and 4 nuclei, is given, followed by a discussion of the Monte Carlo integration technique, which is used to calculate expectation values and transition amplitudes of operators, and for whose implementation WAV3BDY and WAV4BDY are well suited

  12. Review and comparison of effective delayed neutron fraction calculation methods with Monte Carlo codes

    OpenAIRE

    Bécares, V.; Pérez Martín, S.; Vázquez Antolín, Miriam; Villamarín, D.; Martín Fuertes, Francisco; González Romero, E.M.; Merino Rodríguez, Iván

    2014-01-01

    The calculation of the effective delayed neutron fraction, beff , with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for beff without the need of explicitly determining the adjoint flux. In this paper, we make a review of some of these techniques; namely we have analyzed two variants of what we...

  13. Image encryption with chaotic random codes by grey relational grade and Taguchi method

    Science.gov (United States)

    Huang, Chuan-Kuei; Nien, Hsiau-Hsian; Changchien, Shih-Kuen; Shieh, Hong-Wei

    2007-12-01

    This paper presents a novel scheme for the implementation of quasi-optimal chaotic random codes (CRC). Usually, the localization grey relational grade (LGRG) approaches 1 when less random codes are used to encrypt digital color images; on the contrary, highly randomized codes produce highly independent images. In this paper, the LGRG between the original and encoded image is used as the quality characteristic, and the chaotic system's initial values x0, y0 and z0, which influence the quality characteristic, are chosen as control factors and their levels are decided. According to the control factors and levels, this paper applied a Taguchi orthogonal array for the experiments and generated a factor response graph to figure out a set of chaotic initial values. Finally, the quasi-optimal CRC are decided by these initial values. Eventually, the most effective encryption of digital color images can be obtained by applying the quasi-optimal CRC. The experimental results have demonstrated that the proposed scheme is feasible and efficient.
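
    As a simplified sketch of how chaotic initial values generate a code stream for image encryption (a 1-D logistic map is used here for brevity instead of the paper's 3-D system with initial values x0, y0 and z0; all parameters are illustrative):

        import numpy as np

        def chaotic_codes(x0, n, r=3.99, burn_in=100):
            """Byte keystream from a logistic map (stand-in for the 3-D chaotic system)."""
            x, out = x0, np.empty(n, dtype=np.uint8)
            for _ in range(burn_in):            # discard the transient
                x = r * x * (1.0 - x)
            for i in range(n):
                x = r * x * (1.0 - x)
                out[i] = int(x * 256) % 256
            return out

        def encrypt(image_bytes, x0):
            ks = chaotic_codes(x0, image_bytes.size)
            return image_bytes ^ ks             # XOR: applying it twice decrypts

        img = np.random.randint(0, 256, size=64 * 64, dtype=np.uint8)
        enc = encrypt(img, x0=0.4123)
        assert np.array_equal(encrypt(enc, x0=0.4123), img)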

  14. An error estimation for the implicit Euler method recommended for use in the RELAP4 family of codes

    International Nuclear Information System (INIS)

    Golos, S.

    1989-01-01

    A simple estimate of the absolute value of the error as a function of the number of steps performed has been derived for the implicit (backward) Euler method for the case of a single ordinary differential equation (ODE). This estimate distinctly shows the way and the degree to which the implicit Euler method (recommended in user guides for the RELAP4 family of codes) can give more inaccurate results than the explicit (forward) method. The short and simple reasoning presented should be treated as an indication of the problem. Error estimation for a general system of ODEs is an extremely difficult and complex task, and it is still not completely solved
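
    As a purely illustrative aside (the textbook scalar test problem, not the derivation discussed in the paper), the effect can be seen on $y' = \lambda y$: one step of size $h$ gives

        y_{n+1} = (1 + \lambda h)\, y_n \quad \text{(explicit)}, \qquad y_{n+1} = \frac{y_n}{1 - \lambda h} \quad \text{(implicit)},

    and both differ from the exact factor $e^{\lambda h}$ by a local error of order $h^2$,

        e^{\lambda h} - (1 + \lambda h) = \tfrac{1}{2}\lambda^2 h^2 + O(h^3), \qquad e^{\lambda h} - \frac{1}{1 - \lambda h} = -\tfrac{1}{2}\lambda^2 h^2 + O(h^3),

    so after $N = t/h$ steps either scheme accumulates an $O(h)$ error: what the implicit form buys is unconditional stability, not higher accuracy, and for a given step size its results can indeed be less accurate than the explicit ones.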

  15. Study on fault diagnosis method for nuclear power plant based on hadamard error-correcting output code

    Science.gov (United States)

    Mu, Y.; Sheng, G. M.; Sun, P. N.

    2017-05-01

    The technology of real-time fault diagnosis for nuclear power plants (NPPs) is of great significance for improving reactor safety and economy. Failure samples of nuclear power plants are difficult to obtain, and the support vector machine is an effective algorithm for small-sample problems. An NPP is a very complex system, so in practice many types of NPP failures may occur. The ECOC matrix is constructed from a Hadamard error-correcting code, and the decoding method is the Hamming distance method. The base models are established with the LIBSVM algorithm. The results show that this method can diagnose NPP faults effectively.
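
    A minimal sketch of the Hadamard-based ECOC construction with Hamming-distance decoding is given below; scikit-learn SVMs stand in for the paper's LIBSVM base models, the data are assumed to be numeric feature vectors with integer class labels, and all sizes are illustrative (at most 8 classes here):

        import numpy as np
        from sklearn.svm import SVC

        def hadamard(k):
            """Sylvester Hadamard matrix of order 2**k with +/-1 entries."""
            H = np.array([[1]])
            for _ in range(k):
                H = np.block([[H, H], [H, -H]])
            return H

        def train_ecoc(X, y, n_classes):
            """One binary SVM per usable code column; y holds labels 0..n_classes-1."""
            H = hadamard(3)[:n_classes, 1:]      # rows = class codewords, drop constant column
            cols, models = [], []
            for col in range(H.shape[1]):
                if len(set(H[:, col])) < 2:      # skip columns that cannot split the classes
                    continue
                models.append(SVC(kernel="rbf").fit(X, H[y, col]))
                cols.append(col)
            return H[:, cols], models

        def predict_ecoc(X, H, models):
            bits = np.column_stack([m.predict(X) for m in models])
            # Hamming-style decoding: pick the class codeword closest to the output word
            dists = (bits[:, None, :] != H[None, :, :]).sum(axis=2)
            return dists.argmin(axis=1)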

  16. MOCUM: A two-dimensional method of characteristics code based on constructive solid geometry and unstructured meshing for general geometries

    International Nuclear Information System (INIS)

    Yang Xue; Satvat, Nader

    2012-01-01

    Highlight: ► A two-dimensional numerical code based on the method of characteristics is developed. ► The complex arbitrary geometries are represented by constructive solid geometry and decomposed by unstructured meshing. ► Excellent agreement between Monte Carlo and the developed code is observed. ► High efficiency is achieved by parallel computing. - Abstract: A transport theory code MOCUM based on the method of characteristics as the flux solver with an advanced general geometry processor has been developed for two-dimensional rectangular and hexagonal lattice and full core neutronics modeling. In the code, the core structure is represented by the constructive solid geometry that uses regularized Boolean operations to build complex geometries from simple polygons. Arbitrary-precision arithmetic is also used in the process of building geometry objects to eliminate the round-off error from the commonly used double precision numbers. Then, the constructed core frame will be decomposed and refined into a Conforming Delaunay Triangulation to ensure the quality of the meshes. The code is fully parallelized using OpenMP and is verified and validated by various benchmarks representing rectangular, hexagonal, plate type and CANDU reactor geometries. Compared with Monte Carlo and deterministic reference solution, MOCUM results are highly accurate. The mentioned characteristics of the MOCUM make it a perfect tool for high fidelity full core calculation for current and GenIV reactor core designs. The detailed representation of reactor physics parameters can enhance the safety margins with acceptable confidence levels, which lead to more economically optimized designs.

  17. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    Science.gov (United States)

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter

  18. Automatic image analysis methods for the determination of stereological parameters - application to the analysis of densification during solid state sintering of WC-Co compacts

    Science.gov (United States)

    Missiaen; Roure

    2000-08-01

    Automatic image analysis methods which were used to determine microstructural parameters of sintered materials are presented. Estimation of stereological parameters at interfaces, when the system contains more than two phases, is particularly detailed. It is shown that the specific surface areas and mean curvatures of the various interfaces can be estimated in the numerical space of the images. The methods are applied to the analysis of densification during solid state sintering of WC-Co compacts. The microstructural evolution is commented on. Application of microstructural measurements to the analysis of densification kinetics is also discussed.

  19. New quadrature-based moment method for the mixing of inert polydisperse fluidized powders in commercial CFD codes

    OpenAIRE

    Mazzei, L.; Marchisio, D. L.; Lettieri, P.

    2012-01-01

    To describe the behavior of polydisperse multiphase systems in an Eulerian framework, we solved the population balance equation (PBE), letting it account only for particle size dependencies. To integrate the PBE within a commercial computational fluid dynamics code, we formulated and implemented a novel version of the quadrature method of moments (QMOM). This no longer assumes that the particles move with the same velocity, allowing the latter to be size-dependent. To verify and test the mode...

  20. New Technique for Automatic Segmentation of Blood Vessels in CT Scan Images of Liver Based on Optimized Fuzzy C-Means Method.

    Science.gov (United States)

    Ahmadi, Katayoon; Karimi, Abbas; Fouladi Nia, Babak

    2016-01-01

    Automatic segmentation of medical CT scan images is one of the most challenging fields in digital image processing. The goal of this paper is to discuss the automatic segmentation of CT scan images to detect and separate vessels in the liver. The segmentation of liver vessels is very important in liver surgery planning and in identifying the structure of vessels and their relationship to tumors. The Fuzzy C-means (FCM) method has already been proposed for segmentation of liver vessels. Due to its classical optimization process, this method suffers from sensitivity to the initial values of the class centers and from convergence to local minima. In this article, a method based on FCM in conjunction with genetic algorithms (GA) is applied for segmentation of the liver's blood vessels. This method was simulated and validated using 20 CT scan images of the liver. The results showed that, in comparison with the FCM algorithm, the new method reached an accuracy of 91%, sensitivity of 83.62%, specificity of 94.11%, and a CPU time of 27.17. Moreover, selection of optimal and robust parameters in the initial step led to rapid convergence of the proposed method. The outcome of this research assists medical teams in estimating disease progress and selecting proper treatments.
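
    For orientation, the standard fuzzy C-means updates that such a method iterates are sketched below (this is the textbook FCM core, not the authors' implementation; the genetic algorithm would supply and evolve the initial centers, which is only indicated here by the `centers` argument):

        import numpy as np

        def fcm(X, centers, m=2.0, n_iter=50, eps=1e-10):
            """Standard FCM membership/center updates from given initial centers.
            X: (n_samples, n_features), centers: (n_clusters, n_features)."""
            for _ in range(n_iter):
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
                u = 1.0 / (d ** (2.0 / (m - 1.0)))
                u /= u.sum(axis=1, keepdims=True)        # fuzzy memberships in [0, 1]
                um = u ** m
                centers = (um.T @ X) / um.sum(axis=0)[:, None]
            return u, centers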

  1. Proceedings of the Seminar on Methods and Codes for Assessing the off-site consequences of nuclear accidents. Volume 1

    International Nuclear Information System (INIS)

    Kelly, G.N.; Luykx, F.

    1991-01-01

    The Commission of the European Communities, within the framework of its 1980-84 radiation protection research programme, initiated a two-year project in 1983 entitled 'methods for assessing the radiological impact of accidents' (Maria). This project was continued in a substantially enlarged form within the 1985-89 research programme. The main objectives of the project were, firstly, to develop a new probabilistic accident consequence code that was modular, incorporated the best features of those codes already in use, could be readily modified to take account of new data and model developments and would be broadly applicable within the EC; secondly, to acquire a better understanding of the limitations of current models and to develop more rigorous approaches where necessary; and, thirdly, to quantify the uncertainties associated with the model predictions. This research led to the development of the accident consequence code Cosyma (COde System from MAria), which will be made generally available later in 1990. The numerous and diverse studies that have been undertaken in support of this development are summarized in this paper, together with indications of where further effort might be most profitably directed. Consideration is also given to related research directed towards the development of real-time decision support systems for use in off-site emergency management

  2. An Investigation of the Methods of Logicalizing the Code-Checking System for Architectural Design Review in New Taipei City

    Directory of Open Access Journals (Sweden)

    Wei-I Lee

    2016-12-01

    Full Text Available The New Taipei City Government developed a Code-checking System (CCS) using Building Information Modeling (BIM) technology to facilitate an architectural design review in 2014. This system was intended to solve problems caused by cognitive gaps between designer and reviewer in the design review process. Along with considering information technology, the most important issue for the system’s development has been the logicalization of literal building codes. Therefore, to enhance the reliability and performance of the CCS, this study uses the Fuzzy Delphi Method (FDM) on the basis of design thinking and communication theory to investigate the semantic difference and cognitive gaps among participants in the design review process and to propose the direction of system development. Our empirical results lead us to recommend grouping multi-stage screening and weighted assisted logicalization of non-quantitative building codes to improve the operability of CCS. Furthermore, CCS should integrate the Expert Evaluation System (EES) to evaluate the design value under qualitative building codes.

  3. An improved phase-locked loop method for automatic resonance frequency tracing based on static capacitance broadband compensation for a high-power ultrasonic transducer.

    Science.gov (United States)

    Dong, Hui-juan; Wu, Jian; Zhang, Guang-yu; Wu, Han-fu

    2012-02-01

    The phase-locked loop (PLL) method is widely used for automatic resonance frequency tracing (ARFT) of high-power ultrasonic transducers, which are usually vibrating systems with high mechanical quality factor (Qm). However, a heavily-loaded transducer usually has a low Qm because the load has a large mechanical loss. In this paper, a series of theoretical analyses is carried out to detail why the traditional PLL method could cause serious frequency tracing problems, including loss of lock, antiresonance frequency tracing, and large tracing errors. The authors propose an improved ARFT method based on static capacitance broadband compensation (SCBC), which is able to address these problems. Experiments using a generator based on the novel method were carried out using crude oil as the transducer load. The results obtained have demonstrated the effectiveness of the novel method, compared with the conventional PLL method, in terms of improved tracing accuracy (±9 Hz) and immunity to antiresonance frequency tracing and loss of lock.

  4. ORIGEN-ARP 2.00, Isotope Generation and Depletion Code System-Matrix Exponential Method with GUI and Graphics Capability

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: ORIGEN-ARP was developed for the Nuclear Regulatory Commission and the Department of Energy to satisfy a need for an easy-to-use standardized method of isotope depletion/decay analysis for spent fuel, fissile material, and radioactive material. It can be used to solve for spent fuel characterization, isotopic inventory, radiation source terms, and decay heat. This release of ORIGEN-ARP is a standalone code package that contains an updated version of the SCALE-4.4a ORIGEN-S code. It contains a subset of the modules, data libraries, and miscellaneous utilities in SCALE-4.4a. This package is intended for users who do not need the entire SCALE package. ORIGEN-ARP 2.00 (2-12-2002) differs from the previous release ORIGEN-ARP 1.0 (July 2001) in the following ways: 1.The neutron source and energy spectrum routines were replaced with computational algorithms and data from the SOURCES-4B code (RSICC package CCC-661) to provide more accurate spontaneous fission and (alpha,n) neutron sources, and a delayed neutron source capability was added. 2.The printout of the fixed energy group structure photon tables was removed. Gamma sources and spectra are now printed for calculations using the Master Photon Library only. 2 - Methods: ORIGEN-ARP is an automated sequence to perform isotopic depletion / decay calculations using the ARP and ORIGEN-S codes of the SCALE system. The sequence includes the OrigenArp for Windows graphical user interface (GUI) that prepares input for ARP (Automated Rapid Processing) and ORIGEN-S. ARP automatically interpolates cross sections for the ORIGEN-S depletion/decay analysis using enrichment, burnup, and, optionally moderator density, from a set of libraries generated with the SCALE SAS2 depletion sequence. Library sets for four LWR fuel assembly designs (BWR 8 x 8, PWR 14 x 14, 15 x 15, 17 x 17) are included. The libraries span enrichments from 1.5 to 5 wt% U-235 and burnups of 0 to 60,000 MWD/MTU. Other

  5. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2009-01-01

    A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composite (LFRC) with high fiber volume fraction with random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through random...

  6. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the domain of digitized satellite images, the need for large image dimensions is increasing considerably. To transmit or store such images (more than 6000 by 6000 pixels), we need to reduce their data volume, and so we have to use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme, based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among all of the 3x3x2 systems available. Because, for technological reasons, real-time performance is not always reached (for all the compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine, able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author) [fr

  7. Development of a method for fast and automatic radiocarbon measurement of aerosol samples by online coupling of an elemental analyzer with a MICADAS AMS

    Energy Technology Data Exchange (ETDEWEB)

    Salazar, G., E-mail: gary.salazar@dcb.unibe.ch [Department of Chemistry and Biochemistry & Oeschger Centre for Climate Change Research, University of Bern, 3012 Bern (Switzerland); Zhang, Y.L.; Agrios, K. [Department of Chemistry and Biochemistry & Oeschger Centre for Climate Change Research, University of Bern, 3012 Bern (Switzerland); Paul Scherrer Institut (PSI), 5232 Villigen (Switzerland); Szidat, S. [Department of Chemistry and Biochemistry & Oeschger Centre for Climate Change Research, University of Bern, 3012 Bern (Switzerland)

    2015-10-15

    A fast and automatic method for radiocarbon analysis of aerosol samples is presented. This type of analysis requires high number of sample measurements of low carbon masses, but accepts precisions lower than for carbon dating analysis. The method is based on online Trapping CO{sub 2} and coupling an elemental analyzer with a MICADAS AMS by means of a gas interface. It gives similar results to a previously validated reference method for the same set of samples. This method is fast and automatic and typically provides uncertainties of 1.5–5% for representative aerosol samples. It proves to be robust and reliable and allows for overnight and unattended measurements. A constant and cross contamination correction is included, which indicates a constant contamination of 1.4 ± 0.2 μg C with 70 ± 7 pMC and a cross contamination of (0.2 ± 0.1)% from the previous sample. A Real-time online coupling version of the method was also investigated. It shows promising results for standard materials with slightly higher uncertainties than the Trapping online approach.

  8. Flexible Automatic Discretization for Finite Differences: Eliminating the Human Factor

    Science.gov (United States)

    Pranger, Casper

    2017-04-01

    In the geophysical numerical modelling community, finite differences are (in part due to their small footprint) a popular spatial discretization method for PDEs in the regular-shaped continuum that is the Earth. However, they rapidly become prone to programming mistakes as the physics increases in complexity. To eliminate opportunities for human error, we have designed an automatic discretization algorithm using Wolfram Mathematica, in which the user supplies symbolic PDEs, the number of spatial dimensions, and a choice of symbolic boundary conditions, and the script transforms this information into matrix and right-hand-side rules ready for use in a C++ code that will accept them. The symbolic PDEs are further used to automatically develop and perform manufactured-solution benchmarks, ensuring physical fidelity at all stages while providing pragmatic targets for numerical accuracy. We find that this procedure greatly accelerates code development and provides a great deal of flexibility in one's choice of physics.
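
    The record describes a Mathematica pipeline; the sketch below shows the same idea in miniature with SymPy (an assumption, since the original tool is Mathematica): derive a finite-difference stencil symbolically and turn it into matrix rules, so the coefficients are generated rather than hand-coded.

```python
import sympy as sp
import numpy as np

# Symbolically derive the central 3-point weights for d2u/dx2 on a uniform grid.
h = sp.symbols("h", positive=True)
w = sp.finite_diff_weights(2, [-h, sp.Integer(0), h], 0)[-1][-1]
print(w)  # expected: [1/h**2, -2/h**2, 1/h**2]

# Turn the symbolic weights into a numerical matrix rule for u''(x) = f(x)
# on n interior points with spacing dx (Dirichlet boundaries assumed).
def assemble_matrix(n, dx):
    coeffs = [float(c.subs(h, dx)) for c in w]
    A = np.zeros((n, n))
    for i in range(n):
        for offset, c in zip((-1, 0, 1), coeffs):
            j = i + offset
            if 0 <= j < n:
                A[i, j] = c
    return A

print(assemble_matrix(5, 0.25))
```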

  9. Prioritized degree distribution in wireless sensor networks with a network coded data collection method.

    Science.gov (United States)

    Wan, Jan; Xiong, Naixue; Zhang, Wei; Zhang, Qinchao; Wan, Zheng

    2012-12-12

    The reliability of wireless sensor networks (WSNs) can be greatly affected by failures of sensor nodes due to energy exhaustion or harsh external environmental conditions. Such failures seriously affect data persistence and collection efficiency. Strategies based on network coding technology for WSNs, such as LTCDS, can improve data persistence without massive redundancy. However, due to the poor intermediate performance of LTCDS, a serious 'cliff effect' may appear during the decoding period, and the source data are hard to recover at the sink nodes before sufficient encoded packets have been collected. In this paper, the influence of the coding degree distribution strategy on the 'cliff effect' is examined, and the prioritized data storage and dissemination algorithm PLTD-ALPHA is presented to achieve better data persistence and recovery performance. With PLTD-ALPHA, the data in the sensor network nodes follow a degree distribution that increases along with the predefined degree level, and the persistent data packets can be submitted to the sink node in order of their degree. Finally, the performance of PLTD-ALPHA is evaluated, and the experimental results show that PLTD-ALPHA can greatly improve the data collection performance and decoding efficiency, while data persistence is not notably affected.
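
    The sketch below illustrates the general idea of degree-based network-coded storage that the record builds on: each stored packet XORs a small, randomly chosen set of source packets whose count (the "degree") is drawn from a degree distribution. The distribution and payloads here are placeholders, not the PLTD-ALPHA distribution from the paper.

```python
import random

def encode_packet(source_packets, degree_dist, rng=random):
    """Combine `degree` randomly chosen source packets with XOR (LT-code style)."""
    degrees, probs = zip(*degree_dist)
    degree = rng.choices(degrees, weights=probs, k=1)[0]
    chosen = rng.sample(range(len(source_packets)), k=degree)
    combined = 0
    for idx in chosen:
        combined ^= source_packets[idx]
    return sorted(chosen), combined  # neighbour list + encoded payload

# Toy example: 8 source packets, each a 16-bit integer payload.
sources = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081, 0x92A3, 0xB4C5, 0xD6E7, 0xF809]
toy_degree_dist = [(1, 0.2), (2, 0.5), (3, 0.3)]   # placeholder distribution
for _ in range(3):
    print(encode_packet(sources, toy_degree_dist))
```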

  10. Automatic exploitation system for photographic dosemeters; Systeme d'exploitation automatique des dosimetres photographiques

    Energy Technology Data Exchange (ETDEWEB)

    Magri, Y.; Devillard, D.; Godefroit, J.L.; Barillet, C.

    1997-01-01

    The Laboratory of Dosimetry Exploitation (LED) has developed an installation that allows photographic film dosemeters to be processed automatically. The system identifies the films by bar codes and performs the dose measurement with a completely automatic reader. The principle consists in assembling the emulsions to be processed into a ribbon and developing them in a circulating machine. The film blackening is measured on a reading plate with fourteen reading points, through which the emulsion ribbon circulates. The processing uses the usual dose calculation method, implemented in dedicated computer codes. A comparison on 2000 dosemeters has shown that the manual and automatic methods give the same results. The system has been operated by the LED since July 1995. (N.C.).

  11. [Analysis of dental agenesis patterns of the oligodontia patients using the method of tooth agenesis code].

    Science.gov (United States)

    Zhu, Jun-xia; Zheng, Shu-guo; Ge, Li-hong

    2013-11-01

    To analyze the common dental agenesis patterns of oligodontia patients, information on 64 oligodontia patients was collected, including histories, oral examinations and panoramic radiographs. The Tooth Agenesis Code (TAC) procedure was used to analyze the agenesis pattern of each quadrant. In the maxilla, 63% (40/64) of patients on the right side and 58% (37/64) on the left side could be described using eight different patterns; the most common pattern was agenesis of the maxillary lateral incisor, canine and both premolars. In the mandible, 52% (33/64) on the right side and 53% (34/64) on the left side could be described using only five different patterns; the most common pattern was agenesis of both mandibular premolars. Common patterns of tooth agenesis were successfully identified in non-syndromic oligodontia patients.
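
    The Tooth Agenesis Code assigns each quadrant a single number by summing a power of two for every missing tooth position, so that every distinct pattern maps to a unique code. The sketch below follows that general idea; the exact numbering convention used here (seven positions per quadrant, third molars excluded, position 1 = central incisor) is an assumption for illustration and may differ from the published TAC definition.

```python
def tooth_agenesis_code(missing_positions, teeth_per_quadrant=7):
    """Encode a quadrant's agenesis pattern as a single integer.

    Each missing tooth position p contributes 2**(p-1). The 7-teeth
    convention (third molars excluded) is an assumption for this sketch.
    """
    code = 0
    for p in set(missing_positions):
        if not 1 <= p <= teeth_per_quadrant:
            raise ValueError(f"position {p} outside 1..{teeth_per_quadrant}")
        code |= 1 << (p - 1)
    return code

# Example: missing lateral incisor (2), canine (3) and both premolars (4, 5).
print(tooth_agenesis_code([2, 3, 4, 5]))   # 2 + 4 + 8 + 16 = 30
```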

  12. New automatic combustion method for the liquid scintillation assay of tritium and carbon-14 in singly or doubly labelled organic materials

    Energy Technology Data Exchange (ETDEWEB)

    Gacs, I.; Dobis, E.; Dombi, S.; Payer, K.; Oetvoes, L. (Magyar Tudomanyos Akademia Koezponti Kemiai Kutato Intezete, Budapest); Vargay, Z. (CHINOIN Gyogyszer es Vegyeszeti Termekek Gyara Rt., Budapest (Hungary))

    1982-01-01

    An automatic, rapid combustion method has been developed for the determination of tritium and /sup 14/C in singly or doubly labelled organic materials by liquid scintillation counting. The sample is burned in a stream of oxygen. The water formed and its tritium content are retained from the gas stream in an absorber containing a small amount of diethylene-glycol monoethyl ether. Radioactive carbon dioxide, if included in the combustion products, is transferred into 3-methoxypropylamine. The final solutions ready for counting are obtained in less than three minutes. Quantitative collection recoveries for both tritium and /sup 14/C are achieved and no cross-contamination occurs.

  13. A comparison of different quasi-newton acceleration methods for partitioned multi-physics codes

    CSIR Research Space (South Africa)

    Haelterman, R

    2018-02-01

    Full Text Available: ... and Switched Column-Updating Method (SCU). 1. The Column-Updating method is a quasi-Newton method that was introduced by Martinez [25, 27, 28]. The rank-one update of this method is such that the column of the approximate Jacobian corresponding to the largest... K_s = argmax{ |<i_j, delta x_s>| ; j = 1, ..., mn }. (K'_1)^{-1} is typically set to -I. 2. The Inverse Column-Updating method (ICU) is a quasi-Newton method that was introduced by Martinez and Zambaldi [23, 26]. It uses a rank-one update such that the column...
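
    A minimal sketch of a column-updating quasi-Newton iteration in the spirit of the methods compared in this record: the column of the approximate Jacobian matching the largest component of the step is replaced so that the secant condition holds. The test system, the initial Jacobian guess and the stopping rule are assumptions for illustration, not the exact CU/ICU/SCU variants studied in the paper.

```python
import numpy as np

def column_updating_solve(F, x0, B0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 with a column-updating quasi-Newton method."""
    x, B = np.asarray(x0, float), np.array(B0, float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        s = np.linalg.solve(B, -f)          # quasi-Newton step
        x_new = x + s
        y = F(x_new) - f
        j = int(np.argmax(np.abs(s)))       # column picked by the largest step component
        B[:, j] += (y - B @ s) / s[j]       # rank-one column update: B_new @ s = y
        x = x_new
    return x

# Toy nonlinear system: x0^2 + x1 - 3 = 0, x0 + x1^2 - 5 = 0 (solution near (1, 2)).
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
B0 = np.array([[2.0, 1.0], [1.0, 2.0]])     # rough initial Jacobian estimate
print(column_updating_solve(F, x0=[1.0, 1.0], B0=B0))
```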

  14. Fitting of two and three variant polynomials from experimental data through the least squares method. (Using of the codes AJUS-2D, AJUS-3D and LEGENDRE-2D)

    International Nuclear Information System (INIS)

    Sanchez Miro, J. J.; Sanz Martin, J. C.

    1994-01-01

    Obtaining polynomial fits from observational data in two and three dimensions is an interesting and practical task. Such an arduous problem suggests the development of an automatic code. The main novelty we provide lies in the generalization of the classical least squares method in three FORTRAN 77 programs usable in any sampling problem. Furthermore, we introduce the orthogonal 2D Legendre functions into the fitting process. These FORTRAN 77 programs are equipped with options to calculate the standard indicators of approximation quality, generalized to two and three dimensions (nonlinear correlation factor, confidence intervals, quadratic mean error, and so on). The aim of this paper is to remedy the absence of fitting algorithms for more than one independent variable in mathematical libraries. (Author) 10 refs
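
    A minimal sketch of a bivariate least-squares polynomial fit of the kind the record describes; the design-matrix construction, the monomial basis and the synthetic data are assumptions for illustration and do not reproduce the AJUS-2D, AJUS-3D or LEGENDRE-2D codes.

```python
import numpy as np

def fit_poly_2d(x, y, z, degree=2):
    """Least-squares fit of z ~ sum_{i+j<=degree} c_ij * x**i * y**j."""
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return dict(zip(terms, coeffs))

# Synthetic sample: z = 1 + 2x - y + 0.5xy plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
z = 1 + 2 * x - y + 0.5 * x * y + rng.normal(0, 0.01, x.size)
for term, c in fit_poly_2d(x, y, z).items():
    print(term, round(c, 3))
```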

  15. Comparisons of ratchetting analysis methods using RCC-M, RCC-MR and ASME codes

    International Nuclear Information System (INIS)

    Yang Yu; Cabrillat, M.T.

    2005-01-01

    The present paper compares the simplified ratchetting analysis methods used in RCC-M, RCC-MR and ASME with some examples. First, the methods in RCC-M and the efficiency diagram in RCC-MR are compared. A special method is used to describe these two approaches with curves in a single coordinate system, and the difference in conservatism is demonstrated. The RCC-M method is also interpreted in terms of SR (secondary ratio) and v (efficiency index), which are used in RCC-MR. Hence, the two methods can easily be compared by taking SR as abscissa and v as ordinate and plotting the two curves. Second, the efficiency curve in RCC-MR and the methods in ASME-NH Appendix T are compared for the case of significant creep. Finally, two practical evaluations are performed to illustrate the comparisons of the aforementioned methods. (authors)

  16. A novel Morse code-inspired method for multiclass motor imagery brain-computer interface (BCI) design.

    Science.gov (United States)

    Jiang, Jun; Zhou, Zongtan; Yin, Erwei; Yu, Yang; Liu, Yadong; Hu, Dewen

    2015-11-01

    Motor imagery (MI)-based brain-computer interfaces (BCIs) allow disabled individuals to control external devices voluntarily, helping to restore lost motor functions. However, the number of control commands available in MI-based BCIs remains limited, which restricts the usability of BCI systems in control applications involving multiple degrees of freedom (DOF), such as control of a robot arm. To address this problem, we developed a novel Morse code-inspired method for MI-based BCI design to increase the number of output commands. Using this method, brain activities are modulated by sequences of MI (sMI) tasks, which are constructed by alternately imagining movements of the left or right hand or no motion. The codes of the sMI tasks were detected from EEG signals and mapped to specific commands. According to permutation theory, an sMI task of length N allows 2 × (2^N - 1) possible commands with the left and right MI tasks under self-paced conditions. To verify its feasibility, the new method was used to construct a six-class BCI system to control the arm of a humanoid robot. Four subjects participated in our experiment and the average accuracy of the six-class sMI tasks was 89.4%. The Cohen's kappa coefficient and the throughput of our BCI paradigm are 0.88 ± 0.060 and 23.5 bits per minute (bpm), respectively. Furthermore, all of the subjects could operate an actual three-joint robot arm to grasp an object in around 49.1 s using our approach. These promising results suggest that the Morse code-inspired method could be used in the design of BCIs for multi-DOF control. Copyright © 2015 Elsevier Ltd. All rights reserved.
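
    A small sketch of the command-counting argument above: enumerating all left/right imagery sequences of length 1 to N reproduces the 2 × (2^N - 1) command count. The mapping of sequences to robot commands is not shown here, since the record does not specify it.

```python
from itertools import product

def enumerate_smi_commands(max_len):
    """All left/right motor-imagery sequences of length 1..max_len."""
    sequences = []
    for length in range(1, max_len + 1):
        sequences.extend(product("LR", repeat=length))
    return ["".join(seq) for seq in sequences]

for n in (1, 2, 3):
    cmds = enumerate_smi_commands(n)
    assert len(cmds) == 2 * (2**n - 1)   # matches the count stated in the abstract
    print(n, len(cmds), cmds[:6])

# With N = 2 the enumeration yields exactly six sequences (L, R, LL, LR, RL, RR),
# consistent with the six-class system reported in the record.
```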

  17. Faraday Rotation of Automatic Dependent Surveillance-Broadcast (ADS-B) Signals as a Method of Ionospheric Characterization

    Science.gov (United States)

    Cushley, A. C.; Kabin, K.; Noël, J.-M.

    2017-10-01

    Radio waves propagating through plasma in the Earth's ambient magnetic field experience Faraday rotation; the plane of the electric field of a linearly polarized wave changes as a function of the distance travelled through a plasma. Linearly polarized radio waves at 1090 MHz frequency are emitted by Automatic Dependent Surveillance Broadcast (ADS-B) devices that are installed on most commercial aircraft. These radio waves can be detected by satellites in low Earth orbits, and the change of the polarization angle caused by propagation through the terrestrial ionosphere can be measured. In this manuscript we discuss how these measurements can be used to characterize the ionospheric conditions. In the present study, we compute the amount of Faraday rotation from a prescribed total electron content value and two of the profile parameters of the NeQuick ionospheric model.
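
    A minimal sketch of the kind of calculation involved: the single-shell approximation Omega ≈ 2.36e4 · B_parallel · TEC / f^2 (SI units) for the Faraday rotation of a 1090 MHz ADS-B signal. The TEC and magnetic-field values are illustrative assumptions, and the record itself works with NeQuick profile parameters rather than this shortcut.

```python
import math

def faraday_rotation_rad(tec_m2, b_parallel_tesla, freq_hz):
    """Single-shell approximation of ionospheric Faraday rotation (radians).

    Omega = 2.36e4 * B_parallel * TEC / f**2   (SI units throughout)
    """
    return 2.36e4 * b_parallel_tesla * tec_m2 / freq_hz**2

# Illustrative values: 20 TECU slant TEC, 40 microtesla field component, ADS-B at 1090 MHz.
tec = 20 * 1.0e16            # 1 TECU = 1e16 electrons / m^2
omega = faraday_rotation_rad(tec, 40e-6, 1090e6)
print(f"{omega:.3f} rad  ({math.degrees(omega):.1f} deg)")
```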

  18. Faraday rotation of Automatic Dependent Surveillance Broadcast (ADS-B) signals as a method of ionospheric characterization

    Science.gov (United States)

    Cushley, A. C.; Kabin, K.; Noel, J. M. A.

    2017-12-01

    Radio waves propagating through plasma in the Earth's ambient magnetic field experience Faraday rotation; the plane of the electric field of a linearly polarized wave changes as a function of the distance travelled through a plasma. Linearly polarized radio waves at 1090 MHz frequency are emitted by Automatic Dependent Surveillance Broadcast (ADS-B) devices which are installed on most commercial aircraft. These radio waves can be detected by satellites in low Earth orbits, and the change of the polarization angle caused by propagation through the terrestrial ionosphere can be measured. In this work we discuss how these measurements can be used to characterize the ionospheric conditions. In the present study, we compute the amount of Faraday rotation from a prescribed total electron content value and two of the profile parameters of the NeQuick model.

  19. Semi-automatic construction of the Chinese-English MeSH using Web-based term translation method.

    Science.gov (United States)

    Lu, Wen-Hsiang; Lin, Shih-Jui; Chan, Yi-Che; Chen, Kuan-Hsi

    2005-01-01

    Due to the language barrier, non-English users are unable to retrieve the most up-to-date medical information from authoritative U.S. medical websites such as PubMed and MedlinePlus. A few cross-language medical information retrieval (CLMIR) systems have been utilizing MeSH (Medical Subject Headings) with a multilingual thesaurus to bridge the gap. Unfortunately, MeSH has not yet been translated into traditional Chinese. We propose a semi-automatic approach to constructing a Chinese-English MeSH based on Web-based term translation. The system provides knowledge engineers with candidate terms mined from anchor texts and search-result pages. The results are encouraging: more than 19,000 Chinese-English MeSH entries have been compiled so far. This thesaurus will be used in Chinese-English CLMIR in the future.
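
    A toy sketch of the Web-based term-translation idea: collect target-language strings that co-occur with a source term in anchor texts or search-result snippets, and rank them by frequency for a knowledge engineer to review. The snippet data and the regular expression are invented for illustration; the record does not specify this implementation.

```python
import re
from collections import Counter

def rank_translation_candidates(source_term, snippets, max_candidates=5):
    """Rank Chinese strings that co-occur with `source_term` in text snippets."""
    cjk_run = re.compile(r"[\u4e00-\u9fff]{2,8}")   # contiguous Chinese characters
    counts = Counter()
    for text in snippets:
        if source_term.lower() in text.lower():
            counts.update(cjk_run.findall(text))
    return counts.most_common(max_candidates)

# Invented search-result snippets for the term "aspirin".
snippets = [
    "Aspirin (阿司匹林) is widely used ...",
    "低剂量 aspirin 阿司匹林 用于预防 ...",
    "aspirin 乙酰水杨酸, also known as 阿司匹林 ...",
]
print(rank_translation_candidates("aspirin", snippets))
```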

  20. CALIPSOS code report

    International Nuclear Information System (INIS)

    Fanselau, R.W.; Thakkar, J.G.; Hiestand, J.W.; Cassell, D.S.

    1980-04-01

    CALIPSOS is a steady-state, three-dimensional flow distribution code which predicts the fluid dynamics and heat transfer interactions of the secondary two-phase flow in a steam generator. The mathematical formulation is sufficiently general to accommodate a two-fluid model described by separate gas and liquid momentum equations. However, if the user selects the homogeneous flow option, the code automatically equates the gas and liquid phase velocities (thereby reducing the number of momentum equations solved to three) and uses a homogeneous mixture density. This report presents the basic features of the CALIPSOS code, including the assumptions, the equations solved, the finite-difference grid, and highlights of the solution procedure.
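
    A minimal sketch of the homogeneous-flow simplification mentioned above: with equal phase velocities, the two-phase mixture is described by a single velocity and a void-fraction-weighted mixture density. The formula and the numerical values are generic two-phase-flow relations used for illustration, not an excerpt from CALIPSOS.

```python
def homogeneous_mixture_density(alpha, rho_gas, rho_liquid):
    """Void-fraction-weighted mixture density: rho_m = alpha*rho_g + (1 - alpha)*rho_l."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("void fraction must lie in [0, 1]")
    return alpha * rho_gas + (1.0 - alpha) * rho_liquid

# Illustrative saturated steam/water values near 7 MPa (approximate, for the sketch only).
rho_m = homogeneous_mixture_density(alpha=0.6, rho_gas=36.5, rho_liquid=740.0)
print(f"mixture density ~ {rho_m:.1f} kg/m^3")
```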