WorldWideScience

Sample records for reliable numerical computation

  1. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last two decades most of the basic algorithms have not changed; what has changed is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  2. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  3. Numerical computations with GPUs

    CERN Document Server

    Kindratenko, Volodymyr

    2014-01-01

    This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including solution of linear equations and FFT, for massively parallel GPU architectures. This volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to

  4. Reliability in the utility computing era: Towards reliable Fog computing

    DEFF Research Database (Denmark)

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.

    2013-01-01

    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm, as a non-trivial extension of the Cloud, is considered, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  5. Computing the Alexander Polynomial Numerically

    DEFF Research Database (Denmark)

    Hansen, Mikael Sonne

    2006-01-01

    Explains how to construct the Alexander matrix and how it can be used to compute the Alexander polynomial numerically.

  6. Reliable computation from contextual correlations

    Science.gov (United States)

    Oestereich, André L.; Galvão, Ernesto F.

    2017-12-01

    An operational approach to the study of computation based on correlations considers black boxes with one-bit inputs and outputs, controlled by a limited classical computer capable only of performing sums modulo-two. In this setting, it was shown that noncontextual correlations do not provide any extra computational power, while contextual correlations were found to be necessary for the deterministic evaluation of nonlinear Boolean functions. Here we investigate the requirements for reliable computation in this setting; that is, the evaluation of any Boolean function with success probability bounded away from 1/2. We show that bipartite CHSH quantum correlations suffice for reliable computation. We also prove that an arbitrarily small violation of a multipartite Greenberger-Horne-Zeilinger noncontextuality inequality also suffices for reliable computation.
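
    The protocol details are in the paper; purely as an illustration of what "success probability bounded away from 1/2" buys, the sketch below (plain Python, not the correlation-based setting of the paper) shows how repeating a noisy evaluation and taking a majority vote amplifies reliability. The noisy oracle and the 0.6 success probability are hypothetical.

```python
import random

def noisy_eval(f, x, p):
    """Return f(x) with probability p, the wrong bit otherwise."""
    correct = f(x)
    return correct if random.random() < p else 1 - correct

def majority_vote(f, x, p, repetitions=101):
    """Amplify a single-shot success probability p > 1/2 by repetition and majority vote."""
    ones = sum(noisy_eval(f, x, p) for _ in range(repetitions))
    return 1 if 2 * ones > repetitions else 0

if __name__ == "__main__":
    AND = lambda x: x[0] & x[1]          # a nonlinear Boolean function
    trials = 10_000
    hits = sum(majority_vote(AND, (1, 1), p=0.6) == 1 for _ in range(trials))
    print(f"single-shot success 0.60, majority-vote success {hits / trials:.3f}")
```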

  7. Computer scientist looks at reliability computations

    International Nuclear Information System (INIS)

    Rosenthal, A.

    1975-01-01

    Results from the theory of computational complexity are applied to reliability computations on fault trees and networks. A well-known class of problems that almost certainly has no fast solution algorithms is presented. It is shown that even approximately computing the reliability of many systems is difficult enough to be in this class. In the face of this result, which indicates that for general systems the computation time will be exponential in the size of the system, decomposition techniques which can greatly reduce the effective size of a wide variety of realistic systems are explored.
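
    As a hedged illustration of the decomposition idea (not the paper's algorithm), the sketch below computes the exact reliability of a series-parallel system bottom-up under the usual assumption of independent components; the component values are hypothetical.

```python
def series(reliabilities):
    """All components must work: multiply reliabilities (independence assumed)."""
    r = 1.0
    for p in reliabilities:
        r *= p
    return r

def parallel(reliabilities):
    """At least one component must work: complement of the probability that all fail."""
    q = 1.0
    for p in reliabilities:
        q *= 1.0 - p
    return 1.0 - q

if __name__ == "__main__":
    # Hypothetical system: two redundant pumps (0.95, 0.90) in parallel,
    # in series with a controller (0.99) and a valve (0.98).
    pumps = parallel([0.95, 0.90])
    system = series([pumps, 0.99, 0.98])
    print(f"system reliability = {system:.4f}")
```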

  8. Numerical Analysis of Multiscale Computations

    CERN Document Server

    Engquist, Björn; Tsai, Yen-Hsi R

    2012-01-01

    This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms as well as practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as well as tools like analytical and numerical homogenization, and fast multipole method.

  9. Introduction to numerical computation in Pascal

    CERN Document Server

    Dew, P M

    1983-01-01

    Our intention in this book is to cover the core material in numerical analysis normally taught to students on degree courses in computer science. The main emphasis is placed on the use of analysis and programming techniques to produce well-designed, reliable mathematical software. The treatment should be of interest also to students of mathematics, science and engineering who wish to learn how to write good programs for mathematical computations. The reader is assumed to have some acquaintance with Pascal programming. Aspects of Pascal particularly relevant to numerical computation are revised and developed in the first chapter. Although Pascal has some drawbacks for serious numerical work (for example, only one precision for real numbers), the language has major compensating advantages: it is a widely used teaching language that will be familiar to many students and it encourages the writing of clear, well-structured programs. By careful use of structure and documentation, we have produced codes that we be...

  10. Numerical computation of MHD equilibria

    International Nuclear Information System (INIS)

    Atanasiu, C.V.

    1982-10-01

    A numerical code for two-dimensional MHD equilibrium computation has been developed. The code solves the Grad-Shafranov equation in its integral form, for both formulations: the free-boundary problem and the fixed-boundary one. Examples of the application of the code to tokamak design are given. (author)

  11. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.

  12. Numerical and symbolic scientific computing

    CERN Document Server

    Langer, Ulrich

    2011-01-01

    The book presents the state of the art and results and also includes articles pointing to future developments. Most of the articles center around the theme of linear partial differential equations. Major aspects are fast solvers in elastoplasticity, symbolic analysis for boundary problems, symbolic treatment of operators, computer algebra, and finite element methods, a symbolic approach to finite difference schemes, cylindrical algebraic decomposition and local Fourier analysis, and white noise analysis for stochastic partial differential equations. Further numerical-symbolic topics range from

  13. Reliability and Availability of Cloud Computing

    CERN Document Server

    Bauer, Eric

    2012-01-01

    A holistic approach to service reliability and availability of cloud computing Reliability and Availability of Cloud Computing provides IS/IT system and solution architects, developers, and engineers with the knowledge needed to assess the impact of virtualization and cloud computing on service reliability and availability. It reveals how to select the most appropriate design for reliability diligence to assure that user expectations are met. Organized in three parts (basics, risk analysis, and recommendations), this resource is accessible to readers of diverse backgrounds and experience le

  14. Numerical Computation of Detonation Stability

    KAUST Repository

    Kabanov, Dmitry

    2018-06-03

    Detonation is a supersonic mode of combustion that is modeled by a system of conservation laws of compressible fluid mechanics coupled with the equations describing thermodynamic and chemical properties of the fluid. Mathematically, these governing equations admit steady-state travelling-wave solutions consisting of a leading shock wave followed by a reaction zone. However, such solutions are often unstable to perturbations and rarely observed in laboratory experiments. The goal of this work is to study the stability of travelling-wave solutions of detonation models by the following novel approach. We linearize the governing equations about a base travelling-wave solution and solve the resultant linearized problem using high-order numerical methods. The results of these computations are postprocessed using dynamic mode decomposition to extract growth rates and frequencies of the perturbations and predict stability of travelling-wave solutions to infinitesimal perturbations. We apply this approach to two models based on the reactive Euler equations for perfect gases. For the first model with a one-step reaction mechanism, we find agreement of our results with the results of normal-mode analysis. For the second model with a two-step mechanism, we find that both types of admissible travelling-wave solutions exhibit the same stability spectra. Then we investigate Fickett’s detonation analogue coupled with a particular reaction-rate expression. In addition to the linear stability analysis of this model, we demonstrate that it exhibits rich nonlinear dynamics with multiple bifurcations and chaotic behavior.
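
    A minimal sketch of the postprocessing step described above, assuming snapshots of the perturbation field are already in hand: exact dynamic mode decomposition recovers growth rates and frequencies from the snapshot matrix. The synthetic field and its growth rate 0.3 and frequency 2.0 are illustrative, not the thesis data.

```python
import numpy as np

def dmd_eigenvalues(snapshots, dt, rank=None):
    """Exact DMD: return continuous-time eigenvalues (real part = growth rate, imag = frequency)."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if rank is not None:
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    lam = np.linalg.eigvals(A_tilde).astype(complex)
    return np.log(lam) / dt

if __name__ == "__main__":
    # Synthetic perturbation field with one complex mode: growth rate 0.3, frequency 2.0.
    dt = 0.01
    t = np.arange(0.0, 10.0, dt)
    x = np.linspace(0.0, 1.0, 64)[:, None]
    field = np.real((np.sin(np.pi * x) + 1j * np.sin(2 * np.pi * x)) * np.exp((0.3 + 2.0j) * t))
    omega = dmd_eigenvalues(field, dt, rank=2)
    print(np.round(omega, 3))   # expect roughly 0.3 +/- 2.0j
```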

  15. Reliability of numerical wind tunnels for VAWT simulation

    International Nuclear Information System (INIS)

    Castelli, M. Raciti; Masi, M.; Battisti, L.; Benini, E.; Brighenti, A.; Dossena, V.; Persico, G.

    2016-01-01

    Computational Fluid Dynamics (CFD) based on the Unsteady Reynolds Averaged Navier Stokes (URANS) equations has long been widely used to study vertical axis wind turbines (VAWTs). Following a comprehensive experimental survey on the wakes downwind of a troposkien-shaped rotor, a campaign of bi-dimensional simulations is presented here, with the aim of assessing its reliability in reproducing the main features of the flow, also identifying areas needing additional research. Starting from both a well consolidated turbulence model (k-ω SST) and an unstructured grid typology, the main simulation settings are here manipulated in a convenient form to tackle rotating grids reproducing a VAWT operating in an open jet wind tunnel. The dependence of the numerical predictions on the selected grid spacing is investigated, thus establishing the least refined grid size that is still capable of capturing some relevant flow features such as integral quantities (rotor torque) and local ones (wake velocities). (paper)

  16. Reliability of numerical wind tunnels for VAWT simulation

    Science.gov (United States)

    Raciti Castelli, M.; Masi, M.; Battisti, L.; Benini, E.; Brighenti, A.; Dossena, V.; Persico, G.

    2016-09-01

    Computational Fluid Dynamics (CFD) based on the Unsteady Reynolds Averaged Navier Stokes (URANS) equations has long been widely used to study vertical axis wind turbines (VAWTs). Following a comprehensive experimental survey on the wakes downwind of a troposkien-shaped rotor, a campaign of bi-dimensional simulations is presented here, with the aim of assessing its reliability in reproducing the main features of the flow, also identifying areas needing additional research. Starting from both a well consolidated turbulence model (k-ω SST) and an unstructured grid typology, the main simulation settings are here manipulated in a convenient form to tackle rotating grids reproducing a VAWT operating in an open jet wind tunnel. The dependence of the numerical predictions on the selected grid spacing is investigated, thus establishing the least refined grid size that is still capable of capturing some relevant flow features such as integral quantities (rotor torque) and local ones (wake velocities).

  17. Building fast, reliable, and adaptive software for computational science

    International Nuclear Information System (INIS)

    Rendell, A P; Antony, J; Armstrong, W; Janes, P; Yang, R

    2008-01-01

    Building fast, reliable, and adaptive software is a constant challenge for computational science, especially given recent developments in computer architecture. This paper outlines some of our efforts to address these three issues in the context of computational chemistry. First, a simple linear performance model that can be used to model and predict the performance of Hartree-Fock calculations is discussed. Second, the use of interval arithmetic to assess the numerical reliability of the sort of integrals used in electronic structure methods is presented. Third, use of dynamic code modification as part of a framework to support adaptive software is outlined.
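
    A minimal sketch of the interval-arithmetic idea mentioned above: each quantity is carried as an enclosing interval, so the width of the final interval bounds the numerical uncertainty. Production interval libraries also round the endpoints outward (directed rounding), which this illustration omits.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

    def width(self):
        return self.hi - self.lo

if __name__ == "__main__":
    # Enclose x*y - x for x in [1.0, 1.1] and y in [0.9, 1.0]; the result
    # overestimates the true range (the classic dependency effect), but is guaranteed to contain it.
    x, y = Interval(1.0, 1.1), Interval(0.9, 1.0)
    result = x * y - x
    print(result, "width =", result.width())
```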

  18. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...

  19. Numerical computer methods part D

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The aim of this volume is to brief researchers on the importance of data analysis in enzymology and on the modern methods that have developed concomitantly with computer hardware. It also aims to help researchers validate their computer programs with real and synthetic data to ascertain that the results produced are what they expect. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.

  20. Reliable computer systems design and evaluation

    CERN Document Server

    Siewiorek, Daniel

    2014-01-01

    Enhance your hardware/software reliability. Enhancement of system reliability has been a major concern of computer users and designers, and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliable systems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.

  1. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  2. A History of Computer Numerical Control.

    Science.gov (United States)

    Haggen, Gilbert L.

    Computer numerical control (CNC) has evolved from the first significant counting method--the abacus. Babbage had perhaps the greatest impact on the development of modern day computers with his analytical engine. Hollerith's functioning machine with punched cards was used in tabulating the 1890 U.S. Census. In order for computers to become a…

  3. Numerical computer methods part E

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.

  4. Numerical Optimization Using Desktop Computers

    Science.gov (United States)

    1980-09-11

    geophysical, optical and economic analysis to compute a life-cycle cost for a design with a stated energy capacity. NISCO stands for NonImaging ... more efficiently by nonimaging optical systems than by conventional image-forming systems. The methodology of designing optimized nonimaging systems ... compound parabolic concentrating ... Welford, W. T. and Winston, R., The Optics of Nonimaging Concentrators, Light and Solar Energy, p. ix, Academic

  5. Fluid dynamics theory, computation, and numerical simulation

    CERN Document Server

    Pozrikidis, C

    2001-01-01

    Fluid Dynamics Theory, Computation, and Numerical Simulation is the only available book that extends the classical field of fluid dynamics into the realm of scientific computing in a way that is both comprehensive and accessible to the beginner. The theory of fluid dynamics, and the implementation of solution procedures into numerical algorithms, are discussed hand-in-hand and with reference to computer programming. This book is an accessible introduction to theoretical and computational fluid dynamics (CFD), written from a modern perspective that unifies theory and numerical practice. There are several additions and subject expansions in the Second Edition of Fluid Dynamics, including new Matlab and FORTRAN codes. Two distinguishing features of the discourse are: solution procedures and algorithms are developed immediately after problem formulations are presented, and numerical methods are introduced on a need-to-know basis and in increasing order of difficulty. Matlab codes are presented and discussed for a broad...

  6. Fluid Dynamics Theory, Computation, and Numerical Simulation

    CERN Document Server

    Pozrikidis, Constantine

    2009-01-01

    Fluid Dynamics: Theory, Computation, and Numerical Simulation is the only available book that extends the classical field of fluid dynamics into the realm of scientific computing in a way that is both comprehensive and accessible to the beginner. The theory of fluid dynamics, and the implementation of solution procedures into numerical algorithms, are discussed hand-in-hand and with reference to computer programming. This book is an accessible introduction to theoretical and computational fluid dynamics (CFD), written from a modern perspective that unifies theory and numerical practice. There are several additions and subject expansions in the Second Edition of Fluid Dynamics, including new Matlab and FORTRAN codes. Two distinguishing features of the discourse are: solution procedures and algorithms are developed immediately after problem formulations are presented, and numerical methods are introduced on a need-to-know basis and in increasing order of difficulty. Matlab codes are presented and discussed for ...

  7. Probabilistic numerics and uncertainty in computations.

    Science.gov (United States)

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
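
    The paper's machinery is Bayesian; as a much simpler illustration of a numerical routine that reports an uncertainty alongside its estimate, the sketch below returns a Monte Carlo quadrature value together with its standard error. The integrand and sample size are arbitrary.

```python
import numpy as np

def mc_integrate(f, a, b, n=10_000, rng=None):
    """Estimate the integral of f on [a, b] and return (estimate, standard error)."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(a, b, n)
    values = f(x) * (b - a)
    return values.mean(), values.std(ddof=1) / np.sqrt(n)

if __name__ == "__main__":
    estimate, sigma = mc_integrate(np.sin, 0.0, np.pi)
    print(f"integral ~ {estimate:.4f} +/- {sigma:.4f} (exact value 2.0)")
```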

  8. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

    The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and their capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with the description of monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact in improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites to conduct workflows, in order to maximize workflow efficiencies. The performance of the sites against these tests during the first years of LHC running is also reviewed.

  9. Numerical computation of linear instability of detonations

    Science.gov (United States)

    Kabanov, Dmitry; Kasimov, Aslan

    2017-11-01

    We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.

  10. Fluid dynamics theory, computation, and numerical simulation

    CERN Document Server

    Pozrikidis, C

    2017-01-01

    This book provides an accessible introduction to the basic theory of fluid mechanics and computational fluid dynamics (CFD) from a modern perspective that unifies theory and numerical computation. Methods of scientific computing are introduced alongside with theoretical analysis and MATLAB® codes are presented and discussed for a broad range of topics: from interfacial shapes in hydrostatics, to vortex dynamics, to viscous flow, to turbulent flow, to panel methods for flow past airfoils. The third edition includes new topics, additional examples, solved and unsolved problems, and revised images. It adds more computational algorithms and MATLAB programs. It also incorporates discussion of the latest version of the fluid dynamics software library FDLIB, which is freely available online. FDLIB offers an extensive range of computer codes that demonstrate the implementation of elementary and advanced algorithms and provide an invaluable resource for research, teaching, classroom instruction, and self-study. This ...

  11. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the Numerical Analysis of Partial Differential Equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the Micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) is a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to Rosensweig's much more complex model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for Rosensweig's model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from Rosensweig's model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a

  12. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
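
    A hedged sketch of the Monte Carlo uncertainty-propagation step, with a hypothetical surrogate in place of the authors' finite-volume thermal model: scattered process parameters are sampled and pushed through the model to obtain a range for the output.

```python
import numpy as np

def melt_depth_model(power, speed):
    """Hypothetical surrogate: melt depth grows with laser power and shrinks with scan speed."""
    return 0.05 * power / np.sqrt(speed)

def propagate(n=100_000, seed=0):
    """Sample uncertain process inputs and propagate them through the model."""
    rng = np.random.default_rng(seed)
    power = rng.normal(200.0, 10.0, n)    # laser power [W], assumed 5% scatter
    speed = rng.normal(1000.0, 50.0, n)   # scan speed [mm/s], assumed 5% scatter
    depth = melt_depth_model(power, speed)
    return depth.mean(), depth.std(), np.percentile(depth, [2.5, 97.5])

if __name__ == "__main__":
    mean, std, interval = propagate()
    print(f"melt depth: {mean:.3f} +/- {std:.3f}, 95% range {interval}")
```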

  13. The Application of Visual Basic Computer Programming Language to Simulate Numerical Iterations

    Directory of Open Access Journals (Sweden)

    Abdulkadir Baba HASSAN

    2006-06-01

    Full Text Available This paper examines the application of the Visual Basic programming language to simulate numerical iterations, the merits of Visual Basic as a programming language, and the difficulties faced when solving numerical iterations analytically. The paper encourages the use of computer programming methods for the execution of numerical iterations and finally develops a reliable solution, written as a Visual Basic program, for some selected iteration problems.
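
    In the spirit of the paper (programming an iteration rather than grinding it out by hand), here is a minimal Newton-Raphson iteration; Python is used in place of Visual Basic, and the example problem is arbitrary.

```python
def newton(f, dfdx, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for f(x) = 0 starting from x0."""
    x = x0
    for i in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x, i + 1
    raise RuntimeError("did not converge")

if __name__ == "__main__":
    # Solve x**2 - 2 = 0, i.e. compute sqrt(2).
    root, iterations = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
    print(f"root = {root:.12f} after {iterations} iterations")
```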

  14. Integrated optical circuits for numerical computation

    Science.gov (United States)

    Verber, C. M.; Kenan, R. P.

    1983-01-01

    The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.

  15. Reliability and protection against failure in computer systems

    International Nuclear Information System (INIS)

    Daniels, B.K.

    1979-01-01

    Computers are being increasingly integrated into the control and safety systems of large and potentially hazardous industrial processes. This development introduces problems which are particular to computer systems and opens the way to new techniques of solving conventional reliability and availability problems. References to the developing fields of software reliability, human factors and software design are given, and these subjects are related, where possible, to the quantified assessment of reliability. Original material is presented in the areas of reliability growth and computer hardware failure data. The report draws on the experience of the National Centre of Systems Reliability in assessing the capability and reliability of computer systems both within the nuclear industry, and from the work carried out in other industries by the Systems Reliability Service. (author)

  16. Computer-aided reliability and risk assessment

    International Nuclear Information System (INIS)

    Leicht, R.; Wingender, H.J.

    1989-01-01

    Activities in the fields of reliability and risk analyses have led to the development of particular software tools which now are combined in the PC-based integrated CARARA system. The options available in this system cover a wide range of reliability-oriented tasks, like organizing raw failure data in the component/event data bank FDB, performing statistical analysis of those data with the program FDA, managing the resulting parameters in the reliability data bank RDB, and performing fault tree analysis with the fault tree code FTL or evaluating the risk of toxic or radioactive material release with the STAR code. (orig.)

  17. International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics

    CERN Document Server

    DEVELOPMENTS IN RELIABLE COMPUTING

    1999-01-01

    The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted, plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, which was partly caused by the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB and by the József Attila University. Due to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries. I...

  18. Soft computing approach for reliability optimization: State-of-the-art survey

    International Nuclear Information System (INIS)

    Gen, Mitsuo; Yun, Young Su

    2006-01-01

    In the broadest sense, reliability is a measure of performance of systems. As systems have grown more complex, the consequences of their unreliable behavior have become severe in terms of cost, effort, lives, etc., and the interest in assessing system reliability and the need for improving the reliability of products and systems have become very important. Most solution methods for reliability optimization assume that systems have redundancy components in series and/or parallel systems and alternative designs are available. Reliability optimization problems concentrate on optimal allocation of redundancy components and optimal selection of alternative designs to meet system requirement. In the past two decades, numerous reliability optimization techniques have been proposed. Generally, these techniques can be classified as linear programming, dynamic programming, integer programming, geometric programming, heuristic method, Lagrangean multiplier method and so on. A Genetic Algorithm (GA), as a soft computing approach, is a powerful tool for solving various reliability optimization problems. In this paper, we briefly survey GA-based approach for various reliability optimization problems, such as reliability optimization of redundant system, reliability optimization with alternative design, reliability optimization with time-dependent reliability, reliability optimization with interval coefficients, bicriteria reliability optimization, and reliability optimization with fuzzy goals. We also introduce the hybrid approaches for combining GA with fuzzy logic, neural network and other conventional search techniques. Finally, we have some experiments with an example of various reliability optimization problems using hybrid GA approach
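
    As a hedged sketch of the GA-based approach surveyed above, the toy example below allocates redundant parallel components across series stages to maximize system reliability under a cost budget; all reliabilities, costs, and GA settings are hypothetical.

```python
import random

STAGE_RELIABILITY = [0.90, 0.85, 0.95]   # hypothetical single-component reliabilities
STAGE_COST = [2.0, 3.0, 1.5]             # hypothetical per-component costs
BUDGET = 20.0

def system_reliability(counts):
    """Series system of stages, each with `counts[i]` identical components in parallel."""
    r = 1.0
    for p, n in zip(STAGE_RELIABILITY, counts):
        r *= 1.0 - (1.0 - p) ** n
    return r

def fitness(counts):
    cost = sum(c * n for c, n in zip(STAGE_COST, counts))
    return system_reliability(counts) if cost <= BUDGET else 0.0   # penalize infeasible designs

def genetic_search(pop_size=40, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(1, 5) for _ in STAGE_COST] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]                       # one-point crossover
            if rng.random() < 0.2:                          # mutation: tweak one stage
                i = rng.randrange(len(child))
                child[i] = max(1, child[i] + rng.choice((-1, 1)))
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, system_reliability(best)

if __name__ == "__main__":
    allocation, reliability = genetic_search()
    print("redundancy per stage:", allocation, "reliability:", round(reliability, 4))
```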

  19. Planning is not sufficient - Reliable computers need good requirements specifications

    International Nuclear Information System (INIS)

    Matras, J.R.

    1992-01-01

    Computer system reliability is the assurance that a computer system will perform its functions when required to do so. To ensure such reliability, it is important to plan the activities needed for computer system development. These development activities, in turn, require a Computer Quality Assurance Plan (CQAP) that provides the following: a Configuration Management Plan, a Verification and Validation (V and V) Plan, documentation requirements, a defined life cycle, review requirements, and organizational responsibilities. These items are necessary for system reliability; ultimately, however, they are not enough. Development of a reliable system is dependent on the requirements specification. This paper discusses how to use existing industry standards to develop a CQAP. In particular, the paper emphasizes the importance of the requirements specification and of methods for establishing reliability goals. The paper also describes how the revision of ANSI/IEEE-ANS-7-4.3.2, Application Criteria for Digital Computer Systems of Nuclear Power Generating Stations, has addressed these issues.

  20. Numerical differences between Guttman's reliability coefficients and the GLB

    NARCIS (Netherlands)

    Oosterwijk, P.R.; van der Ark, L.A.; Sijtsma, K.; van der Ark, L.A.; Bolt, D.M; Wang, W.-C.; Douglas, J.A.; Wiberg, M.

    2016-01-01

    For samples smaller than 1000 and tests longer than ten items, the greatest lower bound (GLB) to the reliability is known to be biased and not recommended as a method to estimate test-score reliability. As a first step in finding alternative lower bounds under these conditions, we investigated the

  1. CADRIGS--computer aided design reliability interactive graphics system

    International Nuclear Information System (INIS)

    Kwik, R.J.; Polizzi, L.M.; Sticco, S.; Gerrard, P.B.; Yeater, M.L.; Hockenbury, R.W.; Phillips, M.A.

    1982-01-01

    An integrated reliability analysis program combining graphic representation of fault trees, automated database loading and referencing, and automated construction of reliability code input files was developed. The functional specifications for CADRIGS, the computer aided design reliability interactive graphics system, are presented. Previously developed fault tree segments used in auxiliary feedwater system safety analysis were constructed on CADRIGS and, when combined, yielded results identical to those resulting from manual input to the same reliability codes.

  2. Assessment of physical server reliability in multi cloud computing system

    Science.gov (United States)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. By spreading cloud deployment across multiple service providers, an organization creates space for competitive prices that minimize the burden on its spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and is combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer with the required algorithms and explore the steps in the assessment of server reliability.

  3. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  4. Numerical discrepancy between serial and MPI parallel computations

    Directory of Open Access Journals (Sweden)

    Sang Bong Lee

    2016-09-01

    Full Text Available Numerical simulations of the 1D Burgers equation and a 2D sloshing problem were carried out to study the numerical discrepancy between serial and parallel computations. The numerical domain was decomposed into 2 and 4 subdomains for parallel computations with message passing interface. The numerical solution of the Burgers equation disclosed that the fully explicit boundary conditions used on subdomains of the parallel computation were responsible for the numerical discrepancy of the transient solution between serial and parallel computations. Two-dimensional sloshing problems in a rectangular domain were solved using OpenFOAM. After a lapse of initial transient time, sloshing patterns of water were significantly different in serial and parallel computations although the same numerical conditions were given. Based on the histograms of pressure measured at two points near the wall, the statistical characteristics of the numerical solution were not affected by the number of subdomains as much as the transient solution was dependent on the number of subdomains.
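
    One generic source of such serial/parallel discrepancies, reproducible without MPI or OpenFOAM, is that floating-point addition is not associative, so a reduction over decomposed subdomains need not equal the serial sum bit-for-bit. A minimal illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
values = rng.normal(size=100_000).astype(np.float32)

serial = np.float32(0.0)
for v in values:                                   # serial left-to-right accumulation
    serial += v

# "Decomposed" reduction: partial sums over 4 subdomains, then a final sum,
# loosely mimicking a reduction over MPI subdomains.
partials = [chunk.sum(dtype=np.float32) for chunk in np.array_split(values, 4)]
decomposed = np.float32(sum(partials))

print(f"serial     = {serial:.6f}")
print(f"decomposed = {decomposed:.6f}")
print(f"difference = {abs(serial - decomposed):.2e}")   # typically nonzero in float32
```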

  5. Numerical methods for reliability and safety assessment multiscale and multiphysics systems

    CERN Document Server

    Hami, Abdelkhalak

    2015-01-01

    This book offers unique insight on structural safety and reliability by combining computational methods that address multiphysics problems, involving multiple equations describing different physical phenomena, and multiscale problems, involving discrete sub-problems that together describe important aspects of a system at multiple scales. The book examines a range of engineering domains and problems using dynamic analysis, nonlinear methods, error estimation, finite element analysis, and other computational techniques. This book also: introduces novel numerical methods; illustrates new practical applications; examines recent engineering applications; presents up-to-date theoretical results; and offers perspective relevant to a wide audience, including teaching faculty/graduate students, researchers, and practicing engineers.

  6. Reliability of Computer Analysis of Electrocardiograms (ECG) of ...

    African Journals Online (AJOL)

    Background: Computer programmes have been introduced to electrocardiography (ECG) with most physicians in Africa depending on computer interpretation of ECG. This study was undertaken to evaluate the reliability of computer interpretation of the 12-Lead ECG in the Black race. Methodology: Using the SCHILLER ...

  7. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen a very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie
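
    A loose sketch of the solver/checker pairing described above (not the book's validated machinery): solve a linear system, then bound the error a posteriori from the computed residual. A rigorous checker would also control rounding in forming the residual and the inverse norm (e.g. with directed rounding), which this illustration omits.

```python
import numpy as np

def solve_and_check(A, b):
    """Solver/checker pair: solve Ax = b, then bound the error a posteriori."""
    x = np.linalg.solve(A, b)                                 # solver block
    r = b - A @ x                                             # residual of the computed solution
    bound = np.linalg.norm(np.linalg.inv(A), 2) * np.linalg.norm(r, 2)
    return x, bound                                           # checker block: ||x_exact - x|| <= bound (up to rounding)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50)) + 50.0 * np.eye(50)     # well-conditioned test matrix
    x_true = rng.standard_normal(50)
    x, bound = solve_and_check(A, A @ x_true)
    print(f"reference error    {np.linalg.norm(x - x_true):.2e}")
    print(f"a posteriori bound {bound:.2e}")
```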

  8. A new efficient algorithm for computing the imprecise reliability of monotone systems

    International Nuclear Information System (INIS)

    Utkin, Lev V.

    2004-01-01

    Reliability analysis of complex systems by partial information about reliability of components and by different conditions of independence of components may be carried out by means of the imprecise probability theory which provides a unified framework (natural extension, lower and upper previsions) for computing the system reliability. However, the application of imprecise probabilities to reliability analysis meets with a complexity of optimization problems which have to be solved for obtaining the system reliability measures. Therefore, an efficient simplified algorithm to solve and decompose the optimization problems is proposed in the paper. This algorithm allows us to practically implement reliability analysis of monotone systems under partial and heterogeneous information about reliability of components and under conditions of the component independence or the lack of information about independence. A numerical example illustrates the algorithm

  9. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    Science.gov (United States)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques to perform probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations as it requires definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function to truly represent the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while its accuracy is lower, with an error of 24%.
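
    A minimal sketch of the FORM step, assuming an explicit performance function is already in hand (here a hypothetical linear resistance-minus-load limit state rather than the response surface fitted in the paper): the Hasofer-Lind reliability index is found by the standard HL-RF iteration in standard-normal space.

```python
import numpy as np
from math import erfc, sqrt

def numerical_gradient(g, u, h=1e-6):
    """Central-difference gradient of g at u."""
    return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h) for e in np.eye(len(u))])

def form_beta(g, n_vars, tol=1e-8, max_iter=100):
    """Hasofer-Lind reliability index via the HL-RF iteration."""
    u = np.zeros(n_vars)
    for _ in range(max_iter):
        grad = numerical_gradient(g, u)
        u_new = grad * (grad @ u - g(u)) / (grad @ grad)
        if np.linalg.norm(u_new - u) < tol:
            return np.linalg.norm(u_new)
        u = u_new
    return np.linalg.norm(u)

if __name__ == "__main__":
    # Hypothetical limit state: resistance R ~ N(300, 30) versus load S ~ N(200, 40), g = R - S.
    def g(u):
        R = 300.0 + 30.0 * u[0]
        S = 200.0 + 40.0 * u[1]
        return R - S

    beta = form_beta(g, n_vars=2)
    pf = 0.5 * erfc(beta / sqrt(2.0))
    print(f"beta = {beta:.3f}, Pf = {pf:.3e}")   # analytic beta = 2.0 for this linear case
```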

  10. Research in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  11. Reliability analysis of numerical simulation in near field behavior

    International Nuclear Information System (INIS)

    Kobayashi, Akira; Yamamoto, Kiyohito; Chijimatsu, Masakazu; Fujita, Tomoo

    2008-01-01

    The uncertainties of the boundary conditions, the elastic modulus and Poisson's ratio on the mechanical behavior at the near field of a high level radioactive waste repository were examined. The method used to examine the error propagation was the first order second moment method. The reliability of the maximum principal stress, the maximum shear stress at the crown of the tunnel and the minimum principal stress at the spring line was examined for one million years. For the elastic model, the reliability of the maximum shear stress gradually decreased while that of the maximum principal stress increased. That of the minimum principal stress was relatively low for one million years. This tendency was similar to that from the damage model. (author)

  12. The numerical parallel computing of photon transport

    International Nuclear Information System (INIS)

    Huang Qingnan; Liang Xiaoguang; Zhang Lifa

    1998-12-01

    The parallel computing of photon transport is investigated; the parallel algorithm and the parallelization of programs on parallel computers both with shared memory and with distributed memory are discussed. By analyzing the inherent law of the mathematical and physical model of photon transport according to the structural features of parallel computers, using a divide-and-conquer strategy, adjusting the algorithm structure of the program, dissolving the data dependences, finding parallelizable ingredients and creating large-grain parallel subtasks, the sequential computation of photon transport is efficiently transformed into parallel and vector computing. The program was run on various HP parallel computers such as the HY-1 (PVP), the Challenge (SMP) and the YH-3 (MPP) and very good parallel speedup has been obtained.

  13. Topics in numerical partial differential equations and scientific computing

    CERN Document Server

    2016-01-01

    Numerical partial differential equations (PDEs) are an important part of numerical simulation, the third component of the modern methodology for science and engineering, besides the traditional theory and experiment. This volume contains papers that originated with the collaborative research of the teams that participated in the IMA Workshop for Women in Applied Mathematics: Numerical Partial Differential Equations and Scientific Computing in August 2014.

  14. An Evaluation of Java for Numerical Computing

    Directory of Open Access Journals (Sweden)

    Brian Blount

    1999-01-01

    Full Text Available This paper describes the design and implementation of high performance numerical software in Java. Our primary goals are to characterize the performance of object‐oriented numerical software written in Java and to investigate whether Java is a suitable language for such endeavors. We have implemented JLAPACK, a subset of the LAPACK library in Java. LAPACK is a high‐performance Fortran 77 library used to solve common linear algebra problems. JLAPACK is an object‐oriented library, using encapsulation, inheritance, and exception handling. It performs within a factor of four of the optimized Fortran version for certain platforms and test cases. When used with the native BLAS library, JLAPACK performs comparably with the Fortran version using the native BLAS library. We conclude that high‐performance numerical software could be written in Java if a handful of concerns about language features and compilation strategies are adequately addressed.

  15. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    Science.gov (United States)

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been developed for predicting software reliability, but existing models are restricted to particular types of methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, and there is a need to focus on parameter selection while estimating reliability. The reliability of a system may increase or decrease depending on the parameters selected, so there is a need to identify the factors that heavily affect the reliability of the system. Nowadays, reusability is widely used across research areas. Reusability is the basis of Component-Based Systems (CBS). Cost, time and human skill can be saved using Component-Based Software Engineering (CBSE) concepts. CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities are available to apply soft computing techniques to medicine-related problems: clinical medicine uses fuzzy-logic and neural-network methodologies significantly, while the basic science of medicine most frequently and preferably uses combined neural-network and genetic-algorithm approaches. Medical scientists show considerable interest in applying the various soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software when building new products, providing quality with savings in time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques such as the Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). This paper presents the working of soft computing

  16. Numerical computation of homogeneous slope stability.

    Science.gov (United States)

    Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong

    2015-01-01

    To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the solution of the minimum factor of safety (FOS) to solving of a constrained nonlinear programming problem and applied an exhaustive method (EM) and particle swarm optimization algorithm (PSO) to this problem. In simple slope examples, the computational results using an EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different than the critical slip surface (CSS).
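
    A hedged sketch of the PSO step: a minimal global-best particle swarm minimizing a stand-in objective (the Rosenbrock function) in place of the authors' limit-equilibrium factor-of-safety expression; all swarm settings are illustrative.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iterations=200, seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iterations):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # inertia + cognitive + social terms
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    # Stand-in for the safety-factor surface: Rosenbrock valley with minimum 0 at (1, 1).
    rosen = lambda p: (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2
    best, value = pso_minimize(rosen, bounds=[(-2, 2), (-2, 2)])
    print("minimum near", np.round(best, 3), "value", round(float(value), 6))
```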

  17. Numerical Computation of Homogeneous Slope Stability

    Directory of Open Access Journals (Sweden)

    Shuangshuang Xiao

    2015-01-01

    Full Text Available To simplify the computational process of homogeneous slope stability, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the solution of the minimum factor of safety (FOS) to solving of a constrained nonlinear programming problem and applied an exhaustive method (EM) and particle swarm optimization algorithm (PSO) to this problem. In simple slope examples, the computational results using an EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different than the critical slip surface (CSS).

  18. Numerical characteristics of quantum computer simulation

    Science.gov (United States)

    Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.

    2016-12-01

    The simulation of quantum circuits is highly important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality, so the use of modern high-performance parallel computation is relevant. As is well known, an arbitrary quantum computation in the circuit model can be carried out using only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We investigate how the unique properties of quantum systems determine the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while quantum entanglement leads to the problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical parts (sequential and parallel complexity, macro structure, and a visual information graph) and experimental parts (locality and memory access, scalability, and more specific dynamic characteristics). The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good basis for research into and testing of development methods for data-intensive parallel software, and that the considered analysis methodology can be successfully used to improve algorithms in quantum information science.
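
    The "highly parallel" aspect mentioned above comes from the fact that a single-qubit gate acts independently on pairs of amplitudes. A minimal sketch of that state-vector update, assuming the usual convention in which axis 0 of a NumPy-reshaped state corresponds to the first (most significant) qubit:

```python
import numpy as np

def apply_1q_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector.

    Reshaping exposes the target qubit as a separate axis, so the update is a
    small matrix product over that axis, applied to all other amplitudes at once.
    """
    psi = state.reshape([2] * n_qubits)
    psi = np.moveaxis(psi, target, 0)                 # bring target qubit to axis 0
    psi = np.tensordot(gate, psi, axes=([1], [0]))    # 2x2 gate times that axis
    psi = np.moveaxis(psi, 0, target)
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                        # |000>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
print(apply_1q_gate(state, H, target=0, n_qubits=n))  # (|000> + |100>) / sqrt(2)
```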

  19. Computing complex Airy functions by numerical quadrature

    NARCIS (Netherlands)

    A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2001-01-01

    Integral representations of solutions of the Airy differential equation w'' - z w = 0 are considered for computing Airy functions for complex values of z. In a first method contour integral representations of the Airy functions are written as non-oscillating

  20. High-reliability computing for the smarter planet

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Graham, Paul; Manuzzato, Andrea; Dehon, Andre

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is necessary

  1. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  2. Numerical computation of generalized importance functions

    International Nuclear Information System (INIS)

    Gomit, J.M.; Nasr, M.; Ngyuen van Chi, G.; Pasquet, J.P.; Planchard, J.

    1981-01-01

    Thus far, an important effort has been devoted to developing and applying generalized perturbation theory in reactor physics analysis. In this work we are interested in the calculation of the importance functions by the method of A. Gandini. We have noted that in this method the convergence of the iterative procedure adopted is not rapid. Hence to accelerate this convergence we have used the semi-iterative technique. Two computer codes have been developed for one and two dimensional calculations (SPHINX-1D and SPHINX-2D). The advantage of our calculation was confirmed by some comparative tests in which the iteration number and the computing time were highly reduced with respect to classical calculation (CIAP-1D and CIAP-2D). (orig.) [de

  3. Numerical cosmology: Revealing the universe using computers

    International Nuclear Information System (INIS)

    Centrella, J.; Matzner, R.A.; Tolman, B.W.

    1986-01-01

    In this paper the authors present two research projects which study the evolution of different periods in the history of the universe using numerical simulations. The first investigates the synthesis of light elements in an inhomogeneous early universe dominated by shocks and non-linear gravitational waves. The second follows the evolution of large scale structures during the later history of the universe and calculates their effect on the 3K background radiation. Their simulations are carried out using modern supercomputers and make heavy use of multidimensional color graphics, including film to elucidate the results. Both projects provide the authors the opportunity to do experiments in cosmology and assess their results against fundamental cosmological observations

  4. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    Science.gov (United States)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
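
    CARE evaluates closed-form reliability expressions for basic redundancy schemes. A small illustration of the kind of equation involved, assuming identical, independently failing units in a k-of-n arrangement (triple modular redundancy is the 2-of-3 case); the numbers are illustrative only:

```python
from math import comb

def k_of_n_reliability(k, n, r):
    """Probability that at least k of n identical, independent units survive."""
    return sum(comb(n, m) * r**m * (1 - r)**(n - m) for m in range(k, n + 1))

r = 0.95
print(k_of_n_reliability(2, 3, r))   # triple modular redundancy
print(3 * r**2 - 2 * r**3)           # closed form 3r^2 - 2r^3 for comparison
```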

  5. Numerical computation of special functions with applications to physics

    CSIR Research Space (South Africa)

    Motsepe, K

    2008-09-01

    Full Text Available Students of mathematical physics, engineering, natural and biological sciences sometimes need to use special functions that are not found in ordinary mathematical software. In this paper a simple universal numerical algorithm is developed to compute...

  6. Numerical aspects for efficient welding computational mechanics

    Directory of Open Access Journals (Sweden)

    Aburuga Tarek Kh.S.

    2014-01-01

    Full Text Available The effect of residual stresses and strains is one of the most important parameters in structural integrity assessment. A finite element model is constructed to simulate the multi-pass mismatched submerged arc welding (SAW) used in a welded tensile test specimen. A sequentially coupled thermal-mechanical analysis is performed with the ABAQUS software to calculate the residual stresses and distortion due to welding. In this work, three main issues are studied with the aim of reducing the computation time of welding simulation, which is the major problem in computational welding mechanics (CWM). The first issue is the dimensionality of the problem: both two- and three-dimensional models are constructed for the same analysis type, and shell elements in the two-dimensional simulation show good performance compared with brick elements. The conventional way to calculate residual stresses is an implicit scheme, which is costly because the welding and cooling times are relatively long. This work shows that an explicit scheme with the mass scaling technique can be used instead, reducing the analysis time very efficiently. With this new technique it becomes possible to simulate relatively large three-dimensional structures.

  7. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer networks have been studied, with different trends in network architecture and in the protocols that govern data transfers and guarantee reliable communication among all participants. A hierarchical network structure is proposed here to provide a simple and inexpensive way of realizing a reliable real-time computer network. In this architecture, all computers at the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. Such a level of the network can be considered a local area computer network (LACN) and can be used in a nuclear power plant control system, since such a system has geographically dispersed subsystems. Network expansion is straightforward: each added computer (HOST) is attached to the common channel. All the nodes are designed around a microprocessor chip to provide the required intelligence. Each node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer; the latter naturally tends to have some variations in hardware details to match the requirements of individual host computers. fig 7

  8. Comparison of reliability of lateral cephalogram and computed ...

    African Journals Online (AJOL)

    of malocclusion and airway space using lateral cephalogram and computed tomography (CT) and to compare its reliability. To obtain important information on the morphology of the soft palate on lateral cephalogram and to determine its etiopathogenesis in obstructive sleep apnea (OSA). Materials and Methods: Lateral ...

  9. Systems reliability analysis: applications of the SPARCS System-Reliability Assessment Computer Program

    International Nuclear Information System (INIS)

    Locks, M.O.

    1978-01-01

    SPARCS-2 (Simulation Program for Assessing the Reliabilities of Complex Systems, Version 2) is a PL/1 computer program for assessing (establishing interval estimates for) the reliability and the MTBF of a large and complex s-coherent system of any modular configuration. The system can consist of a complex logical assembly of independently failing attribute (binomial-Bernoulli) and time-to-failure (Poisson-exponential) components, without regard to their placement. Alternatively, it can be a configuration of independently failing modules, where each module has either or both attribute and time-to-failure components. SPARCS-2 also has an improved super modularity feature. Modules with minimal-cut unreliability calculations can be mixed with those having minimal-path reliability calculations. All output has been standardized to system reliability or probability of success, regardless of the form in which the input data is presented, and whatever the configuration of modules or elements within modules

  10. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
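
    As a concrete illustration of estimating a failure probability for a single limit state, the sketch below uses crude Monte Carlo sampling; the lognormal resistance/load limit state is hypothetical and not taken from the paper.

```python
import numpy as np

def mc_failure_probability(g, sample, n=100_000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) <= 0] with a user-supplied sampler."""
    rng = np.random.default_rng(seed)
    x = sample(rng, n)
    fails = g(x) <= 0.0
    pf = fails.mean()
    se = np.sqrt(pf * (1 - pf) / n)          # standard error of the estimate
    return pf, se

# Hypothetical limit state: resistance R minus load S, both lognormal.
g = lambda x: x[:, 0] - x[:, 1]
sample = lambda rng, n: np.column_stack([
    rng.lognormal(mean=1.0, sigma=0.1, size=n),   # resistance R
    rng.lognormal(mean=0.5, sigma=0.2, size=n),   # load S
])
print(mc_failure_probability(g, sample))
```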

  11. Univolatility curves in ternary mixtures: geometry and numerical computation

    DEFF Research Database (Denmark)

    Shcherbakova, Nataliya; Rodriguez-Donis, Ivonne; Abildskov, Jens

    2017-01-01

    We propose a new non-iterative numerical algorithm allowing computation of all univolatility curves in homogeneous ternary mixtures independently of the presence of the azeotropes. The key point is the concept of generalized univolatility curves in the 3D state space, which allows the main comput...

  12. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  13. Planning Irreversible Electroporation in the Porcine Kidney: Are Numerical Simulations Reliable for Predicting Empiric Ablation Outcomes?

    International Nuclear Information System (INIS)

    Wimmer, Thomas; Srimathveeravalli, Govindarajan; Gutta, Narendra; Ezell, Paula C.; Monette, Sebastien; Maybody, Majid; Erinjery, Joseph P.; Durack, Jeremy C.; Coleman, Jonathan A.; Solomon, Stephen B.

    2015-01-01

    Purpose: Numerical simulations are used for treatment planning in clinical applications of irreversible electroporation (IRE) to determine ablation size and shape. To assess the reliability of simulations for treatment planning, we compared simulation results with empiric outcomes of renal IRE using computed tomography (CT) and histology in an animal model. Methods: The ablation size and shape for six different IRE parameter sets (70–90 pulses, 2,000–2,700 V, 70–100 µs) for monopolar and bipolar electrodes was simulated using a numerical model. Employing these treatment parameters, 35 CT-guided IRE ablations were created in both kidneys of six pigs and followed up with CT immediately and after 24 h. Histopathology was analyzed from postablation day 1. Results: Ablation zones on CT measured 81 ± 18 % (day 0, p ≤ 0.05) and 115 ± 18 % (day 1, p ≤ 0.09) of the simulated size for monopolar electrodes, and 190 ± 33 % (day 0, p ≤ 0.001) and 234 ± 12 % (day 1, p ≤ 0.0001) for bipolar electrodes. Histopathology indicated smaller ablation zones than simulated (71 ± 41 %, p ≤ 0.047) and measured on CT (47 ± 16 %, p ≤ 0.005) with complete ablation of kidney parenchyma within the central zone and incomplete ablation in the periphery. Conclusion: Both numerical simulations for planning renal IRE and CT measurements may overestimate the size of ablation compared to histology, and ablation effects may be incomplete in the periphery

  14. A Research Roadmap for Computation-Based Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  15. A Research Roadmap for Computation-Based Human Reliability Analysis

    International Nuclear Information System (INIS)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-01-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  16. Numerical computation of molecular integrals via optimized (vectorized) FORTRAN code

    International Nuclear Information System (INIS)

    Scott, T.C.; Grant, I.P.; Saunders, V.R.

    1997-01-01

    The calculation of molecular properties based on quantum mechanics is an area of fundamental research whose horizons have always been determined by the power of state-of-the-art computers. A computational bottleneck is the numerical calculation of the required molecular integrals to sufficient precision. Herein, we present a method for the rapid numerical evaluation of molecular integrals using optimized FORTRAN code generated by Maple. The method is based on the exploitation of common intermediates and the optimization can be adjusted to both serial and vectorized computations. (orig.)
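
    The "exploitation of common intermediates" can be illustrated outside Maple as well. The sketch below applies SymPy's common-subexpression elimination to two made-up integrand expressions and emits Fortran for the reduced form; it is only an analogue of the idea, not the authors' Maple code-generation pipeline.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# Two hypothetical integrand factors sharing the sub-expressions (x + y)**2 and exp(-z).
e1 = (x + y) ** 2 * sp.exp(-z) + sp.sqrt((x + y) ** 2 + 1)
e2 = sp.exp(-z) * ((x + y) ** 2 - z)

# Common-subexpression elimination: shared intermediates are computed once.
replacements, reduced = sp.cse([e1, e2])
for sym, expr in replacements:
    print(sym, '=', expr)
print(reduced)

# Fortran code for the reduced expressions can then be emitted, e.g.:
print(sp.fcode(reduced[0], standard=95))
```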

  17. A summary of numerical computation for special functions

    International Nuclear Information System (INIS)

    Zhang Shanjie

    1992-01-01

    In the paper, special functions frequently encountered in science and engineering calculations are introduced. The computation of the values of Bessel function and elliptic integrals are taken as the examples, and some common algorithms for computing most special functions, such as series expansion for small argument, asymptotic approximations for large argument, polynomial approximations, recurrence formulas and iteration method, are discussed. In addition, the determination of zeros of some special functions, and the other questions related to numerical computation are also discussed
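
    A small illustration of the "series expansion for small argument, asymptotic approximation for large argument" strategy, using the Bessel function J0 and SciPy's implementation as the reference value:

```python
import numpy as np
from scipy.special import j0   # reference implementation

def j0_series(x, terms=25):
    """Power series for J0, suitable for small |x|."""
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= -(x / 2) ** 2 / k ** 2      # ratio of consecutive series terms
        total += term
    return total

def j0_asymptotic(x):
    """Leading-order asymptotic form, suitable for large x > 0."""
    return np.sqrt(2 / (np.pi * x)) * np.cos(x - np.pi / 4)

print(j0_series(1.0), j0(1.0))        # series regime
print(j0_asymptotic(50.0), j0(50.0))  # asymptotic regime
```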

  18. Reliability analysis framework for computer-assisted medical decision systems

    International Nuclear Information System (INIS)

    Habas, Piotr A.; Zurada, Jacek M.; Elmaghraby, Adel S.; Tourassi, Georgia D.

    2007-01-01

    We present a technique that enhances computer-assisted decision (CAD) systems with the ability to assess the reliability of each individual decision they make. Reliability assessment is achieved by measuring the accuracy of a CAD system with known cases similar to the one in question. The proposed technique analyzes the feature space neighborhood of the query case to dynamically select an input-dependent set of known cases relevant to the query. This set is used to assess the local (query-specific) accuracy of the CAD system. The estimated local accuracy is utilized as a reliability measure of the CAD response to the query case. The underlying hypothesis of the study is that CAD decisions with higher reliability are more accurate. The above hypothesis was tested using a mammographic database of 1337 regions of interest (ROIs) with biopsy-proven ground truth (681 with masses, 656 with normal parenchyma). Three types of decision models, (i) a back-propagation neural network (BPNN), (ii) a generalized regression neural network (GRNN), and (iii) a support vector machine (SVM), were developed to detect masses based on eight morphological features automatically extracted from each ROI. The performance of all decision models was evaluated using the Receiver Operating Characteristic (ROC) analysis. The study showed that the proposed reliability measure is a strong predictor of the CAD system's case-specific accuracy. Specifically, the ROC area index for CAD predictions with high reliability was significantly better than for those with low reliability values. This result was consistent across all decision models investigated in the study. The proposed case-specific reliability analysis technique could be used to alert the CAD user when an opinion that is unlikely to be reliable is offered. The technique can be easily deployed in the clinical environment because it is applicable with a wide range of classifiers regardless of their structure and it requires neither additional
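
    One simple reading of the query-specific accuracy idea is sketched below: the reliability of a CAD decision is taken as the classifier's accuracy on the k known cases nearest to the query in feature space. The data, the feature count, and the value of k are hypothetical.

```python
import numpy as np

def local_reliability(query, known_x, known_y, known_pred, k=15):
    """Reliability of a CAD decision, read as the classifier's accuracy on the
    k known cases closest to the query in feature space."""
    d = np.linalg.norm(known_x - query, axis=1)
    nearest = np.argsort(d)[:k]
    return np.mean(known_y[nearest] == known_pred[nearest])

# Hypothetical data: 8 morphological features per ROI, ground truth, and CAD outputs.
rng = np.random.default_rng(1)
known_x = rng.normal(size=(500, 8))
known_y = rng.integers(0, 2, size=500)
known_pred = np.where(rng.random(500) < 0.8, known_y, 1 - known_y)  # ~80% accurate CAD
print(local_reliability(rng.normal(size=8), known_x, known_y, known_pred))
```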

  19. Evaluation of Network Reliability for Computer Networks with Multiple Sources

    Directory of Open Access Journals (Sweden)

    Yi-Kuei Lin

    2012-01-01

    Full Text Available Evaluating the reliability of a network with multiple sources to multiple sinks is a critical issue from the perspective of quality management. Due to the unrealistic definition of paths of network models in previous literature, existing models are not appropriate for real-world computer networks such as the Taiwan Advanced Research and Education Network (TWAREN). This paper proposes a modified stochastic-flow network model to evaluate the network reliability of a practical computer network with multiple sources where data is transmitted through several light paths (LPs). Network reliability is defined as being the probability of delivering a specified amount of data from the sources to the sink. It is taken as a performance index to measure the service level of TWAREN. This paper studies the network reliability of the international portion of TWAREN from two sources (Taipei and Hsinchu) to one sink (New York) that goes through a submarine and land surface cable between Taiwan and the United States.

  20. Reliability analysis of Airbus A-330 computer flight management system

    OpenAIRE

    Fajmut, Metod

    2010-01-01

    This diploma thesis deals with the digitized, computerized »fly-by-wire« flight control system and the safety aspects of the computer system of the Airbus A330 aircraft. As in space and military aircraft structures, in commercial airplanes a large share of the financial effort is devoted to reliability. Conventional aircraft control systems relied, and some still rely, on mechanical and hydraulic connections between the pilot's controls and the control surfaces. But newer a...

  1. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  2. A textbook of computer based numerical and statistical techniques

    CERN Document Server

    Jaiswal, AK

    2009-01-01

    About the Book: Application of Numerical Analysis has become an integral part of the life of all the modern engineers and scientists. The contents of this book covers both the introductory topics and the more advanced topics such as partial differential equations. This book is different from many other books in a number of ways. Salient Features: Mathematical derivation of each method is given to build the students understanding of numerical analysis. A variety of solved examples are given. Computer programs for almost all numerical methods discussed have been presented in `C` langu

  3. Computer Model to Estimate Reliability Engineering for Air Conditioning Systems

    International Nuclear Information System (INIS)

    Afrah Al-Bossly, A.; El-Berry, A.; El-Berry, A.

    2012-01-01

    Reliability engineering is used to predict the performance and to optimize the design and maintenance of air conditioning systems. Air conditioning systems are exposed to a number of failures. Failures such as failure to turn on, loss of cooling capacity, reduced output air temperature, loss of cool air supply, and complete loss of air flow can be due to a variety of problems with one or more components of an air conditioner or air conditioning system. Forecasting system failure rates is very important for maintenance. This paper focuses on the reliability of air conditioning systems, using statistical distributions commonly applied in reliability settings: the standard (two-parameter) Weibull and Gamma distributions. After the distribution parameters had been estimated, reliability estimations and predictions were used for the evaluations. To evaluate good operating conditions in a building, the reliability of the air conditioning system that supplies conditioned air to the company's several departments was studied. This air conditioning system is divided into two parts, namely the main chilled water system and the ten air handling systems that serve the ten departments. In a chilled-water system the air conditioner cools water down to 40-45 degree F (4-7 degree C); the chilled water is distributed throughout the building in a piping system and connected to air conditioning cooling units wherever needed. Data analysis was performed with the support of computer-aided reliability software; the Weibull and Gamma distributions indicated that the reliability of the systems equals 86.012% and 77.7%, respectively. A comparison between the two important families of distribution functions, namely the Weibull and Gamma families, was studied, and it was found that the Weibull method was preferable for decision making.
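
    A minimal sketch of the distribution-fitting step, assuming hypothetical failure-time data for a single air-handling unit and a two-parameter Weibull model (location fixed at zero):

```python
import numpy as np
from scipy import stats

# Hypothetical times-to-failure (hours) for one air-handling unit.
rng = np.random.default_rng(0)
t_fail = rng.weibull(1.8, size=60) * 4000.0

# Fit a 2-parameter Weibull (location fixed at 0) and evaluate reliability R(t).
shape, loc, scale = stats.weibull_min.fit(t_fail, floc=0)
t = 2000.0
reliability = np.exp(-(t / scale) ** shape)      # R(t) = exp(-(t/eta)^beta)
print(f"beta={shape:.2f}, eta={scale:.0f} h, R({t:.0f} h)={reliability:.3f}")
print("check:", stats.weibull_min.sf(t, shape, loc, scale))
```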

  4. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin which means that failure is a small probability event. Such a probability level is difficult to assess efficiently. Second, the structure mechanical behaviour is modelled numerically in an attempt to reproduce the real response and numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a very few mechanical model computations. The efficiency of the method is, first, proved on two academic applications. It is then conducted for assessing the reliability of a challenging aerospace case study submitted to fatigue.
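
    AK-IS couples a Kriging metamodel with Importance Sampling; the metamodel part is beyond a short sketch, but the Importance Sampling estimator it relies on can be illustrated on its own. Below, the sampling density is recentred at an approximate design point in standard normal space; the linear limit state is a textbook test case, not one of the paper's applications.

```python
import numpy as np
from scipy import stats

def is_failure_probability(g, u_star, n=20_000, seed=0):
    """Importance-sampling estimate of P[g(U) <= 0] in standard normal space,
    with the sampling density recentred at an (approximate) design point u_star."""
    rng = np.random.default_rng(seed)
    dim = len(u_star)
    u = rng.normal(size=(n, dim)) + u_star            # samples from h = N(u_star, I)
    w = np.exp(stats.norm.logpdf(u).sum(axis=1)       # likelihood ratio f(u) / h(u)
               - stats.norm.logpdf(u - u_star).sum(axis=1))
    ind = (g(u) <= 0.0).astype(float)
    est = np.mean(ind * w)
    cov = np.std(ind * w) / (np.sqrt(n) * est)        # coefficient of variation
    return est, cov

# Linear limit state g(u) = beta - u1, so that P_f = Phi(-beta) exactly.
beta = 4.0
g = lambda u: beta - u[:, 0]
print(is_failure_probability(g, u_star=np.array([beta, 0.0])))
print("exact:", stats.norm.cdf(-beta))
```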

  5. Numerical problems with the Pascal triangle in moment computation

    Czech Academy of Sciences Publication Activity Database

    Kautsky, J.; Flusser, Jan

    2016-01-01

    Roč. 306, č. 1 (2016), s. 53-68 ISSN 0377-0427 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords : moment computation * Pascal triangle * appropriate polynomial basis * numerical problems Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/flusser-0459096.pdf

  6. Algorithmic mechanisms for reliable crowdsourcing computation under collusion.

    Science.gov (United States)

    Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A; Pareja, Daniel

    2015-01-01

    We consider a computing system where a master processor assigns a task for execution to worker processors that may collude. We model the workers' decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a game among workers. That is, we assume that workers are rational in a game-theoretic sense. We identify analytically the parameter conditions for a unique Nash Equilibrium where the master obtains the correct result. We also evaluate experimentally mixed equilibria aiming to attain better reliability-profit trade-offs. For a wide range of parameter values that may be used in practice, our simulations show that, in fact, both master and workers are better off using a pure equilibrium where no worker cheats, even under collusion, and even for colluding behaviors that involve deviating from the game.

  7. Numerical computation of aeroacoustic transfer functions for realistic airfoils

    NARCIS (Netherlands)

    De Santana, Leandro Dantas; Miotto, Renato Fuzaro; Wolf, William Roberto

    2017-01-01

    Based on Amiet's theory formalism, we propose a numerical framework to compute the aeroacoustic transfer function of realistic airfoil geometries. The aeroacoustic transfer function relates the amplitude and phase of an incoming periodic gust to the respective unsteady lift response permitting,

  8. Computer-Numerical-Control and the EMCO Compact 5 Lathe.

    Science.gov (United States)

    Mullen, Frank M.

    This laboratory manual is intended for use in teaching computer-numerical-control (CNC) programming using the Emco Maier Compact 5 Lathe. Developed for use at the postsecondary level, this material contains a short introduction to CNC machine tools. This section covers CNC programs, CNC machine axes, and CNC coordinate systems. The following…

  9. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-09-19

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
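
    A minimal example of approximating a Quantity of Interest driven by an SDE: Euler-Maruyama paths of geometric Brownian motion and a Monte Carlo estimate of a European call price. The parameters are illustrative; the exact Black-Scholes value is about 10.45.

```python
import numpy as np

def euler_maruyama_gbm(s0, mu, sigma, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama paths of dS = mu*S dt + sigma*S dW (geometric Brownian motion)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.full(n_paths, s0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        s += mu * s * dt + sigma * s * dw
    return s

# Quantity of interest: discounted payoff of a European call, E[exp(-rT) max(S_T - K, 0)].
s_T = euler_maruyama_gbm(s0=100, mu=0.05, sigma=0.2, T=1.0, n_steps=200, n_paths=100_000)
print(np.exp(-0.05) * np.maximum(s_T - 100.0, 0.0).mean())
```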

  10. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-01-01

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  11. Introduction to Numerical Computation - analysis and Matlab illustrations

    DEFF Research Database (Denmark)

    Elden, Lars; Wittmeyer-Koch, Linde; Nielsen, Hans Bruun

    In a modern programming environment like e.g. MATLAB it is possible by simple commands to perform advanced calculations on a personal computer. In order to use such a powerful tool efficiently it is necessary to have an overview of available numerical methods and algorithms and to know about... are illustrated by examples in MATLAB.

  12. Diagnostic reliability of MMPI-2 computer-based test interpretations.

    Science.gov (United States)

    Pant, Hina; McCabe, Brian J; Deskovitz, Mark A; Weed, Nathan C; Williams, John E

    2014-09-01

    Reflecting the common use of the MMPI-2 to provide diagnostic considerations, computer-based test interpretations (CBTIs) also typically offer diagnostic suggestions. However, these diagnostic suggestions can sometimes be shown to vary widely across different CBTI programs even for identical MMPI-2 profiles. The present study evaluated the diagnostic reliability of 6 commercially available CBTIs using a 20-item Q-sort task developed for this study. Four raters each sorted diagnostic classifications based on these 6 CBTI reports for 20 MMPI-2 profiles. Two questions were addressed. First, do users of CBTIs understand the diagnostic information contained within the reports similarly? Overall, diagnostic sorts of the CBTIs showed moderate inter-interpreter diagnostic reliability (mean r = .56), with sorts for the 1/2/3 profile showing the highest inter-interpreter diagnostic reliability (mean r = .67). Second, do different CBTIs programs vary with respect to diagnostic suggestions? It was found that diagnostic sorts of the CBTIs had a mean inter-CBTI diagnostic reliability of r = .56, indicating moderate but not strong agreement across CBTIs in terms of diagnostic suggestions. The strongest inter-CBTI diagnostic agreement was found for sorts of the 1/2/3 profile CBTIs (mean r = .71). Limitations and future directions are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  13. Numeric computation and statistical data analysis on the Java platform

    CERN Document Server

    Chekanov, Sergei V

    2016-01-01

    Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...

  14. Numerical computation of gravitational field for general axisymmetric objects

    Science.gov (United States)

    Fukushima, Toshio

    2016-10-01

    We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (I) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (II) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (I) finite uniform objects covering rhombic spindles and circular toroids, (II) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (III) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
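
    The building block of the method is the potential of a circular ring, which involves the complete elliptic integral K; accelerations then follow by numerical differentiation. A simplified single-ring sketch (central differences instead of Ridder's scheme, with an arbitrary illustrative mass):

```python
import numpy as np
from scipy.special import ellipk

G = 6.674e-11   # gravitational constant, SI units

def ring_potential(R, z, a, M):
    """Gravitational potential of a uniform circular ring (radius a, mass M)
    at cylindrical coordinates (R, z); ellipk is the complete elliptic integral K."""
    denom = np.sqrt((R + a) ** 2 + z ** 2)
    m = 4.0 * a * R / denom ** 2                 # parameter m = k^2 of ellipk
    return -2.0 * G * M * ellipk(m) / (np.pi * denom)

def radial_acceleration(R, z, a, M, h=1e-4):
    """a_R = -dPhi/dR by central differences (the paper uses Ridder's scheme)."""
    return -(ring_potential(R + h, z, a, M) - ring_potential(R - h, z, a, M)) / (2 * h)

# On the axis (R = 0) the ring potential reduces to -GM/sqrt(a^2 + z^2): quick check.
a, M = 1.0, 1.0e10
print(ring_potential(0.0, 2.0, a, M), -G * M / np.sqrt(a**2 + 2.0**2))
print(radial_acceleration(1.5, 0.5, a, M))
```

    For a full axisymmetric body, the ring contribution would be integrated over the density distribution, which is the double integral evaluated by the split double-exponential quadrature described above.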

  15. Numerical methods and computers used in elastohydrodynamic lubrication

    Science.gov (United States)

    Hamrock, B. J.; Tripp, J. H.

    1982-01-01

    Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.

  16. Development of small scale cluster computer for numerical analysis

    Science.gov (United States)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each processor involved is a multicore processor with four cores, so the cluster has eight processor cores in total. The cluster runs an Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI Hello program written in the C language. In addition, a performance test was done to show that the calculation performance of the cluster is much better than that of a single-CPU computer. In this performance test, the same code was run four times, using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors the time required to solve the problem decreases; the calculation time is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware that is capable of higher computing power than a single-CPU processor, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
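
    The communication test described above used a simple MPI "Hello" program written in C; an equivalent check in Python with mpi4py (assuming MPICH2 and mpi4py are installed on every node) looks like this:

```python
# hello_mpi.py -- run with: mpiexec -n 8 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # this process's id within the communicator
size = comm.Get_size()      # total number of processes in the job
name = MPI.Get_processor_name()

print(f"Hello from rank {rank} of {size} on node {name}")
```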

  17. Reliability of an interactive computer program for advance care planning.

    Science.gov (United States)

    Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-06-01

    Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Knuder Richardson formula 20 [KR-20]=0.83-0.95, and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time.

  18. Reliability of an Interactive Computer Program for Advance Care Planning

    Science.gov (United States)

    Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J

    2012-01-01

    Abstract Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Knuder Richardson formula 20 [KR-20]=0.83–0.95, and 0.86–0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830

  19. The reliability of tablet computers in depicting maxillofacial radiographic landmarks

    Energy Technology Data Exchange (ETDEWEB)

    Tadinada, Aditya; Mahdian, Mina; Sheth, Sonam; Chandhoke, Taranpreet K.; Gopalakrishna, Aadarsh; Potluri, Anitha; Yadav, Sumit [University of Connecticut School of Dental Medicine, Farmington (United States)

    2015-09-15

    This study was performed to evaluate the reliability of the identification of anatomical landmarks in panoramic and lateral cephalometric radiographs on a standard medical grade picture archiving communication system (PACS) monitor and a tablet computer (iPad 5). A total of 1000 radiographs, including 500 panoramic and 500 lateral cephalometric radiographs, were retrieved from the de-identified dataset of the archive of the Section of Oral and Maxillofacial Radiology of the University Of Connecticut School Of Dental Medicine. Major radiographic anatomical landmarks were independently reviewed by two examiners on both displays. The examiners initially reviewed ten panoramic and ten lateral cephalometric radiographs using each imaging system, in order to verify interoperator agreement in landmark identification. The images were scored on a four-point scale reflecting the diagnostic image quality and exposure level of the images. Statistical analysis showed no significant difference between the two displays regarding the visibility and clarity of the landmarks in either the panoramic or cephalometric radiographs. Tablet computers can reliably show anatomical landmarks in panoramic and lateral cephalometric radiographs.

  20. Computer-Aided Numerical Inversion of Laplace Transform

    Directory of Open Access Journals (Sweden)

    Umesh Kumar

    2000-01-01

    Full Text Available This paper explores a technique for the computer-aided numerical inversion of the Laplace transform. The inversion technique is based on the properties of a family of three-parameter exponential probability density functions. The only limitation in the technique is the word length of the computer being used. The Laplace transform has been used extensively in the frequency domain solution of linear, lumped time invariant networks but its application to the time domain has been limited, mainly because of the difficulty in finding the necessary poles and residues. The numerical inversion technique mentioned above does away with the poles and residues but uses precomputed numbers to find the time response. This technique is applicable to the solution of partial differential equations and certain classes of linear systems with time varying components.
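
    The paper's inversion technique (based on three-parameter exponential probability density functions) is not reproduced here, but numerical Laplace inversion in general can be demonstrated with mpmath's built-in routines, using F(s) = 1/(s+1), whose exact inverse is exp(-t):

```python
import mpmath as mp

mp.mp.dps = 30                       # working precision in decimal digits

F = lambda s: 1 / (s + 1)            # Laplace transform of f(t) = exp(-t)

for t in (0.5, 1.0, 2.0):
    approx = mp.invertlaplace(F, t, method='talbot')
    print(t, approx, mp.exp(-t))     # numerical inverse vs. exact value
```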

  1. Reliability of real-time computing with radiation data feedback at accidental release

    International Nuclear Information System (INIS)

    Deme, S.; Feher, I.; Lang, E.

    1989-07-01

    At present, the computing method normalized for the telemetric data represents the primary information for deciding on any necessary countermeasures in case of a nuclear reactor accident. The reliability of the results, however, is influenced by the choice of certain parameters that cannot be determined by direct methods. Improperly chosen diffusion parameters would distort the determination of environmental radiation parameters normalized on the basis of the measurements (131I activity concentration, gamma dose rate) at points lying at a given distance from the measuring stations. Numerical examples of the uncertainties due to the above factors are analyzed. (author) 4 refs.; 14 figs

  2. A New Language Design for Prototyping Numerical Computation

    Directory of Open Access Journals (Sweden)

    Thomas Derby

    1996-01-01

    Full Text Available To naturally and conveniently express numerical algorithms, considerable expressive power is needed in the languages in which they are implemented. The language Matlab is widely used by numerical analysts for this reason. Expressiveness or ease-of-use can also result in a loss of efficiency, as is the case with Matlab. In particular, because numerical analysts are highly interested in the performance of their algorithms, prototypes are still often implemented in languages such as Fortran. In this article we describe a language design that is intended to both provide expressiveness for numerical computation, and at the same time provide performance guarantees. In our language, EQ, we attempt to include both syntactic and semantic features that correspond closely to the programmer's model of the problem, including unordered equations, large-granularity state transitions, and matrix notation. The resulting language does not fit into standard language categories such as functional or imperative but has features of both paradigms. We also introduce the notion of language dependability, which is the idea that a language should guarantee that certain program transformations are performed by all implementations. We first describe the interesting features of EQ, and then present three examples of algorithms written using it. We also provide encouraging performance results from an initial implementation of our language.

  3. A mutually profitable alliance - Asymptotic expansions and numerical computations

    Science.gov (United States)

    Euvrard, D.

    Problems including the flow past a wing airfoil at Mach 1, and the two-dimensional flow past a partially immersed body are used to show the advantages of coupling a standard numerical method for the whole domain where everything is of the order of 1, with an appropriate asymptotic expansion in the vicinity of some singular point. Cases more closely linking the two approaches are then considered. In the localized finite element method, the asymptotic expansion at infinity becomes a convergent series and the problem reduces to a variational form. Combined analytical and numerical methods are used in the singularity distribution method and in the various couplings of finite elements and a Green integral representation to design a subroutine to compute the Green function and its derivatives.

  4. Learning SciPy for numerical and scientific computing

    CERN Document Server

    Silva

    2013-01-01

    A step-by-step practical tutorial with plenty of examples on research-based problems from various areas of science that prove how simple, yet effective, it is to provide solutions based on SciPy. This book is targeted at anyone with basic knowledge of Python, a somewhat advanced command of mathematics/physics, and an interest in engineering or scientific applications---this is broadly what we refer to as scientific computing. This book will be of critical importance to programmers and scientists who have basic Python knowledge and would like to be able to do scientific and numerical computatio

  5. Numerical analysis of boosting scheme for scalable NMR quantum computation

    International Nuclear Information System (INIS)

    SaiToh, Akira; Kitagawa, Masahiro

    2005-01-01

    Among initialization schemes for ensemble quantum computation beginning at thermal equilibrium, the scheme proposed by Schulman and Vazirani [in Proceedings of the 31st ACM Symposium on Theory of Computing (STOC'99) (ACM Press, New York, 1999), pp. 322-329] is known for the simple quantum circuit to redistribute the biases (polarizations) of qubits and small time complexity. However, our numerical simulation shows that the number of qubits initialized by the scheme is rather smaller than expected from the von Neumann entropy because of an increase in the sum of the binary entropies of individual qubits, which indicates a growth in the total classical correlation. This result--namely, that there is such a significant growth in the total binary entropy--disagrees with that of their analysis

  6. Non-binary decomposition trees - a method of reliability computation for systems with known minimal paths/cuts

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Jacek

    2004-05-01

    A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)
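
    As a point of reference for the input data such tree algorithms consume, the sketch below computes the reliability of a small system directly from its minimal path sets by inclusion-exclusion. This brute-force formula is not Malinowski's decomposition-tree algorithm (it grows exponentially in the number of paths, which is precisely what tree-based methods avoid); the component reliabilities and path sets are illustrative.

        # Minimal sketch: system reliability from minimal path sets by inclusion-exclusion.
        # This is NOT the decomposition-tree algorithm of the paper, only a brute-force
        # reference for the same input data (independent components, known minimal paths).
        from itertools import combinations

        def system_reliability(min_paths, p):
            """min_paths: list of sets of component ids; p: dict id -> reliability."""
            total = 0.0
            n = len(min_paths)
            for k in range(1, n + 1):
                for combo in combinations(min_paths, k):
                    union = set().union(*combo)           # components that must all work
                    prob = 1.0
                    for c in union:
                        prob *= p[c]
                    total += (-1) ** (k + 1) * prob       # inclusion-exclusion sign
            return total

        # Example: two parallel series paths {1,2} and {3,4}
        paths = [{1, 2}, {3, 4}]
        p = {1: 0.9, 2: 0.9, 3: 0.8, 4: 0.8}
        print(system_reliability(paths, p))   # P(path1 works) + P(path2 works) - P(both work)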

  7. Non-binary decomposition trees - a method of reliability computation for systems with known minimal paths/cuts

    International Nuclear Information System (INIS)

    Malinowski, Jacek

    2004-01-01

    A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)

  8. A numerical method to compute interior transmission eigenvalues

    International Nuclear Information System (INIS)

    Kleefeld, Andreas

    2013-01-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view existing methods are very expensive, and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated closely together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than those of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)

  9. Emerging opportunities in enterprise integration with open architecture computer numerical controls

    Science.gov (United States)

    Hudson, Christopher A.

    1997-01-01

    The shift to open-architecture machine tool computer numerical controls is providing new opportunities for metal working oriented manufacturers to streamline the entire 'art to part' process. Production cycle times, accuracy, consistency, predictability and process reliability are just some of the factors that can be improved, leading to better manufactured product at lower costs. Open architecture controllers are allowing manufacturers to apply general purpose software and hardware tools where previous approaches relied on proprietary and unique hardware and software. This includes DNC, SCADA, CAD, and CAM, where the increasing use of general purpose components is leading to lower cost systems that are also more reliable and robust than the past proprietary approaches. In addition, a number of new opportunities exist which in the past were likely impractical due to cost or performance constraints.

  10. Stable numerical method in computation of stellar evolution

    International Nuclear Information System (INIS)

    Sugimoto, Daiichiro; Eriguchi, Yoshiharu; Nomoto, Ken-ichi.

    1982-01-01

    To compute the stellar structure and evolution in different stages, such as (1) red-giant stars in which the density and density gradient change over quite wide ranges, (2) rapid evolution with neutrino loss or unstable nuclear flashes, (3) hydrodynamical stages of star formation or supernova explosion, (4) transition phases from quasi-static to dynamical evolutions, (5) mass-accreting or mass-losing stars in binary-star systems, and (6) evolution of a stellar core whose mass is increasing by shell burning or decreasing by penetration of the convective envelope into the core, we face ''multi-timescale problems'' which can be treated neither by a simple-minded explicit scheme nor by an implicit one. This problem has been resolved by three prescriptions: one by introducing a hybrid scheme suitable for the multi-timescale problems of quasi-static evolution with heat transport, another by introducing a corresponding hybrid scheme for the multi-timescale problems of hydrodynamic evolution, and the third by introducing the Eulerian or, in other words, the mass fraction coordinate for evolution with changing mass. When all of them are combined in a single computer code, we can compute any phase of stellar evolution, including transition phases, in a numerically stable manner, as long as the star is spherically symmetric. (author)

  11. Computational techniques for inelastic analysis and numerical experiments

    International Nuclear Information System (INIS)

    Yamada, Y.

    1977-01-01

    A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns time-independent behavior, numerical techniques based on the finite element method have been well exploited and computations have become routine work. With respect to problems in which the time-dependent behavior is significant, it is desirable to incorporate a procedure which works with the mechanical model formulation as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent micro-structural changes which often occur during the operation of structural components at increasingly high temperatures over long periods of time. Special considerations are crucial if the analysis is to be extended to the large strain regime, where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development that take into account the various requisites stated above. (Auth.)

  12. Numerical simulation of NQR/NMR: Applications in quantum computing.

    Science.gov (United States)

    Possa, Denimar; Gaudio, Anderson C; Freitas, Jair C C

    2011-04-01

    A numerical simulation program able to simulate nuclear quadrupole resonance (NQR) as well as nuclear magnetic resonance (NMR) experiments is presented, written using the Mathematica package and aimed especially at applications in quantum computing. The program makes use of the interaction picture to compute the effect of the relevant nuclear spin interactions, without any assumption about the relative size of each interaction. This makes the program flexible and versatile, being useful in a wide range of experimental situations, going from NQR (at zero or under small applied magnetic field) to high-field NMR experiments. Some conditions specifically required for quantum computing applications are implemented in the program, such as the possibility of using elliptically polarized radiofrequency and the inclusion of first- and second-order terms in the average Hamiltonian expansion. A number of examples dealing with simple NQR and quadrupole-perturbed NMR experiments are presented, along with the proposal of experiments to create quantum pseudopure states and logic gates using NQR. The program and the various application examples are freely available through the link http://www.profanderson.net/files/nmr_nqr.php. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
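
    A minimal sketch of a ray-driven forward projector in the spirit described above, with the number of sample points per ray as the speed/accuracy knob discussed in the record. It uses bilinear interpolation via scipy.ndimage.map_coordinates and is not the authors' code; geometry and sampling choices are assumptions.

        # Hedged sketch: ray-driven parallel-beam forward projection of a 2-D image.
        # Accuracy per projection bin improves as more sample points are traced along
        # each ray (n_samples), at the cost of computation time.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def forward_project(image, angles, n_bins=None, n_samples=256):
            ny, nx = image.shape
            n_bins = n_bins or nx
            cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
            radius = max(nx, ny) / 2.0
            s = np.linspace(-radius, radius, n_bins)        # detector bin coordinate
            t = np.linspace(-radius, radius, n_samples)     # position along each ray
            dt = t[1] - t[0]
            sino = np.zeros((len(angles), n_bins))
            for i, theta in enumerate(angles):
                ct, st = np.cos(theta), np.sin(theta)
                # sample coordinates for all bins at once, shape (n_bins, n_samples)
                x = cx + s[:, None] * ct - t[None, :] * st
                y = cy + s[:, None] * st + t[None, :] * ct
                vals = map_coordinates(image, [y.ravel(), x.ravel()], order=1, cval=0.0)
                sino[i] = vals.reshape(n_bins, n_samples).sum(axis=1) * dt
            return sino

        phantom = np.zeros((128, 128)); phantom[40:90, 50:80] = 1.0
        sino = forward_project(phantom, np.linspace(0, np.pi, 180, endpoint=False))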

  14. Summary of research in applied mathematics, numerical analysis, and computer sciences

    Science.gov (United States)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  15. Singularities of robot mechanisms numerical computation and avoidance path planning

    CERN Document Server

    Bohigas, Oriol; Ros, Lluís

    2017-01-01

    This book presents the singular configurations associated with a robot mechanism, together with robust methods for their computation, interpretation, and avoidance path planning. Having such methods is essential as singularities generally pose problems to the normal operation of a robot, but also determine the workspaces and motion impediments of its underlying mechanical structure. A distinctive feature of this volume is that the methods are applicable to nonredundant mechanisms of general architecture, defined by planar or spatial kinematic chains interconnected in an arbitrary way. Moreover, singularities are interpreted as silhouettes of the configuration space when seen from the input or output spaces. This leads to a powerful image that explains the consequences of traversing singular configurations, and all the rich information that can be extracted from them. The problems are solved by means of effective branch-and-prune and numerical continuation methods that are of independent interest in themselves...

  16. Numerical demonstration of neuromorphic computing with photonic crystal cavities.

    Science.gov (United States)

    Laporte, Floris; Katumba, Andrew; Dambre, Joni; Bienstman, Peter

    2018-04-02

    We propose a new design for a passive photonic reservoir computer on a silicon photonics chip which can be used in the context of optical communication applications, and study it through detailed numerical simulations. The design consists of a photonic crystal cavity with a quarter-stadium shape, which is known to foster interesting mixing dynamics. These mixing properties turn out to be very useful for memory-dependent optical signal processing tasks, such as header recognition. The proposed, ultra-compact photonic crystal cavity exhibits a memory of up to 6 bits, while simultaneously accepting bitrates in a wide region of operation. Moreover, because of the inherent low losses in a high-Q photonic crystal cavity, the proposed design is very power efficient.

  17. The reliable solution and computation time of variable parameters Logistic model

    OpenAIRE

    Pengfei, Wang; Xinnong, Pan

    2016-01-01

    The reliable computation time (RCT, denoted Tc) of a double-precision computation of a variable-parameters logistic map (VPLM) is studied. First, using the proposed method, the reliable solutions for the logistic map are obtained. Second, for a VPLM with time-dependent, non-stationary parameters, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different...
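
    As an illustration of the idea of a reliable computation time, the sketch below iterates the (constant-parameter) logistic map in IEEE double precision alongside a 100-digit mpmath reference and reports the first step at which the two trajectories differ by more than a tolerance. The cut-off definition and parameter values are assumptions for illustration, not the authors' procedure for the variable-parameter map.

        # Hedged illustration of a "reliable computation time": compare a double-precision
        # logistic-map trajectory with a 100-digit reference and report the first step
        # at which they differ by more than tol.
        from mpmath import mp, mpf

        def reliable_steps(x0=0.3, r=4.0, n_max=200, tol=1e-3, digits=100):
            mp.dps = digits
            x_double = x0
            x_ref = mpf(str(x0))
            r_ref = mpf(str(r))
            for n in range(1, n_max + 1):
                x_double = r * x_double * (1.0 - x_double)   # IEEE double precision
                x_ref = r_ref * x_ref * (1 - x_ref)          # high-precision reference
                if abs(x_double - float(x_ref)) > tol:
                    return n                                 # trajectories have diverged
            return n_max

        print(reliable_steps())   # typically a few tens of iterations for r = 4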

  18. CDIAC catalog of numeric data packages and computer model packages

    International Nuclear Information System (INIS)

    Boden, T.A.; Stoss, F.W.

    1993-05-01

    The Carbon Dioxide Information Analysis Center acquires, quality-assures, and distributes to the scientific community numeric data packages (NDPs) and computer model packages (CMPs) dealing with topics related to atmospheric trace-gas concentrations and global climate change. These packages include data on historic and present atmospheric CO2 and CH4 concentrations, historic and present oceanic CO2 concentrations, historic weather and climate around the world, sea-level rise, storm occurrences, volcanic dust in the atmosphere, sources of atmospheric CO2, plants' response to elevated CO2 levels, sunspot occurrences, and many other indicators of, contributors to, or components of climate change. This catalog describes the packages presently offered by CDIAC, reviews the processes used by CDIAC to assure the quality of the data contained in these packages, notes the media on which each package is available, describes the documentation that accompanies each package, and provides ordering information. Numeric data are available in the printed NDPs and CMPs, in CD-ROM format, and from an anonymous FTP area via Internet. All CDIAC information products are available at no cost

  19. Criteria for the reliability of numerical approximations to the solution of fluid flow problems

    International Nuclear Information System (INIS)

    Foias, C.

    1986-01-01

    The numerical approximation of the solutions of fluid flow models is a difficult problem in many areas of energy research. In all numerical methods implementable on digital computers, a basic question is whether the number N of elements (Galerkin modes, finite-difference cells, finite elements, etc.) is sufficient to describe the long-time behavior of the exact solutions. It was shown using several approaches that some of the physically intuitive estimates of N are rigorously valid under very general conditions and follow directly from the mathematical theory of the Navier-Stokes equations. Among the mathematical approaches to these estimates, the most promising (which can be and has already been applied to many other dissipative partial differential systems) consists in giving upper estimates for the fractal dimension of the attractor associated with one (or all) solution(s) of the respective partial differential equations. 56 refs

  20. Reliability of voxel gray values in cone beam computed tomography for preoperative implant planning assessment

    NARCIS (Netherlands)

    Parsa, A.; Ibrahim, N.; Hassan, B.; Motroni, A.; van der Stelt, P.; Wismeijer, D.

    2012-01-01

    Purpose: To assess the reliability of cone beam computed tomography (CBCT) voxel gray value measurements using Hounsfield units (HU) derived from multislice computed tomography (MSCT) as a clinical reference (gold standard). Materials and Methods: Ten partially edentulous human mandibular cadavers

  1. Modeling Message Queueing Services with Reliability Guarantee in Cloud Computing Environment Using Colored Petri Nets

    Directory of Open Access Journals (Sweden)

    Jing Li

    2015-01-01

    Full Text Available Motivated by the need for loosely coupled and asynchronous dissemination of information, message queues are widely used in large-scale application areas. With the advent of virtualization technology, cloud-based message queueing services (CMQSs) with distributed computing and storage are widely adopted to improve availability, scalability, and reliability; however, a critical issue is their performance and quality of service (QoS). While numerous approaches evaluating system performance are available, there is no modeling approach for estimating and analyzing the performance of CMQSs. In this paper, we employ both analytical and simulation modeling to address the performance of CMQSs with a reliability guarantee. We present a visibility-based modeling approach (VMA) for the simulation model using colored Petri nets (CPN). Our model incorporates the important features of message queueing services in the cloud such as replication, message consistency, resource virtualization, and especially the mechanism named visibility timeout, which is adopted in the services to guarantee system reliability. Finally, we evaluate our model through different experiments under varied scenarios to obtain important performance metrics such as total message delivery time, waiting number, and component utilization. Our results reveal considerable insights into resource scheduling and system configuration for service providers to estimate and gain performance optimization.

  2. Reliability of Lyapunov characteristic exponents computed by the two-particle method

    Science.gov (United States)

    Mei, Lijie; Huang, Li

    2018-03-01

    For highly complex problems, such as the post-Newtonian formulation of compact binaries, the two-particle method may be a better, or even the only, choice to compute the Lyapunov characteristic exponent (LCE). This method avoids the complex calculations of variational equations compared with the variational method. However, the two-particle method sometimes provides spurious estimates of LCEs. In this paper, we first analyze the equivalence in the definition of LCE between the variational and two-particle methods for Hamiltonian systems. Then, we develop a criterion to determine the reliability of LCEs computed by the two-particle method by considering the magnitude of the initial tangent (or separation) vector ξ0 (or δ0), the renormalization time interval τ, the machine precision ε, and the global truncation error ɛT. The reliable Lyapunov characteristic indicators estimated by the two-particle method form a V-shaped region, which is restricted by δ0, ε, and ɛT. Finally, the numerical experiments with the Hénon-Heiles system, the spinning compact binaries, and the post-Newtonian circular restricted three-body problem strongly support the theoretical results.
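
    The following sketch illustrates the two-particle method itself for the Hénon-Heiles system mentioned in the abstract: a reference and a shadow orbit are integrated, their separation is renormalized to δ0 every interval τ, and the averaged logarithmic growth rate estimates the largest LCE. The initial condition, δ0, τ and the integration tolerances are illustrative assumptions; choosing them badly is exactly what produces the spurious estimates the paper analyzes.

        # Hedged sketch of the two-particle method for the largest Lyapunov exponent of
        # the Henon-Heiles system: integrate a reference and a shadow orbit, renormalize
        # their separation every tau, and average the log growth rates.
        import numpy as np
        from scipy.integrate import solve_ivp

        def henon_heiles(t, w):
            x, y, px, py = w
            return [px, py, -x - 2.0 * x * y, -y - x * x + y * y]

        def lce_two_particle(w0, d0=1e-8, tau=1.0, n_renorm=1000):
            w_ref = np.asarray(w0, dtype=float)
            w_shd = w_ref.copy()
            w_shd[0] += d0                        # initial separation along x
            log_sum = 0.0
            for _ in range(n_renorm):
                w_ref = solve_ivp(henon_heiles, (0, tau), w_ref, rtol=1e-10, atol=1e-12).y[:, -1]
                w_shd = solve_ivp(henon_heiles, (0, tau), w_shd, rtol=1e-10, atol=1e-12).y[:, -1]
                d = np.linalg.norm(w_shd - w_ref)
                log_sum += np.log(d / d0)
                w_shd = w_ref + (w_shd - w_ref) * (d0 / d)   # renormalize separation to d0
            return log_sum / (n_renorm * tau)

        # Illustrative initial condition at energy close to 1/8
        print(lce_two_particle([0.0, 0.1, 0.49, 0.0]))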

  3. To the problem of reliability standardization in computer-aided manufacturing at NPP units

    International Nuclear Information System (INIS)

    Yastrebenetskij, M.A.; Shvyryaev, Yu.V.; Spektor, L.I.; Nikonenko, I.V.

    1989-01-01

    The problems of reliability standardization in computer-aided manufacturing of NPP units are analyzed, considering the following approaches: computer-aided manufacturing of NPP units as a part of an automated technological complex, and computer-aided manufacturing of NPP units as a multi-functional system. The selection of the set of reliability indices for computer-aided manufacturing of NPP units is substantiated for each of the approaches considered

  4. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    Science.gov (United States)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, helps students learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges. The main challenges are the dense curriculum, which makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students of the physics education department. We concluded that numerical computation could be integrated into the undergraduate physics education curriculum using spreadsheet Excel combined with another course. The results of this research complement earlier work on how to integrate numerical computation into physics learning using spreadsheet Excel.

  5. A computational Bayesian approach to dependency assessment in system reliability

    International Nuclear Information System (INIS)

    Yontay, Petek; Pan, Rong

    2016-01-01

    Due to the increasing complexity of engineered products, it is of great importance to develop a tool to assess reliability dependencies among components and systems under the uncertainty of system reliability structure. In this paper, a Bayesian network approach is proposed for evaluating the conditional probability of failure within a complex system, using a multilevel system configuration. Coupled with Bayesian inference, the posterior distributions of these conditional probabilities can be estimated by combining failure information and expert opinions at both system and component levels. Three data scenarios are considered in this study, and they demonstrate that, with the quantification of the stochastic relationship of reliability within a system, the dependency structure in system reliability can be gradually revealed by the data collected at different system levels. - Highlights: • A Bayesian network representation of system reliability is presented. • Bayesian inference methods for assessing dependencies in system reliability are developed. • Complete and incomplete data scenarios are discussed. • The proposed approach is able to integrate reliability information from multiple sources at multiple levels of the system.
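
    A minimal sketch of the conjugate building block behind such models: an expert-informed Beta prior on a component failure probability, updated with observed test data. The full paper couples many such conditional probabilities in a multilevel Bayesian network; the prior and data below are invented for illustration.

        # Single-node illustration of Bayesian reliability updating: Beta prior from
        # expert opinion, posterior after observing component-level failure data.
        from scipy import stats

        # Expert opinion: failure probability "around 5%", encoded as Beta(2, 38)
        prior_a, prior_b = 2.0, 38.0

        # Observed component-level data: 3 failures in 60 demands (illustrative)
        failures, trials = 3, 60

        post_a = prior_a + failures
        post_b = prior_b + (trials - failures)
        posterior = stats.beta(post_a, post_b)

        print("posterior mean failure probability:", posterior.mean())
        print("95% credible interval:", posterior.interval(0.95))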

  6. Comparison of reliability of lateral cephalogram and computed ...

    African Journals Online (AJOL)

    2014-05-07

    May 7, 2014 ... measurements acquired from both the modalities are reliable and reproducible, but .... paint on all the slice of the image stack in the axial plane of ..... between body mass index, age and upper airway measurements in snorers.

  7. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1988-01-01

    This paper gives a collective summary of the studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: the CAFTS-SALP software package, FRANTIC, FTAP, the computer code package RALLY, and the BOUNDS codes. Two reference study cases were executed with each code. The logic/probabilistic results obtained, as well as the computation times, are compared

  8. Numerical and analytical solutions for problems relevant for quantum computers

    International Nuclear Information System (INIS)

    Spoerl, Andreas

    2008-01-01

    Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation. The resulting solutions are relaxation and time optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study was made to investigate whether there exists a sufficient set of criteria to identify systems with dynamics which are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)

  9. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since the states in which failures occur are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Due to the drawbacks shown by the Markovian model for steady-state reliability computations and by the neural network for the initial training pattern, an integration called Markov-neural is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implications, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implications are shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
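
    For orientation, the sketch below shows the simplest Markovian ingredient of such reliability computations: a single repairable unit with constant failure and repair rates modelled as a two-state continuous-time Markov chain, whose transient availability comes from the matrix exponential of the generator. The rates are illustrative, and the neural-network half of the paper's hybrid method is not represented here.

        # Two-state CTMC for a repairable unit: state 0 = up, state 1 = down.
        # Steady-state availability is mu / (lam + mu); transient values follow
        # from p(t) = p0 * expm(Q t).
        import numpy as np
        from scipy.linalg import expm

        lam, mu = 1e-3, 1e-1          # failure rate, repair rate (per hour, illustrative)
        Q = np.array([[-lam,  lam],
                      [  mu,  -mu]])

        p0 = np.array([1.0, 0.0])     # start in the "up" state
        for t in (10.0, 100.0, 1000.0):
            pt = p0 @ expm(Q * t)     # transient state probabilities at time t
            print(f"t={t:7.1f} h  availability={pt[0]:.6f}")

        print("steady state:", mu / (lam + mu))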

  10. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    Science.gov (United States)

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
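
    A minimal numerical sketch of the interaction-entropy estimator in the form commonly quoted for this method, -TΔS = kT ln⟨exp(βΔE_int)⟩, where ΔE_int is the fluctuation of the protein-ligand interaction energy about its average over the MD trajectory. The synthetic energy series below stands in for real MD output; production use would read per-frame interaction energies from the simulation.

        # Hedged sketch of the interaction-entropy estimator:
        #   -T*dS = kT * ln < exp(beta * dE) >,  dE = E_int - <E_int>
        # The synthetic Gaussian energy series is only a stand-in for MD data.
        import numpy as np

        kB = 0.0019872041          # kcal/(mol*K)
        T = 300.0                  # K
        beta = 1.0 / (kB * T)

        rng = np.random.default_rng(0)
        E_int = -45.0 + 2.5 * rng.standard_normal(50000)   # fake interaction energies (kcal/mol)

        dE = E_int - E_int.mean()
        minus_TdS = kB * T * np.log(np.mean(np.exp(beta * dE)))
        print(f"-T*dS ≈ {minus_TdS:.2f} kcal/mol")          # entropic penalty to binding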

  11. Computer prediction of subsurface radionuclide transport: an adaptive numerical method

    International Nuclear Information System (INIS)

    Neuman, S.P.

    1983-01-01

    Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady-state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1.
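
    A much reduced 1-D sketch of the Eulerian-Lagrangian splitting described above: advection is handled by single-step reverse particle tracking (interpolating the concentration at the foot of each characteristic), and dispersion by an implicit finite-difference step on the fixed grid, which tolerates Courant numbers above 1. The grid, coefficients and sharp-front initial condition are assumptions for illustration, not values from the paper, and the adaptive forward particle tracking of steep fronts is omitted.

        # 1-D advection-dispersion by operator splitting: reverse particle tracking for
        # advection, implicit finite differences for dispersion (Courant number = 2 here).
        import numpy as np

        nx, L = 200, 1.0
        dx = L / (nx - 1)
        x = np.linspace(0.0, L, nx)
        v, D = 1.0, 1e-4                       # velocity, dispersion coefficient
        dt = 2.0 * dx / v                      # Courant number 2
        c = np.where(x < 0.1, 1.0, 0.0)        # sharp front as initial condition

        # implicit dispersion matrix (I - dt*D*Laplacian); end rows act as fixed values
        A = np.eye(nx)
        r = D * dt / dx**2
        for i in range(1, nx - 1):
            A[i, i - 1] -= r; A[i, i] += 2 * r; A[i, i + 1] -= r

        for _ in range(50):
            # 1) advection: value at x comes from the foot of the characteristic x - v*dt
            c_adv = np.interp(x - v * dt, x, c, left=1.0, right=0.0)
            # 2) dispersion: implicit solve on the fixed grid
            c = np.linalg.solve(A, c_adv)

        print("front position ≈", x[np.argmin(np.abs(c - 0.5))])   # ~0.1 + v * 50 * dt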

  12. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  13. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  14. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks.

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-06

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  15. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author)

  16. Use of computer codes for system reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sabek, M.; Gaafar, M. (Nuclear Regulatory and Safety Centre, Atomic Energy Authority, Cairo (Egypt)); Poucet, A. (Commission of the European Communities, Ispra (Italy). Joint Research Centre)

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author).

  17. Design for reliability information and computer-based systems

    CERN Document Server

    Bauer, Eric

    2010-01-01

    "System reliability, availability and robustness are often not well understood by system architects, engineers and developers. They often don't understand what drives customer's availability expectations, how to frame verifiable availability/robustness requirements, how to manage and budget availability/robustness, how to methodically architect and design systems that meet robustness requirements, and so on. The book takes a very pragmatic approach of framing reliability and robustness as a functional aspect of a system so that architects, designers, developers and testers can address it as a concrete, functional attribute of a system, rather than an abstract, non-functional notion"--Provided by publisher.

  18. Methods to compute reliabilities for genomic predictions of feed intake

    Science.gov (United States)

    For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...

  19. Direct unavailability computation of a maintained highly reliable system

    Czech Academy of Sciences Publication Activity Database

    Briš, R.; Byczanski, Petr

    2010-01-01

    Roč. 224, č. 3 (2010), s. 159-170 ISSN 1748-0078 Grant - others:GA Mšk(CZ) MSM6198910007 Institutional research plan: CEZ:AV0Z30860518 Keywords: high reliability * availability * directed acyclic graph Subject RIV: BA - General Mathematics http://journals.pepublishing.com/content/rtp3178l17923m46/

  20. Models of Information Security Highly Reliable Computing Systems

    Directory of Open Access Journals (Sweden)

    Vsevolod Ozirisovich Chukanov

    2016-03-01

    Full Text Available Methods of combined redundancy are considered. Reliability models of systems that take into account the restoration and preventive-maintenance parameters of the system's blocks are described. Expressions for the average number of preventive-maintenance actions and for the availability coefficient of the system's blocks are given.

  1. On the potential of computational methods and numerical simulation in ice mechanics

    International Nuclear Information System (INIS)

    Bergan, Paal G; Cammaert, Gus; Skeie, Geir; Tharigopula, Venkatapathi

    2010-01-01

    This paper deals with the challenge of developing better methods and tools for analysing interaction between sea ice and structures and, in particular, for calculating ice loads on these structures. Ice loads have traditionally been estimated using empirical data and 'engineering judgment'. However, it is believed that computational mechanics and advanced computer simulations of ice-structure interaction can play an important role in developing safer and more efficient structures, especially for irregular structural configurations. The paper explains the complexity of ice as a material in computational mechanics terms. Some key words here are large displacements and deformations, multi-body contact mechanics, instabilities, multi-phase materials, inelasticity, time dependency and creep, thermal effects, fracture and crushing, and multi-scale effects. The paper points towards the use of advanced methods like ALE formulations, mesh-less methods, particle methods, XFEM, and multi-domain formulations in order to deal with these challenges. Some examples involving numerical simulation of interaction and loads between level sea ice and offshore structures are presented. It is concluded that computational mechanics may prove to be a very useful tool for analysing structures in ice; however, much research is still needed to achieve satisfactory reliability and versatility of these methods.

  2. Reliability of real-time computing with radiation data feedback at accidental release

    International Nuclear Information System (INIS)

    Deme, S.; Feher, I.; Lang, E.

    1990-01-01

    At the first workshop in 1985 we reported on the real-time dose computing method used at the Paks Nuclear Power Plant and on the telemetric system developed for the normalization of the computed data. At present, the computing method normalized to the telemetric data represents the primary information for deciding on any necessary countermeasures in case of a nuclear reactor accident. In this connection we analyzed the reliability of the results obtained in this manner. The points of the analysis were: how the results are influenced by the choice of certain parameters that cannot be determined by direct methods, and how improperly chosen diffusion parameters would distort the determination of environmental radiation parameters normalized on the basis of the measurements (131I activity concentration, gamma dose rate) at points lying at a given distance from the measuring stations. A further source of errors may be that, when determining the level of gamma radiation, the radionuclide doses in the cloud and on the ground surface are measured together by the environmental monitoring stations, whereas these doses appear separately in the computations. At the Paks NPP it is the time integral of the airborne activity concentration of vapour-form 131I which is determined. This quantity includes neither the other physical and chemical forms of 131I nor the other isotopes of radioiodine. We gave numerical examples for the uncertainties due to the above factors. As a result, we arrived at the conclusion that, when accident-related measures have to be decided on the basis of the computing method, the dose uncertainties may reach one order of magnitude for points lying far from the monitoring stations. Different measures are discussed to make the uncertainties significantly lower

  3. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
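
    To make the flavour of such schemes concrete, the sketch below solves only the hindered-settling part of a batch version of the model, dC/dt + d f(C)/dz = 0 with a Vesilind flux f(C) = v0 C exp(-rv C), using a local Lax-Friedrichs finite-volume discretization. The feed source, bulk flows, compression and dispersion terms of the full consistent SST model are deliberately omitted, and all parameter values are illustrative.

        # Reduced batch-settling sketch: explicit finite-volume scheme for a nonlinear
        # conservation law with a Vesilind hindered-settling flux.  Cell 0 is the top
        # of the tank, cell nz-1 the bottom; the flux is positive downward.
        import numpy as np

        v0, rv = 4.0, 0.5                     # m/h, m3/kg (illustrative)
        f = lambda C: v0 * C * np.exp(-rv * C)

        nz, H = 100, 4.0                      # cells, tank depth (m)
        dz = H / nz
        C = np.full(nz, 3.0)                  # uniform initial concentration (kg/m3)
        dt = 0.4 * dz / v0                    # CFL-limited explicit time step

        def lax_friedrichs_flux(Cl, Cr, alpha):
            return 0.5 * (f(Cl) + f(Cr)) - 0.5 * alpha * (Cr - Cl)

        for _ in range(2000):                 # roughly 8 hours of batch settling
            alpha = v0                        # bound on |f'(C)|
            F = lax_friedrichs_flux(C[:-1], C[1:], alpha)     # interior interfaces
            F = np.concatenate(([0.0], F, [0.0]))             # no flux at top/bottom
            C = C - dt / dz * (F[1:] - F[:-1])

        print("concentration near the bottom:", C[-1])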

  4. Fault-tolerant search algorithms reliable computation with unreliable information

    CERN Document Server

    Cicalese, Ferdinando

    2013-01-01

    Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr

  5. Reliability in Warehouse-Scale Computing: Why Low Latency Matters

    DEFF Research Database (Denmark)

    Nannarelli, Alberto

    2015-01-01

    Warehouse sized buildings are nowadays hosting several types of large computing systems: from supercomputers to large clusters of servers to provide the infrastructure to the cloud. Although the main target, especially for high-performance computing, is still to achieve high throughput, the limiting factor of these warehouse-scale data centers is the power dissipation. Power is dissipated not only in the computation itself, but also in heat removal (fans, air conditioning, etc.) to keep the temperature of the devices within the operating ranges. The need to keep the temperature low within......

  6. Towards early software reliability prediction for computer forensic tools (case study).

    Science.gov (United States)

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.

  7. EVOLVE : a Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation II

    CERN Document Server

    Coello, Carlos; Tantar, Alexandru-Adrian; Tantar, Emilia; Bouvry, Pascal; Moral, Pierre; Legrand, Pierrick; EVOLVE 2012

    2013-01-01

    This book comprises a selection of papers from EVOLVE 2012, held in Mexico City, Mexico. The aim of EVOLVE is to build a bridge between probability, set oriented numerics and evolutionary computing, so as to identify new common and challenging research aspects. The conference is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. EVOLVE is intended to unify theory-inspired methods and cutting-edge techniques ensuring performance guarantee factors. By gathering researchers with different backgrounds, a unified view and vocabulary can emerge in which the theoretical advancements may echo in different domains. Summarizing, EVOLVE focuses on challenging aspects arising at the passage from theory to new paradigms and aims to provide a unified view while raising questions related to reliability, performance guarantees and modeling. The papers of EVOLVE 2012 make a contribution to this goal.

  8. Numerical computation of fluid flow in different nonferrous metallurgical reactors

    International Nuclear Information System (INIS)

    Lackner, A.

    1996-10-01

    Heat, mass and fluid flow phenomena in metallurgical reactor systems such as smelting cyclones or electrolytic cells are complex and intricately linked through the governing equations of fluid flow, chemical reaction kinetics and chemical thermodynamics. The challenges of representing flow phenomena in such reactors, as well as of transferring these concepts to non-specialist modelers (e.g. plant operators and management personnel), can be met through scientific flow visualization techniques. In the first example, the fluid flow of the gas phase and of concentrate particles in a smelting cyclone for copper production is calculated three-dimensionally. The effects of design parameters (length and diameter of the reactor, concentrate feeding tangentially or from the top, ..) and of operating conditions are investigated. Single particle traces show how to increase particle retention time before the particles reach the liquid film flowing down the cyclone wall. Cyclone separators are widely used in the metallurgical and chemical industry for the collection of large quantities of dust. Most of the empirical models applied today for their design are not valid in the high-temperature region. Therefore the collection efficiency of dust particles is predicted numerically. The particle behavior close to the wall is considered by applying a particle restitution model, which calculates individual particle restitution coefficients as functions of impact velocity and impact angle. The effects of design parameters and operating conditions are studied. Moreover, the fluid flow inside a copper refining electrolysis cell is modeled. The simulation is based on density variations in the boundary layer at the electrode surface. Density and thickness of the boundary layer are compared to measurements in a parametric study. The actual inhibitor concentration in the cell is calculated, too. Moreover, a two-phase flow approach is developed to simulate the behavior of

  9. Effective computing algorithm for maintenance optimization of highly reliable systems

    Czech Academy of Sciences Publication Activity Database

    Briš, R.; Byczanski, Petr

    2013-01-01

    Roč. 109, č. 1 (2013), s. 77-85 ISSN 0951-8320 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : exact computing * maintenance * optimization * unavailability Subject RIV: BA - General Mathematics Impact factor: 2.048, year: 2013 http://www.sciencedirect.com/science/article/pii/S0951832012001639

  10. Intraobserver and intermethod reliability for using two different computer programs in preoperative lower limb alignment analysis

    Directory of Open Access Journals (Sweden)

    Mohamed Kenawey

    2016-12-01

    Conclusion: Computer-assisted lower limb alignment analysis is reliable whether using a graphics editing program or specialized planning software. However, slightly higher variability can be expected for angles away from the knee joint.

  11. Distributed Information and Control system reliability enhancement by fog-computing concept application

    Science.gov (United States)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-03-01

    The paper focuses on the reliability of information and control systems. The authors propose a new integrated approach to enhancing information and control system reliability by applying elements of the fog-computing concept. The proposed approach consists of a set of optimization problems to be solved. These problems are: estimation of the computational load that can be shifted to the edge of the network and to the fog layer, distribution of computations among the data processing elements, and distribution of computations among the sensors. The problems, as well as selected simulation results and a discussion, are formulated and presented within this paper.

  12. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), its theoretical definitions and its computation methods. [fr]

  13. Reliability of a computer and Internet survey (Computer User Profile) used by adults with and without traumatic brain injury (TBI).

    Science.gov (United States)

    Kilov, Andrea M; Togher, Leanne; Power, Emma

    2015-01-01

    To determine test-retest reliability of the 'Computer User Profile' (CUP) in people with and without TBI. The CUP was administered on two occasions to people with and without TBI. The CUP investigated the nature and frequency of participants' computer and Internet use. Intra-class correlation coefficients and kappa coefficients were calculated to measure the reliability of individual CUP items. Descriptive statistics were used to summarize the content of responses. Sixteen adults with TBI and 40 adults without TBI were included in the study. All participants were reliable in reporting demographic information, frequency of social communication and leisure activities and computer/Internet habits and usage. Adults with TBI were reliable in 77% of their responses to survey items. Adults without TBI were reliable in 88% of their responses to survey items. The CUP was practical and valuable in capturing information about social, leisure, communication and computer/Internet habits of people with and without TBI. Adults without TBI scored more items with satisfactory reliability overall in their surveys. Future studies may include larger samples and could also include an exploration of how people with/without TBI use other digital communication technologies. This may provide further information on determining technology readiness for people with TBI in therapy programmes.
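
    For readers unfamiliar with the agreement statistics used here, the sketch below computes Cohen's kappa for a single categorical survey item administered twice to the same respondents; the responses are invented for illustration and are not CUP data.

        # Cohen's kappa for test-retest agreement on one categorical item.
        import numpy as np

        def cohens_kappa(ratings1, ratings2):
            cats = sorted(set(ratings1) | set(ratings2))
            idx = {c: i for i, c in enumerate(cats)}
            n = len(ratings1)
            table = np.zeros((len(cats), len(cats)))
            for a, b in zip(ratings1, ratings2):
                table[idx[a], idx[b]] += 1
            p_obs = np.trace(table) / n                      # observed agreement
            p_exp = (table.sum(1) @ table.sum(0)) / n**2     # agreement expected by chance
            return (p_obs - p_exp) / (1.0 - p_exp)

        # e.g. "How often do you use email?" at time 1 and time 2 (categories 0-3)
        t1 = [3, 2, 3, 1, 0, 3, 2, 2, 1, 3]
        t2 = [3, 2, 3, 1, 1, 3, 2, 1, 1, 3]
        print("Cohen's kappa:", round(cohens_kappa(t1, t2), 3))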

  14. Structural reliability assessment capability in NESSUS

    Science.gov (United States)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessment capability is demonstrated. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
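
    A hedged illustration of the kind of quantity such codes estimate: the failure probability P[g(X) < 0] for a simple limit state g = R - S with random resistance and load, here by crude Monte Carlo. NESSUS itself couples structural models with more efficient probabilistic algorithms; the distributions below are invented for illustration.

        # Crude Monte Carlo estimate of a failure probability and reliability index
        # for the limit state g = R - S.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        n = 1_000_000
        R = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)   # resistance (MPa)
        S = rng.normal(loc=200.0, scale=30.0, size=n)               # load effect (MPa)

        pf = np.mean(R - S < 0.0)                                   # P[g < 0]
        print(f"failure probability ≈ {pf:.2e}")
        print(f"reliability index beta ≈ {-norm.ppf(pf):.2f}")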

  15. Numerical computation of space shuttle orbiter flow field

    Science.gov (United States)

    Tannehill, John C.

    1988-01-01

    A new parabolized Navier-Stokes (PNS) code has been developed to compute the hypersonic, viscous chemically reacting flow fields around 3-D bodies. The flow medium is assumed to be a multicomponent mixture of thermally perfect but calorically imperfect gases. The new PNS code solves the gas dynamic and species conservation equations in a coupled manner using a noniterative, implicit, approximately factored, finite difference algorithm. The space-marching method is made well-posed by special treatment of the streamwise pressure gradient term. The code has been used to compute hypersonic laminar flow of chemically reacting air over cones at angle of attack. The results of the computations are compared with the results of reacting boundary-layer computations and show excellent agreement.

  16. Benchmark Numerical Toolkits for High Performance Computing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  17. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety...... and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic...... approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very...

  18. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computing the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, computation time and required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended.

  19. Numerical Computational Technique for Scattering from Underwater Objects

    OpenAIRE

    T. Ratna Mani; Raj Kumar; Odamapally Vijay Kumar

    2013-01-01

    This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scattering has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric s...

  20. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    Science.gov (United States)

    Islam, Muhammad Faysal

    2013-01-01

    Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…

  1. Pulse cleaning flow models and numerical computation of candle ceramic filters.

    Science.gov (United States)

    Tian, Gui-shan; Ma, Zhen-ji; Zhang, Xin-yi; Xu, Ting-xiang

    2002-04-01

    Analytical and numerical models are developed for the reverse pulse cleaning system of candle ceramic filters. Comparison with experimental and one-dimensional computational results shows that a standard turbulence model is suitable for the design computation of the reverse pulse cleaning system. The computed results can be used to guide the design of the reverse pulse cleaning system, in particular the choice of an optimum Venturi geometry. From the computed results, general conclusions and design methods are obtained.

  2. On Numerical Stability in Large Scale Linear Algebraic Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Liesen, J.

    2005-01-01

    Roč. 85, č. 5 (2005), s. 307-325 ISSN 0044-2267 R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : linear algebraic systems * eigenvalue problems * convergence * numerical stability * backward error * accuracy * Lanczos method * conjugate gradient method * GMRES method Subject RIV: BA - General Mathematics Impact factor: 0.351, year: 2005

  3. Fog-computing concept usage as means to enhance information and control system reliability

    Science.gov (United States)

    Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya

    2018-05-01

    This paper focuses on the reliability issue of information and control systems (ICS). The authors propose using elements of the fog-computing concept to enhance the reliability function. The key idea of fog-computing is to shift computations to the fog layer of the network, and thus to decrease the workload of the communication environment and of the data processing components. In an ICS, the workload can likewise be distributed among sensors, actuators and network infrastructure facilities near the sources of data. The authors simulated typical workload distribution situations for the “traditional” ICS architecture and for an architecture that uses elements of the fog-computing concept. The paper contains some models, selected simulation results and conclusions about the prospects of fog-computing as a means to enhance ICS reliability.

  4. Reliability-Centric Analysis of Offloaded Computation in Cooperative Wearable Applications

    Directory of Open Access Journals (Sweden)

    Aleksandr Ometov

    2017-01-01

    Full Text Available Motivated by the unprecedented penetration of mobile communications technology, this work carefully brings into perspective the challenges related to heterogeneous communications and offloaded computation operating in cases of fault-tolerant computation, computing, and caching. We specifically focus on the emerging augmented reality applications that require reliable delegation of the computing and caching functionality to proximate resource-rich devices. The corresponding mathematical model proposed in this work becomes of value to assess system-level reliability in cases where one or more nearby collaborating nodes become temporarily unavailable. Our produced analytical and simulation results corroborate the asymptotic insensitivity of the stationary reliability of the system in question (under the “fast” recovery of its elements) to the type of the “repair” time distribution, thus supporting the fault-tolerant system operation.

  5. Numerical simulation of information recovery in quantum computers

    International Nuclear Information System (INIS)

    Salas, P.J.; Sanz, A.L.

    2002-01-01

    Decoherence is the main problem to be solved before quantum computers can be built. To control decoherence, it is possible to use error correction methods, but these methods are themselves noisy quantum computation processes. In this work, we study the ability of Steane's and Shor's fault-tolerant recovering methods, as well as a modification of Steane's ancilla network, to correct errors in qubits. We test a way to correctly measure the ancilla's fidelity for these methods, and establish the possibility of carrying out effective error correction through a noisy quantum channel, even when using noisy error correction methods

  6. On the theories, techniques, and computer codes used in numerical reactor criticality and burnup calculations

    International Nuclear Information System (INIS)

    El-Osery, I.A.

    1981-01-01

    The purpose of this paper is to discuss the theories, techniques and computer codes that are frequently used in numerical reactor criticality and burnup calculations. It is a part of an integrated nuclear reactor calculation scheme conducted by the Reactors Department, Inshas Nuclear Research Centre. The central part of numerical reactor criticality and burnup calculations is the determination of the neutron flux distribution, which can be obtained in principle as a solution of the Boltzmann transport equation. Numerical methods used for solving transport equations are discussed. Emphasis is placed on numerical techniques based on multigroup diffusion theory. These numerical techniques include nodal, modal, and finite difference ones. The most commonly known computer codes utilizing these techniques are reviewed. Some of the main computer codes related to numerical reactor criticality and burnup calculations that have already been developed at the Reactors Department are also presented
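
    To make the finite-difference diffusion-theory idea concrete, here is a hedged, one-group, one-dimensional slab sketch: the diffusion operator is discretized on a uniform mesh and the effective multiplication factor is found by power (source) iteration. The cross sections and slab width are illustrative values, not taken from any of the codes reviewed in the paper.

```python
# Minimal sketch: one-group, 1-D slab criticality calculation with
# finite differences and power iteration (illustrative cross sections).
import numpy as np

D, sig_a, nu_sig_f = 1.0, 0.07, 0.08   # cm, 1/cm, 1/cm (made-up values)
L, n = 100.0, 200                      # slab width (cm), interior nodes
h = L / (n + 1)

# Loss operator: -D d2/dx2 + Sigma_a, zero flux at both boundaries
main = (2.0 * D / h**2 + sig_a) * np.ones(n)
off = (-D / h**2) * np.ones(n - 1)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

phi = np.ones(n)
k = 1.0
for _ in range(500):
    src = nu_sig_f * phi / k                  # scaled fission source
    phi_new = np.linalg.solve(A, src)         # one "outer" iteration
    k_new = k * phi_new.sum() / phi.sum()     # update the eigenvalue
    if abs(k_new - k) < 1e-8:
        k, phi = k_new, phi_new
        break
    k, phi = k_new, phi_new

print(f"k_eff ≈ {k:.5f}")
```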

  7. Reliability issues related to the usage of Cloud Computing in Critical Infrastructures

    OpenAIRE

    Diez Gonzalez, Oscar Manuel; Silva Vazquez, Andrés

    2011-01-01

    The use of cloud computing is extending to all kinds of systems, including the ones that are part of Critical Infrastructures, and measuring their reliability is becoming more difficult. Computing is becoming the 5th utility, in part thanks to the use of cloud services. Cloud computing is used now by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud co...

  8. Rater reliability and concurrent validity of the Keyboard Personal Computer Style instrument (K-PeCS).

    Science.gov (United States)

    Baker, Nancy A; Cook, James R; Redfern, Mark S

    2009-01-01

    This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.

  9. HuRECA: Human Reliability Evaluator for Computer-based Control Room Actions

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Lee, Seung Jun; Jang, Seung Cheol

    2011-01-01

    As computer-based design features such as computer-based procedures (CBP), soft controls (SCs), and integrated information systems are being adopted in main control rooms (MCR) of nuclear power plants, a human reliability analysis (HRA) method capable of dealing with the effects of these design features on human reliability is needed. From the observations of human factors engineering verification and validation experiments, we have identified some important characteristics of operator behaviors and design-related influencing factors (DIFs) from the perspective of human reliability. Firstly, there are new DIFs that should be considered in developing an HRA method for computer-based control rooms, including especially CBP and SCs. In the case of the computer-based procedure rather than the paper-based procedure, the structural and managerial elements should be considered as important PSFs in addition to the procedural contents. In the case of the soft controllers, the so-called interface management tasks (or secondary tasks) should be reflected in the assessment of human error probability. Secondly, computer-based control rooms can provide more effective error recovery features than conventional control rooms. Major error recovery features for computer-based control rooms include the automatic logic checking function of the computer-based procedure and the information sharing feature of the general computer-based designs

  10. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method, called complex-plane strategy, is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model, in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are slightly distorted. For an accurate simulation of differential rotation, a versatile method, called multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  11. Numerical computation of FCT equilibria by inverse equilibrium method

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki

    1986-11-01

    FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)

  12. An analytical model for computation of reliability of waste management facilities with intermediate storages

    International Nuclear Information System (INIS)

    Kallweit, A.; Schumacher, F.

    1977-01-01

    High reliability is required of waste management facilities within the fuel cycle of nuclear power stations; this can be achieved by providing intermediate storage facilities and reserve capacities. In this report a model based on the theory of Markov processes is described which allows computation of reliability characteristics of waste management facilities containing intermediate storage facilities. The application of the model is demonstrated by an example. (orig.) [de
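
    A minimal sketch of the kind of Markov-process model the record refers to: a three-state continuous-time Markov chain for a plant with an intermediate buffer, whose steady-state probabilities (and hence the downstream availability) are obtained numerically from the generator matrix. The states and rates are invented for illustration and do not come from the report.

```python
# Minimal sketch: steady-state availability of a process with an
# intermediate buffer, modelled as a small continuous-time Markov chain.
# States: 0 = plant up, 1 = plant down but buffer supplies downstream,
#         2 = plant down and buffer exhausted. Rates are illustrative.
import numpy as np

lam = 0.01   # plant failure rate [1/h]
mu  = 0.10   # plant repair rate  [1/h]
nu  = 0.05   # buffer exhaustion rate while the plant is down [1/h]

# Generator matrix Q (rows sum to zero)
Q = np.array([
    [-lam,        lam,  0.0],
    [  mu, -(mu + nu),   nu],
    [  mu,        0.0,  -mu],
])

# Solve pi Q = 0 together with sum(pi) = 1 (least squares on the stacked system)
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", pi.round(4))
print("downstream availability ≈", (pi[0] + pi[1]).round(4))
```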

  13. Improved methods for computing masses from numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kronfeld, A.S.

    1989-11-22

    An important advance in the computation of hadron and glueball masses has been the introduction of non-local operators. This talk summarizes the critical signal-to-noise ratio of glueball correlation functions in the continuum limit, and discusses the case of (q anti-q and qqq) hadrons in the chiral limit. A new strategy for extracting the masses of excited states is outlined and tested. The lessons learned here suggest that gauge-fixed momentum-space operators might be a suitable choice of interpolating operators. 15 refs., 2 tabs.

  14. Improving the reliability of nuclear reprocessing by application of computers and mathematical modelling

    International Nuclear Information System (INIS)

    Gabowitsch, E.; Trauboth, H.

    1982-01-01

    After a brief survey of the present and expected future state of nuclear energy utilization, which should demonstrate the significance of nuclear reprocessing, safety and reliability aspects of nuclear reprocessing plants (NRP) are considered. Then, the principal possibilities of modern computer technology including computer systems architecture and application-oriented software for improving the reliability and availability are outlined. In this context, two information systems being developed at the Nuclear Research Center Karlsruhe (KfK) are briefly described. For design evaluation of certain areas of a large NRP mathematical methods and computer-aided tools developed, used or being designed by KfK are discussed. In conclusion, future research to be pursued in information processing and applied mathematics in support of reliable operation of NRP's is proposed. (Auth.)

  15. Research in progress in applied mathematics, numerical analysis, fluid mechanics, and computer science

    Science.gov (United States)

    1994-01-01

    This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.

  16. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Thomas J. Marlowe

    1998-01-01

    Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants.

  17. Reliability analysis of microcomputer boards and computer based systems important to safety of nuclear plants

    International Nuclear Information System (INIS)

    Shrikhande, S.V.; Patil, V.K.; Ganesh, G.; Biswas, B.; Patil, R.K.

    2010-01-01

    Computer Based Systems (CBS) are employed in Indian nuclear plants for protection, control and monitoring purpose. For forthcoming CBS, Reactor Control Division has designed and developed a new standardized family of microcomputer boards qualified to stringent requirements of nuclear industry. These boards form the basic building blocks of CBS. Reliability analysis of these boards is being carried out using analysis package based on MIL-STD-217Plus methodology. The estimated failure rate values of these standardized microcomputer boards will be useful for reliability assessment of these systems. The paper presents reliability analysis of microcomputer boards and case study of a CBS system built using these boards. (author)

  18. Probability of extreme interference levels computed from reliability approaches: application to transmission lines with uncertain parameters

    International Nuclear Information System (INIS)

    Larbi, M.; Besnier, P.; Pecqueux, B.

    2014-01-01

    This paper deals with the risk analysis of an EMC default using a statistical approach. It is based on reliability methods from probabilistic engineering mechanics. A computation of the probability of failure (i.e. the probability of exceeding a threshold) of an induced current due to crosstalk is established by taking into account uncertainties on input parameters influencing the levels of interference in the context of transmission lines. The study has allowed us to evaluate the probability of failure of the induced current by using reliability methods having a relatively low computational cost compared to Monte Carlo simulation. (authors)

  19. NUMERICAL COMPUTATION AND PREDICTION OF ELECTRICITY CONSUMPTION IN TOBACCO INDUSTRY

    Directory of Open Access Journals (Sweden)

    Mirjana Laković

    2017-12-01

    Full Text Available Electricity is a key energy source in each country and an important condition for economic development. It is necessary to use modern methods and tools to predict energy consumption for different types of systems and weather conditions. In every industrial plant, electricity consumption presents one of the greatest operating costs. Monitoring and forecasting of this parameter provide the opportunity to rationalize the use of electricity and thus significantly reduce the costs. The paper proposes the prediction of energy consumption by a new time-series model. This involves time series models that use a set of previously collected data to predict the future load. The most commonly used linear time series models are the AR (Autoregressive), MA (Moving Average) and ARMA (Autoregressive Moving Average) models. The AR model is used in this paper. Using the AR model, the Monte Carlo simulation method is utilized for predicting and analyzing the change in energy consumption in the considered tobacco industrial plant. One of the main parts of the AR model is a seasonal pattern that takes into account the climatic conditions for a given geographical area. This part of the model was described by a Fourier transform and was used with the aim of avoiding excessive model complexity. As an example, numerical results were obtained for tobacco production in one industrial plant. A probabilistic range of input values is used to determine the future probabilistic level of energy consumption.
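
    As an illustration of the approach described above (an AR model plus Monte Carlo simulation, with a Fourier-type seasonal component), the following sketch fits an AR(1) model to a synthetic monthly consumption series and simulates a probabilistic 12-month forecast. The data, the seasonal term and all parameters are made up; the original paper's model details may differ.

```python
# Minimal sketch: fit an AR(1) model to a monthly consumption series and
# use Monte Carlo simulation to obtain a probabilistic forecast.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historical" monthly electricity consumption with a seasonal part
t = np.arange(120)
seasonal = 10.0 * np.sin(2 * np.pi * t / 12.0)
history = 100.0 + seasonal + rng.normal(0.0, 3.0, size=t.size)

# Remove the seasonal pattern (known here; the paper fits it via Fourier terms)
resid = history - (100.0 + seasonal)

# Least-squares AR(1) fit: resid[k] = phi * resid[k-1] + eps
x, y = resid[:-1], resid[1:]
phi = np.dot(x, y) / np.dot(x, x)
sigma = np.std(y - phi * x)

# Monte Carlo forecast of the next 12 months
n_paths, horizon = 5000, 12
paths = np.zeros((n_paths, horizon))
state = np.full(n_paths, resid[-1])
for h in range(horizon):
    state = phi * state + rng.normal(0.0, sigma, size=n_paths)
    paths[:, h] = state

future_seasonal = 10.0 * np.sin(2 * np.pi * (t[-1] + 1 + np.arange(horizon)) / 12.0)
forecast = 100.0 + future_seasonal + paths
print("month +1: mean %.1f, 5%%-95%% range [%.1f, %.1f]"
      % (forecast[:, 0].mean(),
         np.percentile(forecast[:, 0], 5),
         np.percentile(forecast[:, 0], 95)))
```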

  20. Proceeding of 1998-workshop on MHD computations. Study on numerical methods related to plasma confinement

    International Nuclear Information System (INIS)

    Kako, T.; Watanabe, T.

    1999-04-01

    This is the proceeding of 'Study on Numerical Methods Related to Plasma Confinement' held in National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria with their stability properties are presented. These are also various talks on mathematical as well as numerical analyses related to the computational methods for fluid dynamics and plasma physics. The 14 papers are indexed individually. (J.P.N.)

  1. Proceeding of 1998-workshop on MHD computations. Study on numerical methods related to plasma confinement

    Energy Technology Data Exchange (ETDEWEB)

    Kako, T.; Watanabe, T. [eds.

    1999-04-01

    This is the proceeding of 'Study on Numerical Methods Related to Plasma Confinement' held in National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria with their stability properties are presented. These are also various talks on mathematical as well as numerical analyses related to the computational methods for fluid dynamics and plasma physics. The 14 papers are indexed individually. (J.P.N.)

  2. Minimal features of a computer and its basic software to execute the NEPTUNIX 2 numerical step

    International Nuclear Information System (INIS)

    Roux, Pierre.

    1982-12-01

    NEPTUNIX 2 is a package which carries out the simulation of complex processes described by numerous non linear algebro-differential equations. Its main features are: nonlinear or time-dependent parameters, implicit form, stiff systems, and dynamic change of equations leading to discontinuities in some variables. Thus the mathematical model is built with an equation set F(x,x',t,l) = 0, where t is the independent variable, x' the derivative of x and l an "algebrized" logical variable. The NEPTUNIX 2 package is divided into two successive major steps: a non numerical step and a numerical step. The non numerical step must be executed on a series 370 IBM computer or a compatible computer. This step generates a FORTRAN language model picture fitted for the computer carrying out the numerical step. The numerical step consists in building and running a mathematical model simulator. This execution step of NEPTUNIX 2 has been designed in order to be transportable on many computers. The present manual describes the minimal features of the host computer used for executing the NEPTUNIX 2 numerical step [fr

  3. Precision of lumbar intervertebral measurements: does a computer-assisted technique improve reliability?

    Science.gov (United States)

    Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K

    2011-04-01

    Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or 10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than for the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87% to 97% for QMA versus 60% to 73% for digitized manual
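
    For readers unfamiliar with the reliability statistic used here, the sketch below computes one common Shrout-Fleiss variant, the two-way random-effects single-rater coefficient ICC(2,1), from a small made-up subjects-by-raters matrix; the study itself may have used a different ICC form.

```python
# Minimal sketch: Shrout-Fleiss intraclass correlation, variant ICC(2,1).
# The data matrix is invented purely for illustration.
import numpy as np

# rows = subjects (e.g. measured motion segments), columns = raters
X = np.array([
    [9.0, 10.0, 8.5],
    [6.0,  6.5, 6.0],
    [8.0,  8.5, 9.0],
    [7.0,  7.5, 7.0],
    [5.0,  5.5, 6.0],
])
n, k = X.shape
grand = X.mean()

# two-way ANOVA decomposition
ss_subjects = k * ((X.mean(axis=1) - grand) ** 2).sum()
ss_raters   = n * ((X.mean(axis=0) - grand) ** 2).sum()
ss_total    = ((X - grand) ** 2).sum()
ss_error    = ss_total - ss_subjects - ss_raters

bms = ss_subjects / (n - 1)           # between-subjects mean square
jms = ss_raters / (k - 1)             # between-raters mean square
ems = ss_error / ((n - 1) * (k - 1))  # residual mean square

icc_2_1 = (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```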

  4. Computer vision in the service of safety and reliability in steam generator inspection services

    International Nuclear Information System (INIS)

    Pineiro Fernandez, P.; Garcia Bueno, A.; Cabrera Jordan, E.

    2012-01-01

    Computer vision has matured very quickly in the last ten years, facilitating new developments in various areas of nuclear application and allowing processes and tasks to be automated and simplified efficiently, either in place of or in collaboration with people and equipment. Current computer vision (a more appropriate term than artificial vision) also offers great possibilities for improving the reliability and safety of NPP inspection systems.

  5. Optimal reliability allocation for large software projects through soft computing techniques

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albeanu, Grigore; Popentiu-Vladicescu, Florin

    2012-01-01

    or maximizing the system reliability subject to budget constraints. These kinds of optimization problems were considered both in deterministic and stochastic frameworks in literature. Recently, the intuitionistic-fuzzy optimization approach was considered as a successful soft computing modelling approach.... Firstly, a review on existing soft computing approaches to optimization is given. The main section extends the results considering self-organizing migrating algorithms for solving intuitionistic-fuzzy optimization problems attached to complex fault-tolerant software architectures which proved...

  6. Numerical computing of elastic homogenized coefficients for periodic fibrous tissue

    Directory of Open Access Journals (Sweden)

    Roman S.

    2009-06-01

    Full Text Available The homogenization theory in linear elasticity is applied to a periodic array of cylindrical inclusions in a rectangular pattern extending to infinity in the inclusions' axial direction, such that the deformation of tissue along this last direction is negligible. In the plane of deformation, the homogenization scheme is based on the average strain energy, whereas in the third direction it is based on the average normal stress along this direction. Namely, these average quantities have to be the same on a Repeating Unit Cell (RUC) of the heterogeneous and homogenized media when using a special form of boundary conditions formed by a periodic part and an affine part of the displacement. There exist infinitely many RUCs generating the considered array. The computing procedure is tested with different choices of RUC to verify that the results of the homogenization process are independent of the kind of RUC we employ. Then, the dependence of the homogenized coefficients on the microstructure can be studied. For instance, a special anisotropy and the role of the inclusion volume are investigated. In the second part of this work, mechanical traction tests are simulated. We consider two kinds of loading, applying a density of force or imposing a displacement. We test five samples of periodic array containing one, four, sixteen, sixty-four and one hundred RUCs. The evolution of mean stresses, strains and energy with the number of inclusions is studied. The evolutions depend on the kind of loading, but not their limits, which can be predicted by simulating a traction test of the homogenized medium.

  7. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Marlowe Thomas J.

    1998-01-01

    Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants. We illustrate this approach using threshold graphs, and show that any computation of reliability using Gilbert's formula will be polynomial-time if and only if the number of invariants considered is polynomial; we then show families of graphs with polynomial-time and non-polynomial-time reliability computation, and show that these encompass most previously known results. We then codify our approach to indicate how it can be used for other classes of graphs, and suggest several classes to which the technique can be applied.
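
    To make the complexity discussion concrete, the following sketch computes the all-terminal (edge) reliability of a small graph by brute-force enumeration of edge states. The enumeration is exponential in the number of edges, which is exactly why recursive schemes such as Gilbert's formula, and the invariant-based refinement discussed above, matter; the example graph and the edge reliability value are arbitrary.

```python
# Minimal sketch: brute-force all-terminal reliability of a small graph,
# i.e. the probability that the surviving edges keep all vertices connected
# when each edge independently works with probability p.
from itertools import combinations

def connected(n_vertices, edges):
    """Check connectivity of vertex set {0..n-1} under the given edges (union-find)."""
    parent = list(range(n_vertices))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n_vertices)}) == 1

def all_terminal_reliability(n_vertices, edges, p):
    m = len(edges)
    rel = 0.0
    for k in range(m + 1):
        for surviving in combinations(range(m), k):
            if connected(n_vertices, [edges[i] for i in surviving]):
                rel += p ** k * (1 - p) ** (m - k)
    return rel

# 4-vertex cycle, each edge up with probability 0.9
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(all_terminal_reliability(4, edges, 0.9))   # expected 0.9477
```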

  8. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    NARCIS (Netherlands)

    Rodriguez, A.; Ibanescu, M.; Iannuzzi, D.; Joannopoulos, J. D.; Johnson, S.T.

    2007-01-01

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the

  9. A virtual component method in numerical computation of cascades for isotope separation

    International Nuclear Information System (INIS)

    Zeng Shi; Cheng Lu

    2014-01-01

    The analysis, optimization, design and operation of cascades for isotope separation involve computations of cascades. In the analytical analysis of cascades, the use of virtual components is a very useful method. For complicated cases of cascades, numerical analysis has to be employed. However, bound by the conventional idea that the concentration of a virtual component should be vanishingly small, virtual components have not yet been applied to numerical computations. Here a method for introducing virtual components into numerical computations is elucidated, and its application to a few types of cascades is explained and tested by means of numerical experiments. The results show that the concentration of a virtual component is not restrained at all by the 'vanishingly small' idea. For the same requirements on cascades, the cascades obtained do not depend on the concentrations of the virtual components. (authors)

  10. Reliable multicast for the Grid: a case study in experimental computer science.

    Science.gov (United States)

    Nekovee, Maziar; Barcellos, Marinho P; Daw, Michael

    2005-08-15

    In its simplest form, multicast communication is the process of sending data packets from a source to multiple destinations in the same logical multicast group. IP multicast allows the efficient transport of data through wide-area networks, and its potentially great value for the Grid has been highlighted recently by a number of research groups. In this paper, we focus on the use of IP multicast in Grid applications, which require high-throughput reliable multicast. These include Grid-enabled computational steering and collaborative visualization applications, and wide-area distributed computing. We describe the results of our extensive evaluation studies of state-of-the-art reliable-multicast protocols, which were performed on the UK's high-speed academic networks. Based on these studies, we examine the ability of current reliable multicast technology to meet the Grid's requirements and discuss future directions.

  11. SALP (Sensitivity Analysis by List Processing), a computer assisted technique for binary systems reliability analysis

    International Nuclear Information System (INIS)

    Astolfi, M.; Mancini, G.; Volta, G.; Van Den Muyzenberg, C.L.; Contini, S.; Garribba, S.

    1978-01-01

    A computerized technique is described which allows various complex situations encountered in safety and reliability assessment to be modelled by AND, OR, NOT binary trees. Through the use of list processing, numerical and non-numerical types of information are used together. By proper marking of gates and primary events, stand-by systems, common cause failures and multiphase systems can be analyzed. The basic algorithms used in this technique are shown in detail. Application to a stand-by and multiphase system is then illustrated

  12. Reliability Analysis-Based Numerical Calculation of Metal Structure of Bridge Crane

    Directory of Open Access Journals (Sweden)

    Wenjun Meng

    2013-01-01

    Full Text Available The study introduces a finite element model of the DQ75t-28m bridge crane metal structure and performs a static finite element analysis to obtain the stress response at the dangerous point of the metal structure under the most extreme condition. Simulated samples of the random variables and of the stress at the dangerous point were obtained through an orthogonal design. Then, the nonlinear mapping capability of a trained BP neural network was used to obtain an explicit expression of the stress as a function of the random variables. Combined with random perturbation theory and the first-order second-moment (FOSM) method, the study analyzed the reliability of the metal structure and its sensitivity. In conclusion, we established a novel method for accurate quantitative analysis and design of bridge crane metal structures.
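
    A minimal sketch of the first-order second-moment (FOSM) step on its own, independent of the finite element model and neural-network surrogate used in the paper: the limit-state function, its gradient at the mean point and the resulting reliability index are computed for a hypothetical stress function with invented statistics.

```python
# Minimal FOSM sketch: reliability index for the limit state
# g(X) = yield_stress - stress(X), linearized at the mean of X.
# The stress function and all statistics are illustrative stand-ins.
import numpy as np

def stress(x):
    # hypothetical surrogate: stress at the critical point as a
    # function of load P and a thickness-like dimension t
    P, t = x
    return 0.004 * P / t**2

mean = np.array([50_000.0, 2.0])   # mean load [N], mean thickness [cm]
std  = np.array([5_000.0, 0.05])   # standard deviations (independent variables)
yield_stress = 80.0                # [MPa], illustrative

# gradient of g at the mean, by central finite differences
eps = 1e-4 * mean
grad = np.zeros(2)
for i in range(2):
    dx = np.zeros(2)
    dx[i] = eps[i]
    grad[i] = -(stress(mean + dx) - stress(mean - dx)) / (2 * eps[i])

g_mean = yield_stress - stress(mean)
sigma_g = np.sqrt(np.sum((grad * std) ** 2))
beta = g_mean / sigma_g            # FOSM reliability index
print(f"beta ≈ {beta:.2f}")
```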

  13. RSAM: An enhanced architecture for achieving web services reliability in mobile cloud computing

    Directory of Open Access Journals (Sweden)

    Amr S. Abdelfattah

    2018-04-01

    Full Text Available The evolution of the mobile landscape is coupled with the ubiquitous nature of the internet, with its intermittent wireless connectivity, and with web services. Achieving web service reliability results in low communication overhead and retrieval of the appropriate response. The middleware approach (MA) is widely adopted to achieve web service reliability. This paper proposes a Reliable Service Architecture using Middleware (RSAM) that achieves reliable web services consumption. The enhanced architecture focuses on ensuring and tracking request execution under communication limitations and temporary service unavailability. It considers the main measurement factors, including request size, response size, and consumed time. We conducted experiments to compare the enhanced architecture with the traditional one. In these experiments, we covered several cases to prove the achievement of reliability. Results also show that the request size was found to be constant, the response size is identical to that of the traditional architecture, and the increase in the consumed time was less than 5% of the transaction time for the different response sizes. Keywords: Reliable web service, Middleware architecture, Mobile cloud computing

  14. Current and planned numerical development for improving computing performance for long duration and/or low pressure transients

    Energy Technology Data Exchange (ETDEWEB)

    Faydide, B. [Commissariat a l`Energie Atomique, Grenoble (France)

    1997-07-01

    This paper presents the current and planned numerical development for improving computing performance in case of Cathare applications needing real time, like simulator applications. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the general characteristics of the code are presented, dealing with physical models, numerical topics, and validation strategy. Then, the current and planned applications of Cathare in the field of simulators are discussed. Some of these applications were made in the past, using a simplified and fast-running version of Cathare (Cathare-Simu); the status of the numerical improvements obtained with Cathare-Simu is presented. The planned developments concern mainly the Simulator Cathare Release (SCAR) project which deals with the use of the most recent version of Cathare inside simulators. In this frame, the numerical developments are related with the speed up of the calculation process, using parallel processing and improvement of code reliability on a large set of NPP transients.

  15. Current and planned numerical development for improving computing performance for long duration and/or low pressure transients

    International Nuclear Information System (INIS)

    Faydide, B.

    1997-01-01

    This paper presents the current and planned numerical development for improving computing performance in case of Cathare applications needing real time, like simulator applications. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the general characteristics of the code are presented, dealing with physical models, numerical topics, and validation strategy. Then, the current and planned applications of Cathare in the field of simulators are discussed. Some of these applications were made in the past, using a simplified and fast-running version of Cathare (Cathare-Simu); the status of the numerical improvements obtained with Cathare-Simu is presented. The planned developments concern mainly the Simulator Cathare Release (SCAR) project which deals with the use of the most recent version of Cathare inside simulators. In this frame, the numerical developments are related with the speed up of the calculation process, using parallel processing and improvement of code reliability on a large set of NPP transients

  16. Numerical models for computation of pollutant-dispersion in the atmosphere

    International Nuclear Information System (INIS)

    Leder, S.M.; Biesemann-Krueger, A.

    1985-04-01

    The report describes some models which are used to compute the concentration of emitted pollutants in the lower atmosphere. A dispersion model, developed at the University of Hamburg, is considered in more detail and treated with two different numerical methods. The convergence of the methods is investigated and a comparison of numerical results and dispersion experiments carried out at the Nuclear Research Center Karlsruhe is given. (orig.) [de

  17. Development and application of a complex numerical model and software for the computation of dose conversion factors for radon progenies.

    Science.gov (United States)

    Farkas, Árpád; Balásházy, Imre

    2015-04-01

    A more exact determination of the dose conversion factors associated with radon progeny inhalation has become possible due to advancements in epidemiological health risk estimates in recent years. The enhancement of computational power and the development of numerical techniques allow dose conversion factors to be computed with increasing reliability. The objective of this study was to develop an integrated model and software, based on a self-developed airway deposition code, the authors' own bronchial dosimetry model and the computational methods accepted by the International Commission on Radiological Protection (ICRP), to calculate dose conversion coefficients for different exposure conditions. The model was tested by applying it to exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM(-1) for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named the PM-A model), and 9 and 17 mSv WLM(-1) when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called the PM-B model). User friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Computing the demagnetizing tensor for finite difference micromagnetic simulations via numerical integration

    International Nuclear Information System (INIS)

    Chernyshenko, Dmitri; Fangohr, Hans

    2015-01-01

    In the finite difference method which is commonly used in computational micromagnetics, the demagnetizing field is usually computed as a convolution of the magnetization vector field with the demagnetizing tensor that describes the magnetostatic field of a cuboidal cell with constant magnetization. An analytical expression for the demagnetizing tensor is available, however at distances far from the cuboidal cell, the numerical evaluation of the analytical expression can be very inaccurate. Due to this large-distance inaccuracy numerical packages such as OOMMF compute the demagnetizing tensor using the explicit formula at distances close to the originating cell, but at distances far from the originating cell a formula based on an asymptotic expansion has to be used. In this work, we describe a method to calculate the demagnetizing field by numerical evaluation of the multidimensional integral in the demagnetizing tensor terms using a sparse grid integration scheme. This method improves the accuracy of computation at intermediate distances from the origin. We compute and report the accuracy of (i) the numerical evaluation of the exact tensor expression which is best for short distances, (ii) the asymptotic expansion best suited for large distances, and (iii) the new method based on numerical integration, which is superior to methods (i) and (ii) for intermediate distances. For all three methods, we show the measurements of accuracy and execution time as a function of distance, for calculations using single precision (4-byte) and double precision (8-byte) floating point arithmetic. We make recommendations for the choice of scheme order and integrating coefficients for the numerical integration method (iii). - Highlights: • We study the accuracy of demagnetization in finite difference micromagnetics. • We introduce a new sparse integration method to compute the tensor more accurately. • Newell, sparse integration and asymptotic method are compared for all ranges

  19. Numerical computation of soliton dynamics for NLS equations in a driving potential

    Directory of Open Access Journals (Sweden)

    Marco Caliari

    2010-06-01

    Full Text Available We provide numerical computations for the soliton dynamics of the nonlinear Schrodinger equation with an external potential. After computing the ground state solution r of a related elliptic equation we show that, in the semi-classical regime, the center of mass of the solution with initial datum built upon r is driven by the solution to $\ddot x = -\nabla V(x)$. Finally, we provide examples and analyze the numerical errors in the two dimensional case when V is a harmonic potential.
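
    A small illustration of integrating the driving equation $\ddot x = -\nabla V(x)$ numerically, here with the velocity Verlet scheme and a two-dimensional harmonic potential (the case analyzed in the paper); the step size, initial data and potential scale are chosen only for demonstration.

```python
# Minimal sketch: integrating ddot{x} = -grad V(x) with velocity Verlet
# for the harmonic potential V(x) = 0.5*|x|^2 (illustrative data).
import numpy as np

def grad_V(x):
    return x                      # gradient of 0.5*|x|^2

dt, n_steps = 0.01, 2000
x = np.array([1.0, 0.0])
v = np.array([0.0, 0.5])

a = -grad_V(x)
for _ in range(n_steps):
    x = x + v * dt + 0.5 * a * dt**2
    a_new = -grad_V(x)
    v = v + 0.5 * (a + a_new) * dt
    a = a_new

# for the harmonic case the conserved energy is a quick accuracy check
energy = 0.5 * np.dot(v, v) + 0.5 * np.dot(x, x)
print(f"x = {x}, energy = {energy:.6f} (exact 0.625)")
```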

  20. Proceeding of 1999-workshop on MHD computations 'study on numerical methods related to plasma confinement'

    International Nuclear Information System (INIS)

    Kako, T.; Watanabe, T.

    2000-06-01

    This is the proceeding of 'study on numerical methods related to plasma confinement' held in National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria with their stability properties are presented. There are also various lectures on mathematical as well as numerical analyses related to the computational methods for fluid dynamics and plasma physics. Separate abstracts were presented for 13 of the papers in this report. The remaining 6 were considered outside the subject scope of INIS. (J.P.N.)

  1. Reliability Assessment of Cloud Computing Platform Based on Semiquantitative Information and Evidential Reasoning

    Directory of Open Access Journals (Sweden)

    Hang Wei

    2016-01-01

    Full Text Available A reliability assessment method based on the evidential reasoning (ER) rule and semiquantitative information is proposed in this paper, where a new reliability assessment architecture including four aspects with both quantitative data and qualitative knowledge is established. The assessment architecture describes the complex, dynamic cloud computing environment more objectively than traditional methods. In addition, the ER rule, which performs well on multiple attribute decision making problems, is employed to integrate the different types of attributes in the assessment architecture, which yields more accurate assessment results. The assessment results of the case study on an actual cloud computing platform verify the effectiveness and the advantages of the proposed method.

  2. Virtualization of Legacy Instrumentation Control Computers for Improved Reliability, Operational Life, and Management.

    Science.gov (United States)

    Katz, Jonathan E

    2017-01-01

    Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstallation is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.

  3. Vectorization on the star computer of several numerical methods for a fluid flow problem

    Science.gov (United States)

    Lambiotte, J. J., Jr.; Howser, L. M.

    1974-01-01

    Some numerical methods are reexamined in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.

  4. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers possibility to make simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in analysis of particles spectrum. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.

  5. Review of the reliability of Bruce 'B' RRS dual computer system

    International Nuclear Information System (INIS)

    Arsenault, J.E.; Manship, R.A.; Levan, D.G.

    1995-07-01

    The review presents an analysis of the Bruce 'B' Reactor Regulating System (RRS) Digital Control Computer (DCC) system, based on system documentation, significant event reports (SERs), question sets, and a site visit. The intent is to evaluate the reliability of the RRS DCC and to identify the possible scenarios that could lead to a serious process failure. The evaluation is based on three relatively independent analyses, which are integrated and presented in the form of Conclusions and Recommendations

  6. An approach to first principles electronic structure calculation by symbolic-numeric computation

    Directory of Open Access Journals (Sweden)

    Akihito Kikuchi

    2013-04-01

    Full Text Available There is a wide variety of electronic structure calculation cooperating with symbolic computation. The main purpose of the latter is to play an auxiliary role (but not without importance) to the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power. Thus one resorts to the intensive use of computers, namely, symbolic computation [10-16]. Examples of this can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many body interactions, is one such problem. In the former, when one uses a special atomic basis for a specific purpose, it may sometimes be very difficult to express the integrals as combinations of already known analytic functions. In the latter, one must rearrange a number of creation and annihilation operators in a suitable order and calculate the analytical expectation value. It is usual that a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression into a tractable and computable form. This is the main motive for the introduction of symbolic computation as a forerunner of the numerical one, and their collaboration has won considerable successes. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary role for the numerical computation. The present work can be applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.
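
    The symbolic-then-numeric workflow sketched in the abstract can be illustrated in a few lines: derive a closed form symbolically, then compile it into a fast numerical function. The Gaussian moment below is only a stand-in for the molecular integrals discussed in the paper.

```python
# Minimal sketch of a symbolic-to-numeric workflow with SymPy:
# obtain a closed form symbolically, then lambdify it for numerics.
import numpy as np
import sympy as sp

x, a = sp.symbols('x a', positive=True)

# symbolic step: closed form of a Gaussian moment integral
expr = sp.integrate(x**2 * sp.exp(-a * x**2), (x, -sp.oo, sp.oo))
print("closed form:", expr)            # sqrt(pi)/(2*a**(3/2))

# numeric step: compile the closed form into a NumPy-callable function
moment = sp.lambdify(a, expr, modules='numpy')
alphas = np.linspace(0.5, 2.0, 4)
print("numeric values:", moment(alphas))
```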

  7. Reliability of Semiautomated Computational Methods for Estimating Tibiofemoral Contact Stress in the Multicenter Osteoarthritis Study

    Directory of Open Access Journals (Sweden)

    Donald D. Anderson

    2012-01-01

    Full Text Available Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater reliability (ICC 0.84–0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.

  8. Numerical

    Directory of Open Access Journals (Sweden)

    M. Boumaza

    2015-07-01

    Full Text Available Transient convection heat transfer is of fundamental interest in many industrial and environmental situations, as well as in electronic devices and security of energy systems. Transient fluid flow problems are among the more difficult to analyze and yet are very often encountered in modern day technology. The main objective of this research project is to carry out a theoretical and numerical analysis of transient convective heat transfer in vertical flows, when the thermal field is due to different kinds of variation, in time and space, of some boundary conditions, such as wall temperature or wall heat flux. This is achieved by the development of a mathematical model and its resolution by suitable numerical methods, as well as performing various sensitivity analyses. These objectives are achieved through a theoretical investigation of the effects of wall and fluid axial conduction, physical properties and heat capacity of the pipe wall on the transient downward mixed convection in a circular duct experiencing a sudden change in the applied heat flux on the outside surface of a central zone.
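
    As a drastically simplified illustration of "a mathematical model and its resolution by suitable numerical methods" for a sudden change in an applied heat flux, the sketch below solves one-dimensional transient conduction in a wall with an explicit finite-difference scheme; it ignores the convection and duct-flow aspects of the actual study, and all material values are invented.

```python
# Minimal sketch: response of a 1-D conducting wall to a heat flux suddenly
# applied at one face at t = 0, explicit finite differences (illustrative data).
import numpy as np

alpha = 1e-5        # thermal diffusivity [m^2/s]
k = 15.0            # conductivity [W/(m K)]
q = 5000.0          # suddenly applied heat flux [W/m^2]
L, n = 0.01, 51     # wall thickness [m], grid points
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha         # below the explicit stability limit of 0.5

T = np.zeros(n)                  # initial temperature rise is zero
for step in range(2000):
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    T[0] = T[1] + q * dx / k     # imposed flux at the heated face
    T[-1] = 0.0                  # far face held at the initial temperature

print(f"heated-face temperature rise after {2000 * dt:.1f} s: {T[0]:.2f} K")
```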

  9. Reliability analysis and computation of computer-based safety instrumentation and control used in German nuclear power plant. Final report

    International Nuclear Information System (INIS)

    Ding, Yongjian; Krause, Ulrich; Gu, Chunlei

    2014-01-01

    The trend of technological advancement in the field of safety instrumentation and control (I and C) leads to increasingly frequent use of computer-based (digital) control systems, which consist of distributed computers connected by bus communications and whose functionalities are freely programmable by qualified software. The advantages of the new I and C systems over the old hard-wired I and C systems are, for example, higher flexibility, cost-effective procurement of spare parts, and higher hardware reliability (through higher integration density, intelligent self-monitoring mechanisms, etc.). On the other hand, skeptics see in the new computer-based I and C technology a higher potential for common cause failures (CCF) and for easier manipulation by sabotage (IT security). In this joint research project funded by the Federal Ministry for Economic Affairs and Energy (BMWi) (2011-2014, FJZ 1501405), the Otto-von-Guericke-University Magdeburg and the Magdeburg-Stendal University of Applied Sciences are therefore trying to develop suitable methods for demonstrating the reliability of the new instrumentation and control systems, with a focus on the investigation of CCF. The expertise of both institutions shall be extended to this area, providing a scientific contribution to sound reliability judgments of digital safety I and C in domestic and foreign nuclear power plants. First, the state of science and technology will be worked out through the study of national and international standards in the field of functional safety of electrical and I and C systems and the accompanying literature. On the basis of the existing nuclear standards, the deterministic requirements on the structure of the new digital I and C systems will be determined. The possible methods of reliability modeling will be analyzed and compared. A suitable method called multi class binomial failure rate (MCFBR) which was successfully used in safety valve applications will be

  10. Research in progress in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1990-01-01

    Research conducted at the Institute in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized. The Institute conducts unclassified basic research in applied mathematics in order to extend and improve problem solving capabilities in science and engineering, particularly in aeronautics and space.

  11. Transfer of numeric ASCII data files between Apple and IBM personal computers.

    Science.gov (United States)

    Allan, R W; Bermejo, R; Houben, D

    1986-01-01

    Listings for programs designed to transfer numeric ASCII data files between Apple and IBM personal computers are provided with accompanying descriptions of how the software operates. Details of the hardware used are also given. The programs may be easily adapted for transferring data between other microcomputers.

  12. CINDA-3G: Improved Numerical Differencing Analyzer Program for Third-Generation Computers

    Science.gov (United States)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1970-01-01

    The goal of this work was to develop a new and versatile program to supplement or replace the original Chrysler Improved Numerical Differencing Analyzer (CINDA) thermal analyzer program in order to take advantage of the improved systems software and machine speeds of the third-generation computers.

  13. Numerical calculation of the conductivity of percolation clusters and the use of special purpose computers

    International Nuclear Information System (INIS)

    Herrmann, H.J.

    1989-01-01

    Electrical conductivity, diffusion or phonons have an anomalous behaviour on percolation clusters at the percolation threshold due to the fractality of these clusters. The results that have been found numerically for this anomalous behaviour are reviewed. A special purpose computer built for this purpose is described and the evaluation of the data from this machine is discussed.

  14. CNC Turning Center Operations and Prove Out. Computer Numerical Control Operator/Programmer. 444-334.

    Science.gov (United States)

    Skowronski, Steven D.

    This student guide provides materials for a course designed to instruct the student in the recommended procedures used when setting up tooling and verifying part programs for a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 discusses course content and reviews and demonstrates set-up procedures…

  15. CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.

    Science.gov (United States)

    Skowronski, Steven D.; Tatum, Kenneth

    This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…

  16. Technology and Jobs: Computer-Aided Design. Numerical-Control Machine-Tool Operators. Office Automation.

    Science.gov (United States)

    Stanton, Michael; And Others

    1985-01-01

    Three reports on the effects of high technology on the nature of work include (1) Stanton on applications and implications of computer-aided design for engineers, drafters, and architects; (2) Nardone on the outlook and training of numerical-control machine tool operators; and (3) Austin and Drake on the future of clerical occupations in automated…

  17. Numerical computation of the transport matrix in toroidal plasma with a stochastic magnetic field

    Science.gov (United States)

    Zhu, Siqiang; Chen, Dunqiang; Dai, Zongliang; Wang, Shaojie

    2018-04-01

    A new numerical method, based on integrating along the full orbit of guiding centers, to compute the transport matrix is realized. The method is successfully applied to compute the phase-space diffusion tensor of passing electrons in a tokamak with a stochastic magnetic field. The new method also computes the Lagrangian correlation function, which can be used to evaluate the Lagrangian correlation time and the turbulence correlation length. For the case of the stochastic magnetic field, we find that the order of magnitude of the parallel correlation length can be estimated by qR0, as expected previously.

  18. How to Build an AppleSeed: A Parallel Macintosh Cluster for Numerically Intensive Computing

    Science.gov (United States)

    Decyk, V. K.; Dauger, D. E.

    We have constructed a parallel cluster consisting of a mixture of Apple Macintosh G3 and G4 computers running the Mac OS, and have achieved very good performance on numerically intensive, parallel plasma particle-in-cell simulations. A subset of the MPI message-passing library was implemented in Fortran77 and C. This library enabled us to port code, without modification, from other parallel processors to the Macintosh cluster. Unlike Unix-based clusters, no special expertise in operating systems is required to build and run the cluster. This enables us to move parallel computing from the realm of experts to the mainstream of computing.

  19. Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing

    International Nuclear Information System (INIS)

    Bailey, David

    2005-01-01

    In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard. If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord with hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digits of accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the

  20. Sigma: computer vision in the service of safety and reliability in the inspection services

    International Nuclear Information System (INIS)

    Pineiro, P. J.; Mendez, M.; Garcia, A.; Cabrera, E.; Regidor, J. J.

    2012-01-01

    Computer vision has grown very fast in the last decade, with very efficient tools and algorithms. This allows the development of new applications in the nuclear field, providing more efficient equipment and tasks: redundant systems, vision-guided mobile robots, automated visual defect recognition, measurement, etc. In this paper Tecnatom describes a detailed example of a visual computing application developed to provide secure redundant identification of the thousands of tubes existing in a power plant steam generator. Some other on-going or planned visual computing projects by Tecnatom are also introduced. New possibilities of application appear in the inspection systems for nuclear components, where the main objective is to maximize their reliability. (Author) 6 refs.

  1. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial

    Directory of Open Access Journals (Sweden)

    Kevin A. Hallgren

    2012-02-01

    Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly-used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen’s kappa and intra-class correlations to assess IRR.
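    As a rough illustration of the kind of computation the tutorial describes, the sketch below estimates Cohen's kappa for two raters in Python rather than with the SPSS or R syntax given in the paper; the rating vectors and categories are invented examples, not data from the study.

    ```python
    # Minimal sketch of Cohen's kappa for two raters assigning nominal codes.
    # The rating vectors below are hypothetical illustrations.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters on the same items."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        # Expected agreement if the two raters coded independently.
        expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
        return (observed - expected) / (1 - expected)

    if __name__ == "__main__":
        a = ["yes", "yes", "no", "no", "yes", "no"]
        b = ["yes", "no", "no", "no", "yes", "yes"]
        print(f"kappa = {cohens_kappa(a, b):.3f}")   # 0.333 for this toy example
    ```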

  2. Numerical validation of selected computer programs in nonlinear analysis of steel frame exposed to fire

    Science.gov (United States)

    Maślak, Mariusz; Pazdanowski, Michał; Woźniczka, Piotr

    2018-01-01

    Validation of fire resistance for the same steel frame bearing structure is performed here using three different numerical models, i.e. a bar model prepared in the SAFIR environment and two 3D models, one developed within the framework of Autodesk Simulation Mechanical (ASM) and an alternative one developed in the environment of the Abaqus code. The results of the computer simulations performed are compared with the experimental results obtained previously, in a laboratory fire test, on a structure having the same characteristics and subjected to the same heating regimen. Comparison of the experimental and numerically determined displacement evolution paths for selected nodes of the considered frame during the simulated fire exposure constitutes the basic criterion applied to evaluate the validity of the numerical results obtained. The experimental and numerically determined estimates of the critical temperature specific to the considered frame and related to the limit state of bearing capacity in fire have been verified as well.

  3. Talbot's method for the numerical inversion of Laplace transforms: an implementation for personal computers

    International Nuclear Information System (INIS)

    Garratt, T.J.

    1989-05-01

    Safety assessments of radioactive waste disposal require efficient computer models for the important processes. The present paper is based on an efficient computational technique which can be used to solve a wide variety of safety assessment models. It involves the numerical inversion of analytical solutions to the Laplace-transformed differential equations using a method proposed by Talbot. This method has been implemented on a personal computer in a user-friendly manner. The steps required to implement a particular transform and run the program are outlined. Four examples are described which illustrate the flexibility, accuracy and efficiency of the program. The improvements in computational efficiency described in this paper have application to the probabilistic safety assessment codes ESCORT and MASCOT which are currently under development. Also, it is hoped that the present work will form the basis of software for personal computers which could be used to demonstrate safety assessment procedures to a wide audience. (author)
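    For readers unfamiliar with the technique, the sketch below shows one common variant of Talbot's contour method (the fixed-Talbot rule of Abate and Valkó) in Python. The original program described in the report was written for the personal computers of the time, so this is only an illustrative modern restatement, and the test transform is an assumption.

    ```python
    # Hedged sketch of numerical Laplace-transform inversion on a Talbot contour.
    # This follows the fixed-Talbot variant (Abate & Valko); the report's own
    # implementation details may differ. F is the Laplace transform, t > 0.
    import cmath
    from math import pi, tan, exp

    def talbot_invert(F, t, M=32):
        r = 2.0 * M / (5.0 * t)                      # contour scaling parameter
        total = 0.5 * F(r) * exp(r * t)              # contribution of theta = 0
        for k in range(1, M):
            theta = k * pi / M
            cot = 1.0 / tan(theta)
            s = r * theta * (cot + 1j)               # point on the Talbot contour
            sigma = theta + (theta * cot - 1.0) * cot
            total += (cmath.exp(s * t) * F(s) * (1.0 + 1j * sigma)).real
        return (r / M) * total

    if __name__ == "__main__":
        F = lambda s: 1.0 / (s + 1.0)                # transform of exp(-t), a test case
        print(talbot_invert(F, 1.0), exp(-1.0))      # the two values should agree closely
    ```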

  4. RELIABILITY OF POSITRON EMISSION TOMOGRAPHY-COMPUTED TOMOGRAPHY IN EVALUATION OF TESTICULAR CARCINOMA PATIENTS.

    Science.gov (United States)

    Nikoletić, Katarina; Mihailović, Jasna; Matovina, Emil; Žeravica, Radmila; Srbovan, Dolores

    2015-01-01

    The study was aimed at assessing the reliability of 18F-fluorodeoxyglucose positron emission tomography-computed tomography scan in evaluation of testicular carcinoma patients. The study sample consisted of 26 scans performed in 23 patients with testicular carcinoma. According to the pathohistological finding, 14 patients had seminomas, 7 had nonseminomas and 2 patients had a mixed histological type. In 17 patients, the initial treatment was orchiectomy+chemotherapy, 2 patients had orchiectomy+chemotherapy+retroperitoneal lymph node dissection, 3 patients had orchiectomy only and one patient was treated with chemotherapy only. Abnormal computed tomography was the main reason for the oncologist to refer the patient to positron emission tomography-computed tomography scan (in 19 scans), magnetic resonance imaging abnormalities in 1 scan, a high level of tumor markers in 3, and 3 scans were performed for follow-up. Positron emission tomography-computed tomography imaging results were compared with histological results, other imaging modalities or the clinical follow-up of the patients. Positron emission tomography-computed tomography scans were positive in 6 and negative in 20 patients. In two patients, positron emission tomography-computed tomography was false positive. There were 20 negative positron emission tomography-computed tomography scans performed in 18 patients; one patient was lost for data analysis. Clinically stable disease was confirmed in 18 follow-up scans performed in 16 patients. The values of sensitivity, specificity, accuracy, and positive and negative predictive value were 60%, 95%, 75%, 88% and 90.5%, respectively. A high negative predictive value obtained in our study (90.5%) suggests that there is a small possibility for a patient to have future relapse after a normal positron emission tomography-computed tomography study. However, since the sensitivity and positive predictive value of the study are rather low, there are limitations of positive

  5. The reliable solution and computation time of variable parameters logistic model

    Science.gov (United States)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed as Tc) by applying a double-precision computation of a variable parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tcs of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probable distribution functions of Tc are also obtained, which can help us to identify the robustness of applying a nonlinear time series theory to forecasting by using the VPLM output. In addition, the Tc of the fixed parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the theoretical formula-predicted value.
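    The idea of a reliable computation time can be pictured with a small Python sketch: iterate the logistic map in double precision and against a higher-precision reference, and record the first step at which the two diverge beyond a tolerance. The parameter value, tolerance, and use of a fixed (rather than time-varying) parameter are illustrative assumptions, not the paper's exact setup.

    ```python
    # Hedged sketch of estimating a "reliable" number of iterations for the
    # logistic map x_{n+1} = mu * x_n * (1 - x_n): double precision is compared
    # against a 60-digit decimal reference started from the same binary value.
    from decimal import Decimal, getcontext

    def reliable_steps(x0=0.35, mu=4.0, tol=1e-6, max_steps=200, ref_digits=60):
        getcontext().prec = ref_digits
        x_double = x0
        x_ref = Decimal(x0)            # exact decimal image of the same float
        mu_ref = Decimal(mu)
        for n in range(1, max_steps + 1):
            x_double = mu * x_double * (1.0 - x_double)
            x_ref = mu_ref * x_ref * (Decimal(1) - x_ref)
            if abs(x_double - float(x_ref)) > tol:
                return n               # first step at which double precision is unreliable
        return max_steps

    if __name__ == "__main__":
        print("reliable up to step", reliable_steps())
    ```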

  6. Chest computed tomography-based scoring of thoracic sarcoidosis: Inter-rater reliability of CT abnormalities

    Energy Technology Data Exchange (ETDEWEB)

    Heuvel, D.A.V. den; Es, H.W. van; Heesewijk, J.P. van; Spee, M. [St. Antonius Hospital Nieuwegein, Department of Radiology, Nieuwegein (Netherlands); Jong, P.A. de [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Zanen, P.; Grutters, J.C. [University Medical Center Utrecht, Division Heart and Lungs, Utrecht (Netherlands); St. Antonius Hospital Nieuwegein, Center of Interstitial Lung Diseases, Department of Pulmonology, Nieuwegein (Netherlands)

    2015-09-15

    To determine inter-rater reliability of sarcoidosis-related computed tomography (CT) findings that can be used for scoring of thoracic sarcoidosis. CT images of 51 patients with sarcoidosis were scored by five chest radiologists for various abnormal CT findings (22 in total) encountered in thoracic sarcoidosis. Using intra-class correlation coefficient (ICC) analysis, inter-rater reliability was analysed and reported according to the Guidelines for Reporting Reliability and Agreement Studies (GRRAS) criteria. A pre-specified sub-analysis was performed to investigate the effect of training. Scoring was trained in a distinct set of 15 scans in which all abnormal CT findings were represented. Median age of the 51 patients (36 men, 70 %) was 43 years (range 26 - 64 years). All radiographic stages were present in this group. ICC ranged from 0.91 for honeycombing to 0.11 for nodular margin (sharp versus ill-defined). The ICC was above 0.60 in 13 of the 22 abnormal findings. Sub-analysis for the best-trained observers demonstrated an ICC improvement for all abnormal findings and values above 0.60 for 16 of the 22 abnormalities. In our cohort, reliability between raters was acceptable for 16 thoracic sarcoidosis-related abnormal CT findings. (orig.)

  7. Chest computed tomography-based scoring of thoracic sarcoidosis: Inter-rater reliability of CT abnormalities

    International Nuclear Information System (INIS)

    Heuvel, D.A.V. den; Es, H.W. van; Heesewijk, J.P. van; Spee, M.; Jong, P.A. de; Zanen, P.; Grutters, J.C.

    2015-01-01

    To determine inter-rater reliability of sarcoidosis-related computed tomography (CT) findings that can be used for scoring of thoracic sarcoidosis. CT images of 51 patients with sarcoidosis were scored by five chest radiologists for various abnormal CT findings (22 in total) encountered in thoracic sarcoidosis. Using intra-class correlation coefficient (ICC) analysis, inter-rater reliability was analysed and reported according to the Guidelines for Reporting Reliability and Agreement Studies (GRRAS) criteria. A pre-specified sub-analysis was performed to investigate the effect of training. Scoring was trained in a distinct set of 15 scans in which all abnormal CT findings were represented. Median age of the 51 patients (36 men, 70 %) was 43 years (range 26 - 64 years). All radiographic stages were present in this group. ICC ranged from 0.91 for honeycombing to 0.11 for nodular margin (sharp versus ill-defined). The ICC was above 0.60 in 13 of the 22 abnormal findings. Sub-analysis for the best-trained observers demonstrated an ICC improvement for all abnormal findings and values above 0.60 for 16 of the 22 abnormalities. In our cohort, reliability between raters was acceptable for 16 thoracic sarcoidosis-related abnormal CT findings. (orig.)

  8. Computational Enhancements for Direct Numerical Simulations of Statistically Stationary Turbulent Premixed Flames

    KAUST Repository

    Mukhadiyev, Nurzhan

    2017-05-01

    Combustion at extreme conditions, such as a turbulent flame at high Karlovitz and Reynolds numbers, is still a vast and uncertain field for researchers. Direct numerical simulation of a turbulent flame is a superior tool to unravel detailed information that is not accessible to the most sophisticated state-of-the-art experiments. However, the computational cost of such simulations remains a challenge even for modern supercomputers, as the physical size, the level of turbulence intensity, and the chemical complexity of the problems continue to increase. As a result, there is a strong demand for computational cost reduction methods as well as for acceleration of existing methods. The main scope of this work was the development of computational and numerical tools for high-fidelity direct numerical simulations of premixed planar flames interacting with turbulence. The first part of this work was the development of the KAUST Adaptive Reacting Flow Solver (KARFS). KARFS is a high order compressible reacting flow solver using detailed chemical kinetics mechanisms; it is capable of running on various types of heterogeneous computational architectures. In this work, it was shown that KARFS is capable of running efficiently on both CPU and GPU. The second part of this work was numerical tools for direct numerical simulations of planar premixed flames, such as linear turbulence forcing and dynamic inlet control. DNS of premixed turbulent flames conducted previously injected velocity fluctuations at the inlet. Turbulence injected at the inlet decayed significantly before reaching the flame, which created a necessity to inject higher than needed fluctuations. A solution for this issue was to maintain turbulence strength on the way to the flame using turbulence forcing. Therefore, linear turbulence forcing was implemented into KARFS to enhance turbulence intensity. Linear turbulence forcing developed previously by other groups was corrected with a net added momentum removal mechanism to prevent mean

  9. Computational area measurement of orbital floor fractures: Reliability, accuracy and rapidity

    International Nuclear Information System (INIS)

    Schouman, Thomas; Courvoisier, Delphine S.; Imholz, Benoit; Van Issum, Christopher; Scolozzi, Paolo

    2012-01-01

    Objective: To evaluate the reliability, accuracy and rapidity of a specific computational method for assessing the orbital floor fracture area on a CT scan. Method: A computer assessment of the area of the fracture, as well as that of the total orbital floor, was determined on CT scans taken from ten patients. The ratio of the fracture's area to the orbital floor area was also calculated. The test–retest precision of measurement calculations was estimated using the Intraclass Correlation Coefficient (ICC) and Dahlberg's formula to assess the agreement across observers and across measures. The time needed for the complete assessment was also evaluated. Results: The Intraclass Correlation Coefficient across observers was 0.92 [0.85;0.96], and the precision of the measures across observers was 4.9%, according to Dahlberg's formula. The mean time needed to make one measurement was 2 min and 39 s (range, 1 min and 32 s to 4 min and 37 s). Conclusion: This study demonstrated that (1) the area of the orbital floor fracture can be rapidly and reliably assessed by using a specific computer system directly on CT scan images; (2) this method has the potential of being routinely used to standardize the post-traumatic evaluation of orbital fractures
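    Dahlberg's formula used above is simple enough to state in a few lines; the Python sketch below computes the test-retest measurement error as a percentage of the mean for two repeated sets of area measurements. The numbers are invented illustrations, not study data.

    ```python
    # Hedged sketch of Dahlberg's standard error of measurement,
    # sqrt(sum(d_i^2) / (2n)), expressed relative to the mean measurement.
    from math import sqrt

    def dahlberg_percent(first, second):
        n = len(first)
        d2 = sum((a - b) ** 2 for a, b in zip(first, second))
        error = sqrt(d2 / (2 * n))                     # Dahlberg's error
        mean = sum(first + second) / (2 * n)
        return 100.0 * error / mean

    if __name__ == "__main__":
        run1 = [1.92, 2.35, 1.10, 3.05, 2.60]          # cm^2, hypothetical repeat 1
        run2 = [1.85, 2.41, 1.06, 3.12, 2.55]          # cm^2, hypothetical repeat 2
        print(f"Dahlberg error: {dahlberg_percent(run1, run2):.1f}% of the mean")
    ```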

  10. Summary of research in applied mathematics, numerical analysis and computer science at the Institute for Computer Applications in Science and Engineering

    Science.gov (United States)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science during the period October 1, 1983 through March 31, 1984 is summarized.

  11. Summary of research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science

    Science.gov (United States)

    1989-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 1, 1988 through March 31, 1989 is summarized.

  12. Current research activities: Applied and numerical mathematics, fluid mechanics, experiments in transition and turbulence and aerodynamics, and computer science

    Science.gov (United States)

    1992-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, fluid mechanics including fluid dynamics, acoustics, and combustion, aerodynamics, and computer science during the period 1 Apr. 1992 - 30 Sep. 1992 is summarized.

  13. How to effectively compute the reliability of a thermal-hydraulic nuclear passive system

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2011-01-01

    Research highlights: → Optimized LS is the preferred choice for failure probability estimation. → Two alternative options are suggested for uncertainty and sensitivity analyses. → SS for simulation codes requiring seconds or minutes to run. → Regression models (e.g., ANNs) for simulation codes requiring hours or days to run. - Abstract: The computation of the reliability of a thermal-hydraulic (T-H) passive system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. The objective of this work is to provide operative guidelines to effectively handle the computation of the reliability of a nuclear passive system. Two directions of computation efficiency are considered: from one side, efficient Monte Carlo Simulation (MCS) techniques are indicated as a means to performing robust estimations with a limited number of samples: in particular, the Subset Simulation (SS) and Line Sampling (LS) methods are identified as most valuable; from the other side, fast-running, surrogate regression models (also called response surfaces or meta-models) are indicated as a valid replacement of the long-running T-H model codes: in particular, the use of bootstrapped Artificial Neural Networks (ANNs) is shown to have interesting potentials, including for uncertainty propagation. The recommendations drawn are supported by the results obtained in an illustrative application of literature.
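    Steps (i)-(iii) of the abstract can be pictured with a toy Monte Carlo sketch in Python, in which a trivial surrogate function stands in for the long-running T-H code; the input distributions, surrogate response, and safety threshold are all illustrative assumptions, not values from the paper.

    ```python
    # Hedged sketch of the basic loop: sample uncertain inputs, evaluate a system
    # response, compare with a safety threshold, and estimate the failure
    # probability. The "response" is a stand-in for a mechanistic T-H code.
    import random

    def response(power, loss_coeff):
        # Placeholder surrogate for the T-H code: a peak-temperature-like quantity.
        return 600.0 + 0.8 * power + 150.0 * loss_coeff

    def failure_probability(n_samples=100_000, threshold=1200.0, seed=0):
        rng = random.Random(seed)
        failures = 0
        for _ in range(n_samples):
            power = rng.gauss(400.0, 40.0)               # uncertain decay power [kW]
            loss_coeff = rng.lognormvariate(0.0, 0.3)    # uncertain loop loss factor
            if response(power, loss_coeff) > threshold:  # safety threshold exceeded
                failures += 1
        return failures / n_samples

    if __name__ == "__main__":
        print("estimated failure probability:", failure_probability())
    ```

    In practice, as the abstract argues, the direct loop above is replaced either by more efficient sampling schemes (Subset Simulation, Line Sampling) or by a fast regression surrogate of the T-H code.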

  14. Re-Computation of Numerical Results Contained in NACA Report No. 496

    Science.gov (United States)

    Perry, Boyd, III

    2015-01-01

    An extensive examination of NACA Report No. 496 (NACA 496), "General Theory of Aerodynamic Instability and the Mechanism of Flutter," by Theodore Theodorsen, is described. The examination included checking equations and solution methods and re-computing interim quantities and all numerical examples in NACA 496. The checks revealed that NACA 496 contains computational shortcuts (time- and effort-saving devices for engineers of the time) and clever artifices (employed in its solution methods), but, unfortunately, also contains numerous tripping points (aspects of NACA 496 that have the potential to cause confusion) and some errors. The re-computations were performed employing the methods and procedures described in NACA 496, but using modern computational tools. With some exceptions, the magnitudes and trends of the original results were in fair-to-very-good agreement with the re-computed results. The exceptions included what are speculated to be computational errors in the original in some instances and transcription errors in the original in others. Independent flutter calculations were performed and, in all cases, including those where the original and re-computed results differed significantly, were in excellent agreement with the re-computed results. Appendix A contains NACA 496; Appendix B contains a Matlab(Reistered) program that performs the re-computation of results; Appendix C presents three alternate solution methods, with examples, for the two-degree-of-freedom solution method of NACA 496; Appendix D contains the three-degree-of-freedom solution method (outlined in NACA 496 but never implemented), with examples.

  15. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)
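    As an illustration of why importance sampling helps in such programs, the Python sketch below estimates a small exceedance probability for a standard normal variable both by crude Monte Carlo and by sampling from a density shifted to the failure threshold. The example problem is an assumption chosen because its exact answer is known; it is not taken from the report.

    ```python
    # Hedged sketch of importance sampling as a variance-reduction technique for
    # a rare failure event, here P(X > a) with X standard normal and a = 4.
    import math, random

    def crude_mc(a, n, rng):
        return sum(rng.gauss(0.0, 1.0) > a for _ in range(n)) / n

    def importance_sampling(a, n, rng):
        total = 0.0
        for _ in range(n):
            x = rng.gauss(a, 1.0)                  # sample from the density shifted to a
            if x > a:
                # likelihood ratio phi(x) / phi(x - a) = exp(-a*x + a^2/2)
                total += math.exp(-a * x + 0.5 * a * a)
        return total / n

    if __name__ == "__main__":
        rng = random.Random(1)
        a, n = 4.0, 100_000
        exact = 0.5 * math.erfc(a / math.sqrt(2.0))
        print("crude MC:  ", crude_mc(a, n, rng))          # very noisy at this sample size
        print("importance:", importance_sampling(a, n, rng))
        print("exact:     ", exact)
    ```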

  16. Piv Method and Numerical Computation for Prediction of Liquid Steel Flow Structure in Tundish

    Directory of Open Access Journals (Sweden)

    Cwudziński A.

    2015-04-01

    This paper presents the results of computer simulations and laboratory experiments carried out to describe the motion of steel flow in the tundish. The facility under investigation is a single-nozzle tundish designed for casting concast slabs. For the validation of the numerical model and verification of the hydrodynamic conditions occurring in the examined tundish furniture variants obtained from the computer simulations, a physical model of the tundish was employed. State-of-the-art vector flow field analysis measuring systems developed by Lavision were used in the laboratory tests. Computer simulations of liquid steel flow were performed using the commercial program Ansys-Fluent. In order to obtain a complete hydrodynamic picture in the tundish furniture variants tested, the computer simulations were performed for both isothermal and non-isothermal conditions.

  17. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  18. The Preliminary Study for Numerical Computation of 37 Rod Bundle in CANDU Reactor

    International Nuclear Information System (INIS)

    Jeon, Yu Mi; Bae, Jun Ho; Park, Joo Hwan

    2010-01-01

    A typical CANDU 6 fuel bundle consists of 37 fuel rods supported by two endplates and separated by spacer pads at various locations. In addition, bearing pads are brazed to each outer fuel rod with the aim of reducing the contact area between the fuel bundle and the pressure tube. Although the recent progress of CFD methods has provided opportunities for computing the thermal-hydraulic phenomena inside a fuel channel, it is as yet impossible to reflect the detailed shape of the rod bundle in the numerical computation because of the large mesh count and memory capacity required. Hence, previous studies conducted numerical computations for smooth channels without considering spacers and bearing pads. However, it is well known that these components are an important factor in predicting the pressure drop and heat transfer rate in a channel. In this study, a new computational method is proposed to handle complex geometries such as a fuel rod bundle. Before applying the method to the 37-rod bundle problem, its validity and accuracy are tested by applying it to a simple geometry. Based on the present results, the calculation for the fully shaped 37-rod bundle is scheduled for future work.

  19. The numerical computation of seismic fragility of base-isolated Nuclear Power Plants buildings

    International Nuclear Information System (INIS)

    Perotti, Federico; Domaneschi, Marco; De Grandis, Silvia

    2013-01-01

    Highlights: • Seismic fragility of structural components in base isolated NPP is computed. • Dynamic integration, Response Surface, FORM and Monte Carlo Simulation are adopted. • Refined approach for modeling the non-linearities behavior of isolators is proposed. • Beyond-design conditions are addressed. • The preliminary design of the isolated IRIS is the application of the procedure. -- Abstract: The research work here described is devoted to the development of a numerical procedure for the computation of seismic fragilities for equipment and structural components in Nuclear Power Plants; in particular, reference is made, in the present paper, to the case of isolated buildings. The proposed procedure for fragility computation makes use of the Response Surface Methodology to model the influence of the random variables on the dynamic response. To account for stochastic loading, the latter is computed by means of a simulation procedure. Given the Response Surface, the Monte Carlo method is used to compute the failure probability. The procedure is here applied to the preliminary design of the Nuclear Power Plant reactor building within the International Reactor Innovative and Secure international project; the building is equipped with a base isolation system based on the introduction of High Damping Rubber Bearing elements showing a markedly non linear mechanical behavior. The fragility analysis is performed assuming that the isolation devices become the critical elements in terms of seismic risk and that, once base-isolation is introduced, the dynamic behavior of the building can be captured by low-dimensional numerical models

  20. Human-computer interfaces applied to numerical solution of the Plateau problem

    Science.gov (United States)

    Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério

    2015-09-01

    In this work we present a code in Matlab to solve the Problem of Plateau numerically, and the code includes a human-computer interface. The Problem of Plateau has applications in areas of knowledge such as, for instance, Computer Graphics. The solution method is the same as that of the Surface Evolver, but the difference is a complete graphical interface with the user. This will enable us to implement other kinds of interface such as ocular mouse, voice, touch, etc. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community. In particular, its use is practically impossible for most physically challenged people.

  1. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry; Kasimov, Aslan R.

    2018-01-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
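    A minimal sketch of the dynamic-mode-decomposition step, assuming synthetic snapshot data in place of the linearized reactive Euler solution, is given below in Python with NumPy; it recovers the growth rates and frequencies of two travelling waves and is not the authors' code.

    ```python
    # Hedged sketch of extracting a stability spectrum from time-series snapshots
    # with dynamic mode decomposition (DMD). The synthetic data (one growing and
    # one decaying travelling wave) stand in for the paper's solver output.
    import numpy as np

    def dmd_eigenvalues(snapshots, dt, rank=None):
        X, Y = snapshots[:, :-1], snapshots[:, 1:]
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        if rank is not None:
            U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
        # Reduced linear operator that maps one snapshot to the next.
        A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
        lam = np.linalg.eigvals(A_tilde)
        return np.log(lam) / dt          # continuous-time eigenvalues: growth rate + i*frequency

    if __name__ == "__main__":
        dt = 0.01
        t = np.arange(0.0, 10.0, dt)
        x = np.linspace(0.0, 1.0, 50)[:, None]
        # Two travelling waves with eigenvalues 0.2 +/- 5i and -0.5 +/- 11i.
        data = np.real(np.exp(1j * 3.0 * x) * np.exp((0.2 + 5.0j) * t)
                       + np.exp(1j * 7.0 * x) * np.exp((-0.5 + 11.0j) * t))
        print(np.sort_complex(dmd_eigenvalues(data, dt, rank=4)))
    ```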

  2. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry I.

    2017-12-08

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  3. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry

    2018-03-20

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  4. FREQFIT: Computer program which performs numerical regression and statistical chi-squared goodness of fit analysis

    International Nuclear Information System (INIS)

    Hofland, G.S.; Barton, C.C.

    1990-01-01

    The computer program FREQFIT is designed to perform regression and statistical chi-squared goodness of fit analysis on one-dimensional or two-dimensional data. The program features an interactive user dialogue, numerous help messages, an option for screen or line printer output, and the flexibility to use practically any commercially available graphics package to create plots of the program's results. FREQFIT is written in Microsoft QuickBASIC, for IBM-PC compatible computers. A listing of the QuickBASIC source code for the FREQFIT program, a user manual, and sample input data, output, and plots are included. 6 refs., 1 fig
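    The chi-squared goodness-of-fit statistic that FREQFIT reports can be pictured with the short Python sketch below; the observed and expected counts are invented for illustration, and the degrees-of-freedom bookkeeping assumes a model with one fitted parameter.

    ```python
    # Hedged sketch of a Pearson chi-squared goodness-of-fit statistic: observed
    # bin counts are compared with counts expected under a fitted model.
    def chi_squared(observed, expected):
        return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

    if __name__ == "__main__":
        observed = [18, 25, 31, 14, 12]
        expected = [20.0, 24.0, 28.0, 16.0, 12.0]   # counts predicted by the fitted model
        stat = chi_squared(observed, expected)
        dof = len(observed) - 1 - 1                 # bins - 1 - number of fitted parameters
        print(f"chi2 = {stat:.2f} on {dof} degrees of freedom")
    ```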

  5. Numerical computation of solar neutrino flux attenuated by the MSW mechanism

    Science.gov (United States)

    Kim, Jai Sam; Chae, Yoon Sang; Kim, Jung Dae

    1999-07-01

    We compute the survival probability of an electron neutrino in its flight through the solar core experiencing the Mikheyev-Smirnov-Wolfenstein effect, with all three neutrino species considered. We adopted a hybrid method that uses an accurate approximation formula in the non-resonance region and numerical integration in the non-adiabatic resonance region. The key of our algorithm is to use the importance sampling method for sampling the neutrino creation energy and position and to find the optimum radii at which to start and stop the numerical integration. We further developed a parallel algorithm for a message-passing parallel computer. By using the idea of a job token, we have developed a dynamic load balancing mechanism which is effective under irregular load distributions.

  6. Problem-Oriented Simulation Packages and Computational Infrastructure for Numerical Studies of Powerful Gyrotrons

    International Nuclear Information System (INIS)

    Damyanova, M; Sabchevski, S; Vasileva, E; Balabanova, E; Zhelyazkov, I; Dankov, P; Malinov, P

    2016-01-01

    Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed. (paper)

  7. An efficient approach for computing the geometrical optics field reflected from a numerically specified surface

    Science.gov (United States)

    Mittra, R.; Rushdi, A.

    1979-01-01

    An approach for computing the geometrical optics fields reflected from a numerically specified surface is presented. The approach includes the step of deriving a specular point and begins with computing the rays reflected off the surface at the points where their coordinates, as well as the partial derivatives (or equivalently, the direction of the normal), are numerically specified. Then, a cluster of three adjacent rays is chosen to define a 'mean ray' and the divergence factor associated with this mean ray. Finally, the amplitude, phase, and vector direction of the reflected field at a given observation point are derived by associating this point with the nearest mean ray and determining its position relative to such a ray.
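    The first step described above, reflecting rays off a surface where the normal is numerically specified, reduces to the usual specular-reflection formula; the Python sketch below applies it to an illustrative incident direction and normal, not data from the paper.

    ```python
    # Hedged sketch of specular reflection of a ray at a surface sample point
    # with a numerically specified normal: r = d - 2 (d . n) n, n a unit normal.
    import math

    def normalize(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    def reflect(incident, normal):
        d, n = normalize(incident), normalize(normal)
        dot = sum(a * b for a, b in zip(d, n))
        return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

    if __name__ == "__main__":
        incident = (0.0, -1.0, -1.0)       # incoming ray direction (illustrative)
        normal = (0.0, 0.0, 1.0)           # surface normal at the sample point
        print(reflect(incident, normal))   # direction of the reflected ray
    ```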

  8. Numerical sensitivity computation for discontinuous gradient-only optimization problems using the complex-step method

    CSIR Research Space (South Africa)

    Wilke, DN

    2012-07-01

    problems that utilise remeshing (i.e. the mesh topology is allowed to change) between design updates. Here, changes in mesh topology result in abrupt changes in the discretization error of the computed response. These abrupt changes in turn manifest... in shape optimization but may be present whenever (partial) differential equations are approximated numerically with non-constant discretization methods, e.g. remeshing of spatial domains or automatic time stepping in temporal domains. Keywords: Complex...
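    The complex-step derivative at the heart of this approach is compact enough to sketch; the Python example below differentiates an illustrative analytic test function and is not tied to the shape-optimization code discussed in the paper.

    ```python
    # Hedged sketch of the complex-step derivative f'(x) ~ Im(f(x + ih)) / h,
    # which avoids the subtractive cancellation of finite differences even for
    # extremely small steps. The test function is an illustrative assumption.
    import cmath

    def complex_step_derivative(f, x, h=1e-30):
        return f(x + 1j * h).imag / h

    if __name__ == "__main__":
        f = lambda x: cmath.exp(x) / cmath.sqrt(x)       # analytic test function
        x0 = 1.5
        exact = (cmath.exp(x0) * (1.0 - 0.5 / x0) / cmath.sqrt(x0)).real
        print(complex_step_derivative(f, x0), exact)     # the two values should match
    ```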

  9. Coupling artificial intelligence and numerical computation for engineering design (Invited paper)

    Science.gov (United States)

    Tong, S. S.

    1986-01-01

    The possibility of combining artificial intelligence (AI) systems and numerical computation methods for engineering designs is considered. Attention is given to three possible areas of application involving fan design, controlled vortex design of turbine stage blade angles, and preliminary design of turbine cascade profiles. Among the AI techniques discussed are: knowledge-based systems; intelligent search; and pattern recognition systems. The potential cost and performance advantages of an AI-based design-generation system are discussed in detail.

  10. Design and reliability, availability, maintainability, and safety analysis of a high availability quadruple vital computer system

    Institute of Scientific and Technical Information of China (English)

    Ping TAN; Wei-ting HE; Jia LIN; Hong-ming ZHAO; Jian CHU

    2011-01-01

    With the development of high-speed railways in China, more than 2000 high-speed trains will be put into use. Safety and efficiency of railway transportation is increasingly important. We have designed a high availability quadruple vital computer (HAQVC) system based on the analysis of the architecture of the traditional double 2-out-of-2 system and 2-out-of-3 system. The HAQVC system is a system with high availability and safety, with prominent characteristics such as fire-new internal architecture, high efficiency, reliable data interaction mechanism, and operation state change mechanism. The hardware of the vital CPU is based on ARM7 with the real-time embedded safe operation system (ES-OS). The Markov modeling method is designed to evaluate the reliability, availability, maintainability, and safety (RAMS) of the system. In this paper, we demonstrate that the HAQVC system is more reliable than the all voting triple modular redundancy (AVTMR) system and double 2-out-of-2 system. Thus, the design can be used for a specific application system, such as an airplane or high-speed railway system.
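    As a much smaller illustration of the Markov modelling mentioned above, the Python sketch below computes the steady-state availability of a single repairable unit; the failure and repair rates are invented, and the full quadruple architecture would of course require a larger state space.

    ```python
    # Hedged sketch of a two-state continuous-time Markov availability model
    # (up <-> down) for one repairable unit; rates are illustrative assumptions.
    def steady_state_availability(lam, mu):
        """lam: failure rate, mu: repair rate; both per hour."""
        return mu / (lam + mu)

    if __name__ == "__main__":
        lam = 1.0e-5       # failures per hour (illustrative)
        mu = 0.25          # repairs per hour, i.e. a 4-hour mean repair time
        print(f"availability = {steady_state_availability(lam, mu):.6f}")
    ```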

  11. Measurement of transplanted pancreatic volume using computed tomography: reliability by intra- and inter-observer variability

    International Nuclear Information System (INIS)

    Lundqvist, Eva; Segelsjoe, Monica; Magnusson, Anders; Andersson, Anna; Biglarnia, Ali-Reza

    2012-01-01

    Background Unlike other solid organ transplants, pancreas allografts can undergo a substantial decrease in baseline volume after transplantation. This phenomenon has not been well characterized, as there are insufficient data on reliable and reproducible volume assessments. We hypothesized that characterization of pancreatic volume by means of computed tomography (CT) could be a useful method for clinical follow-up in pancreas transplant patients. Purpose To evaluate the feasibility and reliability of pancreatic volume assessment using CT scan in transplanted patients. Material and Methods CT examinations were performed on 21 consecutive patients undergoing pancreas transplantation. Volume measurements were carried out by two observers tracing the pancreatic contours in all slices. The observers performed the measurements twice for each patient. Differences in volume measurement were used to evaluate intra- and inter-observer variability. Results The intra-observer variability for the pancreatic volume measurements of Observers 1 and 2 was found to be in almost perfect agreement, with an intraclass correlation coefficient (ICC) of 0.90 (0.77-0.96) and 0.99 (0.98-1.0), respectively. Regarding inter-observer validity, the ICCs for the first and second measurements were 0.90 (range, 0.77-0.96) and 0.95 (range, 0.85-0.98), respectively. Conclusion CT volumetry is a reliable and reproducible method for measurement of transplanted pancreatic volume

  12. Measurement of transplanted pancreatic volume using computed tomography: reliability by intra- and inter-observer variability

    Energy Technology Data Exchange (ETDEWEB)

    Lundqvist, Eva; Segelsjoe, Monica; Magnusson, Anders [Uppsala Univ., Dept. of Radiology, Oncology and Radiation Science, Section of Radiology, Uppsala (Sweden)], E-mail: eva.lundqvist.8954@student.uu.se; Andersson, Anna; Biglarnia, Ali-Reza [Dept. of Surgical Sciences, Section of Transplantation Surgery, Uppsala Univ. Hospital, Uppsala (Sweden)

    2012-11-15

    Background Unlike other solid organ transplants, pancreas allografts can undergo a substantial decrease in baseline volume after transplantation. This phenomenon has not been well characterized, as there are insufficient data on reliable and reproducible volume assessments. We hypothesized that characterization of pancreatic volume by means of computed tomography (CT) could be a useful method for clinical follow-up in pancreas transplant patients. Purpose To evaluate the feasibility and reliability of pancreatic volume assessment using CT scan in transplanted patients. Material and Methods CT examinations were performed on 21 consecutive patients undergoing pancreas transplantation. Volume measurements were carried out by two observers tracing the pancreatic contours in all slices. The observers performed the measurements twice for each patient. Differences in volume measurement were used to evaluate intra- and inter-observer variability. Results The intra-observer variability for the pancreatic volume measurements of Observers 1 and 2 was found to be in almost perfect agreement, with an intraclass correlation coefficient (ICC) of 0.90 (0.77-0.96) and 0.99 (0.98-1.0), respectively. Regarding inter-observer validity, the ICCs for the first and second measurements were 0.90 (range, 0.77-0.96) and 0.95 (range, 0.85-0.98), respectively. Conclusion CT volumetry is a reliable and reproducible method for measurement of transplanted pancreatic volume.

  13. Use of Soft Computing Technologies for a Qualitative and Reliable Engine Control System for Propulsion Systems

    Science.gov (United States)

    Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)

    2001-01-01

    The problem to be addressed in this paper is to explore how Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance by development of a qualitative and reliable engine control system (QRECS). Specifically, this will be addressed by enhancing rocket engine control using SCT, innovative data mining tools, and sound software engineering practices used in Marshall's Flight Software Group (FSG). The principal goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to addressing the issue of reliability. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks); some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC) which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory that currently resides at NASA's Marshall Space Flight Center, building 4476, and is managed by the Avionics Department. A brief plan of action for design, development, implementation, and testing a Phase One effort for QRECS is given, along with expected results. Phase One will focus on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. The overall intent is to demonstrate that by

  14. Fast and reliable method for computing free-bound emission coefficients for hydrogenic ions

    Energy Technology Data Exchange (ETDEWEB)

    Sarmiento, A; Canto, J

    1985-12-01

    An approximate formula for the computation of the free-bound emission coefficient for hydrogenic ions is presented. The approximation is obtained through a manipulation of the (free-bound) Gaunt factor which intentionally distinguishes the dependence on frequency from the dependence on temperature and ionic composition. Numerical tests indicate that the derived formula is very precise, fast and easy to use, making the calculation of the free-bound contribution from an ionized region of varying temperature and ionic composition a very simple and time-saving task.

  15. A fast and reliable method for computing free-bound emission coefficients for hydrogenic ions

    International Nuclear Information System (INIS)

    Sarmiento, A.; Canto, J.

    1985-01-01

    An approximate formula for the computation of the free-bound emission coefficient for hydrogenic ions is presented. The approximation is obtained through a manipulation of the (free-bound) Gaunt factor which intentionally distinguishes the dependence on frequency from the dependence on temperature and ionic composition. Numerical tests indicate that the derived formula is very precise, fast and easy to use, making the calculation of the free-bound contribution from an ionized region of varying temperature and ionic composition a very simple and time-saving task. (author)

  16. Evaluation of thermophysical properties of Al–Sn–Si alloys based on computational thermodynamics and validation by numerical and experimental simulation of solidification

    International Nuclear Information System (INIS)

    Bertelli, Felipe; Cheung, Noé; Ferreira, Ivaldo L.; Garcia, Amauri

    2016-01-01

    Highlights: • A numerical routine coupled to a computational thermodynamics software is proposed to calculate thermophysical properties. • The approach encompasses numerical and experimental simulation of solidification. • Al–Sn–Si alloys thermophysical properties are validated by experimental/numerical cooling rate results. - Abstract: Modelling of manufacturing processes of multicomponent Al-based alloy products, such as casting, requires thermophysical properties that are rarely found in the literature. It is extremely important to use reliable values of such properties, as they can critically influence simulated output results. In the present study, a numerical routine is developed and connected in real runtime execution to computational thermodynamics software, with a view to determining thermophysical properties such as latent heats, specific heats, temperatures and heats of transformation, phase fractions and composition, and density of Al–Sn–Si alloys as a function of temperature. A numerical solidification model is used to run solidification simulations of ternary Al-based alloys using the appropriate calculated thermophysical properties. Directional solidification experiments are carried out with two Al–Sn–Si alloy compositions to provide experimental cooling rate profiles along the length of the castings, which are compared with numerical simulations in order to validate the calculated thermophysical data. For both cases a good agreement can be observed, indicating the relevance and applicability of the proposed approach.

  17. Reliability of computed tomography measurements in assessment of thigh muscle cross-sectional area and attenuation

    International Nuclear Information System (INIS)

    Strandberg, Sören; Wretling, Marie-Louise; Wredmark, Torsten; Shalabi, Adel

    2010-01-01

    Advances in computed tomography (CT) technology and the introduction of new medical imaging software enable easy and rapid assessment of muscle cross-sectional area (CSA) and attenuation. Before using these techniques in clinical studies there is a need to evaluate the reliability of the measurements. The purpose of the study was to evaluate the inter- and intra-observer reliability of ImageJ in measuring thigh muscle CSA and attenuation by computed tomography in patients with anterior cruciate ligament (ACL) injury. 31 patients from an ongoing study of rehabilitation and muscle atrophy after ACL reconstruction were included in the study. Axial CT images with a slice thickness of 10 mm at the level of 150 mm above the knee joint were analyzed by two investigators independently at two times, with a minimum of 3 weeks between the two readings, using NIH ImageJ. CSA and the mean attenuation of individual thigh muscles were analyzed for both legs. Mean CSA and mean attenuation values were in good agreement both when comparing the two observers and the two replicates. The inter- and intraclass correlation (ICC) was generally very high, with values from 0.98 to 1.00 for all comparisons except for the area of semimembranosus. All the ICC values were significant (p < 0.001). Pearson correlation coefficients were also generally very high, with values from 0.98 to 1.00 for all comparisons except for the area of semimembranosus (0.95 for intraobserver and 0.92 for interobserver). This study has presented ImageJ as a method to monitor and evaluate CSA and attenuation of different muscles in the thigh using CT imaging. The method shows an overall excellent reliability with respect to both observer and replicate.

  18. Osteochondritis dissecans of the humeral capitellum: reliability of four classification systems using radiographs and computed tomography.

    Science.gov (United States)

    Claessen, Femke M A P; van den Ende, Kimberly I M; Doornberg, Job N; Guitton, Thierry G; Eygendaal, Denise; van den Bekerom, Michel P J

    2015-10-01

    The radiographic appearance of osteochondritis dissecans (OCD) of the humeral capitellum varies according to the stage of the lesion. It is important to evaluate the stage of OCD lesion carefully to guide treatment. We compared the interobserver reliability of currently used classification systems for OCD of the humeral capitellum to identify the most reliable classification system. Thirty-two musculoskeletal radiologists and orthopaedic surgeons specialized in elbow surgery from several countries evaluated anteroposterior and lateral radiographs and corresponding computed tomography (CT) scans of 22 patients to classify the stage of OCD of the humeral capitellum according to the classification systems developed by (1) Minami, (2) Berndt and Harty, (3) Ferkel and Sgaglione, and (4) Anderson on a Web-based study platform including a Digital Imaging and Communications in Medicine viewer. Magnetic resonance imaging was not evaluated as part of this study. We measured agreement among observers using the Siegel and Castellan multirater κ. All OCD classification systems, except for Berndt and Harty, which had poor agreement among observers (κ = 0.20), had fair interobserver agreement: κ was 0.27 for the Minami, 0.23 for Anderson, and 0.22 for Ferkel and Sgaglione classifications. The Minami Classification was significantly more reliable than the other classifications (P reliable for classifying different stages of OCD of the humeral capitellum. However, it is unclear whether radiographic evidence of OCD of the humeral capitellum, as categorized by the Minami Classification, guides treatment in clinical practice as a result of this fair agreement. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  19. COMPLEX OF NUMERICAL MODELS FOR COMPUTATION OF AIR ION CONCENTRATION IN PREMISES

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2016-04-01

    Full Text Available Purpose. The article addresses the creation of a complex of numerical models for calculating ion concentration fields in premises of various purposes and in work areas. The developed complex should take into account the main physical factors influencing the formation of the ion concentration field: the aerodynamics of air jets in the room, the presence of furniture and equipment, the placement of ventilation openings, the ventilation mode, the location of ionization sources, the transfer of ions under the effect of the electric field, and other factors determining the intensity and shape of the ion concentration field. In addition, the complex of numerical models has to support express calculation of the ion concentration in the premises, allowing quick screening of possible variants and enabling a coarse («enlarged») evaluation of the air ion concentration. Methodology. A complex of numerical models for calculating the air ion regime in premises is developed. The CFD numerical model is based on the aerodynamics, electrostatics and mass transfer equations and takes into account the effect of air flows caused by the ventilation operation, diffusion, electric field effects, and the interaction of ions of different polarities with each other and with dust particles. The proposed balance model for computation of the air ion regime indoors allows rapid calculation of the ion concentration field, taking into account pulsed operation of the ionizer. Findings. Calculated data are obtained, on the basis of which one can estimate the ion concentration anywhere in a premise with artificial air ionization. An example of calculating the negative ion concentration with the CFD numerical model in a premise undergoing reengineering transformations is given. On the basis of the developed balance model, the air ion concentration in the room volume was calculated. Originality. Results of the air ion regime computation in a premise are presented.

  20. Virtual photons in imaginary time: Computing exact Casimir forces via standard numerical electromagnetism techniques

    International Nuclear Information System (INIS)

    Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.; Johnson, Steven G.; Iannuzzi, Davide

    2007-01-01

    We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.

  1. The Preliminary Study for Numerical Computation of 37 Rod Bundle in CANDU Reactor

    International Nuclear Information System (INIS)

    Jeon, Yu Mi; Park, Joo Hwan

    2010-09-01

    A typical CANDU 6 fuel bundle consists of 37 fuel rods supported by two endplates and separated by spacer pads at various locations. In addition, the bearing pads are brazed to each outer fuel rod with the aim of reducing the contact area between the fuel bundle and the pressure tube. Although the recent progress of CFD methods has provided opportunities for computing the thermal-hydraulic phenomena inside a fuel channel, it is still impossible to perform numerical computations on the detailed shape of a rod bundle due to limitations of computing mesh and memory capacity. Hence, previous studies conducted numerical computations for smooth channels without considering spacers and bearing pads. However, it is well known that these components are an important factor in predicting the pressure drop and heat transfer rate in a channel. In this study, a new computational method is proposed for complex geometries such as a fuel rod bundle. Before applying it to the problem of the 37-rod bundle, the validity and the accuracy of the method are tested by applying it to a simple geometry. The split channel method has been proposed with the aim of computing the fully shaped CANDU fuel channel with detailed components. The validity was tested by applying the method to the single channel problem. The average temperatures have similar values for the two methods considered, while the local temperature shows a slight difference due to the effect of conduction heat transfer in the solid region of a rod. Based on the present result, the calculation for the fully shaped 37-rod bundle is scheduled for future work.

  2. Validity and Reliability of the Verbal Numerical Rating Scale for Children Aged 4 to 17 Years With Acute Pain.

    Science.gov (United States)

    Tsze, Daniel S; von Baeyer, Carl L; Pahalyants, Vartan; Dayan, Peter S

    2018-06-01

    The Verbal Numerical Rating Scale is the most commonly used self-report measure of pain intensity. It is unclear how the validity and reliability of the scale scores vary across children's ages. We aimed to determine the validity and reliability of the scale for children presenting to the emergency department across a comprehensive spectrum of age. This was a cross-sectional study of children aged 4 to 17 years. Children self-reported their pain intensity, using the Verbal Numerical Rating Scale and Faces Pain Scale-Revised at 2 serial assessments. We evaluated convergent validity (strong validity defined as correlation coefficient ≥0.60), agreement (difference between concurrent Verbal Numerical Rating Scale and Faces Pain Scale-Revised scores), known-groups validity (difference in score between children with painful versus nonpainful conditions), responsivity (decrease in score after analgesic administration), and reliability (test-retest at 2 serial assessments) in the total sample and subgroups based on age. We enrolled 760 children; 27 did not understand the Verbal Numerical Rating Scale and were removed. Of the remainder, Pearson correlations were strong to very strong (0.62 to 0.96) in all years of age except 4 and 5 years, and agreement was strong for children aged 8 and older. Known-groups validity and responsivity were strong in all years of age. Reliability was strong in all age subgroups, including each year of age from 4 to 7 years. Convergent validity, known-groups validity, responsivity, and reliability of the Verbal Numerical Rating Scale were strong for children aged 6 to 17 years. Convergent validity was not strong for children aged 4 and 5 years. Our findings support the use of the Verbal Numerical Rating Scale for most children aged 6 years and older, but not for those aged 4 and 5 years. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  3. Reliability

    African Journals Online (AJOL)

    eobe


  4. A reliable and valid questionnaire was developed to measure computer vision syndrome at the workplace.

    Science.gov (United States)

    Seguí, María del Mar; Cabrero-García, Julio; Crespo, Ana; Verdú, José; Ronda, Elena

    2015-06-01

    To design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, receiver operating characteristic curve, and cutoff point. Test-retest repeatability was tested using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and CVS classification (κ = 0.612; 95% CI: 0.384, 0.839). The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to control the visual health of computer workers, and can potentially be used in clinical trials and outcome research. Copyright © 2015 Elsevier Inc. All rights reserved.
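
    The criterion-validity step (sensitivity, specificity, ROC curve and cutoff point) can be reproduced in outline as follows; the questionnaire scores, the reference diagnoses and the resulting cutoff are all hypothetical, and the CVS-Q's actual published cutoff is not reproduced here.

        import numpy as np

        def roc_best_cutoff(scores, has_cvs):
            # scores: symptom-severity totals; has_cvs: 1 if the reference standard is positive
            scores, has_cvs = np.asarray(scores, float), np.asarray(has_cvs, bool)
            best = None
            for c in np.unique(scores):
                pred = scores >= c
                sens = np.mean(pred[has_cvs])          # true positive rate at this cutoff
                spec = np.mean(~pred[~has_cvs])        # true negative rate at this cutoff
                youden = sens + spec - 1
                if best is None or youden > best[0]:
                    best = (youden, c, sens, spec)
            return best

        # Hypothetical data
        scores  = [2, 4, 5, 6, 7, 8, 9, 10, 12, 14]
        has_cvs = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]
        print(roc_best_cutoff(scores, has_cvs))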

  5. Numerical computation of discrete differential scattering cross sections for Monte Carlo charged particle transport

    International Nuclear Information System (INIS)

    Walsh, Jonathan A.; Palmer, Todd S.; Urbatsch, Todd J.

    2015-01-01

    Highlights: • Generation of discrete differential scattering angle and energy loss cross sections. • Gauss–Radau quadrature utilizing numerically computed cross section moments. • Development of a charged particle transport capability in the Milagro IMC code. • Integration of cross section generation and charged particle transport capabilities. - Abstract: We investigate a method for numerically generating discrete scattering cross sections for use in charged particle transport simulations. We describe the cross section generation procedure and compare it to existing methods used to obtain discrete cross sections. The numerical approach presented here is generalized to allow greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data computed with this method compare favorably with discrete data generated with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code, Milagro. We verify the implementation of charged particle transport in Milagro with analytic test problems and we compare calculated electron depth–dose profiles with another particle transport code that has a validated electron transport capability. Finally, we investigate the integration of the new discrete cross section generation method with the charged particle transport capability in Milagro.

  6. Spatial correlations in intense ionospheric scintillations - comparison between numerical computation and observation

    International Nuclear Information System (INIS)

    Kumagai, H.

    1987-01-01

    The spatial correlations in intense ionospheric scintillations were analyzed by comparing numerical results with observational ones. The observational results were obtained by spaced-receiver scintillation measurements of VHF satellite radiowave. The numerical computation was made by using the fourth-order moment equation with fairly realistic ionospheric irregularity models, in which power-law irregularities with spectral index 4, both thin and thick slabs, and both isotropic and anisotropic irregularities, were considered. Evolution of the S(4) index and the transverse correlation function was computed. The numerical result that the transverse correlation distance decreases with the increase in S(4) was consistent with that obtained in the observation, suggesting that multiple scattering plays an important role in the intense scintillations observed. The anisotropy of irregularities proved to act as if the density fluctuation increased. This effect, as well as the effect of slab thickness, was evaluated by the total phase fluctuations that the radiowave experienced in the slab. On the basis of the comparison, the irregularity height and electron-density fluctuation which is necessary to produce a particular strength of scintillation were estimated. 30 references

  7. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    Science.gov (United States)

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339

  8. Computational Flame Diagnostics for Direct Numerical Simulations with Detailed Chemistry of Transportation Fuels

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Tianfeng [Univ. of Connecticut, Storrs, CT (United States)

    2017-02-16

    The goal of the proposed research is to create computational flame diagnostics (CFLD) that are rigorous numerical algorithms for systematic detection of critical flame features, such as ignition, extinction, and premixed and non-premixed flamelets, and to understand the underlying physicochemical processes controlling limit flame phenomena, flame stabilization, turbulence-chemistry interactions and pollutant emissions etc. The goal has been accomplished through an integrated effort on mechanism reduction, direct numerical simulations (DNS) of flames at engine conditions and a variety of turbulent flames with transport fuels, computational diagnostics, turbulence modeling, and DNS data mining and data reduction. The computational diagnostics are primarily based on the chemical explosive mode analysis (CEMA) and a recently developed bifurcation analysis using datasets from first-principle simulations of 0-D reactors, 1-D laminar flames, and 2-D and 3-D DNS (collaboration with J.H. Chen and S. Som at Argonne, and C.S. Yoo at UNIST). Non-stiff reduced mechanisms for transportation fuels amenable for 3-D DNS are developed through graph-based methods and timescale analysis. The flame structures, stabilization mechanisms, local ignition and extinction etc., and the rate controlling chemical processes are unambiguously identified through CFLD. CEMA is further employed to segment complex turbulent flames based on the critical flame features, such as premixed reaction fronts, and to enable zone-adaptive turbulent combustion modeling.

  9. MAPPS (Maintenance Personnel Performance Simulation): a computer simulation model for human reliability analysis

    International Nuclear Information System (INIS)

    Knee, H.E.; Haas, P.M.

    1985-01-01

    A computer model capable of generating reliable estimates of human performance measures in the nuclear power plant (NPP) maintenance context has been developed, sensitivity tested, and evaluated. The model, entitled MAPPS (Maintenance Personnel Performance Simulation), is of the simulation type and is task-oriented. It addresses a number of person-machine, person-environment, and person-person variables and is capable of providing the user with a rich spectrum of important performance measures, including the mean time for successful task performance by a maintenance team and the maintenance team's probability of task success. These two measures are particularly important as input to probabilistic risk assessment (PRA) studies, which were the primary impetus for the development of MAPPS. The simulation nature of the model, along with its rich set of input parameters and output variables, allows its usefulness to extend beyond providing input to PRA.
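
    As an illustration only (not the MAPPS model itself), the toy Monte Carlo sketch below shows how a task-oriented simulation can yield the two measures highlighted above: the probability that a maintenance team completes a task and the mean time of successful completions. The subtask success probabilities and duration parameters are invented.

        import random

        def simulate_task(subtasks, n_runs=100_000, seed=1):
            # subtasks: list of (success_probability, mean_minutes, sd_minutes)
            rng = random.Random(seed)
            successes, total_time = 0, 0.0
            for _ in range(n_runs):
                t, ok = 0.0, True
                for p, mu, sd in subtasks:
                    t += max(0.0, rng.gauss(mu, sd))   # time spent on the subtask
                    if rng.random() > p:               # subtask failed -> task failed
                        ok = False
                        break
                if ok:
                    successes += 1
                    total_time += t
            return successes / n_runs, total_time / max(successes, 1)

        # Hypothetical maintenance task: isolate, repair, restore
        p_success, mean_minutes = simulate_task([(0.98, 15, 3), (0.95, 40, 10), (0.99, 20, 5)])
        print(f"P(task success) = {p_success:.3f}, mean successful-run time = {mean_minutes:.1f} min")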

  10. Improving Wind Turbine Drivetrain Reliability Using a Combined Experimental, Computational, and Analytical Approach

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; van Dam, J.; Bergua, R.; Jove, J.; Campbell, J.

    2015-03-01

    Nontorque loads induced by the wind turbine rotor overhang weight and aerodynamic forces can greatly affect drivetrain loads and responses. If not addressed properly, these loads can result in a decrease in gearbox component life. This work uses analytical modeling, computational modeling, and experimental data to evaluate a unique drivetrain design that minimizes the effects of nontorque loads on gearbox reliability: the Pure Torque(R) drivetrain developed by Alstom. The drivetrain has a hub-support configuration that transmits nontorque loads directly into the tower rather than through the gearbox as in other design approaches. An analytical model of Alstom's Pure Torque drivetrain provides insight into the relationships among turbine component weights, aerodynamic forces, and the resulting drivetrain loads. Main shaft bending loads are orders of magnitude lower than the rated torque and are hardly affected by wind conditions and turbine operations.

  11. Quantitative software-reliability analysis of computer codes relevant to nuclear safety

    International Nuclear Information System (INIS)

    Mueller, C.J.

    1981-12-01

    This report presents the results of the first year of an ongoing research program to determine the probability of failure characteristics of computer codes relevant to nuclear safety. An introduction to both qualitative and quantitative aspects of nuclear software is given. A mathematical framework is presented which will enable the a priori prediction of the probability of failure characteristics of a code given the proper specification of its properties. The framework consists of four parts: (1) a classification system for software errors and code failures; (2) probabilistic modeling for selected reliability characteristics; (3) multivariate regression analyses to establish predictive relationships among reliability characteristics and generic code property and development parameters; and (4) the associated information base. Preliminary data of the type needed to support the modeling and the predictions of this program are described. Illustrations of the use of the modeling are given but the results so obtained, as well as all results of code failure probabilities presented herein, are based on data which at this point are preliminary, incomplete, and possibly non-representative of codes relevant to nuclear safety

  12. Improving reliability of state estimation programming and computing suite based on analyzing a fault tree

    Directory of Open Access Journals (Sweden)

    Kolosok Irina

    2017-01-01

    Full Text Available Reliable information on the current state parameters, obtained by processing the measurements from the SCADA and WAMS data acquisition systems with state estimation (SE) methods, is a precondition for successful management of an electric power system (EPS). SCADA and WAMS systems themselves, as any technical systems, are subject to failures and faults that lead to distortion and loss of information. The SE procedure enables erroneous measurements to be found and is therefore a barrier preventing distorted information from penetrating into control problems. At the same time, the programming and computing suite (PCS) implementing the SE functions may itself produce a wrong decision due to imperfections and errors in the software algorithms. In this study, we propose to use a fault tree to analyze the consequences of failures and faults in SCADA and WAMS and in the SE procedure itself. Based on the analysis of the obtained measurement information and of the SE results, we determine the fault tolerance level of the state estimation PCS, which characterizes its reliability.
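
    A fault tree combines basic-event probabilities through AND and OR gates; assuming independent basic events, a minimal sketch of that bottom-up evaluation is shown below. The events, gate structure and probabilities are hypothetical, not the authors' tree for SCADA/WAMS and the SE suite.

        from functools import reduce

        def and_gate(*p):                 # all inputs must fail
            return reduce(lambda a, b: a * b, p, 1.0)

        def or_gate(*p):                  # at least one input fails
            return 1.0 - reduce(lambda a, b: a * (1.0 - b), p, 1.0)

        # Hypothetical basic events (probabilities of failure on demand)
        p_scada_loss = 1e-3            # SCADA measurement distorted or lost
        p_wams_loss = 5e-4             # WAMS (PMU) data lost
        p_bad_data_missed = 2e-3       # SE bad-data detection misses an error
        p_sw_fault = 1e-4              # fault in the SE programming and computing suite itself

        # Hypothetical top event: the state estimator delivers a wrong operating decision
        p_wrong_input = and_gate(or_gate(p_scada_loss, p_wams_loss), p_bad_data_missed)
        p_top = or_gate(p_wrong_input, p_sw_fault)
        print(f"P(top event) = {p_top:.2e}")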

  13. A Newly Developed Method for Computing Reliability Measures in a Water Supply Network

    Directory of Open Access Journals (Sweden)

    Jacek Malinowski

    2016-01-01

    Full Text Available A reliability model of a water supply network has been examined. Its main features are: (1) a topology that can be decomposed by the so-called state factorization into a relatively small number of derivative networks, each having a series-parallel structure; (2) binary-state components (either operative or failed) with given flow capacities; (3) a multi-state character of the whole network and its sub-networks - a network state is defined as the maximal flow between a source (or sources) and a sink (or sinks); (4) integer values of all capacities (component, network, and sub-network). As the network operates, its state changes due to component failures, repairs, and replacements. A newly developed method of computing the inter-state transition intensities is presented. It is based on the so-called state factorization and series-parallel aggregation. The analysis of these intensities shows that the failure-repair process of the considered system is an asymptotically homogeneous Markov process. It is also demonstrated how certain reliability parameters useful for network maintenance planning can be determined on the basis of the asymptotic intensities. For better understanding of the presented method, an illustrative example is given. (original abstract)
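
    A minimal sketch, under an independence assumption and with invented data, of the series-parallel aggregation idea: a parallel composition carries the sum of its branch capacities, a series composition the minimum, and enumerating the binary component states gives the multi-state capacity distribution of the network. The paper's state factorization and transition-intensity calculation are not reproduced.

        from itertools import product

        # Hypothetical pipes: (availability, capacity). Two parallel branches feed one trunk in series.
        branch_a = (0.95, 60)
        branch_b = (0.90, 40)
        trunk = (0.98, 80)

        def capacity(up_a, up_b, up_t):
            parallel = (branch_a[1] if up_a else 0) + (branch_b[1] if up_b else 0)
            return min(parallel, trunk[1] if up_t else 0)

        dist = {}
        for state in product([0, 1], repeat=3):
            prob = 1.0
            for up, (avail, _) in zip(state, (branch_a, branch_b, trunk)):
                prob *= avail if up else (1 - avail)
            cap = capacity(*state)
            dist[cap] = dist.get(cap, 0.0) + prob

        demand = 50
        print("capacity distribution:", {c: round(p, 4) for c, p in sorted(dist.items())})
        print("P(capacity >= demand):", round(sum(p for c, p in dist.items() if c >= demand), 4))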

  14. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    Science.gov (United States)

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectrically controlled prosthetic hand requires machine-based identification of hand gestures using surface electromyogram (sEMG) recorded from the forearm muscles. This study has observed that a sub-set of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize the sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures and was then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated by a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected and these gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion and thumb flexion. This work has shown that reliable myoelectric-based human-computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
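
    The gesture ranking starts from a confusion matrix of the off-line classification. The sketch below computes per-gesture sensitivity and specificity from such a matrix and ranks gestures by a simple combined score; it is not the authors' PNM index, and the matrix is hypothetical.

        import numpy as np

        def per_class_sens_spec(cm):
            # cm[i, j]: trials of true gesture i classified as gesture j
            cm = np.asarray(cm, dtype=float)
            total = cm.sum()
            tp = np.diag(cm)
            fn = cm.sum(axis=1) - tp
            fp = cm.sum(axis=0) - tp
            tn = total - tp - fn - fp
            return tp / (tp + fn), tn / (tn + fp)

        # Hypothetical 4-gesture confusion matrix (rows: true, cols: predicted)
        cm = [[48, 1, 1, 0],
              [2, 45, 2, 1],
              [0, 3, 44, 3],
              [1, 2, 5, 42]]
        sens, spec = per_class_sens_spec(cm)
        ranking = sorted(range(len(sens)), key=lambda g: sens[g] + spec[g], reverse=True)
        print("gesture ranking (best first):", ranking)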

  15. Reliability of computer designed surgical guides in six implant rehabilitations with two years follow-up.

    Science.gov (United States)

    Giordano, Mauro; Ausiello, Pietro; Martorelli, Massimo; Sorrentino, Roberto

    2012-09-01

    To evaluate the reliability and accuracy of computer-designed surgical guides in osseointegrated oral implant rehabilitation. Six implant rehabilitations, with a total of 17 implants, were completed with computer-designed surgical guides, performed with the master model developed from muco-compressive and muco-static impressions. In the first case, the surgical guide had exclusively mucosal support; in the second case, exclusively dental support. For all six cases, computer-aided surgical planning was performed by virtual analyses with 3D models obtained from dental scan DICOM data. The accuracy and stability of implant osseointegration over two years post-surgery were then evaluated with clinical and radiographic examinations. Radiographic examination, performed with digital acquisitions (RVG - Radio Video Graph) and parallel techniques, allowed two-dimensional feedback with a margin of linear error of 10%. Implant osseointegration was recorded for all the examined rehabilitations. During the clinical and radiographic post-surgical assessments over the following two years, the peri-implant bone level was found to be stable and without the appearance of any complications. The margin of error recorded between pre-operative positions assigned by virtual analysis and the post-surgical digital radiographic observations was as low as 0.2 mm. Computer-guided implant surgery can be very effective in oral rehabilitations, providing an opportunity for the surgeon (a) to avoid the necessity of muco-periosteal detachments and (b) to perform minimally invasive interventions, whenever appropriate, with a flapless approach. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  16. A structural approach to constructing perspective efficient and reliable human-computer interfaces

    International Nuclear Information System (INIS)

    Balint, L.

    1989-01-01

    The principles of human-computer interface (HCI) realizations are investigated with the aim of getting closer to a general framework and thus to a more solid background for constructing perspective efficient, reliable and cost-effective human-computer interfaces. On the basis of characterizing and classifying the different HCI solutions, the fundamental problems of interface construction are pointed out, especially with respect to the possibilities of human error occurrence. The evolution of HCI realizations is illustrated by summarizing the main properties of past, present and foreseeable future interface generations. HCI modeling is pointed out to be a crucial problem in theoretical and practical investigations. Suggestions are presented concerning HCI structure (hierarchy and modularity), HCI functional dynamics (mapping from input to output information), minimization of system failures caused by human error (error tolerance, error recovery and error correction), as well as cost-effective HCI design and realization methodology (universal and application-oriented vs. application-specific solutions). The concept of RISC-based and SCAMP-type HCI components is introduced with the aim of having a reduced interaction scheme in communication and a well-defined architecture in the HCI components' internal structure. HCI efficiency and reliability are dealt with by taking into account complexity and flexibility. The application of fast computerized prototyping is also briefly investigated as an experimental means of achieving simple, parametrized, invariant HCI models. Finally, a concise outline of an approach to constructing ideal HCIs is suggested, emphasizing the open questions and the need for future work related to the proposals. (author). 14 refs, 6 figs

  17. Flow field measurements using LDA and numerical computation for rod bundle of reactor fuel assembly

    International Nuclear Information System (INIS)

    Hu Jun; Zou Zunyu

    1995-02-01

    Local mean velocity and turbulence intensity measurements were obtained with a DANTEC 55X two-dimensional laser Doppler anemometry (LDA) system for a rod bundle of a reactor fuel assembly test model, which was a 4 x 4 rod bundle. The data were obtained at different experimental cross-sections both upstream and downstream of the model support plate. Measurements were performed at test Reynolds numbers of 1.8 x 10⁴ to 3.6 x 10⁴. The results describe the local and gross effects of the support plate on the upstream and downstream flow distributions. A numerical computation was also carried out; the experimental results are in good agreement with the numerical results and with those reported in the references. Finally, a few suggestions were proposed on how to use the LDA system well. (11 figs.)

  18. A Numerical-Analytical Approach Based on Canonical Transformations for Computing Optimal Low-Thrust Transfers

    Science.gov (United States)

    da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.

    2018-04-01

    A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.

  19. SIVEH: Numerical Computing Simulation of Wireless Energy-Harvesting Sensor Nodes

    Directory of Open Access Journals (Sweden)

    Pedro Yuste

    2013-09-01

    Full Text Available The paper presents a numerical energy harvesting model for sensor nodes, SIVEH (Simulator I–V for EH), based on I–V hardware tracking. I–V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at either different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time—days, weeks, months or years—using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with dynamic adjustment of the sleep time rate, while seeking energy-neutral operation. This paper presents the model description, a functional verification and a critical comparison with the classic energy approach.

  20. BAESNUM, a conversational computer program for the Bayesian estimation of a parameter by a numerical method

    International Nuclear Information System (INIS)

    Colombo, A.G.; Jaarsma, R.J.

    1982-01-01

    This report describes a conversational computer program which, via Bayes' theorem, numerically combines the prior distribution of a parameter with a likelihood function. Any type of prior and likelihood function can be considered. The present version of the program includes six types of prior and employs the binomial likelihood. As input the program requires the law and parameters of the prior distribution and the sample data. As output it gives the posterior distribution as a histogram. The use of the program for estimating the constant failure rate of an item is briefly described
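
    A minimal sketch of the kind of numerical scheme such a program implements (not BAESNUM's actual code): the prior is tabulated on a grid of the parameter, multiplied point-wise by a binomial likelihood for the observed sample, and renormalized; the posterior is then reported as a histogram. The prior choice and the data below are hypothetical.

        import math
        import numpy as np

        def posterior_binomial(grid, prior, failures, demands):
            # point-wise Bayes update with a binomial likelihood, then renormalization
            like = np.array([math.comb(demands, failures) * p ** failures * (1 - p) ** (demands - failures)
                             for p in grid])
            post = prior * like
            return post / post.sum()            # discrete probabilities on the grid

        grid = np.linspace(1e-4, 0.2, 400)                      # grid of failure-probability values
        prior = np.exp(-0.5 * ((grid - 0.05) / 0.03) ** 2)      # hypothetical truncated-Gaussian prior
        prior /= prior.sum()
        post = posterior_binomial(grid, prior, failures=2, demands=60)

        # crude histogram-style output, in the spirit of the program's posterior printout
        for lo in np.arange(0.0, 0.2, 0.04):
            in_bin = (grid >= lo) & (grid < lo + 0.04)
            print(f"{lo:.2f}-{lo + 0.04:.2f}: {post[in_bin].sum():.3f}")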

  1. Numerical expressions for the computation of coincidence-summing correction factors in gamma-ray spectrometry with HPGe detectors

    International Nuclear Information System (INIS)

    Rizzo, S.; Tomarchio, E.

    2008-01-01

    Full text: The analytical relations used to compute the coincidence-summing effects on the spectral response of Ge semiconductor detectors are quite complex and involve full-energy peak and total efficiencies. For point sources, a general method for calculating the correction factors for gamma-ray coincidences has been formulated by Andreev et al. and used by Schima and Hoppes to obtain γ-X K coincidence correction expressions for 17 nuclides. However, because the higher-order terms are neglected, the expressions supplied do not give reliable results in the case of short sample-detector distances. Using the formulae given by Morel et al.[3] and Lepy et al.[4], we have developed a computer program able to generate numerical expressions for computing γ-γ and γ-X K coincidence-summing corrections for point sources. Only full-energy peak and total efficiencies are needed. Alternatively, values of the peak-to-total ratio can be introduced. For extended sources, the same expressions can still be considered with the introduction of 'effective efficiencies' as defined by Arnold and Sima, i.e. an average over the source volume of the spatial distribution of the elementary photon source total efficiency, weighted by the corresponding peak efficiency. We have considered the most used calibration radioisotopes as well as fission products, activation products and environmental isotopes. All decay data were taken from the most recent volumes of the 'Table of Radionuclides', CEA Monographie BIPM-5, and a suitable matrix representation of the decay scheme was adopted. For the sake of brevity, we provide for each nuclide a set of expressions for the more intense gamma emissions, considered sufficient for most applications. However, numerical expressions are available for all the stored gamma transitions and can be obtained on request. As examples of the use of the expressions, the evaluation of correction values for point sources and a particulate sample reduced to a 6 x 6 x 0.7 cm packet - with reference
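
    For orientation only, the textbook first-order expression for a simple two-step cascade (not the full expressions generated by the program, which include higher-order terms, angular correlation and γ-X coincidences): the full-energy peak of a gamma is reduced by the probability that its coincident partner also deposits energy in the detector, so the multiplicative correction is 1/(1 - p·εt). The efficiency and branching values below are invented.

        def summing_out_correction(p_coincident, total_eff_partner):
            # First-order correction factor for summing-out losses of a full-energy peak
            # whose gamma is emitted in prompt coincidence with a partner gamma.
            # p_coincident: probability that the partner gamma is emitted in coincidence
            # total_eff_partner: total detection efficiency at the partner gamma's energy
            return 1.0 / (1.0 - p_coincident * total_eff_partner)

        # Hypothetical close-geometry measurement: partner gamma total efficiency 0.18
        corr = summing_out_correction(p_coincident=1.0, total_eff_partner=0.18)
        print(f"peak area must be multiplied by {corr:.3f}")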

  2. Study on Production Management in Programming of Computer Numerical Control Machines

    Directory of Open Access Journals (Sweden)

    Gheorghe Popovici

    2014-12-01

    Full Text Available The paper presents the results of a study regarding the need for technological knowledge in programming machine tools with computer numerical control. Engineering is the science of making skilled things. That is why, in the "factory of the future", programming engineering will have to realise the part processing on MU-CNCs (Computer Numerical Control Machines) in the optimum economic variant. There is no "recipe" when it comes to technologies. In order to select the correct variant from among several technical variants, 10 technological requirements are put forward for the engineer to take into account in MU-CNC programming. It is the first argued synthesis of the need for technological knowledge in MU-CNC programming.

  3. Computational reduction techniques for numerical vibro-acoustic analysis of hearing aids

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester

    In this thesis, several challenges encountered in the process of modelling and optimizing hearing aids are addressed. Firstly, a strategy for modelling the contacts between plastic parts for harmonic analysis is developed. Irregularities in the contact surfaces, inherent to the manufacturing process of the parts... Secondly, the applicability of Model Order Reduction (MOR) techniques to lower the computational complexity of hearing aid vibro-acoustic models is studied. For fine frequency response calculation and optimization, which require solving the numerical model repeatedly, a computational challenge is encountered due to the large number of Degrees of Freedom (DOFs) needed to represent the complexity of the hearing aid system accurately. In this context, several MOR techniques are discussed, and an adaptive reduction method for vibro-acoustic optimization problems is developed as a main contribution. Lastly...

  4. Using monomer vibrational wavefunctions to compute numerically exact (12D) rovibrational levels of water dimer

    Science.gov (United States)

    Wang, Xiao-Gang; Carrington, Tucker

    2018-02-01

    We compute numerically exact rovibrational levels of water dimer, with 12 vibrational coordinates, on the accurate CCpol-8sf ab initio flexible monomer potential energy surface [C. Leforestier et al., J. Chem. Phys. 137, 014305 (2012)]. It does not have a sum-of-products or multimode form and therefore quadrature in some form must be used. To do the calculation, it is necessary to use an efficient basis set and to develop computational tools, for evaluating the matrix-vector products required to calculate the spectrum, that obviate the need to store the potential on a 12D quadrature grid. The basis functions we use are products of monomer vibrational wavefunctions and standard rigid-monomer basis functions (which involve products of three Wigner functions). Potential matrix-vector products are evaluated using the F matrix idea previously used to compute rovibrational levels of 5-atom and 6-atom molecules. When the coupling between inter- and intra-monomer coordinates is weak, this crude adiabatic type basis is efficient (only a few monomer vibrational wavefunctions are necessary), although the calculation of matrix elements is straightforward. It is much easier to use than an adiabatic basis. The product structure of the basis is compatible with the product structure of the kinetic energy operator and this facilitates computation of matrix-vector products. Compared with the results obtained using a [6 + 6]D adiabatic approach, we find good agreement for the inter-molecular levels and larger differences for the intra-molecular water bend levels.

  5. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    Science.gov (United States)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the difference between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes and to parallelize the Jacobi method in order to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. Yet, the PJ method reduces the computational time to some extent for large grid sizes.
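
    A minimal illustrative sketch of the sequential Jacobi iteration for the 3D Poisson equation on a uniform grid with zero Dirichlet boundaries (7-point stencil). It is written in Python rather than the paper's MATLAB environment, and the grid size, source term and tolerance are placeholders, not the micropump geometry.

        import numpy as np

        def jacobi_poisson_3d(f, h, tol=1e-5, max_iter=20_000):
            # Solve laplacian(phi) = f with phi = 0 on the boundary.
            phi = np.zeros_like(f)
            for it in range(max_iter):
                new = phi.copy()
                new[1:-1, 1:-1, 1:-1] = (
                    phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
                    phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
                    phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] -
                    h * h * f[1:-1, 1:-1, 1:-1]) / 6.0
                if np.max(np.abs(new - phi)) < tol:
                    return new, it + 1
                phi = new
            return phi, max_iter

        n, h = 24, 1.0 / 23                    # 24^3 nodes on a unit cube
        f = np.zeros((n, n, n))
        f[n // 2, n // 2, n // 2] = -1e3       # hypothetical point-like source term (-rho/eps)
        phi, iters = jacobi_poisson_3d(f, h)
        print(f"converged in {iters} Jacobi sweeps, phi_max = {phi.max():.4f}")

    A Gauss-Seidel sweep differs only in that it reuses already-updated neighbours within the same iteration, which roughly halves the iteration count but, unlike Jacobi, cannot be updated for all nodes in parallel.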

  6. Achieving high performance in numerical computations on RISC workstations and parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)

    1997-08-20

    The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give the scientists and engineers involved in numerically demanding calculations and simulations the necessary basic knowledge to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program by taking into account optimization techniques related to the computer architecture can significantly speed up your program, often by factors of 10-100. As such, optimizing a program can for instance be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed for obtaining an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels and the effort involved is therefore also acceptable.
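
    The factor-of-10-100 claim is easy to reproduce in spirit: the sketch below times a naive, interpreter-bound triple loop against the same matrix product carried out by an optimized BLAS call through NumPy. It only illustrates the general principle that architecture-aware code (blocking, vectorization, tuned libraries) dominates; the tutorial's own examples target compiled Fortran/C loop-level optimizations, not Python.

        import time
        import numpy as np

        def naive_matmul(a, b):
            # straightforward i-j-k loops, no blocking, no vectorization
            n = len(a)
            c = [[0.0] * n for _ in range(n)]
            for i in range(n):
                for j in range(n):
                    s = 0.0
                    for k in range(n):
                        s += a[i][k] * b[k][j]
                    c[i][j] = s
            return c

        n = 150
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)

        t0 = time.perf_counter(); naive_matmul(a.tolist(), b.tolist()); t1 = time.perf_counter()
        t2 = time.perf_counter(); a @ b;                                t3 = time.perf_counter()
        print(f"naive loops: {t1 - t0:.2f} s, optimized BLAS call: {t3 - t2:.5f} s")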

  7. Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    International Nuclear Information System (INIS)

    Katsaounis, T D

    2005-01-01

    The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. In summary, the book focuses on the computational and implementational issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack and the reader should have prior experience with the particular software in order to take full advantage of the book. Overall

  8. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  9. Computational domain discretization in numerical analysis of flow within granular materials

    Science.gov (United States)

    Sosnowski, Marcin

    2018-06-01

    The discretization of the computational domain is a crucial step in Computational Fluid Dynamics (CFD) because it influences not only the numerical stability of the analysed model but also the agreement between the obtained results and real data. Modelling flow in packed beds of granular materials is a very challenging task in terms of discretization due to the narrow spaces between spherical granules contacting tangentially at a single point. The standard approach to this issue results in a low-quality mesh and, in consequence, unreliable results. Therefore the common method is to reduce the diameter of the modelled granules in order to eliminate the single-point contact between the individual granules. The drawback of such a method is, among others, the distortion of the flow and of the contact heat resistance. Therefore an innovative method is proposed in the paper: the single-point contact is extended to a cylinder-shaped volume contact. Such an approach eliminates the low-quality mesh elements and simultaneously introduces only a slight distortion to the flow as well as to the contact heat transfer. The performed analysis of numerous test cases proves the great potential of the proposed method of meshing packed beds of granular materials.

  10. On a numerical strategy to compute gravity currents of non-Newtonian fluids

    International Nuclear Information System (INIS)

    Vola, D.; Babik, F.; Latche, J.-C.

    2004-01-01

    This paper is devoted to the presentation of a numerical scheme for the simulation of gravity currents of non-Newtonian fluids. The two-dimensional computational grid is fixed and the free surface is described as a polygonal interface, independent of the grid and advanced in time by a Lagrangian technique. The Navier-Stokes equations are semi-discretized in time by the characteristic-Galerkin method, which finally leads to solving a generalized Stokes problem posed on a physical domain, limited by the free surface, that covers only part of the computational grid. To this purpose, we implement a Galerkin technique with a particular approximation space, defined as the restriction to the fluid domain of functions of a finite element space. The decomposition-coordination method allows us to deal, without any regularization, with a variety of non-linear and possibly non-differentiable constitutive laws. Besides more analytical tests, we revisit with this numerical method some simulations of gravity currents reported in the literature, up to now investigated within the simplified thin-flow approximation framework.

  11. WATERLOOP V2/64: A highly parallel machine for numerical computation

    Science.gov (United States)

    Ostlund, Neil S.

    1985-07-01

    Current technological trends suggest that the high performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to be anything like the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple processor architecture which attempts a solution to the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, which is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.

  12. Description of the TREBIL, CRESSEX and STREUSL computer programs, that belongs to RALLY computer code pack for the analysis of reliability systems

    International Nuclear Information System (INIS)

    Fernandes Filho, T.L.

    1982-11-01

    The RALLY computer code pack (RALLY pack) is a set of computer codes intended for the reliability analysis of complex systems, aiming at risk analysis. Three of the six codes are described, presenting their purpose, input description, calculation methods and the results obtained with each of them. The computer codes are: TREBIL, to obtain the logical equivalent of the fault tree; CRESSEX, to obtain the minimal cut sets and the point values of the unreliability and unavailability of the system; and STREUSL, for the calculation of the dispersion of those values around the mean. Although CRESSEX, in the version available at CNEN, uses a rather lengthy method to obtain the minimal cut sets on an HB-CNEN system, the three computer programs show good results, especially STREUSL, which permits the simulation of various components. (E.G.) [pt

  13. The application of computational thermodynamics and a numerical model for the determination of surface tension and Gibbs-Thomson coefficient of aluminum based alloys

    International Nuclear Information System (INIS)

    Jacome, Paulo A.D.; Landim, Mariana C.; Garcia, Amauri; Furtado, Alexandre F.; Ferreira, Ivaldo L.

    2011-01-01

    Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.
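
    A minimal sketch of the kind of calculation Butler's formulation involves, here reduced to an ideal-solution binary liquid: the surface-monolayer composition is found as the root that makes the two Butler expressions for the surface tension agree, and the surface tension follows. The pure-component surface tensions, molar surface areas and temperature below are placeholder values, and the paper's coupling of the Powell hybrid algorithm to computed excess Gibbs energies (ThermoCalc) is not reproduced.

        import math
        from scipy.optimize import brentq

        R = 8.314  # J/(mol K)

        def butler_ideal(x_b_bulk, T, sigma_a, sigma_b, area_a, area_b):
            # Ideal-solution Butler equations: sigma = sigma_i + (R T / A_i) * ln(x_i_surf / x_i_bulk)
            def mismatch(x_b_surf):
                s_a = sigma_a + R * T / area_a * math.log((1 - x_b_surf) / (1 - x_b_bulk))
                s_b = sigma_b + R * T / area_b * math.log(x_b_surf / x_b_bulk)
                return s_a - s_b
            x_b_surf = brentq(mismatch, 1e-6, 1 - 1e-6)
            return sigma_a + R * T / area_a * math.log((1 - x_b_surf) / (1 - x_b_bulk))

        # Placeholder values loosely in the range of Al-rich liquid alloys (illustrative only)
        sigma = butler_ideal(x_b_bulk=0.05, T=950.0,
                             sigma_a=0.90, sigma_b=1.80,     # N/m, pure-component surface tensions
                             area_a=6.0e4, area_b=4.5e4)     # m^2/mol, molar surface areas
        print(f"estimated surface tension: {sigma:.3f} N/m")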

  14. The application of computational thermodynamics and a numerical model for the determination of surface tension and Gibbs-Thomson coefficient of aluminum based alloys

    Energy Technology Data Exchange (ETDEWEB)

    Jacome, Paulo A.D.; Landim, Mariana C. [Department of Mechanical Engineering, Fluminense Federal University, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil); Garcia, Amauri, E-mail: amaurig@fem.unicamp.br [Department of Materials Engineering, University of Campinas, UNICAMP, PO Box 6122, 13083-970 Campinas, SP (Brazil); Furtado, Alexandre F.; Ferreira, Ivaldo L. [Department of Mechanical Engineering, Fluminense Federal University, Av. dos Trabalhadores, 420-27255-125 Volta Redonda, RJ (Brazil)

    2011-08-20

    Highlights: → Surface tension and the Gibbs-Thomson coefficient are computed for Al-based alloys. → Butler's scheme and ThermoCalc are used to compute the thermophysical properties. → Predictive cell/dendrite growth models depend on accurate thermophysical properties. → Mechanical properties can be related to the microstructural cell/dendrite spacing. - Abstract: In this paper, a solution for Butler's formulation is presented permitting the surface tension and the Gibbs-Thomson coefficient of Al-based binary alloys to be determined. The importance of Gibbs-Thomson coefficient for binary alloys is related to the reliability of predictions furnished by predictive cellular and dendritic growth models and of numerical computations of solidification thermal variables, which will be strongly dependent on the thermophysical properties assumed for the calculations. A numerical model based on Powell hybrid algorithm and a finite difference Jacobian approximation was coupled to a specific interface of a computational thermodynamics software in order to assess the excess Gibbs energy of the liquid phase, permitting the surface tension and Gibbs-Thomson coefficient for Al-Fe, Al-Ni, Al-Cu and Al-Si hypoeutectic alloys to be calculated. The computed results are presented as a function of the alloy composition.

  15. THE EFFECT OF THE PICTORIAL NUMERIC CARD MEDIA TOWARD IMPROVEMENT OF THE SUMMATION COMPUTATION ABILITY FOR STUDENT WITH INTELLECTUAL DISSABILITY

    OpenAIRE

    Isna Nur Hikmah; Usep Kustiawan

    2016-01-01

    The purpose of the research was to analyze the effect of pictorial numeric card media on the improvement of the summation computation ability of a student with intellectual disability in grade IV at SDLB. The collected data were analyzed with an experimental technique and a single-subject research A-B design. The research results showed that, after analysis, the between-condition overlap percentage was 0%. Thus, it could be concluded that there was an effect of the pictorial numeric card media on the summation computation ability...

  16. BOOK REVIEW: Advanced Topics in Computational Partial Differential Equations: Numerical Methods and Diffpack Programming

    Science.gov (United States)

    Katsaounis, T. D.

    2005-02-01

    equations in Diffpack can be used to derive fully implicit solvers for systems. The proposed techniques are illustrated in terms of two applications, namely a system of PDEs modelling pipeflow and a two-phase porous media flow. Stochastic PDEs are the topic of chapter 7. The first part of the chapter is a simple introduction to stochastic PDEs; basic analytical properties are presented for simple models like transport phenomena and viscous drag forces. The second part considers the numerical solution of stochastic PDEs. Two basic techniques are presented, namely Monte Carlo and perturbation methods. The last part explains how to implement and incorporate these solvers into Diffpack. Chapter 8 describes how to operate Diffpack from Python scripts. The main goal here is to provide all the programming and technical details in order to glue the programming environment of Diffpack with visualization packages through Python and in general take advantage of the Python interfaces. Chapter 9 attempts to show how to use numerical experiments to measure the performance of various PDE solvers. The authors gathered a rather impressive list, a total of 14 PDE solvers. Solvers for problems like Poisson, Navier-Stokes, elasticity, two-phase flows and methods such as finite difference, finite element, multigrid, and gradient type methods are presented. The authors provide a series of numerical results combining various solvers with various methods in order to gain insight into their computational performance and efficiency. In Chapter 10 the authors consider a computationally challenging problem, namely the computation of the electrical activity of the human heart. After a brief introduction on the biology of the problem the authors present the mathematical models involved and a numerical method for solving them within the framework of Diffpack. Chapters 11 and 12 are closely related; actually they could have been combined in a single chapter. Chapter 11 introduces several mathematical

  17. Accuracy and reliability of stitched cone-beam computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Egbert, Nicholas [Private Practice, Reconstructive Dental Specialists of Utah, Salt Lake (United States); Cagna, David R.; Ahuja, Swati; Wicks, Russell A. [Dept. of Prosthodontics, University of Tennessee Health Science Center College of Dentistry, Memphis (United States)

    2015-03-15

    This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 - +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.

  18. Task analysis and computer aid development for human reliability analysis in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, W. C.; Kim, H.; Park, H. S.; Choi, H. H.; Moon, J. M.; Heo, J. Y.; Ham, D. H.; Lee, K. K.; Han, B. T. [Korea Advanced Institute of Science and Technology, Taejeon (Korea)

    2001-04-01

    The importance of human reliability analysis (HRA), which predicts the possibility of error occurrence in quantitative and qualitative manners, has gradually increased owing to the effects of human errors on system safety. HRA needs a task analysis as a prerequisite step, but extant task analysis techniques have the problem that the collection of information about the situation in which the human error occurs depends entirely on the HRA analysts. This problem makes the results of the task analysis inconsistent and unreliable. To remedy this problem, KAERI developed the structural information analysis (SIA), which helps to analyze a task's structure and situations systematically. In this study, the SIA method was evaluated by HRA experts, and a prototype computerized supporting system named CASIA (Computer Aid for SIA) was developed for the purpose of supporting the performance of HRA using the SIA method. Additionally, by applying the SIA method to emergency operating procedures, we derived generic task types used in emergencies and accumulated the analysis results in the database of CASIA. CASIA is expected to help HRA analysts perform the analysis more easily and consistently. If more analyses are performed and more data are accumulated in CASIA's database, HRA analysts will be able to share and spread their analysis experiences freely, and thereby the quality of HRA analyses will improve. 35 refs., 38 figs., 25 tabs. (Author)

  19. Accuracy and reliability of stitched cone-beam computed tomography images

    International Nuclear Information System (INIS)

    Egbert, Nicholas; Cagna, David R.; Ahuja, Swati; Wicks, Russell A.

    2015-01-01

    This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 - +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.

  20. Accuracy and reliability of stitched cone-beam computed tomography images.

    Science.gov (United States)

    Egbert, Nicholas; Cagna, David R; Ahuja, Swati; Wicks, Russell A

    2015-03-01

    This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 - +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.

  1. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    Science.gov (United States)

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  2. Test-retest reliability of computer-based video analysis of general movements in healthy term-born infants.

    Science.gov (United States)

    Valle, Susanne Collier; Støen, Ragnhild; Sæther, Rannei; Jensenius, Alexander Refsum; Adde, Lars

    2015-10-01

    A computer-based video analysis has recently been presented for quantitative assessment of general movements (GMs). This method's test-retest reliability, however, has not yet been evaluated. The aim of the current study was to evaluate the test-retest reliability of computer-based video analysis of GMs, and to explore the association between computer-based video analysis and the temporal organization of fidgety movements (FMs). Test-retest reliability study. 75 healthy, term-born infants were recorded twice the same day during the FMs period using a standardized video set-up. The computer-based movement variables "quantity of motion mean" (Qmean), "quantity of motion standard deviation" (QSD) and "centroid of motion standard deviation" (CSD) were analyzed, reflecting the amount of motion and the variability of the spatial center of motion of the infant. In addition, the association between the variable CSD and the temporal organization of FMs was explored. Intraclass correlation coefficients (ICC 1.1 and ICC 3.1) were calculated to assess test-retest reliability. The ICC values for the variables CSD, Qmean and QSD were 0.80, 0.80 and 0.86 for ICC (1.1), respectively; and 0.80, 0.86 and 0.90 for ICC (3.1), respectively. There were significantly lower CSD values in the recordings with continual FMs compared to the recordings with intermittent FMs. The study thus showed high test-retest reliability of computer-based video analysis of GMs, and a significant association between the computer-based video analysis and the temporal organization of FMs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
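
    For reference, the two reliability coefficients quoted above can be computed from ANOVA mean squares; the sketch below follows the Shrout and Fleiss conventions for ICC(1,1) and ICC(3,1) and uses random placeholder data rather than the study's recordings.

```python
# Minimal sketch of ICC(1,1) and ICC(3,1) computed from the ANOVA mean squares
# of an n-subjects x k-sessions matrix (Shrout & Fleiss conventions).
import numpy as np

def icc_1_1_and_3_1(Y):
    """Y: (n, k) array, one row per subject, one column per repeated measurement."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)

    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between sessions
    ss_error = ss_total - ss_rows - ss_cols          # residual (two-way layout)
    ss_within = ss_total - ss_rows                   # within subjects (one-way layout)

    msr = ss_rows / (n - 1)
    msw = ss_within / (n * (k - 1))
    mse = ss_error / ((n - 1) * (k - 1))

    icc11 = (msr - msw) / (msr + (k - 1) * msw)      # one-way random effects
    icc31 = (msr - mse) / (msr + (k - 1) * mse)      # two-way mixed, consistency
    return icc11, icc31

# Placeholder data: 75 "infants", 2 recordings each, true ICC close to 0.8.
rng = np.random.default_rng(0)
subject_effect = rng.normal(0.0, 1.0, size=(75, 1))
Y = subject_effect + rng.normal(0.0, 0.5, size=(75, 2))
print(icc_1_1_and_3_1(Y))
```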

  3. Numerical simulation of mechatronic sensors and actuators finite elements for computational multiphysics

    CERN Document Server

    Kaltenbacher, Manfred

    2015-01-01

    Like the previous editions, the third edition of this book combines the detailed physical modeling of mechatronic systems with their precise numerical simulation using the Finite Element (FE) method. The basic chapter concerning the FE method has been enhanced: it now also provides a description of higher order finite elements (both nodal and edge finite elements) and a detailed discussion of non-conforming mesh techniques. The author enhances and improves many discussions of principles and methods. In particular, more emphasis is put on the description of single fields by adding the flow field. Corresponding to these fields, the book is augmented with a new chapter about coupled flow-structural mechanical systems. The discussion of computational aeroacoustics is extended towards perturbation approaches, which allow a decomposition of flow and acoustic quantities within the flow region. Last but not least, applications are updated and restructured so that the book meets mode...

  4. Analysis of the transformations temperatures of helicoidal Ti-Ni actuators using computational numerical methods

    Directory of Open Access Journals (Sweden)

    Carlos Augusto do N. Oliveira

    2013-01-01

    Full Text Available The development of shape memory actuators has enabled noteworthy applications in mechanical engineering, robotics, aerospace, and the oil industry, as well as in medicine. These applications have been aimed at miniaturization and at making full use of available space. This article analyses a Ti-Ni shape memory actuator used as part of a flow control system. A Ti-Ni spring actuator was subjected to thermomechanical training, and parameters such as transformation temperature, thermal hysteresis and shape memory effect performance were investigated. These parameters are important for understanding the behavior of the actuator in relation to the martensitic phase transformation during the heating and cooling cycles which it undergoes in service. The multiple regression methodology was used as a computational tool for analysing the data in order to simulate and predict the results for stresses and cycles for which experimental data were not available. The results obtained from the training cycles enable the actuators to be characterized and the numerical simulation to be validated.

  5. Programming for computations MATLAB/Octave : a gentle introduction to numerical simulations with MATLAB/Octave

    CERN Document Server

    Linge, Svein

    2016-01-01

    This book presents computer programming as a key method for solving mathematical problems. There are two versions of the book, one for MATLAB and one for Python. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows the students to write simple programs for solving common mathematical problems with numerical methods in engineering and science courses. The emphasis is on generic algorithms, clean design of programs, use of functions, and automatic tests for verification.

  6. Programming for computations Python : a gentle introduction to numerical simulations with Python

    CERN Document Server

    Linge, Svein

    2016-01-01

    This book presents computer programming as a key method for solving mathematical problems. There are two versions of the book, one for MATLAB and one for Python. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows the students to write simple programs for solving common mathematical problems with numerical methods in engineering and science courses. The emphasis is on generic algorithms, clean design of programs, use of functions, and automatic tests for verification.

  7. Computational Model and Numerical Simulation for Submerged Mooring Monitoring Platform’s Dynamical Response

    Directory of Open Access Journals (Sweden)

    He Kongde

    2015-01-01

    Full Text Available A computational model and numerical simulation for a submerged mooring monitoring platform were formulated for the dynamical response under the action of flow forces, based on Hopkinson impact load theory, taking into account the catenary effect of the mooring cable and correcting the difference between the tension and the tangential action force by an equivalent modulus of elasticity. The equations were solved using hydraulics theory and the structural mechanics theory of ocean engineering, and the response of the buoy to the flow force was studied. The validity of the model was checked and the results were in good agreement; they show that the buoy undergoes fairly large heave and sway displacements, but the sway displacement stabilizes quickly, while the heave displacement exhibits vibration due to the vortex-induced action of the flow.

  8. HYDRA-II: A hydrothermal analysis computer code: Volume 1, Equations and numerics

    International Nuclear Information System (INIS)

    McCann, R.A.

    1987-04-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits of modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. This volume, Volume I - Equations and Numerics, describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. The final volume, Volume III - Verification/Validation Assessments, presents results of numerical simulations of single- and multiassembly storage systems and comparisons with experimental data. 4 refs

  9. Numerical simulation of fragmentation of hot metal and oxide melts with the computer code IVA3

    International Nuclear Information System (INIS)

    Mussa, S.; Tromm, W.

    1994-01-01

    The phenomena of fragmentation of melts caused by water inlet from the bottom are investigated with the computer code IVA3 /11,12,13/. With the computer code IVA3, three-component multiphase flows can be numerically simulated. Two geometrical models are used. Both consist of a cylindrical vessel for water lying beneath a cylindrical vessel for melt. The vessels are connected to each other through a hole. Steel and UO2 melts are considered. The following parameters were varied: the type of the melt (steel, UO2), the water supply pressure and the geometry of the hole in the bottom plate through which the water and melt vessels are connected. As results of the numerical simulations, temperature and pressure versus time curves are plotted. Additionally, the volume flow rates and the volume fractions of the various phases in the vessels and the increase in surface area and enthalpy of the melt during the simulated time are depicted. With steel melts the rate of fragmentation increases with increasing water pressure and melt temperature, whereby stable channels are formed in the melt layer showing a very low flow resistance for steam. With UO2 the formation of channels is also observed; however, these channels are less stable, so that they eventually break apart and lead to the fragmentation of the UO2 melt into drops. The fragmentation of the steel melt in the water vessel is less than that of UO2. No essential solidification of the melt is observed within the duration of the simulations; however, a small drop in the melt temperature is observed. With slight or no water pressure, the melt flows from the upper vessel into the water vessel via the connecting hole. The processes take place very slowly and with such low steam production that, despite the occurring pressure peaks, no sign of steam explosions could be observed. (orig./HP) [de

  10. Numerical analysis

    CERN Document Server

    Khabaza, I M

    1960-01-01

    Numerical Analysis is an elementary introduction to numerical analysis, its applications, limitations, and pitfalls. Methods suitable for digital computers are emphasized, but some desk computations are also described. Topics covered range from the use of digital computers in numerical work to errors in computations using desk machines, finite difference methods, and numerical solution of ordinary differential equations. This book is comprised of eight chapters and begins with an overview of the importance of digital computers in numerical analysis, followed by a discussion on errors in comput

  11. A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls

    Directory of Open Access Journals (Sweden)

    Arun Arjunan

    2015-08-01

    Full Text Available Architects, designers, and engineers are making great efforts to design acoustically-efficient metal-framed walls, minimizing acoustic bridging. Therefore, efficient simulation models to predict the acoustic insulation complying with ISO 10140 are needed at a design stage. In order to achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned using a metal-framed wall, is to be simulated at one-third-octaves. This produces a large simulation model consisting of several millions of nodes and elements. Therefore, efficient meshing procedures are necessary to obtain better solution times and to effectively utilise computational resources. Such models should also demonstrate effective Fluid-Structure Interaction (FSI along with acoustic-fluid coupling to simulate a realistic scenario. In this contribution, the development of a finite element frequency-dependent mesh model that can characterize the sound insulation of metal-framed walls is presented. Preliminary results on the application of the proposed model to study the geometric contribution of stud frames on the overall acoustic performance of metal-framed walls are also presented. It is considered that the presented numerical model can be used to effectively visualize the noise behaviour of advanced materials and multi-material structures.

  12. Numerical computations of interior transmission eigenvalues for scattering objects with cavities

    International Nuclear Information System (INIS)

    Peters, Stefan; Kleefeld, Andreas

    2016-01-01

    In this article we extend the inside-outside duality for acoustic transmission eigenvalue problems by allowing scattering objects that may contain cavities. In this context we provide the functional analytical framework necessary to transfer the techniques that have been used in Kirsch and Lechleiter (2013 Inverse Problems, 29 104011) to derive the inside-outside duality. Additionally, extensive numerical results are presented to show that we are able to successfully detect interior transmission eigenvalues with the inside-outside duality approach for a variety of obstacles with and without cavities in three dimensions. In this context, we also discuss the advantages and disadvantages of the inside-outside duality approach from a numerical point of view. Furthermore we derive the integral equations necessary to extend the algorithm in Kleefeld (2013 Inverse Problems, 29 104012) to compute highly accurate interior transmission eigenvalues for scattering objects with cavities, which we will then use as reference values to examine the accuracy of the inside-outside duality algorithm. (paper)

  13. New algorithm to reduce the number of computing steps in reliability formula of Weighted-k-out-of-n system

    Directory of Open Access Journals (Sweden)

    Tatsunari Ohkura

    2007-02-01

    Full Text Available In the disjoint products version of reliability analysis of weighted-k-out-of-n systems, it is necessary to determine the order in which the weight of components is to be considered. The k-out-of-n:G(F) system consists of n components; each component has its own probability and positive integer weight such that the system is operational (failed) if and only if the total weight of some operational (failed) components is at least k. This paper designs a method to compute the reliability in O(nk) computing time and O(nk) memory space. The proposed method expresses the system reliability in fewer product terms than those already published.
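
    A standard way to reach the O(nk) complexity quoted above is a dynamic programming recursion over components; the sketch below computes the reliability of a weighted-k-out-of-n:G system this way. It illustrates the complexity class, not the paper's disjoint-products ordering scheme, and the data are placeholders.

```python
# Minimal O(n*k) sketch for a weighted-k-out-of-n:G system: the system works
# iff the total weight of working components reaches k.
def weighted_k_out_of_n_G(k, weights, probs):
    """weights[i]: positive integer weight, probs[i]: working probability."""
    # f[w] = probability that the components processed so far contribute a
    # working weight of exactly w, with everything >= k lumped into f[k].
    f = [0.0] * (k + 1)
    f[0] = 1.0
    for wi, pi in zip(weights, probs):
        g = [0.0] * (k + 1)
        for w in range(k + 1):
            if f[w] == 0.0:
                continue
            g[w] += f[w] * (1.0 - pi)          # component fails
            g[min(k, w + wi)] += f[w] * pi     # component works
        f = g
    return f[k]

# Example: 4 components with weights 2, 3, 1, 4 and threshold k = 5.
print(weighted_k_out_of_n_G(5, [2, 3, 1, 4], [0.9, 0.8, 0.95, 0.7]))
```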

  14. Electronic structure of BN-aromatics: Choice of reliable computational tools

    Science.gov (United States)

    Mazière, Audrey; Chrostowska, Anna; Darrigan, Clovis; Dargelos, Alain; Graciaa, Alain; Chermette, Henry

    2017-10-01

    The importance of having reliable calculation tools to interpret and predict the electronic properties of BN-aromatics is directly linked to the growing interest for these very promising new systems in the field of materials science, biomedical research, or energy sustainability. Ionization energy (IE) is one of the most important parameters to approach the electronic structure of molecules. It can be theoretically estimated, but in order to evaluate their persistence and propose the most reliable tools for the evaluation of different electronic properties of existent or only imagined BN-containing compounds, we took as reference experimental values of ionization energies provided by ultra-violet photoelectron spectroscopy (UV-PES) in gas phase—the only technique giving access to the energy levels of filled molecular orbitals. Thus, a set of 21 aromatic molecules containing B-N bonds and B-N-B patterns has been merged for a comparison between experimental IEs obtained by UV-PES and various theoretical approaches for their estimation. Time-Dependent Density Functional Theory (TD-DFT) methods using B3LYP and long-range corrected CAM-B3LYP functionals are used, combined with the Δ SCF approach, and compared with electron propagator theory such as outer valence Green's function (OVGF, P3) and symmetry adapted cluster-configuration interaction ab initio methods. Direct Kohn-Sham estimation and "corrected" Kohn-Sham estimation are also given. The deviation between experimental and theoretical values is computed for each molecule, and a statistical study is performed over the average and the root mean square for the whole set and sub-sets of molecules. It is shown that (i) Δ SCF+TDDFT(CAM-B3LYP), OVGF, and P3 are the most efficient way for a good agreement with UV-PES values, (ii) a CAM-B3LYP range-separated hybrid functional is significantly better than B3LYP for the purpose, especially for extended conjugated systems, and (iii) the "corrected" Kohn-Sham result is a

  15. Infragravity wave generation and dynamics over a mild slope beach : Experiments and numerical computations

    Science.gov (United States)

    Cienfuegos, R.; Duarte, L.; Hernandez, E.

    2008-12-01

    Characteristic frequencies of gravity waves generated by wind and propagating towards the coast usually lie between 0.05 Hz and 1 Hz. Nevertheless, lower frequency waves, in the range of 0.001 Hz to 0.05 Hz, have been observed in the nearshore zone. Those long waves, termed infragravity waves, are generated by complex nonlinear mechanisms affecting the propagation of irregular waves up to the coast. The groupiness of an incident random wave field may be responsible for producing a slow modulation of the mean water surface, thus generating bound long waves travelling at the group speed. Similarly, a quasi-periodic oscillation of the break-point location will be accompanied by a slow modulation of set-up/set-down in the surf zone and the generation and release of long waves. If the primary structure of the carrying incident gravity waves is destroyed (e.g. by breaking), forced long waves can be freely released and even reflected at the coast. Infragravity waves can affect port operation through resonating conditions, or strongly affect sediment transport and beach morphodynamics. In the present study we investigate infragravity wave generation mechanisms both from experiments and from numerical computations. Measurements were conducted at the 70-meter long wave tank located at the Instituto Nacional de Hidraulica (Chile), prepared with a beach of very mild slope of 1/80 in order to produce large surf zone extensions. A random JONSWAP-type wave field (h0=0.52m, fp=0.25Hz, Hmo=0.17m) was generated by a piston wave-maker, and measurements of the free surface displacements were performed over its whole length at high spatial resolution (0.2m to 1m). Velocity profiles were also measured at four verticals inside the surf zone using an ADV. Correlation maps of wave group envelopes and infragravity waves are computed in order to identify long wave generation and dynamics in the experimental set-up. It appears that both mechanisms (groupiness and break-point oscillation) are

  16. Energy conserving numerical methods for the computation of complex vortical flows

    Science.gov (United States)

    Allaneau, Yves

    One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some people consider that as their size becomes smaller and smaller, it would be increasingly more difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists took inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, the biomimicry design has its own limitations and it is difficult to place a hummingbird in a wind tunnel to study precisely the motion of its wings. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to be able to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0 when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to needing an adequate numerical scheme, a high fidelity solution requires many degrees of freedom in the computations to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogoroff scale. Capturing these eddies requires a mesh counting in the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However our

  17. Numerical Aspects of Eigenvalue and Eigenfunction Computations for Chaotic Quantum Systems

    Science.gov (United States)

    Bäcker, A.

    Summary: We give an introduction to some of the numerical aspects in quantum chaos. The classical dynamics of two-dimensional area-preserving maps on the torus is illustrated using the standard map and a perturbed cat map. The quantization of area-preserving maps given by their generating function is discussed and for the computation of the eigenvalues a computer program in Python is presented. We illustrate the eigenvalue distribution for two types of perturbed cat maps, one leading to COE and the other to CUE statistics. For the eigenfunctions of quantum maps we study the distribution of the eigenvectors and compare them with the corresponding random matrix distributions. The Husimi representation allows for a direct comparison of the localization of the eigenstates in phase space with the corresponding classical structures. Examples for a perturbed cat map and the standard map with different parameters are shown. Billiard systems and the corresponding quantum billiards are another important class of systems (which are also relevant to applications, for example in mesoscopic physics). We provide a detailed exposition of the boundary integral method, which is one important method to determine the eigenvalues and eigenfunctions of the Helmholtz equation. We discuss several methods to determine the eigenvalues from the Fredholm equation and illustrate them for the stadium billiard. The occurrence of spurious solutions is discussed in detail and illustrated for the circular billiard, the stadium billiard, and the annular sector billiard. We emphasize the role of the normal derivative function to compute the normalization of eigenfunctions, momentum representations or autocorrelation functions in a very efficient and direct way. Some examples for these quantities are given and discussed.
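
    The quantization of area-preserving maps mentioned above can be illustrated in a few lines; the sketch below builds the one-kick propagator of the standard map on a torus and extracts its eigenphases for spacing statistics. It is not the authors' program, and grid and phase conventions vary between references; this is one common choice.

```python
# Minimal sketch: quantized standard map (kicked rotor) on a torus with an
# N-dimensional Hilbert space and effective Planck constant h_eff = 1/N.
# The one-kick propagator is built in the position basis via the DFT, and its
# eigenphases (nearest-neighbour spacings) can be compared with random-matrix
# predictions.  Conventions are one common choice, not the paper's program.
import numpy as np

def standard_map_propagator(N, K):
    j = np.arange(N)
    q = j / N                      # position grid on the unit torus
    p = j / N                      # momentum grid
    kick = np.exp(-2j * np.pi * N * (K / (4 * np.pi**2)) * np.cos(2 * np.pi * q))
    free = np.exp(-2j * np.pi * N * 0.5 * p**2)
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)          # discrete Fourier transform matrix
    # apply the kick in the position basis, the free rotation in the momentum basis
    return F.conj().T @ np.diag(free) @ F @ np.diag(kick)

U = standard_map_propagator(N=128, K=10.0)
phases = np.sort(np.angle(np.linalg.eigvals(U)))
spacings = np.diff(phases)
spacings = spacings / spacings.mean()               # unfolded nearest-neighbour spacings
print(spacings[:5])
```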

  18. Direct numerical simulation of reactor two-phase flows enabled by high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.; Feng, Jinyong; Gouws, Andre; Li, Mengnan; Bolotnov, Igor A.

    2018-04-01

    Nuclear reactor two-phase flows remain a great engineering challenge, where the high-resolution two-phase flow database which can inform practical model development is still sparse due to the extreme reactor operation conditions and measurement difficulties. Owing to the rapid growth of computing power, the direct numerical simulation (DNS) is enjoying a renewed interest in investigating the related flow problems. A combination between DNS and an interface tracking method can provide a unique opportunity to study two-phase flows based on first principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews the recent research progress of two-phase flow DNS related to reactor applications. The progress in large-scale bubbly flow DNS has been focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, as well as the advanced post-processing toolkit for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of DNS database in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, the single-and two-phase analysis results are presented for turbulent flows within the pressurized water reactor (PWR) core geometries. The related simulations are possible to carry out only with the world leading HPC platforms. These simulations are allowing more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.

  19. Strength and Reliability of Wood for the Components of Low-cost Wind Turbines: Computational and Experimental Analysis and Applications

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Freere, Peter; Sharma, Ranjan

    2009-01-01

    This paper reports the latest results of the comprehensive program of experimental and computational analysis of strength and reliability of wooden parts of low cost wind turbines. The possibilities of prediction of strength and reliability of different types of wood are studied in the series of experiments and computational investigations. Low cost testing machines have been designed, and employed for the systematic analysis of different sorts of Nepali wood, to be used for the wind turbine construction. At the same time, computational micromechanical models of deformation and strength of wood are developed, which should provide the basis for microstructure-based correlating of observable and service properties of wood. Some correlations between microstructure, strength and service properties of wood have been established.

  20. THE EFFECT OF THE PICTORIAL NUMERIC CARD MEDIA TOWARD IMPROVEMENT OF THE SUMMATION COMPUTATION ABILITY FOR STUDENT WITH INTELLECTUAL DISABILITY

    Directory of Open Access Journals (Sweden)

    Isna Nur Hikmah

    2016-12-01

    Full Text Available The research's purpose was to analyze the effect of pictorial numeric card media on the improvement of the summation computation ability of a student with an intellectual disability in grade IV at an SDLB. The collected data were analyzed with an experimental technique and a single-subject research A-B design. The results showed that, after analysis, the between-condition overlap percentage was 0%. Thus, it could be concluded that there was an effect of the pictorial numeric card media on the summation computation ability of the student with an intellectual disability.

  1. Proceeding of 1999-workshop on MHD computations 'study on numerical methods related to plasma confinement'

    Energy Technology Data Exchange (ETDEWEB)

    Kako, T.; Watanabe, T. [eds.

    2000-06-01

    This is the proceedings of the workshop 'Study on numerical methods related to plasma confinement' held at the National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria and their stability properties are presented. There are also various lectures on mathematical as well as numerical analyses related to computational methods for fluid dynamics and plasma physics. Separate abstracts were presented for 13 of the papers in this report. The remaining 6 were considered outside the subject scope of INIS. (J.P.N.)

  2. Numerical Computation of Underground Inundation in Multiple Layers Using the Adaptive Transfer Method

    Directory of Open Access Journals (Sweden)

    Hyung-Jun Kim

    2018-01-01

    Full Text Available Extreme rainfall causes surface runoff to flow towards lowlands and subterranean facilities, such as subway stations and buildings with underground spaces in densely packed urban areas. These facilities and areas are therefore vulnerable to catastrophic submergence. However, flood modeling of underground space has not yet been adequately studied because there are difficulties in reproducing the associated multiple horizontal layers connected with staircases or elevators. This study proposes a convenient approach to simulate underground inundation when two layers are connected. The main facet of this approach is to compute the flow flux passing through staircases in an upper layer and to transfer the equivalent quantity to a lower layer. This is defined as the ‘adaptive transfer method’. This method overcomes the limitations of 2D modeling by introducing layers connecting concepts to prevent large variations in mesh sizes caused by complicated underlying obstacles or local details. Consequently, this study aims to contribute to the numerical analysis of flow in inundated underground spaces with multiple floors.
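
    As a rough illustration of the transfer idea (the paper's actual flux formulation is not reproduced here), the sketch below assumes a simple weir-type discharge through a staircase opening and moves the corresponding volume from an upper-layer cell to the lower-layer cell at each time step; the discharge coefficient and all names are placeholders.

```python
# Illustrative sketch only: a weir-type flux through a staircase opening is an
# assumption made here for demonstration, not the paper's formulation.
import math

G = 9.81  # gravitational acceleration, m/s^2

def staircase_flux(depth_upper, width, cd=0.5):
    """Discharge (m^3/s) spilling through a staircase opening of given width."""
    if depth_upper <= 0.0:
        return 0.0
    return (2.0 / 3.0) * cd * width * math.sqrt(2.0 * G) * depth_upper ** 1.5

def transfer(h_upper, h_lower, cell_area, width, dt):
    """Move water from an upper-layer cell to the lower-layer cell below it."""
    q = staircase_flux(h_upper, width)
    dv = min(q * dt, h_upper * cell_area)   # cannot move more water than is present
    return h_upper - dv / cell_area, h_lower + dv / cell_area

h_up, h_dn = 0.30, 0.00                      # water depths (m), placeholder values
for _ in range(100):
    h_up, h_dn = transfer(h_up, h_dn, cell_area=25.0, width=2.0, dt=1.0)
print(h_up, h_dn)
```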

  3. Numerical and Experimental Investigation of Computed Tomography of Chemiluminescence for Hydrogen-Air Premixed Laminar Flames

    Directory of Open Access Journals (Sweden)

    Liang Lv

    2016-01-01

    Full Text Available Computed tomography of chemiluminescence (CTC is a promising technique for combustion diagnostics, providing instantaneous 3D information of flame structures, especially in harsh circumstance. This work focuses on assessing the feasibility of CTC and investigating structures of hydrogen-air premixed laminar flames using CTC. A numerical phantom study was performed to assess the accuracy of the reconstruction algorithm. A well-designed burner was used to generate stable hydrogen-air premixed laminar flames. The OH⁎ chemiluminescence intensity field reconstructed from 37 views using CTC was compared to the OH⁎ chemiluminescence distributions recorded directly by a single ICCD camera from the side view. The flame structures in different flow velocities and equivalence ratios were analyzed using the reconstructions. The results show that the CTC technique can effectively indicate real distributions of the flame chemiluminescence. The height of the flame becomes larger with increasing flow velocities, whereas it decreases with increasing equivalence ratios (no larger than 1. The increasing flow velocities gradually lift the flame reaction zones. A critical cone angle of 4.76 degrees is obtained to avoid blow-off. These results set up a foundation for next studies and the methods can be further developed to reconstruct 3D structures of flames.

  4. Numerical Simulation of Mixing in a Micro-well Scale Bioreactor by Computational Fluid Dynamics

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The introduction of the multi-well plate miniaturisation technology, with its associated automated dispensers, readers and integrated systems, coupled with advances in the life sciences, has a propelling effect on the rate at which new potential drug molecules are discovered. The translation of these discoveries into real outcomes now demands parallel approaches which allow large numbers of process options to be rapidly assessed. The engineering challenges in achieving this provide the motivation for the proposed work. In this work we used computational fluid dynamics (CFD) analysis to study flow conditions in a gas-liquid contactor which has the potential to be used as a fermenter in a multi-well format. The bioreactor had a working volume of 6.5 mL with its major dimensions equal to those of a single well of a 24-well plate. The 6.5 mL bioreactor was mechanically agitated and aerated by a single sparger placed beneath the bottom impeller. A detailed numerical procedure for solving the governing flow equations is given. The CFD results are combined with population balance equations to establish the size of the bubbles and their distribution in the bioreactor. Power curves with and without aeration are provided based on the simulated results.

  5. The Design of a Templated C++ Small Vector Class for Numerical Computing

    Science.gov (United States)

    Moran, Patrick J.

    2000-01-01

    We describe the design and implementation of a templated C++ class for vectors. The vector class is templated both for vector length and vector component type; the vector length is fixed at template instantiation time. The vector implementation is such that for a vector of N components of type T, the total number of bytes required by the vector is equal to N * sizeof(T), where sizeof is the built-in C++ operator. The property of having a size no bigger than that required by the components themselves is key in many numerical computing applications, where one may allocate very large arrays of small, fixed-length vectors. In addition to the design trade-offs motivating our fixed-length vector design choice, we review some of the C++ template features essential to an efficient, succinct implementation. In particular, we highlight some of the standard C++ features, such as partial template specialization, that are not supported by all compilers currently. This report provides an inventory listing the relevant support currently provided by some key compilers, as well as test code one can use to verify compiler capabilities.

  6. Appraisal of the PREP, KITT, and SAMPLE computer codes for the evaluation of the reliability characteristics of engineered systems

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, P; White, R F

    1976-01-01

    For the probabilistic approach to reactor safety assessment by the use of event tree and fault tree techniques it is essential to be able to estimate the probabilities of failure of the various engineered safety features provided to mitigate the effects of postulated accident sequences. The PREP, KITT and SAMPLE computer codes, which incorporate Kinetic Tree Theory, perform these calculations and have been used extensively to evaluate the reliability characteristics of engineered safety features of American nuclear reactors. Working versions of these computer codes are now available in SRD, and this report explains the merits, capabilities and ease of application of the PREP, KITT, and SAMPLE programs for the solution of system reliability problems.

  7. HONEI: A collection of libraries for numerical computations targeting multiple processor architectures

    Science.gov (United States)

    van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten

    2009-12-01

    We present HONEI, an open-source collection of libraries offering a hardware oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straight forward C++ code using HONEI's SSE backend, and additional 3-4 and 4-16 times faster execution on the Cell and a GPU. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for development and evaluation of such kernels, significantly simplifying their development. Program summaryProgram title: HONEI Catalogue identifier: AEDW_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 No. of lines in distributed program, including test data, etc.: 216 180 No. of bytes in distributed program, including test data, etc.: 1 270 140 Distribution format: tar.gz Programming language: C++ Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3 Operating system: Linux RAM: at least 500 MB free Classification: 4.8, 4.3, 6.1 External routines: SSE: none; [1] for GPU, [2] for Cell backend Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the

  8. Numerical Verification Of Equilibrium Chemistry

    International Nuclear Information System (INIS)

    Piro, Markus; Lewis, Brent; Thompson, William T.; Simunovic, Srdjan; Besmann, Theodore M.

    2010-01-01

    A numerical tool is in an advanced state of development to compute the equilibrium compositions of phases and their proportions in multi-component systems of importance to the nuclear industry. The resulting software is being conceived for direct integration into large multi-physics fuel performance codes, particularly for providing boundary conditions in heat and mass transport modules. However, any numerical errors produced in equilibrium chemistry computations will be propagated in subsequent heat and mass transport calculations, thus falsely predicting nuclear fuel behaviour. The necessity for a reliable method to numerically verify chemical equilibrium computations is emphasized by the requirement to handle the very large number of elements necessary to capture the entire fission product inventory. A simple, reliable and comprehensive numerical verification method is presented which can be invoked by any equilibrium chemistry solver for quality assurance purposes.
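
    As an illustration of what such a verification can look like (a generic check, not necessarily the paper's exact criteria), the sketch below tests a computed equilibrium against elemental mass balance and against the existence of a consistent set of element potentials; the toy numbers are made up.

```python
# Hedged sketch of a common verification of a computed chemical equilibrium:
# (i) element abundances are conserved, A n = b, and (ii) the species chemical
# potentials are consistent with a single set of element potentials lambda,
# mu_i = sum_j A[j, i] * lambda_j (the stationarity condition of constrained
# Gibbs-energy minimization, for species present at positive amounts).
import numpy as np

def verify_equilibrium(A, b, n, mu, tol=1e-8):
    """A: (elements x species) formula matrix, b: element abundances,
    n: species amounts, mu: species chemical potentials (J/mol)."""
    mass_residual = np.linalg.norm(A @ n - b, ord=np.inf)
    # Best-fit element potentials, then the residual of mu = A^T lambda.
    lam, *_ = np.linalg.lstsq(A.T, mu, rcond=None)
    mu_residual = np.linalg.norm(A.T @ lam - mu, ord=np.inf)
    return mass_residual < tol, mu_residual < tol * max(1.0, np.abs(mu).max())

# Toy 2-element, 3-species example (H2, O2, H2O) with made-up numbers.
A = np.array([[2.0, 0.0, 2.0],    # H atoms per molecule
              [0.0, 2.0, 1.0]])   # O atoms per molecule
b = np.array([2.0, 1.0])
n = np.array([1e-6, 5e-7, 1.0 - 1e-6])
mu = A.T @ np.array([-1.0e4, -2.0e4])   # constructed to satisfy the criterion
print(verify_equilibrium(A, b, n, mu))
```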

  9. Common-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    CROSSER is a cumulative-binomial computer program, one of a set of three, that calculates cumulative binomial probability distributions for arbitrary inputs. CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556) can be used independently of one another. The program finds the point of equality between the reliability of a system and the common reliability of its components. It is used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program is written in C.
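
    The crossover point mentioned above can be reproduced with a few lines of Python; the sketch below uses plain bisection on the cumulative binomial and is only an illustration of the quantity CROSSER computes, not a port of the C program.

```python
# Sketch: for a k-out-of-n system whose components share a common reliability p,
# find the crossover point where the system reliability equals p.
from math import comb

def system_reliability(n, k, p):
    """P(at least k of n components work), each working independently w.p. p."""
    return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k, n + 1))

def crossover(n, k, tol=1e-12):
    """Return p* with system_reliability(n, k, p*) == p* (assumes 1 < k < n)."""
    lo, hi = 1e-12, 1.0 - 1e-12      # g(p) = R(p) - p is negative at lo, positive at hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if system_reliability(n, k, mid) - mid < 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

p_star = crossover(n=10, k=6)
print(p_star, system_reliability(10, 6, p_star))
```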

  10. Numerical computation of inventory policies, based on the EOQ/sigma-x value for order-point systems

    DEFF Research Database (Denmark)

    Alstrøm, Poul

    2001-01-01

    This paper examines the numerical computation of two control parameters, order size and order point in the well-known inventory control model, an (s,Q)system with a beta safety strategy. The aim of the paper is to show that the EOQ/sigma-x value is both sufficient for controlling the system and e...
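
    For orientation, a textbook-style sketch of the two control parameters of an (s,Q) system with a beta (fill-rate) strategy is given below; the symbols and service-level definition are standard conventions assumed here, not taken from the paper. Note that the safety factor depends on demand uncertainty only through the ratio Q/sigma_x, which is where the EOQ/sigma-x value enters.

```python
# Hedged sketch: order quantity from the EOQ formula and order point from the
# normal loss function for a fill-rate (beta) service target.  Textbook
# conventions; all numbers are placeholders.
from math import sqrt
from scipy.stats import norm
from scipy.optimize import brentq

def loss(z):
    """Standard normal loss function G(z) = phi(z) - z * (1 - Phi(z))."""
    return norm.pdf(z) - z * (1.0 - norm.cdf(z))

def s_Q_policy(demand_rate, order_cost, holding_cost, mu_L, sigma_x, beta):
    Q = sqrt(2.0 * order_cost * demand_rate / holding_cost)   # EOQ
    target = (1.0 - beta) * Q / sigma_x                       # allowed expected shortage per cycle
    z = brentq(lambda z: loss(z) - target, -5.0, 8.0)         # safety factor
    s = mu_L + z * sigma_x                                    # order point
    return s, Q

print(s_Q_policy(demand_rate=1200.0, order_cost=50.0, holding_cost=2.0,
                 mu_L=100.0, sigma_x=30.0, beta=0.98))
```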

  11. Numerical computation of inventory policies, based on the EOQ/sigma-x value for order-point systems

    DEFF Research Database (Denmark)

    Alstrøm, Poul

    2000-01-01

    This paper examines the numerical computation of two control parameters, order size and order point in the well-known inventory control model, an (s,Q)system with a beta safety strategy. The aim of the paper is to show that the EOQ/sigma-x value is both sufficient for controlling the system and e...

  12. Numerical simulation of an elementary Vortex-Induced-Vibration problem by using fully-coupled fluid solid system computation

    Directory of Open Access Journals (Sweden)

    M Pomarède

    2016-09-01

    Full Text Available Numerical simulation of Vortex-Induced Vibrations (VIV) of a rigid circular elastically-mounted cylinder subjected to a fluid cross-flow has been extensively studied over the past decades, both experimentally and numerically, because of its theoretical and practical interest for understanding Flow-Induced Vibration (FIV) problems. In this context, the present article presents a numerical study based on fully-coupled fluid-solid computations compared to previously published work [34], [36]. The computational procedure relies on a partitioned method ensuring the coupling between fluid and structure solvers. The fluid solver involves a moving mesh formulation for simulation of the fluid-structure interface motion. Energy exchanges between the fluid and solid models are ensured through suitable numerical schemes. The present study is devoted to a low Reynolds number configuration. Cylinder motion magnitude, hydrodynamic forces, oscillation frequency and fluid vortex shedding modes are investigated and the "lock-in" phenomenon is reproduced numerically. These numerical results are proposed for code validation purposes before investigating larger industrial applications such as configurations involving tube arrays under cross-flows [4].

  13. Using scattering theory to compute invariant manifolds and numerical results for the laser-driven Hénon-Heiles system.

    Science.gov (United States)

    Blazevski, Daniel; Franklin, Jennifer

    2012-12-01

    Scattering theory is a convenient way to describe systems that are subject to time-dependent perturbations which are localized in time. Using scattering theory, one can compute time-dependent invariant objects for the perturbed system knowing the invariant objects of the unperturbed system. In this paper, we use scattering theory to give numerical computations of invariant manifolds appearing in laser-driven reactions. In this setting, invariant manifolds separate regions of phase space that lead to different outcomes of the reaction and can be used to compute reaction rates.

  14. Numerical spin tracking in a synchrotron computer code Spink: Examples (RHIC)

    International Nuclear Information System (INIS)

    Luccio, A.

    1995-01-01

    In the course of acceleration of polarized protons in a synchrotron, many depolarizing resonances are encountered. They are classified in two categories: intrinsic resonances, which depend on the lattice structure of the ring and arise from the coupling of betatron oscillations with horizontal magnetic fields, and imperfection resonances, caused by orbit distortions due to field errors. In general, the spectrum of resonances vs spin tune Gγ (G = 1.7928, the proton gyromagnetic anomaly, and γ the proton relativistic energy ratio) for a given lattice tune ν, or vs ν for a given Gγ, contains a multitude of lines with various amplitudes or resonance strengths. The depolarization due to the resonance lines can be studied by numerically tracking protons with spin in a model accelerator. Tracking will allow one to check the strength of resonances, to study the effects of devices like Siberian Snakes, to find safe lattice tune regions in which to operate, and finally to study in detail the operation of special devices such as Spin Flippers. A few computer codes exist that calculate resonance strengths Ek and perform tracking, for proton and electron machines. Most relevant to our work for the AGS and RHIC machines are the programs Depol and Snake. Depol calculates the Ek's by Fourier analysis. The input to Depol is the output of a machine model code, such as Synch or Mad, containing all details of the lattice. Snake does the tracking, starting from a synthetic machine that contains a certain number of periods, FODO cells, Siberian snakes, etc. We believed the complexities of machines like the AGS or RHIC could not be adequately represented by Snake. Therefore, we decided to write a new code, Spink, that combines some of the features of Depol and Snake, i.e., Spink reads a Mad output like Depol and tracks as Snake does. The structure of the code and examples for RHIC are described in the following

  15. Reliability of a computer software angle tool for measuring spine and pelvic flexibility during the sit-and-reach test.

    Science.gov (United States)

    Mier, Constance M; Shapiro, Belinda S

    2013-02-01

    The purpose of this study was to determine the reliability of a computer software angle tool that measures thoracic (T), lumbar (L), and pelvic (P) angles as a means of evaluating spine and pelvic flexibility during the sit-and-reach (SR) test. Thirty adults performed the SR twice on separate days. The SR test was captured on video and later analyzed for T, L, and P angles using the computer software angle tool. During the test, 3 markers were placed over T1, T12, and L5 vertebrae to identify T, L, and P angles. Intraclass correlation coefficient (ICC) indicated a very high internal consistency (between trials) for T, L, and P angles (0.95-0.99); thus, the average of trials was used for test-retest (between days) reliability. Mean (±SD) values did not differ between days for T (51.0 ± 14.3 vs. 52.3 ± 16.2°), L (23.9 ± 7.1 vs. 23.0 ± 6.9°), or P (98.4 ± 15.6 vs. 98.3 ± 14.7°) angles. Test-retest reliability (ICC) was high for T (0.96) and P (0.97) angles and moderate for L angle (0.84). Both intrarater and interrater reliabilities were high for T (0.95, 0.94) and P (0.97, 0.97) angles and moderate for L angle (0.87, 0.82). Thus, the computer software angle tool is a highly objective method for assessing spine and pelvic flexibility during a video-captured SR test.

  16. Accuracy and reliability of facial soft tissue depth measurements using cone beam computer tomography

    NARCIS (Netherlands)

    Fourie, Zacharias; Damstra, Janalt; Gerrits, Pieter; Ren, Yijin

    2010-01-01

    It is important to have accurate and reliable measurements of soft tissue thickness for specific landmarks of the face and scalp when producing a facial reconstruction. In the past several methods have been created to measure facial soft tissue thickness (FSTT) in cadavers and in the living. The

  17. Computational intelligence methods for the efficient reliability analysis of complex flood defence structures

    NARCIS (Netherlands)

    Kingston, Greer B.; Rajabali Nejad, Mohammadreza; Gouldby, Ben P.; van Gelder, Pieter H.A.J.M.

    2011-01-01

    With the continual rise of sea levels and deterioration of flood defence structures over time, it is no longer appropriate to define a design level of flood protection, but rather, it is necessary to estimate the reliability of flood defences under varying and uncertain conditions. For complex

  18. Evaluation of the reliability concerning the identification of human factors as contributing factors by a computer supported event analysis (CEA)

    International Nuclear Information System (INIS)

    Wilpert, B.; Maimer, H.; Loroff, C.

    2000-01-01

    The project's objective is the evaluation of the reliability of the identification of Human Factors as contributing factors by a computer supported event analysis (CEA). CEA is a computer version of SOL (Safety through Organizational Learning). The first step comprised interviews with experts from the nuclear power industry and the evaluation of existing computer supported event analysis methods. This information was combined into a requirement profile for the CEA software. The next step was the implementation of the software in an iterative process of evaluation. The project was completed by testing the CEA software. The testing demonstrated that contributing factors can be validly identified with CEA. In addition, CEA received very positive feedback from the experts. (orig.) [de

  19. Improvement of level-1 PSA computer code package - Modeling and analysis for dynamic reliability of nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)

    1996-08-01

    The objective of this project is to develop a methodology for the dynamic reliability analysis of NPPs. The first year's research was focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research was concentrated on estimating the lifetime distribution and PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objectives of the third year's research are to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology of the dynamic reliability analysis for nuclear power plants. The analysis of component failure data and the related research supporting the simulator must come first, to provide proper input to the simulator. Thus this research is divided into three major parts. 1. Analysis of the time dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related research supporting the simulator: accelerated simulation, analytic approach using PH-type distributions, analysis of dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)

  20. Computer numerically controlled (CNC) aspheric shaping with toroidal Wheels (Abstract Only)

    Science.gov (United States)

    Ketelsen, D.; Kittrell, W. C.; Kuhn, W. M.; Parks, R. E.; Lamb, George L.; Baker, Lynn

    1987-01-01

    Contouring with computer numerically controlled (CNC) machines can be accomplished with several different tool geometries and coordinated machine axes. To minimize the number of coordinated axes for nonsymmetric work to three, it is common practice to use a spherically shaped tool such as a ball-end mill. However, to minimize grooving due to the feed and ball radius, it is desirable to use a long ball radius, but there is clearly a practical limit to ball diameter with the spherical tool. We have found that the use of commercially available toroidal wheels permits long effective cutting radii, which in turn improve finish and minimize grooving for a set feed. In addition, toroidal wheels are easier than spherical wheels to center accurately. Cutting parameters are also easier to control because the feed rate past the tool does not change as the slope of the work changes. The drawback to the toroidal wheel is the more complex calculation of the tool path. Of course, once the algorithm is worked out, the tool path is as easily calculated as for a spherical tool. We have performed two experiments with the Large Optical Generator (LOG) that were ideally suited to three-axis contouring--surfaces that have no axis of rotational symmetry. By oscillating the cutting head horizontally or vertically (in addition to the motions required to generate the power of the surface), and carefully coordinating those motions with table rotation, the mostly astigmatic departure for these surfaces is produced. The first experiment was a pair of reflector molds that together correct the spherical aberration of the Arecibo radio telescope. The larger of these was 5 m in diameter and had a 12 cm departure from the best-fit sphere. The second experiment was the generation of a purely astigmatic surface to demonstrate the feasibility of producing axially symmetric aspherics while mounted and rotated about any off-axis point. Measurements of the latter (the first experiment had relatively

  1. Optimal design methods for a digital human-computer interface based on human reliability in a nuclear power plant

    International Nuclear Information System (INIS)

    Jiang, Jianjun; Zhang, Li; Xie, Tian; Wu, Daqing; Li, Min; Wang, Yiqun; Peng, Yuyuan; Peng, Jie; Zhang, Mengjia; Li, Peiyao; Ma, Congmin; Wu, Xing

    2017-01-01

    Highlights: • A complete optimization process is established for digital human-computer interfaces of NPPs. • A quick convergence search method is proposed. • The authors propose an affinity error probability mapping function to test human reliability. - Abstract: This is the second in a series of papers describing the optimal design method for a digital human-computer interface of a nuclear power plant (NPP) from three different points of view based on human reliability. The purpose of this series is to explore different optimization methods from varying perspectives. The present paper mainly discusses the optimal design method for the quantity of components of the same factor. During monitoring, the quantity of components places a heavy burden on operators, so human errors are easily triggered. To solve this problem, the authors propose an optimization process, a quick convergence search method and an affinity error probability mapping function. Two balanceable parameter values of the affinity error probability function are obtained by experiments. The experimental results show that the affinity error probability mapping function for the human-computer interface has very good sensitivity and stability, and that the quick convergence search method for fuzzy segments divided by component quantity performs better than a general algorithm.

  2. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    International Nuclear Information System (INIS)

    Capone, V; Esposito, R; Pardi, S; Taurino, F; Tortone, G

    2012-01-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  3. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  4. Multidisciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  5. SLIM-MAUD - a computer based technique for human reliability assessment

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1985-01-01

    The Success Likelihood Index Methodology (SLIM) is a widely applicable technique which can be used to assess human error probabilities in both proceduralized and cognitive tasks (i.e. those involving decision making, problem solving, etc.). It assumes that expert assessors are able to evaluate the relative importance (or weights) of different factors called Performance Shaping Factors (PSFs), in determining the likelihood of error for the situations being assessed. Typical PSFs are the extent to which good procedures are available, operators are adequately trained, the man-machine interface is well designed, etc. If numerical ratings are made of the PSFs for the specific tasks being evaluated, these can be combined with the weights to give a numerical index, called the Success Likelihood Index (SLI). The SLI represents, in numerical form, the overall assessment of the experts of the likelihood of task success. The SLI can be subsequently transformed to a corresponding human error probability (HEP) estimate. The latest form of the SLIM technique is implemented using a microcomputer based system called MAUD (Multi-Attribute Utility Decomposition), the resulting technique being called SLIM-MAUD. A detailed description of the SLIM-MAUD technique and case studies of applications are available. An illustrative example of the application of SLIM-MAUD in probabilistic risk assessment is given
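
    The SLI-to-HEP step described above can be sketched in a few lines. The log-linear calibration between the SLI and log(HEP), anchored by two reference tasks with known error probabilities, is a commonly cited SLIM assumption; all weights, ratings and anchor values below are purely illustrative and not taken from the record.

```python
import numpy as np

def success_likelihood_index(weights, ratings):
    """SLI = weighted sum of PSF ratings (ratings on a 0-1 scale, 1 = most favourable)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize weights so they sum to 1
    return float(np.dot(w, ratings))

def sli_to_hep(sli, anchors):
    """Log-linear calibration log10(HEP) = a*SLI + b, fitted to two anchor tasks
    with known (SLI, HEP) pairs -- a frequently used SLIM assumption."""
    (s1, p1), (s2, p2) = anchors
    a = (np.log10(p2) - np.log10(p1)) / (s2 - s1)
    b = np.log10(p1) - a * s1
    return 10 ** (a * sli + b)

# Hypothetical task with three PSFs (procedures, training, interface design)
weights = [0.5, 0.3, 0.2]                    # relative importance from expert judgement
ratings = [0.8, 0.6, 0.4]                    # how favourable each PSF is for this task
sli = success_likelihood_index(weights, ratings)

# Anchor tasks: SLI = 0.2 -> HEP = 1e-1, SLI = 0.9 -> HEP = 1e-4 (illustrative only)
hep = sli_to_hep(sli, anchors=[(0.2, 1e-1), (0.9, 1e-4)])
print(f"SLI = {sli:.2f}, estimated HEP = {hep:.2e}")
```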

  6. 3-D Numerical Realization of Contituent-Level FRP Composites Using X-Ray Computer Tomography

    Data.gov (United States)

    National Aeronautics and Space Administration — Develop methods coupling state-of-the-art, nondestructive characterization techniques with three-dimensional, numerical modeling to study the constituent-level...

  7. On a New Method for Computing the Numerical Solution of Systems of Nonlinear Equations

    Directory of Open Access Journals (Sweden)

    H. Montazeri

    2012-01-01

    Full Text Available We consider a system of nonlinear equations F(x) = 0. A new iterative method for solving this problem numerically is suggested. The analytical discussion of the method is provided to reveal its sixth order of convergence. A discussion of the efficiency index of the contribution, in comparison with other iterative methods, is also given. Finally, numerical tests illustrate the theoretical aspects using the programming package Mathematica.
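
    The record does not reproduce the authors' sixth-order scheme. As a baseline illustration of what numerically solving F(x) = 0 for a system involves, here is a plain Newton iteration with a forward-difference Jacobian; this is a generic sketch, not the paper's method, and the test system is invented.

```python
import numpy as np

def newton_system(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Solve F(x) = 0 by Newton's method with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        for j in range(n):                       # build the Jacobian column by column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - fx) / h
        x = x - np.linalg.solve(J, fx)           # Newton update
    return x

# Example system: x^2 + y^2 = 4 and exp(x) + y = 1
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
print(newton_system(F, x0=[1.0, -1.0]))          # converges to roughly (1.004, -1.730)
```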

  8. Numerical Model of Air Valve For Computation of One-dimensional Flow

    Directory of Open Access Journals (Sweden)

    Daniel HIMR

    2014-06-01

    Full Text Available The paper is focused on a numerical simulation of unsteady flow in a pipeline. Special attention is paid to the numerical model of an air valve, which has to include all possible regimes: critical/subcritical inflow and critical/subcritical outflow of air. The thermodynamic equation for subcritical mass flow was simplified to obtain a more convenient form of the relevant equations, which enables easier solution of the problem.
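
    The critical/subcritical switch mentioned above corresponds to the standard isentropic orifice relation for a compressible gas; a minimal sketch follows. The gas properties, discharge coefficient, throat area and pressures are illustrative assumptions, not values from the paper's valve model.

```python
import numpy as np

KAPPA, R_AIR = 1.4, 287.1                                   # air: isentropic exponent, J/(kg K)
R_CRIT = (2.0 / (KAPPA + 1.0)) ** (KAPPA / (KAPPA - 1.0))   # critical pressure ratio, ~0.528

def air_mass_flow(p_up, p_down, T_up, area, cd=0.6):
    """Isentropic orifice mass flow [kg/s] with an automatic switch between the
    critical (choked) and subcritical regimes; flow direction is p_up -> p_down."""
    r = max(p_down / p_up, R_CRIT)                          # below R_CRIT the flow is choked
    flow_func = np.sqrt(2.0 * KAPPA / ((KAPPA - 1.0) * R_AIR * T_up)
                        * (r ** (2.0 / KAPPA) - r ** ((KAPPA + 1.0) / KAPPA)))
    return cd * area * p_up * flow_func

# Air valve admitting atmospheric air into a pipeline at 0.5 bar abs (choked inflow)
# and at 0.9 bar abs (subcritical inflow); valve throat area 5 cm^2 (all illustrative)
for p_pipe in (0.5e5, 0.9e5):
    m_dot = air_mass_flow(p_up=1.013e5, p_down=p_pipe, T_up=293.15, area=5e-4)
    regime = "critical" if p_pipe / 1.013e5 < R_CRIT else "subcritical"
    print(f"p_pipe = {p_pipe / 1e5:.1f} bar: {regime} inflow, m_dot = {m_dot:.3f} kg/s")
```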

  9. Beyond redundancy how geographic redundancy can improve service availability and reliability of computer-based systems

    CERN Document Server

    Bauer, Eric; Eustace, Dan

    2012-01-01

    "While geographic redundancy can obviously be a huge benefit for disaster recovery, it is far less obvious what benefit is feasible and likely for more typical non-catastrophic hardware, software, and human failures. Georedundancy and Service Availability provides both a theoretical and practical treatment of the feasible and likely benefits of geographic redundancy for both service availability and service reliability. The text provides network/system planners, IS/IT operations folks, system architects, system engineers, developers, testers, and other industry practitioners with a general discussion about the capital expense/operating expense tradeoff that frames system redundancy and georedundancy"--

  10. TEMPEST: A three-dimensional time-dependent computer program for hydrothermal analysis: Volume 1, Numerical methods and input instructions

    International Nuclear Information System (INIS)

    Trent, D.S.; Eyler, L.L.; Budden, M.J.

    1983-09-01

    This document describes the numerical methods, current capabilities, and the use of the TEMPEST (Version L, MOD 2) computer program. TEMPEST is a transient, three-dimensional, hydrothermal computer program that is designed to analyze a broad range of coupled fluid dynamic and heat transfer systems of particular interest to the Fast Breeder Reactor thermal-hydraulic design community. The full three-dimensional, time-dependent equations of motion, continuity, and heat transport are solved for either laminar or turbulent fluid flow, including heat diffusion and generation in both solid and liquid materials. 10 refs., 22 figs., 2 tabs

  11. Nature Inspired Computational Technique for the Numerical Solution of Nonlinear Singular Boundary Value Problems Arising in Physiology

    Directory of Open Access Journals (Sweden)

    Suheel Abdullah Malik

    2014-01-01

    Full Text Available We present a hybrid heuristic computing method for the numerical solution of nonlinear singular boundary value problems arising in physiology. The approximate solution is deduced as a linear combination of some log sigmoid basis functions. A fitness function representing the sum of the mean square error of the given nonlinear ordinary differential equation (ODE) and its boundary conditions is formulated. The optimization of the unknown adjustable parameters contained in the fitness function is performed by a hybrid heuristic computation algorithm based on the genetic algorithm (GA), interior point algorithm (IPA), and active set algorithm (ASA). The efficiency and the viability of the proposed method are confirmed by solving three examples from physiology. The obtained approximate solutions are found in excellent agreement with the exact solutions as well as some conventional numerical solutions.
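
    A minimal sketch of the idea: the trial solution is a weighted sum of sigmoid basis functions, the fitness is the mean-square ODE residual plus the boundary-condition error, and the free parameters are tuned by a population-based heuristic. Here scipy's differential_evolution stands in for the paper's GA/IPA/ASA hybrid, and the simple linear test problem (exact solution sin x) is illustrative, not one of the paper's singular physiological BVPs.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Test problem (illustrative): y'' + y = 0 on [0, 1], y(0) = 0, y(1) = sin(1)
x = np.linspace(0.0, 1.0, 20)                    # collocation points

def trial(params, x):
    """y(x) = sum_k c_k * sigmoid(a_k x + b_k) and its first two derivatives."""
    a, b, c = params.reshape(3, -1)
    z = np.outer(x, a) + b
    s = 1.0 / (1.0 + np.exp(-z))
    y = s @ c
    dy = (s * (1 - s) * a) @ c
    d2y = (s * (1 - s) * (1 - 2 * s) * a ** 2) @ c
    return y, dy, d2y

def fitness(params):
    y, _, d2y = trial(params, x)
    residual = d2y + y                           # ODE residual at the collocation points
    y0, _, _ = trial(params, np.array([0.0]))
    y1, _, _ = trial(params, np.array([1.0]))
    bc = (y0[0] - 0.0) ** 2 + (y1[0] - np.sin(1.0)) ** 2
    return np.mean(residual ** 2) + bc

n_basis = 3
bounds = [(-5, 5)] * (3 * n_basis)               # bounds on a_k, b_k, c_k
result = differential_evolution(fitness, bounds, seed=1, maxiter=300, tol=1e-10)
y_fit, _, _ = trial(result.x, x)
print("max abs error vs sin(x):", np.max(np.abs(y_fit - np.sin(x))))
```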

  12. The reliability of computer analysis of ultrasonographic prostate images: the influence of inconsistent histopathology

    NARCIS (Netherlands)

    Giesen, R. J.; Huynen, A. L.; de la Rosette, J. J.; Schaafsma, H. E.; van Iersel, M. P.; Aarnink, R. G.; Debruyne, F. M.; Wijkstra, H.

    1994-01-01

    This article describes a method to investigate the influence of inconsistent histopathology during the development of tissue discrimination algorithms. Review of the pathology is performed on the biopsies used as training set of a computer system for cancer detection in ultrasonographic prostate

  13. Computational approaches to standard-compliant biofilm data for reliable analysis and integration

    Directory of Open Access Journals (Sweden)

    Sousa Ana Margarida

    2012-12-01

    Full Text Available The study of microorganism consortia, also known as biofilms, is associated with a number of applications in biotechnology, ecotechnology and clinical domains. Nowadays, biofilm studies are heterogeneous and data-intensive, encompassing different levels of analysis. Computational modelling of biofilm studies has thus become a requirement to make sense of these vast and ever-expanding biofilm data volumes.

  14. Interobserver reliability of coronoid fracture classification: two-dimensional versus three-dimensional computed tomography

    NARCIS (Netherlands)

    Lindenhovius, Anneluuk; Karanicolas, Paul Jack; Bhandari, Mohit; van Dijk, Niek; Ring, David; Allan, Christopher; Anglen, Jeffrey; Axelrod, Terry; Baratz, Mark; Beingessner, Daphne; Brink, Peter; Cassidy, Charles; Coles, Chad; Conflitti, Joe; Crist, Brett; Della Rocca, Gregory; Dijkstra, Sander; Elmans, L. H. G. J.; Feibel, Roger; Flores, Luis; Frihagen, Frede; Gosens, Taco; Goslings, J. C.; Greenberg, Jeffrey; Grosso, Elena; Harness, Neil; van der Heide, Huub; Jeray, Kyle; Kalainov, David; van Kampen, Albert; Kawamura, Sumito; Kloen, Peter; McKee, Michael; Nork, Sean; Page, Richard; Pesantez, Rodrigo; Peters, Anil; Poolman, Rudolf; Prayson, Michael; Richardson, Martin; Seiler, John; Swiontkowski, Marc; Thomas, George; Trumble, Tom; van Vugt, Arie; Wright, Thomas; Zalavras, Charalampos; Zura, Robert

    2009-01-01

    This study tests the hypothesis that 3-dimensional computed tomography (CT) reconstructions improve interobserver agreement on classification and treatment of coronoid fractures compared with 2-dimensional CT. A total of 29 orthopedic surgeons evaluated 10 coronoid fractures on 2 occasions (first

  15. Improving the capacity, reliability & life of mobile devices with cloud computing

    CSIR Research Space (South Africa)

    Nkosi, MT

    2011-05-01

    Full Text Available devices. The approach in this paper is to model the mobile cloud computing process in a 3GPP IMS software development and emulator environment, and to show that multimedia and security operations can be performed in the cloud, allowing mobile service...

  16. Reliability and availability of redundant systems: Computational program and the use of nomograms

    International Nuclear Information System (INIS)

    Signoret, J.P.

    1975-01-01

    A rigorous mathematical approach to determining the reliability and availability of repairable actively redundant systems - (r/m) systems - is considered for the case where the m units comprising the system are identical and the failure and repair rates, λ and μ respectively, are constant. The method used involves Markov processes, operator calculus and matrix calculus. All the results of the study are handled by the FIDIAS program, which is a practical tool for calculating with a high degree of precision the reliability or availability of such (r/m) systems whatever the values of m and r. In the FIDIAS-TC version of FIDIAS it is possible to plot curves with a Benson plotter, so that nomograms are produced for rapid and simple determination of the probabilities of failure or non-availability of the (r/m) systems considered. The practical application of nomograms is of interest because (2/3) and (2/4) actively redundant systems are very often used in the control circuits of power reactors. It is shown how easily one can compare these two systems using nomograms and how one can determine λ or μ as a function of the anticipated result
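
    If, as a simplifying assumption, the m units are taken to fail and be repaired independently of one another (the Markov treatment in the record is exact and does not need this assumption), the steady-state availability of an (r/m) active-redundant system reduces to a binomial sum over the unit availability. A minimal sketch with illustrative λ and μ values:

```python
from math import comb

def unit_availability(lam, mu):
    """Steady-state availability of one repairable unit with constant failure
    rate lam and repair rate mu."""
    return mu / (lam + mu)

def r_out_of_m_availability(r, m, lam, mu):
    """System is up if at least r of the m identical units are up, assuming
    independent failure/repair of each unit (an approximation; shared-repair
    and state dependencies need the full Markov model)."""
    a = unit_availability(lam, mu)
    return sum(comb(m, k) * a ** k * (1 - a) ** (m - k) for k in range(r, m + 1))

# Example: 2-out-of-3 and 2-out-of-4 logic channels, lam = 1e-4 /h, mu = 0.1 /h (illustrative)
for r, m in [(2, 3), (2, 4)]:
    A = r_out_of_m_availability(r, m, lam=1e-4, mu=0.1)
    print(f"({r}/{m}) system unavailability ~ {1 - A:.2e}")
```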

  17. User's manual of SECOM2: a computer code for seismic system reliability analysis

    International Nuclear Information System (INIS)

    Uchiyama, Tomoaki; Oikawa, Tetsukuni; Kondo, Masaaki; Tamura, Kazuo

    2002-03-01

    This report is the user's manual of seismic system reliability analysis code SECOM2 (Seismic Core Melt Frequency Evaluation Code Ver.2) developed at the Japan Atomic Energy Research Institute for systems reliability analysis, which is one of the tasks of seismic probabilistic safety assessment (PSA) of nuclear power plants (NPPs). The SECOM2 code has many functions such as: Calculation of component failure probabilities based on the response factor method, Extraction of minimal cut sets (MCSs), Calculation of conditional system failure probabilities for given seismic motion levels at the site of an NPP, Calculation of accident sequence frequencies and the core damage frequency (CDF) with use of the seismic hazard curve, Importance analysis using various indicators, Uncertainty analysis, Calculation of the CDF taking into account the effect of the correlations of responses and capacities of components, and Efficient sensitivity analysis by changing parameters on responses and capacities of components. These analyses require the fault tree (FT) representing the occurrence condition of the system failures and core damage, information about response and capacity of components and seismic hazard curve for the NPP site as inputs. This report presents the models and methods applied in the SECOM2 code and how to use those functions. (author)
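
    The step that combines conditional failure probabilities with the seismic hazard curve to obtain a core damage frequency can be sketched as a short numerical convolution. The lognormal fragility and power-law hazard curve below are illustrative placeholders, not SECOM2's response-factor model or any plant-specific data.

```python
import numpy as np
from scipy.stats import norm

# Illustrative plant-level fragility: lognormal with median capacity Am and log-std beta
Am, beta = 0.9, 0.45                         # peak ground acceleration in g
p_fail = lambda a: norm.cdf(np.log(a / Am) / beta)

# Illustrative hazard curve: annual frequency of exceeding acceleration a
H = lambda a: 1e-3 * (a / 0.1) ** -2.5

a = np.linspace(0.05, 3.0, 2000)             # integration grid (g)
dHda = np.gradient(H(a), a)                  # derivative of the exceedance frequency (negative)
cdf = np.sum(p_fail(a) * (-dHda)) * (a[1] - a[0])   # convolve fragility with hazard density
print(f"illustrative seismic core damage frequency ~ {cdf:.2e} per year")
```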

  18. Translation, Validation, and Reliability of the Dutch Late-Life Function and Disability Instrument Computer Adaptive Test.

    Science.gov (United States)

    Arensman, Remco M; Pisters, Martijn F; de Man-van Ginkel, Janneke M; Schuurmans, Marieke J; Jette, Alan M; de Bie, Rob A

    2016-09-01

    Adequate and user-friendly instruments for assessing physical function and disability in older adults are vital for estimating and predicting health care needs in clinical practice. The Late-Life Function and Disability Instrument Computer Adaptive Test (LLFDI-CAT) is a promising instrument for assessing physical function and disability in gerontology research and clinical practice. The aims of this study were: (1) to translate the LLFDI-CAT to the Dutch language and (2) to investigate its validity and reliability in a sample of older adults who spoke Dutch and dwelled in the community. For the assessment of validity of the LLFDI-CAT, a cross-sectional design was used. To assess reliability, measurement of the LLFDI-CAT was repeated in the same sample. The item bank of the LLFDI-CAT was translated with a forward-backward procedure. A sample of 54 older adults completed the LLFDI-CAT, World Health Organization Disability Assessment Schedule 2.0, RAND 36-Item Short-Form Health Survey physical functioning scale (10 items), and 10-Meter Walk Test. The LLFDI-CAT was repeated in 2 to 8 days (mean=4.5 days). Pearson's r and the intraclass correlation coefficient (ICC) (2,1) were calculated to assess validity, group-level reliability, and participant-level reliability. A correlation of .74 for the LLFDI-CAT function scale and the RAND 36-Item Short-Form Health Survey physical functioning scale (10 items) was found. The correlations of the LLFDI-CAT disability scale with the World Health Organization Disability Assessment Schedule 2.0 and the 10-Meter Walk Test were -.57 and -.53, respectively. The ICC (2,1) of the LLFDI-CAT function scale was .84, with a group-level reliability score of .85. The ICC (2,1) of the LLFDI-CAT disability scale was .76, with a group-level reliability score of .81. The high percentage of women in the study and the exclusion of older adults with recent joint replacement or hospitalization limit the generalizability of the results. The Dutch LLFDI

  19. Reliability and reproducibility analysis of the Cobb angle and assessing sagittal plane by computer-assisted and manual measurement tools.

    Science.gov (United States)

    Wu, Weifei; Liang, Jie; Du, Yuanli; Tan, Xiaoyi; Xiang, Xuanping; Wang, Wanhong; Ru, Neng; Le, Jinbo

    2014-02-06

    Although many studies on the reliability and reproducibility of measurement have been performed for the coronal Cobb angle, few results are reported on sagittal alignment measurement including the pelvis. We usually use the SurgimapSpine software to measure the Cobb angle in our studies; however, there are no reports to date on the reliability and reproducibility of its measurements. Sixty-eight standard standing posteroanterior whole-spine radiographs were reviewed. Three examiners carried out the measurements independently, manually on the X-ray radiographs and with the SurgimapSpine software on the computer. Parameters measured included pelvic incidence, sacral slope, pelvic tilt, lumbar lordosis (LL), thoracic kyphosis, and coronal Cobb angle. SPSS 16.0 software was used for statistical analyses. The means, standard deviations, intraclass and interclass correlation coefficients (ICC), and 95% confidence intervals (CI) were calculated. There was no notable difference between the two tools (P = 0.21) for the coronal Cobb angle. For the sagittal plane parameters, the ICC of intraobserver reliability for the manual measures varied from 0.65 (T2-T5 angle) to 0.95 (LL angle); for the SurgimapSpine tool, the ICC ranged from 0.75 to 0.98. No significant difference in intraobserver reliability was found between the two measurements (P > 0.05). As for interobserver reliability, measurements with the SurgimapSpine tool had better ICC (0.71 to 0.98 vs 0.59 to 0.96) and Pearson's coefficient (0.76 to 0.99 vs 0.60 to 0.97). The reliability of the SurgimapSpine measures was significantly higher for all parameters except the coronal Cobb angle, where the difference was not significant (P > 0.05). Although the differences between the two methods are very small, the results of this study indicate that the SurgimapSpine measurement is equivalent to the traditional manual method for the coronal Cobb angle, but is advantageous in spino
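
    Both this record and the previous one report ICC(2,1), the two-way random-effects, absolute-agreement, single-rater coefficient of Shrout and Fleiss. A small sketch computing it directly from the ANOVA mean squares; the ratings below are synthetic, not the studies' data.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    'ratings' is an (n subjects x k raters) array."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # between-subjects MS
    ms_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # between-raters MS
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))               # residual MS
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Synthetic example: 6 radiographs measured by 3 observers (degrees)
angles = np.array([[52, 53, 51],
                   [40, 42, 41],
                   [65, 63, 66],
                   [30, 31, 29],
                   [48, 50, 49],
                   [58, 57, 59]])
print(f"ICC(2,1) = {icc_2_1(angles):.3f}")
```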

  20. Validity and Reliability of Orthodontic Loops between Mechanical Testing and Computer Simulation: A Finite Element Method Study

    Directory of Open Access Journals (Sweden)

    Gaurav Sepolia

    2014-01-01

    Full Text Available The magnitude and direction of orthodontic force is one of the essential concerns of orthodontic tooth movements. Excessive force may cause root resorption and mobility of the tooth, whereas a low force level may result in prolonged treatment. The addition of loops allows the clinician to more accurately achieve the desired results. Aims and objectives: The purpose of the study was to evaluate the validity and reliability of orthodontic loops between mechanical testing and computer simulation. Materials and methods: Different types of loops were taken and divided into four groups: the Teardrop loop, Opus loop, L loop and T loop. These were artificially activated for multiple lengths and studied using the FEM. Results: The Teardrop loop showed the highest force level, and there was no significant difference between mechanical testing and computer simulation.

  1. Numerical and structural genomic aberrations are reliably detectable in tissue microarrays of formalin-fixed paraffin-embedded tumor samples by fluorescence in-situ hybridization.

    Directory of Open Access Journals (Sweden)

    Heike Horn

    Full Text Available Few data are available regarding the reliability of fluorescence in-situ hybridization (FISH), especially for chromosomal deletions, in high-throughput settings using tissue microarrays (TMAs). We performed a comprehensive FISH study for the detection of chromosomal translocations and deletions in formalin-fixed and paraffin-embedded (FFPE) tumor specimens arranged in TMA format. We analyzed 46 B-cell lymphoma (B-NHL) specimens with known karyotypes for translocations of IGH-, BCL2-, BCL6- and MYC-genes. Locus-specific DNA probes were used for the detection of deletions in chromosome bands 6q21 and 9p21 in 62 follicular lymphomas (FL) and six malignant mesothelioma (MM) samples, respectively. To test for aberrant signals generated by truncation of nuclei following sectioning of FFPE tissue samples, cell line dilutions with 9p21-deletions were embedded into paraffin blocks. The overall TMA hybridization efficiency was 94%. FISH results regarding translocations matched karyotyping data in 93%. As for chromosomal deletions, sectioning artefacts occurred in 17% to 25% of cells, suggesting that the proportion of cells showing deletions should exceed 25% to be reliably detectable. In conclusion, FISH represents a robust tool for the detection of structural as well as numerical aberrations in FFPE tissue samples in a TMA-based high-throughput setting, when rigorous cut-off values and appropriate controls are maintained, and, of note, was superior to quantitative PCR approaches.

  2. A Numerical Study of the Impact of Radial Baffles in solid Bowl Centrifuges Using computational Fluid Dynamics

    OpenAIRE

    Romani, Xiana; Nirschl, Hermann

    2010-01-01

    Centrifugal separation equipment, such as solid bowl centrifuges, is used to carry out an effective separation of fine particles from industrial fluids. Knowledge of the streams and sedimentation behavior inside solid bowl centrifuges is necessary to determine the geometry and the process parameters that lead to an optimal performance. For a given industrial centrifuge geometry, a grid was built to numerically calculate the multiphase flow of water, air, and particles with a computation...

  3. CENTAURE, a numerical model for the computation of the flow and isotopic concentration fields in a gas centrifuge

    International Nuclear Information System (INIS)

    Soubbaramayer

    1977-01-01

    A numerical code (CENTAURE), built with 36000 cards and 343 subroutines, for investigating the fully coupled fields of velocity, temperature, pressure and isotopic concentration in a gas centrifuge is presented. The complete set of Navier-Stokes equations, the continuity equation, the energy balance, the isotopic diffusion equation and the gas state law form the basis of the model, with proper boundary conditions depending essentially upon the nature of the countercurrent and the thermal condition of the walls. Sources and sinks are located either inside the centrifuge or on the boundaries. The model includes not only the usual Coriolis, compressibility, viscosity and thermal diffusivity terms but also the nonlinear inertia terms in the momentum equations, and thermal convection and viscous dissipation in the energy equation. The computation is based on the finite element method with direct solution, rather than finite differences with an iterative process. The code is quite flexible and well adapted to computing many physical cases in one centrifuge: the computation time per case is then very small (on an IBM 360-91). The numerical results are examined with the help of an IBM 2250 visualisation screen. The capabilities of the code are illustrated numerically. Some results are discussed and compared to linear theories

  4. Numeric algorithms for parallel processors computer architectures with applications to the few-groups neutron diffusion equations

    International Nuclear Information System (INIS)

    Zee, S.K.

    1987-01-01

    A numeric algorithm and an associated computer code were developed for the rapid solution of the finite-difference method representation of the few-group neutron-diffusion equations on parallel computers. Applications of the numeric algorithm on both SIMD (vector pipeline) and MIMD/SIMD (multi-CPU/vector pipeline) architectures were explored. The algorithm was successfully implemented in the two-group, 3-D neutron diffusion computer code named DIFPAR3D (DIFfusion PARallel 3-Dimension). Numerical-solution techniques used in the code include the Chebyshev polynomial acceleration technique in conjunction with the power method of outer iteration. For inner iterations, a parallel form of red-black (cyclic) line SOR with automated determination of group-dependent relaxation factors and iteration numbers required to achieve a specified inner iteration error tolerance is incorporated. The code employs a macroscopic depletion model with trace capability for selected fission products' transients and critical boron. In addition, moderator and fuel temperature feedback models are also incorporated into the DIFPAR3D code, for realistic simulation of power reactor cores. The physics models used were proven acceptable in separate benchmarking studies
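
    A minimal sketch of the red-black (odd/even ordered) SOR inner iteration mentioned above, applied to a one-dimensional, one-group, fixed-source diffusion problem; the cross-sections, mesh and fixed relaxation factor are illustrative (DIFPAR3D itself is three-dimensional, two-group, and determines its relaxation factors automatically).

```python
import numpy as np

# 1-D, one-group diffusion with a flat source: -D u'' + siga * u = S, u = 0 at both ends
D, siga, S = 1.0, 0.02, 1.0                  # cm, 1/cm, n/(cm^3 s) -- illustrative values
N, L = 200, 100.0
h = L / (N + 1)
diag = 2 * D / h ** 2 + siga
off = D / h ** 2

u = np.zeros(N)
omega = 1.8                                  # fixed relaxation factor for this sketch
for sweep in range(5000):
    u_old = u.copy()
    for color in (0, 1):                     # red (even) points first, then black (odd)
        idx = np.arange(color, N, 2)
        left = np.where(idx > 0, u[np.clip(idx - 1, 0, N - 1)], 0.0)
        right = np.where(idx < N - 1, u[np.clip(idx + 1, 0, N - 1)], 0.0)
        gauss_seidel = (S + off * (left + right)) / diag
        u[idx] = (1 - omega) * u[idx] + omega * gauss_seidel
    if np.max(np.abs(u - u_old)) < 1e-10:
        break
print(f"converged after {sweep + 1} sweeps, peak flux = {u.max():.2f}")
```

    Within one color all points depend only on neighbours of the other color, so each half-sweep can be updated fully in parallel, which is exactly why the red-black ordering suits vector and multi-CPU architectures.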

  5. Numerical Recipes in C++: The Art of Scientific Computing (2nd edn). Numerical Recipes Example Book (C++) (2nd edn). Numerical Recipes Multi-Language Code CD ROM with LINUX or UNIX Single-Screen License Revised Version

    International Nuclear Information System (INIS)

    Borcherds, P

    2003-01-01

    The two Numerical Recipes books are marvellous. The principal book, The Art of Scientific Computing, contains program listings for almost every conceivable requirement, and it also contains a well written discussion of the algorithms and the numerical methods involved. The Example Book provides a complete driving program, with helpful notes, for nearly all the routines in the principal book. The first edition of Numerical Recipes: The Art of Scientific Computing was published in 1986 in two versions, one with programs in Fortran, the other with programs in Pascal. There were subsequent versions with programs in BASIC and in C. The second, enlarged edition was published in 1992, again in two versions, one with programs in Fortran (NR(F)), the other with programs in C (NR(C)). In 1996 the authors produced Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing as a supplement, called Volume 2, with the original (Fortran) version referred to as Volume 1. Numerical Recipes in C++ (NR(C++)) is another version of the 1992 edition. The numerical recipes are also available on a CD ROM: if you want to use any of the recipes, I would strongly advise you to buy the CD ROM. The CD ROM contains the programs in all the languages. When the first edition was published I bought it, and have also bought copies of the other editions as they have appeared. Anyone involved in scientific computing ought to have a copy of at least one version of Numerical Recipes, and there also ought to be copies in every library. If you already have NR(F), should you buy the NR(C++) and, if not, which version should you buy? In the preface to Volume 2 of NR(F), the authors say 'C and C++ programmers have not been far from our minds as we have written this volume, and we think that you will find that time spent in absorbing its principal lessons will be amply repaid in the future as C and C++ eventually develop standard parallel extensions'. In the preface and introduction to NR

  6. Accuracy and reliability of linear cephalometric measurements from cone-beam computed tomography scans of a dry human skull.

    Science.gov (United States)

    Berco, Mauricio; Rigali, Paul H; Miner, R Matthew; DeLuca, Stephelynn; Anderson, Nina K; Will, Leslie A

    2009-07-01

    The purpose of this study was to determine the accuracy and reliability of 3-dimensional craniofacial measurements obtained from cone-beam computed tomography (CBCT) scans of a dry human skull. Seventeen landmarks were identified on the skull. CBCT scans were then obtained, with 2 skull orientations during scanning. Twenty-nine interlandmark linear measurements were made directly on the skull and compared with the same measurements made on the CBCT scans. All measurements were made by 2 operators on 4 separate occasions. The method errors were 0.19, 0.21, and 0.19 mm in the x-, y- and z-axes, respectively. Repeated measures analysis of variance (ANOVA) showed no significant intraoperator or interoperator differences. The mean measurement error was -0.01 mm (SD, 0.129 mm). Five measurement errors were found to be statistically significantly different; however, all measurement errors were below the known voxel size and clinically insignificant. No differences were found in the measurements from the 2 CBCT scan orientations of the skull. CBCT allows for clinically accurate and reliable 3-dimensional linear measurements of the craniofacial complex. Moreover, skull orientation during CBCT scanning does not affect the accuracy or the reliability of these measurements.

  7. Accuracy and Reliability of Cone-Beam Computed Tomography for Linear and Volumetric Mandibular Condyle Measurements. A Human Cadaver Study.

    Science.gov (United States)

    García-Sanz, Verónica; Bellot-Arcís, Carlos; Hernández, Virginia; Serrano-Sánchez, Pedro; Guarinos, Juan; Paredes-Gallardo, Vanessa

    2017-09-20

    The accuracy of Cone-Beam Computed Tomography (CBCT) on linear and volumetric measurements on condyles has only been assessed on dry skulls. The aim of this study was to evaluate the reliability and accuracy of linear and volumetric measurements of mandibular condyles in the presence of soft tissues using CBCT. Six embalmed cadaver heads were used. CBCT scans were taken, followed by the extraction of the condyles. The water displacement technique was used to calculate the volumes of the condyles and three linear measurements were made using a digital caliper, these measurements serving as the gold standard. Surface models of the condyles were obtained using a 3D scanner, and superimposed onto the CBCT images. Condyles were isolated on the CBCT render volume using the surface models as reference and volumes were measured. Linear measurements were made on CBCT slices. The CBCT method was found to be reliable for both volumetric and linear measurements (CV  0.90). Highly accurate values were obtained for the three linear measurements and volume. CBCT is a reliable and accurate method for taking volumetric and linear measurements on mandibular condyles in the presence of soft tissue, and so a valid tool for clinical diagnosis.

  8. Computing interval-valued reliability measures: application of optimal control methods

    DEFF Research Database (Denmark)

    Kozin, Igor; Krymsky, Victor

    2017-01-01

    The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular Pontryagin's maximum principle, to solve the non-linear optimisation problem and derive the probabilistic interval-valued quantities of interest. It is proven that the optimisation problem can be translated into another problem statement that can be solved on the class of piecewise continuous probability density functions (pdfs). This class often consists of piecewise exponential pdfs, which appear as soon as the constraints include bounds on the failure rate of the component under consideration. Finding the number of switching points of the piecewise continuous pdfs and their values becomes the focus of the approach described in the paper. Examples are provided.

  9. Computing interval-valued statistical characteristics: What is the stumbling block for reliability applications?

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, V.G.

    2009-01-01

    The application of interval-valued statistical models is often hindered by the rapid growth in imprecision that occurs when intervals are propagated through models. Is this deficiency inherent in the models? If so, what is the underlying cause of imprecision in mathematical terms? What kind of additional information can be incorporated to make the bounds tighter? The present paper gives an account of the source of this imprecision that prevents interval-valued statistical models from being widely applied. Firstly, the mathematical approach to building interval-valued models (discrete and continuous) is delineated. Secondly, a degree of imprecision is demonstrated on some simple reliability models. Thirdly, the root mathematical cause of sizeable imprecision is elucidated and, finally, a method of making the intervals tighter is described. A number of examples are given throughout the paper.

  10. Statistical test data selection for reliability evaluation of process computer software

    International Nuclear Information System (INIS)

    Volkmann, K.P.; Hoermann, H.; Ehrenberger, W.

    1976-01-01

    The paper presents a concept for converting knowledge about the characteristics of process states into practicable procedures for the statistical selection of test cases in testing process computer software. Process states are defined as vectors whose components consist of values of input variables lying in discrete positions or within given limits. Two approaches for test data selection, based on knowledge about cases of demand, are outlined referring to a purely probabilistic method and to the mathematics of stratified sampling. (orig.) [de

  11. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    International Nuclear Information System (INIS)

    Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie; Mandelli, Diego; Smith, Curtis Lee

    2015-01-01

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS 'pathways,' or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  12. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Herberger, Sarah Elizabeth Marie [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  13. An analytically based numerical method for computing view factors in real urban environments

    Science.gov (United States)

    Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun

    2018-01-01

    A view factor is an important morphological parameter used in parameterizing the in-canyon radiative energy exchange process as well as in characterizing the local climate over urban environments. For a realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine the view factors for urban environments, but most of the methods provide only the sky-view factor at the ground level of a specific location or assume a simplified morphology of complex urban environments. In this study, a numerical method that can determine the sky-view factors (ψ_ga and ψ_wa) and wall-view factors (ψ_gw and ψ_ww) at the horizontal and vertical surfaces is presented for application to real urban morphology; it is derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The established numerical method is validated against the analytical sky-view factor estimation for ideal street canyon geometries, showing good accuracy with errors of less than 0.2%. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable in determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that the analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
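
    For the ideal-canyon validation case mentioned above, the sky-view factor at the centre of the canyon floor can be checked against the classical closed-form result. The sketch below estimates it by cosine-weighted Monte Carlo ray sampling and compares it with the analytical value; the canyon dimensions are illustrative and this is not the paper's algorithm.

```python
import numpy as np

def canyon_svf_monte_carlo(height, width, n_rays=200_000, seed=0):
    """Sky-view factor at the centre of the floor of an infinitely long street
    canyon, estimated by cosine-weighted hemisphere sampling: a ray 'sees' the
    sky unless it crosses one of the two walls (at x = +/- width/2) below roof level."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.random(n_rays), rng.random(n_rays)
    r, phi = np.sqrt(u1), 2 * np.pi * u2
    dx, dz = r * np.cos(phi), np.sqrt(1.0 - u1)     # dy is irrelevant for an infinite canyon
    with np.errstate(divide="ignore"):
        z_at_wall = dz * (width / 2) / np.abs(dx)   # height at which the ray crosses a wall plane
    return np.mean(z_at_wall >= height)             # unobstructed fraction = view factor

H, W = 20.0, 20.0                                   # canyon height and width in metres
analytic = (W / 2) / np.hypot(H, W / 2)             # closed-form SVF at the canyon floor centre
print(f"Monte Carlo SVF = {canyon_svf_monte_carlo(H, W):.3f}, analytic = {analytic:.3f}")
```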

  14. Fast numerical solution of KKR-CPA equations: Testing new algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, E.; Florio, G.M.; Ginatempo, B.; Giuliano, E.S. (Universita di Messina (Italy))

    1994-04-01

    Some numerical methods for the solution of the KKR-CPA equations are discussed and tested. New, efficient computational algorithms are proposed, allowing a remarkable reduction in computing time and good reliability in evaluating spectral quantities. 16 refs., 7 figs.

  15. Computation of posterior distribution in Bayesian analysis – application in an intermittently used reliability system

    Directory of Open Access Journals (Sweden)

    V. S.S. Yadavalli

    2002-09-01

    Full Text Available Bayesian estimation is presented for the stationary rate of disappointments, D∞, for two models (with different specifications) of intermittently used systems. The random variables in the system are considered to be independently exponentially distributed. Jeffreys' prior is assumed for the unknown parameters in the system. Inference about D∞ is constrained in both models by the complex and non-linear definition of D∞. Monte Carlo simulation is used to derive the posterior distribution of D∞ and subsequently the highest posterior density (HPD) intervals. A numerical example where Bayes estimates and the HPD intervals are determined illustrates these results. This illustration is extended to determine the frequentist properties of this Bayes procedure, by calculating coverage proportions for each of these HPD intervals, assuming fixed values for the parameters.
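
    A minimal sketch of the "Monte Carlo posterior plus HPD interval" machinery for a single exponential failure rate under Jeffreys' prior. The quantity estimated here is a plain component rate rather than the paper's stationary disappointment rate D∞, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic failure data: n observed inter-failure times (hours)
times = rng.exponential(scale=250.0, size=12)
n, T = times.size, times.sum()

# Under Jeffreys' prior pi(lambda) ~ 1/lambda, the posterior of an exponential
# failure rate is Gamma(shape=n, rate=T); draw Monte Carlo samples from it.
post = rng.gamma(shape=n, scale=1.0 / T, size=100_000)

def hpd_interval(samples, mass=0.95):
    """Shortest interval containing `mass` of the samples (empirical HPD)."""
    s = np.sort(samples)
    m = int(np.ceil(mass * s.size))
    widths = s[m - 1:] - s[:s.size - m + 1]
    i = np.argmin(widths)
    return s[i], s[i + m - 1]

lo, hi = hpd_interval(post)
print(f"posterior mean rate = {post.mean():.2e} /h, 95% HPD = ({lo:.2e}, {hi:.2e})")
```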

  16. Numerical investigation of the High Temperature Reactor (VHTR) using computational fluid dynamics

    International Nuclear Information System (INIS)

    Pinto, Joao Pedro C.T.A.; Santos, Andre A. Campagnole dos; Mesquita, Amir Z.

    2013-01-01

    This work evaluates and continues the study being developed in the Laboratory of Thermo-Hydraulics of CNEN/CDTN (Centro de Desenvolvimento da Tecnologia Nuclear), aiming to validate the methods and procedures used in the numerical calculation of fluid flow in fuel elements of the VHTR core

  17. Film Cooling Optimization Using Numerical Computation of the Compressible Viscous Flow Equations and Simplex Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmed M. Elsayed

    2013-01-01

    Full Text Available Film cooling is vital to gas turbine blades to protect them from high temperatures and hence high thermal stresses. In the current work, optimization of film cooling parameters on a flat plate is investigated numerically. The effect of film cooling parameters such as inlet velocity direction, lateral and forward diffusion angles, blowing ratio, and streamwise angle on the cooling effectiveness is studied, and optimum cooling parameters are selected. The numerical simulation of the coolant flow through the flat plate hole system is carried out using the “CFDRC package” coupled with the optimization algorithm “simplex” to maximize overall film cooling effectiveness. An unstructured finite volume technique is used to solve the steady, three-dimensional and compressible Navier-Stokes equations. The results are compared with published numerical and experimental data for a simple cylindrical round hole, and show good agreement. In addition, the results indicate that the average overall film cooling effectiveness is enhanced by decreasing the streamwise angle for high blowing ratio and by increasing the lateral and forward diffusion angles. The optimum geometry of the cooling hole on a flat plate is determined. In addition, numerical simulations of film cooling on an actual turbine blade are performed using the flat plate optimal hole geometry.
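
    The optimization loop can be sketched with scipy's Nelder-Mead simplex. The "effectiveness" function below is an invented smooth surrogate standing in for the expensive CFD evaluation, so only the wiring of the optimizer, not the physics, reflects the record; the parameter names and optimum location are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def film_effectiveness(params):
    """Surrogate for area-averaged film-cooling effectiveness (NOT a CFD model):
    a smooth function with a single maximum, used only to stand in for the
    expensive flow solution in this sketch."""
    stream_angle, lateral_diff, forward_diff, blowing_ratio = params
    eta = (np.exp(-((stream_angle - 30.0) / 15.0) ** 2)
           * np.exp(-((lateral_diff - 12.0) / 8.0) ** 2)
           * np.exp(-((forward_diff - 10.0) / 8.0) ** 2)
           * np.exp(-((blowing_ratio - 1.0) / 0.8) ** 2))
    return -eta                              # minimize the negative -> maximize effectiveness

x0 = np.array([45.0, 5.0, 5.0, 1.5])         # streamwise angle, lateral/forward diffusion (deg), M
res = minimize(film_effectiveness, x0, method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-6, "maxiter": 2000})
print("optimum parameters:", np.round(res.x, 2), " surrogate effectiveness:", -res.fun)
```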

  18. Decoherence and Noise in Spin-based Solid State Quantum Computers. Approximation-Free Numerical Simulations

    National Research Council Canada - National Science Library

    Harmon, Bruce N; Dobrovitski, Viatcheslav V

    2007-01-01

    ...) have also been developed and applied. Most recently, specific strategies for quantum control have been investigated for realistic systems in order to extend the coherence times for spin-based quantum computing implementations...

  19. SINCRO/CAR: An interactive numerical system for computer-aided control engineering and maintenance

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Despujols, A.

    1986-01-01

    This presentation describes dialogue-oriented software implemented on a portable computer for computer-aided engineering and training in control instrumentation, and also for on-line verification of the performance of the analog controllers installed in power plants. The SINCRO/CAR software includes algorithms for controller design, simulation, identification, optimization, frequency response and real-time data acquisition. Various results obtained on fossil-fired and nuclear plants are given to illustrate the efficiency of the SINCRO/CAR software

  20. Reliability Analysis Based on a Jump Diffusion Model with Two Wiener Processes for Cloud Computing with Big Data

    Directory of Open Access Journals (Sweden)

    Yoshinobu Tamura

    2015-06-01

    Full Text Available At present, many cloud services are managed by using open source software, such as OpenStack and Eucalyptus, because of the unified management of data, cost reduction, quick delivery and work savings. The operation phase of cloud computing has unique features, such as the provisioning processes, the network-based operation and the diversity of data, because it changes depending on many external factors. We propose a jump diffusion model with two-dimensional Wiener processes in order to consider the interesting aspects of network traffic and big data on cloud computing. In particular, we assess the stability of cloud software by using the sample paths obtained from the jump diffusion model with two-dimensional Wiener processes. Moreover, we discuss the optimal maintenance problem based on the proposed jump diffusion model. Furthermore, we analyze actual data to show numerical examples of dependability optimization based on the software maintenance cost, considering big data on cloud computing.
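
    A minimal sketch of simulating sample paths of a jump-diffusion process, using a single Wiener process plus compound Poisson jumps rather than the paper's two-dimensional formulation; all parameter values are invented for illustration.

```python
import numpy as np

def jump_diffusion_path(x0, mu, sigma, jump_rate, jump_mean, jump_sd, T, n_steps, rng):
    """Euler-Maruyama path of dX = mu*X dt + sigma*X dW + X dJ, where J is a
    compound Poisson process with normally distributed relative jump sizes."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(jump_rate * dt)
        jump = rng.normal(jump_mean, jump_sd, size=n_jumps).sum() if n_jumps else 0.0
        x[i + 1] = x[i] + mu * x[i] * dt + sigma * x[i] * dw + x[i] * jump
    return x

rng = np.random.default_rng(7)
paths = np.array([jump_diffusion_path(x0=1.0, mu=0.05, sigma=0.2, jump_rate=2.0,
                                      jump_mean=-0.1, jump_sd=0.05, T=1.0,
                                      n_steps=1000, rng=rng) for _ in range(200)])
print(f"mean terminal level = {paths[:, -1].mean():.3f}, "
      f"5th percentile = {np.percentile(paths[:, -1], 5):.3f}")
```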

  1. Numerical Nuclear Second Derivatives on a Computing Grid: Enabling and Accelerating Frequency Calculations on Complex Molecular Systems.

    Science.gov (United States)

    Yang, Tzuhsiung; Berry, John F

    2018-06-04

    The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2LYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
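
    The core idea, central differences of analytic gradients with one independent pair of displaced-gradient evaluations per coordinate, can be sketched in a few lines. The toy quadratic potential below stands in for the quantum-chemistry gradients, and the grid dispatch layer is omitted; because the 6N displaced-gradient calls do not depend on each other, they are exactly the kind of workload that can be farmed out to a grid.

```python
import numpy as np

def numerical_hessian(grad, x0, step=1e-4):
    """Hessian by central differences of the gradient: column i uses gradients
    at x0 +/- step*e_i (2n independent gradient evaluations in total)."""
    n = x0.size
    H = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = step
        H[:, i] = (grad(x0 + e) - grad(x0 - e)) / (2 * step)
    return 0.5 * (H + H.T)                   # symmetrize to remove numerical noise

# Toy "potential": coupled harmonic oscillators with known analytic Hessian K
K = np.array([[2.0, -0.5, 0.0],
              [-0.5, 1.5, -0.3],
              [0.0, -0.3, 1.0]])
grad = lambda x: K @ x                       # analytic gradient of 0.5 * x^T K x
H = numerical_hessian(grad, x0=np.zeros(3))
print("max deviation from analytic Hessian:", np.abs(H - K).max())
```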

  2. System reliability with correlated components: Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  3. System reliability with correlated components : Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, T.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  4. A computational model for reliability calculation of steam generators from defects in its tubes

    International Nuclear Information System (INIS)

    Rivero, Paulo C.M.; Melo, P.F. Frutuoso e

    2000-01-01

    Nowadays, probability approaches are employed for calculating the reliability of steam generators as a function of defects in their tubes without any deterministic association with warranty assurance. Unfortunately, probability models produce large failure values, as opposed to the recommendation of the U.S. Code of Federal Regulations, that is, failure probabilities must be as small as possible. In this paper, we propose the association of the deterministic methodology with the probabilistic one. At first, the failure probability evaluation of steam generators follows a probabilistic methodology: to find the failure probability, critical cracks - obtained from Monte Carlo simulations - are limited to have lengths in the interval defined by their lower value and the plugging limit, so as to obtain a failure probability of at most 1%. The distribution employed for modeling the observed (measured) cracks considers the same interval. Any length outside the mentioned interval is not considered for the probability evaluation: it is handled by the deterministic model. The deterministic approach is to plug the tube when any anomalous crack is detected in it. Such a crack is an observed one placed in the third region of the plot of the logarithmic time derivative of crack length versus the mode I stress intensity factor, while for normal cracks the plugging of tubes occurs in the second region of that plot - if they are dangerous, of course, considering their random evolution. A methodology for identifying anomalous cracks is also presented. (author)
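    A generic Monte Carlo failure-probability estimate of the kind referred to above can be sketched as follows; the crack-length and critical-length distributions are hypothetical placeholders, not the steam-generator data or the truncated-interval treatment used in the paper.

```python
import numpy as np

# Generic Monte Carlo failure-probability sketch: a tube "fails" when a sampled
# observed crack length exceeds the sampled critical crack length.
# Distributions and parameters below are assumptions for illustration only.
rng = np.random.default_rng(42)
n = 200_000
observed = rng.lognormal(mean=0.5, sigma=0.4, size=n)   # crack lengths (mm), hypothetical
critical = rng.normal(loc=6.0, scale=0.8, size=n)       # critical lengths (mm), hypothetical
p_fail = np.mean(observed >= critical)
print(f"estimated failure probability: {p_fail:.2e}")
```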

  5. Computed tomography and angiography do not reliably discriminate malignant meningiomas from benign ones

    International Nuclear Information System (INIS)

    Servo, A.; Porras, M.; Jaeaeskelaeinen, J.; Paetau, A.; Haltia, M.

    1990-01-01

    Histological anaplasia, found in up to 10% of meningiomas, is an important prognostic sign as it is associated with increased recurrence rate and volume growth rate. We studied in retrospect a series of 230 primary intracranial meningiomas to discover whether histological anaplasia can be reliably foreseen in CT scans and angiograms. 205 meningiomas were histologically benign, and 25 meningiomas were classified as malignant (atypical or anaplastic), with either incipient (20) or overt (5) signs of anaplasia. Of ten CT parameters tested, three were associated significantly more often with malignant meningiomas: Nodular contour (58.3% vs 26.7%); cysts (20.0% vs 4.4%) and absence of calcifications (92% vs 65.3%); none of these parameters was an absolute sign of anaplasia. 'Mushrooming', previously regarded as a definite sign of malignancy, was seen in 9% of benign meningiomas and in 21% of malignant ones. In angiography, no apparent differences between benign and malignant meningiomas were seen. The conclusion is that it is not possible to distinguish malignant meningiomas from benign ones with CT or angiography. (orig.)

  6. Numerical stabilization of entanglement computation in auxiliary-field quantum Monte Carlo simulations of interacting many-fermion systems.

    Science.gov (United States)

    Broecker, Peter; Trebst, Simon

    2016-12-01

    In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.

  7. A numerical scheme using multi-shockpeakons to compute solutions of the Degasperis-Procesi equation

    Directory of Open Access Journals (Sweden)

    Hakon A. Hoel

    2007-07-01

    Full Text Available We consider a numerical scheme for entropy weak solutions of the DP (Degasperis-Procesi) equation $u_t - u_{xxt} + 4uu_x = 3u_{x}u_{xx} + uu_{xxx}$. Multi-shockpeakons, functions of the form $$ u(x,t) = \sum_{i=1}^n \big(m_i(t) - \operatorname{sign}(x-x_i(t))\, s_i(t)\big)\, e^{-|x-x_i(t)|}, $$ are solutions of the DP equation with a special property; their evolution in time is described by a dynamical system of ODEs. This property makes multi-shockpeakons relatively easy to simulate numerically. We prove that if we are given a non-negative initial function $u_0 \in L^1(\mathbb{R}) \cap BV(\mathbb{R})$ such that $u_{0} - u_{0,x}$ is a positive Radon measure, then one can construct a sequence of multi-shockpeakons which converges to the unique entropy weak solution in $\mathbb{R}\times[0,T)$ for any $T>0$. From this convergence result, we construct a multi-shockpeakon based numerical scheme for solving the DP equation.
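    For illustration, the multi-shockpeakon ansatz quoted above can be evaluated directly at a fixed time. The momenta m_i, shock strengths s_i and positions x_i below are arbitrary sample values; the time evolution (the ODE system for these coefficients) is not reproduced here.

```python
import numpy as np

def shockpeakon(x, m, s, xi):
    # Evaluate u(x) = sum_i (m_i - sign(x - x_i) s_i) exp(-|x - x_i|)
    # for arrays of momenta m, shock strengths s and positions xi at a fixed time.
    x = np.atleast_1d(x)[:, None]
    return np.sum((m - np.sign(x - xi) * s) * np.exp(-np.abs(x - xi)), axis=1)

# two shockpeakons with hypothetical coefficients
xs = np.linspace(-5, 5, 11)
print(shockpeakon(xs,
                  m=np.array([1.0, 0.5]),
                  s=np.array([0.2, 0.1]),
                  xi=np.array([-1.0, 2.0])))
```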

  8. Computation of Quasi-Periodic Normally Hyperbolic Invariant Tori: Algorithms, Numerical Explorations and Mechanisms of Breakdown

    Science.gov (United States)

    Canadell, Marta; Haro, Àlex

    2017-12-01

    We present several algorithms for computing normally hyperbolic invariant tori carrying quasi-periodic motion of a fixed frequency in families of dynamical systems. The algorithms are based on a KAM scheme presented in Canadell and Haro (J Nonlinear Sci, 2016. doi: 10.1007/s00332-017-9389-y), to find the parameterization of the torus with prescribed dynamics by detuning parameters of the model. The algorithms use different hyperbolicity and reducibility properties and, in particular, also compute the invariant bundles and Floquet transformations. We implement these methods in several 2-parameter families of dynamical systems, to compute quasi-periodic arcs, that is, the parameters for which 1D normally hyperbolic invariant tori with a given fixed frequency do exist. The implementation lets us perform the continuations up to the tip of the quasi-periodic arcs, at which the invariant curves break down. Three different mechanisms of breakdown are analyzed, using several observables, leading to several conjectures.

  9. Estimation of numerical uncertainty in computational fluid dynamics simulations of a passively controlled wave energy converter

    DEFF Research Database (Denmark)

    Wang, Weizhi; Wu, Minghao; Palm, Johannes

    2018-01-01

    The wave loads and the resulting motions of floating wave energy converters are traditionally computed using linear radiation–diffraction methods. Yet for certain cases such as survival conditions, phase control and wave energy converters operating in the resonance region, more complete ... dynamics simulations have largely been overlooked in the wave energy sector. In this article, we apply formal verification and validation techniques to computational fluid dynamics simulations of a passively controlled point absorber. The phase control causes the motion response to be highly nonlinear even for almost linear incident waves. First, we show that the computational fluid dynamics simulations have acceptable agreement to experimental data. We then present a verification and validation study focusing on the solution verification covering spatial and temporal discretization, iterative and domain ...

  10. Unified algorithm for partial differential equations and examples of numerical computation

    International Nuclear Information System (INIS)

    Watanabe, Tsuguhiro

    1999-01-01

    A new unified algorithm is proposed to solve partial differential equations which describe nonlinear boundary value problems, eigenvalue problems and time-developing boundary value problems. The algorithm is composed of an implicit difference scheme and a multiple shooting scheme and is named HIDM (Higher order Implicit Difference Method). A new prototype computer program for 2-dimensional partial differential equations has been constructed and tested successfully on several problems. Extension of the computer program to 3 or more dimensions will be easy due to the direct product type difference scheme. (author)

  11. Reliable computation of roots in analytical waveguide modeling using an interval-Newton approach and algorithmic differentiation.

    Science.gov (United States)

    Bause, Fabian; Walther, Andrea; Rautenberg, Jens; Henning, Bernd

    2013-12-01

    For the modeling and simulation of wave propagation in geometrically simple waveguides such as plates or rods, one may employ the analytical global matrix method. That is, a certain (global) matrix depending on the two parameters wavenumber and frequency is built. Subsequently, one must calculate all parameter pairs within the domain of interest where the global matrix becomes singular. For this purpose, one could compute all roots of the determinant of the global matrix when the two parameters vary in the given intervals. This requirement to calculate all roots is actually the method's most concerning restriction. Previous approaches are based on so-called mode-tracers, which use the physical phenomenon that solutions, i.e., roots of the determinant of the global matrix, appear in a certain pattern, the waveguide modes, to limit the root-finding algorithm's search space with respect to consecutive solutions. In some cases, these reductions of the search space yield only an incomplete set of solutions, because some roots may be missed as a result of uncertain predictions. Therefore, we propose replacement of the mode-tracer approach with a suitable version of an interval-Newton method. To apply this interval-based method, we extended the interval and derivative computation provided by a numerical computing environment such that corresponding information is also available for Bessel functions used in circular models of acoustic waveguides. We present numerical results for two different scenarios. First, a polymeric cylindrical waveguide is simulated, and second, we show simulation results of a one-sided fluid-loaded plate. For both scenarios, we compare results obtained with the proposed interval-Newton algorithm and commercial software.
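    A minimal one-dimensional interval-Newton sketch conveys the idea of finding all roots while rigorously discarding root-free boxes. It uses hand-coded interval enclosures for a toy function rather than the algorithmic-differentiation enclosures and Bessel-function extensions described in the paper, and it falls back to bisection instead of extended interval division when the derivative enclosure contains zero.

```python
def interval_newton(f, F, dF, lo, hi, tol=1e-10, out=None):
    # All roots of f in [lo, hi] via a basic 1-D interval Newton sketch.
    # F(a, b)  -> interval enclosure of f  on [a, b]
    # dF(a, b) -> interval enclosure of f' on [a, b]
    if out is None:
        out = []
    Flo, Fhi = F(lo, hi)
    if Flo > 0.0 or Fhi < 0.0:                 # 0 not in range enclosure: no root here
        return out
    if hi - lo < tol:
        out.append(0.5 * (lo + hi))
        return out
    mid = 0.5 * (lo + hi)
    dlo, dhi = dF(lo, hi)
    if dlo <= 0.0 <= dhi:                      # derivative enclosure contains 0: bisect
        interval_newton(f, F, dF, lo, mid, tol, out)
        interval_newton(f, F, dF, mid, hi, tol, out)
        return out
    fm = f(mid)
    n1, n2 = sorted((mid - fm / dlo, mid - fm / dhi))   # interval Newton image
    new_lo, new_hi = max(lo, n1), min(hi, n2)           # intersect with current box
    if new_lo > new_hi:                        # empty intersection: no root here
        return out
    if new_hi - new_lo > 0.8 * (hi - lo):      # poor contraction: bisect instead
        interval_newton(f, F, dF, lo, mid, tol, out)
        interval_newton(f, F, dF, mid, hi, tol, out)
    else:
        interval_newton(f, F, dF, new_lo, new_hi, tol, out)
    return out

# Example: all roots of f(x) = x**2 - 2 on [-3, 3] (expect +/- sqrt(2)).
f  = lambda x: x * x - 2.0
F  = lambda a, b: ((0.0 if a <= 0.0 <= b else min(a*a, b*b)) - 2.0, max(a*a, b*b) - 2.0)
dF = lambda a, b: (2.0 * a, 2.0 * b)
print(interval_newton(f, F, dF, -3.0, 3.0))
```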

  12. Numerical solution of conservation equations in the transient model for the system thermal - hydraulics in the Korsar computer code

    International Nuclear Information System (INIS)

    Yudov, Y.V.

    2001-01-01

    The functional part of the KORSAR computer code is based on the computational unit for the reactor system thermal-hydraulics and other thermal power systems with water cooling. The two-phase flow dynamics of the thermal-hydraulic network is modelled by KORSAR in a one-dimensional two-fluid (non-equilibrium and nonhomogeneous) approximation with the same pressure for both phases. Each phase is characterized by parameters averaged over the channel sections and described by the conservation equations for mass, energy and momentum. The KORSAR computer code relies upon a novel approach to mathematical modelling of two-phase dispersed-annular flows. This approach allows a two-fluid model to differentiate the effects of the liquid film and droplets in the gas core on the flow characteristics. A semi-implicit numerical scheme has been chosen for deriving discrete analogs of the conservation equations in KORSAR. In the semi-implicit numerical scheme, solution of the finite-difference equations is reduced to the problem of determining the pressure field at a new time level. For the one-channel case, the pressure field is found from the solution of a system of linear algebraic equations by using the tri-diagonal matrix method. In the branched network calculation, the matrix of coefficients in the equations describing the pressure field is no longer tri-diagonal but has a sparse structure. In this case, the system of linear equations for the pressure field can be solved with any of the known classical methods. Such an approach is implemented in the existing best-estimate thermal-hydraulic computer codes (TRAC, RELAP5, etc.). For the KORSAR computer code, we have developed a new non-iterative method for calculating the pressure field in a network of any topology. This method is based on the tri-diagonal matrix method and performs well when solving thermal-hydraulic network problems. (author)
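    The tri-diagonal matrix method mentioned above is the classical Thomas algorithm; a compact single-channel sketch follows. The branched-network variant developed for KORSAR is not reproduced here, and the test system is a generic 1-D Poisson-like problem rather than an actual pressure-field solve.

```python
import numpy as np

def thomas(a, b, c, d):
    # Solve a tri-diagonal system with the Thomas algorithm.
    # a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    # (c[-1] unused), d: right-hand side.
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1-D Poisson-like test: -x_{i-1} + 2 x_i - x_{i+1} = h^2 (illustrative only)
n, h = 50, 1.0 / 51
a = -np.ones(n); b = 2.0 * np.ones(n); c = -np.ones(n); d = h * h * np.ones(n)
print(thomas(a, b, c, d)[:5])
```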

  13. Numerical methodologies for investigation of moderate-velocity flow using a hybrid computational fluid dynamics - molecular dynamics simulation approach

    International Nuclear Information System (INIS)

    Ko, Soon Heum; Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel; Jha, Shantenu

    2014-01-01

    Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than the traditional continuum-based simulation techniques. It also involves less computational cost than the pure particle-based MD. Despite these advantages, the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique from the perspective of solution accuracy and computational cost.
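    The two noise-reduction ideas (averaging independent replicas and regressing over spatial samples) can be illustrated on synthetic data. The linear "true" velocity profile and the noise level below are assumptions chosen for demonstration only, not MD output.

```python
import numpy as np

# Reducing thermal noise in a sampled velocity profile by (i) averaging
# independent replicas and (ii) linear regression over spatial bins.
rng = np.random.default_rng(1)
n_replicas, n_bins = 8, 32
y_true = np.linspace(0.0, 0.05, n_bins)                  # slow, noise-dominated profile (assumed)
samples = y_true + rng.normal(0.0, 0.05, (n_replicas, n_bins))

replica_mean = samples.mean(axis=0)                      # noise drops as 1/sqrt(replicas)
z = np.linspace(0.0, 1.0, n_bins)
slope, intercept = np.polyfit(z, replica_mean, 1)        # regression smoothing over bins
print("fitted profile: u(z) ~ %.4f z + %.4f" % (slope, intercept))
```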

  14. Computational approaches to standard-compliant biofilm data for reliable analysis and integration.

    Science.gov (United States)

    Sousa, Ana Margarida; Ferreira, Andreia; Azevedo, Nuno F; Pereira, Maria Olivia; Lourenço, Anália

    2012-12-01

    The study of microorganism consortia, also known as biofilms, is associated with a number of applications in biotechnology, ecotechnology and clinical domains. Nowadays, biofilm studies are heterogeneous and data-intensive, encompassing different levels of analysis. Computational modelling of biofilm studies has thus become a requirement to make sense of these vast and ever-expanding biofilm data volumes. The rationale of the present work is a machine-readable format for representing biofilm studies and supporting biofilm data interchange and data integration. This format is supported by the Biofilm Science Ontology (BSO), the first ontology on biofilm information. The ontology is decomposed into a number of areas of interest, namely: the Experimental Procedure Ontology (EPO), which describes biofilm experimental procedures; the Colony Morphology Ontology (CMO), which morphologically characterises microorganism colonies; and other modules concerning biofilm phenotype, antimicrobial susceptibility and virulence traits. The overall objective behind BSO is to develop semantic resources to capture, represent and share data on biofilms and related experiments in a regularized manner. Furthermore, the present work also introduces a framework to assist biofilm data interchange and analysis - BiofOmics (http://biofomics.org) - and a public repository on colony morphology signatures - MorphoCol (http://stardust.deb.uminho.pt/morphocol).

  15. A comparative study of computed radiographic cephalometry and conventional cephalometry in reliability of head film measurements

    International Nuclear Information System (INIS)

    Kim, Hyung Done; Kim, Kee Deog; Park, Chang Seo

    1997-01-01

    The purpose of this study was to compare and determine the variability of head film measurements (landmark identification) between Fuji computed radiographic (FCR) cephalometry and conventional cephalometry. 28 Korean adults were selected. A lateral cephalometric FCR film and a conventional cephalometric film were taken of each subject. Four investigators identified 24 cephalometric landmarks on the lateral cephalometric FCR films and the conventional cephalometric films, and the measurements were statistically analysed. The results were as follows: 1. For both the FCR films and the conventional films, the coefficient of variation (C.V.) of the 24 landmarks was computed horizontally and vertically. 2. In the comparison of landmark variability between FCR film and conventional film, the horizontal coefficient of variation showed significant differences for four of the twenty-four landmarks, whereas the vertical coefficient of variation showed significant differences for sixteen of the twenty-four landmarks. FCR film showed significantly less variability than conventional film in 17 of the 20 (4+16) comparisons that showed significant differences.

  16. Nonlinear dynamics and numerical uncertainties in CFD

    Science.gov (United States)

    Yee, H. C.; Sweby, P. K.

    1996-01-01

    The application of nonlinear dynamics to improve the understanding of numerical uncertainties in computational fluid dynamics (CFD) is reviewed. Elementary examples in the use of dynamics to explain the nonlinear phenomena and spurious behavior that occur in numerics are given. The role of dynamics in understanding the long time behavior of numerical integrations and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in CFD is explained. The study is complemented with spurious behavior observed in CFD computations.
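    A classic miniature example of the spurious numerical behavior discussed above is explicit Euler time-marching applied to the logistic ODE u' = u(1 - u): for small steps the iteration converges to the true steady state u = 1, while for steps beyond the stability limit it settles onto a spurious periodic "steady" solution. The step sizes below are chosen only to expose the two regimes; this is a generic illustration, not one of the review's own examples.

```python
import numpy as np

# Explicit Euler applied to u' = u(1 - u); print the last few iterates.
def euler_tail(h, u0=0.3, n=2000, keep=8):
    u = u0
    for _ in range(n):
        u = u + h * u * (1.0 - u)
    tail = []
    for _ in range(keep):
        u = u + h * u * (1.0 - u)
        tail.append(u)
    return tail

print("h = 0.5 :", np.round(euler_tail(0.5), 4))   # converges to the true steady state 1
print("h = 2.3 :", np.round(euler_tail(2.3), 4))   # spurious period-2 asymptotic behavior
```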

  17. Chrysler improved numerical differencing analyzer for third generation computers CINDA-3G

    Science.gov (United States)

    Gaski, J. D.; Lewis, D. R.; Thompson, L. R.

    1972-01-01

    New and versatile method has been developed to supplement or replace use of original CINDA thermal analyzer program in order to take advantage of improved systems software and machine speeds of third generation computers. CINDA-3G program options offer variety of methods for solution of thermal analog models presented in network format.

  18. Theory of surface enrichment in disordered monophasic binary alloys. Numerical computations for Ag-Au alloys

    NARCIS (Netherlands)

    Santen, van R.A.; Boersma, M.A.M.

    1974-01-01

    The regular solution model is used to compute the surface enrichment in the (111)- and (100)-faces of silver-gold alloys. Surface enrichment by silver is predicted to increase if the surface plane becomes less saturated and decreases if one raises the temperature. The possible implications of these

  19. Dosimetric reconstruction of radiological accident by numerical simulations by means associating an anthropomorphic model and a Monte Carlo computation code

    International Nuclear Information System (INIS)

    Courageot, Estelle

    2010-01-01

    After a description of the context of radiological accidents (definition, history, context, exposure types, associated clinical symptoms of irradiation and contamination, medical treatment, return on experience) and a presentation of dose assessment in the case of external exposure (clinical, biological and physical dosimetry), this research thesis describes the principles of numerical reconstruction of a radiological accident, presents some computation codes (the Monte Carlo method, the MCNPX code) and the SESAME tool, and reports an application to an actual case (an accident which occurred in Ecuador in April 2009). The next part reports the developments performed to modify the posture of voxelized phantoms and the experimental and numerical validations. The last part reports a feasibility study for the reconstruction of radiological accidents occurring in external radiotherapy. This work is based on a Monte Carlo simulation of a linear accelerator, with the aim of identifying the most relevant parameters to be implemented in SESAME in the case of external radiotherapy

  20. Quasi-optical converters for high-power gyrotrons: a brief review of physical models, numerical methods and computer codes

    International Nuclear Information System (INIS)

    Sabchevski, S; Zhelyazkov, I; Benova, E; Atanassov, V; Dankov, P; Thumm, M; Arnold, A; Jin, J; Rzesnicki, T

    2006-01-01

    Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion, as well as the maintenance of a low level of diffraction losses, is crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes may provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical basis on which they are treated and the basic features of the numerical schemes used. Further on, we discuss the applicability of several commercially available and free software packages, their advantages and drawbacks, for solving QO-related problems.

  1. Evaluation of the reliability and accuracy of using cone-beam computed tomography for diagnosing periapical cysts from granulomas.

    Science.gov (United States)

    Guo, Jing; Simon, James H; Sedghizadeh, Parish; Soliman, Osman N; Chapman, Travis; Enciso, Reyes

    2013-12-01

    The purpose of this study was to evaluate the reliability and accuracy of cone-beam computed tomographic (CBCT) imaging against the histopathologic diagnosis for the differential diagnosis of periapical cysts (cavitated lesions) from (solid) granulomas. Thirty-six periapical lesions were imaged using CBCT scans. Apicoectomy surgeries were conducted for histopathological examination. Evaluator 1 examined each CBCT scan for the presence of 6 radiologic characteristics of a cyst (ie, location, periphery, shape, internal structure, effects on surrounding structure, and perforation of the cortical plate). Not every cyst showed all radiologic features (eg, not all cysts perforate the cortical plate). For the purpose of finding the minimum number of diagnostic criteria present in a scan to diagnose a lesion as a cyst, we conducted 6 receiver operating characteristic curve analyses comparing CBCT diagnoses with the histopathologic diagnosis. Two other independent evaluators examined the CBCT lesions. Statistical tests were conducted to examine the accuracy, inter-rater reliability, and intrarater reliability of CBCT images. Findings showed that a score of ≥4 positive findings was the optimal scoring system. The accuracies of differential diagnoses of 3 evaluators were moderate (area under the curve = 0.76, 0.70, and 0.69 for evaluators 1, 2, and 3, respectively). The inter-rater agreement of the 3 evaluators was excellent (α = 0.87). The intrarater agreement was good to excellent (κ = 0.71, 0.76, and 0.77). CBCT images can provide a moderately accurate diagnosis between cysts and granulomas. Copyright © 2013 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
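    The "≥ 4 positive findings" cut-off can be pictured with a simple threshold sweep over a 0-6 radiologic score. The counts and score distributions below are synthetic, not the study's data; they only show how sensitivity and specificity trade off as the cut-off varies.

```python
import numpy as np

# Threshold sweep over a 0-6 feature score (synthetic data, for illustration only):
# a lesion with >= k positive findings is called a cyst, and sensitivity/specificity
# are computed against a known (here simulated) histopathologic label.
rng = np.random.default_rng(5)
scores_cyst = rng.binomial(6, 0.7, 18)        # cysts tend to show more radiologic features
scores_gran = rng.binomial(6, 0.3, 18)        # granulomas tend to show fewer

for k in range(1, 7):
    sens = np.mean(scores_cyst >= k)
    spec = np.mean(scores_gran < k)
    print(f"cut-off >= {k}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```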

  2. A Simple and Efficient Numerical Method for Computing the Dynamics of Rotating Bose--Einstein Condensates via Rotating Lagrangian Coordinates

    KAUST Repository

    Bao, Weizhu

    2013-01-01

    We propose a simple, efficient, and accurate numerical method for simulating the dynamics of rotating Bose-Einstein condensates (BECs) in a rotational frame with or without long-range dipole-dipole interaction (DDI). We begin with the three-dimensional (3D) Gross-Pitaevskii equation (GPE) with an angular momentum rotation term and/or long-range DDI, state the two-dimensional (2D) GPE obtained from the 3D GPE via dimension reduction under anisotropic external potential, and review some dynamical laws related to the 2D and 3D GPEs. By introducing a rotating Lagrangian coordinate system, the original GPEs are reformulated to GPEs without the angular momentum rotation, which is replaced by a time-dependent potential in the new coordinate system. We then cast the conserved quantities and dynamical laws in the new rotating Lagrangian coordinates. Based on the new formulation of the GPE for rotating BECs in the rotating Lagrangian coordinates, a time-splitting spectral method is presented for computing the dynamics of rotating BECs. The new numerical method is explicit, simple to implement, unconditionally stable, and very efficient in computation. It is spectral-order accurate in space and second-order accurate in time and conserves the mass on the discrete level. We compare our method with some representative methods in the literature to demonstrate its efficiency and accuracy. In addition, the numerical method is applied to test the dynamical laws of rotating BECs such as the dynamics of condensate width, angular momentum expectation, and center of mass, and to investigate numerically the dynamics and interaction of quantized vortex lattices in rotating BECs without or with the long-range DDI. Copyright © by SIAM.
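    A minimal sketch of the time-splitting spectral idea is given below for a one-dimensional GPE without rotation; the grid, potential, and interaction strength are arbitrary choices, and the rotating Lagrangian coordinate transform that is the paper's main contribution is omitted.

```python
import numpy as np

# Strang time-splitting spectral step for a 1-D GPE-like equation
#   i psi_t = -0.5 psi_xx + (V(x) + g |psi|^2) psi
# Parameters below are illustrative; no rotation term is included.
n, L, dt, g = 256, 16.0, 1e-3, 100.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
V = 0.5 * x ** 2
psi = np.exp(-x ** 2) + 0j
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))       # normalize mass to 1

for _ in range(1000):
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))                 # half potential step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))       # full kinetic step
    psi *= np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))                 # half potential step

print("discrete mass:", np.sum(np.abs(psi) ** 2) * (L / n))   # conserved by construction
```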

  3. Physical models and numerical methods of the reactor dynamic computer program RETRAN

    International Nuclear Information System (INIS)

    Kamelander, G.; Woloch, F.; Sdouz, G.; Koinig, H.

    1984-03-01

    This report describes the physical models and the numerical methods of the reactor dynamics code RETRAN, which simulates reactivity transients in light-water reactors. The neutron-physics part of RETRAN is based on the two-group diffusion equations, which are solved by a discretization similar to the TWIGL method. An exponential transformation is applied and the inner iterations are accelerated by a coarse-mesh-rebalancing procedure. The thermo-hydraulic model approximates the equation of state by a built-in steam-water table and provides options for the calculation of heat-conduction coefficients and heat-transfer coefficients. (Author) [de

  4. Computation of Nonlinear Backscattering Using a High-Order Numerical Method

    Science.gov (United States)

    Fibich, G.; Ilan, B.; Tsynkov, S.

    2001-01-01

    The nonlinear Schrodinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.

  5. Contributions to the uncertainty management in numerical modelization: wave propagation in random media and analysis of computer experiments

    International Nuclear Information System (INIS)

    Iooss, B.

    2009-01-01

    The present document constitutes my Habilitation thesis report. It recalls my scientific activity over the last twelve years, from my PhD thesis to the work carried out as a research engineer at CEA Cadarache. The two main chapters of this document correspond to two different research fields, both related to the treatment of uncertainty in engineering problems. The first chapter is a synthesis of my work on high-frequency wave propagation in random media. It relates more specifically to the study of the statistical fluctuations of acoustic wave travel times in random and/or turbulent media. The new results mainly concern the introduction of the statistical anisotropy of the velocity field into the analytical expressions of the travel-time statistical moments as functions of those of the velocity field. This work was primarily driven by requirements in geophysics (oil exploration and seismology). The second chapter is concerned with the probabilistic techniques used to study the effect of input variable uncertainties on numerical models. My main applications in this chapter relate to the nuclear engineering domain, which offers a large variety of uncertainty problems to be treated. First of all, a complete synthesis is carried out of the statistical methods of sensitivity analysis and global exploration of numerical models. The construction and use of a meta-model (an inexpensive mathematical function replacing an expensive computer code) are then illustrated by my work on the Gaussian process model (kriging). Two additional topics are finally addressed: the estimation of high quantiles of a computer code output and the analysis of stochastic computer codes. We conclude this report with some perspectives about numerical simulation and the use of predictive models in industry. This context is extremely favourable for future research and application developments. (author)
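    A Gaussian-process (kriging) meta-model of the kind discussed in the second chapter can be sketched with scikit-learn; the "expensive" simulator below is a cheap analytic stand-in, and the kernel choice and design size are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Kriging-style meta-model replacing an "expensive" code (here a cheap stand-in).
def expensive_code(x):                       # placeholder for the real simulator
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(3)
X_train = rng.uniform(-1, 1, size=(30, 2))   # small design of experiments
y_train = expensive_code(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.5, 0.5]),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_new = rng.uniform(-1, 1, size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)   # prediction with uncertainty
print(np.c_[mean, std, expensive_code(X_new)])
```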

  6. Intraobserver and interobserver reliability of radial torsion angle measurements by a new and alternative method with computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, Luiz Fernando Pinheiro de; Barbieri, Claudio Henrique; Mazzer, Nilton; Zatiti, Salomao Chade Assan; Bellucci, Angela Delete [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). School of Medicine. Dept. of Biomechanics, Medicine and Rehabilitation; Nogueira-Barbosa, Marcello Henrique, E-mail: marcello@fmrp.usp.b [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). School of Medicine. Radiology Div.

    2010-07-01

    Objective: to evaluate the intraobserver and interobserver reliability of radial torsion angle measurement using computed tomography. Methods: twelve pairs of cadaver radii and 116 forearms from 58 healthy volunteers were evaluated using axial computed tomography sections measured at the level of the bicipital tuberosity and the subchondral region of the radius. On the digital images, the angle was formed by two lines, one diametrically perpendicular to the radial tubercle and the other tangential to the volar rim of the distal joint surface. Measurements were performed twice each by three observers. Results: in the cadaveric bones, the mean radial torsion angle was 1.48 deg (-6 deg - 9 deg) on the right and 1.62 deg (-6 deg - 8 deg) on the left, with a mean difference between the right and left sides of 1.61 deg (0 deg - 8 deg). In the volunteers, the mean radial torsion angle was 3.00 deg (-17 deg - 17 deg) on the right and 2.91 deg (-16 deg - 15 deg) on the left, with a mean difference between the sides of 1.58 deg (0 deg - 7 deg). There was no significant difference between the sides. The interobserver correlation coefficient for the cadaver radii measurements was 0.88 (0.72 - 0.96) and 0.81 (0.58 - 0.93) for the right and left radius, respectively, while for the volunteers the coefficients were 0.84 (0.77 - 0.90) and 0.83 (0.75 - 0.89), respectively. Intraobserver reliability was high. Conclusion: the described method is reproducible and applicable even when the radial tubercle has a rounded contour. (author)

  7. Use of computational methods for substitution and numerical dosimetry of real bones

    International Nuclear Information System (INIS)

    Silva, I.C.S.; Gonzalez, K.M.L.; Barbosa, A.J.A.; Lucindo Junior, C.R.; Vieira, J.W.; Lima, F.R.A.

    2017-01-01

    Estimating the dose that ionizing radiation deposits in the soft tissues of the skeleton, within the cavities of the trabecular bones, represents one of the greatest difficulties faced by numerical dosimetry. The Numerical Dosimetry Group (GDN/CNPq, Recife-PE, Brazil) has used a method based on micro-CT images. The problem with the implementation of micro-CT is the difficulty in obtaining samples of real bones (OR). The objective of this work was to evaluate a sample of a virtual block of trabecular bone generated with the nonparametric method based on voxel frequencies (VF), and samples of the climbing plant Luffa aegyptica, whose dry fruit is known as vegetable sponge (BV), as substitutes for the OR samples. For this, a theoretical study of the two techniques developed by the GDN was made. The study showed, for both techniques, that after the dosimetric evaluations the real sample can be replaced by the synthetic samples, since they yielded dose estimates close to those of the real one.

  8. A reliable computational workflow for the selection of optimal screening libraries.

    Science.gov (United States)

    Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch

    2015-01-01

    components, it can be easily adapted and reproduced by computational groups interested in rational selection of screening libraries. Furthermore, the workflow could be readily modified to include additional components. This workflow has been routinely used in our laboratory for the selection of libraries in multiple projects and consistently selects libraries which are well balanced across multiple parameters.Graphical abstract.

  9. Design and evaluation of the computer-based training program Calcularis for enhancing numerical cognition

    Directory of Open Access Journals (Sweden)

    Tanja eKäser

    2013-08-01

    Full Text Available This article presents the design and a first pilot evaluation of the computer-based training program Calcularis for children with developmental dyscalculia (DD) or difficulties in learning mathematics. The program has been designed according to insights on the typical and atypical development of mathematical abilities. The learning process is supported through multimodal cues, which encode different properties of numbers. To offer optimal learning conditions, a user model complements the program and allows flexible adaptation to a child's individual learning and knowledge profile. 32 children with difficulties in learning mathematics completed the 6- to 12-week computer training. The children played the game for 20 minutes per day, 5 days a week. The training effects were evaluated using neuropsychological tests. Generally, children benefited significantly from the training regarding number representation and arithmetic operations. Furthermore, the children liked playing with the program and reported that the training improved their mathematical abilities.

  10. Numerical studies on the electromagnetic properties of the nonlinear Lorentz Computational model for the dielectric media

    International Nuclear Information System (INIS)

    Abe, H.; Okuda, H.

    1994-06-01

    We study linear and nonlinear properties of a new computer simulation model developed to study the propagation of electromagnetic waves in a dielectric medium in the linear and nonlinear regimes. The model is constructed by combining a microscopic model used in the semi-classical approximation for the dielectric media and the particle model developed for the plasma simulations. It is shown that the model may be useful for studying linear and nonlinear wave propagation in the dielectric media

  11. An efficient and general numerical method to compute steady uniform vortices

    Science.gov (United States)

    Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.

    2011-07-01

    Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.

  12. Nonlinear Hyperbolic Equations - Theory, Computation Methods, and Applications. Volume 24. Note on Numerical Fluid Mechanics

    Science.gov (United States)

    1989-01-01

    ... from which we deduce $\|u^{n+1}\| \le \|u^n\| + 2M\,\Delta t\,\Delta x$, and indeed the expected estimate $\|u^{n+1}\| \le \|u^0\| + (2MT)\,\Delta x$ since $n\Delta t \le T$ ... the propagation of a planar premixed flame with one-step chemistry. In this case, diffusive and reactive terms are added to the energy and species ... to use exceedingly fine computational scales, to resolve the chemistry and internal fluid layers fully (which would normally be prohibitive in a large ...

  13. Numerical studies on soliton propagation in the dielectric media by the nonlinear Lorentz computational model

    International Nuclear Information System (INIS)

    Abe, H.; Okuda, H.

    1994-06-01

    Soliton propagation in the dielectric media has been simulated by using the nonlinear Lorentz computational model, which was recently developed to study the propagation of electromagnetic waves in a linear and a nonlinear dielectric. The model is constructed by combining a microscopic model used in the semi-classical approximation for dielectric media and the particle model developed for the plasma simulations. The carrier wave frequency is retained in the simulation so that not only the envelope of the soliton but also its phase can be followed in time. It is shown that the model may be useful for studying pulse propagation in the dielectric media

  14. Elasto-plastic benchmark calculations. Step 1: verification of the numerical accuracy of the computer programs

    International Nuclear Information System (INIS)

    Corsi, F.

    1985-01-01

    In connection with the design of nuclear reactor components operating at elevated temperature, design criteria need a level of realism in the prediction of inelastic structural behaviour. This leads to the necessity of developing nonlinear computer programmes and, as a consequence, to the problems of verification and qualification of these tools. Benchmark calculations allow these two actions to be carried out, involving at the same time an increased level of confidence in the analysis of complex phenomena and in inelastic design calculations. With the financial and programmatic support of the Commission of the European Communities (CEC), a programme of elasto-plastic benchmark calculations relevant to the design of structural components for LMFBR has been undertaken by those Member States which are developing a fast reactor project. Four principal progressive aims were initially pointed out, which led to the decision to subdivide the benchmark effort into a series of four sequential calculation steps: Steps 1 to 4. The present document summarizes Step 1 of the benchmark exercise, derives some conclusions on Step 1 by comparing the results obtained with the various codes, and points out some concluding comments on the first action. It should be pointed out that, even though the work was designed to test the capabilities of the computer codes, another aim was to increase the skill of the users concerned.

  15. Comparison of FDTD numerical computations and analytical multipole expansion method for plasmonics-active nanosphere dimers.

    Science.gov (United States)

    Dhawan, Anuj; Norton, Stephen J; Gerhold, Michael D; Vo-Dinh, Tuan

    2009-06-08

    This paper describes a comparative study of finite-difference time-domain (FDTD) and analytical evaluations of electromagnetic fields in the vicinity of dimers of metallic nanospheres of plasmonics-active metals. The results of these two computational methods, to determine electromagnetic field enhancement in the region often referred to as "hot spots" between the two nanospheres forming the dimer, were compared and a strong correlation observed for gold dimers. The analytical evaluation involved the use of the spherical-harmonic addition theorem to relate the multipole expansion coefficients between the two nanospheres. In these evaluations, the spacing between two nanospheres forming the dimer was varied to obtain the effect of nanoparticle spacing on the electromagnetic fields in the regions between the nanostructures. Gold and silver were the metals investigated in our work as they exhibit substantial plasmon resonance properties in the ultraviolet, visible, and near-infrared spectral regimes. The results indicate excellent correlation between the two computational methods, especially for gold nanosphere dimers with only a 5-10% difference between the two methods. The effect of varying the diameters of the nanospheres forming the dimer, on the electromagnetic field enhancement, was also studied.

  16. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    Science.gov (United States)

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best block and thread configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% on average for Fermi, Kepler and Maxwell with the TLPOM, and that the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  17. Numerical computation of underwater explosions due to fuel-coolant interactions

    International Nuclear Information System (INIS)

    Lee, J.H.S.; Frost, D.L.; Knystautas, R.; Teodorczyk, A.; Ciccarelli, G.; Thibault, P.; Penrose, J.

    1989-03-01

    If coarse molten material is released into a coolant the possibility exists for a violent steam explosion. A detailed quantitative description of the processes involved in steam explosions is currently beyond the capabilities of the scientific community. However, a conservative estimate of the pressure transients resulting from a steam explosion can be obtained by studying the dynamics of the shock associated with the expansion of a high-pressure vapour bubble. In this study, the hydrodynamic equations governing the shock propagation of an expanding bubble were integrated numerically using the Flux Corrected Transport code. Simpler acoustic models based on experience with underwater explosions were also developed and used to estimate pressure transients and to calculate the peak pressures for benchmark cases. The results were found to be an order of magnitude higher than the corresponding pressures obtained using a complex model developed by Henry. A simplified version of the Henry model was developed by neglecting the complex description of the two-phase flow inside the ruptured tube and the arbitrarily assumed heat transfer and condensation rates. Results from the simplified model were found to be generally similar to, but had higher peak pressures than those obtained using the Henry model. It is concluded that the results produced by simple acoustic models, or by a simplified Henry model, are more conservative than the corresponding results obtained with the original Henry model

  18. Numerical Feynman integrals with physically inspired interpolation: Faster convergence and significant reduction of computational cost

    Directory of Open Access Journals (Sweden)

    Nikesh S. Dattani

    2012-03-01

    Full Text Available One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, that can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited, therefore one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as little as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.

  19. High Performance Computation of a Jet in Crossflow by Lattice Boltzmann Based Parallel Direct Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Jiang Lei

    2015-01-01

    Full Text Available Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction multiple thread) characteristic of the GPU matches the parallelism of the LBM well, which leads to the high efficiency of the GPU on the LBM solver. With the present GPU settings (6 Nvidia Tesla K20M), the present DNS simulation can be completed in several hours. A grid system of 1.5 × 10^8 is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set as 3.3. The jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures (CRVP, shear-layer vortices and horseshoe vortices) are presented and analyzed based on the velocity fields and vorticity distributions. Turbulent statistical quantities of the Reynolds stress are also displayed. Coherent structures are revealed at a very fine resolution based on the second invariant of the velocity gradients.

  20. Computational Analysis of Igbo Numerals in a Number-to-text Conversion System

    Directory of Open Access Journals (Sweden)

    Olufemi Deborah NINAN

    2017-12-01

    Full Text Available A system for converting Arabic numerals to their textual equivalents is an important tool in natural language processing (NLP), especially in high-level speech processing and machine translation. Such a system is scarcely available for most African languages, including the Igbo language. This translation system is essential, as the Igbo language is one of the three major Nigerian languages feared to be among the endangered African languages. The system was designed using sequence as well as activity diagrams and implemented using the Python programming language and PyQt. The qualitative evaluation was done by administering questionnaires to selected native Igbo speakers and experts to provide preferred representations of some random numbers. The responses were compared with the output of the system. The result of the qualitative evaluation showed that the system was able to generate correct and accurate representations for numbers between 1 and 1000 in the Igbo language, which was the scope of this study. The resulting system can serve as an effective tool for teaching and learning the Igbo language.
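    A number-to-text converter of this kind typically decomposes the number over the language's numeral bases (multiplicative-additive structure). The sketch below uses a deliberately hypothetical English-like lexicon as a placeholder, since the verified Igbo numeral words and morphophonological rules of the paper's system are not reproduced here.

```python
# Recursive number-to-words converter over a base lexicon.
# UNITS and BASES are hypothetical placeholders, NOT a verified Igbo lexicon;
# a real system would substitute the attested Igbo numeral words and rules.
UNITS = {0: "zero", 1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
         6: "six", 7: "seven", 8: "eight", 9: "nine"}
BASES = [(1000, "thousand"), (100, "hundred"), (10, "ten")]

def to_words(n):
    if n < 10:
        return UNITS[n]
    for base, name in BASES:
        if n >= base:
            q, r = divmod(n, base)
            head = name if q == 1 else f"{to_words(q)} {name}"   # multiplicative part
            return head if r == 0 else f"{head} and {to_words(r)}"  # additive remainder

for n in (7, 42, 305, 1000):
    print(n, "->", to_words(n))
```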

  1. Computer programs for the numerical modelling of water flow in rock masses

    International Nuclear Information System (INIS)

    Croney, P.; Richards, L.R.

    1985-08-01

    Water flow in rock joints provides a very important possible route for the migration of radionuclides from radioactive waste within a repository back to the biosphere. Two computer programs, DAPHNE and FPM, have been developed to model two-dimensional fluid flow in jointed rock masses. They have been developed to run on microcomputer systems suitable for field locations. The fluid flows in a number of jointed rock systems have been examined and certain controlling functions identified. A methodology has been developed for assessing the anisotropic permeability of jointed rock. A number of examples of unconfined flow into surface and underground openings have been analysed, and groundwater lowering, pore water pressures and flow quantities predicted. (author)

  2. Stochastic processes, multiscale modeling, and numerical methods for computational cellular biology

    CERN Document Server

    2017-01-01

    This book focuses on the modeling and mathematical analysis of stochastic dynamical systems along with their simulations. The collected chapters will review fundamental and current topics and approaches to dynamical systems in cellular biology. This text aims to develop improved mathematical and computational methods with which to study biological processes. At the scale of a single cell, stochasticity becomes important due to low copy numbers of biological molecules, such as mRNA and proteins, that take part in biochemical reactions driving cellular processes. When trying to describe such biological processes, the traditional deterministic models are often inadequate, precisely because of these low copy numbers. This book presents stochastic models, which are necessary to account for small particle numbers and extrinsic noise sources. The complexity of these models depends upon whether the biochemical reactions are diffusion-limited or reaction-limited. In the former case, one needs to adopt the framework of s...

  3. Numerical computation of the linear stability of the diffusion model for crystal growth simulation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, C.; Sorensen, D.C. [Rice Univ., Houston, TX (United States); Meiron, D.I.; Wedeman, B. [California Institute of Technology, Pasadena, CA (United States)

    1996-12-31

    We consider a computational scheme for determining the linear stability of a diffusion model arising from the simulation of crystal growth. The process of a needle crystal solidifying into some undercooled liquid can be described by the dual diffusion equations with appropriate initial and boundary conditions. Here U_t and U_a denote the temperature of the liquid and solid respectively, and α represents the thermal diffusivity. At the solid-liquid interface, the motion of the interface denoted by r and the temperature field are related by the conservation relation, where n is the unit outward pointing normal to the interface. A basic stationary solution to this free boundary problem can be obtained by writing the equations of motion in a moving frame and transforming the problem to parabolic coordinates. This is known as the Ivantsov parabola solution. Linear stability theory applied to this stationary solution gives rise to an eigenvalue problem of the form.

  4. Reliability and validity of the revised Gibson Test of Cognitive Skills, a computer-based test battery for assessing cognition across the lifespan.

    Science.gov (United States)

    Moore, Amy Lawson; Miller, Terissa M

    2018-01-01

    The purpose of the current study is to evaluate the validity and reliability of the revised Gibson Test of Cognitive Skills, a computer-based battery of tests measuring short-term memory, long-term memory, processing speed, logic and reasoning, visual processing, as well as auditory processing and word attack skills. This study included 2,737 participants aged 5-85 years. A series of studies was conducted to examine the validity and reliability using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test-retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III Tests of Cognitive Abilities and Tests of Achievement. Results indicated strong sources of evidence of validity and reliability for the test, including internal consistency reliability coefficients ranging from 0.87 to 0.98, test-retest reliability coefficients ranging from 0.69 to 0.91, split-half reliability coefficients ranging from 0.87 to 0.91, and concurrent validity coefficients ranging from 0.53 to 0.93. The Gibson Test of Cognitive Skills-2 is a reliable and valid tool for assessing cognition in the general population across the lifespan.
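    The reliability statistics reported above can be computed with a few lines of code; the simulated scores below are placeholders that merely show how a test-retest correlation and Cronbach's alpha (an internal-consistency coefficient) are obtained, not a reanalysis of the study's data.

```python
import numpy as np

# Test-retest reliability and Cronbach's alpha on synthetic scores.
rng = np.random.default_rng(11)
true_ability = rng.normal(100, 15, 200)                 # latent trait (assumed)
test1 = true_ability + rng.normal(0, 6, 200)            # first administration
test2 = true_ability + rng.normal(0, 6, 200)            # retest
print("test-retest r:", np.corrcoef(test1, test2)[0, 1].round(2))

items = true_ability[:, None] / 20 + rng.normal(0, 2, (200, 10))   # 10 correlated items
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print("Cronbach's alpha:", round(alpha, 2))
```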

  5. Computational engineering

    CERN Document Server

    2014-01-01

    The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.

  6. Numerical analysis

    CERN Document Server

    Scott, L Ridgway

    2011-01-01

    Computational science is fundamentally changing how technological questions are addressed. The design of aircraft, automobiles, and even racing sailboats is now done by computational simulation. The mathematical foundation of this new approach is numerical analysis, which studies algorithms for computing expressions defined with real numbers. Emphasizing the theory behind the computation, this book provides a rigorous and self-contained introduction to numerical analysis and presents the advanced mathematics that underpin industrial software, including complete details that are missing from most textbooks. Using an inquiry-based learning approach, Numerical Analysis is written in a narrative style, provides historical background, and includes many of the proofs and technical details in exercises. Students will be able to go beyond an elementary understanding of numerical simulation and develop deep insights into the foundations of the subject. They will no longer have to accept the mathematical gaps that ex...

  7. METRIC CHARACTERISTICS OF VARIOUS METHODS FOR NUMERICAL DENSITY ESTIMATION IN TRANSMISSION LIGHT MICROSCOPY – A COMPUTER SIMULATION

    Directory of Open Access Journals (Sweden)

    Miroslav Kališnik

    2011-05-01

    Full Text Available In the introduction the evolution of methods for numerical density estimation of particles is presented briefly. Three pairs of methods have been analysed and compared: (1) classical methods for particle counting in thin and thick sections, (2) original and modified differential counting methods and (3) physical and optical disector methods. Metric characteristics such as accuracy, efficiency, robustness, and feasibility of methods have been estimated and compared. Logical, geometrical and mathematical analysis as well as computer simulations have been applied. In computer simulations a model of randomly distributed equal spheres with maximal contrast against surroundings has been used. According to our computer simulation all methods give accurate results provided that the sample is representative and sufficiently large. However, there are differences in their efficiency, robustness and feasibility. Efficiency and robustness increase with increasing slice thickness in all three pairs of methods. Robustness is superior in both differential and both disector methods compared to both classical methods. Feasibility can be judged according to the additional equipment as well as to the histotechnical and counting procedures necessary for performing individual counting methods. However, it is evident that not all practical problems can efficiently be solved with models.
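
    A minimal simulation in the spirit of the model described above (randomly placed equal spheres sampled with a physical disector) is sketched below; the box size, sphere radius and section spacing are hypothetical and the sketch is not the authors' simulation code.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical model: N equal spheres with random centres in a unit cube
        N, r, h = 2000, 0.02, 0.01          # number of spheres, sphere radius, disector height
        centres = rng.uniform(0.0, 1.0, size=(N, 3))

        def hits(z_section):
            """Spheres whose profile appears in the section plane z = z_section."""
            return np.abs(centres[:, 2] - z_section) < r

        # Physical disector: count profiles present in the reference section but not in
        # the look-up section, then estimate N_V = Q^- / (a * h).
        area = 1.0
        z_refs = rng.uniform(0.1, 0.8, size=200)
        estimates = [np.sum(hits(z) & ~hits(z + h)) / (area * h) for z in z_refs]
        print("true numerical density:", N, " mean disector estimate:", np.mean(estimates))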

  8. Numerical and Computational Analysis of a New Vertical Axis Wind Turbine, Named KIONAS

    Directory of Open Access Journals (Sweden)

    Eleni Douvi

    2017-01-01

    Full Text Available This paper concentrates on a new configuration for a wind turbine, named KIONAS. The main purpose is to determine the performance and aerodynamic behavior of KIONAS, which is a vertical axis wind turbine with a stator over the rotor and a special feature in that it can consist of several stages. Notably, the stator is shaped in such a way that it increases the velocity of the air impacting the rotor blades. Moreover, the performance can be increased by increasing the total number of stages. The effects of wind velocity, the number of inclined rotor blades, the rotor diameter, the stator’s shape and the number of stages on the performance of KIONAS were studied. A FORTRAN code was developed in order to predict the power in several cases by solving the equations of continuity and momentum. Subsequently, further knowledge on the flow field was obtained by using a commercial Computational Fluid Dynamics code. Based on the results, it can be concluded that higher wind velocities and a greater number of blades produce more power. Furthermore, higher performance was found for a stator with curved guide vanes and for a KIONAS configuration with more stages.
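
    For orientation only, the short sketch below evaluates the kinetic power available in the wind and the mechanical power extracted for an assumed power coefficient; the frontal area, wind speeds and coefficient are hypothetical and unrelated to the actual KIONAS computations.

        # Illustrative only: why turbine power grows so strongly with wind speed (P ~ v^3)
        rho = 1.225                      # air density [kg/m^3]
        area = 3.0                       # hypothetical frontal area of one stage [m^2]
        cp = 0.25                        # assumed power coefficient of the stage

        for v in (4.0, 6.0, 8.0, 10.0):  # wind speeds [m/s]
            p_wind = 0.5 * rho * area * v**3   # kinetic power carried by the wind
            p_turbine = cp * p_wind            # mechanical power extracted
            print(f"v = {v:4.1f} m/s  wind power = {p_wind:8.1f} W  turbine power = {p_turbine:7.1f} W")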

  9. Bringing numerous methods for expression and promoter analysis to a public cloud computing service.

    Science.gov (United States)

    Polanski, Krzysztof; Gao, Bo; Mason, Sam A; Brown, Paul; Ott, Sascha; Denby, Katherine J; Wild, David L

    2018-03-01

    Every year, a large number of novel algorithms are introduced to the scientific community for a myriad of applications, but using these across different research groups is often troublesome, due to suboptimal implementations and specific dependency requirements. This does not have to be the case, as public cloud computing services can easily house tractable implementations within self-contained dependency environments, making the methods easily accessible to a wider public. We have taken 14 popular methods, the majority related to expression data or promoter analysis, developed these up to a good implementation standard and housed the tools in isolated Docker containers which we integrated into the CyVerse Discovery Environment, making these easily usable for a wide community as part of the CyVerse UK project. The integrated apps can be found at http://www.cyverse.org/discovery-environment, while the raw code is available at https://github.com/cyversewarwick and the corresponding Docker images are housed at https://hub.docker.com/r/cyversewarwick/. info@cyverse.warwick.ac.uk or D.L.Wild@warwick.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  10. Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing

    Science.gov (United States)

    Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline

    2017-11-01

    Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.
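
    To give a feel for the synchronization issue discussed above, the sketch below integrates a 1D heat equation with a standard second-order stencil and, as a crude stand-in for communication asynchrony, lets one interior "processor boundary" point use a neighbour value that is one time step stale. It only demonstrates the perturbation caused by delayed data; it does not implement the asynchrony-tolerant stencils of the paper, which modify the scheme to preserve accuracy.

        import numpy as np

        nx, nt, alpha = 101, 2000, 1.0
        dx = 1.0 / (nx - 1)
        dt = 0.2 * dx**2 / alpha                     # stable explicit time step
        x = np.linspace(0.0, 1.0, nx)

        def run(delay_at=None):
            u = np.sin(np.pi * x)                    # initial condition, u = 0 at both ends
            u_prev = u.copy()                        # previous time level (source of stale data)
            for _ in range(nt):
                u_new = u.copy()
                for i in range(1, nx - 1):
                    right = u[i + 1]
                    if delay_at is not None and i == delay_at:
                        right = u_prev[i + 1]        # stale (one-step-delayed) neighbour value
                    u_new[i] = u[i] + alpha * dt / dx**2 * (u[i - 1] - 2.0 * u[i] + right)
                u_prev, u = u, u_new
            return u

        sync = run()                                 # fully synchronous reference
        delayed = run(delay_at=nx // 2)              # one point sees delayed data
        print("max difference due to asynchrony:", np.abs(sync - delayed).max())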

  11. Numerical studies on the interaction between atmosphere and ocean using different kinds of parallel computers

    International Nuclear Information System (INIS)

    Lee, Soon-Hwan; Chino, Masamichi

    2000-01-01

    Coupling an atmosphere model with an ocean model poses physical and computational difficulties for short-term forecasting of weather and ocean currents. In this research, a coupled system combining a high-resolution meso-scale atmospheric model and an ocean model has been constructed using a new message-passing library, called Stampi (Seamless Thinking Aid Message Passing Interface), for the prediction of particle dispersion during a nuclear accident emergency. Stampi, which is based on the MPI (Message Passing Interface) 2 specification, allows parallel calculations of the coupled system to be carried out without adding parallelization code to the model codes, and it provides dynamic process creation on different machines and communication with the spawned processes within the scope of MPI semantics. The models included in this coupled system are PHYSIC as the atmosphere model and POM (Princeton Ocean Model) as the ocean model. We applied this system to predict the sea surface current in the Sea of Japan in the winter season. Simulation results indicate that the wind stress near the sea surface tends to be the predominant factor determining surface ocean currents and the dispersion of radioactive contamination in the ocean. The surface ocean current corresponds well with the wind direction induced by the high mountains of North Korea. The satellite data of NSCAT (NASA SCATterometer), which image the sea surface current, also agree well with the results of this system. (author)
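
    Stampi itself is not shown here, but since it follows the MPI-2 specification, the dynamic process creation it relies on can be illustrated with standard MPI-2 calls. The sketch below uses mpi4py (an assumption, not the library used in the study) and a hypothetical worker script name.

        # parent.py -- run with: mpiexec -n 1 python parent.py
        import sys
        import numpy as np
        from mpi4py import MPI

        # MPI-2 dynamic process creation: spawn two ocean-model workers.
        # "ocean_worker.py" is a hypothetical script; it would obtain the
        # intercommunicator on its side with MPI.Comm.Get_parent().
        intercomm = MPI.COMM_SELF.Spawn(sys.executable,
                                        args=["ocean_worker.py"],
                                        maxprocs=2)

        # Broadcast a toy wind-stress field from the atmosphere side to the workers
        wind_stress = np.full(8, 0.1, dtype="d")
        intercomm.Bcast([wind_stress, MPI.DOUBLE], root=MPI.ROOT)

        intercomm.Disconnect()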

  12. Numerical computation of gravitational field of general extended body and its application to rotation curve study of galaxies

    Science.gov (United States)

    Fukushima, Toshio

    2017-06-01

    Reviewed are recently developed methods of the numerical integration of the gravitational field of general two- or three-dimensional bodies with arbitrary shape and mass density distribution: (i) an axisymmetric infinitely-thin disc (Fukushima 2016a, MNRAS, 456, 3702), (ii) a general infinitely-thin plate (Fukushima 2016b, MNRAS, 459, 3825), (iii) a plane-symmetric and axisymmetric ring-like object (Fukushima 2016c, AJ, 152, 35), (iv) an axisymmetric thick disc (Fukushima 2016d, MNRAS, 462, 2138), and (v) a general three-dimensional body (Fukushima 2016e, MNRAS, 463, 1500). The key techniques employed are (a) the split quadrature method using the double exponential rule (Takahashi and Mori, 1973, Numer. Math., 21, 206), (b) the precise and fast computation of complete elliptic integrals (Fukushima 2015, J. Comp. Appl. Math., 282, 71), (c) Ridder's algorithm of numerical differentiation (Ridder 1982, Adv. Eng. Softw., 4, 75), (d) the recursive computation of the zonal toroidal harmonics, and (e) the integration variable transformation to the local spherical polar coordinates. These devices successfully regularize the Newton kernel in the integrands so as to provide accurate integral values. For example, the general 3D potential is regularly integrated as \Phi(\vec{x}) = -G \int_0^{\infty} \left( \int_{-1}^{1} \left( \int_0^{2\pi} \rho(\vec{x}+\vec{q})\, d\psi \right) d\gamma \right) q\, dq, where \vec{q} = q\,(\sqrt{1-\gamma^2}\cos\psi,\ \sqrt{1-\gamma^2}\sin\psi,\ \gamma) is the relative position vector referred to \vec{x}, the position vector at which the potential is evaluated. As a result, the new methods can compute the potential and acceleration vector very accurately. In fact, the axisymmetric integration reproduces the Miyamoto-Nagai potential with 14 correct digits. The developed methods are applied to the gravitational field study of galaxies and protoplanetary discs. Among them, the investigation on the rotation curve of M33 supports a disc-like structure of the dark matter with a double-power-law surface
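
    Of the techniques listed above, the double exponential (tanh-sinh) rule is easy to sketch on its own. The fragment below applies it to an integrand with an endpoint singularity on [-1, 1] as a generic illustration of how the rule copes with singular kernels; it is not a reproduction of the author's split quadrature code.

        import numpy as np

        def tanh_sinh(f, n=30, h=0.1):
            """Double exponential (tanh-sinh) quadrature of f on [-1, 1]."""
            t = np.arange(-n, n + 1) * h
            x = np.tanh(0.5 * np.pi * np.sinh(t))                                    # abscissae
            w = 0.5 * np.pi * h * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t))**2  # weights
            # Note: pushing n*h much larger makes nodes round to exactly +/-1 in double precision.
            return np.sum(w * f(x))

        # Integrable endpoint singularity: int_{-1}^{1} dx / sqrt(1 - x) = 2*sqrt(2)
        f = lambda x: 1.0 / np.sqrt(1.0 - x)
        print(tanh_sinh(f), 2.0 * np.sqrt(2.0))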

  13. Accuracy and reliability of a novel method for fusion of digital dental casts and Cone Beam Computed Tomography scans.

    Directory of Open Access Journals (Sweden)

    Frits A Rangel

    Full Text Available Several methods have been proposed to integrate digital models into Cone Beam Computed Tomography scans. Since all these methods have some drawbacks such as radiation exposure, soft tissue deformation and time-consuming digital handling processes, we propose a new method to integrate digital dental casts into Cone Beam Computed Tomography scans. Plaster casts of 10 patients were randomly selected and 5 titanium markers were glued to the upper and lower plaster cast. The plaster models were scanned, impressions were taken from the plaster models and the impressions were also scanned. Linear measurements were performed on all three models, to assess accuracy and reproducibility. Besides that, matching of the scanned plaster models and scanned impressions was done, to assess the accuracy of the matching procedure. Results show that all measurement errors are smaller than 0.2 mm, and that 81% is smaller than 0.1 mm. Matching of the scanned plaster casts and scanned impressions show a mean error between the two surfaces of the upper arch of 0.14 mm and for the lower arch of 0.18 mm. The time needed for reconstructing the CBCT scans to a digital patient, where the impressions are integrated into the CBCT scan of the patient takes about 15 minutes, with little variance between patients. In conclusion, we can state that this new method is a reliable method to integrate digital dental casts into CBCT scans. As far as radiation exposure, soft tissue deformation and digital handling processes are concerned, it is a significant improvement compared to the previously published methods.

  14. Accuracy and Reliability of a Novel Method for Fusion of Digital Dental Casts and Cone Beam Computed Tomography Scans

    Science.gov (United States)

    Rangel, Frits A.; Maal, Thomas J. J.; Bronkhorst, Ewald M.; Breuning, K. Hero; Schols, Jan G. J. H.; Bergé, Stefaan J.; Kuijpers-Jagtman, Anne Marie

    2013-01-01

    Several methods have been proposed to integrate digital models into Cone Beam Computed Tomography scans. Since all these methods have some drawbacks such as radiation exposure, soft tissue deformation and time-consuming digital handling processes, we propose a new method to integrate digital dental casts into Cone Beam Computed Tomography scans. Plaster casts of 10 patients were randomly selected and 5 titanium markers were glued to the upper and lower plaster cast. The plaster models were scanned, impressions were taken from the plaster models and the impressions were also scanned. Linear measurements were performed on all three models, to assess accuracy and reproducibility. Besides that, matching of the scanned plaster models and scanned impressions was done, to assess the accuracy of the matching procedure. Results show that all measurement errors are smaller than 0.2 mm, and that 81% is smaller than 0.1 mm. Matching of the scanned plaster casts and scanned impressions show a mean error between the two surfaces of the upper arch of 0.14 mm and for the lower arch of 0.18 mm. The time needed for reconstructing the CBCT scans to a digital patient, where the impressions are integrated into the CBCT scan of the patient takes about 15 minutes, with little variance between patients. In conclusion, we can state that this new method is a reliable method to integrate digital dental casts into CBCT scans. As far as radiation exposure, soft tissue deformation and digital handling processes are concerned, it is a significant improvement compared to the previously published methods. PMID:23527111

  15. Reduction of the performance of a noise screen due to screen-induced wind-speed gradients: numerical computations and wind-tunnel experiments

    NARCIS (Netherlands)

    Salomons, E.M.

    1999-01-01

    Downwind sound propagation over a noise screen is investigated by numerical computations and scale model experiments in a wind tunnel. For the computations, the parabolic equation method is used, with a range-dependent sound-speed profile based on wind-speed profiles measured in the wind tunnel and

  16. Test-retest reliability and comparability of paper and computer questionnaires for the Finnish version of the Tampa Scale of Kinesiophobia.

    Science.gov (United States)

    Koho, P; Aho, S; Kautiainen, H; Pohjolainen, T; Hurri, H

    2014-12-01

    To estimate the internal consistency, test-retest reliability and comparability of paper and computer versions of the Finnish version of the Tampa Scale of Kinesiophobia (TSK-FIN) among patients with chronic pain. In addition, patients' personal experiences of completing both versions of the TSK-FIN and preferences between these two methods of data collection were studied. Test-retest reliability study. Paper and computer versions of the TSK-FIN were completed twice on two consecutive days. The sample comprised 94 consecutive patients with chronic musculoskeletal pain participating in a pain management or individual rehabilitation programme. The group rehabilitation design consisted of physical and functional exercises, evaluation of the social situation, psychological assessment of pain-related stress factors, and personal pain management training in order to regain overall function and mitigate the inconvenience of pain and fear-avoidance behaviour. The mean TSK-FIN score was 37.1 [standard deviation (SD) 8.1] for the computer version and 35.3 (SD 7.9) for the paper version. The mean difference between the two versions was 1.9 (95% confidence interval 0.8 to 2.9). Test-retest reliability was 0.89 for the paper version and 0.88 for the computer version. Internal consistency was considered to be good for both versions. The intraclass correlation coefficient for comparability was 0.77 (95% confidence interval 0.66 to 0.85), indicating substantial reliability between the two methods. Both versions of the TSK-FIN demonstrated substantial intertest reliability, good test-retest reliability, good internal consistency and acceptable limits of agreement, suggesting their suitability for clinical use. However, subjects tended to score higher when using the computer version. As such, in an ideal situation, data should be collected in a similar manner throughout the course of rehabilitation or clinical research. Copyright © 2014 Chartered Society of Physiotherapy. Published

  17. The sensitivity of computed tomography (CT) scans in detecting trauma: are CT scans reliable enough for courtroom testimony?

    Science.gov (United States)

    Molina, D Kimberley; Nichols, Joanna J; Dimaio, Vincent J M

    2007-09-01

    Rapid and accurate recognition of traumatic injuries is extremely important in emergency room and surgical settings. Emergency departments depend on computed tomography (CT) scans to provide rapid, accurate injury assessment. We conducted an analysis of all traumatic deaths autopsied at the Bexar County Medical Examiner's Office in which perimortem medical imaging (CT scan) was performed to assess the reliability of the CT scan in detecting trauma with sufficient accuracy for courtroom testimony. Cases were included in the study if an autopsy was conducted, a CT scan was performed within 24 hours before death, and there was no surgical intervention. Analysis was performed to assess the correlation between the autopsy and CT scan results. Sensitivity, specificity, positive predictive value, and negative predictive value were defined for the CT scan based on the autopsy results. The sensitivity of the CT scan ranged from 0% for cerebral lacerations, cervical vertebral body fractures, cardiac injury, and hollow viscus injury to 75% for liver injury. This study reveals that CT scans are an inadequate detection tool for forensic pathologists, where a definitive diagnosis is required, because they have a low level of accuracy in detecting traumatic injuries. CT scans may be adequate for clinicians in the emergency room setting, but are inadequate for courtroom testimony. If the evidence of trauma is based solely on CT scan reports, there is a high possibility of erroneous accusations, indictments, and convictions.
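
    The diagnostic statistics quoted above follow directly from a 2x2 table of CT findings versus autopsy findings; the sketch below uses made-up counts purely to show the definitions, not data from the study.

        def diagnostic_stats(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
            sensitivity = tp / (tp + fn)   # injuries found at autopsy that CT detected
            specificity = tn / (tn + fp)   # absent injuries that CT correctly called negative
            ppv = tp / (tp + fp)           # positive CT findings confirmed at autopsy
            npv = tn / (tn + fn)           # negative CT findings confirmed at autopsy
            return sensitivity, specificity, ppv, npv

        # Hypothetical counts for one injury type
        print(diagnostic_stats(tp=15, fp=3, fn=5, tn=77))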

  18. System-Reliability Cumulative-Binomial Program

    Science.gov (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.

    1989-01-01

    Cumulative-binomial computer program, NEWTONP, one of a set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557) are used independently of one another. Program finds probability required to yield given system reliability. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.
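
    NEWTONP itself is written in C and is not reproduced here, but the underlying calculation, finding the per-trial probability that yields a required cumulative binomial (k-out-of-n) system reliability, can be sketched as below; the system size and target reliability are hypothetical.

        from math import comb
        from scipy.optimize import brentq

        def k_of_n_reliability(p, k, n):
            """Probability that at least k of n identical components (each reliability p) work."""
            return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

        # Hypothetical requirement: a 2-out-of-3 system must reach 0.999 reliability.
        target, k, n = 0.999, 2, 3
        p_required = brentq(lambda p: k_of_n_reliability(p, k, n) - target, 1e-6, 1.0 - 1e-9)
        print("required component reliability:", p_required)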

  19. EVALUATION OF SEISMIC PERFORMANCE OF RAMP TUNNEL STRUCTURE DURING LEVEL-2 EARTHQUAKE BY MASSIVE 3D NUMERICAL COMPUTATION

    Science.gov (United States)

    Yamada, Takemine; Ichimura, Tsuyoshi; Hori, Muneo; Dobashi, Hiroshi; Ohbo, Naoto

    Quasi-nonlinear 3D FEM earthquake response analyses for a level-2 earthquake are conducted for a ramp tunnel structure of the Tokyo Metropolitan Expressway Central Circular Route (the Yamate Tunnel). Large-scale numerical computation with solid elements is required to examine the seismic response of a large tunnel under a level-2 earthquake. The results are as follows: i) under a level-2 earthquake, stress concentration in the ramp tunnel becomes large near the geological interface between two layers of high impedance contrast; ii) the response cannot be obtained as a superposition of two-dimensional responses, which is an assumption in conventional design methods, because the displacements along the tunnel axis at the cross-section of the ramp tunnel near the geological interface are not linearly distributed; iii) evaluation of stress in addition to section force is desirable for the correct evaluation of the three-dimensional response of the tunnel structure.

  20. Numerical Study of Detonation Wave Propagation in the Variable Cross-Section Channel Using Unstructured Computational Grids

    Directory of Open Access Journals (Sweden)

    Alexander Lopato

    2018-01-01

    Full Text Available The work is dedicated to the numerical study of detonation wave initiation and propagation in a variable cross-section axisymmetric channel filled with a model hydrogen-air mixture. The channel models a large-scale device for the utilization of worn-out tires. The mathematical model is based on the two-dimensional axisymmetric Euler equations supplemented by a global chemical kinetics model. A second-order finite volume algorithm for the calculation of two-dimensional flows with detonation waves on fully unstructured grids with triangular cells is developed. Three geometrical configurations of the channel, each with a different divergence angle of its conical part, are investigated with respect to the pressure exerted by the detonation wave on the end wall of the channel. The problem under consideration relates to waste recycling in devices based on the detonation combustion of fuel.

  1. A new numerical modelling method for deformation behaviour of metallic porous materials using X-ray computed microtomography

    Energy Technology Data Exchange (ETDEWEB)

    Doroszko, M., E-mail: m.doroszko@pb.edu.pl; Seweryn, A., E-mail: a.seweryn@pb.edu.pl

    2017-03-24

    Microtomographic devices have limited imaging accuracy and are often insufficient for proper mapping of small details of real objects (e.g. elements of material mesostructures). This paper describes a new method developed to compensate for the effect of X-ray computed microtomography (micro-CT) inaccuracy in numerical modelling of the deformation process of porous sintered 316 L steel. The method involves modification of microtomographic images where the pore shapes are separated. The modification consists of the reconstruction of fissures and small pores omitted by micro-CT scanning due to the limited accuracy of the measuring device. It enables proper modelling of the tensile deformation process of porous materials. In addition, the proposed approach is compared to methods described in the available literature. As a result of numerical calculations, stress and strain distributions were obtained in deformed sintered 316 L steel. Based on the results, macroscopic stress-strain curves were obtained. Maximum principal stress distributions obtained with the proposed calculation model indicated specific locations where the stress reached a critical value and fracture initiation occurred. These are bridges with small cross sections and notches in the shape of pores. Based on calculation results, the influence of the deformation mechanism of the material porous mesostructures on their properties at the macroscale is described.

  2. Numerically stable, scalable formulas for parallel and online computation of higher-order multivariate central moments with arbitrary weights

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Terriberry, Timothy B. [Xiph.Org Foundation, Arlington, VA (United States); Kolla, Hemanth [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Bennett, Janine [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2016-03-29

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Formulas such as these are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results and improve them with arbitrary-order, numerically stable one-pass formulas which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four as well as condition number and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of the above-mentioned formulas, using compound moments, for a practical large-scale scientific application.
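
    For the second-order case, the classical numerically stable one-pass update and the pairwise merge used in parallel reductions look like the sketch below; this is standard textbook material (Welford/Chan-style updates), not the arbitrary-order weighted formulas derived in the report.

        import numpy as np

        def online_moments(xs):
            """One-pass, numerically stable count, mean and second central moment."""
            n, mean, m2 = 0, 0.0, 0.0
            for x in xs:
                n += 1
                delta = x - mean
                mean += delta / n
                m2 += delta * (x - mean)        # uses the updated mean
            return n, mean, m2

        def merge(a, b):
            """Pairwise merge of two (n, mean, M2) partial results, as in a parallel reduction."""
            na, ma, m2a = a
            nb, mb, m2b = b
            n = na + nb
            delta = mb - ma
            mean = ma + delta * nb / n
            m2 = m2a + m2b + delta**2 * na * nb / n
            return n, mean, m2

        data = np.random.default_rng(1).normal(1e6, 1.0, 10000)   # large mean stresses naive formulas
        left, right = online_moments(data[:5000]), online_moments(data[5000:])
        n, mean, m2 = merge(left, right)
        print("variance (merged one-pass):", m2 / (n - 1), " numpy:", data.var(ddof=1))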

  3. A new numerical modelling method for deformation behaviour of metallic porous materials using X-ray computed microtomography

    International Nuclear Information System (INIS)

    Doroszko, M.; Seweryn, A.

    2017-01-01

    Microtomographic devices have limited imaging accuracy and are often insufficient for proper mapping of small details of real objects (e.g. elements of material mesostructures). This paper describes a new method developed to compensate for the effect of X-ray computed microtomography (micro-CT) inaccuracy in numerical modelling of the deformation process of porous sintered 316 L steel. The method involves modification of microtomographic images where the pore shapes are separated. The modification consists of the reconstruction of fissures and small pores omitted by micro-CT scanning due to the limited accuracy of the measuring device. It enables proper modelling of the tensile deformation process of porous materials. In addition, the proposed approach is compared to methods described in the available literature. As a result of numerical calculations, stress and strain distributions were obtained in deformed sintered 316 L steel. Based on the results, macroscopic stress-strain curves were obtained. Maximum principal stress distributions obtained with the proposed calculation model indicated specific locations where the stress reached a critical value and fracture initiation occurred. These are bridges with small cross sections and notches in the shape of pores. Based on calculation results, the influence of the deformation mechanism of the material porous mesostructures on their properties at the macroscale is described.

  4. Computer programs of information processing of nuclear physical methods as a demonstration material in studying nuclear physics and numerical methods

    Science.gov (United States)

    Bateev, A. B.; Filippov, V. P.

    2017-01-01

    The article shows that the computer program Univem MS for Mössbauer spectra fitting can, in principle, be used as demonstration material when students study disciplines such as atomic and nuclear physics and numerical methods. The program works with nuclear-physical parameters such as the isomer (or chemical) shift of a nuclear energy level, the interaction of the nuclear quadrupole moment with the electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the Least Squares Method. The deviation of the experimental points of a spectrum from the theoretical dependence is determined on concrete examples; in numerical methods this quantity is characterized as the mean square deviation. The shape of the theoretical lines in the program is defined by Gaussian and Lorentzian distributions. The visualization of the studied material on atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis or X-ray diffraction analysis.
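
    The least squares fitting and line shapes mentioned above can be illustrated with a generic sketch: a single Lorentzian line fitted to synthetic data with scipy.optimize.curve_fit. The line parameters are invented and the sketch has nothing to do with the actual Univem MS implementation.

        import numpy as np
        from scipy.optimize import curve_fit

        def lorentzian(v, amplitude, centre, width, baseline):
            """Single Lorentzian absorption line as a function of source velocity v."""
            return baseline - amplitude * (0.5 * width)**2 / ((v - centre)**2 + (0.5 * width)**2)

        # Synthetic 'spectrum': hypothetical centre shift 0.3 mm/s, line width 0.25 mm/s
        v = np.linspace(-2.0, 2.0, 201)
        rng = np.random.default_rng(2)
        counts = lorentzian(v, 5000.0, 0.3, 0.25, 100000.0) + rng.normal(0.0, 100.0, v.size)

        popt, pcov = curve_fit(lorentzian, v, counts, p0=[4000.0, 0.0, 0.3, 99000.0])
        residual = counts - lorentzian(v, *popt)
        print("fitted (amplitude, centre, width, baseline):", popt)
        print("mean square deviation:", np.mean(residual**2))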

  5. Validity, reliability, and reproducibility of linear measurements on digital models obtained from intraoral and cone-beam computed tomography scans of alginate impressions

    NARCIS (Netherlands)

    Wiranto, Matthew G.; Engelbrecht, W. Petrie; Nolthenius, Heleen E. Tutein; van der Meer, W. Joerd; Ren, Yijin

    INTRODUCTION: Digital 3-dimensional models are widely used for orthodontic diagnosis. The aim of this study was to assess the validity, reliability, and reproducibility of digital models obtained from the Lava Chairside Oral scanner (3M ESPE, Seefeld, Germany) and cone-beam computed tomography scans

  6. SIMON. A computer program for reliability and statistical analysis using Monte Carlo simulation. Program description and manual

    International Nuclear Information System (INIS)

    Kongsoe, H.E.; Lauridsen, K.

    1993-09-01

    SIMON is a program for reliability calculation and statistical analysis. The program is of the Monte Carlo type; it is designed with high flexibility and has large potential for application to complex problems, such as reliability analyses of very large systems and of systems where complex modelling or knowledge of special details is required. Examples of application of the program to reliability and statistical analysis, including input and output, are presented. (au) (3 tabs., 3 ills., 5 refs.)
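
    SIMON itself is not reproduced here, but the kind of Monte Carlo reliability estimate it performs can be sketched generically; the example below estimates the mission reliability of a hypothetical 2-out-of-3 system with exponentially distributed component lifetimes and checks it against the analytical value.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical data: three redundant components, failure rate 1e-4 per hour,
        # mission time 1000 hours; the system works if at least 2 of 3 components survive.
        failure_rate, mission_time, n_trials = 1.0e-4, 1000.0, 200_000

        lifetimes = rng.exponential(1.0 / failure_rate, size=(n_trials, 3))
        survivors = (lifetimes > mission_time).sum(axis=1)
        reliability_mc = np.mean(survivors >= 2)

        # Analytical check for a 2-out-of-3 system with identical components
        p = np.exp(-failure_rate * mission_time)
        reliability_exact = 3 * p**2 * (1 - p) + p**3
        print("Monte Carlo:", reliability_mc, " exact:", reliability_exact)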

  7. Computed tomographic angiography criteria in the diagnosis of brain death - comparison of sensitivity and interobserver reliability of different evaluation scales

    International Nuclear Information System (INIS)

    Sawicki, Marcin; Walecka, A.; Bohatyrewicz, R.; Solek-Pastuszka, J.; Safranow, K.; Walecki, J.; Rowinski, O.; Czajkowski, Z.; Guzinski, M.; Burzynska, M.; Wojczal, J.

    2014-01-01

    The standardized diagnostic criteria for computed tomographic angiography (CTA) in diagnosis of brain death (BD) are not yet established. The aim of the study was to compare the sensitivity and interobserver agreement of the three previously used scales of CTA for the diagnosis of BD. Eighty-two clinically brain-dead patients underwent CTA with a delay of 40 s after contrast injection. Catheter angiography was used as the reference standard. CTA results were assessed by two radiologists, and the diagnosis of BD was established according to 10-, 7-, and 4-point scales. Catheter angiography confirmed the diagnosis of BD in all cases. Opacification of certain cerebral vessels as indicator of BD was highly sensitive: cortical segments of the middle cerebral artery (96.3 %), the internal cerebral vein (98.8 %), and the great cerebral vein (98.8 %). Other vessels were less sensitive: the pericallosal artery (74.4 %), cortical segments of the posterior cerebral artery (79.3 %), and the basilar artery (82.9 %). The sensitivities of the 10-, 7-, and 4-point scales were 67.1, 74.4, and 96.3 %, respectively (p < 0.001). Percentage interobserver agreement in diagnosis of BD reached 93 % for the 10-point scale, 89 % for the 7-point scale, and 95 % for the 4-point scale (p = 0.37). In the application of CTA to the diagnosis of BD, reducing the assessment of vascular opacification scale from a 10- to a 4-point scale significantly increases the sensitivity and maintains high interobserver reliability. (orig.)

  8. Computed tomographic angiography criteria in the diagnosis of brain death - comparison of sensitivity and interobserver reliability of different evaluation scales

    Energy Technology Data Exchange (ETDEWEB)

    Sawicki, Marcin; Walecka, A. [Pomeranian Medical University, Department of Diagnostic Imaging and Interventional Radiology, Szczecin (Poland); Bohatyrewicz, R.; Solek-Pastuszka, J. [Pomeranian Medical University, Clinic of Anesthesiology and Intensive Care, Szczecin (Poland); Safranow, K. [Pomeranian Medical University, Department of Biochemistry and Medical Chemistry, Szczecin (Poland); Walecki, J. [The Centre of Postgraduate Medical Education, Warsaw (Poland); Rowinski, O. [Medical University of Warsaw, 2nd Department of Clinical Radiology, Warsaw (Poland); Czajkowski, Z. [Regional Joint Hospital, Szczecin (Poland); Guzinski, M. [Wroclaw Medical University, Department of General Radiology, Interventional Radiology and Neuroradiology, Wroclaw (Poland); Burzynska, M. [Wroclaw Medical University, Department of Anesthesiology and Intensive Therapy, Wroclaw (Poland); Wojczal, J. [Medical University of Lublin, Department of Neurology, Lublin (Poland)

    2014-08-15

    The standardized diagnostic criteria for computed tomographic angiography (CTA) in diagnosis of brain death (BD) are not yet established. The aim of the study was to compare the sensitivity and interobserver agreement of the three previously used scales of CTA for the diagnosis of BD. Eighty-two clinically brain-dead patients underwent CTA with a delay of 40 s after contrast injection. Catheter angiography was used as the reference standard. CTA results were assessed by two radiologists, and the diagnosis of BD was established according to 10-, 7-, and 4-point scales. Catheter angiography confirmed the diagnosis of BD in all cases. Opacification of certain cerebral vessels as indicator of BD was highly sensitive: cortical segments of the middle cerebral artery (96.3 %), the internal cerebral vein (98.8 %), and the great cerebral vein (98.8 %). Other vessels were less sensitive: the pericallosal artery (74.4 %), cortical segments of the posterior cerebral artery (79.3 %), and the basilar artery (82.9 %). The sensitivities of the 10-, 7-, and 4-point scales were 67.1, 74.4, and 96.3 %, respectively (p < 0.001). Percentage interobserver agreement in diagnosis of BD reached 93 % for the 10-point scale, 89 % for the 7-point scale, and 95 % for the 4-point scale (p = 0.37). In the application of CTA to the diagnosis of BD, reducing the assessment of vascular opacification scale from a 10- to a 4-point scale significantly increases the sensitivity and maintains high interobserver reliability. (orig.)

  9. Reliability of implant placement with stereolithographic surgical guides generated from computed tomography: clinical data from 94 implants.

    Science.gov (United States)

    Ersoy, Ahmet Ersan; Turkyilmaz, Ilser; Ozan, Oguz; McGlumphy, Edwin A

    2008-08-01

    Dental implant placement requires precise planning with regard to anatomic limitations and restorative goals. The aim of this study was to evaluate the match between the positions and axes of the planned and placed implants using stereolithographic (SLA) surgical guides. Ninety-four implants were placed using SLA surgical guides generated from computed tomography (CT) between 2005 and 2006. Radiographic templates were used for all subjects during CT imaging. After obtaining three-dimensional CT images, each implant was virtually placed on the CT images. SLA surgical guides, fabricated using an SLA machine with a laser beam to polymerize the liquid photo-polymerized resin, were used during implant placement. A new CT scan was taken for each subject following implant placement. Special software was used to fuse the images of the planned and placed implants, and the locations and axes were compared. Compared to the planned implants, the placed implants showed angular deviation of 4.9° ± 2.36°, whereas the mean linear deviation was 1.22 ± 0.85 mm at the implant neck and 1.51 ± 1 mm at the implant apex. Compared to the implant planning, the angular deviation and linear deviation at the neck and apex of the placed maxillary implants were 5.31° ± 0.36°, 1.04 ± 0.56 mm, and 1.57 ± 0.97 mm, respectively, whereas corresponding figures for placed mandibular implants were 4.44° ± 0.31°, 1.42 ± 1.05 mm, and 1.44 ± 1.03 mm, respectively. SLA surgical guides using CT data may be reliable in implant placement and make flapless implant placement possible.
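
    The angular and linear deviations reported above can be computed from the planned and placed implant geometries with elementary vector operations; the sketch below uses invented coordinates for a single implant purely to show the calculation.

        import numpy as np

        # Hypothetical planned and placed implant positions (mm) and axis end points
        planned_neck, planned_apex = np.array([10.0, 20.0, 5.0]), np.array([10.0, 20.0, 16.0])
        placed_neck,  placed_apex  = np.array([10.8, 20.4, 5.2]), np.array([11.5, 21.3, 16.1])

        def axis(neck, apex):
            v = apex - neck
            return v / np.linalg.norm(v)

        # Angular deviation between the two implant axes
        cos_angle = np.clip(np.dot(axis(planned_neck, planned_apex),
                                   axis(placed_neck, placed_apex)), -1.0, 1.0)
        angle_deg = np.degrees(np.arccos(cos_angle))

        # Linear deviations at the neck and at the apex
        neck_dev = np.linalg.norm(placed_neck - planned_neck)
        apex_dev = np.linalg.norm(placed_apex - planned_apex)
        print(f"angle: {angle_deg:.2f} deg  neck: {neck_dev:.2f} mm  apex: {apex_dev:.2f} mm")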

  10. Numerical modelling of elastic space tethers

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall; Palmer, P. L.; Roberts, R. M.

    2012-01-01

    In this paper the importance of the ill-posedness of the classical, non-dissipative massive tether model on an orbiting tether system is studied numerically. The computations document that via the regularisation of bending resistance a more reliable numerical integrator can be produced. Furthermo....... It is also shown that on the slow manifold the dynamics of the satellites are well-approximated by the finite dimensional slack-spring model....

  11. Reliability and validity of the revised Gibson Test of Cognitive Skills, a computer-based test battery for assessing cognition across the lifespan

    Directory of Open Access Journals (Sweden)

    Moore AL

    2018-02-01

    Full Text Available Amy Lawson Moore, Terissa M Miller; Gibson Institute of Cognitive Research, Colorado Springs, CO, USA. Purpose: The purpose of the current study is to evaluate the validity and reliability of the revised Gibson Test of Cognitive Skills, a computer-based battery of tests measuring short-term memory, long-term memory, processing speed, logic and reasoning, visual processing, as well as auditory processing and word attack skills. Methods: This study included 2,737 participants aged 5–85 years. A series of studies was conducted to examine the validity and reliability using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test–retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III Tests of Cognitive Abilities and Tests of Achievement. Results: Results indicated strong sources of evidence of validity and reliability for the test, including internal consistency reliability coefficients ranging from 0.87 to 0.98, test–retest reliability coefficients ranging from 0.69 to 0.91, split-half reliability coefficients ranging from 0.87 to 0.91, and concurrent validity coefficients ranging from 0.53 to 0.93. Conclusion: The Gibson Test of Cognitive Skills-2 is a reliable and valid tool for assessing cognition in the general population across the lifespan. Keywords: testing, cognitive skills, memory, processing speed, visual processing, auditory processing

  12. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book starts with the question of what reliability is, covering the origin of reliability problems, the definition of reliability, and the use of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions of reliability, estimation of MTBF, processes of probability distributions, downtime, maintainability and availability, breakdown maintenance and preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.
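
    As a worked example of the quantities listed above (reliability function, failure rate, MTBF, availability), the short sketch below evaluates them for a hypothetical constant-failure-rate component; the numbers are illustrative only.

        import numpy as np

        failure_rate = 2.0e-5            # lambda, failures per hour (hypothetical)
        mttr = 8.0                       # mean time to repair, hours (hypothetical)

        mtbf = 1.0 / failure_rate        # for a constant failure rate, MTBF = 1/lambda
        availability = mtbf / (mtbf + mttr)

        for t in (1000.0, 10000.0, 50000.0):
            reliability = np.exp(-failure_rate * t)     # R(t) = exp(-lambda * t)
            print(f"t = {t:7.0f} h  R(t) = {reliability:.4f}")

        print("MTBF =", mtbf, "h   steady-state availability =", round(availability, 6))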

  13. Computer numerical control (CNC) lithography: light-motion synchronized UV-LED lithography for 3D microfabrication

    International Nuclear Information System (INIS)

    Kim, Jungkwun; Allen, Mark G; Yoon, Yong-Kyu

    2016-01-01

    This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array. (paper)

  14. Numerical and analytical solutions for problems relevant for quantum computers; Numerische und analytische Loesungen fuer Quanteninformatisch-relevante Probleme

    Energy Technology Data Exchange (ETDEWEB)

    Spoerl, Andreas

    2008-06-05

    Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation. The resulting solutions are relaxation and time optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study was made to investigate whether there exists a sufficient set of criteria to identify systems with dynamics which are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)

  15. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    Science.gov (United States)

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and the subgrid-scale models employed but also only the applied filtering process can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.

  16. PORFLO - a continuum model for fluid flow, heat transfer, and mass transport in porous media. Model theory, numerical methods, and computational tests

    International Nuclear Information System (INIS)

    Runchal, A.K.; Sagar, B.; Baca, R.G.; Kline, N.W.

    1985-09-01

    Postclosure performance assessment of the proposed high-level nuclear waste repository in flood basalts at Hanford requires that the processes of fluid flow, heat transfer, and mass transport be numerically modeled at appropriate space and time scales. A suite of computer models has been developed to meet this objective. The theory of one of these models, named PORFLO, is described in this report. Also presented are a discussion of the numerical techniques in the PORFLO computer code and a few computational test cases. Three two-dimensional equations, one each for fluid flow, heat transfer, and mass transport, are numerically solved in PORFLO. The governing equations are derived from the principle of conservation of mass, momentum, and energy in a stationary control volume that is assumed to contain a heterogeneous, anisotropic porous medium. Broad discrete features can be accommodated by specifying zones with distinct properties, or these can be included by defining an equivalent porous medium. The governing equations are parabolic differential equations that are coupled through time-varying parameters. Computational tests of the model are done by comparisons of simulation results with analytic solutions, with results from other independently developed numerical models, and with available laboratory and/or field data. In this report, in addition to the theory of the model, results from three test cases are discussed. A users' manual for the computer code resulting from this model has been prepared and is available as a separate document. 37 refs., 20 figs., 15 tabs

  17. Using standardized video cases for assessment of medical communication skills: reliability of an objective structured video examination by computer

    NARCIS (Netherlands)

    Hulsman, R. L.; Mollema, E. D.; Oort, F. J.; Hoos, A. M.; de Haes, J. C. J. M.

    2006-01-01

    OBJECTIVE: Using standardized video cases in a computerized objective structured video examination (OSVE) aims to measure cognitive scripts underlying overt communication behavior by questions on knowledge, understanding and performance. In this study the reliability of the OSVE assessment is

  18. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    Science.gov (United States)

    Dodig, H.

    2017-11-01

    This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge element BEM/FEM to compute near-field edge element coefficients associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for the near-to-far field transformation (NTFFT), which is a common step in RCS computations. At the end of the paper it is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method demonstrates accuracy even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  20. Assessing the influence of the rhizosphere on soil hydraulic properties using X-ray computed tomography and numerical modelling.

    Science.gov (United States)

    Daly, Keith R; Mooney, Sacha J; Bennett, Malcolm J; Crout, Neil M J; Roose, Tiina; Tracy, Saoirse R

    2015-04-01

    Understanding the dynamics of water distribution in soil is crucial for enhancing our knowledge of managing soil and water resources. The application of X-ray computed tomography (CT) to the plant and soil sciences is now well established. However, few studies have utilized the technique for visualizing water in soil pore spaces. Here this method is utilized to visualize the water in soil in situ and in three-dimensions at successive reductive matric potentials in bulk and rhizosphere soil. The measurements are combined with numerical modelling to determine the unsaturated hydraulic conductivity, providing a complete picture of the hydraulic properties of the soil. The technique was performed on soil cores that were sampled adjacent to established roots (rhizosphere soil) and from soil that had not been influenced by roots (bulk soil). A water release curve was obtained for the different soil types using measurements of their pore geometries derived from CT imaging and verified using conventional methods, such as pressure plates. The water, soil, and air phases from the images were segmented and quantified using image analysis. The water release characteristics obtained for the contrasting soils showed clear differences in hydraulic properties between rhizosphere and bulk soil, especially in clay soil. The data suggest that soils influenced by roots (rhizosphere soil) are less porous due to increased aggregation when compared with bulk soil. The information and insights obtained on the hydraulic properties of rhizosphere and bulk soil will enhance our understanding of rhizosphere biophysics and improve current water uptake models. © The Author 2015. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  1. Exact Dispersion Study of an Asymmetric Thin Planar Slab Dielectric Waveguide without Computing d²β/dk² Numerically

    Science.gov (United States)

    Raghuwanshi, Sanjeev Kumar; Palodiya, Vikram

    2017-08-01

    Waveguide dispersion can be tailored, but material dispersion cannot. Hence, the total dispersion can be shifted to any desired band by adjusting the waveguide dispersion. Waveguide dispersion is proportional to d²β/dk², which normally needs to be computed numerically. In this paper, we have tried to derive an analytical expression for d²β/dk², which otherwise must be evaluated by a numerical technique to an accuracy of about 10⁻⁵; this constraint sometimes generates errors in the calculation of the waveguide dispersion. To formulate the problem we use the graphical method. Our study reveals that we can compute the waveguide dispersion accurately enough for various modes by knowing β only.

  2. Applied and numerical partial differential equations scientific computing in simulation, optimization and control in a multidisciplinary context

    CERN Document Server

    Glowinski, R; Kuznetsov, Y A; Periaux, Jacques; Neittaanmaki, Pekka; Pironneau, Olivier

    2010-01-01

    Standing at the intersection of mathematics and scientific computing, this collection of state-of-the-art papers in nonlinear PDEs examines their applications to subjects as diverse as dynamical systems, computational mechanics, and the mathematics of finance.

  3. Computer-assisted radiographic calculation of spinal curvature in brachycephalic "screw-tailed" dog breeds with congenital thoracic vertebral malformations: reliability and clinical evaluation.

    Directory of Open Access Journals (Sweden)

    Julien Guevar

    Full Text Available The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013 to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.

  4. 3-D image-based numerical computations of snow permeability: links to specific surface area, density, and microstructural anisotropy

    Directory of Open Access Journals (Sweden)

    N. Calonne

    2012-09-01

    Full Text Available We used three-dimensional (3-D) images of snow microstructure to carry out numerical estimations of the full tensor of the intrinsic permeability of snow (K). This study was performed on 35 snow samples, spanning a wide range of seasonal snow types. For several snow samples, a significant anisotropy of permeability was detected and is consistent with that observed for the effective thermal conductivity obtained from the same samples. The anisotropy coefficient, defined as the ratio of the vertical over the horizontal components of K, ranges from 0.74 for a sample of decomposing precipitation particles collected in the field to 1.66 for a depth hoar specimen. Because the permeability is related to a characteristic length, we introduced a dimensionless tensor K* = K/r_es², where the equivalent sphere radius of ice grains (r_es) is computed from the specific surface area of snow (SSA) and the ice density (ρ_i) as follows: r_es = 3/(SSA × ρ_i). We define K and K* as the average of the diagonal components of K and K*, respectively. The 35 values of K* were fitted to snow density (ρ_s) and provide the following regression: K = (3.0 ± 0.3) r_es² exp((−0.0130 ± 0.0003) ρ_s). We noted that the anisotropy of permeability does not affect significantly the proposed equation. This regression curve was applied to several independent datasets from the literature and compared to other existing regression curves or analytical models. The results show that it is probably the best currently available simple relationship linking the average value of permeability, K, to snow density and specific surface area.
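
    The regression quoted above is simple to apply; the sketch below computes the equivalent sphere radius from SSA and ice density and then the permeability estimate for a few hypothetical snow samples (the SSA and density values are invented, the regression coefficients are those quoted in the abstract).

        import numpy as np

        rho_ice = 917.0                                  # ice density [kg m^-3]

        def equivalent_radius(ssa):
            """Equivalent sphere radius r_es = 3 / (SSA * rho_ice), SSA in m^2 kg^-1."""
            return 3.0 / (ssa * rho_ice)

        def permeability(ssa, rho_snow):
            """K = 3.0 * r_es^2 * exp(-0.0130 * rho_snow), regression from the abstract [m^2]."""
            r_es = equivalent_radius(ssa)
            return 3.0 * r_es**2 * np.exp(-0.0130 * rho_snow)

        # Hypothetical samples: (SSA [m^2/kg], snow density [kg/m^3])
        for ssa, rho_s in [(60.0, 120.0), (30.0, 250.0), (12.0, 400.0)]:
            print(f"SSA={ssa:5.1f}  rho_s={rho_s:5.0f}  K={permeability(ssa, rho_s):.3e} m^2")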

  5. OCENER, a one-dimensional computer code for the numerical simulation of the mechanical effects of peaceful underground nuclear explosions in rocks

    International Nuclear Information System (INIS)

    Gupta, S.C.; Sikka, S.K.; Chidambaram, R.

    1979-01-01

    An account is given of a one-dimensional, spherically symmetric computer code for the numerical simulation of the effects of peaceful underground nuclear explosions in rocks (OCENER). In the code, the nature of the stress field and the response of the medium to this field are modelled numerically by the finite-difference form of the laws of continuum mechanics and the constitutive relations of the rock medium in which the detonation occurs. The code approximates well the cavity growth and fracturing of the surrounding rock for contained explosions, and the events up to the time the spherical symmetry remains valid for cratering-type explosions. (auth.)

  6. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    Science.gov (United States)

    Groves, Curtis E.; Ilie, marcel; Shallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically, using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
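
    To make the statistics concrete, here is a minimal sketch of a Student-t confidence interval computed from a handful of CFD results obtained by perturbing the inputs; it illustrates the distributional step only and is not the paper's exact procedure. The sample values are invented.

```python
import numpy as np
from scipy import stats

def student_t_uncertainty(samples, confidence=0.95):
    """Mean and half-width of a two-sided confidence interval on the mean
    of a small set of CFD results obtained by perturbing the inputs.
    Illustrative sketch, not the paper's exact procedure."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    mean = samples.mean()
    sem = samples.std(ddof=1) / np.sqrt(n)            # standard error of the mean
    t_val = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    return mean, t_val * sem

# Example: pressure drop (invented) from five runs with perturbed inputs
mean, half_width = student_t_uncertainty([101.2, 99.8, 100.5, 102.1, 100.9])
print(f"{mean:.2f} +/- {half_width:.2f}")
```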

  7. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)

  8. A Simple and Efficient Numerical Method for Computing the Dynamics of Rotating Bose--Einstein Condensates via Rotating Lagrangian Coordinates

    KAUST Repository

    Bao, Weizhu; Marahrens, Daniel; Tang, Qinglin; Zhang, Yanzhi

    2013-01-01

    We propose a simple, efficient, and accurate numerical method for simulating the dynamics of rotating Bose-Einstein condensates (BECs) in a rotational frame with or without long-range dipole-dipole interaction (DDI). We begin with the three

  9. Numerical analysis

    CERN Document Server

    Brezinski, C

    2012-01-01

    Numerical analysis has witnessed many significant developments in the 20th century. This book brings together 16 papers dealing with historical developments, survey papers and papers on recent trends in selected areas of numerical analysis, such as: approximation and interpolation, solution of linear systems and eigenvalue problems, iterative methods, quadrature rules, and solution of ordinary differential, partial differential and integral equations. The papers are reprinted from the 7-volume 'Numerical Analysis 2000' project of the Journal of Computational and Applied Mathematics.

  10. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure, as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  12. Optimal design method for a digital human–computer interface based on human reliability in a nuclear power plant. Part 3: Optimization method for interface task layout

    International Nuclear Information System (INIS)

    Jiang, Jianjun; Wang, Yiqun; Zhang, Li; Xie, Tian; Li, Min; Peng, Yuyuan; Wu, Daqing; Li, Peiyao; Ma, Congmin; Shen, Mengxu; Wu, Xing; Weng, Mengyun; Wang, Shiwei; Xie, Cen

    2016-01-01

    Highlights: • The authors present an optimization algorithm for interface task layout. • The performing process of the proposed algorithm is depicted. • The performance evaluation adopted a neural network method. • The optimized layouts of an event's interface tasks were obtained by experiments. - Abstract: This is the last in a series of papers describing the optimal design for a digital human–computer interface of a nuclear power plant (NPP) from three different points of view based on human reliability. The purpose of this series is to propose different optimization methods from varying perspectives to decrease human factor events that arise from the defects of a human–computer interface. The present paper mainly addresses how to effectively lay out interface tasks onto different screens. The purpose of this paper is to decrease human errors by reducing the distance that an operator moves among different screens in each operation. In order to resolve the problem, the authors propose an optimization process of interface task layout for the digital human–computer interface of an NPP. To automatically lay out each interface task onto one of the screens in each operation, the paper presents a shortest-moving-path optimization algorithm with a dynamic flag based on human reliability. To test the algorithm's performance, the evaluation method uses a neural network based on human reliability. The lower the human error probabilities are, the better the interface task layouts among different screens are. Thus, by analyzing the performance of each interface task layout, the optimization result is obtained. Finally, the optimized layouts of the spurious safety injection event interface tasks of the NPP were obtained in an experiment; the proposed method shows good accuracy and stability.
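
    The paper's own shortest-moving-path algorithm is not reproduced here, but the underlying idea — assign tasks to screens so that the operator's total movement per operation stays small — can be sketched with a simple greedy local search. Everything below (function names, the capacity limit, the toy distance) is our illustrative assumption, not the authors' implementation.

```python
import itertools

def movement_cost(task_sequence, layout, screen_distance):
    """Total distance an operator moves between screens while executing
    the tasks in order; layout maps task -> screen index."""
    screens = [layout[t] for t in task_sequence]
    return sum(screen_distance(a, b) for a, b in zip(screens, screens[1:]))

def greedy_layout(task_sequence, n_screens, capacity, screen_distance):
    """Greedy local search: start from a round-robin layout and move one
    task at a time to the screen that most reduces the movement cost.
    A toy stand-in for the paper's dynamic-flag shortest-path algorithm."""
    tasks = list(dict.fromkeys(task_sequence))          # unique tasks, in order
    layout = {t: i % n_screens for i, t in enumerate(tasks)}
    improved = True
    while improved:
        improved = False
        for t, s in itertools.product(tasks, range(n_screens)):
            if s == layout[t] or sum(v == s for v in layout.values()) >= capacity:
                continue
            trial = dict(layout, **{t: s})
            if movement_cost(task_sequence, trial, screen_distance) < \
               movement_cost(task_sequence, layout, screen_distance):
                layout, improved = trial, True
    return layout

# Example: screens in a row, distance = index difference, at most 3 tasks per screen
dist = lambda a, b: abs(a - b)
print(greedy_layout(["A", "B", "A", "C", "D", "C"], n_screens=2, capacity=3, screen_distance=dist))
```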

  13. Numerical analysis of resonances induced by s wave neutrons in transmission time-of-flight experiments with a computer IBM 7094 II

    International Nuclear Information System (INIS)

    Corge, Ch.

    1969-01-01

    Numerical analysis of transmission resonances induced by s-wave neutrons in time-of-flight experiments can be achieved in a fairly automatic way on an IBM 7094/II computer. The computations involved are carried out following a four-step scheme: 1 - experimental raw data are processed to obtain the resonant transmissions, 2 - values of the experimental quantities for each resonance are derived from the above transmissions, 3 - resonance parameters are determined using a least-squares method to solve the overdetermined system obtained by equating theoretical functions to the corresponding experimental values. Four analysis methods are gathered in the same code, 4 - graphical control of the results is performed. (author) [fr
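
    Step 3 is an ordinary overdetermined least-squares fit. The sketch below shows the same idea with modern tools on a single synthetic resonance — a Lorentzian cross-section inside an exponential transmission — and is only a stand-in for the original 7094 analysis; all parameter values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def transmission(E, E0, gamma, sigma0, n):
    """Single-resonance transmission T = exp(-n * sigma(E)) with a
    Lorentzian cross-section; a textbook stand-in, not the original code."""
    sigma = sigma0 * (gamma / 2) ** 2 / ((E - E0) ** 2 + (gamma / 2) ** 2)
    return np.exp(-n * sigma)

def fit_resonance(E, T_exp, p0):
    """Least-squares fit of (E0, gamma, sigma0) to measured transmissions,
    with the areal density n held fixed."""
    E0, gamma, sigma0, n = p0
    residuals = lambda p: transmission(E, p[0], p[1], p[2], n) - T_exp
    return least_squares(residuals, x0=[E0, gamma, sigma0]).x

# Synthetic example: recover parameters from noisy data
rng = np.random.default_rng(0)
E = np.linspace(4.0, 6.0, 200)                       # energy grid, eV
T = transmission(E, 5.0, 0.1, 30.0, 0.02) + rng.normal(0, 0.005, E.size)
print(fit_resonance(E, T, p0=(4.9, 0.15, 20.0, 0.02)))
```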

  14. Interactions among biotic and abiotic factors affect the reliability of tungsten microneedles puncturing in vitro and in vivo peripheral nerves: A hybrid computational approach

    Energy Technology Data Exchange (ETDEWEB)

    Sergi, Pier Nicola, E-mail: p.sergi@sssup.it [Translational Neural Engineering Laboratory, The Biorobotics Institute, Scuola Superiore Sant' Anna, Viale Rinaldo Piaggio 34, Pontedera, 56025 (Italy); Jensen, Winnie [Department of Health Science and Technology, Fredrik Bajers Vej 7, 9220 Aalborg (Denmark); Yoshida, Ken [Department of Biomedical Engineering, Indiana University - Purdue University Indianapolis, 723 W. Michigan St., SL220, Indianapolis, IN 46202 (United States)

    2016-02-01

    Tungsten is an elective material for producing slender and stiff microneedles able to enter soft tissues and minimize puncture wounds. In particular, tungsten microneedles are used to puncture peripheral nerves and insert neural interfaces, bridging the gap between the nervous system and robotic devices (e.g., hand prostheses). Unfortunately, microneedles fail during the puncture process, and this failure is not dependent on the stiffness or fracture toughness of the constituent material. In addition, the microneedles' performance decreases during in vivo trials with respect to in vitro ones. This further effect is independent of internal biotic effects, while it seems to be related to external biotic causes. Since the exact synergy of phenomena decreasing the in vivo reliability is still not known, this work explored the connection between the in vitro and in vivo behavior of tungsten microneedles through the study of interactions between biotic and abiotic factors. A hybrid computational approach, simultaneously using theoretical relationships and in silico models of nerves, was implemented to model the change of reliability with varying microneedle diameter, and to predict in vivo performance by using in vitro reliability and local differences between the in vivo and in vitro mechanical response of nerves. - Highlights: • We provide phenomenological Finite Element (FE) models of peripheral nerves to study the interactions with W microneedles • We provide a general interaction-based approach to model the reliability of slender microneedles • We evaluate the reliability of W microneedles to puncture in vivo nerves • We provide a novel synergistic hybrid approach (theory + simulations) involving interactions among biotic and abiotic factors • We validate the hybrid approach by using experimental data from the literature.

  15. Interactions among biotic and abiotic factors affect the reliability of tungsten microneedles puncturing in vitro and in vivo peripheral nerves: A hybrid computational approach

    International Nuclear Information System (INIS)

    Sergi, Pier Nicola; Jensen, Winnie; Yoshida, Ken

    2016-01-01

    Tungsten is an elective material for producing slender and stiff microneedles able to enter soft tissues and minimize puncture wounds. In particular, tungsten microneedles are used to puncture peripheral nerves and insert neural interfaces, bridging the gap between the nervous system and robotic devices (e.g., hand prostheses). Unfortunately, microneedles fail during the puncture process, and this failure is not dependent on the stiffness or fracture toughness of the constituent material. In addition, the microneedles' performance decreases during in vivo trials with respect to in vitro ones. This further effect is independent of internal biotic effects, while it seems to be related to external biotic causes. Since the exact synergy of phenomena decreasing the in vivo reliability is still not known, this work explored the connection between the in vitro and in vivo behavior of tungsten microneedles through the study of interactions between biotic and abiotic factors. A hybrid computational approach, simultaneously using theoretical relationships and in silico models of nerves, was implemented to model the change of reliability with varying microneedle diameter, and to predict in vivo performance by using in vitro reliability and local differences between the in vivo and in vitro mechanical response of nerves. - Highlights: • We provide phenomenological Finite Element (FE) models of peripheral nerves to study the interactions with W microneedles • We provide a general interaction-based approach to model the reliability of slender microneedles • We evaluate the reliability of W microneedles to puncture in vivo nerves • We provide a novel synergistic hybrid approach (theory + simulations) involving interactions among biotic and abiotic factors • We validate the hybrid approach by using experimental data from the literature

  16. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book is about reliability engineering. It describes the definition and importance of reliability, the development of reliability engineering, the failure rate and the types of failure probability density function, CFR and the index distribution, IFR and the normal and Weibull distributions, maintainability and movability, reliability testing and reliability assumption for the index, normal, and Weibull distribution types, reliability sampling tests, the reliability of systems, design for reliability, and functionality failure analysis by FTA.

  17. Use of computers at nuclear power plants

    International Nuclear Information System (INIS)

    Sen'kin, V.I.; Ozhigano, Yu.V.

    1974-01-01

    Applications of information and control computers in reactor control systems in Great Britain, the Federal Republic of Germany, France, Canada, and the USA are surveyed. To increase the reliability of the computers, effective means were designed for emergency operation and automatic computerized control, and highly reliable micromodule modifications were developed. Numerical data units were handled along with the development of methods and diagrams for converting analog values to numerical values, in accordance with modern requirements. Some data are presented on the operating reliability of computers in nuclear power plants, both proposed and under construction. It is concluded that in foreign nuclear power stations computers for information processing and calculation are finding increasingly wide use. Fast response, the ability to control a large number of parameters, and operation of the computer with increasing reliability are speeding up the process of introducing computers in atomic energy and broadening their functions. (V.P.)

  18. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    Science.gov (United States)

    Bavuso, S. J.

    1984-01-01

    A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. The numerous factors that potentially have a degrading effect on system reliability, and the ways in which these factors peculiar to highly reliable fault-tolerant systems are accounted for in credible reliability assessments, are described. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  19. Accuracy and reliability of a novel method for fusion of digital dental casts and cone beam computed tomography scans

    NARCIS (Netherlands)

    Rangel, F.A.; Maal, T.J.J.; Bronkhorst, E.M.; Breuning, K.H.; Schols, J.G.J.H.; Berge, S.J.; Kuijpers-Jagtman, A.M.

    2013-01-01

    Several methods have been proposed to integrate digital models into Cone Beam Computed Tomography scans. Since all these methods have some drawbacks such as radiation exposure, soft tissue deformation and time-consuming digital handling processes, we propose a new method to integrate digital dental

  20. The Reliability of Classifications of Proximal Femoral Fractures with 3-Dimensional Computed Tomography: The New Concept of Comprehensive Classification

    Directory of Open Access Journals (Sweden)

    Hiroaki Kijima

    2014-01-01

    Full Text Available The reliability of proximal femoral fracture classifications using 3DCT was evaluated, and a comprehensive “area classification” was developed. Eleven orthopedists (5–26 years from graduation) classified 27 proximal femoral fractures at one hospital from June 2013 to July 2014 based on preoperative images. Various classifications were compared to “area classification.” In “area classification,” the proximal femur is divided into 4 areas with 3 boundary lines: Line-1 is the center of the neck, Line-2 is the border between the neck and the trochanteric zone, and Line-3 links the inferior borders of the greater and lesser trochanters. A fracture only in the first area was classified as a pure first area fracture; one in the first and second area was classified as a 1-2 type fracture. In the same way, fractures were classified as pure 2, 3-4, 1-2-3, and so on. “Area classification” reliability was highest when orthopedists with varying experience classified proximal femoral fractures using 3DCT. Other classifications cannot classify proximal femoral fractures if they exceed each classification’s particular zones. However, fractures that exceed the target zones are “dangerous” fractures. “Area classification” can classify such fractures, and it is therefore useful for selecting osteosynthesis methods.

  1. Conical : An extended module for computing a numerically satisfactory pair of solutions of the differential equation for conical functions

    NARCIS (Netherlands)

    T.M. Dunster (Mark); A. Gil (Amparo); J. Segura (Javier); N.M. Temme (Nico)

    2017-01-01

    textabstractConical functions appear in a large number of applications in physics and engineering. In this paper we describe an extension of our module Conical (Gil et al., 2012) for the computation of conical functions. Specifically, the module includes now a routine for computing the function

  2. Numerical computation of space-charge fields of electron bunches in a beam pipe of elliptical shape

    International Nuclear Information System (INIS)

    Markovik, A.

    2005-01-01

    This work deals in particular with 3D numerical simulations of space-charge fields from electron bunches in a beam pipe with elliptical cross-section. To obtain the space-charge fields, it is necessary to solve the Poisson equation with the given boundary conditions and space-charge distribution. The discretization of the Poisson equation by the method of finite differences on a Cartesian grid, as well as the setting up of the coefficient matrix A for the elliptical domain, is explained in Section 2. In Section 3 the properties of the coefficient matrix and numerical algorithms suitable for solving non-symmetric linear systems of equations are introduced. In the following Section 4, the applied solver algorithms are investigated by numerical tests with right-hand-side functions for which the analytical solution is known. (orig.)
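
    For orientation, the sketch below shows the standard 5-point finite-difference discretization of the Poisson equation on a rectangular grid with homogeneous Dirichlet boundaries, solved with a sparse direct solver. It is a simplified stand-in: the elliptical-boundary treatment that makes the coefficient matrix non-symmetric in this work is omitted, and all names and grid sizes are our own.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_poisson_2d(rho, h, eps0=8.854187817e-12):
    """Solve -laplace(phi) = rho/eps0 on a rectangular grid with phi = 0
    on the boundary, using the standard 5-point stencil. Simplified sketch:
    the elliptical-boundary treatment of the thesis is omitted."""
    ny, nx = rho.shape
    n = nx * ny
    main = 4.0 * np.ones(n)
    side = -1.0 * np.ones(n - 1)
    side[np.arange(1, n) % nx == 0] = 0.0        # no coupling across row ends
    updown = -1.0 * np.ones(n - nx)
    A = sp.diags([main, side, side, updown, updown],
                 [0, 1, -1, nx, -nx], format="csr") / h**2
    phi = spsolve(A, rho.ravel() / eps0)
    return phi.reshape(ny, nx)

# Example: a small Gaussian charge blob on a 64 x 64 grid (invented values)
x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x)
rho = np.exp(-(X**2 + Y**2) / 0.05) * 1e-9       # C/m^3
phi = solve_poisson_2d(rho, h=x[1] - x[0])
print(phi.max())
```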

  3. Numerical computation of space-charge fields of electron bunches in a beam pipe of elliptical shape

    Energy Technology Data Exchange (ETDEWEB)

    Markovik, A.

    2005-09-28

    This work deals in particular with 3D numerical simulations of space-charge fields from electron bunches in a beam pipe with elliptical cross-section. To obtain the space-charge fields, it is necessary to solve the Poisson equation with the given boundary conditions and space-charge distribution. The discretization of the Poisson equation by the method of finite differences on a Cartesian grid, as well as the setting up of the coefficient matrix A for the elliptical domain, is explained in Section 2. In Section 3 the properties of the coefficient matrix and numerical algorithms suitable for solving non-symmetric linear systems of equations are introduced. In the following Section 4, the applied solver algorithms are investigated by numerical tests with right-hand-side functions for which the analytical solution is known. (orig.)

  4. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Full Text Available Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts for design of structures, but the problems of structural engineering are better known through them. Some of the main methods for the estimation of the probability of failure are the exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo Simulation is used in this paper, because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer aided simulations of a large number of tests. The procedures of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes have been demonstrated in this paper.
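
    A crude Monte Carlo estimate of the probability of failure, and its conversion to a reliability index β = −Φ⁻¹(P_f), can be sketched as follows; the limit-state function and the distributions are invented toy values, not the bridge-pier model of the paper.

```python
import numpy as np
from scipy.stats import norm

def monte_carlo_reliability(g, sample_inputs, n=1_000_000, seed=0):
    """Estimate P_f = P[g(X) <= 0] by crude Monte Carlo and convert it to
    the reliability index beta = -Phi^{-1}(P_f). `g` is the limit-state
    function, `sample_inputs` draws the random inputs."""
    rng = np.random.default_rng(seed)
    x = sample_inputs(rng, n)
    pf = np.mean(g(x) <= 0.0)
    beta = -norm.ppf(pf) if 0.0 < pf < 1.0 else np.inf
    return pf, beta

# Toy limit state: resistance R ~ N(280, 30) vs load effect S ~ N(150, 40)
sample = lambda rng, n: (rng.normal(280, 30, n), rng.normal(150, 40, n))
g = lambda x: x[0] - x[1]                         # failure when R - S <= 0
pf, beta = monte_carlo_reliability(g, sample)
print(f"P_f = {pf:.2e}, beta = {beta:.2f}")
```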

  5. Reliability of a coordinate system based on anatomical landmarks of the maxillofacial skeleton. An evaluation method for three-dimensional images obtained by cone-beam computed tomography

    International Nuclear Information System (INIS)

    Kimura, Momoko; Nawa, Hiroyuki; Yoshida, Kazuhito; Muramatsu, Atsushi; Fuyamada, Mariko; Goto, Shigemi; Ariji, Eiichiro; Tokumori, Kenji; Katsumata, Akitoshi

    2009-01-01

    We propose a method for evaluating the reliability of a coordinate system based on maxillofacial skeletal landmarks and use it to assess two coordinate systems. Scatter plots and 95% confidence ellipses of an objective landmark were defined as an index for demonstrating the stability of the coordinate system. A head phantom was positioned horizontally in reference to the Frankfurt horizontal and occlusal planes and subsequently scanned once in each position using cone-beam computed tomography. On the three-dimensional images created with a volume-rendering procedure, six dentists twice set two different coordinate systems: coordinate system 1 was defined by the nasion, sella, and basion, and coordinate system 2 was based on the left orbitale, bilateral porions, and basion. The menton was assigned as an objective landmark. The scatter plot and 95% ellipse of the menton indicated the high-level reliability of coordinate system 2. The patterns with the two coordinate systems were similar between data obtained in different head positions. The method presented here may be effective for evaluating the reliability (reproducibility) of coordinate systems based on skeletal landmarks. (author)

  6. On the analysis of glow curves with the general order kinetics: Reliability of the computed trap parameters

    Energy Technology Data Exchange (ETDEWEB)

    Ortega, F. [Facultad de Ingeniería (UNCPBA) and CIFICEN (UNCPBA – CICPBA – CONICET), Av. del Valle 5737, 7400 Olavarría (Argentina); Santiago, M.; Martinez, N.; Marcazzó, J.; Molina, P.; Caselli, E. [Instituto de Física Arroyo Seco (UNCPBA) and CIFICEN (UNCPBA – CICPBA – CONICET), Pinto 399, 7000 Tandil (Argentina)

    2017-04-15

    Nowadays the most widely employed kinetics for analyzing glow curves is the general-order (GO) kinetics proposed by C. E. May and J. A. Partridge. As shown in many articles, this kinetics might yield wrong parameters characterizing trap and recombination centers. In this article this kinetics is compared with the modified general-order kinetics put forward by M. S. Rasheedy by analyzing synthetic glow curves. The results show that the modified kinetics gives parameters which are more accurate than those yielded by the original general-order kinetics. A criterion is reported to evaluate the accuracy of the trap parameters found by deconvolving glow curves. This criterion was employed to assess the reliability of the trap parameters of the YVO₄:Eu³⁺ compounds.
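
    To make the notion of a synthetic glow curve concrete, the sketch below integrates the textbook general-order rate equation dn/dT = −(s/β) nᵇ exp(−E/k_B T) over a linear heating ramp; the parameter values are illustrative and this is not the deconvolution code used by the authors.

```python
import numpy as np
from scipy.integrate import solve_ivp

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def go_glow_curve(E, s, b, n0, beta=1.0, T_range=(300.0, 600.0), points=600):
    """Synthetic glow curve for general-order kinetics,
    dn/dT = -(s/beta) * n**b * exp(-E/(k_B*T)), linear heating rate beta (K/s).
    Illustrative of the synthetic curves analysed in the paper."""
    def dndT(T, n):
        return -(s / beta) * np.clip(n, 0.0, None) ** b * np.exp(-E / (K_B * T))
    T = np.linspace(*T_range, points)
    sol = solve_ivp(dndT, T_range, [n0], t_eval=T, rtol=1e-8, atol=1e-12)
    n = sol.y[0]
    intensity = (s / beta) * n ** b * np.exp(-E / (K_B * T))   # I = -dn/dT
    return T, intensity

# Example: E = 1.0 eV, s = 1e12 1/s, kinetic order b = 1.5 (invented values)
T, I = go_glow_curve(E=1.0, s=1e12, b=1.5, n0=1.0)
print(T[np.argmax(I)])   # temperature of the glow-peak maximum
```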

  7. A Numerical Method for Computing Barge Impact Forces Based on Ultimate Strength of the Lashings between Barges

    National Research Council Canada - National Science Library

    Arroyo, Jose

    2004-01-01

    ... of the barge train, the approach velocity, the approach angle, the barge train moment of inertia, damage sustained by the barge structure, and friction between the barge and the wall. computation...

  8. Cardiac valve calcifications on low-dose unenhanced ungated chest computed tomography: inter-observer and inter-examination reliability, agreement and variability

    Energy Technology Data Exchange (ETDEWEB)

    Hamersvelt, Robbert W. van; Willemink, Martin J.; Takx, Richard A.P.; Eikendal, Anouk L.M.; Budde, Ricardo P.J.; Leiner, Tim; Jong, Pim A. de [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Mol, Christian P.; Isgum, Ivana [University Medical Center Utrecht, Image Sciences Institute, Utrecht (Netherlands)

    2014-07-15

    To determine inter-observer and inter-examination variability for aortic valve calcification (AVC) and mitral valve and annulus calcification (MC) in low-dose unenhanced ungated lung cancer screening chest computed tomography (CT). We included 578 lung cancer screening trial participants who were examined by CT twice within 3 months to follow indeterminate pulmonary nodules. On these CTs, AVC and MC were measured in cubic millimetres. One hundred CTs were examined by five observers to determine the inter-observer variability. Reliability was assessed by kappa statistics (κ) and intra-class correlation coefficients (ICCs). Variability was expressed as the mean difference ± standard deviation (SD). Inter-examination reliability was excellent for AVC (κ = 0.94, ICC = 0.96) and MC (κ = 0.95, ICC = 0.90). Inter-examination variability was 12.7 ± 118.2 mm³ for AVC and 31.5 ± 219.2 mm³ for MC. Inter-observer reliability ranged from κ = 0.68 to κ = 0.92 for AVC and from κ = 0.20 to κ = 0.66 for MC. Inter-observer ICC was 0.94 for AVC and ranged from 0.56 to 0.97 for MC. Inter-observer variability ranged from -30.5 ± 252.0 mm³ to 84.0 ± 240.5 mm³ for AVC and from -95.2 ± 210.0 mm³ to 303.7 ± 501.6 mm³ for MC. AVC can be quantified with excellent reliability on ungated unenhanced low-dose chest CT, but manual detection of MC can be subject to substantial inter-observer variability. Lung cancer screening CT may be used for detection and quantification of cardiac valve calcifications. (orig.)
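
    The intra-class correlation coefficients reported above follow a standard two-way model. As a concrete reference, here is a minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement) in the Shrout–Fleiss formulation; the example data are invented, not taken from the study.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    following Shrout & Fleiss. Rows = subjects, columns = raters/examinations.
    Illustrative of the reliability statistics reported in the paper."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)          # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)          # between raters
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Example: calcium volumes (mm^3, invented) for 5 patients on two examinations
scores = [[120, 131], [0, 4], [560, 548], [95, 110], [1340, 1322]]
print(round(icc_2_1(scores), 3))
```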

  9. Cardiac valve calcifications on low-dose unenhanced ungated chest computed tomography: inter-observer and inter-examination reliability, agreement and variability

    International Nuclear Information System (INIS)

    Hamersvelt, Robbert W. van; Willemink, Martin J.; Takx, Richard A.P.; Eikendal, Anouk L.M.; Budde, Ricardo P.J.; Leiner, Tim; Jong, Pim A. de; Mol, Christian P.; Isgum, Ivana

    2014-01-01

    To determine inter-observer and inter-examination variability for aortic valve calcification (AVC) and mitral valve and annulus calcification (MC) in low-dose unenhanced ungated lung cancer screening chest computed tomography (CT). We included 578 lung cancer screening trial participants who were examined by CT twice within 3 months to follow indeterminate pulmonary nodules. On these CTs, AVC and MC were measured in cubic millimetres. One hundred CTs were examined by five observers to determine the inter-observer variability. Reliability was assessed by kappa statistics (κ) and intra-class correlation coefficients (ICCs). Variability was expressed as the mean difference ± standard deviation (SD). Inter-examination reliability was excellent for AVC (κ = 0.94, ICC = 0.96) and MC (κ = 0.95, ICC = 0.90). Inter-examination variability was 12.7 ± 118.2 mm³ for AVC and 31.5 ± 219.2 mm³ for MC. Inter-observer reliability ranged from κ = 0.68 to κ = 0.92 for AVC and from κ = 0.20 to κ = 0.66 for MC. Inter-observer ICC was 0.94 for AVC and ranged from 0.56 to 0.97 for MC. Inter-observer variability ranged from -30.5 ± 252.0 mm³ to 84.0 ± 240.5 mm³ for AVC and from -95.2 ± 210.0 mm³ to 303.7 ± 501.6 mm³ for MC. AVC can be quantified with excellent reliability on ungated unenhanced low-dose chest CT, but manual detection of MC can be subject to substantial inter-observer variability. Lung cancer screening CT may be used for detection and quantification of cardiac valve calcifications. (orig.)

  10. Interactive reliability assessment using an integrated reliability data bank

    International Nuclear Information System (INIS)

    Allan, R.N.; Whitehead, A.M.

    1986-01-01

    The logical structure, techniques and practical application of a computer-aided technique based on a microcomputer using floppy disc Random Access Files is described. This interactive computational technique is efficient if the reliability prediction program is coupled directly to a relevant source of data to create an integrated reliability assessment/reliability data bank system. (DG)

  11. Establishment of computerized numerical databases on thermophysical and other properties of molten as well as solid materials and data evaluation and validation for generating recommended reliable reference data

    Science.gov (United States)

    Ho, C. Y.

    1993-01-01

    The Center for Information and Numerical Data Analysis and Synthesis (CINDAS) measures and maintains databases on thermophysical, thermoradiative, mechanical, optical, electronic, ablation, and physical properties of materials. Emphasis is on aerospace structural materials, especially composites, and on infrared detector/sensor materials. Within CINDAS, the Department of Defense sponsors several centers at Purdue: the High Temperature Material Information Analysis Center (HTMIAC), the Ceramics Information Analysis Center (CIAC) and the Metals Information Analysis Center (MIAC). The responsibilities of CINDAS are extremely broad, encompassing basic and applied research, measurement of the properties of thin wires and thin foils as well as bulk materials, acquisition and search of the world-wide literature, critical evaluation of data, generation of estimated values to fill data voids, investigation of constitutive, structural, processing, environmental, and rapid heating and loading effects, and dissemination of data. Liquids, gases, molten materials and solids are all considered. The responsibility of maintaining widely used databases includes data evaluation, analysis, correlation, and synthesis. Material property data recorded in the literature are often conflicting, diverging, and subject to large uncertainties. It is admittedly difficult to accurately measure material properties. Both systematic and random errors enter. Some errors result from a lack of characterization of the material itself (impurity effects). In some cases the assumed boundary conditions corresponding to a theoretical model are not obtained in the experiments. Stray heat flows and losses must be accounted for. Some experimental methods are inappropriate, and in other cases appropriate methods are carried out with poor technique. Conflicts in data may be resolved by curve fitting of the data to theoretical or empirical models or correlation in terms of various affecting parameters. Reasons (e.g. phase

  12. [The Computer Book of the Internal Medicine resident: validity and reliability of a questionnaire for self-assessment of competences in internal medicine residents].

    Science.gov (United States)

    Oristrell, J; Casanovas, A; Jordana, R; Comet, R; Gil, M; Oliva, J C

    2012-12-01

    There are no simple and validated instruments for evaluating the training of specialists. To analyze the reliability and validity of a computerized self-assessment method to quantify the acquisition of medical competences during the Internal Medicine residency program. All residents of our department participated in the study during a period of 28 months. Twenty-two questionnaires specific for each rotation (the Computer-Book of the Internal Medicine Resident) were constructed with items (questions) corresponding to three competence domains: clinical skills competence, communication skills and teamwork. Reliability was analyzed by measuring the internal consistency of items in each competence domain using Cronbach's alpha index. Validation was performed by comparing mean scores in each competence domain between senior and junior residents. Cut-off levels of competence scores were established in order to identify the strengths and weaknesses of our training program. Finally, self-assessment values were correlated with the evaluations of the medical staff. There was a high internal consistency of the items of clinical skills competences, communication skills and teamwork. Higher scores of clinical skills competence and communication skills, but not in those of teamwork were observed in senior residents than in junior residents. The Computer-Book of the Internal Medicine Resident identified the strengths and weaknesses of our training program. We did not observe any correlation between the results of the self- evaluations and the evaluations made by staff physicians. The items of Computer-Book of the Internal Medicine Resident showed high internal consistency and made it possible to measure the acquisition of medical competences in a team of Internal Medicine residents. This self-assessment method should be complemented with other evaluation methods in order to assess the acquisition of medical competences by an individual resident. Copyright © 2012 Elsevier Espa

  13. The pathological basis of dementia in the aged and reliability of computed tomograms in the diagnosis of dementia

    International Nuclear Information System (INIS)

    Tohgi, Hideo

    1981-01-01

    Pathological findings of demented (89 cases) and non-demented, control subjects (74 cases) in the aged were compared. The reliability of CT in the diagnosis was also studied. 1) Brain weight and the degree of ventricular dilatation were related to dementia, but the degree of convolutional atrophy showed no correlation with dementia. 2) Among various types of cerebrovascular lesions, only diffuse white matter lesions can be the cause of dementia. 3) Cases with dementia were classified into 4 groups according to whether cerebrovascular lesions or senile plaques were more prominent histologically. 4) CT evaluations coincided with pathological findings in only 17.9% for the degree of ventricular dilatation and 57.1% for the degree of convolutional atrophy. Ninety-three percent of cases without periventricular lucency did not show diffuse white matter lesions at autopsy, while only 50% of cases with periventricular lucency were confirmed to have diffuse white matter lesions. 5) The degree of ventricular dilatation, convolutional atrophy, periventricular lucency, and subarachnoid free space over the cerebral convexity were studied in relation to dementia. The sum of the evaluations of these indices had a significant correlation with dementia. (J.P.N.)

  14. Introduction to numerical analysis

    CERN Document Server

    Hildebrand, F B

    1987-01-01

    Well-known, respected introduction, updated to integrate concepts and procedures associated with computers. Computation, approximation, interpolation, numerical differentiation and integration, smoothing of data, other topics in lucid presentation. Includes 150 additional problems in this edition. Bibliography.

  15. Computation of Green function of the Schroedinger-like partial differential equations by the numerical functional integration

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.; Shahbagian, R.R.; Zhidkov, E.P.

    1991-01-01

    A new method for the numerical solution of the boundary problem for Schroedinger-like partial differential equations in Rⁿ is elaborated. The method is based on the representation of the multidimensional Green function in the form of a multiple functional integral and on the use of approximation formulas constructed for such integrals. The convergence of the approximations to the exact value is proved, and the remainder of the formulas is estimated. The method reduces the initial differential problem to quadratures. 16 refs.; 7 tabs

  16. Reliability analysis and computation of computer-based safety instrumentation and control used in German nuclear power plant. Final report; Zuverlaessigkeitsuntersuchung und -berechnung rechnerbasierter Sicherheitsleittechnik zum Einsatz in deutschen Kernkraftwerken. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Yongjian [Hochschule Magdeburg-Stendal, Magdeburg (Germany). Inst. fuer Elektrotechnik; Krause, Ulrich [Magdeburg Univ. (Germany). Inst. fuer Apparate- und Umwelttechnik; Gu, Chunlei

    2014-08-21

    The trend of technological advancement in the field of safety instrumentation and control (I and C) leads to increasingly frequent use of computer-based (digital) control systems, which consist of distributed computers connected by bus communications and whose functionality is freely programmable by qualified software. The advantages of the new I and C systems over the old hard-wired I and C technology lie, for example, in higher flexibility, cost-effective procurement of spare parts, and higher hardware reliability (through higher integration density, intelligent self-monitoring mechanisms, etc.). On the other hand, skeptics see in the new computer-based I and C technology a higher potential for common cause failures (CCF) and easier manipulation by sabotage (IT security). In this joint research project funded by the Federal Ministry for Economic Affairs and Energy (BMWi) (2011-2014, FJZ 1501405), the Otto-von-Guericke-University Magdeburg and the Magdeburg-Stendal University of Applied Sciences are therefore trying to develop suitable methods for demonstrating the reliability of the new instrumentation and control systems, with the focus on the investigation of CCF. The expertise of both institutions shall thereby be extended to this area, making a scientific contribution to sound reliability judgments of digital safety I and C in domestic and foreign nuclear power plants. First, the state of science and technology is worked out through the study of national and international standards in the field of functional safety of electrical and I and C systems and the accompanying literature. On the basis of the existing nuclear standards, the deterministic requirements on the structure of the new digital I and C system are determined. The possible methods of reliability modeling are analyzed and compared. A suitable method called multi class binomial failure rate (MCFBR), which was successfully used in safety valve applications, will be

  17. Virtual non-contrast in second-generation, dual-energy computed tomography: Reliability of attenuation values

    International Nuclear Information System (INIS)

    Toepker, Michael; Moritz, Thomas; Krauss, Bernhard; Weber, Michael; Euller, Gordon; Mang, Thomas; Wolf, Florian; Herold, Christian J.; Ringl, Helmut

    2012-01-01

    Purpose: To evaluate the reliability of attenuation values in virtual non-contrast images (VNC) reconstructed from contrast-enhanced, dual-energy scans performed on a second-generation dual-energy CT scanner, compared to single-energy, non-contrast images (TNC). Materials and methods: Sixteen phantoms containing a mixture of contrast agent and water at different attenuations (0–1400 HU) were investigated on a Definition Flash-CT scanner using a single-energy scan at 120 kV and a DE-CT protocol (100 kV/SN140 kV). For clinical assessment, 86 patients who received a dual-phase CT, containing an unenhanced single-energy scan at 120 kV and a contrast enhanced (110 ml Iomeron 400 mg/ml; 4 ml/s) DE-CT (100 kV/SN140 kV) in an arterial (n = 43) or a venous phase, were retrospectively analyzed. Mean attenuation was measured within regions of interest of the phantoms and in different tissue types of the patients within the corresponding VNC and TNC images. Paired t-tests and Pearson correlation were used for statistical analysis. Results: For all phantoms, mean attenuation in VNC was 5.3 ± 18.4 HU, with respect to water. In 86 patients overall, 2637 regions were measured in TNC and VNC images, with a mean difference between TNC and VNC of −3.6 ± 8.3 HU. In 91.5% (n = 2412) of all cases, absolute differences between TNC and VNC were under 15 HU, and, in 75.3% (n = 1986), differences were under 10 HU. Conclusions: Second-generation dual-energy CT based VNC images provide attenuation values close to those of TNC. To avoid possible outliers multiple measurements are recommended especially for measurements in the spleen, the mesenteric fat, and the aorta.

  18. Virtual non-contrast in second-generation, dual-energy computed tomography: reliability of attenuation values.

    Science.gov (United States)

    Toepker, Michael; Moritz, Thomas; Krauss, Bernhard; Weber, Michael; Euller, Gordon; Mang, Thomas; Wolf, Florian; Herold, Christian J; Ringl, Helmut

    2012-03-01

    To evaluate the reliability of attenuation values in virtual non-contrast images (VNC) reconstructed from contrast-enhanced, dual-energy scans performed on a second-generation dual-energy CT scanner, compared to single-energy, non-contrast images (TNC). Sixteen phantoms containing a mixture of contrast agent and water at different attenuations (0-1400 HU) were investigated on a Definition Flash-CT scanner using a single-energy scan at 120 kV and a DE-CT protocol (100 kV/SN140 kV). For clinical assessment, 86 patients who received a dual-phase CT, containing an unenhanced single-energy scan at 120 kV and a contrast enhanced (110 ml Iomeron 400 mg/ml; 4 ml/s) DE-CT (100 kV/SN140 kV) in an arterial (n=43) or a venous phase, were retrospectively analyzed. Mean attenuation was measured within regions of interest of the phantoms and in different tissue types of the patients within the corresponding VNC and TNC images. Paired t-tests and Pearson correlation were used for statistical analysis. For all phantoms, mean attenuation in VNC was 5.3±18.4 HU, with respect to water. In 86 patients overall, 2637 regions were measured in TNC and VNC images, with a mean difference between TNC and VNC of -3.6±8.3 HU. In 91.5% (n=2412) of all cases, absolute differences between TNC and VNC were under 15 HU, and, in 75.3% (n=1986), differences were under 10 HU. Second-generation dual-energy CT based VNC images provide attenuation values close to those of TNC. To avoid possible outliers multiple measurements are recommended especially for measurements in the spleen, the mesenteric fat, and the aorta. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  19. LHC@Home: A Volunteer computing system for Massive Numerical Simulations of Beam Dynamics and High Energy Physics Events

    CERN Document Server

    Giovannozzi, M; Høimyr, N; Jones, PL; Karneyeu, A; Marquina, MA; McIntosh, E; Segal, B; Skands, P; Grey, F; Lombraña González, D; Rivkin, L; Zacharov, I

    2012-01-01

    Recently, the LHC@home system has been revived at CERN. It is a volunteer computing system based on BOINC which boosts the available CPU power in institutional computer centres with the help of individuals who donate the CPU time of their PCs. Currently two projects are hosted on the system, namely SixTrack and Test4Theory. The first is aimed at performing beam dynamics simulations, while the latter deals with the simulation of high-energy events. In this paper the details of the global system, as well as a discussion of the capabilities of each project, will be presented.

  20. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    Science.gov (United States)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating the numerical approximation error, computational-model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
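
    One standard way to quantify the numerical approximation error mentioned above is grid refinement with Richardson extrapolation; the short sketch below illustrates that idea only and is not necessarily the methodology proposed by the authors. The example values are invented.

```python
def richardson_error_estimate(f_coarse, f_fine, r, p):
    """Estimate the discretization error of the fine-grid solution from two
    grid levels: error ~ (f_fine - f_coarse) / (r**p - 1), where r is the
    refinement ratio and p the formal order of accuracy. A standard way to
    quantify numerical approximation error; not necessarily the specific
    methodology proposed in this abstract."""
    return (f_fine - f_coarse) / (r ** p - 1.0)

# Example: drag coefficient computed on grids with spacing h and h/2,
# second-order scheme (r = 2, p = 2); values are invented
c_d_coarse, c_d_fine = 1.092, 1.071
err = richardson_error_estimate(c_d_coarse, c_d_fine, r=2, p=2)
print(f"estimated error on fine grid: {err:+.4f}")
print(f"extrapolated value: {c_d_fine + err:.4f}")
```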