Research on assembly reliability control technology for computer numerical control machine tools
Yan Ran
2017-01-01
Full Text Available Nowadays, although more and more companies focus on improving the quality of computer numerical control machine tools, their reliability control remains an unsolved problem. Since assembly reliability control is very important to product reliability assurance in China, a new method for extracting key assembly processes of computer numerical control machine tools is proposed, based on the integration of quality function deployment; failure mode, effects, and criticality analysis; and fuzzy theory. Firstly, assembly faults and the assembly reliability control flow of computer numerical control machine tools are studied. Secondly, the three techniques are integrated to build a scientific extraction model, by which key assembly processes meeting both customer functional demands and the failure data distribution can be extracted; an example illustrates the correctness and effectiveness of the method. Finally, an assembly reliability monitoring system is established based on the key assembly processes to realize and simplify the method.
1983-09-01
A computer program designed to evaluate the reliability functions that result from the application of reliability analysis to the fatigue of aircraft structures.
Reliability computation from reliability block diagrams
Chelson, P. O.; Eckstein, E. Y.
1975-01-01
Computer program computes system reliability for very general class of reliability block diagrams. Four factors are considered in calculating probability of system success: active block redundancy, standby block redundancy, partial redundancy, and presence of equivalent blocks in the diagram.
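The redundancy cases mentioned above can be illustrated with elementary reliability arithmetic. The sketch below is ours, not the Chelson-Eckstein program (the function names are invented, and time-dependent standby redundancy is omitted); it shows series combination, active-parallel redundancy, and k-of-n partial redundancy:

```python
# Elementary reliability-block-diagram arithmetic (illustrative sketch only;
# function names are ours, not those of the original program).
from math import comb

def series(*r):
    """All blocks must work: R = product of the r_i."""
    p = 1.0
    for x in r:
        p *= x
    return p

def parallel(*r):
    """Active redundancy: the system fails only if every block fails."""
    q = 1.0
    for x in r:
        q *= (1.0 - x)
    return 1.0 - q

def k_of_n(k, n, r):
    """Partial redundancy: at least k of n identical blocks must work."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# Example: two redundant pumps (R = 0.9 each) in series with a controller (R = 0.99)
system = series(parallel(0.9, 0.9), 0.99)
```

Blocks compose: any series-parallel diagram reduces by repeatedly collapsing series and parallel groups into single equivalent blocks, which is essentially what such programs automate for more general diagrams.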
Enlightenment on Computer Network Reliability From Transportation Network Reliability
Hu Wenjun; Zhou Xizhao
2011-01-01
Referring to transportation network reliability problem, five new computer network reliability definitions are proposed and discussed. They are computer network connectivity reliability, computer network time reliability, computer network capacity reliability, computer network behavior reliability and computer network potential reliability. Finally strategies are suggested to enhance network reliability.
Preskill, J
1997-01-01
The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about 10^6 qubits, with a probability of error per quantum gate of order 10^{-6}, would be a formidable factoring engine. Even a smaller, less accurate quantum computer would be able to perform many useful tasks. (This paper is based on a talk presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18 December 1996.)
Reliability in the utility computing era: Towards reliable Fog computing
Madsen, Henrik; Burtschy, Bernard; Albeanu, G.
2013-01-01
This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm, a non-trivial extension of the Cloud, is considered, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of the grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.
Numerical computations with GPUs
Kindratenko, Volodymyr
2014-01-01
This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including solution of linear equations and FFT, for massively parallel GPU architectures. This volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to
Computing the Alexander Polynomial Numerically
Hansen, Mikael Sonne
2006-01-01
Explains how to construct the Alexander Matrix and how this can be used to compute the Alexander polynomial numerically.
Essential numerical computer methods
Johnson, Michael L
2010-01-01
The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last two decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface
Program Verification of Numerical Computation
Pantelis, Garry
2014-01-01
These notes outline a formal method for program verification of numerical computation. It forms the basis of the software package VPC in its initial phase of development. Much of the style of presentation is in the form of notes that outline the definitions and rules upon which VPC is based. The initial motivation of this project was to address some practical issues of computation, especially of numerically intensive programs that are commonplace in computer models. The project evolved into a...
Numerical Analysis of Multiscale Computations
Engquist, Björn; Tsai, Yen-Hsi R
2012-01-01
This book is a snapshot of current research in multiscale modeling, computations and applications. It covers fundamental mathematical theory, numerical algorithms as well as practical computational advice for analysing single and multiphysics models containing a variety of scales in time and space. Complex fluids, porous media flow and oscillatory dynamical systems are treated in some extra depth, as well as tools like analytical and numerical homogenization, and fast multipole method.
Introduction to numerical computation in Pascal
Dew, P M
1983-01-01
Our intention in this book is to cover the core material in numerical analysis normally taught to students on degree courses in computer science. The main emphasis is placed on the use of analysis and programming techniques to produce well-designed, reliable mathematical software. The treatment should be of interest also to students of mathematics, science and engineering who wish to learn how to write good programs for mathematical computations. The reader is assumed to have some acquaintance with Pascal programming. Aspects of Pascal particularly relevant to numerical computation are revised and developed in the first chapter. Although Pascal has some drawbacks for serious numerical work (for example, only one precision for real numbers), the language has major compensating advantages: it is a widely used teaching language that will be familiar to many students and it encourages the writing of clear, well structured programs. By careful use of structure and documentation, we have produced codes that we be...
Computer system reliability safety and usability
Dhillon, BS
2013-01-01
Computer systems have become an important element of the world economy, with billions of dollars spent each year on development, manufacture, operation, and maintenance. Combining coverage of computer system reliability, safety, usability, and other related topics into a single volume, Computer System Reliability: Safety and Usability eliminates the need to consult many different and diverse sources in the hunt for the information required to design better computer systems. After presenting introductory aspects of computer system reliability such as safety, usability-related facts and figures,
Reliability and Availability of Cloud Computing
Bauer, Eric
2012-01-01
A holistic approach to service reliability and availability of cloud computing Reliability and Availability of Cloud Computing provides IS/IT system and solution architects, developers, and engineers with the knowledge needed to assess the impact of virtualization and cloud computing on service reliability and availability. It reveals how to select the most appropriate design for reliability diligence to assure that user expectations are met. Organized in three parts (basics, risk analysis, and recommendations), this resource is accessible to readers of diverse backgrounds and experience le
Numerical and symbolic scientific computing
Langer, Ulrich
2011-01-01
The book presents state-of-the-art results and also includes articles pointing to future developments. Most of the articles center around the theme of linear partial differential equations. Major aspects are fast solvers in elastoplasticity, symbolic analysis for boundary problems, symbolic treatment of operators, computer algebra, and finite element methods, a symbolic approach to finite difference schemes, cylindrical algebraic decomposition and local Fourier analysis, and white noise analysis for stochastic partial differential equations. Further numerical-symbolic topics range from
Reliability of numerical wind tunnels for VAWT simulation
Raciti Castelli, M.; Masi, M.; Battisti, L.; Benini, E.; Brighenti, A.; Dossena, V.; Persico, G.
2016-09-01
Computational Fluid Dynamics (CFD) based on the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations has long been widely used to study vertical axis wind turbines (VAWTs). Following a comprehensive experimental survey of the wakes downwind of a troposkien-shaped rotor, a campaign of two-dimensional simulations is presented here, with the aim of assessing its reliability in reproducing the main features of the flow and identifying areas needing additional research. Starting from a well-consolidated turbulence model (k-ω SST) and an unstructured grid topology, the main simulation settings are manipulated to tackle rotating grids reproducing a VAWT operating in an open-jet wind tunnel. The dependence of the numerical predictions on the selected grid spacing is investigated, thus establishing the least refined grid size that is still capable of capturing relevant flow features, both integral quantities (rotor torque) and local ones (wake velocities).
Reliability history of the Apollo guidance computer
Hall, E. C.
1972-01-01
The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960's and the production was completed by 1969. During the development, production, and operational phase of the program, the computer has accumulated a very interesting history which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience in the Apollo guidance systems includes 17 computers which flew missions and another 26 flight type computers which are still in various phases of prelaunch activity including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment with equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods which have contributed to the high reliability of this computer.
Reliable computer systems design and evaluation
Siewiorek, Daniel
2014-01-01
Enhance your hardware/software reliability. Enhancement of system reliability has been a major concern of computer users and designers, and this major revision of the 1982 classic meets users' continuing need for practical information on this pressing topic. Included are case studies of reliable systems from manufacturers such as Tandem, Stratus, IBM, and Digital, as well as coverage of special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching processors.
Effect of Maintenance on Computer Network Reliability
Rima Oudjedi Damerdji
2014-08-01
Full Text Available At the time of the new information technologies, computer networks are inescapable in any large organization, where they form powerful internal means of communication. In a context of dependability, the reliability parameter proves fundamental for evaluating the performance of such systems. In this paper, we study the reliability evaluation of a real computer network through three reliability models. The computer network considered (a set of PCs and a server, interconnected) is located in a company established in the west of Algeria and dedicated to the production of ammonia and fertilizers. The results permit a comparison between the three models to determine the reliability model most appropriate to the studied network and thus contribute to improving the quality of the network. In order to anticipate system failures and improve the reliability and availability of the system, we must put in place an adequate and effective maintenance policy based on a new model of the most common competing risks in maintenance, the Alert-Delay model. At the end, dependability measures such as MTBF and reliability are calculated to assess the effectiveness of the maintenance strategies and thus validate the Alert-Delay model.
Computing Numerical Singular Points of Plane Algebraic Curves
Luo Zhong-xuan; Feng Er-bao; Hu Wen-yu
2012-01-01
Given an irreducible plane algebraic curve of degree d ≥ 3, we compute its numerical singular points, determine their multiplicities, and count the number of distinct tangents at each to decide whether the singular points are ordinary. The numerical procedures rely on computing numerical solutions of polynomial systems by the homotopy continuation method and a reliable method that calculates multiple roots of univariate polynomials accurately using standard machine precision. It is completely different from traditional symbolic computation and provides singular points and their related properties for some plane algebraic curves that the symbolic software Maple cannot work out. Without using multiprecision arithmetic, extensive numerical experiments show that our numerical procedures are accurate, efficient, and robust, even if the coefficients of the plane algebraic curves are inexact.
Numerical optimization with computational errors
Zaslavski, Alexander J
2016-01-01
This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...
A History of Computer Numerical Control.
Haggen, Gilbert L.
Computer numerical control (CNC) has evolved from the first significant counting method--the abacus. Babbage had perhaps the greatest impact on the development of modern day computers with his analytical engine. Hollerith's functioning machine with punched cards was used in tabulating the 1890 U.S. Census. In order for computers to become a…
A Numerical Simulation Approach for Reliability Evaluation of CFRP Composite
Liu, D. S.-C.; Jenab, K.
2013-02-01
Due to the superior mechanical properties of carbon fiber reinforced plastic (CFRP) materials, they are widely used in industries such as aircraft manufacturing. Aircraft manufacturers are switching from metal to composite structures while studying the reliability (R-value) of CFRP. In this study, a numerical simulation method is proposed to determine the reliability of Multiaxial Warp Knitted (MWK) textiles used to make CFRP composites. This method analyzes the distribution of carbon fiber angle misalignments from a chosen 0° direction, caused by the sewing process of the textile, and finds the R-value, a value between 0 and 1. The application of this method is demonstrated by an illustrative example.
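As a rough illustration of the idea (not the authors' method: the normal-distribution assumption, the 2° spread, and the 3° tolerance below are all invented for this sketch), an R-value in [0, 1] can be estimated as the fraction of simulated misalignment angles falling inside a tolerance band around the chosen 0° direction:

```python
# Hypothetical sketch: R-value as the fraction of simulated fiber
# misalignment angles within a tolerance band. The distribution, spread,
# and tolerance are assumptions, not values from the paper.
import random

def r_value(n=100_000, sigma_deg=2.0, tol_deg=3.0, seed=42):
    rng = random.Random(seed)
    # Misalignment modelled as zero-mean Gaussian noise around 0 degrees.
    within = sum(abs(rng.gauss(0.0, sigma_deg)) <= tol_deg for _ in range(n))
    return within / n

r = r_value()  # fraction of fibers within +/- 3 degrees of nominal
```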
Fluid dynamics theory, computation, and numerical simulation
Pozrikidis, C
2001-01-01
Fluid Dynamics: Theory, Computation, and Numerical Simulation is the only available book that extends the classical field of fluid dynamics into the realm of scientific computing in a way that is both comprehensive and accessible to the beginner. The theory of fluid dynamics, and the implementation of solution procedures into numerical algorithms, are discussed hand-in-hand and with reference to computer programming. This book is an accessible introduction to theoretical and computational fluid dynamics (CFD), written from a modern perspective that unifies theory and numerical practice. There are several additions and subject expansions in the Second Edition of Fluid Dynamics, including new Matlab and FORTRAN codes. Two distinguishing features of the discourse are: solution procedures and algorithms are developed immediately after problem formulations are presented, and numerical methods are introduced on a need-to-know basis and in increasing order of difficulty. Matlab codes are presented and discussed for a broad...
Fluid Dynamics Theory, Computation, and Numerical Simulation
Pozrikidis, Constantine
2009-01-01
Fluid Dynamics: Theory, Computation, and Numerical Simulation is the only available book that extends the classical field of fluid dynamics into the realm of scientific computing in a way that is both comprehensive and accessible to the beginner. The theory of fluid dynamics, and the implementation of solution procedures into numerical algorithms, are discussed hand-in-hand and with reference to computer programming. This book is an accessible introduction to theoretical and computational fluid dynamics (CFD), written from a modern perspective that unifies theory and numerical practice. There are several additions and subject expansions in the Second Edition of Fluid Dynamics, including new Matlab and FORTRAN codes. Two distinguishing features of the discourse are: solution procedures and algorithms are developed immediately after problem formulations are presented, and numerical methods are introduced on a need-to-know basis and in increasing order of difficulty. Matlab codes are presented and discussed for ...
Notes on numerical reliability of several statistical analysis programs
Landwehr, J.M.; Tasker, Gary D.
1999-01-01
This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
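A classic illustration of why such benchmarks matter (not the ANASTY benchmark itself) is the one-pass "textbook" variance formula, which can lose all accuracy on data with a small spread around a large mean, exactly the situation the NASTY data set provokes, while a two-pass formula does not:

```python
# Illustrative comparison of a numerically unreliable one-pass variance
# formula against a stable two-pass formula (not the ANASTY benchmark).

def var_one_pass(xs):
    """Textbook formula (sum(x^2) - (sum x)^2 / n) / (n - 1): cancels badly."""
    n = len(xs)
    s, s2 = 0.0, 0.0
    for x in xs:
        s += x
        s2 += x * x
    return (s2 - s * s / n) / (n - 1)

def var_two_pass(xs):
    """Compute the mean first, then sum squared deviations: stable."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

data = [1e9 + d for d in (1.0, 2.0, 3.0)]  # tiny spread on a huge offset
# The true sample variance of (1, 2, 3) is exactly 1.0; in double precision
# the one-pass formula loses it to cancellation, the two-pass formula does not.
```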
Probabilistic numerics and uncertainty in computations.
Hennig, Philipp; Osborne, Michael A; Girolami, Mark
2015-07-08
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
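A toy flavour of uncertainty-aware numerics, far simpler than the probabilistic inference the paper surveys, is a plain Monte Carlo integrator that returns its estimate together with a standard error:

```python
# Toy "numerics with uncertainty attached": Monte Carlo integration over
# [0, 1] returning (estimate, standard error). This is a plain frequentist
# estimator, not the Bayesian machinery of probabilistic numerics.
import math
import random

def mc_integral(f, n=200_000, seed=0):
    rng = random.Random(seed)
    ys = [f(rng.random()) for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, math.sqrt(var / n)  # estimate and its standard error

# Example: integral of x^2 over [0, 1], whose exact value is 1/3.
est, err = mc_integral(lambda x: x * x)
```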
Program Verification of Numerical Computation - Part 2
Pantelis, Garry
2014-01-01
These notes present some extensions of a formal method introduced in an earlier paper. The formal method is designed as a tool for program verification of numerical computation and forms the basis of the software package VPC. Included in the extensions that are presented here are disjunctions and methods for detecting non-computable programs. A more comprehensive list of the construction rules as higher order constructs is also presented.
Smooth structures on Eschenburg spaces: numerical computations
Butler, Leo T
2009-01-01
This paper numerically computes the topological and smooth invariants of Eschenburg spaces with small fourth cohomology group, following Kruggel's determination of the Kreck-Stolz invariants of Eschenburg spaces that satisfy condition C. The GNU GMP arbitrary-precision library is utilised.
Multiaxis Computer Numerical Control Internship Report
Rouse, Sharon M.
2012-01-01
(Purpose) The purpose of this paper was to examine the issues associated with bringing new technology into the classroom, in particular, the vocational/technical classroom. (Methodology) A new Haas 5 axis vertical Computer Numerical Control machining center was purchased to update the CNC machining curriculum at a community college and the process…
Fluid dynamics theory, computation, and numerical simulation
Pozrikidis, C
2017-01-01
This book provides an accessible introduction to the basic theory of fluid mechanics and computational fluid dynamics (CFD) from a modern perspective that unifies theory and numerical computation. Methods of scientific computing are introduced alongside theoretical analysis, and MATLAB® codes are presented and discussed for a broad range of topics: from interfacial shapes in hydrostatics, to vortex dynamics, to viscous flow, to turbulent flow, to panel methods for flow past airfoils. The third edition includes new topics, additional examples, solved and unsolved problems, and revised images. It adds more computational algorithms and MATLAB programs. It also incorporates discussion of the latest version of the fluid dynamics software library FDLIB, which is freely available online. FDLIB offers an extensive range of computer codes that demonstrate the implementation of elementary and advanced algorithms and provide an invaluable resource for research, teaching, classroom instruction, and self-study. This ...
Numerical Model based Reliability Estimation of Selective Laser Melting Process
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, the primary reason being the unreliability of the process. While ... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
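The Monte Carlo step can be sketched as follows; the melt-depth surrogate model and every number in it are invented for illustration (the paper uses a calibrated 3D finite-volume model, not a one-line formula):

```python
# Hedged sketch of Monte Carlo uncertainty propagation: draw uncertain
# process inputs, push them through a model, and report the fraction of
# runs whose output meets a specification. The surrogate model and all
# numeric values are assumptions made for this illustration.
import random

def melt_depth(power_w, speed_mm_s):
    return 0.04 * power_w / speed_mm_s  # toy surrogate, not the paper's model

def process_reliability(n=50_000, seed=1):
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        p = rng.gauss(200.0, 5.0)    # laser power with assumed uncertainty
        v = rng.gauss(100.0, 3.0)    # scan speed with assumed uncertainty
        if 0.07 <= melt_depth(p, v) <= 0.09:  # assumed acceptance band
            ok += 1
    return ok / n

r = process_reliability()  # estimated probability the output is in spec
```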
Reliable Damping of Free Surface Waves in Numerical Simulations
Peric, Robinson
2015-01-01
This paper generalizes existing approaches for free-surface wave damping via momentum sinks for flow simulations based on the Navier-Stokes equations. It is shown in 2D flow simulations that, to obtain reliable wave damping, the coefficients in the damping functions must be adjusted to the wave parameters. A scaling law for selecting these damping coefficients is presented, which enables similarity of the damping in model- and full-scale. The influence of the thickness of the damping layer, the wave steepness, the mesh fineness and the choice of the damping coefficients are examined. An efficient approach for estimating the optimal damping setup is presented. Results of 3D ship resistance computations show that the scaling laws apply to such simulations as well, so the damping coefficients should be adjusted for every simulation to ensure convergence of the solution in both model and full scale. Finally, practical recommendations for the setup of reliable damping in flow simulations with regular and irregular...
Adapting Inspection Data for Computer Numerical Control
Hutchison, E. E.
1986-01-01
Machining time for repetitive tasks reduced. Program converts measurements of stub post locations by coordinate-measuring machine into form used by numerical-control computer. Work time thus reduced by 10 to 15 minutes for each post. Since there are 600 such posts on each injector, time saved per injector is 100 to 150 hours. With modifications this approach applicable to machining of many precise holes on large machine frames and similar objects.
NUMERICAL VALIDATION OF COMPUTATIONAL MODEL FOR SHEET CAVITATING FLOWS
Anonymous
2006-01-01
A computational model for sheet cavitating flows is presented. The cavitation model is implemented in a viscous Navier-Stokes solver. The cavity interface and shape are determined using an iterative procedure matching the cavity surface to a constant-pressure boundary. The pressure distribution, as well as its gradient on the wall, is taken into account in updating the cavity shape iteratively. Numerical computations are performed for sheet cavitating flows at a range of cavitation numbers across the hemispheric headform/cylinder body with different grid numbers. The influence of the relaxation factor in the cavity shape updating scheme on the accuracy and reliability of the algorithm is studied through comparison with two other cavity shape updating numerical schemes. The results obtained are reasonable and the iterative procedure of cavity shape updating is quite stable, which demonstrates the superiority of the proposed cavitation model and algorithms.
An introduction to reliable quantum computation
Aliferis, Panos
2011-01-01
This is an introduction to software methods of quantum fault tolerance. Broadly speaking, these methods describe strategies for using the noisy hardware components of a quantum computer to perform computations while continually monitoring and actively correcting the hardware faults. We discuss parallels and differences with similar methods for ordinary digital computation, we discuss some of the noise models used in designing and analyzing noisy quantum circuits, and we sketch the logic of some of the central results in this area of research.
The Application of Visual Basic Computer Programming Language to Simulate Numerical Iterations
Abdulkadir Baba HASSAN; Matthew Sunday ABOLARIN; Onawola Hassan JIMOH
2006-01-01
This paper examines the application of the Visual Basic computer programming language to simulate numerical iterations, the merits of Visual Basic as a programming language, and the difficulties faced when solving numerical iterations analytically. The paper encourages the use of computer programming methods for the execution of numerical iterations and finally develops a reliable solution, using the Visual Basic package to write a program for some selected iteration problems.
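The kind of iteration such a program would run can be sketched as follows (in Python rather than Visual Basic, and with Newton's method chosen by us, since the paper does not list its selected iteration problems):

```python
# Sketch of a simulated numerical iteration: Newton's method for root
# finding. The choice of method and example is ours, for illustration.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/df(x) until the step is below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: root of x^2 - 2 starting from x0 = 1, i.e. sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```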
Reliability Distribution of Numerical Control Lathe Based on Correlation Analysis
Xiaoyan Qi; Guixiang Shen; Yingzhi Zhang; Shuguang Sun; Bingkun Chen
2016-01-01
Combining reliability distribution with correlation analysis, a new method is proposed for reliability distribution that considers the structural correlation and failure correlation of subsystems. Firstly, the subsystems are ranked by means of TOPSIS, which incorporates the usual considerations of reliability allocation; a Copula connecting function is then introduced to set up a distribution model based on structural correlation, failure correlation, and target correlation, and the reliability target regions of all subsystems are obtained using Matlab. In this method, not only the traditional distribution considerations but also correlation influences are taken into account, thereby supplementing information and optimizing the distribution.
Computer System Reliability Allocation Method and Supporting Tool
Anonymous
2001-01-01
This paper presents a computer system reliability allocation method based on the theory of statistics and Markov chains, which can be used to allocate reliability to subsystems, to hybrid systems, and to software modules. A relevant supporting tool built by us is also introduced.
The process group approach to reliable distributed computing
Birman, Kenneth P.
1992-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.
Reliability of computer memories in radiation environment
Fetahović Irfan S.
2016-01-01
Full Text Available The aim of this paper is to examine the radiation hardness of magnetic (Toshiba MK4007 GAL) and semiconductor (AT 27C010 EPROM and AT 28C010 EEPROM) computer memories. The magnetic memories were examined in a neutron radiation field, and the semiconductor memories in a gamma radiation field. The results show that magnetic memories have high radiation hardness. The semiconductor memories, on the other hand, proved significantly more sensitive, and radiation can seriously damage their functionality. [Projekat Ministarstva nauke Republike Srbije, br. 171007]
Numerical computations of explosions in gases
Chushkin, P. I.; Shurshalov, L. V.
The development and present-day state of numerical computations of explosions in gases are reviewed. In the first part, different one-dimensional cases are discussed: point explosion with counterpressure, blast-like expansion of volumes filled with a compressed hot gas, blast of charges of condensed explosive, and explosion processes in real high-temperature air, in combustible detonating media, and under the action of other physical-chemical factors. In the second part, devoted to two-dimensional flows, we consider explosion in a non-homogeneous atmosphere, blast of asymmetric charges, detonation in gas, and explosion modelling of some cosmic phenomena (solar flares, the Tunguska meteorite). The survey covers about 110 works, beginning with the first publications on the subject.
International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics
DEVELOPMENTS IN RELIABLE COMPUTING
1999-01-01
The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, which was partly caused by the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB, and by the József Attila University. Owing to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries. ...
Numerical methods for reliability and safety assessment multiscale and multiphysics systems
Hami, Abdelkhalak
2015-01-01
This book offers unique insight on structural safety and reliability by combining computational methods that address multiphysics problems, involving multiple equations describing different physical phenomena, and multiscale problems, involving discrete sub-problems that together describe important aspects of a system at multiple scales. The book examines a range of engineering domains and problems using dynamic analysis, nonlinear methods, error estimation, finite element analysis, and other computational techniques. This book also: · Introduces novel numerical methods · Illustrates new practical applications · Examines recent engineering applications · Presents up-to-date theoretical results · Offers perspective relevant to a wide audience, including teaching faculty/graduate students, researchers, and practicing engineers
Reliability and safety analysis of redundant vehicle management computer system
Shi Jian; Meng Yixuan; Wang Shaoping; Bian Mengmeng; Yan Dungong
2013-01-01
Redundant techniques are widely adopted in vehicle management computers (VMC) to ensure that the VMC has high reliability and safety. At the same time, this gives the VMC special characteristics, e.g., failure correlation, event simultaneity, and failure self-recovery. Accordingly, reliability and safety analysis of a redundant VMC system (RVMCS) becomes more difficult. To address the difficulties in RVMCS reliability modeling, this paper adopts generalized stochastic Petri nets to establish the reliability and safety models of the RVMCS. The paper then analyzes RVMCS operating states and potential threats to the flight control system. It is verified by simulation that the reliability of the VMC is not the product of hardware reliability and software reliability, and that interactions between hardware and software faults can obviously reduce the real reliability of the VMC. Furthermore, failure-undetected states and false-alarm states inevitably exist in the RVMCS owing to limited fault-monitoring coverage and the false-alarm probability of fault monitoring devices (FMD). An RVMCS operating in some failure-undetected states produces fatal threats to the safety of the flight control system, while operation in some false-alarm states obviously reduces the utility of the RVMCS. The results abstracted in this paper can guide reliable VMC and efficient FMD designs, and the methods adopted can also be used to analyze the reliability of other intelligent systems.
High-reliability computing for the smarter planet
Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION
2010-01-01
The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is increasing.
NINJA: Java for High Performance Numerical Computing
José E. Moreira
2002-01-01
Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.
Krause, M.; Camenzind, M.
2001-12-01
In the present paper, we examine the convergence behavior and inter-code reliability of astrophysical jet simulations in axial symmetry. We consider both pure hydrodynamic jets and jets with a dynamically significant magnetic field. The setups were chosen to match the setups of two other publications, and recomputed with the MHD code NIRVANA. We show that NIRVANA and the two other codes give comparable, but not identical results. We explain the differences by the different application of artificial viscosity in the three codes and numerical details, which can be summarized in a resolution effect, in the case without magnetic field: NIRVANA turns out to be a fair code of medium efficiency. It needs approximately twice the resolution as the code by Lind (Lind et al. 1989) and half the resolution as the code by Kössl (Kössl & Müller 1988). We find that some global properties of a hydrodynamical jet simulation, like e.g. the bow shock velocity, converge at 100 points per beam radius (ppb) with NIRVANA. The situation is quite different after switching on the toroidal magnetic field: in this case, global properties converge even at 10 ppb. In both cases, details of the inner jet structure and especially the terminal shock region are still insufficiently resolved, even at our highest resolution of 70 ppb in the magnetized case and 400 ppb for the pure hydrodynamic jet. The magnetized jet even suffers from a fatal retreat of the Mach disk towards the inflow boundary, which indicates that this simulation does not converge, in the end. This is also in definite disagreement with earlier simulations, and challenges further studies of the problem with other codes. In the case of our highest resolution simulation, we can report two new features: first, small scale Kelvin-Helmholtz instabilities are excited at the contact discontinuity next to the jet head. This slows down the development of the long wavelength Kelvin-Helmholtz instability and its turbulent cascade to smaller
Multi-hop routing mechanism for reliable sensor computing.
Chen, Jiann-Liang; Ma, Yi-Wei; Lai, Chia-Ping; Hu, Chia-Cheng; Huang, Yueh-Min
2009-01-01
Current research on routing in wireless sensor computing concentrates on increasing the service lifetime, enabling scalability for large numbers of sensors, and supporting fault tolerance for battery exhaustion and broken nodes. A sensor node is naturally exposed to various sources of unreliable communication channels and node failures. Sensor nodes have many failure modes, and each failure degrades the network performance. This work develops a novel mechanism, called the Reliable Routing Mechanism (RRM), based on a hybrid cluster-based routing protocol to specify the best reliable routing path for sensor computing. Table-driven intra-cluster routing and on-demand inter-cluster routing are combined by changing the relationship between clusters for sensor computing. Applying a reliable routing mechanism in sensor computing can improve routing reliability, maintain low packet loss, minimize management overhead, and save energy consumption. Simulation results indicate that the reliability of the proposed RRM mechanism is around 25% higher than that of the Dynamic Source Routing (DSR) and ad hoc On-demand Distance Vector routing (AODV) mechanisms.
The NUMLAB numerical laboratory for computation and visualisation
Maubach, J.; Telea, A.
2005-01-01
A large range of software environments addresses numerical simulation, interactive visualisation and computational steering. Most such environments are designed to cover a limited application domain, such as finite element or finite difference packages, symbolic or linear algebra computations or ima
Recent advances in computational structural reliability analysis methods
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-01-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
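The probability-of-failure notion this survey builds on can be made concrete with a crude Monte Carlo sketch for a single limit state, P(failure) = P(load > capacity); the normal capacity and load parameters below are invented for illustration, and practical methods (FORM/SORM, importance sampling) are far more efficient:

```python
import random

def failure_probability(n_samples=200_000, seed=7):
    """Monte Carlo estimate of P(failure) = P(load S > capacity R) for one
    limit state with independent normal capacity R ~ N(10, 1) and
    load S ~ N(6, 1.5); the exact answer here is Phi(-4/sqrt(3.25)) ~ 0.013."""
    rng = random.Random(seed)
    failures = sum(
        1
        for _ in range(n_samples)
        if rng.gauss(6.0, 1.5) > rng.gauss(10.0, 1.0)
    )
    return failures / n_samples

pf = failure_probability()
```

Unlike a safety factor, the computed probability quantifies the reliability level directly, which is the survey's central contrast.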
Research in applied mathematics, numerical analysis, and computer science
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.
Some Computer Algorithms to Implement a Reliability Shorthand.
1982-10-01
AD-A123 781: Some Computer Algorithms to Implement a Reliability Shorthand. Thesis, Naval Postgraduate School, Monterey, CA, by Sadan Gursel, October 1982; thesis advisor: J. D. Esary.
A Research Roadmap for Computation-Based Human Reliability Analysis
Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.
Modeling and Simulation Reliable Spacecraft On-Board Computing
Park, Nohpill
1999-01-01
The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities mentioned above, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast, and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay), and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before employing fault tolerance. Testing and fault tolerance strategies should be driven by accurate performance models (i.e., throughput, delay, reliability, and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a module for fault tolerance, all of which interact through a central graphical user interface.
Higher order numerical differentiation on the Infinity Computer
Sergeyev, Yaroslav D
2012-01-01
There exist many applications where it is necessary to approximate numerically derivatives of a function which is given by a computer procedure. In particular, all the fields of optimization have a special interest in such a kind of information. In this paper, a new way to do this is presented for a new kind of a computer -- the Infinity Computer -- able to work numerically with finite, infinite, and infinitesimal numbers. It is proved that the Infinity Computer is able to calculate values of derivatives of a higher order for a wide class of functions represented by computer procedures. It is shown that the ability to compute derivatives of arbitrary order automatically and accurate to working precision is an intrinsic property of the Infinity Computer related to its way of functioning. Numerical examples illustrating the new concepts and numerical tools are given.
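The Infinity Computer is a specialized arithmetic model; on conventional hardware, derivatives of a function given only as a procedure are usually approximated by finite differences, whose accuracy is limited by truncation and rounding error — exactly the limitation the paper's approach avoids. A standard central-difference sketch for contrast:

```python
import math

def central_diff(f, x, h=1e-5):
    """Second-order central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def second_diff(f, x, h=1e-4):
    """Central-difference estimate of f''(x); the step h trades off
    truncation error (O(h^2)) against rounding error (O(eps / h^2))."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

d1 = central_diff(math.sin, 0.0)   # approximates cos(0) = 1
d2 = second_diff(math.exp, 0.0)    # approximates exp(0) = 1
```

Working precision to arbitrary derivative order, as claimed for the Infinity Computer, is not attainable by shrinking h in this scheme, since rounding error grows as h decreases.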
Topics in numerical partial differential equations and scientific computing
2016-01-01
Numerical partial differential equations (PDEs) are an important part of numerical simulation, the third component of the modern methodology for science and engineering, besides the traditional theory and experiment. This volume contains papers that originated with the collaborative research of the teams that participated in the IMA Workshop for Women in Applied Mathematics: Numerical Partial Differential Equations and Scientific Computing in August 2014.
Introduction to numerical analysis and scientific computing
Nassif, Nabil
2013-01-01
Computer Number Systems and Floating Point Arithmetic: Introduction; Conversion from Base 10 to Base 2; Conversion from Base 2 to Base 10; Normalized Floating Point Systems; Floating Point Operations; Computing in a Floating Point System. Finding Roots of Real Single-Valued Functions: Introduction; How to Locate the Roots of a Function; The Bisection Method; Newton's Method; The Secant Method. Solving Systems of Linear Equations by Gaussian Elimination: Mathematical Preliminaries; Computer Storage for Matrices; Data Structures; Back Substitution for Upper Triangular Systems; Gauss Reduction; LU Decomposition; Polynomia...
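Of the root-finding methods listed in this table of contents, bisection is the simplest to state precisely; a minimal sketch (in Python rather than the book's notation):

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: f must change sign on [a, b]; halve the bracket until it
    is shorter than tol and return the midpoint as the root estimate."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:     # sign change in [a, m]: keep the left half
            b = m
        else:                  # sign change in [m, b]: keep the right half
            a, fa = m, f(m)
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 0.0, 2.0)   # converges to sqrt(2)
```

Each iteration halves the bracket, so the error bound after k steps is (b - a) / 2**k, slower but more robust than Newton's method.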
Evaluation of Network Reliability for Computer Networks with Multiple Sources
Yi-Kuei Lin
2012-01-01
Full Text Available Evaluating the reliability of a network with multiple sources and multiple sinks is a critical issue from the perspective of quality management. Owing to the unrealistic definition of paths in the network models of previous literature, existing models are not appropriate for real-world computer networks such as the Taiwan Advanced Research and Education Network (TWAREN). This paper proposes a modified stochastic-flow network model to evaluate the network reliability of a practical computer network with multiple sources, where data is transmitted through several light paths (LPs). Network reliability is defined as the probability of delivering a specified amount of data from the sources to the sink. It is taken as a performance index to measure the service level of TWAREN. This paper studies the network reliability of the international portion of TWAREN from two sources (Taipei and Hsinchu) to one sink (New York), passing through submarine and land-surface cables between Taiwan and the United States.
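For small networks, the two-terminal connectivity reliability underlying such models can be computed exactly by enumerating edge states; the sketch below assumes independent, equally reliable, undirected edges (a real TWAREN analysis needs the paper's stochastic-flow model, since capacities and multiple sources matter there):

```python
from itertools import product

def two_terminal_reliability(edges, p, source, sink):
    """Exact source-to-sink reliability of an undirected network whose
    edges fail independently: enumerate all 2**m up/down patterns and add
    up the probability of every pattern that still connects source to sink."""
    total = 0.0
    for state in product([True, False], repeat=len(edges)):
        prob = 1.0
        up = []
        for alive, edge in zip(state, edges):
            prob *= p if alive else (1 - p)
            if alive:
                up.append(edge)
        reached, frontier = {source}, [source]   # flood fill over surviving edges
        while frontier:
            n = frontier.pop()
            for u, v in up:
                for a, b in ((u, v), (v, u)):
                    if a == n and b not in reached:
                        reached.add(b)
                        frontier.append(b)
        if sink in reached:
            total += prob
    return total

# Two disjoint two-hop paths s-a-t and s-b-t, each edge up with probability 0.9:
# each path survives with 0.81, so R = 1 - (1 - 0.81)**2 = 0.9639
r = two_terminal_reliability([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")], 0.9, "s", "t")
```

Enumeration is exponential in the edge count, which is why practical network-reliability work relies on minimal-path or minimal-cut decompositions instead.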
A Sensitivity Analysis on Component Reliability from Fatigue Life Computations
1992-02-01
AD-A247 430, MTL TR 92-5: A Sensitivity Analysis on Component Reliability from Fatigue Life Computations. Donald M. Neal, William T. Matthews, Mark G. Vangel, and Trevor Rudalevige. Distribution: Defense Technical Information Center, Cameron Station, Alexandria, VA.
The reliability of tablet computers in depicting maxillofacial radiographic landmarks
2015-01-01
Purpose This study was performed to evaluate the reliability of the identification of anatomical landmarks in panoramic and lateral cephalometric radiographs on a standard medical grade picture archiving communication system (PACS) monitor and a tablet computer (iPad 5). Materials and Methods A total of 1000 radiographs, including 500 panoramic and 500 lateral cephalometric radiographs, were retrieved from the de-identified dataset of the archive of the Section of Oral and Maxillofacial Radio...
Numerical computation of homogeneous slope stability.
Xiao, Shuangshuang; Li, Kemin; Ding, Xiaohua; Liu, Tong
2015-01-01
To simplify the computational process of homogeneous slope stability analysis, improve computational accuracy, and find multiple potential slip surfaces of a complex geometric slope, this study utilized the limit equilibrium method to derive expression equations of overall and partial factors of safety. This study transformed the search for the minimum factor of safety (FOS) into a constrained nonlinear programming problem and applied an exhaustive method (EM) and a particle swarm optimization (PSO) algorithm to this problem. In simple slope examples, the computational results using the EM and PSO were close to those obtained using other methods. Compared to the EM, the PSO had a small computation error and a significantly shorter computation time. As a result, the PSO could precisely calculate the slope FOS with high efficiency. The example of the multistage slope analysis indicated that this slope had two potential slip surfaces. The factors of safety were 1.1182 and 1.1560, respectively. The differences between these and the minimum FOS (1.0759) were small, but the positions of the slip surfaces were completely different from the critical slip surface (CSS).
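The exhaustive method (EM) mentioned here amounts to a grid search over slip-surface parameters; a toy sketch of that baseline follows (the quadratic "FOS surface" is a stand-in for illustration, not the paper's limit-equilibrium expressions):

```python
import itertools

def exhaustive_min(f, ranges, steps=50):
    """Exhaustive (grid) search for the minimum of f over the box 'ranges':
    evaluate f at every grid node and keep the smallest value found."""
    axes = [
        [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
        for lo, hi in ranges
    ]
    return min((f(x, y), (x, y)) for x, y in itertools.product(*axes))

# Toy stand-in for a factor-of-safety surface over slip-circle parameters (x, y);
# the real objective would come from the limit-equilibrium equations.
def fos(x, y):
    return 1.1 + (x - 0.3) ** 2 + 0.5 * (y - 0.7) ** 2

best, point = exhaustive_min(fos, [(0.0, 1.0), (0.0, 1.0)])
```

The grid guarantees the global minimum only up to the grid spacing, and its cost grows as steps**dimensions, which is the inefficiency PSO addresses in the paper.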
Numerical characteristics of quantum computer simulation
Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.
2016-12-01
The simulation of quantum circuits is critically important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality, so the usage of modern high-performance parallel computation is relevant. As is well known, arbitrary quantum computation in the circuit model can be done with only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We investigate how the unique properties of quantum nature lead to the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while quantum entanglement leads to the problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability, and more specific dynamic characteristics) parts. The experimental part was carried out on the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good base for the research and testing of development methods for data-intensive parallel software, and the considered analysis methodology can be successfully used to improve algorithms in quantum information science.
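The core kernel described — applying a single-qubit gate across a 2**n state vector — pairs amplitudes that differ only in the target bit, which is what makes the operation embarrassingly parallel over the remaining qubits. A minimal pure-Python sketch (the bit ordering and names are illustrative):

```python
def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 unitary 'gate' to qubit 'target' (0 = most significant)
    of a 2**n_qubits state vector stored as a plain list of amplitudes."""
    out = state[:]
    stride = 1 << (n_qubits - 1 - target)   # index distance between |0> and |1> of target
    for base in range(len(state)):
        if base & stride:                   # visit each amplitude pair exactly once
            continue
        a0, a1 = state[base], state[base | stride]
        out[base] = gate[0][0] * a0 + gate[0][1] * a1
        out[base | stride] = gate[1][0] * a0 + gate[1][1] * a1
    return out

# Hadamard on qubit 0 of |00> gives (|00> + |10>)/sqrt(2)
s = 2 ** -0.5
H = [[s, s], [s, -s]]
out = apply_single_qubit_gate([1.0, 0.0, 0.0, 0.0], H, 0, 2)
```

Each pair update is independent of all others, but for entangled states the paired indices can be far apart in memory (stride 2**k), which is the locality problem the paper analyzes.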
Wimmer, Thomas, E-mail: thomas.wimmer@medunigraz.at; Srimathveeravalli, Govindarajan; Gutta, Narendra [Memorial Sloan-Kettering Cancer Center, Interventional Radiology Service, Department of Radiology (United States); Ezell, Paula C. [The Rockefeller University, Research Animal Resource Center, Memorial Sloan-Kettering Cancer Center, Weill Cornell Medical College (United States); Monette, Sebastien [The Rockefeller University, Laboratory of Comparative Pathology, Memorial Sloan-Kettering Cancer Center, Weill Cornell Medical College (United States); Maybody, Majid; Erinjery, Joseph P.; Durack, Jeremy C. [Memorial Sloan-Kettering Cancer Center, Interventional Radiology Service, Department of Radiology (United States); Coleman, Jonathan A. [Memorial Sloan-Kettering Cancer Center, Urology Service, Department of Surgery (United States); Solomon, Stephen B. [Memorial Sloan-Kettering Cancer Center, Interventional Radiology Service, Department of Radiology (United States)
2015-02-15
Purpose: Numerical simulations are used for treatment planning in clinical applications of irreversible electroporation (IRE) to determine ablation size and shape. To assess the reliability of simulations for treatment planning, we compared simulation results with empiric outcomes of renal IRE using computed tomography (CT) and histology in an animal model. Methods: The ablation size and shape for six different IRE parameter sets (70–90 pulses, 2,000–2,700 V, 70–100 µs) for monopolar and bipolar electrodes was simulated using a numerical model. Employing these treatment parameters, 35 CT-guided IRE ablations were created in both kidneys of six pigs and followed up with CT immediately and after 24 h. Histopathology was analyzed from postablation day 1. Results: Ablation zones on CT measured 81 ± 18 % (day 0, p ≤ 0.05) and 115 ± 18 % (day 1, p ≤ 0.09) of the simulated size for monopolar electrodes, and 190 ± 33 % (day 0, p ≤ 0.001) and 234 ± 12 % (day 1, p ≤ 0.0001) for bipolar electrodes. Histopathology indicated smaller ablation zones than simulated (71 ± 41 %, p ≤ 0.047) and measured on CT (47 ± 16 %, p ≤ 0.005), with complete ablation of kidney parenchyma within the central zone and incomplete ablation in the periphery. Conclusion: Both numerical simulations for planning renal IRE and CT measurements may overestimate the size of ablation compared to histology, and ablation effects may be incomplete in the periphery.
Algorithmic mechanisms for reliable crowdsourcing computation under collusion.
Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A; Pareja, Daniel
2015-01-01
We consider a computing system where a master processor assigns a task for execution to worker processors that may collude. We model the workers' decision of whether to comply (compute the task) or not (return a bogus result to save the computation cost) as a game among workers. That is, we assume that workers are rational in a game-theoretic sense. We identify analytically the parameter conditions for a unique Nash Equilibrium where the master obtains the correct result. We also evaluate experimentally mixed equilibria aiming to attain better reliability-profit trade-offs. For a wide range of parameter values that may be used in practice, our simulations show that, in fact, both master and workers are better off using a pure equilibrium where no worker cheats, even under collusion, and even for colluding behaviors that involve deviating from the game.
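The equilibrium check at the heart of this setup can be illustrated with a toy two-worker game. The payoff values (reward R, computing cost C, fine F, verification probability P_V) are hypothetical stand-ins, not the paper's model; the sketch only shows how one verifies that a pure profile is a Nash equilibrium by testing unilateral deviations.

```python
from itertools import product

# Toy 2-worker game (hypothetical payoffs, not the paper's exact model).
# Each worker either complies (computes honestly, paying cost C) or cheats.
# The master verifies answers with probability P_V; a caught cheater is
# fined F, while an honest worker earns reward R.
R, C, F, P_V = 10.0, 3.0, 20.0, 0.5

def payoff(own, other):
    """Expected payoff of one worker given both strategies ('comply'/'cheat')."""
    if own == "comply":
        return R - C
    # A cheater escapes the fine only if the master does not verify.
    return (1 - P_V) * R - P_V * F

def is_nash(profile):
    """A profile is a Nash equilibrium if no worker gains by deviating alone."""
    for i, s in enumerate(profile):
        other = profile[1 - i]
        alt = "cheat" if s == "comply" else "comply"
        if payoff(alt, other) > payoff(s, other):
            return False
    return True

equilibria = [p for p in product(["comply", "cheat"], repeat=2) if is_nash(p)]
print(equilibria)  # [('comply', 'comply')]
```

With these numbers complying strictly dominates cheating, so the only pure equilibrium is the one where the master obtains the correct result; shrinking F or P_V flips the outcome.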
Diagnostic reliability of MMPI-2 computer-based test interpretations.
Pant, Hina; McCabe, Brian J; Deskovitz, Mark A; Weed, Nathan C; Williams, John E
2014-09-01
Reflecting the common use of the MMPI-2 to provide diagnostic considerations, computer-based test interpretations (CBTIs) also typically offer diagnostic suggestions. However, these diagnostic suggestions can vary widely across different CBTI programs, even for identical MMPI-2 profiles. The present study evaluated the diagnostic reliability of 6 commercially available CBTIs using a 20-item Q-sort task developed for this study. Four raters each sorted diagnostic classifications based on these 6 CBTI reports for 20 MMPI-2 profiles. Two questions were addressed. First, do users of CBTIs understand the diagnostic information contained within the reports similarly? Overall, diagnostic sorts of the CBTIs showed moderate inter-interpreter diagnostic reliability (mean r = .56), with sorts for the 1/2/3 profile showing the highest inter-interpreter diagnostic reliability (mean r = .67). Second, do different CBTI programs vary with respect to diagnostic suggestions? Diagnostic sorts of the CBTIs had a mean inter-CBTI diagnostic reliability of r = .56, indicating moderate but not strong agreement across CBTIs in terms of diagnostic suggestions. The strongest inter-CBTI diagnostic agreement was found for sorts of the 1/2/3 profile CBTIs (mean r = .71). Limitations and future directions are discussed.
Numerical computation for teaching quantum statistics
Price, Tyson; Swendsen, Robert H.
2013-11-01
The study of ideal quantum gases reveals surprising quantum effects that can be observed in macroscopic systems. The properties of bosons are particularly unusual because a macroscopic number of particles can occupy a single quantum state. We describe a computational approach that supplements the usual analytic derivations applicable in the thermodynamic limit. The approach involves directly summing over the quantum states for finite systems and avoids the need for doing difficult integrals. The results display the unusual behavior of quantum gases even for relatively small systems.
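The direct summation over quantum states that the authors describe can be sketched for an assumed single-particle spectrum e_k = k (an illustrative choice, not the paper's system); for small boson systems the many-body states are just unordered multisets of occupied levels.

```python
import math
from itertools import combinations_with_replacement

def partition_function(n_particles, n_levels, beta):
    """Canonical partition function of ideal bosons by direct summation.

    Single-particle energies are taken as e_k = k (hypothetical spectrum).
    Each many-body state of indistinguishable bosons is an unordered
    multiset of occupied levels, enumerated exactly for small systems.
    """
    z = 0.0
    for config in combinations_with_replacement(range(n_levels), n_particles):
        z += math.exp(-beta * sum(config))
    return z

def mean_energy(n_particles, n_levels, beta):
    """Thermal average <E> = sum E exp(-beta*E) / Z over the same states."""
    z = num = 0.0
    for config in combinations_with_replacement(range(n_levels), n_particles):
        e = sum(config)
        w = math.exp(-beta * e)
        z += w
        num += e * w
    return num / z

print(mean_energy(3, 10, 1.0))
```

No integrals are required; the trade-off is that the number of enumerated states grows combinatorially, which is why the approach is limited to the small finite systems the abstract mentions.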
Numerical computation of special functions with applications to physics
Motsepe, K
2008-09-01
Full Text Available Students of mathematical physics, engineering, natural and biological sciences sometimes need to use special functions that are not found in ordinary mathematical software. In this paper a simple universal numerical algorithm is developed to compute...
Computation and Visualisation in the NumLab Numerical Laboratory
Maubach, J.M.L.; Telea, A.C.
2002-01-01
A large range of software environments addresses numerical simulation, interactive visualisation and computational steering. Most such environments are designed to cover a limited application domain, such as Finite Elements, Finite Differences, or image processing. Their software structure rarely
Numerical computations and mathematical modelling with infinite and infinitesimal numbers
Sergeyev, Yaroslav D
2012-01-01
Traditional computers work with finite numbers. Situations where the usage of infinite or infinitesimal quantities is required are studied mainly theoretically. In this paper, a recently introduced computational methodology (that is not related to the non-standard analysis) is used to work with finite, infinite, and infinitesimal numbers \\textit{numerically}. This can be done on a new kind of a computer - the Infinity Computer - able to work with all these types of numbers. The new computational tools both give possibilities to execute computations of a new type and open new horizons for creating new mathematical models where a computational usage of infinite and/or infinitesimal numbers can be useful. A number of numerical examples showing the potential of the new approach and dealing with divergent series, limits, probability theory, linear algebra, and calculation of volumes of objects consisting of parts of different dimensions are given.
Numerical aspects for efficient welding computational mechanics
Aburuga Tarek Kh.S.
2014-01-01
Full Text Available The effect of the residual stresses and strains is one of the most important parameter in the structure integrity assessment. A finite element model is constructed in order to simulate the multi passes mismatched submerged arc welding SAW which used in the welded tensile test specimen. Sequentially coupled thermal mechanical analysis is done by using ABAQUS software for calculating the residual stresses and distortion due to welding. In this work, three main issues were studied in order to reduce the time consuming during welding simulation which is the major problem in the computational welding mechanics (CWM. The first issue is dimensionality of the problem. Both two- and three-dimensional models are constructed for the same analysis type, shell element for two dimension simulation shows good performance comparing with brick element. The conventional method to calculate residual stress is by using implicit scheme that because of the welding and cooling time is relatively high. In this work, the author shows that it could use the explicit scheme with the mass scaling technique, and time consuming during the analysis will be reduced very efficiently. By using this new technique, it will be possible to simulate relatively large three dimensional structures.
Reliability of an interactive computer program for advance care planning.
Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J
2012-06-01
Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explains health conditions and interventions that commonly involve life-or-death decisions, helps them articulate their values and goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values and preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson Formula 20 [KR-20] = 0.83-0.95 and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ = 1), high for QoL (Pearson's correlation coefficient = 0.83), but lower for Specific Wishes (Pearson's correlation coefficient = 0.57). MYWK generates an AD whose General Wishes and QoL (but not Specific Wishes) statements remain consistent over time.
Peer-to-Peer Secure Multi-Party Numerical Computation
Bickson, Danny; Dolev, Danny; Pinkas, Benny
2008-01-01
We propose an efficient framework for enabling secure multi-party numerical computations in a Peer-to-Peer network. This problem arises in a range of applications such as collaborative filtering, distributed computation of trust and reputation, monitoring and numerous other tasks, where the computing nodes would like to preserve the privacy of their inputs while performing a joint computation of a certain function. Although there is a rich literature in the field of distributed systems security concerning secure multi-party computation, in practice it is hard to deploy those methods in very large scale Peer-to-Peer networks. In this work, we examine several possible approaches and discuss their feasibility. Among the possible approaches, we identify a single approach which is both scalable and theoretically secure. An additional novel contribution is that we show how to compute the neighborhood based collaborative filtering, a state-of-the-art collaborative filtering algorithm, winner of the Netflix progress ...
The reliability of tablet computers in depicting maxillofacial radiographic landmarks
Tadinada, Aditya; Mahdian, Mina; Sheth, Sonam; Chandhoke, Taranpreet K.; Gopalakrishna, Aadarsh; Potluri, Anitha; Yadav, Sumit [University of Connecticut School of Dental Medicine, Farmington (United States)
2015-09-15
This study was performed to evaluate the reliability of the identification of anatomical landmarks in panoramic and lateral cephalometric radiographs on a standard medical-grade picture archiving and communication system (PACS) monitor and a tablet computer (iPad 5). A total of 1000 radiographs, including 500 panoramic and 500 lateral cephalometric radiographs, were retrieved from the de-identified dataset of the archive of the Section of Oral and Maxillofacial Radiology of the University of Connecticut School of Dental Medicine. Major radiographic anatomical landmarks were independently reviewed by two examiners on both displays. The examiners initially reviewed ten panoramic and ten lateral cephalometric radiographs using each imaging system, in order to verify interoperator agreement in landmark identification. The images were scored on a four-point scale reflecting the diagnostic image quality and exposure level of the images. Statistical analysis showed no significant difference between the two displays regarding the visibility and clarity of the landmarks in either the panoramic or cephalometric radiographs. Tablet computers can reliably show anatomical landmarks in panoramic and lateral cephalometric radiographs.
Numerical Computation of High Dimensional Solitons Via Drboux Transformation
Zixiang Zhou
1997-01-01
The Darboux transformation gives explicit soliton solutions of nonlinear partial differential equations. Using numerical computation in each step of constructing the Darboux transformation, one can obtain the graphs of the solitons in practice. In n dimensions (n ≥ 3), this method greatly increases the speed and reduces the memory usage of computation compared with software for algebraic computation. A technical problem concerning floating-point overflow is discussed.
Chelson, P. O.; Eckstein, R. E.
1971-01-01
The computer program listing for the reliability block diagram computation program described in Reliability Computation From Reliability Block Diagrams is given. The program is written in FORTRAN IV and currently runs on a Univac 1108. Each subroutine contains a description of its function.
Computational uncertainty principle in nonlinear ordinary differential equations--Numerical results
[Anonymous]
2000-01-01
In a majority of cases of long-time numerical integration for initial-value problems, round-off error has received little attention. Using twenty-nine numerical methods, the influence of round-off error on numerical solutions is studied through a large number of numerical experiments. We find that, in finite machine precision, the solution of nonlinear ordinary differential equations (ODEs) exhibits a strong dependence on machine precision (a new kind of dependence, distinct from the sensitive dependence on initial conditions), on the maximally effective computation time (MECT), and on the optimal stepsize (OS). An optimal search method for evaluating MECT and OS under finite machine precision is presented. The relationships between MECT, OS, the order of the numerical method, and machine precision are found. Numerical results show that round-off error plays a significant role in these phenomena. Moreover, we find two universal relations which are independent of the type of ODE, the initial values, and the numerical scheme. Based on the results of the numerical experiments, we present a computational uncertainty principle, which poses a great challenge to the reliability of long-time numerical integration for nonlinear ODEs.
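The dependence on machine precision can be demonstrated without special hardware by running the same explicit-Euler integration of a chaotic ODE twice: once in double precision and once with every stored component rounded to IEEE single precision after each step. The Lorenz system and step size below are illustrative choices, not the paper's experiments.

```python
import struct

def f32(x):
    """Round a double to IEEE-754 single precision (simulated low precision)."""
    return struct.unpack("f", struct.pack("f", x))[0]

def lorenz_step(state, h, rnd=lambda v: v):
    """One explicit-Euler step of the Lorenz system; `rnd` optionally
    rounds every stored component, mimicking a lower machine precision."""
    x, y, z = state
    dx = 10.0 * (y - x)
    dy = x * (28.0 - z) - y
    dz = x * y - (8.0 / 3.0) * z
    return (rnd(x + h * dx), rnd(y + h * dy), rnd(z + h * dz))

hi_prec = lo_prec = (1.0, 1.0, 1.0)   # identical initial conditions
h, max_gap = 0.01, 0.0
for _ in range(5000):
    hi_prec = lorenz_step(hi_prec, h)
    lo_prec = lorenz_step(lo_prec, h, f32)
    max_gap = max(max_gap, abs(hi_prec[0] - lo_prec[0]))
print(max_gap)
```

Both runs start from the same state and use the same scheme; round-off alone drives them apart until the gap saturates at the scale of the attractor, which is the effect that bounds the maximally effective computation time.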
Numerical methods design, analysis, and computer implementation of algorithms
Greenbaum, Anne
2012-01-01
Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or c
Numerical Methods for Stochastic Computations A Spectral Method Approach
Xiu, Dongbin
2010-01-01
The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods of high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
A textbook of computer based numerical and statistical techniques
Jaiswal, AK
2009-01-01
About the Book: Application of Numerical Analysis has become an integral part of the life of all the modern engineers and scientists. The contents of this book covers both the introductory topics and the more advanced topics such as partial differential equations. This book is different from many other books in a number of ways. Salient Features: Mathematical derivation of each method is given to build the students understanding of numerical analysis. A variety of solved examples are given. Computer programs for almost all numerical methods discussed have been presented in `C` langu
Computer-Numerical-Control and the EMCO Compact 5 Lathe.
Mullen, Frank M.
This laboratory manual is intended for use in teaching computer-numerical-control (CNC) programming using the Emco Maier Compact 5 Lathe. Developed for use at the postsecondary level, this material contains a short introduction to CNC machine tools. This section covers CNC programs, CNC machine axes, and CNC coordinate systems. The following…
Numerical computation of a nonlocal double obstacle problem
Bhowmik, S.K.
2009-01-01
We consider a nonlocal double obstacle problem. This type of problems comes in various biological and physical situations, e.g., in phase transition models. We focus on numerical approximations and fast computation of such a model. We start with considering piece-wise basis functions for spatial app
Introduction to Numerical Computation - analysis and Matlab illustrations
Elden, Lars; Wittmeyer-Koch, Linde; Nielsen, Hans Bruun
In a modern programming environment like e.g. MATLAB it is possible by simple commands to perform advanced calculations on a personal computer. In order to use such a powerful tool efficiently it is necessary to have an overview of available numerical methods and algorithms and to know about their properties. The book describes and analyses numerical methods for error analysis, differentiation, integration, interpolation and approximation, and the solution of nonlinear equations, linear systems of algebraic equations and systems of ordinary differential equations. Principles and algorithms are illustrated by examples in MATLAB.
Methodology of Numerical Computations with Infinities and Infinitesimals
Sergeyev, Yaroslav D
2012-01-01
A recently developed computational methodology for executing numerical calculations with infinities and infinitesimals is described in this paper. The developed approach has a pronounced applied character and is based on the principle `The part is less than the whole' introduced by Ancient Greeks. This principle is used with respect to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The point of view on infinities and infinitesimals (and in general, on Mathematics) presented in this paper uses strongly physical ideas emphasizing interrelations holding between a mathematical object under the observation and tools used for this observation. It is shown how a new numeral system allowing one to express different infinite and infinitesimal quantities in a unique framework can be used for theoretical and computational purposes. Numerous examples dealing with infinite sets, divergent series, limits, and probability theory are given.
Numerical methods for solving ODEs on the infinity computer
Mazzia, F.; Sergeyev, Ya. D.; Iavernaro, F.; Amodio, P.; Mukhametzhanov, M. S.
2016-10-01
New algorithms for the numerical solution of Ordinary Differential Equations (ODEs) with initial conditions are proposed. They are designed for working on a new kind of a supercomputer - the Infinity Computer - that is able to deal numerically with finite, infinite and infinitesimal numbers. Due to this fact, the Infinity Computer allows one to calculate the exact derivatives of functions using infinitesimal values of the stepsize. As a consequence, the new methods are able to work with the exact values of the derivatives, instead of their approximations. Within this context, variants of one-step multi-point methods closely related to the classical Taylor formulae and to the Obrechkoff methods are considered. To get numerical evidence of the theoretical results, test problems are solved by means of the new methods and the results compared with the performance of classical methods.
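The Infinity Computer's exact derivatives via infinitesimal stepsizes are not available on ordinary hardware, but forward-mode automatic differentiation with dual numbers achieves the same effect for the derivatives a low-order Taylor method needs. The sketch below is my construction, not the authors' algorithm: a second-order Taylor step for an autonomous scalar ODE.

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 == 0: forward-mode exact derivatives.

    A finite stand-in for evaluating f at an infinitesimally perturbed
    point: f(Dual(a, 1)) returns f(a) + f'(a)*eps with no truncation error.
    """
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def taylor2_step(f, y, h):
    """Second-order Taylor step for the autonomous scalar ODE y' = f(y):
    y + h*f(y) + (h**2/2)*f'(y)*f(y), with f' computed exactly via duals."""
    fy = f(Dual(y, 1.0))            # fy.a = f(y), fy.b = f'(y)
    return y + h * fy.a + 0.5 * h * h * fy.b * fy.a

# y' = y, y(0) = 1: the exact solution is e**t.
y, h = 1.0, 0.1
for _ in range(10):
    y = taylor2_step(lambda u: u, y, h)
print(y, math.e)  # agree to within about 0.5%
```

Because f' is exact rather than a finite-difference approximation, the only error left is the Taylor truncation error, which mirrors the advantage the abstract claims for the Infinity Computer methods.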
Reliability analysis and design of on-board computer system for small stereo mapping satellite
马秀娟; 曹喜滨; 马兴瑞
2002-01-01
The on-board computer system for a small satellite is required to be highly reliable, light in weight, small in volume, and low in power consumption. This paper describes an on-board computer system that combines the advantages of centralized and distributed systems, analyzes its reliability, and outlines the key techniques used to improve that reliability.
Pratibha Singh
2014-08-01
Full Text Available A reliability evaluation system for the recognition of Devanagari numerals is proposed in this paper. Reliability of classification is very important in optical character recognition applications. Since outliers and ambiguity may degrade the performance of a recognition system, a rejection measure is required for reliable recognition of the pattern. For each character image, pre-processing steps such as normalization, binarization, noise removal, and boundary extraction are performed. After computing the bounding box, features are extracted for each partition of the numeral image. Features are calculated using three different zoning methods. A directional feature is considered, obtained from chain code and gradient direction quantization of the orientations. The zoning is, firstly, made up of uniform partitions and, secondly, of non-uniform compartments based on pixel density. For classification, a 1-nearest-neighbor classifier, a quadratic Bayes classifier, and a linear Bayes classifier are chosen as base classifiers. The base classifiers are combined using four decision combination rules, namely maximum, median, average, and majority voting. The framework is used to test the reliability of the recognition system against ambiguity.
Grid computing for the numerical reconstruction of digital holograms
Nebrensky, J. J.; Hobson, P. R.; Fryer, P. C.
2005-02-01
Digital holography has the potential to greatly extend holography's applications and move it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms while numerical reconstruction within a computer eliminates the need for chemical processing and readily allows further processing and visualization of the holographic image. The steady increase in sensor pixel count and resolution leads to the possibilities of larger sample volumes and of higher spatial resolution sampling, enabling the practical use of digital off-axis holography. However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms to an extent where the reconstruction process for a single depth slice takes significantly longer than the capture process for each single hologram. Grid computing -- a recent innovation in large-scale distributed processing -- provides a convenient means of harnessing significant computing resources in ad-hoc fashion that might match the field deployment of a holographic instrument. In this paper we consider the computational needs of digital holography and discuss the deployment of numerical reconstruction software over an existing Grid testbed. The analysis of marine organisms is used as an exemplar for work flow and job execution of in-line digital holography.
Malinowski, Jacek
2004-05-01
A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)
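As a baseline for the tree-based algorithm described above, system reliability can also be computed from minimal path sets directly by inclusion-exclusion. This classical formula (not the paper's tree/stack algorithm) is exponential in the number of paths, which is exactly the cost that more structured algorithms try to reduce.

```python
from itertools import combinations

def system_reliability(minimal_paths, p):
    """System reliability from minimal path sets by inclusion-exclusion.

    minimal_paths: list of sets of component ids; the system works iff
    every component of at least one minimal path works.
    p: dict mapping component id -> working probability (components
    assumed independent, as in the abstract).
    """
    total = 0.0
    n = len(minimal_paths)
    for r in range(1, n + 1):
        sign = (-1) ** (r + 1)
        for subset in combinations(minimal_paths, r):
            union = set().union(*subset)  # all components these paths need
            prob = 1.0
            for c in union:
                prob *= p[c]
            total += sign * prob
    return total

# Two paths: a series pair {1,2} in parallel with a single component {3}.
print(system_reliability([{1, 2}, {3}], {1: 0.9, 2: 0.9, 3: 0.9}))  # ≈ 0.981
```

The failure probability is obtained as one minus this value, or dually by the same formula over minimal cut sets with component failure probabilities.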
Numeric computation and statistical data analysis on the Java platform
Chekanov, Sergei V
2016-01-01
Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...
Numerical computation of constant mean curvature surfaces using finite elements
Metzger, J
2004-01-01
This paper presents a method for computing two-dimensional constant mean curvature surfaces. The method uses the variational structure of the problem to implement an efficient algorithm. In principle it is a flow-like method, in that it is linked to the gradient flow for the area functional, which gives reliable convergence properties. Behind it, a preconditioned conjugate gradient method operates, giving the speed of a direct elliptic multigrid method.
The validity and reliability of computed tomography orbital volume measurements.
Diaconu, Silviu C; Dreizin, David; Uluer, Mehmet; Mossop, Corey; Grant, Michael P; Nam, Arthur J
2017-09-01
Orbital volume calculations allow surgeons to design patient-specific implants to correct volume deficits. It is estimated that changes as small as 1 ml in orbital volume can lead to enophthalmos. Awareness of the limitations of orbital volume computed tomography (CT) measurements is critical to differentiate between true volume differences and measurement error. The aim of this study is to analyze the validity and reliability of CT orbital volume measurements. A total of 12 cadaver orbits were scanned using a standard CT maxillofacial protocol. Each orbit was dissected to isolate the extraocular muscles, fatty tissue, and globe. The empty bony orbital cavity was then filled with sculpting clay. The volumes of the muscle, fat, globe, and clay (i.e., bony orbital cavity) were then individually measured via water displacement. The CT-derived volumes, measured by manual segmentation, were compared to the direct measurements to determine validity. The difference between CT orbital volume measurements and physically measured volumes is not negligible. Globe volumes have the highest agreement with 95% of differences between -0.5 and 0.5 ml, bony volumes are more likely to be overestimated with 95% of differences between -1.8 and 2.6 ml, whereas extraocular muscle volumes have poor validity and should be interpreted with caution.
Numerical computation of gravitational field for general axisymmetric objects
Fukushima, Toshio
2016-10-01
We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (i) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (ii) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (i) finite uniform objects covering rhombic spindles and circular toroids, (ii) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (iii) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
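The core of such a method, numerical quadrature of the ring potential, can be sketched in a few lines. The composite trapezoidal rule below substitutes for the authors' double exponential rules (for this smooth periodic integrand it converges spectrally fast), and the on-axis closed form -GM/sqrt(R**2 + z**2) provides a check.

```python
import math

def ring_potential(rho, z, ring_radius=1.0, gm=1.0, n=2000):
    """Gravitational potential of a uniform circular ring by quadrature.

    Phi(rho, z) = -GM/(2*pi) * Integral_0^{2*pi} dphi /
                  sqrt(rho**2 + R**2 + z**2 - 2*rho*R*cos(phi)),
    evaluated with the composite trapezoidal rule over the period
    (a simpler stand-in for the double exponential rules in the paper).
    """
    total = 0.0
    for k in range(n):
        phi = 2.0 * math.pi * k / n
        d2 = (rho * rho + ring_radius * ring_radius + z * z
              - 2.0 * rho * ring_radius * math.cos(phi))
        total += 1.0 / math.sqrt(d2)
    return -gm * total / n

# On the symmetry axis the closed form is -GM / sqrt(R**2 + z**2).
print(ring_potential(0.0, 0.5), -1.0 / math.sqrt(1.25))  # both ≈ -0.8944
```

A general axisymmetric body is then handled by integrating this ring kernel over the density distribution, and the acceleration follows by numerical differentiation of the potential, as the abstract describes with Ridder's algorithm.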
Integrating Numerical Computation into the Modeling Instruction Curriculum
Caballero, Marcos D; Aiken, John M; Douglas, Scott S; Scanlon, Erin M; Thoms, Brian; Schatz, Michael F
2012-01-01
We describe a way to introduce physics high school students with no background in programming to computational problem-solving experiences. Our approach builds on the great strides made by the Modeling Instruction reform curriculum. This approach emphasizes the practices of "Developing and using models" and "Computational thinking" highlighted by the NRC K-12 science standards framework. We taught 9th-grade students in a Modeling-Instruction-based physics course to construct computational models using the VPython programming environment. Numerical computation within the Modeling Instruction curriculum provides coherence among the curriculum's different force and motion models, links the various representations which the curriculum employs, and extends the curriculum to include real-world problems that are inaccessible to a purely analytic approach.
Numerical Computation of Dynamical Schwinger-like Pair Production in Graphene
Fillion-Gourdeau, F.; Blain, P.; Gagnon, D.; Lefebvre, C.; Maclean, S.
2017-03-01
The density of electron-hole pairs produced in a graphene sample immersed in a homogeneous time-dependent electric field is evaluated. Because low energy charge carriers in graphene are described by relativistic quantum mechanics, the calculation is performed within the strong field quantum electrodynamics formalism, requiring a solution of the Dirac equation in momentum space. The equation is solved using a split-operator numerical scheme on parallel computers, allowing for the investigation of several field configurations. The strength of the method is illustrated by computing the electron momentum density generated from a realistic laser pulse model. We observe quantum interference patterns reminiscent of Landau-Zener-Stückelberg interferometry.
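A minimal sketch of the split-operator idea, shown here for the 1-D Schrödinger equation rather than the momentum-space Dirac equation the paper actually solves; the grid and time-step values are illustrative assumptions.

```python
import numpy as np

def split_operator_step(psi, V, dt, dx, hbar=1.0, m=1.0):
    # One Strang-split time step: half "kick" by the potential in
    # position space, full "drift" by the kinetic term in momentum
    # space (via FFT), then the second half kick. Each factor is a
    # unit-modulus phase, so the norm of psi is preserved.
    k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    half_kick = np.exp(-0.5j * V * dt / hbar)
    drift = np.exp(-0.5j * hbar * k ** 2 * dt / m)
    psi = half_kick * psi
    psi = np.fft.ifft(drift * np.fft.fft(psi))
    return half_kick * psi
```

The same operator-splitting pattern parallelizes naturally over momentum modes, which is what makes the approach attractive for scanning many field configurations.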
Computer-Aided Numerical Inversion of Laplace Transform
Umesh Kumar
2000-01-01
Full Text Available This paper explores the technique for the computer-aided numerical inversion of the Laplace transform. The inversion technique is based on the properties of a family of three-parameter exponential probability density functions. The only limitation of the technique is the word length of the computer being used. The Laplace transform has been used extensively in the frequency-domain solution of linear, lumped, time-invariant networks, but its application to the time domain has been limited, mainly because of the difficulty in finding the necessary poles and residues. The numerical inversion technique mentioned above does away with the poles and residues and instead uses precomputed numbers to find the time response. This technique is applicable to the solution of partial differential equations and certain classes of linear systems with time-varying components.
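The paper's three-parameter exponential-PDF technique is specific to it, but the general idea of pole-free numerical Laplace inversion from precomputed numbers can be illustrated with the classical Gaver-Stehfest method, a different, standard algorithm, sketched here for comparison only.

```python
import math

def stehfest_coeffs(N):
    # Precomputed Stehfest weights V_k (N must be even).
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    # Gaver-Stehfest: f(t) ~ (ln 2 / t) * sum_k V_k F(k ln2 / t).
    # No poles or residues are needed, only samples of F on the
    # positive real axis; accuracy is limited by the word length
    # (double precision caps the useful N at roughly 14-16).
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))
```

As the abstract notes for its own method, the word length of the machine is the binding constraint: the weights alternate in sign and grow rapidly, so increasing N beyond the precision budget degrades rather than improves the result.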
Multi-pattern Matching Methods Based on Numerical Computation
Lu Jun
2013-01-01
Full Text Available Multi-pattern matching methods based on numerical computation are proposed in this paper. First, a multiple-pattern matching algorithm based on added information is presented. In the process of accumulating information, the choice of byte-accumulate operation affects the collision odds, which means that the methods or bytes involved in the different matching steps should differ as much as possible. In addition, a balanced binary tree can be used to manage the index and reduce the average number of searches, and the characteristics of a given pattern set can be exploited, by setting a collision field, to eliminate collisions further. To reduce the collision odds in the initial step, an information-splicing method is proposed; it has a larger value space than the added-information method and thus greatly reduces the initial collision odds. Multi-pattern matching methods based on numerical computation are well suited to large-scale multi-pattern matching.
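A simplified hash-based (Rabin-Karp style) multi-pattern matcher illustrates the general idea of matching by numerical computation with explicit collision elimination; it is not the paper's added-information or information-splicing scheme, which accumulates bytes differently.

```python
def multi_pattern_search(text, patterns):
    # Each pattern is reduced to a number (a rolling hash), so the
    # matching steps are numeric comparisons; residual hash
    # collisions are eliminated by a final string verification,
    # playing the role of the paper's collision field.
    base, mod = 257, (1 << 61) - 1
    by_len = {}
    for p in patterns:
        h = 0
        for ch in p:
            h = (h * base + ord(ch)) % mod
        by_len.setdefault(len(p), {}).setdefault(h, []).append(p)
    hits = []
    for m, table in by_len.items():
        if m > len(text):
            continue
        pw = pow(base, m - 1, mod)
        h = 0
        for ch in text[:m]:
            h = (h * base + ord(ch)) % mod
        for i in range(len(text) - m + 1):
            if i:
                h = ((h - ord(text[i - 1]) * pw) * base
                     + ord(text[i + m - 1])) % mod
            for p in table.get(h, ()):
                if text[i:i + m] == p:  # eliminate hash collisions
                    hits.append((i, p))
    return sorted(hits)
```

The dictionary keyed by hash value stands in for the paper's balanced-binary-tree index; both reduce the average number of lookups per matching step.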
阚英男; 杨兆军; 李国发; 何佳龙; 王彦鹍; 李洪洲
2016-01-01
A new problem that classical statistical methods are incapable of solving is reliability modeling and assessment when multiple numerical control machine tools (NCMTs) reveal zero failures after a reliability test. Thus, the zero-failure data form and corresponding Bayesian model are developed to solve the zero-failure problem of NCMTs, for which no previous suitable statistical model has been developed. An expert-judgment process that incorporates prior information is presented to solve the difficulty in obtaining reliable prior distributions of Weibull parameters. The equations for the posterior distribution of the parameter vector and the Markov chain Monte Carlo (MCMC) algorithm are derived to solve the difficulty of calculating high-dimensional integration and to obtain parameter estimators. The proposed method is applied to a real case; a corresponding programming code and trick are developed to implement an MCMC simulation in WinBUGS, and a mean time between failures (MTBF) of 1057.9 h is obtained. Given its ability to combine expert judgment, prior information, and data, the proposed reliability modeling and assessment method under the zero failure of NCMTs is validated.
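The Bayesian machinery described, a Weibull likelihood for fully censored (zero-failure) data sampled by MCMC, can be sketched with a plain Metropolis random walk. The priors, number of units, and censoring time below are illustrative assumptions, not the expert-elicited values of the paper, which runs the simulation in WinBUGS.

```python
import math
import random

def log_post(beta, eta, n_units, t_censor):
    # Zero failures: every unit survives past t_censor, so the
    # likelihood is a product of Weibull survival functions
    # S(t) = exp(-(t/eta)^beta).
    if beta <= 0.0 or eta <= 0.0:
        return -math.inf
    loglik = -n_units * (t_censor / eta) ** beta
    # Weakly informative log-normal priors (assumed for this sketch).
    logprior = (-0.5 * math.log(beta) ** 2
                - 0.5 * (math.log(eta) - math.log(1000.0)) ** 2)
    return loglik + logprior

def metropolis(n_iter, n_units=10, t_censor=500.0, seed=1):
    random.seed(seed)
    beta, eta = 1.0, 1000.0
    samples = []
    for _ in range(n_iter):
        # Multiplicative (log-scale) random walk proposal; the
        # log(b'*e') - log(b*e) term is the Jacobian correction.
        b_new = beta * math.exp(0.1 * random.gauss(0.0, 1.0))
        e_new = eta * math.exp(0.1 * random.gauss(0.0, 1.0))
        logr = (log_post(b_new, e_new, n_units, t_censor)
                - log_post(beta, eta, n_units, t_censor)
                + math.log(b_new * e_new) - math.log(beta * eta))
        if math.log(random.random()) < logr:
            beta, eta = b_new, e_new
        samples.append((beta, eta))
    return samples
```

An MTBF estimate then follows from the posterior samples as the mean of eta * Gamma(1 + 1/beta), mirroring the paper's use of posterior summaries rather than point estimates.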
A new approach to numerical analysis of reliability indices in electronics
Geniy Kuznetsov
2015-01-01
Full Text Available Spatial modeling of unsteady temperature fields is conducted in a microelectronic printed circuit board (PCB), taking into account convective and radiative heat transfer with the environment. The data from numerical modeling of temperature fields serve as a basis for determining the aging characteristics of the polymer material as a structural component of electronic engineering products. The obtained results support the conclusion that spatially nonuniform temperature fields must be considered when estimating the degree of polymeric material degradation during continuous service of products, as must the impact of polymer aging on the reliability of microelectronic devices.
Lu, Jinshu; Xu, Zhenfeng; Xu, Song; Xie, Sensen; Wu, Haoxiao; Yang, Zhenbo; Liu, Xueqiang
2015-06-15
Air barriers have recently been developed and employed as a new type of oil containment boom. This paper presents systematic investigations of the reliability of air barriers for oil containment in flowing water, which represents the commonly seen shearing current in reality, using both laboratory experiments and numerical simulations. Both the numerical and experimental investigations are carried out at model scale. In the investigations, a submerged pipe with apertures is installed near the bottom of a tank to generate the air bubbles forming the air curtain, and the shearing water flow is introduced by a narrow inlet near the mean free surface. The effects of the aperture configuration (including the size and the spacing of the apertures) and the location of the pipe on the effectiveness of the air barrier in preventing oil spreading are discussed in detail, with consideration of different air discharges and velocities of the flowing water. The research outcome provides a foundation for evaluating and/or improving the reliability of an air barrier in preventing spilled oil from further spreading.
A New Language Design for Prototyping Numerical Computation
Thomas Derby
1996-01-01
Full Text Available To naturally and conveniently express numerical algorithms, considerable expressive power is needed in the languages in which they are implemented. The language Matlab is widely used by numerical analysts for this reason. Expressiveness or ease-of-use can also result in a loss of efficiency, as is the case with Matlab. In particular, because numerical analysts are highly interested in the performance of their algorithms, prototypes are still often implemented in languages such as Fortran. In this article we describe a language design that is intended to both provide expressiveness for numerical computation, and at the same time provide performance guarantees. In our language, EQ, we attempt to include both syntactic and semantic features that correspond closely to the programmer's model of the problem, including unordered equations, large-granularity state transitions, and matrix notation. The resulting language does not fit into standard language categories such as functional or imperative but has features of both paradigms. We also introduce the notion of language dependability, which is the idea that a language should guarantee that certain program transformations are performed by all implementations. We first describe the interesting features of EQ, and then present three examples of algorithms written using it. We also provide encouraging performance results from an initial implementation of our language.
Duan, Lili; Liu, Xiao; Zhang, John Z H
2016-05-04
Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of the entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating the entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. An extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems, as well as a practical method for highly efficient calculation of this effect.
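The interaction-entropy formula described, -TΔS = kT ln⟨exp(βΔE_int)⟩ with ΔE_int the fluctuation of the interaction energy about its mean, reduces to a few lines given an energy time series from an MD trajectory. This is a sketch; units are assumed to be kcal/mol, and in practice the exponential average needs a well-sampled trajectory to converge.

```python
import math

def interaction_entropy(energies, temperature=300.0):
    # -T*dS = kT * ln < exp(beta * dE) >, where dE = E - <E> is the
    # fluctuation of the protein-ligand interaction energy along the
    # MD trajectory. Constant energies give exactly zero; by Jensen's
    # inequality the result is always >= 0 (an entropic penalty).
    k_B = 0.0019872041          # Boltzmann constant, kcal/(mol*K)
    kT = k_B * temperature
    beta = 1.0 / kT
    mean_e = sum(energies) / len(energies)
    boltz = [math.exp(beta * (e - mean_e)) for e in energies]
    return kT * math.log(sum(boltz) / len(boltz))
```

No extra simulation is required beyond the trajectory already produced for the enthalpic part, which is the "no extra computational cost" claim of the abstract.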
Numerical Computation of Two-loop Box Diagrams with Masses
Yuasa, F; Hamaguchi, N; Ishikawa, T; Kato, K; Kurihara, Y; Fujimoto, J; Shimizu, Y
2011-01-01
A new approach is presented to evaluate multi-loop integrals, which appear in the calculation of cross-sections in high-energy physics. It relies on a fully numerical method and is applicable to a wide class of integrals with various mass configurations. As an example, the computation of two-loop planar and non-planar box diagrams is shown. The results are confirmed by comparisons with other techniques, including the reduction method, and by a consistency check using the dispersion relation.
Learning SciPy for numerical and scientific computing
Silva
2013-01-01
A step-by-step practical tutorial with plenty of examples of research-based problems from various areas of science that show how simple, yet effective, it is to provide solutions based on SciPy. This book is targeted at anyone with basic knowledge of Python, a somewhat advanced command of mathematics/physics, and an interest in engineering or scientific applications---this is broadly what we refer to as scientific computing. This book will be of critical importance to programmers and scientists who have basic Python knowledge and would like to be able to do scientific and numerical computations
A computational model for the numerical simulation of FSW processes
Agelet de Saracibar Bosch, Carlos; Chiumenti, Michèle; Santiago, Diego de; Cervera Ruiz, Miguel; Dialami, Narges; Lombera, Guillermo
2010-01-01
In this paper a computational model for the numerical simulation of Friction Stir Welding (FSW) processes is presented. FSW is a new method of welding in solid state in which a shouldered tool with a profile probe is rotated and slowly plunged into the joint line between two pieces of sheet or plate material which are butted together. Once the probe has been completely inserted, it is moved with a small tilt angle in the welding direction. Here a quasi-static, thermal transient, mixed mult...
Design for reliability information and computer-based systems
Bauer, Eric
2010-01-01
"System reliability, availability and robustness are often not well understood by system architects, engineers and developers. They often don't understand what drives customer's availability expectations, how to frame verifiable availability/robustness requirements, how to manage and budget availability/robustness, how to methodically architect and design systems that meet robustness requirements, and so on. The book takes a very pragmatic approach of framing reliability and robustness as a functional aspect of a system so that architects, designers, developers and testers can address it as a concrete, functional attribute of a system, rather than an abstract, non-functional notion"--Provided by publisher.
Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks
Pyle, Ryan; Rosenbaum, Robert
2017-01-01
Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
RIOT: I/O-Efficient Numerical Computing without SQL
Zhang, Yi; Yang, Jun
2009-01-01
R is a numerical computing environment that is widely popular for statistical data analysis. Like many such environments, R performs poorly for large datasets whose sizes exceed that of physical memory. We present our vision of RIOT (R with I/O Transparency), a system that makes R programs I/O-efficient in a way transparent to the users. We describe our experience with RIOT-DB, an initial prototype that uses a relational database system as a backend. Despite the overhead and inadequacy of generic database systems in handling array data and numerical computation, RIOT-DB significantly outperforms R in many large-data scenarios, thanks to a suite of high-level, inter-operation optimizations that integrate seamlessly into R. While many techniques in RIOT are inspired by databases (and, for RIOT-DB, realized by a database system), RIOT users are insulated from anything database related. Compared with previous approaches that require users to learn new languages and rewrite their programs to interface with a datab...
Numerical computation of travelling breathers in Klein Gordon chains
Sire, Yannick; James, Guillaume
2005-05-01
We numerically study the existence of travelling breathers in Klein-Gordon chains, which consist of one-dimensional networks of nonlinear oscillators in an anharmonic on-site potential, linearly coupled to their nearest neighbors. Travelling breathers are spatially localized solutions having the property of being exactly translated by p sites along the chain after a fixed propagation time T (these solutions generalize the concept of solitary waves for which p=1). In the case of even on-site potentials, the existence of small amplitude travelling breathers superposed on a small oscillatory tail has been proved recently [G. James, Y. Sire, Travelling breathers with exponentially small tails in a chain of nonlinear oscillators, Commun. Math. Phys., 2005, in press (available online at http://www.springerlink.com)], the tail being exponentially small with respect to the central oscillation size. In this paper, we compute these solutions numerically and continue them into the large amplitude regime for different types of even potentials. We find that Klein-Gordon chains can support highly localized travelling breather solutions superposed on an oscillatory tail. We provide examples where the tail can be made very small and is difficult to detect at the scale of central oscillations. In addition, we numerically observe the existence of these solutions in the case of non-even potentials.
Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems
Cai, Wei
2014-05-15
Understanding electromagnetic phenomena is key to many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, a divergence-free finite element basis for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interactions in ion channels, we have developed an image charge based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up the molecular dynamics simulation of transport in biological ion channels.
Computer-based numerical simulations of adsorption in nanostructures
Khashimova, Diana
2014-08-01
Zeolites are crystalline oxides with uniform, molecular-pore diameters of 3-14Å. Significant developments since 1950 made production of synthetic zeolites with high purity and controlled chemical composition possible. In powder-form, zeolites are major role-players in high-tech, industrial catalysis, adsorption, and ion exchange applications. Understanding properties of thin-film zeolites has been a focus of recent research. The ability to fine-tune desired macroscopic properties by controlled alteration at the molecular level is paramount. The relationships between macroscopic and molecular-level properties are established by experimental research. Because generating macroscopic, experimental data in a controlled laboratory can be prohibitively costly and time-consuming, reliable numerical simulations, which remove such difficulties, are an attractive alternative. Using a Configurational Biased Monte Carlo (CBMC) approach in grand canonical ensemble, numerical models for pure component and multicomponent adsorption processes were developed. Theoretical models such as ideal (IAST) and real adsorbed solution theory (RAST) to predict mixture adsorption in nanopores were used for comparison. Activity coefficients used in RAST calculations were determined from the Wilson, spreading pressure and COSMO-RS models. Investigative testing of the method on known materials, represented by all-silica zeolites such as MFI (channel type) and DDR (cage type), proved successful in replicating experimental data on adsorption of light hydrocarbons - alkanes, such as methane, ethane, propane and butane. Additionally, adsorption of binary and ternary mixtures was simulated. The given numerical approach developed can be a powerful, cost and time saving tool to predict process characteristics for different molecular-structure configurations. The approach used here for simulating adsorption properties of nanopore materials including process characteristics, may have great potential for
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
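The architecture-based Markov approach the paper extends can be sketched with a Cheung-style model: each component has its own reliability, control transfers between components with known probabilities, and system reliability is the probability of reaching and successfully completing the final component. This is a generic sketch of that class of model, not the paper's COSMIC-FFP-based variant.

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting (small dense systems).
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def tool_reliability(P, R):
    # Q[i][j]: probability that component i works AND control moves
    # to component j. Expected-visit probabilities v satisfy
    # v = e0 + v Q, i.e. (I - Q)^T v = e0; system reliability is the
    # probability of reaching the last component times its own
    # reliability (Cheung's discrete-time Markov model).
    n = len(R)
    Q = [[R[i] * P[i][j] for j in range(n)] for i in range(n)]
    A = [[(1.0 if i == j else 0.0) - Q[j][i] for j in range(n)] for i in range(n)]
    v = solve_linear(A, [1.0] + [0.0] * (n - 1))
    return v[n - 1] * R[n - 1]
```

Comparing alternative tool designs then amounts to re-running the same computation with different transfer matrices P, which is exactly the design-comparison use case the abstract describes.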
Reliability in Warehouse-Scale Computing: Why Low Latency Matters
Nannarelli, Alberto
2015-01-01
Warehouse-sized buildings are nowadays hosting several types of large computing systems: from supercomputers to large clusters of servers to provide the infrastructure to the cloud. Although the main target, especially for high-performance computing, is still to achieve high throughput, the limiting factor of these warehouse-scale data centers is the power dissipation. Power is dissipated not only in the computation itself, but also in heat removal (fans, air conditioning, etc.) to keep the temperature of the devices within the operating ranges. The need to keep the temperature low within...
Fault-tolerant search algorithms reliable computation with unreliable information
Cicalese, Ferdinando
2013-01-01
Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr
The Process Group Approach to Reliable Distributed Computing
1991-07-01
under DARPA/NASA grant NAG-2-593, and by grants from IBM, HP, Siemens, GTE and Hitachi. ... system, but could make it harder to administer and less reliable. A theme of the paper will be that one overcomes this intrinsic problem by standardizing
Workload, Performance and Reliability of Digital Computing Systems.
1981-04-06
Proschan. Mathematical Theory of Reliability. John Wiley & Sons, 1965. [Bazaraa 79] M.S. Bazaraa and C.M. Shetty. Nonlinear Programming: Theory and... exercised. System software failures are due to: a) the (static) input data to a program module presents some peculiarities that the program is not able to... available, this is a typical nonlinear programming problem, subject to nonlinear inequality constraints. Since this problem will have to be solved
Reliable Provisioning of Spot Instances for Compute-intensive Applications
Voorsluys, William
2011-01-01
Cloud computing providers are now offering their unused resources for leasing in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standard on-demand counterparts. These VMs will run for as long as the current price is lower than the maximum bid price users are willing to pay per hour. Spot instances have been increasingly used for executing compute-intensive applications. In spite of an apparent economic advantage, due to the intermittent nature of biddable resources, application execution times may be prolonged or the application may not finish at all. This paper proposes a resource allocation strategy that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications in a fast and economical way. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning ...
Reliable, Memory Speed Storage for Cluster Computing Frameworks
2014-06-16
specification API that can capture computations in many of today’s popular data-parallel computing models, e.g., MapReduce and SQL. We also ported the Hadoop ...runs on. We present solutions for priority and weighted fair sharing, the most common policies in systems like Hadoop and Dryad [45, 27]. Priority Based...understand its own configuration. For example, in Hadoop , configurations are kept in HadoopConf, while Spark stores these in SparkEnv. Therefore, their wrap
Reliable High Performance Peta- and Exa-Scale Computing
Bronevetsky, G
2012-04-02
As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty
Numerical simulation of NQR/NMR: Applications in quantum computing.
Possa, Denimar; Gaudio, Anderson C; Freitas, Jair C C
2011-04-01
A numerical simulation program able to simulate nuclear quadrupole resonance (NQR) as well as nuclear magnetic resonance (NMR) experiments is presented, written using the Mathematica package and aimed especially at applications in quantum computing. The program makes use of the interaction picture to compute the effect of the relevant nuclear spin interactions, without any assumption about the relative size of each interaction. This makes the program flexible and versatile, being useful in a wide range of experimental situations, going from NQR (at zero or under small applied magnetic field) to high-field NMR experiments. Some conditions specifically required for quantum computing applications are implemented in the program, such as the possibility of using elliptically polarized radiofrequency fields and the inclusion of first- and second-order terms in the average Hamiltonian expansion. A number of examples dealing with simple NQR and quadrupole-perturbed NMR experiments are presented, along with proposals of experiments to create quantum pseudopure states and logic gates using NQR. The program and the various application examples are freely available through the link http://www.profanderson.net/files/nmr_nqr.php.
Summary of research in applied mathematics, numerical analysis, and computer sciences
1986-01-01
The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.
Reliable Date-Replication Using Grid Computing Tools
Sonnick, D
2009-01-01
The LHCb detector at CERN is a physics experiment to measure rare b-decays after the collision of protons in the Large Hadron Collider ring. The measured collisions are called "events". These events contain the data necessary to analyze and reconstruct the decays. The events are sent to speed-optimized writer processes, which write the events into files on a local hard-disk cluster. Because space on the hard-disk cluster is limited, the data need to be replicated to a long-term storage system. This diploma thesis presents the design and implementation of software that replicates the data in a reliable manner. In addition, this software registers the data in special databases to prepare the subsequent analyses and reconstructions. Because the software used in the LHCb experiment is still under development, there is a special need for reliability to deal with error situations or inconsistent data. The subject of this diploma thesis was also presented at the "17th ...
Symbolic coding for noninvertible systems: uniform approximation and numerical computation
Beyn, Wolf-Jürgen; Hüls, Thorsten; Schenke, Andre
2016-11-01
It is well known that the homoclinic theorem, which conjugates a map near a transversal homoclinic orbit to a Bernoulli subshift, extends from invertible to specific noninvertible dynamical systems. In this paper, we provide a unifying approach that combines such a result with a fully discrete analog of the conjugacy for finite but sufficiently long orbit segments. The underlying idea is to solve appropriate discrete boundary value problems in both cases, and to use the theory of exponential dichotomies to control the errors. This leads to a numerical approach that allows us to compute the conjugacy to any prescribed accuracy. The method is demonstrated for several examples where invertibility of the map fails in different ways.
Numerical computations of the dynamics of fluidic membranes and vesicles
Barrett, John W; Nürnberg, Robert
2015-01-01
Vesicles and many biological membranes are made of two monolayers of lipid molecules and form closed lipid bilayers. The dynamical behaviour of vesicles is very complex and a variety of forms and shapes appear. Lipid bilayers can be considered as a surface fluid and hence the governing equations for the evolution include the surface (Navier--)Stokes equations, which in particular take the membrane viscosity into account. The evolution is driven by forces stemming from the curvature elasticity of the membrane. In addition, the surface fluid equations are coupled to bulk (Navier--)Stokes equations. We introduce a parametric finite element method to solve this complex free boundary problem, and present the first three dimensional numerical computations based on the full (Navier--)Stokes system for several different scenarios. For example, the effects of the membrane viscosity, spontaneous curvature and area difference elasticity (ADE) are studied. In particular, it turns out, that even in the case of no viscosit...
Singularities of robot mechanisms numerical computation and avoidance path planning
Bohigas, Oriol; Ros, Lluís
2017-01-01
This book presents the singular configurations associated with a robot mechanism, together with robust methods for their computation, interpretation, and avoidance path planning. Having such methods is essential as singularities generally pose problems to the normal operation of a robot, but also determine the workspaces and motion impediments of its underlying mechanical structure. A distinctive feature of this volume is that the methods are applicable to nonredundant mechanisms of general architecture, defined by planar or spatial kinematic chains interconnected in an arbitrary way. Moreover, singularities are interpreted as silhouettes of the configuration space when seen from the input or output spaces. This leads to a powerful image that explains the consequences of traversing singular configurations, and all the rich information that can be extracted from them. The problems are solved by means of effective branch-and-prune and numerical continuation methods that are of independent interest in themselves...
Nested Transactions: An Approach to Reliable Distributed Computing.
1981-04-01
Undoubtedly such universal use of computers and rapid exchange of information will have a dramatic impact: social, economic, and political. Distributed...level transaction, these committed inferiors are successful inferiors of the top-level transaction, too. Therefore q will indeed get a commit
Numerical simulation of landfill aeration using computational fluid dynamics.
Fytanidis, Dimitrios K; Voudrias, Evangelos A
2014-04-01
The present study is an application of Computational Fluid Dynamics (CFD) to the numerical simulation of landfill aeration systems. Specifically, the CFD algorithms provided by the commercial solver ANSYS Fluent 14.0, combined with an in-house source code developed to modify the main solver, were used. The unsaturated multiphase flow of air and liquid phases and the biochemical processes for aerobic biodegradation of the organic fraction of municipal solid waste were simulated taking into consideration their temporal and spatial evolution, as well as complex effects, such as oxygen mass transfer across phases, unsaturated flow effects (capillary suction and unsaturated hydraulic conductivity), temperature variations due to biochemical processes and environmental correction factors for the applied kinetics (Monod and 1st order kinetics). The developed model results were compared with literature experimental data. Also, pilot scale simulations and sensitivity analysis were implemented. Moreover, simulation results of a hypothetical single aeration well were shown, while its zone of influence was estimated using both the pressure and oxygen distribution. Finally, a case study was simulated for a hypothetical landfill aeration system. Both a static scenario (steadily positive or negative relative pressure with time) and a hybrid scenario (relative pressure following a square-wave pattern of positive and negative values with time) were examined for the aeration wells. The results showed that the present model is capable of simulating landfill aeration and the obtained results were in good agreement with corresponding previous experimental and numerical investigations.
Windows and Fieldbus Based Software Computer Numerical Control System
WU Hongen; ZHANG Chengrui; LI Guili; WANG Baoren
2006-01-01
Computer numerical control (CNC) systems are the foundation of modern digital and intelligent manufacturing technology, and open-architecture CNC systems built on PCs and the Windows operating system (OS) are the main trend. However, even at the highest system priority in user mode, real-time capability of Windows (2000, NT, XP) for applications is not guaranteed. By using a device driver running in kernel mode, the real-time performance of Windows can be enhanced greatly. The acknowledgment performance of Windows to peripheral interrupts was evaluated. Combined with an intelligent real-time serial communication bus (RTSB), strict real-time performance can be achieved on the Windows platform. An open-architecture, hardware-independent software CNC system based on a PC and the RTSB is proposed. A numerical control real-time kernel (NCRTK), implemented as a device driver on Windows, performs the NC tasks. Tasks are divided into real-time and non-real-time: real-time tasks run in kernel mode and non-real-time tasks run in user mode. Data are exchanged between kernel and user mode by DMA and Windows messages.
A Computational Model for the Numerical Simulation of FSW Processes
Agelet de Saracibar, C.; Chiumenti, M.; Santiago, D.; Cervera, M.; Dialami, N.; Lombera, G.
2010-06-01
In this paper a computational model for the numerical simulation of Friction Stir Welding (FSW) processes is presented. FSW is a new method of welding in solid state in which a shouldered tool with a profile probe is rotated and slowly plunged into the joint line between two pieces of sheet or plate material which are butted together. Once the probe has been completely inserted, it is moved with a small tilt angle in the welding direction. Here a quasi-static, thermal transient, mixed multiscale stabilized Eulerian formulation is used. Norton-Hoff and Sheppard-Wright rigid thermo-viscoplastic material models have been considered. A staggered solution algorithm is defined such that for any time step, the mechanical problem is solved at constant temperature and then the thermal problem is solved keeping constant the mechanical variables. A pressure multiscale stabilized mixed linear velocity/linear pressure finite element interpolation formulation is used to solve the mechanical problem and a convection multiscale stabilized linear temperature interpolation formulation is used to solve the thermal problem. The model has been implemented into the in-house developed FE code COMET. Results obtained in the simulation of FSW process are compared to other numerical results or experimental results, when available.
Mohamed Kenawey
2016-12-01
Conclusion: Computer-assisted lower limb alignment analysis is reliable whether using a graphics editing program or specialized planning software. However, slightly higher variability can be expected for angles away from the knee joint.
Test–retest reliability and validity of self-reported duration of computer use at work
IJmker, S.; Leijssen, J.N.M.; Blatter, B.M.; Beek, A.J. van der; Mechelen, W. van; Bongers, P.M.
2008-01-01
This study evaluated the test–retest reliability and the validity of self-reported duration of computer use at work. Test–retest reliability was studied among 81 employees of a research department of a university medical center. The employees filled out a web-based questionnaire twice with an in-bet
RELIABILITY, AVAILABILITY, AND SERVICEABILITY FOR PETASCALE HIGH-END COMPUTING AND BEYOND
Chokchai " Box" Leangsuksun
2011-05-31
Our project is a multi-institutional research effort that adopts the interplay of reliability, availability, and serviceability (RAS) aspects for solving resilience issues in high-end scientific computing on the next generation of supercomputers. Results lie in the following tracks: failure prediction in large-scale HPC; investigation of reliability issues and mitigation techniques, including in GPGPU-based HPC systems; and HPC resilience runtime and tools.
Numerical observer for atherosclerotic plaque classification in spectral computed tomography.
Lorsakul, Auranuch; Fakhri, Georges El; Worstell, William; Ouyang, Jinsong; Rakvongthai, Yothin; Laine, Andrew F; Li, Quanzheng
2016-07-01
Spectral computed tomography (SCT) generates better image quality than conventional computed tomography (CT). It has overcome several limitations for imaging atherosclerotic plaque. However, the literature evaluating the performance of SCT based on objective image assessment is very limited for the task of discriminating plaques. We developed a numerical-observer method and used it to assess performance in discriminating vulnerable-plaque features, and compared the performance among multienergy CT (MECT), dual-energy CT (DECT), and conventional CT methods. Our numerical observer was designed to incorporate all spectral information and comprised two processing stages. First, each energy-window domain was preprocessed by a set of localized channelized Hotelling observers (CHO). In this step, the spectral image in each energy bin was decorrelated using localized prewhitening and matched filtering with a set of Laguerre-Gaussian channel functions. Second, the series of intermediate scores computed from all the CHOs was integrated by a Hotelling observer with an additional prewhitening and matched filter. The overall signal-to-noise ratio (SNR) and the area under the receiver operating characteristic curve (AUC) were obtained, yielding an overall discrimination performance metric. The performance of our new observer was evaluated for the particular binary classification task of differentiating between alternative plaque characterizations in carotid arteries. A clinically realistic model of signal variability was also included in our simulation of the discrimination tasks. The inclusion of signal variation is a key to applying the proposed observer method to spectral CT data. Hence, the task-based approaches based on the signal-known-exactly/background-known-exactly (SKE/BKE) framework and the clinically relevant signal-known-statistically/background-known-exactly (SKS/BKE) framework were applied for analytical computation of figures of merit (FOM). Simulated data of a
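The building block of the first processing stage described above is a standard channelized Hotelling observer. A minimal single-bin CHO sketch follows; the channel matrix and Gaussian class samples are assumptions for illustration, and the paper's localized prewhitening, Laguerre-Gaussian channels, and second-stage integration across energy bins are not reproduced.

```python
import numpy as np

def cho_snr(imgs_a, imgs_b, channels):
    """Channelized Hotelling observer detectability (SNR), single energy bin.

    imgs_a, imgs_b: (n_samples, n_pixels) sample images for the two classes.
    channels: (n_pixels, n_channels) channel matrix.
    """
    va, vb = imgs_a @ channels, imgs_b @ channels        # channel outputs
    S = 0.5 * (np.cov(va, rowvar=False) + np.cov(vb, rowvar=False))
    dv = va.mean(axis=0) - vb.mean(axis=0)               # mean channel signal
    w = np.linalg.solve(np.atleast_2d(S), dv)            # Hotelling template
    return float(np.sqrt(dv @ w))                        # observer SNR
```

With a single flat channel and a unit mean shift per pixel, the SNR reduces to the familiar mean-difference-over-noise ratio of the channel output.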
Computing the SKT Reliability of Acyclic Directed Networks Using Factoring Method
KONG Fanjia; WANG Guangxing
1999-01-01
This paper presents a factoring algorithm for computing source-to-K terminal (SKT) reliability, the probability that a source s can send messages to a specified set of terminals K, in acyclic directed networks (AD-networks) in which both nodes and edges can fail. Based on the pivotal decomposition theorem, a new formula is derived for computing the SKT reliability of AD-networks. By establishing a topological property of AD-networks, it is shown that the SKT reliability of AD-networks can be computed by recursively applying this formula. Two new reliability-preserving reductions are also introduced. The recursion tree generated by the presented algorithm has at most 2^(|V| - |K| - |C|) leaf nodes, where |V| and |K| are the numbers of nodes and terminals, respectively, while |C| is the number of the nodes satisfying some specified conditions. The computation complexity of the new algorithm is O(|E||V|2^(|V| - |K| - |C|)) in the worst case, where |E| is the number of edges. For source-to-all-terminal (SAT) reliability, its computation complexity is O(|E|). Comparison of the new algorithm with the existing ones indicates that the new algorithm is more efficient for computing the SKT reliability of AD-networks.
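The pivotal decomposition at the heart of such factoring algorithms can be sketched directly: condition on one uncertain edge being up or down and recurse. The edge list below is hypothetical, nodes are assumed perfectly reliable, and the brute-force recursion omits the paper's reliability-preserving reductions, so it is exponential and illustrative only.

```python
def reaches_all(s, terminals, edges):
    """BFS over directed working edges; True iff s reaches every terminal."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
    return terminals <= seen

def skt_reliability(uncertain, working, s, terminals):
    """SKT reliability by pivotal decomposition (factoring).

    uncertain: list of ((u, v), p) directed edges that work with probability p.
    working: edges already conditioned to be up.
    R = p * R(edge up) + (1 - p) * R(edge down).
    """
    if not uncertain:
        return 1.0 if reaches_all(s, terminals, working) else 0.0
    (e, p), rest = uncertain[0], uncertain[1:]
    return (p * skt_reliability(rest, working + [e], s, terminals)
            + (1.0 - p) * skt_reliability(rest, working, s, terminals))
```

For two edge-disjoint s-t paths with edge reliability 0.9 each, the result is 1 - (1 - 0.81)^2 = 0.9639, which the recursion reproduces.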
史训清; John HL Pang; 杨前进; 王志平; 聂景旭
2002-01-01
In the present study, a facility, i.e., a mechanical deflection system (MDS), was established and applied to assess the long-term reliability of the solder joints in plastic ball grid array (BGA) assembly. It was found that the MDS not only quickly assesses the long-term reliability of solder joints within days, but can also mimic similar failure mechanisms in accelerated thermal cycling (ATC) tests. Based on the MDS and ATC reliability experiments, the acceleration factors (AF) were obtained for different reliability testing conditions. Furthermore, by using the creep constitutive relation and fatigue life model developed in part I, a numerical approach was established for the purpose of virtual life prediction of solder joints. The simulation results were found to be in good agreement with the test results from the MDS. As a result, a new reliability assessment methodology was established as an alternative to ATC for the evaluation of long-term reliability of plastic BGA assembly.
Gourgoulhon, Eric
2011-04-01
clearly the research work of one of the authors, but it is also an opportunity to discuss the Cosmic Censorship conjecture and the Hoop conjecture. Chapter 11 presents the basics of hyperbolic systems and focuses on the famous BSSN formalism employed in most numerical codes. The electromagnetism analogy introduced in chapter 2 is developed, providing some very useful insight. The remainder of the book is devoted to the collapse of rotating stars (chapter 14) and to the coalescence of binary systems of compact objects, either neutron stars or black holes (chapters 12, 13, 15, 16 and 17). This is a unique introduction and review of results about the expected main sources of gravitational radiation. It includes a detailed presentation of the major triumph of numerical relativity: the successful computation of binary black hole merger. I think that Baumgarte and Shapiro have accomplished a genuine tour de force by writing such a comprehensive and self-contained textbook on a highly evolving subject. The primary value of the book is to be extremely pedagogical. The style is definitively at the textbook level and not that of a review article. One may point out the use of boxes to recap important results and the very instructive aspect of many figures, some of them in colour. There are also numerous exercises in the main text, to encourage the reader to find some useful results by himself. The pedagogical trend is manifest up to the book cover, with the subtitle explaining what the title means! Another great value of the book is indisputably its encyclopedic aspect, making it a very good starting point for research on many topics of modern relativity. I have no doubt that Baumgarte and Shapiro's monograph will quicken considerably the learning phase of any master or PhD student beginning numerical relativity. It will also prove to be very valuable for all researchers of the field and should become a major reference. Beyond numerical relativity, the richness and variety of
Kilov, Andrea M; Togher, Leanne; Power, Emma
2015-01-01
To determine test-re-test reliability of the 'Computer User Profile' (CUP) in people with and without TBI. The CUP was administered on two occasions to people with and without TBI. The CUP investigated the nature and frequency of participants' computer and Internet use. Intra-class correlation coefficients and kappa coefficients were conducted to measure reliability of individual CUP items. Descriptive statistics were used to summarize content of responses. Sixteen adults with TBI and 40 adults without TBI were included in the study. All participants were reliable in reporting demographic information, frequency of social communication and leisure activities and computer/Internet habits and usage. Adults with TBI were reliable in 77% of their responses to survey items. Adults without TBI were reliable in 88% of their responses to survey items. The CUP was practical and valuable in capturing information about social, leisure, communication and computer/Internet habits of people with and without TBI. Adults without TBI scored more items with satisfactory reliability overall in their surveys. Future studies may include larger samples and could also include an exploration of how people with/without TBI use other digital communication technologies. This may provide further information on determining technology readiness for people with TBI in therapy programmes.
A consistent modelling methodology for secondary settling tanks: a reliable numerical method.
Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena
2013-01-01
The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
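For a flavour of what one explicit step of a scheme for the convective part of such a model looks like, here is a Lax-Friedrichs update for batch settling in a closed column. The Vesilind-type hindered-settling flux, the zero-flux boundaries, and all parameter values are assumptions for illustration; the compression and dispersion terms, the feed source, and the discontinuous-in-space flux of the full SST model are omitted, as is the paper's actual (more sophisticated) method.

```python
import numpy as np

def settle_step(u, dz, dt, v0=1.0, umax=1.0):
    """One explicit Lax-Friedrichs step for u_t + f(u)_z = 0.

    f(u) = v0 * u * (1 - u/umax)^2 is an assumed hindered-settling flux.
    Zero numerical flux at both ends models a closed batch column.
    """
    f = v0 * u * (1.0 - u / umax) ** 2
    a = dz / dt                                  # Lax-Friedrichs dissipation
    # numerical flux at interior cell interfaces
    F = 0.5 * (f[:-1] + f[1:]) - 0.5 * a * (u[1:] - u[:-1])
    F = np.concatenate(([0.0], F, [0.0]))        # closed boundaries
    return u - dt / dz * (F[1:] - F[:-1])
```

Because the update is in conservation form with zero boundary fluxes, the total mass in the column is preserved exactly up to round-off, which is the property one checks first in such solvers.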
A new method for computing the reliability of consecutive k-out-of-n:F systems
Gökdere Gökhan
2016-01-01
Reliability evaluation in many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations, has applied consecutive k-out-of-n system models. These systems are characterized by logical connections among the components of the systems, placed in lines or circles. In the literature, a great deal of attention has been paid to the study of the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code based on the proposed method to compute the reliability of linear and circular systems with a great number of components.
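For the linear case, the reliability (system fails iff at least k consecutive components fail) can be computed by a small dynamic program over the length of the current run of trailing failed components. This sketch assumes i.i.d. components with survival probability p; it is not the authors' method, and the circular case needs extra conditioning on the components spanning the wrap-around.

```python
def consec_k_out_of_n_F(n, k, p):
    """Reliability of a linear consecutive-k-out-of-n:F system.

    DP state j (0 <= j < k): probability that the system is still alive and
    the last j components failed. Runs reaching length k are absorbed as
    system failure and drop out of the state vector.
    """
    q = 1.0 - p
    state = [1.0] + [0.0] * (k - 1)       # no components placed yet
    for _ in range(n):
        new = [0.0] * k
        new[0] = sum(state) * p           # next component works: run resets
        for j in range(k - 1):
            new[j + 1] = state[j] * q     # next component fails: run grows
        state = new
    return sum(state)                     # surviving probability mass
```

For n = 3, k = 2, p = 0.9 the failure event is {1,2 fail} or {2,3 fail}, giving R = 1 - (0.01 + 0.01 - 0.001) = 0.981, which the DP reproduces.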
A fast, reliable algorithm for computing frequency responses of state space models
Wette, Matt
1991-01-01
Computation of frequency responses for large order systems described by time invariant state space systems often provides a bottleneck in control system analysis. It is shown that banding the A-matrix in the state space model can effectively reduce the computation time for such systems while maintaining reliability in the results produced.
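The underlying computation is one shifted linear solve per frequency point, G(jw) = C (jwI - A)^-1 B + D. A dense NumPy sketch is below; the banding (or, in other methods, Hessenberg) similarity transform of A that produces the speedup described above is not shown, and each solve here is a plain O(n^3) dense solve.

```python
import numpy as np

def freq_response(A, B, C, D, omegas):
    """Frequency response G(jw) = C (jw I - A)^{-1} B + D of a state-space
    model, evaluated by one shifted linear solve per frequency point."""
    n = A.shape[0]
    I = np.eye(n)
    return [C @ np.linalg.solve(1j * w * I - A, B) + D for w in omegas]
```

For the scalar system x' = -x + u, y = x, this gives G(jw) = 1/(jw + 1), so |G(j0)| = 1 and |G(j1)| = 1/sqrt(2).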
Islam, Muhammad Faysal
2013-01-01
Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…
Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers
H. S. Krishna
2004-07-01
The internal representation of numerical data and the speed of their manipulation to generate the desired result, through efficient utilisation of the central processing unit, memory, and communication links, are essential aspects of all high-performance scientific computations. Machine parameters, in particular, reveal accuracy and error bounds of computation, required for performance tuning of codes. This paper reports diagnosis of machine parameters, measurement of computing power of several workstations, serial and parallel computers, and a component-wise test procedure for distributed memory computers. Hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. Cache- and register-blocking techniques result in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces cache inefficiency loss, which is known to be proportional to the number of processors. Of the Linux clusters ANUP16, HPC22, and HPC64, it has been found from the measurement of intrinsic parameters and from an application benchmark of a multi-block Euler code test run that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers, with the added advantage of speed and a high degree of parallelism.
LeoTask: a fast, flexible and reliable framework for computational research
Zhang, Changwang; Zhou, Shi; Chain, Benjamin M
2015-01-01
LeoTask is a Java library for computation-intensive and time-consuming research tasks. It automatically executes tasks in parallel on multiple CPU cores on a computing facility. It uses a configuration file to enable automatic exploration of parameter space and flexible aggregation of results, and therefore allows researchers to focus on programming the key logic of a computing task. It also supports reliable recovery from interruptions, dynamic and cloneable networks, and integration with th...
Chen, D.J.
1988-01-01
The literature is abundant with combinatorial reliability analysis of communication networks and fault-tolerant computer systems. However, it is very difficult to formulate reliability indexes using combinatorial methods. These limitations have led to the development of time-dependent reliability analysis using stochastic processes. In this research, time-dependent reliability-analysis techniques using Dataflow Graphs (DFG) are developed. The chief advantages of DFG models over other models are their compactness, structural correspondence with the systems, and general amenability to direct interpretation. This makes the verification of the correspondence of the dataflow-graph representation to the actual system possible. Several DFG models are developed and used to analyze the reliability of communication networks and computer systems. Specifically, Stochastic Dataflow Graphs (SDFG), both discrete-time and continuous-time models, are developed and used to compute time-dependent reliability of communication networks and computer systems. The repair and coverage phenomena of communication networks are also analyzed using SDFG models.
Baker, Nancy A; Cook, James R; Redfern, Mark S
2009-01-01
This paper describes the inter-rater and intra-rater reliability, and the concurrent validity of an observational instrument, the Keyboard Personal Computer Style instrument (K-PeCS), which assesses stereotypical postures and movements associated with computer keyboard use. Three trained raters independently rated the video clips of 45 computer keyboard users to ascertain inter-rater reliability, and then re-rated a sub-sample of 15 video clips to ascertain intra-rater reliability. Concurrent validity was assessed by comparing the ratings obtained using the K-PeCS to scores developed from a 3D motion analysis system. The overall K-PeCS had excellent reliability [inter-rater: intra-class correlation coefficients (ICC)=.90; intra-rater: ICC=.92]. Most individual items on the K-PeCS had from good to excellent reliability, although six items fell below ICC=.75. Those K-PeCS items that were assessed for concurrent validity compared favorably to the motion analysis data for all but two items. These results suggest that most items on the K-PeCS can be used to reliably document computer keyboarding style.
An efficient numerical integral in three-dimensional electromagnetic field computations
Whetten, Frank L.; Liu, Kefeng; Balanis, Constantine A.
1990-01-01
An improved algorithm for efficiently computing a sinusoid and an exponential integral commonly encountered in method-of-moments solutions is presented. The new algorithm has been tested for accuracy and computer execution time against both numerical integration and other existing numerical algorithms, and has outperformed them. Typical execution time comparisons on several computers are given.
EVOLVE : a Bridge between Probability, Set Oriented Numerics, and Evolutionary Computation II
Coello, Carlos; Tantar, Alexandru-Adrian; Tantar, Emilia; Bouvry, Pascal; Moral, Pierre; Legrand, Pierrick; EVOLVE 2012
2013-01-01
This book comprises a selection of papers from EVOLVE 2012, held in Mexico City, Mexico. The aim of EVOLVE is to build a bridge between probability, set oriented numerics and evolutionary computing, so as to identify new common and challenging research aspects. The conference is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. EVOLVE is intended to unify theory-inspired methods and cutting-edge techniques ensuring performance guarantee factors. By gathering researchers with different backgrounds, a unified view and vocabulary can emerge in which the theoretical advancements may echo in different domains. In summary, EVOLVE focuses on challenging aspects arising at the passage from theory to new paradigms, and aims to provide a unified view while raising questions related to reliability, performance guarantees and modeling. The papers of EVOLVE 2012 make a contribution to this goal.
COMPUTER NUMERICAL SIMULATION OF MECHANICAL PROPERTIES OF TUNGSTEN HEAVY ALLOYS
Anonymous
1999-01-01
A microstructure model of tungsten heavy alloys has been developed. On the basis of the model and several assumptions, the macro-mechanical properties of 90W heavy alloy under quasi-static tensile deformation and the effects of microstructural parameters (mechanical properties of the matrix phase and tungsten content) on them have been analyzed by computer numerical simulation. The mechanical properties of the alloy have been found to be dependent on the mechanical parameters of the matrix phase. As the elastic modulus and yield strength of the matrix phase increase, the tensile strength of the alloy increases, while the elongation decreases. If the mechanical parameters except the tensile strength of the matrix phase are constant, both the tensile strength and the elongation of the alloy increase linearly with the increase of tensile strength of the matrix phase. The properties of the alloy are very sensitive to the hardening modulus of the matrix phase. As the hardening modulus increases, both the tensile strength and the elongation of the alloy exponentially decrease. The elongation of the alloys monotonically decreases with the increase of tungsten content, while the decrease of tensile strength is not monotonic. When the tungsten content < 85%, the strength of tungsten heavy alloys increases with the increase of tungsten content, while it decreases when the tungsten content > 85%. The maximum of tensile strength of the alloys appears at the tungsten content of 85%. The results showed that a binder phase with a higher strength and a lower hardening modulus is advantageous to obtaining an optimum combination of mechanical properties of tungsten heavy alloys.
The reliable solution and computation time of variable parameters Logistic model
Pengfei, Wang
2016-01-01
The reliable computation time (RCT, marked as Tc) when applying a double-precision computation of a variable-parameters logistic map (VPLM) is studied. First, using the method proposed, the reliable solutions for the logistic map are obtained. Second, for a time-dependent non-stationary parameters VPLM, 10000 samples of reliable experiments are constructed, and the mean Tc is then computed. The results indicate that for each different initial value, the Tcs of the VPLM are generally different. However, the mean Tc tends to a constant value once the sample number is large enough. The maximum, minimum and probability distribution function of Tc are also obtained, which can help us to identify the robustness of applying a nonlinear time series theory to forecasting while using the VPLM output. In addition, the Tc of the fixed-parameter experiments of the logistic map was obtained, and the results suggested that this Tc matches the value predicted by the theoretical formula.
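A rough stand-in for such a reliable-computation-time estimate compares float64 logistic iterates against a high-precision reference orbit and counts the steps until they diverge. The 50-digit reference precision, the tolerance, and the divergence criterion below are assumptions for illustration, not the paper's exact definitions, and a fixed parameter r = 4 is used rather than the variable-parameter map.

```python
from decimal import Decimal, getcontext

def reliable_steps(x0, tol=Decimal("1e-3"), max_n=200):
    """Steps until float64 iterates of x -> 4x(1-x) drift more than tol
    from a 50-digit reference orbit started at the same x0."""
    getcontext().prec = 50
    hi = Decimal(x0)                       # high-precision reference orbit
    lo = float(x0)                         # float64 orbit
    for n in range(max_n):
        hi = 4 * hi * (1 - hi)
        lo = 4.0 * lo * (1.0 - lo)
        if abs(hi - Decimal(lo)) > tol:    # Decimal(float) is exact
            return n + 1
    return max_n
```

Since round-off of order 1e-16 is amplified by roughly a factor of 2 per step for this chaotic map, the count typically lands in the range of a few dozen steps.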
Shooman, Martin L.
1991-01-01
Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer aided design programs. In response to this need, NASA has developed a suite of computer aided reliability modeling programs beginning with CARE 3 and including a group of new programs such as: HARP, HARP-PC, Reliability Analysts Workbench (Combination of model solvers SURE, STEM, PAWS, and common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied and how well the user can model systems using this program is investigated. One of the important objectives will be to study how user friendly this program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students who used HARP in two graduate courses are described. Some brief comparisons were made with the ARIES program which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course no answer can be any more accurate than the fidelity of the model, thus an Appendix is included which discusses modeling accuracy. A broad viewpoint is taken and all problems which occurred in the use of HARP are discussed. Such problems include: computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, accuracy problems, etc.
Pearson, Adam M; Spratt, Kevin F; Genuario, James; McGough, William; Kosman, Katherine; Lurie, Jon; Sengupta, Dilip K
2011-04-01
Comparison of intra- and interobserver reliability of digitized manual and computer-assisted intervertebral motion measurements and classification of "instability." To determine if computer-assisted measurement of lumbar intervertebral motion on flexion-extension radiographs improves reliability compared with digitized manual measurements. Many studies have questioned the reliability of manual intervertebral measurements, although few have compared the reliability of computer-assisted and manual measurements on lumbar flexion-extension radiographs. Intervertebral rotation, anterior-posterior (AP) translation, and change in anterior and posterior disc height were measured with a digitized manual technique by three physicians and by three other observers using computer-assisted quantitative motion analysis (QMA) software. Each observer measured 30 sets of digital flexion-extension radiographs (L1-S1) twice. Shrout-Fleiss intraclass correlation coefficients for intra- and interobserver reliabilities were computed. The stability of each level was also classified (instability defined as >4 mm AP translation or 10° rotation), and the intra- and interobserver reliabilities of the two methods were compared using adjusted percent agreement (APA). Intraobserver reliability intraclass correlation coefficients were substantially higher for the QMA technique than the digitized manual technique across all measurements: rotation 0.997 versus 0.870, AP translation 0.959 versus 0.557, change in anterior disc height 0.962 versus 0.770, and change in posterior disc height 0.951 versus 0.283. The same pattern was observed for interobserver reliability (rotation 0.962 vs. 0.693, AP translation 0.862 vs. 0.151, change in anterior disc height 0.862 vs. 0.373, and change in posterior disc height 0.730 vs. 0.300). The QMA technique was also more reliable for the classification of "instability." Intraobserver APAs ranged from 87 to 97% for QMA versus 60% to 73% for digitized manual
An Algorithm for Optimized Time, Cost, and Reliability in a Distributed Computing System
Pankaj Saxena
2013-03-01
Distributed Computing System (DCS) refers to multiple computer systems working on a single problem. A distributed system consists of a collection of autonomous computers connected through a network, which enables the computers to coordinate their activities and to share the resources of the system. In distributed computing, a single problem is divided into many parts, and each part is solved by a different computer. As long as the computers are networked, they can communicate with each other to solve the problem. A DCS consists of multiple software components that reside on multiple computers but run as a single system. The computers in a distributed system can be physically close together and connected by a local network, or they can be geographically distant and connected by a wide area network. The ultimate goal of distributed computing is to maximize performance in a time-effective, cost-effective, and reliability-effective manner. In a DCS the whole workload is divided into small and independent units, called tasks, which are allocated onto the available processors. It also ensures fault tolerance and enables resource accessibility in the event that one of the components fails. The problem addressed is that of assigning tasks in a distributed computing system: given a set of communicating tasks to be executed on a distributed system with a set of processors, to which processor should each task be assigned to obtain more reliable results in less time and at lower cost? The assignment of the modules of tasks is done statically. In this paper an efficient algorithm for task allocation in terms of optimum time, cost, or reliability is presented for the case where the number of tasks is greater than the number of processors.
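As a baseline for what such an allocation algorithm must beat, the static assignment problem can be solved exhaustively for a tiny instance: charge each task its execution cost on its processor, plus a communication cost whenever two communicating tasks land on different processors. The cost inputs below are hypothetical, and the enumeration is exponential in the number of tasks, so this is illustrative only, not the paper's algorithm.

```python
from itertools import product

def best_assignment(exec_cost, comm_cost):
    """Exhaustive static task assignment for a small DCS instance.

    exec_cost[t][p]: cost of running task t on processor p.
    comm_cost[(a, b)]: cost incurred when communicating tasks a and b
    are placed on different processors.
    Returns (minimum total cost, task-to-processor tuple).
    """
    n_tasks, n_procs = len(exec_cost), len(exec_cost[0])
    best, best_map = float("inf"), None
    for assign in product(range(n_procs), repeat=n_tasks):
        cost = sum(exec_cost[t][assign[t]] for t in range(n_tasks))
        cost += sum(c for (a, b), c in comm_cost.items()
                    if assign[a] != assign[b])
        if cost < best:
            best, best_map = cost, assign
    return best, best_map
```

Heuristics like the one the paper proposes matter precisely because this search space grows as (number of processors)^(number of tasks).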
Optimal reliability allocation for large software projects through soft computing techniques
Madsen, Henrik; Albeanu, Grigore; Albu, Razvan-Daniel
2012-01-01
or maximizing the system reliability subject to budget constraints. These kinds of optimization problems have been considered in both deterministic and stochastic frameworks in the literature. Recently, the intuitionistic-fuzzy optimization approach was considered as a successful soft computing modelling approach. … Firstly, a review of existing soft computing approaches to optimization is given. The main section extends the results, considering self-organizing migrating algorithms for solving intuitionistic-fuzzy optimization problems attached to complex fault-tolerant software architectures which proved…
Reliable multicast for the Grid: a case study in experimental computer science.
Nekovee, Maziar; Barcellos, Marinho P; Daw, Michael
2005-08-15
In its simplest form, multicast communication is the process of sending data packets from a source to multiple destinations in the same logical multicast group. IP multicast allows the efficient transport of data through wide-area networks, and its potentially great value for the Grid has been highlighted recently by a number of research groups. In this paper, we focus on the use of IP multicast in Grid applications, which require high-throughput reliable multicast. These include Grid-enabled computational steering and collaborative visualization applications, and wide-area distributed computing. We describe the results of our extensive evaluation studies of state-of-the-art reliable-multicast protocols, which were performed on the UK's high-speed academic networks. Based on these studies, we examine the ability of current reliable multicast technology to meet the Grid's requirements and discuss future directions.
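The group-membership mechanism underlying IP multicast, as described above, can be sketched with the Python standard library. This is a generic illustration of joining a multicast group, not code from the evaluated Grid protocols; the group address and port are made up:

```python
import socket
import struct

GROUP = "224.1.1.1"   # illustrative multicast group address
PORT = 5007           # illustrative port

def make_receiver(group=GROUP, port=PORT):
    # UDP socket subscribed to a multicast group: any packet sent to the
    # group address is delivered to every subscribed socket.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_MEMBERSHIP takes the group address plus the local interface
    # (INADDR_ANY here), packed as two 4-byte network-order addresses.
    mreq = struct.pack("4s4s",
                       socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

Plain IP multicast as shown is unreliable (UDP); the protocols evaluated in the paper layer retransmission and flow-control schemes on top of exactly this kind of socket.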
Agaoglu, Esmahan; Ceyhan, Esra; Ceyhan, Aykut; Simsek, Yucel
2008-01-01
This study investigates the validity and reliability of the "Computer Anxiety Scale" (Ceyhan & Gurcan Namlu, 2000) for educational administrators. The data gathered from 143 educational administrators of state schools located in Eskisehir show that the scale consists of 2 factors. The first of these factors, affective anxiety…
Liudong Xing
2006-01-01
Full Text Available Imperfect coverage (IPC) occurs when a malicious component failure causes extensive damage due to inadequate fault detection, fault location or fault recovery. Common-cause failures (CCF) are multiple dependent component failures within a system due to a shared root cause. Both imperfect coverage and common-cause failures can exist in distributed computer systems, can contribute significantly to the overall system unreliability, and can complicate the reliability analysis. In this study, we propose an efficient approach to the reliability analysis of distributed computer systems (DCS) with both IPC and CCF. The proposed methodology decouples the effects of IPC and CCF from the combinatorics of the solution. The resulting approach is applicable to the computationally efficient binary decision diagram (BDD) based method for the reliability analysis of DCS. We provide a concrete analysis of an example DCS to illustrate the application and advantages of our approach. Because it considers IPC and CCF, our approach can evaluate a wider class of DCS than existing approaches. Due to the nature of the BDD and the separation of IPC and CCF from the solution combinatorics, our approach has high computational efficiency and is easy to implement, which means that it can be applied to the accurate reliability analysis of large-scale DCS subject to IPC and CCF. DCS without IPC or CCF appear as special cases of our approach.
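The effect of imperfect coverage can be seen even in a tiny system. The sketch below (my own illustrative example, using direct enumeration rather than the paper's BDD method) computes the reliability of a two-component parallel system where an uncovered failure brings the whole system down:

```python
from itertools import product

def parallel_reliability(r, c):
    """Closed-form reliability of a 1-out-of-2 parallel system: each component
    works with probability r; a failure is 'covered' (safely isolated) with
    probability c, while an uncovered failure crashes the system."""
    return r * r + 2 * c * r * (1 - r)

def enumerate_reliability(r, c):
    # Brute-force check over the three states of each component:
    # up, failed-covered, failed-uncovered.
    states = [("up", r),
              ("covered", c * (1 - r)),
              ("uncovered", (1 - c) * (1 - r))]
    total = 0.0
    for (s1, p1), (s2, p2) in product(states, repeat=2):
        if "uncovered" not in (s1, s2) and "up" in (s1, s2):
            total += p1 * p2
    return total

print(parallel_reliability(0.9, 0.95))  # -> 0.981
```

With perfect coverage (c = 1) the parallel pair is more reliable than a single component; as c drops, the uncovered-failure term erodes and can even reverse that advantage, which is why IPC cannot be ignored in the analysis.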
Migneault, Gerard E.
1987-01-01
Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.
Computational Notes on the Numerical Analysis of Galactic Rotation Curves
Scelza, G
2014-01-01
In this paper we present a brief discussion of the salient points of the computational analysis underlying the paper [StSc]. The computational and data analysis have been carried out with the Mathematica® software and were presented at the Mathematica Italia User Group Meeting 2011.
Numerical Computation of Large Amplitude Internal Solitary Waves,
1981-03-20
provide adequate resolution. All computations were performed on a CDC Cyber 176 computer, and it takes slightly less than one CPU second to obtain a… H. Segur, Long Internal Waves in Fluids of Great Depth, Studies in Applied Math., 62 (1980), pp. 249-262. [3] E. Allgower and K. Georg, Simplicial and…
Numerical computation of nonlinear normal modes in mechanical engineering
Renson, L.; Kerschen, G.; Cochelin, B.
2016-03-01
This paper reviews the recent advances in computational methods for nonlinear normal modes (NNMs). Different algorithms for the computation of undamped and damped NNMs are presented, and their respective advantages and limitations are discussed. The methods are illustrated using various applications ranging from low-dimensional weakly nonlinear systems to strongly nonlinear industrial structures.
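The frequency-energy dependence that NNM computations capture can be illustrated on a toy system. The sketch below is my own example, not one of the paper's algorithms: it integrates an undamped, unforced Duffing oscillator x'' + x + 0.5 x³ = 0 with classical RK4 and estimates the oscillation period from downward zero crossings, exhibiting the hardening behavior (period shrinks as amplitude grows):

```python
def rhs(x, v):
    # Undamped, unforced Duffing oscillator: x'' + x + 0.5*x**3 = 0.
    return v, -(x + 0.5 * x**3)

def duffing_period(amplitude, dt=1e-3, t_max=20.0):
    """Integrate from rest at the given amplitude and return the time between
    the first two downward zero crossings of x, i.e. one full period."""
    x, v, t = amplitude, 0.0, 0.0
    crossings = []
    while t < t_max and len(crossings) < 2:
        # Classical fourth-order Runge-Kutta step.
        k1x, k1v = rhs(x, v)
        k2x, k2v = rhs(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = rhs(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = rhs(x + dt * k3x, v + dt * k3v)
        x_new = x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v_new = v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        if x > 0 >= x_new:  # downward zero crossing, located by interpolation
            crossings.append(t + dt * x / (x - x_new))
        x, v, t = x_new, v_new, t + dt
    return crossings[1] - crossings[0]

# Small amplitude: period near the linear value 2*pi; large amplitude: shorter.
print(duffing_period(0.1), duffing_period(1.0))
```

This amplitude-dependent period is exactly the kind of energy dependence that makes nonlinear normal modes differ from their linear counterparts.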
Benchmark Numerical Toolkits for High Performance Computing Project
National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...
Computational Fluid Dynamics. [numerical methods and algorithm development
1992-01-01
This collection of papers was presented at the Computational Fluid Dynamics (CFD) Conference held at Ames Research Center in California on March 12 through 14, 1991. It gives an overview of CFD activities at NASA Lewis Research Center. The main thrust of computational work at Lewis is aimed at propulsion systems. Specific issues related to propulsion CFD and associated modeling are presented, along with examples of results obtained with the most recent algorithm developments.
Pulse cleaning flow models and numerical computation of candle ceramic filters
无
2002-01-01
Analytical and numerical models are developed for the reverse pulse cleaning system of candle ceramic filters. A standard turbulence model is shown to be suitable for the design computation of the reverse pulse cleaning system, based on experimental and one-dimensional computational results. The computed results can be used to guide the design of the reverse pulse cleaning system, in particular the optimum Venturi geometry. From the computed results, general conclusions and design methods are obtained.
Numerical computation of the critical energy constant for two-dimensional Boussinesq equations
Kolkovska, N.; Angelow, K.
2015-10-01
The critical energy constant is of significant interest for the theoretical and numerical analysis of Boussinesq type equations. In the one-dimensional case this constant is evaluated exactly. In this paper we propose a method for numerical evaluation of this constant in the multi-dimensional cases by computing the ground state. Aspects of the numerical implementation are discussed and many numerical results are demonstrated.
Fault and Defect Tolerant Computer Architectures: Reliable Computing with Unreliable Devices
2006-08-31
and polished using chemical-mechanical polishing (CMP) (Diagram 3). 5. Wet etching is done using hot H3PO4, then chemical dry etching is used to… modelled as a diode with a switchable threshold (i.e., turn-on) voltage. The switches are set or reset by electrochemical reduction or oxidation of the… characterizing the reliability of the overall system are examined. 2.4.1.1 Key Definitions. Error is a manifestation of a fault in the system, in
Advanced Numerical Methods for Computing Statistical Quantities of Interest
2014-07-10
illustrations. ⇒ White noise random fields are in ubiquitous use in practice for modeling uncertainty in complex systems, despite the fact that the… differential equations with jumps for a class of nonlocal diffusion problems; submitted. We developed a novel numerical approach for linear nonlocal… differential equations (BSDEs) driven by Lévy processes with jumps. The nonlocal diffusion problem under consideration was converted into a BSDE, for which
International Conference on Numerical Grid Generation in Computational Fluid Dynamics
1989-04-30
Numerical Methods for Computing Turbulence-Induced Noise
2005-12-16
consider the finite-dimensional subspace V_h1 ⊂ V_h. Let v_h1 = P_h1 u be the optimal representation of u in V_h1 and P_h1 : V → V_h1 be the appropriate mapping. We consider the following numerical method, which is obtained by replacing h with h1 in (2.4): find u_h1 ∈ V_h1 such that B(w_h1, u_h1) + M(w_h1, u_h1, f)… the same functional form of the model that leads to the optimal solution on V_h also leads to the optimal solution on V_h1. Thus, requiring u_h1 = v_h…
Ultra-reliable computer systems: an integrated approach for application in reactor safety systems
Chisholm, G.H.
1985-01-01
Improvements in operation and maintenance of nuclear reactors can be realized with the application of computers in the reactor control systems. In the context of this paper a reactor control system encompasses the control aspects of the Reactor Safety System (RSS). Equipment qualification for application in reactor safety systems requires a rigorous demonstration of reliability. For the purpose of this paper, the reliability demonstration will be divided into two categories. These categories are demonstrations of compliance with respect to (a) environmental; and (b) functional design constrains. This paper presents an approach for the determination of computer-based RSS respective to functional design constraints only. It is herein postulated that the design for compliance with environmental design constraints is a reasonably definitive problem and within the realm of available technology. The demonstration of compliance with design constraints respective to functionality, as described herein, is an extension of available technology and requires development.
Degtyarev Alexander
2016-01-01
Full Text Available The paper deals with the computer implementation of direct computational experiments in fluid mechanics, constructed on the basis of the approach developed by the authors. The proposed approach allows the use of explicit numerical scheme, which is an important condition for increasing the effciency of the algorithms developed by numerical procedures with natural parallelism. The paper examines the main objects and operations that let you manage computational experiments and monitor the status of the computation process. Special attention is given to a realization of tensor representations of numerical schemes for direct simulation; b realization of representation of large particles of a continuous medium motion in two coordinate systems (global and mobile; c computing operations in the projections of coordinate systems, direct and inverse transformation in these systems. Particular attention is paid to the use of hardware and software of modern computer systems.
Cosmic Reionization On Computers: Numerical and Physical Convergence
Gnedin, Nickolay Y
2016-01-01
In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar m...
Exact Symbolic-Numeric Computation of Planar Algebraic Curves
Berberich, Eric; Kobel, Alexander; Sagraloff, Michael
2012-01-01
We present a novel certified and complete algorithm to compute arrangements of real planar algebraic curves. It provides a geometric-topological analysis of the decomposition of the plane induced by a finite number of algebraic curves in terms of a cylindrical algebraic decomposition. From a high-level perspective, the overall method splits into two main subroutines, namely an algorithm denoted Bisolve to isolate the real solutions of a zero-dimensional bivariate system, and an algorithm denoted GeoTop to analyze a single algebraic curve. Compared to existing approaches based on elimination techniques, we considerably improve the corresponding lifting steps in both subroutines. As a result, generic position of the input system is never assumed, and thus our algorithm never demands for any change of coordinates. In addition, we significantly limit the types of involved exact operations, that is, we only use resultant and gcd computations as purely symbolic operations. The latter results are achieved by combini...
Numerical Computational Technique for Scattering from Underwater Objects
T. Ratna Mani
2013-01-01
Full Text Available This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scatter has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric study has been carried out by considering different pulse frequencies, pulse lengths, pulse types (CW, LFM, SFM), sampling frequencies, as well as different sizes and shapes of the scattering body and grid sizes. It has been observed that increasing the pulse frequency, sampling frequency and number of elements leads to improved results. However, a good degree of accuracy has been achieved with element size less than one third of the wavelength. The experimental result for the underwater object has been found very close to the simulated result. This technique is useful for computing forward scatter for inverse scattering applications, as well as for generating forward scatter of very narrow and wide band signals of any pulse type and body shape. Defence Science Journal, 2013, 63(1), pp. 119-126, DOI: http://dx.doi.org/10.14429/dsj.63.779
Numerical Computational Technique for Scattering from Underwater Objects
T. Ratna Mani
2013-01-01
Full Text Available This paper presents a computational technique for mono-static and bi-static scattering from underwater objects of different shapes, such as submarines. The scatter has been computed using the finite element time domain (FETD) method, based on the superposition of reflections from the different elements reaching the receiver at a particular instant in time. The results calculated by this method have been verified against published results based on the ramp response technique. An in-depth parametric study has been carried out by considering different pulse frequencies, pulse lengths, pulse types (CW, LFM, SFM), sampling frequencies, as well as different sizes and shapes of the scattering body and grid sizes. It has been observed that increasing the pulse frequency, sampling frequency and number of elements leads to improved results. However, a good degree of accuracy has been achieved with element size less than one third of the wavelength. The experimental result for the underwater object has been found very close to the simulated result. This technique is useful for computing forward scatter for inverse scattering applications, as well as for generating forward scatter of very narrow and wide band signals of any pulse type and body shape.
Katz, Jonathan E
2017-01-01
Laboratories tend to be amenable environments for long-term reliable operation of scientific measurement equipment. Indeed, it is not uncommon to find equipment 5, 10, or even 20+ years old still being routinely used in labs. Unfortunately, the Achilles heel for many of these devices is the control/data acquisition computer. Often these computers run older operating systems (e.g., Windows XP) and, while they might only use standard network, USB or serial ports, they require proprietary software to be installed. Even if the original installation disks can be found, reinstallation is a burdensome process fraught with "gotchas" that can derail it: lost license keys, incompatible hardware, forgotten configuration settings, etc. If you have legacy instrumentation running, the computer is the ticking time bomb waiting to put a halt to your operation. In this chapter, I describe how to virtualize your currently running control computer. This virtualized computer "image" is easy to maintain, easy to back up and easy to redeploy. I have used this multiple times in my own lab to greatly improve the robustness of my legacy devices. After completing the steps in this chapter, you will have your original control computer as well as a virtual instance of that computer with all the software installed, ready to control your hardware should your original computer ever be decommissioned.
Donald D. Anderson
2012-01-01
Full Text Available Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and inter-rater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater reliability (0.84–0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
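The Shrout-Fleiss intraclass correlation used above can be computed from a two-way ANOVA decomposition. The sketch below is a generic implementation of ICC(2,1) (two-way random effects, absolute agreement, single rater), not the study's analysis code, and the rating matrices are made up:

```python
def icc_2_1(ratings):
    """Shrout-Fleiss ICC(2,1) for `ratings`: a list of rows (subjects),
    each a list of scores from the same k raters."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    # Mean squares for rows (subjects), columns (raters), and residual error.
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three raters in perfect agreement on four subjects -> ICC of 1.0
print(icc_2_1([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]))
```

A systematic offset between raters (e.g. one rater always scoring one point higher) lowers ICC(2,1), since the "absolute agreement" form penalizes rater bias through the MSC term.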
Zhang Liang; Xue Songbai; Lu Fangyan; Han Zongjie; Wang Jianxin
2008-01-01
This paper presents a study of the reliability of SnPb and lead-free soldered joints of PLCC devices with different lead counts under three kinds of temperature cycle profiles, based on the non-linear finite element method. By analyzing the stress of the soldered joints, it is found that the largest stress occurs in the area between the soldered joints and the leads, and the analysis results indicate that the von Mises stress at that location increases slightly with the lead count. For a PLCC with 84 leads, the soldered joints were modeled under three typical loadings (273-398 K, 218-398 K and 198-398 K) in order to study the influence of acceleration factors on the reliability of the soldered joints. The equivalent plastic strain of three different solder alloys (Sn3.8Ag0.7Cu, Sn3.5Ag and Sn37Pb) was also estimated.
1994-01-01
This report summarizes research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, fluid mechanics, and computer science during the period October 1, 1993 through March 31, 1994. The major categories of the current ICASE research program are: (1) applied and numerical mathematics, including numerical analysis and algorithm development; (2) theoretical and computational research in fluid mechanics in selected areas of interest to LaRC, including acoustics and combustion; (3) experimental research in transition and turbulence and aerodynamics involving LaRC facilities and scientists; and (4) computer science.
Graphic interface for numerical commands on the USB port of PC compatible computers
Popa Elena
2017-01-01
Full Text Available Computers are increasingly used in the present technological processes. Several numerical input/ output electronic modules were designed and made to support computer automated technological process in the wood industry. This paper presents the software for these modules, built in the Delphi language and aimed to obtain numerical commands by using the USB port of a computer. As modern computers are no longer provided with parallel ports, a K8055 USB Experiment Interface Board manufactured by VELLEMAN was used. The board includes a PIC16C745-IP microcontroller, which enables communication via specific software.
Numerical methods of computation of singular and hypersingular integrals
I. V. Boikov
2001-01-01
and technology one is faced with the necessity of calculating various singular integrals. Calculation of singular integrals in analytical form is possible only in exceptional cases. Therefore, approximate methods for calculating singular integrals are an actively developing direction of computational mathematics. This review is devoted to algorithms, optimal with respect to accuracy, for the calculation of singular integrals with fixed singularities, Cauchy and Hilbert kernels, and polysingular and multi-dimensional singular integrals. A separate section is devoted to accuracy-optimal algorithms for the calculation of hypersingular integrals.
COSMIC REIONIZATION ON COMPUTERS: NUMERICAL AND PHYSICAL CONVERGENCE
Gnedin, Nickolay Y., E-mail: gnedin@fnal.gov [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637 (United States)
2016-04-10
In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite-resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ∼20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, such as stellar masses and metallicities. Yet other properties of model galaxies, for example, their H i masses, are recovered in the weakly converged runs only within a factor of 2.
The NumPy array: a structure for efficient numerical computation
Van Der Walt, Stefan; Varoquaux, Gaël
2011-01-01
In the Python world, NumPy arrays are the standard representation for numerical data. Here, we show how these arrays enable efficient implementation of numerical computations in a high-level language. Overall, three techniques are applied to improve performance: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts. We first present the NumPy array structure, then show how to use it for efficient computation, and finally how to share array data with other libraries.
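The three techniques the abstract names can be shown in a few lines. This is my own illustrative snippet, not code from the paper:

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# 1. Vectorizing: one array expression instead of a Python-level loop.
y = x * 2.0 + 1.0

# 2. Avoiding copies: basic slicing returns a view sharing x's memory.
view = x[::2]
assert view.base is x          # a view, no data copied

# 3. Minimizing operations/temporaries: in-place ops reuse y's buffer.
y *= 0.5                        # no temporary array allocated
np.add(y, 1.0, out=y)           # explicit output buffer, still no copy

print(y[:3])  # -> [1.5 2.5 3.5]
```

The vectorized expressions run in compiled loops over contiguous memory, which is where the performance gains over pure-Python iteration come from.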
The NumPy array: a structure for efficient numerical computation
Van der Walt, Stefan; Colbert, S. Chris; Varoquaux, Gaël
2011-01-01
In the Python world, NumPy arrays are the standard representation for numerical data. Here, we show how these arrays enable efficient implementation of numerical computations in a high-level language. Overall, three techniques are applied to improve performance: vectorizing calculations, avoiding copying data in memory, and minimizing operation counts. We first present the NumPy array structure, then show how to use it for efficient computation, and finally how to shar...
An Efficient Method for Solving Spread Option Pricing Problem: Numerical Analysis and Computing
R. Company
2016-01-01
Full Text Available This paper deals with the numerical analysis and computation of the spread option pricing problem, described by a partial differential equation in two spatial variables. Both European and American cases are treated. Taking advantage of a cross-derivative removal technique, an explicit difference scheme is developed that retains the benefits of the one-dimensional finite difference method, preserving positivity, accuracy, and computational time efficiency. Numerical results illustrate the interest of the approach.
Reliability Analysis-Based Numerical Calculation of Metal Structure of Bridge Crane
Wenjun Meng
2013-01-01
Full Text Available The study introduces a finite element model of the DQ75t-28m bridge crane metal structure and performs a static finite element analysis to obtain the stress response at the dangerous point of the metal structure under the most extreme condition. Simulated samples of the random variables and of the stress at the dangerous point were obtained through an orthogonal design. Then, the nonlinear mapping capability of a BP neural network was used to train an explicit expression for the stress as a function of the random variables. Combined with random perturbation theory and the first-order second-moment (FOSM) method, the study analyzed the reliability of the metal structure and its sensitivity. In conclusion, we established a novel method for accurate quantitative analysis and design of bridge crane metal structures.
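The FOSM step can be sketched for the simplest case of a linear limit state g = R − S with independent normal resistance R and load S: the reliability index is β = (μ_R − μ_S)/√(σ_R² + σ_S²) and the failure probability is Φ(−β). The numbers below are illustrative, not the crane model from the paper:

```python
import math

def fosm_beta(mu_r, sigma_r, mu_s, sigma_s):
    # First-order second-moment reliability index for g = R - S,
    # with R (resistance) and S (load effect) independent and normal.
    return (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

def failure_probability(beta):
    # Pf = Phi(-beta), the standard normal CDF expressed via erfc.
    return 0.5 * math.erfc(beta / math.sqrt(2))

# Hypothetical strength and stress statistics (e.g. in MPa).
beta = fosm_beta(mu_r=350.0, sigma_r=25.0, mu_s=250.0, sigma_s=30.0)
print(beta, failure_probability(beta))
```

For the nonlinear, implicit stress function of the paper, the neural-network surrogate supplies the gradients that FOSM needs; the linear case above is just the limiting illustration.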
Kako, T.; Watanabe, T. [eds.
1999-04-01
This is the proceedings of 'Study on Numerical Methods Related to Plasma Confinement' held at the National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria with their stability properties are presented. There are also various talks on mathematical as well as numerical analyses related to computational methods for fluid dynamics and plasma physics. The 14 papers are indexed individually. (J.P.N.)
New numerical analysis method in computational mechanics: composite element method
无
2000-01-01
A new type of FEM, called CEM (composite element method), is proposed to solve the static and dynamic problems of engineering structures with high accuracy and efficiency. The core of this method is to define two sets of coordinate systems for describing the DOFs after discretizing the structure, i.e. the nodal coordinate system U_FEM(ξ) for employing the conventional FEM, and the field coordinate system U_CT(ξ) for utilizing classical theory. Coupling these two sets of functional expressions then yields the composite displacement field U(ξ) of the CEM. The computation of the stiffness and mass matrices can follow the conventional procedure of FEM. Since the CEM inherits some good properties of the conventional FEM and of classical analytical methods, it has powerful versatility for various complex geometric shapes and excellent approximation properties. Many examples are presented to demonstrate the capability of the CEM.
Numerical Relativity As A Tool For Computational Astrophysics
Seidel, E; Seidel, Edward; Suen, Wai-Mo
1999-01-01
The astrophysics of compact objects, which requires Einstein's theory of general relativity for understanding phenomena such as black holes and neutron stars, is attracting increasing attention. In general relativity, gravity is governed by an extremely complex set of coupled, nonlinear, hyperbolic-elliptic partial differential equations. The largest parallel supercomputers are finally approaching the speed and memory required to solve the complete set of Einstein's equations for the first time since they were written over 80 years ago, allowing one to attempt full 3D simulations of such exciting events as colliding black holes and neutron stars. In this paper we review the computational effort in this direction, and discuss a new 3D multi-purpose parallel code called ``Cactus'' for general relativistic astrophysics. Directions for further work are indicated where appropriate.
New numerical analysis method in computational mechanics: composite element method
曾攀
2000-01-01
A new type of FEM, called CEM (composite element method), is proposed to solve the static and dynamic problems of engineering structures with high accuracy and efficiency. The core of this method is to define two sets of coordinate systems for describing the DOFs after discretizing the structure, i.e. the nodal coordinate system U_FEM(ζ) for employing the conventional FEM, and the field coordinate system U_CT(ζ) for utilizing classical theory. Coupling these two sets of functional expressions then yields the composite displacement field U(ζ) of the CEM. The computation of the stiffness and mass matrices can follow the conventional procedure of FEM. Since the CEM inherits some good properties of the conventional FEM and of classical analytical methods, it has powerful versatility for various complex geometric shapes and excellent approximation properties. Many examples are presented to demonstrate the capability of the CEM.
Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial
Kevin A. Hallgren
2012-02-01
Full Text Available Many research designs require the assessment of inter-rater reliability (IRR) to demonstrate consistency among observational ratings provided by multiple coders. However, many studies use incorrect statistical procedures, fail to fully report the information necessary to interpret their results, or do not address how IRR affects the power of their subsequent analyses for hypothesis testing. This paper provides an overview of methodological issues related to the assessment of IRR with a focus on study design, selection of appropriate statistics, and the computation, interpretation, and reporting of some commonly used IRR statistics. Computational examples include SPSS and R syntax for computing Cohen's kappa and intra-class correlations to assess IRR.
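As a companion to the SPSS and R syntax the tutorial describes, here is a minimal Python sketch of Cohen's kappa for two coders; the rating vectors are made-up illustrative data, not from the paper:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal codes to the same items:
    observed agreement corrected for agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of each
    # rater's marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
b = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]
print(cohens_kappa(a, b))  # -> 0.5
```

Here the coders agree on 6 of 8 items (75%), but with balanced yes/no marginals half of that agreement is expected by chance, so kappa is 0.5 rather than 0.75, which is exactly the chance correction the tutorial emphasizes.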
Mixing height computation from a numerical weather prediction model
Jericevic, A. [Croatian Meteorological and Hydrological Service, Zagreb (Croatia); Grisogono, B. [Univ. of Zagreb, Zagreb (Croatia). Andrija Mohorovicic Geophysical Inst., Faculty of Science
2004-07-01
Dispersion models require hourly values of the mixing height, H, which indicates the existence of turbulent mixing. The aim of this study was to investigate the model's ability and characteristics in the prediction of H. The ALADIN limited-area numerical weather prediction (NWP) model for short-range 48-hour forecasts was used. The bulk Richardson number (R{sub iB}) method was applied to determine the height of the atmospheric boundary layer at the grid point nearest to Zagreb, Croatia. This location was selected because radio soundings were available there, so the model could be verified. A critical value of the bulk Richardson number, R{sub iBc}=0.3, was used. The modelled and measured values of H for 219 days at 12 UTC are compared, and a correlation coefficient of 0.62 is obtained. This indicates that ALADIN can be used for the calculation of H in the convective boundary layer. For the stable boundary layer (SBL), the model underestimated H systematically. Results showed that R{sub iBc} evidently increases with increasing stability. Decoupling from the surface in the very stable boundary layer was detected, a consequence of the easing flow, which makes R{sub iB} very large. Verification of the practical use of the R{sub iB} method for calculating H from an NWP model was performed. The necessity of including other stability parameters (e.g., surface roughness length) was evidenced. Since the ALADIN model is in operational use in many European countries, this study should help others in pre-processing NWP data as input to dispersion models. (orig.)
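The bulk Richardson number method described above can be sketched as follows: scan a model profile upward and take the mixing height H as the first level where R_iB exceeds the critical value 0.3. The profile values below are invented for illustration, not taken from ALADIN output.

```python
# Hedged sketch of the bulk Richardson number method:
# Ri_B(z) = (g / theta_s) * z * (theta(z) - theta_s) / (u(z)^2 + v(z)^2),
# with H taken as the first level where Ri_B > Ri_Bc = 0.3.
g = 9.81  # m s^-2

def bulk_richardson(theta_s, theta_z, u_z, v_z, z):
    wind2 = u_z**2 + v_z**2
    if wind2 == 0.0:
        return float("inf")          # calm: treat as fully suppressed mixing
    return (g / theta_s) * z * (theta_z - theta_s) / wind2

def mixing_height(levels, ri_crit=0.3):
    """levels: list of (z [m], theta [K], u [m/s], v [m/s]); surface first."""
    _, theta_s, _, _ = levels[0]
    for z, theta, u, v in levels[1:]:
        if bulk_richardson(theta_s, theta, u, v, z) > ri_crit:
            return z                 # first level with suppressed turbulence
    return levels[-1][0]

# Idealized convective profile: well mixed up to ~800 m, stable above.
profile = [(0, 300.0, 0, 0), (200, 300.0, 5, 0), (400, 300.1, 6, 0),
           (800, 300.2, 7, 0), (1000, 302.0, 7, 0), (1500, 305.0, 8, 0)]
H = mixing_height(profile)
```

In practice one would interpolate between the last sub-critical and first super-critical level rather than report the discrete model level itself.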
The reliable solution and computation time of variable parameters logistic model
Wang, Pengfei; Pan, Xinnong
2017-04-01
The study investigates the reliable computation time (RCT, denoted Tc) obtained by applying double-precision computation to a variable-parameters logistic map (VPLM). Firstly, using the proposed method, we obtain reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent, non-stationary-parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc of the VPLM is generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of Tc are also obtained, which can help identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of fixed-parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
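A hedged sketch of the reliable-computation-time idea (our construction, not the paper's code): iterate the fixed-parameter logistic map in double precision alongside a 50-digit reference orbit and record the first step at which the two disagree beyond a tolerance.

```python
# Sketch: estimate Tc for the logistic map x_{n+1} = r x_n (1 - x_n) by
# comparing a double-precision orbit against a high-precision Decimal orbit.
# The seed discrepancy is the gap between float(0.1) and exact decimal 0.1,
# which chaos amplifies by roughly a factor of 2 per step at r = 4.
from decimal import Decimal, getcontext

def reliable_time(x0, r=4.0, tol=1e-3, nmax=200):
    getcontext().prec = 50                      # 50-digit reference orbit
    x_double = x0
    x_ref = Decimal(repr(x0))
    r_ref = Decimal(repr(r))
    for n in range(1, nmax + 1):
        x_double = r * x_double * (1.0 - x_double)
        x_ref = r_ref * x_ref * (Decimal(1) - x_ref)
        if abs(x_double - float(x_ref)) > tol:
            return n                            # first step the orbits differ
    return nmax

tc = reliable_time(0.1)
```

With a seed gap of order 1e-17 and error doubling each step, the orbits should separate beyond 1e-3 after a few dozen iterations, which is the qualitative behaviour the paper quantifies statistically.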
Anonymous
2006-01-01
An efficient importance sampling algorithm is presented to analyze the reliability of a complex structural system with multiple failure modes and fuzzy-random uncertainties in the basic variables and failure modes. In order to improve the sampling efficiency, the simulated annealing algorithm is adopted to optimize the density center of the importance sampling for each failure mode, so that the points contributing more to the fuzzy failure probability are sampled with higher probability. For the system with multiple fuzzy failure modes, a weighted and mixed importance sampling function is constructed. The contribution of each fuzzy failure mode to the system failure probability is represented by appropriate factors, and the sampling efficiency is improved further. The variances and the coefficients of variation are derived for the failure probability estimates. Two examples are introduced to illustrate the rationality of the present method. Compared with the direct Monte Carlo method, the improved efficiency and the precision of the method are verified by the examples.
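The core importance sampling mechanism can be illustrated on a one-dimensional toy problem (this is a minimal sketch, not the paper's weighted fuzzy-mode scheme): the sampling density is centered at the design point on the limit state g(x) = 0, so rare failure samples occur often and are reweighted by the density ratio.

```python
# Minimal importance sampling sketch for a failure probability.
# Limit state g(x) = 3 - x with x ~ N(0,1): true P(g <= 0) = P(x >= 3) ~ 0.00135.
# Sampling density: N(3,1), centered at the design point x* = 3 (assumed).
import math
import random

random.seed(1)

def phi(x, mu=0.0):
    """Normal(mu, 1) density."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def g(x):
    return 3.0 - x            # failure when g(x) <= 0

N = 20000
center = 3.0                  # importance density centered on g(x) = 0
acc = 0.0
for _ in range(N):
    x = random.gauss(center, 1.0)
    if g(x) <= 0:
        acc += phi(x) / phi(x, center)   # reweight by the density ratio
pf = acc / N
```

Direct Monte Carlo with the same N would see only ~27 failures on average; the shifted density turns roughly half the samples into failures, which is why the variance drops so sharply.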
Heuvel, D.A.V. den; Es, H.W. van; Heesewijk, J.P. van; Spee, M. [St. Antonius Hospital Nieuwegein, Department of Radiology, Nieuwegein (Netherlands); Jong, P.A. de [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Zanen, P.; Grutters, J.C. [University Medical Center Utrecht, Division Heart and Lungs, Utrecht (Netherlands); St. Antonius Hospital Nieuwegein, Center of Interstitial Lung Diseases, Department of Pulmonology, Nieuwegein (Netherlands)
2015-09-15
To determine inter-rater reliability of sarcoidosis-related computed tomography (CT) findings that can be used for scoring of thoracic sarcoidosis. CT images of 51 patients with sarcoidosis were scored by five chest radiologists for various abnormal CT findings (22 in total) encountered in thoracic sarcoidosis. Using intra-class correlation coefficient (ICC) analysis, inter-rater reliability was analysed and reported according to the Guidelines for Reporting Reliability and Agreement Studies (GRRAS) criteria. A pre-specified sub-analysis was performed to investigate the effect of training. Scoring was trained in a distinct set of 15 scans in which all abnormal CT findings were represented. Median age of the 51 patients (36 men, 70 %) was 43 years (range 26 - 64 years). All radiographic stages were present in this group. ICC ranged from 0.91 for honeycombing to 0.11 for nodular margin (sharp versus ill-defined). The ICC was above 0.60 in 13 of the 22 abnormal findings. Sub-analysis for the best-trained observers demonstrated an ICC improvement for all abnormal findings and values above 0.60 for 16 of the 22 abnormalities. In our cohort, reliability between raters was acceptable for 16 thoracic sarcoidosis-related abnormal CT findings. (orig.)
Reliability of a method to conduct upper airway analysis in cone-beam computed tomography
Karen Regina Siqueira de Souza
2013-02-01
Full Text Available The aim of this study was to assess the reliability of a method to measure the following upper airway dimensions: total volume (TV, the nasopharyngeal narrowest areas (NNA, and the oropharyngeal narrowest areas (ONA. The sample consisted of 60 cone-beam computed tomography (CBCT scans, evaluated by two observers twice, using the Dolphin 3D software (Dolphin Imaging & Management solutions, Chatsworth, California, USA, which afforded image reconstruction and measurement of the aforementioned dimensions. The data were submitted to reliability tests using the intraclass correlation coefficient (ICC and to Bland & Altman agreement tests, with their respective confidence intervals (CI set at 95%. Excellent intra- and interobserver reliability values were found for all variables assessed (TV, NNA and ONA, with ICC values ranging from 0.88 to 0.99. The data demonstrated agreement between the two assessments of each observer and between the first evaluations of both observers, thus confirming the reliability of this methodology. The results suggest that this methodology can be used in further studies to investigate upper airway dimensions (TV, NNA, and ONA, thereby contributing to the diagnosis of upper airway obstructions.
Brandli, A. E.; Donham, C. F.
1974-01-01
This paper describes the application of a numerical differencing analyzer computer program to the thermal analysis of a MIUS model. The MIUS model evaluated is one that would be required to support a 648-unit Garden Apartment Complex. The computer program was capable of predicting the thermal performance of this MIUS from the impressed electrical, heating, and cooling loads.
Numerical computing of elastic homogenized coefficients for periodic fibrous tissue
Roman S.
2009-06-01
Full Text Available The homogenization theory of linear elasticity is applied to a periodic array of cylindrical inclusions in a rectangular pattern extending to infinity in the inclusions' axial direction, such that the deformation of tissue along this direction is negligible. In the plane of deformation, the homogenization scheme is based on the average strain energy, whereas in the third direction it is based on the average normal stress along that direction. Namely, these average quantities have to be the same on a Repeating Unit Cell (RUC) of the heterogeneous and homogenized media when using a special form of boundary conditions formed by a periodic part and an affine part of the displacement. There exist infinitely many RUCs generating the considered array. The computing procedure is tested with different choices of RUC to verify that the results of the homogenization process are independent of the kind of RUC employed. Then, the dependence of the homogenized coefficients on the microstructure can be studied. For instance, a special anisotropy and the role of the inclusion volume are investigated. In the second part of this work, mechanical traction tests are simulated. We consider two kinds of loading: applying a density of force or imposing a displacement. We test five samples of the periodic array containing one, four, sixteen, sixty-four and one hundred RUCs. The evolution of mean stresses, strains and energy with the number of inclusions is studied. The evolutions depend on the kind of loading, but their limits do not; the limits could be predicted by simulating a traction test of the homogenized medium.
Xue Xiang
2013-03-01
Full Text Available The finite difference method (FDM) has recently been applied to simulate casting thermal stress, which normally needs a long computational time and large computer storage. This study presents two techniques for improving the computational speed of numerical simulation of casting thermal stress based on the FDM, one for handling non-constant material properties and the other for dealing with the various coefficients in the discretization equations. The use of the two techniques is discussed and an application to a wave-guide casting is given. The results show that the computational speed is almost tripled and the computer storage needed is reduced by nearly half compared with the original method without the new techniques. The stress results for the casting domain obtained by the two methods, which set the temperature steps to 0.1 ℃ and 10 ℃ respectively, are nearly the same and in good agreement with the actual casting situation. It can be concluded that both handling the material properties as stepwise profiles and eliminating repeated calculation are reliable and effective ways to improve computational speed, and are applicable in heat transfer and fluid flow simulation.
Computational area measurement of orbital floor fractures: Reliability, accuracy and rapidity
Schouman, Thomas, E-mail: thomas.schouman@psl.aphp.fr [Service of Oral and Maxillofacial Surgery, Department of Surgery, University Hospital and Faculty of Medicine of Geneva, 1211 Genève (Switzerland); Courvoisier, Delphine S., E-mail: delphine.courvoisier@hcuge.ch [Biostatistician - Service of Clinical Epidemiology, University Hospital and Faculty of Medicine of Geneva 1211 Genève (Switzerland); Imholz, Benoit, E-mail: benoit.imholz@hcuge.ch [Service of Oral and Maxillofacial Surgery, Department of Surgery, University Hospital and Faculty of Medicine of Geneva, 1211 Genève (Switzerland); Van Issum, Christopher, E-mail: christopher.vanissum@hcuge.ch [Service of Ophthalmology, University Hospital and Faculty of Medicine of Geneva, 1211 Genève (Switzerland); Scolozzi, Paolo, E-mail: paolo.scolozzi@hcuge.ch [Service of Oral and Maxillofacial Surgery, Department of Surgery, University Hospital and Faculty of Medicine of Geneva, 1211 Genève (Switzerland)
2012-09-15
Objective: To evaluate the reliability, accuracy and rapidity of a specific computational method for assessing the orbital floor fracture area on a CT scan. Method: A computer assessment of the area of the fracture, as well as that of the total orbital floor, was determined on CT scans taken from ten patients. The ratio of the fracture's area to the orbital floor area was also calculated. The test–retest precision of the measurement calculations was estimated using the Intraclass Correlation Coefficient (ICC) and Dahlberg's formula to assess the agreement across observers and across measures. The time needed for the complete assessment was also evaluated. Results: The Intraclass Correlation Coefficient across observers was 0.92 [0.85;0.96], and the precision of the measures across observers was 4.9%, according to Dahlberg's formula. The mean time needed to make one measurement was 2 min and 39 s (range, 1 min and 32 s to 4 min and 37 s). Conclusion: This study demonstrated that (1) the area of the orbital floor fracture can be rapidly and reliably assessed by using a specific computer system directly on CT scan images; (2) this method has the potential to be routinely used to standardize the post-traumatic evaluation of orbital fractures.
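Dahlberg's formula used above for test-retest precision is d = sqrt(sum(d_i^2) / (2n)) over paired first/second measurements, often reported as a percentage of the mean. A small sketch on invented repeat measurements (the areas below are hypothetical, not the study's data):

```python
# Sketch of Dahlberg's error for paired repeat measurements.
# d = sqrt( sum_i (m1_i - m2_i)^2 / (2 n) ); values below are invented.
import math

first  = [102.0, 87.5, 120.3, 95.0, 110.2]   # hypothetical areas, mm^2
second = [100.5, 89.0, 118.8, 96.2, 109.0]   # repeat measurements

def dahlberg(m1, m2):
    n = len(m1)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m1, m2)) / (2 * n))

d = dahlberg(first, second)
mean = sum(first + second) / (2 * len(first))
precision_pct = 100 * d / mean   # Dahlberg error as % of the mean value
```

The ICC complements this: Dahlberg's d reports the typical measurement error in the unit of measurement, while the ICC reports agreement relative to the between-subject variance.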
LI Shou-ju; LIU Ying-xi; HE Xiang; ZHOU Yuan-pai
2001-01-01
A new numerical algorithm is presented to simulate the explosion reaction process of mine explosives, based on the equation of state, the equation of mass conservation and the thermodynamic balance equation of the explosion products. Taking into account the effect of the reversible reactions of the explosion products on the explosion reaction equations and the thermodynamic parameters, a computer program has been developed. The computed values show that the computer simulation results are identical with the testing ones.
Li, Tiexiang; Huang, Tsung-Ming; Lin, Wen-Wei; Wang, Jenn-Nan
2017-03-01
We propose an efficient eigensolver for computing densely distributed spectra of the two-dimensional transmission eigenvalue problem (TEP), which is derived from Maxwell's equations with Tellegen media and the transverse magnetic mode. The governing equations, when discretized by the standard piecewise linear finite element method, give rise to a large-scale quadratic eigenvalue problem (QEP). Our numerical simulation shows that half of the positive eigenvalues of the QEP are densely distributed in some interval near the origin. The quadratic Jacobi-Davidson method with a so-called non-equivalence deflation technique is proposed to compute the dense spectrum of the QEP. Extensive numerical simulations show that our proposed method converges efficiently, even when it needs to compute more than 5000 desired eigenpairs. Numerical results also illustrate that the computed eigenvalue curves can be approximated by nonlinear functions, which can be applied to estimate the denseness of the eigenvalues for the TEP.
Mishnaevsky, Leon
2014-01-01
..., with modified, hybrid or nanomodified structures. In this project, we seek to explore the potential of hybrid (carbon/glass), nanoreinforced and hierarchical composites (with secondary CNT, graphene or nanoclay reinforcement) as future materials for highly reliable large wind turbines. Using 3D multiscale ... computational models of the composites, we study the effect of hybrid structure and of nanomodifications on the strength, lifetime and service properties of the materials (see Figure 1). As a result, a series of recommendations toward the improvement of composites for structural applications under long term ...
李建平 [1]; 曾庆存 [2]; 丑纪范 [3]
2000-01-01
In the majority of cases of long-time numerical integration for initial-value problems, round-off error has received little attention. Using twenty-nine numerical methods, the influence of round-off error on numerical solutions is studied through a large number of numerical experiments. We find that there exists a strong dependence on machine precision (a new kind of dependence, different from the sensitive dependence on initial conditions), a maximally effective computation time (MECT) and an optimal stepsize (OS) in solving nonlinear ordinary differential equations (ODEs) in finite machine precision. An optimal searching method for evaluating the MECT and OS under finite machine precision is presented. The relationships between the MECT, the OS, the order of the numerical method and the machine precision are found. Numerical results show that round-off error plays a significant role in the above phenomena. Moreover, we find two universal relations which are independent of the types of ODEs and initial values.
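As a toy illustration of the machine-precision dependence described above (ours, not one of the paper's twenty-nine methods): the same long summation carried out at two precisions stalls at different values once the terms fall below the rounding threshold of the accumulator, so the achievable accuracy of a long computation is capped by the machine precision.

```python
# Accumulated round-off halts a long computation at finite machine precision:
# in half precision the harmonic sum stops growing once 1/n is smaller than
# half an ulp of the running total, far short of the double-precision value.
import numpy as np

def harmonic_sum(n_terms, dtype):
    s = dtype(0.0)
    for n in range(1, n_terms + 1):
        s = s + dtype(1.0 / n)     # every addition rounds to `dtype`
    return float(s)

s16 = harmonic_sum(10_000, np.float16)   # stalls well below the true sum
s64 = harmonic_sum(10_000, np.float64)   # ~ ln(1e4) + Euler gamma = 9.7876
```

The double-precision sum would show the same stagnation, only at a vastly larger number of terms; this is the same mechanism that bounds the maximally effective computation time of an ODE integration.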
Gallarno, George [Christian Brothers University]; Rogers, James H. [ORNL]; Maxwell, Don E. [ORNL]
2015-01-01
The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage, since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.
Farkas, Árpád; Balásházy, Imre
2015-04-01
A more exact determination of dose conversion factors associated with radon progeny inhalation has become possible due to advancements in epidemiological health risk estimates in recent years. The enhancement of computational power and the development of numerical techniques allow computing dose conversion factors with increasing reliability. The objective of this study was to develop an integrated model and software, based on a self-developed airway deposition code, the authors' own bronchial dosimetry model and the computational methods accepted by the International Commission on Radiological Protection (ICRP), to calculate dose conversion coefficients for different exposure conditions. The model was tested by applying it to exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM(-1) for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named the PM-A model), and 9 and 17 mSv WLM(-1) when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called the PM-B model). User-friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses.
WINTERSTEIN, STEVEN R.; VEERS, PAUL S.
2000-01-01
Because the fatigue lifetime of wind turbine components depends on several factors that are highly variable, a numerical analysis tool called FAROW has been created to cast the problem of component fatigue life in a probabilistic framework. The probabilistic analysis is accomplished using methods of structural reliability (FORM/SORM). While the workings of the FAROW software package are defined in the user's manual, this theory manual outlines the mathematical basis. A deterministic solution for the time to failure is made possible by assuming analytical forms for the basic inputs of wind speed, stress response, and material resistance. Each parameter of the assumed forms for the inputs can be defined to be a random variable. The analytical framework is described and the solution for time to failure is derived.
Rahman, P. A.; Bobkova, E. Yu
2017-01-01
This paper deals with a reliability model of a restorable non-stop computing system with triple-modular redundancy based on independent computing nodes, taking into consideration the finite time for node activation and different node failure rates in the active and passive states. The generalized reliability model obtained by the authors, and the calculation formulas for the reliability indices of a system of identical and independent computing nodes with a given threshold on the number of active nodes at which the system is considered operable, are also discussed. Finally, the application of the generalized model to the particular case of a non-stop restorable computing system with triple-modular redundancy based on independent nodes, together with calculation examples for the reliability indices, is also provided.
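For the classic special case underlying the above, a 2-out-of-3 triple-modular-redundant system of identical, independent, non-repairable nodes with exponential lifetimes, the system reliability has the well-known closed form R_sys = 3R^2 - 2R^3. A minimal sketch with an assumed failure rate (the paper's generalized model with activation times and repair is richer than this):

```python
# Classic non-repairable TMR reliability sketch (assumed parameters).
# Node: R(t) = exp(-lambda t); system works while at least 2 of 3 nodes work:
# R_sys = C(3,2) R^2 (1-R) + R^3 = 3 R^2 - 2 R^3.
import math

def node_reliability(t, failure_rate):
    return math.exp(-failure_rate * t)

def tmr_reliability(r):
    return 3 * r**2 - 2 * r**3

lam = 1e-4        # failures per hour (hypothetical)
t = 1000.0        # mission time, hours
r = node_reliability(t, lam)
r_sys = tmr_reliability(r)
```

Note that TMR helps only while the nodes are individually reliable: R_sys > R holds for R > 0.5 and reverses below it, which is why the repair and activation dynamics modelled in the paper matter for long missions.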
Faydide, B. [Commissariat a l'Energie Atomique, Grenoble (France)]
1997-07-01
This paper presents the current and planned numerical developments for improving computing performance in Cathare applications needing real time, such as simulator applications. Cathare is a thermal-hydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the general characteristics of the code are presented, covering physical models, numerical topics, and the validation strategy. Then, the current and planned applications of Cathare in the field of simulators are discussed. Some of these applications were made in the past using a simplified, fast-running version of Cathare (Cathare-Simu); the status of the numerical improvements obtained with Cathare-Simu is presented. The planned developments concern mainly the Simulator Cathare Release (SCAR) project, which deals with the use of the most recent version of Cathare inside simulators. In this frame, the numerical developments concern speeding up the calculation process using parallel processing, and improving code reliability on a large set of NPP transients.
Lundqvist, Eva; Segelsjoe, Monica; Magnusson, Anders [Uppsala Univ., Dept. of Radiology, Oncology and Radiation Science, Section of Radiology, Uppsala (Sweden)], E-mail: eva.lundqvist.8954@student.uu.se; Andersson, Anna; Biglarnia, Ali-Reza [Dept. of Surgical Sciences, Section of Transplantation Surgery, Uppsala Univ. Hospital, Uppsala (Sweden)
2012-11-15
Background Unlike other solid organ transplants, pancreas allografts can undergo a substantial decrease in baseline volume after transplantation. This phenomenon has not been well characterized, as there are insufficient data on reliable and reproducible volume assessments. We hypothesized that characterization of pancreatic volume by means of computed tomography (CT) could be a useful method for clinical follow-up in pancreas transplant patients. Purpose To evaluate the feasibility and reliability of pancreatic volume assessment using CT scan in transplanted patients. Material and Methods CT examinations were performed on 21 consecutive patients undergoing pancreas transplantation. Volume measurements were carried out by two observers tracing the pancreatic contours in all slices. The observers performed the measurements twice for each patient. Differences in volume measurement were used to evaluate intra- and inter-observer variability. Results The intra-observer variability for the pancreatic volume measurements of Observers 1 and 2 was found to be in almost perfect agreement, with an intraclass correlation coefficient (ICC) of 0.90 (0.77-0.96) and 0.99 (0.98-1.0), respectively. Regarding inter-observer validity, the ICCs for the first and second measurements were 0.90 (range, 0.77-0.96) and 0.95 (range, 0.85-0.98), respectively. Conclusion CT volumetry is a reliable and reproducible method for measurement of transplanted pancreatic volume.
Ping TAN; Wei-ting HE; Jia LIN; Hong-ming ZHAO; Jian CHU
2011-01-01
With the development of high-speed railways in China, more than 2000 high-speed trains will be put into use. The safety and efficiency of railway transportation are increasingly important. We have designed a high-availability quadruple vital computer (HAQVC) system based on an analysis of the architecture of the traditional double 2-out-of-2 system and the 2-out-of-3 system. The HAQVC system is a system with high availability and safety, with prominent characteristics such as a brand-new internal architecture, high efficiency, a reliable data interaction mechanism, and an operation state change mechanism. The hardware of the vital CPU is based on ARM7 with a real-time embedded safe operating system (ES-OS). The Markov modeling method is designed to evaluate the reliability, availability, maintainability, and safety (RAMS) of the system. In this paper, we demonstrate that the HAQVC system is more reliable than the all-voting triple modular redundancy (AVTMR) system and the double 2-out-of-2 system. Thus, the design can be used for specific application systems, such as airplane or high-speed railway systems.
Numerical Simulation of Multi-phase Flow in Porous Media on Parallel Computers
Liu, Hui; Chen, Zhangxin; Luo, Jia; Deng, Hui; He, Yanfeng
2016-01-01
This paper is concerned with developing parallel computational methods for two-phase flow on distributed parallel computers; techniques for linear solvers and nonlinear methods are studied, and the standard and inexact Newton methods are investigated. A multi-stage preconditioner for two-phase flow is proposed and advanced matrix processing strategies are implemented. Numerical experiments show that these computational methods are scalable and efficient, and are capable of simulating large-scale problems with tens of millions of grid blocks using thousands of CPU cores on parallel computers. The nonlinear techniques, preconditioner and matrix processing strategies can also be applied to three-phase black oil, compositional and thermal models.
Reithmeier, Eduard
1991-01-01
Limit cycles or, more general, periodic solutions of nonlinear dynamical systems occur in many different fields of application. Although, there is extensive literature on periodic solutions, in particular on existence theorems, the connection to physical and technical applications needs to be improved. The bifurcation behavior of periodic solutions by means of parameter variations plays an important role in transition to chaos, so numerical algorithms are necessary to compute periodic solutions and investigate their stability on a numerical basis. From the technical point of view, dynamical systems with discontinuities are of special interest. The discontinuities may occur with respect to the variables describing the configuration space manifold or/and with respect to the variables of the vector-field of the dynamical system. The multiple shooting method is employed in computing limit cycles numerically, and is modified for systems with discontinuities. The theory is supported by numerous examples, mainly fro...
Numerical Computation of the Tau Approximation for the Delayed Burgers Equation
Khaksar, Haghani F.; Karimi, Vanani S.; Sedighi, Hafshejani J.
2013-02-01
We investigate an efficient extension of the operational Tau method for solving the delayed Burgers equation (DBE) arising in physical problems. This extension gives a useful numerical algorithm for the DBE including linear and nonlinear terms. The orthogonality of the Laguerre polynomials used as basis functions is the main characteristic of the method for decreasing the volume of computation and the runtime of the method. Numerical results are also presented for some experiments to demonstrate the usefulness and accuracy of the proposed algorithm.
Geometric invariants for initial data sets: analysis, exact solutions, computer algebra, numerics
Valiente Kroon, Juan A, E-mail: j.a.valiente-kroon@qmul.ac.uk [School of Mathematical Sciences, Queen Mary, University of London, Mile End Road, London, E1 4NS (United Kingdom)
2011-09-22
A personal perspective on the interaction of analytical, numerical and computer algebra methods in classical Relativity is given. This discussion is inspired by the problem of the construction of invariants that characterise key solutions to the Einstein field equations. It is claimed that these kinds of ideas will be of importance in the analysis of dynamical black hole spacetimes by either analytical or numerical methods.
Numerical-Analytical Method for Magnetic Field Computation in Rotational Electric Machines
章跃进; 江建中; 屠关镇
2003-01-01
A numerical-analytical method is applied to the two-dimensional magnetic field computation in rotational electric machines in this paper. The analytical expressions for the air gap magnetic field are derived. The pole pairs in the expressions are taken into account so that the solution region can be reduced to one periodic range. The numerical and analytical magnetic field equations are linked with equal vector magnetic potential boundary conditions. The magnetic field of a brushless permanent magnet machine is computed by the proposed method. The result is compared to that obtained by the finite element method so as to validate the correctness of the method.
Computational experiment on the numerical solution of some inverse problems of mathematical physics
Vasil'ev, V. I.; Kardashevsky, A. M.; Sivtsev, PV
2016-11-01
In this article, a computational experiment on the numerical solution of some of the most popular linear inverse problems for equations of mathematical physics is presented. The discretization of the retrospective inverse problem for a parabolic equation is performed using a difference scheme with a non-positive weight multiplier. A similar difference scheme is also used for the numerical solution of the Cauchy problem for the two-dimensional Laplace equation. The results of the computational experiment, performed on model problems with exact solutions, including ones with randomly perturbed input data, are presented and discussed.
Numerical computation of soliton dynamics for NLS equations in a driving potential
Marco Caliari
2010-06-01
Full Text Available We provide numerical computations for the soliton dynamics of the nonlinear Schrödinger equation with an external potential. After computing the ground state solution r of a related elliptic equation, we show that, in the semi-classical regime, the center of mass of the solution with initial datum built upon r is driven by the solution to $\ddot x = -\nabla V(x)$. Finally, we provide examples and analyze the numerical errors in the two-dimensional case when V is a harmonic potential.
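A minimal numerical sketch (ours, not the paper's scheme) of the driving equation quoted above: integrate $\ddot x = -\nabla V(x)$ for the harmonic potential $V(x) = |x|^2/2$ in 2D with velocity Verlet, and check that one full period of the oscillator returns the trajectory to its starting point.

```python
# Velocity Verlet integration of x'' = -grad V(x) with V(x) = |x|^2 / 2.
# For this harmonic V the exact orbit is circular with period T = 2*pi.
import numpy as np

def grad_V(x):
    return x                       # grad of |x|^2 / 2

def verlet(x0, v0, h, steps):
    x, v = np.array(x0, float), np.array(v0, float)
    a = -grad_V(x)
    for _ in range(steps):
        x = x + h * v + 0.5 * h * h * a
        a_new = -grad_V(x)
        v = v + 0.5 * h * (a + a_new)
        a = a_new
    return x, v

h = 1e-3
steps = int(round(2 * np.pi / h))  # one full period
x, v = verlet([1.0, 0.0], [0.0, 1.0], h, steps)
```

Velocity Verlet is symplectic, so the energy error stays bounded over the period; the residual drift here is dominated by the rounding of T/h to an integer number of steps.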
High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science
Florin Pop
2014-01-01
Full Text Available Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios such as subatomic dimensions, high energy, and low absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
Sathyachandran, S. K.; Roy, D. P.; Boschetti, L.
2014-12-01
The Fire Radiative Power (FRP) [MW] is a measure of the rate of biomass combustion and can be retrieved from ground-based and satellite observations using middle infra-red measurements. The temporal integral of FRP is the Fire Radiative Energy (FRE) [MJ], which is related linearly to the total biomass consumption and thus to pyrogenic emissions. Satellite derived biomass consumption and emissions estimates have been derived conventionally by computing the summed total FRP, or the average FRP (arithmetic average of FRP retrievals), over spatial geographic grids for fixed time periods. These two methods are prone to estimation bias, especially under irregular sampling conditions such as those provided by polar-orbiting satellites, because the FRP can vary rapidly in space and time as a function of the fire behavior. Linear temporal integration of FRP, taking into account when the FRP values were observed and using the trapezoidal rule for numerical integration, has been suggested as an alternative FRE estimation method. In this study FRP data measured rapidly with a dual-band radiometer over eight prescribed fires are used to compute eight FRE values using the sum, mean and trapezoidal estimation approaches under a variety of simulated irregular sampling conditions. The estimated values are compared to biomass consumption measurements for each of the eight fires to provide insights into which method provides more accurate and precise biomass consumption estimates. The three methods are also applied to continental MODIS FRP data to study their differences using polar orbiting satellite data. The research findings indicate that trapezoidal FRP numerical integration provides the most reliable estimator.
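The mean-based and trapezoidal FRE estimators compared in the study can be sketched as follows; the observation times and FRP values below are hypothetical illustrations, not radiometer data from the eight prescribed fires.

```python
import numpy as np

# Hypothetical, irregularly sampled FRP observations [MW] at times t [s];
# illustrative values only, not data from the study.
t = np.array([0.0, 60.0, 90.0, 210.0, 300.0])
frp = np.array([5.0, 12.0, 9.0, 4.0, 1.0])

duration = t[-1] - t[0]

# Conventional estimator over a fixed period: mean FRP times the period length.
fre_mean = frp.mean() * duration  # MW * s = MJ

# Trapezoidal integration honours when each FRP value was observed.
fre_trap = float(np.sum(0.5 * (frp[1:] + frp[:-1]) * np.diff(t)))  # [MJ]
```

With these irregular samples the two estimators disagree (1860 MJ versus 1830 MJ), which is the kind of sampling-induced difference the study quantifies.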
An approach to first principles electronic structure calculation by symbolic-numeric computation
Akihito Kikuchi
2013-04-01
Full Text Available There is a wide variety of electronic structure calculations that cooperate with symbolic computation. The main purpose of the latter is to play an auxiliary role (though not an unimportant one) to the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power. Thus one resorts to the intensive use of computers, namely, symbolic computation [10-16]. Examples of this can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models, and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. In the former, when one uses a special atomic basis for a specific purpose, expressing the integrals as a combination of already known analytic functions may sometimes be very difficult. In the latter, one must rearrange a number of creation and annihilation operators in a suitable order and calculate the analytical expectation value. It is usual that a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression into a tractable and computable form. This is the main motive for the introduction of symbolic computation as a forerunner of the numerical one, and their collaboration has won considerable successes. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary part of the numerical computation. The present work is applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.
Development of numerical algorithms for practical computation of nonlinear normal modes
2008-01-01
We show that, when resorting to numerical algorithms, nonlinear normal mode (NNM) computation is possible with limited implementation effort, which paves the way to a practical method for determining the NNMs of nonlinear mechanical systems. The proposed method relies on two main techniques, namely a shooting procedure and a method for the continuation of NNM motions. In addition, sensitivity analysis is used to reduce the computational burden of the algorithm. A simplified discrete model of a...
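The shooting idea can be illustrated on a toy conservative system. This is a minimal sketch, not the paper's algorithm; the Duffing-type oscillator and its coefficients are assumed for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Shooting-style sketch: find the period of a periodic (NNM-like) motion of
# the conservative Duffing oscillator x'' + x + 0.5*x**3 = 0, started at
# amplitude a with zero velocity, by integrating until the velocity first
# crosses zero from below (half a period for this symmetric system).
def rhs(t, y):
    x, v = y
    return [v, -x - 0.5 * x**3]

def upward_velocity_crossing(t, y):
    return y[1]
upward_velocity_crossing.terminal = True
upward_velocity_crossing.direction = 1

def period(a):
    sol = solve_ivp(rhs, (0.0, 50.0), [a, 0.0],
                    events=upward_velocity_crossing, rtol=1e-10, atol=1e-12)
    return 2.0 * sol.t_events[0][0]  # event fires at the half period
```

For small amplitudes the period approaches the linear value 2π; the hardening cubic term shortens it as amplitude grows, which is exactly the frequency-energy dependence that NNM continuation traces.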
Seguí, María del Mar; Cabrero-García, Julio; Crespo, Ana; Verdú, José; Ronda, Elena
2015-06-01
To design and validate a questionnaire to measure visual symptoms related to exposure to computers in the workplace. Our computer vision syndrome questionnaire (CVS-Q) was based on a literature review and validated through discussion with experts and performance of a pretest, pilot test, and retest. Content validity was evaluated by occupational health, optometry, and ophthalmology experts. Rasch analysis was used in the psychometric evaluation of the questionnaire. Criterion validity was determined by calculating the sensitivity and specificity, receiver operating characteristic curve, and cutoff point. Test-retest repeatability was tested using the intraclass correlation coefficient (ICC) and concordance by Cohen's kappa (κ). The CVS-Q was developed with wide consensus among experts and was well accepted by the target group. It assesses the frequency and intensity of 16 symptoms using a single rating scale (symptom severity) that fits the Rasch rating scale model well. The questionnaire has sensitivity and specificity over 70% and achieved good test-retest repeatability both for the scores obtained [ICC = 0.802; 95% confidence interval (CI): 0.673, 0.884] and CVS classification (κ = 0.612; 95% CI: 0.384, 0.839). The CVS-Q has acceptable psychometric properties, making it a valid and reliable tool to monitor the visual health of computer workers, and can potentially be used in clinical trials and outcome research. Copyright © 2015 Elsevier Inc. All rights reserved.
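Cohen's κ, used here to quantify concordance of the CVS classification between test and retest, can be computed from two sets of labels as follows; this is a generic sketch with made-up labels, not the study's data.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters' labels."""
    a, b = np.asarray(a), np.asarray(b)
    categories = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # chance agreement: product of the two raters' marginal label frequencies
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_observed - p_chance) / (1.0 - p_chance)
```

Perfect agreement yields κ = 1, while agreement no better than chance yields κ ≈ 0; values in between grade the strength of concordance.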
Sled, Elizabeth A; Sheehy, Lisa M; Felson, David T; Costigan, Patrick A; Lam, Miu; Cooke, T Derek V
2011-01-01
The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. (1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. (2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977-0.999 for computer analysis; 0.820-0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839-0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers.
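The ICC used in such agreement studies can be sketched as a two-way ANOVA computation. The formula below is ICC(2,1), i.e. two-way random effects, absolute agreement, single measures; the exact variant the authors used may differ.

```python
import numpy as np

# ICC(2,1) sketch from a subjects-by-raters matrix Y.
def icc2_1(Y):
    n, k = Y.shape
    grand = Y.mean()
    mean_rows = Y.mean(axis=1)  # per-subject means
    mean_cols = Y.mean(axis=0)  # per-rater means
    ss_rows = k * ((mean_rows - grand) ** 2).sum()
    ss_cols = n * ((mean_cols - grand) ** 2).sum()
    ss_total = ((Y - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)
```

When every rater reports identical values for each subject, the formula returns 1, matching the interpretation of the high ICCs reported above.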
Dockrell, Sara; O'Grady, Eleanor; Bennett, Kathleen; Mullarkey, Clare; Mc Connell, Rachel; Ruddy, Rachel; Twomey, Seamus; Flannery, Colleen
2012-05-01
Rapid Upper Limb Assessment (RULA) is a quick observation method of posture analysis. RULA has been used to assess children's computer-related posture, but the reliability of RULA on a paediatric population has not been established. The purpose of this study was to investigate the inter-rater and intra-rater reliability of the use of RULA with children. Video recordings of 24 school children were independently viewed by six trained raters who assessed their postures using RULA, on two separate occasions. RULA demonstrated higher intra-rater reliability than inter-rater reliability although both were moderate to good. RULA was more reliable when used for assessing the older children (8-12 years) than with the younger children (4-7 years). RULA may prove useful as part of an ergonomic assessment, but its level of reliability warrants caution for its sole use when assessing children, and in particular, younger children.
Mishnaevsky, Leon; Freere, Peter; Sharma, Ranjan
2009-01-01
This paper reports the latest results of the comprehensive program of experimental and computational analysis of strength and reliability of wooden parts of low cost wind turbines. The possibilities of prediction of strength and reliability of different types of wood are studied in a series of experiments and computational investigations. Low cost testing machines have been designed and employed for the systematic analysis of different sorts of Nepali wood, to be used for the wind turbine construction. At the same time, computational micromechanical models of deformation and strength of wood…
Design and analysis of the reliability of on-board computer system based on Markov-model
MA Xiu-juan; CAO Xi-bin; ZHAO Guo-liang
2005-01-01
An on-board computer system should have such advantages as light weight, small volume, and low power consumption to meet the demands of micro-satellites. This paper, based on specific characteristics of the Stereo Mapping Micro-Satellite (SMMS), describes an on-board computer system that combines centralized and distributed control in the same system, and analyzes its reliability based on a Markov model in order to provide a theoretical foundation for a reliable design. The on-board computer system has been put into use in the principle prototype model of the Stereo Mapping Micro-Satellite and has already been debugged. All indexes meet the requirements of the design.
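A Markov reliability analysis of the kind applied here can be sketched as a continuous-time Markov chain. The three-state cold-standby structure and the failure rate below are assumed for illustration; they are not the satellite system's actual model.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-state CTMC: a primary computer with a cold standby.
# States: 0 = primary up, 1 = standby up (primary failed), 2 = system failed.
lam = 1e-4  # assumed failure rate [1/h]

Q = np.array([[-lam,  lam,  0.0],
              [ 0.0, -lam,  lam],
              [ 0.0,  0.0,  0.0]])  # absorbing failed state

def reliability(t):
    P = expm(Q * t)            # state-transition probabilities over [0, t]
    return P[0, 0] + P[0, 1]   # probability the system has not failed by t
```

For this chain the closed form is R(t) = e^(-λt)(1 + λt), which the matrix exponential reproduces; richer state diagrams (repair, partial failures) only change Q.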
Guo, Y.; van Dam, J.; Bergua, R.; Jove, J.; Campbell, J.
2015-03-01
Nontorque loads induced by the wind turbine rotor overhang weight and aerodynamic forces can greatly affect drivetrain loads and responses. If not addressed properly, these loads can result in a decrease in gearbox component life. This work uses analytical modeling, computational modeling, and experimental data to evaluate a unique drivetrain design that minimizes the effects of nontorque loads on gearbox reliability: the Pure Torque(R) drivetrain developed by Alstom. The drivetrain has a hub-support configuration that transmits nontorque loads directly into the tower rather than through the gearbox as in other design approaches. An analytical model of Alstom's Pure Torque drivetrain provides insight into the relationships among turbine component weights, aerodynamic forces, and the resulting drivetrain loads. Main shaft bending loads are orders of magnitude lower than the rated torque and are hardly affected by wind conditions and turbine operations.
A Newly Developed Method for Computing Reliability Measures in a Water Supply Network
Jacek Malinowski
2016-01-01
Full Text Available A reliability model of a water supply network has been examined. Its main features are: (1) a topology that can be decomposed by the so-called state factorization into a relatively small number of derivative networks, each having a series-parallel structure; (2) binary-state components (either operative or failed) with given flow capacities; (3) a multi-state character of the whole network and its sub-networks, where a network state is defined as the maximal flow between a source (or sources) and a sink (or sinks); (4) integer values for all capacities (component, network, and sub-network). As the network operates, its state changes due to component failures, repairs, and replacements. A newly developed method of computing the inter-state transition intensities is presented. It is based on the so-called state factorization and series-parallel aggregation. The analysis of these intensities shows that the failure-repair process of the considered system is an asymptotically homogeneous Markov process. It is also demonstrated how certain reliability parameters useful for network maintenance planning can be determined on the basis of the asymptotic intensities. For better understanding of the presented method, an illustrative example is given. (original abstract)
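For binary components, series-parallel aggregation reduces to two closed-form rules; the sketch below works on component availabilities (illustrative values, independence assumed) rather than on the paper's flow capacities.

```python
# Series-parallel aggregation: probability that a block of independent
# binary components works.
def series(availabilities):
    # a series block works only if every component works
    result = 1.0
    for p in availabilities:
        result *= p
    return result

def parallel(availabilities):
    # a parallel block fails only if every component fails
    q = 1.0
    for p in availabilities:
        q *= 1.0 - p
    return 1.0 - q

# Blocks compose by nesting, mirroring the aggregation of sub-networks.
system = series([0.95, parallel([0.9, 0.9])])
```

Nesting the two rules collapses any series-parallel structure to a single number, which is what makes the factorized derivative networks tractable.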
Securely Data Forwarding and Maintaining Reliability of Data in Cloud Computing
Sonali A.Wanjari
2015-02-01
Full Text Available The cloud works as an online storage server and provides long-term storage services over the internet. Because it acts as a third party in which users store their data, data confidentiality, robustness, and functionality are required. Encryption and encoding methods are used to solve such problems. A proxy re-encryption scheme is then integrated with a decentralized erasure code so that a secure distributed storage system is formulated. The distributed storage system not only supports secure, robust data storage and retrieval but also lets the user forward his data to another user without retrieving it first. A backup concept within the same server allows users to recover failed data in the storage server and also to forward data to another user without retrieving it. This is an attempt to provide a lightweight approach that protects data access in distributed storage servers. All the important properties, i.e. confidentiality for security, robustness for healthy data, reliability for flexible data, and availability, are achieved for data stored in the cloud, addressing the problem of "securely data forwarding and maintaining reliability of data in cloud computing" using different methodologies and technologies.
Skowronski, Steven D.
This student guide provides materials for a course designed to instruct the student in the recommended procedures used when setting up tooling and verifying part programs for a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 discusses course content and reviews and demonstrates set-up procedures…
CNC Turning Center Advanced Operations. Computer Numerical Control Operator/Programmer. 444-332.
Skowronski, Steven D.; Tatum, Kenneth
This student guide provides materials for a course designed to introduce the student to the operations and functions of a two-axis computer numerical control (CNC) turning center. The course consists of seven units. Unit 1 presents course expectations and syllabus, covers safety precautions, and describes the CNC turning center components, CNC…
Research in progress in applied mathematics, numerical analysis, and computer science
1990-01-01
Research conducted at the Institute in Science and Engineering in applied mathematics, numerical analysis, and computer science is summarized. The Institute conducts unclassified basic research in applied mathematics in order to extend and improve problem solving capabilities in science and engineering, particularly in aeronautics and space.
Stanton, Michael; And Others
1985-01-01
Three reports on the effects of high technology on the nature of work include (1) Stanton on applications and implications of computer-aided design for engineers, drafters, and architects; (2) Nardone on the outlook and training of numerical-control machine tool operators; and (3) Austin and Drake on the future of clerical occupations in automated…
Verifying the error bound of numerical computation implemented in computer systems
Sawada, Jun
2013-03-12
A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
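A drastically simplified illustration of segment-wise error checking follows: it merely samples the approximation error over a domain, whereas the verification tool described here derives guaranteed per-segment bounds. The function, polynomial, and domain are example choices.

```python
import math

# Sampled (not verified) worst-case error of a polynomial approximation.
def max_sampled_error(f, poly, lo, hi, n=5000):
    worst = 0.0
    for i in range(n + 1):
        x = lo + (hi - lo) * i / n
        worst = max(worst, abs(poly(x) - f(x)))
    return worst

# Degree-3 Taylor polynomial of sin on [0, 0.5]; the Lagrange remainder
# guarantees the true error stays below 0.5**5 / 120.
err = max_sampled_error(math.sin, lambda x: x - x**3 / 6, 0.0, 0.5)
```

Sampling can only falsify a bound, never prove it; that gap is precisely why formal tools like the one above split the domain and bound each segment symbolically.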
A Numerical Scheme for Computing Stable and Unstable Manifolds in Nonautonomous Flows
Balasuriya, Sanjeeva
2016-12-01
There are many methods for computing stable and unstable manifolds in autonomous flows. When the flow is nonautonomous, however, difficulties arise since the hyperbolic trajectory to which these manifolds are anchored, and the local manifold emanation directions, are changing with time. This article utilizes recent results which approximate the time-variation of both these quantities to design a numerical algorithm which can obtain high resolution in global nonautonomous stable and unstable manifolds. In particular, good numerical approximation is possible locally near the anchor trajectory. Nonautonomous manifolds are computed for two examples: a Rossby wave situation which is highly chaotic, and a nonautonomous (time-aperiodic) Duffing oscillator model in which the manifold emanation directions are rapidly changing. The numerical method is validated and analyzed in these cases using finite-time Lyapunov exponent fields and exactly known nonautonomous manifolds.
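The basic manifold-growing idea is easiest to see in the autonomous case that the article generalizes. The sketch below grows one branch of the unstable manifold of a saddle by integrating forward from a small displacement along the unstable eigenvector; the undamped, unforced Duffing system is an illustrative choice, not the article's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Unstable manifold branch of the saddle at the origin of x' = v, v' = x - x**3,
# grown by forward integration from a small offset along the unstable
# eigenvector (1, 1) of the linearization at the saddle.
def rhs(t, y):
    x, v = y
    return [v, x - x**3]

eps = 1e-6
y0 = eps * np.array([1.0, 1.0]) / np.sqrt(2.0)
sol = solve_ivp(rhs, (0.0, 12.0), y0, rtol=1e-10, atol=1e-12)
x, v = sol.y

# On the unstable manifold the energy H = v**2/2 - x**2/2 + x**4/4 keeps its
# saddle value 0, which serves as an accuracy check on the computed branch.
H = v**2 / 2 - x**2 / 2 + x**4 / 4
```

In the nonautonomous setting both the anchor trajectory and the emanation direction drift in time, which is exactly what the article's approximations supply.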
Gao, James; Lee, Chen-Han; Li, Yingguagan
2015-01-01
The aim of this paper is to provide an introduction and overview of recent advances in the key technologies and the supporting computerized systems, and to indicate the trend of research and development in the area of computer numerical control machining. Three main themes of recent research in CNC machining are simulation, optimization and automation, which form the key aspects of intelligent manufacturing in the digital and knowledge-based manufacturing era. As the information and know...
1984-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis and computer science during the period October 1, 1983 through March 31, 1984 is summarized.
1989-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period October 1, 1988 through March 31, 1989 is summarized.
1987-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period April, 1986 through September 30, 1986 is summarized.
Chung, Chang Hyun; You, Young Woo; Huh, Chang Wook; Kim, Ju Yeul; Kim Do Hyung; Kim, Yoon Ik; Yang, Hui Chang [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hansung University, Seoul (Korea, Republic of)
1997-07-01
The objective of this study is to develop an appropriate procedure for evaluating human error in LP/S (low power/shutdown) and a computer code that calculates human error probabilities (HEPs) using this framework. The applicability of typical HRA methodologies to LP/S is assessed, and a new HRA procedure, SEPLOT (Systematic Evaluation Procedure for LP/S Operation Tasks), which reflects the characteristics of LP/S, is developed by selecting and categorizing human actions through a review of present studies. This procedure is applied to evaluate the LOOP (Loss of Off-site Power) sequence, and the HEPs obtained by using SEPLOT are used for quantitative evaluation of the core uncovery frequency. In this evaluation, one of the dynamic reliability computer codes, DYLAM-3, which has advantages over the ET/FT approach, is used. The SEPLOT procedure developed in this study provides a basis and framework for human error evaluation. It also makes it possible to assess the dynamic aspects of accidents leading to core uncovery by applying the HEPs obtained using SEPLOT as input data to the DYLAM-3 code. Eventually, it is expected that the results of this study will contribute to improved safety in LP/S and reduced uncertainties in risk. 57 refs. 17 tabs., 33 figs. (author)
Senanayake, Chathuri; Senanayake, S M N Arosha
2011-10-01
In this paper, a gait event detection algorithm is presented that uses computer intelligence (fuzzy logic) to identify seven gait phases in walking gait. Two inertial measurement units and four force-sensitive resistors were used to obtain knee angle and foot pressure patterns, respectively. Fuzzy logic is used to address the complexity in distinguishing gait phases based on discrete events. A novel application of the seven-dimensional vector analysis method to estimate the amount of abnormalities detected was also investigated based on the two gait parameters. Experiments were carried out to validate the application of the two proposed algorithms to provide accurate feedback in rehabilitation. The algorithm responses were tested for two cases, normal and abnormal gait. The large amount of data required for reliable gait-phase detection necessitates the utilisation of computer methods to store and manage the data. Therefore, a database management system and an interactive graphical user interface were developed for the utilisation of the overall system in a clinical environment.
Mukhadiyev, Nurzhan
2017-05-01
Combustion at extreme conditions, such as a turbulent flame at high Karlovitz and Reynolds numbers, is still a vast and uncertain field for researchers. Direct numerical simulation of a turbulent flame is a superior tool to unravel detailed information that is not accessible to the most sophisticated state-of-the-art experiments. However, the computational cost of such simulations remains a challenge even for modern supercomputers, as the physical size, the level of turbulence intensity, and the chemical complexity of the problems continue to increase. As a result, there is a strong demand for computational cost reduction methods as well as for acceleration of existing methods. The main scope of this work was the development of computational and numerical tools for high-fidelity direct numerical simulations of premixed planar flames interacting with turbulence. The first part of this work was the development of the KAUST Adaptive Reacting Flow Solver (KARFS). KARFS is a high order compressible reacting flow solver using detailed chemical kinetics mechanisms; it is capable of running on various types of heterogeneous computational architectures. In this work, it was shown that KARFS is capable of running efficiently on both CPUs and GPUs. The second part of this work concerned numerical tools for direct numerical simulations of planar premixed flames, such as linear turbulence forcing and dynamic inlet control. Previous DNS of premixed turbulent flames injected velocity fluctuations at an inlet. Turbulence injected at the inlet decayed significantly before reaching the flame, which created a necessity to inject stronger fluctuations than needed. A solution to this issue is to maintain turbulence strength on the way to the flame using turbulence forcing. Therefore, linear turbulence forcing was implemented into KARFS to enhance turbulence intensity. Linear turbulence forcing developed previously by other groups was corrected with a net added momentum removal mechanism to prevent mean
1992-01-01
Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, fluid mechanics including fluid dynamics, acoustics, and combustion, aerodynamics, and computer science during the period 1 Apr. 1992 - 30 Sep. 1992 is summarized.
Peer-to-Peer Secure Multi-Party Numerical Computation Facing Malicious Adversaries
Bickson, Danny; Dolev, Danny; Pinkas, Benny
2009-01-01
We propose an efficient framework for enabling secure multi-party numerical computations in a Peer-to-Peer network. This problem arises in a range of applications such as collaborative filtering, distributed computation of trust and reputation, monitoring, and other tasks, where the computing nodes are expected to preserve the privacy of their inputs while performing a joint computation of a certain function. Although there is a rich literature in the field of distributed systems security concerning secure multi-party computation, in practice it is hard to deploy those methods in very large scale Peer-to-Peer networks. In this work, we try to bridge the gap between theoretical algorithms in the security domain and a practical Peer-to-Peer deployment. We consider two security models. The first is the semi-honest model where peers correctly follow the protocol, but try to reveal private information. We provide three possible schemes for secure multi-party numerical computation for this model and identify a singl...
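A minimal building block for semi-honest secure numerical computation is an additive secret-sharing sum; the sketch below is illustrative of that primitive, not the paper's protocols, and the field modulus is an assumed choice.

```python
import random

# Additive secret sharing over a prime field for a semi-honest secure sum.
P = 2**61 - 1  # assumed prime modulus

def share(secret, n):
    """Split `secret` into n additive shares that sum to it modulo P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def secure_sum(shares_per_secret):
    """Each peer j locally sums the j-th share of every secret; combining the
    peers' partial sums reveals only the total, never individual inputs."""
    partials = [sum(col) % P for col in zip(*shares_per_secret)]
    return sum(partials) % P
```

Each individual share is uniformly random, so no single peer learns anything about any input; only the combination of all partial sums reveals the aggregate.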
Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.
2016-10-01
As the most powerful CW sources of coherent radiation in the sub-terahertz to terahertz frequency range the gyrotrons have demonstrated a remarkable potential for numerous novel and prospective applications in the fundamental physical research and the technologies. Among them are powerful gyrotrons for electron cyclotron resonance heating (ECRH) and current drive (ECCD) of magnetically confined plasma in various reactors for controlled thermonuclear fusion (e.g., tokamaks and most notably ITER), high-frequency gyrotrons for sub-terahertz spectroscopy (for example NMR-DNP, XDMR, study of the hyperfine structure of positronium, etc.), gyrotrons for thermal processing and so on. Modelling and simulation are indispensable tools for numerical studies, computer-aided design (CAD) and optimization of such sophisticated vacuum tubes (fast-wave devices) operating on a physical principle known as electron cyclotron resonance maser (ECRM) instability. During the recent years, our research team has been involved in the development of physical models and problem-oriented software packages for numerical analysis and CAD of different gyrotrons in the framework of a broad international collaboration. In this paper we present the current status of our simulation tools (GYROSIM and GYREOSS packages) and illustrate their functionality by results of numerical experiments carried out recently. Finally, we provide an outlook on the envisaged further development of the computer codes and the computational modules belonging to these packages and specialized to different subsystems of the gyrotrons.
Fazanaro, Filipe I.; Soriano, Diogo C.; Suyama, Ricardo; Madrid, Marconi K.; Oliveira, José Raimundo de; Muñoz, Ignacio Bravo; Attux, Romis
2016-08-01
The characterization of nonlinear dynamical systems and their attractors in terms of invariant measures, basins of attraction and the structure of their vector fields usually outlines a task strongly related to the underlying computational cost. In this work, the practical aspects related to the use of parallel computing, especially the use of Graphics Processing Units (GPUs) and of the Compute Unified Device Architecture (CUDA), are reviewed and discussed in the context of nonlinear dynamical systems characterization. In this work such characterization is performed by obtaining both local and global Lyapunov exponents for the classical forced Duffing oscillator. The local divergence measure was employed in the computation of the Lagrangian Coherent Structures (LCS), revealing the general organization of the flow according to the obtained separatrices, while the global Lyapunov exponents were used to characterize the attractors obtained under one or more bifurcation parameters. These simulation sets also illustrate the required computation time and the speedup gains provided by different parallel computing strategies, justifying the employment and the relevance of GPUs and CUDA in such an extensive numerical approach. Finally, more than simply providing an overview supported by a representative set of simulations, this work also aims to be a unified introduction to the use of the mentioned parallel computing tools in the context of nonlinear dynamical systems, providing codes and examples to be executed in MATLAB and in the CUDA environment, something that is usually fragmented across different scientific communities and restricted to specialists in parallel computing strategies.
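The global-exponent computation rests on Benettin-style renormalization of a tangent vector. The sketch below applies it to a linear test system with a known answer rather than the forced Duffing oscillator, and uses a plain CPU loop instead of the GPU strategies discussed in the work.

```python
import numpy as np

# Benettin-style estimate of the largest Lyapunov exponent via repeated
# renormalization of a tangent vector evolved by y' = A @ y.
def largest_le(A, dt=0.02, steps=100_000):
    y = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(steps):
        # classical RK4 step of the tangent dynamics
        k1 = A @ y
        k2 = A @ (y + 0.5 * dt * k1)
        k3 = A @ (y + 0.5 * dt * k2)
        k4 = A @ (y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        norm = np.linalg.norm(y)
        log_growth += np.log(norm)
        y /= norm  # renormalize to avoid under/overflow
    return log_growth / (steps * dt)

# Damped linear oscillator: eigenvalues -0.1 +/- i*sqrt(0.99), so the largest
# Lyapunov exponent is exactly -0.1.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
le = largest_le(A)
```

For a nonlinear system such as the forced Duffing oscillator, the same loop runs alongside the nonlinear trajectory with A replaced by the Jacobian evaluated along it; that per-trajectory independence is what makes the computation embarrassingly parallel on GPUs.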
M. Boumaza
2015-07-01
Full Text Available Transient convection heat transfer is of fundamental interest in many industrial and environmental situations, as well as in electronic devices and security of energy systems. Transient fluid flow problems are among the more difficult to analyze and yet are very often encountered in modern day technology. The main objective of this research project is to carry out a theoretical and numerical analysis of transient convective heat transfer in vertical flows, when the thermal field is due to different kinds of variation, in time and space of some boundary conditions, such as wall temperature or wall heat flux. This is achieved by the development of a mathematical model and its resolution by suitable numerical methods, as well as performing various sensitivity analyses. These objectives are achieved through a theoretical investigation of the effects of wall and fluid axial conduction, physical properties and heat capacity of the pipe wall on the transient downward mixed convection in a circular duct experiencing a sudden change in the applied heat flux on the outside surface of a central zone.
Numerical approximation on computing partial sum of nonlinear Schroedinger eigenvalue problems
Jiachang Sun; Dingsheng Wang; et al.
2001-01-01
In computing the electronic structure and energy bands of multi-particle systems, a large number of problems require obtaining the partial sum of densities and energies using "first principles". In the ordinary method, the so-called self-consistency approach, the procedure is limited to small scales because of its high computational complexity. In this paper, the problem of computing the partial sum for a class of nonlinear Schroedinger eigenvalue equations is transformed into a constrained functional minimization. By space decomposition and the Rayleigh-Schroedinger method, an approximating formula for the minimum is provided. The numerical experiments show that this formula is more precise and that its quantity of computation is smaller.
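A linear toy version of the underlying variational fact can be sketched directly: the partial sum of the k smallest eigenvalues of a symmetric matrix A equals the minimum of trace(XᵀAX) over orthonormal n-by-k matrices X. The matrix, shift, and k below are illustrative, and this linear sketch does not capture the nonlinear Schroedinger case the paper treats.

```python
import numpy as np

# Find the minimizing k-dimensional subspace by orthogonal (subspace)
# iteration on M = shift*I - A, whose top eigenvectors are A's bottom ones.
n, k = 6, 2
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian stencil
shift = 4.0                    # eigenvalues of this A lie in (0, 4)
M = shift * np.eye(n) - A

rng = np.random.default_rng(0)
X = rng.standard_normal((n, k))
for _ in range(300):
    X, _ = np.linalg.qr(M @ X)  # power iteration on a k-dimensional subspace

partial_sum = float(np.trace(X.T @ A @ X))  # = sum of the 2 smallest eigenvalues
```

The converged trace matches the sum of the two smallest eigenvalues computed by a direct eigensolver, which is the quantity the partial-sum formulation targets without diagonalizing the full spectrum.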
Piv Method and Numerical Computation for Prediction of Liquid Steel Flow Structure in Tundish
Cwudziński A.
2015-04-01
Full Text Available This paper presents the results of computer simulations and laboratory experiments carried out to describe the motion of steel flow in the tundish. The facility under investigation is a single-nozzle tundish designed for casting concast slabs. For the validation of the numerical model and verification of the hydrodynamic conditions occurring in the examined tundish furniture variants, obtained from the computer simulations, a physical model of the tundish was employed. State-of-the-art vector flow field analysis measuring systems developed by LaVision were used in the laboratory tests. Computer simulations of liquid steel flow were performed using the commercial program Ansys-Fluent®. In order to obtain a complete hydrodynamic picture in the tundish furniture variants tested, the computer simulations were performed for both isothermal and non-isothermal conditions.
Hofland, G.S.; Barton, C.C.
1990-10-01
The computer program FREQFIT is designed to perform regression and statistical chi-squared goodness of fit analysis on one-dimensional or two-dimensional data. The program features an interactive user dialogue, numerous help messages, an option for screen or line printer output, and the flexibility to use practically any commercially available graphics package to create plots of the program's results. FREQFIT is written in Microsoft QuickBASIC, for IBM-PC compatible computers. A listing of the QuickBASIC source code for the FREQFIT program, a user manual, and sample input data, output, and plots are included. 6 refs., 1 fig.
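The FREQFIT record above centers on chi-squared goodness-of-fit analysis. As a minimal sketch of the underlying statistic, in Python rather than the original QuickBASIC, with hypothetical binned counts:

```python
def chi_squared_statistic(observed, expected):
    """Pearson chi-squared statistic: sum over bins of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: do binned counts fit a uniform expectation?
observed = [18, 22, 21, 19, 20]
expected = [20.0] * 5          # uniform model with the same total count
chi2 = chi_squared_statistic(observed, expected)
dof = len(observed) - 1        # degrees of freedom for a fully specified model
print(chi2, dof)
```

The statistic would then be compared against the chi-squared distribution with `dof` degrees of freedom to accept or reject the fit.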
Lavarini, C.; Attal, M.; Kirstein, L. A.
2016-12-01
Detrital heavy minerals, particularly zircon, have been used to learn about the early evolution of the Earth and are often the only record we have of events that have affected the geological evolution of rocks at the Earth's surface (Watson and Harrison, 2005). Recently, many studies have focused on the investigation of natural and artificial bias in detrital mineral analysis in order to enhance the technique's reliability (e.g., Moecher and Samson, 2006). However, in spite of the widely-known influence of physical abrasion, no attempts have been made to quantitatively assess how it potentially biases abrasion-driven products such as the mineral assemblages on which geothermochronology relies. Here, through numerical modelling, we explore how varying a series of parameters (rock erodibility, zircon fertility, sediment travel distance, source area lithology, and initial bedload ratio) yields different river loads and zircon signatures over the fluvial system. We assume that pebbles are abraded according to the commonly used Sternberg's law (1875), that fining due to selective sorting is negligible, and that all abrasion products are in the sand size fraction. Our results highlight that the spatial location of lithologies and variations in erodibility strongly influence the release of zircon grains into the sand fraction. Even in a scenario where a catchment is made of two lithologies with identical properties, the one further from the outlet contributes relatively more zircon to the sand fraction collected at the outlet. The experiments also show the strong influence of fertility on zircon abundance in sands, and that bias increases with basin size due to the exponential nature of abrasion. These results highlight that abrasion must be carefully accounted for as a natural bias in the investigation of detrital zircons for provenance research.
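The abstract above assumes pebble abrasion follows Sternberg's law, i.e. exponential mass loss with travel distance, with all lost mass entering the sand fraction. A minimal sketch of that assumption (the abrasion coefficient and initial mass are hypothetical, not values from the study):

```python
import math

def pebble_mass(m0, k, x):
    """Sternberg's law: pebble mass decays exponentially with travel distance x (km)."""
    return m0 * math.exp(-k * x)

def sand_released(m0, k, x):
    """Mass converted to sand over distance x, assuming all abrasion products are sand-sized."""
    return m0 - pebble_mass(m0, k, x)

m0 = 1.0   # initial pebble mass (kg), hypothetical
k = 0.02   # abrasion coefficient (1/km), hypothetical lithology-dependent value
for x in (0.0, 50.0, 100.0):
    print(x, pebble_mass(m0, k, x), sand_released(m0, k, x))
```

The exponential form is what makes the bias grow with basin size: pebbles from distant sources release disproportionately more of their mass as sand before reaching the outlet.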
Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.
2016-05-01
Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed.
EVOLVE : a Bridge between Probability, Set Oriented Numerics and Evolutionary Computation
Tantar, Alexandru-Adrian; Bouvry, Pascal; Moral, Pierre; Legrand, Pierrick; Coello, Carlos; Schütze, Oliver; EVOLVE 2011
2013-01-01
The aim of this book is to provide strong theoretical support for understanding and analyzing the behavior of evolutionary algorithms, as well as to create a bridge between probability, set-oriented numerics and evolutionary computation. The volume gathers a collection of contributions presented at the EVOLVE 2011 international workshop, held in Luxembourg, May 25-27, 2011, coming from invited speakers and also from selected regular submissions. The aim of EVOLVE is to unify the perspectives offered by probability, set-oriented numerics and evolutionary computation. EVOLVE focuses on challenging aspects that arise at the passage from theory to new paradigms and practice, elaborating on the foundations of evolutionary algorithms and theory-inspired methods merged with cutting-edge techniques that ensure performance guarantee factors. EVOLVE is also intended to foster a growing interest in robust and efficient methods with a sound theoretical background. The chapters enclose challenging theoret...
Removing the Correlation Term in Option Pricing Heston Model: Numerical Analysis and Computing
R. Company
2013-01-01
Full Text Available This paper deals with the numerical solution of an option pricing stochastic volatility model described by a time-dependent, two-dimensional convection-diffusion reaction equation. Firstly, the mixed spatial derivative of the partial differential equation (PDE) is removed by means of the classical technique for reduction of second-order linear partial differential equations to canonical form. An explicit difference scheme with positive coefficients and only a five-point computational stencil is constructed. The boundary conditions are adapted to the boundaries of the rhomboid transformed numerical domain. Consistency of the scheme with the PDE is shown and stepsize discretization conditions that guarantee stability are established. Illustrative numerical examples are included.
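The reduction to canonical form mentioned above removes the mixed derivative by rotating the spatial coordinates. A minimal sketch of that classical step for a generic second-order operator a·u_xx + 2b·u_xy + c·u_yy (the coefficients below are illustrative, not the Heston model's):

```python
import math

def rotation_angle(a, b, c):
    """Rotation angle that removes the cross term from a*u_xx + 2b*u_xy + c*u_yy."""
    return 0.5 * math.atan2(2.0 * b, a - c)

def rotated_coefficients(a, b, c, theta):
    """Second-order coefficients after rotating coordinates by theta."""
    ct, st = math.cos(theta), math.sin(theta)
    a2 = a * ct * ct + 2.0 * b * ct * st + c * st * st
    c2 = a * st * st - 2.0 * b * ct * st + c * ct * ct
    b2 = (c - a) * st * ct + b * (ct * ct - st * st)   # should vanish
    return a2, b2, c2

# Illustrative constant coefficients (in the Heston PDE they depend on s and v).
a, b, c = 2.0, 0.6, 0.8
theta = rotation_angle(a, b, c)
a2, b2, c2 = rotated_coefficients(a, b, c, theta)
print(theta, a2, b2, c2)
```

After the rotation the cross coefficient `b2` is zero, so a standard five-point stencil suffices for the transformed equation.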
Stinis, Panagiotis
2010-01-01
We present numerical results for the solution of the 1D critical nonlinear Schrödinger equation with periodic boundary conditions and initial data that give rise to a finite time singularity. We construct, through the Mori-Zwanzig formalism, a reduced model which allows us to follow the solution after the formation of the singularity. The computed post-singularity solution exhibits the same characteristics as the post-singularity solutions constructed recently by Terence Tao.
Fretz Christian
2010-02-01
Full Text Available Abstract Background Radio Frequency Identification (RFID) devices are becoming more and more essential for patient safety in hospitals. The purpose of this study was to determine patient safety, data reliability and signal loss for on-skin RFID devices worn during magnetic resonance imaging (MRI) and computed tomography (CT) scanning. Methods Sixty RFID tags of the type I-Code SLI, 13.56 MHz, ISO 18000-3.1 were tested: thirty of type 1, an RFID tag with a 76 × 45 mm aluminum-etched antenna, and thirty of type 2, a tag with a 31 × 14 mm copper-etched antenna. The signal loss, material movement and heat tests were performed in a 1.5 T and a 3 T MR system. For data integrity, the tags were additionally tested during CT scanning. Standardized function tests were performed with all transponders before and after all imaging studies. Results There was no memory loss or data alteration in the RFID tags after MRI and CT scanning. Concerning heating (a maximum of 3.6°C) and device movement (below 1 N/kg), no relevant influence was found. Concerning signal loss (artifacts of 2-4 mm), interpretability of MR images was impaired when superficial structures such as skin, subcutaneous tissues or tendons were assessed. Conclusions Patients wearing RFID wristbands are safe in 1.5 T and 3 T MR scanners using normal operation mode for the RF field. The findings are specific to the RFID tags that underwent testing.
Task analysis and computer aid development for human reliability analysis in nuclear power plants
Yoon, W. C.; Kim, H.; Park, H. S.; Choi, H. H.; Moon, J. M.; Heo, J. Y.; Ham, D. H.; Lee, K. K.; Han, B. T. [Korea Advanced Institute of Science and Technology, Taejeon (Korea)]
2001-04-01
The importance of human reliability analysis (HRA), which predicts the likelihood of human error in quantitative and qualitative terms, has grown steadily because of the effect of human errors on system safety. HRA requires a task analysis as a prerequisite step, but existing task analysis techniques leave the collection of information about the situations in which human errors occur entirely to the HRA analyst. This makes the results of task analysis inconsistent and unreliable. To address this problem, KAERI developed the structural information analysis (SIA) method, which helps analysts examine task structure and situations systematically. In this study, the SIA method was evaluated by HRA experts, and a prototype computerized supporting system named CASIA (Computer Aid for SIA) was developed to support performing HRA with the SIA method. Additionally, by applying the SIA method to emergency operating procedures, we derived generic task types used in emergencies and accumulated the analysis results in the CASIA database. The CASIA is expected to help HRA analysts perform their analyses more easily and consistently. As more analyses are performed and more data accumulate in the CASIA database, analysts will be able to share their analysis experience freely, thereby improving the quality of HRA. 35 refs., 38 figs., 25 tabs. (Author)
Accuracy and reliability of stitched cone-beam computed tomography images
Egbert, Nicholas [Private Practice, Reconstructive Dental Specialists of Utah, Salt Lake (United States); Cagna, David R.; Ahuja, Swati; Wicks, Russell A. [Dept. of Prosthodontics, University of Tennessee Health Science Center College of Dentistry, Memphis (United States)
2015-03-15
This study was performed to evaluate the linear distance accuracy and reliability of stitched small field of view (FOV) cone-beam computed tomography (CBCT) reconstructed images for the fabrication of implant surgical guides. Three gutta percha points were fixed on the inferior border of a cadaveric mandible to serve as control reference points. Ten additional gutta percha points, representing fiduciary markers, were scattered on the buccal and lingual cortices at the level of the proposed complete denture flange. A digital caliper was used to measure the distance between the reference points and fiduciary markers, which represented the anatomic linear dimension. The mandible was scanned using small FOV CBCT, and the images were then reconstructed and stitched using the manufacturer's imaging software. The same measurements were then taken with the CBCT software. The anatomic linear dimension measurements and stitched small FOV CBCT measurements were statistically evaluated for linear accuracy. The mean difference between the anatomic linear dimension measurements and the stitched small FOV CBCT measurements was found to be 0.34 mm with a 95% confidence interval of +0.24 - +0.44 mm and a mean standard deviation of 0.30 mm. The difference between the control and the stitched small FOV CBCT measurements was insignificant within the parameters defined by this study. The proven accuracy of stitched small FOV CBCT data sets may allow image-guided fabrication of implant surgical stents from such data sets.
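The accuracy statistics reported above (mean difference, standard deviation, 95% confidence interval) follow from a standard paired-difference computation. A minimal sketch with hypothetical paired measurements, not the study's raw data:

```python
import statistics

def mean_difference_ci(caliper_mm, cbct_mm, z=1.96):
    """Mean paired difference with an approximate normal 95% confidence interval."""
    diffs = [c - m for c, m in zip(cbct_mm, caliper_mm)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    half = z * sd / len(diffs) ** 0.5
    return mean, sd, (mean - half, mean + half)

# Hypothetical paired linear measurements (mm).
caliper = [10.1, 15.3, 20.0, 25.4, 30.2, 12.7]   # anatomic (digital caliper)
cbct    = [10.4, 15.6, 20.3, 25.8, 30.5, 13.1]   # stitched CBCT software
mean, sd, ci = mean_difference_ci(caliper, cbct)
print(round(mean, 3), round(sd, 3), ci)
```

A CI that excludes clinically relevant error margins is what supports using such data sets for surgical guide fabrication.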
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model, which might include perturbing forces such as the gravitational effect from multiple third bodies and solar radiation pressure, is used, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time of flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
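The abstract contrasts numerically computed STMs with finite differencing. A minimal sketch of the finite-difference route for a toy linear system (a harmonic oscillator, where the exact STM is known for comparison; the dynamics, step counts, and perturbation size are illustrative, not the paper's method):

```python
import math

def rk4_flow(x0, t, n=200):
    """Integrate xdot = (x[1], -x[0]) (harmonic oscillator) with classical RK4."""
    def f(x):
        return (x[1], -x[0])
    h = t / n
    x = list(x0)
    for _ in range(n):
        k1 = f(x)
        k2 = f([x[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = f([x[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = f([x[i] + h * k3[i] for i in range(2)])
        x = [x[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
    return x

def numerical_stm(x0, t, eps=1e-6):
    """Central-difference state transition matrix d(flow)/d(x0)."""
    stm = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp = list(x0); xp[j] += eps
        xm = list(x0); xm[j] -= eps
        fp, fm = rk4_flow(xp, t), rk4_flow(xm, t)
        for i in range(2):
            stm[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
    return stm

t = 1.0
stm = numerical_stm([1.0, 0.0], t)
# Exact STM of the oscillator for comparison: a rotation by angle t.
exact = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
print(stm)
print(exact)
```

For this linear toy problem the finite-difference STM is accurate, but for stiff, perturbed dynamics the step-size/round-off trade-off is exactly what motivates integrating the variational equations instead.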
Computational model for supporting SHM systems design: Damage identification via numerical analyses
Sartorato, Murilo; de Medeiros, Ricardo; Vandepitte, Dirk; Tita, Volnei
2017-02-01
This work presents a computational model to simulate thin structures monitored by piezoelectric sensors in order to support the design of SHM systems which use vibration-based methods. Thus, a new shell finite element model was proposed and implemented via a User Element subroutine (UEL) in the commercial package ABAQUS™. This model was based on a modified First Order Shear Theory (FOST) for piezoelectric composite laminates. After that, damaged cantilever beams with two piezoelectric sensors in different positions were investigated using experimental analyses and the proposed computational model. A maximum difference in the magnitude of the FRFs between numerical and experimental analyses of 7.45% was found near the resonance regions. For damage identification, different levels of damage severity were evaluated by seven damage metrics, including one proposed by the present authors. Numerical and experimental damage metric values were compared, showing a good correlation in terms of tendency. Finally, based on comparisons of numerical and experimental results, the potential and limitations of the proposed computational model for supporting SHM system design are discussed.
Phan, Ngoc Quan; Blome, Christine; Fritz, Fleur; Gerss, Joachim; Reich, Adam; Ebata, Toshiya; Augustin, Matthias; Szepietowski, Jacek C; Ständer, Sonja
2012-09-01
The most commonly used tool for self-report of pruritus intensity is the visual analogue scale (VAS). Similar tools are the numerical rating scale (NRS) and verbal rating scale (VRS). In the present study, initiated by the International Forum for the Study of Itch to assess the reliability of these tools, 471 randomly selected patients with chronic itch (200 males, 271 females, mean age 58.44 years) recorded their pruritus intensity on VAS (100-mm line), NRS (0-10) and VRS (four-point) scales. Re-test reliability was analysed in a subgroup of 250 patients after one hour. Statistical analysis showed a high reliability and concurrent validity (r > 0.8), and the scales showed a high correlation with one another. In conclusion, high reliability and concurrent validity were found for VAS, NRS and VRS. On re-test, higher correlation and fewer missing values were observed. A training session before starting a clinical trial is recommended.
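The re-test reliability reported above is, at its core, a correlation between repeated ratings. A minimal sketch of that computation with hypothetical VAS scores, not the study's data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical VAS scores (mm) at baseline and at the one-hour re-test.
vas_t0 = [62, 35, 80, 14, 55, 71, 40, 90]
vas_t1 = [60, 38, 78, 16, 57, 69, 43, 88]
r = pearson_r(vas_t0, vas_t1)
print(round(r, 3))
```

Values of r above 0.8, as in the study, indicate that patients reproduce their ratings closely on re-test.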
Numerical model for computation of effective and ambient dose equivalent at flight altitudes
Mishev Alexander
2015-01-01
Full Text Available A numerical model for assessment of the effective dose and ambient dose equivalent produced by secondary cosmic ray particles of galactic and solar origin at commercial aircraft altitudes is presented. The model represents a full-chain analysis based on ground-based measurements of cosmic rays, from particle spectral and angular characteristics to dose estimation. The model is based on newly computed yield functions and realistic propagation of cosmic rays in the Earth's magnetosphere. The yield functions are computed using a straightforward full Monte Carlo simulation of the atmospheric cascade induced by primary protons and α-particles and subsequent conversion of secondary particle fluence (neutrons, protons, gammas, electrons, positrons, muons and charged pions) to effective dose or ambient dose equivalent. The ambient dose equivalent is compared with reference data at various conditions, such as rigidity cut-off and level of solar activity. The method is applied for computation of the effective dose rate at flight altitude during the ground level enhancement of 13 December 2006. The solar proton spectra are derived using neutron monitor data. The computation of the effective dose rate during the event explicitly considers the derived anisotropy, i.e. the pitch angle distribution, as well as the propagation of the solar protons in the magnetosphere of the Earth.
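The conversion from particle fluence to dose described above amounts to folding a primary spectrum with a yield (fluence-to-dose) function over energy. A minimal sketch of that folding step, with hypothetical spectrum and yield shapes in arbitrary units (not the model's actual functions):

```python
import math

def dose_rate(energies, spectrum, yield_fn):
    """Trapezoidal folding of a primary spectrum J(E) with a dose yield Y(E)."""
    total = 0.0
    for k in range(len(energies) - 1):
        e0, e1 = energies[k], energies[k + 1]
        f0 = spectrum(e0) * yield_fn(e0)
        f1 = spectrum(e1) * yield_fn(e1)
        total += 0.5 * (f0 + f1) * (e1 - e0)
    return total

def spectrum(e):
    """Hypothetical power-law differential proton flux (arbitrary units)."""
    return e ** -2.7

def yield_fn(e):
    """Hypothetical smooth fluence-to-dose yield function (arbitrary units)."""
    return 1.0 - math.exp(-e / 5.0)

energies = [1.0 + 0.1 * i for i in range(991)]   # 1-100 GeV grid
d = dose_rate(energies, spectrum, yield_fn)
print(d)
```

In the full model the yield functions themselves come from Monte Carlo cascade simulations, and the folding is additionally resolved over arrival direction for anisotropic events.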
Industrial Computer Reliability Management Platform
李春霞; 唐怀斌; 贺孝珍; 刘兴莉; 隆萍
2012-01-01
The characteristic quantities of industrial computer reliability are discussed. From the aspects of product design, development, production, and management, the technologies, methods, and management system that ensure the achievement and continuous growth of industrial computer reliability are put forward, and views on establishing an enterprise reliability management platform are presented.
Huang, Yan; Dessel, Jeroen Van; Nicolielo, Laura; Van de Casteele, Elke; Slagmolen, Pieter; Jacobs, Reinhilde
2015-01-01
Huang Y., Van Dessel J., Nicolielo L., Van de Casteele E., Slagmolen P., Jacobs R., ''The reliability of cone-beam computed tomography to analyze trabecular and cortical bone structures: an in-vitro study'', 24th annual congress of the European Association for Osseointegration - EAO 2015, September 24-26, 2015, Stockholm, Sweden.
Li, Yiming
2007-12-01
This symposium is an open forum for discussion on the current trends and future directions of physical modeling, mathematical theory, and numerical algorithm in electrical and electronic engineering. The goal is for computational scientists and engineers, computer scientists, applied mathematicians, physicists, and researchers to present their recent advances and exchange experience. We welcome contributions from researchers of academia and industry. All papers to be presented in this symposium have carefully been reviewed and selected. They include semiconductor devices, circuit theory, statistical signal processing, design optimization, network design, intelligent transportation system, and wireless communication. Welcome to this interdisciplinary symposium in International Conference of Computational Methods in Sciences and Engineering (ICCMSE 2007). Look forward to seeing you in Corfu, Greece!
Malkov, Ewgenij A.; Poleshkin, Sergey O.; Kudryavtsev, Alexey N.; Shershnev, Anton A.
2016-10-01
The paper presents the software implementation of the Boltzmann equation solver based on the deterministic finite-difference method. The solver allows one to carry out parallel computations of rarefied flows on a hybrid computational cluster with arbitrary number of central processor units (CPU) and graphical processor units (GPU). Employment of GPUs leads to a significant acceleration of the computations, which enables us to simulate two-dimensional flows with high resolution in a reasonable time. The developed numerical code was validated by comparing the obtained solutions with the Direct Simulation Monte Carlo (DSMC) data. For this purpose the supersonic flow past a flat plate at zero angle of attack is used as a test case.
Numerical Study of Geometric Multigrid Methods on CPU--GPU Heterogeneous Computers
Feng, Chunsheng; Xu, Jinchao; Zhang, Chen-Song
2012-01-01
The geometric multigrid method (GMG) is one of the most efficient solving techniques for discrete algebraic systems arising from many types of partial differential equations. GMG utilizes a hierarchy of grids or discretizations and reduces the error at a number of frequencies simultaneously. Graphics processing units (GPUs) have recently burst onto the scientific computing scene as a technology that has yielded substantial performance and energy-efficiency improvements. A central challenge in implementing GMG on GPUs, though, is that computational work on coarse levels cannot fully utilize the capacity of a GPU. In this work, we perform numerical studies of GMG on CPU--GPU heterogeneous computers. Furthermore, we compare our implementation with an efficient CPU implementation of GMG and with the most popular fast Poisson solver, Fast Fourier Transform, in the cuFFT library developed by NVIDIA.
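The GMG hierarchy described above can be illustrated with a minimal recursive V-cycle for the 1D Poisson problem (a CPU-only teaching sketch in Python, not the paper's GPU implementation; grid size and sweep counts are illustrative):

```python
import math

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing sweeps for -u'' = f with Dirichlet ends."""
    n = len(u) - 1
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def residual(u, f, h):
    n = len(u) - 1
    r = [0.0] * (n + 1)
    for i in range(1, n):
        r[i] = f[i] - (2.0 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction to the next coarser grid."""
    n = (len(r) - 1) // 2
    return [0.0] + [0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
                    for i in range(1, n)] + [0.0]

def prolong(e):
    """Linear-interpolation prolongation to the next finer grid."""
    n = len(e) - 1
    out = [0.0] * (2 * n + 1)
    for i in range(n + 1):
        out[2 * i] = e[i]
    for i in range(n):
        out[2 * i + 1] = 0.5 * (e[i] + e[i + 1])
    return out

def v_cycle(u, f, h):
    n = len(u) - 1
    if n == 2:                       # one interior unknown: solve exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = jacobi(u, f, h, 3)           # pre-smooth
    coarse = v_cycle([0.0] * (n // 2 + 1), restrict(residual(u, f, h)), 2 * h)
    e = prolong(coarse)
    u = [u[i] + e[i] for i in range(n + 1)]
    return jacobi(u, f, h, 3)        # post-smooth

n = 64
h = 1.0 / n
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(n + 1)]
u = [0.0] * (n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
err = max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(n + 1))
print(err)
```

The coarse levels here are where the GPU-utilization problem mentioned in the abstract arises: the grids shrink geometrically, so the lowest levels expose too little parallel work to fill a GPU.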
Afanas’ev, Victor P., E-mail: afanasyevvip@gmail.com [Department of General Physics and Nuclear Fusion, National Research University “Moscow Power Engineering Institute”, Krasnokazarmennaya, 14, Moscow 111250 (Russian Federation); Efremenko, Dmitry S. [Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Methodik der Fernerkundung (IMF), 82234 Oberpfaffenhofen (Germany); Kaplya, Pavel S., E-mail: pavel@kaplya.com [Department of General Physics and Nuclear Fusion, National Research University “Moscow Power Engineering Institute”, Krasnokazarmennaya, 14, Moscow 111250 (Russian Federation)
2016-07-15
Highlights: • The OKG-model is extended to finite thickness layers. • An efficient matrix technique for computing partial intensities is proposed. • Good agreement is obtained between computed partial intensities and experimental data. - Abstract: We present two novel methods for computing energy spectra and angular distributions of electrons emitted from multi-layer solids. They are based on the Ambartsumian–Chandrasekhar (AC) equations obtained by using the invariant imbedding method. The first method is analytical and relies on a linearization of the AC equations and the use of the small-angle approximation. The corresponding solution is in good agreement with that computed by using the Oswald–Kasper–Gaukler (OKG) model, which is extended to the case of layers of finite thickness. The second method is based on the discrete ordinate formalism and relies on a transformation of the AC equations to the algebraic Riccati and Lyapunov equations, which are solved by using the backward differentiation formula. Unlike the previous approach, this method can handle both linear and nonlinear equations. We analyze the applicability of the proposed methods to practical problems of computing REELS spectra. To demonstrate the efficiency of the proposed methods, several computational examples are considered. The obtained numerical and analytical solutions show good agreement with experimental data and Monte Carlo simulations. In addition, the impact of nonlinear terms in the Ambartsumian–Chandrasekhar equations is analyzed.
Deskovitz, Mark A; Weed, Nathan C; McLaughlan, Joseph K; Williams, John E
2016-04-01
The reliability of six Minnesota Multiphasic Personality Inventory-Second edition (MMPI-2) computer-based test interpretation (CBTI) programs was evaluated across a set of 20 commonly appearing MMPI-2 profile codetypes in clinical settings. Evaluation of CBTI reliability comprised examination of (a) interrater reliability, the degree to which raters arrive at similar inferences based on the same CBTI profile and (b) interprogram reliability, the level of agreement across different CBTI systems. Profile inferences drawn by four raters were operationalized using q-sort methodology. Results revealed no significant differences overall with regard to interrater and interprogram reliability. Some specific CBTI/profile combinations (e.g., the CBTI by Automated Assessment Associates on a within normal limits profile) and specific profiles (e.g., the 4/9 profile displayed greater interprogram reliability than the 2/4 profile) were interpreted with variable consensus (α range = .21-.95). In practice, users should consider that certain MMPI-2 profiles are interpreted more or less consensually and that some CBTIs show variable reliability depending on the profile.
Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian
2013-01-01
Reliability analysis of fiber-reinforced composite structures is a relatively unexplored field, and it is therefore expected that engineers and researchers trying to apply such an approach will meet certain challenges until more knowledge is accumulated. While doing the analyses included in the present paper, the authors have experienced some of the possible pitfalls on the way to completing a precise and robust reliability analysis for layered composites. Results showed that in order to obtain accurate reliability estimates it is necessary to account for the various failure modes described by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...
Walsh, Jonathan A., E-mail: walshjon@mit.edu [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Palmer, Todd S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, Todd J. [XTD-IDA: Theoretical Design, Integrated Design and Assessment, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2015-12-15
Highlights: • Generation of discrete differential scattering angle and energy loss cross sections. • Gauss–Radau quadrature utilizing numerically computed cross section moments. • Development of a charged particle transport capability in the Milagro IMC code. • Integration of cross section generation and charged particle transport capabilities. - Abstract: We investigate a method for numerically generating discrete scattering cross sections for use in charged particle transport simulations. We describe the cross section generation procedure and compare it to existing methods used to obtain discrete cross sections. The numerical approach presented here is generalized to allow greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data computed with this method compare favorably with discrete data generated with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code, Milagro. We verify the implementation of charged particle transport in Milagro with analytic test problems and we compare calculated electron depth–dose profiles with another particle transport code that has a validated electron transport capability. Finally, we investigate the integration of the new discrete cross section generation method with the charged particle transport capability in Milagro.
Computation of optimal unstable structures for a numerical weather prediction model
Buizza, R.; Tribbia, J.; Molteni, F.; Palmer, T.
1993-10-01
Numerical experiments have been performed to compute the fastest growing perturbations in a finite time interval for a complex numerical weather prediction model. The models used are the tangent forward and adjoint versions of the adiabatic primitive-equation model of the Integrated Forecasting System developed at the European Centre for Medium-Range Weather Forecasts and Météo France. These have been run with a horizontal truncation T21, with 19 vertical levels. The fastest growing perturbations are the singular vectors of the propagator of the forward tangent model with the largest singular values. An iterative Lanczos algorithm has been used for the numerical computation of the perturbations. Sensitivity of the calculations to different time intervals and to the norm used in the definition of the adjoint model has been analysed. The impact of normal mode initialization has also been studied. Two classes of fastest growing perturbations have been found; one is characterized by a maximum amplitude in the middle troposphere, while the other is confined to model layers close to the surface. It is shown that the latter is damped by the boundary layer physics in the full model. The linear evolution of the perturbations has been compared to the non-linear evolution when the perturbations are superimposed on a basic state in the T63, 19-level version of the ECMWF model.
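The fastest-growing perturbations above are the leading singular vectors of the tangent propagator. The paper uses a Lanczos algorithm; plain power iteration on MᵀM converges to the same leading singular pair and is sketched here on a hypothetical 3 × 3 propagator (the matrix entries are illustrative, not model output):

```python
import math

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def transpose(m):
    return [list(row) for row in zip(*m)]

def leading_singular(m, iters=500):
    """Power iteration on M^T M: largest singular value and right singular vector."""
    mt = transpose(m)
    v = [1.0] * len(m[0])
    for _ in range(iters):
        w = matvec(mt, matvec(m, v))          # apply M^T M
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]             # renormalize
    mv = matvec(m, v)
    sigma = math.sqrt(sum(x * x for x in mv))  # ||M v|| for unit v
    return sigma, v

# Hypothetical 3x3 tangent propagator of a linearized forecast model.
propagator = [[1.2, 0.4, 0.0],
              [0.0, 0.9, 0.3],
              [0.1, 0.0, 0.7]]
sigma, v = leading_singular(propagator)
print(sigma)  # amplification factor of the fastest-growing perturbation
```

The singular value is the perturbation's growth factor over the optimization interval; Lanczos is preferred operationally because it recovers several leading singular vectors at once from far fewer propagator applications.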
Lu, Tianfeng [Univ. of Connecticut, Storrs, CT (United States)
2017-02-16
The goal of the proposed research is to create computational flame diagnostics (CFLD) that are rigorous numerical algorithms for systematic detection of critical flame features, such as ignition, extinction, and premixed and non-premixed flamelets, and to understand the underlying physicochemical processes controlling limit flame phenomena, flame stabilization, turbulence-chemistry interactions and pollutant emissions etc. The goal has been accomplished through an integrated effort on mechanism reduction, direct numerical simulations (DNS) of flames at engine conditions and a variety of turbulent flames with transport fuels, computational diagnostics, turbulence modeling, and DNS data mining and data reduction. The computational diagnostics are primarily based on the chemical explosive mode analysis (CEMA) and a recently developed bifurcation analysis using datasets from first-principle simulations of 0-D reactors, 1-D laminar flames, and 2-D and 3-D DNS (collaboration with J.H. Chen and S. Som at Argonne, and C.S. Yoo at UNIST). Non-stiff reduced mechanisms for transportation fuels amenable for 3-D DNS are developed through graph-based methods and timescale analysis. The flame structures, stabilization mechanisms, local ignition and extinction etc., and the rate controlling chemical processes are unambiguously identified through CFLD. CEMA is further employed to segment complex turbulent flames based on the critical flame features, such as premixed reaction fronts, and to enable zone-adaptive turbulent combustion modeling.
Michael eNivala
2012-05-01
Full Text Available Intracellular calcium (Ca) cycling dynamics in cardiac myocytes is regulated by a complex network of spatially distributed organelles, such as the sarcoplasmic reticulum (SR), mitochondria, and myofibrils. In this study, we present a mathematical model of intracellular Ca cycling and numerical and computational methods for computer simulations. The model consists of a coupled Ca release unit (CRU) network, which includes an SR domain and a myoplasm domain. Each CRU contains 10 L-type Ca channels and 100 ryanodine receptor channels, with individual channels simulated stochastically using a variant of Gillespie's method, modified here to handle time-dependent transition rates. Both the SR domain and the myoplasm domain in each CRU are modeled by 5 × 5 × 5 voxels to maintain proper Ca diffusion. Advanced numerical algorithms implemented on graphical processing units were used for fast computational simulations. For a myocyte containing 100 × 20 × 10 CRUs, simulating one second of heart time takes about 10 minutes of machine time on a single NVIDIA Tesla C2050. Examples of simulated Ca cycling dynamics, such as Ca sparks, Ca waves, and Ca alternans, are shown.
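The stochastic channel simulation above uses a variant of Gillespie's method. A minimal sketch of the standard (time-homogeneous) Gillespie algorithm for a simple birth-death process (the rates and horizon are illustrative, not the CRU model's):

```python
import random

def gillespie_birth_death(n0, birth, death, t_end, rng):
    """Standard Gillespie SSA for X -> X+1 (rate birth) and X -> X-1 (rate death*X)."""
    t, n = 0.0, n0
    times, counts = [0.0], [n0]
    while t < t_end:
        a1 = birth                 # zeroth-order production propensity
        a2 = death * n             # first-order degradation propensity
        a0 = a1 + a2
        t += rng.expovariate(a0)   # exponential waiting time to next reaction
        if t >= t_end:
            break
        n += 1 if rng.random() * a0 < a1 else -1   # pick reaction by propensity
        times.append(t)
        counts.append(n)
    return times, counts

rng = random.Random(42)
times, counts = gillespie_birth_death(n0=0, birth=20.0, death=1.0, t_end=50.0, rng=rng)
# The long-run copy number should fluctuate around birth/death = 20.
tail = counts[len(counts) // 2:]
m = sum(tail) / len(tail)
print(m)
```

The paper's modification handles time-dependent propensities, where the exponential waiting-time draw above must be replaced by inversion of a time-integrated propensity.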
Van De Wiel, Marco
2016-04-01
Computer simulations and numerical experiments have become an increasingly important part of geomorphological investigation in the last decades. Process-based numerical models attempt to simulate real-world processes in a virtual environment which can be easily manipulated and studied. Conceptually, the experimental design of these simulation studies broadly falls in one of three categories: predictive modelling, explanatory modelling, and exploratory modelling. However, the epistemologies of these three modes of modelling are as of yet incomplete and not fully understood. Not only do the three modes of modelling have different underlying assumptions, they also have different criteria to establish validity and different limitations on the interpretations and inferences that can be made. These differences are usually only implicitly recognized, if at all, in computational geomorphology studies. This presentation provides an explicit, though not necessarily exhaustive, overview of the epistemological differences between the three modes of computational modelling, and of the limitations this imposes on what can and cannot be learned from simulation experiments.
L. A. F. de Souza
Full Text Available The experimental results of testing structures or structural parts are limited and sometimes difficult to interpret. The development of mathematical-numerical models is thus needed to complement experimental analysis and to allow the generalization of results to different structures and types of loading. This article presents two computational studies of reinforced concrete structure problems from the literature, using the Finite Element Method. In these analyses, the concrete is simulated with the classical damage model proposed by Mazars and the steel with a bilinear elastoplastic constitutive model. Numerical results show the validity of coupling these constitutive theories with the finite element discretization technique in the simulation of linear and two-dimensional reinforced concrete structures.
Fokas, A S; Marinakis, V
2004-01-01
The modern imaging techniques of Positron Emission Tomography and Single Photon Emission Computed Tomography are not only two of the most important tools for studying the functional characteristics of the brain, but they now also play a vital role in several areas of clinical medicine, including neurology, oncology and cardiology. The basic mathematical problems associated with these techniques are the construction of the inverse of the Radon transform and of the inverse of the so-called attenuated Radon transform, respectively. We first show that, by employing mathematical techniques developed in the theory of nonlinear integrable equations, it is possible to obtain analytic formulas for these two inverse transforms. We then present algorithms for the numerical implementation of these analytic formulas, based on approximating the given data in terms of cubic splines. Several numerical tests are presented which suggest that our algorithms are capable of producing accurate reconstructions for realistic phantoms.
Shi, Guangyuan; Li, Song; Huang, Ke; Li, Zile; Zheng, Guoxing
2016-10-01
We have developed a new numerical ray-tracing approach to computing the LIDAR signal power function, in which the light's round-trip propagation is analyzed by geometrical optics and a simple experiment is used to acquire the laser intensity distribution. It is more accurate and flexible than previous methods. We discuss in particular the relationship between the inclination angle and the dynamic range of the detector output signal in a biaxial LIDAR system. Results indicate that an appropriate negative angle can compress the signal dynamic range. The technique has been validated by comparison with real measurements.
Efficient numerical method for computation of thermohydrodynamics of laminar lubricating films
Elrod, Harold G.
1989-01-01
The purpose of this paper is to describe an accurate, yet economical, method for computing temperature effects in laminar lubricating films in two dimensions. The procedure presented here is a sequel to one presented in Leeds in 1986 that was carried out for the one-dimensional case. Because of the marked dependence of lubricant viscosity on temperature, the effect of viscosity variation both across and along a lubricating film can dwarf other deviations from ideal constant-property lubrication. In practice, a thermohydrodynamics program will involve simultaneous solution of the film lubrication problem, together with heat conduction in a solid, complex structure. The extent of computation required makes economy in numerical processing of utmost importance. In pursuit of such economy, we here use techniques similar to those for Gaussian quadrature. We show that, for many purposes, the use of just two properly positioned temperatures (Lobatto points) characterizes well the transverse temperature distribution.
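The claim that a few properly positioned Lobatto points characterize a transverse profile can be illustrated with the 4-point Gauss-Lobatto rule, whose two interior nodes sit at plus/minus 1/sqrt(5). This is a generic quadrature sketch, not Elrod's actual thermohydrodynamic scheme.

```python
import math

# 4-point Gauss-Lobatto rule on [-1, 1]: the two endpoints plus the two
# interior "Lobatto points" at +/- 1/sqrt(5); exact for polynomials of
# degree up to 2n - 3 = 5.
NODES = [-1.0, -1.0 / math.sqrt(5.0), 1.0 / math.sqrt(5.0), 1.0]
WEIGHTS = [1.0 / 6.0, 5.0 / 6.0, 5.0 / 6.0, 1.0 / 6.0]

def lobatto_integrate(f):
    """Approximate the integral of f over [-1, 1]."""
    return sum(w * f(x) for w, x in zip(WEIGHTS, NODES))

# a quartic "temperature profile" across the film is integrated exactly
result = lobatto_integrate(lambda x: x**4)      # equals 2/5 up to round-off
```

The point of a Lobatto rule here is that the film surfaces (the endpoints) are among the nodes, so wall temperatures enter the rule directly while only two interior temperatures are needed.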
Numerical Simulation of Bird Flight Using Both CFD and Computational Flight Dynamics
Ueno, Yosuke; Nakamura, Yoshiaki
A numerical simulation method taking into account both aerodynamics and flight dynamics has been developed to simulate the flight of a low-speed flying object undergoing unsteady deformation. The method can also be applied to the unsteady motion of small vehicles such as micro air vehicles (MAVs). In the present study, we consider a bird and simulate its flight, examining in detail the effect of fluid forces on the bird's flying motion, based on CFD×CFD: Computational Fluid Dynamics (CFD) coupled with Computational Flight Dynamics. The simulated results show that the bird can generate enough lift and thrust to fly by flapping its wings, and that it can maintain level flight by adjusting its oscillation frequency. The present method is therefore promising for studying the aerodynamics and flight dynamics of a moving object with a morphing shape.
Achieving high performance in numerical computations on RISC workstations and parallel systems
Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)
1997-08-20
The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this peak speed from modern computer architectures. In this tutorial the authors give scientists and engineers involved in numerically demanding calculations and simulations the basic knowledge needed to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program with the computer architecture's optimization opportunities in mind can often speed it up by factors of 10-100; optimizing a program can therefore be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed to obtain an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels, so the effort involved is acceptable.
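One architecture-aware technique such tutorials typically cover is loop tiling (cache blocking), which reuses a small block of data while it is still in cache. The sketch below only illustrates the access pattern and checks that tiling preserves the result; speedups of 10-100 require a compiled language, not Python.

```python
def matmul_naive(A, B):
    """Reference triple loop, ikj order."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            for j in range(n):
                C[i][j] += aik * B[k][j]
    return C

def matmul_blocked(A, B, bs=4):
    """Loop tiling: work on bs x bs blocks so each block of B is reused
    while it would still sit in cache, instead of streaming whole rows."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        aik = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += aik * B[k][j]
    return C
```

The block size bs would be tuned to the cache size of the target machine; the arithmetic is identical, only the traversal order changes.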
A Method to Predict the Reliability of Military Ground Vehicles Using High Performance Computing
2006-11-01
optimization software, called RBDO. All three were ported from the University of Iowa to TARDEC's HPC center and installed to run (see figure 3)... RBDO demands multiple reliability analyses at a given design. In the pilot study, refined reliability analyses for n number of the active/violated... preprocessor step to determine 'hot spots'... [figure residue: workflow linking a pre-processor, morpher-based geometry, morpher mesh, an ANSYS, NASTRAN, or ABAQUS solver, and the DRAW, DSO, and RBDO/PBDO tools]
2008-01-01
The inherent noise in positron emission tomography (PET) leads to instability of quantitative indicators, which may affect the diagnostic accuracy in differentiating malignant from benign lesions in the management of lung cancer. In this paper, the reliability of the retention index (RI) is systematically investigated by computer simulation for the dual-time-point imaging protocol. The area under the receiver operating characteristic (ROC) curve is used to evaluate the optimal protocol. Results demonstrate that the reliability of RI is affected by several factors, including noise level, lesion type, and imaging schedule. RIs with small absolute values suffer from worse reliability than larger ones. The ROC curves show that an overly delayed second scan does not further improve diagnostic performance, whereas an early first scan is preferable. The optimization method based on ROC analysis can easily be extended to comprise as many lesion types as possible.
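The retention index and its ROC evaluation can be sketched as follows. The SUV values, the Gaussian noise model, and the sample sizes here are hypothetical, chosen only to illustrate the computation; AUC is computed via the Mann-Whitney identity rather than any specific library.

```python
import random

def retention_index(suv1, suv2):
    """RI (%) between the early and delayed scans of a dual-time-point study."""
    return 100.0 * (suv2 - suv1) / suv1

def auc_mann_whitney(pos, neg):
    """ROC area = P(score_pos > score_neg), with ties counted as 1/2."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

rng = random.Random(42)
noise = lambda: rng.gauss(0.0, 0.15)
# hypothetical lesions: malignant SUV rises between scans, benign stays flat
malignant = [retention_index(2.0 + noise(), 2.6 + noise()) for _ in range(200)]
benign    = [retention_index(2.0 + noise(), 2.0 + noise()) for _ in range(200)]
auc = auc_mann_whitney(malignant, benign)   # close to 1 at this noise level
```

Raising the noise standard deviation in this toy model shrinks the AUC, which is the qualitative effect the abstract describes.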
Katsaounis, T D [Department of Mathematics, University of Crete, 714 09 Heraklion, Crete (Greece)
2005-02-25
The scope of this book is to present well known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. In summary, the book focuses on the computational and implementational issues involved in solving partial differential equations. The potential reader should have a basic knowledge of PDEs and the finite difference and finite element methods. The examples presented are solved within the programming framework of Diffpack and the reader should have prior experience with the particular software in order to take full advantage of the book. Overall
Young, A. Peter
2009-03-01
Systems with disorder and ``frustration'' occur in many branches of science. There has been considerable effort to understand one such type of system, known as the ``spin glass'', because it can be probed in fine detail experimentally by applying a magnetic field, and because it can be modeled by simple-looking Hamiltonians which are amenable to numerical simulation. Analytical work is very difficult and has been carried out mainly on models with unphysical features such as infinite-range interactions. Hence, much of what we know about spin glasses and related systems comes from numerical simulations on simplified models. In this talk I will describe some of the difficulties in performing reliable spin glass simulations. Then I will discuss several questions concerning phase transitions in spin glasses and related systems that have been addressed by simulations in recent years including (i) whether there is universality, (ii) whether there is a ``vortex glass'' transition in a disordered type-II superconductor in a magnetic field, (iii) whether ``chiralities'' play a crucial role in Heisenberg spin glasses, and (iv) whether there is a line of transitions (AT line) in a magnetic field.
A Computation Infrastructure for Knowledge-Based Development of Reliable Software Systems
2006-11-10
Zaifang Zhang
2015-02-01
Full Text Available Computer numerical control machine tool is a typical complex product involving multidisciplinary fields, a complex structure, and high performance requirements. It is difficult to identify the overall optimal solution of the machine tool structure for multiple objectives. A new integrated multidisciplinary design optimization method is therefore proposed, using Latin hypercube sampling, a Kriging approximate model, and a multi-objective genetic algorithm. The design space and a parametric model are built by choosing appropriate design variables and their value ranges. Samples in the design space are generated by the optimal Latin hypercube method, and the contributions of the design variables to the design performance are discussed to aid the designer's judgment. The Kriging model is built by polynomial approximation from the response outputs of these samples. The multidisciplinary design model is established with three optimization objectives (mass, optimum deformation, and first-order natural frequency) and two constraints (second-order and third-order natural frequencies). The optimal solution is identified by a multi-objective genetic algorithm. The proposed method is applied in a multidisciplinary optimization case study for a typical computer numerical control machine tool. In the optimal solution, the mass decreases by 3.35% and the first-order natural frequency increases by 4.34% compared with the original design.
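The abstract does not spell out the optimal Latin hypercube variant, but plain Latin hypercube sampling, one sample per stratum in every dimension with strata shuffled independently, can be sketched as below. The three variable bounds are hypothetical stand-ins for structural design variables.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sampling: each variable's range is cut into
    n_samples equal strata; exactly one sample falls in each stratum,
    and strata are shuffled independently per dimension."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples   # uniform inside stratum s
            samples[i][d] = lo + u * (hi - lo)
    return samples

# e.g. three design variables with different value ranges (illustrative)
pts = latin_hypercube(10, [(2.0, 8.0), (0.1, 0.5), (100.0, 300.0)])
```

The resulting points would then be evaluated (here, by the machine-tool FE model) to train the Kriging surrogate.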
Numerical Computation and Investigation of the Characteristics of Microscale Synthetic Jets
Ann Lee
2011-01-01
Full Text Available A synthetic jet results from periodic oscillations of a membrane in a cavity: a jet is formed when fluid is alternately sucked into and ejected from a small cavity by the motion of the membrane bounding it. A novel moving-mesh algorithm to simulate jet formation is presented. The governing equations are transformed into a curvilinear coordinate system, and the grid velocities evaluated there are fed into the computation of the flow in the cavity domain, allowing the conservation equations of mass and momentum to be solved within the stationary computational domain. The numerical solution generated with this moving-mesh approach is compared with experimental instantaneous velocity fields obtained by μPIV measurements in the vicinity of a synthetic jet orifice 241 μm in diameter issuing into a confined geometry. Comparisons of the streamwise velocity profiles at the orifice exit and along the centerline of the pulsating jet in the microchannel, as well as of the location of the vortex core, show good agreement, demonstrating that the moving-mesh algorithm developed is valid.
Dragan, Vasile; Ivanov, Ivan
2011-04-01
In this article, the numerical computation of the stabilising solution of the game-theoretic algebraic Riccati equation is investigated. The Riccati equation under consideration occurs in connection with the solution of the H∞ control problem for a class of stochastic systems affected by state-dependent and control-dependent white noise and subjected to Markovian jumping. The stabilising solution of the considered game-theoretic Riccati equation is obtained as the limit of a sequence of approximations constructed from stabilising solutions of a sequence of algebraic Riccati equations of stochastic control with definite sign of the quadratic part. The proposed algorithm extends to this general framework the method of Lanzon, Feng, Anderson, and Rotkowitz ('Computing the Positive Stabilizing Solution to Algebraic Riccati Equations with an Indefinite Quadratic Term via a Recursive Method,' IEEE Transactions on Automatic Control, 53, pp. 2280-2291, 2008). The proof of convergence makes extensive use of concepts associated with the generalised Lyapunov operators, such as stability, stabilisability and detectability. The efficiency of the proposed algorithm is demonstrated by several numerical experiments.
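The idea of approaching a stabilising Riccati solution through a sequence of linear (Lyapunov-type) equations can be shown in the simplest possible setting: a scalar algebraic Riccati equation solved by a Kleinman-type recursion. This is a generic sketch of the recursive principle, not the article's Markovian-jump, game-theoretic setting.

```python
def riccati_stabilizing(a, b, q, x0=None, tol=1e-12, max_iter=100):
    """Kleinman-type recursion for the scalar ARE  2*a*x - b^2*x^2 + q = 0.
    Each step freezes the gain k = b*x and solves the resulting linear
    (Lyapunov) equation  2*(a - b*k)*x_new + q + k^2 = 0, which yields a
    sequence of stabilising approximations converging to the stabilising
    root (the one with closed-loop coefficient a - b^2*x < 0)."""
    # crude stabilising initial guess: a - b^2*x0 = a - |a| - 1 < 0
    x = x0 if x0 is not None else (abs(a) + 1.0) / b**2
    for _ in range(max_iter):
        k = b * x                      # current gain
        a_cl = a - b * k               # closed-loop coefficient (negative)
        x_new = (q + k * k) / (-2.0 * a_cl)   # Lyapunov step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For a = b = 1, q = 3 the stabilising root is x = 3 (since 6 - 9 + 3 = 0 and the closed-loop coefficient 1 - 3 is negative), which the recursion reaches in a handful of iterations.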
Lim, Fong Yin; Bao, Weizhu
2008-12-01
We propose efficient and accurate numerical methods for computing the ground-state solution of spin-1 Bose-Einstein condensates subjected to a uniform magnetic field. The key idea in designing the numerical method is the normalized gradient flow with the introduction of a third normalization condition, together with two physical constraints: conservation of total mass and conservation of total magnetization. Different treatments of the Zeeman energy terms are found to yield different numerical accuracies and stabilities. Different numerical schemes are compared and the best one identified. The scheme is then applied to compute the condensate ground state in a harmonic plus optical lattice potential, and the effect of the periodic potential, in particular on the relative population of each hyperfine component, is investigated through comparison with the condensate ground state in a pure harmonic trap.
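The normalized-gradient-flow idea (step an imaginary-time diffusion flow, then rescale to restore the mass constraint) can be sketched for a single-component 1D condensate in a harmonic trap. This is a deliberately simplified stand-in: the paper's spin-1 problem has three components, a magnetization constraint, and Zeeman terms, none of which appear here, and all parameters below are illustrative.

```python
import math

def ground_state_1d(n=256, L=16.0, beta=10.0, dt=1e-3, steps=2000):
    """Normalized gradient flow (imaginary time) for a real 1D
    Gross-Pitaevskii ground state in the trap V = x^2/2: explicit Euler
    steps of  phi_t = 0.5*phi_xx - V*phi - beta*phi^3, each followed by
    rescaling so the wavefunction keeps unit mass."""
    h = L / n
    x = [-L / 2 + i * h for i in range(n)]
    phi = [math.exp(-xi * xi / 2) for xi in x]      # Gaussian initial guess

    def normalize(p):
        norm = math.sqrt(sum(v * v for v in p) * h)
        return [v / norm for v in p]

    phi = normalize(phi)
    for _ in range(steps):
        new = phi[:]                                 # endpoints held fixed
        for i in range(1, n - 1):
            lap = (phi[i - 1] - 2 * phi[i] + phi[i + 1]) / (h * h)
            V = 0.5 * x[i] * x[i]
            new[i] = phi[i] + dt * (0.5 * lap - V * phi[i] - beta * phi[i] ** 3)
        phi = normalize(new)                         # restore unit mass
    return x, phi

x, phi = ground_state_1d()
```

The explicit step size must satisfy the diffusion stability bound dt < h^2 (here h^2 is about 0.0039); an implicit or semi-implicit treatment, as studied in the paper, removes that restriction.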
Dolenko, T. A.; Burikov, S. A.; Vervald, E. N.; Efitorov, A. O.; Laptinskiy, K. A.; Sarmanova, O. E.; Dolenko, S. A.
2017-02-01
Elaboration of methods for the control of biochemical reactions with deoxyribonucleic acid (DNA) strands is necessary for the solution of one of the basic problems in the creation of biocomputers—improvement in the reliability of molecular DNA computing. In this paper, the results of the solution of the four-parameter inverse problem of laser Raman spectroscopy—the determination of the type and concentration of each of the DNA nitrogenous bases in multi-component solutions—are presented.
Numerical computation of the EOB potential q using self-force results
Akcay, Sarp
2015-01-01
The effective-one-body theory (EOB) describes the conservative dynamics of compact binary systems in terms of an effective Hamiltonian approach. The Hamiltonian for moderately eccentric motion of two non-spinning compact objects in the extreme mass-ratio limit is given in terms of three potentials: $a(v), \bar{d}(v), q(v)$. By generalizing the first law of mechanics for (non-spinning) black hole binaries to eccentric orbits, [Phys. Rev. D 92, 084021 (2015)] recently obtained new expressions for $\bar{d}(v)$ and $q(v)$ in terms of quantities that can be readily computed using the gravitational self-force approach. Using these expressions we present a new computation of the EOB potential $q(v)$ by combining results from two independent numerical self-force codes. We determine $q(v)$ for inverse binary separations in the range $1/1200 \le v \lesssim 1/6$. Our computation thus provides the first-ever strong-field results for $q(v)$. We also obtain $\bar{d}(v)$ in our entire domain to a fractional accuracy of $\gtrsim...
Kerfriden, Pierre; Goury, Olivier; Khac Chi, Hoang; Bordas, Stéphane
2014-01-01
Computational homogenisation is a widespread technique for calculating the overall properties of a composite material from knowledge of the constitutive laws of its microscopic constituents [1, 2]. It relies on fewer assumptions than analytical or semi-analytical homogenisation approaches and can be used to coarse-grain a large range of micro-mechanical models. However, this accuracy comes at large computational cost, which prevents computational homogenisation from b...
Gonzalez-Vega, Laureano
1999-01-01
Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)
Garcia, Jr., W. J.; Viecelli, J. A.
1976-06-01
This report is intended to be a "user manual" for the Lawrence Livermore Laboratory version of the Eulerian incompressible hydrodynamic computer code ABMAC. The theory of the numerical model is discussed in general terms. The format for data input and data printout is described in detail. A listing and flow chart of the computer code are provided.
Kramer, Jessica M; Liljenquist, Kendra; Coster, Wendy J
2016-03-01
This study aimed to explore the test-retest reliability of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test for autism spectrum disorders (PEDI-CAT [ASD]), the concurrent validity of this test with the Vineland Adaptive Behavior Scales (VABS-II), and parents' perceptions of usability. A convenience sample of participants (n=39) was recruited nationally through disability organizations. Parents of young people aged 10 to 18 years (mean age 14y 10mo, SD 2y 8mo; 34 males, five females) who reported a diagnosis of autism were eligible to participate. Parents completed the VABS-II questionnaire once and the PEDI-CAT (ASD) twice (n=29) no more than 3 weeks apart (mean 12d) using computer-simulated administration. Parents also answered questions about the usability of these instruments. We examined score reliability using intraclass correlation coefficients (ICCs) and we explored the relationship between instruments using Spearman's rank correlation coefficients. Parent responses were grouped by common content; content categories were triangulated by an additional reviewer. Intraclass correlation coefficients indicate excellent reliability for all PEDI-CAT (ASD) domain scores (ICC ≥ 0.86). PEDI-CAT (ASD) and VABS-II domain scores correlated as expected or stronger than expected (0.57-0.81). Parents reported that the computer-based PEDI-CAT (ASD) was easy to use and included fewer irrelevant questions than the VABS-II instrument. These findings suggest that the PEDI-CAT (ASD) is a reliable assessment that parents can easily use. The PEDI-CAT (ASD) operationalizes the International Classification of Function, Disability and Health for Children and Youth constructs of 'activity' and 'participation', and this preliminary research suggests that the instrument's constructs are related to those of VABS-II. © 2015 Mac Keith Press.
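An intraclass correlation for test-retest data of this kind can be computed from a two-way ANOVA decomposition. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measurement, in the Shrout-Fleiss taxonomy); the study does not state which ICC model it used, and the ratings table is hypothetical.

```python
def icc_2_1(scores):
    """ICC(2,1) for an n-subjects x k-sessions score table, computed from
    the two-way ANOVA mean squares: rows = subjects, columns = sessions."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((scores[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                    # between-subjects
    msc = ss_cols / (k - 1)                    # between-sessions
    mse = ss_err / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# hypothetical test-retest scores for five respondents, two sessions
ratings = [[60, 61], [45, 44], [70, 72], [55, 55], [30, 32]]
icc = icc_2_1(ratings)
```

With large between-subject spread and small session-to-session differences, as in the table above, the ICC is close to 1, matching the "excellent reliability" range (ICC >= 0.86) reported in the abstract.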
Kingston, Greer B.; Rajabalinejad, Mohammadreza; Gouldby, Ben P.; Gelder, van Pieter H.A.J.M.
2011-01-01
With the continual rise of sea levels and deterioration of flood defence structures over time, it is no longer appropriate to define a design level of flood protection, but rather, it is necessary to estimate the reliability of flood defences under varying and uncertain conditions. For complex geote
Computing interval-valued reliability measures: application of optimal control methods
Kozin, Igor; Krymsky, Victor
2017-01-01
The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular, Pontryagin’s principle of maximum to solve the non-linear optimisation problem and derive...
Fourie, Zacharias; Damstra, Janalt; Gerrits, Pieter; Ren, Yijin
2010-01-01
It is important to have accurate and reliable measurements of soft tissue thickness for specific landmarks of the face and scalp when producing a facial reconstruction. In the past several methods have been created to measure facial soft tissue thickness (FSTT) in cadavers and in the living. The con
Namlu, Aysen Gurcan; Odabasi, Hatice Ferhan
2007-01-01
This study was carried out in a Turkish university with 216 undergraduate students of computer technology as respondents. The study aimed to develop a scale (UECUBS) to determine unethical computer-use behavior. A factor analysis of the related items revealed that the factors could be divided under five headings: intellectual property,…
Duan, Wenbo; Kirby, Ray; Mudge, Peter; Gan, Tat-Hean
2016-12-01
Ultrasonic guided waves are often used in the detection of defects in oil and gas pipelines. It is common for these pipelines to be buried underground and this may restrict the length of the pipe that can be successfully tested. This is because acoustic energy travelling along the pipe walls may radiate out into the surrounding medium. Accordingly, it is important to develop a better understanding of the way in which elastic waves propagate along the walls of buried pipes, and so in this article a numerical model is developed that is suitable for computing the eigenmodes for uncoated and coated buried pipes. This is achieved by combining a one dimensional eigensolution based on the semi-analytic finite element (SAFE) method, with a perfectly matched layer (PML) for the infinite medium surrounding the pipe. This article also explores an alternative exponential complex coordinate stretching function for the PML in order to improve solution convergence. It is shown for buried pipelines that accurate solutions may be obtained over the entire frequency range typically used in long range ultrasonic testing (LRUT) using a PML layer with a thickness equal to the pipe wall thickness. This delivers a fast and computationally efficient method and it is shown for pipes buried in sand or soil that relevant eigenmodes can be computed and sorted in less than one second using relatively modest computer hardware. The method is also used to find eigenmodes for a buried pipe coated with the viscoelastic material bitumen. It was recently observed in the literature that a viscoelastic coating may effectively isolate particular eigenmodes so that energy does not radiate from these modes into the surrounding [elastic] medium. A similar effect is also observed in this article and it is shown that this occurs even for a relatively thin layer of bitumen, and when the shear impedance of the coating material is larger than that of the surrounding medium.
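The role of a complex coordinate stretching in a PML can be sketched in one dimension: past the start of the layer, the coordinate acquires a growing imaginary part, so an outgoing wave exp(i*k*x) decays instead of radiating. The exponential profile below is only illustrative of the kind of alternative stretching the article explores; the profile, parameters, and 1D setting are assumptions, not the article's SAFE-PML formulation.

```python
import cmath
import math

def stretched_coordinate(x, x0, omega, gamma=8.0, sigma0=1.0):
    """Complex coordinate stretching for a PML starting at x0.  For
    x > x0 the imaginary part grows with the integral of an exponential
    absorption profile sigma(xi) = sigma0*(exp(gamma*xi) - 1)."""
    if x <= x0:
        return complex(x, 0.0)
    xi = x - x0
    # closed-form integral of sigma0*(exp(gamma*t) - 1) for t in [0, xi]
    integral = sigma0 * ((math.exp(gamma * xi) - 1.0) / gamma - xi)
    return x + 1j * integral / omega

k, omega = 2.0, 2.0
# outgoing wave evaluated on the stretched coordinate: unchanged before
# the layer, exponentially damped inside it
wave = lambda x: cmath.exp(1j * k * stretched_coordinate(x, x0=1.0, omega=omega))
```

Because |exp(i*k*(x + i*I/omega))| = exp(-k*I/omega), the faster the imaginary part I grows, the thinner the layer can be, which is the motivation for steep (e.g. exponential) stretching profiles.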
A numerical method for computing unsteady 2-D boundary layer flows
Krainer, Andreas
1988-01-01
A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, in the course of which each step involves the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy will be of dominant influence to the overall results.
Numerical method for computing Maass cusp forms on triply punctured two-sphere
Chan, K. T.; Kamari, H. M. [Department of Physics, Faculty of Science, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor (Malaysia); Zainuddin, H. [Department of Physics, Faculty of Science, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia and Institute for Mathematical Research, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor (Malaysia)
2014-03-05
A quantum mechanical system on a punctured surface modeled on hyperbolic space has long been an important subject of research in mathematics and physics. The quantum system is governed by the Schrödinger equation, whose solutions are the Maass waveforms. Spectral studies of these Maass waveforms are known to contain both continuous and discrete eigenvalues. The discrete eigenfunctions are usually called Maass Cusp Forms (MCF), and their discrete eigenvalues are not known analytically. We introduce a numerical method based on the Hejhal and Then algorithm, using GridMathematica, for computing MCF on a punctured surface with three cusps, namely the triply punctured two-sphere. We also report on a pullback algorithm for the punctured surface and a point-locater algorithm to facilitate the complete pullback, which are essential parts of the main algorithm.
Jun LI; Ying-wei KANG; Guang-yi CAO; Xin-jian ZHU; Heng-yong TU; Jian LI
2008-01-01
A detailed mathematical model of a direct internal reforming solid oxide fuel cell (DIR-SOFC), incorporating simulation of the chemical and physical processes in the fuel cell, is presented. The model is developed from the reforming and electrochemical reaction mechanisms, mass and energy conservation, and heat transfer. A computational fluid dynamics (CFD) method is used to solve the complicated multiple partial differential equations (PDEs) and obtain the numerical approximations. The resulting distributions of chemical species concentrations, temperature and current density in a cross-flow DIR-SOFC are given and analyzed in detail. Further, the interplay between the distributions of chemical species concentrations, temperature and current density during the simulation is illustrated and discussed. The heat and mass transfer and the kinetics of the reforming and electrochemical reactions have significant effects on the parameter distributions within the cell. The results show the particular characteristics of the DIR-SOFC among fuel cells and can aid in stack design and control.
Programming for computations Python : a gentle introduction to numerical simulations with Python
Linge, Svein
2016-01-01
This book presents computer programming as a key method for solving mathematical problems. There are two versions of the book, one for MATLAB and one for Python. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows the students to write simple programs for solving common mathematical problems with numerical methods in engineering and science courses. The emphasis is on generic algorithms, clean design of programs, use of functions, and automatic tests for verification.
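In the spirit of the book's emphasis on generic algorithms, functions, and automatic tests for verification, a representative example is a forward Euler ODE solver verified by a convergence test. This is an illustrative sketch in the book's style, not an excerpt from it.

```python
import math

def ode_euler(f, u0, T, n):
    """Forward Euler for u' = f(u, t) on [0, T] with n steps;
    returns the lists of times and solution values."""
    dt = T / n
    t, u = [0.0], [u0]
    for _ in range(n):
        u.append(u[-1] + dt * f(u[-1], t[-1]))
        t.append(t[-1] + dt)
    return t, u

def test_linear_decay():
    """Automatic verification: on u' = -u, u(0) = 1, halving dt should
    roughly halve the error at T = 1 (first-order convergence)."""
    errors = []
    for n in (100, 200, 400):
        _, u = ode_euler(lambda u, t: -u, 1.0, 1.0, n)
        errors.append(abs(u[-1] - math.exp(-1.0)))
    assert 1.8 < errors[0] / errors[1] < 2.2
    assert 1.8 < errors[1] / errors[2] < 2.2

test_linear_decay()
```

Packaging the verification as a test function, rather than eyeballing a plot, is exactly the working habit the book tries to instill.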
Kaltenbacher, Manfred
2015-01-01
Like the previous editions, the third edition of this book combines detailed physical modeling of mechatronic systems with their precise numerical simulation using the Finite Element (FE) method. The basic chapter on the FE method is enhanced: it now also describes higher-order finite elements (both nodal and edge elements) and gives a detailed discussion of non-conforming mesh techniques. The author enhances and improves many discussions of principles and methods. In particular, more emphasis is put on the description of single fields by adding the flow field. Corresponding to these fields, the book is augmented with a new chapter on coupled flow-structural mechanical systems. The discussion of computational aeroacoustics is extended towards perturbation approaches, which allow a decomposition of flow and acoustic quantities within the flow region. Last but not least, applications are updated and restructured so that the book meets mode...
Venturi, Daniele
2016-11-01
The fundamental importance of functional differential equations has been recognized in many areas of mathematical physics, such as fluid dynamics, quantum field theory and statistical physics. For example, in the context of fluid dynamics, the Hopf characteristic functional equation was deemed by Monin and Yaglom to be "the most compact formulation of the turbulence problem", which is the problem of determining the statistical properties of the velocity and pressure fields of the Navier-Stokes equations given statistical information on the initial state. However, no effective numerical method has yet been developed to compute the solution to functional differential equations. In this talk I will provide a new perspective on this general problem, and discuss recent progress in approximation theory for nonlinear functionals and functional equations. The proposed methods will be demonstrated through various examples.
Kikuchi, Satoru; Saito, Kazuyuki; Takahashi, Masaharu; Ito, Koichi; Ikehira, Hiroo
This paper presents computational electromagnetic dosimetry inside anatomically based pregnant-woman models exposed to electromagnetic waves during magnetic resonance imaging. Two pregnant-woman models, corresponding to early gestation and 26 weeks of gestation, were used for this study. The specific absorption rate (SAR) in and around the fetus was calculated for electromagnetic waves radiated from high-pass and low-pass birdcage coils. Numerical results showed that high-SAR regions are observed in the body in the vicinity of the gaps of the coil, and are related to the electric field concentrated in gaps of the human body such as the armpit and thigh. Moreover, it was confirmed that the SAR in the fetus is less than the International Electrotechnical Commission limit of 10 W/kg when the whole-body average SAR is 2 W/kg or 4 W/kg, corresponding to the normal operating mode and the first-level controlled operating mode, respectively.
胡俊; 王宇晗; 王涛; 蔡建国
2001-01-01
To improve the reusability and configurability of computer numerical control (CNC) software, a new method to construct a reusable model of CNC software with object-oriented (OO) technology is proposed. Based on an analysis of the functions of CNC software, the article presents how to construct a general class library of CNC software with OO technology. Most function modules of CNC software can be reused because of the inheritance capability of classes. Besides, the article analyzes the object relational model in request/report mode, and a multitask concurrent management model, which can be applied on a double-CPU hardware platform under a Windows 95/NT environment. Finally, the method has been successfully applied to a turning CNC system and a milling CNC system, and some function modules have been reused.
Numerical reconstruction of pulsatile blood flow from 4D computer tomography angiography data
Lovas, Attila; Csobo, Elek; Szilágyi, Brigitta; Sótonyi, Péter
2015-01-01
We present a novel numerical algorithm developed to reconstruct pulsatile blood flow from ECG-gated CT angiography data. A block-based optimization method was constructed to solve the inverse problem corresponding to the Riccati-type ordinary differential equation that can be deduced from conservation principles and Hooke's law. The local flow rate for 5 patients was computed in 10 cm long aorta segments located 1 cm below the heart. The waveform of the local flow rate curves appears realistic. Our approach is suitable for estimating characteristics of pulsatile blood flow in the aorta from ECG-gated CT scans, thereby contributing to a more accurate description of several cardiovascular lesions.
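The paper wraps an inverse (optimization) loop around forward solves of a Riccati-type ODE. As a hedged illustration only (the coefficients and the integrator choice below are assumptions, not the authors' actual formulation), a generic fourth-order Runge-Kutta forward solve of a constant-coefficient Riccati equation looks like this:

```python
# Illustrative sketch, not the paper's code: RK4 integration of a generic
# Riccati equation y' = q0 + q1*y + q2*y**2. An inverse method like the one
# described would call such a forward solve repeatedly while optimizing
# the (here made-up) coefficients against measured data.
import math

def rk4(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Test case with a known answer: y' = 1 - y^2, y(0) = 0 has solution tanh(t).
riccati = lambda t, y: 1.0 - y * y
y_end = rk4(riccati, 0.0, 0.0, 1.0, 100)
err = abs(y_end - math.tanh(1.0))
```

Checking against the closed-form solution tanh(t) is a standard way to verify such a forward solver before embedding it in an optimization loop.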
NUMERICAL COMPUTATIONS OF CO-EXISTING SUPER-CRITICAL AND SUB-CRITICAL FLOWS BASED UPON CRD SCHEMES
Horie, Katsuya; Okamura, Seiji; Kobayashi, Yusuke; Hyodo, Makoto; Hida, Yoshihisa; Nishimoto, Naoshi; Mori, Akio
Stream flows on steep-gradient beds form complicated flow configurations in which super-critical and sub-critical flows co-exist. Computing such flows numerically is the key to successful river management. This study applied CRD schemes to 1D and 2D stream flow computations and proposed genuine ways to eliminate expansion shock waves. Through the various cases of stream flows computed, the CRD schemes showed that i) conservativeness of discharge and accuracy to four significant figures are ensured, ii) artificial viscosity is not explicitly needed for computational stabilization, and thus iii) 1D and 2D computations based upon CRD schemes are applicable to evaluating complicated stream flows for river management.
Fuzzy Deduction Material Removal Rate Optimization for Computer Numerical Control Turning
Tian-Syung Lan
2010-01-01
Full Text Available Problem statement: Material Removal Rate (MRR) is often a major consideration in the modern Computer Numerical Control (CNC) turning industry. Most existing optimization research for CNC finish turning was either accomplished within certain manufacturing circumstances or achieved through numerous equipment operations. Therefore, a general deduction optimization scheme is deemed necessary for the industry. Approach: In this study, four parameters (cutting depth, feed rate, speed, tool nose runoff) with three levels (low, medium, high) were considered to optimize the MRR in finish turning based on an L9(3^4) orthogonal array. Additionally, nine fuzzy control rules using a triangular membership function with five linguistic grades for the MRR were constructed. Considering four input and twenty output intervals, defuzzification using the center of gravity was completed for the Taguchi experiment, from which the optimum general deduction parameters were obtained. Results: A confirmation experiment for the optimum general deduction parameters was performed on an ECOCA-3807 CNC lathe. It was shown that the material removal rates from the fuzzy Taguchi deduction optimization parameters were all significantly improved compared to those from the benchmark. Conclusion: This study not only proposed a general deduction optimization scheme using an orthogonal array, but also contributed a satisfactory fuzzy linguistic approach for the MRR in CNC turning with profound insight.
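The center-of-gravity defuzzification over triangular membership functions mentioned in the abstract can be sketched as follows. This is a minimal illustration of the technique, not the paper's actual rule base; the two fired rules, their strengths, and the triangle parameters are invented for the example:

```python
# Center-of-gravity (centroid) defuzzification over clipped triangular
# membership functions, as used in fuzzy Taguchi deduction schemes.
# Rule strengths and triangle vertices below are made-up examples.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def centroid(fired_rules, lo=0.0, hi=1.0, steps=1000):
    """Centre of gravity of the max-aggregated, clipped output sets.

    fired_rules: list of (strength, (a, b, c)) pairs.
    """
    num = den = 0.0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        mu = max(min(w, tri(x, a, b, c)) for w, (a, b, c) in fired_rules)
        num += x * mu
        den += mu
    return num / den if den else 0.0

# Example: "medium" fired at strength 0.7, "high" at strength 0.3.
crisp = centroid([(0.7, (0.25, 0.5, 0.75)), (0.3, (0.5, 0.75, 1.0))])
```

The crisp output lands slightly above the "medium" peak, pulled toward "high" in proportion to the weaker rule's firing strength, which is the intended behavior of centroid defuzzification.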
Numerical computation of the Shock Tube Problem by means of wave digital principles
A. Mengel
2006-01-01
Full Text Available Partial differential equations can be solved numerically by means of wave digital principles. The great advantage of this method is the simultaneous achievement of high robustness, massive parallelism, full localness and high accuracy. Among others, this method is applied in order to solve the Euler equations in one space dimension. Especially the so-called Shock Tube Problem is examined. The analytical solution of this problem contains two discontinuities, namely a shock and a contact discontinuity. These give rise to oscillations which are due to numerical integration methods of higher order. Solutions of the Wave Digital Method also contain these oscillations, contrary to what had been observed by Yuhui Zhu (2000). This behaviour is also known as the Gibbs phenomenon. The Navier-Stokes equations, which are more exact from a physical point of view, additionally take viscosity terms into account. This leads to smooth solutions near shocks. It is shown that this approach leads to the suppression of the oscillations near the shock. Furthermore it is shown that quite good results for the computation of velocity and pressure can be obtained.
A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls
Arun Arjunan
2015-08-01
Full Text Available Architects, designers, and engineers are making great efforts to design acoustically-efficient metal-framed walls, minimizing acoustic bridging. Therefore, efficient simulation models to predict the acoustic insulation complying with ISO 10140 are needed at a design stage. In order to achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned using a metal-framed wall, is to be simulated at one-third-octaves. This produces a large simulation model consisting of several millions of nodes and elements. Therefore, efficient meshing procedures are necessary to obtain better solution times and to effectively utilise computational resources. Such models should also demonstrate effective Fluid-Structure Interaction (FSI along with acoustic-fluid coupling to simulate a realistic scenario. In this contribution, the development of a finite element frequency-dependent mesh model that can characterize the sound insulation of metal-framed walls is presented. Preliminary results on the application of the proposed model to study the geometric contribution of stud frames on the overall acoustic performance of metal-framed walls are also presented. It is considered that the presented numerical model can be used to effectively visualize the noise behaviour of advanced materials and multi-material structures.
Numerical computations of interior transmission eigenvalues for scattering objects with cavities
Peters, Stefan; Kleefeld, Andreas
2016-04-01
In this article we extend the inside-outside duality for acoustic transmission eigenvalue problems by allowing scattering objects that may contain cavities. In this context we provide the functional analytical framework necessary to transfer the techniques that have been used in Kirsch and Lechleiter (2013 Inverse Problems, 29 104011) to derive the inside-outside duality. Additionally, extensive numerical results are presented to show that we are able to successfully detect interior transmission eigenvalues with the inside-outside duality approach for a variety of obstacles with and without cavities in three dimensions. In this context, we also discuss the advantages and disadvantages of the inside-outside duality approach from a numerical point of view. Furthermore we derive the integral equations necessary to extend the algorithm in Kleefeld (2013 Inverse Problems, 29 104012) to compute highly accurate interior transmission eigenvalues for scattering objects with cavities, which we will then use as reference values to examine the accuracy of the inside-outside duality algorithm.
Lee, Chang Hoon; Baek, Sang Yeup; Shin, In Sup; Moon, Shin Myung; Moon, Jae Phil; Koo, Hoon Young; Kim, Ju Shin [Seoul National University, Seoul (Korea, Republic of); Hong, Jung Sik [Seoul National Polytechnology University, Seoul (Korea, Republic of); Lim, Tae Jin [Soongsil University, Seoul (Korea, Republic of)
1996-08-01
The objective of this project is to develop a methodology for the dynamic reliability analysis of NPPs. The first year's research was focused on developing a procedure for analyzing failure data of running components and a simulator for estimating the reliability of series-parallel structures. The second year's research concentrated on estimating the lifetime distribution and PM effect of a component from its failure data in various cases, and the lifetime distribution of a system with a particular structure. Computer codes for performing these jobs were also developed. The objective of the third year's research is to develop models for analyzing special failure types (CCFs, standby redundant structures) that were not considered in the first two years, and to complete a methodology of dynamic reliability analysis for nuclear power plants. The analysis of component failure data and related research supporting the simulator must precede this to provide proper input to the simulator. Thus this research is divided into three major parts. 1. Analysis of the time-dependent life distribution and the PM effect. 2. Development of a simulator for system reliability analysis. 3. Related research supporting the simulator: accelerated simulation, an analytic approach using PH-type distributions, and analysis of dynamic repair effects. 154 refs., 5 tabs., 87 figs. (author)
Reliability-Based Design of Wind Turbine Foundations – Computational Modelling
Vahdatirad, Mohammad Javad
of fossil fuels causing pollution, environmental degradation, and climate change, and finally mixed messages regarding declining domestic and foreign oil reserves. Therefore, the wind power industry is becoming a key player as the green energy producer in many developed countries. However, consumers demand … increased cost-effectiveness in wind turbines, and an optimized design must be implemented on the expensive structural components. The traditional wind turbine foundation typically expends 25-30% of the total wind turbine budget; thus it is one of the most costly fabrication components. Therefore …, a reduction in foundation cost, and optimizing foundation structural design is the best solution to cost effectiveness. An optimized wind turbine foundation design should provide a suitable target reliability level. Unfortunately, the reliability level is not identified in most current deterministic design…
Cienfuegos, R.; Duarte, L.; Hernandez, E.
2008-12-01
Characteristic frequencies of gravity waves generated by wind and propagating towards the coast are usually comprised between 0.05 Hz and 1 Hz. Nevertheless, lower-frequency waves, in the range of 0.001 Hz to 0.05 Hz, have been observed in the nearshore zone. Those long waves, termed infragravity waves, are generated by complex nonlinear mechanisms affecting the propagation of irregular waves up to the coast. The groupiness of an incident random wave field may be responsible for producing a slow modulation of the mean water surface, thus generating bound long waves travelling at the group speed. Similarly, a quasi-periodic oscillation of the break-point location will be accompanied by a slow modulation of set-up/set-down in the surf zone and generation and release of long waves. If the primary structure of the carrying incident gravity waves is destroyed (e.g. by breaking), forced long waves can be freely released and even reflected at the coast. Infragravity waves can affect port operation through resonating conditions, or strongly affect sediment transport and beach morphodynamics. In the present study we investigate infragravity wave generation mechanisms both from experiments and from numerical computations. Measurements were conducted at the 70-meter long wave tank located at the Instituto Nacional de Hidraulica (Chile), prepared with a beach of very mild slope of 1/80 in order to produce large surf zone extensions. A random JONSWAP-type wave field (h0=0.52m, fp=0.25Hz, Hmo=0.17m) was generated by a piston wave-maker and measurements of the free surface displacements were performed all over its length at high spatial resolution (0.2m to 1m). Velocity profiles were also measured at four verticals inside the surf zone using an ADV. Correlation maps of wave group envelopes and infragravity waves are computed in order to identify long wave generation and dynamics in the experimental set-up. It appears that both mechanisms (groupiness and break-point oscillation) are
Harris, Robert C; Boschitsch, Alexander H; Fenley, Marcia O
2017-08-08
Many researchers compute surface maps of the electrostatic potential (φ) with the Poisson-Boltzmann (PB) equation to relate the structural information obtained from X-ray and NMR experiments to biomolecular functions. Here we demonstrate that the usual method of obtaining these surface maps of φ, by interpolating from neighboring grid points on the solution grid generated by a PB solver, generates large errors because of the large discontinuity in the dielectric constant (and thus in the normal derivative of φ) at the surface. The Cartesian Poisson-Boltzmann solver contains several features that reduce the numerical noise in surface maps of φ: First, CPB introduces additional mesh points at the Cartesian grid/surface intersections where the PB equation is solved. This procedure ensures that the solution for interior mesh points only references nodes on the interior or on the surfaces; similarly for exterior points. Second, for added points on the surface, a second order least-squares reconstruction (LSR) is implemented that analytically incorporates the discontinuities at the surface. LSR is used both during the solution phase to compute φ at the surface and during postprocessing to obtain φ, induced charges, and ionic pressures. Third, it uses an adaptive grid where the finest grid cells are located near the molecular surface.
Kong, Song-Charng; Reitz, Rolf D.
2003-06-01
This study used a numerical model to investigate the combustion process in a premixed iso-octane homogeneous charge compression ignition (HCCI) engine. The engine was a supercharged Cummins C engine operated under HCCI conditions. The CHEMKIN code was implemented into an updated KIVA-3V code so that the combustion could be modelled using detailed chemistry in the context of engine CFD simulations. The model was able to accurately simulate the ignition timing and combustion phasing for various engine conditions. The unburned hydrocarbon emissions were also well predicted, while the carbon monoxide emissions were underpredicted. Model results showed that the majority of unburned hydrocarbon is located in the piston-ring crevice region and that the carbon monoxide resides in the vicinity of the cylinder walls. A sensitivity study of the computational grid resolution indicated that the combustion predictions were relatively insensitive to the grid density. However, the piston-ring crevice region needed to be simulated with high resolution to obtain accurate emissions predictions. The model results also indicated that HCCI combustion and emissions are very sensitive to the initial mixture temperature. The computations also show that the carbon monoxide emissions prediction can be significantly improved by modifying a key oxidation reaction rate constant.
Mackrory, Jonathan B.; Bhattacharya, Tanmoy; Steck, Daniel A.
2016-10-01
We present a worldline method for the calculation of Casimir energies for scalar fields coupled to magnetodielectric media. The scalar model we consider may be applied in arbitrary geometries, and it corresponds exactly to one polarization of the electromagnetic field in planar layered media. Starting from the field theory for electromagnetism, we work with the two decoupled polarizations in planar media and develop worldline path integrals, which represent the two polarizations separately, for computing both Casimir and Casimir-Polder potentials. We then show analytically that the path integrals for the transverse-electric polarization coupled to a dielectric medium converge to the proper solutions in certain special cases, including the Casimir-Polder potential of an atom near a planar interface, and the Casimir energy due to two planar interfaces. We also evaluate the path integrals numerically via Monte Carlo path-averaging for these cases, studying the convergence and performance of the resulting computational techniques. While these scalar methods are only exact in particular geometries, they may serve as an approximation for Casimir energies for the vector electromagnetic field in other geometries.
Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam
2017-01-01
The aim of the study was to evaluate the reliability of cone-beam computed tomography (CBCT) derived images versus plaster models for mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were retrieved from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA), with descriptive and analytical statistics and Student's t-test. The mean for Moyer's analysis in the left and right lower arch for CBCT and plaster models was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.
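The Tanaka-Johnston mixed-dentition analysis used in the study above is a simple regression prediction. The sketch below uses the standard published offsets (10.5 mm mandibular, 11.0 mm maxillary); the input measurement is a made-up example, and this is an illustration of the method, not the study's code:

```python
# Sketch of the Tanaka-Johnston mixed-dentition prediction: the combined
# mesiodistal width of the unerupted canine and premolars in one quadrant
# is estimated from the summed widths of the four mandibular incisors.

def tanaka_johnston(sum_lower_incisors_mm, arch="mandibular"):
    """Predicted canine + premolar width (mm) per quadrant.

    Standard offsets: +10.5 mm for the mandibular arch,
    +11.0 mm for the maxillary arch.
    """
    offset = 10.5 if arch == "mandibular" else 11.0
    return sum_lower_incisors_mm / 2.0 + offset

# Example: 23.0 mm summed lower-incisor width (hypothetical measurement).
pred_lower = tanaka_johnston(23.0)               # mandibular quadrant
pred_upper = tanaka_johnston(23.0, "maxillary")  # maxillary quadrant
```

Such predictions from CBCT-measured versus caliper-on-plaster incisor widths are what the t-test in the study compares.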
Energy conserving numerical methods for the computation of complex vortical flows
Allaneau, Yves
One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some people consider that as their size becomes smaller and smaller, it would be increasingly difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists took inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, biomimetic design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study the motion of its wings precisely. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to be able to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0 when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to needing an adequate numerical scheme, a high-fidelity solution requires many degrees of freedom in the computations to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogoroff scale. Capturing these eddies requires a mesh on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However our
Multidisciplinary System Reliability Analysis
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
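The system-reliability computation described above can be caricatured with a crude Monte Carlo estimate. This is emphatically not NESSUS (which uses fast probability integration); the two limit-state functions, their coefficients, and the standard-normal inputs below are invented purely to show the shape of a multi-mode series-system reliability estimate:

```python
# Hedged sketch: Monte Carlo failure-probability estimate for a two-mode
# series system, where each limit state g_i < 0 denotes failure of that
# mode (e.g. a structural margin and a heat-transfer margin). All numbers
# here are made up for illustration.
import random

random.seed(42)  # reproducible sampling

def system_failure_probability(samples=100_000):
    failures = 0
    for _ in range(samples):
        # Two standard-normal random inputs (e.g. load and material scatter).
        u1 = random.gauss(0.0, 1.0)
        u2 = random.gauss(0.0, 1.0)
        g_structural = 3.0 - u1              # invented structural limit state
        g_thermal = 2.5 + 0.5 * u1 - u2      # invented thermal limit state
        if g_structural < 0 or g_thermal < 0:  # series system: any mode fails
            failures += 1
    return failures / samples

pf = system_failure_probability()
```

Fast probability integration methods exist precisely because plain Monte Carlo needs very many samples to resolve the small failure probabilities typical of engineered systems.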
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Khabaza, I M
1960-01-01
Numerical Analysis is an elementary introduction to numerical analysis, its applications, limitations, and pitfalls. Methods suitable for digital computers are emphasized, but some desk computations are also described. Topics covered range from the use of digital computers in numerical work to errors in computations using desk machines, finite difference methods, and the numerical solution of ordinary differential equations. This book comprises eight chapters and begins with an overview of the importance of digital computers in numerical analysis, followed by a discussion on errors in comput
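Two of the book's core topics, finite differences and the numerical solution of ordinary differential equations, meet in the forward Euler method. A minimal sketch (the test equation y' = -y is a standard textbook choice, not necessarily one of the book's examples):

```python
# Forward Euler for dy/dx = -y, y(0) = 1, compared with the exact
# solution exp(-x): the discretization error shrinks as the step
# size h = x_end / n decreases, illustrating first-order convergence.
import math

def euler(f, y0, x_end, n):
    """Forward Euler with n equal steps on [0, x_end]."""
    h = x_end / n
    y = y0
    for i in range(n):
        y += h * f(i * h, y)
    return y

exact = math.exp(-1.0)
err_coarse = abs(euler(lambda x, y: -y, 1.0, 1.0, 10) - exact)
err_fine = abs(euler(lambda x, y: -y, 1.0, 1.0, 1000) - exact)
```

Halving the step size roughly halves the error here, the signature of a first-order method, and exactly the kind of error behavior the book's chapters on errors and finite differences analyze.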
Bauer, Eric; Eustace, Dan
2012-01-01
"While geographic redundancy can obviously be a huge benefit for disaster recovery, it is far less obvious what benefit is feasible and likely for more typical non-catastrophic hardware, software, and human failures. Georedundancy and Service Availability provides both a theoretical and practical treatment of the feasible and likely benefits of geographic redundancy for both service availability and service reliability. The text provides network/system planners, IS/IT operations folks, system architects, system engineers, developers, testers, and other industry practitioners with a general discussion about the capital expense/operating expense tradeoff that frames system redundancy and georedundancy"--
Cuthbert, Jeffrey P; Whiteneck, Gale G; Corrigan, John D; Bogner, Jennifer
2016-01-01
Provide test-retest reliability (>5 months) of the Ohio State University Traumatic Brain Injury Identification Method modified for use as a computer-assisted telephone interview (CATI) to capture traumatic brain injury (TBI) and other substantial bodily injuries among a representative sample of noninstitutionalized adults living in Colorado. Four subsamples of 50 individuals, including people with no major lifetime injury, a major lifetime injury but no TBI, TBI with no loss of consciousness, and TBI with loss of consciousness, were interviewed using the CATI Ohio State University Traumatic Brain Injury Identification Method between 6 and 18 months after an initial interview. Stratified random sample of Coloradans (n = 200) selected from a larger study of TBI. Cumulative, Severity and Age-related indices were assessed for long-term reliability. Cumulative indices were those that summed the total number of specific TBI severities across the lifetime; Severity indices included measures of the most severe type of injury incurred throughout the lifetime; and Age-related indices assessed the timing of specific injury types across the lifespan. Test-retest reliabilities ranged from poor to excellent. The indices demonstrating the greatest reliability were Severity measures, with intraclass correlations for ordinal indices ranging from 0.62 to 0.78 and Cohen κ ranging from 0.50 to 0.62. One Cumulative outcome demonstrated high reliability (0.70 for number of TBIs with loss of consciousness ≥30 minutes), while the remaining Cumulative outcomes demonstrated low reliability, ranging from 0.06 to 0.21. Age-related test-retest reliabilities were fair to poor, with intraclass correlations of 0.38 to 0.49 and Cohen κ of 0.32 and 0.34. The CATI-modified Ohio State University Traumatic Brain Injury Identification Method used in this study is an effective measure for evaluating the maximum TBI severity incurred throughout the lifetime within a general population survey. The
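The Cohen κ statistics reported above measure chance-corrected agreement between two categorical ratings of the same subjects. A minimal sketch of the computation (the test/retest severity labels below are fabricated for illustration; this is not the study's data or code):

```python
# Cohen's kappa for two equal-length lists of category labels, e.g.
# test vs. retest injury-severity classifications of the same people.
from collections import Counter

def cohens_kappa(rating1, rating2):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(rating1)
    p_obs = sum(a == b for a, b in zip(rating1, rating2)) / n
    c1, c2 = Counter(rating1), Counter(rating2)
    p_exp = sum(c1[k] * c2[k] for k in c1) / (n * n)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical test/retest severity categories for six respondents.
test = ["mild", "mild", "moderate", "severe", "mild", "moderate"]
retest = ["mild", "moderate", "moderate", "severe", "mild", "mild"]
kappa = cohens_kappa(test, retest)
```

Values near 0.5-0.6, like the Severity indices in the study, are conventionally read as moderate-to-good agreement; values near 0.2-0.3, like the Cumulative and Age-related indices, as poor.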
Herl, H. E.; O'Neil, H. F., Jr.; Chung, G. K. W. K.; Schacter, J.
1999-01-01
Presents results from two computer-based knowledge-mapping studies developed by the National Center for Research on Evaluation, Standards, and Student Testing (CRESST): in one, middle and high school students constructed group maps while collaborating over a network, and in the second, students constructed individual maps while searching the Web.…
Direct Numerical Simulation of a Turbulent Reactive Plume on a Parallel Computer
Cook, Andrew W.; Riley, James J.
1996-12-01
A computational algorithm is described for direct numerical simulation (DNS) of a reactive plume in spatially evolving grid turbulence. The algorithm uses sixth-order compact differencing in conjunction with a fifth-order compact boundary scheme which has been developed and found to be stable. A compact filtering method is discussed as a means of stabilizing calculations where the viscous/diffusive terms are differenced in their conservative form. This approach serves as an alternative to nonconservative differencing, previously advocated as a means of damping the 2δ waves. In numerically solving the low Mach number equations, the time derivative of the density field in the pressure Poisson equation was found to be the most destabilizing part of the calculation. Even-ordered finite difference approximations to this derivative were found to be more stable (allow for larger density gradients) than odd-ordered approximations. Turbulence at the inlet boundary is generated by scanning through an existing three-dimensional field of fully developed turbulence. In scanning through the inlet field, it was found that a high order interpolation, e.g., cubic-spline interpolation, is necessary in order to provide continuous velocity derivatives. Regarding pressure, a Neumann inlet condition combined with a Dirichlet outlet condition was found to work well. The chemistry follows the single-step, irreversible, global reaction: Fuel + (r) Oxidizer → (1 + r) Product + Heat, with parameters chosen to match experimental data as far as allowed by resolution constraints. Simulation results are presented for four different cases in order to examine the effects of heat release, Damköhler number, and Arrhenius kinetics on the flow physics. Statistical data from the DNS are compared to theory and wind tunnel data and found in reasonable agreement with regard to growth of turbulent length scales, decay of turbulent kinetic energy, decay of centerline scalar concentration, decrease in
Chang, Chi-Cheng; Tseng, Kuo-Hung; Chou, Pao-Nan; Chen, Yi-Hui
2011-01-01
This study examined the reliability and validity of Web-based portfolio peer assessment. Participants were 72 second-grade students from a senior high school taking a computer course. The results indicated that: 1) there was a lack of consistency across various student raters on a portfolio, or inter-rater reliability; 2) two-thirds of the raters…
Haro, A
2004-01-01
We develop numerical algorithms for the computation of invariant manifolds in quasi-periodically forced systems. We show how to compute invariant tori and the invariant manifolds associated to them: in particular, the stable and unstable manifolds of invariant tori, but also non-resonant invariant manifolds associated to spaces invariant under the linearization. These non-resonant manifolds include the slow manifolds which dominate the asymptotic behavior. The algorithms are based on the parameterization method. Rigorous results about this method are proved in a companion paper. In this paper, we concentrate on the numerical issues of the algorithms. Examples of implementations of the algorithms appear in another companion paper.
Liodakis, Emmanouil; Doxastaki, Iosifina; Chu, Kongfai; Krettek, Christian; Gaulke, Ralph; Citak, Musa [Hannover Medical School, Trauma Department, Hannover (Germany); Kenawey, Mohamed [Sohag University Hospital, Orthopaedic Surgery Department, Sohag (Egypt)
2012-03-15
Various methods have been described to define the femoral neck and distal tibial axes based on a single CT image. The most popular are the Hernandez and Weiner methods for defining the femoral neck axis and the Jend, Ulm, and bimalleolar methods for defining the distal tibial axis. The purpose of this study was to calculate the intra- and interobserver reliability of the above methods and to determine intermethod differences. Three physicians separately measured the rotational profile of 44 patients using CT examinations on two different occasions. The average age of patients was 36.3 ± 14.4 years, and there were 25 male and 19 female patients. After completing the first two sessions of measurements, one observer chose certain cuts at the levels of the femoral neck, femoral condylar area, tibial plateau, and distal tibia. The three physicians then repeated all measurements using these CT cuts. The greatest intraclass correlation coefficients were achieved with the Hernandez (0.99 intra- and 0.93 interobserver correlations) and bimalleolar methods (0.99 intra- and 0.92 interobserver correlations) for measuring the femoral neck and distal tibia axes, respectively. A statistically significant decrease in the interobserver median absolute differences could be achieved through the use of predefined CT scans only for measurements of the femoral condylar axis and the distal tibial axis using the Ulm method. The bimalleolar axis method underestimated the tibial torsion angle by an average of 4.8° and 13° compared to the Ulm and Jend techniques, respectively. The methods with the greatest inter- and intraobserver reliabilities were the Hernandez and bimalleolar methods for measuring femoral anteversion and tibial torsion, respectively. The high intermethod differences make it difficult to compare measurements made with different methods. (orig.)
Developing a Reliable Methodology for Assessing the Computer Network Operations Threat of Iran
2005-09-01
business activities under a confusing regulatory framework. During the 2000 presidential elections, Tehran police closed all the cybercafes with...permits as the reason why the cafes were shut down even though there were not any laws requiring permits. Actions like these create an atmosphere of...Jurisprudence and Its Principles at the International Islamic University of Malaysia, issued a fatwa giving permission to hack into computers. The
Schramm, Alexandra; Wormanns, Dag; Leschber, Gunda; Merk, Johannes
2011-01-01
For resection of lung metastases, computed tomography (CT) is needed to determine the operative strategy. A computer-aided detection (CAD) system, a software tool for automated detection of lung nodules, analyses the CT scans in addition to the radiologists and clearly marks lesions. The aim of this feasibility study was to evaluate the reliability of CAD in detecting lung metastases. Preoperative CT scans of 18 patients, who underwent surgery for suspected lung metastases, were analysed with CAD (September-December 2009). During surgery all suspected lesions were traced and resected. Histological examination was performed and the results compared to radiologically suspicious nodules. Radiological analysis assisted by CAD detected 64 nodules (mean 3.6, range 1-7). During surgery 91 nodules (mean 5.0, range 1-11) were resected, resulting in 27 additionally palpated nodules. Histologically all these additional nodules were benign. In contrast, all 30 nodules shown to be metastases by histological studies were correctly described by CAD. The CAD system is a sensitive and useful tool for finding pulmonary lesions. It detects more and smaller lesions than conventional radiological analysis. In this feasibility study we were able to show a greater reliability of the CAD analysis. A further, prospective study to confirm these data is ongoing.
Lars Viklund
1995-01-01
ObjectMath is a language for scientific computing that integrates object-oriented constructs with features for symbolic and numerical computation. Using ObjectMath, complex mathematical models may be implemented in a natural way. The ObjectMath programming environment provides tools for generating efficient numerical code from such models. Symbolic computation is used to rewrite and simplify equations before code is generated. One novelty of the ObjectMath approach is that it provides a common language and an integrated environment for this kind of mixed symbolic/numerical computation. The motivation for this work is the current low-level state of the art in programming for scientific computing. Much numerical software is still being developed in the traditional way, in Fortran. This is especially true in application areas such as machine elements analysis, where complex nonlinear problems are the norm. We believe that tools like ObjectMath can increase productivity and quality, thus enabling users to solve problems that are too complex to handle with traditional tools.
Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Herberger, Sarah Elizabeth Marie [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included human reliability analysis (HRA), but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework probabilistic risk assessment (PRA) models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.
Kako, T.; Watanabe, T. [eds.
2000-06-01
This is the proceedings of 'Study on Numerical Methods Related to Plasma Confinement' held at the National Institute for Fusion Science. In this workshop, theoretical and numerical analyses of possible plasma equilibria with their stability properties are presented. There are also various lectures on mathematical as well as numerical analyses related to computational methods for fluid dynamics and plasma physics. Separate abstracts were presented for 13 of the papers in this report. The remaining 6 were considered outside the subject scope of INIS. (J.P.N.)
SaiToh, Akira
2013-01-01
The time-dependent matrix-product-state (TDMPS) simulation method has been used for numerically simulating quantum computing for a decade. We introduce our C++ library ZKCM_QC developed for multiprecision TDMPS simulations of quantum circuits. Besides its practical usability, the library is useful for evaluation of the method itself. With the library, we can capture two types of numerical errors in the TDMPS simulations: one due to rounding errors caused by the shortage in mantissa portions of floating-point numbers; the other due to truncations of nonnegligible Schmidt coefficients and their corresponding Schmidt vectors. We numerically analyze these errors in TDMPS simulations of quantum algorithms.
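The second error source described above, truncation of small Schmidt coefficients, can be demonstrated with plain linear algebra: for a bipartite pure state, the Schmidt coefficients are the singular values of the reshaped state vector, and the 2-norm error of a rank-χ truncation equals the square root of the discarded Schmidt weight. The NumPy illustration below is a generic stand-in and does not use the ZKCM_QC library itself:

```python
import numpy as np

rng = np.random.default_rng(7)
# A random pure state of two 16-dimensional subsystems, reshaped into a
# matrix whose singular values are the Schmidt coefficients.
psi = rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16))
psi /= np.linalg.norm(psi)  # normalize to a unit state vector

u, s, vh = np.linalg.svd(psi)
chi = 4  # keep only the chi largest Schmidt coefficients
approx = (u[:, :chi] * s[:chi]) @ vh[:chi, :]

# Truncation error in the 2-norm equals the square root of the
# discarded Schmidt weight -- exactly the quantity an MPS simulator
# monitors when deciding where to truncate.
trunc_err = np.linalg.norm(psi - approx)
discarded = np.sqrt(np.sum(s[chi:] ** 2))
```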
王凌云; 黄红辉; 王大中
2014-01-01
To solve the problem of advanced digital manufacturing technology in the practical application, a knowledge engineering technology was introduced into the computer numerical control (CNC) programming. The knowledge acquisition, knowledge representation and reasoning used in CNC programming were researched. The CNC programming system functional architecture of impeller parts based on knowledge based engineering (KBE) was constructed. The structural model of the general knowledge-based system (KBS) was also constructed. The KBS of CNC programming system was established through synthesizing database technology and knowledge base theory. And in the context of corporate needs, based on the knowledge-driven manufacturing platform (i.e. UG CAD/CAM), VC++6.0 and UG/Open, the KBS and UG CAD/CAM were integrated seamlessly and the intelligent CNC programming KBE system for the impeller parts was developed by integrating KBE and UG CAD/CAM system. A method to establish standard process templates was proposed, so as to develop the intelligent CNC programming system in which CNC machining process and process parameters were standardized by using this KBE system. For the impeller parts processing, the method applied in the development of the prototype system is proven to be viable, feasible and practical.
New method for computer numerical control machine tool calibration: Relay method
LIU Huanlao; SHI Hanming; LI Bin; ZHOU Huichen
2007-01-01
The relay measurement method, which uses the kilogram-meter (KGM) measurement system to identify volumetric errors on the planes of computer numerical control (CNC) machine tools, is verified through experimental tests. During the process, all position errors on the entire plane table are measured by the equipment, which is limited to a small field. All errors are obtained by first measuring the error of the basic position near the original point. On the basis of that positional error, the positional errors far away from the original point are measured. Using this analogy, the error information on the positional points on the entire plane can be obtained. The process outlined above is called the relay method. Test results indicate that the accuracy and repeatability are high, and the method can be used to calibrate geometric errors on the plane of CNC machine tools after backlash errors have been well compensated.
Imachi, Hiroto
2015-01-01
Optimally hybrid numerical solvers were constructed for the massively parallel generalized eigenvalue problem (GEP). The strong scaling benchmark was carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 and up to 10^5 cores. The procedure of GEP is decomposed into the two subprocedures of the reducer to the standard eigenvalue problem (SEP) and the solver of SEP. A hybrid solver is constructed when a routine is chosen for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA and EigenExa. The hybrid solvers with the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. Detailed analysis of the results implies that the reducer can be a bottleneck in next-generation (exa-scale) supercomputers, which provides guidance for future research. The code was developed as a middleware and a mini-application and will appear online.
Numerical Simulation of Mixing in a Micro-well Scale Bioreactor by Computational Fluid Dynamics
[Anonymous]
2002-01-01
The introduction of the multi-well plate miniaturisation technology with its associated automated dispensers, readers and integrated systems, coupled with advances in the life sciences, has a propelling effect on the rate at which new potential drug molecules are discovered. The translation of these discoveries to real outcomes now demands parallel approaches which allow large numbers of process options to be rapidly assessed. The engineering challenges in achieving this provide the motivation for the proposed work. In this work we used computational fluid dynamics (CFD) analysis to study flow conditions in a gas-liquid contactor which has the potential to be used as a fermenter in a multi-well format. The bioreactor had a working volume of 6.5 mL with the major dimensions equal to those of a single well of a 24-well plate. The 6.5 mL bioreactor was mechanically agitated and aerated by a single sparger placed beneath the bottom impeller. A detailed numerical procedure for solving the governing flow equations is given. The CFD results are combined with population balance equations to establish the size of the bubbles and their distribution in the bioreactor. Power curves with and without aeration are provided based on the simulated results.
Hur, Jin-Huek; Lee, Tae-Gu; Moon, Sun-Ae; Lee, Sang-Jae; Yoo, Hoseon; Moon, Seung-Jae; Lee, Jae-Heon
2008-09-01
The thermal reliability of a closed-type BLDC motor for a high-speed fan is analyzed by accelerated-life testing and numerical methods in this paper. Since a module and a motor part are integrated in a closed case, heat generated from a rotor in a motor and electronic components in the PCB module cannot be effectively removed to the outside. Therefore, the module will easily fail due to high temperature. The experiment for measuring the temperature and the surface heat flux of the electronic components is carried out to predict their surface temperature distributions and main heat sources. The accelerated-life test is carried out to formulate the life equation depending on the environmental temperature. Moreover, the temperature of the PCB module is different from the environmental temperature since the heat generated from the motor cannot be effectively dissipated owing to the motor’s structure. Therefore a numerical method is used to predict the temperature of the PCB module, which is one of the life equation parameters, according to the environment. By numerically obtaining the maxima of the thermal stress and strain of the electronic components according to the operation environments with the temperature results, the fatigue cycle can be estimated.
V. S.S. Yadavalli
2002-09-01
Bayesian estimation is presented for the stationary rate of disappointments, D∞, for two models (with different specifications) of intermittently used systems. The random variables in the system are considered to be independently exponentially distributed. Jeffreys’ prior is assumed for the unknown parameters in the system. Inference about D∞ is constrained in both models by the complex and non-linear definition of D∞. Monte Carlo simulation is used to derive the posterior distribution of D∞ and subsequently the highest posterior density (HPD) intervals. A numerical example where Bayes estimates and the HPD intervals are determined illustrates these results. This illustration is extended to determine the frequentist properties of this Bayes procedure, by calculating covering proportions for each of these HPD intervals, assuming fixed values for the parameters.
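The Monte Carlo route to HPD intervals mentioned above can be sketched generically: for a unimodal posterior, the 95% HPD interval is the shortest window containing 95% of the posterior draws. The gamma "posterior" below is a stand-in for illustration, not either of the paper's disappointment-rate models:

```python
import numpy as np

def hpd_interval(samples, cred=0.95):
    """Shortest interval containing a fraction `cred` of the samples --
    the standard Monte Carlo HPD estimate for a unimodal posterior."""
    s = np.sort(samples)
    n = len(s)
    k = int(np.ceil(cred * n))           # points the interval must cover
    widths = s[k - 1:] - s[:n - k + 1]   # width of every candidate window
    i = np.argmin(widths)                # shortest window wins
    return s[i], s[i + k - 1]

rng = np.random.default_rng(0)
draws = rng.gamma(shape=3.0, scale=0.5, size=100_000)  # stand-in posterior
lo, hi = hpd_interval(draws, 0.95)
```

Covering proportions, as in the abstract's frequentist check, would be obtained by repeating this over many simulated data sets and counting how often the interval contains the true parameter.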
van Dyk, Danny; Geveler, Markus; Mallach, Sven; Ribbrock, Dirk; Göddeke, Dominik; Gutwenger, Carsten
2009-12-01
We present HONEI, an open-source collection of libraries offering a hardware-oriented approach to numerical calculations. HONEI abstracts the hardware, and applications written on top of HONEI can be executed on a wide range of computer architectures such as CPUs, GPUs and the Cell processor. We demonstrate the flexibility and performance of our approach with two test applications, a Finite Element multigrid solver for the Poisson problem and a robust and fast simulation of shallow water waves. By linking against HONEI's libraries, we achieve a two-fold speedup over straightforward C++ code using HONEI's SSE backend, and an additional 3-4 and 4-16 times faster execution on the Cell and a GPU, respectively. A second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the HONEI libraries. HONEI provides all necessary infrastructure for development and evaluation of such kernels, significantly simplifying their development.
Program summary
Program title: HONEI
Catalogue identifier: AEDW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPLv2
No. of lines in distributed program, including test data, etc.: 216 180
No. of bytes in distributed program, including test data, etc.: 1 270 140
Distribution format: tar.gz
Programming language: C++
Computer: x86, x86_64, NVIDIA CUDA GPUs, Cell blades and PlayStation 3
Operating system: Linux
RAM: at least 500 MB free
Classification: 4.8, 4.3, 6.1
External routines: SSE: none; [1] for GPU, [2] for Cell backend
Nature of problem: Computational science in general and numerical simulation in particular have reached a turning point. The revolution developers are facing is not primarily driven by a change in (problem-specific) methodology, but rather by the fundamental paradigm shift of the…
N. K. Khalid
2008-01-01
Problem statement: In DNA-based computation and DNA nanotechnology, the design of good DNA sequences has turned out to be an essential problem and one of the most practical and important research topics. Basically, the DNA sequence design problem is a multi-objective problem and it can be evaluated using four objective functions, namely, H-measure, similarity, continuity and hairpin. Approach: There are several ways to solve a multi-objective problem; however, in order to evaluate the correctness of the PSO algorithm in DNA sequence design, this problem is converted into a single-objective problem. Particle Swarm Optimization (PSO) is proposed to minimize the objective in the problem, subject to two constraints: melting temperature and GC-content. A model is developed to present the DNA sequence design based on PSO computation. Results: Based on the experiments and research done, 20 particles are used in the implementation of the optimization process, where the average values and the standard deviation for 100 runs are shown along with a comparison to other existing methods. Conclusion: The results verified that PSO can suitably solve the DNA sequence design problem using the proposed method and model, comparatively better than other approaches.
Yoshinobu Tamura
2015-06-01
At present, many cloud services are managed by using open source software, such as OpenStack and Eucalyptus, because of the unified management of data, cost reduction, quick delivery and work savings. The operation phase of cloud computing has unique features, such as the provisioning processes, the network-based operation and the diversity of data, because the operation phase of cloud computing changes depending on many external factors. We propose a jump diffusion model with two-dimensional Wiener processes in order to consider the interesting aspects of the network traffic and big data on cloud computing. In particular, we assess the stability of cloud software by using the sample paths obtained from the jump diffusion model with two-dimensional Wiener processes. Moreover, we discuss the optimal maintenance problem based on the proposed jump diffusion model. Furthermore, we analyze actual data to show numerical examples of dependability optimization based on the software maintenance cost, considering big data on cloud computing.
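A jump diffusion combines a drift-diffusion term with a compound Poisson jump term. The one-dimensional Euler sketch below (single Wiener process, Gaussian jump sizes) only illustrates the class of model; the paper's two-dimensional Wiener specification and its calibration to maintenance data are not reproduced here, and all parameter values are placeholders:

```python
import numpy as np

def jump_diffusion_path(x0, mu, sigma, lam, jump_mu, jump_sigma, t, n, rng):
    """Euler simulation of dX = mu dt + sigma dW + dJ, where J is a
    compound Poisson process (rate lam) with Gaussian jump sizes -- a
    Merton-style stand-in for the paper's jump diffusion model."""
    dt = t / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))            # Wiener increment
        n_jumps = rng.poisson(lam * dt)              # jumps in this step
        dj = rng.normal(jump_mu, jump_sigma, size=n_jumps).sum()
        x[i + 1] = x[i] + mu * dt + sigma * dw + dj
    return x

rng = np.random.default_rng(42)
path = jump_diffusion_path(100.0, 0.5, 2.0, 3.0, -1.0, 0.5, 1.0, 1000, rng)
```

Stability assessments like the one described above would then be made over many such sample paths rather than a single realization.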
El Paso Community Coll., TX.
Curriculum guides are provided for plastics technology, industrial maintenance, and computer numerical control. Each curriculum is divided into a number of courses. For each course these instructor materials are presented in the official course outline: course description, course objectives, unit titles, texts and materials, instructor resources,…
Numerical computation of inventory policies, based on the EOQ/sigma-x value for order-point systems
Alstrøm, Poul
2001-01-01
This paper examines the numerical computation of two control parameters, order size and order point, in the well-known inventory control model, an (s,Q) system with a beta safety strategy. The aim of the paper is to show that the EOQ/sigma-x value is both sufficient for controlling the system...
Numerical computation of inventory policies, based on the EOQ/sigma-x value for order-point systems
Alstrøm, Poul
2000-01-01
This paper examines the numerical computation of two control parameters, order size and order point, in the well-known inventory control model, an (s,Q) system with a beta safety strategy. The aim of the paper is to show that the EOQ/sigma-x value is both sufficient for controlling the system...
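For readers unfamiliar with the quantities involved: the order size in an (s,Q) system is typically the economic order quantity Q* = sqrt(2DK/h), and the order point adds a safety margin over expected lead-time demand. The figures and the safety-factor form below are illustrative assumptions only; the paper's point is that the ratio of EOQ to the lead-time demand standard deviation sigma-x suffices to control the system:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2 D K / h)."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

# Illustrative figures (not from the paper): annual demand D, fixed order
# cost K, unit holding cost h, lead-time demand ~ N(mu_L, sigma_x),
# safety factor k chosen from the desired service level.
D, K, h = 1200.0, 50.0, 2.0
mu_L, sigma_x, k = 100.0, 15.0, 1.65

Q = eoq(D, K, h)           # order size
s = mu_L + k * sigma_x     # order point with safety margin
ratio = Q / sigma_x        # the paper's EOQ/sigma-x control value
```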
Givi, Peyman; Madnia, Cyrus K.; Steinberger, C. J.; Frankel, S. H.
1992-01-01
The principal objective is to extend the boundaries within which large eddy simulations (LES) and direct numerical simulations (DNS) can be applied in computational analyses of high speed reacting flows. A summary of work accomplished during the last six months is presented.
System reliability with correlated components: Accuracy of the Equivalent Planes method
Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.
2015-01-01
Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing the…
M Pomarède
2016-09-01
Numerical simulation of Vortex-Induced Vibrations (VIV) of a rigid circular elastically-mounted cylinder submitted to a fluid cross-flow has been extensively studied over the past decades, both experimentally and numerically, because of its theoretical and practical interest for understanding Flow-Induced Vibration (FIV) problems. In this context, the present article aims to expose a numerical study based on fully-coupled fluid-solid computations compared to previously published work [34], [36]. The computational procedure relies on a partitioned method ensuring the coupling between fluid and structure solvers. The fluid solver involves a moving mesh formulation for simulation of the fluid-structure interface motion. Energy exchanges between fluid and solid models are ensured through convenient numerical schemes. The present study is devoted to a low Reynolds number configuration. Cylinder motion magnitude, hydrodynamic forces, oscillation frequency and fluid vortex shedding modes are investigated and the “lock-in” phenomenon is reproduced numerically. These numerical results are proposed for code validation purposes before investigating larger industrial applications such as configurations involving tube arrays under cross-flows [4].
Resnik Linda
2012-09-01
Background: The Computer Adaptive Test version of the Community Reintegration of Injured Service Members measure (CRIS-CAT) consists of three scales measuring Extent of, Perceived Limitations in, and Satisfaction with community integration. The CRIS-CAT was developed using item response theory methods. The purposes of this study were to assess the reliability, concurrent, known-group and predictive validity and respondent burden of the CRIS-CAT. Methods: This was a three-part study that included 1) a cross-sectional field study of 517 homeless, employed, and Operation Enduring Freedom/Operation Iraqi Freedom (OEF/OIF) Veterans, who completed all items in the CRIS item set, 2) a cohort study with one-year follow-up of 135 OEF/OIF Veterans, and 3) a 50-person study of CRIS-CAT administration. Conditional reliability of simulated CAT scores was calculated from the field study data, and concurrent validity and known-group validity were examined using Pearson product correlations and ANOVAs. Data from the cohort were used to examine the ability of the CRIS-CAT to predict key one-year outcomes. Data from the CRIS-CAT administration study were used to calculate ICC(2,1), minimum detectable change (MDC), and the average number of items used during CAT administration. Results: Reliability scores for all scales were above 0.75, but decreased at both ends of the score continuum. CRIS-CAT scores were correlated with concurrent validity indicators and differed significantly between the three Veteran groups (P … 0.9). MDCs were 5.9, 6.2, and 3.6, respectively, for the Extent, Perceived and Satisfaction subscales. Numbers of items (mean, SD) administered at Visit 1 were 14.6 (3.8), 10.9 (2.7) and 10.4 (1.7), respectively, for Extent, Perceived and Satisfaction…
Reliability estimation using kriging metamodel
Cho, Tae Min; Ju, Byeong Hyeon; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)
2006-08-15
In this study, a new method for reliability estimation using a kriging metamodel is proposed. The kriging metamodel can be determined by an appropriate sampling range and number of sampling points because there are no random errors in the Design and Analysis of Computer Experiments (DACE) model. The first kriging metamodel is built from widely ranged sampling points. The Advanced First-Order Reliability Method (AFORM) is applied to the first kriging metamodel to estimate the reliability approximately. Then, a second kriging metamodel is constructed using additional sampling points with an updated sampling range. Monte Carlo simulation (MCS) is applied to the second kriging metamodel to evaluate the reliability. The proposed method is applied to numerical examples and the results are almost equal to the reference reliability.
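The two-stage idea, fitting a cheap surrogate of the limit-state function and then running Monte Carlo simulation on the surrogate instead of the expensive model, can be sketched in one dimension. The toy limit-state g, the RBF kernel and all settings below are assumptions for illustration, not the paper's DACE model, AFORM step, or adaptive resampling scheme:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) correlation between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Toy limit-state function g (failure when g < 0); the "expensive" model.
g = lambda x: 2.5 - x - 0.2 * np.sin(4 * x)

# Step 1: fit an interpolating kriging (noise-free GP) metamodel on a
# coarse design of experiments over the sampling range.
x_train = np.linspace(-4.0, 4.0, 12)
y_train = g(x_train)
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))  # jitter
alpha = np.linalg.solve(K, y_train)

def g_hat(x):
    """Kriging predictor: cheap surrogate for g."""
    return rbf(x, x_train) @ alpha

# Step 2: Monte Carlo simulation on the cheap metamodel, X ~ N(0, 1).
rng = np.random.default_rng(1)
xs = rng.normal(size=200_000)
pf_hat = np.mean(g_hat(xs) < 0.0)   # failure probability via surrogate
pf_ref = np.mean(g(xs) < 0.0)       # reference using the true model
```

Because each surrogate evaluation is a small dot product, the 200,000-sample MCS costs almost nothing, which is the practical payoff of the metamodel approach.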
J. Doornberg; A. Lindenhovius; P. Kloen; C.N. van Dijk; D. Zurakowski; D. Ring
2006-01-01
Background: Complex fractures of the distal part of the humerus can be difficult to characterize on plain radiographs and two-dimensional computed tomography scans. We tested the hypothesis that three-dimensional reconstructions of computed tomography scans improve the reliability and accuracy of fracture…
Horn, Heike; Bausinger, Julia; Staiger, Annette M; Sohn, Maximilian; Schmelter, Christopher; Gruber, Kim; Kalla, Claudia; Ott, M Michaela; Rosenwald, Andreas; Ott, German
2014-01-01
Few data are available regarding the reliability of fluorescence in-situ hybridization (FISH), especially for chromosomal deletions, in high-throughput settings using tissue microarrays (TMAs). We performed a comprehensive FISH study for the detection of chromosomal translocations and deletions in formalin-fixed and paraffin-embedded (FFPE) tumor specimens arranged in TMA format. We analyzed 46 B-cell lymphoma (B-NHL) specimens with known karyotypes for translocations of IGH-, BCL2-, BCL6- and MYC-genes. Locus-specific DNA probes were used for the detection of deletions in chromosome bands 6q21 and 9p21 in 62 follicular lymphomas (FL) and six malignant mesothelioma (MM) samples, respectively. To test for aberrant signals generated by truncation of nuclei following sectioning of FFPE tissue samples, cell line dilutions with 9p21-deletions were embedded into paraffin blocks. The overall TMA hybridization efficiency was 94%. FISH results regarding translocations matched karyotyping data in 93%. As for chromosomal deletions, sectioning artefacts occurred in 17% to 25% of cells, suggesting that the proportion of cells showing deletions should exceed 25% to be reliably detectable. In conclusion, FISH represents a robust tool for the detection of structural as well as numerical aberrations in FFPE tissue samples in a TMA-based high-throughput setting, when rigorous cut-off values and appropriate controls are maintained, and, of note, was superior to quantitative PCR approaches.
Jette, Alan M.; McDonough, Christine M.; Haley, Stephen M.; Ni, Pengsheng; Olarsch, Sippy; Latham, Nancy; Hambleton, Ronald K.; Felson, David; Kim, Young-jo; Hunter, David
2012-01-01
Objective: To develop and evaluate a prototype measure (OA-DISABILITY-CAT) for osteoarthritis research using Item Response Theory (IRT) and Computer Adaptive Test (CAT) methodologies. Study Design and Setting: We constructed an item bank consisting of 33 activities commonly affected by lower extremity (LE) osteoarthritis. A sample of 323 adults with LE osteoarthritis reported their degree of limitation in performing everyday activities and completed the Health Assessment Questionnaire-II (HAQ-II). We used confirmatory factor analyses to assess scale unidimensionality and IRT methods to calibrate the items and examine the fit of the data. Using CAT simulation analyses, we examined the performance of OA-DISABILITY-CATs of different lengths compared to the full item bank and the HAQ-II. Results: One distinct disability domain was identified. The 10-item OA-DISABILITY-CAT demonstrated a high degree of accuracy compared with the full item bank (r=0.99). The item bank and the HAQ-II scales covered a similar estimated scoring range. In terms of reliability, 95% of OA-DISABILITY reliability estimates were over 0.83 versus 0.60 for the HAQ-II. Except at the highest scores, the 10-item OA-DISABILITY-CAT demonstrated superior precision to the HAQ-II. Conclusion: The prototype OA-DISABILITY-CAT demonstrated promising measurement properties compared to the HAQ-II, and is recommended for use in LE osteoarthritis research. PMID:19216052
Computational modelling of Yorùbá numerals in a number-to-text conversion system
Olúgbénga O. Akinadé
2014-08-01
Full Text Available In this paper, we examine the processes underlying the Yorùbá numeral system and describe a computational system that is capable of converting cardinal numbers to their equivalent Standard Yorùbá number names. First, we studied the mathematical and linguistic basis of the Yorùbá numeral system so as to formalise its arithmetic and syntactic procedures. Next, the process involved in formulating a Context-Free Grammar (CFG) to capture the structure of the Yorùbá numeral system was highlighted. Thereafter, the model was reduced to a set of computer programs to implement the numerical to lexical conversion process. System evaluation was done by ranking the output from the software and comparing the output with the representations given by a group of Yorùbá native speakers. The result showed that the system gave correct representations for numbers and produced a recall of 100% with respect to the collected corpus. Our future study is focused on developing a text normalisation system that will produce number names for other numerical expressions such as ordinal numbers, date, time, money, ratio, etc. in Yorùbá text.
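The core of such a number-to-text converter is a recursive decompose-and-name procedure. The sketch below illustrates that general idea using English number names as a stand-in; the Yorùbá system described in the paper uses different (vigesimal and subtractive) rules, which are not reproduced here, and the function name is an illustrative choice.

```python
# Recursive decompose-and-name sketch of number-to-text conversion.
# English names stand in for the Yorùbá rules, which are not shown here.

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def number_to_words(n: int) -> str:
    """Convert 0 <= n < 1_000_000 to its English name, recursively."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, rest = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[rest] if rest else "")
    if n < 1000:
        hundreds, rest = divmod(n, 100)
        name = ONES[hundreds] + " hundred"
        return name + (" " + number_to_words(rest) if rest else "")
    thousands, rest = divmod(n, 1000)
    name = number_to_words(thousands) + " thousand"
    return name + (" " + number_to_words(rest) if rest else "")

print(number_to_words(42))    # forty-two
print(number_to_words(1905))  # one thousand nine hundred five
```

The same skeleton applies to any numeral grammar: the `divmod` decomposition mirrors the grammar's production rules, so swapping in a different rule set changes the names without changing the control flow.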
Solving American Option Pricing Models by the Front Fixing Method: Numerical Analysis and Computing
R. Company
2014-01-01
analysis of the method is provided. The method preserves positivity and monotonicity of the numerical solution. Consistency and stability properties of the scheme are studied. Explicit calculations avoid iterative algorithms for solving nonlinear systems. Theoretical results are confirmed by numerical experiments. Comparison with other approaches shows that the proposed method is accurate and competitive.
Transport Loss Estimation of Fine Particulate Matter in Sampling Tube Based on Numerical Computation
Luo, L.; Cheng, Z.
2016-12-01
In-situ measurement of PM2.5 physical and chemical properties is one substantial approach for investigating the mechanisms of PM2.5 pollution. Minimizing PM2.5 transport loss in the sampling tube is essential for ensuring the accuracy of the measurement result. In order to estimate the integrated PM2.5 transport efficiency in sampling tubes and optimize tube designs, the effects of different tube factors (length, bore size and bend number) on PM2.5 transport were analyzed based on numerical computation. The results show that the PM2.5 mass concentration transport efficiency of a vertical tube with a flowrate of 20.0 L·min-1, bore size of 4 mm and length of 1.0 m was 89.6%; the transport efficiency increases to 98.3% when the bore size is increased to 14 mm. The PM2.5 mass concentration transport efficiency of a horizontal tube with a flowrate of 1.0 L·min-1, bore size of 4 mm and length of 10.0 m is 86.7%, increasing to 99.2% when the length is reduced to 0.5 m. A low transport efficiency of 85.2% for PM2.5 mass concentration is estimated in a bend with a flowrate of 20.0 L·min-1, bore size of 4 mm and curvature angle of 90°. Keeping the air flow in the tube laminar, by holding the ratio of flowrate (L·min-1) to bore size (mm) below 1.4, is beneficial for decreasing PM2.5 transport loss. For a target PM2.5 transport efficiency higher than 97%, it is advised to use vertical sampling tubes with lengths of less than 6.0 m for flowrates of 2.5, 5.0 and 10.0 L·min-1, and bore sizes larger than 12 mm for flowrates of 16.7 or 20.0 L·min-1. For horizontal sampling tubes, the tube length is determined by the ratio of flowrate to bore size. It is also suggested to reduce the number of bends in tubes with turbulent flow.
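The flowrate-to-bore-size criterion quoted above can be cross-checked against the pipe-flow Reynolds number, Re = 4Q / (π d ν). The kinematic viscosity of air used below (ν ≈ 1.5e-5 m²/s at room temperature) and the example flowrate/bore combinations are assumed values for illustration, not figures from the paper.

```python
import math

# Rough Reynolds-number check of the laminar criterion: tube flow is
# commonly taken as laminar below Re ~ 2300. NU_AIR is an assumed value.

NU_AIR = 1.5e-5  # kinematic viscosity of air, m^2/s (assumed)

def reynolds(flow_l_min: float, bore_mm: float) -> float:
    """Re = 4*Q / (pi * d * nu) for volumetric flow Q in a tube of bore d."""
    q = flow_l_min / 60000.0   # L/min -> m^3/s
    d = bore_mm / 1000.0       # mm -> m
    return 4.0 * q / (math.pi * d * NU_AIR)

# Ratio flowrate/bore = 5 (20 L/min, 4 mm bore): clearly turbulent.
print(round(reynolds(20.0, 4.0)))   # ~7074, well above 2300
# Ratio = 1.4 (e.g. 14 L/min, 10 mm bore): just below the transition.
print(round(reynolds(14.0, 10.0)))  # ~1981, below 2300
```

With these assumptions, a flowrate/bore ratio of 1.4 L·min⁻¹ per mm lands just under the usual Re ≈ 2300 laminar-turbulent transition, consistent with the criterion stated in the abstract.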
Dimar, John R; Carreon, Leah Y; Labelle, Hubert; Djurasovic, Mladen; Weidenbaum, Mark; Brown, Courtney; Roussouly, Pierre
2008-10-01
Sagittal imbalance is a significant factor in determining clinical treatment outcomes in patients with deformity. Measurement of sagittal alignment using the traditional Cobb technique is frequently hampered by difficulty in visualizing landmarks. This report compares traditional manual measurement techniques to a computer-assisted sagittal plane measurement program which uses a radius arc methodology. The intra and inter-observer reliability of the computer program has been shown to be 0.92-0.99. Twenty-nine lateral 90 cm radiographs were measured by a computer program for an array of sagittal plane measurements. Ten experienced orthopedic spine surgeons manually measured the same parameters twice, at least 48 h apart, using a digital caliper and a standardized radiographic manual. Intraclass correlations were used to determine intra- and interobserver reliability between different manual measures and between manual measures and computer assisted-measures. The inter-observer reliability between manual measures was poor, ranging from -0.02 to 0.64 for the different sagittal measures. The intra-observer reliability in manual measures was better ranging from 0.40 to 0.93. Comparing manual to computer-assisted measures, the ICC ranged from 0.07 to 0.75. Surgeons agreed more often with each other than with the machine when measuring the lumbar curve, the thoracic curve, and the spino-sacral angle. The reliability of the computer program is significantly higher for all measures except for lumbar lordosis. A computer-assisted program produces a reliable measurement of the sagittal profile of the spine by eliminating the need for distinctly visible endplates. The use of a radial arc methodology allows for infinite data points to be used along the spine to determine sagittal measurements. The integration of this technique with digital radiography's ability to adjust image contrast and brightness will enable the superior identification of key anatomical parameters normally
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamically coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with the TLPOM, and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
M.M. Khader
2015-01-01
Full Text Available In this paper, two efficient numerical methods for solving systems of fractional differential equations (SFDEs) are considered. The fractional derivative is described in the Caputo sense. The first method is based upon Chebyshev approximations, where the properties of Chebyshev polynomials are utilized to reduce SFDEs to systems of algebraic equations. Special attention is given to studying the convergence and estimating the error of the presented method. The second method is the fractional finite difference method (FDM), where we implement the Grünwald–Letnikov approach. We study the stability of the obtained numerical scheme. The numerical results show that the approaches are easy to implement for solving SFDEs. The methods introduce a promising tool for solving many systems of linear and non-linear fractional differential equations. Numerical examples are presented to illustrate the validity and the great potential of both proposed techniques.
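The Grünwald–Letnikov (GL) approximation underlying the fractional FDM can be sketched in a few lines: the weights obey the recurrence w₀ = 1, wₖ = wₖ₋₁ · (1 − (α + 1)/k), and the derivative is a weighted sum over past samples scaled by h⁻ᵅ. The function name and step size below are illustrative choices, not the paper's scheme.

```python
# Minimal sketch of the Grünwald-Letnikov approximation of a fractional
# derivative of order alpha, with lower terminal 0.

def gl_derivative(f, t: float, alpha: float, h: float = 1e-3) -> float:
    """Approximate the order-alpha GL derivative of f at t."""
    n = int(t / h)
    w, total = 1.0, f(t)               # k = 0 term has weight 1
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k   # GL weight recurrence
        total += w * f(t - k * h)
    return total / h**alpha

# Sanity check: for alpha = 1 the GL sum collapses to the usual backward
# difference, so the derivative of f(t) = t^2 at t = 1 should be ~2.
print(gl_derivative(lambda t: t * t, 1.0, 1.0))  # ~2.0 (first order in h)
```

For non-integer α the same loop yields the long-memory sum characteristic of fractional operators: every past sample contributes, which is why fractional FDM schemes cost O(n²) unless short-memory truncation is applied.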
Cammarota, Chiara; Seoane, Beatriz
2016-11-01
As a guideline for experimental tests of the ideal glass transition (random-pinning glass transition, RPGT) that shall be induced in a system by randomly pinning particles, we performed first-principle computations within the hypernetted chain approximation and numerical simulations of a hard-sphere model of a glass former. We obtain confirmation of the expected enhancement of glassy behavior under the procedure of random pinning. We present the analytical phase diagram as a function of c and of the packing fraction ϕ , showing a line of RPGT ending in a critical point. We also obtain microscopic results on cooperative length scales characterizing medium-range amorphous order in hard-sphere glasses and indirect quantitative information on a key thermodynamic quantity defined in proximity to ideal glass transitions, the amorphous surface tension. Finally, we present numerical results of pair correlation functions able to differentiate the liquid and the glass phases, as predicted by the analytic computations.
Numerical Model of Air Valve For Computation of One-dimensional Flow
Daniel HIMR
2014-06-01
Full Text Available The paper is focused on a numerical simulation of unsteady flow in a pipeline. Special attention is paid to a numerical model of an air valve, which has to include all possible regimes: critical/subcritical inflow and critical/subcritical outflow of air. The thermodynamic equation of subcritical mass flow was simplified to obtain a friendlier form of the relevant equations, which enables an easier solution of the problem.
On a New Method for Computing the Numerical Solution of Systems of Nonlinear Equations
H. Montazeri
2012-01-01
Full Text Available We consider a system of nonlinear equations F(x) = 0. A new iterative method for solving this problem numerically is suggested. The analytical discussion of the method is provided to reveal its sixth order of convergence. A discussion of the efficiency index of the contribution in comparison to other iterative methods is also given. Finally, numerical tests illustrate the theoretical aspects using the programming package Mathematica.
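The abstract does not reproduce the sixth-order scheme itself, so as a baseline here is the classical Newton iteration that such higher-order methods refine: x_{k+1} = x_k − J(x_k)⁻¹ F(x_k). The 2×2 solver and the example system below are illustrative, not taken from the paper.

```python
# Classical Newton's method for a 2x2 nonlinear system F(x, y) = 0,
# solving the linear step J * delta = F by Cramer's rule.

def newton_2d(F, J, x0, tol=1e-12, max_iter=50):
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        a, b, c, d = J(x, y)          # Jacobian rows: [a b; c d]
        det = a * d - b * c
        dx = (d * f1 - b * f2) / det
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Example: intersection of the unit circle with the line y = x.
F = lambda x, y: (x * x + y * y - 1.0, x - y)
J = lambda x, y: (2 * x, 2 * y, 1.0, -1.0)
x, y = newton_2d(F, J, (1.0, 0.5))
print(x, y)  # both ~0.70710678 (1/sqrt(2))
```

Newton converges quadratically; the point of sixth-order variants like the one in the record is to reach the same tolerance in fewer (though individually more expensive) iterations, which the efficiency index quantifies.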
Barnabe, Cheryl; Toepfer, Dominique; Marotte, Hubert; Hauge, Ellen-Margrethe; Scharmga, Andrea; Kocijan, Roland; Kraus, Sebastian; Boutroy, Stephanie; Schett, Georg; Keller, Kresten Krarup; de Jong, Joost; Stok, Kathryn S; Finzel, Stephanie
2016-10-01
High-resolution peripheral quantitative computed tomography (HR-pQCT) sensitively detects erosions in rheumatoid arthritis (RA); however, nonpathological cortical bone disruptions are potentially misclassified as erosive. Our objectives were to set and test a definition for pathologic cortical bone disruptions in RA and to standardize reference landmarks for measuring erosion size. HR-pQCT images of metacarpophalangeal joints of RA and control subjects were used in an iterative process to achieve consensus on the definition and reference landmarks. Independent readers (n = 11) applied the definition to score 58 joints and measure pathologic erosions in 2 perpendicular multiplanar reformations for their maximum width and depth. Interreader reliability for erosion detection and variability in measurements between readers [root mean square coefficient of variation (RMSCV), intraclass correlation (ICC)] were calculated. Pathologic erosions were defined as cortical breaks extending over a minimum of 2 consecutive slices in perpendicular planes, with underlying trabecular bone loss and a nonlinear shape. Interreader agreement for classifying pathologic erosions was 90.2%, whereas variability for width and depth erosion assessment was observed (RMSCV perpendicular width 12.3%, axial width 20.6%, perpendicular depth 24.0%, axial depth 22.2%; ICC perpendicular width 0.206, axial width 0.665, axial depth 0.871, perpendicular depth 0.783). Mean erosion width was 1.84 mm (range 0.16-8.90) and mean depth was 1.86 mm (range 0.30-8.00). We propose a new definition for erosions visualized with HR-pQCT imaging. Interreader reliability for erosion detection is good, but further refinement of selection of landmarks for erosion size measurement, or automated volumetric methods, will be pursued.
Masson, Yder; Romanowicz, Barbara
2016-11-01
We derive a fast discrete solution to the scattering problem. This solution allows us to compute accurate synthetic seismograms or waveforms for arbitrary locations of sources and receivers within a medium containing localized perturbations. The key to efficiency is that wave propagation modeling does not need to be carried out in the entire volume that encompasses the sources and the receivers but only within the sub-volume containing the perturbations or scatterers. The proposed solution has important applications, for example, it permits the imaging of remote targets located in regions where no sources or receivers are present. Our solution relies on domain decomposition: within a small volume that contains the scatterers, wave propagation is modeled numerically, while in the surrounding volume, where the medium isn't perturbed, the response is obtained through wavefield extrapolation. The originality of this work is the derivation of discrete formulas for representation theorems and Kirchhoff-Helmholtz integrals that naturally adapt to the numerical scheme employed for modeling wave propagation. Our solution applies, for example, to finite difference methods or finite/spectral elements methods. The synthetic seismograms obtained with our solution can be considered "exact" as the total numerical error is comparable to that of the method employed for modeling wave propagation. We detail a basic implementation of our solution in the acoustic case using the finite difference method and present numerical examples that demonstrate the accuracy of the method. We show that ignoring some terms accounting for higher order scattering effects in our solution has a limited effect on the computed seismograms and significantly reduces the computational effort. Finally, we show that our solution can be used to compute localised sensitivity kernels and we discuss applications to target oriented imaging. Extension to the elastic case is straightforward and summarised in a
Yan Ran
2016-01-01
Full Text Available As is well known, assembly quality plays a very important role in final product quality. Since a computer numerical control machine tool is a large system with complicated structure and function, and there are complex association relationships among quality characteristics in the assembly process, it is difficult and inaccurate to analyze the whole computer numerical control machine tool quality characteristic association at one time. In this article, the meta-action assembly unit is proposed as the basic analysis unit, whose quality characteristic association is studied to guarantee the whole computer numerical control machine tool assembly quality. First, based on the “Function-Motion-Action” decomposition structure, the definitions of meta-action and meta-action assembly unit are introduced. Second, manufacturing process association and meta-action assembly unit quality characteristic association are discussed. Third, after introducing the definitions of information entropy and relative entropy, the concrete steps of meta-action assembly unit quality characteristic association analysis based on relative entropy are described in detail. Finally, the lifting piston translation assembly unit of an automatic pallet changer is taken as an example: the association degrees between internal leakage and the influencing factors (part quality characteristics and the mate-relationships among them) are calculated to identify the most influential factors, showing the correctness and feasibility of this method.
Efficient numerical methods to compute unsteady subsonic flows on unstructured grids
Lucas, P.
2010-01-01
Over the last four decades the increase in computer power and the advances in solver technology have resulted in an estimated reduction of 10 orders of magnitude in the effort to compute flow problems. However, to solve the unsteady Reynolds-averaged Navier-Stokes equations, even today, a massive amount of C
Multiple scattering of elastic waves: a numerical method for computing the effective wavenumbers
Chekroun, Mathieu; Lombard, Bruno; Piraux, Joël
2012-01-01
Elastic wave propagation is studied in a heterogeneous 2-D medium consisting of an elastic matrix containing randomly distributed circular elastic inclusions. The aim of this study is to determine the effective wavenumbers when the incident wavelength is similar to the radius of the inclusions. A purely numerical methodology is presented, with which the limitations usually associated with low scatterer concentrations can be avoided. The elastodynamic equations are integrated by a fourth-order time-domain numerical scheme. An immersed interface method is used to accurately discretize the interfaces on a Cartesian grid. The effective field is extracted from the simulated data, and signal-processing tools are used to obtain the complex effective wavenumbers. The numerical reference solution thus-obtained can be used to check the validity of multiple scattering analytical models. The method is applied to the case of concrete. A parametric study is performed on longitudinal and transverse incident plane waves at v...
Liu, Chunsheng; Xu, Hongyan; Lam, Siew Hong; Gong, Zhiyuan
2013-01-01
It is increasingly evident about the difficulty to monitor chemical exposure through biomarkers as almost all the biomarkers so far proposed are not specific for any individual chemical. In this proof-of-concept study, adult male zebrafish (Danio rerio) were exposed to 5 or 25 µg/L 17β-estradiol (E2), 100 µg/L lindane, 5 nM 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) or 15 mg/L arsenic for 96 h, and the expression profiles of 59 genes involved in 7 pathways plus 2 well characterized biomarker genes, vtg1 (vitellogenin1) and cyp1a1 (cytochrome P450 1A1), were examined. Relative distance (RD) computational model was developed to screen favorable genes and generate appropriate gene sets for the differentiation of chemicals/concentrations selected. Our results demonstrated that the known biomarker genes were not always good candidates for the differentiation of pair of chemicals/concentrations, and other genes had higher potentials in some cases. Furthermore, the differentiation of 5 chemicals/concentrations examined were attainable using expression data of various gene sets, and the best combination was the set consisting of 50 genes; however, as few as two genes (e.g. vtg1 and hspa5 [heat shock protein 5]) were sufficient to differentiate the five chemical/concentration groups in the present test. These observations suggest that multi-parameter arrays should be more reliable for biomonitoring of chemical exposure than traditional biomarkers, and the RD computational model provides an effective tool for the selection of parameters and generation of parameter sets.
Wollny, Ines; Hartung, Felix; Kaliske, Michael
2016-05-01
In order to gain a deeper knowledge of the interactions in the coupled tire-pavement-system, e.g. for the future design of durable pavement structures, the paper presents recent results of research in the field of theoretical-numerical asphalt pavement modeling at material and structural level, whereby the focus is on a realistic and numerically efficient computation of pavements under rolling tire load by using the finite element method based on an Arbitrary Lagrangian Eulerian (ALE) formulation. Inelastic material descriptions are included into the ALE frame efficiently by a recently developed unsplit history update procedure. New is also the implementation of a viscoelastic cohesive zone model into the ALE pavement formulation to describe the interaction of the single pavement layers. The viscoelastic cohesive zone model is further extended to account for the normal pressure dependent shear behavior of the bonding layer. Another novelty is that thermo-mechanical effects are taken into account by a coupling of the mechanical ALE pavement computation to a transient thermal computation of the pavement cross-section to obtain the varying temperature distributions of the pavement due to climatic impact. Then, each ALE pavement simulation considers the temperature dependent asphalt material model that includes elastic, viscous and plastic behavior at finite strains and the temperature dependent viscoelastic cohesive zone formulation. The temperature dependent material parameters of the asphalt layers and the interfacial layers are fitted to experimental data. Results of coupled tire-pavement computations are presented to demonstrate potential fields of application.
Cerrada, Christian J; Weinberg, Janice; Sherman, Karen J; Saper, Robert B
2014-04-09
Little is known about the reliability of different methods of survey administration in low back pain trials. This analysis was designed to determine the reliability of responses to self-administered paper surveys compared to computer assisted telephone interviews (CATI) for the primary outcomes of pain intensity and back-related function, and secondary outcomes of patient satisfaction, SF-36, and global improvement among participants enrolled in a study of yoga for chronic low back pain. Pain intensity, back-related function, and both physical and mental health components of the SF-36 showed excellent reliability at all three time points; ICC scores ranged from 0.82 to 0.98. Pain medication use showed good reliability; kappa statistics ranged from 0.68 to 0.78. Patient satisfaction had moderate to excellent reliability; ICC scores ranged from 0.40 to 0.86. Global improvement showed poor reliability at 6 weeks (ICC = 0.24) and 12 weeks (ICC = 0.10). CATI shows excellent reliability for primary outcomes and at least some secondary outcomes when compared to self-administered paper surveys in a low back pain yoga trial. Having two reliable options for data collection may be helpful to increase response rates for core outcomes in back pain trials. ClinicalTrials.gov: NCT01761617. Date of trial registration: December 4, 2012.
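The kappa statistics quoted above measure chance-corrected agreement between the two survey modes. A minimal Cohen's kappa for binary ratings is sketched below; the example ratings are illustrative, not trial data.

```python
# Cohen's kappa for two raters with binary (0/1) ratings:
# kappa = (p_observed - p_expected) / (1 - p_expected).

def cohens_kappa(a, b):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # chance agreement from each rater's marginal frequency of 1s
    pa1, pb1 = sum(a) / n, sum(b) / n
    p_exp = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (p_obs - p_exp) / (1 - p_exp)

print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0: perfect agreement
print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 1]))  # 0.0: chance-level agreement
```

The chance correction is what distinguishes the kappa range of 0.68 to 0.78 reported for pain-medication use from raw percent agreement, which would be inflated whenever most respondents give the same answer in both modes.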
An analytically based numerical method for computing view factors in real urban environments
Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun
2016-11-01
A view factor is an important morphological parameter used in parameterizing the in-canyon radiative energy exchange process as well as in characterizing local climate over urban environments. For realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine the view factors for urban environments, but most of the methods provide only the sky-view factor at the ground level of a specific location or assume simplified morphology of complex urban environments. In this study, a numerical method that can determine the sky-view factors (ψ ga and ψ wa ) and wall-view factors (ψ gw and ψ ww ) at the horizontal and vertical surfaces is presented for application to real urban morphology, which are derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The established numerical method is validated against the analytical sky-view factor estimation for ideal street canyon geometries, showing good accuracy with errors of less than 0.2 %. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable in determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that the analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
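A cross-check in the spirit of the validation above: for an idealised infinite street canyon, the sky-view factor at the centre of the floor has the standard closed form ψ = 1 / √(1 + (2H/W)²), and a cosine-weighted Monte Carlo ray estimate should reproduce it. The geometry, sample count and seed below are illustrative choices, not taken from the paper.

```python
import math, random

def svf_analytic(h_over_w: float) -> float:
    """Sky-view factor at the floor centre of an infinite canyon."""
    return 1.0 / math.sqrt(1.0 + (2.0 * h_over_w) ** 2)

def svf_monte_carlo(h_over_w, n=200_000, seed=1):
    """Cosine-weighted hemisphere sampling: the view factor from a flat
    patch is the integral of visibility * cos(theta)/pi over the
    hemisphere, so with pdf cos(theta)/pi the escape fraction is an
    unbiased estimator of the sky-view factor."""
    rng = random.Random(seed)
    w, h, escaped = 1.0, h_over_w, 0
    for _ in range(n):
        u1, u2 = rng.random(), rng.random()
        dx = math.sqrt(u1) * math.cos(2.0 * math.pi * u2)
        dz = math.sqrt(1.0 - u1)
        # canyon is infinite along y, so the y-component is irrelevant;
        # the ray escapes if it clears the wall top (height h) at x = +/- w/2
        if dx == 0.0 or dz * (0.5 * w / abs(dx)) >= h:
            escaped += 1
    return escaped / n

print(svf_analytic(0.5))      # ~0.7071 for H/W = 0.5
print(svf_monte_carlo(0.5))   # should agree to within ~0.01
```

The same ray-counting estimator generalises to arbitrary building geometry, which is essentially why numerical view-factor methods scale to real urban morphology where the closed form does not.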
Numerical Computation of the Chemically Reacting Flow around the National Aero-Space Plane
Tannehill, J. C.
1999-01-01
This final report summarizes the research accomplished. The research performed during the grant period can be divided into the following major areas: (1) Computation of chemically reacting Supersonic combustion ramjet (scramjet) flowfields. (2) Application of a two-equation turbulence model to supersonic combustion flowfields. (3) Computation of the integrated aerodynamic and propulsive flowfields of a generic hypersonic space plane. (4) Computation of hypersonic flows with finite-catalytic walls. (5) Development of an upwind Parabolized Navier-Stokes (PNS) code for thermo-chemical nonequilibrium flows.
Villa, Oreste; Chavarría-Miranda, Daniel; Gurumoorthi, Vidhya; Marquez, Andres; Krishnamoorthy, Sriram
2009-05-03
Floating-point addition and multiplication are not necessarily associative. When performing those operations over large numbers of operands with different magnitudes, the order in which individual operations are performed can affect the final result. On massively multithreaded systems, when performing parallel reductions, the non-deterministic nature of numerical operation interleaving can lead to non-deterministic numerical results. We have investigated the effect of this problem on the convergence of a conjugate gradient calculation used as part of a power grid analysis application.
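The non-associativity described above is easy to demonstrate: reordering floating-point additions changes the result, and large magnitude differences between operands can swallow terms entirely. The examples below use only standard Python floats.

```python
import math

# Non-associativity: the same three operands, two groupings, two results.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)   # False
print(a, b)     # 0.6000000000000001 0.6

# Magnitude-driven cancellation: a naive left-to-right sum loses both 1.0
# terms, while math.fsum tracks exact partial sums and recovers them.
xs = [1.0, 1e100, 1.0, -1e100]
print(sum(xs))        # 0.0
print(math.fsum(xs))  # 2.0
```

This is the mechanism behind the non-deterministic parallel reductions studied in the record: when thread scheduling changes the order in which partial sums are combined, the rounded result can change run to run even though the operand multiset is identical.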
True, Hans; Engsig-Karup, Allan Peter; Bigoni, Daniele
2014-01-01
The paper contains a report of the experiences with numerical analyses of railway vehicle dynamical systems, which all are nonlinear, non-smooth and stiff high-dimensional systems. Some results are shown, but the emphasis is on the numerical methods of solution and lessons learned. But for two...... examples the dynamical problems are formulated as systems of ordinary differential-algebraic equations due to the geometric constraints. The non-smoothnesses have been neglected, smoothened or entered into the dynamical systems as switching boundaries with relations, which govern the continuation...
Reliability based design optimization: Formulations and methodologies
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed
Berkstresser, B. K.
1975-01-01
NASA is conducting a Terminal Configured Vehicle program to provide improvements in the air transportation system such as increased system capacity and productivity, increased all-weather reliability, and reduced noise. A typical jet transport has been equipped with highly flexible digital display and automatic control equipment to study operational techniques for conventional takeoff and landing aircraft. The present airborne computer capability of this aircraft employs a multiple computer simple redundancy concept. The next step is to proceed from this concept to a reconfigurable computer system which can degrade gracefully in the event of a failure, adjust critical computations to remaining capacity, and reorder itself, in the case of transients, to the highest order of redundancy and reliability.
Scott, L Ridgway
2011-01-01
Computational science is fundamentally changing how technological questions are addressed. The design of aircraft, automobiles, and even racing sailboats is now done by computational simulation. The mathematical foundation of this new approach is numerical analysis, which studies algorithms for computing expressions defined with real numbers. Emphasizing the theory behind the computation, this book provides a rigorous and self-contained introduction to numerical analysis and presents the advanced mathematics that underpin industrial software, including complete details that are missing from m
Sato, Jun-Ichi; Washizawa, Yoshikazu
2015-08-01
We propose two methods to improve code modulation visual evoked potential brain computer interfaces (cVEP BCIs). Most BCIs average brain signals from several trials in order to improve the classification performance. The number of averaged trials defines the trade-off between input speed and accuracy, and the optimal averaging number depends on the individual, the signal acquisition system, and so forth. Firstly, we propose a novel dynamic method to estimate the averaging number for cVEP BCIs. The proposed method is based on the automatic repeat request (ARQ) scheme that is used in communication systems. The existing cVEP BCIs employ rather long codes, such as the 63-bit M-sequence. The code length also defines the trade-off between input speed and accuracy. Since the reliability of the proposed BCI can be controlled by the proposed ARQ method, we introduce shorter codes, the 32-bit M-sequence and the Kasami sequence. By combining the dynamic averaging number estimation method and the shorter codes, the proposed system exhibited a higher information transfer rate compared to existing cVEP BCIs.
2011-11-15
hole binary system. This work resulted in a fast publication in Physical Review. More work is ongoing in the fields of computational mathematics, civil engineering, mechanical engineering, physics, and geophysics.
谢良喜; 孔建益; 万晓红
2009-01-01
A numerical model of a flexible rectangular-section vane seal was established to study the effects of the initial interference (pre-compression) and the seal oil pressure on the contact pressure of the sealing surface and on the mechanical efficiency. The computed results show that the contact pressure distribution in the radial zone is nonlinear, and that seal reliability and mechanical efficiency depend not only on the initial interference but also on the seal oil pressure. For the same initial interference, a vane seal is more easily kept reliable under a lower seal pressure, although the mechanical efficiency loss due to friction is relatively larger; when the seal pressure is higher, seal reliability decreases somewhat and failure can occur on part of the sealing surface, but the effect of friction on mechanical efficiency is reduced. It may therefore be necessary to modify the initial interference when the seal pressure changes.
A Computer-Based Content Analysis of Interview Texts: Numeric Description and Multivariate Analysis.
Bierschenk, B.
1977-01-01
A method is described by which cognitive structures in verbal data can be identified and categorized through numerical analysis and quantitative description. Transcriptions of interviews (in this case, the verbal statements of 40 researchers) are manually coded and subjected to analysis following the AaO (Agent action Object) paradigm. The texts…
Equilibrium gas flow computations. II - An analysis of numerical formulations of conservation laws
Vinokur, Marcel; Liu, Yen
1988-01-01
Modern numerical techniques employing properties of flux Jacobian matrices are extended to general, equilibrium gas laws. Generalizations of the Beam-Warming scheme, Steger-Warming and van Leer flux-vector splittings, and Roe's approximate Riemann solver are presented for three-dimensional, time-varying grids. The approximations inherent in previous generalizations are discussed.
Ahmed M. Elsayed
2013-01-01
Full Text Available Film cooling is vital to gas turbine blades to protect them from high temperatures and hence high thermal stresses. In the current work, optimization of film cooling parameters on a flat plate is investigated numerically. The effect of film cooling parameters such as inlet velocity direction, lateral and forward diffusion angles, blowing ratio, and streamwise angle on the cooling effectiveness is studied, and optimum cooling parameters are selected. The numerical simulation of the coolant flow through flat plate hole system is carried out using the “CFDRC package” coupled with the optimization algorithm “simplex” to maximize overall film cooling effectiveness. Unstructured finite volume technique is used to solve the steady, three-dimensional and compressible Navier-Stokes equations. The results are compared with the published numerical and experimental data of a cylindrically round-simple hole, and the results show good agreement. In addition, the results indicate that the average overall film cooling effectiveness is enhanced by decreasing the streamwise angle for high blowing ratio and by increasing the lateral and forward diffusion angles. Optimum geometry of the cooling hole on a flat plate is determined. In addition, numerical simulations of film cooling on actual turbine blade are performed using the flat plate optimal hole geometry.
Numerical computation of the restoring force in a cylindrical bearing containing magnetic liquid
Greconici Marian
2008-01-01
The present paper deals with the second-order magnetic levitation effect, applied to a cylindrical bearing holding a magnetized shaft immersed in magnetic liquid. The magnetic restoring force acting on the shaft of the cylindrical bearing was numerically evaluated, the liquid being treated as a nonlinear medium.
On Computer Network Reliability Optimization Technology
陆军; 朱文锋
2011-01-01
With the rapid development of Internet technology, computer information technology has penetrated every aspect of daily life and production. From a technical point of view, a computer network is a layout that connects related but physically distant entities through communication lines; reasoning about networks, however, goes beyond traditional two-dimensional planar or even three-dimensional spherical thinking. Moreover, increasingly open computer information systems have introduced numerous security risks: information security of every kind is being challenged, the struggle between hacker attack and anti-hacker protection, between sabotage and counter-sabotage, grows ever fiercer, and network security receives more and more attention. This paper analyzes computer network reliability and its optimization in the hope of being of help to the reader.
Numerical errors in the computation of subfilter scalar variance in large eddy simulations
Kaul, C. M.; Raman, V.; Balarac, G.; Pitsch, H.
2009-05-01
Subfilter scalar variance is a key quantity for scalar mixing at the small scales of a turbulent flow and thus plays a crucial role in large eddy simulation of combustion. While prior studies have mainly focused on the physical aspects of modeling subfilter variance, the current work discusses variance models in conjunction with the numerical errors due to their implementation using finite-difference methods. A priori tests on data from direct numerical simulation of homogeneous turbulence are performed to evaluate the numerical implications of specific model forms. Like other subfilter quantities, such as kinetic energy, subfilter variance can be modeled according to one of two general methodologies. In the first of these, an algebraic equation relating the variance to gradients of the filtered scalar field is coupled with a dynamic procedure for coefficient estimation. Although finite-difference methods substantially underpredict the gradient of the filtered scalar field, the dynamic method is shown to mitigate this error through overestimation of the model coefficient. The second group of models utilizes a transport equation for the subfilter variance itself or for the second moment of the scalar. Here, it is shown that the model formulation based on the variance transport equation is consistently biased toward underprediction of the subfilter variance. The numerical issues in the variance transport equation stem from discrete approximations to chain-rule manipulations used to derive convection, diffusion, and production terms associated with the square of the filtered scalar. These approximations can be avoided by solving the equation for the second moment of the scalar, suggesting the numerical superiority of the second-moment formulation.
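The first model family (an algebraic variance model fed by a finite-difference gradient) can be sketched in one dimension to make the gradient underprediction concrete. The function names and the coefficient value are illustrative assumptions, not the model forms tested in the paper:

```python
import numpy as np

def central_gradient(f, dx):
    """Second-order central difference on a periodic 1-D grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

def algebraic_variance(phi, dx, coeff=0.1):
    """Algebraic subfilter-variance model sigma^2 = C * dx^2 * |grad(phi)|^2.
    A hypothetical 1-D sketch; `coeff` stands in for the dynamically
    estimated coefficient discussed in the abstract."""
    g = central_gradient(phi, dx)
    return coeff * dx**2 * g**2
```

For phi = sin(kx), the central difference returns cos(kx) * sin(k dx)/(k dx), so the gradient magnitude, and hence the modeled variance, is systematically underpredicted; a dynamic procedure can compensate by inflating the coefficient, as the abstract notes.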
赵坚行; 伍艳玲; 周琳
2000-01-01
The three-dimensional swirling, recirculating, isothermal turbulent flow in the dome swirl cup of a gas turbine combustor is studied numerically in an arbitrary curvilinear coordinate system. The dual-stage swirler has numerous passages, and its geometry is so complex that grid generation is difficult. The body-fitted computational meshes are generated by the numerical solution of partial differential equations (the TTM method); a profile fixed-point method is proposed to determine the boundary grid, a uv-line method is used to generate the staggered grid, and the flow field is computed over the whole domain without zonal decomposition. The swirl cup is treated as an internal obstacle with suitable meshing around it. Using coordinate transformation relations, the transport equations are transformed from a cylindrical system to a general curvilinear system. Turbulence is modeled by the standard k-ε model along with wall-function treatment for near-wall regions. The finite-difference equations are solved by the hybrid scheme and the SIMPLE algorithm on the staggered curvilinear non-orthogonal grid system. Predictions show that a recirculation zone at the centerline is created under both co-swirl and counter-swirl conditions, the recirculation zone for counter-swirl being larger than that for co-swirl. Low-velocity gas flow with large turbulence intensities and high dissipation rates occurs inside the recirculation zone. Calculations for two inlet conditions show that the numerical procedure is reasonable and the computer program is reliable.
Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P.
2003-05-01
The two Numerical Recipes books are marvellous. The principal book, The Art of Scientific Computing, contains program listings for almost every conceivable requirement, and it also contains a well written discussion of the algorithms and the numerical methods involved. The Example Book provides a complete driving program, with helpful notes, for nearly all the routines in the principal book. The first edition of Numerical Recipes: The Art of Scientific Computing was published in 1986 in two versions, one with programs in Fortran, the other with programs in Pascal. There were subsequent versions with programs in BASIC and in C. The second, enlarged edition was published in 1992, again in two versions, one with programs in Fortran (NR(F)), the other with programs in C (NR(C)). In 1996 the authors produced Numerical Recipes in Fortran 90: The Art of Parallel Scientific Computing as a supplement, called Volume 2, with the original (Fortran) version referred to as Volume 1. Numerical Recipes in C++ (NR(C++)) is another version of the 1992 edition. The numerical recipes are also available on a CD ROM: if you want to use any of the recipes, I would strongly advise you to buy the CD ROM. The CD ROM contains the programs in all the languages. When the first edition was published I bought it, and have also bought copies of the other editions as they have appeared. Anyone involved in scientific computing ought to have a copy of at least one version of Numerical Recipes, and there also ought to be copies in every library. If you already have NR(F), should you buy the NR(C++) and, if not, which version should you buy? In the preface to Volume 2 of NR(F), the authors say 'C and C++ programmers have not been far from our minds as we have written this volume, and we think that you will find that time spent in absorbing its principal lessons will be amply repaid in the future as C and C++ eventually develop standard parallel extensions'. In the preface and introduction to NR
Application of high-performance computing to numerical simulation of human movement
Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.
1995-01-01
We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.
Broecker, Peter; Trebst, Simon
2016-12-01
In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.
Sofronov, I.D.; Voronin, B.L.; Butnev, O.I. [VNIIEF (Russian Federation)] [and others]
1997-12-31
The aim of the work performed is to develop a 3D parallel program for the numerical calculation of a gas dynamics problem with heat conductivity on distributed-memory computational systems (CS), satisfying the condition that the numerical result be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shareable memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments has been carried out with different numbers of processors, up to 256, and the parallelization efficiency has been evaluated as a function of the number of processors and their parameters.
2014-01-01
The book presents state-of-the-art works in computational engineering. Focus is on mathematical modeling, numerical simulation, experimental validation and visualization in engineering sciences. In particular, the following topics are presented: constitutive models and their implementation into finite element codes, numerical models in nonlinear elasto-dynamics including seismic excitations, multiphase models in structural engineering and multiscale models of materials systems, sensitivity and reliability analysis of engineering structures, the application of scientific computing in urban water management and hydraulic engineering, and the application of genetic algorithms for the registration of laser scanner point clouds.
Numerical Computations of Transonic Critical Aerodynamic Behavior of a Realistic Artillery Projectile
Ahmed F. M. Kridi
2009-01-01
The determination of aerodynamic coefficients by shell designers is a critical step in the development of any projectile design. Of particular interest is the determination of the aerodynamic coefficients at transonic speeds, for it is in this speed regime that the critical aerodynamic behavior occurs and a rapid change in the aerodynamic coefficients is observed. Two-dimensional, transonic flow-field computations over projectiles have been made using the Euler equations, which could be solved with no special treatment required. In this work the solution algorithm is based on the finite-difference MacCormack technique for solving the mixed subsonic-supersonic flow problem. Details of the asymmetrically located shock waves on the projectiles have been determined. Computed surface pressures have been compared with experimental data and are found to be in good agreement. The pitching moment coefficient, determined from the computed flow fields, shows the critical aerodynamic behavior observed in free flights.
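The MacCormack predictor-corrector named above can be illustrated on 1-D linear advection, u_t + c u_x = 0, where the predictor uses a forward difference and the corrector a backward difference on the predicted field. This is a generic sketch of the scheme, not the authors' transonic Euler solver:

```python
import numpy as np

def maccormack_advection(u, c, dx, dt, steps):
    """MacCormack predictor-corrector for u_t + c u_x = 0 on a periodic
    1-D grid. For linear advection this reduces to a second-order
    Lax-Wendroff-type update."""
    lam = c * dt / dx
    for _ in range(steps):
        # predictor: forward difference u_{i+1} - u_i
        up = u - lam * (np.roll(u, -1) - u)
        # corrector: backward difference on the predicted field
        u = 0.5 * (u + up - lam * (up - np.roll(up, 1)))
    return u
```

Advecting a sine wave through one full period returns it nearly unchanged, which is a standard sanity check before applying the scheme to nonlinear mixed-flow problems.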
A Computationally-Efficient Numerical Model to Characterize the Noise Behavior of Metal-Framed Walls
Arjunan, Arun; Wang, Chang; English, Martin; Stanford, Mark; Lister, Paul
2015-01-01
Architects, designers, and engineers are making great efforts to design acoustically-efficient metal-framed walls, minimizing acoustic bridging. Therefore, efficient simulation models to predict the acoustic insulation complying with ISO 10140 are needed at a design stage. In order to achieve this, a numerical model consisting of two fluid-filled reverberation chambers, partitioned using a metal-framed wall, is to be simulated at one-third-octaves. This produces a large simulation model consi...
Unified algorithm for partial differential equations and examples of numerical computation
Watanabe, Tsuguhiro [National Inst. for Fusion Science, Toki, Gifu (Japan)
1999-04-01
A new unified algorithm is proposed to solve partial differential equations which describe nonlinear boundary value problems, eigenvalue problems and time-developing boundary value problems. The algorithm is composed of an implicit difference scheme and a multiple shooting scheme and is named HIDM (Higher order Implicit Difference Method). A new prototype computer program for 2-dimensional partial differential equations has been constructed and tested successfully on several problems. Extension of the computer program to 3 or more dimensions will be easy due to the direct-product-type difference scheme. (author)
A numerical scheme using multi-shockpeakons to compute solutions of the Degasperis-Procesi equation
Hakon A. Hoel
2007-07-01
We consider a numerical scheme for entropy weak solutions of the DP (Degasperis-Procesi) equation $u_t - u_{xxt} + 4uu_x = 3u_{x}u_{xx}+ uu_{xxx}$. Multi-shockpeakons, functions of the form
$$ u(x,t)=\sum_{i=1}^n\big(m_i(t)-\operatorname{sign}(x-x_i(t))\,s_i(t)\big)e^{-|x-x_i(t)|}, $$
are solutions of the DP equation with a special property; their evolution in time is described by a dynamical system of ODEs. This property makes multi-shockpeakons relatively easy to simulate numerically. We prove that if we are given a non-negative initial function $u_0 \in L^1(\mathbb{R})\cap BV(\mathbb{R})$ such that $u_{0} - u_{0,x}$ is a positive Radon measure, then one can construct a sequence of multi-shockpeakons which converges to the unique entropy weak solution in $\mathbb{R}\times[0,T]$ for any $T>0$. From this convergence result, we construct a multi-shockpeakon-based numerical scheme for solving the DP equation.
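The multi-shockpeakon ansatz quoted above is straightforward to evaluate numerically. The sketch below only evaluates $u$ at fixed $m_i$, $s_i$, $x_i$; the ODE system governing their time evolution is not reproduced here:

```python
import numpy as np

def multi_shockpeakon(x, m, s, xi):
    """Evaluate u(x) = sum_i (m_i - sign(x - x_i) s_i) exp(-|x - x_i|)
    at the points x, for peakon amplitudes m, shock strengths s, and
    positions xi (equal-length sequences)."""
    x = np.asarray(x, dtype=float)
    u = np.zeros_like(x)
    for mi, si, xii in zip(m, s, xi):
        d = x - xii
        u += (mi - np.sign(d) * si) * np.exp(-np.abs(d))
    return u
```

Each term with $s_i > 0$ carries a downward jump of $2 s_i$ across $x_i$, which is the "shock" part of the shockpeakon.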
Ko, Soon Heum [Linkoeping University, Linkoeping (Sweden); Kim, Na Yong; Nikitopoulos, Dimitris E.; Moldovan, Dorel [Louisiana State University, Baton Rouge (United States); Jha, Shantenu [Rutgers University, Piscataway (United States)
2014-01-15
Numerical approaches are presented to minimize the statistical errors inherently present due to finite sampling and the presence of thermal fluctuations in the molecular region of a hybrid computational fluid dynamics (CFD) - molecular dynamics (MD) flow solution. Near the fluid-solid interface the hybrid CFD-MD simulation approach provides a more accurate solution, especially in the presence of significant molecular-level phenomena, than the traditional continuum-based simulation techniques. It also involves less computational cost than the pure particle-based MD. Despite these advantages the hybrid CFD-MD methodology has been applied mostly in flow studies at high velocities, mainly because of the higher statistical errors associated with low velocities. As an alternative to the costly increase of the size of the MD region to decrease statistical errors, we investigate a few numerical approaches that reduce sampling noise of the solution at moderate velocities. These methods are based on sampling of multiple simulation replicas and linear regression of multiple spatial/temporal samples. We discuss the advantages and disadvantages of each technique from the perspective of solution accuracy and computational cost.
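The replica-sampling idea rests on a simple statistical fact: averaging over independent replicas shrinks the standard error of the mean as 1/sqrt(n_replicas). A minimal sketch, with `replica_mean` as an assumed name rather than the paper's routine:

```python
import numpy as np

def replica_mean(replicas):
    """Ensemble average over independent simulation replicas.
    `replicas` is an (n_replicas, n_samples) array of velocity samples;
    returns the per-sample mean and its standard error, which decreases
    as 1/sqrt(n_replicas)."""
    replicas = np.asarray(replicas, dtype=float)
    mean = replicas.mean(axis=0)
    sem = replicas.std(axis=0, ddof=1) / np.sqrt(replicas.shape[0])
    return mean, sem
```

This is why sampling multiple replicas can substitute for the costlier option of enlarging the MD region itself.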
Wilke, DN
2012-07-01
…and is based on a Taylor series expansion using a pure imaginary step. The complex-step method is not subject to the subtraction errors that affect finite-difference approaches when computing first-order sensitivities, and it therefore allows for much smaller step sizes...
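The complex-step method described in this record amounts to one line: since f(x + ih) ≈ f(x) + ih f'(x) for real-analytic f, the derivative is Im(f(x + ih))/h, with no subtractive cancellation. A minimal sketch:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """First-order sensitivity df/dx via the complex-step method:
    f'(x) ~ Im(f(x + i*h)) / h. Because no difference of nearly equal
    numbers is formed, h can be taken extremely small (here 1e-30)
    without loss of accuracy."""
    return np.imag(f(x + 1j * h)) / h
```

This recovers derivatives to machine precision, unlike a forward difference whose error is bounded below by cancellation at small step sizes.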
A Bicharacteristic Scheme for the Numerical Computation of Two-Dimensional Converging Shock Waves
Meier, U E; Meier, Uwe E.; Demmig, Frank
1997-01-01
A 2D unsteady bicharacteristic scheme with shock fitting is presented, and its characteristic step, shock point step and boundary step are described. The bicharacteristic scheme is compared with a UNO scheme and the Moretti scheme. Its capabilities are illustrated by computing a converging, deformed shock wave.
Accurate Numerical Methods for Computing 2D and 3D Robot Workspace
Yi Cao
2011-12-01
Exact computation of the shape and size of a robot manipulator's workspace is very important for its analysis and optimum design. First, the drawbacks of previous methods based on Monte Carlo sampling are pointed out, and then improved strategies are presented systematically. In order to obtain more accurate boundary points of the two-dimensional (2D) robot workspace, the Beta distribution is adopted to generate the random joint variables. The area of the workspace is then acquired by computing the area of the polygon formed by connecting the boundary points into a closed path. To compare the shape and size errors of the workspaces generated by the previous and the improved methods, one planar robot manipulator is taken as an example. A spatial robot manipulator is used to illustrate that the methods can be applied not only to planar manipulators but also to spatial ones. Optimal parameters are proposed for computing the shape and size of the 2D and 3D workspace. Finally, we provide the computation time and discuss the generation of the 3D workspace, which is based on 3D reconstruction from the boundary points.
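The two ingredients named above — Beta-distributed joint sampling and a polygon-area (shoelace) computation — can be sketched for a 2-link planar arm. The link lengths, joint limits, and Beta(0.5, 0.5) shape parameters below are illustrative assumptions; the paper's optimal parameters may differ:

```python
import numpy as np

def sample_workspace_points(l1, l2, lim1, lim2, n=2000, rng=None):
    """Monte Carlo sample of a 2-link planar arm's reachable points.
    Joint angles are drawn from a Beta(0.5, 0.5) distribution mapped onto
    the joint limits, concentrating samples near the limits, where the
    workspace boundary lies."""
    if rng is None:
        rng = np.random.default_rng()
    t1 = lim1[0] + (lim1[1] - lim1[0]) * rng.beta(0.5, 0.5, n)
    t2 = lim2[0] + (lim2[1] - lim2[0]) * rng.beta(0.5, 0.5, n)
    x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2)
    y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2)
    return np.column_stack([x, y])

def shoelace_area(poly):
    """Area of a closed polygon given as an (n, 2) array of boundary
    points in path order (shoelace formula)."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
```

Once boundary points are extracted from the sampled cloud and ordered into a closed path, `shoelace_area` gives the workspace area directly.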
Stone, James
2011-04-01
Numerical methods have proved crucial for the study of the nonlinear regime of the magnetorotational instability (MRI) and resulting dynamo action. After a brief introduction to the methods, a variety of results from new simulations of the MRI in both local (shearing box approximation) and global domains will be presented. Previous work on the saturation level and numerical convergence in both stratified and unstratified domains with no net flux (both with and without explicit dissipation) will be described, and the connection to dynamo theory will be mentioned. Results from several groups in which the size of the computational domain, and the vertical boundary conditions, are varied will be discussed. Finally, new work on the direct comparison between high-resolution global and shearing box simulations will be presented, and new studies of stratified disks with radiative transfer will be introduced.
Seo, Yong-Joon; Kwon, Taek-Ka; Han, Jung-Suk; Lee, Jai-Bong; Kim, Sung-Hun
2014-01-01
PURPOSE The purpose of this study was to evaluate the intra-rater reliability and inter-rater reliability of three different methods using a drawing protractor, a digital protractor after tracing, and a CAD system. MATERIALS AND METHODS Twenty-four artificial abutments that had been prepared by dental students were used in this study. Three dental students measured the convergence angles by each method three times. Bland-Altman plots were applied to examine the overall reliability by comparing the traditional tracing method with a new method using the CAD system. Intraclass Correlation Coefficients (ICC) evaluated intra-rater reliability and inter-rater reliability. RESULTS All three methods exhibited high intra-rater and inter-rater reliability (ICC>0.80, P<.05). Measurements with the CAD system showed the highest intra-rater reliability. In addition, it showed improved inter-rater reliability compared with the traditional tracing methods. CONCLUSION Based on the results of this study, the CAD system may be an easy and reliable tool for measuring the abutment convergence angle. PMID:25006382
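The intraclass correlation coefficient used above can be sketched with the one-way random-effects form ICC(1,1) = (MSB - MSW) / (MSB + (k-1) MSW). This is one common ICC variant, chosen here for simplicity; the study's exact model (e.g. two-way, absolute agreement) may differ:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_targets, k_raters)
    array of scores: between-target mean square vs within-target
    mean square."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    target_means = ratings.mean(axis=1)
    msb = k * np.sum((target_means - grand) ** 2) / (n - 1)              # between targets
    msw = np.sum((ratings - target_means[:, None]) ** 2) / (n * (k - 1))  # within targets
    return (msb - msw) / (msb + (k - 1) * msw)
```

Values near 1 indicate that nearly all score variance is between abutments rather than between repeated measurements, matching the "ICC > 0.80" reliability criterion reported.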
Bao, Weizhu
2013-01-01
We propose a simple, efficient, and accurate numerical method for simulating the dynamics of rotating Bose-Einstein condensates (BECs) in a rotational frame with or without long-range dipole-dipole interaction (DDI). We begin with the three-dimensional (3D) Gross-Pitaevskii equation (GPE) with an angular momentum rotation term and/or long-range DDI, state the two-dimensional (2D) GPE obtained from the 3D GPE via dimension reduction under anisotropic external potential, and review some dynamical laws related to the 2D and 3D GPEs. By introducing a rotating Lagrangian coordinate system, the original GPEs are reformulated to GPEs without the angular momentum rotation, which is replaced by a time-dependent potential in the new coordinate system. We then cast the conserved quantities and dynamical laws in the new rotating Lagrangian coordinates. Based on the new formulation of the GPE for rotating BECs in the rotating Lagrangian coordinates, a time-splitting spectral method is presented for computing the dynamics of rotating BECs. The new numerical method is explicit, simple to implement, unconditionally stable, and very efficient in computation. It is spectral-order accurate in space and second-order accurate in time and conserves the mass on the discrete level. We compare our method with some representative methods in the literature to demonstrate its efficiency and accuracy. In addition, the numerical method is applied to test the dynamical laws of rotating BECs such as the dynamics of condensate width, angular momentum expectation, and center of mass, and to investigate numerically the dynamics and interaction of quantized vortex lattices in rotating BECs without or with the long-range DDI. Copyright © by SIAM.
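The general time-splitting spectral idea can be sketched for the plain 1-D GPE (no rotation term): a Strang split alternates an exact pointwise potential/nonlinear step with an exact kinetic step in Fourier space. This generic sketch is not the authors' rotating-Lagrangian-coordinate solver, but it shows why the method class is explicit, spectrally accurate in space, and mass-conserving at the discrete level:

```python
import numpy as np

def split_step(psi, V, dx, dt, steps, g=0.0):
    """Strang-splitting spectral stepper for the 1-D GPE
    i psi_t = -0.5 psi_xx + V psi + g |psi|^2 psi on a periodic grid.
    Both substeps multiply psi by unit-modulus factors, so the discrete
    mass sum(|psi|^2) dx is conserved to rounding error."""
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kinetic = np.exp(-0.5j * dt * k**2)
    for _ in range(steps):
        psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))  # half potential step
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))                 # full kinetic step
        psi = psi * np.exp(-0.5j * dt * (V + g * np.abs(psi) ** 2))  # half potential step
    return psi
```

In the paper's formulation the rotation term is absorbed into a time-dependent potential, so a step of this same shape applies in the rotating Lagrangian coordinates.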
Barth, Johannes; Neyton, Lionel; Métais, Pierre; Panisset, Jean-Claude; Baverel, Laurent; Walch, Gilles; Lafosse, Laurent
2017-08-01
The aim of the study was to develop a computed tomography (CT)-based measurement protocol for coracoid graft (CG) placement in both axial and sagittal planes after a Latarjet procedure and to test its intraobserver and interobserver reliability. Fifteen postoperative CT scans were included to assess the intraobserver and interobserver reproducibility of a standardized protocol among 3 senior and 3 junior shoulder surgeons. The evaluation sequence included CG positioning, its contact area with the glenoid, and the angle of its screws in the axial plane. The percentage of CG positioned under the glenoid equator was also analyzed in the sagittal plane. The intraobserver and interobserver agreement was measured by the intraclass correlation coefficient (ICC), and the values were interpreted according to the Landis and Koch classification. The ICC was substantial to almost perfect for intraobserver agreement and fair to almost perfect for interobserver agreement in measuring the angle of screws in the axial plane. The intraobserver agreement was slight to almost perfect and the interobserver agreement slight to substantial regarding CG positioning in the same plane. The intraobserver agreement and interobserver agreement were both fair to almost perfect concerning the contact area. The ICC was moderate to almost perfect for intraobserver agreement and slight to almost perfect for interobserver agreement in analyzing the percentage of CG under the glenoid equator. The variability of ICC values observed implies that caution should be taken in interpreting results regarding the CG position on 2-dimensional CT scans. This discrepancy is mainly explained by the difficulty in orienting the glenoid in the sagittal plane before any other parameter is measured. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Khan, L; Mitera, G; Probyn, L; Ford, M; Christakis, M; Finkelstein, J; Donovan, A; Zhang, L; Zeng, L; Rubenstein, J; Yee, A; Holden, L; Chow, E
2011-12-01
The primary objective of this pilot study was to examine the inter-rater reliability in scoring the computed tomography (CT) imaging features of spinal metastases in patients referred for radiotherapy (RT) for bone pain. In a retrospective review, 3 musculoskeletal radiologists and 2 orthopedic spinal surgeons independently evaluated CT imaging features for 41 patients with spinal metastases treated with RT in an outpatient radiation clinic from January 2007 to October 2008. The evaluation used spinal assessment criteria that had been developed in-house, with reference to osseous and soft tissue tumour extent, presence of a pathologic fracture, severity of vertebral height loss, and presence of kyphosis. The Cohen kappa coefficient between the two specialties was calculated. Mean patient age was 69.2 years (30 men, 11 women). The mean total daily oral morphine equivalent was 73.4 mg. Treatment dose-fractionation schedules included 8 Gy/1 (n = 28), 20 Gy/5 (n = 12), and 20 Gy/8 (n = 1). Areas of moderate agreement in identifying the CT imaging appearance of spinal metastasis included extent of vertebral body involvement (κ = 0.48) and soft-tissue component (κ = 0.59). Areas of fair agreement included extent of pedicle involvement (κ = 0.28), extent of lamina involvement (κ = 0.35), and presence of pathologic fracture (κ = 0.20). Areas of poor agreement included nerve-root compression (κ = 0.14) and vertebral body height loss (κ = 0.19). The range of agreement between musculoskeletal radiologists and orthopedic surgeons for most spinal assessment criteria is moderate to poor. A consensus for managing challenging vertebral injuries secondary to spinal metastases needs to be established so as to best triage patients to the most appropriate therapeutic modality.
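The Cohen kappa coefficient used in this study corrects observed agreement for agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch for two raters' categorical scores:

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical scores (equal-length
    sequences). p_o is the observed agreement rate; p_e the rate expected
    if both raters assigned categories independently at their own
    marginal frequencies."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)
    return (po - pe) / (1.0 - pe)
```

On the conventional scale used in the abstract, roughly 0.2-0.4 counts as fair, 0.4-0.6 as moderate, and below 0.2 as poor agreement.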
Sawicki, Marcin; Walecka, A. [Pomeranian Medical University, Department of Diagnostic Imaging and Interventional Radiology, Szczecin (Poland); Bohatyrewicz, R.; Solek-Pastuszka, J. [Pomeranian Medical University, Clinic of Anesthesiology and Intensive Care, Szczecin (Poland); Safranow, K. [Pomeranian Medical University, Department of Biochemistry and Medical Chemistry, Szczecin (Poland); Walecki, J. [The Centre of Postgraduate Medical Education, Warsaw (Poland); Rowinski, O. [Medical University of Warsaw, 2nd Department of Clinical Radiology, Warsaw (Poland); Czajkowski, Z. [Regional Joint Hospital, Szczecin (Poland); Guzinski, M. [Wroclaw Medical University, Department of General Radiology, Interventional Radiology and Neuroradiology, Wroclaw (Poland); Burzynska, M. [Wroclaw Medical University, Department of Anesthesiology and Intensive Therapy, Wroclaw (Poland); Wojczal, J. [Medical University of Lublin, Department of Neurology, Lublin (Poland)
2014-08-15
The standardized diagnostic criteria for computed tomographic angiography (CTA) in diagnosis of brain death (BD) are not yet established. The aim of the study was to compare the sensitivity and interobserver agreement of the three previously used scales of CTA for the diagnosis of BD. Eighty-two clinically brain-dead patients underwent CTA with a delay of 40 s after contrast injection. Catheter angiography was used as the reference standard. CTA results were assessed by two radiologists, and the diagnosis of BD was established according to 10-, 7-, and 4-point scales. Catheter angiography confirmed the diagnosis of BD in all cases. Opacification of certain cerebral vessels as indicator of BD was highly sensitive: cortical segments of the middle cerebral artery (96.3 %), the internal cerebral vein (98.8 %), and the great cerebral vein (98.8 %). Other vessels were less sensitive: the pericallosal artery (74.4 %), cortical segments of the posterior cerebral artery (79.3 %), and the basilar artery (82.9 %). The sensitivities of the 10-, 7-, and 4-point scales were 67.1, 74.4, and 96.3 %, respectively (p < 0.001). Percentage interobserver agreement in diagnosis of BD reached 93 % for the 10-point scale, 89 % for the 7-point scale, and 95 % for the 4-point scale (p = 0.37). In the application of CTA to the diagnosis of BD, reducing the assessment of vascular opacification scale from a 10- to a 4-point scale significantly increases the sensitivity and maintains high interobserver reliability. (orig.)
Faletti, Riccardo; Gatti, Marco; Salizzoni, Stefano; Bergamasco, Laura; Bonamini, Rodolfo; Garabello, Domenica; Marra, Walter Grosso; La Torre, Michele; Morello, Mara; Veglia, Simona; Fonio, Paolo; Rinaldi, Mauro
2016-08-01
To assess the accuracy and reproducibility of cardiovascular magnetic resonance (CMR) in the measurement of the aortic annulus and in the process of valve sizing, as compared with intra-operative sizing, cardiovascular computed tomography (CCT) and transesophageal echocardiography (TEE). Retrospective study on 42 patients who underwent aortic valve replacement from September 2010 to September 2015, with available records of pre-surgery annulus assessment by CMR, CCT and TEE and of peri-operative assessment. In CCT and CMR, the annular plane was considered a virtual ring formed by the lowest hinge points of the valvular attachments to the aorta. In TEE the annulus was measured at the base of leaflet insertion in the mid-esophageal long-axis view using the X-plane technique. Two double-blinded operators performed the assessments for each imaging technique. Intra-operative evaluation was performed using Hegar dilators. Continuous variables were studied with within-subject ANOVA, Bland-Altman (BA) plots, Wilcoxon's and Friedman's tests; trends were explored with scatter plots. Categorical variables were studied with Fisher's exact test. The intra- and inter-operator reliability was satisfactory. There were no significant differences between the annulus dimensions measured by CMR and any of the three references. Valve sizing for CoreValve by CMR had the same good agreement with CCT and TEE, with a 78 % match rate; for SAPIEN XT the agreement was slightly better for CCT (82 %) than for TEE (66 %). CMR performs well when compared to the surgical reference of intra-operative sizing and stands up to the level of the most used imaging references (CCT and TEE).
System Reliability Synthesis of Wind Turbine Based on Computer Simulation
郭建英; 孙永全; 王铭义; 丁喜波
2012-01-01
Reliability assessment that relies solely on life testing is subject to many constraints, and synthesizing the reliability of a complex system composed of units with different life distributions is an intractable problem. To solve these problems, a numerical method based on computer simulation is proposed. It makes full use of the original unit reliability information and replaces life testing with simulation testing: on the basis of the system reliability model, a sufficient number of simulated system lifetimes is generated by logical operations, and the simulated data are then analyzed to infer the distribution type of the system life, perform goodness-of-fit tests, and estimate point values and confidence intervals of the distribution parameters and reliability measures. With engineering practicality in mind, the whole procedure is implemented on a computer. This approach effectively solves the difficulties of multilevel reliability synthesis of complex systems and has been applied to the reliability synthesis and prediction of wind turbine systems.
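The logical-operation synthesis described in the abstract above can be sketched in a few lines. The sketch below is not the authors' code: the three-unit series-parallel layout, all distribution families, and all parameter values are hypothetical, chosen only to show how unit lifetimes drawn from different distributions are combined into simulated system lifetimes and then summarized.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of simulated system histories

# Hypothetical 3-unit system: units 1 and 2 in parallel, in series with unit 3.
# Unit lives drawn from different distributions (two Weibulls, one exponential).
t1 = rng.weibull(2.0, N) * 1000.0   # shape 2.0, scale 1000 h
t2 = rng.weibull(1.5, N) * 800.0    # shape 1.5, scale 800 h
t3 = rng.exponential(1500.0, N)     # mean 1500 h

# System life by logical composition: parallel -> max, series -> min
t_sys = np.minimum(np.maximum(t1, t2), t3)

# Point estimate and a simple normal-approximation CI for mean system life
mean = t_sys.mean()
half = 1.96 * t_sys.std(ddof=1) / np.sqrt(N)
ci = (mean - half, mean + half)

# Reliability at a mission time, estimated from the empirical distribution
R_500 = np.mean(t_sys > 500.0)
```

In a real application the simulated sample `t_sys` would additionally be fitted to a candidate life distribution and checked with a goodness-of-fit test, as the abstract describes.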
Katsaounis, T. D.
2005-02-01
The scope of this book is to present well-known simple and advanced numerical methods for solving partial differential equations (PDEs) and how to implement these methods using the programming environment of the software package Diffpack. A basic background in PDEs and numerical methods is required by the potential reader. Further, a basic knowledge of the finite element method and its implementation in one and two space dimensions is required. The authors claim that no prior knowledge of the package Diffpack is required, which is true, but the reader should be at least familiar with an object-oriented programming language like C++ in order to better comprehend the programming environment of Diffpack. Certainly, a prior knowledge or usage of Diffpack would be a great advantage to the reader. The book consists of 15 chapters, each one written by one or more authors. Each chapter is basically divided into two parts: the first part is about mathematical models described by PDEs and numerical methods to solve these models and the second part describes how to implement the numerical methods using the programming environment of Diffpack. Each chapter closes with a list of references on its subject. The first nine chapters cover well-known numerical methods for solving the basic types of PDEs. Further, programming techniques on the serial as well as on the parallel implementation of numerical methods are also included in these chapters. The last five chapters are dedicated to applications, modelled by PDEs, in a variety of fields. The first chapter is an introduction to parallel processing. It covers fundamentals of parallel processing in a simple and concrete way and no prior knowledge of the subject is required. Examples of parallel implementation of basic linear algebra operations are presented using the Message Passing Interface (MPI) programming environment. Here, some knowledge of MPI routines is required by the reader. Examples of solving simple PDEs in parallel using
Shershnev, Anton A.; Kudryavtsev, Alexey N.; Kashkovsky, Alexander V.; Khotyanovsky, Dmitry V.
2016-10-01
The present paper describes HyCFS code, developed for numerical simulation of compressible high-speed flows on hybrid CPU/GPU (Central Processing Unit / Graphical Processing Unit) computational clusters on the basis of full unsteady Navier-Stokes equations, using modern shock capturing high-order TVD (Total Variation Diminishing) and WENO (Weighted Essentially Non-Oscillatory) schemes on general curvilinear structured grids. We discuss the specific features of hybrid architecture and details of program implementation and present the results of code verification.
Computation of Nonlinear Backscattering Using a High-Order Numerical Method
Fibich, G.; Ilan, B.; Tsynkov, S.
2001-01-01
The nonlinear Schrödinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.
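The paper's NLH solver is not reproduced here, but the fourth-order convergence claimed for such finite-difference methods is easy to verify on a standard five-point stencil for the second derivative, which is one plausible building block of a scheme of this order. The function and test point below are illustrative only.

```python
import numpy as np

def d2_fourth_order(f, x, h):
    """Five-point, fourth-order central approximation of f''(x)."""
    return (-f(x - 2*h) + 16*f(x - h) - 30*f(x)
            + 16*f(x + h) - f(x + 2*h)) / (12.0 * h**2)

f = np.sin
exact = -np.sin(1.0)          # since f''(x) = -sin(x)

e1 = abs(d2_fourth_order(f, 1.0, 0.10) - exact)
e2 = abs(d2_fourth_order(f, 1.0, 0.05) - exact)
order = np.log2(e1 / e2)      # observed order of accuracy, close to 4
```

Halving the step reduces the error by roughly a factor of 16, the signature of a fourth-order scheme.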
Numerical computation of solar heat storage in phase change material/concrete wall
Mustapha Faraji
2014-01-01
Full Text Available A one-dimensional mathematical model was developed in order to analyze and optimize the latent heat storage wall. Two layers of phase change material (PCM) are sandwiched within a concrete wall. The governing equations for energy transport were developed by using the enthalpy method and discretized with a control-volume scheme. A series of numerical investigations were conducted. The effect of the melting temperature on the possibility of increasing the energy performance of the proposed heating system was analyzed. Results are obtained for thermal gain and temperature fluctuation. The charging/discharging process was also presented and analyzed.
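The enthalpy method with a control-volume discretization, as named in the abstract above, can be sketched for a single 1-D PCM layer. All material properties, boundary temperatures and run lengths below are hypothetical, and the explicit scheme is a deliberate simplification of whatever scheme the authors used.

```python
import numpy as np

# Minimal explicit 1-D enthalpy-method sketch (hypothetical properties).
L_slab = 0.05                        # m, PCM layer thickness
n      = 50                          # control volumes
dx     = L_slab / n
rho    = 800.0                       # kg/m^3
c      = 2000.0                      # J/(kg K)
k      = 0.2                         # W/(m K)
Lf     = 180e3                       # J/kg, latent heat of fusion
Tm     = 27.0                        # deg C, melting temperature
alpha  = k / (rho * c)
dt     = 0.4 * dx**2 / alpha         # inside the explicit stability limit

T = np.full(n, 20.0)                 # initial temperature
H = rho * c * T                      # volumetric enthalpy
T_left, T_right = 45.0, 20.0         # fixed boundary temperatures

def temp_from_enthalpy(H):
    """Invert the enthalpy-temperature relation (isothermal phase change)."""
    Hm = rho * c * Tm                # enthalpy at onset of melting
    return np.where(H < Hm, H / (rho * c),
           np.where(H > Hm + rho * Lf, (H - rho * Lf) / (rho * c), Tm))

for step in range(20000):
    Tb = np.concatenate(([T_left], T, [T_right]))
    g = k * np.diff(Tb) / dx         # conductive gradients at cell faces
    H += dt * np.diff(g) / dx        # control-volume energy balance
    T = temp_from_enthalpy(H)
```

The key idea is that temperature is a derived quantity: the energy balance updates the enthalpy, and the phase-change plateau at `Tm` emerges from inverting the enthalpy-temperature relation.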
Numerical Solution of the 3-D Navier-Stokes Equations on the CRAY-1 Computer.
1979-01-01
[Garbled fragment from the original report scan; recoverable content mentions Cartesian and transformed coordinate frames, the CRAY-1, STAR-100 and ILLIAC IV computers for scientific computations, a fractional time step, and difference operators containing a predictor and corrector applied during each numerical sweep of the flux terms.]
Devendra Kumar
2014-06-01
Full Text Available In this paper, we present a reliable algorithm based on the homotopy analysis transform method (HATM) to solve the linear and nonlinear Klein–Gordon equations. The Klein–Gordon equation is the equation of motion of a quantum scalar or pseudoscalar field, a field whose quanta are spinless particles. It describes the quantum amplitude for finding a point particle in various places, the relativistic wave function, but the particle propagates both forwards and backwards in time. The HATM is a combined form of the Laplace transform method and the homotopy analysis method. The method provides the solution in the form of a rapidly convergent series. Some numerical examples are used to illustrate the precision and effectiveness of the proposed method. The results show that the HATM is very efficient, simple and can be applied to other nonlinear problems.
Tanja eKäser
2013-08-01
Full Text Available This article presents the design and a first pilot evaluation of the computer-based training program Calcularis for children with developmental dyscalculia (DD) or difficulties in learning mathematics. The program has been designed according to insights on the typical and atypical development of mathematical abilities. The learning process is supported through multimodal cues, which encode different properties of numbers. To offer optimal learning conditions, a user model completes the program and allows flexible adaptation to a child’s individual learning and knowledge profile. Thirty-two children with difficulties in learning mathematics completed the 6- to 12-week computer training. The children played the game for 20 minutes per day, 5 days a week. The training effects were evaluated using neuropsychological tests. Generally, children benefited significantly from the training regarding number representation and arithmetic operations. Furthermore, children liked to play with the program and reported that the training improved their mathematical abilities.
Yi-rang YUAN; Chang-feng LI; Yun-xin LIU; Li-qin MA
2009-01-01
We propose a modified upwind finite difference fractional step scheme for the computational fluid mechanics simulations of a three-dimensional photoelectric semiconductor detector. We obtain the optimal l2-norm error estimates by using the techniques including the calculus of variations, the energy methods, the induction hypothesis, and a priori estimates. The proposed scheme is successfully applied to the simulation of the photoelectric semiconductor detectors.
Numerical Computation of Stress Intensity Factors for Bolt-hole Corner Crack in Mechanical Joints
Wang Liqing; Gai Bingzheng
2008-01-01
The three-dimensional finite element method is used to solve the problem of the quarter-elliptical corner crack of the bolt-hole in mechanical joints being subjected to remote tension. The square-root stress singularity around the corner crack front is simulated using the collapsed 20-node quarter-point singular elements. The contact interaction between the bolt and the hole boundary is considered in the finite element analysis. The stress intensity factors (SIFs) along the crack front are evaluated by using the displacement correlation technique. The effects of the amount of clearance between the hole and the bolt on the SIFs are investigated. The numerical results indicate that the mode I SIF decreases as the clearance decreases, and that when clearance is present, the corner crack is in a mixed-mode state, even if mode I loading is dominant.
Assessment of three numerical methods for the computation of a low-density plume flow
Penko, Paul F.; Riley, Ben R.; Boyd, Iain D.
1993-01-01
Results from three numerical methods including one based on the Navier-Stokes equations, one based on kinetic theory using the DSMC method, and one based on the Boltzmann equation with a Krook-type collision term are compared to each other and to experimental data for a model problem of heated nitrogen flow in a conical nozzle expanding into a vacuum. The problem simulates flow in a resistojet, a low-thrust, electrothermal rocket. The continuum method is applied to both the internal flow and near-field plume. The DSMC and Boltzmann methods are applied primarily to the plume. Experimental measurements of Pitot pressure and flow angle, taken with an apparatus that duplicates the model nozzle flow, are used in the comparisons.
Biston, Marie-Claude [Equipe d' Accueil no 2941 ' Rayonnement Synchrotron et Recherche Medicale' , Unite IRM, CHU, BP 217, F-38043 Grenoble Cedex 09 (France); Corde, Stephanie [Equipe d' Accueil no 2941 ' Rayonnement Synchrotron et Recherche Medicale' , Unite IRM, CHU, BP 217, F-38043 Grenoble Cedex 09 (France); Camus, Emmanuel [Samba Technologies, ZIRST 53, chemin du Vieux Chene 38240 Meylan (France); Marti-Battle, Ramon [Samba Technologies, ZIRST 53, chemin du Vieux Chene 38240 Meylan (France); Esteve, Francois [Equipe d' Accueil no 2941 ' Rayonnement Synchrotron et Recherche Medicale' , Unite IRM, CHU, BP 217, F-38043 Grenoble Cedex 09 (France); Balosso, Jacques [Equipe d' Accueil no 2941 ' Rayonnement Synchrotron et Recherche Medicale' , Unite IRM, CHU, BP 217, F-38043 Grenoble Cedex 09 (France)
2003-06-07
This work establishes an objective method to measure cell clonogenic survival by computer-assisted image processing using images of cell cultures fixed and stained in Petri dishes. The procedure, developed by Samba Technologies, consists of acquiring Petri dish pictures with a desktop scanner and analysing them by computer, using algorithms based on the 'top hat' filter. The results from the automated count for the cell line SQ20B are compared with those found by two observers, before and after normalization of the counting. After normalization, the shape of the survival curves of the 'manual' counting of the Petri dishes shows a good correlation between both observers. The software enables the small visible differences in count between observers to be eliminated. The comparison between the absolute number of colonies shows an increased difference between the two manual scorings that can be as great as 67 colonies, whereas the difference between the two automated counts is never greater than 8 colonies. These results demonstrate that the 'manual' count is subject to inter- and intra-observer variability, whereas the automatic count performs reproducible cell colony counts, thereby minimizing user-generated bias. The large amount of data produced also gives information about cell and colony characteristics. Thus, this computer-assisted method has considerably improved the reliability of our statistical results.
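A top-hat based colony count of the kind described above can be illustrated on a synthetic image. This is not Samba Technologies' algorithm: the pure-NumPy opening, the window size, the threshold, and the synthetic "dish" (a smooth background gradient plus five bright disks) are all hypothetical choices made only to show why the top-hat transform separates compact bright objects from a slowly varying background.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def grey_opening(img, k):
    """Grayscale opening (erosion then dilation) with a k x k square window."""
    pad = k // 2
    def win_reduce(a, fn):
        ap = np.pad(a, pad, mode="edge")
        return fn(sliding_window_view(ap, (k, k)), axis=(2, 3))
    return win_reduce(win_reduce(img, np.min), np.max)

def count_blobs(mask):
    """Count 4-connected components with an explicit-stack flood fill."""
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    n = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                n += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < H and 0 <= x < W and mask[y, x] and not seen[y, x]:
                        seen[y, x] = True
                        stack += [(y+1, x), (y-1, x), (y, x+1), (y, x-1)]
    return n

# Synthetic "Petri dish": smooth background gradient plus 5 bright colonies
img = np.linspace(0.0, 0.6, 100)[None, :] * np.ones((100, 100))
yy, xx = np.mgrid[0:100, 0:100]
centers = [(20, 20), (20, 70), (50, 45), (80, 25), (80, 80)]
for cy, cx in centers:
    img += 1.0 * ((yy - cy)**2 + (xx - cx)**2 < 9)   # radius-3 colonies

# The 'top hat' filter: image minus its opening suppresses the background
tophat = img - grey_opening(img, 11)
n_colonies = count_blobs(tophat > 0.5)
```

The opening removes anything too small to contain the structuring window, so subtracting it leaves only the colonies, which a connected-component pass then counts.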
Pellett, Gerald L.; Wilson, Lloyd G.; Humphreys, William M., Jr.; Bartram, Scott M.; Gartrell, Luther R.; Isaac, K. M.
1995-01-01
Laminar fuel-air counterflow diffusion flames (CFDFs) were studied using axisymmetric convergent-nozzle and straight-tube opposed jet burners (OJBs). The subject diagnostics were used to probe a systematic set of H2/N2-air CFDFs over wide ranges of fuel input (22 to 100% H2), and input axial strain rate (130 to 1700 1/s) just upstream of the airside edge, for both plug-flow and parabolic input velocity profiles. Laser Doppler Velocimetry (LDV) was applied along the centerline of seeded air flows from a convergent nozzle OJB (7.2 mm i.d.), and Particle Imaging Velocimetry (PIV) was applied on the entire airside of both nozzle and tube OJBs (7 and 5 mm i.d.) to characterize global velocity structure. Data are compared to numerical results from a one-dimensional (1-D) CFDF code based on a stream function solution for a potential flow input boundary condition. Axial strain rate inputs at the airside edge of nozzle-OJB flows, using LDV and PIV, were consistent with 1-D impingement theory, and supported earlier diagnostic studies. The LDV results also characterized a heat-release hump. Radial strain rates in the flame substantially exceeded 1-D numerical predictions. Whereas the 1-D model closely predicted the max/min axial velocity ratio in the hot layer, it overpredicted its thickness. The results also support previously measured effects of plug-flow and parabolic input strain rates on CFDF extinction limits. Finally, the submillimeter-scale LDV and PIV diagnostics were tested under severe conditions, which reinforced their use with subcentimeter OJB tools to assess effects of aerodynamic strain, and fuel/air composition, on laminar CFDF properties, including extinction.
Numerical computation of pyramidal quantum dots with band non-parabolicity
Gong, Liang; Shu, Yong-chun; Xu, Jing-jun; Wang, Zhan-guo
2013-09-01
This paper presents an effective and feasible eigen-energy scanning method to solve polynomial matrix eigenvalues introduced by the 3D quantum dot problem with band non-parabolicity. The pyramid-shaped quantum dot is placed in a computational box with uniform mesh in Cartesian coordinates. Its corresponding Schrödinger equation is discretized by the finite difference method. The interface conditions are incorporated into the discretization scheme without explicitly enforcing them. By comparing the eigenvalues from isolated quantum dots and a vertically aligned regular array of them, we investigate the coupling effect for variable distances between the quantum dots and different sizes.
WANG Yu; HE Pingting; YE Hong; XIN Zhihong
2007-01-01
Instantaneous flow field and temperature field of the two-phase fluid are measured by particle image velocimetry (PIV) and the steady state method under flowing conditions. A turbulent two-phase fluid model of a stirred bioreactor with a punched impeller is established by computational fluid dynamics (CFD), using a rotating coordinate system and sliding mesh to describe the relative motion between impeller and baffles. The simulation and experiment results for the flow and temperature fields show that their deviations are less than 10% and that the mathematical model can simulate the fields well, which provides a useful reference for studies on the optimized design and scale-up of bioreactors.
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation. This method is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
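Richardson's extrapolation from three progressively refined grids, as referred to above, can be sketched directly. The three solution values and the refinement ratio below are illustrative textbook-style numbers, not data from the paper.

```python
import numpy as np

# Hypothetical fine/medium/coarse solutions of some CFD quantity at the
# same location, obtained on systematically refined grids (ratio r = 2).
f1, f2, f3 = 0.9713, 0.9692, 0.9611   # fine, medium, coarse
r = 2.0

# Observed order of accuracy from the three solutions
p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)

# Richardson-extrapolated (approximately grid-independent) value
f_exact = f1 + (f1 - f2) / (r**p - 1.0)

# Grid convergence index (GCI) on the fine grid, with safety factor 1.25
gci_fine = 1.25 * abs((f1 - f2) / f1) / (r**p - 1.0)
```

Note that the observed order `p` relies on point values at coincident locations on all three grids, which is exactly why an interpolation scheme is needed when the grids are unstructured.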
Guangsheng Chen
2015-08-01
Full Text Available This study proposed a dynamic parameter identification method for the feeding system of computer numerical control machine tools based on internal sensors. A simplified control model and a linear identification model of the feeding system were established, in which the input and output signals come from sensors embedded in computer numerical control machine tools, and the dynamic parameters of the feeding system, including the equivalent inertia, equivalent damping, worktable damping, and the overall stiffness of the mechanical system, were solved by the least-squares method. Using a high-order Taylor expansion, the nonlinear Stribeck friction model was linearized and the parameters of the Stribeck friction model were obtained in the same way. To verify the validity and effectiveness of the identification method, identification experiments, circular motion testing, and simulations were conducted. The results obtained were stable and suggested that the inertia and damping identification experiments converged quickly. Stiffness identification experiments showed some deviation from simulation due to the influences of geometric error and the nonlinearity of the stiffness. However, the identification results were still of reference significance and the method is convenient, effective, and suited to industrial conditions.
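A least-squares identification of equivalent inertia and damping from logged input-output signals, as in the abstract above, can be sketched on synthetic data. This is not the authors' procedure: the first-order model J*dw/dt + b*w = u, the parameter values, the excitation, and the noise level are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# True (hypothetical) feed-drive parameters used to generate test data
J_true, b_true = 0.012, 0.35       # equivalent inertia, viscous damping
dt = 0.001
t = np.arange(0.0, 2.0, dt)
u = 0.5 * np.sign(np.sin(2 * np.pi * 1.5 * t))   # square-wave torque command

# Simulate J*dw/dt + b*w = u (forward Euler), as if logged by internal sensors
w = np.zeros_like(t)
for k in range(len(t) - 1):
    w[k + 1] = w[k] + dt * (u[k] - b_true * w[k]) / J_true

w_noisy = w + 0.0005 * rng.standard_normal(len(w))   # measurement noise

# Least-squares identification: solve [dw/dt, w] @ [J, b]^T = u
dw = np.gradient(w_noisy, dt)
A = np.column_stack([dw, w_noisy])
(J_hat, b_hat), *_ = np.linalg.lstsq(A, u, rcond=None)
```

With clean, well-excited signals the normal equations recover both parameters closely; in practice numerical differentiation of noisy velocity is the weak point, which is one reason richer identification models are used on real machines.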
Turri, Fabio; Yanagihara, Jurandir Itizo
2011-06-01
A two-dimensional numerical simulator is developed to predict the nonlinear, convective-reactive, oxygen mass exchange in a cross-flow hollow fiber blood oxygenator. The simulator also calculates the carbon dioxide mass exchange, as hemoglobin affinity to oxygen is affected by the local pH value, which depends mostly on the local carbon dioxide content in blood. Blood pH calculation inside the oxygenator is made by the simultaneous solution of an equation that takes into account the blood buffering capacity and the classical Henderson-Hasselbalch equation. The modeling of the mass transfer conductance in the blood comprises a global factor, which is a function of the Reynolds number, and a local factor, which takes into account the amount of oxygen reacted to hemoglobin. The simulator is calibrated against experimental data for an in-line fiber bundle. The results are: (i) the calibration process allows the precise determination of the mass transfer conductance for both oxygen and carbon dioxide; (ii) very alkaline pH values occur in the blood path at the gas inlet side of the fiber bundle; (iii) the parametric analysis of the effect of the blood base excess (BE) shows that V̇(CO₂) is similar in the case of blood metabolic alkalosis, metabolic acidosis, or normal BE, for a similar blood inlet P(CO₂), although the condition of metabolic alkalosis is the worst case, as the pH in the vicinity of the gas inlet is the most alkaline; (iv) the parametric analysis of the effect of the gas flow to blood flow ratio (QG/QB) shows that the variation of V̇(CO₂) with the gas flow is almost linear up to QG/QB = 2.0. V̇(O₂) is not affected by the gas flow, as it was observed that by increasing the gas flow up to eight times, V̇(O₂) grows only 1%. The mass exchange of carbon dioxide uses the full length of the hollow fiber only if QG/QB > 2.0, as it was observed that only in this condition does the local variation of pH and blood P(CO₂) comprise the whole
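The classical Henderson-Hasselbalch relation used for the blood pH calculation above is simple enough to state as code. The sketch below uses the standard textbook constants (pKa = 6.1, CO₂ solubility 0.03 mmol/(L·mmHg)) and deliberately omits the buffering-capacity equation the authors couple it with.

```python
import math

def blood_ph(hco3_mmol_l, pco2_mmhg):
    """Classical Henderson-Hasselbalch equation for plasma:
    pH = pKa + log10([HCO3-] / (s * pCO2)), pKa = 6.1,
    s = 0.03 mmol/(L*mmHg) CO2 solubility."""
    return 6.1 + math.log10(hco3_mmol_l / (0.03 * pco2_mmhg))

ph_normal = blood_ph(24.0, 40.0)     # typical arterial values, pH near 7.40
ph_low_co2 = blood_ph(24.0, 20.0)    # lower local pCO2 -> more alkaline pH
```

The second call illustrates the effect reported in result (ii): where CO₂ has been stripped near the gas inlet, the local pCO₂ falls and the computed pH becomes markedly alkaline.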
M. Dostál
2005-01-01
Full Text Available The importance of fuel reliability is growing due to the deregulated electricity market and the demands on operability and availability to the electricity grid of nuclear units. Under these conditions of fuel exploitation, the problems of PCMI (Pellet-Cladding Mechanical Interaction) are very important from the point of view of fuel rod integrity and reliability. Severe loading, both thermophysical and mechanical, translates into a greater probability of cladding failure, especially during power maneuvering. We have to be able to make realistic predictions of safety margins, which is very difficult using computer simulation methods. NRI (Nuclear Research Institute) has recently been engaged in developing 2D and 3D FEM (Finite Element Method) based models dealing with this problem. The latest effort in this field has been to validate 2D r-z models developed in the COSMOS/M system against calculations using the FEMAXI-V code. This paper presents a preliminary comparison between classical FEM-based integral code calculations and new models that are still under development. The problem has not been definitely solved. The presented data is of a preliminary nature, and several difficult problems remain to be solved.
Numerical computation of the linear stability of the diffusion model for crystal growth simulation
Yang, C.; Sorensen, D.C. [Rice Univ., Houston, TX (United States); Meiron, D.I.; Wedeman, B. [California Institute of Technology, Pasadena, CA (United States)
1996-12-31
We consider a computational scheme for determining the linear stability of a diffusion model arising from the simulation of crystal growth. The process of a needle crystal solidifying into some undercooled liquid can be described by the dual diffusion equations with appropriate initial and boundary conditions. Here U_l and U_s denote the temperature of the liquid and solid, respectively, and α represents the thermal diffusivity. At the solid-liquid interface, the motion of the interface denoted by r and the temperature field are related by the conservation relation, where n is the unit outward-pointing normal to the interface. A basic stationary solution to this free boundary problem can be obtained by writing the equations of motion in a moving frame and transforming the problem to parabolic coordinates. This is known as the Ivantsov parabola solution. Linear stability theory applied to this stationary solution gives rise to an eigenvalue problem of the form.
Jiang Lei
2015-01-01
Full Text Available Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction multiple thread) characteristic of the GPU matches the parallelism of the LBM well, which leads to the high efficiency of the GPU on the LBM solver. With the present GPU settings (6 Nvidia Tesla K20M), the present DNS simulation can be completed in several hours. A grid system of 1.5 × 10^8 cells is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set as 3.3. The jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures, namely the counter-rotating vortex pair (CRVP), shear-layer vortices and horseshoe vortices, are presented and analyzed based on velocity fields and vorticity distributions. Turbulent statistical quantities of Reynolds stress are also displayed. Coherent structures are revealed in a very fine resolution based on the second invariant of the velocity gradients.
Nikesh S. Dattani
2012-03-01
Full Text Available One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, that can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited, therefore one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as little as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.
Miroslav Kališnik
2011-05-01
Full Text Available In the introduction the evolution of methods for numerical density estimation of particles is presented briefly. Three pairs of methods have been analysed and compared: (1) classical methods for particle counting in thin and thick sections, (2) original and modified differential counting methods and (3) physical and optical disector methods. Metric characteristics such as accuracy, efficiency, robustness, and feasibility of the methods have been estimated and compared. Logical, geometrical and mathematical analysis as well as computer simulations have been applied. In the computer simulations, a model of randomly distributed equal spheres with maximal contrast against the surroundings has been used. According to our computer simulation all methods give accurate results provided that the sample is representative and sufficiently large. However, there are differences in their efficiency, robustness and feasibility. Efficiency and robustness increase with increasing slice thickness in all three pairs of methods. Robustness is superior in both differential and both disector methods compared to both classical methods. Feasibility can be judged according to the additional equipment as well as to the histotechnical and counting procedures necessary for performing the individual counting methods. However, it is evident that not all practical problems can efficiently be solved with models.
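The physical disector on the article's own simulation model (randomly distributed equal spheres) can be sketched directly. The radius, density and disector height below are hypothetical; the essential constraint is that the disector height must be smaller than the particle diameter, so that no particle can lie wholly between the two planes and be missed.

```python
import numpy as np

rng = np.random.default_rng(7)

# Model from the article: equal spheres randomly placed in a unit volume
r = 0.02                      # sphere radius (hypothetical units)
n_true = 20000                # spheres in the unit box -> true N_V = 20000
centers = rng.uniform(0.0, 1.0, size=(n_true, 3))

# Physical disector: a reference plane z2 and a look-up plane z1,
# separated by h < 2r so no sphere can fit wholly between the planes.
h = 0.03
z1 = 0.40
z2 = z1 + h
hits = lambda z: np.abs(centers[:, 2] - z) < r   # sphere cut by plane z
Q_minus = int(np.sum(hits(z2) & ~hits(z1)))      # profiles in z2 only

# Unbiased disector estimator: N_V = Q- / (sampling area * disector height)
Nv_est = Q_minus / (1.0 * 1.0 * h)
```

Counting only the profiles present in the reference plane and absent from the look-up plane samples each sphere with probability proportional to h, independently of its size, which is what makes the estimator unbiased.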
Study on Optimization Design of the Reliability of Computer Networks
刘亮; 白金牛; 邢俊凤
2013-01-01
With today's rapid socioeconomic development, computers are busily at work in almost every industry, yet we often hear of major economic losses caused by leaks of computer information. Strengthening the optimization design of computer network reliability therefore matters not only for the development and growth of the computer industry, but also for the security of the national economy. This article presents the relevant knowledge of computer network reliability optimization design, in the hope of being helpful to the industry.
Fukushima, Toshio
2017-06-01
Reviewed are recently developed methods of the numerical integration of the gravitational field of general two- or three-dimensional bodies with arbitrary shape and mass density distribution: (i) an axisymmetric infinitely-thin disc (Fukushima 2016a, MNRAS, 456, 3702), (ii) a general infinitely-thin plate (Fukushima 2016b, MNRAS, 459, 3825), (iii) a plane-symmetric and axisymmetric ring-like object (Fukushima 2016c, AJ, 152, 35), (iv) an axisymmetric thick disc (Fukushima 2016d, MNRAS, 462, 2138), and (v) a general three-dimensional body (Fukushima 2016e, MNRAS, 463, 1500). The key techniques employed are (a) the split quadrature method using the double exponential rule (Takahashi and Mori, 1973, Numer. Math., 21, 206), (b) the precise and fast computation of complete elliptic integrals (Fukushima 2015, J. Comp. Appl. Math., 282, 71), (c) Ridder's algorithm of numerical differentiation (Ridder 1982, Adv. Eng. Softw., 4, 75), (d) the recursive computation of the zonal toroidal harmonics, and (e) the integration variable transformation to the local spherical polar coordinates. These devices successfully regularize the Newton kernel in the integrands so as to provide accurate integral values. For example, the general 3D potential is regularly integrated as Φ(\vec{x}) = -G \int_0^∞ ( \int_{-1}^1 ( \int_0^{2π} ρ(\vec{x}+\vec{q}) dψ ) dγ ) q dq, where \vec{q} = q(√(1-γ²) cos ψ, √(1-γ²) sin ψ, γ) is the relative position vector referred to \vec{x}, the position vector at which the potential is evaluated. As a result, the new methods can compute the potential and acceleration vector very accurately. In fact, the axisymmetric integration reproduces the Miyamoto-Nagai potential with 14 correct digits. The developed methods are applied to the gravitational field study of galaxies and protoplanetary discs. Among them, the investigation on the rotation curve of M33 supports a disc-like structure of the dark matter with a double-power-law surface
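The first key technique listed above, the double exponential rule of Takahashi and Mori, can be illustrated in a few lines. This is not the author's implementation: the node count and step size below are illustrative, and the example integrand is a generic one with endpoint singularities rather than a gravitational kernel.

```python
import numpy as np

def tanh_sinh(f, n=30, h=0.1):
    """Double-exponential (tanh-sinh) rule for the integral of f over [-1, 1].
    The substitution x = tanh((pi/2) sinh t) pushes endpoint singularities
    out to t = +-infinity, where the weights decay double-exponentially."""
    t = np.arange(-n, n + 1) * h
    u = 0.5 * np.pi * np.sinh(t)
    x = np.tanh(u)
    w = h * 0.5 * np.pi * np.cosh(t) / np.cosh(u)**2
    return float(np.sum(w * f(x)))

# Integrand with inverse-square-root endpoint singularities; exact value is pi
val = tanh_sinh(lambda x: 1.0 / np.sqrt(1.0 - x**2))
```

Despite the singular endpoints, a few dozen nodes already give several correct digits, which is why the rule is effective for regularizing near-singular kernels.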
Salomons, E.M.
1999-01-01
Downwind sound propagation over a noise screen is investigated by numerical computations and scale model experiments in a wind tunnel. For the computations, the parabolic equation method is used, with a range-dependent sound-speed profile based on wind-speed profiles measured in the wind tunnel and
An efficient algorithm for numerical computations of continuous densities of states
Langfeld, K.; Lucini, B.; Pellegrini, R.; Rago, A.
2016-06-01
In Wang-Landau type algorithms, Monte-Carlo updates are performed with respect to the density of states, which is iteratively refined during simulations. The partition function and thermodynamic observables are then obtained by standard integration. In this work, our recently introduced method in this class (the LLR approach) is analysed and further developed. Our approach is a histogram-free method particularly suited for systems with continuous degrees of freedom giving rise to a continuum density of states, as is commonly found in lattice gauge theories and in some statistical mechanics systems. We show that the method possesses an exponential error suppression that allows us to estimate the density of states over several orders of magnitude with nearly constant relative precision. We explain how ergodicity issues can be avoided and how expectation values of arbitrary observables can be obtained within this framework. We then demonstrate the method using compact U(1) lattice gauge theory as a showcase. A thorough study of the algorithm parameter dependence of the results is performed and compared with the analytically expected behaviour. We obtain high precision values for the critical coupling for the phase transition and for the peak value of the specific heat for lattice sizes ranging from 8^4 to 20^4. Our results perfectly agree with the reference values reported in the literature, which cover lattice sizes up to 18^4. Robust results for the 20^4 volume are obtained for the first time. This latter investigation, which, due to strong metastabilities developed at the pseudo-critical coupling of the system, so far has been out of reach even on supercomputers with importance sampling approaches, has been performed to high accuracy with modest computational resources. This shows the potential of the method for studies of first order phase transitions. Other situations where the method is expected to be superior to importance sampling techniques are pointed out.
Algorithm for Computing Reliability Evolution of Internetware
张靖; 雷航
2014-01-01
To calculate and evaluate the reliability change of an Internetware system in the Internet environment, the reliability transition matrix of the Internetware system was constructed from the Internetware system architecture, taking into account the accumulation effects of Internetware reliability change. The reliability evolution calculation model and the evolution trend model were established. On the basis of discrete-time Markov chain (DTMC) theory and the convolution principle, the reliability evolution calculation model was proposed and the corresponding algorithm was designed. Through fine-grained quantitative analysis of reliability, reliability change, and reliability change accumulation, the computational complexity of the designed algorithm is reduced to O(N log2 N). Simulation results in MATLAB show that the proposed model and algorithm can effectively implement the evolution analysis of Internetware reliability, and track the reliability evolution trend about 7% better than the traditional non-homogeneous Poisson process (NHPP) model.
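The DTMC part of the abstract can be illustrated with a classic architecture-based reliability sketch in the style of Cheung's model (a simplification, not the paper's convolution-based evolution algorithm; all names and numbers here are illustrative):

```python
def system_reliability(R, P):
    """Architecture-based reliability in the style of Cheung's DTMC model.
    R[i]   : probability that module i executes correctly.
    P[i][j]: control-transfer probability from module i to module j,
             with j == len(R) meaning successful termination.
    Returns the probability of reaching the absorbing 'success' state
    from module 0, computed by fixed-point (value) iteration."""
    n = len(R)
    v = [0.0] * n  # v[i] = P(success | execution currently enters module i)
    for _ in range(1000):
        v = [R[i] * sum(P[i][j] * (1.0 if j == n else v[j])
                        for j in range(n + 1)) for i in range(n)]
    return v[0]

# Two modules in series: module 0 works w.p. 0.9 and then always calls
# module 1, which works w.p. 0.95 and then terminates successfully.
r = system_reliability([0.9, 0.95], [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
# r = 0.9 * 0.95 = 0.855
```

The fixed-point iteration converges because the sub-stochastic transition matrix of the transient states has spectral radius below one.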
Nourgaliev R.; Knoll D.; Mousseau V.; Berry R.
2007-04-01
The state-of-the-art for Direct Numerical Simulation (DNS) of boiling multiphase flows is reviewed, focusing on the potential of available computational techniques and the level of success achieved to date in applying them to several basic flow regimes (film, pool-nucleate and wall-nucleate boiling -- FB, PNB and WNB, respectively). Then, we discuss the multiphysics and multiscale nature of practical boiling flows in LWR reactors, requiring high-fidelity treatment of interfacial dynamics, phase-change, hydrodynamics, compressibility, heat transfer, and non-equilibrium thermodynamics and chemistry of liquid/vapor and fluid/solid-wall interfaces. Finally, we outline the framework for the Fervent code, being developed at INL for DNS of reactor-relevant boiling multiphase flows, with the purpose of gaining insight into the physics of multiphase flow regimes, and generating a basis for effective-field modeling in terms of its formulation and closure laws.
LI Shengmao; LI Yan; FENG Fang; WANG Lijun; CHI Yuan
2010-01-01
To investigate ice accretion on the blade of a straight-bladed vertical axis wind turbine (SB-VAWT), wind tunnel tests were carried out on a blade with a NACA0015 airfoil using a small, simple icing wind tunnel. Tests were carried out at some typical attack angles under different wind speeds and flow discharges of a water spray with wind. The icing shape and area on the blade surface were recorded and measured. Numerical computation was then carried out to calculate the lift and drag coefficients of the blade before and after ice accretion according to the experimental results, and the effect of icing on the aerodynamic characteristics of the blade was discussed.
Damyanova, M.; Sabchevski, S.; Zhelyazkov, I.; Vasileva, E.; Balabanova, E.; Dankov, P.; Malinov, P.
2016-03-01
Gyrotrons are the most powerful sources of coherent CW (continuous wave) radiation in the frequency range situated between the long-wavelength edge of the infrared light (far-infrared region) and the microwaves, i.e., in the region of the electromagnetic spectrum which is usually called the THz-gap (or T-gap), since the output power of other devices (e.g., solid-state oscillators) operating in this interval is several orders of magnitude lower. In recent years, the unique capabilities of sub-THz and THz gyrotrons have opened the road to many novel and prospective applications in various physical studies and advanced high-power terahertz technologies. In this paper, we present the current status and functionality of the problem-oriented software packages (most notably GYROSIM and GYREOSS) used for numerical studies, computer-aided design (CAD) and optimization of gyrotrons for diverse applications. They consist of a hierarchy of codes specialized for modelling and simulation of different subsystems of the gyrotrons (EOS, resonant cavity, etc.) and are based on adequate physical models, efficient numerical methods and algorithms.
Raoelison, R. N.; Sapanathan, T.; Padayodi, E.; Buiron, N.; Rachik, M.
2016-11-01
This paper investigates the complex interfacial kinematics and governing mechanisms under high speed impact conditions. A robust numerical modelling technique based on Eulerian simulations is used to explain the material response of the interface subjected to a high strain rate collision during magnetic pulse welding. The capability of this model is demonstrated by predicting the interfacial kinematics and revealing the governing mechanical behaviours. Numerical predictions of wave formation, with upward or downward jetting and complex interfacial mixing governed by wake and vortex instabilities, corroborate the experimental observations. Moreover, the prediction of material ejection during the simulation explains the experimentally observed particles deposited outside the welded region. The formation of internal cavities along the interface also closely resembles the confined heating that appears in the vicinity of the interface as a result of those wake and vortex instabilities. These results are key features of the simulation that also explain the potential mechanisms of defect formation at the interface. They indicate that the Eulerian computation not only has the advantage of predicting the governing mechanisms, but also offers a non-destructive approach to identify interfacial defects in an impact welded joint.
M. Kasemann
Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...
KRAMER, JESSICA M; LILJENQUIST, KENDRA; COSTER, WENDY J
2015-01-01
Aim This study aimed to explore the test–retest reliability of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test for autism spectrum disorders (PEDI-CAT [ASD]), the concurrent validity of this test with the Vineland Adaptive Behavior Scales (VABS-II), and parents’ perceptions of usability. Method A convenience sample of participants (n=39) was recruited nationally through disability organizations. Parents of young people aged 10 to 18 years (mean age 14y 10mo, SD 2y 8mo; 34 males, 5 females) who reported a diagnosis of autism were eligible to participate. Parents completed the VABS-II questionnaire once and the PEDI-CAT (ASD) twice (n=29) no more than 3 weeks apart (mean 12d) using computer-simulated administration. Parents also answered questions about the usability of these instruments. We examined score reliability using intraclass correlation coefficients (ICCs) and we explored the relationship between instruments using Spearman's rank correlation coefficients. Parent responses were grouped by common content; content categories were triangulated by an additional reviewer. Results ICCs indicate excellent reliability for all PEDI-CAT (ASD) domain scores (ICC≥0.86). PEDI-CAT (ASD) and VABS-II domain scores correlated as expected or higher than expected (0.57–0.81). Parents reported that the computer-based PEDI-CAT (ASD) was easy to use and included fewer irrelevant questions than the VABS-II instrument. Interpretation These findings suggest that the PEDI-CAT (ASD) is a reliable assessment that parents can easily use. The PEDI-CAT (ASD) operationalizes the International Classification of Function, Disability, and Health for Children and Youth constructs of ‘activity’ and ‘participation’, and this preliminary research suggests that the instrument's constructs are related to those of VABS-II. PMID:26104112
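As a reminder of how the reported ICCs are computed, here is a one-way random-effects ICC(1,1) in the Shrout and Fleiss formulation (a generic sketch; the study does not state which ICC form it used, and the scores below are made up):

```python
def icc_oneway(scores):
    """One-way random-effects intraclass correlation, ICC(1,1).
    scores[i][j] is rater (or administration) j's score for subject i.
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), from a one-way ANOVA."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    means = [sum(row) / k for row in scores]
    # Between-subjects and within-subject mean squares:
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - means[i]) ** 2
              for i, row in enumerate(scores) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect test-retest agreement gives ICC = 1.0:
perfect = icc_oneway([[1, 1], [2, 2], [3, 3]])
```

Values of 0.86 and above, as reported for the PEDI-CAT (ASD) domains, are conventionally read as excellent reliability.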
Reliability Validation Test for Hydraulic System of Numerical Control Machine
尹鹏程
2011-01-01
The functions, principles and main failure modes of the hydraulic system of a numerical control machine tool were introduced. A reliability model for this type of hydraulic system was established, and its reliability characteristic parameters were given. Based on the idea of characteristic parameter measurement, reliability test methods for the hydraulic system were put forward and an application example was given. This work provides a technical route for reliability validation testing of the hydraulic systems of numerical control machine tools.
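The reliability characteristic quantities mentioned above can be sketched with the common exponential series-system model (the failure rates below are made up for illustration; the paper's actual model and parameters are not reproduced here):

```python
import math

def series_reliability(rates, t):
    """Series system of exponentially distributed components (a common
    simplification for hydraulic subsystems such as pump, valves, cylinder).
    Returns (R(t), MTBF) with R(t) = exp(-sum(lambda_i) * t) and
    MTBF = 1 / sum(lambda_i)."""
    lam = sum(rates)
    return math.exp(-lam * t), 1.0 / lam

# Hypothetical pump, valve and cylinder failure rates (per hour):
R, mtbf = series_reliability([1e-4, 5e-5, 5e-5], t=1000.0)
# R = exp(-0.2) ~ 0.819, MTBF = 5000 h
```

In a series structure every component must survive, so failure rates simply add; this is why the characteristic quantities of the subsystem follow directly from those of its parts.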
Shea, J. D.; Kosmas, P.; Van Veen, B. D.; Hagness, S. C.
2010-07-01
The detection of early-stage tumors in the breast by microwave imaging is challenged by both the moderate endogenous dielectric contrast between healthy and malignant glandular tissues and the spatial resolution available from illumination at microwave frequencies. The high endogenous dielectric contrast between adipose and fibroglandular tissue structures increases the difficulty of tumor detection due to the high dynamic range of the contrast function to be imaged and the low level of signal scattered from a tumor relative to the clutter scattered by normal tissue structures. Microwave inverse scattering techniques, used to estimate the complete spatial profile of the dielectric properties within the breast, have the potential to reconstruct both normal and cancerous tissue structures. However, the ill-posedness of the associated inverse problem often limits the frequency of microwave illumination to the UHF band within which early-stage cancers have sub-wavelength dimensions. In this computational study, we examine the reconstruction of small, compact tumors in three-dimensional numerical breast phantoms by a multiple-frequency inverse scattering solution. Computer models are also employed to investigate the use of exogenous contrast agents for enhancing tumor detection. Simulated array measurements are acquired before and after the introduction of the assumed contrast effects for two specific agents currently under consideration for breast imaging: microbubbles and carbon nanotubes. Differential images of the applied contrast demonstrate the potential of the approach for detecting the preferential uptake of contrast agents by malignant tissues.
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
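The argument for exponential-fitted integrators can be demonstrated on the scalar stiff decay y' = -λy, for which the fitted step is exact at any step size while explicit Euler diverges once λh > 2 (a toy sketch, not the EPISODE/LSODE machinery):

```python
import math

def explicit_euler(lam, h, y0, steps):
    """Classical explicit Euler for y' = -lam*y; unstable when lam*h > 2."""
    y = y0
    for _ in range(steps):
        y += h * (-lam * y)
    return y

def exp_fitted_euler(lam, h, y0, steps):
    """Exponentially fitted step for y' = -lam*y: exact for this linear
    problem at any step size, illustrating the abstract's argument."""
    y = y0
    for _ in range(steps):
        y *= math.exp(-lam * h)
    return y

# Stiff decay with a large step (lam*h = 5): Euler blows up, while the
# fitted method tracks exp(-lam*t) to machine precision.
lam, h, steps = 100.0, 0.05, 20
exact = math.exp(-lam * h * steps)
```

For nonlinear kinetics the fitted method is no longer exact, which is where the step-size-control logic the report calls for becomes essential.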
Guevar, Julien; Penderis, Jacques; Faller, Kiterie; Yeamans, Carmen; Stalin, Catherine; Gutierrez-Quintana, Rodrigo
2014-01-01
The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.
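The underlying geometric computation is just the angle between two digitized end-plate lines; a hedged sketch (hypothetical function names, not the authors' measurement software):

```python
import math

def cobb_angle(endplate_a, endplate_b):
    """Cobb angle in degrees between two vertebral end plates, each given
    as two landmark points (x, y) digitized on a radiograph."""
    def direction(p, q):
        return (q[0] - p[0], q[1] - p[1])
    ux, uy = direction(*endplate_a)
    vx, vy = direction(*endplate_b)
    # Acute angle between the two end-plate lines:
    cosang = abs(ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(min(1.0, cosang)))

# Two end plates tilted 20 degrees toward each other give a 40 degree Cobb angle:
a = ((0.0, 0.0), (math.cos(math.radians(20)), math.sin(math.radians(20))))
b = ((0.0, 0.0), (math.cos(math.radians(-20)), math.sin(math.radians(-20))))
angle = cobb_angle(a, b)
```

Landmark placement, not the arithmetic, is what drives the intra- and inter-observer variability the study quantifies with ICCs.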
Numerical modelling of elastic space tethers
Kristiansen, Kristian Uldall; Palmer, P. L.; Roberts, R. M.
2012-01-01
In this paper the importance of the ill-posedness of the classical, non-dissipative massive tether model for an orbiting tether system is studied numerically. The computations document that, via the regularisation of bending resistance, a more reliable numerical integrator can be produced. Furthermore, the numerical experiments on an orbiting tether system show that bending may introduce significant forces in some regions of phase space. Finally, numerical evidence for the existence of an almost invariant slow manifold of the singularly perturbed, regularised, non-dissipative massive tether model is provided.
Challenge in Numerical Software for Microcomputers
Cody, W J
1977-09-02
Microcomputers are now capable of serious numerical computation using programmed floating-point arithmetic and Basic compilers. Unless numerical software designers for these machines exploit the experience gained in providing software for larger machines, history will repeat itself with an initial spread of treacherous software. This paper discusses good software, especially for the elementary functions, in terms of reliability and robustness. The emphasis is on insight rather than detailed algorithms, to show why certain things are important and how they may be achieved.
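Cody's reliability/robustness distinction is easy to illustrate with the classic hypot example: the textbook formula overflows for large arguments, while a scaled evaluation does not (a sketch; production code should use the library routine, e.g. Python's math.hypot):

```python
import math

def naive_hypot(x, y):
    """Textbook formula: the intermediate x*x overflows to infinity for
    arguments near the floating-point limit, even though the true result
    is representable."""
    return math.sqrt(x * x + y * y)

def robust_hypot(x, y):
    """Scaled evaluation in the spirit of Cody's robustness criteria:
    divide by the larger magnitude so no intermediate overflows."""
    x, y = abs(x), abs(y)
    if x < y:
        x, y = y, x
    if x == 0.0:
        return 0.0
    r = y / x
    return x * math.sqrt(1.0 + r * r)

big = 1e200
# naive_hypot(big, big) overflows to inf; robust_hypot returns sqrt(2)*1e200.
```

The two routines compute the same mathematical function; only the robust one behaves sensibly over the whole representable range, which is exactly the quality Cody argues elementary-function software must have.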
Magarelli, Nicola; Sergio, Pietro; Bonomo, Lorenzo [Catholic University, Department of Radiology, Rome (Italy); Milano, Giuseppe; Santagada, Domenico A.; Fabbriciani, Carlo [Catholic University, Department of Orthopaedics, Rome (Italy)
2009-11-15
To evaluate the intra-observer and interobserver reliability of the 'Pico' computed tomography (CT) method of quantifying glenoid bone defects in anterior glenohumeral instability. Forty patients with unilateral anterior shoulder instability underwent CT scanning of both shoulders. Images were processed in multiplanar reconstruction (MPR) to provide an en face view of the glenoid. In accordance with the Pico method, a circle was drawn on the inferior part of the healthy glenoid and transferred to the injured glenoid. The surface of the missing part of the circle was measured, and the size of the glenoid bone defect was expressed as a percentage of the entire circle. Each measurement was performed three times by one observer and once by a second observer. Intra-observer and interobserver reliability were analyzed using intraclass correlation coefficients (ICCs), 95% confidence intervals (CIs), and standard errors of measurement (SEMs). Analysis of intra-observer reliability showed ICC values of 0.94 (95% CI = 0.89-0.96; SEM = 1.1%) for single measurement, and 0.98 (95% CI = 0.96-0.99; SEM = 1.0%) for average measurement. Analysis of interobserver reliability showed ICC values of 0.90 (95% CI = 0.82-0.95; SEM = 1.0%) for single measurement, and 0.95 (95% CI = 0.90-0.97; SEM = 1.0%) for average measurement. Measurement of glenoid bone defect in anterior shoulder instability can be assessed with the Pico method, based on en face images of the glenoid processed in MPR, with a very good intra-observer and interobserver reliability. (orig.)
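Idealizing the missing part of the Pico circle as a circular segment cut off by a straight chord gives a closed-form defect percentage (an illustrative simplification; the actual method measures the missing area directly on the en face MPR reconstruction):

```python
import math

def pico_defect_percent(radius, chord_dist):
    """Percentage bone loss for a Pico-style measurement, modelling the
    defect as a circular segment cut off by a straight edge at distance
    chord_dist from the centre of the reference circle.
    Segment area = r^2*acos(d/r) - d*sqrt(r^2 - d^2)."""
    r, d = radius, chord_dist
    segment = r * r * math.acos(d / r) - d * math.sqrt(r * r - d * d)
    return 100.0 * segment / (math.pi * r * r)

# A defect edge through the circle centre removes exactly half the circle:
half = pico_defect_percent(12.0, 0.0)
```

A quick sanity check: an edge at the circle's rim (d = r) gives a 0% defect, and an edge through the centre gives 50%.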
Barkatali, Bilal M; Imalingat, Herbert; Childs, James; Baumann, Andreas; Paton, Robin
2016-11-01
Following surgical reduction of an irreducible hip in developmental dysplasia of the hip, imaging is required to ascertain successful reduction. Recent studies have compared MRI versus computed tomography (CT) in terms of cost, time, sensitivity and specificity. This is the first study to compare intraobserver and interobserver reliability for both modalities. Nineteen CT scans of 38 hips in 10 patients and nine MRI scans of 18 hips in six patients were reviewed on two separate occasions by three clinicians. Image clarity, confidence of diagnosis, time taken to perform the scan as well as radiation dose for CT were recorded. Intraobserver and interobserver reliability κ values were calculated. There were 14 female patients and one male patient. The mean age at the time of the scan was 12 months (range 3-25 months). Intraobserver reliability was greater than 0.8 (both CT and MRI). Interobserver reliability was greater than 0.8 (both CT and MRI). Image clarity was higher for CT for two out of the three clinicians (9.47 vs. 6.33 P0.05). The mean radiation dose for CT was 91.75 DLP (dose length product, mGy×cm) (95% confidence interval±26.95). Our results show that MRI is equal to CT as an imaging modality in the assessment of postreduction hips in developmental dysplasia of the hip. Intraobserver and interobserver reliability was excellent for both. The image clarity was higher for CT, but this method of imaging carries a significant risk of radiation exposure. We recommend that MRI should supersede CT as an imaging modality for this clinical situation.
Cohen, Y C; Hassin-Baer, S; Olmer, L; Barishev, R; Goldhammer, Y; Freedman, L; Mozes, B
2000-10-01
Kurtzke's EDSS remains the most widely-used measure for clinical evaluation of MS patients. However, several studies have demonstrated the limited reliability of this tool. We introduce a computerized instrument, MS-CANE (Multiple Sclerosis Computer-Aided Neurological Examination), for clinical evaluation and follow up of patients with multiple sclerosis (MS) and to compare its reliability to that of conventional Expanded Disability Status Scale (EDSS) assessment. We developed a computerized interactive instrument, based on the following principles: structured gathering of neurological findings, reduction of compound notions to their basic components, use of precise definitions, priority setting and automated calculations of EDSS and functional systems scores. An expert panel examined the consistency of MS-CANE with Kurtzke's specifications. To determine the effect of MS-CANE on the reliability of EDSS assessment, 56 MS patients underwent paired conventional EDSS and MS-CANE-based evaluations. The inter-observer agreement in both methods was determined and compared using the kappa statistic. The expert panel judged the tool to be compatible with the basic concepts of Kurtzke's EDSS. The use of MS-CANE increased the reliability of EDSS assessment: Kappa statistic was found to be 0.42 (i.e. moderate agreement) for conventional EDSS assessment versus 0.69 (i.e. substantial agreement) for MS-CANE (P=0.002). We conclude that the use of this tool may contribute towards a standardized and reliable assessment of EDSS. Within clinical trials, this could increase the power to detect effects, thus reducing trial duration and the cohort size required. Multiple Sclerosis (2000) 6 355 - 361
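The kappa statistic used to compare the two assessment methods can be computed directly from a two-rater confusion matrix (a generic sketch with made-up counts):

```python
def cohens_kappa(confusion):
    """Cohen's kappa for two raters from a square confusion matrix,
    where confusion[i][j] counts cases rater A scored i and rater B scored j.
    kappa = (p_o - p_e) / (1 - p_e), with p_o the observed agreement and
    p_e the agreement expected by chance from the marginals."""
    m = len(confusion)
    n = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(m)) / n
    row = [sum(r) for r in confusion]
    col = [sum(confusion[i][j] for i in range(m)) for j in range(m)]
    p_e = sum(row[i] * col[i] for i in range(m)) / (n * n)
    return (p_o - p_e) / (1.0 - p_e)

# Perfect agreement gives kappa = 1; marginal-only agreement gives ~0.
k_perfect = cohens_kappa([[20, 0], [0, 20]])
```

On this scale the study's 0.42 (conventional EDSS) versus 0.69 (MS-CANE) is the jump from "moderate" to "substantial" agreement in the usual Landis-Koch reading.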
Sergi, Pier Nicola, E-mail: p.sergi@sssup.it [Translational Neural Engineering Laboratory, The Biorobotics Institute, Scuola Superiore Sant' Anna, Viale Rinaldo Piaggio 34, Pontedera, 56025 (Italy); Jensen, Winnie [Department of Health Science and Technology, Fredrik Bajers Vej 7, 9220 Aalborg (Denmark); Yoshida, Ken [Department of Biomedical Engineering, Indiana University - Purdue University Indianapolis, 723 W. Michigan St., SL220, Indianapolis, IN 46202 (United States)
2016-02-01
Tungsten is an elective material to produce slender and stiff microneedles able to enter soft tissues and minimize puncture wounds. In particular, tungsten microneedles are used to puncture peripheral nerves and insert neural interfaces, bridging the gap between the nervous system and robotic devices (e.g., hand prostheses). Unfortunately, microneedles fail during the puncture process and this failure is not dependent on the stiffness or fracture toughness of the constituent material. In addition, the microneedles' performance decreases during in vivo trials with respect to in vitro ones. This further effect is independent of internal biotic effects, while it seems to be related to external biotic causes. Since the exact synergy of phenomena decreasing the in vivo reliability is still not known, this work explored the connection between the in vitro and in vivo behavior of tungsten microneedles through the study of interactions between biotic and abiotic factors. A hybrid computational approach, simultaneously using theoretical relationships and in silico models of nerves, was implemented to model the change of reliability with varying microneedle diameter, and to predict in vivo performance by using in vitro reliability and local differences between the in vivo and in vitro mechanical response of nerves. - Highlights: • We provide phenomenological Finite Element (FE) models of peripheral nerves to study the interactions with W microneedles. • We provide a general interaction-based approach to model the reliability of slender microneedles. • We evaluate the reliability of W microneedles to puncture in vivo nerves. • We provide a novel synergistic hybrid approach (theory + simulations) involving interactions among biotic and abiotic factors. • We validate the hybrid approach by using experimental data from the literature.
Kim, Jungkwun; Yoon, Yong-Kyu; Allen, Mark G.
2016-03-01
This paper presents a computer-numerical-controlled ultraviolet light-emitting diode (CNC UV-LED) lithography scheme for three-dimensional (3D) microfabrication. The CNC lithography scheme utilizes sequential multi-angled UV light exposures along with a synchronized switchable UV light source to create arbitrary 3D light traces, which are transferred into the photosensitive resist. The system comprises a switchable, movable UV-LED array as a light source, a motorized tilt-rotational sample holder, and a computer-control unit. System operation is such that the tilt-rotational sample holder moves in a pre-programmed routine, and the UV-LED is illuminated only at desired positions of the sample holder during the desired time period, enabling the formation of complex 3D microstructures. This facilitates easy fabrication of complex 3D structures, which otherwise would have required multiple manual exposure steps as in the previous multidirectional 3D UV lithography approach. Since it is batch processed, processing time is far less than that of the 3D printing approach at the expense of some reduction in the degree of achievable 3D structure complexity. In order to produce uniform light intensity from the arrayed LED light source, the UV-LED array stage has been kept rotating during exposure. UV-LED 3D fabrication capability was demonstrated through a plurality of complex structures such as V-shaped micropillars, micropanels, a micro-‘hi’ structure, a micro-‘cat’s claw,’ a micro-‘horn,’ a micro-‘calla lily,’ a micro-‘cowboy’s hat,’ and a micro-‘table napkin’ array.
Jiao, Qingbin; Bayanheshig; Tan, Xin; Zhu, Jiwei
2014-03-01
Mass transfer coefficient is an important parameter in the process of mass transfer. It can reflect the degree of enhancement of the mass transfer process in liquid-solid reactions and in non-reactive systems such as dissolution and leaching, and further verify the issues by experiments in the reaction process. In the present paper, a new computational model quantitatively solving the ultrasonic enhancement of the mass transfer coefficient in liquid-solid reaction is established, and the mass transfer coefficient on a silicon surface with a transducer at frequencies of 40 kHz, 60 kHz, 80 kHz and 100 kHz has been numerically simulated. The simulation results indicate that the mass transfer coefficient increases with increasing ultrasound power; at an ultrasound power of 50 W the maximum value of the mass transfer coefficient is 1.467 × 10^-4 m/s at 60 kHz and the minimum is 1.310 × 10^-4 m/s at 80 kHz (the mass transfer coefficient is 2.384 × 10^-5 m/s without ultrasound). Extrinsic factors such as temperature, transducer diameter and the distance between reactor and ultrasound source also influence the mass transfer coefficient on the silicon surface. At the same ultrasonic power and frequency, the mass transfer coefficient increases with increasing temperature, with decreasing distance between the silicon and the central position, with decreasing transducer diameter, and with decreasing distance between reactor and ultrasound source. The simulation results indicate that the computational model can quantitatively solve the ultrasonic enhancement of the mass transfer coefficient.
Spoerl, Andreas
2008-06-05
Quantum computers are one of the next technological steps in modern computer science. Some of the relevant questions that arise when it comes to the implementation of quantum operations (as building blocks in a quantum algorithm) or the simulation of quantum systems are studied. Numerical results are gathered for a variety of systems, e.g. NMR systems, Josephson junctions and others. To study quantum operations (e.g. the quantum Fourier transform, swap operations or multiply-controlled NOT operations) on systems containing many qubits, a parallel C++ code was developed and optimised. In addition to performing high-quality operations, a closer look was given to the minimal times required to implement certain quantum operations. These times represent an interesting quantity for the experimenter as well as for the mathematician. The former tries to fight dissipative effects with fast implementations, while the latter draws conclusions in the form of analytical solutions. Dissipative effects can even be included in the optimisation, so that the resulting solutions are relaxation- and time-optimised. For systems containing 3 linearly coupled spin-1/2 qubits, analytical solutions are known for several problems, e.g. indirect Ising couplings and trilinear operations. A further study investigated whether there exists a sufficient set of criteria to identify systems whose dynamics are invertible under local operations. Finally, a full quantum algorithm to distinguish between two knots was implemented on a spin-1/2 system. All operations for this experiment were calculated analytically. The experimental results coincide with the theoretical expectations. (orig.)
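As a small illustration of the operations studied, the dense unitary of the quantum Fourier transform can be built and checked for unitarity directly; this is the textbook definition, not the parallel C++ code described above:

```python
import cmath

def qft_matrix(n_qubits):
    """Dense matrix of the quantum Fourier transform on n_qubits:
    the N x N unitary with entries w^(j*k)/sqrt(N), w = exp(2*pi*i/N)."""
    N = 2 ** n_qubits
    w = cmath.exp(2j * cmath.pi / N)
    return [[w ** (j * k) / N ** 0.5 for k in range(N)] for j in range(N)]

def is_unitary(U, tol=1e-9):
    """Check U * U^dagger == identity, entry by entry."""
    N = len(U)
    for j in range(N):
        for k in range(N):
            s = sum(U[j][m] * U[k][m].conjugate() for m in range(N))
            if abs(s - (1 if j == k else 0)) > tol:
                return False
    return True

print(is_unitary(qft_matrix(3)))  # True for the 3-qubit (8x8) QFT
```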
Hiroaki Kijima
2014-01-01
Full Text Available The reliability of proximal femoral fracture classifications using 3DCT was evaluated, and a comprehensive “area classification” was developed. Eleven orthopedists (5–26 years since graduation) classified 27 proximal femoral fractures at one hospital from June 2013 to July 2014 based on preoperative images. Various classifications were compared to “area classification.” In “area classification,” the proximal femur is divided into 4 areas with 3 boundary lines: Line-1 is the center of the neck, Line-2 is the border between the neck and the trochanteric zone, and Line-3 links the inferior borders of the greater and lesser trochanters. A fracture only in the first area was classified as a pure first area fracture; one spanning the first and second areas was classified as a 1-2 type fracture. In the same way, fractures were classified as pure 2, 3-4, 1-2-3, and so on. “Area classification” reliability was highest when orthopedists with varying experience classified proximal femoral fractures using 3DCT. Other classifications cannot classify proximal femoral fractures that exceed each classification’s particular zones. However, fractures that exceed the target zones are “dangerous” fractures. “Area classification” can classify such fractures, and it is therefore useful for selecting osteosynthesis methods.
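The labelling rule of “area classification” is easy to make precise in code; a small sketch (the function name and API are assumptions, the rule is as stated above):

```python
def area_classification(areas):
    """Label a proximal femoral fracture by the set of areas (1-4) it
    occupies, per the 'area classification' described above: a fracture
    only in area 1 is 'pure 1', one spanning areas 1 and 2 is '1-2', etc."""
    areas = sorted(set(areas))
    if not all(a in (1, 2, 3, 4) for a in areas):
        raise ValueError("areas must be within 1-4")
    if len(areas) == 1:
        return f"pure {areas[0]}"
    return "-".join(str(a) for a in areas)

print(area_classification([1]))     # 'pure 1'
print(area_classification([2, 1]))  # '1-2'
print(area_classification([3, 4]))  # '3-4'
```

Because any subset of areas maps to a label, fractures that exceed a conventional classification's zones still receive a well-defined name, which is the point made above.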
Arensman, Remco M.; Pisters, Martijn F.; de Man-van Ginkel, Janneke M.; Schuurmans, Marieke J.; Jette, Alan M.; de Bie, Rob A.
2016-01-01
Background. Adequate and user-friendly instruments for assessing physical function and disability in older adults are vital for estimating and predicting health care needs in clinical practice. The Late-Life Function and Disability Instrument Computer Adaptive Test (LLFDICAT) is a promising instrume
Yang Zeqing
2013-07-01
Full Text Available In order to design the feed system of a high-speed Computer Numerical Control (CNC) lathe, the static and dynamic characteristics of a feed system driven by a linear motor were analyzed. The slide board was taken as the main moving part of the feed system, and the guide rail as the main supporting component of the linear motor feed system. The static stiffness of the mechanical structure was investigated through a statics analysis of the slide board. The simulation results show that the maximum deformation occurs in the middle of the slide board, where the linear motor is placed. A control model of the linear motor feed system was established based on an analysis of the control principle of the high-speed linear feed system; the transfer function of the linear motor feed system was derived, and the factors affecting servo dynamic stiffness were analyzed. The effects of the servo control parameters and the actuating mechanism parameters of the feed system on the servo dynamic stiffness were analyzed using MATLAB. The simulation results show that the position loop proportional gain, the speed loop proportional gain and the speed loop integral response time have the greatest influence on servo dynamic stiffness. With these parameters tuned, the displacement response to a step cutting disturbance force is reduced, the servo dynamic stiffness is increased, the number of system oscillations is reduced, and the system tends to be stable.
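The link between loop gains and dynamic stiffness can be demonstrated with a much-simplified single-mass stand-in for the cascaded position/speed loops analyzed in MATLAB above; all gains, masses and the cutting-force step are illustrative assumptions:

```python
def peak_displacement(kp, kd, ki, f_step=100.0, m=10.0, dt=1e-4, t_end=0.5):
    """Explicit-Euler simulation of a slide of mass m (kg) hit by a step
    cutting force f_step (N) and held by an actuator force
    u = -kp*x - kd*v - ki*integral(x).  Returns the peak displacement (m):
    a smaller peak means a higher servo dynamic stiffness."""
    x = v = integ = 0.0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        integ += x * dt
        u = -kp * x - kd * v - ki * integ  # restoring actuator force
        v += (f_step + u) / m * dt         # disturbance + control
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Raising the loop gains tenfold shrinks the displacement response to the
# same disturbance, i.e. raises the dynamic stiffness.
soft = peak_displacement(kp=1e4, kd=2e3, ki=1e5)
stiff = peak_displacement(kp=1e5, kd=2e4, ki=1e6)
print(stiff < soft)  # True
```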
Lautenschläger, Janin; Lautenschläger, Christian; Tadic, Vedrana; Süße, Herbert; Ortmann, Wolfgang; Denzler, Joachim; Stallmach, Andreas; Witte, Otto W; Grosskreutz, Julian
2015-11-01
The function of intact organelles, whether mitochondria, Golgi apparatus or endoplasmic reticulum (ER), relies on their proper morphological organization. It is recognized that disturbances of organelle morphology are early events in disease manifestation, but reliable and quantitative detection of organelle morphology is difficult and time-consuming. Here we present a novel computer vision algorithm for the assessment of organelle morphology in whole cell 3D images. The algorithm allows the numerical and quantitative description of organelle structures, including total number and length of segments, cell and nucleus area/volume as well as novel texture parameters like lacunarity and fractal dimension. Applying the algorithm we performed a pilot study in cultured motor neurons from transgenic G93A hSOD1 mice, a model of human familial amyotrophic lateral sclerosis. In the presence of the mutated SOD1 and upon excitotoxic treatment with kainate we demonstrate a clear fragmentation of the mitochondrial network, with an increase in the number of mitochondrial segments and a reduction in the length of mitochondria. Histogram analyses show a reduced number of tubular mitochondria and an increased number of small mitochondrial segments. The computer vision algorithm for the evaluation of organelle morphology allows an objective assessment of disease-related organelle phenotypes with greatly reduced examiner bias and will aid the evaluation of novel therapeutic strategies on a cellular level.
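The fractal-dimension measure mentioned above can be illustrated with a pure-Python box-counting estimator; the test shapes below are synthetic, not microscopy data, and the algorithm is the generic textbook method rather than the authors' implementation:

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the fractal (box-counting) dimension of a set of 2-D
    pixel coordinates: count occupied boxes N(s) at several box sizes s
    and fit log N(s) against log(1/s) by least squares."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(px // s, py // s) for px, py in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope

# A straight 'organelle skeleton' should come out close to dimension 1;
# a filled region close to 2.
line = [(i, i) for i in range(256)]
square = [(i, j) for i in range(64) for j in range(64)]
print(round(box_count_dimension(line, [1, 2, 4, 8, 16]), 2))    # 1.0
print(round(box_count_dimension(square, [1, 2, 4, 8, 16]), 2))  # 2.0
```

Fragmented mitochondrial networks fall between these extremes, which is what makes the dimension a useful texture parameter.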
Sergi, Pier Nicola; Jensen, Winnie; Yoshida, Ken
2016-02-01
Tungsten is an elective material for producing slender and stiff microneedles able to enter soft tissues and minimize puncture wounds. In particular, tungsten microneedles are used to puncture peripheral nerves and insert neural interfaces, bridging the gap between the nervous system and robotic devices (e.g., hand prostheses). Unfortunately, microneedles fail during the puncture process, and this failure does not depend on the stiffness or fracture toughness of the constituent material. In addition, the microneedles' performance decreases during in vivo trials with respect to in vitro ones. This further effect is independent of internal biotic effects, and seems instead to be related to external biotic causes. Since the exact synergy of phenomena decreasing the in vivo reliability is still not known, this work explored the connection between the in vitro and in vivo behavior of tungsten microneedles through the study of interactions between biotic and abiotic factors. A hybrid computational approach, simultaneously using theoretical relationships and in silico models of nerves, was implemented to model the change in reliability with microneedle diameter, and to predict in vivo performance using in vitro reliability and the local differences between the in vivo and in vitro mechanical responses of nerves.
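The paper's hybrid model is not reproduced here, but the dominant role of diameter in microneedle mechanics can be illustrated with the classical Euler buckling load for a fixed-free cylindrical column; the modulus and dimensions below are assumed values for a tungsten microneedle:

```python
import math

def euler_critical_force(d, L, E=411e9):
    """Critical buckling load (N) of a fixed-free cylindrical microneedle:
    P_cr = pi^2 * E * I / (4 * L^2), with I = pi * d^4 / 64.
    E defaults to ~411 GPa (tungsten); d and L are in metres."""
    I = math.pi * d ** 4 / 64.0
    return math.pi ** 2 * E * I / (4.0 * L ** 2)

# Doubling the diameter raises the critical load 16-fold (P_cr ~ d^4),
# which is why reliability versus diameter is the natural design axis.
p1 = euler_critical_force(d=50e-6, L=2e-3)
p2 = euler_critical_force(d=100e-6, L=2e-3)
print(round(p2 / p1))  # 16
```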
Hirose, Kazuyuki; Kobayashi, Daisuke; Ito, Taichi; Endoh, Tetsuo
2017-08-01
The memory reliability of magnetic tunnel junctions has been examined from the aspect of their potential use in disaster-resilient computing. This computing technology requires memories that can keep stored information intact even in power-cut emergency situations. Such a requirement has been quantified as a score of acceptable flip probability, which is the failure in time (FIT) rate of 1 for a single-interface perpendicular magnetic tunnel junction (p-MTJ) with a disk diameter of 20 nm. For comparison with this acceptable probability, p-MTJ memory reliability has been evaluated. The risk of particle radiation bombardments, i.e., alpha particles and neutrons — the well-known soft error sources on the ground — has been evaluated from the aspects of both the frequency of bombardments and their hazardous effects. This study highlights that high-energy terrestrial neutrons may lead to soft errors in p-MTJs, but the flip probability, or the risk, is expected to be lower than 1 × 10⁻⁶ FIT/p-MTJ, which is much smaller than the target probability. It has also been found that the use of p-MTJs can reduce the risk by three orders of magnitude compared with conventional SRAMs. Little risk is suggested for other radiation particles, such as alpha particles and thermal neutrons.
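The FIT unit used above is a simple rate conversion (failures per 10⁹ device-hours); a minimal sketch, where the flip rate itself is an assumed illustrative value, not a figure from the study:

```python
def fit_rate(failures, device_hours):
    """Failure-in-time (FIT) rate: expected failures per 1e9 device-hours."""
    return failures / device_hours * 1e9

TARGET_FIT = 1.0  # acceptable flip probability per p-MTJ quoted above

# An illustrative neutron-induced flip rate of 1e-15 per device-hour
# (assumed, purely to exercise the unit conversion) maps to 1e-6 FIT,
# far below the 1-FIT disaster-resilience target.
risk = fit_rate(failures=1e-15, device_hours=1.0)
print(risk < TARGET_FIT)  # True
```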
Matthias Kasemann
Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...
I. Fisk
2013-01-01
Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office: Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites (Figure 1). MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month (Figure 2). The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB (Figure 3: the volume of data moved between CMS sites in the last six months). The tape utilisation was a focus for the operations teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
胡宇驰
2001-01-01
The Markov state-graph method is applied to evaluate the reliability of an actual repairable, hardware fault-tolerant computer system. Reliability and availability are evaluated for two fault-tolerance schemes, majority voting and single store, and the resulting data are compared. Through practical computation, the advantages, disadvantages and best range of application of each scheme are analyzed.
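As a static counterpart to the Markov analysis, the majority-voting scheme can be compared with a single unit via the textbook triple-modular-redundancy formula; this closed form is standard reliability theory, not the paper's model:

```python
def tmr_reliability(r):
    """Reliability of a triple-modular-redundancy (majority-voting)
    system of three identical modules, each with reliability r:
    the system works if at least 2 of the 3 modules work,
    R = 3r^2 - 2r^3."""
    return 3 * r ** 2 - 2 * r ** 3

def simplex_reliability(r):
    """A single (non-redundant) module."""
    return r

# Majority voting beats a single module only when modules are already
# fairly reliable (r > 0.5), one of the trade-offs behind comparing
# the applicable ranges of the two fault-tolerance schemes.
print(tmr_reliability(0.9) > simplex_reliability(0.9))  # True  (0.972 > 0.9)
print(tmr_reliability(0.4) > simplex_reliability(0.4))  # False (0.352 < 0.4)
```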
Bassam A. Hassan
2010-01-01
Full Text Available Objectives: The purpose of the present study was to investigate the reliability of both periapical radiographs and orthopantomograms for exact detection of tooth root protrusion into the maxillary sinus by correlating the results with cone beam computed tomography. Material and methods: A database of 1400 patients scanned with cone beam computed tomography (CBCT) was searched for matching periapical (PA) radiographs and orthopantomogram (OPG) images of maxillary premolars and molars. Matching OPG datasets of 101 patients with 628 teeth and PA radiograph datasets of 93 patients with 359 teeth were identified. Four observers assessed the relationship between the apex of the tooth root and the maxillary sinus per tooth on PA radiographs, OPG and CBCT images using the following classification: root tip is in the sinus (class 1), root tip is against the sinus wall (class 2) and root tip is not in the sinus (class 3). Results: Overall correlation between OPG and CBCT image scores was 50%, 26% and 56.1% for class 1, class 2 and class 3, respectively (Cohen’s kappa [weighted] = 0.1). Overall correlation between PA radiographs and CBCT images was 75.8%, 15.8% and 56.9% for class 1, class 2 and class 3, respectively (Cohen’s kappa [weighted] = 0.24). In both the OPG and the PA radiograph datasets, class 1 correlation was most frequently observed with the first and second molars. Conclusions: The results demonstrated that both periapical radiographs and orthopantomograms are not reliable for determining the exact relationship between the apex of the tooth root and the maxillary sinus floor. Periapical radiography is slightly more reliable than orthopantomography in determining this relationship.
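The agreement statistic quoted above, linearly weighted Cohen's kappa, is straightforward to compute from a confusion matrix; a minimal sketch (the matrix below is a synthetic perfect-agreement case, not the study's data):

```python
def weighted_kappa(confusion):
    """Cohen's kappa with linear weights for an ordinal k-class rating.
    confusion[i][j] counts items rated class i by one method and
    class j by the other."""
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    row = [sum(confusion[i]) for i in range(k)]
    col = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    po = sum(w[i][j] * confusion[i][j]
             for i in range(k) for j in range(k)) / n
    pe = sum(w[i][j] * row[i] * col[j]
             for i in range(k) for j in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

# Perfect agreement on the three root-tip classes gives kappa = 1;
# the low values reported above (0.1 for OPG, 0.24 for PA) indicate
# agreement barely above chance.
perfect = [[10, 0, 0], [0, 10, 0], [0, 0, 10]]
print(weighted_kappa(perfect))  # 1.0
```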
Al-Ruqaie, I; Al-Khalifah, N S; Shanavaskhan, A E
2016-01-01
Varietal identification of olives is an intrinsic and empirical exercise owing to the large number of synonyms and homonyms, intensive exchange of genotypes, presence of varietal clones and lack of proper certification in nurseries. A comparative study of morphological characters of eight olive cultivars grown in Saudi Arabia was carried out and analyzed using the NTSYSpc (Numerical Taxonomy System for personal computer) package, which segregated the smaller-fruited cultivars into one clade and the rest into two clades. Koroneiki, a Greek cultivar with a small fruit, shared a branch with the Spanish variety Arbosana. The morphologic analysis using NTSYSpc revealed that biometrics of leaves, fruits and seeds are reliable morphologic characters for distinguishing between varieties, except for a few morphologically very similar olive cultivars. The proximate analysis showed significant variations in the protein, fiber, crude fat, ash and moisture content of the different cultivars. The study also showed that neither the size of the fruit nor the fruit pulp thickness is a limiting factor determining the crude fat content of olives.
Hamersvelt, Robbert W. van; Willemink, Martin J.; Takx, Richard A.P.; Eikendal, Anouk L.M.; Budde, Ricardo P.J.; Leiner, Tim; Jong, Pim A. de [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Mol, Christian P.; Isgum, Ivana [University Medical Center Utrecht, Image Sciences Institute, Utrecht (Netherlands)
2014-07-15
To determine inter-observer and inter-examination variability for aortic valve calcification (AVC) and mitral valve and annulus calcification (MC) in low-dose unenhanced ungated lung cancer screening chest computed tomography (CT). We included 578 lung cancer screening trial participants who were examined by CT twice within 3 months to follow indeterminate pulmonary nodules. On these CTs, AVC and MC were measured in cubic millimetres. One hundred CTs were examined by five observers to determine the inter-observer variability. Reliability was assessed by kappa statistics (κ) and intra-class correlation coefficients (ICCs). Variability was expressed as the mean difference ± standard deviation (SD). Inter-examination reliability was excellent for AVC (κ = 0.94, ICC = 0.96) and MC (κ = 0.95, ICC = 0.90). Inter-examination variability was 12.7 ± 118.2 mm³ for AVC and 31.5 ± 219.2 mm³ for MC. Inter-observer reliability ranged from κ = 0.68 to κ = 0.92 for AVC and from κ = 0.20 to κ = 0.66 for MC. Inter-observer ICC was 0.94 for AVC and ranged from 0.56 to 0.97 for MC. Inter-observer variability ranged from -30.5 ± 252.0 mm³ to 84.0 ± 240.5 mm³ for AVC and from -95.2 ± 210.0 mm³ to 303.7 ± 501.6 mm³ for MC. AVC can be quantified with excellent reliability on ungated unenhanced low-dose chest CT, but manual detection of MC can be subject to substantial inter-observer variability. Lung cancer screening CT may be used for detection and quantification of cardiac valve calcifications. (orig.)
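The "mean difference ± SD" variability reported above is a Bland-Altman-style statistic on paired measurements; a minimal sketch with invented paired volumes (only the form of the statistic mirrors the abstract):

```python
import math

def variability(first_exam, second_exam):
    """Inter-examination variability as reported above: the mean of the
    per-subject differences and their sample standard deviation."""
    diffs = [b - a for a, b in zip(first_exam, second_exam)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, sd

# Illustrative AVC volumes (mm^3) for five participants scanned twice;
# the numbers are made up solely to exercise the statistic.
scan1 = [120.0, 350.0, 0.0, 80.0, 500.0]
scan2 = [130.0, 340.0, 5.0, 95.0, 480.0]
mean, sd = variability(scan1, scan2)
print(round(mean, 1), round(sd, 1))  # 0.0 14.6
```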
Pachêco-Pereira, Camila; Alsufyani, Noura A; Major, Michael P; Flores-Mir, Carlos
2016-06-01
To determine how accurate and reliable oral maxillofacial radiologists (OMFRs) are in screening for adenoid hypertrophy when using cone-beam computed tomography (CBCT) imaging compared with nasopharyngoscopy (NP). CBCT scans of 10 patients with distinct levels of adenoid hypertrophy were randomly selected. Fourteen board-certified OMFRs classified the levels of hypertrophy. The intraclass correlation coefficient (ICC) was used to assess accuracy by comparing their diagnosis against an NP diagnosis, which is the reference standard. Inter-rater reliability among the OMFRs was also assessed. Kappa statistics were used to analyze dichotomous data from healthy and unhealthy patients. Overall, the reliability among OMFRs was good (ICC = 0.79; confidence interval [CI] 0.63-0.93). The "statistical mode" was very good (ICC = 0.81; CI 0.43-0.94). The accuracy of OMFRs against NP was good (mean ICC = 0.69; CI 0.43-0.94). On average, the kappa statistics (mean kappa = 0.77; CI 0.62-0.92) demonstrated good agreement between OMFR and NP diagnoses. The individualized results from each evaluator were presented and investigated according to their performance. Compared with the reference standard, the accuracy of OMFRs in classifying adenoid hypertrophy on a four-level scale was moderate to strong, and improved when adenoid hypertrophy was classified as healthy or unhealthy. The reliability of the OMFRs was greater than 80%, assuring their consistency and reliability in screening for adenoid hypertrophy via CBCT. Copyright © 2016 Elsevier Inc. All rights reserved.
Kaiser, B. O.; Scheck-Wenderoth, M.; Cacace, M.; Przybycin, A.; Lewerenz, B.
2012-04-01
Sedimentary basins provide a significant portion of geothermal energy. Making geothermal heat an effective source for sustainable energy supply requires a quantitative reserve assessment. Numerical (mathematical) models of sedimentary basins are useful tools for first-order approximations of the geothermal potential on a regional scale. The challenge for numerical investigations within complex geological sedimentary basins is that the thermal field contains superposed signals originating from several heat transport processes, different in nature but physically coupled. An additional difficulty arising in numerical simulations is the error introduced by discretizing a continuous physical system into its numerical counterpart. Different mesh resolutions may lead to different and sometimes contrasting computational findings, thus making the reliability of coupled numerical simulations at least questionable. By means of 3D numerical simulations we discriminate conductive, forced-convective and free-convective heat transport within a complex geological setting, the Northeast German Basin (NEGB). As a second step we explore the sensitivity of each heat transport process to the spatial discretization. The internal geological structure of the NEGB is characterized by the presence of a highly structured Zechstein salt sequence locally piercing the sedimentary overburden. Moreover, the Zechstein salt is impervious to fluid flow and has a relatively high thermal conductivity compared to the surrounding clastic sediments. Computational results show that these hydrogeological conditions exert primary constraints on the internal hydrothermal setting of the basin. The impervious nature of the Zechstein salt inhibits effective groundwater flow. Accordingly, conduction is the main heat transport mechanism within the salt. In contrast, forced convective heat transport triggered by topographic gradients mainly affects the temperature distribution within the post
Srivastava, A; Mazzocco, G; Kel, A; Wyrwicz, L S; Plewczynski, D
2016-03-01
Protein-protein interactions (PPIs) play a vital role in most biological processes, and their comprehension can promote a better understanding of the mechanisms underlying living systems. However, besides the cost and time involved in detecting experimentally validated PPIs, noise in the data is still an important issue to overcome. In the last decade several in silico PPI prediction methods using both structural and genomic information were developed for this purpose. Here we introduce a unique validation approach aimed at collecting reliable non-interacting proteins (NIPs). Thereafter the most relevant protein/protein-pair related features were selected. Finally, the prepared dataset was used for PPI classification, leveraging the prediction capabilities of well-established machine learning methods. Our best classification procedure displayed specificity and sensitivity values of 96.33% and 98.02%, respectively, surpassing the prediction capabilities of other methods, including those trained on gold-standard datasets. We showed that the PPI/NIP predictive performance can be considerably improved by focusing on data preparation.
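The two reported figures are simple confusion-matrix ratios; a sketch with counts chosen only to reproduce the quoted percentages (the study's actual dataset sizes are not given here):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP):
    the two figures by which the abstract reports its PPI classifier."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative confusion counts for a PPI-vs-NIP classifier, chosen so
# the ratios match the abstract's 98.02% sensitivity / 96.33% specificity.
sens, spec = sensitivity_specificity(tp=9802, fn=198, tn=9633, fp=367)
print(round(sens * 100, 2), round(spec * 100, 2))  # 98.02 96.33
```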
Oristrell, J; Casanovas, A; Jordana, R; Comet, R; Gil, M; Oliva, J C
2012-12-01
There are no simple and validated instruments for evaluating the training of specialists. To analyze the reliability and validity of a computerized self-assessment method to quantify the acquisition of medical competences during the Internal Medicine residency program. All residents of our department participated in the study during a period of 28 months. Twenty-two questionnaires specific to each rotation (the Computer-Book of the Internal Medicine Resident) were constructed with items (questions) corresponding to three competence domains: clinical skills competence, communication skills and teamwork. Reliability was analyzed by measuring the internal consistency of the items in each competence domain using Cronbach's alpha index. Validation was performed by comparing mean scores in each competence domain between senior and junior residents. Senior residents scored higher than junior residents in clinical skills competence and communication skills, but not in teamwork. Cut-off levels of competence scores were established in order to identify the strengths and weaknesses of our training program, and the Computer-Book of the Internal Medicine Resident identified them. We did not observe any correlation between the results of the self-evaluations and the evaluations made by staff physicians. The items of the Computer-Book of the Internal Medicine Resident showed high internal consistency and made it possible to measure the acquisition of medical competences in a team of Internal Medicine residents. This self-assessment method should be complemented with other evaluation methods in order to assess the acquisition of medical competences by an individual resident. Copyright © 2012 Elsevier Espa
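Cronbach's alpha, the internal-consistency index used above, can be computed directly from per-item scores; the resident scores below are invented solely to exercise the formula:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency. `items` is a list of
    per-item score lists, one inner list per questionnaire item, all
    over the same respondents: alpha = k/(k-1) * (1 - sum(var_i)/var_total)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Three highly correlated items over five residents give alpha near 1,
# i.e. high internal consistency (scores are illustrative).
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 3, 4, 1],
]
print(round(cronbach_alpha(items), 2))  # 0.93
```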
Multi-Disciplinary System Reliability Analysis
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. To achieve this objective, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in the different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.
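In the simplest case, when the discipline-level failure modes are independent and each is fatal to the system, system reliability reduces to a series-system product; a minimal sketch with illustrative probabilities, not the NESSUS example's values:

```python
def series_reliability(mode_reliabilities):
    """A system whose failure modes (structural, heat-transfer,
    fluid-flow, ...) are independent and individually fatal acts as a
    series system: it survives only if every mode survives."""
    r = 1.0
    for ri in mode_reliabilities:
        r *= ri
    return r

# Heat-exchanger sketch with three discipline-level failure modes
# (probabilities assumed for illustration).
modes = {"structural": 0.999, "heat transfer": 0.995, "fluid flow": 0.990}
print(round(series_reliability(modes.values()), 4))  # 0.9841
```

Correlated failure modes, which the fast probability integration in NESSUS handles, break this simple product form; the sketch covers only the independent case.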
胡金蓉; 吴霞
2015-01-01
With the continuous development of computer technology, computational thinking has become a basic quality that college students must possess. The inherent characteristics of the knowledge structure of numerical computing methods, a foundation course for science and engineering students, are of great significance to the cultivation of computational thinking. Taking this course as the starting point, and guided by an outline for cultivating computational thinking ability, this paper discusses a teaching scheme for the numerical computing methods course based on the cultivation of computational thinking, aiming to train and improve students' computational thinking ability while they master the content of numerical computing methods.
Tang, H.; Sun, W.
2016-12-01
The theoretical computation of dislocation theory in a given earth model is necessary in the explanation of observations of the co- and post-seismic deformation of earthquakes. For this purpose, computation theories based on layered or pure half space [Okada, 1985; Okubo, 1992; Wang et al., 2006] and on spherically symmetric earth [Piersanti et al., 1995; Pollitz, 1997; Sabadini & Vermeersen, 1997; Wang, 1999] have been proposed. It is indicated that the compressibility, curvature and the continuous variation of the radial structure of Earth should be simultaneously taken into account for modern high precision displacement-based observations like GPS. Therefore, Tanaka et al. [2006; 2007] computed global displacement and gravity variation by combining the reciprocity theorem (RPT) [Okubo, 1993] and numerical inverse Laplace integration (NIL) instead of the normal mode method [Peltier, 1974]. Without using RPT, we follow the straightforward numerical integration of co-seismic deformation given by Sun et al. [1996] to present a straightforward numerical inverse Laplace integration method (SNIL). This method is used to compute the co- and post-seismic displacement of point dislocations buried in a spherically symmetric, self-gravitating viscoelastic and multilayered earth model and is easy to extended to the application of geoid and gravity. Comparing with pre-existing method, this method is relatively more straightforward and time-saving, mainly because we sum associated Legendre polynomials and dislocation love numbers before using Riemann-Merlin formula to implement SNIL.